[00:18:14] <deet1> blargh, solaris 11 express breaks my zones :(
[00:18:30] <deet1> maybe i just won't upgrade at all! take that, oracle!!
[00:19:47] <tsoome> upgrade what?
[00:21:30] *** yakov has quit IRC
[00:50:23] *** spanglywires has left #opensolaris
[00:52:53] *** cuboidtable has quit IRC
[00:58:23] <deet1> opensolaris 134b
[00:58:42] <deet1> my zones (all native) don't start
[00:59:17] <deet1> it looks like they have to be updated separately, but that's looking like a pain
[01:01:43] <tsoome> can be yep. zoneadm detach and attach -u may help.
[01:02:43] <tsoome> and there is no upgrade from express to ea;)
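A rough sketch of the zoneadm sequence tsoome suggests, against a hypothetical native zone named "myzone"; attach -u brings the detached zone's software up to the level of the upgraded global zone:

    zoneadm -z myzone halt        # stop the zone before detaching
    zoneadm -z myzone detach      # detach it from the old configuration
    zoneadm -z myzone attach -u   # re-attach, updating packages to match the global zone
    zoneadm -z myzone boot        # and bring it back up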
[01:07:12] *** Chris64 has quit IRC
[01:08:32] *** joshua_ has joined #opensolaris
[01:11:56] <deet1> well it isn't an urgent upgrade. osol keeps right on chugging
[01:11:58] *** ewdafa has quit IRC
[01:14:19] *** Chris64 has joined #opensolaris
[01:14:40] *** wdp has quit IRC
[01:21:14] *** Disorganized is now known as zz_Disorganized
[01:37:01] *** zz_Disorganized is now known as Disorganized
[01:37:50] *** miine has quit IRC
[01:43:17] *** symptom has quit IRC
[01:50:41] *** sommerfeld has joined #opensolaris
[01:50:41] *** ChanServ sets mode: +o sommerfeld
[01:51:38] *** InTheWings has quit IRC
[01:54:46] *** spanglywires has joined #opensolaris
[02:05:39] *** spanglywires has quit IRC
[02:08:54] *** Chris64 has quit IRC
[02:17:08] *** nachox has joined #opensolaris
[02:17:16] *** nachox_ has joined #opensolaris
[02:17:49] *** nachox_ has quit IRC
[02:49:01] *** Mdx4 has quit IRC
[02:56:29] *** smrt has quit IRC
[02:56:44] *** smrt has joined #opensolaris
[03:51:28] *** Nitial has quit IRC
[03:57:46] *** _Tenchi_ has joined #opensolaris
[04:18:38] *** Tpenta has quit IRC
[04:20:14] *** Tpenta has joined #opensolaris
[04:24:27] *** Shoggoth has joined #opensolaris
[04:25:46] <Shoggoth> I'm getting the following from zpool status -v: errors: Permanent errors have been detected in the following files: <0x97>:<0x1>
[04:26:18] <Shoggoth> This is somewhat surprising since the machine was cleanly shut down and the pool is raidz2
[04:26:47] <Shoggoth> ...so... can anyone suggest why this would happen?
[04:27:07] <Shoggoth> and also... how do I identify which files <0x97>:<0x1> refer to?
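For reference, a pair like <0x97>:<0x1> in zpool status -v output is a dataset (objset) id and an object id rather than a path. A hedged sketch of one way to chase it down with zdb, assuming a hypothetical pool named "tank"; as the discussion later in the log shows, such entries can turn out to be internal metadata with no filename behind them:

    zdb -d tank                    # list datasets with their numeric ids (0x97 = 151 decimal)
    zdb -dddd tank/somedataset 1   # dump object 1 in that dataset; plain files show a path, metadata objects do not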
[04:57:05] *** Disorganized is now known as zz_Disorganized
[04:57:47] *** zz_Disorganized is now known as Disorganized
[05:03:59] *** Tpenta has quit IRC
[05:05:12] *** Tpenta has joined #opensolaris
[05:06:30] *** cnu has quit IRC
[05:10:56] *** cnu has joined #opensolaris
[06:02:21] <Shoggoth> ping
[06:14:32] *** tsoome1 has joined #opensolaris
[06:14:32] *** tsoome has quit IRC
[06:14:33] *** tsoome1 is now known as tsoome
[06:15:42] *** tsoome1 has joined #opensolaris
[06:15:43] *** tsoome has quit IRC
[06:15:43] *** tsoome1 is now known as tsoome
[06:18:53] *** nachox has quit IRC
[06:20:33] <Shoggoth> ping
[06:34:01] *** joshua_ has quit IRC
[06:34:01] *** joshua_ has joined #opensolaris
[07:01:52] <Shoggoth> I'm getting the following from zpool status -v: errors: Permanent errors have been detected in the following files: <0x97>:<0x1>
[07:02:45] <Shoggoth> how do I identify which files <0x97>:<0x1> refer to?
[07:03:11] <joshua_> you might want to try #zfs
[07:03:31] *** alanc has quit IRC
[07:05:32] <Shoggoth> joshua_: thanks... didn't know that it had its own channel
[08:09:10] *** Edgeman has quit IRC
[08:26:11] *** echobinary has quit IRC
[08:35:48] *** echobinary has joined #opensolaris
[08:49:18] *** spanglywires has joined #opensolaris
[08:54:49] *** fisted has quit IRC
[08:56:59] *** fOB has joined #opensolaris
[09:01:16] *** fisted has joined #opensolaris
[09:03:18] *** jacotton has quit IRC
[09:22:17] *** spanglywires has quit IRC
[09:59:54] *** spanglywires has joined #opensolaris
[10:18:44] *** spanglywires has quit IRC
[10:22:17] *** sphenxes has quit IRC
[10:22:42] *** ewdafa has joined #opensolaris
[10:26:04] *** Triskelios has quit IRC
[10:26:53] *** miine has joined #opensolaris
[10:30:47] *** wdp has joined #opensolaris
[10:40:42] *** TBFOOL has quit IRC
[10:53:06] *** yakov has joined #opensolaris
[10:54:51] *** TBCOOL has joined #opensolaris
[11:09:55] *** ARBALEST_ has joined #opensolaris
[11:10:17] *** kdavy has quit IRC
[11:10:31] *** yakov has quit IRC
[11:10:57] *** Chris64 has joined #opensolaris
[11:23:49] <eklof> Shoggoth: that is some metadata that it has detected errors in.
[11:23:55] <eklof> Do you use encryption?
[11:24:33] *** darrenb` has joined #opensolaris
[11:25:37] *** darrenb has quit IRC
[11:38:54] *** InTheWings has joined #opensolaris
[11:41:19] *** devians has joined #opensolaris
[11:42:29] <devians> Hey guys, i'm just doing the final rounds on an osol box before i move it to indiana or something. I'm being told my hot spare is faulted with corrupted data. i've done a bit of googling but can't seem to turn up what's going on?
[11:43:31] <tsoome> hotspare?
[11:43:59] <tsoome> you mean, your pool disk has been spared out and now the spare got faulted as well?
[11:44:24] <devians> nope, the raidz2 reports aok, the hot spare is saying corrupted data
[11:44:29] <devians> weird hey
[11:44:43] <tsoome> iostat -En confirms?
[11:46:03] <devians> oh thats interesting… i moved the hot spare previously from onboard the motherboard to the sas controller, and now it seems the hot spare is pointing to the os disk
[11:46:29] <tsoome> was the pool exported during the move?
[11:46:31] <devians> not sure how it managed that. i suppose i just remove the hot spare and readd the actual drive
[11:46:47] <devians> no it wasnt… i er.. forgot while i was monkeying around
[11:46:53] <devians> *sheepish*
[11:47:02] <tsoome> that can explain somewhat…..
[11:47:10] <devians> pebkac and all that :P
[11:47:42] <tsoome> guess yep, remove spare and add it again...
[11:47:55] * devians has at it
[11:49:44] <devians> erm, ok i cant detach it. must be a diff command?
[11:50:54] <tsoome> detach is for mirror sides, remove
[11:51:09] <devians> ah yes, i found that just then :)
[11:51:13] <tsoome> add is paired with remove, attach is paired with detach
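A sketch of the spare shuffle being described, assuming a hypothetical pool "storage" and spare device c5t3d0; add/remove are the verbs for hot spares, attach/detach for mirror sides:

    zpool remove storage c5t3d0      # drop the mis-pointed hot spare
    zpool add storage spare c5t3d0   # add the intended device back as a spare
    zpool status storage             # the spare should show up as AVAIL again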
[11:52:50] <devians> yeah
[11:52:56] <devians> ok im well confused
[11:53:47] <devians> zpool status storage says everything is fine, but the drive names/numbers differ completely from what i'm seeing in iostat -En
[11:54:09] <tsoome> export data pool and import it again
[11:54:28] <tsoome> the drive names in pool config will be fixed on import
[11:55:02] <tsoome> your /etc/zfs/zpool.cache contains old names
[11:55:55] <tsoome> removing zpool.cache and rebooting the system will fix it as well, i think.
[11:55:57] <devians> makes sense :)
[11:56:17] <tsoome> zpool cache will be updated on import/export
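The export/import round trip tsoome suggests, sketched with the same hypothetical pool name; the import rescans the devices and rewrites the paths cached in /etc/zfs/zpool.cache:

    zpool export storage
    zpool import storage
    zpool status storage    # device names should now line up with iostat -En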
[11:58:42] <devians> there we go, everythings happy once more
[11:58:51] <devians> i love zfs, that was painless and i was a complete dolt
[11:59:24] <tsoome> there are few traps - such as messing with disks in live, imported pools and such;)
[12:00:23] <devians> aye. the only two things that bug me in zfs now are expanding vdevs (not a big issue) and the whole backwards compatibility with versions issue. (makes it hard to move to say, freenas if you started on opensolaris)
[12:00:54] <tsoome> you can set pool version on zpool create
[12:01:42] <devians> yeah but you cant downgrade an existing pool if you're migrating
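For the forward-compatibility case, a hedged sketch of pinning the pool version at creation time (pool name and disks are made up); version 28 is roughly the last one the open-source forks share:

    zpool create -o version=28 tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0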
[12:02:18] <tsoome> vdev size needs some planning ahead - there is also somewhat common sense required - wide vdev is only useful for streaming (or if you really wanna save on parity disks)
[12:02:54] <eklof> It would be great if oracle debuts bp-rewrite with solaris 11. However, that would be a bit optimistic i think :) since I've read that Sol 11 EA is feature complete.
[12:03:06] <devians> yeah it doesnt really affect me, i run sets of raidz2 of 8 disks
[12:03:48] <devians> eklof whats bp-rewrite?
[12:04:17] <tsoome> I guess it will appear at some point in time, but it's really not a big issue in enterprise setups where you can always get spare space for migrations….
[12:05:04] <tsoome> bp-rewrite support is the low level "toolkit" to make the relayout/rewrites etc possible.
[12:05:11] <eklof> block pointer rewrite. The feature that is years in the making which would allow for shrinking/growing raidz with a single disk, recompressing all data with new algorithms, defrag etc.
[12:05:59] <eklof> tsoome: i know. Mostly home users who "need" it. And that is not the target for ZFS, I know :)
[12:06:13] <eklof> But still, it would be awsome.
[12:06:30] <devians> eklof, sounds amazing :)
[12:06:55] <tsoome> indeed, atm it's almost the only feature missing compared to vxvm, linux lvm, raid controllers….
[12:07:28] <eklof> It is, but well, it's a bit like Duke Nukem Forever in my opinion :) But I take the debian stance. It's ready when it's ready.
[12:08:35] <tsoome> I'd also love to see even larger block support (EA has 1MB max) and setting vdev locality (dont stripe this dataset over all vdevs)
[12:10:04] <tsoome> locality by itself would also make you think about migration between vdevs in a pool (automatic and manual), and that would really kick some ass:)
[12:10:14] <devians> i want to get stuck into this os migration but i should probably scrub first to make sure everything is happy
[12:12:02] <eklof> tsoome: i don't understand the relevance of block size, why is bigger better. Is it performance?
[12:12:12] <tsoome> performance, yes
[12:12:15] <eklof> ok
[12:12:19] *** Disorganized is now known as zz_Disorganized
[12:12:43] <tsoome> your disk will get a larger block with a single write
[12:13:27] <eklof> Is it still variable so ZFS chooses the blocksize automatically?
[12:13:28] <tsoome> if you have a pool with many vdevs or a wide vdev, look at iostat -xn 1 and with simple math you can get the IO size for a single op
[12:13:42] <eklof> I have a wide vdev, a 12 disk raidz2
[12:13:42] <tsoome> its variable yes
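A quick sketch of the iostat arithmetic tsoome mentions: the average I/O size per operation is just throughput divided by ops. With made-up numbers from one line of iostat -xn 1:

    #  r/s   w/s   kr/s    kw/s   ...
    #  0.0   400   0.0    51200   ...
    # average write size = kw/s / w/s = 51200 / 400 = 128 KB per write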
[12:14:07] <eklof> Will I have to adjust for it to use 1MB max, if my pool is upgraded from earlier versions?
[12:14:20] <tsoome> its the same rewrite
[12:14:20] <eklof> i think 128k was the largest block earlier?
[12:14:26] <eklof> ah ok.
[12:14:31] <eklof> But for new data i mean
[12:14:42] <tsoome> and you need to set recordsize i think
[12:15:01] <tsoome> if your dataset was created 128k, you need to adjust it
[12:15:20] <eklof> ah
[12:15:26] <eklof> yes mine is still 128K
[12:15:46] <eklof> Are there any drawbacks in terms of setting it to 1MB?
[12:16:02] <eklof> will a 40k file still use a 1MB block, or will ZFS adjust downwards?
[12:16:14] <eklof> since it's variable.
[12:16:39] <tsoome> it should be adjusted
[12:16:50] <eklof> So only advantages then. Hmm..
[12:16:58] <eklof> maybe I should adjust the recordsize then
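A sketch of the recordsize change under discussion, against a hypothetical dataset tank/data; the property is an upper bound, only affects blocks written after the change, and the 1M value needs an OS/pool that supports large records:

    zfs get recordsize tank/data
    zfs set recordsize=1M tank/data     # new writes may use up to 1M blocks
    zfs set recordsize=128K tank/data   # and this puts the default back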
[12:18:52] <tsoome> hm, interesting
[12:20:03] <tsoome> EA does create rpool and its datasets with 128k, but swap and dump are 1M
[12:20:37] <eklof> I just changed and will write a large file and read it back and see if there is any noticable difference.
[12:20:56] <tsoome> wonder if it's a glitch in the installer or there's some reason behind it….
[12:21:05] <Shoggoth> hi all
[12:21:09] <eklof> Maybe they don't trust it, tsoome :)
[12:21:49] <tsoome> lol, well, that is hard to believe, considering the swap;)
[12:21:56] <eklof> :)
[12:22:03] <tsoome> if the swap will burn, you will notice it for sure;)
[12:22:13] <Shoggoth> I'm getting an unexpected message out of zpool status...
[12:22:23] <Shoggoth> Permanent errors have been detected in the following files: <0x97>:<0x1>
[12:22:44] <eklof> Shoggoth: did you read my question?
[12:22:47] <devians> can you change the block size on a pool full of data?
[12:23:04] <eklof> devians: yes you can, but it will only apply to new written data
[12:23:12] <tsoome> devians: you can but it will apply on new writes
[12:23:18] <tsoome> :D
[12:23:29] <Shoggoth> eklof: eh... missed it in the noise
[12:23:37] <tsoome> Shoggoth: missing redundancy for self repair?
[12:23:42] <devians> ah ok. would you want to somehow rewrite all the data to make it worthwhile? or is that not needed
[12:23:57] <Shoggoth> eklof: it's raidz2... so that ought not to be the case
[12:24:11] <eklof> devians: you need to move off the data to another pool, and read it back if you want 1MB block size on everything, yes.
[12:24:22] <eklof> Shoggoth: so do you have encryption enabled on any dataset?
[12:24:24] <tsoome> check if you have checksums on still…..
[12:24:42] <Shoggoth> eklof: yes... encryption is on
[12:24:44] <tsoome> or yes, it can be encryption....
[12:24:54] <eklof> So that is the "problem" or rather "bug"
[12:25:13] <eklof> Shoggoth: mount all your encrypted datasets and rerun the scrub.
[12:25:22] <Shoggoth> ok... so how do I find out which file(s) are affected?
[12:25:38] <eklof> It has probably scrubbed something while the encrypted dataset was unavailable.
[12:25:48] *** Triskelios has joined #opensolaris
[12:25:59] <eklof> Shoggoth: it's really hard, just scrub it away and ZFS will auto-repair it.
[12:26:01] <Shoggoth> eklof: so a scrub should fix it...ahhh... I see so it's not a "real" error then
[12:26:17] <tsoome> if it's a real error
[12:26:35] <eklof> Shoggoth: well, it's ZFS that has a bit of trouble when doing a scrub on an unmounted encrypted dataset.
[12:26:40] <tsoome> if the dataset was not mounted, the keys were not available and zfs couldn't access the data
[12:26:43] <eklof> It's most likely not a "real" error.
[12:27:00] <Shoggoth> that makes sense... but I'm surprised it hasn't happened before in that case
[12:27:24] <tsoome> people usually have datasets mounted;)
[12:27:41] <eklof> tsoome: that shouldn't be an issue. You are supposed to be able to scrub unmounted datasets. Maybe that is fixed in v33.
[12:27:44] <Shoggoth> lol... indeed... but this happened straight after boot
[12:27:57] <Shoggoth> hence the dataset was unmounted
[12:28:08] <eklof> tsoome: it occurs if you reboot your computer during a scrub, the fs gets unmounted, but the scrub starts after the reboot.
[12:28:17] <tsoome> aye
[12:28:21] <Shoggoth> ahhhhh
[12:28:22] <tsoome> that explains:)
[12:28:40] <Shoggoth> the light has just gone on... and it's blinding :)
[12:28:42] <tsoome> so, stop scrub before reboot
[12:28:51] <eklof> But it shouldn't be a problem. I still consider it a bug. :=)
[12:28:56] <eklof> tsoome: yes that works too :)
[12:29:11] <Shoggoth> ok... I'm feeling much better now... was somewhat worried
[12:29:55] <eklof> Nothing to be worried about i think. It "should" go away after a full rescrub with datasets mounted. If it's what I think it is anyway :)
[12:30:06] <Shoggoth> just the same... I'd be curious to know how to translate the <0x?> syntax into a real filename
[12:30:14] <tsoome> havent really used encryption, just few tests....
[12:30:22] <eklof> Well, it is not files, it's metadata.
[12:30:39] <tsoome> in case of files you will be told file names
[12:30:41] <Shoggoth> ok... so that could be good or _very_ bad.... :)
[12:30:43] *** Nitial_ has joined #opensolaris
[12:30:43] <eklof> So no files are affected, but internal ZFS meta-data stuff/things/whatever
[12:30:49] <eklof> Shoggoth: indeed.
[12:31:09] <Shoggoth> that's what had me so confused... a cursory look at the dataset seemed ok
[12:31:23] <eklof> But since you run raidz2 I'm 99% sure it's the encryption bug.
[12:31:46] *** sphenxes has joined #opensolaris
[12:32:03] <Shoggoth> I hope you're right... I won't know for quite some time... it's a 12.8TB dataset
[12:32:09] <eklof> :)
[12:32:24] <eklof> Let's hear your results in a few days then....
[12:32:30] <Shoggoth> I'll let you know how it turns out... will you be here during xmas?
[12:32:48] <tsoome> stop the scrub on this pool, mount all, zpool clear and start scrub to be sure
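tsoome's sequence spelled out as a sketch, with a hypothetical pool name "tank": stop the scrub that is in flight, get every encrypted dataset mounted so its keys are loaded, clear the error counters, then scrub again:

    zpool scrub -s tank    # stop the running scrub
    zfs mount -a           # mount everything (encrypted datasets may need their keys loaded first, e.g. zfs key -l)
    zpool clear tank       # reset the error counters
    zpool scrub tank       # scrub again from the start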
[12:33:56] <Shoggoth> oh!... I'd already started the scrub... and I thought "ok.... I'll stop it and do a zpool clear 1st"... and lo... the pool is ok now
[12:33:57] <Shoggoth> !!!
[12:33:59] <Shoggoth> :D
[12:34:10] <tsoome> clear is just to reset error counters
[12:34:20] <tsoome> its no problem if you didnt
[12:34:23] <Shoggoth> I hadn't done the clear yet
[12:34:40] <Shoggoth> so it seems whatever was wrong is already fixed....
[12:34:44] <Shoggoth> yippee
[12:34:47] <tsoome> :P
[12:34:58] <eklof> :)
[12:35:07] <Shoggoth> methinks I'll run a full scrub anyway.... just to be sure :)
[12:35:15] <tsoome> as eklof said, it was no issue at all, just a glitch
[12:35:43] <Shoggoth> I was kind of hoping that was the case... but I thought I'd ask here before I managed to make it worse...
[12:37:44] <eklof> tsoome: there was no difference in read/write speed using 1MB block on my pool. I went back to 128K.
[12:38:49] <Shoggoth> thankyou all... again!
[12:38:56] <eklof> Shoggoth: i think that issue might be fixed in zpool version 33 so if you have the time for it, upgrade. I'm assuming you run Sol 11 Express with version 31 now?
[12:38:59] *** InTheWings has quit IRC
[12:39:50] <Shoggoth> eklof: mmm... not sure... remind me again on the zpool incantation that gives the zfs layout version?
[12:40:04] <eklof> zpool upgrade -v
[12:40:22] <Shoggoth> yep... 31
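For completeness, a sketch of the version commands (pool name hypothetical): zpool upgrade -v lists the versions the running software supports, while the pool's own version can be read as a property; upgrading is one-way and locks out older implementations:

    zpool upgrade -v         # versions this build supports
    zpool get version tank   # version of this particular pool
    zpool upgrade tank       # upgrade the pool to the newest supported version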
[12:40:56] *** InTheWings has joined #opensolaris
[12:40:58] <eklof> mm, since you are already down the closed road of oracle hell, you can just install Sol 11 EA and all its bugfixes.
[12:41:13] <eklof> However, only a clean install will work with good results :)
[12:41:30] <tsoome> EA got bugfixes already?
[12:41:48] <eklof> No, but the bugfixes between express and ea, i meant
[12:41:51] <Shoggoth> eklof: well... assuming oracle follow through on their promise to publish source once 11 is out the door... I'm planning on migrating to illumos
[12:41:54] <eklof> spelling++
[12:42:02] <eklof> Shoggoth: well no one knows :)
[12:42:10] <Shoggoth> eh... I mean openindiana
[12:42:28] <eklof> they only have made a very "loose" promise to do so. Knowing oracle they can sure change their minds.
[12:42:36] <Shoggoth> eklof: yeah... I gathered...but I need the encryption support hence using 11express rather than oi
[12:42:45] <eklof> Yes, me to.
[12:43:15] <eklof> Maybe we are stuck on EA forever, but it works quite ok i think :)
[12:43:36] <Shoggoth> hopefully the illumos ppl will manage to provide newer zfs on-disk-format support independently
[12:43:50] <eklof> Well, not if Oracle holds on to the source.
[12:44:00] <eklof> Which is what I think to be honest.
[12:44:35] <Shoggoth> eklof: I'm suspicious also... but if the on-disk format is published you don't necessarily need the source
[12:44:53] <eklof> However I have a plan B. I will build a second larger NAS, using FreeBSD or whatever, and migrate all data across.
[12:45:05] <eklof> :)
[12:45:46] <Shoggoth> well... the really depressing part is that (eventually) btrfs will probably reach some level of acceptable feature parity...but of course that is oracle's beast also
[12:46:09] <eklof> Yes I will not hesitate to migrate to that if that is the case.
[12:46:21] <eklof> But today, nothing beats ZFS imho.
[12:46:32] <eklof> It's just so easy and it feels reliable.
[12:46:49] <eklof> I have only used it about 3 years but still.
[12:46:57] <Shoggoth> eklof: well... I'll be holding off a while longer... filesystems are probably the one part of any system of which you need to be the most cautious/conservative
[12:47:43] <eklof> But I recommend EA, it feels like they have fixed many "sharing" bugs with cifs/smb as well.
[12:47:56] <Shoggoth> EA?
[12:48:05] <Shoggoth> early access?
[12:48:15] <eklof> Early adopter release.
[12:48:23] <Shoggoth> is it publicly available?
[12:48:25] <eklof> It's a patched and feature complete Solaris 11.
[12:48:27] <eklof> It is.
[12:48:32] <Shoggoth> oh!!!!
[12:48:39] <Shoggoth> when did that arrive?
[12:48:47] <eklof> a month or so ago.
[12:48:49] <Shoggoth> I've been under a large rock for several months
[12:49:20] <Shoggoth> thankyou sir!
[12:49:23] <eklof> Still, this one you can't use in any production environment.
[12:49:31] <eklof> no support deals as with sol 11 express.
[12:49:42] <Shoggoth> heh... depends what you mean by production :)
[12:49:48] <Shoggoth> mine's a SOHO fileserver
[12:49:49] <Shoggoth> :)
[12:50:01] <eklof> But it has snv_173 instead of snv_151 if you compare.
[12:50:18] <eklof> so much patching must have gone on :)
[12:50:26] <Shoggoth> how is it stability wise?
[12:51:11] <eklof> Well. For me, it seems more stable than express. But I had many issues with sharing of encrypted datasets for instance.
[12:51:40] <eklof> For instance, a zfs mount -a would not advertise the cifs-share again.
[12:51:55] <eklof> You had to run zfs set sharesmb=name=nas again for it to show.
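A sketch of that workaround, with hypothetical dataset and share names; re-applying the sharesmb property makes the CIFS share reappear:

    zfs set sharesmb=name=nas tank/nas
    zfs get sharesmb tank/nas    # confirm the share name is set again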
[12:52:03] <Shoggoth> yeah... I had a weird problem when running 11express as a domU
[12:52:15] <eklof> domU has been removed in EA :)
[12:52:24] <eklof> XEN is removed altogether.
[12:52:42] <eklof> So if you use that, EA is a no-go.
[12:52:55] <Shoggoth> sometimes the filesystem would lockup and you could fix it by running a script that periodically tried to unmount the volume
[12:53:18] <eklof> I can not vouch for it, just saying it's more stable for me.
[12:53:35] <Shoggoth> oh!... I knew they'd removed the dom0... didn't realise that removing domU was on the cards... so running sol11 under ovm isn't supported?
[12:53:47] <Shoggoth> seems odd
[12:53:52] <eklof> If in doubt, upgrade, but hold off doing a zpool upgrade so you can revert back if needed after you have tested it for a couple of weeks.
[12:54:16] <eklof> Shoggoth: Oracle consider XEN deprecated in Solaris 11.
[12:54:26] <eklof> It only has zone-support.
[12:54:45] <tsoome> they are pushing their linux for it
[12:54:54] <eklof> There is a "clone" called SmartOS which has KVM but only zpool version 28.
[12:55:38] <Shoggoth> yeah... again..,I knew they were dumping dom0 support... but not being able to run as a domU under oracleVM (the linux product) seems rather strange to me
[12:55:40] <eklof> But, then again, if you use encryption, that is a no-go as well :)
[12:56:10] <eklof> Shoggoth: well, maybe that is working, i thought you meant Dom0
[12:56:33] <Shoggoth> nah.... dom0 hasn't worked for quite a while afaik
[12:56:43] <eklof> yes, but not anymore :)
[12:56:51] <Shoggoth> even b4 the oracle thing happened
[12:56:54] <eklof> As in running Solaris as Dom0
[12:57:29] <Shoggoth> brb
[13:00:07] <tsoome> oh crap, damn installer has set arc_max in /etc/system…..
[13:00:29] <tsoome> and i was wondering why i can't get a 700MB file into arc….
[13:10:00] <tsoome> ah, now we are talking!
[13:11:31] <Shoggoth> eklof:... sorry... you were saying.... smartos?.... is that a solaris derivative or is it linux/bsd/something else with zpool support?
[13:12:00] <eklof> it's based on illumos i think
[13:12:22] <Shoggoth> ahh... joyent... yes... that would be illumos I expect
[13:12:52] <Shoggoth> interesting that they ported kvm rather than fixing up Xen support
[13:22:53] <tsoome> eklof: i did try to test some streaming writes, but i don't have enough hardware to get consistent results to compare 1MB versus 128kb blocks :D throughput numbers are jumping between ~30MB/s and 50MB/s - all I have is a 2 disk mirror on top of old parallel scsi disks :)
[13:23:23] <tsoome> but limiting arc_max is evil by oracle....
[13:29:23] *** JoergB has quit IRC
[13:34:06] *** JoergB has joined #opensolaris
[13:41:37] *** Shoggoth has left #opensolaris
[13:44:47] *** plat- has quit IRC
[13:47:16] *** JoergB has quit IRC
[13:47:32] *** JoergB has joined #opensolaris
[13:58:02] *** plat- has joined #opensolaris
[14:01:15] *** JoergB has quit IRC
[14:01:15] *** darrenb` has quit IRC
[14:01:15] *** snuff-home has quit IRC
[14:06:32] *** JoergB has joined #opensolaris
[14:06:32] *** darrenb` has joined #opensolaris
[14:06:32] *** snuff-home has joined #opensolaris
[14:14:15] *** JoergB has quit IRC
[14:14:16] *** darrenb` has quit IRC
[14:14:16] *** snuff-home has quit IRC
[14:16:41] *** InTheWings has quit IRC
[14:22:14] <CIA-56> illumos John Sonnenschein <johns at joyent dot com>: 1556 no reason why passwd -e should be disallowed on FILES repo Reviewed by: Richard Lowe <richlowe at richlowe dot net> Reviewed by: Dan McDonald <danmcd at nexenta dot com> Approved by: Richard Lowe <richlowe at richlowe dot net>
[14:27:09] <richlowe> tsoome: Huh?
[14:27:24] <richlowe> though given my normal attitude, I want to make clear that that's an honest "what do you mean, I have not heard of this"
[14:29:20] <tsoome> +
[14:29:21] <tsoome> ?
[14:29:35] <richlowe> tsoome: arc_max being set at install time
[14:30:18] <tsoome> they set set zfs:zfs_arc_max=0x4002000 and set zfs:zfs_vdev_cache_size=0
[14:30:31] <tsoome> in EA…
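Those two lines sit in /etc/system on the installed system; whether the cap is actually in effect can be checked against the live kernel (a sketch; 0x4002000 is about 64MB):

    # as found in /etc/system on the EA install:
    set zfs:zfs_arc_max=0x4002000
    set zfs:zfs_vdev_cache_size=0

    # effective ARC ceiling in bytes, from the arcstats kstat:
    kstat -p zfs:0:arcstats:c_max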
[14:36:25] <richlowe> odd
[14:36:37] <richlowe> I could see doing the former with /etc/system, if it were done by the installer based on physical memory
[14:36:42] <richlowe> no idea why you'd do the latter
[14:36:49] <richlowe> it's not like it's different, safety-wise, than changing the code
[14:36:56] <richlowe> so "near a release" isn't a great excuse
[14:38:11] <tsoome> well, I fail to see why to bugger the max size anyhow, it has been working pretty fine so far
[14:39:03] <|woody|> it is strange yes
[14:41:04] <tsoome> and why limit it to 64MB……
[14:42:54] <|woody|> maybe it's a bug though.
[14:43:26] <|woody|> I can see limiting the install image to 64mb, but not after it is installed
[14:44:01] <tsoome> tbh
[14:44:21] *** ARBALEST_ has quit IRC
[14:45:27] <tsoome> even with cd image you wanna have as much cached as possible, cd is not the fastest medium in the world…..
[14:53:41] *** Chris64 has quit IRC
[15:08:43] <richlowe> tsoome: in the CD case, it'd be caching the new rpool
[15:08:55] <richlowe> and never referencing it (for small values of "never")
[15:09:04] <tsoome> on install, yes.
[15:09:08] <richlowe> so you'd be pissing away memory which may also be tight
[15:09:30] <richlowe> |woody|'s theory was it was a miniroot-only value leaking through to the installed image
[15:10:26] <tsoome> and you could work it out by setting primarycache values (and resetting them on first boot).
[15:10:45] <tsoome> but well. doesnt really matter:)
[15:21:28] <RoyK> tsoome: what's zfs_arc_max?
[15:21:55] <tsoome> upper limit on arc size, default is ram size - 1GB
[15:22:19] <tsoome> 3GB on a 4GB system, for example.
[15:22:34] <RoyK> what is this about 64MB, then?
[15:23:38] <tsoome> somehow it's set in /etc/system in a fresh EA install. maybe it's just a bug from the installer (which cpios the live image and doesn't remove its settings)
[15:25:34] <RoyK> EA?
[15:25:42] <tsoome> s11 early access:)
[15:26:05] <RoyK> as in alpha? ;)
[15:26:40] <tsoome> or beta, or whatever:)
[15:29:07] <RoyK> seems you have to be a gold partner or sleep with larry to get that...
[15:29:22] <tsoome> we are gold partner, yep.
[15:29:36] <RoyK> any BPR in that?
[15:29:40] <tsoome> nope
[15:30:16] *** Morfio has joined #opensolaris
[15:30:59] <RoyK> anyone here that knows the zfs guts well enough to say if it'd be hard to add a resilver priority tunable?
[15:34:06] <richlowe> RoyK: Try to download it
[15:34:17] <richlowe> Rumour strongly has it that it _says_ Gold, but doesn't _mean_ Gold
[15:35:29] <tsoome> tbh, i really like the changes to the zones and stuff
[15:39:43] <RoyK> richlowe: the download link sent me to a defunct login site
[15:40:13] <tsoome> they expect you to have account
[15:40:28] <RoyK> Oracle access manager: System error. Please re-try your action. If you continue to get this error, please contact the Administrator.
[15:41:29] <tsoome> at least support site login was working.
[15:43:18] *** Edgeman has joined #opensolaris
[15:55:23] *** JoergB has joined #opensolaris
[15:55:23] *** darrenb` has joined #opensolaris
[15:55:23] *** snuff-home has joined #opensolaris
[15:55:44] *** AxeZ has joined #opensolaris
[16:07:34] *** EisNerd_ is now known as EisNerd
[16:09:49] *** niq has quit IRC
[16:31:19] <eklof> tsoome: nice that you said it, mine is also throttled at 1GB :(
[16:32:03] <eklof> How to go back to the default physmem - 1GB ?
[16:32:04] <tsoome> only found it because i looked at arc_summary :D
[16:32:28] <tsoome> just comment out those one or two set statements in /etc/system and reboot
[16:32:42] <eklof> I have 80% free mem right now.. what a waste
[16:32:50] <eklof> ah comment them out. Ok will do that.
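The revert eklof is about to do, sketched: comment the entries out (lines in /etc/system are commented with a leading asterisk), reboot, and the ARC cap should fall back to the default mentioned above (physical memory minus 1GB):

    * set zfs:zfs_arc_max=0x4002000
    * set zfs:zfs_vdev_cache_size=0

    # after the reboot, check that c_max has grown back:
    kstat -p zfs:0:arcstats:c_max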
[16:32:57] <tsoome> I even have no idea what second one does...
[16:33:44] <eklof> :)
[16:35:09] <eklof> rebooting....
[16:50:33] *** Triskelios has quit IRC
[17:18:25] *** alanc has joined #opensolaris
[17:18:26] *** ChanServ sets mode: +o alanc
[17:42:43] *** ZeroHour_ has joined #opensolaris
[17:45:46] *** ZeroHour has quit IRC
[17:56:20] *** jamesd has quit IRC
[18:02:01] *** niq has joined #opensolaris
[18:02:04] *** niq has joined #opensolaris
[18:08:02] *** jamesd has joined #opensolaris
[18:10:49] *** deet1 has quit IRC
[18:11:33] *** iceq has quit IRC
[18:19:48] *** piwi__ has quit IRC
[18:20:44] *** piwi__ has joined #opensolaris
[18:22:25] *** kimc has quit IRC
[18:23:06] *** iceq has joined #opensolaris
[18:24:36] *** ChanServ sets mode: +o jamesd
[18:38:56] *** Trisk[netbook] has joined #opensolaris
[18:38:56] *** ChanServ sets mode: +o Trisk[netbook]
[18:49:31] *** Trisk[netbook] has quit IRC
[18:51:37] *** Triskelios has joined #opensolaris
[18:57:35] *** Edgeman2 has joined #opensolaris
[18:59:07] *** Edgeman has quit IRC
[19:12:40] *** Edgeman2 is now known as Edgeman
[19:27:13] *** AlasAway is now known as Alasdairrr
[19:29:41] *** kdavy has joined #opensolaris
[19:35:39] *** DesiJat has joined #opensolaris
[19:36:19] *** DesiJat has left #opensolaris
[19:39:11] *** spanglywires has joined #opensolaris
[19:39:46] *** stoxx has quit IRC
[19:40:41] *** spanglywires has quit IRC
[19:43:32] *** stoxx has joined #opensolaris
[19:45:33] *** pino42 has joined #opensolaris
[19:54:15] *** ingenthr has quit IRC
[19:56:42] *** wdp has quit IRC
[19:58:17] *** wdp has joined #opensolaris
[19:59:12] *** miine has quit IRC
[20:04:22] *** JoergB has quit IRC
[20:05:39] *** JoergB has joined #opensolaris
[20:16:08] *** ingenthr has joined #opensolaris
[20:21:57] *** merzo has quit IRC
[20:29:15] *** nachox has joined #opensolaris
[20:29:16] *** nachox has joined #opensolaris
[20:30:09] *** Alasdairrr is now known as AlasAway
[20:36:53] *** Triskelios has quit IRC
[20:37:47] *** Triskelios has joined #opensolaris
[21:07:29] *** ewdafa has quit IRC
[21:08:13] *** ewdafa has joined #opensolaris
[21:08:24] *** Morfio has quit IRC
[21:26:58] *** fisted_ has joined #opensolaris
[21:29:14] *** fisted has quit IRC
[21:32:27] *** fisted_ has quit IRC
[21:34:06] *** fisted has joined #opensolaris
[21:35:23] *** Hedonista_ is now known as Hedonista
[21:45:50] *** zz_Disorganized is now known as Disorganized
[21:58:19] *** tsoome1 has joined #opensolaris
[22:00:05] *** tsoome has quit IRC
[22:00:05] *** tsoome1 is now known as tsoome
[22:15:19] *** pothos has quit IRC
[22:17:18] *** pothos_ has joined #opensolaris
[22:17:36] *** pothos_ is now known as pothos
[22:23:43] *** darrenb has joined #opensolaris
[22:24:17] *** milehigh has quit IRC
[22:24:56] *** yakov has joined #opensolaris
[22:25:31] *** darrenb` has quit IRC
[22:26:16] *** yakov has left #opensolaris
[22:28:28] *** merzo has joined #opensolaris
[22:46:06] *** gwr has quit IRC
[22:47:00] *** gwr has joined #opensolaris
[22:47:45] *** myrkraverk has quit IRC
[23:10:27] *** tsoome has quit IRC
[23:10:45] *** tsoome has joined #opensolaris
[23:11:04] *** heke has quit IRC
[23:12:01] *** heke has joined #opensolaris
[23:16:10] *** heke has quit IRC
[23:17:00] *** heke has joined #opensolaris
[23:20:01] *** deet has joined #opensolaris
[23:25:09] *** dnjaramba has joined #opensolaris
[23:27:36] *** dnjaramba has quit IRC