[00:06:30] *** merzo has quit IRC
[00:07:01] *** merzo has joined #openindiana
[00:08:55] *** melliott has joined #openindiana
[00:12:41] *** merzo has quit IRC
[00:13:47] *** merzo has joined #openindiana
[00:14:12] *** melik has quit IRC
[00:18:51] *** riccardo has joined #openindiana
[00:20:59] *** davenz has joined #openindiana
[00:21:55] *** tg has quit IRC
[00:24:55] *** riccardo has quit IRC
[00:25:43] *** Vutral has quit IRC
[00:28:04] *** tg has joined #openindiana
[00:28:56] *** Vutral has joined #openindiana
[00:29:43] *** smrt has quit IRC
[00:29:44] *** riccardo has joined #openindiana
[00:29:59] *** Vutral has quit IRC
[00:30:01] *** smrt has joined #openindiana
[00:32:23] *** InTheWings has quit IRC
[00:32:51] *** ozquera has quit IRC
[00:32:54] *** Vutral has joined #openindiana
[00:35:30] *** Kaishi has quit IRC
[00:49:31] *** ball has joined #openindiana
[00:51:09] *** hunter has joined #openindiana
[00:51:53] *** hunter has joined #openindiana
[00:52:58] *** hunter has quit IRC
[00:53:17] *** hunter has joined #openindiana
[00:54:54] *** jollyd has quit IRC
[00:55:48] <oninoshiko> there we go, that's better
[01:00:45] *** SANVisum has joined #openindiana
[01:10:32] *** wonko2 has joined #openindiana
[01:10:59] *** wonko has quit IRC
[01:19:50] *** riccardo has quit IRC
[01:21:46] <SANVisum> are there any special settings for OI installed on and running from an SSD drive?
[01:24:42] <Patrickdk> TechIsCool, using esxi 5.0?
[01:28:40] <oninoshiko> SANVisum, not really. Many use a more complex disk configuration, where spinning disks are used for primary storage and the SSD is used for cache though
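A minimal sketch of the hybrid layout oninoshiko describes, with spinning disks as primary storage and the SSD as an L2ARC read cache. Pool and device names here are invented for illustration:

```shell
# Hypothetical devices: c2t0d0/c2t1d0 are spinning disks, c4t0d0 is the SSD.
zpool create tank mirror c2t0d0 c2t1d0   # primary storage on rotating disks
zpool add tank cache c4t0d0              # SSD joins the pool as L2ARC
zpool status tank                        # cache device is listed under "cache"
```

An SSD can also serve as a separate log device (`zpool add tank log ...`) to absorb synchronous writes; cache and log are distinct roles.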
[01:29:26] <SANVisum> yea, I was leaning that way, but I don't know that the remainder of this ssd drive will help in my environment
[01:30:14] <SANVisum> thank you oninoshiko
[01:33:19] *** rcorreia_ has quit IRC
[01:37:11] <oninoshiko> no problem
[01:38:26] <TechIsCool> Patrickdk: No 4.1u2
[01:39:06] <TechIsCool> its an all in one so the OI is running off mirrored drives and then it spins up. then I have access to the zfs.
[01:51:28] <oninoshiko> I noticed that there is a permissions difference on var/lib between some of the stuff in our repo and some of the stuff in the jenkins-ci repo. Is there an easier solution than mogrifying the package from their repo into a local one?
[01:52:15] <oninoshiko> now that I think about it, maybe that question belongs in dev...
[01:52:37] *** jamesd has quit IRC
[01:57:38] <ball> I wonder if I should move all my data to an OpenIndiana box with zfs.
[02:00:21] <oninoshiko> some of us like it. YMMV
[02:01:32] * ball nods
[02:01:44] <ball> I'll have to have a look at home to see what disks I have.
[02:02:55] <ball> I may run my primary desktop on OpenIndiana for a while (since it seems to work well there) and back up onto a different machine, in case I get stuck somehow.
[02:15:53] *** sjorge has quit IRC
[02:19:35] *** sjorge has joined #openindiana
[02:32:38] *** jamesd has joined #openindiana
[02:34:44] *** CVLTCMK0 has joined #openindiana
[02:34:59] *** jellydonut has quit IRC
[02:37:08] <ball> hello jamesd
[02:38:59] <jamesd> hi ball
[02:46:44] *** Sachiru has joined #openindiana
[02:55:39] *** merzo has quit IRC
[02:56:44] *** merzo has joined #openindiana
[03:25:18] *** imaxs has quit IRC
[03:36:10] *** ira has quit IRC
[03:48:34] *** Kaishi has joined #openindiana
[04:39:59] <Patrickdk> TechIsCool, ok, was wondering cause there is a comstor bug that would crash the iscsi system in 5.0 if you trigger it
[04:40:07] <Patrickdk> fixed up in 151a3 I think
[04:40:27] <Patrickdk> though there are 3 different ways to work around the issue
[04:41:27] <TechIsCool> Patrickdk: I still can't prove what causes it. cifs and iscsi both stop responding to outside commands but console works until you try to backspace something then it stops
[04:41:43] <TechIsCool> but zfs still responds from console when the console works
[04:44:28] <echel0n> Tech ur on a esxi right ?
[04:44:42] <TechIsCool> echel0n: yes 4.1u2
[04:45:33] <echel0n> you set it up so OI is the first thing to start before all other vm's and the last thing to shutdown before all vm's ?
[04:45:53] *** BonzTM has quit IRC
[04:46:16] <TechIsCool> echel0n: correct, it's on its own hard drive and comes up before everything and down after everything, no AD since domain is virtual as well
[04:47:33] <TechIsCool> echel0n: The problem I experience is it hanging while it's running. I can stream blu ray rips and large files for days but somehow every once in a while it just hangs but have yet to figure out the cause
[04:49:17] <echel0n> memory leak ?
[04:49:34] <echel0n> or maybe improper cooling ?
[04:49:42] *** POloser has joined #openindiana
[04:49:55] *** POloser has left #openindiana
[04:50:14] <TechIsCool> I know it's not cooling, it's in an air conditioned room with a server class case. Memory leak could be but I have never seen esxi complain about it
[04:50:51] <echel0n> Have you thought of switching up to esxi5
[04:51:07] *** POloser has joined #openindiana
[04:51:13] <TechIsCool> I have but I have 48GB of RAM so I would have to pay to upgrade
[04:51:26] <echel0n> hmmm
[04:51:29] <TechIsCool> kind of the reason I am still at 4.1
[04:51:46] <echel0n> wasn't aware there was limit of ram
[04:51:54] <TechIsCool> 32GB on the free version
[04:52:40] <echel0n> you stream ur movies/tv shows off the box with any sorta software from the server it self ?
[04:52:58] <TechIsCool> nope just vlc they are m2ts rips
[04:53:23] <TechIsCool> CPU is at under 25% usage with 10 servers
[04:53:53] <echel0n> hmmm
[04:54:18] <echel0n> iscsi from esxi or oi ?
[04:54:31] <TechIsCool> yup all of the virtual machines
[04:54:34] <TechIsCool> wait
[04:54:56] <TechIsCool> no nfs only on this
[04:55:17] <TechIsCool> no iscsi
[04:55:24] <echel0n> lol confused now
[04:55:42] <echel0n> are you serving up the zfs from OI to esxi via nfs or iscsi ?
[04:55:49] <TechIsCool> nfs
[04:56:06] <TechIsCool> it goes zfs from oi -> nfs -> esxi -> virtual machines
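The zfs -> nfs -> esxi chain above boils down to exporting a dataset over NFS and mounting it as a datastore. A hedged sketch (pool, dataset, and addresses are made up):

```shell
# On the OI storage VM: create a dataset and share it over NFS.
zfs create tank/vmstore
zfs set sharenfs=on tank/vmstore   # or restrict, e.g. rw=@10.0.0.0/24,root=@10.0.0.0/24

# On the ESXi 4.x host, the export is then attached as an NFS datastore:
#   esxcfg-nas -a -o 10.0.0.5 -s /tank/vmstore vmstore
```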
[04:56:16] <TechIsCool> sorry for the confusion I have another host that runs iscsi
[04:56:20] <TechIsCool> got confused
[04:56:36] <echel0n> ok so your having NFS issues or iSCSI issues ?
[04:56:52] <TechIsCool> NFS issues
[04:57:08] <TechIsCool> smb also dies when nfs goes down
[04:57:36] <echel0n> have you looked at the service logs at all ?
[04:58:05] <TechIsCool> I have not today but did not see anything
[04:59:12] <echel0n> when both smb and nfs go down can the oi vm still reach the outside world ?
[04:59:21] <TechIsCool> yes
[04:59:35] <TechIsCool> but it seems like the console locks up if I try to su
[04:59:50] *** antennageek has joined #openindiana
[04:59:52] <TechIsCool> if I am not logged in before it hangs I can't log in
[05:00:03] *** antennageek is now known as niner
[05:00:50] *** niner has left #openindiana
[05:01:33] <echel0n> what ver of OI u got running?
[05:01:51] <echel0n> seems to me more than just an nfs/smb issue
[05:01:51] <TechIsCool> oi_151a4
[05:01:59] <echel0n> try a pkg update
[05:02:01] <echel0n> a6 is out
[05:02:23] <echel0n> no idea if it'll fix it but sure will eliminate one issue
[05:02:31] <TechIsCool> yup
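The update echel0n suggests is the stock IPS flow on OpenIndiana, which lands the new build in a fresh boot environment:

```shell
pkg update -nv   # dry run: show what would move from 151a4 to the current build
pkg update       # apply; creates a new boot environment
beadm list       # confirm the new BE, then reboot into it
```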
[05:03:32] <TechIsCool> what's more irritating is I have not found the problem so to debug I have to wait...
[05:05:25] <echel0n> sounds largely like a vnic issue maybe
[05:05:55] <echel0n> then again the console locks up as well so could be a bug in esxi altogether
[05:05:58] <TechIsCool> I originally ran the vmx nics but they did not work so they are now e1000 only
[05:06:14] <TechIsCool> well the console takes input but does not respond
[05:06:28] <TechIsCool> it prints it on screen but its like its in an endless loop
[05:06:36] <echel0n> hmm
[05:06:59] <echel0n> any real reason u run esxi as opposed to just OI and virtualbox ?
[05:07:23] <TechIsCool> 10 other servers on the same host
[05:08:38] <echel0n> running linux or ?
[05:08:53] <TechIsCool> 3 windows, 4 linux and two appliances
[05:09:07] <echel0n> intel or amd hardware for the host ?
[05:09:19] <TechIsCool> intel e5
[05:09:26] <TechIsCool> 2x
[05:09:32] <echel0n> oi have kvm now :)
[05:09:40] <echel0n> works for intel only at the moment
[05:09:58] <echel0n> look into qemu-kvm
[05:10:14] <echel0n> also virtualbox runs on oi without issue I found
[05:10:34] <echel0n> and using phpvirtualbox provides a slick cp to it
[05:10:48] <echel0n> could solve all your issues right there
[05:11:41] <TechIsCool> could but most likely won't
[05:12:09] <echel0n> well I
[05:12:14] <TechIsCool> I could try a in place replacement
[05:12:18] *** benben159 has joined #openindiana
[05:12:18] <TechIsCool> since its just zfs
[05:12:24] <echel0n> I tried proxmox as well with the same setup u got for esxi and it failed badly
[05:12:24] <TechIsCool> that oi is managing
[05:12:37] <benben159> hello all.
[05:12:54] <benben159> how do i erase jds from OpenIndiana?
[05:13:00] <benben159> OpenIndiana desktop
[05:17:12] *** Seony has joined #openindiana
[05:20:52] *** DucBlangis has joined #openindiana
[05:21:05] <oninoshiko> ~
[05:45:59] <DucBlangis> ~~
[05:46:14] <echel0n> ~~~
[05:46:18] <benben159> ~~~~~
[05:46:33] <DucBlangis> my lord
[05:46:36] <echel0n> lol
[05:46:41] <echel0n> snake race
[05:47:28] <DucBlangis> I totally pictured that as something else
[05:47:33] <benben159> =))
[05:47:38] <DucBlangis> pervy ASCII art has ruined me
[05:48:21] <echel0n> lol
[05:48:35] <DucBlangis> Im liking this OI. My first Solaris boot since about 2003
[05:48:52] <echel0n> Oh its slick as hell man
[05:48:54] <echel0n> ur love it
[05:49:17] <ball> I like it too.
[05:49:22] <oninoshiko> sorry about that, I finally got pissed off enough with having to reboot the mac to SSH into another machine. I'm still sorting out the interesting defaults in its terminal emulator
[05:49:57] <benben159> reboot the mac to SSH? or reboot the mac via SSH?
[05:50:08] <echel0n> wait a sec ... mac ?
[05:50:11] <DucBlangis> What desktop environments are you all using?
[05:50:12] <echel0n> lol
[05:50:13] <DucBlangis> ...
[05:50:21] <DucBlangis> MAC or Mac?
[05:50:24] <ball> DucBlangis: Gnome (default)
[05:50:36] <oninoshiko> reboot the mac in general. I'm sshing into another box for IRC because I dont like losing my IRC session
[05:51:01] <echel0n> why not just screen them then
[05:51:10] <echel0n> problem solved
[05:51:49] <oninoshiko> because screen only helps if it's on another box I don't have to reboot
[05:51:59] <oninoshiko> which is exactly what I'm doing
[05:52:05] <benben159> yeah screen or tmux
[05:52:08] <benben159> i prefer tmux
[05:52:15] <DucBlangis> Im thinking about IceWM. Looks like old SCO or AIX boxes kinda
[05:52:18] <benben159> has green bar in the bottom of the screen
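The detach/reattach trick being discussed works the same way with either tool; session names here are arbitrary:

```shell
# screen: start irssi in a named session, detach with C-a d, reattach later
screen -S irc irssi
screen -r irc

# tmux equivalent (the client with the green status bar): detach with C-b d
tmux new -s irc irssi
tmux attach -t irc
```

Either way the IRC client keeps running on the remote box across local reboots.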
[05:52:22] <benben159> haha icewm
[05:52:34] <benben159> simple WM
[05:52:42] <benben159> but openbox will be much simpler
[05:53:04] <echel0n> I'm a console man my self
[05:53:09] <echel0n> as simple as they come lol
[05:53:13] <DucBlangis> OpenBox is nice looking, have that on my Desktop with NetBSD but not for this box
[05:53:17] <DucBlangis> this is a netbook
[05:53:26] <oninoshiko> it's not my preferred platform, but I don't have a choice in the matter.
[05:53:35] <DucBlangis> I like consoles too
[05:53:42] <DucBlangis> I prefer CLI all day
[05:53:47] <DucBlangis> using irssi and emcas right night
[05:54:02] <echel0n> what no BitchX lol
[05:54:16] * oninoshiko whistles innocently
[05:54:17] *** kart_ has joined #openindiana
[05:54:22] <DucBlangis> oh damn I haven't used that forever
[05:54:29] <benben159> oninoshiko, ?
[05:54:41] <oninoshiko> BitchX is what I went with
[05:55:06] <echel0n> <---- old
[05:55:51] <echel0n> I've been around since OS/2 and the Commodore 64
[05:56:01] <DucBlangis> How long has BitchX been around? I swear people were talking about it back in the mid 90's on the little 50GB .NL topsites I frequented (lol @ 50GB being huge enough for topsite)
[05:56:19] <benben159> PING 1347372146044621
[05:56:22] <echel0n> lol
[05:56:23] <oninoshiko> It's one of the earlier clients I used, and every time I get pissed off enough at whatever GUI client was easy, I end up installing it
[05:56:39] <echel0n> yeah mid 90s
[05:56:55] <ball> You old folks might like the thread in #packetpushers, we're talking about V.23 modems ;-)
[05:57:16] <oninoshiko> early to late 90's depending on if you want to count the original script version for ircii
[05:57:20] <benben159> echel0n: you're so old. sir. *worship*
[05:57:43] <echel0n> lol
[05:58:10] <sol3>
[05:58:11] <sol3>
[05:58:11] <sol3>
[05:58:11] <benben159> V.23??
[05:58:12] <benben159> waw
[05:58:15] <sol3>
[05:58:17] <benben159> so old
[05:58:20] <sol3>
[05:58:21] <echel0n> rwarrrrr
[05:58:22] <echel0n> kik
[05:58:24] <sol3> wewps
[05:58:26] <sol3> old
[05:58:31] <echel0n> 14.4
[05:58:45] <echel0n> USRobotics Sportster
[05:58:46] <benben159> my first internet was via 56k modem
[05:58:58] <echel0n> I started on a 2400 baud
[05:59:05] <benben159> the internet was very expensive that day
[05:59:10] * oninoshiko is with echel0n
[05:59:17] <sol3> ya 2400 here
[05:59:28] <echel0n> Ran a renegade bbs
[05:59:51] <sol3> actually i lie the school network had a 900b before i got my 2400
[05:59:54] <DucBlangis> I don't even remember what I had, my Uncle hooked up my first machine around 1994. It ran Slackware. I was so pissed all the kids had Win or whatever and Oregon Trails and I had some goofy Linux box. Glad about it now though
[05:59:57] *** slx86 has joined #openindiana
[06:00:06] <oninoshiko> i was lucky, I lived near a college, and one of the professors set up a free public access service
[06:00:55] <DucBlangis> text based dungeoneering > Ultima Online was a huge step
[06:00:58] <DucBlangis> in my life
[06:01:00] <DucBlangis> as a pre-teen
[06:01:04] <dandyd449> any suggestions on cheaper 3TB drives to build an array out of?
[06:01:55] <echel0n> remember the BBS game Bre!
[06:01:55] <oninoshiko> I've had enough bad experiences with drives that "cheaper" is normally not part of my list of criteria
[06:02:22] <benben159> cheaper T.T
[06:02:32] <dandyd449> not trying to spend 200 a drive...
[06:02:37] <benben159> cheaper things will end up faster
[06:02:44] <dandyd449> this is for a home server btw
[06:02:55] <echel0n> you get what you pay for !
[06:03:13] <DucBlangis> Barrens Realm
[06:03:26] <echel0n> yup!
[06:03:40] <ball> We didn't have BBS games because every call was a toll call in my country ;-)
[06:03:56] <dandyd449> ok well what kinda drives are you guys using? lol
[06:04:16] <benben159> i'll try 6x500GB array :D
[06:04:18] <echel0n> WD
[06:04:32] <echel0n> I'
[06:04:32] <benben159> WD is a lil bit more expensive
[06:04:35] <benben159> but it's good
[06:04:50] <echel0n> I've got a mix of their 1.5 and 2tb drives (GREEN)
[06:04:53] <ball> 500G WD Caviar Blue
[06:04:57] <oninoshiko> mostly Seagate Constellation (the SAS-2 version). I like nearline SAS disks.
[06:04:58] <dandyd449> my current array is 1TB green drives...
[06:05:02] <ball> (and green, on my daughter's machine)
[06:05:24] <horsi> anyone tried the WD30ezrx green disks (and more importantly had issues?)
[06:06:01] <echel0n> WD EARS are the ones you want from what I remember
[06:06:19] <echel0n> EADs are good to but require you to turn some feature(s) off on them
[06:06:25] <horsi> echel0n: thats what ive read lately too
[06:06:31] <dandyd449> what features?
[06:06:36] <horsi> issues with hard parking at 8seconds
[06:06:43] <echel0n> exactly!
[06:06:56] <echel0n> it's their power saving feature
[06:06:56] <horsi> and the newer ones you cant increase the threshold or disable
[06:07:04] <echel0n> you cant ?
[06:07:09] <ball> That seems an odd choice.
[06:07:14] <sol3> any one want a blast from the past telnet rmac.d-dial.com
[06:07:17] <benben159> hard parking at 8secs?
[06:07:18] <benben159> wew :/
[06:07:20] <sol3> take u back to 1985
[06:07:23] <sol3> ;)
[06:07:37] <echel0n> well it has something to do with zfs and green drives
[06:07:38] <horsi> some things have been saying the later wd30ezrx (march 2012 onwards) wdidle wont work
[06:07:41] <dandyd449> any experience with these?
[06:07:46] <horsi> chel0n: you're so old. sir. *worship*
[06:07:46] <horsi> 13:55 < echel0n> lol
[06:07:46] <horsi> 13:55 < sol3>
[06:07:46] <horsi> 13:55 < sol3>
[06:07:46] <horsi> 13:55 < sol3>
[06:07:48] <benben159> nope :|
[06:07:48] <horsi> 13:55 < benben159> V.23??
[06:07:51] <horsi> 13:55 < benben159> waw
[06:07:52] <echel0n> wdidle is the app u need to use
[06:07:53] <horsi> 13:55 < sol3>
[06:07:56] <horsi> 13:55 < benben159> so old
[06:07:58] <horsi> 13:55 < sol3>
[06:08:01] <horsi> 13:55 < echel0n> rwarrrrr
[06:08:02] *** ChanServ sets mode: +o richlowe
[06:08:03] <horsi> 13:55 < echel0n> kik
[06:08:06] <horsi> 13:56 < sol3> wewps
[06:08:07] *** horsi was kicked by richlowe (Nope Kicked by richlowe)
[06:08:16] * ball cheers
[06:08:19] *** horsi has joined #openindiana
[06:09:49] <horsi> do the greens cause any problems with zfs or would head parking (other than access speeds)
[06:10:58] <dandyd449> i'm running green drives... haven't had any issues besides i'm out of space.
[06:11:09] *** xxzz has joined #openindiana
[06:11:27] <echel0n> tler and lls
[06:11:58] <echel0n> I mean llc
[06:12:25] <echel0n> You need to fix head parking
[06:12:34] <echel0n> and enable TLER
[06:12:54] <horsi> isnt tler pointless with zfs?
[06:13:58] <echel0n> yeah sorry actually I mean disable it if you intend to use it for ZFS
[06:14:50] <dandyd449> how do you do that?
[06:17:37] <echel0n> well first off its a mixed review of idea
[06:17:57] <echel0n> some say disable tler and let the os provide that function and some say enable it
[06:18:18] <echel0n> WDTLER.exe works and you need to have the bios in IDE mode to use WDTLER.exe first off
[06:18:49] <dandyd449> and run it on each drive in a windows machine...
[06:18:52] <ball> What if you don't have a DOS (Windows?) machine around?
[06:19:28] <horsi> is there a way to disable it through smart? Only it wont survive a reboot?
[06:19:43] *** imaxs has joined #openindiana
[06:20:29] <echel0n> also EADs are 4k drives as well
[06:20:52] <echel0n> so adding them into the zfs raidz be sure to take that into account as well
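On 4K-sector (Advanced Format) drives like the EADS, the thing to take into account is the pool's sector-size exponent. On illumos-era ZFS you can't set it directly at `zpool create` time; the commonly-cited check of what a pool actually got (pool name invented) is:

```shell
# ashift=9 means 512-byte alignment, ashift=12 means 4K alignment.
# A 4K drive in an ashift=9 vdev will suffer on small writes.
zdb -C tank | grep ashift
```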
[06:21:16] <oninoshiko> TLER limits the amount of time it will take for an error recovery. without it, it can take a long time trying to access a bad sector. Responding sooner with an error is generally considered better in a redundant ZFS config (ergo TLDR is good). If you are on a single-disk config not so much.
[06:21:42] *** jamesd has quit IRC
[06:23:02] <oninoshiko> sorry, TLER.
[06:23:56] <echel0n> well I guess you could enable it if you want to and your drive is capable of it but I wouldn't make it a deciding factor in buying a drive if it means double the price
[06:26:10] <oninoshiko> and remember, I'm always willing to offer a refund on the full purchase price of my advice :p
[06:26:34] <echel0n> lol
[06:27:49] <dandyd449> hmm
[06:28:05] <dandyd449> so fuck it and get the cheep drives? lol
[06:28:21] <echel0n> well wd green drives EARs or EADs are good
[06:28:46] <echel0n> fix there head parking with WDIDLE3.exe and enable TLER with WDTLER.exe
[06:29:17] <dandyd449> have a link handy for those btw?
[06:29:32] <oninoshiko> I use nearline sas, but I also have people paying... so I can afford to be a bit pickier.
[06:31:58] <echel0n> let me check
[06:40:08] <echel0n> ok honestly I don't remember if I set tler enabled or disabled on my drives but read this
[06:40:21] <echel0n> TLER can be dangerous as well
[06:40:34] <oninoshiko> I swear. I just ate, how am I hungry again?
[06:41:41] <echel0n> lol I was thinking the same here for me
[06:42:15] <echel0n> think my dog started watching me eye her up like food lol
[06:45:44] <dandyd449> hmm im reading you cant enable tler on green drives made after 2010?
[06:46:03] <oninoshiko> yes, if you are not redundant it doesn't have anywhere to get the data from, so you want it to keep trying. but if you are redundant, you want it to fail sooner and let it come off the other disk. It's worth remembering ZFS is a bit different than most raid controllers, in that it will grab the other block in the event of an error and report there is a problem with the first disk, but not fail it out.
[06:47:00] <dandyd449> well yea, so i want drives with tler then?
[06:47:06] <dandyd449> raid z2
[06:49:59] *** ball has quit IRC
[06:50:04] <horsi> dandyd449: I read that as not needing it with zfs - see the second last paragraph
[06:50:21] <oninoshiko> I would advise it. nothing BAD will happen if you don't have it, it will just perform worse when it encounters a bad sector
[06:50:42] <horsi> WD reds - no head parking and TLER enabled
[06:52:30] <oninoshiko> in my experience, raidz2's performance leaves something to be desired anyway (although i haven't used it since before oracle). I've tended to prefer mirrors. That should depend heavily on workload though.
[06:52:44] <dandyd449> 250 a drive too
[06:53:10] <horsi> oninoshiko: could you put a figure to that comment vs raidz?
[06:53:31] <horsi> im looking at a 8 disk raidz2 and all it needs to be able to do is max out a GB network
[06:53:46] <oninoshiko> both raidz and raidz2 should have similar performance characteristics.
[06:54:19] <dandyd449> ditto horsi, im going to build a 8 drive array
[06:54:44] <echel0n> 4 drive raidz1 here works just fine
[06:54:45] <dandyd449> my current raidz almost maxes my gigabit
[06:56:00] <horsi> i have an 8 disk raidz atm and that can max it out - I like the idea of a stripe of mirrors but not keen on losing 4 disks worth of capacity vs 2
[06:56:04] <oninoshiko> The rule (at least, it used to be) is the array has the performance of the slowest disk in the array times the number of vdevs. for many workloads that's fine. I am running multiple VMs on it, so my workloads can get somewhat nasty.
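A back-of-envelope instance of that rule for the 8-disk layouts under discussion, assuming ~100 random IOPS per spinning disk (an illustrative figure, not a measurement):

```shell
# Random-IOPS estimate: per-vdev performance ~ one disk, times the vdev count.
disk_iops=100
raidz2_vdevs=1   # one 8-disk raidz2 = a single vdev
mirror_vdevs=4   # 4 x 2-way mirrors = four vdevs
echo "raidz2:  $((disk_iops * raidz2_vdevs)) IOPS"   # raidz2:  100 IOPS
echo "mirrors: $((disk_iops * mirror_vdevs)) IOPS"   # mirrors: 400 IOPS
```

Sequential throughput for maxing out a GigE link is a different story; either layout can usually manage that.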
[06:56:30] <dandyd449> tho i was having issues with multiusers the other day. granted im at 95% usage and low ram...
[06:56:53] <oninoshiko> (not accounting for the ARC, of course)
[06:57:29] <dandyd449> so 2 vdevs both raidz?
[06:57:58] *** alanc has quit IRC
[06:58:03] <oninoshiko> I'm not sure if that still holds though. I think someone came up with some performance enhancements for it.
[06:58:38] <echel0n> wonder if there is a simple cloud fs out there where I can just use drives across the net and form a raid array
[06:58:45] <dandyd449> what kind of enhancements?
[06:58:46] <echel0n> <--- not up on the latest tech
[06:59:52] <echel0n> like a zfs comprised of iscsi disks lol
[06:59:56] <oninoshiko> I'm afraid I don't know that much. I recall reading a post on one of the MLs, but it wasn't directly related to anything I am doing (as we already went with mirrors), so a vague recollection is about all I have at this point
[07:03:34] <echel0n> oh i love it already
[07:04:03] <echel0n> zfs pool made up of iscsi drives from around the world
[07:04:15] <echel0n> I wonder how BIG of an array one could build lol
[07:04:36] *** jw_urodoc has joined #openindiana
[07:05:44] <oninoshiko> and like I said, I run clients on this. my priorities are not losing data, and avoiding downtime. I already save so much, compared to commercial solutions, the costs of the disks themselves are a distant third.
[07:06:50] <oninoshiko> anyway, I think it's about time for me to head home.
[07:06:54] *** flyz has quit IRC
[07:07:05] <echel0n> aahh
[07:07:07] <oninoshiko> have a good (whatever time of day it is where you are)!
[07:07:13] <echel0n> lol
[07:07:18] <echel0n> same!
[07:09:03] <echel0n> ok nap time
[07:09:18] <echel0n> 10pm and need to go dream of bash prompts
[07:09:19] <echel0n> kik
[07:09:21] <echel0n> lol
[07:09:26] <echel0n> bbiab
[07:10:20] *** Kaishi has quit IRC
[07:13:50] *** flyz has joined #openindiana
[07:19:12] *** DucBlangis has quit IRC
[07:24:56] *** xxzz has quit IRC
[07:27:25] *** davenz has quit IRC
[07:28:32] *** davenz has joined #openindiana
[07:38:11] *** imaxs has quit IRC
[07:42:47] *** Ducblangis has joined #openindiana
[07:46:41] *** jw_urodoc has quit IRC
[07:51:16] *** TechIsCool has quit IRC
[08:06:24] *** imaxs has joined #openindiana
[08:07:52] *** Ducblangis has quit IRC
[08:28:54] *** r4idenZA has joined #openindiana
[08:30:32] *** sjorge has quit IRC
[08:30:47] *** sjorge has joined #openindiana
[08:30:47] *** sjorge has joined #openindiana
[08:40:22] *** Webhostbudd_ has quit IRC
[08:41:59] *** ianh has joined #openindiana
[08:45:02] *** |AbsyntH| has joined #openindiana
[09:12:03] *** Neddie_ has joined #openindiana
[09:14:02] *** anikin has joined #openindiana
[09:16:19] *** SANVisum has left #openindiana
[09:26:34] *** andy_js has joined #openindiana
[09:48:16] *** movement has quit IRC
[09:50:48] *** tsoome has quit IRC
[10:03:02] *** movement has joined #openindiana
[10:03:19] *** |AbsyntH| has quit IRC
[10:05:11] *** |AbsyntH| has joined #openindiana
[10:05:23] *** Sachiru has quit IRC
[10:05:48] *** sjorge has quit IRC
[10:06:03] *** sjorge has joined #openindiana
[10:08:07] *** movement has quit IRC
[10:18:46] *** tsoome has joined #openindiana
[10:19:23] *** kforbz has quit IRC
[10:22:15] *** movement has joined #openindiana
[10:27:06] *** Hurri has quit IRC
[10:28:09] *** kforbz has joined #openindiana
[10:29:13] *** OMV-User has quit IRC
[10:33:05] *** r4idenZA has quit IRC
[10:47:24] *** Micr0mega has joined #openindiana
[10:47:29] *** POloser has left #openindiana
[10:48:01] *** POloser has joined #openindiana
[10:56:10] *** OMV-User has joined #openindiana
[10:57:57] *** movement has quit IRC
[11:03:19] *** enricop has joined #openindiana
[11:10:33] *** enricop has quit IRC
[11:10:40] *** movement has joined #openindiana
[11:22:15] *** TomJ has quit IRC
[11:22:57] *** gweiss has quit IRC
[11:26:15] *** Whoopsie has joined #openindiana
[11:26:16] *** ChanServ sets mode: +v Whoopsie
[11:55:04] *** movement has quit IRC
[12:11:14] *** movement has joined #openindiana
[12:16:31] *** heldchen has quit IRC
[12:23:57] *** Hurri has joined #openindiana
[12:28:11] *** |AbsyntH| has quit IRC
[12:29:25] *** jellydonut has joined #openindiana
[12:30:16] *** InTheWings has joined #openindiana
[12:30:35] *** RicardoSSP has joined #openindiana
[12:36:18] *** heldchen has joined #openindiana
[12:37:55] *** Whoopsie has quit IRC
[12:47:08] *** slx86 has quit IRC
[12:58:54] *** anikin has quit IRC
[13:01:45] *** held has joined #openindiana
[13:03:49] *** heldchen has quit IRC
[13:08:23] *** Whoopsie has joined #openindiana
[13:08:23] *** ChanServ sets mode: +v Whoopsie
[13:15:21] *** Seony has quit IRC
[13:16:02] *** Seony has joined #openindiana
[13:24:12] *** RicardoSSP has quit IRC
[13:24:33] *** ira has joined #openindiana
[13:30:48] *** echel0n has quit IRC
[13:44:58] <lennard> does someone have enough clue about DDT to tell me what my memory requirements would be for:
[13:45:12] <lennard> $ sudo zdb -DD archiving
[13:45:13] <lennard> Password:
[13:45:13] <lennard> DDT-sha256-mac-zap-duplicate: 435811 entries, size 1126 on disk, 204 in core
[13:45:16] <lennard> DDT-sha256-mac-zap-unique: 16950981 entries, size 1134 on disk, 206 in core
[13:45:27] *** patdk-lap has quit IRC
[13:45:37] <lennard> or with zpool status -D: DDT entries 17386792, size 1134 on disk, 206 in core
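For what it's worth, the usual rule of thumb answers lennard's question roughly: each DDT entry costs about its reported "in core" size in RAM, though a figure of ~320 bytes/entry is often quoted to leave headroom for ARC overhead. Ballpark only:

```shell
# DDT RAM estimate from the "zpool status -D" line above.
entries=17386792
echo $(( entries * 206 ))   # reported in-core size: 3581679152 bytes (~3.3 GiB)
echo $(( entries * 320 ))   # oft-quoted ~320 B/entry rule: 5563773440 bytes (~5.2 GiB)
```

So dedup on this pool wants several GiB of RAM just for the table, on top of whatever the ARC needs for ordinary caching.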
[13:55:41] *** POloser has left #openindiana
[13:56:23] *** movement has quit IRC
[13:58:38] *** patdk-lap has joined #openindiana
[14:09:01] *** movement has joined #openindiana
[14:18:06] *** anikin has joined #openindiana
[14:20:52] *** |AbsyntH| has joined #openindiana
[14:24:23] *** slx86 has joined #openindiana
[14:27:07] *** Sachiru has joined #openindiana
[14:30:21] *** sjorge has quit IRC
[14:30:45] *** sjorge has joined #openindiana
[14:30:46] *** sjorge has joined #openindiana
[14:30:53] *** sjorge has quit IRC
[14:32:34] *** sjorge has joined #openindiana
[14:32:49] *** sloop has left #openindiana
[14:35:08] *** CVLTCMK0 has quit IRC
[14:38:36] *** Sachiru has quit IRC
[14:39:48] *** Sachiru has joined #openindiana
[14:44:09] *** oninoshiko|2 has quit IRC
[14:46:12] *** lgtaube has quit IRC
[14:46:16] *** Neddie__ has joined #openindiana
[14:46:49] *** Neddie_ has quit IRC
[14:46:56] *** tcos_ has joined #openindiana
[14:47:07] *** DerSaidin has quit IRC
[14:47:08] *** lgtaube has joined #openindiana
[14:47:47] *** tcos has quit IRC
[14:47:48] *** Nemykal has quit IRC
[14:49:07] *** DerSaidin has joined #openindiana
[14:49:07] *** DerSaidin has joined #openindiana
[14:49:48] *** Nemykal has joined #openindiana
[14:56:15] *** thistle has quit IRC
[14:56:38] *** thistle has joined #openindiana
[14:59:58] *** jamesd has joined #openindiana
[15:05:55] *** benben159 has quit IRC
[15:06:51] *** Seony has quit IRC
[15:22:26] *** tsoome_ has joined #openindiana
[15:22:26] *** tsoome_ has quit IRC
[15:24:52] *** tsoome has quit IRC
[15:25:56] *** myrkraverk has quit IRC
[15:31:26] *** Ducblangis has joined #openindiana
[15:32:43] *** Whoopsie has quit IRC
[15:36:27] *** eryc is now known as er|c
[15:39:37] *** Uranio has joined #openindiana
[15:45:28] *** Whoopsie has joined #openindiana
[15:45:29] *** ChanServ sets mode: +v Whoopsie
[15:51:41] *** sjorge has quit IRC
[15:52:19] *** sjorge has joined #openindiana
[15:52:19] *** sjorge has joined #openindiana
[15:56:13] *** Sachiru has quit IRC
[15:58:02] *** dandyd449 has quit IRC
[16:00:43] *** ball has joined #openindiana
[16:09:10] *** tsoome has joined #openindiana
[16:11:33] *** held has quit IRC
[16:15:14] <Ducblangis> Is there a terminal based process manager like htop and atop? I can't find either of those in the packages
[16:15:17] <Ducblangis> when I search
[16:15:27] <Ducblangis> and I wouldn't mind trying a new one anyway
[16:16:26] <ball> Ducblangis: top?
[16:17:50] <ball> ps? :-)
[16:17:59] *** slx86 has quit IRC
[16:18:09] <oninoshiko> prstat
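prstat is the native illumos/Solaris process monitor; a few common invocations:

```shell
prstat                # refreshing per-process view, top-like
prstat -s cpu -n 10   # ten biggest CPU consumers
prstat -mL            # per-thread microstate accounting (where time really goes)
prstat -Z             # summarize usage per zone
```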
[16:18:34] <Ducblangis> heh yea I use top
[16:18:43] <Ducblangis> but I like HTOP a lot, got used to it
[16:18:46] <Ducblangis> for years
[16:19:09] <Ducblangis> iftop is available
[16:19:18] <Ducblangis> but that's for bandwidth
[16:20:35] <Hedonisto> adobe has stopped making flash for solaris will there be a substitute like gnash in openindiana's future?
[16:20:41] <ball> What do "htop" and "atop" offer over top?
[16:21:04] <tsoome> extra char.
[16:21:07] <tsoome> :D
[16:21:23] <Ducblangis> quite a few things, for me the main differences that I like
[16:22:18] <Ducblangis> is that you can easily kill a process quickly as opposed to typing in the process, the tree view, it supports mouse operations and no delay
[16:22:31] <Ducblangis> for each unassigned key you press
[16:22:50] <Ducblangis> I am sure I could get used to top itself but I just like htop
[16:23:01] <Ducblangis> personal preference, ya know? :)
[16:24:42] <Ducblangis> oninoshiko : no flash for us?
[16:24:45] <Ducblangis> no youtube
[16:26:28] <oninoshiko> adobe does make a flash release for solaris-based systems. It just isn't packaged up all nice
[16:26:48] <Ducblangis> there an ftp for it?
[16:26:55] <Ducblangis> public
[16:26:58] <ball> What is this "mouse" of which you speak? ;-)
[16:27:06] <Ducblangis> hehe
[16:27:17] <oninoshiko> last I looked, I went through adobe's site to find it.
[16:27:31] <oninoshiko> I really meant to get it packaged up all nice.
[16:27:39] <Ducblangis> I use terminal mainly
[16:27:47] <Ducblangis> screenshot from like a month ago
[16:28:05] <Ducblangis> so a mouse usually isn't used on my machine
[16:28:06] <Ducblangis> but
[16:28:20] <oninoshiko> there is not a version of flash for lynx.
[16:28:25] <Ducblangis> it is nice to have something where I can just see the process, click it and F9
[16:28:32] <Ducblangis> I used elinks
[16:28:38] <Ducblangis> for my terminal browser
[16:30:28] <oninoshiko> really? that's unfortunate
[16:30:53] <Agnar> oninoshiko: Flash Player 11.2.202.223 is the last for solaris
[16:31:43] <oninoshiko> if by "good" you mean " better then a poke in the eye" yes, if you mean "actually useful for anything" no.
[16:31:58] <Ducblangis> Well I rate my software by eye pokes
[16:32:01] <Ducblangis> so perfect for me
[16:32:54] <oninoshiko> although, I'll admit that I haven't messed with it in some time. maybe it's better.
[16:33:11] <oninoshiko> Agnar: that version is still available though?
[16:33:56] <Agnar> oninoshiko: only via archived versions for developers on adobe.com
[16:34:01] <Agnar> and in ~olbohlen/tmp/
[16:34:04] <Agnar> ;)
[16:34:49] <oninoshiko> that's unfortunate. It used to be available to those with a distribution license as well.
[16:34:55] <lblume> No worries. HTML5 to the rescue.
[16:36:03] <oninoshiko> which HTML5? the one with an actual spec, or the one I like to call "what standard? by what-wg?"
[16:36:03] <Agnar> lblume: well, fortunately oracle converted their MOS from flash to html...otherwise opening a case would be a problem ;)
[16:37:18] <|woody|> Does the firefox that comes with solaris 10 work with the MOS site? :)
[16:37:19] <lblume> That is the one positive effect of HTML5, prodding Oracle out of a language they did not quite master.
[16:37:50] <Agnar> woody: of course not - but on mozilla.com there are update packages from the jds team ;)
[16:38:11] <lblume> oninoshiko: The one that'll be superseded by some other incompatible-thing-with-a-hype-name as soon as it'll start to be actually usable.
[16:38:34] <Agnar> lblume: like CDE with JDS? ;)
[16:38:41] <|woody|> well I should open a case then sometime just to give support something to do :)
[16:38:48] <Agnar> or OpenView with CDE ;)
[16:38:52] <|woody|> I need a new case, I'm bored
[16:38:59] <lblume> Agnar: Let's not get over ourselves there.
[16:39:10] <oninoshiko> I'm pretty sure they announced the plan is to just keep it unusable, or something like that (the exact phrase may have been "living standard")
[16:39:12] <Agnar> err. OpenLook
[16:40:48] <lblume> oninoshiko: There is HTML5/Apple, for which you'll install Safari, then HTML5/MS, for which you'll need IE, and HTML5/Google, tuned for Chrome.
[16:40:58] <lblume> Thank gods it's an open standard.
[16:41:27] <oninoshiko> yes! that's the wonderful thing about standards! there are so many to choose from!
[16:41:55] <Agnar> lblume: nfs is an open standard too - and rumours say that with linux kernel 4.x it will become stable...eventually...
[16:42:58] <Agnar> ;)
[16:43:08] <Agnar> oninoshiko: hehe
[16:43:33] <|woody|> Agnar nfsv2? :)
[16:44:34] <tsoome> well….. most of the software is "open source", yet someone still asked what htop and atop offer ;)
[16:45:00] <lblume> Agnar: Pah, never had to deal with S9 mounting NFS shares from S10 with a zfs backend? Poor, poor users.
[16:45:32] <tsoome> what about it?
[16:45:36] <ira> I don't know what our highest number is right now…. ;)
[16:45:43] <ira> But it isn't S10 at least.
[16:46:16] <Agnar> lblume: hehe, at least the sol client survives a rebooting nfs server ;)
[16:46:20] <lblume> tsoome: S9 df being all confused with available space, ACLs support being, err, no, forget about that.
[16:47:02] <tsoome> df is not confused. user using df is.
[16:47:20] <lblume> But yeah, I had to deal with that. S10, RHEL4/5 servers, S10, 9, 8, RHEL4/5, Debian clients... *shudders*
[16:47:43] <lblume> tsoome: Don't tell me df was not confused since Sun implemented RFE for it :-P
[16:47:53] <tsoome> also v3 (and s9 does only v2/v3) does only know "posix" acl.
[16:48:18] * oninoshiko shudders with lblume
[16:48:52] <tsoome> but i can imagine it can be quite "fun":D
[16:48:55] <oninoshiko> there is a reason why monocultures are more popular than they should be.
[16:48:57] <lblume> Yup. And since Sun officially gave up having a translation layer POSIX<->NFSv4, fun, fun fun.
[16:49:11] <tsoome> but then again, why anyone is still using s9… :P
[16:49:18] <lblume> It was a while ago.
[16:49:22] <tsoome> aye
[16:49:26] <|woody|> well S9 not:)
[16:49:40] <lblume> I hope for them that they trashed those S9 boxes....
[16:49:41] <|woody|> still have like 10 or more S8 installs I know of
[16:50:00] <|woody|> here everyone skipped S9
[16:50:18] <|woody|> they either used S8 or went to S10
[16:50:27] <Agnar> SunOS olca5017 5.9 Generic_122300-51 sun4us sparc FJSV,GPUZC-M
[16:50:30] <Agnar> *yuck* :)
[16:51:08] *** anikin has quit IRC
[16:51:20] <|woody|> 5.8 Generic_108528-29 sun4u sparc SUNW,Ultra-1
[16:51:37] <Agnar> woody: at work? as a server? ;)
[16:52:02] <|woody|> yes
[16:52:08] <Agnar> yay ;)
[16:52:10] <|woody|> 3 of them
[16:52:27] <Agnar> really an Ultra1 or is it an E150?
[16:53:38] <oninoshiko> wow... I guess if it ain't broke....
[16:54:46] <|woody|> E150
[16:54:49] <|woody|> but still :)
[16:55:06] <Agnar> woody: oh those are rare
[16:55:21] <Agnar> and awful to maintain if you need to open them :)
[16:55:22] <|woody|> and it has some crazy cluster framework I never seen before
[16:55:26] <Agnar> all these fillers
[16:55:40] <|woody|> no clue :) They will be migrated asap
[16:55:51] <|woody|> I don't want to see them again
[16:55:52] <|woody|> :)
[16:56:29] <Agnar> hehe
[16:56:34] <oninoshiko> I'm pretty sure I have a nice big sledge-hammer
[16:56:40] <Agnar> if they are off, open them :)
[16:56:43] <Agnar> have a look ;)
[16:57:08] <ball> Gah, did I miss a SPARC conversation? :-)
[16:57:23] <|woody|> Would love to but the trip is too far
[16:57:31] <|woody|> oh the other 2 are V100
[16:58:03] <|woody|> but I have one of those too: 5.9 Generic_118558-38 sun4us sparc FJSV,GPUZC-M
[16:58:12] <oninoshiko> which reminds me, I have a couple of p3s I need to go all Gallagher on.
[16:58:16] <|woody|> they collected everything with a sparc label on it :)
[16:58:46] <ball> Was the V100 PATA?
[16:58:51] <ball> (for the disks?)
[16:59:02] <lblume> Yes
[16:59:19] <lblume> Holds the #2 slot in the crappiest system Sun ever made.
[16:59:47] <oninoshiko> lblume, what holds #1?
[16:59:54] <ball> 386i?
[17:00:03] <oninoshiko> that was what I was wondering.
[17:00:05] <lblume> U5/U10
[17:00:24] <ball> I think I'd rather have an Ultra5 or Ultra10 than a 386i
[17:01:05] <ball> What's the largest disk you can shove in a V100?
[17:01:21] <lblume> I never used a 386i, but they seemed more realistic for their market than the U5 was.
[17:01:45] <|woody|> 80GB was the largest SUN sold for it
[17:01:46] *** heldchen has joined #openindiana
[17:02:08] <|woody|> but worked with others too
[17:02:22] <|woody|> don't know what the biggest IDE disk you could buy was
[17:02:27] *** held has joined #openindiana
[17:02:55] <Agnar> lblume: nope, Blade100 was worse than U10
[17:02:57] <ball> I still see 320G PATA drives.
[17:03:09] <ball> I forget how slow the interface was on the V100 though
[17:03:13] <Agnar> lblume: and funny enough often slower than the older U10 ;)
[17:03:15] <ball> Hang on, let's get glock in here.
[17:04:27] <ball> I think I need a larger monitor
[17:04:33] <Agnar> and crappy sun systems...the SS670 sucked also
[17:05:00] <oninoshiko> could always just get a second one
[17:05:01] <ball> Was that SPARC in a VME chassis?
[17:05:16] <Agnar> getting the VME boards into the chassis is not nice if you have to put them in from above
[17:05:21] <Agnar> ball: yes.
[17:05:30] <lblume> Agnar: You just beat me by being older and having enjoyed them for a longer time ;-)
[17:05:38] <Agnar> my last VME machine - SS670 with 4x40MHz and 128MB RAM ;)
[17:06:00] *** heldchen has quit IRC
[17:06:02] <Agnar> with sbus presto-serve!!!one one ;)
[17:07:12] <ball> What's sad is that my Atom box is probably faster than that.
[17:07:42] <|woody|> they are gone though already
[17:07:54] <lblume> ball: Why, you wish things were static?
[17:08:31] <Agnar> then you know why you're going to hate servicing the E150 ;)
[17:09:10] *** ira is now known as ira_away
[17:09:11] <Whoopsie> E150 was the work of Satan
[17:09:29] <Agnar> Whoopsie: indeed :)
[17:09:48] <Whoopsie> I melted one - took it out the shipping box, opened it up to drop in some SCSI cards, removed all the foam 'packing'
[17:09:52] <Whoopsie> Plugged it in
[17:10:10] <Whoopsie> A day later it had a thermal event and never worked again
[17:10:42] <ball> lblume: No, it's helpful that today's servers are more affordable.
[17:10:42] <Agnar> hehe, common mistake :)
[17:11:05] <ball> (and more efficient)
[17:12:03] <lblume> Easier to offshort jobs!
[17:13:22] <ball> offshore?
[17:13:34] <oninoshiko> easier to afford the power bill
[17:14:10] <lblume> ball: yes, sorry. I'm attempting to understand the magic word to set a default route in S11.
[17:14:32] <|woody|> route add -p
[17:15:27] <|woody|> route -p add default
[17:15:50] * ball is confused
[17:17:12] <lblume> |woody|: Ah, so I was right. I was confused. Well, I'm glad that route -p add is finally prominent.
[17:17:30] <tsoome> it's already in s10 as well ;)
[17:17:34] * oninoshiko rather likes that
[17:17:47] <|woody|> yes but no one used -p
[17:18:14] <lblume> Exactly.
[17:18:48] <lblume> Not many people even knew about it, considering how often I saw handmade scripts for routes.
[17:20:01] *** Micr0mega has quit IRC
[17:20:12] <lblume> Oooohhhh, zpool warns me when I create a stripe pool that it is unsafe. That's sweet =)
[17:20:54] <ball> Is it considered "unsafe" when the underlying elements aren't mirrors?
[17:21:20] <lblume> Yes, it tells me there is no redundancy. I guess it would notice if they were mirrors.
[17:21:38] <oninoshiko> I've not had it warn with anything with mirrors
[17:21:58] <ball> Interesting.
[17:23:28] <|woody|> gee I hope this E150 will be migrated soon. It scares me every time I log into it :) Even though I would love to see it fail since it would have funny consequences :)
[17:23:43] <oninoshiko> i don't think it does if you just do a single disk, either.
[17:24:02] <lblume> oninoshiko: Let me try.
[17:24:07] <ball> What would be the sense in "striping across" a single disk?
[17:24:22] <ball> ...just that you could grow it by adding another later?
[17:24:26] <oninoshiko> i mean a single disk zpool, no striping
[17:25:05] <Agnar> woody: to be curious - do you run it with a 64bit kernel? (isainfo -b)
[17:25:16] <ball> Wouldn't it make sense for it to warn about a single disk zpool?
[17:25:22] <oninoshiko> it's not redundant, so it's bad, but it's less bad than striping without mirrors.
[17:25:23] <ball> (because it's not redundant?)
[17:25:36] <lblume> No warning.
[17:26:23] <oninoshiko> it's arguable either way. it's not redundant, and therefore not safe, that's true, but the configuration is so common it's not exactly unexpected.
[17:26:31] *** Ducblangis has quit IRC
[17:27:49] <lblume> And admittedly, the fact that zpool create stripes by default is implicit. You don't write "zpool create stripe tank xxx"; I remember back then I had to check to know what it was actually doing.
[17:28:23] <oninoshiko> and it's easy enough to miss the word "mirror" in there
[17:28:43] <oninoshiko> I'm embarrassed to say, I've done that before.
[17:29:08] <tsoome> IMO it should warn you if you are using zpool add
[17:29:27] <tsoome> considering there is no way to remove;)
[17:29:30] * ball makes some notes
[17:29:37] <ball> rpool is the root pool, right?
[17:29:44] <|woody|> Agnar no it's 32bit
[17:29:46] <tsoome> usually yes
[17:30:00] <oninoshiko> generally, sometimes it will be rpool1 or something like that though
[17:30:02] <tsoome> altho you can name it whatever you like
[17:30:15] <oninoshiko> and what tsoome said!
[17:30:33] <Agnar> woody: because of a strange security issue :) have you read the man kernel? ;)
[17:30:36] <ball> ...and a mirror is just called a "mirror"?
[17:30:53] <tsoome> ball, man zpool is your friend really:)
[17:31:05] <tsoome> and yes, mirror is just "mirror"
[17:31:12] <ball> Oh yes! I'm on an OpenIndiana box today!
[17:31:47] <oninoshiko> no a mirror is called whatever you want. you just have to tell it you want it to be a mirror. "zpool create NotTank mirror blah blah"
[17:31:58] <|woody|> Agnar that thing has too many security problems ...
[17:32:17] <Agnar> woody: of course :)
[17:33:26] <ball> What is the zfs equivalent of RAID-0?
[17:33:43] <tsoome> zpool create tank disk1 disk2 … diskn
[17:33:45] <Alasdairrr> ball: a pool with no raid parity
[17:33:50] <oninoshiko> ball: "zpool create NotTank blah blah"
[17:34:00] *** Ducblangis has joined #openindiana
[17:34:04] <ball> Hmm...
[17:34:26] <tsoome> or zpool create tank disk0; zpool add tank disk1; ...
[17:35:02] <oninoshiko> like I said, it's easy enough to accidentally create a strip (raid-0) when you mean a mirror. the warning is good.
[17:35:13] <tsoome> basically, you create pool from vdev (vdev is disk, mirror, raidz), you add next vdev and data is striped over vdevs
[17:35:16] <oninoshiko> stripe*
[17:36:53] <ball> So would a mirror just be a raidz with one ordinary disk and one parity disk?
[17:37:10] <ball> ...or is a mirror a separate animal?
[17:37:16] <oninoshiko> no. A mirror is a mirror. It doesn't use parity bits
[17:37:26] <ball> Ah good, thanks.
[17:37:48] * ball checks his disk space
[17:39:10] *** spanglywires has joined #openindiana
[17:40:27] <tsoome> note you can't expand raidz, but you can add the next raidz vdev.
[17:40:59] <lblume> And if you create a funky thing, like zpool create pool c3t1d0 mirror c3t2d0 c3t3d0, you need to use -f
[17:41:34] <lblume> Well, you can expand raidz by replacing all disks with bigger ones. Admittedly limited, but still quite useful.
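The create-time variants being discussed, sketched with placeholder device names (these need root and real disks, so read them as illustration rather than something to paste in):

```shell
# Plain stripe (the RAID-0 equivalent): no redundancy, and the thing
# newer zpool versions warn you about.
zpool create tank c3t1d0 c3t2d0

# Two-way mirror: you have to say "mirror" before the devices.
zpool create tank mirror c3t1d0 c3t2d0

# Two mirror vdevs; data is then striped across the vdevs (RAID-10 equivalent).
zpool create tank mirror c3t1d0 c3t2d0 mirror c3t3d0 c3t4d0

# Mixed redundancy (a bare disk plus a mirror) is the funky case that needs -f.
zpool create -f tank c3t1d0 mirror c3t2d0 c3t3d0
```

The only difference between the accidental stripe and the intended mirror is that one word, which is why missing it is so easy.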
[17:42:14] <oninoshiko> yes. quite.
[17:42:40] <|woody|> that's why I tend to use mirrors for pools that might need to grow
[17:43:17] <tsoome> also, if you wanna play with pool setup, you can create pool from files instead of physical disks. so you can play with different setups and see what it does
[17:43:25] <oninoshiko> they also seem to perform better (whether that's an issue depends on what you are doing with it)
[17:43:51] <|woody|> I have raidz2 that perform great
[17:44:15] <oninoshiko> YMMV
[17:44:27] <lblume> Or use VBox.
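tsoome's file-backed sandbox idea, sketched out (sizes and paths are arbitrary; needs root and ZFS, so it is illustration only):

```shell
# Build a throwaway pool from files instead of physical disks,
# so you can try different vdev layouts without touching real hardware.
mkfile 128m /var/tmp/vdev1 /var/tmp/vdev2

# Same syntax as with real disks; here, a two-way mirror of the files.
zpool create sandbox mirror /var/tmp/vdev1 /var/tmp/vdev2
zpool status sandbox

# Experiment (zpool add, attach, replace, ...), then tear it all down:
zpool destroy sandbox
rm /var/tmp/vdev1 /var/tmp/vdev2
```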
[17:44:38] * ball looks at the output from df on a single disk system and gets horribly confused
[17:44:43] <lblume> raidz2 sucks for writes, by design.
[17:45:12] <richlowe> "due to the", not "by", at the most.
[17:45:24] <richlowe> you make it sound like people sat around planning how to make it suck for writes
[17:45:25] <oninoshiko> I wouldn't say that, I would say it's a side effect of the ... what richlowe said
[17:45:47] <lblume> richlowe: Well, they had a choice, write hole or sucky writes ;-)
[17:45:53] <Agnar> richlowe: try to imagine that meeting :)
[17:46:05] <ball> Sucky writes ftw then.
[17:46:18] <lblume> "What is the best way to make people complain while still making it a selling point"
[17:46:54] <Agnar> *grin* "ok guys, what can we do to make zfs suck at least in one point?" :)
[17:46:59] <lblume> Yep, after all, if you want fast raid5, just put it over hw raid.
[17:47:28] <Agnar> lblume: which is not really faster...
[17:47:36] <lblume> Agnar: Oh, yes, yes, it is.
[17:47:52] <lblume> By at least one order of magnitude.
[17:48:14] <tsoome> depending on setup. it may be, but also it may not be
[17:48:31] <Agnar> ok, have to go...see you tomorrow :)
[17:48:49] <lblume> A basic PERC card a few years back had raid5 at 600MB/s for a simple config. You can't get that with raidz without splitting the writes on at least twice as many disks
[17:49:12] <oninoshiko> I don't know about the performance differences, but I've had issues with every HW raid i've ever used. To me that's more important then performance.
[17:49:20] <lblume> *nods*
[17:49:37] <tsoome> you can get it easily, but only if you have heavily streaming writes so you fill up the raidz stripe
[17:49:41] <ball> On the positive side, it's not a PERC.
[17:50:29] <Agnar> lblume: T4-4, 6 disks in raidz1, 760MB/s
[17:50:29] <|woody|> yes I use it to store backup streams which runs like hell
[17:50:41] <tsoome> the battery-backed raid cards don't have to do full stripe writes, and that is what's limiting raidz
[17:51:21] <lblume> Agnar: For read or write?
[17:51:37] <Agnar> lblume: write
[17:51:57] <ball> Hmmm... wonder if I should check the battery on my RAID card
[17:51:59] <Agnar> SAS-II 10k rpm 300GB disks
[17:52:58] <lblume> Agnar: Really? with a slog?
[17:53:28] <lblume> Because technically, unless a single disk does 760MB/s, it's not possible to do that with raidz.
[17:53:51] <tsoome> ?
[17:53:53] <lblume> ball: It should tell you when it's broken and disable write cache. If it's not a crappy raid card :-)
[17:53:54] <Agnar> config:
[17:53:54] <Agnar>         NAME                         STATE     READ WRITE CKSUM
[17:53:54] <Agnar>         apppool                      ONLINE       0     0     0
[17:53:54] <Agnar>           raidz1-0                   ONLINE       0     0     0
[17:53:54] <Agnar>             c0t5000CCA0258A73C8d0    ONLINE       0     0     0
[17:53:56] <Agnar>             c0t5000CCA0258B8254d0    ONLINE       0     0     0
[17:53:59] <Agnar>             c0t5000CCA0258A7E7Cd0    ONLINE       0     0     0
[17:54:01] <Agnar>             c0t5000CCA02584B68Cd0    ONLINE       0     0     0
[17:54:04] <Agnar>             c0t5000CCA0258B9930d0    ONLINE       0     0     0
[17:54:06] <Agnar>             c0t5000CCA02584B6E8d0    ONLINE       0     0     0
[17:54:23] <Agnar> lblume: you need one disk that brings 760/5 and there you go
[17:54:39] <lblume> 760/5 ?
[17:54:39] <ball> lblume: I'm using an OS that doesn't talk to the card and I don't have LOM on that one.
[17:54:50] <ball> lblume: So I'll have to reboot while I'm sitting in front of it.
[17:55:00] <Agnar> lblume: 6 not 5. sorry, so ~130MB/s
[17:55:31] <ball> ...and it's 210km away.
[17:55:34] <Agnar> i have to leave...see you tomorrow
[17:55:40] <lblume> Agnar: For *reads*, yes, but *writes* are the speed of the single slowest disk. That's why raidz is not raid5.
[17:55:41] <ball> Bye Agnar
[17:55:55] <lblume> So I'm surprised. Bye anyway :-)
[17:56:27] <lblume> ball: Send a minion to hold his cell phone camera in front of it while you reboot it remotely
[17:56:27] <tsoome> lblume: it's not exactly true
[17:57:30] <tsoome> for a single block it's about single disk throughput, but if you have stream data filling up the entire stripe, you will get *N disk throughput
[17:57:35] <ball> lblume: I have no minions.
[17:57:45] <tsoome> well, not exactly as there is still parity around, but still
[17:58:52] <|woody|> ok gone too
[17:59:57] <oninoshiko> there are days I'd rather be lucky than good :)
[18:00:53] <lblume> and I'm gone too. Have a good evening and may your block checksums always match.
[18:01:18] <oninoshiko> bye, lblume
[18:01:28] <oninoshiko> (and everyone else leaving)
[18:01:42] *** joti has joined #openindiana
[18:01:46] *** ira_away is now known as ira
[18:02:42] *** Nitial_ is now known as nitial
[18:02:43] *** joti has left #openindiana
[18:03:13] *** andy_js has quit IRC
[18:05:02] *** joti has joined #openindiana
[18:07:00] *** |AbsyntH| has quit IRC
[18:09:12] *** GHAI_ has joined #openindiana
[18:09:45] *** imaxs has quit IRC
[18:10:02] *** GHAI_ is now known as GHAI
[18:11:28] *** GHAI has quit IRC
[18:12:16] *** GHAI_ has joined #openindiana
[18:12:46] *** GHAI_ is now known as GHAI
[18:13:09] * patdk-wk always uses 0's for his checksums
[18:14:24] <oninoshiko> the math is easier that way
[18:14:31] *** gweiss has joined #openindiana
[18:14:44] <oninoshiko> let's all give it up for noop!
[18:14:51] <patdk-wk> heh
[18:14:56] <ball> :-)
[18:15:04] <patdk-wk> I made a program to get small udp packets and check them with crc32
[18:15:16] <patdk-wk> for some reason the sender keeps sending the crc32 as a 64bit value
[18:15:27] <patdk-wk> so I get a crc32 of 0, followed by the real crc32
[18:15:28] <oninoshiko> o.O
[18:15:49] <oninoshiko> that's... odd
[18:16:21] <patdk-wk> ya, no idea what they did
[18:16:25] <patdk-wk> but easy to fix on my end at least
[18:16:41] <patdk-wk> I let them know about the issue, but haven't seen it fixed yet
[18:16:58] <patdk-wk> they got all other 16, 32, and 64bit values correct, except for the crc :)
[18:17:38] <oninoshiko> why should they fix it, you already "did"
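The symptom patdk-wk describes — a crc32 widened to a 64-bit field on the wire — looks like this in a hex dump; the bytes here are made up (0xdeadbeef stands in for the real checksum):

```shell
# The sender writes the 32-bit CRC into a 64-bit big-endian field,
# so the receiver sees 4 zero bytes followed by the real crc32.
printf '\000\000\000\000\336\255\276\357' > /tmp/crcfield.bin
od -An -tx1 /tmp/crcfield.bin               # 00 00 00 00 de ad be ef

# Workaround on the receiving side: keep only the last 4 bytes of the field.
tail -c 4 /tmp/crcfield.bin | od -An -tx1   # de ad be ef
```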
[18:26:32] *** slx86 has joined #openindiana
[18:31:39] *** marcus-- has joined #openindiana
[18:32:53] <Ducblangis> That's probably not going to be top priority if it is something a user can fix
[18:34:21] <patdk-wk> user?
[18:34:27] <patdk-wk> I never said I was a user
[18:34:39] <patdk-wk> I was managing the server the client app is reporting to
[18:34:54] <Ducblangis> oh you're on the development team?
[18:34:57] <Ducblangis> oh server
[18:34:59] <patdk-wk> kindof
[18:35:11] <patdk-wk> manage security for the app
[18:35:20] <Ducblangis> but you don't use OI?
[18:35:25] <patdk-wk> on developement team as far as security is involved
[18:35:31] <patdk-wk> heh?
[18:35:42] <Ducblangis> n/m
[18:35:43] <patdk-wk> none of this had to do with OI
[18:35:48] <Ducblangis> Security eh?
[18:36:17] <patdk-wk> encryption, firewalls, password hashing
[18:36:33] <Ducblangis> I did my Sec+ in 2010 and before that I did the Sec5 from ECC. What is the Sec+ like nowadays, do you know? I feel like I should get my certs updated
[18:36:42] <patdk-wk> have to make sure developers properly hash passwords :)
[18:36:47] <oninoshiko> all this talk of hash is making me hungry
[18:37:00] <oninoshiko> sha-512?
[18:37:15] * patdk-wk feeds sha-512 unsalted
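In between the hash jokes: what "properly salted" looks like from the shell, as a sketch. The password and salt here are placeholders, and `openssl passwd -6` assumes OpenSSL 1.1.1 or newer:

```shell
# SHA-512 crypt with a random per-password salt.
# 'hunter2' is a stand-in password; never put real ones on a command line.
salt=$(openssl rand -hex 8)
openssl passwd -6 -salt "$salt" 'hunter2'
# Output has the form $6$<salt>$<hash>, so the salt travels with the hash.
```

Storing the salt next to the hash (rather than omitting it, as patdk-wk's unsalted sha-512 joke implies) is what keeps identical passwords from producing identical hashes.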
[18:37:37] <Ducblangis> I might go for LPI instead though, that would be worth the money instead of updating my Security+ certificate
[18:37:48] <oninoshiko> BLASPHEMER!
[18:37:50] <Ducblangis> I don't know if I want to do the CEH or not
[18:37:55] <Ducblangis> haha
[18:38:11] <oninoshiko> I meant unsalted hashes, but you too!
[18:38:26] <Ducblangis> oh
[18:38:27] <Ducblangis> I see
[18:38:33] <Ducblangis> well I love salt on my hash
[18:38:48] <oninoshiko> normally hash is salty enough
[18:39:25] <oninoshiko> although, I like to tell them to hash my hash with the eggs and whatnot...
[18:39:45] <oninoshiko> I still dont think they quite understand that...
[18:41:13] <Ducblangis> eggs + potatoes is the best thing ever
[18:41:32] <oninoshiko> maybe I'll order hash for lunch today
[18:41:35] * ball wondered what to choose for hash when I truecrypted a disk yesterday
[18:42:33] *** Kaishi has joined #openindiana
[18:42:37] <patdk-wk> sec+ is bad, they really need updated questions
[18:44:20] <Ducblangis> Yea
[18:44:31] <Ducblangis> I was looking over the latest CompTIA Sec+ book
[18:44:45] <Ducblangis> they really do go over a lot of the same stuff as I did back in 2010
[18:44:52] <Ducblangis> I dont see much of a change at all
[18:44:54] <patdk-wk> I also hate their *you must know our shortcut words*
[18:45:22] <patdk-wk> I don't care what c3braids is, I do care about what encryption protocols it has, and that most of them I wouldn't touch
[18:45:58] <Ducblangis> I would love to do the OS certificates
[18:46:04] <patdk-wk> and I really could care less that wep uses rc4, I mean, if anyone used wep in the last 10 years, they have issues
[18:46:14] <Ducblangis> but those are a good deal more in terms of cash
[18:46:24] <Ducblangis> OS and in Offensive Security
[18:46:35] <patdk-wk> I'm just taking a online practice test
[18:46:37] *** held has quit IRC
[18:46:39] <patdk-wk> never seen sec+ before
[18:46:44] <patdk-wk> I don't really believe in certs
[18:46:47] <Ducblangis> WHy not?
[18:46:56] <patdk-wk> I believe you know the stuff or you don't
[18:47:02] <Ducblangis> Well yea
[18:47:05] <patdk-wk> I have never had to *prove it* with a cert
[18:47:09] <Ducblangis> But thats not what most people do it for
[18:47:18] <Ducblangis> They do it to have something to put on their resume
[18:47:22] <Ducblangis> not to prove something to a teacher
[18:47:27] <patdk-wk> ya, I have never had to make a resume
[18:47:38] <patdk-wk> putting it on a resume is proving you know it
[18:48:01] <Ducblangis> haha well yea obviously
[18:48:15] <patdk-wk> oh, best question yet
[18:48:19] <patdk-wk> sec+ test, echo is on port 7
[18:48:25] <patdk-wk> hmm, why?
[18:48:45] <Ducblangis> The dumbest section on that Exam was the ease of use triangle
[18:49:18] <Ducblangis> And the comparison that a black-box test is only carried out by black-hats and white-box by white-hats
[18:49:24] <Ducblangis> when in reality that is wayyyyy off
[18:49:59] <ball> Is it a bad sign when I work in enterprise IT and we don't have any infosec people?
[18:50:10] <Ducblangis> heh maybe
[18:50:39] <Ducblangis> I work for IBM refurbishing and recovering wafers and I don't know one infosec guy in the building
[18:50:55] <Ducblangis> or rather, a division of IBM
[18:51:07] <patdk-wk> we have an official infosec gal, I hate it
[18:51:15] <patdk-wk> she just reposts emails from dod once a month
[18:51:30] <ball> Wow... that's helpful.
[18:51:35] <Ducblangis> haha
[18:51:43] *** Whoopsie has quit IRC
[18:54:56] *** ball_ has joined #openindiana
[18:55:04] *** ball has quit IRC
[18:55:10] <ball_> Okay, so that was odd.
[18:55:13] *** ball_ is now known as ball
[18:56:07] <ball> My terminal windows all closed and when I clicked on the little terminal window icon in the menu bar, I got "Unable to fork"
[18:56:16] <ball> ...oh weird, it's working now.
[18:56:44] <ball> modest bump in my xload window, nothing outrageous.
[18:57:00] <Ducblangis> fork off
[18:57:27] <Ducblangis> I hope when I get old I won't be unable to fork
[18:57:55] *** marcus-- has quit IRC
[18:59:01] *** slx86 has quit IRC
[18:59:12] <ball> That was strange though. The first real oddity I've experienced on OpenIndiana
[18:59:19] *** melik has joined #openindiana
[18:59:34] <melik> is there any way i can force all connections to go through isns
[18:59:38] <oninoshiko> normally, that's a low-memory thing
[18:59:58] <melik> i don't want any initiators to be able to directly connect over port 3260 (sendtargets)
[19:00:54] *** imaxs has joined #openindiana
[19:02:06] <ball> oninoshiko: That seems likely. I have half as much in this box as I thought I did.
[19:03:56] <ball> (that /that/ was half as much as I would like ;-)
[19:04:09] <oninoshiko> ouch?
[19:04:57] <ball> Memory: 1014M phys mem, 56M free mem, 987M total swap, 104M free swap
[19:06:02] <ball> brb, rebooting
[19:06:05] *** master_of_master has quit IRC
[19:06:09] *** ball has quit IRC
[19:07:25] *** master_of_master has joined #openindiana
[19:09:05] *** Ducblangis has quit IRC
[19:14:00] *** DucBlangis has joined #openindiana
[19:20:44] *** kart_ has quit IRC
[19:23:30] *** DucBlangis has quit IRC
[19:28:37] *** xenol has quit IRC
[19:30:04] *** Webhostbudd has joined #openindiana
[19:33:32] *** movement has quit IRC
[19:43:01] *** heldchen has joined #openindiana
[19:44:41] *** xenol has joined #openindiana
[19:53:27] *** maccampus has joined #openindiana
[19:55:10] *** xenol_ has joined #openindiana
[19:55:13] *** xenol has quit IRC
[20:16:30] *** slx86 has joined #openindiana
[20:17:49] *** jamesd has quit IRC
[20:25:23] *** ira is now known as here_
[20:25:34] *** here_ is now known as ira
[20:28:23] *** marcus-- has joined #openindiana
[20:30:25] *** APTX has quit IRC
[20:30:36] *** APTX has joined #openindiana
[20:31:43] *** xenol_ has quit IRC
[20:31:45] *** nightwalk has quit IRC
[20:33:06] *** Kaishi has quit IRC
[20:37:06] *** xenol has joined #openindiana
[20:46:16] *** maccampus has quit IRC
[20:47:07] *** jamesd has joined #openindiana
[20:49:12] *** cyberspace- has quit IRC
[20:49:21] *** lennard has quit IRC
[20:51:10] *** APTX has quit IRC
[20:51:19] *** APTX has joined #openindiana
[20:53:54] *** cyberspace- has joined #openindiana
[20:54:03] *** lennard has joined #openindiana
[21:14:47] *** nightwalk has joined #openindiana
[21:18:20] *** spanglywires has left #openindiana
[21:19:56] *** slx86 has quit IRC
[21:21:28] *** classix has quit IRC
[21:22:02] *** classix has joined #openindiana
[21:30:15] *** Seony has joined #openindiana
[21:47:45] *** MDGrein has quit IRC
[21:48:52] <melik> is there any way i can force all connections to go through isns, i don't want any initiators to be able to directly discover targets using sendtargets discovery over port 3260
[21:52:44] *** MDGrein has joined #openindiana
[22:01:33] *** movement has joined #openindiana
[22:05:55] *** Lumb has quit IRC
[22:07:07] *** alanc has joined #openindiana
[22:07:08] *** ChanServ sets mode: +o alanc
[22:11:37] *** pjfloyd has joined #openindiana
[22:15:27] <patdk-wk> melik, enough already :) google!
[22:15:56] <patdk-wk> or maybe that is only initator
[22:16:37] *** SupremeOverlord has quit IRC
[22:18:41] *** pjfloyd has quit IRC
[22:29:43] *** smrt has quit IRC
[22:30:02] *** smrt has joined #openindiana
[22:37:21] *** Lumb has joined #openindiana
[22:38:56] <melik> comstar is dumb :(
[22:50:29] *** Lumb has quit IRC
[22:54:37] *** movement has quit IRC
[23:01:43] *** Uranio has quit IRC
[23:01:56] *** DocHoliday has joined #openindiana
[23:02:30] *** melliott has quit IRC
[23:04:34] *** melliott has joined #openindiana
[23:07:07] *** ancoron_ has joined #openindiana
[23:07:13] *** movement has joined #openindiana
[23:07:18] *** OMV-User has quit IRC
[23:10:57] *** ancoron has quit IRC
[23:12:27] *** movement has quit IRC
[23:19:17] *** OMV-User has joined #openindiana
[23:21:26] *** marcus-- has quit IRC
[23:24:56] *** movement has joined #openindiana
[23:28:08] <patdk-lap> comstar is gods gift to techies
[23:28:15] <patdk-lap> cause women are just too much of a pain
[23:30:07] <tsoome> lol
[23:30:30] *** sjorge has quit IRC
[23:30:46] *** sjorge has joined #openindiana
[23:30:46] *** sjorge has joined #openindiana
[23:36:22] *** ajeffco has joined #openindiana
[23:40:19] <ajeffco> If using an ssd for the boot drive, and use 16GB for the install slice, can I use the remaining unallocated space of the ssd for L2ARC if necessary?
[23:54:09] *** sjorge has quit IRC
[23:55:33] *** smrt has quit IRC
[23:55:51] *** smrt has joined #openindiana
[23:56:34] *** sjorge has joined #openindiana