July 5, 2011

[00:00:47] *** descipher has joined #openindiana
[00:00:49] <randomuser> i had specified a mount point at /silo; the mount point was gone
[00:00:57] <randomuser> i wonder if that affected the import
[00:01:02] *** axisys has joined #openindiana
[00:03:50] <sergefonville> didn't you just select a wrong be at boot?
[00:04:41] <randomuser> i selected a different (updated) be
[00:04:43] <tsoome> zfs will create mountpoint if needed
[00:04:52] <tsoome> thats not an issue at all
[00:05:03] <randomuser> i guess i wasn't aware it would break my pool
[00:06:09] <tsoome> well, system will know which pools to import from that cache file
[00:06:25] <tsoome> if you export the pool, its information will be removed from the cache
[00:07:08] <tsoome> but if there was some other pool cached there with those disks, it could confuse the system
[00:07:24] <randomuser> zpool import shows the pool healthy and online; zpool status only shows the rpool
[00:07:42] <tsoome> if you cleared the cache, you should be able to import it now
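A minimal sketch of what clearing the stale cache and re-importing amounts to, assuming the default cache location and taking the pool name "silo" from the /silo mountpoint mentioned above:

    mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bak   # set the stale cache aside
    zpool import                                       # scan devices for importable pools
    zpool import silo                                  # then import the pool by name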
[00:08:28] <randomuser> yes, that did it
[00:08:36] *** madwizard has quit IRC
[00:08:41] <tsoome> so your cache had some stale information
[00:08:44] <randomuser> so what made my cache stale? new be?
[00:09:03] <tsoome> cant really guess from here;)
[00:09:19] <randomuser> heh
[00:09:21] <tsoome> IMO the zpool cache management is a bit too fragile...
[00:09:35] <randomuser> i'm sure your speculation would be better than mine
[00:09:39] <tsoome> but well, its not fatal one, easy to clear
[00:09:53] <tsoome> when was that another BE created?
[00:10:06] <randomuser> immediately after install; i did a pkg image-update
[00:10:24] <tsoome> well, before you did create that pool?
[00:10:55] <randomuser> before creating the pool; but I didnt boot into it until after
[00:11:00] *** wonslung has quit IRC
[00:11:13] *** madwizard has joined #openindiana
[00:11:19] <randomuser> 3nT3RPR1S3
[00:11:28] <randomuser> GARBAGE!
[00:11:38] <tsoome> well, then it got a copy of the old cache file, then you created the pool and voila, those two caches were different
[00:11:47] <randomuser> sorry, too many keyboards
[00:12:20] <tsoome> na just tell the name of the system as well, we know the root password now:P
[00:12:40] <randomuser> hah!
[00:12:44] <tsoome> :D
[00:13:18] <randomuser> but, do you know which port to knock on, and how?
[00:13:31] <tsoome> port?
[00:13:45] <randomuser> fwknopd
[00:14:42] <tsoome> no idea what it is:D
[00:15:50] <randomuser> it's a daemon that listens for an encrypted packet, generated by a hash from a pre-shared key, to be dropped on a specified port
[00:16:22] <tsoome> so its listening some port, pfiles will tell
[00:16:29] <tsoome> or lsof
[00:17:01] <randomuser> on observing the packet, it manipulates the iptables config; in this case, a 10 second window on port 22 for only the originating ip
[00:17:57] <randomuser> well, if you have physical access to the machine, it doesn't mater
[00:18:22] <randomuser> but ntop doesn't see the knocker, so it's good enough for me
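A rough sketch of the client side of that knock, assuming the stock fwknop client; the hostname and allow-IP are placeholders:

    fwknop -A tcp/22 -a 203.0.113.5 -D nas.example.com   # send the SPA packet authorizing 203.0.113.5
    ssh admin@nas.example.com                            # connect within the 10-second window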
[00:24:36] *** mikw has quit IRC
[00:24:57] <sergefonville> you used port knocking :P
[00:24:59] <sergefonville> cool :D
[00:27:53] <sergefonville> i can't compile php from SFE :(
[00:27:58] <sergefonville> build*
[00:29:30] <randomuser> goddammit!
[00:29:46] <sergefonville> did you break it :O??
[00:29:58] <randomuser> so, a new boot environment means my /etc/ configs are wiped???
[00:30:26] <sergefonville> a new BE is normally based on the existing one
[00:31:04] <randomuser> well, i need to go burn things and drink beer
[00:31:12] <randomuser> thank you all for your help and patience
[00:33:21] <tsoome> if you wanna create BE to be used soon, you wanna create it just before switching to it.
[00:33:21] <tsoome> the BE's are created from zfs snapshot and clone
[00:33:21] <tsoome> so, if you create BE too early, you will miss later changes
[00:34:23] *** redgone has quit IRC
[00:36:26] <tsoome> but you can switch back now; remove that BE, create it again and have all updates you did
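The recreate-the-BE workflow tsoome describes would look roughly like this with beadm; the BE names are illustrative:

    beadm list                      # see the existing boot environments
    beadm destroy oi-updated        # drop the BE that was created too early
    pkg image-update                # creates and activates a fresh BE from the current one
    # or explicitly:
    beadm create oi-updated && beadm activate oi-updated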
[00:39:09] <randomuser> i'm not missing much
[00:39:20] <randomuser> it'll be good for me to do it again :)
[00:39:38] <sergefonville> practice does make perfect
[00:39:41] <sergefonville> so they say :P
[00:40:05] <randomuser> i was thinking more along the lines of 'familiarity breeds contempt'
[00:40:06] <randomuser> lol
[00:41:06] <tsoome> :P
[00:41:12] <randomuser> anyway, thanks again. going afk
[00:49:45] <sergefonville> repetitive failure breaks the keyboard
[00:50:51] <sergefonville> I gave up on php-fpm
[00:51:05] <sergefonville> now I'm looking into spawn-fcgi
[00:51:13] <sergefonville> but I don't think I get it
[00:55:00] *** ThomasB2k has joined #openindiana
[01:04:36] <sergefonville> I did it, edited the method and it 'just works'
[01:07:43] <sergefonville> I'm off, thanks everyone
[01:07:46] *** sergefonville has left #openindiana
[01:25:53] *** hjf_ is now known as hjf
[01:37:32] *** gea_ has quit IRC
[01:40:32] *** echobinary has joined #openindiana
[02:21:24] *** konobi has joined #openindiana
[02:21:39] <konobi> hello all... it's an illumos question... but i saw mentions of it here
[02:22:07] <konobi> has anyone seen issues with booting illumos/oi on a dell optiplex where it hangs at uhci/ohci stages?
[02:30:41] *** Triskelios has quit IRC
[02:35:38] *** Triskelios has joined #openindiana
[02:40:20] *** DanaG has joined #openindiana
[02:41:20] *** axisys has quit IRC
[02:41:33] <DanaG> hmm, anyone here run Deluge on OpenIndiana?
[02:44:23] *** InTheWings has quit IRC
[02:46:38] *** axisys has joined #openindiana
[02:48:11] <Triskelios> DanaG: I did the Deluge port to osol, but that was before OpenIndiana existed
[02:48:24] <Triskelios> SFEdeluge will work, but is quite old
[02:49:21] <DanaG> hmm, I'm trying to figure out what to use: freenas 7, openindiana, or ubuntu + zfs.
[02:49:51] <DanaG> FreeNAS is about perfect, except it uses Transmission, which sucks.
[02:50:06] <DanaG> If OpenIndiana has some sort of web interface, and will do Deluge, that's good.
[02:50:19] <Triskelios> I like Transmission
[02:53:44] <DanaG> It'd be fine with me, if it didn't keep insisting that certain files were 0% downloaded.
[02:54:01] <DanaG> Oh, and FreeNAS also didn't have ECC error reporting for AMD.
[02:54:08] <DanaG> Does OpenIndiana have that?
[02:58:35] <Triskelios> it should
[03:03:09] *** DrLou has joined #openindiana
[03:03:09] *** ChanServ sets mode: +o DrLou
[03:10:56] *** master_of_master has quit IRC
[03:12:58] *** master_of_master has joined #openindiana
[03:31:44] <DanaG> "OpenIndiana cannot be installed on any disk."
[03:44:44] *** miine has quit IRC
[04:27:05] *** POloser has joined #openindiana
[04:43:40] *** radsy has joined #openindiana
[04:58:24] *** kart_ has joined #openindiana
[04:59:08] <joffe> anyone here ever bought a kindle book on amazon? do i get a .mobi file so i can read it on my pc, or does it require their kindle thing?
[05:06:12] <alanc> they have a kindle client for the PC you can read it on
[05:06:25] <alanc> think the DRM prevents reading with a normal .mobi reader
[05:06:59] <richlowe> my understanding is that some of their catalogue is drm-free, but you have no way to tell in advance.
[05:07:09] <richlowe> alanc: build still ticking over?
[05:07:58] <alanc> yep, 247 runs without any pkgsend exceptions now
[05:39:54] *** Bahman has quit IRC
[05:48:23] *** DrLou has quit IRC
[05:52:55] *** Naresh``` has quit IRC
[05:53:49] *** EFree has joined #openindiana
[06:06:36] *** ThomasB2k has quit IRC
[06:23:02] *** blues has joined #openindiana
[06:23:15] <blues> anyone around?
[06:26:32] <McBofh> nope, we're all off huntin' wabbits
[06:26:40] <blues> trying to install oi latest dev version onto a whitebox, i5 proc, gigabyte mobo with p55 chipset, 2 x 2GB ddr3 ram. System has been working fine as a linux box, decided to try out OI and napp-it. Anyway, upon load of the disc i burned, after selecting first option for boot.. i hang
[06:27:17] <blues> i'm editing grub option to boot in verbose mode to try and see exactly where i'm hanging
[06:28:33] <richlowe> add -kvd not just -v, when it drops into the debugger the first time, enter: 'moddebug/W 80000000', 'prom_debug/W 1', and then ':c'
[06:28:39] <richlowe> makes module loading and early boot really verbose
[06:28:49] <richlowe> though lots of people have been complaining of early-boot hangs recently
[06:28:56] <richlowe> it's very odd, because nobody has touched anything near there that I know of.
[06:29:23] <blues> i'll do that as soon as it finishes this boot
[06:29:26] <blues> or hangs, rather.
[06:30:22] <blues> last line w/ just -v is : ehci1 is /pci@0,0/pci1458,5006@1d,7
[06:30:37] <blues> rebooting now to issue the debug commands you requested
[06:33:34] <blues> ok
[06:33:45] <blues> last line i get before hang is installing uhci, module id 94
[06:34:46] *** radsy has quit IRC
[06:36:13] <richlowe> sigh, ok. That's familiar to me, but I don't know of any way to workaround it (or what's actually wrong, to fix it)
[06:36:31] <blues> bleh
[06:37:20] <blues> i've googled a bit before venturing here... i've tried some random stuff (went from AHCI to IDE mode on drives, disabled legacy usb support, disabled all integrated devices i could)
[06:37:33] <blues> nothing mattered.
[06:39:51] <richlowe> only likely workaround would be disabling uhci, which would probably lose most USB
[06:40:19] <blues> honestly wouldn't be a huge deal for this application except that all i've got to interface with is a usb keyboard
[06:40:37] <blues> not sure bios will allow me to totally disable uhci either
[06:41:38] <richlowe> you could boot -Bdisable-uhci=true
[06:41:41] <richlowe> if you didn't need keyboard/mouse
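On the install media this means editing the GRUB entry (press 'e' on the entry, then 'e' on the kernel$ line) and appending the flags; a sketch only, the exact kernel$ line varies per entry:

    kernel$ /platform/i86pc/kernel/$ISADIR/unix -v -kd                   # richlowe's verbose/kmdb boot
    kernel$ /platform/i86pc/kernel/$ISADIR/unix -B disable-uhci=true     # the uhci workaround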
[06:42:25] <blues> i'll try it just for shits and giggles to see if it works
[06:42:41] <blues> not that i can do much at that point
[06:44:27] <richlowe> All the other workarounds i know of for the bug I'm thinking of just cause different problems, later.
[06:45:18] <richlowe> one possible good bit of news is that it's likely someone else who hit it earlier today has easy access to people who'd be able to debug it.
[06:47:15] <blues> is there an older version i can fall back to ?
[06:48:00] <blues> or should i try something else... like solaris express? Really what i'm shooting for here is just to get a home grown san up and going.
[06:49:31] <richlowe> if it's the bug I'm thinking of, S11X would work
[06:49:41] <richlowe> any practical older solaris would behave the same way.
[06:53:02] <blues> hmm
[06:53:10] <blues> i put in that disable command, and i still hang
[06:57:14] *** DanaG has quit IRC
[06:58:27] *** Naresh``` has joined #openindiana
[07:00:44] <blues> oh no wait, i lied.. that does get me past... of course now i'm prompted for keyboard type and all i can do is sit here and grin at it
[07:01:24] *** Naresh``` is now known as Naresh
[07:01:26] *** Naresh has joined #openindiana
[07:02:51] <richlowe> that makes it very likely that S11X would work for you
[07:03:01] <richlowe> though nothing prior, and nothing still open source, would.
[07:03:25] <blues> does this have to do with the chipset i'm using?
[07:03:41] <blues> motherboard's chipset rather
[07:04:22] *** forquare has joined #openindiana
[07:13:56] *** keremet has joined #openindiana
[07:15:11] *** forquare has quit IRC
[07:25:08] *** Crypticf1rtune is now known as Crypticfortune
[07:53:38] <edgars> yo
[07:55:11] *** NIX_Y has quit IRC
[08:28:09] *** EFree has quit IRC
[08:30:57] *** gea has joined #openindiana
[08:33:45] <blues> well, solaris xpress hangs as well
[08:34:27] *** SH0x has quit IRC
[08:38:59] *** SH0x has joined #openindiana
[08:40:38] *** gea has quit IRC
[08:42:22] *** gea has joined #openindiana
[08:45:43] *** keremet has left #openindiana
[09:04:42] *** raichoo has joined #openindiana
[09:14:08] *** gea has quit IRC
[09:15:17] *** lblume1 has quit IRC
[09:16:39] *** lblume has joined #openindiana
[09:19:55] *** |AbsyntH| has joined #openindiana
[09:27:12] *** ivo_ has joined #openindiana
[09:30:10] *** Micr0mega has joined #openindiana
[09:32:05] *** bens1 has joined #openindiana
[09:38:25] *** Botanic has quit IRC
[09:39:25] *** miine has joined #openindiana
[09:41:16] *** Botanic has joined #openindiana
[09:52:59] *** Worsoe has joined #openindiana
[09:53:38] *** freedomrun has joined #openindiana
[09:55:29] *** ChanServ sets mode: +o madwizard
[09:55:54] <madwizard> Coffee
[10:03:38] <konobi> ENOCOFFEE
[10:07:10] <madwizard> Łeeee
[10:07:54] *** ivo_ has quit IRC
[10:32:21] *** mikw has joined #openindiana
[10:33:37] <AlasAway> EWANTLUNCHALREADY
[10:33:42] *** AlasAway is now known as Alasdairrr
[10:34:32] <raichoo> ENEEDSVACATION
[10:34:53] <madwizard> EEEEEEE
[10:34:55] <madwizard> EEEEEEE
[10:34:59] <madwizard> WWwwrwrrraaaaum!
[10:35:04] <madwizard> Just E
[10:52:13] <Alasdairrr> raichoo: i'm with you on the vacation
[10:52:22] <Alasdairrr> the problem is the more vacation i take the more work that piles up at work
[10:54:43] *** GS has joined #openindiana
[10:55:40] <raichoo> Alasdairrr: Welcome to the club :/
[10:57:17] <Alasdairrr> it sucks
[10:57:19] <Alasdairrr> means i can't relax
[10:57:24] <Alasdairrr> i'm on a treadmill, strapped to it!
[10:58:14] <madwizard> Yeah, makes me glad I stopped working on my own and acquired a job at a 17k+ people corporation
[11:17:49] <Alasdairrr> yeah
[11:18:08] <Alasdairrr> one day once EC gets big enough and I have enough henchmen i should be able to take holidays
[11:18:16] <Alasdairrr> but this is probably 3+ years away :P
[11:19:22] <madwizard> Look for admins in other countries. Ie. I'm for hire. :P
[11:22:40] *** GS has quit IRC
[11:23:16] *** GS has joined #openindiana
[11:27:00] *** held has quit IRC
[11:29:53] <Alasdairrr> madwizard: its the management layer i'm going to start needing soon
[11:30:24] <madwizard> :)
[11:30:44] <madwizard> Alasdairrr: I'm only certified at ITILv3 foundation and out of your country, so I don't think it is me :)
[11:30:48] <madwizard> But we can talk :)
[11:31:25] <Alasdairrr> lol
[11:37:35] *** merzo has joined #openindiana
[11:43:18] *** held has joined #openindiana
[11:43:36] <madwizard> Alasdairrr: I can always talk, it's something I never refuse :)
[11:43:43] <madwizard> Coffee too, of course :)
[12:07:57] *** |AbsyntH| has quit IRC
[12:08:14] *** hajma has quit IRC
[12:08:32] *** hajma has joined #openindiana
[12:09:07] *** Naresh` has joined #openindiana
[12:09:21] *** keremet has joined #openindiana
[12:09:24] *** Naresh has quit IRC
[12:11:57] *** GS has quit IRC
[12:12:24] *** GS has joined #openindiana
[12:14:50] *** Alasdairrr is now known as AlasAway
[12:17:19] *** InTheWings has joined #openindiana
[12:54:34] *** cwo has joined #openindiana
[12:55:04] *** cwo has quit IRC
[13:03:17] *** Hyphenex has joined #openindiana
[13:03:43] <Hyphenex> Howdy everyone. I'm loving my OpenSolaris file server atm
[13:03:55] <Hyphenex> but one thing that annoys me is the "special" permissions that show up over samba
[13:04:08] <Hyphenex> is there a way to make it just normal POSIX permissions over Samba?
[13:04:43] *** mikw has quit IRC
[13:17:17] <tsoome> special in what way?
[13:22:55] <McBofh> tomww: re SFEaudacity, you mentioned a workaround for the problems I tweeted - dunno where that workaround is, though - could you fill me in please?
[13:26:58] *** Naresh`` has joined #openindiana
[13:28:19] *** merzo has quit IRC
[13:28:30] *** ChanServ sets mode: +o Triskelios
[13:30:14] *** Naresh` has quit IRC
[13:30:17] *** DrLou has joined #openindiana
[13:30:17] *** ChanServ sets mode: +o DrLou
[13:32:44] *** Naresh`` is now known as Naresh
[13:32:47] *** Naresh has joined #openindiana
[13:33:42] *** Micr0mega is now known as Micr0mega|lunch
[13:42:39] <edgars> http://wiki.openindiana.org/oi/3.+Installing+software+and+package+management
[13:42:42] <edgars> :/
[13:43:23] <edgars> how can i install/search packages? :(
[13:45:32] <jkimball4> pkg install/pkg search
[13:46:52] <edgars> yeah
[13:47:04] <edgars> probles is that i cant find any package :/
[13:47:10] <raichoo> pkg search -r
[13:47:13] <raichoo> for remote searches
[13:47:19] <edgars> aaah
[13:47:23] <raichoo> pkg search searches installed packages
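Putting those together for the package edgars is after (pure-ftpd, mentioned below); plain pkg(1) usage:

    pkg search -r pure-ftpd    # search the configured publishers' repositories
    pkg search pure-ftpd       # search only what is already installed
    pkg install pure-ftpd      # once a publisher that carries it is configured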
[13:47:59] *** Naresh has quit IRC
[13:51:09] <tomww> McBofh: the workaround looks like being in the link you gave in your tweet.
[13:51:49] <tomww> McBofh: besides that, I was not able to completely map your error message to the content in the link you gave
[13:55:54] *** McBofh has quit IRC
[13:56:35] <edgars> grr, nothing with search -r too :/
[13:56:59] <madwizard> I don't like the output format of pkg search
[14:01:04] *** McBofh has joined #openindiana
[14:04:51] <DeanoC> pkg search isn't great tbh
[14:04:55] <DeanoC> what you looking for edgars?
[14:05:07] <DeanoC> and when i say great i mean barely useable ;)
[14:06:44] *** quasi has quit IRC
[14:06:48] *** quasi_ has joined #openindiana
[14:08:32] <edgars> DeanoC: pure-ftpd
[14:11:24] <DeanoC> sfe repo has it http://staticdev.uk.openindiana.org:10002/en/search.shtml?token=ftp&action=Search
[14:11:58] <DeanoC> theres instructions on the wiki on how to add that repo to your ips repo list
[14:13:10] <DeanoC> not the wiki, but the first google hit on how to add it: http://barbz.com.au/blog/?p=84
[14:13:12] <DeanoC> hth
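Adding a publisher boils down to something like the following; the publisher name and origin URL here are only illustrative, use whatever the linked post gives:

    pkg set-publisher -g http://pkg.openindiana.org/sfe/ sfe   # older pkg builds take -O <url> instead of -g
    pkg install pure-ftpd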
[14:14:56] *** Naresh has joined #openindiana
[14:18:34] *** Micr0mega|lunch is now known as Micr0mega
[14:18:39] <Hyphenex> tsoome: as in drwx------+ 3 scott staff 3 2011-07-05 21:09 GlassTheme (note the +)
[14:19:02] <tsoome> :P
[14:21:09] *** quasi_ has quit IRC
[14:21:23] <edgars> hmmmm
[14:21:31] <tsoome> windows has no clue about posix permissions, its using only acl's. man smb.conf and search for acl
[14:22:05] *** quasi has joined #openindiana
[14:22:58] <Hyphenex> No manual entry for smb.conf. :P Can I turn off ACL? (and have SMB give access based on POSIX permissions?)
[14:23:18] <tsoome> no manual? are you sure you are using samba?
[14:23:29] <tsoome> or are you using in kernel smb service?
[14:23:41] <Hyphenex> kernel I believe
[14:23:46] <tsoome> kernel smb does use only acl.
[14:23:53] <edgars> wtf
[14:24:23] <tsoome> so with kernel smb service, there is nothing to configure, as its the only way to manage permissions.
[14:24:30] <edgars> http://pastebin.com/Q5aviumG
[14:24:36] <Hyphenex> tsoome: ahk, I'm thinking if I touch a file, then I won't be able to delete it over samba (or if it gets created over NFS)
[14:25:11] <tsoome> set correct default acl's. so if the file is created, it will inherit the correct permissions
[14:25:26] <Hyphenex> oh cool.. Any tips on setting the default acl?
[14:25:50] *** smrt has quit IRC
[14:26:06] <tsoome> http://mattwilson.org/blog/solaris/solaris-cifs-server-and-zfs-acls-the-problem/
[14:26:08] *** smrt has joined #openindiana
[14:26:16] <tsoome> read the entry and second comment
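For reference, the inheritable-ACL setup discussed in that post comes down to something like this; the dataset, path and exact ACEs are illustrative:

    zfs set aclinherit=passthrough tank/share
    chmod A=owner@:full_set:fd:allow,group@:modify_set:fd:allow,everyone@:read_set:fd:allow /tank/share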
[14:27:13] <Hyphenex> oh thanks so much :)
[14:28:19] <tsoome> the learning curve is a bit steep, the mechanism is quite complicated unfortunately...
[14:31:48] <tsoome> also you may wanna read the windows interoperability guide from solaris 11 docs in oracle.com documentation site, it can help you quite a bit.
[14:33:15] *** quasi has quit IRC
[14:33:15] *** quasi has joined #openindiana
[14:33:18] <edgars> DeanoC: any ideas? :)
[14:36:13] <DeanoC> no idea, sounds like the package is bad, using a user that doesn't exist; u could try creating viskov and then try again?
[14:36:27] <DeanoC> but just a guess, haven't been involved with the actual packaging side of things
[14:36:54] <tsoome> that error message does hint the package is indeed buggy....
[14:39:36] *** kart_ has quit IRC
[14:40:45] <edgars> and what about beadm thing?
[14:41:47] <tsoome> remove that bad BE
[14:42:25] *** kart_ has joined #openindiana
[14:43:18] <edgars> stupid bug, with that viskov
[14:44:09] <tsoome> file the bugreport and ask for fix:P
[14:44:36] *** mikw has joined #openindiana
[14:44:50] *** DontKnwMuch has joined #openindiana
[14:46:27] <DontKnwMuch> looking for suggestions ;) - 9x 3TB drives are available, I need it for NFS/iSCSI and SMB share at the same time. What should I do, raidz or mirror?
[14:47:30] <edgars> tsoome how/where? :)
[14:48:02] *** CoilDomain has joined #openindiana
[14:48:05] <tsoome> share types are not important for that decision, you need to know what kind of IO will happen - random versus streaming and how are reads and writes related
[14:48:41] <edgars> DontKnwMuch: raid10 :>
[14:49:35] <tsoome> edgars: good question - check if the package description has hints or SFE repo.... never needed to find that kind of information myself...
[14:50:14] *** |AbsyntH| has joined #openindiana
[14:50:23] *** POloser has left #openindiana
[14:50:25] <DontKnwMuch> random it is, no streaming at all. ESXI datastore and share for a bunch of users with files
[14:50:27] <tsoome> http://pkgbuild.sourceforge.net/spec-files-extra/
[14:50:32] *** dekar has joined #openindiana
[14:52:35] <DontKnwMuch> "raid 10" is made with: zpool create tank mirror c1d0 c2d0 mirror c3d0 c4d0... etc right?
[14:52:42] <tsoome> if you have over 50% writes, raid10....
[14:52:43] <tsoome> yes
[14:53:41] *** ThomasB2k has joined #openindiana
[14:53:48] <DontKnwMuch> I also want to sleep well, you know, drive failures and stuff, is 3x 3 drive raidz better in such a case as 4x2x2 drives?
[14:55:16] <tsoome> you can have 3-way mirrors as well, its all about price versus perfomance versus protection
[14:57:43] *** Whoopsie has joined #openindiana
[14:57:44] *** ChanServ sets mode: +v Whoopsie
[14:59:07] <tsoome> if you have a single storage host, thats an SPOF already. sure, there is a chance you can lose both halves of the mirror, but the big question is, will the 3+3+3 raidz serve the performance need.
[14:59:38] <tsoome> and in that case, you wanna still have 1 more disk for spare
[15:04:31] *** ThomasB2k has quit IRC
[15:05:16] *** quasi has quit IRC
[15:05:23] *** quasi has joined #openindiana
[15:05:50] *** quasi has quit IRC
[15:05:51] *** quasi has joined #openindiana
[15:06:20] <DontKnwMuch> is three way mirror way slower or does it not matter so much?
[15:08:06] <DontKnwMuch> tsoome: thanks for you thoughts ... and now a question: what would you pick in this case? ;)
[15:09:47] <tsoome> without knowing the IO requirements - probably raid10 (and have 9th disk as hotspare).
[15:10:18] <tsoome> altho 3TB disks indicate there cant be much of an IO performance requirement anyhow:P
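That layout, spelled out as a sketch with illustrative device names:

    zpool create tank mirror c1d0 c2d0 mirror c3d0 c4d0 \
                      mirror c5d0 c6d0 mirror c7d0 c8d0 \
                      spare c9d0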
[15:10:44] <sickness> I think raidz3 could be more elegant ;)
[15:11:01] <tsoome> well, for protection - sure.
[15:11:12] <DontKnwMuch> and also is a mirror rebuild faster than raidz rebuild?
[15:11:28] <DontKnwMuch> performance 1xglan is enough
[15:11:45] <DontKnwMuch> but iops I do not know for real
[15:12:02] *** wonslung has joined #openindiana
[15:12:12] <tsoome> 1Gb throughput is one thing, but IO count is another.
[15:12:14] <tsoome> :)
[15:13:03] <DontKnwMuch> I think raidz3 would be slow as molasses in this case.. hm... raid 10 sounds better and better
[15:13:15] <DontKnwMuch> ;)
[15:13:21] <tsoome> also, as you wrote you gonna have nfs and iscsi as well, those default to sync mode anyhow, and to perform, would probably need the slog or sync=disabled setup
[15:13:37] <lblume> I forget all the time, zfs can doe raid1+0 or 0+1? Or both?
[15:13:39] <DontKnwMuch> I have a mirror SSD for slog
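The two options tsoome mentions, sketched with illustrative device and dataset names:

    zpool add tank log mirror c10d0 c11d0   # mirrored SSD slog for the sync-heavy NFS/iSCSI traffic
    zfs set sync=disabled tank/vmstore      # alternative; risks losing the last few seconds of sync writes on power loss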
[15:14:14] <tsoome> only X0
[15:14:32] <tsoome> whether the X is 1 or 5 or 6 or ... :D
[15:14:51] <lblume> right, because of the non-nesting of vdevs
[15:15:49] *** axisys has quit IRC
[15:15:55] <lblume> So in practice, when one element of the mirror dies, it is a whole stripe that needs be reconstructed, that spans more than one drive.
[15:15:59] <edgars> DontKnwMuch: raid10 pwnz ;) some days ago i had a 250MB/s
[15:16:27] <edgars> wihout bbu even :>
[15:16:51] <DontKnwMuch> edgars: while scrubbing? ;)
[15:17:32] <edgars> nop, leaching pron :)
[15:17:39] <DontKnwMuch> lol
[15:17:41] <DontKnwMuch> :)
[15:17:47] <sickness> pr0n++
[15:17:48] <sickness> ;P
[15:18:19] <edgars> sort of lemonparty :D
[15:18:55] <madwizard> Coffee
[15:19:12] <sickness> ...
[15:20:34] <tsoome> if you have enough spindles, any raid setup can give you very nice throughput - it just depends on the type of load.
[15:21:03] <edgars> Darkproger: what drives??
[15:22:18] <DontKnwMuch> are there any prefered SSD drives knows to be ok and not to cause problems for slog?
[15:22:30] <DontKnwMuch> knows=known
[15:22:32] <lblume> madwizard: I had Starbucs today, but only for the cheesecake on the side :-P
[15:23:23] <edgars> DontKnwMuch: any with sandforce chip
[15:24:03] <DontKnwMuch> ok. thanks
[15:25:31] <edgars> maybe some crucial
[15:25:39] <edgars> or owc
[15:25:42] <DontKnwMuch> corsair f40?
[15:26:24] <edgars> looks good
[15:27:01] <edgars> not a newes one, but will be good :)
[15:27:33] <DontKnwMuch> I do not need much capacity, but am afraid what would happen if the power suddenly gets away ;) Will a pool recover or I am fuxated in such a case?
[15:27:58] <DontKnwMuch> as SSD will for sure not write it all at that moment
[15:29:07] <edgars> then you need a ups or raid card with bbu :)
[15:29:23] <dkeav> ups ftw
[15:29:28] <DontKnwMuch> if the power supply goes bye-bye no ups will help ;)
[15:29:41] <edgars> bbu is still alive ;)
[15:29:45] <dkeav> redundant psu and ups ftw!
[15:29:52] <edgars> and raid with bbu :D
[15:29:58] <dkeav> heh
[15:30:06] <dkeav> fine!
[15:30:52] <DontKnwMuch> I just think that raw simple controller is nice for zfs... I am just dumping the bbu raid5 I have (areca)... hm... let me think again
[15:31:06] <DontKnwMuch> redundant psu and ups I have
[15:31:13] *** TPickle has joined #openindiana
[15:32:07] <dkeav> then i wouldn't worry about it too much
[15:32:58] <dkeav> if something catastrophic enough happens to take that down, not having that last 20sec of data when you recover the pool is the least of your worries
[15:33:51] <DontKnwMuch> I just do not know if it will recover at all, does anyone have any experience for such a case?
[15:34:10] <dkeav> what case?
[15:34:48] <dkeav> losing a slog?
[15:35:02] <DontKnwMuch> yes
[15:35:55] <dkeav> if its just your ssd failing then some data may not get written to the pool though if the system stays stable, it will just be marked offline and the system should revert to using the software intent log
[15:36:13] <dkeav> providing you are using a new enough zpool/zfs version
[15:36:20] *** merzo has joined #openindiana
[15:36:29] <edgars> which is a latest version?
[15:36:37] <dkeav> 28?
[15:36:49] <edgars> okey
[15:36:57] <edgars> then no worries :)
[15:37:04] <tsoome> 19 Log device removal
[15:37:24] <DontKnwMuch> ah.. 28 it is. so I theoretically will recover
[15:37:25] <tsoome> so at least version 19 is good to go
[15:37:29] <dkeav> so yea anything newer than 19
[15:37:33] <DontKnwMuch> I = it
[15:38:12] <dkeav> theoretically you won't even notice a disruption, your writes will just slow down and you will be warned about your degraded zil device
[15:38:15] <dkeav> no biggy
[15:39:04] <DontKnwMuch> practice and theory are often far apart ;)
[15:39:22] <DontKnwMuch> slog does help only for sync writes, right?
[15:39:27] <tsoome> yes
[15:39:41] <dkeav> yep
[15:41:16] <DontKnwMuch> ok, then I will try with 8 drive raid10, no slog, and if needed, add it later.. hopefully it will be better than current 15 drive raid6 ... at least the same I hope
[15:41:50] <dkeav> it should be fairly fast on writes, because of the stripes
[15:42:25] <tsoome> if you have 15 disks in single raidz2......
[15:42:35] <tsoome> :P
[15:43:09] <DontKnwMuch> tsoome: I know ... it is not raidz2 it is areca raid6 and it is a horror ... I had two drives fail while rebuilding... nerve-wracking thing
[15:43:36] <DontKnwMuch> therefore I want something more "resilient" to drive failures.. fewer drives, higher redundancy
[15:44:24] <DontKnwMuch> but obviously 7.2k RPM 3TB drives are not considered fast around here ;)
[15:44:34] <tsoome> dont build that wide raids...
[15:44:56] <DontKnwMuch> It is a 3 year old thing... way too waide
[15:45:00] <DontKnwMuch> wide
[15:45:13] <dkeav> aye you could still use raid5/z levels, but break it up
[15:45:32] <dkeav> 15 is rather large for a single array
[15:45:33] <lblume> Not the HBA killing the drives? Two drives failing in a raidz2 is no fun, either....
[15:46:25] <DontKnwMuch> after I transfer the data to the new one, I will wipe the thing,and do convert to zfs with smaller vdevs
[15:47:30] <dkeav> like 3x5 drive arrays
[15:48:42] <tsoome> after seeing a dead raid1 from LSI, sorry, i dont really have much faith in those raid cards:P
[15:49:28] <DontKnwMuch> I just have a jbod controllers now, IT mode, raw drives...
[15:49:43] <dkeav> when i still used raid cards i had good luck with 3ware, but that is about it
[15:49:55] <DontKnwMuch> statistically, is a 3x 3-drive raidz more "redundant" than an 8-drive raid10, or what should I think now... it is hard to decide
[15:49:56] <dkeav> but now lsi owns them so meh
[15:50:15] <tsoome> 2 disks in raid1 for boot disk, 1 disk did die and blocked whole mirror. pulling out the dead disk did wake up the mirror and the system was able to boot again...
[15:50:41] <dkeav> but it never should have gone down in the first place is the whole idea
[15:51:09] <DontKnwMuch> I did some testing and pulled some drives out of raidz for fun, and nothing happened, it just worked :)
[15:51:39] <dkeav> nice huh
[15:51:52] <DontKnwMuch> very :)
[15:51:54] <tsoome> pullout isnt really showing much
[15:52:16] <DontKnwMuch> and rebuild after putting it back was fast, only files changed in the meantime were resilvered
[15:52:48] <DontKnwMuch> in raid5 two hours for sure, in this case only a minute or two
[15:53:16] <DontKnwMuch> so if I pull the wrong drive, this is not such a disaster... ;)
[15:54:57] <tsoome> .oO and dont forget cfgadm -c unconfigure before the pull:P
[15:55:51] <DontKnwMuch> tsoome: I learned this lesson too ;) but I just tried to see what happened, and it did well... perhaps I was lucky
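The orderly version of that pull, sketched with an illustrative SATA attachment point and device name:

    cfgadm -al                                   # list attachment points to find the disk
    zpool offline tank c3t3d0                    # optionally tell ZFS first
    cfgadm -c unconfigure sata0/3::dsk/c3t3d0    # then it is safe to pull the drive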
[15:57:43] <DontKnwMuch> http://www.stringliterals.com/?p=161
[15:57:56] <DontKnwMuch> This does not look as a lot of less performance... hm...
[15:58:47] <tsoome> its only one aspect of the performance
[15:59:38] <tsoome> i did get my hands on 14x 146GB 10KRPM disks. all 14 disks in a single raidz was the best setup for streaming write.
[16:01:46] <tsoome> you push loads of data, filling the stripe, and raidz will write the full stripe down. but once you are modifying only some blocks from that stripe, the throughput will go down really quickly, as you need to get the whole stripe into ram to create the checksum, and it will still write the whole stripe, despite you maybe having modified only 1 byte...
[16:02:42] <tsoome> mkfile/dd/cp test is only for streaming writes, if you have that kind of workload, fine.
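For example, the usual streaming-only tests look like this (illustrative paths; they say little about the random-IO behaviour just described):

    mkfile 8g /tank/testfile
    dd if=/dev/zero of=/tank/ddtest bs=1024k count=8192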
[16:03:08] *** Worsoe has quit IRC
[16:14:51] <DontKnwMuch> nope, it is much more random thing, thanks for explanation
[16:17:36] <blues> hey guys, i was around early am with a problem, wanted to see if i could get some help now since there's more traffic. Tried installing latest OI on a p55 mobo w/ i5 proc. boot hangs though, and it appears to be a USB issue. Only way i can get OI (or solaris express) past it is to pass -B disable-uhci=true.
[16:18:54] <blues> guy was trying to help me last night but he wasn't aware of a feasible work-around. he thought trying Solaris Express might do the trick, but i have same issue with it.
[16:24:52] *** Bahman has joined #openindiana
[16:24:58] <Bahman> Hi all!
[16:25:51] <dkeav> blues: but it works if you disable uhci?
[16:25:58] <edgars> huh
[16:25:59] <tsoome> blues: have you checked for bios updates?
[16:26:15] <edgars> nice meeting with boss :>
[16:26:16] <blues> tsoome: i updated to latest bios, yes.
[16:26:39] <blues> dkeav: i can get to a keyboard select screen, but no way to go forward since all i've got are usb devices
[16:26:41] <dkeav> frankly how many usb 1.0 devices do you use that you need it?
[16:26:58] <OniAtWork2> keyboard and mouse are pretty common, i think
[16:27:10] <tsoome> :D
[16:27:15] <blues> it wouldn't be an issue except for keyboard and mouse
[16:27:32] <blues> and really won't be an issue there after setup..this will be a headless napp-it appliance
[16:27:32] <dkeav> god forbid manufacturers use ehci
[16:27:34] *** hsp has joined #openindiana
[16:27:35] <dkeav> bah!
[16:28:18] <OniAtWork2> even my sun branded keyboard is USB now
[16:28:30] <longcat> personally i like $3 usb devices
[16:28:49] <blues> If i turn off legacy USB support in bios and usb storage support, it looks like i lose the ability to use keyboard prior to boot into OS, but i am able to get in. However, a few seconds afterwards, i lose my keyboard and system seems to "lock"
[16:29:00] *** myrkraverk has joined #openindiana
[16:29:00] *** myrkraverk has joined #openindiana
[16:29:43] <dkeav> blues: does that mobo have any usb3 slots?
[16:30:27] <blues> no, this is a p55 and the particular model (gigabyte GA-P55-UD4P) didn't add in a usb3 controller
[16:30:38] <dkeav> k
[16:30:50] <dkeav> thought maybe it was an xhci controller emulating uhci/ohci
[16:30:56] <OniAtWork2> oh! that reminds me, I need to charge my phone
[16:32:41] <blues> The keyboard i'm using is a rather cheap old one. If i pick up something more recent / mainstream (ms / logitech wireless desktop something or other) would i have better luck?
[16:34:24] <OniAtWork2> probably not. HID devices are generally pretty standard. It sounds more like a mainboard issue
[16:34:50] <OniAtWork2> well, firmware issue anyway
[16:35:27] <dkeav> aye sounds like buggy firmware
[16:36:14] <longcat> sounds like you expect more help from a motherboard mfgr than trying to get a bug fix in oi ;P
[16:38:54] <blues> learning to hate this mobo
[16:39:19] *** datadigger has quit IRC
[16:39:27] <OniAtWork2> do we have a Illumos based disk yet? maybe there has been some improvements?
[16:39:34] <longcat> maybe there's already a workaround in linux that you can look through and patch yourself
[16:40:04] *** datadigger has joined #openindiana
[16:40:18] <longcat> or you can divine whether or not it's an os bug or firmware bug from others' reports
[16:41:11] <OniAtWork2> even if it is a firmware bug, if the kernel can be made more resilient, that's probably not bad.
[16:41:17] <longcat> it's a lot of effort for a motherboard that i cant purchase on newegg anymore
[16:41:24] <blues> i found one person with same issue, different board, but switching to solx fixed him
[16:42:14] *** raichoo has quit IRC
[16:43:10] <edgars> http://www.youtube.com/watch?v=s3N9pYZSIpI
[16:43:12] <edgars> :>
[16:45:03] <blues> realistically, for a napp realistically, for a napp-it appliance, what do i need proc/mem wise?. i will be using quad gigabit n
[16:45:04] <blues> what do i need in terms of proc and mem for a napp- it appliance that will serve up vms to esxi for a lab?
[16:45:44] <blues> blah, damn vnc.., sorry for double post
[16:47:46] <dkeav> as much mem as you can stuff in it
[16:48:33] <blues> napp-it is mem hungry?
[16:49:07] <dkeav> no
[16:49:34] *** merzo has quit IRC
[16:49:35] <dkeav> oh vm's TO esxi
[16:49:40] <blues> yeah
[16:49:53] <dkeav> sorry, thought you were going to run esxi on the ....
[16:50:00] <dkeav> still, as much memory as you can stuff
[16:50:00] <dkeav> :D
[16:50:08] <dkeav> its cheap and zfs will use it
[16:50:19] <blues> the esxi box will have 16 gigs starting out.... that box is sitting there grinning waiting for an nfs share to show up
[16:50:41] <blues> is 8 GB overkill for a lab?
[16:50:55] <blues> small lab i should say.
[16:51:05] <dkeav> depends on what they are doing in the lab, and you said its going to be over wireless?
[16:51:33] <dkeav> oh gigabit, i saw n sorry
[16:51:59] <blues> oh god no. the napp-it box is gonna be bonded to the esxi box over a dedicated switch that supports bonding. will have 4x 1GB links bonded together between the two
[16:52:26] <dkeav> in that case 8gb would not be overkill at all
[16:52:54] <blues> management is on a separate lan segment, and then i'm also using onboard lan port to offer 1 GB access to media shares to general network
[16:53:10] <blues> So if i want to get this up and going in short order, my only option is to replace the motherobard.
[16:53:13] <blues> *motherboard
[16:53:16] <dkeav> you may want to set aside for some ssd's for caches
[16:53:19] *** DeanoC has quit IRC
[16:53:42] *** DeanoC has joined #openindiana
[16:53:42] *** ChanServ sets mode: +o DeanoC
[16:53:43] <tsoome> uhm, how many connections will be created between esx and storage?:P
[16:53:54] <dkeav> well not really, you could just install in another box and move the OS disk to the server with uhci disabled
[16:54:11] <dkeav> i mean once installed you don't really need a kb/mouse unless something goes quite wrong
[16:54:22] <dkeav> tsoome: shhhh
[16:54:28] <tsoome> :P
[16:54:37] <dkeav> he missed that part of networking class
[16:54:38] <blues> right now there's 1x intel 80 gb ssd , 4x 2 TB western digital green drives in the system. I'm installing to the 80 gig SSD, was hoping to use the rest of it for caching
[16:54:38] <longcat> what a fail job that will be ...server's down, gotta boot up the surrogate to debug it
[16:55:02] <dkeav> longcat: yea but its a lab
[16:55:07] <blues> the napp-it box is gonna be bonded to the esxi box over a dedicated switch that supports bonding. will have 4x 1GB links bonded together between the two
[16:55:34] <dkeav> not like its mission critical
[16:55:38] <tsoome> blues: thats nice waste of 4x network adapters:P
[16:56:19] <tsoome> meh, my math sucks tho:P waste of 6 adapters in total:P
[16:56:34] <blues> even though its a lab, we have standards :-p I'll just replace the motherboard, assuming i can find someone around here that is carrying a 1156 socket board
[16:56:47] <blues> tsoome: why a waste?
[16:57:35] <dkeav> the esxi box will only be making one connection, eg using one link
[16:58:27] <blues> i'm bonding the 4 together to give it 4Gbs of bandwidth rather than 1.
[16:58:37] <dkeav> ummm k
[16:58:55] *** mikw has quit IRC
[16:59:04] <blues> its a fairly common setup from what i've read. I don't have 10gigE / FC at my disposal
[16:59:06] *** AlasAway is now known as Alasdairrr
[16:59:27] <blues> but if i'm missing something obvious clue me in, its the first time i've set this type of environment up
[17:00:11] <tsoome> bonding 4x1Gb interfaces will give you 4x 1Gb pipe, not 1 4Gb one.
[17:00:19] *** Naresh has quit IRC
[17:00:22] <dkeav> ^^ that
[17:00:34] <dkeav> which is awesome for 4+ connections
[17:00:38] <dkeav> kinda pointless for 1
[17:01:15] <blues> wow, then i'm a dumbass
[17:01:17] <blues> good to know.
[17:01:21] <dkeav> now if you had a powerful enough box you could always do a all in one, napp-it esxi box and use the virtual 10gbe driver
[17:01:30] <tsoome> not dumbass, but many people dont realize that
[17:01:52] <blues> dkeav: hardware i've got to use doesn't support vt-d
[17:02:02] <dkeav> bah
[17:02:17] <blues> if i understand correctly with zfs, you've gotta pass the controller through to the vm in order to really do it right
[17:02:29] <dkeav> yep
[17:03:10] *** gea has joined #openindiana
[17:05:16] <dkeav> such as the case may be, since you are limited throughput-wise by your network, i guess you don't have to get too carried away on your hardware to max it out
[17:05:48] <dkeav> it is fairly easy to saturate a single gb connection over nfs
[17:05:56] <blues> yeah... i'm gonna have issues
[17:07:30] <blues> i've got to go back and read about 802.3ad again.... i totally missed that it was doing 4x1GB not 4GB/sec on 4 links
[17:08:32] <dkeav> http://en.wikipedia.org/wiki/Link_aggregation#Use_on_network_interface_cards
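On the OI side that aggregation is built with dladm; a sketch with illustrative link names, and as noted above each TCP flow still rides a single 1Gb member link:

    dladm create-aggr -L active -l e1000g0 -l e1000g1 -l e1000g2 -l e1000g3 aggr0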
[17:11:40] *** viridari_ has joined #openindiana
[17:12:53] *** viridari has quit IRC
[17:13:18] *** raichoo has joined #openindiana
[17:14:14] <blues> so under the 4x scenario i described, if i had 4 esxi hosts instead of just 1.. each one could have a 1 Gb/sec link to the NFS store, but each link would be limited to a maximum data rate of 1Gbps
[17:14:39] <dkeav> pretty much yea
[17:15:16] <blues> so the only way to truly raise my point to point throughput will be to go to 10gigE / FC
[17:15:28] <dkeav> for single links, yes
[17:16:46] <blues> well hey, at least the usb issue isn't my biggest problem now
[17:17:42] <dkeav> nope
[17:18:01] <dkeav> fun trying to do big league computing with consumer grade equipment huh
[17:18:17] <blues> always
[17:18:27] <blues> small-biz ftw
[17:27:27] *** keremet has left #openindiana
[17:31:38] <dkeav> you could always go the ebay route and pick up a couple refurb 10gbe cards for about 150$USD a piece and link the servers
[17:32:04] *** axisys has joined #openindiana
[17:33:18] <jkimball4> only taking a full hour to get vbox upgraded sigh
[17:35:14] <jkimball4> fsflush is taking a full second consistently... good lord
[17:36:25] <jkimball4> any ideas what can cause latency here? nothing really running except this install
[17:37:16] <jkimball4> or is it supposed to flush one per second?
[17:38:03] <tsoome> why its doing fsflush in first place?
[17:38:14] <jkimball4> no idea
[17:38:44] <blues> i've never touched 10gbe equipment
[17:38:48] <tsoome> zfs storage?
[17:39:24] <jkimball4> it's zfs..
[17:39:45] <jkimball4> this looks bad too => ZFS ZIL writer I/O 17 2.0 sec 30.0 sec 3.2 %
[17:39:47] <lblume> tsoome: About what you said earlier, isn't a switch with LACP able to spread packets on all interfaces? Or is that Solaris that does not support that?
[17:40:40] <tsoome> LACP does spread connections afaik. not packets.
[17:42:19] <tsoome> altho, well yes, linux has round robin mode as well
[17:43:16] <tsoome> but, even then the packet will still go with 1Gb/s, not 4...
[17:43:59] <Whoopsie> round robin for outbound packets, the switch will still peg a client to a specific port for incoming packets
[17:44:28] <tsoome> yep
[17:45:07] <Whoopsie> frankly, link aggregation is total bunk unless you have at least three times as many clients as you have links
[17:46:17] <tsoome> thats what i have heard, yep.
[17:46:26] <lblume> Ok, got it. I knew it was not possible on Solaris, just wasn't sure exactly at which level. I had plans for an NFS/Samba server where it would have fitted nicely :-)
[17:46:32] <Whoopsie> Invest in FC or Infiniband ;-)
[17:47:08] <lblume> Sure, for point to point :-)
[17:47:32] <tsoome> if you have loads of nfs/cifs clients, then it can be nice, but then again, 10Gb is still better:P
[17:47:35] <dkeav> infiniband can be done rather cheaply these days
[17:47:41] <dkeav> point to point
[17:47:55] <Whoopsie> If you're doing FC point to point, you're doing it wrong
[17:48:18] <dkeav> esxi->san ??
[17:49:38] <lblume> tsoome: It is, but then you need to change infrastructure stuff. Easier to justify even 50% more bandwidth on an x2100 by just plugging in 2 more cables than to ask for a 10Gb switch and new cables
[17:50:11] <tsoome> true that
[17:51:00] * dkeav patiently waits for 10gbe to become much much cheaper
[17:51:06] * dkeav taps fingers on desk
[17:51:20] <raichoo> Wait faster!
[17:51:20] <lblume> Aggregation is cheap and easy. Nice enough for many cases :-)
[17:52:10] *** Micr0mega has left #openindiana
[17:55:59] <quasi> lblume: yeah, dual 10gbe would be nice ;)
[17:57:31] *** held has quit IRC
[17:58:33] <lblume> Then tsoome will tell you to just buy that 100Gbe NIC instead, much better ;-)
[17:59:00] <tsoome> not really - Im not aware of any such options atm;)
[17:59:08] <tsoome> mayve after 10 years;)
[17:59:12] <tsoome> maybe*
[18:00:48] <dkeav> i can't wait that long!
[18:00:59] <dkeav> damn you moore's law!
[18:03:02] <lblume> dkeav: so many things to download?
[18:03:36] <dkeav> no not really, but when i want to stream an mp3, i want to stream it really fricken fast man
[18:04:07] <lblume> That is understandable.
[18:04:37] <lblume> And 320Kbps MP3s, too.
[18:04:41] <tsoome> well, there is little point to stream it faster than you can listen:P
[18:05:17] <dkeav> you sir must not be a fan of alvin and the chipmunks
[18:05:36] <lblume> Definitely not very much at all.
[18:05:49] <lblume> tsoome: I want ear upgrades too.
[18:06:05] <DeanoC> and alvin must use uncompressed FLAC at 32 bit 1MHz 24 channel surround sound for its true glory ;)
[18:14:37] *** gea has quit IRC
[18:16:16] <taemun> psh that's only 96MB/s uncompressed anyway...
[18:16:34] *** Alasdairrr is now known as AlasAway
[18:26:38] <DontKnwMuch> anyone virtualizing OI on esxi here, just interested why and how is it performing
[18:35:07] *** freedomrun has quit IRC
[18:39:08] *** |AbsyntH| has quit IRC
[18:43:14] *** gea has joined #openindiana
[18:49:08] <DontKnwMuch> how long does zpool import take, does it depend on size?
[18:50:18] <longcat> depends what was going on
[18:52:13] *** Whoopsie has quit IRC
[18:55:44] *** merzo has joined #openindiana
[19:01:31] *** bens1 has quit IRC
[19:05:35] *** sergefonville has joined #openindiana
[19:05:45] <sergefonville> good evening
[19:12:13] *** myrkraverk has quit IRC
[19:13:17] *** GS has quit IRC
[19:17:45] *** kart_ has quit IRC
[19:19:39] *** Botanic has quit IRC
[19:25:22] *** held has joined #openindiana
[19:31:34] <sergefonville> anyone know the difference between -C and -F for spawn-fcgi?
[19:41:26] *** freedomrun has joined #openindiana
[19:51:02] *** Botanic has joined #openindiana
[19:54:46] *** Botanic has quit IRC
[19:55:38] <viridari_> DontKnwMuch: I have it running on esxi so it works. but I haven't done much with it there yet to speak to the relative performance of it.
[19:59:04] *** Naresh has joined #openindiana
[19:59:23] *** merzo has quit IRC
[20:08:13] *** Botanic has joined #openindiana
[20:08:13] *** Botanic has joined #openindiana
[20:14:06] *** zhanglu9 has left #openindiana
[20:30:25] *** kforbz has joined #openindiana
[20:34:30] *** hsp has quit IRC
[20:41:07] *** hsp has joined #openindiana
[20:49:19] *** hajma has quit IRC
[20:52:35] *** hajma has joined #openindiana
[21:04:06] *** AlasAway is now known as Alasdairrr
[21:04:56] <DontKnwMuch> Can anyone check the last post in this link, is this true (pool dead is slog dies): http://www.nerdblog.com/2010/03/zfs-nas-followup-ssd-is-amazing.html
[21:04:59] *** Oriona has quit IRC
[21:05:02] *** Oriona has joined #openindiana
[21:06:11] *** McBofh has quit IRC
[21:06:58] <bdha> DontKnwMuch: That is old.
[21:07:13] <bdha> DontKnwMuch: Remotely recent versions of Solaris 10 and beyond have import -m.
[21:07:48] <bdha> You need to be on zpool v19 or newer.
[21:08:24] <bdha> And you can still recover on older zpools: http://mirrorshades.net/post/1485951163
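In practice that recovery is just (pool and device names illustrative, pool version 19 or newer assumed):

    zpool import -m tank     # import despite the missing log device
    zpool remove tank c5d0   # then drop the dead slog from the pool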
[21:09:20] <DontKnwMuch> oh, thank you and great link, thanks!
[21:09:35] <bdha> np. That was a shitty day for me. Happy to share. :P
[21:10:38] <dkeav> that would ruin your day
[21:10:53] <bdha> Yup. But recovered. The fix was painfully trivial, too.
[21:10:56] <bdha> s/fix/workaround/
[21:11:01] <DontKnwMuch> :)
[21:15:24] <sergefonville> anyone know a resource where to read about all the PHP_FCGI* variables?
[21:20:46] *** fossala has joined #openindiana
[21:24:29] <fossala> I'm a FreeBSD user (for my server and desktop (OpenBSD on my router)). I got into computing about 2 years ago and was keeping my eye on OpenSolaris, then Oracle took over and I just carried on with FreeBSD. Now OpenIndiana has come up and taken up what OpenSolaris was, so I'm looking at it again. All my server does is zfs file serving with 2 jails, one mail one web. Would there be any advantage to using OpenIndiana?
[21:25:28] <fossala> BTW I don't want to start some flame war.
[21:26:00] <dkeav> if you are happy with what you have now, then not really
[21:26:15] <dkeav> as a freebsd and opensol/openind user
[21:26:26] <Woodstock> i doubt that it would make much sense at this time, unless you want to participate in the development of oi
[21:26:42] <fossala> OK then thanks, I will stick with FreeBSD then. Good luck with the project.
[21:27:02] *** mikw has joined #openindiana
[21:27:03] <tsoome> well, the tasks you do can be done with basically any os in the world.
[21:27:23] <dkeav> aye, so stick with what you know
[21:27:35] <dkeav> especially if it is already set up
[21:27:49] <tsoome> unless you wanna learn;)
[21:27:56] <dkeav> true
[21:28:04] <tsoome> which is always nice reason to do things
[21:28:06] <fossala> While I'm here: if Oracle doesn't release the source for ZFS v30, why don't FreeBSD and OI work on development together?
[21:28:24] <fossala> I love playing but also got my degree to work on.
[21:28:28] <bdha> fossala: They do.
[21:28:34] <bdha> There is a ZFS Working Group.
[21:28:35] <Woodstock> i think they do, there was a zfs working group formed a while ago
[21:28:45] <bdha> illumos, FreeBSD, Delphix, Joyent, Oracle and perhaps others are on it.
[21:28:51] <fossala> I'm try and find a mailing list
[21:28:56] <bdha> The ZFS-WG is private.
[21:29:07] <fossala> s/I'm/I'll
[21:29:23] <tsoome> the worst thing to happen is zfs splits (imo), thats in noones interest....
[21:30:36] <nahamu> http://www.youtube.com/watch?v=Gle4CU_lnls at about 5 minutes in
[21:30:48] *** freedomrun has quit IRC
[21:30:51] <fossala> No flash.
[21:30:53] <nahamu> discussion about the direction ZFS is going
[21:30:57] <dkeav> no youtubes at work
[21:31:16] <fossala> Girlfriend Pc will be free later (linux) will check it out on that. thx.
[21:32:30] <fossala> Hang on I've got a Wii with flash (Wiimc (Homebrew)).
[21:32:48] <longcat> my eyes
[21:32:50] <dkeav> whatever happened to their html5 push
[21:35:36] <nahamu> http://www.youtube.com/html5
[21:35:53] <nahamu> looks like they use a cookie or something like it.
[21:36:44] <sergefonville> I setup fastcgi now, and everyting seems to work fine
[21:37:13] <sergefonville> but when I do ab at it, I get Restarting too quickly. from svcs -x
[21:40:47] <DontKnwMuch> if I want to change a disk controller, should I export the pool, and later import it, or just turn the machine off, and connect the drives to a new controller
[21:41:09] *** myrkraverk has joined #openindiana
[21:41:10] *** myrkraverk has joined #openindiana
[21:41:24] <sergefonville> zpool configuration is stored on disk
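Either order works; the tidy version of the controller swap, with an illustrative pool name:

    zpool export tank     # clean export before recabling
    # shut down, move the disks to the new controller, boot, then:
    zpool import tank     # or: zpool import -d /dev/dsk tank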
[21:41:54] <DontKnwMuch> hm..
[21:48:42] <fossala> Thanks, interesting video. I may watch the other ones.
[21:51:16] *** raichoo has quit IRC
[21:51:38] *** sergefonville has left #openindiana
[21:58:06] *** sergefonville has joined #openindiana
[22:03:35] <nahamu> if you have the time, I found them all pretty interesting.
[22:11:32] <DontKnwMuch> which benchmarking tool are you people using for checking iops?
[22:12:20] <tsoome> that depends how well you know your apps behaviour;)
[22:12:32] *** underscorer has joined #openindiana
[22:13:14] *** underscorer has quit IRC
[22:13:31] <DontKnwMuch> bonnie does show me a huge difference between raidz and raid10.. so probably it is right
[22:14:09] <tsoome> well, you have seen many factors to be mentioned on this channel
[22:14:58] <DontKnwMuch> yep.. I think raid10 will be my choice weapon in this case
[22:15:01] *** user001 has joined #openindiana
[22:15:14] <tsoome> relations between reads/writes, streaming versus random, also you may need to consider the parallelism - how many threads are generating IO, troughput (meaning the read or write sizes)
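Besides bonnie, watching the pool live is often more telling for IOPS; standard tools, pool name illustrative:

    zpool iostat -v tank 5   # per-vdev ops/sec and bandwidth every 5 seconds
    iostat -xn 5             # per-device service times and %busy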
[22:15:37] *** melliott has quit IRC
[22:25:12] *** mikw has quit IRC
[22:33:18] *** bens1 has joined #openindiana
[22:44:07] *** melliott has joined #openindiana
[22:44:15] <DontKnwMuch> now for something completely different - I have two NICs, and want to have iscsi traffic separated from the rest, I have never done something like that, how will it 'know' through which nic to go, do I have to set the second nic in another subnet?
[22:45:18] *** rev909 has joined #openindiana
[22:47:41] *** jamon has quit IRC
[22:50:13] <longcat> you can probably bind iscsi to a certain nic
[22:50:23] <tsoome> no, you can limit iscsi to specific IP
[22:50:35] *** bens1 has quit IRC
[22:50:47] <tsoome> limit iscsi target to specific IP*
[22:52:26] <tsoome> if you have, say 2 nics for iscsi, you can create 2 targets and create nic-target 1-to-1 bindings.
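With COMSTAR that binding is a target portal group per NIC; a sketch, addresses and names illustrative:

    itadm create-tpg storage-tpg 192.168.20.5:3260   # portal group on the iSCSI NIC's address
    itadm create-target -t storage-tpg               # this target only listens on that portal group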
[22:56:16] <sergefonville> anyone know what "Restarting too fast" means in regard to fastcgi/php?
[22:57:01] <richlowe> it's restarting more frequently than SMF thinks it should, and is assumed to be failing.
[22:57:16] <longcat> most likely a configuration problem, check the fastcgi logs
[22:57:20] <longcat> to see why it quits
[22:58:09] <sergefonville> I don;t think there are fastcgi logs
[22:58:10] <tsoome> as smf service? that happens if the process is crashing and needs a restart. or if the service itself is misconfigured - for example you did create a service for a daemon, but its running a script (the process will finish)
[22:58:33] <tsoome> svcs -vx ?
[23:00:53] <DontKnwMuch> tsoome: now I know what to search for, thx
[23:01:19] <tsoome> thats always the first command to run. quite nice one.
[23:01:31] <sergefonville> it happens when I put a lot of load on it by using ab from another machine
[23:02:23] <sergefonville> State: maintenance
[23:02:23] <sergefonville> Reason: Restarting too quickly.
[23:02:35] <sergefonville> Impact: This service is not running.
[23:02:40] *** jamon has joined #openindiana
[23:02:47] <tsoome> check the log its referring
[23:02:56] <sergefonville> it says the same thing
[23:03:05] <richlowe> If it's restarting too quickly under load, it is likely restarting too quickly, legitimately.
[23:03:16] <richlowe> In that it's restarting intentionally, faster than SMF wants it to
[23:03:21] <richlowe> with illumos the limits and rates are tunabel
[23:03:26] <richlowe> "tunable"
[23:03:33] <richlowe> elsewhere, ... uh, find a way to make it not do that.
[23:03:52] <sergefonville> the point is, after a certain number of requests php-cgi restarts itself
[23:04:19] <tsoome> and you are flooding it?
[23:04:21] <sergefonville> and if I put abnormal load on it, that happens sooner
[23:04:36] <sergefonville> ab -i -n 100000 -c 200 http://192.168.1.1/
[23:04:52] <tsoome> tune it not to restart so often?
[23:05:43] <sergefonville> does that mean I have to do something with the restarter?
[23:05:53] <sergefonville> or os there something I need to tell php
[23:06:31] <sergefonville> is*
[23:06:34] <tsoome> well, as you wrote, it does restart itself, so the question is, is that configurable...
[23:07:02] <sergefonville> it is normal behaviour for php-cgi to clean up after a number of requests
[23:07:40] <richlowe> At present, you'd have to tell PHP
[23:07:47] <richlowe> unless you're using illumos, in which case you can tell SMF
[23:08:41] <sergefonville> what I want is, that if a process with a certain name exists, then do nothing, otherwise, try to start it.
[23:09:00] <sergefonville> i think
[23:09:17] <longcat> that's a terrible idea
[23:10:06] <sergefonville> how is "restarting too quickly" determined exactly?
[23:10:53] <sergefonville> the behaviour of php as fastcgi is that it restarts after a certain amount of requests
[23:11:32] <tsoome> "it restarts" - what it does mean, exactly?
[23:11:56] <tsoome> something is sending an signal, killing it?
[23:12:18] <tsoome> or will it just exit and expect something to restart it?
[23:15:14] <tsoome> or it has main control process which will create new child?
[23:15:26] <sergefonville> no, it kills itself
[23:17:26] <tsoome> uhm? so, how will the legacy init script type starter resolve that suicide? is the start script just looping?
[23:17:59] <sergefonville> the script returns immediately
[23:18:18] <sergefonville> I had to use spawn-fcgi to start php-cgi
[23:18:26] *** pothos_ has joined #openindiana
[23:18:31] <tsoome> well, I mean, if you start things with init scripts, there is nothing to watch the service.
[23:18:32] <sergefonville> otherwise php-cgi would just die
[23:18:53] <sergefonville> should I place the whole command in the exec_method then?
[23:19:27] <sergefonville> I don't think that would change anything since the spawn-fcgi returns immediately
[23:20:22] *** pothos has quit IRC
[23:20:22] <tsoome> seems to me the smf start method you are using, is flawed.
[23:20:33] *** pothos_ is now known as pothos
[23:21:11] <sergefonville> smf needs to know what to look for?
[23:21:41] <sergefonville> I know there are people that have nginx + fastcgi + php
[23:21:49] <sergefonville> here
[23:21:55] <sergefonville> or were...
[23:22:25] <longcat> man smf
[23:22:53] <tsoome> well, the smf start method will spawn the daemon and finish, smf will monitor the daemon, if it will die, the smf will restart it.
[23:23:14] <tsoome> if it happens too fast, the service will go into the maintenance
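The usual way to inspect and recover from that state; the FMRI here is illustrative, use whatever svcs -a shows for the service:

    svcs -xv svc:/network/php-fastcgi:default             # why it is in maintenance
    tail /var/svc/log/network-php-fastcgi:default.log     # the start method's output
    svcadm clear svc:/network/php-fastcgi:default         # retry once the start method is fixed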
[23:25:22] *** kforbz_ has joined #openindiana
[23:26:51] <tsoome> you are using this one?
[23:29:09] <tsoome> http://technotes.tumblr.com/post/4259089859/php-fpm-smf
[23:29:35] <sergefonville> i tried php-fpm, but I get loads of errors when trying to compile the new php5.3
[23:31:31] <tsoome> PHP-FPM is now included in PHP core as of PHP 5.3.3.
[23:31:51] <tsoome> according to http://php-fpm.org/download/
[23:33:13] <sergefonville> the spec files have 5.3.9, but I can't compile those
[23:33:24] <sergefonville> aften an hour of fiddling I gave up
[23:33:41] <tsoome> anyhow - thats the missing link for you
[23:34:41] <tsoome> as i have understood, php-fpm acts like a restarter for fastcgi, spawning it after that suicide. I havent checked the source, but it certainly smells like it;)
[23:36:56] *** axisys has quit IRC
[23:36:56] *** jamon has quit IRC
[23:38:07] *** gea has quit IRC
[23:38:07] <sergefonville> perhaps I need to compile it the old-fashioned way instead of fiddling with non-existing spec-files
[23:40:03] <sergefonville> what is weird that there are depencies for specfiles that doe nog exist
[23:44:33] *** McBofh has joined #openindiana
[23:45:35] *** kforbz_ has quit IRC
[23:53:52] <sergefonville> do not*
[23:56:07] *** jaimef_ has quit IRC
[23:57:13] *** hsp has quit IRC
[23:58:24] <user001> Does Solaris have something similar to FreeBSD's chflags nodump?