[00:00:12] <longcat> eh? what are you trying to do?
[00:00:38] <dkeav> user001: solaris ufs doesn't have nodump support and ufsdump in solaris doesn't have exclude support
[00:00:45] <dkeav> you would have to use another backup strategy
[00:01:07] <user001> ok thanks
[00:01:18] <longcat> and there are decent ones, using a snapshot then incrementals
[00:05:35] <DontKnwMuch> is there a way to see lower level of disk "errors" as with zpool status
[00:09:53] <longcat> you can see the files they were in with zpool status -v
[00:11:54] <DontKnwMuch> oh.. nice. Is it a known fact that sas1068 controllers do not support smart in OI?
[00:15:46] <longcat> oh but that's not low level disk errors. that's just the pool perspective
[00:15:58] <longcat> iostat -En will show you counts per device, and dmesg should show blocks
[00:16:46] <DontKnwMuch> ah.. this was what I was looking for
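As a rough illustration of reading those per-device counts, here is a minimal sketch that filters `iostat -En`-style output for devices with nonzero error counts. The sample lines and field positions are assumptions modeled on typical Solaris `iostat -En` reports, not output captured from the channel; on a live system you would pipe `iostat -En` straight into the awk.

```shell
# Hypothetical sketch: pull per-device error counts out of iostat -En-style
# output. The heredoc stands in for real iostat -En output (format assumed).
cat <<'EOF' |
c4t2d0 Soft Errors: 1 Hard Errors: 0 Transport Errors: 0
c4t3d0 Soft Errors: 0 Hard Errors: 2 Transport Errors: 0
EOF
awk '/Errors:/ {
    # print only devices with a nonzero soft or hard error count
    if ($4 > 0 || $7 > 0) print $1, "soft=" $4, "hard=" $7
}'
```

Anything flagged here but absent from `zpool status` is usually a recovered low-level error, as discussed below.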
[00:32:45] *** user001 has left #openindiana
[00:39:52] <sergefonville> what I still wonder about SMF though
[00:40:09] <sergefonville> what does it monitor to determine that a service stopped
[00:40:30] <sergefonville> especially in case of some 'init' file
[00:40:31] <richlowe> when the last process within the contract exited.
[00:40:46] <sergefonville> what contract?
[00:40:57] <sergefonville> where is that defined/determined?
[00:42:37] <richlowe> a contract is a group of processes + defined behaviour as to what to do if, say, one takes a signal, or crashes, or exits, or whatever.
[00:42:53] <richlowe> SMF puts each service in a new contract, and makes use of that.
[00:43:14] <richlowe> if you want documentation: ctrun(1), contract(4), probably ctstat(1) and ctwatch(1)
[00:44:25] <sergefonville> thank you, that explains a lot about how SMF handles things
[00:44:37] <sergefonville> since it looked like something magical :P
[00:44:42] <sergefonville> now it makes sense
[00:44:48] <sergefonville> well, a little more at least
[00:44:57] *** Bahman has quit IRC
[00:52:03] *** InTheWings has quit IRC
[00:52:23] *** Aphelion has joined #openindiana
[00:53:10] <Aphelion> …any idea why virtual box mount points don't auto-mount when you set them to? they create the mount point but never follow through and mount the damn thing. also, it works when you manually mount it.
[00:54:34] *** CoilDomain has quit IRC
[00:55:42] <sergefonville> I'm gonna catch my sleep
[00:56:01] <sergefonville> tsoome, thank you very much for your time and explanation
[00:56:24] <sergefonville> richlowe, thank you very much for your added clarity and elaboration
[00:57:27] *** kforbz has quit IRC
[00:57:48] *** descipher has quit IRC
[00:57:57] <Aphelion> only in solaris
[00:58:04] *** descipher has joined #openindiana
[01:00:02] <tsoome> sergefonville: man svc.startd does explain the restarter on different types of services as well
[01:02:04] <sergefonville> awesome, thanks
[01:02:16] *** sergefonville has left #openindiana
[01:41:30] <infinity_> I have inheritance "fd" set up on the parent directory. When files are moved on the server, they don't take the directory's permissions. They keep their original permissions. Is there a way to get around this and have the file/dir inherit its new parent directory's permissions? This is on oi_148
[01:45:18] *** Alasdairrr is now known as AlasAway
[01:53:52] *** Aphelion_ has joined #openindiana
[01:53:53] *** Aphelion_ has quit IRC
[01:53:53] *** Aphelion_ has joined #openindiana
[01:53:53] *** Aphelion has quit IRC
[01:53:53] *** Aphelion_ is now known as Aphelion
[01:59:43] *** Aphelion has quit IRC
[02:14:22] *** hajma has quit IRC
[02:14:35] *** Hedonista has joined #openindiana
[02:25:00] *** miine has quit IRC
[02:28:25] *** konobi has quit IRC
[02:29:06] *** konobi has joined #openindiana
[02:48:43] <DrLou> Evening, all... someone here offered some real pearls of wisdom, in the form of short 'must dos' re VBox on Oi.
[02:49:00] <DrLou> anybody remember/know what this magic was?
[02:52:22] *** chototsu has quit IRC
[02:53:19] <dkeav> host i/o cache must be enabled
[02:53:47] <Triskelios> it's also in the install instructions for OI
[02:54:39] *** chototsu has joined #openindiana
[02:54:39] *** ChanServ sets mode: +v chototsu
[02:55:04] <DrLou> I did use those instructions, of course. I'll re-comb through them to be sure I didn't miss anything...
[02:55:12] <DrLou> Tks for the pointers, gents.
[02:55:36] <dkeav> whats the issue? like not booting the installer or something?
[02:58:34] *** DrLou has quit IRC
[03:09:38] *** redgone has joined #openindiana
[03:11:35] *** master_of_master has quit IRC
[03:13:22] *** master_of_master has joined #openindiana
[03:16:56] *** Hedonista has quit IRC
[03:18:33] *** Hedonista has joined #openindiana
[03:25:44] *** dekar has quit IRC
[03:36:00] <blues> so apparently my issue was one unique to gigabyte motherboards. Anyone recommend a good socket 1156 motherboard that is fully compatible with OI?
[03:38:00] <blues> funny, i was looking at that exact one
[03:40:56] <dkeav> main thing is to avoid realtek chipsets at all costs
[03:41:13] <dkeav> that one at least has intel network
[03:41:41] <dkeav> what was the gigabyte model you had?
[03:41:49] <blues> ga-p55-ud4p
[03:42:57] <dkeav> uhhh huh, realtek chipset on that ga board
[03:43:01] <blues> yep
[03:43:05] <blues> sad trombone
[03:43:32] <dkeav> i had a board that locked up constantly with opensolaris, ended up disabling onboard everything and slapping an intel nic in it, no problems after that
[03:43:35] <blues> it works with usb disabled... installing now via a ps2 keyboard i found deeeep in the junk drawer
[03:43:56] <dkeav> if you use the network and put any load on it though it will probably lock up on you
[03:44:12] <konobi> broadcom is to be avoided as well
[03:44:48] <blues> the joy of consumer parts and enterprise software
[03:44:57] <blues> god i love working for a small business
[03:46:22] <blues> while i'm shopping, whats a good disk controller to pick up? I'm looking for something that can handle 12 disks or so that solaris won't puke on.
[03:49:37] <dkeav> something with an lsi 1068 or 2008 chipset in IT mode works well
[03:54:54] <blues> my wife just made her first pimp-IT decision. our scheme for machine names will be based on harry potter's universe
[03:55:07] <dkeav> :(
[03:55:13] <dkeav> shes fired
[03:55:42] <blues> hey hey... the fact she didn't pick characters from True Blood makes me happy
[03:55:55] <dkeav> or twilight
[03:56:08] <blues> i refuse to have a SAN named Jacob
[03:56:26] <dkeav> amen
[03:57:08] <blues> Had it been left to me we'd have went with A song of ice and fire.. but hey, compromise is the heart of a strong marriage..something something i get sex
[03:57:28] * herzen tried to watch a Harry Potter movie several times, but could never get past the first five minutes
[03:58:00] <blues> you have to approach it from the perspective that you are enduring the first few books to fully enjoy the last few
[03:58:14] <blues> the movies i can't vouch for.
[03:58:25] <herzen> there are books?
[03:58:28] <konobi> nah... harry potter names are at _least_ grounds for a separation
[03:58:28] <herzen> *kidding*
[04:00:20] <blues> Come on.. a SAN named Gringots ? thats at least a little cool contextually
[04:00:39] <dkeav> you sir have lost your concept of "cool"
[04:01:06] <dkeav> the penalty is 5 internets
[04:01:09] <blues> i am decidedly left of cool
[04:01:41] <herzen> If I understand correctly, at least your wife isn't running Windows. one could live with that.
[04:02:20] <dkeav> that is bonus
[04:02:28] <dkeav> does she drink beer?
[04:02:49] <alanc> you know you're just waiting for the chance to answer "What are you doing?" with "I'm on hermione"
[04:03:24] <dkeav> alanc: s/on/mounting
[04:04:31] <blues> she drinks beer, loves college football, and uses gcc and vi on a regular basis
[04:05:01] <dkeav> blues: well in that case, a good relationship is all about give and take
[04:05:48] <blues> girl knows her way around a "wand" and thinks code auto-completion is for pussies... i ain't complaining.
[04:06:06] <dkeav> yea, i let mine drive when i have a hangover
[04:06:13] <dkeav> we haven't gotten around to renaming computers
[04:06:43] <dkeav> maybe for a 50th anniversary present
[04:06:50] <blues> i like that intel controller card..
[04:07:12] <dkeav> you will probably have to flash the firmware and run it in IT mode
[04:07:18] <blues> yeah i'm noticing that
[04:07:22] <blues> from the reviews
[04:07:34] <dkeav> but nab a cheap JBOD expander enclosure and you have a nice neat setup
[04:07:35] <blues> it's amusing that people get pissed it won't handle drives over 2 TB
[04:08:02] <blues> When we get into our new house i'm hoping to rack-mount a norco case and do this right
[04:08:36] <blues> for now things are relegated to an antec case that will hold 14 drives and keep them middlingly cool.
[04:10:02] <dkeav> you can pick up used hp expander shelves on ebay for decent prices some times
[04:10:46] <blues> yeah, saw a guy on [h]ardforum talking about one a while ago.
[04:11:03] <blues> i'll have options at that point..which is a welcome change.
[04:14:08] <dkeav> options are nice
[04:14:12] <dkeav> anywho, nappy time
[04:14:13] <dkeav> laters
[04:14:20] <blues> thanks for your help, later
[04:25:03] *** POloser has joined #openindiana
[04:41:36] *** kart_ has joined #openindiana
[04:46:08] *** Enox4 has left #openindiana
[05:03:38] *** GS has joined #openindiana
[05:07:39] *** axisys has joined #openindiana
[05:34:30] *** Naresh has quit IRC
[05:44:59] *** GS has quit IRC
[05:50:46] <Hedonista> a live solaris 11 cd should be able to import a lower version zpool
[05:51:06] <Hedonista> correct?
[05:51:26] <alanc> yes
[05:51:35] <Hedonista> thanks alanc
[06:01:16] *** axisys has quit IRC
[06:01:59] *** axisys has joined #openindiana
[06:02:57] *** redgone has quit IRC
[06:07:26] *** kart_ has quit IRC
[06:13:12] *** echobinary has quit IRC
[06:14:59] *** axisys has quit IRC
[06:22:54] *** freedomrun has joined #openindiana
[06:32:13] *** axisys has joined #openindiana
[06:44:00] <madwizard> Coffee
[06:50:33] *** Naresh has joined #openindiana
[06:51:38] <edogawaconan> I'm wondering if there's a starter guide for developers
[06:57:50] *** echobinary has joined #openindiana
[06:59:14] *** Botanic has quit IRC
[07:03:34] *** horsi has joined #openindiana
[07:03:43] *** GS has joined #openindiana
[07:04:58] *** horsi has quit IRC
[07:05:05] *** kart_ has joined #openindiana
[07:06:19] <edgars> morning
[07:09:38] *** ivo_ has joined #openindiana
[07:09:39] <edgars> 04:47 < konobi> broadcom is to be avoided as well
[07:09:43] <edgars> why so?
[07:10:21] <edgars> cisco, alcatel, hp uses broadcom for the switches :)
[07:13:22] <edgars> ok, hp looks like it uses something of its own
[07:14:36] *** EisNerd has quit IRC
[07:18:01] *** Botanic has joined #openindiana
[07:18:07] *** DanaG has joined #openindiana
[07:18:30] <DanaG> Hmm, how do I make sure OpenIndiana is using ECC?
[07:18:53] <DanaG> I do dmesg | grep -i ecc, or grep -i edac, and get nothing.
[07:20:36] <jdoe> edgars: I imagine they have functional/solid drivers.
[07:20:42] *** Naresh` has joined #openindiana
[07:20:51] <jdoe> well, that and I imagine the chips aren't identical.
[07:20:58] <jdoe> broadcom makes some nice stuff. They also make some garbage.
[07:22:02] *** EisNerd has joined #openindiana
[07:22:13] *** Naresh has quit IRC
[07:23:04] <madwizard> edogawaconan: I think you can use the guide from opensolaris, for now
[07:23:16] *** forquare has joined #openindiana
[07:23:30] <DanaG> Is BCM5723 a good one, or a bad one?
[07:23:34] <DanaG> That's what my Microserver has.
[07:24:16] *** Naresh`` has joined #openindiana
[07:24:45] *** sponix has quit IRC
[07:25:59] <jdoe> ... also, I dunno about realtek, I have no problems with the rge driver under load.
[07:26:02] <jdoe> ymmv of course.
[07:26:21] *** Naresh` has quit IRC
[07:28:32] *** Naresh`` is now known as Naresh
[07:28:34] *** Naresh has joined #openindiana
[07:30:13] <DanaG> so, does openindiana watch ECC?
[07:30:22] <DanaG> I can't find anything useful with google.
[07:30:26] <sickness> what's ECC?
[07:30:40] <DanaG> Error-correcting memory.
[07:30:42] <alanc> isn't ECC something the motherboard handles, not the OS?
[07:30:45] <sickness> yeah
[07:30:55] <DanaG> Yeah, but the OS should notify of those errors.
[07:31:00] <sickness> I think it's a hardware thing, it will work anyway
[07:31:05] <alanc> that'd probably be done via FMA then
[07:31:07] <DanaG> In Linux, there's the amd64_edac driver.
[07:31:33] <DanaG> FMA? Full Metal Alchemist? Joking (and I haven't even watched that)...
[07:31:53] *** forquare has quit IRC
[07:31:59] <alanc> Fault Management Architecture
[07:32:02] <sickness> fault management something :P
[07:34:12] <alanc> googling for "Solaris FMA ECC" finds some examples and more info
[07:34:23] <konobi> edgars: been having _lots_ of issues with broadcom
[07:34:29] <DanaG> Error Correction Type: 5 (single-bit ECC)
[07:34:37] *** yalu has quit IRC
[07:34:47] *** akamit has joined #openindiana
[07:34:47] <konobi> like dladm show-linkprop... kabaam... that interface is borken until reboot
[07:35:01] <konobi> lots of other issues we're still tracking down
[07:36:04] <DanaG> okay, nothing for "fma" or "fault" in dmesg.
[07:36:23] *** yalu has joined #openindiana
[07:36:41] <alanc> run "fmdump", not "dmesg" to see FMA events
[07:37:04] <DanaG> fmdump: failed to open /var/fm/fmd/fltlog: No such file or directory
[07:37:10] *** ivo_ has quit IRC
[07:37:22] <DanaG> I'm on the livecd right now, though.
[07:38:18] <richlowe> fmdump, fmdump -e, and fmadm faulty are useful.
[07:38:25] <DanaG> fmadm: failed to connect to fmd: RPC: Program not registered
[07:38:37] <richlowe> sigh, though not very useful if fmd isn't running.
[07:39:01] <DanaG> Might have to wait until I boot the installed system.
[07:39:10] <DanaG> But thanks, that's a big help.
[07:39:55] <DanaG> I'm quite familiar with Linux (or at least Ubuntu/Debian), so I have no clue where things are in Solaris.
[07:40:24] <DanaG> About the most I've done with SunOS is test my C code on it on Sparc, for a Networks course.
[07:41:34] <DanaG> Guawd, I'm not sure whether it's my USB stick or the installer, but it's godawful slow. It's taking a couple of hours to install.
[07:41:42] <DanaG> I'll have to compare it to an install of Ubuntu.
[07:42:13] <DanaG> USB stick is the target -- 8GB, with write speeds from 5 to 10 megabytes per second. That's probably it.
[07:42:28] <DanaG> Read is around 40.
[07:47:42] <sickness> cool :)
[07:50:06] <DanaG> I tried it in virtualbox USB, and it took 8 hours to get to like 70-80%.
[08:02:28] <sickness> I think that vbox usb goes 12mbit only, usb 1.x
[08:02:43] <DontKnwMuch> what does iostat -E output: Illegal Request: 1750 mean?
[08:02:45] <sickness> even if it emulates usb 2.x it's as slow as 1.x, at least vbox 3.x
[08:02:53] <sickness> I still have to try vbox 4.x
[08:03:23] <DontKnwMuch> or even: Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 1
[08:03:23] *** sponix has joined #openindiana
[08:04:09] <DontKnwMuch> All my drives have this illegal requests, but no errors in zpool status
[08:04:44] <sickness> which drives? usb ones?
[08:04:51] <DontKnwMuch> no, sata
[08:05:01] <sickness> iostat -xen
[08:05:11] <sickness> cfgadm -a
[08:05:54] <DontKnwMuch> sata0/2::dsk/c4t2d0 disk connected configured ok ....etc for all of them
[08:06:11] <sickness> ok
[08:06:16] <sickness> now: iostat -xen
[08:09:13] <DontKnwMuch> this is what I get out of iostat -xen
[08:12:00] *** Worsoe has joined #openindiana
[08:13:04] <DontKnwMuch> errors part looks interesting. what does it mean?
[08:17:39] *** gea has joined #openindiana
[08:18:36] <edogawaconan> I'm wondering what partition type I should set for zfs on mbr partition
[08:27:23] *** gea has quit IRC
[08:35:15] *** McBofh has quit IRC
[08:35:27] <madwizard> edogawaconan: Solaris
[08:36:00] *** McBofh has joined #openindiana
[08:36:54] *** |AbsyntH| has joined #openindiana
[08:37:04] <edogawaconan> wouldn't format tool recognize it as such then?
[08:37:17] <edogawaconan> or confused
[08:37:33] <edogawaconan> is it even allowed to have two solaris partitions in mbr
[08:39:22] <konobi> nothing
[08:42:10] <madwizard> edogawaconan: There is no format tool for zfs
[08:42:24] <madwizard> edogawaconan: You give zpool create a device, it may be whole disk, a slice or a partition
[08:42:39] <madwizard> And it then goes on about creating filesystem
[08:42:43] *** GS has quit IRC
[08:42:49] <edogawaconan> I mean
[08:42:56] <edogawaconan> the type in mbr table
[08:43:42] <edogawaconan> I can pass partition of any type yes
[08:43:47] <edogawaconan> but then linux will be confused
[08:44:09] <DanaG> Say, does openindiana do cpu frequency scaling by default?
[08:45:05] <DanaG> sickness: some people have noticed that vbox usb 2.0 is slower than vbox usb 1.1.
[08:46:44] <DanaG> or so some threads said.
[08:47:02] <DanaG> ugh, the installer has been saying 94% for like 30-60 minutes.
[08:50:28] *** miine has joined #openindiana
[08:51:07] <DanaG> where's the installer log written on the fly?
[08:51:10] <DanaG> if anywhere?
[08:52:22] <sickness> omg
[08:52:29] <sickness> didn't know that...
[08:55:53] *** bens1 has joined #openindiana
[08:56:18] *** DanaG has quit IRC
[09:02:05] *** DanaG has joined #openindiana
[09:02:18] <DanaG> Okay, now 99%.
[09:02:36] <DanaG> Does the installer save the install log in the installed system?
[09:04:50] *** drajen has joined #openindiana
[09:12:36] <miine> DanaG: don't think so.
[09:12:52] <DanaG> That's lame.
[09:13:24] <lennard> I think theres a log of some sort in /tmp during installation
[09:13:27] <miine> DanaG: I'm not sure, but if it writes I didn't find the place... but you can look at the sources.
[09:14:13] <miine> lennard: yep. but on live medium everything written is on tmp if not on network ;-)
[09:14:33] <miine> lennard: sorry. ramdisk...
[09:14:41] <lennard> true :)
[09:15:03] <lennard> well, except the installation itself
[09:15:08] <lennard> thats not written to ramdisk :P
[09:15:18] <miine> but it would be nice to have a log WINDOW ...
[09:17:03] <DanaG> Ah. Time for bed.
[09:17:18] <lennard> time to get up, actually :)
[09:21:27] <DanaG> I do see "ict_transfer_logs completed".
[09:22:32] *** akamit has quit IRC
[09:25:43] *** Micr0mega has joined #openindiana
[09:31:20] *** DanaG has quit IRC
[09:55:00] *** freedomrun has quit IRC
[09:58:23] *** akamit has joined #openindiana
[10:06:06] <DontKnwMuch> why does iostat -xen show errors, but zpool status does not
[10:06:13] <DontKnwMuch> Do I have to worry
[10:07:59] *** mikw has joined #openindiana
[10:08:04] <lblume> zpool reports errors at the data level. If a hardware error was retryable and succeeded, zpool would not know about it. Not all iostat errors have an impact on data.
[10:08:19] <tsoome> check /var/adm/messages as well. probably just recoverable errors, iostat did register them, but zfs got the data
[10:08:48] *** held has quit IRC
[10:09:04] <DontKnwMuch> my iostat looks like that:
[10:09:30] <DontKnwMuch> I have two different controllers, seems like the drives on one are having some problems
[10:10:04] <DontKnwMuch> or can I ignore it
[10:10:10] <DontKnwMuch> sortof
[10:10:18] *** Whoopsie has joined #openindiana
[10:10:18] *** ChanServ sets mode: +v Whoopsie
[10:11:07] <lblume> Exact same number of errors on a bunch of drives would point to a central cause. Like tsoome said, check logs.
[10:12:55] <DontKnwMuch> the drives with short names are on ich10... the others are on sas2008, checking logs now
[10:13:32] *** syoyo has joined #openindiana
[10:17:51] <DontKnwMuch> could this be it:
[10:17:52] <DontKnwMuch> Jul 6 01:11:56 Deep2 ahci: [ID 296163 kern.warning] WARNING: ahci0: ahci port 2 has task file error
[10:18:01] <DontKnwMuch> Jul 6 01:11:56 Deep2 ahci: [ID 687168 kern.warning] WARNING: ahci0: ahci port 2 is trying to do error recovery
[10:18:38] <DontKnwMuch> Jul 6 01:11:56 Deep2 ahci: [ID 332577 kern.warning] WARNING: ahci0: the below command (s) on port 2 are aborted
[10:18:56] <DontKnwMuch> funny zfs didnt 'say' a thing
[10:19:31] <DontKnwMuch> task file error does not sound nice
[10:20:18] <DontKnwMuch> Jul 6 01:12:18 Deep2 scsi: [ID 107833 kern.notice] Sense Key: Soft_Error
[10:20:27] <DontKnwMuch> can these be from smart not getting its data
[10:20:44] *** Whoopsie has quit IRC
[10:21:44] *** Whoopsie has joined #openindiana
[10:21:44] *** ChanServ sets mode: +v Whoopsie
[10:23:54] <tsoome> ahci, scsi, zfs are in different layers. if the error was soft error at lower layer, then it was possible to recover from it and so the error did not reach the upper layer. same as with ethernet - the packet collision does not mean your connection will drop.
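To see that layered noise at a glance, here is a sketch that tallies kernel warnings per driver in a `/var/adm/messages`-style log, so recoverable retries stand out from notices. The sample lines are the ahci/scsi excerpts quoted above; the field layout (driver name in field 5) is an assumption about the syslog format.

```shell
# Sketch: count kern.warning messages per reporting driver so recoverable
# noise (like the ahci retries above) stands out. Field layout assumed.
log=$(mktemp)
cat <<'EOF' > "$log"
Jul  6 01:11:56 Deep2 ahci: [ID 296163 kern.warning] WARNING: ahci0: ahci port 2 has task file error
Jul  6 01:11:56 Deep2 ahci: [ID 687168 kern.warning] WARNING: ahci0: ahci port 2 is trying to do error recovery
Jul  6 01:12:18 Deep2 scsi: [ID 107833 kern.notice] Sense Key: Soft_Error
EOF
# field 5 is the reporting driver ("ahci:"); strip the colon and count
grep 'kern.warning' "$log" |
awk '{ sub(":", "", $5); count[$5]++ } END { for (d in count) print d, count[d] }'
rm -f "$log"
```

On a real box you would point the pipeline at `/var/adm/messages` instead of the sample file.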
[10:24:44] <DontKnwMuch> iostat -En gives Illegal Request: 18 to 24 for the drives in question
[10:25:18] <DontKnwMuch> could be a problem with different controllers in the same pool... or am I wrong?
[10:27:39] <DontKnwMuch> scrub does not increase the numbers..
[10:28:53] *** held has joined #openindiana
[10:31:43] <edgars> yello boyz
[10:33:43] *** ThothCrimson has joined #openindiana
[10:35:05] *** Whoopsie has quit IRC
[10:36:33] *** edogawaconan has quit IRC
[10:37:45] *** edogawaconan has joined #openindiana
[10:38:13] *** miine has quit IRC
[10:47:25] *** syoyo has quit IRC
[10:53:24] <DontKnwMuch> transitioned to maintenance...
[10:53:40] <DontKnwMuch> svcadm clear system/filesystem/local:default does not help ..
[10:55:32] <DontKnwMuch> mountpoint should be empty ... hm...
[10:57:32] <DontKnwMuch> Dump2 was an incomplete send - receive thing... possibly this is what happens in such a case
[11:10:29] <DontKnwMuch> zfs mount -a
[11:10:55] <DontKnwMuch> cannot mount '/Data/Dump2': directory is not empty
[11:12:39] <DontKnwMuch> of course it is not empty, there are files in it... if it isn't mounted it should be empty.. how is it possible there is something in it
[11:13:13] *** |AbsyntH| has quit IRC
[11:13:21] *** |AbsyntH| has joined #openindiana
[11:16:52] <lblume> Because it was used when not mounted
[11:21:55] <DontKnwMuch> ah.. strange, but how can it be not mounted if the system boots normally..
[11:31:45] *** Botanic has quit IRC
[11:39:01] *** Whoopsie has joined #openindiana
[11:39:02] *** ChanServ sets mode: +v Whoopsie
[11:40:21] <lblume> If it was manipulated after boot.
[12:01:38] *** Whoopsie has quit IRC
[12:03:31] *** Botanic has joined #openindiana
[12:10:25] *** ThothCrimson has quit IRC
[12:13:46] *** |AbsyntH| has quit IRC
[12:15:12] *** Botanic has quit IRC
[12:30:51] *** fossala has quit IRC
[12:42:09] *** InTheWings has joined #openindiana
[12:47:57] *** kart_ has quit IRC
[12:49:30] *** kart_ has joined #openindiana
[12:57:03] *** buffyg has quit IRC
[12:57:29] *** buffyg has joined #openindiana
[12:57:29] *** ChanServ sets mode: +o buffyg
[12:59:40] *** Botanic has joined #openindiana
[12:59:50] *** raichoo has joined #openindiana
[13:19:00] <DontKnwMuch> is it possible to check how much compression will actually compress the data somehow, anyone using compression?
[13:25:13] <DeanoC> always use compression
[13:25:20] <DeanoC> except in cases with very little cpu
[13:25:29] <DeanoC> it saves io
[13:25:44] *** hajma has joined #openindiana
[13:25:59] <DeanoC> you can change it at runtime so test with it on a particular setting and see how it does
[13:26:15] <DeanoC> or for gzip-X use gzip tool to get an idea
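A portable way to follow that suggestion before touching any zfs settings: compress a sample with gzip and compare sizes. The synthetic sample file generated here is purely illustrative; point the same pipeline at a representative file of your own data.

```shell
# Rough estimate of what gzip-level-1 compression would buy on a sample of
# data before enabling it on a filesystem. The sample file is synthetic
# (highly compressible repeated text), just to demonstrate the pipeline.
sample=$(mktemp)
seq 1 500 | sed 's/.*/the quick brown fox jumps over the lazy dog/' > "$sample"
orig=$(wc -c < "$sample")
comp=$(gzip -1 -c "$sample" | wc -c)
# report original size, compressed size, and the compression ratio
awk -v o="$orig" -v c="$comp" 'BEGIN { printf "original=%d compressed=%d ratio=%.1fx\n", o, c, o/c }'
rm -f "$sample"
```

Already-compressed data (media, archives) will show a ratio near 1x, which is a hint that enabling compression buys little there.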
[13:26:55] <DontKnwMuch> I can select different types of compression? will have to read some more ;)
[13:27:41] <tsoome> gzip is practically only good for archive situations
[13:28:10] <DontKnwMuch> So you have compression on all the time? CPU is powerful enough I think, will try it out; for old data I have to recopy it all, right?
[13:28:32] <DeanoC> depends on your server, i tend to have more cpu than io, so gzip-1 is good
[13:28:41] <DeanoC> you set it per filesystem
[13:29:00] <DeanoC> and any new files copied will use the new setting when accessing the filesystem
[13:29:08] *** merzo has joined #openindiana
[13:29:44] <DeanoC> its fairly common to have different settings for different types of files
[13:30:38] <DeanoC> however there is never really a good reason to not at least have it on (the default setting uses a very fast compressor that's pretty much free for most modern systems and likely results in a performance increase versus uncompressed io)
[13:30:58] <DontKnwMuch> oh.. ok :)
[13:31:18] <DontKnwMuch> I was just trying to ask what the default compressor is
[13:31:42] <DeanoC> it's an lz derivative
[13:34:00] <DontKnwMuch> I was just afraid of what it would do with large incompressible files, 4GB or so. I have a lot of very small files, thousands of fonts I believe they are, and some large renders etc..
[13:34:14] <DontKnwMuch> all in the same fs
[13:39:05] *** DrLou has joined #openindiana
[13:39:05] *** ChanServ sets mode: +o DrLou
[13:40:55] *** Gugge has quit IRC
[13:40:56] *** AlasAway is now known as Alasdairrr
[13:41:15] <lblume> I am *still* waiting for actual benchmarks that shows that mythical "compression increases IO perfs" beast.
[13:41:49] *** Worsoe has quit IRC
[13:44:24] *** hajma has quit IRC
[13:44:26] <dkeav> still highly dependent on what data is on the filesystem and how compressible it is
[13:44:35] <dkeav> you can't expect gains from already compressed data
[13:44:41] <lblume> And also on what kind of reads you are doing.
[13:44:42] <DeanoC> it certainly does, and is benchmarkable outside of zfs; if it doesn't on zfs, it's doing something wrong
[13:45:00] *** McBofh has quit IRC
[13:45:51] <lblume> Small reads on highly compressible data can't be more efficient, as a lot more data must be moved around.
[13:46:17] <DeanoC> thats just buffer caches
[13:46:26] <tsoome> it's the same logic as caching: the best way to speed up disk io is not to do it. if you have a compression ratio of 2:1 then you have 50% of physical IO saved.
[13:46:55] <lblume> That assumes that 100% of disk I/O are actually useful.
[13:47:09] <tsoome> 100% of disk IO is never useful
[13:47:19] <lblume> Exactly
[13:47:30] <tsoome> there is always some inflated IO, but its there nevertheless
[13:47:34] *** McBofh has joined #openindiana
[13:47:40] <lblume> So maybe you save 50% of that, but 50% of *what*?
[13:47:51] <DeanoC> but there is no reason that compression makes IO more wasteful in most cases
[13:48:01] <tsoome> well, savings are savings;)
[13:48:09] <DeanoC> of the IO required to retrieve the data
[13:48:10] *** Gugge has joined #openindiana
[13:48:23] <tsoome> even if you do save on inflated io, it's still a saving;)
[13:48:34] <lblume> Are they the same IO on compressed and uncompressed data?
[13:48:48] <tsoome> what you mean?
[13:49:43] <DeanoC> to the IO subsystem the difference is that a 10K block is now 5K; assuming it loads 4K blocks, compression is 1/3 faster IO wise
[13:50:56] <DeanoC> there are corner cases where the 9K file only goes down to 8.5K so the number of blocks is the same
[13:51:39] <tsoome> some say the "visible" effect is on laptops with slow disks, but I dont have any so cant confirm:)
[13:51:42] <lblume> Hmmm, from my understanding of zfs, you need to read a whole recordsize, so the question would be, are they actually smaller when compression is on? And how to check that?
[13:52:41] <tsoome> you can compare fsstat with zpool iostat before and after compression, but thats rather long test
[13:53:09] *** kmays has joined #openindiana
[13:53:23] <tsoome> fsstat will give you io statistics at vfs layer while iostat is on disk layer
[13:53:38] <DeanoC> also any over-read should be treated as accidental prefetch, which *may* help the next IO
[13:54:19] *** kmays has quit IRC
[13:54:39] <lblume> It all amounts to, a *lot* of analysis and tuning is needed to actually know that for a specific set of data.
[13:55:01] <tsoome> well, if you dont tune prefetch, it should be more or less the same in both cases
[13:57:18] <lblume> I'd just like to see some documented way to evaluate that according to the kind of data (small/big files, compressible/not compressible). I'm kind of leery of blanket statements that I never, ever see substantiated by empirical evidence.
[13:58:05] <tsoome> Im using compression=on mostly on data like zone roots, for disk space, not so much about IO
[13:58:13] <DeanoC> because its well tested on systems that are easier to test and prove, so why wouldn't the results carry over?
[13:58:15] <lblume> (Oh, I did get most impressive disk throughput when using mkfile on an FS with dedup activated :-)
[13:59:03] <tsoome> Compression using zle (zero-length encoding) should boost mkfile as well:P
[13:59:26] <lblume> can you prove that? ;-)
[14:00:05] <lblume> Ok, I do see how specific cases would benefit :-)
[14:02:24] <DontKnwMuch> so... enable compression and be done with it, it will hardly be worse than without it, right? ;)
[14:03:25] <tsoome> it depends
[14:03:32] *** reddi has quit IRC
[14:03:34] <tsoome> gzip-9 can be very expensive
[14:03:48] <dkeav> very!
[14:03:58] <DontKnwMuch> I can imagine that it does...
[14:05:03] <DontKnwMuch> the default one will compress well, and fast I heard
[14:09:38] *** viridari_ is now known as viridari
[14:10:22] *** mikw has quit IRC
[14:10:48] *** anikin has joined #openindiana
[14:11:36] <lblume> Test it on your data
[14:12:45] <anikin> hi! i have a problem with idmap. can someone help me? )
[14:13:17] <anikin> server on oi_148, its cifs server in windows domain
[14:14:09] <anikin> last week i get error: box idmapd[514]: [ID 280452 daemon.error] Error: smb_lookup_sid failed.
[14:14:22] <anikin> box idmapd[514]: [ID 455671 daemon.error] Check SMB service (svc:/network/smb/server).
[14:14:58] <anikin> nothing helps me. only reboot.
[14:15:08] <dkeav> did you check the service?
[14:15:10] <anikin> today i get it again.
[14:15:32] <anikin> i can disable service, but i cant start it again, until reboot
[14:16:00] <anikin> server worked fine for a half-year before.
[14:16:23] <anikin> dont know what to do.
[14:16:40] <anikin> can it be a hardware error? may be memory?
[14:17:21] <tsoome> smb_lookup_sid failed may be just a hint about a failed call to AD
[14:17:45] <anikin> another server with same configuration working fine.
[14:17:55] <anikin> DC working too.
[14:18:41] <anikin> idmap cant start after error, until reboot, with no messages in log.
[14:19:15] <anikin> google hasn't seen smb_lookup_sid before )
[14:19:47] <tsoome> what build?
[14:19:54] <anikin> 148
[14:20:09] <anikin> $ uname -a
[14:20:10] <anikin> SunOS box 5.11 oi_148 i86pc i386 i86pc Solaris
[14:20:14] <tsoome> syncing clocks with ntp?
[14:20:49] <anikin> yes. it's synced.
[14:20:51] <dkeav> syncing to DC at that
[14:21:21] <anikin> its not a time error i think.
[14:21:44] <descipher> anikin: just one dc?
[14:22:19] <anikin> 2, dc1 and dc2, both in krb5.conf
[14:22:45] <dkeav> anikin: what logs are you checking?
[14:23:18] <descipher> try switching the dc and check the dc logs. Maybe its not a local smb server issue.
[14:24:05] <dkeav> anikin: look in /var/svc/log/network-smb-server\:default.log
[14:24:46] *** reddi has joined #openindiana
[14:25:17] <anikin> [ Jul 6 14:44:22 Method or service exit timed out. Killing contract 87. ]
[14:25:49] <anikin> and so on
[14:25:58] <descipher> since restarting the local smb service fails, it's either external or a dependency on another failed local service that smb needs.
[14:26:54] <anikin> i think it's the idmapd service
[14:27:01] <anikin> i cant restart it
[14:27:09] <anikin> only after reboot
[14:27:38] <dkeav> when it is failed and you try to restart the service does it show a maintanaence state?
[14:27:48] <dkeav> wow butchered that word
[14:27:52] <anikin> no, it only show disabled
[14:27:58] <dkeav> hmm
[14:28:07] <anikin> and no log errors
[14:29:31] <anikin> i have a second server, for backups, with same configuration. it working fine.
[14:29:44] <descipher> compare local accounts and file system rights across the two smb servers; there's the possibility it's an account id conflict.
[14:29:49] <tsoome> that doesn't mean much of anything
[14:30:20] <anikin> but why doesn't the idmap service start?
[14:30:56] <tsoome> if you know truss and friends you can try to start idmap manually and trace it; that may help with a bug report or diagnosis, but otherwise..... restart and wait for updates.
[14:31:04] <tsoome> seems like some bug...
[14:32:09] <anikin> thanks! waiting for updates.
[14:32:32] <anikin> i'll try to change the memory, maybe it helps
[14:32:58] <tsoome> maybe, but tbh, i doubt it has anything to do with hardware.
[14:34:46] <DontKnwMuch> controller failed, pool was not exported, disks were connected to another controller, pools are degraded, and the disks that were on the failed controller show as configured but not in use. How can I add them back to the pool? (this is just a test, not for real)
[14:35:15] <descipher> anikin: compare the output of "idmap list" on the two servers
[14:35:36] <tsoome> DontKnwMuch: thats the case of stale zpool.cache in /etc/zfs
[14:36:20] <tsoome> DontKnwMuch: devfsadm -C; remove cache reboot and import the pool.
[14:37:37] <anikin> descipher: on the "broken" server i now have an empty idmap list; i think it's because i deleted /var/idmap/idmap.db after the crash.
[14:38:34] *** POloser has left #openindiana
[14:39:19] <DontKnwMuch> tsoome: fantastic :) great info, will try it out now
[14:40:14] <tsoome> always remember to read manuals for commands and options you have been told;)
[14:40:35] <lblume> I forget if I asked here yesterday, but I'm still looking: is there a way to check consistency of /etc/user_attr? Kind of like pwck?
[14:40:54] <tsoome> not really.
[14:41:06] <tsoome> count the colons:P
[14:41:18] <lblume> It's worse than that :-P
[14:41:37] <lblume> Non-existent users there, and referring to root as a role, which it isn't anymore.
[14:41:47] <anikin> do i need idmap at all? server working fine with empty list now.
[14:42:04] <tsoome> well, there are also tools like auths, profiles, roles to help this check
[14:42:35] <lblume> Yes, but basically, I need to parse the thing myself.
[14:42:41] <tsoome> yep
[14:43:02] <tsoome> feel free to write some tool:P
[14:43:16] <lblume> Eg, roles will happily return the content from user_attr, without making even the most basic check on it
[14:43:48] <lblume> It just returns the string in the field, whatever that is.
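The pwck-style tool lblume is asking for (and tsoome suggests writing) could be sketched roughly as below. This is a hedged illustration, not a real utility: it assumes the documented five colon-separated fields of user_attr (user:qualifier:res1:res2:attr) and only validates field count and user existence, exactly the two problems mentioned in the discussion.

```python
# Sketch of a pwck-style consistency check for /etc/user_attr.
# Assumptions: five colon-separated fields per entry; attribute
# semantics beyond the field count are not validated here.

def check_user_attr(user_attr_lines, passwd_users):
    """Return a list of (lineno, problem) tuples for suspect entries."""
    problems = []
    for lineno, line in enumerate(user_attr_lines, start=1):
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        fields = line.split(":")
        if len(fields) != 5:
            problems.append((lineno, "expected 5 fields, got %d" % len(fields)))
            continue
        user = fields[0]
        if user not in passwd_users:
            problems.append((lineno, "user %r not in passwd" % user))
    return problems
```

Run against `open("/etc/user_attr")` and a set built from `pwd.getpwall()`, this would flag the non-existent users lblume found; checking whether `type=role` entries still make sense would need extra rules.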
[14:43:52] <anikin> # roles
[14:43:52] <anikin> No roles
[14:43:53] <anikin> hmm, i think its not ok. will try to rejoin server to domain.
[14:43:53] <tsoome> does userdel clear user_attr?
[14:46:02] <lblume> tsoome: I think it does, even if the content is invalid.
[14:46:27] <lblume> I believe I can blame the leftovers on manual edits of /etc/passwd
[14:46:35] <tsoome> :D
[14:46:58] <dkeav> >.<
[14:47:09] *** |AbsyntH| has joined #openindiana
[14:47:12] <tsoome> seems like you need to think whom to give root access:P
[14:47:21] <lblume> I've just started.
[14:47:27] *** CoilDomain has joined #openindiana
[14:47:32] <lblume> I'm cleaning up.
[14:47:44] <dkeav> i suggest the baseball method of rights management
[14:47:57] <tsoome> but that can be painful. many people have made enemies that way:D
[14:48:02] <lblume> Previous sysadmin is not here anymore.
[14:48:06] <dkeav> as in when someone you give root access to does something stupid, you break their hands with a ballbat
[14:48:24] <tsoome> (including myself)
[14:48:43] <dkeav> lblume: thats the beauty of the baseball method, it doesn't end with career changes
[14:49:11] *** CoilDomain has quit IRC
[14:49:37] *** CoilDomain has joined #openindiana
[14:49:45] <DontKnwMuch> tsoome: It worked, but I had to export the degraded pool, delete the zpool.cache reboot, import it all back and it is ok now. Interesting exercise
[14:49:46] <lblume> Look, there were 3.6GO of mail flood reproducing themselves in /var/spool/clientmqueue since Jan 2010. So basically, user_attr is the easy stuff :-P
[14:49:54] <lblume> GB*
[14:50:10] <tsoome> :D
[14:50:13] <DontKnwMuch> tsoome: just deleting the cache and reboot did nothing interestingly
[14:50:15] *** CoilDomain has quit IRC
[14:50:36] <tsoome> DontKnwMuch: try to rejoin domain?
[14:51:36] *** CoilDomain has joined #openindiana
[14:51:40] <tsoome> lblume: the clientmqueue is quite easy to clean up, it will take some time tho
[14:52:03] <tsoome> monkeyjob;)
[14:52:26] <dkeav> pfy job
[14:52:50] <tsoome> pfy?
[14:53:21] <dkeav> pimply faced youth
[14:53:26] <tsoome> :D
[14:53:33] <lblume> tsoome: Somewhat faithful companion of the bofh!
[14:53:44] <dkeav> aye
[14:54:08] <tsoome> ah, i see:P
[14:54:29] <lblume> rm * did a quick job on those. I expect the bounce of the bounce of the bounce^50 of an email that failed to be distributed 1.5 year ago has little relevance.
[14:54:56] <tsoome> indeed.
[14:55:32] <tsoome> it really shouldnt stay there that long
[14:55:52] <tsoome> default settings will keep queue for 7 days afaik...
[14:57:37] <lblume> 5 days.
[14:57:46] <lblume> But what does it do with them? It sends an email.
[14:57:51] <lblume> To the clientmqueue.
[14:58:05] <lblume> It had also sent one after 3 hours, as a warning.
[14:58:38] <lblume> So each email spawns two, one after 3 hours, one after 5 days, each of them slightly bigger than the previous.
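The feedback loop lblume describes (every stuck message eventually spawning a warning and a bounce, each a bit larger) can be made concrete with a toy model. The spawn fan-out comes from the discussion; the growth factor is invented for illustration.

```python
# Toy model of the clientmqueue feedback loop: every queued message
# eventually spawns two follow-ups (a 3-hour warning and a 5-day bounce),
# each slightly larger than its parent, while the originals stay queued.
# The 1.1 growth factor is an assumption, not a sendmail constant.

def queue_after(generations, initial_msgs=1, initial_size=1000, growth=1.1):
    """Return (message_count, total_bytes) after N spawn generations."""
    sizes = [float(initial_size)] * initial_msgs
    for _ in range(generations):
        # each existing message spawns two slightly bigger ones
        sizes += [s * growth for s in sizes for _ in (0, 1)]
    return len(sizes), sum(sizes)
```

The message count triples every generation, which is why a year and a half of this produced gigabytes of self-reproducing mail.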
[15:00:12] <tsoome> DontKnwMuch: if you had to export pool, then you can leave the cache there, because the export will clear the pool data from the cache anyhow
[15:01:55] <DontKnwMuch> tsoome: oh.. ok. it worked after export, but just deleting the cache did not change a thing, pools were still there in degraded state after reboot
[15:01:56] *** akamit has quit IRC
[15:02:27] <DontKnwMuch> it just took long to reboot
[15:06:51] *** raichoo has quit IRC
[15:07:00] *** raichoo has joined #openindiana
[15:10:50] <tsoome> yeah, i didn't realize you might have the pool imported and still have references to old disk names. the device names in the pool (which you see from zpool status) are updated on import only.
[15:26:04] *** madwizar1 has joined #openindiana
[15:27:26] *** DanaG has joined #openindiana
[15:27:40] <DanaG> Well, that really slow install finished... but now won't boot.
[15:28:32] <DanaG> Or maybe it's just really slow. It's taken like 30 minutes, and it's still not done.
[15:28:48] *** syoyo_ has joined #openindiana
[15:30:44] <DanaG> service:/system/filesystem/remvolmgr:default: method or service exit timed out. Killing contract 75.
[15:31:04] *** datadigger has quit IRC
[15:31:04] *** lblume has quit IRC
[15:31:04] *** madwizard has quit IRC
[15:31:05] *** Nitial has quit IRC
[15:31:10] <anikin> someone knows release date of oi_151 ?
[15:31:55] *** Nitial has joined #openindiana
[15:34:10] *** datadigger has joined #openindiana
[15:34:34] *** lblume has joined #openindiana
[15:36:28] *** gea has joined #openindiana
[15:37:01] <DanaG> ipagent service didn't start.... and now it's in maintenance mode. FAIL.
[15:37:03] <EisNerd> docsteel: here?
[15:39:00] <DanaG> Okay, I give up on OI for the moment.
[15:39:28] <DanaG> It shouldn't take 45 minutes to fail to boot.
[15:40:22] *** axisys has quit IRC
[15:41:39] <EisNerd> docsteel, said someone has trouble with cifs-service?
[15:43:22] *** xl0 has quit IRC
[15:44:04] <EisNerd> btw, could some one give me a hint how to get the pid of a process running in a bash process with known pid? should be a child relation or not?
[15:44:22] <EisNerd> in linux ps supports forest-like display
[15:45:13] <McBofh> EisNerd: ptree $parentpid
[15:45:19] <McBofh> iirc, we've also got pstree
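McBofh's answer, ptree, walks parent/child links to show a process subtree. The same walk can be sketched in Python; the process table below is stand-in data (pid -> ppid), since a real ptree reads the live process table rather than a dict.

```python
# Sketch of the parent/child walk ptree(1) performs: given a mapping of
# pid -> ppid, collect a pid and all of its descendants, depth-first,
# with an indentation depth per entry. The table here is hypothetical.

def children_of(proc_table, pid):
    """All direct children of pid, in pid order."""
    return sorted(p for p, ppid in proc_table.items() if ppid == pid)

def ptree(proc_table, pid, depth=0, out=None):
    """Collect (depth, pid) entries for pid and its descendants."""
    if out is None:
        out = []
    out.append((depth, pid))
    for child in children_of(proc_table, pid):
        ptree(proc_table, child, depth + 1, out)
    return out
```

For EisNerd's question (the pid of a process running inside a known bash pid), the answer is simply the depth-1 entries under the shell's pid.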
[15:46:07] <raichoo> DanaG: What kind of hardware do you use? Did you post your problem on the mailing list?
[15:46:14] <quasi> pstree sounds linuxy
[15:46:15] <docsteel> EisNerd: 14:16:33 < anikin> hi! i have a problem with idmap. can someone help me? )
[15:47:43] <anikin> EisNerd: yes, i have some problem with cifs.
[15:48:00] <anikin> with idmap i think!
[15:49:11] <EisNerd> McBofh: yeah
[15:49:34] <DanaG> HP Microserver, booting from a USB flash drive.
[15:50:03] <DanaG> Well, not quite the same symptoms.
[15:50:05] <EisNerd> anikin: give some more detail about your setup
[15:50:09] <EisNerd> ad based mapping?
[15:51:17] <anikin> yes it's ad based. don't know what more to give!?
[15:51:40] <anikin> zfs smbshare
[15:51:47] *** TheG0blin has joined #openindiana
[15:51:59] <anikin> problem with idmap service
[15:53:02] <anikin> Jun 29 17:50:27 box idmapd[514]: [ID 280452 daemon.error] Error: smb_lookup_sid failed.
[15:53:02] <anikin> Jun 29 17:50:27 box idmapd[514]: [ID 455671 daemon.error] Check SMB service (svc:/network/smb/server).
[15:53:03] <anikin> Jun 29 17:50:27 box idmapd[514]: [ID 174421 daemon.error] Check connectivity to Active Directory.
[15:53:04] <Micr0mega> DanaG: do you happen to have a small usb drive and loads of ram?
[15:53:46] <anikin> EisNerd, i can only stop idmap service, but cant start it again until reboot.
[15:54:28] <DanaG> 8 gig drive, 5 gigs RAM.
[15:54:32] <EisNerd> is your server joined correctly to AD
[15:54:41] <DanaG> It took multiple hours to install.
[15:54:50] <anikin> now i turn debug on for idmap. waiting for this bug again.
[15:55:06] <anikin> EisNerd, yes it correctly joined to AD.
[15:55:49] <anikin> EisNerd, all worked fine for a half-year, until last week :(
[15:56:11] <anikin> and today i got this error again.
[15:57:06] <EisNerd> strange
[15:57:13] <EisNerd> dcdiag is all fine?
[15:57:29] <anikin> yes
[15:57:49] <DanaG> Right now, I'm going to have to put this on hold until I get back from work.
[15:57:54] <anikin> my second server with same configuration working fine.
[16:00:42] <Micr0mega> DanaG: I was having problems with my 16GB usb drive and 16GB ram, because the assigned swap space on the drive was already 8GB. you might have a similar problem with space
[16:01:12] <DrLou> Gents, may I throw a VirtualBox 'Best Practices' Q in the mix here? as in VBox with Oi the host OS (not Oi the client in another OS)
[16:01:32] <DrLou> Anyone have any authoritative suggestions/absolutes/gotchas?
[16:03:11] <DrLou> eg, docs seem to suggest --hostiocache off is better, though some have suggested hostiocache 'on' Hmmm....
[16:03:40] <anikin> EisNerd, what do you think about it? can it be hardware?
[16:03:48] <longcat> that's kind of like the mysql double buffer problem
[16:05:22] <EisNerd> no I think not
[16:05:45] <EisNerd> McBofh: thx, now I was able to implement my timeout
[16:07:46] <EisNerd> anikin: maybe you should restart your smb service and have a look at the debug log during the server's domain login
[16:08:27] *** akamit has joined #openindiana
[16:08:29] <anikin> EisNerd, ok i'll try it.
[16:08:35] <EisNerd> damn cam
[16:08:40] <EisNerd> Available Capacity:
[16:08:59] <anikin> EisNerd, why do you think the problem is with smb and not idmap?
[16:10:06] <DanaG> Micr0mega: how did you fix your swap issue?
[16:11:42] <EisNerd> anikin: they both are closely linked with each other
[16:13:19] <EisNerd> and if there is a problem, the debug log should show a lot while the service tries to log into the ad
[16:13:50] <EisNerd> same with idmap, you should try to enable debug logging and have a look at the debug log while you try to start the service
[16:13:59] <DanaG> Can I just resize the existing partitions?
[16:14:34] <DanaG> I don't recall even seeing an option for swap!
[16:15:28] <anikin> EisNerd, i rejoined the server to the domain after turning idmap debug on, but can't find anything criminal in the log.
[16:15:28] <Micr0mega> DanaG: that's the thing, there is no option heh. the installer just takes half your ram as swap, no matter how large (small!) your install disk is
[16:15:46] <Micr0mega> DanaG: I wouldn't know about resizing, I did it with a fresh install anyway
[16:16:00] <longcat> swap isn't a partition
[16:16:03] <longcat> it's a zvol
[16:16:12] <DanaG> So can I delete it?
[16:16:18] <Micr0mega> DanaG: but you should first check if your pool is indeed full
[16:16:20] <longcat> zfs destroy rpool/swap && zfs create -V 4G rpool/swap
[16:16:39] <DanaG> Well, I have 5GB RAM and an 8GB flash drive for OS.
[16:16:46] <longcat> whatever you want
[16:16:53] <tsoome> your swap is too small or too large?
[16:17:06] <DanaG> I don't know... all I know is that it's taking 45 minutes to FAIL to boot.
[16:17:22] <tsoome> then its not the issue of swap:P
[16:18:11] <Micr0mega> tsoome: a too large swap caused my install to take over an hour and boot over 30 minutes, because the rpool was full
[16:18:37] <longcat> the install is really small, under 4gb
[16:18:41] <Micr0mega> but this might be a totally different issue, just saying what I experienced
[16:19:22] <longcat> the first boot is always long
[16:19:40] <Micr0mega> the dump zvol seemed to take a lot of space somehow, but don't remember the details
[16:19:46] <tsoome> only because of loading up smf database, but you can see that
[16:19:55] <longcat> and building xorg cache
[16:20:03] <DanaG> My install took like 2-3 hours.
[16:20:15] <dkeav> o.0
[16:20:21] <DanaG> And the first boot took 45 minutes, with several service timeouts, and it giving up.
[16:20:31] <DanaG> So it didn't even finish booting.
[16:20:31] <dkeav> wow thats bad
[16:20:52] <longcat> what's the internal block size of your flash drive?
[16:20:57] <tsoome> that kind of time is definitely not normal
[16:21:00] <dkeav> cli only install took like 10 minutes and booted in about 2
[16:21:09] <longcat> maybe it's smaller than zfs' default 128KB block size, and the unnecessary writes are slowing it down
[16:21:20] <longcat> dkeav: on server hardware?
[16:21:25] <dkeav> yea
[16:21:32] <DanaG> I'm not sure, and I need to head off to work soon.
[16:21:35] <longcat> no shit, but this install is to a usb flash drive
[16:21:52] <DanaG> But it's a Patriot Xporter XT Boost. 8GB.
[16:21:53] *** syoyo_ has quit IRC
[16:22:28] <dkeav> i've installed to flash drives before too, again headless, wasn't no 2-3 hours
[16:22:36] <dkeav> and definitely booted rather fast actually
[16:22:38] <longcat> server flash drives?
[16:22:44] <dkeav> consumer
[16:22:51] <dkeav> kingston cheapy
[16:23:10] <longcat> interesting
[16:23:25] <DanaG> I think it was more near 2 than 3.
[16:23:29] <dkeav> it did take longer to install as usb2 is quite a bit slower than sata or sas obviously
[16:24:57] <DanaG> A gnome-disk-utility benchmark of the drive in Linux:
[16:25:14] <DanaG> 5-10MB/s write, ~40MB/s read, 0.7ms access time.
[16:25:22] <longcat> attention
[16:26:18] <longcat> nevermind
[16:26:55] <longcat> can the utility divine the internal block size?
[16:27:09] <longcat> 5/10MB is a relatively worthless benchmark, since it's sequential
[16:27:17] <EisNerd> anikin: strange
[16:30:31] *** Micr0mega has left #openindiana
[16:30:37] <DanaG> hdparm -I: Logical/Physical Sector size: 512 bytes
[16:30:44] <DanaG> gotta get ready to go.
[16:31:41] <longcat> i doubt the internal flash sector is 512 bytes
[16:32:19] <longcat> the drive will happily write a 4kb flash sector 8 times if you write 8 512 byte blocks to it
[16:34:00] *** DanaG has quit IRC
[16:34:01] *** skeeziks has joined #openindiana
[16:34:52] *** gea has quit IRC
[16:42:07] <EisNerd> ok I have a strange problem, one of my OI boxes doesn't get an interactive logon done (ssh user@box hangs when the prompt should appear)
[16:42:26] <EisNerd> but ssh user@box ls works immediately
[16:42:32] <EisNerd> some ideas?
[16:49:59] <quasi> try removing your .profile and if that doesn't work, try another shell
[16:50:32] <quasi> (.profile or whatever your shell of choice uses)
[16:50:50] <quasi> usually it's some silly setting
[16:51:11] <quasi> I've also seen it be quota checks gone wild
[16:51:59] <EisNerd> quasi: it was working just minutes ago
[16:52:33] <EisNerd> when doing a ssh user@box bash
[16:52:54] <EisNerd> I get into an interactive session of sorts, but with no prompt
[16:53:39] <quasi> ah, I seem to recall seeing that before on linux. had to remote power cycle the box and then it was fine after
[16:57:05] <EisNerd> hm would be interesting to not reboot the box
[16:57:19] <EisNerd> and especially to figure out why this happens
[16:57:19] <longcat> out of files?
[16:57:30] <EisNerd> hm
[16:57:37] <EisNerd> how to check this?
[16:57:49] <EisNerd> maybe out of terminals
[16:58:02] <longcat> does /dev/stdin and /dev/stdout exist?
[16:58:11] <quasi> well, you could try restarting sshd
[16:58:39] <EisNerd> ssh box ls /dev/stdi works
[16:58:44] <EisNerd> stdin
[16:59:02] <longcat> oh right, it wouldnt be that
[16:59:32] <longcat> being out of ptys would probably do it
[16:59:39] <EisNerd> ok how to check this
[16:59:59] *** anikin has quit IRC
[17:02:25] <longcat> check dmesg for errors
[17:04:49] <DontKnwMuch> what is a hot spare good for.. I am just thinking out loud, it is running all the time *empty* and is probably worn out too, isn't it better to just plug a new drive in.
[17:05:42] <EisNerd> DontKnwMuch: if you are there to do it when it is needed?
[17:06:15] <DontKnwMuch> ah.. ok... this is one reason ;)
[17:06:40] <DontKnwMuch> is it possible to spin it down somehow in OI?
[17:07:28] <EisNerd> and a hot spare is just spinning, not used, so the wear isn't that high
[17:07:58] <longcat> no, it's not better to have a new drive
[17:08:09] <EisNerd> DontKnwMuch: spin down is possible but then you have to take care that it spins up when it is needed
[17:08:14] <longcat> google search shows the drive failure curve is a U curve
[17:08:34] <longcat> google's research
[17:09:15] <longcat> plus, it's better to have a midlife or old drive now than a (let's assume it wont die) new drive later
[17:10:00] *** myrkraverk has quit IRC
[17:10:26] <EisNerd> DontKnwMuch: additionally the idea is that the hot spare drive is there to jump in and you replace the broken disk asap, so there is again a hot spare for the next drive failure
[17:10:49] <EisNerd> so you just need a drive a bit less worn out than the failing one
[17:10:53] <DontKnwMuch> ok. U curve makes sense
[17:11:25] <DontKnwMuch> But at a new setup all the drives are the same... which is bad by itself right
[17:11:40] *** myrkraverk has joined #openindiana
[17:11:40] *** myrkraverk has joined #openindiana
[17:11:42] <EisNerd> lots of people fail at replacing the faulty disk asap, as they think the array is still there
[17:12:17] <longcat> interesting way of looking at it DontKnwMuch
[17:12:19] <EisNerd> DontKnwMuch: no, in real production half the disks are replaced after 3 years
[17:12:40] <EisNerd> DontKnwMuch: and the other half in normal turn after 5
[17:13:01] *** raichoo has quit IRC
[17:13:06] <EisNerd> this is where the cheap server drives on ebay come from
[17:13:18] <DontKnwMuch> and the other 70% while they fail in the 5 years :)
[17:13:20] <EisNerd> used but not broken
[17:13:52] <EisNerd> normally no real server drive fails in 5 years
[17:14:12] <EisNerd> if it does it has been broken from the first day
[17:14:32] <EisNerd> therefore those drives have a 5 year warranty
[17:14:43] <EisNerd> if it is up replace it
[17:15:06] <longcat> consumer drives are very high density and if you look at smart statistics, hardware ecc is used to recover virtually every sector read... server drives are low enough density that ecc may never be used until the drive dies
[17:15:34] <DontKnwMuch> True. and interesting. and luck needed ;)
[17:16:31] <EisNerd> hm not really just use really good drives and you don't need that much luck
[17:16:38] *** myrkraverk has quit IRC
[17:16:56] *** Naresh has quit IRC
[17:19:17] *** hsp has joined #openindiana
[17:19:19] *** bens1 has quit IRC
[17:21:14] *** kimc has joined #openindiana
[17:24:59] *** cruisereg has quit IRC
[17:25:48] *** cruisereg has joined #openindiana
[17:27:47] *** gea has joined #openindiana
[17:30:15] *** kart_ has quit IRC
[17:31:36] *** kart_ has joined #openindiana
[17:32:44] *** gea has quit IRC
[17:37:14] <DontKnwMuch> how can a normal file copy (no compress/dedup) use all the cpu available.. this is not normal I believe (smb)
[17:39:13] <RoyK> it shouldn't unless something is wrong
[17:39:21] <RoyK> what sort of drives?
[17:39:25] <RoyK> controllers?
[17:42:15] <DontKnwMuch> LSI SAS2008, 3TB Hitachi cheap 7200RPM drives
[17:42:44] <DontKnwMuch> ashift=12
[17:42:57] <DrLou> EisNerd: doesn't ssh - get you to interactive?
[17:45:05] *** kart_ has quit IRC
[17:51:15] *** rev909 has quit IRC
[17:54:00] *** merzo has quit IRC
[17:56:30] *** raichoo has joined #openindiana
[17:56:31] <dkeav> DontKnwMuch: is the controller running in IT mode?
[17:57:16] <DontKnwMuch> yes
[18:00:57] *** kart_ has joined #openindiana
[18:04:50] <RoyK> DontKnwMuch: IIRC Hitachi Deskstars use 512b sectors, so no need for ashift=12
[18:05:39] <DontKnwMuch> I know, but I had the zpool which made it 12... so...
[18:06:34] <RoyK> what's the pool layout?
[18:07:35] *** kart_ has quit IRC
[18:08:09] <DontKnwMuch> 8 drives in raid 10
[18:08:29] *** kart_ has joined #openindiana
[18:11:23] <RoyK> should be good
[18:12:18] <RoyK> I have a similar setup
[18:12:31] <RoyK> and that's _fast_
[18:12:43] <RoyK> hitachi deskstar 1TB drives
[18:13:31] <quasi> deathstar
[18:13:32] <dkeav> with ssd slogs and ssd caches, i'll bet
[18:13:37] <dkeav> nice
[18:13:46] <RoyK> quasi: that was 10 years ago...
[18:14:07] <RoyK> dkeav: yep - C300 for the L2ARC and OCZ Vertex 3 for the SLOG
[18:14:18] *** akamit has quit IRC
[18:14:28] <quasi> RoyK: still having nightmares about it
[18:14:30] <dkeav> i'm jelly
[18:14:50] <tsoome> RoyK: nfs?
[18:14:56] <RoyK> nfs and cifs
[18:15:04] *** akamit has joined #openindiana
[18:15:15] <tsoome> have you benchmarked it?
[18:15:28] <RoyK> no
[18:15:34] <DontKnwMuch> slogs :) I envy you :)
[18:15:39] <DontKnwMuch> very nice
[18:15:52] <RoyK> but I had a few scientists start some rather bad jobs from their clients and they were slightly impressed
[18:16:13] <tsoome> someone had issues with creating pileload of small files (~1kb), would be nice to know how that OCZ will do...
[18:16:20] <RoyK> it's on a gigabit link and that's the bottleneck so far
[18:16:42] <dkeav> you should *snicker* bond another link
[18:16:45] <dkeav> hehe
[18:17:38] <RoyK> dkeav: I did, but there seems to be some issue with the firmware on the switch so it hung after a while
[18:17:47] <dkeav> ah
[18:17:52] <dkeav> cisco catalyst?
[18:17:56] *** DanaG has joined #openindiana
[18:18:02] <tsoome> with tests, that person got results like: without slog ~100 creates/s, with slog ~300 creates/s, and with sync=disabled ~1000 creates/s
[18:18:20] <DanaG> odd... trying to install from text installer, it stalls on "Preparing text install image for use"
[18:18:32] <RoyK> supermicro (similar to dell, produced by delta electronics) - we've tested those quite a bit and they work well except for LA
[18:18:49] <RoyK> tsoome: any particular test I should try?
[18:19:20] <dkeav> we had some LACP issues with some catalyst switches, but it was a firmware issue that was fixed in an update
[18:19:32] <RoyK> k
[18:19:40] <dkeav> the switches were rebooting under load
[18:19:44] <dkeav> wasn't pretty
[18:19:45] <tsoome> uhm well, if you care, i can mail you few lines of code:)
[18:19:53] <RoyK> sure
[18:20:18] <RoyK> for testing locally or over an nfs link?
[18:25:52] *** held has quit IRC
[18:28:57] *** reddi has quit IRC
[18:29:38] <tsoome> they did release it finally.
[18:31:55] <longcat> wow
[18:33:20] <tsoome> 227 create/s
[18:34:46] <tsoome> so it is about the same result.
[18:37:03] <RoyK> lemme test without the slog
[18:38:07] <RoyK> compression is enabled on pool/dataset, but no dedup
[18:38:23] <tsoome> the best result we got was with sync=disabled, which by itself is not the safest one, but if you really need to deal with small files...
[18:38:49] <RoyK> I just removed the SLOG for testing
[18:38:56] <tsoome> as that code does use random data, i wouldnt expect it to compress much:)
[18:39:55] <tsoome> 57 create/s :P
[18:40:14] <RoyK> on striped mirrors
[18:40:40] <RoyK> eight of them
[18:40:45] <tsoome> mirror does not matter there really - its all about sync mode code path
[18:40:58] <RoyK> right
[18:41:14] <sponix> gmail working for everyone else ?
[18:41:17] <tsoome> basically you should see similar result if you add O_SYNC on open
[18:41:17] <RoyK> reattached the slog, testing once more...
[18:41:44] <tsoome> on local system, that is
[18:42:09] <RoyK> hm.. lemme test that
[18:42:51] <tsoome> if you wanna see async speed over nfs, zfs set sync=disabled:)
[18:43:16] *** |AbsyntH| has quit IRC
[18:43:28] <tsoome> uhm
[18:43:35] <tsoome> with slog, that is?
[18:43:39] <RoyK> yep
[18:43:44] <RoyK> lemme try without
[18:44:06] <tsoome> hm, i wonder if O_DSYNC is worse or not...
[18:45:02] <tsoome> but O_SYNC result was quite nice...
[18:45:53] <tsoome> try O_DSYNC as well:D
[18:46:15] <DanaG> ah, figured out my stall... cd-rw fail.
[18:46:15] <RoyK> what's dsync?
[18:46:25] <tsoome> data integrity
[18:46:45] <RoyK> O_SYNC | O_DSYNC or just the latter?
[18:46:59] *** sponix has quit IRC
[18:48:06] <tsoome> hm, ah, O_SYNC is both file and data integrity, so O_DSYNC is weaker
[18:48:28] <tsoome> but it means the nfs-zil code path sucks really bad.
[18:48:29] <RoyK> ok, no need to test dsync, then, i guess
[18:49:30] <tsoome> as O_SYNC locally did create 2000 files/s and over nfs you got like 250/s
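The small-file create benchmark tsoome mailed around ("few lines of code", random data, O_SYNC open) is not shown in the log, but something in its spirit can be sketched. This is an assumed reconstruction, not tsoome's actual code; the file count, size, and directory are parameters, and the absolute creates/s depend entirely on the filesystem, slog, and sync settings being compared.

```python
# Minimal small-file create benchmark in the spirit of the one discussed:
# time how many ~1 KiB files per second can be created with O_SYNC.
# This is a sketch, not the actual test code mailed around in-channel.
import os
import time

def create_rate(directory, count=100, size=1024, sync=True):
    """Create `count` files of `size` random bytes; return files per second."""
    data = os.urandom(size)  # random payload, so compression won't flatter the result
    flags = os.O_WRONLY | os.O_CREAT | (os.O_SYNC if sync else 0)
    start = time.time()
    for i in range(count):
        fd = os.open(os.path.join(directory, "f%05d" % i), flags, 0o644)
        os.write(fd, data)
        os.close(fd)
    elapsed = time.time() - start
    return count / elapsed if elapsed > 0 else float("inf")
```

Running it twice, once with `sync=True` and once with `sync=False`, gives a rough local analogue of the slog vs sync=disabled comparison in the numbers above.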
[18:51:36] <RoyK> the test was done from an oldish machine - lemme check the load on it while running
[18:51:38] <tsoome> did you try with sync=disabled as well over nfs?
[18:52:17] <RoyK> let me test a few things first
[18:53:17] <RoyK> sync=disabled - is that set per dataset?
[18:53:29] <RoyK> btw, the load on the client is like 21%
[18:54:01] <RoyK> single 1,5GHz SPARC thing
[18:54:16] <RoyK> Sun Fire V215
[18:54:25] <RoyK> aren't those rather old?
[18:54:56] <RoyK> oh, and the client is on 100Mbps
[18:55:04] <RoyK> but I doubt that's the bottleneck
[18:55:14] <tsoome> yes its per dataset
[18:56:17] <tsoome> its like the old zil_disable knob, but per dataset and safer to manage than poking the kernel with mdb:)
[18:57:55] <RoyK> that's nfs with sync disabled
[18:58:01] <tsoome> 384/s guess thats where the load will start to kick in
[18:58:01] <RoyK> so no big deal, really
[18:58:36] <tsoome> as the difference with ssd is not that huge in your case...
[18:58:57] <DanaG> Okay, I burned a fresh CD, and it still stalls on "Preparing text install image for use".
[18:59:41] <RoyK> tsoome: what do you mean? when I tried to remove the slog, the speed wasn't very much to brag about
[19:01:52] <tsoome> well - your tests were like 55/s - 250/s - 380/s; but local with ssd and O_SYNC was 2000/s. the other guy had the series 100/s - 300/s - 1000/s (without slog - slog - sync=disabled)
[19:01:56] *** held has joined #openindiana
[19:02:15] *** drajen has quit IRC
[19:02:23] <tsoome> for some reason in your tests the difference with slog versus sync=disabled wasnt that big
[19:02:42] <tsoome> but it could be network and client/server loads
[19:03:02] <tsoome> hard to tell, as the test environments are different:)
[19:04:02] <RoyK> anyway - I can confirm that the slog helps a lot, which is good
[19:04:05] <tsoome> but still the difference 250/s versus 2000/s is quite huge one.
[19:04:09] <tsoome> yes, indeed
[19:04:24] *** hajma has joined #openindiana
[19:04:31] <RoyK> lemme test from linux...
[19:04:31] <bdha> It's funny how much L2ARC can help, even when you (theoretically) have 5G free for ARC to consume.
[19:04:52] <bdha> SSD L2ARC died in a box, and jobs that used to take 2m now take 30-50m.
[19:05:05] <bdha> Another box, L2ARC died and now users can't log in, load is so random and high.
[19:05:08] <tsoome> on my own tests i got about 300/s over fast ethernet as well
[19:05:16] <bdha> (IMAP load)
[19:05:57] <jkimball4> what's the hotkeys package supposed to do for me?
[19:06:23] <jkimball4> i saw xorg complain about it not being there, but i don't see that any of my supposed hotkeys working with it installed
[19:06:42] <alanc> warm the keys so your fingers don't get cold
[19:07:07] <alanc> oh, the hotkeys Xorg module makes the ACPI special keys on the Toshiba laptops work
[19:07:18] <jkimball4> won't do much for my thinkpad then :)
[19:07:47] <alanc> I think they were specific to the ACPI methods from the toshiba bios, never looked much at the kernel side underpinnings
[19:07:54] <jkimball4> would be nice for some of these thinkpad specialty features to work like the middle button or the page back/forward keys
[19:08:07] <jkimball4> xmodmap perhaps?
[19:08:07] <RoyK> tsoome: that's from a linux machine on a gigabit connection to the server
[19:08:09] <alanc> but it was so things like display switch key would run the randr magic to enable/disable external displays
[19:08:41] <tsoome> and slog or sync=disabled?
[19:08:51] <RoyK> with sync
[19:09:01] *** gea has joined #openindiana
[19:09:04] <tsoome> 400/s. interesting
[19:09:19] <tsoome> you have b148?
[19:09:42] <RoyK> yep
[19:10:31] *** sponix has joined #openindiana
[19:12:04] <tsoome> trying to get config details from the guy:) wonder how he got 1000/s :P
[19:13:02] <jkimball4> is thinkpad middle button scrolling something i'd setup in xorg.conf or somewhere else?
[19:15:27] <tsoome> diff is like 50 creates... well, at least the results you get are consistent:)
[19:16:46] <DanaG> So, how do I get the darn CD to boot?
[19:17:31] <RoyK> DanaG: for reference, can you try to boot it on another machine?
[19:20:54] <DanaG> Sure, trying that now.
[19:21:32] <DanaG> ah, that seems to have worked.
[19:22:48] *** Alasdairrr is now known as AlasAway
[19:28:11] <DanaG> error: ... fdisk part of TI failed
[19:38:59] <DanaG> hmm, tried again with a different flash drive, and that worked.
[19:39:33] *** Naresh has joined #openindiana
[19:40:01] *** Botanic has quit IRC
[19:40:19] <dkeav> DanaG: perhaps your flash drive is a turd after all
[19:42:38] <DanaG> I got two of the same model flash drive. And it seems like the one that DIDN'T work was blank, but the one that DID work now is the one that was slow booting earlier.
[19:43:07] <RoyK> it's in the nature of all things to eventually die
[19:43:13] <RoyK> sooner or later :P
[19:45:39] <dkeav> usually more sooner these days
[19:46:00] <dkeav> yay! mass market economy of crap
[19:46:11] <dkeav> but cheap, crap
[19:50:55] *** dijenerate has quit IRC
[19:52:03] *** Naresh has quit IRC
[19:59:19] *** russiane39 has joined #openindiana
[20:00:18] *** sergefonville has joined #openindiana
[20:00:29] <sergefonville> good evening :D
[20:01:19] <sergefonville> my php problem is solved entirely :D
[20:01:25] <sergefonville> in case anyone cares :P
[20:02:02] <dkeav> alanc: thanks
[20:03:35] *** Botanic has joined #openindiana
[20:05:39] *** Naresh has joined #openindiana
[20:14:55] *** Naresh` has joined #openindiana
[20:16:09] *** kart_ has quit IRC
[20:16:27] *** Naresh has quit IRC
[20:17:31] *** Naresh`` has joined #openindiana
[20:18:19] <tomww> alanc: runs on an ipad? would be the only app I would really need an ipad for :)
[20:19:00] <quasi> too bad there's unlikely ever to be an android version
[20:19:05] <alanc> tomww: yes, mainly for iPad (not sure if it even works on the smaller devices)
[20:19:31] <tomww> oh yeah, just read the first word of the page... nice .)
[20:19:38] *** Naresh` has quit IRC
[20:23:25] *** sergefonville1 has joined #openindiana
[20:27:23] *** sergefonville has quit IRC
[20:27:44] *** sergefonville1 has quit IRC
[20:28:52] * tomww is arguing with the minister of finance that he now needs an iPad
[20:29:25] *** sergefonville has joined #openindiana
[20:30:04] *** Naresh`` has quit IRC
[20:30:38] *** Naresh`` has joined #openindiana
[20:30:56] *** reddi has joined #openindiana
[20:32:46] <dkeav> ahem 2 ipads, 2
[20:33:20] *** sergefonville1 has joined #openindiana
[20:33:38] <tomww> hehe :)
[20:34:21] <DanaG> RoyK: the drives are brand new, though.
[20:36:05] <dkeav> DanaG: another thing i have noticed is not all bios do well with GPT partitioned usb keys and booting
[20:36:09] <dkeav> just sayin
[20:36:22] *** sergefonville has quit IRC
[20:36:22] <DanaG> Does it GPT partition them?
[20:36:30] *** sergefonville has joined #openindiana
[20:36:30] <dkeav> probably
[20:37:16] <dkeav> i have a mobo that will NOT even get through POST with a GPT partitioned usbkey in it
[20:37:39] *** sergefonville1 has quit IRC
[20:39:58] <DanaG> I don't have such issues with my systems.
[20:40:08] <DanaG> My laptop even offers EFI boot mode, though it breaks stuff like the console.
[20:41:03] *** sergefonville has quit IRC
[20:41:46] *** sergefonville has joined #openindiana
[20:49:01] *** miine has joined #openindiana
[20:52:02] *** akamit has quit IRC
[20:58:54] *** Naresh``` has joined #openindiana
[21:00:22] *** Naresh`` has quit IRC
[21:01:35] <miine> Hi. Does anybody know how much faster a 3 disk RAIDZ is compared to a 4 disk RAIDZ ?
[21:03:04] <RoyK> DanaG: new drives can fail too
[21:05:06] <DanaG> hmm, on this other computer, the install is taking a saner amount of time.
[21:05:20] <tsoome> faster in which scenario?
[21:05:58] *** sergefonville1 has joined #openindiana
[21:07:07] <konobi> DanaG: there's always plop too
[21:07:09] <miine> tsoome: just read that RAIDZ performance is optimal with 3 disks ....
[21:07:20] *** Naresh``` has quit IRC
[21:07:35] <tsoome> write performance, for randomish writes
[21:07:37] <tsoome> yes
[21:08:14] *** sergefonville has quit IRC
[21:08:29] <miine> tsoome: ok. that wouldn't matter to me as it will be used for backups of large files...
[21:09:07] <tsoome> in fact, with more large streaming writes, a wider raid is nice
[21:09:33] <tsoome> but you can test that yourself:D
[21:09:55] <miine> tsoome: i thought so. but at that point the GB ethernet will be the bottleneck I think...
[21:10:04] <tsoome> that too, sure
[21:12:14] * yalu is testing out if openindiana would be significantly faster than nexenta writing to a deduped zpool
[21:12:41] <tsoome> for a backup application it's the trade-off of wider stripes vs. protection, double or maybe even triple parity...
[21:13:05] <tsoome> yalu: how much ram you have?
[21:13:41] <tsoome> and what kind of data are you storing?
[21:14:04] <yalu> tsoome: 2 GB but a relatively small dataset. It's also quite memory-starved (arc summary suggests 89% cache hits but I don't believe it)
[21:14:13] <miine> tsoome: just seen that my customer had no backups since the last 10 months (he didn't know). so RAIDZ even with single parity will be a big win :D
[21:14:20] *** hajma has quit IRC
[21:14:33] *** freedomrun has joined #openindiana
[21:14:47] <tsoome> 2GB means you have 1GB for arc, means you have 250MB for arc metadata
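tsoome's halving arithmetic, combined with the commonly quoted ballpark of roughly 320 bytes per in-core dedup-table entry (an assumption, not a spec figure), gives a rough ceiling on how much unique data a box can dedupe before the DDT spills out of RAM:

```shell
# Rough DDT sizing sketch: ARC ~ RAM/2, ARC metadata ~ ARC/4 (tsoome's rule
# of thumb), ~320 bytes per in-core dedup-table entry, 128K records assumed.
ddt_unique_gb() {
  ram_mb=$1
  meta_mb=$(( ram_mb / 2 / 4 ))                 # 2048 -> 256, matching "250MB" above
  entries=$(( meta_mb * 1024 * 1024 / 320 ))    # unique blocks tracked in RAM
  echo $(( entries * 128 / 1024 / 1024 ))       # 128KiB per block -> GiB
}
ddt_unique_gb 2048    # the 2 GB box in question -> ~102
```

So a 2 GB machine can comfortably cover only on the order of 100 GiB of unique 128K-record data, which is why the dataset size and dedup ratio questions below matter.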
[21:14:58] <yalu> which I tuned, of course
[21:15:38] <miine> and Dell is fire-selling their old T110's beginning at 199 EUR. Dell should bundle OpenIndiana :D
[21:15:39] <tsoome> dedupe ratio?
[21:15:54] *** hajma has joined #openindiana
[21:16:12] <yalu> tsoome: 9 to 1 atm. a bit unrealistic since the original copy of my data was a lot of hardlinks
[21:16:41] <yalu> it's just a test anyway
[21:16:41] <tsoome> 9:1 is reasonable, if you can keep it:D
[21:17:57] <yalu> I'd reach a lot less if I had used the -H option to rsync to respect hardlinks, anyway it doesn't change the size of the dedup table
[21:19:40] *** hajma has quit IRC
[21:20:26] <yalu> so I'm exploring the lower limits of disk speeds achievable with relatively modern hardware :D
[21:22:40] <tsoome> :D
[21:24:02] <yalu> 2x ide disk (250+120MB) joined using lvm, added as a raw disk in virtualbox, which contains 10.8 GB of unique data after more than 21 hours
[21:34:50] <dkeav> miine: linky to firesale?
[21:35:24] <miine> dkeav: dell.com or dell.de ... go to business store.
[21:36:28] <miine> have to leave. see you later...
[21:36:36] *** miine has quit IRC
[21:42:21] *** hajma has joined #openindiana
[21:43:41] <DanaG> hmm, with raid-z of three 2TB drives, how much space would I get?
[21:47:33] <tsoome> ~4TB.
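tsoome's "~4" is just the disk count minus the parity count, times the drive size; a quick sketch that ignores ZFS metadata overhead and TB-vs-TiB marketing shrinkage:

```shell
# Approximate usable raidz capacity: parity disks come off the top.
raidz_usable_tb() {
  disks=$1; size_tb=$2; parity=${3:-1}   # parity defaults to 1 (raidz1)
  echo $(( (disks - parity) * size_tb ))
}
raidz_usable_tb 3 2      # three 2TB drives at raidz1 -> 4
raidz_usable_tb 4 2 2    # four 2TB drives at raidz2 -> also 4
```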
[21:48:13] <DanaG> Cool.
[21:48:43] <DanaG> Alternately, I might go with a pair of mirrored drives, plus some for offsite.
[21:48:53] <DanaG> Can you do a 3-way mirror?
[21:49:47] <DanaG> My ideal goal: two drives always present, and a third drive offsite. Would it be useful to rotate which drive is offsite?
[21:55:45] *** skeeziks has quit IRC
[21:57:47] <tsoome> yes you can do 3-way, and you can split mirror (man zpool)
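Both steps tsoome points at live in zpool(1M); a sketch of the offsite rotation with placeholder pool and device names (all invented here):

```shell
# Placeholder names; see zpool(1M) for details on attach/split.
# Start with a two-way mirror:
zpool create tank mirror c0t0d0 c0t1d0
# Attach a third disk to an existing leg to grow it into a 3-way mirror:
zpool attach tank c0t0d0 c0t2d0
# Split one leg off as its own standalone pool to carry offsite:
zpool split tank offsite c0t2d0
# On the offsite machine (or back here later): zpool import offsite
```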
[22:02:32] <dkeav> "split mirror"
[22:02:33] <dkeav> ?
[22:15:10] <DanaG> Cool. That's a good term to google -- thanks.
[22:15:36] <tsoome> man zpool.
[22:17:14] *** skeeziks has joined #openindiana
[22:23:49] <dkeav> heh i was thinking something else
[22:23:53] <dkeav> brainfart
[22:27:58] *** jpg has quit IRC
[22:28:13] *** freedomrun has quit IRC
[22:32:33] <dkeav> hmm thats better than the old way for sure
[22:32:56] <dkeav> seemed kinda dirty to detach and force an import elsewhere
[22:34:19] <tsoome> :)
[22:34:25] <tomww> well, a 3-way mirror with one copy offsite... wouldn't it be much better if the offsite copy were updated more regularly with rsync or zfs send | zfs recv?
[22:35:53] <dkeav> i suppose if you have another backup scheme like vtl or something for data security, then having an offsite mirror every so often is just disaster recovery convenience if shit hits the fan
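tomww's send|recv alternative, sketched with hypothetical dataset and host names:

```shell
# Hypothetical names throughout. Initial full replication to the offsite box:
zfs snapshot -r tank/data@base
zfs send -R tank/data@base | ssh offsitebox zfs receive -F backup/data
# Thereafter, ship only the delta between snapshots:
zfs snapshot -r tank/data@today
zfs send -R -i @base tank/data@today | ssh offsitebox zfs receive backup/data
```

Unlike rotating a split mirror leg, the incremental send only moves changed blocks, so it can run as often as the link allows.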
[22:42:19] <sergefonville1> with every success a new challenge :P
[22:42:31] <sergefonville1> now I can't setup smb shares...
[22:42:41] <dkeav> why not
[22:42:42] <sergefonville1> or more, I can't open them
[22:42:52] <sergefonville1> I can see them
[22:42:57] <sergefonville1> but that's it
[22:43:09] <dkeav> how did you share them
[22:44:57] <sergefonville1> zfs set sharesmb=websites rpool/websites
[22:45:28] <sergefonville1> where is logging for smb placed
[22:45:30] <sergefonville1> the auth errors
[22:48:05] <tsoome> ad or wg mode?
[22:51:36] <sergefonville1> wg I think, I did not do any additional setup
[22:54:23] <richlowe> sergefonville1: zfs set sharesmb=name=...
[22:54:49] <richlowe> where that's an ellipsis, not literal.
[22:56:01] <sergefonville1> sorry, there was a name= in there :P
[22:56:20] <sergefonville1> zfs set sharesmb=name=websites rpool/websites
[22:56:25] <sergefonville1> from the history
[22:57:23] <sergefonville1> where is the error log?
[23:00:05] *** konobi has left #openindiana
[23:02:50] <richlowe> would go to syslog, I'd hope
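Following richlowe's pointer: the SMB server logs through syslog, so assuming a default syslog.conf its messages land in /var/adm/messages; SMF also keeps a per-service log that svcs can point you at:

```shell
# Fish SMB-related lines out of the syslog stream (default config assumed):
grep -i smb /var/adm/messages | tail -20
# svcs -xv prints the service state plus the path of its own SMF log file:
svcs -xv network/smb/server
```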
[23:08:31] <sergefonville1> the last 50 lines of syslog are filled with sendmail
[23:09:10] <tsoome> if its wg, did you set pam and change password after that?
[23:09:40] <sergefonville1> I don't think so
[23:09:48] <sergefonville1> how and where do i do that?
[23:10:43] *** bens1 has joined #openindiana
[23:11:14] <sergefonville1> problem solbed
[23:11:17] <sergefonville1> solved*
[23:11:22] <sergefonville1> thank you :D
[23:11:23] <sergefonville1> again :P
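The fix tsoome was alluding to: in workgroup mode the server needs an SMB-style password hash, which passwd(1) only generates once the SMB pam module is enabled. The usual documented recipe (username invented here):

```shell
# Enable SMB password hash generation, then reset the password so the
# hash actually gets stored ('webuser' is a made-up example account):
echo 'other password required pam_smb_passwd.so.1 nowarn' >> /etc/pam.conf
passwd webuser
```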
[23:12:31] <sergefonville1> my next challenge
[23:12:45] <sergefonville1> does oi support force group and force user for a share?
[23:13:45] *** axisys has joined #openindiana
[23:20:51] *** bens1 has quit IRC
[23:21:02] <tsoome> no
[23:23:09] <tsoome> you need to set up correct default acl so the permissions will be inherited
[23:28:19] <sergefonville1> a default acl...
[23:28:25] <sergefonville1> something to google :P
[23:28:58] <sergefonville1> File system doesn't support aclent_t style ACL's
[23:29:11] <sergefonville1> getfacl .
[23:29:16] <sergefonville1> while in websites
[23:30:14] <longcat> use /usr/bin/chmod and /usr/bin/ls to set/view acls
[23:30:43] <longcat> ie: /usr/bin/ls -V .
[23:31:18] <longcat> man /usr/bin/chmod has a shit ton of examples
[23:31:38] <longcat> 27 examples
[23:31:57] <longcat> it's one of the better man pages i've ever seen
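longcat's pointers in practice, with invented paths and principals; the inheritance flags are what make new files and directories under the share pick up the entry, per the examples in that chmod man page:

```shell
# View the NFSv4-style ACL on the share root (dataset path is an example):
/usr/bin/ls -Vd /rpool/websites
# Add an inheritable allow entry: file_inherit/dir_inherit propagate it to
# everything created below, standing in for samba-style "force group":
/usr/bin/chmod A+group:webdevs:read_data/write_data/execute:file_inherit/dir_inherit:allow /rpool/websites
```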
[23:38:29] *** DanaG has left #openindiana
[23:43:52] <sergefonville1> it seems that the mountpoint of the filesystem does not have any permissions
[23:45:07] <sergefonville1> if I ls on dir high it does have an owner and a group and 775
[23:45:40] *** skeeziks has quit IRC
[23:46:53] *** raichoo has quit IRC
[23:48:42] *** skeeziks has joined #openindiana
[23:50:11] <AlasAway> VLC needs it
[23:50:13] <AlasAway> or wants it
[23:50:25] <longcat> looks like a math library
[23:51:15] <AlasAway> yeah
[23:55:17] *** birvin has joined #openindiana
[23:59:21] *** hajma has quit IRC