[00:01:07] *** melbogia1 has quit IRC
[00:04:21] *** melbogia has joined #opensolaris
[00:05:45] *** fisted_ has joined #opensolaris
[00:06:48] *** fisted has quit IRC
[00:08:51] *** ingenthr has quit IRC
[00:11:58] *** fisted_ is now known as fisted
[00:17:42] *** deet1 has quit IRC
[00:24:31] *** ewdafa has quit IRC
[00:36:09] *** kimc has quit IRC
[00:54:50] *** wdp_ has quit IRC
[00:55:25] *** wdp_ has joined #opensolaris
[01:05:18] *** brendang has quit IRC
[01:08:04] *** InTheWings has quit IRC
[01:15:48] *** jamesd_laptop has joined #opensolaris
[01:18:07] *** jamesd2 has quit IRC
[01:22:26] *** brendang has joined #opensolaris
[01:33:56] *** Mech0z has quit IRC
[01:36:05] *** jimmy1980 has quit IRC
[01:37:17] *** wdp__ has joined #opensolaris
[01:40:16] *** wdp_ has quit IRC
[01:57:01] *** russiane39 has joined #opensolaris
[02:02:43] *** FastJack has quit IRC
[02:07:32] *** FastJack has joined #opensolaris
[02:23:50] *** FrankLv has quit IRC
[02:32:32] *** mmu_man has quit IRC
[02:46:05] *** snuff-home has quit IRC
[02:49:01] *** Stellar has quit IRC
[02:49:30] *** stresstest4051 has joined #opensolaris
[02:49:37] *** stresstest4051 has quit IRC
[02:50:46] *** niq has quit IRC
[03:11:36] *** CodeWar has joined #opensolaris
[03:18:14] *** hajma has quit IRC
[03:27:23] *** jamesd__ has joined #opensolaris
[03:28:41] *** jamesd__ has quit IRC
[03:28:43] *** FrankLv has joined #opensolaris
[03:29:22] *** jamesd__ has joined #opensolaris
[03:29:51] *** jamesd_laptop has quit IRC
[03:38:13] *** FrankLv has quit IRC
[03:43:30] *** miine_ has joined #opensolaris
[03:45:28] *** miine has quit IRC
[03:45:29] *** miine_ is now known as miine
[03:50:18] <CodeWar> is this sufficient to export zfs as read-write: zfs set sharenfs=rw,nosuid rpool/home
[03:50:33] <CodeWar> showmount from another machine shows it but I can't write to this drive from another machine
[03:51:05] <jamesd__> CodeWar, are you trying to write to the drive as root?
[03:51:17] <CodeWar> jamesd__, from the linux box? tried both
[03:51:51] <jamesd__> root on nfs == nobody... better to set the owner of the filesystem as a user with the same id on both systems.
[03:52:32] <CodeWar> can i export using cifs ?
[03:54:51] <jamesd__> yeah.. i think its zfs set sharecifs=rw ... check the man page... or google it. you may need to set the domain and/or workgroup first... its covered
[03:57:13] <richlowe> sharesmb, I think.
[03:58:36] <jamesd__> yeah... i haven't done cifs shares in a while..
[03:59:20] <CodeWar> I think I have to install some pkg? zfs get doesn't list sharesmb but it's possibly because it's solaris 10 11/06, busy googling
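The commands being discussed, pulled together as a sketch (dataset and host names are hypothetical; sharesmb requires the in-kernel SMB server from PSARC 2005/695, which is not in Solaris 10 11/06):

```shell
# Read-write NFS export; grant root access from a named client host,
# since root otherwise maps to nobody (jamesd__'s point):
zfs set sharenfs=rw,nosuid,root=linuxbox rpool/home

# SMB instead of NFS, on a release that has the kernel CIFS server:
svcadm enable -r smb/server
zfs set sharesmb=on rpool/home
```

An alternative to the root= option is chowning the dataset to a user whose uid matches on both machines, as jamesd__ suggests.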
[04:07:01] *** stresstest5538 has joined #opensolaris
[04:07:15] *** stresstest5538 has quit IRC
[04:07:36] *** snuff-home has joined #opensolaris
[04:15:06] <CodeWar> it says PSARC 2005/695 build 71 is when cifs support was added .. "Solaris 10 11/06 s10s_u3wos_10 SPARC" any idea whats the build number of my system from this output?
[04:18:57] *** CodeWar has quit IRC
[04:22:26] *** bobbyz has quit IRC
[04:24:14] *** Toiletbowl has joined #opensolaris
[04:36:20] *** bobbyz has joined #opensolaris
[04:36:21] *** FrankLv has joined #opensolaris
[04:39:03] *** stresstest5880 has joined #opensolaris
[04:42:13] *** stresstest5880 has quit IRC
[04:45:17] *** bobbyz_ has joined #opensolaris
[04:45:58] *** bobbyz has quit IRC
[04:51:46] *** miine has quit IRC
[04:57:10] *** st-6260 has joined #opensolaris
[04:59:18] *** CodeWar has joined #opensolaris
[05:03:18] *** fOB_2 has joined #opensolaris
[05:04:30] *** st-6337 has joined #opensolaris
[05:05:03] *** fOB has quit IRC
[05:05:11] *** deet has joined #opensolaris
[05:06:59] *** st-6337 has quit IRC
[05:07:33] *** deet has quit IRC
[05:11:04] *** st-6404 has joined #opensolaris
[05:12:22] *** st-6404 has quit IRC
[05:20:29] *** ingenthr has joined #opensolaris
[05:24:56] *** Toiletbowl has quit IRC
[05:28:12] *** st-6603 has joined #opensolaris
[05:30:10] *** Toiletbowl has joined #opensolaris
[05:36:24] *** Tpenta has quit IRC
[05:42:20] *** shivraj has joined #opensolaris
[05:42:35] <shivraj> im trying to run solaris 11 express ... installed via live cd
[05:42:56] <shivraj> X is not loading I believe trying to load for the first time, this is on a toshiba tecra m11
[05:53:07] *** Tpenta has joined #opensolaris
[06:00:57] <comay> shivraj i believe the m11 uses intel graphics - this particular chipset isn't directly supported by s11 express other than in vesa mode
[06:01:14] <comay> the next release does have support though for this particular laptop
[06:04:02] *** hjf has joined #opensolaris
[06:04:58] <hjf> anyone knows if the SIL3124 is supported/works ?
[06:06:07] <richlowe> It should
[06:07:53] <hjf> good
[06:22:49] *** Toiletbowl has quit IRC
[06:25:47] *** shivraj has quit IRC
[06:26:17] *** javashin has joined #opensolaris
[06:27:24] *** javashin has quit IRC
[06:28:57] *** Toiletbowl has joined #opensolaris
[06:34:18] *** Cobi has joined #opensolaris
[06:37:10] *** Zubby has quit IRC
[06:40:46] *** hjf has quit IRC
[06:58:10] *** jamesd_laptop has joined #opensolaris
[07:01:44] *** jamesd__ has quit IRC
[07:33:30] *** LabMonkey has quit IRC
[07:37:45] *** LabMonkey has joined #opensolaris
[07:51:56] *** Toiletbowl has quit IRC
[07:56:59] *** Toiletbowl has joined #opensolaris
[07:59:14] *** ewdafa has joined #opensolaris
[07:59:55] *** st-6603 has quit IRC
[08:06:40] *** st-8688 has joined #opensolaris
[08:08:37] *** mavhc has quit IRC
[08:12:02] *** st-8688 has quit IRC
[08:15:52] *** st-8806 has joined #opensolaris
[08:24:26] *** mavhc has joined #opensolaris
[08:36:55] *** victori has quit IRC
[08:37:06] *** CodeWar has joined #opensolaris
[08:44:02] *** hrist has quit IRC
[08:49:04] <CIA-49> SFE kohju: Change Pemission /usr/share and %{libdir}/*/pkgconfig
[08:51:17] *** victori has joined #opensolaris
[08:55:57] *** CodeWar has left #opensolaris
[09:02:16] *** st-8806 has quit IRC
[09:20:54] *** Crypticfortune has joined #opensolaris
[09:30:53] *** miine has joined #opensolaris
[09:35:04] *** CosmicDJ has quit IRC
[09:46:56] *** niq has joined #opensolaris
[09:51:16] *** Toiletbowl has quit IRC
[09:52:11] <CIA-49> SFE jurikm: SFExfwm4.spec: bump to 4.8.0, move to SFE from osol xfce
[10:11:41] *** tabasko_ has joined #opensolaris
[10:11:56] *** brendang_ has joined #opensolaris
[10:11:58] *** DerSaidi1 has joined #opensolaris
[10:12:00] *** het_ has joined #opensolaris
[10:12:37] *** tomww_ has joined #opensolaris
[10:12:49] *** slx86_ has joined #opensolaris
[10:14:26] *** Edgeman2 has joined #opensolaris
[10:15:15] *** nollan has quit IRC
[10:15:20] *** lewellyn has quit IRC
[10:15:20] *** Cobi has quit IRC
[10:15:30] *** nollan_ has joined #opensolaris
[10:16:22] *** brendang has quit IRC
[10:16:22] *** melbogia has quit IRC
[10:16:22] *** Edgeman has quit IRC
[10:16:22] *** robinbowes has quit IRC
[10:16:23] *** DerSaidin has quit IRC
[10:16:23] *** tabasko has quit IRC
[10:16:23] *** slx86 has quit IRC
[10:16:23] *** het has quit IRC
[10:16:23] *** tomww has quit IRC
[10:16:37] *** Cobi has joined #opensolaris
[10:16:53] * RoyK watches the snow
[10:17:02] *** blu has quit IRC
[10:17:26] *** nollan_ is now known as nollan
[10:17:43] *** blu has joined #opensolaris
[10:19:24] *** st-2451 has joined #opensolaris
[10:22:30] *** lewellyn has joined #opensolaris
[10:22:30] *** ChanServ sets mode: +o lewellyn
[10:23:42] *** melbogia has joined #opensolaris
[10:32:35] <CIA-49> SFE jurikm: SFExfce-utils.spec: bump to 4.8.0, move to SFE from osol xfce
[10:47:04] *** robinbowes has joined #opensolaris
[10:51:23] *** jamesd2 has joined #opensolaris
[10:51:24] *** jamesd_laptop has quit IRC
[11:00:06] <CIA-49> SFE jurikm: SFExfce4-appfinder.spec: bump to 4.8.0, move to SFE from osol xfce
[11:02:21] *** jamesd2 has quit IRC
[11:06:17] *** jamesd2 has joined #opensolaris
[11:10:42] *** stevel has quit IRC
[11:19:59] <CIA-49> SFE jurikm: SFEthunar-vfs.spec: initial spec SFEgtk-xfce-engine.spec: bump to 2.8.0, move to SFE from osol xfce
[11:20:11] *** mikefut has joined #opensolaris
[11:23:25] *** derchris has quit IRC
[11:23:48] *** derchris has joined #opensolaris
[11:25:58] *** hsp has joined #opensolaris
[11:28:41] <CIA-49> SFE jurikm: SFExfce-loginmgr.spec: move to SFE from osol xfce
[11:42:38] *** InTheWings has joined #opensolaris
[11:44:27] *** Edgeman2 is now known as Edgeman
[11:55:35] *** jamesd_laptop has joined #opensolaris
[11:56:32] *** jamesd_laptop has joined #opensolaris
[11:58:02] *** jamesd2 has quit IRC
[12:28:30] <CIA-49> SFE jurikm: SFExfce-terminal.spec: bump to 0.4.6, move to SFE from osol xfce
[12:30:52] *** mmu_man has joined #opensolaris
[12:53:27] *** AxeZ has joined #opensolaris
[13:06:52] *** symptom has joined #opensolaris
[13:10:09] *** InTheWings has quit IRC
[13:12:30] *** AxeZ has quit IRC
[13:20:29] *** tomww_ is now known as tomww
[13:28:31] <CIA-49> SFE jurikm: base-specs/ffmpeg.spec: bump to 0.6.2
[13:35:45] *** niq has quit IRC
[13:38:52] *** niq has joined #opensolaris
[13:48:48] *** Spencer_tt has joined #opensolaris
[13:48:49] *** Spencer_tt has quit IRC
[13:48:49] *** Spencer_tt has joined #opensolaris
[13:54:32] <eklof> Anyone running sol11express with a support licence who can tell me if the zfs version has been updated at all?
[13:55:21] *** hajma has joined #opensolaris
[13:58:12] <tsoome> This system is currently running ZFS pool version 31.
[14:03:20] <CIA-49> SFE jurikm: SFExfcalendar.spec: bump to 4.8.1, move to SFE from osol xfce
[14:06:22] *** InTheWings has joined #opensolaris
[14:10:57] *** InTheWings has quit IRC
[14:12:15] <CIA-49> SFE jurikm: SFExfce4-mixer.spec: bump to 4.8.0, move to SFE from osol xfce
[14:28:18] <CIA-49> SFE jurikm: SFExfcalendar.spec: fix ical build
[14:32:36] *** nonnooo has joined #opensolaris
[14:43:21] *** Andrew1 has quit IRC
[14:46:21] *** Andrew1 has joined #opensolaris
[14:50:08] <macros73> When you add a mirror to a pool already consisting of one mirror set, is there a way to resilver across the two mirror sets so that the data is distributed evenly?
[14:50:35] <eklof> I think not, it will only write new data to the new mirror
[14:51:10] <eklof> That's something you'll need the block-rewriting stuff to be able to do.
[14:51:20] <macros73> Bah. So I would need to move all the data off to another pool, and then back?
[14:51:25] <eklof> yepp
[14:51:32] <eklof> But why is that important?
[14:51:53] <eklof> the pool's total size is still the same?
[14:52:24] <macros73> If the data is evenly distributed, wouldn't the reads also then be evenly distributed?
[14:52:31] *** kimc has joined #opensolaris
[14:52:56] <eklof> Ah I see where you are going.
[14:53:34] <eklof> Yes that is a problem, the reads from old data will still not benefit from the additional devices.
[14:53:41] <eklof> No way around it unfortunately.
[14:53:57] <macros73> I've got 1.13T right now in this pool. Moving it to another pool would be a pain, but less of a pain now than if I wait another 6 months. :D
[14:54:21] <eklof> I've moved 4TB of data between pools a while ago, it was a pain.
[14:54:34] <eklof> Took forever (I have slow devices)
[14:54:42] <macros73> what do you consider slow?
[14:54:47] <eklof> But do you really need the added read speed ?
[14:54:56] <eklof> Well, it took like a week or so.
[14:55:05] <macros73> Not really, but I have nothing else to do this morning. :D
[14:55:51] <eklof> I'd say, just do nothing and add new data as is. Not worth the trouble if you already have sufficient read speeds
[14:56:16] <macros73> although if I've read the various blogs right, since I am using dedup on ~1T of data, I should have at least 32G of combined memory + l2arc to hold the dedup tables?
[14:56:38] <macros73> adding something for that might be a better use of time than refreshing the pool
[14:56:58] <eklof> Oh well, dedupe is a memory beast. I had to turn it off.
[14:57:00] <macros73> yeah, I think you are right. if I do it now, I'll want to do it again the next time I add capacity to the pool.
[14:57:26] <eklof> But that is not right, I've heard numbers of about 2GB of memory per TB of deduplicated data
[14:58:08] <eklof> Still very much, and not worth it if you don't have very fast and expensive disks :)
[14:58:28] <eklof> cheaper to add more disks than dedupe and buy the memory
[14:58:31] <macros73> I have 8G of memory in this home server. It didn't cost all that much really.
[14:58:42] <macros73> though yeah, for the money I could have prolly got more disk
[14:59:08] <eklof> I turned dedupe off. I had about 1TB and it took ages to scrub.
[14:59:14] <eklof> Like 130-140 hours :)
[14:59:24] *** Andrew1 has quit IRC
[14:59:40] <macros73> "20 TB of unique data stored in 128K records or more than 1TB of unique data in 8K records would require about 32 GB of physical memory."
[15:00:02] <macros73> How do I check the block size of the pool?
[15:00:40] <eklof> There is no fixed block-size in zfs i think
[15:00:42] <eklof> it's variable.
[15:00:51] <macros73> right, how do I see the average or something?
[15:00:54] <eklof> so it's very hard to calculate i guess
[15:00:59] <eklof> Have no clue :)
[15:01:00] <macros73> or, alternatively, see how much space the dedup table is using?
[15:01:32] <macros73> ah, arc_summary?
[15:02:21] <Stric> zdb -b yourpool might be able to give an average of block size
[15:04:07] <Stric> zdb -S yourpool gives simulated dedup stats, and also some block info (how many vs how much)
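Stric's zdb inspection commands, collected (pool name "tank" is hypothetical; both walks read every block and can take a long time on a large pool):

```shell
zdb -b tank                # walk all blocks: counts and average block size
zdb -S tank                # simulate dedup: duplicate histogram, projected table size
zpool get dedupratio tank  # the achieved ratio once dedup=on
```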
[15:05:17] <macros73> whoops, I have 4G, not 8G
[15:05:56] *** Andrew1 has joined #opensolaris
[15:06:15] <macros73> wait, this is wrong.
[15:06:17] <Stric> You will get into pain if you use dedup on that little memory (or at all)
[15:06:33] <Stric> prtconf|head
[15:06:39] <macros73> I installed an 8G kit. 2x4. Why is only 4 showing up?
[15:07:46] <macros73> shutting down to recheck the memory, hrm.
[15:10:14] <eklof> Wonder how that bp-rewrite work is going :)
[15:11:13] <macros73> OMG. Amazon/Crucial shipped me a 2X2G kit when I ordered a 2x4.
[15:11:23] <eklof> :(
[15:11:44] <eklof> And now you have used them, and can't send them back..... :(
[15:11:52] *** InTheWings has joined #opensolaris
[15:11:52] <macros73> and worse, the plastic mem boxes SAY it is the 2x4 kit!
[15:12:02] *** AxeZ has joined #opensolaris
[15:12:02] <eklof> what a fraud.
[15:12:40] <macros73> I'm still within 30 days, so either Amazon or my CC will make it right.
[15:12:49] <Stric> shouldn't be a problem
[15:13:28] <macros73> just confirmed. the boxes I received were for the 2x4. The modules inside were 2x2.
[15:14:24] *** murakawa has joined #opensolaris
[15:14:43] <macros73> unless i got modules mixed when i was installing, checking
[15:15:44] *** Andrew1 has quit IRC
[15:16:40] <macros73> These are non-ECC modules, even.
[15:16:56] <macros73> i /had/ to have gotten them transposed
[15:24:27] *** Andrew1 has joined #opensolaris
[15:30:57] *** jamesd_laptop has quit IRC
[15:31:35] *** jamesd_laptop has joined #opensolaris
[15:33:14] *** jamesd_laptop has quit IRC
[15:33:48] *** jamesd_laptop has joined #opensolaris
[15:34:51] *** jamesd_laptop has quit IRC
[15:35:28] *** Andrew1 has quit IRC
[15:35:30] *** jamesd_laptop has joined #opensolaris
[16:01:44] *** hjf has joined #opensolaris
[16:06:45] *** hohum has quit IRC
[16:10:51] <macros73> yes, I transposed the modules. Doh.
[16:12:23] *** smrt has quit IRC
[16:12:36] *** hohum has joined #opensolaris
[16:12:38] *** smrt has joined #opensolaris
[16:14:18] *** javashin has joined #opensolaris
[16:16:52] *** Spencer_tt has quit IRC
[16:19:15] *** javashin has quit IRC
[16:21:30] *** FrankLv_ has joined #opensolaris
[16:23:07] *** hsp has quit IRC
[16:24:14] *** FrankLv has quit IRC
[16:27:39] <CIA-49> SFE tom68: SFExmlto.spec: add patch2, disable xmlto verification (temprarily) SFEgit.spec: fix compiler options by setting cc_is_gcc 1 and gcc to be sfw version
[16:29:47] *** Spencer_tt has joined #opensolaris
[16:30:44] *** hsp has joined #opensolaris
[16:34:21] *** javashin has joined #opensolaris
[16:34:52] <CIA-49> SFE jurikm: SFEcadaver.spec: initial spec
[16:35:58] *** CosmicDJ has joined #opensolaris
[16:40:11] *** Andrew1 has joined #opensolaris
[16:44:31] *** niq has quit IRC
[16:45:16] <CIA-49> SFE jurikm: SFEsmfgui.spec: initial spec
[16:47:34] <CosmicDJ> any ksh users here? can you suspend a root shell (i.e. the one you get after doing 'su')? keep in mind that root's shell must be ksh as well
[16:56:29] <CIA-49> SFE jurikm: SFEpython26-enchant.spec: initial spec
[17:04:43] <CosmicDJ> nvm
[17:12:11] *** mikefut has quit IRC
[17:19:14] *** murakawa has quit IRC
[17:27:45] *** bitbucket has joined #opensolaris
[17:34:05] <macros73> Any efforts to integrate webdav as another option to smb and nfs for zfs shares?
[17:36:05] <RoyK> macros73: I seriously doubt that would be a task you want in kernel
[17:36:41] <RoyK> current sharesmb/sharenfs are both controlling kernel-based shares
[17:37:29] <RoyK> I guess it should be doable to add settings for an apache-based webdav service in svccfg etc, but I don't know if anyone has done it yet
[17:37:40] <macros73> I realize they are pretty different, functionally, but if nfs/smb are integrated and controlled directly from zfs, why not webdav?
[17:38:12] <RoyK> macros73: I guess it's a simple matter of programming - go ahead :)
[17:38:45] <macros73> lol, I'm trying to inspire one of you actual qualified coders to do it. Now stop deflecting and get to work. :D
[17:39:02] <CIA-49> SFE jurikm: SFEunpaper.spec: initial spec
[17:39:19] <tsoome> whats wrong with apache mod_dav?
[17:39:26] <RoyK> heh - I'm not a coder, and even if I were one, I wouldn't write a fscking webdav extension for zfs
[17:39:45] <RoyK> tsoome: seems he wants to zfs set sharewebdav=on or something
[17:39:56] <macros73> Nothing, I just like the idea of zfs set sharedav=on pool/share
[17:40:20] <RoyK> macros73: just use apache for now
[17:40:33] <macros73> Yeah, that's what I'll need to do. No worries, that works fine. :D
[17:40:35] <RoyK> macros73: or pay someone to write a patch ;)
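The Apache route tsoome and RoyK point at, as a minimal mod_dav fragment (all paths and names are hypothetical; mod_dav and mod_dav_fs ship with Apache 2.2):

```apache
# Share a ZFS dataset mounted at /tank/share over WebDAV with basic auth
DavLockDB "/var/run/apache2/DavLockDB"
Alias /dav "/tank/share"
<Directory "/tank/share">
    Dav On
    AuthType Basic
    AuthName "dav-share"
    AuthUserFile "/etc/apache2/dav.passwd"
    Require valid-user
</Directory>
```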
[17:41:47] <tsoome> well, the code is here; it's not so much a question of zfs programming as of sharemanager interaction with the web service imo ;)
[17:42:25] *** javashin_ has joined #opensolaris
[17:42:25] *** javashin has quit IRC
[17:44:31] <macros73> If I want to do this the traditional way, should I setup apache in its own zone or something?
[17:44:57] <tsoome> depends on your need for isolation
[17:45:12] <macros73> Would prefer to limit the damage a hacker could do. I don't mind losing a pool, just wouldn't want to lose the whole box.
[17:45:22] <macros73> home server, not work.
[17:46:00] <tsoome> if you dont need to share those files with nfs/cifs, then zoned approach may be fine
[17:46:13] <macros73> D'oh. There's the catch, I do.
[17:46:44] <macros73> Usage will generally be: cifs/nfs at home. webdav from the iPad when mobile
[17:47:24] <RoyK> macros73: it'll take a good portion of bad luck plus a good hacker to get through to anything with just webdav in front
[17:47:44] <RoyK> macros73: just snapshot frequently and keep a backup
[17:49:33] * RoyK setup his first openindiana fileserver really spec'ed to do file serving a few days back - striped mirrors, good SSD caching etc, and for a price that'll probably give me a hollow case from HP :þ
[18:07:45] *** bitbucket has quit IRC
[18:12:19] *** javashin_ has quit IRC
[18:14:58] <macros73> cool, thanks. What SSD caching are you oing?
[18:15:42] <macros73> I just setup a osol 11 express home server with two striped mirrors for the data pool. Replaces the raidz pool i was using previously on an older box.
[18:17:41] <macros73> this one has an AMD 420E, 8G of ram (now that i found and installed the right modules), and a 4T array (2x2T mirrors)
[18:20:01] <hjf> out of curiosity, why 2x2T instead of zfs?
[18:20:09] <hjf> i mean raidz
[18:20:32] <macros73> hoped I might see faster read speeds.
[18:20:54] <macros73> turns out, not really. The bottleneck is the smb stack in OSX, not my server.
[18:21:06] <tomww> hjf: no booting from a raidz. no gain if only 2 disks if doing a raidz then
[18:21:09] <macros73> but keeping it that way to make it a little simpler to expand in the future.
[18:21:31] <hjf> oh i see, only 4 disks
[18:21:49] <macros73> if I want to add more capacity, I add 2 more disks to the system and the pool will start to use them.
[18:22:56] <tomww> hjf: oh, now I see. with 4 disks and a raidz you would get 3x2=6TB plus 2 parity. but would not be bootable then if all disks are in the raidz and no other media is used for booting.
[18:23:05] <hjf> my server has 2x160GB for the system and 4x1tb for storage
[18:23:20] <macros73> actually, i have a separate disk for boot
[18:23:25] <macros73> will mirror it eventually
[18:23:38] <hjf> well... i would use a 4-way raidz if i were you
[18:24:00] <hjf> I read at over 200MB/s from my raidz (with WD "green" drives)
[18:24:01] <tomww> but you could still make the first two disks and first 100GB a mirror and then use the rest in a raidz. so you would end up with rpool 100GB, datapool with 3x1.9TB
[18:24:26] <macros73> hjf: Locally or over network?
[18:24:47] <hjf> locally. over network I think I used to get about 80MB/s gig-eth cat6
[18:24:57] <macros73> hjf: smb or nfs?
[18:25:01] <hjf> both
[18:25:24] <macros73> weird. I was getting ~80M over nfs and smb, unless the smb client was a mac, in which case it was ~50M/s
[18:25:42] <hjf> are you using samba or the solaris cifs server?
[18:25:52] <macros73> sharesmb
[18:26:16] <hjf> then the cifs server. try samba (svcadm disable cifs/server svcadm enable samba)
[18:26:47] <tsoome> samba cant be faster than in kernel cifs;)
[18:26:47] <macros73> will it need its own config or does it use cifs' config?
[18:26:58] <hjf> needs its own config
[18:27:53] <hjf> tsoome: he's getting 80MB/s, i assume a gigabit ethernet network. it can't go any faster than that
[18:28:07] <hjf> besides, the kernel's cifs server sucks
[18:28:11] <macros73> Yeah, 1Gb.
[18:28:22] <tsoome> sucks why?
[18:28:25] <hjf> i don't like how it handles zfs "directory" trees
[18:28:35] <tsoome> ?
[18:28:48] <macros73> What has me confused is that Solaris CIFS -> OSX == slow, CIFS --> Windows == faster
[18:28:48] <tsoome> cifs does not do trees.
[18:28:51] <hjf> as every zfs filesystem is a separate filesystem, you have to export each one as a different smb share
[18:29:40] <hjf> example:
[18:29:50] <hjf> tank/shared, tank/shared/audio, tank/shared/video
[18:30:09] <hjf> on cifs you see the following shares: tank_shared, tank_shared_audio, tank_shared_video
[18:30:32] <hjf> on samba you see shared. click on shared and see audio and video, and so on. as if it was a directory tree
[18:31:27] <hjf> i heard the "yeah it's supposed to be like that" thing. but i don't like it
[18:31:28] <tsoome> that does not mean it sucks. it means you need to think how to organize the data and what kind of service to use.
[18:32:13] <hjf> tsoome: i thought zfs was supposed to help me make management easier :)
[18:32:28] <tsoome> easier does not mean you dont need to think
[18:32:34] <hjf> say, i have a shared/audio, shared/video and shared/text_files
[18:32:50] <hjf> no point in compress=on audio and video. but it's useful for text files
[18:33:26] <hjf> the point is: I want to be able to see the directory tree JUST like i see it on the local machine. why can't i have that??
[18:34:09] <hjf> if i'm on a windows machine and have a "root" shared mounted as a network drive, i want to be able to see its children. not having them all displayed as a flat space
[18:34:26] <tsoome> because it's not just a directory tree, it's a filesystem tree.
[18:34:43] <tsoome> what you are actually missing is dfs links.
[18:35:48] <hjf> all i know is that Samba does what I expect it to do
[18:36:21] <hjf> because samba reads the files as they appear to a local user
[18:36:50] <tsoome> samba does share directory tree, and has limits you have with directory trees.
[18:37:07] <tsoome> each method has its pros and cons.
[18:38:01] <tsoome> same example: you wanna share "/share", but not "/share/movies". what will you do?
[18:38:28] <hjf> tsoome: then the CIFS server should have an option to show the share's children too. if it can display shared_audio then it should be smart enough to let show it as shared/audio
[18:38:47] <hjf> tsoome: maybe I can play with ACLs?
[18:39:11] <tsoome> no, I dont wanna share/movies to show up. at all.
[18:39:37] <tsoome> ;)
[18:40:46] <hjf> ok... so we agree that not wanting to show shared/audio would be a very specific case, right?
[18:41:01] <tsoome> lol;)
[18:41:04] <hjf> then you edit smb.conf and do "hide files = /pattern/"
[18:41:10] <hjf> there you go
[18:41:16] <tsoome> as i wrote, both methods have their pros and cons.
[18:41:40] <tsoome> and what if some stupid user will create a symlink?
[18:42:00] <hjf> "follow symlinks = no"
[18:42:22] <tsoome> but symlinks are needed for other files;)
[18:42:53] <tsoome> anyhow, sure, it's a constructed example, you should get the idea anyhow
[18:43:33] <tsoome> but yea, the issue you have is missing DFS support, there is (was?) some project to create the support, but i have no idea about its status
[18:43:47] <hjf> well same goes for the CIFS server. the point is, it's good enough for the developers, and it will stay that way. luckily i can use samba and work around it
[18:44:07] <tsoome> because it's not only the issue that your child datasets are not shared, but also that browsing them is a pita
[18:44:32] <hjf> that's the good thing about "open", you get to choose whatever you want to use
[18:44:40] <tsoome> and btw, samba does not support snapshots;)
[18:45:09] <tsoome> open has nothing to do with this issue, the correct term is alternatives.
[18:46:27] *** galt has joined #opensolaris
[18:47:37] <tsoome> there is another thing what sucks with cifs server tho
[18:49:12] <tsoome> sharing a dataset with sharesmb=on is only a small part; you also may need default acl's to be set, plus some other mode settings depending on what clients you have, and even worse, some settings can only be set at dataset creation time.
[18:49:56] <hjf> casesensitivity?
[18:50:35] <tsoome> for example, yes
[18:50:40] <tsoome> also nbmand
[18:50:57] <macros73> how do I reinstall apache22? pkg won't let me uninstall, should I just force it?
[18:51:35] <tsoome> uninstall it and install again?
[18:51:59] <macros73> has a dependency, installadm
[18:52:53] <hjf> also, seems like PPS support is broken. i connected a serial GPS unit with PPS output and couldn't make it work
[18:53:18] *** galt has quit IRC
[18:53:52] <tsoome> macros73: pkg fix?
[18:54:11] <macros73> removed the dependent package, then reinstalled both. :D
[18:54:37] <hjf> is pkg still being actively developed?
[18:54:45] <hjf> or was it some sun thing?
[18:55:23] *** Alasdairrr has quit IRC
[18:56:14] <CosmicDJ> hjf: IIRC it will be in solaris 11
[18:56:18] *** Alasdairrr has joined #opensolaris
[18:56:30] <CosmicDJ> as the package management system of choice
[18:57:15] <hjf> I don't really understand it
[18:57:27] <hjf> i'm used to Debian's APT
[18:57:54] <CosmicDJ> man pkg for starters
[18:58:16] <hjf> yeah i tried it, but couldn't find how to LIST all the available publishers on my system
[18:58:47] <alanc> pkg publisher
[18:58:57] <tsoome> all available? you mean, available on the internet or in your system? ;)
[18:59:10] *** niq has joined #opensolaris
[18:59:30] <tsoome> there is no central "internet" publishers registry.
[18:59:49] <hjf> in my system
[19:00:02] <tsoome> that one was answered already;)
[19:00:05] <hjf> also, what's a preferred or sticky publisher?
[19:00:43] <tsoome> preferred is obvious; sticky or non-sticky means whether another publisher can override the original publisher or not
[19:00:59] <alanc> if a package is available from more than one publisher, it picks the one from the preferred publisher unless you specify it as pkg://publisher-foo/system/library/libfoobaz
[19:01:28] <hjf> i see. also, how are packages stored in the server? are they individual files?
[19:01:44] <alanc> once you install a package from a sticky publisher, pkg update will only upgrade to new versions from that publisher
[19:03:14] <alanc> each file in the package is stored as an individual file, so when you upgrade, the client just requests the files whose sha hash has changed since the previous version
[19:03:32] <tsoome> making a publisher non-sticky makes it possible to upgrade your opensolaris install to openindiana or solaris 11 express - when you add the new preferred publisher, the new packages will be installed from it.
[19:03:34] <hjf> oh, that's what i was thinking
[19:04:21] <tsoome> obviously the sticky only applies if your package names are the same.
[19:04:34] <alanc> also means if more than one package includes files with identical contents, only one copy is stored on the server for all those packages (or much more likely, for all the versions of the same package that have unchanged copies of that file)
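The publisher behaviour alanc and tsoome describe, as commands (publisher names and URLs are illustrative, and libfoobaz is alanc's made-up package):

```shell
pkg publisher                                   # list publishers on this image
pkg set-publisher --non-sticky opensolaris.org  # allow another publisher to take over its packages
pkg set-publisher -P -g http://pkg.openindiana.org/dev/ openindiana.org
pkg install pkg://openindiana.org/system/library/libfoobaz  # force a specific publisher
pkg update -nv                                  # dry run showing the planned changes
```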
[19:04:36] *** dijenerate has quit IRC
[19:05:17] <hjf> i wonder if Debian will copy that sometime. I'm sure it would make it a lot easier on mirror's bandwidth
[19:06:06] *** niq has quit IRC
[19:06:41] <hjf> i mean, Debian's sid (the unstable branch) has 3 releases a day
[19:09:10] <hjf> also, if you run the testing branch you get like 5-10 package updates a day on a normal desktop system.
[19:09:23] <hjf> i wish that many people were involved in solaris
[19:10:44] <monsted> the number of updates is not an indicator of quality :)
[19:11:17] <tsoome> IMO the issue is more about whether the corresponding branches are open so that it's possible to post built packages.
[19:11:51] *** commander has quit IRC
[19:12:22] *** commander has joined #opensolaris
[19:12:52] <hjf> monsted: i mean that with so many people involved in a project, you have enough manpower to take care of a lot of bugs and adding new stuff to the system
[19:13:43] <hjf> sun's solaris development was slow, but when it offered releases it was stable and worked. because it was a company, with teams of people working, and getting paid for it.
[19:14:15] <hjf> but with linux you get "cool stuff to play with"
[19:14:26] <hjf> not necessarily something i would want on a server, but still
[19:14:49] <tsoome> if some insane gnu package developer is releasing 10 updates per day to his package, should the whole system be rebuilt 10x per day as well?;)
[19:15:29] <hjf> tsoome: that's kinda the spirit of sid. sometimes doing "apt-get upgrade" can mess your system
[19:16:18] <tsoome> mess can happen. that's the reason you have /release and /dev, that's not a problem at all
[19:17:00] <hjf> well yes, in debian you have three levels, release (stable), dev (testing), and sid (unstable)
[19:17:06] <tsoome> the problem is, for example, if some package is base for many others. new release of this package means you have to rebuild all others depending on it...
[19:17:25] <tsoome> and this kind of task will run out of scale quite fast
[19:17:29] <hjf> tsoome: who cares? you're downloading binary packages, let someone else take care of rebuilding :P
[19:19:05] <hjf> of course if you want to build your own, you just add source repos to your sources list, and apt-get source --build
[19:19:13] <tsoome> you care, for example if you get updated libpng and after that update your perl, apache, php, .......... you name it, wont work;)
[19:19:47] <hjf> well that's why debian automagically manages dependencies
[19:20:19] <hjf> that sort of thing can happen if you run sid. if you're running testing, it doesn't happen.
[19:20:49] <hjf> there are silly things on the debian community tho... like the firefox/iceweasel thing
[19:21:20] *** stoxx has quit IRC
[19:21:40] <hjf> or that, for the same reason, you don't get nvidia drivers with debian (you have to manually install them)
[19:23:15] *** nonnooo has quit IRC
[19:28:45] *** stoxx has joined #opensolaris
[19:29:41] *** hsp has quit IRC
[19:30:08] *** hsp has joined #opensolaris
[19:30:15] <hjf> does opensolaris support AES instructions on new Core i processors? for things like IPSec or zfs crypto (for S11X of course)
[19:30:23] *** p3n has quit IRC
[19:34:10] <hjf> ah, silly windows XP. it only supports 3DES for IPSec
[19:34:55] <CIA-49> SFE jurikm: SFEpython26-imaging-sane.spec: initial spec SFEocrfeeder.spec: initial spec
[19:35:24] <hjf> i was trying it with a virtual machine, linux 64-bit running inside S11X VirtualBox
[19:35:44] <hjf> without IPSec I get 5MB/s SMB
[19:35:50] <hjf> with ipsec I get 600KB/s
[19:37:13] *** p3n has joined #opensolaris
[19:37:46] <hjf> does VirtualBox expose the host's CPU to its guest? I mean if I'm running Virtualbox in a Core i7 system, which supports AES instructions (and also VT and VT-x), will the guest be able to use those instructions?
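For the host side of hjf's question, Solaris can at least show whether the CPU advertises AES instructions (whether VirtualBox passes them through to a guest is a separate matter):

```shell
isainfo -v       # lists instruction-set extensions; "aes" appears on AES-NI CPUs
cryptoadm list   # providers registered with the kernel crypto framework
```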
[19:46:28] <CIA-49> SFE jurikm: SFEocrfeeder.spec: no more TODO
[20:07:16] *** st-2451 has quit IRC
[20:10:29] *** st-7097 has joined #opensolaris
[20:17:47] <CIA-49> SFE tom68: SFElibsndfile.spec: bump to 1.0.24
[20:18:00] *** st-7097 has quit IRC
[20:18:46] <tomww> some day we will be called spammers (SFE)
[20:18:59] <CIA-49> SFE jurikm: SFEvnc2flv.spec: initial spec
[20:22:15] *** st-7523 has joined #opensolaris
[20:26:30] <CIA-49> SFE jurikm: SFEvnc2flv.spec: IPS version
[20:32:16] <CIA-49> SFE tom68: SFElibsndfile.spec: bump to 1.0.24
[20:32:17] *** Crypticfortune has quit IRC
[20:33:10] *** p3n has quit IRC
[20:39:55] *** st-7523 has quit IRC
[20:44:01] *** st-7869 has joined #opensolaris
[20:46:12] *** st-7869 has quit IRC
[20:46:14] *** fisted has quit IRC
[20:47:25] *** fisted has joined #opensolaris
[20:54:23] *** st-7943 has joined #opensolaris
[20:54:39] *** CodeWar has joined #opensolaris
[20:57:22] *** st-7943 has quit IRC
[21:00:51] *** st-8196 has joined #opensolaris
[21:05:02] *** pothos_ has joined #opensolaris
[21:06:56] *** pothos has quit IRC
[21:07:03] *** pothos_ is now known as pothos
[21:11:25] *** hjf has quit IRC
[21:30:19] *** Zubby has joined #opensolaris
[21:35:54] *** CodeWar has quit IRC
[21:56:09] *** ttblrs_ has joined #opensolaris
[22:01:03] <CIA-49> SFE jurikm: SFEmod-wsgi.spec: initial spec
[22:15:21] *** symptom has quit IRC
[22:52:03] *** Zubby has quit IRC
[22:54:26] *** st-8196 has quit IRC
[23:00:34] *** Zubby has joined #opensolaris
[23:16:36] <macros73> does 4 hours to copy 80GB over 1GbE seem excessive?
[23:19:23] <jamesd_laptop> macros73, only if you are sitting there watching each byte get sent.... if you kick it off and go home or do some other task, nobody cares.
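For scale, the arithmetic behind the question (assuming decimal gigabytes and an ideal GbE payload rate of roughly 112 MB/s):

```python
# 80 GB in 4 hours versus what gigabit Ethernet can theoretically carry
elapsed_s = 4 * 3600                    # 14400 seconds
rate = 80e9 / elapsed_s / 1e6           # achieved rate in MB/s
print(f"{rate:.1f} MB/s of ~112 MB/s")  # prints "5.6 MB/s of ~112 MB/s"
```

So it is slow relative to the wire, though protocol overhead and small files routinely account for much of that.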
[23:23:45] *** derchris has quit IRC
[23:24:09] *** derchris has joined #opensolaris
[23:26:38] *** st-8942 has joined #opensolaris
[23:27:18] *** CodeWar has joined #opensolaris
[23:39:17] *** st-8942 has quit IRC
[23:42:06] *** hsp has quit IRC