   March 17, 2011  

[00:05:17] *** kimc has quit IRC
[00:06:47] *** kimc has joined #opensolaris
[00:07:12] *** Statts[a] has quit IRC
[00:11:01] <McBofh> richlowe: rotflmao
[00:11:04] <McBofh> good work :)
[00:11:34] <McBofh> SunTzuKDE: re ffmpeg, you know about Murray Blakeman's ips repo for mplayer etc?
[00:25:27] *** mikefut has quit IRC
[00:26:33] *** deet has joined #opensolaris
[00:31:54] <SunTzuKDE> McBofh: yeah, but that doesn't help for S10. ;-)
[00:32:05] <McBofh> true
[00:32:27] <SunTzuKDE> kde4 project has a nice build infrastructure
[00:37:22] *** fisted has quit IRC
[00:43:08] *** mmu_man has quit IRC
[00:51:54] *** ewdafa has quit IRC
[00:52:59] *** niq has quit IRC
[00:57:50] *** fisted has joined #opensolaris
[01:10:21] *** stevel has quit IRC
[01:13:09] *** kimc has quit IRC
[01:29:01] *** idnar has quit IRC
[01:32:15] *** idnar has joined #opensolaris
[01:32:45] *** fiyawerx has joined #opensolaris
[01:37:03] *** idnar has quit IRC
[01:43:27] *** CodeWar has joined #opensolaris
[01:44:39] *** hecsa has quit IRC
[01:48:54] *** idnar has joined #opensolaris
[01:51:50] *** CodeWar has quit IRC
[01:59:55] *** j0ni_ has quit IRC
[02:00:36] *** j0ni has joined #opensolaris
[02:02:38] *** paniczero has quit IRC
[02:02:39] *** joshua_ has quit IRC
[02:04:13] *** timsf has quit IRC
[02:05:57] *** timsf has joined #opensolaris
[02:06:57] <CIA-108> SFE tom68: SFEgcc.spec: enforce SFEgmp SFEmpfr to have pkgtool --autodeps working in correct build-order
[02:17:13] *** joshua_ has joined #opensolaris
[02:17:54] *** Sloar has joined #opensolaris
[02:18:36] *** Sloar has left #opensolaris
[02:28:12] *** InTheWings has quit IRC
[02:45:27] *** f0rpaxe has quit IRC
[02:46:55] *** f0rpaxe has joined #opensolaris
[03:19:44] *** Roksteady has joined #opensolaris
[03:19:44] *** k1llah3rtz has quit IRC
[03:20:19] *** k1llah3rtz has joined #opensolaris
[03:21:44] *** zynox has quit IRC
[03:24:29] *** zynox has joined #opensolaris
[03:26:03] *** Toiletbowl has joined #opensolaris
[03:34:04] *** k1llah3rtz has quit IRC
[03:34:22] *** k1llah3rtz has joined #opensolaris
[03:41:28] *** miine_ has joined #opensolaris
[03:43:39] *** miine has quit IRC
[03:43:39] *** miine_ is now known as miine
[04:08:31] *** yippi has quit IRC
[04:09:06] *** deet has quit IRC
[04:22:56] *** stevel has joined #opensolaris
[04:22:57] *** ChanServ sets mode: +o stevel
[04:27:01] *** stevel_ has joined #opensolaris
[04:27:01] *** stevel has quit IRC
[04:28:40] *** ApOgEE__ has quit IRC
[04:34:40] *** ApOgEE__ has joined #opensolaris
[04:35:26] *** hehehe has joined #opensolaris
[04:38:16] *** Toiletbowl has quit IRC
[04:47:55] *** moepmoep has quit IRC
[04:53:24] *** timsf has quit IRC
[04:53:40] *** deet has joined #opensolaris
[05:15:16] *** fOB_2 has joined #opensolaris
[05:17:18] *** CodeWar has joined #opensolaris
[05:18:53] *** fOB has quit IRC
[05:21:45] *** ganbold has joined #opensolaris
[05:25:59] *** tomocha66 has joined #opensolaris
[05:42:07] *** galt has quit IRC
[06:04:42] *** nikolam has joined #opensolaris
[06:06:41] *** comay has joined #opensolaris
[06:07:00] *** ChanServ sets mode: +o comay
[06:18:18] *** myrkraverk has quit IRC
[06:18:53] *** idnar has quit IRC
[06:19:10] *** idnar has joined #opensolaris
[06:20:08] *** myrkraverk has joined #opensolaris
[06:44:02] *** ewdafa has joined #opensolaris
[06:47:06] *** Edgeman has quit IRC
[06:48:05] *** nikolam has quit IRC
[06:56:12] *** CodeWar has quit IRC
[07:02:26] *** nikolam has joined #opensolaris
[07:07:39] *** Edgeman has joined #opensolaris
[07:44:36] *** Thrae has quit IRC
[07:46:26] *** Thrae has joined #opensolaris
[07:47:19] *** nikolam has quit IRC
[07:49:05] *** nikolam has joined #opensolaris
[08:05:17] *** fisted has quit IRC
[08:11:32] *** hajma has quit IRC
[08:13:14] *** deet has quit IRC
[08:21:36] *** SunTzuTech has quit IRC
[08:22:10] *** Dagobert has quit IRC
[08:26:42] *** fisted has joined #opensolaris
[09:09:56] *** derchris has quit IRC
[09:11:59] <Zubby> Anyone know much about /etc/gdm/PostLogin/ and /etc/gdm/PreSession/? I want to be prompted to enter a second password after I log in so I can mount my homedir. Need a push in the right direction :)
[09:13:05] *** mmu_man has joined #opensolaris
[09:13:14] <tsoome> man automount
[09:13:51] <Zubby> tsoome: Yeah I know about that but specifically I want to mount a crypto zfs with a key that I input
[09:14:22] <Zubby> If thats something that automount can help me with I'm interested and may have misunderstood
[09:15:40] <nikolam> that is interesting. I use Pidgin and the smb client in nautilus and both of them use the local password store and ask for a password on first use. Maybe that could be used for mounting a dir, too. Like a documents pool made out of an encrypted image, etc.
[09:16:34] <Zubby> In short I realised that I want my whole homedir encrypted and this would be a good thing to figure out and share with others too.
[09:16:50] <nikolam> Zubby, and as far as I know, there is no such thing as ZFS crypto on OpenSolaris. It is implemented only in Solaris Express 11 and that is closed source, for testing only.
[09:17:08] *** Dagobert has joined #opensolaris
[09:17:46] <Zubby> nikolam: yeah but I thought I might have better luck asking here for ideas than in #solaris :) After all it's more of a gdm thing than a crypto thing in my mind
[09:18:31] <nikolam> I think the only way might be to mount an encrypted image onto a zfs dataset
[09:20:27] <nikolam> And if you are under that closed SolEx11, you might first make a new zfs dataset (filesystem) under /export/home/username and look for documentation on their zfs crypto. That is unavailable in OpenIndiana, so I will not learn anything about it until I can use it in production, and I can't legally use Solaris Express in production.
[09:20:58] <Zubby> ....ok
[09:21:00] <nikolam> (Production means home server and notebook)
[09:27:38] *** tsoome has quit IRC
[09:30:27] *** tsoome has joined #opensolaris
[09:34:30] *** Roksteady has quit IRC
[09:34:58] *** Roksteady has joined #opensolaris
[09:35:32] *** McBofh has quit IRC
[09:46:21] *** tsoome has quit IRC
[09:47:17] *** Yu\2 has joined #opensolaris
[09:51:31] *** Mech0z has joined #opensolaris
[09:54:43] *** Erwann has joined #opensolaris
[09:55:28] <Mech0z> Anyone know what could cause this: from my windows pc, logged into my opensolaris server, I can't delete files whose owner is Mech0z, even though that is the account I use to log in to that share from my windows system
[09:57:17] <lblume> Yes, I know. I asked the same question a few days ago.
[09:58:48] <Mech0z> lblume did you get an answer?
[09:58:50] <lblume> The right answer is that the cifs server is still having a lot of issues: in that case, it's not happy that the ACL to allow removing children is not present
[09:59:19] <lblume> It ignores traditional unix permissions.
[09:59:28] <Mech0z> so what's the answer to fix it?
[09:59:41] <Mech0z> if there is any
[10:00:12] <lblume> Along the lines of chmod A=owner@:modify_set/delete_child:allow <your directory>
[10:00:20] *** nikolam has quit IRC
[10:01:17] *** stevel_ has quit IRC
[10:01:51] <lblume> Or maybe chmod A+owner@:delete_child:dir_inherit:allow <your directory>
[10:02:17] <lblume> I'm still a bit green on NFSv4 ACLs
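A minimal sketch of inspecting and adding such an ACL entry, assuming a hypothetical dataset mounted at /tank/share (the path is an assumption, not from the chat):

    ls -V /tank/share                                           # list the current NFSv4 ACL entries
    chmod A+owner@:delete_child:dir_inherit:allow /tank/share   # add an ACE letting the owner delete children, inherited by new dirs
    ls -V /tank/share                                           # verify the new entry appears

Whether A= (replace the whole ACL) or A+ (prepend an entry) is appropriate depends on how much of the existing ACL you want to keep.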
[10:02:51] <Mech0z> I'm just a *nix noob in general :/
[10:03:23] <lblume> Then use Samba
[10:04:12] <lblume> It will generally work in a less confusing way, and has a much longer history and fewer bugs than the CIFS server
[10:04:34] <Mech0z> is it easy to install?
[10:04:46] <lblume> Sure, there's an ips package for that
[10:04:54] <lblume> And plenty of online documentation
[10:05:08] <Mech0z> ok, just need to figure out how I uninstall the CIFS, can't remember how I made the share in the first place (1½ years ago)
[10:05:11] *** hehehe has quit IRC
[10:05:20] *** wdp has joined #opensolaris
[10:05:21] *** wdp has quit IRC
[10:05:21] *** wdp has joined #opensolaris
[10:05:40] <lblume> zfs properties, shareadm, SMF service
[10:08:12] *** mmu_man has quit IRC
[10:10:11] *** hajma has joined #opensolaris
[10:12:03] *** tsoome has joined #opensolaris
[10:13:19] *** GdeLeo has joined #opensolaris
[10:13:52] *** nikolam has joined #opensolaris
[10:14:37] <Mech0z> lblume think I followed this guide back then http://blogs.sun.com/acworkma/entry/adventures_in_opensolaris_building_a
[10:15:41] *** mikefut has joined #opensolaris
[10:15:58] <Mech0z> isn't it svcadm enable -r smb/server that I have to "reverse"?
[10:17:10] *** InTheWings has joined #opensolaris
[10:17:44] <lblume> First remove the shares. I'd say the server doesn't start if there is no zfs shared via smb
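A rough outline of what "reversing" that setup might look like, assuming a hypothetical dataset tank/share (the dataset name is an assumption):

    zfs get sharesmb tank/share       # see whether the dataset is shared via the kernel CIFS server
    zfs set sharesmb=off tank/share   # stop sharing it
    svcs smb/server                   # check the service state
    svcadm disable smb/server         # disable the kernel CIFS server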
[10:22:39] *** AxeZ has joined #opensolaris
[10:25:32] <Mech0z> ok I will follow http://wikis.sun.com/display/OpenSolarisInfo200906/How+to+Set+Up+Samba+in+the+OpenSolaris+2009.06+Release then, hope it works
[10:25:56] *** ianj has quit IRC
[10:26:15] *** ianj has joined #opensolaris
[10:29:18] <lblume> It should, though I'm no big fan of swat. Also, samba config is now in /etc/samba
[10:33:40] *** cnu has quit IRC
[10:37:17] *** mmu_man has joined #opensolaris
[10:39:32] *** McBofh has joined #opensolaris
[10:41:33] *** cnu has joined #opensolaris
[10:45:02] *** k0x has quit IRC
[10:57:35] <Mech0z> takes a long time to enable samba
[10:58:27] <Mech0z> nwm
[11:06:39] *** nikolam has quit IRC
[11:12:46] *** hsp has joined #opensolaris
[11:13:03] *** snuff-home2 is now known as snuff-home
[11:14:26] <Mech0z> lblume can't find any guides for not using swap, and tried doing http://pastebin.com/BTDG7rXf but that doesn't seem to work
[11:15:03] *** ikonia has quit IRC
[11:15:47] <lblume> swat, not swap. You just edit smb.conf and get it running. There's a man page for that. For samba, you'll need to create samba users equivalent to the unix ones using smbpasswd
[11:16:14] <Mech0z> oh, thought it used the unix users
[11:17:08] <lblume> It can't. The password format is not compatible. The CIFS server handles that via its pam module.
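A short sketch of creating the Samba-side account for an existing unix user (the exact username and case are assumptions; run as root):

    smbpasswd -a mech0z   # add a Samba password entry for the unix user; prompts for the password twice
    pdbedit -L            # list the accounts Samba knows about, to confirm it was created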
[11:19:21] *** dubf has joined #opensolaris
[11:21:41] *** Vanuatoo has quit IRC
[11:31:49] <Mech0z> lblume can't get it working http://www.gratisimage.dk/image-7D4A_4D81E271.jpg
[11:32:06] <Mech0z> not sure if it's due to the Domain Windows uses
[11:33:03] <lblume> Did you use smbpasswd to create the Mech0z user and give it a password?
[11:33:22] <Mech0z> yes you can see the user is created in the vnc connection in the background
[11:33:49] <Mech0z> but it complains when I try to log in
[11:34:45] *** nanase has joined #opensolaris
[11:34:50] <lblume> You gave it a password?
[11:35:18] <Mech0z> yes I entered a SMB password
[11:35:18] <lblume> Sorry, my window is small, I had missed the bottom
[11:35:56] <Mech0z> :)
[11:35:59] <lblume> Might be recent Windows password type fun. The site you showed me had something about it, setting up a higher level of NTLM
[11:37:09] <Mech0z> ehm okay :/
[11:37:45] <lblume> Add those lines to smb.conf, then rerun smbpasswd to reencrypt the password:
[11:37:48] <lblume> lanman auth = no
[11:37:48] <lblume> ntlm auth = yes
[11:37:48] <lblume> client lanman auth = yes
[11:37:48] <lblume> encrypt passwords = yes
[11:38:10] <Mech0z> just at the bottom of the file?
[11:39:03] *** hsp has quit IRC
[11:39:24] <lblume> in the [global] section
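For orientation, a minimal smb.conf along those lines; the workgroup, share name, path and user are placeholders, only the four auth lines quoted above come from the chat:

    [global]
       workgroup = WORKGROUP
       security = user
       lanman auth = no
       ntlm auth = yes
       client lanman auth = yes
       encrypt passwords = yes

    [media]
       path = /tank/media
       valid users = mech0z
       read only = no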
[11:40:29] <Mech0z> and to make it pick up the new configs do I have to run the 2 commands, svcadm enable samba wins and svcs samba wins?
[11:40:55] <lblume> No, samba re-reads its configuration automatically
[11:41:10] <lblume> It should be active after one minute at most
[11:41:36] <Mech0z> ok, btw in the status on the webpage thing it says winbindd: not running
[11:41:40] <lblume> You can use testparm to check if your config file is valid
[11:41:48] <Mech0z> smbd and nmbd are running though
[11:42:08] <lblume> You don't need winbind in a small setup like that
[11:42:12] <lblume> Keep it disabled.
[11:42:17] <Mech0z> kk
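The testparm check lblume mentions above can be run directly against the config file:

    testparm -s /etc/samba/smb.conf   # parse the file, report syntax problems, and dump the effective settings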
[11:42:31] <Mech0z> well guess I should see it under View on the webclient when it's enabled
[11:43:34] <Mech0z> nice it works :)
[11:44:31] <Mech0z> thanks a ton :)
[11:45:00] <Mech0z> this should work until I shift to linux at some point (For more apps)
[11:45:46] <lblume> Good point of samba is, you get to keep your config once it's done. It'll work the same on other platforms
[11:45:56] <Mech0z> cool
[11:46:09] <Mech0z> well I wont change until I know ZFS support is decent on linux
[11:46:18] *** hsp has joined #opensolaris
[11:46:33] <Mech0z> don't have backup space to change the filesystem without losing all my files
[11:46:50] <lblume> Heh, don't hold your breath.
[11:46:56] <Mech0z> :(
[11:48:28] <Mech0z> well I am considering moving to raid0 instead of raid-z with 5 drives
[11:48:35] <Mech0z> so expensive to upgrade 5 drives at a time
[11:50:34] *** smrt has quit IRC
[11:50:51] *** smrt has joined #opensolaris
[11:51:10] <lblume> raid0? you mean striping without redundancy? you like to lose your data?
[11:52:52] <Mech0z> whoops raid1
[11:53:55] <Mech0z> is it possible to make a Raid1 that is supported in linux btw
[11:54:05] <Mech0z> some filesystem
[11:55:35] *** vinu has joined #opensolaris
[11:55:54] <vinu> hi
[11:56:04] <vinu> anyone from mumbai
[11:56:12] *** vinu has quit IRC
[12:00:13] <lblume> People from Mumbai must be able to read and type very quickly, I guess.
[12:00:56] <lblume> Mech0z: raid1 with 5 disks makes no sense. You need an even number then stripe them.
[12:01:04] <Mech0z> nono would be 2 and 2
[12:01:13] <Mech0z> so I only buy 2 new discs to upgrade
[12:01:30] <lblume> And no, there's no software raid in common with Linux you could use
[12:01:36] <Mech0z> then I would just have 5 x 2 discs in raid 1 instead of 2 x 5 discs in raidz
[12:01:38] <tsoome> well, raid 1 with 5 disks, means its 5 way mirror;)
[12:02:05] <tsoome> i would suppose you are talking about raid10
[12:02:08] <lblume> tsoome: Can we calculate the probability of losing data on that one? ;-)
[12:02:47] <lblume> Mech0z: You will lose half the space with that configuration instead of 1/5.
[12:02:55] <Mech0z> I know
[12:03:07] <Mech0z> but if I can't run proper ZFS on linux then I don't have much choice :s
[12:03:19] <Mech0z> and I don't like that you can't use online raid expansion on zfs
[12:03:23] <Mech0z> (Think thats the name)
[12:03:44] <Mech0z> but maybe I should use Raid-5
[12:04:31] <lblume> I'm not sure what you mean. zfs is zfs. whatever raid style you use, you won't read it on Linux.
[12:04:48] <lblume> Ok, not easily, and without proper performance
[12:06:16] <tsoome> if you need linux, use linux, if you need zfs, use solaris. plain and simple.
[12:06:37] <Mech0z> tsoome that doesn't solve my data transfer issues
[12:06:53] <tsoome> what data transfer issues?
[12:06:57] <lblume> Or OpenIndiana. Or FreeBSD
[12:07:14] <Mech0z> tsoome getting my current data to a filesystem on linux
[12:07:53] *** mmu_man has quit IRC
[12:07:56] <tsoome> ah, you need to "convert" existing filesystem?
[12:07:59] <Mech0z> lblume do either of those support wine
[12:08:06] *** dubf_ has joined #opensolaris
[12:08:06] <Mech0z> tsoome well I don't want to lose my data
[12:08:07] *** dubf has quit IRC
[12:08:37] <tsoome> then you need to use some temporary space to store the data
[12:09:01] <Mech0z> I would like to set up, for example, a little Win server for running some .net web server
[12:09:05] <Mech0z> so need wine support
[12:09:34] <tsoome> and?
[12:09:53] <tsoome> afaik wine is available on many platforms.
[12:09:56] <Mech0z> lblume suggested using openindiana or freebsd
[12:09:59] <Mech0z> ok
[12:10:01] <tsoome> also wine is not the only option.
[12:10:05] <lblume> Or use virtualization if you need windows
[12:10:42] <lblume> time for lunch anyhow
[12:11:47] <Mech0z> Maybe I could take a backup of my current opensolaris install and then somehow make that runnable in a VM
[12:11:56] <Mech0z> then use this opensolaris for filestorage
[12:12:02] <Mech0z> then have another VM for my windows server
[12:12:08] *** SunTzuTech has joined #opensolaris
[12:12:15] <Mech0z> then I just need some OS in the background that can run them
[12:23:17] *** moepmoep has joined #opensolaris
[12:26:00] *** Crypticfortune has quit IRC
[12:27:12] *** Crypticfortune has joined #opensolaris
[12:41:52] *** ikonia has joined #opensolaris
[12:43:22] *** Statts[a] has joined #opensolaris
[13:00:29] *** Yu\2 has quit IRC
[13:09:57] *** InTheWings has quit IRC
[13:14:57] *** nonnooo has joined #opensolaris
[13:20:44] <Mech0z> Is there some kind of VM service for opensolaris?
[13:21:06] <Mech0z> like VMWare (if that works)
[13:22:35] <DerSaidin> virtual box
[13:24:21] *** GdeLeo has quit IRC
[13:28:06] *** hsp has quit IRC
[13:28:28] *** hsp has joined #opensolaris
[13:34:26] *** Yu\2 has joined #opensolaris
[13:44:57] <Mech0z> DerSaidin when I try to add the package I just get "Error could not process datastream from <Virtual box.....>
[13:58:34] *** Toiletbowl has joined #opensolaris
[14:03:37] *** mmu_man has joined #opensolaris
[14:04:31] *** ingenthr has quit IRC
[14:11:51] *** AukeF has joined #opensolaris
[14:19:07] *** smrt has quit IRC
[14:19:22] *** smrt has joined #opensolaris
[14:23:58] *** nachox has joined #opensolaris
[14:39:24] *** Posterdati has quit IRC
[14:40:04] *** kimc has joined #opensolaris
[14:43:03] *** Posterdati has joined #opensolaris
[14:48:51] *** oninoshiko has quit IRC
[14:49:30] *** stoxx has quit IRC
[14:49:46] *** oninoshiko has joined #opensolaris
[14:59:14] *** stoxx has joined #opensolaris
[15:04:35] *** tty234 has joined #opensolaris
[15:08:52] *** axisys has joined #opensolaris
[15:10:37] *** AxeZ has quit IRC
[15:18:41] *** bcs000 has joined #opensolaris
[15:31:37] *** stevel_ has joined #opensolaris
[15:36:06] *** dubf_ has quit IRC
[15:37:30] *** stevel has joined #opensolaris
[15:37:30] *** ChanServ sets mode: +o stevel
[15:38:46] *** stevel_ has quit IRC
[15:47:09] *** Toiletbowl has quit IRC
[15:52:55] *** snuff-home2 has joined #opensolaris
[15:56:09] *** snuff-home has quit IRC
[15:56:18] <FrankLv> Hi, guys. I'm trying to install oracle on oracle solaris 11 express. Stuck at DISPLAY setup
[15:57:24] <FrankLv> as the user that started X I run "xhost +" to disable the acl.
[15:57:50] <FrankLv> as the oracle user I run "export DISPLAY=localhost:0.0" then run xlogo to test
[15:58:05] <FrankLv> Error: Can't open display: localhost:0.0
[16:00:03] *** psychicist has joined #opensolaris
[16:01:36] <Triskelios> FrankLv: wrong DISPLAY. try ssh -X <user>@localhost instead
[16:04:29] <FrankLv> Triskelios: your command enables X forwarding under ssh, good idea! it works for the oracle user
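For reference, the forwarding approach sketched out (assumes X11Forwarding is enabled in sshd_config; the xlogo path may differ on your install):

    ssh -X oracle@localhost      # log in as the oracle user with X11 forwarding
    echo $DISPLAY                # sshd sets this itself, typically to something like localhost:10.0
    /usr/X11/bin/xlogo           # quick test that X clients can reach the forwarded display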
[16:04:44] <Mech0z> Tightvnc closes my connection all the time :s often when I press ctrl or shift
[16:04:51] <FrankLv> Thanks, move to next step
[16:04:52] <Mech0z> when connected to my opensolaris server*
[16:08:42] *** niq has joined #opensolaris
[16:11:28] *** victori has quit IRC
[16:11:46] *** victori has joined #opensolaris
[16:18:19] <Mech0z> If I press and hold ctrl and shift it closes my vnc connection without warning
[16:19:20] <Mech0z> happens both with tightvnc and realvnc so opensolaris must be to blame
[16:20:49] *** stevel has quit IRC
[16:24:14] *** tsoome has quit IRC
[16:31:02] *** Statts[a] has quit IRC
[16:45:20] *** InTheWings has joined #opensolaris
[17:02:00] *** stevel has joined #opensolaris
[17:02:00] *** ChanServ sets mode: +o stevel
[17:06:09] *** jfisc has quit IRC
[17:07:15] <Gman_> for anyone interested, technology spotlight on IPS this month: http://www.oracle.com/technetwork/server-storage/solaris11/technologies/ips-323421.html
[17:07:23] <Gman_> (more stuff gets added to the page over time)
[17:10:34] *** jfisc has joined #opensolaris
[17:16:21] *** AukeF has quit IRC
[17:34:58] *** Dagobert has quit IRC
[17:35:24] *** robinbowes has quit IRC
[17:43:57] *** tsoome has joined #opensolaris
[17:49:27] *** nanase has quit IRC
[17:51:05] *** mmu_man has quit IRC
[17:54:33] *** fOB_2 has quit IRC
[17:59:55] *** fOB has joined #opensolaris
[18:00:43] *** melbogia1 has joined #opensolaris
[18:01:38] *** GdeLeo has joined #opensolaris
[18:03:11] *** melbogia has quit IRC
[18:10:34] <lblume> is there a way to disable the zfs smartness? Ie, data prefetching and the like? and have some regular fs caching?
[18:11:23] *** mikefut has quit IRC
[18:12:31] <Triskelios> lblume: the prefetch tunables are on solarisinternals.com
[18:12:40] <Triskelios> ZFS must use ARC for caching
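A sketch of the usual prefetch knob, as an illustration rather than a recommendation (tunable names can change between releases):

    # check the current value on the running kernel (0 = prefetch enabled)
    echo zfs_prefetch_disable/D | mdb -k
    # disable file-level prefetch on the fly
    echo zfs_prefetch_disable/W0t1 | mdb -kw
    # or persistently, via /etc/system (takes effect after reboot):
    set zfs:zfs_prefetch_disable = 1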
[18:12:52] <tsoome> what are you trying to achieve?
[18:13:18] <tsoome> zfs is not "regular" file system.
[18:14:04] <lblume> yes, I know that, I'm *still* getting some extremely poor performance at times, which still baffles me
[18:14:27] <tomww> and if he is on databases with caching inside the database as well, then he can reduce the zfs extra caching eventually
[18:14:47] <lblume> I just want to do some tests at this point.
[18:15:40] *** smrt has quit IRC
[18:15:43] <tsoome> at first, define "extremely poor perfomance" at first
[18:16:21] <tsoome> .oO eating and typing may produce some side-effects:)
[18:17:15] <lblume> 10 MBps effective throughput on a disk with processes being blocked. Still the same thing I've been bitching about for a while.
[18:17:45] <tsoome> local io?
[18:17:46] *** GdeLeo has quit IRC
[18:17:55] <lblume> And wildly different values for throughput given by zpool iostat and from the user viewpoint
[18:18:03] <lblume> Yes, all local
[18:18:22] <tsoome> single disk or several ones?
[18:18:27] <lblume> Single
[18:18:37] <tsoome> its free to play with?
[18:18:47] <tsoome> i mean, can you nuke it as you wish?
[18:19:02] <lblume> Ah, not at the moment no
[18:19:19] <lblume> I've got a perfectly good Win7 on it, too :-P
[18:19:37] <tsoome> ah, its used as rpool disk?
[18:19:39] *** smrt has joined #opensolaris
[18:19:40] <lblume> Do you have anything in mind? I could try to find a way to do it.
[18:19:41] *** SunTzuTech has quit IRC
[18:20:30] <tsoome> very first idea, if you can nuke zpool from it, is to create ufs and see how it performs with ufs.
[18:20:39] <lblume> Yes, as rpool. Does that matter? I thought as long as there is a single rpool on an fdisk partition, it's ok
[18:21:29] *** GdeLeo has joined #opensolaris
[18:21:41] <tsoome> that 10M/s is for read or write?
[18:21:57] <lblume> both
[18:22:05] <lblume> But it's not consistent
[18:22:35] <tsoome> prefetch wont really do anything on write, unless you are doing read-modify-write cycle
[18:22:46] <lblume> And as noted before, big files (ie that don't fit in memory) have much more impact on performance than the same read distributed over many small files (that do fit)
[18:22:59] <lblume> read or write
[18:23:14] <lblume> But read does have more impact than write
[18:23:24] <tsoome> iostat -xnC 1 series sample?
[18:23:56] <lblume> Since iostat freezes when it happens, the numbers are rarely reliable
[18:24:07] <tsoome> iostat freezing?
[18:24:21] <tsoome> anything in dmesg, iostat -En ?
[18:24:48] <tsoome> this is real or virtual setup?
[18:25:00] <lblume> nah, everything's fine on the hardware side.
[18:25:10] <lblume> Fully real :-)
[18:25:24] <tsoome> if commands are freezing, that's a sign. and a bad one.
[18:25:56] <tsoome> you may wanna run intrstat during the IO stress time
[18:26:30] <tsoome> what disk and HBA is there?
[18:27:09] <lblume> Intel H67 (and not, *not* on the buggy ports), Seagate 500GB
[18:28:11] <tsoome> run intrstat in one window and generate some IO on another, and iostat in third;)
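Roughly, the three-window setup tsoome describes (the file used to generate load is an assumption):

    # window 1: interrupt counts per device and CPU, 5-second samples
    intrstat 5
    # window 2: per-device IO statistics once a second, for 40+ seconds
    iostat -xnC 1
    # window 3: generate a sustained sequential read
    dd if=/tank/bigfile of=/dev/null bs=1024k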
[18:28:33] <lblume> Trying that
[18:29:19] <tsoome> if commands are freezing, that's an indication that something weird is going on, meaning the tuning probably won't do anything good....
[18:29:49] <tsoome> you shouldn't see any freezes on a simple file copy
[18:30:20] <lblume> funny, in iostat, the disk and partition values are exactly identical, except for %w
[18:30:37] <tsoome> can you pastebin it?
[18:31:39] <lblume> http://pastebin.fr/10763
[18:31:48] <lblume> But no freezing this time.... hmmmm
[18:32:21] <tsoome> on this snapshot, the numbers are quite decent
[18:33:21] <tsoome> nothing too bad - the disk is servicing requests at 47ms and you do 188 read ops, which is quite good
[18:33:40] <lblume> yup, it's not a bad disk :-)
[18:33:59] <lblume> Why the %w difference, though? Aren't they the same thing?
[18:34:40] <tsoome> one is for the HBA, the other is for the disk; %w is a fairly meaningless counter
[18:34:59] <tsoome> basically it means, how much time the target has something to do
[18:35:40] *** Erwann has quit IRC
[18:35:48] <tsoome> if you are looking at disk performance, you wanna compare IO count (first 2 columns) with bandwidth and asvc_t
[18:36:27] <tsoome> if you have a low IO count and low bandwidth but high asvc_t, it's an indication of something bad going on.
[18:37:19] <lblume> yep, I've seen that happen. But for all the time I've looked at that issue, on different systems, I never saw any hardware issue
[18:37:30] <tsoome> wsvc_t is how long it takes for HBA to service IO and asvc_t is how long it will take for disk to service the IO
[18:38:08] <tsoome> you wanna run iostat -xnC 1 for at least some 40 sec or more during the test
[18:38:49] *** hajma has quit IRC
[18:38:51] <lblume> I'm creating some 5GB files to read them
[18:38:56] <tsoome> esp, for write tests, because with async writes the cache flush timer is at max 30 sec
[18:39:47] <tsoome> for reads you need to decide if you wanna test with cold arc or warm
[18:40:12] <tsoome> with warm arc, you need to get the arc filled with data first, then run the test
[18:40:29] *** nikolam has joined #opensolaris
[18:42:08] *** kimc has quit IRC
[18:43:23] <lblume> I hate smartness. cat file > /dev/null returns instantly.
[18:43:55] *** derchris has joined #opensolaris
[18:44:15] <tsoome> cat | cat > /dev/null ?
[18:44:34] <lblume> I had just reached the same conclusion
[18:44:39] <lblume> that does work
[18:44:52] <lblume> Some people need more kicking :-)
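For the record, two ways to force the data to actually be pulled through (file name is an assumption; whether it comes from disk or a warm ARC still depends on what was read recently):

    cat /tank/bigfile | cat > /dev/null          # defeats the instant-return behaviour seen above
    dd if=/tank/bigfile of=/dev/null bs=1024k    # alternative that also prints a throughput summary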
[18:45:37] <tsoome> ?:D
[18:45:51] *** derchris has quit IRC
[18:48:23] <lblume> Grrrr, I just had the mouse itself stop responding, and yet, I don't see anything striking
[18:48:25] *** derchris has joined #opensolaris
[18:49:43] <tsoome> keep intrstat running, that can give some hints
[18:49:56] <tsoome> or, maybe your X is the cause?
[18:49:59] *** FrankLv has quit IRC
[18:50:04] *** ingenthr has joined #opensolaris
[18:50:20] <tsoome> you can disable gdm and just use screen/console
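Something along these lines, using SMF (the FMRI abbreviation should resolve to the graphical-login gdm service):

    svcadm disable -t gdm    # -t: temporary, only until the next reboot
    svcs -a | grep gdm       # confirm it is disabled
    # ...run the IO tests from the text console, then...
    svcadm enable gdm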
[18:50:30] <lblume> I don't see anything very different in intrstat
[18:50:49] *** mmu_man has joined #opensolaris
[18:51:25] <tsoome> you may wanna disable X for the test time
[18:52:14] <tsoome> just to elliminate one "unknown" factor
[18:52:27] <lblume> yes, but I'm not sure how to notice the freeze then
[18:52:49] <lblume> With X at least it's obvious when the windows grey out
[18:52:51] <tsoome> you told the iostat output was feezing as well?
[18:53:10] <lblume> It might be a side effect of X freezing
[18:53:26] <tsoome> if you have iostat feeding output with 1 sec interval, that should be pretty noticeable
[18:54:13] *** GdeLeo has left #opensolaris
[18:54:15] <tsoome> yea, but the point is, if without X you wont see freezes, there is pretty good chance your disk/zfs has no part of the issue;)
[18:54:37] <lblume> It only happens when doing disk i/o
[18:55:28] <tsoome> then you should see that without X as well
[18:56:38] <lblume> yes, I found a way: doing an mkfile 10m file all the time
[18:57:04] <lblume> usually takes 0.004s, but sometimes takes more than 10s
[18:57:15] <lblume> when I have the cat file running
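The probe lblume is using, written out as a sketch (paths are assumptions):

    # terminal 1: sustained sequential read of a big file
    cat /tank/bigfile | cat > /dev/null &
    # terminal 2: time small writes in a loop and watch for multi-second outliers
    while true; do
        ptime mkfile 10m /tank/probe.$$ && rm /tank/probe.$$
        sleep 1
    done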
[18:57:43] <tsoome> iostat from that time?
[18:58:44] <tsoome> could it be the HBA is sharing an interrupt with some other device and that's the cause?
[18:59:28] <tsoome> also obvious question - did you check if you have latest bios?
[19:00:03] <lblume> Not quite latest, but recent enough. The changelog for the latest was rather irrelevant.
[19:00:31] <tsoome> no acpi related fixes?
[19:00:50] *** niq has quit IRC
[19:01:05] <lblume> nope
[19:01:19] <lblume> Ah yes, a bug for i3 cpu
[19:01:22] <lblume> I have an i5
[19:02:23] <lblume> Let me kill X and have a look from the console...
[19:02:51] *** kdavy_ has quit IRC
[19:03:25] *** FrankLv has joined #opensolaris
[19:08:05] *** SunTzuKDE has quit IRC
[19:13:00] <tsoome> id network connection(s) also affected?
[19:13:06] <tsoome> are*
[19:13:51] *** SunTzuTech has joined #opensolaris
[19:16:24] <lblume> there's no traffic except this vnc I'm using to talk to you with, and it doesn't hang
[19:16:39] <lblume> I've checked the disk i/o are the same without X
[19:17:02] <lblume> Things just hang regularly
[19:17:03] <tsoome> same hangs?
[19:17:07] <lblume> yes
[19:17:50] <lblume> though for some reason, time | mkfile was even less accurate, as it hung itself before starting, so it underreported the "real" value
[19:18:18] <tsoome> did you get an iostat sample from that "hang" time?
[19:18:28] <tsoome> and intrstat?
[19:18:52] <tsoome> you may wanna run vmsatat as well to see if there are any weird peaks at the time
[19:19:54] <tsoome> vmstat*
[19:20:06] <tsoome> also, what version is the solaris?
[19:20:18] <lblume> I didn't notice anything abnormal. It could have happened more when it was actually writing to disk.
[19:20:27] <tsoome> I mean, can you test with different one?
[19:20:29] <lblume> It's S11X
[19:21:32] <tsoome> well, the immediate idea is that IO hangs are often related to issues with the HBA; maybe the driver for this intel card is shaky
[19:22:21] *** FrankLv has quit IRC
[19:22:47] <lblume> ahci
[19:23:16] <lblume> But yeah, over the years, most or all people reporting the issue were using an Intel hba
[19:24:14] <tsoome> that's why the intrstat - I was wondering if you see some interrupt burst during the hang time
[19:24:27] <lblume> Nope
[19:24:38] <lblume> Between 300-600
[19:24:41] <tsoome> also, normal writes on zfs are async, it could be that the cache flushes trigger it
[19:25:13] <tsoome> but if so, you should see the hangs at approx 30 sec intervals during the test
[19:25:42] <lblume> hmmm
[19:25:45] <lblume> regularly?
[19:25:53] <lblume> I didn't try to time them yet
[19:26:01] <tsoome> sort of, yes, but that will imply you have long writes
[19:26:12] <lblume> For the interrupt, pcitool reports ahci alone on cpu3
[19:26:36] <tsoome> yea, that's quite normal if you have no other activity going on
[19:27:04] <lblume> well, the write here I used mainly to notice the issue. It's really the read which triggers it. Hence my original question of disabling the smartness.
[19:28:03] <tsoome> yea, but read ahead would only hurt random reads, causing the system to have "inflated" reads
[19:28:07] <tsoome> not hangs.
[19:29:17] *** FrankLv has joined #opensolaris
[19:29:21] <tsoome> for example, if you are reading, say, 10 byte chunks, but zfs will read 128k with every read, meaning wasted bandwidth for 118k
[19:29:46] <lblume> No, I don't notice it on small reads
[19:30:04] <lblume> only reading big files.
[19:30:45] <tsoome> without using *any* tuning, you should not see *any* hangs
[19:31:11] <lblume> I definitely should not :-)
[19:31:15] <lblume> But I do.
[19:31:30] <lblume> And it's really easy to rrigger
[19:31:34] <lblume> trigger
[19:31:35] <tsoome> do you have any spare hba to test with?
[19:31:47] <lblume> Oh hey, actually yes
[19:31:59] <lblume> I have that si3124 I was going to ebay
[19:32:02] <tsoome> that maybe good idea to try.
[19:32:32] <lblume> yeah, but moving the system to another HBA, not funny :/
[19:32:36] <tsoome> ofc moving rpool disk is a bit of hassle;)
[19:32:45] *** nonnooo has quit IRC
[19:33:29] <lblume> unless I can trigger it from booting on media
[19:33:45] <tsoome> thats again easy to test
[19:34:00] *** nikolam has quit IRC
[19:34:00] <lblume> relatively
[19:34:23] <lblume> Oh wait, nope
[19:34:24] <tsoome> you can run the test while booted from media to see if it's the same hangs, then move the disk to another HBA
[19:34:31] <lblume> \the hba I have is PCI-X
[19:35:08] <lblume> I'm afraid the performance will be too crippled to be conclusive
[19:35:56] <nachox> tsoome, how realistic is the scenario where you randomly read 10 byte chunks every time?
[19:36:26] <tsoome> depends on your app;)
[19:36:38] <tsoome> smc was reading 1 byte chunks;)
[19:36:45] <tsoome> hehe
[19:36:56] <lblume> And anyway, reading random 10 bytes has no issue, it's reading sequential 4GB that has ;-)
[19:37:06] <tsoome> what motherboard is it?
[19:37:39] <nachox> tsoome, smc was not an app
[19:37:54] <lblume> Asus P8H67-M Pro
[19:38:20] <lblume> And smc was a steaming pile.
[19:38:20] <nachox> it was an abomination
[19:39:05] *** libkeise1 has quit IRC
[19:40:14] *** libkeiser has joined #opensolaris
[19:44:24] <lblume> tsoome: I've always wondered if it could be a driver <-> zfs interaction issue, but I was never able to prove it conclusively :-/
[19:45:01] <tsoome> well, only option is to test with different HBA
[19:45:07] *** ingenthr has quit IRC
[19:45:55] <tsoome> or, maybe try with -B acpi-user-options=8, but I have no idea if that will even make any difference
[19:46:36] *** myrkraverk has quit IRC
[19:46:57] *** SunTzuTech has quit IRC
[19:48:46] *** myrkraverk has joined #opensolaris
[19:49:26] *** libkeiser has quit IRC
[19:49:53] *** ingenthr has joined #opensolaris
[19:51:34] *** Yu\2 has quit IRC
[19:54:59] *** ingenthr has quit IRC
[19:55:23] *** AxeZ has joined #opensolaris
[19:55:26] <lblume> hmmm, but there *was* that /etc/system option that had been advised as a workaround
[19:55:37] <tsoome> hm?
[19:56:00] <lblume> I just forget what it is :-/
[19:57:40] *** libkeiser has joined #opensolaris
[19:58:41] *** hajma has joined #opensolaris
[19:59:01] *** myrkraverk has quit IRC
[19:59:30] *** myrkraverk has joined #opensolaris
[19:59:35] <lblume> Heh, yay for IRC logs: set zfs:zfs_vdev_max_pending = 1
[20:00:22] <tsoome> ah, that may help, it's reducing the queue and thus the burden at the HBA level
[20:02:45] <tsoome> that iostat sample you pastebinned had actv 9, so maybe 1 is a bit too drastic.
[20:04:08] <tsoome> whats the actv you see at hung time?
[20:04:24] *** mikefut has joined #opensolaris
[20:04:44] <lblume> hmmm, lemmesee
[20:05:13] <tsoome> that zfs:zfs_vdev_max_pending is defaulting to 35 afaik
[20:07:22] *** Yu\2 has joined #opensolaris
[20:07:45] <tsoome> you can set it on the fly with echo zfs_vdev_max_pending/W0t10 | mdb -kw
[20:07:45] <tsoome> for value 10 for example
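Putting the two forms side by side, as a sketch (the values here are examples, not recommendations):

    # temporary, on the running kernel (0t10 = decimal 10)
    echo zfs_vdev_max_pending/W0t10 | mdb -kw
    # read the current value back
    echo zfs_vdev_max_pending/D | mdb -k
    # persistent across reboots, in /etc/system
    set zfs:zfs_vdev_max_pending = 10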
[20:07:57] *** Mech0z has quit IRC
[20:09:01] *** myrkraverk has quit IRC
[20:10:23] *** libkeiser has quit IRC
[20:10:36] <lblume> Dammit
[20:10:47] <lblume> More than 1 minute for screen to start
[20:11:19] *** timsf has joined #opensolaris
[20:12:24] *** SunTzuTech has joined #opensolaris
[20:13:45] *** Mech0z has joined #opensolaris
[20:14:06] *** hsp has quit IRC
[20:14:11] <lblume> I don't see actv above 10
[20:14:16] *** myrkraverk has joined #opensolaris
[20:14:21] <lblume> but it's not really related to the hanging
[20:15:07] <lblume> It can hang with actv = 1
[20:15:44] *** hsp has joined #opensolaris
[20:17:01] *** jfisc has quit IRC
[20:17:46] <tsoome> it does?
[20:18:00] <tsoome> then its not queue related issue.
[20:18:31] <tsoome> the pool version is latest i guess?
[20:18:34] *** libkeiser has joined #opensolaris
[20:20:05] <tsoome> tbh, i still think your best bet atm is to try to borrow some other hba for test
[20:21:05] <tsoome> the s11 by itself would require support to get some serious help, unless you can find some friendly engineer from oracle:D
[20:32:38] <lblume> haha, right :)
[20:33:35] <lblume> What Oracle hardware uses ahci these days?
[20:36:50] *** SunTzuTech has quit IRC
[20:59:20] *** jfisc has joined #opensolaris
[21:31:21] *** pothos_ has joined #opensolaris
[21:33:08] *** pothos has quit IRC
[21:33:20] *** pothos_ is now known as pothos
[21:37:42] *** gerard13 has quit IRC
[21:39:28] *** fOB has quit IRC
[21:43:07] *** gerard13 has joined #opensolaris
[21:47:45] *** fOB has joined #opensolaris
[21:51:27] *** ingenthr has joined #opensolaris
[21:51:51] *** darrenb` has quit IRC
[21:55:16] *** darrenb has joined #opensolaris
[21:59:08] *** odyi has quit IRC
[22:00:43] *** FrankLv has quit IRC
[22:01:47] *** FrankLv has joined #opensolaris
[22:11:19] *** odyi has joined #opensolaris
[22:11:20] *** odyi has joined #opensolaris
[22:24:20] *** fOB has quit IRC
[22:25:56] *** bcs000 has left #opensolaris
[22:28:37] *** Mech0z has quit IRC
[22:36:25] *** wdp has quit IRC
[22:37:57] *** wdp has joined #opensolaris
[22:42:08] *** axisys has quit IRC
[22:42:41] *** Yu\2 has quit IRC
[22:46:06] *** InTheWings has quit IRC
[22:47:12] *** InTheWings has joined #opensolaris
[22:47:32] *** Statts[a] has joined #opensolaris
[22:49:43] *** ganbold has quit IRC
[22:51:42] *** jfisc is now known as jfisc_
[22:52:54] *** jfisc_ is now known as jfisc
[22:57:18] *** hsp has quit IRC
[23:00:56] *** galt has joined #opensolaris
[23:12:23] <CIA-108> SFE jurikm: SFElibvpx.spec: initial spec
[23:15:26] *** hunter has quit IRC
[23:15:27] *** hunter_ has joined #opensolaris
[23:15:36] *** hunter_ has quit IRC
[23:25:09] <CIA-108> SFE jurikm: ext-sources/proftpd.xml: fix typos
[23:25:42] <CIA-108> SFE jurikm: encumbered/SFEhandbrake.spec: fix build deps
[23:26:57] <CIA-108> SFE jurikm: add missing patches
[23:32:22] *** phretor has joined #opensolaris
[23:32:24] <phretor> hi
[23:33:04] <phretor> which board looks more OpenIndiana/Solaris friendly? In particular, I'd use it for a SOHO RAID-Z NAS: http://www.intel.com/products/desktop/motherboards/dg45fc/dg45fc-overview.htm or http://www.intel.com/products/desktop/motherboards/DQ45EK/DQ45EK-overview.htm ?
[23:34:10] *** Statts[a] has quit IRC
[23:35:30] *** libkeiser has quit IRC
[23:35:37] <CIA-108> SFE jurikm: SFElibvpx.spec: build requires yasm
[23:37:04] <tomww> phretor: in both cases you have a nice checksummed ZFS filesystem and a fast CPU, but no protection against memory bit errors. So this reduces the checksumming on ZFS to a nice-to-have, no longer a real protection.
[23:37:46] <tomww> you might want to check out an ECC-protected memory setup as well. even smaller AMD cpus and some chipsets do support ECC protection for memory
[23:38:33] <phretor> tomww: unfortunately, I have to stick to a Celeron CPU I just bought. Is there any mobo you may suggest?
[23:39:13] <phretor> tomww: or, any other suggested mini-itx setup? 4 sata (or 3 sata + 1 esata) is a must for me.
[23:40:29] <tomww> have none for intel based setups. I only use AMDs, last bought an ATX sized board with an 890 chipset, 16GB ECC ram and a 615e quad cpu
[23:40:41] <tomww> so can't help here in the intel field, sorry.
[23:40:58] <phretor> tomww: any embedded amd setup in mini itx format?
[23:41:11] <phretor> I can always resell the Celeron :)
[23:41:45] <tomww> you might want to read the blogs around home nas, e.g. http://constantin.glez.de/
[23:42:53] <monsted> phretor: you could probably buy an entire HP Proliant Microserver for the money you're looking at paying for parts
[23:42:56] <tomww> I would use an AMD mobo chipset (790 or 890) and put ECC memory in.
[23:43:30] <phretor> monsted: how do you know how much I would spend? :)
[23:43:33] *** libkeiser has joined #opensolaris
[23:44:10] <phretor> tomww: so, I understand that putting ECC ram in those two Intel boards won't help, right?
[23:44:16] <monsted> phretor: the miniitx socketed boards are usually pretty expensive
[23:44:19] <tomww> as COU you could use a 2-core from the Athlon X2 series with the "e" at the end or a regular one (draws little more power). just make really sure you buy a cpu with at least 10h family/model series to get power management working
[23:44:38] <phretor> monsted: I found some around 70-100Eur
[23:44:44] <tomww> intel desktop is by marketing limited to non-ECC ... no way
[23:45:40] <tomww> unfortunately. for a regular workstation you can go w/o ECC, but I want my photos stored on a server which is ECC protected
[23:45:59] <phretor> I understand, I agree.
[23:50:48] <phretor> The sapphire IPC-AM3DD785G looks nice
[23:52:21] *** mikefut has quit IRC
[23:57:01] <phretor> tomww: what do you mean with 10h family/model series, and what do you mean with COU?
[23:59:02] <tomww> there are old AMD cpus known as family 0fh (15 decimal) and there are 10h (16 decimal) ones. the older ones, even as dual cores, can't use power management (e.g. change the clock to a lower frequency to save power)