[00:00:20] <m0le> useradd randomname [00:00:48] <iron_angel> right, and it errors out? [00:00:52] <m0le> when i cd /export/home/ there is no home folder and when I pullup the graphical tool no user either and I can not log in as that user [00:01:09] <m0le> no errors angel [00:01:12] <iron_angel> oh, hmm. [00:01:20] <iron_angel> grep username /etc/passwd [00:01:46] <m0le> points to the corresponding folder and shell [00:02:14] <m0le> random:x:100:1::/home/random:/bin/sh [00:02:17] <jwit> you want useradd -m [00:02:27] <iron_angel> yeah, was about to suggest that. [00:02:44] <iron_angel> also, the user is disabled initially IIRC, until you set a password [00:03:00] <iron_angel> (rather than having a blank password by default), but I could be smoking salt here. [00:03:02] <m0le> ok now I get an error, Operation not applicable [00:03:07] <m0le> Oh i know iron angel [00:03:12] <m0le> i set a password each time [00:03:26] <m0le> with -m it says the operation non apllicable [00:03:32] <iron_angel> oh, gotcha. [00:03:49] <iron_angel> Gotta use -m during the initial creation. If the user's created, I think you can use usermod. [00:03:52] *** LuckyLuke has joined #opensolaris [00:04:04] <m0le> i deleted the user [00:04:06] <pguser> no, you use ps [00:04:12] <iron_angel> ps? [00:04:22] <pguser> passwd [00:04:33] <iron_angel> to change a password, yes. [00:04:34] <pguser> there you can set home directories, shells [00:04:35] <m0le> yeah i know but still i can't create the user with the commands [00:04:50] <m0le> or anything else apparently it is non applicable [00:05:02] <iron_angel> hunh. [00:05:11] * iron_angel tries, SXCE b63/SUNW,Ultra-80 [00:05:39] *** sparc-kly has joined #opensolaris [00:05:40] *** ChanServ sets mode: +o sparc-kly [00:06:09] *** freakazoid0223 has quit IRC [00:06:12] <pguser> http://www.uwsg.iu.edu/usail/man/solaris/passwd.1.html [00:06:30] *** obsethryl has quit IRC [00:06:42] <m0le> i know pguser, the issue is not the password [00:07:01] *** Shinden has joined #opensolaris [00:07:20] <m0le> As root I can not add users to the system, which well can become a huge problem. [00:07:29] <iron_angel> m0le: That's odd. It just worked for me. [00:07:41] <iron_angel> useradd -g foo -m arglebargle [00:07:46] <iron_angel> then passwd arglebargle [00:08:39] <m0le> Operation Not Applicable [00:08:44] <m0le> on the useradd [00:08:49] <Shinden> adduser ? [00:08:54] <twincest> autofs [00:09:04] *** freakazoid0223 has joined #opensolaris [00:09:09] <twincest> m0le: easier solution: specify /export/home/user as the home dir [00:09:23] <trygvis> nono, use automountd! [00:09:23] <twincest> m0le: other solution: learn how to use the automounter; or remove /home from the automounter [00:09:40] <twincest> it's hardly worth it on a single user system.. although probably worth learning [00:10:36] *** timsf has joined #opensolaris [00:11:05] *** m0le has quit IRC [00:11:06] <Shinden> m0le: UnixCBT feat. Solaris 10 Edition ;] search it in torrent sites [00:11:23] *** m0le has joined #opensolaris [00:11:33] <iron_angel> ack, something's very wrong with XFCE. [00:11:36] <Shinden> m0le: UnixCBT feat. Solaris 10 Edition ;] search it in torrent sites [00:11:54] <Shinden> iron_angel: shit happens [00:11:55] <Shinden> :( [00:12:00] <m0le> funny part twincest: it created user, and no home directory after specifying home directory [00:12:02] <iron_angel> hmm, indeed. 
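A minimal sketch of what the channel converges on above: create the user with useradd -m and point its home directory under /export/home, since /home is normally handled by the automounter on Solaris. The username "random" is the one from the log; the group "staff" and the shell are assumptions.

    useradd -m -d /export/home/random -g staff -s /bin/sh random
    passwd random                 # account stays locked until a password is set
    grep random /etc/passwd       # verify the passwd entry
    ls -ld /export/home/random    # verify the home directory was created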
[00:12:03] <m0le> thanks shinden [00:12:10] <twincest> yeah you need to make the homedir yourself [00:12:18] <Shinden> m0le: in /export/home make one [00:12:24] <twincest> or useradd -m [00:12:37] * iron_angel will probably use this as an opportunity to learn dtrace :/ [00:13:30] <Shinden> m0le: http://isohunt.com/release/76349/UnixCBT [00:13:31] <Shinden> [; [00:13:58] *** timsf has quit IRC [00:14:30] <m0le> thanks Shinden [00:15:02] <Shinden> 10 euro ;] [00:15:32] *** bengtf has quit IRC [00:16:05] <Shinden> VERSION [00:17:09] <m0le> yeah, after creating the home folder and giving full read-write to all just to confirm it is still not seeing the directory [00:17:38] <Shinden> where u make it ? [00:17:41] <Shinden> in /home ? [00:17:46] <Shinden> or /export/home [00:17:50] <m0le> /export/home/ [00:18:15] *** timsf has joined #opensolaris [00:18:23] <iron_angel> I need to figure out what the canonical way to do that is, I always cheat by having a slice mounted on /home... [00:18:24] <timsf> Evenin' all [00:19:02] <Shinden> m0le: usermod -d /home/export/userdir username [00:19:20] <m0le> yeah shinden i already did that [00:19:39] <Shinden> chown ? [00:19:44] <m0le> I am familiar with adding users to a system just I have had this issue before [00:19:53] <m0le> I reinstalled it and it worked just fine [00:19:57] <Shinden> [; [00:20:05] *** sparc-kly__ has quit IRC [00:20:07] <m0le> but i was hoping to avoid a reinstall [00:20:10] <Shinden> change owner and group [00:20:14] <Shinden> mb will work [00:20:20] <trygvis> hmm .. after adding disks to a system, shouldn't they show up after a devfsadm? [00:20:31] *** LuckyLuk1 has quit IRC [00:20:40] <iron_angel> I would think so, yeah. [00:20:44] <m0le> That is a no go Shinden, Time for me to just reinstall [00:20:54] *** SymmHome has quit IRC [00:20:58] <Shinden> m0le: user linux next time [00:20:58] <Shinden> ;] [00:21:09] <Shinden> ubuntu 4 desktop debian 4 serwer [00:21:10] <iron_angel> give up? [00:21:11] <m0le> Shinden: oi? [00:21:14] <iron_angel> Bah, never! [00:21:14] <Shinden> [; [00:21:43] <m0le> Shinden: <--redhat for server, gentoo, freebsd, openbsd, and plan 9 desktop lolz [00:21:51] <Shinden> lol [00:22:11] <Shinden> os/400 rulla [00:22:12] <Shinden> ;] [00:22:23] <m0le> wait that is just how some people see it, I mean i know my way around just when I see this happen after an install of solaris a part of me dies [00:22:47] <m0le> Plan 9 is where it is at, not really but it does make for a good headache =) [00:23:44] <Reidms-420R> mole you use plan9? [00:24:04] <m0le> yes, I have it installed and running on a machine, I installed it about two months ago [00:24:13] <Reidms-420R> How is it? [00:24:18] <m0le> Different [00:24:21] <m0le> =) [00:24:32] <Reidms-420R> do you think it is a "successor to Unix"? [00:24:37] <Shinden> learn os/400 ;] [00:24:48] <iron_angel> I need to try out Plan 9 some time. [00:24:57] <m0le> At it's current state no, if more developers got a hold of it and worked with it for a while yes [00:25:02] <Reidms-420R> same here iron_angel [00:25:09] <e^ipi> i've got a house full of commercial unix [00:25:12] <iron_angel> I've got Solaris, NeXTSTEP, AIX, Linux, Mac OS X and OpenBSD running in here right now... [00:25:13] <e^ipi> I even have a SCO box [00:25:16] <iron_angel> oh, and OPENSTEP. 
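If the account already exists but has no home directory (the useradd run without -m discussed above), the manual fix-up twincest and Shinden describe looks roughly like this; "random", /export/home and the group "staff" are assumptions carried over from the log.

    mkdir -p /export/home/random
    chown random:staff /export/home/random
    chmod 755 /export/home/random
    usermod -d /export/home/random random   # repoint the passwd entry at the new directory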
[00:25:17] *** sparc-kly has quit IRC [00:25:38] <m0le> It has potential IBM showed us that, but not exactly the drop unix and migrate phase but that is just my opinion [00:26:16] <Reidms-420R> lol SCO [00:26:25] <e^ipi> unixware 7.1 [00:26:29] <m0le> Blue Gene with it's petaflop processing with it. [00:26:33] <Shinden> i used openSCO [00:26:34] <Shinden> ;] [00:26:58] <iron_angel> is SCO really as barfucious as the popular conception says? [00:27:24] <iron_angel> (Incidentally, I find most people complaining about how determinedly spartan Solaris is, haven't used it since 2.6) [00:27:27] <Shinden> sco is big SATAN like microsoft and USE [00:27:29] <Shinden> USA [00:27:30] <e^ipi> meh, i wouldn't go out of my way to use it, but it's not really /that/ terrible [00:27:33] <m0le> I was always a gnu/linux and some BSD no SCO, UNIXWARE, or Solaris until about two years ago when I would mess occasionally use solaris [00:27:34] <Reidms-420R> thought Nextstep was eoled [00:27:45] <iron_angel> It is. [00:27:48] <jamesd> most of SCO was made and designed 10 years ago, long before the current managers where in power... [00:27:58] <iron_angel> Well, unless you count Mac OS X, its direct descendent. [00:28:06] <e^ipi> i haven't used linux in a while actually... it's always just pissed me off how buggy it is [00:28:09] <jamesd> and i think 10 years is a very short sited estimate. [00:28:19] <m0le> <---has a NextCube that still works [00:28:23] <iron_angel> me too [00:28:46] <Reidms-420R> I need to try FreeBSD Sparc [00:28:50] * iron_angel has 4 NeXTs, a cube and three slabs, but only two are fully working now. [00:29:00] <e^ipi> does freebsd have a sparc port? [00:29:06] <Reidms-420R> aye [00:29:10] <iron_angel> Reidms-420R: it's a bit limited, but nice. An E250 is about the ideal box for it. [00:29:11] <e^ipi> i thought they were just x86/amd64 + powerpc [00:29:31] <iron_angel> nope, also SPARC and Alpha. [00:29:34] <Shinden> what about GNU/HURD ? [00:29:38] <Reidms-420R> 420R close enough iron_angel? [00:29:54] <iron_angel> Yeah. That's pretty much an Ultra 80 in a rackmount, right? [00:30:07] <Reidms-420R> aye [00:30:14] <m0le> Shinden:Don't speak of the HURD LOL [00:30:26] <iron_angel> As long as you're not trying to use much in the way of graphics, yeah, it'll work great [00:30:26] <Reidms-420R> It has everything except a sound card :p [00:30:37] <Shinden> [; [00:30:41] <timsf> Hey, don't suppose anyone from the opensolaris.org website team is online ? [00:30:43] <iron_angel> I have an eBus sound module just lying around, from a dead Ultra 60. [00:30:46] <m0le> j/k [00:30:51] <Shinden> debian/hurd will rulla [00:30:52] <Shinden> [; [00:30:53] <Reidms-420R> sweet [00:31:05] <iron_angel> will, if HURD ever gets finished. [00:31:10] <Reidms-420R> iron_angel willing to sell? [00:31:16] <m0le> IF is the big deal [00:31:21] <iron_angel> Sure! [00:31:27] <timsf> am seeing 60 sec. + page create times on the website at the moment, and I'm loosing the will to live... [00:31:32] <iron_angel> it's just that shipping will be a butt, probably. [00:31:45] *** Fullmoon has joined #opensolaris [00:32:12] <iron_angel> Reidms-420R: also, you could use either an ES1370 or an EMU10k1 card, I forget which one has SPARC drivers. [00:32:21] <iron_angel> oh, but under FreeBSD they both do. 
[00:33:19] <Reidms-420R> Well I need Solaris Sparc support- since it is primary OS [00:34:33] <iron_angel> One of those two works in Solaris (at least in 10) with a 3rd-party driver, but I'd have to google to see which (or both)., [00:34:47] <iron_angel> well, there's also 4front OSS, but that doesn't help much. [00:35:02] <iron_angel> but yeah, I could sell the eBus card, for the cost of shipping. [00:35:15] <Reidms-420R> have the model number? [00:36:16] <iron_angel> for which, the eBus module? [00:36:24] <Reidms-420R> aye [00:36:30] <iron_angel> 501-4155 [00:36:43] <iron_angel> working pull from otherwise dead U60, identical to card in U80. [00:37:38] <Shinden> http://pl.youtube.com/watch?v=Y6kd42jIaHk&eurl=http%3A%2F%2Fwww%2Eubuntushop%2Ebe%2F [00:37:40] *** pguser has quit IRC [00:40:35] <Reidms-420R> I could use 501-4155 [00:41:12] * iron_angel nodnods. [00:41:16] <iron_angel> cool. [00:41:22] *** bengtf has joined #opensolaris [00:43:43] *** sparc-kly has joined #opensolaris [00:43:43] *** ChanServ sets mode: +o sparc-kly [00:46:03] <Reidms-420R> So what country/state are you in iron_angle [00:46:09] <Reidms-420R> ^iron_angel [00:46:25] <iron_angel> Presently in Italy, but I'd be sending from a US military base. [00:46:32] <iron_angel> I'll be back in the US semi-shortly though. [00:52:35] *** yarihm has joined #OpenSolaris [00:55:43] <m0le> <--probably the last place anyone here would expect to see in irc =) [00:55:45] *** bunker has quit IRC [00:56:02] <iron_angel> where? [00:56:07] <iron_angel> I see only an IP. [00:56:12] <m0le> Mississippi, the coast of mississippi [00:56:18] <iron_angel> gotcha :) [01:00:20] *** blueandwhiteg3 has joined #opensolaris [01:05:49] *** cmihai has quit IRC [01:07:04] *** sioraiocht has quit IRC [01:07:23] *** cypromis_ has joined #opensolaris [01:07:53] <Shinden> czas spac [01:07:54] <Shinden> nara [01:08:24] *** Mazon is now known as mazon [01:14:56] *** NikolaVeber has quit IRC [01:17:29] *** richlowe has joined #opensolaris [01:21:57] *** Fish- has quit IRC [01:22:24] *** aska_ has quit IRC [01:22:52] *** cypromis has quit IRC [01:25:41] *** vmlemon has quit IRC [01:29:12] *** Tpenta has joined #opensolaris [01:30:11] *** danv12 has joined #opensolaris [01:30:33] *** Arnald has quit IRC [01:35:07] *** ChanServ sets mode: +o Tpenta [01:35:41] * timsf decides that the opensolaris.org webapp is in fact a subtle psychological experiment investigating the limits of patience in computer users [01:36:43] <blueandwhiteg3> Would SXCE default to different options if the installer were left aloe? [01:36:46] <blueandwhiteg3> *alone [01:37:12] <Tpenta> lol [01:37:15] <Tpenta> you're up late tim [01:37:24] <Tpenta> waiting to check that gman arrives safely in Oz? [01:37:29] <Tpenta> it's past midnight there now isnt it? [01:37:58] <timsf> Yeah. Decided at 10:30 that I wasn't tired, and I might as well migrate the ie-osug pages over to the new project/ page [01:38:14] <timsf> It's now 00:40, and I'm getting tired [01:38:21] * Tpenta shudders I shoudl do that, gawd i've not even put toninghts meeting up on the old ones [01:38:47] <timsf> - that said, it's saved me a trip out of bed (Ella just woke up a bit whingy, but I got her back to sleep again) [01:39:00] <timsf> I only had about 15 pages to migrate. [01:39:11] <timsf> but each page creation was taking about 2 minutes, [01:39:21] <timsf> subsequent edits about 40 sec. each [01:39:28] <timsf> - adds up. [01:39:58] <timsf> I take it Gman hasn't arrived yet ? 
[01:41:18] <Tpenta> havent heard from him, i think he said he gets in about 10 [01:41:25] <Tpenta> about 20 minutes away [01:41:30] <timsf> (oh, and bloody page attachments seem to be taking even longer) [01:41:46] <timsf> Cool - tell him "hi" whenever you see him [01:42:33] *** jpdrawneek has quit IRC [01:43:51] <seanmcg> Morning Tim [01:44:21] <timsf> Hey Sean - late there isn't it ? Must be, ooh about 00:44 ? ;-) [01:44:53] <seanmcg> aye, got some of that list in. They've given us an extra day :) [01:45:16] <seanmcg> now my brain hurst and need sleep. See ya later. [01:45:23] <timsf> Nice - sleep tight. [01:45:49] *** nostoi has quit IRC [01:49:04] *** jHoNDoE has joined #opensolaris [01:50:08] <richlowe> Hey timsf. [01:50:48] <timsf> hi richlowe [01:51:24] <richlowe> timsf: for what it's worth, I'm told the webapp is even worse to maintain than to use. [01:51:36] <richlowe> so you're not even getting half the suffering. :) [01:51:40] <blueandwhiteg3> I just re-installed Solaris Express and it went smoothly, but I've never re-installed. How do I get it to see the zpool I previously created? [01:51:56] <Stric> zpool import [01:51:58] <timsf> Yeah, heard that alright - a quarter the suffering is painful enough thanks! [01:52:08] <timsf> Think I'm nearly done [01:52:25] <timsf> all I need now, is to spot some typo that runs over all 15 or so pages ... [01:52:58] *** ShadowHntr has joined #opensolaris [01:52:59] <timsf> Oh f*. [01:53:01] <blueandwhiteg3> wow, that was easy. i was trying to figure out the correct syntax... [01:53:22] <timsf> Consistency between "Users Group" and "User Group" - bugger. [01:53:45] <blueandwhiteg3> I'm really mystified why this solaris install went so much better than the last one... [01:53:47] <jamesd> blueandwhiteg3, that is why everyone loves EMC and veritas [01:54:00] <jamesd> er loves ZFS, but [01:54:04] *** sioraiocht has joined #opensolaris [01:54:31] *** uebayasi has joined #opensolaris [01:54:45] * iron_angel tries to figure out why XFCE is badly malfunctioning on the U80 when it worked fine on the U60... [01:54:48] <sioraiocht> okay [01:54:49] <sioraiocht> ir2hf: error: Ran out of memory [01:55:00] <sioraiocht> why is sun studio giving me that error? [01:55:03] <sioraiocht> HMM? [01:55:04] <sioraiocht> =p [01:55:07] <richlowe> sigh. [01:55:12] <blueandwhiteg3> jamesd: I think you recall my solaris complaints... I think somehow my install was screwed up from the start. [01:55:36] <richlowe> why would people send external mail with From: as swan hosts with no MX. [01:55:40] * richlowe looks pointedly at borg.sfbay [01:55:43] <sioraiocht> it only happens when I compile with -xinstrument=datarace [01:55:48] <jamesd> my brain has melted... its 95F today, and a lot of humity [01:56:26] <timsf> Sorry to hear it jamesd - lots of humidity in Dublin as well today ~= 100% in fact :-/ [02:02:39] <blueandwhiteg3> Crap, this is insane. Everything that failed last time has worked so far. [02:03:48] *** MajorPayne has joined #opensolaris [02:04:13] <blueandwhiteg3> I think I have found a bug, or I misunderstand the function of this. Could anybody either correct me or tell me where to file a bug report? In the Shared Folders GUI, you can add 'allowed hosts.' When I choose the "Hosts in the nge0 network" option it does not allow any machines in the nge0 network to connect. [02:07:47] *** jmcp has joined #opensolaris [02:07:59] <jamesd> not sure, i tend to avoid gnome and guis for sysadmin tasks [02:08:13] * iron_angel is even more baffulated now. 
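Stric's one-line answer above, spelled out; "bigpool" is the pool name blueandwhiteg3 uses later in the log and is otherwise just a placeholder.

    zpool import            # list pools found on attached disks
    zpool import bigpool    # import one by name (or by the numeric id shown)
    zpool status bigpool    # confirm it came back healthy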
[02:08:26] <blueandwhiteg3> jamesd: Regardless, it would be a bug, which I would think should be fixed? [02:08:45] <jamesd> sure go ahead and post a bug on bugs.opensolaris.org [02:09:12] <blueandwhiteg3> There's also another issue I believe is a bug.... despite a re-install, after I set the NIC to a static IP/router/etc then switch back to DHCP, it won't ever fetch and set the gateway IP address properly [02:09:35] <jamesd> did you change its setup using sys-unconfig [02:09:47] <blueandwhiteg3> I tried that last time, no fix [02:09:47] <richlowe> I'm still not sure how much of the JDS gui admin stuff actually works. [02:09:53] <richlowe> my limited experience has been "none of it" [02:09:54] <blueandwhiteg3> i tried a fresh install, no fix either [02:09:59] <richlowe> but I'm pretty sure they wouldn't have been able to get away with that. [02:10:02] <richlowe> so I assume it must, somehow. [02:10:18] <blueandwhiteg3> JDS? [02:10:25] <richlowe> GNOME with foolish branding. [02:10:31] <freakazoid0223> java desktop system [02:10:34] <iron_angel> bah, XFCE is a delayed segfault-fest for no especially good reason right now. That's weird. [02:10:41] <richlowe> actually, since they're trying to get away from the JDS name, maybe 'GWFB' is as good as any ;) [02:10:46] *** MajorPayne has left #opensolaris [02:11:05] <iron_angel> One would expect a 'Java Desktop System' to be written in Java. [02:11:10] <iron_angel> I'm glad it's not though. [02:11:25] <blueandwhiteg3> Whatever the case may be... this fresh re-install solved so many issues. I think something was wrong with my previous install.... [02:11:52] *** yongsun has joined #opensolaris [02:11:52] *** yongsun has left #opensolaris [02:11:52] *** mega has quit IRC [02:12:00] <SYS64738> hi, where can I find the pkg for web proxy server ? [02:12:04] *** mega has joined #opensolaris [02:14:48] <timsf> Ok, I give up - I'll restart the battle with the webapp tomorrow (erm, today, whatever) Night all! [02:14:48] *** timsf has quit IRC [02:18:23] *** jamesd has quit IRC [02:19:22] *** sartek_ has joined #opensolaris [02:19:46] <blueandwhiteg3> Well, it's all working great, except for the fact that I'm not getting write access to the volume I'm mounting up over NFS, despite specifying it clearly in the mount command. What settings under solaris control the server aspect of write access? [02:19:54] <SYS64738> night [02:22:00] <blueandwhiteg3> I guess it's directly mapping unix permissions.... [02:23:16] *** jamesd has joined #opensolaris [02:23:16] *** ChanServ sets mode: +o jamesd [02:24:45] <iron_angel> yah, it should. [02:25:07] <iron_angel> but is it mounting ro, or mounting rw but you don't have permissions? [02:25:48] <blueandwhiteg3> i fixed the permissions [02:26:35] <blueandwhiteg3> oops, brb [02:26:36] *** blueandwhiteg3 has quit IRC [02:26:38] *** sartek has quit IRC [02:26:55] *** WOP has quit IRC [02:27:18] *** newpers has left #opensolaris [02:30:13] *** blueandwhiteg3 has joined #opensolaris [02:32:46] <blueandwhiteg3> Alright, I'm still having problems with NFS performance. I have an empty RAIDZ with 4 x 250 GB drives. I see decent throughput locally. But via NFS, I'm seeing... <13 MB/sec... over a gigabit link tested good to 112+ MB/sec [02:33:00] <blueandwhiteg3> Is there anything that can be optimized on the server side? [02:33:09] <iron_angel> are you using TCP? [02:33:29] <blueandwhiteg3> I'm using whatever is the default. I was getting 112+ MB/sec with TCP in my bandwidth tests to the solaris box. 
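On the server side, the write-access question from a few lines up comes down to the NFS share options plus ordinary Unix permissions, which is where the conversation ends up. A hedged sketch; /bigpool is the mountpoint used later in the log.

    zfs get sharenfs bigpool      # "on"/"off" or an option string
    zfs set sharenfs=rw bigpool   # share read-write (legacy equivalent: share -F nfs -o rw /bigpool)
    share                         # see what is actually exported
    ls -ld /bigpool               # the mode/owner still has to allow the client's uid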
[02:33:30] *** jamesd has quit IRC [02:33:31] <iron_angel> in my experience, NFS over TCP tends to have better throughput than over UDP even though that's counterintuitive. [02:33:36] *** yongsun has joined #opensolaris [02:34:42] *** hali has quit IRC [02:34:47] *** iron_angel has quit IRC [02:35:23] *** jamesd has joined #opensolaris [02:35:23] *** ChanServ sets mode: +o jamesd [02:36:25] *** yongsun|wfh has joined #opensolaris [02:37:36] *** jamesd has quit IRC [02:41:12] *** jamesd has joined #opensolaris [02:41:12] *** ChanServ sets mode: +o jamesd [02:42:40] *** yongsun|wfh_ has joined #opensolaris [02:44:26] <blueandwhiteg3> tcp doesn't seem to help... i'm alrady using the largest possible r/w block size [02:44:51] *** jamesd has quit IRC [02:45:19] *** m0le has left #opensolaris [02:46:58] *** jamesd has joined #opensolaris [02:47:23] *** mega has quit IRC [02:48:12] *** lon3star has joined #opensolaris [02:48:21] *** mega has joined #opensolaris [02:48:47] <lon3star> hello everybody [02:48:56] <lon3star> i have one question [02:49:08] <blueandwhiteg3> Are there any suggestions to optimize the solaris size of things? [02:49:41] <lon3star> i just installed sun web server but when i start the web server i cant see my page on localhost [02:50:19] <lon3star> do i need to configure my /etc/services ? [02:51:01] <jamesd> blueandwhiteg3, give / about 6GB, put the rest on a zfs pool.... [02:51:30] <jamesd> do a full install+oem ..it will save you a lot of problems.. hard disks are cheap [02:51:41] <blueandwhiteg3> jamesd: What exactly do you mean? I have my zpool mounted up at /bigpool [02:51:48] <blueandwhiteg3> I am using NFS to connect directly to that [02:52:34] <jamesd> blueandwhiteg3, you can put 100's of filesystems in one pool, including ones that hold /usr/local and /opt/csw ... see my blog for more details, uadmin.blogspot.com [02:53:25] <blueandwhiteg3> jamesd: I mis-typed. I was speaking of the solaris side of things, in terms of NFS performance. [02:53:56] *** Gman has joined #opensolaris [02:54:38] <blueandwhiteg3> The interesting twist is that I seem to see decent read speeds via NFS, just terrible write speeds. (A lot more terrible than iozone running locally shows) [02:54:57] *** jamesd_ has joined #opensolaris [02:55:45] <jamesd_> mount your shares with -orsize=8192,wsize=8192 and that should give you good performance unless you are serving many clients [02:56:07] <jamesd_> google solaris nfs tuning for more ideas, but really it works pretty damm good out of the box. [02:57:04] <jamesd_> i get 12.5MB per second with that setup over 100mbit nics, with gigabit i get more [02:57:18] <blueandwhiteg3> jamesd_: That's what I was hoping, but even with bigger r/w sizes, I'm only getting 8-13 MB/sec write, though read is much much faster [02:58:14] *** yongsun|wfh has quit IRC [02:58:53] <jamesd_> depending on the type and size of files you are using, it may be limited, are you using a full gigabit network? [02:59:00] <jamesd_> how is the hard disks connected [02:59:16] <blueandwhiteg3> jamesd_: Full dupex gigabit, direct machine to machine link, 895 mbit/sec typical netperf results using tcp [03:00:04] <jamesd_> you can also play with mtu's jumbo mtus can help [03:00:15] <richlowe> Hm. [03:00:42] <richlowe> Wonder how gdamore's stuff looking at UDP would help with NFS. [03:00:54] <richlowe> theres a large packet/large number of packets UDP protocol. [03:01:08] *** yongsun|wfh_ has quit IRC [03:01:23] <blueandwhiteg3> jamesd_: It doesn't seem to be related to the mtu size. 
Writes are slow, reads are fast. Additionally, 895 mbit/sec with tcp and stock 1500 mtu seems very decent. [03:01:45] <blueandwhiteg3> jamesd_: I also can't figure out how to change the MTU under solaris with my nge0 NIC [03:01:49] <jbk> does the hcl include wireless cards (by brand)? [03:01:52] <jamesd_> i would look into disk if only writes are slow [03:02:12] <richlowe> jbk: not sure. [03:02:15] <blueandwhiteg3> richlowe: I tried both tcp and udp, no difference [03:02:28] <richlowe> especially given vendors habits of changing radio's without changing model#'s [03:02:37] <richlowe> ... without the apstrophe's. [03:02:49] <richlowe> (yeah, the last one was deliberate) :) [03:02:55] <jbk> that's the thing -- i know which chipsets are supported, but that doesn't help a lot in picking a specific card [03:03:28] <richlowe> jbk: often times, neither does knowing make/model. [03:03:53] <richlowe> I was looking at a usb wireless thing last night that's used 3 or 4 slightly different combinations of crud, all under basically the same name. [03:04:52] <jamesd_> x86: nge Driver Updated to Support Jumbo Framework [03:04:52] <jamesd_> This networking enhancement is new in the Developer 5/07 release. [03:04:52] <jamesd_> Starting with this release, the nge driver has been updated to enable Jumbo Frame support. The nge driver's default MTU has been raised to 9 Kbytes, that improves system performance and lowers CPU utilization significantly. [03:04:55] <blueandwhiteg3> jamesd_: i'd love to benchmark the disk, but every tool I can find is rather frustrating to work with. [03:05:21] <richlowe> jamesd_: it defaults up there? [03:05:23] <richlowe> Huh. [03:05:26] <jbk> hmm i wonder if best buy might have a decent price -- might try them just cause i'd have less hassles returning it if it doesn't work [03:05:54] <jamesd_> blueandwhiteg3, you can get an idea, by looking at iostat -xz 2 and watch the %b collumn [03:06:14] <jamesd_> richlowe, http://docs.sun.com/app/docs/doc/820-0724/6nceocr8c?a=view [03:06:32] *** simford has joined #opensolaris [03:07:23] <blueandwhiteg3> jamesd_: what does %b mean? [03:07:29] <jmcp> blocked [03:07:36] <jamesd_> blueandwhiteg3, i beleve its busy [03:08:08] <richlowe> Hey jmcp. [03:08:21] <jmcp> hiya [03:08:40] <blueandwhiteg3> jamesd_: This gives me the throughput on each drive, which jumps up and down a lot, but it's often well into 20-ish MB/sec per drive [03:09:20] <jamesd_> blueandwhiteg3, what type of files are you moving? large or small? lots? [03:09:39] <blueandwhiteg3> jamesd_: single, large files [03:10:02] <blueandwhiteg3> i just timed cat /dev/zero > /bigpool/file and in 10 seconds, I made a 380 MB file [03:10:15] *** alanc_away has quit IRC [03:10:36] *** alanc_away has joined #opensolaris [03:11:19] <blueandwhiteg3> that's really disappointing in terms of write rates (a single file could be written more quickly to any one of these drives alone), but still vastly higher than the write rate showing up over NFS (at most 13 MB/sec, usually a lot less) [03:13:03] <jamesd_> http://tech.groups.yahoo.com/group/solarisx86/message/39057 [03:13:42] <jamesd_> blueandwhiteg3, i would also recomend looking at iscsi it may give you better performance, zfs and nfs are the greatest match currently... [03:14:29] <palowoda> Heck I get about 44M/bytes on nfs writes with two cheap Gig E cards. [03:14:32] <palowoda> on Zfs [03:15:50] <palowoda> blueandwhiteg3 just curious did you try a non-zfs exported file system write test? 
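The client-side mount options jamesd_ suggests, plus the server-side iostat check, in command form. "server" and the paths are placeholders, not from the log.

    # on the client (Solaris syntax; a Linux or Mac client would use mount -o rsize=8192,wsize=8192,tcp ...)
    mount -F nfs -o rsize=8192,wsize=8192,proto=tcp,vers=3 server:/bigpool /mnt
    # on the server, while the transfer runs: watch the %b (busy) column per disk
    iostat -xz 2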
[03:15:55] <jamesd_> all my solaris boxes max out the rate of the slowest nic involves, i don't have 2 gigabit solaris boxes.... the only other gigabit box i have is a windows xp, and they are not known for network io [03:16:46] *** sartek_ has quit IRC [03:16:59] <palowoda> I'd agree with that. My cheap 8192 gig nic is running about 650mbits. tcp large buffers. [03:18:27] <blueandwhiteg3> palowoda: the NIC level is great... 895 mbit/sec... it's the file protocol or the filesystem or something that's biting me [03:18:57] <palowoda> Yes I've been reviewing what you wrtoe. [03:19:03] <palowoda> *wrote. [03:19:19] <palowoda> Did you try on a non-zfs raid exported file system. [03:19:23] <blueandwhiteg3> palowoda: When you say non-zfs exported file system, what do you mean? like UFS on the boot drive? [03:19:38] <palowoda> Yes and I meant exported. [03:20:10] <blueandwhiteg3> i can't export my zpool because it says it is in use [03:20:15] <blueandwhiteg3> can i force that somehow? [03:20:45] <palowoda> Oh didn't know you root partition was zfs. [03:21:08] <blueandwhiteg3> palowoda: No, the root partition isn't zfs. [03:21:33] <blueandwhiteg3> ok, my crappy boot drive is getting... 30 MB/sec write from /dev/zero [03:21:38] <blueandwhiteg3> locally [03:21:39] <blueandwhiteg3> now trying via NFS [03:23:28] <jamesd_> blueandwhiteg3, try running netperf and dd the drive at the same time, it will more closely emulate the reading/writing from the drive and dealing with net traffic [03:23:50] <blueandwhiteg3> jamesd_: on which system? [03:23:58] *** Murmuria has quit IRC [03:24:06] <jamesd_> blueandwhiteg3, on the solaris box.. [03:24:47] <palowoda> A good file mix bench is: http://opensolaris.org/os/community/performance/filebench/ [03:25:32] <palowoda> But that is something for later. [03:26:12] <sioraiocht> anyone here use gdb on solaris? [03:26:42] <jamesd_> sioraiocht, I have, but dtrace goes a long way to replacing the need for gdb in a lot of cases [03:26:54] <sioraiocht> jamesd_: I know how to use gdb though =/ [03:27:15] <jamesd_> okay what is the problem [03:27:31] <blueandwhiteg3> Alright, this is plain embarrassing. /export/home gives me ~17 MB/sec sustained - pretty solid network use curve. /bigpool gives me ~8-9 MB/sec average. [03:27:41] *** theRealballchalk has quit IRC [03:27:48] <sioraiocht> I've never gotten this error "not in executable format: File format not recognized" [03:28:06] *** theRealballchalk has joined #opensolaris [03:28:17] <jamesd_> sioraiocht, file file_that_is_giving_you_error [03:28:31] <sioraiocht> gives me that same error [03:28:38] <palowoda> try a zfs raid 0 pool test. [03:28:50] <sioraiocht> I just realised I compiled this binary with -xinstrument=datarace [03:28:53] <sioraiocht> do i need to recompile? [03:28:55] <jamesd_> sioraiocht, i mean at the command line. [03:29:02] <sioraiocht> ohh [03:29:17] <sioraiocht> ELF 64-bit LSB executable AMD64 Version 1 [SSE2 SSE FXSR AMD_3DNow CMOV FPU], dynamically linked, not stripped [03:29:53] <jamesd_> is your gdb compiled to support 64 bit executeables? [03:30:13] <sioraiocht> it just got installed from blastwave (not by me, I don't have root) [03:30:19] <sioraiocht> that could be it, though? [03:30:42] <jamesd_> i would guess that is the case... [03:30:45] <sioraiocht> okay [03:30:52] <sioraiocht> thanks [03:31:13] *** nrubsig has joined #opensolaris [03:31:13] *** ChanServ sets mode: +o nrubsig [03:31:24] <nrubsig> Morning! 
:-) [03:31:30] <jamesd_> i think its built on solaris 8 and solaris 8 doesn't have amd x64 support [03:31:56] <jmcp> hi nrubsig [03:31:59] <nrubsig> Anyone know a Mr. Ringwas ? [03:32:02] <jmcp> jamesb_: correct [03:32:03] <nrubsig> jmcp: Hi! :-) [03:32:25] *** theRealballchalk has joined #opensolaris [03:32:28] *** jHoNDoE has quit IRC [03:32:33] <jmcp> sioraiocht: for x64 support you need s10 fcs or later [03:32:40] * nrubsig sees lots of subscriptions of a Mr. ringwas@{yahoo,acm,dsl-only,*}.{org,net.com} [03:32:53] <jmcp> hmm [03:32:58] <jmcp> I'd be suspicious [03:33:03] <sioraiocht> jmcp: you used abbreviations I don't understand [03:33:17] <nrubsig> jmcp: yeah, I am suspiciour. [03:33:20] <nrubsig> s [03:33:53] <jamesd_> sioraiocht, that is just what the first release of solaris 10 was called [03:33:56] <Tempt> blueandwhiteg3: Are you still bangin' on about your NFS problems ;-) [03:34:01] <jamesd_> fcs == first customer sales [03:34:04] <Tpenta> FCS = First Customer Ship [03:34:16] <Tpenta> close james ;) [03:34:17] <nrubsig> Why not FSH ? [03:34:26] <nrubsig> FirstCustomerHorror [03:34:29] <nrubsig> or [03:34:34] <nrubsig> FHSTC [03:34:37] <jamesd_> nrubsig, that is linux and windows [03:34:38] <Tpenta> because it doesnt pop in for 10 minutes and say Hello, oh no, that's Fish [03:34:41] <nrubsig> FirstHOrrorShippedToCustomers [03:34:52] *** theRealballchalk has quit IRC [03:35:29] <blueandwhiteg3> Tempt: yes, still going crazy [03:35:56] * Tempt sighs [03:36:00] <blueandwhiteg3> ZFS/RAID-Z seems to be slower than I could ever imagine when combined with NFS [03:36:03] <sioraiocht> jamesd_: well this machine definitely has solaris 10 heh [03:36:19] <blueandwhiteg3> I have 4 x 250 GB drives that have average read/write speeds around 60 MB/sec [03:36:22] <jmcp> Tpenta: yeah, that's it [03:36:26] <palowoda> Try a RAID-0 ZFS test. [03:36:33] <jamesd_> sioraiocht, yes but your gdb package was compiled on a solaris 8 machine, so it wont have support for x64 binaries [03:36:34] <jmcp> sioraiocht: sorry - bad assumption on my part [03:36:41] <Tempt> raidz does consume a little CPU time. [03:36:48] <Tempt> But I've had no problems getting solid throughput. [03:36:55] <sioraiocht> jmcp: haha no worries [03:37:08] <blueandwhiteg3> Tempt: This CPU is way more than ample. It's an AMD64 3400+, even tested overclocking [03:37:13] <sioraiocht> jamesd_: ahhhh, I'll try compiling it myself, think I need root to just run it? [03:37:18] <blueandwhiteg3> CPU usage is <20% at all times during all tests under solaris [03:37:24] <Tempt> If you're not able to hammer 100Mbyte/sec through your four SATA drives you've either got a very slow system, a problematic controller, or ultimately sucky drives [03:37:51] <LeftWing> Tpenta: Tonight is still on, mmm? [03:37:52] <palowoda> No a 3400+ isn't that great in performance at all. [03:38:04] <Tempt> That said, you're using MacOS X as your NFS client and I've found MacOS chokes bigtime on NFS. [03:38:27] <blueandwhiteg3> Tempt: Would writes be slower that reads? That's part of the problem [03:38:54] <jamesd_> nfs is most lilkely the problem... 
nfs + zfs dont make for great performance [03:39:09] <blueandwhiteg3> Tempt: And even still, same client, the crappy PATA boot drive (UFS) outperforms the elaborate RAID-Z [03:39:17] <blueandwhiteg3> via NFS [03:39:46] *** reflecte has joined #opensolaris [03:39:52] <blueandwhiteg3> palowoda: The 3400+ may not be all that great, but if cpu usage is under 20%, i think that qualifies for 'more than ample' [03:39:53] <Tempt> Still, if you're on gig-e, you should be seeing better than 6mbyte/sec [03:39:58] <jamesd_> blueandwhiteg3, correct, there are a few bugs that only effect zfs+nfs ... [03:40:19] <blueandwhiteg3> I'm starting to see that... [03:40:27] <reflecte> i want to encrypt an entire partition. what are my options? [03:40:29] *** Dink has quit IRC [03:41:00] <blueandwhiteg3> Something is constantly in use... it seems that I can't get my client to fully untether from the nfs share on solaris, so i can't recreate my zpool. Is there a good way to cut it off hard? [03:41:04] <jamesd_> reflecte, find the encryption tool kit and/or wait for zfs encryption support [03:41:29] <jamesd_> blueandwhiteg3, unshare your filesystems in zfs. [03:41:30] *** Dink has joined #opensolaris [03:41:41] <palowoda> blueandwhiteg3: Your local disk performance would go up with say something like a AMD 5400+ about 30/40 percent. [03:41:59] <blueandwhiteg3> palowoda: why would that be? [03:42:22] <palowoda> At least that is what I've seen during some upgrades like that. [03:42:23] <reflecte> jamesd_ does the tool kit provide command line programs to create/modify the encryption? [03:42:42] <jamesd_> reflecte, not sure i just remember it was mentioned sometime last year i never tried it. [03:42:54] <palowoda> blueandwhiteg3: I'd have to go back and plug the old cpu in and find out. [03:43:33] <palowoda> I don't know what I would do with the data after I found out though. [03:43:46] <blueandwhiteg3> palowoda: interesting. that is most strange, because the memory bandwidth is more than ample, the cpu usage is low... that all points to the idea that there are bugs in how ZFS is implmented? [03:44:13] <palowoda> There are some zil bugs related to performance zfs/nfs. [03:44:30] <palowoda> I didn't think it was that bad though. [03:44:33] <blueandwhiteg3> I wish I'd known all this before I went through the trouble of trying to setup such a system.... [03:44:36] *** HackersDrop has joined #opensolaris [03:45:01] <palowoda> I not sure your running into that specific perf bug though. [03:45:04] <nrubsig> !seen meem [03:45:17] * nrubsig looks at HackersDrop [03:45:30] <palowoda> Your system using DDR2 memory? [03:45:35] <blueandwhiteg3> palowoda: yep, dual channel [03:45:37] <Tempt> Christ. [03:45:48] <Tempt> Memory performance isn't going to make that much impact on your NFS shares. [03:45:54] <Tempt> At this level, anyway. [03:45:55] <nrubsig> heh [03:46:05] <blueandwhiteg3> palowoda: with I/O zone for reads/writes that are cached, i see values well into several GB/sec [03:46:08] <Tempt> It isn't like you're hitting them with lots of clients with multiple 10gig-e links [03:46:56] <Tempt> If I can get a solid NFS performance going to an old Blade-1000 to an old 880, surely your new x64 wonderboxes should be able to do better. [03:48:01] <blueandwhiteg3> Yeah... my CPU is almost untouched by all this data shuffling, even locally dumping from /dev/zero and such [03:48:23] <palowoda> I have two AMD wonderboxes and get 44M/Bytes a second writes on NFS with ZFS. With 10.00 NIC's to boot. 
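A hedged sketch of the "unshare, then tear down and recreate" path being discussed (jamesd_'s zfs unshare suggestion, plus the striped rebuild blueandwhiteg3 attempts later). The device names are placeholders, and the destroy step is obviously destructive.

    zfs unshare -a            # or just: zfs unshare bigpool
    zpool export bigpool      # add -f only if it still reports being busy
    # destructive rebuild as a plain stripe:
    zpool destroy bigpool
    zpool create bigpool c1t0d0 c1t1d0 c1t2d0 c1t3d0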
[03:48:38] <blueandwhiteg3> palowoda: What disk configuration? [03:49:43] <palowoda> Segate SATA I drives 16meg memory. Raid 0. AMD5600+ cpues DDR2 800mhz. Realtek 8192 1Gig net. [03:49:54] <palowoda> Both machines. Build 67 [03:50:14] <blueandwhiteg3> Alright. In a moment I'm going to make one big old striped RAID and see what happens [03:50:49] <palowoda> Buy the way my local disk writes are around 110 Mbytes sec. [03:51:05] <palowoda> Thrashing the cache that is. [03:51:49] *** jamesd__ has joined #opensolaris [03:53:39] <Tpenta> Leftwing: yes [03:53:46] <Tpenta> Glynn is currently in neutral bay [03:53:55] <blueandwhiteg3> I'll be right back. Then I will make the striped raid and we'll see what happens. [03:53:58] *** blueandwhiteg3 has quit IRC [03:54:15] <Tempt> Realtek ethernet. Never a trustworthy choice. [03:54:23] *** theRealballchalk has joined #opensolaris [03:55:50] <theRealballchalk> hey guys my CDE session froze and once i rebooted, it gave me a messege: DT Messeging System cannot be started and prompts me to click OK and it kicks me back to the login screen [03:55:51] <theRealballchalk> any ideas? [03:56:00] <theRealballchalk> i also have the error messege too [03:56:05] <palowoda> What can I say they are 9.95 at Fry's they don't break. They get 65 percent of the bandwidth, not great. The 8 port Gig switches are 39.95. Never failed me in years. [03:56:30] *** yongsun|wfh has joined #opensolaris [03:56:54] <jmcp> theRealballchalk: check /etc/hosts and make sure that your hostname is set correctly [03:56:57] <palowoda> I know no vlan. netboot etc. [03:57:20] <nrubsig> theRealballchalk: check whether your home dir has write permission [03:57:33] *** blueandwhiteg3 has joined #opensolaris [03:57:45] <theRealballchalk> jmcp: yea that was one of the messeges and i checked it [03:57:50] <theRealballchalk> it is correct [03:57:53] <theRealballchalk> lemme check again [03:57:54] <blueandwhiteg3> alright, rebooting solaris, hopefully that will let me destroy the zpool [03:58:17] <jmcp> theRealballchalk: that message from CDE is basically a hint to check everything related to name resolution [03:58:42] <HackersDrop> hi yall [03:58:44] <theRealballchalk> even when i did 'mv .dt .dtBROKEN' still won't do wonders [03:58:48] <theRealballchalk> hold on [03:59:31] <jmcp> theRealballchalk: your .dt directory should have just about nothing to do with this problem [03:59:46] <theRealballchalk> jmcp: oh ok [03:59:52] <theRealballchalk> i'll change it back [03:59:55] <theRealballchalk> hm [03:59:56] <nrubsig> theRealballchalk: the ~/.dt dir is not involved, the issue happens before that point. [04:00:01] <theRealballchalk> nrubsig: lemme see [04:00:09] <theRealballchalk> oh [04:00:57] <theRealballchalk> i renamed .dt back [04:02:46] <theRealballchalk> it tells me to check /etc/src.sh and /usr/adm/inetd.second and those i don't have [04:03:01] <theRealballchalk> i only have /etc/hosts and it's got the correct hostname [04:03:10] <twincest> what about /etc/inet/ipnodes? [04:03:13] <jmcp> theRealballchalk: check the following :: in /etc/nsswitch.conf you have "hosts: files dns" ; the service name-service-cache is enabled, you have valid nameserver IPs in /etc/resolv.conf, svc:/network/dns/client:default is enabled [04:03:21] <theRealballchalk> twincest: lemme see [04:03:49] <jmcp> theRealballchalk: the line which has your hostname on it in /etc/hosts - please show us what that name is? 
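jmcp's name-resolution checklist above, written out as commands; THOR is the hostname from the log.

    egrep -i "hosts|ipnodes" /etc/nsswitch.conf    # expect "files dns" on both lines
    cat /etc/resolv.conf                           # valid nameserver entries?
    svcs svc:/network/dns/client:default           # should be online
    svcs svc:/system/name-service-cache:default    # nscd, should be online
    getent hosts "$(hostname)"                     # must return the box's own address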
[04:04:06] <theRealballchalk> twincest: yea nothing wrong there just like /etc/hosts [04:04:27] <theRealballchalk> jmcp: ok hold on [04:04:28] <jmcp> and ... what does " getent hosts `hostname`" show you [04:04:34] *** halton has joined #opensolaris [04:04:49] <blueandwhiteg3> so to make a striped raid zpool... just add all the devices? [04:05:08] <hile_> bad james! =) use $() instead of backticks [04:05:25] <twincest> backticks93 [04:05:40] *** het has joined #opensolaris [04:06:44] <nrubsig> yeah [04:06:47] <nrubsig> bad jmcp [04:07:13] <nrubsig> jmcp: read http://opensolaris.org/os/project/shell/shellstyle/#use_posix_subshell_syntax [04:07:29] <nrubsig> jmcp: and http://opensolaris.org/os/project/shell/shellstyle/#put_subshell_result_in_quotes [04:08:02] <theRealballchalk> jmcp: yea i remembered adding the 'hosts: files dns' there in /etc/nsswitch.conf..............my /etc/hosts namerserver is THOR...................the nameserver is correct to my router's ip......... [04:08:15] <palowoda> blueandwhiteg3: just use two drives. [04:08:33] <theRealballchalk> lemme do getent hosts [04:09:08] <blueandwhiteg3> palowoda: it will auto assume striping then? [04:09:20] <palowoda> yes [04:09:29] <theRealballchalk> jmcp: ok 'getent hosts THOR' gives me my ipaddress and THOR [04:10:34] <palowoda> And run a simple iozone write test with say something like a 500G test file after that. [04:10:39] <nrubsig> theRealballchalk: which OS do you have ? [04:10:50] <theRealballchalk> SXDE 55b [04:10:57] <blueandwhiteg3> palowoda: Give me the arguments you use with iozone [04:11:59] <theRealballchalk> it's been working but then CDE session froze so i cycled the power and got the errors....now i'm in JDS blah [04:12:05] <jmcp> hile_: bite me :) [04:12:07] <nrubsig> theRealballchalk: what does $ getent ipnode THOR # say ? [04:12:07] <jmcp> nrubsig: ditto [04:12:16] <nrubsig> jmcp: ditto=? [04:12:17] <theRealballchalk> nrubsig: ok hold on [04:12:23] <nrubsig> or ipnodes [04:12:26] <theRealballchalk> k [04:12:36] <jmcp> nrubsig: bite me, ie, same comment I made to hile [04:12:58] * nrubsig bites jmcp [04:13:41] <theRealballchalk> nrubsig: 'getent ipnodes THOR' doesn't do anyting [04:13:43] <theRealballchalk> no output [04:13:48] <nrubsig> erm [04:14:07] * jmcp lunches [04:14:18] <nrubsig> theRealballchalk: $ ls -lad /etc/inet/ipn* # please ... [04:14:18] <palowoda> iozone -i write -s 500000 [04:14:25] <theRealballchalk> nrubsig: ok [04:14:45] <palowoda> I usaully use bonnie++ forget the iozone args sometimes. [04:15:08] <theRealballchalk> here's my error messege from /var/adm/messeges....................Jul 8 14:14:56 THOR dtexec[4498]: [ID 672058 user.error] libtt[4498]: libtt: startup_ttsession("ttsession -s -d :0") failed with code -1 [04:15:08] <theRealballchalk> , see syslog [04:15:23] *** xuewei has joined #opensolaris [04:15:54] *** vortex` has joined #opensolaris [04:15:59] <nrubsig> theRealballchalk: if getent ipnodes # doesn't know your hostname then this may be the source of the problem since CDE is IPv6 aware [04:16:02] <theRealballchalk> nrubsig: what is the '#' for? [04:16:09] <nrubsig> shell comment start [04:16:14] <vortex`> is anyone else getting slow speeds downloading nevada from sun? i'm maxing out at 20k/sec.. [04:16:26] <nrubsig> theRealballchalk: I am using this in emails etc. 
to make sure that people don't type too much [04:16:34] <theRealballchalk> nurbsig: ya i included that in the first command still nothing [04:16:41] <theRealballchalk> lemme issue the second command [04:17:59] <theRealballchalk> nurbsig: that last command says: /etc/inet/ipnodes -> ./hosts [04:18:10] <blueandwhiteg3> palowoda: I'll try in just a sec, but thanks to the wonders of dhcp not configuring properly on solaris... i gotta reboot [04:18:50] <nrubsig> theRealballchalk: Ok. what does $ ls -lad /etc/hosts # say ? [04:19:55] <theRealballchalk> nrubsig: it says:........... /etc/hosts -> ./inet/hosts [04:20:24] <nrubsig> theRealballchalk: what does $ cat /etc/inet/ipnodes | fgrep -i THOR # say ? [04:20:25] <sioraiocht> okay, how do I pass arguments to an executable when i run it inside dbx? [04:20:37] <twincest> sioraiocht: run -foo [04:20:38] <theRealballchalk> ok hold on [04:21:40] <theRealballchalk> it says...........MyIPAddress------------THOR-----------# Added by DHCP [04:21:45] *** dlynes_laptop has joined #opensolaris [04:21:54] <nrubsig> theRealballchalk: what does $ cat /etc/nsswitch.conf | egrep -i "hosts|ipnodes" # say ? [04:22:01] <theRealballchalk> ok hol don [04:22:24] *** cypromis has joined #opensolaris [04:22:44] <blueandwhiteg3> now solaris won't grab onto the dhcp properly, so i can't install iozone properly [04:23:19] <theRealballchalk> nrubsig: it displays a hosts: files dns and next line: ipnodes: files dns [04:23:32] <nrubsig> uhm [04:23:49] <nrubsig> theRealballchalk: and $ getent ipnodes THOR # doesn't list anything ?! [04:23:52] <theRealballchalk> heh [04:23:59] <theRealballchalk> lemme try that one again [04:24:50] <theRealballchalk> nrubsig: oh it does - it gives me: MyIPAddress----------THOR [04:24:56] <nrubsig> groan [04:25:13] <nrubsig> ok, seems that part is working [04:25:28] <theRealballchalk> i dunno [04:25:38] <nrubsig> theRealballchalk: did you disable any CDE services via svcadm ? [04:25:45] <theRealballchalk> everything has been working for the past 3-4 months since i've installed this OS [04:25:48] *** cypromis_ has quit IRC [04:26:00] <theRealballchalk> nrubsig: no no [04:26:15] <theRealballchalk> nurbsig: the only thing i did was svsadm disable ssh and ftp [04:26:25] <theRealballchalk> svcadm* [04:26:31] <nrubsig> theRealballchalk: did you ran the machine without reboot during that time ?` [04:26:48] <theRealballchalk> well i rebooted 3 times already [04:29:40] <CSFrost> anyone work regularly inside/outside new york (city)? [04:29:59] <theRealballchalk> and cat ~/.dt/errorlog shows : dtsession: Unable to start message server - exiting. [04:31:39] <blueandwhiteg3> palowoda: testing iozone now [04:32:00] *** bzcrib has joined #opensolaris [04:32:43] <blueandwhiteg3> palowoda: crap, this sucks. it's showing like... 34 MB/sec write, 19.4 MB/sec re-write [04:32:52] <blueandwhiteg3> versus 32 MB on the boot UFS drive [04:33:10] <blueandwhiteg3> how about i take one of these drives and ufs format it and benchmark? [04:33:30] <palowoda> Damn that is crappy. [04:33:38] <theRealballchalk> nrubsig: what's a magic cookie? i found something in .TTauthority [04:33:45] <palowoda> Well maybe not so much. [04:33:54] <palowoda> What kind of 250G drives? [04:34:01] <theRealballchalk> nurbsig: that is also included in the error messege [04:34:05] <nrubsig> theRealballchalk: that's like X11 cookies and allows remote CDE sessions to connect to your computer [04:34:30] <theRealballchalk> nrubsig: i don't have write permission to it because it's root. 
so should i change it back? [04:34:41] <theRealballchalk> chgrp me:staff it? [04:34:49] <blueandwhiteg3> palowoda: These are solid segate drives. I've run them in other systems and benchmarked ~60 MB/sec sustained read or write. Tom's Hardware's benchamrks match these. [04:34:58] <nrubsig> theRealballchalk: it's used for things like Drag&Drop to CDE applications which are coming from another machines [04:35:09] *** crib has quit IRC [04:35:21] <nrubsig> theRealballchalk: chown/chgrp or better remove and reboot [04:35:25] <palowoda> Are they SATA drives? [04:35:28] <blueandwhiteg3> palowoda: Yes [04:35:49] <blueandwhiteg3> What's the best way to wipe a disk and format in, say, UFS? [04:35:54] <theRealballchalk> nrubsig: the contents of that file is binary and non-chars and i see foreign ip addresses! [04:35:58] <blueandwhiteg3> i want to try isolating a single drive and testing [04:36:11] <theRealballchalk> ima chown it and reboot [04:36:14] <Drone> I've never seen meem talk in #opensolaris. [04:36:27] <nrubsig> Drone: you're late... one hour! [04:36:35] <palowoda> format and partition. [04:37:18] <nrubsig> theRealballchalk: the cookie files don't "loose" cookies and therefore collect all IP addresses of all machines where you ever used the stuff [04:37:34] <theRealballchalk> oh i see [04:37:37] <palowoda> blueandwhiteg3: I take it you tested these drives in another system with the same motherboard and cpu type as the one your using now? [04:38:00] <blueandwhiteg3> palowoda: No. However, I think I am going to get out a linux live disc and see what it gets for throughput... [04:38:02] <theRealballchalk> brb [04:38:04] *** theRealballchalk has left #opensolaris [04:38:07] <coffman> gar [04:38:14] <coffman> gnome eats my memory [04:38:17] <coffman> not nice! [04:38:35] *** theRealballchalk has joined #opensolaris [04:38:41] <palowoda> What brand of motherboard are you using just for info. [04:38:52] <theRealballchalk> nrubsig: oh i love CDE [04:39:03] <nrubsig> theRealballchalk: why ? [04:39:05] <blueandwhiteg3> It's a foxconn 'business edition' or something like that. Pretty exhaustive features. [04:39:13] <theRealballchalk> because it's working now [04:39:16] <theRealballchalk> lol [04:39:17] <blueandwhiteg3> palowoda: Want to give me the 30 second version of formatting and partitioning under solaris? I can get into the formatting, but not sure what I'm doing when it comes to the partition UI [04:39:22] <theRealballchalk> thanks tho [04:39:23] <palowoda> Oh right I remember now. [04:39:38] <nrubsig> theRealballchalk: you get the same effect when you chown the X11 cookie [04:39:44] <reflecte> are all the proper tools to harden solaris already included by default? [04:39:49] *** alfred_ has left #opensolaris [04:39:58] <theRealballchalk> nrubsig: oh i just renamed it totally or if not, removed it [04:40:20] <theRealballchalk> nrubsig where can i read more about that '#' beginning comment thing? [04:40:24] <palowoda> Go to the partition. Select new partition. All FreeHog. Make your sizes. Get out of the menu and run newfs on the partiton. [04:41:01] *** yongsun|wfh has quit IRC [04:41:42] *** yongsun|wfh has joined #opensolaris [04:42:49] *** yongsun|wfh has quit IRC [04:43:24] *** crib has joined #opensolaris [04:43:30] <hile_> reflecte, download JASS [04:44:21] <reflecte> hile_ does that cover all possible options in solaris 10? i saw that on some hardening tools lists but the lists were pretty outdated [04:44:40] <hile_> you want to harden your box? 
[04:44:49] <hile_> put it in a secure room and unplug the network cable [04:44:52] <blueandwhiteg3> palowoda: I'm lost in this interface. I have no idea what I'm doing here. It's not exactly familiar. There's a pile of partition options, none of which make any sense to make a single large partition. [04:45:05] *** bzcrib has quit IRC [04:45:19] *** sparc-kly_ has joined #opensolaris [04:45:35] <blueandwhiteg3> ah ha! i found all free hog.... [04:45:52] <palowoda> run 'modify' in the partiton menu [04:46:22] <palowoda> it will assign all the space to partition 6. [04:46:27] <reflecte> hile_ i don't want to overlook any features i could be using [04:46:32] *** bengtf_ has quit IRC [04:46:35] <palowoda> or how every you want to set it up. [04:46:49] *** sparc-kly has quit IRC [04:49:16] <blueandwhiteg3> palowoda: initializing cylinder groups... [04:49:56] <palowoda> than after your done run a 'newfs /dev/rdsk/disk_and_slice' you made the partition on. Than mount and test. [04:50:21] <palowoda> Would be interesting to see the results of a Linux test also like you said. [04:50:22] <blueandwhiteg3> yep, already did newfs, waiting for it to complete [04:50:58] <palowoda> Do you know which SATA controller chip Foxconn uses? [04:51:53] <blueandwhiteg3> palowoda: I'll get you the manufacturer's site. I checked solaris compatibility before buying... [04:52:21] <theRealballchalk> how do i delete a file beginning with a dash? [04:52:51] <Tempt> rm -- -dashy.filename.of.doom [04:53:14] <blueandwhiteg3> palowoda: http://www.foxconnchannel.com/EN-US/Product/motherboard_detail.aspx?id=en-gb0000202 [04:53:25] <palowoda> Well yeah compatibility just means you can run it which you already are doing. [04:54:25] <palowoda> Damn they don't report the chip model type. Figures. [04:54:54] <blueandwhiteg3> I'm not sure, I do know I cross-checked compatibility of everything before buying. I don't have web history going back that far. [04:55:04] <blueandwhiteg3> And lo and behold, everything does work great. [04:55:24] <blueandwhiteg3> If this doesn't get me closer to the throughput i'm expecting, i will boot linux up on there and see how it does [04:56:04] <CSFrost> anyone ever take a job they didn't want to do? if so, how much should I charge? heh [04:56:06] <palowoda> Actually you should be getting close to 50/60 Mbytes sec write with those drives. [04:56:23] <blueandwhiteg3> palowoda: yeah, i should, that's what I told you in the first place :) [04:56:45] *** yongsun|wfh has joined #opensolaris [04:57:15] *** coffman is now known as coffman_zzz [04:57:20] <Tempt> CSFrost: As much as you can. [04:58:10] <CSFrost> Tempt, well my clients on this one left a message on my voicemail with "whatever I ask" [04:58:17] <Tempt> CSFrost: It really makes a difference. "I hate this place and I hate who I'm working for and I can't believe I'm saving their arse, but $300/hr is $300/hr and I'm getting the last fscking chuckle." [04:58:23] <CSFrost> Tempt, does that mean a 50% premium, or 100%? [04:58:48] *** nahamu has left #opensolaris [04:58:53] <CSFrost> Tempt, I was thinking 600/hr just for having to be on park ave. [04:58:56] <Tempt> CSFrost: I'd go for a 100% premium if I really didn't want to do it. [04:59:09] *** palowoda has quit IRC [04:59:22] <CSFrost> I *really* can not stand being in the city. [04:59:42] *** lon3star has quit IRC [04:59:47] <Tempt> CSFrost: Meh. I save the really extortionate rates for not being able to stand the client / work. [04:59:56] <Tempt> CSFrost: I just *don't* go to places I can't stand. 
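The single-disk UFS test palowoda walks through a little further up (format, Free Hog, newfs, mount), in command form. c2t0d0 is a placeholder device name, not from the log.

    format                        # select the disk, partition -> modify,
                                  # "All Free Hog", give slice 6 the space, label, quit
    newfs /dev/rdsk/c2t0d0s6
    mount /dev/dsk/c2t0d0s6 /mnt
    time dd if=/dev/zero of=/mnt/testfile bs=1024k count=1000   # quick local write test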
[05:00:08] <CSFrost> Well, I have a feeling they are going to make me touch some windows stuff [05:00:11] <Tempt> CSFrost: Although being unwilling to enter the city probably cuts your options down a little too much. [05:00:18] <CSFrost> that's why I said no the first 6 times. [05:00:29] <CIA-26> yy150190: 6490623 Some networking problems with Solaris_b44_64 domU(using solaris_b44_64 dom0), 6510396 system panicked in e1000g_82547_timeout, 6554976 e1000g driver does not support 10D5 device - Sun Pentwater PEM quad port [05:00:35] <Tempt> CSFrost: I think that's what they call "bad touch" [05:01:16] <CSFrost> Tempt, I just call it nausiating... [05:01:52] <Tempt> CSFrost: Easy then, don't do it, if they beg set your rate at $1200/hr plus expenses. [05:02:11] <Tempt> CSFrost: If they'll pay that, they're obviously desperate enough for you to get to work on the "plus expenses" side. [05:02:19] <Tempt> CSFrost: I'm sure you can get a good dinner in the city. [05:02:55] <CSFrost> Tempt, I'd rather not stay in the city for any longer then I'd have to.. so dinner will be better away from it. [05:03:43] *** palowoda has joined #opensolaris [05:04:06] <blueandwhiteg3> palowoda: Wow, this is horrible. [05:04:23] <blueandwhiteg3> 14.9 MB/sec write, 5.5 MB re-write [05:04:37] <CSFrost> Tempt, I honestly don't understand why people like living there.. I used to live there and it was just horrible. [05:04:55] <palowoda> On a ufs filesystem? [05:04:56] *** m0le has joined #opensolaris [05:05:16] <blueandwhiteg3> palowoda: yes... i think i might as well get external drives that plug in over the parallel port at this point.... [05:05:23] *** Fullmoon has quit IRC [05:06:02] <palowoda> What build of solaris is this? [05:06:15] <blueandwhiteg3> latest, sxce 67 [05:06:36] <palowoda> Hmm. [05:07:08] <palowoda> Just guessing now it could be a driver issue. [05:08:00] <palowoda> do a /usr/X11/bin/scanpci and find the line for the sata chip. [05:08:13] <palowoda> Or it will say raid controller. [05:10:27] <blueandwhiteg3> palowoda: working on it, was trying linux live disc [05:11:07] * blueandwhiteg3 grows old while Solaris boots [05:11:53] <reflecte> blueandwhiteg3 what tool are you using to test speeds out of curiosity? [05:12:21] <blueandwhiteg3> reflecte: iozone, though i also time cat /dev/zero > file [05:12:27] <blueandwhiteg3> for local tests [05:13:04] <palowoda> You should use dd for local tests but iozone is going to give you more options. [05:13:51] <palowoda> regardless the disks are performing like crap. [05:14:04] <blueandwhiteg3> i don't think it matters that much when we have this level of performance disparity [05:14:31] <palowoda> blueandwhiteg3: I can't believe you bought such a nice motherboard and put a crappy cpu in it. [05:14:49] <blueandwhiteg3> pci bus 0x0000 cardnum 0x0e functon 0x00: vendor 0x10de device 0x0266 [05:14:55] <theRealballchalk> Temp: whoa what do u do? [05:15:07] <blueandwhiteg3> nVidia Corporation MCP51 Serial ATA Controller [05:15:31] <palowoda> Thats the ATA controller, should have a line for the SATA controller. [05:15:57] <blueandwhiteg3> palowoda: It says "Serial ATA" - there is an IDE controller above it [05:16:30] <palowoda> hang on. [05:16:47] <blueandwhiteg3> there are more... basically the same, save for the device umber [05:17:29] <blueandwhiteg3> palowoda: This is just supposed to be an elaborate storage server... just for me, more or less. I didn't see any real reason to bother with an excessive cpu... 
if the cpu was a real issue, it could be upgraded, but why, if i'm only able to use a few percent of it right now? [05:17:52] <palowoda> Sometimes you would be surprised. [05:18:27] <blueandwhiteg3> palowoda: If i'm only using a tiny bit of my cpu and something is bottlenecking, it's poorly designed software. [05:18:37] <palowoda> I'm seeing a lot of reference to MCP51 as an audio controller in the bug database. [05:19:14] <palowoda> By that logic you only need a 500mhz cpu right? [05:19:16] <jmcp> mcp51 does everything [05:19:17] <blueandwhiteg3> palowoda: It's the same MCP51 for the PCI bridge, the IDE, the HD audio, the ethernet... everything [05:19:32] *** jlc has joined #opensolaris [05:19:52] <blueandwhiteg3> palowoda: Well, you want a little head room for 'bursts'... but if you have enough memory bandwidth, enough cpu, it should be alright. [05:20:11] <blueandwhiteg3> I did test this cpu to be stable up to a bit over 2.4 GHz [05:20:31] *** jlc has quit IRC [05:20:47] *** jlc has joined #opensolaris [05:20:53] *** jlc has quit IRC [05:21:12] *** jlc has joined #opensolaris [05:21:43] <blueandwhiteg3> palowoda: plus, they don't sell 500 mhz cpus for mobos like that... otherwise, who knows :P [05:22:11] <jlc> any way of compiling ON in RAM? :) [05:22:24] <jlc> 4GB might not be enough though [05:22:32] <jlc> just thought it would be quicker [05:22:33] <jmcp> jlc: buy more ram then [05:23:03] <jlc> justifying more than 4GB on my desktop and laptop seems hard [05:23:04] <jlc> ;) [05:23:50] <palowoda> Well I suspect this sata chipset is just being used in the PATA emulation mode. Don't see any bugs related to it in the bug database but it may not be a commonly tested board for Solaris. [05:23:51] <blueandwhiteg3> jlc: more than 4 GB of ram on your notebook is impossible unless you have more than two slots for ram [05:24:02] <blueandwhiteg3> palowoda: I could muck around in the bios? [05:24:07] <blueandwhiteg3> i'm booting a linux live cd [05:24:10] <palowoda> They sell 2G SODIMMS now. [05:24:21] <blueandwhiteg3> palowoda: That's my point. [05:24:31] <jlc> aye [05:24:53] <palowoda> The bios settings aren't going to be much of a help. More of a driver chipset testing issue. [05:26:25] *** uebayasi has quit IRC [05:26:47] <blueandwhiteg3> jlc: the bios is also a PITA for getting a lot of ram, especially in notebooks [05:27:31] <blueandwhiteg3> in fact, it often tops out at 2 or 3 GB [05:27:50] <Tempt> Ask Delewis about RAM in notebooks. Pretty sure he's got 4Gb in his. [05:28:57] <palowoda> The newer Core2 Duo 64bit laptops are a little better about 4G footprints. [05:29:04] <Doc> i've got 2Gb, but about a week after i bought it they started shipping the 4Gb models [05:29:39] * jlc has 4gb coming in a week or so [05:29:56] <jlc> my desktop now has 4gb though and that is what I'm working on atm [05:30:32] *** rachel_ has joined #opensolaris [05:33:59] *** JohannaIsNotHere has quit IRC [05:34:39] <jmcp> I want to max my u20m2 with 2gb dimms [05:35:00] <jlc> can ON be built with studio 12 or are there problems? [05:35:00] <blueandwhiteg3> palowoda: The real issue is the chipset. The new Santa Rosa based machines support 4 GB comfortably, but C2Ds are often used with other systems... [05:35:27] <blueandwhiteg3> man, the sata performance is crappy under linux too [05:35:32] <blueandwhiteg3> is this mobo cursed?
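Since palowoda suspects the controller is running in PATA emulation, it may be worth checking which driver actually claimed it on the Solaris side; a hedged sketch, using the vendor/device id from the scanpci output above (0x10de / 0x0266):

    /usr/X11/bin/scanpci | grep -i 'serial ata'      # confirm the controller line is there
    grep pci10de,266 /etc/driver_aliases             # which driver, if any, is aliased to that id
    prtconf -D | grep -i ata                         # which driver instances actually attached

If the controller only ever shows up under the generic IDE/ata driver, then the "looks like a PATA controller" behaviour described on the linux-ata status page quoted further down is likely what Solaris is seeing as well.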
[05:35:38] <blueandwhiteg3> linux sees them as sata [05:35:53] *** HackersDrop has quit IRC [05:36:08] <jmcp> jlc: there are still problems with ss12 [05:36:13] <jlc> thx [05:36:23] <blueandwhiteg3> If this mobo is this bad, I will RMA it and tell them it's defective, or not as advertised [05:36:32] <blueandwhiteg3> SATA 300 != 18 MB/sec [05:36:49] <jlc> I haven't been able to get a LU to work in ages, so figure I'll just follow ON again [05:41:21] *** vortex` has left #opensolaris [05:41:52] <Stric> blueandwhiteg3: I've got MCP51 too and just got ~60MB/s while reading a file [05:42:10] <blueandwhiteg3> the problem here is writing.... [05:42:21] <blueandwhiteg3> Stric: what does your writing look like? [05:43:30] *** gdamore has quit IRC [05:43:32] <Stric> seems to get around 40-50MB/s [05:43:53] <blueandwhiteg3> Then what in the world is wrong with my mobo? [05:44:10] <Stric> maybe it's your disk that's fubar [05:44:31] <blueandwhiteg3> All of them? [05:44:45] <blueandwhiteg3> And they were working perfectly a few days ago in another system? [05:44:52] *** Gropi_ has joined #opensolaris [05:45:23] <blueandwhiteg3> i did have to turn off bios detection of the drives, due to the EFI labels [05:45:36] <Stric> this is a hitachi thingie.. only got one sata disk in this machine [05:46:09] <palowoda> That's weird too because I didn't have turn off the bios detection with EFI labels. [05:46:25] <blueandwhiteg3> It causes my bios to... get stuck [05:46:52] <Stric> btw, my tests are with debian sid and home-brew 2.6.20.7 [05:47:10] <blueandwhiteg3> this is interesting [05:47:17] <blueandwhiteg3> the read rates under linux are great [05:47:21] <blueandwhiteg3> 69.6 MB/sec [05:47:30] <blueandwhiteg3> dd /dev/sda to /dev/null [05:47:35] <Stric> blocksize? [05:47:39] <blueandwhiteg3> 8192 [05:48:10] <blueandwhiteg3> reversing the if and of drops the transfer rate to 18 MB/sec [05:48:26] <blueandwhiteg3> i'm going to test all the drives [05:48:33] <blueandwhiteg3> if they're bad... I'm rmaing them! [05:48:39] <palowoda> Well at least with the VIA based SATA controllers writes are up at 60+ Mbytes a sec. [05:49:08] <blueandwhiteg3> i can't imagine why reads would be fast, writes slow? [05:49:22] <palowoda> I wonder if all MCP5X models perform like this. [05:50:00] <blueandwhiteg3> ohh, now this is interesting [05:50:01] <palowoda> What they are used with nforce4 motherboards? [05:50:17] <jlc> if I do "env" it has all of these CXX32=/opt/SUNWspro/bin/CC [05:50:25] <jlc> C* blah = ss12 [05:50:32] *** chadz has joined #opensolaris [05:50:36] <jlc> i have 11 in /opt/SunStudio11/SUNWspro/ [05:50:50] <blueandwhiteg3> sda and sdb drives are slow, sdc and sdd are fast [05:50:51] <jlc> changed my path accordingly, do I need to do a bunch of unset's [05:50:55] <blueandwhiteg3> time to shut down and shuffle drives around! [05:51:31] <Stric> blueandwhiteg3: you having problems with the network card locking up? [05:51:44] <blueandwhiteg3> Stric: Not locking up, just solaris being funky with DHCP [05:51:55] <blueandwhiteg3> actual connectivity is perfect and tested good [05:52:08] <jlc> unset CC CXX32 CC64 CXX CXX64 CC32 [05:52:18] <chadz> it seems the opensolaris 11 cd I grabbed dislikes my rtls nic. what is there to try and do? [05:52:23] <jlc> should I do that to get rid of ss12 and set 11 or does it matter [05:52:36] <Stric> well.. I'm having issues both in linux and win.. when shuffling too much data, it just stops all of a sudden.. unload driver + load again and it'll work.. 
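The drive-by-drive comparison blueandwhiteg3 is running here boils down to a pair of dd passes per disk; a minimal sketch of the Linux live-CD version (sdX is a placeholder, and the write pass destroys whatever is on the disk, so only aim it at drives that are about to be rebuilt anyway):

    # sequential read: disk -> /dev/null (GNU dd prints an MB/s figure when it finishes)
    dd if=/dev/sda of=/dev/null bs=8192 count=500000
    # sequential write: /dev/zero -> disk -- WIPES /dev/sda
    dd if=/dev/zero of=/dev/sda bs=8192 count=500000

The same idea works on the Solaris side against the /dev/rdsk/... devices, wrapped in time(1), since the Solaris dd does not report a transfer rate on its own.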
[05:53:04] <Stric> had the same problems with some discrete syskonnect cards under solaris/sparc, up until their latest drivers.. haven't had problems since.. [05:53:43] <jmcp> chadz: more details please. rtls was integrated into Solaris 10 before FCS. [05:53:58] <blueandwhiteg3> Stric: I tested 112 MB/sec+ sustained, no problems [05:54:01] <Stric> but I've had it with this syskonnect card (yukon thingie), so I'm getting a cheap rtl8169 tomorrow.. they seem to work just perfectly [05:54:29] <jmcp> Stric: I had interrupt handling problems with my skge instance [05:54:38] <jmcp> gave up on it and went to a crappy old rtls instead [05:54:50] <blueandwhiteg3> i am really glad i'm getting somewhere with this... [05:54:53] *** rachel has quit IRC [05:54:58] <Stric> latest sparc drivers from syskonnect seem to work just fine.. [05:55:04] <blueandwhiteg3> if some of the drives are fast and others are slow, i'm going to start rmaing parts until i find which one is faulty, haha [05:55:13] <palowoda> you mean it could be the drive models? [05:55:14] <Stric> been pushing loads of terabytes with it.. [05:55:49] *** rachel_ is now known as rachel [05:55:52] <palowoda> Or are they all the same model? [05:56:09] <blueandwhiteg3> palowoda: They are all the same model [05:56:18] <Stric> guessing at ~100TB without a single (to me known) hickup, from reading bandwidth graphs [05:56:20] <blueandwhiteg3> I suppose they could be experiencing a strange failure paradigm, a write slowdown [05:57:37] <chadz> jmcp: i'm not entirely sure. playing with solaris the first time. dmesg reports, "RTLS don't support this device: vendorID = 0x1186, deviceID = 0x1300" [05:57:44] <blueandwhiteg3> well, i shuffled drives around, now a and b are fast, c and d are slow [05:57:47] <chadz> jmcp: it also failed setting it up during hte install [05:57:50] <blueandwhiteg3> actually [05:57:53] <blueandwhiteg3> c is slow [05:57:54] <blueandwhiteg3> d is fast [05:58:01] <blueandwhiteg3> time to test drive by drive, cable by cable.... [05:58:10] <Stric> could be cable or firmware [05:58:30] <palowoda> Sounds encouraging though. [05:59:22] <jmcp> chadz: you might have more luck with one of Murayama's drivers instead, lemme check .... [05:59:30] *** gaz has quit IRC [06:00:19] <blueandwhiteg3> palowoda: Yes. it's possible i have a bad cable or something.... [06:00:28] <jmcp> chadz: that's the DLink DFE-538TX 10/100 Ethernet Adapter [06:00:37] <jmcp> I am surprised rtls is barfing [06:00:53] <blueandwhiteg3> i wonder if linux will handle me hot swapping the sata connector? [06:00:54] *** Gropi has quit IRC [06:01:19] <jmcp> chadz: if you run grep pci1186 /etc/driver_aliases what do you see? [06:01:47] <chadz> one second, surprising the core install is _very_ coreish and only ships with sh :) [06:02:10] <jmcp> oh heck .. you did a core install [06:02:11] <jmcp> :( [06:02:14] <jmcp> why? [06:02:21] <chadz> i didn't want to download 15 cds [06:02:29] <jmcp> anyways, Murayama's driver is "rf" and you can get it from http://homepage2.nifty.com/mrym3/taiyodo/eng/ [06:02:44] <chadz> i figured I could just download what I needed. [06:02:54] <jmcp> bad assumption to make with an OS which you don't know [06:03:04] <jmcp> go for the 2.4.0 version, btw, if you're going to use "rf" [06:03:05] <blueandwhiteg3> it appears linux will let me hot swap? how interesting... [06:03:05] <chadz> rtls "pci1186,1301" [06:03:16] <palowoda> food I need food. [06:03:37] <jmcp> chadz: that's all that's listed? [06:03:44] <chadz> jmcp: yeap. 
[06:04:06] <chadz> jmcp: i don't think the leap will be that challenging. [06:04:07] <Stric> blueandwhiteg3: Summary: No TCQ/NCQ in early chipsets. NCQ support added in later chipsets. Looks like a PATA controller, but with full SATA control including hotplug and PM. [06:04:16] <jmcp> chadz: try running this as root :: update_drv -a -i ' "pci1186,1300" ' rtls [06:04:17] <Stric> blueandwhiteg3: http://linux-ata.org/driver-status.html#nvidia [06:04:26] <jmcp> note that it's single quote double quote .... double quote single quote [06:04:27] *** Dar has quit IRC [06:04:55] *** uebayasi has joined #opensolaris [06:05:16] <chadz> it successfully terminated with no messages. [06:05:44] <jmcp> ok [06:05:49] *** jamesd__ has quit IRC [06:05:50] *** jamesd_ has quit IRC [06:05:55] <jmcp> try ifconfig rtls0 plumb [06:06:12] <chadz> plumb? [06:06:18] <jmcp> trust me [06:06:21] <Stric> "enable" [06:06:26] <jmcp> ? [06:06:28] <chadz> just want to know what it stands for :) [06:06:34] *** jamesd_ has joined #opensolaris [06:06:37] <jmcp> like with pipes :) [06:06:45] <Stric> it's to "enable" a nic [06:06:50] <chadz> ah [06:07:08] <Stric> what unplumb is, you have to figure out yourself ;) [06:07:25] <blueandwhiteg3> i think we have a 'slow' drive [06:07:26] <chadz> i'm hoping it's !"enable" [06:07:33] <chadz> jmcp: it's timing out [06:07:45] <jmcp> chadz: what do you mean? [06:07:48] <chadz> jmcp: hmm, hitting enter returned me to prompt [06:08:01] <Stric> that's expected [06:08:02] <chadz> ifconfig -a shows the device now [06:08:07] <Stric> also expected [06:08:18] <jmcp> ok, so now you can run ifconfig rtls0 inet 192.168.1.100 netmask 255.255.255.0 up [06:08:25] <jmcp> substitute appropriate IP etc etc [06:08:35] <chadz> should dhcp be usable now? [06:08:40] <jmcp> I hope so [06:08:45] <chadz> dhclient, dhcp ? [06:08:46] <jmcp> ifconfig rtls0 dhcp start [06:08:50] <chadz> ah. [06:09:05] <jmcp> chadz: one of the *major* problems with the core install cluster is you don't get any of the tfm [06:09:13] <chadz> tfm ? [06:09:18] <jmcp> the fine manual [06:09:21] <jmcp> as in, RTFM :-) [06:09:38] <chadz> ahh, heh. [06:09:46] <chadz> i just downloaded a huge manual [06:09:53] <chadz> 20 megs zipped. [06:09:59] <jmcp> or you could try pkgadd of the SUNWman package off the media [06:10:02] <chadz> haven't looked at it yet. [06:10:25] <chadz> heh, this is odd. [06:10:36] <chadz> ping 192.168.1.1 -> "192.168.1.1 is alive" [06:11:02] <Stric> but no route to the internet [06:11:14] <jmcp> ping localhost [06:11:20] <jmcp> so now you need to add a network route [06:11:49] <jmcp> route -v add default $router [06:12:09] <chadz> shouldn't dhcp have handled that? [06:12:21] <jmcp> depends on whether your dhcp server is serving properly [06:12:24] <jmcp> :) [06:12:35] <chadz> should be :). ping 4.2.2.2 responds the same [06:12:40] <chadz> "... is alive" [06:13:07] <chadz> now I have to figure out package management :) [06:13:08] <Stric> so it's all working then.. [06:13:09] <jmcp> what were you expecting to see? [06:13:20] <chadz> milliseconds ? [06:13:24] <Stric> ping -s something to have an alternative view [06:13:27] <chadz> not used to these coreutils. [06:13:36] <chadz> ah, there it is. [06:13:38] <jmcp> chadz: hence the need to drop your assumptions, and rtfm [06:14:03] <chadz> jmcp: what assumption did I make?
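Pulling jmcp's NIC walkthrough together in one place, since the pieces are scattered across the exchange above; the PCI id and addresses are the ones from chadz's case and otherwise placeholders:

    # bind the existing rtls driver to the card's PCI id (run as root)
    update_drv -a -i ' "pci1186,1300" ' rtls
    # attach an IP stack instance to the interface
    ifconfig rtls0 plumb
    # either static addressing plus a default route...
    ifconfig rtls0 inet 192.168.1.100 netmask 255.255.255.0 up
    route -v add default 192.168.1.1
    # ...or DHCP instead
    ifconfig rtls0 dhcp start

None of this persists across a reboot; for that the usual Solaris approach is an /etc/hostname.rtls0 file (plus an /etc/dhcp.rtls0 for DHCP, or /etc/defaultrouter for the static route).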
[06:14:32] <jmcp> that solaris commands would show the same output as $other_os commands [06:14:55] <jmcp> that a core install would have manpages, and that you can download other packages as needed, along with dependencies [06:15:28] <Gman> jmcp, ping constantly pisses me off too fwiw [06:15:58] * jmcp shrugs [06:16:10] <chadz> jmcp, please quote me stating that i expected to be able to download dependencies [06:16:39] <jmcp> chadz: chill, dude [06:16:46] <jmcp> you made implicit assumptions in your comments [06:17:06] <Stric> to both: when you ASSUME, you make an ASS out of U and ME .. ;) [06:17:40] *** het has quit IRC [06:18:13] <chadz> jmcp: how convenient for you that you can spot my implicit assumptions in my comments when even I cannot :) [06:18:24] <Stric> trained eye ;) [06:18:31] <jmcp> chadz: I've had many years of it [06:18:45] <chadz> as i said, i JUST downloaded the manual. thanks for the help with getting internet, which is all I really wanted. [06:19:00] <jmcp> right then, you're all set [06:19:32] *** het has joined #opensolaris [06:19:34] <chadz> are you trying to get rid of me? [06:19:43] <jmcp> not at all [06:20:56] <jmcp> chadz: I've just moved house and my adsl isn't turned on yet and I'm really feeling the loss. If you've got broadband then you are all set to go and do $whatever ... [06:21:16] *** jamesd_ has quit IRC [06:21:36] *** jamesd_ has joined #opensolaris [06:24:59] <blueandwhiteg3> great, get this... SATA port 1 causes drive #2 to fail, but sata port #2 causes all drives to work perfectly [06:26:25] *** jamesd_ has quit IRC [06:26:38] *** jamesd2 has joined #opensolaris [06:27:46] <blueandwhiteg3> maybe i have intermittent drive problems? [06:28:02] <blueandwhiteg3> man, where is the nearest bridge? i'd like to jump... [06:28:49] <nrubsig> jmcp: DSL addiction ? [06:29:59] <jmcp> nrubsig: nope, I need it so I can work from home [06:30:10] <jmcp> at the moment I'm in the Brisbane office [06:30:26] <Tempt> Brisbane. Joy. [06:30:31] <jmcp> Tempt: hometown for me [06:30:39] <jmcp> Tempt: it's not bleak city, that's true ... [06:30:44] * nrubsig imagines a new capital punishment: "... The people of <insert-country> hereby sentence you to five days without working DSL ..." [06:31:28] <blueandwhiteg3> So much for the concept of work release! [06:31:29] <jmcp> since the previous owners of our home took the phone books with them and our phone line isn't turned on yet, it's been bloody difficult to get things done [06:31:47] <blueandwhiteg3> yahoo yellow pages? [06:32:00] <jmcp> blueandwhiteg3: no phone, no broadband, no telephone books [06:32:11] <blueandwhiteg3> jmcp: That's why I have unlimited data on my mobile! [06:32:49] <jmcp> ah, mobile data services [06:33:08] <blueandwhiteg3> hey... $17/mo for all the data, sms and mms i can use, i'm happy [06:34:01] <jmcp> different economies here in Oz [06:34:47] <blueandwhiteg3> jmcp: That's what I've been told. [06:34:59] <blueandwhiteg3> well.... i think i have it [06:35:15] <LeftWing> Or indeed no economies at all, at least as far as mobile data is concerned. ;P [06:35:18] <blueandwhiteg3> haha [06:35:27] <blueandwhiteg3> drive #2 is jumping between 18 MB/sec and 60-70 MB/sec [06:35:34] <blueandwhiteg3> doesn't matter the cable, the port [06:35:49] <blueandwhiteg3> which, when rolled into a RAID, would cause no end of problems [06:35:55] <blueandwhiteg3> and of course... 
SMART says it is fine [06:36:40] <jmcp> I've come to the conclusion that SMART isn't so smart [06:37:12] <blueandwhiteg3> every hard drive failure i've had has passed smart until AFTER it failed [06:37:40] <Gman> nrubsig, worse is having severly capped dsl [06:38:10] <nrubsig> Gman: I can't be worse then DSL from the german telekom [06:38:26] * Gman was getting downloads of 2KB/s at the weekend [06:38:28] <nrubsig> Gman: it blew-up a whole Xorg release-wrangers meeting in the past [06:39:10] *** jamesd__ has joined #opensolaris [06:39:24] *** jamesd2 has quit IRC [06:40:39] *** jamesd_ has joined #opensolaris [06:40:51] <chadz> is blastwave's pkg_get still preferred? [06:40:59] <nrubsig> yes [06:43:31] <blueandwhiteg3> well, this is an interesting new mode of drive failure [06:43:33] <blueandwhiteg3> slowdown [06:43:37] <blueandwhiteg3> no i/o errors [06:43:39] <blueandwhiteg3> just slowness [06:43:43] <blueandwhiteg3> extreme slowness [06:44:01] <blueandwhiteg3> is there any way this can be automatically detected in a raid-z? [06:45:09] <blueandwhiteg3> i suppose i could run a dd throughput check every so often [06:45:27] <Doc> yah.. that happens. drive will be generating a shit-load of retrys, but never enough on a single block to pass an error back to the host [06:45:34] <blueandwhiteg3> yeah [06:46:46] <blueandwhiteg3> i guess that makes two hard drives i'm rmaing tomorrow [06:47:44] <blueandwhiteg3> it is nice to know that these drives bench 61-69 MB/sec sustained [06:47:53] *** jamesd__ has quit IRC [06:48:18] *** jamesd__ has joined #opensolaris [06:48:52] <blueandwhiteg3> now it's time to go back to solaris and setup a nice raid-z with three drives... [06:52:24] <Tempt> sata750 1.16T 1.56T 0 443 0 55.5M [06:52:27] <Tempt> 100% NFS traffic [06:52:29] <Tempt> all writes [06:53:34] *** Gman has quit IRC [06:54:55] *** jamesd__ has quit IRC [06:55:11] *** jamesd__ has joined #opensolaris [06:56:11] *** ShadowHntr has quit IRC [06:57:52] *** pguser has joined #opensolaris [06:58:11] <pguser> got a question about solaris express 5/07 [06:58:36] <pguser> is it compatible with linux? [06:58:43] <jmcp> in what way? [06:59:09] <pguser> I had solaris 11/06 that I tried to install and it said that solaris fdisk partitions were not supported along with linux ones [06:59:29] <jmcp> you need to choose the "Solaris2" fdisk partition type [06:59:48] <pguser> where can I select this in the solaris install? [07:00:17] <jmcp> offhand, I don't know. perhaps in the disk partitioning section of the installer' [07:00:22] *** yongsun|wfh has quit IRC [07:00:22] *** jamesd has quit IRC [07:00:22] *** sioraiocht has quit IRC [07:00:23] *** danv12 has quit IRC [07:00:23] *** Atomdrache has quit IRC [07:00:23] *** nightswim has quit IRC [07:00:24] *** Reidms-420R has quit IRC [07:00:24] *** SirFunk has quit IRC [07:00:24] *** CSFrost has quit IRC [07:00:26] *** lisppaste3 has quit IRC [07:00:26] *** Drone has quit IRC [07:00:26] *** seanmcg has quit IRC [07:00:26] *** ruxpin_ has quit IRC [07:00:27] *** SYS64738 has quit IRC [07:00:27] *** The-spiki has quit IRC [07:00:27] *** jamesb_ has quit IRC [07:00:27] *** timelyx has quit IRC [07:00:27] *** jcsmith has quit IRC [07:00:28] <pguser> I have the 11/06 solaris version right now, I was thinking of downloading solaris express 5/07 [07:00:47] <jmcp> 11/06 is Solaris 10 update 3 [07:01:04] <jmcp> SX 05/07 is based on Solaris Nevada build 64 iirc [07:01:22] <pguser> is there another solaris express coming out soon? 
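Going back to the slow-drive thread above: blueandwhiteg3's idea of catching a "slow but not erroring" disk with an occasional throughput check is easy to script; a hypothetical sketch only, with made-up device names, that just leaves a paper trail to compare over time (reading the raw devices is non-destructive but will compete with real I/O while it runs):

    #!/bin/sh
    # crude periodic sequential-read check of each raid-z member disk; run from cron
    LOG=/var/tmp/disk-throughput.log
    for d in c1d0p0 c2d0p0 c3d0p0; do            # placeholder device names
        echo "`date` $d" >> $LOG
        ( time dd if=/dev/rdsk/$d of=/dev/null bs=1024k count=256 ) >> $LOG 2>&1
    done

A drive whose elapsed time creeps well above its siblings' is the one to pull and RMA, even if SMART still says it is fine.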
[07:01:42] <pguser> i'd hate to download 5/07 and then have another one pop up. [07:01:51] <jmcp> I think SX releases come out about every 3 months, so I think you'd be ok with 0507 [07:02:10] <jmcp> unless you want to go to sun.com and get the current build which is 68 [07:02:26] <pguser> where can I get the current build? [07:02:32] <pguser> link? [07:02:39] <pguser> what goodies are in the current build? [07:03:08] <jmcp> go to opensolaris.org/os/community/onnv and have a look at all the docs that are there [07:03:23] <jmcp> we have "flag day" messages which list new features in a build [07:03:54] *** ruxpin has joined #opensolaris [07:03:54] *** yongsun|wfh has joined #opensolaris [07:03:54] *** jamesd has joined #opensolaris [07:03:54] *** sioraiocht has joined #opensolaris [07:03:54] *** danv12 has joined #opensolaris [07:03:54] *** Atomdrache has joined #opensolaris [07:03:55] *** nightswim has joined #opensolaris [07:03:55] *** Reidms-420R has joined #opensolaris [07:03:55] *** SirFunk has joined #opensolaris [07:03:55] *** CSFrost has joined #opensolaris [07:03:55] *** lisppaste3 has joined #opensolaris [07:03:55] *** Drone has joined #opensolaris [07:03:55] *** seanmcg has joined #opensolaris [07:03:55] *** SYS64738 has joined #opensolaris [07:03:55] *** The-spiki has joined #opensolaris [07:03:55] *** jamesb_ has joined #opensolaris [07:03:55] *** timelyx has joined #opensolaris [07:03:55] *** jcsmith has joined #opensolaris [07:03:57] <pguser> i read on opensolaris that zfs now has the capability to spread data around on the disk to prevent one part of the disk from destroying your data [07:04:06] <pguser> sort of like raid for one disk [07:04:16] <jmcp> uh ... I wouldn't have put it that way [07:04:21] <blueandwhiteg3> pguser: that would not be very useful.... [07:04:40] <blueandwhiteg3> pguser: it might offer a little benefit, but really, drives fail catastrophically all too often [07:04:56] <blueandwhiteg3> anybody know why dd doesn't display a nice throughput gauge under solaris? [07:04:59] <pguser> blueandwhiteg3: it would be, say you have a lap top and the part of where your important data is stored gets corrupted [07:05:27] *** jamesd_ has quit IRC [07:05:40] <pguser> i read on the opensolaris site that someone put in a feature to zfs where the data gets spread the farthest apart on the disk for redundancy [07:06:01] <pguser> so you can have 3 copies of a file in seperate locations on the disk [07:06:12] <jmcp> sounds more like "ditto blocks" [07:06:16] <pguser> on the physical platter [07:06:28] <blueandwhiteg3> pguser: that's a terrible plan for any real data security... [07:06:41] <pguser> blueandwhiteg3: then why did they put it in zfs? [07:06:48] <pguser> i'll go look up the link [07:07:29] <Tempt> blueandwhiteg3: dd on Solaris doesn't give you a pretty little gauge for the same reason grep doesn't spit out ansi colours by default [07:07:34] <Tempt> blueandwhiteg3: It isn't Linux. [07:07:41] <Tempt> blueandwhiteg3: If you want GNU grep, build it! [07:07:41] <blueandwhiteg3> pguser: there are scenarios where it may be useful, but you can't even compare it to a raid... and the more copies of something you keep on a drive, the more space you use up and the slower writing will be [07:07:50] <Tempt> blueandwhiteg3: And if you want GNU dd, build it! [07:08:04] <pguser> blueandwhiteg3: its better than nothing if you only have one disk like me. [07:08:17] <blueandwhiteg3> pguser: why not just back up your hard drive? 
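If the feature pguser is describing is ZFS ditto blocks applied to user data, then in builds of roughly this vintage it is exposed as the per-dataset copies property; a hedged sketch, with pool and dataset names made up for illustration:

    zfs set copies=2 tank/laptop-docs      # keep two copies of every block in this dataset, spread across the device
    zfs get copies tank/laptop-docs        # only data written after the property is set gets the extra copies

As blueandwhiteg3 says, it only helps against localized media errors on a single disk: it costs two or three times the space, and does nothing for a drive that dies outright, so it is a complement to backups rather than a substitute.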
[07:08:20] <pguser> in a laptop senario [07:09:27] * Tempt bored heads off to build SPARC/Solaris packages of some GNU tools. [07:09:51] <blueandwhiteg3> Tempt: Like for blastwave.org? [07:10:17] <Tempt> blueandwhiteg3: Like building packages for blastwave, except not building the packages for blastwave. [07:10:31] <blueandwhiteg3> Tempt: Just for personal use the? [07:10:36] <blueandwhiteg3> *then [07:10:45] <Tempt> I don't share packages anymore. [07:11:13] <blueandwhiteg3> Tempt: I was just curious the purpose. I'm satisfied with using time and dd [07:11:29] <Tempt> I tend to package things up if I'm going to use them. [07:11:42] <Tempt> That way I can easily deploy the packages onto the next system I need to work on. [07:12:07] <Tempt> I don't complain about anything being "missing" from a default install, I just add it to my tools package and then I know exactly what I'm dealing with. [07:12:25] <blueandwhiteg3> well, sustained writes with 3 x drives in a raid-z is about equal to a single drive [07:12:52] <Tempt> Thats about the best you can ask for. [07:12:57] <Tempt> Your read speed should be better. [07:13:00] <twincest> iirc one raid-z set will always be write limited to one drive - but you can increase that by striping [07:13:10] <twincest> (e.g. 2 sets of 4 disks, not 1 set of 8) [07:15:06] <Tempt> blueandwhiteg3: If you're on SPARC I can give you a set of packages. [07:15:20] <blueandwhiteg3> Tempt: Haha... sorry, no migration plans. [07:15:45] <pguser> recursive snapshots, what are they? [07:15:54] <Tempt> The only reason I don't build for blastwave is I don't have x86 around here. [07:16:01] <pguser> it looks to be a new feature of solaris express [07:16:06] <pguser> 6/07 [07:16:12] <pguser> er 5/07 [07:16:20] <blueandwhiteg3> twincest: I don't think that the speed of a raid-z is limited to a single drive? [07:16:37] *** jamesd_ has joined #opensolaris [07:16:39] <twincest> i do - and doesn't your test agree with me? [07:16:45] *** jamesd__ has quit IRC [07:17:05] <blueandwhiteg3> twincest: I just clocked 71 MB/sec write. That's a touch higher than the best i've ever seen my drives clock. [07:17:21] <Tempt> Modern SATA drives should hit around 71MB [07:17:59] <blueandwhiteg3> Tempt: I know. I just spent over an hour benchmarking each and every drive to isolate the slow one and make sure I didn't have a controller problem... not once did i see over 69 MB/sec [07:18:14] <Tempt> Perhaps my SATA drives are faster. [07:18:31] *** pguser has quit IRC [07:18:37] <blueandwhiteg3> Tempt: It varies a bit. As density rises, so does throughput, albeit not proportionally. [07:18:52] <twincest> blueandwhiteg3: http://blogs.sun.com/roch/entry/when_to_and_not_to [07:18:53] <Tempt> True. [07:19:17] <blueandwhiteg3> Tempt: These are only 250 GB drives. [07:19:27] <Tempt> aah [07:19:45] <Tempt> Vendor: ATA Product: ST3750640AS Revision: E Serial No: [07:20:00] <blueandwhiteg3> Tempt: Once I get this system running smoothly, I'll expand. But I need to RMA one of these drives already. [07:20:13] <Tempt> Aah, the joy of cheap consumer hardware. [07:20:27] <blueandwhiteg3> 750 GB Seagate SATAs, eh? [07:20:52] <blueandwhiteg3> Tempt: Those drives are the exact same class as my 250 GB Segates. 7200.10s and such. [07:21:18] <Tempt> Yep. Cheap consumer hardware. [07:21:37] <Tempt> Vendor: HITACHI Product: HUS1014FASUN146G Revision: 2A07 Serial No: 0547TWLX29 [07:21:45] <Tempt> That's the profile of my other drives [07:22:13] <blueandwhiteg3> That's some Sun-specific drive? 
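twincest's point about striping raid-z sets (and the Roch blog entry linked above) comes down to how the pool is laid out at creation time; a minimal sketch with invented device names, showing eight disks as two raid-z vdevs instead of one:

    # either: one 8-disk raid-z vdev (capacity of 7 disks, but roughly one disk's worth of small-I/O performance per the discussion above)
    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0
    # or: two 4-disk raid-z vdevs striped together (less usable space, better throughput and IOPS)
    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 raidz c1t4d0 c1t5d0 c1t6d0 c1t7d0

ZFS stripes across the top-level vdevs automatically, so nothing beyond the zpool create line is needed to get the second layout.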
[07:23:10] <Tempt> It's just a Fujitsu drive with Sun firmware [07:23:18] <Tempt> They do that so they'll all exactly the same sector count etc [07:23:22] <blueandwhiteg3> ah [07:23:44] <Tempt> No real difference to just buying a 146Gb 10k FC Fujitsu off the shelf. [07:24:07] <blueandwhiteg3> I'd prefer not to pay a good bit more for specialized drives, and I don't need high uptime, so I think what I have is probably good enough. [07:24:20] *** Gman has joined #opensolaris [07:24:51] <noyb> welcome back [07:25:39] <Gman> hi noyb [07:25:58] <blueandwhiteg3> twincest: I keep clocking a bit over what I'd otherwise predict using a single drive, but I will read that article you linked in a second. [07:26:14] <jmcp> Tempt: there are some differences - chiefly in the way that the non-Sun branded drives respond to scsi inquiry page83, but apart from that you're spot on as far as I recall [07:26:27] <Tempt> /opt/PCOWgnucore/bin/dd if=/dev/zero of=/sata750/incoming/craptastic bs=1024k count=100 [07:26:30] <Tempt> 104857600 bytes (105 MB) copied, 0.485573 s, 216 MB/s [07:26:37] <Tempt> This is why "dd" is not a benchmarking tool [07:26:52] *** jamesd__ has joined #opensolaris [07:26:59] <Tempt> jmcp: I've never had problems using non-Sun drives in Sun systems and vice versa [07:27:13] <jmcp> I've come across only one case where it was a problem [07:27:31] <jmcp> that was with a very dodgy consumer-grade disk that somebody put into a server [07:27:39] <Tempt> There were a lot of firmware problems with the early FC drives [07:27:42] <noyb> dd "speed" will vary greatly with blocksize in my experience. [07:27:57] <Tempt> jmcp: Consumer disk in a Sun box? [07:28:06] <Tempt> jmcp: Do they still make consumer SCSI disks? [07:28:19] <noyb> this is not to say that it is a benchmark. [07:28:30] <jmcp> no, this was a consumer IDE disk that somebody put into an ultra10 [07:28:34] <Tempt> /opt/PCOWgnucore/bin/dd of=/dev/null if=/sata750/incoming/craptastic bs=1024k count=100 [07:28:37] <Tempt> 104857600 bytes (105 MB) copied, 0.510397 s, 205 MB/s [07:28:44] <Tempt> Now, that figure is believable. [07:29:18] <Tempt> jmcp: I'd never thought of an Ultra-10 as a server ;) [07:29:26] <jmcp> well ... me neither, really [07:29:31] <jmcp> but it was being used as a mail server [07:29:38] <Tempt> I mean, you can pick an Ultra-10 up with one arm [07:29:44] <Tempt> It can't be a real machine. [07:29:47] <jmcp> and then propel it with the other [07:30:15] <Tempt> I mean, in that generation of workstation-class, an Ultra-80, well that could be a server. [07:30:29] <Tempt> You can't just sling that under an arm and walk a few blocks. [07:30:33] <noyb> Tempt: I see your point about the variations *without* changing the blocksize. [07:30:38] <Tempt> (unless you're a lot more buff than the average sysadmin) [07:30:41] <jmcp> Tempt: you're a size-ist :-) [07:31:02] <Tempt> No, really, a read speed of 205MB/s is actually about right. [07:31:20] <Tempt> A write speed, however, of 216MB is insane to a raidz. [07:31:38] <Tempt> Notch the file size up by a factor of 10 ... [07:31:47] <Tempt> 1048576000 bytes (1.0 GB) copied, 8.3196 s, 126 MB/s [07:31:53] <Tempt> Oh, look, it goes down. [07:31:56] <Tempt> Now, make it 10Gb... 
[07:33:23] <blueandwhiteg3> yeah, if your sample size is too small (or really, your time duration) dd is screwy [07:33:51] <jmcp> or if you want to do some multithreaded testing [07:34:04] <jmcp> or if you choose the block size which most closely matches your device cache size [07:34:17] <blueandwhiteg3> jmcp: or any kind of testing other than synthetic sustained read/write [07:34:23] <Tempt> Or if all your writes are going into zfs aggressive ram cache [07:34:59] <jmcp> yup [07:35:31] <Tempt> 10485760000 bytes (10 GB) copied, 131.595 s, 79.7 MB/s [07:35:37] <Tempt> from: /opt/PCOWgnucore/bin/dd if=/dev/zero of=/sata750/incoming/craptastic bs=1024k count=10000 [07:35:50] *** jamesd has quit IRC [07:36:02] <Tempt> A very different story, and one that reflects the fact that there is more I/O on that zpool than a single dd job. [07:36:07] <noyb> Tempt: it's really fast if you read from /dev/null instead... [07:36:13] <noyb> :-) [07:36:33] *** cypromis has quit IRC [07:37:34] <blueandwhiteg3> Well, I'm delighted to report that my RAID-Z is working to a level I'd call satisfactory. The new problem is that NFS is still sucking, badly. [07:37:37] *** jamesd_ has quit IRC [07:37:42] *** jamesd_ has joined #opensolaris [07:37:50] *** jamesd__ has quit IRC [07:38:37] <blueandwhiteg3> I read large files at ~105 MB/sec locally on the solaris system... but only like 32-35 MB/sec over NFS [07:38:42] *** sartek has joined #opensolaris [07:38:47] <blueandwhiteg3> I wonder if I should just go kill some developer at Apple.... [07:38:55] <jmcp> blueandwhiteg3: only one?' [07:39:06] <blueandwhiteg3> jmcp: Alright, an entire office of them [07:39:14] <blueandwhiteg3> mount -t nfs -o rwsize=60144,forcedirectio,udp 10.1.1.1:/bigpool/ /Server [07:39:15] <jmcp> heheheh [07:39:32] <blueandwhiteg3> despite playing with various flags, it doesn't seem to be helping immensely [07:39:39] <Tempt> ahaha [07:39:40] <Tempt> HAHA [07:39:43] <Tempt> Yes, well. [07:39:56] <Tempt> How much systime is your mac burning during those nfs jobs? [07:42:02] <blueandwhiteg3> nfsiod is using about 13% and kernel_task is using about 30%, of a cpu running at 1.067 GHz [07:42:21] <blueandwhiteg3> that's reading at ~33 MB/sec [07:42:32] *** chadz has quit IRC [07:42:35] <Tempt> yowch [07:42:48] <blueandwhiteg3> How hard is iSCSI to setup? [07:43:08] <Tempt> Easy for the zvols, I have no idea for the MacOS side. [07:43:29] *** cmihai has joined #OpenSolaris [07:43:30] *** jamesd_ has quit IRC [07:43:55] <blueandwhiteg3> Tempt: The OS X side I can handle. [07:44:48] <blueandwhiteg3> Tempt: Where do I start? [07:45:17] <blueandwhiteg3> iscsiadmin ? [07:45:46] <blueandwhiteg3> yep, it's not on, gotta turn the service on [07:46:30] <seanmcg> blueandwhiteg3: theres been some threads about the OS X iscsi implementations.. benr doesn't like em.. [07:47:03] <blueandwhiteg3> seanmcg: well, i can give it a shot and find out i guess? [07:47:27] <blueandwhiteg3> Is there a howto anywhere? [07:47:49] <noyb> google [07:47:52] <cmihai> iSCSI works on MacOS. [07:48:05] <cmihai> blueandwhiteg3: you want to start a iSCSI target on the Solaris side? [07:48:06] <cmihai> That's easy. [07:48:25] <seanmcg> sure, though have a look at benr's post to zfs-discuss. Some prior knowledge is always useful :) [07:48:28] <blueandwhiteg3> cmihai: Yep. I'll try it and see. 
I'd rather not rip out the nfs subsystem in NFS [07:48:30] <cmihai> zfs create -s -V 1T storage/iscsi && zfs set shareiscsi=on storage/iscsi [07:48:46] <cmihai> That creates a 1TB volume and exports it via iSCSI. [07:48:51] <cmihai> That's pretty much all you need. [07:49:20] <cmihai> PS: you don't even need a 1TB pool, that is an overcommited volume :-) [07:49:35] <blueandwhiteg3> cmihai: Why can't I simply match it to the filesystem? [07:49:42] <cmihai> Then you can just connect to that on your Solaris/Linux/MacOS/whatever. [07:49:45] <cmihai> blueandwhiteg3: what do you mean? [07:49:52] <cmihai> "match it to the filesystem" [07:50:02] <cmihai> It exports a RAW volume via iSCSI. [07:50:10] <cmihai> So you can format it anything you like on the initiator side. [07:50:21] <cmihai> And I've tried Win/Lin/Sol/Win as initiators, they all work great. [07:50:27] *** freakazoid0223 has quit IRC [07:50:48] <blueandwhiteg3> cmihai: Oh, how interesting. I am only sorting out how iSCSI works.... that could be interesting. [07:51:04] <cmihai> It's not NFS mate. [07:51:09] <cmihai> It's like a local SCSI disk. [07:51:14] <cmihai> Think Fibre Channel storage. [07:51:20] <blueandwhiteg3> Over gigabit [07:51:23] <cmihai> Or external SCSi storage. [07:51:28] <cmihai> Over TCP/IP. [07:51:44] <cmihai> Add Gigabit cards + aggregation (see dladm aggr in Solaris) and you're set. [07:52:04] <cmihai> You're basically going to see the disk as an empty local SCSI disk on the initiator side. [07:52:09] <cmihai> Terms: initiator = client [07:52:12] <cmihai> target = server. [07:52:35] <Gman> man, it's ball freezing in sydney today [07:53:04] <cmihai> cool [07:53:13] <seanmcg> heh, wait till you hit Dublin Gman :) nowt but rain forcast for the week :) [07:53:27] <Gman> seanmcg, can't wait ;) [07:53:44] *** triplah_ has joined #opensolaris [07:54:55] <Gman> seanmcg, perfect guinness drinking weather :) [07:55:24] <seanmcg> Aye. sure what else would there be to be doing ?-) [07:55:54] <g4lt-U60> drinking whisky? [07:56:39] <cmihai> Drinking Heineken? [07:56:54] * g4lt-U60 throws a whisky bottle at cmihai [07:57:05] <cmihai> Eye, thanks mate. [07:57:21] <noyb> it was broken and empty... ;-) [07:57:24] <blueandwhiteg3> well, there's only one free iSCSI initiator for OS X that I found... so here goes nothing [07:57:25] <blueandwhiteg3> brb [07:57:26] *** blueandwhiteg3 has quit IRC [07:57:28] <cmihai> Bastards! [07:57:37] <noyb> lol [07:58:10] <e^ipi> I never got the appeal of guinness [07:59:09] <noyb> I never got the appeal of beer [07:59:09] <Gman> there's not much of an appeal outside ireland [07:59:34] <e^ipi> self-proclaimed geeks here drink it a lot for some reason [07:59:42] <e^ipi> you know the type I mean... [08:00:01] <e^ipi> but it's not really very good at all [08:00:21] <cmihai> It's not bad when it's chilled and served fresh [08:00:55] <twincest> guinness is "ok" [08:01:03] <g4lt-U60> it's really good for seperating men from boys. men will at least attempt to drink it or make a half and half. boys willwhine about it [08:01:04] <twincest> i'll drink it if there are no real ales [08:01:35] <e^ipi> why wouldn't you be able to drink guinness? [08:01:40] <e^ipi> it's like, 4% per volume [08:01:43] <e^ipi> it's a woman's beer [08:01:46] <cmihai> Less.. [08:01:57] <cmihai> And considering the quantity it's served in.. 
400ml [08:01:58] <Gman> [it tastes a lot better in ireland, and then it really is quite enjoyable] [08:02:02] <cmihai> unlike regular beer (500ml) [08:02:09] <g4lt-U60> e^ipi, because the jocks see the dark beer and decide for their coors light or crap beer [08:02:10] <cmihai> and the price (3-4 times the price of a normal beer).. [08:02:18] *** estibi has quit IRC [08:02:29] <cmihai> Getting drunk on Guinness is practically impossible :-) [08:02:34] <g4lt-U60> I usually get it in draught, which changes the equation [08:02:48] <cmihai> That's the idea... [08:02:58] <cmihai> The whole: we use N and CO2 deal.. [08:03:09] <g4lt-U60> they just grab a glass and pour here, no special glasses for guinness [08:03:15] *** blueandwhiteg3 has joined #opensolaris [08:03:44] <blueandwhiteg3> alright, we're all set to go here, iSCSI target should be active on the solaris machine, initiator on OS X [08:03:47] <g4lt-U60> or if they are, it's part of a "buy the glass and get specials on refills" deal [08:03:53] <e^ipi> I like a good dark british ale, but guinness is just kinna gross [08:04:04] <blueandwhiteg3> I don't know how to configure the authentication.... [08:04:08] <twincest> guinness is not a british ale :) [08:04:13] <twincest> doesn't taste anything like proper ales [08:04:15] <e^ipi> i know that [08:04:29] <g4lt-U60> again no disagreements, but if they can't handle guinness, god help them if I get a real porter or stout [08:04:32] <Gman> cmihai, oh, i assure you it's very possible [08:04:41] <cmihai> Well, I've tried. [08:04:44] <cmihai> Had like 14 [08:05:15] <cmihai> All I did was piss. A lot. [08:05:19] <blueandwhiteg3> I created: zfs create -s -V 450G bigpool/iscsi [08:05:20] * Gman is obviously a lightweight, approaching 8 is enough to sink him [08:05:35] <blueandwhiteg3> then I shared it: set shareiscsi=on bigpool/iscsi [08:05:56] *** freakazoid0223 has joined #opensolaris [08:07:06] <Tempt> beer? Dark? Get yourself some Chimay. [08:07:09] <cmihai> blueandwhiteg3: carefull of the size mate. For example, I used 1TB cause Windows is limited to 1TB volumes (you need to use dynamic disks for more, add volumes to the volume manager (Veritas Lite based)). 450G should be fine with OSX though. [08:07:29] <blueandwhiteg3> cmihai: The size should be good. [08:07:46] <trochej> Elo [08:07:49] <trochej> Coffee? [08:08:08] <cmihai> iscsitadm list target -v [08:08:33] <cmihai> trochej: is that your duck? [08:08:33] <blueandwhiteg3> cmihai: How do I setup authentication? Everything looks good there [08:08:48] <blueandwhiteg3> Tempt: I have some chimay in the fridge... [08:08:55] <cmihai> blueandwhiteg3: nuts to auth atm [08:08:58] <cmihai> Just make sure it works [08:09:03] <cmihai> And you can format it on the initiator side [08:09:06] <cmihai> and put some data on it [08:09:17] <blueandwhiteg3> OS X reports: "The password sent did not match the target secret, login status code 0202." [08:09:37] <cmihai> Bleep? [08:09:42] <cmihai> Try a Windows client or a Solaris client ok? [08:09:48] <cmihai> Should be without auth by default. [08:09:53] <g4lt-U60> tempt I really like Deschutes Obsidian Stout for real drinkin [08:09:55] <cmihai> It does support CHAP and Radius and all that though. [08:10:11] <blueandwhiteg3> cmihai: I also have auth off, and it's still failing. You say it should be fine without auth, however? [08:10:50] *** trede has joined #opensolaris [08:12:01] <blueandwhiteg3> port 3260 is correct? [08:12:12] <Gman> jmcp, how long would it take to walk from neutral bay into north sydney? 
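For anyone retracing blueandwhiteg3's steps, the target-side setup cmihai is describing amounts to a handful of commands; a minimal sketch, with the pool, volume name and size as placeholders, and the SMF service name quoted from memory so it may differ between builds:

    svcadm enable -r svc:/system/iscsitgt:default    # "it's not on, gotta turn the service on"
    zfs create -s -V 450G bigpool/iscsi              # -s makes the zvol sparse / thin-provisioned
    zfs set shareiscsi=on bigpool/iscsi              # export the zvol as an iSCSI target
    iscsitadm list target -v                         # note the target IQN the initiator will need

The initiator side then points at the Solaris box's IP on port 3260 and should see what looks like an empty local SCSI disk, to be partitioned and formatted natively on the client.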
[08:12:37] <trochej> cmihai: Hmm? [08:13:03] <blueandwhiteg3> cmihai: How can I enable CHAP? I think it needs a 'secret' [08:14:51] <blueandwhiteg3> I guess I'm gonna packet sniff [08:15:19] <cmihai> blueandwhiteg3: yes. Try another client first. Got a Windows or something? Or another Solaris? [08:15:38] <blueandwhiteg3> I don't have another solaris box. I could try windows. Recommended client? [08:15:58] <e^ipi> every windows box is a potential UNIX box [08:16:16] <dlg> sfu [08:16:28] <blueandwhiteg3> e^ipi: Ah, but you see, this is windows in a box... it is virtualized! [08:17:03] <cmihai> So? [08:17:04] <cmihai> Will work fine! [08:17:15] <cmihai> Use MS iSCSI [08:17:17] <cmihai> it works great [08:17:18] <blueandwhiteg3> I could install any other OS I want as well in virtualization [08:17:24] <cmihai> http://www.microsoft.com/windowsserver2003/technologies/storage/iscsi/default.mspx [08:17:33] <cmihai> Get the 32 bit for your platform of course. [08:18:28] <blueandwhiteg3> I tried installing solaris under virtualization and my freakin' virtualization environment was a huge PITA with video [08:19:23] <trochej> Drat, I got exception in thread "main" with sconadm in Sol 10 [08:19:23] <trochej> :/ [08:21:50] <jmcp> Gman: about 15 minutes [08:22:00] <jmcp> depending on where you are in NB of course [08:22:31] *** hali has joined #opensolaris [08:23:24] <Gman> jmcp, towards military road, wycombe road [08:25:36] *** Fullmoon has joined #opensolaris [08:26:46] <Tempt> g4lt-U60: Deschutes Obsidian Stout, huh? Haven't seen that one around locally. [08:26:53] *** blueandwhiteg3_ has joined #opensolaris [08:27:09] <blueandwhiteg3_> Do you have to specify the target name when logging into iSCSI? When you run 'iscsitadm list target -v' which is that? Do you need to specify the iSCSI name anywhere when logging in? [08:27:31] *** blueandwhiteg3 has quit IRC [08:30:07] <blueandwhiteg3_> My windows environment is temporarily down. It will be up in a little while. [08:32:08] <g4lt-U60> Tempt, nor would you, unless you were in the pacific northwest [08:32:19] *** yarihm has quit IRC [08:32:50] <e^ipi> pacific northwest minus canada [08:33:10] <g4lt-U60> and given taht they'd have to add fucking preservatives to change that, I hope that it never changes [08:33:50] <e^ipi> I get my coffee 3 days after roasting in seattle [08:33:54] <g4lt-U60> e^ipi, I've no doubt that it's available in BC as well [08:33:59] <e^ipi> I think they could ship a few cases to vancouver [08:34:55] <g4lt-U60> since they're small, they might not be willing ot deal with the hassles of exporting yet [08:35:04] <e^ipi> fair enough [08:36:02] <blueandwhiteg3_> I can't find the posts about iSCSI initiators under OS X on the zfs-discuss list. Anybody want to point me in the right direction? [08:36:50] <e^ipi> too bad there's no low-power ( like via C3 ) amd64 class machine [08:37:13] <blueandwhiteg3_> e^ipi: AMD makes low power chips [08:37:16] <e^ipi> embedded ZFS appliance would be wonderful [08:37:26] <e^ipi> blueandwhiteg3_: x86 chips though [08:37:28] <e^ipi> not amd64 [08:37:33] <Tempt> g4lt-U60: We get a *lot* of imports here. [08:37:40] <Tempt> g4lt-U60: So, err, ship me a case already ;) [08:37:42] <e^ipi> hell if i'm gonna run zfs on a 32 bit machine [08:38:03] <g4lt-U60> I'll trade you for molson max ;P [08:38:10] <Samy> e^ipi: Why? [08:38:20] <Tempt> g4lt-U60: How about Mountain Goat Surefoot Stout? [08:38:25] <Tempt> g4lt-U60: That's pretty good stuff. 
[08:38:38] <e^ipi> Samy: because ZFS runs like ass on a 32 bit machine [08:38:40] <g4lt-U60> okay, next time I get a round tuit [08:38:47] *** jmcp has quit IRC [08:38:52] <blueandwhiteg3_> e^ipi: Have you seen the turion chips? [08:39:03] <e^ipi> yes, in laptops [08:39:37] <Samy> e^ipi: I assume this. [08:39:44] <Samy> e^ipi: But is it all because of the 128-bit fields? [08:39:59] <Samy> I wouldn't imagine that to be a serious issue if you don't have large files to begin with. [08:40:08] <blueandwhiteg3_> e^ipi: There's no reason why they can't be used in an embedded application. The TDP is similar to many Via chips. [08:40:33] <e^ipi> blueandwhiteg3_: but nobody sells ATX boards that hold them [08:40:40] <e^ipi> that I know of, anyways [08:40:56] <blueandwhiteg3_> They make socket 754 turions [08:41:05] <e^ipi> interesting [08:42:11] <blueandwhiteg3_> http://www.pricewatch.com/cpu/turion.htm [08:43:00] <blueandwhiteg3_> Their TDPs push down into the 20-some watts range and with cool 'n quiet and throttling and all, they probably idle at just a few watts [08:43:20] <Tempt> Build an embedded based on T1. [08:43:25] <Tempt> That'll do nicely. [08:43:29] <e^ipi> Tempt: buy me a T1 [08:43:39] <Tempt> Buy your own damn hardware! [08:43:51] *** nrubsig has quit IRC [08:43:52] <seanmcg> blueandwhiteg3_: I can't find that thread zfs + OS X either from o.s.o Odd. benr did try using the globalSAN initator on the Mac side, but gave up after two days with it. [08:43:58] <e^ipi> it would make a decent embedded machine [08:44:11] <e^ipi> particularly the single-core version that surfaced a while ago [08:44:14] <e^ipi> assuming anyone'd sell them [08:44:42] <blueandwhiteg3_> seanmcg: alright, at least i'm not crazy! [08:44:59] <Tempt> OpenSPARC S1 or whatever. [08:45:05] <Tempt> They've got that working in an FPGA [08:45:08] <blueandwhiteg3_> anybody want to grab packets of you connecting to an iSCSI mount properly? I only need the first two or three packets. [08:45:48] <blueandwhiteg3_> or i'll eventually probably be able to do it, but not until windows is ready [08:48:53] <cmihai> blueandwhiteg3_: it works. [08:49:43] <cmihai> On MacOS. [08:50:52] <blueandwhiteg3_> e^ipi: Did you know Intel makes ULV xeons? [08:51:02] <blueandwhiteg3_> e^ipi: They go as low as like... 13 watt TDP? [08:51:03] <cmihai> blueandwhiteg3_: http://www.studionetworksolutions.com/products/product_detail.php?pi=11 use this on MacOS [08:51:10] <e^ipi> I did not know that, no [08:51:30] <blueandwhiteg3_> cmihai: Already installed and setup. Authentication fails. [08:51:36] <cmihai> blueandwhiteg3_: it's a free iSCSI initiator implementation that works. [08:51:38] <cmihai> Oh. [08:51:48] <cmihai> Setup CHAP on the target. [08:52:04] <blueandwhiteg3_> That's what I was asking earlier, how is that done? [08:52:55] <e^ipi> Gig-E <-> firewire doesn't exist, does it? [08:53:26] <blueandwhiteg3_> e^ipi: Why would you want that? Fastest FW = 800 mbit [08:53:38] <e^ipi> faster than 100mbit [08:53:55] <e^ipi> which is what I've got currently [08:53:58] <blueandwhiteg3_> what do you have with firewire but without gigabit or a pci slot? [08:54:12] <cmihai> iscsitadm modify admin --chap-name bollocs [08:54:13] <blueandwhiteg3_> an iBook? [08:54:20] <e^ipi> apple in all their wisdom decided they don't want to make a machine for under $2000 with PCI [08:54:31] <blueandwhiteg3_> e^ipi: Why not just use IP over FireWire? 
[08:54:31] <e^ipi> i have a G4 mini [08:54:31] <cmihai> iscsitadm modify admin --chap-secret [08:54:34] <cmihai> And poke the secret there [08:54:39] <Tempt> IP over firewire works. [08:54:42] <cmihai> Anywho [08:54:47] <cmihai> Just RTFM iscsitadm manpage [08:55:05] <blueandwhiteg3_> e^ipi: I'd suggest IP over FW (works with linux too) and just roll your own solution [08:55:16] <seanmcg> blueandwhiteg3_: have a look at benr's blog for some info on chap [08:55:27] <e^ipi> i dunno if my solaris machine's firewire ports even work [08:55:38] <e^ipi> i don't use FW for anything [08:55:46] <asyd> \_o< [08:56:00] <cmihai> blueandwhiteg3_: http://www.cuddletech.com/blog/pivot/entry.php?id=834 - CHAP [08:58:07] <blueandwhiteg3_> e^ipi: my thought is just to use a little linux box as a bridge [09:00:48] * Samy gave away his G4 mini to a poor Macedonian kid [09:00:55] <Samy> Loaded up with Linux. [09:01:03] <Samy> Wonder how far he went with that ;-p [09:01:06] <Samy> I bet his parents sold it [09:01:07] <Samy> hahaha [09:01:42] <g4lt-U60> e^ipi, blade100? [09:02:17] <g4lt-U60> I have a firewire CD bruner working fine with nv_55b on my SB100 [09:03:12] *** Cyrille has joined #opensolaris [09:05:42] <e^ipi> naw, it's an off the shelf sempron thing [09:05:57] *** triplah_ has quit IRC [09:05:57] *** het has quit IRC [09:05:57] *** rachel has quit IRC [09:05:57] *** xuewei has quit IRC [09:05:58] *** aeroevan has quit IRC [09:05:58] *** deather has quit IRC [09:05:58] *** jpipkin has quit IRC [09:05:59] *** polk__ has quit IRC [09:05:59] *** postwait has quit IRC [09:05:59] *** Marv|LG has quit IRC [09:05:59] *** cormac has quit IRC [09:05:59] *** bda has quit IRC [09:06:00] *** spiff_ has quit IRC [09:06:00] *** Abe_Froman has quit IRC [09:06:00] *** sparkleytone has quit IRC [09:06:00] *** timeless has quit IRC [09:06:00] *** cstumpf has quit IRC [09:06:01] <e^ipi> the firewire chipset is whatever comes with SiS900 [09:07:08] <blueandwhiteg3_> e^ipi: I don't know that solaris can do IP over FW directly, but a linux box would make an easy bridge... [09:07:21] <e^ipi> but then i'd have to run linux [09:07:39] <trochej> Or freebsd [09:07:53] <Tempt> Or just give up on ip over firewire [09:08:01] *** danv12 has quit IRC [09:09:07] *** Tpenta has quit IRC [09:09:11] <blueandwhiteg3_> I don't think FreeBSD supports IPoFW [09:09:25] <Samy> It does. [09:10:09] <blueandwhiteg3_> Excellent. [09:10:31] * steleman is away: Gone away for now. [09:10:32] *** reflecte has quit IRC [09:10:54] *** steleman is now known as steleman_away [09:11:04] <trochej> I believe freebsd supports everything under the sky, it's just not as well advertised as Linux [09:18:40] *** kloczek has joined #opensolaris [09:19:01] *** Snake007uk has joined #opensolaris [09:22:01] *** laca has joined #opensolaris [09:22:08] <blueandwhiteg3_> wow, iSCSI support is really frustrating [09:22:10] *** Dink has quit IRC [09:23:04] <The-spiki> Samy: Macedonians aren't poor. As fas as we're talking about IT Macedonia is actually among the best developed nations... [09:23:08] *** cydork has joined #opensolaris [09:25:05] *** Dink has joined #opensolaris [09:25:43] *** estibi has joined #opensolaris [09:27:28] <e^ipi> wacky yugoslavia and it's deceptively developed economy [09:28:03] <e^ipi> take that, former soviet union [09:28:09] <The-spiki> e^ipi: wtf? :) [09:28:28] <e^ipi> the former soviet states can't get their shit together after communism [09:28:32] <e^ipi> yugoslavia could [09:28:59] <The-spiki> macedonia wasn't in SU. 
It was briefly in the yugoslavia, but that doesn't matter... [09:29:12] <e^ipi> I'm aware of this [09:29:26] <e^ipi> but both the soviet union and yugoslavia were communist states [09:29:42] <The-spiki> yugoslavia was socialist. not comunist [09:30:13] <e^ipi> and when they both were no longer socialist, yugoslavia flourished ( civil war notwithstanding ) and the ex soviet states did not [09:30:17] <The-spiki> if you follow politics you'll see that france, sweden and other western europe countries are also socialist [09:31:13] <The-spiki> blah. you can't compare Macedonia with Serbia, Croatia, Bosnia... [09:31:23] <The-spiki> it's like mixing apples and oranges [09:32:07] <e^ipi> except some of the apples want to kill the oranges and take their land [09:32:10] <blueandwhiteg3_> Anybody want to point me in the direction of setting up an smb share under solaris? [09:32:26] <e^ipi> under pretense of ancient history [09:32:27] <blueandwhiteg3_> I'd like to see if Apple's build of samba sucks as bad as their nfs [09:32:42] <e^ipi> blueandwhiteg3_: i've never had problems with apple's NFS client [09:32:47] <e^ipi> server, i'll not comment on [09:32:51] <blueandwhiteg3_> e^ipi: Except that it's slow [09:33:21] <blueandwhiteg3_> e^ipi: It might not be very noticeable with 100 mbit... but with gigabit, i still feel like i'm using 100 mbit [09:33:33] <g4lt-U60> man -M /usr/sfw/man smbd [09:33:51] <blueandwhiteg3_> they call it sfw? [09:34:08] <g4lt-U60> sfw == sun freeware [09:34:13] <e^ipi> although in fairness to milosevic, seems to me like it was some particularly nutty military leaders that did most of the un-necessary killing [09:34:39] *** gdamore has joined #opensolaris [09:35:06] <The-spiki> e^ipi: Macedonia wasn't involved in the war between Serbs, Croats and Muslims (over parts of Croatia and Bosnia)... [09:36:18] <The-spiki> my opinion about milosevic is khmm... if there was continuation of the trial he would probably get a sentence, without evidents that he's guilty. [09:36:33] <e^ipi> no, he was quite hated [09:37:18] <palowoda> blueandwhiteg3: What smb.conf info are you looking for? [09:38:03] <The-spiki> his biggest problem were his close asociates (politicians from the party). he also didn't wanted to draw the line between criminals and the army. that got him in a really bad position [09:38:05] <blueandwhiteg3_> palowoda: I'd be happy with just about anything that enabled solaris to mac os x samba communication. [09:38:43] <gdamore> hi * [09:39:13] <gdamore> e^ipi: I've not seen a response from the OGB yet re. a repository for your sources. [09:39:19] <blueandwhiteg3_> palowoda: it seems simple, but it doesn't seem to work [09:39:24] <palowoda> Ah, not familiar with smb on Apple. Samba smb between Solaris x86 and Windows is really fast. [09:40:26] <blueandwhiteg3_> palowoda: Samba under OS X seems pretty fast. Never really used it much, but I've seen well into the tens of MB/sec. OS X's default configuration seems decent (works great with Windows networks) but I think something may be wrong with the way solaris is setup [09:40:33] <e^ipi> gdamore: i can just tarball up my workspace and send them to you [09:40:35] <e^ipi> *shrug* [09:40:44] <gdamore> for now that would be fine. [09:40:47] <e^ipi> IIRC the review is due on the 16th [09:41:08] <gdamore> i still want you to be able to post directly... but there is some SoC infrastructure missing. [09:41:10] *** Gman has quit IRC [09:41:17] <richlowe> Hey gdamore. 
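Circling back to the CHAP question blueandwhiteg3 raised before the thread drifted: the only commands that came up in channel are the ones cmihai pasted, and benr's cuddletech post linked above covers the rest. A rough, unverified sketch of that shape (the name is a placeholder, and exactly which side's credentials these particular subcommands set is the detail the iscsitadm man page and the blog post should be trusted on over this note):

    iscsitadm modify admin --chap-name someuser
    iscsitadm modify admin --chap-secret          # prompts for the secret
    iscsitadm list target -v                      # confirm the target is still exported

The initiator (globalSAN on the OS X side here) then needs the matching CHAP name and secret entered in its login settings.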
[09:41:20] *** Gropi_ is now known as Gropi [09:41:27] <palowoda> blueandwhiteg3: What setup on Solaris are you talking about? [09:41:28] <gdamore> hey richlowe. :-) [09:41:36] <richlowe> e^ipi: You have an Hg workspace in the SCM infrastructure... [09:41:50] <richlowe> if gdamore isn't equipped to poke it, stevel is. [09:42:03] <gdamore> yeah, but it would be nice if he could edit web pages, and post webrevs. [09:42:15] <e^ipi> gdamore: it's mostly freebsd code at the moment anyways [09:42:17] <gdamore> and i know nothing about the HG stuff. [09:42:20] <richlowe> emancipation, as far as web pages. [09:42:31] <richlowe> codereview, I'm pretty sure tools endorsed him. [09:42:35] <gdamore> i figured he needed contributor status to do anything. [09:42:35] <richlowe> ping a tools person who isn't me. [09:42:36] <e^ipi> I suppose i could monopolize the emancipation spaces [09:42:50] <gdamore> oh if tools endorsed him, then we're good. [09:42:53] <e^ipi> gdamore: i can't post to cr.os.o [09:43:12] <e^ipi> but i can use the stuff on emancipation, i had actually forgotten about that [09:43:15] <gdamore> have you set up public ssh keys in your os.o profile? do you have a contributor grant from anywhere ? [09:43:16] <richlowe> gdamore: SCM stuff you'd probably need steve for the initial import to go smoothly [09:43:21] <richlowe> gdamore: the web infrastructure sucks ass. [09:43:26] <e^ipi> I don't have a contributor grant, no [09:43:30] <e^ipi> not that i'm aware of [09:43:37] <gdamore> see... that's what I think we need to have fixed. [09:43:44] <richlowe> And that's what I think should be fixed, by now. [09:43:58] <blueandwhiteg3_> palowoda: I think I've not properly even started samba, despite the fact the gui says it is on, the ports don't seem to be open [09:43:59] <gdamore> you think he has a contributor grant somewhere? [09:44:00] <palowoda> Like boo too. [09:44:35] <blueandwhiteg3_> palowoda: I don't know what it is called within svcadm to start it [09:44:43] *** bengtf_ has joined #opensolaris [09:44:52] <richlowe> gdamore: I thought tools gave him one. [09:45:01] <gdamore> well, that would be helpful. [09:45:10] <gdamore> maybe there is some additional legwork to set stuff up. [09:45:20] <richlowe> or maybe everyone is swamped in crud. [09:45:23] <palowoda> blueandwhiteg3: I've been setting up my own version of samba smb.conf. Edit the smb.conf file and start it up manually to see if it works. [09:45:43] <gdamore> e^ipi: have you recently tried to setup an ssh key on your os.o profile? [09:46:02] <palowoda> Swamped in crud sounds fun. [09:46:03] <blueandwhiteg3_> palowoda: where is smbd? [09:46:49] <blueandwhiteg3_> i found the configuration file, but not the actual binary [09:47:14] <palowoda> /usr/sfw/sbin/smbd [09:47:22] <blueandwhiteg3_> I can't seem to find anything under solaris, sorry [09:47:50] <palowoda> Shesh when is Solaris going to ship locate by default. [09:48:00] <blueandwhiteg3_> it would save a lot of time [09:48:03] <cmihai> never. [09:48:08] <cmihai> You don't need locate. [09:48:10] <blueandwhiteg3_> i either get to crawl the directories [09:48:15] <cmihai> What are you searching? [09:48:19] <cmihai> Say the name of the binary. [09:48:20] <richlowe> The moment someone does the work and ARC's it. [09:48:20] <blueandwhiteg3_> it's found [09:48:21] <cmihai> smbd? [09:48:21] <richlowe> hint hint. [09:48:28] <cmihai> grep smbd /var/sadm/install/contents [09:48:31] <cmihai> Instant gratification. [09:48:45] <blueandwhiteg3_> yes, but that assumes you're familiar with that... 
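cmihai's point about not needing locate rests on the fact that every file delivered by a SVR4 package is recorded in the contents file; a couple of equivalent one-liners, using smbd as the example since that is what was being hunted for:

    grep smbd /var/sadm/install/contents          # path of the file and which package delivered it
    pkgchk -l -p /usr/sfw/sbin/smbd               # the same information, queried by pathname
    find / -name smbd -type f 2>/dev/null         # brute-force fallback for files no package owns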
[09:48:59] <palowoda> richlowe: Oh that involves the swap right? [09:49:08] <cmihai> Yes, that assumes you're familiar with Solaris. [09:49:08] <palowoda> swamp [09:49:17] <cmihai> And locate assumes you're familiar with locate, and a gentoobie gnubie. [09:49:24] <blueandwhiteg3_> palowoda: It's up and running and shared, now i just need to get a valid user account to connect. [09:49:30] <cmihai> Look, learn how to use grep, pkginfo and find. [09:50:10] <palowoda> cmihai: Why make it harder? [09:50:33] <palowoda> More difficult. [09:50:53] <cmihai> palowoda: what? [09:50:54] <blueandwhiteg3_> palowoda: how do i manage the smb users? i need to create some kind of a useful login... i can see the share, but can't authenticate yet [09:50:59] <cmihai> How is this harder? [09:51:07] <cmihai> OH MY GOD, I got it, it's NOT Linux. [09:51:44] <palowoda> cmihai: Ok crawl back under the rock. [09:52:06] <The-spiki> cmihai: don't bash linux just because some newbie is asking question [09:52:06] <palowoda> Train the masses for all I care. [09:53:12] <cmihai> Why does everyone assume Solaris should include their favourite GNU tool in base, by default, just because they can't handle the native tools there? [09:53:48] <palowoda> No you and I can handle the native tools. Screw the rest. [09:54:50] <cmihai> /opt/sfw/bin/glocate f none 0555 root bin 97928 24367 1160007663 SFWgfind [09:54:53] <seanmcg> blueandwhiteg3_: /usr/sfw/bin/smbpasswd to create the user entries or edit smb.conf to point to unix passwds, docs on samba.org tell all :) [09:55:04] <cmihai> palowoda: wonder what that is. [09:55:43] <palowoda> It's glocate because it's a name conflict with locate. [09:56:26] <cmihai> palowoda: look, if you can't handle find, you can't handle locate. It's basic common knowledge. The rest can use nautilus or mc find or whatever. You're using broken logic: you assume they're familiar with Linux and Linux is the one true way. It's even there ffs. [09:56:36] <cmihai> And you assume it's easier then say find. [09:56:49] <cmihai> The learning process is the same: use this command, kthxbye. [09:57:17] <palowoda> It's glocate, and Linux users better get it striaght. [09:58:43] <cmihai> It's glocate because it's gnu findutils, gfind, glocate, etc. Just like GNU ls is gls and so on. It's not THAT complex. Oh well, whatever, I'm out. [09:58:46] * cmihai & [10:00:06] *** dmarker has quit IRC [10:00:17] *** dmarker has joined #opensolaris [10:00:50] <palowoda> I really don't care about the logic of the renaming of GNU locate. [10:01:20] <renihs> afaik its slocate? [10:01:21] <blueandwhiteg3_> wow, samba really outperforms nfs under OS X [10:01:38] <palowoda> Yeah I mentioned that. [10:01:50] <palowoda> At least under winders it does. [10:03:17] <palowoda> At least now in build 67 you can access all your smb clients and back them up. [10:04:36] *** kloczek has quit IRC [10:05:56] <blueandwhiteg3_> palowoda: Yeah. It's sad Apple has a unix based system and has such a crappy NFS client. [10:06:34] <palowoda> But Apple has the same NFS in their phone so it has to be worth something. [10:06:35] <blueandwhiteg3_> The bad news is that it's still not really coming very close to matching the capacity of the RAID, but it's an improvement. Maybe there are parameters can can be tweaked, like with NFS? [10:06:47] <blueandwhiteg3_> I may also have to try a larger MTU, if I could just figure out how to change it under solaris. 
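On the Samba login question, a rough sketch of the smbpasswd step seanmcg points at, assuming the SFW layout (/usr/sfw binaries, /etc/sfw/samba/smb.conf) and that the account already exists in /etc/passwd; the username is a placeholder:

    /usr/sfw/bin/smbpasswd -a fileuser     # add a Samba password entry for an existing Unix user
    # then restart smbd/nmbd, or point smb.conf at the Unix passwords per the samba.org docs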
[10:06:54] *** mazon is now known as Mazon [10:07:55] *** kloczek has joined #opensolaris [10:09:38] *** linma has joined #opensolaris [10:11:11] <palowoda> I'm not sure tweak the net will help smb. Smb has a lot of small packets. Does a lot of busy status communication. But good enough for say a streaming video or backing systems up. [10:11:58] <palowoda> How many users are you worried about using smb? [10:15:14] <palowoda> blue: by the way I look at the specs of your motherboard a little furture. It isn't worth upgrading the cpu. Your only running ddr. [10:17:21] <Tempt> blueandwhiteg3_: ifconfig $ifname mtu $mtu [10:18:16] <blueandwhiteg3_> palowoda: This is basically an elaborate personal file server... literally, I will be the only person using it very often [10:19:07] <blueandwhiteg3_> Tempt: Thanks, but look what I got: ifconfig: setifmtu: SIOCSLIFMTU: nge0: Invalid argument [10:21:37] <blueandwhiteg3_> palowoda: My primary goal is to have maximum throughput, particularly for large files. I think that I have plenty of memory bandwidth for those purposes. [10:22:03] <palowoda> Well I don't have any Apple machines on my home net but have three or four family members doing multimedia work wth the Solaris systems and have no problem. But that is with a Solaris and MS envionment. [10:22:43] <palowoda> *environment [10:22:51] *** Drone has quit IRC [10:23:06] <blueandwhiteg3_> palowoda: I'm concerned with moving large files to and fro as quickly as possible. I'm basically trying to eliminate all my external drives, with perhaps the exception of a few 2.5" models [10:23:11] <blueandwhiteg3_> (for travel) [10:23:17] <asyd> hmm, sux, network doesn't work with solaris/virtualbox [10:24:36] <blueandwhiteg3_> palowoda: As far as I can tell, even locally when slamming with writes, my cpu isn't even close to pegged [10:25:16] <palowoda> You mean you want 99 percent transfer rates or nothing at all? [10:25:37] <palowoda> Put numbers on it. [10:26:28] <palowoda> I could care less about cpu cycles being consumed. [10:26:33] <blueandwhiteg3_> Alright, writing full bore to ZFS using dd gives me about 70-75 MB/sec writes with 3 x 250 GB SATA drives. CPU usage is averaging less than half - maybe 40%? [10:26:56] <palowoda> And what is your goal. And why is it your goal? [10:26:58] <trochej> Hmm [10:27:13] <trochej> I get this error from sconadm log file: [10:27:14] <trochej> INFO: SCN Fault: No valid and not-expired token exists for this user. [10:27:17] <blueandwhiteg3_> reading full bore locally shows 105 MB/sec with three drives [10:27:25] <trochej> Still, I have an account with which I can login to sunsolve [10:27:32] <trochej> And I use this account credentials [10:27:54] <trochej> IT's Sol 10 11/06 [10:28:22] <palowoda> Uggh who cares about sunsolve and Solaris 10. [10:29:17] <blueandwhiteg3_> I want to be as close to saturating the gigabit connection as possible, between the disks and the network protocol. I know the disks will get there when I add more disks. The physical gigabit link sustains 112-115 MB/sec with TCP. I don't know how close I can get. At the very least I want to get a solid 50 MB/sec read / write, such that it's a hair faster than my notebook drive. [10:29:20] <palowoda> Blue so you expect 105M/B on the net too rigithg? [10:29:23] <palowoda> right. [10:29:44] <blueandwhiteg3_> palowoda: I don't expect a perfect conversion between network and local access... but I want to get it as far as possible. [10:30:10] <palowoda> What are you getting now? 
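For context, a sketch of the sort of crude dd test those 70-105 MB/sec local numbers come from; the pool name and sizes are placeholders, and writing zeros can flatter ZFS if compression is enabled:

    dd if=/dev/zero of=/tank/testfile bs=1024k count=8192    # ~8 GB sequential write into the pool
    dd if=/tank/testfile of=/dev/null bs=1024k               # sequential read back
    rm /tank/testfile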
[10:30:40] <blueandwhiteg3_> well, with samba... it's not entirely 'consistent' but it jumps around from 25-50 MB/sec [10:31:05] <palowoda> Well yeah it's going to jump based on what your testing with. [10:31:29] <blueandwhiteg3_> I'm dumping from /dev/zero or reading a large, contiguous file to /dev/null [10:31:48] <palowoda> Not a good measure is it. [10:32:01] *** Drone has joined #opensolaris [10:32:39] <palowoda> Kind of crude. [10:32:55] <blueandwhiteg3_> palowoda: Well, it cuts out the potential of a disk bottleneck. Practically, the results are actually almost identical to copying files to and from my drive. [10:34:00] <blueandwhiteg3_> I use a large block size, etc. [10:34:16] <palowoda> Oh copying files accross the net will cause all kinds of throughput problems. What exactly are you looking for in the end results? [10:34:42] *** timsf has joined #opensolaris [10:34:49] *** Fish has joined #opensolaris [10:35:16] *** nostoi has joined #opensolaris [10:35:19] <timsf> Morning all [10:35:26] <quasi> morning [10:35:42] <Fish> hello [10:36:08] <palowoda> A good early morning [10:36:28] <blueandwhiteg3_> palowoda: I want to move a very large file in as little time as possible, either to or from the raid, particularly to. [10:37:05] <palowoda> Can you do it now? [10:38:02] <palowoda> How fast can you move the large file with ftp? [10:39:16] <blueandwhiteg3_> palowoda: I haven't tested FTP... I suppose I could [10:42:01] <blueandwhiteg3_> how does one enable ftp under solaris? [10:42:06] *** Dar has joined #opensolaris [10:42:12] <palowoda> svcs [10:42:19] <palowoda> svcadm [10:42:27] <palowoda> netservices [10:42:32] <palowoda> glocate [10:42:35] <palowoda> opps [10:42:48] <blueandwhiteg3_> oh, duh, i was trying svcadm start when i had to first enable [10:42:56] <blueandwhiteg3_> too much time with service [10:43:05] <palowoda> ah your new at it right? [10:43:41] <blueandwhiteg3_> yes... i never touched solaris before last week / earlier this week (technically last week, as it's 1 something am on monday here) [10:43:44] <palowoda> Just think of training all the Apple users and Linux users. [10:43:58] <palowoda> Well it's a long road. [10:44:02] *** cognistudio has joined #opensolaris [10:44:07] <blueandwhiteg3_> I'd like a quick pocket guide just pointing out where things are.... [10:44:23] *** cognistudio has quit IRC [10:44:27] <palowoda> Pocket guide to Solaris. Heh now that is a good one. [10:44:42] <palowoda> Ask Sun marketing for somethign like that. [10:44:43] <blueandwhiteg3_> doubles as bulletproof vest! [10:45:33] <timsf> I quite liked the "Mac OSX for Unix Geeks" o'reilly book, in terms of an introduction to mac os x [10:45:34] <palowoda> Hey at least you don't have to worry about recompiling your kernel under Linux and figuring out what kernel modules you need. [10:45:46] <timsf> it'd be lovely if Solaris had something similar I agree... [10:45:56] <blueandwhiteg3_> oh yes, don't get me started on linux and recompiling the kernel! [10:46:07] <blueandwhiteg3_> i was pleased to see that is NOT happening under solaris [10:46:16] <palowoda> timsf: Maybe that is what Ian Murdock should be working on. [10:46:39] <palowoda> blue: It's not going to happen under solaris don't worry. [10:47:18] <blueandwhiteg3_> yeah.. ftp moves... 
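The FTP enablement blueandwhiteg3_ stumbled through, spelled out as a short sketch; the FMRI shown is the usual inetd-converted instance on Solaris 10/Nevada:

    svcs -a | grep ftp                         # find the service name
    svcadm enable svc:/network/ftp:default     # enable, not just restart
    svcs -x ftp                                # confirm it came online
    # netservices open                         # alternative: opens most network services at once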
not something i'd ever thought a lot about using for speedy file transfers, but it scoots [10:47:33] <blueandwhiteg3_> it's hard to tell if my disk or the network is the bottleneck [10:47:54] <timsf> Hah, you'll have to get opensolaris users to agree on what they want first! [10:47:57] <richlowe> timsf: docs.sun.com, in theory. [10:48:04] <richlowe> when it's up. [10:48:07] <palowoda> When it's getting hard to tell you have more time to investigate performance issues later. [10:48:08] <richlowe> and you have time to wait on it. [10:48:09] <timsf> ;-) [10:48:24] <timsf> There isn't a concise ... [10:48:33] * timsf digs at bookshelf [10:48:55] <timsf> 200 page reference though [10:49:05] <blueandwhiteg3_> palowoda: Well, my drives are in a bit of a mess at the moment, I'll swap things around and know soon enough :) [10:49:06] <timsf> Granted, a lot of the shell stuff could be thrown out, [10:49:30] <timsf> but it's a nice book to describe the "different" things in Mac OSX - netinfo, launchd, etc. [10:49:39] <palowoda> blueandwhiteg3: And you have an environment with multiple OS's and that complicates the numbers your finally looking for. [10:50:02] <blueandwhiteg3_> palowoda: I agree. I'm primarily concerned with performance under OS X... other OSes will transfer, but it's not so important to fly [10:50:26] <palowoda> OSX has it's own performance problems which are not documented. [10:50:55] <palowoda> Remember OSX is not exactly an OS you purchase for technical reasons. [10:51:14] * timsf agrees [10:51:17] <blueandwhiteg3_> palowoda: I agree. You don't want to know the ugly issues I've dug up... [10:51:40] <blueandwhiteg3_> I personally like the level of integration it offers. But then I also find myself wanting to throw it out the window fairly often... [10:51:51] <blueandwhiteg3_> though perhaps less than other options [10:53:10] *** coffman_zzz is now known as coffman [10:53:32] *** damienc has joined #opensolaris [10:55:11] <blueandwhiteg3_> But yes, the goal here is multiple throughput. In a few cases, I may be doing more than one file transfer at a time. [10:55:16] <blueandwhiteg3_> *maximum throughput [10:55:19] <blueandwhiteg3_> (not multiple) [10:55:50] <palowoda> Your playing with numbers and percentages your not talking about. [10:57:06] <blueandwhiteg3_> What do you mean? [10:57:34] <palowoda> What results do you expect? [10:57:50] <palowoda> Why do you expect them? [10:58:23] <Tempt> I expect to have a gigabit link and get 100Mbyte/sec throughput on NFS :P [10:58:48] <palowoda> Depending on how much money right? [10:59:22] <Tempt> Because my day to day home computing needs *demand* that sort of performance, and protocol latency shouldn't exist, let alone disks not being able to keep up. [10:59:38] <Tempt> Oh, and I expect my total hardware bill for the server to be tree fiddy [11:00:14] <palowoda> Tempt explain our home needs for 100Mbyte thorughput? [11:00:29] <CIA-26> zf162725: PSARC 2007/058 Ralink RT2500 802.11b/g Wireless Dirver, 6444193 RFE add support for RaLink wireless chipsets [11:00:58] <Tempt> palowoda: I think you're missing my overall tone. [11:01:01] <blueandwhiteg3_> hahaha [11:01:06] <blueandwhiteg3_> I'm going to keep pounding at it.... it's hard to say, this isn't a system being designed according to some kind of grand plan, I'm just trying to make it the best possible. [11:01:23] <palowoda> I don't care what the overall tone is. [11:01:33] <palowoda> Why should I? 
[11:01:38] <blueandwhiteg3_> I can see that OS X's NFS client sucks, which answers one big question for me. I can see Samba has potential. [11:01:48] <Tempt> palowoda: Translation: I was being fscking sarcastic. [11:01:57] <palowoda> Ah much better. [11:02:06] <blueandwhiteg3_> By the way, I used a faster disk and can see that FTP is the winning protocol in terms of throughput. [11:02:11] <Tempt> Hence the ":P" on the top line [11:02:29] <blueandwhiteg3_> I'm not dissatisfied with the results now. [11:02:44] <blueandwhiteg3_> peaking at 65 MB/sec write over the network [11:03:07] <Tempt> Although in this age of digital media and video and whatnot, getting good performance on a home fileserver is handy. [11:03:31] *** MattMan has joined #opensolaris [11:04:19] <Tempt> blueandwhiteg3_: Using jumbo frames yet? [11:04:34] <blueandwhiteg3_> Nope [11:04:35] <blueandwhiteg3_> ifconfig: setifmtu: SIOCSLIFMTU: nge0: Invalid argument [11:04:43] <Tempt> aah [11:04:44] <blueandwhiteg3_> when I try and do: ifconfig nge0 mtu 9000 [11:04:47] <Tempt> unplumb the interface [11:04:50] <Tempt> ndd it [11:04:55] <Tempt> replumb it [11:04:58] <Tempt> ifconfig it [11:05:17] <Tempt> You need to enable jumbo frames before plumbing the interface, and then you can crank the MTU to 9000 [11:05:18] <palowoda> nge support large frames? [11:05:26] <Tempt> WTF is an nge anyway? [11:05:28] <Tempt> Nvidia? [11:05:35] <palowoda> yep [11:05:40] <blueandwhiteg3_> nvidia gigabit ethernet [11:05:53] <palowoda> e1000g does. [11:06:18] <palowoda> HCL guide doesn't really give that kind of info out. [11:06:36] <Tempt> ndd /dev/nge \? [11:06:47] <Tempt> accept_jumbo (read and write) [11:07:12] <palowoda> Hey wait does the OSX or other OS's support nge largeframes also? [11:07:39] <blueandwhiteg3_> yes, OS X supports jumbo frames [11:08:21] <palowoda> What nic do they use in Apple machines? [11:08:46] <Tempt> A crappy one [11:08:58] * Tempt never got his Powerbook to talk to a Cisco 3512XL properly. [11:09:04] <palowoda> A crappy largeframe nic sounds good. [11:09:10] <blueandwhiteg3_> That was back when Apple rolled everything itself [11:09:26] <Tempt> The ethernet in the powerbook was shabby beyond reason [11:09:44] <blueandwhiteg3_> Perhaps related to the 133 MHz FSB?? [11:10:02] <Tempt> and Apple's response was "We do not support the use of the Powerbook with Cisco switches. Please purchase a supported switch such as the Belkin xxx from the AppleStore" [11:10:13] <Tempt> FSB be damned. [11:10:27] <Tempt> An old Pentium-90 could hammer back to back packets on 100Base-T [11:10:53] <blueandwhiteg3_> That old Pentium 90 probably had a 33 MHz system bus [11:10:54] <nightswim> lies [11:10:56] <Tempt> Everything is FSB this and DDR that these days. People blame shoddy performance on FSB or RAM or lack of neon lights. [11:11:09] <blueandwhiteg3_> Now Apple basically calls up their new friends at Intel, get all their latest chipsets and package them into a pretty package. [11:11:17] <blueandwhiteg3_> So you get the latest Intel special. [11:11:20] <palowoda> Oh crap, can't have too many neon lights. [11:11:53] <blueandwhiteg3_> I'm not too unhappy. Regular frames, netperf, tcp gets me reliably at least 895 mbit/sec to the nge0 interface on the solaris box. [11:12:32] <palowoda> blue you don't have the latest apple intel special? [11:12:34] <Tempt> nightswim: lies? 
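Tempt's jumbo-frame recipe for nge, in order, as a sketch; the address is a placeholder and accept_jumbo is the ndd parameter shown above (some builds set the same thing persistently in nge.conf instead):

    ifconfig nge0 unplumb
    ndd -set /dev/nge accept_jumbo 1
    ifconfig nge0 plumb
    ifconfig nge0 192.168.1.10 netmask 255.255.255.0 mtu 9000 up
    ndd -get /dev/nge accept_jumbo             # verify it stuck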
[11:12:48] <quasi> blueandwhiteg3_: that's probably as far as you're likely to get on on a nic like nge [11:13:08] <Tempt> I've had decent performance with cassini, gem and e1000g [11:13:29] <blueandwhiteg3_> quasi: i'm happy with it... i'd be curious to see netperf results on 'better' NICs... but still, 90% of theoretical [11:13:37] <blueandwhiteg3_> palowoda: it's not the latest intel special, but fairly new [11:14:32] <palowoda> Don't worry blue Apple will have you typing on your Iphone anyday now. [11:14:45] <quasi> blueandwhiteg3_: intel usually do fairly good nics [11:14:59] <blueandwhiteg3_> my point that it's not usually too bad [11:15:06] *** bunker has joined #opensolaris [11:15:30] <blueandwhiteg3_> there's no way i'm buying a phone that expensive that's built off old 2G technology [11:15:35] <palowoda> "too bad" = what number (in apple mathmatics)? [11:16:02] <blueandwhiteg3_> palowoda: I don't know in what sense...? [11:16:07] <palowoda> But Iphone is wifi. [11:16:29] <palowoda> You won't get the sense of appple math. [11:16:44] <palowoda> math related to performance and standards [11:17:24] <palowoda> Unless it's apple math. [11:17:33] <nightswim> Tempt: I was thinking that my p90 can't push 100 to 100, but I forgot that it had crappy nics [11:17:37] <renihs> apple math=math? [11:17:48] <nightswim> so you can discard my comment [11:18:32] <palowoda> nics are so objective on how much they can "push". [11:18:45] <palowoda> subjective [11:18:52] <blueandwhiteg3_> i'm not paying $600 for a freakin' 3" wifi thing to surf the net wth [11:18:55] <blueandwhiteg3_> no 3rd party apps [11:19:01] <blueandwhiteg3_> no voip [11:19:08] <blueandwhiteg3_> no unlocked version [11:19:18] <palowoda> Steve Job blows you a kiss. [11:19:59] <_basta_> choice is good anyway. [11:20:09] <palowoda> And smiles all the way to the bank too. [11:21:00] <blueandwhiteg3_> The problem is that everybody else is idiots in terms of how they designed their handsets, here comes Jobs with a technically crappy product entry but figures out a few design things and... there he goes, making oodles of money [11:21:34] <richlowe> apple have design-fu, and a dedicated fanboy base with spare cash. [11:21:37] <richlowe> you can't fault them for it. [11:22:06] *** NikolaVeber has joined #opensolaris [11:22:20] <_basta_> get the difference between media capable phones, and the media device, which is capable to make calls too. [11:22:22] <palowoda> When the thrill is gone I guess. [11:22:40] <quasi> it's like M$ - why should they ever bother writing a decent os when their customers are willing to buy what they have [11:23:25] <blueandwhiteg3_> i wish they'd just fix the process bloat problem... [11:23:57] <palowoda> no, no no. process bloat is solved with more memory. [11:24:12] * Tempt just found his old Solx86 box under a pile of shit [11:24:15] <Tempt> It was still running. [11:26:02] *** bnitz has joined #opensolaris [11:26:03] <Tempt> what does one do with a sempron 2800 with bugger all ram and 4 x 250Gb IDE spindles [11:26:37] <palowoda> sempron? Why would a Solaris box be runnign on a sempron? [11:26:52] <Tempt> cheapest 64 bit CPU? [11:26:59] <palowoda> oh your cheap. [11:27:13] *** uebayasi has quit IRC [11:27:15] <Tempt> I don't think it ever spiked 10% load [11:27:28] <Tempt> It's running FCS [11:27:30] <Tempt> grub free zone [11:27:41] <palowoda> put a nvidia 9750 video card in there and run some multimedia. 
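Those ~895 mbit/sec figures are what a plain netperf TCP stream test reports; a sketch, assuming netperf/netserver are installed on both ends (the hostname is a placeholder):

    netserver                                  # on the Solaris box
    netperf -H solbox -t TCP_STREAM -l 30      # on the client: 30-second TCP throughput run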
[11:27:57] <Tempt> It is in one of those little Antec Aria cases [11:28:05] <palowoda> I'm running compiz with Solaris on mine. [11:28:27] <palowoda> Try it on the semptron. [11:28:34] <Tempt> No PCI express [11:28:39] <palowoda> Bingo. [11:29:03] <Tempt> Every slot has a dual e1000g card in it [11:29:08] <palowoda> A two dollar technology these days. [11:29:40] <palowoda> 35.00 per slot on the e1000g. [11:29:59] <Tempt> I thought they charged a lot more for the dual cards [11:30:00] <Tempt> "server" edition [11:30:21] <palowoda> Yeah those are about 180.00 now. [11:30:48] <blueandwhiteg3_> palowoda: I think you'd be horrified to hear how cheaply my AMD64 system went together for... [11:31:06] <palowoda> I'll bet I out did your. [11:31:08] <palowoda> you. [11:31:39] <palowoda> Retail price that is. [11:32:13] <blueandwhiteg3_> shoot [11:32:27] <Tempt> Aah, fukkit, scrap it for parts and toss what sucks. [11:33:55] <palowoda> 700.00 2.8Ghz AMD 64 box, 2G DDR2 800mhz, 1.5T storage Nvidia 7600GTS 51Kmeg video 1G nic. [11:34:10] <palowoda> dual core. [11:34:50] <Tempt> I could put the 4 spindles in firewire enclosures and hang them off my blade-1000 [11:35:36] *** yongsun has quit IRC [11:36:08] <blueandwhiteg3_> The core of my system - case, power supply, cpu, memory, etc... was under $200 [11:36:12] <coffman> Tempt: i would get an sas card and an external sata enclosure [11:36:35] <blueandwhiteg3_> gigabit, sata, video, etc. all onboard [11:36:49] <palowoda> yeah I know what cpu and motherboard your using. [11:36:55] *** coffman has quit IRC [11:37:13] <blueandwhiteg3_> Then I just tossed in some drives i'd already had on hand [11:37:15] <blueandwhiteg3_> 1 GB RAM [11:37:20] <Tempt> coffman: And put what in it? [11:37:37] <Tempt> a SATA enclosure to drive IDE disks [11:37:42] <Tempt> how wonderful would that be. [11:37:46] <Tempt> and I don't care if you're gone ;) [11:37:51] *** lloy0076 has joined #opensolaris [11:37:59] <blueandwhiteg3_> should do decently for my application... and with enough free pci/pci-e slots to add more drives [11:38:49] <lloy0076> In SXCE 67, in gnome terminal, hard up against the left of the screen, sometimes the characters momentarily blur up. [11:38:54] <blueandwhiteg3_> once everything is running smoothly, i'll add 4 x 500 GB SATA drives [11:39:00] <Tempt> nah [11:39:03] <Tempt> 750s are the win [11:39:13] <palowoda> so what was the total price blue? [11:39:15] <lloy0076> I know that doesn't make much sense but I'm not actually sure how to get a screenshot of it because any action that gets a screenshot seems to unblur it. [11:39:16] <blueandwhiteg3_> Tempt: Cost per GB, no way [11:39:29] *** vmlemon has joined #opensolaris [11:39:33] <Tempt> have to count controller ports in the cost as well [11:39:33] <blueandwhiteg3_> lloy0076: yep, it's a a big headache without any easy fix [11:39:38] <Tempt> + power + cables [11:40:00] <blueandwhiteg3_> Tempt: All I bought was the case, power supply, mobo, ram, cpu... i already had the drives [11:40:04] <lloy0076> blueandwhiteg3_: Are you being serious? [11:40:25] <blueandwhiteg3_> lloy0076: yes, i discussed this issue earlier this week and i think they even pointed out a pending bug? [11:40:27] <palowoda> number$ number$ [11:40:32] <lloy0076> blueandwhiteg3_: Ah, ok. [11:40:35] *** calumb has joined #opensolaris [11:40:49] <blueandwhiteg3_> lloy0076: I updated nvidia drivers, played with fonts, etc... no easy fix. Just use ssh! P [11:41:01] <lloy0076> ssh into my own box? 
[11:41:09] <blueandwhiteg3_> no, i'm talking about remote use [11:41:12] <lloy0076> ssh -C -X localhost ... seems a bit of overkill [11:41:16] <blueandwhiteg3_> haha [11:41:21] <blueandwhiteg3_> that might work, but i doubt it [11:41:28] <lloy0076> heh [11:41:48] <lloy0076> Fire up my windows box, plonk an X Server on it and then ssh -C -X the_solaris_box :( [11:41:49] <blueandwhiteg3_> Tempt: so i don't know the total cost if you include the drives.... or the drives i'm planning to add [11:42:09] <blueandwhiteg3_> but hey, cheap RAID-5 consumer NAS boxes cost more [11:42:09] *** Atomdrache has quit IRC [11:42:26] <blueandwhiteg3_> lloy0076: you could install linux! [11:42:32] <palowoda> What 750G SATA drives are going for 199.00 these days. [11:42:34] * lloy0076 hmm [11:42:44] <lloy0076> I wonder if Linux in my BrandZ zone displays the same anomalies [11:43:05] <Tempt> lloy0076: Install SSGD and run a full screen session? [11:43:48] <Tempt> Alright, little crapbox, you're history. [11:43:56] * Tempt looks for the electric drill [11:43:59] *** Atomdrache has joined #opensolaris [11:44:08] <blueandwhiteg3_> 750 GB = $200, 500 GB = $100 [11:44:12] <blueandwhiteg3_> you tell me which is a better deal [11:44:32] <asyd> hmm sxde is supposed to be installable in qemu, right? [11:45:03] <palowoda> Next week the 750G drives will be 149.00. [11:45:27] <blueandwhiteg3_> and 500 GB drives will be $80 [11:45:39] <blueandwhiteg3_> the price point will eventually move to 750 GB, but not for a few months at least [11:45:44] <palowoda> 1T's will be 200.00 [11:45:52] <blueandwhiteg3_> that's a ways off [11:46:05] <palowoda> 6 months ok [11:46:12] * lloy0076 sigh [11:46:21] <lloy0076> I can't find SSGD on sun.com... [11:46:26] <lloy0076> Despite searching for it. [11:46:39] <blueandwhiteg3_> samsung could launch 1.6 TB drives today [11:47:26] <blueandwhiteg3_> they just choose not to.... [11:48:18] <palowoda> Hell so for for home usage I'm having a problem filling up 1.5T. [11:48:36] <palowoda> Unless I want to archive movies. [11:48:41] <Cyrille> lloy0076, http://www.sun.com/software/products/sgd/index.jsp [11:49:06] <lloy0076> Cyrille: Thanks - I could find references *to* it but not the actual product :( [11:49:11] <blueandwhiteg3_> or snapshot your pc backups through all eternity! [11:50:15] <palowoda> Different boxes for the backups. [11:52:20] <blueandwhiteg3_> How do you control power management on solaris boxes? i.e. spin down drives when not in use? [11:53:27] <lloy0076> SSGD or SGD seems to be quite good but rather much an overkill to fix my font problem :P [11:53:38] <palowoda> I don't. It's not all that bad with about a half a dozen dual core amd machines. [11:55:17] <blueandwhiteg3_> It's a waste to leave the drives spun up for me, when they are not in use... [11:56:13] <palowoda> ahh if my power bill gets worse I'll worry about it. Now when they where Intel cpu's I was chewing about an extra 100.00 a month. [11:56:38] <Tempt> Is it just me or is all PC hardware a giant hackjob [11:56:40] <blueandwhiteg3_> those netburst things were killer [11:56:41] <Tempt> lacking any class at all [11:56:51] <blueandwhiteg3_> it's all a hackjob [11:56:56] <palowoda> No it's just you Tempt. [11:57:01] <renihs> netburst was ugly [11:57:25] <Tempt> Try to get anywhere in that Aria case all I've managed to do is slice my fingers on the shit-quality metalwork. [11:57:30] <richlowe> Tempt: I tend to blame it becoming widespread too soon. 
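Nobody really answered the spin-down question; for the record, a hedged sketch of where that knob lives on Solaris, assuming the disk and its driver support power management at all. The device path is a placeholder (take the real one from ls -l /dev/dsk):

    # /etc/power.conf (excerpt): spin the disk down after 30 minutes idle
    device-thresholds    /pci@0,0/pci-ide@4/ide@0/cmdk@0,0    30m
    # then make the power daemon re-read the file
    pmconfig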
[11:57:40] <richlowe> (for such things in general, actually) [11:57:45] <richlowe> by the time "Better" was more defined, it was too late to change anything. [11:57:49] <Tempt> The whole architecture has been a hack since day one [11:58:02] <palowoda> Ahh the good old days. [11:58:05] <Tempt> Back before the storage industry had interface-based pricing gouges, they wouldn't just use SCSI [11:58:27] <palowoda> I beat people over the head for not using SCSI. [11:58:58] <palowoda> No company should sell arrays with SATA. [11:58:59] <blueandwhiteg3_> by the way... intel planned to scale netburst to 10 ghz! [11:59:02] <Tempt> If IBM had just put SCSI in their machines from the get-go, nobody would have had to suffer ST506, IDE, SATA, anything like that,. [11:59:43] <blueandwhiteg3_> they just forgot that they would have to partner with kennmore, because the only way they could dissipate that much heat would be into a clothes dryer! [11:59:51] <blueandwhiteg3_> goodnight all [12:00:02] <blueandwhiteg3_> thanks for letting me pick at your brains [12:00:05] *** blueandwhiteg3_ has left #opensolaris [12:00:09] <palowoda> night. [12:00:57] <Tempt> I could keep ranting about PC hardware, but I should probably just shut up. [12:01:18] <lloy0076> lol [12:01:23] <palowoda> Yeah ranting about PC hardware isn't like it use to be. [12:01:24] <lloy0076> PC Hardwars is "The Bomb" :P [12:01:38] <Tempt> The Bomb indeed. [12:01:41] <Tempt> It keeps going to pieces. [12:02:03] <lloy0076> I've decided that it's about as fragile as Eclipse on Solaris. [12:02:20] <palowoda> I wish more hardware would break down than my hardware stocks would go up. :-) [12:02:26] <lloy0076> My Eclipse on the weekend went pie shaped for no sensible reason, but reinstalling all the plugins I had going made it work again. [12:03:17] <Tempt> *crunch* [12:03:26] <Tempt> PC dilemma over, motherboard now smashed beyond repair. [12:03:37] <Tempt> I'll stick to SPARC from now on. [12:04:28] <palowoda> And I voted for Bush too. [12:05:39] <lloy0076> heh [12:06:00] *** cstumpf has joined #opensolaris [12:09:41] * renihs has a v40z mainboard hanging on its wall [12:11:10] <quasi> renihs: dud or working? [12:13:15] <renihs> hehe dead one :p [12:13:26] <renihs> just a huge nice looking mainboard [12:13:33] <renihs> thought its better than an image :p [12:13:34] <palowoda> Software lives hardware dies. 
[12:13:39] *** lloy0076 has quit IRC [12:14:57] *** aruiz has joined #opensolaris [12:22:46] *** deather has joined #opensolaris [12:24:29] *** bzcrib has joined #opensolaris [12:25:36] *** Dink has quit IRC [12:26:03] *** Dink has joined #opensolaris [12:29:21] *** nostoi has quit IRC [12:31:25] *** jlc has quit IRC [12:34:20] *** calumb is now known as calAFK [12:40:58] *** Vanuatoo__ has joined #opensolaris [12:44:57] *** yongsun has joined #opensolaris [12:50:50] *** salamanders has joined #opensolaris [12:51:42] *** deather has quit IRC [12:51:42] *** calAFK has quit IRC [12:51:42] *** Dar has quit IRC [12:51:42] *** timsf has quit IRC [12:51:42] *** Drone has quit IRC [12:51:43] *** halton has quit IRC [12:51:43] *** richlowe has quit IRC [12:51:43] *** Yamazaki-kun has quit IRC [12:51:45] *** paul has quit IRC [12:51:45] *** NeZetiC has quit IRC [12:51:47] *** ofu has quit IRC [12:51:47] *** sporq has quit IRC [12:51:47] *** Stric has quit IRC [12:51:47] *** adamg has quit IRC [12:51:48] *** prg3 has quit IRC [12:52:28] *** deather has joined #opensolaris [12:52:28] *** calAFK has joined #opensolaris [12:52:28] *** Dar has joined #opensolaris [12:52:28] *** timsf has joined #opensolaris [12:52:28] *** Drone has joined #opensolaris [12:52:28] *** halton has joined #opensolaris [12:52:29] *** richlowe has joined #opensolaris [12:52:29] *** Yamazaki-kun has joined #opensolaris [12:52:29] *** NeZetiC has joined #opensolaris [12:52:29] *** adamg has joined #opensolaris [12:52:29] *** paul has joined #opensolaris [12:52:29] *** prg3 has joined #opensolaris [12:52:29] *** Stric has joined #opensolaris [12:52:29] *** sporq has joined #opensolaris [12:52:29] *** ofu has joined #opensolaris [12:53:11] *** MikeTLiv1 has joined #opensolaris [12:54:33] *** coffman has joined #opensolaris [12:55:03] <Doc> http://www.dilbert.com/comics/dilbert/archive/images/dilbert21047470070709.gif [12:55:15] *** calAFK has quit IRC [12:56:12] <quasi> ;) [12:56:57] *** simford has quit IRC [12:58:46] <phips> lol [13:00:05] *** estibi has quit IRC [13:00:09] *** deather_ has joined #opensolaris [13:02:33] *** estibi has joined #opensolaris [13:03:00] *** Vanuatoo_ has quit IRC [13:06:13] <Doc> hmm.. there used to be a good website that had details on internet acecss in hotels (ie, if/how much/etc) - anyone know it? [13:06:48] *** MikeTLive has quit IRC [13:10:23] *** bzcrib has quit IRC [13:12:46] *** MattMan is now known as MattAFC [13:13:33] *** rasputnik has joined #opensolaris [13:13:54] *** coffman has quit IRC [13:14:00] *** coffman has joined #opensolaris [13:16:40] *** yongsun has left #opensolaris [13:17:16] *** deather has quit IRC [13:18:16] *** deather_ is now known as deather [13:18:19] *** boro has joined #opensolaris [13:21:12] *** coffman has quit IRC [13:35:41] *** m0le has quit IRC [13:37:12] *** calumb has joined #opensolaris [13:44:23] *** obsethryl has joined #opensolaris [13:44:32] <JWheeler> WHen I get an error like this: In file included from vp5.c:33: [13:44:32] <JWheeler> vp56.h:27:20: stdint.h: No such file or directory <-- It's trying to tell me that gcc can't find stdint.h, correct? [13:47:23] <Berny__> yep [13:47:48] *** jambock has joined #opensolaris [13:47:56] <JWheeler> the gcc command included a -I/usr/include .... I'm not quite understanding why it's not working [13:48:57] <Tempt> pinging anyone using x86 with disksuite for mirroring / [13:51:56] <PerterB> Tempt: ? 
[13:52:39] <Tempt> Alright, last time I tried to mirror a PC with x86 on I had some hassles [13:53:10] <Tempt> What am I doing wrong? I went for the usual metainit ; metainit ; metaroot ; reboot and then grub hated me. [13:53:21] <Tempt> Is there a magic step that isn't there on SPARC? [13:53:26] *** coffman has joined #opensolaris [13:54:09] <PerterB> nope, in my experience the only difference is when you're done you need installgrub rather than installboot to make the second drive bootable (but that shouldn't affect grub booting from your first drive) [13:54:22] <Tempt> Hmm. [13:54:23] <Tempt> Okay then. [13:54:39] <Tempt> I must admit last time I tried this was with the first Solaris release to include grub. [14:04:08] *** cmihai has quit IRC [14:04:26] *** cmihai has joined #OpenSolaris [14:05:51] *** master_of_master has joined #opensolaris [14:11:13] *** cmihai` has joined #OpenSolaris [14:14:07] *** cmihai` has quit IRC [14:15:43] *** MikeTLiv1 has quit IRC [14:17:44] *** master_o1_master has quit IRC [14:19:59] *** Dink has quit IRC [14:20:52] *** Dink has joined #opensolaris [14:29:26] <Tempt> Here goes for the reboot [14:30:54] <Tempt> PerterB: Thanks for your comments before. Indeed, it worked properly this time; must've been been an early-on bug [14:35:04] *** halton has left #opensolaris [14:37:53] *** cmihai has quit IRC [14:47:36] *** iMax has joined #opensolaris [14:48:45] *** kloczek has quit IRC [14:48:59] *** kloczek has joined #opensolaris [14:49:37] *** calumb is now known as calLNCH [14:51:02] <SYS64738> isit sun dowloands broken ? [14:51:36] *** MattAFC is now known as MattMan [14:53:43] *** SirFunk has quit IRC [14:56:48] *** trs81_ has joined #opensolaris [14:56:49] *** trs81 has quit IRC [14:57:13] *** trs81_ is now known as trs81 [14:57:15] *** cmihai has joined #OpenSolaris [14:57:59] *** trs81 has quit IRC [15:04:28] *** vortex` has joined #opensolaris [15:05:54] <vortex`> i've just built a new install of nevada with a minimal set of packages.. (core only) I've installed sshd from solaris packages on the opensolaris DVD and have it running - starting with SMF OK. However when i SSH to the machine all i get back is 'no kex alg' [15:07:01] <vortex`> has anyone got any pointers on what to do to fix that up and get ssh going? [15:07:47] <Shinden> jo any 1 installed opensolaris on HP ML350 G2 > [15:07:48] <Shinden> ? [15:08:28] <quasi> Shinden: check the HCL [15:09:15] <Shinden> hcl ? [15:09:47] <vortex`> hardware compatibility list [15:10:17] *** mega has quit IRC [15:11:26] *** mega has joined #opensolaris [15:11:40] <quasi> http://www.sun.com/bigadmin/hcl/ [15:16:19] <quasi> http://www.sun.com/bigadmin/hcl/hcts/install_check_sx.html for the lazy [15:18:59] *** peteh has joined #opensolaris [15:21:10] *** trs81 has joined #opensolaris [15:21:32] <rasputnik> vortex`: are you sure 'svcs -xv' doesn't show anything on the box? [15:23:03] <quasi> vortex`: you could check the package for dependencies - maybe you're missing something essential [15:23:27] <vortex`> rasputnik: i honestly didnt look, but worked out i forgot to create the server keys :) linux has spoiled me :p [15:23:45] <vortex`> creating the keys and restarting the daemon did the trick. [15:24:13] <rasputnik> vortex`: it should do it for you on first boot, but it doesn't when you strip down the install for some reason. happens to me every time. 
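For anyone following along, the metainit/metaroot/installgrub sequence Tempt and PerterB are talking through, as a sketch; disks, slices and metadevice names are placeholders, and the second disk must already be partitioned to match:

    metadb -a -f -c 3 c0t0d0s7 c0t1d0s7     # state database replicas on both disks
    metainit -f d11 1 1 c0t0d0s0            # submirror over the live root slice
    metainit d12 1 1 c0t1d0s0               # submirror on the second disk
    metainit d10 -m d11                     # one-way mirror
    metaroot d10                            # rewrites /etc/vfstab and /etc/system
    # reboot, then:
    metattach d10 d12                       # attach and sync the second half
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0    # x86: installgrub, not installboot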
[15:24:24] <quasi> vortex`: solaris does the key generation on the first boot in a normal install [15:24:47] <vortex`> yes i did a stripped back core only install, hence my problem i guess :) [15:24:52] *** SirFunk has joined #opensolaris [15:24:53] <vortex`> thanks anyway, im off to hit the sack :) [15:27:51] *** peteh has quit IRC [15:28:19] *** peteh has joined #opensolaris [15:36:10] *** obsethryl has quit IRC [15:45:54] *** timsf has quit IRC [15:46:23] *** timeless has joined #opensolaris [15:46:23] *** het has joined #opensolaris [15:46:24] *** spiff has joined #opensolaris [15:46:26] *** cormac has joined #opensolaris [15:46:28] *** bda has joined #opensolaris [15:46:35] *** Marv|LG has joined #opensolaris [15:46:36] *** xuewei has joined #opensolaris [15:47:24] *** polk__ has joined #opensolaris [15:49:11] <cmihai> Totally offtopic, but I'm getting a bit desperate here guys :-]. If anyone has AIX around (non priviledged user is fine) could you tell me what fileset has rpcgen? "lslpp -w /usr/bin/rpcgen" should give you the output. [15:50:55] *** calLNCH is now known as calumb [15:51:19] *** Abe_Froman has joined #opensolaris [15:51:23] *** jpipkin has joined #opensolaris [15:52:26] *** boro has quit IRC [15:52:50] *** sparkleytone has joined #opensolaris [15:54:10] <cmihai> Nevermind, got it, thanks anyway. [16:02:29] *** AtomicPunk has joined #opensolaris [16:03:30] *** RaD|Tz has joined #opensolaris [16:10:42] <Berny__> what the heck [16:11:46] <Berny__> what's "libc internal error: _rmutex_unlock: rmutex not held." supposed to tell me? [16:12:01] <Berny__> happens on a up-to-date patched sol10 box [16:12:19] <Berny__> with an app which ran fine the past few years [16:12:57] <quasi> trying to free a lock you don't have [16:13:14] <Berny__> yeah but why? [16:13:19] <Berny__> didn't see this before [16:13:29] <Berny__> before last patch orgy i mean [16:14:10] <quasi> maybe one of the patches made the error show instead of being silently ignored? [16:14:24] <Berny__> but that app never aborted before [16:14:26] <Berny__> does now [16:14:32] <quasi> ah, aborts [16:15:38] <Berny__> http://rafb.net/c5K57J13.html [16:15:45] <Berny__> is the last bit of truss output [16:17:31] <quasi> Berny__: that rafb paste 404s [16:17:53] *** Mazon is now known as mazon [16:18:31] <Berny__> http://rafb.net/p/c5K57J13.html [16:18:33] <Berny__> yuck [16:20:47] <quasi> the breaks before the unlock might be something as well [16:23:22] <Berny__> hmm [16:23:55] <Berny__> bloddy users [16:24:01] <Berny__> bloody even [16:24:05] <quasi> try running it under apptrace instead of truss [16:24:38] <Berny__> why do they always wait like 4 weeks until they use essential software again? [16:25:00] <Berny__> AH! [16:25:08] <Berny__> apptrace IS helpful! [16:26:21] <Berny__> http://rafb.net/p/xrDoQ559.html [16:26:35] <Berny__> seems like it runs into this problem in libxview [16:28:16] *** linux_user400354 has joined #opensolaris [16:30:14] *** xushi_ has quit IRC [16:32:00] *** timsf has joined #opensolaris [16:35:35] *** dlynes_laptop has quit IRC [16:36:00] <quasi> you could try adding -v '*mutex*' to apptrace for details [16:38:06] <Berny__> nothing :-\ [16:38:07] *** Netwolf_ has joined #opensolaris [16:41:57] <Berny__> hmm bugger... i shall log a call i guess [16:44:50] *** gobbler has joined #opensolaris [16:49:57] <NeZetiC> hi [16:50:50] <NeZetiC> someone here know how solaris select loaded or unloaded kernel modules ? 
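The manual fix vortex` describes for the stripped-down install, as a sketch, assuming the stock key paths from /etc/ssh/sshd_config:

    ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key -N ""
    ssh-keygen -t dsa -f /etc/ssh/ssh_host_dsa_key -N ""
    svcadm restart svc:/network/ssh:default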
[16:50:53] *** salamanders has quit IRC [16:54:17] *** lisppaste3 has quit IRC [16:54:41] *** Netwolf has quit IRC [16:57:08] <NeZetiC> hum, I think choose is better than select. In fact, modinfo -c return state of modules, and I just want to know how Solaris set the state of a module (how it take the decision to load it or not) [16:59:26] *** lisppaste3 has joined #opensolaris [17:04:37] *** derchris_ has joined #opensolaris [17:07:41] *** rasputnik has quit IRC [17:09:11] *** stevel has joined #opensolaris [17:09:11] *** ChanServ sets mode: +o stevel [17:11:00] *** alanc_away is now known as alanc [17:11:56] *** calumb is now known as calAFK [17:12:05] *** MikeTLive has joined #opensolaris [17:12:10] <alanc> 2429 new messages in INBOX - so much for everyone taking off last week too [17:13:02] <oninoshiko> GOOD MORNING (or whatever it is in your respective time zone? [17:16:36] *** Snake007uk has quit IRC [17:23:08] <sickness> I'm just back (from holidays) [17:23:12] *** ruxpin has quit IRC [17:26:37] <trochej> sickness: Your nickame is oddly appropriate :) [17:26:48] <gdamore> hi * [17:27:47] <gdamore> so, my GLDv3 with hardware checksum support changes to hme seems to be yielding a consistent 2% performance boost in throughput. the testing is still under way, but I'm ecstatic so far. [17:28:05] <gdamore> this 2% was enough to *pass* one of the tests that was previously timing out on this rinky dink little 360MHz CPU. [17:28:18] <Auralis> nice [17:33:49] *** sparc-kly__ has joined #opensolaris [17:42:45] *** bunker has quit IRC [17:43:43] *** sparc-kly has joined #opensolaris [17:43:43] *** ChanServ sets mode: +o sparc-kly [17:49:33] *** estibi_ has joined #opensolaris [17:52:54] <aruiz> could anybody point me to any documentation that explains how to setup a zone from a solaris express dvd? [17:53:16] *** sparc-kly_ has quit IRC [17:53:55] <phips> aruiz: lots of good docs here http://www.sun.com/bigadmin/content/zones [17:54:04] <aruiz> phips, thanks :) [17:54:13] *** seb9 has joined #opensolaris [17:59:34] *** Pietro_S has quit IRC [18:00:11] *** seb9 has quit IRC [18:00:20] *** postwait has joined #opensolaris [18:00:36] *** sparc-kly__ has quit IRC [18:01:11] *** calAFK is now known as calumb [18:02:08] *** seb7 has joined #opensolaris [18:02:44] *** estibi has quit IRC [18:09:54] *** mazon is now known as Mazon [18:10:07] * oninoshiko fears the 'M' [18:10:24] *** gobbler has quit IRC [18:13:44] *** estibi_ is now known as estibi [18:14:17] *** damienc has quit IRC [18:15:26] *** Pietro_S has joined #opensolaris [18:15:29] *** iMax has quit IRC [18:16:06] *** jcea has joined #opensolaris [18:17:42] *** cydork has quit IRC [18:21:04] *** RaD|Tz has quit IRC [18:21:33] *** RaD|Tz has joined #opensolaris [18:27:16] *** pfa3rh has joined #opensolaris [18:29:26] *** duri has quit IRC [18:29:38] *** MattMan has quit IRC [18:29:41] *** duri has joined #opensolaris [18:30:11] *** Cyrille has quit IRC [18:31:31] *** sudeep has joined #opensolaris [18:33:01] <sudeep> I have 30GB hard disk with windows XP installed on my system. Primary partition is 7GB and remaining is extended partition --> divided into two 7GB and 15GB logical partition.. I want to install sun solaris 10 on 7GB extended (logical) paritition.. but during install process. it only shows primary and extended partition , not logical partition where i want to install solaris....how to get... [18:33:02] <sudeep> ...around this problem ?? please help..... 
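Alongside the bigadmin docs phips links to, a bare-bones sparse-zone sketch; the zone name, path, address and interface are all placeholders:

    zonecfg -z testzone
        create
        set zonepath=/export/zones/testzone
        add net
            set address=192.168.1.50
            set physical=nge0
        end
        verify
        commit
        exit
    zoneadm -z testzone install
    zoneadm -z testzone boot
    zlogin -C testzone       # console login, to answer the sysid questions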
[18:34:10] *** slowhog has joined #opensolaris [18:36:11] *** dduvall has joined #opensolaris [18:36:42] <gdamore> e^ipi: are you around? [18:37:28] <Pietro_S> sudeep: on what do you need Solaris 10 to install? [18:37:50] <sudeep> Pietro_S.. yes .. solaris 10 [18:38:09] <sudeep> Pietro_S, logical partition.... [18:38:47] <Pietro_S> I asked why you need solaris 10 and not opensolaris, when you ae asking on #opensolaris channel ;-) [18:40:22] *** calumb has quit IRC [18:40:37] <Pietro_S> sudeep: does have that partion any special tag? [18:40:42] <sudeep> Pietro_S: i don't have opensolaris right now... i have requested it.. i hope i will be gettin it soon.. but still.. i have solaris 10 DVD and very eager to use it onto my system... also people at #solaris channel seen to unresponsive to my query.. so i am here.. [18:42:08] <sudeep> Pietro: its logical partition (NTFS fs) [18:42:29] <sudeep> Pietro_S: i don't know.. more than that... [18:45:05] *** dunc has quit IRC [18:45:06] <Pietro_S> isn't it done by that dynamical win partion? Boot to XP and take look on Disk manager, there should be more nfo about that disk [18:45:27] <timsf> sudeep, afaik it can't be done - OpenSolaris can't install to logical partions (I'm digging around for a citation ) [18:45:51] <timsf> http://mail.opensolaris.org/pipermail/opensolaris-help/2006-October/002546.html [18:46:06] <timsf> (looking for additional statements... ) [18:49:03] <sudeep> timsf: so if i have to install [open]solaris on to my system.. what should i do... [18:49:46] *** RElling has joined #opensolaris [18:50:28] *** estibi has quit IRC [18:50:57] <timsf> I think you need a primary partion for Solaris. [18:50:59] * oninoshiko thinks you sould make an illogical partition [18:52:01] <timsf> sudeep, this is a little old, but may help http://blogs.sun.com/bobn/entry/a_grub_configuration_for_multiple [18:52:02] <Pietro_S> I think that I installed on extended partion, but I'm not sure about it ... [18:52:54] <sudeep> Pietro_S: i would be thankfull..if u could tell me how... [18:54:38] *** bnitz has left #opensolaris [18:54:39] <Pietro_S> in windows, delete that partion and convert it to free space, then installer should find free space and format it itself [18:55:09] <timsf> perhaps more also at http://www.opensolaris.org/jive/thread.jspa?messageID=123199 [18:55:22] <timsf> (I just Googled for these, btw... ) [18:57:03] <sudeep> Pietro_S: whole extended partition or.. just the one of the logical partition where i want to install it... [18:58:50] *** bondolo has joined #opensolaris [19:00:03] <Pietro_S> depend if you you have any data on whole extended partion ..., but I would first try with logical only [19:00:37] <CIA-26> jesusm: 4960249 sun4u startup code abuses cmn_err() in places [19:01:10] <sudeep> Pietro_S: thanks.. 
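On the partitioning thread: a small sketch for checking what the installer will actually see, assuming the first IDE disk shows up as c0d0 (run from a shell on the install media or any existing Solaris):

    fdisk -W - /dev/rdsk/c0d0p0     # dump the fdisk table; only primary/extended entries appear
    # The installer wants either a primary partition it can tag SOLARIS2 or free
    # space to create one; logical drives inside the extended partition are not offered.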
[19:06:31] *** trs81 has quit IRC [19:06:38] *** trs81 has joined #opensolaris [19:08:20] *** Dar is now known as Dar_HOME [19:13:01] *** sudeep has quit IRC [19:17:06] *** dunc has joined #opensolaris [19:21:43] *** kilohertz has joined #opensolaris [19:24:19] <Reidms-420R> Nautilus just crashed :p [19:24:22] <Reidms-420R> lol [19:25:34] *** bengtf__ has joined #opensolaris [19:26:05] <Reidms-420R> I am now playing "Barbers Adagio for String" [19:26:07] <Reidms-420R> s [19:28:50] *** Murmuria has joined #opensolaris [19:32:41] *** bengtf_ has quit IRC [19:39:14] *** Fullmoon has quit IRC [19:43:20] *** kilohertz has quit IRC [19:49:32] *** twincest is now known as mary-kate [19:50:04] *** mary-kate is now known as twincest [19:54:00] *** bubbva has joined #opensolaris [19:56:03] *** Vanuatoo__ has quit IRC [19:56:12] *** Vanuatoo has joined #opensolaris [19:59:28] *** sartek has quit IRC [20:00:02] *** linux_user400354 has quit IRC [20:00:31] <CIA-26> marks: 6575997 Memory corruption while running ztest [20:00:32] <CIA-26> rf157361: 6495050 Add dcmds to view and search machine description, 6546910 Add dcmds to examine last epacket of the error queues [20:02:18] *** kloczek has quit IRC [20:04:14] *** kloczek has joined #opensolaris [20:05:11] *** aruiz is now known as aruiz_office [20:08:49] *** obsethryl has joined #opensolaris [20:16:47] *** estibi has joined #opensolaris [20:22:06] <Gropi> Reidms-420R: I like that song :-) [20:25:32] *** sartek has joined #opensolaris [20:27:42] *** KermitTheFragger has joined #opensolaris [20:30:43] *** RElling has left #opensolaris [20:31:36] *** PC__ has joined #opensolaris [20:31:39] *** RaD|Tz has quit IRC [20:31:58] *** PC__ is now known as RaD|Tz [20:37:14] *** bondolo has quit IRC [20:41:55] *** RaD|Tz has quit IRC [20:50:06] *** seb7 has quit IRC [20:56:33] *** twincest is now known as _mary_kate_ [21:00:30] <CIA-26> raf: 6577503 mutex_trylock(3C) tries too hard [21:02:19] *** beholder has joined #opensolaris [21:06:37] *** danny_j has joined #opensolaris [21:11:10] <beholder> I've only got 256 meg of ram in the server I bought. SXCE seems to be a bit heavy on the install, do I have any other options? [21:12:01] <quasi> beholder: should be possible with the commandline installer [21:12:24] <beholder> quasi: It will have to be the CLI installer, it's a Netra T1 without a framebuffer. [21:12:37] <gdamore> 256 works fine in a Netra T1. [21:12:38] <beholder> quasi: So the CLI installer doesn't have insane memory requirements? [21:12:43] <gdamore> not really. [21:13:00] <gdamore> I've actually installed on systems with as little as 128MB. [21:13:01] <quasi> beholder: 256 should be enough for the cli [21:13:03] <e^ipi> hey gdamore... [21:13:10] <beholder> Sweet thanks guys [21:13:15] <gdamore> hey e^ipi. did you see the mail from stevel? [21:13:16] <e^ipi> you poked me earlier ( well, after I had gone to bed anyways ) [21:13:25] <e^ipi> yeah, so that's cool [21:13:41] <gdamore> well, that was what it was about. i was looking for your os.o login, but steve found it. [21:13:54] <e^ipi> yeah, it's my old IRC nick [21:14:28] <gdamore> ok. well, i don't know what that is either. but its all good now, since steve was the one who needed it. [21:14:48] <gdamore> in other news, I've posted to networking-discuss and driver-discuss looking for hme code reviewers. [21:14:50] <gdamore> any out there? [21:15:18] <beholder> Oh wait I downloaded SXDE... 
hehe have to go find the other version I guess [21:15:51] *** MooingLemur has joined #opensolaris [21:16:07] <e^ipi> did Sun kill hme too? [21:16:19] <e^ipi> I was under the impression they were still supported in some machines [21:16:20] <MooingLemur> is there a way to increase the priority of a zfs scrub? [21:16:28] <MooingLemur> (or resilver) [21:16:56] <gdamore> sun didn't kill hme nor qfe. but they don't *sell* them any more. [21:17:19] <gdamore> funny thing is I added hardware IP checksum support. Got a 2% perf. improvement on those old nics. :-) [21:21:48] <MooingLemur> weird.. based on reduced latency on retransmits? :P [21:22:18] <e^ipi> hmm, the contributor status & so forth haven't gone through yet [21:22:20] *** theRealballchalk has left #opensolaris [21:22:30] *** KermitTheFragger has quit IRC [21:22:37] <e^ipi> but if someone's on it, i'm not gonna worry about it for now [21:22:42] <e^ipi> it's noon on monday [21:22:51] *** bondolo has joined #opensolaris [21:25:06] <CSFrost> Yep.. what a wonderful day! [21:25:19] <e^ipi> erm... okay [21:25:55] <CSFrost> Having everyday off.. they always seem so nice. [21:26:37] <e^ipi> except rent day [21:29:56] <beholder> I'm on the sun download page and it's telling me the most recent version of SXCE is 64a (sparc download). The topic says 67. Am I downloading the wrong one? :) [21:30:45] <alanc> 64a sounds like SXDE, not SXCE - did you go to the wrong download page? [21:31:12] <e^ipi> http://opensolaris.org/sxce_dvd [21:31:15] <e^ipi> just use that link [21:31:51] <beholder> Can't use the DVD version, installing to a Netra T1 [21:31:55] <beholder> Need the CD [21:32:08] <e^ipi> opensolaris.org/sxce_cd IIRC [21:32:22] <alanc> or just follow the download links on opensolaris.org [21:32:32] <e^ipi> besides, you should just use the DVD iso and jumpstart the netra [21:32:35] <e^ipi> it works a lot better [21:32:45] <beholder> Ahh much better :) [21:32:53] <e^ipi> lofiadm(1M) [21:33:06] <beholder> I don't have another solaris install around to do a jumpstart [21:33:55] *** mega has quit IRC [21:34:23] <e^ipi> well, share the DVD via nfs & boot off CD1 anyways [21:34:49] <beholder> I'll give that a try. [21:44:15] *** johnniez has left #opensolaris [21:48:04] *** cypromis has joined #opensolaris [21:48:13] *** gaz has joined #opensolaris [21:54:26] <leal> i'm having real problems with sol 10 u3... [21:55:09] <leal> i have two servers (poweredge) to make a cluster, and the installation process was "bad" in the both. [21:55:28] *** jumpi has joined #opensolaris [21:55:49] <leal> now, i have made a full+OEM installation on the other server, and some files are missing... and the installation was without errors. [21:56:46] *** nachox has joined #opensolaris [22:08:09] * gdamore is still desperately seeking code reviewers for hme conversion. [22:08:20] * gdamore would like to putback tonight, but it probably won't happen. [22:10:02] *** linux_user400354 has joined #opensolaris [22:10:05] *** danny_j has quit IRC [22:10:52] *** linux_user400354 has quit IRC [22:11:25] *** linux_user400354 has joined #opensolaris [22:17:55] *** jumpi has quit IRC [22:18:36] *** nachox has quit IRC [22:19:02] *** henriknj has quit IRC [22:23:55] *** beholder has quit IRC [22:23:57] <sickness> I've installed the acpi and powernow drivers with frkit, now how could I see if they work? [22:24:07] <sickness> isn't there a way to read temperatures or processor speed? 
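The route e^ipi suggests, sketched out, assuming a second Solaris machine is available to host the image; the iso name and paths are placeholders:

    # on the helper box
    lofiadm -a /export/iso/sol-nv-b67-sparc-dvd.iso     # prints a device such as /dev/lofi/1
    mount -F hsfs -o ro /dev/lofi/1 /mnt/sxce
    share -F nfs -o ro,anon=0 /mnt/sxce
    # on the Netra: boot from CD 1, then point the installer at helper:/mnt/sxce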
[22:29:04] *** linux_user400354 has quit IRC [22:29:43] <cmihai> sickness: tried "prtconf -vvv && prtdiag -vvv"? [22:31:44] *** phalenor_ has joined #opensolaris [22:31:46] *** phalenor has quit IRC [22:31:57] <seanmcg> sickness: powernowadm should show speeds and what its currently. [22:32:06] *** pjlv has joined #opensolaris [22:32:41] *** alobbs has joined #opensolaris [22:32:42] <cmihai> so should psrinfo -v, but may not take account for speedstep :-) [22:36:26] <sickness> seanmcg: tnx! [22:36:31] *** laca has quit IRC [22:36:37] <sickness> cmihai: psrinfo -v shows same info I think... [22:36:47] <sickness> prtdiag too [22:38:22] *** nostoi has joined #opensolaris [22:41:33] *** CSFrost has quit IRC [22:44:12] *** CSFrost has joined #opensolaris [22:44:50] *** cmihai has quit IRC [22:45:12] *** phalenor_ is now known as phalenor [22:45:13] *** cmihai has joined #OpenSolaris [22:46:07] *** jambock has quit IRC [22:46:24] *** goo-man has joined #opensolaris [22:47:52] *** theRealballchalk has joined #opensolaris [22:52:58] *** SirFunk has quit IRC [22:55:24] *** cypromis has quit IRC [22:55:41] *** cypromis has joined #opensolaris [23:00:56] <sickness> http://rafb.net/p/XjeMEq23.nln.html <- interesting... [23:02:02] <CIA-26> marks: 6578215 zfs_mount() needs to handle GETATTR failures better. [23:02:25] <Reidms-420R> Which version Gropi? Classical or Trance :P [23:07:41] <Gropi> Reidms-420R: trance :-) [23:07:57] <Reidms-420R> I like both [23:08:21] <Reidms-420R> Perfect song to put on a website that was defaced lol(the trance one) [23:08:23] <Reidms-420R> So sad [23:10:47] <Reidms-420R> (I am not a cracker- just saying) [23:14:24] <e^ipi> you cracker-ass white boy [23:14:36] <Gropi> sickness: was that just a XSS attack or even a CSRF attack? [23:15:05] <coffman> anyone here with a wacom tablet? [23:16:09] <sickness> Gropi: lol [23:16:17] *** SirFunk has joined #opensolaris [23:16:34] <Pietro_S> coffman: yes, there are some binary drivers if you have new device [23:17:57] <Gropi> sickness: hmm, somehow I confused things. [23:18:00] <coffman> Pietro_S: i got a wacom intuos3 usb on x86 [23:18:06] <sickness> Gropi: about what? :) [23:18:09] <Pietro_S> coffman: if you have usb device you should be fine [23:19:33] <coffman> Pietro_S: you mean it will run out of the box with xorg? [23:21:14] <Pietro_S> not as you would like, you need to install wacom drivers for solaris if you want to work it well [23:22:22] <Pietro_S> I don't have intuos3, but if you base xorg with older ones misbehave buttons ... [23:22:54] <Pietro_S> coffman: http://www.sun.com/io_technologies/vendor/wacom_technology_corporation.html [23:23:42] <Pietro_S> and http://www.wacom.com/productsupport/sgi_sun.cfm [23:24:57] <coffman> Pietro_S: this are also x86 drivers? [23:26:11] *** REllin1 has joined #opensolaris [23:27:32] *** REllin1 has quit IRC [23:27:39] <Pietro_S> not sure, but from first link it's "verified" on Solaris 10 X86/X64 ... [23:27:59] *** RElling has joined #opensolaris [23:30:45] <coffman> Pietro_S: im trying, thx [23:31:14] *** Gropi has quit IRC [23:31:22] *** leal has quit IRC [23:32:36] *** m0le has joined #opensolaris [23:33:33] <Pietro_S> if it will work give me some notice, because my old penpartner (serial pen) died and I have in plan to buy new one ... [23:35:51] *** estibi has quit IRC [23:42:48] *** slowhog has quit IRC [23:43:14] *** slowhog has joined #opensolaris [23:44:37] *** goo-man has left #opensolaris [23:55:44] *** jumpi has joined #opensolaris [23:56:28] *** postwait has quit IRC
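Closing the loop on the powernow question above, the suggested checks as a short sketch; powernowadm comes from the frkit packages, not the base install:

    psrinfo -v              # per-CPU type and clock as the OS reports it
    prtdiag -v | head -20   # platform summary
    powernowadm             # frkit tool: current/available CPU speeds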