[00:00:34] <CIA-19> gjelinek: 6575612 some brands need a post-install hook to be installed, 6574267 zlogin error msgs for non-native zones could be under .SUNWnative [00:02:40] *** postwait has quit IRC [00:03:34] <pfn> ooh, there's a java api for dtrace [00:04:20] *** apuc has joined #opensolaris [00:06:59] <blueandwhiteg3> What version of the NFS protocol does SXCE 67 run? [00:07:12] <Tpenta> it was one of the first to do v4 [00:08:18] <blueandwhiteg3> Ah ha! OS X doesn't support v4. Will it fall back to v3? [00:08:41] <sommerfeld> solaris will do nfs v2, v3, and v4 [00:08:43] <Tpenta> yes [00:08:47] <Tpenta> hello bill [00:09:33] <blueandwhiteg3> sommerfeld: Is this automagic? Can it be forced? [00:09:42] <Tpenta> yes and yes [00:10:00] <sommerfeld> see /etc/default/nfs [00:11:14] *** Yamazaki-kun has quit IRC [00:11:31] <pfn> blueandwhiteg3, what you see in dmesg on the solaris box? [00:11:47] *** karrotx has quit IRC [00:11:52] <sommerfeld> .. but it should autonegotiate to the newest common version.. [00:12:14] <blueandwhiteg3> pfn: the only potentially interesting thing is: Jul 3 15:02:58 SolarisBox mountd[512]: [ID 664212 daemon.error] No default domain set [00:12:30] <blueandwhiteg3> that only happened once, however [00:13:47] <pfn> blueandwhiteg3, and showmount -e solarisbox from osx? [00:14:03] <blueandwhiteg3> Alright... how do I restart the nfs server now that i have forced it to version 2 or 3 only? [00:15:57] <blueandwhiteg3> pfn: Exports list on 192.168.0.102: [00:16:05] <blueandwhiteg3> pfn: /export/home/Shared 0.0.0.0/ffffff00 [00:16:22] <pfn> well, your box isn't on the list [00:17:01] <blueandwhiteg3> that was the option for all hosts on the LAN [00:17:20] <pfn> so export it to your osx box specifically first [00:17:23] <pfn> then try from there [00:18:31] <blueandwhiteg3> ok, i think we have worked past that problem [00:18:38] <blueandwhiteg3> i explicitly added my host [00:19:12] <blueandwhiteg3> should i file a bug? 
the "all hosts on <NIC>" option in the GUI is not! [00:20:02] <blueandwhiteg3> well, it works! [00:20:07] * pfn shrugs [00:20:19] <blueandwhiteg3> The freakin' GUI does not do what it says [00:20:28] <blueandwhiteg3> It either needs to be relabeled or fixed [00:20:34] <blueandwhiteg3> We have mountage! [00:22:11] <blueandwhiteg3> pfn: Any idea why the shared folders gui is now refusing to open? It begins to open, then closes before any windows open... [00:24:01] <pfn> I don't use any gui, so I don't know [00:24:18] *** karrotx has joined #opensolaris [00:25:21] *** cypromis_ has quit IRC [00:25:50] *** carbon60 has quit IRC [00:28:15] *** nrubsig has joined #opensolaris [00:28:15] *** ChanServ sets mode: +o nrubsig [00:29:45] <nrubsig> Does anyone seen Phillip Brown on IRC ? [00:30:11] * nrubsig stares at dclarke [00:30:23] * nrubsig stares harder at dclarke [00:30:26] <dclarke> he never comes here [00:30:33] * dclarke stares back [00:30:43] * nrubsig stops building a glue trap for Philip [00:31:10] * dclarke writes a really silly error 404 page for Blastwave [00:32:55] *** Gman has joined #opensolaris [00:33:22] *** Yamazaki-kun has joined #opensolaris [00:33:22] <SYS64738> when I install CSWSquid_x86.pkg from cooltools it create also the correct smf ? [00:33:34] <dclarke> who knows ... [00:33:40] <dclarke> I use Squid from Blastwave [00:33:53] <dclarke> which is .. quite frankly .. more up to date and better [00:34:16] <SYS64738> I saw that cooltools use 2.5 while blastwave 2.6 [00:34:47] <SYS64738> dclarke, I need the ncsa auth helper is it possible to install via blastwave ? [00:34:52] <dclarke> the coolstack people came to me a year ago and asked that I release their stuff with their build scripts because they can't get a damn thing done inside Sun [00:35:18] <dclarke> ncsa auth helper ? dunno .. 
you would have to research that [00:35:50] <SYS64738> I used the squid src from coolstack and compiled it with those flags [00:36:03] <dclarke> cool [00:36:04] <SYS64738> but now I asked myself how to make it start [00:36:07] <dclarke> no pun intended [00:36:33] <SYS64738> it's my first time on solaris [00:37:03] *** sartek has joined #opensolaris [00:37:23] <dclarke> ah .. good for you .. you're from linux land ? [00:37:41] <SYS64738> bsd [00:37:52] <dclarke> real unix .. good [00:38:47] <SYS64738> where can I find info about how to start squid with svc ? [00:39:02] <pfn> get the smf xml for squid [00:39:06] <pfn> then svccfg import squid-smf.xml [00:39:22] <pfn> then svcadmin start squid [00:39:29] <dclarke> perhaps svcs -av | grep -i squid may help also [00:41:06] <SYS64738> dclarke, where did you put the cache dir ? [00:41:27] <dclarke> I put it in different places depending on load [00:41:30] <boyd_> err... that'd be svcadm enable squid [00:41:33] <boyd_> (Morning, all) [00:41:54] <dclarke> I have a server in Montreal with 1000 employees banging it and it uses cache on separate spindles of disks on multiple controllers [00:42:03] *** boyd_ is now known as boyd` [00:42:13] <SYS64738> I am a little confused on the /opt hierarchy [00:42:14] <dclarke> others can leave the cache in /opt/csw/var or move it to /var/squid or whatever [00:42:21] <dclarke> it's easy [00:42:27] <dclarke> /opt is the top of it [00:42:48] <dclarke> then /opt/csw or /opt/vendor can have software in there [00:42:58] <dclarke> like Lotus Domino can be /opt/lotus [00:43:09] <dclarke> software from Blastwave goes into /opt/csw [00:43:19] <SYS64738> at the moment I put it all in /opt/squid, I use a zone only for squid [00:43:24] <dclarke> software from Sun for Sun Studio can be /opt/SUNWspro [00:43:35] <twincest> also, /etc/opt/ (config files for software in /opt) and /var/opt (for var-stuff for software in /opt) [00:43:42] <dclarke> oh .. you built it yourself and put it wherever .. 
oh well [00:44:03] *** boyd` is now known as boyd [00:44:16] <blueandwhiteg3> SInce the GUI is broken, how can I use the CLI to add more hosts to the allowed list for an NFS share? Or just ALL hosts? [00:44:30] <pfn> blueandwhiteg3, man exportfs [00:44:30] <SYS64738> is it possible that the .xml for squid isn't in the package ? [00:44:33] <pfn> blueandwhiteg3, man shareadm [00:44:54] * boyd grumbles about stinkin' nick stealers [00:45:00] <dclarke> SYS64738 : what package ? [00:45:07] <blueandwhiteg3> no man page for shareadm [00:45:09] <pfn> boyd, register, /msg nickserv ghost :0 [00:45:21] <pfn> blueandwhiteg3, exportfs then [00:45:28] <pfn> blueandwhiteg3, man exports [00:45:45] <SYS64738> dclarke, coolstack or blastwave (squid) I have two zone one with squid from coolstack and one from blastwave [00:45:55] <boyd> pfn: All done.. it's just having to do the recovery that sucks :) [00:47:21] <dclarke> SYS64738 : well .. with the Blastwave package .. go check for a init script in /etc/rc3.d or try svcs -av | grep -i squid [00:48:07] <pfn> or just write your smf descriptor and import it ;-) [00:48:12] <SYS64738> dclarke, isnt there [00:48:13] <boyd> or svc '*squid*' [00:48:14] <pfn> s/your smf/your own smf/ [00:48:20] <boyd> svcs '*squid*' [00:50:46] <SYS64738> where can I info on about to write a smf descriptor ? [00:52:19] <pfn> man svccfg [00:52:30] <pfn> try svccfg export anyservice and look at the output [00:52:33] *** karrotx has quit IRC [00:52:38] <pfn> make something similar [00:53:17] <palowoda> Or http://www.opensolaris.org/os/project/smf-doc/smf-dev/smf-book.html might have some info. [00:53:28] <oninoshiko> can the iSCSI target be made to return a domain name instead of an IP in SendTarget responces? (and if so, how?) 
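The SMF steps pfn gave for squid, with boyd's correction folded in, amount to the following sketch. The manifest path is hypothetical (the thread never established where, or whether, the package ships one):

```shell
# pfn's recipe with boyd's correction applied. The manifest path is
# hypothetical: use whatever XML the package ships, or one you wrote.
svccfg import /path/to/squid-smf.xml   # register the service with SMF
svcadm enable squid                    # boyd's fix: svcadm, not "svcadmin start"
svcs -l squid                          # confirm the instance is online
```

As pfn also suggests, `svccfg export anyservice` on an existing service gives a template to copy when writing a manifest from scratch.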
[00:53:32] <SYS64738> thanks [01:00:42] <CIA-19> joycey: 6564934 nxge driver fails to resume io traffic after suspend/resume operation [01:04:49] *** apuc has quit IRC [01:10:16] *** ylon has joined #opensolaris [01:10:22] *** boro has quit IRC [01:10:29] <ylon> having some troubles partitioning and slicing up a drive [01:10:43] <ylon> I can get it partitioned to one solaris2 chunk [01:11:15] <ylon> (via fdisk) but then when I go to partition, or slice, it I am getting problems with "free hog" it appears. [01:11:24] <ylon> anyone able to offer some help along these lines? [01:11:34] <ylon> been trying to follow http://docs.sun.com/app/docs/doc/806-4073/6jd67r9hs?a=view#disksxadd-12321 [01:11:56] <ylon> which seems to get me most of the way, but when I just want two slices to use as vdevs for zfs, things go haywire [01:12:11] <ylon> essentially I've got a 250gb drive and need to split it up into 200 and 50 [01:12:18] *** Mazon is now known as mazon [01:16:51] *** sfire||mouse has quit IRC [01:18:04] <axisys> hmm.. my aggr1 interface has sshd running but not responding.. http://rafb.net/p/7yKHlQ80.html [01:18:37] *** chris_d has quit IRC [01:19:41] <Tpenta> gdamore: good point on that /usr/gnu email [01:19:41] <axisys> if i telnet to port 22 on that interface it responds fine [01:20:36] <axisys> the non-global zone that binds to that aggr1 IP shows sshd running [01:21:26] *** blueandwhiteg3 has quit IRC [01:21:58] *** blueandwhiteg3 has joined #opensolaris [01:25:14] *** apuc has joined #opensolaris [01:25:50] <jamesd> can someone tell me why sun stopped making u60's ... it surely beats the blade 100/150... i can't beleve how badly the blade 150 performs compared to a smp box even if the smp box cpus are half as fast... [01:26:13] *** GeneralDelta has joined #opensolaris [01:26:19] *** pfa3rh has quit IRC [01:28:17] *** blueandwhiteg3 has quit IRC [01:28:47] *** blueandwhiteg3 has joined #opensolaris [01:29:33] <GeneralDelta> Hi all! I have another newb question... 
:-) I would like to make a permanent change to my PATH... Where/how do I do that? I know I make the change in the '.bash_profile' but I have am having a hard time finding it. I'm guessing I have to create it. Where to I place it and what would the syntax be? Thank you! [01:29:51] <vmlemon> Does Solaris Express ship with GNU GCC? [01:30:05] <jamesd> vmlemon, yes /usr/sfw/bin [01:30:15] <RElling> Solaris has shipped with gcc for 10+ years. [01:30:27] <jamesd> RElling, not included in the base install [01:30:44] <RElling> yep, cleverly hidden :-( [01:30:58] <jamesd> in the sun freeware companion disk [01:31:25] *** Fish has quit IRC [01:32:00] *** rawn027 has joined #opensolaris [01:32:06] <rawn027> hello everybody [01:32:15] <GeneralDelta> hello [01:32:22] <jamesd> hi [01:32:36] <rawn027> right now im using a webclient are there any good irc clients on solaris? [01:32:39] <rawn027> I usually use my mac [01:32:50] <GeneralDelta> Ditto [01:32:58] <jamesd> rawn027, blastwave has xchat, and irssi [01:33:13] <GeneralDelta> which is nicest? [01:33:18] <jamesd> sunfreeware.com should have irssi and possibly bitchx [01:33:34] <GeneralDelta> lol, I like that name ;-) [01:33:53] <jamesd> i prefer bitchy its like bitchx but with an attitude. [01:34:05] <vmlemon> I take it that I can get it from Sun.com? Since I don't have a companion CD [01:34:12] <blueandwhiteg3> Alright... I am running a direct gigabit connection between one machine and my solaris box and dumping file ... it is running slower than the 100 mbit connection through the switch [01:34:26] <blueandwhiteg3> Where do I start on whipping solaris into shape in terms of network throughput? [01:34:28] <jamesd> vmlemon, no one mentioned sun.com ... blastwave.org and sunfreeware.com [01:34:47] <vmlemon> OK [01:34:50] *** nrubsig has quit IRC [01:35:42] <blueandwhiteg3> I tried increasing the packet size on the OS X machine connected... 
it stopped all connectivity [01:35:57] <blueandwhiteg3> I have locked the OS X machine to gigabit, so we clearly have a gigabit link [01:36:18] <GeneralDelta> hate to ask again, but I would be very grateful if some one would direct to a "how to" on making a permanent change to my PATH [01:36:24] <jamesd> blueandwhiteg3, how are you trying to transfer the files as nfs? ftp? scp? [01:37:01] <blueandwhiteg3> nfs [01:37:36] <jamesd> make sure you pass -orsize=8192,wsize=8192 when you mount the directories. [01:37:37] <blueandwhiteg3> i also would like to test disk throughput ( cat /dev/zero > /file basically ) but there's no way to watch disk activity in realtime as far as i can easily see? [01:37:50] *** rawn027_ has joined #opensolaris [01:37:51] <jamesd> blueandwhiteg3, iostat -xz 2 [01:38:09] <rawn027_> i guess i will be using bitchx [01:38:14] <rawn027_> compiles without an issue :) [01:38:21] *** rawn027 has left #opensolaris [01:38:39] <blueandwhiteg3> alright... even this crappy old disk is getting 30 MB/sec [01:38:42] <blueandwhiteg3> so that's not the bottleneck [01:39:10] <jamesd> blueandwhiteg3, make sure you pass the wsize and rsize parameters, even windoze gets good speed with them set. [01:39:13] <rawn027_> this is a nice client, i like it better than irssi [01:39:51] <blueandwhiteg3> jamesd: I need to issue those parameters on the OS X machine which is connecting to the solaris machine [01:40:08] <jamesd> blueandwhiteg3, yes [01:40:32] <jamesd> blueandwhiteg3 -orsize=8192,wsize=8192 [01:40:35] <blueandwhiteg3> yes [01:40:42] *** GeneralDelta has left #opensolaris [01:40:45] *** GeneralDelta has joined #opensolaris [01:42:38] *** movement has quit IRC [01:42:44] *** cypromis has joined #opensolaris [01:43:02] <blueandwhiteg3> jamesd: What is orsize? 
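GeneralDelta's PATH question never got a direct answer in the channel; a minimal config sketch, with the added directories being only examples mentioned elsewhere in the log (Sun freeware and Blastwave):

```shell
# Lines to append to ~/.bash_profile (create the file in $HOME if it
# doesn't exist); bash reads it at login, making the change permanent.
# /usr/sfw/bin and /opt/csw/bin are example directories from the channel.
PATH=$PATH:/usr/sfw/bin:/opt/csw/bin
export PATH
```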
[01:43:08] <blueandwhiteg3> wzise = write size [01:43:16] <pfn> reread [01:43:27] <jamesd> -o means pass the option(s) to the filesystems [01:43:32] <blueandwhiteg3> my mount_nfs has slightly different options [01:43:49] <blueandwhiteg3> ok [01:44:07] <blueandwhiteg3> what does rsize do? [01:44:26] <pfn> read size [01:44:27] <pfn> duh [01:44:37] <jamesd> read size, suggests that nfs use 8192 byte reads [01:45:01] <blueandwhiteg3> then why don't i use the mount_nfs option -r which specifies read size? [01:45:51] <jamesd> because its the posix way of doing things and it works on all OSes i have seen [01:45:52] *** apuc has left #opensolaris [01:47:42] *** deather has quit IRC [01:48:01] *** deather has joined #opensolaris [01:49:00] <RElling> blueandwhiteg3: also verify that the server's net indeed negotiated to the speed you expect. kstat -m _interface_, where _interface_ may be nge, e1000g, or whatever the net uses. Look for 1000fdx_cap. Sure would be nice to have this info someplace more convenient... [01:49:22] <blueandwhiteg3> I forced the OS X end to gigabit [01:49:29] <blueandwhiteg3> it would not communicate if it wasn't on gigabit [01:49:45] <RElling> s/1000fdx_cap/link_speed/ [01:50:10] <RElling> doesn't matter, a switch can autonegotiate each port. [01:52:07] <blueandwhiteg3> this is a direct link [01:52:14] <blueandwhiteg3> i screwed up OS X's mounting [01:52:17] <blueandwhiteg3> need to reboot [01:52:41] *** blueandwhiteg3 has quit IRC [01:53:35] <rawn027_> is there a way to automatically add a zfs pool to a cifs server [01:53:40] <rawn027_> like there is with iscsi? [01:54:07] <vmlemon> I've got a copy from SunFreeware [01:56:31] <jamesd> rawn027_, automate? its only one command... 
zpool import poolname [01:56:32] *** MikeTLive has left #opensolaris [01:58:26] *** nrubsig has joined #opensolaris [01:58:27] *** ChanServ sets mode: +o nrubsig [01:59:02] <RElling> rawn027_: RFE 6380862, zfs(1) should allow setting samba shares http://bugs.opensolaris.org/view_bug.do?bug_id=6380862 [01:59:31] <rawn027_> ok i will do some reading, thanks [02:01:28] *** danv12 has joined #opensolaris [02:04:05] <axisys> my aggr1 interface is showing this traffic in tcpdump http://rafb.net/p/uW1A7h81.html [02:04:19] <axisys> can anyone explain what this traffic is about? [02:04:43] <axisys> i see both nics' mac addresses sending traffic to one unknown mac address [02:06:33] <axisys> also it looks like it is using 10MB even though it can take 100M [02:06:38] <twincest> axisys: LACP maybe? [02:06:50] <axisys> twincest: is that what that is? [02:06:55] <twincest> i don't know [02:07:14] <axisys> twincest: could be.. how about the speed ? why does tcpdump show 10MB ? [02:13:55] *** uebayasi has joined #opensolaris [02:14:32] <axisys> does anyone know how to disable dns lookup on sshd ? [02:15:08] <axisys> got it VerifyReverseMapping [02:16:43] <axisys> hmm.. that did not work as promised by the sshd_config man page. [02:16:48] <axisys> i still see this [02:16:49] <axisys> debug3: Trying to reverse map address [02:17:08] <ylon> for some reason it seems that solaris is much slower than other operating systems on a unit on which I currently have it installed. Is there some global debugging flag turned on in releases like b67? 
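RElling's earlier kstat check for negotiated link speed, spelled out as a sketch; e1000g is an assumed driver name, and the exact statistic names vary by driver:

```shell
# RElling's check, spelled out. Substitute nge or whatever driver the
# interface actually uses; statistic names differ between drivers.
kstat -m e1000g | grep link_speed   # 1000 expected on a negotiated GbE link
kstat -m e1000g | grep -i duplex    # should indicate full duplex
```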
[02:17:23] *** Trident has quit IRC [02:17:52] *** sstallion has joined #opensolaris [02:19:12] <axisys> got it this time LookupClientHostnames [02:20:24] <SYS64738> pfn, can you give me an idea on this error: [02:20:25] <SYS64738> [ Jul 4 02:17:22 Executing start method ("/opt/coolstack/bin/svc-squid start") ] [02:20:25] <SYS64738> [ Jul 4 02:17:22 Method "start" exited with status 96 ] [02:21:47] <palowoda> If I remember right status 96 is usally a xml configuration problem. Most likely in the manifest file. [02:22:29] <CSFrost> palowoda, you ruined pfn's quiz! [02:23:00] *** alfism has quit IRC [02:23:01] <palowoda> Ahh, didn't mean too. I'm just bumbling along. [02:23:10] <SYS64738> I lost the ipsec connection to the test server [02:23:15] <SYS64738> it's time to go to bed [02:23:23] <SYS64738> good night and thanks all [02:23:45] *** Trident has joined #opensolaris [02:25:03] *** mikaeld has quit IRC [02:25:03] *** jwit has quit IRC [02:25:03] *** Bart_M has quit IRC [02:25:03] *** TBCOOL has quit IRC [02:25:03] *** Hunger- has quit IRC [02:25:07] <vmlemon> Is asprintf() implemented on Solaris? [02:25:40] *** Bart_M has joined #opensolaris [02:25:40] *** TBCOOL has joined #opensolaris [02:25:40] *** mikaeld has joined #opensolaris [02:25:40] *** Hunger- has joined #opensolaris [02:25:40] *** jwit has joined #opensolaris [02:25:47] <jamesd> vmlemon, dont think so [02:26:14] <vmlemon> Blast :( [02:26:41] <vmlemon> That's made it harder to compile the utility that I'm trying to compile, already [02:26:47] <richlowe> the implementations I've seen were pretty evil. [02:26:53] <richlowe> it's not just something you can pull from any other vendor. [02:27:10] <richlowe> I think the BSD ones do their magic via messing around with the innards of FILE [02:27:12] <jamesd> vmlemon, http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=4508459 [02:27:49] <vmlemon> Hmm [02:28:14] <richlowe> jamesd: though it's keyworded for xen, and johnlev was RE of a duplicate of it. 
[02:28:18] <richlowe> maybe they'll bring it in? [02:29:26] * vmlemon wonders how he'd go about getting xar to compile, without asprintf() [02:29:27] <jamesd> possibly, they have won half the battle, they realize it's missing and it's a problem... getting them to devote an engineer to the project is the hard part, but i would assume it's a pretty simple fix since you can just port the bsd version into solaris [02:30:09] *** estibi_ has joined #opensolaris [02:30:25] *** jpdrawneek has quit IRC [02:30:30] <palowoda> Heh, it's marked as an oss-bite-size too. [02:32:03] *** estibi has quit IRC [02:32:33] *** stevel has quit IRC [02:33:21] <CSFrost> heh, I never did figure out the bug reporting puzzle [02:33:51] <CSFrost> the duplicate gets an engineer's name tossed on it, but is closed, meanwhile it just sits around in the queue [02:34:46] <vmlemon> I don't really want to start hacking up source code, but is there a workaround for the missing API call/function? [02:37:51] * sstallion slaps SVM [02:38:17] <palowoda> One distro of opensolaris already solved the asprintf problem. [02:40:02] <nrubsig> vmlemon: |asprintf()|nis very evil. [02:40:07] <nrubsig> s/nis/ is/ [02:40:12] <rawn027_> what is SOA when referring to Java? [02:41:54] <sstallion> Service Oriented Architecture [02:42:11] <sstallion> Usually comprised of ESBs and/or disparate web services [02:42:21] *** sioraiocht has quit IRC [02:42:33] <sstallion> IBM likes to bring it out and wave it around every now and then, even though it really doesn't mean anything [02:43:34] *** karrotx has joined #opensolaris [02:46:26] <CSFrost> kind of like TEC [02:53:15] <pfn> SOA has a fucked up meaning... [02:53:30] <pfn> I mean, how the hell do you get services across an entire organization to integrate... it just doesn't happen... [02:54:14] *** blueandwhiteg3 has joined #opensolaris [02:54:54] <blueandwhiteg3> how do I increase the MTU under solaris? 
it is current set to 1500 [02:56:10] <ylon> how does one actually create an iscsi target zpool? [02:57:31] <jamesd> blueandwhiteg3, google your solaris device name and solaris mtu ... it varies by driver currently its being worked on for solaris 11 [02:58:29] *** linux_user400354 has joined #opensolaris [03:09:56] *** simford has joined #opensolaris [03:10:05] *** alanc_away has quit IRC [03:10:29] *** alanc_away has joined #opensolaris [03:11:00] *** blueandwhiteg3 has quit IRC [03:15:56] *** vmlemon has quit IRC [03:16:52] *** blueandwhiteg3 has joined #opensolaris [03:17:35] <blueandwhiteg3> alright... i've maxed out the write size with nfs... i'm working on mtu on the solaris box [03:17:55] <blueandwhiteg3> are there any other values of interest? [03:18:30] <blueandwhiteg3> i may try using UDP? would that help? [03:18:41] <jamesd> blueandwhiteg3, usually rsize and wsize is enough to get decent performance, what are you getting? [03:18:50] <jamesd> using udp shouldn't help. [03:18:55] <blueandwhiteg3> I'm seeing ~13 MB/sec... [03:18:59] <blueandwhiteg3> I have a direct gigabit connection [03:19:12] <blueandwhiteg3> it varies up and down [03:19:14] <jamesd> are are both auto-setting to full duplex gigabit? [03:19:40] <blueandwhiteg3> i have the OS X machine locked to full duplex gigabit [03:19:43] <jamesd> sounds more like 100mbit ethernet. [03:19:55] <blueandwhiteg3> well, it sometimes go up to 15-16 MB/sec [03:19:58] <blueandwhiteg3> then it goes back down [03:20:17] <blueandwhiteg3> The disk on the other end will take 26-30 MB/sec sustained [03:20:26] <jamesd> what boxes are you using? [03:21:17] <blueandwhiteg3> The disk on my end will read at 20-22 MB/sec [03:21:27] <blueandwhiteg3> I have a mac os x box, reading from an external drive [03:21:30] <jamesd> yeah but what about the other one? [03:21:45] <jamesd> is the external drive via usbv2? 
[03:21:47] <blueandwhiteg3> linked to an amd64 box via locked full duplex gigabit ethernet [03:22:03] <blueandwhiteg3> the drive is internal on the amd64 box [03:22:24] <jamesd> reading from an external drive <--- how is that connected [03:22:25] <blueandwhiteg3> the external drive is USB2, yes, but I have tested it at 20+ MB/sec sustained [03:22:32] <blueandwhiteg3> I am going to switch to firewire, actually [03:22:50] <jamesd> i bet the usb2 is the problem... [03:23:07] <blueandwhiteg3> I have tested it, sustained 20 MB/sec+ [03:23:11] <blueandwhiteg3> i tested it on the very file [03:23:37] <jamesd> yes but if you are doing network + usbv2 at the same time, it can be limited. [03:24:01] <blueandwhiteg3> ok, i'm using the internal drive now [03:24:22] <blueandwhiteg3> virtually identical results [03:24:54] <blueandwhiteg3> i checked... [03:25:01] <jamesd> i can get 10-12MB/s on my 10 year old solaris box to a windows box or a solaris box over 100mbit nics. [03:25:09] <blueandwhiteg3> internal drive - 44-48 MB/sec sustained read with large file [03:25:28] <blueandwhiteg3> internal drive to remote box - ~13 MB/sec (same large file) [03:26:22] <blueandwhiteg3> i'm glad i upgraded my notebook drive, those are great file transfer rates for a 2.5" [03:26:38] *** jamesd_ has joined #opensolaris [03:27:00] <blueandwhiteg3> so i have pretty clearly established this is a network/nfs problem [03:27:03] *** jamesd has quit IRC [03:27:36] <jamesd_> i would have to say its a osx problem or possibly a bad gigabit driver.... 
[03:27:49] <blueandwhiteg3> well, i've seen >20 MB/sec using SAMBA with other machines [03:27:54] <blueandwhiteg3> I'd like to use iperf to test the link [03:27:59] <blueandwhiteg3> but it won't build under solaris [03:28:06] <rawn027_> is there anything special i need to do to run a samba server [03:28:13] <rawn027_> its in maintenance right now [03:28:17] <rawn027_> default install of SXDE [03:28:42] <blueandwhiteg3> does anybody know of alternatives to iperf or know how to make it build under solaris? [03:29:15] <blueandwhiteg3> i'm really going crazy not being able to test the link directly [03:30:02] <jamesd_> # du -h file1 ; time cp file1 /ide/test2/ [03:30:03] <jamesd_> 100M file1 [03:30:03] <jamesd_> real 0m13.474s [03:30:03] <jamesd_> user 0m0.005s [03:30:03] <jamesd_> sys 0m2.090s [03:30:04] <jamesd_> [03:31:55] <blueandwhiteg3> what are you doing there? [03:32:20] <jamesd_> printing the size of the file, then copying it over an nfs link [03:32:32] <jamesd_> # pwd [03:32:39] <jamesd_> /test [03:32:43] <blueandwhiteg3> and how does this do us any good versus just monitoring the disk and network activity? [03:32:48] <jamesd_> df /test [03:33:02] <jamesd_> it gives you the total time taken to move the file [03:33:04] *** yongsun has joined #opensolaris [03:33:09] <jamesd_> /test (enterprise:/pool2/test): 3680502 blocks 3680502 files [03:33:27] <blueandwhiteg3> yes, but i can see that easily already [03:33:33] <blueandwhiteg3> or at least the approximate throughput [03:33:42] <blueandwhiteg3> 13 MB/sec is a long ways from 30 MB/sec [03:33:56] <blueandwhiteg3> I want true link-level testing [03:37:36] <jamesd_> well... you can use my sample method, it will give you more accurate results... iostat simply looks at the current throughput each time it checks, not the overall throughput of the link. 
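jamesd_'s du-plus-time-cp measurement can be scripted; a runnable sketch, with the file size and paths as placeholders (in his example the destination was an NFS mount, so /tmp merely stands in):

```shell
# Runnable version of jamesd_'s measurement: make a test file, time the
# copy, derive MB/s from size over wall-clock time. Point DST at an NFS
# mount to measure the link instead of the local disk.
SRC=/tmp/tput_src; DST=/tmp/tput_dst
dd if=/dev/zero of="$SRC" bs=1048576 count=10 2>/dev/null  # 10 MB test file

START=$(date +%s)
cp "$SRC" "$DST"
ELAPSED=$(( $(date +%s) - START ))
[ "$ELAPSED" -eq 0 ] && ELAPSED=1   # sub-second copies round up to 1s

echo "copied 10 MB in ${ELAPSED}s (~$((10 / ELAPSED)) MB/s)"
rm -f "$SRC" "$DST"
```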
[03:37:55] <blueandwhiteg3> I'm using a different thing under OS X that is a bit higher resolution [03:39:30] <blueandwhiteg3> Is there any reason why my NIC does not seem to properly get a new DHCP lease after using a static IP? [03:39:31] <jamesd_> and you have another application that is running and slowing down the system... i assume it's a single processor box [03:40:07] <blueandwhiteg3> I end up having to reboot to get it to pull a DHCP properly... [03:40:27] <blueandwhiteg3> The OS X machine is dual core, core duo. The solaris box is doing nothing and has a fairly fast AMD64 cpu. [03:41:46] <Tempt> ideas, anyone: cat: input error on /dev/rmt/2un: Not enough space [03:41:52] <Tempt> Not enough space? [03:42:03] <jamesd_> does netstat -i have any errors or collisions. [03:42:22] <CSFrost> Tempt, that sounds familiar, but my memory is just awful [03:42:57] <blueandwhiteg3> My OS X machine reports no collisions or errors on the link. It's literally two machines talking *directly* to each other. [03:43:29] <RElling> GbE over UTP doesn't have collisions, it's dual-duplex [03:43:37] <blueandwhiteg3> That's my point [03:44:20] <jamesd_> well i'm here guessing... but one of the boxes is forced to gigabit, sometimes... the other box will be stuck in 100mbit mode or 1/2 duplex. [03:44:32] <blueandwhiteg3> I'm rebooting solaris so it grabs DHCP off my LAN properly (anybody know why it fails?) then I'm going to try again with iperf [03:44:47] <blueandwhiteg3> I found somebody who has suggestions on how to build it: [03:44:47] <blueandwhiteg3> http://archive.ncsa.uiuc.edu/lists/iperf-users/dec05/msg00010.html [03:44:50] <jamesd_> okay... major storm coming through... got to go... [03:44:56] <Tempt> CSFrost: Heh. Annoying. [03:45:13] <blueandwhiteg3> sorry to see you go! [03:45:18] <blueandwhiteg3> bye jamesd_ [03:45:19] *** jamesd_ has quit IRC [03:45:44] <blueandwhiteg3> can anybody point me in the right direction with iperf compiling? 
[03:45:52] <blueandwhiteg3> I think i did everything properly, but it still was failing [03:46:17] <blueandwhiteg3> It's unclear if the changes were to be made before or after running ./configure [03:47:01] <Tempt> CSFrost: Think it might be a blocksize thing. [03:47:35] <Tempt> CSFrost: This was my attempt at doing zfs send directly to tape, y'see. [03:48:55] <CSFrost> Tempt, hrm I could ask the last person who dealt with the problem, though he doesn't come on irc much.. :-( [03:50:06] * nrubsig wishes his Ultra5 would be fast enougth to play http://www.youtube.com/watch?v=KyO62mkomCc [03:50:35] <Tempt> CSFrost: I'm trying to dd it in with a massive blocksize at the moment. So far so good. [03:50:45] <Tempt> CSFrost: dd if=/dev/rmt/2un of=zfs1 bs=400000000000000 [03:51:07] <CSFrost> I'm also running a quick search, to see if it shows up anything different incase it fails [03:51:26] *** ylon has quit IRC [03:51:43] <nrubsig> Tempt: erm [03:51:54] <nrubsig> Tempt: doesn't "dd" only copy whole blocks ? [03:52:25] <Tempt> nrubsig: Lord knows. It's giving the machine something to think about, so I'm happy with that. [03:52:32] <nrubsig> Tempt: e.g. a file with 513 bytes and a dd blocksize of 512 will only output 512 bytes [03:52:52] <nrubsig> I may be wrong [03:52:56] <Tempt> nrubsig: Hence the rather large blocksize. [03:53:15] <Tempt> nrubsig: Calculated scientifically by leaning on the zero key for a while. [03:53:34] <CSFrost> lol [04:02:23] *** kloczek has quit IRC [04:02:49] <CSFrost> blueandwhiteg3, you might wish to move your questions to nfs-discuss and go from there.. [04:03:03] <blueandwhiteg3> CSFrost: I'd like to simply test my link [04:03:14] <blueandwhiteg3> It's a good suggestion, and if the link checks out, that is a great place to go [04:04:45] <CSFrost> Have you tried nicstat? 
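nrubsig's 513-byte worry can be checked directly. For regular files, dd with bs=512 does not drop the trailing partial block; tapes are the special case, where a read() with a buffer smaller than the recorded block fails, which fits Tempt's "Not enough space" error:

```shell
# Does a 513-byte file come out as 512 bytes through `dd bs=512`?
# No: dd writes the short final block as-is for regular files.
head -c 513 /dev/zero > /tmp/ddcheck_in
dd if=/tmp/ddcheck_in of=/tmp/ddcheck_out bs=512 2>/dev/null
wc -c < /tmp/ddcheck_out   # prints 513: nothing was dropped
```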
[04:05:23] <CSFrost> You might also want to try adding a switch to the mix, and seeing if you have the same problems [04:10:01] *** bnitz has quit IRC [04:10:25] <blueandwhiteg3> CSFrost: I don't have a switch at the moment. I use gigabit like this all the time without problems. I really need to use link-level testing here before I can proceed. [04:12:09] *** nrubsig has quit IRC [04:12:37] *** r00tintheb0x has quit IRC [04:12:44] *** [1]Pir8 has quit IRC [04:12:45] *** r00tintheb0x has joined #opensolaris [04:13:08] *** Fullmoon has quit IRC [04:13:15] *** Pir8 has joined #opensolaris [04:14:35] *** theRealballchalk has left #opensolaris [04:17:31] *** rawn027_ has quit IRC [04:18:26] <CSFrost> which iperf version blueandwhiteg3 ? [04:18:37] <blueandwhiteg3> latest [04:18:38] <blueandwhiteg3> http://archive.ncsa.uiuc.edu/lists/iperf-users/dec05/msg00010.html [04:18:50] <blueandwhiteg3> I would take pretty much any version, however... [04:24:18] *** linux_user400354 has quit IRC [04:26:19] *** EdLin has joined #opensolaris [04:26:20] *** linux_user400354 has joined #opensolaris [04:27:50] *** karrotx has quit IRC [04:29:55] *** EdLin has quit IRC [04:32:53] *** derchris has quit IRC [04:33:08] *** derchris has joined #opensolaris [04:39:35] *** alobbs has quit IRC [04:42:22] <noyb> CSFrost: nrubsig may have forgotten that block sizes are adjustable with dd. how is your dd going? [04:42:58] <Tempt> My dd is running fine. [04:42:59] <noyb> oh, nm.... I think he was speaking to Tempt. my bad. [04:43:13] <Tempt> my zfs rec from tape is running fine. [04:44:09] <noyb> pwd [04:44:12] <noyb> gah... [04:44:53] <noyb> Tempt: rec == recover ? [04:45:01] *** theRealballchalk has joined #opensolaris [04:45:06] <Tempt> receieve [04:45:09] <Tempt> receive [04:45:11] <theRealballchalk> man what the ehck [04:45:26] <theRealballchalk> 10.4.9 uphuck 1.3 wont boot off the DVD [04:45:35] <theRealballchalk> what could be the problem? [04:45:59] <theRealballchalk> woops! 
[04:46:01] <theRealballchalk> wrong room [04:46:03] <theRealballchalk> sorry [04:46:04] <jwit> blueandwhiteg3, i've built iperf using those instructions on s10 [04:48:30] <Tempt> noyb: I think I need add a dd session when writing the tape in future. [04:48:59] *** linux_user400354 has quit IRC [04:49:23] <blueandwhiteg3> jwit: hmm... could i use your binary? [04:49:41] <jwit> that's not a good practice :) [04:49:51] <jwit> what error are you getting? [04:50:18] <blueandwhiteg3> jwit: it dies early on in the make process, even with gmake [04:50:26] <blueandwhiteg3> errors relating to threading [04:50:46] *** sommerfeld has quit IRC [04:50:49] <jwit> you replaced _all_ the -pthread with -lpthread right? even in src/Makefile ? [04:51:18] <blueandwhiteg3> jwit: I think / thought I did [04:51:33] <blueandwhiteg3> it's possible i missed something or made some kind of other typo [04:51:36] <blueandwhiteg3> jwit: why don't you just send me your source and i'll try building it? [04:52:08] <jwit> you shouldn't really encourage strangers to send you code or binaries [04:52:10] <noyb> Tempt: I don't understand what you're saying here: Tempt> noyb: I think I need add a dd session when writing the tape in future. [04:52:25] <blueandwhiteg3> jwit: I know it's bad practice [04:52:40] <blueandwhiteg3> however, this is a non-production system, and if you sent me code, I could just diff it! [04:52:56] <blueandwhiteg3> it would be pretty obvious if you did anything nasty! [04:53:41] * noyb sends forktodeath binary with source for /bin/ls ... [04:55:10] <blueandwhiteg3> jwit: is there a problem with sending me your source? i seriously will diff it and see what's going on.... [04:55:21] <noyb> blueandwhiteg3: did you get my message? :-) [04:55:48] <blueandwhiteg3> noyb: who cares? this machine has *nothing* critical on it! 
you could blow it up backwards and forwards and i'll basically just nuke it and start over [04:56:24] <blueandwhiteg3> it's also not going to be a public server or anything [04:56:27] <noyb> it's your show then. I care. I tried to help you care, and now I don't care. No problem. Thanks for playing. [04:56:36] <blueandwhiteg3> it's basically an elaborate NAS [04:58:44] *** ShadowHntr has joined #opensolaris [04:59:28] <Tempt> noyb: Drop dd bs=512 in the pipeline and perhaps I won't need to set huge blocksizes when reading from the tape. [05:01:38] <theRealballchalk> hey guys is there a way how i can tell cdrecord to burn a bootable dvd? or do i just burn it? [05:01:40] <noyb> Tempt: but that will take a longer time. I use 8MB blocks myself like so: bs=8192k but you could use larger block sizes (as you have) but I would use something that fits within my available mem. [05:04:32] <noyb> and you may want to investigate the dd man page for other options that my make sense in some given circumstances. notrunc, noerror, and sync come to mind when using dd in a pipeline. [05:04:42] <CSFrost> therealballchalk, from iso? just burn it [05:05:04] <noyb> "my" == "may" [05:06:05] <theRealballchalk> ok [05:06:31] *** sstallion has quit IRC [05:06:38] *** sstallion has joined #opensolaris [05:15:44] *** movement has joined #opensolaris [05:16:15] <Tempt> noyb: I don't want to have to mess around too much on a restore, that's the thing. [05:16:18] <Tempt> noyb: All too hard ;) [05:16:43] *** LeftWing has quit IRC [05:17:58] <noyb> Tempt: well, by tossing dd into the backup/restore mix... I think you're already messing around too much. :-) [05:18:08] * dclarke wanders in [05:18:29] *** movement has left #opensolaris [05:18:36] <noyb> Tempt: did I miss your initial discussion on your design decisions for adding dd to the mix? 
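[editor's note: noyb's dd advice above — a large block size such as bs=8192k, plus conv flags like noerror and sync so a bad read pads the block instead of aborting the pipeline — can be sketched as below. A file stands in for the tape so the sketch is runnable anywhere; sizes are illustrative:]

```shell
# Sketch of noyb's suggestion: 8 MB blocks (bs=8192k) in a dd pipeline,
# with noerror+sync so a read error is padded rather than fatal.
# (notrunc matters when overwriting an of= file in place.)
dd if=/dev/zero of=/tmp/dd-demo.src bs=8192k count=2 2>/dev/null
dd if=/tmp/dd-demo.src conv=noerror,sync bs=8192k 2>/dev/null | \
    dd of=/tmp/dd-demo.out bs=8192k 2>/dev/null
wc -c < /tmp/dd-demo.out
```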
[05:18:43] *** danv12 has quit IRC [05:18:59] *** sioraiocht has joined #opensolaris [05:22:08] *** LeftWing has joined #OpenSolaris [05:26:41] <Tempt> noyb: Here's the story. [05:28:14] <Tempt> noyb: Backup method was zfs send $SNAPSHOT | mbuffer -m 128M -f -o /dev/rmt/2n [05:28:19] <Tempt> noyb: Runs fine, good speed etc. [05:28:48] <Tempt> noyb: Upon trying to restore from that, I get "Not enough space" on reads. dd bs=$HUGE if=/dev/rmt/2n reads from that fine. [05:28:55] <Tempt> noyb: Hence the block size pondering. [05:30:42] *** sparc-kly_WORK has joined #opensolaris [05:34:09] *** movement has joined #opensolaris [05:34:33] *** danv12 has joined #opensolaris [05:36:22] *** danv12 has quit IRC [05:36:35] *** sstallion has quit IRC [05:36:45] *** danv12 has joined #opensolaris [05:40:42] <noyb> Tempt: I've never used mbuffer. I did a little googling, it looks pretty cool. I'd be interested in *why* it broke, and my whole backup and restore project would grind to a halt. :-) [05:43:22] <noyb> but instead, you simply chose a different tool. So, to measure the performance with time being my only metric, I would do the whole backup/restore two or more times with the default bs and a large bs. just prepend ptime to your pipeline and see which one wins. [05:45:00] <axisys> is there a way to check if i am ran netservices limited on a system.. i dont want to rerun if I already ran it [05:45:03] <sioraiocht> hi friends, is there a tool that will tell you the number of threads for each process on the machine? [05:45:26] <axisys> sioraiocht: prstat -L [05:45:29] <Tempt> noyb: I haven't chosen a different tool. I'm trying to find out why things are being difficult. [05:45:45] <dclarke> sioraiocht : ps -eflL [05:46:19] <Tempt> noyb: Performance, at this point, is less of a concern than why I can write to a tape freely but need to do some happy dance to read it. [05:46:38] <Tempt> noyb: So I'm testing different read strategies to work it out. 
[05:46:51] <noyb> I must have misunderstood. It thought you chose mbuffer and the result didn't work, so you substituted dd in mbuffer's place. my mistake. maybe you could post your entire cmd line. that would be cool. [05:46:58] <Tempt> noyb: Frankly, I'm about ready to put an axe through the front of the library. [05:47:06] <Tempt> noyb: I was *thinking* about sticking dd in there. [05:47:12] <dclarke> Tempt : don't do that [05:47:16] <Tempt> noyb: I'm waiting for this verify run to complete first. [05:47:25] <dclarke> Tempt : what are you doing? ufsdump ? [05:47:30] <Tempt> dclarke: zfs send [05:47:37] <dclarke> erk .. [05:47:45] * dclarke crosses self [05:47:49] <dclarke> good luck [05:48:02] <Tempt> I'm going to just switch to tar or star soon. [05:48:12] <dclarke> go star [05:48:13] <Tempt> Frankly, I don't understand why backups have to always be PAIN. [05:48:25] <dclarke> they don't [05:48:28] <noyb> Tempt: aha! so I'm *not* alone. your project grinds to a halt while you're trying to fix the strange error. :-) let's form a club. No... I think they did that already: USENIX... :-) [05:48:29] *** GeneralDelta has quit IRC [05:49:10] <Tempt> noyb: Pretty much. and with only two drives in the library, I can only run two decent tests at a time. And they're DLT8000, so thats a maximum of 6Mbyte/sec throughput on a good day with perfect I/O and an idle bus. [05:49:10] <dclarke> can I join too ? [05:49:35] <Tempt> Seriously, the whole backup situation is screwed. [05:49:50] <noyb> hehe. the door is open dclarke [05:50:01] <Tempt> You either hand over mountains of cash for an "enterprise" solution like Netbackup or Omniback/DP or you play around with shell scripts and dodgyiness. [05:50:04] <gdamore> so... anyone in this audience want a gldv3 hme? I've started the nic driver tests.... :-) [05:50:08] <dclarke> want to see the thing that has me stuck in my tracks ? 
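[editor's note: noyb's measurement method above — prepend ptime to the pipeline and race the default block size against a large one — looks roughly like this. ptime(1) is the Solaris tool; the shell's built-in `time` is used here so the sketch runs anywhere, and the file sizes are illustrative:]

```shell
# Sketch of noyb's timing approach: same copy, default 512-byte blocks
# vs. a large block size, each timed. On Solaris, prepend ptime instead.
dd if=/dev/zero of=/tmp/ptime-demo.dat bs=1M count=16 2>/dev/null
time dd if=/tmp/ptime-demo.dat of=/dev/null bs=512 2>/dev/null
time dd if=/tmp/ptime-demo.dat of=/dev/null bs=8192k 2>/dev/null
```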
[05:50:18] *** movement has quit IRC [05:50:23] *** movement has joined #opensolaris [05:50:25] <Tempt> dclarke: Bring it on. [05:50:33] <dclarke> gdamore : I want RealTek chip support .. [05:50:38] <gdamore> its done. [05:50:43] <dclarke> Tempt : better sit down .. this is tricky [05:50:54] <gdamore> if you mean by that rtl8110sc. [05:51:03] <dclarke> gdamore : what ? RealTek support? not likely .. it doesn't work here [05:51:04] <Tempt> Oh, I'm sitting down, swilling my hot beverage from my slogan-enabled Sun mug staring at my workstation blankly. [05:51:26] <dclarke> Tempt : okay .. so you are familiar with boot loaderss I'll bet [05:51:27] <gdamore> did you try the code from nevada as of a couple of days ago? [05:51:38] *** movement has quit IRC [05:51:41] <Tempt> dclarke: Aaah, you mean OpenFirmware, right? [05:51:44] <dclarke> gdamore : no Sir! will do .. thanks for the heads up ! [05:51:46] <Tempt> dclarke: That's more than a bootloader though. [05:51:53] <dclarke> Tempt : no .. I mean GRUB [05:52:03] <dclarke> see http://www.blastwave.org/dclarke/grub2/03_Jul_2007/vt100.jpg [05:52:06] <Tempt> dclarke: Aaah, the spawn of a really evil thing, with evil nobbly bits. [05:52:13] <gdamore> GRUB is my current nemesis, but tis a problem for #netbsd.... [05:52:31] <dclarke> gdamore .. please listen in then .. this may interest you [05:52:40] <gdamore> ok. [05:52:44] * noyb is already lost... [05:52:46] <dclarke> so I have GRUB legacy served over pxeboot to this hardware here [05:52:54] <dclarke> which works fine of course [05:53:09] <dclarke> I boot the GRUB legacy and you see that there on my VT100 here [05:53:25] <Tempt> and that's it? [05:53:33] <dclarke> I then point GRUB legacy to the first partition on my Kingston USB stick [05:53:42] <axisys> is there a way to check if i am ran netservices limited on a system.. i dont want to rerun if I already ran it .. anyone? 
[05:53:48] <dclarke> I load my build of GRUB2 [05:53:56] <dclarke> and then boot it [05:54:01] <gdamore> and? [05:54:03] <dclarke> which also works [05:54:12] * gdamore waits for the punchline. [05:54:29] <dclarke> see http://www.blastwave.org/dclarke/grub2/03_Jul_2007/grub2_002.jpg [05:54:42] <dclarke> so that is GRUB 1.95 from current CVS [05:54:47] *** movement has joined #opensolaris [05:54:51] <dclarke> I bring up the command line [05:55:01] <dclarke> http://www.blastwave.org/dclarke/grub2/03_Jul_2007/grub2_000.jpg [05:55:25] <dclarke> see that we now have support from the command line to read the partitions of the local hard disk(s) as well as the USB attached devices [05:55:42] <dclarke> see the snv_64a miniroot there ? [05:55:42] <gdamore> very nice. i've not played with grub2 yet. [05:55:51] <gdamore> yah. [05:56:06] <dclarke> well hold on .. the brick wall is coming [05:56:11] <dclarke> see http://www.blastwave.org/dclarke/grub2/03_Jul_2007/grub2_001.jpg [05:56:12] <noyb> axisys: I don't know how to tell, since each service can be manually configured via svcadm and others. And it's interesting to note that even here on snv_62 that the man page states: Interface Stability == Obsolete [05:56:13] <gdamore> its on an ext2 filesystem? [05:56:19] <dclarke> BINGO ! [05:56:28] * dclarke always knew gdamore was sharp [05:56:39] <dclarke> see the loaded modules that I have there ? [05:56:51] <gdamore> yah. [05:56:58] <dclarke> note the presence of ufs as well as serial etc etc [05:57:06] <gdamore> right. [05:57:17] <dclarke> well if you look at http://www.blastwave.org/dclarke/grub2/03_Jul_2007/grub2_000.jpg [05:57:28] <axisys> noyb: hmm [05:57:35] <dclarke> you see that the hd0,1 partition is not recognized [05:57:46] <dclarke> that's the snv_64a hard disk [05:57:47] <gdamore> yes. [05:57:54] <gdamore> what is it formatted with? 
[05:58:01] <dclarke> and GRUB2 and its ufs loaded module can not read the partition format [05:58:04] <axisys> hey guys anyone done any work on wanboot on x86? something like wanboot for sparc [05:58:15] <dclarke> its a straight run of the mill install of snv_64a [05:58:24] <axisys> i love the sparc wanboot.. works pretty nice [05:58:26] <gdamore> ufs logging confusing it maybe? [05:58:36] <dclarke> good call [05:58:40] <dclarke> I thought so also [05:59:00] <dclarke> so I figure my next step will be to go see if the issue is the partition table layout [05:59:06] <Tempt> Partition type? [05:59:10] <dclarke> as well as the possibility of a logginf UFS [05:59:18] <Tempt> 0x83 is the linux default, right? [05:59:32] <dclarke> its a DOS partition type as far as the disk or Linux or Solaris is concerned [05:59:33] <gdamore> i know that the netbsd boot loader can't load a kernel on a solaris ufs partition, because of ufs logging [05:59:54] <dclarke> hyrmmm ... good insight there [06:00:05] <dclarke> I need ot have a look closer at the soures I guess [06:00:19] <dclarke> by the way .. it was a bitch to compile this [06:00:32] <gdamore> heh. [06:00:43] <gdamore> i like the idea of having an ls command though... [06:00:59] <dclarke> so I'm trying to create a decent bootloader as well as installer for OpenSolaris [06:01:01] <gdamore> right now i'm stuck because legacy GNU GRUB won't load my netbsd amd64 kernel.... [06:01:08] <dclarke> this is part of project gazelle and chinkara [06:01:15] <gdamore> ?!? [06:01:17] <Tempt> It would be truly impressive to see a replacement for grub. [06:01:24] <dclarke> gdamore : I think we need to look forwards to GRUB2 [06:01:38] <gdamore> bootloaders are by their very nature crufty beasts. [06:01:51] <jbk> especially on x86 [06:01:54] <gdamore> i've considered the idea of a "generic" loader numerous times during my spell doing embedded work. [06:02:03] <dclarke> gdamore : gazelle and chinkara ... 
one of them is a software service for OpenSolaris distros and the other is an actually installer [06:02:11] <gdamore> they are no more crufty than on certain embedded MIPS platforms. :-) [06:02:24] <jbk> i'm surprised chicken bones isn't part of the needed boot strapping process [06:02:33] <dclarke> gdmore : oooh .. what I wouldn't give for SmartFirmware or Open Boot on x86/AMD64 [06:02:35] <gdamore> you think it isn't? ;-) [06:02:50] * dclarke had to kill a cat last night to get this far [06:02:59] <gdamore> there is OpenBIOS, but it is largely crap the last time i tried it. [06:03:12] <dclarke> ha ha .. been there .. done that [06:03:15] <dclarke> went to GRUB2 [06:03:18] * gdamore hopes it was the cat that keeps crapping in his front yard. [06:03:24] <Tempt> GRUB is needlessly baroque [06:03:33] <dclarke> I did this for the ppc port a while ago with great success btu this is stopping me cold [06:03:52] <gdamore> don't you have open firmware on ppc? [06:03:59] <dclarke> exactly [06:04:10] <dclarke> makes it a whole lot easier than this [06:04:52] <dclarke> oh .. what really twists my shorts up in a bind .. the sdamn serial module doesn't work and I can;t get ttya support to function [06:05:03] * Tempt thinks it is sad that there is so much pain for bootloaders on x86 when Windows manages to boot on anything that has BIOS disk support [06:05:05] <dclarke> that makes no bloody sense .. I'd expect that to work [06:05:19] <dclarke> Tempt : yeah [06:05:27] <Tempt> I mean, honestly ... [06:05:30] <gdamore> Tempt: if you're willing to depend on BIOS limitations, then it is easy. [06:05:31] <Tempt> grub is a nightmare. [06:05:38] <dclarke> Tempt : and today I received my Microsoft Windows Server 2008 beta DVD too and It boots [06:05:41] <gdamore> but grub was designed to work *around* limitations in older BIOS' [06:06:04] <jbk> heh [06:06:09] <Tempt> Which isn't required in 99% of cases, but all the cruft is still there. 
[06:06:15] <gdamore> back when people couldn't boot systems with Linux because their disks wouldn't see past 4GB. :-) [06:06:19] <dclarke> so .. thanks for the three very excellent pointers [06:06:20] <jbk> well [06:06:30] <gdamore> you're welcome. [06:06:33] <dclarke> (1) the ufs logging issue [06:06:44] * Tempt remembers booting linux with a small DOS partition and loadlin <g> [06:06:47] <dclarke> (2) the potential for a partition map issue [06:06:54] <dclarke> (3) kill a chicken [06:06:55] <noyb> seems like there's nothing grand or unified about it. It appears to be a custom job for each platform. :-) [06:07:08] * jbk remembers cursing kickstart on friday [06:07:08] <gdamore> yeah. [06:07:09] <Tempt> wha? Grub on something other than x86? The horror, THE HORROR> [06:07:12] <dclarke> noyb : its a pita [06:07:25] <noyb> Tempt: lol [06:07:30] *** movement has quit IRC [06:07:37] <Tempt> I understand that Sun ditched the old DCA boot because of rapidly changing SATA chipsets and whatever else, but at least it worked without too much fscking around. [06:07:41] <gdamore> i just wish someone would make OpenBIOS work well and we can all stop using Grub. [06:08:06] <gdamore> no, DCA boot was ditched in anticipation of ZFS booting, I suspect. [06:08:08] <jbk> i had forgotten what a horrid mess x86 remote console solutions generally were [06:08:11] <dclarke> Tempt : see http://www.blastwave.org/dclarke/grub/grub_1.91/day_01/img_1280.jpg GRUB2 on ppc [06:08:20] <Tempt> WHY? [06:08:23] <jbk> until i started this job [06:08:44] <dclarke> this worked better on ppc http://www.blastwave.org/dclarke/grub/grub_1.91/day_01/img_1278.jpg [06:08:45] <gdamore> are you asking me? [06:09:10] <Tempt> PPC? [06:09:12] <Tempt> Which platform? [06:09:13] <dclarke> is whio asking ? [06:09:23] <dclarke> Tempt : that was ... Pegasos [06:09:40] <dclarke> Tempt : but I am sure I could get it to fly on the EFIKA also [06:09:43] <Tempt> I gather the host has no openfirmware? 
[06:09:46] *** movement has joined #opensolaris [06:09:52] <dclarke> yeah .. it did [06:10:04] <Tempt> So why do you need grub? Huh? [06:10:08] <gdamore> the other problem with DCA was that you had write realmode drivers for each chipset. it made things like PXE boot for various arbitrary chips basically impossible. [06:10:28] <gdamore> that's what i want to know too. [06:10:35] <dclarke> Tempt : just cause ! [06:10:53] <dclarke> at the time I wanted GRUB2 [06:10:58] <Tempt> Because GRUB is GNUlitically correct? If x86 can't have a sane boot process, noone should have a sane boot process? [06:11:12] <dclarke> that was then .. today I want GRUB2 for various features and forward looking reasons [06:11:38] <dclarke> GRUB legacy is essentially done .. no more development there [06:11:39] <Tempt> I honestly don't spend enough time booting systems, I suppose. [06:11:49] <Tempt> I generally install an OS on them, and boot it. [06:11:54] <gdamore> heh. [06:12:17] <Tempt> I see my boot process when I need to patch or replace/install hardware [06:12:17] <gdamore> i *wish* i could boot this damned kernel. but that's a problem for a different IRC channel... :-) [06:12:27] <dclarke> well ... getting to that boot stage requires a boot loader of some sort [06:12:49] <dclarke> gdamore : I'll go look into those RealTek drivers [06:12:53] <gdamore> one of the biggest damned annoyances in Solaris booting on x86 was the boot archive. [06:12:58] <Tempt> Anyway, back to my tape blocksize games. [06:12:59] <dclarke> gdamore : in snv_67 ? [06:13:01] <Tempt> Wish me luck! [06:13:06] <gdamore> i think snv_68. [06:13:18] <dclarke> oh .. so not really released in binary form yet [06:13:21] <gdamore> which probably means you need to build yourself or use nightly archives. [06:13:24] <gdamore> right. [06:13:32] <dclarke> I can BFU just fine [06:13:38] <dclarke> but I'd rather not .. 
[06:13:47] <dclarke> I'm working on this here GRUB2 thingy [06:13:55] <gdamore> you don't need to BFU, just copy the rge binary from the archives. [06:14:14] <dclarke> isn't there rge sources ? [06:14:24] <gdamore> yes, so you could also build it yourself. [06:14:49] <dclarke> okay .. but as you say .. the rge bins are in the archives .. okay .. I never considered that really [06:15:00] <gdamore> right. [06:15:12] <dclarke> gdamore : you're working at Sun these days right ? [06:15:19] <gdamore> yes. sort of. [06:15:26] <dclarke> from Tadpole to Sun .. you leapt [06:15:32] <gdamore> (contract, currently expires in October) [06:15:33] <jbk> *rimshot* [06:15:54] <dclarke> jbk : g'day dude [06:15:59] <jbk> hello [06:16:22] <dclarke> so are you using that mercury server much ? [06:16:27] * gdamore keeps hoping someone at Sun will think he is valuable enough to offer a full time position. [06:16:32] <dclarke> I never check it really [06:16:33] <jbk> well since i just moved, not recently [06:16:40] <jbk> i need to find a long ethernet cable now [06:16:46] <dclarke> okay .. its there .. idling away [06:16:47] <jbk> probably see if fry's or such has it [06:16:56] <dclarke> go wireless [06:17:04] <jbk> my laptop has a broadcom card [06:17:09] <theRealballchalk> fry's? [06:17:12] <theRealballchalk> where are we from? [06:17:17] <theRealballchalk> texas [06:17:17] <jbk> and i tried swapping it out with an atheros minipci card [06:17:28] <jbk> but it was causing serious instability [06:17:32] <jbk> theRealballchalk: houston [06:17:42] <Tempt> aha [06:17:43] <blueandwhiteg3> Hello, I was here a while trying to figure out how to optimize my NFS under Solaris. I completed testing with netperf and conclude I am seeing about 895 mbit/sec on my 1000 mbps link... that's fairly impressive, like 112 MB/sec [06:17:48] <dclarke> k .. 
I have to go have a look at this GRUB2 issue again [06:17:49] <Tempt> I need to use mbuffer on rec's as well as sends [06:17:50] <theRealballchalk> jbk: get the hell outa here, i live by Bellaire [06:17:53] <Tempt> it must do something magic to the blocksize. [06:17:55] <Tempt> zfs receive -vd sata750/test1 [06:18:00] <Tempt> = bad. [06:18:01] <jbk> theRealballchalk: no shit [06:18:03] <Tempt> mbuffer -m 128M </dev/rmt/2un | zfs receive -vd sata750/test1 [06:18:04] <theRealballchalk> lol [06:18:05] <Tempt> = good [06:18:11] <jbk> i'm barely west loop [06:18:23] <theRealballchalk> oh so u west side? [06:18:24] <blueandwhiteg3> However, it now puts the problem with throughput right on the shoulders of NFS. Is there any way to test NFS and leave out the disk writing on the server? [06:18:26] <jbk> (i10 & 610 -- across from the dealerships) [06:18:27] <jbk> yeah [06:18:32] <theRealballchalk> we're southwest [06:18:36] <jbk> yeah [06:18:43] <jbk> i know where bellaire is :) [06:18:43] <theRealballchalk> where the asian shootings happen [06:18:48] <jbk> haha [06:18:52] <theRealballchalk> ahhah coool man [06:19:00] <theRealballchalk> jeez [06:19:25] <theRealballchalk> well all this time there's actually someone from my hometown [06:19:39] <jbk> well i just moved down from kc :) [06:19:52] <theRealballchalk> kc? [06:19:55] <theRealballchalk> kentucky? [06:20:03] <jbk> kansas city [06:20:15] <theRealballchalk> oh ok [06:20:16] <Tempt> blueandwhiteg3: What sort of throughput are you getting for your NFS xfers? [06:20:47] <blueandwhiteg3> The drive handles ~30 MB/sec local writes (I will be installing a bigger RAID soon, this is just 'testing') but I was seeing... maybe 13 MB/sec? [06:20:52] <theRealballchalk> jbk: is it because of more computer jobs? [06:20:54] <Tempt> over gigabit? [06:21:09] <blueandwhiteg3> Tempt: Yes. And drives on both ends were not the bottleneck as far as I could tell. 
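[editor's note: Tempt's working restore above was `mbuffer -m 128M </dev/rmt/2un | zfs receive -vd sata750/test1`. The "magic" mbuffer does to the blocksize is re-blocking: it absorbs the tape's large records and hands zfs receive a plain byte stream. The same re-blocking idea can be sketched with dd's separate input/output block sizes; a file stands in for the tape drive here:]

```shell
# Re-blocking sketch: read in large blocks, emit small ones. This is the
# effect mbuffer has between /dev/rmt/2un and zfs receive in Tempt's
# pipeline; the byte count is unchanged, only the record size differs.
dd if=/dev/zero of=/tmp/tape-demo.img bs=64k count=16 2>/dev/null
dd if=/tmp/tape-demo.img ibs=64k obs=512 2>/dev/null | wc -c
```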
[06:21:13] *** sparc-kly has joined #opensolaris [06:21:14] *** ChanServ sets mode: +o sparc-kly [06:21:15] <jbk> well, i had been wanting to get out of kc for a while [06:21:20] <Tempt> same subnet? [06:21:21] <blueandwhiteg3> Tempt: I also increased the read/write size [06:21:33] <jbk> cause my previous employer was causing lots of stress and was generally taking advantage of me [06:21:40] <blueandwhiteg3> Tempt: Direct connection. Reliably testing at 895 mbit/ec [06:21:45] <jbk> and kc is pretty boring if you're single [06:22:03] <Tempt> add forcedirectio to your client options [06:22:04] <jbk> and i know a bunch of people here in houston as well as texas in general, this job came up, so i figured i'd give it a shot [06:22:05] <theRealballchalk> oh midwest yea somewhat i can imagine [06:22:06] *** danv12 has left #opensolaris [06:22:14] <theRealballchalk> but houston.............i'm afraid it isn't gonna help much [06:22:36] <jbk> cause while i could easily get a job in california, even with the 'adjustments' it'd still be difficult to actually save anything with the outrageous housing costs [06:22:43] <jbk> it's better than kc :) [06:22:44] <blueandwhiteg3> Tempt: Where do I do that? [06:22:53] <theRealballchalk> yea California is no place to retire [06:23:00] <jbk> or to save for it [06:23:00] <theRealballchalk> no savings there [06:23:25] <jbk> unless you happen to have pre-ipo options in a statup that goes public that you can cash out on quickly [06:23:28] <theRealballchalk> i'd say Houston is what ur looking for then [06:23:43] <theRealballchalk> living cost to job ratio is 1:1 [06:23:49] <theRealballchalk> it's nice [06:23:54] <jbk> which kinda sucks, cause realistically, that's probably about the only place i'd be able to find anything that'd actually interesting [06:24:03] <Tempt> mount -F nfs -o rsize=blah,wsize=blah,forcedirectio [06:24:13] <theRealballchalk> jbk: California? 
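[editor's note: Tempt's Solaris-side client options above (`mount -F nfs -o rsize=blah,wsize=blah,forcedirectio`) would look like this as a persistent /etc/vfstab entry. The sizes and mount point are illustrative, not from the transcript:]

```
# hypothetical /etc/vfstab line for the share, with 32 KB transfers and
# forcedirectio to bypass the client page cache (values illustrative):
#device to mount              device to fsck  mount point  FS type  fsck pass  mount at boot  mount options
10.1.1.1:/export/home/Shared  -               /Server      nfs      -          yes            rsize=32768,wsize=32768,forcedirectio
```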
[06:24:16] <jbk> yeah [06:24:19] <theRealballchalk> ahh [06:24:29] <theRealballchalk> yea i would too [06:24:53] <blueandwhiteg3> Tempt: That got me an extra 2 MB/sec... 15 MB/sec or so [06:24:57] <theRealballchalk> my cousins there has great jobs but their houses are not old but crap old - i just can't live like that [06:25:13] <jbk> at my old job, there was a sun prof. services guy that came out, and in less than a day, he takes me aside and is going 'what the hell are you doing here? you need to get out of here' [06:25:14] <blueandwhiteg3> Tempt: Wait, maybe it didn't help at all? [06:25:29] <theRealballchalk> it ain't my way of living unless i live out in one of the beach houses there hahaha [06:25:31] <theRealballchalk> forget it [06:25:40] <jbk> and my coworkers actually had a pool as to when i'd quit [06:26:08] <theRealballchalk> a pool? wadaya mean? [06:26:24] <jbk> like they had put in bets as to when i'd quit [06:26:34] <jbk> cause they all saw the crap i was getting handed [06:26:34] <theRealballchalk> hahah that's fuxed up [06:26:48] <blueandwhiteg3> Tempt: mount -w -o rwsize=61440,forcedirectio -t nfs 10.1.1.1:/export/home/Shared/ /Server [06:26:54] <jbk> i actually put on my exit interview 'it shouldn't take a death in the family to not be disturbed outside of work, including vacations' [06:27:09] <theRealballchalk> oh ok [06:27:32] <Tempt> Is this Linux? [06:27:39] <jbk> which is what it actually took, and even then, there was speculation they'd have tried to call me if i hadn't given my notice the same day i told them i was gonna be out for 4 days due to my grandmother passing away [06:27:54] *** Yamazaki-kun has quit IRC [06:28:24] <blueandwhiteg3> Tempt: I'm using OS X. [06:28:31] <pfn> kc has better housing costs than california... 
[06:28:38] <jbk> yes [06:28:43] <jbk> but the only thing to do is drink and eat bbq [06:28:59] <jbk> granted, not necessairly bad things, but gets old after a while [06:29:14] <pfn> well, all you have to do anywhere is just drink and eat [06:29:23] <pfn> what else are you looking for... [06:29:24] <theRealballchalk> sorry bathroom [06:29:27] <pfn> I guess kc is landlocked... [06:29:30] <pfn> so no beaches [06:29:35] <pfn> but I'm sure you've got forests and shit [06:29:49] * pfn has been to kc once... [06:29:55] <pfn> it was like a suburban hell [06:30:02] <pfn> actually, was in overlandpark/lenexa... [06:30:07] <jbk> pfn: slightly landlocked :) [06:30:11] <jbk> yeah [06:30:15] <jbk> i worked in overland park [06:30:27] <pfn> driving down the main street, whatever, metcalf or something it seems like every 2 miles you see the same stores over and over [06:30:33] <jbk> haha [06:30:40] <jbk> yep [06:30:44] <pfn> petco, 2 miles later, petco again, 2 miles later, petco again... [06:30:46] <pfn> wtf... [06:31:15] <theRealballchalk> heheheh [06:31:16] <jbk> what were you doing in kc? [06:31:29] <pfn> was there on business [06:31:56] <jbk> telco related by any chance? [06:32:07] <pfn> no, although I guess I saw a huge sprint building out there [06:32:14] <jbk> just one? [06:32:20] <jbk> prior to the nextel merger [06:32:21] <pfn> well, the campus [06:32:24] <jbk> yeah [06:32:25] <pfn> so I guess it'd be more than just 1 [06:32:29] <jbk> that's 14 buildings :) [06:32:41] <pfn> no, had a meeting over at informix... [06:32:42] <jbk> and like 350 acres or so [06:32:45] <jbk> ahh [06:32:51] <jbk> in op or lenexa? 
[06:33:01] <jbk> i know they used to have a building off college there in lenexa [06:33:08] <jbk> right by john deere :) [06:33:18] <pfn> yeah, in lenexa, off college & 95th or something [06:33:29] <pfn> or was it 103rd [06:33:31] * pfn shrugs [06:33:33] <jbk> yeah ok [06:33:45] <jbk> i used to work in that same office park type complex [06:33:54] <jbk> well 113th [06:34:03] <jbk> before we got moved [06:34:46] <jbk> but yeah, not terribly exciting [06:35:28] <blueandwhiteg3> Tempt: Any more ideas? Anything to change on the server side? Can I somehow test to /dev/null on the server to eliminate any risk of the disk being the bottleneck? [06:35:31] *** Gman has quit IRC [06:36:25] <blueandwhiteg3> Tempt: Increasing readahead seems to give slightly more even throughput, but i'm still not even touching 20 MB/sec [06:36:28] <pfn> I still find it funky that kc isn't in ks [06:36:29] <pfn> heh [06:36:39] <jbk> there is kck & kcmo [06:36:43] <jbk> just kcmo is larger [06:36:48] <blueandwhiteg3> Tempt: I guess if throughput is 2/3 of actual drive speed, I could live with that. But since this is a big file, why can't it be faster? [06:37:00] <jbk> they border each other [06:37:29] <jbk> some people seem to think they're at opposite ends of the world instead of opposite sides of state line rd. [06:38:48] <boyd> Anuonw remember which uadmin is reboot? [06:38:55] <pfn> so--changing subjects, how virtual are zones? [06:38:59] <boyd> Anyone [06:39:08] <pfn> how much overhead is there, etc? is it like a real VM? or is it more like 'jail' ? [06:39:22] <jbk> zones are like jails on steroids [06:39:26] <boyd> pfn: Closer to jails with more network and process-level isolation [06:39:30] <jbk> pretty low overhead [06:39:42] <jbk> if you want more vm, look at xen or ldoms [06:39:43] <pfn> how about lx? sounds interesting, is it a full VM environment there? [06:39:49] <Tempt> Not sure how "quality" the MacOS NFS is. 
[06:39:54] <pfn> actually, zones sounds interesting [06:40:01] <Tempt> Let me ssh into my laptop and run a benchmark. [06:40:04] <jbk> no, it's more like the linux runtime stuff you might see in fbsd or such [06:40:52] <boyd> The lx brand presents a Linux kernel interface to linux userspace programs. The linux processes are full solaris processes with some additional libs loaded [06:40:53] <blueandwhiteg3> Tempt: That'd be much appreciated... i'd like to see if there's something 'wrong' here.... [06:41:00] <blueandwhiteg3> Tempt: Latest command: mount -w -o rwsize=32768,forcedirectio,udp,readahead=16 -t nfs 10.1.1.1:/export/home/Shared/ /Server [06:41:07] <pfn> I'm curious if I could run, say, asterisk under lx [06:41:16] <boyd> Surely someone knows the uadmin numbers offhand? :) [06:41:17] <pfn> I guess minus the zaptel drivers, it should be doable [06:41:26] <boyd> pfn: I'd wonder about hardware support. [06:41:50] <boyd> If you don't want to have access to any hardware you can run asterix on solaris natively. Apparently it outperforms linux [06:42:06] <Tempt> How are you measuring performance? [06:42:16] <boyd> A ruler, I think [06:42:36] <Tempt> That was intended for blueandwhiteg3 [06:42:38] <Tempt> However .. [06:42:49] <blueandwhiteg3> Tempt: I'm basically watching activity monitor under OS X. There's also a similar thing under solaris [06:42:59] <Tempt> activity monitor? [06:43:10] <blueandwhiteg3> Tempt: Real time chart of network activity [06:43:17] <blueandwhiteg3> Tempt: I tried elaborate timing routines and found they weren't really better and a headache.... [06:43:22] <Tempt> included with MacOS? [06:43:27] <blueandwhiteg3> Tempt: Yes [06:44:00] <blueandwhiteg3> Tempt: When I watch the same thing using netperf... it is amazing... almost instantly to 112-113 MB/sec [06:44:05] <Tempt> vnc in [06:44:07] <Tempt> fire it up ... [06:45:00] <blueandwhiteg3> Tempt: Me? Or you? [06:45:06] <Tempt> hang on. [06:45:11] <blueandwhiteg3> Tempt: No rush. 
[06:45:40] <boyd> Tempt: I was aware of that :) [06:47:05] <Tempt> Christ. [06:47:07] <Tempt> I hate MacOS [06:47:12] <Tempt> How to I get NFS read rates? [06:47:21] <blueandwhiteg3> directly? [06:47:43] <blueandwhiteg3> There's not really a 'direct' way to do it, as far as i know.... unless you have a binary monitoring i/o [06:48:41] <Tempt> useless [06:48:46] <Tempt> I'm saturating my 100Mbit ethernet [06:48:57] <Tempt> But that's not exactly saying much given I've got VNC and crap running as well [06:49:00] <elektronkind> Tempt, iostat doesn't report on nfs mounts? [06:49:01] <boyd> What about time dd if=/some/nfs/path of=/dev/null? [06:49:02] <blueandwhiteg3> You can watch /Applications/Utilities/Activity Monitor.app [06:49:18] <Tempt> I've already got network load stats in my top bar thingy. [06:49:32] <boyd> mmm... MenuMeters [06:49:39] <Tempt> Notably, however, the Mac is in 75% sys time doing this [06:49:44] <blueandwhiteg3> Is there a way I can pump to and from /dev/null over NFS? [06:49:57] <Tempt> Because the MacOS implementation of NFS is a CPU hog. [06:49:57] <boyd> Tempt: It's called the "Menu Bar". They're quite popular, you know :) [06:50:23] <blueandwhiteg3> Tempt: What? 75%?? I am using 14% of 200% [06:50:31] <blueandwhiteg3> At most [06:50:43] <blueandwhiteg3> more like 10% on average [06:50:44] <Tempt> 200%? [06:50:50] <blueandwhiteg3> two cores [06:50:51] <elektronkind> blueandwhiteg3: read a file from a nfs mount and redirect it to /dev/null. You write a file to a nfs mount by reading from /dev/zero.. both using dd [06:51:27] <blueandwhiteg3> and those core are clocked down to 1066 mhz now [06:52:14] *** linux_user400354 has joined #opensolaris [06:52:26] <blueandwhiteg3> how can i eliminate disk on one or both ends? Could i read from /dev/zero or write to /dev/null over NFS? 
[06:52:43] <elektronkind> huh [06:52:53] <elektronkind> what on earth are you trying to do, willis [06:52:58] <Tempt> This is a G4 Powerbook [06:53:06] <Tempt> In other words, a craptop of craptitude with crap ++ [06:53:14] <blueandwhiteg3> Tempt: The G4 has like no bandwidth.... [06:53:35] <blueandwhiteg3> I wonder if I share /dev/ if the aforementioned will work? [06:53:40] <Tempt> No. [06:53:50] <elektronkind> yu don't seem to have a clear understanding of NFS [06:53:55] <blueandwhiteg3> I don't. [06:53:56] <elektronkind> s/yu/you [06:54:07] <Tempt> I'm looking forward to the day I finally hack up another solution for playing media [06:54:14] <elektronkind> you do all testing from the NFS client [06:54:18] <Tempt> Then I'm going to set fire to the Powerbook outside the nearest Apple dealer. [06:54:40] <blueandwhiteg3> elektronkind: What do you propose I do on the server? [06:54:49] *** movement has quit IRC [06:54:54] *** movement has joined #opensolaris [06:54:55] <blueandwhiteg3> elektronkind: what parameters should i change on the server? i want to learn... [06:55:10] <elektronkind> blueandwhiteg3: what is it that you're trying to accomplish? [06:55:49] <blueandwhiteg3> elektronkind: I am putting together basically a large personal file server... intended to be fast. [06:55:56] <Tempt> Oh, this Activity Monitor is a hoot: 30.56% User, 89.95% System, 0% Nice, -5% Idle [06:56:25] <blueandwhiteg3> Tempt: Apple has minor problems tracking CPU usage.... I had a process use 250% of my 1 cpu at one point... [06:56:32] <blueandwhiteg3> it was very brief, however [06:56:39] *** slowhog has left #opensolaris [06:56:40] <elektronkind> blueandwhiteg3: so what's your setup look like then... Mac NFS client and Solaris NFS server... or? [06:56:45] <blueandwhiteg3> elektronkind: Yes [06:57:02] <Tempt> Consumer OS, leave it to the consumers. [06:57:13] <palowoda> Man can you come up with some good Acronym's for crap++. 
[06:57:46] <elektronkind> Tempt: hey, I manage all my solaris boxen from behind a 24" imac and a mbp at home, doof :) [06:57:59] <pfn> 24" imac? wow [06:58:13] <pfn> talk about a wasted monitor when that imac is obsoleted :) [06:58:21] <blueandwhiteg3> elektronkind: While I'm just testing on the boot disk (a slow disk - like 30 MB/sec write) for now, the array of drives should be able to perform extremely well. [06:58:31] <elektronkind> pfn: boss got it for me. the size of a "24 screen on a imac is almost uncalled for [06:58:41] <elektronkind> but hey, who am I to complain [06:58:42] <pfn> well, the imac has decent graphics card [06:58:53] <pfn> but yeah, 24" on integrated hardware sounds like such a waste [06:58:56] <blueandwhiteg3> I anticipate the drives will substantially outperform the gigabit link, that's for sure [06:59:31] <blueandwhiteg3> I could try samba next.... if there are no ideas on nfs... [06:59:41] <elektronkind> blueandwhiteg3: don't count your chickens before they hatch. array performance can vary greatly depending on the IO patterns it has to service. [06:59:57] <blueandwhiteg3> elektronkind: extremely large files - disk images [07:00:29] <Tempt> And I manage everything from behind my Sun workstation with a pair of 19" LCDs [07:00:48] * pfn manages everything from his pitiful windows box with a 24" lcd... [07:01:23] <Tempt> and periodically I get a lot done from behind the 11" green phosphor CRT on an old wyse terminal. [07:01:28] <Tempt> JUST AS GOOD [07:01:54] <pfn> then again, I'm not a sysadmin, so I only have a single RDP window opened... [07:01:56] <elektronkind> hey, nethack works on it [07:02:03] <pfn> and a few putty windows for personal shtuffs [07:02:59] <Tempt> And soon, I'll be managing all my stuff at home from behind 4 19" LCDs connected to Sunrays. [07:03:04] <Tempt> and it will be EVEN BETTER! [07:03:05] <blueandwhiteg3> elektronkind: Any idea on how to authenticate into solaris smb using OS X? [07:03:08] <pfn> 4 19" monitors? 
how nice [07:03:13] <pfn> sysadmin working from home? even nicer [07:03:54] <blueandwhiteg3> Can I create some kind of ram disk under solaris and share that via NFS for better testing? [07:03:55] <Tempt> anyway, talking of work, time to vi /kernel/drv/sd.conf [07:04:11] <elektronkind> blueandwhiteg3: cmd=k in Finder and type smb://yourserverIP/sharename ? [07:04:47] <elektronkind> blueandwhiteg3: what do you think that would accomplish. It wouldn't be testing your array, that's for sure. [07:04:48] <blueandwhiteg3> elektronkind: authentication error [07:05:18] <elektronkind> blueandwhiteg3: look at samba's logs on the server and see if samba is griping about something [07:05:21] <blueandwhiteg3> elektronkind: My array isn't in place. i want to eliminate the disk layer. I have tested the raw network layer - 895 mbit/sec. Now I want to test the file transfer layer. [07:05:38] <elektronkind> ...and looking at logs is Sysadmin 101 stuff, mind you [07:06:11] <elektronkind> blueandwhiteg3: that won't get you anything meaningful. "File transfer layer" [07:06:18] <elektronkind> do you also buy diplomas online? [07:06:29] <blueandwhiteg3> elektronkind: Did I say I was a systems administrator? [07:06:29] <elektronkind> crazy talk [07:06:33] *** tecNikal has joined #opensolaris [07:07:54] <tecNikal> hi i am using build 64 open solaris [07:08:08] <elektronkind> get your array up and test off of that. it's what you'll be ultimately using for your file store, so pretty much anything else would be chasing ghosts and wild assumptions at this point. [07:08:09] <tecNikal> sorry 67 but my machine still cannot detect USB [07:08:28] <tecNikal> and wireless [07:09:02] *** ylon has joined #opensolaris [07:09:03] <tecNikal> how is usb abbriviated in solaris 10 ? [07:09:36] <ylon> I've got some iscsi questions, anyone around that could offer some quick help? [07:11:13] <Tempt> ylon: Boyd will be able to help, he's an iSCSI expert. 
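[Editor's note] On blueandwhiteg3's ram-disk question above: two ways to take the server's disks out of the test, sketched with made-up share paths and sizes:

```shell
# /tmp on Solaris is tmpfs (RAM-backed), so sharing a directory under it
# removes the server's disks from the picture entirely:
mkdir /tmp/nfstest
share -F nfs -o rw /tmp/nfstest

# Or build a fixed-size ramdisk block device with ramdiskadm(1M),
# put a filesystem on it, and share the mount point instead:
ramdiskadm -a nfstest 512m
newfs /dev/rramdisk/nfstest
mount /dev/ramdisk/nfstest /mnt
share -F nfs -o rw /mnt
```

Either way the result benchmarks the network and the NFS stacks, not the eventual array, which is elektronkind's point.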
[07:11:56] <ylon> that's super Tempt, would boyd be around per chance right now? [07:12:33] <Tempt> wait and see. [07:12:46] <ylon> It appears that I've got iscsi set up in solaris right now, but I'm trying to figure out how to get an initiator to connect [07:12:55] <ylon> specifically I'm using globalSAN for Mac OS X [07:13:15] <ylon> and I'm not seeing what type of connection I should set up (CHAP, etc.) [07:13:35] <ylon> or if I'm supposed to use Portals or Targets (being that this is new to me) [07:15:11] <tecNikal> how is usb abbriviated in solaris 10 ? [07:15:58] <ylon> tecNikal: are you talking about /dev/usb/* [07:15:59] <ylon> ? [07:16:18] <ylon> I'm personally not very familiar with solaris yet (linux/fbsd/Mac OS X background) [07:16:41] <tecNikal> ylon yes i am asking about usb but i dont know i have a wireless adapter configured [07:16:50] <tecNikal> so i want to check if i can get it running [07:17:02] <tecNikal> wireless usb Aztech [07:17:17] <ylon> hmm, unfortunately I'm not going to be of much use there [07:17:22] <ylon> I would grep through dmesg [07:17:37] <ylon> to see if it detects it somehow [07:17:50] <ylon> but really, I've found the driver detection pkg that sun provides to be rather helpful [07:18:03] <ylon> perhaps you mileage would vary though [07:20:07] *** uebayasi has quit IRC [07:20:49] <ylon> hmm, looks like I'm getting it to connect, but I'm not seeing a mount point [07:29:32] *** cmihai has joined #OpenSolaris [07:31:42] *** danv12 has joined #opensolaris [07:33:02] *** blueandwhiteg3 has quit IRC [07:33:22] *** sparc-kly has quit IRC [07:33:53] *** duri has quit IRC [07:34:12] *** duri has joined #opensolaris [07:35:23] *** linux_user400354 has quit IRC [07:41:03] <tecNikal> is there a way to enable wireless ?> [07:41:32] <cmihai> dladm [07:41:43] <cmihai> Read the manpage. 
It obsoletes wificonfig [07:42:33] <tecNikal> no driver found for interface 0 (nodename: 'interface') of ZyDAS USB Device [07:42:44] <tecNikal> this is the error [07:42:46] <cmihai> tecNikal: so it's not supported. [07:43:02] <cmihai> Look for a 3rd party driver, check the HCL and try a newer version of OpenSolaris (SXCE 67 for example) [07:43:14] <tecNikal> i am on sxce 676 [07:43:15] <tecNikal> i am on sxce 67 [07:43:45] <tecNikal> its just this reason they ask me to install it i was previously using solaris 10 11/06 on my machine [07:47:20] <tecNikal> never mind thanks [07:47:23] *** tecNikal has quit IRC [07:48:40] *** thowe has joined #opensolaris [07:48:54] <thowe> is developer edition only available on DVD? [07:49:34] <cmihai> thowe: get the DVD and do a JumpStart :-). Who needs cds? :D [07:49:57] <cmihai> Better yet, forget the developer, get SXCE 67 [07:50:13] <thowe> cmihai: Well, I do. I don't have a DVD drive. [07:50:38] <thowe> What's SXCE? [07:51:45] <pfn> why would you want to get sxce? [07:51:48] <pfn> community edition? [07:51:51] <pfn> it's updated too frequently... [07:51:56] <thowe> Ah, community edition. [07:52:42] *** lloy0076 has joined #opensolaris [07:52:53] <thowe> I'm just poking around and want to try this thing out. The recent GPLv3 discussions peaked my interest. [07:52:59] <cmihai> thowe: Solaris Express Community Edition. Basically SXDE is just another SXCE anyway :-). Same software and such. [07:53:19] <cmihai> thowe: well, so get SXCE 67, there's a CD version also. [07:53:26] <thowe> Ah, Community edition is available on CD. [07:54:09] <pfn> really? I thought CE was on multiple dvds.... [07:54:21] <thowe> Know anything about Belelix or Nexenta? [07:54:42] <pfn> belenix looks interesting for me to learn from (interested in doing a similar livecd setup) [07:54:46] <cmihai> Yes, don't :-) [07:54:55] <pfn> nexenta sounds interesting if you're a gnu fanboi [07:55:00] <cmihai> They're LiveCDs, and Belenix isn't all that updated. 
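[Editor's note] The dladm workflow cmihai points tecNikal at above, as a rough sketch. The link name `ral0` is an example only, and an unsupported adapter like the ZyDAS one simply won't appear in the link list:

```shell
dladm show-link                       # links the kernel attached a driver to
dladm scan-wifi ral0                  # survey access points on that link
dladm connect-wifi -e MyESSID ral0    # associate (add -k <keyname> for a stored key)
dladm show-wifi                       # confirm the association
```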
[07:55:07] <g4lt-U60> pfn, the entire opensolaris multidisttro pack is precisely two DVDs [07:55:09] <cmihai> Now Nexenta.. is basically Ubuntu + OpenSolaris ON [07:55:26] *** xushi has joined #opensolaris [07:55:26] <lloy0076> (although Nexenta is focussing on the server side only from now on; apparently) [07:55:27] <pfn> cmihai, well, I'm trying to figure out how to setup a livecd nas... [07:55:46] <thowe> I respect the GN ideal, but I lean towards BSD ways of doing things )technically). [07:55:50] <thowe> er GNU [07:55:59] <cmihai> ew GNU ;_) [07:56:18] <pfn> what I want to be able to do is: boot livecd -> setup zpools/zfs -> save config data to zfs and somehow setup livecd to boot zfs as boot_archive for config... [07:56:37] <cmihai> Yes, don't. [07:56:41] <cmihai> Get a real disk. [07:56:44] *** program has quit IRC [07:56:47] <pfn> why? [07:56:55] <pfn> just another part to fail and reconfigure when it does fail... [07:57:04] <pfn> in addition, wasted overhead of another disk to power... [07:57:09] <thowe> Why is this system so friggin big? What the heck is in there? [07:57:24] <pfn> thowe, my core + additional packages install is just under 1gb... [07:57:51] <thowe> That's huge... [07:57:53] <pfn> thowe, although, it seems you can have a usable system in about 200mb... if you know exactly what to prune down... [07:58:22] <pfn> my /usr/sfw is about 300mb, and /usr/jdk is another 300mb [07:58:25] <thowe> Is there a ports system like Open/FreeBSD? Or pkgsrc or what? [07:58:45] <pfn> I don't understand why /usr/jdk is 300mb, though [07:58:48] <pfn> the jvm isn't that huge... [07:58:53] <pfn> I should look through and prune that down [07:59:10] <thowe> Hmmm. Can't you simply do without Java entirely? [07:59:17] <pfn> why? [07:59:18] <pfn> I like java [07:59:36] <cmihai> thowe: STOP. And THINK. [07:59:42] *** ShadowHntr has quit IRC [07:59:49] <cmihai> thowe: Solaris is smaller then SLES, RHEL, Debian (2 DVDs) and so on. [07:59:53] <cmihai> thowe: it's NOT big. 
[08:00:25] <cmihai> It comes with 2 desktop systems by default (CDE and JDS, that's Gnome... 2.18+ in later SXCE builds), and a LOT of software and packages and such. [08:00:48] <richlowe> from memory, a whole lot of the installed size is staroffice. [08:00:49] <cmihai> Do a full install of RHEL / SLES or Debian and see "big". [08:00:50] <thowe> Cool. Does KDE work OK? [08:00:51] <richlowe> doubly so if you have both of it. [08:01:03] *** Triskelios has joined #opensolaris [08:01:14] <cmihai> thowe: eh, doesn't come with KDE, but it works fine. [08:01:16] <richlowe> hm, only 385M now. [08:01:19] <thowe> I normally run OpenBSD, so I'm used to small 5-minute install. [08:01:34] <cmihai> thowe: do JumpStart with flar (Flash Archives) [08:01:39] <cmihai> 5 minute installs on 100 systems :P [08:01:40] <richlowe> /usr/jdk is large because there's at least two of them in there. [08:01:51] <richlowe> the time the install takes is largely unrelated to how much it's installing. [08:02:01] <thowe> I'm not sure what is meant by "JumpStart" but I'll look into it. [08:02:06] <cmihai> But if you install 4GB of packages and take the time to configure the package db every package the installer installs.. and the checks... [08:02:16] <richlowe> cmihai: don't even have to jumpstart for the flar stuff, just have one around. [08:02:21] <cmihai> thowe: same as Ignite-UX on HP-UX, or KickStart in Linux. [08:02:22] <richlowe> only useful if you've already done one install though. [08:02:29] <cmihai> Well, yeah :-) [08:02:47] <cmihai> Much much faster though.. nice linear file transfer, no more package db and scripts garbage [08:02:58] <richlowe> I still think Sun, and the other distributors should put up package-cluster sized flars. [08:03:09] <richlowe> (where "other distributors" would be adjusted to taste, but the same concept, I guess) [08:03:10] <thowe> Not sure what KickStart is... I don't think it was around when I started playing with unix...
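[Editor's note] The flash-archive shortcut richlowe mentions above, sketched with invented host and archive names:

```shell
# Capture a golden, already-installed system into a flash archive
# (run on the installed box; -c compresses, -n names the archive):
flarcreate -n golden-sxce67 -c /net/installserver/export/flars/sxce67.flar

# Deploying it is then a JumpStart profile away, e.g.:
#   install_type      flash_install
#   archive_location  nfs installserver:/export/flars/sxce67.flar
#   partitioning      existing
```

As richlowe notes, this only helps once you have done one conventional install to capture.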
[08:03:24] <cmihai> More or less automated network installs. [08:03:32] <thowe> Generally I install a system, and hit the man pages/FAQ/userguide. [08:03:41] <cmihai> thowe: and I assure you most net install technologies were there long before you :P [08:03:59] <cmihai> thowe: you'll be happy to know Solaris has the best documentation. EVER. [08:04:02] <thowe> I've used solaris on various servers (e250, e4500) [08:04:08] <richlowe> Except when compared to OpenBSD [08:04:09] <cmihai> docs.sun.com, opensolaris.org and bigadmin for FAQs... [08:04:16] <richlowe> The OpenBSD manpages are a thing to be envious of. [08:04:24] <thowe> OpenBSD has pretty good docs... [08:04:25] <cmihai> http://solaris.blackhorizon.org - bunch of links. [08:04:37] <cmihai> Nah, OpenBSD has GREAT manpages and a decent FAQ. [08:04:43] <richlowe> they're, like, correct (which makes them better than 99% of other OSes), and generally coherently formatted (which takes care of the other 1%) [08:05:13] <cmihai> Heh, yeah, Solaris manpages format ain't much fun to read. [08:05:32] <thowe> Linux pisses me off after using BSD for so long. Man pages are often a joke and half the normal tools seem crippled in some way. ifconfig, for example. [08:05:54] <Triskelios> why would stat() on a nonexistent path return ENOMEM? [08:06:26] <thowe> The thought of a GPL Java might just get me to look at Java again too... [08:06:41] <g4lt-U60> you also have to remember that the manpage tradition in the BSD variants owes its genesis to Bill Joy, who later founded Sun [08:06:49] <ylon> hey cmihai, I've got iscsi set up, but am having a hard time getting the initiator to connect to it (globalSAN), any chance you may be able to help? [08:08:31] <cmihai> ylon: GlobalSAN initiators might not be supported. I'm sure Solaris, Windows, Linux, NetAPP and such work. No idea about globalSAN. [08:08:41] * thowe is downloading community 67 [08:08:47] <cmihai> Did you test your iSCSI target with anything else?
[08:08:56] <ylon> hmm, seems to connect, but then I don't see it show up in the logs [08:08:57] <cmihai> Try Windows or Solaris [08:09:07] <ylon> no, haven't tried it elsewhere [08:09:12] <cmihai> See if it's connected on the Solaris side [08:09:25] <ylon> what free client could I use for windows [08:09:26] <ylon> ? [08:09:31] <cmihai> iscsiadm list target [08:09:35] <cmihai> ylon: Microsoft, it works great. [08:09:58] <cmihai> http://www.microsoft.com/downloads/details.aspx?familyid=12cb3c1a-15d6-4585-b385-befd1319f825&displaylang=en [08:10:17] <cmihai> Initiator-2.04-build3273-x86fre.exe this is what you probably want. [08:10:28] <cmihai> Just make sure you export 1TB tops [08:10:30] <cmihai> like [08:10:37] *** linux_user400354 has joined #opensolaris [08:10:40] <cmihai> zfs create -s -V 1T storage/iscsi [08:10:48] <cmihai> zfs set shareiscsi=on storage/iscsi [08:11:09] <cmihai> Just use MS iSCSI initiator and select the target (put in IP) [08:11:09] *** cypromis has quit IRC [08:11:16] <cmihai> Then run "mmc" - disk managemnet. [08:11:18] <cmihai> management [08:11:26] <cmihai> You can add them to a dynamic volume and such. [08:11:59] <cmihai> Just don't go over 1TB, Windows can't handle it. If you want more then 1TB, multiple volumes + Dynamic Disks (It's Veritas Volume Manager VxVM light for Windows inside) [08:12:10] <ylon> is there a snapshot of settings for that initiator? Also should persistent be set? [08:12:33] <cmihai> Go with defaults. [08:12:37] <cmihai> Just select the target IP [08:12:48] <cmihai> then the volume you want to mount. It's all set. [08:12:55] <ylon> what is the default port? 
[08:13:04] *** danv12 has quit IRC [08:13:33] <cmihai> DEFAULT mate they work [08:13:40] <cmihai> 3260 or whatever [08:14:01] <ylon> just wanting to be sure that it uses the same port that I will try in mac os x as well [08:14:17] <ylon> I want to mate up the configs so that they match and I'll see where the problem lies [08:14:22] *** yongsun has quit IRC [08:14:33] <cmihai> iscsitadm list target -v [08:14:35] <cmihai> On the target [08:15:08] <boyd> ylon: Dude... I think Tempt has been eating some bad shrooms. I so don't know that much about iSCSI [08:15:21] <ylon> well, I did that but don't see a port listed, unless it is embedded [08:15:31] <ylon> boyd: :) [08:15:46] <ylon> iSCSI Name: iqn.1986-03.com.sun:02:b6dea0f0-bfe5-ed8e-ca52-b88bbf17990f [08:15:57] *** danv12 has joined #opensolaris [08:16:25] <cmihai> ylon: so do a netstat :P [08:17:01] <ylon> er, that's a novel idea :D [08:17:11] <ylon> now if I can figure it out on solaris :D [08:17:26] * cmihai pokes ylon [08:17:45] <ylon> please don't make me pull out the trout [08:17:46] *** laca has joined #opensolaris [08:18:40] <cmihai> prefetch.net/presentations/SolarisiSCSI_Presentation.pdf read this or something :P [08:18:45] <ylon> looks like it is 3260 [08:18:52] <cmihai> No shit :-) [08:19:07] <ylon> k, I'd better go to bed, I'm beyond trouble :P [08:19:18] <ylon> l8r folks and thanks [08:21:11] <cmihai> Uhum. Mkey, bye. [08:21:43] <cmihai> opensolaris.org/os/community/os_user_groups/frosug/iscsi/iscsi.pdf this too looks good. 
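[Editor's note] cmihai's target-side commands and ylon's port question, pulled together; the pool name `storage` is cmihai's example:

```shell
# Export a sparse ("-s") 1TB zvol as an iSCSI target:
zfs create -s -V 1T storage/iscsi
zfs set shareiscsi=on storage/iscsi

# Verify the target exists and see what the daemon is listening on;
# the iSCSI well-known port is TCP 3260, as ylon eventually confirmed:
iscsitadm list target -v
netstat -an | grep 3260
```

Note the split cmihai relies on: `iscsitadm` administers the target side, while `iscsiadm list target` (no "t") is the initiator-side view.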
[09:02:15] <asyd> ola [09:12:09] <quasi> morning [09:38:59] <razrX> morning [09:58:16] <trochej> http://www.flickr.com/photos/trochej/707652697/ [09:58:47] <e^ipi> erm... okay [09:58:58] <e^ipi> my laptop runs nevada as well... *shrug* [10:44:58] <Fish> hello [10:48:01] <trochej> 'lo [10:54:48] <estibi> hello [11:07:54] <timsf> Morning everyone [11:08:40] <Atomdrache> So I go to either the command line mode or the Open Boot PROM on an e450 running SXCE, and on occasion the display gets all hosed.
The font is bigger, only every few horizontal lines of the characters are displayed, and there are four instances of the prompt arranged horizontally on the screen. Does anybody know what's going on here or how to fix it? [11:09:25] <richlowe> mornin' timsf. [11:09:45] <Doc> the drip tray is full [11:09:52] <quasi> ;) [11:10:06] <timsf> Happy Independence Day! [11:10:09] <richlowe> Doc: still doing support, wherever you are now? [11:10:17] <Doc> Cisco, and no [11:10:20] <richlowe> Pity. [11:10:21] *** Trisk[laptop] has joined #opensolaris [11:10:30] <richlowe> the thought of people paying you for help always amused me. :) [11:10:51] <Doc> that's why i had to stop... i couldnt help laughing in their faces [11:11:39] <Doc> and anyway - my response was valid [11:11:51] <Doc> odds are he's got a memory leak, and it'f fulled the drip tray [11:12:02] <Doc> err... it's filled even [11:12:33] <vmlemon> Was today's flavour of the day Java? ;) [11:13:05] *** danv12 has quit IRC [11:14:38] *** dunc has joined #opensolaris [11:15:39] <Doc> nah.. java empties it's own drip tray...
eventually [11:15:56] *** kloczek has joined #opensolaris [11:18:14] <quasi> Doc: and if it doesn't empty it fast enough, it'll get killed ;) [11:18:14] *** tsoome_ has quit IRC [11:18:26] <vmlemon> I wonder if Sun could go into making coffee machines, since they already have the coffee [11:18:43] *** Triskelios has quit IRC [11:19:19] <Doc> they could do it, but it'd probably be cheaper to buy a coffee shop than one of their coffee makers :) [11:19:30] *** Snake007uk has joined #opensolaris [11:21:17] *** mman has joined #opensolaris [11:21:26] <mman> hi all [11:21:44] <vmlemon> Of course, they could code the firmware in Java [11:22:50] <vmlemon> But you'd have to wait another 20 minutes, just for it to turn on [11:24:58] * vmlemon loads Mono into the Sun coffee machine ;) [11:26:18] <quasi> vmlemon: there's always the HTCPCP to implement if you're into the whole coffe brewing thing [11:26:50] <vmlemon> I just use a jar of instant, and a kettle ;) [11:26:56] <vmlemon> So much quicker, and cheaper [11:27:40] <quasi> ah, so you don't really like coffee ;) [11:28:15] <vmlemon> Brewed coffee is great, if you have the time and equipment, though [11:28:29] <vmlemon> We used to have a coffee maker, at one time [11:28:57] <vmlemon> Although it didn't take that long to make [11:29:26] <vmlemon> Would be great to time how long it takes to make both ways, though [11:30:12] *** calAFK is now known as calumb [11:30:47] *** cmihai has quit IRC [11:32:40] <vmlemon> I've never really counted, but on average, I get through 5 or 6 cups a day [11:35:13] <vmlemon> It's not that I don't like it, though [11:36:53] *** Trisk[laptop] is now known as Triskelios [11:37:00] <quasi> I just put a filter with some freshly ground beans straight on the mug - which might take 3 mins to run through, but doesn't make much of a time difference because it still takes a while longer before getting cold enough to drink [11:49:19] *** deather_ has joined #opensolaris [11:56:45] *** tsoome has quit IRC 
[12:09:10] <Chihan> Hello,everybody:D [13:44:38] <mocelle> hi [13:44:52] <mocelle> i need help [13:45:06] <mocelle> ist anybody here? [13:45:16] <Doc> no [13:45:25] <Doc> everyone left about 30 mins ago [13:45:26] <mocelle> great [13:45:27] <Pietro_S> nobody here ;-) [13:45:41] <Cyrille> completely empty.
[13:45:49] <quasi> none at all [13:46:04] <NeZetiC> nothing to see [13:46:07] <mocelle> i wanna set up opensolaris with dell power edge 1550 [13:47:22] *** calumb has quit IRC [13:47:55] *** mocelle has quit IRC [13:49:49] *** jambock has joined #opensolaris [13:57:14] *** xeon_ has left #opensolaris [13:58:22] *** MikeTLive has joined #opensolaris [14:00:01] *** obsethryl has joined #opensolaris [14:04:51] *** master_o1_master has joined #opensolaris [14:06:20] <Pietro_S> strange guy ... [14:07:18] *** bor1 has quit IRC [14:07:22] <trochej> Tru [14:08:25] <kszwed> he must have tripped the circuit breaker when powering on the dell. [14:08:43] <JWheeler> hehe [14:08:45] <trochej> Probably [14:08:52] <trochej> Hmm [14:09:02] <trochej> Nex insfrastructure meeting for my corpo is in Dallas [14:11:22] <Peanut> The latest MacBook Pro has an Nvidia GeForce 8600m GT chipset - would that make it more likely to have proper accellerated X and OpenGl while running OpenSolaris on one of those? [14:12:02] <quasi> Peanut: what does xorg say? [14:12:16] *** cast has joined #opensolaris [14:12:32] <Peanut> quasi: good question, thanks [14:12:46] <quasi> ;) [14:13:00] * Peanut still has to get used to the whole idea of Xorg, sorry :-) [14:14:00] <coffman> Peanut: yes, since there are nvidia bin driver for it [14:14:09] <cast> hmmm, what is ON? [14:14:26] <quasi> I couldn't make xorg work with my nvidia 6150, but xsun worked [14:14:41] <coffman> quasi: uhm? [14:14:53] <Peanut> Hmm.. I dropped my MacBookPro down the stairs last week, so being able to run ON on it would be a point in favour of replacing it instead of fixing it. [14:15:13] <timsf> cast, Operating System / Networking [14:15:20] <timsf> http://www.opensolaris.org/os/community/on/ [14:15:25] <Peanut> quasi: it says: "Due to bugs in the documentation toolchain, documentation for this release is not available online at this moment" :-p [14:15:35] <cast> ahh, thanks! 
[14:15:38] *** aruiz has quit IRC [14:15:41] <Peanut> The press release is available though *lol* [14:15:56] *** master_of_master has quit IRC [14:16:58] <quasi> ;) [14:17:15] *** Vanuatoo has quit IRC [14:17:24] <sporq> http://www.youtube.com/watch?v=yJUEULWEP9c [14:17:37] *** aruiz has joined #opensolaris [14:26:11] *** leal has joined #opensolaris [14:27:24] <leal> i did a install of solaris 10 u3, and the system can't boot... there is no fcp 64 module (qlogic) [14:27:34] <Tempt> That sounds odd. [14:27:47] <Tempt> /kernel/drv/sparcv9/fcp [14:27:51] <Tempt> /kernel/drv/amd64/fcp [14:27:56] <leal> i did a copy from another machine (same hardware), but now the message is: Cannot mount root path. [14:27:57] <Tempt> Sound be there. [14:28:07] <Tempt> What sort of machine? [14:28:26] <coffman> sporq: seen "shadow army" ? [14:28:41] *** carbon60 has joined #opensolaris [14:28:46] * Tempt sets about installing Linux on his SUNPCi. [14:28:47] <leal> poweredge. the machine is not the problem... i did a install of u2 without problems... [14:29:07] <Tempt> Poweredge? Booting from fibre channel? [14:29:19] <leal> Tempt: yes. [14:29:34] <Tempt> Wow. [14:29:35] <Tempt> Changing world. [14:29:43] <Tempt> What sort of storage? [14:29:49] <leal> EMC [14:30:02] <Tempt> Obviously you've made sure the LUNs are visible to the host? [14:30:26] <leal> Tempt: thanks, but the problem is not the hardware... i want to know how can i config the root path to the kernel... [14:30:51] <leal> Tempt: without that, how would i install?? [14:31:21] <leal> Tempt: it's a solaris bug, i want to know how to fix. [14:31:30] *** jambock has quit IRC [14:31:45] <Tempt> Log a case with Sun if you have a support contract, they'll nail it pretty quickly. [14:32:01] <Tempt> It is even managing to boot the kernel, or choking in GRUB somewhere? [14:33:05] <leal> "Cannot mount the root path" is a kernel message... [14:33:44] <Tempt> Okay. 
[14:33:52] <Tempt> So the kernel is up, but it can't mount root [14:33:55] <Tempt> boot it off a cd [14:33:59] <leal> the initial problem was the fcp missing driver... the install process was broken... but now that i put the fcp driver there, i need to configure the root path [14:34:04] <Tempt> check /etc/system and /etc/vfsta [14:34:06] <Tempt> vfstab [14:34:09] *** sparc-kly has joined #opensolaris [14:34:10] *** ChanServ sets mode: +o sparc-kly [14:34:25] <Tempt> And I can't think why you wouldn't get the fcp driver. [14:34:30] <Tempt> Did you do a "Full Install + OEM"? [14:34:51] <leal> Tempt: right. [14:34:52] <Tempt> and did you drop the appropriate config file in for fcp? (err, fcp.conf sounds about right) [14:35:17] <Tempt> name="fcp" parent="pseudo" instance=0; [14:35:30] <Tempt> That's the default line in fcp.conf for me, just in case you missed it. Everything else is comments. [14:36:11] <leal> Tempt: there is no fcp.conf file on the machine that i did the copy. and there, works without problems... [14:36:16] *** sniffy has quit IRC [14:37:19] <Tempt> Hmm. [14:37:35] <Tempt> Check sd.conf on the machine you copied it from? [14:37:36] *** monzie has quit IRC [14:37:37] <Tempt> ssd.conf? [14:37:50] <leal> Tempt: where? [14:37:51] <Tempt> Disable MPXIO? [14:37:59] <Tempt> /kernel/drv/sd.conf [14:38:02] <Tempt> /kernel/drv/ssd.conf [14:38:10] <leal> 32? [14:38:20] <Tempt> 32? [14:38:21] <leal> the kernel is 64 [14:38:28] <Tempt> Oh, I'm on SPARC. [14:38:43] <Tempt> There isn't a separate config file location. I don't have any x86 boxes to check on. Err, x64 boxes. [14:39:59] <leal> the rootfs should be on boot_archive i guess... i dont know the solaris boot procedure very well... [14:40:12] <leal> i did a bootadm update-archive without luck. [14:40:45] <quasi> when does it stop? before or after hitting the grub menu? [14:41:03] <Tempt> quasi: He said he's getting a cannot mount root, that would imply after the kernel has booted.
[14:41:12] <leal> yes [14:41:13] <quasi> true [14:41:59] <quasi> leal: when I hit those, I try to boot the failsafe, mount under /a and update the boot arch [14:42:08] <leal> i'm thinking in do a reinstall... and see if the bug repeat. [14:42:26] <leal> quasi: i did that. [14:42:31] <Tempt> Make sure you do a full+OEM install [14:42:40] <Tempt> (catches all the drivers) [14:42:50] <quasi> leal: with the right -R ? [14:43:06] <leal> quasi: from the manual :) [14:43:20] <quasi> which says? [14:43:53] <leal> Tempt: i did a full+OEM and i see the driver been installed on progress bar (was the last thing to be installed)... so, i think there is the problem (bug) [14:44:11] <Tempt> It would appear to be the case. [14:44:11] <leal> bootadm update-archive -R /a [14:44:24] <Tempt> I thought EMC tried to force everyone to use Emulex HBAs, anyway. [14:44:53] *** bengtf has joined #opensolaris [14:45:06] <quasi> leal: looks about right [14:45:37] <leal> Tempt: fcp and qla2200 drivers installation messages, was the last thing in installation. after that the installation finish, and on reboot, hang... [14:46:22] <leal> sometime ago i was able to mount the boot_archive to see the files... now, i dont remember the command to do that... [14:46:23] <quasi> doesn't it show where it is trying to boot from if you boot with -v? [14:47:11] <leal> i did a boot -kd, and moddebug, but it did not show that... i will try -v [14:47:32] * quasi doesn't usually try to boot x86 off fc [14:47:33] *** Vanuatoo has joined #opensolaris [14:47:59] <leal> quasi: ok, thanks anyway. [14:48:34] <leal> i will do a reinstall, if the problem occur again, i will post on mailing list. [14:48:49] <quasi> with the debugger you should be able to dig out what it is booting from [14:49:34] <leal> quasi: like i said, the messages was "cannot mount root path", without a path :) [14:49:39] <leal> maybe some option...
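[editor's note] The failsafe procedure quasi and leal are trading above can be sketched as one sequence. This is a sketch only: the root slice `c2t0d0s0` is a made-up device, and the driver copy assumes the failsafe environment (or install media) carries a working `fcp` binary and `fcp.conf`.

```shell
# From the GRUB menu, pick the "Solaris failsafe" entry, then:
mount /dev/dsk/c2t0d0s0 /a                         # unbootable root under /a (device is an example)
cp /kernel/drv/amd64/fcp /a/kernel/drv/amd64/fcp   # drop in the missing 64-bit fcp driver
cp /kernel/drv/fcp.conf /a/kernel/drv/fcp.conf     # and its config file (default line: name="fcp" parent="pseudo" instance=0;)
bootadm update-archive -R /a                       # rebuild the boot archive for the alternate root
umount /a
reboot
```

The `-R /a` is the detail quasi is probing for: running `bootadm update-archive` without it rebuilds the failsafe environment's own archive, not the broken root's.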
[14:50:32] <quasi> leal: the other option would be fiddling with the grub settings [14:50:39] <leal> in fact, without the debugger i cant see anything, the machine reboots soon after the kernel messaegs. [14:51:31] <leal> quasi: i will try to do something now, and see if i can get more facts. [14:52:09] <Tempt> Bloody #linux types aren't very helpful. [14:52:34] <Tempt> Anyone *here* got a Linux box? I need a kernel compiled. [14:52:37] <Tempt> Please? [14:53:09] <Doc> screw you, and the linux box you rode in on! [14:53:21] *** calumb has joined #opensolaris [14:53:21] <Tempt> That's about it. [14:53:35] <Tempt> All I wanted to do was actually use my SunPCi for something. [14:53:47] <Cyrille> you need someone else to compile your kernel? I thought this was what the linux experience was all about! [14:53:52] <quasi> Tempt: download a binary dist [14:53:56] <vmlemon> Tempt: x86 or x86_64? [14:53:57] * Tempt screams [14:54:05] <Tempt> SunPCi needs to be netbooted with nfsroot [14:54:15] <Tempt> I need a Linux machine to compile the kernel on. [14:54:27] <Tempt> Without an nfsroot kernel, I can't boot Linux. [14:54:30] <dlg> why linux? [14:54:33] <Tempt> Without Linux, I can't compile the kernel. [14:54:41] <Tempt> dlg: Didn't want to run Windows. [14:54:49] <quasi> Tempt: brandZ ;) [14:54:52] <Tempt> dlg: Never tried nfsroot'ing FreeBSD before. [14:54:59] <Tempt> no brandZ on SPARC. [14:55:17] <Tempt> The SunPCi is an intel machine in a PCI card that lives in my SPARC machine. [14:55:18] <asyd> Tempt: i have a debian 2.6.17 if you want [14:55:18] <dlg> Tempt: probably a lot easier though, and better documented [14:55:35] <Tempt> I've definately done nfsroot on Linux ages ago. Back on kernel 1.2.13 [14:55:41] <Tempt> and it worked and all. [14:55:44] <vmlemon> Ubuntu on x86, if it helps [14:56:05] <Tempt> I'm pretty sure Ubuntu uses a heavily modified kernel. 
[14:56:07] <dlg> Tempt: that was like, 4 rewrites ago [14:56:24] *** cmihai has joined #OpenSolaris [14:56:31] <Tempt> Oh, this is so hard. [14:56:40] *** calumb has quit IRC [14:56:53] *** calumb has joined #opensolaris [14:57:51] *** calumb is now known as calLNCH [14:58:55] <quasi> Tempt: anyone in here helping you would of course be scared of being considered a traitor ;) [14:59:14] <Tempt> You can't run Solaris x86 on the SunPCi, so it isn't really treachery. [14:59:36] <quasi> what, not enough memory? [15:00:04] <Tempt> Not a whole lot of RAM. [15:00:20] <Tempt> Also, I don't think there would be a way to get it to boot. [15:00:30] <Tempt> With Linux, you can boot DOS and then use loadlin.exe to fire up a kernel with nfsroot parameters. [15:00:35] <CIA-19> ml149210: 6572981 BGE need to support NetXtreme 5753 (onboard NIC for HP Compaq laptop NX9240) [15:00:42] <vmlemon> Does the SunPCi have its own internal HDD? Or does it use a partition on the host? [15:00:42] <Tempt> I don't think you can do that with SolX86. [15:01:08] <Tempt> It uses a virtual partition from the host which is only supported by BIOS and Windows. And only Windows because Sun provides a pre-install driver kit thingy. [15:01:27] <vmlemon> I see [15:01:57] <Tempt> I'm d00med. [15:02:02] *** Trisk[laptop] has joined #opensolaris [15:03:38] <Trisk[laptop]> Tempt: you should be able to build in an lx-brand zone... [15:03:46] <dlg> Tempt: that virtual partition thing sucks [15:04:26] <Tempt> Trisk[laptop]: Not on any of my machines I can't. [15:04:30] <dlg> which sunpci is it? [15:04:40] <Tempt> dlg: An old one. IIpro [15:05:52] *** cmihai has quit IRC [15:05:58] <ylon> having some issues with iscsi still, the iscsi target is discovered fine by the initiator it appears, but the volume never appears for initialization/formatting. [15:06:03] <dlg> the disk image is just like a hard disk image? [15:07:26] <Tempt> dlg: In a proprietary format, yes. 
[15:07:32] <dlg> pwned [15:07:39] <Tempt> To be honest, the SunPCi software is remarkably shite. [15:10:02] <dlg> man, itd be fun to play with one of those [15:10:14] <dlg> i think itd be easy to boot netbsd or openbsd on it [15:10:23] *** Triskelios has quit IRC [15:10:29] *** jambock has joined #opensolaris [15:10:46] <ofu> is there still work going on on zfs-crypto? [15:11:27] <quasi> ofu: the design doc got posted Monday [15:12:31] <quasi> ofu: http://opensolaris.org/os/project/zfs-crypto/design_review/ [15:12:36] *** edwardocallaghan has joined #opensolaris [15:12:48] <edwardocallaghan> hey [15:13:01] <edwardocallaghan> back in Allbry [15:13:15] <edwardocallaghan> built my self a server [15:13:26] <edwardocallaghan> well working on it at the moment [15:13:42] <edwardocallaghan> and on a wicked fast 56k [15:15:29] <vmlemon> Wow, I bet 56kbps is blazing speed ;) [15:16:08] <vmlemon> Highest I've ever got with an analogue "modem" was 50k [15:16:25] *** sparc-kly_ has joined #opensolaris [15:18:03] *** sparc-kly has quit IRC [15:18:12] <vmlemon> ("modem" being one of those evil proprietary, software-defined ones) [15:20:30] *** cmihai has joined #OpenSolaris [15:21:39] *** cmihai has quit IRC [15:23:43] *** sartek has joined #opensolaris [15:24:26] <ofu> quasi: ah, work going on but it seems to be far from being usable [15:24:47] *** Giaco has joined #opensolaris [15:25:23] *** cypromis_ has joined #opensolaris [15:26:50] *** DataStream has joined #opensolaris [15:28:39] *** trisk__ has joined #opensolaris [15:28:39] <quasi> ofu: more like preparing to start work [15:29:12] <quasi> ofu: lofi crypt is probably a better choice for the near future [15:30:11] *** trisk__ is now known as Triskelios [15:30:12] *** mega has quit IRC [15:31:20] <Triskelios> does someone handy with DTrace know why I can't seem to get the fbt provider to do anything with fbt::stat:entry, but syscall::stat:entry works? [15:32:09] <movement> Triskelios: there's no stat() function? 
[15:32:58] <movement> hmm there is though [15:33:23] <movement> Triskelios: what does dtrace -l -n 'fbt::stat:entry' say? [15:33:35] *** cmihai has joined #OpenSolaris [15:33:42] *** mega has joined #opensolaris [15:33:43] *** cmihai has quit IRC [15:34:12] *** cmihai has joined #OpenSolaris [15:34:19] <Triskelios> lists ID as 8549, in genunix, which is expected.. [15:34:29] <movement> so what's the problem exactly? [15:36:39] <Triskelios> I'm doing a printf on entry but nothing happens... it works if I match on syscall instead of fbt, everything else being the same [15:36:39] *** Trisk[laptop] has quit IRC [15:38:02] <movement> x86? [15:38:29] <Triskelios> maybe I can't use execname? stat should be the entry point, though... [15:39:23] <movement> I'm bemused that syscall::stat:entry works for you [15:40:12] <Giaco> Warning - Invalid account: 'pxy' not allowed to execute cronjobs [15:40:34] <Giaco> what must I do ? [15:40:41] *** cypromis has quit IRC [15:40:51] <quasi> Giaco: probably an expired password [15:41:46] <movement> ah [15:43:23] <Plouj> username: [15:43:28] <Plouj> username: _ [15:43:30] <movement> Triskelios: you need to trace xstat... [15:43:34] <movement> it's a bit twisty around there [15:43:45] *** IAW1992 has joined #opensolaris [15:47:32] *** Trisk[laptop] has joined #opensolaris [15:48:04] *** ylon has quit IRC [15:49:36] <Pietro_S> huh, I read that zfs-crypto review and it looks like they also thinks about zfs crypto swap ... Does zfs swap work? Also does anyone know why someone should do zfs swap? I think it's quite bad idea ... 
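[editor's note] Triskelios's puzzle can be shown with one-liners (a sketch, assuming an x86 Solaris box with DTrace; the probe bodies are illustrative):

```shell
# The fbt probe exists (dtrace -l lists it in genunix) but may never fire,
# because on this path the stat(2) call is serviced by xstat(), not stat():
dtrace -n 'fbt::stat:entry { printf("%s\n", execname); }'

# The syscall provider instruments the system-call boundary itself, so it
# fires regardless of which kernel function does the work:
dtrace -n 'syscall::stat:entry { printf("%s\n", execname); }'

# Per movement's suggestion, trace the function actually called:
dtrace -n 'fbt::xstat:entry { printf("%s\n", execname); }'
```

The general lesson: `fbt` probes a specific kernel function, so a rename or versioned wrapper (as with the `stat`/`xstat` family) silently leaves the probe idle, while `syscall` probes stay stable.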
[15:50:09] *** SymmAirport has quit IRC [15:50:18] <quasi> there's a very good reason to encrypt swap [15:50:25] <cmihai> quasi go ask MacOS :P [15:50:33] <cmihai> MacOS file vault didn't encrypt swap [15:50:41] <cmihai> So the key was always there for people to find :-) [15:51:07] <quasi> cmihai: nah, I'm not a steve fanboy [15:51:29] *** xushi has quit IRC [15:51:45] <Doc> join #photogeeks [15:51:48] <Doc> blah [15:52:22] *** calLNCH is now known as calumb [15:54:41] *** mman has quit IRC [15:55:23] *** Triskelios has quit IRC [15:57:01] *** nachox has joined #opensolaris [15:57:13] <nachox> morning [15:57:18] *** trisk__ has joined #opensolaris [15:58:09] <Samy> Hi [15:58:18] <Samy> Anyone here using AMD64? Specifically, larger machnies. [15:58:47] <DerJoern> Galaxy 4600 - is that enough? [15:59:32] <Samy> How many CPUs? [15:59:51] <seanmcg> 16 [16:00:05] <Samy> 16 cores or 16 CPUs? [16:00:06] <quasi> seanmcg: 16 cores, right? [16:00:07] <DerJoern> 4 dual cores [16:00:26] <Samy> Ok, 8. Good enough :-) [16:00:34] <Samy> DerJoern: You mind running a simple test for me? [16:00:48] <Samy> DerJoern: It isn't CPU intensive, and it should only take 3-4 minutes. [16:01:26] <Samy> DerJoern: Looking to see effect of hypertransport on dynamically allocated memory. In Linux, atleast with shared pages there is quite a serious impact (getpid, gettimeofday, etc...). [16:01:30] <DerJoern> Samy: no, is is in produktion [16:02:03] <Samy> Ok. :-( [16:02:44] <quasi> Samy: I think solaris is somewhat smarter on that [16:03:14] *** edwardocallaghan has quit IRC [16:05:22] <Samy> quasi: Solaris doesn't have this shared page mechanism to begin with, no? [16:05:40] *** IAW1992 has quit IRC [16:05:46] <Samy> quasi: Additionally, Solaris has MPO which is geared towards sun4v AFAIK, which is about reducing memory bank contention. [16:05:51] *** Trisk[laptop] has quit IRC [16:06:02] <Samy> quasi: This is a case of general memory access overhead. 
[16:06:12] <Samy> quasi: * Due to hypertransport [16:06:43] <Samy> and the physical memory is linearly (and statically) chomped across sockets iirc. [16:06:47] <Pietro_S> I see reasons to encrypt swap, but I'm not sure to use zfs sawp at all, what's the advantage to have swap cached in memory and so on, also making snapshot of swap makes no sense ... [16:06:49] *** Cyrille has quit IRC [16:07:14] <nachox> Pietro_S: grow your swap? :) [16:07:24] <nachox> raidz your swap? [16:07:47] <quasi> checksum your swap [16:08:17] <quasi> Pietro_S: as for caching, that could be turned off [16:08:24] <quasi> compressed swap [16:08:34] *** SYS64738 has quit IRC [16:08:46] <nachox> compressed swap sounds cool [16:08:54] *** capitano__ has joined #opensolaris [16:09:07] <quasi> nachox: "memory doubler" ;) [16:09:19] *** Giaco is now known as SYS64738 [16:09:32] <nachox> hehe, talk to marketing they might like that :P [16:09:44] <Pietro_S> uh, I don't think that compressed swap has any advantages, but raidz maybe has some ... [16:10:07] <twincest> Pietro_S: because it means swap is just another zvol, instead of having to use different slices [16:10:53] <quasi> Pietro_S: depending on processor/disk speed, you may get better performance by compressing on disk [16:12:10] *** LuckyLuk1 has joined #opensolaris [16:12:43] <Pietro_S> quasi: that would need to be on very slow disks ... and lot's of free cpu power ... 
[16:12:56] <quasi> Pietro_S: not uncommon [16:13:51] <Pietro_S> quasi: if you have cpu super power, you mostly have also *plenty* of ram [16:14:33] <nachox> decompressing when you're using the lz based algorithm is not very pressesor intensive [16:14:50] *** CIA-19 has quit IRC [16:15:34] <quasi> Pietro_S: you're forgetting one thing about swap - if a process allocates a pile of memory (not uncommon with java) and doesn't use it, then that will compress quite well [16:16:21] *** edwardocallaghan has joined #opensolaris [16:17:00] <Pietro_S> yep, lot's of zeros, thanks for pointing on it, so the nexxt question is does ON support zfs swap ight now? [16:17:16] <Pietro_S> s/ight/right [16:17:59] *** calumb is now known as calAFK [16:18:42] <cast> newbie question: i have looked around, but i can't seem to see anywhere that lists a summarized changelog of what went into SXCE 67, or 66, or any SXCE release. while SXCE 67 was announced in the forums/mailing lists i didn't see any description of what changed from 66->67, where would one look for such information? [16:18:57] <cmihai> Check the onnv flag days [16:19:35] <quasi> Pietro_S: I think it is set for soon on the roadmap - well before zfs crypto gets there [16:20:12] <Tempt> HA [16:20:42] <Tempt> Not surprisingly, nfsroot doesn't exist in Linux kernel anymore, even though the docs are still in the kernel. If it does exist, a) the docs are wrong on boot options b) it doesn't work with the old boot options. HA! [16:21:08] <cast> cmihai: thanks :) [16:23:17] *** Cyrille has joined #opensolaris [16:23:47] *** CIA-26 has joined #opensolaris [16:24:52] <quasi> Tempt: I actually went as far as checking my old kernel config and didn't find it [16:25:36] <Tempt> Yet it still exists according to Documentation/nfsroot.txt [16:25:39] <quasi> Tempt: take a look at http://www.gentoo.org/doc/en/diskless-howto.xml [16:26:44] <Tempt> Yeah, menuconfig won't give me the option so I just edited .config and popped the magic in there. 
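[editor's note] The swap-on-ZFS ideas in the thread (zvol-backed swap, compression) can be sketched as follows. Dataset names are invented, and whether the ON build of the day supports swapping to a zvol reliably is exactly Pietro_S's open question:

```shell
zfs create -V 2G rpool/swapvol            # fixed-size zvol to back swap
zfs set compression=lzjb rpool/swapvol    # the "compressed swap" idea, lzjb being the cheap LZ-family codec
zfs set checksum=on rpool/swapvol         # checksummed swap, as quasi notes
swap -a /dev/zvol/dsk/rpool/swapvol       # add the zvol as a swap device
swap -l                                   # verify it is in use
```

Growing swap later is then just `zfs set volsize=4G rpool/swapvol` plus re-adding, which is the "grow your swap?" advantage nachox is pointing at.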
[16:27:09] *** calAFK is now known as calumb [16:27:26] <nachox> Tempt: not wise [16:27:28] *** LuckyLuke has quit IRC [16:28:31] <Tempt> What sort of OS requires a kernel compile for something so simple? [16:28:43] <asyd> linux, hurd [16:28:44] <nachox> hmm, linux? [16:31:04] <edwardocallaghan> edward wonders what ever happend to MINIX ? [16:31:27] <vmlemon> It's still under development [16:31:28] <nachox> Tempt: keep in mind that you wouldnt actually do that with a redhat if you value your service contract :) [16:31:38] *** calumb is now known as calSHOP [16:31:41] <Tempt> I'd never deploy Red Hat. [16:31:57] <Tempt> I'd never take a job looking after Red Hat unless it paid so much money I didn't care. [16:32:28] <nachox> what are you running then? [16:32:31] <nachox> debian? [16:32:35] <Tempt> HackJob 2000 [16:32:36] <Tempt> heh. [16:32:41] <nachox> lol [16:32:46] <Tempt> I'm just trying to build a kernel at the moment, nothing else quite yet. [16:32:49] <Tempt> It'll be Slackware 12 later. [16:32:58] *** derchris has quit IRC [16:33:03] *** derchris has joined #opensolaris [16:33:21] <edwardocallaghan> Tempt:Sorry I did not have time for drinks maybe soom time soon... ;) [16:33:24] <trochej> Tempt: Linux kernel? Why would you do that? [16:33:38] <Tempt> To run on my SunPCi card, of course. [16:33:43] <trochej> Oh [16:33:48] * Tempt grins. [16:33:51] <trochej> Why would you do THAT? [16:33:55] <Tempt> Not much point in running Windows on it, is there? [16:34:12] <edwardocallaghan> Slackware is about the only Linux worth looking at for a server [16:34:30] <Tempt> Well, I used Slackware in the 90s and I want to use it again today. [16:34:40] <Tempt> It doesn't feel like a kiddie OS to me. Well, it didn't back then. [16:34:58] <nachox> too bad it still doesnt include pam [16:35:02] <Tempt> And it didn't have me typing /etc/sysconfig/network-flooble-wurble/ip-up monkeytastic noodlebar klunker. 
[16:35:15] *** obsethryl has quit IRC [16:35:15] *** loke has joined #opensolaris [16:35:16] <vmlemon> Hah [16:36:08] <Tempt> Besides, I've seen the best efforts of a Red Hat support contract in Australia. [16:36:13] <Tempt> If it doesn't exist in a GUI, it isn't supportable. [16:36:37] <Tempt> holy shit: it mounted! [16:36:58] <nachox> how will they support their labeled os if they only support stuff with a gui? [16:37:05] <loke> Well, in latter years, Sun support has dropped like a friggin' rock too [16:37:50] <Tempt> To be honest, I use our Sun contracts for two things: Spare parts delivery and handing our VxVM induced panics over to Symantec support so I don't have to talk to symantec. [16:38:18] <nachox> and downloading patches i assume [16:39:11] <loke> I worked for Sun Service for several years some years back. The service we provided was kilometres above what I'm getting now, as a customer. They won't even let me speak to the backline people directly anymore. Wtf?! (and the frontline is not even Sun staff) [16:39:31] <loke> Thankfully, I still have contacts at Sun I can call directly [16:39:54] <Tempt> Hey, I had to give an impromptu lesson on 880 maintenance to one field guy they sent out from the outsource company. [16:40:05] <Tempt> Still, he was friendly, polite, had all the right parts and was eager to learn. [16:40:23] <Tempt> First week on the job. [16:40:47] <loke> Tempt: wt... the on-site people should never be sent out to service a machine on which they've never had training [16:40:53] <loke> damn, the strandards have dropped [16:40:59] <loke> standards [16:41:32] <Tempt> heh. [16:41:47] <nachox> found a bug in sun's docs :) [16:41:51] <Tempt> IBM is even worse. [16:41:59] <Tempt> They send out printer guys to fix RS/6000s [16:42:14] <Tempt> and unlike the Sun guys, they aren't eager to learn, they just sit on their phone back to someone in the backline and break stuff. 
[16:42:19] <vmlemon> It must be bad, if you have to give the technicians training on-the-job, when they're supposedly "trained" already [16:42:20] <loke> nachox: there should be a "report a problem" link on the bottom of the doc page. [16:42:31] <loke> nachox: I've had very swift reply and fix when using that one [16:42:39] <loke> at least their doc department seems to be unharmed [16:42:55] <quasi> Tempt: IBM sends you a guy with a hardware manual to work from - if he has to be able to read faster than 2 words/minute, the price doubles [16:42:58] <twincest> you know, i don't think much of their product, but i found mysql support to be surprisingly good [16:43:15] <twincest> i suppose it has to be to fix all the problems ;-) [16:43:29] <loke> quasi: Hah... Have you had the misfortune to deal with IBM support people, doing a Sun contract? [16:43:40] <Tempt> Oh, man, that's bad. [16:43:42] <Tempt> I have. [16:43:49] <quasi> loke: no, that was for ibm people doing ibm gear [16:43:52] <Tempt> Including the beautiful exchange: [16:44:03] <Tempt> Me: Have you worked on these machines before (v440) [16:44:12] <Tempt> Him: I've worked on more of these than you've had hot dinners, mate. [16:44:16] <loke> quasi: I've sat at a meeting concenring a large bank customer, where they present their "performance analysis". Basically it was a CPU utilisation graph... but here's the kicker... [16:44:21] <Tempt> Me: Fair enough, although I'm not a salad man. [16:44:31] <loke> quasi: They had one pillar per day... the CPU usage averaged over the day [16:44:43] <Tempt> Him: *fumbles around a bit*. Fark, she's stone dead. I thought you said it needed a replacement CPU? It won't even turn on. 
[16:44:44] <loke> then we were told that everything was fine [16:44:55] <Tempt> Me: *reaches over, turns key and powers it on* [16:45:11] <loke> ...I looked at the raw vmstat data and saw that they had a run queue of 20+ or so during peak hours [16:45:12] <quasi> loke: hah, I've had to put extra cpu boards in e25k based on similar graphs [16:45:16] <nachox> "cause: you attempted to boot a system running a 32 bit sparc or x86 kernel with a disk greater than 1b. solution boot a system with a 64 bit sparc or x86 kernel with a disk greater then 1tb" how can that be considered a solution? how is the solution different than the problem for x86? [16:45:20] <Tempt> That's my thing about IBM support. [16:45:33] <loke> nachox: One bit!? [16:45:34] <loke> :-) [16:46:16] <nachox> loke: actually the "attempt" part :P [16:46:55] <loke> 1b is one bit :-) [16:47:13] <nachox> s/1b/1tb/ [16:47:16] <nachox> ops :) [16:47:28] <loke> to my knowledge there is no ISO prefix "t", so "tb" makes no sense either :-) [16:48:02] <loke> I think you want TB, Tb is terabit :-) [16:48:03] <nachox> terabyte, dont make type more! :) [16:48:33] <twincest> nachox: i think it means "a sparc or x86 system, with a 32-bit kernel" and the solution is to use "a sparc or x86 system, with a 64-bit kernel" [16:49:05] <nachox> they mean an x64 with a 64 bit kernel [16:49:26] <twincest> that's what i said [16:49:43] <nachox> or amd64 or whatever they call them [16:50:19] <nachox> it's stupid but then again, i am too :) [16:51:06] <loke> nachox: you're stupid? [16:51:07] <twincest> mpt + 2TB problem is still not fixed :( (kinda funny since sun ships mpt hardware on x86) [16:51:15] <loke> nachox: Hurry! Apply for a job at IBM service! [16:51:34] <loke> I just realised that Solaris ships with a broken wget by the way [16:51:40] <loke> you can't wget a file larger than 4 GB [16:51:45] <nachox> loke: i am already.. err, i mean... 
not service, just solaris sysadmin [16:51:49] <loke> and after 2 GB it shows negative file size [16:52:14] <loke> (it crashes after 4 GB since the progress counter goes back to 0) [16:52:30] <nachox> loke: you wondered why sun splits solaris downloads in 600mbs chunks? [16:52:35] *** mlh has quit IRC [16:52:35] <loke> nachox: you work for IBM? [16:52:45] <loke> nachox: can't they just fix wget? :-) [16:52:58] <loke> nachox: but yeah, I actually did wonder that :-) [16:53:06] <nachox> no, i might though, just waiting for an interview [16:53:15] <loke> I thought they wanted to make it possible to fit on a CD perhaps [16:53:18] <vmlemon> It should be on the list of Sunisms [16:53:28] *** calSHOP is now known as calumb [16:54:10] *** bengtf has quit IRC [16:55:00] *** vmlemon is now known as Your [16:55:20] *** Your is now known as vmlemon [16:56:29] <Tempt> alright, my nfsroot slackware is able to boot and run init now. [16:56:49] <nachox> slackware rocks, try that with redhat :) [16:57:01] <Samy> haha [16:57:09] <Samy> So I was looking for an AMD machine :-P [16:57:27] <Samy> and this random guy decides to help me out, and well, it turns out he works for AMD. [16:57:30] <Samy> Rox and sox. [16:58:05] * vmlemon stuffs Samy's socks with rocks [16:58:41] <Samy> ;[ [16:58:48] <Samy> Reminds me, I should put my socks in laundry. [16:59:40] <nachox> Samy: ask him whether he got a comission for that or not [17:00:29] <Samy> nachox: What a polite question to ask. ;-] [17:00:32] *** Murmuria has joined #opensolaris [17:01:22] *** sstallion has joined #opensolaris [17:03:21] *** swmackie has joined #opensolaris [17:03:26] <vmlemon> Is there a way to obtain the total size of all the items in a directory from the command line> [17:03:29] <vmlemon> ? [17:03:35] <asyd> du -hs . ? [17:03:40] <vmlemon> Thanks [17:10:09] <PerterB> hmm, why might the SUNW.gds agent be not probing my app when it has Network_resources_used and Port_list set correctly (and no other Probe_Command set)? 
[17:10:31] <PerterB> if only the gds_probe source was in the code they released, or it was truss-able...... [17:10:47] *** deedaw has quit IRC [17:11:27] <Tempt> SUNW.gds is a little special. [17:11:37] <PerterB> indeed [17:11:48] <Tempt> It should probe though, I've used it to monitor all sorts of crazy things. [17:13:01] <PerterB> I thought it should too, but I ended up writing a minimal service for it to monitor that simply logs connections, so I'm pretty confident that it's not doing what it's meant to [17:14:25] *** postwait has joined #opensolaris [17:15:41] *** swmackie has quit IRC [17:15:46] *** bengtf has joined #opensolaris [17:21:05] *** sstallion has quit IRC [17:23:26] <Tempt> hmmm [17:23:29] <Tempt> init will boot [17:23:32] <Tempt> but it can't spawn a getty [17:23:35] *** Netwolf has joined #opensolaris [17:23:39] <Tempt> nor can I boot init=/bin/bash [17:23:54] <Tempt> It looks like I can't run any dynamic-linked binaries [17:24:42] *** Plaidrab has joined #opensolaris [17:25:06] * Plaidrab foolishly preps to dive into the Indiana lists. [17:25:17] *** cast has left #opensolaris [17:26:26] <nachox> will it mount / ? [17:29:06] *** SYS64738 has quit IRC [17:29:23] <Tempt> Yep, obviously, since it can find /sbin/init [17:29:27] *** estibi has quit IRC [17:29:53] <Tempt> and I made sure there was an /etc/fstab with an entry for proc [17:33:49] <nachox> hmm, maybe compiling busybox? i dont think it has deps other than libc and ld-linux [17:34:02] <nachox> so if init worked so should busybox [17:34:48] <Tempt> not a bad idea. [17:35:12] <Tempt> The problem really is that I'm stuck on an 80x25 text screen [17:35:17] <Tempt> No logging [17:35:23] <Tempt> can't work out what goes wrong further up the chain. [17:35:31] <Tempt> hang on [17:35:35] <Tempt> comment out the getties! [17:35:36] <Tempt> ... [17:36:33] <Tempt> aah, no /bin/sh can't be helping. 
[17:36:48] <nachox> /bin/sh is just a symlink to bash in linux [17:36:53] <Tempt> I know [17:37:02] <Tempt> Probably something the slackware installer creates [17:37:10] <Tempt> I'm working by untar'ing slackware packages and hacking ;) [17:37:16] <nachox> hehe [17:37:24] <Tempt> Indeed. [17:37:31] <Tempt> Wish I had a live slackware box to check things on [17:37:44] <Tempt> I got SunOS 4.1.4 running this way! [17:39:17] *** MattMan is now known as MattMTG [17:39:17] *** MattMTG is now known as MattAFC [17:40:31] <Tempt> aah, libc isn't in the core packages on slackware? [17:40:59] <edwardocallaghan> Tempt:Are you involved with driver things and opensolaris? [17:41:13] <Tempt> edwardocallaghan: Nope. [17:41:16] *** mega has quit IRC [17:41:23] *** Netwolf_ has joined #opensolaris [17:41:30] <Tempt> edwardocallaghan: Not my game, really. I leave writing drivers to the people who know what they're doing. [17:41:35] <edwardocallaghan> Who would you recomand I talk to mate? [17:41:53] <Tempt> Depends on what sort of driver you're talking about. [17:42:42] <edwardocallaghan> I got a report of my hardware in a tar.gz and i'm happy to email it to improve support and get the best out of mine and others hardware ;) [17:43:41] <Tempt> hmm. [17:43:42] <Tempt> Not sure there. [17:43:48] <Tempt> I'm sure someone will pipe up soon. [17:43:59] <nachox> Tempt: glibc-solibs is in /a and i think it is a required package (by the slackware terminology) [17:44:23] *** carbon60 has quit IRC [17:44:26] <nachox> glibc is in /l and it's only required if you're compiling programs [17:45:32] *** carbon60 has joined #opensolaris [17:45:40] <Tempt> hmmm [17:45:57] <Tempt> I just untarred *everything* in /a to make sure. [17:46:03] <Tempt> So I should have it. 
[17:46:13] *** carbon60 has quit IRC [17:46:15] <Tempt> Can I get the linux equiv of ldd /bin/bash [17:46:57] <nachox> ldd /bin/bash [17:47:00] <asyd> few seconds [17:47:07] <asyd> http://pastebin.ca/603146 [17:47:11] <asyd> ah hmm [17:47:12] <Tempt> thanks [17:48:18] <Tempt> crap, I don't have linux-gate, libdl or ld-linux.so... [17:48:36] <nachox> wait [17:49:47] <nachox> Tempt: this is from a slackware http://pastebin.ca/603149 [17:50:47] <Tempt> Hmm [17:50:51] <nachox> and you have both linux-gate and ld-linux.so otherwise you would not be able to run init [17:50:59] <Tempt> no linux-gate; no ld-linux ... [17:52:29] *** mlh has joined #opensolaris [17:52:55] *** Netwolf has quit IRC [17:53:04] *** RaD|Tz has joined #opensolaris [17:53:06] <nachox> linux-gate is an object exported by the linux kernel to every process memory, there is no file for it [17:53:55] <Tempt> Aah. [17:54:54] <nachox> samething for ld-linux [17:55:37] <Tempt> Righteo. [17:55:53] <Tempt> I think I might have to install slackware somewhere and then hack an install. [17:56:05] <edwardocallaghan> > Anyone willing to offer driver help ? < please leave a notice here and I will read the logs latter [17:56:19] <edwardocallaghan> Thank you.. [17:56:53] <edwardocallaghan> dclarke:I am going to try to have a _normal_ night [17:57:01] <Tempt> Is this perhaps caused by a lack of ld.so.cache? [17:57:16] <dclarke> edwardocallaghan : good man [17:57:23] <edwardocallaghan> dclarke:I will get back to ASAP [17:57:28] *** cmihai_ has joined #OpenSolaris [17:57:30] <dclarke> edwardocallaghan : were you able to login ? [17:57:31] <edwardocallaghan> heh [17:57:43] <edwardocallaghan> dialup at my mates house [17:57:51] *** cmihai_ has quit IRC [17:58:02] <dclarke> dialup works fine .. I used it lots [17:58:05] <dclarke> its real real stable [17:58:10] <edwardocallaghan> I just got some hardware to install my own OS on but no NIC driver for it [17:58:16] <dclarke> and if you use ssh with compression .. 
you hardly notice [17:58:18] *** cmihai has quit IRC [17:58:23] <edwardocallaghan> lol [17:58:32] <edwardocallaghan> cool [17:58:36] *** cmihai has joined #OpenSolaris [17:58:56] <edwardocallaghan> Although i don't know how to do that [17:59:33] <nachox> Tempt: it is a possibility. run ldconfig? [17:59:50] <Tempt> Can't run ldconfig, machine won't boot. [17:59:59] <Tempt> (it'll run init, the only static binary on the system!) [18:00:14] <EchoBinary> das boot! [18:00:52] <nachox> Tempt: ldconfig is statically linked in linux [18:01:15] <nachox> maybe init=ldconfig? :) [18:01:21] <Tempt> haha [18:01:33] <Tempt> I just edited my boot batch file before you typed that ;) [18:01:37] <Tempt> great minds think alike. [18:01:52] <edwardocallaghan> ok have a good night all [18:02:06] *** rawn027 has joined #opensolaris [18:02:12] <Tempt> night. [18:02:25] *** swmackie has joined #opensolaris [18:04:00] <Tempt> haha! read only filesystem. Better add "rw" to the nfs opts [18:04:02] *** iMax has quit IRC [18:04:05] <asyd> :) [18:04:17] <nachox> Tempt: ... [18:04:20] <Tempt> (you'd think rw would be a default for a rootfs mount ..) [18:04:44] <Tempt> hmmm [18:04:45] <Tempt> nfsroot=10.0.10.10:/chimera/root,rw [18:04:46] *** iMax has joined #opensolaris [18:04:56] <Tempt> still whines about read-only fs [18:05:18] <Tempt> It *should* be readable. [18:05:22] <Tempt> and writable. [18:05:51] <nachox> Tempt: what about the options you're passing your linux kernel? [18:06:17] <Tempt> loadlin chim ide=noprobe ide0=noprobe hda=noprobe root=/dev/nfs nfsroot=10.0.10.10:/chimera/root,rw ip=10.0.40.199:10.0.10.10:10.0.10.2:255.255.255.0:chimera:eth0:none init=/sbin/ldconfig [18:06:54] <Tempt> hmm, wrong netmask [18:07:06] <nachox> it cant be that [18:07:16] <Tempt> No, just noticed it, that's all. [18:07:20] *** edwardocallaghan has quit IRC [18:07:33] <Tempt> I've had this crap with Linux NFS before. One of the reasons I like to rant about linux having crap NFS. 
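[editor's note] One common cause of Tempt's "read-only filesystem" despite `,rw` on the client side: the Solaris NFS server maps the client's root user to nobody unless the export says otherwise, so root's writes fail even on a read-write mount. A sketch of the server-side share, using the path and client address from the discussion (option syntax per share_nfs; verify against your own setup):

```shell
# one-off share with write access and root mapping for the SunPCi client:
share -F nfs -o rw=10.0.40.199,root=10.0.40.199 /chimera/root

# or make it persistent: add the same line to /etc/dfs/dfstab, then
shareall
```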
[18:07:34] *** karrotx has joined #opensolaris [18:08:40] *** calumb has quit IRC [18:08:46] <renihs> ? i run alot linux boxes with root over nfs [18:08:49] <renihs> never was a problem [18:08:55] <Tempt> From a Solaris server? [18:09:00] <renihs> also yes [18:09:02] <Tempt> or a non-linux server in general? [18:09:20] <renihs> solaris nfs but also linux nfs (server) [18:09:23] *** calumb has joined #opensolaris [18:09:24] <Tempt> What mount options? [18:09:29] <renihs> didnt note there was a difference [18:09:30] *** syscalls has joined #opensolaris [18:09:39] <renihs> for the linux hosts? [18:09:46] *** yippi has quit IRC [18:09:56] <Tempt> yes. [18:11:12] <renihs> kernel /zod/zod-i686-vms ip=dhcp root=/dev/nfs nfsroot=192.168.100.10:/zod/i686 [18:11:35] <renihs> root(nd) [18:11:47] <nachox> your box cannot use pxe to boot that you had to use loadlin? [18:11:57] <Tempt> Correct, the SunPCi has no pxe. [18:12:11] <renihs> but you have to make sure to use nfsv4 on linux [18:12:19] <renihs> otherwise no +2gb files and stuff [18:12:26] <nachox> ouch! nfsv4 in linux! [18:12:30] <rawn027> Tempt how are you with zfs? [18:12:38] <Tempt> Use it daily. [18:12:45] <renihs> nachox, ? i use it since its in kernel [18:12:48] <Tempt> zfs rocks my little world. 
[18:12:52] <renihs> nfs4 only :p [18:13:03] <rawn027> I am looking to set it up on my box here that I will use a file server/backup server for my laptop [18:13:07] *** postwait has quit IRC [18:13:13] <rawn027> It is running Mac OS X [18:13:34] <renihs> macosx has only read zfs support :p [18:13:47] <rawn027> renihs i will be using cifs/nfs/or iscsi [18:13:52] <rawn027> ideally it would be iscsi [18:14:02] <renihs> steve jobs is really a moron, refusing zfs was the most dumb thing he could do [18:14:10] *** derchris has quit IRC [18:14:13] <rawn027> renihs dont even get me started with that shit ;) [18:14:18] <rawn027> im so pissed at apple right now [18:14:31] <renihs> you should be :p ...i would be :p [18:14:34] *** derchris`work has quit IRC [18:14:49] <rawn027> back to my issue, what do you think i should use [18:14:52] <rawn027> nfs cifs or iscsi [18:15:03] <rawn027> considering id love to use iscsi but the initiator sucks on osx right now [18:15:25] <renihs> a backup solution should be kept simple, so iscsi for macsox i wouldnt use [18:15:37] <rawn027> :-P [18:15:44] <renihs> but me going home now, have to play with my new present i got from my ex-university [18:15:49] <renihs> a dell m90 :p [18:15:49] <rawn027> i guess nfsv4 it is :) [18:15:55] <renihs> ya [18:16:06] <rawn027> can zfs automagically share a pool? [18:16:09] <renihs> cifs isnt bad though, less overhead, but i would stick with nfs too [18:16:30] <renihs> its an fs not a sharethingie [18:16:40] <rawn027> renihs rofl [18:17:21] <cmihai> rawn027: yes, ZFS can automagically share [18:17:58] <Tempt> setting 777 on / and /etc to see if that helps [18:17:58] <cmihai> zfs create -s -V 1T storage/p0rn && zfs set shareiscsi=on storage/p0rn -> 1TB iSCSI ;-). Or sharenfs or whatever. [18:18:08] <asyd> a pool ? You can't attach a pool more than one time [18:18:39] <rawn027> so when i go to create my 2nd hard drive do i create with the zfs command or the zpool command? 
[18:18:43] <nachox> Tempt: that will break everything [18:18:50] <rawn027> a pool is a group of disks [18:18:51] <Tempt> Doesn't matter. [18:18:57] <Tempt> did a zfs snapshot a while back [18:19:02] <renihs> cmihai, omg ...you are right :p [18:19:02] <Tempt> Haven't progressed since then. [18:19:12] <cmihai> heh [18:19:56] <Tempt> okay, perm change didn't help. [18:20:02] *** calumb has quit IRC [18:21:33] <rawn027> invalid property: sharenfs? [18:23:22] *** calumb has joined #opensolaris [18:23:51] <cmihai> zfs set sharenfs=on storage/p0rn [18:23:52] <nachox> Tempt: bte, i doubt ld.so.cache is part of any slackware package, i would look for the problem somewhere else [18:23:57] <cmihai> zfs set sharenfs=ro storage/stuff [18:24:04] <cmihai> zfs set sharenfs=rw storage/rtfm [18:24:04] <cmihai> :P [18:24:05] <nachox> *btw [18:24:17] <cmihai> rawn027: "zfs get all storage" [18:24:20] <Tempt> The fact that I can't write to the root filesystem shows a problem. [18:25:01] <asyd> guys, I have the fucking smartcards stuff [18:25:31] <rawn027> it has nothing about nfs [18:25:34] <nachox> yes, but you should be able to boot even with an ro fs [18:25:37] *** syscalls has quit IRC [18:25:38] <cmihai> rawn027: os? [18:25:43] <rawn027> SXDE [18:25:46] <rawn027> NV 64a [18:25:50] <asyd> s/have/hate/ [18:25:56] <cmihai> rawn027: look closer [18:25:59] *** yippi has joined #opensolaris [18:26:02] <rawn027> but this time it took the property [18:26:04] <cmihai> zfs get all storage/p0rn | grep nfs [18:26:15] <rawn027> it only worked on the poo [18:26:26] <rawn027> pool* [18:26:47] *** Cyrille has quit IRC [18:26:54] <Tempt> Probably just borken linux nfs [18:26:54] <rawn027> its only there when i do zfs get all storage [18:27:10] <cmihai> Linux?! [18:27:14] <cmihai> rawn027: are you on Linux / FUSE? [18:27:23] <rawn027> ewwww no [18:27:28] <rawn027> SXDE [18:27:29] <cmihai> Phew :-) [18:27:32] <cmihai> Right. [18:27:35] <cmihai> Well, it's there. 
[18:27:38] <rawn027> I hate linux [18:27:41] <rawn027> with a passion [18:27:49] <rawn027> havent used it in about 1 year [18:27:55] <rawn027> unless i had to [18:28:00] <rawn027> been with FreeBSD [18:28:07] <rawn027> now recently learning Solaris [18:28:09] <rawn027> as you can tell :) [18:28:22] <cmihai> # zfs get all storage/kits [18:28:26] <cmihai> storage/kits sharenfs off local [18:28:36] <cmihai> # zfs get sharenfs storage/kits [18:28:37] <cmihai> NAME PROPERTY VALUE SOURCE [18:28:37] <cmihai> storage/kits sharenfs off local [18:28:38] <cmihai> Get it? [18:28:53] <cmihai> It's there, you're just not doing something right. [18:28:54] <rawn027> yeah mine returns nothing BUT... [18:28:59] <rawn027> zfs get all storage [18:29:05] <rawn027> returns sharenfs on [18:29:18] <cmihai> storage is the name of your pool or whatever you call it [18:29:22] <cmihai> some call it tank or whatever [18:29:26] <rawn027> yeah storage is the pool [18:29:34] <rawn027> storage/backup is what i just create with the zfs command [18:30:04] <cmihai> # zfs get sharenfs storage/backup [18:30:05] <cmihai> NAME PROPERTY VALUE SOURCE [18:30:05] <cmihai> storage/backup sharenfs rw local [18:30:06] <cmihai> :P [18:30:13] <cmihai> That's what I had shared :-) [18:30:19] <rawn027> :-P [18:30:26] <rawn027> so let me get this straight [18:30:32] <rawn027> say i start with a blank disk [18:30:45] <rawn027> how do i go about getting it up as a NFS share... [18:30:47] <cmihai> zfs create storage/p0rn && zfs set sharenfs=rw storage/p0rn [18:30:50] <cmihai> Done. [18:30:57] <rawn027> so whats the zpool command :-P [18:31:20] <Tempt> haha,haha,hhaahhhahahaha,ha,ha,ha - Root-NFS: unknown option: rw [18:31:25] <vmlemon> What would be the best way to migrate my data from the one 160GB ext3 partition to ZFS? [18:31:41] <cmihai> zpool create storage RAIDTYPE disks [18:31:42] <rawn027> cmihai why is there a zpool command then? 
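For reference, the blank-disk-to-NFS-share path being pieced together in this exchange, with the zpool step included. This is a sketch: the dataset and disk names are the ones that appear in the conversation, the raidz layout is one option among several, and all commands need root on the Solaris box.

```shell
# 1. Pool: group raw disks into a pool (here a single-parity raidz):
zpool create storage raidz c2d0 c3d0 c4d0 c5d0

# 2. Filesystem: carve a dataset out of the pool:
zfs create storage/backup

# 3. Share: sharenfs accepts "on", "ro", or "rw":
zfs set sharenfs=rw storage/backup

# Verify on the dataset itself:
zfs get sharenfs storage/backup

# Alternatively, export a sparse 1 TB zvol over iSCSI instead of NFS
# (dataset name here is illustrative):
# zfs create -s -V 1T storage/backupvol && zfs set shareiscsi=on storage/backupvol
```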
[18:31:46] <rawn027> cmihai sorry [18:31:47] <vmlemon> Since I don't have any larger disks to copy to [18:31:48] <cmihai> create store raidz2 disk1 disk2 etc [18:31:55] <rawn027> ohhh [18:31:58] *** dclarke has left #opensolaris [18:31:58] <cmihai> rawn027: you already have a pool [18:31:59] <cmihai> storage. [18:32:06] <rawn027> im going to start from scratch [18:32:09] <cmihai> zpool list [18:32:22] <cmihai> rawn027: "https://localhost:6789 [18:32:25] <cmihai> root, rootpass [18:32:34] <cmihai> It's a ZFS web manager, gives you command line too. [18:32:41] <cmihai> If you can't be arsed to read the docs ;P [18:32:57] *** bnitz has left #opensolaris [18:33:02] *** bnitz has joined #opensolaris [18:33:24] <rawn027> thanks :) [18:34:03] *** iMax has quit IRC [18:34:38] *** calum_ has joined #opensolaris [18:34:57] *** trede has joined #opensolaris [18:36:58] *** calumb has quit IRC [18:37:01] *** Symmetria has joined #opensolaris [18:37:05] *** sartek has quit IRC [18:41:28] *** sioraiocht has quit IRC [18:41:38] *** sioraiocht has joined #opensolaris [18:41:39] <Tempt> I give up for tonight, see you all tomorrow. [18:41:58] *** sioraiocht has quit IRC [18:42:01] *** sioraiocht has joined #opensolaris [18:42:27] *** sioraiocht has left #opensolaris [18:47:02] *** rawn027 has quit IRC [18:48:22] <leal> OK, i did a reinstall of a solaris box [18:48:41] <leal> booting from san (qla2340), and the system does not boot [18:49:26] <leal> after the installation (without errors), i get WARNING: add_spec: No major number for chs (10 times), and after that, the machines reboot [18:52:19] *** ravv has joined #opensolaris [18:52:36] <Plaidrab> Hmm. San settings change? [18:52:37] <Tempt> nachox: Was missing "rw" from boot flags. *sigh*, finish tomorrow. Thanks for your help. [18:54:06] <twincest> what is fsmgmtd? [18:54:37] *** carbon60 has joined #opensolaris [18:55:44] <leal> Plaidrab: what? 
[18:56:11] <Plaidrab> booting from san (qla2340) [18:56:21] <leal> with -kd option, i see the message: cannot mount root path. [18:56:55] <Plaidrab> Did you already verify all of the obvious? [18:57:02] <leal> No, nothing changed... I did a install, and after that i reboot. and the server does not boot. [18:57:13] <leal> Plaidrab: like what? [18:57:24] *** timsf has quit IRC [18:57:55] <Plaidrab> device settings on the san were not changed. The cabling is all still in place, etc. [18:58:11] <Plaidrab> ravv: try devfsadm? [18:59:06] <leal> Plaidrab: sorry by the english, but i think you did not understand... i did a installation (twice), and after that the server does not boot. [18:59:23] <Plaidrab> ravv: I'm assuming your on x86. I'm not sure how you do a reconfigure reboot there. [18:59:25] *** nostoi has joined #opensolaris [18:59:28] <leal> Plaidrab: how can i install if the SAN is not working? [18:59:48] <leal> Plaidrab: sol u2 works, u3 dont. [18:59:49] <Plaidrab> leal: Ah. Sounding like you reinstalled a previously working system. [18:59:59] *** peteh has quit IRC [19:00:04] *** bnitz has left #opensolaris [19:00:11] <Plaidrab> Will it come up single user? [19:00:28] <leal> Plaidrab: that is right. did work with u2.. but i need u3 for cluster. [19:00:30] <Plaidrab> Have you applied the current patch clusters? [19:00:54] <leal> Plaidrab: how could i do that, the system does not boot! [19:01:07] <Plaidrab> You've tried single user? [19:01:15] <leal> yes [19:02:14] <nachox> Tempt: np [19:02:41] <Plaidrab> What failure does it give you? [19:04:24] <Plaidrab> just the chs one?

[19:04:57] *** cajetanus has joined #opensolaris [19:04:58] *** stevel has joined #opensolaris [19:04:58] *** ChanServ sets mode: +o stevel [19:05:11] *** stevel changes topic to "Latest SXCE 67 | Latest ON 68 | Starter kits: http://get.opensolaris.org" [19:05:21] *** stevel has quit IRC [19:06:10] <Plaidrab> You might be able to boot to the media ( DVD, CD, etc ) mount your / as mount, chroot that, then apply your patch cluster. But I'm not certain. I'd more likely try running the installer again. [19:06:23] <leal> Plaidrab: like i said... with -kd i get: "cannot mount root path" [19:06:54] <leal> Plaidrab: i did that twice...zdfasdfasd [19:07:57] *** Snake007uk has quit IRC [19:09:23] <Plaidrab> hmm [19:09:36] <Plaidrab> I'm not familiar with those two flags. Looking. [19:10:20] <Plaidrab> Assuming they are 10 specific? [19:13:19] <Plaidrab> Looks like you'll need to wait for someone with more specific knowledge. [19:13:40] *** Murmuria has quit IRC [19:17:14] <trochej> Hi, quick question. Is it possible to disable a limit of username lenght being 8 chars in Sol 10? [19:18:27] <cmihai> !seen delewis [19:18:29] <Drone> delewis (delewis!n=dlewis at 24-176-104-6 dot dhcp.jcsn.tn.charter.com) was last seen in #opensolaris on Sat 30 Jun 2007 22:18 GMT, saying 'its a security precaution.'. [19:18:30] *** MattAFC is now known as MattMan [19:18:44] <cmihai> Um.. damn [19:18:46] <Samy> !seen %n [19:18:48] <Drone> I've never seen %n talk in #opensolaris. [19:19:07] <Samy> !seen %30x%n [19:19:09] <Drone> I've never seen %30x%n talk in #opensolaris. [19:19:12] <Samy> ;[ [19:19:22] <vmlemon> !seen anyone [19:19:25] <Drone> I've never seen anyone talk in #opensolaris. 
[19:19:33] <cypromis_> this is the silent channel [19:19:37] <cmihai> We are NOT alone :-) [19:19:41] *** cypromis_ is now known as cypromis [19:19:47] <cypromis> big silence is watching you [19:19:52] *** dlynes_laptop has joined #opensolaris [19:20:00] <trochej> True [19:20:02] *** palowoda has quit IRC [19:22:03] *** estibi has joined #opensolaris [19:23:57] *** theRealballchal1 has joined #opensolaris [19:24:57] <cajetanus> ?? [19:25:40] <Plaidrab> trochej: Check in /etc/default/passwd [19:26:46] <trochej> Plaidrab: Thank you [19:27:33] <Plaidrab> Some new things were added to that in 10, but I don't recall off the top of my head if a Max username length is one of them. [19:31:12] *** cajetanus has quit IRC [19:31:35] *** cajetanus has joined #opensolaris [19:31:43] *** theRealballchalk has quit IRC [19:34:54] *** migi has quit IRC [19:41:31] *** MattMan is now known as MattAFC [19:46:57] *** cajetanus has quit IRC [19:47:02] *** cajetanus has joined #opensolaris [19:48:29] *** bunker has quit IRC [19:55:40] *** bengtf has quit IRC [19:59:17] *** calum_ has quit IRC [19:59:54] *** dunc has quit IRC [20:04:00] *** capitano__ is now known as SYS64738 [20:05:34] *** Chihan has quit IRC [20:05:52] *** mikefut has quit IRC [20:06:48] *** nachox has quit IRC [20:09:18] <SYS64738> Tempt, are you alive ? [20:11:40] <SYS64738> is it possible to stop a service of a spare zone from the global zone ? [20:11:53] <SYS64738> I don't reach to logon to that zone [20:12:30] <bda> See zlogin(1) [20:12:41] <bda> Briefly: zlogin <zone> <command> [20:15:00] <SYS64738> from the console I can see a lot of: getpwnam failed to find userid for effective user 'nobody' [20:15:07] <SYS64738> from squid daemon [20:15:13] *** bengtf has joined #opensolaris [20:16:39] *** karrotx has quit IRC [20:19:54] <SYS64738> bda do you mean like: megatron # zlogin starscream svcadm disable squid ? 
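SYS64738's guess is exactly the form bda described: `zlogin <zone> <command>` runs a one-off command inside the named zone from the global zone, no interactive login needed. A short sketch using the zone and service names from the conversation (requires root in the global zone):

```shell
# Disable the squid service inside zone "starscream" from the global zone:
zlogin starscream svcadm disable squid

# Confirm the service state the same way (-H drops the header line):
zlogin starscream svcs -H squid
```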
[20:23:08] *** cydork has quit IRC [20:35:36] *** Gropi has joined #opensolaris [20:39:28] <twincest> is there any news on the QFS open-sourcing? like ETA? [20:40:13] *** swmackie has quit IRC [20:44:10] *** blueandwhiteg3 has joined #opensolaris [20:53:06] *** tsoome has joined #opensolaris [20:53:23] <blueandwhiteg3> Does anybody have an idea why ssh would start failing at boot for no apparent reason on a SXCE 67 system? "Method or service timed out." [20:53:27] *** RaD|Tz has quit IRC [20:55:34] *** AgentX has joined #opensolaris [20:55:44] *** Trisk[laptop] has joined #opensolaris [20:56:49] <ofu> nevada66 crashes my vmware during startup from cd, is this a known bug? [20:58:34] <AgentX> SXCR66 works well in VMware here. [21:00:06] <ofu> ok, i will try it on another host tomorrow, thx [21:00:13] <pfn> sxde 64a works on vmware fine for me, too [21:01:47] <blueandwhiteg3> Is there any kind of 'repair system' functionality in SXCE 67? I think something has happened to my system, but I can't fathom what. ssh times out at startup. The gui login comes up, but when I try to log in, gnome gets 'stuck'.... [21:04:05] *** kFuQ has quit IRC [21:05:32] *** jamesd has joined #opensolaris [21:05:32] *** ChanServ sets mode: +o jamesd [21:06:27] *** ravv has quit IRC [21:07:15] *** trisk__ has quit IRC [21:07:18] <e^ipi> blueandwhiteg3: tried with a different user? [21:07:32] <e^ipi> could be ~ all messed up [21:07:37] <blueandwhiteg3> e^ipi: It seems to have gotten un-stuck [21:07:43] <e^ipi> fabulous [21:08:09] <blueandwhiteg3> e^ipi: I'm pretty weirded out by how it is working, but I can't get anything to reliably fail now.... [21:08:47] <e^ipi> predictive self-healing works... huzzah [21:09:32] <blueandwhiteg3> e^ipi: hahahahaha... too bad this is the boot volume, which isn't zfs! [21:09:53] <blueandwhiteg3> wow, it appears that gzip compression is now available? 
[21:09:57] *** kFuQ has joined #opensolaris [21:10:01] <e^ipi> predictive self healing refers to SMF and FMD, not zfs [21:10:12] <e^ipi> it's just a marketing phrase anyways [21:10:19] <Pietro_S> self-healing works also on ufs, it's on os level not fs ... [21:10:32] <blueandwhiteg3> e^ipi: I'm new to solaris [21:13:08] <blueandwhiteg3> e^ipi: that's a fascinating feature [21:13:44] <e^ipi> which is, fmd? [21:14:28] <AgentX> Marketing buzzwords. What else! [21:14:55] *** trisk__ has joined #opensolaris [21:14:55] <vmlemon> Buzz... Buzz... Buzz..z.z.zzz [21:15:03] *** schily has joined #opensolaris [21:17:29] <AgentX> When the virtual consoles project is going to be integrated into Osol? [21:18:06] *** movement has quit IRC [21:18:08] *** NikolaVeber has joined #opensolaris [21:18:27] *** sniffy has joined #opensolaris [21:18:38] <e^ipi> AgentX, ask them? [21:18:41] <e^ipi> or use screen [21:19:00] <AgentX> I came here looking for "them". [21:19:06] <jamesd> AgentX, when its ready and tested and not a moment before [21:19:26] <AgentX> jamesd, That makes sense. [21:20:57] <blueandwhiteg3> I'm trying to get smcwebserver running, when i try and start it, it says console service is already running, which makes sense, that it is already running, but the problem is that connections to the host and port are refused. Is there a firewall I need to disable? [21:21:53] <e^ipi> don't use SMC [21:21:56] <e^ipi> seriously, don't [21:22:36] <e^ipi> disable it, and forget it exists [21:22:55] <blueandwhiteg3> e^ipi: That bad? [21:23:02] <e^ipi> it's terrible [21:23:13] <tsoome> blueandwhiteg3: solaris 10? 
you may have disabled remote access [21:23:23] <blueandwhiteg3> sxce 67 [21:23:26] <e^ipi> i feel sorry for whichever engineers had to work on it, because they wasted all that time creating something so completely and utterly worthless [21:23:39] <tsoome> netpolicy was the command to enable it [21:23:43] <tsoome> i think [21:23:49] <AgentX> Sun should keep Java fanatics away from Solaris team. [21:23:53] <blueandwhiteg3> haha [21:24:14] <tsoome> e^ipi: dont feel, solaris patch management is a lot worse [21:24:30] *** Trisk[laptop] has quit IRC [21:25:15] <AgentX> At least, admintool wasn't a hog for resources. [21:26:23] *** deather_ is now known as deather [21:28:05] *** cajetanus has left #opensolaris [21:28:12] *** mikefut has joined #opensolaris [21:28:48] <blueandwhiteg3> Is there any good way to sort through the various disks in /dev/dsk ? If you put a bunch of drives in a system it's a bit of a headache to figure out which is which... [21:28:58] <twincest> blueandwhiteg3: type 'format' [21:29:12] <blueandwhiteg3> ha ha! [21:29:15] <blueandwhiteg3> *ah ha! [21:30:09] <blueandwhiteg3> though i don't really want to format them... that is a very handy display [21:31:10] <jamesd> blueandwhiteg3, format < /dev/null [21:31:28] <blueandwhiteg3> yeah, i just escape out of it [21:31:32] <blueandwhiteg3> +d [21:34:06] *** movement has joined #opensolaris [21:34:42] <AgentX> Is it just me, or the SXDE installer is a little too dumbed down? [21:35:00] <e^ipi> it's completely retarded, hence the push for a replacement [21:35:46] <blueandwhiteg3> SXCE demanded I enter a DNS server manually, or turn off DNS entiretly, despite the fact I had specified DHCP which already provides DNS.... [21:35:56] <blueandwhiteg3> (during the installer) [21:37:03] <AgentX> e^ipi, I hope they're not going to use Java for the replacement. [21:37:10] *** migi has joined #opensolaris [21:37:17] <Plaidrab> Does the SVN supplied with SXCE not like it when you can't get a cannonical hostname? 
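jamesd's redirection trick above gives a quick non-interactive disk inventory; pairing it with a device rescan (devfsadm, suggested earlier in the channel) covers newly attached disks too. Run as root; `format` prints its numbered disk list and exits as soon as it hits EOF on stdin, before it can touch anything:

```shell
# Rebuild /dev links so newly attached disks show up (-c disk limits
# the scan to the disk device class):
devfsadm -c disk

# Print the disk list and exit immediately:
format < /dev/null
```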
[21:37:26] <blueandwhiteg3> It appears that raidz zpool 'avilable' sizes include all drives, even though one of the drives is not really free? [21:38:29] <seanmcg> Not really free ? [21:38:40] <Plaidrab> Whoops. I'm an idiot. Forgot to make sure dns was in the hosts entry in nsswitch.conf. :) [21:42:16] <pfn> seanmcg, the parity "drive" [21:42:39] <pfn> blueandwhiteg3, yes, it's a bit stupid [21:42:59] <pfn> the dns bit [21:43:13] <pfn> I hate that the default nsswitch.ldap defines ldap to be the default provider for everything... [21:43:18] <blueandwhiteg3> pfn: I think it sort of makes sense, if you think of the zpool being the physical level [21:43:28] <pfn> blueandwhiteg3, well, did you add it as a raidz? heh [21:43:49] <blueandwhiteg3> pfn: yes [21:44:02] <blueandwhiteg3> zpool create -f bigpool raidz c2d0 c3d0 c4d0 c5d0 [21:48:40] <seanmcg> did you try pulling one of those disks ?-) [21:50:22] *** trede has quit IRC [21:50:53] *** movement has quit IRC [21:51:11] *** movement has joined #opensolaris [21:53:21] <blueandwhiteg3> seanmcg: 'zpool status' confirms it is a raidz... though that is on my list of things to test! [21:53:34] <blueandwhiteg3> right now, i'm moving onto benchmarking the performance [21:54:03] <seanmcg> have fun :) [21:54:09] *** tsoome1 has joined #opensolaris [21:54:22] <blueandwhiteg3> I still have not gotten a clear answer from anybody as to why / how to investigate why I can't seem to get a DHCP lease after using a static IP configuation under solaris.. [21:54:52] <jamesd> did you run sys-unconfig [21:55:37] <blueandwhiteg3> ugh.... self-healing systems? i had to reboot 3 times yesterday to clear up this issue, now it's working properly [21:55:45] <blueandwhiteg3> oh, no [21:55:46] <blueandwhiteg3> it's not [21:56:04] <blueandwhiteg3> it's getting the ip right, but not the gateway address [21:56:38] *** movement has quit IRC [21:56:42] <blueandwhiteg3> blank system? [21:56:53] <blueandwhiteg3> how far is 'blank'? 
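On the "available size includes all drives" question: the pool-level number is raw capacity, while a single-parity raidz sets aside roughly one disk's worth of space for parity, so usable space is about (n-1)/n of raw. A back-of-the-envelope check for the four-disk pool above, assuming 500 GB drives purely for illustration (the log doesn't give the actual sizes):

```shell
disks=4        # c2d0 c3d0 c4d0 c5d0
size_gb=500    # hypothetical per-disk size
raw=$(( disks * size_gb ))            # roughly what the pool reports
usable=$(( (disks - 1) * size_gb ))   # raidz1: one disk's worth is parity
echo "raw=${raw}GB usable=${usable}GB"
# -> raw=2000GB usable=1500GB
```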
[21:57:08] <jamesd> blueandwhiteg3, it clears a few files, and walks you through resetting it up... you will be back up and running in 5 minutes. [21:57:32] <blueandwhiteg3> why would i want to do this? i might as well just reboot at that point [21:58:08] <Plaidrab> If you missed something key, it will fix it. [21:58:29] <jamesd> blueandwhiteg3, because it allows you to reset up your network, so it will hopefully work in the future, so you wont have to reboot it 4 times a day. [21:59:01] *** carbon60 has quit IRC [21:59:19] *** cypromis has quit IRC [21:59:54] <AgentX> sys-unconfig is the quickest and cleanest way to fix network settings, especially if you are new to Solaris. [22:01:01] <Plaidrab> I was told that SXCE b66 should have SunPro in it. I can't seem to locate where. :) Was that incorrect or am I just not poking the right path? [22:01:03] *** movement has joined #opensolaris [22:01:18] <jamesd> Plaidrab, i think its in SXDE not CE [22:01:41] <Plaidrab> Bugger. [22:01:44] *** tsoome has quit IRC [22:01:53] <Plaidrab> Any way to validate I didn't mislabel my DVDs? [22:02:07] <jamesd> Plaidrab, just get CE, and download and install sun studio [22:02:14] *** cmihai has quit IRC [22:02:25] <Plaidrab> Point [22:02:30] <AgentX> Solaris Express images have both CE and DE in one disc, I think. [22:03:26] <vmlemon> Yes [22:03:44] <vmlemon> My copy of SolEx DE 64a has both [22:04:20] <Plaidrab> Perhaps different on the Sparc CD spins? [22:04:26] <richlowe> I think so. [22:04:31] <richlowe> since the sparc DE doesn't exist. [22:04:46] <Plaidrab> Well, that explains much [22:05:41] <AgentX> Now it is SPARC's turn for the step treatment. [22:05:53] <Plaidrab> Bah. [22:06:10] <Plaidrab> Purple! [22:09:24] <blueandwhiteg3> Wow, sys-uncofig really unconfigged! My system isn't getting past the bios now...! [22:09:35] <Plaidrab> ... [22:10:18] <jamesd> blueandwhiteg3, um.. it did no such thing... its rebooting... unless your using a diskless system... 
it didnt effect any thing with the disks [22:10:45] <blueandwhiteg3> jamesd: I'm half joking, but I really don't see how/why it is now stuck at the BIOS. It is likely unrelated. [22:11:04] <jamesd> blueandwhiteg3, what box is this? [22:11:11] <blueandwhiteg3> AMD64 system [22:12:40] *** pfn has quit IRC [22:13:06] <AgentX> sys-unconfig can access the Emacs psychiatrist internally and do damages precisely as your subconcious intended to. [22:13:20] <Plaidrab> Man. Firsfox's XSS blockers really don't like Sun. :/ [22:13:42] <vmlemon> You forgot to apply plenty of Sun-cream to them ;) [22:13:53] <vmlemon> And to keep them in the shade [22:14:32] <Plaidrab> Please. Geekage here. The only illumination in the room is on the monitor, the case, and my terrariums. [22:15:30] <Plaidrab> Oh. I suppose you could count the numlocks light. :) [22:15:41] <vmlemon> Monitor LEDs? [22:16:01] <vmlemon> Oh, you've already counted them [22:16:06] <Plaidrab> True. But no indicator for the dual input. [22:16:17] <blueandwhiteg3> How are drives identified? [22:16:24] *** nostoi has quit IRC [22:16:30] <Plaidrab> Normally, I use a police lineup [22:16:33] <blueandwhiteg3> I mean, I've created a RAID-Z. What happens if I move a drive from one SATA but to another? [22:16:39] *** nostoi has joined #opensolaris [22:16:50] *** detriment has joined #opensolaris [22:16:51] *** pfn has joined #opensolaris [22:17:05] <blueandwhiteg3> SATA buS [22:17:11] <e^ipi> blueandwhiteg3, if you've given ZFS the whole drive nothing [22:17:34] <e^ipi> zfs writes an EFI disklabel, which has a unique id [22:17:40] <seanmcg> blueandwhiteg3: you can shuffle the drives all you want :) [22:17:40] <Plaidrab> Have you see the USB Key demo? [22:18:07] <blueandwhiteg3> EFI? Are we talking like the BIOS replacement? [22:18:12] <vmlemon> Yes [22:18:35] <blueandwhiteg3> Does that work on BIOS-based systems? 
[22:18:38] <vmlemon> Doesn't require an EFI implementation in hardware though, it just writes the GPT [22:18:40] <blueandwhiteg3> Where is this USB key demo? [22:18:52] <e^ipi> blueandwhiteg3, it works on a lot of BIOS systems [22:18:58] <e^ipi> /however/ not all [22:19:10] <blueandwhiteg3> well, this will be a good test i guess [22:19:23] <e^ipi> my bios is dumb, i had to convince it not to probe SATA on boot, or else it got confused & wouldn't POST [22:19:32] <blueandwhiteg3> Hmmm [22:19:37] <blueandwhiteg3> I wonder if that could be happening here? [22:19:46] <e^ipi> does the machine boot? [22:20:11] <e^ipi> like, at all [22:20:21] <e^ipi> if it gets to the GRUB menu, it's not the same problem [22:21:10] *** nostoi has quit IRC [22:21:55] <blueandwhiteg3> nope [22:22:15] <blueandwhiteg3> i think i need to yank the sata connection, boot, adjust the bios, reboot [22:22:30] *** Chris_S has joined #opensolaris [22:23:34] *** nostoi has joined #opensolaris [22:24:48] <e^ipi> if you're not getting past POST, then that's a likely culprit [22:25:06] <e^ipi> you can try upgrading your bios if there is one ( there isn't for mine ), and turning off SATA probing [22:25:08] *** detriment has quit IRC [22:25:26] *** detriment has joined #opensolaris [22:25:34] *** detriment has quit IRC [22:27:23] <blueandwhiteg3> e^ipi: It's the sata drives. i'm trying to figure out how to turn off sata probing... [22:28:02] <e^ipi> on my bios, under the drive listing it had "auto" "manual" and "none" or something to that effect [22:28:12] <e^ipi> setting them as "none" worked [22:28:23] *** movement has quit IRC [22:28:29] <e^ipi> the controller driver still lets solaris see the drives, but the bios doesn't bother looking [22:29:16] *** movement has joined #opensolaris [22:30:17] *** movement has quit IRC [22:30:33] *** movement has joined #opensolaris [22:36:45] <blueandwhiteg3> e^ipi: I've disabled the sata drives in the bios, i hope they still show through to solaris! 
[22:37:52] <seanmcg> disable the probe of them or actually disable them ? [22:38:36] <blueandwhiteg3> I don't know yet! [22:38:43] <blueandwhiteg3> Which locale do I want? I'm in the US... [22:38:46] <blueandwhiteg3> UTF-8? [22:38:51] <blueandwhiteg3> or one of these ISOs? [22:40:02] <trygvis> anyone know of a quick intro on how to boot osol builds from a linux machine? [22:40:19] <e^ipi> blueandwhiteg3, utf-8 is good [22:40:25] <e^ipi> or C locale [22:40:38] <e^ipi> trygvis, put the CD in and turn the machine on? [22:40:54] <trygvis> +net :) [22:41:24] <trygvis> it seems like the disk ok borked and I don't have a cdrom in the machine after I filled it up with drives [22:41:44] <e^ipi> install solaris on the boot server & use JET [22:41:45] <e^ipi> ? [22:42:03] <trygvis> c [22:42:05] <trygvis> sigh [22:42:50] <Plaidrab> mmm. Jet. [22:43:42] <Plaidrab> : thinks the installer should just be a big ole smart and prettified front end to build a Jet template on the fly and then install. [22:44:37] *** DataStream has quit IRC [22:45:26] *** aruiz has quit IRC [22:47:39] <Plaidrab> But I'm weird. [22:47:55] <pfn> well, you can boot the solaris miniroot from grub... [22:48:00] *** Chris_S has quit IRC [22:48:21] <pfn> from there, it's up to you to figure out how to install it to a drive [22:48:41] <pfn> actually, nix that [22:48:50] <pfn> solaris grub changes haven't been merged back into mainline grub [22:48:54] <Plaidrab> I figure if you have it install the minimal OS from the flar then pakgadd everything else on top and post install script any configs not set by JET you should be good. [22:49:12] *** LuckyLuke has joined #opensolaris [22:49:18] <vmlemon> Is there a "Portalaris"/"Solaris from USB Boot" project, similar to the Linux on a Flash Drive projects? 
[22:49:29] <pfn> vmlemon, belenix [22:49:34] <vmlemon> OK [22:49:35] <pfn> belenix has a liveusb [22:49:36] <vmlemon> Thanks [22:49:38] * pfn shrugs [22:49:53] <pfn> it should be trivial regardless [22:50:04] <pfn> I mean, what's preventing one from installing linux, solaris, etc. to a usb flash [22:50:10] <pfn> (or any usb disk, altogether) [22:50:20] <vmlemon> Providing it's large enough, of course [22:50:37] <pfn> it's not uncommon to find 1gb, 2gb and greater usb flash sizes these days [22:50:46] <pfn> then you've got usb hard disk enclosures as well [22:50:47] <vmlemon> Don't see why it wouldn't work on an IEEE 1394 disk, either [22:50:59] <pfn> $25 enclosure plus a $110 500gb sata disk [22:51:01] <pfn> mmmm [22:51:04] <vmlemon> Although it might take extra work to boot from, depending on the machine [22:51:36] <vmlemon> I know Apple's x86 machines can boot from it [22:52:17] *** ravv has joined #opensolaris [22:52:41] <vmlemon> Quality of BIOS USB boot implementation is a big factor, too [22:53:13] *** Yamazaki-kun has joined #opensolaris [22:53:17] <pfn> yes, I don't think any of my boxes will boot usb [22:53:21] <vmlemon> (My HPised Award BIOS supposedly support USB boot, although it doesn't actually work) [22:53:24] <pfn> all of my boxes are from around 2002 [22:54:05] <jamesd> i think i only have one box newer than 2002 [22:54:05] <ravv> Im trying to get solaris to recognize new disks, is there any information on how to do it somehwere? (solaris 10 on a pentium 4) [22:54:24] <jamesd> ravv, run devfsadm or reboot -- -r [22:54:48] *** mikefut has quit IRC [22:54:50] <blueandwhiteg3> ok, here's the problem with the stupid DNS configuration [22:55:08] * vmlemon wonders how many people still run Pentium 4 Northwood boxes, these days [22:55:41] <blueandwhiteg3> if i tell it to use dns, it wants to know the domain name where this box resides, DNS server IPs, etc... [22:55:45] <jamesd> vmlemon, its there lot in life, its not a life, but its there life. 
[22:55:50] <blueandwhiteg3> I don't want to hard code those.... [22:55:57] <jamesd> er its not a lot, [22:56:00] <blueandwhiteg3> and there's no 'back' option [22:56:23] <jamesd> blueandwhiteg3, make one up, if you dont have one it doesn't matter. [22:56:26] <pfn> blueandwhiteg3, just hardcode it and let dhcp replace it [22:56:41] <blueandwhiteg3> *grumbles and looks up dns server* [22:56:43] <pfn> e.g. set a "real" primary dns server and a fake 2ndary name [22:56:56] <pfn> and when dhcp configures it, check to see if 2ndary is correct [22:57:00] <Plaidrab> is aclocla part of the distro? [22:57:11] <pfn> aclocal is in autoconf [22:57:20] <Plaidrab> does it show up in format, ravv? [22:57:22] *** sfire||mouse has joined #opensolaris [22:57:42] <Plaidrab> Aha. Found it. I didn't have *share* paths [23:00:24] *** tsoome has joined #opensolaris [23:01:51] <Plaidrab> Hm, no, that's just the m4 path. [23:02:08] <Plaidrab> I need to build automake and autoconf? [23:03:15] *** jambock has quit IRC [23:03:35] *** Gman has quit IRC [23:06:24] *** LuckyLuk1 has quit IRC [23:06:44] <Plaidrab> Well, doing it anyway. If the files are in there somewhere someone can point me at them later. :) [23:10:01] *** delewis has joined #opensolaris [23:11:03] <pfn> Sun has submitted changes to the GRUB project to support this; until they have been integrated, only the Solaris GRUB will work. If Linux installed GRUB on the master boot block, you will not be able to get to the Solaris OS even if you make the Solaris partition the active partition. In this case, you can chainload from the Linux GRUB by modifying the menu on Linux. Alternatively, you can replace the master boot sector with the Solaris GRUB in the above ex [23:11:10] <pfn> this is from 1/06 [23:11:13] <pfn> so, have the changes been incorporated into mainline grub yet? [23:11:52] <pfn> since it's been a year and a half [23:12:50] <pfn> GRUB Legacy is not actively developed any longer.
Only bugfixes will be made so that we can continue using GRUB Legacy until GRUB 2 becomes stable enough. If you want more features in GRUB, it is a waste of time to work on GRUB Legacy, because we never accept any new feature. Instead, it is better to take part in the development of GRUB 2. [23:12:55] <pfn> fuckers [23:13:11] *** tsoome1 has quit IRC [23:14:16] *** sparc-kly_ has quit IRC [23:14:33] <pfn> so I guess that means linux's grub will never be able to boot solaris... [23:15:17] <jamesd> no but perhaps solaris' will. [23:15:27] <pfn> of course solaris' grub will boot solaris [23:15:29] <pfn> I sure hope so! [23:15:30] <e^ipi> linux is a kernel [23:15:40] <jamesd> pfn, i meant boot linux [23:15:42] <e^ipi> if some distro decides to ship grub2, it'll work [23:15:45] <pfn> e^ipi, by 'linux's grub' [23:15:55] <pfn> I mean the grub that is typically shipped in linux distros [23:15:57] <e^ipi> linux doesn't have grub [23:15:59] <pfn> and they're all shipping grub 0.97 [23:16:08] <pfn> and it'll continue to be the case until grub2 becomes stable [23:16:56] <ravv> Im trying to get solaris to recognize new disks, is there any information on how to do it somehwere? (solaris 10 on a pentium 4) [23:17:00] <pfn> I'm assuming solaris' grub changes are against 0.97 [23:17:36] <trisk__> ravv: devfsadm -c disk && format [23:18:01] *** trisk__ is now known as Triskelios [23:23:54] <Plaidrab> libgsf is not present [23:24:06] <Plaidrab> ? [23:33:52] *** sfire||mouse has quit IRC [23:40:30] *** theRealballchal1 has quit IRC [23:41:38] *** Slhack has joined #opensolaris [23:41:43] *** Slhack has left #opensolaris [23:42:49] *** mikefut has joined #opensolaris [23:48:12] *** tsoome has quit IRC
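The chainloading workaround pfn quoted can be sketched as a Linux-side menu.lst entry in GRUB 0.97 syntax. The partition `(hd0,1)` is a placeholder for wherever the Solaris GRUB actually lives, so adjust before use:

```
title Solaris Express
    rootnoverify (hd0,1)
    chainloader +1
```

This hands control to the Solaris GRUB installed in the Solaris partition's boot sector, sidestepping the mainline GRUB's inability to load the Solaris kernel directly.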