July 25, 2011
[00:06:57] *** xc1024 has quit IRC
[00:18:12] *** skeeziks1 has joined #openindiana
[00:22:16] *** elgar has quit IRC
[00:24:13] <DontKnwMuch> hi :) What would you people here say, is this normal? http://www.pastie.org/2265444
[00:24:52] <DontKnwMuch> Wakeups-from-idle per second: 10164.2 ? isnt this very high?
[00:35:11] *** alterbuffyg has joined #openindiana
[00:35:12] *** ChanServ sets mode: +o alterbuffyg
[00:45:45] *** gea has quit IRC
[00:48:04] *** duk242 has quit IRC
[00:48:42] *** alterbuffyg has quit IRC
[00:54:38] *** Alasdairrr is now known as AlasAway
[01:17:10] *** mnaser has quit IRC
[01:17:27] *** mnaser has joined #openindiana
[01:25:51] <bdha> AlasAway: I wonder how that woman even found the OI fourms.
[01:25:53] <bdha> forums.
[01:25:58] <bdha> Sad, though.
[01:28:06] <skeeziks1> TIL about fmtopo. Nice.
[01:28:59] <skeeziks1> I was also pleasantly surprised to see that a cfgadm -x unconfigure will start blinking the identification LED on this Dell R515 for the unconfigured disk.
[01:29:14] <skeeziks1> (This is the only way I could find to map the physical bays to the SAS targets)
[01:35:37] <skeeziks1> Anyone know of a way to identify a drive in a bay without using cfgadm to unconfigure the disk?
[01:35:53] <skeeziks1> The Google isn't telling me much.
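skeeziks1's bay-identification trick above can be sketched as a short session. The attachment-point ids below are invented examples, skeeziks1 typed `-x unconfigure` while the documented state-change form is `-c`, and whether the identify LED actually blinks depends on the chassis/HBA:

```shell
# List disk attachment points; output shows ap_ids such as c2::dsk/c2t5d0
cfgadm -s "select=type(disk)"

# Unconfiguring a target is what made the Dell R515 blink that bay's LED;
# the ap_id here is an invented example
cfgadm -c unconfigure c2::dsk/c2t5d0

# Put the disk back once the physical bay has been located
cfgadm -c configure c2::dsk/c2t5d0
```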
[01:38:41] *** skeeziks1 has quit IRC
[01:38:48] <jkimball4> there's an opengrok installation somewhere isn't there?
[01:39:34] <jkimball4> nevermind
[01:49:58] *** dws6045 has joined #openindiana
[01:57:05] *** wonslung has joined #openindiana
[02:03:11] *** elgar has joined #openindiana
[02:04:10] *** tsoome has quit IRC
[02:04:41] *** tsoome has joined #openindiana
[03:10:54] *** bradend1 has joined #openindiana
[03:13:12] *** bradend has quit IRC
[03:16:17] *** sponix has quit IRC
[03:16:58] *** master_of_master has quit IRC
[03:18:35] *** master_of_master has joined #openindiana
[03:23:58] <Shadow__X> i just looked at powertop and it seems like my machine is not going to 2.4 ghz and instead stays at 1.6ghz
[03:24:02] <Shadow__X> even with high load
[03:29:39] <brandini> bummer
[03:30:26] <Shadow__X> nevermind it is working just didnt realize the load stopped when i checked it
[03:32:32] <Shadow__X> i thought gzip 9 compression in zfs was multithreaded? It is not pegging both of my cores
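On the gzip question above: as I understand the ZFS of that era, compression is applied per block in the write pipeline's taskq threads, so how many cores it uses depends on how many blocks are in flight rather than on the stream itself. A sketch for checking and setting the property (the dataset name is invented):

```shell
# Compression is a per-dataset property; 'tank/data' is an invented name
zfs get compression tank/data
zfs set compression=gzip-9 tank/data

# Watch whether the compression work spreads across CPUs during a write
mpstat 5
```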
[03:46:30] *** DrLou has quit IRC
[04:13:09] *** sivanov__ has quit IRC
[04:26:22] *** jsvcycling has joined #openindiana
[04:36:10] *** sivanov__ has joined #openindiana
[04:37:01] *** ChanServ sets mode: +o Triskelio-
[04:37:03] *** Triskelio- is now known as Triskelios
[04:40:40] *** sivanov__ has quit IRC
[04:49:02] *** POloser has joined #openindiana
[05:02:23] *** kart_ has joined #openindiana
[05:10:16] *** laserbled has joined #openindiana
[05:28:31] *** laserbled has quit IRC
[05:30:32] *** redgone has quit IRC
[05:58:21] *** axisys has quit IRC
[05:58:59] *** axisys has joined #openindiana
[06:15:08] *** laserbled has joined #openindiana
[06:20:48] *** elgar has quit IRC
[07:08:13] *** skeeziks has quit IRC
[07:08:24] *** Naresh has joined #openindiana
[07:21:53] *** Edgeman has joined #openindiana
[07:39:16] *** philhar88 has joined #openindiana
[07:40:52] <philhar88> Anyone care to estimate the performance you would expect from 24x 9 drive RAIDz3 ? Disks are Hitachi 3TB 7k3000 with 2x 12 core AMD and 128GB of RAM.
[07:54:19] *** dekar has quit IRC
[07:55:17] *** Vutral|FB has joined #openindiana
[07:55:42] <Vutral|FB> so we meet again
[08:03:19] <bdha> philhar88: "Good"?
[08:03:26] <bdha> Depends on the workload.
[08:03:50] <philhar88> streaming compressed video data
[08:04:02] <philhar88> so compression will be turned off
[08:04:26] <bdha> "Pretty good"? :)
[08:04:32] <bdha> Would be curious to see results.
[08:05:46] <philhar88> Yes well I can't find any open information about systems of this scale
[08:06:22] <bdha> Mail zfs-discuss@. Perhaps relling will respond.
[08:06:59] <bdha> Though depending on how the application works, it may make more sense to partition the storage, and allocate processors per job.
[08:07:47] <richlowe> philhar88: I don't know of open information, but I know relling has had information at around that scale.
[08:07:52] <bdha> s/processors/cores/
[08:08:03] <bdha> Presumably Nexenta at large knows a fair bit, heh.
[08:08:07] <richlowe> may have preceded multi-parity raidz though.
[08:08:22] <richlowe> bdha: I know he had info around the time of the original thumper.
[08:08:30] <richlowe> Brendan has fishworks info at about that size, too.
[08:08:44] <richlowe> though not with 3T disks no doubt.
[08:08:45] <bdha> My Thumper does okay, but it's nowhere near that big.
[08:08:52] <bdha> Yeah.
[08:08:55] <bdha> Old tech.
[08:09:01] <bdha> Kind of funny to think about it that way. :)
[08:09:17] <philhar88> so you suggest posting on the zfs-discuss mailing list?
[08:09:21] <bdha> philhar88: Yes.
[08:09:26] <bdha> philhar88: And ignore most of the responses.
[08:09:33] <philhar88> LOL
[08:09:33] <bdha> Especially Ed Harvey.
[08:10:03] <bdha> relling or brendang are who you want to listen to.
[08:23:24] *** Vutral|FB has quit IRC
[08:25:09] *** Vutral|FB has joined #openindiana
[08:35:06] *** gea has joined #openindiana
[08:42:54] *** merzo has joined #openindiana
[08:45:59] *** Vutral|FB has quit IRC
[08:52:06] <bradend1> Oh man, I wanna work on a system that big soo bad.
[08:52:11] *** gea has quit IRC
[08:52:27] <philhar88> its not that fun, believe me.
[08:52:31] <bradend1> I've only got a few dozen 1TB disks so far.
[08:53:26] <bradend1> Well, fun for me, maybe - I do a lot of infiniband dances - I exceed 10Gbit as a way of life.
[08:54:35] <bradend1> So, I'm still jealous.
[08:55:28] <philhar88> I'll be aggregating 10gbit NICs
[08:55:38] <philhar88> on a Cisco switch so that will be a first
[08:56:28] <bradend1> Well, have lots and lots of fun!
[09:04:41] *** philhar88 has quit IRC
[09:07:04] <gaYak> Or would that benefit only with smaller files (which streaming videos certainly isn't) ?
[09:07:29] <bradend1> gaYak: I believe his configuration is z3s made of 9 disks, and 24 of those 9 disk sets.
[09:08:05] <bradend1> ::drool:: I want.
[09:09:22] *** smrt has quit IRC
[09:09:40] *** smrt has joined #openindiana
[09:10:42] *** laserbled has quit IRC
[09:11:03] <POloser> it would be better to do raidz3 of 7 disks or 11 disks
[09:16:02] *** Worsoe has joined #openindiana
[09:16:15] *** anikin has joined #openindiana
[09:16:58] <gaYak> bradend1: Ouh.. so it seems. That's.. huh.
[09:17:33] <bradend1> Yeah. I keep thinking I misread it.
[09:18:31] *** |AbsyntH| has joined #openindiana
[09:18:41] <bradend1> Basically I'm jealous of the pile of parts and the goal, possibly not the implementation. After all, the fun is in the solving.
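philhar88's layout as bradend1 reads it (24 top-level raidz3 vdevs of 9 disks each) is easier to sanity-check by generating the `zpool create` invocation than by typing 216 device names. The pool name and cXtYd0 device names below are invented for illustration:

```shell
# Generate the zpool create line for 24 raidz3 vdevs of 9 disks each;
# 'tank' and the cXtYd0 names are placeholders, not real devices
cmd="zpool create tank"
for vdev in $(seq 0 23); do
  cmd="$cmd raidz3"
  for disk in $(seq 0 8); do
    cmd="$cmd c$((vdev + 1))t${disk}d0"
  done
done
echo "$cmd"
```

Echoing the command before running it also makes the vdev grouping easy to eyeball: each `raidz3` keyword starts a new 9-disk parity group.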
[09:24:17] *** laserbled has joined #openindiana
[09:31:17] *** flyz has joined #openindiana
[09:35:07] *** bens1 has joined #openindiana
[09:39:34] *** lblume has joined #openindiana
[09:44:07] *** asias has joined #openindiana
[09:44:13] *** asias has left #openindiana
[09:46:18] *** held has quit IRC
[10:04:42] *** sivanov has joined #openindiana
[10:05:34] *** InTheWings has joined #openindiana
[10:07:17] *** flyz has quit IRC
[10:11:10] *** tsoome has quit IRC
[10:12:58] *** held has joined #openindiana
[10:17:54] *** dekar has joined #openindiana
[10:24:05] *** dws6045 has quit IRC
[10:25:24] *** dws6045 has joined #openindiana
[10:27:31] *** baitisj has quit IRC
[10:28:19] *** echobinary has joined #openindiana
[10:34:20] *** flyz has joined #openindiana
[10:35:59] *** echobinary1 has joined #openindiana
[10:36:44] *** tsoome has joined #openindiana
[10:37:33] *** echobinary has quit IRC
[10:53:08] *** alterbuffyg has joined #openindiana
[10:53:08] *** ChanServ sets mode: +o alterbuffyg
[10:56:29] *** Lumb has quit IRC
[11:02:31] *** Lumb has joined #openindiana
[11:02:44] *** paularmstrong has joined #openindiana
[11:21:19] *** alterbuffyg has quit IRC
[11:30:54] <eto> does oi support amd64?
[11:32:09] <madwizard> eto: Out of the box
[11:37:10] *** CVLTCMK0 has quit IRC
[11:37:32] <eto> madwizard: so i can set my VM setting to 64 bit right? there are only open solaris options in vbox
[11:38:23] <madwizard> eto: Yes, you can, it should work
[11:38:46] *** CVLTCMK0 has joined #openindiana
[11:39:22] <eto> madwizard: don't want to annoy you, but want to be sure, i can use iso for server i downloaded right? its dual image
[11:40:38] *** paularmstrong has quit IRC
[11:41:45] <madwizard> eto: I'm cool
[11:41:49] <madwizard> You ask, I reply
[11:42:24] *** tg has quit IRC
[11:42:53] <eto> okay thanks
[11:48:00] *** tg has joined #openindiana
[11:48:26] *** tg is now known as Guest13778
[11:59:17] *** joshua_ has quit IRC
[11:59:29] *** joshua_ has joined #openindiana
[12:07:38] *** Guest13778 has quit IRC
[12:07:56] *** tg` has joined #openindiana
[12:09:33] *** Oriona has quit IRC
[12:09:56] *** Oriona has joined #openindiana
[12:14:28] *** tg` is now known as tg
[12:14:57] *** tg is now known as Guest48922
[12:17:14] *** spanglywires has joined #openindiana
[12:17:28] *** Guest48922 is now known as tg`
[12:24:56] *** |AbsyntH| has quit IRC
[12:26:10] <eto> hmm some fast scrolling errors are normal
[12:26:15] <eto> during install?
[12:26:40] <nettezzaumana> eto: :D depends on what they say
[12:33:01] <eto> well it says "transferring contents" and moves so i guess it's okay
[12:42:10] *** laserbled has quit IRC
[12:53:28] <eto> seems like it installed fine, however i am getting error : failed to update CPU microcode
[12:54:35] <nettezzaumana> eto: ouch ... i guess you have it virtualized, right ?
[12:55:03] <nettezzaumana> vmware|vbox|xen|kvm|qemu .. whatever
[12:56:07] <sivanov> eto,there was some workaround about microcode update
[12:56:20] <sivanov> like deleting a file from disk
[12:56:21] <nettezzaumana> mv /platform/i86pc/ucode /platform/i86pc/ucode.disabled
[12:56:32] <nettezzaumana> sivanov: ^^ or like that
[12:56:36] <eto> nettezzaumana: yes i am on vbox
[12:56:38] <sivanov> yep
[12:56:57] <nettezzaumana> eto: you can ignore that message
[12:56:58] <Woodstock> eto: that error can be safely ignored
[12:57:02] <eto> well booting takes ages
[12:57:36] <eto> "loading smf descriptions" is that normal? i will move ucode file when i boot up
[12:57:44] <nettezzaumana> eto: depends on if your cpu+vbox is capable of using the hypervisor ... if not it boots up ages
[12:58:04] <spanglywires> first install is the SMF import too
[12:58:17] <nettezzaumana> eto: ah, loading smf descr is generating manifests at first time boot
[12:58:34] <Woodstock> eto: you don't have to move the ucode stuff, the message is harmless and not related to your slow boot or anything else
[12:58:34] <spanglywires> even though its been improved it still takes *ages* if you aren't expecting it
[12:58:50] <eto> i see, i got in, cool
[13:00:01] <nettezzaumana> eto: just for my information ... what's the real cpu model and backend operating system ?
[13:00:33] <eto> okay i will try rebooting to see whether it was just that generation thingy
[13:00:42] <eto> nettezzaumana: hypervisor should be enabled
[13:00:49] <nettezzaumana> and how much ram you granted your guest to use ?
[13:01:09] <eto> nettezzaumana: FreeBSD mako 8.2-RELEASE FreeBSD 8.2-RELEASE #0: Thu Feb 17 02:41:51 UTC 2011 root at mason dot cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC amd64
[13:01:33] <eto> nettezzaumana: CPU: Intel(R) Xeon(R) CPU E5335 @ 2.00GHz (2010.97-MHz K8-class CPU)
[13:02:14] <nettezzaumana> eto: i suggest you explicitly check (and prove to yourself) that the hypervisor is used by vbox
[13:02:21] <eto> it should have some acceleration features, i granted 750 mb for ram, 2 cores, 16 GB diskspace
[13:02:29] *** tg` is now known as tg
[13:02:31] <eto> nettezzaumana: i will
[13:02:40] <nettezzaumana> eto: 750 is not enough ... recommended minimum is 1G
[13:02:47] <tsoome> 750MB is quite low for solaris.
[13:02:47] <spanglywires> absolutely
[13:02:53] <spanglywires> makes a world of difference
[13:03:09] <spanglywires> i used to try with 768mb and 1g makes your life a lot easier
[13:03:20] <tsoome> if you really wanna go with low ram, make sure gdm is disabled;)
[13:03:56] <eto> it was on wiki page, i have 8GB, and two other 1GB boxes running, how much would you grant?
[13:04:05] <nettezzaumana> at least 1G
[13:04:09] <eto> is it okay to grant more than i have real memory in general?
[13:04:19] <nettezzaumana> and still don't use gnome
[13:04:21] <EisNerd_> moin
[13:04:25] *** EisNerd_ is now known as EisNerd
[13:04:33] <nettezzaumana> if you want to use gnome it will suck even with 2G
[13:05:39] <EisNerd> someone here knowing some details about latest macos lion and osol cifs server?
[13:05:54] <tsoome> which details?
[13:05:58] <nettezzaumana> eto: no, it's not okay .. main limitation is real harddrive i/o
[13:06:05] <EisNerd> maybe why the authentification fails
[13:06:09] <nettezzaumana> tsoome: is mounting cifs works
[13:06:32] <tsoome> yes
[13:06:55] <nettezzaumana> s/is/if/
[13:06:56] <EisNerd> or if there is a workaround / configuration thing that fixes the problem
[13:07:07] <eto> nettezzaumana: i run server version so no gnome
[13:07:08] <tsoome> i have workgroup mode tho
[13:07:56] <tsoome> tcp4 0 0 192.168.60.24.62680 192.168.60.51.139 ESTABLISHED
[13:08:26] <tsoome> i have s11 as server tho
[13:08:33] <nettezzaumana> EisNerd: http://frankleng.me/2011/07/21/connect-to-a-freenas-samba-or-afp-share-on-lion-workaround/
[13:09:56] <eto> also quick question what is correct way to reboot - shutdown?
[13:11:06] <tsoome> for reboot - post solaris 10 you can use reboot command as well.
[13:11:17] <tsoome> or init 6 or shutdown -i6
[13:11:33] <tsoome> that includes OI
[13:12:17] <eto> great thanks
[13:13:24] <eto> thanks guys seems up and running
[13:13:27] <tsoome> man reboot tho. there are some nice switches
[13:14:10] <eto> is there also zsh available in the base install?
[13:14:39] <tsoome> pkg install zsh ?
[13:16:22] <eto> okay
[13:18:18] <eto> gid 10 (staff) is equivalent to wheel on BSDs ?
[13:18:29] <EisNerd> nettezzaumana: so there is nothing on serverside that would work?
[13:19:11] <eto> nevermind sudo seems to work nicely
[13:19:26] <eto> this is gonna bee interesting ride! thanks guys for everything
[13:21:05] <eto> well back again seems like hostname didn't register with my dhcp, does oi use dhclient?
[13:22:06] *** SH0x has joined #openindiana
[13:22:11] <EisNerd> damn, and the nfs stuff in oi isn't working properly here
[13:23:23] <EisNerd> hm ok, nfs isn't an option as oi exports every cifs share via nfs without any restrictions when the nfs service is enabled
[13:23:25] <spanglywires> eto - Solaris behaviour for DHCP is to ask the DHCP server for the hostname, you can override but I just use static ip's to be honest. There is quite a lot out on google but I don't know how much of it is current these days
[13:23:40] *** heldchen has joined #openindiana
[13:25:33] <eto> spanglywires: okay i will update my dhcpd rules
[13:26:15] *** held has quit IRC
[13:26:30] <spanglywires> eto.. sorry, it might actually be dns, I've only used Solaris dhcp at home
[13:26:42] *** held has joined #openindiana
[13:26:50] <spanglywires> but it sounds like you understand that stuff better than me anyway
[13:28:02] *** McBofh has quit IRC
[13:28:08] *** heldchen has quit IRC
[13:28:17] <EisNerd> damn there was a command to list shares available by nfs on box x
[13:28:24] <EisNerd> but I can't remember
[13:30:48] *** anikin has quit IRC
[13:32:41] <EisNerd> dfshares
[13:36:17] *** Naresh has quit IRC
[13:36:49] *** mnaser_ has joined #openindiana
[13:37:18] *** McBofh has joined #openindiana
[13:37:19] *** mnaser has quit IRC
[13:38:17] *** melliott has joined #openindiana
[13:42:32] <nettezzaumana> EisNerd: nfs should work and i think it's still better than CIFS
[13:44:01] <EisNerd> https://www.illumos.org/issues/1012
[13:44:33] <EisNerd> and I'm still failing to get kerberized nfs running
[13:45:18] <nettezzaumana> EisNerd: is this oi specific issue ?
[13:45:41] <EisNerd> afaik no
[13:46:04] <nettezzaumana> hmm, lemme check that on s10
[13:46:14] <EisNerd> I have trashed my oi so I need to reinstall
[13:46:37] <nettezzaumana> EisNerd: pff. how JFMI ?
[13:46:40] <EisNerd> nettezzaumana: uhm s10 may be not affected as they use a completely different sharemanagement
[13:47:01] <spanglywires> EisNerd: whats the server with the exported shares? I've recently been unable to get Linux exported NFS/krb5 shares mounted on AIX or Solaris
[13:47:05] <nettezzaumana> yeah, i think so cuz this issue sounds very unfamiliar to me
[13:47:31] <EisNerd> nettezzaumana: created a new partition in linux and did stupid things without properly rereading the partition table
[13:48:02] <nettezzaumana> EisNerd: fair enough, no worry, it happens even to experts of experts
[13:48:22] <EisNerd> spanglywires: the otherway around nfs/krb5 exported from oi mount in linux
[13:49:02] <spanglywires> from what i trussed/investigated etc, it seems that it was failing dead with authentication
[13:49:15] <spanglywires> despite kadmin etc and kinit working
[13:49:40] <EisNerd> ugly is that suspend to ram doesn't work properly with zfs kernel module loaded (linux)
[13:50:14] <nettezzaumana> EisNerd: ????? you have workin' zfs kernel module for linux ?????
[13:50:24] <EisNerd> spanglywires: kerberos works fine here on both sides, but nfs/krb fails
[13:50:28] *** DrLou has joined #openindiana
[13:50:28] *** ChanServ sets mode: +o DrLou
[13:50:31] <EisNerd> nettezzaumana: yes? what's the problem?
[13:50:50] <spanglywires> EisNerd: thats exactly what I got with the roles reversed
[13:50:54] <nettezzaumana> oh really, gimme link pls .. i checked the state year ago and it just didn't work
[13:50:59] <nettezzaumana> EisNerd: ^^
[13:51:23] <spanglywires> EisNerd: and its not just OI, its Sol10 and AIX that don't work with nfs/krb5 from Linux
[13:51:33] <nettezzaumana> http://zfsonlinux.org/
[13:51:35] <nettezzaumana> EisNerd: ^^
[13:51:39] <nettezzaumana> ?
[13:51:42] <EisNerd> yes
[13:51:47] <EisNerd> nettezzaumana: gentoo?
[13:51:51] <EisNerd> http://pastebin.com/qWhPPmbt
[13:52:31] <nettezzaumana> EisNerd: no, opensuse but just tell me, where did you get a code ?
[13:53:06] <nettezzaumana> EisNerd: i hope that you don't mean a pseudo-functional fuse based one
[13:53:29] <EisNerd> uff wait I have to look in the package
[13:53:58] <EisNerd> if [[ "${PVR}" == *9999* ]]; then EGIT_REPO_URI="git://github.com/behlendorf/${PN}.git" EGIT_COMMIT="master"
[13:54:01] <EisNerd> else
[13:54:06] <EisNerd> there
[13:55:07] <nettezzaumana> EisNerd: $PN is missing
[13:55:40] <EisNerd> oh sorry
[13:55:46] <nettezzaumana> only what i have available is zfs-fuse molestatory tool
[13:55:55] <EisNerd> zfs
[13:56:00] <nettezzaumana> thx
[13:56:05] <EisNerd> and spl
[13:56:07] <EisNerd> afaik
[13:56:37] <EisNerd> yes
[13:58:21] <nettezzaumana> EisNerd: thank you, just cloning atm
[13:59:21] <EisNerd> nettezzaumana: no idea if it works on suse
[13:59:50] <nettezzaumana> EisNerd: do i have to get spl source ?
[13:59:51] <EisNerd> in gentoo it is just adding those two ebuilds and installing zfs
[13:59:51] <nettezzaumana> checking spl source directory... Not found
[13:59:52] <nettezzaumana> configure: error:
[14:00:09] <EisNerd> nettezzaumana: yes, you need two kernel modules
[14:00:37] <EisNerd> but no real idea what you have to do in detail, portage does this for me
[14:01:17] <nettezzaumana> checking spl source version... Not found
[14:01:17] <nettezzaumana> configure: error: *** Cannot determine the version of the spl source. *** Please prepare the spl source before running this script
[14:01:22] <nettezzaumana> interesting
[14:01:34] <nettezzaumana> EisNerd: no worry, i'll fight to win alone ;)
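nettezzaumana's `configure` failures above come from building the zfs tree without the companion spl tree. A rough sketch of the build order for the zfsonlinux modules of that era, using the repos named in the chat; the relative `--with-spl` path and the plain `make` steps are illustrative, not a verified recipe:

```shell
# spl (the Solaris Porting Layer) must exist and be configured first
git clone git://github.com/behlendorf/spl.git
git clone git://github.com/behlendorf/zfs.git

(cd spl && ./configure && make)

# zfs's configure is then pointed at the prepared spl source tree
(cd zfs && ./configure --with-spl=../spl && make)
```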
[14:03:43] <nettezzaumana> EisNerd: just for my information (sorry for the stupid Qs, i'm too curious and in a hurry), does it work well ? does it allow you to create and use zpools ?
[14:03:52] <EisNerd> anyway I'll see if my suspend hook-script works
[14:04:02] <EisNerd> nettezzaumana: yes
[14:04:29] <EisNerd> you could also open your OI pool (if not in use)
[14:05:03] <nettezzaumana> oh what a praise
[14:05:40] <EisNerd> shit, my oi dlc mirror has failed to update properly
[14:06:20] <EisNerd> nettezzaumana: some guys have setup a linux with zfs root, but due to the suspend issues it isn't that far
[14:07:06] <nettezzaumana> EisNerd: i won't do it, just reading zfs from linux is pretty enough for me
[14:07:53] <nettezzaumana> EisNerd: are you experiencing any serious troubles or fails ?
[14:08:45] <EisNerd> I haven't used it that hard, as I created a persistent pool only recently (one disk, nearly no reboots)
[14:09:40] <EisNerd> hm the snapdir seems not to be implemented so far
[14:11:14] *** POloser has left #openindiana
[14:13:09] <EisNerd> hm the first illumos release in my hands
[14:13:14] <lennard> is anyone aware of a method to hide samba shares from the listing?
[14:13:43] <eto> pushing hostname/fqdn through dhcp made it work
[14:14:36] <EisNerd> data/backup/serviceDaten/linux name=linux_service$
[14:14:48] <EisNerd> lennard: this gives you a hidden share
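EisNerd's example relies on the SMB convention that a share name ending in `$` is skipped by Windows browse lists. On the OI kernel CIFS server that is set through the `sharesmb` property; the pool, dataset, and share names below are invented:

```shell
# Share a dataset over the kernel CIFS server under a '$'-suffixed name;
# 'tank/backup' and 'backup$' are invented examples.
# Note: this only hides the share from Windows browse lists; a client
# that already knows the name can still connect to it.
zfs set sharesmb=name=backup$ tank/backup
```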
[14:15:07] <eto> huh dhcpagent, oi sure is different
[14:15:59] <EisNerd> is there a useful listing of fixes and improvements from oi148 to 151?
[14:17:15] <lennard> hrm, only hidden from windows though
[14:20:36] <eto> lennard: you can tweak ACLs
[14:20:58] <eto> lennard: hidden + ACLs tends to work fine on windows
[14:21:20] <lennard> yea, well, I kinda don't want clients to be able to show the names of other shares at all
[14:21:34] <lennard> not even if the'yre sneaky and aren't windows :P
[14:21:41] <EisNerd> lennard: afaik this is limited by protocol
[14:22:35] <lennard> linux can do it :P
[14:22:40] <lennard> with browsable = no
[14:22:40] <EisNerd> as the share enumeration happens afaik normally before the login
[14:23:00] <nettezzaumana> EisNerd: kisss ya it fuckin works
[14:23:08] <lennard> true, but I don't care about visibility (ie may still be invisible) after login
[14:23:13] <eto> what kind of wiki open indiana uses?
[14:23:15] * nettezzaumana goes vomit that he kissed a man .. omg
[14:23:23] <EisNerd> yeah ok those shares are never listed even when you're allowed to access them
[14:23:30] <eto> nettezzaumana: manfobia?
[14:23:38] <EisNerd> eto: confluence
[14:24:11] <EisNerd> lennard: hm, I'm not sure if a client could connect to a share he doesn't know about
[14:24:33] <lennard> I'm pretty sure they can
[14:24:54] <EisNerd> lennard: if you have a linux server with those hidden shares at hand try smbclient -L //server/
[14:25:04] <eto> EisNerd: pretty dumb wiki, is there a way to get rid of the right panel?
[14:25:10] <lennard> I could make one I guess
[14:25:18] <lennard> the only hidden ones atm are home directories
[14:25:23] <lennard> they might be special cases
[14:26:07] <lennard> yup, that works
[14:26:29] <lennard> not visible in smbclient -L, but still can connect and do dirlistings and stuff (again with smbclient)
[14:26:47] *** laserbled has joined #openindiana
[14:27:11] <eto> doesn't OI use ZFS by default?
[14:27:25] <nettezzaumana> eto: yes it does
[14:29:01] <eto> nettezzaumana: rpool = zpool?
[14:29:18] *** TPickle has joined #openindiana
[14:29:27] <EisNerd> r like root for rootfs
[14:29:48] <nettezzaumana> EisNerd: omg God it works
[14:29:51] <nettezzaumana> :D
[14:30:05] <eto> great, pfexec = sudo?
[14:30:13] <EisNerd> eto: no
[14:30:40] <spanglywires> EisNerd: whats the score with pfexec? is it dead?
[14:31:04] <tomww> pfexec is now what it was designed for
[14:31:18] <EisNerd> eto: it is similar in usage, but completely different in implementation
[14:31:26] <tomww> Glen Faden spoke about what pfexec was really thought to be
[14:32:12] *** Naresh has joined #openindiana
[14:32:21] <tomww> pfexec if using RBAC to elevate access levels fine grained , sudo is different.
[14:32:28] <eto> EisNerd: what should i use? sudo also seems to work
[14:33:02] <spanglywires> tomww - right, I get you.
[14:33:32] <spanglywires> tomww: that definitely makes it easier to explain what it is and what its for
[14:33:43] <nettezzaumana> dd if=/dev/zero of=/mnt/data/zfs.flat bs=1M count=333
[14:33:46] <nettezzaumana> zpool create -f testone /mnt/data/zfs.flat
[14:33:54] <nettezzaumana> # zfs list
[14:33:54] <nettezzaumana> NAME USED AVAIL REFER MOUNTPOINT
[14:33:55] <nettezzaumana> testone 94.5K 296M 30K /testone
[14:34:00] <nettezzaumana> uaaaaaaaaaaaaaa
[14:34:09] <nettezzaumana> sorry .. bad channel
[14:35:02] *** CVLTCMK0 has quit IRC
[14:35:58] <eto> why do the interfaces have so funny long names?
[14:36:17] <taemun> which interfaces?
[14:36:55] <eto> taemun: network ones
[14:37:12] <taemun> eto: can you provide an example?
[14:37:17] <eto> like e1000g0/_a and e1000g0/_b
[14:38:27] <tsoome> they are meaningful names, not stupid shit like eth0
[14:38:29] <taemun> sorry, I haven't seen that
[14:38:44] <taemun> e1000g0 means intel gigabit adapter 0
[14:38:51] <taemun> /_a I have no idea about
[14:38:53] <spanglywires> looks like ipadm has been playing though for _a and _b
[14:39:23] <eto> host has ?dualport? adapter
[14:39:37] <spanglywires> ah, that may be it
[14:40:01] <eto> bge0: <Broadcom NetXtreme Gigabit Ethernet Controller, ASIC rev. 0x009003> mem 0xfd110000-0xfd11ffff,0xfd100000-0xfd10ffff irq 16 at device 4.0 on pci15
[14:40:15] <tsoome> e1000g0 means its port 0 from card driven by e1000g driver.
[14:40:33] <eto> bge1: <Broadcom NetXtreme Gigabit Ethernet Controller, ASIC rev. 0x009003> mem 0xfd130000-0xfd13ffff,0xfd120000-0xfd12ffff irq 17 at device 4.1 on pci15
[14:40:35] <tsoome> (man e1000g)
[14:40:39] <eto> yep
[14:40:55] <lblume> tsoome: there are pros and cons for both sides. It'd be honestly better for the Solaris way if a brand of chipset didn't have a gazillion different potential names :-D
[14:41:50] <eto> chsh doesn't seem to work
[14:42:01] <eto> how can i change shell to zsh?
[14:42:17] <spanglywires> ipadm/dladm allow vanity names though don't they?
[14:42:27] <spanglywires> so you can change to eth[0-99]
[14:42:57] <Warod> spanglywires: yeah, it's possible
[14:43:55] <tsoome> someone has had issues with ipadm and a renamed interface.
[14:44:11] <tsoome> probably a bug tho, as it's supposed to work
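The vanity-name support spanglywires mentions is a `dladm` operation; a minimal sketch, where the link names are assumptions and the link must not be in use by an active IP interface when renamed:

```shell
# Rename the datalink to a vanity name (link names here are examples)
dladm rename-link e1000g0 eth0

# Confirm the new link name
dladm show-link
```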
[14:44:35] <eto> hmm `passwd -e` worked
[14:47:00] <eto> is it okay to edit /etc/profile directly ?
[14:49:31] <lblume> spanglywires: Yep, Solaris is getting more like Linux, and Linux more like Solaris, which tends to prove that both ways are not so bad :-)
[14:49:47] <lblume> eto: Depends what you intend to do there.
[14:50:02] <spanglywires> lblume: I wish they'd take the good features though :D
[14:51:02] <tsoome> with profile you need to be sure the shells you are using can eat your changes
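tsoome's caveat is about shell compatibility: /etc/profile is read by Bourne-compatible login shells, so additions should stay plain POSIX sh. A sketch of eto's pager setting in that style:

```shell
# POSIX-sh-safe addition for /etc/profile: a plain assignment plus export,
# with no bash/zsh-specific syntax, so any Bourne-compatible shell reads it
PAGER=less
export PAGER
```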
[14:51:56] <edho> anyone using this? http://cgi.ebay.com/PCIE-PCI-E-SATA-2-PORT-CONTROLLER-NON-RAID-WINDOWS-7-/160612449995?pt=LH_DefaultDomain_0&hash=item25653f7ecb#ht_2368wt_855
[14:52:10] <edho> sil3132
[14:52:26] *** dimonov has quit IRC
[14:52:48] <nettezzaumana> EisNerd: since now until death i'm at your services :D
[14:53:05] * nettezzaumana starts tracking EisNerd to kill :P
[14:53:13] <eto> lblume: i set the pager to less and terminfo to GNU's
[14:53:26] <EisNerd> I got a cheap sata controller, you just need to flash another bios (from the vendor website) to disable the stupid raid functions
[14:53:47] <edho> EisNerd: what chip
[14:54:00] <eto> tsoome: seems to work with both bash and zsh
[14:54:23] <eto> EisNerd: can't you just ignore it?
[14:54:46] <eto> EisNerd: you usually need to boot to dos to do that
[14:55:05] *** dimonov has joined #openindiana
* eto bets it's some via crap
[14:55:49] <EisNerd> delock SiI3726
[14:56:18] <edho> huh
[14:56:59] <edho> multiplier?
[14:57:24] <eto> oh SiliconImage are much better in my experience, at least both sata ports worked on OpenBSD, not so with via
[14:58:47] *** mnaser_ has quit IRC
[14:59:00] *** mnaser has joined #openindiana
[14:59:09] <edho> hmm
[14:59:17] *** mnaser has quit IRC
[14:59:31] <edho> so, any comment on this? http://cgi.ebay.com/PCIE-PCI-E-SATA-2-PORT-CONTROLLER-NON-RAID-WINDOWS-7-/160612449995?pt=LH_DefaultDomain_0&hash=item25653f7ecb#ht_2368wt_855
[14:59:38] *** mnaser has joined #openindiana
[15:00:09] <tsoome> check hcl
[15:01:46] *** |AbsyntH| has joined #openindiana
[15:05:29] <eto> tsoome: man e1000g
[15:05:29] <eto> No manual entry for e1000g.
[15:06:06] <tsoome> so you have to install manuals
[15:07:28] <eto> tsoome: pkg install manuals ?
[15:07:45] *** spanglywires has quit IRC
[15:08:06] <eto> not in catalog
[15:09:44] <tsoome> enter pkg command without any options
[15:10:01] *** AlasAway is now known as Alasdairrr
[15:10:12] *** spanglywires has joined #openindiana
[15:11:23] <eto> tsoome: sudo pkg install system/manual
[15:11:24] <eto> No updates necessary for this image.
[15:11:56] <eto> pkg search manual spits plenty of stuff but mostly sunstudio related
[15:13:04] <tsoome> pkg search e1000g.7d
[15:13:21] <tsoome> it should be in system/manual tho
[15:13:47] <eto> cool there is lua
[15:14:15] <eto> tsoome: what kind of system is that, pkg that is? pkgsrc?
[15:14:30] <eto> pkg search e1000g.7d
[15:14:34] <EisNerd> uhm how to get details about pcie cards in oi
[15:14:34] <eto> returned nothing
[15:15:36] <tsoome> scanpci, prtconf
[15:15:57] <spanglywires> prtdiag/prtpicl ?
[15:16:22] <tsoome> pkg search -r e1000g.7d won't tell you either? if so, your os does not include that manual. too bad.
[15:16:29] <eto> nice
[15:17:10] <eto> even that query ended up empty
[15:17:11] <EisNerd> pci bus 0x0006 cardnum 0x00 function 0x00: vendor 0x1095 device 0x3132 Silicon Image, Inc. SiI 3132 Serial ATA Raid II Controller
[15:17:14] <EisNerd> this one
[15:17:25] <lblume> tsoome: Aren't the manuals split into the packages they concern now?
[15:17:44] <eto> EisNerd: well it's present, do you see harddisks on that?
[15:17:46] <tsoome> not in s11
[15:18:00] <eto> tsoome: s11 is release name?
[15:18:09] <tsoome> and even then, searching by full file name should give you it
[15:18:20] <eto> well uname -a = SunOS voivel 5.11 oi_148 i86pc i386 i86pc
[15:18:34] <lblume> Ok. Too bad.
[15:18:58] <EisNerd> this one http://www.delock.com/produkte/gruppen/pci-express/Delock_SATA_II_PCI_Express_Card_2_Port_70137.html
[15:19:11] <EisNerd> afaik I used with the no raid bios
[15:19:51] <EisNerd> eto: guess, your answer, when I recommend it and ask, to give you details, how to get the details of pci-cards in solaris?
[15:21:48] <eto> EisNerd: you should plug in a drive and list drives instead, however i don't know how to list hard disks in oi
[15:22:11] <nettezzaumana> EisNerd: would you point me to some article describing the rights to use zfs on linux ... i'm a bit confused
[15:22:11] <eto> cfgadm -s "select=type(disk)" mentioned in the wiki doesn't seem to work for me
[15:22:52] <EisNerd> eto: I use it to extend my 6 onboard satas to have 8 disks in the box
[15:22:58] <EisNerd> so yes it works
[15:23:07] <eto> is there a color option for ls?
[15:23:16] <EisNerd> nettezzaumana: ?
[15:23:22] <eto> EisNerd: so what's the problem then?
[15:23:27] <EisNerd> eto: nothing
[15:23:30] <tsoome> man ls
[15:23:44] <EisNerd> eto: someone was looking for sata controllers here
[15:23:55] <docsteel> EisNerd: if you have time, please come to my office
[15:24:11] <EisNerd> edho: you asked?
[15:24:15] <EisNerd> docsteel: ok
[15:24:35] *** gea has joined #openindiana
[15:25:13] <EisNerd> would be interesting if the oi installed in vbox would boot natively
[15:26:00] <eto> EisNerd: what does natively mean?
[15:26:38] <eto> hmm there seems to be no `group-directories-first` option for ls
[15:26:50] <EisNerd> eto: use vbox raw-disc function to run your vms directly on your disk
[15:27:28] <EisNerd> eto: then you could boot thoose natively instead of using vbox
[15:27:52] <tsoome> …. /usr/gnu/bin/ls --group-directories-first
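tsoome's tip above can be checked on any box with GNU coreutils; on OI the binary lives at /usr/gnu/bin/ls, and the scratch directory and file names below are made up for the demo.

```shell
# Demo of GNU ls grouping directories first: a plain lexical sort would
# list "afile" before "zdir"; --group-directories-first hoists the
# directory to the top. (On OI, invoke /usr/gnu/bin/ls.)
tmp=$(mktemp -d)
touch "$tmp/afile"
mkdir "$tmp/zdir"
ls --group-directories-first "$tmp"
rm -rf "$tmp"
```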
[15:27:55] <EisNerd> ok afk
[15:29:12] *** gea has quit IRC
[15:31:40] <nettezzaumana> EisNerd: hmm. i don't understand the sense of it: since zfs for linux must not be shipped with the kernel itself, the only possible way is to download and build it yourself.
[15:32:02] <spanglywires> eto: cfgadm -as "select=type(disk)"
[15:32:12] <spanglywires> eto: that does list disks
[15:33:11] <spanglywires> eto: cfgadm is one of the most useful commands for finding disk devices
[15:33:42] <eto> spanglywires: zsh: command not found: cfgadm
[15:33:50] *** gea has joined #openindiana
[15:34:02] <eto> spanglywires: bash: cfgadm: command not found
[15:34:10] <eto> is my install hosed?
[15:34:19] <spanglywires> eto: the other is format (or if you don't want to ctrl-c it, format < /dev/null)
[15:34:26] <spanglywires> you must have cfgadm
[15:34:33] <spanglywires> are you signed in as root?
[15:34:41] <eto> tsoome: cool, so basically the whole gnu core is there?
[15:34:49] <spanglywires> eto: should be in /usr/sbin/cfgadm
[15:34:50] *** bens1 has quit IRC
[15:34:55] <eto> spanglywires: no i am logged in as my user
[15:35:10] <lblume> cfgadm works for listing as a user, but is not in the default user PATH
[15:35:18] <spanglywires> eto: it should still run some functionality as non-root
[15:35:24] <eto> spanglywires: yes it works when invoked directly
[15:35:52] <spanglywires> eto: paths are quite anally restricted in Solaris compared to other *nix
[15:36:03] <tsoome> no, it's neither cool nor is the whole core there.
[15:36:30] <eto> lblume: how can i add it to path? can i add /usr/sbin/ ?
[15:36:35] <tsoome> spanglywires: restricted paths?
[15:36:52] <spanglywires> as in PATH variable
[15:36:56] <tsoome> ?
[15:37:11] <eto> tsoome: how come not cool :) gnu ls is the only thing which can group dirs at the start of the listing
[15:37:12] <tsoome> its your system, you are its admin, set the path you need.
[15:37:31] <lblume> eto: Right Place is in /etc/default/login and/or /etc/default/su
[15:37:41] <eto> tsoome: well one would expect /usr/sbin to be there by default
[15:37:51] <tsoome> why?
[15:38:05] <eto> doesn't sbin mean static binaries?
[15:38:09] <tsoome> usr/sbin has sysadmin tools
[15:38:17] <tsoome> and no, it does not mean static
[15:38:23] <lblume> eto: For globally setting it, that is. Else, just modify your ~/.profile
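A minimal sketch of lblume's per-user fix, assuming a POSIX shell; the equivalent PATH= setting goes in /etc/default/login for a global default.

```shell
# Append the sysadmin directories to PATH (put this line in ~/.profile
# to make it stick); cfgadm then resolves without the full
# /usr/sbin/cfgadm path on an OI box.
export PATH="$PATH:/usr/sbin:/sbin"
case ":$PATH:" in
  *":/usr/sbin:"*) echo "PATH ok" ;;
esac
```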
[15:38:46] <lblume> /usr/sbin means system, /sbin meant static, but not anymore
[15:38:47] <eto> those are the ones that are supposed to be used during emergency, yet they are not in path, i was just wondering
[15:40:00] <eto> lblume: so new meaning is system? good to know, thanks
[15:40:55] <lblume> eto: filesystem(4)
[15:41:03] <lblume> Oops: filesystem(5) :-)
[15:41:53] <tsoome> man filesystem yep:P
[15:42:20] <eto> lblume: tsoome : cool i was trying `man hier` which was not there
[15:44:57] <lblume> And before you ask: a separate /usr is still somewhat supported, but largely meaningless and a cause for issues rather than any real benefit.
[15:47:13] <tsoome> it won't do any good:P
[15:47:28] <eto> lblume: by separate /usr you mean what? like on different partition?
[15:47:39] <eto> lblume: or slice, or whatever solaris has
[15:49:16] *** tsoome has left #openindiana
[15:50:17] <lblume> dataset, these days.
[15:58:56] *** Naresh has quit IRC
[16:01:01] *** Worsoe has quit IRC
[16:02:49] *** tsoome has joined #openindiana
[16:07:56] <eto> does open indiana survive a hw change or direct transplantation to another machine?
[16:08:53] <lblume> Unsupported. Some have reported success doing it, but it's certainly not something reliable in any way.
[16:09:08] <eto> oh shit
[16:09:22] <eto> so it means no dd to real machine
[16:09:37] <eto> that's a pity, why is that?
[16:10:23] <lblume> Because it's something mostly relevant to the hobbyist market, which has never been of much interest to Solaris.
[16:10:46] <lblume> But I'm quite sure the OI team will welcome any patch.
[16:11:22] <eto> when i moved off windows i was glad to learn almost all *nix systems i tried detect hw at boot time (OpenBSD, FreeBSD, Linux) so i considered it a default feat
[16:11:45] <lblume> All *hobbyist-oriented* *nix systems.
[16:12:15] <eto> lblume: how come? what about moving installations between different machines, doesn't it happen in server installations?
[16:12:34] <eto> so in this regard oi is on par with windows
[16:13:31] <eto> i see solaris drags some kind of pathos around it, not being hobbyist :)
[16:13:32] <lblume> Sun hardware is well-controlled and needed no special work on the OS to do it. And mostly, when you move machines around in a production environment, you do it in a serious way, ie, using zones.
[16:13:32] <Woodstock> eto: i have never seen any problems with that, and i don't see why it wouldn't work.
[16:14:25] <eto> Woodstock: i saw it on several blog posts, i dd-ed several of my systems during hardware upgrades and whatnot
[16:14:25] <lblume> Woodstock: Because a whole lot of things needed for devices change with different hardware.
[16:15:33] <eto> lblume: for the kernel being able to detect hw every time as it goes, i think it's actually a feature, and not some hobbyist feature, really
[16:15:36] <Woodstock> lblume: really? which things?
[16:15:42] <lblume> I've tried it at regular intervals. Last time I succeeded was on S9. S10 and beyond never worked for me. It can work if you move to the exact same hardware. If not, a whole lot of device paths will not be the same.
[16:16:01] <eto> lblume: and on big iron it doesn't matter as it's not restarted that often anyways
[16:16:04] *** mikw has joined #openindiana
[16:16:22] <lblume> eto: It does detect the hardware, the problem is not there. What's missing is the magic to find the right place to boot from.
[16:16:38] <eto> huh?
[16:17:24] <lblume> Boot device handling is very different from what is on Linux and is quite static.
[16:17:37] <Woodstock> yea, the first boot may break since it thinks it is on a different system and refuses to import the root zpool that wasn't exported previously. you boot a livecd, import and export rpool, and everything works again.
[16:18:27] <Woodstock> on sxce you had a failsafe boot archive that you could use to do that, iirc there is even an oi bug requesting to bring that feature back
[16:18:49] <lblume> Woodstock: It is not that simple. that still doesn't take into account the boot devices.
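Woodstock's rescue recipe above, written out as a sketch: rpool is the standard root pool name, and everything here assumes you have booted a live CD on the new hardware.

```shell
# From the live CD shell: force-import the root pool the new box
# refused to touch (it was never exported on the old machine), then
# export it cleanly so the next boot from disk accepts it.
zpool import -f rpool
zpool export rpool
reboot
```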
[16:19:02] <spanglywires> eto: basically in the 'enterprise' you'd use provisioning tools rather than go around dd'ing
[16:19:26] <eto> spanglywires: what are those provisioning tools?
[16:19:37] <lblume> dd'ing would be the most evil way of doing that anyway :-)
[16:19:39] <spanglywires> you'd be looking at auto-install
[16:19:48] <eto> spanglywires: something solaris related?
[16:19:53] <Woodstock> lblume: i still don't understand what you mean with that, and i have done that successfully a couple of times now
[16:19:55] <spanglywires> eto: auto-install
[16:20:02] <spanglywires> its not pretty though… one sec
[16:20:05] <eto> spanglywires: okay i will look into that
[16:20:06] <spanglywires> I'll find a link
[16:20:28] <Woodstock> yeah, dd'ing is a bit crude. you would rather zpool-split a mirrored rpool :)
[16:20:39] <spanglywires> eto: http://blogs.warwick.ac.uk/peggleton/entry/automated_installer_i/
[16:20:47] <eto> lblume: why is dd'ing dangerous? even on a dead system?
[16:21:00] <spanglywires> eto: I've personally done this with success, but there are many other similar blogs
[16:21:42] <eto> oh my that autoinstall
[16:21:56] <lblume> dd makes a lot of assumptions about a hard disk that are simply almost never true.
[16:22:17] <eto> it's like dropping an anvil on one's head -> sif files and xml shit really sucked on windows, does it work on solaris?
[16:23:29] <eto> lblume: like for example? i thought since everything is moving to SCSI-like crap, a disk is just an LBA blackbox anyway, a linear chunk of sectors
[16:23:49] <eto> even on ide
[16:24:14] <eto> so oi requires special handling
[16:24:31] <lblume> In theory, yes, but there is still the idea that the BIOS has of the disk, and then the idea that the system has, and it's better if they're the same. Also, consumer disks rarely have the same amount of said blocks.
[16:24:50] <tsoome> if you wanna live on the edge, go with dd. if not, create a pool and zfs send or zpool split
[16:25:36] <lblume> dd is a Swiss Army knife made with a blade of silex.
[16:25:43] <tsoome> same thing did apply to ufs.
[16:26:07] <tsoome> the fact its there, does not mean its the tool to do things.
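tsoome's two safe routes, sketched with made-up pool, host, and disk names (rpool, newbox, c1t0d0, c2t0d0, newpool) — not the thread's exact commands:

```shell
# Route 1: replicate with snapshots; the receiving pool was created
# with zpool create, so it already has its own unique pool ID.
zfs snapshot -r rpool@clone
zfs send -R rpool@clone | ssh newbox zfs receive -Fdu rpool

# Route 2: attach a disk to the pool as a mirror, let it resilver,
# then split it off; zpool split assigns the new pool its own ID.
zpool attach rpool c1t0d0 c2t0d0
zpool split rpool newpool c2t0d0
```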
[16:26:12] <eto> "dns multicast service is required" - what if i have solaris boxes using some 3rd party dns server?
[16:26:43] <tsoome> why that should matter?
[16:27:04] <tsoome> mdns does not exclude anything
[16:27:33] * tsoome is taking dd and cloning some fingers….
[16:28:59] *** pettson has quit IRC
[16:29:15] <tsoome> mdns was introduced because modern people can't read and therefore have no idea what dns is and how to manage it. so something automatic was needed.
[16:29:47] *** pettson has joined #openindiana
[16:30:25] <eto> tsoome: well this autoinstall seems way over my head
[16:30:58] <eto> don't all modern kernels talk to hdd controllers directly bypassing bios?
[16:31:04] <lblume> Woodstock: Sorry, I can't find my notes about it at the moment to be more precise. At least I remember it involved the part where grub finds and starts the right kernel, and I know it was unmovable until at least b133. Maybe it has improved, and certainly some can make it work, it is just not supported in any way AFAICT.
[16:31:30] <tsoome> kernel does not talk to HBA. the driver does.
[16:32:00] *** bens1 has joined #openindiana
[16:32:04] <lblume> eto: The glue where the BIOS gives things to the kernel is where things matter.
[16:32:08] <eto> tsoome: the driver becomes part of the kernel when it's loaded, or when it's statically compiled into the kernel
[16:32:24] <eto> lblume: i thought grub handles that
[16:32:36] <tsoome> in solaris there is no such thing as statically compiled in kernel.
[16:32:48] <eto> tsoome: so kernel is modular by default?
[16:32:49] <lblume> eto: A very linuxy view of the world. The Solaris kernel has a much less tight relationship with its drivers.
[16:33:14] <eto> openbsd and frebsd also have plenty of parts statically compiled in
[16:33:17] <tsoome> it has been since 2.0 which is ages ago.
[16:33:26] <lblume> eto To the point that the Linux way of doing it looks prehistoric to me :-P
[16:33:38] <eto> yes but some crucial drivers are always present
[16:33:52] <lblume> Rebuilding a driver when you update the kernel? Like, WTF? :-P
[16:34:17] <eto> lblume: yeah but i came to like the failsafe ramdisk with a mini root and some tools, you can always boot then
[16:35:20] <eto> lblume: don't be sarcastic, i bet some things are baked into the opensolaris kernel as well
[16:35:21] <alanc> I think it was SunOS 3.5 where you had to recompile the kernel to change the number of disks supported, but that was the mid 80's and it got better quickly
[16:36:01] <eto> i am talking about things like vga controllers and keyboard handlers and what other generic hw there is
[16:36:02] <alanc> (or so I'm told, having been in grade school at the time and nowhere near a SunOS box)
[16:36:05] *** elgar has joined #openindiana
[16:36:45] <tsoome> similar stuff in sunos 4 - some tuneables were done that way
[16:37:04] <eto> not all device drivers reside in their own .so
[16:37:08] <lblume> eto: Those are both separate modules
[16:37:30] <tsoome> eto: solaris is not linux.
[16:37:55] <eto> so it means that opensolaris, illumos and friends can't even output a panic message to the console without access to disk?
[16:37:56] <tsoome> the linux way of doing things is not the only way.
[16:38:06] <eto> dos had it
[16:38:16] <eto> like that
[16:38:31] <lblume> So? Solaris is not DOS-based.
[16:38:48] <tsoome> without the disk?
[16:38:49] <alanc> I don't understand why you think access to disk is needed to output to console
[16:39:03] <lblume> Me either. Maybe Linux needs it?
[16:39:20] <tsoome> :P
[16:39:34] <eto> well if the console driver is in a module as you say, and that module resides on disk, how could that possibly work
[16:39:49] <tsoome> same way as you get kernel into ram in first place
[16:39:59] <eto> or grub stuffs the kernel into memory, loads the needed driver and hands it control?
[16:40:13] <eto> tsoome: normally by bootloader
[16:40:18] <tsoome> kernel does load what it needs.
[16:40:42] <alanc> the console would be handled by either the serial driver, the vgatext driver, or a specific video card driver - whichever of those is needed would be in the boot archive loaded by the boot loader
[16:40:49] <eto> freebsd and openbsd have most drivers compiled in from what i gathered so it works any time, the kernel image is just copied from disk
[16:41:29] <eto> ah so opensolaris uses a ramdisk image as well with modules stuffed there
[16:41:37] <alanc> current Solaris-ish OS'es build a zip archive with the kernel and modules needed to boot that's loaded into RAM by the bootloader
[16:41:38] <eto> so that's more like linux
[16:41:44] <tsoome> it does not matter if you have it compiled in or not, it still does come from physical disk or from net.
[16:41:49] <eto> yeh like linux ramdisk
[16:42:10] <tsoome> or "disk" - like stick or whatever device
[16:42:23] <alanc> "bootadm list-archive" will show the contents
[16:42:39] <eto> tsoome: it's much different to boot the windows way -> eg drivers are on disk and must be loaded from there, than to boot from an attached boot image at the end of your kernel
[16:42:40] <alanc> changing what's included is a matter of building a new zip file, no recompiling
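alanc's point in command form — both are standard bootadm subcommands on Solaris-ish systems:

```shell
# List the kernel and modules packed into the boot archive, i.e. what
# the boot loader has available before the root fs is mounted
bootadm list-archive

# Rebuild the archive (e.g. after adding a driver) -- a new zip file,
# no recompiling
bootadm update-archive
```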
[16:43:25] <eto> alanc: tsoome: so we are back to square one: if i have all the things needed in the boot archive, why can't i boot on a different machine?
[16:44:12] <tsoome> not all things are in boot archive
[16:44:20] <eto> or the kernel + image are optimized for that machine only
[16:44:21] <tsoome> check bootadm list-archive
[16:44:41] <alanc> you mean like the LiveCD does? of course, it's set up for that, rebuilding the /dev & /devices tree at boot time
[16:45:14] <lblume> The right optimization is done automatically at boot time (much more modern than the Linux way too, BTW ;-)
[16:45:24] <alanc> the kernel is the same binary for all machines of a given platform
[16:45:42] <eto> alanc: so there might be such a kernel image present on a normal install as well, to be able to boot in an emergency or during transplantation
[16:46:30] <alanc> eto: I was talking about normal install
[16:46:54] <alanc> the kernel is the same binary in the LiveCD & normal installs
[16:47:01] <eto> lblume: premature optimization is root of all evil :) so if i got it right, almost everything not needed is optimized out of the boot archive breaking failsafe boot
[16:47:26] <eto> alanc: i understand that part that is why i wrote kernel + image
[16:47:28] <alanc> the LiveCD install is basically a tar of the livecd image to the newly formatted disk
[16:48:07] <eto> okay and after 1st boot it optimizes out its ability to boot on any setup right?
[16:48:23] <tsoome> the optimization does not break anything, you can regenerate the archive at any point of time.
[16:48:36] <tsoome> but the thing is, you need to know to do it.
[16:48:37] <eto> tsoome: to the livecd boot-ability?
[16:48:51] <eto> tsoome: i have no problem learning
[16:49:05] <alanc> I don't think there's any optimization step, just that it builds the /dev & /devices tree the first time and not on later boots
[16:49:17] <lblume> eto: It is a smart optimization that adapts the kernel to the architecture, and libc to the instruction set.
[16:49:38] <tsoome> just as well as you need to know how to install boot blocks, how to rebuild device tree (if needed)
[16:49:44] <eto> alanc: could that be regenerated by boot option on any arbitrary boot like by grub command line?
[16:50:05] <spanglywires> like boot -r / touch /reconfigure
[16:50:41] <alanc> boot -r will rescan for new hardware, not sure if it will double check all the old stuff is still there
[16:50:48] <alanc> (i.e. not devfsadm -c)
[16:51:10] <lblume> As for the devices, they are statically defined in places like /etc/path_to_inst, /etc/name_to_major, which are installation specific and not easy to manipulate.
[16:51:20] *** skeeziks has joined #openindiana
[16:52:05] <lblume> alanc: that would be -C ;-)
[16:53:10] <tsoome> boot disk cloning has never been the "way" to do it. you have automated "hands free" installs, and you have systems where the os and the data are reasonably separated. with a decent network, you have a new system installed in 10-15 minutes, without any need of cloning hacks.
[16:55:22] <tsoome> with a classroom of ~10 hosts, the first hosts were installed and up by the time i reached the last ones to set on net boot….
[17:02:54] *** dekar has quit IRC
[17:10:10] *** DontKnwMuch_ has joined #openindiana
[17:10:28] *** robinbowes has quit IRC
[17:11:55] <DontKnwMuch_> hi, I have a problem with cpu usage getting higher and higher, and I narrowed it down to cpupm, I switched to poll mode, but I still have problems I think. My powertop output seems strange: http://www.pastie.org/2268762 is this from powertop itself or do I have something else?
[17:13:50] *** tsoome has quit IRC
[17:15:36] *** robinbowes has joined #openindiana
[17:16:37] *** spanglywires has quit IRC
[17:16:43] <DontKnwMuch_> I have 3000 wakeups from idle per second..
[17:18:49] *** spanglywires has joined #openindiana
[17:19:16] *** spanglywires has quit IRC
[17:22:22] *** kart_ has quit IRC
[17:23:07] *** kart_ has joined #openindiana
[17:35:01] *** dijenerate has quit IRC
[17:36:08] *** miine has quit IRC
[17:37:55] <nettezzaumana> what's current zpool version ? 31 ?
[17:39:19] *** merzo has quit IRC
[17:39:33] <lblume> on S11. Should be 28 on OI.
[17:40:17] <nettezzaumana> lblume: thanks, good
[17:42:46] *** miine has joined #openindiana
[17:49:23] *** raichoo has joined #openindiana
[17:51:39] *** kart_ has quit IRC
[17:52:00] *** kart_ has joined #openindiana
[17:55:09] *** tsoome has joined #openindiana
[18:05:22] *** spanglywires has joined #openindiana
[18:07:39] <alanc> on 2010.11, not on current S11 builds 8-)
[18:16:00] *** raichoo has quit IRC
[18:16:02] <tomww> DontKnwMuch_: does this change if you try this in single-user, or if you disconnect your network cable?
[18:17:15] <eto> tsoome: with automated install you still wait ages for packages to get configured, while with cloning the software is usually already there
[18:17:50] <eto> tsoome: so can one transfer the root tree safely to another device using that split technique?
[18:17:51] <tsoome> that depends on install type.
[18:18:17] <eto> tsoome: you mean like including all the packages into install setup?
[18:18:23] <tsoome> the example I gave before was done with flash archives.
[18:18:55] <tsoome> and yes, only split or snapshot+zfs send are safe.
[18:19:00] <eto> tsoome: i am still sure just imaging the drive is fastest; when installing, the filesystem has to do the work n times, once for each machine
[18:19:02] <tsoome> and for very good reason
[18:19:18] <eto> tsoome: yes i know zfs is special beast
[18:19:29] <eto> tsoome: can one mount zfs in ro mode only?
[18:19:31] <tsoome> somewhat special
[18:19:40] <tsoome> ufs is almost as special
[18:19:58] <eto> that could make it safe to blockcopy, not?
[18:20:08] <lblume> Oh yes, ufs+svm mirrors are the specialest of all.
[18:20:09] <eto> yeah soft updates
[18:20:14] <tsoome> ffs creates cylinder groups
[18:20:30] <eto> cylinder groups, such a relic
[18:20:36] <tsoome> read only won't save you with dd
[18:20:47] <eto> now that almost all hard disks have spiral tracks
[18:21:02] *** raichoo has joined #openindiana
[18:21:03] <eto> tsoome: why? shouldn't the fs be dead in that state?
[18:21:08] <tsoome> thats how ufs is designed.
[18:21:50] <tsoome> well, it doesn't matter if its dead or alive
[18:22:09] <tsoome> a zpool is identified by one single thing - the pool ID
[18:22:18] <tsoome> which must be unique
[18:22:25] <tsoome> or things will start to happen
[18:22:50] <lblume> Pool name used to need some uniqueness too, but that was improved, right?
[18:22:54] <tsoome> with dd (or any other disk or array based cloning), you will create an identical image, with the same pool ID
[18:23:09] <tsoome> pool name is for mere mortals like me and you
[18:23:18] <tsoome> the real beast is pool ID
[18:23:46] <tsoome> there are only 2 ways to get a unique pool ID - one is zpool create, another is zpool split.
[18:24:04] <lblume> Yeah, but trying to mount a second zpool called "rpool" has been reported to be funky in the past
[18:24:26] <tsoome> name is not important, because you can rename pool as you like
[18:24:39] <eto> tsoome: there is no tool to set one?
[18:24:41] <lblume> And I think the dd issue would be quite the same on LVM which also uses GUIDs.
[18:24:49] <tomww> and you can import by the ID and with -R /a
[18:24:51] <eto> is this pool id another guid scheme?
[18:25:10] <tomww> well, but two times the same ID is not good .)
[18:25:13] <tsoome> there is no tool other than zpool split
[18:25:15] <tomww> on one machine
[18:25:26] <eto> tomww: what about cross-network?
[18:26:00] <tomww> different kernel booted, so not a problem I think
[18:26:29] <tsoome> also, there is still the issue of possibly different disk geometry.
[18:27:38] <tsoome> anyhow, if you wanna get a clone, you can create a zfs send image, or you can split a mirror and move the new pool *with* that disk. zfs send is useful if you need to move the image itself.
[18:27:52] <tsoome> dd is not the tool for this job.
[18:28:37] <tsoome> i have had ~15 clones of the same pool in single system, believe me, it was fun;)
[18:29:49] <lblume> But honestly, I am wondering: AFAICT, there is no distro advocating the use of dd for disk cloning. It *might* work as a quick-and-dirty way, but also can cause weird issues, on any OS. So why insist on using it?
[18:30:19] <tsoome> but… it can copy the data!
[18:32:03] <tsoome> hm, i suppose i need to learn some AI, it occurred to me i have used it only once, and will need it soon...
[18:32:32] <lblume> Please do, and report back :-)
[18:32:49] <tsoome> :P
[18:32:49] <eto> lblume: oh i've seen it plenty used for cloning
[18:33:27] <lblume> I know. Me too. But please note carefully that I said «there is no distro advocating the use of dd for disk cloning»
[18:33:33] <tsoome> sure, it can be. and it can work quite ok as well.
[18:33:41] <eto> well i don't see a problem if dd does what it is said to do -> copy blocks 1:1
[18:33:54] <tsoome> if you have identical disks...
[18:34:20] <eto> if the mother system is "dead" and some other system is booted doing dd, i do believe it should be safe
[18:34:29] <tsoome> quite often filesystems actually do care about disk geometry;)
[18:35:06] <eto> tsoome: yep, well to say it clearly, i understood that correct way is to use zfs kung-fu
[18:35:34] <eto> tsoome: disk geometry has been a relic for the last 20 years, i still meet people trying to tell me otherwise
[18:35:34] <tsoome> even plain old tar/cpio.
[18:35:55] <tsoome> its still there as long as there is spinning rust.
[18:35:59] <lblume> eto: You misread him. Even if the disk doesn't care, the *OS* still does.
[18:36:00] <eto> the last disks that had "geometry" were maybe the 2GB ones
[18:36:29] <eto> lblume: i understood, i thought most filesystems moved away long ago, which is not the case obviously
[18:37:01] <lblume> FS don't care bout that. Partitioning does.
[18:37:18] <lblume> And FS care about partitioning.
[18:37:20] <eto> lblume: there are lba partitions
[18:38:58] <tsoome> some things just *are*. even if ridiculous - like geometry on ssd or other memory based storage devices;)
[18:39:41] *** dekar has joined #openindiana
[18:40:10] <eto> lblume: yes but anything bigger than 8 gb uses LBA entries in the partition, you can't even express chs big enough
[18:40:46] <lblume> Yes, because the number of cylinders was restricted to 10 bits, IIRC.
[18:41:00] <lblume> Now you can go beyond that, but there is still a number of cylinders.
[18:42:43] <eto> come on i bet every os ignores those when mounting using LBA
[18:45:03] <lblume> how can they? It's the way they calculate the number of blocks to use.
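The ceiling being argued about is easy to put a number on: classic INT 13h CHS addressing tops out at 1024 cylinders × 255 heads × 63 sectors of 512 bytes — the "8 gb" limit eto mentioned. A quick shell check:

```shell
# Max bytes addressable through legacy CHS (1024/255/63 geometry,
# 512-byte sectors)
bytes=$((1024 * 255 * 63 * 512))
echo "$bytes bytes"                # 8422686720
echo "$((bytes / 1000000000)) GB"  # 8 GB (decimal), the old BIOS limit
```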
[18:46:15] <lblume> anyhow, time to go back home, 'evening people
[18:46:44] <eto> lblume: bye
[18:49:38] *** bradend1 has quit IRC
[18:50:49] *** bradend has joined #openindiana
[18:51:15] *** DontKnwMuch_ has quit IRC
[19:02:01] *** heldchen has joined #openindiana
[19:04:06] *** held has quit IRC
[19:09:56] *** flyz has quit IRC
[19:10:59] *** Naresh has joined #openindiana
[19:15:11] *** Naresh` has joined #openindiana
[19:16:57] *** Naresh has quit IRC
[19:20:10] *** forquare has joined #openindiana
[19:20:51] *** Naresh` has quit IRC
[19:20:55] *** Naresh`` has joined #openindiana
[19:23:44] *** mnaser_ has joined #openindiana
[19:24:29] *** mnaser has quit IRC
[19:24:30] *** Naresh``` has joined #openindiana
[19:25:48] *** |AbsyntH| has quit IRC
[19:26:06] *** Naresh``` has quit IRC
[19:26:49] *** Naresh`` has quit IRC
[19:27:04] *** Naresh``` has joined #openindiana
[19:27:58] *** mnaser_ has quit IRC
[19:28:23] *** mnaser has joined #openindiana
[19:30:56] *** mikw has quit IRC
[19:32:00] *** Naresh``` has quit IRC
[19:38:54] *** laserbled has quit IRC
[19:41:24] *** flyz has joined #openindiana
[19:46:39] *** yalu has quit IRC
[19:50:41] *** yalu has joined #openindiana
[19:52:06] *** laserbled has joined #openindiana
[20:16:55] *** man_u has quit IRC
[20:18:00] *** heldchen has quit IRC
[20:23:15] *** laserbled has quit IRC
[20:23:23] *** Quadrant_ has joined #openindiana
[20:30:18] *** Alasdairrr is now known as AlasAway
[20:31:05] *** Botanic has quit IRC
[20:39:07] *** gea_ has joined #openindiana
[20:39:45] *** gea has quit IRC
[20:39:48] *** gea_ is now known as gea
[20:47:11] *** Botanic has joined #openindiana
[20:47:11] *** Botanic has joined #openindiana
[20:48:59] *** kart_ has quit IRC
[20:52:04] *** Botanic has quit IRC
[20:53:14] *** bens1 has quit IRC
[20:54:37] *** elgar has quit IRC
[20:58:21] *** held has joined #openindiana
[21:00:15] *** SH0x has quit IRC
[21:07:52] *** DeanoC_ has joined #openindiana
[21:07:52] *** ChanServ sets mode: +o DeanoC_
[21:10:20] *** DeanoC has quit IRC
[21:15:40] *** lblume has quit IRC
[21:17:23] *** lblume has joined #openindiana
[21:21:14] *** held has quit IRC
[21:22:45] *** AlasAway is now known as Alasdairrr
[21:26:27] *** Botanic has joined #openindiana
[21:31:05] *** DeanoC_ has quit IRC
[21:45:45] *** DeanoC has joined #openindiana
[21:45:45] *** ChanServ sets mode: +o DeanoC
[21:51:05] *** elgar has joined #openindiana
[21:52:55] *** dijenerate has joined #openindiana
[21:54:12] *** bens1 has joined #openindiana
[22:00:07] *** ekix_ is now known as ekix
[22:04:37] *** ekix is now known as ekix_
[22:05:11] *** dws6045 has left #openindiana
[22:08:22] *** ekix_ is now known as ekix
[22:08:37] *** muppetdeamon has joined #openindiana
[22:08:59] <muppetdeamon> hello
[22:10:08] <muppetdeamon> which is the best option: 151, 148, or solaris express 11? I am looking to complete my nas server tonight
[22:10:28] <muppetdeamon> what do you all here recommend
[22:10:45] <muppetdeamon> will be installing napp-it on the top
[22:10:58] <Triskelios> what are your requirements?
[22:12:08] *** dijenerate has quit IRC
[22:12:33] <muppetdeamon> will be a nas for a windows environment, will be also using iscsi, would like regular security updates, would be a text only install
[22:12:55] <muppetdeamon> also looking for stability
[22:13:36] <muppetdeamon> i just do not know which to go and install
[22:13:41] <Warod> I don't like the comstar iSCSI.
[22:13:42] <Triskelios> are you looking for an appliance, or a regular server?
[22:13:54] *** baitisj has joined #openindiana
[22:14:08] <muppetdeamon> windows share will be on there, and sabnzbd
[22:14:42] <muppetdeamon> not sure as yet if i would also use it for web development
[22:15:27] <muppetdeamon> mainly to be used as storage and backup for windows clients, sabnzbd and iscsi for esxi
[22:17:34] <Triskelios> NexentaStor is good for storage, but if you're also running apps, you should probably use OI
[22:18:03] *** dijenerate has joined #openindiana
[22:18:22] <muppetdeamon> yes i will be using apps, so then would it be 151 over 148 and solaris?
[22:19:29] *** mikaeld has quit IRC
[22:19:44] <Triskelios> either 148 or 151 should be reasonably ok. unless you want to pay for a S11X licence
[22:22:04] <muppetdeamon> with 151 does it come with gcc 4? if not, can it be installed from the package management?
[22:23:00] <spanglywires> muppetdeamon: biggest problem you'll have is finding stuff that the linux peeps haven't mangled in a way that it only compiles on Linux
[22:25:16] <Triskelios> gcc 4 should be installable, not officially supported right now
[22:25:28] <muppetdeamon> cool, only guaranteed apps i would have are sabnzbd and napp-it and gcc4, highly likely apache, postgres or mysql, php. nothing more than that
[22:26:08] <muppetdeamon> when gcc4 is supported i assume it will autoupdate to it with a pkg update
[22:27:21] <alanc> probably be a new gcc-4 package to install alongside (or instead of) gcc-3
[22:28:15] <muppetdeamon> my server/nas has 4x1TB with a 16GB SSD for system
[22:28:59] <richlowe> alanc: Will be (2)
[22:29:10] <muppetdeamon> is there anything i need to be aware of with adding software sources, or will none need to be added
[22:29:21] <richlowe> alanc: smart money would be on gcc-dev depending on the newer though.
[22:30:25] *** DontKnwMuch_ has joined #openindiana
[22:31:23] *** dijenerate has quit IRC
[22:32:01] <DontKnwMuch_> hi, I have 13x 1TB, for a backup send-receive thing, raidz3 or something else?
[22:32:08] <muppetdeamon> Ok i am going to go ahead and install oi 151 text from usb
[22:37:08] <viridari> DontKnwMuch_: 6x mirror sets, 1x spare
[22:37:55] <viridari> (best I can do with limited input ;)
[22:39:04] *** SH0x has joined #openindiana
[22:39:46] <muppetdeamon> would a 16GB SSD be of any advantage as a cache drive for raidz
[22:40:08] <viridari> muppetdeamon: 16GB of what?
[22:40:22] <DontKnwMuch_> viridari: this is a second system to which I will be zfs sending from my primary one and I need as much space as possible. Raidz3 with 13 drives... perhaps better to do two raidz2s
[22:40:26] <muppetdeamon> 16GB SSD drive
[22:40:36] <muppetdeamon> server has 8GB of ram
[22:40:54] <DontKnwMuch_> muppetdeamon: enough
[22:41:12] <DontKnwMuch_> muppetdeamon: for 8gb ram, it will do
[22:41:17] <viridari> muppetdeamon: I would be tempted to carve 1GB out of that for slog
[22:41:28] <DontKnwMuch_> too big even
[22:41:35] <viridari> yeah perhaps
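The two layouts being weighed for the 13×1TB box, sketched with hypothetical device names (c1t0d0 through c1t12d0 for the data disks, c2t0d0s0 for a 1GB SSD slice):

```shell
# viridari's layout: six 2-way mirrors plus a hot spare -- half the raw
# space, but fast resilvers
zpool create tank \
  mirror c1t0d0 c1t1d0   mirror c1t2d0 c1t3d0 \
  mirror c1t4d0 c1t5d0   mirror c1t6d0 c1t7d0 \
  mirror c1t8d0 c1t9d0   mirror c1t10d0 c1t11d0 \
  spare c1t12d0

# DontKnwMuch_'s alternative: one raidz3 vdev, maximum space for a
# zfs-receive backup target (10 disks' worth usable)
# zpool create tank raidz3 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
#   c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0 c1t12d0

# the 1GB slog viridari suggested carving out of the SSD
zpool add tank log c2t0d0s0
```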
[22:45:42] *** bens1 has quit IRC
[22:45:50] <muppetdeamon> damn, no spare sata slots for the extra 16GB SSD
[22:46:36] <viridari> muppetdeamon: do you have a PCIe x4 slot open?
[22:46:50] <viridari> (SATA is such a bottleneck)
[22:47:15] <muppetdeamon> yes i do have spare PCIe
[22:47:32] <muppetdeamon> i am not a very rich man
[22:47:41] <viridari> ok I won't go down that road
[22:48:32] <muppetdeamon> will be just 1 ssd 16 GB for oi 151 install
[22:56:08] *** gea has quit IRC
[22:58:48] *** spanglywires_ has joined #openindiana
[23:01:38] *** spanglywires_ has quit IRC
[23:02:42] *** McBofh has quit IRC
[23:02:43] *** gea has joined #openindiana
[23:07:29] *** McBofh has joined #openindiana
[23:21:01] *** spanglywires has left #openindiana
[23:23:53] *** DrLou has quit IRC
[23:29:27] *** mikaeld has joined #openindiana
[23:35:01] *** Alasdairrr is now known as AlasAway
[23:44:35] *** InTheWings has quit IRC
[23:46:49] *** elgar has quit IRC
[23:56:16] *** paularmstrong has joined #openindiana