NOTICE: This channel is no longer actively logged.
[00:09:11] *** lesterc has joined ##nexenta
[00:12:15] *** koan has quit IRC
[00:12:21] *** koan has joined ##nexenta
[00:28:38] <TALzz> how do i restart the network service after i made some changes to the ip address of a nic ?
[00:54:44] *** olsner has quit IRC
[01:07:17] <Corwin7> TALzz: try svcadm restart network
[01:08:01] *** TALzz has quit IRC
[01:28:17] *** taltamir has joined ##nexenta
[02:00:24] *** alfism has quit IRC
[02:30:18] *** djw has quit IRC
[02:59:31] *** [JT]_ has joined ##nexenta
[03:08:36] *** [JT]_ has quit IRC
[03:08:53] *** djw has joined ##nexenta
[03:08:56] *** rootard has quit IRC
[03:12:33] *** Fosforo has quit IRC
[03:55:32] *** anilg has quit IRC
[04:10:13] *** tsukasa__ has joined ##nexenta
[04:25:03] *** tsukasa` has quit IRC
[04:32:49] *** [JT]_ has joined ##nexenta
[05:11:16] *** rootard has joined ##nexenta
[05:11:26] *** ChanServ sets mode: +o rootard
[05:22:09] *** JetForMe has quit IRC
[05:30:54] *** ChanServ sets mode: -o rootard
[06:16:59] *** NCommander has quit IRC
[06:20:10] *** NCommander has joined ##nexenta
[06:33:05] <[JT]> Hey kids, as promised here are the instructions for installing NexentaCP on XenServer as a paravirtualized VM: http://justindthomas.wordpress.com/2009/04/03/installing-nexentacp-2-rc1-on-xenserver-5/
[06:33:43] <[JT]> They're quick and dirty; I can refine them as needed. Please let me know if you have good luck or bad luck if you decide to give it a go.
[06:34:32] <[JT]> Quick uname output:
[06:34:44] <[JT]> root@nexenta:~# uname -a
[06:34:54] <[JT]> SunOS nexenta 5.11 NexentaOS_20081207 i86pc i386 i86xpv Solaris
[06:35:01] <[JT]> root@nexenta:~#
[06:35:05] <[JT]> Good times.
[06:41:26] <rootard> [JT]: awesome! I'll give it a try in the next day/two
[06:41:46] <[JT]> Let me know how it goes. :)
[06:42:11] <rootard> will do. Thanks for writing this up!
[06:45:35] <[JT]> No problem.
[06:46:16] *** rootard changes topic to "Welcome to the Nexenta IRC Channel | NexentaCP 2 RC1 released: http://www.nexenta.org/os/Download | Bug reports: https://bugs.launchpad.net/nexenta/+filebug | If you got something working on NCP2, help the community by writing a small howto on the nexenta wiki http://www.nexenta.org/os/ | Xen DomU instructions: http://justindthomas.wordpress.com/2009/04/03/installing-nexentacp-2-rc1-on-xenserver-5/"
[06:47:59] *** TALzz has joined ##nexenta
[06:48:15] <[JT]> Sweet. :)
[06:48:16] <rootard> ~tell anilg [JT] has instructions for installing an NCP2.0 DomU
[06:48:17] <nexybot> rootard: Error: "JT" is not a valid command.
[06:48:26] <rootard> ~tell anilg JT has instructions for installing an NCP2.0 DomU
[06:48:27] <nexybot> rootard: Error: I haven't seen anilg, I'll let you do the telling.
[06:48:33] <rootard> lol
[06:49:15] <[JT]> You might note that it's XenServer - the instructions should be portable by making the right changes to the right Xen files and eliminating the "xe" commands, but that translation will need to happen.
[06:49:59] <[JT]> I'm of the opinion that anyone running Xen on Linux should switch to XenServer anyway. :) Less hassle.
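
A rough sketch of the translation [JT] describes, for plain Xen rather than XenServer. Nothing below is from his write-up: the config path, disk device, bridge name, and memory size are hypothetical, and it assumes pygrub can read the guest's boot filesystem (otherwise a kernel copied into dom0 would be needed).

    # /etc/xen/nexenta.cfg  (hypothetical plain-Xen domU config)
    name       = "nexenta"
    memory     = 1024
    bootloader = "pygrub"    # boots the guest's own i86xpv kernel from its grub menu
    disk       = ['phy:/dev/vg0/nexenta,xvda,w']
    vif        = ['bridge=xenbr0']

    # start the domU and attach to its console:
    xm create nexenta.cfg -c
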
[06:50:19] *** [JT] has left ##nexenta
[06:50:24] <Macer> hm
[06:50:26] *** [JT] has joined ##nexenta
[06:50:29] <[JT]> Oops.
[06:50:30] <Macer> how do i upgrade to rc1 with beta?
[06:50:30] <[JT]> :)
[06:50:40] <Macer> do i just run apt-get dist-upgrade?
[06:50:45] <rootard> apt-clone dist-upgrade
[06:50:54] <rootard> apt-get update first of course...
[06:51:13] <rootard> [JT]: TBH I don't know the difference
[06:51:33] <[JT]> XenServer is a packaged system - like VMware ESX.
[06:51:54] <Macer> esxi was horrible on my hardware
[06:51:58] <[JT]> It uses Xen, but adds a whole lot of management structure.
[06:52:02] <Macer> nexenta+vbox seems to be working rather well
[06:52:15] <rootard> I'm not a fan of anything VMWare
[06:52:20] <[JT]> Me neither.
[06:52:25] <[JT]> Hence XenServer! :)
[06:52:28] <Macer> i was running esxi on my server before switching over
[06:52:31] <rootard> :)
[06:52:34] <Macer> and the throughput for all io was horrible
[06:52:41] <Macer> absolutely horrible
[06:53:00] <Macer> typical vmware people say "you are using an unsupported hw raid" but seriously
[06:53:11] <Macer> 5MB/s ? no form of "unsupported" should ever cause that
[06:53:16] <[JT]> For me, the hardware limitations of VMware ESX make it a non-starter. Doesn't support my RAID cards.
[06:53:44] <Macer> well.. i think the point is to build around their hcl
[06:53:51] <[JT]> Bleagh.
[06:53:56] <rootard> Only really supporting Windows as a client is a pretty bad sign from me. I don't have a windows box _anywhere_ these days
[06:54:08] <Macer> but i was disappointed that i needed to use an unsupported areca driver for my hw raid
[06:54:19] <Macer> rootard: i agree
[06:54:33] <Macer> i can't believe they only had a windows vi client for a system based off a linux kernel
[06:54:37] <Macer> how senseless :)
[06:54:38] <rootard> I like Xen, there is an _option_ to use vnc
[06:54:55] <Macer> well... vbox lets you use rdp
[06:55:06] <Macer> but then again.. it's not a bare metal hypervisor like xen
[06:55:20] <rootard> otherwise I can get to a console as intended... via ASCII or maybe even *gasp* unicode
[06:55:28] <Macer> heh
[06:55:37] <Macer> well.. esxi isn't worth it
[06:55:54] <Macer> it is nice to tell you the truth.. but so far i have yet to make a box that works well with it
[06:56:13] <Macer> i tried it on my k45 shuttle (with intel nic installed all hardware on their hcl) and it ran horribly
[06:56:24] <Macer> then i tried it on my beast server. and it ran just as horrible
[06:56:30] <rootard> haha
[06:56:31] <Macer> 2TB lun and vmfs support?
[06:56:34] <[JT]> For the most part, XenServer lets you manage it from the command line. I do use the Windows XenCenter quite a bit, though. That runs in a Windows VM, however, so I have to be able to manage the system by CLI at least as a failsafe.
[06:57:00] <Macer> what the hell.. the tech is there to support larger sizes
[06:57:09] <Macer> why is esxi stuck in the 1990s with 2TB limits? :)
[06:57:16] <rootard> I have a similar experience. Someone shipped me a vmware image and I swore up and down that we should just use Xen
[06:57:36] <Macer> what kernel is xen based on?
[06:57:46] <rootard> after users complained of really poor performance I migrated the image to xen and then there was light
[06:58:10] <Macer> heh.. well.. in my experience.. esxi is not what it is cracked up to be
[06:58:13] <rootard> Linux... not sure what version they forked from
[06:58:16] <Macer> good concept but doesn't really deliver
[06:58:33] <Macer> i see.. someone should make an osol kernel based hypervisor :)
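
Putting rootard's upgrade answer above in order (a minimal sketch; apt-clone is Nexenta's apt wrapper that checkpoints the ZFS root filesystem before upgrading, so a failed upgrade can be rolled back to the previous checkpoint from GRUB):

    apt-get update           # refresh the package lists first
    apt-clone dist-upgrade   # upgrade beta -> RC1, with a rollback checkpoint
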
[06:59:08] <rootard> hmm, that would be scary
[06:59:23] <[JT]> XenServer 5 is Linux kernel 2.6.18
[06:59:25] <Macer> why? because it would work with good performance? :)
[06:59:28] <[JT]> It's based on CentOS 5.
[06:59:58] <Macer> heh
[07:00:07] <Macer> isn't centos just rhel with different artwork?
[07:00:16] <[JT]> Linux xen 2.6.18-92.1.10.el5.xs5.0.0.426.647xen #1 SMP Wed Jan 21 05:37:56 EST 2009 i686 i686 i386 GNU/Linux
[07:00:21] <Macer> i use it for my zimbra vbox
[07:00:24] <[JT]> Pretty much, I think.
[07:00:26] <rootard> I don't know if it would or not... certainly Linux hardware support is broader than OSol (in general)
[07:00:57] <Macer> rootard: true.. but osol has done a good job of adding more hardware support
[07:01:07] <Macer> and realistically speaking.. people use 5% of what a linux kernel offers :)
[07:01:15] <Macer> they need to scrap the vintage hardware heh
[07:01:27] <rootard> don't get me started about vintage
[07:01:36] <Macer> like... things like mwave support
[07:01:37] <[JT]> You can always recompile without the cruft. :)
[07:01:50] <Macer> remember mwaves from 1990? :)
[07:02:01] <Macer> i want to meet the person still using an isa mwave
[07:02:32] <rootard> ahh, back to the days of a 1200 baud modem that takes up a closet...
[07:02:38] <Macer> yeah
[07:02:51] <Macer> they need to fork the linux kernel into vintage and modern branches
[07:03:02] <Macer> and exclude all hardware from before 2000 from the newer one :)
[07:03:25] <Macer> the kernel source would be back to 2MB again
[07:03:28] <rootard> now you can get a usb GB nic the size of your pinky... and most of that is plastic housing
[07:03:53] <Macer> yeah.. for $5
[07:03:55] <Macer> heh
[07:04:54] <Macer> what i did kind of find as a disappointment was that virtualbox doesn't support vsmp
[07:04:54] <rootard> I can't imagine how much trouble it would be to remove all of that cruft though
[07:05:15] <Macer> i mean the io throughput more than makes up for it compared to esxi on the same box.. but still :)
[07:06:39] <rootard> I hadn't noticed. The only thing I've been running vbox on is my already underpowered/aging mac-mini
[07:06:42] <Macer> and i wasn't able to run vbox as a user .. have to run it as root
[07:06:47] <Macer> 11483 root 2102M 2097M sleep 59 0 4:16:50 3.3% VBoxHeadless/15
[07:07:10] <Macer> i should try to install xen on my artigo
[07:07:24] <Macer> my 1GHz 1GB beast of a computer :)
[07:07:44] <rootard> :)
[07:08:00] <Macer> http://www.via.com.tw/en/products/embedded/artigo/
[07:08:21] <Macer> it is the same size as a standard optical drive.. it's half the depth though
[07:08:29] <Macer> it's pretty strong for what it is
[07:09:10] <Macer> i almost felt like buying a huge full tower because they can fit in a 5.25" bay.. i was going to stack them and run opensolaris and make a 10 artigo cluster :)
[07:09:29] <Macer> wonder how well that would work
[07:09:45] <[JT]> Would look cool, if nothing else. :)
[07:10:01] <Macer> i know haha.. ghetto but still cool.. but that would run a lot of money
[07:11:07] <rootard> Maybe you could get a bulk discount
[07:11:10] <Macer> they cost ~300 without the hard drive... i know they're coming out with a newer model soon with a 1.5GHz C7 and default sata
[07:11:12] *** JetForMe has joined ##nexenta
[07:11:17] <Macer> rootard: it would still run a couple thousand
[07:11:24] <Macer> it would be an awesome project though
[07:11:38] <Macer> oh.. newer model is supposed to have a gbit nic also.. the one i have is only 100mbit
[07:13:29] <Macer> ah well.. going to get some sleep.. ttyl
[07:13:32] <rootard> This looks like it would be a fun box to run Nexenta + Crossbow + zones
[07:13:53] <Macer> rootard: i was going to make it an osol desktop
[07:13:55] <Macer> :)
[07:13:55] <rootard> then it could serve as a firewall/dhcp/... server for a home network
[07:14:00] <Macer> right now i have debian on it
[07:14:05] <rootard> very nice
[07:14:26] <Macer> rootard: newer one is supposed to be a lot better
[07:14:34] <Macer> i really do want to get a few of them and make a cluster
[07:14:49] <Macer> would be interesting to see what kind of performance i would get off something like that
[07:15:37] <rootard> I would think the bus/memory would be slow
[07:16:04] <Macer> yeah :) little bit
[07:17:24] <Macer> good night
[07:17:29] <rootard> hmm, depending on how cheap this is: http://www.via.com.tw/en/products/embedded/nsr7800/index.jsp it could be fun too... based on the same tech
[07:17:32] <rootard> nn
[07:21:37] <Macer> i wanted to build a small rack mount
[07:21:47] <Macer> doesn't really have to be a cabinet.. i know you can get just frames
[07:23:15] <Macer> http://www.tigerdirect.com/applications/SearchTools/item-details.asp?EdpNo=3301352&CatId=209
[07:23:23] <Macer> something like that would be nice to build
[07:29:55] *** NCommander has quit IRC
[07:35:49] *** anilg has joined ##nexenta
[07:56:58] *** lesterc_ has joined ##nexenta
[08:02:37] * codestr0m wakes the channel up
[08:11:11] *** Bartman007 has quit IRC
[08:11:14] *** Bartman007 has joined ##nexenta
[08:15:08] *** lesterc has quit IRC
[08:15:52] *** lesterc_ has quit IRC
[08:16:46] *** [JT]_ has quit IRC
[08:22:00] *** synan has joined ##nexenta
[08:30:27] <dtbartle> codestr0m: i was already awake :)
[08:33:03] <codestr0m> dtbartle: don't you start next week?
[08:33:08] <codestr0m> and good things are happening all around
[08:33:28] <dtbartle> yes
[08:33:31] *** anilg1 has joined ##nexenta
[08:52:41] *** anilg has quit IRC
[08:53:28] *** anilg1 has left ##nexenta
[09:05:39] *** olsner has joined ##nexenta
[09:10:34] <codestr0m> are you relocating to cali for this?
[09:31:43] *** otep has quit IRC
[09:32:33] *** MrGrinch has joined ##nexenta
[09:36:52] *** Laevar has joined ##nexenta
[09:46:38] <dtbartle> codestr0m: i reocated last week
[09:46:42] <dtbartle> i'm in mountain view atm
[09:46:58] <codestr0m> oh.. so it's easy to for a canuck to get a work visa I guess
[09:47:08] <dtbartle> yeah
[09:47:22] <dtbartle> i think the whole thing took about an hour at the airport in toronto
[09:47:27] <dtbartle> and i'm good for 3 years
[09:52:01] *** taltamir has quit IRC
[10:00:06] *** RoyK has joined ##nexenta
[10:00:27] <RoyK> hm... the webpages say nexenta requires "just 256MB RAM"
[10:00:32] <RoyK> but it doesn't install on 256MB
[10:20:11] *** olsner has quit IRC
[11:04:47] *** JetForMe has quit IRC
[11:05:01] *** JetForMe has joined ##nexenta
[11:06:16] *** otep has joined ##nexenta
[11:25:12] *** anilg has joined ##nexenta
[11:59:15] <Laevar> anilg: i succesfully installed nexenta with the hardcoded cyl-size for the root-slice and now all is working as wanted . Thank you for your help
[12:08:53] <anilg> great.. Laevar, was it a simple matter of change a few (one?) lines?
[12:09:07] <Laevar> yes, changing one line was sufficient
[12:09:28] <anilg> if yes, I could possible add a small dialog into the installer, aloowing the user to specify a percentage of the disk to use..
[12:09:34] <anilg> which line was it..?
[12:10:19] <Laevar> 1023
[12:10:34] <anilg> were you able to then mirror the root slice to get your particular setup working?
[12:11:07] <Laevar> yes
[12:11:34] <Laevar> it is working right now, and i started making benchmarks with filebench
[12:11:50] <Laevar> to determine the best setup for AVS
[12:12:33] <anilg> If you get the time, a blog entry on your setup, or something on our wiki would be great..
[12:13:04] <Laevar> i planned to do that anyway, so yes
[12:14:21] <anilg> cool
[12:21:54] *** timeless has quit IRC
[12:36:40] *** Einon1 has joined ##nexenta
[12:37:04] <wulf1> Laevar : I got a pretty large performance hit by using a bitmap volume on the same "drive" as the data. So getting the bitmap off the data volume is a must for write performance
[12:38:06] <Einon1> Hi! Just a fast newbie question please! Is there a way under Nexenta to replicate the disks to an another machine (like DRBD).
[12:38:32] *** Einon1 is now known as Einon
[12:43:46] <wulf1> Einon : AVS
[12:52:15] <Einon> thanks. reading.... :)
[13:03:38] *** soucnt_ has joined ##nexenta
[13:04:18] *** soucnt_ is now known as legkodymov
[13:43:53] <Laevar> wulf1: i was wondering how to make this best, but for my setup it isn't possible to get the bitmap-volumes off the drives which also have replicated data. The only thing i can change is, if the bitmal volume is on the same disk or on other of the pool
[13:44:21] <Laevar> but i also has to understand what happens if a bitmap volume is lost
[13:44:34] <Laevar> has=have
[13:45:36] <wulf1> Laevar : The mirror will fail without the bitmap, but you'll still have your data
[13:46:53] <wulf1> Laevar : I was thinking of using a slice of the root disks for bitmap, but for me that would require a reinstall, so I'll just add a few disks to my diskrack for the bitmap
[13:47:52] <Laevar> do you know what exactly happens for AVS if a drive in a raidz2 pool crashes ?
[13:48:44] <Laevar> i have all my data (because of raidz2) but the bitmap is lost, and in non-sync mode the data on the secondary node is also not up to date
[13:49:59] <Laevar> so when i replace the drive, avs must make a sync primary->sexondary, ok, that is the answer ;)
[13:50:35] <wulf1> if you lose the bitmap, you'll have to "recreate" the mirror with a full sync
[13:50:43] <wulf1> That's my best guess
[13:51:24] <Laevar> so it regarding the crash of one drive, it makes no difference if the bitmap volume lies on the same drive or on another.
[13:51:44] *** GHReyes has joined ##nexenta
[13:52:11] <Laevar> and for my setup it makes equally no difference performance-wise, i think
[13:52:33] *** GHReyes has left ##nexenta
[13:53:22] <wulf1> Laevar : Well, it does have some effect performance wise. And you'll have one bitmap volume for each block device you want to mirror
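
For context, enabling an SNDR (AVS) set names a data volume and a bitmap volume on each side explicitly, which is why every replicated slice needs its own bitmap. A sketch with hypothetical host names and device paths, in the async mode discussed here:

    # enable replication of one data slice (s4), tracked by a bitmap slice (s5)
    sndradm -n -e primary   /dev/rdsk/c1t0d0s4 /dev/rdsk/c1t0d0s5 \
                  secondary /dev/rdsk/c1t0d0s4 /dev/rdsk/c1t0d0s5 ip async
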
[13:54:07] <wulf1> If you lose one bitmap volume and the corresponding data volume, you'll have a small sync job to do, but if you lose the bitmap for several, you'll have to do a full resync of all block devices
[13:54:15] <Laevar> because i have 6 data-volumes on 6 drives, i also need 6 bitmap-volumes, which for now placed on the same disk
[13:54:44] <Laevar> yes, i was thinking about shifting the bitmap-volumes on disk
[13:54:57] <wulf1> One separate disk for all the bitmaps, or bitmap on the same disk as data?
[13:54:57] <Laevar> so having data1 and bitmap2 on one disk
[13:55:20] <Laevar> a bitmap-volume must be on the same disk as data
[13:55:38] <wulf1> Laevar : the bitmap volume can be anywhere
[13:55:42] <Laevar> but it is a somehow a difference *which* bitmap-volume it is
[13:55:44] <Laevar> sure
[13:55:51] <Laevar> but there are no other drives
[13:55:53] <wulf1> ah
[13:56:10] <wulf1> well, having the bitmap volume for another drive will just make the loss of a drive worse
[13:56:26] <wulf1> and still the same performance hit
[13:56:34] <Laevar> oh, you are right, then i have to sync 2 volumes
[13:57:09] <Laevar> yes, same performance-hit
[13:57:21] <wulf1> I'd say that using a slice of the root disk for all bitmaps is the best way to go. You don't hit that drive for data much, and if you lose that drive, the bitmap doesn't matter anyway
[13:58:36] <wulf1> you say you're using raidz, so I guess you got a single or mirrored root disk
[13:58:38] <Laevar> yes thats right, but i waste much space and i have no redundancy for the root-pool
[13:58:50] <wulf1> The bitmap volumes take very little space
[13:58:54] <Laevar> i have no roodisk
[13:59:22] <Laevar> i have 3 slices per disk: root,data, bitmap
[13:59:39] <Laevar> thats optimal regarding diskspace
[13:59:41] <wulf1> ah, so your root pool and data pool is on the same disks
[13:59:53] <wulf1> well, then it's not a good AVS setup, but it'll work :-)
[14:00:08] <Laevar> AVS replicates only the data-slice
[14:00:25] <Laevar> and losing the root-slice is no problem
[14:00:37] <Laevar> they are 5 times replicated ;)
[14:01:20] <wulf1> If you do a sequencial write on your data pool, normally it would write in sequence, but if you activate AVS, it'll hit your bitmap slice as well, then you get almost random write performance
[14:01:28] <vherva> If nexenta (2.0RC1 b104) install hangs on sata raid driver (arcmsr0, the controller is areca arc-1261), is there any tricks to get past it?
[14:01:44] <vherva> Some grub command line parameters?
[14:01:47] <Laevar> yes, my performance suffer..
[14:02:06] <Laevar> that is what i am testing now
[14:02:26] <Laevar> if the performace-lost is to big, i will change to 1 root-disk and 5 data-disks
[14:02:33] <Laevar> to=too
[14:03:07] *** wurlitzer has joined ##nexenta
[14:03:35] *** timeless has joined ##nexenta
[14:16:22] *** legkodymov has quit IRC
[14:40:50] *** MrGrinch has joined ##nexenta
[14:54:04] *** TALzz has quit IRC
[15:00:49] *** andy_js has joined ##nexenta
[15:11:04] *** TALzz has joined ##nexenta
[15:15:31] *** Einon has left ##nexenta
[15:26:30] <andy_js> whats the benefit of running Nexenta in Xen instead of Virtualbox?
[15:34:10] *** NCommander has joined ##nexenta
[15:38:45] *** fserve has joined ##nexenta
[15:39:59] <Laevar> did someone saw messages like this with AVS: entered logging mode: memq flush aio status not RDC_IO_DONE ?
[15:40:21] <Laevar> replicating with async-mode
[15:54:06] *** synan has quit IRC
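
Laevar's question goes unanswered in the log; in general that message suggests the async memory queue failed to flush, so SNDR dropped the set into logging mode, where it stops replicating and only marks dirty blocks in the bitmap. A sketch of how one might check and resume, assuming the link and volumes are otherwise healthy:

    sndradm -P       # per-set status: "logging" vs "replicating"
    sndradm -n -u    # update sync: resend only the blocks marked dirty in the bitmap
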
[15:59:33] <RoyK> hm... the webpages say nexenta requires "just 256MB RAM", but it doesn't install on 256MB
[15:59:49] <RoyK> "image does not fit into memory"
[16:04:38] <TALzz> hello ppl
[16:05:15] <anilg> RoyK: It actually does a check for >256mb
[16:05:30] <anilg> if you have some allocated to video RAM (like 8mb)..
[16:05:48] <anilg> then you effectively only have 248Mb.. and thus it wont install.
[16:06:08] <TALzz> this issue when i remove a disk from the machine w/o offline it and the machine stucks doesnt make any sense
[16:06:40] <TALzz> what happens if one of ur drives just fail and die, the whole machine is stuck because of that ?
[16:06:53] <TALzz> so what`s the point of having hot swap or so called spares
[16:11:19] <RoyK> anilg: it's a server with 256MB
[16:11:36] <RoyK> anilg: I seriously doubt it allocates video memory on system RAM
[16:12:00] *** fserve has quit IRC
[16:12:07] <RoyK> but I'll check
[16:12:21] <anilg> when you goto the f2 screen and enter 'prtconf | head'
[16:12:46] <anilg> Check what the line "Memory Size" says
[16:18:38] <RoyK> what is the f2 screen?
[16:23:25] <NCommander> So Sun being solid to IBM ...
[16:24:31] <anilg> RoyK: when you boot from the CD, and land in the blue screen, press f2
[16:24:41] <anilg> NCommander: looks like it form all the news
[16:24:53] <anilg> no solid confirmation from either side still
[16:24:57] <NCommander> so what will happen to (Open)Solaris?
[16:25:25] <anilg> There is enough traction in the community to fork it, if IBM decides to close it..
[16:25:47] <anilg> if the fork will be sustained in the longer term ..
[16:25:51] <anilg> who knows
[16:26:27] <anilg> This is unlikely though.. Opensolaris has built some momentum for itself.. so IBM cant easily decide to close it, if at all
[16:26:49] * anilg points out that all this is speculation
[16:37:04] <TALzz> anilg: sup
[16:37:26] <TALzz> got any idea what i can do with this HD's issues ?
[16:41:41] *** tsukasa__ has quit IRC
[16:42:01] *** tsukasa` has joined ##nexenta
[16:43:46] *** tsukasa` is now known as tsukasa
[16:44:40] *** tsukasa has quit IRC
[16:46:04] <TALzz> any1 got an idea ?
[16:46:10] <TALzz> was it solved in RC1?
[16:47:03] *** tsukasa has joined ##nexenta
[17:09:09] <TALzz> any1 ? help ?
[17:10:39] <andy_js> I wonder what will happen to sparc
[17:11:02] *** lleming has joined ##nexenta
[17:11:08] * andy_js has fingers crossed for a working ppc solaris port
[17:11:13] <lleming> hello
[17:11:27] <NCommander> andy_js, a PowerPC port probably wont happen until all the closed bits are replaced
[17:12:43] <lleming> sorry for disturbanse but if anybody know how to install vboxaddition to nexenta
[17:12:57] <andy_js> AFAIK, it can't be done
[17:13:36] <andy_js> the .deb for it is broken, and pkgadd is broken too
[17:14:23] <andy_js> (if someone can prove me wrong that would be great, I'd love to be able to run Nexenta at a higher resolution that 800x600
[17:15:40] <TALzz> can any1 tell me if the machine get stuck from HD faliur is fixed on RC1 ?
[17:15:50] <TALzz> i dont get what`s the spares r for ?
[17:20:59] *** TALzzz has joined ##nexenta
[17:21:06] <anilg> TALzz: you'll have to wait for the upstream fix
[17:21:28] <anilg> if it's fixed in opensolaris, it will be in our kernel too.
[17:25:32] *** TALzz1 has joined ##nexenta
[17:25:59] <TALzz1> anilg: so ur saying till opensolaris fix it it`s gonna be like that ?
[17:26:34] <anilg> yes.. we are mostly consumers of the upstream kernel
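
Spelling out anilg's memory check for RoyK's install problem above (the output line is illustrative, not from RoyK's machine):

    prtconf | head
    # look for a line like:
    #   Memory size: 248 Megabytes
    # if the BIOS carves video RAM out of the 256MB, the figure drops below
    # the installer's >256MB check and the image refuses to load
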
[17:27:23] <TALzz1> so how can i really work with it as my storage if one drive fails is gonna kill my machine
[17:27:34] <TALzz1> it wont even alert cause it`s stuck
[17:27:58] <TALzz1> which means i need to be next to the machine 24/7 just incase it happens so i can change the drive quick
[17:28:31] <TALzz1> unless all the vm's that i got there still working, it`s just me that cant do anything on the machine cause it`s stuck command wise
[17:28:45] <TALzz1> and i cant access it via ssh till i put a HD back in
[17:29:18] <anilg> It could be that removing a hardware is different from a hard disk failure.. in terms of how the system reacts
[17:29:29] <anilg> though I dont have ideas on how you can test this..
[17:29:45] <anilg> s/hardware/hard disk
[17:30:39] *** alfism has joined ##nexenta
[17:31:09] <TALzz1> i just removed a hd from the machine while it`s workign
[17:31:17] <TALzz1> like u do with a normal raid card
[17:31:25] <TALzz1> and the machine just freez
[17:31:55] <TALzz1> but as soon as i push the hd back in it`s "reasle"
[17:32:09] <TALzz1> like if i was running zpool status -v
[17:32:22] <TALzz1> the output will be stuck till i push the hd back in
[17:32:27] <TALzz1> then i can see the output
[17:32:45] <TALzz1> but if a hd dies on u
[17:32:48] <TALzz1> like dead dead
[17:32:56] <TALzz1> it`s the same thing as removing it , isnt it ?
[17:33:03] <TALzz1> cause it`s dead, not elec. working
[17:33:27] <anilg> TALzz1: I'm not sure how zpool/zfs handles this.
[17:33:39] <anilg> my suggestion is you post this to zfs-discuss at opensolaris dot org
[17:34:10] <anilg> those folks are more knowledgeable on this matter and will be able to answer your queries better
[17:35:25] <TALzz1> it`s just doesnt make any sense for me that`s all
[17:36:14] <TALzz1> i even set the zpool to be on failmode=continue
[17:36:24] <TALzz1> and autoreplace=on
[17:37:59] *** TALzz has quit IRC
[17:38:13] *** TALzz1 is now known as TALz
[17:38:18] *** TALz is now known as TALzz
[17:42:36] *** TALzzz has quit IRC
[17:42:49] *** lleming has quit IRC
[17:43:07] <Laevar> how do i monitor total memory consumption under nexenta ?
[17:43:50] <andy_js> good question
[17:45:27] <andy_js> the ol' free command doesn't seem to be installed
[17:45:35] <Laevar> yes
[17:45:41] <Laevar> and top doesn't show it
[17:45:48] *** lleming has joined ##nexenta
[17:47:08] <anilg> Laevar: prstat is a command thats equivalent to top
[17:47:47] <Laevar> anilg: but it also doesn't show the total memory consumption, the usual top summary
[17:48:34] <anilg> quick google http://www.filibeto.org/pipermail/solaris-users/2004-January/000531.html
[17:48:52] <anilg> the vmstat command should be the one
[17:49:47] <Laevar> ah, ok, thats working , thank you
[17:51:44] *** rmod has joined ##nexenta
[17:51:51] <rmod> hey guys
[17:52:49] <TALzz> anilg: if i change an ip address for one of the nic's , what do i need to do so it will be updated in the system so i can ping it ?
[17:52:57] <rmod> im experiencing an issue where if i physically remove a drive from a machine with a zfs volume(raidz or raidz2) the machine locks up until i relace the drive
[17:53:06] <TALzz> what is the svcadm instance to restart the network service
[17:53:18] <TALzz> rmod: welcome to my world
[17:53:27] *** RoyK has quit IRC
[17:53:36] <rmod> heh happens to you too TALzz ?
[17:53:37] <TALzz> u need to bring that drive offline first
[17:53:54] <TALzz> yeah i posted it to opensolaris zfs group
[17:53:55] <rmod> then remove it?
[17:54:02] <TALzz> yeah offline it first then remove it
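
TALzz's procedure, spelled out (the pool, disk, and attachment-point names are placeholders; the cfgadm attachment point varies by controller, see "cfgadm -al"):

    zpool offline tank c1t3d0        # tell ZFS the disk is going away
    cfgadm -c unconfigure sata0/3    # detach the device from the controller
    # ...physically swap the drive...
    cfgadm -c configure sata0/3
    zpool replace tank c1t3d0        # or "zpool online tank c1t3d0" if re-inserting the same disk
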
[17:54:11] <TALzz> u might need to unconfigure it too
[17:54:20] *** lleming has quit IRC
[17:54:28] <TALzz> cfgadm -c unconfigure
[17:54:32] <TALzz> then put the new drive in
[17:54:32] <rmod> what a pain in the butt
[17:54:36] <TALzz> yep
[17:54:44] <TALzz> it doesnt act as a real raid :(
[17:54:51] <rmod> pooooo
[17:54:55] <rmod> still love zfs
[17:54:59] <rmod> guess ill have to deal with it
[17:55:17] <TALzz> yeah
[17:55:24] <TALzz> but did it happend to u when u removed it
[17:55:30] <TALzz> or the hd failed ?
[17:55:44] <TALzz> cause i wanna know if a hd fails does it do the same ?
[17:55:55] <rmod> when i rip it out of the machine
[17:55:57] <TALzz> cause from some odd reason it doesnt uses the spares hd in that zpool
[17:56:12] <rmod> guess it would be the same as a hard drive failure
[17:57:11] <TALzz> that`s what i think
[17:57:31] <rmod> so wtf is the point of the spares in the pool if it doesnt use them
[17:57:32] <rmod> lol
[17:57:39] *** rmod is now known as RMod
[17:58:02] <TALzz> i think there isnt
[17:58:20] <RMod> has this always been the case, or is it just in the latest versions?
[17:58:30] <RMod> wonder if it happens with 1.0 or opensolaris
[18:01:02] *** Laevar has quit IRC
[18:03:53] *** [JT]_ has joined ##nexenta
[18:07:50] <codestr0m> anilg: thanks for joining our community ;)
[18:14:08] *** [JT]_ has quit IRC
[18:32:57] *** alfism has quit IRC
[18:42:15] *** koan has quit IRC
[18:42:24] *** koan has joined ##nexenta
[19:16:37] *** GHReyes has joined ##nexenta
[19:23:13] *** alfism has joined ##nexenta
[20:00:33] *** JetForMe has quit IRC
[20:22:21] *** JetForMe has joined ##nexenta
[20:37:07] <RMod> TALzz: may I ask what controller you are using ?
[20:57:17] *** TALzz has quit IRC
[21:01:42] *** TALzz has joined ##nexenta
[21:27:04] *** TALzz has quit IRC
[22:11:45] *** baitisj has joined ##nexenta
[22:14:43] *** andy_js has quit IRC
[22:22:22] *** GHReyes has left ##nexenta
[22:27:02] *** lstewart has joined ##nexenta
[22:27:29] <lstewart> how can I recover from this error when setting up auto-cdp?
[22:27:30] <lstewart> Appliance mail-nas2: Failed to add disk pairs to remote auto-cdp service: com.nexenta.nms.SystemCallError: Failed to add service: sndradm: sndradm: Error: bitmap /dev/zvol/rdsk/syspool/.cdp/c0t3d0 is already in use by StorEdge Network Data Replicator
[22:29:48] *** anilg1 has joined ##nexenta
[22:33:09] *** anilg2 has joined ##nexenta
[22:33:17] *** anilg1 has quit IRC
[22:46:02] *** anilg has quit IRC
[22:51:53] *** remyzero_ has joined ##nexenta
[22:58:50] *** remyzero has quit IRC
[23:13:32] *** wavejumper has quit IRC
[23:16:13] *** wurlitzer has quit IRC
[23:20:50] *** wavejumper has joined ##nexenta
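
lstewart's error says the bitmap zvol is still claimed by an existing SNDR set. A possible recovery path, assuming that set is stale; the argument to -d is deliberately left as "...", take it from the -P output:

    sndradm -P       # list configured sets; find the one holding
                     # /dev/zvol/rdsk/syspool/.cdp/c0t3d0 as its bitmap
    sndradm -d ...   # disable the stale set, then re-run the auto-cdp setup
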