[16:04:29] *** newburns has joined #omnios
[16:04:36] * newburns Hello all
[16:05:37] <newburns> Small situation. A tech did a zfs destroy on a 6TB dataset. It timed out, so he did a rm -rf /vdev1/dataset. Then the power was shut down. Any suggestions on how to start some recovery once in maintenance mode?
[16:08:14] <JT-EC> Never heard of a destroy timing out before.
[16:08:25] <newburns> Napp-it GUI
[16:08:42] <JT-EC> Oh.
[16:09:13] <newburns> He was in the Napp-it GUI when the GUI became unresponsive, he went into the command line and just deleted with rm -rf /vdev1/data2
[16:09:36] <newburns> at this point, the command line seemed frozen, so off went the power
[16:10:19] <newburns> and now coming back up, there are 4 "can't open objset /vdev1/data2" messages and then a prompt for login to maintenance mode
[16:10:31] <nahamu> is "vdev1" the name of the pool?
[16:10:36] <newburns> yes
[16:10:46] <newburns> data2 is the name of the zfs dataset
[16:10:53] <nahamu> that's a very confusing name for a pool, since pools are made of vdevs...
[16:11:13] <newburns> the vdev is what's inside the pool??
[16:11:29] <newburns> For the sake of conversation let's just say /pool/data2
[16:11:29] <nahamu> a pool is made up of 1 or more vdevs, yes
[16:11:50] <newburns> so he rm -rf /pool/data2
[16:11:54] <nahamu> newburns: and when you say "recovery"
[16:12:11] <newburns> data2 should be removed, I do not want to recover that
[16:12:13] <nahamu> do you want to get back the data from the "data2" filesystem, or do you just want the pool to go back to behaving?
[16:12:19] <nahamu> okay, good.
[16:12:23] <newburns> I want the structure and operations back to continue use
[16:12:29] <JT-EC> newburns: Sorry, not helpful right now, but for future reference zfs destroys carry on across reboots and will carry on despite a GUI timing out.
[16:13:01] <JT-EC> and with newer illumos systems with async destroy you can monitor progress with `zpool get freeing`
[16:13:03] <newburns> Ahhhh. Thanks for that. So it will take a while for a 6TB destroy to happen
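
JT-EC's `zpool get freeing` refers to the pool property that reports how many bytes an asynchronous destroy still has to reclaim. A rough sketch of watching it drain, assuming the pool really is named vdev1 as above:

    # "freeing" counts bytes still to be reclaimed by a background zfs destroy;
    # it should trend toward 0 as the async destroy completes
    zpool get freeing vdev1
    # poll it once a minute to watch progress
    while :; do zpool get freeing vdev1; sleep 60; done
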
[16:13:08] <JT-EC> Yep
[16:13:09] <nahamu> so as JT-EC points out, if the system seems to be coming up and just sort of hanging, that might be the destroy still running
[16:13:32] <newburns> Yea, it seems to be hanging when I do a zpool import
[16:13:35] <newburns> Nothing happens
[16:13:50] <nahamu> probably the destroy carrying on reclaiming blocks
[16:13:59] <nahamu> which version of OmniOS?
[16:14:06] <newburns> b250
[16:14:40] <newburns> is that a proper version number?
[16:16:10] <newburns> will the destroy continue even if 'rm -rf /pool/data2' was performed after the zfs destroy data2
[16:18:04] <nahamu> well, did the rm -rf finish before the system was rebooted?
[16:18:09] <nahamu> were there snapshots?
[16:18:30] <newburns> no snapshots. The rm -rf went immediately to the next command line
[16:18:30] <nahamu> if there was a snapshot, the rm doesn't actually return any blocks to the pool
[16:18:38] <newburns> so it did not appear to run anything
[16:18:54] <nahamu> it's possible that ZFS had unmounted the filesystem already
[16:19:03] <nahamu> so the rm was just operating on the empty mountpoint...
[16:19:18] <nahamu> (and thus was fast and didn't do anything)
[16:19:39] <newburns> do I need to recreate the mount point or anything? or do I just let it sit for a while
[16:22:09] <newburns> now the screen shows repeated "no bucket for fff*************"
[16:31:33] <newburns> is there anything I can do from maintenance mode?
[16:31:42] <newburns> svcs -vx shows 58 services not running
[16:31:59] <newburns> my "/pool/" doesn't show any of the mount points
[16:40:28] <JT-EC> If svcs filesystem/local is in the state offline* then that's what everything else is waiting for.
[16:43:51] <newburns> What should I do/run to get it back online? I'm not really sure the zfs destroy is still running
[16:50:15] <newburns> it's OmniOS b281e50
[16:50:53] <nahamu> do you have a command line where you can run commands?
[16:52:05] <newburns> yes
[16:52:10] <newburns> I'm in maintenance mode
[16:52:23] <newburns> It says killing contract 16 then 20 and 21
[16:52:29] <newburns> then I log into maintenance mode
[16:52:55] <newburns> I can see the standard file structure, but no points under "/pool/"
[16:53:28] <newburns> The initial error is "can't open objset /pool/data2"
[16:53:36] <aszeszo> newburns: show us tail `svcs -L filesystem/local` output
[16:55:23] <aszeszo> is there anything interesting there? (no need to re-type the contents)
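
aszeszo's backticks are shell command substitution: `svcs -L filesystem/local` prints the path of that SMF service's log file, and tail is then run on that path. A minimal sketch of the equivalent forms:

    # svcs -L prints the log file path for the given SMF service
    svcs -L filesystem/local
    # feed that path to tail, either with backticks...
    tail `svcs -L filesystem/local`
    # ...or with the equivalent $() form
    tail $(svcs -L filesystem/local)
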
[16:57:07] <newburns> I typed exactly what was here and it returned "No file or directory svcs -L filesystem/local"
[16:57:36] <newburns> is the command "tail 'svcs -L filesystem/local'"?
[16:57:47] <JT-EC> It's a backtick not a single quote
[16:57:52] <aszeszo> try tail /var/svc/log/system-filesystem-local:default.log
[16:57:53] <JT-EC> (both are)
[16:58:09] <newburns> ok
[16:58:15] <newburns> going to run now
[17:00:29] <newburns> Method "start" exited with status 0.
[17:00:38] <newburns> Enabled.
[17:00:48] <newburns> last message is repeated 4 times
[17:05:51] <newburns> svcs -vx shows /system/boot-archive with 58 dependents and /network/rpc/smserver with 2 dependents are not starting properly
[17:12:37] <newburns> anything else I can run?
[17:16:39] <newburns> should I do a proper reboot command or something?
[17:23:07] <apeiron> the output you showed only tells us that the start method exited 0
[17:23:09] <apeiron> not if it output any errors
[17:25:53] <newburns> that's all that was shown in the tail
[17:27:13] <newburns> I will do a cat instead.
[17:30:21] <aszeszo> newburns: try "bootadm update-archive"
[17:30:48] <aszeszo> followed by svcadm clear boot-archive
[17:36:33] <newburns> "another instance of bootadm (pid 130) is running"
[17:38:54] <newburns> I have no more command line after I tried that. It's just blank lines
[17:46:50] <newburns> Is there a way to get back to the command line? Should I try CTRL+C?
[17:47:04] <newburns> in order to execute the svcadm clear boot-archive
[17:47:36] <apeiron> it might be waiting for a lock
[17:48:01] <newburns> OK. So just leave it alone until something new happens?
[17:48:21] <apeiron> ICBW, though. I've not seen it not return a prompt before
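
aszeszo's two commands are the usual recovery when system/boot-archive is the service holding everything else offline: regenerate the boot archive, then clear the failed service so SMF retries it and its dependents. A minimal sketch, assuming a prompt is available (which newburns did not reliably have at this point):

    # rebuild the boot archive for the running root
    bootadm update-archive
    # clear the failed service so SMF restarts it and its 58 dependents
    svcadm clear system/boot-archive
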
[17:48:52] <newburns> Is it still trying to execute my command once pid 130 ends?
[17:49:02] <apeiron> if it's waiting for a lock, probably
[17:50:26] <newburns> now it shows "no bucket for ffffff03d4cda860"
[17:50:34] <newburns> and **************700
[17:59:35] <apeiron> you may want to ask the illumos-discuss list
[18:15:17] <JT-EC> Or in #zfs and the zfs ML with a more focussed audience.
[18:20:16] <newburns> what is the proper reboot command
[18:20:47] <newburns> to do a full reboot, and not the fast reboot
[18:20:52] <JT-EC> reboot -p
[18:22:16] <newburns> That returns "another instance of bootadm (pid 130) is running"
[18:22:26] <newburns> Should I have killed 130 before rebooting?
[18:25:27] <JT-EC> If you want to reboot, and something that would be killed on a reboot anyway is preventing it, I'd just kill it then try reboot again.
[18:26:05] <newburns> so it would be "kill -9 130"
[18:49:11] <newburns> I have a process 132, zpool-vdev1
[18:49:21] <newburns> does that mean it is working on my pool?
[18:50:43] <newburns> ps -ef shows that process. It seems I can't kill the other bootadm (pid 130) process
[19:36:26] <newburns> What if I reinstall omnios, but leave the ZFS alone, just my OS drive reformatted
[19:36:34] <newburns> Could I reattach the old ZFS filesystem?
[19:37:17] <thebug> if I'm understanding correctly, is your "zfs filesystem" in a separate pool from your OS?
[19:37:27] <thebug> (it's not in 'rpool')
[19:37:35] <newburns> Right
[19:37:40] <joltman> you should unmount/detach the old ZFS pool before you re-install
[19:37:44] <newburns> my rpool is a single raid 1
[19:37:47] <joltman> however, if you don't you can force an import
[19:37:57] <joltman> export I guess is the correct term.
[19:38:06] <joltman> so export first, then you can import on a new OmniOS install
[19:38:10] <thebug> export == unmount/detach
[19:38:15] <thebug> use zpool export to cleanly do that
[19:38:15] <joltman> if you don't export, you can force an import.
[19:38:25] <joltman> ^^
[19:38:25] <newburns> I can't export from maintenance mode
[19:38:32] <thebug> zpool 'detach' is totally different
[19:38:36] <joltman> ^^^^^
[19:38:51] <newburns> Can I do any of those from maintenance mode?
[19:39:06] <thebug> here's an idea to start with, before you reinstall anything
[19:39:13] <thebug> do you have recent omnios install media?
[19:39:30] <thebug> boot it, and instead of picking 'install' from the text menu it pops up after it's booted, use 'shell'
[19:39:54] <thebug> then you can do some of this manipulation without some of the constraints
[19:40:12] <newburns> I can do the detach?
[19:40:46] <thebug> the installer cd shouldn't auto-import any of your pools
[19:40:57] <thebug> let me back up since I probably missed this
[19:41:04] <thebug> what's the problem you're trying to solve?
[19:41:13] <thebug> oh, bootadm
[19:41:29] <thebug> yes, I think we can probably get you sorted out without reinstalling at all
[19:42:51] <newburns> So once I boot from disc, what command am I using?
[19:43:00] <thebug> basically, from the shell we can import your rpool, ignore your data pool entirely, and try running bootadm against that mounted system
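
A rough sketch of the export/import flow joltman and thebug describe above, with the data pool name assumed to be vdev1:

    # cleanly detach the data pool from the current OS instance before a reinstall
    zpool export vdev1
    # on the new install, bring it back
    zpool import vdev1
    # if the pool was never cleanly exported (e.g. the old OS died), force the import
    zpool import -f vdev1
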
[19:43:09] <thebug> you have the numbered menu up?
[19:43:31] <newburns> I run to the server room to execute the commands. I run fast...
[19:43:47] <thebug> one sec while I pull this up in vmware so I at least have *tried* to tell you the right thing ;)
[19:44:08] <thebug> first prompt is the keyboard layout, I figure you can answer that one
[19:44:34] <thebug> yes, then the next prompt is the text menu
[19:44:40] <thebug> pick 'shell'
[19:46:22] <thebug> try importing your OS pool with
[19:46:27] <thebug> zpool import -R /a rpool
[19:46:47] <thebug> (it should mount your usual / on /a)
[19:47:09] <thebug> then you can do something like
[19:47:18] <thebug> /usr/sbin/bootadm update-archive -v -R /a
[19:47:23] <joltman> too bad you don't have IPMI/iDRAC on the server in question, you could have stayed at your desk.
[19:47:26] <thebug> assuming you're trying to rebuild the boot archive
[19:47:55] <thebug> indeed, but you always find out how much you miss drac/sol/ipmi too late ;)
[19:49:04] <newburns> is it possible to rebuild both pools?
[19:49:22] <thebug> is there a reason you need to?
[19:49:30] <thebug> I thought your problem was a broken boot archive?
[19:50:58] <thebug> ah, reading *even more* scrollback: besides fixing your boot archive, what do you want to 'rebuild' about it?
[19:51:20] <thebug> you just want to import the data pool and let it finish the destroys?
[19:51:36] <newburns> yes
[19:52:23] <thebug> ok, with that in mind, let's take a swing at both problems
[19:52:33] <thebug> were you able to successfully rebuild the boot archive?
[19:53:05] <newburns> still booting. I'm running back now
[19:53:40] <thebug> ok, if it runs successfully, make sure you're not in /a in your shell (just cd / or something), then zpool export rpool to get that out of the way
[19:56:28] <nahamu> "single raid 1"
[19:56:40] <nahamu> It's a simple mirror?
[19:56:53] <nahamu> How could you have 6TB of data on a mirror?
[19:57:02] <nahamu> are you using 6TB drives?
[19:57:13] <nahamu> or did you mean it's a single raidz
[19:57:17] <newburns> No, rpool is a 6tb mirror
[19:57:32] <newburns> I have 6 x 3TB in a RAID10 fashion
[19:57:54] <newburns> Something in proxmox got loose and filled up 6TB on my /pool/data2
[19:58:02] <thebug> I'm honestly not sure how you'd do that, given that illumos (as far as I know) can't boot from GPT, and drives bigger than 2TB have to be GPT
[19:58:09] <newburns> so I went to destroy it and Napp-it got hung
[19:58:27] <thebug> but, that's secondary to my original question
[19:58:29] <newburns> my os drive is a 250gb mirror
[19:58:33] <thebug> bootadm successful?
[19:58:38] <newburns> rpool isn't a 6tb mirror
[19:58:52] <thebug> ok, you just misspoke then. gotcha
[19:59:52] <nahamu> have you tried checking to see if the drives look like they're busy reclaiming blocks?
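
One way to answer nahamu's question about whether the drives are busy reclaiming blocks, assuming any working prompt, is to watch per-device I/O for a while:

    # per-device I/O statistics every 5 seconds; sustained reads/writes on the
    # data-pool disks suggest the background destroy is still making progress
    iostat -xn 5
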
"bootadm update-archive -v -R /a" returns missing missing /boot/grub/ on root /a[20:12:13] <thebug> yeah I just noticed that in my vmware setup, trying to figure out which path it *should* be[20:12:32] <newburns> awesome[20:19:08] <thebug> aha[20:19:27] <thebug> so, you 've got the pool mounted with an alternate mountpoint of /a[20:19:40] <thebug> now, you can use beadm to mount the actual root you want to mess with[20:19:44] <thebug> so, if I do[20:19:57] <thebug> beadm list, in my example, I have 'omnios' as a BE[20:20:08] <thebug> so I can do[20:20:12] <thebug> beadm mount omnios /b[20:20:43] <thebug> which puts it on /a/b (yes that's a little non-intuitive)[20:20:56] <thebug> but now you can do[20:21:05] <thebug> bootadm update-archive -v -R /a/b[20:21:31] <thebug> the couple messages about nodeid, mdi_ib_cache, and restore_store are not a problem[20:21:52] <thebug> then you can do beadm umount /a/b[20:22:05] <thebug> and you should have a fixed up boot-archive, assuming that ran successfully[20:22:21] <thebug> if so, then we can try to let zfs do recovery on your data pool ... let me know if the rpool stuff worked :)[20:27:37] <newburns> do I need to "cd /a" first before "beadm mount napp-it /b"[20:28:19] <thebug> nope[20:28:39] <thebug> you're running 'beadm' from the CD/usb stick , so you're good[20:30:01] <newburns> I'm guessing it was successful[20:30:12] <newburns> No errors but the 3 no file/directory[20:30:23] <newburns> Do I umount and restart[20:30:30] <thebug> cool, it should [hopefully!] be bootable[20:30:34] <thebug> do the unmount[20:30:37] <thebug> but do not restart[20:30:49] <thebug> after you do beadm unmount[20:30:56] <thebug> now we're going to try fixing your other pool[20:31:17] <thebug> just a do a plain 'zpool import <data pool name>' and see what it says[20:31:37] <newburns> do i umount /a as well?[20:31:52] <thebug> after the beadm unmount, do 'zpool export root'[20:31:54] <thebug> err 'rpool'[20:32:07] <thebug> that'll unmount all the os disk stuff you just fixed up[20:32:18] <thebug> and make it so next boot it's clean to import and use[20:34:40] <newburns> WARNING: Can't open objset for /vdev1/Omni02CT[20:34:49] <newburns> no command prompt afterward[20:34:59] <newburns> Maybe it's still doing it. I'm unsure[20:36:11] <thebug> hmm[20:36:38] <thebug> can you cancel it, and try[20:36:46] <thebug> zpool import -nfF <poolname>[20:37:37] <newburns> ^C does not cancel[20:37:38] <thebug> that'll let us check if it *thinks* it might be recoverable[20:37:44] <newburns> and drives have a lot of activity[20:37:57] <thebug> let it run then, it might just be playing the transactions[20:38:24] <newburns> So it is still correcting the removal of the 6TB?[20:39:21] <thebug> I can't say for sure, but my suspicion would be that it's doing the async delete of the dataset that you asked it to do[20:40:42] <newburns> I love ZFS. It is air tight[20:41:25] <thebug> if you had a prompt with which to poke at dtrace, someone a bit more knowledgeable in those guts could tell you exactly what it's doing[20:42:12] <newburns> It's fine. If it's working, I trust it to complete it's work. I'll let it run over the weekend if I have to[20:42:52] <thebug> I'd hope it wouldn't take all weekend for 6TB. 
[20:42:52] <thebug> I'd hope it wouldn't take all weekend for 6TB. I am curious to see what it says if/when you get your prompt back
[20:43:38] <newburns> Let me go check it now
[20:53:53] <newburns> no prompt
[20:54:01] <newburns> but still drive activity
[20:59:45] <thebug> if you can background it, you might be able to vaguely see what it's up to with something like
[20:59:58] <thebug> dtrace -n 'fbt:zfs::entry { @[probefunc] = count(); }'
[21:00:08] <newburns> whoa
[21:00:15] <newburns> How do I background it?
[21:00:16] <thebug> and let it run for a bit, then cancel and see what functions in zfs are being called
[21:00:25] <thebug> if it's possible, ^Z
[21:00:27] <thebug> the usual
[21:00:34] <thebug> ^z then 'bg'
[21:00:47] <thebug> it might not be possible in this case :/
[21:00:51] <newburns> are those backticks or single quotes
[21:00:56] <thebug> neither
[21:00:57] <thebug> just
[21:01:01] <thebug> ctrl-z
[21:01:01] <newburns> got it
[21:01:02] <thebug> then
[21:01:03] <thebug> bg
[21:01:04] <thebug> <enter>
[21:05:28] <newburns> ^Z did not work
[21:05:37] <newburns> I had to restart because it became unresponsive
[21:05:59] <newburns> upon shell "zpool import -nfF vdev1" it went back to the same thing
[21:06:04] <newburns> WARNING:...
[21:06:29] <thebug> can't open objset for pool vdev1?
[21:06:56] <newburns> right
[21:07:16] <newburns> drives have activity, but no prompt
[21:08:12] <thebug> not sure where to go from here. you might ask #zfs or #illumos, mention that you did a zfs destroy <....>, then tried to rm -fr that, then cut the power, and are at the current state while booted into the omnios installer media shell
[21:08:50] <thebug> that said, obviously the original rm -fr was unnecessary, they just needed to let zfs destroy do its thing
[21:11:25] <thebug> there are more zfs recovery flags and zdb tricks, but I'm not well versed enough in zfs internals to dig you out of this one without potentially doing more damage :)
[21:13:06] <newburns> Can I just boot into my rpool and redo my ZFS dataset?
[21:15:08] <thebug> you'll probably hang booting, doing what it's doing now
[21:15:28] <thebug> not because your rpool is broken, but because it'll try importing the data pool
[21:15:48] <thebug> if you wanted to disconnect the data pool disks and work on them on another machine, you could boot your rpool up
[21:16:08] <thebug> or, there's a way to tell SMF to boot up with no milestone, which would also skip the import
[21:16:17] <thebug> let me see if I can dig that up
[21:16:52] <thebug> I should say, that'll let you boot your machine up without removing disks
[21:17:11] <thebug> (sorry, wandering off for a sec to check lunch cooking) :)
[21:21:06] <thebug> ok, so if you reboot, and while the grub screen is up, use the arrow keys to select your BE, but don't hit enter
[21:21:43] <thebug> on your BE, hit the E key
[21:21:54] <thebug> that should show you the full command sequence for that entry
[21:22:02] <thebug> go down to the line with kernel$
[21:22:09] <thebug> hit E again to edit that one
[21:22:17] <thebug> add
[21:22:50] <thebug> hit enter, which takes you back to the broken out list, then hit b to boot it
[21:22:55] <thebug> the change isn't permanent, just this boot
[21:25:12] <thebug> when you do that, it comes up and asks you to log in
[21:25:42] <thebug> log in as root, and you should now have a more or less single-user mode, but without it doing any fs imports beyond rpool
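
thebug's message at 21:22:17 appears to have been cut off before the actual argument. The conventional way on illumos to do what he describes (boot without bringing services and non-root pools up) is to append a milestone option to the grub kernel$ line; this is an inference from context, not something stated in the log:

    # grub kernel$ line with the milestone override appended (assumed, not from the log)
    kernel$ /platform/i86pc/kernel/amd64/unix -B $ZFS-BOOTFS -m milestone=none
    # once logged in as root, normal multi-user mode can be reached later with
    svcadm milestone all
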
[21:26:20] <newburns> So it is not full-use operations on rpool?
[21:28:51] <thebug> right, it's basically single user mode
[21:29:03] <thebug> but you are booted from your real rpool
[21:29:29] <newburns> so it's no better than booting from the OmniOS disc
[21:30:10] <newburns> Is there a way I can remove the ZFS from even seeing /vdev1/Omni02CT
[21:31:11] <thebug> I'm checking if I can mess with /etc/zfs/zfs.cache to keep it from importing stuff
[21:31:15] <thebug> don't do anything to that yet
[21:31:29] <newburns> ok. I'm still in CD Boot command
[21:32:00] <thebug> as in, in the shell of the install cd, or somewhere else?
[21:32:10] <newburns> yep. Shell of install.
[21:32:23] <newburns> Rebooted back into CD shell
[21:32:30] <thebug> ok
[21:32:42] <thebug> import your rpool with a mountpoint of /a again
[21:32:54] <thebug> (zpool import -R /a rpool)
[21:33:00] <thebug> then do the beadm mount again
[21:33:43] <thebug> then, in /etc/zfs inside your mounted BE, mv zfs.cache zfs.cache.old
[21:33:52] <thebug> beadm unmount that, zpool export rpool, and reboot
[21:34:06] <thebug> that should [hopefully] have it forget about importing other pools than the one it boots from
[21:34:12] <thebug> and if not, it's easy to undo
[21:38:24] <newburns> I'm rebooting into rpool
[21:39:03] <thebug> good luck :)
[21:56:59] <newburns> There's no zfs.cache file within zfs
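
The likely reason newburns couldn't find the file: the pool cache on illumos is named zpool.cache, not zfs.cache, so thebug's rename trick would presumably apply to that file instead. This is a correction of the filename, not something confirmed in the log; a sketch using the /a/b mountpoints established earlier:

    # inside the mounted BE, the cache of pools to import at boot lives here
    ls /a/b/etc/zfs/zpool.cache
    # setting it aside should keep the data pool from being auto-imported on the next boot
    # (rpool itself is imported from the boot arguments, not from this cache)
    mv /a/b/etc/zfs/zpool.cache /a/b/etc/zfs/zpool.cache.old
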
[23:35:40] *** kwmo has joined #omnios
[23:37:01] <kwmo> i have a closed-source commercial app that currently runs on sunos 5.10... i was just curious if anyone would like to speculate on the odds of it running on omnios
[23:37:15] <apeiron> between 0 and 1
[23:37:39] <kwmo> it's not my app by the way, just one that is inherited
[23:37:45] <esproul> kwmo: it depends on what system libraries it requires
[23:38:00] <esproul> i.e. if it's only libc, your chances improve greatly
[23:38:15] <kwmo> from what i can see it just uses libc plus its own
[23:38:30] <kwmo> it is compiled as 32 bit, which could be an issue
[23:38:35] <apeiron> try an ldd -r on the executables and libs
[23:38:40] <apeiron> we can run 32bit binaries
[23:38:41] <esproul> we have 32-bit libc
[23:39:25] <esproul> we should be ABI-compatible to s10u8 or earlier
[23:39:36] <esproul> can't vouch for post-Oracle changes
[23:39:55] <kwmo> cool... i will give it a go... though i would ask here first in case the answer was "right after h*ll freezes over"
[23:40:28] <kwmo> thanks everyone!
[23:40:53] <esproul> good luck
[23:41:02] <apeiron> the more non-libc libs it needs, the closer you get to "right after hell freezes over"
[23:43:31] <postwait> kwmo: near 1
[23:43:31] <kwmo> the current install on 5.10 does not use zfs; i don't suppose there are any tricks to be able to clone the drive while it's mounted, are there? my first thought was just to see if the current stuff would run on different hardware
[23:44:13] <postwait> running it in a s10 zone is a bit more problematic given Oracle's line in the sand with free/nonfree releases.
[23:44:46] <postwait> I'd try to get it running in an omnios zone by rsyncing over the binary and all the libs sans libc.
[23:44:59] <postwait> and using LD_PRELOAD or crle tricks to get it to link legacy for you.
[23:45:18] <postwait> I've found that will usually get you to a running binary on the newest bits.
[23:46:40] <kwmo> thanks @postwait i will have a go at that
[23:47:06] <postwait> kwmo: out of curiosity, what app?
[23:48:38] <kwmo> it is a media gateway controller app that currently runs on dated proprietary hardware... thought it would be interesting to try and move it to generic hardware
[23:49:31] <thebug> I'm not sure what that means, but I'm curious
[23:49:38] <thebug> is this like a crestron av controller or something?
[23:49:48] <postwait> if you get it working... blog about it!
[23:49:55] <esproul> ^
[23:50:04] <postwait> or let someone here write a case study with you.
[23:50:16] <kwmo> no, it's part of a voice over ip gateway or softswitch
[23:51:03] <kwmo> we own a full license to the software, but i'm not sure the company that makes it would appreciate it LOL
[23:51:05] <thebug> oh, that makes sense then. that's actually a good candidate for zones for other reasons, like the resource reservation/capping stuff
[23:51:44] <apeiron> I can't see why they'd care unless you're under an NDA, really
[23:51:56] <esproul> may violate the support contract
[23:52:15] <apeiron> I think getting it to work on not-Sol10 already has
[23:52:17] <apeiron> just in general
[23:52:36] <kwmo> we don't carry support on it anymore so that is not a concern, but the NDA would be
[23:52:45] <apeiron> ah, so there is an NDA then
[23:53:57] <kwmo> yes, an original one... it continues so long as you use the licensed software... and from what i can see it would still be in effect even if you don't carry support
[23:54:20] <postwait> kwmo: been there.
[23:54:24] <apeiron> blah, no blog then. :(
[23:54:38] <postwait> too much of the cool stuff we do is under lock and key as well.
[23:56:50] <kwmo> thanks everyone and have a great weekend!
[23:56:56] *** kwmo has left #omnios
[23:58:36] *** newburns has quit IRC
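
A rough sketch of the approach apeiron and postwait describe: check the binary's library dependencies, copy the non-libc libraries over from the Solaris 10 box, and point the runtime linker at them inside an OmniOS zone. The directory and binary names here are made up for illustration:

    # list the shared libraries the binary needs and flag unresolved symbols
    ldd -r /opt/legacy-app/bin/mgcapp
    # after rsyncing the app and its private libs (but not libc) into e.g. /opt/legacy-app,
    # point the 32-bit runtime linker at them per-process...
    LD_LIBRARY_PATH_32=/opt/legacy-app/lib /opt/legacy-app/bin/mgcapp
    # ...or append the directory to the default 32-bit search path inside the zone with crle
    crle -u -l /opt/legacy-app/lib
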