   July 9, 2014  

[00:02:44] *** leoric has quit IRC
[00:26:57] *** jaimef has quit IRC
[00:28:20] *** Ewin has quit IRC
[00:29:15] *** Ewin has joined #openindiana
[00:31:46] *** jaimef has joined #openindiana
[01:13:09] *** arielb has quit IRC
[01:39:34] *** sputnik13 has joined #openindiana
[02:09:27] *** LeftWing has quit IRC
[02:10:21] *** LeftWing has joined #openindiana
[02:59:28] *** BobLu has joined #openindiana
[03:01:54] *** CVLTCMK0 has joined #openindiana
[03:09:43] *** sauce has quit IRC
[03:11:28] *** sauce has joined #openindiana
[03:40:28] *** BobLu1 has joined #openindiana
[03:40:43] *** BobLu has quit IRC
[03:43:03] *** sc0tty8 is now known as farva
[03:43:09] *** farva is now known as farva_
[03:46:26] *** Herr_cane is now known as Hurricane
[03:48:03] *** ancoron_ has joined #openindiana
[03:49:08] *** BobLu1 has quit IRC
[03:51:50] *** ancoron has quit IRC
[03:53:18] *** BobLu has joined #openindiana
[04:09:41] *** BobLu has quit IRC
[04:20:43] *** doug_ndndn has quit IRC
[04:20:49] *** doug_ndndn1 has joined #openindiana
[04:26:41] *** farva_ is now known as SS_AllBetsOff
[04:30:07] *** alanc has quit IRC
[04:30:33] *** alanc has joined #openindiana
[04:30:33] *** ChanServ sets mode: +o alanc
[04:30:34] *** SS_AllBetsOff is now known as missiSS_ippi
[04:32:11] *** missiSS_ippi is now known as BlitzClaptrap
[04:43:53] *** BlitzClaptrap is now known as sc0tty8
[04:49:14] *** doug_ndndn1 has quit IRC
[04:49:24] *** doug_ndndn has joined #openindiana
[05:01:01] *** Botanic has quit IRC
[05:04:05] *** BobLu has joined #openindiana
[05:04:26] *** mixomathoze has quit IRC
[05:09:39] *** mixomathoze has joined #openindiana
[05:18:04] *** MarcelT has quit IRC
[05:29:41] *** doug_ndndn has quit IRC
[05:30:15] *** doug_ndndn has joined #openindiana
[05:33:41] *** alex2 has joined #openindiana
[05:37:54] *** freakazoid0223 has joined #openindiana
[06:03:21] *** basic6 has joined #openindiana
[06:03:33] *** basic6_ has quit IRC
[06:13:44] *** BobLu has quit IRC
[06:14:02] *** BobLu has joined #openindiana
[06:23:41] *** _Tenchi_ has quit IRC
[06:24:20] *** _Tenchi_ has joined #openindiana
[06:32:57] *** robinsmidsrod has quit IRC
[06:34:25] *** robinsmidsrod has joined #openindiana
[06:57:52] *** nikolam has quit IRC
[07:01:13] *** sputnik1_ has joined #openindiana
[07:06:04] *** alendelon has joined #openindiana
[07:07:06] *** BobLu has quit IRC
[07:09:35] *** alendelon has quit IRC
[07:10:33] *** rocketeer has joined #openindiana
[07:10:33] *** MarcelT has joined #openindiana
[07:11:48] *** Seony has joined #openindiana
[07:40:30] *** rfm has quit IRC
[07:41:47] *** sputnik1_ has quit IRC
[07:42:20] *** rlaager has quit IRC
[07:55:23] *** BobLu has joined #openindiana
[07:55:32] *** rlaager has joined #openindiana
[08:08:37] *** tsoome has quit IRC
[08:22:20] *** psychicist_ has quit IRC
[08:34:51] *** tsoome has joined #openindiana
[08:43:12] *** Seony has quit IRC
[08:57:13] *** Ewin has quit IRC
[08:58:39] *** Ewin has joined #openindiana
[09:01:13] *** BobLu has quit IRC
[09:07:58] *** alanc has quit IRC
[09:09:39] *** alanc has joined #openindiana
[09:09:46] *** ChanServ sets mode: +o alanc
[09:11:39] *** nikolam has joined #openindiana
[09:21:39] <alp> does someone remember what is the last flash version working on OI ?
[09:25:59] <alp> found it, 10.1.85.3 worked for me last time
[09:31:03] <lblume> A need to check if it supports the latest round of Flash vulnerabilities?
[09:31:29] <alp> need to check if it works in FF 24
[09:31:31] <tsoome> probably does:D
[09:31:37] <alp> works for me
[09:31:51] <tsoome> (support vulnerabilities, i meant)
[09:32:44] <alp> I was always interested what malware will do if it finds itself on Solaris... I think it will crash :)
[09:37:52] *** BobLu has joined #openindiana
[09:43:41] *** Botanic has joined #openindiana
[09:45:57] <lblume> Why would it? Flash, Java are portable platforms. It might fail to deliver Windows-specific payload, but if you're pinning your hopes on that, you'll get pwned soon enough.
[09:46:24] *** BobLu has quit IRC
[09:55:05] *** Seemone has joined #openindiana
[10:16:41] *** BobLu has joined #openindiana
[10:18:47] *** BobLu has quit IRC
[10:19:56] *** W0rmDr1nk has quit IRC
[10:37:27] *** rocketeer has quit IRC
[10:41:59] *** LeftWing has quit IRC
[10:41:59] *** LeftWing has joined #openindiana
[10:45:18] *** alendelon has joined #openindiana
[10:47:02] *** BobLu has joined #openindiana
[10:54:09] *** BobLu has quit IRC
[11:13:26] *** W0rmDr1nk has joined #openindiana
[11:25:12] *** held has quit IRC
[11:44:03] *** held has joined #openindiana
[11:53:00] *** BobLu has joined #openindiana
[12:05:11] *** CME has quit IRC
[12:15:19] *** BobLu has quit IRC
[12:59:23] *** BobLu has joined #openindiana
[13:00:09] *** BobLu has quit IRC
[13:03:22] *** BobLu has joined #openindiana
[13:14:28] *** Botanic has quit IRC
[13:29:38] *** BobLu has joined #openindiana
[13:34:01] *** BobLu has quit IRC
[13:43:08] *** rocketeer has joined #openindiana
[14:01:13] <nikolam> last flash working in 151a7 was latest one from Adobe. Still works if you first go to http://www.adobe.com/software/flash/about/ and then on flash site. OpenSXCE desktop claims they solved Flash problem under illumos, sources of release from few weeks ago are available.
[14:01:27] *** Ewin has quit IRC
[14:02:06] <nikolam> lblume, maybe windows-specific payload might also work if 'wine' is installed :P
[14:15:16] *** nikolam has quit IRC
[14:17:48] *** nikolam has joined #openindiana
[14:17:59] *** master_of_master has joined #openindiana
[14:21:09] *** master_o1_master has quit IRC
[14:26:23] *** nikolam has quit IRC
[14:28:23] *** sputnik1_ has joined #openindiana
[14:32:44] *** sputnik1_ has quit IRC
[14:33:14] *** rocketeer has quit IRC
[14:34:32] *** CME has joined #openindiana
[14:35:34] *** CVLTCMK0 has quit IRC
[14:40:23] *** Vutral has quit IRC
[14:51:28] *** nikolam has joined #openindiana
[14:56:21] *** Vutral has joined #openindiana
[15:10:40] *** rbanffy has joined #openindiana
[15:29:38] *** alendelon has quit IRC
[15:43:19] *** dubf_ has joined #openindiana
[15:43:26] *** zacts_ has joined #openindiana
[15:43:56] *** Cenbe_ has joined #openindiana
[15:45:02] *** Lee-- has joined #openindiana
[15:47:09] *** Scall- has joined #openindiana
[15:47:15] *** Vutral has quit IRC
[15:48:36] *** saskaloon has quit IRC
[15:48:37] *** Scall has quit IRC
[15:48:41] *** dubf has quit IRC
[15:48:48] *** _0x5eb_ has quit IRC
[15:48:49] *** Hedonisto has quit IRC
[15:48:55] *** n2deep_ has quit IRC
[15:48:56] *** Lee- has quit IRC
[15:49:01] *** tsukasa has quit IRC
[15:49:06] *** ryao has quit IRC
[15:49:10] *** mui has quit IRC
[15:49:12] *** zacts has quit IRC
[15:49:19] *** kovert has quit IRC
[15:49:21] *** tinuva has quit IRC
[15:49:40] *** copec has quit IRC
[15:49:43] *** VerboEse has quit IRC
[15:49:44] *** jamesd has quit IRC
[15:49:51] *** kshannon_ has quit IRC
[15:49:53] *** tomocha6 has quit IRC
[15:49:54] *** Cenbe has quit IRC
[15:49:55] *** gosx has quit IRC
[15:49:56] *** Lee-- is now known as Lee-
[15:49:59] *** Scall- is now known as Scall
[15:50:22] *** tsoome has quit IRC
[15:52:05] *** mui has joined #openindiana
[15:57:51] *** kshannon has joined #openindiana
[15:58:15] *** gosx has joined #openindiana
[15:58:28] *** kovert has joined #openindiana
[15:58:29] *** n2deep_ has joined #openindiana
[15:58:30] *** jamesd has joined #openindiana
[15:58:42] *** ryao has joined #openindiana
[15:58:53] *** saskaloon has joined #openindiana
[16:01:44] *** tsukasa has joined #openindiana
[16:01:46] *** _0x5eb_ has joined #openindiana
[16:01:49] *** copec has joined #openindiana
[16:01:50] *** copec has quit IRC
[16:05:20] *** copec has joined #openindiana
[16:08:49] *** copec has joined #openindiana
[16:09:23] *** Hedonisto has joined #openindiana
[16:09:24] *** Hedonisto has quit IRC
[16:09:24] *** Hedonisto has joined #openindiana
[16:12:19] *** copec has joined #openindiana
[16:12:35] *** tinuva has joined #openindiana
[16:15:49] *** copec has joined #openindiana
[16:19:19] *** copec has joined #openindiana
[16:20:27] *** VerboEse has joined #openindiana
[16:22:50] *** copec has joined #openindiana
[16:23:47] *** arielb has joined #openindiana
[16:26:20] *** copec has joined #openindiana
[16:29:45] *** nikolam has quit IRC
[16:29:50] *** copec has joined #openindiana
[16:32:46] *** Vutral has joined #openindiana
[16:33:20] *** copec has joined #openindiana
[16:36:50] *** copec has joined #openindiana
[16:40:20] *** copec has joined #openindiana
[16:43:50] *** copec has joined #openindiana
[16:44:03] *** irker308 has joined #openindiana
[16:44:03] <irker308> spec-files-extra [5820] kenmays wine.spec: bumped to 1.7.22
[16:47:20] *** copec has joined #openindiana
[16:47:45] *** alendelon has joined #openindiana
[16:49:58] *** tsoome has joined #openindiana
[16:50:50] *** copec has joined #openindiana
[16:52:59] *** Vutral has quit IRC
[16:54:20] *** copec has joined #openindiana
[16:57:39] *** datadigger has quit IRC
[16:57:45] *** datadigger has joined #openindiana
[16:57:50] *** copec has joined #openindiana
[17:07:04] *** copec has joined #openindiana
[17:07:14] *** sc0tty8 has quit IRC
[17:07:37] *** sc0tty8 has joined #openindiana
[17:12:38] *** Vutral has joined #openindiana
[17:33:08] *** Hedonisto has quit IRC
[17:36:50] *** Hedonisto has joined #openindiana
[17:48:29] *** tomocha6 has joined #openindiana
[17:59:07] *** doug_ndndn has quit IRC
[17:59:21] *** doug_ndndn has joined #openindiana
[18:08:54] *** zacts_ has quit IRC
[18:21:23] *** W0rmDr1nk has quit IRC
[18:32:43] *** psychicist_ has joined #openindiana
[18:57:40] *** dezgot has quit IRC
[18:58:15] *** dezgot has joined #openindiana
[19:17:40] <alendelon> where i can found manual for compiling latest version php on OI?
[19:19:15] *** Botanic has joined #openindiana
[19:27:02] *** nikolam has joined #openindiana
[19:29:01] *** Seemone has quit IRC
[19:44:00] *** irker308 has quit IRC
[19:52:07] *** Kelzier has joined #openindiana
[20:15:52] *** Vutral is now known as mrTapir
[20:28:25] *** ningalls has quit IRC
[20:29:53] *** j0 has joined #openindiana
[20:30:13] *** ningalls has joined #openindiana
[20:34:03] <j0> Where would be a good place to look for paid support for OI? I need to resize my root partition and would prefer to pay for assistance with it than spend more time reading docs and risking my file server.
[20:43:48] *** eki has quit IRC
[20:44:04] <tsoome> you got mirrored rpool?
[20:45:25] <j0> no.. and that will need to change too
[20:45:49] <tsoome> you got larger disks ready?
[20:45:52] <j0> yes
[20:46:08] <tsoome> disk hot swap supported?
[20:46:12] *** Nex7_ is now known as Nex7
[20:46:14] <j0> yes
[20:46:26] <j0> ideally I could use the existing drives in the system and just enlarge the partition
[20:46:41] <tsoome> that will do even better
[20:47:16] <tsoome> it means you wont need to move your hardware at all
[20:47:52] <j0> I started to get confused around the concept of fdisk vs format, and partitions vs slices, .etc :)
[20:48:17] <j0> I understood it when I setup the file server.. but it's been a few years and with it being the only Solaris system I use, my knowledge got stale
[20:49:21] <tsoome> i can walk you through after some time
[20:49:46] <j0> that would be great
[20:50:18] *** held has quit IRC
[20:50:46] <j0> i'm not in a rush, so let me know when works for you
[20:51:35] <jamesd> thankfully for the most part thanks to ZFS those are going away.... :-)
[20:52:38] *** alex2 has quit IRC
[21:02:49] <tsoome> ok, i have time now
[21:10:07] *** held has joined #openindiana
[21:13:39] <j0> sure.. would you like to do it over the phone, or a screen sharing session?
[21:14:34] <tsoome> na this channel or private will be enough:) at least so far it has been in similar cases:D
[21:15:14] <j0> ok.. let me know what I can paste you :)
[21:15:36] <tsoome> first thing, both current rpool disk and the second one to be mirrored, are they exactly the same size?
[21:15:48] <j0> different size
[21:16:20] <j0> is it worth splitting them up to use part of it as a write cache or anything else?
[21:16:25] <tsoome> ahm, do you happen to have identical one to be paired with boot disk?
[21:17:08] <tsoome> its possible to have different ones, but identical ones would be best to avoid any possible issues/mistakes etc
[21:17:38] <j0> the root is on an 80gb ssd right now, but only 15gb assigned to the rpool.. i also have another 100+gb ssd in there
[21:17:55] *** eki has joined #openindiana
[21:18:11] <j0> in the perfect world.. i'd enlarge the rpool to take up most of the 80gb ssd, and mirror it to the 100gb ssd.. and use the remaining space on the 100gb for a write cache
[21:18:41] <j0> should I be simplifying it and just having 2 drives dedicated to the rpool and nothing else?
[21:19:20] <tsoome> yes, the best would be 2 identical ones dedicated to rpool
[21:20:19] <tsoome> its possible to slice and use spare space for other purposes, but that can hurt performance and its not good idea to have unmirrored slog as you have planned
[21:21:20] <j0> i am in a rare predicament in where i only have 512gb SSDs free :)
[21:21:28] <tsoome> rpool does not have to be on ssd btw
[21:23:23] <tsoome> ok, well, so you have 15GB from 80G ssd in use, and you have 100G one to use as mirror?
[21:23:35] <j0> lets use a slice for now. in the future I may get some small 2.5in drives for the rpool
[21:23:42] <j0> yes, that's right
[21:23:43] <tsoome> yes.
[21:23:51] *** Kelzier has quit IRC
[21:23:54] <j0> I have a 3rd "partition" on the 80gb ssd that i'm not sure if it's ever been in use.
[21:23:58] <tsoome> ok, was that 100G ssd in use somewhere?
[21:24:01] <jamesd> find an old laptop drive.... put all your data on another pool... if all your data is seperate except for a few files in /etc that you can keep a copy of, its easy to reinstall and be up and running in a few minutes
[21:24:17] <j0> tsoome: it was in use as a cache.. but i have other drives for that
[21:24:33] *** echobinary1 has joined #openindiana
[21:24:37] <tsoome> ok, i assume that disk is not in use right now
[21:24:57] <j0> just 1 command away from not being used any more
[21:25:14] <tsoome> ok, make sure its not in use. then run format -e, select that disk
[21:25:41] <j0> done
[21:26:12] <tsoome> then enter fdisk from format prompt, and delete all fdisk partitions and quit with save
[21:26:52] <j0> done
[21:26:53] <tsoome> once done, enter label and select SMI label type
[21:27:04] *** echobinary has quit IRC
[21:27:52] <j0> is "SMI" the solaris system type?
[21:28:22] <tsoome> on label command, it will ask if you wanna use SMI or EFI, you have to use SMI
[21:28:22] <j0> 1 Active Solaris2 1 17909 17909 100
[21:28:29] <j0> it never asked
[21:28:39] <tsoome> you are still in fdisk menu
[21:29:23] <j0> ran it from the start again.. no fdisk.. it just says ready to label, continue?
[21:29:47] <tsoome> did you start format with -e ?
[21:30:03] <j0> that was my problem.. thanks
[21:30:10] <j0> i had restarted
[21:30:17] <j0> ok.. what's next
[21:30:25] <tsoome> OI can boot only from fdisk+SMI, not from EFI, and you can change label type only with format -e
[21:31:20] <tsoome> so, from format prompt, use fdisk command, delete fdisk partitions, and exit fdisk with write
[21:31:42] <j0> done
[21:31:47] <tsoome> once back on format prompt, enter label command
[21:32:14] <tsoome> now it should ask SMI or EFI
[21:32:27] <j0> may I paste to you in a message what it's saying?
[21:32:32] <tsoome> sure
[21:32:48] <j0> i already did the SMI or EFI piece earlier..
[21:33:34] <tsoome> ok, now select 1, create solaris2 partition, 100%
[21:34:00] <j0> ok
[21:34:16] <j0> this is on the 100gb ssd.. should i be creating a smaller partition for the rpool mirror?
[21:34:42] <tsoome> no, SMI label is inside fdisk partition, you slice SMI label
[21:34:56] <j0> ok.. yikes :)
[21:35:21] <tsoome> its like “extended” fdisk partition, in some sense
[21:35:54] <j0> ok
[21:35:55] <tsoome> so, once back on format prompt, enter label again and then paste me output from verify command
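Collected in one place, the relabeling tsoome walks through amounts to this sketch (format is interactive, so this is a session outline rather than a script; the device selection is from this session, and every step here destroys data on the selected disk):

```shell
format -e                 # -e is what makes format offer the SMI/EFI choice
#   > select the spare disk (c3t2d0 in this session)
#   format> fdisk         # delete all fdisk partitions, then create one
#                         # Solaris2 partition covering 100% of the disk
#   format> label         # pick SMI -- OI boots from fdisk+SMI, not EFI
#   format> verify        # print the new label to double-check
```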
[21:36:41] *** j0 has quit IRC
[21:36:58] *** j0 has joined #openindiana
[21:37:57] <tsoome> ok, now enter partition command
[21:38:26] <j0> k
[21:38:41] <j0> side comment: what is the purpose of the backup partition?
[21:38:55] <tsoome> its historical artifact
[21:39:09] <j0> ok.. i spent a while googling that one
[21:39:24] <tsoome> used to access whole disk, or in case of x86, whole fdisk partition.
[21:39:27] <j0> c3t0d0p2 used to be the write cache fyi
[21:40:16] <tsoome> yea, but you should use slice from SMI, and not multiple solaris fdisk partitions. less confusion;)
[21:40:43] <j0> thanks.. good to know
[21:40:54] <tsoome> ok, pastebin also format verify from existing rpool disk
[21:42:01] <tsoome> on that second disk, enter partition command, from there you can set up SMI slices.
[21:42:27] <j0> http://pastebin.com/2A8WDffJ
[21:42:49] <j0> k
[21:43:22] <j0> in that last pastebin.. c3t0d0 is the existing rpool (80gb)... c3t2d0 is the new mirror (100gb)
[21:43:33] <tsoome> ok, from partition prompt, enter 0, tag root, flags wm, starting from 1
[21:43:49] <jamesd> 12
[21:44:40] <j0> and then the size? I figure if i do a 50gb that should do me ok as I may have some spare 64gb SSDs laying around
[21:44:42] <tsoome> and try to get its size not less than current rpool slice, not much larger either
[21:45:14] <j0> current size is only 15gb
[21:45:23] <j0> i guess i need room for swap on there too
[21:45:29] <tsoome> we wanna get mirror up, then set correct size on 80G ssd, and then adjust 100G size accordingly
[21:46:06] <tsoome> the issue is, you cant shring zpool, only grow. so you have to be really careful with sizes.
[21:46:13] <tsoome> shrink*
[21:46:27] <j0> we are currently working on the 100gb, correct?
[21:46:29] <j0> ok
[21:46:31] <tsoome> yes
[21:46:42] <j0> ok.. i've made it 40gb
[21:47:00] <tsoome> swap is on rpool, thats not an problem to grow it once you have larger rpool
[21:47:15] <j0> Part Tag Flag Cylinders Size Blocks
[21:47:15] <j0> 0 root wm 1 - 6688 40.00GB (6688/0/0) 83894272
[21:47:47] <tsoome> what size you get if you set its size as 2504c ?
[21:48:30] <j0> 14.98GB
[21:48:49] <tsoome> exactly the same as on current rpool?
[21:49:25] <j0> yes
[21:49:28] <j0> verified
[21:49:30] <tsoome> that will simplify things a bit.
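The numbers check out: the 40.00GB slice pasted above spans 6688 cylinders and 83894272 blocks, so one cylinder is 12544 blocks of 512 bytes, and 2504 cylinders come to 14.98GB. A quick check of that arithmetic (geometry figures taken from the paste above):

```shell
blocks_per_cyl=$((83894272 / 6688))     # 12544 blocks per cylinder
bytes=$((2504 * blocks_per_cyl * 512))  # size of a 2504-cylinder slice
awk -v b="$bytes" 'BEGIN { printf "%.2f GB\n", b / (1024 * 1024 * 1024) }'
# prints: 14.98 GB
```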
[21:50:01] <tsoome> ok, now again label command to write it down
[21:50:09] <j0> k
[21:50:12] <tsoome> you should have slices 0, 2 and 8 defined
[21:50:21] <tsoome> defined == having size
[21:50:27] <j0> ok
[21:50:31] <j0> partitions = slices?
[21:50:32] <j0> sortof?
[21:50:38] <tsoome> yes
[21:50:43] <j0> starting to make sense now
[21:50:55] <tsoome> slices are SMI term, partition is fdisk term
[21:51:14] <tsoome> but the idea is similar.
[21:51:27] <tsoome> ok, now you can exit from format
[21:51:55] <j0> k
[21:53:07] <j0> zpool add yet?
[21:53:17] <tsoome> and set up the rpool mirror; zpool attach rpool c3t0d0s0 c3t2d0s0 — i assume c3t2 is your 100GB ssd
[21:53:28] <tsoome> not add, attach
[21:53:42] <tsoome> and make sure you have s0 at the end of the disk name
[21:53:55] <j0> what command gives me a list of slices?
[21:54:02] <j0> nvm.
[21:54:13] <tsoome> format - verify
[21:54:17] <j0> /dev/dsk/c3t2d0s0 overlaps with /dev/dsk/c3t2d0s2
[21:54:21] <j0> ^ error
[21:54:21] <tsoome> or format - partition - print
[21:54:32] <j0> i was thinking we were working with slice 1, not 0.
[21:54:35] <tsoome> yes, you need to use attach -f, forgot it
[21:54:49] <j0> thanks.. on second read, that error sounded ok
[21:55:15] <tsoome> no, you have s0, s2 and s8 defined, s1 should be listed as 0-0
[21:55:25] <j0> k.. after resilver, what is next?
[21:55:55] <j0> it's running at only 80Mb/s. Is there a built in throttle?
[21:56:05] <j0> might be slow ssds too. :)
[21:56:14] <tsoome> installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c3t2d0s0
[21:56:43] *** jaimef has quit IRC
[21:56:54] <j0> should i wait for resilver or just run it?
[21:56:57] <tsoome> its mirroring based on zfs block allocation
[21:57:03] <tsoome> you can run installgrub now
[21:57:08] <j0> k. done
[21:57:52] <tsoome> it will write bootblocks to cylinder 0, thats why you did start s0 from 1, so it wont overlap with grub bootblocks
[21:58:32] <tsoome> and now you need to wait till resilver is done
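Put together, the attach-and-bootblock steps from this session (c3t0d0s0 is the existing rpool slice, c3t2d0s0 the newly sliced mirror; -f is needed only because s0 overlaps the whole-disk s2 backup slice, which is expected with SMI labels):

```shell
zpool attach -f rpool c3t0d0s0 c3t2d0s0   # start mirroring onto the new slice
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c3t2d0s0
zpool status rpool                        # wait here until the resilver finishes
```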
[21:58:56] <j0> ok
[22:00:12] <j0> all done
[22:00:55] <tsoome> ok, now you can detach old rpool disk, zpool detach rpool c3t0d0s0
[22:01:21] <j0> k
[22:01:59] *** jaimef has joined #openindiana
[22:02:34] <tsoome> the other 2 solaris2 fdisk partitions on that 80GB disk are not in use?
[22:03:09] <j0> as far as i know :)
[22:03:21] <j0> i do'nt know what the 3rd partition was being used for
[22:03:55] <tsoome> ok, then run format without -e, select that disk, fdisk and delete those partitions and create 1 100% solaris2 partition
[22:04:11] <tsoome> hm
[22:04:58] <j0> 1 Active Solaris2 1 12459 12459 100
[22:05:00] <j0> done
[22:05:15] <j0> and ran label. SMI
[22:05:34] <tsoome> so, write it down get back to format prompt and enter partition
[22:05:37] *** gwr has joined #openindiana
[22:05:42] <j0> create a matching root partition?
[22:06:16] <tsoome> and create s0, from 1 and $ as size if you wanna use all its size
[22:06:45] <tsoome> you can create proper size now, as its smaller disk
[22:07:10] <j0> in your opinion, what size should my root pool be on a basic file server? 15gb was good for a while, but a few log files and crash dumps got out of hand and filled it up
[22:07:17] <j0> but that was after almost 3 years
[22:07:56] <tsoome> that really depends on the size of disks you wanna use there in future
[22:08:45] <tsoome> i would use all the space on dedicated disks, so you have all the space available from pool itself
[22:08:55] <j0> i'll do a 40gb size then.. as i will have some 60gb SSDs that I can use later
[22:09:26] <tsoome> thats reasonable — then you can replace the disks via zpool attach/detach
[22:09:45] <tsoome> and dont need to create new pool:)
[22:09:46] <j0> k..slice is created and labelled smi
[22:10:09] <tsoome> ok, now attach it back to the rpool
[22:10:10] <j0> do i attach it now?
[22:10:10] <j0> k
[22:10:50] <j0> install grub?
[22:11:09] <tsoome> yes, altho its there anyhow, but it wont hurt to be sure;)
[22:11:20] <j0> ok
[22:11:35] <j0> now for the enlarging
[22:11:51] <tsoome> once resilver is done, you can detach second disk and adjust the size of s0 to match the size
[22:12:10] <tsoome> and then attach it back
[22:12:24] <j0> was it just safety reasons for not enlarging when it got moved to the second disk?
[22:12:34] <tsoome> yes
[22:12:42] <j0> what could have gone wrong?
[22:13:17] <tsoome> if you had made mistake with sizes, you could end up with large pool which wont fit to the smaller disk
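So the grow-in-place on the 80GB disk repeats the same dance (sketch only; the sizes are the ones chosen in this session):

```shell
zpool detach rpool c3t0d0s0    # drop the old 15GB side once resilver is done
format                         # plain format this time -- the label stays SMI
#   fdisk: delete the stale partitions, create one 100% Solaris2 partition
#   partition: recreate s0 starting at cylinder 1 (cylinder 0 is reserved for
#              the grub bootblocks), sized as wanted -- 40GB here, or $ for
#              the rest of the disk
#   label: write the new SMI label
zpool attach -f rpool c3t2d0s0 c3t0d0s0   # mirror back onto the resized slice
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c3t0d0s0
```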
[22:13:20] <j0> ok
[22:13:57] <j0> now for additional slices, do I just use an empty partition number? what do I tag it?
[22:14:35] <tsoome> the tag is not important, its just for readability
[22:14:49] <tsoome> and there are only predefined names for tags
[22:14:58] <j0> does the partitoin # matter either?
[22:15:52] <tsoome> you use it as part of disk device name, so yes.
[22:16:26] <tsoome> for tag values, if you enter ? on tag prompt, it will list you the possible names
[22:16:27] <j0> but for creating additional slices, does it matter what number they are? (assuming it's an empty slot)
[22:16:32] <j0> just saw that.. thanks
[22:16:55] <tsoome> as long as its not in use, it does not matter, no
[22:17:14] <tsoome> you cant use s9, if i remember correctly
[22:18:05] <tsoome> ok, once resilver is done, make sure you have done installgrub as well, and you can try to get rpool expanded
[22:18:49] <j0> ok.. resilver is done.
[22:18:54] <tsoome> zpool set autoexpand=on rpool, should do it
[22:18:55] <j0> installgrub was ran before.
[22:19:19] <tsoome> if not, you may need to reboot
[22:19:49] <tsoome> once its with new size, set autoexpand=off again to prevent accidental growth
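The expansion step itself is just:

```shell
zpool set autoexpand=on rpool    # let rpool grow into the enlarged slices
zpool list rpool                 # check that the new size is visible
zpool set autoexpand=off rpool   # then switch it back off, as advised above
```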
[22:19:59] <j0> k.. too bad i need a reboot to do it
[22:20:18] <j0> rebooting now
[22:21:17] <tsoome> im not too sure with rpool, somehow its expand has been behaving wierd sometimes.
[22:21:37] <j0> i may have forgotten something.. my pool is not expanding.
[22:21:48] <j0> or do i need a "hard" reboot
[22:22:04] <tsoome> try removing /etc/zfs/zpool.cache
[22:22:12] <tsoome> and yes, reboot -p
[22:22:33] <j0> i never enlarged one of my partitions.. oops
[22:24:29] *** Kelzier has joined #openindiana
[22:24:37] <tsoome> if you have nuked zpool.cache, you need to import other pools manually with zpool import
[22:24:58] <j0> it seems to have worked without rebooting this time
[22:25:06] <j0> even before the resilver
[22:25:14] <tsoome> resilver?
[22:25:38] <j0> i had to do it again.. i forgot to change or write the label for one of the partitions
[22:25:55] <tsoome> ah, ok, i see
[22:26:35] <j0> whenever i'm working with the drive labels on my boot drives, do I always need to run format -e?
[22:27:02] <tsoome> no, only when you need to switch the EFI versus SMI
[22:27:43] <tsoome> if you look on zpool status, on every disk shown without sX at the end, there is EFI label on the disk
[22:28:14] <tsoome> well, if the disk name is ending with dX, to be exact.
[22:28:35] <tsoome> anyhow, remember to set autoexpand=off again
[22:29:05] <j0> already done.. thanks for double-checking
[22:29:22] <tsoome> and now, as you have larger rpool, you can resize swap as well: zfs set volsize=… rpool/swap
[22:29:50] <tsoome> and basically thats it.
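For example (the 4G figure is purely illustrative; pick a size that fits the new rpool):

```shell
zfs get volsize rpool/swap       # current swap volume size
zfs set volsize=4G rpool/swap    # hypothetical new size
```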
[22:30:18] <j0> ooh. i feel like a ninja now :)
[22:30:22] <tsoome> its also good idea to reboot -p as well to be sure its still able to boot;)
[22:30:25] <j0> got my disks all sliced up with logs and caches
[22:30:30] <j0> i'll run the -p now
[22:33:52] <j0> we are working!
[22:34:03] <tsoome> so, great success.
[22:34:09] <j0> thanks so much for patiently walking me through this
[22:34:27] <j0> i would really love to buy you dinner
[22:34:38] <tsoome> that may be a bit hard:D
[22:34:46] <j0> msg me your paypal e-mail
[22:34:55] <j0> where are you from?
[22:35:31] <tsoome> estonia:D
[22:35:52] <j0> i had to google that to get a location
[22:36:21] <tsoome> replacing the rpool disks in future can be done in same way, except that you need to insert/remove disks as well:D
[22:36:57] <tsoome> as long as the partitioning part is done properly, its not that hard.
[22:41:00] <j0> now to figure out how to update OI.. i've been having my samba shares stop working every few months
[22:41:27] <tsoome> pkg update -vn to see the changes, and pkg update to get it done
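In other words (the -v/-n split is as tsoome describes):

```shell
pkg update -vn   # dry run: list what would change, verbosely, without applying
pkg update       # apply the update for real
```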
[22:45:36] <j0> that's a big update today.. :)
[22:46:15] <j0> is openindiana still used widely? I'm seeing lots about using OmniOS instead
[22:47:50] <tsoome> no idea how widely, dont know if they go any statistics on pkg.openindiana.org
[22:48:05] <tsoome> s/go/do/
[22:48:53] <tsoome> omnios is a bit more server oriented
[22:53:59] *** copec has quit IRC
[23:05:20] *** copec has joined #openindiana
[23:06:00] *** alendelon has quit IRC
[23:09:25] <tomww> and OmniOS is text-console only. while OI can drive Nvidia Cards pretty well.
[23:16:34] *** tg has quit IRC
[23:16:34] <copec> For a *nix admin workstation I still use OI
[23:16:41] <copec> well, one of my workstations
[23:17:20] <copec> The rest are infected with lenucks
[23:20:50] <copec> I <3 the opensolaris 10 family tree
[23:42:51] <tomww> "infected", yes, that is a valid description.
[23:43:14] <tomww> I try to avoid infecting my machines, and take the effort to contirbute to SFE / spec-files-extra to get the missing packages.
[23:43:34] <freakazoid0223> eck malware :P
[23:44:17] <freakazoid0223> or perhaps half-assware
[23:45:11] <tomww> the less favourite stuff spreads more then the (in areas) better stuff.
[23:45:35] <tomww> we have native ZFS, that is enough reason to use that kernel.
[23:46:36] *** tg has joined #openindiana
[23:46:44] * Patrickdk wonders if linux will ever get ipmp
[23:46:58] <Patrickdk> comstar ipmp zfs, heh, all things I love
[23:48:21] <tsoome> Patrickdk: oh they have teaming;)
[23:48:36] <copec> zfs > btrfs, dtrace > systemtap, smf > systemd, ...
[23:50:48] <Patrickdk> oh, never btrfs, never again
[23:50:56] <copec> I must admit I'm not unix old-skool, I prefer the gnu tools
[23:51:35] <tsoome> ls --what-option-you-wanna-use-today
[23:52:59] <tomww> zones >> <lxc|other_hald_backed>
[23:53:00] <Patrickdk> ls -lash
[23:53:09] <Patrickdk> :)
[23:53:23] <copec> linux uncontainers
[23:55:22] <tsoome> some of gnu tools are ok, no doubt about it.
[23:55:50] <tsoome> but then again, how many tar options you actually use for example?:P
[23:56:31] <copec> The ones I'm used to, otherwise, "Wha... What the F*ck is this?" :-p
[23:56:48] <copec> obviously what I'm used to is always better
[23:56:58] <tsoome> try gtar --usage | wc -l
[23:57:44] *** Savis has joined #openindiana
[23:58:30] <Patrickdk> tsoome, depends
[23:58:35] <Patrickdk> we talking about tar? or gnu tar?
[23:58:42] <tsoome> gnu:P
[23:58:55] <Patrickdk> normally, tar -xzf, or tar -xaf
[23:59:06] <Patrickdk> and, tar -czpsf
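Patrickdk's short list covers most day-to-day use. A minimal GNU tar round trip with those flags (paths are a scratch directory created on the fly, purely for illustration):

```shell
set -e
workdir=$(mktemp -d)                        # scratch area; path is arbitrary
mkdir -p "$workdir/src"
echo "hello" > "$workdir/src/file.txt"
tar -czf "$workdir/a.tar.gz" -C "$workdir" src    # create, gzip-compressed
mkdir "$workdir/out"
tar -xzpf "$workdir/a.tar.gz" -C "$workdir/out"   # extract; -p keeps permissions
cat "$workdir/out/src/file.txt"                   # prints: hello
```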