   May 10, 2011  
[00:01:48] *** Oriona has joined ##nexenta
[00:07:11] *** trbs has quit IRC
[00:09:01] *** Torpeo_ has joined ##nexenta
[00:09:57] *** Torpeo has quit IRC
[00:09:57] *** lburton has quit IRC
[00:09:57] *** Triskelios has quit IRC
[00:10:02] *** Triskelios has joined ##nexenta
[00:12:18] *** Torpeo_ is now known as Torpeo
[00:17:05] *** p3n_ has joined ##nexenta
[00:18:41] *** p3n__ has joined ##nexenta
[00:19:16] *** p3n_ has quit IRC
[00:19:46] *** lburton has joined ##nexenta
[00:19:57] *** p3n has quit IRC
[00:29:16] <kreign> kdavy, you around?
[00:37:48] *** andy_js has quit IRC
[00:49:38] *** bklang_ has quit IRC
[00:50:52] <wonslung> hello guys, i have a question, i have a friend who recently decided he wanted to set up a ZFS based nas and he decided to use nexentastor (out of the choices)
[00:51:40] <wonslung> I've never actually used nexentastor myself, i've always used solaris, does the webui have a tool to format the drives and label them EFI or does he do it the standard way (he's asking me and i just don't know)
[00:52:01] *** myers has quit IRC
[00:52:28] *** myers has joined ##nexenta
[01:06:58] <pat2man> wonslung: you can do it either way
[01:24:19] *** myers has quit IRC
[01:26:30] *** kwazar has quit IRC
[01:39:05] *** bklang_ has joined ##nexenta
[01:41:58] *** Ganymede has joined ##nexenta
[01:46:37] <Ganymede> Quick question about ZFS: Do people only use ZFS/Nexenta fileservers for always-on servers? Or does anyone use it on a machine that gets turned off, say, every night and turned back on every morning? I get the feeling that ZFS is fragile to constant reboots from numerous horror stories on forums where after a machine is rebooted, the ZFS filesystems are no longer mountable or recognized for whatever reason (hard drive order changed?).
[01:48:30] <Andys^> Ganymede: that shouldn't happen
[01:48:39] <Andys^> hard drive order shouldn't matter as ZFS labels the drives
[01:48:48] <Andys^> (if you use the entire disk)
[01:49:48] <Ganymede> Andys^, So even if drives are unplugged and replugged to different ports on the same machine, ZFS shouldn't care?
[01:49:53] <Andys^> *nod*
[01:50:04] <Andys^> you should test it first, though ;)
[01:52:04] <Ganymede> Also, I get the impression that ZFS's "ARC" is a very smart caching system, perhaps smarter than Linux's disk caching system? With Linux, I frequently have the problem where a single read through of a large file ends up clearing the cache of small files, but typically, I need to read the small scattered blocks fast and the large file slowly so it's inefficient. As I understand it, ZFS's ARC is smart enough to not let a single large file fill up the cache?
[01:52:32] <Andys^> yes, ZFS ARC has code to prevent reading a big file from filling the cache
[01:52:54] <Andys^> it also lets you add an SSD as a secondary, larger cache
[01:52:57] <Ganymede> Listing/searching through large directory trees on Linux takes long almost every time...it's like it just doesn't remember the disk structure and has to keep going back to the bare metal.
[01:52:59] <Andys^> however, it eats lots of RAM for all this
[01:53:24] <Ganymede> directory structure*
[01:53:40] <Andys^> ZFS directory metadata is compressed on disk so it doesnt have to read as much. and reading a large file wont make it lose directory cache
[01:54:31] <Ganymede> Oh sweet, compressed directory metadata would be a huge plus for me. I have tons of files with pretty much identical metadata and filenames with just sequence numbers incremented. Takes forever for Linux to get a directory listing.
[01:55:25] <Andys^> *nod* :)
[01:55:31] <Andys^> it also keeps two copies of all metadata
[01:56:25] <Andys^> so it has a chance to read the metadata off >1 disk
[01:56:44] <Andys^> overall it should be faster than any linux filesystem for most tasks, as long as you feed it plenty of RAM
[01:58:11] <Ganymede> Also, another question, more about hardware. When motherboards typically have N number of SATA 3.0 Gb/s ports, those ports aren't sharing bandwidth, right? They can all go at 3 Gb/sec?
[01:59:59] <Andys^> in theory, yes
[02:00:07] <Andys^> in practice, all but the latest chipsets top out a bit less than that
[02:00:16] <Andys^> (but its OK because most HDDs dont do anywhere near 3gbps)
[02:01:44] <Ganymede> And when you use those areca RAID controllers with say N number of SFF-8087 ports with "3 Gb/s per port", if you hook them up to 4 drives per port using a multidrop connector, the four hard drives share the 3 Gb/s bandwidth? Or they each have 3 Gb/sec to the RAID controller?
[02:04:29] <Ganymede> The card claims to support 128 SAS/SATA drives...no way it has 3 Gb/s to each of those, right?
[02:05:30] <Andys^> yeah
[02:05:44] <Andys^> SFF-8087 have 4 x 3gbps ports
[02:06:01] <Andys^> and yes - its not 128 x 3 gbps :)
[02:06:19] <Andys^> you can only have one drive per 3gbps port unless you use a SAS backplane with SAS expander chip (as per some Supermicro chassis)
[02:08:59] <Andys^> or you can possibly use a SATA Port Multiplier, but those are not very well supported
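[Editor's note: the bandwidth arithmetic in this exchange can be sanity-checked with a short sketch. The 4 lanes per SFF-8087 connector and 3 Gb/s per lane come from the chat; the even-split, everyone-streaming-at-once model is a worst-case assumption, not how expanders actually schedule traffic.]

```python
# Worst-case bandwidth share per drive behind one SFF-8087 connector.
# Assumptions (from the chat): 4 lanes per connector, 3 Gb/s per lane.
LANES_PER_CONNECTOR = 4
LANE_GBPS = 3.0

def per_drive_gbps(drives_behind_connector: int) -> float:
    """Bandwidth per drive if every drive streams simultaneously."""
    total = LANES_PER_CONNECTOR * LANE_GBPS   # 12 Gb/s aggregate per connector
    return total / drives_behind_connector

print(per_drive_gbps(4))    # one drive per lane: 3.0 Gb/s each
print(per_drive_gbps(16))   # 4 drives per lane via expander: 0.75 Gb/s each
print(per_drive_gbps(128))  # all 128 drives on one connector: ~0.094 Gb/s each
```

Since most 2011-era hard disks sustain well under 1 Gb/s of sequential throughput, moderate oversubscription through a SAS expander is usually harmless, which is the point Andys^ is making.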
[02:09:20] <Ganymede> Andys^, Okay, thanks for all the information.
[02:09:35] <Andys^> the multidrop cable you refer to is a passive cable, it simply splits up the 4 ports in the connector into 4 sata cables
[02:09:39] <Andys^> no prob :)
[02:09:49] <Ganymede> Still in the planning stages for my next file server and all this information definitely helps with the decision.
[02:15:07] <Andys^> ok :)
[02:15:14] <Andys^> how much space do you need and for what use?
[02:18:59] <Ganymede> Probably around 4 TB would be adequate. The use is pretty varied...virtual machines for testing/evaluating OSes is one major thing. "Large datasets" is another (can't really be more specific). Probably also video editing with Premiere over CIFS or mayyybe iSCSI.
[02:19:30] <Ganymede> This particular set-up isn't for production use.
[02:20:31] <Ganymede> It's for miscellaneous things and I was hoping to kill all the birds with one stone.
[02:21:34] <Andys^> if you're going to be doing VM disks, it's pretty random-IO heavy, in that case you could use 4x2TB in RAID10
[02:22:35] <Ganymede> Yes, random small read I/O is a big consideration.
[02:22:50] <Andys^> yep - so i'd avoid RAIDZ
[02:22:52] *** asqui has quit IRC
[02:23:30] <Ganymede> Or...could probably set up multiple zpools. One for bulk storage with RAIDZ and one for random I/O in RAID10.
[02:23:49] <Ganymede> By the way, when you say RAID10, this would be at the ZFS-level, and not at the RAID controller level, right?
[02:24:01] <Ganymede> RAID controller would just show JBOD to the OS?
[02:24:43] <Andys^> yep
[02:24:44] <Andys^> :)
[02:30:14] *** ichii386 has quit IRC
[02:30:35] *** ichii386 has joined ##nexenta
[02:36:59] *** sross has joined ##nexenta
[02:37:09] <sross> good evening
[02:39:25] <sross> Running NexentaStor Community Edition currently, and looking to add more storage. My understanding is that I can just add some new drives (4x2TB for example), create a new zvol raidz1 from those, and then 'add' them
[02:39:39] <sross> However, I currently have a 'mirror group' volume
[02:40:08] <sross> will adding the new raidz1 to my mirror group grow my avail. space as I believe, or is there a catch I'm missing?
[02:40:46] *** Orionau has joined ##nexenta
[02:41:07] <Andys^> if you want to do RAID10 with the new disks then you'll be creating two new mirrored vdevs (not zvols)
[02:41:18] <Andys^> via: zpool add poolname mirror disk1 disk2 mirror disk3 disk4
[02:41:22] <sross> i'm perfectly fine w/ raidz1 across the system
[02:41:30] <sross> (raid-5)
[02:41:55] <Andys^> zpool add poolname raidz disk1 disk2 disk3 disk4
[02:42:03] <Andys^> will create a new 4-disk raidz vdev and add it to the pool
[02:42:05] <sross> but my understanding is that zfs & nexenta wants the zpools to have identical disk sizes, no?
[02:42:20] <sross> ok, it looks like my logic is on the right track then
[02:42:20] <Andys^> the vdevs (NOT ZVOLs) dont have to be the same size
[02:42:27] <Andys^> but the member disks of each vdev do
[02:42:41] <sross> ok, that's where I went wrong w/ terminology
[02:42:44] <sross> thanks!
[02:42:46] <Andys^> no probs
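[Editor's note: the usable space each `zpool add` above contributes can be estimated with a simplified model. This ignores ZFS metadata overhead and raidz allocation padding, so real numbers come out a few percent lower.]

```python
def usable_tb(disks: int, disk_tb: float, parity: int) -> float:
    """Approximate usable capacity of one vdev.

    parity=1..3 models raidz1..raidz3; for a 2-way mirror pass
    disks=2, parity=1 (one disk's worth of redundancy).
    """
    return (disks - parity) * disk_tb

print(usable_tb(4, 2.0, 1))   # 4x2TB raidz1 vdev: ~6 TB usable
print(usable_tb(2, 2.0, 1))   # 2x2TB mirror vdev: ~2 TB usable
print(usable_tb(10, 2.0, 2))  # 10x2TB raidz2 vdev: ~16 TB usable
```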
[02:43:19] <sross> so a zvol is the exposure of the underlying vdev (which can be cifs, nfs, etc.)
[02:44:11] *** Oriona has quit IRC
[02:44:53] <Andys^> zvol is just a virtual device based on a file on top of ZFS
[02:45:53] <sross> if I get 1TB drives like I have now (as the mirror group) can I instead 'transition' my current mirror group to a raidz set?
[02:46:08] <Andys^> no
[02:46:20] <sross> the mirror group is not the base os, that's a different set
[02:46:28] <sross> ok, well that answers that Q
[02:47:07] <sross> ok, that takes care of my home system's Q's...
[02:47:38] <sross> any idea if Nexenta will see a SCSI adapter, and allow me to add disks from a JBOD group that is attached?
[02:48:31] <Andys^> yes
[02:48:52] <sross> sweet, there's my test system at work then!
[02:50:40] <Andys^> sticking with mirroring is kinda cool as it means you can keep adding disks in pairs to expand the pool
[02:50:44] *** Ganymede has quit IRC
[02:50:47] <Andys^> and its also faster at random IO
[02:51:28] <sross> hmm
[02:51:55] <sross> although you'll be adding quite a bit more often spindle-wise to keep capacity expanding, no?
[02:53:10] <Andys^> yeah, but... disks are cheap!
[02:53:16] <sross> in my case, here @ home, I have a dell server, and room for 6 internal disks; I could also hang disks externally via an eSATA port multiplier
[02:53:16] *** myers has joined ##nexenta
[02:53:37] <sross> Andys^: any recommendations on a cheap gig-e switch for home?
[02:53:42] <Andys^> sross: netgear or 3com
[02:53:52] <sross> I'm a ProCurve or Juniper fan @ work, but haven't had that kind of cash @ home
[02:57:02] <Teknix> I use a procurve 1800-24g at home
[02:57:18] <Teknix> as my "core" switch in the basement
[02:57:51] <Teknix> it's the cheapest one that does jumbo packets
[02:58:21] <sross> Teknix: thanks, those aren't too bad price wise
[02:58:23] <Teknix> there is one going for $215 with free shipping on ebay
[02:59:18] <sross> The user guide talks about a perf. penalty when you have too many disks in a raidz-1, any idea what # of disks this is?
[02:59:49] <Teknix> 9 drives is generally the upper limit you'd want to go for raidz 1 or 2
[03:00:20] <Teknix> in a single vdev
[03:03:04] <sross> brb
[03:03:09] <jamesd_> i have a dell 2716 for gigabit core, and 2x cisco 2950's for 100mbit core... and a dlink 655 for wifi, and a 8 port unmanaged gigabit switch for for the home office
[03:07:30] *** master_of_master has quit IRC
[03:09:23] *** master_of_master has joined ##nexenta
[03:09:26] <Andys^> my 3com and netgear 8 port units do jumbo frames (8KB)
[03:09:47] <Andys^> its kinda pointless though, all cheap switches lack low-latency jumbo packet forwarding
[03:11:21] <Teknix> I think it depends on how many hosts you have connected doing heavy traffic. I see a significant speedup with jumbo packets transferring large files between my mac and media server
[03:11:54] <Andys^> i didnt see any performance gains
[03:12:06] <Andys^> got full line rate regardless of frame size, but jumbo seemed to use a bit less CPU time on the client
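[Editor's note: Andys^'s observation matches the arithmetic: on a link already running at line rate, jumbo frames only marginally improve wire efficiency. A sketch, assuming standard overheads (14 B Ethernet header + 4 B FCS, 20 B preamble/inter-frame gap, 20 B IPv4 + 20 B TCP headers, no TCP options):]

```python
def tcp_efficiency(mtu: int) -> float:
    """Fraction of raw line rate available to TCP payload."""
    payload = mtu - 40   # minus IPv4 (20 B) and TCP (20 B) headers
    on_wire = mtu + 38   # plus Ethernet header/FCS, preamble, inter-frame gap
    return payload / on_wire

print(round(tcp_efficiency(1500), 4))  # ~0.9493 -> ~118.7 MB/s on gigabit
print(round(tcp_efficiency(9000), 4))  # ~0.9914 -> ~123.9 MB/s on gigabit
```

A roughly 4% payload gain, which is consistent with jumbo frames mostly saving client CPU (fewer frames, fewer interrupts) rather than raising throughput.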
[03:15:09] <kdavy> Teknix, if it's 9 drives for z1, wouldn't it be logical to have 10 drives for z2? 2^3 + 2 parity
[03:15:30] <kdavy> above that there is little sense, i agree
[03:22:41] <Andys^> and 11 drives for RAIDZ3 ;)
[03:24:25] <kdavy> well, raidz3 is borderline paranoia in my opinion
[03:25:12] <kdavy> at least with enterprise-grade drives
[03:26:40] <kdavy> last system i put in production (this saturday) has two 10-disk raidz2 groups, a hot spare, an SSD for l2arc and two syspool drives. i think that's good enough for a 2U DR site unit
[03:27:21] <Teknix> according to the zfs best practices guide: the recommended number of disks per group is between 3 and 9 (regardless of raidz{1,2,3}).
[03:27:35] *** wantmoore has joined ##nexenta
[03:27:47] <kdavy> Teknix, make me a 3-disk raidz3 will ya?
[03:27:58] *** wantmoore has left ##nexenta
[03:28:05] <Teknix> yes well in that case you'd have to start with 5
[03:28:20] <Teknix> er 8
[03:28:21] <kdavy> but if you start at two more, why not end with 2 more?
[03:28:54] <kdavy> having 2^i data drives is optimal when it comes to lun stripe sizes inside a vdev
[03:29:01] <Teknix> the qualifier was performance
[03:29:14] <Teknix> you can certainly have more per vdev but it will not perform as well
[03:29:55] <kdavy> i can almost guarantee you that a 9-drive raidz3 will perform worse than 11-drive raidz3
[03:31:32] <kdavy> in fact, give me a couple days and i'll prove it on raidz2 (8 vs 10 drives per vdev)
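[Editor's note: the 2^i-data-disks-plus-parity rule kdavy is applying generates exactly the widths quoted in this exchange (9 for raidz1, 10 for raidz2, 11 for raidz3). A small sketch:]

```python
def preferred_widths(parity: int, max_width: int = 12):
    """Vdev widths whose data-disk count is a power of two."""
    widths = []
    i = 0
    while 2 ** i + parity <= max_width:
        widths.append(2 ** i + parity)
        i += 1
    return widths

print(preferred_widths(1))  # raidz1: [2, 3, 5, 9]
print(preferred_widths(2))  # raidz2: [3, 4, 6, 10]
print(preferred_widths(3))  # raidz3: [4, 5, 7, 11]
```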
[03:32:01] <Andys^> i did a test with all numbers of disks between 3 and 9 with raidz1 and 2
[03:32:30] <Andys^> in all cases except for one, performance scaled linearly with number of disks
[03:32:44] <Andys^> there was one pathological case where performance was terrible, i can't remember what it was at the moment
[03:32:59] *** bklang_ has quit IRC
[03:33:08] <Teknix> was this for reads, writes, or both?
[03:33:21] <kdavy> Andys^: was the worst case with 2^i+1 data drives?
[03:34:01] <Andys^> Teknix: both
[03:34:12] <Andys^> kdavy: i'll have to look it up...
[03:34:34] <Andys^> the main problem is, if you don't use 2^i there is wasted space
[03:34:44] <Andys^> because it only splits each data block into 2^i sized subblocks
[03:36:53] <kdavy> exactly. and wasted space means more disk head movements among other things (though i'd have to look up exact drive geometries to be certain, i have no idea how much data fits on a single track these days)
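[Editor's note: Andys^'s point about 2^i-sized subblocks can be illustrated with a deliberately simplified allocation model. This ignores parity sectors and raidz's real padding rules; it only shows why a non-power-of-two data-disk count forces rounding up.]

```python
import math

def stripe_waste(record_kib: int, data_disks: int, sector_kib: int = 4) -> float:
    """Fraction of allocated data sectors wasted when one record is
    split evenly across data_disks (simplified model)."""
    sectors = record_kib // sector_kib           # e.g. 128 KiB / 4 KiB = 32
    per_disk = math.ceil(sectors / data_disks)   # sectors each data disk holds
    allocated = per_disk * data_disks
    return (allocated - sectors) / allocated

print(stripe_waste(128, 4))  # 4 data disks (power of two): 0.0 wasted
print(stripe_waste(128, 8))  # 8 data disks: also 0.0 wasted
print(stripe_waste(128, 5))  # 5 data disks: 32 -> 35 sectors, ~8.6% wasted
```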
[03:36:56] <Andys^> here were the good results:
[03:36:57] <Andys^> What Write Read
[03:36:57] <Andys^> 7 disk RAIDZ2 220 305
[03:36:57] <Andys^> 6 disk RAIDZ2 173 260
[03:36:57] <Andys^> 5 disk RAIDZ2 120 213
[03:37:09] <Andys^> i think the bad one was raidz1, 5 or 7 disk. cant remember sorry
[03:37:48] <kdavy> hm, raidz1-5 is optimal, so by exclusion the bad result would have to be raidz1-7
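[Editor's note: a quick per-data-disk check of the numbers Andys^ pasted (data disks = width − 2 for RAIDZ2) shows throughput per data disk staying in a narrow band, i.e. the roughly linear scaling he describes:]

```python
# (width, write MB/s, read MB/s) as pasted by Andys^ above
results = [(7, 220, 305), (6, 173, 260), (5, 120, 213)]

for width, write, read in results:
    data_disks = width - 2  # RAIDZ2 dedicates two disks' worth to parity
    print(width, round(write / data_disks, 1), round(read / data_disks, 1))
# per-data-disk rates: writes ~40-44 MB/s, reads ~61-71 MB/s across all widths
```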
[03:38:04] <victori> just to report back, since updating to nexentastor 3.1; no kernel panics on the semi-high traffic web server
[03:38:22] <Teknix> victori: that's good news
[03:38:36] <sross> well, @ home I'm not using the 'enterprise' drives so far (RE4, etc.)
[03:38:43] <kdavy> victori: what were the kernel panics from? what was the faulting module?
[03:38:43] <victori> what makes me wonder, how did people not hit this bug though?
[03:38:51] <sross> but @ the same time, I don't have a typical IT budget for storage ;)
[03:39:03] <victori> tcp stack bug from 129-134
[03:39:19] <victori> http://www.nexenta.org/issues/242
[03:39:31] <kdavy> victori: ah, never seen that one...
[03:39:45] <kdavy> are you sure it's not related to your specific hardware?
[03:39:48] <victori> need more traffic ;-)
[03:39:57] <victori> nope, tcp stack related nothing to do with hardware
[03:40:23] <kdavy> hrm. then again i don't use tcp with nexenta, i wouldn't know
[03:40:47] <victori> I am just surprised how long the bug has been there in opensolaris
[03:41:24] <victori> we hit it right away when updating to the opensolaris 2009 release, so switched back to snv98 until recently
[03:41:36] <Teknix> yes, it seems odd that others would not have seen it. At first I thought it was specific to a particular nic driver, but it doesn't appear to be
[03:42:07] <Teknix> although it was reported against opensolaris first, so somebody did see it
[03:42:43] <victori> snv98 has been the "most" stable solaris build, I guess before all the crossbow stuff
[03:42:51] <kdavy> there's gotta be a variable that triggers it... zones? jumbo frames? a combination of zones and the tcp stack?
[03:43:34] <Teknix> for me it seems to be zones and running tomcat inside the zone
[03:43:51] <Teknix> and then restarting tomcat or the apache that fronts it
[03:44:31] <Teknix> Andys^: I'm curious how your write speeds could be increasing, considering the vdev is only as fast as a single device in terms of IOPs
[03:44:54] <Teknix> you would expect the read speeds to increase somewhat
[03:45:23] <victori> riche lowe mentioned they went fast and loose with the stack so it bit them.
[03:45:25] <kdavy> Teknix: no it isnt, ncq can alleviate that effect somewhat
[03:45:35] <Teknix> well that's cheating :)
[03:46:02] <kdavy> Teknix: since when is using enterprise SAS drives cheating?
[03:46:17] <kdavy> or FC drives for that matter
[03:46:35] <kdavy> it's only cheating if you pay less
[03:47:04] <Teknix> well, I got my SAS drives a lot cheaper than most people
[03:47:49] <kdavy> Teknix: cheaper than comparable 7200 rpm sata drives? doubtful
[03:47:59] <Teknix> very close actually
[03:48:04] <kdavy> i got my FC drives a lot cheaper than most people too
[03:49:04] <Teknix> I put out a bid for 126 2TB SAS (6Gbps) and 2TB SATA (6Gbps) and the price difference was within $10 of each other
[03:49:53] <kdavy> 96 146Gb 10k rpm FC spindles (in 6 JBODs), with dual FC ESMs and dual PSUs - total $1350 + tax. and no i did not forget any zeros :)
[03:50:55] <jamesd_> kdavy, did you buy that out of the back of a car? from a guy named bubba
[03:51:15] *** MACscr has left ##nexenta
[03:51:18] <kdavy> jamesd_: no, they came from a computer recycling company
[03:51:22] <kdavy> close enough though
[03:51:59] <kdavy> i know exactly where they came from - an oil company recycled their supercomputing facility - and it's 100% legit
[03:52:26] <Teknix> so you didn't get the five year warranty and the gensui knives, huh?
[03:53:47] <jamesd_> no warranty but i bet he did get the knives.
[03:54:10] <jamesd_> slightly used knives with just a few dark red stains
[03:54:13] <kdavy> no, my warranty is in redundancy, spares and nightly replication to another array (less iops but data won't be lost)
[03:56:03] <kdavy> in this specific business case (static Citrix XenApp servers that change maybe once every two weeks on average), the recovery point objective is of very little concern, and i have my recovery time objective well met and tested regularly
[04:03:18] <sross> so, Andys^ was talking earlier of just using mirrored sets, and any time capacity expansion is needed, just add another mirror set
[04:03:26] <sross> along w/ how that would help random IO
[04:04:15] <sross> but in my case I'm looking mostly for sequential IO (photographer files; RAW, JPEG, etc.)
[04:04:22] <Teknix> that is a good strategy
[04:04:38] <sross> so is there an advantage to going w/ raidz1 sets instead of mirrored sets?
[04:04:55] *** swy has joined ##nexenta
[04:06:57] <sross> right now I have 2x Barracuda ES (750GB), and with my 'proof of concept' working, I'm ready to move onto putting a production setup in place
[04:07:12] <Teknix> sross: more available space would be about it.. the mirrors are more flexible and generally will perform better
[04:07:17] <sross> issue is Wife's photo business @ home eats through storage...
[04:07:37] <sross> in my case I'm looking @ 2 *max* clients @ a time
[04:07:58] <sross> it will be CIFS, NFS, or /maybe/ iSCSI
[04:08:08] <sross> depends on performance/config ease
[04:08:14] <Andys^> sross: i also dont consider RAIDZ1 reliable enough with large consumer hard disks
[04:08:15] <Andys^> :/
[04:08:29] <Teknix> yeah, the resilver time on a large drive leaves you vulnerable
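[Editor's note: Teknix's resilver concern is easy to quantify. Even at a sustained 100 MB/s (an optimistic assumption for a full, fragmented pool), rewriting one 2 TB drive takes hours, and for the whole window a raidz1 has no redundancy left. A rough lower bound:]

```python
def min_resilver_hours(drive_tb: float, mb_per_s: float) -> float:
    """Best-case resilver time: drive capacity / sustained rewrite rate."""
    seconds = (drive_tb * 1e12) / (mb_per_s * 1e6)
    return seconds / 3600

print(round(min_resilver_hours(2.0, 100), 1))  # ~5.6 hours, best case
```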
[04:09:00] <sross> Barracuda ES is 'enterprise'
[04:09:15] <sross> but I can't afford SAS, that's for sure
[04:09:26] <sross> ES/RE series will be hard enough to swallow
[04:09:45] <sross> anything will be better than the stack of external USB 2.0 disks she's using now.....
[04:09:53] <Andys^> its cheaper to just use raidz2, add more disks, and use sub-$100 2tb consumer disks i think
[04:10:01] <Andys^> if you dont worry about uptime too much (for home)
[04:10:21] <sross> well, I also will have a portion of the system replicating using CrashPlan
[04:10:46] <sross> so I will have a backup in place, although only 2 storage tiers (primary, backup/offsite)
[04:11:01] <sross> it's just not reasonable to use traditional methods
[04:11:29] <sross> anyone wanna buy an LTO3 drive ... ;-)
[04:12:42] <sross> Andys^: good reco w/ raidz2 & consumer
[04:13:55] <sross> Andys^: now I just need to find the <$100 2TB drives...
[04:14:20] *** swy has quit IRC
[04:14:22] <Andys^> Seagate LP :)
[04:14:28] <Andys^> sadly they are now all 4kb sectors
[04:14:42] <sross> eww, they are all <7200RPM
[04:15:02] <sross> 4kb impacts....
[04:15:19] <sross> haven't read up on that, so I have no clue what difference it makes
[04:17:29] <sross> Andys^: last I looked, the perf. impact of RAID on those 'LP/Green' drives was horrendous
[04:23:39] <Andys^> Seagate LP are good for sequential though :)
[04:23:46] <Andys^> mine do >100MB/s
[04:25:41] <sross> hmm
[04:26:21] <Teknix> it's probably not so terrible if you can put an ssd in front of them
[04:26:31] <Andys^> they're good for photos
[04:26:59] <sross> Teknix: that's an interesting proposition
[04:27:04] <sross> I haven't read up on that yet
[04:27:23] <Andys^> i have a 6 disk RAID10 of them for backups, and it goes hundreds of MB/s
[04:27:26] <sross> can I add an ssd and have it be the 'quick temp space' w/ Nexentastor Community Edition?
[04:27:38] <sross> *home office budget*
[04:27:39] <sross> http://eshop.macsales.com/item/Hitachi/0S03208/
[04:27:49] <sross> $200 for 7200RPM 3TB
[04:28:29] <Teknix> is that what they're calling zil and slog these days... "quick temp space"?
[04:28:40] <sross> shoot, I dunno
[04:29:05] <sross> if an SSD speeds up the perf. she sees when downloading CF cards, editing in lightroom/photoshop, great
[04:29:36] <sross> I like the fact that Nexenta can do the 'hybrid' stuff, but I don't understand it (haven't read up)
[04:29:46] <kdavy> sross, i'm inclined to say "no", unless you add the ssd to something other than the NCE and then move data from it to the main array nightly via scheduled tasks or something
[04:30:05] <kdavy> cache and temp space are two very different use cases
[04:30:09] <sross> yeah, that's too complicated for this setup
[04:31:02] <Andys^> sross: 4 or more of the seagate LPs can handle >1gbps of sequential write, so i doubt you'll need an SSD
[04:31:11] <sross> ahh, good point
[04:31:20] <sross> only 1 NIC in her Precision workstation
[04:31:46] <sross> and then I can afford to get >=4 disks
[04:31:48] <kdavy> sross: is the nexenta only used for backups, or for nearline storage for photo processing as well?
[04:32:08] <sross> idea is the NCE becomes the main repository
[04:32:19] <sross> probably will end up w/ a NAS style box for offsite data
[04:32:35] <sross> kdavy: 1, maybe 2 simultaneous connections 2 the system
[04:32:54] <sross> and if 2, the perf. hit would be an acceptable compromise
[04:33:28] <sross> I may eventually add other vdev's for home files (dvr storage, etc.), but that comes down the line
[04:33:49] <sross> the box holding the disks is a PE1900 w/ dual quad-core and 4GB ram currently
[04:33:50] <kdavy> sross: still, 2 simultaneous connections may require a bunch more concurrent read threads in a heavy photoshop use scenario for example - making the iops the main bottleneck
[04:33:57] *** POloser has joined ##nexenta
[04:34:58] <sross> photoshop is actually the 2nd most used app, and it's *far* behind lightroom
[04:35:24] <sross> which is constantly drawing and re-drawing photo data, incl. metadata
[04:35:36] <kdavy> sross: well that's not so bad then - lightroom does mostly sequential work as far as i can tell
[04:36:08] <sross> and I may be forced to keep the lightroom database on the internal (non-nexenta) ports
[04:36:24] <sross> adobe Best Practice freaks re: network location of Lightroom DB
[04:36:44] <kdavy> by "heavy photoshop" i mean something like a PSD composed from 50+ 16 megapixel raw image layers - that's a recipe for horrible performance if your storage is slow
[04:36:59] <sross> yeah, she's not in that category
[04:37:09] <sross> not the style of work she does
[04:38:27] <kdavy> speaking of photography... i bartered my wedding photographer a deal where i redesign her storage for a 50% discount on her work :) that went really well
[04:38:34] <sross> 90% of her work is downloading images from CF, and then editing them (lots of re-draws)
[04:38:53] <sross> kdavy: heh, we barter all the time
[04:39:27] <sross> usually the other way around: she barters for nice stuff: Baseball tickets, vacation stuff
[04:39:47] <kdavy> sross: i saved $2k and put in 7 hours of actual work - not bad if you think about the hourly rate
[04:39:48] <sross> I can't imagine bartering for photography work
[04:39:54] <sross> yup
[04:40:35] <sross> when you redesigned it, was it built around the idea of a wedding photographer workflow?
[04:41:25] <kdavy> sross, not only - it was built around the idea of her specific workflow, and the value of her actual data as it decreases with time
[04:41:39] <sross> hmm, tell me more
[04:41:47] <sross> the data growth here is a big part of the issue
[04:41:57] <sross> she likes to give the files away after 2 years I think
[04:42:13] <sross> but until then, I'd like to avoid all the manual labor of burning DVD's, etc.
[04:43:29] <kdavy> in fact what started the whole concept (and her idea of redesigning storage) was the fact that on our first meeting, i asked her to give me all the RAW photos so i could experiment with some HDRI - her camera shoots in 12 bit color vs. 10 bits of my camera, and i'm a photo geek myself
[04:44:20] <kdavy> previously, she kept all RAWs indefinitely on external drives, and that was the major waste of space
[04:44:31] <sross> yeah, that's tough
[04:47:04] <kdavy> so we came up with a data retention scheme that relied on only keeping the good images (ones that made it to the final wedding album), and offering the customer digital copies of all of the photos including bad ones, with the idea that once they're gone from her storage it's the customer responsibility to take care of the images
[04:50:23] <kdavy> and another procedure for reducing RAWs that haven't been touched for a year and/or haven't been deemed important enough into lossless compressed formats
[04:50:59] <kdavy> plus, of course, redesign of actual storage system from external hard drives into a nexentastor ce :)
[04:51:10] <sross> yeah, she does a similar deal, gives images away after 2 yrs
[04:52:43] <sross> how did you workflow the reducing of RAW into lossless compressed?
[04:57:05] <kdavy> sross: she already had a photoshop plugin that could do the work - just needed to have some logic added. don't remember now, but i think it was photoshop automator
[04:58:21] <sross> kdavy: here's an interesting review: http://www.storagereview.com/hitachi_deskstar_7k3000_3tb_review_hds723030ala640
[04:59:59] <kdavy> hm, nice capacity but very poor small read iops, as expected
[05:02:17] <kdavy> if i had to guess based on the i/o graphs, 5 platters vs. 4 platters give a 20% drop in iops, most likely because of larger mass of the drive head assembly and no change in magnetic actuator design
[05:06:52] <sross> Andys^ mentioned he was getting just-fine sequential IO from Barracuda LP, so I'm actually thinking about going that route
[05:07:39] <kdavy> i've been always wondering... how hard (technically) would it be to have independent heads for every platter, and what the i/o benefit would be. in theory, it'd be similar to a per-platter raid5
[05:08:29] <kdavy> or, in desktop drives that don't care about redundancy, raid0 across multiple platters
[05:09:42] *** swy has joined ##nexenta
[05:11:41] <kdavy> this thought was mostly inspired by a Mini Cooper i once drove - with twin hayabusa engines driving the front and rear axles independently, with their own sequential paddle-shift gearboxes, and a common throttle. you could be in 3rd gear at the front for torque, and in 2nd gear in the back for horsepower - independently controlled
[05:11:58] <kdavy> the car sounded like a very pissed off bumble-bee most of the time
[05:14:29] <kdavy> and it definitely represents random i/o just like an 18-wheeler represents sequential i/o - completely different worlds
[05:21:32] <sross> I guess I can expect a theoretical max of 128megabytes/sec per client (editing machine)
[05:21:44] <sross> because I don't have 2 NIC's in her workstation
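[Editor's note: sross's 128 MB/s figure is the usual shorthand; gigabit Ethernet's raw ceiling is 125 MB/s (10^9 bits / 8), and protocol overhead pulls the achievable payload rate a bit lower. The ~95% efficiency factor below is an assumed round figure for TCP over standard 1500-byte frames:]

```python
LINK_GBPS = 1.0

raw_mb_s = LINK_GBPS * 1e9 / 8 / 1e6   # 125.0 MB/s raw line rate
# assume ~95% payload efficiency for TCP over standard-MTU Ethernet
effective = raw_mb_s * 0.95

print(raw_mb_s, round(effective, 2))   # prints 125.0 and 118.75
```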
[05:22:24] <victori> barracuda seagate drive?
[05:22:47] <victori> do *not* get a barracuda seagate drive, they get rocked by vibration
[05:23:11] <victori> the difference between 50-40meg/sec down to 2meg/sec under rackmount conditions
[05:23:22] <victori> https://chris.dod.net/?p=457
[05:23:31] <kdavy> also, do not scream at your JBODs :)
[05:24:06] <victori> the new western digital black drives work very well
[05:24:14] <victori> 2011 models*
[05:24:27] <victori> the older ones also sucked under vibration
[05:24:53] <kdavy> speaking of WD. i'm really disappointed by the 2.5" velociraptors
[05:25:17] <kdavy> so far 3 out of 6 failed in rackmount conditions
[05:25:49] <kdavy> (solid rails, not just servers stacked on top of each other)
[05:26:11] <victori> btw anyone use a ssd disk as a zfs cache device?
[05:26:25] <kdavy> victori: i do
[05:26:40] <victori> kdavy: rackmount on rails works better?
[05:26:53] <victori> our rackmount is thrown ontop since we never got rails for it ;-/
[05:27:35] <kdavy> victori: i have no clue, it's a personal pet peeve that every shelf should have own rails and be accessible independently
[05:27:54] <sross> victori: in my case I'm looking at maybe 6 drives (tower)
[05:27:57] <victori> kdavy: the random IO performance helped with the ssd cache device?
[05:28:22] <victori> looking at improving postgres database performance on our large 10gig instance (large for us)
[05:28:32] <kdavy> victori, random read i/o performance definitely increased. as for writes, i can't tell because all my writes are async
[05:28:44] <victori> battery backed?
[05:28:58] <kdavy> you don't need battery backing for l2arc
[05:29:08] <victori> how is it async then?
[05:29:40] <kdavy> because citrix xenserver is poorly engineered and doesn't make any sync writes :-P
[05:29:50] <victori> ah
[05:30:18] <kdavy> at least not over FC, i havent tried iScsi with it because i assume performance won't get any better
[05:31:53] <kdavy> but for my workload async is fine
[05:39:27] *** swy has quit IRC
[05:41:53] *** p3n__ is now known as p3n
[05:43:40] <sross> victori: my understanding is that the rotational vibration situation is mostly related to large numbers of spindles
[05:44:16] <sross> in my case <=6 spindles shouldn't be an issue is my understanding
[05:47:06] <kdavy> sross: depends on the brand of duct tape you use to tape them together :)
[05:47:29] <kdavy> if you get it at the dollar store, who knows?
[05:49:50] <sross> lol, I won't be using duct tape
[06:17:47] *** HyperJohnGraham has quit IRC
[06:37:09] *** kart_ has joined ##nexenta
[06:39:08] *** myers has quit IRC
[06:41:43] *** myers has joined ##nexenta
[06:50:23] *** kart_ has quit IRC
[06:50:39] *** victori has quit IRC
[07:13:45] *** yalu has quit IRC
[07:15:11] *** yalu has joined ##nexenta
[07:20:05] *** myers has quit IRC
[07:28:50] *** Sergef has quit IRC
[07:32:32] *** kart_ has joined ##nexenta
[08:12:22] *** pavlenko has left ##nexenta
[08:30:34] *** nacx has joined ##nexenta
[08:57:18] *** alhazred has joined ##nexenta
[10:05:15] *** Tweener has joined ##nexenta
[10:14:19] *** Dark_Mobile has joined ##nexenta
[10:15:27] *** andy_js has joined ##nexenta
[10:29:41] *** Dark_Mobile is now known as Darkman_
[10:45:36] *** kart_ has quit IRC
[11:43:21] *** eXeC001er has joined ##nexenta
[11:46:56] *** anatoly_l has quit IRC
[11:53:40] *** victori_ has quit IRC
[11:53:44] *** victori has joined ##nexenta
[12:31:29] *** Darkman_ has quit IRC
[12:41:53] *** Dark_Mobile has joined ##nexenta
[13:34:43] *** anatoly_l has joined ##nexenta
[13:35:29] *** kart_ has joined ##nexenta
[13:43:13] *** Dark_Mobile has quit IRC
[13:50:50] *** Dark_Mobile has joined ##nexenta
[13:56:20] *** myers has joined ##nexenta
[14:04:13] *** Teknix has quit IRC
[14:12:39] *** JagWaugh has joined ##nexenta
[14:13:34] *** tsukasa has joined ##nexenta
[14:29:04] *** myers has quit IRC
[14:34:00] *** Mobile_Dark has joined ##nexenta
[14:36:48] *** Dark_Mobile has quit IRC
[14:46:15] *** POloser has left ##nexenta
[14:46:33] *** anatoly_l has quit IRC
[14:47:01] *** eXeC001er has quit IRC
[14:54:34] *** myers has joined ##nexenta
[14:56:51] *** eXeC001er has joined ##nexenta
[14:58:25] *** hikenboot has joined ##nexenta
[14:59:07] <hikenboot> hello I have a hard drive in my raid 5 that appears to have problem. How do i initiate a recheck of the hard drive for possible recovery
[15:00:02] *** anatoly_l has joined ##nexenta
[15:02:17] *** myers has quit IRC
[15:03:11] <eXeC001er> hikenboot: zpool scrub pool_name
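eXeC001er's suggestion, sketched as a shell session (the pool name `pool_name` is a placeholder; a scrub runs in the background and progress is reported by `zpool status`):

```shell
# Kick off a scrub: ZFS walks every block in the pool, verifies
# checksums, and repairs damaged data from redundancy where it can.
zpool scrub pool_name

# The scrub runs asynchronously; -v lists per-file errors if any are found.
zpool status -v pool_name
```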
[15:08:10] *** nacx has quit IRC
[15:09:18] <hikenboot> thanks
[15:18:57] *** myers has joined ##nexenta
[15:25:09] *** myers has quit IRC
[15:27:26] *** myers has joined ##nexenta
[15:47:43] *** bklang_ has joined ##nexenta
[15:50:49] *** bklang_ has quit IRC
[16:14:53] *** DreamCatcher has joined ##nexenta
[16:23:53] *** jarle has quit IRC
[16:58:01] *** Dark_Mobile has joined ##nexenta
[17:01:31] *** Mobile_Dark has quit IRC
[17:28:05] <sross> well, it's ordered; 5x Seagate Barracuda LP for the NCE PE1900 server
[17:28:33] <Tekni> good luck
[17:33:44] <sross> yeah, at the recommendation of Andys^ I went w/ the LP's
[17:34:00] <sross> it's not a 24/7 system, so I decided against enterprise-class drives
[17:34:46] <sross> it has worked *wonderfully* so far w/ 2x750GB in a mirror, so it's time to ramp up the space and get rid of the stack of external USB drives the wife is using
[17:41:45] *** Sergef has joined ##nexenta
[17:45:24] <hikenboot> Hello! I am getting an I/O error trying to delete files on my NexentaStor. I have tried running scrub this morning and also last night... before running it again this morning I tried to delete the files again that I wanted to delete and it reports I/O error... anyone able to help?
[17:46:21] <sross> hikenboot: well, you won't find much nexenta knowledge here (especially @ that advanced level)
[17:46:39] <sross> hikenboot: have you tried the #nexenta channel or nexenta forums?
[17:46:51] *** myers has quit IRC
[17:47:08] <sross> hikenboot: I do know that scrubs can take a very long time depending on the size of your vdev or zpool sets
[17:47:36] <sross> whoops, thought you dropped into a different channel.
[17:47:41] <sross> ignore my blabbering
[17:48:36] *** asqui has joined ##nexenta
[17:53:38] *** RichiH has joined ##nexenta
[17:53:44] <RichiH> CPU states: 49.9% idle, 0.0% user, 50.1% kernel, 0.0% iowait, 0.0% swap
[17:54:05] <RichiH> it seems nexenta is _not_ happy if you pull a disk from a raidz1 while it's scrubbing it?
[17:54:08] *** DreamCatcher has quit IRC
[17:54:23] <RichiH> the system hung for about a minute the second i ran zpool status pool1
[17:54:46] <RichiH> the command itself is still hanging
[17:55:14] <RichiH> and yes, this is a test system
[17:55:37] <RichiH> still, unless i am doing something wrong, this is a massive problem, imo
[17:55:54] <RichiH> and yes, i did something "stupid" on purpose
[17:56:42] <RichiH> (unless it's rebuilding the pool atm and hangs because of that. but even then, zpool status should return with _something_)
[18:05:28] <RichiH> hmm, after ten minutes, it returns and claims all disks are online
[18:05:36] <RichiH> running it again, i get the faulted state
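RichiH's experiment (pulling a disk from a raidz1 mid-scrub) would typically be followed up roughly as below. This is a sketch only; the pool name `pool1` comes from the log, but the device name `c1t2d0` is an assumption standing in for whatever `zpool status` actually reports:

```shell
# After a disk disappears from a raidz1, the pool should report DEGRADED
# and the missing device should show as FAULTED or UNAVAIL.
zpool status pool1

# Replace the faulted disk with a new one (device name is a placeholder);
# ZFS resilvers the raidz1 data onto the replacement.
zpool replace pool1 c1t2d0

# If the original disk was merely reseated, clearing the error counters
# may be enough to bring it back online.
zpool clear pool1
```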
[18:16:02] *** Tweener has quit IRC
[18:22:26] *** Sergef2 has joined ##nexenta
[18:25:20] *** Sergef has quit IRC
[18:33:37] <hikenboot> sross, I guess that's why one should use small zpools... unfortunately I have just one big one, 2.5 TB; it will take 8 hours to scan
[18:52:29] *** trbs has joined ##nexenta
[18:55:19] *** kdavy has quit IRC
[19:00:33] *** asqui has quit IRC
[19:05:13] *** asqui has joined ##nexenta
[19:18:05] *** myers has joined ##nexenta
[19:23:21] *** swy has joined ##nexenta
[19:46:11] *** ikarius has joined ##nexenta
[19:54:17] *** myers has quit IRC
[19:55:17] *** jarle has joined ##nexenta
[20:05:00] *** anatoly_l has quit IRC
[20:08:27] *** myers has joined ##nexenta
[20:15:09] *** alhazred has quit IRC
[20:29:12] *** eXeC001er has quit IRC
[20:31:14] *** swy has quit IRC
[20:41:02] *** Dark_Mobile has quit IRC
[20:41:03] *** Mobile_Dark has joined ##nexenta
[20:47:07] *** Mobile_Dark has quit IRC
[20:49:56] *** swy has joined ##nexenta
[20:57:39] *** myers has quit IRC
[20:59:53] *** myers has joined ##nexenta
[21:04:25] *** myers has joined ##nexenta
[21:14:24] *** swy has quit IRC
[21:15:23] *** swy has joined ##nexenta
[21:16:37] *** kwazar has joined ##nexenta
[21:22:23] *** kart_ has quit IRC
[21:24:07] *** danzasph1re has joined ##nexenta
[21:25:32] *** danzasph1re has quit IRC
[21:26:05] *** PatSphere has joined ##nexenta
[21:27:16] *** Dark_Mobile has joined ##nexenta
[21:30:53] *** Mobile_Dark has joined ##nexenta
[21:34:31] *** Dark_Mobile has quit IRC
[21:39:40] *** alfism has joined ##nexenta
[21:41:07] *** PatSphere has quit IRC
[21:41:35] *** PatSphere has joined ##nexenta
[21:43:25] *** myers has quit IRC
[21:44:01] *** myers has joined ##nexenta
[21:48:56] *** swy has quit IRC
[21:51:11] *** agagag_ has quit IRC
[21:52:31] *** PatSphere has quit IRC
[21:53:08] *** PatSphere has joined ##nexenta
[21:53:55] *** Dark_Mobile has joined ##nexenta
[21:56:43] *** Mobile_Dark has quit IRC
[21:56:53] *** Mobile_Dark has joined ##nexenta
[21:59:52] *** Dark_Mobile has quit IRC
[22:05:17] *** PatSphere has quit IRC
[22:06:13] *** PatSphere has joined ##nexenta
[22:16:10] *** hikenboot has left ##nexenta
[22:17:21] *** PatSphere has quit IRC
[22:18:38] *** PatSphere has joined ##nexenta
[22:31:34] *** PatSphere has quit IRC
[22:32:41] *** PatSphere has joined ##nexenta
[22:45:31] *** PatSphere has quit IRC
[22:51:45] *** PatSphere has joined ##nexenta
[22:53:35] *** Teknix has joined ##nexenta
[22:56:11] *** PatSphere has quit IRC
[23:09:30] *** Pathin_ has joined ##nexenta
[23:09:39] *** Mobile_Dark has quit IRC
[23:09:57] *** Pathin_ has quit IRC
[23:14:14] *** Pathin has joined ##nexenta
[23:14:48] *** Pathin has quit IRC
[23:15:11] *** Pathin has joined ##nexenta
[23:15:35] *** Pathin has quit IRC
[23:15:56] *** Pathin has joined ##nexenta
[23:18:10] *** Pathin has quit IRC
[23:19:05] *** Pathin has joined ##nexenta
[23:25:02] *** Pathin has quit IRC
[23:28:07] *** Pathin has joined ##nexenta
[23:29:04] *** Pathin has quit IRC
[23:32:24] *** Pathin has joined ##nexenta
[23:32:55] *** Pathin has quit IRC
[23:33:50] *** myers has quit IRC
[23:50:59] *** Pathin has joined ##nexenta
[23:56:41] *** agagag has joined ##nexenta
[23:57:57] *** Pathin has quit IRC
[23:58:48] *** Pathin has joined ##nexenta
[23:59:25] *** Pathin has quit IRC