NOTICE: This channel is no longer actively logged.
[00:03:31] <jamesd_> ipmb, you may have to change the config files buried in /kernel if i remember. google should be able to help.. google "solaris jumbo mtu"
[00:03:49] <ipmb> jamesd_: that'll require a reboot, right?
[00:04:01] <jamesd_> yeap
[00:04:08] *** Teknix has joined ##nexenta
[00:04:36] <jamesd_> it may be possible to remove the nic driver and reimport, but not sure that is enough
[00:04:40] * ipmb trying
[00:17:59] <viridari> ZFS on Mac kinda sucks :(
[00:18:25] <viridari> it sucks enough that I think I need to invest in a decent Firewire card that will work with Nexenta
[00:19:35] <jamesd_> i think solaris has some firewire drivers.. i think it supported the firewire that was in my ultra 20
[00:20:27] <viridari> I tried "zpool add tank cache /dev/disk2s2" and that wasn't even supported... ran "zfs upgrade" to see what version they ported and it was ZFSv2
[00:20:57] <viridari> zfs on an ultra 20... hmmm
[00:21:07] * viridari peers behind him at the powered-down E250
[00:24:54] <jamesd_> viridari, did you run devfsadm -v
[00:25:11] <jamesd_> and then run format and look for drives?
[00:25:50] <viridari> jamesd_: it's a mac, not a solaris box
[00:26:06] <jamesd_> i thought you had already decided to try nexenta
[00:26:59] <viridari> I have run nexenta on single-disk systems in the past. But I bought all of these disks to hook up to my desktop (Mac) using zfs there. A lot of apps complain about having their data on NFS so I tried going local. Grrr. Not going so well.
[00:27:43] <viridari> I mean, it's working... but it's not like anything based on modern OpenSolaris
[00:34:39] <jamesd_> your buddies at apple central killed zfs support
[00:35:15] <viridari> bastards
[00:46:27] <SynQ> doesn't os/x support iSCSI?
[00:52:22] <Triskelios> viridari: you might be interested in http://tenscomplement.com/
[00:59:27] <viridari> Triskelios: I heard of them but all they have right now is vaporware.
[01:01:39] *** ipmb has quit IRC
[01:02:18] *** andy_js has quit IRC
[01:55:16] *** swy has joined ##nexenta
[02:28:37] *** asqui has quit IRC
[02:58:51] *** kwazar has quit IRC
[03:00:24] *** kwazar has joined ##nexenta
[03:02:08] *** victori_ has quit IRC
[03:04:27] *** ry has quit IRC
[03:08:27] *** master_of_master has quit IRC
[03:10:11] *** master_of_master has joined ##nexenta
[03:17:52] *** myers has joined ##nexenta
[03:31:46] *** lamer0 has joined ##nexenta
[04:25:01] *** POloser has joined ##nexenta
[04:42:57] *** jamesd_ has quit IRC
[04:52:51] *** kart_ has joined ##nexenta
[05:10:03] *** ikarius has quit IRC
[05:18:11] *** lamer0 is now known as victori_
[05:20:35] *** andygraybeal has quit IRC
[05:31:40] *** alhazred has joined ##nexenta
[05:38:24] *** alhazred has quit IRC
[05:42:30] *** alhazred has joined ##nexenta
[05:47:52] *** myers has quit IRC
[05:56:54] *** simulacre has joined ##nexenta
[05:57:46] <simulacre> anybody have any experience getting nexenta to send syslog messages to a remote host?
[05:58:25] <simulacre> "*.alert;*.err;*.debug;*.notice;*.info;*.crit @splunk" in /etc/syslog.conf isn't doing anything after a syslog restart
[05:58:41] <simulacre> snoop -d igb0 -V "port 514 and (tcp or udp)"
[05:58:43] <simulacre> shows nothing
[06:04:56] *** p3n has quit IRC
[06:10:01] *** p3n has joined ##nexenta
[06:16:15] *** ikarius has joined ##nexenta
[06:17:20] <simulacre> ahhh it was whitespace issues
[06:20:28] *** ikarius has quit IRC
[06:21:59] *** simulacre has left ##nexenta
[06:35:57] *** jamesd has joined ##nexenta
[06:45:07] *** simulacre has joined ##nexenta
[06:51:54] *** ry has joined ##nexenta
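[Editor's note: simulacre's "whitespace issues" above refers to a classic Solaris syslogd pitfall: the separator between the selector and the action in /etc/syslog.conf must be a literal tab; spaces fail silently. A minimal sketch, assuming the `splunk` hostname and `igb0` interface from the log:]

```shell
# Append a forwarding rule whose selector/action separator is a real tab;
# on Solaris-derived syslogd a space here is silently ignored.
printf '*.err;*.notice;*.info\t@splunk\n' >> /etc/syslog.conf

# Restart syslog via SMF, then watch for outbound traffic on port 514
# (the snoop invocation is the one simulacre used above).
svcadm restart system-log
snoop -d igb0 -V "port 514 and (tcp or udp)"
```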
[06:54:34] *** jarle has quit IRC
[07:03:54] *** simulacre has joined ##nexenta
[07:34:21] *** Torpeo is now known as Torpeo_
[08:11:23] *** Torpeo_ is now known as Torpeo
[08:13:55] *** tsukasa has joined ##nexenta
[08:33:55] *** bauruine has quit IRC
[09:06:57] *** Thrae has quit IRC
[09:16:44] *** Tweener has joined ##nexenta
[09:23:52] *** elov has joined ##nexenta
[09:24:13] *** Thrae has joined ##nexenta
[09:28:34] <elov> Does anyone have a good recommendation on dedupe table sizing with regard to memory and L2ARC SSD cache? Right now the DDT takes over 50% of my unit's memory, and I'm not using a cache device for the zpool. Will an L2ARC cache be used for the DDT, or is it only stored in memory?
[09:29:42] <Andys^> elov: the DDT is just part of the normal ZFS metadata (like directories and files and attributes)
[09:29:46] <Andys^> so it can be cached in L2ARC
[09:31:49] <elov> Today I have 6TB of space in use, giving me a 15GB DDT (64k blocks), but the SATABeast has 64TB of raw space that we could allocate to the nexenta unit. That would build a huge DDT in the end.
[09:32:28] <elov> So i'm thinking of getting the machine approx 64GB of RAM and a 128GB SSD L2ARC drive.
[09:35:15] <POloser> elov, what dedup ration do you have? just interesting
[09:35:24] <POloser> ratio
[09:35:26] <elov> Is it possible to change the volume-check "Enable_ddt_size_check" threshold?
[09:35:40] <elov> POloser: kinda low ones, right now only 1.4
[09:35:58] <elov> But the cost savings from a 40% decrease in utilized storage are good in the end.
[09:36:46] <elov> But we're storing backup data from an Ahsay system on it right now, and that data is already compressed and delta/incremental-forever.
[09:40:47] *** nacx has joined ##nexenta
[09:43:19] *** Dark_Mobile has joined ##nexenta
[09:44:53] *** Dark_Mobile is now known as Darkman_
[10:03:45] *** bauruine has joined ##nexenta
[10:08:45] *** andy_js has joined ##nexenta
[10:20:08] *** IRConan has left ##nexenta
[10:26:12] *** kart_ has quit IRC
[10:40:52] *** McBofh has quit IRC
[10:43:01] *** McBofh has joined ##nexenta
[11:46:02] *** eXeC001er has joined ##nexenta
[12:04:05] *** eXeC001er has quit IRC
[12:12:53] <yalu> well elov, that's the theory, but I find that the machine on which I have DDT enabled reaches over 90% cache hits on metadata from ARC, while cache hits from L2ARC are 10%, or at best 20%, no matter how much it fills up. moreover, the longer the machine stays up, the more Nexenta decreases the ARC size (which includes L2ARC pointers), and so performance drops again.
[12:13:37] <yalu> by the way elov, that 40% storage saving is only a good thing if the memory you use to cache the DDT doesn't cost more than that 40% of your disk.
[12:16:37] *** eXeC001er has joined ##nexenta
[12:18:06] <POloser> dedup reduces the electricity bill too, if you care about it :)
[12:19:22] <yalu> interesting point. it depends on your disks I suppose. and on your memory.
[12:19:50] <yalu> if your computer or server needs to stay up 3 times as long to do the same job, it doesn't :D
[12:31:51] <yalu> bottom line, from my experience the DDT doesn't cache well in L2ARC. also, with 64TB you'll need a huge amount of cache.
[12:52:24] *** swy has quit IRC
[12:56:56] *** McBofh has quit IRC
[12:59:49] <Andys^> yeah
[13:00:01] <Andys^> i can't recommend anyone use dedup in its current form, with hard disks
[13:00:07] <Andys^> works really well with SSDs though
[13:00:43] <Andys^> it might work well for someone with a high dedupe ratio with hard disks
[13:01:50] <Darkman_> i tested dedup with my email store but the savings were next to zero... compression works very well
[13:11:34] *** McBofh has joined ##nexenta
[13:14:54] <yalu> it depends Very Much on your data
[13:16:48] <Darkman_> jep
[13:18:46] <yalu> plus it all should be a little more tunable... I'd like to be able to set the max amount of ARC dedicated to data. currently on a memory-restricted system I have to turn off data caching to try to maximize the space available for the DDT, which also means no data ends up in L2ARC.
[13:20:12] <yalu> or to tune the "normal" metadata to a different size than the space reserved for L2ARC pointers. if you're really short on memory you could use RAM largely for those
[13:21:54] <yalu> surprisingly, ISO files don't dedup well. I tried with all ubuntu editions of a certain release and saved about 1% or so
[13:22:41] <yalu> otoh VM files or backups DO dedup well if you align things like partition tables
[13:24:54] *** kart_ has joined ##nexenta
[13:24:59] <Darkman_> i thought about it for vm files but i think i will just do normal stuff there, as i cannot guarantee that everything will be aligned
[13:25:17] <Darkman_> and saving, let's say, 2-3% is not that much for a few TB
[13:29:24] <yalu> this said, if Nexenta doesn't come up with their new NCP release real soon, I'm reinstalling my stuff to OpenIndiana
[13:31:24] <Darkman_> well, i've seen so many systems and so far, nexenta is next to "just works" - sure, some quirks sometimes, but most of the stuff works
[13:35:18] <yalu> have you ever seen a system that creates files 10 times faster than it deletes them? I have one :-)
[13:36:04] <Darkman_> how did you measure? ;)
[13:36:15] <yalu> using a biased method *G*
[13:36:44] <yalu> well to be honest I think I'm managing to delete files faster, but I still have a lot of leftover backups that keep piling up each day
[13:58:31] <elov> yalu: Looking at the cost of memory, a 40% dedupe gain gives me roughly at least $8000 USD to spend on memory, with these setups with a SATABeast of 42x2TB drives and an 8Gb HBA to an HP DL380 G7 server, over 36 months of investment time.
[14:01:39] <yalu> those drives should be about $500 each I think... so supposing that's a good guess, 40% more storage would cost $8400
[14:03:42] <yalu> not counting enclosure etc
[14:03:50] <elov> add support contracts and so on to that
[14:04:09] <yalu> well, sounds like a reasonable tradeoff in terms of storage
[14:04:21] <yalu> but still better benchmark it first :-)
[14:04:38] <elov> benchmarking the beast is quite good.
[14:05:38] <elov> Wirespeed throughput on the 8Gb HBA and roughly 4000 random IOPS
[14:06:07] <elov> Having 2GB of controller cache in the beast gives a good cache for disk IO peaks
[14:06:17] <yalu> I mean benchmark with zfs, with and without dedup. is a SATABeast an HP product?
[14:06:32] <elov> Nexsan SATABeast
[14:07:05] <elov> Well i haven't seen any difference with the DL380 G7 server (E5620 CPU) using dedupe or not.
[14:08:05] *** andygraybeal_ has joined ##nexenta
[14:08:49] <yalu> it only starts to make a difference once you have enough data to let the memory fill up with ARC
[14:08:51] <elov> Auto-sync replication runs even faster with dedupe; it uses a bit more CPU, but with a 1Gbit ethernet/fiber connection to the secondary site, the zfs send over netcat can saturate the link
[14:08:56] *** andygraybeal_ has quit IRC
[14:09:24] *** andygraybeal has joined ##nexenta
[14:09:26] <elov> We have only equipped the server with 18GB of RAM today, so the DDT is at 50-60% now.
[14:09:44] <elov> with a bit over 6TB of data in the zpool.
[14:10:33] <elov> The ARC cache gets a cache hit rate of 96.63% over 1 month now.
[14:10:45] <elov> Current cache size is 8.25GB
[14:10:47] <yalu> I'm below 90% atm
[14:10:58] <yalu> and it decreases with uptime... believe it or not
[14:11:16] <POloser> reboot :)
[14:11:38] <POloser> before it's too late
[14:12:23] <yalu> I do that every few days.
[14:14:04] <yalu> also there is some kind of performance problem with deleting files by itself. the server deletes (atm) 380 files/sec even if those files' refcount is > 1 - in which case the fs doesn't need to free blocks, only unlink files
[14:14:41] <Darkman_> do you delete locally?
[14:14:51] <yalu> yep
[14:15:01] <Darkman_> that killed my nexenta once
[14:15:16] <yalu> hehe mine too. took hours to bring back online
[14:15:29] <Darkman_> thought it would be faster to do it locally, result was a dead box
[14:15:37] <Darkman_> now i do it remotely, works
[14:16:10] <yalu> it leaves time (rtt) between each read and write
[14:16:39] <elov> okay, with our backup system we don't need to delete that many files. :)
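[Editor's note: elov's "6TB gives a 15GB DDT" figure can be reproduced with back-of-envelope arithmetic. The ~150 bytes per in-core DDT entry below is an assumption chosen to match the numbers quoted above; published rules of thumb range from roughly 150 to 320 bytes per entry, so treat this as a sketch, not a spec:]

```shell
# DDT sizing sketch: unique data / recordsize = entries; entries times an
# assumed per-entry in-core cost = RAM needed to hold the whole DDT in ARC.
data_bytes=$((6 * 1024 * 1024 * 1024 * 1024))  # 6 TiB of unique data
block_bytes=$((64 * 1024))                     # 64k recordsize
entry_bytes=150                                # ASSUMED in-core DDT entry size
entries=$((data_bytes / block_bytes))
ddt_bytes=$((entries * entry_bytes))
echo "$entries entries, ~$((ddt_bytes / 1000000000)) GB of DDT"
```

[The same arithmetic at 64TB gives a DDT on the order of 160GB, which is why elov's 64GB RAM + 128GB L2ARC plan is the right order of magnitude.]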
[14:16:59] <yalu> I use rotating rsync (simple and it works) to back up Linux systems
[14:17:40] <yalu> so every day I delete a directory, the oldest snapshot
[14:17:54] <elov> Ah, that can make a lot of files :)
[14:18:02] <elov> You should use UFS :9
[14:18:07] <yalu> lol
[14:18:58] <yalu> I thought a self-repairing file system with deduplication support would be a nice backup FS. and it is, although there is room for improvement
[14:33:42] *** POloser has left ##nexenta
[14:35:58] <Andys^> to use dedupe you really need to use a much smaller recordsize
[14:36:44] <Darkman_> which hurts performance, too
[14:37:08] <Andys^> not necessarily
[14:37:16] <Andys^> helps for small-block random IO on existing files
[14:37:59] <Darkman_> but you have to request more blocks to read a file...
[14:38:48] <yalu> if you need to read a small part of a file, chances are it was WRITTEN in small parts, too
[14:38:57] <yalu> except maybe things like shared libraries
[14:39:21] <yalu> ... but don't get me started on mercurial repositories
[14:39:48] <yalu> our testing server has nearly 1000000 files on it, backed up every day
[14:41:41] <Andys^> Darkman: if you have a database, it's generally updated in 4kb or 8kb sizes. with the ZFS default recordsize, it has to write a complete 128kb every time you only wanted to write 8kb
[14:42:02] <yalu> so why use a smaller recordsize - you save a bit on storage but your metadata grows, and with that, your memory requirements
[14:42:37] <yalu> you could always use a zvol and UFS :-)
[14:42:53] <Andys^> the flip side is you need less memory to cache
[14:43:03] <Andys^> because you only need 8kb to cache an 8kb record instead of 128kb
[14:43:19] <yalu> hmmm do you cache per record, or per block?
[14:43:19] <Andys^> also, if you modify 1 byte and take a snapshot, it saves a whole 128kb block
[14:43:27] <Andys^> ZFS ARC caches whole blocks
[14:43:58] <Andys^> so for VM images, i use 8kb recordsize, preferably with the stored OS disk image using 8kb block size too
[14:44:20] <Andys^> in this way, if you enable dedupe, it's more likely to get lots of hits because it can match up 8kb blocks of identical files inside the VM image
[14:44:30] <yalu> and your guest OSes? 4K?
[14:44:33] <Andys^> but you also need a lot more overhead for dedupe because you have so many more blocks than before
[14:44:42] <Andys^> 8kb in the guest OS was what i meant
[14:45:52] <Andys^> but even if you leave the guest OS at default, the small block size helps to keep snapshots nice and small
[14:51:07] <yalu> to get back to deleting: I see an 8-year-old Dell server running Debian GNU/Linux with a simple off-the-shelf IDE disk delete files about 20 times faster than a half-year-old HP DL180 with 4 SATA disks in raidz and dedup
[15:33:39] *** myers has joined ##nexenta
[16:15:49] *** jarle has joined ##nexenta
[16:20:47] *** swy has joined ##nexenta
[16:25:45] *** ibarrera has joined ##nexenta
[16:31:52] *** jarle has quit IRC
[16:33:48] *** jarle has joined ##nexenta
[16:58:02] *** jarle has quit IRC
[17:02:05] *** myers has quit IRC
[17:11:45] *** jarle has joined ##nexenta
[17:19:09] *** laserbled has joined ##nexenta
[17:21:36] *** swy has quit IRC
[17:21:46] *** swy has joined ##nexenta
[17:23:26] *** bobrog has quit IRC
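[Editor's note: Andys^'s recordsize advice can be sketched with standard zfs/zdb commands. The pool and dataset names are hypothetical, and recordsize only affects blocks written after it is set, so set it before copying data in:]

```shell
# Create a dataset for VM images with a small recordsize (names are
# hypothetical); recordsize applies only to newly written blocks.
zfs create -o recordsize=8k tank/vm

# Enable dedup on that dataset alone rather than pool-wide.
zfs set dedup=on tank/vm

# Before committing to dedup, simulate it on existing data: zdb -S walks
# the pool and prints a block histogram plus an estimated dedup ratio
# without rewriting anything.
zdb -S tank
```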
[17:41:18] *** laserbled_ has joined ##nexenta
[17:42:15] *** kart_ has quit IRC
[17:42:31] *** laserbled has quit IRC
[17:43:13] *** kart_ has joined ##nexenta
[17:44:01] *** asqui has joined ##nexenta
[17:44:13] *** laserbled_ is now known as laserbled
[17:44:16] *** Darkman_ has quit IRC
[17:44:50] *** bauruine has quit IRC
[18:11:28] *** laserbled has quit IRC
[18:11:33] *** Markmw has joined ##nexenta
[18:11:41] *** kart_ has quit IRC
[18:12:27] *** kart_ has joined ##nexenta
[18:16:36] *** laserbled has joined ##nexenta
[18:19:56] *** Tweener has quit IRC
[18:20:00] *** shadey_ has joined ##nexenta
[18:22:38] *** swy_ has joined ##nexenta
[18:22:38] *** swy has quit IRC
[18:22:39] *** swy_ is now known as swy
[18:26:06] *** eXeC001er has quit IRC
[18:27:22] *** kart_ has quit IRC
[18:28:21] *** kart_ has joined ##nexenta
[19:09:45] *** bobrog has joined ##nexenta
[19:19:24] *** trbs2 has joined ##nexenta
[19:23:05] *** swy has quit IRC
[19:23:24] *** swy has joined ##nexenta
[19:28:30] *** myers has joined ##nexenta
[19:32:45] *** myers has quit IRC
[19:33:40] *** myers has joined ##nexenta
[19:41:30] *** bauruine has joined ##nexenta
[19:45:48] *** entropic has joined ##nexenta
[19:46:12] *** Chris64 has joined ##nexenta
[19:46:51] *** nacx has quit IRC
[19:49:34] <viridari> can someone in front of a current nexenta box please tell me what the output of "zpool upgrade" returns?
[19:50:24] <Chris64> zpool upgrade upgrades the zpool version?
[19:50:35] <viridari> Chris64: no, with no arguments it just reports the current version of the zpool
[19:50:36] <Triskelios> assume you mean upgrade -v; 3.1 supports pool version 28
[19:50:39] <Chris64> ah ok
[19:51:25] <viridari> Triskelios: I guess that is a developer snapshot?
[19:51:34] <viridari> Triskelios: the web site only has 3.0.1
[19:51:47] <Triskelios> 3.0.5 is current
[19:52:06] <viridari> Triskelios: so the web site is behind? or am I missing something?
[19:52:13] <Triskelios> yeah, it has version 28 backported as well
[19:52:19] <Triskelios> viridari: where does it list 3.0.1?
[19:52:27] <viridari> http://www.nexenta.org/ latest news
[19:53:29] <Triskelios> oh, that's NCP. I think its repo is synced to 3.0.5 as well
[19:54:45] <viridari> I'm in a shop with some Sun StorEdge, a lot of FreeBSD, talking to my boss about Nexenta... I'll have some more hardware to set up a demo for him later in the week, but for now I'm just collecting info
[20:01:24] <Triskelios> NexentaStor is the actual storage product, nexenta.com / nexentastor.org
[20:02:24] <Chris64> do you know if it's problematic to mix disk brands in one raidz2?
[20:04:12] <viridari> well, Nexenta doesn't really specify either if you think about it ;)
[20:04:34] *** bauruine has quit IRC
[20:10:15] *** kart_ has quit IRC
[20:17:09] *** wonslung has quit IRC
[20:18:03] *** JagWaugh has joined ##nexenta
[20:39:35] <nahamu> Triskelios: is there a way to ID which kernel version one has?
[20:40:41] <nahamu> dpkg-reported version of sunwckr?
[20:41:12] <Triskelios> yeah, dpkg -s sunwcakr or sunwckr
[20:45:16] <nahamu> so I have one that I know runs NCP 3.0.1 and it reports sunwckr 5.11.134-12a
[20:45:55] <nahamu> I ran an apt-clone upgrade on an NCP 3.0.1 system a few days ago and it reports sunwckr 5.11.134-30-5-2
[20:46:19] <nahamu> So it sounds like the NCP repo did indeed get the 3.0.5 updates
[20:47:24] <nahamu> viridari: both my NCP machines (the one running 3.0.1, and the one that got the NS 3.0.5 kernel upgrade) report zpool version 26
[20:49:46] * nahamu wonders if there should be a news item on the nexenta.org site letting people know about the updated kernel
[20:57:52] *** Chris64_ has joined ##nexenta
[20:59:44] *** Chris64 has quit IRC
[21:00:36] *** alhazred has quit IRC
[21:07:07] *** Chris64 has joined ##nexenta
[21:07:41] *** Torpeo is now known as Torpeo_
[21:08:24] *** Chris64_ has quit IRC
[21:09:36] *** Chris64_ has joined ##nexenta
[21:11:25] *** Chris64 has quit IRC
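[Editor's note: the version checks nahamu and Triskelios describe can be run verbatim on an NCP box; a short recap of the commands mentioned above:]

```shell
# With no arguments, zpool upgrade reports the version of your pools;
# -v lists every pool version the running software supports.
zpool upgrade
zpool upgrade -v

# Kernel build of the installed NCP system, via the core OS packages.
dpkg -s sunwckr | grep -i '^version'
```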
[21:19:56] *** Chris64 has joined ##nexenta
[21:21:25] *** Chris64_ has quit IRC
[21:27:56] *** Torpeo_ is now known as Torpeo
[21:29:09] *** swy has quit IRC
[21:29:24] *** swy has joined ##nexenta
[21:38:41] *** Chris64 has quit IRC
[21:45:30] *** wonslung has joined ##nexenta
[21:57:16] *** wonslung has quit IRC
[22:01:50] *** myers has quit IRC
[22:02:27] *** myers has joined ##nexenta
[22:06:03] *** laserbled has quit IRC
[22:07:06] *** JagWaugh has quit IRC
[22:26:44] *** Markmw has quit IRC
[22:30:04] *** swy has quit IRC
[22:30:18] *** swy has joined ##nexenta
[22:45:25] *** myers has quit IRC
[22:56:25] <entropic> has anyone had an issue where Nexenta NFS/CIFS shares write faster than they read?
[23:00:06] <entropic> from a linux box to an NFS share, I can write zeroes at 60-80MB/s, but reading it back (to /dev/null) is 5-10MB/s
[23:00:49] <entropic> from a windows box to a CIFS share, I can write some of our home dir data at ~45MB/s, but reading it back tops out below 20MB/s
[23:01:16] <entropic> it's strange...
[23:07:19] <swy> I'm a nexenta noob, but that's really odd.
[23:08:11] <entropic> swy: we're glad we're not the only ones who feel that way
[23:08:13] <entropic> :)
[23:09:20] <swy> are you testing with data > cache?
[23:09:27] <swy> (cache size)
[23:10:41] <entropic> swy: not really. My zeroes file is only 500MB, and my home dir is ~5G. Our nexenta node has 24G of RAM but no L2ARC device
[23:12:50] <swy> your data being skewed b/c you're measuring speed to cache and not to the spindles is my only guess. like I said... not at all an expert on this
[23:15:09] <entropic> so you're thinking that I'm picking up some kind of write caching benefit, eh
[23:15:39] <swy> that was my idea.
[23:16:02] <swy> would nexenta use the RAM for that?
[23:16:12] *** jon_____ has joined ##nexenta
[23:17:07] <entropic> I thought that the RAM was for read cache (ARC)
[23:17:32] <swy> beyond my knowledge. :)
[23:22:58] *** tsukasa has quit IRC
[23:24:18] <viridari> nahamu: thank you for checking that. Very useful.
[23:33:51] *** jon_____ has quit IRC
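[Editor's note on swy's caching point: ZFS acknowledges asynchronous writes from RAM and flushes them to disk in transaction groups, and it serves recently written data straight from ARC, so a 500MB test against a 24GB-RAM box measures memory speed on the write side yet can miss the cache on a later cold read. A hedged benchmark sketch; the path and size are placeholders, and for a true spindle-speed number the file must exceed ARC, roughly 2x installed RAM:]

```shell
# Hypothetical write-then-read benchmark. COUNT_MB must exceed ARC
# (roughly installed RAM) or both numbers reflect cache, not disk.
TARGET=${TARGET:-/tmp/bench.dat}
COUNT_MB=${COUNT_MB:-64}   # demo size only; use > 2x RAM in a real test

dd if=/dev/zero of="$TARGET" bs=1M count="$COUNT_MB" 2>/dev/null
sync                                  # flush buffered writes to disk
dd if="$TARGET" of=/dev/null bs=1M 2>/dev/null

wc -c < "$TARGET"                     # confirm the size actually written
```

[Drop the `2>/dev/null` redirects to see dd's own transfer summary; Solaris dd prints no throughput figure, so wrap each dd in `ptime` or `time` there.]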