   March 26, 2019  

[00:04:29] *** iio7 <iio7!iio7@gateway/vpn/privateinternetaccess/iio7> has joined #openzfs
[00:05:24] <iio7> I have read in several places on the Internet that resilvering a RAID-Z puts more stress on the disks than resilvering a mirror, but I cannot figure out why this should be true. Is this a myth?
[00:24:34] *** andy_js <andy_js!~andy@94.6.62.238> has quit IRC (Quit: andy_js)
[00:24:35] <iio7> Also would a single pool of 2 vdev mirrors, each with two disks, be better than a RAID-Z2 with four disks? As I have understood it, in both cases two disks can fail simultaneously and the data can still be salvaged, but in the RAID-Z2 it can be any two disks, whereas in the pool of mirrors, it cannot be both disks in the same mirror. Have I understood this correctly?
[00:26:31] <sarnold> yes, you've got it; but mirrors may have better performance for the application at hand
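For reference, the two layouts being compared would be created roughly as follows (device names are placeholders, not from the discussion):

```shell
# Striped mirrors: two 2-disk mirror vdevs. Data is lost only if
# both disks of the same mirror fail.
zpool create tank mirror sda sdb mirror sdc sdd

# raidz2: one 4-disk vdev. Survives any two disk failures.
zpool create tank raidz2 sda sdb sdc sdd
```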
[00:28:28] <DeHackEd> 4 disks as two mirrored pairs survive 2 dead disks with 66% probability. raidz2 survives 2 dead disks with 100% probability
[00:28:47] <DeHackEd> but mirrors perform better from a random access standpoint, and probably even in sequential IO to a point
[00:33:05] <PMT_> iio7: striped mirrors often (technically not always) have better performance, but the number of disks required for the same capacity grows faster when you're adding mirror vdevs versus raidz usually, so it's a tradeoff.
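DeHackEd's 66% figure is straightforward combinatorics: of the C(4,2) = 6 ways two of four disks can fail, only the two combinations that take out both halves of one mirror are fatal. A quick sketch (disk labels 0-3 are hypothetical positions, not from the log):

```python
from itertools import combinations

# Pool layout: two mirror vdevs, disks (0, 1) and (2, 3).
mirrors = [(0, 1), (2, 3)]

def pool_survives(failed):
    # The pool dies if any mirror loses both of its disks.
    return all(not (a in failed and b in failed) for a, b in mirrors)

failures = list(combinations(range(4), 2))         # all 2-disk failure combos
survived = sum(pool_survives(set(f)) for f in failures)
print(f"{survived}/{len(failures)} = {survived/len(failures):.1%} survivable")
# 4/6 = 66.7%; a 4-disk raidz2 survives all 6 combinations.
```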
[00:33:40] <monsted> iio7: the difference in stress is definitely a myth. not even a sensible one.
[00:34:29] <PMT_> It could have some basis in fact, but if it does, I haven't seen any research confirming it. (Yes, I've seen the lowering performance demonstrations from ambient vibrations. No, that's not the same as affecting disk lifetime without actual data.)
[00:35:26] <monsted> PMT_: there are some real long shots, but nothing that makes any real sense.
[00:35:48] <DeHackEd> even just thinking about the logistics, vibration is the only thing I can think of. RAID-Z will tend to have all drives seeking together whereas RAID-10 has a bit more freedom in seeking around. enterprise or NAS grade drives are designed to tolerate that sort of thing.
[00:35:49] <monsted> the noise of two drive heads ticking away could impact the third drive!
[00:36:25] <monsted> there's so much cargo cult bullshit in storage, it's not even funny
[00:37:00] <PMT_> DeHackEd: thermal shock is technically not the same as vibration but I could believe that also breaking shit
[00:37:03] <monsted> especially surrounding RAID5.
[00:37:36] <DeHackEd> thermal shock? what are you doing to your disks?
[00:40:25] <PMT_> DeHackEd: not overly much, but my point being that if you, say, had a datacenter where the AC failed, kept running, then the AC came back, the disks might abruptly change temperature relatively quickly
[00:40:34] <PMT_> (substitute your more extreme conditions as desired)
[00:42:00] <DeHackEd> wouldn't say that's a shock. definitely a change though...
[00:42:38] <PMT_> If the DC is idling close to 100 C because your AC stopped working and then you start ducting in 20 C air or cooler, it's a rather sharp change
[00:43:25] <monsted> not really
[00:43:36] <monsted> disks (and servers) have a pretty big thermal mass
[00:43:40] <DeHackEd> well, iirc hard drives are not reliable past 50 degC
[00:43:49] <DeHackEd> and yeah that. not to mention the air churn required.
[00:44:46] <DeHackEd> still, we monitor our datacenters with both standalone temperature probes and built-in hardware sensors all set pretty close to their current values. we catch AC failures fairly quick
[00:45:25] <PMT_> DeHackEd: I think spinning rust has both "don't have me turned on outside this range" and "don't ever put me in these conditions even off range"
[00:45:40] <DeHackEd> operating and non-operating temps, yes
[00:45:49] <iio7> Thanks!
[00:45:50] <PMT_> I'm aware that they have a lot of thermal mass. I've had the fun of recovering from the prior example.
[00:46:11] <sarnold> a nordic friend has some disks in an outdoor shed..
[00:46:11] <DeHackEd> but I mean like if my switch goes up 3 degrees C, notifications will go out. make it 5 and someone gets woken up automatically
[00:46:16] <PMT_> I was just trying to point out a condition which is not stumbling around with liquid nitrogen that could cause a more abrupt temperature shift than common.
[00:49:30] <monsted> i'd be surprised if modern disks had much of a problem with cold. there's no longer grease in the bearings.
[00:51:31] <PMT_> Again, I did not say there was one. I said I would not be surprised if there was one.
[00:52:41] <monsted> this was for sarnold, whose friend runs disks in the cold
[00:54:23] <sarnold> I recall he had some hesitation about *restarting* the array after it had been off for a while, but I can't recall if he had any problems or not
[01:07:11] *** iio7 <iio7!iio7@gateway/vpn/privateinternetaccess/iio7> has quit IRC (Remote host closed the connection)
[01:28:31] *** Essadon <Essadon!~Essadon@81-225-32-185-no249.tbcn.telia.com> has quit IRC (Quit: Qutting)
[04:04:16] *** victori <victori!~victori@cpe-76-174-179-126.socal.res.rr.com> has quit IRC (Ping timeout: 250 seconds)
[04:06:22] *** victori <victori!~victori@cpe-76-174-179-126.socal.res.rr.com> has joined #openzfs
[04:07:40] *** lorenzb <lorenzb!~lorenzb@s2.dolansoft.org> has quit IRC (Ping timeout: 272 seconds)
[05:06:15] *** donhw <donhw!~quassel@host-184-167-36-51.jcs-wy.client.bresnan.net> has quit IRC (Remote host closed the connection)
[05:07:03] *** donhw <donhw!~quassel@host-184-167-36-51.jcs-wy.client.bresnan.net> has joined #openzfs
[06:19:04] <zfs> [openzfs/openzfs] 8727 Native data and metadata encryption for zfs (#489) new commit by Jorgen Lundman <https://github.com/openzfs/openzfs/pull/489/files/195c00c2680a99befd78a00e67a27cb6935b637c..c078cc7479a1f032d05e313a3c7ebc091153751e>
[08:00:55] *** michaeldexter <michaeldexter!~michaelde@c-67-170-143-17.hsd1.or.comcast.net> has quit IRC (Quit: michaeldexter)
[10:10:15] *** andy_js <andy_js!~andy@94.6.62.238> has joined #openzfs
[12:49:34] *** lorenzb <lorenzb!~lorenzb@s2.dolansoft.org> has joined #openzfs
[12:54:45] *** fd0` <fd0`!~fd0@unaffiliated/fd0/x-0826017> has quit IRC (Read error: Connection reset by peer)
[13:51:52] *** lorenzb_ <lorenzb_!~lorenzb@212.51.146.245> has joined #openzfs
[13:52:58] *** lorenzb <lorenzb!~lorenzb@s2.dolansoft.org> has quit IRC (Ping timeout: 245 seconds)
[14:24:22] *** Kurlon <Kurlon!~Kurlon@98.13.72.207> has quit IRC (Ping timeout: 250 seconds)
[15:00:06] *** Kurlon <Kurlon!~Kurlon@bidd-pub-03.gwi.net> has joined #openzfs
[15:50:10] *** tgunr <tgunr!~davec@47.152.2.66> has joined #openzfs
[15:55:47] *** bn_work <bn_work!uid268505@gateway/web/irccloud.com/x-fpkzuswpcdbprxgu> has joined #openzfs
[16:23:59] <zfs> [openzfs/openzfs] 10509 zpool_003_pos can't find core file (#747) comment by Kody A Kantor <https://github.com/openzfs/openzfs/issues/747>
[17:32:32] *** Kurlon <Kurlon!~Kurlon@bidd-pub-03.gwi.net> has quit IRC (Ping timeout: 268 seconds)
[18:02:17] *** Kurlon <Kurlon!~Kurlon@bidd-pub-03.gwi.net> has joined #openzfs
[18:08:01] *** divine <divine!~divine@2001:470:8247:1::31> has quit IRC (Ping timeout: 250 seconds)
[18:09:37] *** Kurlon <Kurlon!~Kurlon@bidd-pub-03.gwi.net> has quit IRC (Ping timeout: 245 seconds)
[18:23:08] *** elxa <elxa!~elxa@2a01:5c0:e094:6e41:e0a6:bbb1:90fb:7d26> has joined #openzfs
[18:28:22] *** Kurlon <Kurlon!~Kurlon@bidd-pub-03.gwi.net> has joined #openzfs
[18:29:48] *** divine <divine!~divine@2001:470:8247:1::31> has joined #openzfs
[18:33:40] *** TheFuzzball <TheFuzzball!~TheFuzzba@81.2.156.49> has joined #openzfs
[18:34:32] *** Essadon <Essadon!~Essadon@81-225-32-185-no249.tbcn.telia.com> has joined #openzfs
[19:55:13] *** andy_js <andy_js!~andy@94.6.62.238> has quit IRC (Read error: Connection reset by peer)
[19:55:33] *** andy_js <andy_js!~andy@94.6.62.238> has joined #openzfs
[20:40:50] *** andy_js <andy_js!~andy@94.6.62.238> has quit IRC (Read error: Connection reset by peer)
[20:43:01] *** andy_js <andy_js!~andy@94.6.62.238> has joined #openzfs
[20:58:30] *** michaeldexter <michaeldexter!~michaelde@c-67-170-143-17.hsd1.or.comcast.net> has joined #openzfs
[20:59:09] <zfs> [openzfs/openzfs] Add a manual for ztest. (#729) comment by Prakash Surya <https://github.com/openzfs/openzfs/issues/729#issuecomment-476823598>
[21:52:46] *** TheFuzzball <TheFuzzball!~TheFuzzba@81.2.156.49> has quit IRC (Quit: My MacBook has gone to sleep. ZZZzzz…)
[22:12:27] *** Kurlon_ <Kurlon_!~Kurlon@bidd-pub-04.gwi.net> has joined #openzfs
[22:15:28] *** Kurlon <Kurlon!~Kurlon@bidd-pub-03.gwi.net> has quit IRC (Ping timeout: 245 seconds)
[22:16:37] *** Kurlon_ <Kurlon_!~Kurlon@bidd-pub-04.gwi.net> has quit IRC (Ping timeout: 246 seconds)
[22:19:12] *** elxa <elxa!~elxa@2a01:5c0:e094:6e41:e0a6:bbb1:90fb:7d26> has quit IRC (Ping timeout: 268 seconds)
[22:49:26] *** TheFuzzball <TheFuzzball!~TheFuzzba@81.2.156.49> has joined #openzfs
[22:53:01] *** Kurlon <Kurlon!~Kurlon@98.13.72.207> has joined #openzfs
[23:39:43] *** TheFuzzball <TheFuzzball!~TheFuzzba@81.2.156.49> has quit IRC (Quit: My MacBook has gone to sleep. ZZZzzz…)
[23:40:26] *** Yada <Yada!~Yada@88.190.10.137> has joined #openzfs
[23:40:36] *** Kurlon <Kurlon!~Kurlon@98.13.72.207> has quit IRC (Quit: Leaving...)
[23:42:01] *** Yada <Yada!~Yada@88.190.10.137> has quit IRC (Client Quit)
[23:44:33] *** Kurlon <Kurlon!~Kurlon@98.13.72.207> has joined #openzfs
[23:49:09] *** mahrens <mahrens!~mahrens@openzfs/founder> has joined #openzfs
[23:49:10] *** ChanServ sets mode: +v mahrens