[00:03:28] *** iio7 <iio7!iio7@gateway/vpn/privateinternetaccess/iio7> has joined #openzfs
[00:05:14] <iio7> Have I understood this correctly? If I have 3 4TB disks and I set those up in a RAIDZ, I can have a single disk failure and still maintain all of my data?
[00:06:11] <DeHackEd> yeah it's like raid-5
[00:06:18] <DeHackEd> sorta
[00:10:32] <iio7> And the maximum space would then be 8TB?
[00:10:41] <monsted> yes
[00:11:14] <monsted> (more like 7 in the real world, because drive makers count 1 TB as 10^12 bytes)
[00:11:47] <iio7> But how does that work if I fill up to (or close to) 8TB of data? How can 8TB of data be "protected" across those 12TB of raw space?
[00:12:06] <monsted> two disks have data, the third has parity
[00:13:17] <monsted> (the simple answer is that data1 XOR data2 == parity and from two of the three drives, the third can always be rebuilt)
[00:14:00] <iio7> monsted, thank you very much for that explanation!
[00:14:51] <monsted> if you lose data2, data1 XOR parity == data2 and you just gotta go through and rebuild that disk
[00:15:59] <monsted> i'm not entirely sure RAIDZ actually uses XOR, but that's the simplest explanation :)
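The XOR parity scheme monsted describes above can be sketched in a few lines. This is a toy illustration only — the disk contents and names are made up, and ZFS's actual RAIDZ layout is variable-width and more involved than this:

```python
# Toy model of single-parity (RAID-5 / RAIDZ1-style) math:
# parity = data1 XOR data2, and any one lost stripe can be
# rebuilt by XORing the two survivors.

def xor_bytes(a, b):
    """Byte-wise XOR of two equal-length buffers."""
    return bytes(x ^ y for x, y in zip(a, b))

data1 = b"\x01\x02\x03\x04"  # made-up contents of disk 1
data2 = b"\xf0\x0f\xaa\x55"  # made-up contents of disk 2

# Writing: the third disk holds the XOR of the data stripes.
parity = xor_bytes(data1, data2)

# Disk 2 dies: rebuild its contents from the survivors.
rebuilt = xor_bytes(data1, parity)
assert rebuilt == data2
```

The same identity works in any direction — losing data1 or the parity disk is equally recoverable, since XOR is its own inverse.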
[00:19:08] *** michaeldexter <michaeldexter!~michaelde@om126204195208.6.openmobile.ne.jp> has quit IRC (Quit: michaeldexter)
[00:20:40] *** andy_js <andy_js!~andy@94.6.62.238> has quit IRC (Quit: andy_js)
[00:20:54] <iio7> I have only experience in running mirrors. If a single drive fails (as you know) I still have access to the data on the working drive. In a RAIDZ setup, do I have to rebuild the failed drive before I can access the data again?
[00:26:07] <DeHackEd> no
[00:27:01] <DeHackEd> CPU usage increases due to rebuilding (though RAID-5 CPU requirements are awfully light compared to, say, RAID-6)
[00:27:29] <DeHackEd> I mean even if you don't have a spare, there's a performance hit. but you probably won't notice it
[00:31:49] <monsted> reconstructing the data is much cheaper than computing the checksums, so the rebuild math would barely even register if you measured it.
[00:32:56] <DeHackEd> if you try to read data from the dead disk, you instead read data from the parity disk and XOR all the disk contents together to come up with the missing data.
[00:33:12] <DeHackEd> that XOR is technically increased CPU work, but for what modern CPUs can do you're probably not going to notice much
[00:33:45] <iio7> Thanks!
[00:34:00] <DeHackEd> RAID-6 math is much harder, but a "low power" Xeon from many years ago can still hit nearly 2 GB/sec scrub of raidz2, albeit with the CPU cores nearly maxed out
[00:36:38] *** michaeldexter <michaeldexter!~michaelde@om126204195208.6.openmobile.ne.jp> has joined #openzfs
[00:38:09] *** michaeldexter <michaeldexter!~michaelde@om126204195208.6.openmobile.ne.jp> has quit IRC (Client Quit)
[01:04:05] *** iio7 <iio7!iio7@gateway/vpn/privateinternetaccess/iio7> has quit IRC (Remote host closed the connection)
[03:19:12] *** divine <divine!~divine@2001:470:8247:1::31> has quit IRC (Quit: leaving)
[03:19:37] *** divine <divine!~divine@2001:470:8247:1::31> has joined #openzfs
[03:34:16] *** lorenzb_ <lorenzb_!~lorenzb@s2.dolansoft.org> has quit IRC (Ping timeout: 246 seconds)
[04:47:59] *** divine <divine!~divine@2001:470:8247:1::31> has quit IRC (Read error: No route to host)
[04:48:10] *** divine <divine!~divine@2001:470:8247:1::31> has joined #openzfs
[04:51:48] *** divine <divine!~divine@2001:470:8247:1::31> has quit IRC (Read error: No route to host)
[04:53:10] *** divine <divine!~divine@2001:470:8247:1::31> has joined #openzfs
[05:02:25] *** divine <divine!~divine@2001:470:8247:1::31> has quit IRC (Read error: Connection reset by peer)
[05:20:08] *** wadeb <wadeb!~wadeb@38.101.104.148> has quit IRC (Ping timeout: 250 seconds)
[05:20:21] <PMT_> monsted: it uses the same kind of math. The computations get more complicated as you add more parity levels.
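PMT_'s point — same kind of math, harder as parity levels grow — can be sketched with a second, RAID-6-style "Q" parity computed in the Galois field GF(2^8). Everything here is illustrative (the polynomial, the generator, the one-byte-per-disk layout); it is not ZFS's actual on-disk format:

```python
# Double parity sketch: P is plain XOR, Q weights each disk by a
# power of a generator g=2 in GF(2^8), so a second lost disk is
# still recoverable. Field constants are a common textbook choice.

GF_POLY = 0x11D  # one common irreducible polynomial for GF(2^8)

def gf_mul(a, b):
    """Multiply two bytes in GF(2^8), reducing modulo GF_POLY."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= GF_POLY
        b >>= 1
    return r

def gf_inv(a):
    """Multiplicative inverse by brute-force search (fine for a demo)."""
    for b in range(1, 256):
        if gf_mul(a, b) == 1:
            return b
    raise ValueError("0 has no inverse")

def parities(disks):
    """P = XOR of all disks; Q = sum of g^i * disk[i] in GF(2^8)."""
    p = q = 0
    coeff = 1
    for d in disks:
        p ^= d
        q ^= gf_mul(coeff, d)
        coeff = gf_mul(coeff, 2)
    return p, q

# One byte per disk, three data disks (values made up).
disks = [0x12, 0xAB, 0x3C]
p, q = parities(disks)

# Lose disk 1 and recover it from Q alone:
# Q = d0 + g*d1 + g^2*d2, so d1 = (Q - d0 - g^2*d2) / g
partial = disks[0] ^ gf_mul(gf_mul(2, 2), disks[2])
recovered = gf_mul(q ^ partial, gf_inv(2))
assert recovered == disks[1]
```

The extra multiplies and inversions are why scrubbing raidz2 costs noticeably more CPU than raidz1, as mentioned earlier in the log.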
[05:23:27] *** wadeb <wadeb!~wadeb@38.101.104.148> has joined #openzfs
[06:29:53] *** donhw <donhw!~quassel@host-184-167-36-51.jcs-wy.client.bresnan.net> has quit IRC (Remote host closed the connection)
[06:31:12] *** donhw <donhw!~quassel@host-184-167-36-51.jcs-wy.client.bresnan.net> has joined #openzfs
[06:35:33] *** sindan <sindan!~admin@125.red-81-42-200.staticip.rima-tde.net> has quit IRC (Remote host closed the connection)
[07:59:27] <Izorkin> How to rollback zfs from 0.8.0-rc3 to 0.7.13 ?
[08:54:13] *** Serus <Serus!~Serus@unaffiliated/serus> has quit IRC (Ping timeout: 245 seconds)
[08:58:17] *** Serus <Serus!~Serus@unaffiliated/serus> has joined #openzfs
[10:06:48] *** lorenzb_ <lorenzb_!~lorenzb@s2.dolansoft.org> has joined #openzfs
[10:07:47] *** andy_js <andy_js!~andy@94.6.62.238> has joined #openzfs
[11:08:31] *** lorenzb_ <lorenzb_!~lorenzb@s2.dolansoft.org> has quit IRC (Read error: Connection reset by peer)
[11:08:42] *** Izorkin <Izorkin!~Izorkin@elven.pw> has quit IRC (Ping timeout: 252 seconds)
[11:08:57] *** lorenzb <lorenzb!~lorenzb@s2.dolansoft.org> has joined #openzfs
[11:21:06] *** ct16k <ct16k!~ryan@78.96.221.131> has quit IRC (Ping timeout: 250 seconds)
[11:33:54] *** ct16k <ct16k!~ryan@78.96.221.131> has joined #openzfs
[11:49:57] *** Izorkin <Izorkin!~Izorkin@elven.pw> has joined #openzfs
[11:53:02] *** Izorkin <Izorkin!~Izorkin@elven.pw> has quit IRC (Remote host closed the connection)
[11:53:15] *** Izorkin <Izorkin!~Izorkin@elven.pw> has joined #openzfs
[12:01:27] *** Izorkin <Izorkin!~Izorkin@elven.pw> has quit IRC (Quit: ZNC 1.7.2 - https://znc.in)
[12:02:03] *** Izorkin <Izorkin!~Izorkin@elven.pw> has joined #openzfs
[12:26:30] *** victori <victori!~victori@cpe-76-174-179-126.socal.res.rr.com> has joined #openzfs
[14:46:40] *** lorenzb <lorenzb!~lorenzb@s2.dolansoft.org> has quit IRC (Ping timeout: 255 seconds)
[14:50:28] *** Kurlon <Kurlon!~Kurlon@98.13.72.207> has quit IRC (Ping timeout: 245 seconds)
[14:51:12] *** eki <eki!~eki@dsl-hkibng41-567327-143.dhcp.inet.fi> has quit IRC (Quit: leaving)
[15:18:54] *** eki <eki!~eki@dsl-hkibng41-567327-143.dhcp.inet.fi> has joined #openzfs
[15:41:59] *** Kurlon <Kurlon!~Kurlon@bidd-pub-03.gwi.net> has joined #openzfs
[15:52:18] *** kkantor <kkantor!~kkantor@c-24-118-59-107.hsd1.mn.comcast.net> has joined #openzfs
[16:05:22] *** Kurlon <Kurlon!~Kurlon@bidd-pub-03.gwi.net> has quit IRC (Ping timeout: 246 seconds)
[16:13:36] *** Kurlon <Kurlon!~Kurlon@bidd-pub-03.gwi.net> has joined #openzfs
[16:17:40] *** waz0wski <waz0wski!~waz0wski@hrothgar.distortion.io> has quit IRC (Remote host closed the connection)
[16:26:23] *** lorenzb <lorenzb!~lorenzb@s2.dolansoft.org> has joined #openzfs
[16:33:53] *** waz0wski <waz0wski!~waz0wski@hrothgar.distortion.io> has joined #openzfs
[16:42:37] <poots> i'll tell you w-hut though
[16:42:58] <poots> my poor Turion microserver gets pummeled to its knees during high read/write traffic across gigabit :|
[16:43:01] <poots> i can't wait to upgrade those boxes
[17:25:45] *** Kurlon <Kurlon!~Kurlon@bidd-pub-03.gwi.net> has quit IRC (Ping timeout: 246 seconds)
[17:48:18] *** Kurlon <Kurlon!~Kurlon@bidd-pub-03.gwi.net> has joined #openzfs
[17:54:10] *** iio7 <iio7!iio7@gateway/vpn/privateinternetaccess/iio7> has joined #openzfs
[17:55:36] <iio7> I have a pool with a single mirror of two disks. I can see how I can add two more disks as another mirror, but I cannot see if it is possible to simply add the two new disks to the existing mirror so one mirror has four disks. Is that possible?
[17:55:58] <DHE> `zpool attach`
[17:56:11] <DHE> zpool attach $POOLNAME $ANY_EXISTING_DISK $NEWDISK1
[17:56:19] <DHE> and again for NEWDISK2
[17:56:49] <DHE> that said, 4 way mirrors is a bit much unless you plan to perform some `zpool split` shenanigans
[17:58:18] <iio7> Thanks!
[18:08:52] <iio7> When a pool has been made by using device names, is it possible to change that to uuid?
[18:09:18] <iio7> Sorry, I mean to change that into ids.
[18:15:35] *** elxa <elxa!~elxa@2a01:5c0:e08f:fa51:a3ab:14f5:e71c:34b2> has joined #openzfs
[18:16:48] <DHE> export and re-import with -d /dev/....
[18:38:04] *** iio7 <iio7!iio7@gateway/vpn/privateinternetaccess/iio7> has quit IRC (Remote host closed the connection)
[18:52:19] <PMT_> Specifically /dev/disk/by-id or similar, to be clear.
[19:10:31] *** Kurlon <Kurlon!~Kurlon@bidd-pub-03.gwi.net> has quit IRC (Ping timeout: 246 seconds)
[19:16:12] *** Kurlon <Kurlon!~Kurlon@bidd-pub-03.gwi.net> has joined #openzfs
[20:12:55] *** f_g <f_g!~f_g@213-47-131-124.cable.dynamic.surfer.at> has quit IRC (Ping timeout: 255 seconds)
[20:26:07] *** f_g <f_g!~f_g@213-47-131-124.cable.dynamic.surfer.at> has joined #openzfs
[21:25:04] *** Essadon <Essadon!~Essadon@81-225-32-185-no249.tbcn.telia.com> has joined #openzfs
[21:39:43] *** divine <divine!~divine@2001:470:8247:1::31> has joined #openzfs
[22:02:43] *** Kurlon_ <Kurlon_!~Kurlon@bidd-pub-04.gwi.net> has joined #openzfs
[22:05:24] *** Kurlon <Kurlon!~Kurlon@bidd-pub-03.gwi.net> has quit IRC (Ping timeout: 246 seconds)
[22:07:18] *** Kurlon_ <Kurlon_!~Kurlon@bidd-pub-04.gwi.net> has quit IRC (Ping timeout: 272 seconds)
[22:13:19] *** michaeldexter <michaeldexter!~michaelde@c-67-170-143-17.hsd1.or.comcast.net> has joined #openzfs
[22:37:09] *** elxa <elxa!~elxa@2a01:5c0:e08f:fa51:a3ab:14f5:e71c:34b2> has quit IRC (Ping timeout: 250 seconds)
[22:44:12] *** Kurlon <Kurlon!~Kurlon@98.13.72.207> has joined #openzfs