   March 9, 2019  

[00:25:53] *** gigetoo <gigetoo!~gigetoo@c83-250-38-168.bredband.comhem.se> has quit IRC (Ping timeout: 245 seconds)
[00:29:04] *** gigetoo <gigetoo!~gigetoo@c83-250-38-168.bredband.comhem.se> has joined #openzfs
[00:51:06] *** andy_js <andy_js!~andy@94.6.62.238> has quit IRC (Quit: andy_js)
[01:32:45] *** tgunr <tgunr!~davec@47.152.8.89> has joined #openzfs
[02:40:07] *** Essadon <Essadon!~Essadon@81-225-32-185-no249.tbcn.telia.com> has quit IRC (Quit: Qutting)
[03:39:36] *** jubal <jubal!~jubal@226-5-237-24.gci.net> has quit IRC (Quit: jubal)
[03:40:36] *** jubal <jubal!~jubal@226-5-237-24.gci.net> has joined #openzfs
[04:30:15] *** elxa <elxa!~elxa@2a01:5c0:e095:a4b1:a304:42c7:4bb6:5bf4> has quit IRC (Ping timeout: 258 seconds)
[04:42:56] *** elxa <elxa!~elxa@2a01:5c0:e099:4bb1:f798:c7b2:e325:be3b> has joined #openzfs
[04:53:15] *** elxa <elxa!~elxa@2a01:5c0:e099:4bb1:f798:c7b2:e325:be3b> has quit IRC (Ping timeout: 258 seconds)
[05:09:56] *** donhw <donhw!~quassel@host-184-167-36-51.jcs-wy.client.bresnan.net> has quit IRC (Quit: No Ping reply in 180 seconds.)
[05:11:25] *** donhw <donhw!~quassel@host-184-167-36-51.jcs-wy.client.bresnan.net> has joined #openzfs
[05:31:42] *** luke-jr <luke-jr!~luke-jr@unaffiliated/luke-jr> has joined #openzfs
[05:32:12] <luke-jr> What's the best way to migrate to a new hard drive from a failing one? Make a new ZFS and copy data? Raw block-level copy? Something else native to ZFS?
[05:37:31] <rlaager> luke-jr: Are you looking to just replace the old drive with the new drive, nothing else? What is your pool topology (single disk, mirrors, raidz*)?
[05:38:30] <luke-jr> right, just a single disk
[05:38:41] <luke-jr> SSD went bad, so need to replace it with a new one and send the old one back
[05:40:34] *** noresult <noresult!~noresult@unaffiliated/noresult> has joined #openzfs
[05:50:57] <luke-jr> actually, this is my boot partition, so it would probably be very nice if ZFS could move the data natively - maybe even I can remove the old drive without a reboot? :D
[06:08:54] <PMT> luke-jr: Moving the data is easy. It just might or might not boot depending on the platform because how it does boot depends on platform.
[06:09:43] <PMT> If you wanted to turn the single drive into a mirror and then remove the old drive once it copied the data, zpool attach [pool] [originaldisk] [newdisk]; [wait for it to finish resilvering]; zpool detach [pool] [originaldisk]
[06:09:56] <PMT> But like I said, depending on platform/how your boot process works, that might not boot.
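The attach/resilver/detach sequence PMT describes above can be sketched as follows. This is an illustration, not a command from the log: `rpool` and the `/dev/disk/by-id/...` paths are placeholders for the actual pool name and devices.

```shell
# Turn the single-disk vdev into a mirror by attaching the new disk
# alongside the original one (placeholder pool/device names).
zpool attach rpool /dev/disk/by-id/old-ssd /dev/disk/by-id/new-ssd

# Watch the resilver; wait until the "scan:" line reports it has finished.
zpool status rpool

# Once resilvering is complete, drop the original disk out of the mirror,
# leaving the pool on the new disk alone.
zpool detach rpool /dev/disk/by-id/old-ssd
```

As the rest of the log shows, this only works when the new disk is at least as large as the old one, and booting from the result still depends on the platform's boot process.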
[06:31:41] *** kkantor <kkantor!~kkantor@c-24-118-59-107.hsd1.mn.comcast.net> has quit IRC (Ping timeout: 255 seconds)
[07:06:45] <luke-jr> PMT: Currently, Petitboot is simply loading the kernel+initramfs from an ext4 partition, so that's easy enough to copy
[07:08:06] <luke-jr> I guess the complex step will be to remember how to do the encryption
[07:17:05] <PMT> zpool history is a useful command, sometimes.
[07:18:32] <luke-jr> well, my encryption is via dm-crypt
[07:24:35] <luke-jr> cannot attach /dev/mapper/hd2019c2-d to /dev/mapper/luks-f544b765-393f-4b8b-a8ba-b3b130d42836: device is too small
[07:24:37] <luke-jr> :/
[07:24:51] <luke-jr> apparently Samsung SSDs are slightly smaller than Micron SSDs
[07:25:11] <luke-jr> but the drive is nowhere near full either
[07:30:02] *** Izorkin <Izorkin!~Izorkin@elven.pw> has quit IRC (Ping timeout: 245 seconds)
[07:30:48] *** Izorkin <Izorkin!~Izorkin@elven.pw> has joined #openzfs
[07:31:42] <PMT> luke-jr: At least for ZFS, one of the disks being smaller than the other means "nope"
[07:31:50] <PMT> Rather, trying to attach a smaller disk.
[07:32:21] <PMT> So you could do zpool create on the new drive and the zfs send with various flags to suit your desires
[07:32:26] <PMT> and then, even
[07:36:56] <luke-jr> can I just add the new one and remove the old one? https://web.archive.org/web/20160516035532/http://blog.delphix.com/alex/2015/01/15/openzfs-device-removal/
[07:37:47] <PMT> luke-jr: device removal is A) only in some very new versions, B) specifically intended for the case where you accidentally zpool add a drive and don't want to have to destroy your pool. Using it to migrate 100% of the data from one disk to another is approximately the most pathological usage I can think of.
[07:38:55] <PMT> You could do it. But it's (IMO) a really bad idea, and you will likely regret it.
[07:39:59] <luke-jr> how is it worse than attach/detach?
[07:40:57] <PMT> luke-jr: attach just mirrors the data from drive1 to drive2. device removal copies the data from drive1 to drive2 but also has to keep a map of where all the data on drive1 is now stored on drive2, and you need to keep that map in memory or you will be sad.
[07:41:25] <PMT> Note that you need said map after it's done the removal, so it's not like it's temporary until the removal is done.
[07:43:07] <luke-jr> that sounds ugly
[07:43:58] <PMT> It turns out that for a lot of the things ZFS does, one of the constraints required is that the address where things are stored on disk doesn't change for the life of the thing. And device removal requires violating that.
[07:45:04] <PMT> You can technically do it, if you run ZoL git master. I would suggest not doing it, as it's significantly more complexity permanently without saving significant amounts of effort.
[07:46:13] <luke-jr> is there any way to just zfs send everything automatically, and switch the rootfs to the new filesystem when done? :x
[07:46:57] <luke-jr> or should I be preparing to reboot?
[07:47:53] <PMT> luke-jr: not switching the underlying filesystem out, no. You could swap the drive out if you had a drive >= the existing drive size, but you'd still have to manually recreate all the non-ZFS complexity on it (the /boot partition, the dm-crypt you're doing underneath ZFS...)
[07:49:44] <luke-jr> any way to create my new zpool with the same name as the current one?
[07:50:56] <PMT> You can't have two pools with the same name imported at the same time, no. You can rename them at import time, and it'll stick (or not if you specify a temporary flag).
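The import-time rename PMT mentions can be sketched like this; `newpool` and `rpool` are placeholder names, and the old pool must be exported (or physically absent) first, since two imported pools cannot share a name.

```shell
# Export the pool on the new drive so it can be re-imported under a new name.
zpool export newpool

# Importing under a different name makes that name stick across future imports.
zpool import newpool rpool

# With -t the new name is temporary and reverts at the next import:
# zpool import -t newpool rpool
```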
[07:54:55] <luke-jr> can I change the "sticky" name so the boot stuff just renames it for me?
[07:56:07] <PMT> There probably exists a place you could stick such a command. ZFS does not have an "automatically rename at next import", AFAIK.
[07:57:08] <PMT> TBH I would suggest just using zfs send and spending some amount of time hand-holding the boot process through this once or buy an SSD that isn't smaller than your current one.
[08:03:59] <luke-jr> sadly, nobody advertises the exact-byte volume of drives
[08:04:16] <luke-jr> any way to automatically do the zfs send of everything at least?
[08:06:10] <PMT> luke-jr: zfs send is for the snapshot of the filesystem as of a certain point. Unless you're willing to remount read-only at some point, there can always be a delta. This often doesn't matter much in practice, esp. if you have, say, /etc and / on separate filesystems, so one of them is high-churn and the other isn't
[08:06:25] <PMT> luke-jr: also, vendors do in fact advertise the exact byte capacity of their drives.
[08:12:29] <PMT> For example, https://business.toshiba-memory.com/content/dam/toshiba-ss/asia-pacific/docs/product/storage/product-manual/cSSD-XG5-Product-Manual.pdf page 4
[08:14:09] <luke-jr> so if I shove all my snapshots into my backup system and am okay forgetting about those.. I should just use rsync for the rest?
[08:14:54] <PMT> I mean, rsync has the same problem. If it's still in use, rsync doesn't magically prevent anything from changing between rsync runs.
[08:18:12] <luke-jr> sure, I'm just trying to figure out the best way around this
[08:18:28] <luke-jr> rsync has the disadvantage of decompressing and recompressing everything
[08:19:13] <PMT> So does zfs send|recv unless you use the -c flag.
[08:19:42] <PMT> (I don't have a reason you shouldn't use it, just the observation that by default it has that caveat too.)
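The `-c` flag PMT refers to sends blocks in their on-disk compressed form, avoiding the decompress/recompress round trip that rsync would incur. A minimal sketch, with placeholder dataset and snapshot names:

```shell
# Take a snapshot to send (placeholder names).
zfs snapshot rpool/root@migrate

# -c streams compressed blocks as-is instead of decompressing them
# on the sending side and recompressing on the receiving side.
zfs send -c rpool/root@migrate | zfs recv newpool/root
```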
[08:20:21] <luke-jr> will zfs send work okay on a non-snapshot?
[08:23:03] <PMT> No? If the dataset isn't actively in use it can implicitly create a temporary snapshot and send that, but if it's not in use you could just explicitly take a snapshot and send it anyway, and the implicit snapshot isn't something you can resume the send from if it's interrupted or use for incremental send.
[08:24:48] <luke-jr> but if I send a snapshot, the receiving end will only get a snapshot, right? not something I can actually use directly
[08:27:26] <PMT> No.
[08:28:22] <luke-jr> ?
[08:32:00] <PMT> Receiving a snapshot creates a dataset which contains that snapshot (or all those snapshots up to that point, if you use -R), but the dataset is read/write. So you could use it. If you needed to still receive an incremental send from another dataset on top of what's there, you'd need to use zfs rollback (or pass -F to recv to tell it to rollback) to the last snapshot they shared.
[08:32:53] <PMT> (You could also use a complicated interaction of zfs clone to keep both copies but the semantics of that are weird and I don't feel like explaining it)
[08:34:17] <luke-jr> hm, so I could just rename the most recent snapshot to the bare volume name?
[08:35:00] <PMT> ...no
[08:35:46] <PMT> If you did zfs send poolA/foo@snap1 | zfs recv poolB/foo, you would then have a dataset, poolB/foo, which also has one snapshot, poolB/foo@snap1
[08:36:57] <luke-jr> so poolB/foo would have the current data magically?
[08:36:59] <luke-jr> or would it be empty?
[08:38:15] <PMT> poolB/foo would be a read/write filesystem (or zvol) that had the data that poolA/foo@snap1 has. If you did, say, -R poolA/foo@snap3 | zfs recv poolB/foo, it'd have poolB/foo@{snap1,snap2,snap3} and the contents of poolB/foo would be a read-write copy of the contents as of snap3
[08:41:18] <PMT> If you wanted to do, say, send -I poolA/foo@snap3 poolA/foo@snap9 | zfs recv poolB/foo; it'd prompt you to rollback poolB/foo if there have been changes to it since snap3, then once the recv is done it'd have the contents as of, and all the snapshots up to, snap9, assuming it already had snap3.
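PMT's full-then-incremental pattern, using the placeholder names from the discussion (`poolA`, `poolB`, `snap1`..`snap9`), can be sketched as:

```shell
# Full send: creates poolB/foo as a read/write dataset containing snap1.
zfs send poolA/foo@snap1 | zfs recv poolB/foo

# Incremental send of every intermediate snapshot from snap3 through snap9
# (assumes poolB/foo already has snap3). -F on the receive rolls poolB/foo
# back to the last shared snapshot if it has changed since.
zfs send -I poolA/foo@snap3 poolA/foo@snap9 | zfs recv -F poolB/foo
```

After the incremental receive, `poolB/foo` holds the contents as of `snap9` plus all the intervening snapshots, exactly as described above.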
[09:42:08] <luke-jr> [886513.141282] Large kmem_alloc(74472, 0x1000), please file an issue at:
[09:42:10] <luke-jr> https://github.com/zfsonlinux/zfs/issues/new
[09:42:15] <luke-jr> ^ just noticed this, should I be concerned?
[09:50:50] <luke-jr> posted more here https://github.com/zfsonlinux/zfs/issues/8491
[10:57:13] *** michaeldexter <michaeldexter!~michaelde@c-67-170-143-17.hsd1.or.comcast.net> has quit IRC (Quit: michaeldexter)
[11:09:31] *** andy_js <andy_js!~andy@94.6.62.238> has joined #openzfs
[14:00:43] *** wiedi_ <wiedi_!~wiedi@ip5b4096a6.dynamic.kabel-deutschland.de> has quit IRC (Quit: ^C)
[14:39:51] *** wiedi <wiedi!~wiedi@ip5b4096a6.dynamic.kabel-deutschland.de> has joined #openzfs
[15:48:03] *** elxa <elxa!~elxa@2a01:5c0:e099:4bb1:f798:c7b2:e325:be3b> has joined #openzfs
[16:26:26] *** Essadon <Essadon!~Essadon@81-225-32-185-no249.tbcn.telia.com> has joined #openzfs
[16:27:31] *** andy_js <andy_js!~andy@94.6.62.238> has quit IRC (Read error: Connection reset by peer)
[16:27:52] *** andy_js <andy_js!~andy@94.6.62.238> has joined #openzfs
[17:09:07] *** sindan <sindan!~admin@125.red-81-42-200.staticip.rima-tde.net> has quit IRC (Ping timeout: 240 seconds)
[17:35:57] *** andy_js <andy_js!~andy@94.6.62.238> has quit IRC (Read error: Connection reset by peer)
[17:36:18] *** andy_js <andy_js!~andy@94.6.62.238> has joined #openzfs
[17:46:20] *** sindan <sindan!~admin@125.red-81-42-200.staticip.rima-tde.net> has joined #openzfs
[18:09:50] *** f_g <f_g!~f_g@213-47-131-124.cable.dynamic.surfer.at> has quit IRC (Ping timeout: 250 seconds)
[18:23:51] *** f_g <f_g!~f_g@213-47-131-124.cable.dynamic.surfer.at> has joined #openzfs
[18:34:00] *** kkantor <kkantor!~kkantor@c-24-118-59-107.hsd1.mn.comcast.net> has joined #openzfs
[18:34:02] *** kkantor <kkantor!~kkantor@c-24-118-59-107.hsd1.mn.comcast.net> has quit IRC (Client Quit)
[18:35:40] *** michaeldexter <michaeldexter!~michaelde@c-67-170-143-17.hsd1.or.comcast.net> has joined #openzfs
[19:02:51] *** mgerdts <mgerdts!~textual@2600-6c44-0c7f-ec89-f821-266b-84e4-2b78.dhcp6.chtrptr.net> has quit IRC (Ping timeout: 252 seconds)
[19:11:36] *** bn_work <bn_work!uid268505@gateway/web/irccloud.com/x-hiewmgqculidhisj> has quit IRC (Quit: Connection closed for inactivity)
[20:16:07] *** donhw <donhw!~quassel@host-184-167-36-51.jcs-wy.client.bresnan.net> has quit IRC (Quit: No Ping reply in 180 seconds.)
[20:17:34] *** donhw <donhw!~quassel@host-184-167-36-51.jcs-wy.client.bresnan.net> has joined #openzfs
[20:40:57] *** phaseNi <phaseNi!~phaset@unaffiliated/phaset> has quit IRC (Read error: Connection reset by peer)
[20:43:23] *** phaseNi <phaseNi!~phaset@unaffiliated/phaset> has joined #openzfs
[20:44:55] *** ct16k <ct16k!~ryan@78.96.221.131> has quit IRC (Quit: What does this button do?)
[20:57:58] *** TheFuzzball <TheFuzzball!~TheFuzzba@81.2.156.49> has joined #openzfs
[21:13:04] *** edef <edef!edef@NixOS/user/edef> has quit IRC (Ping timeout: 252 seconds)
[21:13:18] *** edef <edef!edef@NixOS/user/edef> has joined #openzfs
[21:37:07] *** TheFuzzball <TheFuzzball!~TheFuzzba@81.2.156.49> has quit IRC (Quit: Textual IRC Client: www.textualapp.com)
[22:05:12] *** TheFuzzball <TheFuzzball!~TheFuzzba@81.2.156.49> has joined #openzfs
[22:30:32] *** TheFuzzball <TheFuzzball!~TheFuzzba@81.2.156.49> has quit IRC (Quit: Textual IRC Client: www.textualapp.com)
[22:44:04] *** TheFuzzball <TheFuzzball!~TheFuzzba@81.2.156.49> has joined #openzfs
[23:57:33] *** elxa <elxa!~elxa@2a01:5c0:e099:4bb1:f798:c7b2:e325:be3b> has quit IRC (Read error: Connection reset by peer)