[00:00:15] <rjvbb> OS X has had a keychain almost since the beginning, and evidently you can store anything that's supported in any of your own keychains or even in a system keychain if you have admin privileges
[00:00:35] <PMT> CoJaBo: you could go parse zdb output for the file object, I suppose.
[00:01:33] <bunder> the only problem is that doc says you can't generate your own stuff, you have to let the enclave do it, but i guess lundman could modify their zfs /shrug
[00:02:14] <rjvbb> you don't have to use the newfangled enclave thingy, which isn't even available on all Macs
[00:02:18] <CoJaBo> PMT: can that be run on a file or dir somehow?
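PMT's suggestion of parsing zdb output for the file object can be sketched in shell. This is a hedged sketch: the dataset name `tank/data` is a placeholder, and the table layout assumed below (an `Object lvl iblk ...` header followed by a numeric row) is based on typical `zdb -O` output, not verified against any particular version.

```shell
# Sketch: zdb -O maps a path inside a dataset to its object table row, e.g.
#   zdb -O tank/data some/dir/file.txt        # dataset/path are placeholders
# To pull the object number out of that table for a follow-up detailed dump
# ("zdb -dddd tank/data <obj>"), grab the first column of the first row
# whose leading field is numeric (the header line starts with "Object"):
obj_from_zdb() {
    awk '$1 ~ /^[0-9]+$/ { print $1; exit }'
}
# usage (hypothetical): zdb -O tank/data some/dir/file.txt | obj_from_zdb
```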
[00:05:48] <rjvbb> what I imagine is a modified "prompt" mode for getting the key or passphrase. Instead of reading from stdin it will either get the data directly from the keychain (if accessible), or it could popen($ZFS_ASKPASS) like ssh does
[00:06:35] <bunder> well that and the generation of the master hash that the pool uses, in the enclave case
[00:06:35] <rjvbb> in the latter case the askpass utility could use any kind of password storage, including Gnome's keyring or KDE's wallet
[00:07:23] <rjvbb> where is that master hash stored currently?
[00:08:08] <bunder> on the pool, you can't see it and it doesn't change if you change your key
[00:08:45] <rjvbb> then there shouldn't be any need to store it anywhere else...
[00:09:15] <bunder> you would store the user key i think
[00:09:44] <rjvbb> also seems a bit strange to consider super-secure features if ZFS also supports getting keys from files in known locations, files that I presume are cleartext... :)
[00:10:26] <rjvbb> and yes, this is all about the public key or passphrase you have to enter
[00:12:00] <rjvbb> maybe next week I'll get around to writing a rough draft of this for O3X
[00:12:53] <rjvbb> right now I'm calling it a day :)
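rjvbb's keychain-then-askpass lookup order can be sketched as a small helper. Hedged sketch: the macOS `security` CLI and its `find-generic-password -s <service> -a <account> -w` form are real, but the `zfs-key` service name and the `ZFS_ASKPASS` variable are hypothetical conventions invented here for illustration, not anything ZFS currently supports.

```shell
#!/bin/sh
# Hypothetical key-retrieval helper for the lookup order described above:
# 1) macOS keychain if available, 2) an ssh-style external askpass program.
get_key() {
    dataset=$1
    # 1) keychain: "zfs-key" service name is a made-up convention
    if command -v security >/dev/null 2>&1; then
        security find-generic-password -s zfs-key -a "$dataset" -w 2>/dev/null && return 0
    fi
    # 2) external helper named by $ZFS_ASKPASS (hypothetical variable),
    #    which could be backed by GNOME keyring, KWallet, etc.
    if [ -n "$ZFS_ASKPASS" ]; then
        "$ZFS_ASKPASS" "Enter key for $dataset:" && return 0
    fi
    return 1
}
```

Usage would then be something like `get_key tank/secure | zfs load-key tank/secure`, with the helper deciding where the secret actually lives.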
[00:16:02] *** troyt <troyt!zncsrv@2601:681:4100:8981:44dd:acff:fe85:9c8e> has quit IRC (Quit: AAAGH! IT BURNS!)
[00:31:36] *** rjvbb <firstname.lastname@example.org> has quit IRC (Ping timeout: 252 seconds)
[00:41:18] *** troyt <troyt!zncsrv@2601:681:4100:8981:44dd:acff:fe85:9c8e> has joined #zfsonlinux
[00:45:35] *** troyt <troyt!zncsrv@2601:681:4100:8981:44dd:acff:fe85:9c8e> has quit IRC (Ping timeout: 250 seconds)
[00:45:41] <Snowman23> It just suggested exercise
[00:46:02] <Snowman23> Whats next, taking care of myself more generally so I don't succumb to boredom because I'm unhealthy?
[00:51:24] *** troyt <troyt!zncsrv@2601:681:4100:8981:44dd:acff:fe85:9c8e> has joined #zfsonlinux
[00:52:46] <ptx0> Snowman23: that's the plan
[00:53:26] <ptx0> it's told me to rearrange my things 4 times
[00:53:44] <ptx0> imagine how crazy i'd look if i'd followed through!
[00:54:04] <ptx0> that should be one of the boredom suggestions. "Imagine how crazy people think you are for using a boredom generator."
[00:59:39] *** Horge <Horge!~Horge@cpe-172-115-16-136.socal.res.rr.com> has joined #zfsonlinux
[01:00:57] <Horge> Ultra quick question: If im running a zfs mirror (R1), One drive is 3TB, and one is 4TB, running at 3TB for the array. I'm ready to swap the 4tb out with another 3tb. Will it let me hotswap these drives?
[01:01:19] <ptx0> if the 3tb drives are the same size
[01:01:27] <Horge> okay perfect
[01:01:33] <Horge> Thank yoU!!
[01:08:24] *** shibboleth <shibboleth!~shibbolet@gateway/tor-sasl/shibboleth> has joined #zfsonlinux
[01:11:13] *** MrCoffee <MrCoffee!coffee@gateway/vpn/privateinternetaccess/b> has quit IRC (Ping timeout: 244 seconds)
[01:15:52] *** cheet <email@example.com> has joined #zfsonlinux
[01:18:36] *** endersending <firstname.lastname@example.org> has quit IRC (Quit: Leaving)
[01:35:11] *** elxa <elxa!~elxa@2a01:5c0:e087:8e11:c071:6dcd:235d:5ba> has quit IRC (Remote host closed the connection)
[01:35:56] *** elxa <elxa!~elxa@2a01:5c0:e087:8e11:c071:6dcd:235d:5ba> has joined #zfsonlinux
[01:37:50] *** sponix <email@example.com> has quit IRC (Excess Flood)
[01:38:08] *** sponix <firstname.lastname@example.org> has joined #zfsonlinux
[01:47:32] *** cluelessperson <cluelessperson!~cluelessp@unaffiliated/cluelessperson> has quit IRC (Quit: Laters)
[01:55:32] *** Markow <Markowemail@example.com> has joined #zfsonlinux
[01:55:45] *** elxa <elxa!~elxa@2a01:5c0:e087:8e11:c071:6dcd:235d:5ba> has quit IRC (Ping timeout: 252 seconds)
[02:08:19] *** shibboleth <shibboleth!~shibbolet@gateway/tor-sasl/shibboleth> has quit IRC (Quit: shibboleth)
[02:15:17] *** rjvb <firstname.lastname@example.org> has quit IRC (Ping timeout: 244 seconds)
[02:46:38] *** Markow <Markowemail@example.com> has quit IRC (Quit: Leaving)
[03:06:18] *** djdunn <firstname.lastname@example.org> has quit IRC (Ping timeout: 245 seconds)
[03:07:33] *** MrCoffee <MrCoffee!coffee@gateway/vpn/privateinternetaccess/b> has joined #zfsonlinux
[03:12:03] *** djdunn <email@example.com> has joined #zfsonlinux
[03:25:00] *** jasonwc <firstname.lastname@example.org> has quit IRC (Ping timeout: 250 seconds)
[04:33:27] *** PewpewpewPantsu <PewpewpewPantsu!~pewpew@unaffiliated/setsuna-xero> has joined #zfsonlinux
[04:34:18] *** sphrak <email@example.com> has quit IRC (Quit: Lost terminal)
[04:34:40] *** sphrak <firstname.lastname@example.org> has joined #zfsonlinux
[04:36:04] *** Setsuna-Xero <Setsuna-Xero!~pewpew@unaffiliated/setsuna-xero> has quit IRC (Ping timeout: 250 seconds)
[05:00:58] *** Snowman23 is now known as ss23
[05:07:16] *** Ryushin <Ryushin!~Ryushin@windwalker.chrisdos.com> has quit IRC (Ping timeout: 250 seconds)
[05:13:04] *** Ryushin <Ryushin!~Ryushin@windwalker.chrisdos.com> has joined #zfsonlinux
[05:13:31] <ptx0> CoJaBo: which ryzen system is that
[05:13:55] <ptx0> i've got a 2500u laptop and i need 'idle=nomwait' on kernel cmdline to avoid hangs and panics
[05:14:27] <ptx0> something about the MWAIT cpu instruction is broken or behaves unexpectedly
[05:18:47] <CoJaBo> ptx0: Mine needs a slew of options
[05:19:20] <CoJaBo> ptx0: I ended up with iommu=off amd_iommu=off nmi_watchdog=1 processor.max_cstate=5 panic=30 loglevel=9
[05:19:36] <CoJaBo> Plus a bunch of BIOS tweaks
[05:19:46] <CoJaBo> All just to make the system run stable
[05:23:22] <CoJaBo> Basically yeh, what I've set in BIOS has the same effect as idle=nomwait
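For reference, options like CoJaBo's usually end up on the kernel command line via the bootloader config. Sketch only: the option string is CoJaBo's own from above, while the file path and regeneration command assume a GRUB-based distro.

```shell
# /etc/default/grub  (fragment; path assumes a GRUB-based distro)
GRUB_CMDLINE_LINUX="iommu=off amd_iommu=off nmi_watchdog=1 processor.max_cstate=5 panic=30 loglevel=9"
# then regenerate the config, e.g.:
#   grub-mkconfig -o /boot/grub/grub.cfg   # output path varies by distro
```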
[05:25:15] <CoJaBo> “Ryzen: Not Even Onc…” [remainder of the message garbled]
[05:29:01] <ptx0> i've had great luck with threadripper fwiw
[05:29:11] <ptx0> none of those issues
[05:29:38] <ptx0> you should fix your DSDT instead of turning IOMMU off..
[05:30:20] <ptx0> using ivrs_ioapic[n]=hw:id
[05:31:38] <bunder> dsdt is an old fix, apparently you should file a kernel bug if you have to do it anymore
[05:33:00] *** tlacatlc6 <email@example.com> has quit IRC (Quit: Leaving)
[05:34:48] <bunder> i'm guessing that would go under acpi, dunno
[05:35:27] *** Essadon <Essadon!~Essadon@81-225-32-185-no249.tbcn.telia.com> has quit IRC (Quit: Qutting)
[05:39:20] <CoJaBo> I don't think I really need IOMMU, better to just leave it off
[05:39:58] <CoJaBo> The IOMMU issues were the most severe, system wouldn't even boot without that, and GRUB is glitchy as hell, so it's a PITA to fix
[05:40:54] <bunder> lol grub, switch to the evil efi dark side :P
[05:41:51] <ptx0> sounds like not using grub-efi
[06:35:14] <CoJaBo> Yeh, I gave up trying to get efi working
[06:35:22] <CoJaBo> I forget what the problem was
[06:36:31] <ptx0> that you weren't persistent enough, clearly
[06:36:42] <ptx0> CSMs suck
[06:36:57] <ptx0> they caused so many issues in any system i've used, intel or otherwise
[06:51:01] *** Dagger <Daggerfirstname.lastname@example.org> has quit IRC (Excess Flood)
[06:51:46] *** Dagger2 <Dagger2email@example.com> has joined #zfsonlinux
[06:52:28] *** Dagger2 is now known as Dagger
[07:05:37] *** MrCoffee <MrCoffee!coffee@gateway/vpn/privateinternetaccess/b> has quit IRC (Quit: Lost terminal)
[07:06:42] *** Horge <Horge!~Horge@cpe-172-115-16-136.socal.res.rr.com> has quit IRC ()
[07:28:36] <CompanionCube> ptx0: why not firmware in general?
[07:32:21] <ptx0> because CSMs are particularly evil
[07:36:20] *** cluelessperson <cluelessperson!~cluelessp@unaffiliated/cluelessperson> has joined #zfsonlinux
[07:40:44] *** cluelessperson <cluelessperson!~cluelessp@unaffiliated/cluelessperson> has quit IRC (Client Quit)
[07:50:19] *** gerhard7 <firstname.lastname@example.org> has joined #zfsonlinux
[07:58:14] *** cluelessperson <cluelessperson!~cluelessp@unaffiliated/cluelessperson> has joined #zfsonlinux
[08:00:37] *** cluelessperson <cluelessperson!~cluelessp@unaffiliated/cluelessperson> has quit IRC (Client Quit)
[08:01:22] *** cluelessperson <cluelessperson!~cluelessp@unaffiliated/cluelessperson> has joined #zfsonlinux
[08:05:02] *** user <user!~user@fsf/member/fling> has joined #zfsonlinux
[08:05:03] <user> Do I need to have fans on sas controllers?
[08:05:17] *** user is now known as fling
[09:12:50] <leito> Hello guys Good morning
[09:15:09] *** beyondcreed <beyondcreed!~beyondcre@S0106000103cfc9f5.ok.shawcable.net> has joined #zfsonlinux
[09:25:23] *** rjvbb <email@example.com> has joined #zfsonlinux
[09:39:50] *** jugo <jugo!~jugo@unaffiliated/jugo> has quit IRC (Ping timeout: 250 seconds)
[09:45:56] <leito> I'm dealing with a cluster. When a controller died, it took 13 seconds to import the pool on the other controller. Then I changed zfs_multihost_import_intervals from 10 to 5, and it took 8 sec after that.
[09:46:23] <leito> If I set multihost=off it just took 1.7 sec.
[09:47:09] *** lblume <lblume!~lblume@greenviolet/laoyijiehe/lblume> has quit IRC (Quit: Leaving.)
[09:47:15] <leito> I'm trying to reduce this time. Is there anything you can suggest?
[09:47:58] <leito> Is it safe to use "zfs_multihost_import_intervals = 5" ?
[09:48:51] *** lblume <lblume!~lblume@greenviolet/laoyijiehe/lblume> has joined #zfsonlinux
[09:49:28] *** adilger <adilger!~adilger@S0106a84e3fe4b223.cg.shawcable.net> has joined #zfsonlinux
[09:58:40] <pascalou> hi
[09:58:59] <pascalou> someone on gitlab is asking for the spl version, how do i get that?
[09:59:34] <leito> what is your distro
[09:59:41] <Lalufu> from dmesg (which will show the version which got loaded)
[10:00:03] <leito> for arch-linux you can use "pacman -Q | grep spl", which prints e.g. "spl-linux 0.7.11_4.18.9.arch1.1-2"
[10:00:31] <leito> also like Lalufu said "dmesg | grep -i spl"
[10:01:09] <Lalufu> depending on which state your system is in the installed package and the loaded module might have different versions
[10:01:11] <pascalou> thx
[10:01:22] <Lalufu> (for example after you updated, but haven't rebooted yet)
[10:01:24] <leito> yes the better way is using "modinfo spl"
[10:01:56] <Lalufu> I'm unsure about that, that's going to show the version-on-disk, not the version-in-kernel, right?
[10:02:21] <leito> yes. but he should talk about the running version
[10:08:19] <PMT> dmesg is probably the safer way to ask the question, yes.
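Lalufu's point about on-disk vs loaded versions diverging can be made mechanical. Hedged sketch: `modinfo -F version spl` is a standard modinfo invocation and `/sys/module/spl/version` is the usual sysfs location for a loaded module's version attribute, but both may vary by distro; the comparison logic is the point.

```shell
# Compare the on-disk SPL module version with the one actually loaded.
spl_sync() {
    ondisk=$1
    loaded=$2
    if [ "$ondisk" = "$loaded" ]; then
        echo "in sync ($loaded)"
    else
        echo "mismatch: disk=$ondisk loaded=$loaded (updated but not rebooted?)"
    fi
}
# usage (paths/flags assumed, see above):
#   spl_sync "$(modinfo -F version spl)" "$(cat /sys/module/spl/version)"
```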
[10:09:00] *** jugo <jugo!~jugo@unaffiliated/jugo> has joined #zfsonlinux
[10:14:05] <PMT> leito: I mean, it depends on what failure mode you're trying to mitigate. The tradeoff with multihost is that it makes you wait for X+epsilon seconds before importing in order to be sure nobody writes an "i'm using this, asshole" to the pool in that interval. But it also means if for some reason a machine takes longer than X to update multihost you could have someone import it while the old host is alive
[10:14:11] <PMT> and then the old host will fault. It's the usual STONITH problem.
[10:15:32] <leito> PMT: yes you're right. I'm just looking to find a good way for my situation.
[10:15:52] *** kaipee <firstname.lastname@example.org> has joined #zfsonlinux
[10:16:39] <PMT> I mean, you could implement something to actually _DO_ a STONITH-style kill-then-import.
[10:17:08] <leito> I have multihost=on and my pool control doesn't belong to corosync. I have code for the job; it checks all the controllers and decides what to do.
[10:17:10] *** Floflobel <Floflobel!~Floflobel@cosium-152-18.fib.nerim.net> has joined #zfsonlinux
[10:17:31] <leito> I dont use stonith for the job
[10:17:51] <leito> I just need to decrease the time of importing.
[10:19:09] <leito> For example I can "live" move my pools in under 8 sec right now: 3 sec for export-import and 5 sec for closing shares + recreating shares.
[10:19:38] <leito> But when a node dies (poweroff) the import job was taking 13++ sec
[10:19:47] <PMT> "live" meaning that you're having another machine take over the IP, or?
[10:20:06] <leito> Yes, it takes over the pool ip
[10:20:22] <leito> pool + shares under 8 sec
[10:20:34] <leito> so my shares + datastores + vm's still alive
[10:20:56] <leito> thats why I'm trying to decrease this number
[10:21:14] <PMT> I mean, it makes sense that removing 5 seconds from multihost removed 5 seconds from the import time. (Though be very sure that setting is the same on all hosts.)
[10:22:09] <leito> yes thats right
[10:22:18] <leito> But I'm afraid to decrease more.
[10:22:19] <PMT> I am not an expert, but it seems like you're trying to say you have your own tooling for figuring out which host is "live", which might mean you can safely use low multihost settings if you really trust that logic.
[10:22:41] <leito> And I think maybe I should change another parameter and maybe I will gain 3 more second
[10:22:50] <PMT> Which parameter?
[10:23:13] <leito> I changed zfs_multihost_import_intervals from 10 to 5, as I said.
[10:23:29] <PMT> ...yes, you said that. But you then said "aybe I should change another paramete"
[10:23:37] <PMT> rather, maybe I should change another parameter.
[10:23:46] <leito> I don't know the other parameter, that's why I'm asking :d
[10:24:00] <leito> sorry for my english skills
[10:24:31] <PMT> Why do you think you'd be able to gain any significant amount of time from that? 5 of those 8 seconds are from multihost, and you're not gonna make import/export take 0 seconds.
[10:25:11] <leito> when multihost=off my import time 1.788 sec
[10:25:39] <PMT> Yes, 8-5 = 3, so you're saving an extra second for not going through the multihost code, presumably.
[10:25:58] <leito> when its on and "interval 5" its 8.443 sec.
[10:27:20] <leito> The interval can be min 1sec. So we can talk about 4.443 sec. But as I said I can import 1.7sec when multihost off
[10:27:21] <PMT> Are you trying to solve a specific problem with it taking too long, or just that you want it to go faster?
[10:27:27] <leito> So thats why im looking something else.
[10:28:14] <leito> PMT: I started that way. It was 25-30 sec with my code and settings mistakes, and it's just 10 sec right now.
[10:28:24] <leito> I'm just looking for to be faster yes
[10:28:37] <leito> (In safety zone ofc)
[10:30:52] <PMT> leito: you could play with zfs_multihost_interval as well, but to be clear, it is my claim that you're attempting to strap a jet engine to a bicycle while claiming you want to be safe.
[10:31:17] <leito> I guess I shouldn't play more. lol :D
[10:34:40] <PMT> You can, it's just that when you come in and say you imported on multiple systems and it scribbled over your data, you'll hear an I told you so.
[10:39:08] <leito> That's why I came here for advice. I hope nobody has to live through that.
[10:39:47] <leito> I know this is not a playground, but we're all playing.
[10:43:10] <PMT> Plenty of people make choices that people in here don't agree with. Often people in here don't even agree with each other. Sometimes people even admit they're wrong. :)
[10:43:42] <PMT> That's one reason I warned you that I'm not an expert on this, but if you're not trying to solve a specific problem, it's hard to figure out where the safety/SPEED tradeoff makes sense for you.
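PMT's description of the multihost wait can be put in rough numbers. Hedged back-of-envelope: the real import delay also factors in leaf vdev count and some randomization, so treat this as an approximation rather than the exact delay, but it matches the ~5 s drop leito saw when halving the interval count.

```shell
# Approximate mandatory multihost import wait:
#   wait_s ~= zfs_multihost_import_intervals * zfs_multihost_interval_ms / 1000
# (interval defaults to 1000 ms; the real code adds vdev-count scaling and
#  randomization on top of this, so this is a lower-bound estimate)
multihost_wait() {
    echo $(( $1 * ${2:-1000} / 1000 ))
}
```

So `multihost_wait 10` estimates ~10 s of waiting on top of the base import time, and `multihost_wait 5` about ~5 s, which lines up with the observed 13 s -> 8 s change above.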
[10:48:21] *** adilger <adilger!~adilger@S0106a84e3fe4b223.cg.shawcable.net> has quit IRC (Quit: Zzzz...)
[10:58:19] <Lalufu> if your required failover times are very small then a shared zfs pool might be an entirely wrong solution for you
[11:05:30] *** rjvb <email@example.com> has joined #zfsonlinux
[11:06:19] *** rjvb <firstname.lastname@example.org> has quit IRC (Remote host closed the connection)
[11:08:55] <PMT> Ask your doctor if pNFS is right for you.
[11:09:20] *** biax_ <biax_!~biax@unaffiliated/biax> has quit IRC (Ping timeout: 272 seconds)
[11:15:20] *** biax_ <biax_!~biax@unaffiliated/biax> has joined #zfsonlinux
[11:28:41] *** futune <email@example.com> has joined #zfsonlinux
[11:30:30] <leito> My problem is simply "interruption". When I move a pool, or the node carrying it dies, I don't want any problems on my datastores, nfs, cifs shares. To do that I need to make the pool accessible again before their timeouts expire.
[11:31:09] <leito> PMT: I hadn't heard about pNFS. I will check it.
[11:39:11] <PMT> pNFS solves this problem very differently from how you currently do.
[11:46:25] *** Floflobel <Floflobel!~Floflobel@cosium-152-18.fib.nerim.net> has quit IRC (Remote host closed the connection)
[11:53:31] *** Floflobel_ <Floflobel_!~Floflobel@cosium-152-18.fib.nerim.net> has joined #zfsonlinux
[12:14:14] *** simukis <firstname.lastname@example.org> has joined #zfsonlinux
[12:18:11] *** Dagger <Daggeremail@example.com> has quit IRC (Excess Flood)
[12:18:51] *** Dagger <Daggerfirstname.lastname@example.org> has joined #zfsonlinux
[12:29:41] *** rjvb <email@example.com> has joined #zfsonlinux
[12:42:05] *** gerhard7 <firstname.lastname@example.org> has quit IRC (Quit: Leaving)
[13:20:40] *** Haxxa <Haxxa!~Harrison@180-150-30-18.NBN.mel.aussiebb.net> has joined #zfsonlinux
[13:21:09] *** Markow <Markowemail@example.com> has joined #zfsonlinux
[13:21:42] *** gerhard7 <firstname.lastname@example.org> has joined #zfsonlinux
[13:23:41] *** hsp <hsp!~hsp@unaffiliated/hsp> has quit IRC (Read error: Connection reset by peer)
[13:23:50] *** hsp_ <hsp_!~hsp@unaffiliated/hsp> has joined #zfsonlinux
[14:27:53] <prologic> $ zfs mount -a
[14:27:53] <prologic> cannot mount '/mnt/data': directory is not empty
[14:28:06] <prologic> Can I mount a filesystem temporarily somewhere other than its defined mountpoint property?
[14:30:04] <DHE> you can use a hack. mount -t zfs -o zfsutil pool/fs /mnt/elsewhere
[14:30:59] <DHE> or, you know, change the mount point..
[14:31:57] <Lalufu> or allow mounting over non-empty directories
[14:32:40] <DHE> could be data has been written assuming it's mounted when it's not, and now we're trying to clean that up... so no...
[14:33:50] <prologic> Lalufu yeah I might do that once I know what's in these file systems :)
[14:33:56] <prologic> I have a funny feeling I did something silly
[14:34:15] <prologic> like create nested file systems; e.g data/media and then data/media/movies
[14:34:28] <prologic> but I'm 99.9% sure there is nothing in the data/media file system at all
[14:34:42] <prologic> but it complains like hell because I mount the nested file systems there
[14:34:47] <prologic> should I not do this? :)
[14:36:44] <DHE> historically ZFS on solaris would create directories for its mount points as needed, mount, and during an unmount delete the directories. this was because solaris would enforce mount points are always empty
[14:37:04] <DHE> and ZFS on linux has basically inherited that behaviour and chosen to enforce that itself
[14:37:29] <prologic> right
[14:37:45] <DHE> so ZFS should handle these things itself just fine, except it sorta conflicts with the linux style, plus rebooting while the pool is still mounted obviously doesn't delete the directories first
[14:37:59] <DHE> and then these things happen
[14:38:44] <prologic> see
[14:38:45] <prologic> data/media 1.23T 18.7T 274K /mnt/data/media
[14:38:45] <prologic> data/media/movies 354G 18.7T 354G /mnt/data/media/movies
[14:38:53] <prologic> I think i did that; and maybe this wasn't a good idea
[14:39:01] <prologic> that's why I'm 99.9% sure there's nothing in those empty file systems
[14:39:27] <prologic> Ahhh
[14:39:39] <DHE> zfs list -o name,mounted,mountpoint
[14:39:41] <prologic> so these file systems now have these empty directories lying around
[14:39:44] <prologic> committed in the pool
[14:39:46] <prologic> damnit :)
[14:39:46] <DHE> ^^ one of my favourite commands
[14:39:55] <prologic> probably from badly rebooted instances
[14:40:17] <DHE> still, /mnt/data should be empty and contain a ZFS filesystem which contains "media", allowing that to be safely mounted, etc.
[14:40:27] <DHE> *should be empty before mounting
[14:40:33] <prologic> yeap
[14:40:42] <prologic> and a bunch of those empty file systems are "no"
[14:40:57] <prologic> something I can do to clean this up?
[14:41:17] <prologic> remove these empty file-systems? (I think that'll delete the child file-systems?)
[14:41:19] <Lalufu> if it's only the directories, and nothing in them: delete them?
[14:41:21] <prologic> so maybe not :)
[14:41:33] <DHE> find /mnt/data -type d -print0 | xargs -0 rmdir
[14:41:55] <DHE> or maybe unmount everything first if you have partial mounts...
[14:41:58] <prologic> that won't work :)
[14:42:10] *** gerhard7 <email@example.com> has quit IRC (Quit: Leaving)
[14:42:11] <prologic> that's why I asked about temporarily mounting them somewhere else to see what they contain
[14:42:27] <DHE> mount -t zfs -o zfsutil data/media /mnt/otherlocation # or such
[14:42:34] <prologic> *nods*
[14:42:42] <DHE> then just umount when you're done
[14:42:48] <prologic> help me remember thougb; I do need these parent empty file systems right?
[14:43:08] <prologic> IIRC I didn't create them as such; they were created from zfs creat data/media/movies for example
[14:43:12] <prologic> I *think*
[14:43:18] <DHE> zfs list -o name,canmount,mounted,mountpoint
[14:43:25] <DHE> maybe I should upgrade it to this
[14:47:42] *** Markow <Markowfirstname.lastname@example.org> has quit IRC (Quit: Leaving)
[14:53:23] <prologic> $ zfs mount -a
[14:53:25] <prologic> there :)
[14:53:28] <prologic> fixed I hope :)
[14:54:24] <prologic> thanks guys :D
[14:54:25] <prologic> and gals
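DHE's `find ... | xargs rmdir` sweep from earlier can be made a bit more robust. Sketch only, with two hedged tweaks: `-depth` removes leaf directories before their parents and `-mindepth 1` spares the top directory itself, while `rmdir` still refuses anything non-empty, so real files survive. Run it only after unmounting everything, as DHE noted.

```shell
# Depth-first sweep of leftover empty mountpoint directories.
# rmdir fails (harmlessly) on anything non-empty; errors are suppressed.
cleanup_empty_dirs() {
    find "$1" -depth -mindepth 1 -type d -print0 | xargs -0 -r rmdir 2>/dev/null || true
}
# e.g. cleanup_empty_dirs /mnt/data    # after `zfs umount -a` or similar
```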
[14:55:10] <bunder> fling: if it doesn't come with one, probably not, but if its catching on fire and your case air flow sucks, then maybe yes, but good luck finding one that fits without superglue or zipties
[14:58:27] <fling> bunder: I just screwed a random one on it with long m4 screws.
[14:58:52] <bunder> lol nice
[14:59:31] <fling> bunder: 8-32 on one of mobo's heatsinks and m3 on another one :P
[14:59:52] <bunder> that's some jank but if it works... lol
[15:00:03] <fling> It is easy to screw anything into aluminium heatsinks :)
[15:00:29] <bunder> the fins on my hba are too far apart for that iirc
[15:00:57] <cirdan> bigger screws solves that :)
[15:01:06] <fling> I have pike2008 and three fans I used to cool it are too far from it too
[15:01:22] <cirdan> or use some steel epocy and drill/tap it
[15:01:31] <cirdan> epoxy
[15:01:51] <bunder> tapping screw holes, what do i look like, a garage :P
[15:01:52] <fling> Is it fine to watercool motherboard and sas controllers?
[15:02:00] <fling> I don't like small noisy fans on heatsinks
[15:02:14] <cirdan> if you like water damage
[15:02:17] <bunder> yes but you're back to "where do i get a waterblock"
[15:02:44] <fling> bunder: but I found some for cheap on avito.ru
[15:02:45] <cirdan> fling: get some big quiet ones to move the air out of the case and you'll be fine
[15:03:04] <fling> cirdan: yes, I want to replace water with something ;P
[15:03:08] <cirdan> air
[15:03:27] <cirdan> hbas are not gpus
[15:03:42] <cirdan> just get cool air flowing past them and it's fine
[15:06:20] <bunder> they still get toasty
[15:06:40] <cirdan> and that's ok
[15:06:55] <cirdan> if they needed a fan, they would have come with one
[15:07:13] <cirdan> or get a small but quiet one :-)
[15:08:05] <Lalufu> there are generic water blocks that fit a lot of things
[15:08:17] <PMT> sometimes you might benefit from one
[15:08:37] <Lalufu> most of the time it's "chips of size X, surrounded by 4 holes with distance Y and Z"
[15:09:02] <cirdan> my hba has a spring clip holding it down & goes across, iirc
[15:10:23] <fling> cirdan: no, with 5000 rpm fans on mobo heatsinks I had 77C on idling mobo with pci-e sas installed
[15:10:42] <cirdan> what was 77C
[15:10:45] <fling> cirdan: needed to put fans on everything to get idle mobo temp down to 55C
[15:10:56] <cirdan> and what are the chips designed to run at?
[15:10:57] <fling> cirdan: 77C was the idle temp on the mobo
[15:11:16] <cirdan> I dont know where that temp is taken from
[15:11:19] <bunder> chipsets are probably easier to find blocks for than hba/sas cards
[15:11:20] <fling> idk, can't even find in docs where this thermistor located
[15:11:42] <cirdan> sounds like you need more airflow int he case
[15:11:42] <fling> bunder: I will screw random blocks probably but I never used any
[15:12:09] <fling> cirdan: I have a big fan on the front and on the side of haf 932 case
[15:12:16] <cirdan> get an air thermometer probe in the case and see what it's doing
[15:12:30] <fling> cirdan: but I had to remove the top one to be able to move the psu to the top to be able to put three fans on pike2008 card ;<
[15:13:39] *** gerhard7 <email@example.com> has joined #zfsonlinux
[15:19:58] <Haxxa> My last build had PSU issues and some corruption occurred. I have since upgraded to a Seasonic PSU, scrubbed my data, fixed corrupt files and cleared errors. Based on this, should I be ok to take the pool back into production?
[15:20:22] <Haxxa> Or will ZFS complain with Checksum errors?
[15:20:37] <Haxxa> I assume once errors are cleared ZFS updates its Checksums
[15:21:24] *** Essadon <Essadon!~Essadon@81-225-32-185-no249.tbcn.telia.com> has joined #zfsonlinux
[15:21:49] <bunder> if the errors are fixed then it should be fine
[15:22:03] <bunder> if the errors are still there, then it'll pile up on the next scrub/read
[15:22:14] <PMT> Assuming it's not reporting metadata errors after a scrub,clear,scrub, then you should be fine.
[15:22:31] <PMT> scrub, [remove/replace mangled data], clear, scrub, rather.
[15:22:50] <Haxxa> yeah thats what's been done
[15:22:55] <Haxxa> looks clear so far
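PMT's scrub, clear, scrub check above can be reduced to a one-line verdict. Hedged sketch: the `errors: No known data errors` summary line is the wording typical `zpool status` output uses, but it is assumed here rather than guaranteed stable across versions.

```shell
# Reads `zpool status <pool>` output on stdin; succeeds only if the
# summary line reports no known data errors (wording assumed, see above).
pool_clean() {
    grep -q 'errors: No known data errors'
}
# usage: zpool status tank | pool_clean && echo "looks clean after scrub"
```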
[15:37:09] *** hyegeek <firstname.lastname@example.org> has joined #zfsonlinux
[15:48:55] *** pascalou <email@example.com> has quit IRC (Remote host closed the connection)
[15:59:24] *** King_InuYasha <King_InuYasha!~King_InuY@fedora/ngompa> has quit IRC (Ping timeout: 272 seconds)
[16:20:03] <ptx0> latest udev is broken
[16:20:05] <ptx0> excellent
[16:20:17] <ptx0> avoid 240-r1
[16:20:33] <pink_mist> this is systemd udev?
[16:20:46] <ptx0> i don't use systemd but yes
[16:21:00] <mason> ptx0: Eh? I thought you were on Gentoo/eudev.
[16:21:19] <pink_mist> what's the issue?
[16:24:09] <ptx0> pink_mist: when lvm service starts up it says it doesn't find hdd described via udev and then when X starts there is no input
[16:24:19] <ptx0> downgrading to stable udev works
[16:26:31] <bunder> yeah why aren't you using eudev
[16:27:53] <ptx0> it was flakey
[16:30:15] *** Markow <Markowfirstname.lastname@example.org> has joined #zfsonlinux
[16:30:33] <mason> FLAKY LIKE BREAKFAST CEREAL
[16:36:50] <prometheanfire> what's the current state of power protected SSDs (non-nvme)?
[16:37:33] <prometheanfire> also, finding a dual nvme pcie card is a pain in the ass
[16:37:42] <nahamu> You mean like the old ZeusRAMs?
[16:37:55] <prometheanfire> no, just SSDs with supercaps
[16:38:52] <nahamu> What's the advantage to having supercaps and "lying" about when you've persisted a write vs just a really high IOPs device that completes the writes very fast?
[16:40:28] <Lalufu> if you have supercaps you're not lying, the data is persisted if you lose power
[16:41:51] <mason> The data is verbed!
[16:42:01] <Lalufu> verbing weirds language
[16:42:14] <prometheanfire> ya, you don't want to not know on a zil device
[16:42:33] <prometheanfire> even then, I mirror my zil and stripe my l2arc
[16:49:45] <ptx0> uhm, wat
[16:49:54] <ptx0> striping l2arc doesn't do anything for you
[16:50:04] <ptx0> it just wastes more memory
[16:50:10] <PMT> does it?
[16:50:25] <ptx0> well, presuming that the additional bandwidth won't do much
[16:50:40] <PMT> Or rather, does it in a way that is different from just adding more L2ARC in a single device?
[16:50:41] <ptx0> it doesn't provide any kind of redundancy though
[16:50:47] <ptx0> no
[16:52:07] <PMT> I mean, it's L2ARC, it's not going to do anything but hurt perf if it bites the dust.
[16:52:34] <ptx0> right
[16:52:47] <ptx0> so i don't get the point of bringing up how it's striped during a discussion on power protection
[16:52:59] <ptx0> it's not like the l2arc is even persistent on reboots
[16:53:45] <PMT> ptx0: I think it was just to clarify the mirror didn't apply to the L2ARC.
[16:54:11] <ptx0> vOv
[16:54:42] <ptx0> ebay users suck, man. quit bidding on auctions and driving up the fuckin price
[16:55:04] <ptx0> had some 8TB and 10TB disks in my list and now they've been bid past their MSRP
[16:55:18] <ptx0> there's 3 days left on the auction
[16:55:48] <ptx0> it's like no one heard of bid sniping and the wonders of paying less for an auction
[16:56:14] <Phil-Work> eBay gets silly sometimes
[16:56:25] <Phil-Work> stuff gets bid up past the cheapest BIN item
[16:56:28] <Phil-Work> people are dumb
[16:59:42] <ptx0> prometheanfire: new SSDs don't have a need for supercaps to protect from power off
[17:00:50] <ptx0> Micron changed how DRAM is used for buffering so that the FTL (flash translation layer, the logical-to-physical mapping) is efficiently laid out and can be rapidly rebuilt from information stored in NVM
[17:01:36] <ptx0> instead of protecting at power off it rebuilds during boot and adds a couple seconds to the initialisation routine
[17:02:17] <ptx0> enterprise disks seem to employ both techniques
[17:02:49] <ptx0> the only benefit being that you can then enable the disk write cache
[17:03:20] <ptx0> prometheanfire: that's over the last 5 years or so..
[17:03:54] <ptx0> also, get a UPS.
[17:09:11] <cirdan> UPSs can fail and take down the machine
[17:09:34] <cirdan> I know someone who hates APC because they cause him more outages than anything else
[17:10:34] <ptx0> dual power supplies and two UPS
[17:10:56] <ptx0> but, cute anecdote
[17:11:02] <ptx0> you should tell it at parties
[17:11:49] <PMT> i'm still a "fan" of the custom serial cables for older APC UPSes
[17:12:08] <cirdan> yeah gotta love when people plug in a regular one and it shuts off
[17:12:10] <PMT> where if you try to do a "connect" over serial with a non-custom cable, the UPS translates it into "shut all down"
[17:12:13] <PMT> yeah that.
[17:12:30] <cirdan> it was apc's way of saying "pwnt."
[17:12:52] <cirdan> the 10p10c usb cables are also fun
[17:13:54] <ptx0> weird, mine uses USB and no issues
[17:14:02] <ptx0> though occasionally upsmon loses the thing
[17:15:46] *** djs_ <djs_!~derek@S0106602ad08f6eff.cg.shawcable.net> has joined #zfsonlinux
[17:15:48] *** djs_ <djs_!~derek@S0106602ad08f6eff.cg.shawcable.net> has quit IRC (Client Quit)
[17:17:27] <bunder> my cyberpower ones are annoying
[17:17:48] <PMT> which, because of the company-provided app, or
[17:19:02] <bunder> Jan 2 07:27:16 firewall usbhid-ups: libusb_get_string: Input/output error Jan 2 07:43:47 firewall usbhid-ups: libusb_get_string: Broken pipe
[17:19:05] <bunder> like all day
[17:19:10] <PMT> never seen that one
[17:19:16] <PMT> on my cp ups, at least
[17:19:44] <bunder> it still works, i get munin graphs and stuff
[17:19:57] <bunder> dunno why it freaks out constantly though
[17:20:51] <bunder> can you upgrade the firmware on a ups :P
[17:24:34] <ptx0> yes
[17:24:37] <prometheanfire> ptx0: have a ups (two actually) :P
[17:25:04] <prometheanfire> I was thinking of getting a active active power switch for them
[17:25:17] *** ses1984 <email@example.com> has joined #zfsonlinux
[17:25:52] <ptx0> those are expensive since it has to ensure the phases are in sync
[17:26:14] <ptx0> otherwise you'll have what looks like ground line fault everywhere
[17:26:15] *** ses1984 <firstname.lastname@example.org> has quit IRC (Client Quit)
[17:26:20] <prometheanfire> I hate the apc serial thing, I had to custom make a cable to reset it...
[17:26:37] <prometheanfire> ptx0: yep, 450 ish for a good one
[17:27:15] <prometheanfire> that's the one I'm looking at
[17:27:47] *** ses1984 <email@example.com> has joined #zfsonlinux
[17:28:17] *** hyegeek <firstname.lastname@example.org> has quit IRC (Ping timeout: 250 seconds)
[17:28:27] <prometheanfire> mainly for network stuff
[17:28:58] <prometheanfire> one needs a new battery...
[17:30:28] <bunder> ptx0: can't be out of phase if you plug both cords into the same outlet ;)
[17:30:51] * prometheanfire uses different breakers, not sure if on the same phase or not though
[17:31:46] <cirdan> prometheanfire: I just bought 2 of them on ebay, 1 new 1 used for a total of like $150 shipped
[17:31:53] <cirdan> well the 20A ones not the 15A
[17:31:55] <cirdan> :)
[17:32:55] <cirdan> tripp lite is awesome for support too
[17:32:56] <PMT> I've actually been quite impressed with my apartment's power - I haven't had anything complain about excess draw except if I turn on my laser printer, on startup I hear the THUNK of the UPS saying "the line appeared to sag so batteries are involved now" for a split-second before going back
[17:32:59] <ses1984> i have a raidz2 where one of the device names changed, so it's dropped off the array as OFFLINE, is there a way to add it back by changing the device path?
[17:33:26] <cirdan> ses1984: easy way w/o importing is make a symlink from the old device name to the new...
[17:33:39] <cirdan> you might be able to do a replace i forget
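cirdan's symlink workaround can be sketched like this. The device paths and pool name are hypothetical, and the `zpool` steps are shown as comments since they need a real pool; only the symlink part is runnable here:

```shell
# Hypothetical: the pool remembers /dev/sdd1 but the disk now enumerates as
# /dev/sde1. Demo paths under /tmp stand in for the /dev nodes.
old=/tmp/demo-olddev
new=/tmp/demo-newdev
rm -f "$old"; touch "$new"
ln -s "$new" "$old"        # the old name now resolves to the new device
readlink "$old"

# On a real system (shown as comments; needs an actual pool):
#   zpool export tank
#   zpool import -d /dev tank                # re-scan; the symlink satisfies the old path
#   zpool import -d /dev/disk/by-id tank     # better: stable IDs avoid this entirely
```

Importing with `-d /dev/disk/by-id` in the first place sidesteps the renamed-device problem, since those paths don't change with enumeration order.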
[17:34:15] <prometheanfire> cirdan: this one will let me remote power cycle, which would be nice
[17:34:20] <cirdan> yup
[17:34:21] *** hyegeek <email@example.com> has joined #zfsonlinux
[17:34:40] <bunder> online not replace iirc
[17:34:42] <cirdan> I have 2x APC metered network PDUs
[17:34:59] <PMT> There's a bug for not being able to tell online to use a new device name that I can't immediately find
[17:35:06] <prometheanfire> PMT: my house has that problem with the printer... I think it's on the same circuit as one of the UPS's though
[17:35:16] <prometheanfire> I reduced the sensitivity and that helped
[17:35:16] <PMT> (The bug is simply that the functionality doesn't exist.)
[17:35:20] <ptx0> bunder: they are online UPS, right? then they have inverters
[17:35:23] <cirdan> laser printers shouldn't be on a UPS i learned
[17:35:36] <PMT> For which reason?
[17:35:37] <ptx0> bunder: you can't combine two inverters' outputs into a single circuit without phase sync
[17:35:41] <cirdan> my phaser would whine even with a 3k ups
[17:35:41] <prometheanfire> cirdan: oh, that problem I don't have :P
[17:36:02] <PMT> #3242 maybe
[17:36:05] <prometheanfire> ptx0: it's a fast cut over
[17:36:09] <cirdan> PMT: there's a large surge when the laser warms up, much more than most UPSs can deal with
[17:36:14] <cirdan> can't
[17:36:15] <ptx0> prometheanfire: then not really active-active eh
[17:36:22] <prometheanfire> ya, wrong words
[17:36:30] <cirdan> xerox said it'll fry the printer's board... and it did
[17:36:42] <ptx0> cirdan: sounds like poor design
[17:37:13] <cirdan> ptx0: sounds like how electronics work
[17:37:22] <prometheanfire> good for support contracts
[17:37:57] <cirdan> to be fair they said it needed something like at least a 3k va UPS because of the power spike
[17:38:25] <cirdan> and my ups would bitch when it warmed up, telling me it couldn't handle the load on battery
[17:38:26] <ptx0> 3,000 VA? that's stupid
[17:38:32] <ptx0> what does capacity have to do with draw
[17:38:39] <cirdan> yes
[17:39:58] <cirdan> "APC recommends a Smart-UPS series product that is sized for the maximum power draw of the laser printer as defined by the manufacturer. This is typically a 1500va or larger UPS. Even small Laser Printers can have very high maximum power draws, due to the nature of the technology. "
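For what it's worth, the VA figure in that recommendation is about instantaneous apparent power the UPS can pass through, not battery runtime; a rough back-of-envelope with hypothetical numbers (0.6 is a common power factor on entry-level units):

```shell
# Real watts = VA rating * power factor. Numbers here are illustrative only.
va=1500
pf_x10=6          # power factor 0.6, scaled by 10 to stay in integer arithmetic
echo "max continuous real load: $(( va * pf_x10 / 10 )) W"
# A laser fuser's warm-up spike can briefly exceed that ceiling, which is why
# undersized units complain -- hence "size for maximum draw, not average".
```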
[17:40:05] <ptx0> yeah but that's like Tesla saying you can't run an AC unit from the PowerWall
[17:40:15] <ptx0> you can, but it shortens the life of the battery pack
[17:40:41] *** beyondcreed <beyondcreed!~beyondcre@S0106000103cfc9f5.ok.shawcable.net> has quit IRC (Remote host closed the connection)
[17:40:51] <ptx0> i run a laser printer on solar power with a 1kw pure sine inverter
[17:41:12] <ptx0> the power draw isn't that great but it's also a modern laser printer
[17:41:29] <ptx0> apparently they can use LEDs?
[17:42:01] <ptx0> it's marketed as a laser printer but, no lasers, just LEDs
[17:42:12] <cirdan> laser leds are a thing
[17:42:23] <cirdan> ever hear of CD, DVD or BluRay?
[17:42:25] <ptx0> yes but this is not one
[17:43:16] *** catalase <catalase!~catalase@unaffiliated/catalase> has quit IRC (Ping timeout: 272 seconds)
[17:43:49] <ptx0> i like that the LED printer has almost no moving parts vs the B&W laser printer i've got that consumes about 1.8kw during startup and has a few mirrors internally that need to remain aligne
[17:43:52] *** catalase <catalase!~catalase@unaffiliated/catalase> has joined #zfsonlinux
[17:43:54] <ptx0> aligned
[17:44:14] <ptx0> if i used that thing in an RV it wouldn't last long :P
[17:44:45] <ptx0> apparently, it prints in colour too, using multiple colour toner cartridges and LED arrays
[17:46:34] <cirdan> yeah my colorqube uses at least 1.8kw i bet
[17:46:44] <cirdan> just to warm up
[17:46:57] <cirdan> says operating power is 250w
[17:48:21] <ptx0> yeah
[17:48:34] <ptx0> my B&W dims the lights in the office when it turns on but not during print
[17:50:46] <bunder> yes but you also run your solar panel off boat batteries :P
[17:51:04] <bunder> ups batteries are small
[17:52:31] <ptx0> not mine... i modified that shit
[17:52:41] <ptx0> you can run a UPS off whatever 12v cells you want :D
[17:53:10] <ptx0> the inverter itself is limited to 600W but the batteries i replaced with are 4x the original capacity
[17:53:37] <ptx0> of course, it made the thing way heavier and while i was moving, i really regretted it
[17:54:27] <prometheanfire> wonder what my Brother MFC-9330CDW uses
[18:07:20] *** f_g <firstname.lastname@example.org> has quit IRC (Ping timeout: 272 seconds)
[18:13:30] *** snehring <snehring!~snehring@2610:130:103:800::2> has quit IRC (Quit: Leaving)
[18:21:08] *** f_g <email@example.com> has joined #zfsonlinux
[18:27:15] *** snehring <snehring!~snehring@2610:130:103:800::2> has joined #zfsonlinux
[18:36:55] *** kaipee <firstname.lastname@example.org> has quit IRC (Remote host closed the connection)
[18:41:52] *** linuxstb <linuxstb!~linuxstb@unaffiliated/linuxstb> has joined #zfsonlinux
[18:42:36] *** elxa <elxa!~elxa@2a01:5c0:e08b:3931:dea4:d301:bd5b:95a1> has joined #zfsonlinux
[19:10:13] <bunder> but half the pool is missing, i'd say that's corrupted metadata
[19:12:37] <bunder> actually, i think their cachefile is outdated
[19:17:33] *** troyt <troyt!zncsrv@2601:681:4100:8981:44dd:acff:fe85:9c8e> has quit IRC (Ping timeout: 252 seconds)
[19:20:01] *** cbreak <email@example.com> has joined #zfsonlinux
[19:21:31] *** Floflobel_ <Floflobel_!~Floflobel@cosium-152-18.fib.nerim.net> has quit IRC (Remote host closed the connection)
[19:24:35] *** troyt <troyt!zncsrv@2601:681:4100:8981:44dd:acff:fe85:9c8e> has joined #zfsonlinux
[19:27:54] *** Zialus <Zialus!~RMF@184.108.40.206.rev.vodafone.pt> has joined #zfsonlinux
[19:49:02] *** zapotah <zapotah!~zapotah@unaffiliated/zapotah> has quit IRC (Remote host closed the connection)
[19:49:21] *** zapotah <zapotah!~zapotah@unaffiliated/zapotah> has joined #zfsonlinux
[20:18:19] *** hsp_ is now known as hsp
[20:52:20] <DHE> oh? activity on TRIM?
[20:54:08] <ptx0> yes
[20:54:14] <ptx0> as of last week he was rebasing and working on it
[20:54:30] <ptx0> last night pushed it finally
[21:02:35] <ghfields> Between that and nfs4acl, it really seems like they want to make this rebase thing really happen.
[21:04:34] *** shibboleth <shibboleth!~shibbolet@gateway/tor-sasl/shibboleth> has joined #zfsonlinux
[21:11:22] <cirdan> yeah well there's a ton of bug fixes in zol that would be hard to find and push back upstream
[21:11:26] <cirdan> small stuff but still bugs
[21:14:46] <DHE> TRIM is pretty big though. at least based on how loud people have been about it. :)
[21:15:04] <DHE> unfortunately I don't think it'll make 0.8.0 if release candidates have already been cut for it
[21:15:23] <cirdan> right, i mean reasons for switching to zol as upstream
[21:15:42] <bunder> we effectively are, with illumos being the only holdout
[21:16:37] <cirdan> right
[21:16:43] <cirdan> we will be soon though
[21:17:02] <cbreak> resistance is futile?
[21:17:23] <bunder> which begs the question of the openzfs repo but shrug
[21:17:29] <cirdan> you will be assimated
[21:17:58] *** catalase <catalase!~catalase@unaffiliated/catalase> has quit IRC (Remote host closed the connection)
[21:19:12] <ghfields> ass-i-mated is not what happened to picard
[21:19:32] *** Horge <Horge!~Horge@cpe-172-115-16-136.socal.res.rr.com> has joined #zfsonlinux
[21:23:17] <Horge> Hey all, having a brain fart here. I just created a new mirror on my ubuntu server, everything mounted up and got along just fine. Do I have to do anything specific to share that new pool on the network? upon 'zfs get sharenfs' and 'sharesmb' all values are
[21:23:33] <Horge> "off", but i can access my other pools just fine. Do I just need to restart the server?
[21:24:02] <cbreak> I just share normal
[21:24:05] <cbreak> via /etc/exports
[21:26:14] <Horge> is there a way to check how im currently sharing my files?
[21:26:38] <Horge> because if sharenfs and sharesmb are coming back negatory, im confused. I think i used samba but its legit been 5 years since ive created a new pool on this thing haha
[21:26:40] <cirdan> the normal way
[21:27:04] <cirdan> sharenfs and sharesmb are extras to make it easy to automate sharing but it's not the only way
[21:29:00] *** Celmor <Celmor!~Celmor@unaffiliated/celmor> has joined #zfsonlinux
[21:29:13] <DHE> sharenfs and sharesmb puts the sharing config into the pool itself, meaning that if you move the pool between systems the sharing config moves with it. but if you want to manually set up NFS, samba, or whatever then by all means...
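The two approaches DHE describes can be sketched side by side. These are config fragments, not meant to run as-is; the dataset name, subnet, and share name are all hypothetical:

```shell
# Option 1: property-based, stored in the pool itself, so it travels with the pool:
#   zfs set sharenfs='rw=@192.168.1.0/24' tank/media
#   zfs get sharenfs tank/media
#
# Option 2: classic /etc/exports entry, managed by the host:
#   /tank/media  192.168.1.0/24(rw,no_subtree_check)
# then apply with: exportfs -ra
#
# Option 3: a Samba share block in smb.conf:
#   [media]
#       path = /tank/media
#       read only = no
```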
[21:29:37] *** catalase <catalase!~catalase@unaffiliated/catalase> has joined #zfsonlinux
[21:29:48] <Celmor> checksum of a zfs send should not be changing if the dataset(s) aren't mounted, right?
[21:30:36] <Celmor> and no properties are manually changed (though I don't know if zfs ever automatically changes them)
[21:31:37] <PMT> ...what?
[21:32:35] <Horge> got it, im more trying to a) determine how it's currently setup, and b) add this new pool to that setup
[21:32:52] <DHE> sounds like you want a repeatable send?
[21:34:04] <Celmor> I'm doing `zfs send -cepDLR <dataset>@<snapshot> | md5sum` and the checksum seems to change every run
[21:34:30] <cirdan> how large is the snapshot?
[21:34:49] <cirdan> just wondering
[21:34:51] <Celmor> ~15G, many datasets/clones/snapshots though (created by docker's zfs storage driver)
[21:35:16] <cirdan> Celmor: save 2 and diff them?
[21:35:18] <Celmor> doing that to verify my backup with local state of the dataset but since the checksum apparently changes I can't do that
[21:35:35] <cirdan> I have a feeling there's a $date sent with it but i dont know
[21:36:32] <Celmor> how should I be diffing a byte stream, just hexdump and picking my way through non-legible data?
[21:37:00] <bunder> i wouldn't be surprised if the send stream is timestamped or something
[21:41:14] <cbreak> Celmor: ZFS transport streams already have checksums
[21:41:25] <cbreak> I don't think there's much point to checksum it further
[21:41:41] <Celmor> can I then read them out and compare smh?
[21:41:49] <cbreak> what do you want to read out?
[21:41:57] <cbreak> once you received it, it's there
[21:41:59] <Celmor> the checksum
[21:42:43] <pink_mist> Celmor: "smh"? 'shaking my head'?
[21:42:48] <Celmor> somehow
[21:43:14] <pink_mist> oh, first time I see smh used for that
[21:43:41] <Celmor> might just be me misinterpreting it the first time I've seen it and then using "wrong" from then on
[21:43:51] <Celmor> but it usually fits too
[21:46:19] <Horge> can someone please point me to what to google to share these pools "the normal way", ie without sharenfs or sharesmb. These are old pools being shared on ubuntu 14.04.5 to a windows PC (and mac) via set IP address
[21:46:48] <bunder> "smb.conf"
[21:46:51] <cirdan> Horge: /etc/samba
[21:47:16] <Horge> word, i just did samba-bin-directory /testparm \
[21:47:16] <Horge> > samba-configuration-directory /lib/smb.conf
[21:47:17] <Horge> and got smb.conf not found tho... ill keep diving
[21:47:36] <Horge> aye oops okay back at the top
[21:47:43] <Horge> thanks guys, been way too long
[21:49:24] <Sketch> who puts smb.conf in /lib?
[21:50:30] <Horge> dunno lol the guide on the oracle forums
[21:50:49] <PMT> Celmor: there's never been a guarantee that send streams would be the same across 2 runs even if the dataset(s) don't change. Why are you expecting one?
[21:51:33] <Celmor> I just want to verify that my backup results in an "equal" copy of the local state to guarantee a recover would revert to the expected state
[21:51:55] <Horge> ayee thank you fellas, got in to smb.conf <3
[21:53:08] <PMT> Celmor: short of parsing send streams yourself, good luck.
[21:54:24] *** King_InuYasha <King_InuYasha!~King_InuY@fedora/ngompa> has joined #zfsonlinux
[21:54:38] <cbreak> if oracle recommends it, then it's probably dumb.
[21:55:23] <PMT> (I don't recommend trying to parse send streams yourself. Nothing but suffering will result.)
[21:55:53] <cbreak> Celmor: you're supposed to receive send streams into a pool
[21:56:24] <PMT> Also, what are you trying to compare? The checksums on both sides won't even necessarily agree if they're the same checksum, because the newly added checksum types have per-pool seeds to their hashes
[21:56:26] <Celmor> can't even diff the streams, "memory exhausted"
[21:56:54] <PMT> Celmor: you still haven't clarified what it is you're trying to do, other than "compare [some part of] the send stream to [???]"
[21:57:08] <Celmor> cbreak, does a dry run of
[21:57:20] <Celmor> [21:35] <Celmor> doing that to verify my backup with local state of the dataset but since the checksum apparently changes I can't do that
[21:57:27] <PMT> "verify" how?
[21:57:43] <Celmor> [21:51] <Celmor> I just want to verify that my backup results in an "equal" copy of the local state to guarantee a recover would revert to the expected state
[21:57:57] <Celmor> but I guess the latter just isn't easily possible
[21:58:04] <PMT> The question isn't well-defined.
[21:58:32] <PMT> "I want to verify that a copy is identical in a way other than the existing integrity mechanisms"
[21:58:59] <Celmor> verify if receiving it would result in a usable dataset and should recover the same state it was in when I created the zfs send stream
[21:59:22] <Celmor> what is the existing integrity mechanism then?
[21:59:28] <PMT> Celmor: congrats, I think the problem you're trying to solve may be isomorphic to the halting problem. Good luck.
[22:00:48] <Celmor> well, I did specify that the latter probably isn't possible (guarantee that it would recover to the same state), I at least would like to compare if the contents of the streams (or rather the contents of the datasets without zfs metadata) are equal
[22:00:53] <PMT> (Less glibly, zfs send streams are a stream of the dataset(s) involved at a point in time. Barring implementation bugs, they're going to be identical. If you're concerned about serialization/deserialization implementation bugs, you already get to write your own.)
[22:01:05] <Celmor> and someone did say that zfs send streams contain checksums so I don't see why this is so impossible
[22:01:27] <PMT> Celmor: you're trying to implement something that already exists, and we're not sure why you think that's useful.
[22:01:46] <Celmor> again, what mechanism is it that already exists?
[22:01:51] *** buu <firstname.lastname@example.org> has quit IRC (Remote host closed the connection)
[22:03:58] <PMT> Celmor: the send streams are checksummed, just like every other thing. So receives will barf if the send stream has bits flip en route. "Identical state on both sides" is poorly specified - the data's not going to be in the same layout, the checksums and logical data sizes may vary, the data may be the same but if that's all you wanted to check why aren't you using rsync with some obscene number of flags?
[22:04:16] *** djdunn <email@example.com> has quit IRC (Ping timeout: 246 seconds)
[22:04:33] *** djdunn <firstname.lastname@example.org> has joined #zfsonlinux
[22:05:45] <PMT> Celmor: put more tersely, "what the hell functionality are you looking for other than what you could get with rsync or computing your own checksums of all the files on both sides or seeing if the recv is rejected for being mangled?"
[22:06:51] <PMT> (Also even if you wrote your own parser you'd be sad b/c you would basically need to replay the send stream on a pool to figure out if it'd produce the same results - all the data in streams with the hole_birth bug is intact, but some data is missing from said streams, resulting in the final product of receiving the stream differing without ever corrupting a bit.)
[22:08:41] <PMT> You appear to want to be able to use send streams as something like tape backups, where you can put them on the shelf and then replay them on demand. That was not, AFAIK, ever the use case, and the advice has always been that while you can save them to files and then receive from the file, the possibility of finding out that receiving won't work after having deleted the original dataset means it's not
[22:08:47] <PMT> really the suggested course.
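PMT's "checksums of all the files on both sides" suggestion can be sketched like this: build a sorted manifest of file hashes under each tree and compare the manifests. The demo trees under /tmp are hypothetical stand-ins for the source dataset and the restored copy:

```shell
# Equal manifests mean equal relative file names and contents.
manifest() {
    # subshell so the cd doesn't leak; NUL-delimited to survive odd filenames
    ( cd "$1" && find . -type f -print0 | sort -z | xargs -0 sha256sum )
}

mkdir -p /tmp/side-a/dir /tmp/side-b/dir
echo "same data" > /tmp/side-a/dir/f
echo "same data" > /tmp/side-b/dir/f

manifest /tmp/side-a > /tmp/a.sum
manifest /tmp/side-b > /tmp/b.sum
diff /tmp/a.sum /tmp/b.sum && echo "trees match"
```

This only compares file data and names, not ZFS-level properties or snapshots, which matches PMT's point that "identical state on both sides" needs defining before it can be checked.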
[22:09:04] *** cluelessperson <cluelessperson!~cluelessp@unaffiliated/cluelessperson> has quit IRC (Ping timeout: 252 seconds)
[22:10:52] <Celmor> what I wanna test is the integrity of my backup script rather than zfs send/recv itself, e.g. if I do `zfs send ... | gzip | rclone rcat <remote>` and later do `rclone cat <remote> | gunzip | zfs recv ...` I wanna know if I get the same data back from the remote and uncompressing it results in the same data which zfs can handle (e.g. by verifying the checksums from the send stream)
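Since a second `zfs send` run may not be byte-identical, one way to get what Celmor wants is to checksum the single stream as it is produced, then compare against the copy pulled back from the remote. In this sketch `producer` is a hypothetical stand-in for `zfs send -cepDLR ...` (a real send needs a pool), and the rclone legs are comments:

```shell
producer() { printf 'pretend this is a zfs send stream\n'; }

# Hash the uncompressed stream in-flight via a FIFO, so it is only read once:
rm -f /tmp/sumpipe; mkfifo /tmp/sumpipe
sha256sum < /tmp/sumpipe | awk '{print $1}' > /tmp/stream.sum &
producer | tee /tmp/sumpipe | gzip > /tmp/stream.gz   # real life: | rclone rcat <remote>
wait                                                  # ensure the checksum landed

# Later, verify the remote copy before ever invoking zfs recv:
#   rclone cat <remote> | gunzip | sha256sum     -- compare with /tmp/stream.sum
restored=$(gunzip < /tmp/stream.gz | sha256sum | awk '{print $1}')
[ "$restored" = "$(cat /tmp/stream.sum)" ] && echo "stream intact"
```

This catches corruption introduced by the compression, upload, or storage legs, but, per PMT's caveat, it cannot prove the stream itself would receive cleanly; only an actual `zfs recv` does that.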
[22:13:15] <cbreak> Celmor: what's the problem with doing the normal thing?
[22:13:20] <cbreak> and just receive the stream?
[22:13:28] <cbreak> after you're done receiving, you have the same snapshot
[22:13:35] <Celmor> so referring to what I asked earlier, since zfs send streams contain checksums to verify the streams, if I receive a stream, does it verify if I use the dry-run option or do I actually have to receive it (without that option)?
[22:14:05] <Celmor> cbreak, what's the normal thing? how do I know I have the "same snapshot"?
[22:14:36] <cbreak> once you've received it
[22:14:46] <cbreak> you know
[22:14:51] <cbreak> by having received it
[22:15:10] <Celmor> you mean through zfs receive verifying the stream and successfully completing?
[22:15:42] <Celmor> that's what I'm asking, if I have to receive the stream to verify or if a dry-run can verify it too
[22:16:55] <PMT> Celmor: "you don't". Standard tools like rsync or checksum lists in the same checksum and salt on both sides apply if that's what you're looking for.
[22:17:18] *** buu <email@example.com> has joined #zfsonlinux
[22:17:23] <PMT> I have no earthly idea what recv -n will and won't do. I've never used it for anything other than to tell me where it'd receive it.
[22:17:27] <cbreak> Celmor: why would you not receive it?
[22:17:46] <PMT> cbreak: because Celmor wants to shove it into some remote data backup location and receive it in the future if something catches fire
[22:17:59] <cbreak> put a pool at the remote loc
[22:18:02] <PMT> And presumably said remote location just provides a blockstore and not a recv target
[22:18:16] <PMT> cbreak: I know this will shock you, but not all backup providers allow arbitrary code execution.
[22:18:31] <cbreak> :O
[22:18:37] * cbreak is shocked. SHOCKED.
[22:20:27] <Celmor> or zfs pools for that matter
[22:20:31] *** buu <firstname.lastname@example.org> has quit IRC (Remote host closed the connection)
[22:21:12] <cbreak> and here I thought ZFS was the last filesystem we'd ever need...
[22:21:19] <Celmor> this may come too but backup providers which provide zfs pools as a receive location are expensive
[22:24:14] <Celmor> at this point I just wanna test if `zfs recv -n` verifies the stream or not, for that I need to be able to run one without erroring...
[22:24:23] <bunder> hardware, storage, electricity, internet and a building are expensive :P
[22:24:55] *** gerhard7 <email@example.com> has quit IRC (Quit: Leaving)
[22:25:24] <Celmor> theoretically it should be cheaper to run a zfs pool than run a secure/properly setup web server, custom API endpoints, own file versioning, etc.
[22:25:43] <cbreak> Celmor: the problem's that you can't incremental receive without having the base I think
[22:26:16] <bunder> i had thought about that too but he didn't mention inc (i looked :P )
[22:26:20] <CompanionCube> Celmor: well, if you want to not be parsing people's arbitrary send streams in your shared host kernel it could get expensive
[22:28:20] <Celmor> either "destination ... exists" or "destination ... does not exist"
[22:32:21] *** Horge <Horge!~Horge@cpe-172-115-16-136.socal.res.rr.com> has left #zfsonlinux
[22:34:24] <PMT> CompanionCube: I mean, you can theoretically punt all the recv processing code into userland, the problem is doing something useful with it, since you still basically need to "play" the send stream to figure out what happens
[22:34:33] <PMT> And then you need a source of truth that's not the send stream to compare against
[22:35:50] <PMT> Celmor: I really think you're trying to use a screwdriver to hammer a nail in.
[22:38:08] *** akaizen <firstname.lastname@example.org> has joined #zfsonlinux
[22:38:33] <Celmor> well, what do you recommend
[22:39:01] <Celmor> and why does that zfs recv not work
[22:40:20] *** shibboleth <shibboleth!~shibbolet@gateway/tor-sasl/shibboleth> has quit IRC (Quit: shibboleth)
[22:41:26] <PMT> Since your link doesn't actually load, I can't tell you.
[22:45:11] <PMT> ...is the snapshot you're trying to recv incremental?
[22:45:37] <Celmor> it was created using -R
[22:45:52] <PMT> Just -R?
[22:47:04] <Celmor> sudo zfs send -cepDLR DataRecover/Backup/docker@sync-d2018-12-28-17-57 | pigz | ...
[22:47:13] <PMT> ...christ, why -D?
[22:47:16] <PMT> Anyway.
[22:47:20] <Celmor> dedup
[22:47:24] <PMT> I know what -D does.
[22:47:33] <Celmor> so why not
[22:47:49] <PMT> But why, though? You know it won't use any on-disk dedup information, and if you're not using sha256 or another no-verify-required checksum it's going to suck?
[22:48:02] <Celmor> "Blocks which would have been sent multiple times in the send stream will only be sent once." sounds like something I would want
[22:48:30] <PMT> If you're heavily optimizing for bandwidth over enormous CPU and memory usage increase, sure.
[22:49:18] <Celmor> that's the idea
[22:49:25] <PMT> Celmor: FWIW, zfs send -Rv mypool/beep@snap5 | zfs recv -n foo/bar (where bar does not exist) for me says (among other things) "would receive XYZG stream" and then attempts to process it like actually receiving, but without arriving on disk.
[22:49:48] <Celmor> didn't have memory or cpu resource problems so far so I don't see why I should omit it
[22:49:58] <PMT> Also, you know -p is implied by -R, right?
[22:50:06] <Celmor> yeah
[22:51:10] <PMT> Celmor: those flags work for me, so I have no idea what's going on for you. Is that docker dataset a zvol?
[22:51:51] <PMT> Also, what version?
[22:54:59] <Celmor> 0.7.12
[22:55:40] <PMT> Likewise.
[22:55:59] <PMT> And I don't mean "I never had that problem", I mean "I literally just copy-pasted that flagset into a terminal and tried it"
[22:57:48] <PMT> So as I said, is that dataset a zvol?
[22:58:41] <Celmor> no
[23:00:04] <PMT> Huh.
[23:00:20] <PMT> If you directly pipe it from send to recv does it error like that?
[23:00:38] <PMT> (Note that I don't expect the answer to be "no", but just trying to eliminate variables.)
[23:03:20] <PMT> Celmor:
[23:03:22] <Celmor> "cannot receive incremental stream: destination 'DataRecover/Backup/dockerTest' does not exist", same without -F
[23:03:53] <PMT> Celmor: I _think_ that's completing the initial recv and then failing on the incremental embedded b/c the first one doesn't exist (for obvious reasons)
[23:05:18] <Celmor> wondering why `zfs send -R ... | zfs recv` gives me 'would receive full stream ...' before that error but the stream read from file does not
[23:05:49] <PMT> Celmor: are you using -v on the recv when doing unpigz?
[23:06:31] *** tlacatlc6 <email@example.com> has joined #zfsonlinux
[23:07:26] <Celmor> unpigz </Data/temp/DataRecover\\Backup\\docker@sync-d2018-12-28-17-57.zfs-R.gz | zfs recv -Fnv DataRecover/Backup/dockerTest
[23:08:09] <PMT> It's also erroring on cannot recv new filesystem, not cannot recv incremental.
[23:09:26] *** buu <firstname.lastname@example.org> has joined #zfsonlinux
[23:09:36] <Celmor> so the zfs send stream in the file is somehow corrupt?
[23:09:50] <PMT> I don't know? It should throw a more different error if that happens.
[23:10:55] <Celmor> I guess I should just be checksumming any stream i create and save to a file from now on
[23:12:25] *** cluelessperson <cluelessperson!9f41427d@gateway/web/freenode/ip.220.127.116.11> has joined #zfsonlinux
[23:12:29] <cluelessperson> Hi there
[23:12:40] <cluelessperson> I'm running proxmox against a ZFS over NFS
[23:13:04] <cluelessperson> anyway, on the ZFS machine, I ran a "rsync" operation limited to bandwidth of 90M
[23:13:06] <cluelessperson> welp, too much
[23:13:14] <cluelessperson> first proxmox VMs crash
[23:13:35] <cluelessperson> I try to kill the rsync operation, but it's stuck
[23:13:47] <cluelessperson> txg_sync is stuck forever at 99% io load
[23:13:51] <cluelessperson> whatever, I force a reboot
[23:14:13] <cluelessperson> now. "a start job is running for mount ZFS filesystems 10min/no limit"
[23:14:20] <cluelessperson> so, it's pretty much screwed and I don't know what to do
[23:14:34] <PMT> I doubt it's screwed. It's probably cleaning up after whatever was stuck.
[23:14:45] <PMT> It's not impossible that it's screwed, but it's not remotely a good idea to conclude that yet.
[23:15:08] <cluelessperson> PMT: agreed, I suppose I should just leave it for a few hours and let it unfrick itself
[23:15:33] <PMT> cluelessperson: it probably shouldn't take that long ~ever (unless you're using dedup maybe)
[23:15:41] <cluelessperson> PMT: well, before I forced a reboot (improper shutoff ahem), it showed txg_sync at 600~ K/s
[23:16:10] <PMT> Oh look, and my logs say you are, in fact, using dedup. Yeah it'll take awhile.
[23:16:11] <cluelessperson> PMT: yes, I had dedup on, and the thing I did rsync on was probably a deduped file, of either 256 or 512 GB
[23:16:29] <cluelessperson> I'd prefer to stop whatever it's doing
[23:16:33] <cluelessperson> and tell it to delete that shit
[23:16:37] <cluelessperson> and forget that data existed
[23:16:51] <cluelessperson> as long as it's only messing with that file
[23:16:57] *** mmlb <email@example.com> has quit IRC (Ping timeout: 246 seconds)
[23:18:10] <PMT> cluelessperson: I'm sure it's working on it.
[23:18:49] <PMT> #8142 might be a thing that would be useful depending on where it's spending its days
[23:19:02] *** Freeaqingme <Freeaqingmefirstname.lastname@example.org> has joined #zfsonlinux
[23:20:07] <cluelessperson> PMT: thanks for your help
[23:20:23] <cluelessperson> I don't mean to be terse, just... upset
[23:21:28] <PMT> I'm not particularly concerned or complaining, just informing you. The only reason I can think of for it to be taking so long to import is if it's having to serialize on rolling back something stupid, or stuck behind doing something in dedup, or both.
[23:23:35] <ptx0> cluelessperson: you're probably stuck this way until it fixes itself
[23:23:44] <ptx0> don't use dedup if you don't know what you're doing
[23:26:20] <PMT> There was a proposal before OpenZFS became a thing to actually hide the UX things for dedup unless it's already enabled and mark it as "no seriously don't this shit is deprecated" in the manual.
[23:27:02] <cluelessperson> ptx0: ... out of memory, kill process, score 0 or sacrifice child"
[23:27:14] *** simukis <email@example.com> has quit IRC (Quit: simukis)
[23:27:16] <cluelessperson> killed process 679 blkmapd
[23:27:24] <cluelessperson> 48GB of ram. :(
[23:27:46] <cbreak> only? :O
[23:28:08] <PMT> I really did mean it when I said dedup used a _loooooooooooot_ of RAM.
[23:28:21] <cbreak> your pool is smaller than 48TB then?
[23:28:35] <PMT> Also blkmapd sounds like it tried starting something other than ZFS, while I'd have expected it to block on all the filesystems mounting to bring up pNFS
[23:29:14] <PMT> Maybe it just spawned the kernel thread. But then IDK why it'd pick that to snipe.
[23:30:01] *** adilger <adilger!~adilger@S0106a84e3fe4b223.cg.shawcable.net> has joined #zfsonlinux
[23:32:43] <cluelessperson> cbreak: my pool is about 48 TB raw with 32 TB usable storage
[23:33:35] *** fs2 <firstname.lastname@example.org> has quit IRC (Quit: Ping timeout (120 seconds))
[23:34:56] *** fs2 <email@example.com> has joined #zfsonlinux
[23:44:26] *** xlued <firstname.lastname@example.org> has joined #zfsonlinux
[23:56:35] *** eightyeight <eightyeight!~eightyeig@oalug/member/eightyeight> has quit IRC (Remote host closed the connection)
[23:58:24] *** eightyeight <eightyeight!~eightyeig@oalug/member/eightyeight> has joined #zfsonlinux