January 3, 2019
[00:00:17] <ptx0> haha that is horrendous
[00:00:30] <ptx0> 32tb pool with dedup and how many GB ram?
[00:00:40] <ptx0> 48gb
[00:00:49] <ptx0> 'nope'
[00:02:55] *** elxa <elxa!~elxa@2a01:5c0:e08b:3931:dea4:d301:bd5b:95a1> has quit IRC (Ping timeout: 252 seconds)
[00:07:38] <cluelessperson> ptx0: :( I don't hear as much disk activity anymore
[00:07:44] <cluelessperson> I think I should reboot it again
[00:08:35] <cirdan> trying for more corruption?
[00:10:14] <Celmor> PMT, it worked when I was directly piping the zfs send in from the remote, not sure why it never worked from the file when I downloaded it from the remote beforehand https://ptpb.pw/8VlN
[00:10:50] <Celmor> so I guess `zfs send -n`
[00:10:54] *** hoonetorg <hoonetorg!~hoonetorg@77.119.226.254.static.drei.at> has joined #zfsonlinux
[00:10:57] <Celmor> doesn't verify internal checksums
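A minimal sketch of sanity-checking a saved send stream without receiving it (the file name is a placeholder; zstreamdump parses the stream records and verifies their checksums as it goes, so a corrupted download should show up here):

    zstreamdump < pool-backup.zfs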
[00:14:07] <cluelessperson> ptx0: only a few TB or so was meant to be deduped
[00:16:28] <cbreak> so you only set the dedupe flag for a small dataset?
[00:16:37] <MilkmanDan> ptx0: But how about 32tb pool with dedup and one of them fancy 1/2tb Optane DIMMs and 48gb ram?
[00:16:46] <MilkmanDan> Oh and MAC obviously.
[00:16:47] <cbreak> you can zfs send that dataset to one without dedupe to disable it
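A rough sketch of that migration, assuming dataset names are placeholders and that `zfs receive -o` (0.7 or newer) is available:

    zfs snapshot -r tank/deduped@migrate
    zfs send -R tank/deduped@migrate | zfs receive -o dedup=off tank/plain
    # once verified, destroy tank/deduped and rename tank/plain into place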
[00:18:00] <MilkmanDan> I guess it might have to be the PCI version until Linux gets support for the DIMMs. I can't afford them so I haven't bothered to check.
[00:18:31] <cbreak> I recently bought a samsung 970 EVO 1 TB
[00:18:58] <cbreak> it's super fast :)
[00:19:02] <cbreak> unless I use ZFS :(
[00:19:37] <cbreak> https://the-color-black.net/pics/HFS+-Encrypted.png vs https://the-color-black.net/pics/ZFS-AES-128-GCM.png vs https://the-color-black.net/pics/ZFS-Unencrypted.png
[00:20:09] <cbreak> didn't find a good IO benchmark program for linux yet, /dev/urandom seems too slow to test anything with :(
[00:21:35] <cluelessperson> ptx0: so hard disk activity seems.. like it's died down
[00:21:37] <cluelessperson> should I reboot?
[00:22:45] <MilkmanDan> cbreak: Use openssl encrypting /dev/zero instead.
[00:23:01] <cbreak> is that fast enough?
[00:23:20] <MilkmanDan> It's as fast as your processor can encrypt a datastream, so it should be.
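Something like this is the usual trick: encrypting /dev/zero produces an incompressible stream at roughly CPU speed (target path, cipher, and sizes here are arbitrary):

    openssl enc -aes-128-ctr -pass pass:bench -nosalt < /dev/zero \
        | dd of=/tank/bench/testfile bs=1M count=8192 conv=fsync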
[00:24:42] *** djdunn <djdunn!~djdunn@fl-184-2-23-179.dhcp.embarqhsd.net> has quit IRC (Ping timeout: 250 seconds)
[00:24:45] <MilkmanDan> Not that I'd necessarily want to use a SATA connected SSD for a MAC'ed dedup table, but anyway....
[00:25:03] <cbreak> have to try that the next time I boot linux
[00:26:52] *** djdunn <djdunn!~djdunn@fl-184-2-23-179.dhcp.embarqhsd.net> has joined #zfsonlinux
[00:27:03] <cbreak> openssl speed aes-128-cbc is disappointing: 167168.85k 184966.19k 189894.49k 188802.05k 191676.47k
[00:27:18] <cbreak> on os x, maybe the AES isn't optimized over here
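Worth noting: without -evp, `openssl speed` exercises the generic C implementation rather than the AES-NI path, which would explain numbers under ~200 MB/s on a modern CPU. A quick check, assuming an AES-NI capable machine:

    openssl speed -evp aes-128-cbc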
[00:27:58] <MilkmanDan> If all you want is some mass of data to throw at a drive that won't be compressed down to nothing (ie. /dev/zero) for benchmarking, you can just as easily use a directory full of mp3s, or download a big video from Youtube, etc.
[00:29:06] <cbreak> hmm... or I can just loop /dev/urandom output
[00:29:33] <cbreak> I tried iozone, but that kind of squashed me with options :)
[00:30:05] <cbreak> (I'm wondering if the ZFS on Linux is faster than the one on OS X)
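One way to loop /dev/urandom output without being bottlenecked by it: generate a chunk once, then replay it (paths and sizes are arbitrary):

    dd if=/dev/urandom of=/tmp/rand.1g bs=1M count=1024
    for i in $(seq 1 8); do cat /tmp/rand.1g; done | dd of=/tank/bench/rand.out bs=1M conv=fsync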
[00:30:06] <zfs> [zfsonlinux/zfs] port async unlinked drain from illumos-nexenta (#8142) new review comment by Brian Behlendorf <https://github.com/zfsonlinux/zfs/pull/8142#discussion_r244864944>
[00:34:16] *** rjvbb <rjvbb!~rjvb@2a01cb0c84dee6009bda76eb03bc33f7.ipv6.abo.wanadoo.fr> has quit IRC (Ping timeout: 252 seconds)
[00:34:30] *** jasonwc <jasonwc!~jasonwc@pool-72-66-15-203.washdc.fios.verizon.net> has joined #zfsonlinux
[00:37:34] <Freeaqingme> Hi folks, I ran into this repo; https://github.com/zfsonlinux/fstest . Does anybody know if it's still in anyway significant to ZoL?
[00:37:53] <cluelessperson> anyone know?
[00:43:30] <DeHackEd> well, it's under the zfsonlinux group, so I'm going to say "yes"
[00:43:45] <DeHackEd> oh WOW that's old..
[00:44:01] <cbreak> MilkmanDan: openssl on linux is just as slow as on OS X. But just copying /dev/urandom output repeatedly gave reasonable values
[00:44:03] <zfs> [zfsonlinux/zfs] port async unlinked drain from illumos-nexenta (#8142) new review comment by Alek P <https://github.com/zfsonlinux/zfs/pull/8142#discussion_r244886567>
[00:44:08] <cbreak> thanks for the idea :)
[00:44:21] <cbreak> seems I get about 1.9 GB/s
[00:44:31] <cbreak> still only 60% of what HFS gets
[00:44:42] <Freeaqingme> DeHackEd: yeah, but last commit from 2011, and there was only 2 days of activity in the repo. So it could also have been a small experiment by behlendorf that was never acted on later on
[00:44:51] <cluelessperson> So, on boot, my ZFS machine gets stuck saying "a start job is running for mount ZFS task txg_sync 2590 blocked for more than 120 seconds
[00:44:52] <cbreak> but better than 1.1 GB/s of ZFS on OS X
[00:44:53] <cluelessperson> what do I do now?
[00:46:04] <cbreak> next step: try zfs encryption on linux... but I kind of don't dare update to 0.8 since my ubuntu boots from ZFS and I have no idea if the update would break everything :)
[00:46:23] <cbreak> (and in particular if grub can boot from an initrd on an encrypted zpool... which I doubt)
[00:46:39] <DeHackEd> cbreak also enabling encryption renders the pool unusable from other versions of ZFS (unless all encrypted datasets are destroyed)
[00:46:54] <cbreak> DeHackEd: yeah, I have encryption on OS X
[00:47:00] <cbreak> most of my pools can't be read on linux :(
[00:47:18] <cbreak> somehow there are features in either version that the other doesn't have
[00:47:44] <cbreak> accidentally made my linux pool unimportable (unless I RO it) by enabling some linux only feature
[00:48:01] <cbreak> user accounting somethingsomething
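For reference, a read-only import is the usual escape hatch when a pool carries features the other platform can't write (pool name is a placeholder; a bare `zpool import` with no arguments lists importable pools and flags unsupported features):

    zpool import -o readonly=on tank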
[00:48:25] <cbreak> ah well. Luckily I was able to zfs send it to some other pool, destroy it, recreate it without the feature, and send back :)
[00:48:30] <cbreak> no reinstallation needed
[00:48:31] <cbreak> zfs rocks
[00:48:36] <cbreak> ...
[00:50:15] <zfs> [zfsonlinux/zfs] port async unlinked drain from illumos-nexenta (#8142) new review comment by Alek P <https://github.com/zfsonlinux/zfs/pull/8142#discussion_r244887423>
[00:53:14] *** Markow <Markow!~ejm@176.122.215.103> has quit IRC (Quit: Leaving)
[00:53:39] <DeHackEd> Freeaqingme: since the current test suite includes the XFS test suite (??) I think this is obsolete
[00:54:32] <Freeaqingme> DeHackEd: you can configure in a makefile what FS it should compile for. That's been set to ZFS. Also, the originating project was more or less abandoned, so the fact that no more updates have come in doesn't necessarily mean something.
[00:55:09] <Freeaqingme> What I'm looking for is any sort of opinions how good the test suite was when it was evaluated for ZFS (or if it still is)
[00:56:39] <zfs> [zfsonlinux/zfs] port async unlinked drain from illumos-nexenta (#8142) new review comment by Alek P <https://github.com/zfsonlinux/zfs/pull/8142#discussion_r244888349>
[01:02:08] *** amir <amir!~amir@unaffiliated/amir> has quit IRC (Read error: Connection reset by peer)
[01:03:16] *** amir <amir!~amir@unaffiliated/amir> has joined #zfsonlinux
[01:05:31] <zfs> [zfsonlinux/zfs] OpenZFS: 'vdev initialize' feature (#8230) comment by Brian Behlendorf <https://github.com/zfsonlinux/zfs/issues/8230#issuecomment-451023592>
[01:06:42] <ptx0> cluelessperson: the DDT grows forever, basically
[01:07:12] <ptx0> MilkmanDan: if the ddt is stored purely on optane you may be in luck, though freeing / deleting files will still suck assholes
[01:15:06] <zfs> [zfsonlinux/zfs] zed and udev thrashing with repeated online events (#7366) comment by Brian Behlendorf <https://github.com/zfsonlinux/zfs/issues/7366#issuecomment-451025018>
[01:22:53] <zfs> [zfsonlinux/zfs] port async unlinked drain from illumos-nexenta (#8142) new review comment by Alek P <https://github.com/zfsonlinux/zfs/pull/8142#discussion_r244891588>
[01:32:34] *** Nukien <Nukien!~Nukien@162.250.233.55> has quit IRC (Ping timeout: 252 seconds)
[01:37:02] *** Nukien <Nukien!~Nukien@162.250.233.55> has joined #zfsonlinux
[01:41:18] *** djdunn <djdunn!~djdunn@fl-184-2-23-179.dhcp.embarqhsd.net> has quit IRC (Ping timeout: 245 seconds)
[01:41:32] *** djdunn <djdunn!~djdunn@fl-184-2-23-179.dhcp.embarqhsd.net> has joined #zfsonlinux
[01:41:42] *** shibboleth <shibboleth!~shibbolet@gateway/tor-sasl/shibboleth> has joined #zfsonlinux
[01:46:43] <ptx0> bunder: https://youtu.be/M2LOMTpCtLA
[01:49:30] <ptx0> are you looking into getting a 2990WX?
[01:49:55] <ptx0> i mean it's $2399 on Amazon.ca :P
[01:50:12] <ptx0> the $582 for the 1920x looks so f'n reasonable by comparison
[01:50:49] <ptx0> aw, the 1920x is now $20 cheaper than the 1900x when i got mine in may
[01:54:54] <Shinigami-Sama> my 1080ti still isn't here
[01:55:00] <Shinigami-Sama> amazon says 12 more days..
[01:55:43] <ptx0> wow, windows linux subsystem does not have any NUMA awareness
[01:55:46] <ptx0> what the hell
[01:56:11] <ptx0> when he runs `lstopo` it just shows one 32 core package
[01:56:21] <cirdan> windows linux subsystem doesn't have a shitton
[01:56:35] <Shinigami-Sama> ENOTIMPLIMENTED
[01:56:37] <cirdan> it's like the only goal was for ssh to work
[01:56:46] <cirdan> mtr and frends don't work
[01:57:02] <Shinigami-Sama> its goal was to have people stop complaining about "ls" and "dir"
[01:57:06] <cirdan> the consoles are all lulz and i dont think any can update in the background
[01:57:11] <Shinigami-Sama> don't get ahead of yourselves
[01:59:17] <cirdan> heh. when you du -sh * a directory and see some things you forgot about
[01:59:40] <ptx0> use du -h -d1 .
[01:59:43] <ptx0> it is better
[01:59:48] <ptx0> you can add -x to avoid crossing fs boundaries
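A small variation on the same idea, sorted so the biggest directories end up at the bottom (assumes GNU du/sort for the -h handling):

    du -xh -d1 . | sort -h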
[02:07:18] <DeHackEd> when you trace down where all the disk usage is going, bring it up with the devs, and they say "oh, that's a debug thing we don't need anymore"
[02:14:21] <PMT> cirdan: I mean, they're rapidly iterating on making more work.
[02:17:28] <zfs> [zfsonlinux/zfs] NFSv4 ACL support - WIP, review requested (#7728) comment by Brian Behlendorf <https://github.com/zfsonlinux/zfs/issues/7728#issuecomment-451033918>
[02:22:05] <zfs> [zfsonlinux/zfs] port async unlinked drain from illumos-nexenta (#8142) new review comment by Brian Behlendorf <https://github.com/zfsonlinux/zfs/pull/8142#discussion_r244897946>
[02:28:45] *** shibboleth <shibboleth!~shibbolet@gateway/tor-sasl/shibboleth> has quit IRC (Remote host closed the connection)
[02:35:27] <cluelessperson> ptx0: well, I only deduped a few files
[02:35:32] <cluelessperson> and those are almost all deleted now
[02:35:37] <cluelessperson> Sector size (logical/physical): 512 bytes / 4096 bytes
[02:35:42] <cluelessperson> what is the actual block size?
[02:35:48] <PMT> 4096.
[02:36:30] <cirdan> 4k
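For a 512e/4Kn drive like that, the usual checks look roughly like this (pool and device names are placeholders):

    zdb -C tank | grep ashift                      # what the existing vdevs were created with
    zpool create -o ashift=12 tank mirror sda sdb  # force 4K allocation on a new pool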
[02:41:40] *** mmlb <mmlb!~mmlb@76-248-148-178.lightspeed.miamfl.sbcglobal.net> has joined #zfsonlinux
[03:02:46] *** adilger <adilger!~adilger@S0106a84e3fe4b223.cg.shawcable.net> has quit IRC (Ping timeout: 244 seconds)
[03:05:03] *** Celmor <Celmor!~Celmor@unaffiliated/celmor> has quit IRC (Ping timeout: 245 seconds)
[03:08:02] *** Dagger <Dagger!~dagger@sawako.haruhi.eu> has quit IRC (Excess Flood)
[03:08:56] *** Dagger <Dagger!~dagger@sawako.haruhi.eu> has joined #zfsonlinux
[03:11:01] *** rjvb <rjvb!~rjvb@2a01cb0c84dee60009f5aa51703ac078.ipv6.abo.wanadoo.fr> has quit IRC (Ping timeout: 252 seconds)
[03:52:29] <cluelessperson> hrm, thanks
[04:08:40] <bunder> @ptx0 | are you looking into getting a 2990WX? -- i bought a 2950x
[04:09:06] <bunder> if i ever get enough time off work to order parts
[04:10:07] * DeHackEd is gonna wait for Epyc 2 and see how that goes...
[04:10:14] <DeHackEd> (hopefully really really well but here's hoping)
[04:10:49] <DeHackEd> 64 cores, 128 threads, dual socket...
[04:10:59] <DeHackEd> and based on what ryzen 3000 will be
[04:11:10] <ptx0> at 1.4GHz
[04:11:12] <ptx0> ^_^
[04:11:14] <bunder> but my wallet
[04:11:34] <DeHackEd> oh they'll have 16 and 32 core variants for the people who can only afford a mere $1500 per CPU
[04:11:37] <ptx0> bunder: the gods need sacrifices from time to time
[04:11:38] <DeHackEd> (taking a guess at pricing)
[04:11:54] <DeHackEd> ptx0: oh I'd think at least 1.8 GHz
[04:12:11] <ptx0> are you not into xeon phi
[04:12:21] <DeHackEd> no...
[04:12:27] <bunder> ptx0: i already get to sacrifice to cablemod because nobody makes power supplies with 9 molex connectors :P
[04:15:29] *** Essadon <Essadon!~Essadon@81-225-32-185-no249.tbcn.telia.com> has quit IRC (Quit: Qutting)
[04:17:27] <ptx0> was just thinking 'you could build two 2950 for one 2990wx' but no, really, the board and ram are a fair penny, plus chassis, psu...
[04:17:33] <ptx0> that 2990wx is a great deal, man
[04:18:24] <bunder> i'll pass, the wx has a weird Infinity Fabric layout
[04:18:28] <ptx0> bunder: you could have parts shipped to me if you need someone to, ye know, sign for them
[04:19:50] <bunder> if i was going to do that i'd ship them at my work address
[04:20:01] <bunder> dunno how kindly they would take to that though
[04:20:05] *** buu <buu!~buu@99-74-60-251.lightspeed.hstntx.sbcglobal.net> has quit IRC (Remote host closed the connection)
[04:26:27] *** biax__ <biax__!~biax@unaffiliated/biax> has joined #zfsonlinux
[04:28:48] *** biax_ <biax_!~biax@unaffiliated/biax> has quit IRC (Ping timeout: 246 seconds)
[04:28:57] *** biax__ is now known as biax_
[04:36:32] *** MTecknology <MTecknology!~Mike@nginx/adept/mtecknology> has joined #zfsonlinux
[04:43:00] *** cluelessperson is now known as Guest85916
[04:46:49] *** Guest85916 <Guest85916!9f41427d@gateway/web/freenode/ip.159.65.66.125> has quit IRC (Ping timeout: 256 seconds)
[04:49:04] *** Celmor <Celmor!~Celmor@unaffiliated/celmor> has joined #zfsonlinux
[04:49:52] <Celmor> "62.1G scanned out of 3.46T at 12.9M/s, 76h56m to go" scrub has never been that slow...
[04:53:51] <bunder> so stop using dedup
[04:55:53] <MilkmanDan> Was there an actual PR to undocument the feature and put warnings in the binaries?
[04:56:18] <MilkmanDan> I think it would have saved an awful lot of misery...
[04:57:10] <Celmor> bunder, I'm not
[04:57:36] <ptx0> MilkmanDan: #5182 makes dedup not a pile of poop
[04:57:40] <zfs> [zfs] #5182 - Metadata Allocation Classes by don-brady <https://github.com/zfsonlinux/zfs/issues/5182>
[04:59:20] <zfs> [zfsonlinux/zfs] NFSv4 ACL support - WIP, review requested (#7728) comment by "Paul B. Henson" <https://github.com/zfsonlinux/zfs/issues/7728#issuecomment-451051160>
[05:10:01] <MilkmanDan> ptx0: Yes, of course. That was my point earlier. :)
[05:11:00] <MilkmanDan> I just mean that until MAC gets activated in 0.8 or whenever, dedup is a timebomb of misery and regret.
[05:11:57] <MilkmanDan> At the very least I'd take it out of the man pages and make the binaries issue terrifying threats unless you were running git mainline.
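For context, this is roughly the shape the allocation-classes feature takes in 0.8: metadata (and optionally the DDT) can live on a dedicated fast vdev (device and dataset names are placeholders):

    zpool add tank special mirror nvme0n1 nvme1n1
    zfs set special_small_blocks=32K tank/data     # small blocks also land on the special vdev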
[05:21:50] *** Llewelyn <Llewelyn!~derelict@184.12.106.191> has quit IRC (Remote host closed the connection)
[05:32:39] <zfs> [zfsonlinux/zfs] NFSv4 ACL support - WIP, review requested (#7728) new commit by "Paul B. Henson" <https://github.com/zfsonlinux/zfs>
[05:44:25] *** tlacatlc6 <tlacatlc6!~tlacatlc6@68.202.46.96> has quit IRC (Quit: Leaving)
[05:58:10] <ptx0> MilkmanDan: no, i have a customer with ~20tb of deduplicated data
[05:58:12] <ptx0> it works great
[05:58:17] <ptx0> but they don't delete data
[05:58:37] <ptx0> i'm sure if they did i would be getting on my hands and knees and... well...
[06:05:59] <MilkmanDan> Oh, interesting. So that's the "normal" "non-exotic" use case where it makes perfect sense and doesn't take an insane amount of RAM?
[06:08:10] <ptx0> no
[06:08:25] <ptx0> they are just paying for 480GB of memory and don't care about performance
[06:08:48] <ptx0> it's something like $6,000 per month to keep just the CPUs burning
[06:09:15] <ptx0> (CPU meaning everything in the server minus storage)
[06:09:29] <MilkmanDan> Sheesh.
[06:09:39] <MilkmanDan> Can't they just buy the hardware outright?
[06:10:00] <ptx0> that's a fair question but, i don't think they have anywhere for it to *go*
[06:10:10] <ptx0> they don't have any offices
[06:10:26] <MilkmanDan> Rent a cabinet?
[06:10:37] <ptx0> could colocate but that's a new headache probably edging to $6,000 per month+
[06:11:06] <ptx0> haven't looked into it, but i did look at the cost to buy HDDs and it would be about $8,000
[06:11:21] <bunder> ptx0: lol watching wendells video... windows spends more time bouncing threads rather than running them... lol, welcome to windows 2000
[06:11:25] <ptx0> tried to get a budget to replicate their setup locally to reproduce issues
[06:11:44] <ptx0> got crickets in response
[06:12:03] <MilkmanDan> ...but they're ok with spending 6 grand a month, forever?
[06:12:18] <ptx0> it's an ever-climbing bill
[06:12:30] <MilkmanDan> Yeek.
[06:12:31] <ptx0> so, no, they have to be ok with paying more than 14,000 per month and climbing
[06:12:37] <ptx0> because the storage is another several $k
[06:12:56] <ptx0> that's just the stuff that i'm involved with, too
[06:13:07] <ptx0> they have some petabyte system consuming more than $25,000 a month
[06:13:16] <MilkmanDan> How much bandwidth are they chewing up with that?
[06:13:21] <ptx0> i'm like, i will run that in my bedroom for them if they pay me... $20k
[06:13:33] <MilkmanDan> Hah, no kidding.
[06:13:37] <ptx0> dunno, bw is unmetered
[06:13:46] <ptx0> not that it matters because the performance of the array is so low
[06:13:51] <MilkmanDan> What pipe?
[06:13:55] <ptx0> 20gbps
[06:14:54] <ptx0> i don't think i've ever seen that throughput in real world, though i did with iperf
[06:15:12] <ptx0> in reality more like 2-6gbps due to aforementioned shit array perf
[06:26:20] *** Llewelyn <Llewelyn!~derelict@184.12.106.191> has joined #zfsonlinux
[06:28:46] *** Celmor <Celmor!~Celmor@unaffiliated/celmor> has quit IRC ()
[06:49:55] *** theorem <theorem!~theorem@pool-173-68-77-128.nycmny.fios.verizon.net> has joined #zfsonlinux
[07:07:00] *** pR0Ps <pR0Ps!~pR0Ps@216.154.13.143> has quit IRC (Ping timeout: 246 seconds)
[07:10:45] *** pR0Ps <pR0Ps!~pR0Ps@216.154.21.160> has joined #zfsonlinux
[07:48:52] *** hyper_ch <hyper_ch!~hyper_ch@openvpn/user/hyper-ch> has quit IRC (Ping timeout: 250 seconds)
[07:52:22] *** gerhard7 <gerhard7!~gerhard7@ip5657ee30.direct-adsl.nl> has joined #zfsonlinux
[07:52:56] *** hyper_ch2 <hyper_ch2!c105d864@openvpn/user/hyper-ch2> has joined #zfsonlinux
[08:20:18] *** JanC_ <JanC_!~janc@lugwv/member/JanC> has joined #zfsonlinux
[08:20:19] *** JanC <JanC!~janc@lugwv/member/JanC> has quit IRC (Read error: Connection reset by peer)
[08:21:59] *** JanC_ is now known as JanC
[09:03:36] *** biax_ <biax_!~biax@unaffiliated/biax> has quit IRC (Read error: Connection reset by peer)
[09:04:26] *** biax_ <biax_!~biax@unaffiliated/biax> has joined #zfsonlinux
[09:07:49] <zfs> [zfsonlinux/zfs] zpool import: improve misleading error messages (#8236) comment by Bernd Helm <https://github.com/zfsonlinux/zfs/issues/8236#issuecomment-451076489>
[09:15:37] *** gardar <gardar!~gardar@bnc.giraffi.net> has quit IRC (Quit: ZNC - http://znc.in)
[09:18:08] *** gardar <gardar!~gardar@bnc.giraffi.net> has joined #zfsonlinux
[09:19:40] *** PioneerAxon <PioneerAxon!~PioneerAx@103.5.19.50> has quit IRC (Remote host closed the connection)
[09:20:27] *** PioneerAxon <PioneerAxon!~PioneerAx@103.5.19.50> has joined #zfsonlinux
[09:20:38] *** rjvbb <rjvbb!~rjvb@2a01cb0c84dee6009bda76eb03bc33f7.ipv6.abo.wanadoo.fr> has joined #zfsonlinux
[09:22:06] <rjvbb> cbreak: yes, openzfs-osx isn't as fast as ZoL though the difference has become a lot smaller very recently
[09:22:29] <rjvbb> it's one reason I haven't been using it much there
[09:23:55] <rjvbb> pure throughput writing huge (e.g. 5 GB) files is now almost the same as HFS+
[09:24:09] *** rjvbb is now known as rjvb
[09:27:52] <rjvb> apart from that, there's bonnie++ which is a pretty nice and succinct cross-platform benchmarking tool AFAICT
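A typical bonnie++ invocation for this kind of throughput test (directory is a placeholder; the size is in MB by default and should be at least twice RAM so the cache can't absorb it):

    bonnie++ -d /tank/bench -s 16384 -n 0 -u nobody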
[09:37:46] *** gardar <gardar!~gardar@bnc.giraffi.net> has quit IRC (Quit: ZNC - http://znc.in)
[09:40:41] <PMT> Ryushin: i'm starting to want to fork the Debian ZFS packaging and push it upstream so that I can ignore the insane choices they're making
[09:41:28] *** gardar <gardar!~gardar@bnc.giraffi.net> has joined #zfsonlinux
[09:46:55] <rjvb> hmm, insane in what sense?
[09:53:01] <rjvb> just checking: does ZFS have support for per-file arbitrary extended attributes or other kind of metadata that is reset when the file is rewritten, a la the archive attribute on MS Windows?
[09:54:03] <PMT> rjvb: I honestly don't know, re: xattrs.
[09:55:19] <PMT> And in terms of packaging, I reported a bug with them shipping the sysvinit init scripts and having the systemd sysvinit compat stuff installed, and their response was to suggest forcing users to remove the systemd-sysvinit compat layer (which ~50% of Debian users have installed) to install ZoL.
[09:55:36] <PMT> At which point my head nearly exploded.
[09:56:46] <rjvb> I've been keeping myself blissfully unaware of all the systemd hubbub :)
[09:57:11] <rjvb> My Ubuntu version has an inactive version of it, possibly just to get libudev
[09:57:48] *** gardar <gardar!~gardar@bnc.giraffi.net> has quit IRC (Quit: ZNC - http://znc.in)
[09:58:45] <PMT> rjvb: tbh I like systemd in theory but hate some of the choices they've made, but that's mostly academic here. I'm annoyed b/c they seem to think that forcing 50% of Debian users to uninstall a package that will print "uh this seems like a terrible idea are you really, really sure" in order to run ZoL is a reasonable choice.
[09:59:11] <rjvb> indeed
[09:59:49] <rjvb> same on Ubuntu?
[10:00:07] <PMT> I'm not touching Ubuntu with a 39 1/2 foot pole.
[10:01:29] <rjvb> sooner or later I *will* have to upgrade and I have the impression that the old packaging from the ZoL Ubuntu packaging that I'm still using won't work with ZoL 0.8 either
[10:01:48] <PMT> rjvb: oh, there is packaging that will work for that. It's even in a git repo.
[10:02:03] *** gardar <gardar!~gardar@bnc.giraffi.net> has joined #zfsonlinux
[10:02:16] <PMT> cf. https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=909153
[10:04:26] <rjvb> the aerusso-guest/zfs repo?
[10:05:59] *** biax_ <biax_!~biax@unaffiliated/biax> has quit IRC (Read error: Connection reset by peer)
[10:06:00] <PMT> Yes. I think there's also a zfs-linux-git package somewhere you could steal the packaging from instead if it's not identical, since they had to fix the SPL merge long before.
[10:06:58] <rjvb> before 0.8, why? And do I steal the systemd insanity along with it?
[10:07:55] <rjvb> I could just wait until your fork lands upstream, I'm not in a hurry to update before 0.8 (possibly 0.8.1) is released :)
[10:07:57] <PMT> rjvb: because the SPL merge happened in git, and the packaging was to include git nightly support so they had to fix it before 0.8?
[10:07:58] *** Albori <Albori!~Albori@216-229-75-72.fidnet.com> has quit IRC (Ping timeout: 268 seconds)
[10:08:00] *** kaipee <kaipee!~kaipee@81.128.200.210> has joined #zfsonlinux
[10:08:39] <PMT> rjvb: it's not clear whether A) I'm going to end up forking it or B) it'll be accepted upstream, I'm just rather frustrated that they keep insisting the sky is green and purple and down is up.
[10:09:10] <rjvb> so much for versioning then (the latest version I've seen so far in that aerusso repo is 0.7.11.x, suggesting it predates the 0.7.12 release)
[10:10:23] <rjvb> what upstream are we talking about? OpenZFS? Debian? ZoL itself?
[10:10:25] *** biax_ <biax_!~biax@unaffiliated/biax> has joined #zfsonlinux
[10:10:39] <PMT> rjvb: ZoL itself, in my case.
[10:10:58] <PMT> Also, 0.7.12 doesn't require any package changes, so that should still work.
[10:11:08] <rjvb> I was going to say that would make some sense, and that should increase the chances of it being accepted, no?
[10:11:48] <PMT> In theory. I'm wary of betting on anything I expect happening without a lot of explicit buyin from people.
[10:12:20] <PMT> For example, I would have sworn the Debian ZFS packagers were probably reasonable actors. They might even actually be reasonable and I'm just missing information about their logic.
[10:12:47] <rjvb> I confirm that I'm on 0.7.12 with the old ZoL/Ubuntu packaging just a bit updated here and there (but for previous releases already)
[10:17:48] *** Floflobel <Floflobel!~Floflobel@cosium-152-18.fib.nerim.net> has joined #zfsonlinux
[10:24:52] <cbreak> rjvb: I get about half the performance of HFS+
[10:26:55] <zfs> [openzfs/openzfs] Add a manual for ztest. (#729) comment by Sevan Janiyan <https://github.com/openzfs/openzfs/issues/729#issuecomment-451090963>
[10:31:12] <rjvb> cbreak: measured how? Slowest in my experience is file creation and deletion
[10:31:44] <cbreak> dding a file to an other file
[10:31:50] <rjvb> FWIW, I build from source, using aggressive compiler optimisations (-O3 -march=native -flto=thin)
[10:32:21] <rjvb> and I don't run zed
[10:33:11] <cbreak> on os x I also build from source, on linux I don't.
[10:33:20] <cbreak> on linux I get about 1.9GB/s
[10:33:21] *** Albori <Albori!~Albori@216-229-75-72.fidnet.com> has joined #zfsonlinux
[10:33:24] <rjvb> of course I'm comparing performance on a 3.5" spinning disk connected to a USB3 port on a Thunderbolt dock :)
[10:33:34] <cbreak> when doing a dd from a file that's likely in arc to an other file 5 times in a row
[10:34:49] <cbreak> for some reason, I get only 1GB/s on OS X for the first GB (when the dest file is already 1GB in size)
[10:35:01] <cbreak> and then 200MB/s
[10:35:02] <cbreak> weird
[10:35:16] <rjvb> IIRC I limited ARC memory to 512 MB for the test I did, and used a benchmark app that just rewrites 5 GB files over and over again (to exclude the host's file cache)
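On ZoL the equivalent ARC cap can be set at runtime through the module parameter (512 MB here, matching the test described above):

    echo $((512*1024*1024)) > /sys/module/zfs/parameters/zfs_arc_max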
[10:36:19] <cbreak> I want to test write performance
[10:36:21] <rjvb> I see similar things. I think that initial speed is when ZFS is writing to the ZIL and then when speeds drop it's writing to the actual file
[10:36:22] <cbreak> so I want arc :)
[10:37:23] <rjvb> I wanted to get an idea of performance in real-world conditions, where don't want all my RAM to go to file IO but most of it to whatever it is that requires said IO :)
[10:37:56] <cbreak> I have a lot of ram
[10:38:06] <cbreak> I don't mind it being used to make things faster :)
[10:38:08] <rjvb> BTW, if you want to get crazy good benchmarking numbers, use the Mac go-to benchmark tool, XBench 8-)
[10:38:21] <cbreak> I used black magic bench thingie
[10:38:49] <cbreak> https://the-color-black.net/pics/HFS+-Encrypted.png vs https://the-color-black.net/pics/ZFS-Unencrypted.png on OS X
[10:38:51] <rjvb> I think that's the same I used (not on my Mac right now). It has like a dashboard with 2 dials?
[10:39:01] <cbreak> with linux I only did DD tests, and I get 1.9GB/s there
[10:39:06] <rjvb> yep, the same
[10:39:36] <rjvb> we have dd on Mac so you could compare pears with pears
[10:39:50] <cbreak> with dd I only get around 2GB/s on OS X too though, so it's probably dd not being fast enough
[10:40:28] <cbreak> (on hfs+)
[10:40:44] <cbreak> anyway, I didn't expect zfs to be as fast as hfs+, since it does a lot more
[10:41:09] <cbreak> like ensuring the data is actually readable again :)
[10:42:30] <cbreak> want to compare https://the-color-black.net/pics/ZFS-AES-128-GCM.png https://the-color-black.net/pics/ZFS-AES-128-CCM.png with linux next, when ever I dare to upgrade ZFS over there :)
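When that upgrade happens, creating the comparison datasets would look roughly like this under 0.8 native encryption (dataset names are placeholders; keyformat=passphrase prompts for the key):

    zfs create -o encryption=aes-128-gcm -o keyformat=passphrase tank/enc-gcm
    zfs create -o encryption=aes-128-ccm -o keyformat=passphrase tank/enc-ccm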
[10:46:36] *** Slashman <Slashman!~Slash@cosium-152-18.fib.nerim.net> has joined #zfsonlinux
[10:49:49] <rjvb> FWIW, you can apparently get an MSWin version of the BlackMagic thingy from one of their driver downloads, maybe the same applies to Linux? (https://linustechtips.com/main/topic/511785-where-can-i-find-blackmagic-disk-speed-test-for-windows/?do=findComment&comment=8538092)
[10:51:18] <PMT> cbreak: I don't really expect ZFS to be markedly slower, though.
[10:52:17] <cbreak> PMT: well, you saw the images above from speed test comparing zfs on os x and hfs+
[10:52:49] <cbreak> HFS+ is up to 2.9 GB/s, zfs only around 1.1GB/s
[10:53:10] <cbreak> linux ZFS is faster with dd, but I can't properly compare it to what the speed test program does
[10:53:28] <cbreak> dd seems to be a bottleneck
[10:54:21] <PMT> This conversation reminds me of when glxgears started requiring you to pass --i-acknowledge-this-is-not-a-benchmark to launch it.
[10:54:49] <cbreak> well, it is a benchmark
[10:54:54] <cbreak> ... to find out how fast glxgears works
[10:56:31] <cbreak> and the numbers also show that encryption has a negative performance impact on zfs
[10:57:05] <cbreak> (the hfs numbers are always from encrypted core storage volume, and I have no idea how they do that...)
[10:57:54] <PMT> Unsurprising.
[10:58:11] <PMT> I'm sure if someone cares enough they'll add optimized versions of the crypto bits.
[10:59:08] <cbreak> they are optimized already :)
[10:59:16] <cbreak> I ported the AES assembler from linux over
[10:59:45] <cbreak> chances are the asm is not optimal of course
[11:00:36] <cbreak> since ccm / gcm modes might be harder to optimize than xts, which is what os x claims to use...
[11:00:48] <lblume> It's using AES-NI, right? I forget.
[11:02:21] <cbreak> lblume: I think so
[11:02:46] <cbreak> but the bulk of the ASM is not specific aes-ni code as far as I can tell
[11:03:16] <PMT> I suppose AVX2 and AVX512 bits might be useful, but then you get to figure out whether it'll make the CPU thermally throttle
[11:03:23] <PMT> Or rather, not thermally, but throttle
[11:03:57] <lblume> Hmmm. I run zfs over LUKS, with very little performance impact, if any (and I used to have a CPU without AES-NI, where it was noticeable)
[11:04:29] <cbreak> so maybe the ASM doesn't use AES-NI, just optimized table code
[11:05:04] <PMT> So disassemble it and find out. :)
[11:05:13] <lblume> It can't use an existing lib?
[11:05:19] <cbreak> PMT: module/icp/asm-x86_64/aes/aes_amd64.S
[11:05:49] <cbreak> lblume: don't know
[11:06:03] <cbreak> I only fixed the asm code so it works on OS X, I didn't try to understand it :)
[11:06:16] <lblume> :)
[11:06:33] <lblume> Portability might well be a reason why it can't
[11:06:53] <cbreak> it's not portable.
[11:07:06] <cbreak> the linux asm was littered with ELF specific annotations
[11:07:21] <rjvb> cbreak: how do you know it works if you don't understand it?
[11:07:22] <cbreak> well, maybe portability to before-aes-ni CPUs
[11:07:31] <PMT> The ICP AES ASM seems to be missing entirely any AVX instructions.
[11:07:34] <cbreak> rjvb: testing with live data of course!
[11:07:57] <lblume> PMT: What would be their use? Are there any CPU with AVX and without AES?
[11:08:12] <rjvb> so all you tested is that you can read back the data after encryption/unencryption?
[11:08:42] <cbreak> I also tested that it's faster after than before
[11:08:46] <cbreak> which it is, significantly
[11:08:49] <cbreak> so the code is used :)
[11:09:13] <cbreak> there are also unit tests, but I have no idea how comprehensive they are
[11:09:16] <cbreak> (I think not very)
[11:09:23] <rjvb> that doesn't necessarily prove that the encryption actually does what you think it does, e.g. if you're testing with an encrypted dataset
[11:09:31] <PMT> lblume: I would be curious whether anyone tested if the AESNI bits have a similar problem to certain AVX2 bits, e.g. if you use them the entire rest of the processor throttles
[11:09:57] <rjvb> if it's faster maybe you're just writing the original data? ;)
[11:11:55] <cbreak> module/icp/asm-x86_64/aes/aes_amd64.S contains the AES-NI ASM
[11:12:17] <lblume> PMT: Not to my knowledge. There were benchmarks published years ago, that showed a power use increase when using AES, but not enough to trigger alarms. The conclusion was that even if power increaes, since operations are done faster, the overall use is still comparable to CPUs without it.
[11:12:18] <cbreak> module/icp/asm-x86_64/aes/aes_aesni.S in zol
[11:14:10] <lblume> I also remember that AVX had an issue, I think on its earlier implementations, that would prevent use of other parts of the CPU at the same time (FPU?). I know zfs was using it at the time for compression, I think, and I was wondering if that could impact performance. But it was AVX-specific, and I think current CPUs don't have any such issue left.
[11:15:09] <cbreak> aes-ni uses xmm registers for IO afaik
[11:15:44] <cbreak> there's some code in the zol asm to handle kernel mode usage of those registers
[11:15:56] <cbreak> because they apparently aren't saved by default
[11:24:38] *** Markow <Markow!~ejm@176.122.215.103> has joined #zfsonlinux
[11:37:09] <rjvb> IIRC the linux kernel makes no guarantees whatsoever about saving anything related to "intrinsics". At least on Debuntu the dpkg scripts add -mno-mmx -mno-sse and the whole rest of the family to the compiler options, for that reason
[11:39:29] <Lalufu> ...for kernel code, right?
[11:39:47] <Lalufu> user space is going to get really upset if those aren't saved
[11:49:08] <cbreak> rjvb: they are saving more and more
[11:49:20] <cbreak> since not saving it always is an information leak
[11:49:23] <cbreak> see spectre
[11:49:31] <rjvb> Lalufu: yes, I was talking about kernel code
[11:52:21] *** Markow <Markow!~ejm@176.122.215.103> has quit IRC (Quit: Leaving)
[11:54:10] <blackflow> rjvb: that no-sse flags, which package on debian?
[11:55:48] *** rjvbb <rjvbb!~rjvb@2a01cb0c84dee6008c0ef216988ecbcf.ipv6.abo.wanadoo.fr> has joined #zfsonlinux
[12:00:23] *** simukis <simukis!~simukis_@78-63-88-48.static.zebra.lt> has joined #zfsonlinux
[12:06:17] <Ryushin> PMT: Do you ever sleep?
[12:06:59] <Ryushin> And forking the package. Do you think you would want to make your own repo?
[12:07:51] <Ryushin> Thing is, I think the Devuan group will need to fork the package now as well. Might be a good time to work together.
[12:10:13] <PMT> Ryushin: I get that question a lot.
[12:10:52] <PMT> Since AFAIK the record for not having a psychotic break from not sleeping is like a week, and we've been talking longer than that, ...
[12:11:21] <Ryushin> I'm trying to figure out what time zone you are on. And for the life of me, I can. LOL
[12:11:26] <PMT> EST.
[12:12:14] <Ryushin> At first I thought it was in Europe, then nope.
[12:12:32] <PMT> Nope. I even have a 9-5 (...ish) dayjob.
[12:12:45] <Slashman> hello, I have an issue with several datasets that zol thinks are mounted, but they are not: https://apaste.info/nro9, any idea how I can solve this? this is on ubuntu 16.04 so zol 6.5.x
[12:13:07] <Slashman> 0.6.5.x
[12:13:45] <FinalX> things can be mounted without them being visible in the local mtab; for example, they could be in use in containers and mounted there
[12:13:47] <Ryushin> Well, I don't know how you do it sometimes. I'm 49 (how the heck did that happen) and I think I work too hard sometimes.
[12:13:50] <PMT> Hm, perhaps I can get ptx0 to add a bot command for 065x
[12:13:56] <PMT> Ryushin: I'm 32, so.
[12:14:45] <Slashman> FinalX: okay, but in this case, the folders are empty, I cannot umount or mount the volume
[12:14:51] <PMT> Slashman: your best bet is probably to report it to Ubuntu and ask them to fix it. A workaround you could use if you were on recent 0.7.X would be just mounting it again, but mounting datasets multiple times isn't supported then.
[12:14:51] <FinalX> odd
[12:15:09] <Ryushin> So the forking? You thinking of your own repo? Or perhaps getting a debian folder in ZoL?
[12:15:20] *** troyt <troyt!zncsrv@2601:681:4100:8981:44dd:acff:fe85:9c8e> has quit IRC (Quit: AAAGH! IT BURNS!)
[12:15:41] <Slashman> PMT: okay, so no idea how to work around this on 0.6.5 ? :'(
[12:15:42] *** troyt <troyt!zncsrv@2601:681:4100:8981:44dd:acff:fe85:9c8e> has joined #zfsonlinux
[12:15:45] <PMT> Honestly I'm still going to end up waiting until the conversation with the Debian folks finishes before trying to decide to go run off and do my own thing, I'm just getting skeptical of it working out another way.
[12:16:39] <Slashman> PMT: are you thinking about an official ZoL repo for Debian?
[12:16:49] <Ryushin> PMT: LOL!!! Welcome to the last two years of my life.
[12:16:55] <PMT> Slashman: Ubuntu is shipping versions that haven't been maintained by anyone but Ubuntu in over a year.
[12:17:32] <Slashman> PMT: I know, I would love a compatible easy to use repo for Ubuntu with latest ZoL
[12:17:42] <FinalX> likewise
[12:17:55] <Ryushin> When they finally committed the init scripts, I thought that it was finally done. Nope, think again.
[12:18:09] <PMT> Slashman: I mean, installing the Ubuntu HWE kernel will give you 0.7.X.
[12:18:26] <Slashman> PMT: I have it on this machine then
[12:18:38] <Slashman> but it still list zfsutils-linux as 0.6.5.6-0ubuntu26
[12:18:39] <PMT> Slashman: what's dmesg | grep ZFS say, then?
[12:18:50] <FinalX> upgrading to 18.04 also gives you 0.7
[12:19:00] <PMT> Yes. The userland is 0.6.5.X, and the kernel is 0.7.X. Ubuntu did a lot of patching to make that fly.
[12:19:02] <Slashman> ZFS: Loaded module v0.7.5-1ubuntu16.4, ZFS pool version 5000, ZFS filesystem version 5
[12:19:29] <Slashman> okay, so does it change anything to my issue then?
[12:19:50] <PMT> sec
[12:20:19] <PMT> You could try asking them to backport https://github.com/zfsonlinux/zfs/commit/93b43af10df815d3ebfe136d03cd2d7f35350470 . But that doesn't explain how you got into this situation.
[12:20:19] <Slashman> trying a zfs mount gives me "cannot mount 'ssd/jenkins/home': filesystem already mounted"
[12:20:42] <Slashman> I guess I'll just reboot, I cannot do anything at this point
[12:20:53] <PMT> (The above commit landed in 0.7.9 so you're running a version that predates that, uh, "workaround")
[12:22:11] <Slashman> ok, I guess I could try to update to ubuntu 18.04 and see if it fixes anything, but that was not my plan, and this situation happened after a reboot
[12:22:14] <PMT> Ryushin: also, given how anemically I've been spending actual time on the Debian thing, I'm not sure how impressed anyone should be with me. I just keep very strange hours.
[12:23:11] <Ryushin> PMT: I will agree with the hours. But really, the Debian thing, what could be worse then what they are doing already.
[12:23:15] <PMT> Slashman: I would agree that upgrading to 18.04 seems kind of like overkill for this, but a lot of people in here would disagree with me. (They also prefer rolling release distros and think any distro that tries to keep stable versions is madness, so I differ from them in a number of ways.)
[12:23:44] <PMT> Ryushin: tbh I stand by my statement that forcing you to uninstall all sysv compat on systemd in order to support sysv is probably worse than the minor breakage on upgrade.
[12:23:57] <Ryushin> PMT: They don't want stable versions? Umm... do they run servers that their jobs depend on?
[12:26:15] <PMT> Ryushin: there is a significant amount of friction these days between people who think you should install the latest version of XYZ every time and people who support the more traditional model of keeping things sta(b)le at certain versions for long intervals.
[12:27:14] <FinalX> tell me 'bout it.. it even created a split at work between devs and sysadmins
[12:27:20] <FinalX> (php)
[12:28:03] <Ryushin> The rolling release is fine for desktops and things like that. Heck, I've been running sid on my laptop since 1998. Same install to this day. I've also dealt with the breakages of running unstable as well. But for servers, no thank you.
[12:28:27] <FinalX> devs prefer a package repo that's maintained by 1 dude, sysadmins think that there should be a security team on it instead, etc. so now people can choose.. either really always run the latest and assume that things can and will break, or stay on the same version for long and have a big major upgrade every 2-5 years.
[12:28:52] <FinalX> neither are very favorable imo
[12:29:21] <Slashman> is it possible to prevent all zfs mount at next boot on ubuntu 16.04? I may be able to fix this if it doesn't think that the datasets are already mounted
[12:29:35] <cbreak> Slashman: 18.04 has worked fine for me :)
[12:29:42] <cbreak> better than 17.10
[12:30:01] <FinalX> Slashman: /etc/default/zfs
[12:30:07] <cbreak> Slashman: you can zfs set canmount=noauto to datasets
[12:30:18] <FinalX> change this: ZFS_MOUNT='yes'
[12:30:28] <Ryushin> So if the phone server that runs asterisk is unstable due to constant rolling upgrades, that won't go over well with the bosses that just want it to work.
[12:31:02] <Ryushin> We can plan for the upgrades every 2-3 years.
[12:31:23] <Slashman> FinalX, cbreak : thanks, I think the canmount=noauto is even better in this case, only some datasets have this issue, thanks, I'm keeping the other change for latter if it doesn't work
[12:31:30] <PMT> Ryushin: also, I discovered something stupid.
[12:32:02] <FinalX> right now we run either Debian or Ubuntu LTS with native PHP-versions, and/or with Ondrej Sury's repo for teams that really need/want the current PHP-version etc. luckily I kind of set the rules, and keep track of all installed versions.. but it's a bit tiresome :p
[12:32:33] <PMT> Ryushin: if is_installed(sysv-rc + systemd-sysv + insserv + initscripts), attempting to remove insserv tries to remove init and prompts fire. If is_installed(sysv-rc + systemd-sysv + insserv + !initscripts), it just prompts you to remove sysv-rc and insserv when you remove insserv and no fire.
[12:32:41] <FinalX> Slashman: yeah, that will work too.. or at least, should. unless you have set mountpoint=legacy and have put them in /etc/fstab.. in that case, don't forget to comment them out there.
[12:33:23] <Slashman> FinalX: no, the mountpoint are defined at dataset level, no legacy
[12:33:55] <FinalX> ack
[12:34:00] <PMT> Slashman: I had a terrible idea.
[12:34:14] <PMT> Slashman: mount -o zfsutil -t zfs foo/bar/baz /where/it/belongs
[12:34:56] <Slashman> I didn't even know that something like that was possible
[12:35:04] *** gerhard7 <gerhard7!~gerhard7@ip5657ee30.direct-adsl.nl> has quit IRC (Quit: Leaving)
[12:35:22] <Slashman> why is this a terrible idea?
[12:35:43] <PMT> Depending on where the "is-mounted" check is, it might not work.
[12:36:10] <Slashman> I'll try that if everything else fail then :p
[12:36:22] <PMT> It shouldn't set anything on fire, it's just a stupid idea.
[12:36:45] <FinalX> or manually edit mtab? even worse idea?
[12:36:46] <FinalX> :p
[12:36:55] <PMT> https://github.com/zfsonlinux/zfs/issues/5796#issuecomment-362599676 or something.
[12:36:56] <FinalX> or maybe grep the mount in /proc/*/mounts
[12:37:52] <Ryushin> PMT: That is nuts.
[12:38:06] <PMT> Note that the comment above may be describing the opposite of the scenario you're having - e.g. Linux thinks it's in use and ZFS thinks it's unmounted, but I'm not sure.
[12:38:08] <Slashman> rebooting atm
[12:38:09] <PMT> Ryushin: right?
[12:38:36] <Ryushin> I'm trying to figure out why that is happening in my head and I don't really have an answer to that.
[12:39:26] <PMT> My only guess is that it's heavily prioritizing keeping initscripts installed as a special case somewhere.
[12:40:07] <Ryushin> There was a big hoopla a couple of months back about Debian needing to do a better job of keeping init scripts. The Debian ZFS group (well, one of them in particular), does not seem to want that to happen.
[12:40:28] <Slashman> with canmount=noauto, I am able to mount all the datasets that had the issue with zfs mount
[12:41:44] <Ryushin> But yea, there must be some weird logic going on in those packages.
[12:41:44] <PMT> I think my advice still involves reporting a bug to Ubuntu if there isn't one already.
[12:42:27] <Slashman> I think I'll update to 18.04 and see if it fixes the issue, if it does, I won't bother
[12:43:02] <Slashman> I don't see anything in syslog nor in dmesg, so I'm not sure what I can tell them in a bug report
[12:45:23] <PMT> Slashman: I mean, you tell them "my filesystem claims it can't unmount because it's unmounted and can't mount because it's mounted"
[12:46:51] *** gerhard7 <gerhard7!~gerhard7@ip5657ee30.direct-adsl.nl> has joined #zfsonlinux
[12:47:30] <Slashman> PMT: I guess you're right, I'll see after lunch, thank for your help
[12:54:13] *** jasonwc <jasonwc!~jasonwc@pool-72-66-15-203.washdc.fios.verizon.net> has quit IRC (Ping timeout: 245 seconds)
[13:04:52] *** cbreak <cbreak!~cbreak@77-56-224-14.dclient.hispeed.ch> has quit IRC (Ping timeout: 264 seconds)
[13:36:04] <fling> Is it posible to use risers for pike slot?
[13:53:12] *** Essadon <Essadon!~Essadon@81-225-32-185-no249.tbcn.telia.com> has joined #zfsonlinux
[13:53:35] *** Essadon <Essadon!~Essadon@81-225-32-185-no249.tbcn.telia.com> has quit IRC (Max SendQ exceeded)
[13:54:03] *** Essadon <Essadon!~Essadon@81-225-32-185-no249.tbcn.telia.com> has joined #zfsonlinux
[13:56:35] <cirdan> morning PMT, Ryushin
[13:57:00] <Ryushin> cirdan: Morning.
[13:57:25] <Ryushin> cirdan: I'm thinking PMT is an AI. He never sleeps.
[13:58:12] <PMT> I doubt it. AI would have more consistent behavior, unless someone secretly solved general AI and told it to post shit on IRC.
[13:59:58] <bunder> he probably goes to bed at 7-8pm like DeHackEd does :P
[14:00:14] <cirdan> Ryushin: half right. A no I
[14:00:15] <cirdan> :)
[14:00:25] <cirdan> that's how I feel sometimes anyway
[14:00:26] <Ryushin> It is an AI after all. It has to give us false information to keep us guessing.
[14:01:04] <Ryushin> bunder: Wife and I sometimes hit the sack at that time as well. Then we're up at 2-4 in the morning.
[14:02:11] <bunder> too early for me, even if i have to get up at 5:30 for work
[14:02:19] <Ryushin> cirdan: So PMT has an interesting theory, that the Debian ZFS packaging teams is wigged out. I think he might be onto something.
[14:03:48] <Ryushin> Might explain a lot of their behavior. Or some of them live in Colorado and have access to some weird green plant.
[14:04:34] *** gerhard7 <gerhard7!~gerhard7@ip5657ee30.direct-adsl.nl> has quit IRC (Read error: Connection reset by peer)
[14:05:35] *** gerhard7 <gerhard7!~gerhard7@ip5657ee30.direct-adsl.nl> has joined #zfsonlinux
[14:06:52] <bunder> y'all could switch to gentoo ;P
[14:07:03] <PMT> Savage.
[14:07:29] <PMT> Ryushin: I don't think they're mad, I just don't understand one of them's logic. Aron actually seems to be the one I agree with in the (currently off-bug) conversation.
[14:09:23] <DHE> bunder: I do not
[14:09:30] <Ryushin> Well, I agree with the fact that uninstalling insserv is just not a smart thing to do.
[14:09:37] <Ryushin> My cat keeps pawing me for pets. :)
[14:09:48] <PMT> tbh I don't mind uninstalling insserv, I mind it trying to rip out all the sysv compat stuff.
[14:11:06] <Ryushin> True.
[14:13:07] <cirdan> I dont know what the current status is
[14:14:28] <PMT> cirdan: briefly, I suggested breaking out the sysvinit scripts into a child package and only making that require removing all the systemd sysv compat stuff, b/c trying to remove the broken part of that on its own makes apt tell me to rip out all the sysv compat stuff in systemd (to the point of warning "I am attempting to uninstall something I think is a core requirement please sign in blood to confirm
[14:14:34] <PMT> you think it's a good idea")
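A hypothetical sketch of that split, not the actual Debian packaging: the child package carries only the sysvinit scripts and takes the conflict with the systemd compat layer, instead of zfsutils-linux itself doing so:

    Package: zfsutils-linux-sysvinit
    Architecture: all
    Depends: zfsutils-linux (= ${binary:Version}), sysv-rc
    Conflicts: systemd-sysv
    Description: SysV init scripts for zfsutils-linux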
[14:15:15] <PMT> And there's been off-bug discussion of whether that's, uh, reasonable.
[14:15:30] <cirdan> wait what's the bug now
[14:15:31] <cirdan> ?
[14:15:37] <PMT> Some people seem to be voicing opinions that confuse me to the point of wondering if I'm missing some basis for it.
[14:15:52] <cirdan> i mean lets jut get rid of the zfs-share by default if that's what's causing the problems...
[14:16:00] <PMT> cirdan: briefly, having insserv+systemd-sysv installed and installing the package with both systemd unit files and sysvinit files makes dpkg --configure catch fire.
[14:16:06] <blackflow> so Debian can be used without systemd? Without issues or is it a constant struggle to keep sd away?
[14:16:09] <cirdan> ah
[14:16:14] <cirdan> blackflow: yes
[14:16:17] <blackflow> so uh.... what's Devuan for then?
[14:16:24] <cirdan> it's "officially supported" but not required
[14:16:40] <PMT> blackflow: sysvinit support is basically best-effort for stretch and nah for buster, IIUC
[14:16:43] <cirdan> there's still systemd bits/libraries used
[14:17:23] <PMT> cirdan: attempting to apt remove insserv on stretch when you have the initscripts package installed makes apt go insane.
[14:17:50] <cirdan> i have insserv...
[14:17:55] <PMT> So my suggestion was to punt either one or both of the sysvinit scripts and systemd unit files into a child package and require exactly one be installed.
[14:17:59] <cirdan> and I've had no issues up until now
[14:18:12] <cirdan> PMT: seems reasonable
[14:18:22] <cirdan> they can conflict with each other
[14:18:32] <cirdan> but it seems like a debian bug with dpkg
[14:18:37] <PMT> cirdan: you would need to have {insserv,systemd-sysv,initscripts,sysv-rc} installed and try to install the version that got rolled back of zfsutils-linux with the initscripts
[14:18:41] <cirdan> why don't other packages have this installed
[14:19:16] <cirdan> oh so the problem is systemd-sysv
[14:19:18] <PMT> cirdan: one of the maintainer's arguments was that that's overcomplicated and it's fine to require people to remove a package that 50% of Debian popcon has installed in order to use it.
[14:19:19] <cirdan> I dont have that
[14:19:32] <cirdan> yeah well that arg is invalid
[14:19:51] <cirdan> since I feel like that's something that's against debian policy...
[14:19:54] <PMT> That is my opinion and that of one of the other maintainers.
[14:20:10] <cirdan> what is systemd-sysv anyway
[14:20:13] <PMT> cirdan: I suspect so, but haven't bothered going to look and find one to see, because that seems like it could end in people ragequitting.
[14:20:30] <PMT> systemd-sysv is the "support sysvinit scripts when systemd is installed" bit.
[14:20:34] <cirdan> there's 1 I dont think i'd mind if that happened to
[14:20:58] <cirdan> so the bug seems like systemd-sysv and sysv-rc should conflict
[14:21:03] <PMT> cirdan: I'm not sure that's true, since I didn't explicitly specify who was arguing the position I found untenable, and I don't think it's who you think.
[14:21:12] <PMT> systemd-sysv and sysv-rc shouldn't conflict, actually.
[14:21:24] <cirdan> well dpkg catches fire?
[14:21:49] <cirdan> systemd-sysv is conflicting with something, or dpkg needs a bugfix :)
[14:22:22] <cirdan> I have insserv,initscripts,sysv-rc and have no problems ever and lots of packages have init scripts and systmd crap
[14:22:23] <PMT> cirdan: I haven't bothered trying to figure out why, but basically apt's conflict resolver appears to think that preserving the package "initscripts" being installed is higher priority than anything else, and comes up with a least-cost solution where that's true, unless you explicitly tell it to remove it.
[14:23:42] <PMT> To the point that it recommends removing the "init" metapackage which is marked as Important, and that's what makes you sign a blood oath swearing you were told it was a bad idea and did so.
[14:24:04] <cirdan> I still dont' understand why antyhing wants to remove a package
[14:25:13] <PMT> So, if stretch insserv causes problems with the init scripts and systemd unit files being shipped at the same time, the options are to convince the maintainers to not try installing both at once (which I've been unsuccessful with), not ship the sysvinit scripts at all, or remove the package which causes the breakage by marking it as Conflicts.
[14:25:50] <PMT> Just marking insserv as Conflicts or trying to manually remove just insserv in the conditions above results in what I mentioned.
[14:26:11] <cirdan> right but I have that installed and there's no fire
[14:26:21] <cirdan> but I don;t have systemd-sysv
[14:26:34] <PMT> Note the part where the conditions involved systemd-sysv in the package list.
[14:26:54] <PMT> "I don't meet these conditions and I'm not having that problem" is not a surprising outcome.
[14:26:58] <cirdan> right. so the bug is with dpkg and systemd-sysv
[14:27:23] <cirdan> and we're trying to work around it
[14:27:32] <PMT> Why are you stuck on this
[14:27:36] <PMT> It's not really that hard
[14:28:32] <cirdan> i havent been sleeping well
[14:28:40] <cirdan> what about conflicting with systemd-sysv?
[14:28:53] <cirdan> do many people use that?
[14:28:56] <PMT> cirdan: that results in dropping support for any sysvinit scripts at all if you remove it.
[14:29:14] <PMT> Which, you know, is a pretty unreasonable requirement.
[14:29:17] <cirdan> for systemd?
[14:29:20] <PMT> Yes.
[14:29:43] *** cbreak <cbreak!~cbreak@77-56-224-14.dclient.hispeed.ch> has joined #zfsonlinux
[14:30:11] * cirdan really wants to know who things systemd makes things more simple...
[14:30:52] <FireSnake> loginctl definitely does
[14:31:14] <cirdan> FireSnake: gotta take it as a whole can't seperate it out
[14:31:33] <FireSnake> i only dislike that it wants to be pid 1 and that there are too many letters to type until bash completion kicks in for systemctl
[14:31:45] <cirdan> heh
[14:31:52] <PMT> FireSnake: that's more a problem with your bash completion than systemd =P
[14:33:00] <cirdan> but still, I wonder if we can not catch fire if both are installed
[14:34:45] <Slashman> that's very interesting, does someone have the link to the bug tracker on debian? or the mailing list discussion?
[14:35:22] <cirdan> maybe if systemd is installed don't register the init files?
[14:36:00] <PMT> Slashman: the bug is https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=915831 but half the conversation I mentioned is off-bug
[14:36:26] <PMT> cirdan: yes, but I suggested that and nobody else thinks that's a reasonable outcome.
[14:37:11] <cirdan> WHy don't other packages catch fire? I know there are ones that have both sets of files
[14:38:01] <cirdan> it's possible the zfs guys are doing something different/wrong
[14:38:08] <PMT> Not in this case.
[14:38:23] <bunder> then why is it only deb/ubu catching on fire
[14:38:24] <cirdan> ?
[14:38:49] <PMT> bunder: I don't imagine most other distros try installing both the sysvinit scripts and systemd unit files at the same time.
[14:39:01] <PMT> I still think that's a bad idea, but I've given up convincing them of that.
[14:39:42] <DHE> generally a distro has a known init system and you package only those files...
[14:40:04] <bunder> tbf, gentoo does install a zfs.service file but that's it
[14:40:30] <bunder> i had to look because i thought we installed both even if you don't use it
[14:40:42] <cirdan> I have 69 packages that install init files, and i'm pretty certain they also have systemd files
[14:41:37] <PMT> I'm going to stop talking about this. It's just going to aggravate me more at the moment.
[14:41:48] * DHE provides PMT with candy
[14:41:55] <cirdan> sorry I'm just trying to figure this out
[14:42:17] <cirdan> sudo is on everyone's machine and has both. why doesn't that have an issue for people?
[14:42:48] <bunder> i don't always install sudo
[14:43:00] <cirdan> debian has it by default, iirc
[14:43:15] <cirdan> i could be wrong though
[14:43:43] *** fs2 <fs2!~fs2@pwnhofer.at> has quit IRC (Quit: Ping timeout (120 seconds))
[14:44:15] *** fs2 <fs2!~fs2@pwnhofer.at> has joined #zfsonlinux
[14:47:20] <cirdan> so sudo doesn't use any dh_systemd* hooks in its initscript
[14:47:27] <Slashman> you don't have sudo on minimal install on debian
[14:47:42] <bunder> sudo has a initscript?
[14:48:31] * lblume is curious about that too
[14:48:37] <cirdan> yeah it does
[14:48:53] <cirdan> postfix has both though and uses the dh_systemd* and dh_installinit
[14:49:09] <lblume> How? Where?
[14:49:45] <cirdan> it does: # make sure privileges don't persist across reboots
[14:50:04] <cirdan> find /var/lib/sudo -exec touch -d @0 '{}' \;
[14:50:21] <lblume> On which distro? I don't see amything about on RHEL6/7
[14:50:23] <cirdan> debian
[14:50:32] <cirdan> we're dealing with a debian bug :)
[14:50:47] <lblume> Well, it's not like the conversation stayed strictly on topic, did it? :D
[14:51:06] <bunder> gentoo doesn't have a /var/lib/sudo, weird
[14:51:17] <cirdan> it could stash it somewhere else
[14:51:21] <bunder> (maybe it does under systemd i dunno)
[14:51:39] <lblume> No, there's no sudo reference in the init scripts
[14:51:46] <lblume> I don't see one on Ubuntu either
[14:52:45] *** hyper_ch2 <hyper_ch2!c105d864@openvpn/user/hyper-ch2> has quit IRC (Quit: Page closed)
[14:52:49] <lblume> Maybe one of those famed Debian security improvements ;)
[14:56:12] <cirdan> well yeah, if you reboot you could still have active sudo privs; this just makes that go away
[14:57:00] *** fs2 <fs2!~fs2@pwnhofer.at> has quit IRC (Quit: Ping timeout (120 seconds))
[14:57:01] <bunder> if i reboot then nothing is running from the past boot :P
[14:57:49] *** fs2 <fs2!~fs2@pwnhofer.at> has joined #zfsonlinux
[14:59:17] <cirdan> ... do you know how sudo works? it uses a timestamp to determine if it should ask for a password
[14:59:38] <cirdan> debian just zeros out the timestamps
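For reference, the lines cirdan quotes above come from Debian's sudo init script; below is a minimal sketch of what that stanza might look like, reconstructed around the quoted comment and find command (the LSB header fields and the overall script layout are assumptions, not the packaged file).

    #!/bin/sh
    ### BEGIN INIT INFO
    # Provides:          sudo
    # Required-Start:    $local_fs
    # Required-Stop:
    # Default-Start:     S
    # Default-Stop:
    ### END INIT INFO

    case "$1" in
      start)
        # make sure privileges don't persist across reboots:
        # zero the cached authentication timestamps so sudo re-prompts
        if [ -d /var/lib/sudo ]; then
            find /var/lib/sudo -exec touch -d @0 '{}' \;
        fi
        ;;
      stop|restart|force-reload)
        ;;
    esac
    exit 0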
[15:00:23] <cirdan> so here's the zfs debian postinst: https://termbin.com/csrek
[15:00:48] <cirdan> and the postfix one: https://termbin.com/u4xs
[15:01:18] <zfs> [zfsonlinux/zfs] zfs should optionally send holds (#7513) new review comment by Paul Zuchowski <https://github.com/zfsonlinux/zfs/pull/7513#discussion_r245007536>
[15:01:39] <cirdan> down at the bottom is the init stuff... # postinst processing
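The termbin pastes above will expire, so as a rough illustration only: the kind of fragment that dh_installinit and dh_systemd_enable typically generate into a package's postinst looks roughly like the sketch below (the zfs-share service name is taken from the discussion; the exact generated text varies by debhelper version and is an assumption here).

    #!/bin/sh
    set -e

    if [ "$1" = "configure" ]; then
        # dh_systemd_enable: register the unit with systemd's enable/preset machinery
        if [ -x /usr/bin/deb-systemd-helper ]; then
            deb-systemd-helper enable zfs-share.service >/dev/null || true
        fi
        # dh_installinit: register the sysvinit script with update-rc.d and start it
        if [ -x /etc/init.d/zfs-share ]; then
            update-rc.d zfs-share defaults >/dev/null
            invoke-rc.d zfs-share start || true
        fi
    fi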
[15:01:49] <lblume> Where are those timestamps stored? I don't use passwords, so I've never looked
[15:02:16] <cirdan> lblume: i guess it depends on the distro... debian has it under /var/lib/sudo
[15:04:47] <lblume> Looks like RHEL took the smarter move of using /var/run/sudo
[15:05:52] <cirdan> that didn't used to exist
[15:06:12] <cirdan> ¯\_ツ_/¯
[15:06:31] <cirdan> anyway someone could make /var/run persist
[15:06:56] <Lalufu> that's not the point of /var/run
[15:06:57] <bunder> looks like gentoo uses /run which is tmpfs yeah
[15:07:03] <Lalufu> it's by definition not persistent
[15:07:09] <lblume> Well, one can always choose to shoot one's foot. But let me see if I can find it on RHEL6
[15:07:17] <Lalufu> if you want persistent data there's /var/lib
[15:07:18] <cirdan> anyway
[15:07:27] <cirdan> things move slowly
[15:08:58] <bunder> https://pastebin.com/HFWGjdWc lul systemd and selinux
[15:10:53] <lblume> /var/db/sudo on RHEL6. I don't see it being cleaned up by init scripts though.
[15:12:05] <bunder> i think that's for the lectured file
[15:12:33] <bunder> afaik that's the warning you see the first time you run sudo then you never see it again lol
[15:19:04] <cirdan> hmm. can zfs-share just have zfs-zed under should-start, not required-start? maybe that would fix it. of course, it might then conflict with zfs-mount being required but hmm
[15:20:24] <bunder> why does share need zed at all
[15:22:38] <bunder> imo zed should depend on import and nothing should depend on zed :P
[15:24:32] <cirdan> yeah I dunno
[15:30:09] <cirdan> huh. seems we had this same(ish) bug 4 years ago: https://github.com/zfsonlinux/zfs/issues/2680
[15:30:52] *** gerhard7 <gerhard7!~gerhard7@ip5657ee30.direct-adsl.nl> has quit IRC (Quit: Leaving)
[15:33:06] <cirdan> zed needs to be before share
[15:33:23] <cirdan> but debian split out zed...
[15:33:48] <bunder> lol
[15:36:14] <cirdan> I love the # order is important comment
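cirdan's suggestion above amounts to demoting zfs-zed from a hard dependency to a soft one in the LSB header of the zfs-share init script. A minimal sketch of such a header, assuming field contents along the lines of the shipped script (only the Should-Start line is the actual proposed change):

    ### BEGIN INIT INFO
    # Provides:          zfs-share
    # Required-Start:    $local_fs $network zfs-mount
    # Required-Stop:     $local_fs
    # Should-Start:      zfs-zed
    # Default-Start:     2 3 4 5
    # Default-Stop:      0 1 6
    # Short-Description: Share ZFS datasets over NFS/SMB
    ### END INIT INFO

Under general LSB semantics, Should-Start still orders zfs-share after zfs-zed when zed's script is present, but no longer makes startup fail outright when it is missing.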
[15:36:24] <Haxxa> So I was getting checksum errors every day and a degraded pool; now I have replaced my Great Wall Awyun HP PSU with a Seasonic and I no longer have issues
[15:37:23] <Haxxa> The 200w no-name PSU is what came with my HP MicroServer, a big letdown tbh
[15:39:38] <MilkmanDan> Somebody should dig up that 15-odd year old blog post from the Sun engineer who diagnosed his faulty PS with zfs back when that was still considered black magic.
[15:40:35] *** gerhard7 <gerhard7!~gerhard7@ip5657ee30.direct-adsl.nl> has joined #zfsonlinux
[15:47:40] <PMT> Haxxa: well that makes sense
[15:48:00] <PMT> MilkmanDan: i mean, it's _still_ considered black magic.
[15:48:28] <PMT> since the list of possible failure modes of inconsistent power input is (.+)
[15:48:53] <cirdan> PMT: so do you run systemd or sysv?
[15:49:11] <Haxxa> PMT, It took me 3 months and 3 PSUs, 4 new ECC DIMMs, a new MB, and new HDDs to solve this
[15:49:13] <cirdan> wondering if removing the requires for zed fixes it
[15:49:22] <cirdan> Haxxa: so what was the issue?
[15:50:30] <PMT> Why 3 PSUs, if you know?
[15:50:31] <Haxxa> cirdan, I assume PSU couldn't deal with Xeon + 4 HDDs + HBA + USB Devices + FPGA + Motherboard, it is only 200w and crappy, but not sure.
[15:50:50] <PMT> I wouldn't trust 200W to power very much, no.
[15:51:06] <Haxxa> memtest never revealed any errors, new MB never helped, new PSU didn't help etc.
[15:51:11] <cirdan> the xeon was prolly 90-130w itself
[15:51:20] <Haxxa> PMT, a lot of people use the HP Microservers though
[15:51:26] <Haxxa> I guess I got unlucky
[15:51:36] <PMT> Without more data, I can't speculate.
[15:51:44] <Haxxa> cirdan, nah it was xeon 1265lv2 (45w tdp)
[15:51:51] <PMT> cirdan: systemd is installed. I thought that would have been obvious from having the systemd back-compat package for sysvinit scripts installed.
[16:19:47] <MilkmanDan> Memtest is a good way to test that your memory is capable of functioning flawlessly while your system is doing the least amount of work possible.
[16:29:38] <lblume> Memtest did find issues and helped me avoid others several times (stuck bit, incompatible memory modules), so while not perfect, it's far from useless.
[16:47:53] *** Markow <Markow!~ejm@176.122.215.103> has joined #zfsonlinux
[17:06:07] <MilkmanDan> Indeed. But many people seem to think it's a way of qualifying a bunch of DIMMs as "faulty" or "flawless". Down there be dragons.
[17:14:06] <lblume> Ah, I get your point, true enough.
[17:15:01] <lblume> And that's not even considering that DIMMs do fail too over time
[17:24:36] <Shinigami-Sama> Haxxa: we had that same problem with one microserv
[17:25:02] <Shinigami-Sama> we stuck an esata card in it for external backup copies and well...
[17:50:14] *** hyper_ch <hyper_ch!~hyper_ch@openvpn/user/hyper-ch> has joined #zfsonlinux
[18:08:54] <PMT> > eSATA
[18:09:01] <PMT> Well, there's your problem
[18:10:12] <prometheanfire> esata sucks?
[18:10:37] * prometheanfire just uses something that links up to the existing sata conector
[18:14:32] <Shinigami-Sama> esata is awesome for just single/dual external drives
[18:14:49] <Shinigami-Sama> it was just to dump backup files to at something more than 12-24MB/s
[18:15:31] <Shinigami-Sama> server -> esata caddie -> ftp relp
[18:23:07] *** Nukien <Nukien!~Nukien@162.250.233.55> has quit IRC (Ping timeout: 240 seconds)
[18:30:57] *** Nukien <Nukien!~Nukien@162.250.233.55> has joined #zfsonlinux
[18:34:22] *** elxa <elxa!~elxa@2a01:5c0:e090:30c1:b938:8b27:230f:2525> has joined #zfsonlinux
[18:34:59] *** elxa <elxa!~elxa@2a01:5c0:e090:30c1:b938:8b27:230f:2525> has quit IRC (Remote host closed the connection)
[18:35:12] *** elxa <elxa!~elxa@2a01:5c0:e090:30c1:b938:8b27:230f:2525> has joined #zfsonlinux
[18:37:56] *** Albori <Albori!~Albori@216-229-75-72.fidnet.com> has quit IRC (Ping timeout: 272 seconds)
[18:46:01] *** kaipee <kaipee!~kaipee@81.128.200.210> has quit IRC (Read error: Connection reset by peer)
[19:05:51] *** Albori <Albori!~Albori@216-229-75-72.fidnet.com> has joined #zfsonlinux
[19:25:29] *** Floflobel <Floflobel!~Floflobel@cosium-152-18.fib.nerim.net> has quit IRC (Remote host closed the connection)
[19:45:16] <zfs> [zfsonlinux/zfs] [RFC] new zscrub command for offline scrubs in userland (#6209) new review comment by Paul Dagnelie <https://github.com/zfsonlinux/zfs/pull/6209#pullrequestreview-189126026>
[19:46:46] <ptx0> bunder: lol amazon doesn't sell that 4tb wd red i got, anymore
[19:47:54] <Shinigami-Sama> thats why I bought 6tbs
[19:48:02] <Shinigami-Sama> 10$ more than the 4s...
[19:48:07] <Shinigami-Sama> at least when I picked them up
[19:50:53] <ptx0> just picked up 4tb wd red for $129 with a 1 per week limit
[19:51:06] <ptx0> was gonna buy 6 over a month and then rebuild my array
[19:51:20] <ptx0> the 6tb wd red are like $250
[19:51:24] <ptx0> not worth it
[19:52:25] <ptx0> can get a 6tb external for $179 but $215 is the cheapest bare internal
[19:52:41] <ptx0> and it's probably an SMR since it's Seagate
[19:53:47] <ptx0> WD Blue cheapest 4T in .ca right now at $127, cheapest 8TB (seagate, not listed as archive, but...) is $285
[19:54:27] <ptx0> unless you have some real density or power issues i don't think 8TB is that great these days
[19:54:52] <blackflow> wth why are those limits?
[19:55:03] <ptx0> ? the one per weeks? i have no idea
[19:55:19] <ptx0> it's for a nas device from '1 to 8 drives' according to WD yet limit 1 per customer
[19:56:16] *** wadeb <wadeb!~wadeb@38.101.104.148> has quit IRC (Quit: Leaving)
[20:07:26] <elxa> ptx0: why would you buy the smaller ones when price/TB isn't that different ? :D
[20:13:35] <ptx0> elxa: because i need more spindles
[20:13:39] <ptx0> not more TB
[20:13:55] <ptx0> i need a nice ratio of vdevs to TB
[20:14:02] <elxa> makes sense
[20:14:07] *** Dagger <Dagger!~dagger@sawako.haruhi.eu> has quit IRC (Excess Flood)
[20:14:15] <ptx0> yeah if i were spending 10x i'd probably do double density
[20:15:08] <ptx0> i was bidding on some 10tb disks and they had 4 days left in the auction, was gonna snipe them for $150 but people keep bidding up the value so now they are past MSRP
[20:15:17] <ptx0> good job idiots
[20:16:26] <ptx0> still might get a mybook duo 16tb
[20:17:00] <zfs> [zfsonlinux/zfs] NFSv4 ACL support - WIP, review requested (#7728) new commit by "Paul B. Henson" <https://github.com/zfsonlinux/zfs>
[20:19:21] <elxa> ptx0: I bought 2x wd red 10tb drives in an ebay auction only to notice later that they were oem drives. They're working fine, but this is what you get for trusting ebay :D
[20:19:47] <ptx0> you can file a claim you know
[20:20:15] <elxa> against the seller for not providing or knowing about this detail?
[20:20:39] <ptx0> elxa: i got some 2x 4tb wd gold and they work great, bought maybe 7x 5tb disks from ebay and each one came as described
[20:21:12] <ptx0> though i find myself wishing i'd gotten the toshiba x300 5tb instead of seagate, since they're SMR and trash
[20:21:39] *** Dagger <Dagger!~dagger@sawako.haruhi.eu> has joined #zfsonlinux
[20:21:45] <elxa> are you mixing sizes in a single pool ?
[20:29:45] <Shinigami-Sama> wd makes gold now?
[20:29:55] <Shinigami-Sama> is that the new black? like in power rangers?
[20:32:40] <PMT> i heard orange was the new black
[20:33:34] <Shinigami-Sama> thats a transition...
[20:33:40] <ptx0> elxa: probably not
[20:34:03] <ptx0> i will move the 5tb disks to a freebsd archive box and use only new non-SMR disks in my xeon server
[20:35:27] <DHE> golds have existed for a while now. they're the generic enterprise model hard drives...
[20:35:37] <ptx0> they are gone now
[20:35:41] <elxa> Shinigami-Sama: I'm pretty sure you'd get a bit of gold as well with those electronics :D But the gold/price ratio sucks ^^
[20:36:18] <ptx0> DHE: they are now called datacentre drives or some shit
[20:37:25] <zfs> [zfsonlinux/zfs] zfs should optionally send holds (#7513) new review comment by loli10K <https://github.com/zfsonlinux/zfs/pull/7513#discussion_r245109722>
[20:37:58] <DHE> I'll take your word for it, I don't use WDs most of the time
[20:38:22] <Shinigami-Sama> just like with carrots eh elxa
[20:40:10] <zfs> [zfsonlinux/zfs] zfs should optionally send holds (#7513) new review comment by Paul Zuchowski <https://github.com/zfsonlinux/zfs/pull/7513#discussion_r245110477>
[20:48:01] *** gerhard7 <gerhard7!~gerhard7@ip5657ee30.direct-adsl.nl> has quit IRC (Quit: Leaving)
[20:49:30] *** gerhard7 <gerhard7!~gerhard7@ip5657ee30.direct-adsl.nl> has joined #zfsonlinux
[20:53:14] <cirdan> DHE: wd finally absorbed hgst, the gold are now Ultrastar DC
[20:53:43] <cirdan> i think the wd red pro = hgst nas as well
[20:55:26] <cirdan> even the website just redirects :/
[20:55:37] <cirdan> i hope they keep the sandisk name though at least for sd cards and such
[20:56:10] <zfs> [zfsonlinux/zfs] OpenZFS: 'vdev initialize' feature (#8230) new review comment by loli10K <https://github.com/zfsonlinux/zfs/pull/8230#pullrequestreview-189159878>
[20:56:52] <cirdan> ptx0: the n300 is the toshiba nas drive, it's been working ok
[20:57:13] <cirdan> the main complaint I heard was they'll send you a prepaid visa to refund your money instead of sending an RMA drive
[20:59:42] <FinalX> cirdan: yes, wd red pro = hgst nas
[21:01:08] <cirdan> i'd have rather seen the hgst kept for nas/enterprise drives
[21:01:11] <zfs> [zfsonlinux/zfs] OpenZFS: 'vdev initialize' feature (#8230) new review comment by Brian Behlendorf <https://github.com/zfsonlinux/zfs/pull/8230#discussion_r245116581>
[21:01:14] <cirdan> but they want * wd branding
[21:01:23] <FinalX> yeah, same
[21:01:48] <FinalX> plus higher price than the hgst nas it seems
[21:01:55] <cirdan> yeah
[21:02:56] <zfs> [zfsonlinux/zfs] OpenZFS: 'vdev initialize' feature (#8230) new review comment by loli10K <https://github.com/zfsonlinux/zfs/pull/8230#pullrequestreview-189162303>
[21:03:02] *** eab <eab!~eborisch@75-134-18-245.dhcp.mdsn.wi.charter.com> has joined #zfsonlinux
[21:09:17] <zfs> [zfsonlinux/zfs] Add missing MMP status code to libzfs_status (#8222) new review comment by Brian Behlendorf <https://github.com/zfsonlinux/zfs/pull/8222#discussion_r245118688>
[21:11:33] *** veegee <veegee!~veegee@ipagstaticip-3d3f7614-22f3-5b69-be13-7ab4b2c585d9.sdsl.bell.ca> has quit IRC (Ping timeout: 246 seconds)
[21:14:11] *** veegee <veegee!~veegee@ipagstaticip-3d3f7614-22f3-5b69-be13-7ab4b2c585d9.sdsl.bell.ca> has joined #zfsonlinux
[21:16:08] <zfs> [zfsonlinux/zfs] Add missing MMP status code to libzfs_status (#8222) merged by Brian Behlendorf <https://github.com/zfsonlinux/zfs/issues/8222#event-2053229312>
[21:16:16] <zfs> [zfsonlinux/zfs] Wrong doc link on multihost-SUSPENDED pool (#8148) closed by Brian Behlendorf <https://github.com/zfsonlinux/zfs/issues/8148#event-2053229558>
[21:23:19] *** Baughn <Baughn!~Baughn@2a01:4f9:2b:808::> has quit IRC (Quit: ZNC 1.6.2+deb1 - http://znc.in)
[21:30:28] *** Slashman <Slashman!~Slash@cosium-152-18.fib.nerim.net> has quit IRC (Remote host closed the connection)
[21:49:52] <ptx0> cirdan: did you see that China is calling one of its new states New Jersey so that all shipments on ebay are now masked as if they come via USA
[21:50:11] <ptx0> shipping from Avanel, New Jersey, China
[21:50:31] <ptx0> they should have gone with 'New New Jersey'
[21:52:32] <Sketch> s/states/cities/ i'm sure
[21:52:49] <ptx0> province, really
[21:53:11] <Sketch> i don't think they have new provinces
[21:53:23] <ptx0> google it bro
[21:54:34] <Sketch> i did, and i'm not finding much
[21:54:50] <Sketch> except that new jersey used to be a province before 1776
[21:58:29] <ptx0> oh
[21:58:35] <ptx0> well, keep looking
[22:09:44] <elxa> do you all keep your pools less than 90% filled? For big pools that is a lot of wasted space
[22:11:13] <Shinigami-Sama> elxa: do you live in a shipping container?
[22:11:26] <Shinigami-Sama> because all that vertical room in your house is wasted space too by that logic
[22:11:35] <Shinigami-Sama> not that I really disagree either way
[22:13:25] <Sketch> not if you hang hard drives from your ceiling...
[22:14:01] <Sketch> i've hit 100% full on pools before. just don't do it if you have snapshots or other things like that.
[22:15:41] <elxa> idk I've a centos server with 14.4T (size from zpool list) raidz1 pool and the whole server comes to a halt during writes sometimes.
[22:15:56] <elxa> cap is 91%
[22:16:13] <elxa> NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
[22:16:14] <elxa> tank 14.4T 13.2T 1.17T - 30% 91% 1.00x ONLINE -
[22:21:10] *** gerhard7 <gerhard7!~gerhard7@ip5657ee30.direct-adsl.nl> has quit IRC (Ping timeout: 250 seconds)
[22:24:02] <bunder> i think it's pretty easy to say "because it's full"
[22:26:09] <bunder> ptx0: see i knew buying them one at a time would suck :P
[22:27:54] *** Myrl-saki <Myrl-saki!~programme@unaffiliated/myrl> has joined #zfsonlinux
[22:28:01] <elxa> 8-disk raidz1 pool. Bad setup or just full? Because 1.17T free doesn't seem like it should act like this. Or is this because of fragmentation and the disks spend most of the time seeking?
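For anyone wanting to check the same thing on their own pool: the numbers elxa pasted come from zpool itself, and a nearly full pool with 30% free-space fragmentation will spend a lot of time hunting for gaps. A couple of stock commands (the pool name "tank" matches the paste above):

    # overall size, allocation, free-space fragmentation and capacity
    zpool list -o name,size,allocated,free,fragmentation,capacity tank

    # per-vdev breakdown, to spot a vdev that is much fuller than the rest
    zpool list -v tank

    # the same figures as pool properties
    zpool get capacity,fragmentation tank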
[22:28:02] <Myrl-saki> Why is ZFS's memory usage opaque to the kernel?
[22:29:52] <Myrl-saki> Oh, I guess this one's the answer? https://stackoverflow.com/questions/18808174/lost-memory-on-linux-not-cached-not-buffers/18808311#18808311
[22:30:18] <Sketch> because it belongs to zfs, not the kernel
[22:30:58] <Myrl-saki> Sketch: Does the distinction matter? How about, say, with ext4?
[22:32:39] <Sketch> zfs is a bit different, it's not just a filesystem
[22:32:55] <Sketch> it's basically its own filesystem+volume manager+raid+cache
[22:33:24] <Shinigami-Sama> Myrl-saki: also ZFS isn't GPL, and the kernel is rather...unfriendly if the module isn't GPL
[22:33:57] <Shinigami-Sama> so it can't use lots of the friendly options the other FSes can
[22:34:16] <Myrl-saki> Shinigami-Sama: Ah.
[22:34:48] <Myrl-saki> Something else.
[22:35:05] <Myrl-saki> (a) L2ARC is clear on boot time, right?
[22:35:18] <Myrl-saki> Rather, L2ARC is clear after boot.
[22:36:02] <Myrl-saki> And (b) Why does L2ARC not appear as extra storage? I guess that's also because of (a), but why (a) from the start?
[22:36:38] <Shinigami-Sama> there's some work on making persistent l2arc, but it's basically just more cache
[22:36:58] <Shinigami-Sama> you don't count the 32/64/128MB of cache on a drive as extra storage do you?
[22:37:32] <Myrl-saki> Shinigami-Sama: Right, I'm thinking more of tiered storage than anything.
[22:38:59] <Shinigami-Sama> it's not really a tier, it's just more cache, zfs doesn't really do "tiers"
[22:39:38] <Shinigami-Sama> BPR is probably the closest thing, but it's still not even really close...or sanely achievable
[22:40:09] <Myrl-saki> "Block Pointer Rewrite"?
[22:40:11] *** hyegeek <hyegeek!~hakimian@wsip-72-214-228-246.ph.ph.cox.net> has quit IRC (Quit: Leaving.)
[22:40:17] <Shinigami-Sama> yes
[22:40:38] <Shinigami-Sama> I love the idea, but I'm told the spec makes it rather untenable
[22:42:42] <Myrl-saki> Oh. I think I see what you mean now.
[22:42:58] <Myrl-saki> Shinigami-Sama: Thanks, I understand the technological limitations now.
[22:43:27] <Shinigami-Sama> yeah, it's a great idea if you ever had anyone who could write perfect+performant code
[22:43:53] <Shinigami-Sama> but we live in reality, in addition to some of the other issues that arise if you use BPR
[22:43:55] <Myrl-saki> Shinigami-Sama: TL;DR, tiered storage doesn't really make sense(read: hard to implement) for ZFS, because of immutability?
[22:44:15] <Myrl-saki> Rather.
[22:44:20] <Myrl-saki> Hard to make it make sense.
[22:45:22] *** hyegeek <hyegeek!~hakimian@wsip-72-214-228-246.ph.ph.cox.net> has joined #zfsonlinux
[22:46:46] <Shinigami-Sama> that's my understanding, I just use zfs, and lurk long enough to see these questions come up somewhat regularly
[22:47:05] <Myrl-saki> Ah. :P
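For completeness, since the question comes up regularly: L2ARC is attached as a "cache" vdev and, at the time of this discussion, starts cold again after every import; it is never counted as pool capacity. A minimal sketch, with the pool and device names as placeholders:

    # attach an SSD/NVMe partition as L2ARC (cache vdev) to pool "tank"
    zpool add tank cache /dev/nvme0n1p1

    # cache vdevs can be removed again at any time
    zpool remove tank /dev/nvme0n1p1

    # L2ARC size and hit/miss counters from the kernel stats
    grep -E '^(l2_size|l2_hits|l2_misses) ' /proc/spl/kstat/zfs/arcstats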
[22:49:32] <zfs> [zfsonlinux/zfs] Eliminate ZTHR races by serializing ZTHR operations. (#8229) comment by Tom Caputi <https://github.com/zfsonlinux/zfs/issues/8229>
[22:49:43] <Myrl-saki> Sketch: Also. From what I understand based on the SO answer, the ZFS cache is more informative than the Linux FS cache(or whatever it's called), so it cannot be transparently deallocated either?
[22:49:48] <zfs> [zfsonlinux/zfs] Eliminate ZTHR races by serializing ZTHR operations. (#8229) comment by Tom Caputi <https://github.com/zfsonlinux/zfs/issues/8229>
[22:51:11] <zfs> [zfsonlinux/zfs] Add zfs module feature and property compatibility (#8231) comment by Don Brady <https://github.com/zfsonlinux/zfs/issues/8231>
[22:52:29] <Shinigami-Sama> zfs cache (ARC) is bigger/more inclusive than the standard linux FS cache (free -m) and is managed by its own process
[22:52:57] <Shinigami-Sama> it reacts more or less the same way, and deallocates when it sees pressure and grows when there is less pressure
[22:53:52] <Myrl-saki> Shinigami-Sama: Right. But the standard Linux FS cache allows you to allocate *over it*, from what I understand?
[22:54:09] <Myrl-saki> As opposed to ZFS having to deallocate once memory is going down?
[22:54:14] <Shinigami-Sama> its just quick at shrinking
[22:54:19] <Shinigami-Sama> arc is slower
[22:55:01] <Myrl-saki> Ah.
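On the "allocate over it" point: the ARC shows up as ordinary used memory rather than as page cache, and while it does shrink under pressure (more slowly than the page cache, as noted above), its ceiling can also simply be capped via the zfs_arc_max module parameter. A quick sketch, assuming an 8 GiB cap chosen purely as an example:

    # current ARC size and maximum target, in bytes
    awk '$1 == "size" || $1 == "c_max" {print $1, $3}' /proc/spl/kstat/zfs/arcstats

    # cap the ARC at 8 GiB on the running system
    echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max

    # make the cap persistent across module reloads
    echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf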
[22:56:45] *** elxa_ <elxa_!~elxa@2a01:5c0:e090:30c1:b938:8b27:230f:2525> has joined #zfsonlinux
[22:59:01] *** elxa <elxa!~elxa@2a01:5c0:e090:30c1:b938:8b27:230f:2525> has quit IRC (Ping timeout: 252 seconds)
[23:07:43] <zfs> [zfsonlinux/zfs] OpenZFS: 'vdev initialize' feature (#8230) new review comment by Brian Behlendorf <https://github.com/zfsonlinux/zfs/pull/8230#discussion_r245149005>
[23:09:54] *** ih8wndz <ih8wndz!jwpierce3@001.srv.trnkmstr.com> has left #zfsonlinux ("WeeChat 2.3")
[23:10:00] *** ih8wndz <ih8wndz!jwpierce3@001.srv.trnkmstr.com> has joined #zfsonlinux
[23:25:20] *** shibboleth <shibboleth!~shibbolet@gateway/tor-sasl/shibboleth> has joined #zfsonlinux
[23:28:59] <zfs> [zfsonlinux/zfs] OpenZFS: 'vdev initialize' feature (#8230) new commit by Brian Behlendorf <https://github.com/zfsonlinux/zfs>
[23:30:02] * MTecknology takes a biiiiiig breath and makes an attempt to dive into zfsonlinux.
[23:30:50] <MTecknology> Installing a server via debootstrap seems like a very interesting step.
[23:31:17] *** simukis <simukis!~simukis_@78-63-88-48.static.zebra.lt> has quit IRC (Quit: simukis)
[23:41:26] <Shinigami-Sama> oh an MTecknology
[23:41:39] <Shinigami-Sama> ptx0: you'll love/hate this guy
[23:42:11] <Shinigami-Sama> MTecknology: if I can manage to get zfs working you'll be fine
[23:45:33] * MTecknology lol @ ^
[23:53:39] * ptx0 primes the ban hammer
[23:53:39] <MTecknology> Shinigami-Sama: I spent all of yesterday fighting with my raid controller, trying to stick hba-only firmware on it. I gave up and just went with its JBOD option. Rumors are that the hba firmware gives only marginally better performance, and I suspect I'll never notice.
[23:55:12] <MTecknology> Hammertime! "━━▊ ━━▊ ━━▊ - https://i.imgur.com/iGDxObQ.mp4"
[23:55:13] <zfs> [zfsonlinux/zfs] After a week of running array, issuing zpool scrub causes system hang (#7553) comment by Brandon Black <https://github.com/zfsonlinux/zfs/issues/7553#issuecomment-451304082>
[23:56:17] <Shinigami-Sama> HW sucks unless it's part of the warranty package yes
[23:56:51] <PMT> MTecknology: what controller
[23:57:23] <MTecknology> 9341-8i
[23:58:24] *** elxa_ <elxa_!~elxa@2a01:5c0:e090:30c1:b938:8b27:230f:2525> has quit IRC (Ping timeout: 252 seconds)
[23:58:37] <PMT> any particular reason you bought a raid controller
[23:59:39] <MTecknology> I was using hardware raid when I made the purchase. A couple years ago, I decided to switch to sw raid whenever I had a need to rebuild the box. That point is just now showing up.