   January 11, 2019
[00:08:52] *** Dagger <Dagger!~dagger@sawako.haruhi.eu> has quit IRC (Excess Flood)
[00:09:34] <zfs> [zfsonlinux/zfs] Linux 5.0: MS_RDONLY undeclared (#8264) created by Tony Hutter <https://github.com/zfsonlinux/zfs/issues/8264>
[00:10:47] *** Dagger2 <Dagger2!~dagger@sawako.haruhi.eu> has joined #zfsonlinux
[00:12:33] *** tnebrs <tnebrs!~barely@> has joined #zfsonlinux
[00:14:54] *** z1mme <z1mme!zimme@gateway/shell/firrre/x-vmwexyvdfabkvxlk> has quit IRC (Ping timeout: 252 seconds)
[00:16:21] <zfs> [zfsonlinux/zfs] Feature request: incremental scrub (#8248) comment by Brian Behlendorf <https://github.com/zfsonlinux/zfs/issues/8248#issuecomment-453295622>
[00:18:30] *** z1mme <z1mme!zimme@gateway/shell/firrre/x-skhrnyhcjilaqadc> has joined #zfsonlinux
[00:26:09] <bunder> so i wonder what we can do about that, or is 4.20 the end of the line for us
[00:30:55] <Shinigami-Sama> ?
[00:31:16] <bunder> #8259
[00:31:18] <zfs> [zfs] #8259 - Linux 5.0: asm/i387.h: No such file or directory <https://github.com/zfsonlinux/zfs/issues/8259>
[00:31:30] <bunder> greg basically told us to get fucked :/
[00:33:11] *** TimWolla <TimWolla!~timwolla@2a01:4f8:150:6153:beef::6667> has quit IRC (Quit: Bye)
[00:33:19] <Shinigami-Sama> oh him
[00:33:33] <Shinigami-Sama> I've seen his name show up before
[00:33:46] <PMT> he seemed pretty nice in person
[00:33:52] <Shinigami-Sama> usually Linus yelling at him as I recall
[00:34:11] <bunder> greg is pretty important, he does all the minor and lts releases for linux after linus releases the major versions
[00:34:20] <PMT> wow, damn
[00:34:30] <PMT> he is being quite a pissant there
[00:37:08] <Shinigami-Sama> that would be why Linus yells at him a lot
[00:37:19] <Shinigami-Sama> (by his standards)
[00:37:39] <bunder> i've seen several of his conference videos, i would say he's normally a decent guy as well
[00:37:45] <PMT> I thought he got yelled at when Linus disagreed with him, and how rational some of those are probably varies
[00:37:45] <bunder> i wonder who shit in his coffee
[00:39:39] *** TimWolla <TimWolla!~timwolla@2a01:4f8:150:6153:beef::6667> has joined #zfsonlinux
[00:40:47] <bunder> can we even relicense? or would that be up to oracle?
[00:41:14] <PMT> No.
[00:41:21] <bunder> dunno if delphix et al would even care what license we use
[00:41:44] <PMT> Even if you got the Oracle ZFS relicensed, you'd need to go shake down all the committers to OpenZFS over the years to relicense.
[00:42:03] <bunder> hm fun, i figured as much
[00:42:14] <PMT> And as some of them have active antipathy toward Linux, good luck with that.
[00:42:46] <bunder> well i ain't about to switch to fbsd now :P
[00:43:04] <PMT> why not? they don't have a GPL problem and will soon have ZoL =P
[00:43:23] <bunder> unlikely if zol has no future
[00:43:55] <gchristensen> seems unlikely for ZoL to have no future
[00:44:25] <bunder> you missed 8259 / https://marc.info/?l=linux-kernel&m=154714516832389&w=2
[00:44:33] <gchristensen> I didn't miss it
[00:45:15] <PMT> bunder: I mean, it's just the crc32c export. It's not like we can't get the implementation elsewhere.
[00:45:28] <gchristensen> but I wouldn't take a single pissy email to mean the project is dead. a bunch of companies depend on it, some of which are members of the linux foundation. and failing that, yeah, there are alternatives
[00:47:03] <bunder> PMT: there comes a time where re-implementing what linux keeps forcing away from us becomes unmanageable, to the point where we'd need a whole new kernel
[00:47:15] <PMT> I'm aware.
[00:47:21] <PMT> But I doubt we're anywhere near that point.
[00:47:33] <Markow> Why does GKH have 'no tolerance' for ZFS on Linux? What's his issue with it?
[00:47:49] <bunder> probably the usual cddl vs gpl bullshit
[00:49:50] <ptx0> so i guess you didn't see the nvidia shit
[00:50:01] <ptx0> because they don't seem to care much about gpl symbols either
[00:50:02] <ptx0> ;)
[00:50:45] <Shinigami-Sama> or you roll back to a shim module again like spl
[00:51:03] <gchristensen> gobs of solutions
[00:51:20] <gchristensen> quick, write in-tree kernel modules which need every function zfs uses ;)
[00:51:45] <bunder> that's like half a kernel though
[00:52:08] <gchristensen> it is going to be fine
[00:52:16] <bunder> memory, io, crypto, compression
[00:52:35] <Shinigami-Sama> ...so dep on systemd?
[00:53:48] *** tnebrs <tnebrs!~barely@> has quit IRC (Ping timeout: 245 seconds)
[00:53:51] <bunder> i bet if they said we can't use kernel avx it would get more attention
[00:54:37] <gchristensen> we'll see what happens if that happens
[00:59:07] *** IonTau <IonTau!~IonTau@ppp121-45-221-77.bras1.cbr2.internode.on.net> has joined #zfsonlinux
[00:59:40] *** Drakonis <Drakonis!~Drakonis@unaffiliated/drakonis> has joined #zfsonlinux
[01:04:11] <ptx0> we should just port ZoL to FreeBSD!
[01:04:19] <ptx0> oh wait
[01:06:19] <gchristensen> let's just all switch to illumos and use linux flavored zones
[01:06:42] <Sketch> mmm, linux flavored
[01:08:32] <ptx0> summary: 5913 GiByte in 14h 38min 04.9sec - average of 115 MiB/s
[01:08:45] <ptx0> so the SMR drives alone would have done 40MiB/s for that much
[01:09:11] <ptx0> DeHackEd: ^
[01:15:33] *** Markow <Markow!~ejm@> has quit IRC (Quit: Leaving)
[01:23:10] <PMT> I'm still a fan of GPL\0, with additional clauses
[01:24:17] <Shinigami-Sama> yeah the gpl is kind of going against "Free" I find
[01:25:04] <PMT> Shinigami-Sama: not quite, this is explicitly always what they meant by Free.
[01:25:34] <Shinigami-Sama> gpl3 reminded me of militant pacifists
[01:26:08] <PMT> The GPL has always been very militant about its restrictions, that's the whole point and novelty of it.
[01:27:28] <gchristensen> this is definitely the point of the gpl
[01:28:35] <bunder> if only bsd didn't have a crap firewall compared to iptables+ipset
[01:29:21] <bunder> (not that iptables is amazing either, but its not pf)
[01:30:27] <PMT> They swear iptables is being replaced with nftables or similar, IIRC
[01:30:34] <PMT> or possibly bpftables, I forget what they called it
[01:30:49] <PMT> Ah, bpfilter.
[01:32:28] <bunder> https://www.phoronix.com/scan.php?page=news_item&px=ZFS-On-Linux-5.0-Problem
[01:32:31] <bunder> :D
[01:34:37] <cirdan> simple fix, just use the damn gpl symbols
[01:34:48] <cirdan> then laugh as there's nothing they can do
[01:35:12] *** jasonwc <jasonwc!~jasonwc@pool-72-66-15-203.washdc.fios.verizon.net> has joined #zfsonlinux
[01:37:04] <cirdan> also, it's not extra work to keep zfs working, they are doing extra work to break it
[01:37:24] <jidar> I do think ipset is pretty nice, and I've also found firewalld pretty nice from a cli-interface standpoint
[01:41:01] <Shinigami-Sama> PMT: AFAIR iptables just links into the new netfilters
[01:41:26] <bunder> ipset is nice, because many iptables rules makes it slow
[01:41:40] <bunder> when you go from 60k rules to like 40, its amazing
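(bunder's point about rule-count scaling can be sketched as follows; the set name and addresses are hypothetical, and the commands need root plus the ipset/netfilter modules:)

```shell
# Instead of thousands of per-address iptables rules, put the
# addresses in one hash set; set lookups stay fast regardless of size.
ipset create blocklist hash:ip
ipset add blocklist 192.0.2.1
ipset add blocklist 198.51.100.7

# A single rule then covers every address in the set.
iptables -A INPUT -m set --match-set blocklist src -j DROP
```

Adding or removing addresses from the set takes effect immediately, without touching the iptables ruleset itself.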
[01:42:28] <Shinigami-Sama> I have like 3 rules, for my plex server to redirect 443 -> 320??
[01:43:15] <cirdan> https://lwn.net/Articles/603131/
[01:44:53] <jasonwc> Why would running smartctl or hddtemp impact drive performance? I noticed that scrub performance was lower than usual and iotop indicated the slowdown corresponded to smartd checking HDD temps.
[01:45:06] <cirdan> depends on the fimware
[01:45:22] <cirdan> there's a seagate? drive that corrupts data if you check smart while it's writing...
[01:45:33] <jasonwc> I just happened to be watching the server and watched as each light went off in turn from 1 to 24
[01:45:44] <jasonwc> I thought that was Samsung
[01:46:02] <cirdan> yeah it's something
[01:47:08] <jasonwc> It's like quantum physics. The very act of monitoring the drive impacts its performance.
[01:47:17] <jasonwc> Don't look and all will be fine :P
[01:51:28] <PMT> cirdan: the drive was samsung at the time, though seagate bought samsung's hdd business in the interim
[01:51:44] <cirdan> yeah I have one
[01:51:56] *** tnebrs <tnebrs!~barely@> has joined #zfsonlinux
[01:52:06] *** zfs sets mode: +b *!*@$#zfsonlinux-quarantine
[01:54:32] <DeHackEd> <ptx0> so the SMR drives alone would have done 40MiB/s for that much # that's right on the money for me
[01:55:16] *** tlacatlc6 <tlacatlc6!~tlacatlc6@> has joined #zfsonlinux
[01:57:42] <Shinigami-Sama> yeah that seems like you could make SMR work..until you fill your special vdev
[01:57:47] * DeHackEd needs a better way of getting USB keyboards and mice to his windows VM than the usual methods...
[01:58:04] <DeHackEd> Shinigami-Sama: well that's poor planning on your part. plus you can use `zpool list -v` or `zpool iostat -v` to see how full each vdev is
[01:58:59] <Shinigami-Sama> I mean how would you even fix it? mirror it to a bigger drive, split, and auto-expand?
[01:59:03] <bunder> DeHackEd: dunno if synergy works on terminals, or if its just xorg only
[01:59:45] <DeHackEd> Shinigami-Sama: zpool add $POOLNAME special mirror anotherssd1 anotherssd2
[01:59:56] <DeHackEd> preferably BEFORE the first one fills to make sure nothing spills out
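(The commands DeHackEd mentions can be sketched as follows; the pool name "tank" and the SSD device names are hypothetical:)

```shell
# See how full each vdev is, including the special vdev
# (the per-vdev CAP column shows capacity used).
zpool list -v tank

# Per-vdev allocation and I/O statistics.
zpool iostat -v tank

# Before the existing special vdev fills, add a second special
# mirror so new metadata allocations can land on it.
zpool add tank special mirror anotherssd1 anotherssd2
```

As noted in the channel, this does not rebalance existing data; only new allocations use the added vdev.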
[02:00:00] <PMT> synergy is explicitly a graphical thing, I believe.
[02:00:13] <Shinigami-Sama> DeHackEd: wouldn't that just stripe to a new disk?
[02:00:26] *** plut0 <plut0!~cory@> has joined #zfsonlinux
[02:00:29] <Shinigami-Sama> to a new special mirror rather
[02:00:41] <DeHackEd> yeah
[02:00:45] <DeHackEd> and?
[02:00:56] <PMT> Shinigami-Sama: it won't move your old data, no.
[02:01:12] <Shinigami-Sama> but then you'd have an unbalanced stripe
[02:01:23] <DeHackEd> they're SSDs, I'm assuming that the performance impact involved wouldn't really be THAT bad...
[02:01:48] <DeHackEd> not ideal, but probably well below the threshold of me caring. especially when the other side of the coin is SMR disks
[02:02:11] <DeHackEd> and I don't mean spilling onto SMR. I mean all data disk access is to SMR
[02:02:25] <Shinigami-Sama> couldn't you just mirror it first to a larger disk, then break and expand? it seems to be a better option than a sprawling mess
[02:02:35] <PMT> Shinigami-Sama: ...what?
[02:02:51] <DeHackEd> you could, but why bother?
[02:03:08] <DeHackEd> if you need your metadata spread across multiple SSDs for performance reasons, you have a problem.
[02:03:09] <PMT> If you're concerned about not all the data landing on the special device, recreate the pool.
[02:03:30] <DeHackEd> and I will remind you there is already a mirror to help offload that
[02:03:55] <Shinigami-Sama> I guess I'm just over thinking it
[02:04:34] <PMT> Oh, I see. You could indeed replace the legs of the special devices with larger drives before they filled.
[02:05:06] <Shinigami-Sama> I'd rather stick bigger drives into it, than waste slots with yet more disks
[02:05:17] <DeHackEd> well a special vdev isn't really special other than choosing what blocks go where. all the usual rules of vdevs apply
[02:05:25] <Shinigami-Sama> reuse the smaller disk for something else
[02:05:37] <DeHackEd> and such a resilver would actually go pretty fast
[02:07:44] <Shinigami-Sama> makes sense
[02:09:51] <ptx0> you don't have to mirror it to a larger disk first. just zpool replace.
[02:10:03] <ptx0> limited sata / pcie lanes is a real concern
[02:12:27] *** tnebrs <tnebrs!~barely@> has quit IRC (Ping timeout: 240 seconds)
[02:14:55] <jasonwc> I basically did exactly that for my main rpool yesterday. Replaced a mirror of 240GB SSDs with a mirror of 1TB SSDs
[02:15:11] <jasonwc> I think the resilver took 3 minutes per disk
[02:17:11] <tlacatlc6> nice
[02:18:19] <tlacatlc6> i did a test also on small mechanical drives and worked perfectly, although it had little data. :)
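(The replace-and-grow approach ptx0 and jasonwc describe can be sketched as follows; pool and device names are hypothetical:)

```shell
# Let vdevs grow automatically once all their devices are larger.
zpool set autoexpand=on tank

# Replace each leg of the special mirror with a larger SSD,
# one at a time.
zpool replace tank small-ssd1 big-ssd1

# Wait for the resilver to finish before touching the second leg.
zpool status tank

zpool replace tank small-ssd2 big-ssd2
```

Because a special vdev holds only metadata (and optionally small blocks), these resilvers tend to finish quickly, as jasonwc saw.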
[02:22:00] <ptx0> my friend is absolutely reckless with his storage habits and his special vdev has 120gb left still
[02:22:08] <ptx0> out of 160gb
[02:22:12] <ptx0> with a 4tb pool, mind you
[02:22:33] <ptx0> i think "what if the special vdev fills up" is a stupid reason not to use one
[02:24:23] <Shinigami-Sama> oh it wasn't a reason not to use one, but more of me going "I only have 120GB SSDs"
[02:24:26] <zfs> [zfsonlinux/zfs] Disk Failure causes offline pool with enabled multihost (#7709) comment by Tony Hutter <https://github.com/zfsonlinux/zfs/issues/7709#issuecomment-453332756>
[02:24:31] <ptx0> still not a reason :P
[02:24:41] <ptx0> i've got 128k block offload limit on his pool as well
[02:25:12] *** rjvbb <rjvbb!~rjvb@2a01cb0c84dee60021401c4d8cbb67e2.ipv6.abo.wanadoo.fr> has quit IRC (Ping timeout: 252 seconds)
[02:25:20] <jasonwc> On a pool with 56TB of data, zdb said only ~27GB was metadata
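(A figure like jasonwc's can be obtained from zdb's block statistics; "tank" is a hypothetical pool name:)

```shell
# Print per-block-type statistics for the pool; the ASIZE totals of
# the metadata rows show how much space metadata actually occupies.
# This walks every block, so it can take a long time on large pools.
zdb -bb tank
```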
[02:25:28] <ptx0> before, his / was on 160gb mirror ssd and VMs etc ran from 4tb SMR mirror. it was hard to keep / with enough free space and the VM ran like hell. now with the special vdev it is a giant "4.1TB pool"
[02:25:38] *** Markow <Markow!~ejm@> has joined #zfsonlinux
[02:25:43] <ptx0> the VM runs much better
[02:25:56] *** zfs sets mode: +b *!*@$#zfsonlinux-quarantine
[02:32:04] *** jasonwc <jasonwc!~jasonwc@pool-72-66-15-203.washdc.fios.verizon.net> has quit IRC (Ping timeout: 258 seconds)
[02:36:41] <bunder> actually
[02:36:47] <bunder> i've been thinking about what greg said
[02:37:11] <bunder> "Sun explicitly did not want their code to work on Linux" is there actual proof of that
[02:37:28] <ptx0> yeah for something so explicit they sure never stated it
[02:39:14] <bunder> i mean, why even open source it if you didn't want people to use it (granted the cddl wasn't the license gpl-huggers like)
[02:39:37] <ptx0> it's NIH syndrome
[02:39:37] <bunder> is he just being butthurt or something
[02:39:51] <Shinigami-Sama> didn't a bunch of lawyers even conclude that the zfs cddl was actually compatible anyways?
[02:39:56] <ptx0> yes
[02:40:02] <bunder> if you believe ubuntu
[02:40:21] <Shinigami-Sama> I believe shuttle cock will pay lots of money for useless things
[02:43:10] <bunder> who did he pay?
[02:43:26] <ptx0> lawyers
[02:43:54] <ptx0> it was not just in-house counsel
[02:44:03] <ptx0> it was the SFLC too
[02:44:15] <ptx0> and the FSF
[02:44:52] <ptx0> linus's bosses should be like uh greg our legal dept disagrees
[02:45:16] <ptx0> i know linux foundation != fsf
[02:46:00] <ptx0> if all else fails submit a CoC violation
[02:46:32] <bunder> i almost feel someone should correct his statement, but it shan't be me, not subbed to lkml
[02:46:40] <ptx0> they might schedule for greg the same lobotomy that linus got
[02:47:08] <ptx0> you don't have to be subscribed to mail the lkml
[02:47:33] <bunder> yeah i know but at least i'd be able to see a response without having to browse it
[02:48:07] <ptx0> oh believe me you'll know
[02:48:09] <bunder> also how do you reply to a thread in progress without having a part of the chain
[02:48:16] <ptx0> nfc
[02:48:23] <ptx0> reply to the message id
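(What ptx0 means: a reply can join an existing thread even if you never received the original mail, by setting the threading headers by hand. A minimal sketch, with a placeholder message-id:)

```
To: linux-kernel@vger.kernel.org
Subject: Re: <original subject line>
In-Reply-To: <message-id-of-the-mail-you-reply-to>
References: <message-id-of-the-mail-you-reply-to>
```

The message-id can be copied from a web archive such as lore.kernel.org; most mail clients and `git send-email --in-reply-to=` accept it directly.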
[02:49:19] <Shinigami-Sama> bunder: theres a link to "reply" on the webviewer
[02:49:32] <Shinigami-Sama> it'll open up your default mail handler(mine is gmail)
[02:51:37] <ptx0> mine doesn't have that.
[02:51:48] *** Essadon <Essadon!~Essadon@81-225-32-185-no249.tbcn.telia.com> has quit IRC (Quit: Qutting)
[02:52:39] <Shinigami-Sama> oh its gone now
[02:53:44] <bunder> is it because its marc?
[02:53:56] <bunder> tbh its the only lkml reader i use
[02:54:05] <Shinigami-Sama> I always blame pottering tbh
[02:54:22] <bunder> lel
[02:59:13] <Drakonis> https://lore.kernel.org/ the best
[03:00:00] <ptx0> 130gb special data on my 5.7TiB backup pool
[03:01:37] * ptx0 pats the m.2 wd black
[03:03:27] <ptx0> weirdly only have 950gb allocated on one of the vdevs and 2.4tb on both the others
[03:03:40] <ptx0> even though they all started there together
[03:06:51] *** plut0 <plut0!~cory@> has quit IRC (Quit: Leaving.)
[03:16:24] *** Drakonis <Drakonis!~Drakonis@unaffiliated/drakonis> has quit IRC (Read error: Connection reset by peer)
[03:23:31] <Crocodillian> so I managed to install opensuse tumbleweed on zfs, it went not too terribly and works
[03:25:33] <blackflow> So..... Netcraft confirms it? ZFS is dying?
[03:25:35] <zfs> [zfsonlinux/zfs] Linux 5.0: asm/i387.h: No such file or directory (#8259) comment by sbuller <https://github.com/zfsonlinux/zfs/issues/8259#issuecomment-453348790>
[03:30:41] <cirdan> netcraft died long ago
[03:36:07] *** jasonwc <jasonwc!~jasonwc@pool-72-66-15-203.washdc.fios.verizon.net> has joined #zfsonlinux
[03:37:41] *** ralfi <ralfi!~ralfi@p200300C0C71056004C05B6EAA8A8BD92.dip0.t-ipconnect.de> has quit IRC (Ping timeout: 260 seconds)
[03:37:51] *** Celmor <Celmor!~Celmor@unaffiliated/celmor> has joined #zfsonlinux
[03:38:20] <Celmor> can I find out the "correct" size of a zpools vdev or of the whole zpool from zdb output (of an offline zpool):
[03:38:24] <Celmor> ?*
[03:41:19] <PMT> "correct"?
[03:42:40] <zfs> [zfsonlinux/zfs] Linux 5.0: asm/i387.h: No such file or directory (#8259) comment by kpande <https://github.com/zfsonlinux/zfs/issues/8259#issuecomment-453353256>
[03:43:16] <ptx0> what a moron
[03:44:43] <bunder> ptx0: you forgot to say "hi phoronix" :P
[03:44:56] <Celmor> PMT, partition table might've been messed up
[03:46:52] <bunder> The next OpenZFS Leadership meeting will be held tomorrow, January 8th,
[03:46:53] <bunder> 1:00-2:00pm
[03:47:02] <bunder> wait, how did i not see that on youtube
[03:47:30] <bunder> oh, no video. that's why
[03:48:20] <Celmor> so I guess all I can do is try to import it
[03:48:30] <Celmor> and see if it complains
[04:01:03] <jasonwc> bunder: Are they discussing anything interesting?
[04:01:09] <jasonwc> Any updates on RAID-Z expansion?
[04:01:17] <jasonwc> (since the last presentation on the topic)
[04:01:27] <ptx0> i posted a response to greg
[04:03:03] <ptx0> basically mentioned that 1. the sflc maintains there is no licensing issue, 2. christoph hellwig's lawsuit against vmware was dismissed, 3. debian/canonical base their decision to ship DKMS source for ZoL on SFLC advice, 4. the GPL doesn't disqualify a user from compiling ZFS on Linux, only redistribution of binaries
[04:03:56] <bunder> you missed the biggest point
[04:04:00] <ptx0> then i asked, 1. should greg's personal feelings affect the quality of the linux kernel? 2. did sun or oracle ever release any statement of any kind that backs your statement [that they wish to be GPL incompatible], and 3. what extra work is to be done aside from dropping a pseudo-protection, the GPL only symbol exports
[04:04:02] <bunder> his comment has no merit
[04:04:18] <bunder> or not :P
[04:04:46] <ptx0> i said even if someone submitted the patches and did "the work" chances are he would find a reason to tell them to get stuffed and leave it as-is. and with all of that in mind, why have any tolerance for out of tree modules at all?
[04:07:28] <bunder> i mean, i've never seen a comment in the code that says "this code is not to be run on linux" or not designed for linux
[04:07:42] <bunder> its fud
[04:07:58] <ptx0> yep and it would probably say something in its license
[04:08:08] <ptx0> like "9a.iiii MUST NEVER BE USED ON LINUX"
[04:08:39] <ptx0> no one is asking gregkh to actually merge ZoL into Linux
[04:08:46] <ptx0> but he acts like that is the case
[04:10:53] <ptx0> i thought i already had a reply... some other subject tho
[04:10:56] <ptx0> damn
[04:12:06] <PMT> ptx0: https://lore.kernel.org/lkml/1547174753-31180-1-git-send-email-haoyu.tang at intel dot com/T/#u
[04:12:36] <PMT> oh drat, not quite the same thing. Should have checked the function signatures before linking.
[04:18:26] <zfs> [zfsonlinux/zfs] Disk Failure causes offline pool with enabled multihost (#7709) comment by adilger <https://github.com/zfsonlinux/zfs/issues/7709#issuecomment-453362569>
[04:18:53] <ptx0> https://lore.kernel.org/lkml/c9021705-9987-f7c2-e60c-15a09b87d345 at tripleback dot net/
[04:18:56] <ptx0> there it is
[04:19:19] <ptx0> oh good, it shows up twice
[04:26:43] <bunder> yeah you tell em panda :P
[04:27:30] <ptx0> heh
[04:27:49] <ptx0> that thing about oracle fixing the license goes both ways
[04:28:08] <ptx0> linux copyright holders can bless the use of their code by non-GPL modules
[04:29:18] <bunder> oracle can't even enforce the cddl unless they release their side of the code afaik
[04:29:41] <ptx0> oracle isn't even the primary licenseholder these days, openzfs is pretty vast
[04:29:58] <ptx0> wouldn't be surprised if the ZoL code that uses the Linux GPL exports, NEVER EVEN EXISTED IN SOLARIS
[04:30:16] <bunder> certainly possible, i never looked
[04:30:26] <ptx0> and, it goes both ways - the OpenZFS license holders can bless the use of THEIR code against GPL software
[04:30:42] <ptx0> copyright holders*
[04:31:10] <ptx0> i think by using GPL exports they're effectively doing so
[04:31:23] <ptx0> so really should just declare ZFS license as GPL\0+CDDL
[04:31:44] <ptx0> fuck greg-kh, what's he gonna do, start another lawsuit that gets dismissed?
[04:31:55] <ptx0> go for it baldie
[04:36:23] * ptx0 waits for inevitable CoC warning
[04:44:35] <bunder> lel
[04:45:27] <bunder> afaik he is still an official gentoo developer, i wouldn't put it past him if he's having a hissyfit and removing all non-gpl symbols
[04:46:04] <bunder> oh he's not
[04:46:08] <bunder> he was on there before
[04:47:53] <ptx0> hmm
[04:48:00] <ptx0> sounds like the hellwig case did go through to the german courts again
[04:48:02] *** metallicus <metallicus!~metallicu@> has joined #zfsonlinux
[04:48:02] <ptx0> on appeal
[04:48:15] <ptx0> they decided he might not even own copyright to any questionable code in vmware's vmklinux
[04:48:47] <ptx0> as he can only pursue justice for his own code being violated and it's unclear whether his contributions were 'important' enough
[04:49:04] *** metallicus <metallicus!~metallicu@> has quit IRC (Remote host closed the connection)
[04:49:14] <ptx0> that is an interesting piece of trivia that the works must be considered substantial and important to be deserving of copyright protection
[04:49:29] <ptx0> i guess my copyright on Hello World is officially bullshit
[04:50:16] *** Markow <Markow!~ejm@> has quit IRC (Quit: Leaving)
[04:53:30] <bunder> i wonder how long it will take for greg to get back to us
[04:54:05] <ptx0> first he has to get some lambs' blood and paint a pentagram on the ground, light some candles, sacrifice something
[04:54:17] <ptx0> it takes a while
[05:11:01] *** mquin <mquin!~mike@freenode/staff/mquin> has quit IRC (Ping timeout: 615 seconds)
[05:13:15] *** Celmor <Celmor!~Celmor@unaffiliated/celmor> has left #zfsonlinux
[05:18:52] *** jasonwc <jasonwc!~jasonwc@pool-72-66-15-203.washdc.fios.verizon.net> has quit IRC (Ping timeout: 246 seconds)
[05:35:49] <bunder> nothing on zol/ozfs ml either :P
[05:39:12] <bunder> or slashdot or hackernews :P
[05:40:40] <bunder> welp i guess i'm gonna watch a couple more mm2 tournament videos
[05:41:18] <bunder> because i don't give two craps about CES coverage
[05:57:02] *** tlacatlc6 <tlacatlc6!~tlacatlc6@> has quit IRC (Quit: Leaving)
[06:05:55] <bunder> this is nuts, almost near the end and a 2 second gap between these two
[06:35:41] <zfs> [zfsonlinux/zfs] ZVOLs should not be allowed to have children (#8181) comment by Matthew Ahrens <https://github.com/zfsonlinux/zfs/issues/8181>
[06:49:56] <bunder> "Sorry, no, we do not keep symbols exported for no in-kernel users."
[06:50:05] <bunder> i guess thats that then
[06:58:49] <zfs> [zfsonlinux/zfs] ztest: add scrub verification (#8203) comment by Matthew Ahrens <https://github.com/zfsonlinux/zfs/issues/8203#issuecomment-453387694>
[07:22:46] <ptx0> that's not the solution we're wanting though
[07:22:52] <ptx0> just fuckin remove the GPL ONLY thing
[07:23:10] <ptx0> love how he ignored my email totally though
[07:27:54] <ptx0> is there a way to make a visual representation of file space consumption of different vdev
[07:28:00] <ptx0> to see how a file is fragmented
[07:28:07] <ptx0> or imbalanced, rather
[07:30:17] <Shinigami-Sama> ohh
[07:30:30] <Shinigami-Sama> you mean I need to load up lkml again to see a raging panda?
[07:30:41] <ptx0> na i just asked him to address it
[07:34:24] <Shinigami-Sama> nah he just ignored you
[07:34:39] <lundman> yeps
[07:35:26] <Shinigami-Sama> I wonder how much cddl code is left/immutable in OpenZFS these days
[07:37:56] *** simukis <simukis!~simukis_@78-63-88-48.static.zebra.lt> has joined #zfsonlinux
[07:39:12] <ptx0> court said that refactoring may not be enough to attain copyright
[07:39:27] <bunder> poor simd /sad trombone https://github.com/zfsonlinux/zfs/blob/master/include/linux/simd_x86.h#L90
[07:40:57] <Shinigami-Sama> well, I'm assuming almost the entire tree has been refactored and iterated on several times
[07:41:20] <Shinigami-Sama> by that logic, car designs or website templates can't be copyright protected
[07:41:31] <ptx0> right
[07:41:38] <ptx0> well, no
[07:41:44] *** gerhard7 <gerhard7!~gerhard7@ip5657ee30.direct-adsl.nl> has joined #zfsonlinux
[07:41:59] <Shinigami-Sama> all websites are funky tables
[07:41:59] <ptx0> you can't attain copyright on modifying a previous copyrighted object
[07:42:06] <Shinigami-Sama> ok better
[07:42:29] <ptx0> i think maybe you can but it has to be substantial, who knows
[07:42:36] <ptx0> IANAL
[07:54:22] <Shinigami-Sama> oh well, it'll be breakfast reading to see greg dig himself out of this
[08:14:18] *** hyper_ch2 <hyper_ch2!c105d864@openvpn/user/hyper-ch2> has joined #zfsonlinux
[09:14:37] *** hyper_ch2 <hyper_ch2!c105d864@openvpn/user/hyper-ch2> has quit IRC (Ping timeout: 256 seconds)
[09:16:27] *** hyper_ch2 <hyper_ch2!c105d864@openvpn/user/hyper-ch2> has joined #zfsonlinux
[09:32:07] <ptx0> make[2]: /usr/bin/dtrace: Command not found
[09:32:07] <ptx0> make[2]: *** [Makefile:15001: libvirt_qemu_probes.h] Error 127
[09:32:10] <ptx0> uhm
[09:32:15] <ptx0> are you high
[09:32:38] *** leothrix <leothrix!~leothrix@elastic/staff/leothrix> has quit IRC (Remote host closed the connection)
[09:33:48] *** leothrix <leothrix!~leothrix@elastic/staff/leothrix> has joined #zfsonlinux
[09:34:48] *** troyt <troyt!zncsrv@2601:681:4100:8981:44dd:acff:fe85:9c8e> has quit IRC (Quit: AAAGH! IT BURNS!)
[09:35:01] *** troyt <troyt!zncsrv@2601:681:4100:8981:44dd:acff:fe85:9c8e> has joined #zfsonlinux
[09:41:24] <ptx0> wait, what
[09:41:29] <ptx0> zsh: /usr/bin/dtrace: bad interpreter: /var/tmp/portage/dev-util/systemtap-4.0/temp/python3.5/bin/python3: no such file or directory
[09:41:45] <ptx0> apparently libvirt created a dtrace python script to generate dtrace probes?
[09:41:53] <ptx0> wtf is happening
[09:43:18] <ptx0> that's from systemtap apparently
[09:43:35] <ptx0> why is systemtap pretending to be dtrace
[09:43:45] *** insane^ <insane^!~insane@fw.vispiron.de> has joined #zfsonlinux
[09:48:37] *** rjvb <rjvb!~rjvb@2a01cb0c84dee6009bda76eb03bc33f7.ipv6.abo.wanadoo.fr> has joined #zfsonlinux
[09:51:26] *** kaipee <kaipee!~kaipee@> has joined #zfsonlinux
[09:59:02] <blackflow> ptx0: you're depriving the world of popcorny drama by removing comments from issues (unless they're vulgar and stuff like that). :)
[09:59:39] *** IonTau <IonTau!~IonTau@ppp121-45-221-77.bras1.cbr2.internode.on.net> has quit IRC (Remote host closed the connection)
[10:01:45] *** Albori <Albori!~Albori@216-229-75-72.fidnet.com> has quit IRC (Ping timeout: 268 seconds)
[10:27:08] *** Albori <Albori!~Albori@216-229-75-72.fidnet.com> has joined #zfsonlinux
[10:29:49] <blackflow> *wears a tinfoil hat* I do wonder if Red Hat, being (one of) the major contributor(s) to the kernel, knew very well (perhaps by their own design? hence the hat) that ZFS will be landing on hot water due to GPL only kernel APIs, and fully dismissed it while inventing Stratis.
[10:30:06] <blackflow> *in
[10:33:26] <ptx0> yes i am the robot trying to maintain a somewhat professional environment here
[10:36:47] <blackflow> so anyway what does this mean for ZoL now? no 5.x+ kernel unless they amend the API license? is there a lot of work around that (by using different APIs)?
[10:37:15] <ptx0> the fpu is kind of an oddball
[10:37:23] <ptx0> i think they will move it to the SPL module which is GPL
[10:37:28] <blackflow> build from source, sed the license out (for personal use, no distribution), all is good? I had to do that for nvidia...
[10:37:36] <ptx0> no
[10:39:33] <blackflow> ptx0: btw (I'm assuming this is you?) "No one is combining ZFS into Linux or even distributing binary modules here" -- actually Ubuntu is distributing zfs.ko and friends, as part of the linux kernel package. Their lawyers say all is good.
[10:39:47] <ptx0> we're not talking about that in the LKML
[10:39:59] <ptx0> no one is asking for ZFS to be merged upstream
[10:40:06] <ptx0> just to not prevent it from working out-of-tree..
[10:40:41] <ptx0> heck, they aren't even preventing distribution, they're preventing users from exercising their GPL license-granted rights
[10:40:51] <blackflow> indeed.
[10:43:36] <hyper_ch2> if zol was only gpl v3 :)
[10:50:44] <blackflow> What would that mean for FreeBSD's anxiety and OCD about getting rid of GPL in the base?
[10:57:07] <hyper_ch2> BSD isn't a "linux"
[11:01:39] <lblume> blackflow: To be more correct, Ubuntu is distributing zfs.ko in the linux-modules package, which is distinct from the kernel, which is in linux-image.
[11:07:20] <chesty> > ptx0: you can't attain copyright on modifying a previous copyrighted object; that reminded me of translating books from one language into a different language, I thought (but honestly I have no idea) the translator owned the copyright to the translated version. But the person who told me that story, who also wasn't a lawyer, was talking about a book
[11:07:21] <chesty> that was in the public domain being translated.
[11:07:56] <chesty> that last sentence was almost english
[11:15:57] <blackflow> lblume: I stand corrected.
[11:16:49] <blackflow> hyper_ch2: FreeBSD is rebasing their ZFS onto ZoL, so by extension they couldn't do it if the ZoL code they're pulling into their repo were GPL'd
[11:23:01] *** Slashman <Slashman!~Slash@cosium-152-18.fib.nerim.net> has joined #zfsonlinux
[11:32:51] *** mquin <mquin!~mike@freenode/staff/mquin> has joined #zfsonlinux
[11:34:03] *** gerhard7 <gerhard7!~gerhard7@ip5657ee30.direct-adsl.nl> has quit IRC (Quit: Leaving)
[11:36:03] *** rjvbb <rjvbb!~rjvb@2a01cb0c84dee6007c4a82fb823a7e46.ipv6.abo.wanadoo.fr> has joined #zfsonlinux
[11:51:26] <madwizard> lblume++
[11:54:01] *** mquin <mquin!~mike@freenode/staff/mquin> has quit IRC (Quit: So Much For Subtlety)
[11:56:05] <madwizard> ptx0: bunder Actually, Bryan Cantrill said at least twice in a recorded talk that while they were releasing everything as CDDL they hoped it would be seen as GPL compatible
[11:58:15] <madwizard> As for Ubuntu, Eben Moglen was consulted and stated that the way Canonical does it is legal
[11:58:29] <madwizard> To which RH lawyers and OSC don't agree
[12:00:27] <blackflow> but nobody pushed to confirm their opinion in a court?
[12:01:55] <lblume> People who think what they do is legal don't usually run to courts to ask for reassurance.
[12:02:21] <lblume> And after what is already a good number of years, nobody sued.
[12:03:58] <madwizard> I remember attending an OpenSolaris and BSD conferences shortly after ZFS was released and there was a talk on all sides about porting it to Linux
[12:04:05] <madwizard> DTrace specifically caught the eye
[12:04:09] <madwizard> And then suddently systemtap
[12:05:18] <blackflow> lblume: I meant the other way around -- if anyone claiming GPL violation tried to confirm that in court. I mean, I'm sure it'd be in RH's interest to subvert Canonical/Ubuntu -- their direct competition.
[12:08:20] <lblume> I've not heard of anybody suing. Looks like it was FUD all the way.. RH's reasons are probably more complicated than merely being scared of potential violations. Even if that's so simple, it would not make them look good to attack Ubuntu in this way.
[12:10:04] <lblume> Note that over the years, the FUD I've seen was always (irrationally) using *Oracle* as a bogeyman.
[12:11:29] <madwizard> lblume++
[12:11:59] <madwizard> blackflow: RH wants to maintain its good image within community. It wouldn't do any good to attack Canonical in that way.
[12:12:11] <madwizard> But RH not adopting any CDDL technologies speaks volumes
[12:12:49] <madwizard> On the other hand, they obsoleted btrfs in next RHELs
[12:13:12] <madwizard> I had hoped one of the two would replace the LVM+XFS stack.
[12:13:34] <madwizard> And now this deduplication technology RH bought last year, I think it's VDO or something
[12:14:07] <madwizard> And disclaimer: all of the above is my personal opinion, not in any way tied to RH
[12:17:31] <lblume> In addition to btrfs, RH maybe has more complicated relationships to handle. They've become pals with IBM, who may or may not be super eager to have ZFS supported on its PPC and mainframes.
[12:17:48] *** gerhard7 <gerhard7!~gerhard7@ip5657ee30.direct-adsl.nl> has joined #zfsonlinux
[12:18:06] <madwizard> Yeah, it's all always a bigger picture
[12:33:12] *** stefan00 <stefan00!~stefan00@ip9234924b.dynamic.kabel-deutschland.de> has joined #zfsonlinux
[12:33:44] <stefan00> can the giz-X level be changed later on an existing dataset?
[12:33:54] <stefan00> gzp-X
[12:34:13] <stefan00> gzip ;-)
[12:38:21] <FireSnake> you can change any read-write property, but it has no effect on existing data
[12:38:43] <FireSnake> it will affect only data written after the change
[12:41:19] *** gerhard7 <gerhard7!~gerhard7@ip5657ee30.direct-adsl.nl> has quit IRC (Quit: Leaving)
[12:44:34] *** Zialus <Zialus!~RMF@> has quit IRC (Ping timeout: 268 seconds)
[12:50:43] *** gerhard7 <gerhard7!~gerhard7@ip5657ee30.direct-adsl.nl> has joined #zfsonlinux
[13:04:59] *** Cyan <Cyan!~cyan@garamon.de> has quit IRC (Ping timeout: 250 seconds)
[13:07:40] *** Cyan <Cyan!~cyan@garamon.de> has joined #zfsonlinux
[13:13:28] <stefan00> FireSnake: All right, that’s what I want. Thank you :-) !
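FireSnake's answer above can be sketched as a short CLI session. This is illustrative only — the pool/dataset name `tank/data` and the snapshot name are assumptions, and the commands need an actual ZFS pool to run against:

```shell
# Changing the compression property takes effect immediately, but only for
# blocks written AFTER the change; existing data keeps its old compression.
zfs set compression=gzip-9 tank/data

# To recompress existing data, it has to be rewritten, e.g. by replicating
# the dataset into a fresh one (snapshot name is hypothetical):
zfs snapshot tank/data@recompress
zfs send tank/data@recompress | zfs receive tank/data_gzip9

# Inspect what is actually in effect:
zfs get compression,compressratio tank/data
```

The `compressratio` property is a useful sanity check afterwards, since it reflects the blocks as they are actually stored.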
[13:17:13] <FireSnake> I'm often seeing that system resources (cpu, storage) are underutilized with all default settings, as well as with compression on. none of the 'performance tweaking' wikis seem to help. i don't know C, but before i dig into the source code, is there anything you tune to max out the cpu and storage?
[13:18:26] <FireSnake> like right now i'm looking at sending a dataset from nvme1 to nvme2 at 200 MB/s (raw nvme performance > 1 GB/s, each pool is a single nvme vdev)
[13:18:49] <FireSnake> the dataset having just rsync of default rhel os files
[13:21:01] <FireSnake> so far I've seen the zio_taskq_batch_pct at 75 but even with it modified at 100 still no significant improvement
[13:34:21] <DeHackEd> keep in mind those NVMe drives usually need to be kept busy by multiple threads in order to be properly saturated
[13:45:06] <stefan00> FireSnake: Did you somehow benchmark read / write performance on both filesystems (nvmes / pools) yet?
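DeHackEd's point — that NVMe needs concurrency to saturate — can be checked directly with fio. A sketch, with assumptions: the device path `/dev/nvme0n1` is hypothetical, and these are read-only jobs, but never point fio at a device that is in use without `--readonly`:

```shell
# Baseline: one thread, queue depth 1 (roughly what a single sequential
# reader like zfs send presents to one vdev):
fio --name=qd1 --filename=/dev/nvme0n1 --readonly --rw=read --bs=1M \
    --iodepth=1 --direct=1 --ioengine=libaio --runtime=10 --time_based

# Parallel: 4 jobs at queue depth 32 -- on most NVMe drives this lands much
# closer to the spec-sheet throughput:
fio --name=qd32 --filename=/dev/nvme0n1 --readonly --rw=read --bs=1M \
    --iodepth=32 --numjobs=4 --direct=1 --ioengine=libaio \
    --runtime=10 --time_based --group_reporting
```

If the second run is several times faster than the first, the 200 MB/s send is bounded by concurrency (or, per ptx0 later, thermal throttling), not by the drive itself.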
[13:56:03] *** stefan00 <stefan00!~stefan00@ip9234924b.dynamic.kabel-deutschland.de> has quit IRC (Quit: stefan00)
[14:02:21] *** mquin <mquin!~mike@freenode/staff/mquin> has joined #zfsonlinux
[14:04:43] *** Cyan <Cyan!~cyan@garamon.de> has quit IRC (Ping timeout: 252 seconds)
[14:06:03] *** Cyan <Cyan!~cyan@garamon.de> has joined #zfsonlinux
[14:15:46] *** gerhard7 <gerhard7!~gerhard7@ip5657ee30.direct-adsl.nl> has quit IRC (Quit: Leaving)
[14:25:49] *** Essadon <Essadon!~Essadon@81-225-32-185-no249.tbcn.telia.com> has joined #zfsonlinux
[14:26:15] *** Essadon <Essadon!~Essadon@81-225-32-185-no249.tbcn.telia.com> has quit IRC (Max SendQ exceeded)
[14:26:40] *** Essadon <Essadon!~Essadon@81-225-32-185-no249.tbcn.telia.com> has joined #zfsonlinux
[14:34:33] <rjvb> FireSnake, stefan00: I wrote a little utility that lets you rewrite files and directories with a different compression, see github:RJVB/afsctool
[14:37:05] <rjvb> question for the "bulk disk" users on here: do you ever upgrade firmware on your drives if there's no obvious issue with the firmware they come with ?
[14:37:14] <rjvb> (or do you just get new disks? :) )
[14:38:34] <rjvb> I *think* there's an upgrade for my Barracuda 14th gen. 2TB drive, question is how I'm going to apply it under Linux...
[14:38:38] <PMT> Usually drive firmware gets updated because there was a problem and we went looking to see if there's a new FW.
[14:39:22] <rjvb> and IF I am, because I haven't noticed any issues and cannot even find any info on what's changed in the supposed upgrade
[14:39:25] <PMT> Also, there are plenty of drive FW updaters under Linux - the last time I had to use one, the image Seagate offered for download was a stripped Linux distro.
[14:39:51] <rjvb> That's what they push in the MSWin .exe updater, yes
[14:40:06] <rjvb> but the .iso has a FreeDOS environment.
[14:41:05] <rjvb> I'm hoping that under Linux you can do this without needing a reboot... supposing it's even possible to perform an upgrade over USB3!
[14:41:42] <PMT> Depends on what the updater expects to see or does, though usually FW updating is fairly standard at this point.
[14:42:25] <rjvb> (we're talking about a ST2000DM001-1CH164 btw, current FW CC64)
[14:43:15] <PMT> Where do you see a FW update for that? I see updates for the older 9YN164.
[14:43:59] <rjvb> That's why I said I'm not certain there's an update. I searched for "seagate firmware CC64"
[14:44:49] <rjvb> I naively expect that FW CC64 for ST2000DM001 (and a range of other comparable drives) will be the same software...
[14:44:56] <PMT> lmfao
[14:45:32] <PMT> As you may gather from my remark about the 9YN164 having an update but not the 1CH164, that's not even true within the same major model.
[14:47:07] <PMT> *necessarily true, I suppose, to be more precise.
[14:47:30] <PMT> Seagate offers a thing on their site where you can tell it the model and serial # and it'll tell you if there's a FW update.
[14:48:07] <PMT> Hm, https://apps1.seagate.com/downloads/request.html doesn't ask for model. Maybe I was thinking of the warranty check.
[14:48:34] <gchristensen> I have a ZFS dataset which gets restored to a blank snapshot on every boot. any nice filesystem options I should consider for this less common use case?
[14:49:05] <rjvb> no, there is a serial # search thingy, but the result page is formatted so sloppily that I can't even tell if it really searched (I cannot seem to get anything but their French site)
[14:49:27] <rjvb> my serial # is Z1E6B1Q6, FWIW
[14:51:30] <rjvb> http://firmware.hddsurgery.com/?manufacturer=Seagate&family=Grenada shows CC76 as the latest firmware, but I'm not keen on trusting them
[14:53:14] *** y9pqb <y9pqb!~y9pqb@2804:14d:90ad:4cae:223:14ff:fed5:d284> has joined #zfsonlinux
[14:56:15] <PMT> gchristensen: ? you haven't explained what the use case is, so I can't really suggest anything.
[14:58:23] <PMT> rjvb: that seems to suggest no update available. I'm not claiming their page is well-organized. (Also, generally FW updates on Seagate drives don't share across CCxY - e.g. CC3H would almost never be updatable to CC40, that usually denotes different things running on them.)
[14:59:18] <PMT> But that's just observational, not something they explicitly state.
[15:01:53] *** gerhard7 <gerhard7!~gerhard7@ip5657ee30.direct-adsl.nl> has joined #zfsonlinux
[15:02:05] <PMT> I'm also confused b/c the only FWs they show on there for ST2000DM001 and CC6x are for the 9YN164 model, but. vOv
[15:02:35] <PMT> I wouldn't update it without cause.
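For anyone following along: the model, serial, and running firmware revision being discussed (ST2000DM001, CC64) are exactly what smartmontools reports. A sketch — the device path is an assumption, and the command needs root:

```shell
# Print identity info for a drive; for a SATA disk behind a USB3 bridge,
# a passthrough option such as '-d sat' is often needed:
smartctl -i /dev/sda | grep -E 'Device Model|Serial Number|Firmware Version'
```

This is the safe first step before any firmware hunt: confirm the exact sub-model (e.g. 1CH164 vs 9YN164), since as PMT notes, updates generally don't carry across them.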
[15:03:11] *** flying_sausages <flying_sausages!~flying_sa@static.88-198-40-49.clients.your-server.de> has quit IRC (Quit: You just lost the game. Peace Out.)
[15:03:43] *** flying_sausages <flying_sausages!~flying_sa@static.88-198-40-49.clients.your-server.de> has joined #zfsonlinux
[15:04:11] *** hyper_ch2 <hyper_ch2!c105d864@openvpn/user/hyper-ch2> has quit IRC (Quit: Page closed)
[15:05:13] <bunder> i only updated the firmware on my crucial ssd's because they said the original firmware had occasional issues
[15:25:38] *** JanC <JanC!~janc@lugwv/member/JanC> has quit IRC (Remote host closed the connection)
[15:25:53] *** insane^ <insane^!~insane@fw.vispiron.de> has quit IRC (Ping timeout: 245 seconds)
[15:26:00] *** JanC <JanC!~janc@lugwv/member/JanC> has joined #zfsonlinux
[15:57:27] <FireSnake> i only update when smartctl warns me
[15:57:44] <FireSnake> they tend to keep a pretty up-to-date db
[16:08:10] *** Albori <Albori!~Albori@216-229-75-72.fidnet.com> has quit IRC (Ping timeout: 272 seconds)
[16:09:45] <Ryushin> PMT / cirdan: I saw that they closed the init script bug: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=915831 But I don't understand if they are going to add the init scripts back or not.
[16:10:08] <Ryushin> Just really tired today and I cannot seem to wake up.
[16:10:55] <Ryushin> Maybe the brain is not grasping that I'm going to have to clear a foot of snow today from the driveway and walk.
[16:15:37] <cirdan> i think i fucked myself with my lto5 drive
[16:15:41] <cirdan> not that it was working...
[16:15:58] <cirdan> i pulled the head off to clean/check and it needs to be realigned
[16:16:17] <cirdan> I thought it was like the HP where the head sits in a carrier that gets aligned...
[16:16:21] <bunder> how did you manage that
[16:16:33] <cirdan> 2 screws
[16:16:35] <bunder> that's what cleaning tapes are for
[16:16:41] <cirdan> it wasn't working
[16:16:53] <cirdan> i was following a guide for an hp drive and i have an ibm
[16:17:25] <bunder> could have jammed the tape in the drive and cranked the motor by hand :P
[16:17:49] <bunder> maybe i'm just a masochist
[16:19:48] <cirdan> i mean it went through a cleaning but didn't help
[16:20:52] <bunder> i've had to run them 2 or 3 times to get the drives to work, or even a new cleaning tape
[16:21:00] <bunder> (god i hate tape)
[16:21:20] <cirdan> i could write the length of the tape then it would error out as it started to rewind it
[16:22:39] <bunder> rewind by command or by pressing the eject button?
[16:23:15] <cirdan> doesn't the tape get unwound and rewound as it writes?
[16:23:33] <cirdan> all i know is 19gb is how much data fits on 1 pass/track of lto5 and that's when it errored
[16:24:09] <bunder> i don't believe so, it writes like 4 tracks diagonally or something as it sucks in the tape
[16:24:19] <cirdan> hmm
[16:26:41] <bunder> https://www.youtube.com/watch?v=75xm3JMxWE0&t=1m39s
[16:27:28] <bunder> why it goes backwards after the leadin, i dunno
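cirdan's ~19 GB-per-pass figure from earlier actually checks out against LTO-5's linear serpentine geometry. The numbers below are assumptions (commonly cited LTO-5 specs: 1.5 TB native, 1280 data tracks, 16 tracks written per end-to-end pass), so treat this as a back-of-the-envelope sketch:

```shell
# LTO-5 writes 16 tracks at a time, serpentine style: down the tape, shift
# the head, back up the tape. Total passes = tracks / tracks-per-pass.
capacity_gb=1500
tracks=1280
tracks_per_pass=16

passes=$((tracks / tracks_per_pass))
echo "end-to-end passes: $passes"

# Native capacity spread over those passes:
awk -v c="$capacity_gb" -v p="$passes" \
    'BEGIN { printf "GB per pass: %.2f\n", c/p }'
```

That works out to 80 passes at 18.75 GB each — right where cirdan's drive was erroring, i.e. at the first head shift/direction reversal.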
[16:28:07] <cirdan> well time to grab a spare tape and manually align i guess
[16:28:49] *** gila <gila!~gila@5ED74129.cm-7-8b.dynamic.ziggo.nl> has joined #zfsonlinux
[16:33:55] <Ryushin> cirdan: Yea, I used to use tapes. Until the 8TB Seagate Archive drive series came out. Since then, I've moved everything to hard drives. I run Bareos for my backups and my customers backups.
[16:35:35] *** Albori <Albori!~Albori@216-229-75-72.fidnet.com> has joined #zfsonlinux
[16:37:13] <bunder> gears all lubed and not missing teeth?
[16:41:59] *** jasonwc <jasonwc!~jasonwc@pool-72-66-15-203.washdc.fios.verizon.net> has joined #zfsonlinux
[16:52:35] *** y9pqb <y9pqb!~y9pqb@2804:14d:90ad:4cae:223:14ff:fed5:d284> has quit IRC (Quit: y9pqb)
[16:57:57] <cirdan> bunder: most things are dry nowadays
[16:58:17] <cirdan> Ryushin: for the price I got stuff I can't beat lto5 or lto6 $/tb
[16:58:44] <bunder> that's why they break :P
[16:59:13] <bunder> can't spend the 5 cents on white lithium grease
[16:59:15] <cirdan> na it's not where the problem is. it's the head
[16:59:34] <cirdan> is either super dirty or just dead
[17:00:22] <bunder> but you said it wrote fine, but didn't rewind
[17:01:00] *** kim0 <kim0!uid105149@ubuntu/member/kim0> has joined #zfsonlinux
[17:08:30] <jasonwc> What is the recommended recordsize for VM storage? 4K or 8K? Also, I noticed the default qcow2 cluster size is 64K. Should that be changed to 4 or 8k?
[17:11:13] <cirdan> no
[17:11:21] <cirdan> it wrote the first track fine
[17:11:22] <jasonwc> Or I suppose I could just use raw images and use ZFS to snapshot
[17:11:23] <rlaager_> jasonwc: It's a trade-off between smaller blocks meaning more efficient overwrites (for small changes) and larger blocks being better for compression and having less metadata overhead. I use 64k, based on a recommendation from Nexenta.
[17:11:34] <cirdan> same i use 64k
[17:11:52] <jasonwc> With qcow2 disk images?
[17:12:03] <cirdan> no with zvol
[17:12:09] <jasonwc> Ah
[17:12:39] <rlaager_> I also make a point of aligning my guest filesystems to 64k boundaries to the extent possible. In ext4, for example, this is stride=16,stripe-width=16 (assuming the default 4k block size in ext4, as 64k/4k=16). I don't know how much those settings matter.
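rlaager_'s alignment arithmetic, written out as a sketch (the zvol path in the commented mkfs line is hypothetical):

```shell
# stride = ZFS block size / ext4 block size. With a 64k volblocksize and
# ext4's default 4k blocks:
recordsize=$((64 * 1024))
blocksize=4096
stride=$((recordsize / blocksize))
echo "stride=$stride stripe-width=$stride"   # both 16

# Which would then be applied at filesystem creation time, e.g.:
#   mkfs.ext4 -E stride=16,stripe-width=16 /dev/zvol/tank/vm1
```

For a single vdev there's no RAID stripe to spread across, so `stripe-width` equals `stride` here; the goal is simply that ext4's allocation groups line up on 64k boundaries so guest writes don't straddle two ZFS blocks.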
[17:13:31] <ghfields> I use 32k recordsize, but I use raw files, not qcow2
[17:15:55] <jasonwc> Thanks for the suggestions
[17:16:55] <jasonwc> rlaager_: I always see bug reports about poor performance with zvols. Is zvol performance generally better than using disk images? If not, what's the upside? They seem more difficult to manage.
[17:20:00] <bunder> http://www.fujitsu.com/global/Images/feature_a01_tcm100-907106_tcm100-907106.gif wait that's not diagonal, i wonder what format i'm thinking of :(
[17:24:08] <bunder> must've been dds, oops
[17:25:40] <zfs> [zfsonlinux/zfs] zfs filesystem skipped by df -h (#8254) new review comment by Paul Zuchowski <https://github.com/zfsonlinux/zfs/pull/8254#discussion_r247174623>
[17:25:48] <ptx0> FireSnake: your shitty nvme device is probably heat throttled
[17:28:55] *** Albori <Albori!~Albori@216-229-75-72.fidnet.com> has quit IRC (Read error: Connection reset by peer)
[17:30:57] <bunder> its not the nvme's fault it has no ventilation :P
[17:33:09] *** rjvbb <rjvbb!~rjvb@2a01cb0c84dee6007c4a82fb823a7e46.ipv6.abo.wanadoo.fr> has quit IRC (Ping timeout: 252 seconds)
[17:34:45] <cirdan> huh irssi has a cve
[17:35:06] <bunder> scripting engine?
[17:35:18] <cirdan> dunno
[17:35:29] <cirdan> just saw it popup
[17:35:46] <cirdan> lol someone said "sends data in plaintext. must be removed"
[17:35:55] <cirdan> that was apple's justification for removing telnet
[17:36:03] * cirdan shakes his head
[17:37:37] <bunder> Irssi 1.1.x before 1.1.2 has a use after free when hidden lines are expired from the scroll buffer.
[17:37:40] <bunder> uhh
[17:43:27] <zfs> [zfsonlinux/zfs] zfs filesystem skipped by df -h (#8254) new review comment by Paul Zuchowski <https://github.com/zfsonlinux/zfs/pull/8254#discussion_r247180789>
[17:51:51] *** rjvb <rjvb!~rjvb@2a01cb0c84dee6009bda76eb03bc33f7.ipv6.abo.wanadoo.fr> has quit IRC (Ping timeout: 252 seconds)
[17:55:34] <PMT> bunder: that's good work they did
[17:58:58] *** Albori <Albori!~Albori@67-43-244-19.fidnet.com> has joined #zfsonlinux
[17:59:40] <bunder> i'm not even sure how that works
[18:00:23] <bunder> if you push stuff off the top of the buffer, shouldn't the top be whatever became the top?
[18:00:31] <zfs> [zfsonlinux/zfs] zfs filesystem skipped by df -h (#8254) new review comment by Paul Zuchowski <https://github.com/zfsonlinux/zfs/pull/8254#discussion_r247186589>
[18:00:34] <PMT> bunder: depends on the data structure you're using
[18:00:48] <PMT> They're probably using a ring buffer.
[18:01:19] <bunder> unless they're using a start and end address and updating those instead
[18:02:11] <PMT> It looks like they're using a circular linked list.
[18:02:20] <PMT> Per https://github.com/irssi/irssi/pull/948/commits/8684ccb45c267fdeaaa779fce9323047aa5a9e38
[18:15:36] *** SadMan <SadMan!foobar@> has quit IRC (Ping timeout: 260 seconds)
[18:21:28] <zfs> [zfsonlinux/zfs] zfs filesystem skipped by df -h (#8254) comment by Paul Zuchowski <https://github.com/zfsonlinux/zfs/issues/8254#issuecomment-453591655>
[18:39:10] *** zfs sets mode: -b *!*@bifrost.evert.net$#zfsonlinux-quarantine
[18:39:58] *** kaipee <kaipee!~kaipee@> has quit IRC (Remote host closed the connection)
[19:07:18] <ptx0> oh Lukas
[19:07:29] <ptx0> you can try all you want but greg is allergic to logic and reasoning
[19:07:52] <ptx0> though he has a great point
[19:08:06] <ptx0> it was one developer who set those functions GPL-only and they can reverse their own decision
[19:08:35] <ptx0> something something copyright holder something something
[19:09:49] <cirdan> often things are made gpl only after the fact
[19:10:14] <ptx0> yeah, that's some special kind of horseshit though
[19:10:24] <cirdan> and it doesn't negate that it really doesn't matter. you can either link to the entire kernel or not
[19:10:32] <ptx0> if we were userland, that'd be a capital offence
[19:10:57] <ptx0> cirdan: dude, you can't use logic and reasoning with licensing discussion
[19:11:03] <ptx0> none of these people involved are lawyers
[19:11:04] <cirdan> if you dont want an interface to be used you dont expose it
[19:11:17] <cirdan> ptx0: i know. they have a vested interest in FUD though
[19:11:20] <ptx0> it's all emotional argument
[19:11:24] <ptx0> no reality
[19:11:43] <cirdan> also, when was the last time SUN contributed to openzfs?
[19:11:52] <cirdan> cause that guy needs to be reminded...
[19:12:26] <cirdan> just make zfs id as GPL/Friendly :)
[19:12:42] <cirdan> that'll really torque some nuts
[19:13:01] <cirdan> or to be inclusive, twist some nipples
[19:14:13] <bunder> i don't think sun/oracle ever contributed to openzfs, it didn't exist when illumos forked from opensolaris
[19:14:22] <cirdan> exactly
[19:14:54] <ptx0> why not open a PR with ZoL to declare our license as GPL
[19:15:01] <ptx0> see how that shit flies
[19:15:28] <cirdan> part of it is GPL iirc
[19:15:34] <bunder> only spl
[19:15:53] <cirdan> I thought the new openzfs files were gpl?
[19:15:59] <ptx0> heheh even greg says SPL "doesn't work"
[19:16:00] <bunder> ptx0: that might be an interesting fight tbh, something something oracle abandoned the cddl by closing the source
[19:16:20] <cirdan> since with the CDDL lets oracle use the code in a closed manner
[19:16:22] <ptx0> he called it a "GPL condom"
[19:16:26] <bunder> one could argue the license is invalid now
[19:16:32] <cirdan> bunder: no the cddl lets them
[19:17:11] *** fp7 <fp7!~fp7@unaffiliated/fp7> has joined #zfsonlinux
[19:17:45] <cirdan> by doesn't work he means doesn't like
[19:18:37] <bunder> my understanding is you can combine cddl with otherwise-licensed work, but i don't think you can withhold the source for the cddl bits
[19:19:08] <cirdan> oracle/sun can iirc
[19:20:11] <ptx0> cddl and gpl together in source format are compatible licenses
[19:20:17] <ptx0> it is only when compiling binaries that the conflict arises
[19:20:36] <ptx0> CDDL allows releasing source files as GPL but the binary must remain CDDL
[19:21:01] <ptx0> GPL doesn't allow releasing source as CDDL but the binary must also remain GPL
[19:21:32] <ptx0> so you can indeed combine the source trees but allegedly can't redistribute the resulting binary
[19:21:51] <ptx0> like i told greg, we're following the terms of the GPL
[19:22:07] <bunder> well, except for rh/ubu
[19:22:22] <ptx0> are they trying to say that Linux has no patent-encumbered or otherwise GPL-incompatible code?
[19:22:27] <bunder> well, except for dkms i guess
[19:22:35] <ptx0> or is it that they use the patent holder exemption which doesn't exist in GPLv2
[19:23:21] <ptx0> bunder: the only issue with ubuntu is that their live media contains a zfs.ko aiui
[19:24:01] <cirdan> it's not combined with the kernel until the user does it though
[19:24:17] <bunder> ptx0: and our gentoo isos onoes
[19:24:27] <ptx0> bunder: yea but who cares about that
[19:24:43] <ptx0> that thing also has a ton of proprietary firmware embedded into it
[19:25:12] *** papamoose <papamoose!~papamoose@hester2.cs.uchicago.edu> has joined #zfsonlinux
[19:25:53] <ptx0> the world of open source needs to pull its head out of its collective ass
[19:26:06] <ptx0> remember Mozilla with their branding restrictions?
[19:26:17] <bunder> they still do that
[19:26:27] <ptx0> no, they allow certification now
[19:26:45] <ptx0> https://blog.mozilla.org/opendesign/evolving-the-firefox-brand/
[19:27:12] <bunder> gentoo still tells you that you're not supposed to distribute the binaries you build /shrug
[19:27:31] <gchristensen> maybe gentoo isn't certified then
[19:27:47] <ptx0> because YOU aren't a certified distributor
[19:27:52] <gchristensen> nixos was certified to distribute our firefox builds
[19:28:01] <ptx0> yeah because nixos builds them
[19:28:11] <ptx0> or something? who fucking knows
[19:28:14] <ptx0> it's arbitrary
[19:28:52] <gchristensen> basically they said, our builds of firefox are sufficiently good that it is as if we got them from Mozilla and thus we can call it firefox. I dunno, not my department, but they said we could :P
[19:28:55] <ptx0> We do not allow system libs to be used with official branding because it deviates from official configuration. You must comply with the directive or you must disable official branding for your builds.
[19:29:04] <ptx0> that's the problem.
[19:29:20] <bunder> all i know about nixos is that they use /nix which breaks fhs and gentoo really wants to turn gentoo into nixos
[19:29:31] <ptx0> https://github.com/jasperla/openbsd-wip/issues/86
[19:29:38] <gchristensen> even debian has granted Nix an exemption to the FHS requirement
[19:29:47] <ptx0> this response from project maintainer is fucking epic though
[19:29:53] <ptx0> I will do no such thing until I speak with the person who owns the rights to the intellectual property, which appears to be not you.
[19:30:40] <ptx0> "I suggest you stop being rude to me" lololol
[19:30:45] <ptx0> oh no the internet police will get angry
[19:31:41] <gchristensen> https://lists.debian.org/debian-devel/2019/01/msg00013.html
[19:32:00] <ptx0> i am so tempted to fork this repo and put the branding back
[19:32:45] <ptx0> they want to restrict the branding of their browser but then called it "New Moon", isn't that a book?
[19:32:49] <zfs> [zfsonlinux/zfs] zfs filesystem skipped by df -h (#8254) comment by Brian Behlendorf <https://github.com/zfsonlinux/zfs/issues/8254>
[19:37:35] *** fp7 <fp7!~fp7@unaffiliated/fp7> has quit IRC (Quit: fp7)
[19:38:00] <bunder> https://archives.gentoo.org/gentoo-dev/message/127ca8eb127dfea8337d48a6f9bd774e
[19:38:05] <bunder> like what is gentoo doing
[19:38:40] <bunder> if i wanted to compile nixos from source i'd use nixos
[19:38:46] <bunder> (no offense)
[19:39:23] <bunder> its not like rpm where we need it to 'decompress' srpm or rpm packages
[19:39:45] <gchristensen> I almost never compile my OS binaries on my own, the nixos binary cache takes care of that sufficiently. if Gentoo is going to package Nix, it probably shouldn't change the root
[19:40:39] <ptx0> it shouldn't package nix.
[19:40:44] <ptx0> you just gave the answer
[19:41:04] <ptx0> i'm getting quite frustrated with the nonchalance of nixos users breaking everything
[19:41:05] <gchristensen> Nix proudly holds the prize of recompiling software packages more often than Gentoo ... but it is done by the build farm, not users
[19:41:08] <ptx0> they are almost as bad as systemd
[19:41:26] <ptx0> s/users/devs/
[19:41:42] <gchristensen> not sure what we're breaking that has you upset, typically I find Nix pushes software to be improved
[19:41:45] <bunder> i don't know about breaking, but installing a distro on a distro sounds bad
[19:42:03] <bunder> its not even an lxc container or something
[19:42:04] <gchristensen> Nix is separate from NixOS. Nix can be used on any Linux and macOS
[19:42:05] <ptx0> "improved"
[19:42:31] <ptx0> putting things in arbitrary locations is improvement?
[19:42:59] <gchristensen> they're not arbitrary
[19:43:10] <bunder> boy did i open a can of worms lel
[19:43:15] <gchristensen> but you don't need to agree with me, I don't expect/need you to
[19:43:21] <ptx0> dude, nix's install instructions have you pipe curl to the shell
[19:43:28] <FinalX> fucking hell, not this Nix shit again
[19:43:32] <ptx0> it's built on the work of retarded monkeys
[19:43:46] <gchristensen> I'll agree to disagree on that
[19:43:48] <FinalX> first on #debian, then on #nginx, now here
[19:44:02] <gchristensen> ptx0: how would you prefer to install Nix?
[19:44:14] <ptx0> oh wow, how about downloading a fuckin script and verifying its checksum?
[19:44:16] <FinalX> if you want to evangelical about nix, do it in nix channels
[19:44:22] <FinalX> ptx0: but that would ruin the surprise!
[19:44:26] <gchristensen> I'm not evangelizing I think?
[19:44:43] <ptx0> gchristensen: flying in the face of reason like you are doing is indeed evangelising
[19:44:47] <gchristensen> the script does verify the checksum, and also checksums and signatures are provided for users who want it
[19:44:56] <bunder> alright my bad, i shouldn't have brought up fhs
[19:45:02] <ptx0> you're verifying the checksum of the script you're running IN THE SCRIPT YOU'RE RUNNING?
[19:45:05] <gchristensen> I'll agree to disagree on that
[19:45:09] <FinalX> LOL
[19:45:14] <ptx0> HOLY F U C K
[19:45:17] <gchristensen> no, the checksum is verifying the hash of the tarball it fetches
[19:45:23] <ptx0> that's Security by Brilliance.
[19:45:29] * FinalX runs away not sure whether to laugh or cry
[19:45:33] <ptx0> nooo you missed the point
[19:45:37] <gchristensen> no, I didn't
[19:45:46] <ptx0> you're piping an arbitrary script to the shell without verifying it is even what you want
[19:45:56] <gchristensen> I know that that is an issue
[19:45:57] <ptx0> if this script downloads a tarball from attacker.zyx and verifies it, why do i care?
[19:46:18] <gchristensen> where does your root of trust start?
[19:46:38] <ptx0> lol so it's better to have no root of trust?
[19:46:40] <gchristensen> if you trust an arbitrary domain to provide good checksums on the HTML page, there is no difference in trust of another URL on the same domain
[19:46:47] <gchristensen> I didn't say that
[19:46:59] <ptx0> why'd you even use https to give the script then? https is slow and a waste of cycles in this case
[19:47:09] <ptx0> the script will verify itself!
[19:47:10] <gchristensen> you're confusing the argument.
[19:47:35] <FinalX> gchristensen: that's bullshit
[19:47:47] <ptx0> i don't see any kind of list of checksums, mirrors, or any announce email with checksum to verify
[19:47:47] <gchristensen> you have to trust something at some point. if the page says "download this file and make sure its hash is adc83b19e793491b1c6ea0fd8b46cd9f32e592fc" then an attacker can change the file and the hash
[19:48:02] <ptx0> there's value in having this information, you can verify it from multiple sources
[19:48:12] <ptx0> without this information you are blind
[19:48:22] <FinalX> you can verify whether you're downloading from the real website by asking peers and others, and you can verify if you're actually connected to that website by other means: DNSSEC, Public Key Certificates (SSL/TLS) etc.
[19:48:23] <gchristensen> announcement emails would be good, that is a good idea. on the download page there is a signing key with links to multiple places to verify it
[19:48:33] <ptx0> haven't you ever googled a hash to find out if anyone else is distributing the thing you're looking at?
[19:48:42] <ptx0> have you ever used a keyserver?
[19:48:42] <FinalX> tarballs can come from CDNs or any other site and then validated with the hash from the official site
[19:48:46] <FinalX> it's how debian/ubuntu packages work
[19:48:49] <gchristensen> yes
[19:49:01] <ptx0> this is why nixos is made from retarded monkeys, because they don't even care enough to put in a modicum of effort
[19:49:03] <FinalX> you download from an untrusted source and you verify the signature against a locally installed PGP-key
[19:49:10] <ptx0> if they're not retarded then they are downright evil
[19:49:16] <ptx0> or lazy, not sure which is worse
[19:49:16] <FinalX> not let the package verify itself
[19:49:41] <gchristensen> FinalX: of course. the trouble is getting to the initial trust.
[19:49:42] * FinalX sends gchristensen a piece of malware that verifies itself and prints "100% OK, totally trustworth and I'm now going to take over your entire system to make things more convenient for you"
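The pattern FinalX is describing — trust anchored outside the artifact, never inside it — looks roughly like the sketch below. Everything here is a stand-in: the "installer" is created locally for demonstration, and in real use `expected` must come from an independent channel (a signed announce mail, a keyserver-verified signature, a second mirror), never from the same server that served the script:

```shell
# Stand-in for a downloaded installer (in real life: curl -fsSLo install.sh URL,
# WITHOUT piping it to sh):
printf 'echo "installer ran"\n' > install.sh

# Stand-in for the independently published hash. The whole point of the
# argument above is that this value must NOT come from the artifact itself:
expected=$(sha256sum install.sh | awk '{print $1}')

# Verify first; execute only if the hash matches:
printf '%s  install.sh\n' "$expected" | sha256sum -c - && sh install.sh
```

If the check fails, `sha256sum -c` exits nonzero and the `&&` never runs the script — which is the property a self-verifying installer cannot give you.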
[19:49:53] <FinalX> I mean, come on man
[19:49:59] <gchristensen> I never said that either.
[19:50:14] <ptx0> i spent 3 months just refactoring an appliance to not have wide open root access and i had to secure its update routine, meanwhile, not making anything anymore 'difficult' for the end user
[19:50:22] <gchristensen> yeah, it is hard work
[19:50:28] <ptx0> trust me, i know what effort security involves and how many companies say "fuck it, that's hard work"
[19:50:31] <ptx0> like nixos has done
[19:50:36] <gchristensen> I don't think NixOS has done that
[19:50:46] <ptx0> they have a fucking pipe to shell on their install docs
[19:50:49] <gchristensen> if publishing hashes on email lists is the thing to do, that is a thing we can do
[19:50:57] <zfs> [zfsonlinux/zfs] Add pyzfs BuildRequires for mock(1) (#8265) created by Brian Behlendorf <https://github.com/zfsonlinux/zfs/issues/8265>
[19:51:04] <ptx0> what do i need to do open a bug report to get it fixed? you said "i know that is an issue"
[19:51:23] <ptx0> so nixos just ignores security while wanting the rest of the world to allow them to place arbitrary shit in root (/nix)??
[19:51:34] <ptx0> foad with that
[19:51:44] <gchristensen> nixos does not allow arbitrary things in /nix, nor does it ignore security
[19:51:51] <ptx0> except for the pipe to shell?
[19:52:03] <ptx0> i guess it only ignores security when it's convenient
[19:52:06] <ptx0> that makes it ok, right?
[19:52:07] <Shinigami-Sama> gchristensen: considering how often DNS hijacks happen, piping a script is a very terrible idea, let alone all the nonsense nginx and apache can do to urls with rewrites
[19:52:08] <gchristensen> we're not going to delete the curl|sh mechanism of distribution, but we are able to provide additional instructions for people who want something else. and we do already provide signature files and a widely distributed key.
[19:52:22] <ptx0> so nixos ignores security. thanks for clarification.
[19:52:33] <gchristensen> like I said a while ago, I don't need you to agree
[19:52:34] <cirdan> wow crazy. i think my tape drive works now
[19:53:09] <ptx0> the best part about reality gchristensen is that whether or not you understand why or if you disagree with it, it's still there.
[19:53:11] <Shinigami-Sama> cirdan: did you get it drunk and let it evacuate the crap on the head?
[19:53:14] <gchristensen> but sure, you can open an issue on https://github.com/NixOS/nixos-homepage if you'd like to provide feedback.
[19:53:36] *** Slashman <Slashman!~Slash@cosium-152-18.fib.nerim.net> has quit IRC (Remote host closed the connection)
[19:53:39] <gchristensen> I think that if this was an honest conversation, we would be communicating differently.
[19:53:42] <cirdan> i pulled the head and realigned it
[19:53:55] <ptx0> i don't think you even know that you don't care about security.
[19:53:56] <gchristensen> but you're not interested in hearing about the threat modeling we've done
[19:53:59] <cirdan> and a few other things
[19:54:25] <ptx0> cirdan: that reminds me of the miracle of how i got my pcie slots working again
[19:54:33] <cirdan> na I'm just that good
[19:54:40] <ptx0> https://www.domenkozar.com/2014/01/02/getting-started-with-nix-package-manager/ < lol
[19:54:43] <ptx0> from 2014
[19:54:53] <ptx0> "Don't pipe to your shell", I know. Let's not talk about the color of the atomic bomb and how the color might be potentially dangerous. Nevertheless, I strongly advise you to take a look at the script before executing it.
[19:54:56] *** elxa <elxa!~elxa@2a01:5c0:e097:fe21:43db:d78b:746c:2e52> has joined #zfsonlinux
[19:55:00] <ptx0> but what about terminal control character exploits?
[19:55:07] <ptx0> what a joke
[19:56:55] <cirdan> i have some fun js code that changes what you copy so if you paste into a terminal bad things happen
[19:57:09] <ptx0> i don't even see a single mention of any security concern on the nixos installation page
[19:57:21] <ptx0> just a dozen curl pipe to shell commands
[19:57:29] <ptx0> where's the verification instructions?
[19:57:53] <ptx0> the "Security" chapter of installation only talks about single vs multi-user installation.
[19:58:27] <gchristensen> https://nixos.org/nix/download.html provides verification instructions
[19:58:38] <ptx0> but i am looking at the manual
[19:58:59] <ptx0> i clicked 'install a binary distribution'
[19:59:54] <gchristensen> (sounds like you're looking at the nix manual, not the nixos manual) that should be extended to include verification instructions as well; right now they live on the website and not in the manual, which is definitely an oversight
[19:59:59] <ptx0> anywhere there's a curl pipe to shell it should be big bold letters to stop drop and roll
[20:00:26] <ptx0> yeah but the fact that this method is being used at all is lazy and stupid
[20:00:39] <ptx0> you've created a security problem where there does not need to be one.
[20:00:46] <gchristensen> if you are on https://nixos.org/nix/ and click "Get Nix", it will, in bold, show you where to go for GPG verification instructions
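The verification flow gchristensen points to reduces to: fetch the installer, fetch the published digest (or GPG signature) over a trusted channel, and compare before anything is handed to a shell. A minimal sketch in Python — the script bytes and digest below are placeholders, not real Nix artifacts:

```python
import hashlib

def verify_download(data: bytes, expected_sha256: str) -> bool:
    # Compare the digest of the fetched bytes against the published one
    # before the script is ever piped to a shell.
    return hashlib.sha256(data).hexdigest() == expected_sha256

# Placeholder stand-ins; the real flow fetches both over HTTPS.
script = b"#!/bin/sh\necho install\n"
published = hashlib.sha256(script).hexdigest()

assert verify_download(script, published)
assert not verify_download(script + b"tampered", published)
```

This catches a corrupted or substituted download, though (unlike a GPG signature check) a plain digest only helps if it is published somewhere the attacker can't also rewrite.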
[20:01:06] <prometheanfire> wonder how _gpl got added
[20:01:23] <ptx0> prometheanfire: to the fpu shit?
[20:02:31] <prometheanfire> ya
[20:02:48] <cirdan> magic
[20:02:50] <prometheanfire> I saw the thread on the ML
[20:02:53] <PMT> Pretty easily. It was a non-GPL symbol wrapping a GPL one, and the non-GPL one had allegedly been deprecated for a while and was removed.
[20:03:38] <ptx0> sweet, like 30 datasets just disappeared from my backup system
[20:03:43] <cirdan> so that means even in court almost 100% we could keep using it
[20:03:49] <cirdan> ptx0: on nvme?
[20:04:03] <ptx0> well, i have an nvme slog
[20:04:05] <gchristensen> ptx0: I've opened https://github.com/NixOS/nix/issues/2624 to track that issue, thank you for that
[20:04:21] <ptx0> cirdan: it was 5.6TB
[20:04:21] <PMT> cirdan: "in court" is so far removed from the actual question here
[20:04:24] <ptx0> now the pool says it has 330GB
[20:04:44] <ptx0> the fun thing is that zpool history shows nothing
[20:04:51] <prometheanfire> PMT: ah, makes sense
[20:05:09] * ptx0 is confused as shit
[20:05:30] <PMT> ptx0: even history -i?
[20:05:34] <ptx0> https://gist.githubusercontent.com/kpande/87e10b5a97373d52d53c378c7a5fa08a/raw/f0b193c37c523921b40348781bf9445ee943cded/gistfile1.txt
[20:05:58] <ptx0> that shows more
[20:06:28] <PMT> Absent more data, I'd blame -F
[20:06:44] <ptx0> well
[20:07:27] <ptx0> i see it destroyed everything this morning
[20:07:28] <ptx0> no idea why
[20:07:33] <ptx0> it's all internal
[20:07:53] <bunder> bad script is bad
[20:08:53] <cirdan> oh still ended up with an error code
[20:09:16] <ptx0> bunder: maybe
[20:10:11] <ptx0> i mean, this sucks, i won't be getting that data back
[20:10:38] <ptx0> i have copies of much of it but there were a few TB of raw 4k stuff i recorded and that was the archive of stuff i didn't want to have two copies of
[20:11:55] <ptx0> why are the destroys so far apart, time-stamp wise..
[20:12:22] <cirdan> evil http script redirect!
[20:12:40] <gchristensen> ptx0: also, it turns out the install script is also signed by the release key, with a published hash at https://nixos.org/releases/nix/latest/ -- so exposing those instructions is also pretty easy. I've created https://github.com/NixOS/nixos-homepage/issues/258 to track that issue, as well.
[20:12:56] <gchristensen> thank you again for that feedback, it is good
[20:13:41] <PMT> ptx0: async_destroy?
[20:13:59] <PMT> I don't suppose you have the -i version of history pastebinned somewhere.
[20:14:24] <ptx0> probably can't because of private info
[20:14:27] <ptx0> but i can PM it to you
[20:14:32] <PMT> k
[20:16:26] <zfs> [zfsonlinux/zfs] port async unlinked drain from illumos-nexenta (#8142) new review comment by Matthew Ahrens <https://github.com/zfsonlinux/zfs/pull/8142#discussion_r247225715>
[20:16:52] <ptx0> yeah that's how incremental recv works
[20:16:58] <ptx0> internal clone swap
[20:18:05] <zfs> [zfsonlinux/zfs] port async unlinked drain from illumos-nexenta (#8142) new review comment by Matthew Ahrens <https://github.com/zfsonlinux/zfs/pull/8142#discussion_r247226259>
[20:18:43] <PMT> Yeah, I'm not surprised. But I think that means the destroys probably aren't where your data went.
[20:19:14] <ptx0> i don't even see the dataset mentioned
[20:19:16] <ptx0> but it was sent
[20:19:31] <PMT> Did your pool end up rolling back on import and you didn't notice?
[20:20:29] <zfs> [zfsonlinux/zfs] port async unlinked drain from illumos-nexenta (#8142) new review comment by Alek P <https://github.com/zfsonlinux/zfs/pull/8142#discussion_r247226921>
[20:20:47] <PMT> The -i log you shared also is truncated, so if that's where you're looking, ...?
[20:22:00] <ptx0> oh
[20:22:03] <ptx0> stupid termbin
[20:23:13] <ptx0> guess it is reasonable lol
[20:23:18] <ptx0> the file is large
[20:24:46] <ptx0> jeez, just glad the root OS was "in use"
[20:24:52] <ptx0> otherwise the whole OS would be gone
[20:25:27] <ptx0> PMT: looks like 4:58 is the timestamp from this morning
[20:26:04] <ptx0> annd the replication job wasn't even running
[20:26:06] <ptx0> so what was it
[20:26:53] <ptx0> hm, auto snap job is configured to run BUT it doesn't have this pool configured
[20:29:31] <PMT> Attempting to read that logfile in a web browser was a silly plan.
[20:29:55] <ptx0> 2019-01-11.05:02:33 [txg:49014] destroy rpool/media/archive (26058)
[20:30:03] <ptx0> sad trombone
[20:30:16] <ptx0> that filesystem isn't even configured in any snapshot or auto replication
[20:30:20] <ptx0> it's just a NFS share
[20:30:28] <Shinigami-Sama> did you curl a scrip into your server?
[20:30:34] <ptx0> haha tried to install Nix..
[20:30:44] <ptx0> gchristensen: look what you've done you bastard
[20:31:06] <gchristensen> it was my plan all along
[20:31:19] <ptx0> distracting me while my system destroyed my labour
[20:31:22] <ptx0> genius
[20:32:14] <PMT> That's weird, though, isn't it? That appearing to be an internal record when you weren't obviously receiving something that had the dataset stripped or s/t
[20:32:35] <ptx0> right
[20:32:57] <ptx0> that's the really frustrating thing, not knowing what did this
[20:33:49] <PMT> ptx0: did you receive a snapshot from a prior parent dataset that had rpool/media nuked
[20:33:56] <ptx0> no
[20:34:20] <PMT> Because I see 2019-01-11.07:59:09 [txg:51697] destroy rpool/media (27265)
[20:34:56] <ptx0> i have everything running via sudo when it does destroy and there are NO zfs destroy commands in secure log for this timestamp
[20:35:04] <PMT> So I was wondering if you had multiple receives going on at once with -F, and it had a bad day.
[20:35:05] <ptx0> not just destroy but recv etc also runs via sudo
[20:35:10] <ptx0> nope
[20:35:19] <ptx0> this filesystem that was destroyed, had finished receiving
[20:35:26] <ptx0> and had no incremental going on
[20:35:54] <Shinigami-Sama> time to undelete it with zed?
[20:38:09] *** gila <gila!~gila@5ED74129.cm-7-8b.dynamic.ziggo.nl> has quit IRC (Ping timeout: 268 seconds)
[20:38:24] * PMT puzzles.
[20:38:49] <PMT> Shinigami-Sama: ...I'm pretty positive zed can't do anything.
[20:39:05] <ptx0> Jan 11 01:22:21 localhost sudo: www-data : TTY=unknown ; PWD=/var/www ; USER=root ; COMMAND=/sbin/zfs recv -sFv rpool
[20:39:10] <ptx0> i don't think that should be there
[20:40:25] <ptx0> man, this could be because i had a bug where it accepted a newline at the end of the pool name for a minute before i fixed it
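The failure mode ptx0 describes — a dataset name with a stray trailing newline slipping into a `zfs recv` invocation built by a web frontend — is the classic argument for validating names before shelling out. A hypothetical sanity check; the allowed-character set here is a conservative illustrative subset, not ZFS's full naming rules:

```python
import re

# Conservative whitelist: components of letters, digits, and a little
# punctuation, joined by '/'. Whitespace is rejected outright, so a
# name carrying a trailing '\n' never reaches the zfs command line.
DATASET = re.compile(r"[A-Za-z0-9][A-Za-z0-9_.:-]*(/[A-Za-z0-9][A-Za-z0-9_.:-]*)*")

def valid_dataset(name: str) -> bool:
    return DATASET.fullmatch(name) is not None

assert valid_dataset("rpool/media/archive")
assert not valid_dataset("rpool\n")        # the bug described above
assert not valid_dataset("rpool; rm -rf /")
```

Note `fullmatch` rather than a `$`-anchored `match`: in Python, `$` will happily match just before a trailing newline, which is precisely the byte this check exists to reject.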
[20:40:34] <PMT> oh dear.
[20:40:45] <ptx0> how fun
[20:40:57] <PMT> And you don't have redundant copies of this anywhere?
[20:40:59] <ptx0> brb suicide
[20:40:59] <Shinigami-Sama> PMT: ptx0 here you go https://www.joyent.com/blog/zfs-forensics-recovering-files-from-a-destroyed-zpool
[20:41:04] <ptx0> j/k
[20:41:09] <ptx0> i have some of it
[20:41:12] <ptx0> most of it, maybe
[20:41:36] <PMT> Shinigami-Sama: yes, I know how to recover deleted files. This is an entire deleted pool, and zed is not involved.
[20:41:49] <ptx0> most of the data was ahem "acquired from others"
[20:41:54] <PMT> So I saw.
[20:41:56] <Shinigami-Sama> I'm mostly teasing
[20:41:56] <ptx0> so i can reobtain
[20:42:57] <PMT> This is why one of my TODOs when I stop having large one-off expenses so often is to get something like one of those Helios4s and have it receiving all the important datasets.
[20:43:18] <ptx0> i have a lot of the previous raw video i produced but the stuff from this summer may be dead
[20:43:27] <PMT> For bonus being-an-ass, I suppose I could turn on pool-wide dedup and receive them into distinct paths
[20:43:43] <ptx0> glwt
[20:44:10] <PMT> Man, that might actually make me care enough to go try to get the ddt_log work to work.
[20:44:42] * ptx0 places zfs hold on all the things
[20:44:46] <ptx0> fuck you zfs destroy
[20:45:15] <ptx0> so i've got 2.64TB of data i can recover
[20:45:49] <ptx0> god damn it, i went to clear out the mount point and destroyed the backup data
[20:45:52] <ptx0> and it has no snapshot
[20:46:11] <ptx0> but i've quickly exported the pool... trying a rewind
[20:46:18] <ptx0> having a bad day here
[20:46:27] <PMT> I'm sorry, friend.
[20:46:52] <ptx0> whew
[20:46:54] <PMT> You could go buy some large drives, set up a pool on them, and land backup disk images of this shit on it, so you can do so without worry of losing data.
[20:46:54] <ptx0> that worked
[20:46:59] <ptx0> thankfully it is a single disk pool here
[20:47:45] <ptx0> yeah i'm basically missing all the video from july onward
[20:47:48] <Shinigami-Sama> I read that as a shingle layered disk for some reason
[20:47:51] <cirdan> if only you had something like an offline backup solution... like... tape :)
[20:47:58] <zfs> [zfsonlinux/zfs] Add pyzfs BuildRequires for mock(1) (#8265) comment by Neal Gompa <https://github.com/zfsonlinux/zfs/issues/8265>
[20:48:05] <ptx0> i've got the unfinished projects though
[20:48:09] <ptx0> so that's good
[20:48:16] <cirdan> ptx0: that's awesome man
[20:48:20] <ptx0> cirdan: this is my offline backup i'm pulling from
[20:48:28] <ptx0> that's why it is missing stuff
[20:48:43] <cirdan> i've been the victim of a misplaced rm -rf * and also a failed raid expand
[20:49:00] <ptx0> yeah i've got all the latest video from BC that i didn't do anything with
[20:49:17] <ptx0> anything between july and now that i don't have, i can download from YT
[20:51:15] <cirdan> that's good
[20:51:53] <ptx0> yeah no more raw shots or full quality clip shows but hey i'll take it
[20:52:11] <ptx0> better than back in 2011 when i was messing around with RAID-0 and lost 7TB of data
[20:52:15] <cirdan> yup
[20:52:26] <cirdan> i lost 3tb in 2004 or 2005
[20:52:32] <cirdan> sad week
[20:52:35] <ptx0> aha i even have my music library
[20:52:46] <ptx0> that was the real tragedy
[20:52:56] <gchristensen> cirdan: what do you use for tape?
[20:53:06] <cirdan> nice this slightly used ultrastar is writing at around 155mb/s
[20:53:11] <cirdan> gchristensen: lto
[20:54:13] <gchristensen> I have a somewhat sizable (120t) WORN dataset which I'm not sure I care to afford to put on platters.
[20:56:47] <Lalufu> write once, read never?
[20:56:59] <gchristensen> yeah
[20:57:01] *** rjvb <rjvb!~rjvb@2a01cb0c84dee6009bda76eb03bc33f7.ipv6.abo.wanadoo.fr> has joined #zfsonlinux
[20:57:46] <ptx0> you can put your WORN dataset on a RAIN making sure to use some kind of RAID on each.
[20:58:25] <gchristensen> now you're cookin' with gas.
[20:58:40] <ptx0> cirdan: the feeling of shock and trauma i'm going through after data loss is pretty weird
[20:58:49] <cirdan> yeah
[20:58:53] <ptx0> must be how a parent feels when losing a child
[20:59:01] <cirdan> i almost lost every photo from the last 20 years
[20:59:03] <ptx0> "my data i will never be able to exactly reproduce, nooooo"
[20:59:10] <cirdan> I thought I had...
[20:59:15] <cirdan> it was sickening
[20:59:21] <ptx0> did you print them all out after
[20:59:26] <cirdan> no
[20:59:46] <cirdan> i still had a copy in an old backup but it was like 120gb of stuff
[21:01:21] <ptx0> that happened to me in 2007 but i had stacks of CDs burned with the contents i'd just lost
[21:01:27] <cirdan> lucky
[21:01:32] <ptx0> yeah
[21:01:44] <ptx0> my dad got me in a habit of burning everything to disc
[21:02:00] <cirdan> I put my 300 backup/warez cds back onto spinning rust because some were unreadable/corrupt files
[21:02:20] <ptx0> the CDs weren't excessively aged
[21:02:20] <cirdan> it's all sw from the 90s mostly, almost impossible to find anymore
[21:02:30] <cirdan> mine were 10-15 years
[21:02:34] <cirdan> and cheap ones
[21:02:37] <ptx0> 12:02:33 33.4G detached1/editing@test
[21:02:43] <ptx0> decrypting this thing as it sends
[21:02:46] <ptx0> zfs is pretty cool, man.
[21:06:07] <ptx0> it is like when i turned 29 years old my life just immediately started falling apart
[21:06:12] <ptx0> ahahahaha
[21:08:14] *** jasonwc <jasonwc!~jasonwc@pool-72-66-15-203.washdc.fios.verizon.net> has quit IRC (Read error: Connection reset by peer)
[21:10:55] <zfs> [zfsonlinux/zfs] Add pyzfs BuildRequires for mock(1) (#8265) new review comment by Brian Behlendorf <https://github.com/zfsonlinux/zfs/pull/8265#discussion_r247240778>
[21:13:07] <ptx0> ye know, it'd be nice if recv had a different -f option to just remove snapshots and not the dataset itself
[21:13:29] <ptx0> i.e. zfs send -R ... | recv -F ... will remove any snapshots from recv side, that didn't exist on send side
[21:13:37] <ptx0> but it'll also destroy all child datasets?
[21:13:59] <ptx0> recv -f would be nice to just remove snapshots and not do any stupid recursive rollback
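What ptx0 is asking for amounts to a snapshot set difference between sender and receiver, acted on at snapshot granularity only — never datasets. A sketch of that selection logic (the dataset and snapshot names are illustrative):

```python
def stale_snapshots(send_side, recv_side):
    """Snapshots present on the receiver but absent on the sender.

    Both arguments are iterables of full snapshot names like
    'pool/fs@snap'. Because only snapshot names are returned, acting
    on this list can never destroy a whole dataset the way a
    recursive 'recv -F' rollback can.
    """
    return sorted(set(recv_side) - set(send_side))

send = {"rpool/data@monday", "rpool/data@tuesday"}
recv = {"rpool/data@sunday", "rpool/data@monday", "rpool/data@tuesday"}
print(stale_snapshots(send, recv))
# → ['rpool/data@sunday']
```

A replication wrapper could feed each returned name to `zfs destroy` individually, getting the cleanup half of `-F` without its destructive rollback semantics.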
[21:19:22] <ptx0> no thx they give me diarrhea
[21:19:23] <zfs> [zfsonlinux/zfs] Add pyzfs BuildRequires for mock(1) (#8265) comment by Neal Gompa <https://github.com/zfsonlinux/zfs/issues/8265>
[21:34:32] *** rjvb <rjvb!~rjvb@2a01cb0c84dee6009bda76eb03bc33f7.ipv6.abo.wanadoo.fr> has quit IRC (Quit: Konversation terminated!)
[21:34:39] *** rjvbb <rjvbb!~rjvb@lfbn-ami-1-204-20.w86-208.abo.wanadoo.fr> has joined #zfsonlinux
[21:43:58] *** papamoose <papamoose!~papamoose@hester2.cs.uchicago.edu> has quit IRC (Remote host closed the connection)
[21:44:59] <ptx0> the clicking noises from this backup drive are soooooo comforting
[21:45:15] <bunder> clunk
[21:45:27] <ptx0> reminds me of Quantum drives
[21:45:35] <ptx0> click-tap click-tap click-click... tap
[21:46:46] <bunder> squeak
[21:46:48] <bunder> :P
[21:47:56] <bunder> i think i have some <10gb drives in my tub of junk, i bet they're dead
[21:48:17] <rjvbb> I remember Quantum drives for their circular saw noises, as if a platter was working its way outside the box
[21:48:42] <ptx0> torrents going at 33MiB/s
[21:48:48] <Lalufu> Quantum. That's a name I haven't heard in a while
[21:48:50] <rjvbb> (and a couple of 80Mb that you have to kickstart manually when cold :)
[21:48:55] * ptx0 will quickly rebuild his media library
[21:50:17] <Sketch> i used to have a bunch of quantum drives
[21:50:19] <ptx0> i wonder if Greg KH has an email plugin that tells him how many contributions to the kernel someone has before he even reads their message
[21:50:29] <Sketch> i was pretty sad when WD bought them
[21:50:34] <rjvbb> Lalufu: me too. Got me pondering the days I had MacOS + A/UX plus all my PhD work on a 120Mb (sic) drive
[21:50:37] <Sketch> was it even WD yet or was it maxtor then
[21:50:48] <ptx0> uhm, quantum drives sucked
[21:51:01] <ptx0> they had the highest failure rate at the time
[21:51:02] <Sketch> ('member when there used to be more than 3 drive manufacturers?)
[21:51:06] <bunder> fireball :P
[21:51:19] <ptx0> though if you have a 5.25" Bigfoot, it probably still runs
[21:51:49] <Sketch> ew
[21:51:55] <rjvbb> Never had any failures with the Quanta we had in our Macs, that I can remember at least
[21:52:01] <Sketch> now maxtor, they were terrible
[21:52:24] <ptx0> maxtor were better than quantum but drives were less compatible, IME
[21:52:29] <bunder> i never had problems with maxtor
[21:52:38] <Sketch> i think quantum's SCSI drives were good, but their consumer drives were not so great
[21:52:40] <rjvbb> I did
[21:52:41] <ptx0> the drives were more reliable but might not be recognised at all
[21:52:47] <Sketch> maxtor on the other hand was awful until after they bought quantum
[21:53:03] <Sketch> though i did have one maxtor drive that kept losing sectors, but just wouldn't die
[21:53:05] <bunder> i mean, when a drive dies, it's dead, but that happens to all of them eventually
[21:53:09] <rjvbb> (Macs had SCSI drives in the day)
[21:53:35] <ptx0> so did Compaq (at least the one I had)
[21:53:55] <ptx0> i wish i hadn't killed that Amiga 2500..
[21:54:12] <rjvbb> Hah
[21:54:26] <Sketch> my zombie maxtor was in an amiga 2000
[21:54:45] <Sketch> it lost sectors semi-regularly, but it never died
[21:54:52] <ptx0> i tried plugging a PC sound card into the expansion slot
[21:54:53] <Sketch> still powered on last time i tried it several years ago
[21:55:09] <ptx0> tbh it's Amiga's fault for allowing it to fit
[21:55:23] <Sketch> didn't they have ISA slots?
[21:55:31] <rjvbb> Had one of those too, I think I stuck a Quantum in, in addition to a tiny HDD I added to the 8086 co-computer I had in there
[21:55:32] <ptx0> they had 'bridge slots' and 'ISA slots'
[21:55:37] <Sketch> yeah, 2500 was the same thing as the 2000 just with some extra stuff
[21:55:45] <ptx0> pretty sure i slammed the pci card into the bridge slot
[21:55:45] <Sketch> they had both ISA slots and zorro slots
[21:56:00] <Sketch> oh, i see
[21:56:05] <bunder> no vesa bus? :P
[21:56:06] <rjvbb> (I recall having to go into a DOS debugger to get the drive to be recognised :))
[21:56:27] <ptx0> the system didn't turn on and when i removed the sound card, it would only flash green on the display
[21:56:29] <Sketch> yeah, the ISA slots weren't useful unless you had a bridge card, as they were really just meant for x86 compatibility which they expected you to have some hardware module for
[21:56:31] <ptx0> indicating a hw issue
[21:56:46] <Sketch> so the bridge slot was similar to PCI? I don't recall, though not too surprised
[21:56:54] <ptx0> no idea, i was 12
[21:56:57] <Sketch> i see
[21:57:06] <ptx0> life finds a way though
[21:57:15] <Sketch> right
[21:57:24] <ptx0> jeff goldblum reference
[21:57:37] <Sketch> i blew up a c64 once trying to make a diy stereo SID (sound chip) with a friend of mine
[21:58:04] <ptx0> all this talk of destroying hw is giving me PTSD from this morning's data loss
[21:58:40] <Sketch> you could buy adapters for them, but we were like "we can just hook it up to the same pins on the motherboard, and it should work"
[21:58:58] <Sketch> which in theory, was true. but it didn't work out for whatever reason, instead it just failed to power on ever again.
[21:59:07] <ptx0> i thought everyone knew what a sid chip is
[21:59:07] <bunder> https://github.com/gentoo/gentoo/pull/10791 omg someone is actually trying to get mate updated
[21:59:15] <Sketch> ptx0: one would hope, but you never know ;)
[21:59:25] <Sketch> a lot of youngins on irc these days
[21:59:35] <ptx0> like if you haven't listened to Machinae Supremacy, what kind of open source advocate ARE you?
[22:00:16] <ptx0> bunder: wow, we need that bot
[22:00:20] <rjvbb> sid chip, that's something out of an ice age, no? 8-)
[22:00:20] <Sketch> there are people on irc who weren't born until after commodore went bankrupt
[22:00:38] <Sketch> adult people
[22:00:53] <ptx0> people who can obtain a mortgage?
[22:00:58] <Sketch> yep
[22:01:02] <ptx0> wow
[22:01:14] <Sketch> at least i know i'm not the only one in this channel who is old ;)
[22:01:24] <ptx0> i just turned 29 this month
[22:01:40] <rjvbb> pff, just dry behind the ears
[22:01:43] <Sketch> hehe
[22:01:49] <ptx0> i've been referred to by children in public (they're so brutally honest) as "that old homeless man"
[22:02:13] <ptx0> like "what is that old homeless man smoking, it smells funny"
[22:04:10] <TemptorSent> Wait, you're not old enough to be called old. Old is > 37.
[22:04:22] <TemptorSent> (and if you don't get that reference, you're definitely not old)
[22:04:24] <Sketch> yeah, that's not very old
[22:04:35] * Sketch doesn't get the reference, but is > 37
[22:04:49] <gchristensen> or even remember a time when Commodore was in business
[22:04:53] <TemptorSent> ...or lack culture ;)
[22:04:58] <rjvbb> Thx TemptorSent!
[22:05:12] *** mquin <mquin!~mike@freenode/staff/mquin> has quit IRC (Quit: So Much For Subtlety)
[22:05:13] <TemptorSent> Holy Grail reference... Dennis.
[22:05:14] <rjvbb> am with Sketch here
[22:05:17] <Sketch> maybe i've just forgotten it in my old age ;)
[22:05:44] <ptx0> On November 15, 1964, the Chronicle printed the story, quoting Weinberg as saying "We have a saying in the movement that you can't trust anybody over 30."[10]
[22:05:49] <ptx0> this is the only thing i am aware of
[22:06:31] <Sketch> which movement is that?
[22:06:32] <TemptorSent> Homework for tonigh: Watch Monty Python & The Holy Grail
[22:06:40] *** gila <gila!~gila@> has joined #zfsonlinux
[22:06:51] <Sketch> there was a time when i could have quoted the entire movie...but it was a long time ago
[22:07:05] <Sketch> probably around the time ptx0 was born ;)
[22:07:10] <rjvbb> oh, *that* holy grail :)
[22:07:14] <TemptorSent> "Old Woman!" "Man." ...
[22:07:47] *** gerhard7 <gerhard7!~gerhard7@ip5657ee30.direct-adsl.nl> has quit IRC (Quit: Leaving)
[22:09:41] <TemptorSent> Off to the docs... I *am* old :P
[22:10:14] <rjvbb> not unless you know where Abe gets the mustard from
[22:10:54] <rjvbb> (and if you don't get that reference ... you're probably not Dutch)
[22:13:07] *** codyps <codyps!~codyps@richard.einic.org> has quit IRC (Ping timeout: 252 seconds)
[22:13:31] <zfs> [zfsonlinux/zfs] zfs filesystem skipped by df -h (#8254) new review comment by Paul Zuchowski <https://github.com/zfsonlinux/zfs/pull/8254#discussion_r247255026>
[22:16:08] <zfs> [zfsonlinux/zfs] zfs filesystem skipped by df -h (#8254) comment by Paul Zuchowski <https://github.com/zfsonlinux/zfs/issues/8254#issuecomment-453658966>
[22:16:42] *** rjvb <rjvb!~rjvb@2a01cb0c84dee6006da1990eeceae2e6.ipv6.abo.wanadoo.fr> has joined #zfsonlinux
[22:21:35] *** shibboleth <shibboleth!~shibbolet@gateway/tor-sasl/shibboleth> has joined #zfsonlinux
[22:27:57] <zfs> [zfsonlinux/zfs] ztest: add scrub verification (#8203) comment by Brian Behlendorf <https://github.com/zfsonlinux/zfs/issues/8203#issuecomment-453662123>
[22:28:01] <zfs> [zfsonlinux/zfs] ztest: add scrub verification (#8203) closed by Brian Behlendorf <https://github.com/zfsonlinux/zfs/issues/8203#event-2069586450>
[22:37:54] <PMT> Oh, I suppose Debian matter-of-factly fixed the init script dep loop and merged it.
[22:42:01] <zfs> [zfsonlinux/zfs] Fix 0 byte memory leak in zfs receive (#8266) created by Tom Caputi <https://github.com/zfsonlinux/zfs/issues/8266>
[22:42:45] <bunder> 0 byte lol
[22:43:36] *** gila <gila!~gila@> has quit IRC (Read error: Connection reset by peer)
[23:05:31] <Crocodillian> what is the difference between mountpoint=legacy and mountpoint=none, if my root volume is legacy dracut refuses to mount it, but none works
[23:10:27] <zfs> [zfsonlinux/zfs] Fix zio leak in dbuf_read() (#8267) created by Tom Caputi <https://github.com/zfsonlinux/zfs/issues/8267>
[23:12:01] <Crocodillian> I could probably just fix this
[23:12:05] <Crocodillian> in dracut stuff
[23:12:35] <Shinigami-Sama> mountpoint=legacy means it gets mounted via fstab/mount(8); mountpoint=none means it doesn't get mounted at all
[23:13:05] <Crocodillian> yes
[23:13:11] <Crocodillian> but this is happening in the initramfs
[23:13:14] <Crocodillian> before fstab
[23:14:15] <Crocodillian> also the documentation does not say what the actual difference is, you can mount any volume from fstab, presumably one with mountpoint=none as well
[23:14:45] <Shinigami-Sama> you can set mountpoint to almost anything, e.g. mountpoint=/mnt/music
[23:14:58] <Shinigami-Sama> that's what my music dataset is set to
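For a dataset with mountpoint=legacy, the mount is driven entirely by fstab (or an explicit mount(8) call); ZFS itself won't mount it. A sketch of the corresponding fstab entries — the dataset names and mountpoints here are made-up examples:

```
# /etc/fstab — legacy-mounted ZFS datasets
rpool/ROOT/default   /            zfs   defaults   0  0
rpool/media/music    /mnt/music   zfs   defaults   0  0
```

With mountpoint=none there is nothing for either ZFS or fstab to mount, which is why an initramfs that insists on mounting root by fstab semantics can behave differently between the two settings.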
[23:17:08] *** sponix <sponix!~sponix@> has quit IRC (Ping timeout: 245 seconds)
[23:17:33] *** vonsyd0w <vonsyd0w!~vonsyd0w@unaffiliated/vonsyd0w> has quit IRC (Ping timeout: 245 seconds)
[23:19:31] <zfs> [zfsonlinux/zfs] Add contrib/pyzfs/setup.py to .gitignore (#8268) created by Tom Caputi <https://github.com/zfsonlinux/zfs/issues/8268>
[23:20:13] *** sponix <sponix!~sponix@> has joined #zfsonlinux
[23:20:49] *** vonsyd0w <vonsyd0w!~vonsyd0w@unaffiliated/vonsyd0w> has joined #zfsonlinux
[23:27:53] <zfs> [zfsonlinux/zfs] Add contrib/pyzfs/setup.py to .gitignore (#8268) comment by Neal Gompa <https://github.com/zfsonlinux/zfs/issues/8268>
[23:31:38] *** tilpner <tilpner!~weechat@NixOS/user/tilpner> has joined #zfsonlinux
[23:34:44] <tilpner> Hi, I just installed ZoL, and am wondering how I can figure out where my disk space went
[23:35:00] <tilpner> zpool list rpool says "FREE 2.72T"
[23:35:16] <tilpner> But the highest AVAIL in zfs list is 2.63T
[23:35:36] <zfs> [zfsonlinux/zfs] Add contrib/pyzfs/setup.py to .gitignore (#8268) comment by Brian Behlendorf <https://github.com/zfsonlinux/zfs/issues/8268>
[23:37:33] <bunder> thats probably right, i think
[23:37:43] <bunder> they count a little differently
[23:38:22] <bunder> unless you have a 100gb zvol
[23:39:25] <tilpner> I don't. Do you know what the difference between AVAIL and FREE is?
[23:41:11] <bunder> iirc zfs accounts for structures, metadata, parity, block size differences, where zpool doesn't
[23:42:19] <bunder> oh and possibly the 3% slack space
[23:43:01] <CompanionCube> it's in the manpage if you want a proper explanation
[23:43:05] <tilpner> Oh, what does it use that slack space for?
[23:43:11] <tilpner> I'll check the manpage :)
[23:43:23] <CompanionCube> tilpner: because truly running out of space in CoW results in a perma-readonly FS
[23:43:35] <bunder> slack space is like it is on ext3, mostly to keep things running
[23:43:59] <bunder> well sort of, i like CompanionCube's explanation better
[23:44:09] <tilpner> So there's no point in keeping rpool/reserve with reservation=10G? (Don't know if that's even sensible)
[23:44:22] <Sketch> everyone loves CompanionCubes
[23:44:36] <CompanionCube> tilpner: it can be useful for other purposes
[23:44:38] <Sketch> so nicely weighted.
[23:44:40] * Sketch goes back to work
[23:45:05] <CompanionCube> but it's definitely not needed
[23:46:12] <bunder> you might still want it, something something pool slows down at 90-95%
[23:46:25] <CompanionCube> that's a good example of one
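bunder's "3% slack space" can be made concrete: by default ZFS reserves 1/32 of the pool as slop space (the spa_slop_shift tunable, default 5, with a 128 MiB floor), and `zfs list` AVAIL excludes it while `zpool list` FREE does not. A sketch of the arithmetic against tilpner's numbers, treating the reported sizes as TiB — the remaining gap is metadata and allocation overhead:

```python
SPA_SLOP_SHIFT = 5         # default: hold back 1/32 of the pool
MIN_SLOP = 128 * 2**20     # 128 MiB floor

def slop_space(pool_bytes: int) -> int:
    # Space ZFS reserves so a completely full CoW pool can still
    # delete things (deletion itself needs to allocate new blocks).
    return max(pool_bytes >> SPA_SLOP_SHIFT, MIN_SLOP)

free = int(2.72 * 2**40)   # zpool list FREE: 2.72T
avail = free - slop_space(free)
print(f"{avail / 2**40:.3f}")
# → 2.635 — right around the 2.63T AVAIL that zfs list reports
```

So most of the ~90 GiB difference tilpner observed is accounted for by the 1/32 slop reservation alone.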
[23:47:45] <zfs> [zfsonlinux/zfs] ztest: scrub verification (#8269) created by Brian Behlendorf <https://github.com/zfsonlinux/zfs/issues/8269>
[23:47:51] <tilpner> Okay, I'll keep it. Thank you :)
[23:49:47] *** rjvbb <rjvbb!~rjvb@lfbn-ami-1-204-20.w86-208.abo.wanadoo.fr> has quit IRC (Ping timeout: 240 seconds)
[23:51:38] <zfs> [zfsonlinux/zfs] ztest: scrub ddt repair (#8270) created by Brian Behlendorf <https://github.com/zfsonlinux/zfs/issues/8270>
[23:54:42] *** Dagger2 is now known as Dagger
[23:56:54] <zfs> [zfsonlinux/zfs] ztest: split block reconstruction (#8271) created by Brian Behlendorf <https://github.com/zfsonlinux/zfs/issues/8271>