   February 16, 2019
[00:00:02] <bunder> but my fragmentation
[00:00:12] <bunder> :)
[00:01:25] *** rich0 <rich0!~quassel@gentoo/developer/rich0> has joined #zfsonlinux
[00:02:36] *** rich0 <rich0!~quassel@gentoo/developer/rich0> has quit IRC (Remote host closed the connection)
[00:03:29] *** shibboleth <shibboleth!~shibbolet@gateway/tor-sasl/shibboleth> has quit IRC (Quit: shibboleth)
[00:03:29] <zfs> [zfsonlinux/zfs] better explanation of zvol sparse allocation implications with async and non-direct IO (#8415) created by kpande <https://github.com/zfsonlinux/zfs/issues/8415>
[00:03:42] <ptx0> bunder: that's how i was thinking it'd look
[00:03:49] *** IonTau <IonTau!~IonTau@203-206-42-171.dyn.iinet.net.au> has joined #zfsonlinux
[00:05:13] *** rich0 <rich0!~quassel@gentoo/developer/rich0> has joined #zfsonlinux
[00:06:51] <bunder> s/used/taken ?
[00:07:22] <ptx0> yeah that's what my brain jumped out at but that's the original wording
[00:07:40] <ptx0> i'll update it though
[00:09:17] <zfs> [zfsonlinux/zfs] Pool is stuck waiting for the transaction group to sync (#2871) closed by kpande <https://github.com/zfsonlinux/zfs/issues/2871#event-2143771659>
[00:11:08] <zfs> [zfsonlinux/zfs] Fast Clone Deletion (#8416) created by Sara Hartse <https://github.com/zfsonlinux/zfs/issues/8416>
[00:11:39] <PMT> vzvol?
[00:12:12] <bunder> hmm, i thought we got fast clone delete already
[00:12:54] <bunder> guess not
[00:14:45] <PMT> I believe there's a PR
[00:15:03] <PMT> Oh, hm, apparently there wasn't before, lmao.
[00:16:19] <zfs> [zfsonlinux/zfs] ZFS and NFS hang (#2954) closed by kpande <https://github.com/zfsonlinux/zfs/issues/2954#event-2143781320>
[00:17:39] <bunder> PMT: re vzvol https://www.youtube.com/watch?v=PoHsYzzzp8Q
[00:18:28] *** Hypfer <Hypfer!~Hypfer@unaffiliated/hypfer> has quit IRC (Ping timeout: 268 seconds)
[00:18:42] <zfs> [zfsonlinux/zfs] zvols, device-mapper, LVM2, and LIO target (targetcli) (#2994) closed by kpande <https://github.com/zfsonlinux/zfs/issues/2994#event-2143784386>
[00:19:04] *** digrouz <digrouz!~digrouz@246.188-136-217.adsl-dyn.isp.belgacom.be> has quit IRC ()
[00:19:38] <zfs> [zfsonlinux/zfs] boot hangs with nested mountpoint after recent kernel and zfs updates (#2995) closed by kpande <https://github.com/zfsonlinux/zfs/issues/2995#event-2143785527>
[00:19:53] <cirdan> heh. <@dvl> optiz0r: There's this guy posting on the mailing lists... ~10PB on tape
[00:20:14] <bunder> on one tape?
[00:20:26] <cirdan> ...yes, one tape
[00:20:44] <bunder> is that even possible?
[00:20:52] <cirdan> sure
[00:20:56] <ptx0> ZLE
[00:20:58] <cirdan> wanna buy one?
[00:21:07] <cirdan> ill sell
[00:21:16] <cirdan> >:-D
[00:21:36] <Shinigami-Sama> thats compressed size or uncompressed size?
[00:21:43] <cirdan> yes
[00:22:15] <zfs> [zfsonlinux/zfs] Lost pools after many successful send/receives (#3010) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3010#event-2143788865>
[00:23:23] <jasonwc> ptx0: I'm going to try to replicate the data corruption you experienced with incremental encrypted sends. You note the command you used was "`zfs send -Rwv rpool/ocean@to_newjack | zfs recv -Fv newjack/ocean`". You describe this as an "incremental" send but it just looks like a replication stream of all snapshots through the snapshot specified. I thought "incremental" only applied if
[00:23:23] <jasonwc> you used -i or -I.
[00:23:57] <ptx0> nah
[00:24:03] <ptx0> send -R is incremental if it includes >1 snapshot
[00:24:28] <ptx0> it sends a full stream and then differences
[00:24:50] <jasonwc> The man page indicates it's only incremental if -I or -i is used, but I see your point.
[00:24:50] <jasonwc> https://pastebin.com/nY1XyM9c
[00:27:16] <ptx0> i mean, use zstreamdump
[00:27:20] <ptx0> you can see it is incremental
[00:27:54] <ptx0> i was trying to narrow it down to what causes the IO error and any operation with an internal or otherwise incremental raw recv does it
[00:28:04] <jasonwc> Do you think there's something special about sending a root pool with lots of snapshots that caused this corruption or would any data suffice?
[00:28:11] <ptx0> it doesn't need -R, but -w -I will do
[00:28:30] <ptx0> tom says he can't reproduce it
[00:29:46] <jasonwc> ok
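The zstreamdump check ptx0 suggests can be sketched as follows (pool and snapshot names taken from the discussion; this needs a live ZFS system, so read it as a command sketch rather than a runnable example):

```shell
# Inspect a raw replication stream without receiving it.
# zstreamdump prints one BEGIN record per stream packaged
# inside the -R compound stream.
zfs send -Rw rpool/ocean@to_newjack | zstreamdump | grep -A 2 'BEGIN record'

# In the output, the first stream's BEGIN record has fromguid = 0
# (a full send); each subsequent snapshot's stream has a nonzero
# fromguid, i.e. it is an incremental from the previous snapshot.
```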
[00:30:55] <cirdan> ptx0: was it limited to a specific pool?
[00:31:16] <cirdan> maybe it's just a tr bug :)
[00:31:20] <ptx0> nope
[00:31:27] <ptx0> recv side is a xeon
[00:31:41] <ptx0> happens on both systems
[00:33:41] <zfs> [zfsonlinux/zfs] zpool destroy fails on zpool with suspended IO (#2878) comment by kpande <https://github.com/zfsonlinux/zfs/issues/2878#issuecomment-464246910>
[00:34:49] <zfs> [zfsonlinux/zfs] Inconsistencies of read and write bandwidth during random writes (#2851) closed by kpande <https://github.com/zfsonlinux/zfs/issues/2851#event-2143805789>
[00:46:56] <jasonwc> ptx0: So, I did a fairly simple test. I created two pools. In the first pool, testpool, I created an encrypted dataset and downloaded kernel 4.19 from kernel.org, which I extracted in the same dataset. I then created a snapshot, snap1. I then downloaded kernel 5.0-rc6, did the same thing, and created snap2. I then did 'zfs send -Rwv testpool/encryptedtest@snap2 | zfs recv -Fv
[00:46:57] <jasonwc> testpool2/encryptedtest'. I then scrubbed both pools. No errors.
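jasonwc's procedure, as an outline (device paths are hypothetical and key setup is elided, so treat this as a sketch of the steps, not a drop-in script):

```shell
# Two throwaway pools (hypothetical devices)
zpool create testpool  /dev/vdb
zpool create testpool2 /dev/vdc

# Encrypted source dataset, populated and snapshotted twice
zfs create -o encryption=on -o keyformat=passphrase testpool/encryptedtest
# ...extract kernel 4.19 into the dataset...
zfs snapshot testpool/encryptedtest@snap1
# ...extract kernel 5.0-rc6...
zfs snapshot testpool/encryptedtest@snap2

# Raw (-w) replication (-R) of everything through @snap2
zfs send -Rwv testpool/encryptedtest@snap2 | zfs recv -Fv testpool2/encryptedtest

# Scrub both sides and check for errors
zpool scrub testpool
zpool scrub testpool2
zpool status -v
```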
[00:47:25] <ptx0> scrub doesn't report anything, you need to load the recv filesystem key
[00:47:29] <PMT> bunder: even with custom fabrication, a tape that dense would currently be a fool's venture.
[00:47:32] <Shinigami-Sama> maybe ptx0's porn does something weird
[00:47:35] <jasonwc> I'm confused by the fact that you say, "I scrub once a week, which some consider excessive, but it never reports any data issues." but the warning/errors shows a zpool scrub that showed permanent data errors
[00:47:35] <ptx0> and then actually read the files
[00:47:58] <ptx0> the key is typically unloaded, jasonwc
[00:48:03] <cirdan> PMT: its 654mi long
[00:48:06] <Shinigami-Sama> PMT: shingled tape :D
[00:48:14] <ptx0> scrub won't find errors in unloaded and untouched data structures
[00:48:20] <ptx0> i guess that's a good point
[00:48:27] <ptx0> it is something being unwrapped improperly, maybe.
[00:48:41] <jasonwc> the key is loaded since I didn't export the pool
[00:48:56] <jasonwc> would running find be sufficient?
[00:48:56] <DHE> I scrub my enterprise drives weekly because they arrived having either been dropped by the shipper or a catastrophically bad manufacturing batch.. :/
[00:49:00] <ptx0> i raw recv encrypted datasets into 'unencrypted pool'
[00:49:03] <PMT> Shinigami-Sama: christ, no
[00:49:22] <cirdan> DHE: every 2 weeks here at home cause why not?
[00:49:31] <jasonwc> same, every 2 weeks
[00:49:35] <ptx0> jasonwc: fwiw it doesn't occur immediately on my system either, i can set up a fresh rootfs pool and use it for weeks without issue
[00:49:38] <DHE> cirdan: I am genuinely concerned for the health of these disks. that's why I do it.
[00:50:06] <ptx0> tom should set up a script that continuously sends/recvs and loads the dataset to verify the file contents inside
[00:50:15] <cirdan> DHE: yeah i know i just mean even if not it's not generally too hard to do
[00:50:56] <zfs> [zfsonlinux/zfs] document pecularity about zpool iostat statistics reporting IOs as submitted to the block layer (#8417) created by kpande <https://github.com/zfsonlinux/zfs/issues/8417>
[00:50:58] <DHE> there's still a performance hit...
[00:51:31] <ptx0> DHE: but it says drop ship
[00:51:44] <cirdan> hmm question: does smart's uncorrectable sectors attribute work on physical or logical? i have a drive with 8 sectors pending so it would seem that it could be talking about logical sectors
[00:51:47] <DHE> *snort*
[00:51:57] <cirdan> also zfs reports no errors oddly
[00:52:09] <DHE> cirdan: the disk runs in physical sectors. that's how they work
[00:52:12] <PMT> cirdan: Pending Sectors are "if you overwrite the contents of them we remap them from spares"
[00:52:20] <jasonwc> ptx0: I ran a find operation after loading the key for the new encrypted dataset and there were no issues. I can try doing an rsync of the data, which will read all of the files.
[00:53:02] <cirdan> PMT: pending means we got a crc error reading it
[00:53:13] <jasonwc> ptx0: So, if as you say, scrub finds no errors, the corruption isn't from the send/recv. It's from whatever happens after that.
[00:53:14] <ptx0> jasonwc: like i said, i had an encrypted dataset and it was fucked on recv side, but i was able to then send without -p, -R, or -w, and 'recreate' the filesystem into a fresh 'zfs create -o encryption=on' encryptionroot
[00:53:31] <cirdan> itll clear if the sector can be read or if it's overwritten
[00:53:33] <PMT> cirdan: yes, which means it got marked as "okay next time it's overwritten we mark it as Offline_Uncorrectable and remap that logical address to a spare sector."
[00:53:36] <ptx0> it was then okay and i was able to recv this and load that dataset key without issue
[00:53:45] <PMT> Or if it successfully reads.
[00:53:49] <ptx0> so, something on the source dataset is mangled in a way that the recv side chokes on
[00:54:04] <ptx0> and over time it will get mangled again in the same way
[00:54:05] <cirdan> it only goes to Offline_Uncorrectable if it can't be written to and read right back
[00:54:18] <PMT> Source?
[00:54:39] <jasonwc> ptx0: Does it matter whether the key is loaded in terms of what scrub is doing? I assume it'll just read the data and verify it against the checksum which doesn't require decryption.
[00:55:04] <cirdan> DHE uses Package Delivery Service for his drives: https://www.youtube.com/watch?v=UOJiEgxt7RY
[00:56:16] <DHE> I don't dictate what the vendor ships with
[00:56:56] <ptx0> jasonwc: no idea
[00:58:23] <jasonwc> ptx0: So, I loaded the encryption key, mounted the encrypted dataset, and then did rsync -arv from the encrypted dataset to a new dataset to see if it would give any checksum or IO errors. Rsync completed and zpool status shows no errors.
[00:58:41] <zfs> [zfsonlinux/zfs] bpobj_iterate overflows the stack (#7675) comment by Matthew Ahrens <https://github.com/zfsonlinux/zfs/issues/7675#issuecomment-464253692>
[00:58:48] <PMT> jasonwc: fyi -a implies -r
[00:59:01] <ptx0> yeah, that simple test is not going to reproduce the issue
[00:59:19] <ptx0> you need to run an encrypted rpool with real data that's been written/rewritten/snapshotted over time
[00:59:26] <jasonwc> ah, ok
[00:59:51] <ptx0> i have rpool unencrypted, rpool/ocean as encryptionroot for all children incl home, rootfs
[01:01:40] <zfs> [zfsonlinux/zfs] Serious: zpool will not load without forced import, Ubuntu 14.04 since recent zfs update (#2927) closed by kpande <https://github.com/zfsonlinux/zfs/issues/2927#event-2143837303>
[01:03:26] <jasonwc> That might explain why nobody else has reported this. In order to use encryption for a root pool, you would need an unencrypted boot pool if using Grub, since Grub won't work if encryption=on on the dataset, at least that's my understanding.
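The layout jasonwc describes — a small unencrypted boot pool for GRUB plus an encrypted root pool — roughly looks like this (hypothetical partition paths; GRUB only ever reads the boot pool, so encryption=on stays off it):

```shell
# bpool holds /boot only; GRUB must be able to read it, so it is
# created unencrypted (HOWTOs also restrict it to GRUB-readable
# pool features, elided here)
zpool create -O mountpoint=/boot bpool /dev/sda2

# rpool carries the system, with an encryptionroot near the top;
# everything created beneath rpool/ROOT inherits the encryption
zpool create rpool /dev/sda3
zfs create -o encryption=on -o keyformat=passphrase rpool/ROOT
```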
[01:03:40] <ptx0> it's not uncommon
[01:05:15] <jasonwc> It isn't? A large number of people are using a testing branch of ZFS to run their root pools with a feature that has had several on-disk format changes, causing the need to recreate datasets?
[01:05:37] <ptx0> i think you're overstating the issues, which occurred a year ago now
[01:05:54] <PMT> A concerning number of people run git builds on their stable systems.
[01:06:06] <PMT> But then, I'm boring about things like that. :P
[01:06:17] <Shinigami-Sama> git head is stupidly stable
[01:06:24] <jasonwc> PMT: Yeah, I've never run a non-stable release on my production systems, just VMs
[01:06:34] <jasonwc> I assume it'll eat my data
[01:08:42] <cirdan> om nom nom nom
[01:09:04] <cirdan> interesting: https://asciinema.org
[01:10:45] <zfs> [zfsonlinux/zfs] canceling a deduplicated send over the network hangs the sending end in D state (#3023) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3023#event-2143847822>
[01:11:32] <jasonwc> Anyone else here running an encrypted rpool with the native encryption?
[01:11:34] <PMT> Shinigami-Sama: it turns out that's true whether you think it's stable or unstable
[01:12:24] <jasonwc> What is?
[01:12:44] <jasonwc> That you should assume it'll eat your data?
[01:13:16] <PMT> jasonwc: the description was "stupidly stable", which could either be taken as ridiculously stable or extremely unstable
[01:13:26] <zfs> [zfsonlinux/zfs] NFS client takes time to get up to speed (#3048) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3048#event-2143850453>
[01:13:30] <Shinigami-Sama> :D
[01:13:55] <jasonwc> ah
[01:14:03] <PMT> My general advice is to not run unstable code on production systems unless you're paying people to fix it when it catches fire. :)
[01:15:01] <zfs> [zfsonlinux/zfs] thread hung in txg_wait_open() forever in D state (#3064) comment by kpande <https://github.com/zfsonlinux/zfs/issues/3064#issuecomment-464257673>
[01:15:28] <jasonwc> man, Trump's speech today sound like it was created by a random word generator
[01:16:09] <PMT> s/today sound/sounds/
[01:17:15] <jasonwc> thanks :)
[01:22:50] <zfs> [zfsonlinux/zfs] ZFS: Unable to set "noop" scheduler (#6513) comment by hoppel118 <https://github.com/zfsonlinux/zfs/issues/6513#issuecomment-464259413>
[01:26:27] <zfs> [zfsonlinux/zfs] Bogus write ops/second being reported by 'zpool iostat'. (#2888) comment by Matthew Ahrens <https://github.com/zfsonlinux/zfs/issues/2888#issuecomment-464260089>
[01:27:02] <zfs> [zfsonlinux/zfs] ZFS: Unable to set "noop" scheduler (#6513) comment by kpande <https://github.com/zfsonlinux/zfs/issues/6513#issuecomment-464260225>
[01:28:44] <zfs> [zfsonlinux/zfs] Bogus write ops/second being reported by 'zpool iostat'. (#2888) comment by kpande <https://github.com/zfsonlinux/zfs/issues/2888#issuecomment-464260619>
[01:31:10] <zfs> [zfsonlinux/zfs] document pecularity about zpool iostat statistics reporting IOs as submitted to the block layer (#8417) comment by Matthew Ahrens <https://github.com/zfsonlinux/zfs/issues/8417>
[01:37:18] <Shinigami-Sama> I wasn't kidding about the 300 more
[01:37:36] <zfs> [zfsonlinux/zfs] better explanation of zvol sparse allocation implications with async and non-direct IO (#8415) new review comment by Matthew Ahrens <https://github.com/zfsonlinux/zfs/pull/8415#pullrequestreview-204491024>
[01:38:12] *** jugo <jugo!~jugo@unaffiliated/jugo> has quit IRC (Ping timeout: 250 seconds)
[01:40:16] *** jugo <jugo!~jugo@unaffiliated/jugo> has joined #zfsonlinux
[01:40:47] <zfs> [zfsonlinux/zfs] zfs mount man page should document legacy behaviour (#8414) new review comment by Matthew Ahrens <https://github.com/zfsonlinux/zfs/pull/8414#pullrequestreview-204491462>
[01:45:12] <zfs> [zfsonlinux/zfs] ZFS data corruption (#3990) comment by Garrett Fields <https://github.com/zfsonlinux/zfs/issues/3990#issuecomment-464263921>
[01:46:43] <zfs> [zfsonlinux/zfs] document pecularity about zpool iostat statistics reporting IOs as submitted to the block layer (#8417) new review comment by kpande <https://github.com/zfsonlinux/zfs/pull/8417#discussion_r257434953>
[01:56:49] <zfs> [zfsonlinux/zfs] bpobj_iterate overflows the stack (#7675) comment by Serapheim Dimitropoulos <https://github.com/zfsonlinux/zfs/issues/7675#issuecomment-464265502>
[01:57:30] <zfs> [zfsonlinux/zfs] zfs mount man page should document legacy behaviour (#8414) comment by kpande <https://github.com/zfsonlinux/zfs/issues/8414#issuecomment-464265593>
[01:58:24] <zfs> [zfsonlinux/zfs] ZFS data corruption (#3990) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3990#event-2143892197>
[02:00:25] <zfs> [zfsonlinux/zfs] zfs unmount very slow, txg_sync taking 90% CPU (#3095) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3095#event-2143893871>
[02:02:35] <zfs> [zfsonlinux/zfs] Feature: new dataset flag 'chroot' - don't mount sub-datasets automatically (#3098) comment by kpande <https://github.com/zfsonlinux/zfs/issues/3098#issuecomment-464266329>
[02:04:15] *** AllanJude <AllanJude!ajude@freebsd/developer/AllanJude> has joined #zfsonlinux
[02:04:44] <zfs> [zfsonlinux/zfs] Set setproctitle during zfs send (#8418) created by Sean Eric Fagan <https://github.com/zfsonlinux/zfs/issues/8418>
[02:09:16] <zfs> [zfsonlinux/zfs] ASSERTION(RW_LOCK_HELD(&dh->dh_dn->dn_struct_rwlock)) failed (#3096) comment by kpande <https://github.com/zfsonlinux/zfs/issues/3096#issuecomment-464267236>
[02:09:20] <zfs> [zfsonlinux/zfs] ASSERTION(RW_LOCK_HELD(&dh->dh_dn->dn_struct_rwlock)) failed (#3096) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3096#event-2143900679>
[02:09:45] <zfs> [zfsonlinux/zfs] Set setproctitle during zfs send (#8418) comment by Allan Jude <https://github.com/zfsonlinux/zfs/issues/8418#issuecomment-464267289>
[02:12:42] <zfs> [zfsonlinux/zfs] Crash with SCST + ZFS + IOmeter (#3127) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3127#event-2143903261>
[02:15:17] <zfs> [zfsonlinux/zfs] nex-3165 segregate ddt in arc (#3301) comment by kpande <https://github.com/zfsonlinux/zfs/issues/3301#issuecomment-464267934>
[02:17:20] <zfs> [zfsonlinux/zfs] 0.6.4 upgrade on el6 rpms, yields lots of scary errors (#3271) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3271#event-2143906816>
[02:21:48] <jasonwc> I built ZoL master using the howto instructions. Even after enabling the systemd units, they don't appear to start correctly. They show status "inactive (dead)." Not sure what's going on - they just work using the Debian packages
[02:22:10] *** c3-Linux <c3-Linux!~c3r1c3-Li@ip72-211-81-173.no.no.cox.net> has quit IRC (Remote host closed the connection)
[02:22:53] *** c3r1c3-Lin <c3r1c3-Lin!~c3r1c3-Li@ip72-211-81-173.no.no.cox.net> has joined #zfsonlinux
[02:24:47] <bunder> maybe debian's packaging does more, not sure
[02:24:57] <bunder> every distro does things a little differently
[02:25:52] <jasonwc> Yeah, the last time I built ZoL packages, I had issues with the systemd units as well. In that case, I think it wasn't installing them at all. It seems to install them, but they aren't working properly. Oh well, this is just a test VM. Not a big deal.
[02:27:31] <bunder> https://github.com/gentoo/gentoo/blob/master/sys-fs/zfs/zfs-0.7.12.ebuild#L195 not sure if that would help
[02:28:27] <bunder> it sounds like you might have to remove and readd them into systemd
[02:28:58] <zfs> [zfsonlinux/zfs] ZFS-8000-5E after disk id changed (#3359) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3359#event-2143915381>
[02:33:16] <zfs> [zfsonlinux/zfs] zfs recv -F w/ snapdev=visible problem (#3380) comment by kpande <https://github.com/zfsonlinux/zfs/issues/3380#issuecomment-464270177>
[02:33:26] <zfs> [zfsonlinux/zfs] zfs recv -F w/ snapdev=visible problem (#3380) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3380#event-2143918322>
[02:36:07] <zfs> [zfsonlinux/zfs] Suboptimal performance with zvols over 10gb ethernet (#3394) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3394#event-2143920042>
[02:36:10] <zfs> [zfsonlinux/zfs] Suboptimal performance with zvols over 10gb ethernet (#3394) comment by kpande <https://github.com/zfsonlinux/zfs/issues/3394#issuecomment-464270494>
[02:40:41] <zfs> [zfsonlinux/zfs] Iterative ZFS volume migration over network (#3407) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3407#event-2143922981>
[02:41:12] <zfs> [zfsonlinux/zfs] Iterative ZFS volume migration over network (#3407) comment by kpande <https://github.com/zfsonlinux/zfs/issues/3407#issuecomment-464271064>
[02:42:27] <zfs> [zfsonlinux/zfs] txg_sync, z_null_iss and txg_quiesce freezes (#3409) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3409#event-2143924040>
[02:43:46] <zfs> [zfsonlinux/zfs] GRUB (grub-mkconfig) unable to detect ZFS root (#3424) comment by kpande <https://github.com/zfsonlinux/zfs/issues/3424#issuecomment-464271333>
[02:43:51] <zfs> [zfsonlinux/zfs] GRUB (grub-mkconfig) unable to detect ZFS root (#3424) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3424#event-2143924795>
[02:45:14] <zfs> [zfsonlinux/zfs] Add TRIM support (#8419) created by Brian Behlendorf <https://github.com/zfsonlinux/zfs/issues/8419>
[02:45:59] <zfs> [zfsonlinux/zfs] ZFS and LUKS may corrupt LUKS Header: Suggestion for magic header detection. (#3430) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3430#event-2143926056>
[02:46:24] <zfs> [zfsonlinux/zfs] ZFS and LUKS may corrupt LUKS Header: Suggestion for magic header detection. (#3430) comment by kpande <https://github.com/zfsonlinux/zfs/issues/3430#issuecomment-464271577>
[02:51:08] <bunder> wew another trim pr
[02:51:33] <jasonwc> So, it appears that compressratio and refcompressratio don't account for padding due to minimum block size. Matt Ahrens mentioned this before but I wasn't sure if it was ever fixed.
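The gap jasonwc mentions is easy to see with back-of-the-envelope arithmetic: compressratio compares logical to compressed size, but allocation is rounded up to whole ashift-sized sectors, so the on-disk saving can be smaller than the reported ratio. Illustrative numbers only, not ZFS source:

```shell
# A 16 KiB logical block that compresses to 5000 bytes on an
# ashift=12 vdev (4096-byte sectors)
logical=16384
compressed=5000
sector=4096                                 # 1 << ashift

# allocation rounds the compressed size up to whole sectors
alloc=$(( (compressed + sector - 1) / sector * sector ))

# ratios scaled x100 to stay in integer shell arithmetic
reported=$(( logical * 100 / compressed ))  # what compressratio reflects
effective=$(( logical * 100 / alloc ))      # what the pool actually saves

echo "alloc=$alloc reported=$reported effective=$effective"
```

Here 5000 bytes still allocates two 4 KiB sectors, so the block's effective ratio is 2.00x even though the byte-level ratio is about 3.27x.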
[02:51:35] <zfs> [zfsonlinux/zfs] A few zpool attach commands one after the other fail (#3442) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3442#event-2143929571>
[02:51:36] <jasonwc> bunder: Thanks
[02:52:24] <bunder> did i fix it? :)
[02:52:34] <zfs> [zfsonlinux/zfs] System hang (#3445) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3445#event-2143930098>
[02:53:38] <zfs> [zfsonlinux/zfs] After panic, can't mount dataset read/write (#3446) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3446#event-2143930696>
[02:54:03] <zfs> [zfsonlinux/zfs] ZFS soft lockup w/Gluster + KVM/QEMU (#3448) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3448#event-2143930980>
[02:54:57] <jasonwc> bunder: trying now
[02:55:02] <bunder> ah
[02:55:11] <jasonwc> Yes
[02:55:30] <jasonwc> Previously, it wasn't importing my pools. I just reenabled every service, rebooted, and now it's imported
[02:55:38] <bunder> nice
[02:55:42] <jasonwc> and zed is running
[02:55:54] <jasonwc> https://pastebin.com/AdxTH3K9
[02:56:07] <jasonwc> Is there some difference between systemctl enable and systemctl reenable?
[02:57:46] <jasonwc> that's an easy fix :)
[02:58:37] <bunder> no idea, i'm an openrc guy
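For the record, the difference jasonwc asks about can be sketched like this (the systemctl subcommands are real; picking zfs-import-cache.service as the example unit is my choice):

```shell
# 'reenable' removes the unit's existing [Install] symlinks and
# recreates them from the unit file currently on disk; it behaves
# like the two-step form:
systemctl disable zfs-import-cache.service
systemctl enable  zfs-import-cache.service

# equivalent one-step form:
systemctl reenable zfs-import-cache.service
```

That re-creation from the on-disk unit file is why a reenable can help after replacing distro packages with a locally built install.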
[02:59:09] <zfs> [zfsonlinux/zfs] zfs receive of a replicated snapshot that includes a clone results in no error, but exit code 1 (#3479) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3479#event-2143933928>
[02:59:19] <cirdan> yeah sudo apt-get remove --purge --burn-with-fire systemd* on every system I setup :)
[02:59:29] <jasonwc> lol
[02:59:34] <cirdan> i do
[02:59:38] <cirdan> even on raspbian
[03:01:23] <cirdan> seriously, I have no need or desire to fight with that landfill fire
[03:01:24] <bunder> https://www.youtube.com/watch?v=o_AIw9bGogo "the tragedy of systemd" i mean the guy does have some good points
[03:01:41] <bunder> but they need to get the bugs ironed out before i consider trying it
[03:01:53] <cirdan> sysvinit does everything I need it to do and has never failed me, nor caused my system to not boot/reboot, a single time
[03:03:09] <cirdan> no worries with having /var on a dataset, or scroll lock on when I reboot, or "debug" on the kernel line, etc
[03:04:56] <jasonwc> Why did all the distros adopt something that seems universally loathed?
[03:05:05] <bunder> because redhat
[03:05:31] <DeHackEd> because systemd's purpose in life is to make other software depend on it. vis a vis, gnome
[03:05:49] <DeHackEd> unless something changes, your options are ship systemd, or don't ship gnome
[03:05:57] <jasonwc> Ah, true
[03:06:04] <cirdan> jasonwc: it's basically the same story as brexit, as trump, etc
[03:06:07] <bunder> mate still works
[03:06:19] <cirdan> get a few key people to swing the vote
[03:06:26] <jasonwc> People being fooled by ridiculous, absurd promises?
[03:06:26] <zfs> [zfsonlinux/zfs] zpool destroy fails on zpool with suspended IO (#2878) comment by kpande <https://github.com/zfsonlinux/zfs/issues/2878#issuecomment-464273415>
[03:06:28] <cirdan> gnome still works on bsd, iirc
[03:06:29] <DeHackEd> yeah when centos8 comes out I'm seriously considering making a non-systemd variation
[03:06:32] <zfs> [zfsonlinux/zfs] zpool destroy fails on zpool with suspended IO (#2878) closed by kpande <https://github.com/zfsonlinux/zfs/issues/2878#event-2143937786>
[03:06:41] <cirdan> and os x
[03:07:04] <jasonwc> cirdan: It is utterly amazing to me that a large plurality of the UK still thinks Brexit is a good idea, and will benefit the UK.
[03:07:04] * CompanionCube mostly likes the init component of systemd
[03:07:19] <cirdan> jasonwc: same here w//trump
[03:07:21] <jasonwc> Every economic analysis indicates they'll be poorer
[03:07:29] <cirdan> but people are finally realizing they are bent over
[03:07:29] <CompanionCube> jasonwc: said people are delusional
[03:07:34] <jasonwc> Haven't you heard? He's already built the wall!
[03:07:35] <CompanionCube> or have bullshit reasons
[03:07:36] <ptx0> #zfsonlinux-social
[03:07:48] *** nils_ <nils_!~nils_@pdpc/supporter/35for7/nils> has quit IRC (Ping timeout: 252 seconds)
[03:08:35] <cirdan> jasonwc: yeah well he said some serious shit today... and it will not go well in the end
[03:09:02] <bunder> if i knew c better i would port smf to linux
[03:09:25] <cirdan> Sacramento International Airport?
[03:09:35] <cirdan> or Sunset Music Festival
[03:09:49] <zfs> [zfsonlinux/zfs] zpool commands block when a disk goes missing / pool suspends (#3461) comment by kpande <https://github.com/zfsonlinux/zfs/issues/3461#issuecomment-464273703>
[03:09:55] <jasonwc> cirdan: Well, he *did* something of serious import today. However, his 50 minutes of stream-of-consciousness drivel was not important.
[03:09:59] <bunder> https://en.wikipedia.org/wiki/Service_Management_Facility
[03:11:26] <CompanionCube> bunder: can't
[03:11:43] <CompanionCube> rather fundamental bits rely on solaris-specific functionality
[03:12:45] * ptx0 cough
[03:12:47] <ptx0> jasonwc
[03:12:48] <CompanionCube> if you knew C better a better option would be taking SystemXVI and building on that
[03:12:53] <ptx0> stahp
[03:13:05] <cirdan> ptx0 is having a stroke
[03:14:35] <zfs> [zfsonlinux/zfs] zfs send hangs after pressing ctrl-c (#3547) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3547#event-2143942446>
[03:14:39] <zfs> [zfsonlinux/zfs] zfs send hangs after pressing ctrl-c (#3547) comment by kpande <https://github.com/zfsonlinux/zfs/issues/3547#issuecomment-464274097>
[03:14:43] <jasonwc> #8419
[03:14:45] <zfs> [zfs] #8419 - Add TRIM support by behlendorf <https://github.com/zfsonlinux/zfs/issues/8419>
[03:14:49] <jasonwc> Looks like it's actually coming :P
[03:15:42] <bunder> https://github.com/ServiceManager/ServiceManager/blob/master/README.md
[03:15:46] <bunder> dead repo
[03:15:52] <CompanionCube> exactly
[03:16:05] <CompanionCube> why do you think i said take it and build on it?
[03:16:07] <cirdan> so zombify it
[03:16:30] <zfs> [zfsonlinux/zfs] NFS version (#3650) comment by kpande <https://github.com/zfsonlinux/zfs/issues/3650#issuecomment-464274265>
[03:16:37] <zfs> [zfsonlinux/zfs] NFS version (#3650) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3650#event-2143943491>
[03:16:47] <CompanionCube> (if i was the one doing it I'd also swap out SunRPC for something better)
[03:18:15] <jasonwc> There appear to be some issues outstanding re: autotrim but is the manual TRIM feature fine?
[03:19:04] <zfs> [zfsonlinux/zfs] Very heavy memory manager activity whenever copying files after snapshot/send/scrub (#3661) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3661#event-2143944704>
[03:19:43] <zfs> [zfsonlinux/zfs] Apparent AIO related crash in CentOS6 doing ZFS send on SCST host (#3664) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3664#event-2143945031>
[03:20:11] <zfs> [zfsonlinux/zfs] cannot mount zfs in rw mode. endless rcu_sched warnings (#3670) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3670#event-2143945241>
[03:20:34] *** nils_ <nils_!~nils_@pdpc/supporter/35for7/nils> has joined #zfsonlinux
[03:20:38] <zfs> [zfsonlinux/zfs] Null pointer dereference during import under heavy load. (#3674) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3674#event-2143945465>
[03:21:22] <zfs> [zfsonlinux/zfs] rsync causes ZoL to use all memory until system crashes STILL :( (#3677) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3677#event-2143945853>
[03:22:10] <cirdan> jasonwc: non-reproducible when running autotrim or manual TRIM independently
[03:22:23] <zfs> [zfsonlinux/zfs] uneven io distribution (#3686) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3686#event-2143946296>
[03:23:37] <zfs> [zfsonlinux/zfs] Extremely slow zvol nodes creation when pool is resilvering (or repairing) (#3682) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3682#event-2143946880>
[03:23:57] <cirdan> i dont understand
[03:24:51] <zfs> [zfsonlinux/zfs] arc_adapt makes cpu load go 150+ (#3697) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3697#event-2143947471>
[03:25:57] <jasonwc> cirdan: So, it's only a problem when both are used?
[03:26:29] <jasonwc> so, you get these annoying alerts in Canada as well - what better way to be woken up in the middle of the night - https://www.theguardian.com/world/2019/feb/15/riya-rajkumar-canada-alert-complaints-public
[03:27:11] <bunder> nope, i removed broadcastcellreceiver after test day
[03:27:21] <bunder> almost threw my phone at the wall when it went off
[03:27:27] <zfs> [zfsonlinux/zfs] Recurring ZFS deadlock; zfs_iput_taskq stuck at 100% for minutes (#3687) comment by kpande <https://github.com/zfsonlinux/zfs/issues/3687#issuecomment-464275168>
[03:27:35] <zfs> [zfsonlinux/zfs] Recurring ZFS deadlock; zfs_iput_taskq stuck at 100% for minutes (#3687) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3687#event-2143948888>
[03:29:02] <bunder> if ww3 breaks out, i'm sure i'll hear my neighbours screaming
[03:30:57] <zfs> [zfsonlinux/zfs] Mount a snap of zpool ZFS volume on Solaris 11.2 fails with new version 35 (#3704) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3704#event-2143950547>
[03:31:05] <ptx0> ^ srsly
[03:31:08] <ptx0> a solaris issue?
[03:31:36] <bunder> if it were v28 i'd say okay
[03:32:17] <zfs> [zfsonlinux/zfs] Using normalization=normKD, filenames become unusable if there is a conflict during their creation (#3707) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3707#event-2143951055>
[03:32:58] <zfs> [zfsonlinux/zfs] receive with forced rollback deletes needed fromsnap (#3729) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3729#event-2143951279>
[03:36:55] <RoyK> how far is raid vdev expansion from entering linux now?
[03:37:49] <zfs> [zfsonlinux/zfs] write() might fail with EFBIG for no "good reason" (#3731) comment by kpande <https://github.com/zfsonlinux/zfs/issues/3731#issuecomment-464275946>
[03:39:00] <zfs> [zfsonlinux/zfs] (DDT) ZAP is inefficient when ashift=12? (#3732) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3732#event-2143953628>
[03:39:21] <bunder> i'm not seeing a pr
[03:39:55] <jasonwc> RoyK: It's still in development; Matt Ahrens is working on it. I believe he was supposed to provide an update in Oct 2018, but I haven't heard anything.
[03:40:02] <jasonwc> It certainly won't make it into 0.8
[03:40:24] <jasonwc> Beyond that, I don't think anyone knows.
[03:40:33] <DHE> wtf... how did that solaris ticket not get stomped immediately?
[03:40:57] <jasonwc> RoyK: Given that it's probably the most demanded feature, per Matt Ahrens, I'm sure there will be an update when there is news.
[03:41:13] <jasonwc> RoyK: There are monthly developer conference calls with updates on OpenZFS that are public.
[03:41:39] <jasonwc> RoyK: https://docs.google.com/document/d/1w2jv2XVYFmBVvG1EGf-9A5HBVsjAYoLIFZAnWHhV-BM/edit?ts=5bb3b66c#
[03:41:42] <bunder> AllanJude: any news on raidz expansion on fbsd?
[03:42:02] <zfs> [zfsonlinux/zfs] atomic operations hurt performance on large-scale NUMA systems (#3752) comment by kpande <https://github.com/zfsonlinux/zfs/issues/3752#issuecomment-464276224>
[03:42:39] <jasonwc> bunder: I asked a few days ago. He just said that Matt was really busy.
[03:42:58] <bunder> ah okay
[03:43:23] <bunder> i mean ix was sponsoring it, but google isn't turning up much other than reddit threads
[03:43:24] <ptx0> matt's on vacation
[03:43:54] <zfs> [zfsonlinux/zfs] ZVol with master branch source 2015-09-06 and kernel 4.1.6 comes to grinding halt. (#3754) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3754#event-2143955480>
[03:44:34] <CompanionCube> ptx0: inb4 same vacation spot as brady
[03:44:59] <zfs> [zfsonlinux/zfs] Log/Cache on separate drives are not set to noop. (#3761) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3761#event-2143955854>
[03:45:21] <bunder> actually its funny, i saw brady on github the other day
[03:45:33] <ptx0> were you there buying a sandwich or something
[03:45:36] <jasonwc> bunder: I was searching for updates but found nothing recent. Matt has given several talks about the design. I think he has a working demo that does the expansion in a single TXG
[03:46:12] <jasonwc> *working code
[03:46:17] <bunder> maybe it got delayed with the changes to removal/remap
[03:46:48] <DHE> conceptually it's not that hard. I think the tricky part is reshaping the existing data in a crash-safe way, and you have all this new space available to use. :)
[03:48:03] <bunder> i forget, does it let you just add disks, or can you change a z2 into a z3 with it?
[03:48:07] <DHE> no
[03:48:11] <DHE> add disks only
[03:48:21] <bunder> needs improvement :P
[03:48:36] <DHE> it's already an improvement over what we have now. quit yer whining
[03:48:39] <DHE> :)
[03:48:48] <zfs> [zfsonlinux/zfs] Process 'zfs_unlinked_drain' asynchronously on mount (#3814) comment by kpande <https://github.com/zfsonlinux/zfs/issues/3814#issuecomment-464276739>
[03:51:09] <zfs> [zfsonlinux/zfs] Process 'zfs_unlinked_drain' asynchronously on mount (#3814) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3814#event-2143958282>
[03:51:09] <jasonwc> As Matt said in his talk, you still have to do *some* planning
[03:51:33] <ptx0> hey DHE can you look at that 3814 and 8142
[03:51:34] <jasonwc> Being able to expand a raidz vdev by adding individual disks rather than adding new top-level vdevs will make ZFS a lot more attractive for home storage use
[03:51:37] <ptx0> see if it resolves your concerns
[03:52:47] <cirdan> jasonwc: yeah it can be rough to recreate a larger pool to add a drive
[03:52:54] <bunder> i'm already planning to use all 12 drive bays in my threadripper
[03:53:26] <DeHackEd> ptx0: way ahead of you
[03:53:32] <zfs> [zfsonlinux/zfs] Possible issue with LZJB (#3831) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3831#event-2143959666>
[03:53:36] <zfs> [zfsonlinux/zfs] Process 'zfs_unlinked_drain' asynchronously on mount (#3814) comment by DeHackEd <https://github.com/zfsonlinux/zfs/issues/3814#issuecomment-464277102>
[03:53:47] <ptx0> DeHackEd: can you open a new issue for that
[03:53:53] <DeHackEd> 3814 should be good to be closed
[03:53:56] <DeHackEd> yeah that's probably best
[03:54:00] <DeHackEd> let me check there's no such thing already
[03:54:00] <ptx0> thanks
[03:54:05] <ptx0> yeah good idea, lole
[03:54:16] * ptx0 will just have to close it in 4 days when he finally makes it that far
[03:54:33] <DeHackEd> going in chronological/issue# order?
[03:54:34] <CompanionCube> are you doing a full stale issue sweep?
[03:57:22] <RoyK> jasonwc: thanks
[03:57:50] <jasonwc> wow, you've closed a bunch today
[03:59:18] <zfs> [zfsonlinux/zfs] Feature: new dataset flag 'chroot' - don't mount sub-datasets automatically (#3098) comment by Matthew Thode <https://github.com/zfsonlinux/zfs/issues/3098#issuecomment-464277552>
[03:59:35] <prometheanfire> heh, going through all the old bugs
[04:01:31] <cirdan> bunder: yeah but for reasons I have multiple pools
[04:01:54] <cirdan> mostly cause I can't afford to replace 16 drives at a time
[04:03:29] <zfs> [zfsonlinux/zfs] Feature: new dataset flag 'chroot' - don't mount sub-datasets automatically (#3098) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3098#event-2143963484>
[04:03:44] <zfs> [zfsonlinux/zfs] Feature: new dataset flag 'chroot' - don't mount sub-datasets automatically (#3098) comment by kpande <https://github.com/zfsonlinux/zfs/issues/3098#issuecomment-464277877>
[04:04:00] <ptx0> prometheanfire: fwiw i find noauto to be more convenient
[04:04:08] <ptx0> also yes i am going through all of them
[04:04:24] <ptx0> i closed about 50 issues and categorised about 100
[04:04:29] <cirdan> noauto and canmount are handy
[04:05:10] <DeHackEd> for some reason previewing in github doesn't work anymore so I hope this is formatted okay
[04:05:16] <zfs> [zfsonlinux/zfs] Feature: ZAP data structure to support shrinking, convert to Microzap (#8420) created by DeHackEd <https://github.com/zfsonlinux/zfs/issues/8420>
[04:05:23] <DeHackEd> woah that went very wrong.
[04:05:35] <DeHackEd> oh fuck off, I can't even edit it...
[04:05:54] <cirdan> doom
[04:06:32] <ptx0> fixed
[04:06:45] <DeHackEd> thanks
[04:06:50] <jasonwc> Preview no longer works?
[04:06:55] <prometheanfire> ptx0: eh, I prefer the certainty of canmount
[04:07:03] <DeHackEd> nope. there's a bunch of javascript things that just don't work lately
[04:07:12] <ptx0> prometheanfire: it is canmount=noauto
[04:07:25] <ptx0> prometheanfire: zfs mount -a won't do it but zfs mount <foo> will
[04:07:35] <DeHackEd> eg: selecting the "author" filter on the main issue/PR list, or closing those little popups that show up all over the place because github thinks we're retards
[04:07:42] <ptx0> i do it so i can manually access backups
[04:07:56] <ptx0> works here
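[editor's note] The canmount=noauto behavior ptx0 and prometheanfire are comparing can be sketched as below; the pool/dataset names are placeholders, not from the log:

```shell
# Mark a backup dataset so bulk mounts skip it.
# With canmount=noauto the dataset keeps its mountpoint but is not
# mounted by `zfs mount -a` (or at boot); an explicit mount still works.
zfs set canmount=noauto tank/backups   # 'tank/backups' is a placeholder

zfs mount -a             # leaves tank/backups unmounted
zfs mount tank/backups   # mounts it explicitly, e.g. to access backups
zfs umount tank/backups  # unmount again when done
```

This is the middle ground between canmount=on (always mounted by `mount -a`) and canmount=off (never mountable), which is why ptx0 prefers it over toggling the property as prometheanfire does.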
[04:08:39] <prometheanfire> if I need access I'll change the property
[04:08:59] <ptx0> :P
[04:09:08] <ptx0> my gd script kept overwriting it when i did
[04:09:54] <ptx0> zfs in 2014 was an interesting era
[04:10:17] * ptx0 remembers how scary it was
[04:11:33] * DeHackEd had 8 production servers running ZFS in 2014
[04:11:44] <CompanionCube> what was the scariest part?
[04:11:51] <DeHackEd> no ABD
[04:12:02] <DeHackEd> or maybe it was just in the early initial PR states...
[04:12:04] <CompanionCube> but that's not unique to 2014
[04:12:30] <CompanionCube> wasn't it only merged in 2017 or so?
[04:12:40] <ptx0> zvol really sucked, memory blew
[04:12:49] <DeHackEd> ABD made the 0.7.0 release..
[04:12:52] <ptx0> xattr bugs
[04:13:07] <PMT> DeHackEd: here i was hoping you might have been offering a PR with it implemented
[04:14:34] <CompanionCube> DeHackEd: ah, ABD made an rc in january 2017 which is likely why i remember that
[04:16:23] <jasonwc> DeHackEd: oh, we had this conversation before. Those aspects of Github still work for me. Must be a browser issue. I'm on the latest stable Chrome release.
[04:16:42] <zfs> [zfsonlinux/zfs] Feature: ZAP data structure to support shrinking, convert to Microzap (#8420) comment by Rich Ercolani <https://github.com/zfsonlinux/zfs/issues/8420#issuecomment-464278778>
[04:16:59] <DeHackEd> jasonwc: firefox 52 (last ESR that didn't suck)
[04:17:11] <jasonwc> yup, that's why it doesn't work, heh
[04:17:37] <DeHackEd> if javascript changes this rapidly that it breaks this easily, the W3C has fucked up
[04:17:55] <CompanionCube> >implying the W3C's the one doing the fucking
[04:18:04] * CompanionCube is happy with current firefox
[04:18:42] <DeHackEd> the latest ESR (which is what my distro ships) has said "fuck java (sorry, I still need it) and fuck all your existing extensions". so I'm not upgrading.
[04:21:08] <ptx0> i use firefox 69
[04:21:18] <bunder> but 52 has vulnerabilities
[04:21:54] <DeHackEd> it's 52.8 (?) which has the security fixes equivalent to just shy of the next major ESR release.
[04:25:47] <zfs> [zfsonlinux/zfs] ZFS Linux/illumos functionality differences (#3851) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3851#event-2143971482>
[04:25:53] <zfs> [zfsonlinux/zfs] ZFS Linux/illumos functionality differences (#3851) comment by kpande <https://github.com/zfsonlinux/zfs/issues/3851#issuecomment-464279341>
[04:26:05] *** Markow <Markow!~ejm@176.122.215.103> has quit IRC (Quit: Leaving)
[04:27:16] <zfs> [zfsonlinux/zfs] zfs very slow when removing files since upgrade to 0.6.5.2 (CentOS 6.6) (#3870) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3870#event-2143972175>
[04:28:18] <zfs> [zfsonlinux/zfs] dracut should verbatim import from cachefile (#3876) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3876#event-2143972495>
[04:29:16] <zfs> [zfsonlinux/zfs] High zvol utilization - read performance (#3888) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3888#event-2143972800>
[04:29:17] <bunder> 52.8.1 has 188 and 52.9.0 has 36
[04:29:23] <zfs> [zfsonlinux/zfs] High zvol utilization - read performance (#3888) comment by kpande <https://github.com/zfsonlinux/zfs/issues/3888#issuecomment-464279549>
[04:36:40] <DeHackEd> naturally...
[04:36:52] <DeHackEd> is there an actual list somewhere?
[04:36:59] <zfs> [zfsonlinux/zfs] UNAVAIL cache device (#3877) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3877#event-2143975568>
[04:42:04] <zfs> [zfsonlinux/zfs] "Importing ZFS pool xyz Out of memory" crash at boot. (#3863) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3863#event-2143977620>
[04:48:43] <zfs> [zfsonlinux/zfs] CentOS7 download.zfsonlinux.org missing files for 7.6 (#8412) comment by Alan Latteri <https://github.com/zfsonlinux/zfs/issues/8412#issuecomment-464282085>
[04:52:37] <zfs> [zfsonlinux/zfs] Unable to import zpool with corrupted SPA history (#3889) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3889#event-2143982537>
[04:53:56] <zfs> [zfsonlinux/zfs] zfs send -R -i does not fail if source snapshot does not exist (#3894) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3894#event-2143983083>
[04:55:27] <zfs> [zfsonlinux/zfs] zfs send -R -i does not fail if source snapshot does not exist (#3894) comment by kpande <https://github.com/zfsonlinux/zfs/issues/3894#issuecomment-464283154>
[04:55:32] *** elxa <elxa!~elxa@2a01:5c0:e09b:bee1:e0a7:af10:af35:bf6e> has quit IRC (Ping timeout: 258 seconds)
[04:56:44] <zfs> [zfsonlinux/zfs] High Load Average and IO hangs on file operation (copy) (#3897) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3897#event-2143984414>
[04:56:56] <bunder> i was using one of those cve pages, one sec
[04:56:59] <zfs> [zfsonlinux/zfs] update from zfs 0.6.4.2 to zfs 0.6.5.1,random I/O performance not improved (#3898) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3898#event-2143984529>
[04:57:17] <zfs> [zfsonlinux/zfs] performance regression from 0.6.4.2 to 0.6.5.2 (#3902) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3902#event-2143984679>
[04:57:39] <zfs> [zfsonlinux/zfs] txg_sync, zfs blocked for more than 120s on debian jessie/zfs 0.6.5.2-2 (#3903) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3903#event-2143984845>
[04:57:47] <bunder> https://www.cvedetails.com/version-list/452/3264/1/Mozilla-Firefox.html
[04:59:13] <zfs> [zfsonlinux/zfs] Decreasing zfs_arc_max with l2arc prevents arc_reclaim to finish (#3926) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3926#event-2143985517>
[05:00:40] <zfs> [zfsonlinux/zfs] zfs stuck (#3947) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3947#event-2143985988>
[05:03:01] <bunder> Manually dragging and dropping an Outlook email message into the browser will trigger a page navigation when the message's mail columns are incorrectly interpreted as a URL
[05:03:10] *** MilkmanDan <MilkmanDan!~dan@wilug/expat/MilkmanDan> has quit IRC (Ping timeout: 250 seconds)
[05:03:15] <bunder> i'm not sure how that's a vulnerability, but okay
[05:03:58] <zfs> [zfsonlinux/zfs] 0.7.1 Input/output error from /bin/ls (#6489) comment by kpande <https://github.com/zfsonlinux/zfs/issues/6489#issuecomment-464283894>
[05:04:03] *** MilkmanDan <MilkmanDan!~dan@wilug/expat/MilkmanDan> has joined #zfsonlinux
[05:08:09] <PMT> bunder: I mean, the risk is low, but the possible impact may be higher
[05:08:17] *** elxa <elxa!~elxa@2a01:5c0:e080:b281:ff07:d593:8a4b:e5b4> has joined #zfsonlinux
[05:09:29] <bunder> also, shouldn't that be a windows OLE bug not firefox
[05:10:13] <bunder> if i drag a pdf into firefox it opens acrobat
[05:11:35] <zfs> [zfsonlinux/zfs] ZVOL /dev/disk/by-uuid and /dev/disk/by-label symlinks not created on Ubuntu Wily (#3951) comment by kpande <https://github.com/zfsonlinux/zfs/issues/3951#issuecomment-464284727>
[05:11:45] <zfs> [zfsonlinux/zfs] ZVOL /dev/disk/by-uuid and /dev/disk/by-label symlinks not created on Ubuntu Wily (#3951) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3951#event-2143990777>
[05:12:08] <bunder> wily? i don't even recall that release
[05:12:10] <zfs> [zfsonlinux/zfs] Slow read speed on 0.6.5.3 (#3950) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3950#event-2143991101>
[05:12:21] <PMT> bunder: 1704 i think
[05:12:32] <PMT> oh not even, 1510
[05:12:59] <zfs> [zfsonlinux/zfs] Disk going out from raidz pool (#3998) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3998#event-2143991408>
[05:13:11] <zfs> [zfsonlinux/zfs] Processes (LXC container) stuck in 'D' state. (#3980) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3980#event-2143991540>
[05:13:50] <zfs> [zfsonlinux/zfs] Intermittent stalling of ZFS pool (#3979) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3979#event-2143991756>
[05:14:01] <bunder> oh maybe that's why, if its a .10
[05:14:15] <zfs> [zfsonlinux/zfs] Cannot boot or mount root fs (#3977) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3977#event-2143991944>
[05:15:36] <zfs> [zfsonlinux/zfs] High CPU usage by "z_fr_iss" after deleting large files (#3976) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3976#event-2143992463>
[05:15:56] <zfs> [zfsonlinux/zfs] issue #118 was closed but problem is still there (#3975) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3975#event-2143992655>
[05:16:09] <zfs> [zfsonlinux/zfs] INFO: task txg_sync:28248 blocked for more than 180 seconds, when writing to AF drive formatted with ashift=9; pool creation fails [2 different drives] (#3972) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3972#event-2143992792>
[05:17:19] <zfs> [zfsonlinux/zfs] slow ls on large directories (#3967) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3967#event-2143993377>
[05:18:00] <zfs> [zfsonlinux/zfs] Unable to automount bind or aufs filesystems on ZFS at boot. (#3957) comment by kpande <https://github.com/zfsonlinux/zfs/issues/3957#issuecomment-464285559>
[05:18:40] <zfs> [zfsonlinux/zfs] blocked for more than 120 seconds on 0.6.5.2 in KVM VM (#3955) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3955#event-2143994190>
[05:18:55] <zfs> [zfsonlinux/zfs] task txg_sync blocked for more than 120 seconds on 0.6.5.3 (#3952) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3952#event-2143994276>
[05:21:20] <bunder> oh nice new proton with faudio
[05:21:35] <jasonwc> Ubuntu Wily (15.10) wasn't a LTS release. Probably why you forgot about it.
[05:22:07] <zfs> [zfsonlinux/zfs] nex-3165 segregate ddt in arc (#3301) comment by Alek P <https://github.com/zfsonlinux/zfs/issues/3301#issuecomment-464286558>
[05:23:02] <bunder> yeah that too
[05:23:11] <ptx0> LOL
[05:23:18] <ptx0> god one alek
[05:23:21] <ptx0> good*
[05:24:46] <zfs> [zfsonlinux/zfs] nex-3165 segregate ddt in arc (#3301) comment by Alek P <https://github.com/zfsonlinux/zfs/issues/3301#issuecomment-464287128>
[05:28:36] <zfs> [zfsonlinux/zfs] zpool import on a failing vdev causes all further "zpool" commands to fail, even on different pools. (#4038) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4038#event-2143999144>
[05:29:09] <zfs> [zfsonlinux/zfs] Zfs send high cpu utilization (#4036) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4036#event-2143999340>
[05:29:23] <zfs> [zfsonlinux/zfs] loop executing zpool import and zpool export commands leads to kernel crash (#4033) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4033#event-2143999456>
[05:29:52] <zfs> [zfsonlinux/zfs] zpool rw import fails after attempting to destroy old corrupt fs (#4030) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4030#event-2143999613>
[05:30:09] <zfs> [zfsonlinux/zfs] Pool unexpectedly locked up - zfs 0.6.5-32_g256fa98 (#4025) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4025#event-2143999758>
[05:31:40] <zfs> [zfsonlinux/zfs] partial overwrite of metadata holes results in loss of hole birth info (#4023) comment by kpande <https://github.com/zfsonlinux/zfs/issues/4023#issuecomment-464288317>
[05:31:55] <zfs> [zfsonlinux/zfs] Bad page state in process ... with v0.6.5-31_gf3e2a7a (#4015) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4015#event-2144000167>
[05:32:00] <bunder> holes
[05:33:08] <zfs> [zfsonlinux/zfs] libvirt scsi-block generic scsi interface broken (#4012) comment by kpande <https://github.com/zfsonlinux/zfs/issues/4012#issuecomment-464288575>
[05:33:13] <zfs> [zfsonlinux/zfs] libvirt scsi-block generic scsi interface broken (#4012) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4012#event-2144000682>
[05:33:47] <zfs> [zfsonlinux/zfs] task txg_sync:1328 blocked for more than 120 seconds. (#4011) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4011#event-2144001003>
[05:34:17] <zfs> [zfsonlinux/zfs] How do I destroy a hidden and damaged zvol? (#4010) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4010#event-2144001197>
[05:34:27] <zfs> [zfsonlinux/zfs] CentOS7 download.zfsonlinux.org missing files for 7.6 (#8412) comment by Greg Youngblood <https://github.com/zfsonlinux/zfs/issues/8412#issuecomment-464288831>
[05:34:42] <zfs> [zfsonlinux/zfs] degraded zpool can't be destroy or export (#4003) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4003#event-2144001389>
[05:35:32] <zfs> [zfsonlinux/zfs] Ability to add more levels of storage tiers (#4045) comment by kpande <https://github.com/zfsonlinux/zfs/issues/4045#issuecomment-464289059>
[05:35:38] <zfs> [zfsonlinux/zfs] Ability to add more levels of storage tiers (#4045) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4045#event-2144001853>
[05:36:01] <zfs> [zfsonlinux/zfs] Unexplained data corruption as a result of routine disk replacement (#4047) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4047#event-2144002056>
[05:36:22] <zfs> [zfsonlinux/zfs] CentOS7 download.zfsonlinux.org missing files for 7.6 (#8412) comment by Greg Youngblood <https://github.com/zfsonlinux/zfs/issues/8412#issuecomment-464289220>
[05:36:35] <bunder> speaking of nexenta, how come their repo is so old
[05:36:58] <bunder> hasn't been touched in 3 years
[05:37:19] <zfs> [zfsonlinux/zfs] Broken metadata (#4057) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4057#event-2144002784>
[05:37:21] <AllanJude> who wants to try my patch to improve ZFS send size estimation?
[05:37:23] <AllanJude> https://github.com/allanjude/zfs/commit/d6875ab09623f3cc1b67eb77c7c418aee44cae4a
[05:37:43] <AllanJude> for my test case, it improves the accuracy from 23% under actual, to 7% under actual
[05:38:02] <zfs> [zfsonlinux/zfs] ZFS 0.6.5.3-r1. Low speed read again. (#4064) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4064#event-2144003095>
[05:38:23] <zfs> [zfsonlinux/zfs] blocked for more than 120 seconds on 0.6.5.2 in KVM VM (#4065) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4065#event-2144003234>
[05:38:33] <bunder> i would if i was home right now
[05:39:11] <zfs> [zfsonlinux/zfs] Linux 4.2.0: kernel NULL pointer dereference (#4066) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4066#event-2144003560>
[05:39:17] <AllanJude> I feel it is a bit early to create a pull request for it
[05:39:23] <bunder> on visual inspection it looks fine
[05:39:23] <zfs> [zfsonlinux/zfs] Can't use the whole disk (#4074) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4074#event-2144003659>
[05:39:34] <AllanJude> i have a more invasive version
[05:39:52] <AllanJude> that actually counts the objects, based on the way bookmark calculations are done
[05:40:01] <AllanJude> and it could also take -e into consideration
[05:40:18] <AllanJude> as the size estimate is wrong by 512 bytes * # of embedded bps
[05:40:22] <AllanJude> when you use -e
[05:40:30] <AllanJude> but, I imagine that will be much slower
[05:40:34] <AllanJude> since it has to walk all of the bps
[05:40:44] <AllanJude> ptx0: I have considered that
[05:41:08] <AllanJude> but this seems to help a lot
[05:41:12] <ptx0> looks like i've closed 125 issues today
[05:41:14] <AllanJude> although I am unclear why it is still off by 7%
[05:41:18] <zfs> [zfsonlinux/zfs] PANIC at arc.c:1030:hdr_full_dest() (#4091) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4091#event-2144004438>
[05:41:20] <bunder> i see no problem with a full walk, i think i would prefer it rather than fudging numbers
[05:41:22] <AllanJude> ptx0: ahh, is kpande you?
[05:41:25] <ptx0> yep
[05:41:37] <AllanJude> couldn't put a real name (or face) to the github username
[05:41:44] <ptx0> ah
[05:41:54] <bunder> doesn't help that he has like six github accounts
[05:42:08] <AllanJude> yeah, the kpande one is basically only zol
[05:42:38] <ptx0> it used to be other things but i was sued :P
[05:43:25] <ptx0> now i work on projects mostly anonymously.
[05:45:06] <AllanJude> one of the co-founders of FreeBSD had a similar situation
[05:45:26] <AllanJude> worked as a lumberjack for quite a few years, waiting for non-competes etc from the settlements to wear off
[05:48:46] <ptx0> met him in Chemainus, BC
[05:48:54] <ptx0> just a coincidence at a laundromat
[05:48:59] <ptx0> that was a fun day.
[05:49:29] <ptx0> maybe the same person anyway, can't imagine too many people have the same story of being one of the founders etc
[05:50:18] <AllanJude> Rod Grimes
[05:50:38] <AllanJude> he finally re-joined the project about 3 years ago
[05:51:14] <ptx0> oh, oops, the person i knew was from openbsd
[05:51:16] <ptx0> :P
[05:51:38] <ptx0> so open source hoodlums get sued a lot eh
[05:52:37] <bunder> that guy who threatened to sue me still hasn't pushed his const patch yet
[05:53:07] <ptx0> just wait til i get to his issue and close it for inactivity
[05:53:21] * ptx0 jk
[05:53:27] <ptx0> maybe ;)
[05:53:50] <bunder> not sure he has much of a leg to stand on if the pr is closed and never got merged
[06:02:38] <zfs> [zfsonlinux/zfs] zfs and sharesmb -- are my shares being created twice? (#4125) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4125#event-2144011322>
[06:04:16] <zfs> [zfsonlinux/zfs] It would be nice to have a 'safe mode' for zfs and zpool commands (#4134) comment by kpande <https://github.com/zfsonlinux/zfs/issues/4134#issuecomment-464291382>
[06:05:25] <bunder> could probably close that one
[06:07:21] <AllanJude> snapshot with a hold
[06:07:26] <AllanJude> is how I prevent such foot shooting
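[editor's note] AllanJude's hold technique, sketched with placeholder names — a snapshot with an active hold cannot be destroyed until every hold is released:

```shell
# Take a snapshot and place a named hold on it.
zfs snapshot tank/data@keep          # 'tank/data' is a placeholder
zfs hold important tank/data@keep    # 'important' is an arbitrary tag

# An accidental destroy now fails ("dataset is busy") instead of
# silently deleting the snapshot.
zfs destroy tank/data@keep

# Release the hold only when the snapshot may really go away.
zfs release important tank/data@keep
zfs destroy tank/data@keep
zfs holds tank/data@keep             # lists any remaining holds
```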
[06:08:36] <AllanJude> also, zfs's entire CLI needs an overhaul in relation to -f, it is overloaded in too many places, and there should be flags for each 'case'. Like I want to 'zpool import --ignore-hostid', rather than -f, since -f means 'ignore these 6 different causes for failure'
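[editor's note] The contrast AllanJude is drawing, as a sketch — note that `--ignore-hostid` is his proposed flag, not an existing zpool option:

```shell
# Today: one -f conflates several distinct overrides, including the
# "pool was previously in use from another system" (hostid) check.
zpool import -f tank

# Proposed: an explicit flag per failure cause, e.g. (hypothetical):
zpool import --ignore-hostid tank
```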
[06:11:34] <bunder> holds are pretty new
[06:12:01] <PMT> it's true, it could use a significant rework.
[06:21:48] <zfs> [zfsonlinux/zfs] CentOS7 download.zfsonlinux.org missing files for 7.6 (#8412) comment by Greg Youngblood <https://github.com/zfsonlinux/zfs/issues/8412#issuecomment-464293692>
[06:21:53] <zfs> [zfsonlinux/zfs] CentOS7 download.zfsonlinux.org missing files for 7.6 (#8412) closed by Greg Youngblood <https://github.com/zfsonlinux/zfs/issues/8412#event-2144018513>
[06:23:54] *** jasonwc <jasonwc!~jasonwc@pool-72-66-15-203.washdc.fios.verizon.net> has quit IRC ()
[06:24:04] <bunder> oh crap i didn't even notice that the whole time
[06:25:57] <bunder> trying to download a 7.4 rpm from the 7.6 directory
[06:51:39] <zfs> [zfsonlinux/zfs] send -Dc causes corruption when received (#8421) created by implr <https://github.com/zfsonlinux/zfs/issues/8421>
[06:58:10] <bunder> oh boy
[07:02:34] <zfs> [zfsonlinux/zfs] send -Dc causes corruption when received (#8421) comment by bunder2015 <https://github.com/zfsonlinux/zfs/issues/8421#issuecomment-464298370>
[07:04:22] <bunder> pcd: you might want to see this :)
[07:36:07] <Shinigami-Sama> I think you mean ptx0
[07:36:34] *** oblikoamorrale <oblikoamorrale!~ami@pdpc/supporter/active/oblikoamorale> has joined #zfsonlinux
[07:36:54] <bunder> no
[07:37:04] <bunder> pcd wants to remove send dedup
[07:37:18] <bunder> #7887
[07:37:20] <zfs> [zfs] #7887 - Deprecate dedup send/receive <https://github.com/zfsonlinux/zfs/issues/7887>
[07:37:32] <Shinigami-Sama> everyone wants to remove dedup
[07:38:47] *** oblikoamorale <oblikoamorale!~ami@pdpc/supporter/active/oblikoamorale> has quit IRC (Ping timeout: 240 seconds)
[07:38:57] *** oblikoamorrale is now known as oblikoamorale
[07:39:15] <bunder> because it sucks :P
[07:47:19] <CompanionCube> ptx0: inb4 get github bot to close stale issues
[07:53:31] <zfs> [zfsonlinux/zfs] MMP Import spins in z_null_int (#8378) closed by Olaf Faaland <https://github.com/zfsonlinux/zfs/issues/8378#event-2144049780>
[07:53:33] <zfs> [zfsonlinux/zfs] MMP Import spins in z_null_int (#8378) comment by Olaf Faaland <https://github.com/zfsonlinux/zfs/issues/8378#issuecomment-464301663>
[07:53:39] <zfs> [zfsonlinux/zfs] MMP Import spins in z_null_int (#8378) reopened by Olaf Faaland <https://github.com/zfsonlinux/zfs/issues/8378#event-2144049834>
[07:53:51] <zfs> [zfsonlinux/zfs] MMP Import spins in z_null_int (#8378) comment by Olaf Faaland <https://github.com/zfsonlinux/zfs/issues/8378#issuecomment-464301683>
[07:56:56] <bunder> lol twice in a row too
[08:01:19] <AllanJude> Shinigami-Sama: there is dedup send, which is unrelated to normal dedup
[08:07:01] *** AllanJude <AllanJude!ajude@freebsd/developer/AllanJude> has quit IRC (Remote host closed the connection)
[08:15:03] *** IonTau <IonTau!~IonTau@203-206-42-171.dyn.iinet.net.au> has quit IRC (Remote host closed the connection)
[08:20:05] <zfs> [zfsonlinux/zfs] performance regression from 0.6.4.1 to 0.6.5.3 (iops) (#4135) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4135#event-2144058456>
[08:21:38] <zfs> [zfsonlinux/zfs] Volume quota not enforced over NFS (#4148) comment by kpande <https://github.com/zfsonlinux/zfs/issues/4148#issuecomment-464303711>
[08:27:40] <zfs> [zfsonlinux/zfs] vdev_config_sync can't guarantee the state on disk is still transactionally consistent (#4162) comment by kpande <https://github.com/zfsonlinux/zfs/issues/4162#issuecomment-464304795>
[08:33:19] <zfs> [zfsonlinux/zfs] zpool import complains about missing log device, suggests -m, then imports with the missing device anyways (#4168) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4168#event-2144063047>
[08:36:48] *** pR0Ps <pR0Ps!~pR0Ps@24.140.236.114> has quit IRC (Ping timeout: 250 seconds)
[08:40:04] *** pR0Ps <pR0Ps!~pR0Ps@104-222-122-23.cpe.teksavvy.com> has joined #zfsonlinux
[08:43:51] <zfs> [zfsonlinux/zfs] zfs send -R -i does not fail if source snapshot does not exist (#3894) comment by loli10K <https://github.com/zfsonlinux/zfs/issues/3894#issuecomment-464307014>
[08:43:57] <zfs> [zfsonlinux/zfs] zfs send -R -i does not fail if source snapshot does not exist (#3894) reopened by loli10K <https://github.com/zfsonlinux/zfs/issues/3894#event-2144066383>
[08:46:10] <ptx0> jeez
[08:46:13] <ptx0> send -R is fuckin broken.
[08:50:13] <bunder> i don't think i've ever used it
[08:50:44] <zfs> [zfsonlinux/zfs] [WIP] Prototype for systemd and fstab integration (#4943) comment by kpande <https://github.com/zfsonlinux/zfs/issues/4943#issuecomment-464307863>
[08:52:39] <bunder> i wonder what's in that 12.6kb
[08:54:12] <bunder> if there's nothing to send it should be zero
[09:00:51] *** cheet <cheet!~cheet@modemcable202.6-59-74.mc.videotron.ca> has quit IRC (Quit: ZNC 1.8.x-nightly-20190128-91af796c - https://znc.in)
[09:01:14] <zfs> [zfsonlinux/zfs] Low performance when zpool is based on iSCSI disk based on zvol/zfs (#4211) comment by kpande <https://github.com/zfsonlinux/zfs/issues/4211#issuecomment-464309523>
[09:01:22] <zfs> [zfsonlinux/zfs] Low performance when zpool is based on iSCSI disk based on zvol/zfs (#4211) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4211#event-2144072443>
[09:02:02] <zfs> [zfsonlinux/zfs] zpool lock down with hundred of zfs snapshot stuck during resilvering (#4226) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4226#event-2144072665>
[09:02:18] <zfs> [zfsonlinux/zfs] zfs send -R -i does not fail if source snapshot does not exist (#3894) comment by bunder2015 <https://github.com/zfsonlinux/zfs/issues/3894#issuecomment-464309664>
[09:03:54] <zfs> [zfsonlinux/zfs] zfs send stuck on 0.6.5.4-1~vivid (kernel 3.19.0-43-generic) (#4229) comment by kpande <https://github.com/zfsonlinux/zfs/issues/4229#issuecomment-464309983>
[09:08:26] <bunder> its like its ignoring the first snapshot and sending everything from beginning to snap2
[09:08:34] <ptx0> yes it is
[09:08:56] <zfs> [zfsonlinux/zfs] zfs send -R -i does not fail if source snapshot does not exist (#3894) comment by loli10K <https://github.com/zfsonlinux/zfs/issues/3894#issuecomment-464310969>
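For context, the #3894 behaviour being discussed can be sketched roughly like this (pool, dataset, and snapshot names are hypothetical, not from the log):

```sh
# Hedged reproduction sketch of "zfs send -R -i does not fail if source
# snapshot does not exist" (#3894). tank/test, snap1, snap2 are placeholders.
zfs create tank/test
zfs snapshot -r tank/test@snap2   # note: @snap1 is deliberately never created
zfs send -R -i tank/test@snap1 tank/test@snap2 > /tmp/stream
# Expected: the command should fail because tank/test@snap1 does not exist.
# Per the issue, it instead produces a stream covering everything from the
# beginning up to @snap2 -- i.e. the missing source snapshot is ignored.
```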
[09:09:18] <zfs> [zfsonlinux/zfs] raidz2 writes more than expected to disk (#4253) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4253#event-2144075355>
[09:10:01] <zfs> [zfsonlinux/zfs] ARC exhausted when destroy a snapshot who have a huge deadlist (#4254) comment by kpande <https://github.com/zfsonlinux/zfs/issues/4254#issuecomment-464311152>
[09:15:37] <bunder> i wonder if that's what happened to your enc pool
[09:16:46] <bunder> i could try receiving the snap but i'd want to be home so i can reboot my laptop if it crashes
[09:32:02] <zfs> [zfsonlinux/zfs] ARC exhausted when destroy a snapshot who have a huge deadlist (#4254) comment by GeLiXin <https://github.com/zfsonlinux/zfs/issues/4254#issuecomment-464314892>
[09:36:24] <zfs> [zfsonlinux/zfs] ZVOLs 3X slower than recordsets at same recordsize/volblocksize for sequential writes (#4265) comment by kpande <https://github.com/zfsonlinux/zfs/issues/4265#issuecomment-464315597>
[09:41:15] *** catalase <catalase!Elite21895@gateway/shell/elitebnc/x-xbmtveemaawuoogv> has quit IRC (Ping timeout: 252 seconds)
[10:10:07] <zfs> [zfsonlinux/zfs] zpool remove <pool> <log device> returns 'pool already exists' (#4270) comment by kpande <https://github.com/zfsonlinux/zfs/issues/4270#issuecomment-464320892>
[10:11:26] <zfs> [zfsonlinux/zfs] IO accounting available in /proc/*/io, but not cgroup-level blkio accounting (#4275) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4275#event-2144097017>
[10:17:16] <zfs> [zfsonlinux/zfs] Renaming a zvol with partition(s) does not move/rename the ?-partX? entries in /dev/zvol. (#4282) comment by kpande <https://github.com/zfsonlinux/zfs/issues/4282#issuecomment-464322062>
[10:18:57] <zfs> [zfsonlinux/zfs] zpool reports 16E expandsize on disks with oddball number of sectors (#8391) comment by loli10K <https://github.com/zfsonlinux/zfs/issues/8391#issuecomment-464322351>
[10:20:39] <zfs> [zfsonlinux/zfs] wrong initramfs is created/updated by dkms (#4342) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4342#event-2144100217>
[10:21:06] <zfs> [zfsonlinux/zfs] I/O errors since dataset change to sync=always (#4340) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4340#event-2144100324>
[10:21:45] <zfs> [zfsonlinux/zfs] libzfs_impl.h -> struct zpool_handle: zpool_hdl should be zfs_hdl (#4336) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4336#event-2144100532>
[10:22:15] <zfs> [zfsonlinux/zfs] kernel message followed by low write performance with 0.6.4-1 (#4335) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4335#event-2144100704>
[10:23:52] <zfs> [zfsonlinux/zfs] "zpool import" hangs (#4322) comment by kpande <https://github.com/zfsonlinux/zfs/issues/4322#issuecomment-464323116>
[10:24:21] <zfs> [zfsonlinux/zfs] Temporary freeze copying lots of small files (#4320) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4320#event-2144101460>
[10:25:43] <zfs> [zfsonlinux/zfs] txg_sync hung task and deadlock when trying to shrink ARC (#4319) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4319#event-2144102002>
[10:26:18] <zfs> [zfsonlinux/zfs] Unbalanced disk use and disk free space (#4310) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4310#event-2144102183>
[10:27:40] <zfs> [zfsonlinux/zfs] Sudden write() + fsync() performance drop (#4305) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4305#event-2144102589>
[10:29:33] <zfs> [zfsonlinux/zfs] failure on resliver (#4299) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4299#event-2144103280>
[10:31:12] <zfs> [zfsonlinux/zfs] PANIC at fnvpair.c:68:fnvlist_size() when having 3 disks in a mirror, 1x1tb and 2x2tb, goes away when 1tb drive is removed AND 2x2tb expanded (#4411) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4411#event-2144103819>
[10:34:36] <zfs> [zfsonlinux/zfs] very slow zfs receive (zfs process consumes too much cpu) with hundreds of recursive datasets and dozens of snapshots for each dataset (#4395) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4395#event-2144104958>
[10:35:58] <zfs> [zfsonlinux/zfs] hang during zpool import - GPF in dmesg (#4389) comment by kpande <https://github.com/zfsonlinux/zfs/issues/4389#issuecomment-464324784>
[10:38:03] <zfs> [zfsonlinux/zfs] Crash during scrub (#4383) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4383#event-2144106121>
[10:38:58] <zfs> [zfsonlinux/zfs] Panic while trying to import pool (0.6.5.4). (#4380) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4380#event-2144106485>
[10:39:39] <zfs> [zfsonlinux/zfs] null pointer deref in spa_config/generate (0.6.5.4 / 3.19.8) (#4378) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4378#event-2144106691>
[10:42:30] <zfs> [zfsonlinux/zfs] task txg_sync: blocked for more than 120 seconds (#4361) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4361#event-2144107696>
[10:43:02] <zfs> [zfsonlinux/zfs] task txg_sync: blocked for more than 120 seconds (#4361) comment by kpande <https://github.com/zfsonlinux/zfs/issues/4361#issuecomment-464325673>
[10:43:42] <zfs> [zfsonlinux/zfs] Unable to clear metadata fault (#4360) comment by kpande <https://github.com/zfsonlinux/zfs/issues/4360#issuecomment-464325772>
[10:43:47] <zfs> [zfsonlinux/zfs] Unable to clear metadata fault (#4360) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4360#event-2144108111>
[10:44:36] <bunder> osnap
[10:50:44] <zfs> [zfsonlinux/zfs] high dbu_evict CPU usage during snapshot deletion (#4462) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4462#event-2144110590>
[10:50:56] <zfs> [zfsonlinux/zfs] high dbu_evict CPU usage during snapshot deletion (#4462) comment by kpande <https://github.com/zfsonlinux/zfs/issues/4462#issuecomment-464327073>
[10:50:57] *** LeoTh3o <LeoTh3o!~th3o@phoxden.net> has quit IRC (Quit: Leaving)
[10:51:09] *** LeoTh3o <LeoTh3o!~th3o@phoxden.net> has joined #zfsonlinux
[10:51:25] <zfs> [zfsonlinux/zfs] txg_quiesce_thread crash on contentious ZVOL IO abuse - taskq_thread_dynamic related (#4464) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4464#event-2144110845>
[10:53:07] *** MasterPiece <MasterPiece!~masterpie@unaffiliated/masterpiece> has joined #zfsonlinux
[10:55:40] <zfs> [zfsonlinux/zfs] high dbu_evict CPU usage during snapshot deletion (#4462) comment by kpande <https://github.com/zfsonlinux/zfs/issues/4462#issuecomment-464327965>
[10:56:51] <zfs> [zfsonlinux/zfs] zfs-mount.service is called too late on Debian/Jessie with ZFS root (#4474) comment by kpande <https://github.com/zfsonlinux/zfs/issues/4474#issuecomment-464328203>
[10:58:13] <zfs> [zfsonlinux/zfs] Can't override 'mountpoint' when mounting a filesystem (#4553) comment by kpande <https://github.com/zfsonlinux/zfs/issues/4553#issuecomment-464328426>
[10:58:24] <zfs> [zfsonlinux/zfs] Can't override 'mountpoint' when mounting a filesystem (#4553) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4553#event-2144113211>
[10:59:13] <zfs> [zfsonlinux/zfs] zfs pool corruption on power loss (#4501) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4501#event-2144113573>
[11:00:35] <zfs> [zfsonlinux/zfs] PANIC at zfs_acl.c:832:zfs_acl_xform() When writing (#4499) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4499#event-2144114121>
[11:01:21] <zfs> [zfsonlinux/zfs] hung_task_timeout_secs 0.6.5.4-1 debian 8 (#4483) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4483#event-2144114322>
[11:01:56] *** bz2 <bz2!~z@unaffiliated/zst> has joined #zfsonlinux
[11:03:20] <zfs> [zfsonlinux/zfs] ZFS lockup, data not written (#4486) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4486#event-2144114979>
[11:04:39] *** zst <zst!~z@unaffiliated/zst> has quit IRC (Ping timeout: 268 seconds)
[11:04:42] *** bz2 is now known as zst
[11:05:51] <zfs> [zfsonlinux/zfs] zfs 0.6.5 and write performance problems (#4512) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4512#event-2144115904>
[11:06:37] <zfs> [zfsonlinux/zfs] Very high load during scrubs, rendering whole system unresponsive. (#4528) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4528#event-2144116210>
[11:07:59] <zfs> [zfsonlinux/zfs] Integration tests for proper boot verification (#4555) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4555#event-2144116667>
[11:09:11] *** SadMan <SadMan!foobar@sadman.net> has joined #zfsonlinux
[11:11:08] <zfs> [zfsonlinux/zfs] Build error with new mainline kernel 4.5.2 "blk_queue_flush" [fs/zfs/zfs/zfs.ko] undefined! (#4563) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4563#event-2144118074>
[11:13:06] <zfs> [zfsonlinux/zfs] Deleting Files Doesn't Free Space (#4567) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4567#event-2144118716>
[11:13:42] <zfs> [zfsonlinux/zfs] BUG: NULL pointer deref on Linux 4.4 (#4569) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4569#event-2144118948>
[11:14:09] <bunder> hey save some for the rest of us lul
[11:17:04] <zfs> [zfsonlinux/zfs] Messy locking in zfs_inode_update (#4578) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4578#event-2144120219>
[11:18:17] <zfs> [zfsonlinux/zfs] CPU stuck when getdents gets run on some directories (#4583) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4583#event-2144120608>
[11:18:37] <zfs> [zfsonlinux/zfs] Very slow scrub / resilver after drive failure (#4584) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4584#event-2144120758>
[11:19:28] <zfs> [zfsonlinux/zfs] zdb can't see zpools which have been imported readonly - It should be able to (#4598) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4598#event-2144121019>
[11:20:21] <zfs> [zfsonlinux/zfs] disk usage wrong when using larger recordsize, raidz and ashift=12 (#4599) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4599#event-2144121370>
[11:21:54] <zfs> [zfsonlinux/zfs] task zfs or txg_sync blocked for more than seconds during snapshot creation and deletion (#4604) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4604#event-2144121871>
[11:22:44] <zfs> [zfsonlinux/zfs] Kernel Panic, CentOS7 not syncing: bad overwrite z_wr_int_7 (#4610) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4610#event-2144122214>
[11:22:45] <ptx0> heh
[11:23:49] <zfs> [zfsonlinux/zfs] zfs-dracut on CentOS 6.5 broken (#4640) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4640#event-2144122508>
[11:24:04] <zfs> [zfsonlinux/zfs] zfs-dracut on CentOS 6.5 broken (#4640) comment by kpande <https://github.com/zfsonlinux/zfs/issues/4640#issuecomment-464333107>
[11:24:40] <zfs> [zfsonlinux/zfs] nfsd deadlocks inside zfs_vget() or zfs_getattr_fast() calling rrw_enter_read_impl() (#4648) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4648#event-2144122762>
[11:27:20] <zfs> [zfsonlinux/zfs] 0.6.5.6 - I/O timeout during disk spin up (#4638) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4638#event-2144123718>
[11:27:37] <zfs> [zfsonlinux/zfs] 0.6.5.6 - I/O timeout during disk spin up (#4638) reopened by kpande <https://github.com/zfsonlinux/zfs/issues/4638#event-2144123859>
[11:28:22] <zfs> [zfsonlinux/zfs] sync write on zfs is very slow (#4619) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4619#event-2144124122>
[11:29:46] <zfs> [zfsonlinux/zfs] FIO "bssplit" cause performance degeneration (#4617) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4617#event-2144124642>
[11:30:31] <zfs> [zfsonlinux/zfs] Can't mount dataset after reboot (#4616) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4616#event-2144124933>
[11:30:39] <ptx0> down to 842 bunder
[11:30:50] <bunder> lol
[11:32:25] <zfs> [zfsonlinux/zfs] zfs mount errors are obtuse (#4700) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4700#event-2144125560>
[11:33:57] <zfs> [zfsonlinux/zfs] Possible deadlock in ARC between arc_shrinker and arc_user_evicts_thread (#4688) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4688#event-2144126016>
[11:34:35] <zfs> [zfsonlinux/zfs] zfs hangs on suspend/resume (#4701) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4701#event-2144126248>
[11:37:03] <zfs> [zfsonlinux/zfs] zp->z_xattr_lock acquire timeout panic in zpl_xattr_get (#4765) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4765#event-2144127050>
[11:37:19] <zfs> [zfsonlinux/zfs] Block layer statistics in /sys/block/zdX/stat in ZOL 0.6.5 are empty (#4777) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4777#event-2144127107>
[11:37:57] <zfs> [zfsonlinux/zfs] Durring Sync Write, sync reads drop significantly (#4778) comment by kpande <https://github.com/zfsonlinux/zfs/issues/4778#issuecomment-464334794>
[11:38:06] <zfs> [zfsonlinux/zfs] Durring Sync Write, sync reads drop significantly (#4778) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4778#event-2144127304>
[11:39:43] <zfs> [zfsonlinux/zfs] zfs-mount fails because directory isn't empty, screws up bind mounts and NFS (#4784) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4784#event-2144127779>
[11:40:17] <zfs> [zfsonlinux/zfs] Kernel Panic on SPARC64 (#4744) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4744#event-2144128002>
[11:41:02] <zfs> [zfsonlinux/zfs] arc_reclaim slows down system (#4738) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4738#event-2144128225>
[11:41:45] <zfs> [zfsonlinux/zfs] dmu_buf_get_blkptr unable to handle kernel NULL pointer dereference at 0000000000000038 (#4737) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4737#event-2144128505>
[11:42:04] <ptx0> jesus
[11:42:07] <ptx0> i've only gone through 8 pages
[11:42:21] <ptx0> there's 34
[11:42:49] <hyper_ch> closing bugs?
[11:43:04] <ptx0> yeah and filing them
[11:43:19] <ptx0> because behlendorf keeps removing tags and then github just leaves a ton of issues without one
[11:43:22] <ptx0> lol
[11:43:48] <hyper_ch> finally fixed my nixos/systemd/zfs problem :)
[11:43:54] <ptx0> went from ~995 issues to 840-something
[11:44:01] <ptx0> 832 now
[11:44:21] <hyper_ch> still a lot of work left :)
[11:44:26] <ptx0> what was the magic hint
[11:44:48] <ptx0> "This man got his NixOS installation to work. You won't BELIEVE how!!!"
[11:45:08] <ptx0> what would the dailymail.co.uk headline look like
[11:45:41] <hyper_ch> well, I use nixos. Nixos uses a declarative config file to actually install all the stuff. ZFS requires a hostid which is in the "normal" networking section. However that server acts as host for some qemu VMs. with the systemd.network I did create a bridge and that worked fine up to systemd 237. However with systemd 239 suddenly the host couldn't ping anything anymore.
[07:46:03] <hyper_ch> for some reason (also in the old version) two default gateway entries were added... it worked on 237 but not 239
[11:47:40] <hyper_ch> also, I want to boot and unlock server remotely. What I had to do is (a) add boot.kernelParams = [ "ip=dhcp" ]; then remove the useDHCP="yes"; part from the normal networking section, add nameserver entries to the systemd.network section
[07:48:01] <hyper_ch> now only 1 default gateway, I can still unlock the server remotely upon reboot and networking all works :)
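A minimal sketch of the NixOS configuration hyper_ch describes (option values and the interface name are assumptions based on the description, not taken from the actual config):

```nix
# Hedged sketch only -- exact names/values are illustrative assumptions.
{
  boot.kernelParams = [ "ip=dhcp" ];   # bring networking up in the initrd
                                       # so the pool can be unlocked remotely
  networking.hostId = "8425e349";      # required by ZFS (example value)
  networking.useDHCP = false;          # removed from the "normal" section

  systemd.network.networks."10-lan" = {
    matchConfig.Name = "eno1";         # hypothetical interface name
    networkConfig.DHCP = "yes";
    dns = [ "9.9.9.9" ];               # nameservers moved into systemd.network
  };
}
```

This keeps a single source of the default route (systemd-networkd), which per the discussion avoids the duplicate-gateway breakage seen after systemd 239.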
[11:49:10] <CompanionCube> related to nixos: today i read about fedora silverblue. I think not knowing may have been better.
[11:49:51] <hyper_ch> what's silverblue?
[11:49:58] <CompanionCube> https://debarshiray.wordpress.com/2018/10/22/fedora-toolbox-hacking-on-fedora-silverblue/
[11:50:11] *** Markow <Markow!~ejm@176.122.215.103> has joined #zfsonlinux
[11:51:09] <hyper_ch> that's now my networking/briding config in nixos https://paste.simplylinux.ch/view/raw/5bc24245
[11:52:08] <hyper_ch> now there's the other office server that I also need to upgrade :)
[11:52:17] <hyper_ch> CompanionCube: fail to see how that's related to nixos
[11:53:17] <CompanionCube> hyper_ch: the whole thing sounds like a worse implementation of nixos/guixsd
[11:54:28] <CompanionCube> hell, you might even say windows arguably has a better method :p
[11:55:06] <ptx0> windows deployment magic?
[11:55:08] <hyper_ch> CompanionCube: I just read about "snappy" the other day.... didn't Canonical want to enable atomic upgrades with that?
[11:55:15] <ptx0> SDM thing
[11:55:33] <hyper_ch> I really like that Windows now comes with WSL
[11:55:36] <CompanionCube> isn't that for enterprise
[11:55:37] <hyper_ch> that makes so many things easier
[11:55:55] <ptx0> wish i hadn't reproduced that issue with dbu_evict on my workstation earlier
[11:56:00] <ptx0> cpu's been spinning for a few hours
[11:56:02] <ptx0> lol
[11:56:11] <CompanionCube> hyper_ch: i believe snap in the same category, yes
[11:56:13] <ptx0> Seems Fine Though (TM)
[11:56:25] <CompanionCube> ptx0: it's a threadripper
[11:56:35] <ptx0> yeah it's only at 46C
[11:56:41] <ptx0> like i said, seems fine
[11:56:43] <CompanionCube> you have basically infinite spare cores
[11:56:49] <ptx0> kswapd is poopin along though
[11:56:54] <ptx0> i don't even have swap
[11:57:15] <hyper_ch> infinite spare cores?
[11:57:27] <ptx0> oh wait
[11:57:30] <ptx0> the system isn't stuck
[11:57:35] <ptx0> conky just died earlier
[11:57:41] <bunder> funny
[11:57:53] <bunder> first result for dbu_evict is your 4462
[11:57:57] <ptx0> ah wait it's stuck
[11:58:08] <ptx0> oh
[11:58:09] <ptx0> nope
[11:58:09] <hyper_ch> btw, are the AMD Threadrippers really so good? I just stuck to Intel the last decade out of habit... especially when they first introduced aes-ni
[11:58:30] <bunder> don't get the wx, just the x
[11:59:05] <ptx0> i'm on a 1900x and it is spectacular
[11:59:17] <ptx0> kids who love intel CPUs tend to play games, i tend to do work
[11:59:28] <bunder> if you want 16c32t and don't want to pay $1700 for the chip, tr is the way to go
[11:59:53] <ptx0> there's people like jasonwc who will spend 2x on half the functionality because it'll get him a few higher peak fps in a game or two
[12:00:03] <hyper_ch> well, when we have the next server upgrades I'll probably switch them... currently 4c/8t i7
[12:00:14] <ptx0> i7 doesn't even have ECC...
[12:00:26] <hyper_ch> I'm still ECC-free :(
[12:00:30] <hyper_ch> thats the next thing
[12:02:00] <bunder> ecc is expensive, 128gb quad channel ddr4 already set me back a grand
[12:02:21] <ptx0> why 128gb tho
[12:03:01] <ptx0> that is excessive :P
[12:03:30] * ptx0 just went from 16 to 32G in the local backup server and it feels like overkill
[12:03:38] <ptx0> not running dozens of VMs though
[12:04:45] <bunder> i plan on running like 4 or 5 vm's at a time, plus arc for nfs and the host
[12:06:03] *** metallicus <metallicus!~metallicu@145.128.174.147> has joined #zfsonlinux
[12:06:40] *** metallicus <metallicus!~metallicu@145.128.174.147> has quit IRC (Client Quit)
[12:08:49] <hyper_ch> ecc doesn't seem much more expensive the last time I looked
[12:10:41] <bunder> to be fair, i didn't feel like fighting with it, because afaik you can't run ecc at the speeds you would with standard memory
[12:11:02] <bunder> and supposedly you can't run quad channel at those speeds either, but why would 2666 quad channel be on the qvl /shrug
[12:11:05] <ptx0> well, standard memory at xmp speeds is prone to errors too
[12:11:18] <hyper_ch> ecc is slower than standard memory?
[12:11:25] <ptx0> you can run QC at 2666MHz but it'll be overclocking the CPU
[12:11:32] <bunder> no its something to do with the memory controller
[12:11:49] <Lalufu> I'm running quad channel at 3200, only 32GB, though
[12:12:00] <bunder> 8 sticks?
[12:12:03] <Lalufu> 4
[12:12:13] <bunder> ah
[12:17:08] *** Markow <Markow!~ejm@176.122.215.103> has quit IRC (Ping timeout: 245 seconds)
[12:18:07] *** Markow <Markow!~ejm@176.122.215.103> has joined #zfsonlinux
[12:25:24] <hyper_ch> what do you use 128GB ram for?
[12:26:07] <bunder> bunder | i plan on running like 4 or 5 vm's at a time, plus arc for nfs and the host
[12:30:46] <hyper_ch> those are ram hungry vms then
[12:31:48] <DeHackEd> you give VMs what they need
[12:34:20] <zfs> [zfsonlinux/zfs] zfs send stuck on 0.6.5.4-1~vivid (kernel 3.19.0-43-generic) (#4229) comment by Csillag Tamas <https://github.com/zfsonlinux/zfs/issues/4229#issuecomment-464339079>
[12:35:56] *** notdaniel <notdaniel!~dkh@2600:1700:9bd0:2470::33> has joined #zfsonlinux
[12:40:32] *** mquin <mquin!~mike@freenode/staff/mquin> has quit IRC (Quit: So Much For Subtlety)
[12:41:28] <bunder> clamav alone says it needs at least 650mb on my current hardware
[12:42:59] <bunder> amavis is another 3-400
[12:44:52] <bunder> apache is 250
[12:45:55] <hyper_ch> so, other office server also upgrade :)
[12:46:31] <hyper_ch> notebook battery almost dead... it's past noon already... so good time to buy groceries, have lunch and call it a weekend :)
[12:46:40] <bunder> lol
[12:47:09] <hyper_ch> you disagree?
[12:48:00] *** LeoTh3o <LeoTh3o!~th3o@phoxden.net> has quit IRC (Quit: Leaving)
[12:49:37] *** LeoTh3o <LeoTh3o!~th3o@phoxden.net> has joined #zfsonlinux
[12:50:59] <bunder> no i was laughing at the battery being dead so early into the day
[12:51:20] <hyper_ch> it's 12:51 :)
[12:51:21] <bunder> its like my cellphone, take it off the charger at like 6am and its dead by noon
[12:51:25] <hyper_ch> how long does your notebook battery last?
[12:51:49] <bunder> depends on what i'm doing
[12:51:59] <bunder> but i usually leave it plugged in
[12:52:05] <hyper_ch> mine lasts around 4h
[12:52:09] <bunder> (i know it defeats the purpose)
[12:52:31] <notdaniel> my iphone x is the first phone i recall for years surviving beyond a day
[12:52:41] <hyper_ch> well, off now :)
[12:53:10] <bunder> cheers
[12:54:19] <bunder> notdaniel: i would say that's good, but apple has a history of making their phones intentionally slow
[12:54:56] <notdaniel> well theyve done that recently because the alternative was the battery not being able to survive
[12:55:03] <notdaniel> thus the stories of phone shutdowns at 20%
[12:55:14] <notdaniel> the slower speed hack was the tradeoff
[12:55:27] <notdaniel> if you have a newer phone this is not in any way a problem
[12:55:39] <notdaniel> best performing phone ive ever had
[12:55:50] <bunder> and you can't change the battery on an apple
[12:55:53] <notdaniel> if i keep it for ten years and expect it to work the same with the new os, maybe not
[12:55:59] <bunder> well, kindof sortof
[12:56:23] <notdaniel> barking up the wrong tree
[12:56:25] <bunder> i forget if they still replace the phone like they do with screens
[12:56:37] <notdaniel> i know the arguments and i do not care and i continue to buy iphones
[12:57:01] <bunder> eh to be fair androids been sealing the batteries in too, which is why i use an s5
[12:57:15] <notdaniel> because at the end of the day it gives me by far the least grief than anything else i use
[13:00:16] <zfs> [zfsonlinux/zfs] task txg_sync: blocked for more than 120 seconds (#4361) comment by Fabrice Bacchella <https://github.com/zfsonlinux/zfs/issues/4361#issuecomment-464341018>
[13:02:03] *** Essadon <Essadon!~Essadon@81-225-32-185-no249.tbcn.telia.com> has joined #zfsonlinux
[13:02:06] *** Essadon <Essadon!~Essadon@81-225-32-185-no249.tbcn.telia.com> has quit IRC (Max SendQ exceeded)
[13:05:24] *** MasterPiece <MasterPiece!~masterpie@unaffiliated/masterpiece> has quit IRC (Quit: Leaving)
[13:20:55] *** mquin <mquin!~mike@freenode/staff/mquin> has joined #zfsonlinux
[13:23:20] <bunder> i wanted to get the ubuntu phone but i dont think they are making the hardware anymore
[13:27:27] <lblume> Nor the software.
[13:29:17] <notdaniel> not really the ideal scenario for your most essential communication device
[13:32:13] *** b <b!coffee@gateway/vpn/privateinternetaccess/b> has joined #zfsonlinux
[13:33:50] <bunder> what do i need other than phone/sms and email
[13:33:56] <bunder> the rest is superfluous
[13:34:30] <bunder> i guess gps is nice but that's about it
[13:35:47] <lblume> Security updates.
[13:36:03] <bunder> its linux
[13:37:13] <lblume> ... yes? So? Main part of my job is keeping track of Linux security updates.
[13:37:22] <bunder> oh, i guess ubu isn't supporting it anymore but ubports is
[13:38:14] <notdaniel> the dream is superfluity along with stability
[13:38:40] <notdaniel> whether it's my phone or my life
[13:38:44] <notdaniel> also i have over 20k unread emails
[13:39:22] <lblume> Only? Lucky you :)
[13:47:15] <bunder> 49, but most of those are my */6 mailq checks
[13:56:59] <cirdan> bunder: you can change the battery in mac/iphones but the mac is a little hard
[13:57:28] <cirdan> apple will do a phone exchange if anything at all goes wrong, they dont wanna deal with a battery fire so they'll send it out and give you a refurb
[13:57:39] <bunder> i said kindof sortof :P
[13:59:16] <bunder> https://motherboard.vice.com/en_us/article/a3ppvj/dhs-seized-aftermarket-apple-laptop-batteries-from-independent-repair-expert-louis-rossman
[13:59:58] <bunder> i can walk into any samsung repair shop and get any parts i need for my s5
[13:59:58] *** notdaniel <notdaniel!~dkh@2600:1700:9bd0:2470::33> has quit IRC (Read error: Connection reset by peer)
[14:11:14] <cirdan> for now
[14:13:56] <cirdan> wonder if i can get a new battery for my nokia 9300i
[14:17:03] <bunder> aliexpress/ebay
[14:17:09] <bunder> sorry :P
[14:23:17] <lblume> I had an Android with a replaceable battery. When it failed, after 4 years, it was not possible to buy a genuine new one anymore.
[14:26:58] <bunder> yeah i think i lucked out with samsung, doubt i could get moto parts anymore either
[14:35:16] *** akaizen <akaizen!~akaizen@d28-23-89-78.dim.wideopenwest.com> has quit IRC (Quit: Textual IRC Client: www.textualapp.com)
[14:55:58] *** Hypfer <Hypfer!~Hypfer@unaffiliated/hypfer> has joined #zfsonlinux
[14:56:59] *** rich0 <rich0!~quassel@gentoo/developer/rich0> has quit IRC (Quit: rich0)
[14:59:34] *** rich0 <rich0!~quassel@gentoo/developer/rich0> has joined #zfsonlinux
[15:07:49] *** fs2 <fs2!~fs2@pwnhofer.at> has quit IRC (Quit: Ping timeout (120 seconds))
[15:08:58] *** fs2 <fs2!~fs2@pwnhofer.at> has joined #zfsonlinux
[15:15:10] <zfs> [zfsonlinux/zfs] Feature Request - online split clone (#2105) comment by shodanshok <https://github.com/zfsonlinux/zfs/issues/2105#issuecomment-464350398>
[15:23:04] *** Markow <Markow!~ejm@176.122.215.103> has quit IRC (Quit: Leaving)
[15:26:44] *** sauravg_ <sauravg_!~sauravg@27.6.80.196> has quit IRC (Ping timeout: 250 seconds)
[15:34:33] *** sauravg <sauravg!~sauravg@110.224.132.254> has joined #zfsonlinux
[15:37:59] *** mmlb <mmlb!~mmlb@76.248.148.178> has quit IRC (Ping timeout: 255 seconds)
[15:38:51] *** mmlb <mmlb!~mmlb@76-248-148-178.lightspeed.miamfl.sbcglobal.net> has joined #zfsonlinux
[15:54:22] <bol> Fairphone2 + LineageOS will be my next device
[15:54:55] <bol> Easy to repair if it breaks, and upgradable hardware modules
[16:01:00] *** tomoyat1 <tomoyat1!~tomoyat1@tomoyat1.com> has quit IRC (Quit: ZNC 1.6.5 - http://znc.in)
[16:01:32] *** tomoyat1 <tomoyat1!~tomoyat1@tomoyat1.com> has joined #zfsonlinux
[16:03:34] *** fs2 <fs2!~fs2@pwnhofer.at> has quit IRC (Quit: Ping timeout (120 seconds))
[16:04:28] <ChibaPet> bol: Ah, I've been looking for a Lineage phone. I'll look at that.
[16:04:33] <ChibaPet> hm
[16:04:34] *** ChibaPet is now known as mason
[16:04:45] <mason> Freenode must have gone away overnight.
[16:06:45] *** fs2 <fs2!~fs2@pwnhofer.at> has joined #zfsonlinux
[16:11:24] *** hsp <hsp!~hsp@unaffiliated/hsp> has quit IRC (Quit: WeeChat 2.3)
[16:12:35] <cirdan> ?
[16:12:42] <cirdan> nope
[16:14:33] *** fs2 <fs2!~fs2@pwnhofer.at> has quit IRC (Quit: Ping timeout (120 seconds))
[16:15:28] <cirdan> PMT: so... i ran a long smart test. Extended offline Completed without error. Current_Pending_Sector 8
[16:15:31] <cirdan> ¯\_(ツ)_/¯
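The check cirdan is describing looks roughly like this (the device path is a placeholder):

```sh
# Hedged sketch -- /dev/sda is a hypothetical device, run as root.
smartctl -t long /dev/sda      # start an extended offline self-test (prints an ETA)
smartctl -a /dev/sda | grep -E 'Current_Pending_Sector|Reallocated_Sector'
# An extended test that "Completed without error" while Current_Pending_Sector
# stays nonzero is a known oddity: the pending count typically only clears once
# the suspect sectors are actually rewritten.
```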
[16:15:35] *** fs2 <fs2!~fs2@pwnhofer.at> has joined #zfsonlinux
[16:16:22] <DeHackEd> ... I can't open a pull request...
[16:17:06] <cirdan> DeHackEd: disable ublock and see if it works
[16:17:21] *** hsp <hsp!~hsp@unaffiliated/hsp> has joined #zfsonlinux
[16:17:36] <DeHackEd> I'm not running ublock
[16:17:40] <cirdan> one day 2 months ago ublock blocked * on github
[16:17:42] <cirdan> no adblock at all?
[16:17:48] <DeHackEd> nope, just scriptsafe
[16:17:52] <DeHackEd> and github is allowed
[16:18:45] *** mmlb <mmlb!~mmlb@76-248-148-178.lightspeed.miamfl.sbcglobal.net> has quit IRC (Ping timeout: 246 seconds)
[16:23:10] <cirdan> try a diff browser?
[16:24:33] <DeHackEd> uhh, no
[16:26:12] <cirdan> same browser, local, different account
[16:27:34] * cirdan runs badblocks
[16:29:52] *** fs2 <fs2!~fs2@pwnhofer.at> has quit IRC (Quit: Ping timeout (120 seconds))
[16:30:23] <DeHackEd> nope, can't upgrade
[16:32:49] *** fs2 <fs2!~fs2@pwnhofer.at> has joined #zfsonlinux
[16:39:33] <zfs> [zfsonlinux/zfs] Should compressed ARC be mandatory? (#7896) comment by Gregor Kopka <https://github.com/zfsonlinux/zfs/issues/7896#issuecomment-464356607>
[16:58:14] * DeHackEd installs a fresh gentoo environment to build chromium in
[16:59:56] *** fs2 <fs2!~fs2@pwnhofer.at> has quit IRC (Quit: Ping timeout (120 seconds))
[17:02:21] *** fs2 <fs2!~fs2@pwnhofer.at> has joined #zfsonlinux
[17:02:46] *** zfs sets mode: +b *!*@pwnhofer.at$#zfsonlinux-quarantine
[17:03:58] *** mmlb <mmlb!~mmlb@76-248-148-178.lightspeed.miamfl.sbcglobal.net> has joined #zfsonlinux
[17:04:40] *** malevolent_ <malevolent_!~quassel@93.176.189.203> has quit IRC (Ping timeout: 250 seconds)
[17:04:49] *** malevolent <malevolent!~quassel@93.176.189.222> has joined #zfsonlinux
[17:12:43] *** catalase <catalase!Elite21895@gateway/shell/elitebnc/x-cnpiaywdigfkedab> has joined #zfsonlinux
[17:20:21] *** AllanJude <AllanJude!ajude@freebsd/developer/AllanJude> has joined #zfsonlinux
[17:24:44] <storrgie> does one need to specify ashift when adding a cache device to the pool?
[17:26:23] <zfs> [zfsonlinux/zfs] zpool import on a failing vdev causes all further "zpool" commands to fail, even on different pools. (#4038) comment by Gregor Kopka <https://github.com/zfsonlinux/zfs/issues/4038#issuecomment-464360255>
[17:26:53] <zfs> [zfsonlinux/zfs] Should compressed ARC be mandatory? (#7896) comment by Allan Jude <https://github.com/zfsonlinux/zfs/issues/7896#issuecomment-464360302>
[17:27:15] <AllanJude> storrgie: if you want it to have that ashift
[17:28:38] <storrgie> I've got two Intel `SSDPEDMX400G4` and I was planning to add them to the pool as a mirrored log device (not cache, misspoke in earlier post), I think that I should use ashift=12 with these
[17:30:44] <DeHackEd> storrgie: same as any other disk. specify it if autodetection isn't working right
[17:31:06] <storrgie> alright, just did it, I was just worried that with log devices ashift wouldn't be accepted
[17:31:18] <DeHackEd> error: you said "cache" earlier
[17:31:43] <AllanJude> storrgie: every top level vdev will accept ashift
[17:31:51] <AllanJude> you just can't mix ashifts inside a top level vdev
[17:32:22] <storrgie> and, am I correct in thinking that I should be adding a log to this pool if my objective is faster write speeds? It's a mirrored pool (4x2 HGST_HDN726060ALE610)
[17:32:26] <DeHackEd> ashift is a vdev property, yeah... you can mix disks with different sector sizes, at your own risk
[17:32:40] <DeHackEd> faster SYNCHRONOUS write speeds
[17:32:49] <storrgie> I'm thinking I don't need faster read speeds, there are plenty of IOPS in a four drive mirror for what folks are doing on there (running jupyter notebooks)
[17:32:51] <DeHackEd> async apps like rsync don't give a shit about log devices
[17:42:20] *** gienah_ <gienah_!~mwright@gentoo/developer/gienah> has joined #zfsonlinux
[17:45:28] *** gienah <gienah!~mwright@gentoo/developer/gienah> has quit IRC (Ping timeout: 245 seconds)
[17:47:35] <AllanJude> yeah, only writes where the app asks to wait until the data is safe on disk, instead of just buffered in RAM (databases, hypervisors, etc) would benefit from a SLOG
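The SLOG setup storrgie describes above might look like this; the pool name and device paths are hypothetical, and ashift=12 assumes 4K-sector flash:

```shell
# Add a mirrored log (SLOG) vdev to pool "tank", forcing 4K sectors.
# Device paths are placeholders -- substitute your own NVMe devices.
zpool add -o ashift=12 tank log mirror /dev/nvme0n1 /dev/nvme1n1

# Verify: the new vdev appears under "logs" in the pool layout.
zpool status tank

# Only synchronous writes benefit from a SLOG; check the sync property
# hasn't been set to "disabled" on the datasets you care about.
zfs get sync tank
```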
[17:52:58] *** clete2 <clete2!~clete2@71-135-200-38.lightspeed.tukrga.sbcglobal.net> has quit IRC (Ping timeout: 245 seconds)
[17:58:59] *** clete2 <clete2!~clete2@71-135-200-38.lightspeed.tukrga.sbcglobal.net> has joined #zfsonlinux
[18:03:37] * DeHackEd builds a troll version of chromium... :)
[18:30:49] <zfs> [zfsonlinux/zfs] (DDT) ZAP is inefficient when ashift=12? (#3732) comment by Nathaniel Wesley Filardo <https://github.com/zfsonlinux/zfs/issues/3732#issuecomment-464365334>
[18:37:00] <zfs> [zfsonlinux/zfs] ZTS: user_property_002_pos fails to destroy volume (#8422) created by John Wren Kennedy <https://github.com/zfsonlinux/zfs/issues/8422>
[18:43:04] *** geaaru <geaaru!~geaaru@151.60.62.96> has joined #zfsonlinux
[18:51:36] <zfs> [zfsonlinux/zfs] zfs(8): explicitly document compression of NUL blocks (#8423) created by DeHackEd <https://github.com/zfsonlinux/zfs/issues/8423>
[18:51:41] <DHE> There! I accomplished something!
[18:52:21] <DHE> I wanted to do something bigger, but this will do for now
[18:59:29] <zfs> [zfsonlinux/zfs] write() might fail with EFBIG for no "good reason" (#3731) comment by Nathaniel Wesley Filardo <https://github.com/zfsonlinux/zfs/issues/3731#issuecomment-464367631>
[19:04:49] <zfs> [zfsonlinux/zfs] task txg_sync: blocked for more than 120 seconds (#4361) comment by kpande <https://github.com/zfsonlinux/zfs/issues/4361#issuecomment-464368066>
[19:16:25] <zfs> [zfsonlinux/zfs] Feature Request - online split clone (#2105) comment by kpande <https://github.com/zfsonlinux/zfs/issues/2105#issuecomment-464368918>
[19:17:31] <geaaru> hi, is there a date for a new 0.7.x release with support for the 4.20 kernel? thanks in advance
[19:17:37] <zfs> [zfsonlinux/zfs] write() might fail with EFBIG for no "good reason" (#3731) comment by kpande <https://github.com/zfsonlinux/zfs/issues/3731#issuecomment-464368991>
[19:19:29] <zfs> [zfsonlinux/zfs] zpool import on a failing vdev causes all further "zpool" commands to fail, even on different pools. (#4038) comment by kpande <https://github.com/zfsonlinux/zfs/issues/4038#issuecomment-464369097>
[19:21:53] *** vaxsquid <vaxsquid!~vaxsquid@168.235.85.223> has joined #zfsonlinux
[19:22:06] *** fs2 <fs2!~fs2@pwnhofer.at> has quit IRC (Quit: Ping timeout (120 seconds))
[19:23:16] <ptx0> geaaru: you can see the 'projects' tab on github
[19:25:05] <geaaru> +ptx0: ok, thank you
[19:26:46] <zfs> [zfsonlinux/zfs] ZTS: user_property_002_pos fails to destroy volume (#8422) comment by Brian Behlendorf <https://github.com/zfsonlinux/zfs/issues/8422>
[19:30:03] <zfs> [zfsonlinux/zfs] (DDT) ZAP is inefficient when ashift=12? (#3732) reopened by kpande <https://github.com/zfsonlinux/zfs/issues/3732#event-2144304112>
[19:31:22] <zfs> [zfsonlinux/zfs] (DDT) ZAP is inefficient when ashift=12? (#3732) comment by kpande <https://github.com/zfsonlinux/zfs/issues/3732#issuecomment-464370028>
[19:34:49] <zfs> [zfsonlinux/zfs] Feature Request - online split clone (#2105) comment by shodanshok <https://github.com/zfsonlinux/zfs/issues/2105#issuecomment-464370358>
[19:36:14] <zfs> [zfsonlinux/zfs] Feature Request - online split clone (#2105) comment by kpande <https://github.com/zfsonlinux/zfs/issues/2105#issuecomment-464370482>
[19:42:46] <zfs> [zfsonlinux/zfs] Feature Request - online split clone (#2105) comment by shodanshok <https://github.com/zfsonlinux/zfs/issues/2105#issuecomment-464371037>
[19:45:39] <zfs> [zfsonlinux/zfs] Feature Request - online split clone (#2105) comment by kpande <https://github.com/zfsonlinux/zfs/issues/2105#issuecomment-464371242>
[19:47:13] <ptx0> mason: https://youtu.be/N7JAkM57uIo
[19:47:15] <ptx0> classic
[19:51:13] <mason> Mm, good stuff.
[19:54:17] <zfs> [zfsonlinux/zfs] Should compressed ARC be mandatory? (#7896) comment by Gregor Kopka <https://github.com/zfsonlinux/zfs/issues/7896#issuecomment-464371866>
[19:54:56] <ptx0> lol storing a checksum on disk
[19:55:47] <ptx0> that's so silly. what filesystem would ever do that?
[19:56:31] * ptx0 kinda wishes gregor would drop the issue and let zstd be implemented already
[20:03:29] <zfs> [zfsonlinux/zfs] zpool clear hang when resuming suspended pool (#6709) closed by kpande <https://github.com/zfsonlinux/zfs/issues/6709#event-2144318412>
[20:03:33] <zfs> [zfsonlinux/zfs] zpool clear hang when resuming suspended pool (#6709) comment by kpande <https://github.com/zfsonlinux/zfs/issues/6709#issuecomment-464372630>
[20:10:37] <zfs> [zfsonlinux/zfs] kernel: WARNING: Unable to automount ...: 256 / cannot stat "...": Object is remote (#4722) comment by kpande <https://github.com/zfsonlinux/zfs/issues/4722#issuecomment-464373494>
[20:10:42] <zfs> [zfsonlinux/zfs] kernel: WARNING: Unable to automount ...: 256 / cannot stat "...": Object is remote (#4722) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4722#event-2144321364>
[20:12:04] <zfs> [zfsonlinux/zfs] Should compressed ARC be mandatory? (#7896) comment by Allan Jude <https://github.com/zfsonlinux/zfs/issues/7896#issuecomment-464373648>
[20:13:21] <MilkmanDan> http://youtu.be/JWQZNXEKkaU
[20:13:30] <MilkmanDan> More than 6 years and still not to market.
[20:13:45] <AllanJude> ptx0: it was George Wilson's optimization in like 2014/2015 i think, where the L2ARC header stopped having its own checksum, and instead shared the on-disk checksum
[20:13:54] *** JMoVS <JMoVS!Wk1tqkiQqN@hamal.uberspace.de> has quit IRC (Quit: The Lounge - https://thelounge.github.io)
[20:14:00] <bunder> ptx0: remember that guy i was joking around about living next to a nuclear reactor? https://forums.gentoo.org/viewtopic-t-1093156.html
[20:14:11] *** JMoVS <JMoVS!qjc8olfEKL@hamal.uberspace.de> has joined #zfsonlinux
[20:15:03] <AllanJude> funny that you should link about cpu cooling, have a server overheating
[20:15:22] <MilkmanDan> Is it the cpu?
[20:16:01] <ptx0> AllanJude: he's trying to be clever and bypass storing the checksum in memory
[20:16:32] <AllanJude> ptx0: who is?
[20:16:51] <ptx0> Gregor :P
[20:17:47] <ptx0> "self-checksum on-disk" as if that would not require being stored in memory at some point
[20:22:02] <ptx0> bunder: how many issues do you think we'll be left with when the cleanup is finished
[20:22:09] <ptx0> my guess is about 500
[20:22:53] <bunder> hard to say
[20:23:00] <bunder> MilkmanDan: https://www.techpowerup.com/226614/thermaltake-intros-its-sandia-inspired-engine-27-1u-low-profile-cpu-cooler?cp=2
[20:23:12] <ptx0> scan: scrub repaired 0B in 8 days 18:42:35 with 0 errors on Sun Feb 10 15:03:32 2019
[20:23:19] <ptx0> lol <3 resumable scrubs
[20:24:09] <bunder> MilkmanDan: they make a 17 and a 27mm
[20:24:51] <ptx0> bunder: so you were right about the background radiation eh
[20:25:10] <bunder> well, different person but its funny that it actually happens
[20:29:50] <PMT> ptx0: IMO compressed ARC shouldn't be mandatory, which probably means someone should make the L2ARC change to hold a CKSUM in there again.
[20:31:05] <ptx0> bunder: my coworker with corruption issues lives near there in russia
[20:31:08] <ptx0> now he's going to move :P
[20:31:16] <bunder> hah
[20:31:39] <ptx0> PMT: or make it optional
[20:31:48] <ptx0> i.e. "you want uncompressed ARC? you get inefficient l2arc"
[20:32:16] <ptx0> PMT: based on a survey of my 500+ customers though, compressed arc only helps
[20:32:17] <AllanJude> that is what you have today
[20:32:25] <AllanJude> you re-compress the data and send it to the L2ARC
[20:32:38] <AllanJude> the issue is mostly that with QAT or ZSTD the recompression might not actually match
[20:33:04] <ptx0> AllanJude: yeah he just wants the checksum there always but i think why not only store it in the l2 hdr if we can't get away with NOT keeping it
[20:33:31] <AllanJude> well, if it is the same, there is no point storing the BP checksum and an L2ARC checksum
[20:33:31] <AllanJude> but yes
[20:33:41] <AllanJude> I did look at changing the L2ARC header union
[20:33:50] <AllanJude> to have a 'long' L2ARC header, that included the different checksum
[20:34:04] <ptx0> would that increase the header size for compressed arc too?
[20:34:07] <AllanJude> but the magic done with the ARC headers already is very fragile
[20:34:37] <AllanJude> ptx0: likely you'd use a flag in the arc header, to indicate if it is a large or small l2arc header, to avoid using more space when not needed
[20:34:46] <ptx0> yeah
[20:38:13] <bunder> DHE: don't we do zle on null blocks?
[20:39:07] <MilkmanDan> bunder: That's an ok cooler for low profile but it's not capable of >70W TDP and it doesn't actually use the air bearing tech, which is really disappointing.
[20:39:39] <MilkmanDan> https://www.youtube.com/watch?v=u2tCnjb6lp8
[20:42:28] <ptx0> oh my god that review is lulz
[20:43:51] <AllanJude> bunder: no, if compression is anything but 'off', it actually compares every byte to 0, and if all of the bytes in a block are 0, we store it as a 'hole', and don't write any data, just the metadata
[20:44:21] <AllanJude> ZLE would be useful in cases where there were small amounts of non-zero, mixed in with lots of runs of zero bytes
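The hole-detection behaviour AllanJude describes is easy to observe; a minimal sketch, assuming a dataset mounted at /tank/test (name hypothetical) with compression enabled:

```shell
# All-zero blocks are stored as holes when compression is on, so a file
# of zeros consumes essentially no data blocks, only metadata.
zfs set compression=lz4 tank/test
dd if=/dev/zero of=/tank/test/zeros bs=1M count=100
sync

ls -lh /tank/test/zeros   # logical size: 100M
du -h  /tank/test/zeros   # actual space: near zero -- the blocks are holes
```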
[20:44:23] <zfs> [zfsonlinux/zfs] Should compressed ARC be mandatory? (#7896) comment by Gregor Kopka <https://github.com/zfsonlinux/zfs/issues/7896#issuecomment-464377139>
[20:45:11] <MilkmanDan> I like that guy's videos. It's like hanging around with an awkward friend while he dorks out over some tech, and you just let him go because it's entertaining.
[20:46:03] * ptx0 shakes head
[20:46:06] <bunder> AllanJude: ah. by the way did you still want me to test that patch
[20:46:17] <AllanJude> yes please
[20:46:35] <AllanJude> https://github.com/allanjude/zfs/commit/d6875ab09623f3cc1b67eb77c7c418aee44cae4a
[20:46:51] <AllanJude> more useful output would be:
[20:46:58] <MilkmanDan> ptx0: Well at least he's thorough as hell with his benchmarking: https://www.youtube.com/embed/ePZCHQ--UJM https://www.youtube.com/embed/FulA1u73Mzw
[20:46:59] <AllanJude> zfs send -v ... | zstreamdump
[20:47:13] <ptx0> i've seen every single gamers nexus video, MilkmanDan
[20:47:19] <AllanJude> with combination of entire snapshots, and both -i and -I ranges
[20:47:40] <AllanJude> ideally with before/after
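The before/after test AllanJude is asking for might be driven like this; dataset and snapshot names are hypothetical:

```shell
# Capture zstreamdump's record accounting for a full send and for both
# incremental range styles (-i: single increment, -I: all intermediates),
# so the "zfs send -v" progress output can be compared against it.
zfs send -v tank/fs@c       | zstreamdump > full.txt
zfs send -v -i @a tank/fs@c | zstreamdump > incr-lower.txt
zfs send -v -I @a tank/fs@c | zstreamdump > incr-upper.txt

# Repeat with the patched build and diff the summaries.
```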
[20:47:46] <MilkmanDan> Oh, heh.
[20:48:52] <ptx0> the head shake was at the suggestion in 7896 to have a dual compressed/uncompressed ARC depending on hit rate
[20:49:06] * AllanJude has recorded over 600 podcast episodes, well over 1000 hours of video
[20:49:18] <AllanJude> ptx0: it seems they don't understand that there is already the cache of the uncompressed version
[20:49:24] <AllanJude> the dbuf cache
[20:49:32] <ptx0> AllanJude: btw, would love to get you on my YT channel if you ever make it out to BC, we'll go mountain biking and talk tech at the same time
[20:49:32] <AllanJude> you can just make that bigger if you really want
[20:49:59] <mason> ptx0: No killing AllanJude. We need him.
[20:50:08] <AllanJude> ptx0: not got any events in BC on my schedule at the moment, although I'll be in Bellingham, WA for Linux Fest North West in April
[20:50:21] <ptx0> that's pretty dang close to me in Vancouver :P
[20:50:25] <AllanJude> yes
[20:50:29] <AllanJude> that is why I mention it
[20:50:31] <bunder> building now, this might take a while
[20:50:44] <ptx0> well you might like Vancouver. we have some bridges.
[20:50:46] <AllanJude> bunder: building only took a few minutes for me, in a 6 core VM
[20:50:55] <bunder> slow configure
[20:51:06] *** Dagger <Dagger!~dagger@sawako.haruhi.eu> has quit IRC (Excess Flood)
[20:51:08] <AllanJude> bunder: yes, configure seems to be slower than compiling for some reason
[20:51:13] <AllanJude> ptx0: Colin Percival lives there, seems to like it
[20:51:22] <ptx0> MilkmanDan: "this heatsink scored 82 and this one scored 34" "cool gimme the 82 one" "but that's temperatures"
[20:51:30] <MilkmanDan> Haha
[20:51:31] <mason> heh
[20:51:40] <AllanJude> ha
[20:51:57] <MilkmanDan> More higher numbers is more better.
[20:52:05] <ptx0> my heatsink went Super Sayan
[20:52:28] <ptx0> saiyan, my bad.
[20:52:57] *** Dagger2 <Dagger2!~dagger@sawako.haruhi.eu> has joined #zfsonlinux
[20:56:08] <AllanJude> zfs send | zstd -T0 --adapt | zstd -T0 -d | zfs recv
[20:56:09] <AllanJude> is very nice
[20:56:43] <ptx0> but pigz makes my cpu all toasty
[20:56:44] <AllanJude> sorry, should be a netcat between the 2 zstd's
[20:56:49] <AllanJude> it varies the compression level based on the network speed
[20:57:00] <AllanJude> so, if cpu is the bottleneck, it lowers the compression level
[20:57:06] <AllanJude> but if the network is the bottleneck, it raises it
[20:57:30] <AllanJude> and of course, zstd has 22 compression levels to vary between, instead of 9
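Putting AllanJude's corrected pipeline together with the netcat in the middle, it might look like this (host and port are hypothetical):

```shell
# Receiver (start first): listen, decompress, receive into a backup dataset.
nc -l 9000 | zstd -d -T0 | zfs recv tank/backup

# Sender: --adapt varies the zstd level on the fly -- it raises the level
# when the network is the bottleneck and lowers it when the CPU is,
# anywhere within zstd's 22 levels. -T0 uses all cores.
zfs send tank/data@snap | zstd -T0 --adapt | nc receiver.example 9000
```

Note `zstd --adapt` requires zstd 1.3.6 or newer.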
[20:57:46] <ptx0> zstd with auto compression though, that'd be somethin
[20:57:56] <AllanJude> on my todo list
[20:58:00] <AllanJude> I have a plan for how to do it
[20:58:07] <ptx0> hm, now that gives me an idea
[20:58:07] <AllanJude> once I sort out this L2ARC issue, and get zstd merged
[20:58:21] <AllanJude> the idea was to vary based on the amount of dirty data, similar to the write throttle
[20:58:36] <ptx0> what about a userspace tool that can pipe random data through zfs checksum/compression algo
[20:58:40] <AllanJude> so, if the disk is the bottleneck, use more compression
[20:58:49] <ptx0> so you can make a 'tar' archive equivalent via zfs
[20:59:08] <ptx0> this sounds stupid until you consider auto compression
[20:59:08] <AllanJude> ptx0: at the last hackathon, Pawel and my idea was one to be able to add zfs encryption to a send stream
[20:59:39] <AllanJude> so zfs send unencrypted | zstreamenc -K ... | network | zfs recv encrypted
[21:00:02] <AllanJude> for backing up to 'untrusted' remote pools
[21:00:04] <ptx0> so i can use foo | zstd | ... but if i ran foo | zcompress --compression=auto --checksum=edonr > foo.zcompressed
[21:00:17] <ptx0> that'd let me have a file with multiple compression algorithms
[21:00:25] <ptx0> and a zfs friendly checksum
[21:00:50] <ptx0> probably a dumb idea but i've never thought of that before vOv
[21:01:22] <AllanJude> in my case at the moment, the sender is an older version of FreeBSD (11.1) that doesn't have compressed send
[21:01:28] <AllanJude> that is why I am using zstd
[21:01:29] <MilkmanDan> What do you mean "auto compression"?
[21:01:42] <AllanJude> MilkmanDan: it varies the compression level
[21:01:42] <ptx0> #5928
[21:01:49] <zfs> [zfs] #5928 - auto compression by n1kl <https://github.com/zfsonlinux/zfs/issues/5928>
[21:01:52] <ptx0> compression level /and/ algorithm
[21:02:00] <ptx0> it'll use gzip or zstd depending on what you want.
[21:02:03] <AllanJude> zstd supports levels 1 - 22, and now has --fast= as well
[21:02:20] <AllanJude> which are 'negative' levels, even faster than the original '1'
[21:02:25] *** Markow <Markow!~ejm@176.122.215.103> has joined #zfsonlinux
[21:02:47] <ptx0> #7560 is the replacement for 5928
[21:02:50] <zfs> [zfs] #7560 - Adaptive compression [was: auto compression] by RubenKelevra <https://github.com/zfsonlinux/zfs/issues/7560>
[21:03:45] <AllanJude> whereas my idea was zfs set compress=zstd-auto
[21:04:36] <AllanJude> and it would vary the zstd level between min and max, based on amount of dirty data. Compress as much as possible, but if data is starting to build up waiting to be compressed and written, lower the compression level, until we are keeping up
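The level-versus-throughput tradeoff that a compress=zstd-auto scheme would exploit can be eyeballed from userspace; a rough sketch (sample.dat is a hypothetical test file, and /usr/bin/time -f is GNU time):

```shell
# Higher zstd levels buy compression ratio with CPU time; an adaptive
# scheme drops the level when dirty data backs up and raises it when
# the disks are the bottleneck.
for lvl in 1 9 19; do
    /usr/bin/time -f "level ${lvl}: %e s" \
        zstd -${lvl} -c sample.dat > sample.zst
    ls -l sample.zst   # compare output size across levels
done
```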
[21:07:16] <MilkmanDan> ptx0: I'm lost. What does that have to do with making tars in userspace?
[21:07:42] <bunder> AllanJude: seems to work, its a little on the high side now but one sec i'll paste the results
[21:07:45] <ptx0> MilkmanDan: compression containers usually only allow for one compression algorithm
[21:08:15] <ptx0> MilkmanDan: rar, tar, zip, lz4, bz2 all use a single algorithm for one file
[21:09:04] <ptx0> we'd basically be creating a new compression container that wraps many algorithms and compression levels with metadata that allows a 'zdecompress' tool to inflate it
[21:09:44] <MilkmanDan> ...as part of zfs?
[21:09:49] <ptx0> zfs makes it easy because we've already unified all those algorithms
[21:10:12] <ptx0> you can make any userspace programme do the same thing but it's not as convenient.
[21:11:01] <bunder> AllanJude: https://gist.github.com/bunder2015/9b4cd2dd6921cdb6afc4bb20cb960592
[21:12:59] <MilkmanDan> So instead of tar -[zjJ] -xvf tar_file.ext gimmie.txt you'd be able to zdecompress tar_file gimmie.txt?
[21:13:49] <MilkmanDan> I guess I'm not grasping how that's different from what zfs already does transparently.
[21:14:07] <MilkmanDan> I think I need a nap.
[21:15:14] *** Nukien <Nukien!~Nukien@162.250.233.55> has quit IRC (Ping timeout: 268 seconds)
[21:15:26] <mason> Sigh. The Fairphone seems to be European-only.
[21:16:23] <bunder> hm maybe it isn't high, but send doesn't seem to print the last output if the send ends before the next output interval
[21:17:24] *** Nukien <Nukien!~Nukien@162.250.233.55> has joined #zfsonlinux
[21:19:55] <ptx0> MilkmanDan: it's for non-zfs purposes
[21:20:16] <ptx0> i guess you could use zfs send but it's for sending a single file
[21:20:29] <ptx0> like i said it's probably a dumb idea GEEZ SORRY
[21:21:38] <zfs> [zfsonlinux/zfs] (DDT) ZAP is inefficient when ashift=12? (#3732) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3732#event-2144350937>
[21:21:49] <bunder> lol
[21:22:18] <zfs> [zfsonlinux/zfs] (DDT) ZAP is inefficient when ashift=12? (#3732) comment by kpande <https://github.com/zfsonlinux/zfs/issues/3732#issuecomment-464381078>
[21:26:20] <zfs> [zfsonlinux/zfs] (DDT) ZAP is inefficient when ashift=12? (#3732) comment by kpande <https://github.com/zfsonlinux/zfs/issues/3732#issuecomment-464381470>
[21:28:57] <zfs> [zfsonlinux/zfs] config: amend libtirpc detection for opensuse (#8313) comment by Rafael Kitover <https://github.com/zfsonlinux/zfs/issues/8313#issuecomment-464381750>
[21:32:22] <ptx0> i just noticed this comment https://github.com/zfsonlinux/zfs/issues/2105#issuecomment-464188998
[21:32:33] <ptx0> mentions casually creating VMs from a 10TiB ZVOL
[21:32:43] <ptx0> what the
[21:33:24] <ptx0> But, If my variation is really only 10MB different from the original 10TB, why shouldn't I be able to pay for 10TB+10MB? -- snapshots + clones give me that. Until the 10TB moves sufficiently that I'm now paying for 10TB (live + 10TB snapshot + 10TB diverged) and my 10MB variation moves so that it's now its own 10TB (diverged from both live and snapshot).
[21:34:04] <ptx0> jeez i dunno what to say there other than "why did you do that"
[21:34:23] <bunder> who has 10tb to spare for only a single zvol
[21:34:47] <ptx0> who the heck has a 10TB VM template
[21:35:47] <bunder> most cloud providers would want like 10 grand just for the block storage
[21:36:08] <ptx0> a lil bit overestimating there
[21:36:15] <ptx0> but yeah it'd be expensive
[21:36:29] <ptx0> it's like $26,000 per month for 100TB
[21:36:52] <bunder> welp time to become an escort i guess hah
[21:38:26] <zfs> [zfsonlinux/zfs] zfs destroy: dataset is busy (#4715) comment by kpande <https://github.com/zfsonlinux/zfs/issues/4715#issuecomment-464382627>
[21:38:31] <zfs> [zfsonlinux/zfs] zfs destroy: dataset is busy (#4715) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4715#event-2144357553>
[21:40:58] <zfs> [zfsonlinux/zfs] ZFS io error when disks are in idle/standby/spindown mode (#4713) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4713#event-2144358617>
[21:43:59] <zfs> [zfsonlinux/zfs] "cannot receive incremental stream:" when receiving a snapshot incremental file stream in a clone (#4693) comment by kpande <https://github.com/zfsonlinux/zfs/issues/4693#issuecomment-464383120>
[21:44:40] <zfs> [zfsonlinux/zfs] ZFS + CentOS 6 - very low performance of pool and not working SSD write cache (#4786) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4786#event-2144360141>
[21:45:00] <zfs> [zfsonlinux/zfs] ZFS + CentOS 6 - very low performance of pool and not working SSD write cache (#4786) comment by kpande <https://github.com/zfsonlinux/zfs/issues/4786#issuecomment-464383202>
[21:46:26] <zfs> [zfsonlinux/zfs] Poor throughput to vdevs with well conditioned IO (#4792) comment by kpande <https://github.com/zfsonlinux/zfs/issues/4792#issuecomment-464383325>
[21:46:31] <zfs> [zfsonlinux/zfs] Poor throughput to vdevs with well conditioned IO (#4792) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4792#event-2144360798>