January 8, 2019
[00:02:38] <ptx0> kstat analyzer
[00:02:39] <ptx0> holy fuck
[00:02:47] <ptx0> i've told you 5 times
[00:03:04] <ptx0> arc_summary doesn't tell you this information
[00:03:16] <ptx0> keep trying to do it though if you want
[00:10:19] *** donhw <donhw!~quassel@host-184-167-36-98.jcs-wy.client.bresnan.net> has joined #zfsonlinux
[00:15:42] *** gila <gila!~gila@5ED74129.cm-7-8b.dynamic.ziggo.nl> has quit IRC (Quit: Textual IRC Client: www.textualapp.com)
[00:17:23] <ptx0> https://github.com/richardelling/zfs-linux-tools/blob/master/kstat-analyzer
[00:28:41] *** Celmor <Celmor!~Celmor@unaffiliated/celmor> has left #zfsonlinux
[00:31:31] <blackflow> ptx0: fuckit we're past that now. y'all confirmed "Header Size" under "L2 Arc Size" section of arc_summary was it. I can't remember who exactly was here (memory issues as well) but it's not as if the channel is brimming with teh variety of folks :)
[00:35:50] <ptx0> blackflow: i'm not talking about header size, i'm talking about its efficiency
[00:47:56] *** prawn <prawn!~prawn@surro/greybeard/prawn> has joined #zfsonlinux
[00:49:10] <zfs> [zfsonlinux/zfs] Make zpool status counters match err events count (#7817) new commit by Tony Hutter <https://github.com/zfsonlinux/zfs>
[00:51:18] <zfs> [zfsonlinux/zfs] Make zpool status counters match err events count (#7817) comment by Tony Hutter <https://github.com/zfsonlinux/zfs/issues/7817#issuecomment-452124219>
[00:55:41] *** Albori <Albori!~Albori@216-229-75-72.fidnet.com> has quit IRC (Ping timeout: 258 seconds)
[00:55:53] *** dadinn <dadinn!~DADINN@188.172.153.77> has quit IRC (Ping timeout: 245 seconds)
[00:57:51] *** dadinn <dadinn!~DADINN@188.172.153.77> has joined #zfsonlinux
[01:01:55] <zfs> [zfsonlinux/zfs] making sure the last quiesced txg is synced (#8239) new review comment by Brian Behlendorf <https://github.com/zfsonlinux/zfs/pull/8239#pullrequestreview-190042736>
[01:02:04] <zfs> [zfsonlinux/zfs] making sure the last quiesced txg is synced (#8239) comment by Brian Behlendorf <https://github.com/zfsonlinux/zfs/issues/8239#issuecomment-452126269>
[01:17:52] <blackflow> ptx0: ah k, yeah I'll look into it, thanks.
[01:19:08] *** MilkmanDan <MilkmanDan!~dan@wilug/expat/MilkmanDan> has quit IRC (Ping timeout: 250 seconds)
[01:20:21] *** MilkmanDan <MilkmanDan!~dan@wilug/expat/MilkmanDan> has joined #zfsonlinux
[01:21:18] *** Albori <Albori!~Albori@216-229-75-72.fidnet.com> has joined #zfsonlinux
[01:34:17] *** Setsuna-Xero <Setsuna-Xero!~pewpew@unaffiliated/setsuna-xero> has joined #zfsonlinux
[01:36:48] *** PewpewpewPantsu <PewpewpewPantsu!~pewpew@unaffiliated/setsuna-xero> has quit IRC (Ping timeout: 246 seconds)
[01:45:03] *** chasmo77 <chasmo77!~chas77@158.183-62-69.ftth.swbr.surewest.net> has joined #zfsonlinux
[01:47:49] *** metallicus <metallicus!~metallicu@bifrost.evert.net> has joined #zfsonlinux
[01:48:52] *** metallicus <metallicus!~metallicu@bifrost.evert.net> has quit IRC (Remote host closed the connection)
[01:51:25] *** tlacatlc6 <tlacatlc6!~tlacatlc6@68.202.46.96> has joined #zfsonlinux
[01:51:52] *** cluelessperson <cluelessperson!~cluelessp@unaffiliated/cluelessperson> has quit IRC (Ping timeout: 252 seconds)
[01:58:38] *** cluelessperson <cluelessperson!~cluelessp@unaffiliated/cluelessperson> has joined #zfsonlinux
[02:01:30] <zfs> [zfsonlinux/zfs] Removed suggestion to use root dataset as bootfs (#8247) comment by Gregor Kopka <https://github.com/zfsonlinux/zfs/issues/8247#issuecomment-452137347>
[02:38:13] <zfs> [zfsonlinux/zfs] making sure the last quiesced txg is synced (#8239) new commit by seekfirstleapsecond <https://github.com/zfsonlinux/zfs>
[02:39:40] *** rjvbb <rjvbb!~rjvb@2a01cb0c84dee6006cef745b124f4e00.ipv6.abo.wanadoo.fr> has quit IRC (Ping timeout: 252 seconds)
[02:57:58] <ptx0> oh udev
[02:58:03] <ptx0> finally switched to eudev
[02:58:08] <ptx0> no more issues yet
[03:01:45] <zfs> [zfsonlinux/zfs] making sure the last quiesced txg is synced (#8239) new commit by seekfirstleapsecond <https://github.com/zfsonlinux/zfs>
[03:05:40] <zfs> [zfsonlinux/zfs] making sure the last quiesced txg is synced (#8239) new review comment by seekfirstleapsecond <https://github.com/zfsonlinux/zfs/pull/8239#discussion_r245858473>
[03:09:41] <zfs> [openzfs/openzfs] Add a manual for ztest. (#729) comment by Sevan Janiyan <https://github.com/openzfs/openzfs/issues/729#issuecomment-452149544>
[03:30:30] *** fp7 <fp7!~fp7@unaffiliated/fp7> has quit IRC (Quit: fp7)
[03:56:34] *** futune <futune!~futune@179.87.189.109.customer.cdi.no> has quit IRC (Remote host closed the connection)
[03:59:29] *** MTecknology <MTecknology!~Mike@nginx/adept/mtecknology> has left #zfsonlinux ("You saw me, but now you don't.")
[04:52:53] *** Markow <Markow!~ejm@176.122.215.103> has quit IRC (Quit: Leaving)
[05:11:49] *** tlacatlc6 <tlacatlc6!~tlacatlc6@68.202.46.96> has quit IRC (Quit: Leaving)
[05:40:32] *** Baughn <Baughn!~Baughn@madoka.brage.info> has joined #zfsonlinux
[05:40:51] <Baughn> Here's something which has been happening lately: http://ix.io/1xKm
[05:41:31] <Baughn> I get minute-long stalls on every sync, unless with sync=disabled. This happens even immediately after a previous sync, with no write pressure.
[05:43:11] <Baughn> The only likely commonalities are that it happens on NixOS (which describes all my systems) and Zen 1 AMD (ditto). I'm not sure where to even start debugging this one -- where is it even hanging? I don't see any ZFS code in the stack trace.
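A minimal sketch of one way to see where a stall like this is sitting, assuming root and sysrq enabled; the sysrq-w dump logs kernel stacks of every D-state task, which is usually enough to tell whether it's ZFS or something beneath it:

    sync &                           # kick off a sync that will stall
    sleep 15
    echo w > /proc/sysrq-trigger     # log stacks of blocked (D-state) tasks
    dmesg | tail -n 80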
[05:56:01] *** evert1 <evert1!~metallicu@bifrost.evert.net> has joined #zfsonlinux
[05:57:14] *** evert1 is now known as metallicus
[05:58:18] *** Albori <Albori!~Albori@216-229-75-72.fidnet.com> has quit IRC (Ping timeout: 272 seconds)
[06:05:06] *** metallicus <metallicus!~metallicu@bifrost.evert.net> has quit IRC (Quit: WeeChat 2.3)
[06:06:22] *** metallicus <metallicus!~metallicu@bifrost.evert.net> has joined #zfsonlinux
[06:08:03] *** metallicus <metallicus!~metallicu@bifrost.evert.net> has quit IRC (Client Quit)
[06:10:02] *** metallicus <metallicus!~metallicu@bifrost.evert.net> has joined #zfsonlinux
[06:22:44] *** Albori <Albori!~Albori@216-229-75-72.fidnet.com> has joined #zfsonlinux
[06:23:23] *** metallicus <metallicus!~metallicu@bifrost.evert.net> has quit IRC (Quit: WeeChat 2.3)
[06:24:09] *** metallicus <metallicus!~metallicu@80.100.205.41> has joined #zfsonlinux
[06:40:54] *** metallicus <metallicus!~metallicu@80.100.205.41> has quit IRC (Quit: WeeChat 2.3)
[06:48:28] *** MilkmanDan <MilkmanDan!~dan@wilug/expat/MilkmanDan> has quit IRC (Ping timeout: 250 seconds)
[06:49:22] *** MilkmanDan <MilkmanDan!~dan@wilug/expat/MilkmanDan> has joined #zfsonlinux
[06:57:52] *** metallicus <metallicus!~metallicu@bifrost.evert.net> has joined #zfsonlinux
[07:02:54] *** metallicus <metallicus!~metallicu@bifrost.evert.net> has quit IRC (Quit: WeeChat 2.3)
[07:04:46] *** metallicus <metallicus!~metallicu@bifrost.evert.net> has joined #zfsonlinux
[07:19:06] <chesty> I'm running zfs only, no network filesystems , the usual virtual filesystems like tmpfs and sysfs, some by docker and snap. after I first boot, df -h works fine, then sometime later, df -h hangs with 100% cpu, it's in the R state, kill -KILL'ing it doesn't kill it either, it's still in R at 100% cpu. I've seen processes in an uninterruptible sleep
[07:19:07] <chesty> before, but from memory they turn to Z or D. stracing it doesn't print anything to the screen and strace becomes unkillable. I doubt it's a zfs issue, more likely usb (there's a zfs filesystem on usb) or something else, but how do I debug it? I figure someone here might know?
[07:23:36] <ptx0> don't use usb
[07:24:02] <ptx0> universally shitty bus
[07:27:17] <chesty> actually now that I think about it, I've never had this problem before and I recently upgraded to kernel 4.18
[07:28:09] <chesty> ok, so usb is bad, but how do I confirm it's usb? I guess I can just unplug it and see if they die, but there must be a better way
[07:29:24] <ptx0> kernel logs?
[07:29:37] <ptx0> stack traces?
[07:29:41] <ptx0> vOv
[07:31:00] *** gerhard7 <gerhard7!~gerhard7@ip5657ee30.direct-adsl.nl> has joined #zfsonlinux
[07:32:49] *** simukis <simukis!~simukis_@78-63-88-48.static.zebra.lt> has joined #zfsonlinux
[07:40:19] <chesty> there wasn't anything in dmesg, but after I unplugged the usb there are some logs that look related: INFO: task txg_sync:14678 blocked for more than 120 seconds and INFO: task zfs:26222 blocked for more than 120 seconds.
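Those hung-task warnings come with full kernel stack traces; a rough sketch for catching the next one sooner (the timeout value here is just an example, the default is 120):

    echo 30 > /proc/sys/kernel/hung_task_timeout_secs
    dmesg | grep -B1 -A20 'blocked for more than'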
[07:42:33] *** nahamu <nahamu!~nahamu@165.225.132.70> has quit IRC (Ping timeout: 245 seconds)
[07:46:15] *** nahamu <nahamu!~nahamu@165.225.132.70> has joined #zfsonlinux
[07:54:58] *** hyper_ch2 <hyper_ch2!c105d864@openvpn/user/hyper-ch2> has joined #zfsonlinux
[08:05:10] *** nahamu <nahamu!~nahamu@165.225.132.70> has quit IRC (Ping timeout: 250 seconds)
[08:05:31] *** metallicus <metallicus!~metallicu@bifrost.evert.net> has quit IRC (Quit: WeeChat 2.3)
[08:06:01] *** nahamu <nahamu!~nahamu@165.225.132.70> has joined #zfsonlinux
[08:21:29] *** Sketch <Sketch!sketch@new.rednsx.org> has quit IRC (Ping timeout: 268 seconds)
[08:31:23] *** Sketch <Sketch!sketch@2604:180:2::a506:5c0d> has joined #zfsonlinux
[09:05:54] *** bs338 <bs338!sid296274@p3m/member/integral> has quit IRC (Ping timeout: 264 seconds)
[09:06:49] *** bs338 <bs338!sid296274@p3m/member/integral> has joined #zfsonlinux
[09:15:14] *** gardar <gardar!~gardar@bnc.giraffi.net> has quit IRC (Quit: ZNC - http://znc.in)
[09:15:50] <FinalX> hm, I created both of my SSD pools with normalization=formD, but my nas pool doesn't have it. I transferred my LXC dataset with its children back and forth pools sometimes and now it's normalization=- while the rest of the pool is normalization=formD. as it's a readonly property, what's the best way to go about fixing it up to be formD again? tried zfs receive -o normalization=formD, but it just throws me
[09:15:51] <FinalX> "invalid property"
[09:15:59] <FinalX> will rsync be my only friend, or is there another way?
[09:16:55] <FinalX> don't even know how this happened, a dataset with normalization=- while the rest is all formD from the pool up
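One way to see which datasets diverged before deciding how to fix them; the pool name here is illustrative:

    zfs get -r -o name,value,source normalization nas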
[09:20:43] *** gardar <gardar!~gardar@bnc.giraffi.net> has joined #zfsonlinux
[09:27:08] *** Markow <Markow!~ejm@176.122.215.103> has joined #zfsonlinux
[09:31:28] *** JanC_ <JanC_!~janc@lugwv/member/JanC> has joined #zfsonlinux
[09:33:13] *** JanC is now known as Guest41406
[09:33:13] *** JanC_ is now known as JanC
[09:33:58] *** Guest41406 <Guest41406!~janc@lugwv/member/JanC> has quit IRC (Ping timeout: 252 seconds)
[09:52:17] *** rjvb <rjvb!~rjvb@2a01cb0c84dee6009bda76eb03bc33f7.ipv6.abo.wanadoo.fr> has joined #zfsonlinux
[10:05:27] *** kaipee <kaipee!~kaipee@81.128.200.210> has joined #zfsonlinux
[10:08:25] <zfs> [zfsonlinux/zfs] No zpool.cache file created and zpools are not loading when the system reboots (#8252) created by caraghavendravarma <https://github.com/zfsonlinux/zfs/issues/8252>
[10:19:25] *** gila <gila!~gila@94.215.65.41> has joined #zfsonlinux
[10:21:10] *** Lalufu <Lalufu!~s@unaffiliated/lalufu> has quit IRC (Ping timeout: 252 seconds)
[10:27:43] <zfs> [zfsonlinux/zfs] Feature request: incremental scrub (#8248) comment by mailinglists35 <https://github.com/zfsonlinux/zfs/issues/8248#issuecomment-452230907>
[10:33:04] *** Lalufu <Lalufu!~s@unaffiliated/lalufu> has joined #zfsonlinux
[10:37:53] *** stefan00 <stefan00!~stefan00@ip9234924b.dynamic.kabel-deutschland.de> has joined #zfsonlinux
[10:54:20] *** Slashman <Slashman!~Slash@cosium-152-18.fib.nerim.net> has joined #zfsonlinux
[11:09:28] <zfs> [zfsonlinux/zfs] zfs unable to automount (#8166) comment by colttt <https://github.com/zfsonlinux/zfs/issues/8166#issuecomment-452243858>
[11:10:37] *** insane^ <insane^!~insane@fw.vispiron.de> has joined #zfsonlinux
[11:28:24] *** amospalla <amospalla!~amospalla@unaffiliated/amospalla> has quit IRC (Quit: WeeChat 1.6)
[11:51:46] *** rjvbb <rjvbb!~rjvb@2a01cb0c84dee6003cdcccdf530d8636.ipv6.abo.wanadoo.fr> has joined #zfsonlinux
[12:17:54] <zfs> [zfsonlinux/zfs] No zpool.cache file created and zpools are not loading when the system reboots (#8252) closed by kpande <https://github.com/zfsonlinux/zfs/issues/8252#event-2059931073>
[12:18:09] <zfs> [zfsonlinux/zfs] Feature request: incremental scrub (#8248) comment by mailinglists35 <https://github.com/zfsonlinux/zfs/issues/8248#issuecomment-452263534>
[12:18:18] <zfs> [zfsonlinux/zfs] No zpool.cache file created and zpools are not loading when the system reboots (#8252) comment by kpande <https://github.com/zfsonlinux/zfs/issues/8252#issuecomment-452263590>
[12:21:15] <zfs> [zfsonlinux/zfs] Feature request: incremental scrub (#8248) comment by kpande <https://github.com/zfsonlinux/zfs/issues/8248#issuecomment-452264390>
[12:23:17] <ptx0> https://youtu.be/kDXtt7_7S2o?t=59
[12:23:27] <ptx0> woahw
[12:25:33] <blackflow> hardware porn! btw, ptx0, is this the kstat analyzer tool you were recommending? https://github.com/richardelling/zfs-linux-tools
[12:25:44] <ptx0> i linked it
[12:26:01] <DeHackEd> regarding your last incremental scrub comment: the same issues with not doing scrubs still apply. failure of disk 1 may be undetected when a read is issued to disk 2, or just that recently written data will probably be in the ARC and not hit disks at all
[12:26:14] <ptx0> i know
[12:26:43] <DeHackEd> I still agree the whole thread is stupid
[12:27:17] <ptx0> ml35 reminds me of a child who constantly says they want to help in the kitchen and keep knocking shit over and spilling flour while saying they're gonna help
[12:27:49] <ptx0> well intentioned, but
[12:28:51] <ptx0> the whole feature request thing seems kinda entitled
[12:29:13] <ptx0> if you want the code, hire someone to write it or do it yourself
[12:29:45] <ptx0> and they submit a LOT of feature requests without ever submitting any work toward getting it done
[12:34:17] *** amospalla <amospalla!~amospalla@unaffiliated/amospalla> has joined #zfsonlinux
[12:34:51] <zfs> [zfsonlinux/zfs] "Dataset does not exist" in incremental receives in current master (#8067) comment by kpande <https://github.com/zfsonlinux/zfs/issues/8067#issuecomment-452267798>
[12:34:59] <zfs> [zfsonlinux/zfs] "Dataset does not exist" in incremental receives in current master (#8067) closed by kpande <https://github.com/zfsonlinux/zfs/issues/8067#event-2059965152>
[12:36:52] <ptx0> https://github.com/zfsonlinux/zfs/issues?utf8=✓&q=author%3Amailinglists35
[12:36:59] <ptx0> seriously
[12:37:08] <ptx0> they only really submit feature requests
[12:41:25] <zfs> [zfsonlinux/zfs] `zpool export` does not delete the mountpoint on pools with local mountpoint property (#4824) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4824#event-2059978020>
[12:41:50] <zfs> [zfsonlinux/zfs] `zfs create` takes unreasonably long time, exponential to existing number of datasets (#4727) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4727#event-2059978643>
[12:42:35] <zfs> [zfsonlinux/zfs] "zpool create" does not create gpt or any other kind of partition table on hp smartarray CCISS drives (#3478) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3478#event-2059980045>
[12:45:08] <zfs> [zfsonlinux/zfs] [performance] z_rd_int is not using all available cpu cycles during read (#2952) closed by kpande <https://github.com/zfsonlinux/zfs/issues/2952#event-2059984935>
[12:45:23] <zfs> [zfsonlinux/zfs] Option to serialize scrub requests ending to the same physical device (#1216) closed by kpande <https://github.com/zfsonlinux/zfs/issues/1216#event-2059985426>
[12:47:53] <zfs> [zfsonlinux/zfs] default pool mountpoint name should conform to Filesystem Hierarchy Standard (/srv/poolname instead of /poolname) (#4814) comment by kpande <https://github.com/zfsonlinux/zfs/issues/4814#issuecomment-452271044>
[12:47:59] <zfs> [zfsonlinux/zfs] default pool mountpoint name should conform to Filesystem Hierarchy Standard (/srv/poolname instead of /poolname) (#4814) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4814#event-2059990924>
[12:49:23] <zfs> [zfsonlinux/zfs] Feature request: ztop, a top-like tool specific to zfs (#5880) closed by kpande <https://github.com/zfsonlinux/zfs/issues/5880#event-2059993678>
[12:50:39] <blackflow> ptx0: can you link it again please? I've just read through the entire backlog (since I asked about L2 misunderstanding) and there's no link. There's also no "told you 5 times", but only once, without a highlight: "21:59 < ptx0> use kstat analyzer"
[12:50:43] <zfs> [zfsonlinux/zfs] feature request: add new option to `zfs snapshot` subcommand to skip creation of zero sized snapshots (#6041) closed by kpande <https://github.com/zfsonlinux/zfs/issues/6041#event-2059996187>
[12:51:38] <ptx0> you already re-linked it
[12:52:05] <blackflow> ptx0: I see, thanks.
[12:54:04] <PMT> chesty: if you're still around, things like perf top might be useful. also depending on which 4.18 it may be on fire.
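A sketch of what that suggestion might look like for the unkillable R-state df; the pid lookup is illustrative and both commands need root:

    pid=$(pgrep -xo df)
    perf top -p "$pid"        # sample where it is spinning
    cat /proc/"$pid"/stack    # kernel stack snapshot, often empty for a running task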
[12:54:26] <zfs> [zfsonlinux/zfs] l2arc_feed kernel thread keeps waking up the cpu on idle system (#4844) closed by kpande <https://github.com/zfsonlinux/zfs/issues/4844#event-2060003620>
[12:54:54] <ptx0> there
[12:54:59] <ptx0> i closed most of ml35's issues
[12:55:37] <PMT> we noticed
[12:56:01] <ptx0> felt particularly good to smash the fuck out of #4814
[12:56:03] <zfs> [zfs] #4814 - default pool mountpoint name should conform to Filesystem Hierarchy Standard (/srv/poolname instead of /poolname) <https://github.com/zfsonlinux/zfs/issues/4814>
[12:58:03] <ptx0> the funny thing there is that "it is a burden to remember" thing
[12:58:14] <ptx0> then brian wants to have the option but not be default
[12:58:30] <ptx0> so they will be setting the prefix on every system they run into, anyway... having to remember that..
[13:04:22] <blackflow> speaking of 4814, DeHackEd said "we also follow a few Solaris conventions by defaulting to refuse to mount into non-empty directories by default.". Sounds like a default. What (if anything) should I look into to change this? I'm forced to use legacy mounts because systemd tmpfiles or something else fills in /var and /tmp with stuff before the zfs mounting service gets going
[13:04:42] <ptx0> dear god don't separate out /var
[13:04:46] <ptx0> and /tmp should be tmpfs
[13:05:08] <bunder> yet we still have 955 open tickets
[13:05:11] <ptx0> there's some madness you have to do with systemd units to make it work
[13:05:17] <blackflow> technically I'm not splitting out /var, but /var/log and /var/tmp
[13:05:33] <blackflow> everything else is part of rpool. I want to avoid logs being rolled back with root, in case I need to
[13:05:33] <ptx0> yeah, don't do that
[13:05:42] <ptx0> just clone root
[13:05:45] <ptx0> don't roll it back
[13:06:09] <blackflow> I'm following advice I inherited from FreeBSD and their ZFS devs. Something different about this in ZoL?
[13:06:15] <bunder> i split tmp vartmp and varlog /shrug
[13:06:20] <zfs> [zfsonlinux/zfs] feature request: add new option to `zfs snapshot` subcommand to skip creation of zero sized snapshots (#6041) comment by Gregor Kopka <https://github.com/zfsonlinux/zfs/issues/6041#issuecomment-452275689>
[13:06:23] <ptx0> yeah our init system is not that well integrated
[13:06:32] <blackflow> even if I clone it, /var/log would show the old (pre-snapshot) data
[13:06:39] <ptx0> duh
[13:06:45] <ptx0> but the original is untouched
[13:06:51] <ptx0> rollback destroys the new snapshots.
[13:07:21] <blackflow> mkay, anyway, since that comment suggests a configurable default, what should I look into, to allow mounts on non-empty dirs? Is that even wise? I really hate legacy mountpoints but I have to do them.
[13:07:36] <zfs> [zfsonlinux/zfs] feature request: add new option to `zfs snapshot` subcommand to skip creation of zero sized snapshots (#6041) comment by kpande <https://github.com/zfsonlinux/zfs/issues/6041#issuecomment-452276039>
[13:07:54] <ptx0> #3186
[13:07:56] <zfs> [zfs] #3186 - fileset says mounted when it is not <https://github.com/zfsonlinux/zfs/issues/3186>
[13:08:03] <ptx0> don't do it
[13:08:55] <ptx0> just do what everyone else does and run an actual log collection daemon that stores things somewhere else
[13:09:40] <blackflow> I do. The server that receives the logs stores them under /var/log.
[13:10:34] <blackflow> But I see that issue. I think I've hit it as well, filesystem saying it's mounted but it wasn't.
[13:11:49] <ptx0> yeah but when it's /var
[13:11:55] <ptx0> the problem is particularly fun
[13:11:59] <blackflow> and I see the answer to my question is the "overlay" property, so thanks.
[13:12:11] <ptx0> that's the thing you shouldn't do.
[13:12:18] <blackflow> Understood. I see there are issues.
[13:12:18] <ptx0> it's like turning on dedup
[13:13:41] <blackflow> And thanks for the cloning vs rollback advice. One of those details you don't think about until you come across them.
[13:20:12] <ptx0> i basically never rollback
[13:20:16] <ptx0> it is very dangerous
[13:20:35] <ptx0> i clone, verify, promote, destroy the original
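The clone-verify-promote flow ptx0 describes, spelled out with invented dataset names; promote re-parents the snapshots onto the clone, which is what lets the origin be destroyed afterwards:

    zfs clone rpool/ROOT/gentoo@known-good rpool/ROOT/restored
    # mount and verify rpool/ROOT/restored here
    zfs promote rpool/ROOT/restored
    zfs destroy -r rpool/ROOT/gentoo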
[13:20:58] *** metallicus <metallicus!~metallicu@bifrost.evert.net> has joined #zfsonlinux
[13:21:06] <zfs> [zfsonlinux/zfs] feature request: add new option to `zfs snapshot` subcommand to skip creation of zero sized snapshots (#6041) comment by Gregor Kopka <https://github.com/zfsonlinux/zfs/issues/6041#issuecomment-452279301>
[13:24:23] <hyper_ch2> ptx0: promote?
[13:24:33] <ptx0> ?
[13:24:38] <ptx0> man zfs
[13:24:39] <zfs> [zfsonlinux/zfs] feature request: add new option to `zfs snapshot` subcommand to skip creation of zero sized snapshots (#6041) comment by kpande <https://github.com/zfsonlinux/zfs/issues/6041#issuecomment-452280276>
[13:25:06] <hyper_ch2> ah, that's an actual zfs subcommand :)
[13:25:21] <DeHackEd> there are many of them, and growing every time an encryption bugfix goes in
[13:25:22] <DeHackEd> :)
[13:25:33] <DeHackEd> (joke)
[13:34:45] <zfs> [zfsonlinux/zfs] feature request: add new option to `zfs snapshot` subcommand to skip creation of zero sized snapshots (#6041) comment by Gregor Kopka <https://github.com/zfsonlinux/zfs/issues/6041#issuecomment-452282813>
[13:35:38] <blackflow> ptx0: makes sense to do that, yes.
[13:36:35] <zfs> [zfsonlinux/zfs] feature request: add new option to `zfs snapshot` subcommand to skip creation of zero sized snapshots (#6041) comment by kpande <https://github.com/zfsonlinux/zfs/issues/6041#issuecomment-452283361>
[13:37:27] <ptx0> i don't buy that a feature requires delegation support to be considered valid
[13:37:48] <ptx0> if that's the case then 'zfs create' does not work
[13:39:24] <ptx0> a part of me does not understand the desire to skip zero size snapshots
[13:39:42] <ptx0> they are zero now, but not later..
[13:42:39] <DeHackEd> internally a rollback is implemented by making a clone, promoting it, doing a clone swap (internal operation not available to users) and destroying the old dataset
[13:43:19] <ptx0> in a single txg
[13:43:25] <DeHackEd> right
[13:44:55] <ptx0> wonder if ml35 has some kind of brain damage
[13:45:05] <ptx0> 50% of their comments involve whining about how difficult things are for the end user
[13:45:13] <ptx0> a couple extra lines here or there in a script, just so difficult
[13:45:16] <insane^> maybe someone should point him to ansible or so
[13:45:25] <DeHackEd> zfs is relatively easy compared to some of the stuff out there...
[13:45:35] <ptx0> nah they want to get rid of scripts and roll everything into zfs utilities
[13:45:37] <ptx0> lmao
[13:45:54] <DeHackEd> well, in fairness ZFS now has a scripting language, so we've basically reversed roles now
[13:46:05] <ptx0> ah but it can't run without root so basically doesn't exist ;)
[13:47:43] <ptx0> if snapshot -r also gets a -z to skip zero size snapshots why not give it a -x to skip child datasets as well
[13:48:03] <ptx0> shit, let's just look at the rsync manpage and copy features in one by one
[13:48:20] <MilkmanDan> zfs is weak and immature and will remain so until it can read email.
[13:48:47] <ptx0> MilkmanDan: i've been trying to teach it but it keeps asking where its parents are and i tell it Daddy Ahrens doesn't want to see it until it can read that damn email
[13:48:58] <MilkmanDan> Really, it just needs to quit screwing around and embed emacs.
[13:49:08] <ptx0> zpool nano <file>
[13:49:14] <MilkmanDan> Hah
[13:49:24] <MilkmanDan> Oh yeah. nano for sure.
[13:49:30] <DeHackEd> ptx0: that zero byte snapshot thing is easily implemented by a channel program, and it's race-free !
[13:49:33] <ptx0> zde, the zfs data editor
[13:49:40] <DeHackEd> (I think...)
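A rough sketch of that channel program, with made-up pool/dataset/snapshot names; it assumes the 'written' property (bytes changed since the latest snapshot) is readable from ZCP, and since the script runs in syncing context the check-then-snapshot is indeed atomic:

    cat > skipsnap.lua <<'EOF'
    args = ...
    argv = args["argv"]
    fs, snap = argv[1], argv[2]
    -- skip the snapshot if nothing was written since the last one
    if zfs.get_prop(fs, "written") ~= 0 then
        assert(zfs.sync.snapshot(fs .. "@" .. snap) == 0)
    end
    EOF
    zfs program tank skipsnap.lua tank/data hourly-20190108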
[13:50:00] <ptx0> give it a filename and it'll discover the offset, recordsize, location of data and reconstruct it for you from disk
[13:50:03] <ptx0> oh wait
[13:50:08] <ptx0> that's what zfs already does
[13:50:20] <MilkmanDan> Wait, zero byte snapshots?
[13:50:35] <ptx0> #6041 dude
[13:50:38] <ptx0> you are a heavy sleeper
[13:50:41] <zfs> [zfs] #6041 - feature request: add new option to `zfs snapshot` subcommand to skip creation of zero sized snapshots <https://github.com/zfsonlinux/zfs/issues/6041>
[13:50:48] <MilkmanDan> I am and it sucks.
[13:51:09] <DeHackEd> if a dataset hasn't changed since it was last snapshot'd, skip creating a new one
[13:51:23] <DeHackEd> actually that's possibly bad because now it looks like you're missing snapshots. maybe rename the old snapshot to the new name
[13:51:26] <MilkmanDan> Oh oh, I get it. Turn snap into a noop if there hasn't been any changed blocks.
[13:51:35] <ptx0> DeHackEd: that sounds even worse
[13:51:38] <DeHackEd> oh wait, now it looks like the old snapshot is missing. maybe we should create 2 snapshots instead
[13:51:47] * DeHackEd had an aneurysm
[13:51:50] <DeHackEd> (sp?)
[13:51:51] <ptx0> DeHackEd: they should just duplicate the last snapshot with the new name
[13:51:53] <ptx0> that's far simpler
[13:51:59] <MilkmanDan> I was thinking there was some new magical datastructure that was going to allow snapshots to literally cost zero bytes of additional storage per snap.
[13:52:08] <ptx0> anyeurism
[13:52:17] <DeHackEd> sure, I had one of those
[13:52:22] <DeHackEd> thanks ml35
[13:52:24] <ptx0> i had one trying to spell it
[13:52:49] <MilkmanDan> Ess tee are oh kay eee.
[13:52:59] <zfs> [zfsonlinux/zfs] Feature request: incremental scrub (#8248) comment by Gregor Kopka <https://github.com/zfsonlinux/zfs/issues/8248#issuecomment-452287532>
[13:53:07] <DeHackEd> no that's something different
[13:53:25] <ptx0> "scripting zfs get before snapshot is too much to do, lets embed that exact same logic into the zfs utility"
[13:53:47] <DeHackEd> they met you half way and provided channel programs that will let you put the script into the zfs utility
[13:54:00] <ptx0> but now IT CAN'T RUN AS NON-ROOT
[13:54:06] * MilkmanDan coughs "lua".
[13:54:07] <ptx0> shifting the goalpost 101
[13:54:15] <DeHackEd> MilkmanDan: that's channel programs
[13:54:22] <DeHackEd> also your tears are delicious
[13:54:27] <ptx0> who even uses zfs delegation anyway
[13:54:28] <ptx0> honestly
[13:54:32] <MilkmanDan> DeHackEd: Hey, justify it however you want, man. ;)
[13:54:38] <ptx0> sudo4lyfe
[13:54:45] <DeHackEd> MilkmanDan: not your tears. I mean ml35's
[13:54:49] * lblume learns about "zfs program"
[13:55:14] <ptx0> i'm surprised osx zfs doesn't have a hard dependency on zcp
[13:55:28] <ptx0> they love the bleeding edge
[13:57:18] <MilkmanDan> How hackish is zfsonosx lately? Does Apple still consider it dirty and sinful and hold its corporate nose when people integrate it?
[13:57:41] <ptx0> i doubt they will ever acknowledge its existence
[13:57:42] <DeHackEd> I'm going to assume zfs on osx isn't blessed by apple
[13:58:03] <ptx0> oh wait https://www.osnews.com/story/14473/apple-interested-in-solaris-zfs/
[13:58:08] <ptx0> lmao
[13:58:11] <ptx0> oh my
[13:58:22] <ptx0> alzheimer's must be an epidemic over there
[13:58:40] <ptx0> er
[13:58:41] <DeHackEd> that was 13 years ago...
[13:58:43] <ptx0> 2006
[13:58:46] <ptx0> nvm
[13:58:47] <MilkmanDan> I wasn't going to ever expect a blessing, but the impression I got at the time of it first being uninvited from the party was that Apple really just wanted the whole project to vanish from history.
[13:58:49] <ptx0> google lied
[13:58:58] <ptx0> the timestamp in search says jan 5 2019
[13:59:33] <ptx0> DeHackEd: fun fact, it was don brady who ported zfs to OS X while working at apple
[13:59:36] <lblume> Must be the timestamp for the ads they'd really, really like you to have a quick glance at.
[13:59:43] <DeHackEd> interesting...
[14:01:29] <ptx0> wait, CEO of Synology "ZFS requires more memory and the performance is not better than Btrfs in compact NAS servers [...]"
[14:01:39] <ptx0> requires more memory?
[14:01:40] <MilkmanDan> Hahahaha
[14:02:01] <ptx0> what about the lack of raid5 safety
[14:02:04] <ptx0> come on, wang.
[14:02:10] <DeHackEd> thanks, but I want my data to be there tomorrow as well
[14:02:46] <MilkmanDan> Translation: "if our customers wanted a robust, high value solution we'd have to spend an extra $32 on RAM and that would eat into our profit margin."
[14:02:54] <blackflow> ptx0: synology has a custom raid layer under btrfs, they don't use btrfs' raid
[14:03:01] <blackflow> (which is extra funny)
[14:03:12] <ptx0> that's proving the point
[14:03:26] <blackflow> yup
[14:03:29] <MilkmanDan> blackflow: Anything Synology says comparing themselves favorably to zfs is extra funny.
[14:03:51] <ptx0> apparently the Dropbox CEO has dinner with the Synology fuckstain
[14:03:59] <DeHackEd> custom raid? as in, not just using mdadm?
[14:04:51] <DeHackEd> (I liked the drobo.. even though the performance was craptacular, its RAID did as advertised)
[14:04:57] <MilkmanDan> ptx0: That actually makes perfect sense. Instead of relying on your local NAS to protect your data, just sync it with the datacenter in Bluffdale. Tada, all safe.
[14:05:08] <ptx0> nah
[14:05:09] <blackflow> DeHackEd: yu
[14:05:13] <blackflow> *yup
[14:05:14] <ptx0> dropbox CEO says zfs is unsupported
[14:05:20] <ptx0> it "doesn't support xattrs"
[14:05:23] <insane^> MilkmanDan, sync? blah, put it in $cloud
[14:05:35] <cirdan> what doesn't, dropbox?
[14:05:38] <cirdan> cause zfs does
[14:05:48] <ptx0> dropbox ceo says zfs does not
[14:05:58] <MilkmanDan> insane^: To get the joke, google://Bluffdale+datacenter
[14:06:02] <DeHackEd> didn't he say anyhting but ext4 is unsupported?
[14:06:13] <ptx0> it refuses to even work
[14:06:15] <DeHackEd> can I just patch the kernel to report ext4 to all statfs system calls?
[14:06:19] <MilkmanDan> Maybe that joke doesn't travel so well outside of the US.
[14:06:26] <ptx0> he says it is unsupported because they only support filesystems that have xattr
[14:06:32] <ptx0> weird that he blocked XFS too
[14:06:36] <ptx0> vOv
[14:06:43] <DeHackEd> dude, XFS is the king of xattrs, or at least the old grampa of it
[14:06:49] <ptx0> inventor, even
[14:06:49] <insane^> MilkmanDan, germany here... so yes...
[14:07:16] <ptx0> oh and 'ext3' is unsupported too
[14:07:27] <ptx0> even though its xattr support is about as shite as ext4
[14:07:44] <blackflow> DeHackEd: actually, not sure. They say they integrate with a custom layer, the "linux raid" and "btrfs" because "Btrfs RAID is unstable": https://www.synology.com/en-us/knowledgebase/DSM/tutorial/General/What_was_the_RAID_implementation_for_Btrfs_File_System_on_SynologyNAS
[14:07:45] <MilkmanDan> insane^: Yes. It is vitally important that you continue to be completely unaware of any datacenters in Bluffdale. Also, your leaders should continue to have sensitive conversations over regular cell phones.
[14:08:01] <DeHackEd> didn't ext3/4 have a limit of 1 filesystem block per xattr for a file, whereas xfs has either 64k or unlimited each?
[14:08:12] <ptx0> i dunno
[14:08:19] <blackflow> DeHackEd: I know it's custom code because I've been looking into their NAS products for a company that wanted to use them in an enterprise-y fashion
[14:08:26] <DeHackEd> (realistically an xattr is transferred in its entirety per syscall, so you're not going to have multi-gigabyte xattrs)
[14:08:30] <ptx0> i do remember discovering that xfs xattr were not as grand as i once thought, now that more modern filesystems exist
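If DeHackEd's recollection is right, the difference is easy to feel with a value bigger than one fs block; the mount paths are hypothetical and the exact ext4 error may vary:

    big=$(head -c 5000 /dev/zero | tr '\0' x)
    setfattr -n user.big -v "$big" /mnt/ext4/file   # expected to fail (~4K per-inode cap)
    setfattr -n user.big -v "$big" /mnt/xfs/file    # xfs allows up to 64K per attr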
[14:08:54] <insane^> MilkmanDan, you mean like this trumpish man uses his iphone?
[14:08:56] <insane^> :p
[14:09:21] <ptx0> insane^: careful, you're veering
[14:09:30] <MilkmanDan> DeHackEd: What if I want the disk blocks of the pdf to contain the full text of the book as xattr metadata?
[14:09:59] <cirdan> pretty sure hfs had xattrs in the 80s
[14:10:33] <ptx0> you mean HPFS?
[14:10:37] <MilkmanDan> ptx0: It's my fault. I'm being treasonous and attempting to stir up foreign resentment against my government, as I am clearly a Russian bot.
[14:10:55] <ptx0> MilkmanDan: it just doesn't belong here
[14:11:02] <ptx0> none of it
[14:12:00] <cirdan> no i mean HFS
[14:12:13] <cirdan> apple called them "resource forks"
[14:12:52] <ptx0> no
[14:12:57] <ptx0> that is not the same as an xattr
[14:16:34] <cirdan> hfs+ could have any number of different forks
[14:16:52] <cirdan> storing anything you wanted
[14:16:54] <ptx0> btw, i was playing Far Cry 5 for the first time today, well, trying to play it
[14:17:09] <ptx0> kept crashing in my VM until i started the qemu template with two NUMA zones and all 8 cores
[14:17:28] <ptx0> then i restarted the 4 core / single NUMA VM and tried again, still crashed, but 8 cores works
[14:17:59] <cirdan> weird
[14:18:04] <ptx0> i was seeing insane latencies and horrible frametime in benchmark but the gameplay seems to be smooth
[14:18:16] <ptx0> also it's a pretty good game so far
[14:18:48] <ptx0> it's neat that you can start up a single player campaign and your friends can just join you on the journey
[14:19:02] <ptx0> turning it dynamically into coop mode
[14:19:31] <ptx0> kind of expensive game though, imo
[14:27:28] <zfs> [zfsonlinux/zfs] feature request: add new option to `zfs snapshot` subcommand to skip creation of zero sized snapshots (#6041) comment by mailinglists35 <https://github.com/zfsonlinux/zfs/issues/6041#issuecomment-452297618>
[14:29:10] <ptx0> oh yeah it must be something personal and nothing related to the issues themselves
[14:29:19] <ptx0> that's why i closed some and not all of the issues, right
[14:29:24] <ptx0> dipshit
[14:32:46] <zfs> [zfsonlinux/zfs] feature request: add new option to `zfs snapshot` subcommand to skip creation of zero sized snapshots (#6041) comment by kpande <https://github.com/zfsonlinux/zfs/issues/6041#issuecomment-452299237>
[14:35:08] <zfs> [zfsonlinux/zfs] enhancement request: "zpool status" should also output scanned and speed after scrub completion (#7123) comment by kpande <https://github.com/zfsonlinux/zfs/issues/7123#issuecomment-452299932>
[14:38:55] <zfs> [zfsonlinux/zfs] Consider adding mitigations for speculative execution related concerns (#7035) comment by kpande <https://github.com/zfsonlinux/zfs/issues/7035#issuecomment-452301042>
[14:41:17] <zfs> [zfsonlinux/zfs] Feature request: incremental scrub (#8248) comment by mailinglists35 <https://github.com/zfsonlinux/zfs/issues/8248#issuecomment-452301740>
[14:42:05] <cirdan> ptx0: your bot should colorize keywords, like comment, opened, and closed :)
[14:42:41] *** kim0 <kim0!uid105149@ubuntu/member/kim0> has quit IRC (Quit: Connection closed for inactivity)
[14:42:59] <cirdan> at least opened and closed issues
[14:45:21] <zfs> [zfsonlinux/zfs] feature request: add new option to `zfs snapshot` subcommand to skip creation of zero sized snapshots (#6041) comment by mailinglists35 <https://github.com/zfsonlinux/zfs/issues/6041#issuecomment-452302976>
[14:46:36] <zfs> [zfsonlinux/zfs] Feature request: ztop, a top-like tool specific to zfs (#5880) comment by mailinglists35 <https://github.com/zfsonlinux/zfs/issues/5880#issuecomment-452303389>
[14:50:09] <ptx0> cirdan: freenode blocks colour
[14:50:27] *** ChanServ sets mode: +o ptx0
[14:50:31] *** ptx0 sets mode: -c
[14:50:31] *** ChanServ sets mode: +c
[14:50:35] *** ptx0 sets mode: -o ptx0
[14:51:56] <ptx0> i kinda hope ml35 keeps commenting on all the closed issues and gets banned forever
[14:53:02] <ptx0> "you are expecting others to do the work for you, for free" "that is how open source works"
[14:53:06] <ptx0> where's RMS when you need him
[14:59:52] <FireSnake> ptx0: nice to meet you too
[15:00:56] <ptx0> mmhm
[15:01:30] <FireSnake> you seem as aggressive there as here
[15:01:35] <FireSnake> tbh
[15:02:14] <cirdan> FireSnake: pent up sexual tension
[15:02:21] *** Essadon <Essadon!~Essadon@81-225-32-185-no249.tbcn.telia.com> has joined #zfsonlinux
[15:02:38] *** Essadon <Essadon!~Essadon@81-225-32-185-no249.tbcn.telia.com> has quit IRC (Max SendQ exceeded)
[15:02:57] <ptx0> FireSnake: and you seem as stupid there as here
[15:05:46] <ptx0> cirdan: keep it up and i'll ship you some hard drives
[15:06:16] <cirdan> woot
[15:06:21] <zfs> [zfsonlinux/zfs] Consider adding mitigations for speculative execution related concerns (#7035) comment by RageLtMan <https://github.com/zfsonlinux/zfs/issues/7035#issuecomment-452309780>
[15:06:21] <cirdan> 8tbs please
[15:07:18] *** Markow <Markow!~ejm@176.122.215.103> has quit IRC (Remote host closed the connection)
[15:07:45] <ptx0> well that's a bold move right there
[15:07:54] <ptx0> suggesting we use leaked code
[15:08:25] * ptx0 ships cirdan SMR devices
[15:09:20] <cirdan> sure
[15:09:25] <ptx0> 4800rpm
[15:09:31] <cirdan> 16tb
[15:09:32] <ptx0> buckle up, buckaroo.
[15:09:40] <cirdan> and an ssd to back the metadata
[15:09:53] <ptx0> i think a 4800rpm smr disk would have an effective rate of less than 20MiB/s
[14:10:18] <cirdan> gpleaked... i think he meant not "leaked" but unintentionally released as gpl
[15:10:25] <ptx0> yes
[15:10:34] <ptx0> i understood what they meant
[15:10:53] <ptx0> it is all gpl code but you sign some contract that prevents you from releasing the gpl code to unauthorised parties
[15:10:58] <ptx0> of course this is against the gpl
[15:11:09] <TimWolla> ptx0: You should be able to override ChanServ's mode lock if you want to set -c.
[15:11:16] <ptx0> but anyone that mentally unstable, can't possibly be worth fighting over
[15:11:23] <ptx0> TimWolla: cannot, not the channel founder
[15:12:04] <ptx0> only ryao can
[15:12:09] <TimWolla> Ah, I see.
[15:14:55] <PMT> ptx0: i thought we settled the legality of that back in the WRT54G custom firmware days
[15:15:07] <PMT> also wtf is respectre in that context
[15:15:50] <ptx0> https://grsecurity.net/respectre_announce.php
[15:16:00] <ptx0> it's more grsec trash, as linus would say
[15:19:46] *** cinch <cinch!~cinch@freebsd/user/cinch> has joined #zfsonlinux
[15:42:34] *** Markow <Markow!~ejm@176.122.215.103> has joined #zfsonlinux
[16:02:19] <madwizard> Who should I ping for code review for a pull request?
[16:02:21] *** hyper_ch2 <hyper_ch2!c105d864@openvpn/user/hyper-ch2> has quit IRC (Ping timeout: 256 seconds)
[16:02:43] <DHE> don't. just open it, people will look at it
[16:08:25] <cirdan> just address all comments to kpande
[16:08:28] <cirdan> :-)
[16:08:48] <cirdan> (kidding)
[16:10:31] *** Essadon <Essadon!~Essadon@81-225-32-185-no249.tbcn.telia.com> has joined #zfsonlinux
[16:12:37] <madwizard> That I did, thank you
[16:13:39] <zfs> [zfsonlinux/zfs] zfs receive and rollback can skew filesystem_count (#8232) comment by Jerry Jelinek <https://github.com/zfsonlinux/zfs/issues/8232>
[16:20:04] <zfs> [zfsonlinux/zfs] "Dataset does not exist" in incremental receives in current master (#8067) comment by zrav <https://github.com/zfsonlinux/zfs/issues/8067#issuecomment-452335660>
[16:26:19] <bunder> ptx0: i see he didn't say much about their code leak
[16:26:47] *** TheBloke <TheBloke!~TomJ@unaffiliated/tomj> has quit IRC (Read error: No route to host)
[16:27:30] <bunder> lol the github repo is still there too
[16:30:17] *** TheBloke <TheBloke!~TomJ@unaffiliated/tomj> has joined #zfsonlinux
[16:50:11] *** kim0 <kim0!uid105149@ubuntu/member/kim0> has joined #zfsonlinux
[16:56:31] *** metallicus <metallicus!~metallicu@bifrost.evert.net> has quit IRC (Quit: WeeChat 2.3)
[16:56:43] *** metallicus <metallicus!~metallicu@bifrost.evert.net> has joined #zfsonlinux
[17:01:22] *** metallicus <metallicus!~metallicu@bifrost.evert.net> has quit IRC (Client Quit)
[17:08:21] *** insane^ <insane^!~insane@fw.vispiron.de> has quit IRC (Ping timeout: 260 seconds)
[17:08:43] *** zrav <zrav!~zravo_@2001:a61:460b:9d01:8db8:9c5:5dea:a1d> has joined #zfsonlinux
[17:11:45] <PMT> bunder: I mean, there's basically no way to argue it wasn't legal short of accusing someone of actually compromising grsec computers to acquire the code.
[17:12:18] <bunder> sure but that doesn't mean he can't be publicly salty about it
[17:12:36] <bunder> maybe his lawyer told him not to lol
[17:12:40] <PMT> it does mean he can't readily issue a legal takedown though
[17:17:20] <cirdan> all you can do is cancel the contract of the leaker, if you can find out who it is
[17:17:40] <cirdan> the contract could have fines for doing it as well, but then again you need to figure out who did it
[17:17:56] <PMT> i wonder if they try to cleverly embed steganographic information in the source
[17:18:12] <cirdan> it's possible. it's being done now by govt and other places
[17:19:05] <cirdan> a pretty-printer could likely thwart most of that, unless they mixed up code block order per customer
[17:19:28] <cirdan> but then again, once you discover the secret sauce you can undo/modify it yourself
[17:20:42] <PMT> i know dd-wrt at one point was doing the thing about only giving artifacts to people who paid them and there was some excitement legally about it
[17:21:04] <PMT> oh no sorry i'm remembering sveasoft, I think
[17:31:43] *** elxa <elxa!~elxa@2a01:5c0:e086:8351:5a32:e360:98cb:966f> has joined #zfsonlinux
[17:35:59] *** malevolent <malevolent!~quassel@93.176.182.131> has joined #zfsonlinux
[17:38:45] <zfs> [zfsonlinux/zfs] enhancement request: "zpool status" should also output scanned and speed after scrub completion (#7123) comment by Richard Elling <https://github.com/zfsonlinux/zfs/issues/7123#issuecomment-452365444>
[17:41:08] <zfs> [zfsonlinux/zfs] #8235 feature request: zpool iostat N should repeat header like vmstat N (#8246) new review comment by Garrett Fields <https://github.com/zfsonlinux/zfs/pull/8246#pullrequestreview-190313012>
[17:42:16] <ghfields> madwizard: Really not a code review, but a style review. It's something to chew on while others come by.
[17:42:56] <cirdan> man i hate git/github
[17:43:55] <zfs> [zfsonlinux/zfs] `zpool export` does not delete the mountpoint on pools with local mountpoint property (#4824) comment by mailinglists35 <https://github.com/zfsonlinux/zfs/issues/4824#issuecomment-452367402>
[17:44:37] <zfs> [zfsonlinux/zfs] Feature request: incremental scrub (#8248) comment by Richard Elling <https://github.com/zfsonlinux/zfs/issues/8248#issuecomment-452367684>
[17:47:43] <madwizard> ghfields: thanks
[17:47:57] <zfs> [zfsonlinux/zfs] Feature request: incremental scrub (#8248) comment by mailinglists35 <https://github.com/zfsonlinux/zfs/issues/8248#issuecomment-452368904>
[17:48:41] <bunder> should maybe squash your stack too
[17:49:37] <ghfields> I agree
[17:51:16] <Shinigami-Sama> ptx0: funny story about synology and btrfs....
[17:51:35] <Shinigami-Sama> we had a client lose an entire LUN off of one of those units...
[17:52:05] <Shinigami-Sama> then we recovered it, but perf was AWFUL, like IDE drive bad in a 16? disk unit
[17:52:18] <zfs> [zfsonlinux/zfs] Feature request: incremental scrub (#8248) comment by Richard Elling <https://github.com/zfsonlinux/zfs/issues/8248#issuecomment-452370661>
[17:52:27] <bunder> gamers nexus had 2 synology boxes die in like 4-5 months
[17:52:30] <bunder> they sound crap
[17:53:17] <zfs> [zfsonlinux/zfs] Verify disks writes (#2526) comment by Richard Elling <https://github.com/zfsonlinux/zfs/issues/2526#issuecomment-452370987>
[17:54:41] <Shinigami-Sama> they're great for the price, but I prefer QNAPs. they're far more robust OS-wise
[17:55:01] <Shinigami-Sama> we typically use them as backup targets now thankfully
[17:56:22] <bunder> i'm not big on the boxes at all tbh, puny cpu, little memory, yet they're marketed as being an everything box
[17:56:42] <bunder> "run your docker containers and vm's on our atom with 4gb of memory weeeeeeeee"
[17:58:08] <zfs> [zfsonlinux/zfs] Verify disks writes (#2526) comment by Richard Elling <https://github.com/zfsonlinux/zfs/issues/2526#issuecomment-452372816>
[18:01:37] <ghfields> I have obtained a 5 sata bay thecus unit with a 64-bit atom processor. The PS died, but it runs fine with an external PS with wires snaked into it. Hopefully, I can get the PS issue resolved one way or another and use it as a backup pool.
[18:04:39] <bunder> if its non standard, you're probably gonna have to use a donor and hope it doesn't die too
[18:05:31] <bunder> or the nasty "cables from outside" :P
[18:05:55] <cirdan> might be able to use a pico power or something?
[18:07:20] <bunder> will it have enough wattage? i thought they weren't powerful enough
[18:07:29] <cirdan> what does it need?
[18:07:41] <cirdan> i think they do 60-90w or something
[18:07:50] <cirdan> enough for 5 drives and an arom
[18:07:52] <cirdan> atpm
[18:07:56] <cirdan> atom. sheesh
[18:08:05] * Shinigami-Sama gives cirdan some coffee
[18:08:11] <cirdan> you have no idea
[18:08:33] *** f_g <f_g!~f_g@213-47-131-124.cable.dynamic.surfer.at> has quit IRC (Ping timeout: 268 seconds)
[18:08:34] <Shinigami-Sama> I think I do, I just read almost all of MS's documentation on sharepointonline yesterday
[18:09:26] <bunder> ewwwwwww sharepoint
[18:09:32] <ghfields> let me do some lookin... thecus n5550....
[18:09:37] <FinalX> hm. can I somehow zfs send/receive/whatever an existing (recursive) dataset to make them normalization=formD?
[18:09:49] <FinalX> zfs receive -o normalization=formD won't work at least..
[18:09:59] <Shinigami-Sama> sharepoint is better than SMB/lt2p tunnels...
[18:10:04] <FinalX> and it's readonly on the sending side, so.
[18:10:25] <cirdan> it's pool wide
[18:10:25] <DHE> I suspect not because the metadata required for normalization is either missing in the stream or already in the stream. either way not something you should be sending.
[18:10:34] <FinalX> no, it's not pool wide.. I wish it was
[18:10:41] <cirdan> and only settable on creation
[18:10:46] <cirdan> it isn't/
[18:10:47] <cirdan> ?
[18:10:51] <DHowett> Shinigami-Sama: (i still feel compelled to apologize for the sharepoint team's .. uh, creation)
[18:10:57] <DHE> no, it's a per-dataset property
[18:11:00] <FinalX> no, the pool is normalization=formD, but the dataset on it is not. while all other datasets are.
[18:11:09] <cirdan> oh. interitable though right?
[18:11:13] <FinalX> yes
[18:11:39] <FinalX> so, I should probably create new datasets for every dataset and rsync stuff over then, I guess
[18:11:43] <FinalX> hm
[18:11:44] <Shinigami-Sama> DHowett: its better than it used to be, by leaps and bounds. Especially now that e1 licenses come with word/etc online so you don't even to mess with checking in/out
[18:12:30] <FinalX> it's only like ~200G on an NVMe disk, so could've been worse :)
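Since normalization is fixed at dataset creation and won't ride along in a send stream, the usual route is the one FinalX lands on: a new dataset plus rsync. A sketch with invented names:

    zfs create -o normalization=formD nas/lxc-new
    rsync -aHAX /nas/lxc/ /nas/lxc-new/     # keep hardlinks, ACLs, xattrs
    zfs rename nas/lxc nas/lxc-old
    zfs rename nas/lxc-new nas/lxc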
[18:13:14] <cirdan> wow those are expensive power supplies
[18:13:37] <cirdan> bunder: aftermarket: https://www.ebay.com/itm/Suitable-PSU-for-Thecus-N5200-N5200Pro-N5500-N5550-W5000-ENP-7020D-FSP350-701UJ-/191882753989
[18:14:14] <Shinigami-Sama> cheaper than getting an electrical engineer to figure out what's wrong and correct it. even if it's just a cap that died
[18:14:23] <bunder> 350w heh
[18:14:30] <Shinigami-Sama> oh my... 1/3 of the cost in shipping..
[18:14:36] <ghfields> found that one. 350w definitely will do it. Reviews are saying 5x 3.5 drives top consumption around 55w.
[18:14:38] <cirdan> more expensive than getting a different nas box
[18:14:54] <cirdan> yeah all spinning up at once it's about that ghfields
[18:15:16] <bunder> for $115 i'd go the fixing a cap route
[18:15:17] <Shinigami-Sama> if you're lucky, you can tell your BIOS/controller to spin them up at different times
[18:15:20] <cirdan> i have a 4 bay nas like that I was thinking of selling for $250
[18:15:20] <ghfields> Oh, I understand instantaneous load is definitely different
[18:16:05] <Shinigami-Sama> the HP microservers did that, they had an HD poweron delay, it was the only way the little 180w? PSU could handle two 3.5 drives...
[18:16:30] <ghfields> Is it really a one-off form factor?
[18:16:40] <cirdan> looks like it
[18:16:52] <cirdan> my emc nas has an external power supply. nice and cheap/easy to replace
[18:17:18] <Shinigami-Sama> actually you could probably do a brain transplant of that PSU
[18:17:30] <cirdan> depends if the board fits
[18:17:45] <Shinigami-Sama> if you value your time, it might work out to be more expensive though
[18:17:46] <cirdan> you can look and see if a fuse blew in yours
[18:17:58] <cirdan> i've had to replace a fuse or 2 before
[18:18:02] <bunder> resoldering the wires might be a pain, they're usually gigantic solder blobs
[18:18:16] <zfs> [zfsonlinux/zfs] default pool mountpoint name should conform to Filesystem Hierarchy Standard (/srv/poolname instead of /poolname) (#4814) comment by mailinglists35 <https://github.com/zfsonlinux/zfs/issues/4814#issuecomment-452379626>
[18:18:31] <bunder> if you swap the board and keep the original wiring
[18:18:34] <ghfields> I watched this PS repair video this weekend. Check out that accent!
[18:18:37] <ghfields> https://www.youtube.com/watch?v=HcYFbCqM61g
[18:19:42] <bunder> russia? :P
[18:19:45] <zfs> [zfsonlinux/zfs] Verify disks writes (#2526) comment by mailinglists35 <https://github.com/zfsonlinux/zfs/issues/2526#issuecomment-452380142>
[18:20:52] <DHowett> that Thecus thing looks like a FlexATX power supply. the dimensions on the listing don't match that, though, so \_(shrug)_/
[18:21:06] <bunder> i like great scott, but i have too many youtube subs to watch lol
[18:21:57] *** f_g <f_g!~f_g@213-47-131-124.cable.dynamic.surfer.at> has joined #zfsonlinux
[18:23:11] <ghfields> but as cirdan mentioned, a pico might also be an option: http://www.mini-box.com/s.nl/sc.8/category.13/.f
[18:23:26] <zfs> [zfsonlinux/zfs] Bump commit subject length to 72 characters (#8250) merged by Brian Behlendorf <https://github.com/zfsonlinux/zfs/issues/8250#event-2060826287>
[18:23:33] <ghfields> I want a free one though.
[18:26:47] <ghfields> Not going to run my gtx1080ti though...
[18:27:14] <zfs> [zfsonlinux/zfs] Fix missing dkms modules after upgrades (try 2) (#8216) merged by Brian Behlendorf <https://github.com/zfsonlinux/zfs/issues/8216#event-2060834579>
[18:27:18] <zfs> [zfsonlinux/zfs] dkms modules not built automatically for Fedora update or upgrade (#6902) closed by Brian Behlendorf <https://github.com/zfsonlinux/zfs/issues/6902#event-2060834583>
[18:29:05] <DHowett> in the time it's taken me to decide whether to maintain my pool (buy more 3tb drives) or upgrade it (switch to 7200rpm 4+tb drives), i could have spun up and destroyed hundreds, if not thousands, of new pools, filled with all my data. there seems to be a lesson in there somewhere
[18:29:42] <cirdan> ghfields: i'll give you a free one if you give me your gtx1080ti
[18:29:48] *** kaipee <kaipee!~kaipee@81.128.200.210> has quit IRC (Remote host closed the connection)
[18:29:57] <zfs> [zfsonlinux/zfs] Include third party licenses in dist tarballs (#8242) merged by Brian Behlendorf <https://github.com/zfsonlinux/zfs/issues/8242#event-2060841101>
[18:30:27] <ghfields> Don't have one either... but I think I have a gtx 280 in my basement somewhere.
[18:31:14] <zfs> [zfsonlinux/zfs] making sure the last quiesced txg is synced (#8239) comment by Brian Behlendorf <https://github.com/zfsonlinux/zfs/issues/8239>
[18:31:43] <ghfields> One of those connected to that thecus with all its atomness might not be a fun ride.
[18:33:04] <cirdan> my atom nas has a pcie slot as well :)
[18:39:08] *** metallicus <metallicus!~metallicu@bifrost.evert.net> has joined #zfsonlinux
[18:39:17] *** zfs sets mode: +b *!*@bifrost.evert.net$#zfsonlinux-quarantine
[18:41:43] * Shinigami-Sama is still waiting for his 1080ti... should be here on the 14th
[18:42:31] <bunder> i'm still quite happy with a 980
[18:43:57] <bunder> although i feel dirty buying a rx550 for a threadripper because its a basic gpu
[18:46:13] <Shinigami-Sama> I have 2x 4k displays beside me on the floor waiting for it
[18:46:27] <bunder> oh nevermind then :P
[18:46:42] <Shinigami-Sama> I figured I work remotely 100% of the time now, I want to not strain my eyes and back and enjoy working as much as possible
[18:46:45] <ghfields> bunch of researchers here wanted the multiple 1080ti cards. They cannot grasp their power draw and why many off the shelf workstation systems won't support them. They have enough slots.
[18:48:16] <ghfields> When we do get something figured out for them, I know I'm just giving them a system that will sit idle 97% of the time anyway.
[18:48:21] <Shinigami-Sama> yeah, I'm not too worried about mine, I have gold 750w PSU, though I do have 980x CPU so maybe...
[18:50:27] *** augustus <augustus!~augustus@c-73-152-30-9.hsd1.va.comcast.net> has joined #zfsonlinux
[18:51:35] *** gerhard7 <gerhard7!~gerhard7@ip5657ee30.direct-adsl.nl> has quit IRC (Quit: Leaving)
[18:52:48] <zfs> [zfsonlinux/zfs] OpenZFS 8473 - scrub does not detect errors on active spares (#8251) closed by Brian Behlendorf <https://github.com/zfsonlinux/zfs/issues/8251#event-2060893411>
[18:54:35] <buu> Aren't TIs only like 300w?
[18:57:45] <Shinigami-Sama> reference board puts them at 250
[18:58:34] <ghfields> The trick is when you want 2 or 3 in a system... and also expect the system to function. You really have to be aware of your PS abilities
[19:00:37] <Shinigami-Sama> thats what the 1500w Platinum PSUs are for right?
[19:00:45] <madwizard> ghfields: As I understand the workflow, I should withdraw the pull request, fix issues and raise a new one?
[19:01:14] <Shinigami-Sama> who cares if you need to power the monitors... or lights...
[19:01:19] <bunder> no keep it open and just push on top
[19:01:35] <bunder> we have tons of pr's open for months
[19:02:13] <ghfields> No, there is a way to force push.
[19:02:35] <madwizard> bunder: ok
[19:02:44] <madwizard> I'll need to read about it. I only know very basic git
[19:03:01] <bunder> https://github.com/zfsonlinux/zfs/wiki/Git-and-GitHub-for-beginners#correcting-issues-with-your-pull-request
[19:03:12] <madwizard> I also noticed that I didn't sign off my patch
[19:03:31] <DHE> that's automatic with "git commit -s"
[19:03:42] <madwizard> Ah, right. I even read about it last week but already forgot
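The flow the wiki page describes, in two commands; the branch name here is made up:

    git commit --amend -s                          # adds the missing Signed-off-by
    git push --force-with-lease origin my-branch   # updates the open PR in place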
[19:06:30] <bunder> i wrote that wiki article, so if it sucks, sorry :P
[19:06:34] <ghfields> bunder: looks like you can change "(commit message) By style guidelines, this has to be less than 50 characters in length." re: #8250
[19:06:39] <zfs> [zfs] #8250 - Bump commit subject length to 72 characters by Conan-Kudo <https://github.com/zfsonlinux/zfs/issues/8250>
[19:06:48] <bunder> yeah i suppose i can
[19:07:12] <buu> Shinigami-Sama: Well, at least you won't need to power the heater with that setup
[19:07:47] <ghfields> bunder: I've referenced it in the past. I appreciate it.
[19:08:17] <bunder> there :P
[19:09:03] <bunder> 72 seems odd too but what do i know
[19:09:31] <ghfields> I mean... it was merged like DOZENS of minutes ago.
[19:10:44] <bunder> i saw it, i did forget i mentioned the style bits in the wiki though
[19:12:52] <stefan00> when I boot my new gentoo install using zfs + initramfs, it takes a long time for initramfs to import the root pool. Long time meaning about 40 seconds. Nothing except the raw gentoo system on the pool yet. Is that normal? Haven't seen this on previous installs.
[19:14:23] <bunder> genkernel?
[19:14:28] <stefan00> yes
[19:14:34] <bunder> try dracut
[19:14:43] <bunder> i thought they fixed that, odd
[19:14:58] <zfs> [zfsonlinux/zfs] #8235 feature request: zpool iostat N should repeat header like vmstat N (#8246) new commit by Damian Wojsław <https://github.com/zfsonlinux/zfs>
[19:15:10] <stefan00> so its not normal and a gentoo issue, right?
[19:15:28] <bunder> probably
[19:15:48] <bunder> we don't do anything for genkernel like we do with the dracut scripts
[19:18:20] <zfs> [zfsonlinux/zfs] zfs receive and rollback can skew filesystem_count (#8232) merged by Brian Behlendorf <https://github.com/zfsonlinux/zfs/issues/8232#event-2060949735>
[19:19:43] <bunder> https://bugs.gentoo.org/show_bug.cgi?id=627320
[19:20:36] <bunder> apparently it's in progress? /shrug
[19:20:45] *** Albori <Albori!~Albori@216-229-75-72.fidnet.com> has quit IRC (Ping timeout: 250 seconds)
[19:32:04] <zfs> [zfsonlinux/zfs] Feature request: incremental scrub (#8248) comment by Tony Hutter <https://github.com/zfsonlinux/zfs/issues/8248#issuecomment-452403543>
[19:32:09] *** adilger <adilger!~adilger@S0106a84e3fe4b223.cg.shawcable.net> has joined #zfsonlinux
[19:33:14] <stefan00> bunder: thanks for helping! In progress, right. I'll go with dracut then, no problem.
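A minimal sketch of the genkernel-to-dracut switch suggested above, assuming the ZFS dracut module ships with your zfs package and that the root dataset is named something like rpool/ROOT/gentoo (both assumptions):

    # install dracut and rebuild the initramfs with the zfs module included
    emerge --ask sys-kernel/dracut
    dracut --force --add zfs --kver "$(uname -r)"
    # then boot with e.g. root=ZFS=rpool/ROOT/gentoo on the kernel command line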
[19:34:12] <cirdan> heh. ptx0 mentioned this before but https://forums.freebsd.org/threads/horrific-zfs-performance-on-new-st4000dm004-drive.66615/
[19:34:20] <cirdan> seagate 004=smr :/
[19:35:24] <cirdan> gotta love the single-digit IOPS
[19:36:03] <Shinigami-Sama> I love that last graph
[19:36:10] <cirdan> yeah
[19:36:14] <cirdan> how to tell if a drive is smr
[19:36:38] <Shinigami-Sama> 'hi I'm happy, I'm happy, wait what, dear god no, ............"
[19:37:09] <buu> What is gstat in this context?
[19:37:37] <cirdan> https://forums.freebsd.org/threads/horrific-zfs-performance-on-new-st4000dm004-drive.66615
[19:37:44] <cirdan> check out those sexy graphs
[19:38:06] <cirdan> and some people wonder why users try as hard as possible to avoid smr...
[19:38:22] <cirdan> oh
[19:38:23] <cirdan> heh
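For context, gstat is FreeBSD's GEOM I/O statistics tool, which is what the poster in that thread used to watch the collapse; the graphs show throughput cratering once the drive's persistent (CMR) cache region fills. On Linux, a hedged way to provoke the same signature with fio; the device path is an example and this will destroy data on it:

    # sustained random writes; on an SMR drive the IOPS typically collapse
    # after a few minutes once the CMR cache fills (DESTRUCTIVE to /dev/sdX)
    fio --name=smr-probe --filename=/dev/sdX --rw=randwrite --bs=4k \
        --ioengine=libaio --iodepth=32 --direct=1 --time_based --runtime=600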
[19:44:30] <ghfields> Wasn't there a recent video from one of the openzfs conferences from an HD company talking about SMR?
[19:44:44] <cirdan> prolly
[19:46:08] <ghfields> The ones I found are pretty old (2014)
[19:46:13] *** Albori <Albori!~Albori@216-229-75-72.fidnet.com> has joined #zfsonlinux
[19:49:45] <zfs> [zfsonlinux/zfs] Make zpool status counters match err events count (#7817) comment by Tony Hutter <https://github.com/zfsonlinux/zfs/issues/7817#issuecomment-452409316>
[19:50:31] <ghfields> BTW... just found the ZFS user Conference registration (DATTO, April 18, 2019, Connecticut)
[19:50:37] <zfs> [zfsonlinux/zfs] zfs should optionally send holds (#7513) new review comment by Paul Zuchowski <https://github.com/zfsonlinux/zfs/pull/7513#discussion_r246110196>
[19:52:01] <zfs> [zfsonlinux/zfs] Implement ioctl version of OpenSolaris `open(path, O_XATTR, mode)` and `openat(fd, name, O_XATTR, mode)` (#4437) comment by Dolf Schimmel <https://github.com/zfsonlinux/zfs/issues/4437#issuecomment-452410036>
[19:53:26] <ghfields> I might even be able to consider it since it is on the right coast.
[19:53:32] <zfs> [zfsonlinux/zfs] zfs should optionally send holds (#7513) comment by Paul Zuchowski <https://github.com/zfsonlinux/zfs/issues/7513#issuecomment-452410554>
[20:00:46] <cirdan> ptx0: well I ordered the refurb wd gold/hgst ultrastar enterprise let's see how it does
[20:01:12] <PMT> I'm p. sure I dislike Gregor's idea for the compat feature, b/c it's just punting to a form of the old version increments
[20:01:23] <PMT> ghfields: not very recent AFAIK
[20:01:36] <PMT> Though ptx0 can tell you about his successes mitigating SMR with MAC
[20:03:05] *** zrav <zrav!~zravo_@2001:a61:460b:9d01:8db8:9c5:5dea:a1d> has quit IRC (Read error: Connection reset by peer)
[20:10:46] *** futune <futune!~futune@83.240.61.51> has joined #zfsonlinux
[20:15:12] *** rlaager <rlaager!~rlaager@grape.coderich.net> has quit IRC (Quit: ZNC 1.6.3+deb1ubuntu0.1 - http://znc.in)
[20:15:37] <ghfields> I think I was remembering this one https://www.youtube.com/watch?v=a2lnMxMUxyc&index=6&list=PLaUVvul17xScvtic0SPoks2MlQleyejks (Openzfs european 2015)
[20:17:37] *** gerhard7 <gerhard7!~gerhard7@ip5657ee30.direct-adsl.nl> has joined #zfsonlinux
[20:20:11] *** rlaager <rlaager!~rlaager@grape.coderich.net> has joined #zfsonlinux
[20:20:32] <cirdan> PMT: DeHackEd uses it too
[20:21:15] <PMT> cirdan: yeah but I didn't think he used it to make SMR drives not suck. I knew he used it a bunch.
[20:22:04] <cirdan> I'm pretty sure he uses it for just that reason :) but I could be misremembering
[20:22:25] *** jasonwc <jasonwc!~jasonwc@pool-72-66-15-203.washdc.fios.verizon.net> has joined #zfsonlinux
[20:26:41] <FinalX> I also use a stripe of 3 SMRs still, they've been purring along OK-ish. Not great performance or anything but they serve their purpose.
[20:27:39] *** stefan00 <stefan00!~stefan00@ip9234924b.dynamic.kabel-deutschland.de> has quit IRC (Quit: stefan00)
[20:30:50] <Slashman> hey, arc_summary returns nothing on debian 9 from backports (0.7.12) with kernel 4.18 (not sure if it's relevant). is this known? should I file the issue with debian?
[20:31:54] <Slashman> oh, forget it, module not loaded :x
[20:33:15] <ghfields> https://youtu.be/a2lnMxMUxyc?list=PLaUVvul17xScvtic0SPoks2MlQleyejks&t=1558 "You really don't have a recommendation.... here's some hardware"
[20:33:30] <PMT> Slashman: you could file a bug about better informing you of this edge case if you really want to.
[20:33:42] <ghfields> zfs might die.... Yea exactly
[20:34:31] <Slashman> PMT: I could, but I'm too lazy :p
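A minimal illustration of the failure mode Slashman hit: arc_summary reads the kstats under /proc/spl/kstat/zfs/, which only exist once the zfs module is loaded, so checking for the module first is a reasonable guard.

    # arc_summary has nothing to report until the zfs module is loaded
    lsmod | grep -q zfs || sudo modprobe zfs
    arc_summary | head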
[20:40:26] <zfs> [zfsonlinux/zfs] Verify disks writes (#2526) comment by Brian Behlendorf <https://github.com/zfsonlinux/zfs/issues/2526#issuecomment-452425858>
[20:52:34] *** jseiters <jseiters!~jseiters@24.152.254.23.res-cmts.tvh.ptd.net> has joined #zfsonlinux
[21:05:51] <zfs> [zfsonlinux/zfs] "Dataset does not exist" in incremental receives in current master (#8067) comment by kpande <https://github.com/zfsonlinux/zfs/issues/8067#issuecomment-452433632>
[21:08:43] <ptx0> lol @ #4814
[21:08:49] <ptx0> FireSnake: you really don't get it, do you
[21:08:51] <zfs> [zfs] #4814 - default pool mountpoint name should conform to Filesystem Hierarchy Standard (/srv/poolname instead of /poolname) <https://github.com/zfsonlinux/zfs/issues/4814>
[21:09:05] <ptx0> no one cares about the FHS, no one uses /srv
[21:13:04] <Shinigami-Sama> that's olde Solaris cruft
[21:13:08] <Shinigami-Sama> like /export/home
[21:13:38] * CompanionCube raises hand and uses /srv
[21:13:39] *** Slashman <Slashman!~Slash@cosium-152-18.fib.nerim.net> has quit IRC (Remote host closed the connection)
[21:14:07] <Shinigami-Sama> this is why we have edgeless safety cubes
[21:14:22] <zfs> [zfsonlinux/zfs] 0.7.12 gives warning messages on Centos Release 6.10 (#8245) reopened by kpande <https://github.com/zfsonlinux/zfs/issues/8245#event-2061222199>
[21:14:46] * CompanionCube wonders why you'd mount a pool under /srv though
[21:15:53] <CompanionCube> 'Site-specific data served by this system, such as data and scripts for web servers, data offered by FTP servers, and repositories for version control systems.' i don't feel it
[21:16:37] <CompanionCube> unless you're using NFS or such, it'd be a decent stretch to say that a system serves a pool
[21:17:13] <CompanionCube> Shinigami-Sama: /srv seemed/seems like the best place to stuff gogs in, so
[21:18:37] <zfs> [zfsonlinux/zfs] Removed suggestion to use root dataset as bootfs (#8247) comment by George Melikov <https://github.com/zfsonlinux/zfs/issues/8247>
[21:18:45] <Shinigami-Sama> I'm half teasing, I'm thinking of moving to /export/home so I can shrink my / volume before I move everything
[21:21:50] <ptx0> CompanionCube: that goes into /opt.
[21:22:56] <CompanionCube> eh, you could say that for the binaries
[21:23:04] <CompanionCube> but storing data in /opt feels weird
[21:23:45] <PMT> Like storing data in /srv? :P
[21:23:56] <DHE> cirdan: I'm not using MAC to mitigate SMR performance actually. I considered it but the one server I have with SMR disks doesn't have a supported version running
[21:24:08] <DHE> (exactly 1 server with SMR disks in it right now)
[21:24:14] <CompanionCube> PMT: well, my example specifically hits two out of three of the mentioned things
[21:24:34] <CompanionCube> data for the web frontend and the served git repositories :)
[21:28:25] <ptx0> i am autistic and even i think arguing about /srv and wanting it implemented just for the sake of following some arbitrary rules that NO ONE follows is "a bit much"
[21:28:32] <cirdan> DHE: ah my bad. heh you have 1 server too many :)
[21:28:49] <FinalX> https://www.tldp.org/LDP/Linux-Filesystem-Hierarchy/html/srv.html
[21:28:58] <CompanionCube> ptx0: yeah, it seems rather pointless
[21:29:00] <FinalX> "as there is currently no consensus"
[21:29:01] <FinalX> lol
[21:29:04] <cirdan> CompanionCube: /mnt should be used cause it's a mount
[21:29:05] <cirdan> :)
[21:29:18] <zfs> [zfsonlinux/zfs] Removed suggestion to use root dataset as bootfs (#8247) comment by Brian Behlendorf <https://github.com/zfsonlinux/zfs/issues/8247#issuecomment-452440455>
[21:29:31] <ptx0> lol /srv/www
[21:29:34] <CompanionCube> cirdan: inb4 someone complains that you can't use /mnt as it's for temporary mounts
[21:29:35] <ptx0> it's /var/www, stupid
[21:29:45] <ptx0> CompanionCube: but what about /media
[21:29:47] <FinalX> how about following the idea of ZFS's inception and NOT use / to begin with, but just mount pool as pool
[21:29:49] <cirdan> hell I could see a default of /zfs/$pool or /mnt/zfs/$pool but not /srv
[21:29:53] <FinalX> screw the leading /
[21:30:02] <cirdan> FinalX: where?
[21:30:05] <CompanionCube> ptx0: no-one uses /media
[21:30:07] <cirdan> ./$pool
[21:30:18] <ptx0> CompanionCube: automount does
[21:30:18] <FinalX> no, no root.. forget the root
[21:30:19] <cirdan> that would be fun
[21:30:22] <FinalX> (I'm just kidding, obv)
[21:30:33] <cirdan> CompanionCube: umm /mnt is temp mounts? since when
[21:30:47] <cirdan> /tmp is for tmp stuff :)
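Setting aside the FHS argument, the default mountpoint is just a property you can override. A sketch of cirdan's hypothetical /mnt/zfs/$pool layout; the pool name and vdevs are examples:

    # choose the mountpoint at pool creation time
    zpool create -m /mnt/zfs/tank tank mirror sda sdb
    # or move an existing pool's root dataset afterwards
    zfs set mountpoint=/mnt/zfs/tank tank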
[21:31:04] <ptx0> even within the FHS they document how no one cares about the FHS, repeatedly:
[21:31:07] <ptx0> It should be noted that some distributions like Debian allocate /floppy and /cdrom as mount points while Redhat and Mandrake puts them in /mnt/floppy and /mnt/cdrom respectively.
[21:31:20] <FinalX> always has been, was for flo... yeah, that
[21:31:31] <CompanionCube> cirdan: iirc some old FHS version or something
[21:31:36] <ptx0> but it says /media is for removable media
[21:31:39] <cirdan> yeah we don't listen
[21:31:44] <ptx0> and then says put cdrom and floppy into /mnt
[21:31:46] <CompanionCube> not than anyone cares except for curmdegons
[21:31:53] <FinalX> still described as https://www.tldp.org/LDP/Linux-Filesystem-Hierarchy/html/mnt.html here
[21:31:55] <CompanionCube> i spelled that wrong but I don't care
[21:31:59] <FinalX> ptx0: yeah but check first line on https://www.tldp.org/LDP/Linux-Filesystem-Hierarchy/html/media.html
[21:32:00] <FinalX> lol
[21:32:01] <ptx0> FHS => "Fuckin' Hella Stupid"
[21:32:06] <FinalX> "
[21:32:08] <FinalX> "Amid much controversy and consternation on the part of system and network administrators a directory containing mount points for removable media has now been created. Funnily enough, it has been named /media."
[21:32:34] <cirdan> I honestly use /tmp for real temp mounts
[21:32:49] <cirdan> in boot envs i even just mount over /tmp :)
[21:32:53] <cirdan> one less command to do
[21:33:02] <FinalX> after working here for 17+ years, I realised that no mounts are really all that temporary
[21:33:20] <cirdan> yeah nobody ever ejects that installer cd
[21:33:27] <FinalX> and I use a path in /mnt if I just can't think of a better one, or it should be mounted but not in a general path
[21:33:45] <cirdan> I put my pools in /mnt
[21:34:02] <cirdan> /mnt/(fishtank|mediatank|fishbowl)
[21:34:43] <cirdan> guess I really need to get my backup tapes labeled so I can expand my pool
[21:34:43] <FinalX> like /mnt/plexdrive, because plexdrive's mount is only used by rclone for decryption, which in turn is just mounted as /mnt/google, followed by a unionfs of that + /mnt/local (big zfs stripe) into /data
[21:34:44] <CompanionCube> huh, apparently they didn't bother to change the description of /mnt for 3.0
[21:34:57] <CompanionCube> seems stupid
[21:35:03] <cirdan> FHS? mostly
[21:35:04] <cirdan> :)
[21:35:33] <FinalX> in other news, all those datasets I mentioned earlier are now rsynced into new datasets with formD <3
[21:35:41] <cirdan> nice
[21:35:43] <ptx0> ugh, my roommate keeps using my bath towels, in my bathroom. he has his own. took out all towels but the one that goes on the floor. he used that to dry off.
[21:35:55] <FinalX> for some reason they take up less space now... :)
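FinalX's rsync-into-new-datasets migration is forced by how the property works: normalization can only be set when a dataset is created, so existing data has to be copied across. A sketch with hypothetical dataset names:

    # create the replacement dataset with Unicode normalization (creation-time only)
    zfs create -o normalization=formD -o utf8only=on tank/data.new
    # copy everything across, preserving hardlinks, ACLs and xattrs
    rsync -aHAX /tank/data/ /tank/data.new/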
[21:35:58] * ptx0 gag
[21:36:08] <PMT> You have a roommate now?
[21:36:20] <ptx0> welcome to 3 weeks ago, or a month ago or whenever it was
[21:36:28] <cirdan> PMT: use his to dry your ass after biking
[21:36:34] <cirdan> then hang them back up
[21:36:37] <cirdan> err ptx0
[21:36:37] <PMT> I knew you moved, I didn't realize it came with a sentient person.
[21:36:42] <ptx0> cirdan: that seems... unlawful
[21:37:10] <ptx0> PMT: there was this story that he was constantly out of the house working on jobs and i'd mostly have the place to myself but, uh
[21:37:11] <cirdan> it only seems that way
[21:37:32] <CompanionCube> ptx0: but it h
[21:37:37] <ptx0> dude's a severe narcissist though he seems mostly harmless
[21:37:40] <CompanionCube> ptx0: but it has been determined that was a lie?
[21:37:51] <cirdan> CompanionCube: it was a good story
[21:37:55] <ptx0> CompanionCube: not yet
[21:38:16] <ptx0> i guess over the course of a year it's not been very long yet and there's plenty of time left for the average to change
[21:39:04] <CompanionCube> truth
[21:39:27] <DHE> cirdan: sadly I can't run everything on 0.8.0-rc or git head. the servers doing important stuff go through the usual change management system. things like my workstation, the backup target and the new developments I can be a little more loose with.
[21:39:53] <ptx0> he is really frustrating so far though, the carpet is dirty, gave him some money and asked him to rent a shampooer and he had so many excuses, like, he owns this house and i'm a tenant. whose job is it to rent equipment? :P
[21:40:27] <ptx0> the money is still sitting on the counter where i left it and carpets are still nasty, good stuff
[21:40:48] <CompanionCube> shampooer is a weird name
[21:41:00] <ptx0> well it isn't a proper noun, so there's that
[21:41:29] <CompanionCube> it just doesn't sound like a carpet thing
[21:41:42] <ptx0> yeah like carpet is made out of hair
[21:42:20] <ptx0> it's snowing today on vancouver island though for real, not that fake "wintry mix" shit
[21:42:40] <ptx0> i never thought i'd see snow again
[21:43:11] * ptx0 is further north than ever before in his life and has less snow than ever
[21:44:30] *** tnebrs <tnebrs!~barely@212.117.188.13> has joined #zfsonlinux
[21:50:05] <ghfields> Really bummed that there isn't a 10g-baseT version of this too https://www.balticnetworks.com/mikrotik-4-port-sfp-802-3at-af-switch-l5.html
[21:50:37] <Phil-Work> will it run 10g-baseT SFP+?
[21:50:38] <PMT> If nothing else, 10GBASE-T has much higher power requirements.
[21:51:11] <ghfields> those transceivers would cost the same as the switch (per side)
[21:51:22] <Phil-Work> yeh, they're not cheap
[21:51:54] <Phil-Work> 2 port SFP+ PCI-E cards tend to work out cheaper
[21:52:12] <Phil-Work> if you're not constrained by structured cabling, that is
[21:53:48] <ghfields> I'm thinking for my home. Put a bunch of cat6 in and then tons of cellulose insulation in the attic. More runs of any kind will not be fun.
[21:54:47] <Phil-Work> you've already run cat6?
[21:55:31] <ghfields> yea. Bought a house, pulled out all the POTS wiring and replaced it with 4x cat6 at each plate.
[21:55:46] <Phil-Work> yeh, that's my problem
[21:55:50] <Phil-Work> too late to run more cable now
[21:55:52] <ghfields> I think I have 40 runs now.
[21:56:32] <Phil-Work> house v3.0 shall have some quantity of SM fibre
[21:56:49] <Phil-Work> for now, I shall just have to live with it
[21:57:18] *** jseiters <jseiters!~jseiters@24.152.254.23.res-cmts.tvh.ptd.net> has quit IRC (Quit: Konversation terminated!)
[21:57:41] <Sketch> SM = long run, MM = short run
[21:57:47] <ghfields> at work, they had cat5.... someone came in and told them that fibre was the future. They ripped out the distributed switch closets, made a unified switch room in the basement. Now we are stuck with a bunch of OM1 MM fibre.
[21:57:55] <Sketch> you probably don't want SM fiber in your house unless it's coming in from an ISP
[21:58:04] <Phil-Work> MM kills kittens
[21:58:20] <Phil-Work> there's very little reason in this day and age to use MM - the price difference is negligible
[21:59:27] <ghfields> OM1 will support 10G for 33m, but like I said, they killed off the switch closets.
[22:00:08] <ghfields> They chose to kill the PBX also, so they want to do PoE... guess what is really hard to do across glass? Electricity
[22:00:31] <Phil-Work> this is why SM
[22:00:44] <Phil-Work> the same SM fibre that supports your 1, 10, 25, 40 and 100G also now supports 400G
[22:00:52] <ghfields> We have media converters EVERYWHERE. What a mess.
[22:01:39] <Sketch> Phil-Work: there's not a big difference in the fiber itself, but there's a big difference in the cost of transceivers
[22:01:51] <Phil-Work> Sketch, not really
[22:02:00] <Phil-Work> LX 10KM optics are $7 a pop
[22:02:09] <Phil-Work> unless you're buying thousands of them, it's not going to dent your wallet
[22:02:11] <Sketch> especially if you buy from cisco ;)
[22:02:35] <Phil-Work> well, yes, that ^^
[22:02:58] <Phil-Work> once you've bought $vendor optics, you've no longer got a building to use them in as you had to sell it as well as your children and pets to pay for said optics
[22:03:58] <Sketch> even much cheaper 3rd-party cisco-compatible optics tend to follow cisco's cost structure, where SM costs twice as much as MM
[22:04:32] <Phil-Work> not sure what fs.com charge for MM 1G optics but $7 for SM can't be sniffed at
[22:04:59] <Sketch> and many switches practically require brand name or compatible optics
[22:05:01] <Sketch> using fiber to servers is just silly unless you're on the bleeding edge of speeds
[22:05:29] <Phil-Work> fs.com do a decent job of making the optics compatible
[22:06:25] <Phil-Work> fibre to servers gives you a lot more port density
[22:06:39] <Phil-Work> you can get 4x10G off a single QSFP+
[22:06:55] <Phil-Work> with much lower power draw than 10G-BaseT
[22:08:56] <ghfields> (q)sfp+ is pretty nice within a datacenter
[22:09:13] <zfs> [zfsonlinux/zfs] zfs filesystem skipped by df -h (#8253) created by Paul Zuchowski <https://github.com/zfsonlinux/zfs/issues/8253>
[22:10:29] <zfs> [zfsonlinux/zfs] zfs filesystem skipped by df -h (#8253) comment by Paul Zuchowski <https://github.com/zfsonlinux/zfs/issues/8253#issuecomment-452452388>
[22:10:42] <Sketch> this is true, but that cabling sounds awful
[22:10:56] <Sketch> and it's probably going to be DAC not fiber anyway ;)
[22:14:06] <gyakovlev> anyone using 0.7.12 on linux-4.20 with this patch https://github.com/zfsonlinux/zfs/pull/8227 ?
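For anyone in gyakovlev's position: GitHub serves any pull request as a plain patch at a predictable URL, so one hedged way to carry #8227 on top of a 0.7.12 tree (the checkout layout is assumed) is:

    # inside a zfs git checkout at the 0.7.12 tag
    curl -L https://github.com/zfsonlinux/zfs/pull/8227.patch | git am
    # then rebuild and reinstall the modules as usual for your distro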
[22:20:05] <Phil-Work> Sketch, I've seen some pretty nice deployments
[22:20:32] <Phil-Work> Juniper QFX 10k in the middle of the row with MPO terminated fibres going out to each cab
[22:20:57] <Phil-Work> patch panel in each cab converting MPO to LC then a patch cable to each server
[22:21:10] <Phil-Work> twas a thing of beauty
[22:27:32] <Sketch> hmm, never seen MPO before
[22:27:40] <Sketch> i was thinking more along the lines of DAC octopus cables
[22:27:46] <Sketch> from 40 to 4x10
[22:28:23] <Phil-Work> yeh, we run some of the DAC breakout cables
[22:28:29] * Sketch has a ticket at work to move some servers off of those that were set up before he was here
[22:28:31] <Phil-Work> really fugly
[22:29:48] <Sketch> it's a 10GbaseT switch, but instead of buying some 10GbT SFPs for the servers they just bought some DAC breakout cables
[22:30:39] *** gerhard7 <gerhard7!~gerhard7@ip5657ee30.direct-adsl.nl> has quit IRC (Quit: Leaving)
[22:32:32] <Phil-Work> yeh, likewise
[22:33:06] <Phil-Work> we connect on the front of the rack into some firewalls with breakout cables, also
[22:33:12] <Phil-Work> literally no nice way to manage that mess of cable
[22:34:32] <Sketch> i've also been at places that did 100% fiber deployments in the past too
[22:34:49] <zfs> [zfsonlinux/zfs] OpenZFS - 6363 Add UNMAP/TRIM functionality (#5925) comment by Matthew Ahrens <https://github.com/zfsonlinux/zfs/issues/5925#issuecomment-452459752>
[22:35:07] <Sketch> it seems like a good idea until you end up with 4 fibers going to each server in a rack full of servers, and you have to figure out how to route all of those fibers so that they don't bend or kink
[22:35:42] <Sketch> not to mention testing/cleaning the cables
[22:36:01] <Sketch> technically you are supposed to clean them every time you disconnect/reconnect
[22:36:20] <Sketch> particularly SM, as they are much more susceptible to dirt than MM
[22:36:41] <Phil-Work> can't say I've ever cleaned a fibre
[22:36:53] <Phil-Work> give it a blow, like a Nintendo cartridge and you're set ;)
[22:39:53] <zfs> [zfsonlinux/zfs] zfs filesystem skipped by df -h (#8254) created by Paul Zuchowski <https://github.com/zfsonlinux/zfs/issues/8254>
[22:40:55] *** elxa <elxa!~elxa@2a01:5c0:e086:8351:5a32:e360:98cb:966f> has quit IRC (Ping timeout: 252 seconds)
[22:48:34] <zfs> [zfsonlinux/zfs] zfs filesystem skipped by df -h (#8254) comment by Paul Zuchowski <https://github.com/zfsonlinux/zfs/issues/8254#issuecomment-452463907>
[22:52:21] <ptx0> wtf ahrens
[23:03:04] <chesty> ptx0, I got the message about perf, cheers. I've never used it before but I'll see if I can find a tutorial and I'll submit a bug with ubuntu and I'll confirm kernel 4.15 works OK. I've confirmed it's the zfs filesystem on usb that's causing the hang
[23:03:28] <chesty> and I forgot to say thanks, cheers champ, appreciate it
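Since perf came up: a minimal capture to start from while reproducing the hang, using only standard perf options; the 30-second window is an arbitrary example.

    # sample all CPUs with call graphs for 30 seconds, then summarize
    sudo perf record -a -g -- sleep 30
    sudo perf report --stdio | head -n 50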
[23:07:11] <jasonwc> The bot says that ahrens commented on #5925 but the most recent comment is from dweeezil 7 days ago. Is it not public or something?
[23:07:19] <zfs> [zfs] #5925 - OpenZFS - 6363 Add UNMAP/TRIM functionality by dweeezil <https://github.com/zfsonlinux/zfs/issues/5925>
[23:07:26] <ghfields> Exactly what I was typing up.....
[23:17:29] <zfs> [zfsonlinux/zfs] auto compression (#5928) comment by Matthew Ahrens <https://github.com/zfsonlinux/zfs/issues/5928#issuecomment-452471982>
[23:17:38] <zfs> [zfsonlinux/zfs] auto compression (#5928) closed by Matthew Ahrens <https://github.com/zfsonlinux/zfs/issues/5928#event-2061476162>
[23:22:48] <zfs> [zfsonlinux/zfs] Implement Redacted Send/Receive (#7958) new review comment by Paul Dagnelie <https://github.com/zfsonlinux/zfs/pull/7958#discussion_r246172823>
[23:25:33] *** rjvb <rjvb!~rjvb@2a01cb0c84dee6009bda76eb03bc33f7.ipv6.abo.wanadoo.fr> has quit IRC (Ping timeout: 252 seconds)
[23:25:44] <zfs> [zfsonlinux/zfs] Use ZFS version for pyzfs and remove unused requirements.txt (#8243) comment by loli10K <https://github.com/zfsonlinux/zfs/issues/8243#issuecomment-452474265>
[23:26:00] *** troyt <troyt!zncsrv@2601:681:4100:8981:44dd:acff:fe85:9c8e> has quit IRC (Ping timeout: 252 seconds)
[23:26:04] <zfs> [zfsonlinux/zfs] Use ZFS version for pyzfs and remove unused requirements.txt (#8243) comment by loli10K <https://github.com/zfsonlinux/zfs/issues/8243>
[23:29:17] <zfs> [zfsonlinux/zfs] Implement Redacted Send/Receive (#7958) new review comment by Paul Dagnelie <https://github.com/zfsonlinux/zfs/pull/7958#discussion_r246174656>
[23:32:39] <zfs> [zfsonlinux/zfs] Implement Redacted Send/Receive (#7958) new review comment by Paul Dagnelie <https://github.com/zfsonlinux/zfs/pull/7958#discussion_r246175633>
[23:38:14] *** troyt <troyt!zncsrv@2601:681:4100:8981:44dd:acff:fe85:9c8e> has joined #zfsonlinux
[23:39:25] *** kim0 <kim0!uid105149@ubuntu/member/kim0> has quit IRC (Quit: Connection closed for inactivity)
[23:43:50] *** simukis <simukis!~simukis_@78-63-88-48.static.zebra.lt> has quit IRC (Quit: simukis)
[23:48:31] *** wadeb <wadeb!~wadeb@38.101.104.148> has joined #zfsonlinux
[23:52:41] <zfs> [zfsonlinux/zfs] Add TRIM support - replaces #5925 and #7363 (#8255) created by Tim Chase <https://github.com/zfsonlinux/zfs/issues/8255>
[23:52:43] <ptx0> lmfao
[23:52:47] <ptx0> ^ that is the best
[23:52:55] <ptx0> basically "no" was the answer.
[23:53:12] <zfs> [zfsonlinux/zfs] OpenZFS - 6363 Add UNMAP/TRIM functionality (#5925) comment by Tim Chase <https://github.com/zfsonlinux/zfs/issues/5925#issuecomment-452481101>
[23:53:16] <zfs> [zfsonlinux/zfs] OpenZFS - 6363 Add UNMAP/TRIM functionality (#5925) closed by Tim Chase <https://github.com/zfsonlinux/zfs/issues/5925#event-2061539431>
[23:53:25] <zfs> [zfsonlinux/zfs] OpenZFS - 6363 Add UNMAP/TRIM functionality (v2) (#7363) comment by Tim Chase <https://github.com/zfsonlinux/zfs/issues/7363#issuecomment-452481187>
[23:53:27] <ptx0> chesty: if i can save anyone from using USB for storage i've done enough for the day
[23:53:30] <zfs> [zfsonlinux/zfs] OpenZFS - 6363 Add UNMAP/TRIM functionality (v2) (#7363) closed by Tim Chase <https://github.com/zfsonlinux/zfs/issues/7363#event-2061539925>
[23:53:48] <PMT> ptx0: no to which?
[23:54:19] <jasonwc> PMT: ahrens asked if he could close out #5925 in favor of #7363 even though the recent activity was on #5925
[23:54:27] <zfs> [zfs] #5925 - OpenZFS - 6363 Add UNMAP/TRIM functionality by dweeezil <https://github.com/zfsonlinux/zfs/issues/5925>
[23:54:32] <bunder> jesus christ how many prs do we need for trim
[23:54:52] <jasonwc> PMT: Instead, Tim Chase closed out both #5925 and #7363 and created a new PR - #8255
[23:54:57] <bunder> it's not like github gets slow on big PRs with that hiding thing it does
[23:55:02] <zfs> [zfs] #5925 - OpenZFS - 6363 Add UNMAP/TRIM functionality by dweeezil <https://github.com/zfsonlinux/zfs/issues/5925>
[23:55:49] <PMT> bunder: dunno, how many until people give up on it :P
[23:56:00] <jasonwc> lol
[23:56:35] <jasonwc> or when it's no longer necessary
[23:56:51] <bunder> https://i.imgur.com/GP2Q7rQ.png
[23:57:10] <ptx0> bunder: it can still unicorn
[23:57:42] <bunder> maybe microsoft should buy better servers :P
[23:58:03] <bunder> they did just give everybody private repos
[23:58:18] <ptx0> they probably just have huge overhead and inefficient backend
[23:58:38] <ptx0> bunder: of course they did
[23:58:44] <ptx0> they want more people's private data ^_^
[23:58:55] <ptx0> peoples'?
[23:58:59] <ptx0> fuckin' english
[23:59:04] <PMT> I imagine it's more "allocate more servers" than buy, given (Azure)
[23:59:29] <ptx0> PMT: tis both.
[23:59:37] <ptx0> allocate and buy more.
[23:59:56] <bunder> do they use azure? i forget what happened with that outage