   January 7, 2019
[00:17:55] *** rjvb <rjvb!~rjvb@2a01cb0c84dee6009bda76eb03bc33f7.ipv6.abo.wanadoo.fr> has quit IRC (Ping timeout: 252 seconds)
[00:19:11] *** mmlb2 <mmlb2!~mmlb@76-248-148-178.lightspeed.miamfl.sbcglobal.net> has joined #zfsonlinux
[00:20:37] *** mmlb <mmlb!~mmlb@76-248-148-178.lightspeed.miamfl.sbcglobal.net> has quit IRC (Ping timeout: 258 seconds)
[01:02:47] *** adilger <adilger!~adilger@S0106a84e3fe4b223.cg.shawcable.net> has quit IRC (Ping timeout: 240 seconds)
[01:07:32] *** Essadon <Essadon!~Essadon@81-225-32-185-no249.tbcn.telia.com> has quit IRC (Quit: Qutting)
[01:14:46] <ptx0> lol
[01:14:59] <ptx0> write a wrapper script and use that with the pam module that unlocks encrypted home dir
[01:15:22] *** PewpewpewPantsu <PewpewpewPantsu!~pewpew@unaffiliated/setsuna-xero> has joined #zfsonlinux
[01:17:27] *** Setsuna-Xero <Setsuna-Xero!~pewpew@unaffiliated/setsuna-xero> has quit IRC (Ping timeout: 240 seconds)
[01:46:01] *** JTL is now known as JLT
[01:46:30] *** JLT is now known as JTL
[01:48:34] *** Albori <Albori!~Albori@216-229-75-72.fidnet.com> has quit IRC (Ping timeout: 246 seconds)
[02:05:19] *** rjvbb <rjvbb!~rjvb@2a01cb0c84dee60069044dc5ee338f75.ipv6.abo.wanadoo.fr> has quit IRC (Ping timeout: 268 seconds)
[02:06:37] *** simukis <simukis!~simukis_@78-63-88-48.static.zebra.lt> has quit IRC (Quit: simukis)
[02:16:39] *** Albori <Albori!~Albori@216-229-75-72.fidnet.com> has joined #zfsonlinux
[02:17:35] *** theorem <theorem!~theorem@pool-173-68-77-128.nycmny.fios.verizon.net> has quit IRC (Read error: Connection reset by peer)
[02:19:40] <zfs> [zfsonlinux/zfs] CPU lockup with arc_reclaim responsible for most of the CPU time (#6187) comment by Tomas Mudrunka <https://github.com/zfsonlinux/zfs/issues/6187#issuecomment-451795048>
[02:26:05] <zfs> [zfsonlinux/zfs] CPU lockup with arc_reclaim responsible for most of the CPU time (#6187) comment by kpande <https://github.com/zfsonlinux/zfs/issues/6187#issuecomment-451795772>
[02:26:34] <zfs> [zfsonlinux/zfs] CPU lockup with arc_reclaim responsible for most of the CPU time (#6187) closed by kpande <https://github.com/zfsonlinux/zfs/issues/6187#event-2056567336>
[02:27:52] <Markow> Linux Kernel 5.0-rc1 was just released at 02:18 CET, skipping over 4.21. Cool!
[02:30:42] <DeHackEd> cool.... I guess ending on 4.20 was good enough for linus
[02:30:50] <DHowett> !
[02:30:51] <ptx0> that fuckin stoner.
[02:30:56] <ptx0> ;]
[02:31:14] <DeHackEd> nearly 4 years between 4.0 and 5.0
[02:31:23] <ptx0> mason: 'member https://youtu.be/69e8oa85F3g ?
[02:31:28] <DeHackEd> this is a reasonable release version update rate
[02:31:31] * ptx0 'members
[02:31:31] * DeHackEd stares at Firefox
[02:31:51] <ptx0> DeHackEd: zfs on linux has had like 6 years between 0.6 and 0.8
[02:32:14] <DHowett> heck, it could be 7.. or 8! until 0.8 actually comes out ;)
[02:32:30] <ptx0> the "so far" is implied
[02:32:37] <ss23> The kernel in my heart will always be 2.6.32
[02:32:43] <ptx0> 2.6.9 here
[02:32:53] <DHowett> ss23: you should consider getting yourself upgraded if you heart is running 2.6
[02:33:06] <DeHackEd> that's rhel 4?
[02:33:17] <ptx0> slackware 8 or something
[02:39:19] *** jasonwc <jasonwc!~jasonwc@pool-72-66-15-203.washdc.fios.verizon.net> has joined #zfsonlinux
[02:40:48] <ptx0> https://access.redhat.com/articles/3078
[02:40:58] <jasonwc> I noticed I was getting a few million ARC hits every minute associated with Monitorix and I discovered it was caused by "zfs get -rHp -o value available poolname." Presumably the goal was to determine the used and free space on the pool but the recursive search hits every snapshot even though snapshots have no available space. Presumably, there's a better way to do what Monitorix is trying
[02:40:58] <jasonwc> to do.
[02:41:20] <ptx0> yeah probably
[02:41:45] <ptx0> i hate 'zfs get'
[02:41:45] <DeHackEd> alternatively, zfs list -Hp -o name,avail
[02:41:48] <ptx0> yeah
[02:41:54] <DeHackEd> requires different parsing but I think it's worth it
[02:41:55] <ptx0> zfs list is superior imo
[02:42:14] <DeHackEd> if you're doing 'zfs get -r' without -s you're probably doing it wrong
[02:42:20] <DeHackEd> at least that's my opinion
[02:44:01] <jasonwc> yeah, that zfs list command is 50K ARC hits vs 1 million for the zfs get
[02:44:27] <ptx0> :P
[02:44:41] <ptx0> if you add -s name does it improve things more
[02:44:56] <PMT> well, -s or -S
[02:45:33] <ptx0> fwiw list -s is sort and get -s is source
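For reference, a minimal sketch of the two approaches discussed above, assuming a pool named "poolname" (a placeholder); the zfs list form skips snapshots and touches far less metadata:
    # recursive get: walks every dataset and snapshot (what Monitorix was doing)
    zfs get -rHp -o value available poolname
    # plain list: one row for the pool's top-level dataset, far fewer ARC hits
    zfs list -Hp -o name,avail poolname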
[02:45:58] <jasonwc> Why is zfs get so much more demanding than zfs list?
[02:46:10] <ptx0> i think it's meant for smaller operations
[02:46:30] <ptx0> well, more complex operations? i don't f'n know
[02:46:53] <DeHackEd> probably because `zfs list` is managed by zpool property listsnapshots but `zfs get` isn't and is in fact incredibly stupid
[02:47:30] <ptx0> interesting, now i want to benchmark using zfs get vs zfs list for identical tiny operations
[02:47:40] <ptx0> my script runs a lot of zfs get and when things are busy it gets a lil hairy
[02:48:16] <ptx0> DeHackEd: any idea how to decode a resume receive token?
[02:48:19] <jasonwc> Also, I noticed it always seems to have a lot of misses for the zfs get operation despite the fact that it was being run every 60 seconds
[02:48:25] <ptx0> i wanna find the name of the snapshot it references
[02:48:27] <DeHackEd> ptx0: zfs receive -nv
[02:48:34] <DeHackEd> + -t <token>
[02:48:38] <DeHackEd> oh wait, zfs send
[02:48:42] <jasonwc> Also, I don't see much activity in zpool iostat when it's running. Are zfs get reads not shown?
[02:48:51] <ptx0> DeHackEd: wat
[02:49:04] <jasonwc> It was showing 30K misses/sec and I only saw a few hundred reads per sec on the pool
[02:49:04] <ptx0> like send -t doesn't work with any other options
[02:49:10] <ptx0> you can't dry send, aiui
[02:49:25] <DeHackEd> ptx0: objection
[02:49:41] <DeHackEd> zfs send -nv -t $TOKEN
[02:49:43] <DeHackEd> just do it
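A sketch of that dry run, assuming a dataset named newrpool/vdisks/5a28448984103 (substitute your own) with a resumable stream; -n sends nothing, -v prints which snapshot the token refers to:
    # grab the stored token, then ask zfs send to describe it
    TOKEN=$(zfs get -H -o value receive_resume_token newrpool/vdisks/5a28448984103)
    zfs send -nv -t "$TOKEN"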
[02:50:22] <ptx0> why the fudge don't receive_resume_token have a source
[02:50:28] <ptx0> thanks zfs
[02:50:49] * DeHackEd <img src="killicon_kamikaze.png" />
[02:51:07] <ptx0> psh so you were right once
[02:51:10] <ptx0> big deal
[02:51:19] <ptx0> i was right once
[02:51:26] <ptx0> 1999
[02:51:37] <ptx0> i remember it like it was yesterday, probably because of that damned y2k software bug
[02:51:55] <ptx0> btw i'm still working on y2k compliance stuff over here in 2019
[02:53:38] <DeHackEd> ptx0: it's a team fortress 2 thing. soldier has a taunt that literally blows himself up, and I did that.
[02:53:47] <DeHackEd> (but if enemies are nearby, they'll be killed as well)
[02:54:04] <ptx0> sounds kinda racist
[02:54:23] <ptx0> is there a tomahawk button that has him scalp people
[02:55:10] <DeHackEd> no, but his does have voicelines for the enemy team like "what's the matter hippie, hair get in your eye?", and the halloween (zombie costume) "I have returned from the grave to give the living haircuts"
[02:55:53] <ptx0> i don't, uh, get it
[02:55:56] <ptx0> vOv
[02:56:33] <tlacatlc6> will it be compliant before y3k? :D
[02:56:45] <DeHackEd> he's a soldier. having short cut hair is a soldier meme I guess?
[02:57:37] <DeHackEd> you know what? it's a sunday evening and I'm still on IRC...
[03:23:27] <mason> ptx0: I like it, but that's around the time I'd stopped listening to new music, and spent my time sinking into stuff I'd already unearthed. :P
[03:23:46] <ptx0> heheh
[03:44:56] *** Markow <Markow!~ejm@176.122.215.103> has quit IRC (Quit: Leaving)
[03:53:45] *** jasonwc <jasonwc!~jasonwc@pool-72-66-15-203.washdc.fios.verizon.net> has quit IRC (Ping timeout: 246 seconds)
[03:55:38] <mason> ptx0: https://youtu.be/skG2-ObpQvA?t=401
[03:55:48] <mason> ptx0: Was linked from the music video.
[03:55:55] *** qzo <qzo!~qzo@c-73-229-59-252.hsd1.co.comcast.net> has joined #zfsonlinux
[04:04:31] *** Llewelyn <Llewelyn!~derelict@184.12.106.191> has joined #zfsonlinux
[04:27:45] *** b <b!coffee@gateway/vpn/privateinternetaccess/b> has quit IRC (Quit: Lost terminal)
[04:33:53] *** apekatten <apekatten!~apekatten@unaffiliated/apekatten> has joined #zfsonlinux
[04:34:15] *** apekatten <apekatten!~apekatten@unaffiliated/apekatten> has quit IRC (Client Quit)
[04:35:12] *** apekatten <apekatten!~apekatten@unaffiliated/apekatten> has joined #zfsonlinux
[04:38:29] *** apekatten <apekatten!~apekatten@unaffiliated/apekatten> has quit IRC (Client Quit)
[04:39:17] *** apekatten <apekatten!~apekatten@unaffiliated/apekatten> has joined #zfsonlinux
[04:42:32] *** JPT <JPT!~jpt@classified.name> has quit IRC (Remote host closed the connection)
[04:54:19] *** JPT <JPT!~jpt@classified.name> has joined #zfsonlinux
[05:05:43] *** apekatten <apekatten!~apekatten@unaffiliated/apekatten> has quit IRC (Remote host closed the connection)
[05:08:54] <bunder> oh i missed the fun earlier? it looks like brian merged a bunch of python3 stuff
[05:21:32] *** tlacatlc6 <tlacatlc6!~tlacatlc6@68.202.46.96> has quit IRC (Quit: Leaving)
[05:37:13] *** ih8wndz <ih8wndz!jwpierce3@001.srv.trnkmstr.com> has quit IRC (Quit: WeeChat 2.3)
[05:37:43] *** ih8wndz <ih8wndz!jwpierce3@001.srv.trnkmstr.com> has joined #zfsonlinux
[06:07:08] *** Klox <Klox!~Klox@c-73-22-66-195.hsd1.il.comcast.net> has quit IRC (Ping timeout: 268 seconds)
[06:07:58] *** Klox <Klox!~Klox@c-73-22-66-195.hsd1.il.comcast.net> has joined #zfsonlinux
[06:31:47] <bunder> lol the linux 5 rc changes
[06:32:00] <bunder> initial open-source NVIDIA RTX Turing support with Nouveau -- how about fixing the clock speeds on maxwell
[06:48:44] *** gerhard7 <gerhard7!~gerhard7@ip5657ee30.direct-adsl.nl> has joined #zfsonlinux
[07:05:05] *** hyper_ch2 <hyper_ch2!c105d864@openvpn/user/hyper-ch2> has joined #zfsonlinux
[07:13:02] <DHowett> bunder: yes, every contributor should definitely focus on that one problem ;)
[07:27:55] *** hyper_ch2 <hyper_ch2!c105d864@openvpn/user/hyper-ch2> has quit IRC (Ping timeout: 256 seconds)
[07:34:34] *** Setsuna-Xero <Setsuna-Xero!~pewpew@unaffiliated/setsuna-xero> has joined #zfsonlinux
[07:36:51] *** PewpewpewPantsu <PewpewpewPantsu!~pewpew@unaffiliated/setsuna-xero> has quit IRC (Ping timeout: 258 seconds)
[07:44:48] <p_l> Hmmm
[07:45:24] <p_l> If I try rewinding to recover a pool using -T, I get one or more devices is unavailable error
[07:45:43] <p_l> I can import the pool (but without corrupted datasets) using normal import
[07:46:29] <p_l> ... Close to 5000 transactions might be the reason :-(
[07:46:44] <p_l> Shouldn't have started a scrub when things failed, I guess?
[07:48:07] <bunder> DHowett: considering it also affects everything newer than a maxwell, they really should, unless you want to game on a gtx780 for the rest of your life
[07:48:24] *** baojg <baojg!~baojg@162.243.44.213> has joined #zfsonlinux
[07:48:52] <bunder> or use the binary drivers, but that makes you a bad linux person blah blah blah
[07:49:51] <p_l> I think some of the power management stuff had been tricky since maxwell
[07:51:31] <p_l> Over time, different things became "secret sauce"
[07:53:54] <ptx0> am i the only one who has a script to make the connection to Rockstar's servers extremely flaky so that they put me into an isolated public session for GTA V
[07:54:29] <ptx0> they have a 'bad connection' server pool that you can force yourself into by disconnecting ethernet momentarily but if you have working port forwarding, in about an hour there'll be a couple other users in the server
[07:55:22] <ptx0> so i have it continually inject RST packets
[07:55:52] <ptx0> the more public users you can avoid the fewer modders you'll come into contact with, and there is a way to make a purely private session but you can't do missions there
[08:03:06] *** hyper_ch2 <hyper_ch2!c105d864@openvpn/user/hyper-ch2> has joined #zfsonlinux
[08:03:22] <bunder> people still play gta5
[08:03:26] <bunder> ?
[08:04:00] <ptx0> are you being serious
[08:04:09] <ptx0> or just an anti-GTA elitist
[08:04:58] <bunder> i figured people wouldn't play after the developers basically called its users cash cows
[08:05:18] <ptx0> there's not (m)any alternatives
[08:05:36] <ptx0> and you don't have to do the microtransaction shit
[08:05:48] <hyper_ch2> question: If you talk generally about booting computers. Can you still use "BIOS" for modern systems or should you use "UEFI" instead?
[08:07:12] <bunder> bios is kindof a catch-all term for the menus in the bios, so maybe but its technically wrong
[08:07:25] <bunder> since bios has been dead for years
[08:07:40] <bunder> anything that supports bios is probably that csm or whatever
[08:11:28] *** hyper_ch2_ <hyper_ch2_!c105d864@openvpn/user/hyper-ch2> has joined #zfsonlinux
[08:11:37] <hyper_ch2_> bunder: thx
[08:12:41] *** hyper_ch2 <hyper_ch2!c105d864@openvpn/user/hyper-ch2> has quit IRC (Ping timeout: 256 seconds)
[08:15:02] <p_l> Don't use BIOS anymore
[08:15:19] <ptx0> yeah, bios is not a catch all term
[08:15:21] <ptx0> that is firmware.
[08:15:37] <ss23> I still hear people saying "change the setting in the bios"
[08:15:49] <p_l> That's another thing
[08:16:21] <p_l> In a few more years, more Class 3 UEFI devices are going to show up
[08:17:25] * bunder shrug
[08:17:34] <bunder> people pervert the english language every day
[08:17:58] <bunder> electric cars have motors, not engines but people do that too
[08:18:20] *** prawn <prawn!~prawn@surro/greybeard/prawn> has quit IRC (Quit: WeeChat 2.3)
[08:36:15] <ptx0> not while i'm around
[08:37:05] <zfs> [zfsonlinux/zfs] vdev_open maybe crash when vdev_probe return NULL (#8244) created by Leroy8508 <https://github.com/zfsonlinux/zfs/issues/8244>
[08:40:24] <ptx0> hey this is really cool
[08:40:25] <ptx0> newrpool/vdisks/5a28448984103 receive_resume_token 1-fba8267e5-f0-789c636064000310a500c4ec50360710e72765a5269730304cae6706abc1904f4b2b4e2d618003903c1b927c5265496a31901648899f824d7f497e7a69660a034384f021aba93ebbb73a20c97382e5f31273538174625eb67e594a667176b1be69a29185898985a58589a181b183a1a9b189a51950c410663e3703c23fc9f9b90545a9c5c5f9d908370200531721b2 -
[08:40:35] <ptx0> # zfs recv -A newrpool/vdisks/5a28448984103
[08:40:35] <ptx0> 'newrpool/vdisks/5a28448984103' does not have any resumable receive state to abort
[08:40:47] <ptx0> sweet eh
[08:40:58] <ptx0> now, how the hell
[08:43:05] <ptx0> sucks, i had a snapshot sent across, but couldn't get rid of that token
[08:43:17] <ptx0> trying to use the token says 'stream must be upgraded to receive'
[08:57:53] <bunder> i don't think i've ever used the resume stuff
[09:01:47] *** simukis <simukis!~simukis_@78-63-88-48.static.zebra.lt> has joined #zfsonlinux
[09:20:09] <ptx0> it has a couple corner cases where it freaks out.
[09:20:30] <ptx0> zfs utility needs an option to write errors to stdout
[09:29:35] <bunder> but my posix
[09:29:43] <bunder> errors go to stderr
[09:37:59] <Lalufu> the option is 2>&1
[09:41:22] *** rjvb <rjvb!~rjvb@2a01cb0c84dee6009bda76eb03bc33f7.ipv6.abo.wanadoo.fr> has joined #zfsonlinux
[09:51:41] *** stefan00 <stefan00!~stefan00@ip9234924b.dynamic.kabel-deutschland.de> has joined #zfsonlinux
[09:51:46] <ptx0> it screws it up.
[09:52:28] <Lalufu> how so?
[09:53:51] <stefan00> hey folks, setting up a new machine right now. Since bunder released a gentoo install for 0.8 rc2 (thank you so much ;-), I think about going 0.8 straight away. How safe is it at the moment?
[09:54:52] *** baojg <baojg!~baojg@162.243.44.213> has quit IRC (Ping timeout: 244 seconds)
[09:58:03] *** Floflobel_ <Floflobel_!~Floflobel@80.215.76.104> has joined #zfsonlinux
[10:03:33] *** veegee <veegee!~veegee@ipagstaticip-3d3f7614-22f3-5b69-be13-7ab4b2c585d9.sdsl.bell.ca> has quit IRC (Quit: veegee)
[10:06:18] <ptx0> stefan00: if you have to ask..
[10:06:24] <ptx0> probably not for you
[10:09:18] *** kaipee <kaipee!~kaipee@81.128.200.210> has joined #zfsonlinux
[10:19:37] *** hyper_ch2_ <hyper_ch2_!c105d864@openvpn/user/hyper-ch2> has quit IRC (Ping timeout: 256 seconds)
[10:43:08] *** cluelessperson <cluelessperson!~cluelessp@unaffiliated/cluelessperson> has joined #zfsonlinux
[10:43:22] <cluelessperson> How might one handle high io over NFS against ZFS storage?
[10:49:03] <ptx0> a good SLOG
[10:49:09] <ptx0> sync=disabled
[10:49:14] <ptx0> two ways right there
[10:49:25] <ptx0> make sure you are using xattr=sa
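A rough sketch of those three knobs, assuming a pool "tank" and a dataset "tank/nfs" (placeholder names); sync=disabled trades crash safety for latency, while a SLOG keeps sync semantics intact:
    # dedicated log device (SLOG) to absorb NFS sync writes
    zpool add tank log /dev/nvme0n1p1
    # or drop sync guarantees entirely (risks losing recent writes on power failure)
    zfs set sync=disabled tank/nfs
    # store extended attributes in the dnode instead of hidden directories
    zfs set xattr=sa tank/nfs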
[10:51:36] <cluelessperson> ptx0: I'm a bit confused about the difference between ZIL, SLOG, L2ARC
[10:51:40] <ptx0> mason: i am quite enjoying these new Edifier bookshelf monitors
[10:51:43] <cluelessperson> I feel I understand what ZIL and L2ARC are
[10:51:48] <ptx0> cluelessperson: man zpool
[10:51:51] <ptx0> ;)
[10:52:10] <ptx0> slog is just a dedicated zil device
[10:52:25] *** hyper_ch2 <hyper_ch2!c105d864@openvpn/user/hyper-ch2> has joined #zfsonlinux
[10:53:03] <cluelessperson> ptx0: oh, well my plan is to install a 1TB NVME and set it up with a ZIL and the rest as L2ARC
[10:53:07] <ptx0> l2arc is more complicated
[10:53:18] <ptx0> you probably don't want one
[10:53:29] <cluelessperson> ptx0: it's just a read cache, right?
[10:53:39] <cluelessperson> I figure that would help io wouldn't it?
[10:53:51] <ptx0> it consumes memory for its own index and it is not lightweight
[10:54:05] *** Floflobel_ <Floflobel_!~Floflobel@80.215.76.104> has quit IRC (Read error: Connection reset by peer)
[10:54:05] <ptx0> like 600gb l2arc consumed something like 5gb of memory
[10:54:35] <ptx0> depends on record sizes, compression, i guess
[10:54:47] <cluelessperson> ptx0: so, obviously 1TB nvme is way overkill for 5-10GB of ZIL
[10:54:56] <ptx0> 5-10gb zil is probably overkill
[10:55:09] <ptx0> you probably want to wait for #5182 really
[10:55:15] <cluelessperson> ptx0: what's that?
[10:55:21] <ptx0> wait for it..
[10:55:21] <zfs> [zfs] #5182 - Metadata Allocation Classes by don-brady <https://github.com/zfsonlinux/zfs/issues/5182>
[10:55:23] <ptx0> there
[10:55:56] <cluelessperson> when I perform some large data move operation on my zfs machine between filesystems
[10:56:10] <cluelessperson> it causes IO delay on my proxmox vm machine (over NFS) to skyrocket
[10:56:13] <cluelessperson> causing VMs to crash
[10:56:20] <cluelessperson> any suggestions?
[10:56:36] <ptx0> probably because you aren't using noop IO scheduler for bio layer or FIFO cpu scheduler for qemu
[10:56:44] <ptx0> or both
[10:56:48] <ptx0> that'd be particularly nasty
[10:57:07] <ptx0> set elevator=noop on cmdline
[10:57:23] <ptx0> and chrt -f -p 1 $(pidof qemu-system-x86_64)
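For context, one way to apply both suggestions, assuming GRUB and qemu-based VMs (the udev rule and device glob are illustrative, not from the log):
    # kernel command line: add to GRUB_CMDLINE_LINUX in /etc/default/grub, then update-grub
    elevator=noop
    # or per-disk via udev, e.g. /etc/udev/rules.d/60-scheduler.rules
    ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="noop"
    # pin qemu to the real-time FIFO CPU scheduler at priority 1
    chrt -f -p 1 $(pidof qemu-system-x86_64)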
[10:57:28] <cluelessperson> ptx0: I have no idea what elevator=noop means.
[10:57:34] <ptx0> on kernel cmdline
[10:57:42] <cluelessperson> and I don't think I'm using qemu
[10:57:53] <ptx0> your nick is so apt
[10:58:02] <cluelessperson> I can't know everything
[10:58:05] <ptx0> google elevator=noop zfs though
[10:58:10] <cluelessperson> and I can't read the internals on every software I use
[10:58:12] <ptx0> you can at least know how to learn
[10:58:32] <ptx0> not being mean, just straightforward
[10:58:48] <cluelessperson> you're not being mean, but how am I suppose to know the term "elevator" to google in the first place?
[10:58:52] <ptx0> it's a pretty specific keyword
[10:59:00] <ptx0> i mean, i just gave it to you
[10:59:13] <ptx0> from there you said you don't know what it means
[10:59:23] <ptx0> in between those two you couldn't possibly have even tried to find out
[10:59:46] <ptx0> if you did, i mean wow, but it was such a short interval you couldn't have looked hard :P
[11:00:07] <cluelessperson> ptx0: I mean, you called me clueless, I can google things
[11:00:16] <ptx0> indeed
[11:00:21] <cluelessperson> but I don't understand how I'm supposed to find things like that
[11:00:23] <ptx0> the fifo thing is pretty obscure tbh
[11:00:26] <cluelessperson> without asking
[11:00:29] <ptx0> it took me years to find it
[11:00:36] <ptx0> no, asking is not the issue
[11:00:41] <ptx0> glad to share
[11:01:33] <ptx0> it just felt like you really wanted me to do all the homework, and tbh it's 2am and i'm probably moderately cranky-tired but there's some good music playing, at l east, i fixed a couple bugs though.. should go to sleep
[11:01:49] <ptx0> anyway, i think those two things are the most relevant to IO delays and crashing
[11:02:12] <ptx0> i had some CrystalDiskMark issues in windows 10 VM that would cause the whole thing to stutter and i imagine if i had other VMs on the system they wouldn't have been happy..
[11:02:15] <cluelessperson> ptx0: oh, I don't expect you to do all the homework for me, it's just that if you happen to know how to make that work well, you might save me a lot of trouble and misdirection.
[11:02:22] <ptx0> setting FIFO cpu scheduler really worked wonders for that one
[11:02:30] <ptx0> so now crystaldiskmark runs great in the VM, 4GiB/s
[11:02:54] <ptx0> the noop elevator thing, if you use partitions, zfs will not set it. but it does set it on whole disks.
[11:03:16] <ptx0> and if you don't set it, if it uses cfq io scheduler then it tries to be 'completely fair queueing' and shits everything up
[11:03:46] <ptx0> zfs does not know how to cope with any io scheduler on linux other than noop, at least, under load
[11:04:19] <ptx0> if i scrubbed the pool on the VM host then all my VM guests would lose their root devices
[11:04:24] <ptx0> IO timeouts etc
[11:04:24] <cluelessperson> ptx0: another issue is that when I created the zpool, I didn't set ashift :(
[11:04:32] <ptx0> it may not matter
[11:04:50] <ptx0> but you'd really know if that were the problem
[11:04:53] <cluelessperson> well, 4096 blocks, zfs is using ashift=0
[11:05:02] <ptx0> that just means auto
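If you want to see what ashift the vdevs actually ended up with (ashift=0 in the property just means auto-detect), a quick check, assuming a pool named "tank":
    zdb -C tank | grep ashift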
[11:05:27] <cluelessperson> on this raidz2 12x array, I'm only seeing ~150MB read
[11:05:30] <ptx0> the whole system would be having 120s timeouts when you do basically nothing
[11:05:30] <cluelessperson> seems really slow to me
[11:05:36] <ptx0> it'd be slower than that
[11:05:40] <ptx0> a lot slower
[11:05:46] <ptx0> raidz is limited to speed of a single drive..
[11:06:04] <cluelessperson> I was under the impression it did striping of some sort.
[11:06:14] <cluelessperson> I figured the write speed would be limited, but read would be faster
[11:06:17] <rjvb> ptx0: re: CrystalDiskMark: it runs under Wine too I found out. Do you think the results are meaningful (to compare disks on a given machine)?
[11:06:17] <ptx0> well, no, everything in raidz vdev is all parity
[11:06:29] <ptx0> rjvb: i doubt it, but i'm no expert
[11:06:47] <ptx0> it behaves funny with zfs, to say the least
[11:07:13] <ptx0> i've always seen way higher numbers than i see tech reviewers getting
[11:07:14] <rjvb> well, it's intended for NTFS and FAT so there's that
[11:07:26] <ptx0> yes, i run it on ntfs zvol in windows vm
[11:07:41] <rjvb> If you want to see funny and impressive disk benchmarks, try XBench on Mac with ZFS :)
[11:07:47] <ptx0> it is set up to use pseudorandom repeating bytes, iirc
[11:07:55] <zfs> [zfsonlinux/zfs] Ubuntu 18.04 Root on ZFS should mirror boot (#8223) comment by Erik Wramner <https://github.com/zfsonlinux/zfs/issues/8223#issuecomment-451884554>
[11:08:11] <ptx0> performance testing dedup in any meaningful way with a huge dataset is so difficult
[11:08:29] <ptx0> tried to set up a 1PB pool with mostly random data and #5182
[11:08:30] <rjvb> anyway, Wine should more easily get reliable timing than a VM, no?
[11:08:39] <zfs> [zfs] #5182 - Metadata Allocation Classes by don-brady <https://github.com/zfsonlinux/zfs/issues/5182>
[11:08:42] <ptx0> it took about 3 weeks to even get the data to 150TB
[11:09:00] <ptx0> not because the array was slow but because the data generation was
[11:09:32] <cluelessperson> ptx0: yeah, turns out my pool is using ashift 12 after all. :)
[11:09:44] <ptx0> yeah, it'd be absolute dogshit otherwise
[11:09:54] <ptx0> surprisingly awful
[11:10:04] <ptx0> give it a whirl
[11:10:06] <ptx0> :P
[11:11:00] * ptx0 remembers that time the customer put 900gb of data into dedup and it took more than 4 weeks and then crashed during import... repeatedly... over 6 months of tries
[11:11:29] <ptx0> they basically called it lost forever at that point
[11:11:32] <bunder> the problem with wine is that while it tries to be windows, its not windows, benchmarks like to lie, but a linux bench on linux isn't much better
[11:11:52] <ptx0> bunder: benchmarking is just difficult, period
[11:12:02] <ptx0> getting meaningful results, anyway
[11:12:21] <ptx0> like hey go fuck yourself with a round of zeroes, ZLE
[11:12:25] <rjvb> of course. The only reliable benchmark is what you see in your own workflows IRL :)
[11:12:32] <ptx0> not even
[11:12:42] <ptx0> can't rely on users to generate consistent loads
[11:12:47] <ptx0> ask your local plumber
[11:13:02] <rjvb> of course it is - I said *your* workflow. ;)
[11:13:13] <ptx0> my workflow is a bunch of chumps who don't listen
[11:13:30] <ptx0> probably 90% of them are just dicking around on social media
[11:13:36] <ptx0> i hate them
[11:13:43] <rjvb> and then you qualify it like Bentley used to specify the power of its cars. "Enough" ... "Enough + 50%" etc
[11:14:05] <ptx0> one of them is a group that caused the 2008 recession
[11:14:14] <ptx0> well, had a huge part in it and profited heavily
[11:14:19] <ptx0> yay
[11:14:21] <bunder> what chumps, you live in a trailer in bc :P
[11:14:31] <ptx0> i live in a house with doors and walls
[11:14:45] <Lalufu> Luxury!
[11:14:48] <rjvb> trailers can have those too O:-)
[11:14:55] <ptx0> yeah we can't all be Rob Levin
[11:15:32] <rjvb> ptx0: anyway, about yesterday's exchange. I've now created a 512Mb sparse file on a dataset with copies=2 and lz4, then created an XFS fs in that.
[11:15:33] <ptx0> mason: https://www.youtube.com/watch?v=ZRSNy8DcIDk
[11:15:35] <ptx0> this is really good.
[11:16:02] <rjvb> Initial results are encouraging no matter how patchy the approach
[11:16:09] <rjvb> we'll see how things age
[11:16:53] <ptx0> rjvb: were you the one who spent 4 weeks setting up a production system switchover to some filesystem layout that you ended up scrapping and doing a complete 180 in less than 24 hours?
[11:16:54] *** gerhard7 <gerhard7!~gerhard7@ip5657ee30.direct-adsl.nl> has quit IRC (Quit: Leaving)
[11:17:06] <rjvb> nope
[11:17:19] <ptx0> ah, could have fooled me
[11:18:25] <rjvb> I'm more someone to work with what I have and then make the best out of it over time
[11:18:27] *** Floflobel <Floflobel!~Floflobel@80.215.76.104> has joined #zfsonlinux
[11:18:52] <rjvb> which sometimes means backing out of what seemed like a miracle solution
[11:19:25] <rjvb> that's what this is too.
[11:20:17] <ptx0> lol, the new resident badboy
[11:20:29] <ptx0> well alright then.
[11:20:53] * ptx0 hands rjvb his sunglasses
[11:21:24] <rjvb> whatever rocks your boat
[11:21:59] <ptx0> an offensive idiom to those with inner ear disease
[11:22:11] <rjvb> (wait, no, yours is a lot bigger than mine)
[11:22:32] <rjvb> huh? And handing sunglasses to those who're blind?
[11:22:54] <ptx0> don't think stevie wonder complained
[11:23:27] <ptx0> or ray charles ^_^
[11:24:12] <ptx0> Many people who are blind who wear sunglasses do so for protection for their eyes and sensitivity to light, which can cause discomfort or dizziness
[11:24:23] <ptx0> that is really counter intuitive.
[11:24:46] <rjvb> nope
[11:25:55] <ptx0> sure it is.
[11:26:05] <ptx0> why would you need to protect something that doesn't work?
[11:26:32] <ptx0> sunglasses aren't going to keep a projectile out of there
[11:27:12] * ptx0 is "legally" blind, that is, it used to be against the law but they legalised it
[11:29:55] <rjvb> no, it isn't counterintuitive for those of us who have some knowledge of visual neuroscience
[11:30:02] <rjvb> and sensory physiology
[11:31:12] <rjvb> but I guess some find it counterintuitive too that black Africans need sunscreen
[11:32:09] <rjvb> (it becomes more counterintuitive when you're in a train with someone from central Africa and he complains much more about the heat than you, when're you're from Northern Europe :) )
[11:32:40] <rjvb> ^when're^when
[11:38:59] <ptx0> your egocentrism is showing
[11:45:08] *** cluelessperson <cluelessperson!~cluelessp@unaffiliated/cluelessperson> has quit IRC (Quit: Laters)
[11:54:12] *** ahasenack <ahasenack!~ahasenack@33.93.189.91.lcy-02.canonistack.canonical.com> has joined #zfsonlinux
[11:54:49] *** cluelessperson <cluelessperson!~cluelessp@unaffiliated/cluelessperson> has joined #zfsonlinux
[11:56:34] <zfs> [zfsonlinux/zfs] apt-get/dpkg commands are really slow on a ZFS rootfs (#3857) comment by René Bertin <https://github.com/zfsonlinux/zfs/issues/3857#issuecomment-451897878>
[12:08:09] *** gerhard7 <gerhard7!~gerhard7@ip5657ee30.direct-adsl.nl> has joined #zfsonlinux
[12:14:22] *** Slashman <Slashman!~Slash@cosium-152-18.fib.nerim.net> has joined #zfsonlinux
[12:15:19] <zfs> [zfsonlinux/zfs] apt-get/dpkg commands are really slow on a ZFS rootfs (#3857) comment by bunder2015 <https://github.com/zfsonlinux/zfs/issues/3857#issuecomment-451902620>
[12:15:24] <bunder> i'm leaning towards not a bug
[12:15:36] <bunder> plus 3857 is like 4000 bugs ago
[12:17:11] *** gila <gila!~gila@5ED74129.cm-7-8b.dynamic.ziggo.nl> has joined #zfsonlinux
[12:18:06] <zfs> [zfsonlinux/zfs] apt-get/dpkg commands are really slow on a ZFS rootfs (#3857) comment by Richard Allen <https://github.com/zfsonlinux/zfs/issues/3857#issuecomment-451903302>
[12:19:30] *** fp7 <fp7!~fp7@unaffiliated/fp7> has quit IRC (Remote host closed the connection)
[12:21:23] <FireSnake> This conversation has been locked as too heated and limited to collaborators.
[12:21:52] <FireSnake> pfff
[12:22:02] <DeHackEd> unfortunately ZFS uses 1 disk block per inode whereas most other filesystems with fixed inodes pack inodes into a single block increasing stat() efficiency
[12:24:16] <FireSnake> apt/dpkg are slow first because they fsync for safety. i always override that with 'eatmydata' command
[12:24:53] *** fp7 <fp7!~fp7@unaffiliated/fp7> has joined #zfsonlinux
[12:26:56] <DeHackEd> oh I'm thinking of gentoo's portage based on comments in the thread...
[12:28:36] *** Floflobel <Floflobel!~Floflobel@80.215.76.104> has quit IRC (Read error: Connection reset by peer)
[12:31:02] <bunder> sorry :P
[12:31:25] <bunder> but seriously, if zfs made portage slow i'd notice
[12:32:23] <DeHackEd> it would make the rsync slower (without MAC and stuff)
[12:33:08] <bunder> takes me a minute to do the rsync, and a minute to do that newfangled verification stage
[12:33:17] <DeHackEd> with ext4 a single disk sector read brings in multiple inodes which, for batch jobs like rsync, are rather likely to be accessed very soon as well
[12:38:32] <zfs> [zfsonlinux/zfs] apt-get/dpkg commands are really slow on a ZFS rootfs (#3857) comment by René Bertin <https://github.com/zfsonlinux/zfs/issues/3857#issuecomment-451908273>
[12:49:07] *** Dagger <Dagger!~dagger@sawako.haruhi.eu> has quit IRC (Ping timeout: 268 seconds)
[12:52:37] *** Dagger <Dagger!~dagger@sawako.haruhi.eu> has joined #zfsonlinux
[12:53:16] <zfs> [zfsonlinux/zfs] apt-get/dpkg commands are really slow on a ZFS rootfs (#3857) comment by Richard Allen <https://github.com/zfsonlinux/zfs/issues/3857#issuecomment-451911469>
[12:53:47] *** Floflobel <Floflobel!~Floflobel@80.215.76.104> has joined #zfsonlinux
[12:55:07] <zfs> [zfsonlinux/zfs] BUG: soft lockup - CPU# stuck for 22s! [z_wr_iss] (#7042) comment by Tomas Mudrunka <https://github.com/zfsonlinux/zfs/issues/7042#issuecomment-451911925>
[13:06:10] <zfs> [zfsonlinux/zfs] apt-get/dpkg commands are really slow on a ZFS rootfs (#3857) comment by René Bertin <https://github.com/zfsonlinux/zfs/issues/3857#issuecomment-451914344>
[13:07:52] <rjvb> FireSnake: I'll check out eatmydata, thx
[13:08:59] <rjvb> should be much less risky to use on ZFS, no?
[13:11:16] *** rjvbb <rjvbb!~rjvb@2a01cb0c84dee6006cef745b124f4e00.ipv6.abo.wanadoo.fr> has joined #zfsonlinux
[13:18:18] *** Floflobel__ <Floflobel__!~Floflobel@2a04:cec0:1009:fc3:f2d9:3101:33c3:6869> has joined #zfsonlinux
[13:18:49] *** Floflobel <Floflobel!~Floflobel@80.215.76.104> has quit IRC (Read error: Connection reset by peer)
[13:25:12] <bunder> https://forums.gentoo.org/viewtopic-t-1091516.html i'm not touching this one
[13:28:18] <zfs> [zfsonlinux/zfs] 0.7.12 gives warning messages on Centos Release 6.10 (#8245) created by samuelxhu <https://github.com/zfsonlinux/zfs/issues/8245>
[13:34:26] <PMT> rjvbb: I mean, if you eat all the sync commands, then you're not guaranteed the data is synced out unless you're running sync=always, and that's a terrible idea
[13:35:40] <PMT> Your data's not going to get mangled once on-disk, but eating sync commands means not necessarily guaranteeing it's there until something it doesn't eat forces it.
[13:36:23] <bunder> re 8245 i don't even see a blkdev.h in 0.7.12
[13:36:49] <bunder> and god damnit no template
[13:54:06] *** Floflobel__ <Floflobel__!~Floflobel@2a04:cec0:1009:fc3:f2d9:3101:33c3:6869> has quit IRC (Read error: Connection reset by peer)
[13:56:08] *** Floflobel_ <Floflobel_!~Floflobel@80.215.76.104> has joined #zfsonlinux
[14:00:49] *** Floflobel_ <Floflobel_!~Floflobel@80.215.76.104> has quit IRC (Client Quit)
[14:08:48] *** Essadon <Essadon!~Essadon@81-225-32-185-no249.tbcn.telia.com> has joined #zfsonlinux
[14:09:23] *** Essadon <Essadon!~Essadon@81-225-32-185-no249.tbcn.telia.com> has quit IRC (Max SendQ exceeded)
[14:09:53] *** Essadon <Essadon!~Essadon@81-225-32-185-no249.tbcn.telia.com> has joined #zfsonlinux
[14:10:06] <stefan00> would someone know if there are updated (0.8) man pages somewhere online readable?
[14:13:29] <PMT> https://github.com/zfsonlinux/zfs/tree/master/man for various values of readable
[14:15:07] <DHE> save the raw file to disk, read it with "man /path/to/file/zfs.8" or such. if in the current directory, use "man ./zfs.8"
[14:15:46] <DHE> ... should we put man2html versions of the man pages on the zfsonlinux.org site?
[14:15:57] <DHE> I mean I guess that could be a pain since you'd want versions for each major release
[14:24:36] *** Markow <Markow!~ejm@176.122.215.103> has joined #zfsonlinux
[14:34:44] <rjvbb> PMT, re eating: there's bound to be something like a sync when you close the files, no?
[14:35:51] <DHE> close() doesn't imply any kind of synchronous operation. it's just a process reference being discarded.
[14:37:37] <rjvbb> well, normally you don't need to do explicit syncs to get your data to disk, AFAIK you do it when you want to be (extra) sure the data is consolidated *now*
[14:38:38] <rjvbb> but you could call sync after eatmydata (or EMD could do that before exitting)
[14:39:55] <DHE> no... zfs specifically will guarantee ordering of system calls on disk, but not the exact point in time preserved on disk without fsync(), O_SYNC open flag, etc.
[14:40:30] <DHE> (sync=always is obviously the exception to that)
[14:41:37] <rjvbb> no meaning an explicit sync after using EMD is pointless?
[14:46:17] <DHE> you mean the commandline tool "sync" ?
[14:46:21] *** gerhard7 <gerhard7!~gerhard7@ip5657ee30.direct-adsl.nl> has quit IRC (Quit: Leaving)
[14:46:38] <rjvbb> yeah (or fsync() in the EMD code before exitting)
[14:47:23] <DHE> well, I am guessing EMD will effectively render the sync command useless...
[14:47:34] <DHE> as for behaviour, this sounds like exactly what EMD would do
[14:48:17] <rjvbb> I'll RTFM (and then the FC :) )
[14:56:20] <ghfields> DHE re man2html: Perhaps just linking to master? You would only have to worry about it being "too complete"
[15:09:12] *** Markow <Markow!~ejm@176.122.215.103> has quit IRC (Quit: Leaving)
[15:23:28] <Celmor> would anyone here know a tool to do something like visualize growth of datasets/disk usage distribution (between snapshots)?
[15:24:55] *** simukis <simukis!~simukis_@78-63-88-48.static.zebra.lt> has quit IRC (Quit: simukis)
[15:25:20] *** simukis <simukis!~simukis_@78-63-88-48.static.zebra.lt> has joined #zfsonlinux
[15:26:01] *** simukis <simukis!~simukis_@78-63-88-48.static.zebra.lt> has quit IRC (Remote host closed the connection)
[15:26:24] *** simukis <simukis!~simukis_@78-63-88-48.static.zebra.lt> has joined #zfsonlinux
[15:28:12] *** gerhard7 <gerhard7!~gerhard7@ip5657ee30.direct-adsl.nl> has joined #zfsonlinux
[15:34:35] *** gerhard7 <gerhard7!~gerhard7@ip5657ee30.direct-adsl.nl> has quit IRC (Quit: Leaving)
[15:36:56] *** gerhard7 <gerhard7!~gerhard7@ip5657ee30.direct-adsl.nl> has joined #zfsonlinux
[15:42:31] <PMT> Celmor: I can imagine how to build one, but I'm not aware of one that exists offhand.
[15:43:09] <PMT> rjvbb: I mean, the whole point of EMD is that sync() and family get turned into no-ops. So running sync in something wrapped with that will not do what you want.
[15:43:12] <Celmor> the crudest solution would be to save `zfs list` dumps into a file and compare these, which is what I started with
[15:44:32] <PMT> Celmor: I was picturing something like {k,win}dirstat with areas differently highlighted based on whether they're new or not
[15:45:04] <rjvb> celmor: make snapshot dirs visible and then use something like kdiskusage?
[15:45:09] <Celmor> I just need to know where my disk space is gone
[15:45:12] <PMT> https://rdrr.io/github/d3treeR/d3treeR/man/d3tree3.html might be interesting to leverage for this
[15:45:26] <PMT> Celmor: you and written@FOO might want to be friends.
[15:45:28] <rjvb> PMT: depends on how the sync-noop wrapping is done
[15:45:46] <PMT> rjvb: LD_PRELOAD
[15:45:50] <Celmor> that's what I used but still couldn't find out where disk space went
[15:45:57] <rjvb> was just going to say I thought that
[15:46:16] <PMT> Celmor: so what are you looking for, the dataset with greatest amount written since snapshot N?
[15:46:44] <rjvb> that means if the wrapped command is NOT spawned by exec EMD gets back control after said command returns
[15:47:00] <rjvb> and could then do cleanup, in theory
[15:47:28] <rjvb> or it could overload the exit routine too so it calls the libc sync() routine
[15:48:00] <PMT> Or you could make the script call the child behaviors inside EMD and just not use EMD for a sync. But wouldn't it make more sense to figure out why it's so slow for you?
[15:48:05] <Celmor> PMT, that would help, I just thought there would be something out there that could visualize this and show percentages of growth (not just amount)
[15:50:29] <PMT> Celmor: So what, change_in_referenced for dataset foo / total change in pool referenced data?
[15:51:01] <Celmor> something like that
[15:51:10] <rjvb> PMT: actually, EMD is a shell script. It overloads itself at the end (`exec "$cmd" "$@"`) but removing the `exec` so it can be followed by `sync` is quite trivial
[15:51:17] <PMT> That wouldn't be especially hard to write.
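One crude approximation with stock tooling, assuming a pool "tank" and a snapshot named "snap1" (both placeholders); the written@<snapshot> property reports space written to each dataset since that snapshot:
    # datasets with the most data written since snapshot "snap1", largest first
    zfs get -rHp -o name,value written@snap1 tank | sort -k2 -nr | head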
[15:51:57] <rjvb> evidently it'd be nice to figure out why things are slow in the 1st place, but if they're not with EMD then at least we know where to look
[15:52:00] <PMT> rjvb: I am aware. But modifying EMD for one specific use seemed like the more annoying solution when you could just modify the caller.
[15:52:49] <rjvb> I beg to disagree, doing a single sync at the end would make EMD safer while still keep most of its advantages
[15:53:28] <rjvb> and modifying dpkg will be a whole different nest of nasty things
[15:54:21] <DHE> do make sure EMD is not active when you actually call sync, lest it do nothing
[15:54:33] <DHE> so unset LD_PRELOAD (or restore a saved value) before calling it
[15:55:31] <PMT> rjvb: I mean, if you already have to modify the callers to call eatmydata in order for it to be loaded, how is it easier?
[15:56:02] <DHE> even better. call sync yourself when you're done
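A minimal usage sketch of that pattern, assuming the stock eatmydata wrapper and a placeholder package name; the trailing sync runs outside the LD_PRELOAD environment, so it is not eaten:
    # fsync()/sync() become no-ops inside the wrapper...
    eatmydata dpkg -i some-package.deb
    # ...then force everything out once the wrapped command has returned
    sync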
[15:56:15] <rjvb> `ln -s /path/to/eatmydata /usr/local/bin/dpkg` ?
[15:56:30] <PMT> (To be clear, I don't think modifying eatmydata is a bad solution, I just don't see why it's easier.)
[15:56:48] <rjvb> calling manually is more difficult in that you have to remember it ;)
[15:58:25] <PMT> I assumed you were going to be wrapping it anyway. I agree that using eatmydata versus not would be informative, if only b/c you could use differential graphs.
[16:18:02] *** King_InuYasha <King_InuYasha!~King_InuY@fedora/ngompa> has joined #zfsonlinux
[16:18:17] <PMT> https://developers.google.com/chart/interactive/docs/gallery/diffchart Hm, this seems to have some interesting ideas on how to ddo it
[16:18:20] <PMT> do it, even
[16:30:04] <rjvb> turns out patching libeatmydata so it calls sync() at exit is trivial
[16:30:45] <rjvb> just overload exit() the same way sync() is overloaded, and call libc_sync in the overload when EMD is hungry
[16:50:41] <zfs> [zfsonlinux/zfs] #8235 feature request: zpool iostat N should repeat header like vmstat N (#8246) created by Damian Wojsław <https://github.com/zfsonlinux/zfs/issues/8246>
[17:02:21] *** hyper_ch2 <hyper_ch2!c105d864@openvpn/user/hyper-ch2> has quit IRC (Quit: Page closed)
[17:06:07] <zfs> [zfsonlinux/zfs] Removed suggestion to use root dataset as bootfs (#8247) created by Gregor Kopka <https://github.com/zfsonlinux/zfs/issues/8247>
[17:18:09] <Slashman> hello, I'm doing a Debian install with root on zfs, since it's taking a long time and I will need to install a lot of servers like this, what's my best option to clone this installation? I would like to avoid the dd option
[17:19:54] <pink_mist> isn't that more of a debian-related question?
[17:20:26] <blackflow> Slashman: I would (and do) use automation like ansible for that. because you'd have the same problem again on kernel updates, and you probably have automation in place for that too.
[17:20:55] <Slashman> blackflow: I have automation, but not for zfs installation on root
[17:21:51] <Slashman> blackflow: how do you automate it? the process from the wiki uses a live cd
[17:22:52] <blackflow> well yeah you have to boot the server into a ZFS capable live/rescue/installation environment. After that, automation picks up the actual installation procedure. You can't use the installer anyway.
[17:23:20] <blackflow> might even set up some PXE magick for that. I don't know how your network is set up.
[17:24:53] <Slashman> blackflow: is your playbook public? I would be interested
[17:26:31] <Lalufu> for my server installs I have a prepared tarball of an installation, and some scripts around it that do the partition setup, untar the tarball, fiddle with some config files, install a boot loader and off we go
[17:26:37] <Lalufu> all PXE bootable
[17:27:04] <Lalufu> this is basically limited by how fast the tarball can be downloaded
[17:27:08] <Lalufu> in terms of installation speed
[17:27:11] <blackflow> Slashman: https://dpaste.de/HQ8r (it also sets up LUKS root unlocking in initramfs)
[17:28:00] <Lalufu> the installed system is prepared to the point where ansible can get at it and do whatever, so it's immediately manageable
[17:28:06] <blackflow> Slashman: setup-zfs.sh https://dpaste.de/gfBJ
[17:28:46] <blackflow> Slashman: this is specific to us because we rent hosted (dedicated) servers so it's designed to use their rescue env and build zfs from source. Ideally you'd have the rescue/installation env already capable of ZFS
[17:29:59] <Slashman> blackflow: thank you, it's very interesting, I'll need a little time to look at this
[17:30:37] <blackflow> it's still WIP so it's not polished
[17:31:15] <Slashman> Lalufu: currently we don't use PXE for server install, but this will have to change because we'll receive a lot of servers in ~1 months
[17:31:41] <Lalufu> it's quite a bit of work to set this up initially
[17:32:30] <Lalufu> either from scratch or using something like foreman
[17:33:25] <Slashman> Lalufu: https://www.theforeman.org/ I guess?
[17:33:30] <Lalufu> yes
[17:33:38] <Lalufu> Foreman is a big hammer (hah!) though.
[17:33:52] <Slashman> they'll be at FOSDEM, I'll go see them
[17:34:18] <Slashman> seems overkill for now
[17:38:15] <blackflow> Slashman: sounds like maybe something like SaltStack (or another agent-based config manager) would be a better choice? You prepare the boot OS which runs the agent and picks up all the required instructions from the master?
[17:39:57] <Slashman> we are already using ansible, so I would prefer to not use an other management tool, I guess the best way would be to prepare a PXE image that gets me to the point where ansible can run and deploy the system with a custom playbook
[17:57:44] <PMT> ptx0: so NVIDIA blinked today.
[17:57:55] <PMT> ( https://www.nvidia.com/en-us/geforce/news/g-sync-ces-2019-announcements/ )
[18:00:27] <zfs> [zfsonlinux/zfs] Feature request: incremental scrub (#8248) created by mailinglists35 <https://github.com/zfsonlinux/zfs/issues/8248>
[18:05:05] *** yomi <yomi!~void@ip4d16bd91.dynamic.kabel-deutschland.de> has joined #zfsonlinux
[18:05:28] *** yomi is now known as Guest33514
[18:07:58] *** yomisei <yomisei!~void@ip4d16bd91.dynamic.kabel-deutschland.de> has quit IRC (Ping timeout: 245 seconds)
[18:08:10] <zfs> [zfsonlinux/zfs] Verify disks writes (#2526) comment by mailinglists35 <https://github.com/zfsonlinux/zfs/issues/2526#issuecomment-452006974>
[18:10:12] *** Guest33514 <Guest33514!~void@ip4d16bd91.dynamic.kabel-deutschland.de> has quit IRC (Ping timeout: 246 seconds)
[18:12:36] <PMT> I mean, just scrubbing the bits since last scrub is not the worst idea I've ever heard, but seems like it could lead to poor life choices
[18:21:21] *** yomisei <yomisei!~void@ip4d16bd91.dynamic.kabel-deutschland.de> has joined #zfsonlinux
[18:22:20] <FireSnake> not as a replacement for regular full pool scrubs, not default - can help busy pools to catch errors earlier vs waiting for the next full scrub
[18:23:35] <PMT> FireSnake: sure, my concern is people who just incremental scrub endlessly and it displays completion of last scrub that didn't actually scrub all the things
[18:28:43] <FireSnake> then an incremental one should say scan: *incremental* scrub repaired 0 in [...]
[18:29:28] <PMT> FireSnake: sure, but they know it's incremental. IMO it should probably say something like "last full scrub did X on [...] last incremental scrub did Y on [...]"
[19:19:54] <ptx0> FireSnake: wtf
[19:19:58] <ptx0> you can pause and resume scrubs
[19:20:26] <PMT> ptx0: yes, the proposal is to allow you to scrub data written since last {full,incremental} scrub without having to re-iterate the whole dataset.
[19:20:28] <ptx0> PMT: i wouldn't humour this anti-feature suggestion
[19:21:04] <PMT> I think it'd be useful but might just lead to people doing that a bunch and then being upset after cargo culting scrub -i everywhere and finding ancient data corruption
[19:21:43] <ptx0> mailinglists35 is irritating as hell
[19:22:05] <PMT> I just love that they actually use that name for a GH account
[19:26:47] <zfs> [zfsonlinux/zfs] Don't allow dnode allocation if dn_holds != 0 (#8249) created by Tom Caputi <https://github.com/zfsonlinux/zfs/issues/8249>
[19:27:03] <ptx0> i love how people i've never seen contribute a single PR go on about how easy something should be
[19:27:13] <ptx0> omg
[19:27:14] <ptx0> is that
[19:27:21] <ptx0> i think it is
[19:27:45] <ptx0> PMT: btw, tom said on friday that he found a reproducible test case from another openzfs developer, that probably helps with my issue
[19:28:10] <zfs> [zfsonlinux/zfs] Don't allow dnode allocation if dn_holds != 0 (#8249) comment by Brian Behlendorf <https://github.com/zfsonlinux/zfs/issues/8249>
[19:28:11] <PMT> oh nice
[19:28:21] <zfs> [zfsonlinux/zfs] Feature request: incremental scrub (#8248) comment by Richard Elling <https://github.com/zfsonlinux/zfs/issues/8248#issuecomment-452033140>
[19:29:05] <PMT> ptx0: is that patch from Tom the bug you hit, or just a random other piece of on-fire?
[19:29:14] <PMT> I personally love how people keep requesting BPR
[19:30:09] <PMT> I really believe e.g. ahrens when they say "look we had a working prototype and the performance was worse than you can imagine, you don't want it"
[19:31:12] <DHE> "but we have MAC now, so surely that'll make it good!"
[19:31:23] <PMT> lipstick, pig, etc
[19:31:39] <DHE> I learned "perfume, turd"
[19:31:59] <PMT> Probably better.
[19:33:15] *** PewpewpewPantsu <PewpewpewPantsu!~pewpew@unaffiliated/setsuna-xero> has joined #zfsonlinux
[19:35:56] *** Setsuna-Xero <Setsuna-Xero!~pewpew@unaffiliated/setsuna-xero> has quit IRC (Ping timeout: 250 seconds)
[19:37:36] <ptx0> polished turds
[19:37:54] <ptx0> polish as in shine not the country
[19:38:19] <Shinigami-Sama> PMT: "peice of on-fire" <- I'm totes stealin that
[19:38:21] <Slashman> hey again, does anyone have a better solution than creating an mdadm raid1 for zfs on linux on root (with mirror) with UEFI for the UEFI partition? like that: https://outflux.net/blog/archives/2018/04/19/uefi-booting-and-raid1/ , the other way would be to somehow sync the UEFI partition regularly
[19:38:33] <ptx0> Slashman: don't do that.
[19:38:44] <ptx0> just set up distinct ESPs and do the one time install
[19:38:53] <ptx0> there is no need to sync your efi esp if you do things sensibly
[19:39:07] <ptx0> (seriously)
[19:39:16] <Slashman> okay, thank you
[19:39:38] <Slashman> I'm stuck with UEFI on the new DL365 servers since I have only NVME drives unfortunately
[19:39:40] <ptx0> Slashman: if you put your kernel/initramfs on the esp you might wanna rethink that. i use grub-efi to load zpool for /boot.
[19:39:59] <ptx0> and grub-efi is the only thing in the 6 ESP partitions
[19:40:04] <BtbN> I actually have my ESP set up like that, metadata 0.9 mdraid 1
[19:40:13] <ptx0> yeah it doesn't work everywhere though
[19:40:17] <Slashman> ptx0: are you doing thing differently than on https://github.com/zfsonlinux/zfs/wiki/Debian-Stretch-Root-on-ZFS ?
[19:40:30] <ptx0> Slashman: probably, i use native encryption and 0.8 on gentoo
[19:40:51] <BtbN> Why wouldn't it work? With 0.90 metadata it should be indistinguishable
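A sketch of that layout with placeholder partition names; metadata 0.90 sits at the end of each member, so firmware that only understands FAT32 still sees a plain ESP:
    # mirror two EFI system partitions with old-format metadata
    mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=0.90 \
        /dev/nvme0n1p1 /dev/nvme1n1p1
    mkfs.vfat -F32 /dev/md0
    # then mount /dev/md0 at /boot/efi (or /boot with systemd-boot) via fstab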
[19:41:25] <stefan00> can grub2 boot zfs/linux without an initrd?
[19:41:41] <BtbN> grub2 doesn't care about initrds
[19:41:58] <BtbN> Either it can read the kernel and the initrd, or it can't.
[19:41:58] <ptx0> BtbN: why does anything ever not work
[19:42:06] <ptx0> because engineers are human
[19:42:36] <BtbN> Last time i checked grub2 couldn't read an up to date zfs pool, so I'd rather opt for a non-zfs boot partition
[19:44:23] <ptx0> wat
[19:44:31] <ptx0> use a feature-stripped /boot pool
[19:44:52] <ptx0> BtbN: we call that, cutting off your nose to spite your face.
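Roughly what a feature-stripped /boot pool can look like, assuming a pool name "bpool" and placeholder device paths; -d creates the pool with every feature flag disabled, and only GRUB-friendly ones are re-enabled (the exact safe set depends on your GRUB build):
    zpool create -d -o ashift=12 \
        -o feature@lz4_compress=enabled \
        -O compression=lz4 -O mountpoint=/boot \
        bpool mirror /dev/disk/by-id/diskA-part2 /dev/disk/by-id/diskB-part2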
[19:45:50] <BtbN> An entire seperate pool just for /boot seems pretty overkill
[19:47:14] <Slashman> BtbN: any pointer on how to install such system? from here, I guess I'll follow the installation steps on the wiki and after the first boot change the /boot to a raid1
[19:47:41] <BtbN> I just installed it like that right away
[19:48:05] <Slashman> with zfs on root too?
[19:48:09] <BtbN> yes
[19:49:08] <Slashman> did you follow one of the wikis?
[19:49:33] <Slashman> and adjusted for this part obviously
[19:50:59] *** kaipee <kaipee!~kaipee@81.128.200.210> has quit IRC (Remote host closed the connection)
[19:53:28] <BtbN> no
[19:54:39] <Slashman> do you have a procedure somewhere that I could look at?
[19:55:58] <BtbN> Not really, just set up the filesystems, put stage3 on it, and continued as usual
[19:56:23] <PMT> I think the problem here is that a lot of people aren't as familiar with boot and can't just make up a sequence of instructions that works.
[19:57:06] <BtbN> I'm booting that server with systemd-boot, so my ESPs are just right at /boot
[19:58:33] <Slashman> I guess I can, but I'll need some work to remember how to correctly set up the fstab by hand after creating my mdadm raid1
[19:58:46] <Slashman> or any other files that mdadm needs
[19:59:48] <Slashman> I usually install either mdadm volume with the installation of my distrib, or zfs on linux following the wiki, or without any raid
[20:01:03] <ptx0> 13:47:28 < BtbN> An entire seperate pool just for /boot seems pretty overkill
[20:01:16] <ptx0> please senpai explain how your non-zfs filesystem with mdraid is any better
[20:01:36] <ptx0> i must be fucking retarded because i don't see it
[20:01:49] <BtbN> My UEFI can read it, so I'd call that better.
[20:02:10] <ptx0> UEFI can read ZFS as well.
[20:02:27] <BtbN> What? I doubt that.
[20:02:34] <ptx0> doubting thomas
[20:02:50] <BtbN> Unless UEFI made some huge jumps in the last couple weeks, FAT32 is where it's at.
[20:02:56] <ptx0> https://efi.akeo.ie/
[20:03:13] <ptx0> no, this is years old.
[20:03:43] <BtbN> "Download the driver and copy it to a partition that you can access from the EFI shell." great...
[20:04:02] <BtbN> So I need a FAT32 partition for UEFI to load the driver from to load my kernel from ZFS?
[20:04:09] <Lalufu> yes
[20:04:13] <ptx0> i'm sorry, are you new to UEFI? don't you know what an EFI ESP is?
[20:04:25] <BtbN> I'd rather just put the kernel right on there instead.
[20:04:39] <ptx0> 'right on there' is not a checksummed filesystem
[20:05:20] <ptx0> it's not compressed, either
[20:05:24] <BtbN> That doesn't overly matter for the kernel, or any components that are on /boot
[20:05:32] <ptx0> i'll keep my lz4 /boot and uncompressed kernels, thanks
[20:06:04] <BtbN> I'm not gonna setup insane UEFI driver chains just to load the kernel and even grub from ZFS
[20:06:20] <ptx0> but yet you're setting up an unsupported mdraid esp and that's perfectly fine
[20:06:27] <ptx0> brilliant
[20:06:48] <BtbN> I still don't see how it's unsupported. With 0.90 metadata it's indistinguishable for the firmware
[20:07:00] <ptx0> it should be but sometimes isn't
[20:07:09] <ptx0> it's unsupported - try asking your vendor for help with it.
[20:07:15] <zfs> [zfsonlinux/zfs] OpenZFS: 'vdev initialize' feature (#8230) closed by Brian Behlendorf <https://github.com/zfsonlinux/zfs/issues/8230#event-2058395834>
[20:07:26] <ptx0> HP servers are evil anyway though
[20:07:29] <BtbN> The only slight issue is that due to the ESP partition type it doesn't auto-assemble, but that's easily fixed
[20:08:03] <ptx0> and i'm sure you've never had a UEFI firmware that requires MBR to actually boot, either
[20:08:09] <ptx0> but i have
[20:08:18] <BtbN> That sounds like classic BIOS with extra steps
[20:08:33] <ptx0> no, it is uefi that requires some kind of hybrid partition table
[20:08:36] <ptx0> thanks lenovo
[20:09:05] <ptx0> reminded me of trying to get linux running on a mac, actually
[20:09:16] <zfs> [zfsonlinux/zfs] zfs.8 uses wrong snapshot names in Example 15 (#8241) merged by Brian Behlendorf <https://github.com/zfsonlinux/zfs/issues/8241#event-2058400172>
[20:09:48] <ptx0> not sure if the thinkpad i've still got is the one with that firmware 'bug'
[20:10:11] <BtbN> I know some lenovo Laptops where you have to set the "Active" flag on your ESP
[20:10:15] <ptx0> pissed me off though, almost had a great uefi setup and then.. it didn't boot.
[20:10:36] <ptx0> it's like there was no bootloader at all
[20:11:17] <ptx0> tried to use legacy boot mode and it also didn't recognise gpt partition table, which is weird, because older systems had worked just fine.
[20:11:55] <zfs> [zfsonlinux/zfs] Bump commit subject length to 72 characters (#8250) created by Neal Gompa <https://github.com/zfsonlinux/zfs/issues/8250>
[20:15:51] <zfs> [zfsonlinux/zfs] Bump commit subject length to 72 characters (#8250) comment by Tony Hutter <https://github.com/zfsonlinux/zfs/issues/8250>
[20:18:12] <zfs> [zfsonlinux/zfs] Bump commit subject length to 72 characters (#8250) comment by Brian Behlendorf <https://github.com/zfsonlinux/zfs/issues/8250>
[20:21:36] <zfs> [zfsonlinux/zfs] Removed suggestion to use root dataset as bootfs (#8247) comment by Brian Behlendorf <https://github.com/zfsonlinux/zfs/issues/8247>
[20:23:37] <Slashman> what I get from all that is that UEFI is fucking annoying and I still don't know how I will solve this issue
[20:26:15] <blackflow> I wouldn't (and don't) bother with mirroring /boot. you can alway recreate it if your primary (boot) disk in teh array fails, even without downtime if you have hotswappable drives.
[20:27:41] <Slashman> yeah, I guess I'll do just that, I have never seen a nvme drive until now anyway...
[20:27:51] <Slashman> +fail
[20:28:43] <gchristensen> my last gig burned up about 4tb of nvme/wk. you must be gentle :)
[20:29:51] <ptx0> wow
[20:29:56] <ptx0> you must have been lighting it on fire.
[20:30:06] <ptx0> or improperly cooling it
[20:30:11] <ptx0> either way, you are a monster
[20:30:54] <Slashman> usually, we don't write the full disk everyday, so we are pretty fine with the endurance for several years
[20:30:59] <gchristensen> sort of the same, right? anyway, no, to neither -- they were properly cooled and not on fire. we had an astronomical quantity of nvme drives and 4tb wasn't very much -- everything failed within specified boundaries.
[20:31:18] <zfs> [zfsonlinux/zfs] Feature request: incremental scrub (#8248) comment by Gregor Kopka <https://github.com/zfsonlinux/zfs/issues/8248#issuecomment-452053307>
[20:31:30] <ptx0> gchristensen: then you are at fault for misleading information earlier
[20:31:32] <ptx0> shame
[20:31:53] <gchristensen> ^.^
[20:32:05] <Slashman> what brand was that?
[20:32:37] <zfs> [zfsonlinux/zfs] Feature request: incremental scrub (#8248) comment by kpande <https://github.com/zfsonlinux/zfs/issues/8248#issuecomment-452053724>
[20:32:43] <gchristensen> a variety
[20:33:11] *** Markow <Markow!~ejm@176.122.215.103> has joined #zfsonlinux
[20:34:37] <zfs> [zfsonlinux/zfs] Feature request: incremental scrub (#8248) comment by kpande <https://github.com/zfsonlinux/zfs/issues/8248#issuecomment-452054348>
[20:37:25] <zfs> [zfsonlinux/zfs] Bump commit subject length to 72 characters (#8250) comment by Matthew Ahrens <https://github.com/zfsonlinux/zfs/issues/8250>
[20:44:46] * CompanionCube still wonders why use an uncompressed kernel when you can get better compression with XZ
[20:45:03] *** apekatten <apekatten!~apekatten@unaffiliated/apekatten> has joined #zfsonlinux
[20:46:51] <zfs> [zfsonlinux/zfs] Removed suggestion to use root dataset as bootfs (#8247) comment by bunder2015 <https://github.com/zfsonlinux/zfs/issues/8247>
[20:47:36] <bunder> CompanionCube: i guess it depends on where you want to do the compression
[20:47:54] <bunder> you can compress the kernel, or the fs could compress the file it consumes
[20:48:41] <bunder> since i'm using efi and i don't feel like booting 200mb kernels off fat, well you can see where this is going :P
[20:48:41] <CompanionCube> true, but you can get better results
[20:54:40] <DHE> in case you hadn't heard, github now has private repositories for all users for free. the catch: max 3 users allowed to access a free private repo, limiting collaboration
[20:55:47] <cirdan> all your code belong to MS
[20:56:33] <DHowett> i consider that slightly better than "all your code belong to that one VC-funded company"
[20:56:41] <DHowett> > shrug.exe
[20:56:56] <cirdan> i'd say it's a toss up
[20:57:31] <bunder> i dunno i might go for that, there's a few things i could put in git that i don't want to be public (ie firewall scripts)
[20:57:31] <cirdan> i care more about an ios 12 jailbreak
[20:57:43] <zfs> [zfsonlinux/zfs] Deleting Files Doesn't Free Space, unless I unmount the filesystem (#1548) comment by kpande <https://github.com/zfsonlinux/zfs/issues/1548#issuecomment-452061647>
[20:57:44] <zfs> [zfsonlinux/zfs] Deleting Files Doesn't Free Space, unless I unmount the filesystem (#1548) comment by kpande <https://github.com/zfsonlinux/zfs/issues/1548#issuecomment-452061647>
[20:57:45] <ptx0> bunder: your wish is my command
[20:57:48] <cirdan> bunder: thing is you dont even need a server for git
[20:57:49] <zfs> [zfsonlinux/zfs] Deleting Files Doesn't Free Space, unless I unmount the filesystem (#1548) closed by kpande <https://github.com/zfsonlinux/zfs/issues/1548#event-2058504620>
[20:58:32] <bunder> ptx0: but it still is broken isn't it
[20:58:38] <ptx0> in 0.6.5 yes
[20:58:49] <ptx0> my customers stopped bitching about it though
[20:59:05] <bunder> maybe because they stopped using dumb layouts :P
[20:59:13] <ptx0> that's most certainly not it
[20:59:42] <ptx0> don't worry though, we started chmod -w the root dataset via a zedlet
[20:59:51] <ptx0> when they create the pool they can no longer write to the root dataset ahaha
[21:01:02] <cirdan> good call
[21:01:08] <bunder> cirdan: its not so much the vcs alone that i'd want, i'd want somewhere to store it that isn't local
[21:01:19] <zfs> [zfsonlinux/zfs] 0.7.12 gives warning messages on Centos Release 6.10 (#8245) closed by kpande <https://github.com/zfsonlinux/zfs/issues/8245#event-2058511682>
[21:01:29] <zfs> [zfsonlinux/zfs] 0.7.12 gives warning messages on Centos Release 6.10 (#8245) comment by kpande <https://github.com/zfsonlinux/zfs/issues/8245#issuecomment-452062783>
[21:01:38] <cirdan> store it on my box, I even have a special dataset for passwords/keys ;-)
[21:02:18] <bunder> i trust github more, just saying :P
[21:02:19] <ptx0> the only safe way to store a 4096 bit key is to memorize it
[21:02:48] <gchristensen> extremely vulnerable to rubber hoses and diskrot though.
[21:03:50] <ptx0> looks like seagate keeps migrating their low end desktop HDDs over to SMR
[21:04:09] <ptx0> there are these DM004 drives, they were once PMR but now the DM0004 model is PMR and DM004 is SMR
[21:04:11] <cirdan> making low end even lower
[21:04:21] <ptx0> same price
[21:04:34] <ptx0> they introduced "barracuda pro" series that have no SMR
[21:04:43] <cirdan> of course
[21:05:03] <cirdan> nerf something then introduce a more expensive version to bring back old functionality
[21:05:15] <ptx0> it's almost like tim cook works there
[21:05:25] <cirdan> i agree
[21:05:40] <ptx0> wait, what do we do now then
[21:05:46] <ptx0> i've never agreed with anyone before
[21:05:57] <ptx0> do we.. get married?
[21:05:59] <cirdan> no
[21:06:05] <cirdan> but I do get 1/2 your cheque
[21:06:20] <ptx0> ride the alimonie poney
[21:06:43] <ptx0> pwnie?
[21:06:52] <cirdan> poony
[21:07:00] <ptx0> inappropriate
[21:07:35] <cirdan> it's just a little horese
[21:10:56] <zfs> [zfsonlinux/zfs] apt-get/dpkg commands are really slow on a ZFS rootfs (#3857) comment by kpande <https://github.com/zfsonlinux/zfs/issues/3857#issuecomment-452065527>
[21:11:01] <zfs> [zfsonlinux/zfs] apt-get/dpkg commands are really slow on a ZFS rootfs (#3857) closed by kpande <https://github.com/zfsonlinux/zfs/issues/3857#event-2058531545>
[21:11:18] <ptx0> keep bumping these useless issues, guise
[21:11:21] <ptx0> i'll keep closing them
[21:16:11] <zfs> [zfsonlinux/zfs] Use ZFS version for pyzfs and remove unused requirements.txt (#8243) comment by Brian Behlendorf <https://github.com/zfsonlinux/zfs/issues/8243#issuecomment-452067090>
[21:16:42] <zfs> [zfsonlinux/zfs] Use ZFS version for pyzfs and remove unused requirements.txt (#8243) comment by Brian Behlendorf <https://github.com/zfsonlinux/zfs/issues/8243>
[21:22:50] *** jasonwc <jasonwc!~jasonwc@pool-72-66-15-203.washdc.fios.verizon.net> has joined #zfsonlinux
[21:25:56] <zfs> [zfsonlinux/zfs] apt-get/dpkg commands are really slow on a ZFS rootfs (#3857) comment by René Bertin <https://github.com/zfsonlinux/zfs/issues/3857#issuecomment-452069961>
[21:26:52] <CompanionCube> bunder: self-hosting git isn't terribly complex
[21:27:17] <gchristensen> especially if it is _just_ git
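For reference, "just git" over ssh really is only a couple of commands; the host name and repository path below are made up:

    # on any box you can ssh into
    ssh user@backuphost 'git init --bare /srv/git/firewall.git'
    # locally, point the repo at it and push
    git remote add origin user@backuphost:/srv/git/firewall.git
    git push -u origin master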
[21:29:21] <zfs> [zfsonlinux/zfs] apt-get/dpkg commands are really slow on a ZFS rootfs (#3857) comment by kpande <https://github.com/zfsonlinux/zfs/issues/3857#issuecomment-452070912>
[21:30:40] <Celmor> if I have a pool with a mirror configured where one disk is offlined, can I import that offlined disk into a new (temporary) pool to recover data from it?
[21:31:33] <Celmor> I know that it should be possible while the current pool is not imported
[21:33:07] <ptx0> no
[21:34:06] <Celmor> so I would have to import that temporary pool in a VM then I suppose
[21:34:48] <PMT> Yes.
[21:35:01] <ptx0> and 'zpool split'
[21:35:03] <PMT> There might be a feature request bug for that somewhere, since it's similar to just zpool reguid
[21:35:29] <PMT> ptx0: I don't think that would actually work, wouldn't it keep the "live" side of the pool the old GUID?
[21:36:57] <ptx0> it reguids internally
[21:37:33] <zfs> [openzfs/openzfs] Add "pkg:/library/idnkit/header-idnkit" to build dependencies (#727) closed by Prakash Surya <https://github.com/openzfs/openzfs/issues/727#event-2058583457>
[21:37:39] *** Slashman <Slashman!~Slash@cosium-152-18.fib.nerim.net> has quit IRC (Remote host closed the connection)
[21:37:56] <PMT> Yes, I assumed so, but I didn't know that split on a half-missing pool would reguid the live half, not the gone half
[21:38:02] <zfs> [openzfs/openzfs] Merge remote-tracking branch 'illumos/master' into illumos-sync (#728) closed by Prakash Surya <https://github.com/openzfs/openzfs/issues/728#event-2058584374>
[21:38:04] <ptx0> no
[21:38:10] <ptx0> they have to import the whole pool
[21:38:24] <ptx0> but that's an interesting idea you should implement
[21:38:41] <PMT> I mean, their whole premise was importing the offline half for data recovery
[21:39:02] <DHE> a read-only import would be okay...
[21:40:01] <zfs> [openzfs/openzfs] Merge remote-tracking branch 'illumos/master' into illumos-sync (#730) created by zettabot <https://github.com/openzfs/openzfs/issues/730>
[21:42:15] *** jasonwc <jasonwc!~jasonwc@pool-72-66-15-203.washdc.fios.verizon.net> has quit IRC (Ping timeout: 252 seconds)
[21:45:14] *** stefan00 <stefan00!~stefan00@ip9234924b.dynamic.kabel-deutschland.de> has quit IRC (Quit: stefan00)
[21:46:59] <zfs> [zfsonlinux/zfs] Include third party licenses in dist tarballs (#8242) comment by Brian Behlendorf <https://github.com/zfsonlinux/zfs/issues/8242>
[21:54:24] <blackflow> Hm, methinks I'm maybe misunderstanding L2ARC here. I've designated a 64G partition off of an SSD, added it as cache dev to the pool. According to arc_summary, its adaptive size is only 3G but I have a feeling it should be a lot bigger. What exactly does ZFS put in L2? I was assuming any block read by ZFS would go through either ARC or L2
[21:59:40] <ptx0> use kstat analyzer
[22:00:31] <DHE> there's a thread that trickle-feeds the l2arc from data on the ends of the main ARC LRU lists
[22:01:03] <ptx0> basically l2arc contains evicted arc entries
[22:01:53] <ptx0> arc is much faster than l2arc so you really want to use that if possible and l2arc is just for things that are nice to cache but not used enough to be in main memory
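One quick way to see whether the L2ARC is actually absorbing evicted reads is to pull the relevant counters straight from the kstat file that arc_summary parses; the path and field names below are the standard ones on ZFS on Linux 0.7.x.

    # ARC vs L2ARC hits/misses, plus current L2ARC payload and header sizes (bytes)
    grep -E '^(hits|misses|l2_hits|l2_misses|l2_size|l2_hdr_size) ' \
        /proc/spl/kstat/zfs/arcstats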
[22:02:29] <PMT> I eagerly await if NVDIMMs become ubiquitous someone attempting to grow that functionality in.
[22:05:47] <zfs> [zfsonlinux/zfs] 0.7.12 gives warning messages on Centos Release 6.10 (#8245) comment by Brian Behlendorf <https://github.com/zfsonlinux/zfs/issues/8245#issuecomment-452081500>
[22:07:28] <Celmor> ptx0, why `zpool split`?
[22:08:34] <ptx0> two pool
[22:08:59] <Celmor> I don't follow
[22:12:58] <PMT> Celmor: zpool split turns one pool of N-way mirrors into 2 pools of N-1 and 1-way vdevs, respectively, and reguids+renames one of them.
[22:13:31] <zfs> [zfsonlinux/zfs] Feature request: incremental scrub (#8248) comment by Gregor Kopka <https://github.com/zfsonlinux/zfs/issues/8248#issuecomment-452083701>
[22:14:08] <Celmor> alright. thanks, I only need to mount the pool on the offline mirror temporarily and readonly
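For anyone reading later, the zpool split flow PMT describes looks roughly like the sketch below; note that it operates on an imported pool with intact mirrors, not on a half that has already been offlined, and the pool names are made up:

    zpool split tank tank-copy             # peel one device off each mirror into a new pool
    zpool import -o readonly=on tank-copy  # bring the split-off half in read-only
    zfs list -r tank-copy                  # ...and pull data off it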
[22:19:48] <zfs> [zfsonlinux/zfs] avoid retrieving unused snapshot props (#8077) comment by Alek P <https://github.com/zfsonlinux/zfs/issues/8077#issuecomment-452085532>
[22:22:50] <zfs> [zfsonlinux/zfs] port async unlinked drain from illumos-nexenta (#8142) comment by Alek P <https://github.com/zfsonlinux/zfs/issues/8142#issuecomment-452086411>
[22:26:30] <PMT> Celmor: I think there's a bug # for wanting to reguid a pool on import somewhere, but I might be making that up
[22:27:05] <Celmor> not sure what that means but ok
[22:28:10] <PMT> Celmor: so the problem you'd hit if you tried importing half the pool read-only while the other half is imported is that, well, it's already imported. So you'd want to change any of the unique IDs involved. (I would not be surprised if the official response is "lol never" to the question "can i import more than one part of an N-way pool at the same time")
[22:29:01] <Celmor> didn't even get to the point of actually attempting to import it, as it didn't list anything to import when I scanned for pools
[22:29:40] <PMT> It probably explicitly avoids listing pools that are already imported.
[22:31:03] *** gerhard7 <gerhard7!~gerhard7@ip5657ee30.direct-adsl.nl> has quit IRC (Quit: Leaving)
[22:37:14] <zfs> [zfsonlinux/zfs] Feature request: incremental scrub (#8248) comment by Alek P <https://github.com/zfsonlinux/zfs/issues/8248#issuecomment-452090641>
[22:53:24] *** Essadon <Essadon!~Essadon@81-225-32-185-no249.tbcn.telia.com> has quit IRC (Ping timeout: 252 seconds)
[22:53:28] <zfs> [zfsonlinux/zfs] OpenZFS 8473 - scrub does not detect errors on active spares (#8251) created by Brian Behlendorf <https://github.com/zfsonlinux/zfs/issues/8251>
[23:03:05] <ptx0> PMT: it won't import an offline device either
[23:03:09] <ptx0> even if the other half is not imported
[23:03:45] <Shinigami-Sama> ...why would it import
[23:04:04] <Shinigami-Sama> pull it out and try and import it somewhere else.
[23:04:21] <Shinigami-Sama> or pass the raw disks through kvm or something
[23:04:55] <ptx0> read the scrollback
[23:06:46] <Shinigami-Sama> I get the datarecovery part, but I don't understand how it would be possible in and of itself
[23:07:00] <Shinigami-Sama> seems like putting socks over your shoes
[23:08:28] <zfs> [zfsonlinux/zfs] Include third party licenses in dist tarballs (#8242) comment by Neal Gompa <https://github.com/zfsonlinux/zfs/issues/8242#issuecomment-452099310>
[23:08:30] <ptx0> how, exactly
[23:28:01] <zfs> [zfsonlinux/zfs] Include third party licenses in dist tarballs (#8242) comment by Brian Behlendorf <https://github.com/zfsonlinux/zfs/issues/8242>
[23:34:15] <Celmor> if I have an older zfs snapshot which I backed up I can bookmark and destroy that snapshot but still be able to recover that snapshot via zfs recv, right?
[23:37:25] <zfs> [zfsonlinux/zfs] port async unlinked drain from illumos-nexenta (#8142) comment by Brian Behlendorf <https://github.com/zfsonlinux/zfs/issues/8142>
[23:41:49] <ptx0> bookmarks are so that you can incremental send from missing snapshots
[23:42:56] *** leper` <leper`!~leper`@77-57-120-172.dclient.hispeed.ch> has quit IRC (Quit: .)
[23:43:42] <DeHackEd> bookmarks are for the sender AND ONLY THE SENDER. every time I run "zfs send [-i source] $SNAPSHOT" I always first do "zfs bookmark $SNAPSHOT $BOOKMARK" so that a future incremental is guaranteed to be possible
[23:44:37] <ptx0> i do it after.
[23:44:54] <ptx0> and now my script takes a bookmark after resuming a token as of yesterday
[23:45:46] <ptx0> after => less accumulated bookmarks from failed send attempts, at least pre-resume support, for me
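A sketch of the bookmark-before-destroy workflow being described, with hypothetical dataset and host names; the point is that an incremental send can use the bookmark as its source even after the local snapshot is destroyed, as long as the receiving side still holds the corresponding snapshot:

    zfs snapshot tank/data@2019-01-07
    zfs bookmark tank/data@2019-01-07 tank/data#2019-01-07
    zfs send tank/data@2019-01-07 | ssh backuphost zfs recv backup/data
    zfs destroy tank/data@2019-01-07     # safe: the bookmark remains
    # later, an incremental based on the bookmark
    zfs snapshot tank/data@2019-01-08
    zfs send -i tank/data#2019-01-07 tank/data@2019-01-08 | ssh backuphost zfs recv backup/data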
[23:46:15] *** donhw <donhw!~quassel@host-184-167-36-98.jcs-wy.client.bresnan.net> has quit IRC (Remote host closed the connection)
[23:46:19] *** leper` <leper`!~leper`@77-57-120-172.dclient.hispeed.ch> has joined #zfsonlinux
[23:46:40] <blackflow> ptx0: "arc is much faster than l2arc so you really want to use that if possible and l2arc is just for things that are nice to cache but not used enough to be in main memory" -- I know. But I guess I was wrong to assume that every block read will go through ARC or L2 and always end up cached. I have a very slow 5400rpm HDD I use for Steam games and I wanted to speed up (frequent) level loads but I'm
[23:46:47] <blackflow> not sure it's happening
[23:46:55] <ptx0> DeHackEd: i find it irritating that zfs hold uses the tag argument before the snapshot instead of how bookmark does
[23:47:20] *** simukis <simukis!~simukis_@78-63-88-48.static.zebra.lt> has quit IRC (Quit: simukis)
[23:47:28] <blackflow> I have 64G of L2 but arc_summary says only 3.3G is in L2 and 90% is hit miss ...
[23:47:33] <blackflow> s/hit miss/cache miss/
[23:48:09] <blackflow> 8G of RAM but adaptive ARC size says ~300MB
[23:48:30] <PMT> blackflow: what version?
[23:48:49] <blackflow> In fact, some in-game loading has visibly slowed down and I was assuming that's because those blocks had to be (synchronously written to L2 first)
[23:49:16] <blackflow> eh parentheses.... 0.7.5 ubuntu Bionic
[23:49:55] <PMT> Look at #7820 for more information on why that might happen. Feel free to try and convince Ubuntu to ship a backport of the currently-committed fix in there, though be aware it does not appear to be a complete fix.
[23:49:57] <bunder> that l2arc is yuge, try something smaller like 8gb so your arc isn't all l2arc pointers
[23:50:06] <zfs> [zfs] #7820 - ARC target size varying wildly <https://github.com/zfsonlinux/zfs/issues/7820>
[23:50:24] <PMT> bunder: given that the ARC is hovering at what I presume is near arc_min, I'm betting on ^
[23:50:53] <blackflow> PMT: I see
[23:52:02] <blackflow> bunder: yeah I thought about that, but it's only filled up ~3G, and ~3MB is L2 header size for now. I'd be delighted to see all 64G filled up before I trim that down :)
[23:52:23] <PMT> blackflow: l2arc entries take up ARC memory, albeit a relatively small amount each, so that's an important thing to consider.
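To check whether this is the #7820 behaviour (the ARC target size collapsing toward its floor) rather than L2ARC header overhead, comparing a few arcstats fields and the module tunables is usually enough; the paths below are the usual ZFS-on-Linux locations.

    # target size (c) vs its floor/ceiling, and ARC memory spent on L2ARC headers
    awk '$1 == "c" || $1 == "c_min" || $1 == "c_max" || $1 == "l2_hdr_size" {print $1, $3}' \
        /proc/spl/kstat/zfs/arcstats
    # 0 here means the built-in defaults are in effect
    grep . /sys/module/zfs/parameters/zfs_arc_min /sys/module/zfs/parameters/zfs_arc_max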
[23:52:41] <blackflow> I know, we discussed this here before :) I was asking how to read it from arc_summary
[23:53:04] *** rjvb <rjvb!~rjvb@2a01cb0c84dee6009bda76eb03bc33f7.ipv6.abo.wanadoo.fr> has quit IRC (Ping timeout: 252 seconds)
[23:53:04] <PMT> Sounds likely. My memory has more holes in it than 20yo mass-produced socks
[23:53:31] <blackflow> lol
[23:54:21] <PMT> I'd blame age or medical treatments that list memory impairment as a side effect, but there's extremely compelling evidence this isn't a change in the last 10 years, and since I'm under 40, it's unlikely to be from age unless my brain more closely resembles a raisin than a prune.
[23:58:33] <zfs> [zfsonlinux/zfs] vdev_open maybe crash when vdev_probe return NULL (#8244) comment by Brian Behlendorf <https://github.com/zfsonlinux/zfs/issues/8244#issuecomment-452112430>