January 10, 2019
[00:03:10] <zfs> [zfsonlinux/zfs] Linux 5.0: asm/i387.h: No such file or directory (#8259) comment by Marc Dionne <https://github.com/zfsonlinux/zfs/issues/8259#issuecomment-452902510>
[00:03:50] <zfs> [zfsonlinux/zfs] port async unlinked drain from illumos-nexenta (#8142) new review comment by Brian Behlendorf <https://github.com/zfsonlinux/zfs/pull/8142#discussion_r246579860>
[00:06:11] *** obadz <obadz!~obadz@unaffiliated/obadz> has joined #zfsonlinux
[00:10:34] <PMT> CompanionCube: I mean, you could send them elsewhere. But I think the confusion comes from things like e.g. LVM where you have to logically upper bound how much space snapshots can take out of the space not already used by the LV.
[00:12:40] *** rjvb <rjvb!~rjvb@2a01cb0c84dee6009bda76eb03bc33f7.ipv6.abo.wanadoo.fr> has quit IRC (Ping timeout: 252 seconds)
[00:37:06] *** jasonwc <jasonwc!~jasonwc@pool-72-66-15-203.washdc.fios.verizon.net> has joined #zfsonlinux
[00:37:16] *** shibboleth <shibboleth!~shibbolet@gateway/tor-sasl/shibboleth> has joined #zfsonlinux
[00:37:40] <pink_mist> https://seclists.org/oss-sec/2019/q1/54
[00:44:24] <zfs> [openzfs/openzfs] Add a manual for ztest. (#729) new commit by Sevan Janiyan <https://github.com/openzfs/openzfs>
[00:45:08] <gchristensen> pretty good one there
[00:45:29] <gchristensen> Qualys is incredible
[00:46:31] <snehring> yeah they do good work
[01:00:07] <ptx0> i think this motherboard only works with registered ecc
[01:00:10] <ptx0> sigh
[01:01:22] <zfs> [zfsonlinux/zfs] Linux 5.0: asm/i387.h: No such file or directory (#8259) comment by Tony Hutter <https://github.com/zfsonlinux/zfs/issues/8259#issuecomment-452918667>
[01:31:43] <ptx0> 9 Power_On_Hours 0x0032 100 100 000 Old_age Always - 0
[01:31:46] <ptx0> ^_^
[01:32:06] <Shinigami-Sama> ptx0: you too? my brand new drive is doing that nonsense too
[01:32:14] <ptx0> yep
[01:32:16] <ptx0> stupid new disks
[01:32:30] <Shinigami-Sama> half the items on it are showing old_age and pre_fail
[01:32:35] <ptx0> that's just how 
[01:32:38] <ptx0> SMART
[01:32:39] <ptx0> works
[01:32:47] <Shinigami-Sama> yup
[01:33:26] <ptx0> just wanted to make sure the drive spins up
[01:33:34] <ptx0> it is a spare for later
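The Pre_fail/Old_age labels being joked about above are the TYPE column of each SMART attribute, not a verdict on the drive. A minimal check with smartmontools, assuming the disk sits at /dev/sdX:
    smartctl -A /dev/sdX    # vendor attribute table, like the Power_On_Hours row pasted above
    smartctl -H /dev/sdX    # overall health self-assessment (PASSED/FAILED)
    # An attribute only counts as failed once its normalized VALUE drops to or below THRESH
    # (the WHEN_FAILED column stops showing "-"); a RAW_VALUE of 0 hours on a new drive is expected.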
[01:34:16] <ptx0> also not the cpu lanes at fault, started on a e5 2670 and gave that to a friend, works great there
[01:34:22] <PMT> ptx0: that's funny, I thought registered required more power and that was why most things worked with unbuffered not registered
[01:34:26] <PMT> do i have it backwards
[01:34:33] <ptx0> yes
[01:35:41] <ptx0> in unbuffered memory configurations go directly from the controller to the memory module
[01:35:48] <ptx0> oops
[01:35:51] <ptx0> memory commands in*
[01:36:39] <Shinigami-Sama> registered is how they can get stupid densities as well, because there doesn't have to be as many "direct" lanes to the individual rows
[01:36:48] <ptx0> right.
[01:37:00] <ptx0> there is lower electrical load
[01:37:05] <Shinigami-Sama> the "register" can fake being stupid, and actually be a little smart
[01:37:21] <ptx0> no one knows what that means
[01:38:22] <Shinigami-Sama> it started off being a literal register, now they're actual little controllers inside the DIMM
[01:38:41] <Shinigami-Sama> so now they fake being stupid like a register
[01:43:49] *** cirdan <cirdan!~cirdan@2601:85:4400:6fd3:f64d:30ff:fe60:210d> has quit IRC (Ping timeout: 250 seconds)
[01:57:08] *** shibboleth <shibboleth!~shibbolet@gateway/tor-sasl/shibboleth> has quit IRC (Quit: shibboleth)
[02:04:32] *** sponix <sponix!~sponix@68.171.186.43> has joined #zfsonlinux
[02:05:45] <zfs> [zfsonlinux/zfs] Eliminate ZTHR races by serializing ZTHR operations. (#8229) new review comment by Serapheim Dimitropoulos <https://github.com/zfsonlinux/zfs/pull/8229#discussion_r246606628>
[02:09:12] *** cirdan <cirdan!~cirdan@2601:85:4400:6fd3:f64d:30ff:fe60:210d> has joined #zfsonlinux
[02:15:43] *** shibboleth <shibboleth!~shibbolet@gateway/tor-sasl/shibboleth> has joined #zfsonlinux
[02:16:10] <zfs> [zfsonlinux/zfs] Add dmu_object_alloc_hold() and zap_create_hold() (#8015) comment by Brian Behlendorf <https://github.com/zfsonlinux/zfs/issues/8015#issuecomment-452933509>
[02:19:21] <zfs> [zfsonlinux/zfs] Linux 5.0: macro "access_ok" passed 3 arguments, but takes just 2 (#8261) created by Tony Hutter <https://github.com/zfsonlinux/zfs/issues/8261>
[02:22:07] *** tnebrs <tnebrs!~barely@212.117.188.13> has quit IRC (Ping timeout: 240 seconds)
[02:24:55] <zfs> [zfsonlinux/zfs] Eliminate ZTHR races by serializing ZTHR operations. (#8229) new review comment by Serapheim Dimitropoulos <https://github.com/zfsonlinux/zfs/pull/8229#discussion_r246609733>
[02:32:07] <zfs> [zfsonlinux/zfs] Add dmu_object_alloc_hold() and zap_create_hold() (#8015) comment by Tony Hutter <https://github.com/zfsonlinux/zfs/issues/8015>
[02:41:40] *** rjvbb <rjvbb!~rjvb@2a01cb0c84dee600842282460d00473a.ipv6.abo.wanadoo.fr> has quit IRC (Ping timeout: 252 seconds)
[02:54:48] *** veegee <veegee!~veegee@ipagstaticip-3d3f7614-22f3-5b69-be13-7ab4b2c585d9.sdsl.bell.ca> has quit IRC (Quit: veegee)
[03:09:24] *** shibboleth <shibboleth!~shibbolet@gateway/tor-sasl/shibboleth> has quit IRC (Quit: shibboleth)
[03:29:19] *** tnebrs <tnebrs!~barely@212.117.188.13> has joined #zfsonlinux
[03:41:54] *** tnebrs <tnebrs!~barely@212.117.188.13> has quit IRC (Ping timeout: 246 seconds)
[03:46:49] <bunder> ptx0: what board? is that the prime x399?
[03:47:35] <bunder> it says in the manual ECC and non-ECC, un-buffered memory
[03:57:13] <bunder> sadly they never validated any, i don't see any on the qvl list
[04:04:11] *** Essadon <Essadon!~Essadon@81-225-32-185-no249.tbcn.telia.com> has quit IRC (Quit: Qutting)
[04:09:02] <ptx0> no
[04:09:11] <ptx0> KM1D-X79+ v2.0
[04:09:18] <bunder> lol x79
[04:11:20] <bunder> http://www.pc-doskoi.jp/htmls/1100000245070.html
[04:11:30] <bunder> if thats the only webpage google knows about it, i think you're sol
[04:13:00] <bunder> how'd you end up with it, aliexpress? :P
[04:20:57] *** Markow <Markow!~ejm@176.122.215.103> has quit IRC (Quit: Leaving)
[04:35:39] <ptx0> ebay lol
[04:36:10] *** veegee <veegee!~veegee@ipagstaticip-3d3f7614-22f3-5b69-be13-7ab4b2c585d9.sdsl.bell.ca> has joined #zfsonlinux
[04:37:10] <ptx0> i remembered my A4 system has some non-ECC DDR3
[04:37:15] <ptx0> pulled that and tried it, same thing
[04:37:22] <ptx0> the system powers on for about 3 seconds and then dies
[04:38:20] <ss23> Have you tried turning it off and on again?
[04:43:37] <DHowett> nah he put the new RAM in while it was on presumably
[04:44:03] <ss23> heh
[04:44:43] <ptx0> i thought that's what hotplug meant.
[04:44:43] <ptx0> i enabled the option in the kernel first
[04:46:42] <pink_mist> lol
[04:47:57] <ss23> Just tried hot plugging my power supply and now my friend is telling me I need to buy some new magic smoke. Does anyone know where I can get some?
[04:48:21] <DHowett> ss23: i think i know a guy
[04:48:34] <ptx0> you can use my headlight fluid guy
[04:55:14] *** tnebrs <tnebrs!~barely@212.117.188.100> has joined #zfsonlinux
[04:59:53] *** compdoc <compdoc!~me@unaffiliated/compdoc> has quit IRC ()
[05:04:03] <TemptorSent> ss23: Check the old shops for some new-old-stock Lucas Magic Smoke.
[05:08:41] *** xlued <xlued!~xlued@45.76.247.183> has quit IRC (Remote host closed the connection)
[05:09:31] *** jasonwc <jasonwc!~jasonwc@pool-72-66-15-203.washdc.fios.verizon.net> has quit IRC (Ping timeout: 244 seconds)
[05:09:37] *** xlued <xlued!~xlued@45.76.247.183> has joined #zfsonlinux
[05:30:50] *** tnebrs <tnebrs!~barely@212.117.188.100> has quit IRC (Ping timeout: 272 seconds)
[05:35:23] <ptx0> anyone familiar with the supermicro x9sra?
[05:43:37] <ptx0> enough google-fu showed me a dmesg output from 2012 where the TSC calibrated just fine
[05:43:41] * ptx0 buys
[05:58:51] *** tlacatlc6 <tlacatlc6!~tlacatlc6@68.202.46.96> has quit IRC (Quit: Leaving)
[06:21:52] <ptx0> i think my crucial mistake was buying that pumpkin pie for my birthday
[06:22:03] <ptx0> or maybe it was that i ate half of it in one sitting
[06:22:38] <PMT> ptx0: were you having TSC problems somewhere and I forgot
[06:24:51] <ptx0> yes every single intel dx79 board has it
[06:25:06] <ptx0> and presumably all of the knock-off chinese counterparts
[06:27:33] *** tnebrs <tnebrs!~barely@212.117.188.13> has joined #zfsonlinux
[06:28:08] *** eab <eab!~eborisch@75-134-18-245.dhcp.mdsn.wi.charter.com> has joined #zfsonlinux
[06:53:48] *** tnebrs <tnebrs!~barely@212.117.188.13> has quit IRC (Ping timeout: 272 seconds)
[07:07:10] <ptx0> anyone here play Far Cry 5?
[07:07:17] <ptx0> [on PC]
[07:09:02] *** sauravg <sauravg!~sauravg@171.49.233.132> has joined #zfsonlinux
[07:29:18] *** simukis <simukis!~simukis_@78-63-88-48.static.zebra.lt> has joined #zfsonlinux
[07:31:54] *** fishfox_ <fishfox_!~fishfox@047-232-140-097.res.spectrum.com> has quit IRC (Remote host closed the connection)
[07:32:33] *** gerhard7 <gerhard7!~gerhard7@ip5657ee30.direct-adsl.nl> has joined #zfsonlinux
[07:39:40] *** catalase <catalase!~catalase@unaffiliated/catalase> has quit IRC (Ping timeout: 246 seconds)
[07:42:05] *** catalase <catalase!~catalase@unaffiliated/catalase> has joined #zfsonlinux
[07:55:43] *** tnebrs <tnebrs!~barely@212.117.188.13> has joined #zfsonlinux
[08:04:40] *** hyper_ch2 <hyper_ch2!c105d864@openvpn/user/hyper-ch2> has joined #zfsonlinux
[08:16:45] <lundman> yes
[08:17:08] *** tnebrs <tnebrs!~barely@212.117.188.13> has quit IRC (Ping timeout: 245 seconds)
[08:19:02] *** prologic <prologic!~prologic@unaffiliated/prologic> has quit IRC (Read error: Connection reset by peer)
[08:19:47] <ptx0> lemme know if you ever wanna play co-op
[08:22:01] *** prologic <prologic!~prologic@unaffiliated/prologic> has joined #zfsonlinux
[08:35:36] <lundman> i finished it alas
[08:40:26] *** DeHackEd <DeHackEd!~dehacked@216.75.170.33> has quit IRC (Ping timeout: 268 seconds)
[08:40:52] *** DeHackEd <DeHackEd!~dehacked@atmaweapon.dehacked.net> has joined #zfsonlinux
[09:06:39] *** leothrix <leothrix!~leothrix@elastic/staff/leothrix> has quit IRC (Ping timeout: 250 seconds)
[09:12:47] *** leothrix <leothrix!~leothrix@elastic/staff/leothrix> has joined #zfsonlinux
[09:30:40] <ptx0> well jeez
[09:30:53] <ptx0> now the pcie ports are working on that dx79 after i reinstall the cpu
[09:31:02] <ptx0> but it was broken with two CPUs..
[09:31:07] <ptx0> maybe it wasn't o.O
[09:31:52] *** tnebrs <tnebrs!~barely@212.117.188.100> has joined #zfsonlinux
[09:36:32] *** socra <socra!socra@gateway/shell/suchznc/x-enzcszifwhcfakql> has quit IRC (Ping timeout: 250 seconds)
[09:42:44] *** kaipee <kaipee!~kaipee@81.128.200.210> has joined #zfsonlinux
[09:44:23] *** FireSnake <FireSnake!firesnake@gateway/shell/xshellz/x-dufhinohydkduejj> has quit IRC (Remote host closed the connection)
[09:48:47] *** rjvb <rjvb!~rjvb@2a01cb0c84dee6009bda76eb03bc33f7.ipv6.abo.wanadoo.fr> has joined #zfsonlinux
[09:51:43] *** hoonetorg <hoonetorg!~hoonetorg@77.119.226.254.static.drei.at> has quit IRC (Ping timeout: 250 seconds)
[09:51:54] *** FireSnake <FireSnake!firesnake@gateway/shell/xshellz/x-ccwgwhkdctccepus> has joined #zfsonlinux
[09:52:33] *** FireSnake <FireSnake!firesnake@gateway/shell/xshellz/x-ccwgwhkdctccepus> has quit IRC (Remote host closed the connection)
[09:53:33] *** tnebrs <tnebrs!~barely@212.117.188.100> has quit IRC (Ping timeout: 252 seconds)
[09:55:37] *** hoonetorg <hoonetorg!~hoonetorg@77.119.226.254.static.drei.at> has joined #zfsonlinux
[09:56:26] *** Slashman <Slashman!~Slash@cosium-152-18.fib.nerim.net> has joined #zfsonlinux
[09:56:44] *** FireSnake <FireSnake!firesnake@gateway/shell/xshellz/x-ygwqafkiiahzfewn> has joined #zfsonlinux
[10:03:44] <ptx0> mmhm, now the nvme device AND 10gb nic work together
[10:09:11] *** socra <socra!socra@gateway/shell/suchznc/x-cfetxleityrqvvht> has joined #zfsonlinux
[10:21:34] *** logan- <logan-!~logan@irc.protiumit.com> has quit IRC (Ping timeout: 268 seconds)
[10:29:03] *** insane^ <insane^!~insane@fw.vispiron.de> has joined #zfsonlinux
[10:45:32] *** Dagger <Dagger!~dagger@sawako.haruhi.eu> has quit IRC (Excess Flood)
[10:47:13] *** Dagger <Dagger!~dagger@sawako.haruhi.eu> has joined #zfsonlinux
[10:52:55] *** tnebrs <tnebrs!~barely@212.117.188.100> has joined #zfsonlinux
[11:06:15] *** MilkmanDan <MilkmanDan!~dan@wilug/expat/MilkmanDan> has quit IRC (Ping timeout: 250 seconds)
[11:08:03] *** MilkmanDan <MilkmanDan!~dan@wilug/expat/MilkmanDan> has joined #zfsonlinux
[11:12:17] *** Markow <Markow!~ejm@176.122.215.103> has joined #zfsonlinux
[11:12:44] *** tnebrs <tnebrs!~barely@212.117.188.100> has quit IRC (Ping timeout: 250 seconds)
[11:13:35] *** clete2 <clete2!~clete2@71-135-200-38.lightspeed.tukrga.sbcglobal.net> has quit IRC (Quit: ZNC 1.6.5 - http://znc.in)
[11:15:02] *** clete2 <clete2!~clete2@71-135-200-38.lightspeed.tukrga.sbcglobal.net> has joined #zfsonlinux
[11:26:26] *** blassin <blassin!~sardaukar@142.19.249.5.rev.vodafone.pt> has quit IRC (Remote host closed the connection)
[11:27:19] *** blassin <blassin!~sardaukar@142.19.249.5.rev.vodafone.pt> has joined #zfsonlinux
[11:32:10] *** k-man <k-man!~jason@unaffiliated/k-man> has quit IRC (Quit: WeeChat 2.3)
[11:49:02] *** IonTau <IonTau!~IonTau@ppp121-45-221-77.bras1.cbr2.internode.on.net> has quit IRC (Remote host closed the connection)
[12:08:41] *** morphin <morphin!c38e669e@gateway/web/freenode/ip.195.142.102.158> has quit IRC (Ping timeout: 256 seconds)
[12:09:56] <zfs> [zfsonlinux/zfs] Feature request: incremental scrub (#8248) comment by mailinglists35 <https://github.com/zfsonlinux/zfs/issues/8248#issuecomment-453058711>
[12:16:25] <zfs> [zfsonlinux/zfs] Verify checksum of the ZFS module text and rodata before each transaction group commit (#2832) comment by mailinglists35 <https://github.com/zfsonlinux/zfs/issues/2832#issuecomment-453060380>
[12:17:52] *** tnebrs <tnebrs!~barely@212.117.188.13> has joined #zfsonlinux
[12:24:37] *** Markow <Markow!~ejm@176.122.215.103> has quit IRC (Quit: Leaving)
[12:33:16] *** tnebrs <tnebrs!~barely@212.117.188.13> has quit IRC (Ping timeout: 272 seconds)
[12:48:56] *** sauron__ <sauron__!~quassel@thor.coruscant.org.uk> has quit IRC (Ping timeout: 250 seconds)
[13:04:33] *** rjvbb <rjvbb!~rjvb@2a01cb0c84dee60021401c4d8cbb67e2.ipv6.abo.wanadoo.fr> has joined #zfsonlinux
[13:38:00] *** tnebrs <tnebrs!~barely@212.117.188.13> has joined #zfsonlinux
[13:41:34] *** Markow <Markow!~ejm@176.122.215.103> has joined #zfsonlinux
[13:57:24] *** tnebrs <tnebrs!~barely@212.117.188.13> has quit IRC (Ping timeout: 252 seconds)
[14:09:09] <zfs> [zfsonlinux/zfs] #8235 feature request: zpool iostat N should repeat header like vmstat N (#8246) closed by Damian Wojsław <https://github.com/zfsonlinux/zfs/issues/8246#event-2065811032>
[14:09:10] <zfs> [zfsonlinux/zfs] #8235 feature request: zpool iostat N should repeat header like vmstat N (#8246) comment by Damian Wojsław <https://github.com/zfsonlinux/zfs/issues/8246#event-2065811032>
[14:09:16] <zfs> [zfsonlinux/zfs] #8235 feature request: zpool iostat N should repeat header like vmstat N (#8246) comment by Damian Wojsław <https://github.com/zfsonlinux/zfs/issues/8246#issuecomment-453090131>
[14:13:15] <zfs> [zfsonlinux/zfs] #8235 feature request: zpool iostat N should repeat header like vmstat N (#8246) comment by bunder2015 <https://github.com/zfsonlinux/zfs/issues/8246#issuecomment-453091247>
[14:20:03] *** patdk-lap <patdk-lap!~patrickdk@208.94.190.191> has quit IRC (Ping timeout: 245 seconds)
[14:23:39] *** patdk-lap <patdk-lap!~patrickdk@208.94.190.191> has joined #zfsonlinux
[14:29:08] *** mquin <mquin!~mike@freenode/staff/mquin> has quit IRC (Quit: So Much For Subtlety)
[14:31:33] <bunder> tbh i didn't see anything glaringly wrong with 8246, other than using the master branch of their repo for pushing
[14:31:36] <madwizard> bunder++ Yeah, just reading it
[14:32:27] <bunder> did the test bots mass fail? i didn't catch it
[14:33:25] <madwizard> They did on the second round, I don't know yet wy
[14:33:31] <madwizard> s/wy/why/
[14:34:25] <bunder> you can run the test suite locally too but you need ksh and nfs and samba and a few other apps, and to run it off a non-zfs host
[14:35:43] <bunder> (well, host or a vm instance)
[14:35:50] <madwizard> Yup, will have to set up a VM for that
[14:47:01] <bunder> its a little more advanced that a beginner article, but you can also squash your branch into a single commit too
[14:47:09] <bunder> s/that/than
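For reference, both suggestions look roughly like this, assuming a zfsonlinux/zfs checkout inside a throwaway VM and a hypothetical branch name; exact flags may vary by version:
    ./scripts/zfs-tests.sh -v                  # run the ZFS Test Suite as a regular user with
                                               #   passwordless sudo (needs ksh, nfs, samba, ...)
    git rebase -i master                       # squash the branch: mark all but the first commit as "squash"
    git push --force-with-lease origin my-feature-branch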
[14:51:42] <madwizard> ugh, I was fooling around with whitespaces in vim and my indents ended up as >··· instead of real tab
[14:51:54] <madwizard> This is what broke the test bots
[14:51:58] <bunder> ah
[14:52:04] <bunder> i use nano /shrug
[14:52:12] <lblume> Tabs for indentation! Heresy!
[14:52:19] <lblume> No coffee for you.
[14:52:25] <madwizard> lblume: shoo
[14:52:34] <bunder> zfs source code is weird, sometimes its full tab, other times its 2 spaces
[14:52:40] <bunder> or tabs and two spaces
[14:52:45] <madwizard> bunder: zpool_main.c is all tabs
[14:52:56] <madwizard> yay
[14:52:58] <madwizard> It builds now
[14:53:14] * lblume ponders a tab-free fork.
[14:53:29] <bunder> mahrens will throw you into the pit of fire
[14:54:17] <lblume> There are some things worth dying for!
[14:54:47] <lblume> It's not altogether clear this is one of them, but still.
[14:59:21] <bunder> actually one thing i'd like to see is dropping the 80 column rule
[14:59:33] <bunder> it's not 1990 anymore, we have widescreen displays
[14:59:41] <cirdan> some do!
[14:59:56] <cirdan> my rack has a 17" 4:#
[14:59:58] <cirdan> 4:3
[15:00:18] <bunder> yes but you don't code from your rack kvm do you :P
[15:00:58] <DHE> emergencies only
[15:01:04] <cirdan> ^^^
[15:01:09] <Lalufu> 80x25 for life
[15:01:20] *** Essadon <Essadon!~Essadon@81-225-32-185-no249.tbcn.telia.com> has joined #zfsonlinux
[15:01:38] <cirdan> i think my terminal.app is 132w right now
[15:01:49] <cirdan> no 256x32
[15:04:22] <bunder> 237x57 lel
[15:04:39] <bunder> i guess my window borders are an oddball size
[15:05:29] <FireSnake> Lalufu: be cooler than the rest, go 80x24
[15:05:33] <FireSnake> :D
[15:05:36] *** Markow <Markow!~ejm@176.122.215.103> has quit IRC (Quit: Leaving)
[15:06:45] <madwizard> I'll have a coffee
[15:07:07] <bunder> mmmm coffeeeeeeee
[15:07:40] *** tnebrs <tnebrs!~barely@212.117.188.13> has joined #zfsonlinux
[15:11:39] <stefan00> zfs really is awesome. moving backups from ext4 to a gzip-9 dataset >2:1 space saving. yeah :-)
[15:11:55] <cirdan> did you try lxr?
[15:11:57] <cirdan> lx4?
[15:11:59] <cirdan> lz4
[15:12:01] <cirdan> ...
[15:12:23] <cirdan> it's a whole lot faster but may not compress as much
[15:12:46] *** gerhard7 <gerhard7!~gerhard7@ip5657ee30.direct-adsl.nl> has quit IRC (Quit: Leaving)
[15:13:22] <FireSnake> he probably cares more about space saving than speed
[15:13:23] <stefan00> …that’s why I use gzip for archive / backup datasets. speed doesn’t matter, disk space does
[15:13:48] <stefan00> and yes, all others go lz4
[15:14:29] <Lalufu> gzip-9 is pretty brutal, though
[15:15:30] *** veegee <veegee!~veegee@ipagstaticip-3d3f7614-22f3-5b69-be13-7ab4b2c585d9.sdsl.bell.ca> has quit IRC (Quit: veegee)
[15:15:34] <cirdan> and with some data it's larger than uncompressed
[15:15:46] <stefan00> yes, but >2:1 in a total mixed real life scenario is awesome. saves 2TB space in this case.
[15:15:47] <Lalufu> that's true for all compression algorithms
[15:15:52] <bunder> i only use lz4 because of the early abort, if i had to wait for every video i stored, eww
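A hedged example of the split being described, with invented pool/dataset names:
    zfs create -o compression=gzip-9 tank/backups   # archive/backup data: best ratio, slow writes
    zfs create -o compression=lz4 tank/media        # everything else: fast, early-aborts on
                                                    #   incompressible blocks such as video
    zfs get compressratio tank/backups              # check the ratio actually achieved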
[15:18:23] <lblume> madwizard: No coffee!
[15:19:13] *** tnebrs <tnebrs!~barely@212.117.188.13> has quit IRC (Ping timeout: 246 seconds)
[15:19:13] *** libertas <libertas!~libertas@a95-93-229-182.cpe.netcabo.pt> has quit IRC (Ping timeout: 245 seconds)
[15:23:07] <madwizard> pfft
[15:24:13] *** stefan00 <stefan00!~stefan00@ip9234924b.dynamic.kabel-deutschland.de> has quit IRC (Ping timeout: 245 seconds)
[15:26:17] <lblume> I want all those tabs replaced by non-breaking spaces first.
[15:27:04] <cirdan> s/ /\t/g && done
[15:30:59] *** hyper_ch2 <hyper_ch2!c105d864@openvpn/user/hyper-ch2> has quit IRC (Ping timeout: 256 seconds)
[15:35:32] <rjvb> stefan00 : isn't gzip-8 noticeably faster and not noticeably less efficient on your data?
[15:39:06] <PMT> I believe that's the general consensus.
[15:40:51] <Lalufu> the tradeoff at -9 is.... questionable
[15:42:18] <zfs> [zfsonlinux/zfs] Suggestion: update the embedded lz4 copy (#8260) comment by Rich Ercolani <https://github.com/zfsonlinux/zfs/issues/8260#issuecomment-453118721>
[15:56:21] *** gerhard7 <gerhard7!~gerhard7@ip5657ee30.direct-adsl.nl> has joined #zfsonlinux
[16:06:11] <bunder> template reeeeee
[16:19:24] <zfs> [zfsonlinux/zfs] zpool iostat should print headers when terminal fills (#8262) created by Damian Wojsław <https://github.com/zfsonlinux/zfs/issues/8262>
[16:19:25] *** tnebrs <tnebrs!~barely@212.117.188.13> has joined #zfsonlinux
[16:19:53] <madwizard> Whatever this bot is, it's discriminating against non-USA characters in names :D
[16:23:02] <zfs> [zfsonlinux/zfs] zpool iostat should print headers when terminal fills (#8262) new review comment by George Melikov <https://github.com/zfsonlinux/zfs/pull/8262#pullrequestreview-191259341>
[16:23:48] *** insane^ <insane^!~insane@fw.vispiron.de> has quit IRC (Ping timeout: 272 seconds)
[16:23:54] <zfs> [zfsonlinux/zfs] zpool iostat should print headers when terminal fills (#8262) comment by George Melikov <https://github.com/zfsonlinux/zfs/issues/8262>
[16:24:45] <zfs> [zfsonlinux/zfs] zpool iostat should print headers when terminal fills (#8262) new review comment by Damian Wojsław <https://github.com/zfsonlinux/zfs/pull/8262#discussion_r246801704>
[16:24:54] <cirdan> non-7bit
[16:28:40] <zfs> [zfsonlinux/zfs] zpool iostat should print headers when terminal fills (#8262) new review comment by Damian Wojsław <https://github.com/zfsonlinux/zfs/pull/8262#discussion_r246803289>
[16:28:51] <zfs> [zfsonlinux/zfs] zpool iostat should print headers when terminal fills (#8262) new review comment by Damian Wojsław <https://github.com/zfsonlinux/zfs/pull/8262#discussion_r246803374>
[16:29:09] <lblume> Bad bot, bad. Here, you can have a coffee to cheer you up. 🍵
[16:29:09] <PMT> bunder: what, for 8260? It's not a bug report.
[16:37:51] *** yawkat <yawkat!~yawkat@cats.coffee> has quit IRC (Ping timeout: 246 seconds)
[16:39:25] *** mquin <mquin!~mike@freenode/staff/mquin> has joined #zfsonlinux
[16:42:47] *** yawkat <yawkat!~yawkat@159.69.41.126> has joined #zfsonlinux
[16:52:28] *** tnebrs <tnebrs!~barely@212.117.188.13> has quit IRC (Ping timeout: 250 seconds)
[16:56:17] <prometheanfire> swapping on zfs doesn't seem as stable as it used to (aka, I tend to crash when I swap) or at least lock up
[16:57:01] <FinalX> I'm using it for swap just fine
[16:57:17] *** malwar3hun73r <malwar3hun73r!~malwar3hu@unaffiliated/malwar3hun73r> has left #zfsonlinux ("Leaving")
[16:57:46] <FinalX> I did crash it once, but that wasn't ZFS's fault. That was my fault for secure-erasing the NVMe disk it was on while it wasn't done putting pages back into RAM yet .. :)
[16:58:05] <FinalX> Can tell you this much: That *really* doesn't end well :P
[16:58:26] <cirdan> heh
[16:59:43] <DHE> and the pool suspended itself?
[17:00:59] <cirdan> oh man looks like I get to full erase 14 lto-5 tapes.
[17:01:17] * cirdan sighs
[17:01:34] * cirdan wonders how long each one will take. I'm almost at 2.5hrs right now...
[17:02:33] <cirdan> guess i'll have to script it up
[17:02:37] <FinalX> I honestly don't remember exactly in what order I did what (also because I've been awake since 04:00 AM and heard my company will cease to exist in the next few years). But the nvme-tool doesn't really care if a namespace is in use or not.
[17:03:04] <prometheanfire> cirdan: just burn them
[17:03:05] <cirdan> FinalX: bought out, or declining relevance, or...
[17:03:11] <cirdan> no way they are brand new prometheanfire
[17:03:13] <FinalX> Neither
[17:03:18] <cirdan> i just bought 'em
[17:03:23] <prometheanfire> ah
[17:03:32] <FinalX> But the box died before I even could tell how ZFS reacted to it :p
[17:03:55] <cirdan> but seems the drive decided it was dirty and would only write 19gb on each, so it used all 14
[17:04:33] <cirdan> tape drives get very pissy if they write something unintended... when they encounter it again they just give up so a full erase is needed
[17:04:42] <FinalX> cirdan: Our company was bought by our mother company over 20 years ago, and we're still making increasing profits.. but the mother company always had a "3 brand" strategy. Their budget brand was already assimilated before, then there's them, and us, their premium brand as a separate legal entity.
[17:05:08] <FinalX> Now because of all the competition and internet access market kinda being immovable, they want to merge everything under 1 brand (themselves)
[17:05:47] <cirdan> that happens if my lto3 drive has a tape mounted and it's rebooted... it writes an EOM and it won't write past that point anymore without a full erase :/
[17:05:51] <madwizard> We were bought by IBM.
[17:05:53] <cirdan> FinalX: so they are doing a WD
[17:05:58] <cirdan> or they are WD
[17:05:59] <cirdan> :)
[17:06:04] <prometheanfire> FinalX: lol
[17:06:05] <FinalX> pretty much, yeah..
[17:06:18] <FinalX> they're not actual WD, but it's a good comparison ;)
[17:06:24] <cirdan> :)
[17:06:28] <FinalX> and we're like HGST in this picture
[17:06:32] * cirdan mourns hgst
[17:06:36] * prometheanfire needs a laptop with 32G to build chromium now
[17:06:37] <FinalX> pretty good, shitty period, really good
[17:06:38] <cirdan> and possibly soon sandisk
[17:07:03] <prometheanfire> ya... hgst is their good brand still right?
[17:07:04] <cirdan> i don't mind if sandisk ssds are renamed wd, but leave the thumb drive/sd cards alone
[17:07:08] <cirdan> no it's gone
[17:07:13] <FinalX> prometheanfire: they absorbed HGST into WD
[17:07:21] <FinalX> so, kinda precisely like us now
[17:07:24] <prometheanfire> I know they were bought, but don't know if they were changed
[17:07:28] <cirdan> hgst is now wd red or wd ultrastar
[17:07:32] <cirdan> wd gold is gone as well
[17:07:49] <cirdan> prometheanfire: the website just redirects to wd.com since summer
[17:08:04] <cirdan> err wd red *pro*
[17:08:11] <cirdan> not normal reds
[17:08:26] <FinalX> KPN is our mother company, we're XS4ALL, one of Europe's first consumer ISPs (first in the Netherlands). We're known for security and privacy... they're known for being not really good at either. So our customers are now calling, mailing, tweeting and fb'ing us en-masse and taking to comments on various sites to express their disbelief and outrage..
[17:08:34] <cirdan> oh i know xs4all!
[17:08:45] <cirdan> they used to give us free hosting for the Fink project
[17:08:48] <prometheanfire> I'm thinking of https://www.amazon.com/HGST-Ultrastar-HUH721212ALE601-Encryption-Enterprise/dp/B079SJCCWZ
[17:09:14] <FinalX> we still host a lot of sponsored stuff, like things for Amnesty International.. we used to host Doctors Without Borders, FreeBSD, Debian etc as well
[17:09:15] <prometheanfire> ya, I know them as well, for other reasons...
[17:09:29] <cirdan> :)
[17:10:04] <cirdan> https://nascompares.com/2018/08/01/wd-gold-is-end-of-life-alternative-hgst-ultrastar-dc-drive-list-here/
[17:10:20] <cirdan> but they aren't calling it hgst it's WD UltraStar DC
[17:10:40] <prometheanfire> ah, rebrand
[17:10:41] <cirdan> it's very odd they can't read the labels on the pictures they put with the article :)
[17:10:45] <FinalX> and Deskstar NAS is now WD Red Pro
[17:10:48] <prometheanfire> I guess that's their top brand now?
[17:10:53] <FinalX> with a heightened price
[17:10:56] <prometheanfire> ofc
[17:11:14] <cirdan> there's always toshiba, seagate ironwolf... :)
[17:11:22] <cirdan> my toshiba has been good so far
[17:11:23] <prometheanfire> he is worth it for lower power cost (unless the competition has improved)
[17:11:32] <prometheanfire> toshiba seems to be good too
[17:11:41] <cirdan> the main downside about toshiba i hear is the rma
[17:11:59] <cirdan> sometimes it takes forever, and then they refund your purchase price, so if you got it for a steal...
[17:11:59] <FinalX> honestly I'm surprised with how long my Seagate Archive HDD 8TB's have lasted so far. I gave them a year, they had warranty for 3 and it's been way longer still.
[17:12:01] <Lalufu> FinalX: you're with xs4all?
[17:12:05] <FinalX> yes, Lalufu
[17:12:24] <FinalX> sr. sysadmin here (working here since 2001, since I was 18 :P)
[17:12:42] <cirdan> i need some remote sysadmin work :)
[17:12:59] <FinalX> I'm hoping they'll keep existing till 17-06-2019 .. at that point I will be working there for longer than I was old when I started there.
[17:13:00] *** chasmo77 <chasmo77!~chas77@158.183-62-69.ftth.swbr.surewest.net> has joined #zfsonlinux
[17:13:04] <Lalufu> I approve of xs4all, but the internet speed I could get from you is not sufficient, so I'm somewhere else
[17:13:04] <FinalX> eh
[17:13:06] <Lalufu> sad as it is
[17:13:07] <FinalX> 17-06-2020
[17:13:18] <cirdan> thre's no 17th month!
[17:13:20] <cirdan> ;-)
[17:13:25] <FinalX> well, KPN is finally picking up rolling out fiber again
[17:13:36] <Lalufu> I'll take that in a heartbeat
[17:14:19] * prometheanfire can't wait for google fiber (wondering what size prefix they hand out)
[17:14:19] <cirdan> "this is because the WD Red Hard Drives have a significantly higher market recognition"
[17:14:28] <cirdan> really? cause I avoid that when i can
[17:14:35] <Lalufu> google fiber is still a thing?
[17:14:39] <FinalX> they're rolling it out faster and faster now.. there's quite some pressure on it. so who knows.
[17:15:09] <FinalX> Lalufu: you can check https://xs4all-fpi-info.fourstack.nl/addresses/search to see if any upgrades are planned in the near future
[17:15:16] <prometheanfire> Lalufu: they are rolling out in my area
[17:15:24] <prometheanfire> buried the fiber a month or so ago
[17:15:30] <FinalX> for me it used to be 12mbit down, 768kbit up.... I know have 500mbit down, 800mbit up
[17:15:37] <FinalX> now*
[17:16:14] <Lalufu> Meh, nothing in the next two months
[17:16:20] <prometheanfire> I can't find the DC drives on amazon :|
[17:16:22] <Lalufu> which is a pretty narrow horizon, though
[17:16:49] <prometheanfire> rebranded as gold?
[17:17:17] <FinalX> Lalufu: yeah, it used to state "next 6", but that's usually less accurate. if you want you can PM me your postal code + number and I can ask the guy who receives all the planning if there's something planned a bit further along btw
[17:17:37] <Shinigami-Sama> prometheanfire: yes, apparently gold is the ultrastar
[17:17:39] <zfs> [zfsonlinux/zfs] Eliminate ZTHR races by serializing ZTHR operations. (#8229) new review comment by Tom Caputi <https://github.com/zfsonlinux/zfs/pull/8229#discussion_r246823988>
[17:17:39] <FinalX> just in case you're interested, obv
[17:17:49] <prometheanfire> Shinigami-Sama: ah, thanks
[17:18:10] <Shinigami-Sama> and red pro are hgst
[17:18:49] <FinalX> well, the wd reds are kinda hgst builds as well, just lower rpm and slightly different firmware iirc
[17:19:00] <FinalX> the non-pro, i mean (already before hgst got absorbed)
[17:19:44] <Shinigami-Sama> no, reds are still WD NAS units, the Redpros are from hitachi, they're just rebranded hgst
[17:19:47] <FinalX> personally I'm gonna wait with buying new big disks tbh
[17:19:52] <prometheanfire> ya, don't think they support tler
[17:19:57] <prometheanfire> does zfs even use tler?
[17:19:58] <Shinigami-Sama> red pro do
[17:20:20] <Shinigami-Sama> we sell them at work for our backup NASes
[17:20:26] <FinalX> there's big shifts coming in storage this year by the looks of it, so I'm holding off with buying new stuff
[17:20:35] <prometheanfire> FinalX++
[17:21:17] <FinalX> and I'm liking Intel's idea of combining their Optane with QLC
[17:21:21] <prometheanfire> wd gold has the lowest power usage
[17:21:21] <Shinigami-Sama> FinalX: yeah, I just bought a second 6TB wd red so I can have some redundancy, but I"d like to get a few 8TB+ and a couple 320GB+ SSDs to use MAC
[17:22:08] <FinalX> For me personally, my redundancy is Google Drive.. no local redundancy anymore. Not saying it's a good idea per se or for everyone, but hey :)
[17:22:36] <FinalX> Had someone at work who wanted to spend €5000 on a home NAS, but now he just bought a little one for like 10% of that and went for Drive for other stuff :p
[17:23:01] <Shinigami-Sama> 5k? why in the world...
[17:23:44] <FinalX> yeah, I had that same reaction
[17:24:39] <prometheanfire> I was going to rebuild for close to that, just the drives alone are expensive, but I can probable get away with rebuilding my array slowly
[17:24:58] <prometheanfire> home nas does stuff for my home openstack stuff too though, so not quite normal
[17:25:41] <FinalX> well, no redundancy is not entirely true. I have a stripe of the 3x8TB SMR's, and a mirror of a 4+6TB HGST with the spare 2TB as a seperate pool... and a raidz1 of 3x1TB old shucked USB-disks I had laying around. Then a mirror of my 128GB+240GB SSDs (same principle, spare space seperate pool).. and next week there'll be a mirror of 2x512GB NVMe SSDs, too.
[17:25:42] <zfs> [zfsonlinux/zfs] Eliminate ZTHR races by serializing ZTHR operations. (#8229) new review comment by Tom Caputi <https://github.com/zfsonlinux/zfs/pull/8229#discussion_r246827090>
[17:26:52] <FinalX> I currently have 140TB on Google Drive for €9,68/month and they also take care of their own redundancy there.. so :P
[17:27:16] <cirdan> yeah they even scan and catalog your shit :)
[17:27:27] <FinalX> if you let them, yes :)
[17:27:32] <FinalX> rclone with on-the-fly encryption++
[17:27:32] <cirdan> you encrypt it all?
[17:27:36] <cirdan> nice
[17:27:39] <cirdan> I thought about that
[17:27:39] <Lalufu> and inform the authorities if they think you should not have something
[17:27:55] <cirdan> pictures of kids in bathtub? child porn alert!
[17:27:58] <FinalX> it's not all encrypted, but stuff I don't want them poking into is.. heavily
[17:28:09] <FinalX> but honestly, even DMCA'd material they don't give a crap about
[17:28:19] <cirdan> i guess that's not a bad idea for $10/mo
[17:28:22] <FinalX> I know people that have 200TB of unencrypted bluray remuxes on there
[17:28:42] <cirdan> FinalX: well the tricky thing is if you own the disc it's legal
[17:28:48] <cirdan> as long as you aren't sharing
[17:29:18] <FinalX> you know when they suddenly care? if you hit the "Share..." function. if something is DMCA'd you'll get pop-up saying you can't share it because of DMCA. If you were "Share"ing content that later got DMCA'd however, your account does get flagged.
[17:29:46] <FinalX> Amazon deleted accounts for having more encrypted content than indexable content.
[17:29:51] <cirdan> I thought about trying to spin up a windows vm just for the $5/month backblaze
[17:30:02] <cirdan> FinalX: fuck them then
[17:30:10] <Shinigami-Sama> I can't sanely get that price for drive because I'm under gsuite
[17:30:28] <Shinigami-Sama> thats like 50$CAD/m
[17:30:34] <FinalX> Google doesn't care as long as you don't share the content with others.. if it's not shared it's just like it being on your own disk (that's how they say they see it)
[17:30:46] <cirdan> but encrypting data for backblaze i think is a big hassle
[17:30:48] <FinalX> Shinigami-Sama: yes you can..
[17:31:01] <FinalX> Shinigami-Sama: just sign up as a "single employee company" through https://gsuite.google.com/
[17:31:14] <FinalX> Shinigami-Sama: it'll tell you that you need 5 accounts, but that's not enforced. just 1 will do.
[17:31:44] <FinalX> you can just sign up for a new gsuite environment if you're already using it. keep that one on the side.
[17:31:56] <FinalX> that's what I do, and what other coworkers do, too
[17:32:14] <cirdan> you need gsuite for those prices?
[17:32:36] <cirdan> looks like personal is 2tb for $9.99
[17:32:38] <FinalX> Go to https://gsuite.google.com/ and sign up with a random domain, sign up as a single-user company, put cc, and you even get a 14 day trial with no obligations
[17:32:41] <FinalX> yeah, gsuite
[17:32:42] <prometheanfire> I still have the free account
[17:32:50] <FinalX> me too, on my other gsuite, prometheanfire
[17:33:11] <Shinigami-Sama> I have 2 free gsuites still
[17:33:14] <FinalX> my family's mail is on my old free gsuite, so I made a new one with just 1 account .. just for the unlimited storage
[17:33:23] <cirdan> huh.
[17:33:49] <cirdan> partly makes me want to sell all my tape drive stuff
[17:34:00] <cirdan> I still don't trtust online as only backup
[17:34:02] <Shinigami-Sama> I can upgrade to business, but they stopped showing me the price, last I saw a few months ago was about 50$/user/month
[17:34:15] <FinalX> don't upgrade man
[17:34:21] <Shinigami-Sama> ofc
[17:34:23] <FinalX> because if you upgrade you start paying per account
[17:34:46] <FinalX> ..just sign up for a new gsuite, and only do 1 user.. that works :P plus then you keep it seperate from your existing gsuite stuff too
[17:35:03] <cirdan> yeah it says unlimited cloud storage or 1tb per user for under 5 users
[17:35:11] <prometheanfire> once I get faster upload that could make sense
[17:35:25] <cirdan> they could easily make your shit ro until you get under quota
[17:35:54] <Shinigami-Sama> prometheanfire: yeah, I"ve only got ~7up right now
[17:36:36] <prometheanfire> cirdan: yep, that's the other reason not to trust it
[17:36:51] <FinalX> cirdan: that 1tb/user under 5 isn't enforced. you get unlimited, even with 1
[17:37:01] <prometheanfire> I'd probably just have a few TB of system backups though
[17:37:05] <cirdan> prometheanfire: but I could upload all my media there and let family stream from it with plex...
[17:37:14] <FinalX> you can upload 1TB/day max, and download 5TB/day max, btw.
[17:37:32] <cirdan> FinalX: the key is "isn't enforced *ATM*"
[17:37:34] <FinalX> use rclone for syncing/uploading; google limits it to 250mbit/stream, so rclone defaults to 4 simultaneous uploads
[17:37:41] <cirdan> lol nice
[17:38:14] <cirdan> sounds like I should take drives to my buddy's house he has 500/500 i have 1000/43
[17:38:20] <FinalX> 1TB/day max = 100mbit for 24hr straight; so if you're gonna upload a lot, rclone --bwlimit=12M will avoid hitting the limit, obv.
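A sketch of that kind of invocation; the remote name gdrive: is invented and the limits quoted are Google's at the time:
    rclone sync /tank/backups gdrive:backups --transfers 4 --bwlimit 12M
    # --transfers 4 : rclone's default; each upload stream is capped at roughly 250 Mbit/s
    # --bwlimit 12M : ~12 MB/s total, i.e. about 1 TB/day, staying under the daily upload quota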
[17:38:21] <cirdan> fucking comcast POS....
[17:38:52] <prometheanfire> I'd want to do zfs incremental sends, though at ~1000 up could do full
[17:38:55] <FinalX> well even if they'd enforce it, they'd give you some time, and if they do enforce it, I might actually spend the 5x€8 + VAT
[17:39:03] <FinalX> I mean, even at €50, 140TB+ is a steal
[17:39:07] <cirdan> yeah
[17:39:21] <cirdan> I only have like 45tb used
[17:39:54] <FinalX> thing with unlimited storage and gbit speeds is that you kinda stop caring about the size of things you download and put there, and stop caring about deleting old backups and stuff
[17:40:19] <FinalX> at some point I was syncing a *full* copy of every lxc-container dataset to it because it was easier than sending incremental snapshots :p
[17:40:21] <cirdan> I already do that :)
[17:40:39] <cirdan> the stop caring part
[17:40:45] <cirdan> I just get a drive on sale
[17:41:12] <Shinigami-Sama> cirdan: drive limits your download after ~100GB/day down to just 10mbps
[17:41:13] <prometheanfire> cirdan: yep, I can fit a few monthly incrementals on a 12T drive and just rotate them out
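A hedged sketch of that monthly full-plus-incremental rotation; the pool, backup target and snapshot names are invented, and the target could just as well be a file on the 12T drive:
    zfs snapshot -r tank@2019-01
    zfs send -R tank@2019-01 | zfs recv -F backup/tank                 # first month: full send
    zfs snapshot -r tank@2019-02
    zfs send -R -i tank@2019-01 tank@2019-02 | zfs recv backup/tank    # later months: delta only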
[17:41:29] <Shinigami-Sama> someone at work runs that same plex setup
[17:41:41] <FinalX> Shinigami-Sama: ? it doesn't not for me, anyway
[17:41:45] <Shinigami-Sama> and its something like 50k/day API calls and they'll heavily throttle
[17:42:06] <Shinigami-Sama> he has a stupid big plex setup though
[17:42:21] <FinalX> btw if you're gonna use rclone with it, and you want to mount it locally for a lot of read-only things (hi Plex :P), take a look at plexdrive. It caches the index of the filesystem extremely efficiently. Using a straight rclone mount is far less efficient. Their caching method sucks.
[17:42:24] <cirdan> Shinigami-Sama: i have 18tb in plex so tiny :)
[17:42:36] <FinalX> it's the API call limit that he hits, then.
[17:42:47] <FinalX> I have a stupidly big Plex setup, too.
[17:43:08] <FinalX> If he's hitting API call limits, it's usually because of not using plexdrive or something for the index caching.
[17:43:13] <cirdan> i'm happy with my personal nexflix everytime something gets removed and I go to watch it
[17:43:54] <cirdan> plex can do straight google drive now though
[17:44:10] <FinalX> not anymore, they're removing cloud support. plus that requires it to be unencrypted.
[17:44:18] <cirdan> does plexdrive work with encrypted files?
[17:44:20] <FinalX> plus that also triggered the API limits
[17:44:29] <cirdan> oh really? i guess google had a bitch fit :)
[17:44:36] <FinalX> no, you let plexdrive do the full mount, then use rclone to do the decryption
[17:44:42] <cirdan> oh
[17:44:46] <cirdan> interesting
[17:44:53] <cirdan> does that work on windows as well?
[17:45:23] <FinalX> so my Drive is mounted by plexdrive on /mnt/plexdrive. rclone then has a local decryption mount from /mnt/plexdrive => /google. And I then combine my 3x8TB stripe on /local with unionfs, so /local and /google together form /data. Plex points to /data. Win.
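A hedged reconstruction of that layering, using the mount points named in the chat; the crypt remote name and the exact plexdrive/rclone/unionfs flags are assumptions:
    plexdrive mount /mnt/plexdrive &                  # raw (encrypted) Drive contents, with plexdrive's cached index
    rclone mount gcrypt: /google --allow-other &      # gcrypt: a crypt remote whose backend is /mnt/plexdrive
    unionfs-fuse -o cow /local=RW:/google=RO /data    # local stripe overlaid on the cloud copy; Plex points at /data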
[17:45:30] <cirdan> I could pay $10/mo and give family ro access
[17:45:52] <FinalX> not sure, not a Windows user. But Windows has the Google Stream application that mounts it, too.
[17:46:06] <FinalX> or well, not a Windows user where I use my Drive and Plex..
[17:47:11] <cirdan> https://github.com/dweidenfeld/plexdrive/issues/291
[17:47:39] <FinalX> Only issue I'm having with Plex streaming straight from the Google mount is that it takes up to 20s to start a big file (bluray remuxes, for instance). Because streaming from Drive starts out quite slow, and then picks up speed a little while after to 250mbit.
[17:48:00] <FinalX> ah, that sucks
[17:48:22] <cirdan> oh wtf they are removing plugins!?!?!
[17:48:33] <FinalX> Thing is, rclone caches a readdir() for X time and then expunges the entire dir from the cache. Next time it'll read it again fully and then expire it again after X time. Where X is usually very long.
[17:48:35] <FinalX> yes
[17:48:36] <cirdan> f that
[17:48:54] <cirdan> I like the few plugins i use...
[17:49:02] <FinalX> But plexdrive's cache is different. They index the entire remote *once*, and *only* once. Then they fetch *changes* from Drive.
[17:49:15] <FinalX> so the cache is always 100% accurate, but never really taxing on your API limits
[17:49:32] <FinalX> whereas if you have a lot of dirs and you're using rclone, it'll eat through your API call limits like mad
[17:49:39] <cirdan> wtf did I buy a lifetime pass if they are cutting features or not fixing bugs that I care about :-/
[17:49:40] *** tnebrs <tnebrs!~barely@212.117.188.100> has joined #zfsonlinux
[17:49:58] * FinalX hides
[17:50:07] * FinalX got 3 lifetime accounts for free
[17:50:13] <cirdan> how?
[17:50:16] <cirdan> know someone?
[17:51:12] <FinalX> we host the European servers for plexapp.com
[17:51:18] <cirdan> I really like the plugin i discovered for syncing with trakt.tv
[17:51:20] <cirdan> nice
[17:51:21] <FinalX> and my coworker goes to Burning Man with Plex's CEO :P
[17:51:28] <FinalX> brb, dog puking
[17:51:33] <cirdan> of course plex goes to burning man...
[17:51:34] <cirdan> heh
[17:51:55] <zfs> [zfsonlinux/zfs] Eliminate ZTHR races by serializing ZTHR operations. (#8229) new review comment by Serapheim Dimitropoulos <https://github.com/zfsonlinux/zfs/pull/8229#discussion_r246838107>
[17:52:01] <cirdan> anyway I've had my library's watched status reset too often when migrating stuff
[17:52:41] <prometheanfire> ya, hate that
[17:53:10] <cirdan> also, I can see how many times I play certain things and all
[17:53:22] <cirdan> 74 days, 16 hours, 8 mins watching
[17:53:23] <cirdan> 594 movies (1,074 plays)
[17:53:24] <cirdan> :)
[17:53:30] <cirdan> not bad in 11 months
[17:53:55] <PMT> prometheanfire: I believe it's been the case that swap on zvol is "we fixed all the immediate fires but there are slow burning fires people haven't spent the time to run down yet"
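For context, the zvol swap setup usually suggested in the ZFS on Linux docs looks roughly like this; the pool name rpool and the 4G size are placeholders:
    zfs create -V 4G -b $(getconf PAGESIZE) \
        -o logbias=throughput -o sync=always \
        -o primarycache=metadata -o com.sun:auto-snapshot=false rpool/swap
    mkswap -f /dev/zvol/rpool/swap
    swapon /dev/zvol/rpool/swap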
[17:54:05] <cirdan> also there's a plugin you can copy playlists to/from other accounts which is great for family stuff
[17:54:09] *** ghfields <ghfields!~garrett@128.164.34.25> has quit IRC (Ping timeout: 246 seconds)
[17:54:25] <prometheanfire> PMT: yep
[17:54:51] <prometheanfire> cirdan: I manually coppied the DB (or at least set those DB fields to watched)
[17:55:04] <prometheanfire> something like that
[17:55:14] <prometheanfire> maybe I just changed the path in the DV
[17:55:15] <cirdan> with trakt you can just import :)
[17:55:18] <prometheanfire> s/DV/DB
[17:55:47] * prometheanfire shrugs and continues to use kodi
[17:56:07] <cirdan> i guess if I switched to infuse full time for watching it would work...
[17:56:07] <cirdan> kodi less useful on appletv
[17:56:12] <prometheanfire> ya
[17:56:23] <prometheanfire> using a fitlet here
[17:58:18] <cirdan> +3 hours on this tape erase, pretty sure it's fine. ugh.
[17:58:24] <cirdan> going to be days of erasing
[17:59:18] <zfs> [zfsonlinux/zfs] Eliminate ZTHR races by serializing ZTHR operations. (#8229) new review comment by Serapheim Dimitropoulos <https://github.com/zfsonlinux/zfs/pull/8229#discussion_r246841007>
[18:00:10] *** ghfields <ghfields!~garrett@128.164.34.25> has joined #zfsonlinux
[18:00:17] <FinalX> cirdan: I'm told there's Trakt for that :P but yeah, it's kinda annoying
[18:05:00] *** tnebrs <tnebrs!~barely@212.117.188.100> has quit IRC (Ping timeout: 246 seconds)
[18:05:31] *** kaipee <kaipee!~kaipee@81.128.200.210> has quit IRC (Read error: Connection reset by peer)
[18:05:57] <cirdan> FinalX: yeah but not if they dump plugins
[18:07:04] <zfs> [zfsonlinux/zfs] Eliminate ZTHR races by serializing ZTHR operations. (#8229) new review comment by Tom Caputi <https://github.com/zfsonlinux/zfs/pull/8229#discussion_r246843929>
[18:07:26] *** f_g <f_g!~f_g@213-47-131-124.cable.dynamic.surfer.at> has quit IRC (Ping timeout: 250 seconds)
[18:18:58] <zfs> [zfsonlinux/zfs] Eliminate ZTHR races by serializing ZTHR operations. (#8229) new review comment by Tom Caputi <https://github.com/zfsonlinux/zfs/pull/8229#discussion_r246848048>
[18:21:03] *** f_g <f_g!~f_g@213-47-131-124.cable.dynamic.surfer.at> has joined #zfsonlinux
[18:26:47] <MilkmanDan> Hmmm. Is ZTHR crypto? /me rolls dice to find out.
[18:30:22] <zfs> [zfsonlinux/zfs] task txg_sync: blocked for more than 120 seconds (#4361) comment by Fabrice Bacchella <https://github.com/zfsonlinux/zfs/issues/4361#issuecomment-453183034>
[18:39:12] *** rich0 <rich0!~quassel@gentoo/developer/rich0> has quit IRC (Quit: rich0)
[18:44:35] *** rich0 <rich0!~quassel@gentoo/developer/rich0> has joined #zfsonlinux
[18:51:48] <ptx0> ok uhm
[18:51:49] <ptx0> so
[18:51:51] <ptx0> still confused
[18:51:54] <ptx0> how do my pcie ports work now
[18:52:28] <ptx0> special - - - - - -
[18:52:28] <ptx0> nvme0n1p2 14.6G 169G 0 0 0 0
[18:52:43] <ptx0> that's the broken x79 board with nvme and 10gb working o.o
[18:55:27] <Shinigami-Sama> its working now?
[18:56:35] <Lalufu> what's broken about it?
[19:07:29] <zfs> [zfsonlinux/zfs] Make zpool status counters match err events count (#7817) comment by Tony Hutter <https://github.com/zfsonlinux/zfs/issues/7817#issuecomment-453195375>
[19:07:48] <Shinigami-Sama> Lalufu: he somehow killed his PCIe lanes
[19:07:59] <Shinigami-Sama> and then somehow necromanced them back?
[19:08:15] <Lalufu> Funky
[19:08:29] <zfs> [zfsonlinux/zfs] Eliminate ZTHR races by serializing ZTHR operations. (#8229) new review comment by Serapheim Dimitropoulos <https://github.com/zfsonlinux/zfs/pull/8229#discussion_r246864801>
[19:08:32] <Shinigami-Sama> thats him in a nutshell
[19:11:51] *** tnebrs <tnebrs!~barely@212.117.188.13> has joined #zfsonlinux
[19:11:56] *** zfs sets mode: +b *!*@212.117.188.13$#zfsonlinux-quarantine
[19:13:20] <ptx0> last year i tried plugging in a 1x to 16x pcie riser and the system didn't wanna turn on, when i removed it, it did, but then the pcie ports stopped working
[19:13:52] <ptx0> had to switch to pcie 1x gpu otherwise it wouldn't boot, the primary 16x slot wouldn't work but the 2ndary 16x one did, so i put the 10gb nic there
[19:14:43] <ptx0> yesterday i finally got a chinese replacement x79 (KM1D-X79+ V2.0) and it just wouldn't work at all, system turns on but shuts off a few seconds later, no beep, no POST, happens with ECC or non-ECC RAM
[19:14:51] <FinalX> note to self: maybe not use that x1=>x16 i got on aliexpress
[19:15:14] <ptx0> so i decided to try the DX79TO again and see if i can't get the broken pcie slots to work
[19:15:24] <ptx0> i was pretty surprised that the NVMe device was showing up wherever i plugged it in
[19:20:52] <ptx0> kinda bittersweet because the replacement motherboard is already shipped
[19:21:05] <ptx0> but it has 3x 16x slots
[19:21:22] <Shinigami-Sama> but are they 48 lanes total?
[19:21:29] <ptx0> no, one is 8x
[19:21:40] <ptx0> and shares lanes with a m.2 device or something
[19:22:50] *** tnebrs <tnebrs!~barely@212.117.188.13> has quit IRC (Ping timeout: 250 seconds)
[19:36:38] <Shinigami-Sama> its so hard to get lots of pcie lanes it seems
[19:38:33] <BtbN> on consumer hardware
[19:38:45] <BtbN> Server hardware has a bunch
[19:43:48] *** Slashman <Slashman!~Slash@cosium-152-18.fib.nerim.net> has quit IRC (Remote host closed the connection)
[19:56:08] *** gchristensen is now known as c
[19:56:14] *** c is now known as gchristensen
[19:57:12] <ptx0> my threadripper does pretty well
[19:57:48] <ptx0> got two GPUs at 16x, 10gb nic at 4x, audigy zs pcie 1x card, two NVMe devices (8x)
[19:58:01] <ptx0> each one runs at full link speed
[19:58:03] <BtbN> Well, TR is pretty much server hardware
[19:58:08] <ptx0> it's really not
[19:58:16] <ptx0> you can tell by how shitty the boards are
[19:58:33] <BtbN> Epyc boards are also... not great
[19:58:49] <ptx0> there are better ones available, at least.
[19:58:53] <ptx0> they are better designed
[19:59:19] <cirdan> the chips are pretty much server hardware at least, with all the lanes available :)
[19:59:35] <ptx0> cirdan: no, it is workstation hw
[19:59:55] <ptx0> server chips have transparent virtualization encryption thing, among others
[20:00:07] <ptx0> plus 8 NUMA zones instead of 2
[20:00:24] <zfs> [zfsonlinux/zfs] zfs should optionally send holds (#7513) comment by Paul Zuchowski <https://github.com/zfsonlinux/zfs/issues/7513#issuecomment-453212988>
[20:00:26] <ptx0> xeon chips were sold in workstations and have 40 pcie lanes
[20:04:20] <ptx0> also can't run two TR chips on one board
[20:04:20] <ptx0> :P
[20:04:52] <cirdan> no?
[20:05:03] *** Shinigami-Sama <Shinigami-Sama!~xero@unaffiliated/setsuna-xero> has quit IRC (Ping timeout: 246 seconds)
[20:07:10] <ptx0> TR4 is UP only
[20:07:45] <ptx0> EPYC supports both and has SKUs similar to xeon with the 1xxx series being uniproc etc (for xeon anyway)
[20:08:06] <ptx0> 7301p for epyc is uniproc, i think that is what the P means
[20:08:20] <ptx0> you can get them $1,000 less that way
[20:08:56] <ptx0> so uniproc epyc starting cost is about the same as highest level threadripper
[20:09:13] *** simukis <simukis!~simukis_@78-63-88-48.static.zebra.lt> has left #zfsonlinux
[20:09:32] <ptx0> probably more if you need all memory channels populated because DDR4 = $$$
[20:10:04] <ptx0> think it has 8 channels so you'll need to spend at least 8 DIMMs worth :P
[20:10:58] <DHE> yes Epyc is 8-channel memory
[20:11:31] <DHE> I'd be interested to know if there's an optimal configuration if you have only 4 DIMMs where each module goes to a different Zen core
[20:12:52] <ptx0> it handles it internally but you can set it to die-interleave
[20:12:58] <ptx0> =]
[20:13:15] <zfs> [zfsonlinux/zfs] Linux 5.0: error: invalid operands to binary << (#8263) created by Tony Hutter <https://github.com/zfsonlinux/zfs/issues/8263>
[20:13:21] *** simukis <simukis!~simukis_@78-63-88-48.static.zebra.lt> has joined #zfsonlinux
[20:13:30] <ptx0> "face melting performance" as it has been described (vs classic interleave)
[20:14:20] <BtbN> I hope we can buy an Epyc 2 cluster this year
[20:14:39] <ptx0> i was gonna say "not til 2019" but
[20:15:10] <DHE> well, there's a lot of year left to happen
[20:15:21] <BtbN> it's more a matter if us getting the money for it gets delayed enough for Epyc 2 to be available
[20:17:50] <ptx0> or it?
[20:18:10] <BtbN> What?
[20:18:45] <ptx0> your "for" seems incorrect but i can't quite figure out what should be there
[20:19:00] <BtbN> What about money for a cluster is incorrect?
[20:19:17] <ptx0> i don't know
[20:19:28] <ptx0> i'm uh, it's morning here
[20:19:34] <BtbN> We applied for funding, it will arrive at some point this year. And we have to order pretty much when we get it
[20:19:36] * ptx0 slaps himself awake
[20:19:43] <ptx0> i re-read it now and it makes more sense
[20:19:52] <ptx0> english tho, what a hell of a language
[20:19:53] <BtbN> I wrote "if" instead of "of"
[20:20:05] <ptx0> oh
[20:20:22] <ptx0> well the whole thing makes less sense again
[20:20:27] <ptx0> but i understand anyway
[20:20:30] <ptx0> don't worry :P
[20:21:07] <BtbN> Companied get surprisingly friendly all of a sudden when they realize you actually want to spend 7 figured on hardware
[20:21:10] <BtbN> *Companies
[20:21:18] *** elxa <elxa!~elxa@2a01:5c0:e08e:f2e1:753:a29f:db79:e5f6> has joined #zfsonlinux
[20:21:56] * ptx0 ships BtbN a new keyboard
[20:22:22] <gchristensen> BtbN: on *their* hardware certainly. I've found when I want a company to spend 7figs they're not thrilled
[20:23:18] <BtbN> Apparently Intel gives you quite a discount when they notice you want to go for AMD
[20:23:26] <cirdan> :)
[20:23:44] <BtbN> So it'll be interesting
[20:26:11] *** simukis <simukis!~simukis_@78-63-88-48.static.zebra.lt> has left #zfsonlinux
[20:27:15] *** simukis <simukis!~simukis_@78-63-88-48.static.zebra.lt> has joined #zfsonlinux
[20:27:45] <DHE> I'm interested in AMD chips, but for what I need right now they're all overkill...
[20:28:06] <ptx0> that's the perfect amount of kill
[20:28:10] <gchristensen> ARM?
[20:28:19] <DHE> gchristensen: well, that's way underkill
[20:28:28] <cirdan> depends
[20:28:46] <cirdan> iirc apple has the fastest chips/Watt and they are arm and not slow at all
[20:28:47] <DHE> honestly if I didn't need the network throughput I'd be happy with making virtual machines. but I'll need to saturate some 10gig I think...
[20:28:49] <gchristensen> I dunno, these here ampere chips are like 3.5ghz
[20:28:59] *** simukis <simukis!~simukis_@78-63-88-48.static.zebra.lt> has left #zfsonlinux
[20:29:56] <prometheanfire> the new ryzen stuff looks nice
[20:30:21] <prometheanfire> 3700 should be 8/16 at 80Wish iirc
[20:30:26] <BtbN> The unfortunate thing for AMD is that Intel will get their 10 and 7 nm stuff together eventually
[20:30:32] <gchristensen> also when you can get ~100 cores, I don't mind the 2ghz clock speed so much
[20:31:32] *** simukis <simukis!~simukis_@78-63-88-48.static.zebra.lt> has joined #zfsonlinux
[20:32:28] <BtbN> My mission will soon be to get as many cores into 43U as I possibly can. AMD is actually winning there by a huge margin already
[20:34:36] <gchristensen> at that metric, you won't beat ARM
[20:35:02] <BtbN> Actually fast cores
[20:35:12] <gchristensen> how fast is fast?
[20:35:30] <prawn> Can confirm, our 2x 32c64t 1U epyc server is absolutely adorable
[20:36:54] <BtbN> 64 cores from Intel would need a 4 socket server, and 4 ridiculously expensive CPUs
[20:37:31] <zfs> [zfsonlinux/zfs] zfs filesystem skipped by df -h (#8254) comment by Tom Caputi <https://github.com/zfsonlinux/zfs/issues/8254>
[20:37:45] <BtbN> Right now two Eypc 7601 are absolutely amazing. Intel has nothing that compares
[20:37:53] <gchristensen> how about 32 3.0GHz (3.3GHz w/ turbo at 125W) cores in 2U
[20:37:56] <prawn> I thought they have at least 24c CPUs nowadays albeit ridiculously priced
[20:38:20] <BtbN> Not in the variants for 4 sockets iirc
[20:38:24] <BtbN> Only for dual socket
[20:40:52] <gchristensen> only 2.2ghz base clock rate, which is about the same as the 96-core ARM 1/2-width 1U system and less than the 3.0ghz ARM system w/ 32 cores which can be had for much less than the amd machine, but so be it
[20:41:35] <BtbN> You're not gonna get a proper HPC cluster going with non-x86
[20:42:05] <gchristensen> I mean, I think you're wrong (power9 for example), but okay
[20:42:27] <BtbN> There are too many libraries and proprietary components that come as blobs
[20:43:51] <DHE> if it's cores per rack, does AMD have blade servers available?
[20:44:10] <BtbN> Dual-socket 1U servers from SuperMicro are the densest thing I can find
[20:44:19] <BtbN> Need to cool those CPUs somehow
[20:44:53] <gchristensen> Sandia National Labs already has a "proper" HPC cluster running on Cavium ThunderX2s, in position 204 of the TOP500
[20:45:02] <prawn> The Supermicro 2123BT-HNC0R fits 4 dual socket modules on 2U
[20:45:15] <DHE> hmm.. supermicro's blades are limited to 135W CPUs, which is a bit shy of the e5-2600v4 limit of 145W
[20:45:16] <prawn> That's 8 sockets on 4U
[20:45:38] <prawn> *2U, derp
[20:45:51] <BtbN> Sure you can benchmark them nicely, but they are useless for a lot of applications
[20:46:52] <BtbN> prawn, the problem with them is the very limited expandability
[20:47:21] <ptx0> how many people have to tell FireSnake that their idea is bad for them to understand this
[20:47:28] <ptx0> #8248
[20:47:30] <zfs> [zfsonlinux/zfs] zpool iostat should print headers when terminal fills (#8262) new review comment by Brian Behlendorf <https://github.com/zfsonlinux/zfs/pull/8262#pullrequestreview-191372598>
[20:47:35] <zfs> [zfs] #8248 - Feature request: incremental scrub <https://github.com/zfsonlinux/zfs/issues/8248>
[20:48:01] <DHE> there's https://www.supermicro.com/products/system/2u/2029/SYS-2029TP-HTR.cfm which is 8 sockets in 2U, limited to 165watt CPUs each which isn't too bad considering
[20:48:07] <ptx0> wow
[20:48:24] <ptx0> 165w is almost threadripper
[20:48:46] <ptx0> well
[20:48:46] *** shibboleth <shibboleth!~shibbolet@gateway/tor-sasl/shibboleth> has joined #zfsonlinux
[20:48:52] <ptx0> almost :)
[20:48:52] <DHE> almost, but this is 6-channel memory and in terms of "density" is the best I know of without going into blades
[20:49:16] <ptx0> DHE: if you want the opposite of density you could find an Origin 3000
[20:49:27] <BtbN> prawn, the issue is we want 10GbE and IB
[20:49:37] <prawn> DHE: looks like the Intel version of what I was looking at
[20:49:39] <BtbN> Just can't do that with those
[20:49:48] <DHE> huh... it does this stupid thing where each CPU has 6 channels but 8 slots...
[20:50:00] <ptx0> LOL
[20:50:23] <DHE> prawn: you said 4U initially and I didn't notice your correction...
[20:51:49] <BtbN> Oh, it _does_ have two extra PCIe slots per node, nvm that. That looks perfect
[20:51:55] <BtbN> The 2123BT-HNC0R
[20:52:10] <prawn> BtbN: uh, are you sure? The SIOM stuff on the Supermicro website looks like it has 10G for those modules but I'm out of my depth here
[20:52:11] <zfs> [openzfs/openzfs] DLPX-52397 Fast Clone Deletion (#731) created by Sara Hartse <https://github.com/openzfs/openzfs/issues/731>
[20:52:23] <prawn> I see :)
[20:52:24] <BtbN> prawn, it has either 10G _or_ IB
[20:52:33] <BtbN> Not both, and you can only fit one SIOM card
[20:52:47] <BtbN> The IB modules only have a 1G NIC
[20:53:00] <DHE> 2 low profile PCI-E 16x, 1 NIC addon slot, 1 M.2 slot, and 4 NVMe U.2 slots.... not bad for the tiny profile
[20:53:10] <BtbN> But I missed it also having normal PCIe slots, so can just slap IB or 10G in there
[20:53:13] <prawn> Pretty sure I've seen 10GbE/IB combo modules
[20:53:19] <prawn> Or was that FC?
[20:53:25] <BtbN> https://www.supermicro.com/support/resources/AOC/AOC_Compatibility_SIOM.cfm
[20:54:01] <prawn> Oh that's SIOM specific, whatever that even is 🙃
[20:54:45] <BtbN> On the EDR module the RJ45 doesn't even work on AMD
[20:54:55] <DHE> onboard networking is modular... wait what?
[20:55:14] <BtbN> It's just a special form-factor PCIe 16x card
[20:56:31] <prawn> Yeah the combo ones I meant were regular pcie
[20:57:51] <BtbN> I'd probably just put 10G SFP+ into the SIOM, and IB as PCIe card
[20:58:10] <BtbN> Unless the SIOM IB card is miraculously really cheap
[21:05:43] <zfs> [zfsonlinux/zfs] zpool iostat should print headers when terminal fills (#8262) new review comment by Damian Wojsław <https://github.com/zfsonlinux/zfs/pull/8262#discussion_r246903326>
[21:05:49] <zfs> [zfsonlinux/zfs] zpool iostat should print headers when terminal fills (#8262) new review comment by Damian Wojsław <https://github.com/zfsonlinux/zfs/pull/8262#discussion_r246903350>
[21:05:52] <zfs> [zfsonlinux/zfs] zpool iostat should print headers when terminal fills (#8262) new review comment by Damian Wojsław <https://github.com/zfsonlinux/zfs/pull/8262#discussion_r246903375>
[21:06:05] <zfs> [zfsonlinux/zfs] zpool iostat should print headers when terminal fills (#8262) new review comment by Damian Wojsław <https://github.com/zfsonlinux/zfs/pull/8262#discussion_r246903402>
[21:06:14] <zfs> [zfsonlinux/zfs] zpool iostat should print headers when terminal fills (#8262) new review comment by Damian Wojsław <https://github.com/zfsonlinux/zfs/pull/8262#discussion_r246903473>
[21:06:24] <zfs> [zfsonlinux/zfs] zpool iostat should print headers when terminal fills (#8262) new review comment by Damian Wojsław <https://github.com/zfsonlinux/zfs/pull/8262#discussion_r246903442>
[21:06:50] <zfs> [zfsonlinux/zfs] zpool iostat should print headers when terminal fills (#8262) comment by Damian Wojsław <https://github.com/zfsonlinux/zfs/issues/8262#issuecomment-453235318>
[21:07:09] *** Shinigami-Sama <Shinigami-Sama!~xero@unaffiliated/setsuna-xero> has joined #zfsonlinux
[21:11:24] <madwizard> ugh
[21:12:42] <ptx0> i remember my first time using github :P
[21:13:45] <madwizard> It's not my first time on github, but first time in a team
[21:13:54] *** Markow <Markow!~ejm@176.122.215.103> has joined #zfsonlinux
[21:14:25] <BtbN> 100k for a nice 256 cores in 2U
[21:19:28] <ptx0> how many amperes
[21:19:55] <ptx0> i doubt my 670W solar array could power it ^_^
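(Rough arithmetic, assuming, though it is not stated directly here, that the 256-core 2U box uses EPYC 7601-class parts at about 180 W TDP each: 8 sockets × 180 W ≈ 1.4 kW for the CPUs alone, before RAM, drives, and fans, so a 670 W solar array is short by more than a factor of two.)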
[21:20:47] <zfs> [zfsonlinux/zfs] zpool iostat should print headers when terminal fills (#8262) new review comment by Tony Hutter <https://github.com/zfsonlinux/zfs/pull/8262#pullrequestreview-191397067>
[21:21:11] <ptx0> madwizard: afaik there's a way to submit all comments as one review
[21:21:53] <madwizard> ptx0: Looks like I'll be learning for some time yet
[21:21:55] <madwizard> thanks
[21:22:59] <ptx0> np, thanks for the PR
[21:24:19] *** ralfi <ralfi!~ralfi@p200300C0C71056004C05B6EAA8A8BD92.dip0.t-ipconnect.de> has joined #zfsonlinux
[21:24:47] <zfs> [zfsonlinux/zfs] zfs filesystem skipped by df -h (#8254) new review comment by Brian Behlendorf <https://github.com/zfsonlinux/zfs/pull/8254#pullrequestreview-191388252>
[21:25:29] *** ralfi <ralfi!~ralfi@p200300C0C71056004C05B6EAA8A8BD92.dip0.t-ipconnect.de> has quit IRC (Quit: Quit)
[21:26:58] <zfs> [zfsonlinux/zfs] feature request: zpool iostat N should repeat header like vmstat N (#8235) comment by "Joshua M. Clulow" <https://github.com/zfsonlinux/zfs/issues/8235#issuecomment-453241393>
[21:29:15] <zfs> [zfsonlinux/zfs] zpool iostat should print headers when terminal fills (#8262) new review comment by Tony Hutter <https://github.com/zfsonlinux/zfs/pull/8262#discussion_r246909988>
[21:31:34] <zfs> [zfsonlinux/zfs] feature request: zpool iostat N should repeat header like vmstat N (#8235) comment by Damian Wojsław <https://github.com/zfsonlinux/zfs/issues/8235#issuecomment-453242723>
[21:38:04] <zfs> [zfsonlinux/zfs] feature request: zpool iostat N should repeat header like vmstat N (#8235) comment by "Joshua M. Clulow" <https://github.com/zfsonlinux/zfs/issues/8235#issuecomment-453244648>
[21:44:01] <zfs> [zfsonlinux/zfs] feature request: zpool iostat N should repeat header like vmstat N (#8235) comment by Damian Wojsław <https://github.com/zfsonlinux/zfs/issues/8235#issuecomment-453246455>
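(For context, the behavior #8235 is asking for, sketched with the two commands it compares; output omitted, the point is only how often the header line appears:

    # vmstat re-prints its column header every screenful of continuous output
    vmstat 1
    # zpool iostat at the time printed its header once at the top and never again
    zpool iostat 1
)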
[21:49:50] *** tnebrs <tnebrs!~barely@212.117.188.100> has joined #zfsonlinux
[21:53:43] *** ralfi <ralfi!~ralfi@p200300C0C71056004C05B6EAA8A8BD92.dip0.t-ipconnect.de> has joined #zfsonlinux
[22:05:13] *** tnebrs <tnebrs!~barely@212.117.188.100> has quit IRC (Ping timeout: 246 seconds)
[22:10:27] *** rjvb <rjvb!~rjvb@2a01cb0c84dee6009bda76eb03bc33f7.ipv6.abo.wanadoo.fr> has quit IRC (Ping timeout: 252 seconds)
[22:31:23] *** obadz <obadz!~obadz@unaffiliated/obadz> has quit IRC (Quit: WeeChat 2.3)
[22:33:15] <prawn> BtbN: 100k what? US dollars? Sounds like a lot just for the higher density compared to our two-socket 1U, although we did cheap out on everything but CPU cores. What amount of RAM and storage did you configure?
[22:33:36] <BtbN> €
[22:33:58] <BtbN> It's cheaper than 4 1U servers actually
[22:34:37] <BtbN> 1500€ for the bare 1U server, 3500€ for the bare 4-node 2U server. Otherwise the cost for components is identical
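(Worked out from the figures given: four bare 1U chassis would be 4 × 1500 € = 6000 €, against 3500 € for one 4-node 2U chassis, so roughly 2500 € saved on enclosures alone, with the per-node component cost the same either way.)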
[22:35:30] <BtbN> The CPU is 4000€, and there are 8 of them in there. Plus 2TB of RAM total.
[22:36:07] <Shinigami-Sama> quad socket in 1RU?
[22:36:17] <Shinigami-Sama> I didn't think that was a thing due to thermal constraints
[22:36:29] <BtbN> It's two dual socket in 1U
[22:36:50] <BtbN> 4 independent systems as one "thing" that's 2U high
[22:37:02] <Shinigami-Sama> oh, it's a mini-blade
[22:37:09] <BtbN> Wouldn't call it blade
[22:37:15] <Shinigami-Sama> "mini"
[22:37:36] <Shinigami-Sama> it's two half-width units, right?
[22:37:36] <BtbN> https://www.supermicro.com/Aplus/system/2U/2123/AS-2123BT-HTR.cfm this thing
[22:38:11] <Shinigami-Sama> yeah they're blades, in a non-standard case
[22:39:49] <Sketch> the quad-node 2U supermicros aren't really blades, blades generally have shared backplanes for networking/etc
[22:39:58] <ptx0> special vdev on nvme for 6x SMR mirror is pretty great
[22:40:23] <ptx0> the zfs recv speed was 39MiB/s before and now it is 200MiB/s
[22:40:54] <ptx0> for all types of dataset there is improvement in throughput and system response times during IO
[22:41:12] <ptx0> i've got 128k blocks offloaded to nvme
[22:42:08] <ptx0> so i see the SMR handling only 512k/1M writes
[22:42:29] <ptx0> every now and then a 256k IO
[22:47:00] <ptx0> the smr avgio times are like 20ms but the system is quick because of that metadata
[22:47:03] <ptx0> mmm mmm
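(A minimal sketch of the special allocation class setup ptx0 is describing, assuming a pool named "tank" and two spare NVMe devices; the pool name, dataset name, and device paths are placeholders, not taken from the log:

    # add a mirrored special vdev so pool metadata lands on NVMe instead of the SMR disks
    zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1
    # also send data blocks of 128K and smaller to the special vdev
    # (matches "128k blocks offloaded to nvme" above; the SMR disks then mostly see 512K/1M writes)
    zfs set special_small_blocks=128K tank/backups
)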
[22:47:52] <Shinigami-Sama> now just a nice big ZIL to aggregate those SMR writes
[22:53:05] *** gerhard7 <gerhard7!~gerhard7@ip5657ee30.direct-adsl.nl> has quit IRC (Quit: Leaving)
[22:55:58] <zfs> [zfsonlinux/zfs] feature request: zpool iostat N should repeat header like vmstat N (#8235) comment by "Joshua M. Clulow" <https://github.com/zfsonlinux/zfs/issues/8235#issuecomment-453268915>
[22:57:21] *** tnebrs <tnebrs!~barely@212.117.188.100> has joined #zfsonlinux
[22:58:35] <bunder> he did what
[22:58:37] * bunder slaps bot
[23:04:12] <ptx0> Shinigami-Sama: it is partitioned
[23:04:15] <ptx0> the nvme serves dual purpose
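(Presumably something along these lines for the dual-purpose NVMe, a sketch only: partition sizes and device names are assumptions, and an unmirrored special vdev means the whole pool is lost if that one device dies:

    # split the NVMe into a small SLOG partition and a large special-vdev partition
    sgdisk -n1:0:+16G -n2:0:0 /dev/nvme0n1
    # zpool normally refuses vdevs with less redundancy than the rest of the pool; -f overrides that check
    zpool add -f tank log /dev/nvme0n1p1
    zpool add -f tank special /dev/nvme0n1p2
)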
[23:20:13] *** k-man <k-man!~jason@unaffiliated/k-man> has joined #zfsonlinux
[23:20:45] *** Dagger <Dagger!~dagger@sawako.haruhi.eu> has quit IRC (Excess Flood)
[23:22:14] *** Dagger <Dagger!~dagger@sawako.haruhi.eu> has joined #zfsonlinux
[23:22:16] *** tnebrs <tnebrs!~barely@212.117.188.100> has quit IRC (Ping timeout: 268 seconds)
[23:26:40] <zfs> [zfsonlinux/zfs] Linux 5.0: asm/i387.h: No such file or directory (#8259) comment by Eric <https://github.com/zfsonlinux/zfs/issues/8259#issuecomment-453279143>
[23:31:19] *** shibboleth <shibboleth!~shibbolet@gateway/tor-sasl/shibboleth> has quit IRC (Quit: shibboleth)
[23:36:47] <zfs> [zfsonlinux/zfs] Don't allow dnode allocation if dn_holds != 0 (#8249) merged by Brian Behlendorf <https://github.com/zfsonlinux/zfs/issues/8249#event-2067163122>
[23:38:02] <zfs> [zfsonlinux/zfs] Add dmu_object_alloc_hold() and zap_create_hold() (#8015) merged by Brian Behlendorf <https://github.com/zfsonlinux/zfs/issues/8015#event-2067165741>
[23:38:44] *** simukis <simukis!~simukis_@78-63-88-48.static.zebra.lt> has quit IRC (Quit: simukis)
[23:54:21] <bunder> wtf greg
[23:57:46] *** elxa <elxa!~elxa@2a01:5c0:e08e:f2e1:753:a29f:db79:e5f6> has quit IRC (Ping timeout: 260 seconds)