January 5, 2019
[00:05:47] *** donhw <donhw!~quassel@host-184-167-36-98.jcs-wy.client.bresnan.net> has joined #zfsonlinux
[00:12:35] *** jasonwc <jasonwc!~jasonwc@pool-72-66-15-203.washdc.fios.verizon.net> has quit IRC (Read error: Connection reset by peer)
[00:20:55] *** simukis <simukis!~simukis_@78-63-88-48.static.zebra.lt> has quit IRC (Read error: Connection reset by peer)
[00:21:36] *** simukis <simukis!~simukis_@78-63-88-48.static.zebra.lt> has joined #zfsonlinux
[00:27:33] *** jasonwc <jasonwc!~jasonwc@pool-72-66-15-203.washdc.fios.verizon.net> has joined #zfsonlinux
[00:42:19] *** rjvb <rjvb!~rjvb@2a01cb0c84dee6009bda76eb03bc33f7.ipv6.abo.wanadoo.fr> has quit IRC (Ping timeout: 252 seconds)
[00:48:20] *** simukis <simukis!~simukis_@78-63-88-48.static.zebra.lt> has quit IRC (Read error: Connection reset by peer)
[00:48:52] *** simukis <simukis!~simukis_@78-63-88-48.static.zebra.lt> has joined #zfsonlinux
[00:53:22] *** gila <gila!~gila@5ED74129.cm-7-8b.dynamic.ziggo.nl> has quit IRC (Quit: Textual IRC Client: www.textualapp.com)
[00:55:31] <zfs> [zfsonlinux/zfs] Disable 'zfs remap' command (#8238) created by Brian Behlendorf <https://github.com/zfsonlinux/zfs/issues/8238>
[00:56:00] *** elxa_ <elxa_!~elxa@2a01:5c0:e095:9ab1:286:2681:a937:345f> has joined #zfsonlinux
[00:57:24] *** apekatten <apekatten!~apekatten@unaffiliated/apekatten> has quit IRC (Quit: no reason)
[00:57:56] *** apekatten <apekatten!~apekatten@unaffiliated/apekatten> has joined #zfsonlinux
[01:01:09] <PMT> That's a bold move.
[01:01:45] <Setsuna-Xero> probably for the best
[01:04:52] *** rjvbb <rjvbb!~rjvb@2a01cb0c84dee6001dcc68519deda765.ipv6.abo.wanadoo.fr> has quit IRC (Ping timeout: 252 seconds)
[01:08:38] <zfs> [zfsonlinux/zfs] Python 2 and 3 compatibility (#8096) comment by John <https://github.com/zfsonlinux/zfs/issues/8096>
[01:12:59] <MTecknology> Yay! I'm back to where I was before, and this time grub is installing happily. :D
[01:18:45] <Setsuna-Xero> grub is never happy
[01:19:33] <zfs> [zfsonlinux/zfs] Python 2 and 3 compatibility (#8096) new commit by Brian Behlendorf <https://github.com/zfsonlinux/zfs>
[01:22:04] <MTecknology> Setsuna-Xero: a portion of grumpy makes up 3/4 of its name
[01:22:16] <MTecknology> long live the lilo boot loader!
[01:23:58] <MTecknology> 6.3 says to run this command to unmount file systems, I just noticed that it did a lazy unmount. The next step is 'zfs export poolname', but I can't do that because it's reporting as busy.
[01:24:22] <MTecknology> ... nevermind
[01:25:31] <MTecknology> it's reboot time!
[01:25:34] * MTecknology holds breath
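For readers following along, the step being described is the end-of-install unmount/export sequence. A minimal sketch, assuming the Debian root-on-ZFS HOWTO and a pool named "rpool" (both assumptions; neither is named in the log):

    # lazily unmount everything still mounted under the /mnt chroot target
    mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -I{} umount -lf {}
    # export the pool; note this is a zpool subcommand, not "zfs export"
    zpool export rpool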
[01:30:47] *** apekatten <apekatten!~apekatten@unaffiliated/apekatten> has quit IRC (Ping timeout: 258 seconds)
[01:31:23] *** Essadon <Essadon!~Essadon@81-225-32-185-no249.tbcn.telia.com> has quit IRC (Quit: Qutting)
[01:32:10] *** apekatten <apekatten!~apekatten@unaffiliated/apekatten> has joined #zfsonlinux
[01:44:03] <zfs> [zfsonlinux/zfs] zfs receive and rollback can skew filesystem_count (#8232) comment by Brian Behlendorf <https://github.com/zfsonlinux/zfs/issues/8232>
[01:45:31] *** apekatten <apekatten!~apekatten@unaffiliated/apekatten> has quit IRC (Quit: no reason)
[01:45:55] *** apekatten <apekatten!~apekatten@unaffiliated/apekatten> has joined #zfsonlinux
[01:57:20] *** simukis <simukis!~simukis_@78-63-88-48.static.zebra.lt> has quit IRC (Quit: simukis)
[02:02:37] *** elxa_ <elxa_!~elxa@2a01:5c0:e095:9ab1:286:2681:a937:345f> has quit IRC (Ping timeout: 252 seconds)
[02:20:41] <tlacatlc6> so zstd is not even on git?
[02:21:55] <DeHackEd> #8044
[02:22:06] <zfs> [zfs] #8044 - Support zstd compression (port of Allan Judes patch from FreeBSD) by BrainSlayer <https://github.com/zfsonlinux/zfs/issues/8044>
[02:23:34] <PMT> "the freebsd variant hat serious memory allocation problems and bugs in the original version. so i'm really suprised how it ever worked for you."
[02:23:48] <tlacatlc6> oh, nice. is anyone using it on zol?
[02:26:28] <PMT> tlacatlc6: the person who opened that PR says he is. I wouldn't suggest playing with it until it's in a stable release unless you want to A) debug it yourself and B) restore from backups if there are bugs
[02:27:10] <PMT> though depending on the way you took the backups, those could also be mangled.
[02:34:45] *** jasonwc <jasonwc!~jasonwc@pool-72-66-15-203.washdc.fios.verizon.net> has quit IRC (Read error: Connection reset by peer)
[02:39:28] <zfs> [zfsonlinux/zfs] avoid retrieving unused snapshot props (#8077) comment by Alek P <https://github.com/zfsonlinux/zfs/issues/8077#issuecomment-451616659>
[02:47:41] *** biax_ <biax_!~biax@unaffiliated/biax> has quit IRC (Ping timeout: 244 seconds)
[03:13:38] *** biax_ <biax_!~biax@unaffiliated/biax> has joined #zfsonlinux
[03:16:15] *** biax_ <biax_!~biax@unaffiliated/biax> has quit IRC (Client Quit)
[03:37:30] <zfs> [zfsonlinux/zfs] making sure the last quiesced txg is synced (#8239) created by seekfirstleapsecond <https://github.com/zfsonlinux/zfs/issues/8239>
[03:47:31] <ptx0> woo, my threadripper system lives again
[03:47:46] <ptx0> this time with my 32" curved display in the centre and two 24" on either side
[03:52:04] <DeHackEd> an acceptable workstation...
[04:19:04] <ptx0> it used to have the 32" on top and two 24" side by side under it
[04:19:16] <ptx0> (i started with the two 24" and added the 32" later)
[04:19:30] <ptx0> i didn't think about it at the time but my neck really disagreed with the decision later
[04:19:38] <ptx0> looking 'up' at a monitor is not good
[04:20:15] <ptx0> PMT: the person who opened that PR.... well...
[04:20:24] <ptx0> they don't seem very reliable as a test source :P
[04:45:50] <Setsuna-Xero> ptx0: my i7 980x will soon have dual 28" 4k g-sync displays...
[04:45:58] <Setsuna-Xero> as soon as that 1080ti shows up
[05:06:01] <ptx0> but why gsync
[05:06:05] <ptx0> it is expensive
[05:09:06] <Setsuna-Xero> I stare at monitors all day long, tearing drives me crazy
[05:09:20] <ptx0> yeah but you can use gsync on a freesync monitor aiui
[05:15:29] <PMT> ptx0: i am aware
[05:15:54] <PMT> Setsuna-Xero: and using a Nehalem isn't driving you mad
[05:15:59] <PMT> also, ptx0: no, you can't
[05:16:19] <PMT> it's mutually exclusive, not a superset problem
[05:17:02] *** Pewpewpewpantsu <Pewpewpewpantsu!~xero@unaffiliated/setsuna-xero> has joined #zfsonlinux
[05:17:23] <PMT> you can maybe trick it into working if you also have an AMD GPU/APU in the system?
[05:17:34] *** Setsuna-Xero <Setsuna-Xero!~xero@unaffiliated/setsuna-xero> has quit IRC (Ping timeout: 250 seconds)
[05:21:03] <Pewpewpewpantsu> bah, accidentally unplugged my workstation installing that new wd red
[05:21:14] <Pewpewpewpantsu> anyways g-sync and freesync don't play nice
[05:21:20] <Pewpewpewpantsu> I already looked into that
[05:25:43] <Pewpewpewpantsu> now.. how do I add this new drive as a mirror sanely
[05:27:02] <bunder> you cover it in peanut butter and press the two drives together
[05:27:45] <Pewpewpewpantsu> I was thinking using /dev/by-id but I guess it doesn't matter does it
[05:27:56] *** Pewpewpewpantsu is now known as Setsuna-Xero
[05:28:15] <ptx0> https://www.pcworld.com/article/3300167/components-graphics/amd-freesync-on-nvidia-geforce-graphics.html
[05:28:20] <ptx0> suck it PMT
[05:29:33] <gchristensen> "A ZFSSA with 192 4TB drives, configured as a single RAIDz1 pool" O.o
[05:31:05] <Setsuna-Xero> wait what.. I need to manually mirror the partition table for zfs to just take the disk?
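For what it's worth, a minimal sketch of turning a single-disk vdev into a mirror using /dev/disk/by-id names (pool and device names here are hypothetical; if ZFS was originally given the whole disk, zpool attach partitions the new disk itself, and hand-copying the partition table is normally only needed for boot disks carrying extra partitions):

    zpool status tank                        # check the current layout and the exact device name in use
    zpool attach tank \
        /dev/disk/by-id/ata-EXISTING_DISK \
        /dev/disk/by-id/ata-WDC_NEW_RED
    zpool status tank                        # the new disk resilvers into a mirror with the old one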
[05:31:12] <bunder> gchristensen: sounds like phoronix
[05:31:30] <gchristensen> close: Oracle Cloud
[05:31:38] <gchristensen> https://blogs.oracle.com/wonders-of-zfs-storage/disk-scrub-why-and-when-v2
[05:37:20] <Setsuna-Xero> ugh
[05:37:26] <Setsuna-Xero> too tired for this
[05:44:09] <bunder> "No, I'm not going to talk about Stripe. That should only ever be used on a simulator; I don't even know why it exists on a ZFS appliance"
[05:44:11] <bunder> LOL
[05:45:13] * gchristensen has an 8-disk stripe now....
[05:46:24] <Setsuna-Xero> I give up
[05:46:38] <Setsuna-Xero> it partially mirrored the partition table and now gives me nonsense about no such device
[05:46:46] <Setsuna-Xero> tomorrow...
[05:47:58] *** veegee <veegee!~veegee@ipagstaticip-3d3f7614-22f3-5b69-be13-7ab4b2c585d9.sdsl.bell.ca> has joined #zfsonlinux
[05:58:50] *** Markow <Markow!~ejm@176.122.215.103> has quit IRC (Quit: Leaving)
[06:08:18] *** tlacatlc6 <tlacatlc6!~tlacatlc6@68.202.46.96> has quit IRC (Quit: Leaving)
[07:00:48] *** gerhard7 <gerhard7!~gerhard7@ip5657ee30.direct-adsl.nl> has joined #zfsonlinux
[07:28:26] *** MilkmanDan <MilkmanDan!~dan@wilug/expat/MilkmanDan> has quit IRC (Ping timeout: 250 seconds)
[07:29:14] *** MilkmanDan <MilkmanDan!~dan@wilug/expat/MilkmanDan> has joined #zfsonlinux
[07:49:07] <MTecknology> Guess who forgot to set a root password before rebooting!
[07:49:23] <ptx0> ^ this guy ^
[07:49:38] <MTecknology> Excellent guess!
[07:59:28] <MTecknology> bright side- I actually booted into the installed environment. That means I'm pretty much done! :D
[09:01:27] *** cheet <cheet!~cheet@modemcable056.70-59-74.mc.videotron.ca> has quit IRC (Quit: ZNC 1.8.x-nightly-20181211-72c5f57b - https://znc.in)
[09:11:08] <veremitz> chroot back in ;P lol
[09:15:24] <MTecknology> I'm currently raging against the machine.
[09:16:12] <MTecknology> My write performance is utter garbage and I'm 99.9% confident it's because of the controller, even though this is JBOD.
[09:39:59] *** Celmor <Celmor!~Celmor@unaffiliated/celmor> has joined #zfsonlinux
[09:43:17] *** elxa_ <elxa_!~elxa@2a01:5c0:e09a:5f21:73d:6156:13c5:f51e> has joined #zfsonlinux
[09:51:58] *** rjvb <rjvb!~rjvb@2a01cb0c84dee6009bda76eb03bc33f7.ipv6.abo.wanadoo.fr> has joined #zfsonlinux
[10:00:53] <MTecknology> so... wtf? http://dpaste.com/3E66AN0
[10:01:24] <MTecknology> that only happens with oflag=direct
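The paste has since expired, so the exact numbers are gone; as discussed further down, O_DIRECT was not supported by ZoL 0.7.x, so dd runs with oflag=direct are not a meaningful comparison there. An illustrative pair of invocations (file name and sizes are made up):

    dd if=/dev/zero of=/tank/testfile bs=1M count=1024 oflag=direct    # opens with O_DIRECT; typically rejected by ZoL 0.7.x
    dd if=/dev/zero of=/tank/testfile bs=1M count=1024 conv=fdatasync  # buffered write with a final flush instead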
[10:10:27] *** cheet <cheet!~cheet@modemcable056.70-59-74.mc.videotron.ca> has joined #zfsonlinux
[10:34:31] <rjvb> Hi. All this debian-related discussion recently and I completely forgot to evoke a known annoyance under Debian:
[10:35:03] <rjvb> during apt/dpkg operations, the "reading database" step can take *really* long
[10:36:17] <rjvb> none of the dataset tuning attempts I've made to date made any real difference there
[10:56:50] <bunder> how does ubu only have ~450 employees https://www.phoronix.com/scan.php?page=news_item&px=Canonical-Financial-EOY31May18
[10:57:26] <bunder> do they only have one guy on the phones doing support requests
[11:24:57] <MTecknology> bunder: dunno how that's relevant here, but Canonical is a company that's generally good at keeping around experts. Those experts are generally excellent at resolving issues in a very timely manner. Also- most of that revenue is not going to be support requests because companies that pay for ubuntu support are looking for things like landscape and employ their own experts.
[11:27:08] <MTecknology> What is "Why did sabdfl step down?", for $100mil?
[11:39:57] *** rjvbb <rjvbb!~rjvb@2a01cb0c84dee6006d1146548036a454.ipv6.abo.wanadoo.fr> has joined #zfsonlinux
[11:56:32] *** Celmor[m] <Celmor[m]!celmormatr@gateway/shell/matrix.org/x-krkmszgmsnwchvsw> has quit IRC (Remote host closed the connection)
[11:59:03] *** captain42 <captain42!~captain42@unaffiliated/captain42> has quit IRC (Ping timeout: 246 seconds)
[11:59:19] <PMT> rjvbb: what's a really long time here, and on what pool config?
[12:00:05] <PMT> MTecknology: I know of at least one large support contract company that stopped in the last 12mo
[12:00:17] <PMT> MTecknology: what controller?
[12:01:25] <PMT> Also, what ZoL version, but O_DIRECT is not a thing I expect to work
[12:05:57] *** captain42 <captain42!~captain42@unaffiliated/captain42> has joined #zfsonlinux
[12:11:36] <bunder> last i recall, it doesn't
[12:21:24] *** Markow <Markow!~ejm@176.122.215.103> has joined #zfsonlinux
[13:10:56] <zfs> [zfsonlinux/zfs] making sure the last quiesced txg is synced (#8239) new commit by seekfirstleapsecond <https://github.com/zfsonlinux/zfs>
[13:34:11] <zfs> [zfsonlinux/zfs] making sure the last quiesced txg is synced (#8239) new commit by seekfirstleapsecond <https://github.com/zfsonlinux/zfs>
[13:48:51] <rjvb> PMT: what config you want to see? The dataset which holds /var/lib/apt* has copies=2,compression=lz4 and sync=disabled (for a bit more performance)
[13:50:33] <rjvb> The /var/lib/dpkg dir is symlinked to another dataset that has the same settings but compression=off
[13:51:11] <rjvb> all on pool on a partition of an internal SSHD,
[13:52:08] <rjvb> when I say really long it can be a minute or more (never really timed it); you can see the percentage counter advance in slow bursts, often getting stuck at around the 45% mark
[13:52:59] <rjvb> not as on e.g. btrfs where it will count up to 100% just slow enough to read the progress values
[13:54:35] <zfs> [zfsonlinux/zfs] making sure the last quiesced txg is synced (#8239) new commit by seekfirstleapsecond <https://github.com/zfsonlinux/zfs>
[13:56:08] <zfs> [zfsonlinux/zfs] making sure the last quiesced txg is synced (#8239) comment by seekfirstleapsecond <https://github.com/zfsonlinux/zfs/issues/8239#issuecomment-451653286>
[14:01:15] <zfs> [zfsonlinux/zfs] NULL pointer dereference when attempting to destroy snapshots en-mass to allow a pool to resilver (#8237) comment by Will Rouesnel <https://github.com/zfsonlinux/zfs/issues/8237#issuecomment-451653598>
[14:01:35] <rjvb> ZoL 0.7.12 but I'm not the only one who's noticed this since many versions ago already. What's that about O_DIRECT?
[14:03:41] <rjvb> I'm guessing it could have something to do with /var/lib/dpkg/info which has over 19000 files in it
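For context, the properties rjvb mentions, plus the tunables most often suggested for metadata-heavy directories like /var/lib/dpkg/info. The dataset name is hypothetical, and the xattr/atime settings are common suggestions, not something rjvb reports having tried:

    zfs get copies,compression,sync,xattr,atime,recordsize tank/var-lib-dpkg
    zfs set xattr=sa tank/var-lib-dpkg     # store xattrs in the dnode instead of hidden directories
    zfs set atime=off tank/var-lib-dpkg    # avoid a metadata write for every file read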
[14:09:25] *** simukis <simukis!~simukis_@78-63-88-48.static.zebra.lt> has joined #zfsonlinux
[14:25:15] *** fs2 <fs2!~fs2@pwnhofer.at> has quit IRC (Quit: Ping timeout (120 seconds))
[14:25:47] *** fs2 <fs2!~fs2@pwnhofer.at> has joined #zfsonlinux
[14:34:11] <prawn> rjvb: I was about to post the github issue but it's yours :D
[14:34:48] <prawn> But hey, to be fair, your issue is assigned to the 1.0 milestone and everyone is working on 0.8 :^)
[14:37:34] *** Celmor <Celmor!~Celmor@unaffiliated/celmor> has left #zfsonlinux
[14:45:47] <PMT> rjvb: 19000 is annoyingly many but shouldn't be terrible
[14:46:35] <PMT> oh they're not all in one directory that should absolutely not be lethal
[14:47:40] <PMT> rjvb: I was asking about what the pool setup was - single disk, raidz, mirror, l2arc, log device, etc
[14:48:10] <PMT> Also, if you have a bug open, it'd be useful
[14:50:11] <rjvb> prawn: what github issue?
[14:50:34] <rjvb> PMT: single disk, no mirrors, log devices etc, and AFAIK, default ARC
[14:50:38] <rjvb> 8Gb RAM
[14:50:59] <rjvb> and yes, /var/lib/dpkg/info is a single directory
[14:51:00] <prawn> rjvb: https://github.com/zfsonlinux/zfs/issues/3857
[15:09:27] <rjvb> ah, see how long the issue has been known ... I completely forgot about that ticket :-/
[15:17:38] *** PewpewpewPantsu <PewpewpewPantsu!~pewpew@unaffiliated/setsuna-xero> has joined #zfsonlinux
[15:18:00] *** Shinigami-Sama <Shinigami-Sama!~pewpew@unaffiliated/setsuna-xero> has quit IRC (Ping timeout: 268 seconds)
[15:18:10] *** Setsuna-Xero <Setsuna-Xero!~xero@unaffiliated/setsuna-xero> has quit IRC (Ping timeout: 250 seconds)
[15:33:33] *** fp7 <fp7!~fp7@unaffiliated/fp7> has joined #zfsonlinux
[15:47:49] *** mmlb <mmlb!~mmlb@76-248-148-178.lightspeed.miamfl.sbcglobal.net> has joined #zfsonlinux
[15:51:16] <elxa_> do you split ssds using partitions or lvm to split their storage for zil, l2arc, allocation classes?
[15:51:18] *** elxa_ is now known as elxa
[15:51:47] <gchristensen> elxa: do you have many SSDs or one SSD?
[15:52:14] <gchristensen> I'm ... pretty sure ... you won't get any benefit of putting a zil and/or l2arc on the same device as storage
[15:54:21] <elxa> gchristensen: ssds are big and powerful, my workload is mostly read. So I figured I could add 4x NVMe SSDs in a 1xPCIe x16 to 4x m.2 PCIe adapter and use it in various ways?
[15:55:11] <elxa> e.g. for allocation classes I thought I should have the same raid level on the ssds as on the main storage
[15:55:24] <gchristensen> if mostly read, a zil isn't so useful, but an l2arc is useful if reading from the l2arc is faster than reading from the storage itself
[15:57:24] <gchristensen> and maybe the l2arc is faster to read from than storage due to optimisations? not sure. I wouldn't slice up a single device though...
[15:57:42] <gchristensen> I don't know anything about allocation classes
[15:59:13] <elxa> gchristensen: allocation classes are coming with 0.8.0 afaik. It is about moving certain data (metadata or data below a certain size) from the main storage to the "special device" (ssds)
[16:01:29] <elxa> so for any write the data matching those (to some extent configurable) conditions will go to the special device unless it is full (fallback to main storage)
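A minimal sketch of what that looks like with the allocation-class interface that later shipped in 0.8 (pool, dataset, and device names are hypothetical; as elxa notes above, the special vdev should carry the same redundancy as the main storage):

    zpool add tank special mirror \
        /dev/disk/by-id/nvme-SSD0 /dev/disk/by-id/nvme-SSD1    # metadata allocations move to the special vdev
    zfs set special_small_blocks=16K tank/data                 # also route data blocks <=16K there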
[16:02:17] <gchristensen> still not sure that is useful when your disks are all the same like that
[16:02:18] <elxa> on my pools listing directories/files is a relatively slow operation. I did some testing using l2arc and it helped a lot, but allocation classes will basically mean I don't need to warm up the cache to achieve the same :D
[16:04:04] <gchristensen> oh well if you've benchmarked you should tell me to piss off :P
[16:06:11] *** tlacatlc6 <tlacatlc6!~tlacatlc6@68.202.46.96> has joined #zfsonlinux
[16:08:47] <elxa> well nothing special. I timed "find /path/to/pool" with cold l2arc and warm l2arc. Quite the difference :)
[16:09:08] <gchristensen> and how does that compare to doing it twice without an l2arc?
[16:09:44] <elxa> doing what twice?
[16:09:56] <gchristensen> the `find /path/to/pool`
[16:10:29] <gchristensen> the speedup with the l2arc could have been many things, like the dirent cache
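A rough way to separate the L2ARC effect from the Linux dentry/page caches gchristensen is alluding to (path hypothetical; drop_caches clears the kernel's dentry/inode/page caches but not the ARC, and needs root):

    time find /tank/data > /dev/null     # first pass: everything cold
    echo 3 > /proc/sys/vm/drop_caches    # flush dentry/inode/page caches, leaving ARC/L2ARC warm
    time find /tank/data > /dev/null     # second pass: isolates the ARC/L2ARC contribution
    time find /tank/data > /dev/null     # third pass: everything warm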
[16:12:59] <elxa> gchristensen: you're probably right, many cache layers could've helped. I want to use allocation classes to move some of my cache to ssds in a (reboot) persistent manner.
[16:14:36] <gchristensen> hrm not sure l2arc is? https://github.com/zfsonlinux/zfs/issues/925 / https://wiki.illumos.org/display/illumos/Persistent+L2ARC / https://www.illumos.org/issues/3525
[16:16:08] <elxa> yes l2arc is (not yet) persistent
[16:16:29] <elxa> brackets fail :D
[16:17:24] <elxa> it seems such waste for modern nvme drives to use them for just one of zil/l2arc/allocation classes
[16:17:33] <elxa> in my systems they will be mostly idle :D
[16:18:01] <elxa> compared to the performance that they offer
[16:18:19] <gchristensen> you could get much less good performance drives, and then they'll be idle less?
[16:18:52] <gchristensen> (that is a joke)
[16:19:01] <elxa> I don't need dramatic improvements, anything ssd based will blow hard disks out of the window :)
[16:20:06] <elxa> except for sustained writes maybe?
[16:21:00] <elxa> I went cheap (capacity instead of performance) on my desktop with a samsung 850 evo 1tb
[16:21:17] <elxa> my system freezes everything I write anything more than a few megabytes, it's driving me crazy
[16:21:34] <elxa> everything -> everytime
[16:31:13] *** b <b!coffee@gateway/vpn/privateinternetaccess/b> has joined #zfsonlinux
[16:52:21] <PMT> gchristensen: pL2ARC has not landed on any platform AFAIK
[16:52:44] <PMT> (it's in some of the closed appliances I believe, but none of the "main" OpenZFS projects)
[17:03:27] *** kim0 <kim0!uid105149@ubuntu/member/kim0> has joined #zfsonlinux
[17:50:06] *** shibboleth <shibboleth!~shibbolet@gateway/tor-sasl/shibboleth> has joined #zfsonlinux
[18:02:33] *** simukis <simukis!~simukis_@78-63-88-48.static.zebra.lt> has quit IRC (Quit: simukis)
[18:29:13] <ptx0> PMT: i don't think it has
[18:29:24] <ptx0> landed in any platform, that is
[18:29:41] <ptx0> last i heard, there were potential corruption issues - they didn't matter before the l2arc was persisted.
[18:35:48] <zfs> [zfsonlinux/zfs] After a week of running array, issuing zpool scrub causes system hang (#7553) closed by kpande <https://github.com/zfsonlinux/zfs/issues/7553#event-2055969601>
[18:35:53] <zfs> [zfsonlinux/zfs] After a week of running array, issuing zpool scrub causes system hang (#7553) comment by kpande <https://github.com/zfsonlinux/zfs/issues/7553#issuecomment-451674637>
[18:39:23] <PMT> ptx0: oh? I didn't realize L2ARC could get mangled like that.
[18:43:11] <zfs> [zfsonlinux/zfs] enable compression by default on new pools (#8213) comment by kpande <https://github.com/zfsonlinux/zfs/issues/8213#issuecomment-451675439>
[18:43:44] <ptx0> PMT: when persistent
[18:45:26] <PMT> I do so wish OpenZFS was less afraid of changing defaults, even as I wish it was less haphazard with features that turn out to have catastrophic bugs.
[18:45:33] * ptx0 added note about checksum being on by default
[18:45:46] <ptx0> other filesystems don't have checksum, should we disable it by default to avoid "confusing" new users? no
[18:45:58] <PMT> i'm still mad ext4 chickened out of data checksums
[18:46:03] <ptx0> yeah it's fucking stupid
[18:46:12] <ptx0> they would have probably done it as stupidly as Ceph did though
[18:46:18] <PMT> how did Ceph do it stupidly?
[18:46:19] <ptx0> CRC32 generated on scrub, lol
[18:46:27] <PMT> ...only generated on scrub?
[18:46:27] <ptx0> they don't generate checksum during storage
[18:46:30] <ptx0> yes
[18:46:34] * PMT stares into the camera
[18:46:45] <ptx0> well
[18:46:50] <ptx0> ceph is a multi host filesystem
[18:47:00] <ptx0> when it generates the checksum it is comparing object A to B and C's
[18:47:06] <gchristensen> :|
[18:47:07] <ptx0> i guess if B and C have consensus then A is wrong
[18:47:38] <ptx0> copies B/C over A, moves on
[18:47:44] <ptx0> that is a simplified version, but, yeah
[18:48:07] <PMT> yeah i am aware of how fun consensus algorithms are
[18:48:10] <ptx0> but if anyone tries to tell you 'ceph has checksum and scrub' just laugh/spit in their face
[18:48:24] <PMT> i mean, it does have scrub
[18:48:29] <PMT> it's just also stupid
[18:48:35] <ptx0> well duh
[18:48:35] <PMT> and i really wanted to like it
[18:48:39] <ptx0> that's why laugh/spit
[18:48:53] <ptx0> i don't see why they can't add a new on disk format with real checksum
[18:48:59] <gchristensen> let's just hope all our devices aren't so homogenous as to all corrupt in the same way
[18:49:03] <ptx0> they fucking love changing on disk format
[18:49:05] <PMT> ptx0: maybe blueshift is gonna grow it
[18:49:11] <ptx0> lol no
[18:49:14] <ptx0> blueshit is based on XFS
[18:49:23] <PMT> wait really
[18:49:27] <ptx0> yeah
[18:49:40] <PMT> bluestore, rather, but
[18:50:02] <ptx0> and it's bluestore
[18:50:03] <ptx0> yeah
[18:50:34] <ptx0> oh
[18:50:38] <ptx0> might have been wrong about that
[18:50:47] <ptx0> apparently BlueStore talks directly to hdd, what could possibly go wrong
[18:51:09] <PMT> ptx0: yeah i was gonna say
[18:52:08] <ptx0> they even go into some details about how nice FileStore was because it got to use the kernel caching mechanisms
[18:52:12] <ptx0> now they're trying to reinvent ZFS ARC
[18:52:18] *** hsp <hsp!~hsp@unaffiliated/hsp> has quit IRC (Quit: WeeChat 2.3)
[18:52:42] <ptx0> BlueStore calculates, stores, and verifies checksums for all data and metadata it stores. Any time data is read off of disk, a checksum is used to verify the data is correct before it is exposed to any other part of the system (or the user).
[18:52:47] <ptx0> By default we use the crc32c checksum. A few others are available (xxhash32, xxhash64), and it is also possible to use a truncated crc32c (i.e., only 8 or 16 bits of the 32 available bits) to reduce the metadata tracking overhead at the expense of reliability. It’s also possible to disable checksums entirely (although this is definitely not recommended). See the checksum section of the docs for more
[18:52:53] <ptx0> information.
[18:52:55] <ptx0> crc32 though
[18:52:56] <ptx0> lmao
[18:53:08] <ptx0> or even worse, truncated crc32
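For reference, the knob being quoted is BlueStore's checksum type; a ceph.conf sketch (option spelling should be verified against the Ceph release in use):

    [osd]
    bluestore csum type = xxhash64    # default is crc32c; crc32c_16/crc32c_8 are the truncated variants, "none" disables it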
[18:53:17] <PMT> why would you do this
[18:53:29] <PMT> are you trying to implement ceph on embedded systems or something
[18:53:31] <ptx0> that's what i find myself saying a LOT when looking at Ceph docs/source
[18:53:47] *** hsp <hsp!~hsp@unaffiliated/hsp> has joined #zfsonlinux
[18:53:52] <PMT> maybe if IBM gets desperate they'll open source GPFS as Ceph 2 =P
[18:54:00] <ptx0> i thought they were already desperate
[18:54:09] <PMT> unclear
[18:54:20] <PMT> did you see the NSA is releasing their reveng toolkit at RSA
[18:54:22] <ptx0> i mean
[18:54:26] <ptx0> they bought RedHat
[18:54:54] <ptx0> 34 billion dollars
[18:55:00] <PMT> i'll be really fascinated to see how they handle the culture clash
[18:55:01] <ptx0> that does not spell desperation?
[18:55:03] <DeHackEd> does CRC32 have known defects? (other than lack of crypto secureness)
[18:55:24] <DeHackEd> I mean, okay, only 32 bits, but in theory you're defending against a flaky drive/controller and not an evil enemy
[18:55:35] <PMT> DeHackEd: mostly just that it's a weaker checksum and it's not clear it's much faster than e.g. lzjb or friends
[18:55:40] <ptx0> psh if crc were so great then the on disk corruption checking would work
[18:55:50] <PMT> ptx0: tbf it often works for me
[18:55:59] <ptx0> yeah well it does here too but that ruins my point so stfu
[18:56:17] <DeHackEd> objection, lzjb isn't a checksum algorithm
[18:56:28] <PMT> excuse me, brainfart
[18:56:30] <PMT> fletcher
[18:56:31] <ptx0> DeHackEd: but can't you try decompressing and if fail, checksum obv broken
[18:56:35] <ptx0> :P
[18:56:47] <PMT> ptx0: or compression broken :P
[18:57:03] <ptx0> this is a ceph feature, we work on Consensus (TM) here.
[18:57:33] *** Albori <Albori!~Albori@216-229-75-72.fidnet.com> has quit IRC (Ping timeout: 245 seconds)
[18:57:41] <ptx0> weird, didn't know IBM bought Verizon's cloud business
[18:57:51] <PMT> ptx0: I did say you could do it if you have an AMD GPU
[18:57:53] <ptx0> also, it's weird that a company can just buy a department from another company, isn't it?
[18:57:56] <PMT> (re: freesync/g-sync)
[18:58:27] <PMT> ptx0: I believe it's usually the case that said dept is spun out into a distinct company and then that gets bought
[18:58:42] <ptx0> looks like IBM has spent almost 100 billion in acquisitions in the last decade
[18:58:48] <PMT> only?
[18:58:52] <gchristensen> DeHackEd: drive controllers are my sworn, evil enemy anyway
[18:59:35] <ptx0> PMT: that's probable. it was called Verizon Cloud Business
[18:59:48] <ptx0> super original name
[19:01:03] <PMT> seriously though, why does my version control system use a stronger checksum than almost any filesystem
[19:01:15] *** shibboleth <shibboleth!~shibbolet@gateway/tor-sasl/shibboleth> has quit IRC (Remote host closed the connection)
[19:01:24] <ptx0> heheh
[19:01:29] *** shibboleth <shibboleth!~shibbolet@gateway/tor-sasl/shibboleth> has joined #zfsonlinux
[19:01:37] <ptx0> because you didn't submit a PR for ZFS to have stronger checksums yet
[19:01:59] <PMT> i was just talking about use of sha1, e.g. not crc32 =P
[19:02:06] <DeHackEd> (zfs has sha256 even without any feature flags)
[19:02:21] <DeHackEd> (not default, but available)
[19:02:37] <PMT> I am aware
[19:02:38] <PMT> monolith checksum sha256 local
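That last line is `zfs get` output for PMT's pool; setting and checking the property looks roughly like this (sha256 needs no feature flag, as DeHackEd says; sha512/skein/edonr do):

    zfs set checksum=sha256 monolith
    zfs get checksum monolith
    NAME      PROPERTY  VALUE   SOURCE
    monolith  checksum  sha256  local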
[19:02:41] <gchristensen> PMT: do you fsck your version control? many don't and ... ugh
[19:03:24] <gchristensen> fetch.fsckObjects = true and receive.fsckObjects = true and transfer.fsckObjects = true
[19:03:29] <DeHackEd> also, your *distributed* *network-enabled* version control has a stronger checksum since 2nd and 3rd parties may have write access to it
[19:03:52] <DeHackEd> again, ZFS, Ceph and the like are designed around the only real "enemy" being hardware with a bad case of the crazies
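The settings gchristensen lists a few lines up, written out as git-config commands:

    git config --global transfer.fsckObjects true
    git config --global fetch.fsckObjects true
    git config --global receive.fsckObjects true
    # transfer.fsckObjects acts as the default for the other two when they are unset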
[19:04:05] <ptx0> if you store ceph on zfs you get double checksum
[19:04:09] <ptx0> double better
[19:04:14] <ptx0> double safe!
[19:05:47] <DeHackEd> I might actually be doing something like that later this year
[19:05:57] <DeHackEd> though ZFS itself probably won't be doing any mirrors or RAID-Z...
[19:07:32] <PMT> i also enjoyed the fun demo of superobj repos on github
[19:08:41] <DeHackEd> first I'll need moar disks
[19:09:38] <PMT> ptx0: i know people who did that.
[19:13:21] * ptx0 does that
[19:13:40] <ptx0> i had to fight tooth and nail to make sure it's ceph on top of ZFS and not the other way around
[19:14:00] <ptx0> the perseverance and ignorance of some people astounds me
[19:14:21] <PMT> yeah i uh
[19:14:27] <PMT> know that feeling
[19:15:01] <ptx0> "but if you put zfs on ceph you only access the cluster via one host and then if it goes down you have to wait for import, and, the storage protocol would be limited to iscsi/nfs/cifs and not native rbd..."
[19:15:13] <ptx0> there are so many fuckin reasons not to do it
[19:25:50] *** Albori <Albori!~Albori@216-229-75-72.fidnet.com> has joined #zfsonlinux
[19:37:18] <zfs> [zfsonlinux/zfs] Python 2 and 3 compatibility (#8096) comment by loli10K <https://github.com/zfsonlinux/zfs/issues/8096>
[19:38:40] <DeHackEd> well no, you don't dedicate a ceph cluster to a single ZFS pool...
[19:39:10] <DeHackEd> this is more a matter of trying to answer the age old question: do you run ZFS on ZFS volume? (where "ZFS" could be swapped for Ceph in one of those two places)
[19:39:58] <ptx0> but then i could just snapshot and send a single volume instead of the individual datasets in the pool
[19:40:16] <ptx0> imagine that, never needing to use send -R
[19:40:27] <DeHackEd> but then the VM itself lacks the ability to access snapshots so readily
[19:40:35] <ptx0> it has its own zfs pool yo
[19:40:38] <ptx0> it can snapshot inside
[19:41:04] <DeHackEd> yeah... raw disks -> ceph (for VM cluster reasons) -> VM -> zfs (for snapshots, compression, etc) -> apps
[19:41:19] <DeHackEd> (ceph does support snapshots, but I'm a bit confused by what you can and can't do with it)
[19:41:25] <ptx0> me too
[19:41:28] <ptx0> :P
[19:41:44] <DeHackEd> oh good, glad I'm not completely retarded
[19:43:02] <DeHackEd> there's also weird shit like a cluster consists of multiple pools and I think you can snapshot/clone a volume into another pool or something... it's weird...
[19:43:19] <zfs> [zfsonlinux/zfs] Python 2 and 3 compatibility (#8096) comment by Neal Gompa <https://github.com/zfsonlinux/zfs/issues/8096>
[19:59:20] <ptx0> aw man
[19:59:30] <ptx0> the modem we've got here at my new place only has a 100mbps internal switch
[19:59:31] <ptx0> wtf
[19:59:37] <ptx0> the plan is 300mbps
[20:02:08] <ptx0> oh, wait
[20:02:13] <ptx0> maybe my switch in my room is going apeshit
[20:09:09] <ptx0> verdict says: it was the ethernet cable
[20:09:47] <ptx0> 330mbps internet speeds now
[20:09:53] <Lalufu> 4 pins connected only?
[20:09:53] <ptx0> much more reasonable than the 95mbps-ish
[20:09:58] <ptx0> no idea
[20:10:08] <ptx0> i can just chop the ends off and put new ones
[20:10:17] <ptx0> but i'll probably forget and just go through this again someday
[20:10:48] <gchristensen> throw it away?
[20:11:13] <ptx0> it is a 50 metre length of cable...
[20:11:24] <gchristensen> cut the ends off now, then? :)
[20:11:26] <ptx0> do you just throw everything away that is repairable?
[20:11:35] <ptx0> now that's the first sensible thing you've said all day :)
[20:11:39] <gchristensen> it is not!
[20:11:48] <ptx0> oh, back to insanity..
[20:12:07] <gchristensen> I throw away things which are low value and useless to me and cause me trouble, and saves me trouble later.
[20:12:23] <ptx0> you must be popular in Family Court
[20:12:50] <gchristensen> speaking of insanity
[20:16:55] *** DzAirmaX <DzAirmaX!~DzAirmaX@unaffiliated/dzairmax> has quit IRC (Quit: We here br0.... xD)
[20:17:31] *** DzAirmaX <DzAirmaX!~DzAirmaX@unaffiliated/dzairmax> has joined #zfsonlinux
[21:03:13] *** hoonetorg <hoonetorg!~hoonetorg@77.119.226.254.static.drei.at> has quit IRC (Read error: Connection reset by peer)
[21:27:45] *** Shinigami-Sama <Shinigami-Sama!~xero@unaffiliated/setsuna-xero> has joined #zfsonlinux
[21:34:48] <bunder> ptx0 | do you just throw everything away that is repairable? -- a broken 50ft cable probably, who wants to splice that shit back together and look at the hack job
[21:46:05] <bunder> https://zfsonfreebsd.github.io/ZoF/
[21:46:14] <bunder> well it has a website now
[21:47:21] *** elxa <elxa!~elxa@2a01:5c0:e09a:5f21:73d:6156:13c5:f51e> has quit IRC (Ping timeout: 250 seconds)
[21:56:44] <ptx0> dude, that description
[21:56:45] <ptx0> ZFS on Linux is an advanced file system and volume manager which was originally developed for Solaris and is now maintained by the OpenZFS community. ZoF is the work to bring FreeBSD support into the ZoL repo.
[21:56:50] <ptx0> who the hell wrote that lol
[21:56:59] <ptx0> ZFS on Linux was developed for Solaris?
[21:58:19] <Markow> haha
[21:58:30] <Markow> I couldn't help laugh
[21:58:38] <DeHackEd> no, it's funny, carry on laughing
[21:59:01] <ptx0> mm that project seems stupid
[21:59:17] <ptx0> it has one person, not even one of the principal freebsd zfs developers, contributing to it
[21:59:30] <DeHackEd> which project? discard ZFS in BSD, replace with Linux variant?
[21:59:44] <Markow> It's a good catch
[21:59:44] <ptx0> the out-of-tree ZoL port to FreeBSD
[22:00:02] <bunder> i'm sure more will contribute once they can get it working
[22:00:21] <ptx0> i doubt it
[22:00:22] <bunder> i'm guessing its not in a working state at the moment
[22:00:27] <ptx0> this is the same dude who forked freebsd a couple times
[22:01:44] <ptx0> he seems to have an axe to grind with just about every one of the freebsd zfs developers
[22:01:50] <ptx0> so, i don't think this project is going to go anywhere
[22:02:01] <ptx0> other than, in circles, on his own hardware
[22:02:22] * bunder shrug
[22:02:27] <bunder> someone has to do the work
[22:02:49] <bunder> i doubt allan is gonna do it all by himself
[22:03:37] <ptx0> allan isn't the only zfs developer, just one of the people not being consulted on the changes
[22:03:40] <ptx0> :P
[22:04:00] <DeHackEd> am I a ZFS developer?
[22:04:07] <DeHackEd> (no really, I'm not sure and want to know)
[22:04:26] <bunder> in a loose sense i guess so, but not officially :P
[22:04:47] <ptx0> no but you are a stakeholder.
[22:05:51] *** donhw <donhw!~quassel@host-184-167-36-98.jcs-wy.client.bresnan.net> has quit IRC (Quit: https://quassel-irc.org - Chat comfortably. Anywhere.)
[22:06:18] *** donhw <donhw!~quassel@host-184-167-36-98.jcs-wy.client.bresnan.net> has joined #zfsonlinux
[22:07:29] <ptx0> is it a bad omen that Steven Hartland's username is smh?
[22:08:04] <ptx0> his main gripe with switching freebsd-zfs for zfsonlinuxonfreebsd is that ZoL lacks TRIM, lmao
[22:08:34] <ptx0> "so then respond to the email i CC'd you on a week ago to review the TRIM PR" was mmacy's response
[22:14:54] <DeHackEd> trim isn't even that big a deal. modern SSDs perform very well without it. if your system is bottleneck'd on untrim'd disks, you have more serious issues
[22:15:49] <ptx0> it's for thin provisioned storage aiui
[22:16:04] <ptx0> at least thats why /i/ care about trim
[22:16:20] <ptx0> zpool initialize provides an interesting middle ground there
[22:16:27] <bunder> yeah i was about to say
[22:16:36] <bunder> that initialize pr is floating around here somewhere
[22:16:40] <ptx0> it's a neat way of bypassing snapshots instead of doing dd if=/dev/zero of=../file/...
[22:16:41] <DeHackEd> TRIM + zpool initialize = endless fun
[22:17:03] <ptx0> think it was merged, no?
[22:17:26] <DeHackEd> #8230 no
[22:17:27] <zfs> [zfs] #8230 - OpenZFS: 'vdev initialize' feature by behlendorf <https://github.com/zfsonlinux/zfs/issues/8230>
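For reference, the interface from that PR as it later shipped in 0.8 (pool name hypothetical):

    zpool initialize tank       # start writing the init pattern to unallocated space on every vdev
    zpool initialize -s tank    # suspend it
    zpool initialize -c tank    # cancel it
    zpool status tank           # progress is reported per vdev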
[22:25:34] *** hoonetorg <hoonetorg!~hoonetorg@77.119.226.254.static.drei.at> has joined #zfsonlinux
[22:46:56] *** gerhard7 <gerhard7!~gerhard7@ip5657ee30.direct-adsl.nl> has quit IRC (Quit: Leaving)
[22:59:21] *** beardface <beardface!~bearface@unaffiliated/bearface> has quit IRC (Quit: Lost terminal)
[23:10:56] *** shibboleth <shibboleth!~shibbolet@gateway/tor-sasl/shibboleth> has quit IRC (Quit: shibboleth)
[23:15:20] *** beardface <beardface!~bearface@unaffiliated/bearface> has joined #zfsonlinux
[23:22:59] <phantomcircuit> i've followed the install instructions for debian stretch, but the zfs module isn't being loaded
[23:23:02] <phantomcircuit> am i missing something?
[23:29:22] <DeHackEd> are the startup services installed and enabled? might be just "zfs" (old) or "zfs-import" + "zfs-mount"
[23:30:05] *** King_InuYasha <King_InuYasha!~King_InuY@fedora/ngompa> has quit IRC (Read error: Connection reset by peer)
[23:35:04] <phantomcircuit> DeHackEd, zfs.target zfs-zed.service zfs-mount.service zfs-share.service are all loaded
[23:35:55] <PMT> phantomcircuit: what does dkms status say?
[23:36:06] <phantomcircuit> the 3 services are failed with "The ZFS modules are not loaded.\nTry running '/sbin/modprobe zfs' as root to load them."
[23:36:17] <DeHackEd> and what does that modprobe command say?
[23:36:29] <phantomcircuit> modprobe zfs works fine, but they dont load on boot so the services fail
[23:41:28] <PMT> so regenerate your initramfs or equivalent
[23:41:53] <phantomcircuit> i did
[23:42:14] <phantomcircuit> (twice actually, once after the apt install triggered one as well)
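On Debian, the regeneration step being described is typically:

    update-initramfs -u -k all    # rebuild the initramfs for every installed kernel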
[23:42:42] <PMT> are the modules in the initramfs?
[23:43:45] <phantomcircuit> i'll try again i guess
[23:43:51] <PMT> I'm not asking you to try again.
[23:43:54] <PMT> I'm asking you if they're there.
[23:44:08] <phantomcircuit> maybe zfs needs to be loaded when the initramfs is generated ? lets find out
[23:44:20] <PMT> No, that's not how anything works.
[23:45:00] *** EHG- <EHG-!~EHG|@unaffiliated/ehg-> has quit IRC (Quit: EHG-)
[23:45:38] <PMT> phantomcircuit: when you say modprobe zfs works fine, do you mean from the rescue shell prompt you get from the initramfs, or
[23:45:43] *** EHG- <EHG-!~EHG|@unaffiliated/ehg-> has joined #zfsonlinux
[23:46:23] <phantomcircuit> PMT, im not booting to zfs
[23:47:09] <PMT> Okay. So you're talking about it being loadable after it boots. I assumed you said "followed the instructions" because you were, indeed, following the root-on-zfs instructions.
[23:47:48] <bunder> i notice you didn't mention the import service, it probably loads the module there
[23:47:53] <PMT> phantomcircuit: if you could pastebin the output from dpkg -l | egrep 'ii (spl|zfs|libuutil|libnvpair|libzfs|libzpool)' it might useful.
[23:47:57] <PMT> might be, even
[23:49:05] <phantomcircuit> well grep says zfs is in the initrd file
[23:49:06] <PMT> bunder: so, on my Debian stretch system that loads ZFS on boot: lrwxrwxrwx 1 root root 9 Nov 19 06:32 /lib/systemd/system/zfs-import.service -> /dev/null
[23:49:26] <PMT> phantomcircuit: did you unpack the initramfs and see if the module is there, or did you just grep the binary and hope for the best
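On Debian there are tools for checking the image contents directly rather than grepping the compressed file (kernel version taken from the running system):

    lsinitramfs /boot/initrd.img-$(uname -r) | grep -i zfs           # list the zfs module and scripts if present
    unmkinitramfs /boot/initrd.img-$(uname -r) /tmp/initrd-contents  # or unpack the whole image for inspection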
[23:52:42] <phantomcircuit> dpkg https://paste.debian.net/1058818/
[23:52:47] *** ShellcatZero <ShellcatZero!~ShellcatZ@cpe-66-27-89-254.san.res.rr.com> has quit IRC (Ping timeout: 240 seconds)
[23:52:48] <phantomcircuit> systemctl zfs entries https://paste.debian.net/1058819/
[23:53:02] <PMT> phantomcircuit: you may find the package "zfs-initramfs" useful for your life.
[23:54:02] <PMT> You may not. It may be unnecessary. But if it's not in the initramfs, and there's a package that adds initramfs hooks for it, that'd be my suggestion absent more data.
[23:54:30] <PMT> If it is in the initramfs, then this becomes more interesting.
[23:55:07] <bunder> shouldn't need it if you're not booting off it, i'm no systemd guy but that output says the import scripts are dead
[23:56:17] <PMT> You probably only want zfs-import-cache unless you want it to try importing everything it sees on boot, versus whatever it thought was imported before.
[23:56:47] <phantomcircuit> bunder, zfs-import-cache.service is indeed what loads the module
[23:57:41] <bunder> so why is it dead :P
[23:57:51] <phantomcircuit> which seems to be in the preset file but is still inactive
[23:58:19] <phantomcircuit> bunder, cause systemd is nonsense
[23:58:29] <bunder> indeed :)
[23:59:37] <phantomcircuit> Condition: start condition failed at Sat 2019-01-05 17:44:53 EST; 13min ago
[23:59:42] <phantomcircuit> my only clue to why it's not started
[23:59:49] <bunder> lovely
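The likely explanation for that condition failure (an assumption, not confirmed in the log) is that zfs-import-cache.service carries a ConditionPathExists on the pool cache file, so the unit is silently skipped when the file is missing. Checking and regenerating it would look roughly like this ("poolname" is a placeholder):

    systemctl cat zfs-import-cache.service | grep -i condition    # expect something like ConditionPathExists=/etc/zfs/zpool.cache
    zpool set cachefile=/etc/zfs/zpool.cache poolname              # recreate the cache file for an imported pool, then reboot to verify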