#zfsonlinux - January 9, 2019
[00:00:12] <ptx0> github runs on aws
[00:00:15] <ptx0> :D
[00:00:27] * ptx0 kidding btw
[00:00:39] <bunder> i guess it could be worse, it could be oracle :P
[00:00:39] <PMT> ptx0: I would be surprised if migrating that wasn't on their timetable, if they were.
[00:00:47] <jasonwc> I want to replace my rpool which currently uses two old 240GB SSDs with two modern 1TB SSDs. I plan to create the manual partition for GRUB, and then allocate about 80% of the disk to ZFS with the remaining 20% for overprovisioning. I'm just going to do a zfs replace. Will it automatically use all available space or will I need to expand it after the replace completes?
[00:01:07] <PMT> O-oh. Github has its own IPv4 allocation.
[00:01:59] <jasonwc> Did Amazon ever publicly announce the storage medium for AWS Glacier?
[00:02:02] <PMT> jasonwc: it should use all available space after both devices are replaced, at worst requiring online -e on each disk. That said, you could also probably tell the SSD to just hide the remaining 20% of the disk (many SSD vendor tools let you specify "even more overprovisioning")
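(A minimal sketch of the replace-and-grow workflow PMT describes, assuming a two-way mirror named rpool and hypothetical by-id device names; the GRUB partitioning step is omitted:)
    # replace one old SSD at a time, letting each resilver finish before the next
    zpool replace rpool ata-OLDSSD1-part2 ata-NEWSSD1-part2
    zpool replace rpool ata-OLDSSD2-part2 ata-NEWSSD2-part2
    # once both replaces complete, grow into the new space if it didn't happen automatically
    zpool set autoexpand=on rpool
    zpool online -e rpool ata-NEWSSD1-part2
    zpool online -e rpool ata-NEWSSD2-part2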
[00:02:16] <PMT> jasonwc: AFAIK no, but everyone is p. certain it's tape or optical media jukeboxes based on the lead times
[00:02:50] <jasonwc> PMT: I was asking because an Amazon employee told me the medium but I never saw the answer publicly confirmed
[00:03:27] <jasonwc> PMT: What's the advantage to hiding the additional space rather than just creating a partition and leaving it unused?
[00:03:31] <PMT> "In 2012, ZDNet quoted a former Amazon employee as saying that Glacier is based on custom low-RPM hard drives attached to custom logic boards where only a percentage of a rack's drives can be spun at full speed at any one time."
[00:03:42] <PMT> jasonwc: not being able to accidentally use it, more or less.
[00:03:43] <jasonwc> Yeah, that's not what I was told
[00:03:54] <jasonwc> PMT: It was one of your proposed mediums
[00:04:12] <jasonwc> I specifically asked about the low RPM drives and he said it wasn't that
[00:04:24] <PMT> jasonwc: I would suspect it's optical media based on everyone seeming to agree lots of the parts are commodity storage.
[00:04:40] <PMT> And tape isn't really "commodity" outside of enterprise envs.
[00:04:56] <jasonwc> At the scale that they purchase, tape should be pretty cheap
[00:05:11] <PMT> Yes, but BD-R is even cheaper.
[00:05:30] * PMT shrugs
[00:05:42] <PMT> The actual implementation is academic unless I was trying to plan for a post-EMP recovery scenario
[00:06:01] <jasonwc> I've never found optical media all that reliable for long term storage
[00:06:05] <bunder> optical discs aren't perfect either, they have disc rot
[00:06:15] <jasonwc> Tape is pretty much designed for this
[00:06:22] <ptx0> PMT: i saw that at the same time you did when i whois'd to see out of curiosity
[00:06:32] <jasonwc> maintain the environment within spec and it'll last for decades
[00:06:47] <ptx0> tried traceroute and it goes straight from ISP to Github
[00:06:48] <ptx0> ahahaha
[00:06:51] <PMT> I basically had almost no rot from the era of dual-layer DVD onward, when I bulk-recovered optical media.
[00:07:16] <PMT> (An associate of mine had an optical reader jukebox that he drove with his own glue code and permitted me to use it.)
[00:07:34] <jasonwc> I had some Taiyo Yuden single-layer DVD-R disks refuse to read after ~5 years. They were supposed to be among the best.
[00:07:38] <bunder> i've never investigated how bad it is myself, i know i have a few cdr/rw's with the label flaking off
[00:07:51] <jasonwc> that and Verbatim
[00:08:10] <bunder> verbatim are supposed to be good, odd
[00:08:11] * PMT shrugs
[00:08:15] <PMT> I know T-Y shut down.
[00:08:22] <jasonwc> yeah, this was many years ago
[00:08:48] <jasonwc> Anyway, I wouldn't trust optical media for long-term storage. HDDs with semi-regular scrubs or tape seem better.
[00:08:57] <bunder> i have a couple old spindles hanging around, i forget if they're maxell, which are also supposed to be good
[00:09:48] <jasonwc> Generally people say if you have less than 75/100TB of data, don't bother with tape as it's not worth the hassle/cost
[00:09:50] <PMT> Optical media degrades more gracefully - e.g. I've seen almost no optical discs that were entirely unreadable, and there are _definitely_ hard drives where that statement is false.
[00:10:16] <zfs> [zfsonlinux/zfs] port async unlinked drain from illumos-nexenta (#8142) new review comment by Matthew Ahrens <https://github.com/zfsonlinux/zfs/pull/8142#discussion_r246193853>
[00:10:22] <PMT> i guess if I visited one of those parts of the world that have organisms which eat the adhesive between layers of optical media i might have some fun
[00:10:41] <bunder> i have a tub of old drives i need to go through one of these days, i know all i'm gonna find are mp3's and nes roms lol
[00:10:44] <jasonwc> PMT: heh, well IIRC, my issue was exactly that with the optical media - unreadable. I actually have the disks. I even have a DVD drive in my desktop - just disconnected, heh.
[00:11:02] <Shinigami-Sama> almost all my optical media is nearing unreadability, and it's been stored under pretty close to ideal conditions: jewel cases, vertical, in another case, in a cool, dark and dry environment
[00:11:18] <Shinigami-Sama> the archival ones died first as I recall
[00:11:20] <jasonwc> Shinigami-Sama: How old?
[00:11:25] <PMT> I actually have 3 BD multilayer burners because every time I want to read optical media I forget I didn't have the old one break or borrowed, so I buy a new one for like $20
[00:11:35] <PMT> And then remember I own one a little after it arrives
[00:11:39] <jasonwc> In my experience, they work fine for a few years and then after about 5 you get unreadable disks
[00:11:41] <Shinigami-Sama> 7-14y
[00:11:55] <PMT> The oldest ones I got archival data from was ~15y
[00:12:31] <Shinigami-Sama> I can see data, and I can try and copy it off, but they've got non-negligible bitrot and you can't copy them without dd magic
[00:13:04] <jasonwc> Hasn't tape been tested for 30 years+?
[00:13:30] <jasonwc> This makes me want to test some more of those old disks. They should be over 10 years by now.
[00:13:31] <Shinigami-Sama> I've pulled data off tapes older than me
[00:13:49] <ptx0> all my old optical media forever ruined
[00:13:55] <ptx0> dunno how but they developed holes
[00:14:00] <jasonwc> lol
[00:14:05] <ptx0> the data layer flaked away
[00:14:14] <ptx0> they were stored in a spindle thing vOv
[00:14:15] <jasonwc> yeah, I've heard about that happening
[00:14:21] <jasonwc> high humidity?
[00:14:25] <ptx0> dunno
[00:14:33] <ptx0> don't think so
[00:14:42] <ptx0> generally in an air conditioned house on a shelf
[00:14:51] <bunder> friction from the discs above maybe
[00:14:59] <PMT> http://www.ioccc.org/2018/mills/hint.html the person who wrote this is a witch
[00:15:00] <bunder> storing them on the spindle is probably bad
[00:15:04] <jasonwc> I live in an area with very high humidity (around D.C.). HVAC reduces the humidity but you get a pretty significant variation over the course of a year.
[00:15:10] <ptx0> yeah i thought about it but once i put a perfectly fine disc into the drive and then it had holes
[00:15:25] <bunder> eww
[00:15:26] <ptx0> so probably some movement in the 52x+ drive
[00:15:49] <ptx0> just a guess though
[00:16:00] <jasonwc> Does the burning speed matter? People claimed burning at slower speeds helped. I verified all the disks after burning them so they were fine after writing.
[00:16:07] <PMT> yes
[00:16:11] <PMT> I forget why
[00:16:34] <jasonwc> that makes discs pretty unattractive as the recommended writing speed was really slow - 2x or 4x for DVD-R
[00:16:48] <jasonwc> Both tape and hard drives are much faster
[00:16:50] <ptx0> because the laser focus matters
[00:17:04] <PMT> I thought it was properties of how the crystalline structure warped.
[00:17:16] <PMT> Which is also why you couldn't buy CD-RWs above 4x, lmao.
[00:17:19] <ptx0> Technology Connections on YT explains it well.
[00:17:53] <bunder> i thought it was the buffer
[00:18:05] <bunder> with a limited buffer you can't hold enough to burn at 52x
[00:18:30] <bunder> and they didn't want to put like a gig of memory into the things
[00:18:35] <bunder> so 8x was all you got on write
[00:18:45] <ptx0> you can use high rate discs burned at a slow speed but accuracy suffers
[00:18:45] <PMT> bunder: IIRC 52x made the discs come apart sometimes
[00:18:51] <ptx0> in general faster discs are less accurate
[00:19:13] <bunder> PMT: oh i have a video for you
[00:20:10] <ptx0> https://en.wikipedia.org/wiki/CD_and_DVD_writing_speed
[00:20:48] <ptx0> A higher writing speed results in a faster disc burn, but the optical quality may be lower (i.e. the disc is less reflective). If the reflectivity is too low for the disc to be read accurately, some parts may be skipped or it may result in unwanted audio artifacts such as squeaking and clicking sounds. For optimal results, it is suggested that a disc be burnt at its rated speed.[6][7]
[00:20:56] <ptx0> i.e. laser focus
[00:21:06] <bunder> https://www.youtube.com/watch?v=hf90IKKgxks
[00:21:33] <ptx0> when that explodes it's gonna be fun
[00:21:57] <bunder> (no i wasn't trying to make it louder, i was trying to press the eject button)
[00:22:16] *** veegee <veegee!~veegee@ipagstaticip-3d3f7614-22f3-5b69-be13-7ab4b2c585d9.sdsl.bell.ca> has joined #zfsonlinux
[00:22:44] <bunder> that disc was unbalanced from the day i got it, it was even loud at 4x
[00:23:08] <Shinigami-Sama> I burned thousands of discs but never exploded one...
[00:23:24] <PMT> that reminds me of the stupid fix for some optical read issues on game consoles.
[00:23:25] <Shinigami-Sama> but some of my friends? oh boy... they blew one up a week it seemed like
[00:23:50] <PMT> Put two pieces of tape parallel to each other on opposite halves of the disc, it made it more stable.
[00:24:12] <Shinigami-Sama> but but... lightscribe!
[00:27:42] <zfs> [zfsonlinux/zfs] Blocked I/O with failmode=continue (#7990) comment by stuartthebruce <https://github.com/zfsonlinux/zfs/issues/7990#issuecomment-452489221>
[00:28:06] <ptx0> lightscribe DVDs > lightscribe CDs
[00:28:10] <ptx0> in general, DVDs > CDs
[00:28:25] <ptx0> they are sandwiched in the middle of the disc rather than on the surface so the data layer doesn't flake away
[00:28:26] <Shinigami-Sama> DVDs seperated though
[00:28:29] <ptx0> shh
[00:28:36] <ptx0> it didn't FLAKE though
[00:28:45] <Shinigami-Sama> I pulled one out of the tray and it peeled like a banana
[00:29:23] <ptx0> so you ate it?
[00:30:48] <Shinigami-Sama> no, I threw it at a friend like a sharp frisbee
[00:31:58] <PMT> DVDs have much better error correction, if memory serves.
[00:32:07] <PMT> And BD even more.
[00:32:20] <Shinigami-Sama> it was still 8/10 I think
[00:33:15] <chesty> ptx0, it works fine with 4.15. I have nothing against usb storage, I'm using it for backups and I've never had a problem with it before now. I guess I could esata but external usb drives are readily available.
[00:34:12] <ptx0> using a single device for backups is pretty bad too
[00:34:17] <ptx0> zpool scrub can't repair anything
[00:34:42] <ptx0> well it can repair metadata
[00:34:43] *** jasonwc <jasonwc!~jasonwc@pool-72-66-15-203.washdc.fios.verizon.net> has quit IRC (Read error: Connection reset by peer)
[00:34:48] <PMT> apparently they use much cleverer reed-solomon on BD drives
[00:35:21] <PMT> If I'm digesting this right, they do use an 8:10 code, but then they also have wider codes over larger areas in case they need to recover from more than one bit.
[00:36:03] <chesty> sure, if both my ssd and hdd die at the same time I lose data. if my house burns down I lose data regardless of how many hdds I have at home too.
[00:36:16] <PMT> That's true.
[00:36:27] <PMT> see also: Iron Mountain
[00:38:12] <chesty> cheers, I do have some stuff on mega, but not full or automated atm. I'll find some sort of offsite solution soon
[00:38:47] <PMT> tarsnap is somewhat popular. Backblaze and others (in addition to the obvious) offer archival-tier block storage solutions.
[00:39:24] <PMT> I'm probably obligated to plug Google Cloud Storage.
[00:40:26] <Shinigami-Sama> ptx0: copies=2
[00:40:42] <zfs> [openzfs/openzfs] Merge remote-tracking branch 'illumos/master' into illumos-sync (#730) comment by Prakash Surya <https://github.com/openzfs/openzfs/issues/730#issuecomment-452492177>
[00:40:47] <PMT> Shinigami-Sama: yes but that doesn't save you from a drive eating shit.
[00:41:16] <Shinigami-Sama> doesn't everyone have a clean room and spare drives to swap spindles/microsolder with?
[00:41:30] <PMT> I've never had good enough reason to try that trick.
[00:43:14] <chesty> I have the most boring data to back up, but any solution I choose will need to have (I think the term is) zero-knowledge encryption, or maybe local encryption, i.e. I encrypt the data locally and my provider doesn't have the private key
[00:45:25] *** apekatten <apekatten!~apekatten@unaffiliated/apekatten> has quit IRC (Quit: no reason)
[00:45:56] *** apekatten <apekatten!~apekatten@unaffiliated/apekatten> has joined #zfsonlinux
[00:46:45] <Shinigami-Sama> zfs send -> flat file -> bcrypt -> ??? -> storage target
[00:47:04] *** jasonwc <jasonwc!~jasonwc@pool-72-66-15-203.washdc.fios.verizon.net> has joined #zfsonlinux
[00:50:55] <bunder> PMT | see also: Iron Mountain
[00:51:06] <bunder> you mean the company that loses people's data all the time
[00:52:05] <DHowett> i thought iron mountain existed solely to lose peoples' data
[00:52:12] <DHowett> i.e. it was a shredding company. now you're telling me it does other things?
[00:52:32] <bunder> In May 2005, Time Warner disclosed that a container of 40 unencrypted backup tapes containing the personal information of 600,000 current and former employees had disappeared while being transported in an Iron Mountain van that made 18 other stops in Manhattan that day. After the loss, Time Warner began encrypting its tapes, and Iron Mountain advised its other clients to do the same
[00:52:52] <Shinigami-Sama> DHowett: they're the largest consensual data backup provider
[00:53:10] <Shinigami-Sama> we had a client who had IM restore data from
[00:53:21] <Shinigami-Sama> LTO-2 tapes
[00:53:25] <bunder> i'll give it that most of their losses are from fires
[00:53:39] <Shinigami-Sama> and spin them on LTO-7 for them
[00:53:47] <bunder> but is it too much to ask to lock your truck when you leave it unattended
[00:54:43] *** troyt <troyt!zncsrv@2601:681:4100:8981:44dd:acff:fe85:9c8e> has quit IRC (Quit: AAAGH! IT BURNS!)
[00:54:44] <Shinigami-Sama> something something insurance?
[00:54:57] *** troyt <troyt!zncsrv@2601:681:4100:8981:44dd:acff:fe85:9c8e> has joined #zfsonlinux
[00:55:19] <bunder> something something i'm putting all this data on pastebin
[00:55:27] <jasonwc> kind of crazy that they were storing offsite backups unencrypted
[00:56:22] <Shinigami-Sama> jasonwc: Canadian Student Loans also did the same thing... with SIN (SS) numbers, bank info... mother's maiden name etc... not too long ago
[00:56:41] <Shinigami-Sama> the class action just finished up last year?
[00:57:01] <zfs> [zfsonlinux/zfs] Use ZFS version for pyzfs and remove unused requirements.txt (#8243) merged by Brian Behlendorf <https://github.com/zfsonlinux/zfs/issues/8243#event-2061656164>
[00:58:25] <chesty> Shinigami-Sama, I'll do some testing with zfs send -> flat file -> bcrypt -> ??? -> storage target, I didn't know how to do it so cheers
[00:59:04] <Shinigami-Sama> yeah you can send to flat files, they're quite large though, so you may want to inline an lzma or gzip in there
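(A hedged sketch of the send-to-flat-file pipeline discussed above, with gzip inlined as Shinigami-Sama suggests and gpg symmetric encryption standing in for the "bcrypt" step; pool, dataset and file names are made up:)
    zfs snapshot tank/data@offsite-2019-01-09
    zfs send tank/data@offsite-2019-01-09 | gzip | \
        gpg --symmetric --cipher-algo AES256 -o /backup/data-offsite.zfs.gz.gpg
    # restore later by reversing the pipeline
    gpg --decrypt /backup/data-offsite.zfs.gz.gpg | gunzip | zfs receive tank/restored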
[00:59:35] <DeHackEd> I'm receiving to a LUKS-based pool that has mountpoint=none on all datasets
[00:59:47] <DeHackEd> I own the hardware there
[01:01:47] <chesty> that's a possibility, an rpi and (sorry ptx0) usb hdd at a mates place
[01:04:50] <ptx0> but why
[01:04:58] <ptx0> you can get an ARM board with real SATA ports
[01:05:03] <ptx0> banana pi
[01:05:20] <chesty> oh, ok. I didn't think of that. cheers
[01:05:39] <FinalX> I'm using an rpi 3 with a usb hdd that I was already running for remote shell access to my home.. but anything prior to the newest rpi only has 100mbit
[01:05:48] <chesty> I'll do that for sure, then I can get a cheaper 5.25 internal hdd
[01:06:04] <FinalX> and it won't even go at full 100mbit here .. :P even though it's directly hooked into my 500mbit fiber, so it's the pi that won't keep up
[01:06:12] <FinalX> better off getting something like ptx0 said .. that's for sure :)
[01:06:27] <zfs> [zfsonlinux/zfs] port async unlinked drain from illumos-nexenta (#8142) new review comment by Matthew Ahrens <https://github.com/zfsonlinux/zfs/pull/8142#pullrequestreview-190502269>
[01:06:28] <ptx0> 5.25"?
[01:06:34] <ptx0> quantum bigfoot?
[01:06:38] <chesty> oh, soz
[01:06:40] <chesty> 3.5
[01:06:45] <DeHackEd> I assume some kind of 3.5" adaptor
[01:06:48] <FinalX> you can fit 2 pi's in a 5.25" enclosure ;)
[01:06:49] <chesty> not the laptop size I mean
[01:06:55] <DeHackEd> 5.25 is CD-ROM size
[01:08:36] <chesty> FinalX, 100mb is luxury, internet in australia sucks, it's 90s technology and infrastructure, maybe 00s if you're being generous. I'd have to rate limit backups to 1mbps
[01:09:13] <FinalX> ew
[01:10:33] <FinalX> tbh it's not so bad, my zrep sync only takes about 10 mins to sync over incrementals, it's just the initial one that's really long
[01:11:13] <FinalX> zrep syncing to a local backup-dataset on another stripe, and every day I zrep sync it to my pi at home as well for offsite
[01:11:18] <chesty> which is enough for /home, but I couldn't do a full system backup, a large package upgrade like libreoffice would take many hours, maybe days to catch up (I haven't done any calculations)
[01:11:20] <FinalX> I should let the pi pull though, instead of push
[01:11:42] <DeHackEd> I'm running: zfs send ... | xz -9e | ssh "xz -dc | zfs receive ..."
[01:11:43] <FinalX> ha, try Plex with 160GB of miniscule files
[01:11:50] <DeHackEd> because I also have <1 megabit upload on my DSL line
[01:12:12] <FinalX> I'm just sending the already compressed blocks, decompressing on the pi side makes things slower
[01:12:20] <zfs> [zfsonlinux/zfs] port async unlinked drain from illumos-nexenta (#8142) new review comment by Jorgen Lundman <https://github.com/zfsonlinux/zfs/pull/8142#discussion_r246213941>
[01:12:24] <FinalX> plus it wouldn't compress much more than lz4, I suppose
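(For reference, a sketch of the compressed-record variant FinalX describes, shipping the already lz4-compressed blocks instead of recompressing; zfs send -c exists from 0.7 onward, and host and dataset names here are hypothetical:)
    # incremental send of already-compressed records, received unmounted on the pi
    zfs send -c -i tank/media@yesterday tank/media@today | \
        ssh pi@offsite "zfs receive -u backup/media"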
[01:13:07] <FinalX> both the server and pi run ubuntu 18.04 w/ 0.7.5 at least :)
[01:13:22] <FinalX> building the zfs module on the pi takes over an hour though
[01:13:54] <chesty> raspbian doesn't come with zfs I guess?
[01:14:15] <FinalX> not that I know of
[01:14:22] <zfs> [zfsonlinux/zfs] port async unlinked drain from illumos-nexenta (#8142) new review comment by Jorgen Lundman <https://github.com/zfsonlinux/zfs/pull/8142#discussion_r246214300>
[01:15:02] *** mmlb2 <mmlb2!~mmlb@76-248-148-178.lightspeed.miamfl.sbcglobal.net> has quit IRC (Ping timeout: 258 seconds)
[01:15:48] <zfs> [zfsonlinux/zfs] Removed suggestion to use root dataset as bootfs (#8247) merged by Brian Behlendorf <https://github.com/zfsonlinux/zfs/issues/8247#event-2061721304>
[01:15:56] <chesty> that's a shame. I'd probably check if ubuntu have a version for the pi, I think they would have an arm version that would work, maybe not one that takes full advantage of the pi cpu
[01:17:30] <FinalX> they don't have an official release per se for the pi, but there's endorsed images referred to for download on the ubuntu site
[01:17:56] <FinalX> https://www.ubuntu.com/download/iot/raspberry-pi-2-3
[01:18:08] <chesty> sweet, I'll check it out. thanks
[01:18:21] <FinalX> oh, that's Ubuntu Core. I just had plain 18.04
[01:18:34] <FinalX> there should be a different download page still
[01:19:15] <bunder> if it's just 18.04, shouldn't that use a regular iso?
[01:19:17] <FinalX> ah, here https://wiki.ubuntu.com/ARM/RaspberryPi
[01:19:20] <FinalX> no
[01:19:31] <FinalX> pi doesn't boot like a normal device
[01:19:44] <FinalX> requires a special kernel and stuff
[01:20:26] <FinalX> I think I used that unofficial image ubuntu-18.04-preinstalled-server-armhf+raspi3.img.xz
[01:21:08] <bunder> server, for a rpi? don't they run xorg and stuff off those
[01:21:32] <bunder> its more desktop than anything if all you want is a cheap kodi/retropie box
[01:21:35] <FinalX> I never did at least, mine has always run headless
[01:22:06] <FinalX> but you can always install other things, obv.
[01:23:32] <FinalX> my point was more that the other one is Ubuntu Core, and requires you to sign up for Ubuntu's cloud stuff
[01:23:40] <FinalX> which most people don't want :)
[01:29:53] <bunder> ah, for some reason i thought it was a bundle package
[01:30:02] <bunder> like you install it and its all there
[01:33:21] <bunder> kindof like a livecd but a full desktop (like gentoo's former "admin cd")
[01:33:24] <Shinigami-Sama> I have an Rpi, and I thought I could do some cool things with it, but once I got Raspbian installed on it, I realized my phone was far more powerful and useful than it
[01:33:56] <bunder> and yet people run zfs off them, don't ask me how :P
[01:34:03] <Shinigami-Sama> I can't see anything other than maybe a thin-client setup being a reasonable use for it on a daily basis
[01:34:19] <Shinigami-Sama> maybe a controller for a homebrew fermentor...
[01:37:25] <PMT> bunder: just install it? :P
[01:37:31] <PMT> there's even raidz neon optimizations :P
[01:37:41] <jasonwc> I don't get this error. I'm trying to replace one disk with a new one and it says the old disk is in use in the pool. Yeah...
[01:37:41] <jasonwc> https://pastebin.com/YEgetGBL
[01:37:52] <bunder> PMT: yes but the perf must be awful
[01:37:57] <bunder> even with neon
[01:38:00] <jasonwc> man page says zpool replace pool old_disk new_disk
[01:38:11] <jasonwc> So, why the error?
[01:41:53] <zfs> [zfsonlinux/zfs] It would be nice to have a 'safe mode' for zfs and zpool commands (#4134) comment by Chris Smith <https://github.com/zfsonlinux/zfs/issues/4134#issuecomment-452515943>
[01:43:08] <bunder> uh
[01:43:11] <bunder> we have no-op
[01:46:22] <zfs> [zfsonlinux/zfs] It would be nice to have a 'safe mode' for zfs and zpool commands (#4134) comment by bunder2015 <https://github.com/zfsonlinux/zfs/issues/4134#issuecomment-452516796>
[01:46:27] <bunder> old bugs are old? dunno
[01:46:43] *** catalase <catalase!~catalase@unaffiliated/catalase> has quit IRC (Ping timeout: 245 seconds)
[01:48:10] *** catalase <catalase!~catalase@unaffiliated/catalase> has joined #zfsonlinux
[01:48:24] <jasonwc> Should I override with -f or use zfs attach for the new disk? I don't understand why it would complain that the old disk is part of the pool. Isn't that presumed if you're using zfs replace?
[01:49:30] <FinalX> uh
[01:49:37] <jasonwc> Error is: /dev/disk/by-id/ata-Crucial_CT240M500SSD1_14310CD5F5B8-part1 is part of active pool 'rpool-server' (that disk is a member of the pool that I want to replace)
[01:49:38] <FinalX> look at your command again, really well
[01:49:44] <FinalX> yeah, not that weird
[01:49:57] <FinalX> you're replacing the same disk and partition with the same disk and partition
[01:50:25] <jasonwc> oh, lol - paste didn't work. Thanks for noticing the obvious!
[01:51:24] <FinalX> though :)
[01:51:31] <FinalX> uh, -though
[01:51:50] <jasonwc> lol, this is the problem with replacing two disks from the same manufacturer
[01:51:57] <jasonwc> zpool replace rpool-server ata-Crucial_CT240M500SSD1_14310CD5F5B8-part1 ata-CT1000MX500SSD1_1844E1D502F8-part1
[01:52:00] <jasonwc> that worked :P
[01:52:05] <FinalX> yeah.. I have the same problem.. 3 identical disks with different serial numbers
[01:54:43] <ptx0> i don't always test my fixes fully, but this time i did, and still all the tests failed and i'm pulling my hair out wondering why, but then i see git mis merged somehow
[01:55:00] <ptx0> two pipes || instead of one |, my original branch is correct
[01:55:05] * ptx0 fist at sky
[01:59:06] <jasonwc> resilvering SSDs is super fast :)
[02:00:59] <DeHackEd> resilvering spinning disks with SSD MAC is also pretty good
[02:01:17] <Shinigami-Sama> jasonwc: unless you have the wrong ashift and a bad SSD, and get murdered by read-write-amplification hell
[02:01:33] <jasonwc> I set ashift=12 when I created the pool
[02:02:15] <jasonwc> and it autoexpanded as expected, nice
[02:02:54] <bunder> ptx0: whatcha fixing now :P
[02:04:04] <jasonwc> I'm really looking forward to a 0.8 stable release so I can utilize sequential resilvering and special allocation classes. L2ARC gave me a 0.05% hit rate, lol
[02:04:26] <jasonwc> tried with metadata-only and all, both are meh
[02:04:30] <DeHackEd> jasonwc: I know that feeling...
[02:04:48] <ss23> Who needs l2arc when you can just use all nvme SSD
[02:06:27] *** troyt <troyt!zncsrv@2601:681:4100:8981:44dd:acff:fe85:9c8e> has quit IRC (Quit: AAAGH! IT BURNS!)
[02:06:50] *** troyt <troyt!zncsrv@2601:681:4100:8981:44dd:acff:fe85:9c8e> has joined #zfsonlinux
[02:07:15] *** rjvbb <rjvbb!~rjvb@2a01cb0c84dee6003cdcccdf530d8636.ipv6.abo.wanadoo.fr> has quit IRC (Ping timeout: 252 seconds)
[02:09:07] *** Bhakimi <Bhakimi!~textual@208.78.139.170> has joined #zfsonlinux
[02:09:27] *** tlacatlc6 <tlacatlc6!~tlacatlc6@68.202.46.96> has joined #zfsonlinux
[02:10:02] <zfs> [zfsonlinux/zfs] port async unlinked drain from illumos-nexenta (#8142) new review comment by Alek P <https://github.com/zfsonlinux/zfs/pull/8142#discussion_r246223732>
[02:11:36] <jasonwc> ss23: Well, cost. 100TB of NVMe storage is not cheap
[02:11:58] <ss23> Yeah, there is that
[02:12:11] <Shinigami-Sama> jasonwc: friend of mine just ordered 5x that...
[02:13:08] <jasonwc> for work I imagine
[02:13:13] <jasonwc> this is my home server
[02:13:18] <jasonwc> and I'll be buying 200TB later this year
[02:13:27] <Shinigami-Sama> yes, something about scaling to >1M https connections/second
[02:16:00] <zfs> [zfsonlinux/zfs] bpobj_iterate overflows the stack (#7675) comment by Matthew Ahrens <https://github.com/zfsonlinux/zfs/issues/7675>
[02:30:02] *** Setsuna-Xero <Setsuna-Xero!~pewpew@unaffiliated/setsuna-xero> has quit IRC (Read error: Connection reset by peer)
[02:39:48] <zfs> [zfsonlinux/zfs] port async unlinked drain from illumos-nexenta (#8142) new review comment by Brian Behlendorf <https://github.com/zfsonlinux/zfs/pull/8142#discussion_r246235766>
[02:44:54] <jasonwc> I suppose I should manually disable compression on a dataset with recordsize=4k on a pool with ashift=12
[02:45:50] <jasonwc> or should I keep it on so that it can compress metadata - does metadata respect recordsize?
[02:46:42] <DeHackEd> metadata is compressed anyway
[02:47:01] <DeHackEd> also with features like embedded data small files can try to fit inside metadata and here compression might benefit you
[02:47:27] <DeHackEd> dunno if metadata respects recordsize, but the theory holds
[02:47:51] <jasonwc> ah, so it seems like it can't hurt to leave it on
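(A minimal sketch of that setup with a hypothetical pool name; compression stays on even though 4K user blocks rarely shrink below ashift=12, since metadata and embedded small blocks still benefit:)
    zfs create -o recordsize=4k -o compression=lz4 tank/vm-smallblock
    zfs get recordsize,compression,compressratio tank/vm-smallblock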
[02:48:55] <jasonwc> I'm creating a dataset for /var/lib/libvirt/images. Should this be mounted manually via /etc/fstab like /var/log to avoid conflicts?
[02:49:25] <jasonwc> I would assume ZFS should mount before libvirt is loaded
[02:49:43] <DeHackEd> I'm guessing so, whereas logs would be an early thing because systemd does whatever it wants
[02:50:27] *** Markow <Markow!~ejm@176.122.215.103> has quit IRC (Quit: Leaving)
[02:51:27] <jasonwc> Any harm to mounting it via /etc/fstab to be sure?
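(One way to do the fstab approach jasonwc mentions is a legacy mountpoint; a sketch with hypothetical dataset names:)
    zfs create -o mountpoint=legacy rpool-server/libvirt-images
    # /etc/fstab line:
    # rpool-server/libvirt-images  /var/lib/libvirt/images  zfs  defaults  0  0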
[02:51:40] *** xaero <xaero!~xaero@ec2-18-216-168-111.us-east-2.compute.amazonaws.com> has joined #zfsonlinux
[03:09:31] *** metallicus <metallicus!~metallicu@bifrost.evert.net> has quit IRC (Quit: WeeChat 2.3)
[03:12:03] *** Bhakimi <Bhakimi!~textual@208.78.139.170> has quit IRC (Quit: My MacBook has gone to sleep. ZZZzzz…)
[03:21:57] <zfs> [zfsonlinux/zfs] include/zpios-ctl.h: current_kernel_time64 compat (#8256) created by Georgy Yakovlev <https://github.com/zfsonlinux/zfs/issues/8256>
[03:22:29] <cirdan> jasonwc: well systemd will shit the bed if it can't be mounted
[03:23:28] <bunder> systemd will shit the bed period
[03:23:32] <bunder> zing
[03:23:51] <cirdan> yeah
[03:23:57] <jasonwc> good point
[03:26:07] <zfs> [zfsonlinux/zfs] port async unlinked drain from illumos-nexenta (#8142) new review comment by Alek P <https://github.com/zfsonlinux/zfs/pull/8142#discussion_r246242002>
[03:27:52] <zfs> [zfsonlinux/zfs] port async unlinked drain from illumos-nexenta (#8142) new review comment by Alek P <https://github.com/zfsonlinux/zfs/pull/8142#discussion_r246242229>
[03:31:39] *** Caelum is now known as fishies
[03:37:26] <zfs> [zfsonlinux/zfs] include/zpios-ctl.h: current_kernel_time64 compat (#8256) comment by bunder2015 <https://github.com/zfsonlinux/zfs/issues/8256#issuecomment-452550556>
[03:39:53] <zfs> [zfsonlinux/zfs] include/zpios-ctl.h: current_kernel_time64 compat (#8256) comment by Georgy Yakovlev <https://github.com/zfsonlinux/zfs/issues/8256#issuecomment-452550950>
[03:57:44] <zfs> [zfsonlinux/zfs] 'SUBDIRS' will be removed after Linux 5.3 (#8257) created by bunder2015 <https://github.com/zfsonlinux/zfs/issues/8257>
[04:02:09] <bunder> i mean 5.3 is probably a couple years away but if its complaining we should probably fix it :P
[04:04:46] <lundman> or double down!
[04:07:45] <PMT> bunder: I mean, 5.0 is after 4.20
[04:08:10] <bunder> only because linus can't count to 22 :P
[04:08:52] <DeHackEd> fingers + toes = 20 (for most people)
[04:14:46] <PMT> linus secretly had two dicks installed for better inappropriate slapping of people in the face
[04:14:57] <PMT> and that is how he can count to 22 =P
[04:18:09] <zfs> [zfsonlinux/zfs] port async unlinked drain from illumos-nexenta (#8142) new review comment by Alek P <https://github.com/zfsonlinux/zfs/pull/8142#discussion_r246248829>
[04:23:32] <zfs> [zfsonlinux/zfs] port async unlinked drain from illumos-nexenta (#8142) comment by Alek P <https://github.com/zfsonlinux/zfs/issues/8142#issuecomment-452558348>
[04:24:10] <bunder> python is a disgusting language
[04:24:27] <bunder> yes lets just leave random commas at the end of statements
[04:26:55] *** jasonwc <jasonwc!~jasonwc@pool-72-66-15-203.washdc.fios.verizon.net> has quit IRC (Ping timeout: 246 seconds)
[04:27:36] *** Albori <Albori!~Albori@216-229-75-72.fidnet.com> has quit IRC (Ping timeout: 250 seconds)
[04:29:24] <PMT> bunder: if that's the worst complaint you have, ...
[04:29:51] <bunder> well some say python is what makes portage slow, but i don't see anyone rewriting it in c
[04:31:15] <bunder> but wtf https://github.com/zfsonlinux/zfs-buildbot/blob/master/master/master.cfg#L827
[04:34:17] *** Essadon <Essadon!~Essadon@81-225-32-185-no249.tbcn.telia.com> has quit IRC (Quit: Qutting)
[04:38:47] <bunder> oh i forgot to link this earlier... super OT but lul commentator's curse https://www.youtube.com/watch?v=2PD_3VNMsRA&t=7200s
[04:39:07] <bunder> i almost spilled my coffee this morning watching that
[04:44:30] <bunder> that's almost as sad as getting the softlock in the refight room
[04:55:31] *** Albori <Albori!~Albori@216-229-75-72.fidnet.com> has joined #zfsonlinux
[05:22:45] *** tlacatlc6 <tlacatlc6!~tlacatlc6@68.202.46.96> has quit IRC (Quit: Leaving)
[05:49:21] <zfs> [zfsonlinux/zfs] Centos: systemd-journald.service misses the zfs-mount.service dependency (#8060) comment by Richard Laager <https://github.com/zfsonlinux/zfs/issues/8060#issuecomment-452570953>
[05:57:09] *** malevolent_ <malevolent_!~quassel@93.176.182.131> has joined #zfsonlinux
[05:57:33] *** gila_ <gila_!~gila@5ED74129.cm-7-8b.dynamic.ziggo.nl> has joined #zfsonlinux
[05:57:43] *** buu__ <buu__!~buu@99.74.60.251> has joined #zfsonlinux
[05:57:57] *** gerhard7 <gerhard7!~gerhard7@ip5657ee30.direct-adsl.nl> has joined #zfsonlinux
[06:00:18] *** rlaager_ <rlaager_!~rlaager@grape.coderich.net> has joined #zfsonlinux
[06:01:45] *** obadz <obadz!~obadz@unaffiliated/obadz> has quit IRC (Ping timeout: 244 seconds)
[06:02:09] *** rlaager <rlaager!~rlaager@grape.coderich.net> has quit IRC (*.net *.split)
[06:02:09] *** augustus <augustus!~augustus@c-73-152-30-9.hsd1.va.comcast.net> has quit IRC (*.net *.split)
[06:02:09] *** malevolent <malevolent!~quassel@93.176.182.131> has quit IRC (*.net *.split)
[06:02:09] *** gila <gila!~gila@94.215.65.41> has quit IRC (*.net *.split)
[06:02:09] *** chasmo77 <chasmo77!~chas77@158.183-62-69.ftth.swbr.surewest.net> has quit IRC (*.net *.split)
[06:02:09] *** Llewelyn <Llewelyn!~derelict@184.12.106.191> has quit IRC (*.net *.split)
[06:02:09] *** qzo <qzo!~qzo@c-73-229-59-252.hsd1.co.comcast.net> has quit IRC (*.net *.split)
[06:02:09] *** EHG- <EHG-!~EHG|@unaffiliated/ehg-> has quit IRC (*.net *.split)
[06:02:09] *** ShellcatZero <ShellcatZero!~ShellcatZ@cpe-66-27-89-254.san.res.rr.com> has quit IRC (*.net *.split)
[06:02:09] *** jailbox <jailbox!~laj2@0120600178.0.fullrate.ninja> has quit IRC (*.net *.split)
[06:02:09] *** buu <buu!~buu@99-74-60-251.lightspeed.hstntx.sbcglobal.net> has quit IRC (*.net *.split)
[06:02:09] *** Baughn <Baughn!~Baughn@madoka.brage.info> has quit IRC (*.net *.split)
[06:03:18] *** EHG- <EHG-!~EHG|@unaffiliated/ehg-> has joined #zfsonlinux
[06:03:51] *** obadz <obadz!~obadz@unaffiliated/obadz> has joined #zfsonlinux
[06:07:00] <PMT> https://betanews.com/2019/01/08/toshiba-16tb-mg08-hard-drive/ and toshiba has reclaimed the lead for now
[06:08:50] *** jailbox <jailbox!~laj2@0120600178.0.fullrate.ninja> has joined #zfsonlinux
[06:09:52] *** ShellcatZero <ShellcatZero!~ShellcatZ@cpe-66-27-89-254.san.res.rr.com> has joined #zfsonlinux
[06:36:39] *** hoonetorg <hoonetorg!~hoonetorg@77.119.226.254.static.drei.at> has quit IRC (Ping timeout: 246 seconds)
[06:49:53] *** hoonetorg <hoonetorg!~hoonetorg@77.119.226.254.static.drei.at> has joined #zfsonlinux
[06:58:59] <prometheanfire> hmm, 32 pending sectors and 32 uncorrectable errors
[07:01:58] <Shinigami-Sama> prometheanfire: looks healthy to me
[07:02:09] * Shinigami-Sama runs his own smart report
[07:04:19] <prometheanfire> ya, seems ok so far
[07:04:28] <prometheanfire> still scrubbing just in case
[07:04:29] <prometheanfire> https://www.backblaze.com/blog/what-smart-stats-indicate-hard-drive-failures/
[07:04:51] <Shinigami-Sama> ...brand new WD red, smart shows prefail still wth man
[07:05:28] <prometheanfire> lol, nice
[07:05:56] <Shinigami-Sama> or my smartDB is very out of date
[07:07:51] <Shinigami-Sama> I can't believe it, its got 97 power up hours and no hits, but still prefail
[07:08:18] <Shinigami-Sama> oh well
[07:33:16] <zfs> [zfsonlinux/zfs] include/zpios-ctl.h: current_kernel_time64 compat (#8256) closed by Georgy Yakovlev <https://github.com/zfsonlinux/zfs/issues/8256#event-2062235831>
[07:43:17] *** simukis <simukis!~simukis_@78-63-88-48.static.zebra.lt> has joined #zfsonlinux
[07:55:49] *** zrav <zrav!~zravo_@2001:a61:460b:9d01:1465:9afe:ecd6:9a8f> has joined #zfsonlinux
[08:07:35] *** hyper_ch2 <hyper_ch2!c105d864@openvpn/user/hyper-ch2> has joined #zfsonlinux
[08:08:43] *** MarisaKirisame <MarisaKirisame!~marisa@marisakirisa.me> has quit IRC (Remote host closed the connection)
[08:09:01] *** MarisaKirisame <MarisaKirisame!~marisa@marisakirisa.me> has joined #zfsonlinux
[08:11:57] *** tnebrs <tnebrs!~barely@212.117.188.13> has quit IRC (Ping timeout: 244 seconds)
[08:13:06] *** gmelikov <gmelikov!~quassel@89.207.88.249> has joined #zfsonlinux
[08:13:58] *** tnebrs <tnebrs!~barely@212.117.188.13> has joined #zfsonlinux
[08:16:45] *** dadinn <dadinn!~DADINN@188.172.153.77> has quit IRC (Ping timeout: 246 seconds)
[08:40:00] *** catalase <catalase!~catalase@unaffiliated/catalase> has quit IRC (Remote host closed the connection)
[08:54:37] *** Llewelyn <Llewelyn!~derelict@184.12.106.191> has joined #zfsonlinux
[09:08:24] *** stefan00 <stefan00!~stefan00@ip9234924b.dynamic.kabel-deutschland.de> has joined #zfsonlinux
[09:19:50] *** futune_ <futune_!~futune@83.240.61.51> has joined #zfsonlinux
[09:20:05] *** futune <futune!~futune@83.240.61.51> has quit IRC (Quit: Leaving)
[09:20:53] *** rjvb <rjvb!~rjvb@2a01cb0c84dee6009bda76eb03bc33f7.ipv6.abo.wanadoo.fr> has joined #zfsonlinux
[09:37:39] *** morphin <morphin!c38e669e@gateway/web/freenode/ip.195.142.102.158> has joined #zfsonlinux
[09:47:39] *** kaipee <kaipee!~kaipee@81.128.200.210> has joined #zfsonlinux
[09:48:45] <zfs> [zfsonlinux/zfs] `zpool export` does not delete the mountpoint on pools with local mountpoint property (#4824) comment by Gregor Kopka <https://github.com/zfsonlinux/zfs/issues/4824#issuecomment-452617575>
[09:51:58] *** endre <endre!znc@end.re> has joined #zfsonlinux
[09:59:13] *** malevolent_ <malevolent_!~quassel@93.176.182.131> has quit IRC (Quit: https://quassel-irc.org - Chat comfortably. Anywhere.)
[09:59:45] <zfs> [zfsonlinux/zfs] `zpool export` does not delete the mountpoint on pools with local mountpoint property (#4824) comment by mailinglists35 <https://github.com/zfsonlinux/zfs/issues/4824#issuecomment-452620629>
[10:00:14] *** malevolent <malevolent!~quassel@93.176.182.131> has joined #zfsonlinux
[10:24:27] <zfs> [zfsonlinux/zfs] `zpool export` does not delete the mountpoint on pools with local mountpoint property (#4824) comment by René Bertin <https://github.com/zfsonlinux/zfs/issues/4824#issuecomment-452628012>
[10:45:50] <zfs> [zfsonlinux/zfs] Adaptive compression [was: auto compression] (#7560) comment by René Bertin <https://github.com/zfsonlinux/zfs/issues/7560#issuecomment-452634657>
[10:50:59] *** cinch <cinch!~cinch@freebsd/user/cinch> has quit IRC (Quit: Bye)
[10:51:27] *** cinch <cinch!~cinch@freebsd/user/cinch> has joined #zfsonlinux
[10:55:16] *** rjvbb <rjvbb!~rjvb@2a01cb0c84dee600842282460d00473a.ipv6.abo.wanadoo.fr> has joined #zfsonlinux
[11:13:32] *** insane^ <insane^!~insane@fw.vispiron.de> has joined #zfsonlinux
[11:28:57] *** insane^ <insane^!~insane@fw.vispiron.de> has quit IRC (Read error: Connection reset by peer)
[11:36:36] *** insane^ <insane^!~insane@fw.vispiron.de> has joined #zfsonlinux
[11:49:51] *** gerhard7 <gerhard7!~gerhard7@ip5657ee30.direct-adsl.nl> has quit IRC (Quit: Leaving)
[12:12:03] *** ahasenack <ahasenack!~ahasenack@33.93.189.91.lcy-02.canonistack.canonical.com> has quit IRC (Remote host closed the connection)
[12:23:29] *** gerhard7 <gerhard7!~gerhard7@ip5657ee30.direct-adsl.nl> has joined #zfsonlinux
[12:26:39] <zfs> [zfsonlinux/zfs] `zpool export` does not delete the mountpoint on pools with local mountpoint property (#4824) comment by mailinglists35 <https://github.com/zfsonlinux/zfs/issues/4824#issuecomment-452663815>
[12:31:27] <insane^> this mailinglist guy is somewhat droll
[12:34:21] *** Albori <Albori!~Albori@216-229-75-72.fidnet.com> has quit IRC (Ping timeout: 246 seconds)
[12:36:18] <stefan00> it’s usually recommended to use lz4 compression. Is that still true for fast nvme pools (just for systems / vm in my case)?
[12:36:32] <insane^> sure
[12:36:40] <insane^> why not?=
[12:39:43] <stefan00> because of possible performance degradation due to compression, especially when raw NVMe speeds are pretty high.
[12:40:07] <insane^> i dont think that this is the case
[12:47:19] <DeHackEd> NVMe drives tend to be expensive and small. consider that a possible justification for enabling compression
[12:49:47] *** patdk-lap <patdk-lap!~patrickdk@208.94.190.191> has quit IRC (Ping timeout: 240 seconds)
[12:54:10] *** gerhard7 <gerhard7!~gerhard7@ip5657ee30.direct-adsl.nl> has quit IRC (Ping timeout: 250 seconds)
[13:01:23] *** Albori <Albori!~Albori@216-229-75-72.fidnet.com> has joined #zfsonlinux
[13:06:10] *** gerhard7 <gerhard7!~gerhard7@ip5657ee30.direct-adsl.nl> has joined #zfsonlinux
[13:10:43] *** Markow <Markow!~ejm@176.122.215.103> has joined #zfsonlinux
[13:11:20] *** fassl <fassl!80838e22@gateway/web/freenode/ip.128.131.142.34> has joined #zfsonlinux
[13:19:05] <zfs> [zfsonlinux/zfs] `zpool export` does not delete the mountpoint on pools with local mountpoint property (#4824) comment by René Bertin <https://github.com/zfsonlinux/zfs/issues/4824#issuecomment-452677247>
[13:21:52] *** radkos <radkos!~radkos@213.91.182.188> has quit IRC (Read error: No route to host)
[13:29:09] *** fassl <fassl!80838e22@gateway/web/freenode/ip.128.131.142.34> has left #zfsonlinux
[13:32:02] <FinalX> stefan00: I use compression because of what DeHackEd said, but technically speaking you can also have even higher read speeds because you can fit more in the blocks you're reading
[13:32:16] <FinalX> the overhead for compressing is kinda really minimal
[13:54:50] *** Essadon <Essadon!~Essadon@81-225-32-185-no249.tbcn.telia.com> has joined #zfsonlinux
[13:58:42] *** radkos <radkos!~radkos@213.91.182.188> has joined #zfsonlinux
[13:59:57] *** gerhard7 <gerhard7!~gerhard7@ip5657ee30.direct-adsl.nl> has quit IRC (Ping timeout: 268 seconds)
[14:05:42] <zfs> [zfsonlinux/zfs] vdev_open maybe crash when vdev_probe return NULL (#8244) comment by Leroy8508 <https://github.com/zfsonlinux/zfs/issues/8244#issuecomment-452689491>
[14:21:42] <zfs> [zfsonlinux/zfs] `zpool export` does not delete the mountpoint on pools with local mountpoint property (#4824) comment by mailinglists35 <https://github.com/zfsonlinux/zfs/issues/4824#issuecomment-452694146>
[14:27:40] *** Floflobel_ <Floflobel_!~Floflobel@80.214.18.159> has joined #zfsonlinux
[14:28:46] <bunder> lol hardware unboxed blacklisted by nvidia
[14:30:37] *** patdk-lap <patdk-lap!~patrickdk@208.94.190.191> has joined #zfsonlinux
[14:31:04] <FinalX> can I somehow prevent these from going to syslog? Jan 9 13:30:39 lxc zed: eid=127842 class=history_event pool_guid=0x07DECBB7CC332F1A
[14:31:29] <FinalX> and rather have a separate log (I guess I could filter them out with syslog and send them to a diff syslog file, but still)
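(A hedged sketch of the syslog-side filter FinalX mentions, assuming rsyslog; a drop-in such as /etc/rsyslog.d/10-zed.conf, a hypothetical filename, could divert the history_event lines:)
    # send zed history_event noise to its own file, then stop it from reaching the main log
    :msg, contains, "class=history_event" /var/log/zed-history.log
    & stop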
[14:31:39] <hyper_ch2> anyone here has a home mesh setup?
[14:33:00] <insane^> filter it?
[14:33:06] <insane^> which syslogd?
[14:33:34] <FinalX> insane^: yeah, that was my next stop.. but not sure I care about those at all :P
[14:35:39] <bunder> history event, i think those are from snapshots?
[14:40:11] *** hyper_ch2_ <hyper_ch2_!c105d864@openvpn/user/hyper-ch2> has joined #zfsonlinux
[14:40:54] <FinalX> yeah
[14:41:17] <FinalX> and I make them every 15 min for quite a few datasets, so then the log gets flooded :)
[14:41:43] *** gerhard7 <gerhard7!~gerhard7@ip5657ee30.direct-adsl.nl> has joined #zfsonlinux
[14:42:21] *** hyper_ch2 <hyper_ch2!c105d864@openvpn/user/hyper-ch2> has quit IRC (Ping timeout: 256 seconds)
[14:42:22] <bunder> i thought they put in something to make it quieter, lemme go looking
[14:44:51] <bunder> oh that was for config sync, and it never got merged, and something about autoexpand
[14:45:45] *** veegee <veegee!~veegee@ipagstaticip-3d3f7614-22f3-5b69-be13-7ab4b2c585d9.sdsl.bell.ca> has quit IRC (Quit: veegee)
[14:46:41] <FinalX> oh
[14:46:57] *** fishies is now known as Crocodillian
[14:49:08] *** veegee <veegee!~veegee@ipagstaticip-3d3f7614-22f3-5b69-be13-7ab4b2c585d9.sdsl.bell.ca> has joined #zfsonlinux
[14:52:39] <bunder> might be worth it to put in a feature request for a dedicated /var/log/zed.log or something
[14:52:52] <DHE> <FinalX> the overhead for compressing is kinda really minimal # true, but NVMe is also fast. multiple NVMe drives might approach the point where LZ4 overhead is detectable
[14:55:26] <insane^> and lz4 checks if something is compressible within the first few bytes/KB if i remember right
[15:03:13] <PMT> I mean, zed.log would be trivial. I'd like the ability to make a damn persistent log for it.
[15:04:36] *** Markow <Markow!~ejm@176.122.215.103> has quit IRC (Quit: Leaving)
[15:05:20] *** libertas <libertas!~libertas@a95-93-229-182.cpe.netcabo.pt> has joined #zfsonlinux
[15:05:52] <libertas> hi, when zfs list -t snapshot, there's a lot of https://jd274.infusionsoft.com/app/linkClick/35649/a754084980cc0373/44770839/846e5ddc7a42392b
[15:05:56] <PMT> Huh, there's not actually an open bug about zed not keeping persistent logging. Heh.
[15:06:03] <libertas> sorry, wrong paste
[15:06:14] <PMT> was about to say, 404
[15:06:38] <zfs> [zfsonlinux/zfs] `zpool export` does not delete the mountpoint on pools with local mountpoint property (#4824) comment by René Bertin <https://github.com/zfsonlinux/zfs/issues/4824#issuecomment-452707747>
[15:08:12] <libertas> zfs list -t snapshot have hourly and daily entries like zroot/ROOT/default at hourly dot 0 4.07M - 2.49G -
[15:08:25] <libertas> what are these zroot/ROOT/default?
[15:11:03] <PMT> That sounds like you have a cronjob installed that takes snapshots hourly and daily.
[15:11:07] <PMT> And possibly weekly etc.
[15:11:39] <PMT> As far as zroot/ROOT/default, that's a dataset on your pool, probably the root filesystem.
[15:13:00] <libertas> I have a cronjob, that's right, one that I'll modify
[15:13:23] *** Floflobel_ <Floflobel_!~Floflobel@80.214.18.159> has quit IRC (Read error: Connection reset by peer)
[15:13:43] <PMT> Any particular reason?
[15:13:48] *** Shinigami-Sama <Shinigami-Sama!~xero@unaffiliated/setsuna-xero> has quit IRC (Ping timeout: 245 seconds)
[15:14:11] <libertas> Snapshots are eating my free space fast, and I've already deleted the ones that should take most space in zroot/usr/home
[15:14:18] *** Shinigami-Sama <Shinigami-Sama!~xero@unaffiliated/setsuna-xero> has joined #zfsonlinux
[15:14:30] <libertas> modify it because I don't want snapshots in zroot/usr/home
[15:15:03] <libertas> but the thing is that even after deleting those zroot/usr/home, df -h hasn't improved as much as it should
[15:15:14] <PMT> Most of the automatic snapshot takers let you set a property to not snapshot the dataset.
[15:16:28] <PMT> libertas: two things. snapshots only list the amount of space they and only they are occupying in the "USED" column (so if two snapshots reference the same data that isn't in the live version, it won't show up in either of their USED until one is deleted), and snapshots get asynchronously deleted so check the zpool get freeing property on the pool to see if it's still freeing some of them.
[15:18:03] *** Floflobel <Floflobel!~Floflobel@80.214.18.159> has joined #zfsonlinux
[15:18:35] <libertas> I was going to ask about the first option, as the USED column summed up is way much less than REFER
[15:19:07] <libertas> but what do you mean by the live version? what's the other one? how to look for it?
[15:23:37] <DHE> the real filesystem, not a snapshot
[15:23:48] <PMT> libertas: so let's say at 12 PM your dataset had 100 GB in it, and you took a snapshot. Then you deleted 10 GB from it. The latest/live version would have 90 GB, and the snapshot would reference 100.
[15:25:02] <PMT> If you had taken two snapshots of it at 100G before deleting the 10, the live one would reference 90 GB, the two snapshots would reference 100, and neither snapshot's USED would say 10G, because the other snapshot is also referencing that data.
[15:25:56] *** Floflobel <Floflobel!~Floflobel@80.214.18.159> has quit IRC (Quit: Leaving)
[15:26:45] <PMT> Put briefly, the USED column for a snapshot is basically "if you destroy this snapshot and only this snapshot, this is how much space you'll get back."
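(A short sketch of the checks PMT describes, using dataset names from this conversation:)
    # per-snapshot USED vs REFER for one dataset
    zfs list -t snapshot -d 1 -o name,used,referenced -s used zroot/usr/home
    # total space held only by snapshots of that dataset
    zfs get usedbysnapshots zroot/usr/home
    # non-zero while recently destroyed snapshots are still being freed asynchronously
    zpool get freeing zroot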
[15:27:55] <libertas> thanks, PMT, I get the idea. That's fine for zroot/usr/home, have to try to find what is the zroot/ZROOT/default, maybe it's snapshoting zroot/usr/home as well
[15:28:10] <PMT> ...what?
[15:28:43] <libertas> otherwise don't really know where those GB are coming from
[15:28:45] <PMT> By usual convention, things that look like POOLNAME/ROOT/name are the / filesystem.
[15:29:44] <libertas> excluding other datasets, right?
[15:29:55] <PMT> libertas: you might find this alias helpful, it lists datasets sorted by amount of space taken by snapshots
[15:29:58] <PMT> alias zl='zfs list -r -S usedbysnapshots -o name,used,usedbydataset,usedbysnapshots,referenced,refcompressratio,mountpoint'
[15:30:33] <PMT> Unless there's some under zroot/ZROOT/default, e.g. zroot/ZROOT/default/anotherthing, yes, it's only talking about space occupied by default and not anything else.
[15:33:27] <libertas> great alias! Have to find out about that zroot/ZROOT/default.
[15:33:37] <zfs> [zfsonlinux/zfs] `zpool export` does not delete the mountpoint on pools with local mountpoint property (#4824) comment by Gregor Kopka <https://github.com/zfsonlinux/zfs/issues/4824#issuecomment-452716714>
[15:35:24] <libertas> regarding the second idea, that property is set by snapshot takers or can be a default for the dataset, i.e. set manually?
[15:36:09] *** Shinigami-Sama <Shinigami-Sama!~xero@unaffiliated/setsuna-xero> has quit IRC (Ping timeout: 268 seconds)
[15:36:11] <PMT> Depends on the tool, but in general you would set it on a dataset and the tool would not take snapshots on the dataset or its children.
[15:38:27] <libertas> zroot/ROOT mounted no
[15:39:16] <PMT> Yes, that's expected.
[15:39:39] <libertas> ok zroot/ROOT/default mounted in /
[15:39:49] <libertas> like you said
[15:40:12] <zfs> [zfsonlinux/zfs] `zpool export` does not delete the mountpoint on pools with local mountpoint property (#4824) comment by René Bertin <https://github.com/zfsonlinux/zfs/issues/4824#issuecomment-452719019>
[15:41:51] *** hyper_ch2_ <hyper_ch2_!c105d864@openvpn/user/hyper-ch2> has quit IRC (Ping timeout: 256 seconds)
[15:41:51] <libertas> PMT: you've been very helpful. Thank you.
[15:50:31] *** Slashman <Slashman!~Slash@cosium-152-18.fib.nerim.net> has joined #zfsonlinux
[15:56:01] *** Shinigami-Sama <Shinigami-Sama!~xero@unaffiliated/setsuna-xero> has joined #zfsonlinux
[16:30:33] <stefan00> is it safe to rm everything in /usr/portage/distfiles and /var/tmp/portage/ ?
[16:30:39] *** troyt <troyt!zncsrv@2601:681:4100:8981:44dd:acff:fe85:9c8e> has quit IRC (Ping timeout: 252 seconds)
[16:43:07] <PMT> stefan00: as of last time I ran Gentoo, everything in distfiles was safe to purge. I have no memory of /var/tmp/portage but would guess so.
[16:43:40] <stefan00> oh no, sorry - wrong channel ;-)
[16:43:53] <stefan00> PMT: but thanks anyway ;-)
[16:44:33] *** troyt <troyt!zncsrv@2601:681:4100:8981:44dd:acff:fe85:9c8e> has joined #zfsonlinux
[16:44:42] *** insane^ <insane^!~insane@fw.vispiron.de> has quit IRC (Ping timeout: 250 seconds)
[17:05:22] *** tnebrs <tnebrs!~barely@212.117.188.13> has quit IRC (Ping timeout: 246 seconds)
[17:18:50] <FireSnake> I got this https://termbin.com/tl62 0.8.0-rc2 kernel message during 'perf top' and yum install blah - is it something to worry about? haven't found anything similar
[17:19:25] *** Scott0_ <Scott0_!~Scott0_@unaffiliated/scotto/x-4000254> has joined #zfsonlinux
[17:20:31] <FireSnake> a-ha, grepping logs shows one occurrence
[17:20:37] <FireSnake> *irclogs
[17:21:42] <FireSnake> Jan 09 18:11:04 zfs kernel: WARNING: kernel stack regs at ffffb73dc431f6d8 in z_rd_int:1074 has bad 'bp' value ffffffffc0649000 pasting the first line to easily grep it later
[17:24:55] <PMT> yikes
[17:26:58] <DHE> stack corruption?
[17:29:08] <PMT> https://lkml.org/lkml/2018/6/8/181 seems to suggest this can happen from things being compiled and using the bp registers for data payloads.
[17:30:22] <PMT> (I say registers, but it's really "one" register with different widths, which is also false because they have more registers under the hood and just play swap which is holding that.)
[17:31:23] <FireSnake> fwiw, the top function in perf was at that time something with SHA256 in name, didn't copy it
[17:33:48] *** gila_ <gila_!~gila@5ED74129.cm-7-8b.dynamic.ziggo.nl> has quit IRC (Quit: Textual IRC Client: www.textualapp.com)
[17:39:45] <PMT> I think your pastebin probably has the stacktrace
[17:39:58] <PMT> yup,
[17:40:14] <PMT> I can't guess whether it's harmful or not in practice, but
[17:40:41] <PMT> "breaks stack traces when unwinding from an interrupt in the crypto code."
[17:40:48] <PMT> (from https://patchwork.kernel.org/patch/10428863/ )
[17:41:43] <PMT> https://patchwork.kernel.org/patch/10454043/ suggests that forcing O3 can trigger this
[17:47:04] *** FireSnake <FireSnake!firesnake@gateway/shell/xshellz/x-zziyjlxvqcbzgyld> has quit IRC (Quit: leaving)
[17:47:24] *** FireSnake <FireSnake!firesnake@gateway/shell/xshellz/x-ymbepamflxqkmado> has joined #zfsonlinux
[17:51:44] <PMT> Hahah updating the baked-in lz4
[17:51:56] <PMT> That might improve things, but it would run afoul of the same issues as zstd's PR.
[18:08:06] <ghfields> Why is cool stuff expensive? https://www.thinkmate.com/system/superstorage-server-2028r-dn2r40l This distributor has a minimum of 20x 1TB NVMe drives that contributes $43,580 to its cost.
[18:11:58] <ghfields> Anyone use dual port NVME yet?
[18:14:33] <PMT> Is that active-active, or just an 8x PCIe link for one place? Because one is a much neater trick than the other.
[18:17:00] <ghfields> You have the same questions I have. I was hoping for it to be a substitution for SAS topology for 2 node HA configuration.
[18:17:31] <PMT> https://itpeernetwork.intel.com/an-introduction-to-dual-port-nvme-ssd/ alleges it's the neater one.
[18:18:32] *** tnebrs <tnebrs!~barely@212.117.188.13> has joined #zfsonlinux
[18:19:11] <zfs> [zfsonlinux/zfs] Adaptive compression [was: auto compression] (#7560) comment by Rich Ercolani <https://github.com/zfsonlinux/zfs/issues/7560#issuecomment-452770499>
[18:19:17] <ghfields> I know the principle is out there but would like to see actual hardware doing it.
[18:19:37] <PMT> Said link also alleges it in the context of mentioning their SSDs do it.
[18:20:38] <ghfields> now is the fabric / connection to nodes going to embrace it....
[18:21:46] *** dadinn <dadinn!~DADINN@188.172.153.77> has joined #zfsonlinux
[18:21:54] <PMT> Technically the lesser known sibling of SR-IOV is MR-IOV.
[18:23:45] <ghfields> You can pass through entire HBAs to VMs. What would NVME passthrough look like?
[18:25:15] <ghfields> (you might have just answered that)
[18:26:14] <PMT> SR-IOV stands for Single Root I/O Virtualization. I invite you to consider what the M logically stands for in MR-IOV.
[18:26:30] *** elxa <elxa!~elxa@2a01:5c0:e08b:681:3b0f:5fdc:828:51f4> has joined #zfsonlinux
[18:28:47] *** tnebrs <tnebrs!~barely@212.117.188.13> has quit IRC (Ping timeout: 240 seconds)
[18:33:35] *** FireSnake <FireSnake!firesnake@gateway/shell/xshellz/x-ymbepamflxqkmado> has quit IRC (Remote host closed the connection)
[18:40:43] *** Stoob <Stoob!~steev@krypton.bugfix.in> has joined #zfsonlinux
[18:48:38] <bunder> PMT: stefan00 yes /var/tmp/portage can be thrown away, as long as you're not actively building/installing something
[18:49:12] *** kaipee <kaipee!~kaipee@81.128.200.210> has quit IRC (Remote host closed the connection)
[18:52:32] <stefan00> bunder: thank you - sorry that was the wrong channel. Done anyway ;-)
[18:54:43] *** ReimuHakurei <ReimuHakurei!~Reimu@raphi.vserver.alexingram.net> has quit IRC (Ping timeout: 268 seconds)
[18:55:32] <zfs> [zfsonlinux/zfs] 'SUBDIRS' will be removed after Linux 5.3 (#8257) comment by Tony Hutter <https://github.com/zfsonlinux/zfs/issues/8257#issuecomment-452786466>
[18:57:24] *** ReimuHakurei <ReimuHakurei!~Reimu@raphi.vserver.alexingram.net> has joined #zfsonlinux
[18:58:34] *** simukis <simukis!~simukis_@78-63-88-48.static.zebra.lt> has left #zfsonlinux
[18:58:56] *** simukis <simukis!~simukis_@78-63-88-48.static.zebra.lt> has joined #zfsonlinux
[18:59:10] <stefan00> am I understanding correctly that sync=disabled „bypasses“ ZIL (purpose: avoiding 2 write cycles on my NVMEs)?
[19:00:41] <bunder> yes but you don't get sync writes, you're at the mercy of txgs writing your stuff out, and if you reboot between txg's you lose everything since the last txg
[19:00:58] *** simukis <simukis!~simukis_@78-63-88-48.static.zebra.lt> has quit IRC (Remote host closed the connection)
[19:01:29] *** simukis <simukis!~simukis_@78-63-88-48.static.zebra.lt> has joined #zfsonlinux
[19:03:15] <stefan00> but I assume sync syscall before reboot forces write out, right? In this case, power failure or frozen system would be worst case, right?
[19:05:11] <bunder> afaik running sync should force a txg writeout
[19:05:19] *** FireSnake <FireSnake!firesnake@gateway/shell/xshellz/x-dufhinohydkduejj> has joined #zfsonlinux
[19:06:17] <PMT> bunder: I thought the whole point of sync=disabled was that it noops all sync()
[19:11:55] <Slashman> hello, I'm looking for a PCIe JBOD SAS card to connect some external Dell MD1220 PowerVaults, any advice?
[19:12:24] <bunder> i forget, i thought if you specifically asked for it it would still do it
[19:12:26] <PMT> Any SAS card that's SAS2 or newer should be fine and only require passive adapters if they have different cable expectations.
[19:12:55] <PMT> bunder: the docs explicitly say that it turns sync into a noop; I don't have the time ATM to look in the source and see, but.
[19:13:30] <bunder> boo
[19:14:02] <bunder> okay txg or bust then i suppose :P
[19:14:10] <PMT> Why would it special case sync()? That's kind of the whole point of the setting.
[19:14:36] <bunder> i meant like fsync calls vs sync(1)
[19:15:35] <PMT> I mean, sync(1) just calls sync(2).
[19:16:32] <bunder> good thing i don't run a million transaction per second database heh
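A minimal sketch of the setting being discussed; "tank/data" is a placeholder dataset name, and sync is an ordinary per-dataset ZFS property.

    zfs set sync=disabled tank/data    # fsync()/sync() become no-ops; data rides the normal txg flow
    zfs get sync tank/data             # verify the setting
    zfs set sync=standard tank/data    # default: honour explicit sync requests via the ZIL
    zfs set sync=always tank/data      # opposite extreme: every write goes through the ZIL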
[19:23:27] <zfs> [zfsonlinux/zfs] Adaptive compression [was: auto compression] (#7560) comment by René Bertin <https://github.com/zfsonlinux/zfs/issues/7560#issuecomment-452793564>
[19:54:03] *** tnebrs <tnebrs!~barely@212.117.188.100> has joined #zfsonlinux
[19:54:55] <zfs> [zfsonlinux/zfs] implicit declaration of function current_kernel_time64 (#8258) created by Tony Hutter <https://github.com/zfsonlinux/zfs/issues/8258>
[20:05:50] <zfs> [zfsonlinux/zfs] port async unlinked drain from illumos-nexenta (#8142) new review comment by Matthew Ahrens <https://github.com/zfsonlinux/zfs/pull/8142#discussion_r246489851>
[20:06:14] *** Slashman <Slashman!~Slash@cosium-152-18.fib.nerim.net> has quit IRC (Remote host closed the connection)
[20:12:52] *** tnebrs <tnebrs!~barely@212.117.188.100> has quit IRC (Ping timeout: 272 seconds)
[20:12:57] <zfs> [zfsonlinux/zfs] port async unlinked drain from illumos-nexenta (#8142) new review comment by Matthew Ahrens <https://github.com/zfsonlinux/zfs/pull/8142#discussion_r246491138>
[20:19:14] *** Markow <Markow!~ejm@176.122.215.103> has joined #zfsonlinux
[20:23:19] <zfs> [zfsonlinux/zfs] port async unlinked drain from illumos-nexenta (#8142) new review comment by Matthew Ahrens <https://github.com/zfsonlinux/zfs/pull/8142#discussion_r246492843>
[20:26:45] <zfs> [zfsonlinux/zfs] Adaptive compression [was: auto compression] (#7560) comment by kpande <https://github.com/zfsonlinux/zfs/issues/7560#issuecomment-452806958>
[20:36:26] <PMT> sigh
[20:36:38] <PMT> no, "the format hasn't changed" isn't the same as "the bytes emitted are identical"
[20:36:47] <zfs> [zfsonlinux/zfs] Adaptive compression [was: auto compression] (#7560) comment by Matthew Ahrens <https://github.com/zfsonlinux/zfs/issues/7560#issuecomment-452809347>
[20:37:01] <ghfields> bot just woke up....
[20:37:34] <PMT> Github is having issues today.
[20:38:02] <ptx0> ^ that
[20:38:05] <ghfields> Everyone loves the bot.
[20:38:08] <ptx0> because the event just showed up in the feed
[20:38:09] <bunder> i haven't noticed anything wrong, but the bot is always slow
[20:38:17] <zfs> [openzfs/openzfs] Add a manual for ztest. (#729) comment by Matthew Ahrens <https://github.com/openzfs/openzfs/issues/729#issuecomment-452809804>
[20:38:29] <bunder> usually by a minute or two
[20:38:43] <ptx0> think github delays their own notifications in case someone deletes the comment immediately
[20:38:46] <ptx0> vOv
[20:39:14] <bunder> they should do that with email too
[20:39:14] <PMT> bunder: https://www.githubstatus.com/incidents/sx4ctyf65d2y
[20:39:39] <bunder> odd
[20:39:46] <bunder> i've been getting mails okay
[20:40:01] <ghfields> Keeping sysadmins employed
[20:40:05] <PMT> Maybe you're in a bucket of servers not on fire.
[20:40:11] <bunder> perhaps
[20:40:19] <bunder> you never know with the cloud
[20:40:30] <ptx0> they should call Geek Squad
[20:40:55] *** tnebrs <tnebrs!~barely@212.117.188.13> has joined #zfsonlinux
[20:48:50] <zfs> [zfsonlinux/zfs] Consider adding mitigations for speculative execution related concerns (#7035) comment by Brian Behlendorf <https://github.com/zfsonlinux/zfs/issues/7035#issuecomment-452812310>
[20:49:10] <zfs> [zfsonlinux/zfs] Disable 'zfs remap' command (#8238) comment by Matthew Ahrens <https://github.com/zfsonlinux/zfs/issues/8238>
[20:49:45] <bunder> oh that was an hour ago, jeez
[20:50:29] <bunder> when did i get that email
[20:51:00] *** malwar3hun73r <malwar3hun73r!~malwar3hu@unaffiliated/malwar3hun73r> has joined #zfsonlinux
[20:51:27] <bunder> 12:55, looks about right
[20:51:42] <bunder> the bot got that one on time too
[20:51:59] <malwar3hun73r> i have an esxi server at home and am looking to add storage - data retention is a must. i thought about going the raid route but many have recommended against it
[20:53:14] <PMT> I mean, ZFS's redundancy properties are also RAID layouts. So you probably mean not using HW RAID.
[20:53:18] <bunder> who cares what they think, you do you :P
[20:53:32] <PMT> Also, remember the 3-2-1 rule. https://www.backblaze.com/blog/the-3-2-1-backup-strategy/
[20:53:39] <malwar3hun73r> does anyone have thoughts on buying two equal size drives and then using ZFS to create a mirror device from storage pulled from each drive?
[20:53:55] <PMT> We'd probably suggest not using the drive for other things at the same time, but only mildly.
[20:54:04] *** tnebrs <tnebrs!~barely@212.117.188.13> has quit IRC (Ping timeout: 252 seconds)
[20:54:06] <malwar3hun73r> say i buy two 4 TB drives, but only use 2 TB from each
[20:54:13] <cirdan> sure you can do that
[20:54:19] <malwar3hun73r> ahhh... so the drives need to be wholly zfs?
[20:54:32] <cirdan> perf will be crap if you write to zfs and non zfs at the same time
[20:54:44] <bunder> or two pools on the same disk(s)
[20:54:49] <cirdan> yeah
[20:54:52] <bunder> simultaneously
[20:54:56] <cirdan> same as writing to 2 partitions at the same time
[20:55:01] <malwar3hun73r> ok, so that works in theory though
[20:55:06] <cirdan> of course
[20:55:09] <malwar3hun73r> crap, got a meeting brb
[20:55:16] <malwar3hun73r> thanks for the input!
[20:55:18] <cirdan> it can take a partition w/o issue
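A minimal sketch of the partition-backed mirror being discussed, assuming two hypothetical 4 TB drives /dev/sdb and /dev/sdc with a 2 TB partition carved from each (in practice /dev/disk/by-id names are preferable to sdX names).

    parted -s /dev/sdb -- mklabel gpt mkpart zfs 1MiB 2TiB
    parted -s /dev/sdc -- mklabel gpt mkpart zfs 1MiB 2TiB
    zpool create tank mirror /dev/sdb1 /dev/sdc1
    zpool status tank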
[20:57:16] <cirdan> ptx0: i got some good deals on backup tapes :) just ordered 13 for about $13.5 each, a min of 20tb storage for $180
[20:57:24] <cirdan> all new too, surprisingly
[20:57:47] <cirdan> i already have 14 tapes so I think I can finally do a full backup, even of my media
[20:59:13] * cirdan wonders if sw or hw compression would save anything with HEVC
[21:01:24] <snehring> in my experience no
[21:01:29] <PMT> malwar3hun73r: they don't have to be, no. it's just that spinning drives don't handle random IO that well, and multiple users means more approximately random IO.
[21:01:37] <PMT> cirdan: 13 which type?
[21:02:04] <cirdan> lto-5
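On the compression question: HEVC video is already entropy-coded, so neither the drive's hardware compression nor a software pass is likely to shrink it further. A minimal sketch, assuming the LTO-5 drive shows up as /dev/nst0, /srv/media is a placeholder path, and the installed mt-st supports the compression subcommand:

    mt -f /dev/nst0 compression 0      # turn off drive hardware compression
    tar -cvf /dev/nst0 /srv/media      # already-compressed HEVC won't shrink further
    mt -f /dev/nst0 rewind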
[21:24:11] *** sponix <sponix!~sponix@68.171.186.43> has quit IRC (Quit: Leaving)
[21:33:16] *** catalase <catalase!~catalase@unaffiliated/catalase> has joined #zfsonlinux
[21:36:45] *** leper` <leper`!~leper`@77-57-120-172.dclient.hispeed.ch> has quit IRC (Quit: .)
[21:40:09] *** leper` <leper`!~leper`@77-57-120-172.dclient.hispeed.ch> has joined #zfsonlinux
[21:42:16] *** zrav <zrav!~zravo_@2001:a61:460b:9d01:1465:9afe:ecd6:9a8f> has quit IRC (Read error: Connection reset by peer)
[21:44:37] *** tnebrs <tnebrs!~barely@212.117.188.100> has joined #zfsonlinux
[21:55:29] *** futune_ <futune_!~futune@83.240.61.51> has quit IRC (Remote host closed the connection)
[22:01:58] *** eab <eab!~eborisch@75-134-18-245.dhcp.mdsn.wi.charter.com> has quit IRC (Quit: WeeChat 2.3)
[22:02:52] *** futune <futune!~futune@83.240.61.51> has joined #zfsonlinux
[22:06:51] *** buu__ is now known as buu
[22:07:18] *** tnebrs <tnebrs!~barely@212.117.188.100> has quit IRC (Ping timeout: 246 seconds)
[22:11:09] <malwar3hun73r> hmm, guess i need to see if i can do pass thru in esxi
[22:11:19] <malwar3hun73r> thanks for all the input!
[22:12:15] <Shinigami-Sama> malwar3hun73r: you can, but it's a PITA
[22:14:59] <malwar3hun73r> maybe not the correct forum here (but i'm assuming there's some folks here with home labs and large data storage reqs)
[22:15:53] <malwar3hun73r> but... what would you guys recommend then for providing large amounts of storage (mostly backups of family picture/videos)?
[22:16:39] <malwar3hun73r> i thought about building a NAS, but that can get pricey and seemed redundant when i already have a server on which i can virtualize whatever OS i need... i just need disk space
[22:17:03] <malwar3hun73r> my family sucks at backups, so i was thinking of something like nextcloud as a means to provide back ups for them
[22:21:00] <rjvb> family that sucks at backups, can we assume they use MSWin? If so, there's something inspired by Apple's TimeMachine that even gives a timeline interface for browsing backups. I forget the name, sadly
[22:21:33] <malwar3hun73r> yes, windows, but they lack the space/funding to do this themselves
[22:21:33] <zfs> [zfsonlinux/zfs] Adaptive compression [was: auto compression] (#7560) comment by René Bertin <https://github.com/zfsonlinux/zfs/issues/7560#issuecomment-452845513>
[22:22:52] <rjvb> I'm pretty certain that the version of said software I used was free, I don't have budget for expensive commercial backup solutions either
[22:23:44] <malwar3hun73r> ok, but what if the drive fails (this is my primary motivation, my mother lost years of digital photos to a drive failure)
[22:24:10] <zfs> [zfsonlinux/zfs] Consider adding mitigations for speculative execution related concerns (#7035) closed by Brian Behlendorf <https://github.com/zfsonlinux/zfs/issues/7035#event-2064134580>
[22:24:30] <rjvb> ah: https://www.genie9.com/home/Genie_Timeline_Home/overview.aspx
[22:24:42] <malwar3hun73r> no worries, just didn't know if someone had a strong feeling one way or another with a home setup (esxi attached storage, NAS, etc)
[22:24:57] <rjvb> well, you'd better do the backing up to an external drive of course, which could be a NAS
[22:25:21] <malwar3hun73r> still gotta have a drive to store the backup even with genie
[22:25:29] <malwar3hun73r> why does the backup need to be an external drive?
[22:25:34] <rjvb> (BTW, I see that Win8 and up have something TimeMachine-like built in)
[22:26:44] <rjvb> you can also use an additional internal drive, but that's less flexible. You often want to put the backup in a different location when you leave for a longer period, for instance
[22:27:31] <gchristensen> also consider if a fire is within your threat model
[22:29:00] <zfs> [zfsonlinux/zfs] zfs filesystem skipped by df -h (#8253) comment by Paul Zuchowski <https://github.com/zfsonlinux/zfs/issues/8253#issuecomment-452851565>
[22:30:01] <PMT> rjvb: just to answer you directly here instead of in the bug, "new version can be decompressed by old version" and "new version outputs byte-identical compressed blocks" are different things, and the former is what the LZ4 person said in that link.
[22:30:09] <ptx0> my new mobo is hereeee
[22:31:46] <PMT> ptx0: I remember you told me what was catastrophically wrong on the old one, but I've forgotten. It was pretty entertaining, but I don't remember what it was.
[22:32:47] <ptx0> 90% of the pcie ports stopped working
[22:33:36] <rjvb> PMT: effectively, if the new version gives better compression it cannot output byte-identical blocks
[22:34:15] <rjvb> however I don't see how that can be a problem; how is it different from rewriting data with a different compressor?
[22:35:46] <Shinigami-Sama> ptx0: and then you find out its the PCIe lanes on your CPU that are the problem?
[22:35:49] <zfs> [zfsonlinux/zfs] Add dmu_object_alloc_hold() and zap_create_hold() (#8015) comment by Tony Hutter <https://github.com/zfsonlinux/zfs/issues/8015#issuecomment-452858348>
[22:35:55] <Shinigami-Sama> my phones should be here soon
[22:36:56] <PMT> rjvb: the link I gave explains that if you have compressed ARC off and use an L2ARC device, then because it decompresses and recompresses, the old and new blocks might not match checksums, and currently the L2ARC entry doesn't store its own checksum, it just references the on-disk block.
[22:37:00] <PMT> And then fire.
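A quick way to see the relevant knobs on a ZFS on Linux box; the module parameter exists in 0.7+, while "tank/data" is a placeholder dataset name.

    cat /sys/module/zfs/parameters/zfs_compressed_arc_enabled   # 1 = compressed ARC enabled (the default)
    zfs get compression,checksum tank/data                      # on-disk settings the L2ARC path relies on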
[22:40:28] <rjvb> I think I don't understand (because looking at the words I cannot help but think "what a stupid idea") ...
[22:40:29] <zfs> [zfsonlinux/zfs] Adaptive compression [was: auto compression] (#7560) comment by kpande <https://github.com/zfsonlinux/zfs/issues/7560#issuecomment-452871202>
[22:42:15] <malwar3hun73r> Shinigami-Sama, it looks like RDM works and is pretty easy - have you had a different experience
[22:42:44] <Shinigami-Sama> I had compatibility hell issues on 5.x malwar3hun73r
[22:43:24] <malwar3hun73r> ah, ok, thanks!
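For reference, a minimal sketch of creating a physical-mode RDM pointer from the ESXi shell; the naa identifier, datastore path, and VM directory are placeholders.

    ls /vmfs/devices/disks/                               # find the disk's naa.* identifier
    vmkfstools -z /vmfs/devices/disks/naa.XXXXXXXXXXXX \
        /vmfs/volumes/datastore1/zfsvm/disk1-rdm.vmdk     # -z = physical compatibility mode
    # Then attach disk1-rdm.vmdk to the VM as an existing disk.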
[22:45:36] <rjvb> PMT: so currently if you want to upgrade the internal copy of a compressor you'd need to do something like rename the old version and then manage to embed a 2nd copy?
[22:45:46] <zfs> [zfsonlinux/zfs] Linux 5.0: asm/i387.h: No such file or directory (#8259) created by Tony Hutter <https://github.com/zfsonlinux/zfs/issues/8259>
[22:46:32] <rjvb> that does come across as unhappy design...
[22:46:51] *** obadz <obadz!~obadz@unaffiliated/obadz> has quit IRC (Ping timeout: 252 seconds)
[22:49:02] <zfs> [openzfs/openzfs] Add a manual for ztest. (#729) comment by Igor K <https://github.com/openzfs/openzfs/issues/729#issuecomment-452880947>
[22:49:27] <malwar3hun73r> Shinigami-Sama, compatibility with the drive not being recognized?
[22:50:15] *** gerhard7 <gerhard7!~gerhard7@ip5657ee30.direct-adsl.nl> has quit IRC (Quit: Leaving)
[22:50:34] <Shinigami-Sama> I was trying to pass a controller through
[22:50:51] <Shinigami-Sama> I forget why, $CLIENT_REQUEST
[22:52:01] <PMT> rjvb: no, the way people would do it is almost definitely what I said and what was in the link I sent, of adding sufficient metadata to the L2ARC to not break this.
[22:52:55] <PMT> Though I suppose it could also cause unnecessary duplication on things with dedup, but since people suggest not using dedup, ...
[22:53:35] <rjvb> That seems like the better approach, but I did say currently (I checked) :)
[22:53:59] <PMT> I mean, currently, the answer is "we don't"
[22:54:02] <PMT> So anything is a change.
[22:55:44] *** Llewelyn <Llewelyn!~derelict@184.12.106.191> has quit IRC (Remote host closed the connection)
[22:56:14] <rjvb> "we don't but we could" sounds better than "we don't coz we can't" O:-)
[22:56:29] *** yomisei <yomisei!~void@ip4d16bd91.dynamic.kabel-deutschland.de> has quit IRC (Ping timeout: 244 seconds)
[22:57:29] <PMT> rjvb: since I didn't say can't, please stop setting up strawmen to knock down.
[22:58:49] *** simukis <simukis!~simukis_@78-63-88-48.static.zebra.lt> has left #zfsonlinux
[22:59:46] *** Llewelyn <Llewelyn!~derelict@184.12.106.191> has joined #zfsonlinux
[22:59:49] *** simukis <simukis!~simukis_@78-63-88-48.static.zebra.lt> has joined #zfsonlinux
[23:00:20] <rjvb> sorry, but the can't was implied quite explicitly in what you said.
[23:02:44] <zfs> [zfsonlinux/zfs] Adaptive compression [was: auto compression] (#7560) comment by René Bertin <https://github.com/zfsonlinux/zfs/issues/7560#issuecomment-452885391>
[23:04:42] <PMT> I don't think "A and B both require C to be done" is the same as saying "can't do A or B". "Can't do A or B without C", perhaps, but at that point everything is a can't, so
[23:05:46] <PMT> Either way, this seems like an academic discussion. Any particular reason you don't want to open a feature request for updating LZ4, even if you aren't going to do the work yourself?
[23:15:52] *** tlacatlc6 <tlacatlc6!~tlacatlc6@68.202.46.96> has joined #zfsonlinux
[23:17:13] <zfs> [zfsonlinux/zfs] Adaptive compression [was: auto compression] (#7560) comment by René Bertin <https://github.com/zfsonlinux/zfs/issues/7560#issuecomment-452889789>
[23:21:45] *** IonTau <IonTau!~IonTau@ppp121-45-221-77.bras1.cbr2.internode.on.net> has joined #zfsonlinux
[23:32:13] *** yomisei <yomisei!~void@ip4d16bd91.dynamic.kabel-deutschland.de> has joined #zfsonlinux
[23:34:23] *** compdoc <compdoc!~me@unaffiliated/compdoc> has joined #zfsonlinux
[23:37:04] <compdoc> how big are snapshots for a 4 to 6TB volume? do you people tend to have a separate drive to store them?
[23:37:59] <DeHackEd> snapshots consume space as needed. at the moment of creation they are like a few kilobytes of metadata
[23:44:36] <CompanionCube> also you can't 'have a separate drive to store them'
[23:44:40] <CompanionCube> that's not how it works
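A minimal sketch of how snapshot space accounting looks in practice; "tank/photos" is a placeholder dataset name.

    zfs snapshot tank/photos@2019-01-09
    zfs list -r -t snapshot -o name,used,refer tank/photos
    # USED starts near zero and grows only as blocks referenced by the snapshot
    # are later overwritten or deleted in the live dataset.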
[23:46:21] <zfs> [zfsonlinux/zfs] Suggestion: update the embedded lz4 copy (#8260) created by René Bertin <https://github.com/zfsonlinux/zfs/issues/8260>
[23:47:13] <zfs> [zfsonlinux/zfs] port async unlinked drain from illumos-nexenta (#8142) new review comment by Alek P <https://github.com/zfsonlinux/zfs/pull/8142#discussion_r246575617>
[23:47:19] *** simukis <simukis!~simukis_@78-63-88-48.static.zebra.lt> has quit IRC (Quit: simukis)
[23:47:31] *** tnebrs <tnebrs!~barely@212.117.188.13> has joined #zfsonlinux
[23:49:03] <zfs> [zfsonlinux/zfs] port async unlinked drain from illumos-nexenta (#8142) comment by Alek P <https://github.com/zfsonlinux/zfs/issues/8142#issuecomment-452898700>
[23:49:06] *** elxa <elxa!~elxa@2a01:5c0:e08b:681:3b0f:5fdc:828:51f4> has quit IRC (Ping timeout: 252 seconds)
[23:50:14] <zfs> [zfsonlinux/zfs] zfs should optionally send holds (#7513) new review comment by loli10K <https://github.com/zfsonlinux/zfs/pull/7513#pullrequestreview-190926908>