   November 18, 2018
[00:01:27] *** adilger <adilger!~adilger@S0106a84e3fe4b223.cg.shawcable.net> has joined #zfsonlinux
[00:43:11] *** Essadon <Essadon!~Essadon@81-225-32-185-no249.tbcn.telia.com> has quit IRC (Quit: Qutting)
[00:47:33] *** adilger <adilger!~adilger@S0106a84e3fe4b223.cg.shawcable.net> has quit IRC (Ping timeout: 268 seconds)
[01:01:52] *** darkmeson <darkmeson!~darkmeson@gateway/tor-sasl/darkmeson> has quit IRC (Remote host closed the connection)
[01:03:17] *** darkmeson <darkmeson!~darkmeson@gateway/tor-sasl/darkmeson> has joined #zfsonlinux
[01:17:57] <zfs> [zfsonlinux/zfs] old device - naming weirdness during resilver ( /dev/sdc1/old instead of /dev/sdc (old) ) (#8138) closed by kpande <https://github.com/zfsonlinux/zfs/issues/8138#event-1973012380>
[01:18:27] <zfs> [zfsonlinux/zfs] old device - naming weirdness during resilver ( /dev/sdc1/old instead of /dev/sdc (old) ) (#8138) comment by kpande <https://github.com/zfsonlinux/zfs/issues/8138#issuecomment-439657477>
[01:20:57] <zfs> [zfsonlinux/zfs] zpool list -v console output formatting (#7308) comment by kpande <https://github.com/zfsonlinux/zfs/issues/7308#issuecomment-439657591>
[01:21:40] <ptx0> devZer0: uhm
[01:21:45] <ptx0> you have a single special vdev with no redundancy
[01:21:50] <ptx0> just fyi if it dies your whole pool is gone
[01:25:41] <devZer0> yes sure, it's for testing only. thanks
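The warning above can be avoided by mirroring the special vdev at creation time. A minimal sketch, assuming the allocation-classes feature (ZoL 0.8+); the pool and device names are illustrative:

```shell
# Create a pool whose special (metadata) vdev is a two-way mirror, so a
# single special-device failure does not take the whole pool with it.
zpool create tank \
    raidz /dev/sda /dev/sdb /dev/sdc \
    special mirror /dev/nvme0n1 /dev/nvme1n1

# Verify the layout:
zpool status tank
```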
[01:27:00] <devZer0> why are tickets being closed in such a harsh way? i try to be helpful and give feedback from a user perspective, and if tickets get closed that quickly i will never open tickets again, because it's frustrating when work and time are put in the trash
[01:29:28] <zfs> [zfsonlinux/zfs] Huge performance drop (30%~60%) after upgrading to 0.7.9 from (#7834) comment by kpande <https://github.com/zfsonlinux/zfs/issues/7834#issuecomment-439657932>
[01:30:08] <ptx0> because you are not following the rules
[01:30:23] <ptx0> and it's not the first time
[01:31:06] <devZer0> what are "the rules" ?
[01:31:10] <ptx0> zfs-discuss is where you need to focus that effort until someone on there tells you that the behaviour is questionable and needs an issue report
[01:32:08] <ptx0> github issues should be actual issues and not an investigation into what appears to be a system configuration issue, or simple FAQ
[01:32:15] <devZer0> it would be followed more precisely if that information were in the issue template, too
[01:32:35] <ptx0> brian didn't want to put it into the template, and i don't know why
[01:32:45] <ptx0> i'm pretty sure his response was "we can just close issues, it is no big deal"
[01:33:14] <ptx0> trust me, the issue template also should get info about the person's hardware, whether they use ECC memory or not
[01:33:19] <ptx0> it's in no way complete
[01:34:00] <devZer0> i think it is, because it scares people away. people want to be helpful. they think: hey, i have an issue, let's report it. and if they get handled that way, they will stop doing it. they will not report it anywhere else.
[01:34:31] <devZer0> i have seen lots of discussion about scaring away people who want to contribute
[01:34:48] <devZer0> on mailing lists, for example. people get harsh responses there, too.
[01:35:30] <devZer0> so, whoever complains or contributes - often they get "naaa, not here" or "naaa, you didn't follow the rules". sorry, but this is not so helpful
[01:36:35] <devZer0> i don't want to insult, i just want to express how someone feels who takes a look and wants to contribute by feedback
[01:39:21] <ptx0> great, but it is not feedback in 8138
[01:39:28] <ptx0> it is just a FAQ
[01:39:30] <devZer0> with "i think it is" i was referring to " it is no big deal"
[01:40:18] <devZer0> so can we discuss here regarding /dev/sdc1/old ?
[01:40:24] <ptx0> sure
[01:40:28] <devZer0> i think this is completely misleading for the end user
[01:40:38] <ptx0> but everything has already been explained to you in the issue
[01:41:15] <devZer0> ???
[01:41:15] <ptx0> i don't think anyone will actually believe the device was called /dev/sdnX/old.
[01:41:24] <ptx0> why is this "completely misleading"
[01:42:25] <ptx0> your assumption about sdc1 being wrong is also, well, wrong
[01:42:34] <ptx0> ZFS uses partitions even when you give it a whole disk
[01:42:53] <devZer0> i think concatenating a device name and an informational add-on into a single string is misleading
[01:43:13] <ptx0> not when you know it's going to happen.
[01:43:36] <ptx0> your requested change, fwiw, doesn't separate it into two strings
[01:43:39] <devZer0> and if i'm wrong regarding sdc1 - why does zpool status or iostat show device name instead of partition?
[01:44:06] <ptx0> because that's how it works :D
[01:44:51] <ptx0> the actual path is changed to a nonexistent device node so that ZFS doesn't try reopening it
[01:45:00] <devZer0> sorry, but telling "your device sdc was /dev/sdc1/old" simply sounds weird to me
[01:45:12] <ptx0> ok, sounds like a personal problem
[01:45:30] <ptx0> there is no actual issue that results from this
[01:45:42] <devZer0> sounds like you take it from the developers point of view...
[01:46:05] <ptx0> no, it sounds like you want something you just don't like, to be called a bug
[01:46:52] <devZer0> no, not really. i think it is questionable from an end-user's perspective and i would have liked another opinion on that. you are right, that could have been discussed on the ML
[01:47:29] <ptx0> so when did you notice this problem
[01:47:39] <ptx0> because uhm it's been this way *forever*
[01:47:55] <ptx0> in more than 15 years no one has thought it an issue but you..
[01:49:41] <devZer0> i have never seen /..../old before. sorry
[01:49:49] <ptx0> vOv
[01:53:30] <devZer0> this is muzzling
[01:55:27] * TemptorSent wanders in and grabs a bag of popcorn.
[01:55:57] <ptx0> muzzle is an apt metaphor because it's used on misbehaving dogs
[01:56:12] <ptx0> and it's also used to protect those around them from the dog
[01:56:32] <ptx0> but most people don't realise the muzzle is also for the dog's protection
[01:57:08] <TemptorSent> Just imagine the nasty diseases they could pick up if they bit the wrong person...
[01:57:28] <ptx0> or the nasty things that could happen if the wrong person came up to pet the dog and it bit them in defense
[01:58:03] <ptx0> i always muzzled my dog since it kept other people (adults, small children) far away from both of us
[01:58:24] *** devZer0 <devZer0!51adeeb9@gateway/web/freenode/ip.> has left #zfsonlinux
[01:58:27] <TemptorSent> ...why is it that small dogs are never muzzled, when they are by far the most likely to take a chunk out of you?
[01:58:43] <ptx0> because their average IQ is shared with their owners
[01:59:23] <TemptorSent> They do fly well when punted...
[02:00:18] <ptx0> especially when feeding them a steady diet of beans
[02:00:28] <TemptorSent> Self-propelled!
[02:00:59] <zfs> [openzfs/openzfs] Merge remote-tracking branch 'illumos/master' into illumos-sync (#714) created by zettabot <https://github.com/openzfs/openzfs/issues/714>
[02:02:01] <ptx0> ever since ubuntu came along and pushed zfs into the mouths of children, there's this trend of newcomers trying to water it down to make it "easier for newcomers"
[02:02:44] <ptx0> because reading docs is somehow a thing of the past and everything should Just Work the way they expect it
[02:02:48] <TemptorSent> I've noticed... it's not just ZFS, it's everywhere.
[02:03:02] <ptx0> but what if i forget an argument, zfs should figure it out and offer a replacement
[02:03:19] <TemptorSent> RTFM -- if the manual isn't clear, ask, if there isn't a solid answer, that *might* be a bug.
[02:03:43] <ptx0> some things aren't in the manual but require reading a PR
[02:03:55] <ptx0> but it's a filesystem, how many of ext4's quirks are documented
[02:05:22] <bunder> i don't know, that whole thing with classes could be documented better, i was under the impression it was doing things backwards
[02:05:32] * bunder shrug
[02:06:04] <ptx0> i didn't realise that you could turn xattr into metadata using large_dnode feature but that feature scares me too
[02:06:18] <DeHackEd> features that break send/recv are scary
[02:06:24] <ptx0> too many instances of corruption when non-legacy dnode size are in use
[02:06:49] <CompanionCube> i was just going to ask why but you ninja'd me :p
[02:07:12] <ptx0> heheh
[02:07:28] <ptx0> there was one guy who had to recreate his pool a few times due to weird hangs during import that seem to trace back to large dnode
[02:07:36] * DeHackEd has discovered "tubrostat", a tool for intel (?) cpus to see the turbo speeds and power consumption
[02:07:45] <ptx0> but i never saw that it was actually pinned down and resolved
[02:07:55] <ptx0> tubro?
[02:08:18] <DeHackEd> CPUs boost their MHz speed when cores are idle and there's power in the TDP budget
[02:08:49] <bunder> can you even measure the tdp under turbo? i only ever see reviewers use a killawatt
[02:09:05] <DeHackEd> apparently my xeons are measuring it themselves and turbostat can report it
[02:09:12] <bunder> interesting
[02:10:20] <bunder> there is also powertop but i forget if it needs kernel options to work
[02:11:57] <bunder> lol documentation
[02:11:59] <bunder> This mode allows you to execute PowerTOP when the system is connected to an Extech power analyzer.
[02:12:15] <bunder> why only extech, i bought this expensive fluke for nothing /s
[02:13:22] <zfs> [zfsonlinux/zfs] man/zfs.8: document 'received' property source (#8134) comment by Giuseppe Di Natale <https://github.com/zfsonlinux/zfs/issues/8134>
[02:13:49] <DeHackEd> powertop is more generic, whereas turbostat is very cpu-specific. cpu sleep states, wakeups, per-core/thread speeds, etc
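For reference, both tools compared above are typically run like this (a sketch; both need root, and option spellings can vary between versions):

```shell
# turbostat (ships in the kernel tree under tools/power/x86): run a command
# and report per-core frequencies, C-states, and package power for its duration.
turbostat sleep 5

# powertop: generic power/wakeup report, here written out as an HTML file.
powertop --time=5 --html=powertop-report.html
```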
[02:20:34] *** Praeceps <Praeceps!1fcdf44e@gateway/web/freenode/ip.> has joined #zfsonlinux
[02:22:03] <Praeceps> Is there any reason to use zvols as VM storage? the consensus, at least for performance, seems to be that qcow images are better; snapshotting takes up less space and they are more portable.
[02:22:50] <ptx0> if by "consensus" you mean one guy's blog, sure
[02:23:35] <DeHackEd> zvols are effectively raw images stored on a device designed for them with most of those same features available through ZFS instead.
[02:23:42] <Praeceps> I mean I found more than one source but the primary source was probably the same :P
[02:24:10] <Praeceps> In my head I'm guessing a zvol is better for performance if you're using certain zfs features?
[02:24:24] <ptx0> yes, everyone is taking jim's word at face value even though he hasn't updated the article with corrections
[02:24:38] <DeHackEd> not quite like that. a zvol is effectively a single huge file, but without the VFS layer
[02:24:43] <ptx0> snapshots on zvol work just fine if you don't reserve space for it
[02:25:18] <Praeceps> I thought that could be the case as I couldn't find any recent primary sources
[02:25:23] <Praeceps> So what's the state of play these days?
[02:25:41] <ptx0> performance depends on workload, aiui
[02:26:09] <ptx0> for the longest time zvol were the only way to use O_DIRECT semantics
[02:26:13] <DeHackEd> zvols are still recommended, but we understand why sometimes it's a pain. /dev entries can be a permissions problem for non-root users, for example
[02:26:51] <ptx0> DeHackEd: in gentoo just add user to 'disk' group
[02:27:14] <ptx0> i'm sure we could change it to 'zvol' group with udev rules
[02:27:24] <DeHackEd> sure, but I don't want them to access my real (SATA) drives
[02:27:33] <DeHackEd> yeah that would be better
[02:27:53] <ptx0> well i look forward to your PR :P
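The rule being discussed might look something like this (a sketch; the file name, group name, and the zd* match for zvol nodes are assumptions to check against your distro):

```shell
# Create a dedicated group and grant it access to zvol device nodes only,
# instead of the whole 'disk' group (which also covers real SATA drives).
groupadd zvol
usermod -aG zvol someuser

# zvols appear as /dev/zd* on Linux; match them with a udev rule:
cat > /etc/udev/rules.d/99-zvol-group.rules <<'EOF'
KERNEL=="zd*", GROUP="zvol", MODE="0660"
EOF

udevadm control --reload
udevadm trigger
```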
[02:28:30] <Praeceps> Also, I imagine that TRIM would function on a zvol when enabled correct?
[02:28:39] <Praeceps> Just like any other subsystem
[02:28:41] <ptx0> kinda
[02:28:47] <ptx0> depends how you access it
[02:28:52] <CompanionCube> you can TRIM the zvol but not the underlying zpool
[02:28:56] <DeHackEd> yes, a zvol will punch out disk space if you use the usual TRIM commands
[02:28:59] <bunder> in theory you could write a special udev rule but annoying
[02:29:10] <bunder> re disk group
[02:29:20] <ptx0> zpool can't send trim to its vdevs yet, but it can accept trim commands to zvol, which will free space if you have no snapshots
[02:29:56] <ptx0> and virtio devices may or may not send trim commands from a guest to the zvol
[02:30:06] <ptx0> virtio-scsi definitely does
[02:30:07] <DeHackEd> I heard that you need to use a virtio-scsi card instead
[02:30:19] <ptx0> yes, virtio-blk may as of last year or something
[02:30:38] <ptx0> i'm using iscsi though which definitely does work
[02:31:00] <DeHackEd> well that's cheating. :)
[02:31:13] <ptx0> i think ceph rbd is cheating :P
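Putting the above together, a qemu invocation that lets guest TRIM reach a zvol via virtio-scsi might look like this (a sketch; the zvol path and device IDs are examples):

```shell
# discard=unmap on the backing drive forwards guest UNMAP/TRIM to the zvol,
# and virtio-scsi (unlike older virtio-blk) reliably issues those commands.
qemu-system-x86_64 \
    -device virtio-scsi-pci,id=scsi0 \
    -drive file=/dev/zvol/tank/vm1,if=none,id=drive0,format=raw,discard=unmap \
    -device scsi-hd,drive=drive0,bus=scsi0.0
```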
[02:32:29] <Praeceps> What's a vdev? Like vda under a virtual machine?
[02:32:41] <ptx0> i miss the old sun docs
[02:32:57] <ptx0> people clearly don't read them anymore
[02:33:00] <DeHackEd> either the raw disks under ZFS, or the virtual RAID devices between them and the pool
[02:33:04] <DeHackEd> raidz, mirrors
[02:33:17] <ptx0> "man zpool"
[02:33:32] <ptx0> read it top to bottom :)
[02:33:36] <Praeceps> :P
[02:34:15] <ptx0> our docs are better than the freebsd handbook nowadays
[02:34:29] <Praeceps> So you can trim a zvol how does that work O_o
[02:34:49] <ptx0> that thing used to be the pinnacle of open source documentation. want an example of good docs? look at the freebsd handbook.. but i tried setting up a bridge last week and the examples and docs were incomplete.
[02:35:01] <DeHackEd> zvol sectors are deleted and replaced with sparse holes
[02:35:30] <ptx0> Praeceps: it overwrites empty space with zeroes, essentially compressing the blocks and deallocating them from the zvol, but only if no snapshots reference those blocks
[02:35:52] <ptx0> if you use snapshots, trimming the zvol will use more space
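The "punch out disk space" behaviour described above can be mimicked on an ordinary sparse file with fallocate, which is what a TRIM against a zvol amounts to logically (a sketch, not zvol-specific; assumes a filesystem that supports hole punching):

```shell
# Make a thin 8 MiB file, allocate some real blocks, then "discard" them.
f=$(mktemp)
truncate -s 8M "$f"                        # 8 MiB apparent size, ~0 allocated
dd if=/dev/urandom of="$f" bs=4K count=16 conv=notrunc,fsync status=none
before=$(stat -c %b "$f")                  # 512-byte blocks now allocated
fallocate --punch-hole --offset 0 --length 65536 "$f"   # the "TRIM"
after=$(stat -c %b "$f")                   # allocation drops; size is unchanged
echo "$before -> $after"
rm -f "$f"
```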
[02:36:50] <Praeceps> A zvol reserves all its space anyway right, so why would you want to do that? If you've turned the reservation off I guess?
[02:36:56] <bunder> actually i like the gentoo handbook and wiki better than freebsd
[02:37:02] <ptx0> no
[02:37:07] <bunder> i had to use a blog to tell me how to install xorg/mate
[02:37:22] <ptx0> that's what jim said and he was wrong. zvol may be thin provisioned.
[02:40:07] <Praeceps> Ahh, okay. So if you're using a thin provisioned or over provisioned zvol you want trim support on your device driver to free that space for the rest of the pool, but it doesn't actually trim the underlying device.
[02:40:27] <ptx0> right
[02:40:38] <ptx0> and the snapshots caveat applies to both ZPL or ZVOL
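The thin-provisioned case mentioned above is the -s flag to zfs create; a sketch with an example pool/volume name:

```shell
# -s skips the refreservation, so the zvol only consumes what is written,
# and TRIM from the consumer genuinely returns space to the pool.
zfs create -s -V 100G tank/vm1

# Compare: refreservation is 'none' for a sparse zvol, ~100G otherwise.
zfs get volsize,refreservation,used tank/vm1
```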
[02:41:09] <ptx0> oh debian wheezy
[02:41:16] <Praeceps> And I'm guessing theoretically when trim support is added(soon hopefully) the device would be trimmed independently of that?
[02:41:17] <ptx0> can't run 4.19.2 kernel now
[02:41:22] <ptx0> toooooo old
[02:41:45] <ptx0> and when i'm compiling zfs it's like, hey what the hell are you doing
[02:42:24] <ptx0> "soon hopefully" lol
[02:43:01] <ptx0> the reason TRIM has been delayed so long is mostly political at this point
[02:43:16] <bunder> you mean like most big patches
[02:43:46] <bunder> oh wait, classes wasn't political, it was just a slow brady bunch ;)
[02:44:09] <ptx0> yeah when two features are "overlapping" and developed by two or more companies you end up getting the one related to the more powerful company integrated sooner while the other patch which may even have been in progress longer, now has to be modified to accommodate
[02:44:11] * CompanionCube remembers thinking that by the time he moved from old-desktop to current-desktop TRIM would be merged.
[02:44:16] <CompanionCube> That was over a year ago
[02:44:29] <bunder> i guess i can't really blame him, he was inbetween jobs
[02:44:30] <Praeceps> I mean I looked at the request which makes it look like they are pretty much there but yeah it's been open for a while lol
[02:44:55] <ptx0> TRIM is the patch that my company funded, while "Eager Zero" from delphix came along after and ruined our plans
[02:45:21] <Praeceps> Yeah I saw that in the parts that I've read up on
[02:45:27] <bunder> what company, i thought you rode bikes all day lul
[02:45:42] <ptx0> the one that keeps me from riding bikes all day
[02:46:19] <ptx0> oooh i'm finally building 0.8.0-rc2 on debian wheezy
[02:46:25] <ptx0> and they said it couldn't be done... psh
[02:46:46] <Praeceps> Hahaha
[02:46:53] <bunder> why couldn't it, you just need to stop using an archaic kernel
[02:47:00] <ptx0> 20:42:03 <@ptx0> can't run 4.19.2 kernel now
[02:47:00] <ptx0> 20:42:07 <@ptx0> toooooo old
[02:47:09] <ptx0> i'm stuck on 4.14
[02:47:19] <bunder> does it have a really old glibc or something
[02:47:22] <DeHackEd> 3 month old kernel (base). too old.
[02:47:25] <ptx0> 4.19.2 has dependencies on some newer debian utilities with unsupported arguments
[02:47:31] <ptx0> DeHackEd: wheezy is too old
[02:48:00] <ptx0> fuck these assholes for moving to newer deb tools though
[02:48:08] <bunder> yeah aren't they on aa or bb now
[02:48:19] <Praeceps> Isn't Wheezy eol?
[02:48:27] <ptx0> Praeceps: you sound like my mother
[02:48:35] <Praeceps> Lmfao
[02:48:49] <CompanionCube> lol wheezy
[02:48:51] <ptx0> she's a systemd developer
[02:49:00] <ptx0> needless to say we don't speak much
[02:49:20] <Praeceps> Hahaha
[02:49:30] <Praeceps> Linux family rivalry what a time to be alive
[02:49:50] <ptx0> she replaced all the kitchen appliances with a single fridge/oven/dishwasher combo thing
[02:50:01] <ptx0> that was the last straw.
[02:50:07] <Praeceps> Hahahaha
[02:50:43] <Praeceps> In regards to performance it's game loading times so I guess random reads are what you want to optimise for
[02:50:57] <ptx0> or #5182
[02:51:01] <zfs> [zfs] #5182 - Metadata Allocation Classes by don-brady <https://github.com/zfsonlinux/zfs/issues/5182>
[02:51:12] <ptx0> which helped my buddy who lives on SMR drives quite a lot
[02:51:39] <ptx0> of course zvol don't have as much of a benefit as the host OS will due to small block offload working for files but not zvol
[02:51:55] <ptx0> but zvol metadata offload from spinning rust helps immensely
[02:52:47] <Praeceps> I'm guessing if you offload it to a single device rather than raid and that single device fails you're extremely fucked tho right?
[02:52:58] <ptx0> yes which is why you mirror them
[02:53:09] <ptx0> raid isn't supported for special vdev
[02:53:22] <CompanionCube> yep, all your metadata goes poof and you're stuck with a random blob of bits
[02:53:29] <ptx0> you can use mirror special vdev on top of raidz pool though
[02:53:48] <Praeceps> I think I'll keep it on my raid1 then :P
[02:54:23] <ptx0> i'd offload metadata from my backup server if i could, but not enough free SATA ports
[02:54:54] <Praeceps> I am learning entirely too much about storage today :L
[02:55:21] <ptx0> asking too many fucking questions
[02:55:27] <ptx0> ;)
[02:55:31] <Praeceps> Hahahaha ;)
[02:56:08] <CompanionCube> i wonder
[02:56:10] <ptx0> oh man, wheezy has dracut v20
[02:56:20] <ptx0> but that's better than centos 6 which uses v4
[02:56:20] <CompanionCube> will anyone ever be stupid enough to use volatile storage for a special vdev
[02:56:29] <ptx0> CompanionCube: yes
[02:56:34] <ptx0> i've seen it happen on EC2
[02:56:41] <ptx0> "i rebooted and my pool was gone"
[02:56:42] <ptx0> lol
[02:56:43] <Praeceps> There will always be someone stupid enough to do something stupid
[02:56:57] <ptx0> Praeceps speaks from experience
[02:57:05] <Praeceps> My career is in security
[02:57:14] <Praeceps> Everyone is fucking dumb enough to do something stupid that's why we exist.
[02:57:28] <CompanionCube> ptx0: special vdev is arguably worse
[02:57:36] <CompanionCube> because you still *have* your data
[02:57:42] <CompanionCube> but can't find it
[02:57:47] <ptx0> i typo'd once and told this attractive woman i was in the bond business and she loved it, thought i'm wealthy. it took a few days for her to realise i meant the bonG business.
[02:58:09] <Praeceps> lol
[02:59:29] <CompanionCube> (for real !!fun!! store metadata on a tmpfs file)
[03:00:18] <Praeceps> I'm just thinking, if you lost the meta data you probably couldn't even recover anything right?
[03:00:26] <Praeceps> Like I don't even think you could file carve or anything like that
[03:01:05] <CompanionCube> well, if you didn't use compression or dedup you could brute-force filecarve maybe?
[03:01:58] <bunder> you'd still not know where files start and end
[03:02:15] <Praeceps> Yeah the way zfs allocates stuff would fuck you right?
[03:02:26] <bunder> i'm not sure if you need to know the hash algo either, or if that's just metadata
[03:03:59] <ptx0> lol brute forcing a several tb pool
[03:04:05] <ptx0> glwt
[03:05:05] <Praeceps> Does zfs even store all the sectors next to each other?
[03:05:10] <Praeceps> It doesn't sound like it
[03:05:40] <Praeceps> Probably wrong terminology there, I'm thinking blocks?
[03:06:57] <bunder> if they're in the same txg i think so
[03:07:18] <bunder> but once you start changing data and taking snapshots its gonna be all over the place
[03:07:28] <Praeceps> Yeah I was thinking snapshots
[03:07:36] <Praeceps> Forensics on that sounds like a bitch
[03:08:46] <bunder> does anyone even do that? i know i've hinted that recovery kindof stuff is a george wilson thing but i'm not even sure he actually does that stuff
[03:08:58] <Praeceps> Yeah totally
[03:09:39] <Praeceps> Mostly for court cases
[03:09:42] <bunder> i mean if drivesavers can fix your disk and zfs still says no, i think you're boned
[03:10:08] <Praeceps> I'm more thinking of a TLA trying to recover intelligence or smth :P
[03:11:33] <CompanionCube> https://www.sciencedirect.com/science/article/pii/S1742287609000449 someone wrote this
[03:13:07] <Praeceps> Of course theres a paper you have to pay for :P
[03:13:34] <ptx0> Dr. Beebe, haha
[03:13:57] <CompanionCube> Praeceps: Google is your friend :3
[03:14:02] <Praeceps> doctor by day, pop star by night
[03:31:47] <Praeceps> Alright guys I'm gonna head off night :)
[03:32:04] *** Praeceps <Praeceps!1fcdf44e@gateway/web/freenode/ip.> has left #zfsonlinux
[03:34:54] *** IonTau <IonTau!~IonTau@ppp121-45-221-40.bras1.cbr2.internode.on.net> has joined #zfsonlinux
[03:43:28] <bunder> "How the hell do you say Eevee’s name?" The Verge
[03:43:41] <bunder> if that case building video wasn't bad enough
[03:46:11] <ptx0> what the hell happened, man
[03:46:18] <ptx0> i install zfs-dracut and it's in the wrong place
[03:46:19] <ptx0> lol
[03:46:45] <ptx0> it went to /usr/share/dracut/modules.d but wheezy has things in /usr/lib/dracut/modules.d/
[03:47:55] <bunder> hmm
[03:48:19] <ptx0> i had to build my own grub, too
[03:48:29] <bunder> /usr/lib64/dracut/modules.d
[03:48:36] <bunder> yay gentoo
[03:49:03] <ptx0> I: *** Including module: zfs ***
[03:49:07] <ptx0> /usr/lib/dracut/modules.d/90zfs/module-setup.sh: line 91: dracut_module_included: command not found
[03:49:10] <ptx0> lol.
[03:49:15] <bunder> so who is wrong, dracut or zfs
[03:49:15] <ptx0> fml
[03:49:27] <ptx0> zfs-dracut isn't backwards compatible with dracut v20
[03:49:58] <bunder> 20, we're in the 40's now
[03:50:32] <ptx0> i am aware
[03:50:45] <ptx0> if you scroll up, you'll see
[03:51:24] <bunder> lol
[03:59:59] *** ps-auxw <ps-auxw!~arneb@p548D43E8.dip0.t-ipconnect.de> has quit IRC (Disconnected by services)
[04:00:05] *** ArneB <ArneB!~arneb@p548D44F7.dip0.t-ipconnect.de> has joined #zfsonlinux
[04:01:36] <bunder> Mark Shuttleworth Reveals Ubuntu 18.04 Will Get a 10-Year Support Lifespan
[04:01:41] <bunder> oh dear god please no
[04:02:28] <bunder> or at least stop putting out a new lts every 2 years if you're gonna make one last 10
[04:14:21] <ptx0> 'reveals'
[04:14:54] <bunder> its better than exposing himself :P
[04:16:30] <ptx0> brb
[04:19:10] *** zfs <zfs!~zfs@unaffiliated/ptx0> has quit IRC (Ping timeout: 246 seconds)
[04:20:28] <bunder> yay no bot, lets riot
[04:23:01] *** ptx0 <ptx0!~cheesus_c@unaffiliated/ptx0> has quit IRC (Ping timeout: 246 seconds)
[05:25:31] <PMT_> you rioted too hard, now mods are asleep post [...]
[05:47:10] <gdb> He's obviously doing this to take advantage of people fearful of IBM's acquisition of Red Hat.
[05:47:25] <gdb> er well "to take advantage of industry anxiety over ..."
[05:50:51] *** delx <delx!~delx@> has quit IRC (Ping timeout: 250 seconds)
[05:52:06] *** delx <delx!~delx@> has joined #zfsonlinux
[05:58:58] *** IonTau <IonTau!~IonTau@ppp121-45-221-40.bras1.cbr2.internode.on.net> has quit IRC (Remote host closed the connection)
[06:01:53] <bunder> he didn't even say where he was going
[06:02:02] <bunder> afaik the bot runs off a vps
[06:02:20] <PMT_> I believe that's true.
[06:02:38] <PMT_> He did say brb, so it wasn't accidental; he might have rebooted his internet connection
[06:02:57] <bunder> unless that dracut was for his vps
[06:03:13] <bunder> (shouldda used gentoo lul)
[06:04:25] <PMT_> a lot of the VPSes that let you run whatever you want are costlier
[06:06:18] <bunder> i'm not sure if ovh cares
[06:06:36] <bunder> they didn't seem too keen on shutting down spammers or ssh bots
[06:06:52] <bunder> unless they're counting clock cycles or something now
[06:07:54] <PMT_> if they don't let you choose how it boots, that can be a fairly effective lockout
[06:07:57] <PMT_> for example.
[06:09:24] <bunder> they can try to restrict that but good luck
[06:09:59] <bunder> afaik you can get a digital ocean box with fbsd (or linux for that matter) and install gentoo on it
[06:12:20] *** gerhard7 <gerhard7!~gerhard7@ip5657ee30.direct-adsl.nl> has joined #zfsonlinux
[06:13:27] <bunder> i wonder if he had to file a support ticket to get them to mount an iso
[06:14:03] <bunder> if so he ain't coming back tonight :P
[06:14:14] <mason> Who got locked out of something?
[06:14:24] <mason> Oh, ptx0?
[06:14:31] <bunder> 22:16:30 @ptx0 | brb
[06:14:33] <bunder> yes
[06:16:06] <mason> Ah, well. Hope it doesn't eat his night. Not sure what timezone he's in anyway.
[06:16:16] <mason> Bedtime, here. o/
[06:16:24] <bunder> i think he's still in ontario, so eastern
[06:20:33] <pistache> bunder: about #7294, I think it's a non-issue (not a bug) and should be closed
[06:22:08] <pistache> so there was no bug to begin with, and --no-canonicalize is very good as the mount's source is not a path anyway :)
[06:23:54] <bunder> eh i'll let brian do it
[06:24:40] <pistache> oh yes ofc, just wanted to let you know
[06:24:48] <pistache> as you said you did not know how to go for fixing that one
[06:25:13] <pistache> best fix : no bug !
[06:26:36] <bunder> i just want zfs allow to work right, i've never seen that error myself
[06:26:57] <bunder> i've only ever seen the "xyz created but only root can mount"
[06:27:51] <pistache> that error would actually only occur when mounting from a non-root user, whether it be in the global namespace or a user namespace
[06:28:47] <pistache> and it was occurring before --no-canonicalize was added, as --options cannot be used by non-root users either
[06:29:33] <bunder> i kindof only tried it once and didn't want to futz with it again because i know they haven't fixed it yet
[06:30:21] <pistache> from what I read of util-linux/mount.c, allowing non-root mounts (in any kind of namespace) using the mount command is tricky
[06:30:51] <bunder> if we can do it with ext/etc i don't see what the big holdup is for zfs
[06:31:16] <pistache> as --no-canonicalize can't be passed, and neither can --options (required to pass zfsutil), and only one of source and target can be specified
[06:31:28] <bunder> although /etc/fstab is kindof a chicken/egg problem
[06:33:32] <pistache> it can work if 'zfs mount' sets up /etc/fstab, uses 'user' in the options, and calls 'mount <mountpoint>'
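That scheme could be sketched roughly as follows (hypothetical dataset and mountpoint; note it relies on mountpoint=legacy plus an fstab entry, sidestepping the zfsutil option entirely):

```shell
# As root: make the dataset legacy-mounted and user-mountable via fstab.
zfs set mountpoint=legacy tank/home/alice
echo 'tank/home/alice /home/alice zfs noauto,user 0 0' >> /etc/fstab

# Then, as the unprivileged user, mount(8) accepts the single-argument form:
mount /home/alice
```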
[06:35:28] <bunder> zfs sharenfs uses nfs, and nfs writes /etc/dfs/sharetab, there's gotta be a way we can leverage the kernel somehow heh
[06:36:00] <pistache> I think the current logic (calling 'mount --no-canonicalize --options <options>,zfsutil dataset mountpoint') cannot work with non-root users, because of how /bin/mount works.
[06:36:46] <bunder> hmm, zfs allow can do share too, on non-linux
[06:37:15] <bunder> so in theory a user with allow can indirectly add entries to sharetab
[06:39:33] <pistache> I will try to investigate how zfs sharenfs does it
[06:39:37] <pistache> it's a good idea
[06:40:40] <bunder> of course it doesn't work on linux because of the mount excuse (at least according to the man page)
[06:41:25] <bunder> the only reason i can think of for them not implementing the whole thing is that it trips selinux or something
[06:41:50] <bunder> but that's more opinion than actual fact, it was before my time using zfs
[06:51:19] <pistache> ah ok, it doesn't actually work for non-root users with ZoL
[06:51:22] <pistache> pistache@roko:/root$ /sbin/zfs set sharenfs=on test/foo
[06:51:22] <pistache> exportfs: could not open /var/lib/nfs/.etab.lock for locking: errno 13 (Permission denied)
[06:52:27] <bunder> yeah that file is root:root
[06:54:27] <pistache> another thing about #7294 is that it's not related to #6865 (user namespace bugfixes and features)
[06:54:55] <pistache> I managed to rebase the code in #6865 on master and test it, but things are weird.
[06:55:29] <pistache> it felt nice to use a real (not tmpfs) filesystem with FS_USERNS_MOUNT :)
[06:56:01] <pistache> as far as I know there are none other than tmpfs at the time
[06:56:09] <bunder> weird how?
[06:57:06] <pistache> root@test:/mnt# touch foo
[06:57:10] <pistache> root@test:/mnt# ls -l
[06:57:35] <bunder> nice uid lol
[06:57:38] <pistache> and I can edit both files the same
[06:57:51] *** MrCoffee <MrCoffee!coffee@gateway/vpn/privateinternetaccess/b> has joined #zfsonlinux
[06:58:04] <pistache> so 'test' is a container, with UIDs mapped from +1000000
[06:58:53] <pistache> this is in a dataset that 'test' is allowed to mount
[06:59:07] <pistache> and the commands are run in test, with the dataset mounted in /mnt
[06:59:21] <pistache> created_on_host was created from the global namespace
[06:59:59] <bunder> tbh i've never used containers or namespaces
[07:00:52] <pistache> as far as I understand things, the UIDs should be mapped, and created_on_host should have uid/gid -1:-1, that should show as nobody:nogroup because of wrapping
[07:01:06] <pistache> so foo should have root:root
[07:01:44] <pistache> this PR implements the FS_USERNS_MOUNT filesystem flag, indicating to the kernel that the filesystem is "safe" (heh) for being mounted in an user namespace
[07:02:19] <pistache> which allows unprivileged (non-root and without CAP_SYS_ADMIN) users to mount this filesystem in their own user namespace
[07:03:00] <pistache> the only filesystem that I know implements this flag is tmpfs, which is often used in containers
[07:04:01] <pistache> as the code in the PR doesn't behave like tmpfs, I understand that the implementation of FS_USERNS_MOUNT is not complete
[07:04:20] <pistache> but this might be because of rebasing on v0.8.0, I have to try with 0.7.something
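The expected mapping pistache describes (container UIDs offset from host UIDs by +1000000, unmapped IDs wrapping to the overflow uid, shown as nobody/nogroup) is just arithmetic; a sketch using those assumed values:

```shell
# Map a host UID into a container user namespace that maps
# host 1000000..1065535 -> container 0..65535.
base=1000000
range=65536

map_uid() {
    if [ "$1" -ge "$base" ] && [ "$1" -lt "$((base + range))" ]; then
        echo "$(($1 - base))"   # inside the mapped range
    else
        echo 65534              # unmapped: kernel overflowuid, shown as nobody
    fi
}

map_uid 1000000   # host owner 1000000 -> container root (0)
map_uid 0         # a file created by host root is unmapped -> 65534
```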
[07:07:41] <CompanionCube> PMT_: better: mods are asleep, post issues without the template!
[07:10:59] *** veegee <veegee!~veegee@ipagstaticip-3d3f7614-22f3-5b69-be13-7ab4b2c585d9.sdsl.bell.ca> has joined #zfsonlinux
[07:13:45] <PMT_> CompanionCube: nah they do that when mods are awake too
[07:25:31] *** fling <fling!~user@fsf/member/fling> has joined #zfsonlinux
[07:25:45] <fling> How safe is it to use master without enabling new features?
[07:26:15] <bunder> pretty good
[07:27:26] <bunder> at worst, https://github.com/zfsonlinux/zfs/issues/8003
[07:31:22] <pistache> that's just a kernel/userland mismatch i think
[07:31:46] <pistache> I think it happens because the package structure change
[07:32:00] <pistache> changed*
[07:32:01] <bunder> they added some sysfs thing the old modules can't use
[07:32:32] <pistache> I think it's old userland that can't use them (at least for the issue above)
[07:33:07] <pistache> oh no you're right
[07:33:09] <pistache> it's old modules
[07:33:33] <pistache> so that means I need sleep, good night
[07:38:21] <CompanionCube> bunder: does that still apply
[07:38:45] <CompanionCube> because i'm planning a kernel update to 4.19 very soon
[07:39:55] <CompanionCube> and I haven't touched the kernel (with ZFS built-in) since july
[07:49:30] <bunder> probably
[08:02:15] <CompanionCube> welp, best hope the current binaries will be forward-compatible *enough* with newer code to boot and do dracut then
[08:05:03] *** veremitz <veremitz!~veremit@unaffiliated/veremit> has quit IRC (Remote host closed the connection)
[08:08:08] *** veremitz <veremitz!~veremit@unaffiliated/veremit> has joined #zfsonlinux
[08:44:12] *** eckomute <eckomute!~eckomute@d75-156-89-44.bchsia.telus.net> has quit IRC (Quit: WeeChat 2.2)
[08:53:39] *** percY- <percY-!~percY@> has quit IRC (Read error: Connection reset by peer)
[08:56:03] *** percY- <percY-!~percY@> has joined #zfsonlinux
[09:07:44] *** ezbp <ezbp!~ezbp@> has joined #zfsonlinux
[09:41:21] *** MrCoffee <MrCoffee!coffee@gateway/vpn/privateinternetaccess/b> has quit IRC (Ping timeout: 244 seconds)
[10:08:42] *** janlam7 <janlam7!~janlam7@> has joined #zfsonlinux
[10:32:01] *** lord4163 <lord4163!~lord4163@90-230-194-205-no86.tbcn.telia.com> has joined #zfsonlinux
[11:04:03] *** hyper_ch <hyper_ch!~hyper_ch@openvpn/user/hyper-ch> has quit IRC (Read error: Connection reset by peer)
[11:04:33] *** hyper_ch <hyper_ch!~hyper_ch@openvpn/user/hyper-ch> has joined #zfsonlinux
[11:38:38] <perfinion> i just ordered a second SSD, planning on making my / a mirror. does anyone have a thing to make dracut unlock multiple luks volumes with the same key?
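One commonly used approach for perfinion's question (an untested sketch; all paths, names, and UUIDs below are placeholders) is to embed a single keyfile in the initramfs and point both crypttab entries at it:

```shell
# /etc/crypttab -- both LUKS volumes reference the same keyfile:
#   luks-ssd0  UUID=aaaa-...  /etc/keys/root.key  luks,discard
#   luks-ssd1  UUID=bbbb-...  /etc/keys/root.key  luks,discard

# /etc/dracut.conf.d/luks.conf -- ship the key and crypttab inside
# the initramfs (keep the keyfile root-owned, mode 0400):
#   install_items+=" /etc/keys/root.key /etc/crypttab "

# rebuild the initramfs, then add both volumes to the kernel cmdline:
#   rd.luks.uuid=aaaa-... rd.luks.uuid=bbbb-...
dracut -f
```

Note the obvious trade-off: anyone who can read the initramfs can read the key, so this only makes sense if the initramfs itself lives on an encrypted or otherwise trusted device.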
[11:40:32] *** endre <endre!znc@end.re> has quit IRC (Quit: nope)
[11:49:39] *** chesty <chesty!~chesty@li1449-118.members.linode.com> has joined #zfsonlinux
[12:05:22] *** hoonetorg <hoonetorg!~hoonetorg@> has joined #zfsonlinux
[12:34:05] *** alfatau <alfatau!5d2271af@gateway/web/cgi-irc/kiwiirc.com/ip.> has joined #zfsonlinux
[12:39:00] <alfatau> hello everybody. I'm asking here to help me understand the (real) gotchas of having zfs on top of a hardware raid array. I read a lot of different opinions and statements, and what is actually really clear to me is that avoiding hw (or sw raid) is a disadvantage because I lose some very useful zfs features (such as self-healing, optimized resilvering, raid5 hole invulnerability, and so on...)
[12:40:15] <alfatau> ... sorry, i mean *using* hw/sw raid is a disadvantage...
[12:40:23] <pink_mist> "avoiding hw (or sw raid)"? zfs is sw raid ... and you should absolutely always avoid hw raid
[12:40:56] <alfatau> pink_mist: sorry, mistake due to rewriting sentence
[12:41:25] <alfatau> pink_mist: i meant *using* is a disadvantage
[12:41:46] <pink_mist> I don't understand. please restate your original question correctly
[12:42:05] <alfatau> ok. sorry. I'll rewrite
[12:44:25] <alfatau> hello everybody. I'm asking here to help me understand the (real) gotchas of having zfs on top of a hardware raid array. I read a lot of different opinions and statements, and what is actually really clear to me is that using zfs on top of a hw (or sw) raid (like linux md) is a disadvantage because I lose some very useful zfs features (such as self-healing, optimized resilvering, raid5 hole invulnerability, and so on...)
[12:45:08] <rlaager> The first part of that says you're asking to help understand. The second part says you already understand the issues.
[12:45:09] <pink_mist> yes, that's all true ... there are other reasons too for avoiding hw raid
[12:45:35] <pink_mist> such as vendor lock-in/lack of support
[12:48:34] <alfatau> pink_mist: yes, i'm asking for help understanding because somewhere i found some documentation where it was stated that using zfs on top of a hw raid array can produce data loss. What I actually want to understand is: having zfs on top of a hw raid array, will it be as safe as using a "normal" filesystem such as (for example) xfs or ext4, or not?
[12:49:01] <rlaager> It will be at least as safe, yes.
[12:52:25] <alfatau> rlaager, pink_mist: ok, in other words: suppose I want to use only a subset of zfs features, such as zvols, snapshots and zfs send/receive. I have a (not-fake) hw raid controller with nv-ram cache, and I currently have an xfs volume on top of an lvm group, on top of the hw-raid array. I want to replace the xfs volume with a zfs one in order to use the aforesaid features.
[12:52:58] <hyper_ch> how can I find out what zfs actually writes to disk? I have like 20GB write per 6h on an idling notebook
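One way to chase down hyper_ch's mystery writes; a sketch assuming the pool is named `rpool` (`zpool iostat` and the `/proc/spl` kstats are standard ZoL tooling, `iotop` attributes I/O per process):

```shell
# per-vdev read/write bandwidth, sampled every 5 seconds
zpool iostat -v rpool 5

# per-transaction-group stats: when each txg synced and how much it wrote
cat /proc/spl/kstat/zfs/rpool/txgs

# accumulate per-process I/O from the Linux side (needs root)
iotop -ao
```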
[12:55:29] <alfatau> then, how can I convince myself that my data in a zfs volume will be at least as safe as in the current non-zfs volume (not comparing fs of course)? any clear documentation that rules out hidden trouble with this setup?
[12:58:51] <pink_mist> alfatau: like rlaager said, zfs on hw raid will be just as safe as ext4/xfs/anything normal on hw raid
[12:59:41] <hyper_ch> but why would you want hw raid?
[13:02:23] <alfatau> hyper_ch: because this is the only option. My server has not enough "direct" sata ports to connect all disks and the hw-raid controller does not support pass-through. Also, I read that pass-through is not a real option because it hides SMART.
[13:06:33] <alfatau> pink_mist, rlaager: ok, thank you. the reason why I'm asking this question is that I found a lot of conflicting statements about this setup. For example, another was this: https://www.freenas.org/blog/freenas-worst-practices/ where it's explained that having a hw raid cache can produce data corruption. what do you think about it?
[13:07:56] <pink_mist> it can do that regardless of if you use zfs or anything else
[13:08:06] <pink_mist> and is another reason you shouldn't use hw raid
[13:10:15] <alfatau> pink_mist: in fact this was exactly my own opinion, but I wanted to ask some "more-experienced" people like those in this channel to explain to me what the real difference is with a normal fs.
[13:10:39] *** c0ffee152 <c0ffee152!coffee@gateway/vpn/privateinternetaccess/b> has joined #zfsonlinux
[13:11:16] <pink_mist> the difference is: if you allow zfs to take care of all raid, it's safer
[13:11:25] <pink_mist> if you don't allow that, there's not much difference at all
[13:11:32] <alfatau> pink_mist: anyway my controller has a nv-ram cache with a battery
[13:11:40] <alfatau> pink_mist: ok thank you
[13:18:09] <alfatau> now I have a second "big question": having a hw raid, I can add new disks to the array and the result is a larger virtual drive. handling this maintenance task is currently very simple, since the partition and lvm group are really simple to expand, and expanding an lvm volume automatically also resizes the fs on top of it. so I added disks over the last years very smoothly.
[13:19:17] <alfatau> since I can't use zfs builtin raid, I'll have a single vdev on top of the single virtual drive provided by the raid array
[13:19:53] <alfatau> what happens when expanding that virtual drive? is it possible to expand both the vdev and the corresponding zpool?
[13:23:38] <alfatau> I was not able to find any command for resizing a zpool, and I also found web documentation where it's advised to avoid zpool expansion. I can't tell whether these statements are a "corollary" to the "avoid hw raid" ones, or whether zpool expansion will really be difficult or can lead to real problems
[13:25:02] *** Essadon <Essadon!~Essadon@81-225-32-185-no249.tbcn.telia.com> has joined #zfsonlinux
[13:31:35] <DeHackEd> there's 2 ways to expand a pool. 1) replace all disks in a vdev (mirror, raidz) with bigger disks. 2) add a new vdev. sounds weird but ZFS allows adding disks to a RAID-0 and that includes RAID-10, RAID-60, etc.
[13:32:21] <DeHackEd> if you have 6 disks in RAID-Z2 (raid-6) the officially recommended procedure is to add another 6 disks at once as a second RAID-Z2 and effectively double the capacity
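As a sketch, the two expansion methods DeHackEd lists look roughly like this (pool name `tank` and device names are placeholders):

```shell
# method 1: swap every disk in a vdev for a bigger one, one at a time
zpool replace tank sda sdg     # wait for the resilver, repeat per disk
zpool online -e tank sdg       # expand once all members are bigger

# method 2: add a second vdev; it stripes with the existing one,
# effectively doubling capacity for a second matching raidz2
zpool add tank raidz2 sdh sdi sdj sdk sdl sdm
```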
[13:37:12] <alfatau> DeHackEd: I'll have a single vdev on top of an hw raid array. Adding a new disk to the hw raid array will result into a larger virtual disk. I don't know what happens to that vdev and zpool
[13:38:00] <DeHackEd> if the system sees a larger disk it can be expanded under method 1
[13:40:40] *** cinch <cinch!~cinch@freebsd/user/cinch> has joined #zfsonlinux
[14:11:21] <mf> any news on the expansion feature ahrens was working on?
[14:15:33] <mf> ah, still "in progress" as of the feature matrix spreadsheet
[14:15:44] <DeHackEd> expansion?
[14:15:54] <DeHackEd> oh raidz expansion
[14:16:19] <mf> yeah
[14:17:23] <mf> it's weird, up until a few months ago i'd naturally assumed that was just a thing that would be possible considering every hardware raid card on the planet can do it
[14:17:55] <DeHackEd> the thing you have to understand about RAID-Z is that parity is not fixed, and in fact every filesystem allocation block has its own private parity not shared with any other block
[14:18:06] <mf> yeah i watched two talks about it
[14:18:35] <mf> still, it seems more "difficult" than "impossible" - and the fact that it's now getting implemented confirms that :)
[14:19:13] <DeHackEd> well a hack has to be made. you can't change the raid level (5,6,7) and a bunch of management stuff is faking the old layout
[14:19:57] <mf> i'm sure it will get less hacky as it matures
[14:23:07] *** DzAirmaX <DzAirmaX!~DzAirmaX@unaffiliated/dzairmax> has quit IRC (Quit: We here br0.... xD)
[14:23:33] *** DzAirmaX <DzAirmaX!~DzAirmaX@unaffiliated/dzairmax> has joined #zfsonlinux
[14:44:06] <DeHackEd> the issue is that where parity and data are stored is a function of the block pointer. changing the parity geometry requires a block pointer change, and Block Pointer Rewrite (BPR) is one of those problems that makes CS students and professors cry
[14:44:33] <DeHackEd> if there's no snapshots, it's easy. if there are snapshots, clones and dedup, you have tears.
[14:46:51] <mf> i have no snapshots or clones but i do have dedup
[14:48:07] <mf> and yeah the BPR should really be treated like a CERN project. it's for the greater public good, i demand public funding :p
[14:48:54] <DeHackEd> more like you'll need a terabyte of RAM and/or a scratch SSD for the work
[14:49:27] <mf> SSD storage is slated to reach 8 cents per gb next year
[14:50:38] <DeHackEd> with snapshots allowing multiple datasets to reference the same (meta)data blocks, the filesystem structure becomes a tree with quasi-loops in it. rewriting the tree live while preserving the structure is a management nightmare.
[14:51:05] <mf> https://i.kym-cdn.com/photos/images/newsfeed/000/840/283/350.png
[14:52:05] <mf> with the current geopolitical climate and the state of the world i feel i'm entitled to be just a little optimistic about hard-to-solve computer science problems
[14:52:14] <mf> i mean, what else have we got
[14:52:21] <DeHackEd> it is not impossible, no. but for someone with a petabyte array (I'm half way there) it's depressing
[14:53:05] <mf> in other news scrub is almost done (eta 30 mins) and only 19kb was damaged
[14:53:07] *** jwynn6 <jwynn6!~justin@050-088-127-079.res.spectrum.com> has quit IRC (Ping timeout: 240 seconds)
[14:53:26] <DeHackEd> as long as it's repaired and no errors, it's good news
[14:53:34] <mf> yup
[14:55:19] *** jwynn6 <jwynn6!~justin@050-088-127-079.res.spectrum.com> has joined #zfsonlinux
[15:23:17] *** futune <futune!~futune@> has quit IRC (Read error: Connection reset by peer)
[15:23:18] *** futune_ <futune_!~futune@> has joined #zfsonlinux
[15:30:36] <mf> btw, odd question -- would it be possible to create a raidz pool in degraded mode
[15:32:53] <bunder> i don't think so but you could build the pool and take disks out as long as you don't take too many out
[15:35:31] <bunder> oh and https://i.imgur.com/pVuCFTZ.png
[15:36:56] <bunder> hmm, do i buy 12 2tb drives or 6 4tb drives, they cost about the same
[15:37:56] <pink_mist> 3 4tb and 6 2tb :P
[15:38:02] <futune_> build raidz1 with two drives and 1 file, then delete file?
[15:38:28] <bunder> i don't think it works that way
[15:38:38] <futune_> perhaps not
[15:39:34] <bunder> you might be able to do it and offline the file members, but oh god please no
[15:40:02] <bunder> by the time you put real disks in there all your data is all on the other disks and you can't rebalance
[15:55:55] <bunder> i guess the comparison i was trying to make was with mdadm, where they can do "mirror sda missing", we can't do that
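The mdadm idiom bunder is referring to looks like this (device names are placeholders); zpool create has no equivalent `missing` keyword, which is why the workaround below comes up:

```shell
# mdadm can create a mirror directly in degraded mode, with a
# placeholder member to be added later:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 missing
```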
[16:01:27] <cirdan> bunder: larger drives so you can buy more later and they use less power :)
[16:02:09] <bunder> lol power, if i was worried about that i wouldn't be going tr ;)
[16:02:28] <cirdan> bunder: and no that's exactly how you do it, you make a sparse file and use it as a drive in the raidz, then remove it
[16:02:47] <bunder> eww just eww
[16:03:01] <cirdan> just remove it before you write much data... :)
[16:03:12] <cirdan> bunder: there are reasons to do it
[16:03:36] <bunder> if its to be cheap, you still need all the drives to create the pool
[16:04:06] <cirdan> sure but if you are migrating to a new pool with old drives with data on them it can be useful
[16:05:25] <mf> so let's say you have four 1tb drives (hypothetically) in either mirror or raidz2 (doesn't make a difference in terms of capacity) - so you have 2tb of actual data. you buy two extra disks, take two offline from the original array, so it's now unprotected/degraded, make a new array in raidz2 that's supposed to be 6 disks, but two are missing (hypothetically assuming you can create an array in degraded mode). you copy the 2tb of data from the original (degraded) array to the new (degraded) array, and then once that's done, take the original array offline and use those two disks as the initially missing disks from the new array. resilver and done
[16:08:03] <mf> other possible strats: destroying array and copying from offsite backup (slow) or abusing computer stores' return policy to temporarily gain two extra disks that you return when you're done
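The sparse-file trick cirdan describes can be sketched like this (pool name, sizes, and device names are placeholders; offline the file members before writing any real data, since blocks written while they are present cannot be rebalanced later):

```shell
# sparse backing files: they occupy no real space until written to
truncate -s 4T /tmp/fake0 /tmp/fake1

# create the raidz2 with 4 real disks and 2 file-backed members
zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd \
    /tmp/fake0 /tmp/fake1

# immediately offline the file members; the pool is now DEGRADED
# but usable, and raidz2 tolerates exactly these two missing members
zpool offline tank /tmp/fake0
zpool offline tank /tmp/fake1
rm /tmp/fake0 /tmp/fake1

# later, once the real disks are free:
#   zpool replace tank /tmp/fake0 /dev/sde
#   zpool replace tank /tmp/fake1 /dev/sdf
# and let the resilver complete
```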
[16:08:58] <alfatau> DeHackEd: thank you. so if the system at a certain time start seeing a larger disk, then I can get the expanded pool by setting "zpool set autoexpand=on <poolname>" before starting the hw-raid expansion, correct?
[16:09:30] <bunder> nah skip that, zpool online -e
[16:09:37] <bunder> you can do it with the pool imported too
[16:10:17] <bunder> autoexpand makes zed unhappy
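For the hw-raid LUN-growth case alfatau asked about, the two options look like this (pool name `tank` and device name are placeholders):

```shell
# option 1: let the pool grow on its own whenever the LUN gets bigger
zpool set autoexpand=on tank

# option 2 (bunder's suggestion): expand the vdev in place after the
# grow; works with the pool imported and avoids the zed event noise
zpool online -e tank sda
```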
[16:15:10] <alfatau> bunder: ah! ok thank you
[16:16:08] <alfatau> bunder: what does "autoexpand makes zed unhappy" mean??
[16:18:06] <rlaager> alfatau: Even if your controller doesn't support pass-through (and are you absolutely sure about that), and you can't flash it with IT firmware (have you looked into that), you can almost certainly create multiple single-disk "RAID-0" volumes to export disks individually. Basically, put as many disks on direct SATA ports as you can, and export the remaining ones through the RAID card.
[16:18:15] <rlaager> Or, you know, just buy a cheap HBA.
[16:22:48] <bunder> zed generates events when it sees things change, but autoexpand generates an event every time it checks for changes
[16:23:02] <bunder> and if you're logging zed, that means a lot of useless writes
[16:25:39] <alfatau> rlaager: my controller does not support pass-through (see dell perc h710) but of course it can support multiple raid0 volumes. I can't flash it because even if a new firmware would solve the problem, I'd completely lose the hw warranty.
[16:26:23] <mf> alfatau: just flash it back to stock when you run into trouble
[16:26:51] <mf> that's what i always do with my android phones :p
[16:27:12] <mf> screen broken? flash back to stock, relock the bootloader, return for warranty
[16:27:13] <rlaager> alfatau: Right, so use as few drives on it as possible, and pass them through as individual volumes.
[16:27:33] <bunder> https://i.imgur.com/gOuLLX9.png github stahp