May 25, 2019
[00:17:34] *** shibboleth <shibboleth!~shibbolet@gateway/tor-sasl/shibboleth> has quit IRC (Quit: shibboleth)
[00:24:06] *** tiqpit9 <tiqpit9!~tiqpit@195.158.107.83> has quit IRC (Ping timeout: 272 seconds)
[00:34:00] *** sa02irc <sa02irc!~sa02irc@155-079-043-212.ip-addr.inexio.net> has quit IRC (Quit: Leaving)
[00:50:21] *** elxa <elxa!~elxa@2a01:5c0:e099:d7f1:52b3:e8b6:cd70:c33c> has quit IRC (Ping timeout: 258 seconds)
[01:21:22] *** tyilanmenyn <tyilanmenyn!~root@scriptkitties/tyil> has joined #zfsonlinux
[01:21:38] *** donhw <donhw!~quassel@host-184-167-37-58.jcs-wy.client.bresnan.net> has quit IRC (Remote host closed the connection)
[01:21:54] *** donhw <donhw!~quassel@host-184-167-37-58.jcs-wy.client.bresnan.net> has joined #zfsonlinux
[01:34:06] *** iamGavinJ <iamGavinJ!~iamGavinJ@unaffiliated/iamgavinj> has quit IRC (Quit: iamGavinJ)
[01:34:57] *** jtara <jtara!~janet@208.91.6.184> has joined #zfsonlinux
[01:36:40] *** tyilanmenyn <tyilanmenyn!~root@scriptkitties/tyil> has quit IRC (Ping timeout: 258 seconds)
[01:39:34] *** iamGavinJ <iamGavinJ!~iamGavinJ@unaffiliated/iamgavinj> has joined #zfsonlinux
[01:47:16] *** trumee <trumee!~rajlon.dy@c-98-194-48-184.hsd1.tx.comcast.net> has joined #zfsonlinux
[01:48:31] <trumee> I have been using luks encryption with a "password prompt" on bootup. Since zfs 0.8 has landed in archlinux, can i replace luks with encrypted zfs with a password prompt?
[01:48:54] <trumee> in simple words i need zfs on root with encryption.
[01:53:10] <fryfrog> 14 hour scrub time reduced to 7 hours on 0.8.0, nice improvement!
[01:53:44] <fryfrog> trumee: its very new, maybe you could try and edit the arch zfs wiki? :)
[01:55:12] <DeHackEd> how's the initrd support for encryption?
[01:55:59] <trumee> DeHackEd, there is a section on the arch wiki on native encryption (https://wiki.archlinux.org/index.php/Installing_Arch_Linux_on_ZFS#Native_encryption), but it does not talk about the bootloader.
[01:56:01] <zfs-bot> [ Installing Arch Linux on ZFS - ArchWiki ] - wiki.archlinux.org
[01:56:46] <fryfrog> I didn't get that daring; my `/boot` is my fat32 EFI partition on md raid1 with metadata 0.9 (or whichever version stores it at the end)
[02:03:32] <futune> trumee, i have been running root on native encryption in arch linux for about a year, using git master
[02:03:46] <fryfrog> futune: nice, how does booting work?
[02:04:04] <futune> same prompt you get on import -l
[02:04:13] <fryfrog> futune: any way to have it run a little dropbear ssh so you can do it remotely?
[02:04:19] <trumee> futune, that is nice
[02:04:21] <futune> the zfs hook for the initramfs seems to just work
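Roughly what that prompt corresponds to on the command line, as a hedged sketch (the pool name rpool is illustrative):

    # import and prompt for all encryption keys in one step
    zpool import -l rpool
    # or import without mounting, load keys explicitly, then mount
    zpool import -N rpool
    zfs load-key -r rpool
    zfs mount -a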
[02:04:32] <trumee> futune, do you use efi?
[02:04:37] <trumee> futune, do you use efi partition?
[02:04:57] <futune> trumee, yes I do, separate /boot
[02:05:31] <futune> this is because grub dies when it sees the encryption feature enabled
[02:05:56] <futune> regardless of whether the actual dataset is encrypted
[02:06:21] <DeHackEd> it's a pool-wide flag that says "encrypted datasets exist" and it's not backwards compatible (not even read-only). so, yeah that's expected.
[02:06:29] <fryfrog> futune: but its fine if the *pool* isn't encrypted, right?
[02:06:35] <futune> fryfrog, i looked into dropbear, it's doable but i haven't tried it, should work
[02:06:41] <fryfrog> futune: you can just encrypt datasets?
[02:06:59] <futune> fryfrog, nope, if pool contains any encrypted datasets, grub gives up
[02:07:24] <fryfrog> https://old.reddit.com/r/zfs/comments/bnvdco/zol_080_encryption_dont_encrypt_the_pool_root/
[02:07:25] <zfs-bot> [REDDIT] ZoL 0.8.0 encryption: don't encrypt the pool root! (self.zfs) | 44 points (96.0%) | 16 comments | Posted by numinit | Created at 2019-05-12 - 22:55:33UTC
[02:07:29] <zfs-bot> [ ZoL 0.8.0 encryption: don't encrypt the pool root! : zfs ] - old.reddit.com
[02:07:48] <fryfrog> futune: ah
[02:07:54] <futune> i always use a separate encryption root, e.g. rpool/crypt/root
[02:08:16] <futune> doesn't matter, feature flag enabled makes grub want nothing to do with your pool
[02:09:01] <futune> the patch to make grub work in the sane way is trivial, but cannot be upstreamed because grub devs hate us or something
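A quick way to see where a pool stands on this, as a sketch (pool name illustrative):

    # reports disabled / enabled / active; per the discussion above, grub backs off
    # as soon as this is enabled, even if no dataset actually uses encryption
    zpool get feature@encryption rpool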
[02:09:17] <trumee> futune, so rpool should not be encrypted but rpool/crypt/root can be?
[02:10:35] <futune> trumee, yes, everything under rpool/crypt inherits encryption
[02:11:31] <futune> I also have rpool/plain containing unencrypted datasets for stuff that either want more performance or is not sensitive
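A minimal sketch of the layout futune describes, assuming a pool named rpool; dataset names are illustrative:

    # encryption root: everything created underneath inherits encryption
    zfs create -o encryption=aes-256-gcm -o keyformat=passphrase rpool/crypt
    zfs create rpool/crypt/root
    zfs create rpool/crypt/home
    # unencrypted sibling for non-sensitive or performance-hungry data
    zfs create rpool/plain
    zfs create rpool/plain/pacman-cache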
[02:11:51] <trumee> futune, ok like swap and /tmp?
[02:12:02] <futune> I consider those sensitive :p
[02:12:09] *** baojg <baojg!~baojg@162.243.44.213> has joined #zfsonlinux
[02:12:15] <trumee> futune, ok so what is not sensitive?
[02:12:25] <trumee> or needs performance
[02:12:27] <futune> pacman package cache for example
[02:12:31] <trumee> ah, I see
[02:13:01] <DeHackEd> a build directory while compiling something
[02:13:26] <trumee> futune, can you pastebin your /boot/grub/grub.cfg and arch specific datasets?
[02:13:47] <trumee> futune, i will shamelessly copy your config :)
[02:14:27] <trumee> futune, /etc/mkinitcpio.conf will be useful too
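For reference, a hedged sketch of the relevant mkinitcpio.conf line when using the zfs initcpio hook shipped with the archzfs packages (hook order per the Arch wiki; adjust for your own setup):

    # /etc/mkinitcpio.conf -- the zfs hook has to run before filesystems
    HOOKS=(base udev autodetect modconf block keyboard zfs filesystems)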
[02:14:56] *** yawkat <yawkat!~yawkat@cats.coffee> has quit IRC (Ping timeout: 272 seconds)
[02:14:56] <futune> my dataset layout is mainly based on snapshot convenience
[02:15:07] <trumee> how did you install zfs 0.8, since there is no livecd based on it?
[02:15:20] <futune> I make my own archiso
[02:15:24] <trumee> ah
[02:15:44] <trumee> i started doing that, but don't know how to create an AUR repo.
[02:15:56] <futune> good thing somebody else did then
[02:15:59] <futune> it's called archzfs
[02:16:04] <trumee> oh wow
[02:16:47] <fryfrog> Often the livecd kernel version and pre-compiled zfs modules will line up and you can just install them after booting the livecd.
[02:16:56] <fryfrog> You could also just build the dkms ones.
[02:17:24] <fryfrog> But the archiso w/ zfs in it already is nice. I built one, it wasn't *that* hard... but it is mildly annoying keeping it up to date.
[02:17:30] <futune> I found out pretty quick that dkms is the way to go
[02:17:34] <fryfrog> Is it automated anywhere?
[02:17:39] <trumee> fryfrog, so i can boot using livecd and install zfs on host, and then create zfs partition?
[02:17:43] <futune> using precompiled breaks rolling release pretty badly
[02:18:17] <fryfrog> trumee: install zfs to the *live* host, you mean right? Yes, that works.
[02:18:34] <futune> trumee, this works fine
[02:18:36] <fryfrog> futune: Its not *that* bad, generally just a day or two delay for kernel updates.
[02:18:47] <fryfrog> You switched from built to dkms?
[02:19:02] <futune> fryfrog, my experience was that half the days of the week i couldn't run pacman -Syu
[02:19:45] <fryfrog> Oh, for that you can just ignorepkg
[02:20:02] <fryfrog> I just toggle between #commenting it and not
[02:20:11] <fryfrog> But maybe I should switch to dkms :)
[02:20:25] <futune> if you can make precompiled work for you, more power to you
[02:20:55] <fryfrog> Anyone can, but I have to admit dkms sounds nicer :)
[02:20:59] <futune> i didn't want to mess around with manually holding back the kernel
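The ignorepkg approach fryfrog mentions is a one-line pacman.conf entry; a sketch, assuming the prebuilt zfs-linux package:

    # /etc/pacman.conf -- hold the kernel back until a matching zfs-linux build exists
    [options]
    IgnorePkg = linux linux-headers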
[02:21:20] <fryfrog> where does your zfs-utils come from? aur or archzfs repo?
[02:21:26] <futune> same repo
[02:21:28] <futune> archzfs
[02:22:20] <fryfrog> And DKMS version builds it for the current kernel, right? Like, right now I'm stuck on 5.0.13 because 5.1.x is behaving oddly on my server :(
[02:22:27] <futune> yep
[02:22:37] <futune> or whatever you have headers installed for
[02:23:03] <fryfrog> and zfs-dkms replaces zfs-linux?
[02:23:11] <futune> they conflict, yeah
[02:23:11] <fryfrog> ah, headers
[02:23:27] <futune> remember to install linux-headers
[02:23:37] <futune> before you try to dkms
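Putting the dkms route together, a sketch assuming the archzfs repository is already configured:

    # headers must match the running kernel or dkms has nothing to build against
    pacman -S linux-headers
    # zfs-dkms conflicts with (and replaces) the prebuilt zfs-linux package
    pacman -S zfs-dkms zfs-utils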
[02:23:52] <trumee> futune, do you use the latest kernel or LTS?
[02:24:07] <fryfrog> futune: yeah, doing that right now! :)
[02:24:20] <futune> trumee, I use the latest, but not sure if I should keep doing it
[02:24:22] <trumee> futune, i am wondering about this, https://github.com/zfsonlinux/zfs/issues/8804
[02:24:22] <zfs-bot> [GitHub] [zfsonlinux/zfs #8804] ptx0: 0.8 changelog missing lack of SIMD acceleration | Major performance loss is not documented in the 0.8 changelog, people should probably know that.
[02:24:57] <trumee> looks like the latest kernel is not the best, https://github.com/zfsonlinux/zfs/issues/8793
[02:24:57] <zfs-bot> [GitHub] [zfsonlinux/zfs #8793] ptx0: no SIMD acceleration | 4.14.x, 4.19.x, 5.x all have no SIMD acceleration, it is like a turtle. very slow.
[02:25:22] <trumee> wonder what repercussions there are because of the lack of SIMD.
[02:25:48] <cirdan> slooow like turtle
[02:25:59] <futune> yeah... I was using zfs send between two native encrypted datasets earlier yesterday... cpu usage was sky high and it was pretty slow
[02:26:41] <futune> single dataset r/w is ok, haven't noticed any practical impact
[02:27:21] <fryfrog> can you encrypt existing dataset? or have to create new encrypted one and zfs send it?
[02:27:45] <futune> fryfrog, you zfs send pool/data | zfs recv pool/crypt/data
[02:28:01] <fryfrog> makes sense
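Spelled out with a snapshot, since zfs send operates on snapshots (dataset names follow futune's example and are illustrative):

    zfs snapshot pool/data@migrate
    zfs send pool/data@migrate | zfs recv pool/crypt/data
    # the received copy inherits encryption from pool/crypt; the plaintext
    # original can be destroyed once the copy is verified
    zfs destroy -r pool/data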
[02:29:49] <fryfrog> Any way to feed `-j8` or what ever to dkms builds?
[02:30:03] <fryfrog> A quick Google isn't pointing out anything obvious
[02:30:09] <futune> it seems to be a bit of a black box
[02:30:24] <futune> my observation is that it starts out single threaded, about halfway in it starts using all cores
[02:30:26] <futune> no idea why
[02:30:33] <fryfrog> Ah
[02:30:55] <fryfrog> It also isn't a very verbose step!
[02:31:10] <fryfrog> Just sitting there like "==> dkms install zfs/0.8.0 -k 5.0.13-arch1-1-ARCH", right?
[02:31:16] <futune> exactly
[02:31:40] <futune> you know it's compiling cause the computer is emitting heat and sound, but no console output
[02:32:09] <futune> it's a bit disconcerting if you are used to constant warnings and compiler vomit :p
[02:32:24] <fryfrog> I'm watching the process in htop and its using very little cpu :(
[02:32:42] <fryfrog> THERE WE GO
[02:32:47] <futune> it takes 7 minutes on this machine
[02:32:48] <fryfrog> ALL THE CPUS JUST LIKE YOU SAY! :)
[02:32:59] <futune> 28 minutes on my netbook
[02:33:43] <fryfrog> Whelp, lets see if this shit reboots ;)
[02:33:51] *** fryfrog <fryfrog!~fryfrog@gallery/fryfrog> has quit IRC (Quit: leaving)
[02:34:46] *** yawkat <yawkat!~yawkat@cats.coffee> has joined #zfsonlinux
[02:38:18] *** behlendorf <behlendorf!~behlendo@c-24-4-77-236.hsd1.ca.comcast.net> has quit IRC (Quit: leaving)
[02:38:22] *** fryfrog <fryfrog!~fryfrog@gallery/fryfrog> has joined #zfsonlinux
[02:38:32] <fryfrog> Worked great, thanks for the suggestion. :)
[02:39:32] <futune> glad to hear that
[02:40:45] <fryfrog> "584G scanned at 10.4G/s" DANG
[02:40:52] <fryfrog> (6 disk raidz SSD pool)
[02:41:53] <trumee> futune, archiso failed to build an iso for me: /home/user/archlive/work/efiboot: unknown filesystem type 'vfat'.
[02:43:34] <futune> trumee, possibly you need to install dosfstools on the build host?
[02:44:52] <trumee> futune, i have that installed. i guess dosfstools is needed in archiso
[02:45:20] <trumee> futune, i am going to build baseline iso, and see if that breaks too
[02:45:28] <futune> trumee, i used releng
[02:46:04] <trumee> futune, yeah, i was trying to use that too
[02:46:31] <trumee> futune, huh baseline builds fine
[02:48:43] <futune> baseline might be for containers? and so not need efi
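For reference, a rough sketch of the releng-based build futune describes (the exact build command depends on the archiso version, and the package selection is an assumption):

    cp -r /usr/share/archiso/configs/releng ~/archlive
    # add the archzfs repository to ~/archlive/pacman.conf and
    # zfs-linux (or zfs-dkms plus linux-headers) to ~/archlive/packages.x86_64
    cd ~/archlive && sudo ./build.sh -v    # newer archiso releases use mkarchiso -v instead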
[02:57:15] *** tyilanmenyn <tyilanmenyn!~root@scriptkitties/tyil> has joined #zfsonlinux
[02:59:51] *** pR0Ps <pR0Ps!~pR0Ps@142.114.245.43> has quit IRC (Quit: Quitting)
[03:04:59] <fryfrog> ha, 14min -> 12 min on that one :)
[03:05:00] *** misuto2 <misuto2!~misuto@193.183.116.13> has joined #zfsonlinux
[03:05:46] *** misuto2 <misuto2!~misuto@193.183.116.13> has quit IRC (Client Quit)
[03:05:46] *** misuto <misuto!~misuto@193.183.116.21> has quit IRC (Ping timeout: 252 seconds)
[03:07:28] *** pR0Ps <pR0Ps!~pR0Ps@142.114.245.43> has joined #zfsonlinux
[03:10:16] *** tyilanmenyn <tyilanmenyn!~root@scriptkitties/tyil> has quit IRC (Ping timeout: 258 seconds)
[03:11:58] <fryfrog> Does new scrub work w/o feature upgrade on pool?
[03:12:28] <fryfrog> Looks like yes, nice.
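Which matches trying it directly, a sketch with an illustrative pool name; no pool feature upgrade is needed for the new scrub, per fryfrog's observation:

    zpool scrub tank
    zpool status tank    # 0.8 reports both "scanned at" and "issued at" rates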
[03:37:11] <RoyK> will this work with SAS disks? https://www.ebay.com/itm/1M-Mini-SAS-SFF-8087-36-PIN-to-4-SATA-7-P-HD-Splitter-Breakout-Cable-Q4Y5/122725996813?epid=2269356736&hash=item1c930a190d:g:qt4AAOSwextZyokZ
[03:37:12] <zfs-bot> [ 1M Mini SAS SFF-8087 36 PIN to 4 SATA 7 P HD Splitter Breakout Cable Q4Y5 192090057733 | eBay ] - www.ebay.com
[03:44:22] <manfromafar> no, you can't stick SAS into SATA, but you can stick SATA into SAS
[03:45:44] *** misuto <misuto!~misuto@193.183.116.13> has joined #zfsonlinux
[03:52:49] *** iamGavinJ <iamGavinJ!~iamGavinJ@unaffiliated/iamgavinj> has quit IRC (Quit: iamGavinJ)
[03:59:57] <RoyK> manfromafar: it's a sas controller - most of my drives are sata
[04:02:07] *** tyilanmenyn <tyilanmenyn!~root@scriptkitties/tyil> has joined #zfsonlinux
[04:03:28] *** mmlb <mmlb!~mmlb@76-248-148-178.lightspeed.miamfl.sbcglobal.net> has joined #zfsonlinux
[04:06:58] *** tyilanmenyn <tyilanmenyn!~root@scriptkitties/tyil> has quit IRC (Read error: Connection reset by peer)
[04:08:25] *** tyilanmenyn <tyilanmenyn!~root@scriptkitties/tyil> has joined #zfsonlinux
[04:09:45] *** tyilanmenyn is now known as tyil
[04:12:05] <manfromafar> not what you posted ;P
[04:22:24] *** zapotah <zapotah!~zapotah@unaffiliated/zapotah> has quit IRC (Ping timeout: 252 seconds)
[04:23:54] *** ghoti <ghoti!~paul@glphon2233w-grc-05-70-50-123-70.dsl.bell.ca> has quit IRC (Read error: Connection reset by peer)
[04:26:53] <Dagger> RoyK: no, you can't plug those connectors directly into a SAS drive. you know how SATA drives have two separate connectors for power and data? SAS drives have the two bits of plastic bridged together
[04:30:07] <sarnold> is there a way to gauge how large the dedup tables would be for a given zdb -S output? eg http://paste.ubuntu.com/p/mChH9qPsK6/
[04:30:07] <zfs-bot> [ Ubuntu Pastebin ] - paste.ubuntu.com
[04:31:27] *** zapotah <zapotah!~zapotah@unaffiliated/zapotah> has joined #zfsonlinux
[04:31:50] <Dagger> RoyK: you need one of these cables: https://cgi.ebay.co.uk/133047243644
[04:31:51] <zfs-bot> [ Mini SAS SFF-8643 to 4 SATA 7pin hard disk 6Gbps data Server Raid Cable 1m 601951004552 | eBay ] - cgi.ebay.co.uk
[04:32:20] <fryfrog> sarnold: google for the amount of memory used per block and then multiply by the number of blocks
[04:32:42] <Dagger> alternately, you can buy four of these things: https://cgi.ebay.co.uk/283476253718 and use them with the cable you linked, but that ends up being more expensive :/
[04:32:43] <zfs-bot> [ SFF-8482 SAS To SATA 180 Degree Angle Adapter Converter Straight Head 760960793601 | eBay ] - cgi.ebay.co.uk
[04:32:47] <fryfrog> For some reason 320bytes comes to mind, but that is probably wrong?
[04:33:13] <sarnold> fryfrog: yeah I have ~320 bytes in mind too but couldn't recall if that was l2arc metadata size or ddt size :)
[04:34:42] <fryfrog> sarnold: might have been old l2arc size, but modern is much smaller
[04:34:43] <Dagger> my ZFS notes have this in them: "5 GB of DDT tables per 60 GB (recordsize=8k), 1 TB (recordsize=128k) or 10 TB (recordsize=1M) of data, if dedup is enabled."
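As a rough worked example with the ~320-bytes-per-entry figure mentioned above (an assumption; the actual in-core DDT cost varies by ZFS version and block population):

    # entries ~= unique data / recordsize; RAM ~= entries * 320 bytes
    # e.g. 1.5 TiB of unique data at 128 KiB records:
    echo $(( (1536 * 1024 * 1024 * 1024 / (128 * 1024)) * 320 / 1024 / 1024 )) MiB   # ~3840 MiB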
[04:36:44] <fryfrog> sarnold: going to play w/ the new dedicated ddt devices in 0.8?
[04:37:04] <sarnold> fryfrog: ooh, this is the first I'm hearing of them :)
[04:37:58] <sarnold> fryfrog: I'd been thinking of playing with the special vdev type; I've got a fast pool of ssd mirrors, a slow pool with hdd mirrors.. and I'm running up against the edges of space :)
[04:38:22] <fryfrog> Do you actually have a lot of dedupable data?
[04:38:24] <sarnold> but I wanted to make sure that dedup really would be terrible before discounting it entirely
[04:38:47] <sarnold> the simulation here reports I'd save more space deduping than I do with compression
[04:39:07] <sarnold> but it doesn't report the size of the tables :) which is unfortunate
[04:39:36] <fryfrog> Are you doing a bunch of VMs?
[04:39:54] <sarnold> no; it's the ubuntu archive, unpacked
[04:40:10] <sarnold> so a few thousand copies of the GPL, etc :)
[04:40:29] <sarnold> all the unchanged files between version a and version b of packages..
[04:40:33] <fryfrog> Oh, that does sound like a decent use case.
[04:40:50] <sarnold> it seemed plausible that dedup might actually work out for this. but I might not have the ram to pull it off :)
[04:41:17] <fryfrog> I *think* you can enable it on just one dataset. But it's also at the block level, so you might need to use a recordsize that puts files into one or two blocks?
[04:41:21] <sarnold> and my mind starts going in circles trying to figure out how Dagger's rule of thumb might work out for my collection of blocks, hehe
[04:41:28] <fryfrog> If they got packed together, they might not dedup?
[04:42:01] <fryfrog> Do you have the space to have both at the same time? You can just enable it on a data set and then send to it and see how it turns out.
[04:42:09] <sarnold> heh, no :(
[04:42:24] <fryfrog> Ah, dang.
[04:42:33] <sarnold> since I've got all the data locally maybe I just run some tests and find out
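A sketch of what such a test dataset might look like (names and values are illustrative; dedup matches at the block/record level):

    # dedup, like compression, is a per-dataset property
    zfs create -o dedup=on -o recordsize=1M rpool/dduptest
    # copy a slice of the archive in, then check the achieved ratio in the DEDUP column
    zpool list rpool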
[04:42:33] <fryfrog> Can you do that w/ just a tiny portion and copy?
[04:43:10] *** Frozear <Frozear!~Frozear@2600:1700:f1c1:9030::5e7> has joined #zfsonlinux
[04:43:29] <sarnold> not in a way that would be both realistic for de-duping within packages and across packages.. I've thought about trying that before but can't think of a great way to pull it off
[04:43:33] <fryfrog> Man, new scrub is so awesome.
[04:43:48] <fryfrog> How big is the data? How easy to get?
[04:47:37] <sarnold> http://paste.ubuntu.com/p/VfsC4r4pnQ/
[04:47:38] <zfs-bot> [ Ubuntu Pastebin ] - paste.ubuntu.com
[04:48:05] <sarnold> it's ~1.2 TB of packages, ~1.5 TB of unpacked data
[04:49:22] <sarnold> it's easy to regen the unpacked data from the raw packages.. it just takes a while. days perhaps? it's been a while since the first one..
[04:57:52] *** user_51_ <user_51_!~quassel@92.117.137.189> has joined #zfsonlinux
[05:02:09] *** user_51 <user_51!~quassel@92.117.146.38> has quit IRC (Ping timeout: 258 seconds)
[05:02:46] *** Albori <Albori!~Albori@64-251-148-96.fidnet.com> has quit IRC (Ping timeout: 272 seconds)
[05:05:02] *** MilkmanDan <MilkmanDan!~dan@wilug/expat/MilkmanDan> has quit IRC (Ping timeout: 245 seconds)
[05:05:21] *** ghoti <ghoti!~paul@glphon2233w-grc-05-70-50-123-70.dsl.bell.ca> has joined #zfsonlinux
[05:06:59] *** MilkmanDan <MilkmanDan!~dan@wilug/expat/MilkmanDan> has joined #zfsonlinux
[05:17:33] <fryfrog> Can't you use srv/mirror/ubuntu? It says it has 6T free
[05:19:58] <sarnold> thats on hdd, not ssd
[05:20:13] <sarnold> the read speed difference is immense
[05:21:25] <fryfrog> Sure, but you just need to see if it'll dedup
[05:21:28] <fryfrog> As a test.
[05:24:25] <sarnold> I'd rather toast the fast pool testing, it's single-purpose at this point :)
[05:24:44] <sarnold> if I try dedup on srv, then I'm stuck with it on srv, and tearing it down would be more effort
[05:24:57] <sarnold> I probably have enough storage to make that work too, but it'd be annoying :)
[05:25:14] <sarnold> fryfrog: thanks for thinking it over with me -- it's time to get some dinner and start in on this weekend :)
[05:25:15] <fryfrog> If it works, you could just send it back
[05:43:02] <manfromafar> only if you don't create a separate dataset that has dedup on
[05:43:16] <manfromafar> once you delete the last of the datasets using dedup the ddtable is removed
[05:44:01] *** Fusl <Fusl!~fusl@opennic/fusl> has quit IRC (Remote host closed the connection)
[05:45:03] *** piti <piti!~root@pdpc/supporter/active/piti> has quit IRC (Read error: Connection reset by peer)
[05:45:09] *** Fusl <Fusl!~fusl@opennic/fusl> has joined #zfsonlinux
[05:47:58] *** piti <piti!~root@pdpc/supporter/active/piti> has joined #zfsonlinux
[06:01:41] <patdk-lap> heh, seriously doubt it is going to dedup anything reasonable
[06:01:58] <patdk-lap> unless you're storing multiple versions