January 1, 2019
[00:00:33] *** King_InuYasha <King_InuYasha!~King_InuY@fedora/ngompa> has joined #zfsonlinux
[00:08:37] *** rjvbb <rjvbb!~rjvb@2a01cb0c84dee6009bda76eb03bc33f7.ipv6.abo.wanadoo.fr> has quit IRC (Ping timeout: 252 seconds)
[00:57:02] *** DzAirmaX <DzAirmaX!~DzAirmaX@unaffiliated/dzairmax> has joined #zfsonlinux
[01:09:47] *** buu <buu!~buu@99-74-60-251.lightspeed.hstntx.sbcglobal.net> has quit IRC (Ping timeout: 240 seconds)
[01:32:49] <PMT> Ryushin: so, Mo emailed me off-list, though he forwarded it to the bug afterward, saying that he found if you didn't have insserv installed, it didn't happen. I replied (and should probably forward it to the bug) "well, since the systemd sysv compat stuff requires insserv, and it seems unreasonable to expect people to uninstall all sysv compat to run ZoL, maybe we should break these inits into one/two
[01:32:55] <PMT> subpackages with a mutually exclusive installation config, and let the package dep solver figure out which one is right"
[01:34:05] <PMT> (That's a brief summary, because he did sink a fair amount of time into reproducing it, and I spent a little time figuring out that in fact, insserv appears to be a core dep of the systemd sysv compat layer, after asking apt to remove it resulted in it offering to remove all sysv support and requiring a blood oath to do it.)
[01:48:30] <DeHackEd> all I read was "systemd" and "blood oath" and am making the rest in my head.
[01:48:34] <DeHackEd> :)
[01:52:09] <Ryushin> PMT: I saw the email he sent. Thank you for providing the other details. DeHackED: You crack me up.
[01:52:47] <Ryushin> PMT: Impressed with what it took to figure that out.
[01:53:03] <DeHackEd> I aim to both entertain and be functionally useful
[01:56:18] <Ryushin> PMT: So are you thinking there should be a debian zfs-sysvinit and a zfs-systemd package? Am I understanding that right. Sorry, feel a bit sick and my mind is fairly fuzzy. And no, I'm not drinking.... Well not yet. :)
[02:05:07] <PMT> Ryushin: _I_ think that, idk what he thinks yet
[02:08:04] *** shibboleth <shibboleth!~shibbolet@gateway/tor-sasl/shibboleth> has joined #zfsonlinux
[02:08:22] <PMT> Because his suggestion is to forbid having systemd-sysv installed with ZoL entirely, AIUI, and making people disable all sysvinit compat support in order to ship sysvinit scripts seems...problematic.
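A rough sketch of the subpackage split PMT floats above, written as hypothetical debian/control stanzas; the package names, dependencies, and descriptions are illustrative guesses, not the actual ZoL packaging:

    # Hypothetical stanzas, not the real zfs-linux debian/control:
    Package: zfs-init-sysvinit
    Depends: ${misc:Depends}, sysv-rc
    Conflicts: zfs-init-systemd
    Description: SysV init scripts for ZFS on Linux

    Package: zfs-init-systemd
    Depends: ${misc:Depends}, systemd
    Conflicts: zfs-init-sysvinit
    Description: systemd units for ZFS on Linux

With mutually exclusive Conflicts like these, apt's dependency solver picks whichever one fits the init system already installed, which is the "let the package dep solver figure out which one is right" part of the idea.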
[02:08:36] *** DHowett <DHowett!~dustin@velocitylimitless/awesome/ultraviolet/DHowett> has joined #zfsonlinux
[02:09:04] *** rjvb <rjvb!~rjvb@2a01cb0c84dee6008160db5b8471c29e.ipv6.abo.wanadoo.fr> has quit IRC (Ping timeout: 252 seconds)
[02:11:02] <PMT> another direction we could try depending on is that he said the testing version of insserv doesn't break this, so
[02:22:32] *** shibboleth <shibboleth!~shibbolet@gateway/tor-sasl/shibboleth> has quit IRC (Remote host closed the connection)
[02:22:51] *** shibboleth <shibboleth!~shibbolet@gateway/tor-sasl/shibboleth> has joined #zfsonlinux
[02:31:54] *** Celmor <Celmor!~Celmor@unaffiliated/celmor> has left #zfsonlinux
[03:06:08] *** shibboleth <shibboleth!~shibbolet@gateway/tor-sasl/shibboleth> has quit IRC (Remote host closed the connection)
[03:06:39] *** shibboleth <shibboleth!~shibbolet@gateway/tor-sasl/shibboleth> has joined #zfsonlinux
[03:10:30] *** fp7 <fp7!~fp7@unaffiliated/fp7> has joined #zfsonlinux
[03:14:31] *** shibboleth <shibboleth!~shibbolet@gateway/tor-sasl/shibboleth> has quit IRC (Quit: shibboleth)
[03:59:09] <Ryushin> PMT: I did not see, or I'm just too bloody tired to see, that he said the testing version of insserv doesn't break it.
[04:00:01] <Ryushin> Either way, enjoy your New Years Eve/Day and we'll talk in a year .... which is four hours from now for me. :)
[04:02:55] *** elxa <elxa!~elxa@2a01:5c0:e084:2a71:510c:4c7a:e52c:a3b3> has quit IRC (Ping timeout: 252 seconds)
[04:03:04] *** Essadon <Essadon!~Essadon@81-225-32-185-no249.tbcn.telia.com> has quit IRC (Quit: Qutting)
[04:07:47] <PMT> Ryushin: i think he said sid, but the version in testing is the version from sid, IIRC.
[04:08:18] <PMT> yup
[04:08:46] <PMT> "In insserv/sid, the postinst process will nolonger fail"
[04:09:05] *** Markow <Markow!~ejm@176.122.215.103> has quit IRC (Quit: Leaving)
[04:09:12] <PMT> i suppose i could just upgrade to buster, it's close enough to freeze, but also why this
[04:10:14] <bunder> lol
[04:10:35] <bunder> you guys make it sound like he's making it harder than it needs to be
[04:11:52] <Ryushin> bunder: Well, it is hard. Two. 2. That is way too many choices.
[04:12:45] <bunder> afaik gentoo does systemd and openrc, screw everything else :P
[04:12:48] <Ryushin> PMT: Cannot insserv be updated in stretch?
[04:13:23] <PMT> Ryushin: it could, but that involves convincing the maintainers of another package to bump it just for this, and I'd like fewer complications.
[04:13:26] <Ryushin> Devuan does sysv and openrc. Screw everything else. :)
[04:13:37] *** zapotah <zapotah!~zapotah@unaffiliated/zapotah> has quit IRC (Ping timeout: 250 seconds)
[04:14:40] <Ryushin> PMT: True. Very true. It's best the ZFS package create the workaround. And in my eyes, it is on Debian, not ZoL, to fix the problem.
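For anyone wanting to verify Mo's observation locally, one way to try the newer insserv on a stretch box is to pull just that package from testing; a sketch, assuming a testing entry already exists in sources.list:

    apt-cache policy insserv              # see which versions apt can see
    apt-get install -t testing insserv    # pull only insserv from testing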
[04:15:36] <Ryushin> I'm going to have to look into OpenRC. It's now the default for Devuan. I'm still stuck in 1990 with my sysvinit.
[04:16:18] <bunder> its not bad, just saying :P
[04:50:58] <ptx0> my new office chair showed up and it feels like a Herman Miller but without the price tag, woot
[04:51:21] <ptx0> https://www.amazon.ca/gp/product/B07HGTCGTQ
[04:53:28] <bunder> my chair feels like you're sitting on plywood, oh wait
[04:57:45] *** ReimuHakurei <ReimuHakurei!~Reimu@raphi.vserver.alexingram.net> has quit IRC (Quit: http://quassel-irc.org - Chat comfortably. Anywhere.)
[04:59:31] <tlacatlc6> i also got a new chair, although it's still in the package. :/
[05:01:58] <tlacatlc6> https://www.amazon.com/KADIRYA-High-Back-Office-Chair/dp/B06XKZ83SW
[05:02:09] <tlacatlc6> not fancy but was cheap. :)
[05:05:03] <ptx0> the adjustable headrest for mine was essential
[05:05:44] <ptx0> had a 3 monitor mount with one above two and when looking at the top one while gaming it was better to lean back but you need a headrest to support the head
[05:05:49] <ptx0> vOv
[05:07:07] <tlacatlc6> it's nice!
[05:29:23] <PMT> Ryushin: amusingly, i think Mo runs OpenRC
[05:30:45] <Ryushin> PMT: Well then. I guess I have to switch now. I started reading about it a few months back, and life got in the way. Really need to see the pros of it.
[05:31:33] <PMT> Ryushin: at least, he posted a bug on ZoL github about supporting it
[05:31:35] <PMT> so
[05:31:48] <PMT> #8204
[05:31:49] <zfs> [zfs] #8204 - Init scripts doesn't work for Debian + OpenRC setup. <https://github.com/zfsonlinux/zfs/issues/8204>
[05:40:25] <ptx0> "P as in..?" "T as in Terry" "Oh, Tango?" "tomato, yes."
[05:40:38] <ptx0> phone spelling has not gotten easier in the last 20+ years
[05:43:33] <Ryushin> PMT: I wonder if Devuan has the same problem.
[05:46:27] *** Setsuna-Xero <Setsuna-Xero!~pewpew@unaffiliated/setsuna-xero> has quit IRC (Ping timeout: 240 seconds)
[05:47:37] *** Setsuna-Xero <Setsuna-Xero!~pewpew@unaffiliated/setsuna-xero> has joined #zfsonlinux
[05:51:34] <ptx0> bunder: so Shaw cable is doubling my bandwidth for free
[05:51:47] <ptx0> going from 300mbps to 600mbps down but remaining at 20mbps upload
[05:51:48] <ptx0> ahahaha
[05:51:59] <ptx0> i emailed to ask uhm, if i could somehow, like, pay for more
[05:54:37] <ptx0> > Adult signature required on delivery.
[05:54:46] <ptx0> because kids can't buy m.2 nvme devices?
[06:13:55] <PMT> i think it's probably the standard boilerplate about children and entering contracts - at least in the US, they can, but they and their parents can just sort of void them on a whim, AIUI
[06:16:07] <ptx0> depends whether the child can be irreversibly harmed by the parent's contract
[06:17:01] <ptx0> been shown with those trampoline park waivers that the park cannot sign away liability for lifelong injuries as a result of negligence on their employees' part
[06:17:27] <ptx0> it is a complicated matter but the parents don't have the right to do so
[06:17:53] <PMT> quite
[06:18:27] <ptx0> basically all of those amusement park waivers are BS, and it's surprising how many of them contain wording to specifically disclaim liability for employees being negligent and not checking equipment safety etc
[06:19:36] <PMT> if there's not penalties for attempting to include impossible clauses there's little reason to not do it, since you can strike them without losing the whole enchilada
[06:35:17] *** MilkmanDan <MilkmanDan!~dan@wilug/expat/MilkmanDan> has quit IRC (Ping timeout: 250 seconds)
[06:37:11] *** MilkmanDan <MilkmanDan!~dan@wilug/expat/MilkmanDan> has joined #zfsonlinux
[06:43:05] <ptx0> happy gnu year
[07:02:44] *** Comnenus is now known as Comnenus_
[07:06:20] *** Comnenus_ is now known as Comnenus
[07:41:43] *** PewpewpewPantsu <PewpewpewPantsu!~pewpew@unaffiliated/setsuna-xero> has joined #zfsonlinux
[07:43:57] *** Setsuna-Xero <Setsuna-Xero!~pewpew@unaffiliated/setsuna-xero> has quit IRC (Ping timeout: 246 seconds)
[07:54:53] *** ReimuHakurei <ReimuHakurei!~Reimu@raphi.vserver.alexingram.net> has joined #zfsonlinux
[08:00:25] *** tlacatlc6 <tlacatlc6!~tlacatlc6@68.202.46.96> has quit IRC (Quit: Leaving)
[08:04:21] *** Markow <Markow!~ejm@176.122.215.103> has joined #zfsonlinux
[08:07:36] <zfs> [zfsonlinux/zfs] WARNING: Pool has encountered an uncorrectable I/O failure and has been suspended. (#8234) created by ShanmuHuang <https://github.com/zfsonlinux/zfs/issues/8234>
[08:14:27] *** buu <buu!~buu@99-74-60-251.lightspeed.hstntx.sbcglobal.net> has joined #zfsonlinux
[08:17:38] <zfs> [zfsonlinux/zfs] WARNING: Pool has encountered an uncorrectable I/O failure and has been suspended. (#8234) comment by ShanmuHuang <https://github.com/zfsonlinux/zfs/issues/8234#issuecomment-450713871>
[09:32:33] *** rjvbb <rjvbb!~rjvb@2a01cb0c84dee6009bda76eb03bc33f7.ipv6.abo.wanadoo.fr> has joined #zfsonlinux
[09:33:16] *** Wharncliffe <Wharncliffe!coffee@gateway/vpn/privateinternetaccess/b> has quit IRC (Ping timeout: 246 seconds)
[10:07:30] *** Dagger <Dagger!~dagger@sawako.haruhi.eu> has quit IRC (Excess Flood)
[10:09:21] *** Dagger <Dagger!~dagger@sawako.haruhi.eu> has joined #zfsonlinux
[10:34:37] *** gerhard7 <gerhard7!~gerhard7@ip5657ee30.direct-adsl.nl> has joined #zfsonlinux
[10:40:15] <cluelessperson> question
[10:40:21] <cluelessperson> how do you prevent disk io getting overloaded?
[10:40:27] <cluelessperson> anyway to slow it down?
[10:46:37] *** cluelessperson <cluelessperson!~cluelessp@unaffiliated/cluelessperson> has quit IRC (Ping timeout: 252 seconds)
[10:47:29] *** cluelessperson <cluelessperson!~cluelessp@unaffiliated/cluelessperson> has joined #zfsonlinux
[11:07:08] <PMT> cluelessperson: in what manner?
[11:18:35] *** jugo <jugo!~jugo@unaffiliated/jugo> has joined #zfsonlinux
[11:21:05] *** lblume <lblume!~lblume@greenviolet/laoyijiehe/lblume> has quit IRC (Quit: Leaving.)
[11:24:44] *** lblume <lblume!~lblume@greenviolet/laoyijiehe/lblume> has joined #zfsonlinux
[11:25:42] *** rjvb <rjvb!~rjvb@lfbn-ami-1-204-20.w86-208.abo.wanadoo.fr> has joined #zfsonlinux
[11:39:13] *** simukis <simukis!~simukis_@78-63-88-48.static.zebra.lt> has quit IRC (Quit: simukis)
[11:57:21] *** jugo <jugo!~jugo@unaffiliated/jugo> has quit IRC (Ping timeout: 246 seconds)
[12:00:23] *** lynchc <lynchc!~quassel@c-73-93-58-104.hsd1.ca.comcast.net> has quit IRC (Ping timeout: 268 seconds)
[12:26:39] *** jugo <jugo!~jugo@unaffiliated/jugo> has joined #zfsonlinux
[12:31:06] *** simukis <simukis!~simukis_@78-63-88-48.static.zebra.lt> has joined #zfsonlinux
[13:05:20] *** lynchc <lynchc!~quassel@c-73-93-58-104.hsd1.ca.comcast.net> has joined #zfsonlinux
[13:28:47] *** fp7 <fp7!~fp7@unaffiliated/fp7> has quit IRC (Quit: fp7)
[14:05:10] *** leito <leito!c38e669e@gateway/web/freenode/ip.195.142.102.158> has joined #zfsonlinux
[14:05:32] <leito> Hello guys. Happy new year.
[14:32:39] *** mf <mf!~mindfly@reactos/developer/mf> has joined #zfsonlinux
[14:39:13] *** Markow <Markow!~ejm@176.122.215.103> has quit IRC (Quit: Leaving)
[14:41:27] *** Albori <Albori!~Albori@216-229-75-72.fidnet.com> has quit IRC (Ping timeout: 240 seconds)
[15:02:50] <bunder> ptx0: shaw sucks, they're basically rogers but they had their backbone built faster
[15:07:51] *** Albori <Albori!~Albori@216-229-75-72.fidnet.com> has joined #zfsonlinux
[15:22:02] <bunder> oh i didn't realize rogers bought source cable, so there's no shaw/rogers divide here anymore
[15:22:58] <bunder> that actually ticks me off, because rogers treats customers like crap
[15:22:58] *** raindev <raindev!8613b4a8@gateway/web/freenode/ip.134.19.180.168> has joined #zfsonlinux
[15:23:43] <bunder> i kindof liked the three-way we had going, although you still had to move to get into another slice of the pie
[15:24:09] <bunder> cogeco just hasn't been the same since they re-did their billing system
[15:24:43] *** fs2 <fs2!~fs2@pwnhofer.at> has quit IRC (Quit: Ping timeout (120 seconds))
[15:25:03] <bunder> i checked with bell last night though, i think they might finally be ready to give me fibre
[15:25:16] *** fs2 <fs2!~fs2@pwnhofer.at> has joined #zfsonlinux
[15:26:27] <raindev> I want to get one of datasets in a zpool to not be mounted automatically so I've set mountpoint to none. Now it doesn't seem to be a way to mount it manually specifying mountpoint.
[15:26:39] <raindev> Or do I misunderstand the purpose of mountpoint=none?
[15:27:32] <bunder> you can set canmount to noauto and still give it a mountpoint
[15:29:57] <PMT> i forget did anyone mention the hilarious 911 outage in the US here when it happened
[15:30:01] <PMT> raindev: canmount=noauto
[15:30:26] <PMT> mountpoint says where it will mount when mounted, canmount is whether it gets mounted by e.g. zpool import or zfs mount -a
[15:30:27] <bunder> which, the one caused by centurylink?
[15:30:30] <PMT> yes.
[15:30:57] <bunder> https://i.imgur.com/9ePmd8T.png
[15:31:36] <bunder> i think they're still working out the kinks, did they ever come clean about what caused it?
[15:31:58] <bunder> all i've seen is hearsay on reddit and nanog
[15:34:44] <raindev> bunder, PMT, thanks, sounds like exactly what I'm looking for
[15:35:05] <PMT> Between a prior news piece about single points of failure in the system and the fact that the actual numbers for the callcenter worked, just not the 911 redirect, I'm guessing they had a single point of failure for doing the dispatch of 911->callcenter.
[15:35:29] <raindev> It doesn't seem I can mount the dataset manually now with zfs mount: "cannot mount 'external/crypt': 'canmount' property is set to 'off'"
[15:35:39] <PMT> raindev: what did you set canmount to?
[15:36:25] <raindev> My bad, set it to off instead of noauto as you suggested.
[15:36:30] <bunder> oops :)
[15:36:37] <raindev> Thanks a lot, that works as intended :)
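For reference, the property combination bunder and PMT describe, sketched against the 'external/crypt' dataset from the conversation (the mountpoint path is illustrative):

    zfs set mountpoint=/mnt/crypt external/crypt   # where it goes *when* mounted
    zfs set canmount=noauto external/crypt         # skip it during import / zfs mount -a
    zfs mount external/crypt                       # mount by hand when wanted
    zfs unmount external/crypt                     # and unmount again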
[15:37:18] <bunder> PMT: i was referring to the network outage more than phones, because that was the actual failure iirc
[15:38:10] <PMT> I'm guessing they may have discovered an exciting failure domain, or had a known problem they hadn't fixed yet because they run it as cheaply as they fucking can and just pay fines when it breaks for people
[15:38:39] <PMT> https://www.tomshardware.com/news/centurylink-outage-caused-bad-networking-card,38306.html claims a single NIC shat the bed
[15:38:41] <bunder> i have no idea what dwdm means/does
[15:39:01] <PMT> bunder: it's a technology used for multiplexing multiple signals along single mode fiber, AIUI
[15:39:11] <PMT> well, if memory serves and you're talking about the same thing
[15:39:23] <bunder> that's what everyone was guessing was the fault
[15:39:27] <PMT> I first heard about it when $OLDJOB grew a 100GbE interface
[15:39:48] <PMT> bunder: apparently it was a single NIC going rogue and sending malformed data and other clients not filtering them out?
[15:40:31] <PMT> (Basically if you're clever you can send arbitrarily many wavelengths down a single-mode fiber run as long as you have arbitrarily expensive interfaces on either end that can reliably decode the encoding)
[15:40:46] <bunder> my building has problems with stuff backfeeding into cogeco's lines all the time
[15:41:01] <DeHackEd> I can confirm that some shit routers (I specifically single out Cisco because I tested it) will fall over if the ASIC router punts too many packets to the main CPU
[15:41:05] <bunder> it screws up the whole node
[15:41:12] <PMT> Telephony, digital networking of some kind, other?
[15:42:23] <PMT> DeHackEd: this reminds me of the time I told a tech who was in charge of the Cisco part of a trunk connection we were setting up that she didn't need to down the link to add a port to it, I'd never seen one of the (Arista) switches do the wrong thing or complain about that, and she said "they might not, but I don't trust [the big Cisco router]"
[15:42:28] <bunder> we had fex problems at work too when they rolled out the cisco nexus gear
[15:43:14] <PMT> (And since we did indeed have problems where it was possible for one end of the trunk to lose one of the 40Gb links in the aggr and the other end to think it was up and consequently start dropping 1/N packets because it tried sending them over an absent link, she was obviously right)
[15:43:55] <PMT> (I don't know which end ended up being at fault b/c my last day at that job was while we were debugging that, so ???)
[15:44:12] <DeHackEd> PMT: I hear Cisco has improved immensely over the years, but that test I did (7-8 years ago?) was against a 6-slot 7600 router with a largely default configuration and my 10/100 Atom-based laptop (of appropriate age) killed it.
[15:44:45] <PMT> DeHackEd: I think my favorite was a Dell switch that predated their lifetime warranty policy that crashed the http daemon if you sent any traffic to TCP 443.
[15:44:54] <bunder> our problems at work were last year so they're still busted
[15:44:58] <PMT> (The switch would never restart the daemon short of a reboot.)
[15:45:24] <PMT> We found this out while probing our network and then discovering one of our switches lost network connectivity whenever we probed.
[15:45:38] <bunder> that's funny though PMT, i'm glad my dlink doesn't do that
[15:45:44] <DeHackEd> I suspect the old Cisco IOS platforms (non-XR, and all that) just run all "processes" as kernel threads with only basic accounting...
[15:45:46] <bunder> then again i turned off the web interface
[15:45:51] <PMT> (...we also had a rack-sized tape jukebox that had its entire OS stack crash sometimes while we were probing the network, but it wasn't reliable.)
[15:46:47] <bunder> hp?
[15:47:42] <PMT> I forget who made the actual jukebox, it was being driven by IBM's Tivoli bits.
[15:47:58] <PMT> So it was some approved vendor and drive models and whatever other bullshit.
[15:48:07] <bunder> ah tivoli, we still use that
[15:48:24] <PMT> I was already pretty convinced Tivoli wasn't amazing from a few usage encounters.
[15:48:51] <PMT> When I found a crash bug in their client and their support's response was [crickets] after I gave them a core dump and told them I could reproduce it 100% of the time, I was entirely disenchanted.
[15:49:27] <bunder> the sysadmins don't talk much to us about it unless they have to reboot a server
[15:50:02] <bunder> the joys of being a support bitch, nobody tells you anything until you call them because you're being barraged by users
[15:50:45] <bunder> "is netscaler down? oh it would have been nice to know, we're getting raped here"
[15:51:07] <PMT> I wrote a better algorithm for distributing their SQL backup schedules b/c we were having problems with all the machines synchronizing on when they tried to do the full backups, and comedy gold ensued. So I wrote a probabilistic backoff where the P function grows nonlinearly toward 100% as the number of days since last backup approaches whatever schedule they wanted, and always took fulls of new ones when
[15:51:13] <PMT> it saw them.
[15:51:23] <DeHackEd> our monitoring system does provide notifications to our support staff... well, one of them at least...
[15:51:37] <PMT> Obviously not perfect because you can still get clumping and stuff, but it was strictly better than the existing situation.
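A minimal sketch of the kind of probabilistic backoff PMT describes; the cubic ramp and the variable names are my assumptions, not his actual scheduler:

    # usage: ./maybe_full_backup.sh DAYS_SINCE_LAST_FULL INTERVAL_DAYS
    days_since=$1; interval=$2
    # probability ramps nonlinearly toward 1 as days_since approaches interval
    p=$(awk -v d="$days_since" -v i="$interval" 'BEGIN { p = (d / i) ^ 3; if (p > 1) p = 1; print p }')
    # roll a uniform random number; exit 0 (run the full) with probability p
    awk -v p="$p" 'BEGIN { srand(); exit !(rand() < p) }' && echo "take a full backup tonight"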
[15:51:55] <bunder> we only know when there's a major outage when the clinical interfaces all go down at the same time
[15:52:16] *** tlacatlc6 <tlacatlc6!~tlacatlc6@68.202.46.96> has joined #zfsonlinux
[15:53:32] <PMT> My favorite event while I was at $OLDJOB was when I stayed very late working on something because I lost track of time, then I got home and my workplace's entire network was unreachable
[15:53:43] <PMT> So I got a shower, and it was still down when I got out, so I went back.
[15:53:52] <PMT> There were 6 firetrucks with sirens flashing in the parking lot.
[15:54:11] <DeHackEd> I can't beat that story
[15:54:23] <PMT> A three-phase busbar had connected one of the phases to another one in an unintended way
[15:54:46] <DeHackEd> ... What? how does that happen?
[15:54:53] <PMT> And as you might imagine from a three-phase busbar, the metal proceeded to approximately vaporize and coat the entire room in a fine mist
[15:56:33] <bunder> we had a fire hose outlet break once, and it gushed water for 2 hours before anyone noticed it because it was in an empty part of the building, when i walked out the front door that morning, i had no idea how many gallons of water were above my head because it hadn't leaked through the floor yet
[15:57:12] <bunder> it trashed half of the er and or, took them 8 months to clean it up and get everyone moved back into their proper rooms
[15:57:53] <PMT> https://occamy.chemistry.jhu.edu/busbarburnout/
[15:58:06] <PMT> DeHackEd: ^
[15:59:08] <bunder> you'd think they would insulate that stuff
[15:59:11] <PMT> said catastrophic failure also meant i got to walk somewhere where power still worked on the campus to send a message to all my coworkers' backup contacts saying "don't come to work today the transformer exploded and the building is closed"
[15:59:39] <PMT> bunder: apparently the tech they called out said this was a common failure mode for this model
[16:00:19] <PMT> but yeah AIUI something fell off and shorted one phase to another, resulting in the above amazing photos
[16:00:38] <PMT> also that was one of my favorite work emails I've ever written, even above the "may be late to work today, got jumped on way home and filed police report"
[16:00:52] <bunder> i don't know if i should link mine, since i still work there, although it shouldn't be hard to find in google news
[16:00:59] *** raindev <raindev!8613b4a8@gateway/web/freenode/ip.134.19.180.168> has quit IRC ()
[16:01:10] * PMT shrugs
[16:01:32] <PMT> I already have a bunch of ZFS bugs filed from my work and personal emails, so that bridge is burnt for me.
[16:01:39] *** Essadon <Essadon!~Essadon@81-225-32-185-no249.tbcn.telia.com> has joined #zfsonlinux
[16:01:46] <PMT> I suppose I haven't filed any with my current work email, but that's because I'm not using it at work, so there'd be no reason to.
[16:06:55] <zfs> [zfsonlinux/zfs] OpenZFS: 'vdev initialize' feature (#8230) comment by mailinglists35 <https://github.com/zfsonlinux/zfs/issues/8230#issuecomment-450736763>
[16:08:48] <bunder> hm i can't find a picture, i know i have a video around here somewhere
[16:12:12] <zfs> [zfsonlinux/zfs] feature request: zpool iostat N should repeat header like vmstat N (#8235) created by mailinglists35 <https://github.com/zfsonlinux/zfs/issues/8235>
[16:12:40] <bunder> wait, it doesnt?
[16:14:02] <bunder> hm, it don't.
[16:14:41] <PMT> No, it never do.
[16:16:52] <tlacatlc6> i'm glad i have no such stories. :D
[16:17:45] <DeHackEd> tlacatlc6: I give it 6 months
[16:18:00] <tlacatlc6> lol
[16:21:08] *** MrCoffee <MrCoffee!coffee@gateway/vpn/privateinternetaccess/b> has joined #zfsonlinux
[16:55:53] <zfs> [zfsonlinux/zfs] Sporadic system hang (#7425) comment by "John M. Drescher" <https://github.com/zfsonlinux/zfs/issues/7425#issuecomment-450739504>
[17:03:41] *** shibboleth <shibboleth!~shibbolet@gateway/tor-sasl/shibboleth> has joined #zfsonlinux
[17:07:11] *** shibboleth <shibboleth!~shibbolet@gateway/tor-sasl/shibboleth> has quit IRC (Remote host closed the connection)
[17:10:48] *** shibboleth <shibboleth!~shibbolet@gateway/tor-sasl/shibboleth> has joined #zfsonlinux
[17:20:40] *** Markow <Markow!~ejm@176.122.215.103> has joined #zfsonlinux
[17:29:48] <DeHackEd> tfw you accidentally ran your multi-gigabyte backups out the backup ADSL-1 connection
[17:30:31] <bunder> "welp i'm going back to bed"
[17:31:55] <DeHackEd> I did. it's still running
[17:31:59] <DeHackEd> good morning btw :)
[17:32:09] <bunder> lol nice
[17:32:54] <bunder> how bad of a dsl line is it?
[17:33:24] <DeHackEd> ADSL-1
[17:33:32] <DeHackEd> which means a max upload rate of less than 1 megabit
[17:33:58] <bunder> oh ouch
[17:35:12] <bunder> (i never keep track of dsl speeds because nobody gets what they're rated for anyways)
[17:35:46] *** elxa <elxa!~elxa@2a01:5c0:e087:8e11:c071:6dcd:235d:5ba> has joined #zfsonlinux
[17:35:57] <bunder> i lived a block or two away from a CO once and they couldn't guarantee me a decent speed, so cable it was
[17:36:38] <DeHackEd> ADSL1 maxes out somewhere around 8/0.8, ADSL2 can do ~25/1.2 when downstream biased. VDSL2 can do like 100/50 but over very short distances.
[17:39:47] <bunder> 700 meters walking distance, dunno about wire distance
[17:40:27] <DeHackEd> shopping on bell's site to see if I can do better, they're offering me a capped plan for $80/month, or an unlimited plan for $60/month. Otherwise seems to be identical connections
[17:45:00] <bunder> on crusty copper, probably not
[17:45:35] <bunder> unless you live in a new subdivision
[17:47:09] <jasonwc> What is the difference between arc_meta_max and arc_meta_limit? arc_meta_limit is defined as the maximum size, in bytes, metadata can use in the ARC. However, arc_meta_max is not defined in zfs-module-parameters but it shows up in /proc/spl/kstat/zfs/arcstats
[17:48:18] <bunder> didn't they rename that, lemme look
[17:48:28] <jasonwc> DeHackEd: You are giving me nightmares of having to use AT&T ADSL 1.5/384kbit. God, I never thought switching to Comcast could be a good thing.
[17:48:41] <jasonwc> Thankfully, that was a long time ago, and now I have 940/880 fiber
[17:49:54] <DeHackEd> jasonwc: arc_meta_limit is a user tunable, arc_meta_max is an informational statistic
[17:49:58] <bunder> arc_meta_max = arc_meta_used;
[17:50:18] <DeHackEd> the "max" is the largest the metadata has ever been, the "limit" is how big the user wants to limit it to
[17:50:18] <jasonwc> ah, unfortunate naming
[17:50:25] <jasonwc> got it
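A quick way to see the two values side by side, plus the tunable DeHackEd is referring to (the 4 GiB figure is just an example):

    grep -E '^arc_meta_(used|limit|max)' /proc/spl/kstat/zfs/arcstats
    # arc_meta_max   = high-water mark, informational only
    # arc_meta_limit = the cap; settable through the module parameter, value in bytes
    echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_meta_limit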
[17:51:19] <bunder> hrm, github playing tricks on me, that code snippet was old
[17:55:33] <bunder> https://github.com/zfsonlinux/zfs/blob/master/module/zfs/arc.c#L2798
[17:55:44] <bunder> lol i want max accuracy damnit :P
[17:57:25] <DeHackEd> oh is this one of those CPU cache performance things?
[17:58:40] <bunder> wouldn't aggsum actually be slower though
[17:59:55] <bunder> (being more math than actually just giving the direct number)
[18:03:19] <bunder> https://github.com/zfsonlinux/zfs/blob/master/module/zfs/aggsum.c
[18:03:25] * bunder shrug
[18:03:52] *** lblume <lblume!~lblume@greenviolet/laoyijiehe/lblume> has quit IRC (Ping timeout: 252 seconds)
[18:04:33] <DeHackEd> this concept happens a lot in linux as well. for example iptables and other network counters have a per-CPU counter so that packet processing doesn't cause memory contention, then when the user asks to read the counters the kernel quickly sums them all up
[18:04:58] <DeHackEd> though it sounds like this is being a bit cheaper than that since the summing step happens more frequently
[18:06:32] <bunder> yeah i guess i'm also thinking singlethreaded-ly
[18:07:05] <DeHackEd> it's not just that. it's the contention of L2 cache bouncing around between cores or memory usage on multi-socket systems
[18:07:30] <bunder> things get weird when you want to update a number 4 times at the same time based on their relative setting
[18:11:04] *** erska <erska!erska@91-156-230-12.elisa-laajakaista.fi> has quit IRC (Read error: Connection reset by peer)
[18:11:22] <bunder> you could always have a shared l3 cache, oh wait
[18:13:48] <bunder> oh no i'm wrong, they still do, for some reason i thought they were getting rid of it
[18:14:15] *** erska <erska!erska@91-156-233-69.elisa-laajakaista.fi> has joined #zfsonlinux
[18:14:25] <DeHackEd> no, there's an L3 cache, but each core still wants the data it uses in its local L2 cache
[18:15:23] <DeHackEd> (well, L1 cache really, but L2 cache is on the way)
[18:20:39] <bunder> is the l1 cache really a cache if its the end of the pipeline
[18:20:58] <DeHackEd> it's a cache of the RAM contents, so yes
[18:21:21] <DeHackEd> it's just the design of the chip that the instructions that access "memory" really depend on the caches to have the content
[18:28:11] <bunder> yeah i guess 128kb is still bigger than an instruction and the data its working on
[18:29:02] <bunder> even with a dual simd instruction (according to wiki larrabee had a dual 512 bit simd instruction, so 128 bytes?)
[18:33:35] <bunder> (that makes 128kb sound huge again lul)
[18:37:15] *** Setsuna-Xero <Setsuna-Xero!~pewpew@unaffiliated/setsuna-xero> has joined #zfsonlinux
[18:37:42] <bunder> i mean in a world where a terabyte is like forget about it
[18:38:06] *** Albori <Albori!~Albori@216-229-75-72.fidnet.com> has quit IRC (Ping timeout: 246 seconds)
[18:38:20] <zfs> [zfsonlinux/zfs] Sporadic system hang (#7425) comment by Denis Feklushkin <https://github.com/zfsonlinux/zfs/issues/7425#issuecomment-450745524>
[18:39:51] *** PewpewpewPantsu <PewpewpewPantsu!~pewpew@unaffiliated/setsuna-xero> has quit IRC (Ping timeout: 246 seconds)
[19:16:58] *** Albori <Albori!~Albori@216-229-75-72.fidnet.com> has joined #zfsonlinux
[19:37:10] *** shibboleth <shibboleth!~shibbolet@gateway/tor-sasl/shibboleth> has quit IRC (Quit: shibboleth)
[19:50:18] *** zapotah <zapotah!~zapotah@unaffiliated/zapotah> has joined #zfsonlinux
[20:13:12] <zfs> [zfsonlinux/zfs] Sporadic system hang (#7425) comment by Rich Ercolani <https://github.com/zfsonlinux/zfs/issues/7425#issuecomment-450750860>
[20:44:36] *** lblume <lblume!~lblume@greenviolet/laoyijiehe/lblume> has joined #zfsonlinux
[21:09:22] <zfs> [zfsonlinux/zfs] WARNING: Pool has encountered an uncorrectable I/O failure and has been suspended. (#8234) closed by kpande <https://github.com/zfsonlinux/zfs/issues/8234#event-2049573502>
[21:09:47] <zfs> [zfsonlinux/zfs] WARNING: Pool has encountered an uncorrectable I/O failure and has been suspended. (#8234) comment by kpande <https://github.com/zfsonlinux/zfs/issues/8234#issuecomment-450753852>
[21:10:32] <ptx0> bunder: "treats customers like crap" doesn't sound like shaw since they just doubled our profile for free
[21:10:45] <ptx0> from 150mbps to 300mbps, 300mbps to 600mbps for same cost
[21:11:52] <bunder> no, rogers does
[21:20:37] <ptx0> started ordering 4TB WD Red from amazon.ca since there's a 1 per customer limit and you can buy them once every 7 days
[21:20:48] <ptx0> i figure eventually i'll have a decent number for an array
[21:21:11] <bunder> that's just sad
[21:21:29] <ptx0> why
[21:21:50] <bunder> because that will take like 6 months
[21:22:11] *** biax__ <biax__!~biax@unaffiliated/biax> has joined #zfsonlinux
[21:24:47] *** biax_ <biax_!~biax@unaffiliated/biax> has quit IRC (Ping timeout: 240 seconds)
[21:24:52] *** biax__ is now known as biax_
[21:28:00] *** shibboleth <shibboleth!~shibbolet@gateway/tor-sasl/shibboleth> has joined #zfsonlinux
[21:35:49] <BtbN> One per week? That's a pointless limit
[21:50:28] *** buu <buu!~buu@99-74-60-251.lightspeed.hstntx.sbcglobal.net> has quit IRC (Ping timeout: 245 seconds)
[21:53:39] *** shibboleth <shibboleth!~shibbolet@gateway/tor-sasl/shibboleth> has quit IRC (Quit: shibboleth)
[21:57:41] *** Dagger <Dagger!~dagger@sawako.haruhi.eu> has quit IRC (Excess Flood)
[21:58:39] *** Dagger <Dagger!~dagger@sawako.haruhi.eu> has joined #zfsonlinux
[22:02:30] <tlacatlc6> maybe to prevent hdd mining? XD
[22:02:40] <tlacatlc6> but i agree, it's pointless.
[22:08:32] *** gerhard7 <gerhard7!~gerhard7@ip5657ee30.direct-adsl.nl> has quit IRC (Quit: Leaving)
[22:15:44] <DeHackEd> did HDD manufacturing flood over again?
[22:21:56] *** PewpewpewPantsu <PewpewpewPantsu!~pewpew@unaffiliated/setsuna-xero> has joined #zfsonlinux
[22:22:10] *** buu <buu!~buu@99-74-60-251.lightspeed.hstntx.sbcglobal.net> has joined #zfsonlinux
[22:24:13] *** Setsuna-Xero <Setsuna-Xero!~pewpew@unaffiliated/setsuna-xero> has quit IRC (Ping timeout: 245 seconds)
[22:24:19] <PMT> no
[22:50:28] *** adilger <adilger!~adilger@S0106a84e3fe4b223.cg.shawcable.net> has quit IRC (Ping timeout: 245 seconds)
[23:04:33] *** Setsuna-Xero <Setsuna-Xero!~pewpew@unaffiliated/setsuna-xero> has joined #zfsonlinux
[23:05:29] <ptx0> did you see that linus tech tips video from today
[23:05:43] <ptx0> he took 24 SSDs out of a server and mixed their order all up, "oops"
[23:06:54] *** PewpewpewPantsu <PewpewpewPantsu!~pewpew@unaffiliated/setsuna-xero> has quit IRC (Ping timeout: 246 seconds)
[23:07:25] <DeHackEd> I just finished watching it...
[23:07:41] <DeHackEd> next time he should try doing that shit at 3am and see how it's really done
[23:11:29] <PMT> ...which part of this is interesting?
[23:11:50] <DeHackEd> umm.. the fail? the stuff hitting linus in the face?
[23:21:47] <PMT> I guess I'm asking what the premise of said video is, having not attempted to watch it
[23:21:59] <PMT> e.g. is shuffling the SSDs deliberate, a fuckup, or
[23:22:48] <rjvbb> howdy, is support for encrypted dataset coming in a near future?
[23:22:50] <DeHackEd> so LMG got a Trident2-based switch (48x 10gig + 6x 40gig), a Supermicro board with 10gig ports on it (router), and a dedicated wavelength to the nearby Internet Exchange
[23:23:12] <DeHackEd> rjvbb: see the topic. if you really want it, go grab either git HEAD or the 0.8.0-rc2 package
[23:23:32] <DeHackEd> note that enabling encryption will render the pool unusable on older versions of ZFS
[23:23:49] <rjvbb> (what topic?)
[23:23:55] <DeHackEd> type /topic
[23:24:20] <rjvbb> ah, right
[23:24:20] <pink_mist> the topic that the irc server shows you when you join a channel, and most irc clients keep at the top of the window
[23:24:47] <DeHackEd> in general this "topic" contains useful information to people joining the channel for the first time
[23:25:12] <rjvbb> which is also about the only time I tend to notice/read it ;)
[23:25:59] <rjvbb> anyway, incompatible with older ZFS versions is fine with me, compatibility with OpenZFS (and thus O3x) is more important
[23:26:06] <ptx0> it is not.
[23:26:06] <DeHackEd> after 18 hours my backup finished (sent over an ADSL-1 connection)
[23:26:22] <ptx0> most OpenZFS do not have native encryption
[23:26:28] <DeHackEd> it is unlikely to be merged into the openzfs repo until it's considered production ready
[23:26:45] <bunder> or when zol becomes the openzfs repo :P
[23:26:47] <ptx0> well, lundman makes some really weird decisions for o3x
[23:26:56] <ptx0> puts a lot of unstable features into it very quickly..
[23:27:07] <rjvbb> OpenZFS-osx does have it, I've started using it
[23:27:33] <ptx0> yea, i wouldn't trust it
[23:27:53] <pink_mist> rjvbb: o3x == openzfs on osx
[23:28:03] <rjvbb> works nicely (but I'm not putting any crucial data in there)
[23:28:09] <bunder> ptx0: did tom ever get back to you about your issue :P
[23:28:13] <ptx0> no.
[23:28:17] <rjvbb> I know what o3x is :)
[23:28:21] <ptx0> he is really bad at his job
[23:28:26] <bunder> lol
[23:28:39] <ptx0> i don't get what could be more important right now than solving a corruption issue
[23:28:48] <ptx0> but hey he apparently has more important things to do
[23:29:03] <DeHackEd> corruption on osx?
[23:29:06] *** simukis <simukis!~simukis_@78-63-88-48.static.zebra.lt> has quit IRC (Quit: simukis)
[23:29:07] <pink_mist> a lot of people take time off during holidays
[23:29:10] <ptx0> corruption on zfs encryption
[23:29:17] <ptx0> pink_mist: this has been about a month..
[23:29:22] <pink_mist> oh
[23:29:37] <ptx0> yeah
[23:29:59] <rjvbb> but what I really wanted to ask is how ZoL will handle getting the key/passphrase from the user.
[23:30:29] <DeHackEd> there's multiple settings, but "ask the user on the console" is what most people are doing
[23:30:29] <ptx0> that is like asking how any linux distro handles anything
[23:30:37] <ptx0> it's all specific to your setup/distro
[23:31:01] <DeHackEd> alternatively you can put a file in a known location containing the key and ZFS can quietly slurp it up with no interaction, and probably some other variants like that...
[23:31:08] <rjvbb> We've been talking a bit about keychain integration on #openzfs-osx, or adding something like zfsaskpass and $ZFS_ASKPASS
[23:31:27] <ptx0> why
[23:31:32] <ptx0> just pipe the answer to the unlock command
[23:31:59] <DeHackEd> I think systemd has some kind of helper to do this shit as well... I mean generally prompt users for passwords during startup
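The two styles DeHackEd mentions map onto the keyformat/keylocation properties in the 0.8.0-rc2 era; a sketch with an illustrative dataset name and key path:

    zfs create -o encryption=on -o keyformat=passphrase \
        -o keylocation=prompt tank/secret                        # ask on the console
    zfs create -o encryption=on -o keyformat=raw \
        -o keylocation=file:///etc/zfs/keys/secret.key tank/raw  # slurp a key file silently
    zfs load-key tank/secret      # load the key later without mounting
    zpool import -l tank          # or load keys while importing the pool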
[23:32:13] <ptx0> ZFS on OS X seems still dangerous because of how OS X loves to sleep and hibernate by default
[23:32:50] <rjvbb> it does not hibernate by default, it only prepares for having to hibernate
[23:33:04] <ptx0> mine does
[23:34:39] <PMT> is hibernate intrinsically dangerous here for some reason, because it seems like it should only be dangerous if someone tried doing something unwise like force importing the pool on another OS and then trying to resume the old state
[23:34:44] <rjvbb> you can make it do that, but it's not the default. The default is "instant-on" on wake-from-sleep. Mine actually rarely slept more than an hour at a time until I deactivated wake-on-lan
[23:35:19] <ptx0> PMT: zfs and hibernate are like russian roulette
[23:35:25] <ptx0> even without multiple OS
[23:35:48] <rjvbb> why would it be any different from normal sleep?
[23:40:39] <rjvbb> anyway, currently my encrypted dataset only holds a "private" FireFox profile which I use once or twice a day, and I export the pool before suspending the machine
[23:41:37] <CoJaBo> Is there a way to tell if/how badly a file or directory of files is fragmented?
[23:42:35] <CoJaBo> ptx0: Is standby the same? My shiny new zfs server went into standby the other week, and panic'd the moment it woke up D=
[23:42:39] <PMT> rjvbb: the remark is that having an encrypted dataset on a pool would render it not even read-only importable on older versions.
[23:42:42] <rjvbb> we'll see about corruption, but I certainly would like it if `zpool import -l foo` asked me for my keychain password instead of forcing me to remember yet another supposedly random passphrase
[23:43:29] <CoJaBo> And actually, it hung after panicking too. It's supposed to auto-reset, it did not :/
[23:44:27] <rjvbb> PMT: yes, but I don't have older versions. I only have ZoL vs. OpenZFS-osx, and currently you already have to jump through hoops to create pools that can be imported by both
[23:45:21] <PMT> I am acutely aware, yes.
[23:47:18] <rjvbb> I understand some kind of compat option is coming for zpool create; that would be much appreciated (but won't help much with existing incompatible pools)
[23:48:19] <PMT> that was my proposal, yes.
[23:48:53] <rjvbb> heh, I suggested implementing one on the O3X forum myself :)
[23:49:39] <PMT> It turned out to be quite an exciting conversation to convince some people of the value of.
[23:50:41] <rjvbb> it could actually take a more general form where you put the creation options (-o and -O) that you use almost all the time in a .config/zpoolrc file (they'd be overridden by what you put on the commandline)
[23:51:09] <rjvbb> I heard that
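The hoop-jumping rjvbb alludes to currently looks roughly like this: create the pool with all feature flags off and re-enable only ones both platforms implement (the feature list and vdev layout below are illustrative, not a vetted compatibility set):

    zpool create -d \
        -o feature@async_destroy=enabled \
        -o feature@lz4_compress=enabled \
        -o feature@spacemap_histogram=enabled \
        tank mirror sda sdb
    zpool get all tank | grep feature@        # confirm what ended up enabled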
[23:53:48] *** Markow <Markow!~ejm@176.122.215.103> has quit IRC (Quit: Leaving)
[23:54:27] <rjvbb> I get the impression it'll be just as exciting to convince some people here that getting a key or passphrase could optionally use something a little more advanced than reading from stdin or a cleartext file somewhere on another disk
[23:54:31] <rjvbb> O:-)
[23:55:15] <DeHackEd> oh there's lots of ways to do it. TPM I presume?
[23:55:16] <PMT> rjvbb: I mean, it's not really IRC users you'd need to convince, it's both someone to write the PR (if you're not going to do it) and someone to convince everyone involved it's a good idea. :)
[23:55:27] <DeHackEd> but from the app standpoint, data's gotta get into it somehow, and ZFS itself doesn't want dependencies
[23:57:59] <bunder> can you even access the enclave on a mac as a user, to store a password?
[23:58:07] <rjvbb> right. So on Mac we could use the system keychain directly, but I also like the idea of doing what ssh and gpg do
[23:58:11] <rjvbb> what's the enclave?
[23:58:33] <ptx0> CoJaBo: unlikely
[23:58:43] <ptx0> CoJaBo: some drivers just suck at suspend/resume too
[23:59:03] <PMT> CoJaBo: I suppose you could go try Brian's filefrag PR, if it doesn't have any crippling bugs. :V
[23:59:05] <CoJaBo> ptx0: Yeh, it's Ryzen unfortunately :/
[23:59:11] <bunder> https://developer.apple.com/documentation/security/certificate_key_and_trust_services/keys/storing_keys_in_the_secure_enclave
[23:59:17] <bunder> it would seem you can
[23:59:41] <CoJaBo> PMT: Mostly just trying to figure out if a weird performance issue is due to that, so I don't want to have to make major changes >_>
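For the record, there is no stock per-file answer at this point: the closest built-in number is pool-wide free-space fragmentation, which measures something different (free space, not file layout), and per-file extent maps would need the filefrag/FIEMAP PR PMT mentions:

    zpool list -o name,capacity,fragmentation tank    # free-space fragmentation only
    filefrag -v /tank/path/to/file                    # only meaningful once FIEMAP support is in your build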