   June 20, 2020  


[00:17:22] *** grawlinson <grawlinson!~grawlinso@158.140.234.136> has quit IRC (Quit: SIGTERM)
[00:23:58] *** grawlinson <grawlinson!~grawlinso@158.140.234.136> has joined #zfsonlinux
[00:43:22] *** ephemer0l <ephemer0l!~ephemer0l@pentoo/user/ephemer0l> has quit IRC (Quit: http://quassel-irc.org - Chat comfortably. Anywhere.)
[00:45:40] *** ephemer0l <ephemer0l!~ephemer0l@pentoo/user/ephemer0l> has joined #zfsonlinux
[00:58:27] *** elxa <elxa!~elxa@2a02:6d40:3599:b201:825a:4f35:f58f:11a5> has quit IRC (Ping timeout: 260 seconds)
[01:06:43] *** electricityZZZZ <electricityZZZZ!~electrici@108-216-157-17.lightspeed.sntcca.sbcglobal.net> has joined #zfsonlinux
[01:07:58] <electricityZZZZ> i have a friend who is looking for a RAID system for some professional work (think photography, media, etc). i don't want to support anything but would like to be able to suggest a product which will work well. the product would probably include a 4 disk (hot swappable?) capability, probably be network attached, and would need to be macOS compatible/easy to work with
[01:08:22] <electricityZZZZ> i'm definitely not going to build a system for this person,... is there anything y'all can recommend in that department?
[01:09:42] <BtbN> Any of the plastic home NASes I guess
[01:09:46] <BtbN> Synology or QNAP
[01:09:55] <BtbN> They're all terrible though
[01:12:05] <electricityZZZZ> so will i get better just installing freenas on something?
[01:12:12] <electricityZZZZ> i've also thought of suggesting the freenas hardware
[01:14:32] <electricityZZZZ> i'm a bit confused because i doubt that freenas can build or source better parts than a supermicro, dell, etc
[01:16:04] <DeHackEd> well those things, though expensive, are better parts than the synology etc
[01:17:04] <electricityZZZZ> can an unsophisticated user use freenas successfully
[01:17:11] <electricityZZZZ> or am i going to be spending lots of hours helping this person
[01:17:26] <DeHackEd> I believe it has a web interface...
[01:18:19] <electricityZZZZ> and then is easy multi-site backup a thing?
[01:18:33] <manfromafar> you're right :} ixsystems just buys supermicro parts
[01:18:41] <electricityZZZZ> ha
[01:18:49] <electricityZZZZ> well that's actually a bit of a relief
[01:18:53] <manfromafar> what do you think are in other computers
[01:18:59] <manfromafar> magic dust
[01:19:01] <electricityZZZZ> do you know what kind of a markup they are charging?
[01:19:04] *** shibboleth <shibboleth!~shibbolet@gateway/tor-sasl/shibboleth> has joined #zfsonlinux
[01:19:08] <manfromafar> maph
[01:19:38] <electricityZZZZ> well we all know that freenas designed their own CPU and make it with their own fab <|8-P
[01:20:07] <electricityZZZZ> it'd be super nice if i could just switch on an s3 backup option for freenas,... s3 is expensive though!!!
[01:20:24] <PMT> Not compared to data recovery services it's not. :P
[01:20:26] <manfromafar> you can
[01:20:29] <manfromafar> they have cloud replication now
[01:20:49] <snajpa> of ransomware, yeah :D
[01:20:57] <manfromafar> but #freenas would be better for that
[01:21:15] <electricityZZZZ> yeah i'm asking in there but it's not very lively
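Aside on the S3 option discussed above: independent of whatever FreeNAS's cloud replication does under the hood, any ZFS box can mirror a dataset's contents to S3 with a generic sync tool. A purely illustrative sketch using rclone, where "mys3" is an already-configured remote and the paths are made up:

    rclone sync /mnt/tank/photos mys3:backup-bucket/photos    # one-way mirror to the bucket
    rclone check /mnt/tank/photos mys3:backup-bucket/photos   # verify what landed there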
[01:21:41] <electricityZZZZ> this person is currently using a 30 TB time machine backup and im trying to convince him/her that that isn't what time machine is for 8-O
[01:21:55] <snajpa> oh, that, I missed the freenas bit... thought I was seeing for real a discussion about proprietary quickhacks here :D
[01:22:18] <electricityZZZZ> yeah it's like i don't want this to become my stinkbag
[01:22:31] <electricityZZZZ> is freenas self-driving or do you have to scrub it or something periodically
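Aside on the scrub question: the periodic maintenance being asked about is a pool scrub. FreeNAS exposes scheduled scrub tasks in its web UI; on a plain ZFS box you would run or schedule it yourself. A minimal sketch, with "tank" as a placeholder pool name and the cron schedule purely illustrative:

    zpool scrub tank          # start a scrub
    zpool status -v tank      # watch progress / results
    # e.g. monthly via cron:
    # 0 3 1 * * /sbin/zpool scrub tank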
[01:23:16] <snajpa> didn't they also go kind of the closed-way? a la what happened to pfsense?
[01:24:06] *** cmurphycode <cmurphycode!~chrismurp@pool-98-110-172-29.bstnma.fios.verizon.net> has quit IRC (Quit: cmurphycode)
[01:24:34] <electricityZZZZ> closed-way?
[01:24:36] <BtbN> If you just want something to put there and never touch but trust to work, Synology does the job.
[01:25:23] <snajpa> nah I must be confusing them with someone else
[01:25:59] <BtbN> But Synology sure charges a premium for that level of support
[01:26:14] <electricityZZZZ> is synology supermicro?
[01:26:26] <BtbN> hm?
[01:26:35] <electricityZZZZ> synology = supermicro hardware?
[01:26:39] <BtbN> It's Synology?
[01:28:08] <electricityZZZZ> yeah that's the sticker, but are you telling me they design the mobo etc?
[01:28:15] <BtbN> Set them to auto-install important and well tested DSM updates, and you can probably leave it there and forget about it.
[01:28:24] <BtbN> Synology are proprietary NAS
[01:28:31] <BtbN> There is no standard hardware in there.
[01:28:37] <electricityZZZZ> wat
[01:29:35] <BtbN> Most of the non-supersized NASes are some ARM board
[01:30:03] <BtbN> The very large and enterpricey ones are running Xeons
[01:31:42] <clever> and ZFS crashed HARD just now
[01:31:55] <snajpa> always fun :D
[01:32:15] <clever> it did manage to at least write an error to the journal before failing hard
[01:32:25] <PMT> What was the failure?
[01:32:25] <clever> general protection fault: 0000 [#1] SMP NOPTI
[01:32:29] <PMT> Oh boy.
[01:32:30] <clever> Call Trace: arc_buf_destroy_impl+0x69/0x2d0 [zfs]
[01:32:35] <snajpa> oh nice :D
[01:32:42] <clever> i tried to destroy a snapshot, and then everything began to hang
[01:32:52] <clever> even `ps aux` would hang, when reading the /proc/PID/cmdline of something
[01:33:04] <snajpa> which version is that?
[01:33:38] <clever> [root@amd-nixos:~]# modinfo zfs
[01:33:38] <clever> filename: /run/current-system/kernel-modules/lib/modules/4.19.84/extra/zfs/zfs.ko.xz
[01:33:41] <clever> version: 0.8.2-1
[01:33:50] <clever> i can pastebin more once chrome has recovered...
[01:35:12] <snajpa> do you have debug symbols on there? can you check where that arc_buf_destroy_impl+0x69 is?
[01:35:52] *** shibboleth <shibboleth!~shibbolet@gateway/tor-sasl/shibboleth> has quit IRC (Quit: shibboleth)
[01:36:24] <snajpa> I'd say somewhere around here https://github.com/openzfs/zfs/blob/master/module/zfs/arc.c#L3077
[01:36:50] *** tlacatlc6 <tlacatlc6!~tlacatlc6@072-188-000-068.res.spectrum.com> has joined #zfsonlinux
[01:37:48] <snajpa> but I'm on master and don't have 0.8.x compiled at hand
[01:40:26] <snajpa> master @ 6bd4f4545 is working pretty much ok without major hiccups - commits after that have some problems with the new zrele() stuff, the optimistic unification with FreeBSD
[01:40:31] <snajpa> I'm staying out of those for now :D
[01:40:43] <snajpa> also not using any zvols
[01:40:49] <clever> https://gist.github.com/cleverca22/7b0447a9672259976964ea11ad98bc36 the full error from the journal
[01:42:06] <clever> i dont think i have debug symbols, but i can just objdump the module...
[01:46:07] <clever> snajpa: gist updated with the full asm of that function
[01:46:32] <clever> > (0x70b0 + 0x69).toString(16)
[01:46:32] <clever> '7119'
[01:46:38] <clever> 7119: f0 4c 01 25 00 00 00 lock add %r12,0x0(%rip) # 7121 <arc_buf_destroy_impl+0x71>
[01:46:43] <clever> so it failed around here
[01:46:46] <clever> 7114: e8 87 ee ff ff callq 5fa0 <arc_free_data_buf.isra.43>
[01:46:50] <clever> just after that function call
[01:47:37] <clever> snajpa: which is just 2 lines after the one you linked, good guess!
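For readers following along: the lookup clever just did can be reproduced roughly like this. The module path and the 0x70b0 base of arc_buf_destroy_impl are taken from his paste and will differ on other builds:

    xzcat /run/current-system/kernel-modules/lib/modules/4.19.84/extra/zfs/zfs.ko.xz > /tmp/zfs.ko
    objdump -d /tmp/zfs.ko | sed -n '/<arc_buf_destroy_impl>:/,/^$/p'   # disassemble just that function
    printf '%x\n' $((0x70b0 + 0x69))   # symbol base + faulting offset from the GPF trace -> 7119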
[01:47:46] <clever> PMT: anything i should check next?
[01:49:53] <snajpa> well you know, like always... whether it works with the latest versions and if not, you've got yourself some fun TODO, if you need the data
[01:50:23] <clever> snajpa: the machine has already booted and resumed working normally
[01:50:51] <clever> but there are systemd timers that create and destroy snapshots constantly (and had done so several times, prior to the failure)
[01:51:22] <snajpa> well then from my experience it's a PITA to debug anything without a clear reproducer...
[01:51:44] <clever> yeah
[01:51:48] <snajpa> but still having those stack traces in the issue tracker might help others while hunting for ghosts
[01:52:24] <clever> ive never even seen linux have a `general protection fault` before
[01:52:30] <snajpa> others have helped me that way with ZFS at least a few times :-D couldn't find ^ anything like that tho
[01:52:34] <clever> its usually an OOPS and a clear page fault type error
[01:53:19] <snajpa> null pointer?
[01:53:32] <snajpa> is the most likely reason, at least when I try shit in kernel
[01:54:00] <snajpa> or not exactly null, more like totally bogus
[01:54:00] <PMT> clever: at the moment, "file a bug, try 0.8.4 though I don't expect that to do anything but maybe change the backtrace" would be my remarks
[01:54:39] <snajpa> but AFAIK, zeros indicate a null pointer, I'm no expert tho
[01:55:13] <snajpa> PMT: master has also saved the day for me a few times
[01:55:32] <clever> related? https://github.com/openzfs/zfs/issues/9329
[01:55:32] <zfs-bot> [GitHub] [openzfs/zfs #9329] IslandLife: kernel: general protection fault: Trace: spl_cache_flush+0x36/0x50 [spl] | ...
[01:55:42] <snajpa> but import it there readonly, right from the start, otherwise you might activate features older releases don't have
[01:55:48] <clever> its a few frames deeper into the stack
[01:56:24] <snajpa> clever: yeah that's what I found too, but that's during send/recv, so I'm not sure if it's related
[01:56:45] <PMT> snajpa: yeah but moving to git master from a release version is a pretty drastic step
[01:57:31] <snajpa> PMT: clever can handle that I'm sure ;)
[01:58:11] <snajpa> I'm just sayin, that's what I do when out of options... and I've been there so many times that here we just run master + cherry-picked patches
[02:03:58] <snajpa> + when truly out of literally every option I can think of, I pull Illumos out of my hat
[02:04:14] <snajpa> and that tends to save the day, at least in being able to pull the data out of there
[02:05:14] <snajpa> but we've stayed out of things like encryption, which saved quite a lot of trouble, especially had we tried to adopt early on
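Side note on snajpa's point about importing the pool readonly on a newer build so that no newer feature flags get activated; a minimal sketch, with "tank" as a placeholder pool name:

    zpool import -o readonly=on tank   # nothing is written, so no new features are enabled
    # ...copy the data off...
    zpool export tank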
[02:39:55] <clever> hold on a sec, this is the 2nd hard lockup ive had lately...
[02:40:28] <clever> journal doesnt show the error for the 2nd lockup, that one might have been GPU related
[02:40:48] <clever> the other instance was just a hard cut in the logs
[02:54:58] *** sauravg_ <sauravg_!~sauravg@27.6.80.182> has quit IRC (Ping timeout: 265 seconds)
[02:55:56] *** sauravg <sauravg!~sauravg@27.6.80.182> has joined #zfsonlinux
[03:10:39] *** peq <peq!~tim@2600:6c40:4780:1301:216:3eff:fe5f:cf7b> has joined #zfsonlinux
[03:18:19] *** cmurphycode <cmurphycode!~chrismurp@pool-98-110-172-29.bstnma.fios.verizon.net> has joined #zfsonlinux
[03:20:16] *** peq <peq!~tim@2600:6c40:4780:1301:216:3eff:fe5f:cf7b> has quit IRC (Quit: WeeChat 1.9.1)
[03:20:41] *** cmurphycode <cmurphycode!~chrismurp@pool-98-110-172-29.bstnma.fios.verizon.net> has quit IRC (Client Quit)
[03:21:06] *** cmurphycode <cmurphycode!~chrismurp@pool-98-110-172-29.bstnma.fios.verizon.net> has joined #zfsonlinux
[03:21:25] <prawn> clever: #10166 ? i think i saw a GPF on this non-zfs machine i'm typing on just today when i tried rebooting after qemu with some usb controller passed through via iommu hung itself unkillably and the system started acting up.
[03:21:26] <zfs-bot> I'm sorry, I can only check GitHub issue numbers between 1 and 10000.
[03:21:45] <prawn> [swearing intensifies] https://github.com/openzfs/zfs/issues/10166
[03:21:46] <zfs-bot> [GitHub] [openzfs/zfs #10166] tdcox: Daily GPFs with ZFS under Proxmox 6.1 | ...
[03:22:51] <prawn> tl;dr: you don't happen to be using qemu with hardware passthrough? :)
[03:23:48] *** cmurphycode <cmurphycode!~chrismurp@pool-98-110-172-29.bstnma.fios.verizon.net> has quit IRC (Client Quit)
[03:28:34] *** peq <peq!~tim@2600:6c40:4780:1301:216:3eff:fe5f:cf7b> has joined #zfsonlinux
[03:30:06] *** rsully_ <rsully_!~rsully@unaffiliated/rsully> has joined #zfsonlinux
[03:47:27] <clever> prawn: havent run any pci passthru in years
[03:48:03] *** blizzow <blizzow!~blizzow@71-218-126-77.hlrn.qwest.net> has quit IRC (Remote host closed the connection)
[03:48:05] <prawn> awh, would've been an ideal coincidence to blame :D
[04:02:37] *** karlthane_ <karlthane_!~quassel@75-49-154-22.lightspeed.dllstx.sbcglobal.net> has joined #zfsonlinux
[04:03:12] <Tashtari> I'm hoping someone somewhere has done a writeup on this already, but I can't find it. What are the advantages of giving ZFS whole disks for a pool instead of partitions?
[04:03:52] <clever> Tashtari: it sets a special flag, and when you import the pool, it then changes the IO scheduler for the disk
[04:04:04] <clever> Tashtari: which can cause performance issues for any other competing partitions on the device
[04:04:16] <clever> but if its whole-disk, there are no competing partitions
[04:04:23] *** vlm <vlm!~vlm@gateway/tor-sasl/vlm> has quit IRC (Ping timeout: 240 seconds)
[04:04:28] <Tashtari> Even if the competing partition just contains stuff necessary for booting?
[04:04:38] <DeHackEd> in general sharing a disk with multiple filesystems is bad for performance as the drive head needs to jump around to distinct partitions which is a lot of movement
[04:04:38] *** vlm <vlm!~vlm@gateway/tor-sasl/vlm> has joined #zfsonlinux
[04:05:05] <clever> DeHackEd: even when its /boot and its basically never touched?
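If you want to see the two things clever is describing on your own system, something like the following should show them; "tank" and "sda" are placeholders:

    zdb -C tank | grep whole_disk         # 1 means the vdev was given the whole disk
    cat /sys/block/sda/queue/scheduler    # IO scheduler currently selected for that disk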
[04:05:34] *** karlthane <karlthane!~quassel@75-49-154-22.lightspeed.dllstx.sbcglobal.net> has quit IRC (Ping timeout: 260 seconds)
[04:06:42] <clever> id like to see if grub zfs support improved, so you dont need the /boot fs
[04:07:04] <clever> it can boot, but all directory listing fails, which makes diagnosing any potential problems impossible
[04:07:21] <Tashtari> I hear more distros are supporting ZFS as a root filesystem, though I haven't investigated what it's like
[04:07:59] <Tashtari> And, more importantly, if they're smart enough to maintain the bootloader on every disk of an array in the event that one fails
[04:08:18] <clever> Tashtari: one thing i saw recently, is that zfs has provisions for bootloader in the fs
[04:08:57] <Tashtari> Hm. What kind of provisions?
[04:09:03] *** vlm <vlm!~vlm@gateway/tor-sasl/vlm> has quit IRC (Ping timeout: 240 seconds)
[04:09:15] <clever> Tashtari: http://www.giis.co.in/Zfs_ondiskformat.pdf
[04:09:40] <clever> Tashtari: there is an 8kb hole at the start of the FS, where you could stick normal MBR stuff
[04:10:15] <clever> Tashtari: and there is a 3.5mb hole (starting at offset 512kb), that is meant to hold the rest of the bootloader
[04:10:31] *** vlm <vlm!~vlm@gateway/tor-sasl/vlm> has joined #zfsonlinux
[04:10:49] <clever> grub-install could then slot its stage1 and stage1.5 into those 2 holes, and boot enough to be capable of mounting zfs
[04:10:56] <clever> then it can use a plain dataset for /boot/
[04:11:43] <clever> pages 14 and 8 from the pdf
[04:11:47] <Tashtari> grub-install would be doing that at the level of the physical disk, though, right?
[04:11:53] <clever> yeah
[04:12:06] <clever> you would have to have grub-install repeat over every disk in the array
[04:14:51] *** vlm <vlm!~vlm@gateway/tor-sasl/vlm> has quit IRC (Client Quit)
[04:15:04] * manfromafar uses a script
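A minimal sketch of the kind of script manfromafar means, repeating grub-install over every disk in the array; the device names are placeholders and this assumes legacy BIOS boot:

    for disk in /dev/sda /dev/sdb /dev/sdc; do
        grub-install --target=i386-pc "$disk"
    done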
[04:15:04] <Tashtari> It seems like that goal is mainly held up by grub's somewhat rudimentary ZFS support...
[04:15:24] <Tashtari> Though it is functional as long as nothing goes wrong
[04:16:37] *** vlm <vlm!~vlm@gateway/tor-sasl/vlm> has joined #zfsonlinux
[04:18:01] <clever> Tashtari: ive also seen reports that grub cant traverse a directory with "too many" files
[04:18:20] <Tashtari> Is there anything else that can read ZFS short of another Linux kernel?
[04:18:31] <clever> and if /boot and /nix are on the same FS, nixos just puts absolute /nix/store/ paths into the grub config
[04:18:34] <clever> [root@amd-nixos:~]# ls /nix/store/ | wc -l
[04:18:37] <clever> 142877
[04:18:54] <clever> grub: try finding a directory, in a list containing 142,000 other directories
[04:19:10] <elvishjerricco> clever: Grub's ZFS support is... sorta fine. I've been using it for like a year and a half without issue. If grub can't traverse big directories, I've never experienced the problem; I'm guessing a typical `/boot` isn't nearly so big. (I do have `/boot` on a different dataset than `/nix/store`).
[04:19:48] <elvishjerricco> But Grub will not attempt to recover from mirrors, copies=n, or raidz parity, though it will check the checksum. So if there's corruption, it just fails to boot, despite having available redundancy
[04:19:52] <clever> elvishjerricco: if they are separate mount points, then nixos will copy all kernels automatically
[04:20:00] <elvishjerricco> It does work if disks are flat out missing, as long as you have grub installed on each disk
[04:20:01] <clever> elvishjerricco: nixos also has a dedicated copyKernels flag, to force that
[04:20:07] <elvishjerricco> clever: Right
[04:20:28] <elvishjerricco> That's my point. I don't have issues with directory traversing because my system copies the kernels
[04:20:31] <elvishjerricco> Which is fine
[04:20:52] <clever> my main issue in testing, is that tab completion and `ls` just fail with a weird error
[04:21:03] <clever> i have no idea how it can even boot when it cant list!
[04:21:03] *** vlm <vlm!~vlm@gateway/tor-sasl/vlm> has quit IRC (Ping timeout: 240 seconds)
[04:21:50] <elvishjerricco> Ah, I've never really bothered using the grub shell. On the rare occasion I can't boot, I just boot a usb installer to debug from
[04:22:38] <clever> https://github.com/cleverca22/nixos-configs/blob/master/rescue_boot.nix
[04:22:55] <clever> elvishjerricco: i prefer the system being able to repair itself, this code would put the entire installer into /boot/
[04:23:34] <clever> it costs ~400mb, and there are no rollback options for that extra entry, so it still runs the risk of being just as broken as your latest generation
[04:24:37] <elvishjerricco> Eh, if it's the system that's broken, it's the system I'd probably rather not rely on for repair :P
[04:24:38] <cirdan> just keep a working initrd always in /boot :)
[04:25:07] <clever> cirdan: rescue_boot.nix shoves an entire livecd rootfs into the initrd
[04:25:36] <elvishjerricco> clever: How does that even work? I've never been able to get the kernel to boot an initrd larger than like 80M
[04:25:40] <clever> elvishjerricco: i once spent 12 hours chasing down flakey ram, that spread like a plague to every machine i touched
[04:25:44] <elvishjerricco> It always fails with a cryptic error
[04:26:03] <clever> elvishjerricco: then i discovered, nix was building memtest with new gcc hardening flags, which causes a false error
[04:26:25] <cirdan> pretty much all my boot issues are due to missing zfs.ko
[04:26:27] <clever> elvishjerricco: you need about 2x-3x the initrd size, to unpack it fully to ram
[04:27:28] <clever> cirdan: i originally made rescue_boot.nix so i could move my /nix/store to its own dataset
[04:27:41] <elvishjerricco> clever: Well available ram was never the issue. Kernel just says it doesn't look like an initramfs
[04:27:44] <clever> thats basically as drastic as moving /usr to its own dataset
[04:28:15] <clever> elvishjerricco: weird, ive seen it work at over a gig i think
[04:28:21] <cirdan> I use a debian install on usb for stuff like that
[04:28:43] <clever> cirdan: i once had the "fun" of trying to use the ubuntu livecd to fix a server remotely
[04:28:59] <clever> cirdan: it kept ejecting itself every time you reboot, and id have to ask (in a support ticket) for them to close the tray again
[04:29:00] <cirdan> no this is a real install
[04:29:07] <cirdan> :)
[04:29:19] <clever> i eventually gave up on the ubuntu livecd, and used IPMI
[04:29:23] <cirdan> to usb
[04:29:28] <clever> except, the server was horribly old, and needed active-x
[04:29:36] <cirdan> fun!
[04:29:41] <clever> i had to spin up 2 VM's, win7 and winxp, just to get all of the active-x components to work right
[04:29:42] <cirdan> that's what VMs are for
[04:29:47] <cirdan> and soon for flash
[04:29:51] <clever> win7 could do remote desktop, but not remote cdrom
[04:29:55] <clever> winxp could only do remote cdrom
[04:30:01] <cirdan> lol
[04:30:12] <elvishjerricco> clever: Regardless, if /boot is on a ZFS dataset, losing 400M duplicated for the rescue entry isn't so bad.
[04:30:14] <cirdan> i converted my ovh vps to zfs
[04:30:16] <clever> and XP cant download the iso, because its ssl is too old, and gets rejected for fear of downgrade attacks
[04:30:25] <clever> so it was a huge ordeal to even get the ISO into the XP vm!!
[04:30:41] <clever> everybody is making https mandatory, and blocking insecure ssl versions
[04:30:57] <cirdan> yeah retrocomputing needs an ssl mitm proxy to work
[04:33:43] <clever> elvishjerricco: and in other news, my new 3x16tb drives are basically unusable currently, got SAS by accident, and now i need a controller
[04:34:31] <CompanionCube> also GRUB's ZFS support hasn't gotten support for any new features in a while
[04:35:15] <clever> that reminds me, what are the chances of grub actually supporting SAS controllers?
[04:35:25] <elvishjerricco> CompanionCube: Didn't it at least get a change to ignore the encryption feature flag as long as the dataset it's booting from isn't encrypted? That's... something :P
[04:35:29] <CompanionCube> the last meaningful update was in 2015.
[04:35:41] <clever> i was surprised to find that grub technically doesnt support nvme, it relies on the firmware to provide drivers
[04:35:46] <elvishjerricco> Or am I thinking of a patch from someone else...
[04:35:50] <CompanionCube> feature-wise that is
[04:36:06] <CompanionCube> elvishjerricco: probably the latter, see: https://git.savannah.gnu.org/cgit/grub.git/log/grub-core/fs/zfs
[04:36:08] <zfs-bot> [ grub.git - GNU GRUB ] - git.savannah.gnu.org
[04:37:00] <elvishjerricco> Darn. I think there's a ZFS issue about grub supporting encryption where someone posted a patch
[04:41:44] *** electricityZZZZ <electricityZZZZ!~electrici@108-216-157-17.lightspeed.sntcca.sbcglobal.net> has quit IRC (Ping timeout: 244 seconds)
[04:44:28] <manfromafar> just wait for the new bootloader
[04:44:45] <CompanionCube> new bootloader?
[04:45:34] <manfromafar> soon™️
[04:45:46] *** vlm <vlm!~vlm@gateway/tor-sasl/vlm> has joined #zfsonlinux
[04:46:18] <CompanionCube> doing the illumos thing and yoinking FreeBSD's off-the-shelf or what?
[04:48:27] *** timeless <timeless!sid4015@firefox/developer/timeless> has quit IRC (Ping timeout: 244 seconds)
[04:50:05] * clever heads to bed
[04:52:06] *** timeless <timeless!sid4015@firefox/developer/timeless> has joined #zfsonlinux
[04:53:03] *** vlm <vlm!~vlm@gateway/tor-sasl/vlm> has quit IRC (Ping timeout: 240 seconds)
[04:56:58] *** IonTau <IonTau!~IonTau@124-171-136-14.dyn.iinet.net.au> has joined #zfsonlinux
[05:01:50] *** tlacatlc6 <tlacatlc6!~tlacatlc6@072-188-000-068.res.spectrum.com> has quit IRC (Quit: Leaving)
[05:06:00] *** vlm <vlm!~vlm@gateway/tor-sasl/vlm> has joined #zfsonlinux
[05:10:26] *** ENOBUFS <ENOBUFS!~ENOBUFS@072-177-019-125.res.spectrum.com> has quit IRC (Ping timeout: 260 seconds)
[05:11:03] *** vlm <vlm!~vlm@gateway/tor-sasl/vlm> has quit IRC (Ping timeout: 240 seconds)
[05:11:44] *** ENOBUFS <ENOBUFS!~ENOBUFS@072-177-019-125.res.spectrum.com> has joined #zfsonlinux
[05:16:17] *** vlm <vlm!~vlm@gateway/tor-sasl/vlm> has joined #zfsonlinux
[07:18:47] *** tefter <tefter!~bmaxa@109.72.51.23> has joined #zfsonlinux
[07:23:16] *** delx <delx!~delx@59.167.161.142> has quit IRC (Remote host closed the connection)
[07:25:17] *** delx <delx!~delx@59.167.161.142> has joined #zfsonlinux
[07:26:30] *** Fubarovic <Fubarovic!~fubar@beerium.soleus.nu> has quit IRC (Ping timeout: 256 seconds)
[08:11:51] *** rsully_ <rsully_!~rsully@unaffiliated/rsully> has quit IRC (Quit: rsully_)
[08:13:29] *** tsal <tsal!~tsal@i59F52545.versanet.de> has quit IRC (Ping timeout: 265 seconds)
[08:15:13] *** tsal <tsal!~tsal@i59F4AAAC.versanet.de> has joined #zfsonlinux
[08:22:36] *** sauravg <sauravg!~sauravg@27.6.80.182> has quit IRC (Ping timeout: 256 seconds)
[08:52:21] *** phibs <phibs!~phibs@psychotic/admin/phibs> has joined #zfsonlinux
[08:52:35] <phibs> Anyone know when zol will support 5.7+ Kernels? Seems like a lot of GPL symbols added...
[09:04:50] *** sauravg <sauravg!~sauravg@27.6.80.182> has joined #zfsonlinux
[09:35:28] <tefter> I've been using zol on 5.7 for ages (I build directly from the git repo)
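For reference, building ZoL straight from the git repo as tefter describes looks roughly like this; packaging, DKMS integration, and kernel header paths vary by distro:

    git clone https://github.com/openzfs/zfs
    cd zfs
    sh autogen.sh && ./configure
    make -s -j$(nproc)
    sudo make install && sudo depmod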
[09:41:42] *** hsp <hsp!~hsp@unaffiliated/hsp> has quit IRC (Quit: WeeChat 2.8)
[09:45:40] *** hsp <hsp!~hsp@unaffiliated/hsp> has joined #zfsonlinux
[11:01:36] <clever> PMT: 3rd lockup overnight, machine was relatively idle, journal just cuts off with no err
[12:22:04] *** Setsuna-Xero <Setsuna-Xero!~pewpew@unaffiliated/setsuna-xero> has quit IRC (Ping timeout: 246 seconds)
[12:29:48] *** yann-kaelig <yann-kaelig!~yann-kael@89-64-54-239.dynamic.chello.pl> has joined #zfsonlinux
[12:44:31] *** elxa <elxa!~elxa@2a02:6d40:359e:d001:c370:405d:ebe0:7998> has joined #zfsonlinux
[12:49:03] *** nostrodamy <nostrodamy!~nostrodam@thunix.net> has joined #zfsonlinux
[12:54:52] *** leah2 <leah2!~leah@vuxu.org> has quit IRC (Ping timeout: 256 seconds)
[13:00:10] *** IonTau <IonTau!~IonTau@124-171-136-14.dyn.iinet.net.au> has quit IRC (Ping timeout: 258 seconds)
[13:02:32] *** IonTau <IonTau!~IonTau@124-171-136-14.dyn.iinet.net.au> has joined #zfsonlinux
[13:05:30] *** Caterpillar <Caterpillar!~caterpill@unaffiliated/caterpillar> has quit IRC (Ping timeout: 260 seconds)
[13:06:22] *** Caterpillar <Caterpillar!~caterpill@unaffiliated/caterpillar> has joined #zfsonlinux
[13:07:06] <clever> starting to look like hw problems; memtest causes a brief flash of red, then the machine hard reboots itself
[13:08:04] *** leah2 <leah2!~leah@vuxu.org> has joined #zfsonlinux
[13:10:09] *** gbkersey <gbkersey!~gbkersey@unaffiliated/gbkersey> has quit IRC (Read error: No route to host)
[13:10:50] *** Caterpillar <Caterpillar!~caterpill@unaffiliated/caterpillar> has quit IRC (Client Quit)
[13:17:22] *** IonTau <IonTau!~IonTau@124-171-136-14.dyn.iinet.net.au> has quit IRC (Ping timeout: 246 seconds)
[13:32:02] *** purist <purist!~purist@gateway/tor-sasl/purist> has quit IRC (Remote host closed the connection)
[13:32:20] *** purist <purist!~purist@gateway/tor-sasl/purist> has joined #zfsonlinux
[13:32:47] *** leah2 <leah2!~leah@vuxu.org> has quit IRC (Ping timeout: 272 seconds)
[13:52:43] *** thuttu77 <thuttu77!~thuttu77@85-23-198-231.bb.dnainternet.fi> has quit IRC (Ping timeout: 246 seconds)
[13:57:36] *** leah2 <leah2!~leah@dslb-088-064-114-187.088.064.pools.vodafone-ip.de> has joined #zfsonlinux
[14:00:44] *** tlacatlc6 <tlacatlc6!~tlacatlc6@072-188-000-068.res.spectrum.com> has joined #zfsonlinux