   January 2, 2020
[01:24:10] *** Kruppt <Kruppt!~Kruppt@50.111.52.202> has quit IRC (Quit: Leaving)
[02:01:11] <gitomat> [illumos-gate] 12118 system/library/install has gone -- Alexander Pyhalov <apyhalov at gmail dot com>
[02:07:22] *** jcea <jcea!~Thunderbi@2001:41d0:1:8a82:7670:6e00:7670:6e00> has quit IRC (Remote host closed the connection)
[02:07:33] *** jcea <jcea!~Thunderbi@2001:41d0:1:8a82:7670:6e00:7670:6e00> has joined #illumos
[02:25:18] *** nde <nde!~nde@c-98-220-146-104.hsd1.in.comcast.net> has quit IRC (Remote host closed the connection)
[03:26:28] *** X-Scale` <X-Scale`!~ARM@83.223.235.128> has joined #illumos
[03:26:41] *** X-Scale <X-Scale!~ARM@46.50.5.102> has quit IRC (Ping timeout: 265 seconds)
[03:27:09] *** X-Scale` is now known as X-Scale
[03:38:54] *** jcea <jcea!~Thunderbi@2001:41d0:1:8a82:7670:6e00:7670:6e00> has quit IRC (Quit: jcea)
[04:22:25] *** insomnia is now known as TheTick
[04:30:20] *** BOKALDO <BOKALDO!~BOKALDO@81.198.20.180> has joined #illumos
[04:37:54] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has joined #illumos
[04:40:16] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has quit IRC (Client Quit)
[04:41:27] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has joined #illumos
[04:41:43] *** hemi770 <hemi770!~hemi666@unaffiliated/hemi770> has quit IRC (Ping timeout: 260 seconds)
[04:42:44] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has quit IRC (Client Quit)
[04:43:21] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has joined #illumos
[04:57:16] *** hemi770 <hemi770!~hemi666@unaffiliated/hemi770> has joined #illumos
[05:04:40] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has quit IRC (Quit: My MacBook has gone to sleep. ZZZzzz…)
[05:05:49] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has joined #illumos
[05:07:23] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has quit IRC (Client Quit)
[05:08:32] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has joined #illumos
[05:10:07] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has quit IRC (Client Quit)
[05:11:03] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has joined #illumos
[06:34:06] *** nde <nde!~nde@c-98-220-146-104.hsd1.in.comcast.net> has joined #illumos
[06:35:32] *** BOKALDO <BOKALDO!~BOKALDO@81.198.20.180> has quit IRC (Quit: Leaving)
[07:35:16] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has quit IRC (Quit: My MacBook has gone to sleep. ZZZzzz…)
[07:36:26] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has joined #illumos
[07:37:28] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has quit IRC (Client Quit)
[07:38:36] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has joined #illumos
[07:40:12] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has quit IRC (Client Quit)
[07:41:22] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has joined #illumos
[07:42:24] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has quit IRC (Client Quit)
[07:43:33] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has joined #illumos
[07:45:08] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has quit IRC (Client Quit)
[07:46:19] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has joined #illumos
[07:47:20] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has quit IRC (Client Quit)
[07:47:24] *** pwinder <pwinder!~pwinder@86.4.7.64> has joined #illumos
[07:48:31] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has joined #illumos
[07:50:04] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has quit IRC (Client Quit)
[07:51:38] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has joined #illumos
[07:52:16] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has quit IRC (Client Quit)
[07:53:29] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has joined #illumos
[07:54:28] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has quit IRC (Client Quit)
[07:55:31] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has joined #illumos
[08:54:17] *** tsoome <tsoome!~tsoome@148-52-235-80.sta.estpak.ee> has quit IRC (Quit: tsoome)
[09:28:18] *** tsoome <tsoome!~tsoome@148-52-235-80.sta.estpak.ee> has joined #illumos
[09:39:06] *** BH23 <BH23!~BH23@santoroj.plus.com> has joined #illumos
[09:40:17] *** hongkongphooey <hongkongphooey!~joes@santoroj.plus.com> has joined #illumos
[09:40:50] *** BH23 <BH23!~BH23@santoroj.plus.com> has quit IRC (Client Quit)
[09:52:52] *** Guest53823 <Guest53823!~void@ip4d16bc07.dynamic.kabel-deutschland.de> has quit IRC (Ping timeout: 268 seconds)
[10:32:35] *** nde <nde!~nde@c-98-220-146-104.hsd1.in.comcast.net> has quit IRC (Remote host closed the connection)
[10:38:22] *** arnoldoree <arnoldoree!~arnoldore@ranoldoree.plus.com> has joined #illumos
[10:58:56] *** BOKALDO <BOKALDO!~BOKALDO@81.198.20.180> has joined #illumos
[11:05:57] *** yomisei <yomisei!~void@ip4d16bc07.dynamic.kabel-deutschland.de> has joined #illumos
[11:42:03] *** cantstanya <cantstanya!~chatting@gateway/tor-sasl/cantstanya> has quit IRC (Ping timeout: 240 seconds)
[11:42:21] *** Lirion <Lirion!~m00se@wikimedia-commons/Lirion> has quit IRC (Ping timeout: 240 seconds)
[11:48:43] *** cantstanya <cantstanya!~chatting@gateway/tor-sasl/cantstanya> has joined #illumos
[11:49:52] *** Lirion <Lirion!~m00se@wikimedia-commons/Lirion> has joined #illumos
[13:29:04] *** neirac <neirac!~neirac@pc-184-104-160-190.cm.vtr.net> has joined #illumos
[13:41:19] *** elegast <elegast!~elegast@83-161-181-201.mobile.xs4all.nl> has joined #illumos
[13:42:23] *** man_u <man_u!~manu@89-92-19-81.hfc.dyn.abo.bbox.fr> has joined #illumos
[13:50:41] *** mnowak_ <mnowak_!~mnowak_@94.142.238.232> has quit IRC (Quit: Leaving)
[14:00:21] *** fgudin[m] is now known as fgudin[m]1
[14:01:18] *** fgudin[m]1 <fgudin[m]1!fgudinmatr@gateway/shell/matrix.org/x-akelohxayxfhlnax> has quit IRC (Quit: issued !quit command)
[14:04:11] *** fgudin1 <fgudin1!fgudinmatr@gateway/shell/matrix.org/x-ymfmwhpxtgjnqwso> has joined #illumos
[14:27:33] *** mnowak_ <mnowak_!~mnowak_@94.142.238.232> has joined #illumos
[14:33:40] *** gh34 <gh34!~textual@cpe-184-58-181-106.wi.res.rr.com> has joined #illumos
[14:48:11] *** tsoome <tsoome!~tsoome@148-52-235-80.sta.estpak.ee> has quit IRC (Ping timeout: 265 seconds)
[14:50:43] *** Kurlon <Kurlon!~Kurlon@cpe-67-253-141-249.rochester.res.rr.com> has quit IRC (Ping timeout: 268 seconds)
[15:09:37] <sjorge> Anybody have RBAC working with LDAP?
[15:09:51] <sjorge> Looking into ldap for accounts, sudo (linux boxes) and RBAC
[15:10:01] <sjorge> but docs all seem to be gone
[15:14:27] <sjorge> Except one that is behind the oracle login wall :( https://support.oracle.com/knowledge/Sun%20Microsystems/1003270_1.html
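For anyone landing here later: the files backend is easy to show, and as far as I recall the LDAP side stores the same keyword=value attributes (in SolarisUserAttr entries). A minimal sketch, with a hypothetical user 'alice' and the stock 'Service Management' profile:

```shell
# user_attr(4)-style line the directory has to reproduce; user 'alice'
# is hypothetical. Fields are user:qualifier:res1:res2:attr
entry='alice::::type=normal;profiles=Service Management'
# extract the profiles= keyword from the attr field, roughly as the
# name service would before handing it to the RBAC machinery
echo "$entry" | awk -F: '{ print $5 }' | tr ';' '\n' | sed -n 's/^profiles=//p'
```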
[15:14:50] *** KungFuJesus <KungFuJesus!~adamstyli@207.250.97.74> has joined #illumos
[15:14:50] *** btibble <btibble!~brantibbl@c-69-94-200-89.hs.gigamonster.net> has quit IRC (Ping timeout: 240 seconds)
[15:15:17] <KungFuJesus> should I be struggling to reach 5 gbps with the standard MTU with i40e in iperf?
[15:15:52] <KungFuJesus> I've seen some doubt raised about the quality of drivers for i40e on the developer mailing list but the poster didn't go into specifics
[15:24:43] *** Kurlon <Kurlon!~Kurlon@bidd-pub-03.gwi.net> has joined #illumos
[15:28:13] *** arnold_oree <arnold_oree!~arnoldore@ranoldoree.plus.com> has joined #illumos
[15:33:13] <neirac> Does anyone have more information on uts/common/disp/cmt.c? I found a presentation from 2009 that only makes reference to it.
[15:34:50] *** Kurlon <Kurlon!~Kurlon@bidd-pub-03.gwi.net> has quit IRC (Ping timeout: 240 seconds)
[15:36:07] *** Kurlon <Kurlon!~Kurlon@bidd-pub-03.gwi.net> has joined #illumos
[15:43:03] *** jcea <jcea!~Thunderbi@2001:41d0:1:8a82:7670:6e00:7670:6e00> has joined #illumos
[15:45:55] *** psydroid <psydroid!psydroidma@gateway/shell/matrix.org/x-yjyekjnfqkdxyief> has left #illumos ("User left")
[15:47:34] *** jcea <jcea!~Thunderbi@2001:41d0:1:8a82:7670:6e00:7670:6e00> has quit IRC (Remote host closed the connection)
[15:48:37] *** jcea <jcea!~Thunderbi@2001:41d0:1:8a82:7670:6e00:7670:6e00> has joined #illumos
[15:48:52] <rmustacc> KungFuJesus: Depends on the specifics of the configuration. rzezeski would know more about what it can and can't hit.
[15:48:59] <rmustacc> neirac: What kind of information are you looking for?
[15:50:03] <neirac> rmustacc: mostly the main idea; I'm reading the code, but having the big picture would help too. I saw it's related to power saving, but that's the only thing I know.
[15:51:18] <rmustacc> How familiar are you with hyperthreading?
[15:51:47] <rmustacc> Or the way AMD bulldozer was implemented, etc.?
[15:54:07] <rmustacc> If you're not, I'd suggest reading the big theory statement in cpuid.c and understanding what the pginfo actually represents.
[15:54:23] <rmustacc> I think that's the first step in understanding why that exists and how different resources are actually shared on the processor.
[15:54:38] <rmustacc> Which inherently is what that is interacting with. One domain of which is power.
[15:56:11] <neirac> rmustacc thanks a lot! I'm not familiar with hyperthreading; I'll read that in cpuid.c. Thank you very much for the guidance.
[15:57:04] <rmustacc> cpuid.c just gives a high-level overview of how the CPU shares resources, so you'll probably want some external reading.
[15:57:19] *** hemi770 <hemi770!~hemi666@unaffiliated/hemi770> has quit IRC (Ping timeout: 268 seconds)
[15:57:40] <rmustacc> The key concept is that some logical CPUs that the OS sees share resources with one another.
[15:57:50] <rmustacc> So not all logical CPUs are equal.
[16:05:02] <neirac> rmustacc cpuid.c comments are gold! Thanks a lot.
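A concrete way to see that sharing, sketched with awk over psrinfo-style chip/core/CPU triples (the topology below is made up; on a live system `psrinfo -vp` or the pg kstats give the real map):

```shell
# Hypothetical 1-chip, 2-core, 2-threads-per-core box: columns are
# chip, core, logical CPU. Group logical CPUs by the core they share;
# siblings in one group compete for that core's resources.
printf '0 0 0\n0 0 1\n0 1 2\n0 1 3\n' |
awk '{ key = $1 "/" $2; grp[key] = grp[key] " " $3 }
     END { for (k in grp) print "core " k ":" grp[k] }' | sort
```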
[16:10:44] *** tsoome <tsoome!~tsoome@148-52-235-80.sta.estpak.ee> has joined #illumos
[16:11:23] *** hemi770 <hemi770!~hemi666@unaffiliated/hemi770> has joined #illumos
[16:14:04] *** jellydonut <jellydonut!~quassel@s91904426.blix.com> has joined #illumos
[16:17:48] <jbk> sjorge: i've sort of done it in the past
[16:34:03] *** cantstanya <cantstanya!~chatting@gateway/tor-sasl/cantstanya> has quit IRC (Ping timeout: 240 seconds)
[16:38:03] *** cantstanya <cantstanya!~chatting@gateway/tor-sasl/cantstanya> has joined #illumos
[17:02:45] *** TheTick is now known as Qube
[17:02:58] *** Qube is now known as Selfawerewolf
[17:04:22] *** arnold_oree <arnold_oree!~arnoldore@ranoldoree.plus.com> has quit IRC (Quit: Leaving)
[17:15:00] *** nde <nde!~nde@96-66-145-126-static.hfc.comcastbusiness.net> has joined #illumos
[17:19:21] *** btibble <btibble!~brantibbl@c-69-94-200-89.hs.gigamonster.net> has joined #illumos
[17:24:16] *** jcea <jcea!~Thunderbi@2001:41d0:1:8a82:7670:6e00:7670:6e00> has quit IRC (Remote host closed the connection)
[17:34:38] *** Kruppt <Kruppt!~Kruppt@50.111.52.202> has joined #illumos
[17:52:47] *** tsoome <tsoome!~tsoome@148-52-235-80.sta.estpak.ee> has quit IRC (Read error: Connection reset by peer)
[17:53:16] *** tsoome <tsoome!~tsoome@148-52-235-80.sta.estpak.ee> has joined #illumos
[17:55:59] <Smithx10> KungFuJesus: I just got a machine with i40e 10gb cards, I'll be able to test soon also
[17:59:27] *** Kurlon <Kurlon!~Kurlon@bidd-pub-03.gwi.net> has quit IRC (Ping timeout: 260 seconds)
[18:01:22] <KungFuJesus> So part of the problem was the IT group connected to the wrong ports for the link aggregation
[18:01:28] <KungFuJesus> I'm getting something a bit more ideal now with iperf
[18:02:19] <KungFuJesus> however, I have a 136GB file and reading from it locally I can't seem to get past 90MB/sec, despite having a ton of disks
[18:03:05] <KungFuJesus> 4 x raidz2s with multipathing and plenty of bandwidth (ZFS scrubs can be like 4GB/sec)
[18:03:14] <KungFuJesus> anyone have any ideas for what could be going on?
[18:03:38] <KungFuJesus> this would seem to be an ideal sequential workload
[18:04:10] *** jcea <jcea!~Thunderbi@2001:41d0:1:8a82:7670:6e00:7670:6e00> has joined #illumos
[18:04:11] *** Kurlon <Kurlon!~Kurlon@bidd-pub-03.gwi.net> has joined #illumos
[18:16:58] <KungFuJesus> is zfs_vdev_max_pending still a thing in Illumos?
[18:30:18] <tsoome> reading locally?
[18:30:24] <kkantor> FWIW a simple code search doesn't turn up anything for the zfs_vdev_max_pending tunable.
[18:31:07] <kkantor> Yeah, zfs_vdev_max_pending is gone. See illumos#4045
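For reference, the replacement knobs live in uts/common/fs/zfs/vdev_queue.c; an /etc/system sketch (the values shown are believed to be the stock defaults, not recommendations):

```
* Per-I/O-class queue depths that replaced the single zfs_vdev_max_pending
* knob after illumos#4045 (names from vdev_queue.c)
set zfs:zfs_vdev_max_active = 1000
set zfs:zfs_vdev_sync_read_max_active = 10
set zfs:zfs_vdev_async_read_max_active = 3
```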
[18:36:03] *** MilkmanDan <MilkmanDan!~dan@wilug/expat/MilkmanDan> has quit IRC (Ping timeout: 264 seconds)
[18:36:43] *** MilkmanDan <MilkmanDan!~dan@wilug/expat/MilkmanDan> has joined #illumos
[19:00:35] *** igitoor <igitoor!igitur@unaffiliated/contempt> has quit IRC (Ping timeout: 246 seconds)
[19:01:04] <KungFuJesus> yeah, I found the zfs_vdev_*_max_active stuff
[19:01:14] *** igitoor <igitoor!igitur@2a00:d880:3:1::c1ca:a648> has joined #illumos
[19:01:27] *** MilkmanDan <MilkmanDan!~dan@wilug/expat/MilkmanDan> has quit IRC (Ping timeout: 265 seconds)
[19:01:32] <KungFuJesus> the disks aren't 100% busy, but iostat -xnz for the pool is reporting about 5000 r/sec for the peak
[19:01:49] *** MilkmanDan <MilkmanDan!~dan@wilug/expat/MilkmanDan> has joined #illumos
[19:05:42] <KungFuJesus> I mean this file was created very slowly over a long time, so perhaps it's somewhat fragmented. Copying a new one, but it's just a theory :-/
[19:06:14] <KungFuJesus> write IOs are flushing around 700MB/sec, for whatever that's worth
[19:07:24] <KungFuJesus> hah, I feel I've touched a lot of tuneables I probably shouldn't have - but for the most part it would seem the read limits don't actually hurt latency in the long run
[19:15:20] <KungFuJesus> in any case, 4x vdevs consisting of 6x7200 RPM drives apiece should be getting some pretty substantial read performance
[19:15:31] <KungFuJesus> especially for purely sequential IO
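Back-of-envelope for that expectation: each 6-wide raidz2 streams from 4 data disks, and ~150 MB/s per 7200 RPM drive is a plausible (assumed) figure, so:

```shell
# 4 vdevs x (6 disks - 2 parity) x ~150 MB/s per drive; the per-drive
# number is an assumption, not a measurement
awk 'BEGIN { print 4 * (6 - 2) * 150 " MB/s theoretical streaming ceiling" }'
```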
[19:22:51] <KungFuJesus> so microstate accounting is reporting SYS % as high as 84
[19:23:00] <KungFuJesus> am I CPU bound for this?
[19:28:33] <KungFuJesus> hah, copying the file oddly helped
[19:36:06] <tsoome> what is the origin of that file? torrent?
[19:50:23] <jbk> has anyone seen where invoking 'zlogin -I <zone>' from a shell, after zlogin exits, the shell's pty no longer sends any output (still receives input)
[19:50:30] <jbk> ?
[19:51:03] <jbk> (no idea if this is 'works as designed' a bug or what)
[19:53:17] <jbk> ( 12057 appears to have broken lx docker on smartos; a simple fix seems to be to just break on POLLHUP for the zone/child's stdout/stderr fds, but that happens even before the 12057 fix, so I'm trying to figure out what's going on)
[20:08:21] *** man_u <man_u!~manu@89-92-19-81.hfc.dyn.abo.bbox.fr> has quit IRC (Quit: man_u)
[20:19:03] <neirac> I'm running SmartOS on KVM but I only see one supported frequency for my CPU, which is odd: https://pastebin.com/tF8HwRaP. It should support more.
[20:45:39] *** neirac <neirac!~neirac@pc-184-104-160-190.cm.vtr.net> has quit IRC (Remote host closed the connection)
[20:45:59] *** neirac <neirac!~neirac@pc-184-104-160-190.cm.vtr.net> has joined #illumos
[21:05:37] <jbk> andyf: you might be interested in https://smartos.org/bugview/OS-8083 -- it looks like the zlogin -I and zfd stuff were done (long ago) for lx, and is in omnios (but not illumos-joyent)
[21:06:37] <jbk> i've got a fix i'm testing (basically just 'break;' where in the test I added fprintf() statements)..
[21:07:36] *** MilkmanDan <MilkmanDan!~dan@wilug/expat/MilkmanDan> has quit IRC (Ping timeout: 265 seconds)
[21:08:26] *** MilkmanDan <MilkmanDan!~dan@wilug/expat/MilkmanDan> has joined #illumos
[21:20:02] *** BOKALDO <BOKALDO!~BOKALDO@81.198.20.180> has quit IRC (Quit: Leaving)
[21:26:14] *** elegast <elegast!~elegast@83-161-181-201.mobile.xs4all.nl> has quit IRC (Ping timeout: 258 seconds)
[21:36:36] *** MilkmanDan <MilkmanDan!~dan@wilug/expat/MilkmanDan> has quit IRC (Ping timeout: 265 seconds)
[21:37:25] *** MilkmanDan <MilkmanDan!~dan@wilug/expat/MilkmanDan> has joined #illumos
[21:46:59] *** Kruppt <Kruppt!~Kruppt@50.111.52.202> has quit IRC (Quit: Leaving)
[22:06:26] *** Kurlon <Kurlon!~Kurlon@bidd-pub-03.gwi.net> has quit IRC (Ping timeout: 240 seconds)
[22:35:45] <KungFuJesus> tsoome: a radar simulation program run over the course of a day
[22:36:21] <tsoome> if it does small writes/updates over existing data, it can get ugly, yes.
[22:36:23] <KungFuJesus> it helped the local IO, and I bumped nfs4_bsize up to match the stripe size (128k), which made a difference over NFS when warm in cache, but since this cannot fit completely in cache, performance kind of sucked when it had to go to disk
[22:36:55] <KungFuJesus> locally I'm still looking at like 400MB/sec max
[22:37:11] <KungFuJesus> I'm playing with record size on another data set, with lz4 off this time, to see if I can do much better
[22:37:54] <KungFuJesus> It's writing at 800-900MB/sec
[22:39:11] <KungFuJesus> very incompressible data, lz4 is buying maybe 100MB
[22:39:22] <KungFuJesus> out of 136.4 GB
[22:40:18] <KungFuJesus> In FreeBSD at home I get way better throughput than this on crappier drives. Then again, I have more vdevs and less parity (raidz1, not raidz2, and groups comprised of 4 disks instead of 6)
[22:41:09] <KungFuJesus> a little part of me is worried it might be the multipath IO causing some grief / the dual ported SAS expander + controllers, but I've seen it scrub at like 4GB/sec, so it shouldn't really be limiting throughput
[22:47:07] <KungFuJesus> seeing ~650MB/sec over the wire - some of which is probably due to tcp buffers not quite ramping up to what they should be
[22:47:19] <KungFuJesus> it seems the i40e driver kind of sucks in Linux as well, hah
[22:49:26] <KungFuJesus> with a Linux client, iperf3 with the "reverse" flag (as in download to client) was hitting ~9.4gbps. The opposite direction seemed to bounce between 5 and 8 gbps for no clear reason
[22:49:43] <KungFuJesus> setting irq affinity to the same numa node might have helped a bit, though
[22:53:15] *** alhazred <alhazred!~alhazred@mobile-access-bcee7c-94.dhcp.inet.fi> has joined #illumos
[22:55:32] *** alhazred <alhazred!~alhazred@mobile-access-bcee7c-94.dhcp.inet.fi> has quit IRC (Client Quit)
[22:57:33] <KungFuJesus> is there a reason that 2 of the volumes in each raidz2 wouldn't be read from?
[22:58:18] <KungFuJesus> I frequently see IO patterns that look like this: https://pastebin.com/zj9qSHRj
[23:02:05] <KungFuJesus> shouldn't the scheduling distribute the reads a little bit better than that?
[23:03:04] <jbk> i know at least some of the higher speed intel cards have a somewhat insanely complicated programming interface (though I cannot keep all the codenames, product names, part numbers straight and matched to drivers, so not sure if that's 'i40e', ixgbe, or something else)
[23:04:37] <KungFuJesus> it seems real world, at least with nfs block sizes of 128k, I'm seeing a cap around 5.5-6.2gbps
[23:04:44] <KungFuJesus> with standard MTU sizes
[23:05:28] <KungFuJesus> eliminating the network from the equation, I'm achieving about 1GB/sec locally with reads. Of course I both turned off lz4 compression and used a larger max recordsize of 1m
[23:05:45] <KungFuJesus> so I'm not sure which is buying me how much throughput
[23:06:30] <KungFuJesus> 1GB/sec still seems low, possibly bumping up the recordsize even more might make it better, as sequential scrubs can hit 4GB/sec
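The recordsize effect is mostly arithmetic: a record is split across the vdev's data disks, so small records mean small per-disk I/Os:

```shell
# A 6-wide raidz2 has 4 data disks; per-disk I/O size scales with recordsize
awk 'BEGIN { printf "128k record -> %dk per disk; 1m record -> %dk per disk\n",
             128/4, 1024/4 }'
```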
[23:07:15] <KungFuJesus> I'm still a bit puzzled by those idle disks, though
[23:07:30] <KungFuJesus> It feels like it could be a bug
[23:33:58] *** nde <nde!~nde@96-66-145-126-static.hfc.comcastbusiness.net> has quit IRC (Remote host closed the connection)
[23:44:40] *** arnoldoree <arnoldoree!~arnoldore@ranoldoree.plus.com> has quit IRC (Quit: Leaving)
[23:51:20] *** gh34 <gh34!~textual@cpe-184-58-181-106.wi.res.rr.com> has quit IRC (Quit: Textual IRC Client: www.textualapp.com)