March 2, 2020
[00:00:53] *** KeiraT <KeiraT!~k4ra@gateway/tor-sasl/k4ra> has quit IRC (Remote host closed the connection)
[00:01:21] *** KeiraT <KeiraT!~k4ra@gateway/tor-sasl/k4ra> has joined #illumos
[01:26:05] *** patdk-lap <patdk-lap!~patrickdk@208.95.164.6> has quit IRC (Ping timeout: 268 seconds)
[02:50:41] *** idodeclare <idodeclare!~textual@2600:1700:1101:17c0:44c3:cb6e:12db:6eeb> has quit IRC (Quit: My MacBook has gone to sleep. ZZZzzz…)
[03:46:03] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has joined #illumos
[04:05:15] *** pmooney <pmooney!~pmooney@67-4-175-230.mpls.qwest.net> has quit IRC (Quit: WeeChat 2.7)
[04:45:12] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has quit IRC (Quit: My MacBook has gone to sleep. ZZZzzz…)
[04:47:29] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has joined #illumos
[04:50:34] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has quit IRC (Client Quit)
[04:53:58] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has joined #illumos
[05:11:44] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has quit IRC (Quit: My MacBook has gone to sleep. ZZZzzz…)
[05:22:08] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has joined #illumos
[05:33:00] *** Kruppt <Kruppt!~Kruppt@50-111-62-211.drhm.nc.frontiernet.net> has quit IRC (Remote host closed the connection)
[06:42:18] *** wl_ <wl_!~wl_@2605:6000:1b0c:600d::87c> has quit IRC (Quit: Leaving)
[07:15:28] *** BOKALDO <BOKALDO!~BOKALDO@46.109.200.188> has joined #illumos
[07:29:15] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has quit IRC (Quit: My MacBook has gone to sleep. ZZZzzz…)
[07:32:15] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has joined #illumos
[07:39:05] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has quit IRC (Quit: My MacBook has gone to sleep. ZZZzzz…)
[07:59:35] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has joined #illumos
[08:01:14] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has quit IRC (Client Quit)
[08:19:27] *** ptribble <ptribble!~ptribble@cpc92716-cmbg20-2-0-cust138.5-4.cable.virginm.net> has joined #illumos
[08:19:36] *** tsoome <tsoome!~tsoome@148-52-235-80.sta.estpak.ee> has quit IRC (Quit: This computer has gone to sleep)
[08:21:02] *** neuroserve <neuroserve!~toens@195.71.113.124> has joined #illumos
[08:27:05] *** freakazoid0223 <freakazoid0223!~matt@pool-96-227-98-169.phlapa.fios.verizon.net> has quit IRC (Ping timeout: 240 seconds)
[08:28:59] <sensille> rmustacc: remember my i40e problems? coincidentally, most of the time there's a kmem_move_buffer task running on a very large cache, taking up one cpu for minutes
[08:29:38] <sensille> i've found your https://www.illumos.org/issues/8493 addressing something similar, although in our case no threads are blocking on this
[08:30:27] <sensille> in the ticket you mention other related problems might be fixed by the patch. do you remember if that was the case?
[08:42:26] *** wiedi <wiedi!~wiedi@ip5b4096a6.dynamic.kabel-deutschland.de> has quit IRC (Ping timeout: 256 seconds)
[09:20:35] *** wiedi <wiedi!~wiedi@185.85.220.192> has joined #illumos
[09:36:29] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has joined #illumos
[09:36:38] *** tsoome <tsoome!~tsoome@148-52-235-80.sta.estpak.ee> has joined #illumos
[09:41:10] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has quit IRC (Ping timeout: 258 seconds)
[09:46:50] <EisNerd> Is there a way to profile zfs/kernel IO stack out of the box, I have deployed OI to a NVMe SSD system, but the IO is less impressive than expected
[09:48:15] <EisNerd> sensille: oh sounds interesting, the box I'm talking about uses i40e for all networks
[09:48:51] <EisNerd> 10g (currently used) as well as 40g ports (unused so far)
[09:48:55] <tsoome> what kind of expectations are you looking for with that nvme setup?
[09:48:56] <sensille> EisNerd: i'm absolutely not sure it has anything to do with i40e, though
[09:49:35] <EisNerd> SSG-2029P-DN2R24L
[09:50:13] <EisNerd> I would expect more than ~500MB/s
[09:51:10] <EisNerd> https://pastebin.com/adv7Dw9T
[09:51:45] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has joined #illumos
[09:52:47] <tsoome> what is the spec of that nvme?
[09:57:30] <EisNerd> https://pastebin.com/gdTwgWsh
[09:58:38] <EisNerd> https://business.kioxia.com/ko-kr/ssd/enterprise-ssd/cm5-r-series.html
[09:59:20] *** tsoome_ <tsoome_!~tsoome@148-52-235-80.sta.estpak.ee> has joined #illumos
[09:59:25] *** tsoome_ <tsoome_!~tsoome@148-52-235-80.sta.estpak.ee> has quit IRC (Client Quit)
[09:59:26] <EisNerd> so I would expect half of that, as it is in dual-port mode
[10:00:47] <EisNerd> I have the strong feeling that I missed something obvious
[10:01:53] <tsoome> sequential read/write test is simple - use dd on device and see what you get. bs=128k
[10:02:16] <tsoome> you do not want to write to a device with a pool on it, obviously
[10:03:41] <tsoome> if you can get to 3350MB/s, then you know that layer is ok.
[10:06:37] <EisNerd> slice 2 was the whole disk, right?
[10:06:50] <tsoome> GPT?
[10:07:28] <EisNerd> no idea, for dd I would use an SSD that hasn't been used so far
[10:08:33] <tsoome> depends on how much you want to read - whole disk device name (on x86) ends with p0
[10:09:40] <tsoome> if you have VTOC, there is slice 2, yes. with GPT the whole disk is also device ending with d0
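(a minimal sketch of the raw read test tsoome describes, assuming the unused NVMe shows up as c2t1d0 - the device name is hypothetical; the whole-disk node on x86 ends in p0)

    # sequential read straight from the raw whole-disk device, ~10 GB total
    time dd if=/dev/rdsk/c2t1d0p0 of=/dev/null bs=128k count=80000
    # illumos dd only prints record counts, so divide the bytes read by the
    # elapsed time and compare against the drive's rated ~3350 MB/s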
[10:09:58] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has quit IRC (Ping timeout: 265 seconds)
[10:11:18] <EisNerd> that looks bad https://pastebin.com/jPagPc14
[10:12:51] <tsoome> there is/was some work going on to improve nvme driver, I'm not sure about its current state
[10:12:52] <sensille> could just as well be a spinning disk
[10:14:20] <EisNerd> so the bad zfs performance results from some nvme driver hassle
[10:14:48] <tsoome> what is the block size reported by the disk, 4k?
[10:15:08] <EisNerd> could I measure something (dtrace) to help in identifying what is bad?
[10:15:21] <EisNerd> tsoome: yes
[10:15:58] <EisNerd> hm according to nvmeadm there are 3
[10:16:14] <EisNerd> look at the bottom of the second paste
[10:17:21] <EisNerd> tsoome: best to tell me what command to issue to get the answer to your question the way you intended
[10:18:07] <tsoome> i was just wondering what the smallest possible unit is that zfs is operating with :)
[10:18:28] <EisNerd> may I ask zpool then?
[10:18:31] <tsoome> because throughput = block_size * iops
[10:18:54] <tsoome> zdb without arguments will tell
[10:19:42] <tsoome> but as you get a lowish raw read from the device, that will basically set the baseline..
[10:20:27] <EisNerd> https://pastebin.com/EXQ0eTzC
[10:20:28] <tsoome> of course, you want to run dd a few times to see if the result is consistent
[10:20:40] *** man_u <man_u!~manu@manu2.gandi.net> has joined #illumos
[10:20:45] <tsoome> ashift: 9
[10:20:59] <tsoome> your pool is using 512B blocks, not 4k
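(a quick way to check what the pool was created with, plus the arithmetic tsoome refers to - the IOPS figure below is only an illustrative assumption)

    # zdb with no arguments dumps the cached pool config, including ashift
    zdb | grep ashift       # ashift: 9 = 512B blocks, ashift: 12 = 4k blocks
    # throughput = block_size * iops, so at an assumed 100,000 IOPS:
    #   512 B * 100,000 ~  51 MB/s
    #   4 KiB * 100,000 ~ 410 MB/s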
[10:21:28] <EisNerd> frustrating to get commodity performance from such high-end metal
[10:22:11] <tsoome> you have 5 nvme devices there?
[10:22:18] <EisNerd> as I have several as yet unused disks, I can create a pool with different settings
[10:22:25] <EisNerd> in total there are 12
[10:22:29] <EisNerd> so 7 unused yet
[10:22:30] <tsoome> 12?!
[10:22:48] <EisNerd> this box has up to 24 U.2 slots
[10:22:59] <EisNerd> there is even one with 48
[10:23:06] <tsoome> so, what kind of PCI system does it have?
[10:23:57] <EisNerd> good question, no idea right away
[10:24:22] <tsoome> anyhow, that's not the main question :)
[10:25:14] <EisNerd> btw is there any experience regarding the penalty for zfs encryption?
[10:25:36] <EisNerd> does it utilize HW capabilities of latest xeon generations?
[10:25:49] <tsoome> at least not now. but it *may* also hint at the potential issue - as you can saturate the PCI with nvme devices...
[10:26:14] <sensille> you can also try to read in parallel with several processes
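(a sketch of sensille's parallel-read suggestion, again assuming the hypothetical c2t1d0 device; each dd starts at a different offset so the streams do not overlap)

    for i in 0 1 2 3; do
        dd if=/dev/rdsk/c2t1d0p0 of=/dev/null bs=128k count=20000 skip=$((i * 20000)) &
    done
    wait    # aggregate MB/s across the four readers should rise if a single stream is the limit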
[10:26:37] <EisNerd> but the raw performance of a single ssd should be better on an idle system
[10:26:58] *** amrfrsh <amrfrsh!~Thunderbi@134.19.189.92> has quit IRC (Ping timeout: 255 seconds)
[10:26:59] <tsoome> encryption is a bit of unknown land really.
[10:27:15] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has joined #illumos
[10:28:10] <EisNerd> now is the chance, this box is one step from production, so if you'd like me to measure something I can
[10:28:48] <EisNerd> ashift 12 for 4k pool?
[10:29:10] <tsoome> the kernel crypto framework is using some acceleration from the CPU, but I have no idea how good it is; also, the current aes mechanism is not the best one, and there is work going on to get a better one.
[10:29:19] <tsoome> yes, 12 is for 4k
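(a minimal sketch of creating a 4k-aligned test pool - the device names are hypothetical, and -o ashift=12 is an OpenZFS-style creation option; if the local zpool does not accept it, the value comes from the device's reported physical block size instead)

    zpool create -o ashift=12 testpool raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0
    zdb | grep ashift    # verify the new vdevs really got ashift: 12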
[10:32:15] <EisNerd> I'll use the second node
[10:34:05] <EisNerd> the sata DOMs for the OS deliver 367MB/sec
[10:34:17] <EisNerd> according to dd read with bs 128k
[10:34:42] <EisNerd> so there is definitely something screwed up with the nvme stuff
[10:37:33] <EisNerd> so if someone can come up with some dtrace to get more details about nvme raw access, I'll be happy to assist
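(one hedged starting point for that request: the generic dtrace io provider gives a per-device block I/O latency histogram - this is not nvme-driver specific)

    # run as root; Ctrl-C prints a latency quantization per device
    dtrace -n '
    io:::start { ts[arg0] = timestamp; }
    io:::done /ts[arg0]/ {
        @lat[args[1]->dev_statname] = quantize(timestamp - ts[arg0]);
        ts[arg0] = 0;
    }'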
[10:39:29] <EisNerd> at least I'd like to get the pool params right, particularly the ones that can only be set at creation
[10:42:42] *** amrfrsh <amrfrsh!~Thunderbi@109.201.133.238> has joined #illumos
[10:42:43] *** andy_js <andy_js!~andy@51.146.99.40> has joined #illumos
[10:43:16] <tsoome> profiling hotspots and lock contention etc is a lot of work. I'd start by asking on the illumos developers list first - as I wrote, there has been some work going on to improve nvme, and there may already be something to test with :)
[10:47:28] <EisNerd> are you talking about this https://www.illumos.org/issues/9291? Or is it only tracked on mailing list?
[10:52:23] <tsoome> it may be this one, but there was something posted recently on the dev list.
[11:03:25] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has quit IRC (Ping timeout: 240 seconds)
[11:06:06] <EisNerd> interesting, there seem to be slow and fast ssds
[11:06:19] <EisNerd> at least when pinging with dd
[11:08:23] <EisNerd> the latency script from this bug supports this
[11:16:15] <andyf> ptribble: I'm not familiar with that expression
[11:19:45] <ptribble> https://www.investopedia.com/terms/b/boil-the-ocean.asp
[11:21:00] <ptribble> I mean, yes we could sit down and try to fix every imperfection in libdladm (and there's no shortage of them)
[11:23:08] <ptribble> or we could just make sure we limit the scope to fixing the immediate bugs
[11:25:56] <EisNerd> tsoome: which list exactly?
[11:32:54] <tsoome> if anything then illumos developers
[11:33:46] <tsoome> but perhaps it is good idea to poke adam anyhow:D
[11:35:21] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has joined #illumos
[11:36:14] <EisNerd> again the question if an illumos change is already in OI
[11:41:55] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has quit IRC (Ping timeout: 258 seconds)
[11:46:36] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has joined #illumos
[11:50:32] <EisNerd> if I redo the dd several times on one ssd the result gets into an acceptable range, but only for those used by the pool
[11:51:58] <EisNerd> it is freaky to see that the smb write performance is at 70 to 90 MB/s; I would expect at least a stable 100
[11:53:17] <EisNerd> as limited by network
[11:53:44] <sensille> smb is susceptible to latency, though
[11:55:00] <EisNerd> but this box should easily saturate gbit
[11:57:37] <jimklimov> does smb imply O_SYNC like NFS does?
[11:58:55] <jimklimov> can you experiment on the target dataset with `zfs set sync=disabled pool/dataset` and see if that has any impact?
[11:59:41] <EisNerd> I can, so far we are in preproduction phase
[11:59:55] <jimklimov> if yes, there may be a benefit to adding a SLOG device to the final pool, dedicated to sequential sync writes, with the data later flushed from RAM to the real pool, in normal production mode
[12:00:38] <EisNerd> jimklimov: 5 enterprise-class nvme ssds as raidz2 shouldn't need such a thing, I would expect
[12:01:03] <jimklimov> at least, with usual SSDs (SAS/SATA ones) people did report benefits of having separate ZIL devices, preferably on DDRDrives or somesuch
[12:03:02] <EisNerd> hm no impact, maybe smb is slow due to other things
[12:03:08] <jimklimov> well, for sync writes every transaction, so maybe every small fwrite(), has to hit the pool to be acknowledged - so spread around into raidz2 blocks etc. This may be hard on the ssd's controller (lots of small random writes to reprogram the SSD pages), and wear the flash out as well
[12:04:19] <jimklimov> a separate ZIL device can afford to write in larger blocks, more likely filling whole pages, and the target pool would also get flushed every 5 sec (txg sync interval) in larger blocks, hence the possible benefits
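(if the sync=disabled experiment does show a win, jimklimov's SLOG suggestion would look roughly like this - the pool, dataset, and device names are hypothetical)

    zfs set sync=standard tank/share    # put sync behaviour back to normal first
    zpool add tank log c4t0d0           # dedicate one fast device as a separate log (SLOG)
    zpool status tank                   # the device now appears under "logs"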
[12:11:57] <tsoome> jumpy results from dd reads from the device mean there are other consumers affecting the communication.
[12:12:57] <tsoome> in an ideal world we should see similar results from reading the device.
[12:15:06] <jimklimov> possibly, the device also has its DDR cache and re-serves the blocks from it on subsequent runs?
[12:15:54] <tsoome> yes, that can be the case - meaning the test should run long enough to make the cache do more work.
[12:16:55] <tsoome> that's the old question - was the IO performance measurement done with cold or warm caches :D
[12:17:43] <jimklimov> and/or the data transferred should be >> cache size
[12:18:16] <jimklimov> and also the sequential LBAs are not necessarily on same/nearby chips
[12:18:44] <tsoome> smb may be a tricky topic too; I have seen quite a reasonable user experience with smb on a local (wifi) network, but very poor performance over VPN.
[12:18:46] <jimklimov> and also some controllers can have their own compression of on-chip data
[12:19:09] <jimklimov> notably, empty blocks/pages might involve no real storage and I/O
[12:20:03] <jimklimov> yep, for SMB (over TCP) there is at least the OpenVPN know-how of preferring VPN over UDP, to avoid two TCP window adjusters fighting each other
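(the OpenVPN tweak jimklimov mentions is just a transport choice in the client config; a minimal sketch, with placeholder host and port)

    # client.conf: carry the tunnel over UDP so the SMB TCP session is the
    # only flow doing TCP window/retransmit management
    proto udp
    remote vpn.example.com 1194
    # proto tcp-client would stack TCP over TCP, which is what the advice avoids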
[12:20:28] <tsoome> ou
[12:20:39] <tsoome> hm. that's probably the case then
[12:21:35] <tsoome> anyhow, there are 2 alternate implementations to compare.
[12:34:45] *** Asgaroth <Asgaroth!~Asgaroth@51.37.56.182> has joined #illumos
[12:36:11] *** amrfrsh <amrfrsh!~Thunderbi@109.201.133.238> has quit IRC (Quit: amrfrsh)
[12:57:41] <sensille> smb also has its own windowing. some clients only use 16k there, others 64k
[13:08:26] <tsoome> sensille, auto adjusting, i presume?
[13:08:49] <sensille> not the last time i looked into it
[13:09:01] <sensille> on windows, a registry key
[13:09:08] <sensille> on linux, a mount option
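(for the Linux side sensille mentions, the sizes are cifs mount options; a hedged example with an illustrative share path and values)

    # ask for 64k read/write sizes instead of a small default
    mount -t cifs //server/share /mnt/share -o rsize=65536,wsize=65536,username=user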
[13:09:16] *** ovi <ovi!~sh42@fsf/member/zeroSignal> has joined #illumos
[13:09:32] <tsoome> i see
[13:50:42] *** jcea <jcea!~Thunderbi@2001:bc8:2ecd:caed:7670:6e00:7670:6e00> has joined #illumos
[13:54:06] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has quit IRC (Ping timeout: 256 seconds)
[13:55:19] *** ptribble <ptribble!~ptribble@cpc92716-cmbg20-2-0-cust138.5-4.cable.virginm.net> has quit IRC (Ping timeout: 255 seconds)
[14:06:41] *** ptribble <ptribble!~ptribble@cpc92716-cmbg20-2-0-cust138.5-4.cable.virginm.net> has joined #illumos
[14:21:46] *** gh34 <gh34!~textual@cpe-184-58-181-106.wi.res.rr.com> has joined #illumos
[14:26:22] *** Kurlon_ <Kurlon_!~Kurlon@cpe-67-253-136-97.rochester.res.rr.com> has quit IRC (Ping timeout: 258 seconds)
[14:35:15] *** tru_tru <tru_tru!~tru@157.99.90.140> has quit IRC (Quit: Lost terminal)
[14:52:39] *** Kruppt <Kruppt!~Kruppt@50-111-62-211.drhm.nc.frontiernet.net> has joined #illumos
[14:56:16] *** patdk-lap <patdk-lap!~patrickdk@208.95.164.6> has joined #illumos
[15:03:32] *** BOKALDO <BOKALDO!~BOKALDO@46.109.200.188> has quit IRC (Quit: Leaving)
[15:09:25] <EisNerd> what is the difference between /dev/dsk and /dev/rdsk, as I get significant performance differences
[15:17:58] *** Lirion <Lirion!~kesselink@wikimedia-commons/Lirion> has quit IRC (Remote host closed the connection)
[15:21:35] <v_a_b> "dsk" is a block device. Data is read in blocks. "rdsk" is a character device. Data are read a character at a time. The "r" historically stands for "raw".
[15:22:48] <tsoome> it is a bit more complicated, as character in this context is the smallest addressable unit of data.
[15:23:18] <tsoome> which in terms of disk is sector size.
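(a quick way to see the difference being described, reusing the hypothetical c2t1d0 device: the block node goes through the kernel's caching layer, while the raw node hands each 128k read straight to the driver)

    time dd if=/dev/dsk/c2t1d0p0 of=/dev/null bs=128k count=20000     # buffered block device
    time dd if=/dev/rdsk/c2t1d0p0 of=/dev/null bs=128k count=20000    # raw character device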
[15:24:18] <v_a_b> tsoome Are you sure that someone who doesn't know the difference needs all the complicated details? ;-)
[15:24:40] <tsoome> :)
[15:28:38] <EisNerd> maybe I'll try to boot one node tomorrow with a linux live cd and check if it behaves differently
[15:30:42] *** tru_tru <tru_tru!~tru@157.99.90.140> has joined #illumos
[15:32:51] <v_a_b> EisNerd What exactly are you trying to do? And what kind of "differences" do you see?
[15:45:09] *** amrfrsh <amrfrsh!~Thunderbi@109.201.133.238> has joined #illumos
[15:46:21] *** tsoome <tsoome!~tsoome@148-52-235-80.sta.estpak.ee> has quit IRC (Quit: This computer has gone to sleep)
[15:46:54] *** amrfrsh <amrfrsh!~Thunderbi@109.201.133.238> has quit IRC (Client Quit)
[15:52:30] *** papertigers <papertigers!~papertige@pool-72-75-236-166.bflony.fios.verizon.net> has quit IRC (Remote host closed the connection)
[15:55:03] *** Kurlon <Kurlon!~Kurlon@bidd-pub-03.gwi.net> has joined #illumos
[16:16:11] *** BOKALDO <BOKALDO!~BOKALDO@87.110.89.165> has joined #illumos
[16:18:06] <sjorge> As for SMB I think it does both sync and async depending on the command issued, right?
[16:18:20] <sjorge> At least samba does, and our implementation generally is better xD
[16:22:15] *** freakazoid0223 <freakazoid0223!~matt@pool-96-227-98-169.phlapa.fios.verizon.net> has joined #illumos
[16:22:46] <gitomat> [illumos-gate] 12347 need hotplug(1m) manpage -- John Levon <john.levon at joyent dot com>
[16:23:21] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has joined #illumos
[16:25:00] *** jcea <jcea!~Thunderbi@2001:bc8:2ecd:caed:7670:6e00:7670:6e00> has quit IRC (Quit: jcea)
[16:25:02] *** amrfrsh <amrfrsh!~Thunderbi@134.19.189.92> has joined #illumos
[16:39:05] *** jcea <jcea!~Thunderbi@2001:bc8:2ecd:caed:7670:6e00:7670:6e00> has joined #illumos
[16:41:16] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has quit IRC (Ping timeout: 256 seconds)
[16:42:35] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has joined #illumos
[16:46:28] *** jcea <jcea!~Thunderbi@2001:bc8:2ecd:caed:7670:6e00:7670:6e00> has quit IRC (Quit: jcea)
[16:46:45] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has quit IRC (Ping timeout: 240 seconds)
[16:49:12] *** Asgaroth <Asgaroth!~Asgaroth@51.37.56.182> has quit IRC (Ping timeout: 265 seconds)
[16:52:04] *** spicywolf <spicywolf!~spicywolf@c-24-8-18-96.hsd1.co.comcast.net> has joined #illumos
[16:54:29] <spicywolf> So, is it normal and okay to get onu errors with the default ONURI?
[16:58:21] <Woodstock> not really
[17:01:39] <spicywolf> Okay. So if ipkg.sfbay doesn't resolve, what should I be using? Where should I point the ONURI?
[17:03:10] <Woodstock> try to add -d /path/to/your/packages
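(a hedged example of what Woodstock suggests, run from the root of a freshly built illumos-gate workspace - the BE name and repo path are illustrative)

    # -t names the new boot environment, -d points onu at the locally built
    # package repo instead of the default ONURI
    /opt/onbld/bin/onu -t my-nightly -d $PWD/packages/i386/nightly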
[17:04:59] *** tsoome <tsoome!~tsoome@1e43-0376-eba1-74f5-2f80-4a40-07d0-2001.sta.estpak.ee> has joined #illumos
[17:08:25] <spicywolf> Well, that worked. Can you point me to some resources on why it would have failed like that?
[17:10:44] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has joined #illumos
[17:25:01] *** neuroserve <neuroserve!~toens@195.71.113.124> has quit IRC (Ping timeout: 255 seconds)
[17:30:02] *** jcea <jcea!~Thunderbi@2001:bc8:2ecd:caed:7670:6e00:7670:6e00> has joined #illumos
[17:31:43] <Woodstock> spicywolf: i can't as i really don't have any idea. i haven't run onu in years, and i figured it only tries to reach a long-gone sun-internal nightly repo if it doesn't know where to get packages from.
[17:32:30] <Woodstock> spicywolf: and it's apparently not smart enough to just look for them in the well-known location inside the workspace it was called from...
[17:32:30] *** AllanJude <AllanJude!~allan@freebsd/developer/AllanJude> has joined #illumos
[17:34:32] <spicywolf> Well, that's probably it, isn't it. Guess that's gonna be the first thing to look into. Thank you Woodstock!
[17:35:22] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has quit IRC (Ping timeout: 255 seconds)
[17:45:23] *** alanc <alanc!~alanc@inet-hqmc02-o.oracle.com> has quit IRC (Remote host closed the connection)
[17:45:49] *** alanc <alanc!~alanc@inet-hqmc02-o.oracle.com> has joined #illumos
[18:06:47] *** Teknix <Teknix!~pds@172.58.44.166> has quit IRC (Ping timeout: 258 seconds)
[18:09:05] *** Teknix <Teknix!~pds@172.58.44.166> has joined #illumos
[18:12:32] *** man_u <man_u!~manu@manu2.gandi.net> has quit IRC (Quit: man_u)
[18:18:00] *** spicywolf <spicywolf!~spicywolf@c-24-8-18-96.hsd1.co.comcast.net> has quit IRC (Quit: Leaving)
[18:29:37] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has joined #illumos
[18:34:11] *** wiedi <wiedi!~wiedi@185.85.220.192> has quit IRC (Ping timeout: 260 seconds)
[18:38:13] *** scarcry <scarcry!~scarcry@2001:980:93d7:1:80c9:7fff:fe0f:aaf8> has joined #illumos
[18:38:50] *** nde <nde!uid414739@gateway/web/irccloud.com/x-ivbdjmieupfniwbg> has joined #illumos
[18:46:10] *** tsoome <tsoome!~tsoome@1e43-0376-eba1-74f5-2f80-4a40-07d0-2001.sta.estpak.ee> has quit IRC (Quit: Leaving)
[18:46:21] *** tsoome <tsoome!~tsoome@1e43-0376-eba1-74f5-2f80-4a40-07d0-2001.sta.estpak.ee> has joined #illumos
[19:00:21] *** tru_tru <tru_tru!~tru@157.99.90.140> has quit IRC (Quit: leaving)
[19:00:35] *** tru_tru <tru_tru!~tru@157.99.90.140> has joined #illumos
[19:05:27] *** scarcry <scarcry!~scarcry@2001:980:93d7:1:80c9:7fff:fe0f:aaf8> has left #illumos
[19:14:03] *** wiedi <wiedi!~wiedi@ip5b4096a6.dynamic.kabel-deutschland.de> has joined #illumos
[19:34:42] <tsoome> any suggestions on whom to bug to get https://code.illumos.org/c/illumos-gate/+/264 reviewed? someone from nexenta? :)
[19:36:36] <danmcd> Does it affect compiled binaries at all, save maybe line numbers in VERIFY/ASSERT?
[19:38:03] <tsoome> it should not. but it is more verbose about making sense of the signatures
[19:38:07] <tsoome> so to say:)
[19:43:52] *** wiedi <wiedi!~wiedi@ip5b4096a6.dynamic.kabel-deutschland.de> has quit IRC (Quit: ^C)
[19:54:33] *** wiedi <wiedi!~wiedi@ip5b4096a6.dynamic.kabel-deutschland.de> has joined #illumos
[20:34:53] *** tsoome_ <tsoome_!~tsoome@80.235.52.148> has joined #illumos
[20:34:58] *** tsoome_ <tsoome_!~tsoome@80.235.52.148> has quit IRC (Client Quit)
[20:37:26] <gitomat> [illumos-gate] 12343 Direct IO support -- Jerry Jelinek <jerry.jelinek at joyent dot com>
[20:40:46] *** papertigers <papertigers!~papertige@pool-68-133-56-51.bflony.fios.verizon.net> has joined #illumos
[20:58:32] *** jcea <jcea!~Thunderbi@2001:bc8:2ecd:caed:7670:6e00:7670:6e00> has quit IRC (Remote host closed the connection)
[20:59:45] *** jcea <jcea!~Thunderbi@2001:bc8:2ecd:caed:7670:6e00:7670:6e00> has joined #illumos
[21:08:42] *** Kruppt <Kruppt!~Kruppt@50-111-62-211.drhm.nc.frontiernet.net> has quit IRC (Remote host closed the connection)
[21:12:41] *** BOKALDO <BOKALDO!~BOKALDO@87.110.89.165> has quit IRC (Quit: Leaving)
[21:21:29] *** alanc <alanc!~alanc@inet-hqmc02-o.oracle.com> has quit IRC (Quit: Leaving)
[21:22:06] *** alanc <alanc!~alanc@inet-hqmc02-o.oracle.com> has joined #illumos
[22:27:43] *** mgerdts <mgerdts!~textual@2600:6c44:c7f:ec89:1424:fdad:36d2:d5cb> has quit IRC (Quit: Textual IRC Client: www.textualapp.com)
[22:35:26] *** mgerdts <mgerdts!~textual@2600:6c44:c7f:ec89:50ec:3ee8:ec8b:cda4> has joined #illumos
[23:08:18] *** Kurlon <Kurlon!~Kurlon@bidd-pub-03.gwi.net> has quit IRC (Ping timeout: 256 seconds)
[23:09:43] *** btibble <btibble!~brantibbl@c-69-94-200-89.hs.gigamonster.net> has quit IRC (Ping timeout: 268 seconds)
[23:19:14] *** btibble <btibble!~brantibbl@c-69-94-200-89.hs.gigamonster.net> has joined #illumos
[23:36:32] *** gh34 <gh34!~textual@cpe-184-58-181-106.wi.res.rr.com> has quit IRC (Quit: Textual IRC Client: www.textualapp.com)
[23:42:17] *** andy_js <andy_js!~andy@51.146.99.40> has quit IRC (Quit: andy_js)
[23:53:38] *** richlowe <richlowe!~richlowe@cpe-74-139-197-163.kya.res.rr.com> has quit IRC (Ping timeout: 256 seconds)
[23:54:06] *** Kruppt <Kruppt!~Kruppt@50-111-62-211.drhm.nc.frontiernet.net> has joined #illumos
[23:54:16] *** bahamas10 <bahamas10!~dave@cpe-72-231-182-75.nycap.res.rr.com> has quit IRC (Ping timeout: 255 seconds)
[23:55:01] *** bahamas10 <bahamas10!~dave@cpe-72-231-182-75.nycap.res.rr.com> has joined #illumos