   November 2, 2019

[00:39:48] *** jcea <jcea!~Thunderbi@2001:41d0:1:8a82:7670:6e00:7670:6e00> has joined #illumos
[01:51:29] *** cartwright <cartwright!~chatting@gateway/tor-sasl/cantstanya> has quit IRC (Remote host closed the connection)
[01:53:18] *** rsully <rsully!~rsully@unaffiliated/rsully> has joined #illumos
[01:57:17] *** cartwright <cartwright!~chatting@gateway/tor-sasl/cantstanya> has joined #illumos
[02:04:48] *** Qatz <Qatz!~db@c-66-31-24-126.hsd1.nh.comcast.net> has quit IRC (Quit: Gone looking for beer)
[02:06:12] *** Qatz <Qatz!~db@2601:187:8400:5::42d> has joined #illumos
[02:20:00] *** ed209 <ed209!~ed209@165.225.128.67> has quit IRC (Remote host closed the connection)
[02:20:06] *** ed209 <ed209!~ed209@165.225.128.67> has joined #illumos
[02:26:10] <gitomat> [illumos-gate] 11904 Add quick/make-nfs -- Kevin Crowe <kevin.crowe at nexenta dot com>
[02:33:30] <danmcd> <count-von-count>TWO... TWO LESS PATCHES FROM THE NFS-ZONE WAD... HAH HAH HAH </count-von-count>
[02:33:41] <danmcd> Okay, I'm outa here.
[02:47:44] *** Qatz <Qatz!~db@2601:187:8400:5::42d> has quit IRC (Quit: Gone looking for beer)
[02:49:25] *** Qatz <Qatz!~db@2601:187:8400:5::42d> has joined #illumos
[03:04:04] *** Yogurt <Yogurt!~Yogurt@104-7-67-228.lightspeed.sntcca.sbcglobal.net> has quit IRC (Remote host closed the connection)
[03:19:41] *** Yogurt <Yogurt!~Yogurt@104-7-67-228.lightspeed.sntcca.sbcglobal.net> has joined #illumos
[03:20:31] *** tg2 <tg2!~tg2@205.204.66.35> has joined #illumos
[03:23:56] *** Yogurt <Yogurt!~Yogurt@104-7-67-228.lightspeed.sntcca.sbcglobal.net> has quit IRC (Ping timeout: 240 seconds)
[03:25:01] *** alanc <alanc!~alanc@inet-hqmc06-o.oracle.com> has quit IRC (Ping timeout: 252 seconds)
[03:25:52] *** alanc <alanc!~alanc@inet-hqmc01-o.oracle.com> has joined #illumos
[03:33:38] *** jcea <jcea!~Thunderbi@2001:41d0:1:8a82:7670:6e00:7670:6e00> has quit IRC (Ping timeout: 245 seconds)
[03:36:58] *** jcea <jcea!~Thunderbi@2001:41d0:1:8a82:7670:6e00:7670:6e00> has joined #illumos
[03:47:44] *** Qatz <Qatz!~db@2601:187:8400:5::42d> has quit IRC (Quit: Gone looking for beer)
[03:48:48] *** Qatz <Qatz!~db@c-66-31-24-126.hsd1.nh.comcast.net> has joined #illumos
[04:32:46] *** jcea <jcea!~Thunderbi@2001:41d0:1:8a82:7670:6e00:7670:6e00> has quit IRC (Remote host closed the connection)
[04:33:09] *** jcea <jcea!~Thunderbi@2001:41d0:1:8a82:7670:6e00:7670:6e00> has joined #illumos
[05:39:17] *** rsully <rsully!~rsully@unaffiliated/rsully> has quit IRC (Quit: rsully)
[05:48:44] <gitomat> [illumos-gate] 11870 cleanup sys/ddi_implfuncs.h -- Joshua M. Clulow <josh at sysmgr dot org>
[06:27:52] *** jcea <jcea!~Thunderbi@2001:41d0:1:8a82:7670:6e00:7670:6e00> has quit IRC (Remote host closed the connection)
[07:53:11] *** leoric_ <leoric_!~leoric@46.147.104.58> has joined #illumos
[08:15:39] <Reinhilde> Woodstock: would you mind instructing me on what to do, since using gdb (my debugger of choice) just points me somewhere where there shouldn't be a bus error?
[09:02:16] *** andy_js <andy_js!~andy@97e29e78.skybroadband.com> has joined #illumos
[09:22:39] <Smithx10> @LeftWing https://github.com/golang/go/issues/35085 this seems like a tough one
[09:23:50] <Smithx10> Curious what the answer is to "I'd appreciate any advice for how folks would normally debug what seems like a pretty tight race in the runtime like this one"
[09:29:35] *** psarria <psarria!~psarria@26.red-79-146-96.dynamicip.rima-tde.net> has joined #illumos
[09:35:00] *** leoric <leoric!~alp@pyhalov.cc.rsu.ru> has quit IRC (Remote host closed the connection)
[09:35:07] *** leoric_ is now known as leoric
[09:46:56] *** leoric <leoric!~leoric@46.147.104.58> has quit IRC (Quit: Konversation terminated!)
[11:20:00] *** ed209 <ed209!~ed209@165.225.128.67> has quit IRC (Remote host closed the connection)
[11:20:06] *** ed209 <ed209!~ed209@165.225.128.67> has joined #illumos
[12:24:07] *** arnoldoree <arnoldoree!~arnoldore@ranoldoree.plus.com> has joined #illumos
[13:08:27] *** wiedi <wiedi!~wiedi@ip5b4096a6.dynamic.kabel-deutschland.de> has quit IRC (Read error: Connection reset by peer)
[13:25:54] *** ibenn <ibenn!~benn@HSI-KBW-095-208-236-187.hsi5.kabel-badenwuerttemberg.de> has joined #illumos
[13:36:11] *** wiedi <wiedi!~wiedi@ip5b4096a6.dynamic.kabel-deutschland.de> has joined #illumos
[13:50:44] *** Kruppt <Kruppt!~Kruppt@104.169.30.251> has joined #illumos
[14:10:13] *** andy_js <andy_js!~andy@97e29e78.skybroadband.com> has quit IRC (Read error: No route to host)
[14:10:26] *** andy_js <andy_js!~andy@97e29e78.skybroadband.com> has joined #illumos
[14:19:59] <Reinhilde> ipadm: Missing local address
[14:20:05] <Reinhilde> and yes, I have specified a local address
[14:23:54] <Reinhilde> please ignore
[14:47:59] *** MarcelT <MarcelT!~marcel@tortuga.telka.sk> has joined #illumos
[14:58:01] *** d0h <d0h!~d0h@99-129-204-97.lightspeed.sndgca.sbcglobal.net> has quit IRC (Read error: Connection reset by peer)
[14:58:54] <andyf> Reinhilde - I am just here to quickly catch up, but open the core file in `mdb` and put the output of ::status, ::stack and ::regs somewhere - that should be a start
[14:59:49] <andyf> also the disassembly for the top record on the stack, so something like - udb_alloc_space+0x56::dis - your function will probably not be udb_alloc_space, that's just an example
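A minimal non-interactive sketch of that triage, assuming the core file sits at ./core (the path is a placeholder, and the symbol+offset fed to ::dis should be whatever ::stack actually reports, per andyf's caveat above):

    # capture the basic dcmds andyf listed
    printf '::status\n::stack\n::regs\n' | mdb ./core > triage.txt
    # then disassemble around the top stack frame; symbol+offset is illustrative
    printf 'udb_alloc_space+0x56::dis\n' | mdb ./core >> triage.txt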
[15:00:36] <Reinhilde> andyf: thanks. I'm currently working on a completely different project (a userspace iptun/6i4 endpoint... do not ask me why. Just do not.), for what it's worth
[15:12:12] *** ibenn <ibenn!~benn@HSI-KBW-095-208-236-187.hsi5.kabel-badenwuerttemberg.de> has quit IRC (Quit: Leaving)
[15:22:56] *** wonko <wonko!~quassel@134.209.46.246> has quit IRC (Ping timeout: 240 seconds)
[15:24:04] *** wonko <wonko!~quassel@134.209.46.246> has joined #illumos
[15:46:40] *** Kurlon <Kurlon!~Kurlon@cpe-67-253-141-249.rochester.res.rr.com> has quit IRC (Remote host closed the connection)
[16:05:00] *** cartwright <cartwright!~chatting@gateway/tor-sasl/cantstanya> has quit IRC (Remote host closed the connection)
[16:06:51] *** cartwright <cartwright!~chatting@gateway/tor-sasl/cantstanya> has joined #illumos
[16:12:50] *** Riastradh <Riastradh!~riastradh@netbsd/developer/riastradh> has quit IRC (Ping timeout: 240 seconds)
[17:05:28] *** khng300 <khng300!~khng300@unaffiliated/khng300> has quit IRC (Read error: Connection reset by peer)
[17:06:14] *** khng300 <khng300!~khng300@unaffiliated/khng300> has joined #illumos
[17:32:32] *** arnold_oree <arnold_oree!~arnoldore@ranoldoree.plus.com> has joined #illumos
[17:51:18] <Reinhilde> bleb
[18:01:39] <Reinhilde> 100
[18:13:49] <Reinhilde> andyf: jesus chrisket
[18:14:11] <Reinhilde> just ran into the high load crashbug again
[18:18:18] <Reinhilde> https://umbrellix.net/~ellenor/weechatsigbus.txt
[18:27:27] <despair86> try DBX, p. sure the version from sun studio 12.1 works fine. the latest dbx requires some external libraries
[18:27:51] <Reinhilde> I don't have a copy of sun studio nor can I afford one
[18:27:52] <despair86> at least dbx can reconstruct halfway decent symbol table info
[18:27:55] <despair86> oh
[18:28:24] <despair86> what illumos are you using? openindiana still ships sun studio 12.1
[18:28:29] <Reinhilde> omnios
[18:28:32] <despair86> oh
[18:28:53] <Reinhilde> i've found that if I compile with debug symbols enabled, gdb will give useful information (if the crash was caused by the application and not, as I am suggesting here, an operating system fault)
[18:29:08] <despair86> oh ok at least gdb is finally catching up
[18:29:33] <despair86> yeah i still have gdb 7.1 which is still freakishly broken
[18:30:12] <Reinhilde> this SIGBUS fault is not application related though.
[18:30:26] <Reinhilde> I've never had sigbus when simultaneously running weechat and cc1plus on ANY other system
[18:31:05] <_mjg> 0x0000000000a04370
[18:31:22] <_mjg> this looks a lot like a /truncated/ address
[18:32:04] <_mjg> i.e., there is a classic bug where people don't provide a header file for malloc
[18:32:10] <_mjg> so it is assumed to return int
[18:33:07] <Reinhilde> _mjg: and that could be in both cc1plus and weechat.
[18:33:19] <Reinhilde> and tmux. and all these other unconnected programs.
[18:33:21] <_mjg> no, in a lib used by both
[18:33:44] <_mjg> can you provide disasm of the entire hook_timer_exec func?
[18:34:00] <_mjg> let me see where it takes the crashing rax from
[18:34:08] <Reinhilde> i can provide the original C
[18:34:17] <Reinhilde> but that won't be very useful to you
[18:34:23] <_mjg> no
[18:34:24] <_mjg> just
[18:34:31] <_mjg> hook_timer_exec::dis
[18:34:38] <_mjg> entire thing
[18:34:42] <Reinhilde> k
[18:35:17] <_mjg> if it ultimately comes from here: movq +0x91ed8(%rip),%rax
[18:35:20] <_mjg> something is very broken
[18:35:49] <Reinhilde> _mjg: the file has been updated, refresh and scroll past the gdb crud
[18:39:13] *** rsully <rsully!~rsully@unaffiliated/rsully> has joined #illumos
[18:40:21] <_mjg> ok, given the code flow it looks like weechat`weechat_hooks+0x10 contains 0x0000000000a04370
[18:40:46] <_mjg> i don't know how to print it, does weechat`weechat_hooks::print work?
[18:41:00] <_mjg> alternatively perhaps weechat`weechat_hooks::dump 100
[18:41:11] <_mjg> there should be something in mdb to also print all memory mappings
[18:41:44] <_mjg> to verify whether the target page is mapped. if it is not, see the suspicion about truncated addr. if it is, that's a weird ass issue
[18:42:37] <_mjg> does ::pmap work?
[18:43:20] <Reinhilde> mdb: no type data available for weechat`weechat_hooks [792]: unknown object file name
[18:44:48] <Reinhilde> _mjg: file updated, refresh and scroll to end. pmap is an unknown dcmd name
[18:45:28] <_mjg> found it: $m
[18:45:46] <_mjg> (literally dollar sign followed by m)
[18:46:16] <_mjg> $m - print address space mappings
[18:46:52] <_mjg> apparently there is also: mappings
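Both checks can be combined into one non-interactive pass over the core, as a sketch: /J prints the 8-byte value stored at weechat`weechat_hooks+0x10, and $m then lists the address-space mappings so the crashing address can be compared against them (core path again a placeholder):

    printf 'weechat`weechat_hooks+0x10/J\n$m\n' | mdb ./core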
[18:48:45] <Reinhilde> _mjg: https://umbrellix.net/~ellenor/weechatsigbus.txt updated again, refresh
[18:51:49] <_mjg> the crashing address fits the heap no problem
[18:52:01] <_mjg> i.e., this 78e000 125a000 acc000 [ heap ]
[18:52:25] <_mjg> so short of the mapping getting fucked in the kernel or the RAM being badly damaged i have no idea
[18:52:32] <_mjg> you will have to ask someone else, sorry :)
[18:54:06] <Reinhilde> _mjg: what other information could I potentially pull from the core file to dx the problem?
[18:55:24] *** Riastradh <Riastradh!~riastradh@netbsd/developer/riastradh> has joined #illumos
[18:55:42] <LeftWing> I think there's an ELF note that talks more about the signal that killed the process
[18:56:14] <_mjg> i think the next step for you will be to do add a dtrace probe in the kernel and dump why it delivers the signal
[18:56:29] <Reinhilde> LeftWing: I forgot how to read elf notes ;-)
[18:56:31] <LeftWing> Well I think the ELF note might contain some information
[18:56:37] <_mjg> now that i said it, if the note contains that... :>
[18:56:43] <LeftWing> /usr/bin/elfdump -n <core>
[18:57:14] <Reinhilde> pr_info:
[18:57:16] <Reinhilde> si_signo: [ SIGBUS ]
[18:57:18] <Reinhilde> si_errno: [ ENOMEM ]
[18:57:20] <Reinhilde> si_code: [ BUS_OBJERR ]
[18:57:22] <Reinhilde> si_addr: 0x0000000000a043b0
[18:57:31] <LeftWing> Do you have swap configured?
[18:57:36] <_mjg> ENOMEM? are you swapping?
[18:58:12] <_mjg> would definitely correlate nicely with the crash under load
[18:58:27] <Reinhilde> last pid: 3697; load avg: 1.90, 2.02, 1.96; up 2+13:08:20 10:58:16
[18:58:29] <Reinhilde> 97 processes: 96 sleeping, 1 on cpu
[18:58:31] <Reinhilde> CPU states: 90.5% idle, 3.1% user, 6.4% kernel, 0.0% iowait, 0.0% swap
[18:58:33] <Reinhilde> Kernel: 345 ctxsw, 3090 trap, 2139 intr, 2427 syscall, 9 fork, 2494 flt
[18:58:35] <Reinhilde> Memory: 1023M phys mem, 166M free mem, 10G total swap, 10G free swap
[18:58:37] <Reinhilde> ARC: 358M Total, 136M MRU, 146M MFU, 288K Anon, 1806K Header, 55M Other
[18:58:39] <Reinhilde> 190M Compressed, 308M Uncompressed, 1.62:1 Ratio, 92M Overhead
[18:58:41] <Reinhilde> i have beaucoup swap configured
[19:00:03] <LeftWing> can you pastebin/gist: mdb -ke ::memstat
[19:01:01] <Reinhilde> LeftWing: right now, or while running both weechat and cc1plus?
[19:01:10] <LeftWing> Well right now to start with
[19:01:24] <Reinhilde> _mjg: well then that dispels the myth of "no OOM killer" doesn't it? ;-)
[19:01:39] <LeftWing> We don't have an OOM killer
[19:02:09] <Reinhilde> LeftWing: not an intentional one, anyway.
[19:02:29] *** CaptainTobin <CaptainTobin!~tobin@c-68-38-10-41.hsd1.in.comcast.net> has quit IRC (Read error: Connection reset by peer)
[19:02:45] <Reinhilde> https://umbrellix.net/~ellenor/mdb-kememstat.txt
[19:03:11] <LeftWing> That is a bunch of kernel memory in use
[19:03:52] <Reinhilde> LeftWing: so the question now is: am I RAM-sufficient?
[19:04:11] <LeftWing> Reinhilde: modinfo | grep vioif
[19:04:41] <Reinhilde> my berkeley student always played nice with the same amount of RAM, but I did need to tie down the arc_max (which is a sysctl)
[19:05:39] <Reinhilde> LeftWing: 183 fffffffff7ede000 3988 247 1 vioif (VIRTIO network driver)
[19:05:54] *** CaptainTobin <CaptainTobin!~tobin@c-68-38-10-41.hsd1.in.comcast.net> has joined #illumos
[19:06:22] <Reinhilde> in case you typo'd
[19:06:22] <LeftWing> OK, can you get: mdb -ke ::kmastat
[19:07:50] <Reinhilde> just for your edification, I'm actually running all these commands through one-shot SSH from my freebsd machine so that I can upload them directly to my public_html file
[19:08:11] <LeftWing> OK
[19:08:40] <Reinhilde> https://umbrellix.net/~ellenor/themagazine.txt
[19:11:47] <Reinhilde> LeftWing: i got the output for you
[19:11:54] <LeftWing> I saw
[19:13:14] *** arnold_oree <arnold_oree!~arnoldore@ranoldoree.plus.com> has quit IRC (Ping timeout: 276 seconds)
[19:14:28] <Reinhilde> LeftWing: on a scale from 1 to Florida Man how screwed am I?
[19:14:47] <LeftWing> I mean, as with all things we just need to figure out what's going on
[19:19:28] <LeftWing> Reinhilde: Does the SIGBUS thing happen randomly, or every time you try to start weechat?
[19:20:04] <Reinhilde> LeftWing: randomly, and nearly never on startup
[19:20:17] <Reinhilde> the main correlation is heavy load, and usually some form of C compiler
[19:23:13] *** Riastradh <Riastradh!~riastradh@netbsd/developer/riastradh> has quit IRC (Ping timeout: 245 seconds)
[19:35:15] <LeftWing> What does "swap -sh" say?
[19:35:57] *** kev009 <kev009!~kev009@ip72-222-200-117.ph.ph.cox.net> has joined #illumos
[19:39:42] <Reinhilde> LeftWing: total: 246M allocated + 25.1M reserved = 271M used, 9.85G available
[19:41:10] <LeftWing> I would be interested to know what happens to those numbers as you approach another SIGBUS
[19:41:43] <LeftWing> My expectation is that you've run out of swap to back an allocation, basically
[19:42:21] <LeftWing> We avoid having an OOM killer by requiring a reservation in swap for any address space we give to a process basically
[19:42:46] <LeftWing> I would expect an ENOMEM SIGBUS to reflect asking for swap we don't have
[19:44:39] <LeftWing> You might want to put something like this into a background process, appending to a log file: while :; do printf "%s: %s\n" "$(date -u +%FT%TZ)" "$(swap -s)"; sleep 1; done
[19:49:12] <Reinhilde> LeftWing: would it not be "while sleep 1..." whatever?
[19:49:34] <LeftWing> If you want?
[19:49:55] <LeftWing> It's really up to you -- the point is to log the output over time so you can refer back to see if it's trending in the wrong direction leading up to the event
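A sketch of the same logger detached into the background so the record survives even if the interactive session is what gets killed (the log path is arbitrary):

    nohup sh -c 'while :; do
            printf "%s: %s\n" "$(date -u +%FT%TZ)" "$(swap -s)"
            sleep 1
    done' >> /var/tmp/swap.log 2>&1 &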
[19:52:38] <Reinhilde> ok for a second there it was going down by 10mb/s
[19:52:46] <Reinhilde> LeftWing: i put it into a foreground process with the risk that it will actually get killed by this bug
[19:56:29] <Reinhilde> right, really loading it down
[19:56:56] <Reinhilde> had a bus error in just a compiler this time
[20:00:02] <LeftWing> What did the swap situation look like at the time
[20:02:48] *** gitomat <gitomat!~nodebot@165.225.148.18> has quit IRC (Remote host closed the connection)
[20:02:58] *** gitomat <gitomat!~nodebot@165.225.148.18> has joined #illumos
[20:04:42] <Reinhilde> i don't exactly know
[20:05:09] <Reinhilde> at no time have I seen below 10gb ram+swap available
[20:05:47] <LeftWing> Your output earlier said "9.85G available"
[20:06:19] <Reinhilde> ok
[20:06:45] <Reinhilde> well you and i obviously have a different interpretation of the gigabyte
[20:08:34] <Reinhilde> i go by powers of 1000
[20:08:59] <_mjg> just a quick q, are you sure there are no errors from swap?
[20:09:03] <_mjg> as in, read errors
[20:09:19] <LeftWing> Do you see any error reports if you "fmdump -e | tail"
[20:10:41] <LeftWing> https://github.com/illumos/illumos-gate/blob/8f22c1dff63d6147c87d6bff65bcd3970ad4d368/usr/src/uts/i86pc/os/trap.c#L864-L933
[20:10:46] <LeftWing> Pretty sure you're in here, anyway
[20:11:05] <Reinhilde> word
[20:11:14] <LeftWing> L931 there is, I think, the only place where we'll put BUS_OBJERR in si_code
[20:11:43] <Reinhilde> $ ssh illuminated.umbrellix.net fmdump -e | tail -n 55
[20:11:44] <Reinhilde> TIME CLASS
[20:11:46] <LeftWing> The si_errno value comes from "res" which is the result of the call to pagefault() earlier
[20:11:58] <Reinhilde> nothing. possible wrong user?
[20:12:13] <LeftWing> I mean, I'd just log in normally, become root, and run it
[20:12:35] <LeftWing> If there's no output, that probably means there were no logged error reports
[20:14:25] <Reinhilde> even with pfexec, nothing
[20:16:18] <LeftWing> At any rate, you want to look for the place where we stick ENOMEM in the result that comes out of pagefault() -- probably using the FC_MAKE_ERR() macro
[20:16:47] <LeftWing> You'll probably need to use DTrace to find out exactly where it happens
[20:16:56] <Reinhilde> oh joy :/
[20:17:01] <LeftWing> Once we figure out where it comes from, we can figure out what to do about it
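One way to look for it without patching anything, as a hedged sketch: assuming fbt can instrument pagefault() on this kernel build, record every non-zero faultcode it returns, keyed by process and kernel stack (the output path is a placeholder):

    pfexec dtrace -n 'fbt::pagefault:return /arg1 != 0/ {
            @[execname, arg1, stack()] = count();
    }' -o /tmp/pagefault-fails.txt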
[20:22:02] <_mjg> well perhaps Reinhilde would be up to running a patched kernel where FC_MAKE_ERR is changed to dump a bt if errno == ENOMEM
[20:24:14] <Reinhilde> _mjg: it may be my sole option
[20:25:38] <LeftWing> I think it'd be better to have the macro generate an SDT probe
[20:25:56] <Reinhilde> LeftWing: eh?
[20:26:07] <_mjg> he means that you can then dtrace for it
[20:26:40] <_mjg> the patch should not be hard to write but i have no means to compile it
[20:30:00] <_mjg> hm. could it be it is guaranteed to be VOP_GETPAGE failing?
[20:32:11] <LeftWing> I don't know off hand
[20:34:17] <_mjg> given that the faulting address is on the heap i presume that's support for anonymous mappings
[20:34:56] <_mjg> one place returns the error as plopped in by VOP_GETPAGE, another explicitly sets ENOMEM after anon_private fails
[20:35:04] <_mjg> where one of the failure modes is again VOP_GETPAGE
[20:36:55] <_mjg> ye, there are 2 failure modes in anon_private. VOP_GETPAGE and bumping yourself against availrmem > pages_pp_maximum. but the latter is gated with the mapping not being writeable, so this can't be this one
[20:41:49] <_mjg> Reinhilde: can you dtrace -n 'fbt::fop_getpage:return /arg1 != 0/ { @[execname,stack()] = count(); }'
[20:41:49] <_mjg> Reinhilde: perhaps add: -o /tmp/getpage-stacks.txt
[20:41:49] <_mjg> Reinhilde: and just leave it until you run into the problem
[20:41:49] <_mjg> Reinhilde: if this returns an error which we can match against the above code, that will provide a step forward
[20:41:49] <_mjg> Reinhilde: .. without recompiling anything yet
[20:45:03] <_mjg> spelunking further -> swap_getpage->pvn_getpages->swap_getapage->VOP_PAGEIO is the likely candidate
[20:47:48] <Reinhilde> sorry to place so much load on youse
[20:48:39] <Reinhilde> _mjg: so run that dtrace until I run into the issue again?
[20:48:42] <_mjg> well i'm just trying to not do the actual work i need to do, so.. :)
[20:48:44] <_mjg> yes
[20:49:05] <_mjg> dtrace -n 'fbt::fop_getpage:return /arg1 != 0/ { @[execname,stack()] = count(); }' -o /tmp/getpage-stacks.txt
[20:49:11] <_mjg> ^C later
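If the suspicion about the fault path is right, the interesting hits are the ones failing with ENOMEM specifically; a hedged refinement of the same one-liner (12 is ENOMEM in illumos errno numbering):

    dtrace -n 'fbt::fop_getpage:return /arg1 == 12/ { @[execname, stack()] = count(); }' -o /tmp/getpage-enomem-stacks.txt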
[20:58:29] <Reinhilde> _mjg: oh dear, you're using /this/ as a distraction from your actual job?
[20:59:50] *** xenol <xenol!~xenol@2001:470:5bc7:1a6c::dead:beef> has quit IRC (Ping timeout: 276 seconds)
[21:00:43] <_mjg> Reinhilde: i'm kind of a nerd
[21:00:50] <Reinhilde> wow
[21:20:01] *** ed209 <ed209!~ed209@165.225.128.67> has quit IRC (Remote host closed the connection)
[21:20:07] *** ed209 <ed209!~ed209@165.225.128.67> has joined #illumos
[21:25:50] *** shruti <shruti!shruti@nat/redhat/x-nuokkzznfohgnopa> has quit IRC (Ping timeout: 276 seconds)
[21:27:41] *** shruti <shruti!shruti@nat/redhat/x-kzqqhyuhvcdoljyq> has joined #illumos
[21:52:17] *** Qatz <Qatz!~db@c-66-31-24-126.hsd1.nh.comcast.net> has quit IRC (Ping timeout: 240 seconds)
[22:11:58] *** clapont <clapont!~clapont@unaffiliated/clapont> has quit IRC (Ping timeout: 268 seconds)
[22:27:50] *** kev009 <kev009!~kev009@ip72-222-200-117.ph.ph.cox.net> has quit IRC (Quit: Konversation terminated!)
[22:28:02] *** heroux <heroux!sandroco@gateway/shell/insomnia247/x-sdiynjuymchimatg> has quit IRC (Ping timeout: 240 seconds)
[22:29:02] *** kev009 <kev009!~kev009@ip72-222-200-117.ph.ph.cox.net> has joined #illumos
[22:45:21] *** heroux <heroux!sandroco@gateway/shell/insomnia247/x-kbmbuhzvditvwhrj> has joined #illumos
[22:49:45] *** heroux <heroux!sandroco@gateway/shell/insomnia247/x-kbmbuhzvditvwhrj> has quit IRC (Read error: Connection reset by peer)
[22:50:11] *** heroux <heroux!sandroco@gateway/shell/insomnia247/x-eclakbeqfwpopiqe> has joined #illumos
[22:54:51] *** heroux <heroux!sandroco@gateway/shell/insomnia247/x-eclakbeqfwpopiqe> has quit IRC (Read error: Connection reset by peer)
[22:55:22] *** heroux <heroux!sandroco@gateway/shell/insomnia247/x-ozsftnzodrtpflns> has joined #illumos
[23:14:38] *** arnoldoree <arnoldoree!~arnoldore@ranoldoree.plus.com> has quit IRC (Quit: Leaving)
[23:25:33] *** heroux <heroux!sandroco@gateway/shell/insomnia247/x-ozsftnzodrtpflns> has quit IRC (Read error: Connection reset by peer)
[23:25:53] *** heroux <heroux!sandroco@gateway/shell/insomnia247/x-ycamayshofybvpjq> has joined #illumos
[23:27:47] *** heroux <heroux!sandroco@gateway/shell/insomnia247/x-ycamayshofybvpjq> has quit IRC (Read error: Connection reset by peer)
[23:30:31] *** andy_js <andy_js!~andy@97e29e78.skybroadband.com> has quit IRC (Quit: andy_js)
[23:30:54] *** heroux <heroux!sandroco@gateway/shell/insomnia247/x-aksdrnuadcquduyp> has joined #illumos
[23:31:53] *** heroux <heroux!sandroco@gateway/shell/insomnia247/x-aksdrnuadcquduyp> has quit IRC (Read error: Connection reset by peer)
[23:35:45] *** heroux <heroux!sandroco@gateway/shell/insomnia247/x-fsvqpqfufjvtaoks> has joined #illumos
[23:40:52] *** heroux <heroux!sandroco@gateway/shell/insomnia247/x-fsvqpqfufjvtaoks> has quit IRC (Read error: Connection reset by peer)
[23:41:17] *** heroux <heroux!sandroco@gateway/shell/insomnia247/x-yguoradkkpliecrr> has joined #illumos
[23:49:27] *** heroux <heroux!sandroco@gateway/shell/insomnia247/x-yguoradkkpliecrr> has quit IRC (Read error: Connection reset by peer)
[23:49:47] *** heroux <heroux!sandroco@gateway/shell/insomnia247/x-ehfozhggwqrdjwim> has joined #illumos
[23:50:39] *** heroux <heroux!sandroco@gateway/shell/insomnia247/x-ehfozhggwqrdjwim> has quit IRC (Read error: Connection reset by peer)
[23:55:20] *** heroux <heroux!sandroco@gateway/shell/insomnia247/x-ucgyyoiwshjqfxhi> has joined #illumos