   March 5, 2019

[00:01:23] *** amrfrsh <amrfrsh!~Thunderbi@pptp-194-95-1-171.pptp.padnet.de> has joined #illumos
[00:06:43] *** polishdub <polishdub!~polishdub@207.86.38.254> has quit IRC (Quit: leaving)
[00:10:58] *** alanc <alanc!~alanc@129.157.69.40> has quit IRC (Remote host closed the connection)
[00:11:26] *** alanc <alanc!~alanc@129.157.69.40> has joined #illumos
[00:30:53] *** amrfrsh <amrfrsh!~Thunderbi@pptp-194-95-1-171.pptp.padnet.de> has quit IRC (Ping timeout: 245 seconds)
[00:31:17] *** amrfrsh <amrfrsh!~Thunderbi@185.180.15.226> has joined #illumos
[00:32:32] *** idodeclare <idodeclare!~textual@209.58.135.106> has joined #illumos
[00:34:49] *** andy_js <andy_js!~andy@94.6.62.238> has quit IRC (Quit: andy_js)
[00:35:43] *** rsully <rsully!~rsully@unaffiliated/rsully> has joined #illumos
[01:11:46] *** rauz <rauz!~rauz@enieslobby.rauecker.at> has quit IRC (Ping timeout: 245 seconds)
[01:32:26] *** idodeclare <idodeclare!~textual@209.58.135.106> has quit IRC (Ping timeout: 255 seconds)
[01:41:41] *** idodeclare <idodeclare!~textual@209.58.135.106> has joined #illumos
[01:42:08] *** jcea <jcea!~Thunderbi@2001:41d0:1:8a82:7670:6e00:7670:6e00> has quit IRC (Remote host closed the connection)
[01:54:47] *** idodeclare <idodeclare!~textual@209.58.135.106> has quit IRC (Ping timeout: 240 seconds)
[02:05:07] *** idodeclare <idodeclare!~textual@209.58.135.106> has joined #illumos
[02:12:02] *** patdk-lap <patdk-lap!~patrickdk@208.94.190.191> has quit IRC (Remote host closed the connection)
[02:14:09] *** jcea <jcea!~Thunderbi@2001:41d0:1:8a82:7670:6e00:7670:6e00> has joined #illumos
[02:16:39] *** amrfrsh <amrfrsh!~Thunderbi@185.180.15.226> has quit IRC (Quit: amrfrsh)
[02:22:19] *** amrfrsh <amrfrsh!~Thunderbi@host-6cc88ec17931e033fd9c.ip6.padnet.de> has joined #illumos
[02:26:27] *** amrfrsh <amrfrsh!~Thunderbi@host-6cc88ec17931e033fd9c.ip6.padnet.de> has quit IRC (Ping timeout: 240 seconds)
[02:28:24] *** amrfrsh <amrfrsh!~Thunderbi@185.180.15.226> has joined #illumos
[02:28:48] *** v_a_b <v_a_b!~volker@p57A27D6B.dip0.t-ipconnect.de> has quit IRC (Ping timeout: 245 seconds)
[03:03:40] *** kkantor <kkantor!~kkantor@c-24-118-59-107.hsd1.mn.comcast.net> has quit IRC (Quit: WeeChat 2.1)
[03:12:48] <gitomat> [illumos-gate] 10475 fix zfs-test cli_root/zpool_get zpool_get_002_pos test case -- Jerry Jelinek <jerry.jelinek at joyent dot com>
[03:12:49] <gitomat> [illumos-gate] 10479 7290 broke slog_014_pos.ksh -- Andrew Stormont <astormont at racktopsystems dot com>
[03:12:50] <gitomat> [illumos-gate] 10478 setup and cleanup for pool checkpoint tests doesn't run -- Andrew Stormont <astormont at racktopsystems dot com>
[03:26:41] *** jcea <jcea!~Thunderbi@2001:41d0:1:8a82:7670:6e00:7670:6e00> has quit IRC (Quit: jcea)
[04:05:15] *** rsully <rsully!~rsully@unaffiliated/rsully> has quit IRC (Quit: rsully)
[04:25:27] *** amrfrsh <amrfrsh!~Thunderbi@185.180.15.226> has quit IRC (Ping timeout: 240 seconds)
[05:24:32] *** freakazoid0223 <freakazoid0223!~matt@pool-108-52-159-210.phlapa.fios.verizon.net> has quit IRC (Remote host closed the connection)
[05:26:18] *** hemi770 <hemi770!~hemi666@unaffiliated/hemi770> has quit IRC (Ping timeout: 245 seconds)
[05:28:14] *** hemi770 <hemi770!~hemi666@unaffiliated/hemi770> has joined #illumos
[07:09:19] <gitomat> [illumos-gate] 10496 uts: NULL pointer error in ip_ndp.c -- Toomas Soome <tsoome at me dot com>
[07:19:06] <gitomat> [illumos-gate] 10457 libstand: bzipfs.c cstyle cleanup -- Toomas Soome <tsoome at me dot com>
[07:26:14] *** awordnot <awordnot!~awordnot@c-73-210-60-203.hsd1.il.comcast.net> has quit IRC (Ping timeout: 259 seconds)
[07:32:07] <gitomat> [illumos-gate] 10461 loader: multiboot2.c cstyle cleanup -- Toomas Soome <tsoome at me dot com>
[07:37:33] <gitomat> [illumos-gate] 10463 loader: interp_forth.c cstyle cleanup -- Toomas Soome <tsoome at me dot com>
[07:43:00] *** awordnot <awordnot!~awordnot@c-73-210-60-203.hsd1.il.comcast.net> has joined #illumos
[07:43:46] <gitomat> [illumos-gate] 10465 loader: uboot cstyle cleanup -- Toomas Soome <tsoome at me dot com>
[08:04:28] *** tsoome <tsoome!~tsoome@148-52-235-80.sta.estpak.ee> has quit IRC (Quit: tsoome)
[08:06:03] *** vaxsquid <vaxsquid!~vaxsquid@2001:19f0:5401:a1:5400:ff:fe58:1c49> has quit IRC (Quit: Coffee is for closers!)
[08:07:10] *** vaxsquid <vaxsquid!~vaxsquid@2001:19f0:5401:a1:5400:ff:fe58:1c49> has joined #illumos
[08:08:23] *** vaxsquid <vaxsquid!~vaxsquid@2001:19f0:5401:a1:5400:ff:fe58:1c49> has quit IRC (Excess Flood)
[08:09:41] *** vaxsquid <vaxsquid!~vaxsquid@2001:19f0:5401:a1:5400:ff:fe58:1c49> has joined #illumos
[08:20:11] *** ptribble <ptribble!~ptribble@cpc92716-cmbg20-2-0-cust138.5-4.cable.virginm.net> has joined #illumos
[09:04:47] *** andy_js <andy_js!~andy@94.6.62.238> has joined #illumos
[09:16:05] *** tsoome <tsoome!~tsoome@148-52-235-80.sta.estpak.ee> has joined #illumos
[09:41:34] *** v_a_b <v_a_b!~volker@p57A27B1F.dip0.t-ipconnect.de> has joined #illumos
[09:53:31] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has joined #illumos
[09:55:11] *** igork <igork!~igork@178.162.50.254> has quit IRC (Read error: Connection reset by peer)
[09:55:31] *** igork <igork!~igork@91.204.56.74> has joined #illumos
[10:01:10] *** hemi770 <hemi770!~hemi666@unaffiliated/hemi770> has quit IRC (Ping timeout: 250 seconds)
[10:03:01] *** hemi770 <hemi770!~hemi666@unaffiliated/hemi770> has joined #illumos
[10:34:42] *** man_u <man_u!~manu@manu2.gandi.net> has joined #illumos
[11:21:01] *** amrfrsh <amrfrsh!~Thunderbi@185.180.15.226> has joined #illumos
[11:50:18] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has quit IRC (Read error: Connection reset by peer)
[11:50:42] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has joined #illumos
[12:05:47] *** andy_js <andy_js!~andy@94.6.62.238> has quit IRC (Ping timeout: 240 seconds)
[12:06:29] *** andy_js <andy_js!~andy@94.6.62.238> has joined #illumos
[12:56:24] <igork> is it only on my build env - https://paste.dilos.org/?5d91e99c4abfbdf3#MdDHy5Vyr4xPz5pCX5Xr3roDw4zDYQWFnN/invRQVEA=
[12:56:33] <igork> is the smatch build not clean?
[12:57:18] <igork> the build didn't fail, but warnings are present in my full build log
[13:05:25] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has quit IRC (Read error: Connection reset by peer)
[13:05:38] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has joined #illumos
[13:09:37] <jlevon> igork: we're not yet completely clean, that's right.
[13:09:45] <jlevon> igork: not far off though
[13:10:01] <igork> ok, thanks for confirmation
[13:57:52] *** tsoome <tsoome!~tsoome@148-52-235-80.sta.estpak.ee> has quit IRC (Ping timeout: 246 seconds)
[13:59:58] *** jcea <jcea!~Thunderbi@2001:41d0:1:8a82:7670:6e00:7670:6e00> has joined #illumos
[14:04:21] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has quit IRC (Read error: Connection reset by peer)
[14:04:32] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has joined #illumos
[14:04:34] *** jcea <jcea!~Thunderbi@2001:41d0:1:8a82:7670:6e00:7670:6e00> has quit IRC (Remote host closed the connection)
[14:10:37] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has quit IRC (Read error: Connection reset by peer)
[14:11:08] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has joined #illumos
[14:11:29] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has quit IRC (Read error: Connection reset by peer)
[14:11:46] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has joined #illumos
[14:14:34] *** jcea <jcea!~Thunderbi@2001:41d0:1:8a82:7670:6e00:7670:6e00> has joined #illumos
[14:18:11] *** freakazoid0223 <freakazoid0223!~matt@pool-108-52-159-210.phlapa.fios.verizon.net> has joined #illumos
[14:24:37] *** cantstanya <cantstanya!~chatting@gateway/tor-sasl/cantstanya> has quit IRC (Remote host closed the connection)
[14:30:16] *** cantstanya <cantstanya!~chatting@gateway/tor-sasl/cantstanya> has joined #illumos
[14:53:12] *** lblume <lblume!~lblume@greenviolet/laoyijiehe/lblume> has quit IRC (Ping timeout: 258 seconds)
[14:58:18] *** idodeclare <idodeclare!~textual@209.58.135.106> has quit IRC (Quit: Textual IRC Client: www.textualapp.com)
[14:58:56] *** idodeclare <idodeclare!~textual@209.58.135.106> has joined #illumos
[15:10:03] *** lblume <lblume!~lblume@greenviolet/laoyijiehe/lblume> has joined #illumos
[15:15:53] *** Kurlon <Kurlon!~Kurlon@98.13.72.207> has quit IRC (Ping timeout: 245 seconds)
[15:16:47] *** rsully <rsully!~rsully@unaffiliated/rsully> has joined #illumos
[15:29:53] *** idodeclare <idodeclare!~textual@209.58.135.106> has quit IRC (Ping timeout: 255 seconds)
[15:29:58] *** rsully <rsully!~rsully@unaffiliated/rsully> has quit IRC (Read error: Connection reset by peer)
[15:49:01] *** gh34 <gh34!~textual@rrcs-162-155-144-114.central.biz.rr.com> has joined #illumos
[15:54:53] *** Kurlon <Kurlon!~Kurlon@bidd-pub-03.gwi.net> has joined #illumos
[16:01:00] *** tomoyat1 <tomoyat1!~tomoyat1@133.130.119.65> has quit IRC (Quit: ZNC 1.6.5 - http://znc.in)
[16:01:30] *** tomoyat1 <tomoyat1!~tomoyat1@tomoyat1.com> has joined #illumos
[16:14:02] <jimklimov> hi all, quick question: when the system is telling me it is spending a lot of CPU time in "kernel" (as opposed to "user"), how can I quickly track what it is doing and if it really has an impact on performance or not? :)
[16:14:39] *** tsoome <tsoome!~tsoome@148-52-235-80.sta.estpak.ee> has joined #illumos
[16:15:01] <jimklimov> my primary suspects in this use-case would be context switching (hundreds of processes are running, though mostly waiting for network I/O, on few cores) or ZFS (for when they do have a bit of work to do)
[16:15:39] *** rsully <rsully!~rsully@unaffiliated/rsully> has joined #illumos
[16:16:24] <jimklimov> when this burst of spawned processes ends, the CPU load decreases, both for userland and kernel
[16:17:57] <jimklimov> but even a `find` in a large directory tree brings kernel time to tens of percent, sometimes 70-90% (with the rest being `find` itself) even though AFAIK all the metadata is cached and `zpool iostat -v 1` shows no reads from disk
[16:18:49] <jimklimov> OS is a relatively recent OmniOS (r151029 updated in November)
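
(A minimal sketch of how one might answer jimklimov's question with DTrace on illumos; the sampling rate and duration here are arbitrary choices, not something from the discussion:)

    # Sample on-CPU kernel stacks at ~997 Hz for 10 seconds; the profile
    # probe's arg0 is the kernel PC, non-zero only when the CPU was in
    # kernel mode, so this aggregates only kernel-time stacks.
    dtrace -n 'profile-997 /arg0/ { @[stack()] = count(); } tick-10s { exit(0); }'
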
[16:22:05] *** jimklimov1 <jimklimov1!~jimklimov@31.7.243.238> has joined #illumos
[16:23:27] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has quit IRC (Ping timeout: 240 seconds)
[16:25:14] *** TheBloke <TheBloke!~TomJ@unaffiliated/tomj> has quit IRC (Ping timeout: 255 seconds)
[16:26:17] <jimklimov1> also, with an SSD L2ARC and ZIL in the box, is it both safe and fast to have "sync=standard" on the datasets used over NFS? and how can I make sure the NFS share (for many small files) is used optimally? :-)
[16:26:30] *** TheBloke <TheBloke!~TomJ@unaffiliated/tomj> has joined #illumos
[16:26:58] <jimklimov1> it serves as a shared ccache index and git reference repo for many builders in a farm, and it used to speed things up a lot rather than being a bottleneck... :\
[16:27:44] <toasterson> I would have set sync to disabled; for random access it does not do anything AFAIK.
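
(For reference, sync is a per-dataset ZFS property; a minimal sketch of inspecting and changing it, with a placeholder dataset name. sync=disabled trades away the last few seconds of acknowledged writes on power loss in exchange for speed:)

    zfs get sync tank/ccache            # one of: standard, always, disabled
    zfs set sync=disabled tank/ccache   # sync writes no longer wait on the
                                        # ZIL/slog; unsafe for data that must
                                        # survive a crash
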
[16:28:06] <toasterson> for NFS also keep in mind packet sizes and retransmits.
[16:28:14] <jimklimov1> now, I am not sure why builds are slow so am grasping at straws :)
[16:28:20] <toasterson> did something change on the switches?
[16:28:23] *** Kurlon <Kurlon!~Kurlon@bidd-pub-03.gwi.net> has quit IRC (Ping timeout: 245 seconds)
[16:28:32] <jimklimov1> they are IT-managed so not sure
[16:28:42] <jimklimov1> assuming default average settings
[16:28:55] <toasterson> playing around with jumbo frames can sometimes help.
[16:31:29] *** Kurlon <Kurlon!~Kurlon@bidd-pub-03.gwi.net> has joined #illumos
[16:34:05] <toasterson> although a local find should not have that problem.
[16:35:40] <jimklimov1> ok, for posterity... I think the culprit is with builds interrupted mid-way, and ccache lock files left unattended
[16:36:12] *** polishdub <polishdub!~polishdub@207.86.38.254> has joined #illumos
[16:36:21] <toasterson> ah, that sounds nasty.
[16:36:29] <jimklimov1> so each call to compile a file loops trying to get the lock to look in the index, then I guess abandons the idea and calls the real compiler instead, after several (tens of?) seconds of trying
[16:36:42] <toasterson> but I just remembered our top shows context switches :)
[16:45:04] *** snuff-work <snuff-work!~snuff-wor@202-161-112-134.tpgi.com.au> has quit IRC (Read error: Connection reset by peer)
[16:54:45] *** mnowak_ <mnowak_!mnowak_@nat/suse/x-yhshbsldcnregxnd> has quit IRC (Quit: Leaving)
[16:55:36] *** ptribble <ptribble!~ptribble@cpc92716-cmbg20-2-0-cust138.5-4.cable.virginm.net> has quit IRC (Quit: Leaving)
[16:56:49] <jimklimov1> so I went into the ccache directory to look for locks and correlate with processes still alive on a suspect build host
[16:57:19] <jimklimov1> and a `find . -name '*.lock' -ls | time head -100` on the build host (over NFS) took 50 sec while on the OmniOS server itself it was 20-23 sec
[16:58:19] *** mnowak_ <mnowak_!mnowak_@nat/suse/x-zbaelsuxiplcdpbz> has joined #illumos
[16:58:45] <jimklimov1> locally sometimes even less, varying 5-13sec in last attempts
[16:59:04] <jimklimov1> probably related to contention from other jobs regularly spawning and disappearing
[17:00:07] <rmustacc> In general, I'd start with using mpstat to get a sense of the general system utilization. Then for processes in question, I'd look at the microstates from prstat -mL.
[17:00:33] <rmustacc> Just in terms of how to answer the general where are things being blocked.
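
(Concretely, the two commands rmustacc suggests, with an arbitrary one-second interval:)

    mpstat 1        # per-CPU usr/sys/idle plus xcal, csw, smtx (mutex spins)
    prstat -mL 1    # per-thread microstates: high SYS means time in syscalls,
                    # high LCK/LAT points at lock or CPU contention
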
[17:04:34] <jimklimov1> so when I'm find'ing over NFS, the system's two cores total about 170k syscalls per sec with 90-93% system time
[17:04:45] <jimklimov1> CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
[17:04:45] <jimklimov1> 0 0 0 0 3341 2642 2354 1090 297 23644 160 72258 6 94 0 0
[17:04:46] <jimklimov1> 1 0 0 0 2748 104 5923 2587 291 34866 235 99262 9 91 0 0
[17:06:46] <jimklimov1> the nfsd process has 32 LWPs, 2 of which are apparently some systemic stuff (nfsd/1 and nfsd/2) and others share the load somehow, with their SYS column in prstat getting to 0.4-0.5 each while the NFS load (find) is on
[17:07:12] <jimklimov1> and their VCX went up to 200-something from 20-something when not doing anything
[17:07:33] <jimklimov1> whatever that meant :-D
[17:10:14] *** myrkraverk <myrkraverk!~chatzilla@unaffiliated/myrkraverk> has quit IRC (Ping timeout: 268 seconds)
[17:35:06] *** wilbury <wilbury!~otis@real.wilbury.sk> has joined #illumos
[17:38:29] <jimklimov1> tried to simplify the work for NFSD by re-sharing with "anon=399,sec=none,aclok,nosuid" instead of the earlier "anon=399,sec=sys" (399 being the build farm's ubiquitous account) but the timings come out the same
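
(A sketch of the re-share he describes, assuming the data sits on a ZFS dataset; the path and dataset names here are placeholders:)

    # one-off, via share(1M):
    share -F nfs -o anon=399,sec=none,aclok,nosuid /export/ccache
    # or persistently, as a dataset property:
    zfs set sharenfs='anon=399,sec=none,aclok,nosuid' tank/export/ccache
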
[17:50:02] *** kkantor <kkantor!~kkantor@c-24-118-59-107.hsd1.mn.comcast.net> has joined #illumos
[17:51:46] *** ptribble <ptribble!~ptribble@cpc92716-cmbg20-2-0-cust138.5-4.cable.virginm.net> has joined #illumos
[17:52:05] *** Kurlon <Kurlon!~Kurlon@bidd-pub-03.gwi.net> has quit IRC (Ping timeout: 255 seconds)
[18:02:53] <jimklimov1> nice find while looking for bits: https://lists.samba.org/archive/ccache/2016q1/001394.html "...you can use "memcached only", avoiding the disk access... It allows you to share your cache between different machines, without having to use a shared filesystem like NFS to do it."
[18:03:38] *** myrkraverk <myrkraverk!~chatzilla@unaffiliated/myrkraverk> has joined #illumos
[18:03:50] <jimklimov1> that sounds like a good option to research for our farm, with the NFS server running a copy of the memcached to serve the reads
[18:04:17] <jimklimov1> I'd back the writes by same NFS though, to keep the object cache persistent (workers are recyclable)
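
(The memcached backend was an out-of-tree ccache patch at the time of that thread, so the environment variables below come from that patch series rather than stock ccache and should be treated as hypothetical; stock ccache only knows CCACHE_DIR:)

    # builders read objects through memcached (variable name per the 2016
    # patch series, not stock ccache; server name is a placeholder):
    export CCACHE_MEMCACHED_CONF=--SERVER=nfs-server:11211
    # a plain shared-NFS ccache, by contrast, is just:
    export CCACHE_DIR=/net/nfs-server/export/ccache
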
[18:22:06] *** jimklimov1 <jimklimov1!~jimklimov@31.7.243.238> has quit IRC (Ping timeout: 250 seconds)
[18:25:37] *** lelf <lelf!~user@178.176.160.108> has joined #illumos
[18:25:49] *** Kurlon <Kurlon!~Kurlon@bidd-pub-03.gwi.net> has joined #illumos
[18:34:55] *** neirac <neirac!~cneir@190.162.109.53> has joined #illumos
[18:36:43] <neirac> rmustacc if you have time, could you take a look at https://illumos.org/rb/r/1478/? I added the data model call to libproc, along with more fixes.
[18:45:08] *** man_u <man_u!~manu@manu2.gandi.net> has quit IRC (Quit: man_u)
[18:49:30] <rmustacc> neirac: I will try to get to that today. Sorry, been taking care of some personal matters lately.
[18:50:39] <neirac> rmustacc don't worry, thanks for helping me out on this one.
[18:51:07] <rmustacc> No, thank you for actually doing the work.
[18:55:02] *** _Tenchi_ <_Tenchi_!~phil@207-255-80-203-dhcp.aoo.pa.atlanticbb.net> has quit IRC (Ping timeout: 245 seconds)
[18:59:13] <neirac> rmustacc thanks!
[19:12:44] *** mahrens <mahrens!~mahrens@openzfs/founder> has joined #illumos
[19:19:36] *** amrfrsh <amrfrsh!~Thunderbi@185.180.15.226> has quit IRC (Read error: Connection reset by peer)
[19:22:15] *** amrfrsh <amrfrsh!~Thunderbi@pptp-194-95-1-171.pptp.padnet.de> has joined #illumos
[19:30:38] *** TheBloke <TheBloke!~TomJ@unaffiliated/tomj> has quit IRC (Ping timeout: 255 seconds)
[19:31:40] *** TheBloke <TheBloke!~TomJ@unaffiliated/tomj> has joined #illumos
[19:38:48] *** _Tenchi_ <_Tenchi_!~phil@207-255-80-203-dhcp.aoo.pa.atlanticbb.net> has joined #illumos
[19:46:43] *** neirac <neirac!~cneir@190.162.109.53> has quit IRC (Ping timeout: 245 seconds)
[19:52:47] *** Kurlon <Kurlon!~Kurlon@bidd-pub-03.gwi.net> has quit IRC (Ping timeout: 240 seconds)
[19:55:36] *** amrmesh <amrmesh!~Thunderbi@185.180.15.226> has joined #illumos
[19:57:00] *** amrfrsh <amrfrsh!~Thunderbi@pptp-194-95-1-171.pptp.padnet.de> has quit IRC (Ping timeout: 250 seconds)
[19:57:00] *** amrmesh is now known as amrfrsh
[20:06:11] *** amrfrsh <amrfrsh!~Thunderbi@185.180.15.226> has quit IRC (Ping timeout: 255 seconds)
[20:21:14] *** Kurlon <Kurlon!~Kurlon@bidd-pub-03.gwi.net> has joined #illumos
[21:06:42] *** Riastradh <Riastradh!~riastradh@netbsd/developer/riastradh> has joined #illumos
[21:26:20] *** Riastradh <Riastradh!~riastradh@netbsd/developer/riastradh> has quit IRC (Remote host closed the connection)
[21:31:27] *** Kurlon <Kurlon!~Kurlon@bidd-pub-03.gwi.net> has quit IRC (Ping timeout: 240 seconds)
[21:31:29] *** Riastradh <Riastradh!~riastradh@netbsd/developer/riastradh> has joined #illumos
[21:32:51] *** Kurlon <Kurlon!~Kurlon@bidd-pub-03.gwi.net> has joined #illumos
[21:48:09] <Obscurax> Hi, could anyone familiar with ipf have a look at my rules? I want to be sure I understood the manual correctly. https://paste.ngx.cc/r/64b38e0a0d3b88bc
[21:51:58] *** Qatz <Qatz!~DB@2601:187:8400:5::83c> has quit IRC (Ping timeout: 268 seconds)
[21:58:58] *** Qatz <Qatz!~DB@2601:187:8400:5::83c> has joined #illumos
[22:11:11] *** jhot[m] <jhot[m]!jhotmatrix@gateway/shell/matrix.org/x-bxijyohtxiojulul> has joined #illumos
[22:29:28] <jlevon> tsoome: so mnode_range_setup() gets really sore at us for not having a range starting at pfn 0 it seems
[22:30:47] <LeftWing> Obscurax: Doesn't seem unreasonable. I believe you can use "proto tcp/udp", incidentally.
[22:32:00] <LeftWing> I'm also not 100% sure that those NFS ports are enough. They _probably_ are for NFSv4, at least, but might not be for NFSv3? I don't recall how you're expected to get the port mapper (rpc/bind) to interact with ipfilter.
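
(For reference, hedged ipf rules of the sort being discussed; the interface name is a placeholder, and the rpcbind rule only matters for NFSv3's sideband protocols:)

    # NFSv4: everything rides on TCP port 2049
    pass in quick on e1000g0 proto tcp from any to any port = 2049 flags S keep state
    # NFSv3 additionally needs rpcbind (plus the mountd/lockd ports it hands out)
    pass in quick on e1000g0 proto tcp/udp from any to any port = 111 keep state
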
[22:35:43] <jbk> doesn't sharing w/ public (among other things) allow access just using port 2049?
[22:36:00] <jbk> ISTR doing that in the past on solaris boxes for NFS access through a firewall
[22:36:52] <jlevon> tsoome: the good news being that there's probably nothing wrong with allocating the relocator pages out of loader heap
[22:37:26] <rmustacc> jlevon: Are we putting data we care about in pfn 0?
[22:37:36] <LeftWing> jbk: I believe being firewall friendly was a goal of NFSv4, where everything is on 2049 basically. Can't recall if that's also the case for NFSv3 with all the sideband protocols.
[22:37:41] <jlevon> pfn 0 is always avoided I believe
[22:37:47] *** amrfrsh <amrfrsh!~Thunderbi@185.180.15.226> has joined #illumos
[22:37:50] <rmustacc> Right, it needs to be.
[22:37:52] <jlevon> but that code just expects to see a range there
[22:37:56] <rmustacc> Gotcha.
[22:37:57] <jlevon> that we later avoid
[22:38:02] <rmustacc> OK, just wanted to make sure we weren't starting to use it.
[22:38:05] <jlevon> sure
[22:38:12] <rmustacc> We need to avoid it not just for firmware but l1tf
[22:38:17] <jlevon> yeah
[22:38:40] <jlevon> this is all because this system decided to just have gaps in the efi mem map instead of labelling it
[22:39:21] <rmustacc> Ugh.
[22:49:17] <Obscurax> Leftwing, thanks for the info. I just tried to mount the nfs share and upload some files and it seems to work.
[22:49:25] <LeftWing> Obscurax: Great!
[22:49:34] <LeftWing> If you're using NFSv4 I think that should all be fine
[22:49:58] <Obscurax> Any way to verify the version I'm using?
[22:50:41] <Obscurax> my Google fu is not helping
[22:52:42] <LeftWing> Well, what client are you using
[22:53:22] <Obscurax> ESXi 6.7
[22:57:38] <LeftWing> I'm not really sure, then, sorry
[22:58:03] <rmustacc> Can probably confirm with the server, right?
[22:58:30] <LeftWing> Not sure...
[22:58:39] *** slumos <slumos!slumos@gateway/shell/firrre/x-anzjlialpsqtuhwa> has joined #illumos
[22:59:00] *** Kurlon <Kurlon!~Kurlon@bidd-pub-03.gwi.net> has quit IRC (Ping timeout: 250 seconds)
[22:59:09] <LeftWing> I mean it's all software, there will be _some_ way to tell. You could sniff some packets probably.
[22:59:28] <LeftWing> I'm not aware of a succinct interface that can tell you what clients have things mounted for NFSv4
[22:59:43] <LeftWing> For v3 I think some of the showmount(1M) stuff would help
[22:59:54] <LeftWing> But that probably depends on one of the sideband protocols
[23:00:02] *** mahrens <mahrens!~mahrens@openzfs/founder> has quit IRC (Quit: mahrens)
[23:00:59] <LeftWing> I guess "nfsstat -s" will give you counts for server-side operations that have been broken down by protocol
[23:01:02] <LeftWing> (i.e., 3 vs 4)
[23:01:30] <LeftWing> So if you've got zero values for v2 & v3, but non-zero values for v4, that would be a good indicator
[23:03:59] <Obscurax> Server NFSv3 it is.
[23:06:22] <Obscurax> That's the bad news, the good news is that it seems to work tho
[23:10:35] *** scarcry <scarcry!~scarcry@2001:980:93d7:1:80c9:7fff:fe0f:aaf8> has joined #illumos
[23:11:45] *** phox <phox!~phox@c-24-17-60-211.hsd1.wa.comcast.net> has joined #illumos
[23:13:14] *** gh34 <gh34!~textual@rrcs-162-155-144-114.central.biz.rr.com> has quit IRC (Quit: Textual IRC Client: www.textualapp.com)
[23:22:58] *** snuff-work <snuff-work!~snuff-wor@202.161.112.134> has joined #illumos
[23:24:01] <richlowe> LeftWing: so, is wsdiff working for you at all currently?
[23:29:38] <LeftWing> Sure
[23:29:44] <LeftWing> in the C locale
[23:30:13] <LeftWing> Do you want me to run something?
[23:31:15] <LeftWing> If you have a file that will apply with "git am" I can easily do a build of master + that, and then run wsdiff between that and another stock build (this is all jenkins experimentation)
[23:31:20] <richlowe> just wondering if even in C it's ever going to finish.
[23:31:26] <LeftWing> Oh it takes a while
[23:31:30] <LeftWing> I feel like it got a bit slower
[23:31:34] <richlowe> it didn't used to take this much of a while
[23:31:35] <LeftWing> with the new python business
[23:31:57] <richlowe> though I guess having to chew through all the i18n stuff isn't helping.
[23:32:11] <LeftWing> It takes 1-2 hours in my VM
[23:32:15] <LeftWing> looking at the two jenkins runs I have
[23:39:01] *** Kurlon <Kurlon!~Kurlon@98.13.72.207> has joined #illumos
[23:45:52] <richlowe> it does what?
[23:45:57] <richlowe> that's not "a bit slower"
[23:46:21] <richlowe> if that's true, something needs backed out
[23:46:32] <richlowe> if someone other than me can boot old bits and confirm I'm right.
[23:46:38] <rmustacc> I'm about to run another one later today. I can provide another data point.
[23:46:54] <rmustacc> Something you'd like me to compare/contrast richlowe?
[23:46:59] *** ptribble <ptribble!~ptribble@cpc92716-cmbg20-2-0-cust138.5-4.cable.virginm.net> has quit IRC (Quit: Leaving)
[23:47:04] <richlowe> wsdiff now, v. wsdiff... any time before now.
[23:47:11] <rmustacc> OK. I'll time that.
[23:47:30] <richlowe> I'm not sure exactly when it was last changed, or whether it'd necessarily be fair to assume it was a change to wsdiff that's hurting it.
[23:48:12] <LeftWing> I definitely feel like it used to be quicker
[23:48:33] <LeftWing> I was routinely doing it along with my paranoia builds of other people's patches
[23:48:33] <richlowe> right, if it was like, 2 minutes slower that'd be one thing, but I feel like it's closer to 10x slower
[23:48:40] <LeftWing> I think it used to take under 20 minutes
[23:48:47] <LeftWing> Because I used to sit and watch
[23:48:58] <richlowe> I used to do it on a hunch and watch the output sent to /dev/stdout
[23:49:01] <rmustacc> I'll at least compare and see if it's due to an older version of the tools as I can change that pretty easily in this env.
[23:49:11] <richlowe> like, -vVr /dev/stdout...
[23:49:17] <richlowe> which I certainly wouldn't bother doing anymore
[23:49:30] <rmustacc> Though I usually only run it without the -v options, which maybe I shouldn't be?
[23:50:07] <richlowe> they're handy for what I tend to do with it
[23:50:15] <richlowe> if you just want to know what changed, and not how, they're unnecessary.
[23:50:23] <rmustacc> Makes sense.
[23:53:27] <LeftWing> I have the jenkins thing doing.. LANG=C LC_ALL=C time /opt/onbld/bin/wsdiff -v -V -r "$d/wsdiff.txt" "$WORKSPACE/previous/proto/root_i386" "$WORKSPACE/current/proto/root_i386"
[23:53:38] <LeftWing> And then keeping the "wsdiff.txt" in the build artefacts
[23:54:12] <richlowe> god, I love seeing artefact spelled properly.
[23:54:18] <LeftWing> Me too
[23:54:28] <LeftWing> http://hound.writev.io/?q=artefact&i=nope&files=&repos= :P
[23:54:55] <rmustacc> Does an artificer work on artefacts?
[23:55:05] <rmustacc> Or is that an arteficer?
[23:55:07] <LeftWing> If they so desire it, Robert
[23:55:14] <jlevon> I can never remember the difference :(
[23:55:19] <LeftWing> There isn't one, really
[23:55:27] <LeftWing> As I recall it's two spellings of the same word
[23:55:33] <richlowe> "correctly" and "wrong"
[23:55:41] <LeftWing> For which people have perhaps retroactively assigned different connotations
[23:55:43] <andyf> I thought artefact was the UK English spelling
[23:55:46] <jlevon> hmm
[23:55:48] <LeftWing> s/UK//
[23:56:00] <jlevon> I thought one was something like side-effect, the other "product"
[23:56:14] <LeftWing> https://twitter.com/queen_uk/status/476630784108134400 -- best tweet
[23:56:53] <LeftWing> https://grammarist.com/spelling/artefact-artifact/
[23:57:23] <jlevon> huh
[23:58:00] <LeftWing> I believe the difference is, as we so often find, merely one of spelling
[23:58:24] <LeftWing> Divided, as we are, by a common language
[23:58:44] <rmustacc> To be fair, it was being done before we split off.
[23:58:57] <rmustacc> At least, based on the OED's citations.