#illumos - March 4, 2020

[00:01:32] *** andy_js <andy_js!~andy@51.146.99.40> has quit IRC (Quit: andy_js)
[00:15:39] *** kahiru <kahiru!~quassel@ip-89-102-207-18.net.upcbroadband.cz> has quit IRC (Ping timeout: 265 seconds)
[00:17:09] *** kahiru <kahiru!~quassel@ip-89-102-207-18.net.upcbroadband.cz> has joined #illumos
[00:19:47] *** gh34 <gh34!~textual@cpe-184-58-181-106.wi.res.rr.com> has quit IRC (Quit: Textual IRC Client: www.textualapp.com)
[00:21:16] *** ptribble <ptribble!~ptribble@cpc92716-cmbg20-2-0-cust138.5-4.cable.virginm.net> has quit IRC (Quit: Leaving)
[00:24:40] *** tru_tru_ <tru_tru_!~tru@157.99.90.140> has quit IRC (Quit: leaving)
[00:24:59] *** tru_tru_ <tru_tru_!~tru@157.99.90.140> has joined #illumos
[00:42:03] *** kahiru <kahiru!~quassel@ip-89-102-207-18.net.upcbroadband.cz> has quit IRC (Ping timeout: 260 seconds)
[00:43:15] *** kahiru <kahiru!~quassel@ip-89-102-207-18.net.upcbroadband.cz> has joined #illumos
[00:51:51] *** tru_tru_ <tru_tru_!~tru@157.99.90.140> has quit IRC (Quit: leaving)
[01:28:00] <gitomat> [illumos-gate] 12271 "name" member of "struct option" should be const -- Brian Bennett <brian.bennett at joyent dot com>
[01:30:07] *** hemi770 <hemi770!~hemi666@unaffiliated/hemi770> has quit IRC (Ping timeout: 260 seconds)
[01:32:02] *** hemi770 <hemi770!~hemi666@unaffiliated/hemi770> has joined #illumos
[01:35:36] *** ldepandis <ldepandis!~ldepandis@unaffiliated/ldepandis> has quit IRC (Quit: My MacBook has gone to sleep. ZZZzzz…)
[02:52:11] <gitomat> [illumos-gate] 11493 aggr needs support for multiple pseudo rx groups -- Ryan Zezeski <ryan at zinascii dot com>
[03:13:59] *** hemi770 <hemi770!~hemi666@unaffiliated/hemi770> has quit IRC (Ping timeout: 258 seconds)
[03:14:17] *** hemi770 <hemi770!~hemi666@unaffiliated/hemi770> has joined #illumos
[03:20:03] *** jessfraz_ <jessfraz_!~jessfraz@unaffiliated/jessfraz> has quit IRC (Remote host closed the connection)
[03:21:02] *** jessfraz_ <jessfraz_!~jessfraz@unaffiliated/jessfraz> has joined #illumos
[03:52:24] *** DaQatz is now known as Qatz
[04:13:42] *** jcea <jcea!~Thunderbi@2001:bc8:2ecd:caed:7670:6e00:7670:6e00> has quit IRC (Quit: jcea)
[05:04:10] *** MerlinDMC <MerlinDMC!~merlin@163.172.186.44> has quit IRC (Quit: ZNC 1.7.5+deb3 - https://znc.in)
[05:05:26] *** hemi770 <hemi770!~hemi666@unaffiliated/hemi770> has quit IRC (Ping timeout: 256 seconds)
[05:05:27] *** bahamas10 <bahamas10!~dave@cpe-72-231-182-75.nycap.res.rr.com> has quit IRC (Ping timeout: 256 seconds)
[05:05:36] *** hemi770 <hemi770!~hemi666@unaffiliated/hemi770> has joined #illumos
[05:05:42] *** bahamas10 <bahamas10!~dave@cpe-72-231-182-75.nycap.res.rr.com> has joined #illumos
[05:05:52] *** MerlinDMC <MerlinDMC!~merlin@163.172.186.44> has joined #illumos
[05:31:15] *** jemersha- <jemersha-!~jemershaw@c-68-83-252-28.hsd1.pa.comcast.net> has joined #illumos
[05:31:48] *** jemersha- <jemersha-!~jemershaw@c-68-83-252-28.hsd1.pa.comcast.net> has quit IRC (Client Quit)
[05:33:39] *** jemersha- <jemersha-!~jemershaw@c-68-83-252-28.hsd1.pa.comcast.net> has joined #illumos
[05:36:06] *** jemersha- <jemersha-!~jemershaw@c-68-83-252-28.hsd1.pa.comcast.net> has quit IRC (Remote host closed the connection)
[07:25:13] <gitomat> [illumos-gate] 12325 ahci: variable may be used uninitialized -- Toomas Soome <tsoome at me dot com>
[07:52:03] *** BOKALDO <BOKALDO!~BOKALDO@81.198.159.87> has joined #illumos
[08:02:39] *** wonko <wonko!~quassel@75.52.174.34> has quit IRC (Remote host closed the connection)
[08:06:37] *** tsoome <tsoome!~tsoome@148-52-235-80.sta.estpak.ee> has quit IRC (Quit: This computer has gone to sleep)
[08:18:35] *** ptribble <ptribble!~ptribble@cpc92716-cmbg20-2-0-cust138.5-4.cable.virginm.net> has joined #illumos
[08:19:39] *** neuroserve <neuroserve!~toens@195.71.113.124> has joined #illumos
[08:22:35] *** neuroserve <neuroserve!~toens@195.71.113.124> has quit IRC (Remote host closed the connection)
[08:22:50] *** neuroserve <neuroserve!~toens@195.71.113.124> has joined #illumos
[08:27:15] *** kev009 <kev009!~kev009@ip72-222-200-117.ph.ph.cox.net> has quit IRC (Remote host closed the connection)
[08:29:21] <gitomat> [illumos-gate] 4508 flowadm not working as documented, or documentation incorrect -- Peter Tribble <peter.tribble at gmail dot com>
[08:32:13] *** hubert3 <hubert3!~hubert@121-200-20-108.79c814.syd.nbn.aussiebb.net> has joined #illumos
[08:33:15] <hubert3> hi, having an issue where zfs list / df shows 0 available, even though zpool list shows hundreds of GB free
[08:33:22] *** kev009 <kev009!~kev009@ip72-222-200-117.ph.ph.cox.net> has joined #illumos
[08:33:42] <hubert3> I've just reinstalled OI and am mounting zfs filesystems created under the old install
[08:33:55] <hubert3> I feel like I've solved this before but can't remember how
[08:36:06] <gitomat> [illumos-gate] 12342 bandwidth display badly formatted in flowstat, dlstat, and dladm -- Peter Tribble <peter.tribble at gmail dot com>
[08:47:06] *** wiedi <wiedi!~wiedi@ip5b4096a6.dynamic.kabel-deutschland.de> has quit IRC (Ping timeout: 258 seconds)
[08:48:20] *** tsoome <tsoome!~tsoome@148-52-235-80.sta.estpak.ee> has joined #illumos
[08:57:32] *** Guest98419 <Guest98419!~taylor@c-73-63-29-240.hsd1.ut.comcast.net> has joined #illumos
[08:57:46] *** tsoome <tsoome!~tsoome@148-52-235-80.sta.estpak.ee> has quit IRC (Ping timeout: 256 seconds)
[08:59:03] <sensille> hubert3: quota?
[08:59:16] <sensille> or reservation
[08:59:31] *** tsoome <tsoome!~tsoome@35-96-157-37.dyn.estpak.ee> has joined #illumos
[09:06:05] *** Guest98419 <Guest98419!~taylor@c-73-63-29-240.hsd1.ut.comcast.net> has quit IRC (Quit: Guest98419)
[09:07:53] <hubert3> # zfs get quota,reservation tank3
[09:07:53] <hubert3> NAME PROPERTY VALUE SOURCE
[09:07:53] <hubert3> tank3 quota none default
[09:07:54] <hubert3> tank3 reservation none default
[09:08:01] <hubert3> sensille: both seem disabled
[09:10:03] <sensille> on all levels?
[09:11:07] <hubert3> yup, zfs get -r showing none on all
[09:11:33] <hubert3> I think I experienced this before and fixed it by changing a zfs tuning parameter, spa_slop_shift, from 5 to 6 or something
[09:11:42] <hubert3> but I can't remember how or where to set that now
[09:12:02] <hubert3> # zpool list
[09:12:11] <hubert3> tank3 21.8T 21.1T 657G - - 11% 97% 1.00x ONLINE -
[09:12:24] <hubert3> # zfs list
[09:12:25] <hubert3> NAME USED AVAIL REFER MOUNTPOINT
[09:12:30] <hubert3> tank3 15.3T 0 15.3T /tank3
[09:13:46] <sensille> that looks like the space is really used up directly, not even in snapshots
[09:14:38] <hubert3> there were actually 650 GB available under the old OpenIndiana install - where I had changed this kernel parameter
[09:17:36] *** kev009 <kev009!~kev009@ip72-222-200-117.ph.ph.cox.net> has quit IRC (Ping timeout: 256 seconds)
[09:20:59] *** tsoome <tsoome!~tsoome@35-96-157-37.dyn.estpak.ee> has quit IRC (Ping timeout: 260 seconds)
[09:25:06] *** ldepandis <ldepandis!~ldepandis@unaffiliated/ldepandis> has joined #illumos
[09:29:49] *** wiedi <wiedi!~wiedi@185.85.220.192> has joined #illumos
[09:35:03] *** tsoome <tsoome!~tsoome@148-52-235-80.sta.estpak.ee> has joined #illumos
[09:40:50] *** Teknix <Teknix!~pds@172.58.44.166> has quit IRC (Ping timeout: 256 seconds)
[09:47:42] *** Teknix <Teknix!~pds@172.58.44.176> has joined #illumos
[09:50:02] *** taylor <taylor!~taylor@c-73-63-29-240.hsd1.ut.comcast.net> has joined #illumos
[09:50:25] *** taylor is now known as Guest74403
[09:55:01] *** hawk <hawk!~hawk@d.qw.se> has quit IRC (Ping timeout: 272 seconds)
[09:59:44] <hubert3> ok solved that
[09:59:55] <hubert3> by adding set zfs:spa_slop_shift = 8 in /etc/system
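For reference, the persistent setting is the /etc/system line quoted above; the live-kernel poke below is a common mdb idiom and an assumption about how one might apply the same change without a reboot (0t8 is decimal 8):
    # /etc/system (takes effect on next boot)
    set zfs:spa_slop_shift = 8
    # assumed runtime equivalent on a live kernel
    echo "spa_slop_shift/W0t8" | mdb -kw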
[10:06:40] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has joined #illumos
[10:11:03] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has quit IRC (Ping timeout: 258 seconds)
[10:14:30] *** ldepandis <ldepandis!~ldepandis@unaffiliated/ldepandis> has quit IRC (Quit: Textual IRC Client: www.textualapp.com)
[10:14:52] *** trn <trn!jhj@prone.ws> has quit IRC (Read error: Connection reset by peer)
[10:15:47] <EisNerd> just a short question: I posted a summary of my SSD performance to illumos-dev yesterday, but there seems to be some hassle with DMARC, so just checking whether the post made it.
[10:16:15] <ptribble> Well I saw it at least.
[10:16:50] <ptribble> And it's showing up on https://illumos.topicbox.com/groups/developer
[10:17:16] <EisNerd> not surprising, as I posted it directly using the web interface
[10:17:16] <tsoome> was in my mail too
[10:17:20] <EisNerd> thx
[10:17:49] <EisNerd> is there some default PCIe power saving or similar
[10:18:52] <EisNerd> which may pull PCIe down when idle and only relax that after repeated device activity - but I would expect that to happen when running bonnie as well
[10:20:43] <toasterson1> EisNerd (IRC): Are you running on datacenter hardware? We had such a phenomenon when the firmware's own power saving was activated.
[10:20:53] <toasterson1> on HP specifically
[10:21:15] <EisNerd> hm, maybe worth checking - reboot a node and check the "bios"
[10:21:54] <toasterson1> on HP in ILO Remote Management we had to disable the power profile.
[10:22:06] <toasterson1> i.e. set it to full performance
[10:22:09] <EisNerd> HPE killed our SSD perf on another system with their special SSD RAID-controller super trick mode (not sure what their marketing calls that crap)
[10:22:10] *** man_u <man_u!~manu@manu2.gandi.net> has joined #illumos
[10:22:55] <EisNerd> to avoid any of this we chose, this time, to avoid having any HBA at all
[10:23:00] <toasterson1> EisNerd (IRC): Yes, HPE killed ESXi latency once... if it is HPE, it is the firmware's power save.
[10:23:48] <EisNerd> this thing https://greenreaper.livejournal.com/140651.html
[10:23:53] <EisNerd> SSD Smart Path
[10:23:56] <EisNerd> they call it
[10:24:32] *** andy_js <andy_js!~andy@51.146.99.40> has joined #illumos
[10:24:42] <EisNerd> took me weeks to understand that the ssd tuning thing kills performance
[10:25:19] <toasterson1> EisNerd (IRC): It's a firmware bug. The power save only releases the full power of the CPU if enough power is drawn from the power supply, so performance may lag behind sometimes. Even if the OS supports the Intel power-management interfaces, the BIOS just overrides it. Took me 6 months to find that on our ESXi hosts...
[10:25:47] *** trn <trn!jhj@prone.ws> has joined #illumos
[10:25:57] <EisNerd> especially as we were working with the system - deploying a vNS with crappy I/O is extremely boring and long-running, as it forces zeroing 300G disk images
[10:26:48] <EisNerd> anyway, I'll check the BIOS; hopefully that solves it
[10:27:27] <toasterson1> EisNerd (IRC): You may also want to check ILO. I can't recall if you can set this setting via BIOS alone.
[10:29:14] <EisNerd> btw this time it is Supermicro metal, not HPE
[10:29:48] <toasterson1> ah. Well hopefully they don't have the same issue...
[10:30:30] <toasterson1> On the plus side if they do Oxide gets more free marketing :)
[10:30:41] <EisNerd> would be nice to get feedback from others using HPC NVMe SSDs if they perform as expected, so I could focus on checking local things. If they also report unexpected weak performance, we could focus on improving the kernel
[10:31:02] <sensille> EisNerd: we had some issues with power saving on Supermicros; you can check with powertop whether you see more than one P-state
[10:32:54] <EisNerd> https://pastebin.com/MPnUYPu3
[10:33:20] <sensille> that unfortunately looks good
[10:33:50] <EisNerd> maybe put some load on it and see if it scales up
[10:38:27] <EisNerd> https://pastebin.com/WTf6sER4
[10:38:39] <EisNerd> hm, no, that also looks like I would expect it to
[10:43:48] <EisNerd> hm https://pastebin.com/0S8xZ1JN
[10:49:46] <EisNerd> I'm wondering whether it could be worth considering raidz1
[10:50:43] <EisNerd> I mean, these are SSDs; at least theoretically they should be far less error-prone than classic disks
[10:53:22] <sensille> quick read test from a regular (non nvme) ssd: 1073741824 bytes transferred in 3.916934 secs (274128168 bytes/sec)
[10:55:48] <sensille> from nvme: 1073741824 bytes transferred in 1.584666 secs (677582499 bytes/sec)
[10:56:40] <sensille> 10737418240 bytes transferred in 15.900206 secs (675300577 bytes/sec)
[10:56:59] <sensille> so it is possible with illumos to get decent nvme throughput with dd
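The exact dd invocation wasn't pasted; a sketch of this kind of raw sequential read test, with the device path as a placeholder, would be:
    # read 1 GiB sequentially from the raw device, discarding the data
    dd if=/dev/rdsk/c1t0d0p0 of=/dev/null bs=1M count=1024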
[10:57:32] <hubert3> what svcs should be running for nfs file sharing to work? getting error "RPC prog. not avail" on clients
[11:01:06] <hubert3> # showmount
[11:01:07] <hubert3> showmount: exa: RPC: Program not registered
[11:01:28] <toasterson1> idmapper?
[11:02:34] <hubert3> online http://dpaste.com/1FV3Z1Q
[11:04:04] <hubert3> zfs set sharenfs=on tank3 # also fails
[11:09:29] <tsoome> step one - make sure your rpcbind is available on public network (not limited on loopback)
[11:10:00] <hubert3> # sharectl set -p client_versmin=3 nfs
[11:10:01] <hubert3> Invalid protocol specified: nfs
[11:10:01] <tsoome> you can use rpcinfo to query remote host (udp port 111)
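For reference, a sketch of the rpcbind check tsoome describes; config/local_only is the usual loopback-only knob, but treat the exact property path as an assumption for your build:
    # is rpcbind restricted to loopback?
    svcprop -p config/local_only svc:/network/rpc/bind
    # allow remote clients, then refresh
    svccfg -s svc:/network/rpc/bind setprop config/local_only = false
    svcadm refresh svc:/network/rpc/bind
    # from another machine: list services registered with rpcbind (port 111)
    rpcinfo -p <hostname>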
[11:10:07] <hubert3> am I missing a package or something?
[11:10:31] <hubert3> rpcinfo <hostname> remotely from my laptop returns a bunch of stuff
[11:10:35] <tsoome> nfsv4 does not have mount protocol, so v4 shares are not visible with showmount
[11:10:39] <hubert3> I think rpc is there but nfs is not installed
[11:11:15] <tsoome> and yes, then make sure you have nfs/server and nfs/client services (only server is needed on server)
[11:12:35] <tsoome> svcs -a | grep nfs should list 5 services online
[11:12:57] <hubert3> yup there are, although I had to svcadm enable them first
[11:12:58] <hubert3> http://dpaste.com/1FV3Z1Q
[11:13:17] <hubert3> cbd,mapid,status,nlockmgr,client
[11:13:26] <hubert3> no server though
[11:16:45] <hubert3> # svcadm enable svc:/network/nfs/server
[11:16:45] <hubert3> svcadm: Pattern 'svc:/network/nfs/server' doesn't match any instances
[11:17:01] <hubert3> is there a package to pkg install?... thought this would be included by default
[11:18:08] <Agnar> hubert3: pkg:/service/file-system/nfs
[11:18:48] <hubert3> thanks. this is the 3rd time I've installed OI in 10 years and I don't remember having to do that before
[11:21:04] <igork> nfs should work out of the box - just: zfs set sharenfs=on pool/dataset
[11:21:20] *** Guest74403 is now known as taylor
[11:21:51] *** taylor is now known as Guest69203
[11:22:21] <hubert3> igork: no, it failed because nfs server was not installed with the minimal usb install of OI 2019.10
[11:22:59] <igork> hubert3: didn't know about it - OI removed nfs by default?
[11:23:21] <hubert3> yes, I just did pkg install pkg:/service/file-system/nfs
[11:23:27] <hubert3> it wasn't installed and fixed my problem
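A condensed sketch of the sequence that resolved it, using the package, service, and dataset names from the conversation above:
    pkg install pkg:/service/file-system/nfs
    svcadm enable -r svc:/network/nfs/server   # -r also enables dependencies
    zfs set sharenfs=on tank3
    svcs -a | grep nfs                         # the nfs services should now be online
    showmount -e <hostname>                    # NFSv3 view; v4-only shares won't show here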
[11:23:32] <igork> sorry, haven't tested OI in a long time, but on dilos we have nfs by default
[11:27:18] <toasterson1> The minimal image is very reduced... the idea is that only the most needed tools are on it. It does not contain any server software by default; everything must be installed onto that image. Use the text install if you have no specific requirement for a minimal attack surface.
[11:28:01] <toasterson1> our default reference is the text install image, not the minimal image
[11:28:16] <igork> toasterson1: i have added nfs on the iso because we use it as a recovery with minimal tools
[11:29:46] <toasterson1> igork (IRC): we recommend our text install image for that purpose :)
[11:29:55] <igork> and the iso is not big - about 300mb
[11:30:28] <toasterson1> igork (IRC): you do not have much of the legacy cruft we still ship. Ours is bigger, unfortunately
[11:30:41] <igork> toasterson1: dilos has a light version with a minimal install :)
[11:31:27] <toasterson1> igork (IRC): yes, but you started from a green field. We would need to chop SUNWcs to pieces.
[11:32:29] <igork> i have split SUNWcs into several packages - to be more similar to the debian userland, but it's still not all the way there
[11:32:57] <toasterson1> igork (IRC): andyf (IRC)
[11:33:24] <toasterson1> ^^ we should start an IPD to chop SUNWcs down
[11:33:26] <tsoome> the problem is about the word "would". someone just has to pick up the task and get it done.
[11:33:47] <igork> tsoome: :)
[11:33:49] <ptribble> SUNWcs isn't that big, to be honest
[11:34:01] <toasterson1> ptribble (IRC): 1.1 GB?
[11:34:07] <toasterson1> in our case
[11:34:07] <igork> but all depend on distribution needs
[11:35:05] <ptribble> 1.1GB? My SUNWcs package is 9M compressed
[11:35:07] <igork> toasterson1: how big is your 'zfs list rpool' with a minimal install?
[11:35:27] <toasterson1> igork (IRC): 1GB
[11:35:35] <toasterson1> that's uncompressed; 300MB compressed
[11:35:36] <igork> on compressed dataset?
[11:35:45] <toasterson1> gzip
[11:36:17] <toasterson1> SUNWcs pulls in a lot of dependencies in our case, including all of illumos-gate
[11:36:39] <toasterson1> ptribble how did you get it that small?
[11:37:28] <toasterson1> we use the default manifests from illumos gate
[11:37:47] <tsoome> 9MB does not really sound possible, the list of files in SUNWcs is 1088 on x86
[11:38:06] <tsoome> those files can not shrink to 9MB:)
[11:38:16] <toasterson1> tsoome (IRC): it could - a lot of it is legacy symlinks
[11:38:36] <tsoome> nono, I'm talking about file actions
[11:38:42] <toasterson1> ah
[11:38:59] <tsoome> tsoome@beastie:/code/illumos-gate/usr/src/pkg/manifests$ grep file SUNWcs.mf| grep -vi sparc | wc -l
[11:38:59] <tsoome> 1088
[11:39:28] <ptribble> That's how big it is. Base Tribblix is a 200M ISO.
[11:39:43] <toasterson1> ptribble (IRC): compressed?
[11:39:53] <ptribble> Yup.
[11:40:23] <igork> GCC 8.4 Released
[11:41:00] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has joined #illumos
[11:41:21] <tsoome> hrm. pkg info is suggesting: Size: 26.60 MB
[11:41:55] <ptribble> Installed, my SUNWcs is 27M raw size, but it would be smaller as the root fs is compressed
[11:42:12] <ptribble> Hm. Almost 1400 files
[11:42:50] <EisNerd> hm, would be nice to have a guide for configuring the BIOS; there are so many options in the details
[11:43:25] <toasterson1> pkg contents -m pkg:/SUNWcs | egrep '^file' | grep -iv 'variant.debug.illumos=true' | grep -iv 'sparc' | wc -l
[11:43:25] <toasterson1> 1649
[11:44:00] <toasterson1> pkg info also says Size: 26.83 MB
[11:44:00] <tsoome> but indeed, many very small files, so 26MB might be true:D
[11:47:34] <toasterson1> Yeah looking at pkg info it looks like all the dependencies might pull in around 20-70MB each and then if you have enough of them you get 1 GB.
[11:47:46] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has quit IRC (Ping timeout: 256 seconds)
[11:48:08] *** ptribble <ptribble!~ptribble@cpc92716-cmbg20-2-0-cust138.5-4.cable.virginm.net> has quit IRC (Quit: Leaving)
[11:51:00] <EisNerd> btw is there a way to get from pkg the illumos-gate git hash it is based on?
[11:51:49] <EisNerd> leoric pointed me to the manifest on pkg.openindiana.org, but this doesn't help you determine what you currently have on your system
[11:52:25] <tsoome> pkg contents -m should show the manifest
[11:52:39] <EisNerd> great
[11:52:41] <EisNerd> thx
[11:54:01] <igork> on dilos we have local file: /etc/release_revision. it contain git hash + repo .
[11:54:08] <toasterson1> EisNerd (IRC): there is a property called git-something in the metadata actions of every package; pkg contents -m shows them as well. pkg contents -m $PKG | grep 'git.' should do the trick
[11:55:29] <toasterson1> igork (IRC): the git hash in the packages serves a different purpose. It is there to link each package to the last git revision by which it was updated. It is not for the whole OS but for every package.
[11:55:51] <igork> toasterson1: ok
[11:56:26] <igork> we have slightly different logic - we save to the git repo the package version that is in the public repo
[11:57:12] <igork> userland and illumos parts are different
[11:58:09] <EisNerd> leoric already pointed me yesterday to the entry in the manifest, so the question was completely answered with the command to get the manifest for a specific package for the version I have on my system
[11:58:58] <EisNerd> as this might become relevant when checking for potential os/kernel shortcomings regarding my NVMe trouble
[11:59:45] <EisNerd> damn, I should have checked the bios before installing the OS
[12:00:01] <EisNerd> it has defaulted to legacy boot instead of efi
[12:01:05] <toasterson1> shouldn't bootadm create an efi partition anyways?
[12:01:40] <EisNerd> and the SATA DOMs are not reported as SSDs due to BIOS defaults
[12:01:48] <EisNerd> maybe this breaks boot now, as I changed it
[12:02:13] <toasterson1> sata doms?
[12:02:38] <EisNerd> small ssds directly packaged in sata plugs
[12:03:29] <EisNerd> and powered through inline power
[12:03:47] <toasterson1> EisNerd (IRC): oh yes, those bastards... the worst idea ever imho. so bad to replace them, and they are the most frequent ones to break.
[12:04:32] <EisNerd> oh ok, first time I've heard that
[12:05:09] <EisNerd> but they are mirrored, so I don't think this would become an immediate problem
[12:05:42] <EisNerd> hm for some reason the system won't boot anymore
[12:06:00] <toasterson1> Oh it is not an immediate problem... replacing them just requires more work than hotplugging sata ssds
[12:06:14] <toasterson1> you have to open the case
[12:06:25] <EisNerd> no, I have to pull the blade
[12:06:37] <toasterson1> ah that helps :)
[12:07:23] *** KeiraT <KeiraT!~k4ra@gateway/tor-sasl/k4ra> has quit IRC (Ping timeout: 240 seconds)
[12:07:43] <EisNerd> somehow it does, as this system utilizes dual-port SSDs and has two blades, so you can move the workload to the second node and service the other
[12:08:47] *** KeiraT <KeiraT!~k4ra@gateway/tor-sasl/k4ra> has joined #illumos
[12:10:07] <EisNerd> hm, it boots, kernel greeter, then two warnings regarding "memspace program failed", and then it hangs
[12:10:22] <EisNerd> the memspace warnings have been there all the time afaik
[12:13:09] <EisNerd> https://paste.pics/871c1dd4f75aeabfa5f02ef48ab9cc1e
[12:14:24] *** tsoome <tsoome!~tsoome@148-52-235-80.sta.estpak.ee> has quit IRC (Ping timeout: 256 seconds)
[12:17:39] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has joined #illumos
[12:21:42] <gitomat> [illumos-gate] 12261 pfiles(1) could show any filesystem endpoints for a door -- Andy Fiddaman <omnios at citrus-it dot co.uk>
[12:23:07] <jlevon> andyf: nice
[12:23:25] <andyf> Finally :) Thanks for all your help with that one (and Woodstock, and rmustacc..)
[12:24:44] <hubert3> what's a simple way to create a new SMF service to launch a python script?
[12:31:46] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has quit IRC (Ping timeout: 265 seconds)
[12:38:15] *** xzilla <xzilla!~robert@pool-71-166-61-141.bltmmd.fios.verizon.net> has quit IRC (Ping timeout: 258 seconds)
[12:54:57] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has joined #illumos
[12:54:59] <andyf> hubert3: I usually find an example to copy from but I think there are some tools about to generate them
[12:56:05] <andyf> there's a pretty simple one here https://github.com/omniosorg/omnios-build/blob/master/build/ec2-credential/files/ec2-credential.xml
[12:56:24] <andyf> but you can do all sorts in terms of limiting or extending privileges, forcing ASLR, etc. etc.
[12:57:36] <andyf> See https://illumos.org/man/smf_method
[12:58:27] <andyf> Here's a bit of a longer one that does some of that
[12:58:27] <andyf> https://github.com/omniosorg/omnios-build/blob/master/build/ntpsec/files/ntpsec.xml
[12:59:03] *** hubert3 <hubert3!~hubert@121-200-20-108.79c814.syd.nbn.aussiebb.net> has quit IRC (Ping timeout: 268 seconds)
[13:01:41] *** hubert3 <hubert3!~hubert@121-200-20-108.79c814.syd.nbn.aussiebb.net> has joined #illumos
[13:04:43] <hubert3> what's a simple template to follow to create a custom smf service that just runs a python script that stays in the foreground
[13:04:54] <hubert3> just depends on networking
[13:10:35] <andyf> I'll find an example for you
[13:12:33] <andyf> https://github.com/omniosorg/omnios-extra/blob/master/build/ooceapps/files/ooceapps.xml
[13:12:53] <andyf> note the &amp; at the end of the start method exec line - that's needed if the script stays in the foreground
[13:13:07] <toasterson1> andyf (IRC): there is a proper way to do that
[13:13:21] <andyf> you can drop some of those dependencies if you don't need them and adjust the username etc.
[13:13:21] <sensille> anyone knows how to reach dan kimmel?
[13:15:53] <toasterson1> andyf (IRC): there is another startd snippet which allows you to do jobs in the foreground
[13:16:15] <andyf> toasterson1: startd duration=child?
[13:16:24] <toasterson1> andyf (IRC): yes
[13:17:38] <andyf> doesn't that stop it being put in its own contract though?
[13:18:19] <andyf> Ok, one last example that uses startd wait - https://github.com/omniosorg/omnios-extra/blob/master/build/gitea/files/gitea.xml
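A rough sketch of how such a manifest might be wired up once written; the manifest path and service FMRI below are made up for illustration:
    # validate and import a hand-written manifest for the python service
    svccfg validate /var/tmp/myscript.xml
    svccfg import /var/tmp/myscript.xml
    # inside the manifest, either background the script with &amp; in the
    # start method's exec line, or set the startd "duration" property to
    # child (or wait), as discussed above, so startd tracks the foreground process
    svcadm enable site/myscript
    svcs -xv site/myscript    # check state and find the service log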
[13:26:54] <hubert3> thanks
[13:34:40] *** hemi770 <hemi770!~hemi666@unaffiliated/hemi770> has quit IRC (Remote host closed the connection)
[13:43:45] <EisNerd> hm, could someone with more technical skill check those warnings? maybe they are a hint as to why NVMe doesn't perform as expected
[13:46:10] <sensille> EisNerd: can you do a longer dd and do an "mpstat 1" meanwhile and paste it here?
[13:46:38] *** neirac <neirac!~cneir@pc-184-104-160-190.cm.vtr.net> has joined #illumos
[13:48:35] *** hemi770 <hemi770!~hemi666@unaffiliated/hemi770> has joined #illumos
[13:51:33] <EisNerd> sure
[13:54:05] <EisNerd> https://pastebin.com/hMRiD8br
[13:55:39] <sensille> so it's not cpu bound
[13:55:50] <EisNerd> maybe that is what I was going to ask for: is there something to determine whether the kernel is waiting for the metal or is busy with itself? Tricky, though, as we would have to ask the kernel, and if the kernel misses an opportunity to be faster it will still report that it is waiting for the metal
[13:59:56] <sensille> you can also look at iostat -xn 1
[14:02:17] <EisNerd> guess Intel VMD is of no interest in this (disabled is what is intended)
[14:04:36] <EisNerd> give me a second to boot the system again
[14:05:25] <EisNerd> sensille: guess, while dd is running again?
[14:05:37] <sensille> yeah, again for several seconds
[14:05:53] <sensille> might do it with different i/o sizes
[14:06:09] <EisNerd> 4k and 64k?
[14:06:32] <sensille> for example. or 4k and 1M
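To pair the different I/O sizes with observation, something along these lines; the iostat columns referred to further down are asvc_t (average service time) and %b (busy):
    # while dd runs, in another terminal:
    iostat -xn 1    # per-device r/s, kr/s, asvc_t, %b
    mpstat 1        # per-CPU usr/sys/idle, to rule out being CPU bound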
[14:08:16] <EisNerd> https://pastebin.com/eTjZQNtL
[14:09:46] <EisNerd> https://pastebin.com/M6M9tZsY
[14:12:26] <EisNerd> https://pastebin.com/7yGM6EsS
[14:12:50] *** hubert3 <hubert3!~hubert@121-200-20-108.79c814.syd.nbn.aussiebb.net> has quit IRC (Ping timeout: 256 seconds)
[14:14:36] *** gh34 <gh34!~textual@cpe-184-58-181-106.wi.res.rr.com> has joined #illumos
[14:24:06] *** jim80net <jim80net!sid287860@gateway/web/irccloud.com/x-dkdneayvaafzmlcf> has quit IRC (Ping timeout: 252 seconds)
[14:24:16] *** jim80net <jim80net!sid287860@gateway/web/irccloud.com/x-cqresmenmgizwvaq> has joined #illumos
[14:26:00] <sensille> that's strange
[14:29:06] <sensille> each read is 56k?
[14:32:01] <sensille> the read latency of 0.2ms at least explains the throughput
[14:33:45] <sensille> here it looks like this: 11922.0 0.0 667630.7 0.0 0.0 0.5 0.0 0.0 0 50 c5t1d0
[14:34:00] <sensille> asvc_t 0.0
[14:36:01] <sensille> but also 56k/read
[14:37:01] *** neirac_ <neirac_!~cneir@pc-184-104-160-190.cm.vtr.net> has joined #illumos
[14:37:35] <sensille> so your OS is reporting that it is waiting for the metal
[14:39:34] *** neirac <neirac!~cneir@pc-184-104-160-190.cm.vtr.net> has quit IRC (Ping timeout: 268 seconds)
[14:41:02] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has quit IRC (Read error: Connection reset by peer)
[14:41:48] <sensille> so the question is, where does your latency come from?
[14:41:56] <EisNerd> at least it thinks so; not sure whether some flaw in low-level code would look the same
[14:44:36] <EisNerd> hm, maybe worth force-disabling power management
[14:44:57] <sensille> dunno if it's relevant, is mpxio disabled? i have no idea how the nvme stack works
[14:45:15] <sensille> why do your devices have the wwn in the name and mine not?
[14:45:26] <EisNerd> besides, I'm not a friend of this, as the OS / the HW should usually be smart enough these days to know when and where to put power when needed to perform as intended
[14:45:42] <EisNerd> maybe u.2
[14:45:54] <EisNerd> mpxio? -v
[14:46:22] <sensille> i don't think power is an issue here, as you're not cpu bound
[14:46:42] *** cneir__ <cneir__!~cneir@pc-184-104-160-190.cm.vtr.net> has joined #illumos
[14:48:02] <sensille> mpxio is multipathing. but that would not make sense to me in this setup
[14:48:23] <sensille> can you pastebin the output of format?
[14:49:40] *** neirac_ <neirac_!~cneir@pc-184-104-160-190.cm.vtr.net> has quit IRC (Ping timeout: 256 seconds)
[14:50:10] *** neirac <neirac!~cneir@pc-184-104-160-190.cm.vtr.net> has joined #illumos
[14:51:03] <EisNerd> just the disk selection? I'll take the first node, as I put the second back into the bios right now
[14:51:17] <sensille> yeah
[14:51:54] <EisNerd> https://pastebin.com/5d1VpGLr
[14:52:25] *** cneir__ <cneir__!~cneir@pc-184-104-160-190.cm.vtr.net> has quit IRC (Ping timeout: 265 seconds)
[14:53:41] *** neirac_ <neirac_!~cneir@pc-184-104-160-190.cm.vtr.net> has joined #illumos
[14:53:42] <sensille> so /pci@53,0/pci8086,2030@0/pci11f8,8534@0/pci11f8,8534@2/pci1179,1@0/blkdev@w8CE38EE200B06801,0 for your path vs. /pci@0,0/pci8086,6f0a@3,2/pci8086,3702@0/blkdev@1,0 for mine
[14:53:45] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has joined #illumos
[14:55:40] <EisNerd> do you have u2 format?
[14:56:58] *** neirac <neirac!~cneir@pc-184-104-160-190.cm.vtr.net> has quit IRC (Ping timeout: 255 seconds)
[14:57:00] <EisNerd> I would think that there is a minimal overhead involved in implementing hotplug ssds, but this should not kill the performance in a way that you can even use comodity mid end HW
[14:57:37] <EisNerd> at least hopefully it doesn't
[14:57:57] <sensille> i have no idea what the form factor is
[14:58:16] <EisNerd> https://www.supermicro.com/en/products/system/2U/2029/SSG-2029P-DN2R24L.cfm
[14:58:25] <sensille> your path involves 5 pci buses, mine only 3
[14:59:15] *** neirac <neirac!~cneir@pc-184-104-160-190.cm.vtr.net> has joined #illumos
[15:01:09] <EisNerd> likely caused by the slightly higher complexity of this box
[15:02:35] *** neirac_ <neirac_!~cneir@pc-184-104-160-190.cm.vtr.net> has quit IRC (Ping timeout: 260 seconds)
[15:03:19] <EisNerd> I get the point, but the difference is too big, I would say, to be explained by this
[15:03:39] *** neirac_ <neirac_!~cneir@pc-184-104-160-190.cm.vtr.net> has joined #illumos
[15:03:43] <EisNerd> hm, extended APIC causes the system to not boot any longer
[15:04:11] <EisNerd> something Intel claims improves throughput and latency
[15:04:56] *** jcea <jcea!~Thunderbi@2001:bc8:2ecd:caed:7670:6e00:7670:6e00> has joined #illumos
[15:06:05] <sensille> i'm not trying to make a point. i'm just offering you a comparison
[15:06:47] *** neirac <neirac!~cneir@pc-184-104-160-190.cm.vtr.net> has quit IRC (Ping timeout: 260 seconds)
[15:07:57] <EisNerd> frustrating - the plan with this box is to bring something between 20 and 40 gbit onto the wire to provide an HA backend for our infrastructure
[15:15:40] <EisNerd> hm some idea regarding the extended apic?
[15:15:41] *** BOKALDO <BOKALDO!~BOKALDO@81.198.159.87> has quit IRC (Quit: Leaving)
[15:16:22] *** gh34 <gh34!~textual@cpe-184-58-181-106.wi.res.rr.com> has quit IRC (Quit: Textual IRC Client: www.textualapp.com)
[15:17:24] *** gh34 <gh34!~textual@cpe-184-58-181-106.wi.res.rr.com> has joined #illumos
[15:18:28] <EisNerd> possible that the fan mode (in BMC/LOM) has any impact on this?
[15:48:37] <EisNerd> hm this system utilises PCIE NTB, maybe this is relevant
[15:49:09] *** amrmesh <amrmesh!~Thunderbi@185.212.171.68> has joined #illumos
[15:51:53] *** amrmesh <amrmesh!~Thunderbi@185.212.171.68> has quit IRC (Remote host closed the connection)
[15:52:19] *** Kurlon_ <Kurlon_!~Kurlon@cpe-67-253-136-97.rochester.res.rr.com> has quit IRC (Ping timeout: 255 seconds)
[15:56:22] <EisNerd> disabling power management improves performance; now I get 250-300MB/s using dd
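Here the change was made in the firmware setup; for completeness, the OS-side knob (when the platform lets the OS manage CPU power) is the cpupm line in /etc/power.conf - keyword availability may vary by distribution, so treat this as a sketch:
    # /etc/power.conf (excerpt)
    cpupm disable
    # apply the new configuration without a reboot
    /usr/sbin/pmconfig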
[16:00:16] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has quit IRC (Ping timeout: 258 seconds)
[16:00:20] *** jimklimov1 <jimklimov1!~jimklimov@31.7.243.238> has joined #illumos
[16:01:45] *** jimklimov1 <jimklimov1!~jimklimov@31.7.243.238> has quit IRC (Read error: Connection reset by peer)
[16:01:59] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has joined #illumos
[16:09:23] *** Guest69203 <Guest69203!~taylor@c-73-63-29-240.hsd1.ut.comcast.net> has quit IRC (Quit: Guest69203)
[16:09:46] *** hubert3 <hubert3!~hubert@121-200-20-108.79c814.syd.nbn.aussiebb.net> has joined #illumos
[16:14:27] *** hubert3 <hubert3!~hubert@121-200-20-108.79c814.syd.nbn.aussiebb.net> has quit IRC (Ping timeout: 258 seconds)
[16:17:31] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has quit IRC (Ping timeout: 255 seconds)
[16:22:16] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has joined #illumos
[16:24:28] *** Kurlon <Kurlon!~Kurlon@bidd-pub-03.gwi.net> has joined #illumos
[16:24:37] *** cneir__ <cneir__!~cneir@pc-184-104-160-190.cm.vtr.net> has joined #illumos
[16:25:01] *** BOKALDO <BOKALDO!~BOKALDO@87.110.102.7> has joined #illumos
[16:26:42] *** cneir__ <cneir__!~cneir@pc-184-104-160-190.cm.vtr.net> has quit IRC (Read error: Connection reset by peer)
[16:27:07] *** cneir__ <cneir__!~cneir@pc-184-104-160-190.cm.vtr.net> has joined #illumos
[16:28:01] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has quit IRC (Read error: Connection reset by peer)
[16:28:06] *** neirac_ <neirac_!~cneir@pc-184-104-160-190.cm.vtr.net> has quit IRC (Ping timeout: 268 seconds)
[16:30:44] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has joined #illumos
[16:31:41] *** neirac <neirac!~cneir@pc-184-104-160-190.cm.vtr.net> has joined #illumos
[16:34:59] *** cneir__ <cneir__!~cneir@pc-184-104-160-190.cm.vtr.net> has quit IRC (Ping timeout: 260 seconds)
[16:40:01] *** Kruppt <Kruppt!~Kruppt@50-111-62-211.drhm.nc.frontiernet.net> has joined #illumos
[16:43:05] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has quit IRC (Read error: Connection reset by peer)
[16:43:19] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has joined #illumos
[16:50:29] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has quit IRC (Read error: Connection reset by peer)
[16:51:02] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has joined #illumos
[16:59:18] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has quit IRC (Read error: Connection reset by peer)
[16:59:30] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has joined #illumos
[17:09:16] *** neuroserve <neuroserve!~toens@195.71.113.124> has quit IRC (Ping timeout: 258 seconds)
[17:09:25] *** tsoome <tsoome!~tsoome@148-52-235-80.sta.estpak.ee> has joined #illumos
[17:09:59] *** tsoome <tsoome!~tsoome@148-52-235-80.sta.estpak.ee> has quit IRC (Client Quit)
[17:10:08] *** tsoome <tsoome!~tsoome@d989-793b-0fd6-7038-2f80-4a40-07d0-2001.sta.estpak.ee> has joined #illumos
[17:17:34] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has quit IRC (Read error: Connection reset by peer)
[17:22:15] *** Teknix <Teknix!~pds@172.58.44.176> has quit IRC (Ping timeout: 265 seconds)
[17:23:32] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has joined #illumos
[18:10:54] *** hubert3 <hubert3!~hubert@121-200-20-108.79c814.syd.nbn.aussiebb.net> has joined #illumos
[18:16:01] *** hubert3 <hubert3!~hubert@121-200-20-108.79c814.syd.nbn.aussiebb.net> has quit IRC (Ping timeout: 268 seconds)
[18:17:15] *** man_u <man_u!~manu@manu2.gandi.net> has quit IRC (Ping timeout: 268 seconds)
[18:33:44] *** merzo <merzo!~merzo@185.39.197.205> has joined #illumos
[18:36:36] *** Teknix <Teknix!~pds@69.41.134.110> has joined #illumos
[18:58:03] *** wiedi <wiedi!~wiedi@185.85.220.192> has quit IRC (Quit: ^C)
[19:10:06] *** neirac_ <neirac_!~cneir@pc-184-104-160-190.cm.vtr.net> has joined #illumos
[19:12:56] *** Guest69203 <Guest69203!~taylor@199.104.121.63> has joined #illumos
[19:13:28] *** neirac <neirac!~cneir@pc-184-104-160-190.cm.vtr.net> has quit IRC (Ping timeout: 258 seconds)
[19:13:53] *** Guest69203 is now known as taylor
[19:14:24] *** taylor is now known as Guest39165
[19:15:50] *** Guest39165 <Guest39165!~taylor@199.104.121.63> has quit IRC (Client Quit)
[19:19:36] *** merzo <merzo!~merzo@185.39.197.205> has quit IRC (Ping timeout: 258 seconds)
[19:30:27] *** merzo <merzo!~merzo@185.39.197.205> has joined #illumos
[19:31:24] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has quit IRC (Quit: Leaving.)
[19:39:14] *** merzo <merzo!~merzo@185.39.197.205> has quit IRC (Ping timeout: 240 seconds)
[19:58:42] *** neirac_ <neirac_!~cneir@pc-184-104-160-190.cm.vtr.net> has quit IRC (Read error: Connection reset by peer)
[20:00:45] *** neirac <neirac!~cneir@pc-184-104-160-190.cm.vtr.net> has joined #illumos
[20:03:28] *** hawk <hawk!~hawk@d.qw.se> has joined #illumos
[20:12:02] *** hubert3 <hubert3!~hubert@121-200-20-108.79c814.syd.nbn.aussiebb.net> has joined #illumos
[20:16:28] *** hubert3 <hubert3!~hubert@121-200-20-108.79c814.syd.nbn.aussiebb.net> has quit IRC (Ping timeout: 255 seconds)
[20:18:07] *** jcea1 <jcea1!~Thunderbi@2001:bc8:2ecd:caed:7670:6e00:7670:6e00> has joined #illumos
[20:18:40] *** jcea <jcea!~Thunderbi@2001:bc8:2ecd:caed:7670:6e00:7670:6e00> has quit IRC (Ping timeout: 256 seconds)
[20:18:40] *** jcea1 is now known as jcea
[20:31:08] *** richlowe <richlowe!~richlowe@cpe-74-139-197-163.kya.res.rr.com> has quit IRC (Ping timeout: 256 seconds)
[20:31:22] *** richlowe <richlowe!~richlowe@2605:a000:160c:8b5b::bb7> has joined #illumos
[20:32:50] *** jcea <jcea!~Thunderbi@2001:bc8:2ecd:caed:7670:6e00:7670:6e00> has quit IRC (Ping timeout: 256 seconds)
[20:51:08] *** jcea <jcea!~Thunderbi@2001:bc8:2ecd:caed:7670:6e00:7670:6e00> has joined #illumos
[21:04:39] *** neirac_ <neirac_!~cneir@pc-184-104-160-190.cm.vtr.net> has joined #illumos
[21:05:46] *** BOKALDO <BOKALDO!~BOKALDO@87.110.102.7> has quit IRC (Quit: Leaving)
[21:06:00] *** wiedi <wiedi!~wiedi@ip5b4096a6.dynamic.kabel-deutschland.de> has joined #illumos
[21:07:31] *** neirac <neirac!~cneir@pc-184-104-160-190.cm.vtr.net> has quit IRC (Ping timeout: 260 seconds)
[21:09:10] *** neirac <neirac!~cneir@pc-184-104-160-190.cm.vtr.net> has joined #illumos
[21:11:55] *** neirac_ <neirac_!~cneir@pc-184-104-160-190.cm.vtr.net> has quit IRC (Ping timeout: 258 seconds)
[21:14:36] *** lgtaube <lgtaube!~lgt@45.86.203.33> has quit IRC (Ping timeout: 258 seconds)
[21:21:09] *** neirac_ <neirac_!~cneir@pc-184-104-160-190.cm.vtr.net> has joined #illumos
[21:22:13] *** alanc <alanc!~alanc@inet-hqmc02-o.oracle.com> has quit IRC (Remote host closed the connection)
[21:22:39] *** alanc <alanc!~alanc@inet-hqmc02-o.oracle.com> has joined #illumos
[21:24:24] *** neirac <neirac!~cneir@pc-184-104-160-190.cm.vtr.net> has quit IRC (Ping timeout: 265 seconds)
[21:32:42] *** lgtaube <lgtaube!~lgt@213-67-21-71-no84.tbcn.telia.com> has joined #illumos
[21:33:58] *** neirac <neirac!~cneir@pc-184-104-160-190.cm.vtr.net> has joined #illumos
[21:36:58] *** neirac_ <neirac_!~cneir@pc-184-104-160-190.cm.vtr.net> has quit IRC (Ping timeout: 265 seconds)
[21:41:24] *** khng300 <khng300!~khng300@unaffiliated/khng300> has quit IRC (Ping timeout: 256 seconds)
[21:41:40] *** khng300 <khng300!~khng300@unaffiliated/khng300> has joined #illumos
[21:45:56] *** Tempt <Tempt!~avenger@unaffiliated/tempt> has quit IRC (Ping timeout: 256 seconds)
[21:46:04] *** Tempt <Tempt!~avenger@unaffiliated/tempt> has joined #illumos
[21:46:04] *** ChanServ sets mode: +o Tempt
[21:56:01] *** xzilla <xzilla!~robert@12.203.174.10> has joined #illumos
[22:34:43] *** lgtaube <lgtaube!~lgt@213-67-21-71-no84.tbcn.telia.com> has quit IRC (Ping timeout: 258 seconds)
[22:45:59] *** merzo <merzo!~merzo@20-14-132-95.pool.ukrtel.net> has joined #illumos
[22:50:10] *** lgtaube <lgtaube!~lgt@91.109.28.145> has joined #illumos
[23:00:55] *** Kurlon <Kurlon!~Kurlon@bidd-pub-03.gwi.net> has quit IRC (Ping timeout: 268 seconds)
[23:30:05] *** andy_js <andy_js!~andy@51.146.99.40> has quit IRC (Quit: andy_js)
[23:32:40] *** gh34 <gh34!~textual@cpe-184-58-181-106.wi.res.rr.com> has quit IRC (Quit: Textual IRC Client: www.textualapp.com)
[23:33:36] *** Kurlon <Kurlon!~Kurlon@cpe-67-253-136-97.rochester.res.rr.com> has joined #illumos