   February 1, 2020  
[00:40:46] *** gh34 <gh34!~textual@cpe-184-58-181-106.wi.res.rr.com> has quit IRC (Quit: Textual IRC Client: www.textualapp.com)
[01:11:54] <jlevon> jimklimov-mobile: sorry we missed you, we were right at the back on a table
[01:11:58] *** jimklimov-mobile <jimklimov-mobile!uid278835@gateway/web/irccloud.com/x-yujohgpzbeefxnxb> has quit IRC (Quit: Connection closed for inactivity)
[01:16:24] *** jimklimov-mobile <jimklimov-mobile!uid278835@gateway/web/irccloud.com/x-xkguwpzwlbyrsykv> has joined #illumos
[01:25:14] *** hemi770 <hemi770!~hemi666@unaffiliated/hemi770> has quit IRC (Ping timeout: 240 seconds)
[01:27:13] *** hemi770 <hemi770!~hemi666@unaffiliated/hemi770> has joined #illumos
[01:36:14] *** andy_js <andy_js!~andy@90.215.171.48> has quit IRC (Quit: andy_js)
[01:41:23] *** jamtorus <jamtorus!~quassel@s91904427.blix.com> has joined #illumos
[01:41:58] *** jellydonut <jellydonut!~quassel@s91904426.blix.com> has quit IRC (Ping timeout: 268 seconds)
[01:50:59] *** jellydonut <jellydonut!~quassel@s91904424.blix.com> has joined #illumos
[01:53:52] *** jamtorus <jamtorus!~quassel@s91904427.blix.com> has quit IRC (Ping timeout: 268 seconds)
[02:35:42] *** hoobershaggus <hoobershaggus!4120e720@gateway/web/cgi-irc/kiwiirc.com/ip.65.32.231.32> has joined #illumos
[02:40:38] *** hoobershaggus <hoobershaggus!4120e720@gateway/web/cgi-irc/kiwiirc.com/ip.65.32.231.32> has quit IRC (Remote host closed the connection)
[03:09:03] *** hurfdurf <hurfdurf!~hurfdurf@2601:280:4f00:26a0:758e:6d23:76d8:4c0a> has quit IRC (Ping timeout: 245 seconds)
[03:21:58] *** jimklimov-mobile <jimklimov-mobile!uid278835@gateway/web/irccloud.com/x-xkguwpzwlbyrsykv> has quit IRC (Quit: Connection closed for inactivity)
[03:28:08] *** jocthbr <jocthbr!~salci@138-122-44-143.host.cicloti.com.br> has quit IRC (Ping timeout: 260 seconds)
[03:40:36] *** jocthbr <jocthbr!~salci@138-122-44-143.host.cicloti.com.br> has joined #illumos
[05:56:53] *** hemi770 <hemi770!~hemi666@unaffiliated/hemi770> has quit IRC (Ping timeout: 260 seconds)
[05:58:37] *** hemi770 <hemi770!~hemi666@unaffiliated/hemi770> has joined #illumos
[06:04:40] *** hemi770 <hemi770!~hemi666@unaffiliated/hemi770> has quit IRC (Ping timeout: 268 seconds)
[06:19:55] *** hemi770 <hemi770!~hemi666@unaffiliated/hemi770> has joined #illumos
[06:29:02] *** tru_tru <tru_tru!~tru@157.99.90.140> has quit IRC (Ping timeout: 265 seconds)
[07:27:24] *** zsj <zsj!~zsj@3EC95F11.catv.pool.telekom.hu> has quit IRC (Quit: leaving)
[07:35:55] *** Kruppt <Kruppt!~Kruppt@50.111.11.107> has quit IRC (Quit: Leaving)
[07:53:41] *** zsj <zsj!~zsj@3EC95F11.catv.pool.telekom.hu> has joined #illumos
[08:27:22] *** BOKALDO <BOKALDO!~BOKALDO@87.110.147.150> has joined #illumos
[08:45:04] <mnowak_> can someone help me with strtod(3c) and hex numbers? I have this minimal example, which produces the correct result with GCC 8 & 9 but fails with GCC 6 & 7: https://paste.ec/paste/U23P4zZ9#vGKMWHuSqaNVsIdbcFCBGo8BOObmpx4-susTGdnwVAX. I'm trying to fix the fish shell, which fails here.
[08:50:26] *** lgtaube <lgtaube!~lgt@84.16.224.13> has quit IRC (Ping timeout: 240 seconds)
[08:50:29] <sjorge> So it seems 90% of the stickers I had are OmniOSce
[08:51:11] <sjorge> we are a bit further from the door, so that is good
[09:27:39] *** hemi770 <hemi770!~hemi666@unaffiliated/hemi770> has quit IRC (Remote host closed the connection)
[09:44:45] <clapont> tsoome: the test is to pull random FC cables between a server --- SAN switch --- storage, so that only one physical path exists at a time. This test applies to any OS+multipath solution, although I am particularly interested in Solaris 10 + Veritas VxDMP, because one of the zpools got IO errors while testing; just one zpool, not all zpools.
[09:45:27] <clapont> tsoome: a good reading for Solaris + MPxIO (Solaris's default MultiPath) would be https://docs.oracle.com/cd/E53394_01/html/E54792/agkap.html
[09:45:42] <tsoome> unplug+plug?
[09:46:15] <tsoome> make sure the path is recovered after plugging and before unplugging next cable.
[09:46:45] <tsoome> sometimes the path recovery might take time.
[09:46:56] <tsoome> what storage is it?
[09:47:08] <clapont> in the tests I waited 10 minutes each time.. it's an HP MSA2050
[09:47:42] <tsoome> with veritas DMP you need array modules to be installed
[09:47:51] <clapont> with two old Brocade switches so it's not about the performance but about failover
[09:48:54] <tsoome> with scsi_vhci, make sure you have system patched to latest, and then check if the paths are properly identified based on array spec (A/A or A/PG).
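(As an aside, a minimal sketch of the scsi_vhci-side check tsoome mentions, using mpathadm(1M); the LUN device name is only a placeholder, and this particular setup uses VxDMP rather than scsi_vhci:)

    # list the logical units scsi_vhci is multipathing
    mpathadm list lu
    # show path state and target-port groups (A/A vs A/PG, i.e. ALUA) for one LUN
    mpathadm show lu /dev/rdsk/c0tWWNd0s2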
[09:49:27] <clapont> the Veritas DMP is working, the storage is recognized as ALUA and the zpools work fine... during the tests, one zpool got SUSPENDED for "IO errors" -only one; I would understand if it had been all of them
[09:49:55] <tsoome> switch really does not matter there (assuming it has decent enough fw version). cable unplug means link down and switch should detect that without any issues.
[09:51:10] <clapont> tsoome: scsi_vhci I don't use.. I need VxDMP for IO fencing... some people suggested disabling VxDMP for all but the three coordinator disks, but then those three would still be "exposed" to the same risk
[09:51:14] <tsoome> IO error means the lower level stack did fail. the idea of multipath is that the upper layers will not see IO errors as long as you have at least one functional link.
[09:51:54] <tsoome> however, there are things like timeouts - sometimes it may happen the MPIO layer will not react fast enough
[09:52:19] <tsoome> io fencing? VCS setup?
[09:52:31] <clapont> tsoome: there was exactly one link - and several zpools being written to - but only one zpool got suspended; why only one, not all - I am unhappy :-)
[09:54:22] <clapont> tsoome: yes, IO fencing with VCS, three disks, two nodes, 10 zpools total. The setup is small (and old) but some cables have been added, hence the idea of a test; it's good to test, but here something failed
[09:54:28] *** hemi770 <hemi770!~hemi666@unaffiliated/hemi770> has joined #illumos
[09:54:35] <tsoome> are you sure the pools are using vxvm devices?
[09:55:06] <clapont> yes. all of them are /dev/vx/dmp/something
[09:55:47] <tsoome> then you definitely should look for veritas patches (and ofc make sure s10 is patched too).
[09:56:06] <clapont> the suspended zpool remained in the suspended state even after reconnecting the cables - nothing helped but node reboot
[09:56:55] <tsoome> aye. you can check 'fmadm faulty'
[09:57:55] <clapont> I did, I cleared it but no help.. the hardware LED went off but the zpool needed a reboot; and "zpool scrub" showed no errors
[09:58:38] <clapont> no zpool command helped with that suspended zpool; only reboot did it
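(For readers following along, a rough sketch of the checks being discussed, using standard zpool(1M)/fmadm(1M) subcommands; "tank" is a placeholder pool name, and as noted here a suspended pool may still end up needing a reboot:)

    # identify pools with errors, including the SUSPENDED state
    zpool status -x
    # list faults FMA has diagnosed (e.g. a disabled FC port)
    fmadm faulty
    # once the paths are back, try to resume I/O on the suspended pool
    zpool clear tank
    # verify on-disk consistency after the pool is back online
    zpool scrub tank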
[09:59:05] <tsoome> it may happen that FMA declared the IO port faulty and disabled it, especially if in fact all pools were affected. With at least one path remaining, you should not see any consumer affected by cable pulls.
[10:00:40] <clapont> tsoome: exactly! thank you! it should not but it did; so I'm trying to get directions.. I am willing to buy books, read, dig.. it is not an emergency for me but I wish to solve this properly
[10:00:57] <tsoome> but also note, the FC stack may get confused, especially when the system is old (not patched), and if that happens, a LIP reset *may* help, but often it really is just about a reboot.
[10:02:00] <clapont> tsoome: not all zpools were affected, just one. _only one zpool_ got suspended, the rest continued fine - this is the major question mark: "why only one?"
[10:03:33] <tsoome> in that case you can exclude port down, and have to assume either VxVM or sd driver.
[10:04:55] <tsoome> are all pools using dedicated luns?
[10:05:38] <tsoome> it seems I need some time for coffee:D
[10:06:02] *** lgtaube <lgtaube!~lgt@91.109.28.129> has joined #illumos
[10:06:39] <tsoome> if all pools have dedicated lun, then check if all luns are properly assigned in array - with host and cluster attributes.
[10:07:54] <tsoome> if the lun path switched to an array port where the vxvm / os properties are not correctly set, you can see issues...
[10:08:45] <clapont> each pool has its own lun.. I try to understand the last line..
[10:10:03] <tsoome> in array, when you set up lun access and masking, you also need to set host properties - os type, clustering type and things like that
[10:11:25] <tsoome> those are documented in array vendor host connectivity manual
[10:11:26] <clapont> the HP MSA doesn't have OS/clustering info, just the initiators which are applied to volume-LUN mappings; the Brocade switches have WWNs to allow
[10:12:19] <tsoome> in that case, it is good idea to check if VCS has notes about supported arrays
[10:12:55] <tsoome> and if MSA needs any special treatment.
[10:14:42] <clapont> the VCS was not involved here, it is the higher level, above VxDMP; the VCS either can mount the zpool or not; no import/export was done... I see no reason to look for VCS/storage fine-tuning here
[10:16:41] <tsoome> vcs has its roots in vxvm and depends on fencing etc, therefore it is a good idea to check if they have any recommendations about storage connectivity.
[10:21:30] <clapont> as general ideas, yes these are very good to check: VCS-storage + Storage-OS + patches OS/SAN/Storage
[10:22:17] <clapont> for me, I think I should dig on VxDMP+Storage+FMA
[10:23:08] <tsoome> FMA should have left logs
[10:23:36] <clapont> tsoome: thank you for all the ideas and the time spent; I am intrigued by why _only one_ zpool had problems and I will come back with updates when I have them; maybe they will help other people
[10:24:59] <tsoome> and, if other pools were happy about that last path, then FMA can be excluded
[10:26:08] <clapont> tsoome: yes. thank you very much for the talk. I doubt that an Oracle SR would have listed all of these; the response to my last ticket included a patch that I already have; the response to the previous ticket was not complete, needing more patches than estimated.. a support contract is very good to have but is not the definitive answer
[10:27:45] <tsoome> with any support, the general rule is: make sure you have updated to the latest and still have the issue :D no engineer likes to spend time on an issue which was fixed a long time ago...
[10:28:33] <clapont> yes, the other zpools were fine.. the one which got suspended was... an average low traffic zpool -the really high traffic zpool had no errors!
[10:33:38] <tsoome> ya well, we have zfs -> sd -> vxvm dmp -> fc stack, so that kind of points the finger… :D
[10:40:18] *** alanc <alanc!~alanc@inet-hqmc02-o.oracle.com> has quit IRC (Remote host closed the connection)
[10:40:45] *** alanc <alanc!~alanc@inet-hqmc02-o.oracle.com> has joined #illumos
[10:41:44] <tomww> New Podcast "Friends of Illumos", recorded at @FOSDEM 2019: https://twitter.com/sfepackages/status/1223541032022958088
[10:42:36] <clapont> tsoome: "sd" ? I don't understand, what is "sd".
[10:43:03] <tsoome> sd is scsi disk driver
[10:43:31] <tsoome> on sparc you can also see ssd, which is the scsi disk driver on top of the fc stack.
[10:45:44] *** mnowak_ <mnowak_!~mnowak_@94.142.238.232> has quit IRC (Quit: Leaving)
[10:45:59] <Agnar> tomww: where are you?
[10:47:37] <clapont> tsoome: ah! yes, it's old sparc with "ssd" for the harddrives :-)
[10:48:26] <clapont> tsoome: thank you again, thank you very much for ideas and talk. I wish you (and everyone) a nice weekend!
[10:49:53] *** mnowak_ <mnowak_!~mnowak_@94.142.238.232> has joined #illumos
[10:50:57] <tomww> Agnar: Hotel, on the way to FOSDEM now
[10:52:31] <Agnar> tomww: ah!
[10:52:34] *** andy_js <andy_js!~andy@90.215.171.48> has joined #illumos
[11:08:52] *** mgerdts <mgerdts!~textual@96-41-228-208.dhcp.ftbg.wi.charter.com> has quit IRC (Ping timeout: 268 seconds)
[12:51:17] *** ldepandis <ldepandis!~ldepandis@unaffiliated/ldepandis> has joined #illumos
[13:03:17] *** elegast <elegast!~elegast@83-161-180-214.mobile.xs4all.nl> has joined #illumos
[13:04:20] *** kayront <kayront!~kayront@unaffiliated/kayront> has joined #illumos
[13:41:43] <Woodstock> mnowak_: from strtod(3): In default mode for strtod(), only decimal, INF/INFINITY, and NAN/NAN(n-char-sequence) forms are recognized. In C99/SUSv3 mode, hexadecimal strings are also recognized.
[13:43:12] <Woodstock> mnowak_: so there's probably an issue with older g++ not providing a c99 environment
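(Since the paste link may not outlive the log, here is a minimal stand-alone sketch - hypothetical, not mnowak_'s example - of the behaviour Woodstock describes; build with e.g. cc -std=c99 to get the C99 environment:)

    #include <stdio.h>
    #include <stdlib.h>

    int
    main(void)
    {
        const char *s = "0x1.8p1";  /* hex float: 1.5 * 2^1 == 3.0 */
        char *end;
        double v = strtod(s, &end);

        /*
         * Built as C99 (or later) this prints 3 with nothing left over;
         * in the default/C89 environment strtod() typically stops at the
         * 'x' and returns 0.
         */
        printf("value = %g, unparsed = \"%s\"\n", v, end);
        return (0);
    }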
[14:29:25] <sjorge> Woodstock: https://illumos.topicbox.com/groups/networking/T774cf7de02648147/a-tale-of-two-fixes-9832 re the odd ipv6-ipv4 macOS issue we talked about briefly
[14:30:22] <sjorge> And the ndp(d) stuff https://www.illumos.org/issues/2338
[14:30:33] <sjorge> There was apparently an older ticket from 8 years ago
[15:17:56] *** jimklimov <jimklimov!~jimklimov@151.216.139.38> has joined #illumos
[15:23:10] *** jimklimov <jimklimov!~jimklimov@151.216.139.38> has quit IRC (Quit: Leaving.)
[15:51:54] *** jimklimov <jimklimov!~jimklimov@151.216.139.38> has joined #illumos
[15:59:49] *** Tsesarevich <Tsesarevich!Tsesarevic@fluxbuntu/founder/joejaxx> has quit IRC ()
[16:00:54] *** Tsesarevich <Tsesarevich!Tsesarevic@fluxbuntu/founder/joejaxx> has joined #illumos
[16:07:20] *** MaidenAmerica <MaidenAmerica!~insomnia@shadowcat/actuallyamemberof/lollipopguild.insomnia> has quit IRC (Ping timeout: 268 seconds)
[16:17:48] *** jcea <jcea!~Thunderbi@2001:bc8:2ecd:caed:7670:6e00:7670:6e00> has joined #illumos
[16:19:53] *** MaidenAmerica <MaidenAmerica!~insomnia@shadowcat/actuallyamemberof/lollipopguild.insomnia> has joined #illumos
[16:35:39] *** Kruppt <Kruppt!~Kruppt@50.111.11.107> has joined #illumos
[16:48:26] *** arnoldoree <arnoldoree!~arnoldore@ranoldoree.plus.com> has joined #illumos
[17:01:47] *** _alhazred <_alhazred!~Alex@mobile-access-bcee2b-11.dhcp.inet.fi> has joined #illumos
[17:14:18] *** jimklimov <jimklimov!~jimklimov@151.216.139.38> has quit IRC (Quit: Leaving.)
[17:17:15] *** idodeclare <idodeclare!~textual@2600:1700:1101:17c0:dcdb:8dcc:b73c:4883> has joined #illumos
[17:20:36] *** rann <rann!sid175221@gateway/web/irccloud.com/x-epbssgfrlczamkmj> has quit IRC ()
[17:21:16] *** rann <rann!sid175221@gateway/web/irccloud.com/x-ekunrnzjnvseimzd> has joined #illumos
[17:24:15] *** jollyd <jollyd!~alarcher@aaubervilliers-682-1-56-92.w90-88.abo.wanadoo.fr> has joined #illumos
[18:00:52] *** MerlinDMC <MerlinDMC!~merlin@163.172.186.44> has quit IRC (Quit: ZNC 1.6.5+deb1+deb9u2 - http://znc.in)
[18:01:35] *** MerlinDMC <MerlinDMC!~merlin@163.172.186.44> has joined #illumos
[18:04:37] *** poige <poige!sid250374@gateway/web/irccloud.com/x-bjxdxbeemdhivvdg> has quit IRC ()
[18:04:53] *** poige <poige!sid250374@gateway/web/irccloud.com/x-xfsigtrwhkqiudci> has joined #illumos
[18:41:56] *** MerlinDMC <MerlinDMC!~merlin@163.172.186.44> has quit IRC (Quit: ZNC 1.7.5+deb2 - https://znc.in)
[18:43:54] *** MerlinDMC <MerlinDMC!~merlin@163.172.186.44> has joined #illumos
[19:20:38] *** arnold_oree <arnold_oree!~arnoldore@ranoldoree.plus.com> has joined #illumos
[19:29:09] *** jellydonut <jellydonut!~quassel@s91904424.blix.com> has quit IRC (Read error: Connection reset by peer)
[19:29:24] *** MerlinDMC <MerlinDMC!~merlin@163.172.186.44> has quit IRC (Quit: ZNC 1.7.5+deb2 - https://znc.in)
[19:30:39] *** jellydonut <jellydonut!~quassel@s91904424.blix.com> has joined #illumos
[19:32:34] *** jellydonut <jellydonut!~quassel@s91904424.blix.com> has quit IRC (Client Quit)
[19:35:38] *** jellydonut <jellydonut!~quassel@s91904428.blix.com> has joined #illumos
[19:38:51] *** arnoldoree <arnoldoree!~arnoldore@ranoldoree.plus.com> has quit IRC (Quit: Leaving)
[19:54:54] *** MerlinDMC <MerlinDMC!~merlin@163.172.186.44> has joined #illumos
[20:05:22] *** elegast <elegast!~elegast@83-161-180-214.mobile.xs4all.nl> has quit IRC (Ping timeout: 268 seconds)
[20:25:41] *** chromatin <chromatin!~chromatin@d149-67-249-75.try.wideopenwest.com> has joined #illumos
[20:27:02] <chromatin> So, I learned about /etc/path_to_inst the hard way when reconfiguring the virtio devices [ordering] in a VM. virtio-net got bumped from slot 5 to slot 6, then of course was not detected at boot. vioif1 showed up but network was not configured. Is there a way for the system to automatically recognize devices have moved slots, or a way to configure the system to not be dependent on physical slot / PCI bus numbering?
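(For context: /etc/path_to_inst pins driver instance numbers to physical device paths, one "physical-path" instance "driver" entry per line. The entries below are made-up virtio examples, not taken from this VM, showing why a slot change produces vioif1: the NIC appears under a new path, gets the next free instance, and the old vioif0 binding, plus any network config tied to it, is left behind.)

    # original binding; the network configuration was tied to vioif0
    "/pci@0,0/pci1af4,1000@5" 0 "vioif"
    # the same NIC after moving to slot 6 gets a new path and becomes vioif1
    "/pci@0,0/pci1af4,1000@6" 1 "vioif"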
[20:36:42] <tsoome> chromatin: in computer systems it is rather important to maintain your knowledge of what is installed where :) … illumos/solaris does hard work to maintain that knowledge.
[20:38:54] <chromatin> yes, obviously. I still wish it were a little more ergonomic to e.g. add a new device. I understand that in a physical system one would not slide all the cards down in order to plug in a new card, but this [renumbering] is quite likely in a VM
[20:39:55] <chromatin> And I hate to be that guy, but my other OSes handle it seemingly fine.
[20:41:49] *** pmooney <pmooney!~pmooney@67-4-175-230.mpls.qwest.net> has quit IRC (Quit: WeeChat 2.7)
[20:47:35] <jbk> the problem is -- do you try to preserve configuration based on the physical location or on the device itself?
[20:47:49] <jbk> for most physical servers, you probably keep things plugged into the same location
[20:53:14] *** mnrmnaugh <mnrmnaugh!~mnrmnaugh@unaffiliated/mnrmnaugh> has quit IRC (Ping timeout: 265 seconds)
[20:53:48] <chromatin> jbk: This is true, but I wonder if most server OSes are running on physical servers in 2020 or no?
[20:53:57] <chromatin> I mean to say, the greatest number of instances
[21:10:36] *** mnrmnaugh <mnrmnaugh!~mnrmnaugh@unaffiliated/mnrmnaugh> has joined #illumos
[21:12:41] <jbk> i mean, i suppose you could look at both the mac and the hw location, and prefer the mac and fall back to the last hw path..
[21:13:53] <jbk> but no one's done the work to do that
[21:14:23] <chromatin> linux for all its faults has apparently moved to deterministic numbering for network devices
[21:14:28] <tsoome> the sad truth is, the network setup in illumos needs much love. it's basically the same crap as it was at fork time...
[21:14:53] <chromatin> i am re-learning how to rebuild from source so i can try to add some device properties =)
[21:15:17] <chromatin> yak shaving all morning trying to add a ssd-backed zvol to the VM
[21:15:55] <tsoome> chromatin: don't even start about linux networking :D people spend days trying to get it configured properly
[21:25:04] *** BOKALDO <BOKALDO!~BOKALDO@87.110.147.150> has quit IRC (Quit: Leaving)
[21:25:20] *** mnrmnaugh <mnrmnaugh!~mnrmnaugh@unaffiliated/mnrmnaugh> has quit IRC (Ping timeout: 265 seconds)
[21:31:57] *** mnrmnaugh <mnrmnaugh!~mnrmnaugh@unaffiliated/mnrmnaugh> has joined #illumos
[21:51:48] *** mnrmnaugh <mnrmnaugh!~mnrmnaugh@unaffiliated/mnrmnaugh> has quit IRC (Ping timeout: 260 seconds)
[21:57:22] *** mnrmnaugh <mnrmnaugh!~mnrmnaugh@unaffiliated/mnrmnaugh> has joined #illumos
[22:40:42] <sjorge> Woodstock so looping back to the i40e stuff, X722 is the chipset and X557 is the PHY, and X557 in this case implements the ethernet port instead of having an SFP+ hooked up?
[22:40:50] <sjorge> Or did I get that wrong?
[22:41:11] <sjorge> And we probably don't support the PHY so that is why we can't get the link state via the chipset?
[22:54:46] *** pmooney <pmooney!~pmooney@67-4-175-230.mpls.qwest.net> has joined #illumos
[23:08:56] <jollyd> any idea if open_memstream() is implemented somewhere in the illumos universe?
[23:15:46] <jollyd> it seems it is required by POSIX but not implemented in illumos; Solaris gained support for it: https://docs.oracle.com/cd/E88353_01/html/E37843/open-memstream-3c.html
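(For reference, a minimal sketch of what open_memstream() provides per POSIX.1-2008 - so, per this discussion, it only builds where libc ships it, e.g. Solaris 11.4: writes to the stream go into a malloc'd buffer whose address and length are published on fflush/fclose.)

    #include <stdio.h>
    #include <stdlib.h>

    int
    main(void)
    {
        char *buf = NULL;
        size_t len = 0;
        FILE *fp = open_memstream(&buf, &len);  /* POSIX.1-2008 */

        if (fp == NULL)
            return (1);
        fprintf(fp, "hello from pid %d", 12345);
        fclose(fp);     /* buf and len are only final after fflush/fclose */
        printf("%zu bytes: %s\n", len, buf);
        free(buf);
        return (0);
    }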
[23:17:42] *** tsoome <tsoome!~tsoome@91.209.240.229> has quit IRC (Read error: Connection reset by peer)
[23:18:00] *** tsoome <tsoome!~tsoome@91.209.240.229> has joined #illumos
[23:34:01] *** tsoome <tsoome!~tsoome@91.209.240.229> has quit IRC (Read error: Connection reset by peer)
[23:35:34] *** tsoome <tsoome!~tsoome@91.209.240.229> has joined #illumos
[23:47:20] *** chromatin <chromatin!~chromatin@d149-67-249-75.try.wideopenwest.com> has quit IRC (Quit: chromatin)
[23:48:51] *** chromatin <chromatin!~chromatin@d149-67-249-75.try.wideopenwest.com> has joined #illumos
[23:59:02] *** arnold_oree <arnold_oree!~arnoldore@ranoldoree.plus.com> has quit IRC (Quit: Leaving)