November 6, 2019
[00:10:45] <andyf> LeftWing - I appreciate the time you're spending on Go and the illumos platform. I'm in the process of switching my ZFS replication stuff over to zrepl, and having go 1.13 will help there!
[00:11:24] *** jcea <jcea!~Thunderbi@2001:41d0:1:8a82:7670:6e00:7670:6e00> has joined #illumos
[00:11:31] <LeftWing> andyf: That's great! The zrepl person (Christian Schwarz) is really friendly; we talked a bunch at the OpenZFS thing
[00:11:42] <LeftWing> Let me know if I can help
[00:12:20] <andyf> Do you have build recipes/patches for go 1.13?
[00:12:21] <LeftWing> I tinkered a bit with zrepl some time ago, but I was already knee-deep in a particular snapshot regime and it didn't quite mesh with what zrepl wanted to do -- but it seemed really neat
[00:12:37] <LeftWing> andyf: I have...
[00:12:39] <LeftWing> something
[00:12:43] <LeftWing> haha
[00:13:06] <LeftWing> I have a go1.13.1.illumos-amd64.tar.gz if you want it
[00:13:25] <andyf> I used to use zrep (shell script) - it's worked well for years including nice mechanisms for migrating zones between hosts (reverse replication flow, flip read-only etc..)
[00:13:40] <andyf> although it was very heavily modified..
[00:15:21] <andyf> The biggest problem I had with zrepl was the lack of support in Go 1.12 for readv() - I'm using a patch from Christian that works around that
[00:15:45] <andyf> do you know if the system call interface problem is resolved in 1.13? (save patching in the syscall numbers)
[00:15:49] <LeftWing> err
[00:15:54] <LeftWing> So
[00:16:00] <LeftWing> How is it using readv() today?
[00:16:06] <LeftWing> From which package?
[00:16:31] <andyf> Is this where I gave you the impression I actually understand Go? :p .. one sec
[00:17:00] <LeftWing> ha
[00:17:09] <LeftWing> I mean, I've apparently given you the impression that I do ;D
[00:17:39] <LeftWing> If it is using a function in the "syscall" package, and that function does not exist for our platform, I'm pretty sure it will never exist because they've frozen "syscall" as a legacy thing
[00:17:50] <andyf> https://github.com/omniosorg/omnios-extra/blob/master/build/zrepl/patches/233.patch#L238
[00:18:03] <LeftWing> There's a new package, golang.org/x/unix or whatever, (aka sometimes just "unix") which is where future interfaces like that should go
[00:19:15] <LeftWing> Oof
[00:19:17] <LeftWing> That's quite a patch
[00:19:43] *** Yogurt <Yogurt!~Yogurt@104-7-67-228.lightspeed.sntcca.sbcglobal.net> has quit IRC (Remote host closed the connection)
[00:20:15] <andyf> If I manually patch the syscall number into src/syscall/zsysnum_solaris_amd64.go (in go), then it appears to work
[00:20:25] <andyf> but that's not a great patch of course
[00:20:40] <andyf> Christian's patch just adds a fallback readv() that doesn't use the syscall
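(A fallback along those lines can be written in plain Go; a minimal sketch, assuming the caller only needs readv-like semantics and not the syscall's atomicity - the name readvFallback is illustrative, not what the actual patch uses:)

    // needs: import "io"
    //
    // readvFallback fills each buffer in turn with ordinary Read calls.
    // It is not atomic the way readv(2) is, but it needs no platform
    // syscall support at all.
    func readvFallback(r io.Reader, bufs [][]byte) (int, error) {
            total := 0
            for _, b := range bufs {
                    n, err := r.Read(b)
                    total += n
                    if err != nil || n < len(b) {
                            // short read or error: stop, as readv would at EOF
                            return total, err
                    }
            }
            return total, nil
    }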
[00:21:13] *** Raugharr <Raugharr!~david@50-244-36-149-static.hfc.comcastbusiness.net> has quit IRC (Ping timeout: 265 seconds)
[00:22:31] <LeftWing> Yeah
[00:22:51] <LeftWing> So I think we'd want to add a readv() wrapper to the unix package basically
[00:22:57] <andyf> ok, so I won't hold out hope for Go 1.13 supporting readv through the syscall module - but zrepl 0.<next> should have the workaround in it
[00:23:27] <LeftWing> I don't think anything will ever change again in the syscall module basically
[00:23:29] <LeftWing> On any platform
[00:25:13] <LeftWing> e.g., we could presumably get something like this added but for readv() writev(): https://github.com/golang/sys/blob/c1f44814a5cd81a6d1cb589ef1e528bc5d305e07/unix/syscall_solaris.go#L478-L515
[00:25:38] *** Yogurt <Yogurt!~Yogurt@104-7-67-228.lightspeed.sntcca.sbcglobal.net> has joined #illumos
[00:26:12] <andyf> OI has this patch... https://github.com/OpenIndiana/oi-userland/blob/oi/hipster/components/developer/golang-112/patches/0602-update-list-of-syscall-numbers.patch
[00:26:26] <igork> https://paste.dilos.org/?04ad9a48f92baa60#v+zat/Uz0wXSvWOjILKSjY2cToiNgQFZU9RoqQjQMCE=
[00:26:37] <igork> it is an issue with ucode
[00:26:42] <LeftWing> Anything that involves system call numbers is not the way to go
[00:27:07] <igork> i see hardlinks in Makefile.links, but i have not found the primary file
[00:28:26] <igork> what did i miss?
[00:28:39] <andyf> LeftWing - well, yes... in principle
[00:29:24] <andyf> pkgsrc adds just one, ioctl()
[00:29:44] <LeftWing> heh
[00:30:58] <LeftWing> So you could try something like: https://github.com/jclulow/wireguard-go-illumos-wip/blob/jclulow/tun/asm_solaris_amd64.s
[00:31:30] <LeftWing> plus, then: https://github.com/jclulow/wireguard-go-illumos-wip/blob/jclulow/tun/tun_illumos.go#L20-L103
[00:31:48] <LeftWing> Probably it should be named asm_illumos_amd64.s now that we have GOOS=illumos
[00:32:08] <LeftWing> Basically I lifted the parts of the x/sys/unix package that they use to call back into libc
[00:32:18] <LeftWing> And added some wrappers for things I wanted (here, getmsg/putmsg/ioctl)
[00:32:26] <LeftWing> You could totally do the same with readv/writev
[00:33:06] <LeftWing> I'm sure this is using some kind of private symbols in the base library of Go (where the libc caller bits actually are) but they're also not really in flux, and you'll know at build time if they're missing symbols or not
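(A rough sketch of what such a readv wrapper might look like, following the libc-call pattern in the two files linked above; it assumes the asm_illumos_amd64.s stub that forwards sysvicall6 to the runtime's libc trampoline, and the package and type names here are illustrative rather than what the repo actually contains:)

    package tun // illustrative; wherever the wrapper lives

    import (
            "syscall"
            "unsafe"
    )

    //go:cgo_import_dynamic libc_readv readv "libc.so"
    //go:linkname procReadv libc_readv
    var procReadv uintptr

    // Implemented in the assembly stub as a jump to syscall·sysvicall6.
    func sysvicall6(fn, nargs, a1, a2, a3, a4, a5, a6 uintptr) (r1, r2 uintptr, err syscall.Errno)

    // iovec mirrors struct iovec from <sys/uio.h>.
    type iovec struct {
            base *byte
            len  uint64
    }

    // readv calls readv(2) in libc with one iovec per destination buffer.
    func readv(fd int, bufs [][]byte) (int, error) {
            iovs := make([]iovec, 0, len(bufs))
            for _, b := range bufs {
                    if len(b) > 0 {
                            iovs = append(iovs, iovec{base: &b[0], len: uint64(len(b))})
                    }
            }
            if len(iovs) == 0 {
                    return 0, nil
            }
            r1, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procReadv)), 3,
                    uintptr(fd), uintptr(unsafe.Pointer(&iovs[0])), uintptr(len(iovs)), 0, 0, 0)
            if e1 != 0 {
                    return int(r1), e1
            }
            return int(r1), nil
    }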
[00:41:24] <andyf> cool - I'll have a look (assuming the lack of readv() turns out to be a big enough performance hit)
[00:44:56] <andyf> igork - are you saying that usr/src/data/ucode/intel/000206D6-01 is missing in your tree?
[00:46:20] <andyf> https://github.com/illumos/illumos-gate/blob/master/usr/src/data/ucode/intel/000206D6-01
[00:49:11] <igork> andyf: thanks
[00:49:23] <igork> the illumos-joyent tree is missing 3 files
[00:49:40] <igork> just checked original illumos-gate - all fine
[00:50:15] <igork> https://paste.dilos.org/?4826d9871586244c#XqegST82tdeeq6KJdj7XQXjKURs3f/1FGzqTCrOPxi4=
[00:53:39] *** Qatz- <Qatz-!~db@2601:187:8400:5::42d> has joined #illumos
[00:54:23] *** Qatz <Qatz!~db@2601:187:8400:5::d9d> has quit IRC (Ping timeout: 250 seconds)
[00:55:48] *** Qatz- is now known as Qatz
[00:57:54] <jlevon> sigh, you made me check. those files are there in illumos-joyent.
[01:01:09] *** joltman <joltman!znc@gateway/vpn/privateinternetaccess/joltman> has quit IRC (Remote host closed the connection)
[01:07:20] <jperkin> toasterson: new trunk repo published which has the fixed gcc dependencies
[01:08:37] *** Yogurt <Yogurt!~Yogurt@104-7-67-228.lightspeed.sntcca.sbcglobal.net> has quit IRC (Remote host closed the connection)
[01:08:37] <LeftWing> jlevon: I don't see how they could not be :P
[01:15:43] *** insomnia <insomnia!~insomnia@shadowcat/actuallyamemberof/lollipopguild.insomnia> has joined #illumos
[01:19:31] *** Yogurt <Yogurt!~Yogurt@104-7-67-228.lightspeed.sntcca.sbcglobal.net> has joined #illumos
[01:20:29] <igork> jlevon: https://paste.dilos.org/?2ebb49989b4530fb#XmZq9kSjqTaVM8Bu1G5cla/yl/+97jL+NF4j5e0dlGc=
[01:26:27] <rmustacc> jlevon: Thanks for the news on ixgbe.
[01:26:43] <jbk> igork: i see it here w/ the same HEAD
[01:26:50] <AmyMalik> rmustacc: have you experienced sigbus under high load?
[01:27:14] <rmustacc> On what platform?
[01:27:20] <AmyMalik> i86pc, 64 bit
[01:27:30] <rmustacc> No. Not one that wasn't something I induced.
[01:27:47] <igork> jbk: it's a really strange issue - it was a local env with 'git pull' from time to time. i'll try a new git clone
[01:27:52] <AmyMalik> rmustacc: running multiple compile jobs does it
[01:28:07] <AmyMalik> I call it the illumos epilepsy bug
[01:28:19] <rmustacc> I'd really rather you didn't call it that.
[01:28:26] <AmyMalik> why
[01:28:51] <rmustacc> I have different views on relating medical conditions to things.
[01:28:58] <rmustacc> Anyways, I'd start by looking at the core files and summarizing what's going on.
[01:29:04] <rmustacc> Are they related mappings, related addresses, something else?
[01:29:12] <rmustacc> The same program, different programs, etc.?
[01:29:18] <AmyMalik> Different programs.
[01:29:50] <rmustacc> So yeah, I'd start just taking it apart and correlating data.
[01:30:14] <rmustacc> Since it's reproducible, it'll be debuggable at some point.
[01:30:18] <AmyMalik> I know that the bug has killed tmux before.
[01:30:46] <AmyMalik> The core files always stop in some random place, and I don't think it's necessarily related to malloc.
[01:30:51] <rmustacc> Anyways, I'd start with the things I talked about. What mapping, what reason, etc.
[01:31:40] <rmustacc> Given the frequency you're describing, maybe write a quick script that wraps the ptools, elfdump -n, mdb, etc. and start to see if there's any relationship.
[01:32:02] <AmyMalik> it's infrequent but it is inducible
[01:32:09] <rmustacc> I might also set up a DTrace script that looks at the stack that is dropping the SIGBUS on the process and see if you can get more context that way as well.
[01:32:47] <AmyMalik> rmustacc: _mjg tried to have me do something like that but i'm an incompetent foon, so that did not end up being successful
[01:33:26] <rmustacc> Well, no time to learn like the present.
[01:33:37] <AmyMalik> When I read the elf notes on each crash of each program that produced a core file, it was an ENOMEM SIGBUS.
[01:34:03] <AmyMalik> if i'm not mistaken, at no point was the system about to run out of swap.
[01:34:49] <rmustacc> Again, I'd write a script that extracts the address and reason from all the processes, determines what that mapping was, and also grabs all the reasons and start from there.
[01:34:59] <rmustacc> A shell script in this context.
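(A sketch of the kind of script being described, assuming the cores are sitting in one directory as core.*; the egrep is only a crude filter over elfdump's note output and the tool invocations may need adjusting:)

    #!/bin/ksh
    # Summarise a pile of cores so fault addresses, mappings and programs can be compared.
    for c in core.*; do
            echo "==== $c"
            file "$c"                                # which binary dumped core
            echo ::status | mdb "$c"                 # terminating signal and fault address
            elfdump -n "$c" | egrep -i 'sig|addr'    # decoded siginfo from the ELF notes
            pstack "$c" | head -30                   # user stack at the time of the fault
            pmap "$c" | head -20                     # mappings, to see what the address fell in
    done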
[01:35:38] <AmyMalik> if i do write one, i will inform you at once
[01:36:45] <rmustacc> Unfortunately, we can suggest how to make progress on your problem, but we can't do it for you.
[01:49:57] <LeftWing> rmustacc: I think the next step we had suggested was trying to turn FC_MAKE_ERR() into an SDT probe basically
[01:50:04] <LeftWing> To catch the place where ENOMEM is passed to it
[01:51:15] <LeftWing> 50% of the 1GB of memory in the VM was also in use by the kernel, which seemed a bit high to me
[01:51:28] <LeftWing> (not including ZFS file data)
[01:51:59] <rmustacc> OK. Well, if others have been talking about this, then it'd be useful to get it in one place and tell me about it.
[01:52:25] <rmustacc> Starting from the top with each person in IRC isn't a great way for us to help solve the problem.
[01:52:58] <LeftWing> As I recall we had looked at "swap -sh" during an occurrence of the issue and it didn't seem like we'd run out
[01:53:18] <LeftWing> But it was true that ::memstat showed precious little actual free memory
[01:55:45] <LeftWing> Anyway I think there are at least two things to investigate: Where is the kernel memory usage for such a small memory system going, and where exactly are we generating the ENOMEM that goes into the SIGBUS in trap() eventually
[01:56:55] <AmyMalik> Oh my daze
[01:57:00] <AmyMalik> so i need, what
[01:59:15] <AmyMalik> a copy of the gate, $250, a contact at Oracle, and the limitless pill?
[01:59:34] <LeftWing> I mean, you need a copy of the gate
[01:59:46] <LeftWing> And an illumos (virtual?) machine of some kind in which to build it
[02:00:08] <rmustacc> Or just a DTrace script?
[02:00:14] <alanc> contacts at Oracle are less useful than you may imagine
[02:00:32] <rmustacc> No one is asking you for money and we're just trying to help.
[02:00:45] <AmyMalik> alanc: agreed
[02:00:47] <LeftWing> You could certainly write a DTrace script that covers all of the places we use FC_MAKE_ERR
[02:01:04] <LeftWing> There are a lot though
[02:01:10] <AmyMalik> rmustacc: I want to compensate you for this crap but i'm myself broke
[02:01:22] <rmustacc> I don't want to take your money.
[02:01:29] <AmyMalik> k
[02:01:42] <rmustacc> Folks have issues, I try to help. But it's just something that goes both ways, that's all.
[02:01:59] <LeftWing> I think Robert's point is that we can try and point you in the right direction, but it's unlikely that we're going to be able to do it all for you -- paid or otherwise
[02:02:02] <AmyMalik> I'm joking around a bit because I'm tired. it's past my bedtime
[02:02:11] <LeftWing> Sleep is important!
[02:02:21] <AmyMalik> but it's only 5pm here
[02:02:38] <LeftWing> I went to bed at 8pm yesterday. It was pretty good.
[02:03:33] <AmyMalik> rmustacc: list all of the manual pages I need to peruse for more information on how to get the information you need
[02:03:47] <AmyMalik> be as terse or specific as you want
[02:04:17] <LeftWing> Do you know how to trace the return value of a function in the kernel?
[02:04:26] <AmyMalik> Sadly no
[02:04:42] <LeftWing> OK so that's a good thing to experiment with first
[02:04:46] <LeftWing> To get a feel for how that works
[02:05:26] <AmyMalik> I've never had a proper fiddle with dtrace.
[02:06:40] <AmyMalik> I wonder if there's a memory leak in the kernel...???
[02:06:47] <jbk> someone once told me the best way to learn dtrace is to use it in anger
[02:06:49] <AmyMalik> Just an outlandish idea
[02:07:08] *** Yogurt <Yogurt!~Yogurt@104-7-67-228.lightspeed.sntcca.sbcglobal.net> has quit IRC (Remote host closed the connection)
[02:07:49] <AmyMalik> you're making my abs hurt jbk
[02:08:05] *** Yogurt <Yogurt!~Yogurt@104-7-67-228.lightspeed.sntcca.sbcglobal.net> has joined #illumos
[02:08:30] <jbk> thankfully mine are hurting less (well wasn't really my abs.. just everything behind them)
[02:08:40] <LeftWing> e.g., you can look at all the calls to kmem_alloc() with, say: dtrace -q -n 'fbt::kmem_alloc:entry { self->sz = arg0 } fbt::kmem_alloc:return /self->sz != 0/ { printf("%s(%u) -> %p\n", probefunc, self->sz, arg1); self->sz = 0; }'
[02:09:11] <AmyMalik> woah
[02:09:49] <AmyMalik> you know you are a mere mortal when you do not understand that incantation.
[02:09:57] <LeftWing> https://illumos.org/books/dtrace is a good thing to look at
[02:10:04] <jbk> have you used awk much?
[02:10:14] <AmyMalik> jbk: no.
[02:10:32] <Smithx10> @AmyMalik https://www.google.com/search?biw=1440&bih=798&sxsrf=ACYBGNQZ_g488vLKwc15nvR8d7HZoMX4zQ%3A1573002620258&ei=fB3CXYG6D4_yasy3jtgH&q=site%3Asmartos.org%2Fbugview+dtrace&oq=site%3Asmartos.org%2Fbugview+dtrace&gs_l=psy-ab.12...0.0..3877...0.0..0.0.0.......0......gws-wiz.p6aYWg_KOrM&ved=0ahUKEwiB6KSzs9TlAhUPuRoKHcybA3sQ4dUDCAs
[02:10:34] <AmyMalik> i am allergic to awk
[02:10:39] <Smithx10> a billion examples
[02:10:46] <LeftWing> But awk is delightful!
[02:10:51] <AmyMalik> Smithx10: holy link tracking batman
[02:11:07] <AmyMalik> LeftWing: they told my mum that about garlic and she still throws up.
[02:11:17] *** denk <denk!~denis@devel.tambov.ru> has quit IRC (Ping timeout: 252 seconds)
[02:11:28] <LeftWing> Well, alright then!
[02:11:31] <AmyMalik> i have to take an industrial benadryl to be around awk
[02:11:54] <Smithx10> lol, trolling?
[02:11:57] *** denk <denk!~denis@devel.tambov.ru> has joined #illumos
[02:12:16] *** Yogurt <Yogurt!~Yogurt@104-7-67-228.lightspeed.sntcca.sbcglobal.net> has quit IRC (Ping timeout: 240 seconds)
[02:12:21] <AmyMalik> Smithx10: only slightly
[02:12:32] <Smithx10> nn1
[02:15:24] <AmyMalik> but yeah dtrace is kinda like witchcraft to me
[02:16:00] <LeftWing> It is basically a tool for letting you run a limited script in response to certain events happening
[02:16:18] <LeftWing> So in the hunt for where your ENOMEM is coming from, you'd want to look at all the functions that might have returned it
[02:16:32] <LeftWing> The fbt provider makes the return value available in the "return" probe for a function in the kernel, so you can trace all of the returns
[02:16:49] <LeftWing> Filter on the ones that are ENOMEM, get the stack() for each one, etc
[02:17:24] <LeftWing> The book does a more concrete job of explaining what it is and how to get started though
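(A minimal starting point in that spirit; as_fault() is an assumption about a useful function to watch - it is on the VM fault path whose return code eventually becomes the SIGBUS - and in an fbt return probe arg1 is the function's return value:)

    # count nonzero returns from as_fault() by return value and kernel stack;
    # let it run while reproducing the crashes, then Ctrl-C to print the aggregation
    pfexec dtrace -n 'fbt::as_fault:return /arg1 != 0/ { @[arg1, stack()] = count(); }'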
[02:17:33] *** clapont <clapont!~clapont@unaffiliated/clapont> has quit IRC (Read error: Connection reset by peer)
[02:18:38] <AmyMalik> LeftWing: would you say that the book is accessible to the 4th quintile Windows user?
[02:18:46] <AmyMalik> (not that that's relevant{
[02:18:49] <AmyMalik> )*
[02:19:30] *** jcea <jcea!~Thunderbi@2001:41d0:1:8a82:7670:6e00:7670:6e00> has quit IRC (Quit: jcea)
[02:20:00] *** ed209 <ed209!~ed209@165.225.128.67> has quit IRC (Remote host closed the connection)
[02:20:07] *** ed209 <ed209!~ed209@165.225.128.67> has joined #illumos
[02:20:31] <LeftWing> I think people usually demonstrate capacity to rise to whatever challenge is personally interesting to them -- and you've invested enough energy in this that it feels like it's of at least some interest to you!
[02:23:04] *** clapont <clapont!~clapont@unaffiliated/clapont> has joined #illumos
[02:27:07] <AmyMalik> By any chance was that book inherited from Oracle?
[02:27:18] <AmyMalik> Or was it written de novo?
[02:27:45] <AmyMalik> It feels like something Sun people would write. Hence why I ask.
[02:30:27] *** jemershaw <jemershaw!~jemershaw@73.81.153.166> has quit IRC (Read error: Connection reset by peer)
[02:36:46] *** jemershaw <jemershaw!~jemershaw@73.81.153.166> has joined #illumos
[02:42:26] *** jemershaw <jemershaw!~jemershaw@73.81.153.166> has quit IRC (Ping timeout: 268 seconds)
[02:43:05] <rmustacc> It was written by the folks who wrote DTrace.
[03:26:52] *** alanc <alanc!~alanc@inet-hqmc01-o.oracle.com> has quit IRC (Remote host closed the connection)
[03:27:19] *** alanc <alanc!~alanc@inet-hqmc01-o.oracle.com> has joined #illumos
[03:34:46] *** joltman <joltman!znc@gateway/vpn/privateinternetaccess/joltman> has joined #illumos
[04:32:43] *** triffid <triffid!triffid@lovecraft-ipv6.mcclung.systems> has quit IRC (Quit: WeeChat 2.0.1)
[04:35:03] *** jemershaw <jemershaw!~jemershaw@73.81.152.203> has joined #illumos
[04:52:56] *** jemershaw <jemershaw!~jemershaw@73.81.152.203> has quit IRC (Ping timeout: 276 seconds)
[04:55:28] *** jemershaw <jemershaw!~jemershaw@73.81.152.203> has joined #illumos
[04:57:29] *** MilkmanDan <MilkmanDan!~dan@wilug/expat/MilkmanDan> has quit IRC (Ping timeout: 268 seconds)
[04:59:03] *** MilkmanDan <MilkmanDan!~dan@wilug/expat/MilkmanDan> has joined #illumos
[05:01:56] *** rsully <rsully!~rsully@unaffiliated/rsully> has quit IRC (Quit: rsully)
[05:41:38] *** clapont <clapont!~clapont@unaffiliated/clapont> has quit IRC (Ping timeout: 246 seconds)
[05:42:59] *** clapont <clapont!~clapont@unaffiliated/clapont> has joined #illumos
[05:47:57] *** jemershaw <jemershaw!~jemershaw@73.81.152.203> has quit IRC (Ping timeout: 265 seconds)
[05:48:23] *** wl_ <wl_!~wl_@2605:6000:1b0c:6060::87c> has joined #illumos
[06:05:43] *** jemershaw <jemershaw!~jemershaw@c-68-83-252-28.hsd1.pa.comcast.net> has joined #illumos
[06:26:03] *** jemershaw <jemershaw!~jemershaw@c-68-83-252-28.hsd1.pa.comcast.net> has quit IRC (Read error: Connection reset by peer)
[06:30:31] *** jemershaw <jemershaw!~jemershaw@c-68-83-252-28.hsd1.pa.comcast.net> has joined #illumos
[07:05:17] *** insomnia <insomnia!~insomnia@shadowcat/actuallyamemberof/lollipopguild.insomnia> has quit IRC (Ping timeout: 240 seconds)
[07:18:40] *** insomnia <insomnia!~insomnia@shadowcat/actuallyamemberof/lollipopguild.insomnia> has joined #illumos
[08:30:06] *** ptribble <ptribble!~ptribble@cpc92716-cmbg20-2-0-cust138.5-4.cable.virginm.net> has joined #illumos
[08:39:38] *** arnoldoree <arnoldoree!~arnoldore@ranoldoree.plus.com> has joined #illumos
[09:00:56] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has joined #illumos
[09:02:04] *** ricksha-512 <ricksha-512!~despair86@75-60-137-0.lightspeed.rcsntx.sbcglobal.net> has quit IRC (Remote host closed the connection)
[09:37:44] *** andy_js <andy_js!~andy@97e29e78.skybroadband.com> has joined #illumos
[09:38:26] *** jemershaw <jemershaw!~jemershaw@c-68-83-252-28.hsd1.pa.comcast.net> has quit IRC (Ping timeout: 240 seconds)
[09:41:27] *** jemershaw <jemershaw!~jemershaw@73.81.152.203> has joined #illumos
[09:55:13] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has quit IRC (Read error: Connection reset by peer)
[09:55:35] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has joined #illumos
[10:00:16] *** jemershaw <jemershaw!~jemershaw@73.81.152.203> has quit IRC (Ping timeout: 268 seconds)
[10:00:44] *** arnoldoree <arnoldoree!~arnoldore@ranoldoree.plus.com> has quit IRC (Remote host closed the connection)
[10:03:12] *** jemershaw <jemershaw!~jemershaw@c-68-83-252-28.hsd1.pa.comcast.net> has joined #illumos
[10:19:41] *** man_u <man_u!~manu@manu2.gandi.net> has joined #illumos
[11:02:13] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has quit IRC (Ping timeout: 252 seconds)
[11:06:30] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has joined #illumos
[11:12:41] *** jcea <jcea!~Thunderbi@2001:41d0:1:8a82:7670:6e00:7670:6e00> has joined #illumos
[11:20:00] *** ed209 <ed209!~ed209@165.225.128.67> has quit IRC (Remote host closed the connection)
[11:20:07] *** ed209 <ed209!~ed209@165.225.128.67> has joined #illumos
[11:32:03] <leoric> Does someone else want to look at http://buildzone.oi-build.r61.net/webrev-11934/ ?
[11:33:22] <andy_js> Has anyone seen "not enough memory to fit 4096 bytes on stack" when using GRUB with a pool created on a recent version of illumos-gate?
[11:34:06] <toasterson> jperkin cool thanks for the quick response
[11:35:05] <jperkin> thanks for the heads up ;)
[11:36:02] <clapont> hi; any hints for where I should start digging when a "zpool import" takes 10 minutes? the zpool is 1GB in size and has around 100MB of data on it. thank you!
[11:36:39] <clapont> I forgot to mention the zpool is over a FC LUN
[11:37:08] <wilbury> how many entries do you have in /dev/dsk ?
[11:37:42] <clapont> wilbury: 1200
[11:37:45] <igork> how stable is the FC connection?
[11:38:29] <clapont> igork: no errors/sync loss/tx/rx reported by "fcinfo hba-ports -l"
[11:38:32] <wilbury> fcinfo, fcadm?
[11:38:36] <igork> do you have flow control enabled
[11:38:36] <wilbury> ok
[11:38:50] <wilbury> 1200 entries is kinda lot. mpxio properly configured?
[11:39:03] <wilbury> how busy is the storage?
[11:39:25] <igork> do you have broken drives with non-zero smart
[11:39:46] <igork> one drive can impact storage performance
[11:39:56] <clapont> the driver/hba is Qlogic; this happens even with no activity - meaning even if I export/unmount all zpools/luns
[11:39:59] <toasterson> or a big latency? import and other zfs operations are synchronous
[11:40:39] <clapont> no SMART error reported by storage; but I recheck now....
[11:40:54] <clapont> toasterson: big latency as in....?
[11:41:34] <toasterson> amount of time a packet travels to the storage server and back
[11:42:06] <clapont> wilbury: yes, a lot, this is a great start! there are more FC cables, and the OS has Veritas VxFS and VxVM
[11:42:56] <clapont> toasterson: oh, sorry, I know what latency means in computers, just I did not know what you meant by that
[11:43:19] <clapont> after the zpool is mounted, the data transfer is fine, around 80-100MB/sec
[11:44:35] <wilbury> clapont: ah! vxdisk list? proper AAL? i suggest to not mix vxdmp and mpxio
[11:44:38] <clapont> this "very slow import" is happening with the other zpools too, not only with this one; I took this one to eliminate other causes
[11:44:40] <wilbury> zpool import -d /dev/dsk
[11:44:52] <wilbury> (otherwise it also scan /dev/vx/dsk)
[11:45:23] <wilbury> any encapsulated disks? format sliced on all of them?
[11:45:26] *** amrfrsh <amrfrsh!~Thunderbi@95.174.67.172> has quit IRC (Quit: amrfrsh)
[11:46:17] <clapont> wilbury: I tried also specifying the full path by /dev/vx/dmp/ - but it takes exactly the same time
[11:46:22] <wilbury> no other vx tasks running? (vxtask list)
[11:46:35] <toasterson> clapont (IRC): data transfer is async and thus does not rely as much on latency as zfs set does, for example
[11:46:49] <wilbury> clapont: do you have that zpool on top of vxdmp devices or solaris mpxio devices?
[11:46:53] <clapont> wilbury: no encapsulated disks. the only disks non-zfs are the cdsdisks that I use for IO fencing
[11:47:02] <wilbury> ok, cdsdisks
[11:48:13] <clapont> wilbury: this vxdmp/mpxio I suspect to be the problem but I cannot find it; the zpools are created over vx, with "zpool create /dev/vx/dmp/stor0_1"
[11:49:22] <wilbury> clapont: /dev/dsk/c0* devices are scsi_vhci (mpxio). if you can afford to create a zpool over mpxio'd devices, just do it. and exclude those devices from vxdmp (using vxdmpadm)
[11:49:46] <clapont> wilbury: no slices on disks, full zpool as suggested by the "zpool create /path"
[11:50:46] <clapont> wilbury: the /dev/dsk/c* are around 1000
[11:51:07] <wilbury> try to create the zpool on mpxio devices (not dmp) and vxdmpadm exclude vxdmp dmpnodename=...
[11:51:14] <clapont> wouldn't it be better to remove the extra/stale ones?
[11:51:23] <wilbury> yes, devfsadm -Cv
[11:51:34] <wilbury> (fmd might or might not catch up)
[11:51:59] <clapont> this I tried already... and yes, the same number remains
[11:54:06] <wilbury> exclude from vxdmp the devices that are not being used for vxvm
[11:54:24] <wilbury> create zpool on mpxio'd (stm) devices
[11:56:24] <clapont> hmm... that means to exclude all but the three cdsdisks?
[11:57:07] <clapont> and isn't it better to use vxDMP since it's used by VCS?
[11:57:57] <clapont> and unfortunately I cannot re-create the zpools as the others have more data, and there's no place to back them up..
[11:58:40] <clapont> as I see it now, I should try to cleanup the /dev/dsk/ directory, having way too many entries. this may be the very cause of the slow import
[12:01:18] <wilbury> devfsadm -Cv
[12:01:32] <wilbury> or touch /reconfigure && init 6
[12:04:51] <clapont> "vxdisk scandisks" takes ~30seconds; after that, any "vxdisk" command is quick, any new lun added/removed/error is fixed - so the VX works fine
[12:09:13] <wilbury> vxdctl enable might have also helped
[12:09:33] <wilbury> nevertheless, try zpool import -d /dev/vx/dmp/
[12:10:58] <clapont> right, I did "vxdctl enable" too, before the "vxdisk scandisks"
[12:11:22] <clapont> "vxdctl enable" took ~30sec too
[12:15:55] <clapont> hmm, just now I noticed this zpool has just the "stor0_1" instead of "/dev/vx/dmp/stor0_1" (like the others have) but I think it should not be a problem
[12:16:14] <Agnar> too much vx *shiver*
[12:16:18] <clapont> I did "zpool import /dev/vx/dmp/stor0_1" and ... I wait :-)
[12:42:36] <clapont> the "zpool import /dev/vx/dmp/stor0_1" failed with "no such pool available"
[12:46:03] *** amrfrsh <amrfrsh!~Thunderbi@eduroamerw3-47-154.uni-paderborn.de> has joined #illumos
[12:47:55] <AmyMalik> So how the hell do I compile and install a new kernel? (point me to a document; don't just teach me yourself)
[12:48:31] <toasterson> https://illumos.org/docs/developers/build/
[12:48:51] <toasterson> be carefull of the differences per distribution
[12:51:06] <tsoome> why on earth do you want to build and install kernel?:)
[12:56:51] <clapont> toasterson: interesting read, thank you
[12:58:56] <clapont> also - thank you wilbury, igork, toasterson, tsoome - for the help you gave me so far. even the questions are helpful!
[13:05:27] <wilbury> clapont: zpool import -d /dev/vx/dmp stor0_1
[13:07:27] <tsoome> that's the problem with vxdmp/powerpath/hdlm - in the best case they create a separate directory with device nodes; in the worst case it's all mixed up
[13:08:03] <wilbury> powerpath uses /dev/dsk and emcp* devices
[13:08:05] <tsoome> worked ok with manually set mounts but not with automatic discovery
[13:14:57] <clapont> wilbury: correct, /dev/vx/dmp is the directory... just there are too many files there; that's one of the directions I should dig more, I think
[13:16:16] <tsoome> with zfs + mpxio you can forget it all in most cases.
[13:16:44] <wilbury> tsoome: that's what i suggested. to exclude zfs devices from vxdmp.
[13:17:04] <wilbury> i burned my fingers on it as well... 10 years ago.
[13:17:43] <tsoome> if you do not use vxvm extras…
[13:17:47] <Agnar> actually, if you can get rid of vxdmp at all, do it. It has some serious issues
[13:18:02] <Agnar> same goes for emc powerpath btw
[13:18:23] <wilbury> emc powerpath can't be avoided with certain storages
[13:18:46] <Agnar> in my personal expirience, MPxIO was always the best choice and the most robust one. Even with eMC storage systems
[13:19:06] <clapont> wilbury: 1) the zpools are created using those paths and it may not work 2) I use vx for IO fencing of some old Veritas VCS
[13:19:42] <wilbury> clapont: yes, so exclude ZFS device nodes from vxdmp, keep only device nodes for fencing disk group vxdmp'd
[13:21:02] <wilbury> Agnar: we have hundreds of solaris storage systems. mpxio is OK for FC-attached storage. multiport SAS disk drivers (mrsas, mptsas) suffer from various problems.
[13:21:31] <Agnar> wilbury: oh, right. I was talking about FC, sorry.
[13:21:43] <wilbury> yes, for FC, your statement is valid.
[13:21:46] <wilbury> mpxio rocks.
[13:22:08] <Agnar> yeah, one of the most underestimated technologies in illumos
[13:22:24] <tsoome> mpxio has its bad sides too
[13:22:31] <Agnar> tsoome: sssshhh!
[13:22:33] * kahiru looks up mpxio on the interwebs
[13:22:33] <Agnar> ;)
[13:22:54] <tsoome> but if you do not overcomplicate your SAN, you are fine.
[13:22:54] <wilbury> kahiru: "solaris traffic manager" :-P
[13:22:58] <wilbury> or scsi_vhci
[13:23:10] <Agnar> kahiru: disk multipathing
[13:23:16] <kahiru> yeah, just looking at it
[13:24:06] <clapont> wilbury, Agnar: you make me wonder if I should search/try to do IO fencing with the mpxio - I'm not sure if the VCS agent for zpool knows about mpxio but it knows vxdmp and is able to parallelize the import for more zpools
[13:24:28] <tsoome> mpxio only monitors link status, not traffic. we got bitten when a brocade director had an issue with a blade which had a cross-blade connection.
[13:24:52] <tsoome> the link was up but the traffic was blocked on that path
[13:25:15] <wilbury> clapont: yes, you can try that. create a vxdg with mpxio'd devices
[13:25:22] <clapont> I have an old storage with two servers/Veritas VCS, nothing complicated
[13:25:27] <wilbury> and modify vxfendg
[13:25:41] <Agnar> tsoome: yes, that's a limitation I also have seen. But I won't blame MPxIO for it, as most alternatives do more important things worse.
[13:25:55] *** fanta1 <fanta1!~fanta1@p200300F76BC6D400D52CA5B415FA13F9.dip0.t-ipconnect.de> has joined #illumos
[13:26:11] <tsoome> ye, it is not about the blame, but knowing your “enemy”:)
[13:26:46] <Agnar> tsoome: sure. I know more about VX* and PowerPath than I ever wanted to know ;))
[13:27:18] <clapont> tsoome: can you detail what the problem was? and did switching to VX help? I don't have EMC or Dell PowerPath, it's a StorageTek
[13:28:50] <tsoome> well, the data path inside the director went through 2 blades; the directly connected one was ok, but the second blade died.
[13:30:35] <tsoome> so the link was up but no traffic. in that specific case it was partly a brocade bug because IMO it should have brought the path down. anyhow, when we did reboot the director, it did disable the faulty blade and mpxio did the rest.
[13:31:07] <tsoome> and the whole thing would have been avoided by not building a cross-blade path.
[13:32:05] <clapont> two servers connected by FC to a big Brocade FC switch? it is a simple setup
[13:32:48] <tsoome> yes, and yet it was made a bit too complicated:D
[13:34:04] <wilbury> core with access switches?
[13:35:07] <tsoome> no, the server was directly connected to 2 blades in that director
[13:36:44] <tsoome> blade-to-blade paths are perfectly legal of course, but in that case, it was just a bad idea:)
[13:37:02] <clapont> tsoome: by "cross-blade path" you meant you had (at least) two cables from each server to the Brocade Switch?
[13:37:57] <tsoome> no, there were 2 links from the server, to separate blades
[13:38:11] <tsoome> and same for storage
[13:38:29] <tsoome> nice parallel independent stuff
[13:40:12] <tsoome> but, they did decide to add redundancy by creating a virtual connection between the blades, so the host port was able to see both storage ports - one directly, the other over the director backplane
[13:40:47] <clapont> aren't the "blades" the servers, in your environment? or sorry, I don't get you
[13:41:18] <tsoome> no, a brocade director is built from blades:) like stacked switches.
[13:42:51] <clapont> aah, so by "blade" you mean a switch of the whole bunch of Brocade switches, grouped under the Brocade Director
[13:43:35] <tsoome> yep.
[13:44:39] <tsoome> like this one https://www.broadcom.com/products/fibre-channel-networking/directors/fc32-64-blade
[13:47:09] <clapont> aha. nice hardware!
[13:48:30] <tsoome> anyhow
[14:07:29] *** gh34 <gh34!~textual@cpe-184-58-181-106.wi.res.rr.com> has joined #illumos
[14:13:57] <wonko> Ok, trying to run debian9 as an LX zone and installing plexmediaserver there fails with a systemd issue: Failed to connect to bus: Connection refused
[14:14:11] <wonko> Is this a known issue? Is there a workaround?
[14:17:19] <wonko> googling for it is troublesome as linux seems to have had a problem with it at some point so I just get all those results. :(
[14:18:17] <toasterson> is dbus not running?
[14:18:48] <wonko> what should I be looking for? dbus-daemon is running
[14:19:02] <toasterson> dbus-daemon.socket?
[14:19:09] <toasterson> maybe?
[14:19:29] *** amrfrsh <amrfrsh!~Thunderbi@eduroamerw3-47-154.uni-paderborn.de> has quit IRC (Quit: amrfrsh)
[14:20:05] <wonko> I see both dbus.server and dbus.socket as running
[14:20:29] <wonko> the crazy thing is when i try to install with apt (or fix it with dpkg) it kills the zlogin
[14:20:36] <toasterson> wonko (IRC): ok so it's another bus
[14:20:41] <toasterson> use ssh
[14:21:03] <toasterson> that is a PTY problem; it happens with certain ipkg zones as well
[14:21:15] <wonko> ah, ok
[14:21:16] <wonko> one sec
[14:21:23] <toasterson> i think they don't like \r
[14:21:36] <toasterson> or something funky
[14:23:36] <wonko> root@10.42.2.21: Permission denied (publickey).
[14:23:38] <wonko> huh?
[14:24:08] <wonko> I've got PreferredAuthentications password PubkeyAuthentication no set in my local .ssh/config
[14:24:28] <wonko> I've allowed root login in the zone
[14:26:59] <wonko> ok, so something that jumps out at me is debian 9.11 is kernel 4.9 I think? the joyent container spec says 4.10 though (not sure if that would cause any issues though)
[14:29:42] <wonko> oh, debian 9 prohibits password for sshd by default? (or is that a Joyent thing?)
[14:30:32] <toasterson> no a debian /sshd thing
[14:30:43] <wonko> silly debian
[14:30:48] <toasterson> you need to change it in sshd_config
[14:30:52] <wonko> ok, I get further when sshed in
[14:30:53] <wonko> so that's something
[14:31:26] <jimklimov> as for systemd and dbus, we have similar problems on native debian 8, FWIW
[14:31:53] <jimklimov> sometimes systemctl just gets stuck indefinitely, and tracing shows it is in an intensive infinite loop banging against the closed door
[14:32:01] <wonko> ok, it looks like my issues may have been entirely due to using zlogin instead of ssh
[14:32:11] <wonko> jimklimov: ouch
[14:32:15] <wonko> that's lovely
[14:32:17] <toasterson> yeah I only use zlogin for basic bootstrapping after install. then switch to ssh as all zones have an IP in my environment
[14:32:41] <wonko> Yeah, I was considering not giving zones IPs but after this they all get IPs. :)
[14:33:00] <toasterson> zlogin may not set up the environment completely the way systemd likes it
[14:33:19] <toasterson> i think there are other login steps involved other than the login command
[14:33:38] <andyf> If systemd is involved, you can be sure there are!
[14:34:39] <jimklimov> well, for all the haters complaining about how it tries to subsume all system lifecycle management... didn't the mix of ZFS/BEs/SMF/FMA/... pioneer the idea? ;p
[14:35:17] <wonko> but those are separate things that work together
[14:35:24] <wonko> systemd wants to be all things
[14:35:26] <wonko> :)
[14:35:30] <wonko> or something
[14:35:47] <wonko> I don't know, I'm not a hater. I think it's a bit awkward at times, but overall is good?
[14:36:02] <wonko> I don't need to edit XML to setup services, so that's a plus. :-P
[14:36:26] <jimklimov> there is a lot to dislike about the implementations, or the leadership sometimes, but the concepts and features that do work are rather likeable (to the extent I miss some of those in SMF, which is a lot more robust and advanced in a lot of other areas though)
[14:36:55] <jimklimov> NoXML> svccfg -s FMRI editprop
[14:37:35] <wonko> to be fair though, like all systems you build up some templates that you start from for everything so meh, xml, whatever.
[14:37:45] <wonko> but life is best if you can avoid it
[14:37:58] <wonko> Hmmm, Add Library hangs. I wonder what the issue is.
[14:41:33] <wonko> and it's finally better
[14:44:04] *** windy <windy!81fdf05f@129.253.240.95> has joined #illumos
[14:44:28] *** windy is now known as Guest64964
[14:44:40] *** Guest64964 <Guest64964!81fdf05f@129.253.240.95> has quit IRC (Remote host closed the connection)
[14:45:18] *** pwinder <pwinder!81fdf05f@129.253.240.95> has joined #illumos
[14:50:43] *** danmcd <danmcd!~danmcd@static-71-174-113-16.bstnma.fios.verizon.net> has quit IRC (Ping timeout: 268 seconds)
[14:50:44] *** pwinder <pwinder!81fdf05f@129.253.240.95> has quit IRC (Remote host closed the connection)
[15:01:39] *** pwinder <pwinder!~pwinder@129.253.240.95> has joined #illumos
[15:17:09] *** fanta1 <fanta1!~fanta1@p200300F76BC6D400D52CA5B415FA13F9.dip0.t-ipconnect.de> has quit IRC (Quit: fanta1)
[15:21:36] *** pwinder <pwinder!~pwinder@129.253.240.95> has quit IRC (Quit: Leaving)
[15:22:55] *** pwinder <pwinder!~pwinder@129.253.240.95> has joined #illumos
[15:25:51] *** pwinder <pwinder!~pwinder@129.253.240.95> has quit IRC (Client Quit)
[15:25:57] *** Kurlon <Kurlon!~Kurlon@cpe-67-253-141-249.rochester.res.rr.com> has quit IRC (Ping timeout: 240 seconds)
[15:28:31] <clapont> I wish to confirm that, using the full path of the device, i.e. "zpool import -d /dev/vx/dmp/stor0_1 pool1", the import takes a few seconds. so the problem is related to VX/MPxIO. I was thinking the "zpool" command would look only at /dev/vx/dmp/ with "zpool import -d /dev/vx/dmp/ pool1" - but that one also takes very long
[15:29:33] <wilbury> zpool import -d /dev/vx/dmp pool1 will open all devices in /dev/vx/dmp and will try to detect the correct order/setup of zpool.
[15:29:34] <clapont> although /dev/vx/dmp/ has under 200 entries..
[15:29:51] *** pwinder <pwinder!~pwinder@129.253.240.95> has joined #illumos
[15:30:27] <clapont> yes.. and while the VX enable/scandisk commands work in ~30sec, I was expecting this way to be much faster
[15:32:25] <tsoome> scanning through 200 block devices will take some time, for every device we need to scan for 4 pool label locations.
[15:33:49] <tsoome> that is 4x at least 128k reads, 2 from the beginning of the device, 2 from the end.
[15:34:20] <wilbury> uberblocks findings...
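(Back of the envelope, assuming tsoome's figures and the ~200 entries in /dev/vx/dmp: 200 devices x 4 labels x 128K is only ~100MB of reads, so the minutes are going into opening and probing each device node through the DMP layer - and timing out on any stale or unresponsive paths - rather than into the volume of data read.)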
[15:47:52] *** MarcelT <MarcelT!~marcel@tortuga.telka.sk> has joined #illumos
[15:57:39] *** Kurlon <Kurlon!~Kurlon@bidd-pub-03.gwi.net> has joined #illumos
[15:58:08] *** Kurlon <Kurlon!~Kurlon@bidd-pub-03.gwi.net> has quit IRC (Client Quit)
[16:04:07] *** Raugharr <Raugharr!~david@50-244-36-149-static.hfc.comcastbusiness.net> has joined #illumos
[16:05:16] *** Kurlon <Kurlon!~Kurlon@bidd-pub-03.gwi.net> has joined #illumos
[16:06:43] *** amrfrsh <amrfrsh!~Thunderbi@185.212.171.68> has joined #illumos
[16:07:17] *** pwinder <pwinder!~pwinder@129.253.240.95> has quit IRC (Ping timeout: 240 seconds)
[16:13:54] *** Raugharr <Raugharr!~david@50-244-36-149-static.hfc.comcastbusiness.net> has quit IRC (Quit: WeeChat 2.6)
[16:26:27] *** Kurlon <Kurlon!~Kurlon@bidd-pub-03.gwi.net> has quit IRC (Quit: Leaving...)
[16:29:23] *** amrfrsh <amrfrsh!~Thunderbi@185.212.171.68> has quit IRC (Ping timeout: 268 seconds)
[16:39:29] *** chrisBF <chrisBF!5695f8c9@host86-149-248-201.range86-149.btcentralplus.com> has joined #illumos
[16:39:53] *** pcd <pcd!~pcd@openzfs/developer/pcd> has quit IRC (Ping timeout: 245 seconds)
[16:47:28] *** Kruppt <Kruppt!~Kruppt@50.111.41.164> has joined #illumos
[16:47:38] *** pcd <pcd!~pcd@openzfs/developer/pcd> has joined #illumos
[16:48:56] *** danmcd <danmcd!~danmcd@static-71-174-113-16.bstnma.fios.verizon.net> has joined #illumos
[16:59:05] *** arnoldoree <arnoldoree!~arnoldore@ranoldoree.plus.com> has joined #illumos
[17:00:05] *** pwinder <pwinder!~pwinder@129.253.240.251> has joined #illumos
[17:13:48] <KungFuJesus> tsoome: Did you intend to have this printf in your patch? printf("vdev_read: offset: %jx, size: %zx\n", offset, bytes);
[17:15:52] *** MilkmanDan <MilkmanDan!~dan@wilug/expat/MilkmanDan> has quit IRC (Ping timeout: 245 seconds)
[17:16:14] *** MilkmanDan <MilkmanDan!~dan@wilug/expat/MilkmanDan> has joined #illumos
[17:17:14] <tsoome> no, its nuked already:)
[17:17:28] <tsoome> it was sneaky one:)
[17:17:57] <KungFuJesus> there's a similar printf going on with the bootloader in one of our production machines. It takes forever to boot because of it
[17:18:14] <KungFuJesus> it's amplified by the slow framebuffer performance of the onboard video
[17:18:50] <tsoome> um, which printf is that?
[17:23:15] <KungFuJesus> I'm not sure where it is in code, but it names the disk first, then says in like 4 byte increments how much it's read
[17:23:26] <KungFuJesus> for both disks in my mirror
[17:31:26] <tsoome> only things we report like that are read errors
[17:39:18] <KungFuJesus> I mean it boots successfully, so it'd be odd for that to be indicative of errors
[17:39:40] <KungFuJesus> it just takes a long time to print through all of them
[17:41:46] <tsoome> if you can, take the screenie, even just phone photo should do.
[17:53:32] <wonko> Is there a good "This is how you use ipf/ipnat you big dummy" guide? I haven't touched ipf in forever. I *think* I've got the rules right but searching for documentation leads to stuff all over the place (running omnios btw)
[17:57:37] <KungFuJesus> https://illumos.org/rb/r/2442 <-- anyone know when the range-tree / b-tree based patches for this will go in? If the OpenZFS talk is to be believed, it should have a significant improvement in write performance
[18:02:22] <danmcd> OmniOS r151032 and at least my smartos debug seem to have a bug in intent-log replay. See https://github.com/zfsonlinux/zfs/pull/9145 for a possible cure (but when I applied it to illumos, it didn't sit right with me with its kstat deletion...).
[18:02:58] <danmcd> I have a dump for my one-time-found-it in SmartOS. it's an NFS-zone build with DEBUG enabled, but the ZFS stuff should be no different from normal smartos ZFS.
[18:12:44] <danmcd> http://kebe.com/~danmcd/webrevs/zol-9145/
[18:14:25] *** Yogurt <Yogurt!~Yogurt@104-7-67-228.lightspeed.sntcca.sbcglobal.net> has joined #illumos
[18:14:47] *** chrisBF <chrisBF!5695f8c9@host86-149-248-201.range86-149.btcentralplus.com> has quit IRC (Remote host closed the connection)
[18:17:02] *** pwinder <pwinder!~pwinder@129.253.240.251> has quit IRC (Quit: This computer has gone to sleep)
[18:25:16] *** KungFuJesus <KungFuJesus!~adamstyli@207.250.97.74> has quit IRC (Quit: leaving)
[18:30:39] *** Teknix <Teknix!~pds@69.41.134.110> has quit IRC (Ping timeout: 265 seconds)
[18:33:25] *** Teknix <Teknix!~pds@69.41.134.110> has joined #illumos
[18:38:41] <andyf> wonko the FreeBSD handbook is pretty good for ipf https://www.freebsd.org/doc/handbook/firewalls-ipf.html
[18:39:35] <andyf> Or if you want a more tutorial style document, try the howto - I found a copy at https://github.com/cetanu/ipfilter_howto
[18:39:57] <wonko> perfect, thanks!
[18:40:06] <wonko> I wasn't sure how accurate the fbsd docs would be
[18:40:12] <wonko> glad to see that's still the goto. :)
[18:40:42] <andyf> danmcd - that does not sound good..
[18:45:38] <andyf> danmcd - they removed the kstat because it was only used in some unreachable code
[18:45:54] <andyf> danmcd - and they replaced the code with an assert, just in case they were wrong about the unreachable nature :D
[18:47:08] <andyf> danmcd - and if it came in with large dnodes, then '030 will have it too
[18:58:12] *** man_u <man_u!~manu@manu2.gandi.net> has quit IRC (Ping timeout: 265 seconds)
[19:07:15] *** amrfrsh <amrfrsh!~Thunderbi@185.212.171.68> has joined #illumos
[19:20:11] *** lgtaube <lgtaube!~lgt@91.109.28.145> has joined #illumos
[19:26:03] *** Kurlon <Kurlon!~Kurlon@bidd-pub-04.gwi.net> has joined #illumos
[19:34:25] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has quit IRC (Quit: Leaving.)
[19:34:46] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has joined #illumos
[19:39:39] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has quit IRC (Ping timeout: 264 seconds)
[20:01:56] *** MarcelT <MarcelT!~marcel@tortuga.telka.sk> has quit IRC (Ping timeout: 240 seconds)
[20:02:42] *** Kurlon <Kurlon!~Kurlon@bidd-pub-04.gwi.net> has quit IRC (Remote host closed the connection)
[20:03:18] *** Kurlon <Kurlon!~Kurlon@bidd-pub-03.gwi.net> has joined #illumos
[20:07:39] *** ptribble <ptribble!~ptribble@cpc92716-cmbg20-2-0-cust138.5-4.cable.virginm.net> has quit IRC (Quit: Leaving)
[20:11:38] <wonko> do I need to do anything to ensure ipf/ipnat gets started at boot or does it just look for those files and start it if they are there?
[20:13:01] <Agnar> wonko: svcadm enable ipfilter
[20:13:15] <wonko> Agnar: thanks!
[20:13:24] <Agnar> wonko: if the service is enabled, ipnat will be active too
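(For reference, a minimal sequence that should cover it on OmniOS/illumos - the stock ipfilter service reads its rules from /etc/ipf; worth double-checking the paths on your release:)

    # /etc/ipf/ipf.conf    - filter rules
    # /etc/ipf/ipnat.conf  - NAT rules
    pfexec svcadm enable network/ipfilter    # load the rules now and on every boot
    pfexec ipfstat -io                       # show the loaded in/out filter rules
    pfexec ipnat -l                          # show the loaded NAT rules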
[20:13:31] <AmyMalik> i remember vultr emailed me multiple times about the RPC port
[20:13:45] <AmyMalik> port 111
[20:13:47] <wonko> Agnar: perfect, thanks so much.
[20:14:24] <AmyMalik> Enclosed in the body of this email will be a worked example of an
[20:14:25] <AmyMalik> /etc/ipf/ipf.conf you can recommend to future customers who have this
[20:14:27] <AmyMalik> problem (who are likely to be SunOS users).
[20:14:29] <AmyMalik> block in proto udp from any to any port = 111
[20:14:31] <AmyMalik> block in proto tcp from any to any port = 111
[20:14:39] <AmyMalik> This turned off again didn't it.
[20:14:41] <AmyMalik> I re-enabled my firewall and have now enabled
[20:14:45] <AmyMalik> svc:/network/ipfilter:default (make sure to ensure your solaris/illumos
[20:14:47] <AmyMalik> users execute `pfexec svcadm enable svc:/network/ipfilter:default`) so
[20:14:49] <AmyMalik> this should not happen.
[20:15:14] <AmyMalik> just sharing something I learned along the way, it's probably not properly explained
[20:15:56] <Agnar> the nfs/server service on solaris opens port 111 implicitly without an entry in your ipf.conf. Not sure about illumos
[20:17:20] <wilbury> should be handled within ipfd
[20:17:26] <wilbury> and 111 is rpc/bind
[20:17:35] <wilbury> fw handled automatically iirc
[20:17:56] <tsoome> (almost) anything using nfs, needs it:)
[20:20:09] <Agnar> wilbury: no, what I mean is, the nfs/server service opens port 111 directly
[20:21:12] <LeftWing> Surely it opens it by starting rpc/bind?
[20:22:01] <Agnar> LeftWing: in /lib/svc/method/nfs-server there is a configure_ipfilter() function being called by the start argument
[20:22:16] <wilbury> ah, ok
[20:22:18] <wilbury> sorry
[20:22:34] <Agnar> oh, we have the same :)
[20:22:40] <LeftWing> Oh you mean opens it in the firewall sense, not the listen(3SOCKET) sense
[20:22:46] <LeftWing> Sorry
[20:22:47] <Agnar> yes
[20:23:31] <Agnar> which is quite surprising if you run an nfsd on your firewall (for reasons...) and you see that port 111 is now also open in your ipf
[20:32:03] <AmyMalik> I had to close my portmapper
[20:32:17] <AmyMalik> but I don't exactly know if turning off the service will destroy the OS
[20:32:19] <AmyMalik> so I firewalled it instead
[20:38:08] <wilbury> portmapper is not needed if you don't use nfs or other rpc services.
[20:39:15] *** despair86 <despair86!~despair86@24.170.8.11> has joined #illumos
[20:40:21] *** despair86 is now known as ricksha-512
[21:02:29] *** jcea <jcea!~Thunderbi@2001:41d0:1:8a82:7670:6e00:7670:6e00> has quit IRC (Remote host closed the connection)
[21:03:15] *** jcea <jcea!~Thunderbi@2001:41d0:1:8a82:7670:6e00:7670:6e00> has joined #illumos
[21:11:55] *** MarcelT <MarcelT!~marcel@tortuga.telka.sk> has joined #illumos
[21:13:46] *** jcea <jcea!~Thunderbi@2001:41d0:1:8a82:7670:6e00:7670:6e00> has quit IRC (Remote host closed the connection)
[21:13:46] *** MarcelT <MarcelT!~marcel@tortuga.telka.sk> has quit IRC (Read error: Connection reset by peer)
[21:14:29] *** jcea <jcea!~Thunderbi@2001:41d0:1:8a82:7670:6e00:7670:6e00> has joined #illumos
[21:14:51] *** MarcelT <MarcelT!~marcel@tortuga.telka.sk> has joined #illumos
[21:20:00] *** ed209 <ed209!~ed209@165.225.128.67> has quit IRC (Remote host closed the connection)
[21:20:07] *** ed209 <ed209!~ed209@165.225.128.67> has joined #illumos
[21:31:36] *** MarcelT <MarcelT!~marcel@tortuga.telka.sk> has quit IRC (Ping timeout: 240 seconds)
[21:32:17] *** jemershaw <jemershaw!~jemershaw@c-68-83-252-28.hsd1.pa.comcast.net> has quit IRC (Ping timeout: 240 seconds)
[21:34:44] *** jemershaw <jemershaw!~jemershaw@c-68-83-252-28.hsd1.pa.comcast.net> has joined #illumos
[21:58:07] *** Riastradh <Riastradh!~riastradh@netbsd/developer/riastradh> has joined #illumos
[22:50:17] *** wl_ <wl_!~wl_@2605:6000:1b0c:6060::87c> has quit IRC (Quit: Leaving)
[22:51:34] *** jcea <jcea!~Thunderbi@2001:41d0:1:8a82:7670:6e00:7670:6e00> has quit IRC (Remote host closed the connection)
[22:56:54] *** gh34 <gh34!~textual@cpe-184-58-181-106.wi.res.rr.com> has quit IRC (Quit: Textual IRC Client: www.textualapp.com)
[23:01:58] *** Kurlon_ <Kurlon_!~Kurlon@bidd-pub-04.gwi.net> has joined #illumos
[23:02:37] *** Kurlon_ <Kurlon_!~Kurlon@bidd-pub-04.gwi.net> has quit IRC (Remote host closed the connection)
[23:05:54] *** Kurlon <Kurlon!~Kurlon@bidd-pub-03.gwi.net> has quit IRC (Ping timeout: 268 seconds)
[23:11:49] *** Guest82950 <Guest82950!~void@ip4d16bc15.dynamic.kabel-deutschland.de> has quit IRC (Quit: How to defend yourself against an attacker armed with a mathematician)
[23:12:00] *** yomisei <yomisei!~void@ip4d16bc15.dynamic.kabel-deutschland.de> has joined #illumos
[23:13:54] *** AmyMalik is now known as waark
[23:15:25] *** despair86_ <despair86_!~despair86@75-60-137-0.lightspeed.rcsntx.sbcglobal.net> has joined #illumos
[23:17:16] *** ricksha-512 <ricksha-512!~despair86@24.170.8.11> has quit IRC (Ping timeout: 265 seconds)
[23:17:21] *** despair86_ is now known as ricksha-512
[23:21:25] *** Cthulhux <Cthulhux!cthulhu@rosaelefanten.org> has joined #illumos
[23:28:08] *** waark is now known as AmyMalik
[23:33:27] *** arnoldoree <arnoldoree!~arnoldore@ranoldoree.plus.com> has quit IRC (Quit: Leaving)
[23:36:39] *** sh42 is now known as sengir
[23:45:31] *** Kurlon <Kurlon!~Kurlon@cpe-67-253-141-249.rochester.res.rr.com> has joined #illumos
[23:48:13] *** andy_js <andy_js!~andy@97e29e78.skybroadband.com> has quit IRC (Quit: andy_js)