#illumos IRC log, January 24, 2020

[00:01:33] *** jimklimov <jimklimov!~jimklimov@ip-86-49-254-26.net.upcbroadband.cz> has quit IRC (Read error: Connection reset by peer)
[00:01:47] *** jimklimov <jimklimov!~jimklimov@ip-86-49-254-26.net.upcbroadband.cz> has joined #illumos
[00:38:11] *** Kurlon <Kurlon!~Kurlon@cpe-67-253-141-249.rochester.res.rr.com> has quit IRC (Remote host closed the connection)
[00:38:48] *** Kurlon <Kurlon!~Kurlon@bidd-pub-03.gwi.net> has joined #illumos
[01:05:07] *** Kurlon_ <Kurlon_!~Kurlon@cpe-67-253-141-249.rochester.res.rr.com> has joined #illumos
[01:08:20] *** Kurlon <Kurlon!~Kurlon@bidd-pub-03.gwi.net> has quit IRC (Ping timeout: 265 seconds)
[01:25:31] *** isaacdavis <isaacdavis!d0b805aa@208.184.5.170> has joined #illumos
[01:31:18] *** tsoome__ <tsoome__!~tsoome@148-52-235-80.sta.estpak.ee> has quit IRC (Read error: Connection reset by peer)
[01:31:59] *** tsoome__ <tsoome__!~tsoome@148-52-235-80.sta.estpak.ee> has joined #illumos
[02:00:02] *** KeiraT <KeiraT!~k4ra@gateway/tor-sasl/k4ra> has quit IRC (Remote host closed the connection)
[02:00:53] *** KeiraT <KeiraT!~k4ra@gateway/tor-sasl/k4ra> has joined #illumos
[02:22:29] *** clapont_ <clapont_!~clapont@46.97.170.47> has quit IRC (Read error: Connection reset by peer)
[02:27:11] *** clapont <clapont!~clapont@unaffiliated/clapont> has joined #illumos
[02:29:00] *** arnoldoree <arnoldoree!~arnoldore@ranoldoree.plus.com> has quit IRC (Remote host closed the connection)
[02:58:17] *** Yogurt <Yogurt!~Yogurt@104-7-67-228.lightspeed.sntcca.sbcglobal.net> has quit IRC (Remote host closed the connection)
[02:59:09] *** Yogurt <Yogurt!~Yogurt@104-7-67-228.lightspeed.sntcca.sbcglobal.net> has joined #illumos
[03:00:44] *** Yogurt <Yogurt!~Yogurt@104-7-67-228.lightspeed.sntcca.sbcglobal.net> has quit IRC (Remote host closed the connection)
[03:00:58] *** Yogurt <Yogurt!~Yogurt@104-7-67-228.lightspeed.sntcca.sbcglobal.net> has joined #illumos
[03:23:26] *** jcea <jcea!~Thunderbi@2001:41d0:1:8a82:7670:6e00:7670:6e00> has quit IRC (Quit: jcea)
[04:17:24] *** mnrmnaugh <mnrmnaugh!~mnrmnaugh@unaffiliated/mnrmnaugh> has quit IRC (Ping timeout: 248 seconds)
[04:20:01] *** mnrmnaugh <mnrmnaugh!~mnrmnaugh@unaffiliated/mnrmnaugh> has joined #illumos
[04:32:07] <jbk> just out of curiosity, are there any parts of illumos-gate that explicitly should not use ctf?
[04:38:40] <jbk> e.g. I don't know how/why I noticed this.. but vtinfo and vtdaemon do not get built with ctf (I'm sure there are probably others)
[04:39:35] <jbk> and so i've fixed it
[04:39:40] <jbk> (mostly curious for future stuff)
[05:01:24] *** Kruppt <Kruppt!~Kruppt@104.169.24.12> has quit IRC (Quit: Leaving)
[05:02:19] *** Yogurt <Yogurt!~Yogurt@104-7-67-228.lightspeed.sntcca.sbcglobal.net> has quit IRC (Remote host closed the connection)
[05:18:33] *** BOKALDO <BOKALDO!~BOKALDO@81.198.18.7> has joined #illumos
[06:36:43] *** mnrmnaugh <mnrmnaugh!~mnrmnaugh@unaffiliated/mnrmnaugh> has quit IRC (Ping timeout: 265 seconds)
[07:07:14] *** alanc <alanc!~alanc@inet-hqmc01-o.oracle.com> has quit IRC (Ping timeout: 240 seconds)
[07:13:27] *** alanc <alanc!~alanc@inet-hqmc01-o.oracle.com> has joined #illumos
[07:22:25] *** alanc <alanc!~alanc@inet-hqmc01-o.oracle.com> has quit IRC (Ping timeout: 268 seconds)
[07:24:05] *** kovert <kovert!~kovert@204.141.173.249> has quit IRC (Ping timeout: 265 seconds)
[07:28:43] *** alanc <alanc!~alanc@inet-hqmc01-o.oracle.com> has joined #illumos
[07:58:34] <sjorge> jbk building a pi now, should take a few hours to finish
[08:24:54] *** neuroserve <neuroserve!~toens@195.71.113.124> has joined #illumos
[08:38:43] *** wiedi <wiedi!~wiedi@ip5b4096a6.dynamic.kabel-deutschland.de> has quit IRC (Quit: ^C)
[08:40:28] *** tsoome__ <tsoome__!~tsoome@148-52-235-80.sta.estpak.ee> has quit IRC (Quit: This computer has gone to sleep)
[08:43:22] *** jimklimov <jimklimov!~jimklimov@ip-86-49-254-26.net.upcbroadband.cz> has quit IRC (Quit: Leaving.)
[08:56:56] *** tsoome <tsoome!~tsoome@91.209.240.229> has quit IRC (Read error: Connection reset by peer)
[09:24:13] *** wiedi <wiedi!~wiedi@185.85.220.177> has joined #illumos
[09:25:17] <andyf> jbk there is lots in userland that doesn't use ctf and from time to time I see people adding it alongside other changes. I don't know of any particular reason that it isn't everywhere.
[09:27:37] <jperkin> certainly in pkgsrc the policy is to build everything with ctf by default, and only skip things that it breaks (e.g. go)
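For reference, a quick way to check whether a given binary carries CTF data; a minimal sketch assuming illumos's elfdump and the .SUNW_ctf section name that ctfconvert emits:

    # Print the CTF section header if present; no output means the
    # binary was built without CTF data.
    elfdump -c -N .SUNW_ctf /usr/sbin/vtdaemon
    # Compare with something known to carry CTF, e.g. libc:
    elfdump -c -N .SUNW_ctf /usr/lib/libc.so.1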
[09:34:26] *** tsoome <tsoome!~tsoome@89.219.128.66> has joined #illumos
[09:35:48] *** tsoome_ <tsoome_!~tsoome@148-52-235-80.sta.estpak.ee> has joined #illumos
[09:40:03] *** kovert <kovert!~kovert@204.141.173.249> has joined #illumos
[10:05:32] *** ptribble <ptribble!~ptribble@cpc92716-cmbg20-2-0-cust138.5-4.cable.virginm.net> has joined #illumos
[10:16:31] *** zsj <zsj!~zsj@3EC95F11.catv.pool.telekom.hu> has quit IRC (Quit: leaving)
[10:27:39] *** man_u <man_u!~manu@manu2.gandi.net> has joined #illumos
[10:33:22] *** steph <steph!~steph@minos.ber.rdev.info> has quit IRC (Quit: quit)
[10:34:27] *** steph <steph!~steph@minos.ber.rdev.info> has joined #illumos
[10:41:31] *** steph <steph!~steph@minos.ber.rdev.info> has quit IRC (Quit: quit)
[10:46:14] *** steph <steph!steph@minos.ber.rdev.info> has joined #illumos
[10:54:56] *** sjorge <sjorge!~sjorge@unaffiliated/sjorge> has quit IRC (Quit: 410 Gone)
[11:00:37] *** sjorge <sjorge!~sjorge@unaffiliated/sjorge> has joined #illumos
[11:03:35] <gitomat> [illumos-gate] 12046 Provide /proc/<PID>/fdinfo/ -- Andy Fiddaman <omnios at citrus-it dot co.uk>
[11:03:36] <gitomat> [illumos-gate] 12153 netstat can use /proc/<PID>/fdinfo and avoid grabbing processes -- Andy Fiddaman <omnios at citrus-it dot co.uk>
[11:03:47] *** nde <nde!uid414739@gateway/web/irccloud.com/x-mcmhtnkhrjvviwmr> has quit IRC (Quit: Connection closed for inactivity)
[11:06:13] <andyf> Thanks rmustacc, jlevon and danmcd for the help with those two!
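A minimal sketch of inspecting the new files from a shell; per the 12046 putback each /proc/<pid>/fdinfo/<fd> entry holds a binary prfdinfo_t, so it is dumped raw rather than catted:

    # List the fd entries for the current shell ...
    ls /proc/$$/fdinfo
    # ... and hex-dump one; the content is a binary prfdinfo_t
    # structure, not text.
    od -A d -t x1 /proc/$$/fdinfo/0 | head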
[11:11:00] <sjorge> jbk well, got some bad news
[11:11:01] <sjorge> [root@freeradius ~]# getent group admins
[11:11:01] <sjorge> Segmentation Fault (core dumped)
[11:11:49] <sjorge> I have a getent and a nscd core
[11:12:09] *** mnrmnaugh <mnrmnaugh!~mnrmnaugh@unaffiliated/mnrmnaugh> has joined #illumos
[11:12:48] <sjorge> https://gist.github.com/sjorge/406fb4fafd38984d0e96c835cb22aeb5
[11:47:11] *** neirac_ <neirac_!~neirac@pc-184-104-160-190.cm.vtr.net> has joined #illumos
[12:23:40] *** amrmesh <amrmesh!~Thunderbi@134.19.189.92> has joined #illumos
[12:23:56] *** amrmesh <amrmesh!~Thunderbi@134.19.189.92> has quit IRC (Remote host closed the connection)
[12:29:31] *** merzo <merzo!~merzo@46-44-133-95.pool.ukrtel.net> has quit IRC (Ping timeout: 268 seconds)
[12:31:57] *** jcea <jcea!~Thunderbi@2001:41d0:1:8a82:7670:6e00:7670:6e00> has joined #illumos
[12:41:29] *** tsoome__ <tsoome__!~tsoome@148-52-235-80.sta.estpak.ee> has joined #illumos
[12:41:30] *** tsoome__ <tsoome__!~tsoome@148-52-235-80.sta.estpak.ee> has quit IRC (Client Quit)
[12:45:31] *** BOKALDO <BOKALDO!~BOKALDO@81.198.18.7> has quit IRC (Quit: Leaving)
[12:58:04] *** mnrmnaugh <mnrmnaugh!~mnrmnaugh@unaffiliated/mnrmnaugh> has quit IRC (Ping timeout: 268 seconds)
[12:59:57] *** mnrmnaugh <mnrmnaugh!~mnrmnaugh@unaffiliated/mnrmnaugh> has joined #illumos
[13:01:26] *** nde <nde!uid414739@gateway/web/irccloud.com/x-kezqrxnrvwnnvajr> has joined #illumos
[13:04:47] *** mgerdts <mgerdts!~textual@2600:6c44:c7f:ec89:445a:f192:fee0:4336> has joined #illumos
[13:06:06] <sjorge> I can get you the cores too if you want them
[13:19:40] *** swestdijk[m] <swestdijk[m]!swestdijkm@gateway/shell/matrix.org/x-cjospdorbrpqokey> has left #illumos ("User left")
[13:21:30] *** mnrmnaugh <mnrmnaugh!~mnrmnaugh@unaffiliated/mnrmnaugh> has quit IRC (Ping timeout: 268 seconds)
[13:25:11] *** mnrmnaugh <mnrmnaugh!~mnrmnaugh@unaffiliated/mnrmnaugh> has joined #illumos
[13:32:14] *** BOKALDO <BOKALDO!~BOKALDO@81.198.18.7> has joined #illumos
[13:34:46] *** mgerdts <mgerdts!~textual@2600:6c44:c7f:ec89:445a:f192:fee0:4336> has quit IRC (Quit: My MacBook has gone to sleep. ZZZzzz…)
[13:43:48] *** mgerdts <mgerdts!~textual@2600:6c44:c7f:ec89:9495:fa6c:f1f5:6b9a> has joined #illumos
[13:57:53] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has joined #illumos
[14:10:43] *** mgerdts <mgerdts!~textual@2600:6c44:c7f:ec89:9495:fa6c:f1f5:6b9a> has quit IRC (Quit: My MacBook has gone to sleep. ZZZzzz…)
[14:26:26] <toasterson> @freenode_neirac:matrix.wegmueller.it: I have added a known-bugs note for that. You must use Go modules to compile any program that requires the go-zone library; otherwise go unreliably pulls the wrong version of the uuid library.
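A hedged sketch of that workflow; the go-zone import path below is a placeholder, not the real one:

    # Build inside a module so go pins dependency versions (including
    # the uuid library) instead of picking one up unreliably.
    go mod init example.com/myprog
    go get example.com/go-zone   # placeholder path; use the real import path
    go build ./...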
[14:27:48] *** veg <veg!~veg@unaffiliated/veg> has quit IRC (Remote host closed the connection)
[14:34:52] *** merzo <merzo!~merzo@249-88-203-46.pool.ukrtel.net> has joined #illumos
[14:38:22] <jimklimov> hi all, do many people use ZFS in illumos VMs hosted on Windows VirtualBox? :)
[14:38:47] <jimklimov> as some know, I migrated to a new laptop at work some time ago, now with Windows 10 (was 7)
[14:39:32] <jimklimov> and store most of my work data in the OI VM (due to corp requirements, there must be a windows and it must be the physical OS)
[14:40:09] <jimklimov> the ZFS pool(s) live in raw partitions so I can technically dual-boot, recover beside Windows, etc.
[14:40:48] *** zsj <zsj!~zsj@3EC95F11.catv.pool.telekom.hu> has joined #illumos
[14:42:05] <jimklimov> the problem is that now that the raw partitions are on disks (1 SSD + 1 HDD) co-used by Windows itself for its C: and D:, the VMs face heavy timeouts and often lose the virtual disks
[14:42:30] <jimklimov> this did not happen on my earlier laptop that also "shared" disks with the OS like that
[14:43:00] <tsoome_> does guest see partition as partition or as virtual disk?
[14:43:13] <jimklimov> and does not happen on the new one where I also put an SSD with partitions dedicated to VMs (there are two VMs accessing this SSD, but no native Windows 10 partitions)
[14:44:06] <jimklimov> the VMs think they see the whole disk fully sized, but normally only the partition they need is non-zeroed (the others are protected by VirtualBox)
[14:44:29] <jimklimov> ahci timeouts also happen if I really pass the whole disk
[14:45:20] <tsoome_> this smells like excessive cache flush
[14:46:05] <jimklimov> so far I guess my 2 questions are: 1) did anyone else see such issues, does this ring a bell? and 2) what can be reasonable timeouts, and where? I see recent sd.conf in e.g. Joyent's tree is mostly tuned to minimal timeouts/retries in SD layer so ZFS would do that instead
[14:47:21] <jimklimov> I've read about possible cache-tiering problems and tried to disable both VirtualBox's "Use host cache" and Windows' "write-caching policy" for the virtual/physical disks respectively
[14:47:57] <jimklimov> IIRC the windows policy change did make the windows lag, but otherwise the problem for VMs did not disappear
[14:48:35] *** mgerdts <mgerdts!~textual@2600:6c44:c7f:ec89:74ed:dbd7:9b65:c551> has joined #illumos
[14:48:47] <jimklimov> for some reason, my initial mirroring from this dedicated SSD to laptop's partitions did (or claimed to) succeed for both the OI VM and Debian VM
[14:49:46] <jimklimov> but then neither could really write to e.g. EFI System partition on the laptop's secondary disk that I wanted to use for experiments with dual-booting
[14:50:26] *** isaacdavis <isaacdavis!d0b805aa@208.184.5.170> has quit IRC (Remote host closed the connection)
[14:50:40] <jimklimov> I think they managed to format it into FAT eventually, but never managed to copy any loader files and sync them successfully - the disk "fell off" under such heavy load of a few megabytes to copy
[14:51:06] <jimklimov> and the ZFS mirror half and cache device hosted on the laptop's disks are also "faulted/unavail" :(
[14:51:53] <jimklimov> I hoped the use of my private SSD stick would be temporary until the mirror-cloning of the VMs completes, but in fact it is the only backend storage for these VMs that actually works
[14:53:08] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has quit IRC (Quit: My MacBook has gone to sleep. ZZZzzz…)
[14:53:56] *** tsoome <tsoome!~tsoome@89.219.128.66> has quit IRC (Quit: tsoome)
[14:54:28] <jimklimov> now also tried my backup USB HDD disk (passed whole as a virtual SATA device, VBox USB-passthrough to OI did not work for me before) - while as slow as expected of mechanics, it survived an overnight sync with znapzend and did not time out
[14:56:23] *** psarria <psarria!~psarria@108.red-81-40-162.staticip.rima-tde.net> has joined #illumos
[14:58:13] *** tsoome_ <tsoome_!~tsoome@148-52-235-80.sta.estpak.ee> has quit IRC (Ping timeout: 265 seconds)
[15:00:34] <toasterson> jimklimov (IRC): usb 3.1 ssd or hdd might help then. Or thunderbolt if supported by the Host
[15:02:10] <jimklimov> yeah, for speed of connectivity to backup media - probably
[15:02:33] <jimklimov> so far I'm more concerned for *ability* to connect to the primary storage however
[15:04:11] <jimklimov> I think the impacting factors here (compared to what works) may be the switch of host from Win7 to Win10, and/or the use of host's Windows partitions located on same disks (even an SSD one) as partitions passed to VMs
[15:05:13] <jimklimov> if anybody comes up with good practical ideas over the next week, we can try them out at the fosdem table ;)
[15:05:19] *** BOKALDO <BOKALDO!~BOKALDO@81.198.18.7> has quit IRC (Quit: Leaving)
[15:05:27] <toasterson> jimklimov (IRC): I never had success passing a whole disk co-used by the host through to the guest in VirtualBox, neither on Linux nor on Windows
[15:05:44] <toasterson> The only thing that worked was separate disks
[15:17:34] <jimklimov> thanks... I'll run some more test combinations then
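For anyone reproducing this setup: such raw-partition disks are typically created with VBoxManage on the Windows host, from an elevated prompt. A sketch with placeholder disk and partition numbers, not jimklimov's actual layout:

    rem Wrap physical partitions 2 and 3 of the second disk in a vmdk;
    rem VirtualBox protects the unlisted partitions (they read as zeroes).
    VBoxManage internalcommands createrawvmdk -filename C:\VMs\oi-raw.vmdk -rawdisk \\.\PhysicalDrive1 -partitions 2,3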
[15:17:47] *** neirac_ <neirac_!~neirac@pc-184-104-160-190.cm.vtr.net> has quit IRC (Quit: neirac_)
[15:20:59] <jimklimov> Jan 24 15:18:28 jimoi ahci: [ID 296163 kern.warning] WARNING: ahci0: ahci port 3 has task file error
[15:21:09] <jimklimov> whatever that means... what file...
[15:43:35] *** kerberizer <kerberizer!~luchesar@wikipedia/Iliev> has quit IRC (Ping timeout: 272 seconds)
[15:47:20] *** kerberizer <kerberizer!~luchesar@wikipedia/Iliev> has joined #illumos
[15:47:52] *** tsoome_ <tsoome_!~tsoome@148-52-235-80.sta.estpak.ee> has joined #illumos
[15:52:02] *** kerberizer <kerberizer!~luchesar@wikipedia/Iliev> has quit IRC (Ping timeout: 240 seconds)
[15:55:35] <jbk> sjorge: can you run ::status on the nscd core?
[15:58:12] <jbk> also.. i just found one silly bug:
[15:58:30] *** merzo <merzo!~merzo@249-88-203-46.pool.ukrtel.net> has quit IRC (Read error: Connection reset by peer)
[15:58:39] *** merzo <merzo!~merzo@249-88-203-46.pool.ukrtel.net> has joined #illumos
[15:59:05] <jbk> i commented on your gist
[16:00:22] *** hurfdurf <hurfdurf!~hurfdurf@2601:280:4f00:26a0:c812:a350:f05:62b2> has joined #illumos
[16:04:27] <sjorge> Applied the patch, give me a sec to run ::status
[16:05:19] *** kerberizer <kerberizer!~luchesar@wikipedia/Iliev> has joined #illumos
[16:05:22] <sjorge> Added the output from ::status to the gist
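For readers following along, a first pass over such a core with mdb usually looks roughly like this (the core path is a placeholder):

    # ::status reports what killed the process (signal, fault address);
    # $C prints a stack backtrace with frame pointers and arguments.
    mdb /path/to/core.nscd <<'EOF'
    ::status
    $C
    EOF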
[16:11:14] *** neuroserve <neuroserve!~toens@195.71.113.124> has quit IRC (Ping timeout: 240 seconds)
[16:17:41] *** kerberizer <kerberizer!~luchesar@wikipedia/Iliev> has quit IRC (Ping timeout: 268 seconds)
[16:19:22] <jimklimov> so I tried to re-enable the caching, and bumped (hopefully) the sd.conf timeout to 60sec with
[16:19:22] <jimklimov> sd-config-list= "", "retries-timeout:60,retries-busy:3,retries-reset:1,retries-victim:2" ;
[16:19:57] <jimklimov> the VM did boot a lot faster than it did before (used to be stuck for a minute or so just after starting kernel, probably enumerating drives)
[16:21:40] *** BOKALDO <BOKALDO!~BOKALDO@81.198.19.103> has joined #illumos
[16:22:07] <jimklimov> but then still logged those ahci errors (seems always for "port 3") and neither HDD nor SSD parts of the pool were "zpool clear"-ed
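For context, that tuning would sit in the admin-editable sd.conf roughly as below. A sketch only: the empty vid/pid string should match all disks, sd.conf changes normally need a reboot (or possibly update_drv -vf sd) to take effect, and whether retries-timeout is seconds or a retry count is worth double-checking against the sd driver documentation:

    # /etc/driver/drv/sd.conf (local override of /kernel/drv/sd.conf)
    sd-config-list = "", "retries-timeout:60,retries-busy:3,retries-reset:1,retries-victim:2";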
[16:25:38] *** jim80net <jim80net!sid287860@gateway/web/irccloud.com/x-vqaxppjmanhafuev> has quit IRC ()
[16:25:54] *** jim80net <jim80net!sid287860@gateway/web/irccloud.com/x-dkdneayvaafzmlcf> has joined #illumos
[16:26:09] *** Kruppt <Kruppt!~Kruppt@104.169.24.12> has joined #illumos
[16:27:34] *** m4rley <m4rley!~m4rley@207.148.96.120> has quit IRC (Remote host closed the connection)
[16:32:16] *** tsoome <tsoome!~tsoome@148-52-235-80.sta.estpak.ee> has joined #illumos
[16:32:54] <jimklimov> hmm, curiously vbox.log also complains about something (not sure what), but it "ALLOCATED" and then repeatedly "CANCELED" some "Type=INVALID" requests.. :\
[16:36:44] <jimklimov> the Virtual Media Manager claims correct "virtual size" for the "raw disks" but too much for "actual size" - maybe it mis-maps something, or mixes up sector sizes?.. :\
[16:37:40] *** jcea <jcea!~Thunderbi@2001:41d0:1:8a82:7670:6e00:7670:6e00> has quit IRC (Ping timeout: 248 seconds)
[16:38:26] <jimklimov> also the "actual size" differs for whole-disk passthrough files vs. files that allow certain partitions, and seems to be the correct "virtual size" times the number of partitions listed (x2 or x3 for cases I see)
[16:38:40] <jimklimov> they all error the same when accessed though
[16:41:20] <jimklimov> seems like a win10 issue after all: https://forums.virtualbox.org/viewtopic.php?f=6&t=93437
[16:44:54] <jimklimov> the "port 3" seems to be the one where C: lives and is protected more
[16:46:22] <jimklimov> even with UAC slider turned down, accesses to the partition on it return VERR_WRITE_ERROR (in vbox.log)
[16:46:29] <jimklimov> the secondary HDD with the additional partition does not log errors like this
[16:46:38] <jimklimov> but seems to not work either
[16:48:37] *** m4rley <m4rley!~m4rley@207.148.96.120> has joined #illumos
[16:57:36] *** merzo <merzo!~merzo@249-88-203-46.pool.ukrtel.net> has quit IRC (Ping timeout: 265 seconds)
[17:00:42] *** merzo <merzo!~merzo@53-101-203-46.pool.ukrtel.net> has joined #illumos
[17:02:25] *** merzo <merzo!~merzo@53-101-203-46.pool.ukrtel.net> has quit IRC (Read error: Connection reset by peer)
[17:05:51] *** kerberizer <kerberizer!~luchesar@wikipedia/Iliev> has joined #illumos
[17:08:06] *** merzo <merzo!~merzo@46-44-133-95.pool.ukrtel.net> has joined #illumos
[17:14:00] *** jcea <jcea!~Thunderbi@2001:bc8:2ecd:caed:7670:6e00:7670:6e00> has joined #illumos
[17:14:30] *** ZOP <ZOP!~ZOP@phobos.wgops.com> has joined #illumos
[17:15:29] *** ZOP_ <ZOP_!~ZOP@phobos.wgops.com> has quit IRC (Ping timeout: 265 seconds)
[17:17:50] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has quit IRC (Quit: Leaving.)
[17:20:52] *** merzo_ <merzo_!~merzo@46-44-133-95.pool.ukrtel.net> has joined #illumos
[17:21:16] *** tsoome__ <tsoome__!~tsoome@91.209.240.229> has joined #illumos
[17:22:56] *** tsoome <tsoome!~tsoome@148-52-235-80.sta.estpak.ee> has quit IRC (Ping timeout: 265 seconds)
[17:22:56] *** tsoome__ is now known as tsoome
[17:23:42] *** merzo <merzo!~merzo@46-44-133-95.pool.ukrtel.net> has quit IRC (Ping timeout: 265 seconds)
[17:33:58] <gitomat> [illumos-gate] 12193 zonestatd: Wrong indentation in zsd_usage_cache_update() -- Marcel Telka <marcel at telka dot sk>
[17:50:48] *** kerberizer <kerberizer!~luchesar@wikipedia/Iliev> has quit IRC (Ping timeout: 268 seconds)
[17:54:04] *** kerberizer <kerberizer!~luchesar@wikipedia/Iliev> has joined #illumos
[18:05:47] *** wiedi <wiedi!~wiedi@185.85.220.177> has quit IRC (Quit: ^C)
[18:11:24] *** man_u <man_u!~manu@manu2.gandi.net> has quit IRC (Quit: man_u)
[18:13:40] *** Yogurt <Yogurt!~Yogurt@104-7-67-228.lightspeed.sntcca.sbcglobal.net> has joined #illumos
[18:15:16] *** jcea <jcea!~Thunderbi@2001:bc8:2ecd:caed:7670:6e00:7670:6e00> has quit IRC (Ping timeout: 248 seconds)
[18:16:17] *** sjorge <sjorge!~sjorge@unaffiliated/sjorge> has quit IRC (Quit: 410 Gone)
[18:21:19] *** sjorge <sjorge!~sjorge@unaffiliated/sjorge> has joined #illumos
[18:21:33] <sjorge> jbk the extra change works!
[18:21:34] <sjorge> [root@freeradius ~]# getent group admins
[18:21:34] <sjorge> admins::10002:sjorge
[18:28:17] *** copec <copec!~copec@schrodbox.unaen.org> has quit IRC (Ping timeout: 246 seconds)
[18:31:26] *** Yogurt <Yogurt!~Yogurt@104-7-67-228.lightspeed.sntcca.sbcglobal.net> has quit IRC (Remote host closed the connection)
[18:33:10] *** Yogurt <Yogurt!~Yogurt@104-7-67-228.lightspeed.sntcca.sbcglobal.net> has joined #illumos
[18:33:13] *** Kurlon_ <Kurlon_!~Kurlon@cpe-67-253-141-249.rochester.res.rr.com> has quit IRC (Remote host closed the connection)
[18:33:53] *** Kurlon <Kurlon!~Kurlon@bidd-pub-03.gwi.net> has joined #illumos
[18:52:12] *** jcea <jcea!~Thunderbi@2001:bc8:2ecd:caed:7670:6e00:7670:6e00> has joined #illumos
[18:59:49] <jbk> excellent!
[19:06:17] *** mnrmnaugh <mnrmnaugh!~mnrmnaugh@unaffiliated/mnrmnaugh> has quit IRC (Read error: Connection reset by peer)
[19:07:07] *** mnrmnaugh <mnrmnaugh!~mnrmnaugh@unaffiliated/mnrmnaugh> has joined #illumos
[19:11:51] *** copec <copec!~copec@schrodbox.unaen.org> has joined #illumos
[19:32:01] *** Yogurt <Yogurt!~Yogurt@104-7-67-228.lightspeed.sntcca.sbcglobal.net> has quit IRC (Remote host closed the connection)
[19:36:10] *** Yogurt <Yogurt!~Yogurt@104-7-67-228.lightspeed.sntcca.sbcglobal.net> has joined #illumos
[19:56:08] *** Yogurt <Yogurt!~Yogurt@104-7-67-228.lightspeed.sntcca.sbcglobal.net> has quit IRC (Remote host closed the connection)
[19:56:58] *** Yogurt <Yogurt!~Yogurt@104-7-67-228.lightspeed.sntcca.sbcglobal.net> has joined #illumos
[20:03:40] *** Kurlon_ <Kurlon_!~Kurlon@cpe-67-253-141-249.rochester.res.rr.com> has joined #illumos
[20:06:51] *** wiedi <wiedi!~wiedi@ip5b4096a6.dynamic.kabel-deutschland.de> has joined #illumos
[20:07:42] *** Kurlon <Kurlon!~Kurlon@bidd-pub-03.gwi.net> has quit IRC (Ping timeout: 268 seconds)
[20:42:00] <jbk> i'll go ahead and get those changes out for review then.. though I'll still want to do a sanity check by doing a new setup (to verify the get domain bits also work).. also probably want to try out a few other scenarios
[20:42:28] <jbk> but i think it's probably close enough
[20:48:04] <sjorge> andyf: can probably test too as he is deploying ldap also
[20:49:33] <jbk> i'd need to look at more of the code, but IIRC, during the initial setup, it looks at the base dn you're using for the nisdomainobject OC on that entry (typically an OU)
[20:49:56] <jbk> also, want to confirm behavior when 'member' and 'memberUid' attributes are present
[20:50:18] <jbk> (getent group should show the union of the two.. though possibly duplicated)
[20:50:54] <jbk> it sounds like solaris may dedup, we don't (and i'd argue it's not strictly necessary -- AFAIK, we don't barf if there are duplicate entries in /etc/group today)
[20:52:23] <andyf> There are so many possibilities, as always. I use the 'memberof' overlay so that users have a memberof attribute - lots of things expect to look groups up that way round
[20:53:21] <jbk> i think the three key ones are: all secondary members in 'memberUid' attribute, all secondary members in 'member' attribute (DN), and a mix
[20:53:29] <jbk> (both of those can be mapped to different attributes of course)
[20:53:48] <andyf> yes - groupOfUniqueNames with uniqueMember, etc.
[20:54:05] <jbk> (this is for getgr{gid,nam,ent})
[20:54:07] <sjorge> I need to get the memberOf overlay working too
[20:54:15] <sjorge> That’s next on the list after mirrormode
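A sketch of a test group for the "mix" case above, assuming an rfc2307bis-style schema where posixGroup is auxiliary; all DNs and names here are invented:

    # Add one group carrying both a DN-valued member and a memberUid,
    # then see what getent returns (nscd caching may delay visibility).
    ldapadd -D cn=admin,dc=example,dc=com -W <<'EOF'
    dn: cn=mixed,ou=groups,dc=example,dc=com
    objectClass: groupOfNames
    objectClass: posixGroup
    cn: mixed
    gidNumber: 10010
    member: uid=sjorge,ou=people,dc=example,dc=com
    memberUid: jbk
    EOF
    getent group mixed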
[21:35:33] <gitomat> [illumos-gate] 12184 SPARC build fails due to missing 64-bit libdhcpagent -- Peter Tribble <peter.tribble at gmail dot com>
[21:39:38] *** Acrossy|2 <Acrossy|2!~kvirc@95.107.16.197> has quit IRC (Read error: Connection reset by peer)
[21:44:24] *** merzo__ <merzo__!~merzo@17-7-132-95.pool.ukrtel.net> has joined #illumos
[21:45:40] *** clapont <clapont!~clapont@unaffiliated/clapont> has quit IRC (Ping timeout: 265 seconds)
[21:46:53] *** merzo_ <merzo_!~merzo@46-44-133-95.pool.ukrtel.net> has quit IRC (Ping timeout: 260 seconds)
[21:52:13] *** BOKALDO <BOKALDO!~BOKALDO@81.198.19.103> has quit IRC (Quit: Leaving)
[22:05:19] <gitomat> [illumos-gate] 12243 too many NFS threads actually hurts performance -- Evan Layton <evan.layton at nexenta dot com>
[22:20:46] <gitomat> [illumos-gate] 12144 Convert Intro(7) to mandoc -- Jason King <jason.king at joyent dot com>
[22:29:42] *** KeiraT <KeiraT!~k4ra@gateway/tor-sasl/k4ra> has quit IRC (Quit: KeiraT)
[22:35:38] <jbk> one thing i have wondered about reimplementing (I did this years ago at $JOB-3) for LDAP
[22:36:02] <jbk> is allowing you to manage server access in LDAP along with your user data
[22:36:09] <jbk> (at least in a more straightforward manner)
[22:38:57] <jbk> and basically, for a given server, you can list ldap groups or users (by DN) that are allowed to access that server (or, if servers are grouped into containers such as OUs, put similar entries there so the servers listed under them inherit those values)
[22:39:03] <jbk> (if that makes any sense)
[22:43:20] <jbk> though the big issue is that the tools for managing the data in ldap tend to all be lower level.. at least last time i looked, I didn't really see anything similar to what ADUC is on windows
[22:43:46] <jbk> but more like ADSI edit
[22:43:54] <jbk> (though I haven't looked recently)
[23:11:35] <sjorge> jbk wouldn't a generic restrict-login-to-group option be better?
[23:11:42] <sjorge> Then the source doesn't matter, like AllowGroups in sshd
[23:11:53] <sjorge> So you can just use a group from files, ldap, whatever
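In sshd_config terms, that suggestion looks something like the sketch below; sshd resolves the group through the name service, so it can live in files, LDAP, or anything else (ssh is the illumos SMF service name):

    # Only members of 'admins' may log in, wherever the group is served from.
    echo 'AllowGroups admins' >> /etc/ssh/sshd_config
    svcadm restart ssh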
[23:13:49] <richlowe> isn't there a pam module that does that?
[23:14:00] <richlowe> pam_list maybe?
[23:16:02] <jbk> kinda
[23:16:09] <jbk> but you have to manage the file on each server
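For the record, the pam_list approach would look roughly like this; the module and its allow= option are from memory of pam_list(5) and should be verified against the man page:

    # /etc/pam.conf: gate account management on a local allow file that
    # lists permitted users and/or @netgroups (option name assumed).
    other   account  requisite   pam_list.so.1   allow=/etc/security/users.allow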
[23:16:21] *** amrmesh <amrmesh!~Thunderbi@134.19.189.92> has joined #illumos
[23:17:03] *** ptribble <ptribble!~ptribble@cpc92716-cmbg20-2-0-cust138.5-4.cable.virginm.net> has quit IRC (Quit: Leaving)
[23:17:14] *** amrmesh <amrmesh!~Thunderbi@134.19.189.92> has quit IRC (Client Quit)
[23:18:32] *** KeiraT <KeiraT!~k4ra@gateway/tor-sasl/k4ra> has joined #illumos
[23:20:49] *** hurfdurf is now known as _hurfdurf
[23:21:15] <jbk> (and if you're going to do that, why bother with LDAP? just use the same thing that's managing that file to manage the users locally)
[23:23:47] *** nde <nde!uid414739@gateway/web/irccloud.com/x-kezqrxnrvwnnvajr> has quit IRC (Quit: Connection closed for inactivity)
[23:24:36] <bahamat> Well presumably that will change much less frequently than the ldap content.
[23:25:01] <bahamat> I consider it part of the ldap configuration, just like which server(s) to contact.
[23:26:53] *** mnrmnaugh <mnrmnaugh!~mnrmnaugh@unaffiliated/mnrmnaugh> has quit IRC (Ping timeout: 268 seconds)
[23:27:11] <jbk> i don't -- it was pretty common to have 'XX group/user needs access to these systems' because of some project/reorg/etc.
[23:27:25] <jbk> so it could be just as dynamic as the user data
[23:27:27] <bahamat> That can be managed in ldap
[23:27:48] <jbk> yes, but the current methods for doing that are pretty terrible imo
[23:28:26] <jbk> like, you can create search rules that just hide the users that shouldn't be able to login from the system
[23:28:37] <bahamat> Well I've never seen an ldap with netgroups other than (,user,), so it's basically just a list of users in each group.
[23:28:53] <jbk> but then you run into potential UID conflicts
[23:29:04] <jbk> netgroups in ldap has always been a horrible hack imo
[23:29:15] <bahamat> All users within a single ldap should have unique uidNumbers anyway.
[23:29:46] <bahamat> Oh, I don't disagree. It should just work with groups.
[23:30:08] <bahamat> But, ¯\_(ツ)_/¯
[23:31:30] <jbk> basically unless the ldap server has custom support explicitly for netgroups, searching and indexing are going to be poor
[23:33:49] <jbk> i'd rather leverage the native group support in ldap
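To make the contrast concrete, a sketch of the two lookup shapes with invented base DNs: group membership is a single indexable equality match, while netgroup data comes back as triple strings the client must parse:

    # Native group: the server can index memberUid and answer directly.
    ldapsearch -b ou=groups,dc=example,dc=com '(&(objectClass=posixGroup)(memberUid=sjorge))' cn
    # Netgroup: fetch the entry, then parse nisNetgroupTriple values
    # like (,sjorge,) on the client side.
    ldapsearch -b ou=netgroup,dc=example,dc=com '(cn=sshusers)' nisNetgroupTriple memberNisNetgroup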
[23:52:45] *** _hurfdurf is now known as hurfdurf