   March 11, 2020  


[00:13:35] <dsockwell> 'Pi' and 'good' are so often mutually exclusive that I say go for it
[00:14:25] <dsockwell> but the last one I bought had 4GB of RAM and that seems like plenty
[00:15:34] <LeftWing> I would say so
[00:15:58] <LeftWing> My PC Engines APU board (x64) has 4GB of RAM and it uses ZFS and is fine
[00:16:08] <LeftWing> The anemic CPU is much more of a problem than the RAM
[00:16:40] <toasterson1> alanc (IRC): you had me at trailer for a security flaw....
[00:17:12] <alanc> scroll down just a bit on https://lviattack.eu/ to see it
[00:17:39] <dsockwell> https://lviattack.eu/
[00:17:45] <dsockwell> wrong buffer sorry https://youtu.be/baKHSXeIIaI
[00:18:20] <dsockwell> hahahahahahahahahaha this is great
[00:19:40] *** andy_js <andy_js!~andy@51.146.99.40> has quit IRC (Quit: andy_js)
[00:21:01] <ptribble> ZFS works just fine with 1GB of RAM (eg t2.micro instances on AWS)
[00:24:15] <toasterson1> oh god that trailer.... Is that gallows humor? Hope the humor kills you before the security flaw does?
[00:26:54] *** CME <CME!~CME@ip5f5b13b8.dynamic.kabel-deutschland.de> has quit IRC (Read error: Connection reset by peer)
[00:27:04] *** kahiru <kahiru!~quassel@ip-89-102-207-18.net.upcbroadband.cz> has quit IRC (Ping timeout: 256 seconds)
[00:27:10] *** CME <CME!~CME@ip5f5b13b8.dynamic.kabel-deutschland.de> has joined #illumos
[00:28:31] *** kahiru <kahiru!~quassel@ip-89-102-207-18.net.upcbroadband.cz> has joined #illumos
[00:32:51] *** ptribble <ptribble!~ptribble@cpc92716-cmbg20-2-0-cust138.5-4.cable.virginm.net> has quit IRC (Quit: Leaving)
[00:33:19] <sjorge> spicywolf: all my microSD’s are < 4G
[00:33:41] <LeftWing> You really want to be careful with large scale file systems on Micro SD cards
[00:34:15] <LeftWing> They don't generally do well in the face of large write-heavy workloads, and ZFS does nothing to minimise its effect on the endurance of read-focused flash
[00:34:36] <sjorge> alanc / toasterson1 the clearly Flemish accent leaking through makes it better for me
[00:35:14] <Smithx10> toasterson1: how is the go zones library thingie going?
[00:36:05] <sjorge> It’s by the university competing with the one I work for
[00:36:08] <alanc> fortunately, the lvi attack work so far is mainly to break the SGX enclave, not normal CPU usage, so it's not too scary yet
[00:36:51] <alanc> especially since the mitigation is "just have your compiler/assembler insert all the LFENCEs it can, until you've slowed performance down somewhere between 2x & 20x"
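(For reference, this blanket-LFENCE approach later appeared as assembler switches in GNU binutils. The option names below are an assumption about those LVI mitigation flags, so treat this as a hedged sketch rather than a recipe:)

    # assemble with an LFENCE after every load and before every
    # ret/indirect branch - heavy-handed, hence the 2x-20x slowdown
    as -mlfence-after-load=yes \
       -mlfence-before-indirect-branch=all \
       -mlfence-before-ret=shl \
       -o mitigated.o input.s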
[00:37:19] <toasterson1> Smithx10 (IRC): Good. It does its job. Anything in particular you want to know?
[00:38:15] <sjorge> Sort of like a school zone then: drive slow so if you hit something, it’s not too bad
[00:38:26] <Smithx10> Was curious now that we have NFS in a Zone if we should maybe add some of the Volume Pieces?
[00:40:40] <toasterson1> Smithx10 (IRC): it can do that. At least it writes the dataset delegations properly. Are we now able to specify nfs exports in the Zone XML as well? Otherwise that would be a job for the higher-level https://git.wegmueller.it/opencloud/opencloud
[00:40:50] <toasterson1> Which has volume support now.
[00:41:04] <toasterson1> But probably broken. I need some testers :)
[00:42:01] <Smithx10> That's a good question, can we specify dataset properties in there?
[00:42:49] <Smithx10> I guess, what do you do on the client? Would that also use zone xml?
[00:44:08] <toasterson1> no, zone xml should only handle zone environment configuration and delegated resources. Mounts and exports are to be defined separately
[00:45:04] <toasterson1> but adding commands to podadm to allow it to manage mounts and exports is not that hard.
[00:45:40] <toasterson1> the volume instruction could add an export property
[00:46:16] <toasterson1> so that when a zone is built from the instructions it builds that nfs export along with the volume dataset and the delegation
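(A minimal sketch of the manual steps that instruction would automate; the zone and dataset names are illustrative, and sharing from inside the zone assumes the new NFS-server-in-a-zone support mentioned above:)

    # global zone: create the volume dataset and delegate it to the zone
    zfs create rpool/volumes/web01
    zonecfg -z web01 'add dataset; set name=rpool/volumes/web01; end; commit'

    # inside the zone: export the delegated dataset over NFS
    zfs set sharenfs=rw rpool/volumes/web01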
[00:47:33] <toasterson1> hmmm DO just published a Mail mentioning 3 Vulns LVI, TRRespass and L1TF
[00:48:57] <toasterson1> Snoop-assisted L1 Data Sampling is the third.
[00:49:03] *** AllanJude <AllanJude!~allan@freebsd/developer/AllanJude> has quit IRC (Remote host closed the connection)
[00:49:17] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has quit IRC (Quit: My MacBook has gone to sleep. ZZZzzz…)
[00:49:22] *** AllanJude <AllanJude!~allan@freebsd/developer/AllanJude> has joined #illumos
[00:51:40] *** rzezeski <rzezeski!uid151901@gateway/web/irccloud.com/x-ooltvfuztjniwrzn> has quit IRC (Quit: Connection closed for inactivity)
[00:52:58] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has joined #illumos
[00:53:29] <alanc> Snoop-assisted L1 Data Sampling is the new vuln, but the mitigation for hypervisors is the same as L1TF - flush the L1D cache when switching from guest to HV or host
[01:19:34] *** ldepandis <ldepandis!~ldepandis@unaffiliated/ldepandis> has quit IRC (Quit: My MacBook has gone to sleep. ZZZzzz…)
[01:25:21] *** gh34 <gh34!~textual@cpe-184-58-181-106.wi.res.rr.com> has quit IRC (Quit: Textual IRC Client: www.textualapp.com)
[01:28:27] *** kev009 <kev009!~kev009@ip72-222-200-117.ph.ph.cox.net> has quit IRC (Read error: Connection reset by peer)
[01:28:46] *** kev009 <kev009!~kev009@ip72-222-200-117.ph.ph.cox.net> has joined #illumos
[01:40:10] *** Tempt <Tempt!~avenger@unaffiliated/tempt> has quit IRC (Ping timeout: 256 seconds)
[01:41:38] *** Tempt <Tempt!~avenger@unaffiliated/tempt> has joined #illumos
[01:41:38] *** ChanServ sets mode: +o Tempt
[01:43:27] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has quit IRC (Quit: My MacBook has gone to sleep. ZZZzzz…)
[01:57:35] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has joined #illumos
[02:07:18] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has quit IRC (Quit: My MacBook has gone to sleep. ZZZzzz…)
[02:09:51] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has joined #illumos
[02:17:22] *** kahiru <kahiru!~quassel@ip-89-102-207-18.net.upcbroadband.cz> has quit IRC (Ping timeout: 255 seconds)
[02:22:08] *** kahiru <kahiru!~quassel@ip-89-102-207-18.net.upcbroadband.cz> has joined #illumos
[02:27:57] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has quit IRC (Quit: My MacBook has gone to sleep. ZZZzzz…)
[02:31:15] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has joined #illumos
[02:37:30] <Smithx10> toasterson1: what is podadmd?
[02:40:24] <toasterson1> That will later be the daemon that exports this functionality over the network and provides kubelet support.
[02:41:29] <Smithx10> that will do the same namespacing that happens on linux?
[02:41:36] <Smithx10> so 4 pods in one zone i guess?
[02:56:06] *** kev009 <kev009!~kev009@ip72-222-200-117.ph.ph.cox.net> has quit IRC (Ping timeout: 256 seconds)
[03:28:06] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has quit IRC (Quit: My MacBook has gone to sleep. ZZZzzz…)
[03:29:01] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has joined #illumos
[03:44:48] *** jcea <jcea!~Thunderbi@2001:bc8:2ecd:caed:7670:6e00:7670:6e00> has quit IRC (Quit: jcea)
[03:55:42] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has quit IRC (Quit: My MacBook has gone to sleep. ZZZzzz…)
[04:02:31] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has joined #illumos
[04:13:36] *** rzezeski <rzezeski!uid151901@gateway/web/irccloud.com/x-tjxkavsatoekhqld> has joined #illumos
[04:43:23] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has quit IRC (Quit: My MacBook has gone to sleep. ZZZzzz…)
[04:44:02] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has joined #illumos
[04:54:30] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has quit IRC (Quit: My MacBook has gone to sleep. ZZZzzz…)
[04:57:11] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has joined #illumos
[05:05:04] *** BOKALDO <BOKALDO!~BOKALDO@87.110.88.30> has joined #illumos
[05:05:05] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has quit IRC (Quit: My MacBook has gone to sleep. ZZZzzz…)
[05:05:36] *** idodeclare <idodeclare!~textual@cpe-76-185-177-63.satx.res.rr.com> has joined #illumos
[05:11:20] *** danmcd <danmcd!~danmcd@static-71-174-113-16.bstnma.fios.verizon.net> has quit IRC (Read error: Connection reset by peer)
[05:34:33] *** kev009 <kev009!~kev009@ip72-222-200-117.ph.ph.cox.net> has joined #illumos
[06:06:51] *** kev009 <kev009!~kev009@ip72-222-200-117.ph.ph.cox.net> has quit IRC (Ping timeout: 260 seconds)
[06:19:31] *** kahiru <kahiru!~quassel@ip-89-102-207-18.net.upcbroadband.cz> has quit IRC (Ping timeout: 265 seconds)
[06:21:53] *** kahiru <kahiru!~quassel@ip-89-102-207-18.net.upcbroadband.cz> has joined #illumos
[06:28:51] *** BOKALDO <BOKALDO!~BOKALDO@87.110.88.30> has quit IRC (Quit: Leaving)
[06:37:01] *** freakazoid0223 <freakazoid0223!~matt@pool-96-227-98-169.phlapa.fios.verizon.net> has quit IRC (Ping timeout: 255 seconds)
[06:51:16] *** kerberizer <kerberizer!~luchesar@wikipedia/Iliev> has quit IRC (Ping timeout: 256 seconds)
[07:00:25] *** kahiru <kahiru!~quassel@ip-89-102-207-18.net.upcbroadband.cz> has quit IRC (Ping timeout: 255 seconds)
[07:01:33] *** kahiru <kahiru!~quassel@ip-89-102-207-18.net.upcbroadband.cz> has joined #illumos
[07:13:57] *** kerberizer <kerberizer!~luchesar@wikipedia/Iliev> has joined #illumos
[07:37:25] *** tsoome <tsoome!~tsoome@90e4-c54e-b763-ecad-2f80-4a40-07d0-2001.sta.estpak.ee> has quit IRC (Quit: This computer has gone to sleep)
[07:45:25] *** kahiru <kahiru!~quassel@ip-89-102-207-18.net.upcbroadband.cz> has quit IRC (Ping timeout: 255 seconds)
[07:46:49] *** kahiru <kahiru!~quassel@ip-89-102-207-18.net.upcbroadband.cz> has joined #illumos
[08:07:42] *** ptribble <ptribble!~ptribble@cpc92716-cmbg20-2-0-cust138.5-4.cable.virginm.net> has joined #illumos
[08:08:49] *** kahiru <kahiru!~quassel@ip-89-102-207-18.net.upcbroadband.cz> has quit IRC (Ping timeout: 255 seconds)
[08:09:58] *** kahiru <kahiru!~quassel@ip-89-102-207-18.net.upcbroadband.cz> has joined #illumos
[08:20:30] *** tsoome <tsoome!~tsoome@148-52-235-80.sta.estpak.ee> has joined #illumos
[08:23:18] *** amrfrsh <amrfrsh!~Thunderbi@109.201.133.238> has quit IRC (Quit: amrfrsh)
[08:31:48] *** neuroserve <neuroserve!~toens@195.71.113.124> has joined #illumos
[08:41:46] *** wiedi <wiedi!~wiedi@ip5b4096a6.dynamic.kabel-deutschland.de> has quit IRC (Ping timeout: 256 seconds)
[08:47:51] *** kahiru <kahiru!~quassel@ip-89-102-207-18.net.upcbroadband.cz> has quit IRC (Ping timeout: 260 seconds)
[08:52:32] *** kahiru <kahiru!~quassel@ip-89-102-207-18.net.upcbroadband.cz> has joined #illumos
[09:09:44] *** nde <nde!uid414739@gateway/web/irccloud.com/x-ditcpsdbhoxaflej> has quit IRC (Quit: Connection closed for inactivity)
[09:14:59] *** andy_js <andy_js!~andy@51.146.99.40> has joined #illumos
[09:21:07] *** wiedi <wiedi!~wiedi@185.85.220.202> has joined #illumos
[09:25:24] *** kahiru <kahiru!~quassel@ip-89-102-207-18.net.upcbroadband.cz> has quit IRC (Ping timeout: 256 seconds)
[09:26:52] *** kahiru <kahiru!~quassel@ip-89-102-207-18.net.upcbroadband.cz> has joined #illumos
[09:31:43] *** Knez <Knez!~Knez@h-73-78.A444.priv.bahnhof.se> has quit IRC (Ping timeout: 272 seconds)
[09:45:15] <toasterson1> Smithx10 (IRC): Nope. I most likely launch 4 zones as one pod. Where the management daemon for the whole pod is chrooted in a filesystem above the zone roots.
[09:45:32] <toasterson1> or running in a management zone
[09:48:17] *** kev009 <kev009!~kev009@ip72-222-200-117.ph.ph.cox.net> has joined #illumos
[09:48:43] *** cartwright <cartwright!~chatting@gateway/tor-sasl/cantstanya> has quit IRC (Ping timeout: 240 seconds)
[09:48:50] *** TwoADay2 <TwoADay2!~hemi770@208.79.89.189> has joined #illumos
[09:48:58] *** TwoADay <TwoADay!~hemi770@208.79.89.189> has quit IRC (Ping timeout: 268 seconds)
[09:49:03] *** KeiraT <KeiraT!~k4ra@gateway/tor-sasl/k4ra> has quit IRC (Ping timeout: 240 seconds)
[09:59:32] *** KeiraT <KeiraT!~k4ra@gateway/tor-sasl/k4ra> has joined #illumos
[10:00:41] *** cartwright <cartwright!~chatting@gateway/tor-sasl/cantstanya> has joined #illumos
[10:13:46] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has joined #illumos
[10:18:30] *** man_u <man_u!~manu@fob.gandi.net> has joined #illumos
[10:19:22] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has quit IRC (Ping timeout: 268 seconds)
[10:21:42] *** man_u_ <man_u_!~manu@manu2.gandi.net> has joined #illumos
[10:23:49] *** man_u <man_u!~manu@fob.gandi.net> has quit IRC (Ping timeout: 255 seconds)
[10:23:49] *** man_u_ is now known as man_u
[10:43:46] *** patdk-lap <patdk-lap!~patrickdk@208.94.189.75> has quit IRC (Ping timeout: 258 seconds)
[10:44:17] *** patdk-lap <patdk-lap!~patrickdk@208.94.189.75> has joined #illumos
[10:52:12] *** hawk <hawk!~hawk@d.qw.se> has joined #illumos
[11:02:04] *** BOKALDO <BOKALDO!~BOKALDO@87.110.88.30> has joined #illumos
[11:06:22] *** ldepandis <ldepandis!~ldepandis@unaffiliated/ldepandis> has joined #illumos
[11:10:50] *** kahiru <kahiru!~quassel@ip-89-102-207-18.net.upcbroadband.cz> has quit IRC (Ping timeout: 240 seconds)
[11:16:20] *** kahiru <kahiru!~quassel@ip-89-102-207-18.net.upcbroadband.cz> has joined #illumos
[11:25:10] *** kahiru <kahiru!~quassel@ip-89-102-207-18.net.upcbroadband.cz> has quit IRC (Ping timeout: 258 seconds)
[11:26:39] *** kahiru <kahiru!~quassel@ip-89-102-207-18.net.upcbroadband.cz> has joined #illumos
[11:34:10] *** kahiru <kahiru!~quassel@ip-89-102-207-18.net.upcbroadband.cz> has quit IRC (Ping timeout: 265 seconds)
[11:35:25] *** kahiru <kahiru!~quassel@ip-89-102-207-18.net.upcbroadband.cz> has joined #illumos
[12:14:26] *** tsoome <tsoome!~tsoome@148-52-235-80.sta.estpak.ee> has quit IRC (Ping timeout: 240 seconds)
[13:00:44] <EisNerd> toasterson1: at least for smb, the zoned attribute results in the sharesmb attribute being ignored in the global zone
[13:01:11] <EisNerd> if this was done generically enough it should also work for nfs
[13:02:09] <EisNerd> it might be nice to have an attribute on zvols for volume export (iSCSI, FC, ...)
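(Roughly the behaviour being described, sketched with an illustrative dataset name:)

    zfs create -o sharesmb=on rpool/export/media   # shared by the global zone
    zfs set zoned=on rpool/export/media            # delegated: the GZ now ignores sharesmb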
[13:02:53] <toasterson1> EisNerd: I want to first play around with it manually anyway. Let's first publish guides on how to do it before we write them down in code :)
[13:04:31] <EisNerd> btw does someone know if cinder is also a transport protocol, or just a management layer that relies for transport on things like zfs, iSCSI, etc.?
[13:08:45] <toasterson1> EisNerd (IRC): you mean ceph? or the openstack project?
[13:08:57] <EisNerd> the openstack component
[13:09:24] <toasterson1> it manages other software. it's purely a rest api that runs commands
[13:10:23] <EisNerd> ok so to get cinder in a zone, we would need zoned iscsi
[13:12:14] <toasterson1> no. you would need something cinder can talk with. a driver
[13:12:33] <toasterson1> that can be a door to the gz where the executor of the commands lives.
[13:12:50] <toasterson1> current cinder drivers are specific for linux utilities
[13:13:06] <toasterson1> or you can call another rest api via network like nexenta does
[13:13:40] <toasterson1> EisNerd (IRC): planning on using openstack?
[13:15:25] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has joined #illumos
[13:17:13] <EisNerd> at least we think about
[13:18:38] <toasterson1> It is only useful if you have Debian or Red Hat as hosts. Anything else is just an enormous amount of work. Even then you have one fulltime employee upgrading the installation, depending on how many projects you use.
[13:19:00] <toasterson1> And it is an absolute self-assembly slaughter.
[13:19:35] <toasterson1> IMHO it is faster to write your own web interface to call the commands manually
[13:19:55] <toasterson1> although the network component is nice. On Red Hat
[13:20:15] <EisNerd> the idea was to have ubuntu or esxi compute nodes
[13:20:52] <toasterson1> esxi would work. Then you would need VMs or zones only anyway
[13:21:01] <EisNerd> as vsphere can use openstack backend services like cinder
[13:22:16] <EisNerd> but would be nice to have vm management integrated with zvol mgmt
[13:23:23] <toasterson1> if you want illumos as storage backend then writing a cinder driver would work.
[13:24:26] *** liv3010m <liv3010m!~liv3010m@77-72-245-190.fibertel.com.ar> has quit IRC (Ping timeout: 240 seconds)
[13:27:29] <EisNerd> I think the lvm driver should be adaptable
[13:27:48] <toasterson1> very probable
[13:30:08] <EisNerd> would also be nice if some effort could be put into having multihost pools holding zone(s) + data, failing over from one host to another. But this involves the problem of upgrading the zones, which might be solved using BEs
[13:37:35] <toasterson1> EisNerd (IRC): no. multihost pools would mean either drbd-like async updates for blocks or some sort of consensus. Extending an in-kernel facility for this kind of complex task is not a good idea. Rather it is better to let a userland-based system like edgefs or ceph handle all the details, from blocks on disk to network-spanning pools. For zfs that would mean a completely new filesystem from scratch; everything else would be a Frankenstein of complexity.
[13:38:20] <toasterson1> especially with the development of languages like rust which allow much better designs. C is no longer the language of choice for such developments
[13:49:26] <EisNerd> toasterson1: I'm talking about pools making use of the existing multihost fencing feature
[13:50:12] <EisNerd> so pools residing on hardware-implemented multi-writer block devices
[14:18:03] *** Kurlon_ <Kurlon_!~Kurlon@cpe-67-253-136-97.rochester.res.rr.com> has quit IRC (Ping timeout: 258 seconds)
[14:22:39] *** awordnot <awordnot!~awordnot@c-73-210-60-203.hsd1.il.comcast.net> has quit IRC (Ping timeout: 258 seconds)
[14:27:28] *** awordnot <awordnot!~awordnot@c-73-210-60-203.hsd1.il.comcast.net> has joined #illumos
[14:34:03] *** awordnot <awordnot!~awordnot@c-73-210-60-203.hsd1.il.comcast.net> has quit IRC (Ping timeout: 268 seconds)
[14:39:10] *** awordnot <awordnot!~awordnot@c-73-210-60-203.hsd1.il.comcast.net> has joined #illumos
[14:42:45] <toasterson1> EisNerd (IRC): multihost fencing feature?
[14:42:54] <toasterson1> existing?
[14:43:59] *** amrfrsh <amrfrsh!~Thunderbi@134.19.189.92> has joined #illumos
[14:47:03] *** yomi <yomi!~void@ip4d16b7c2.dynamic.kabel-deutschland.de> has joined #illumos
[14:47:27] *** yomi is now known as Guest80057
[14:49:38] *** heroux <heroux!sandroco@gateway/shell/insomnia247/x-aaeagmmblkltdfop> has quit IRC (Ping timeout: 240 seconds)
[14:50:55] *** Guest21811 <Guest21811!~void@ip4d16b7c2.dynamic.kabel-deutschland.de> has quit IRC (Ping timeout: 272 seconds)
[14:55:45] *** Kurlon <Kurlon!~Kurlon@bidd-pub-03.gwi.net> has joined #illumos
[15:01:46] *** kayront- <kayront-!~kayront@zbase.xen.prgmr.com> has quit IRC (Quit: ZNC 1.7.5 - https://znc.in)
[15:02:05] *** kayront <kayront!~kayront@zbase.xen.prgmr.com> has joined #illumos
[15:02:29] *** kayront is now known as Guest74249
[15:12:47] *** rzezeski <rzezeski!uid151901@gateway/web/irccloud.com/x-tjxkavsatoekhqld> has quit IRC (Quit: Connection closed for inactivity)
[15:24:17] *** amrfrsh <amrfrsh!~Thunderbi@134.19.189.92> has quit IRC (Quit: amrfrsh)
[15:24:40] *** amrfrsh <amrfrsh!~Thunderbi@134.19.189.92> has joined #illumos
[15:26:44] *** heroux <heroux!sandroco@gateway/shell/insomnia247/x-zjoagjvsfoxwfeot> has joined #illumos
[15:39:01] *** jcea <jcea!~Thunderbi@2001:bc8:2ecd:caed:7670:6e00:7670:6e00> has joined #illumos
[15:54:15] *** freakazoid0223 <freakazoid0223!~matt@pool-96-227-98-169.phlapa.fios.verizon.net> has joined #illumos
[15:55:28] *** ldepandis <ldepandis!~ldepandis@unaffiliated/ldepandis> has quit IRC (Ping timeout: 255 seconds)
[16:18:05] *** neirac <neirac!~cneir@pc-184-104-160-190.cm.vtr.net> has joined #illumos
[16:22:06] <EisNerd> yes
[16:22:23] <EisNerd> pool attribute multihost
[16:22:31] <EisNerd> works great
[16:22:50] <toasterson1> huh, do you have a guide/link?
[16:23:01] <EisNerd> btw how can I import a pool only temporarily
[16:23:20] <toasterson1> that would mean zfs has network integration?
[16:23:47] <toasterson1> oh you mean replicate the pool on block level and only import on one host
[16:23:48] <EisNerd> no, it uses the pool metadata
[16:24:22] <EisNerd> toasterson1: if you have a pool residing in a san with multiple hosts
[16:24:43] <toasterson1> aaaah
[16:25:16] <EisNerd> or a twin node box with NVMe dual port ssds
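(A sketch of the mechanics under discussion, covering both the multihost property and the temporary-import question above; the pool name is illustrative, and this assumes OpenZFS-style multihost/MMP support plus a distinct hostid on each node:)

    # enable multihost (MMP) so a second host refuses to import the live pool
    zpool set multihost=on tank

    # import without recording the pool in the cache file, so it will not
    # be re-imported automatically on the next boot ("temporary" import)
    zpool import -o cachefile=none tank

    # hand the pool over to the other node
    zpool export tank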
[16:25:47] <toasterson1> I was thinking the storage server is the SAN. That means the SAN takes care of the whole storage replication
[16:26:15] <toasterson1> I was thinking of software that does disaster replication or actually builds a SAN.
[16:26:58] <toasterson1> Usually you use an open-source SAN software like ceph with cinder and export it directly to the compute nodes.
[16:27:30] <EisNerd> I have a box with 12(max 24) dual port NVMe ssds and two server nodes. So the idea is to run OI on the nodes and have a pool for file services like SMB/NFS and pool for blockdev
[16:27:34] <toasterson1> cinder makes no sense if the compute nodes share the storage or am i wrong?
[16:28:29] <EisNerd> and each pool can be serviced by one node or the other via a 40Gbit link to the network core switch
[16:29:33] <EisNerd> so far I wasn't able to get a detailed idea of how the smb3 persistent session stuff is intended to work
[16:30:26] *** neuroserve <neuroserve!~toens@195.71.113.124> has quit IRC (Ping timeout: 240 seconds)
[16:31:17] <EisNerd> as it sounds promising
[16:31:57] <toasterson1> huh i was not aware of such setups. that is interesting to check out.
[16:39:45] *** kkantor <kkantor!~kkantor@c-24-118-59-107.hsd1.mn.comcast.net> has quit IRC (Remote host closed the connection)
[16:47:06] <EisNerd> this is why I'm a bit uneasy regarding the not-that-outstanding NVMe performance
[16:49:54] <toasterson1> understandable.
[16:50:21] <EisNerd> I made a post to illumos dev regarding this box if you are interested
[16:51:19] *** nde <nde!uid414739@gateway/web/irccloud.com/x-xpqkvbpnccqyoskq> has joined #illumos
[16:52:16] *** danmcd <danmcd!~danmcd@static-71-174-113-16.bstnma.fios.verizon.net> has joined #illumos
[16:53:21] <EisNerd> but I expect this will likely improve in the next few months
[16:53:38] *** neirac_ <neirac_!~cneir@pc-184-104-160-190.cm.vtr.net> has joined #illumos
[16:53:45] <toasterson1> ah i saw there are some links in it about the hardware.
[16:54:31] <toasterson1> ah, it's 2 systems in 1U each, interesting.
[16:54:50] <EisNerd> also it looks like the zfs encryption is not fully utilizing NI support
[16:55:04] <EisNerd> which is also a really interesting and unique feature
[16:55:12] <toasterson1> NI? A CPU extension?
[16:56:40] *** neirac <neirac!~cneir@pc-184-104-160-190.cm.vtr.net> has quit IRC (Ping timeout: 255 seconds)
[16:57:46] <EisNerd> latest Xeons have native instructions specifically for AES which could be utilized to reduce the workload caused by symmetric cipher processing as well as improve the throughput
[16:57:50] <jbk> newer intel (and I believe amd) cpus have instructions to perform aes encryption
[16:58:17] <jbk> however there's some known issues that limit their effectiveness
[16:58:18] <EisNerd> openssl has a lot of highly optimized assembler code
[16:58:37] <jbk> i would suggest using aes-gcm for encrypted datasets
[16:58:58] <toasterson1> ah yes, the AES instruction set. NI didn't ring a bell. The Linux kernel includes a driver for them but I think we have none yet
[16:59:02] <jbk> our testing showed it was consistently faster than ccm
[16:59:22] *** igork <igork!~igork@45.137.113.0> has joined #illumos
[16:59:27] <jbk> (known issues in illumos I should say)
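(A sketch of picking GCM for an encrypted dataset, per jbk's suggestion; pool and dataset names are illustrative:)

    # aes-256-gcm measured consistently faster than the ccm modes
    zfs create -o encryption=aes-256-gcm \
               -o keyformat=passphrase \
               tank/secure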
[16:59:53] *** igork <igork!~igork@45.137.113.0> has quit IRC (Client Quit)
[17:00:49] <EisNerd> https://pastebin.com/MH30CUv3
[17:01:17] <EisNerd> => openssl speed -elapsed -evp aes-128-cbc
[17:01:47] <EisNerd> not sure if openssl code could be used for zfs encryption
[17:01:53] <jbk> i've been meaning to try out rm's fpu context changes w/ it to see if it helps any (I think all the contexts where encryption is used would be safe w/ that patch)
[17:02:32] <jbk> max did some testing that showed migrations were hurting performance... so it'd be interesting to test at least..
[17:03:29] <jbk> i also tried to get in touch with saso kiselkov to see about picking up the work he did a while back, but was never able to get ahold of him
[17:03:40] <EisNerd> would be interesting to see if/how easily a zpool's reads/writes could be profiled with dtrace
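(One way to start on that with the DTrace io provider; a sketch, not a full zpool profile:)

    # count block I/O by device and program, and sum bytes issued
    dtrace -n 'io:::start {
        @ops[args[1]->dev_statname, execname] = count();
        @bytes[args[1]->dev_statname] = sum(args[0]->b_bcount);
    }'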
[17:04:19] <jbk> (if anyone knows him and wanted to ask him, i'd be appreciative :P)
[17:04:45] <EisNerd> btw how could I get the current values of a module's tunables
[17:05:15] <EisNerd> like the ones given in the nvme manpage
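(The usual answer on illumos is mdb -k; zfs_arc_max is used below purely as a well-known example symbol, so substitute the tunable from the manpage you care about:)

    # read a kernel tunable's current value (E = 8-byte unsigned decimal)
    echo 'zfs_arc_max/E' | mdb -k

    # persistent settings go in /etc/system, e.g.
    #   set zfs:zfs_arc_max = 0x100000000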
[17:07:53] <EisNerd> jbk: do you know how it is done currently? Was the crypto implemented in the kernel code, or is some standard crypto lib code linked statically?
[17:10:19] <jbk> it's largely shared code, but with some ifdefs for user vs. kernel
[17:10:58] <jbk> that's not really the issue though
[17:15:58] <EisNerd> so the question is why not use the already existing and well known working code from openssl
[17:16:19] <EisNerd> which performs very well on OI as the link above shows
[17:18:15] *** tsoome <tsoome!~tsoome@2f9c-b573-86c9-f88e-2f80-4a40-07d0-2001.sta.estpak.ee> has joined #illumos
[17:19:06] *** tsoome <tsoome!~tsoome@2f9c-b573-86c9-f88e-2f80-4a40-07d0-2001.sta.estpak.ee> has quit IRC (Client Quit)
[17:19:08] *** kev009 <kev009!~kev009@ip72-222-200-117.ph.ph.cox.net> has quit IRC (Ping timeout: 256 seconds)
[17:19:16] *** tsoome <tsoome!~tsoome@80.235.52.148> has joined #illumos
[17:21:50] *** cypa <cypa!~cypam]_@5.79.173.34> has joined #illumos
[17:21:55] *** cypa_ <cypa_!~cypam]_@5.79.173.34> has joined #illumos
[17:22:26] <EisNerd> oh btw, is it expected that dd on a physical blockdevice behaves differently when the device is part of an active pool's vdev?
[17:23:46] *** cneir__ <cneir__!~cneir@pc-184-104-160-190.cm.vtr.net> has joined #illumos
[17:26:49] *** neirac_ <neirac_!~cneir@pc-184-104-160-190.cm.vtr.net> has quit IRC (Ping timeout: 255 seconds)
[17:26:50] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has quit IRC (Quit: Leaving.)
[17:27:06] *** neirac <neirac!~cneir@pc-184-104-160-190.cm.vtr.net> has joined #illumos
[17:28:12] *** cneir__ <cneir__!~cneir@pc-184-104-160-190.cm.vtr.net> has quit IRC (Ping timeout: 256 seconds)
[17:30:17] <tsoome> EisNerd: with a whole-disk setup, zfs does enable the drive cache.
[17:33:15] *** awordnot <awordnot!~awordnot@c-73-210-60-203.hsd1.il.comcast.net> has quit IRC (Read error: Connection reset by peer)
[17:34:12] *** awordnot <awordnot!~awordnot@c-73-210-60-203.hsd1.il.comcast.net> has joined #illumos
[17:36:15] <toasterson1> EisNerd (IRC): well known does not mean it is useful. Thanks to the libressl initiative many bugs could be fixed in openssl, but the codebase had a lack of funding and a very big set of features. IMHO the best AES code would be proven like openssl's, but a separate sub-project/library.
[17:36:25] <toasterson1> so the codebase remains small
[17:40:04] <EisNerd> as these are individual objects with well-defined APIs this should be quite straightforward
[17:40:37] * EisNerd did openssl 1.1.1 port to HPE NSK for IBM MQ
[17:41:26] <EisNerd> I'm also working on getting asm optimisations working on NSK
[17:43:12] <EisNerd> hm let me check, maybe I can try to prepare something
[17:45:25] <EisNerd> can you point me to the currently used api / prototypes, so I can try to provide a compatible or at least comparable interface?
[17:46:12] <jbk> it's more involved than that
[17:46:47] <jbk> at minimum, you have to deal with using the xmm register and such in the kernel, which currently requires special handling
[17:46:51] *** neirac_ <neirac_!~cneir@pc-184-104-160-190.cm.vtr.net> has joined #illumos
[17:49:44] *** neirac <neirac!~cneir@pc-184-104-160-190.cm.vtr.net> has quit IRC (Ping timeout: 256 seconds)
[17:55:21] *** man_u <man_u!~manu@manu2.gandi.net> has quit IRC (Quit: man_u)
[17:55:50] <EisNerd> is there something more detailed regarding the special handling?
[17:58:03] <rmustacc> How familiar are you with the x86 fpu management from an architectural perspective?
[17:59:08] <EisNerd> not that much
[17:59:27] <rmustacc> So, that will make explaining the special handling a bit more complicated.
[18:00:26] <jbk> sorry.. i'm juggling multiple conversations on my side right now
[18:00:28] <rmustacc> That said, the kernel aes module does seem to have support for Intel specific aes logic.
[18:02:40] <rmustacc> In general, the kernel doesn't use the fpu. The registers in the FPU are basically the userland copy of the registers. When a thread enters the kernel, the non-FPU registers are saved so the kernel has full access to the non-FPU registers.
[18:02:47] <rmustacc> As such, you need to save and restore them before touching them.
[18:02:59] <jlevon> rmustacc: hello. I have gcc 7.5 tree I've been doing some basic testing with today: https://github.com/joyent/gcc/tree/il-7_5_0
[18:03:20] <EisNerd> rmustacc: that is what I already guessed half the way
[18:03:21] <rmustacc> EisNerd: For background start by reading uts/intel/ia32/os/fpu.c. I wrote a block comment there.
[18:03:43] <jlevon> cc jperkin too
[18:04:15] <rmustacc> A snapshot of the kfpu development that jbk mentioned is here https://github.com/rmustacc/illumos-gate/commit/687dea0f7db5ce2a60940855d35739aa7b90e4e9.
[18:04:39] <rmustacc> That said, I don't know enough about your current problem to say whether that is actually your problem, or whether the current ASM being used for AES is wrong.
[18:04:50] <jlevon> rmustacc: I don't think I can open a PR since it's a new branch altogether, so not sure how we should go about review etc.
[18:04:52] <EisNerd> but maybe first we could check if someone (maybe I) can extract the objects from openssl forming the optimized AES implementation into a library with an acceptable interface
[18:04:55] <jbk> LeftWing: is there any special way to close a duplicate of an already closed ticket, or just marked closed?
[18:05:13] <rmustacc> EisNerd: I believe that is where the AES code we use today comes from.
[18:05:35] <EisNerd> hm ok
[18:05:44] <EisNerd> anyway, off for now
[18:05:51] <rmustacc> It says it was written by Intel for OpenSSL.
[18:06:11] <rmustacc> Take a look at usr/src/common/crypto/aes/amd64/aes_intel.s.
[18:08:08] <rmustacc> jlevon: Hmm. Good question.
[18:08:25] <jlevon> I put some notes at https://www.illumos.org/issues/12384
[18:08:34] <andyf> jlevon - for the gcc9.2 thing, I just opened an issue asking for someone to create a branch based on upstream's releases/ tag, then I raised a PR against that new branch
[18:08:58] <andyf> and, thanks to our conversation earlier, I've restarted the testing that I need to complete on that
[18:09:10] <jlevon> oh that'll work, could I convince you rmustacc ? :)
[18:09:37] <rmustacc> Sure.
[18:09:52] <rmustacc> Though really you two should probably have the power yourselves at some point.
[18:10:17] <jlevon> should probably be andy as he's done a bunch more
[18:10:58] <rmustacc> But I'll get that kicked off. Just takes longer to clone than illumos-gate.
[18:11:21] <jlevon> indeed
[18:12:10] <rmustacc> andyf, jlevon: Any other branches you need?
[18:13:07] <jlevon> not me
[18:14:09] <andyf> no, I am not worrying too much about the gcc8 stuff
[18:14:28] <rmustacc> OK. Sounds good.
[18:14:43] *** neirac_ <neirac_!~cneir@pc-184-104-160-190.cm.vtr.net> has quit IRC (Read error: Connection reset by peer)
[18:15:04] <andyf> I hope to have some reasonable test information for the 9.2 PR this weekend. The test suite is taking almost 4 hours for just gcc and g++ but it's getting there
[18:15:17] *** neirac_ <neirac_!~cneir@pc-184-104-160-190.cm.vtr.net> has joined #illumos
[18:15:20] <rmustacc> Oof. Well, thanks for running through all that.
[18:15:49] <jbk> almost as long as the zfs test suite :) (at least in a vm)
[18:16:20] <andyf> When I run it with lots of parallelism, it gives inconsistent results
[18:17:24] <andyf> so I will soon have a single-threaded run for almost-stock 9.2 and the patched 9.2, which is a start. I don't know how I'm going to compare that against gcc7
[18:17:47] <rmustacc> Comparing it in what sense?
[18:17:58] <rmustacc> In general, the comparisons are meant to be against the stock releases, IIRC.
[18:18:14] <rmustacc> In terms of how do we validate gcc9 for illumos, there are a lot of things we can do to draw upon the gcc7 work that folks did.
[18:19:03] <andyf> right - once the branch is there we can work on getting the OpenIndiana gcc9 updated to this (the OOCE one already is).
[18:19:14] <andyf> The process of proving it for the next gate compiler is going to be longer
[18:19:36] <andyf> In the IPD though, Rich said that we should endeavour to compare the test results against the last sanctioned version
[18:19:48] <andyf> I can do that, but the list may be very long.. I'll see
[18:20:06] <rmustacc> Well, Rich is generally more right than me.
[18:20:14] <jlevon> I think that makes more sense for dot releases, but it can't hurt to at least see??
[18:21:10] <andyf> At this point I'm just trying to create an illumos gcc 9.2 that /should/ work for building gate after all of tsoome's work
[18:21:24] <andyf> so we can start using it as a shadow and then start testing stuff
[18:23:07] <rmustacc> At some point though, I think we'll need to drop 4.4.4 so at some point SPARC will have to deal with that, ready or not...
[18:24:22] <andyf> yes, three shadows is probably at least one too many
[18:26:04] *** wiedi <wiedi!~wiedi@185.85.220.202> has quit IRC (Quit: ^C)
[18:26:26] *** khng300 <khng300!~khng300@unaffiliated/khng300> has quit IRC (Quit: ZNC 1.7.5 - https://znc.in)
[18:26:38] <jlevon> we could probably drop it while still not actively breaking it when peter notices common code isn't gcc4 friendly
[18:27:14] <rmustacc> jlevon, andyf: Can you take a look at https://github.com/illumos/gcc/tree/il-7_5_0?
[18:27:19] <rmustacc> I believe that's what you want.
[18:27:26] <rmustacc> Hopefully executed correctly.
[18:27:56] <tsoome> what is stopping sparc from being built with gcc 7?
[18:28:13] <andyf> Yep, looks great, thanks - tag: releases/gcc-7.5.0, illumos/il-7_5_0, gcc/releases/gcc-7
[18:28:35] <rmustacc> tsoome: If SPARC has reached the point where 7 can be primary, that's great. I didn't believe it was there.
[18:29:27] <tsoome> ok, so it was not about gcc 7 crashing on compile or like that
[18:31:36] <andyf> tsoome - if I can get this 9.2 branch done, it should fix your xpg4/6 issues. Is there much left after that?
[18:32:17] <andyf> (well, the xpg4/6 issues you have stuff in your branch for)
[18:32:23] <jlevon> tsoome: don't think we know that
[18:32:36] <tsoome> a lot. The xpg stuff is quite a short list compared to the rest :D
[18:33:04] <tsoome> jlevon, yea, i guessed that...
[18:33:08] <andyf> and finding real bugs too :)
[18:33:17] <rmustacc> tsoome: Speaking of warnings, did you see some of the cleanup that was done in the arm64/risc-v gate?
[18:33:19] <jperkin> sorry if someone already asked this, but having the il* branches match the upstream branch would be good, currently they're branched against master
[18:33:49] <rmustacc> jperkin: for 7.5 I took the releases/gcc-7.5.0 tag and started the branch from there.
[18:33:56] <jperkin> andyf: lemme know when you have a 9.2 tag ready with all the latest bits and I'll see about getting it into pkgsrc for testing
[18:33:58] <rmustacc> Is that different from what you want?
[18:34:21] <jlevon> jperkin: I'm curious what more you have on top, if anything needed.
[18:34:26] <jperkin> rmustacc: no that sounds fine, the other branches I was looking at the other day were against master so were like 10,000 commits different
[18:34:31] <andyf> jperkin - I don't know what that means
[18:35:03] <rmustacc> andyf: Which part?
[18:35:04] <jperkin> andyf: sorry, I thought you said you were getting an il-9.2 branch up
[18:35:19] <andyf> jperkin - I am - I didn't know what you meant by branched against master
[18:35:45] <jperkin> well if you go to https://github.com/illumos/gcc/tree/il-9_2_0 it says "This branch is 453 commits ahead, 7129 commits behind gcc-mirror:master."
[18:35:48] <andyf> but I got it. The 9.2 branch is definitely not like that, nor the new 7.5 one
[18:36:47] <rmustacc> jperkin: The branch matches the gcc releases/gcc-7.5.0 tag.
[18:36:51] <rmustacc> Erm, wrong branch, oops.
[18:37:16] <rmustacc> But it's still true. The gcc releases/gcc-9.2.0 tag is the same as our branch.
[18:37:30] <rmustacc> So I think github is just saying something confusing?
[18:37:55] <jlevon> I don't think you can meaningfully set a standard comparison base in github for branches
[18:38:10] <rmustacc> I believe it was correctly created from the gcc 9.2 tag.
[18:38:11] <jperkin> hm, maybe. I just wanted an easy way to verify all the patches for possible conflicts
[18:38:16] <jlevon> so it just presumes your parent is the forked repo's default branch
[18:38:33] <jlevon> jperkin: you should be able to do 'compare' still against the base gcctag
[18:38:46] <andyf> git log releases/gcc-9.2.0..il-9.2.0
[18:38:50] <andyf> or git diff..
[18:39:32] <jperkin> yeh sure, I just don't recall github doing that before, but yeh it does it for some of my stuff too, til
[18:39:52] <rmustacc> I think it depends a lot on where the repo came from.
[18:41:34] <jlevon> https://github.com/illumos/gcc/compare/releases/gcc-9.1.0...illumos:il-9_1_0 seems bust
[18:41:47] <jlevon> There isn’t anything to compare.
[18:41:48] <jlevon> releases/gcc-9.1.0 and il-9_1_0 are entirely different commit histories.
[18:41:57] <gitomat> [illumos-gate] 11958 need topo maps for the SMCI,SYS-2028U-E1CNRT+ -- Rob Johnston <rob.johnston at joyent dot com>
[18:42:04] <jlevon> https://github.com/illumos/gcc/compare/releases/gcc-9.2.0...illumos:il-9_2_0 is empty
[18:43:43] *** razamatan <razamatan!~blah@unaffiliated/razamatan> has joined #illumos
[18:44:18] <jlevon> didn't realise we hadn't assembled stuff for gcc 9 at all
[18:45:07] <jlevon> andyf: I think you'd want the as --32 change there too?
[18:45:32] <razamatan> if i want to set up a server whose primary purpose is to be a router, should i use omnios, smartos or sdc/triton? i already have an existing smartos server that's our primary storage and vm host.
[18:46:18] <razamatan> this router box does not have ecc, and isn't meant to store anything critical. mainly going to use zfs as the fs to back itself up to our existing smartos box
[18:46:23] <andyf> gcc9 in OmniOS and OpenIndiana is 64-bit with 64-bit output by default, so it doesn't need the same fix
[18:46:36] <rmustacc> You pay a price for sdc/triton in terms of general management. So I think the question is if you were going to want to manage your existing thing coherently with the other one, then it might be useful, otherwise sdc/triton may not be worthwhile.
[18:47:20] <razamatan> yeah.. my current intuition is leaning toward the order i asked (omni, smart, sdc)
[18:47:47] <rmustacc> I think the difference between smartos and omni comes down to what kind of management experience you want and personal preference.
[18:48:34] <andyf> rmustacc - and a little of whether you want to have actual boot disks
[18:48:48] <razamatan> so my current smartos box actually uses pxe boot from our old router. i won't have that in my new router, so i was leaning toward omni os.
[18:49:52] <razamatan> i do have a 250gb drive in the new router box...
[18:49:58] <andyf> jlevon - took me a while to remember why, but for 32 bit output, it means that we always have -m32 in the params
[18:57:48] <LeftWing> razamatan: I have used a lot of SmartOS for many years, but current OmniOS is pretty slick! Using it increasingly as a VM guest on various cloud providers.
[18:57:53] <razamatan> so the only real diff between omni and smart is just the release cadence (omni has one, but smart is just "live") and install/boot (traditional vs ram only)?
[18:58:33] <LeftWing> You can install the pkgsrc bootstrap that you'd get in a SmartOS zone, so I've been able to keep using the same packages
[19:00:28] <jlevon> andyf: righto
[19:02:16] <andyf> LeftWing - omnios has pkgsrc branded zones too. I use them quite a bit when I need to spin up test instances
[19:02:42] <LeftWing> I am pretty down on the interaction between boot environments and zones
[19:03:08] <LeftWing> So I have been making my own delegated datasets for pkgsrc stuff that's outside of the boot environment
[19:03:50] <andyf> razamatan - there are other differences of course, but they may not matter.
[19:04:16] <andyf> razamatan - both have bhyve/kvm (although SmartOS' kvm networking is better); both have lx zones
[19:04:34] <kahiru> andyf: what's the difference in the networking?
[19:04:36] <andyf> LeftWing - I misinterpreted that :) "Down on" as in dislike?
[19:04:40] <LeftWing> Indeed
[19:04:45] <andyf> I can understand that
[19:04:59] <andyf> and it's a massive source of confusion for new users
[19:05:01] <LeftWing> The fact that the creation of the boot environment means any data created after that is not persistent is a huge source of peril
[19:05:13] <LeftWing> Like... if I have PostgreSQL in /var of a zone... it seems like that's going to go very poorly
[19:05:19] <andyf> all of my zones have real inherited datasets for data like that
[19:05:56] <LeftWing> At the moment I have /etc/opt /opt /var/opt and /var/db/pkgin in the dataset
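(A sketch of carving those out as BE-independent datasets, in global-zone terms; inside a zone the same idea applies to a delegated dataset, and the layout is illustrative:)

    # datasets outside rpool/ROOT are untouched by BE snapshots and rollbacks
    zfs create -o mountpoint=/opt          rpool/opt
    zfs create -o mountpoint=/var/opt      rpool/varopt
    zfs create -o mountpoint=/var/db/pkgin rpool/pkgin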
[19:06:16] <andyf> razamatan - SmartOS has a driver called 'vnd' which is used to improve kvm networking.
[19:06:41] <kahiru> ah, I see
[19:06:46] <gitomat> [illumos-gate] 12279 ::arc_compression_stats generates errors -- Jason King <jason.king at joyent dot com>
[19:07:03] <andyf> I'm not going to try and summarise it further in this company :D
[19:08:59] <razamatan> i was likely going to run opnsense (primary use case is a router) in bhyve w/ pci pass through to the wan interface
[19:09:01] <andyf> but the performance in omnios is still fine
[19:09:02] <LeftWing> You're part of "this company", andyf :P
[19:10:40] <razamatan> a concern i do have is that w/ smartos, it's dead simple to keep the gz env on latest or locked on stable
[19:11:12] <LeftWing> OmniOS has an LTS release train that has been quite stable so far
[19:11:23] <razamatan> w/ omnios, it's not clear to me that tying myself to lts releases and doing periodic lts-timed release updates is hardened enough
[19:11:36] *** neuroserve <neuroserve!~toens@ip-178-202-216-248.hsi09.unitymediagroup.de> has joined #illumos
[19:11:45] <andyf> For the LTS and stable releases, omnios tries really hard to not require reboots
[19:12:25] <andyf> but there are weekly releases as-required for security updates (curl, openssl, bind, ...)
[19:13:01] <razamatan> non-kernel level updates i'm fine w/ restarting services w/o reboots
[19:13:41] <razamatan> it's really just the release updates when the kernel needs to be updated... w/ smartos, i know i can keep the old version available and can flip back if there are problems
[19:13:49] <andyf> There's a diagram of the release schedule at https://omnios.org/schedule
[19:14:06] <andyf> razamatan - with OmniOS the reboot update will be in a new boot environment, so you can easily switch back
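(The switch-back flow with boot environments, sketched; the BE name is illustrative:)

    beadm list                   # show current and previous BEs
    beadm activate omnios-prev   # make the old BE the default again
    init 6                       # reboot into it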
[19:14:19] <razamatan> are there more docs around boot envs?
[19:14:44] <razamatan> my experience w/ non-rolling distributions is that the kernel and other os stack coordination is a bit dicey
[19:14:59] <andyf> right - every time I upgrade a Linux system it really worries me
[19:15:31] <razamatan> well.. non-rolling linux systems for sure
[19:15:36] <razamatan> fan of gentoo personally
[19:16:24] <andyf> (well, my linux systems tend to be raspberry Pis)
[19:16:53] <razamatan> so w/ omni, it feels like it may not have that complete separation and isolation between releases since it's still release based. smart is built around this isolation between releases
[19:17:15] <andyf> I would just google for "solaris boot environments" - the first few hits there are a pretty good overview
[19:17:21] <razamatan> ty
[19:18:05] <razamatan> all oracle docs still apply for omni, or has illumos diverged since the fork?
[19:18:14] <razamatan> wrt be's
[19:18:31] <tsoome> there is a catch however, solaris has VARSHARE dataset.
[19:18:40] <andyf> It has diverged of course but the principles of BEs are the same
[19:22:28] <razamatan> tsoome: https://docs.oracle.com/cd/E26502_01/html/E21383/glyzj.html ? does illumos not do the same?
[19:23:14] <tsoome> no.
[19:23:53] *** awordnot <awordnot!~awordnot@c-73-210-60-203.hsd1.il.comcast.net> has quit IRC (Ping timeout: 272 seconds)
[19:24:21] <EisNerd> rmustacc: hm, difficult to say at first glance, but openssl has really sophisticated code to enable all potentially usable implementations at compile time; it then checks on init for the available CPU features and selects the best implementation. It is important to use the EVP variant, as otherwise the building blocks used are too fine-grained to allow usage of the sophisticated native instructions
[19:25:22] *** awordnot <awordnot!~awordnot@c-73-210-60-203.hsd1.il.comcast.net> has joined #illumos
[19:25:40] *** wiedi <wiedi!~wiedi@ip5b4096a6.dynamic.kabel-deutschland.de> has joined #illumos
[19:26:39] <rmustacc> Ultimately, if we're trying to solve a perf problem the most important thing is to quantify it and quantify what we want to improve before we worry about the specifics of current implementations.
[19:26:52] <rmustacc> And why it's important.
[19:27:20] *** sebasp <sebasp!~sebasp@69-165-197-84.cable.teksavvy.com> has joined #illumos
[19:27:55] *** ptribble <ptribble!~ptribble@cpc92716-cmbg20-2-0-cust138.5-4.cable.virginm.net> has quit IRC (Quit: Leaving)
[19:28:00] <rmustacc> I'm well aware of the variants there having done some of the sha bits in userland. But the kernel is a trickier matter so making sure for example we're telling apart the performance as observed in zfs versus the performance from doing raw operations, versus other stuff is important.
[19:28:17] <rmustacc> Things like how the kernel is currently (not very well) dealing with challenges to fpu usage and disabling pre-emption can have large impacts.
[19:29:35] <razamatan> tsoome: no to which question?
[19:30:06] <tsoome> illumos does not have /var/share unless you build it manually.
[19:30:41] <razamatan> thanks for the clarification
[19:31:30] *** razamatan <razamatan!~blah@unaffiliated/razamatan> has quit IRC (Quit: this is a bad quit message.)
[19:35:18] *** Fenix_ <Fenix_!~Fenix@75.170.89.113> has joined #illumos
[19:38:06] *** jellydonut <jellydonut!~quassel@s91904423.blix.com> has quit IRC (Quit: jellydonut)
[19:38:32] *** SPARC-Corgi <SPARC-Corgi!~Fenix@75.170.121.137> has quit IRC (Ping timeout: 256 seconds)
[19:45:22] *** jellydonut <jellydonut!~quassel@s91904423.blix.com> has joined #illumos
[19:46:25] *** neirac_ is now known as neirac
[19:55:41] *** nde <nde!uid414739@gateway/web/irccloud.com/x-xpqkvbpnccqyoskq> has quit IRC (Quit: Connection closed for inactivity)
[19:55:51] *** khng300 <khng300!~khng300@unaffiliated/khng300> has joined #illumos
[20:14:34] *** jcea <jcea!~Thunderbi@2001:bc8:2ecd:caed:7670:6e00:7670:6e00> has quit IRC (Ping timeout: 256 seconds)
[20:16:15] *** jcea <jcea!~Thunderbi@51.159.34.131> has joined #illumos
[21:13:56] *** wonko <wonko!~quassel@75.52.174.34> has quit IRC (Remote host closed the connection)
[21:19:11] *** wonko <wonko!~quassel@75.52.174.34> has joined #illumos
[21:24:12] <LeftWing> VARSHARE looks like a good start
[21:24:30] <LeftWing> Why do they not put /var/log in there?!
[21:24:32] <andyf> It's the first thing I delete on a Solaris box
[21:25:02] <LeftWing> /var/log or /var/share ?
[21:25:09] <andyf> but then I like uniformity across the different systems I use
[21:25:22] *** alanc <alanc!~alanc@inet-hqmc02-o.oracle.com> has quit IRC (Remote host closed the connection)
[21:25:37] <andyf> rpool/VARSHARE
[21:25:48] *** alanc <alanc!~alanc@inet-hqmc02-o.oracle.com> has joined #illumos
[21:26:13] <LeftWing> It feels like it at least begins to solve for the downsides of BEs
[21:26:21] <andyf> maybe it's something we should re-visit for the default installation though. It's definitely a cause of confusion
[21:26:39] <andyf> the installer does create /home as a BE-independent dataset, which helped a lot of newcomers
[21:27:52] <andyf> /var/tmp is in there too and that probably makes sense to be shared.
[21:29:51] <LeftWing> yeah
[21:30:12] <toasterson1> OPTSHARE would be nice too
[21:30:50] <LeftWing> I think that's harder to get right automatically
[21:31:06] <LeftWing> Because putting a tree in OPTSHARE would inhibit the ability to install any IPS package that goes in there
[21:31:30] <toasterson1> ah yes you have that :)
[21:31:33] <LeftWing> But it's clear a package ought not deliver into "/var/tmp"
[21:32:52] <toasterson1> and we would need to share the data directories as well. Maybe make pkg snapshot the VARSHARE for security?
[21:33:49] <jbk> LeftWing: is it safe to close an open dup of an already closed ticket (i.e. won't cause any automation problems)?
[21:34:34] <andyf> LeftWing - in fact, a package cannot deliver into /var/tmp
[21:34:43] <andyf> at least, IPS will fail to build a package that tries
[21:36:06] <LeftWing> jbk: Sure, that seems fine. Mark it as a duplicate?
[21:36:11] <andyf> The key thing with BEs etc. is that the administrator needs to fully understand it, and the implications of things, and it is currently too complicated even when you're just dealing with the GZ
[21:36:44] <LeftWing> Yeah the implications of the snapshotting and cleaving off the running system are a bit of a headspin
[21:37:25] <jbk> done and done
[21:37:31] <LeftWing> Tah
[21:40:32] *** ptribble <ptribble!~ptribble@cpc92716-cmbg20-2-0-cust138.5-4.cable.virginm.net> has joined #illumos
[21:42:13] *** BOKALDO <BOKALDO!~BOKALDO@87.110.88.30> has quit IRC (Quit: Leaving)
[21:47:45] <andyf> All of the new test suite failures I'm getting for my gcc 9.2 branch seem to be because the patched compiler ignores -fomit-frame-pointer
[21:49:10] <LeftWing> I feel richlowe bumped into that a bunch
[21:49:17] *** sebasp <sebasp!~sebasp@69-165-197-84.cable.teksavvy.com> has quit IRC (Quit: leaving)
[21:49:26] <jlevon> andyf: maybe we should patch those tests to skip
[22:00:34] *** Kurlon <Kurlon!~Kurlon@bidd-pub-03.gwi.net> has quit IRC (Ping timeout: 265 seconds)
[22:05:35] <andyf> Could be, a lot.
[22:06:04] <andyf> Some of them just do things like count the number of movq instructions in the binary. I could patch the expected count.
[22:06:25] <andyf> I dislike this mobile irc client. sorry
[22:18:05] *** Knez <Knez!~Knez@h-73-78.A444.priv.bahnhof.se> has joined #illumos
[22:35:33] *** Kurlon <Kurlon!~Kurlon@cpe-67-253-136-97.rochester.res.rr.com> has joined #illumos
[22:42:40] <andyf> jlevon - here's one example
[22:42:40] <andyf> https://github.com/illumos/gcc/blob/il-9_2_0/gcc/testsuite/gcc.target/i386/pr22076.c#L20
[22:43:12] <andyf> the highlighted line fails because `movq` does not appear 3 times in the produced binary
[22:45:49] *** mgerdts <mgerdts!~textual@2600:6c44:c7f:ec89:d120:c8f2:8ffa:44d3> has quit IRC (Read error: Connection reset by peer)
[22:47:33] <andyf> it actually appears 4 times, apparently because the -fomit-frame-pointer is ignored
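(Roughly what the failing scan amounts to; flags abbreviated, and the real test drives this through a dg-final scan-assembler-times directive:)

    gcc -O2 -fomit-frame-pointer -S pr22076.c -o pr22076.s
    grep -c movq pr22076.s   # expects 3; frame-pointer setup adds a movq %rsp, %rbp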
[22:48:00] *** mgerdts <mgerdts!~textual@96-41-228-208.dhcp.ftbg.wi.charter.com> has joined #illumos
[22:53:34] <andyf> so I can check through all of these and we accept it as a baseline for the illumos branch
[22:53:38] <andyf> or I patch all of the tests
[22:53:54] <andyf> or we add a -freally-omit-frame-pointer option which is used solely for the tests
[22:55:20] <jlevon> I would definitely not prefer the last one
[22:55:21] <jlevon> :)
[22:55:47] <jlevon> andyf: I think it should be easy to spot all -fomit-frame-pointer tests and patch em? or no?
[22:56:19] <andyf> Yes. tbh, there weren't that many new failures
[22:56:53] <andyf> 26 of them
[22:58:48] <andyf> Probably a hybrid approach.. I'll have a crack at it
[23:00:59] *** cypa_ <cypa_!~cypam]_@5.79.173.34> has quit IRC (Remote host closed the connection)
[23:00:59] *** cypa <cypa!~cypam]_@5.79.173.34> has quit IRC (Remote host closed the connection)
[23:22:24] *** ptribble <ptribble!~ptribble@cpc92716-cmbg20-2-0-cust138.5-4.cable.virginm.net> has quit IRC (Quit: Leaving)
[23:31:48] *** Smithx10 <Smithx10!sid243404@gateway/web/irccloud.com/x-inxaholukqgosgnc> has quit IRC (Remote host closed the connection)
[23:31:48] *** ballew <ballew!sid244342@gateway/web/irccloud.com/x-gnuxrutvifhjdwxh> has quit IRC (Remote host closed the connection)
[23:34:53] *** Smithx10 <Smithx10!sid243404@gateway/web/irccloud.com/x-wzvubxfqusatybln> has joined #illumos
[23:36:51] *** gjnoonan <gjnoonan!sid95422@gateway/web/irccloud.com/x-tptjordiylhzdxyt> has quit IRC (Remote host closed the connection)
[23:36:51] *** _jack_ <_jack_!sid396411@gateway/web/irccloud.com/x-toaylpqqztkfqurr> has quit IRC (Remote host closed the connection)
[23:36:51] *** chandlore____ <chandlore____!sid259138@gateway/web/irccloud.com/x-hbkhszkelnipjtaq> has quit IRC (Remote host closed the connection)
[23:39:16] *** ballew <ballew!sid244342@gateway/web/irccloud.com/x-gdmvmmudikfegggl> has joined #illumos
[23:40:16] *** gjnoonan <gjnoonan!sid95422@gateway/web/irccloud.com/x-kwwidapvopaxxdbb> has joined #illumos
[23:40:38] *** chandlore____ <chandlore____!sid259138@gateway/web/irccloud.com/x-fykqekwjetagpbug> has joined #illumos
[23:45:05] *** _jack_ <_jack_!sid396411@gateway/web/irccloud.com/x-olfhansuvricpxnu> has joined #illumos
[23:46:17] *** scoobybejesus <scoobybejesus!sid271506@gateway/web/irccloud.com/x-edhvdfyhqytihgah> has quit IRC (Remote host closed the connection)
[23:49:22] *** scoobybejesus <scoobybejesus!sid271506@gateway/web/irccloud.com/x-okfqkyxyskekyfnw> has joined #illumos
[23:50:17] *** jim80net <jim80net!sid287860@gateway/web/irccloud.com/x-cqresmenmgizwvaq> has quit IRC (Remote host closed the connection)
[23:54:20] *** andy_js <andy_js!~andy@51.146.99.40> has quit IRC (Quit: andy_js)
[23:56:03] *** rann <rann!sid175221@gateway/web/irccloud.com/x-ulrrmapapwbixwng> has quit IRC (Remote host closed the connection)
[23:56:03] *** Tsesarevich <Tsesarevich!Tsesarevic@fluxbuntu/founder/joejaxx> has quit IRC (Remote host closed the connection)
[23:56:57] *** jim80net <jim80net!sid287860@gateway/web/irccloud.com/x-vjithwsbirjrmzna> has joined #illumos
[23:59:35] *** Tsesarevich <Tsesarevich!Tsesarevic@fluxbuntu/founder/joejaxx> has joined #illumos