   February 25, 2020  

[00:05:44] *** jcea <jcea!~Thunderbi@2001:bc8:2ecd:caed:7670:6e00:7670:6e00> has joined #smartos
[00:11:59] *** jcea <jcea!~Thunderbi@2001:bc8:2ecd:caed:7670:6e00:7670:6e00> has quit IRC (Remote host closed the connection)
[00:16:17] *** jcea <jcea!~Thunderbi@2001:bc8:2ecd:caed:7670:6e00:7670:6e00> has joined #smartos
[00:22:26] *** kkantor <kkantor!~kkantor@c-24-118-59-107.hsd1.mn.comcast.net> has quit IRC (Ping timeout: 240 seconds)
[00:30:53] *** andy_js <andy_js!~andy@> has quit IRC (Quit: andy_js)
[00:38:16] *** fuglydude <fuglydude!aefa1205@5.sub-174-250-18.myvzw.com> has quit IRC (Remote host closed the connection)
[01:01:43] *** festercluck <festercluck!~stephen@unaffiliated/stephen> has joined #smartos
[01:03:15] *** rennj <rennj!~rennj@wsip-24-120-111-138.lv.lv.cox.net> has quit IRC (Ping timeout: 265 seconds)
[01:05:51] *** stephen <stephen!~stephen@unaffiliated/stephen> has quit IRC (Ping timeout: 272 seconds)
[01:11:33] *** festercluck <festercluck!~stephen@unaffiliated/stephen> has quit IRC (Ping timeout: 272 seconds)
[01:27:22] *** rennj <rennj!~rennj@wsip-24-120-111-138.lv.lv.cox.net> has joined #smartos
[01:38:47] *** fxhp <fxhp!~fox@d-206-53-88-50.ct.cpe.atlanticbb.net> has quit IRC (Ping timeout: 272 seconds)
[01:42:23] *** blackwood821 <blackwood821!~blackwood@2601:484:8002:2fa0:9cfd:a1ec:4a73:23fc> has quit IRC ()
[02:01:30] *** fxhp <fxhp!~fox@d-206-53-88-50.ct.cpe.atlanticbb.net> has joined #smartos
[02:25:44] *** Kurlon_ <Kurlon_!~Kurlon@cpe-67-253-136-97.rochester.res.rr.com> has joined #smartos
[02:29:35] *** Kurlon <Kurlon!~Kurlon@bidd-pub-03.gwi.net> has quit IRC (Ping timeout: 258 seconds)
[02:30:36] *** nde <nde!uid414739@gateway/web/irccloud.com/x-zquqnebxxlleigdk> has joined #smartos
[03:06:48] *** fuglydude <fuglydude!aefa1205@5.sub-174-250-18.myvzw.com> has joined #smartos
[03:08:33] <fuglydude> Hey fellas. I got my freebsd bhyve vm to boot, i am pretty excited. start small, i know. i figured out how to add a second disk to the system to save the config.xml file, so I can hopefully manipulate and restore it, awesome. now my problem is, the second disk is a zvol with a pool on it, and i can't figure out how to mount it to extract from,
[03:08:34] <fuglydude> or if it is on the vm, it tries to boot from it rather than the primary. the odd part is, the primary is boot true, secondary is boot false, yet it still tries to boot from the wrong disk?
[03:15:25] *** BrownBear <BrownBear!~BrownBear@> has quit IRC (Ping timeout: 255 seconds)
[03:42:30] *** jcea <jcea!~Thunderbi@2001:bc8:2ecd:caed:7670:6e00:7670:6e00> has quit IRC (Quit: jcea)
[04:25:11] *** numericill <numericill!~numericil@ip72-192-145-123.sd.sd.cox.net> has quit IRC (Remote host closed the connection)
[04:25:49] *** numericill <numericill!~numericil@ip72-192-145-123.sd.sd.cox.net> has joined #smartos
[04:30:07] *** numericill <numericill!~numericil@ip72-192-145-123.sd.sd.cox.net> has quit IRC (Ping timeout: 255 seconds)
[04:43:58] *** fuglydude <fuglydude!aefa1205@5.sub-174-250-18.myvzw.com> has quit IRC (Remote host closed the connection)
[04:46:12] *** glasspelican <glasspelican!~quassel@2607:5300:201:3100::325> has quit IRC (Ping timeout: 260 seconds)
[04:46:29] *** glasspelican <glasspelican!~quassel@179.ip-167-114-128.net> has joined #smartos
[04:51:01] *** dansolo42_ <dansolo42_!~dansolo42@> has joined #smartos
[04:52:03] *** axonpoet <axonpoet!~axonpoet@fsf/member/axonpoet> has quit IRC (Ping timeout: 260 seconds)
[04:52:03] *** Orion7 <Orion7!~root@c-98-215-84-9.hsd1.il.comcast.net> has quit IRC (Ping timeout: 260 seconds)
[04:52:04] *** dsockwell <dsockwell!~dsockwell@mnemonic.hightechlow.life> has quit IRC (Ping timeout: 260 seconds)
[04:52:31] *** Orion7 <Orion7!~root@c-98-215-84-9.hsd1.il.comcast.net> has joined #smartos
[04:52:31] *** dansolo42 <dansolo42!~dansolo42@> has quit IRC (Ping timeout: 260 seconds)
[04:52:37] *** dansolo42_ is now known as dansolo42
[04:54:17] *** axonpoet <axonpoet!~axonpoet@fsf/member/axonpoet> has joined #smartos
[04:59:21] *** axonpoet_ <axonpoet_!~axonpoet@fsf/member/axonpoet> has joined #smartos
[05:00:55] *** axonpoet <axonpoet!~axonpoet@fsf/member/axonpoet> has quit IRC (Ping timeout: 260 seconds)
[05:19:47] *** dsockwell <dsockwell!~dsockwell@mnemonic.hightechlow.life> has joined #smartos
[05:24:07] *** nde <nde!uid414739@gateway/web/irccloud.com/x-zquqnebxxlleigdk> has quit IRC (Quit: Connection closed for inactivity)
[05:27:27] *** fuglydude <fuglydude!aefa1205@5.sub-174-250-18.myvzw.com> has joined #smartos
[05:28:05] <fuglydude> hey anyone in here worked with vlan tagging on thier zones? Im having some trouble with figuring it out
[06:19:55] <bahamat> fuglydude: What are you trying to do?
[06:19:57] <jbk> what are you trying to do?
[06:19:58] <jbk> heh
[06:20:02] <jbk> stereo :P
[06:20:29] *** numericill <numericill!~numericil@ip72-192-145-123.sd.sd.cox.net> has joined #smartos
[06:38:04] <fuglydude> jbk: thanks for the reply.
[06:38:50] <fuglydude> I am attempting to replicate what i was doing on debian/kvm, which was to have 3 vnics that i assigned to the vm, each in their own vlan, and be able to use them outside of the box
[06:39:40] <fuglydude> with kvm, the way I used it anyway, I created each interface and assigned it a vlan, and that is what got put into the kvm config file. then on the vm itself, i did not have to configure any vlan, it just worked
[06:40:46] <fuglydude> i am 100% new to smartos and i have my 3 vnics assigned a vlan. i have the switch that the box is plugged into set up with the 3 vlans (tagged), and from another machine on my network, which is on one of the tags, i cannot ping it or reach it
[06:42:57] <fuglydude> so for example, i would assign vnic1 to a bridge eth0.20, let's say, vnic2 to bridge eth0.30 and vnic3 to eth0.40, and inside the vm they looked like 3 independent nics. i did nothing special, they just talked out those vlans. on smartos, so far, i cannot get any traffic to move when i have a vnic assigned a vlan_id
[06:43:45] <fuglydude> on the old system, the bridge could (or did not have to) have an ip address assigned to it.
[06:44:31] <fuglydude> bahamat: sorry, I didn't see you above. the stereo comment makes more sense now.
[06:55:47] <fuglydude> Not sure if it matters, but the guest OS is freebsd and i have the nics configured as virtio.
[07:01:55] <fuglydude> i'm looking into the ip spoofing property. i don't have the ip addresses configured in the nic definitions themselves, only because freebsd didn't automatically populate them like, say, ubuntu
[07:03:51] <jbk> typically the vm configuration in vmadm should contain the IP the VM is using
[07:04:00] <jbk> so that might be it
[07:05:00] <fuglydude> jbk: one dumb question (i have no problem putting it in there): if a vm is set to get an ip address via dhcp, how does that jive with the nic definition? i suppose you set both to dhcp and it somehow works it out?
[07:05:20] <jbk> set where?
[07:06:44] *** dansolo42 <dansolo42!~dansolo42@> has quit IRC (Ping timeout: 258 seconds)
[07:06:48] <fuglydude> ok, what i mean by this: I get what you're saying about the ip info needing to be set in the nic part of the zone, and then i set it manually on the vm (if it doesn't do it itself like Ubuntu). My question is, if one of the vnics in the vm is to be dhcp - so the address changes - how do we not have this problem? Or does setting the nic zone config
[07:06:49] <fuglydude> to dhcp suffice
[07:07:26] <fuglydude> I plan to set statically, but am trying to understand
[07:08:06] <jbk> i think there might be a vm option you have to set.. i can't recall offhand
[07:08:33] <jbk> (maybe bahamat will know :P)
[07:09:59] <bahamat> fuglydude: You just need to set the vlan_id field on the nic in your json payload.
[07:11:04] <fuglydude> bahamat: right now, I have the vlan_id set in the nic part of the zone for each of 3 test nics. I do not have any ip info entered in that area.
[07:11:07] <bahamat> the vnic will use whatever vlan you assign, and the interface in the guest will be "native"
[07:11:08] <fuglydude> I cannot pass any traffic
[07:11:30] <bahamat> Show me the nic definition.
[07:11:38] <fuglydude> bahamat: thank you for that clarification.
[07:11:39] <fuglydude> ok
[07:12:10] *** dansolo42 <dansolo42!~dansolo42@> has joined #smartos
[07:12:21] <fuglydude> { "interface": "net0", "mac": "72:cb:87:ec:2a:d0", "vlan_id": 99, "nic_tag": "admin", "model": "virtio", "primary": true },
[07:13:11] <fuglydude> I was just reading in the docs. it seems even though i loaded freebsd, which will not auto-set the ip information on the guest, i need to have it filled in so it will allow traffic to pass (sort of a whitelist)
[07:14:38] <bahamat> The system, by default, strictly controls which IPs and ether addresses can be used on an interface.
[07:15:33] <bahamat> So you either need to specify the IP exactly, specify dhcp (or addrconf for ipv6), or you need to enable ip spoofing.
[07:16:04] <bahamat> The reason you can't pass traffic is that your guest isn't using any layer 3 address that's on the whitelist of addresses allowed to pass traffic, so it's being dropped.
[07:16:36] <fuglydude> bahamat: Thank you (again) for the explanation. I am adding the info now and will test
[07:16:53] <fuglydude> You also answered my question to jbk about dhcp
[07:17:26] <fuglydude> Yes. Tested. WORKING. I was not aware of the whitelist security feature. Wow. So much to learn!
[07:18:26] <bahamat> It's not like Linux, where you can just do whatever willy-nilly, or where you need to add a bunch of iptables rules to restrict the guest.
[07:18:32] <bahamat> It's secure by default.
[07:19:11] <bahamat> You're allowed to use the IPs in the ips, or allowed_ips fields
[07:19:32] <jbk> in this situation, imagine that network was a subnet on the vlan with an IP block with multitenant guests -- don't want someone to just hijack someone else's IP
[07:19:42] <bahamat> or if you specify dhcp, then you can use the address the dhcp server assigns you (but no other addresses)
[07:19:48] <jbk> (or at least to help explain the rationale behind it)
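Putting bahamat's and jbk's points together, the nic entry fuglydude pasted earlier would grow an `ips` field, something like this sketch (the address is made up for illustration):

```json
{
  "interface": "net0",
  "mac": "72:cb:87:ec:2a:d0",
  "vlan_id": 99,
  "nic_tag": "admin",
  "model": "virtio",
  "primary": true,
  "ips": ["10.99.0.5/24"]
}
```

Per the discussion, specifying `"ips": ["dhcp"]` should likewise whitelist whatever address the DHCP server assigns, and `allowed_ips` can list additional permitted addresses.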
[07:20:38] <fuglydude> Thank you both. I am gathering that, about the security. I have used Debian for a long, long time, and I just finally decided it was time to try out something new. It's been on my radar for 6 months or more to give SmartOS a try.
[07:20:47] <fuglydude> It definitely has a learning curve for me!!
[07:21:21] <bahamat> I used debian for nearly 15 years. I abandoned it (mostly) to use SmartOS.
[07:22:46] <bahamat> There are still a few things I use debian/ubuntu for, but it's always because it's closed source software and that's what the vendor supports.
[07:23:38] <bahamat> Ironically enough, I can run any free software on SmartOS, but I use debian only for proprietary software.
[07:23:45] <fuglydude> bahamat: I was on Debian for probably at least 10 years. And it is encouraging to hear that you went down a similar path before.
[07:23:49] <fuglydude> Funny, right?
[07:24:34] <fuglydude> One that got me earlier: I had set up a vm on smartos, and suddenly i wasn't getting any dhcp ack to pass. i would get the request and the offer but no ack. found out it was a setting in smartos.
[07:24:43] <fuglydude> that was a pain in the rear. figured that one out on my own
[07:24:55] <bahamat> what setting was it?
[07:25:01] <fuglydude> kept thinking, is there a firewall in here somewhere??
[07:25:13] <fuglydude> allow_dhcp_spoofing
[07:25:25] <fuglydude> on the lan vnic
[07:25:36] <fuglydude> since the vm was doing dhcp for the network
[07:25:41] <bahamat> allow_dhcp_spoofing is only needed for dhcp *servers*
[07:25:41] <fuglydude> test network
[07:25:45] <fuglydude> yes
[07:25:45] <bahamat> not for dhcp clients.
[07:25:48] <fuglydude> it was a dhcp server
[07:26:09] <bahamat> ok, yeah, you found the right one then.
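For reference, the change fuglydude describes would look something like this as an update payload (a sketch; the mac identifies which nic to change and would be whatever the real vnic uses), piped to `vmadm update <uuid>` on stdin:

```json
{
  "update_nics": [
    {
      "mac": "72:cb:87:ec:2a:d0",
      "allow_dhcp_spoofing": true
    }
  ]
}
```

As bahamat notes above, this is only needed when the VM is itself a DHCP server; clients just need dhcp in their ip configuration.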
[07:26:57] <fuglydude> it was a total duck out of water experience. i spent half a morning trying to figure it out. i was gonna get it or die trying lol
[07:27:24] <bahamat> Well, it's good to hear you stuck with it :-)
[07:28:16] <fuglydude> initially i was having a heck of a time with vnc too (and bhyve) but i got there
[07:28:47] <fuglydude> im sure once i get more acclimated this will all settle down to be pretty awesome. just trying to play around with stuff right now
[07:28:52] <bahamat> Yeah, there's a bug with vnc and bhyve.
[07:30:34] <fuglydude> i got it eventually, thanks to one of the more knowledgeable-than-me people on here earlier today. nothing is more of a show stopper than not being able to get a screen to do the install!
[07:32:59] <fuglydude> What made you, if you don't mind my asking, try out SmartOS if you'd been on Debian so long?
[07:38:43] *** fuglydude <fuglydude!aefa1205@5.sub-174-250-18.myvzw.com> has quit IRC (Ping timeout: 260 seconds)
[07:56:11] *** rennj <rennj!~rennj@wsip-24-120-111-138.lv.lv.cox.net> has quit IRC (Remote host closed the connection)
[07:59:13] *** rennj <rennj!~rennj@wsip-24-120-111-138.lv.lv.cox.net> has joined #smartos
[08:20:40] *** neuroserve <neuroserve!~toens@> has joined #smartos
[08:25:55] *** numericill <numericill!~numericil@ip72-192-145-123.sd.sd.cox.net> has quit IRC (Remote host closed the connection)
[08:43:23] *** wiedi <wiedi!~wiedi@ip5b4096a6.dynamic.kabel-deutschland.de> has quit IRC (Ping timeout: 265 seconds)
[08:59:23] *** KeiraT <KeiraT!~k4ra@gateway/tor-sasl/k4ra> has quit IRC (Ping timeout: 240 seconds)
[09:23:12] *** wiedi <wiedi!~wiedi@> has joined #smartos
[09:27:24] *** KeiraT <KeiraT!~k4ra@gateway/tor-sasl/k4ra> has joined #smartos
[09:32:32] *** bens1 <bens1!~bens@> has joined #smartos
[09:57:44] *** axonpoet_ is now known as axonpoet
[10:00:46] *** hhdave <hhdave!~anonymous@ip212.ip-193-70-71.eu> has joined #smartos
[10:15:16] *** hhdave <hhdave!~anonymous@ip212.ip-193-70-71.eu> has quit IRC (Ping timeout: 255 seconds)
[10:17:18] *** andy_js <andy_js!~andy@> has joined #smartos
[10:17:27] *** man_u <man_u!~manu@manu2.gandi.net> has joined #smartos
[10:19:25] *** hhdave <hhdave!~anonymous@ip212.ip-193-70-71.eu> has joined #smartos
[11:03:06] *** arnold_oree <arnold_oree!~arnoldore@> has joined #smartos
[11:14:45] *** leah2 <leah2!~leah@vuxu.org> has quit IRC (Ping timeout: 240 seconds)
[11:15:28] *** leah2 <leah2!~leah@vuxu.org> has joined #smartos
[11:20:49] <sjorge> bahamat it's not so much a bug... the vnc implementation copies from the UEFI GOP (?) framebuffer IIRC
[11:20:53] <sjorge> Which UEFI+CSM lacks
[11:28:15] *** arnold_oree <arnold_oree!~arnoldore@> has quit IRC (Ping timeout: 260 seconds)
[11:44:48] <sjorge> papertigers found out anything interesting while I was snoozing? :D
[12:27:33] *** numericill <numericill!~numericil@ip72-192-145-123.sd.sd.cox.net> has joined #smartos
[12:32:11] *** numericill <numericill!~numericil@ip72-192-145-123.sd.sd.cox.net> has quit IRC (Ping timeout: 258 seconds)
[14:07:18] *** gh34 <gh34!~textual@cpe-184-58-181-106.wi.res.rr.com> has joined #smartos
[14:28:48] *** numericill <numericill!~numericil@ip72-192-145-123.sd.sd.cox.net> has joined #smartos
[14:33:19] *** numericill <numericill!~numericil@ip72-192-145-123.sd.sd.cox.net> has quit IRC (Ping timeout: 265 seconds)
[14:38:08] *** blackwood821 <blackwood821!~blackwood@2601:484:8002:2fa0:916a:6d9b:bf7f:929e> has joined #smartos
[14:41:50] *** arnold_oree <arnold_oree!~arnoldore@> has joined #smartos
[14:48:27] *** arnold_oree <arnold_oree!~arnoldore@> has quit IRC (Ping timeout: 260 seconds)
[14:56:25] *** arnoldoree <arnoldoree!~arnoldore@> has joined #smartos
[15:03:16] *** arnoldoree <arnoldoree!~arnoldore@> has quit IRC (Ping timeout: 255 seconds)
[15:23:09] *** Kurlon_ <Kurlon_!~Kurlon@cpe-67-253-136-97.rochester.res.rr.com> has quit IRC (Ping timeout: 258 seconds)
[15:57:58] *** kkantor <kkantor!~kkantor@c-24-118-59-107.hsd1.mn.comcast.net> has joined #smartos
[15:59:04] *** Kurlon <Kurlon!~Kurlon@bidd-pub-03.gwi.net> has joined #smartos
[16:24:15] *** stephen <stephen!~stephen@unaffiliated/stephen> has joined #smartos
[16:25:45] *** stephen <stephen!~stephen@unaffiliated/stephen> has quit IRC (Client Quit)
[16:26:58] *** tru_tru <tru_tru!~tru@> has quit IRC (Ping timeout: 255 seconds)
[16:28:32] *** dbrooke <dbrooke!~db@stirling.dbrooke.me.uk> has quit IRC (Read error: Connection reset by peer)
[16:28:39] *** dbrooke_ <dbrooke_!~db@stirling.dbrooke.me.uk> has joined #smartos
[16:29:52] *** numericill <numericill!~numericil@ip72-192-145-123.sd.sd.cox.net> has joined #smartos
[16:34:27] *** numericill <numericill!~numericil@ip72-192-145-123.sd.sd.cox.net> has quit IRC (Ping timeout: 258 seconds)
[16:36:05] *** lgtaube <lgtaube!~lgt@> has quit IRC (Ping timeout: 265 seconds)
[16:39:03] *** neuroserve <neuroserve!~toens@> has quit IRC (Ping timeout: 258 seconds)
[16:49:35] *** jcea <jcea!~Thunderbi@2001:bc8:2ecd:caed:7670:6e00:7670:6e00> has joined #smartos
[16:53:11] *** tru_tru <tru_tru!~tru@> has joined #smartos
[16:56:34] *** nde <nde!uid414739@gateway/web/irccloud.com/x-iwuthbqmanrifuzu> has joined #smartos
[17:14:32] *** numericill <numericill!~numericil@ip72-192-145-123.sd.sd.cox.net> has joined #smartos
[17:15:26] <papertigers> sjorge: I left shortly after for dinner etc. Looking again today though. I want to understand what value is being written into the config space of the device
[17:18:25] <sjorge> That's a good start; if we know what it does exactly we can be sure whether or not we need the stuff in cfginit
[17:19:22] <papertigers> sjorge: I can also grab those other two commits, as they look super minor and just adjust the goto statements
[17:19:39] <sjorge> which 2?
[17:19:47] <sjorge> Oh, the ones you mentioned yesterday!
[17:21:33] <papertigers> git diff 3b9cb80b242682690203709aaff4eafae41c138f..37e8a0e0058c226e6bd0ed5c3a07ee15b1146122 usr.sbin/bhyve/pci_passthru.c
[17:21:48] <papertigers> run that in your freebsd checkout if you have one
[17:22:17] <sjorge> I do not :p
[17:22:48] <sjorge> but I can use the github ui
[17:23:19] <sjorge> well... if it wants to load
[17:27:35] <sjorge> gah, unicorns all the way
[17:28:57] <sjorge> papertigers looks like there is an additional one https://github.com/freebsd/freebsd/commit/bdad744823ebf970815d691c9ec71236f6f0d90b#diff-cb5d8114daa23a73bf33e1400fd4d00d ?
[17:30:27] <papertigers> sjorge: that one is from Dec; this PR only goes up to sometime in Sept
[17:30:34] <sjorge> Ah
[17:31:11] <papertigers> although maybe this should get sucked in at the same time
[17:34:43] <Smithx10> danmcd: any update on what was going on with that strange curl behaviour that jemershaw found?
[17:34:49] <Smithx10> That was a weird one
[17:35:30] <danmcd> Looked like classic CPU starvation. BHYVE got cycle-starved. If I literally nice(1)'ed the curls, things went to normal.
[17:36:02] <sjorge> papertigers yeah, it looks like one that should really go in as well
[17:36:32] <sjorge> maybe it is best to skip it for PR 263
[17:36:47] <sjorge> And then do a followup for pci_passthru.c and the net_backends.{c,h} stuff?
[17:36:56] <Smithx10> danmcd: that's interesting!
[17:37:08] <Smithx10> So how do you prevent that?
[17:37:11] <danmcd> Oh yes, very interesting. I heard people grousing about FSS in that context.
[17:37:25] <sjorge> grousing? Never seen that word
[17:37:35] <danmcd> CPU cap your zones? (My reproductions were on zones with extremely high CPU caps.)
[17:37:50] <Smithx10> I believe all those zones were capped*
[17:37:52] <danmcd> grouse ==> synonym for complain.
[17:38:01] <sjorge> Ah
[17:38:27] <sjorge> IIRC I disabled it for my pi build zone
[17:38:42] <Smithx10> I thought Grouse was something that was drank
[17:38:56] <Smithx10> lol, the name of the Whisky makes more sense now
[17:39:00] <Smithx10> Drink your Complaints away
[17:39:05] <sjorge> Because the box is mostly idle... but it's like a 30-45min quicker build without it... and the other stuff does not seem to suffer
[17:39:21] <papertigers> Yeah if I give my smartos build zone a cap of 2400 (all the cores on my box) and build the platform, my Bhyve zones will get crushed. But I think I have funky cpu_shares configured as well
[17:39:22] <sjorge> Famous Grouse or something yueah
[17:39:42] <danmcd> (Meanwhile github's add-new-comment is literally broken right now, and I've got sick family here at home. Not a good day here.)
[17:39:46] <sjorge> Ah I do keep bhyve/kvm (not anymore) on a seperate CN
[17:40:17] <Smithx10> so having a Bhyve and Containers on the same host is now a no no?
[17:40:27] <papertigers> mgerdts would be happy to hear that. He votes to keep VM's and containers separate :P
[17:40:45] <papertigers> Smithx10: I mean there's nothing that says you can't.
[17:40:55] <sjorge> papertigers it generally works better from my experience
[17:40:55] <Smithx10> Except for bhyve getting starved LOL
[17:41:18] <papertigers> bhyve just seems to be more sensitive to it, that's for sure
[17:41:18] <sjorge> Processes stick much more to the CPU core they are running on
[17:41:35] <sjorge> If the vCPU thread gets swapped on/off or to a different CPU core
[17:41:42] <papertigers> because each of its CPUs is literally a posix thread
[17:41:42] <sjorge> It seems to suffer a lot
[17:41:56] <Smithx10> The behaviour that jemershaw induced tho, we haven't seen in practice
[17:42:02] <sjorge> CPU pinning might make it better though, but I never got that working even in the gz
[17:42:12] <Smithx10> So until it becomes a problem we will just run them all together
[17:42:32] <Smithx10> I think if our company ever got good at governance or enforcing things we could split prod up
[17:42:34] <papertigers> Like I said, in my case my build zone has all safety rails removed so that it gets access to all resources while building
[17:43:25] <mgerdts> Use of the dedicated-cpu property in zonecfg should make it so that cpus are not starved by other workloads. If you care about predictable performance, that's probably the way to go.
[17:43:31] <papertigers> Smithx10: you might hit it if you have really wild packages setup. Like a 100 cap and small share value mixed with really high caps/shares
[17:44:11] <Smithx10> I think we map "VCPU" and Shares to the same
[17:44:23] <Smithx10> so if we do 4 VCPU we do 400% or something
[17:44:37] <Smithx10> Not sure if that actually makes any sense
[17:45:05] <mgerdts> It's just not a fair fight when you have a smartos or lx zone that can create more threads than there are cpus competing with a bhyve zone that will create comparatively few threads, particularly when the bhyve zone has a small number of cpu shares.
[17:45:18] <danmcd> I used my build zone to induce the problem. :)
[17:45:24] <Smithx10> mgerdts: that makes sense
[17:45:34] <mgerdts> 400% will only serve to cap the cpu, not reserve it.
[17:46:01] <Smithx10> This is good to know about tho
[17:46:17] <Smithx10> If we run into any strange perf events with bhyve that will definitely be the first thing to rule out
[17:46:29] <mgerdts> Setting the cap to 100 * vcpus primarily serves to limit the amount of cpu time that can be used by non-vcpu threads that are not really doing much anyway.
[17:46:35] <sjorge> Smithx10 when I ran mixed I did vCPU * 125
[17:46:46] <Smithx10> sjorge: I'll take that into consideration
[17:46:47] <sjorge> papertigers same, my build zone is also nearly unlimited
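The cap arithmetic in this thread (cpu_cap is a percentage, 100 per vCPU as the plain mapping, with sjorge's 125-per-vCPU heuristic for mixed hosts) is simple enough to sketch. The function name is made up for illustration, not a vmadm field:

```python
def bhyve_cpu_cap(vcpus, per_vcpu=125):
    """Suggested cpu_cap (percent) for a bhyve zone.

    100 per vCPU only caps the vcpu threads and reserves nothing, so a
    little headroom (e.g. 125 per vCPU, per the discussion) leaves room
    for the non-vcpu helper threads (device emulation, viona, etc.).
    """
    return vcpus * per_vcpu

print(bhyve_cpu_cap(4, per_vcpu=100))  # 400: plain 100%-per-vCPU mapping
print(bhyve_cpu_cap(4))                # 500: the 125 heuristic
```

Either way this remains a cap, not a reservation; as mgerdts notes below, only dedicated cpus actually guarantee the capacity.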
[17:47:24] <mgerdts> maybe viona does its work in separate threads... that could make a difference when there's a lot of network traffic.
[17:47:43] <sjorge> papertigers, dedicated-cpu would remove the CPUs from the gz/other zones right
[17:47:54] <mgerdts> zfs in the host does pretty much all of its work via worker threads that are associated with the global zone.
[17:48:04] <mgerdts> yeah, dedicated-cpu does that.
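mgerdts's dedicated-cpu suggestion maps to a zonecfg session roughly like this (a sketch; `myzone` is a placeholder, and on SmartOS vmadm normally owns the zone configuration, so this is illustrative rather than the supported workflow):

```
zonecfg -z myzone
add dedicated-cpu
set ncpus=2
end
commit
exit
```

With this in place the kernel moves 2 cpus into a pool used exclusively by the zone, removing them from the global zone and everyone else.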
[17:48:17] <sjorge> picking the proper core and hyperthread CPUs to pass in and then somehow pinning the -c socket=1,cores=2,thread=2 to the correct core/thread would be amazing
[17:48:37] <Smithx10> mgerdts: saw you were rebooting linux CN containers!
[17:48:41] <Smithx10> exciting :P
[17:48:46] <sjorge> Smithx10 mgerdts that is why I found a 125 multiplier worked slightly better
[17:48:48] <mgerdts> If you want to group some instances on one set of cpus and let others use other cpus, pools should work as well.
[17:48:51] <sjorge> better IO/network
[17:48:57] <papertigers> There was also work done at one point to wire up the fixed priority stuff to zones I think
[17:48:58] <mgerdts> I've not tried that on smartos, but it was certainly a thing on Solaris.
[17:49:06] <papertigers> wonder if that ever made it down to vmadm and the rest of smartos
[17:49:45] <Smithx10> so viona could be threading out and doing work but the OS can't do anything about it?
[17:50:17] <mgerdts> The resource controls exposed via vmadm tend to be those that are focused on encouraging fairness in the face of oversubscription. Encouraging and delivering are different things.
[17:50:53] <mgerdts> If you really care about predictable performance, we should be using different resource controls, at the expense of leaving some resources unused while some workloads are hungry for more.
[17:51:41] <mgerdts> bhyve memory is somewhat odd in that it is strictly in the predictable performance camp.
[17:53:20] <mgerdts> If we were to do microvms (e.g. for serverless), we would probably want them to swing the other direction a bit - it is unlikely that a VM that fires up for a few seconds will touch its maximum amount of memory, so allocating and zeroing it is very expensive and wasteful.
[17:58:24] <papertigers> don't use that buzzword, you will get Smithx10 excited! :P
[18:13:04] <sjorge> Well I would also get excited for firecracker backed by bhyve
[18:13:13] <sjorge> might be a good lx alternative
[18:13:31] <sjorge> Especially combined with virtio-9p
[18:14:29] <papertigers> firecracker for bhyve is basically rewriting bhyve(userland piece) but in rust.
[18:14:53] <papertigers> I mean if you wanted it in rust anyways
[18:15:20] <papertigers> we could probably get close to what firecracker is with current bhyve, but there's a bunch of work that would need to happen
[18:17:25] <sjorge> True, bhyve is already pretty close
[18:18:35] *** bens1 <bens1!~bens@> has quit IRC (Quit: Textual IRC Client: www.textualapp.com)
[18:21:39] <papertigers> yea, super close considering we are not using qemu :P
[18:22:40] *** man_u <man_u!~manu@manu2.gandi.net> has quit IRC (Quit: man_u)
[18:29:20] *** noahmehl <noahmehl!~noahmehl@mobile-166-170-45-143.mycingular.net> has joined #smartos
[18:32:30] <Smithx10> papertigers:
[18:32:39] <Smithx10> He said it.
[18:32:40] <Smithx10> Not me.
[18:35:37] *** hhdave <hhdave!~anonymous@ip212.ip-193-70-71.eu> has quit IRC (Quit: hhdave)
[18:41:27] <Smithx10> lol, papertigers but its your chance to justify doing it :P
[18:42:37] <papertigers> microhive.io -- run your serverless workloads in bhyve micro VMs.
[18:42:49] <papertigers> will you be my first customer? :P
[18:43:20] <Smithx10> sure.
[18:43:29] <Smithx10> actually, sjorge can go first.
[18:43:48] <Smithx10> ummm
[18:43:54] <Smithx10> someone beat you to microhive.io.
[18:44:15] <Smithx10> try again
[18:44:29] <papertigers> microhyve.io
[18:44:34] <papertigers> hyve is probably more fitting
[18:44:48] *** wiedi <wiedi!~wiedi@> has quit IRC (Quit: ^C)
[18:46:46] <psarria> for testing purposes i'm creating a lot of native zones with 1GB of max_memory; the sum of the memory caps (not RSS) of those zones is actually above the physical memory of the GZ. are there any limits on this?
[18:47:17] <Smithx10> w00t w00t!!! microhyve.io is available
[18:47:28] <Smithx10> could be cross platform too!
[18:56:43] <jbk> Smithx10: no blockchain? disappointed
[18:56:53] <Smithx10> :(
[18:57:00] <Smithx10> sorry.
[19:22:41] *** dbrooke_ <dbrooke_!~db@stirling.dbrooke.me.uk> has quit IRC (Quit: WeeChat 1.6)
[19:22:57] *** dbrooke <dbrooke!~db@stirling.dbrooke.me.uk> has joined #smartos
[20:05:50] <bahamat> psarria: Is this SmartOS standalone or Triton?
[20:08:09] <psarria> SmartOS standalone
[20:08:37] <bahamat> No, there's no limit.
[20:11:08] <psarria> but by doing so i'm overprovisioning memory
[20:11:10] *** rennj <rennj!~rennj@wsip-24-120-111-138.lv.lv.cox.net> has quit IRC (Ping timeout: 265 seconds)
[20:12:19] <psarria> is it different in Triton ?
[20:37:42] *** gemelen <gemelen!~gemelen@zooey.gemelen.net> has joined #smartos
[20:39:24] *** wiedi <wiedi!~wiedi@ip5b4096a6.dynamic.kabel-deutschland.de> has joined #smartos
[20:54:55] *** blackwoo_ <blackwoo_!~blackwood@c-174-49-16-176.hsd1.tn.comcast.net> has joined #smartos
[20:58:28] *** blackwood821 <blackwood821!~blackwood@2601:484:8002:2fa0:916a:6d9b:bf7f:929e> has quit IRC (Ping timeout: 248 seconds)
[21:05:50] *** axonpoet <axonpoet!~axonpoet@fsf/member/axonpoet> has quit IRC (Quit: leaving)
[21:06:49] *** blackwoo_ <blackwoo_!~blackwood@c-174-49-16-176.hsd1.tn.comcast.net> has quit IRC ()
[21:07:10] *** blackwood821 <blackwood821!~blackwood@c-174-49-16-176.hsd1.tn.comcast.net> has joined #smartos
[21:08:28] *** axonpoet <axonpoet!~axonpoet@fsf/member/axonpoet> has joined #smartos
[21:11:12] *** axonpoet <axonpoet!~axonpoet@fsf/member/axonpoet> has quit IRC (Client Quit)
[21:11:55] <bahamat> psarria: In triton the orchestration stack has defined overprovision levels, and will select compute nodes based on available capacity.
[21:12:13] *** axonpoet <axonpoet!~axonpoet@fsf/member/axonpoet> has joined #smartos
[21:12:17] <bahamat> (but operators can always override it and deploy an instance to any CN)
[21:12:43] <bahamat> The only truly *hard* limit is available storage space.
[21:18:41] <psarria> understood, thanks a lot bahamat
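The standalone-vs-Triton difference bahamat describes amounts to a soft budget check at provisioning time. A toy sketch (the 1.5 ratio and all numbers are invented for illustration, not Triton's actual defaults):

```python
def can_provision(requested_ram_mb, committed_ram_mb, physical_ram_mb,
                  overprovision_ratio=1.5):
    """Return True if a new instance fits under the RAM overprovision budget.

    Memory caps are a soft limit: the sum of caps may exceed physical RAM
    by the configured ratio. (Storage, by contrast, is a hard limit.)
    """
    budget = physical_ram_mb * overprovision_ratio
    return committed_ram_mb + requested_ram_mb <= budget

# 64 GB box with 80 GB of caps already committed: a 1.5x ratio budgets 96 GB
print(can_provision(1024, 80 * 1024, 64 * 1024))       # True
print(can_provision(20 * 1024, 80 * 1024, 64 * 1024))  # False
```

Standalone SmartOS, per the answer above, applies no such check at all; the operator is on their own.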
[22:01:26] *** noahmehl <noahmehl!~noahmehl@mobile-166-170-45-143.mycingular.net> has quit IRC (Ping timeout: 258 seconds)
[22:46:50] *** Kurlon_ <Kurlon_!~Kurlon@bidd-pub-04.gwi.net> has joined #smartos
[22:50:31] *** Kurlon <Kurlon!~Kurlon@bidd-pub-03.gwi.net> has quit IRC (Ping timeout: 272 seconds)
[22:51:16] *** Kurlon_ <Kurlon_!~Kurlon@bidd-pub-04.gwi.net> has quit IRC (Ping timeout: 255 seconds)
[23:25:38] *** Kurlon <Kurlon!~Kurlon@cpe-67-253-136-97.rochester.res.rr.com> has joined #smartos
[23:31:47] *** hotbox <hotbox!~hotbox@2001:41d0:fe8f:b70a:20d:b9ff:fe47:7c05> has quit IRC (Ping timeout: 240 seconds)
[23:54:41] *** rennj <rennj!~rennj@wsip-24-120-111-138.lv.lv.cox.net> has joined #smartos
[23:59:02] *** andy_js <andy_js!~andy@> has quit IRC (Quit: andy_js)
