   November 27, 2019  

[00:02:34] <jbk> well so far, it seems like you can examine vCPU registers, get stacks, and maybe a few other things
[00:02:55] <jbk> just from looking at the source for the target
[00:03:51] <jbk> the lack of symbols though before the kernel is loaded is a bit annoying -- i mean i can tell 'oh this looks like a function prologue', but trying to figure out which function that is has proved to be challenging so far..
[00:04:04] <jbk> (maybe toomas might have some pointers whenever he's around)
[00:04:30] <jbk> i'm happy to dig into it, it's just that figuring out how has proven to be the difficulty so far :)
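A note for context: the "function prologue" being spotted by eye is the stock x86-64 entry sequence (pushq %rbp / movq %rsp,%rbp). On a machine where symbols are available, the same pattern can be seen by disassembling any ordinary kernel function with mdb; a hedged illustration, assuming the usual clock() function and privileges for -k:

    # Disassemble a well-known kernel function; the first instructions are the
    # classic prologue that has to be recognised by hand in an unsymbolized guest.
    echo 'clock::dis' | mdb -k | head -5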
[00:11:43] *** despair86 <despair86!~despair@2605:6000:1515:a52:b62e:99ff:fea1:3bad> has joined #illumos
[00:12:05] <despair86> can someone remind me what version of DRI we ported from Linux?
[00:12:36] <despair86> 2.6.x I assume?
[00:14:07] <LeftWing> despair86: Is that this stuff? https://github.com/illumos/gfx-drm
[00:14:37] <despair86> yeah
[00:14:42] *** Yogurt <Yogurt!~Yogurt@104-7-67-228.lightspeed.sntcca.sbcglobal.net> has quit IRC (Remote host closed the connection)
[00:15:21] <LeftWing> I'm really not sure. Gordon might know!
[00:17:30] *** Yogurt <Yogurt!~Yogurt@104-7-67-228.lightspeed.sntcca.sbcglobal.net> has joined #illumos
[00:31:29] *** insomnia <insomnia!~insomnia@shadowcat/actuallyamemberof/lollipopguild.insomnia> has joined #illumos
[00:39:05] *** insomnia <insomnia!~insomnia@shadowcat/actuallyamemberof/lollipopguild.insomnia> has quit IRC (Quit: Leaving)
[00:39:24] *** insomnia <insomnia!~insomnia@shadowcat/actuallyamemberof/lollipopguild.insomnia> has joined #illumos
[00:51:52] *** andy_js <andy_js!~andy@94.5.2.153> has quit IRC (Quit: andy_js)
[00:59:48] *** mappx <mappx!~name@stsvon1503w-grc-03-65-93-108-193.dsl.bell.ca> has joined #illumos
[01:02:59] *** _despair86 <_despair86!~despair@2605:6000:1515:a52:b62e:99ff:fea1:3bad> has joined #illumos
[01:05:23] *** despair86 <despair86!~despair@2605:6000:1515:a52:b62e:99ff:fea1:3bad> has quit IRC (Ping timeout: 250 seconds)
[01:15:17] *** insomnia <insomnia!~insomnia@shadowcat/actuallyamemberof/lollipopguild.insomnia> has quit IRC (Quit: Leaving)
[01:15:34] *** insomnia <insomnia!~insomnia@shadowcat/actuallyamemberof/lollipopguild.insomnia> has joined #illumos
[01:17:52] *** Latrina <Latrina!D368@free.znc.bg> has joined #illumos
[01:57:59] *** _despair86 <_despair86!~despair@2605:6000:1515:a52:b62e:99ff:fea1:3bad> has quit IRC (Quit: brb installing patches)
[02:25:57] *** neirac <neirac!~cneira@pc-184-104-160-190.cm.vtr.net> has joined #illumos
[03:08:17] *** kvik <kvik!~kvik@unaffiliated/kvik> has quit IRC (Ping timeout: 240 seconds)
[03:10:53] *** Yogurt <Yogurt!~Yogurt@104-7-67-228.lightspeed.sntcca.sbcglobal.net> has quit IRC (Remote host closed the connection)
[03:18:29] *** jcea <jcea!~Thunderbi@2001:41d0:1:8a82:7670:6e00:7670:6e00> has quit IRC (Remote host closed the connection)
[03:19:19] *** jcea <jcea!~Thunderbi@2001:41d0:1:8a82:7670:6e00:7670:6e00> has joined #illumos
[03:26:51] *** rzezeski <rzezeski!uid151901@gateway/web/irccloud.com/x-vvduutzkhmnplbox> has joined #illumos
[03:35:32] *** kev009_ <kev009_!~kev009@ip72-222-200-117.ph.ph.cox.net> has quit IRC (Remote host closed the connection)
[03:46:40] *** alanc <alanc!~alanc@inet-hqmc01-o.oracle.com> has quit IRC (Remote host closed the connection)
[03:47:07] *** alanc <alanc!~alanc@inet-hqmc01-o.oracle.com> has joined #illumos
[04:02:02] <jbk> anyone know why xsvc would fail to map the ACPI EBDA (causing acpidump to fail)?
[04:07:18] *** kev009 <kev009!~kev009@ip72-222-200-117.ph.ph.cox.net> has joined #illumos
[04:10:40] <jbk> it seems like the problem is with the last page it tries to map
[04:13:57] *** Yogurt <Yogurt!~Yogurt@c-73-189-45-147.hsd1.ca.comcast.net> has joined #illumos
[04:14:51] <veg> with the openzfs codebase being unified between Linux & FreeBSD under the OpenZFS umbrella, is illumos going to pull from that shared repo, or maintain its own implementation of ZFS?
[04:15:02] <veg> I couldn't find info by googling around
[04:18:16] *** Yogurt <Yogurt!~Yogurt@c-73-189-45-147.hsd1.ca.comcast.net> has quit IRC (Ping timeout: 240 seconds)
[04:23:35] <rmustacc> So, we already are pulling changes in from the repo as it is. I'm not doing the work, but I suspect over time making that easier to maintain is probably in the goals of the folks working on it.
[04:24:18] <rmustacc> veg: That was for you, forgot to tag you.
[04:24:49] <veg> that would be awesome, thanks rmustacc
[04:25:28] <rmustacc> Today we don't use the shared repo directly, but we pull changes in from it and some folks push things up to there.
[04:26:04] *** jcea <jcea!~Thunderbi@2001:41d0:1:8a82:7670:6e00:7670:6e00> has quit IRC (Remote host closed the connection)
[04:35:55] <veg> rmustacc: great, I think it would be a super strong sign of inter-OS collaboration and strengthening if the repo could become the central point for work from all projects; it would be quite unique!
[04:36:48] <veg> coming from the GNU world & starting to use BSDs, that would def encourage me to go further and see how I can integrate illumos into my workflow
[05:10:26] *** cneira_ <cneira_!~cneira@pc-184-104-160-190.cm.vtr.net> has joined #illumos
[05:11:02] *** neirac <neirac!~cneira@pc-184-104-160-190.cm.vtr.net> has quit IRC (Read error: Connection reset by peer)
[05:15:18] *** cneira_ is now known as neirac
[05:32:30] *** oxford <oxford!~chatting@gateway/tor-sasl/cantstanya> has quit IRC (Remote host closed the connection)
[05:34:27] *** oxford <oxford!~chatting@gateway/tor-sasl/cantstanya> has joined #illumos
[05:46:27] *** rzezeski <rzezeski!uid151901@gateway/web/irccloud.com/x-vvduutzkhmnplbox> has quit IRC (Quit: Connection closed for inactivity)
[05:51:43] <LeftWing> veg: There is certainly some project-level interest in getting the shared repository to build on illumos as a first step towards conceivably switching our ZFS over to the shared code
[05:52:14] <LeftWing> We're in a bit of a transitional period for the repository, though; it likely makes sense to wait until the rename is finished and the FreeBSD effort to integrate has completed
[06:47:32] <jbk> when that happens, i wonder how hard it'll be to keep the zfs write throttle..
[06:52:08] <LeftWing> jbk: Where did you get EADI from for the libc crate?
[07:13:22] <jbk> ?
[07:14:05] <jbk> the error value?
[07:17:07] <jbk> that didn't come from me -- it appears it was this commit: https://github.com/rust-lang/libc/commit/9e9a32589df9f19736e180c103a6d8a1d92b7364 though i appear to have missed that when adding the illumos target and didn't exclude it
[07:17:15] <jbk> (since it doesn't seem to be defined on illumos)
[07:17:21] <jbk> i'm not sure where he found it
[07:18:25] <jbk> err they.. i don't know the person, so i shouldn't be assuming :(
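A quick, hedged way to double-check that the constant really is absent from the illumos headers (stock header paths assumed):

    # EADI appears to be a Solaris-only errno (SPARC ADI); on illumos this
    # should print the fallback message rather than a match.
    grep -n 'EADI' /usr/include/sys/errno.h /usr/include/errno.h \
        || echo 'EADI not defined in the system errno headers'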
[07:31:12] <LeftWing> S'alright. I'm doing a pass to try and make the tests actually pass
[07:32:36] <jbk> hopefully once the kbmd stuff slows down a bit i can dedicate enough time to chase down all the bits to get the illumos target working w/ the latest rustc
[07:34:03] <LeftWing> Is that distinct from a solaris target?
[07:34:21] <jbk> yes
[07:34:46] <jbk> so we can actually have differentiation for our things that are different
[07:35:07] <LeftWing> Nice
[07:35:09] <jbk> just spending an hour here and there isn't enough to chase down everything
[07:35:13] <LeftWing> I look forward to it!
[07:36:34] <jbk> because the big issue is that rustc seems to expand the number of crates required to build it w/ each new version, and invariably one of those is broken, or you get 3-4 different versions of a crate that end up being included, etc.
[07:37:12] <jbk> (the rand crate was a big problem for quite a while -- i think rustc used 3-4 different versions via all the various crate dependencies)
[07:37:15] *** cpk <cpk!~chris@185.172.87.163> has joined #illumos
[07:37:33] <jbk> and of course all but one of those versions were ones that issue the solaris syscall directly
[07:37:39] <jbk> and would die
[07:37:51] <jbk> (jperkin had been manually patching the crate in the pkgsrc builds to work around it)
[07:39:00] <jbk> but at least now, i'm starting to get pinged to review new PRs (not everything of course, but on some stuff) that impact solaris and illumos, so that helps
[07:39:51] <jbk> completely unrelated question.. just curiosity since i've not found the relevant bits (if they exist) yet..
[07:41:45] <jbk> how aware is the kernel memory allocator of topology (I don't know if NUMA-aware is quite the right term).. I mean I know there are per-cpu caches, but in terms of the actual physical pages that end up being selected..
[07:42:01] <jbk> or end up being in that cache
[07:42:06] <LeftWing> I'm not sure
[07:49:03] *** mappx <mappx!~name@stsvon1503w-grc-03-65-93-108-193.dsl.bell.ca> has quit IRC (Ping timeout: 245 seconds)
[08:06:29] <sensille> "no ACPI power usage estimate available" from powertop. does anyone know what causes that?
[08:19:35] *** amrfrsh <amrfrsh!~Thunderbi@190.2.145.106> has quit IRC (Quit: amrfrsh)
[08:27:19] *** tsoome <tsoome!~tsoome@148-52-235-80.sta.estpak.ee> has quit IRC (Quit: tsoome)
[08:49:25] *** ptribble <ptribble!~ptribble@cpc92716-cmbg20-2-0-cust138.5-4.cable.virginm.net> has joined #illumos
[08:51:36] *** cpk <cpk!~chris@185.172.87.163> has quit IRC (Ping timeout: 240 seconds)
[08:55:41] *** arnoldoree <arnoldoree!~arnoldore@ranoldoree.plus.com> has joined #illumos
[08:56:58] *** mappx <mappx!~name@stsvon1503w-grc-03-65-93-108-193.dsl.bell.ca> has joined #illumos
[09:06:54] *** tsoome <tsoome!~tsoome@148-52-235-80.sta.estpak.ee> has joined #illumos
[09:20:30] *** amrfrsh <amrfrsh!~Thunderbi@131.234.42.70> has joined #illumos
[09:36:03] <clapont> hi. maybe a silly question, but after re-reading about slices and digging online I still don't have this fully clear in my mind when I use the whole drive for a zpool: "why does a zpool work on ctds0 and on ctds2?" and "which is best to use, s0 or s2"? the partitions are 10/20GB. or if you can suggest some links with definitive answers.. many links are so old that they don't work anymore. thank you!
[09:37:31] <clapont> the closest I've found is "Using Slice 2 As a Partition: Sometimes a relational database uses an entire disk and requires one single raw partition. It's convenient in this circumstance to use slice 2, as it represents the entire disk, but is not recommended because you would be using cylinder 0" - I don't use a DB, just files
[09:38:01] *** BH23 <BH23!~BH23@193.117.206.132> has joined #illumos
[09:38:35] <sensille> clapont: the native way is to use zfs on ctd, without slices
[09:38:35] <Agnar> slice2 is by default whole disk
[09:38:50] <bahamat> clapont: It's convention in SunOS that slice 2 means the entire disk. I don't know how/why that convention came about.
[09:38:59] <Agnar> but of course you can have multiple partition entries addressing the same cylinders
[09:39:06] <Agnar> so, s0 and s2 overlap
[09:39:15] *** cpk <cpk!~chris@185.172.87.163> has joined #illumos
[09:39:26] <bahamat> technically s2 overlaps with all other slices.
[09:39:34] <Agnar> bahamat: it's for "backup" of the disk, that's why it has the label.
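The overlap can be seen directly with prtvtoc on an SMI-labelled disk (device name below is hypothetical): slice 2 carries the "backup" tag and spans every sector, so s0 and the other slices sit inside it.

    # Print the VTOC; look for the slice tagged "backup" covering the whole disk.
    prtvtoc /dev/rdsk/c1t0d0s2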
[09:40:35] <clapont> I read that s0/s2 overlap, and some of the logic dates back to historical times when there were only 1-2 computers per university, but I don't know which is the best to use
[09:41:19] <sensille> best to use "whole disk" with zfs, if that is an option
[09:41:25] <bahamat> clapont: When you're defining a zpool, don't use the slice notation.
[09:41:50] <clapont> I have some zpools with "s0" and others with "s2"; what I created with "zpool create ctd" shows "ctds0" in "zpool status".. but the one with "ctds2", how was that created?
[09:41:52] <Agnar> clapont: it does not really matter if you use s0 or s2. it does however matter whether you can omit the sX at all, because then ZFS will make use of the disk's write cache
[09:42:07] <bahamat> clapont: Or rather, if you're giving the whole disk to zfs, then don't use the slice notation.
[09:42:51] <sensille> clapont: and even if you don't have the device without slice notation, zfs will create it for you
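A minimal sketch of the advice above (pool and disk names are hypothetical):

    # Whole-disk vdev: hand zpool the bare ctd name; ZFS writes an EFI label,
    # creates an s0 spanning the disk, and can manage the disk write cache itself.
    zpool create tank c1t0d0

    # Slice vdev (s0 or s2 of an SMI-labelled disk) also works, but ZFS then
    # treats the device as a partition and will not enable the write cache:
    # zpool create tank c1t0d0s0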
[09:42:51] <Agnar> bahamat: the drama is, clapont uses VXdmp, and that is prone to problems with whole disks
[09:43:16] <clapont> when I did "zpool create ctd", at "zpool status" I see "ctds0"; but I have older zpools showing "ctds2" at "zpool status"; so I am thinking that maybe there was a reason..
[09:43:34] <bahamat> Agnar: Is that something that's over or under zfs?
[09:44:00] <Agnar> bahamat: it's a multipath solution from Veritas Storage Foundation, so at device layer
[09:45:02] <clapont> Agnar: you did not forget :-) on VXDMP, the "zpool create ctd" will list the OS_NATIVE_NAME as "c11t20370080E52DC020d4s2" - which is more confusing
[09:45:04] <bahamat> So then under?
[09:46:25] <clapont> to summarize: "zpool create ctd" shows "ctds0" in "zpool status"; but "vxdisk -eo alldgs list" shows "ctd_wwns2" as the "OS_NATIVE_NAME"
[09:47:06] <clapont> hence my confusion; I failed to explain it properly to Agnar the other day ...
[09:47:36] <sensille> so your confusion is more with vxdmp than with slices :-/
[09:48:20] <clapont> I would not say that.. I am concerned about OS/slices, not about VX; the VXDMP will read whatever the OS creates
[09:49:01] <clapont> I presented the whole picture to better understand
[09:52:22] <clapont> thank you very much for the explanations given; I will continue to create on ctd and in time, with more readings/tests, I will catch the whole idea
[09:59:48] *** jimklimov <jimklimov!~jimklimov@31.7.243.238> has joined #illumos
[10:03:31] <Agnar> VXdmp is just crap
[10:03:32] <Agnar> sorry
[10:05:27] <clapont> Agnar: VX is something I just have to deal with, it is what I found in place. I had more problems with it so I am digging :-)
[10:05:56] <Agnar> clapont: sure, I know. VX* is just broken and horribly outdated :/
[10:05:59] *** arnoldoree <arnoldoree!~arnoldore@ranoldoree.plus.com> has quit IRC (Quit: Leaving)
[10:06:26] <Agnar> clapont: and at least on illumos you will get problems with it sooner or later
[10:07:37] <clapont> Agnar: it is a Solaris 10 setup with two servers + a VX cluster, with IO fencing on VX too
[10:10:36] <Agnar> I thought so, yes
[10:11:37] <wilbury> vxdmp is crap. but it was a necessary crap when you wanted to encapsulate root disks.
[10:14:27] <clapont> should I propose to migrate from Veritas VCS to Solaris cluster 3?
[10:15:37] <clapont> I don't feel brave enough :-) but it is an idea I should investigate on some test VMs and have ready in case I run into many, many problems
[10:16:24] <clapont> you know the answer: "this worked for years, don't fix it" :-)
[10:17:44] <wilbury> vcs (SFHA), vxfs, vxvm are indeed very stable and good products.
[10:18:20] <wilbury> but vxdmp is obsolete for most of the use cases.
[10:19:11] *** cpk <cpk!~chris@185.172.87.163> has quit IRC (Ping timeout: 250 seconds)
[10:20:47] <clapont> wilbury: I think you suggested excluding all the paths from DMP and leaving only the 3 disks used for IO fencing? MPxIO is completely disabled so as not to add more ingredients to the mix :-)
[10:21:20] <wilbury> clapont: yes, i suggested that.
[10:21:36] <clapont> or is there something else to do to make the best of the "vxdmp" part?
[10:22:38] *** man_u <man_u!~manu@manu2.gandi.net> has joined #illumos
[10:23:18] <clapont> thank you. I did not forget. I'm trying to build the whole picture, test more things, read, re-read and... ask :-)
[10:24:25] *** amrfrsh <amrfrsh!~Thunderbi@131.234.42.70> has quit IRC (Ping timeout: 268 seconds)
[10:27:12] *** psydroid <psydroid!psydroidma@gateway/shell/matrix.org/x-jbspqufsfnxuevjd> has quit IRC (Read error: Connection reset by peer)
[10:27:37] *** jhot[m] <jhot[m]!jhotmatrix@gateway/shell/matrix.org/x-yzpnelniutovphjn> has quit IRC (Remote host closed the connection)
[10:27:46] *** Ericson2314 <Ericson2314!ericson231@gateway/shell/matrix.org/x-cunqxwzrefgupvne> has quit IRC (Write error: Connection reset by peer)
[10:27:46] *** GrahamPerrin[m] <GrahamPerrin[m]!grahamperr@gateway/shell/matrix.org/x-aqlhjgxujmtfvukz> has quit IRC (Write error: Connection reset by peer)
[10:44:03] *** pwinder <pwinder!~pwinder@86.2.210.254> has joined #illumos
[11:21:26] *** Ericson2314 <Ericson2314!ericson231@gateway/shell/matrix.org/x-kbtqngctuxiwjcip> has joined #illumos
[11:21:26] *** psydroid <psydroid!psydroidma@gateway/shell/matrix.org/x-qdvkvmircnoiejql> has joined #illumos
[11:21:26] *** jhot[m] <jhot[m]!jhotmatrix@gateway/shell/matrix.org/x-exghtkjkvycrhhle> has joined #illumos
[11:21:27] *** GrahamPerrin[m] <GrahamPerrin[m]!grahamperr@gateway/shell/matrix.org/x-pwlznhxtfxzspkkh> has joined #illumos
[11:35:20] *** mappx <mappx!~name@stsvon1503w-grc-03-65-93-108-193.dsl.bell.ca> has quit IRC (Ping timeout: 268 seconds)
[11:43:32] <clapont> update: I realized that the two zpools had different TOCs (labels): one was SMI (thus showing "s2" in "vxdisk list") and the other had an EFI label (thus showing only "ctd" in "vxdisk list"); maybe this helps someone..
[11:51:49] <clapont> for the same reason, the SMI labeled disk will show "ctd" in "zpool status" while the EFI labeled disk will show "ctds0"
[11:53:04] <clapont> this was my confusion, thanks everyone for helping! some answers enlightened me about other things which I will check later
[12:02:07] <tsoome> clapont: illumos whole disk zpool setup is not about partitions (we still use partitions), it is about whether zfs will manage the disk write cache or disable it.
[12:14:16] <clapont> tsoome: I put an EFI label on both, so they show up nicely; then I did "zpool create /dev/vx/stor0_2" and I hope I am fine with the cache too
[12:17:33] <clapont> as for illumos, I am wondering how I could participate (given my low level of knowledge). I was happy to tell the end of the story, maybe someone will benefit :-)
[12:18:14] *** BH23 <BH23!~BH23@193.117.206.132> has quit IRC (Remote host closed the connection)
[12:54:47] *** eki <eki!~eki@dsl-hkibng41-567327-143.dhcp.inet.fi> has quit IRC (Quit: leaving)
[12:57:21] *** eki <eki!~eki@dsl-hkibng41-567327-143.dhcp.inet.fi> has joined #illumos
[13:00:49] <tsoome> knowledge is an interesting thing - if you are working with something, the knowledge will grow.
[13:02:27] <tsoome> for the very beginning, it is often a good idea to start with documentation review and simple code change reviews; if you find something apparently broken, report it, and so on. in time the expertise will start to grow...
[13:02:38] *** jcea <jcea!~Thunderbi@2001:41d0:1:8a82:7670:6e00:7670:6e00> has joined #illumos
[13:18:51] <tsoome> https://www.illumos.org/issues/12036 updated. apparently the sbd_flush_data_cache() is a bit fishy.
[13:26:17] *** amrfrsh <amrfrsh!~Thunderbi@185.212.171.68> has joined #illumos
[13:36:03] <jimklimov> question on ZFS options: does a filesystem with "primarycache=metadata" keep in RAM only the ZFS metadata (block trees leading to filesystem objects), or also FS metadata (e.g. directory contents)?
[13:37:51] <jimklimov> I have a server that provides a shared git reference repo and a shared ccache to a build farm, so there are millions of small files, relatively few of them really used recently at any point in time, but the directory lookups (listings and fstat's) are intensive
[13:38:27] <jimklimov> so I want directories in RAM and other data can be in the L2ARC or on disk
[13:39:28] <jimklimov> so playing with `primarycache=metadata secondarycache=all`
[13:40:43] <jimklimov> I see in `zpool iostat` that real disks of this shared resource are often hit for reads while the SSD L2ARC cache is not;
[13:41:36] <jimklimov> no idea if disk reads only serve what is not yet in RAM (or already got pushed out), and no idea why those blocks do not come from SSD then
[13:42:46] <jimklimov> is there some neat dtrace script like solvfssnoop, with a twist to select which file/dir operations are served from which storage tier (disk, l2arc, arc)?
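Short of a dedicated dtrace script, one hedged way to see which tier serves the reads is to watch the stock arcstats kstats while the workload runs (the dataset name below is hypothetical):

    # The cache policy being experimented with above:
    zfs set primarycache=metadata tank/build-cache
    zfs set secondarycache=all    tank/build-cache

    # ARC vs L2ARC hit/miss counters; sample twice and compare the deltas to see
    # whether reads come from RAM, the SSD cache, or the spinning disks.
    kstat -p zfs:0:arcstats:hits zfs:0:arcstats:misses \
             zfs:0:arcstats:l2_hits zfs:0:arcstats:l2_misses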
[13:44:16] *** tsoome <tsoome!~tsoome@148-52-235-80.sta.estpak.ee> has quit IRC (Ping timeout: 240 seconds)
[13:53:19] *** mgerdts <mgerdts!~textual@2600:6c44:c7f:ec89:e004:cedd:50fb:1783> has quit IRC (Remote host closed the connection)
[13:54:31] *** mgerdts <mgerdts!~textual@2600:6c44:c7f:ec89:91d0:c721:7659:64d1> has joined #illumos
[13:55:20] *** mgerdts <mgerdts!~textual@2600:6c44:c7f:ec89:91d0:c721:7659:64d1> has quit IRC (Client Quit)
[14:05:24] <ptribble> jimklimov: the way I recall it, the l2arc is populated by scanning the contents of the main ARC and copying it to l2arc
[14:06:01] <ptribble> so, if primarycache doesn't cache it, there's no data to populate the l2arc
[14:11:55] *** mappx <mappx!~name@stsvon1503w-grc-03-65-93-108-193.dsl.bell.ca> has joined #illumos
[14:14:08] *** gh34 <gh34!~textual@cpe-184-58-181-106.wi.res.rr.com> has joined #illumos
[14:20:56] <clapont> jimklimov: the l2arc cache can eat a lot of RAM; try an echo "::arc" | mdb -k
[14:24:42] <jimklimov> ptribble: thinking of it, the premise sounds reasonable... if data only "expires" from ARC into L2ARC (bandwidth, urgency to memory pressure, and SSD-safety throttling considered), I now wonder whether "never getting into ARC because of primarycache setting" is processed as a quick no-op or as an instant expiration subject to secondarycache setting? :\
[14:26:22] <ptribble> I'm not sure I would consider tweaking primarycache in this case anyway; the obvious use case for turning off data in primarycache is when you have something like a database or VM that's also going to cache it itself, and you don't want to waste RAM caching it twice
[14:27:31] <jimklimov> clapont: at the moment, server's 16GB RAM is spent under 500M on the OS processes, 1800M free, and the rest is cache I assume
[14:28:45] <jimklimov> ARC data point lookup here, thanks :) https://pastebin.com/YxCfp2dP
[14:29:47] <jimklimov> ptribble: good point too, thanks
[14:32:10] <clapont> jimklimov: I am not knowledgeable enough to give advice, I just think it's a good idea to watch those and maybe do some rrd graphs while experimenting with loading/unloading apps + copying large files
[14:33:00] *** tsoome <tsoome!~tsoome@148-52-235-80.sta.estpak.ee> has joined #illumos
[14:36:04] *** arnoldoree <arnoldoree!~arnoldore@ranoldoree.plus.com> has joined #illumos
[14:45:42] *** pmooney <pmooney!~pmooney@67-4-175-230.mpls.qwest.net> has quit IRC (Quit: WeeChat 2.6)
[14:48:09] *** cpk <cpk!~chris@185.172.87.163> has joined #illumos
[15:07:59] <jimklimov> while on this subject, maybe the idea helps someone: made a script that we run after a reboot of the system to warm the dataset from spinning rust into the (L2)ARC, to work around the sad lack of L2ARC persistence
[15:08:03] <jimklimov> https://pastebin.com/L0bEY3b7
[15:20:02] *** rzezeski <rzezeski!uid151901@gateway/web/irccloud.com/x-voyqubxfenaevmvf> has joined #illumos
[15:31:12] *** _alhazre_ <_alhazre_!~Alex@mobile-access-bcee35-219.dhcp.inet.fi> has joined #illumos
[15:31:13] *** _alhazred <_alhazred!~Alex@mobile-access-bcee35-219.dhcp.inet.fi> has quit IRC (Read error: Connection reset by peer)
[15:36:00] <sensille> Agnar: did you express interest in the mystery of the slower cpu-only bench illumos vs. linux?
[15:36:03] <sensille> it's solved
[15:36:42] *** amrfrsh <amrfrsh!~Thunderbi@185.212.171.68> has quit IRC (Quit: amrfrsh)
[15:38:24] <tsoome> jimklimov: also note the illumos zfs has metadata limit in arc (30% if my memory serves)
[15:42:41] <rzezeski> sensille: I'm curious to hear what it was.
[15:42:49] <gitomat> [illumos-gate] 12017 Assertion failure in kstat_waitq_to_runq from blkdev -- Paul Winder <paul at winders dot demon.co.uk>
[15:48:54] <sensille> rzezeski: the main reason was the entry "set idle_cpu_no_deep_c = 1" in /etc/system. this prevents the cpus from entering C3, which would be necessary for the turbo boost to work
[15:50:01] <rzezeski> sensille: interesting, good to know
[15:50:34] <sensille> but hard to remember next time you need it ;)
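For the archives, the tunable in question and a hedged way to check a running kernel (the mdb one-liner assumes the variable keeps this name in your build):

    # /etc/system entry that was found to block deep C-states and, with them,
    # the deeper turbo boost bins:
    #   set idle_cpu_no_deep_c = 1
    # Inspect the live value (0 means deep C-states are allowed):
    echo 'idle_cpu_no_deep_c/D' | mdb -k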
[15:51:24] *** neirac <neirac!~cneira@pc-184-104-160-190.cm.vtr.net> has quit IRC (Remote host closed the connection)
[15:51:43] *** neirac <neirac!~cneira@pc-184-104-160-190.cm.vtr.net> has joined #illumos
[15:52:41] <jimklimov> tsoome: the definition of metadata here evades me though :) namely, ZFS POSIX filesystem layer directory objects, and inode tables, and other such non-file-contents filesystem data, is it "metadata" for ZFS caching, special redundancy, etc. or not? :)
[15:56:23] <sensille> can i change the system behaviour to not spread the load over all cpus if possible? it would make sense to keep as many cores as possible in C3, but i don't want to put a hard limit on the application by defining psrsets
[15:57:37] <jimklimov> In other news, did anyone have experience with Acer Predator gaming laptops for illumos with graphics? Namely, an ad in https://www.theverge.com/good-deals/2019/11/27/20984544/acer-predator-triton-700-gaming-laptop-g-sync-1080-black-friday-deal-sale caught my eye. The specs at https://www.acer.com/ac/en/US/content/predator-model/NH.Q2LAA.001 do not reflect the sales-mode price, but also do not mention mixing integrated Intel graphics vs. mentioned NVidi
[16:01:03] <Agnar> sensille: cool, thanks!
[16:05:57] <sensille> idle_cpu_no_deep_c is a recommended workaround for various driver-related problems
[16:06:10] <sensille> but i think we don't need it anymore
[16:06:57] <rmustacc> Some people still run Nehalem and Westmere, where deep C-states were plagued with lots of problems.
[16:07:05] <rmustacc> So I don't think we should get rid of the tunable.
[16:07:39] <sensille> but we could add a warning that it interferes with intel turbo boost
[16:08:49] <sensille> ah, with "we" above i meant my work :)
[16:09:00] <rmustacc> Ah.
[16:09:54] <rmustacc> Well, I think we (the broader illumos community) should figure out why it did interfere.
[16:10:09] <rmustacc> I would have expected us to still use monitor/mwait in that scenario.
[16:10:22] <sensille> because it prevents C3. and the amount of boost depends on the number of cores in C3
[16:10:23] <rmustacc> But it may be that unless something enters C3, turbo won't kick in.
[16:11:15] <sensille> i found it mentioned here: https://en.wikichip.org/wiki/intel/turbo_boost_technology
[16:11:41] <sensille> and without the tunable, execution time of the test went down from 3.5s to 3.1
[16:12:38] <rmustacc> Ah, this is the distinction between what you can hit with the all-cores turbo and the single-core turbo.
[16:21:38] *** pmooney <pmooney!~pmooney@67-4-175-230.mpls.qwest.net> has joined #illumos
[16:32:53] *** bacterio <bacterio!bacterio@fsf/member/bacterio> has quit IRC (Read error: Connection reset by peer)
[16:34:17] *** amrfrsh <amrfrsh!~Thunderbi@131.234.44.102> has joined #illumos
[16:45:41] *** chrisBF <chrisBF!519d0501@host81-157-5-1.range81-157.btcentralplus.com> has joined #illumos
[16:49:37] *** mappx <mappx!~name@stsvon1503w-grc-03-65-93-108-193.dsl.bell.ca> has quit IRC (Ping timeout: 240 seconds)
[17:07:49] *** amrfrsh <amrfrsh!~Thunderbi@131.234.44.102> has quit IRC (Ping timeout: 250 seconds)
[17:26:10] *** amrfrsh <amrfrsh!~Thunderbi@131.234.44.102> has joined #illumos
[17:30:07] *** amrfrsh <amrfrsh!~Thunderbi@131.234.44.102> has quit IRC (Client Quit)
[17:36:44] *** ptribble <ptribble!~ptribble@cpc92716-cmbg20-2-0-cust138.5-4.cable.virginm.net> has quit IRC (Quit: Leaving)
[17:40:35] *** bacterio <bacterio!bacterio@fsf/member/bacterio> has joined #illumos
[17:51:33] *** tsoome <tsoome!~tsoome@148-52-235-80.sta.estpak.ee> has quit IRC (Read error: Connection reset by peer)
[17:52:07] *** tsoome <tsoome!~tsoome@148-52-235-80.sta.estpak.ee> has joined #illumos
[18:32:07] *** bacterio <bacterio!bacterio@fsf/member/bacterio> has quit IRC (Read error: Connection reset by peer)
[18:32:39] *** chrisBF <chrisBF!519d0501@host81-157-5-1.range81-157.btcentralplus.com> has quit IRC (Ping timeout: 260 seconds)
[18:33:08] *** man_u <man_u!~manu@manu2.gandi.net> has quit IRC (Quit: man_u)
[18:41:42] *** Yogurt <Yogurt!~Yogurt@104-7-67-228.lightspeed.sntcca.sbcglobal.net> has joined #illumos
[18:42:21] *** amrfrsh <amrfrsh!~Thunderbi@185.212.171.68> has joined #illumos
[18:58:01] *** amrfrsh <amrfrsh!~Thunderbi@185.212.171.68> has quit IRC (Quit: amrfrsh)
[19:18:41] *** yomisei <yomisei!~void@ip4d16bc15.dynamic.kabel-deutschland.de> has quit IRC (Ping timeout: 250 seconds)
[19:18:58] *** yomisei <yomisei!~void@ip4d16bc15.dynamic.kabel-deutschland.de> has joined #illumos
[19:21:25] *** amrfrsh <amrfrsh!~Thunderbi@185.212.171.68> has joined #illumos
[19:23:23] *** neirac <neirac!~cneira@pc-184-104-160-190.cm.vtr.net> has quit IRC (Ping timeout: 268 seconds)
[19:23:28] *** kohju <kohju!~kohju@gw.justplayer.com> has quit IRC (Ping timeout: 250 seconds)
[19:23:51] *** kohju <kohju!~kohju@gw.justplayer.com> has joined #illumos
[19:23:53] *** mnowak_ <mnowak_!~mnowak_@94.142.238.232> has quit IRC (Ping timeout: 250 seconds)
[19:24:19] *** edef <edef!edef@NixOS/user/edef> has quit IRC (Ping timeout: 250 seconds)
[19:26:22] *** edef <edef!edef@NixOS/user/edef> has joined #illumos
[19:42:20] *** bacterio <bacterio!bacterio@fsf/member/bacterio> has joined #illumos
[19:46:59] *** kebe <kebe!~danmcd@static-71-174-113-16.bstnma.fios.verizon.net> has joined #illumos
[19:47:20] <kebe> ping
[19:47:46] <jlevon> pong?
[19:48:14] <kebe> Thanks. danmcd's session is still on but I need to be here somehow.
[19:51:20] <jimklimov> so here you are :)
[19:51:43] <jimklimov> say hi to Dan when you see his mirror image :)
[20:10:18] <kebe> Just count yourself lucky I'm not doing the Gollum/Smeagol talking-to-myself thing.
[20:19:28] *** neirac <neirac!~cneira@pc-184-104-160-190.cm.vtr.net> has joined #illumos
[20:43:53] *** Teknix <Teknix!~pds@69.41.134.110> has joined #illumos
[20:44:46] *** papertigers <papertigers!~papertige@pool-72-75-249-69.bflony.fios.verizon.net> has quit IRC (Ping timeout: 265 seconds)
[20:53:21] *** papertigers <papertigers!~papertige@pool-72-75-249-69.bflony.fios.verizon.net> has joined #illumos
[21:01:55] *** kahiru <kahiru!~quassel@ip-89-102-207-18.net.upcbroadband.cz> has quit IRC (Remote host closed the connection)
[21:02:55] *** kahiru <kahiru!~quassel@ip-89-102-207-18.net.upcbroadband.cz> has joined #illumos
[21:09:34] *** MilkmanDan <MilkmanDan!~dan@wilug/expat/MilkmanDan> has quit IRC (*.net *.split)
[21:15:44] *** MilkmanDan <MilkmanDan!~dan@wilug/expat/MilkmanDan> has joined #illumos
[21:21:33] *** Teknix <Teknix!~pds@69.41.134.110> has quit IRC (Ping timeout: 245 seconds)
[21:23:57] *** Teknix <Teknix!~pds@172.58.46.225> has joined #illumos
[21:25:25] <neirac> does anyone have documents related to PSARC/2009/396 Tickless Kernel Architecture / lbolt? I'm using the wayback machine to find things, but maybe someone has access to the docs?
[21:26:20] *** cpk <cpk!~chris@185.172.87.163> has quit IRC (Ping timeout: 265 seconds)
[21:26:43] *** cpk <cpk!~chris@185.172.87.163> has joined #illumos
[21:28:21] <jlevon> neirac: https://illumos.org/opensolaris/ARChive/PSARC/2009/396/index.html is all that exists I bet
[21:30:46] <neirac> jlevon thanks!
[21:32:40] *** bacterio <bacterio!bacterio@fsf/member/bacterio> has quit IRC (Read error: Connection reset by peer)
[21:33:05] *** neirac <neirac!~cneira@pc-184-104-160-190.cm.vtr.net> has quit IRC (Remote host closed the connection)
[21:33:26] *** neirac <neirac!~cneira@pc-184-104-160-190.cm.vtr.net> has joined #illumos
[21:35:02] *** Teknix <Teknix!~pds@172.58.46.225> has quit IRC (Ping timeout: 265 seconds)
[21:36:55] *** Teknix <Teknix!~pds@172.58.47.121> has joined #illumos
[21:39:14] <neirac> jlevon thanks a lot it has all I wanted
[21:39:20] <jlevon> my pleasure
[21:39:29] <jlevon> thank LeftWing (I think) for saving it all off
[21:48:49] <LeftWing> I believe richlowe preserved it all long ago, but I did put it up on the site!
[21:49:41] *** andy_js <andy_js!~andy@94.5.2.153> has joined #illumos
[21:51:03] <jlevon> LeftWing: is it on github? also is it linked from somewhere?
[21:51:05] <jlevon> and the bug db?
[21:55:10] <LeftWing> It is not currently on GitHub
[21:55:38] <jlevon> ok
[21:55:46] <LeftWing> https://illumos.org/opensolaris/bugdb/bug.html
[21:55:46] <jlevon> hrm, my RTI email didn't make it through.
[21:55:51] <jlevon> ta
[21:57:47] *** rzezeski <rzezeski!uid151901@gateway/web/irccloud.com/x-voyqubxfenaevmvf> has quit IRC (Quit: Connection closed for inactivity)
[21:59:56] *** pwinder <pwinder!~pwinder@86.2.210.254> has quit IRC (Quit: This computer has gone to sleep)
[22:07:46] <LeftWing> jlevon: I don't see it in the moderation or discard parts of the interface I have
[22:10:47] <gitomat> [illumos-gate] 2988 nfssrv: need ability to go to submounts for v3 and v2 protocols -- Vitaliy Gusev <gusev.vitaliy at nexenta dot com>
[22:11:12] <jlevon> yeah my fault
[22:22:49] *** sjorge <sjorge!~sjorge@unaffiliated/sjorge> has quit IRC (Remote host closed the connection)
[22:27:08] *** sjorge <sjorge!~sjorge@unaffiliated/sjorge> has joined #illumos
[22:37:01] *** insomnia <insomnia!~insomnia@shadowcat/actuallyamemberof/lollipopguild.insomnia> has quit IRC (Ping timeout: 268 seconds)
[22:49:34] *** insomnia <insomnia!~insomnia@shadowcat/actuallyamemberof/lollipopguild.insomnia> has joined #illumos
[23:43:00] *** cpk <cpk!~chris@185.172.87.163> has quit IRC (Ping timeout: 268 seconds)
[23:53:19] *** mappx <mappx!~name@stsvon1503w-grc-03-65-93-108-193.dsl.bell.ca> has joined #illumos