[00:04:59] *** clintar <clintar!~clintar@68.69.164.130> has joined ##vulkan
[00:07:05] *** clintar <clintar!~clintar@68.69.164.130> has quit IRC (Remote host closed the connection)
[00:11:08] *** clintar <clintar!~clintar@68.69.164.130> has joined ##vulkan
[00:23:04] *** cramalho <cramalho!~cramalho@2804:54:16f5:8200:bd67:9a34:7fbd:ea22> has joined ##vulkan
[00:25:07] *** ector <ector!~asdf@ua-85-224-236-175.bbcust.telenor.se> has quit IRC ()
[00:29:56] *** ciaala <ciaala!~crypt@2a02:120b:2c1f:4960:6ef0:49ff:feee:4777> has quit IRC (Quit: Konversation terminated!)
[00:32:07] *** davr0s <davr0s!~textual@host86-153-157-230.range86-153.btcentralplus.com> has quit IRC (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[00:45:35] *** derhass <derhass!~derhass@ipservice-092-208-158-056.092.208.pools.vodafone-ip.de> has quit IRC (Quit: leaving)
[00:57:14] *** slime <slime!~slime73@blk-215-81-93.eastlink.ca> has joined ##vulkan
[01:42:12] *** MrFlibble <MrFlibble!MrFlibble@2.122.47.217> has quit IRC (Remote host closed the connection)
[01:42:30] *** MrFlibble <MrFlibble!MrFlibble@2.122.47.217> has joined ##vulkan
[01:49:29] *** MrFlibble <MrFlibble!MrFlibble@2.122.47.217> has quit IRC (Ping timeout: 256 seconds)
[02:09:46] *** cramalho <cramalho!~cramalho@2804:54:16f5:8200:bd67:9a34:7fbd:ea22> has quit IRC (Quit: cramalho)
[02:10:13] *** cramalho <cramalho!~cramalho@2804:54:16f5:8200:bd67:9a34:7fbd:ea22> has joined ##vulkan
[02:17:34] *** ratchetfreak <ratchetfreak!~ratchetfr@ptr-82s3g7lgupdkti53lhr.18120a2.ip6.access.telenet.be> has quit IRC (Ping timeout: 256 seconds)
[03:19:53] *** MrFlibble <MrFlibble!MrFlibble@2.122.47.217> has joined ##vulkan
[03:35:04] *** MrFlibble <MrFlibble!MrFlibble@2.122.47.217> has left ##vulkan
[05:09:53] *** ville <ville!~ville@87-93-41-166.bb.dnainternet.fi> has quit IRC (Quit:)
[05:32:55] *** ville <ville!~ville@188-67-14-101.bb.dnainternet.fi> has joined ##vulkan
[06:15:51] *** Danukeru <Danukeru!~Danukeru@irc.danuke.ru> has quit IRC (Ping timeout: 240 seconds)
[06:15:59] *** Danukeru <Danukeru!~Danukeru@irc.danuke.ru> has joined ##vulkan
[06:37:55] *** Mazon <Mazon!~Mazon@37.205.127.168> has quit IRC (Ping timeout: 256 seconds)
[06:43:40] *** Mazon <Mazon!~Mazon@37.205.127.168> has joined ##vulkan
[06:49:56] *** slime <slime!~slime73@blk-215-81-93.eastlink.ca> has quit IRC (Quit: This computer has gone to sleep)
[06:54:27] *** bpmedley <bpmedley!~bpm@c-24-72-144-115.ni.gigamonster.net> has quit IRC (Ping timeout: 240 seconds)
[07:12:31] *** davr0s <davr0s!~textual@host86-153-157-230.range86-153.btcentralplus.com> has joined ##vulkan
[07:32:58] *** nsf <nsf!~nsf@jiss.convex.ru> has joined ##vulkan
[08:08:25] *** davr0s <davr0s!~textual@host86-153-157-230.range86-153.btcentralplus.com> has quit IRC (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[08:27:46] *** borkr <borkr!~borkr@static130-244.mimer.net> has joined ##vulkan
[08:47:36] *** snyp <snyp!~Snyp@103.56.236.188> has joined ##vulkan
[08:54:27] *** davr0s <davr0s!~textual@host86-153-157-230.range86-153.btcentralplus.com> has joined ##vulkan
[09:05:00] *** TSS_ <TSS_!~TSS@2a02:2f0a:4030:1447:d250:99ff:fe83:4a0a> has quit IRC (Ping timeout: 256 seconds)
[09:11:54] *** dadabidet <dadabidet!~dadabidet@extranet.adullact.org> has joined ##vulkan
[09:16:52] *** grouse <grouse!~grouse@83-233-9-2.cust.bredband2.com> has joined ##vulkan
[09:24:02] *** Fats <Fats!fats@gateway/vpn/privateinternetaccess/fats> has quit IRC (Remote host closed the connection)
[09:25:16] *** Fats <Fats!fats@gateway/vpn/privateinternetaccess/fats> has joined ##vulkan
[09:28:36] *** davr0s <davr0s!~textual@host86-153-157-230.range86-153.btcentralplus.com> has quit IRC (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[09:50:00] *** davr0s <davr0s!~textual@host86-153-157-230.range86-153.btcentralplus.com> has joined ##vulkan
[09:50:27] *** Deluxe <Deluxe!~Deluxe@2001:67c:1220:80e:e9:1d2:f14f:e47f> has joined ##vulkan
[09:59:33] *** Guest4630 <Guest4630!4d3b9548@gateway/web/cgi-irc/kiwiirc.com/ip.77.59.149.72> has joined ##vulkan
[09:59:55] *** Deluxe <Deluxe!~Deluxe@2001:67c:1220:80e:e9:1d2:f14f:e47f> has quit IRC (Remote host closed the connection)
[10:01:57] *** ratchetfreak <ratchetfreak!c351a8d8@gateway/web/freenode/ip.195.81.168.216> has joined ##vulkan
[11:24:54] *** davr0s <davr0s!~textual@host86-153-157-230.range86-153.btcentralplus.com> has quit IRC (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[11:44:35] *** sla_ro|master <sla_ro|master!~sla.ro@78.96.209.89> has joined ##vulkan
[12:00:50] *** davr0s <davr0s!~textual@host86-153-157-230.range86-153.btcentralplus.com> has joined ##vulkan
[12:18:49] *** cramalho <cramalho!~cramalho@2804:54:16f5:8200:bd67:9a34:7fbd:ea22> has quit IRC (Ping timeout: 276 seconds)
[12:45:40] *** HZun <HZun!~HZun@0x3ec72d49.osd.customer.dk.telia.net> has joined ##vulkan
[12:48:09] <HZun> Can an NVIDIA CUDA Core (execution unit) that contains both an ALU and a FPU issue both an ALU and a FPU instruction each clock cycle? or can it only issue one of them each clock cycle?
[12:48:19] *** borkr <borkr!~borkr@static130-244.mimer.net> has quit IRC (Ping timeout: 265 seconds)
[12:51:35] *** davr0s <davr0s!~textual@host86-153-157-230.range86-153.btcentralplus.com> has quit IRC (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[12:57:14] <HZun> And can they overlap? for example can execution-unit-1 issue an integer-operation in clock-cycle-1 and then issue a floating-point-operation in clock-cycle-2 ? (where the operations themselves might take 40 clock cycles to finish), etc?
[13:05:19] *** slime <slime!~slime73@blk-215-81-93.eastlink.ca> has joined ##vulkan
[13:06:47] <sharpneli> They all overlap. Even floating point calculations overlap. They have around 7 cycles of latency.
[13:07:14] <sharpneli> Throughput is 1 per cycle but latency is massive. But it doesn't matter as one can overlap different warps within a single SM
[13:14:40] <HZun> is throughput 1 per cycle even if you mix int and fp? like it seems possible that an operation of each type could start each cycle.
[13:15:47] <sharpneli> Check cuda docs. They give the relevant numbers per arch there. Within reason tho.
[13:15:56] <sharpneli> They don't want people to optimize too much for specific HW
[13:20:23] <HZun> I have. However I can't find any information about whether the same CUDA core can start both an int and an fp operation in the same clock cycle, or whether it can finish both an int and an fp operation in the same clock cycle.
[13:22:08] <sharpneli> Maybe it cannot then.
[13:22:32] <sharpneli> If it's an ALU where they share some functionalities between the FP and integer units so they cannot be run in parallel
[13:23:01] *** slime <slime!~slime73@blk-215-81-93.eastlink.ca> has quit IRC (Quit: This computer has gone to sleep)
[13:24:32] <sharpneli> And that too depends on the generation. On older gens they had different fp and int throughputs
[13:26:47] <ratchetfreak> it's very likely that you won't need to optimize to that level
[13:26:59] <ratchetfreak> and just normal not-dumb code will be good enough
[13:27:42] <sharpneli> My guess is that there is no parallelism between them on any platform
[13:27:56] <sharpneli> Because they state in multiple places that "at instruction issue time, each scheduler issues one instruction for one of its assigned warps that is ready to execute"
[13:28:18] <sharpneli> Single instruction per clock at max. On any of the running warps.
[13:28:31] <ratchetfreak> but that instruction can contain both a ALU op and FPU op
[13:28:57] <sharpneli> At least during maxwell era they didn't have VLIW instructions
[13:32:31] <HZun> So the answer is maybe? and that it is probably limited by the warp scheduler anyway?
[13:32:35] <sharpneli> Also considering that they cannot do a full 32-bit integer multiply per clock, it implies they don't have a full-fledged integer ALU there at all. Just a unit that can do FP and integer stuff
[13:32:51] <sharpneli> Yes.
[13:33:20] <sharpneli> It's understandable. In graphics work you pretty much never have sufficient instruction level parallelism between fp and int to make it worth spending chip space on that
[13:33:31] <sharpneli> Instead you can just stuff in more fp units.
[13:34:29] <sharpneli> Also you'd have to double the datapaths to the ALUs so you could move operands for both fp and int at the same time.
[13:35:46] <ratchetfreak> and integer ops vs. floating point ops are very similar, only the shifts related to the exponent are different
[13:38:41] <sharpneli> Also moving data around the chip is not cheap. It consumes power and surface area for the paths. Instead of having 32 int+fp units that can dual-issue, it's almost certainly superior to have 64 single-issue mixed fp and int units.
[13:42:15] <HZun> But aren't GPUs already making that tradeoff by using shared registers? It seems like the additional datapath required for dual-issue is small compared to all the register datapaths?
[13:52:05] *** snyp_ <snyp_!~Snyp@103.56.236.33> has joined ##vulkan
[13:54:45] *** snyp__ <snyp__!~Snyp@103.56.236.213> has joined ##vulkan
[13:55:12] *** snyp <snyp!~Snyp@103.56.236.188> has quit IRC (Ping timeout: 245 seconds)
[13:57:00] <sharpneli> HZun: You can imagine the GPU as already issuing 32 instructions in parallel per clock
[13:57:17] <sharpneli> HZun: Now the choice is that what kind of instructions it can issue. 32 floats, or 16 floats + 16 ints.
[13:57:21] *** snyp_ <snyp_!~Snyp@103.56.236.33> has quit IRC (Ping timeout: 264 seconds)
[13:57:31] <sharpneli> That's why it has one scheduler and 32 execution ports
[13:58:08] <sharpneli> The datapath cost for additional int+fp is the same as additional pure int unit
[14:00:18] <sharpneli> The whole reason why GPUs became like this was because originally they were VLIW machines that submitted many instructions per clock for a single thread. As the unit became wider and wider it made sense to run multiple threads at the same time in that one massively wide issue unit
[14:06:39] *** snyp__ <snyp__!~Snyp@103.56.236.213> has quit IRC (Quit: Leaving)
[14:07:16] <HZun> I just found this:
[14:07:20] <HZun> "Unlike Pascal GPUs, which could not execute FP32 and INT32 instructions simultaneously, the Volta GV100 SM includes separate FP32 and INT32 cores, allowing simultaneous execution of FP32 and INT32 operations at full throughput, while also increasing instruction issue throughput."
[14:08:40] <HZun> Do you guys think that "simultaneously" here means "overlapping" or do they mean issue-parallelism, or both?
[14:20:46] *** davr0s <davr0s!~textual@host86-153-157-230.range86-153.btcentralplus.com> has joined ##vulkan
[14:35:38] <HZun> anyway thanks for the help guys :)
[14:35:44] *** HZun <HZun!~HZun@0x3ec72d49.osd.customer.dk.telia.net> has quit IRC (Quit: Leaving)
[14:55:41] *** borkr <borkr!~borkr@static130-244.mimer.net> has joined ##vulkan
[15:00:14] *** nsf <nsf!~nsf@jiss.convex.ru> has quit IRC (Quit: WeeChat 2.1)
[15:00:58] *** psychicist__ <psychicist__!~psychicis@5356A22B.cm-6-7c.dynamic.ziggo.nl> has joined ##vulkan
[15:15:01] *** cheakoirccloud <cheakoirccloud!uid293319@gateway/web/irccloud.com/x-dnggyvljdqcxwftf> has joined ##vulkan
[15:28:04] *** Guest4630 <Guest4630!4d3b9548@gateway/web/cgi-irc/kiwiirc.com/ip.77.59.149.72> has quit IRC (Ping timeout: 256 seconds)
[16:01:29] *** Guest4630 <Guest4630!4d3b9548@gateway/web/cgi-irc/kiwiirc.com/ip.77.59.149.72> has joined ##vulkan
[16:25:07] *** ImQ009 <ImQ009!~ImQ009@unaffiliated/imq009> has joined ##vulkan
[16:55:03] *** sla_ro|master <sla_ro|master!~sla.ro@78.96.209.89> has quit IRC ()
[16:59:05] *** Ralith_ <Ralith_!~ralith@c-24-143-116-108.customer.broadstripe.net> has quit IRC (Ping timeout: 240 seconds)
[17:02:28] *** Ralith_ <Ralith_!~ralith@c-24-143-116-108.customer.broadstripe.net> has joined ##vulkan
[17:10:41] *** Ralith_ <Ralith_!~ralith@c-24-143-116-108.customer.broadstripe.net> has quit IRC (Ping timeout: 276 seconds)
[17:11:33] *** Ralith_ <Ralith_!~ralith@c-24-143-116-108.customer.broadstripe.net> has joined ##vulkan
[17:19:04] *** davr0s <davr0s!~textual@host86-153-157-230.range86-153.btcentralplus.com> has quit IRC (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[17:21:29] *** borkr <borkr!~borkr@static130-244.mimer.net> has quit IRC (Remote host closed the connection)
[17:21:57] *** Ralith_ <Ralith_!~ralith@c-24-143-116-108.customer.broadstripe.net> has quit IRC (Ping timeout: 264 seconds)
[17:23:23] *** Ralith_ <Ralith_!~ralith@c-24-143-116-108.customer.broadstripe.net> has joined ##vulkan
[17:24:12] *** sla_ro|master <sla_ro|master!~sla.ro@78.96.209.89> has joined ##vulkan
[17:27:26] <dyl> turol Would you recommend using the VMA you have in your renderer?
[17:27:43] <dyl> I'm looking to simplify a few things, and buffer allocation is currently one of the most bulky.
[17:28:24] *** dadabidet <dadabidet!~dadabidet@extranet.adullact.org> has quit IRC (Quit: Leaving)
[17:30:38] *** grouse <grouse!~grouse@83-233-9-2.cust.bredband2.com> has quit IRC (Quit: Leaving)
[17:37:09] *** Ralith__ <Ralith__!~ralith@c-24-143-116-108.customer.broadstripe.net> has joined ##vulkan
[17:38:35] *** Ralith_ <Ralith_!~ralith@c-24-143-116-108.customer.broadstripe.net> has quit IRC (Ping timeout: 240 seconds)
[17:42:57] *** Ralith__ <Ralith__!~ralith@c-24-143-116-108.customer.broadstripe.net> has quit IRC (Ping timeout: 248 seconds)
[17:44:17] *** Ralith_ <Ralith_!~ralith@c-24-143-116-108.customer.broadstripe.net> has joined ##vulkan
[17:44:37] *** cheakoirccloud <cheakoirccloud!uid293319@gateway/web/irccloud.com/x-dnggyvljdqcxwftf> has quit IRC (Quit: Connection closed for inactivity)
[18:19:43] <dyl> Thanks again for the link turol, it's interesting to see the organizational structure.
[18:20:09] <dyl> What I'm doing is ultimately a bit higher level (and doesn't need multiple backends) but this is very helpful.
[18:22:39] <turol> yes use vma until you need something fancy
[18:22:50] <turol> that might be an old version, go directly for the upstream repo
[18:23:09] *** Ralith_ <Ralith_!~ralith@c-24-143-116-108.customer.broadstripe.net> has quit IRC (Ping timeout: 264 seconds)
[18:26:09] *** Ralith_ <Ralith_!~ralith@c-24-143-116-108.customer.broadstripe.net> has joined ##vulkan
[18:27:33] *** Nilesh___ <Nilesh___!uid116340@gateway/web/irccloud.com/x-iyzxiwsixkysjymr> has joined ##vulkan
[18:30:44] *** Deluxe <Deluxe!~Deluxe@212.4.150.151> has joined ##vulkan
[18:31:33] *** Ralith_ <Ralith_!~ralith@c-24-143-116-108.customer.broadstripe.net> has quit IRC (Ping timeout: 256 seconds)
[18:32:17] *** Ralith_ <Ralith_!~ralith@c-24-143-116-108.customer.broadstripe.net> has joined ##vulkan
[18:37:41] <dyl> turol also, interesting use of variants.
[18:37:56] <dyl> You are a better student of C++ design patterns than I am haha.
[18:38:05] <turol> nah, functional programming
[18:38:26] <dyl> Well, you're still modeling it as a visitor rather than a functor :p.
[18:38:31] <dyl> But yeah, I see what you mean.
[18:38:42] <turol> because c++ doesn't have haskell case statement
[18:38:55] <turol> that's what it's really supposed to be
[18:38:58] <dyl> Yeah.
[18:39:02] <dyl> One of the nicest things about Swift is the pervasive pattern syntax.
[18:39:11] <dyl> Anywhere you can use a conditional, you can use patterns.
[18:39:25] <dyl> (though they lack much destructuring power beyond sum types)
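The std::variant-as-case-expression idea turol and dyl are circling can be sketched like this. The `overloaded` helper is the stock cppreference idiom, and `Shape`/`area_ish` are made-up names for illustration, not code from turol's renderer:

```cpp
#include <cassert>
#include <utility>
#include <variant>

// The usual "overloaded" visitor helper: inherit call operators from a
// pack of lambdas, plus a C++17 deduction guide.
template <class... Ts> struct overloaded : Ts... { using Ts::operator()...; };
template <class... Ts> overloaded(Ts...) -> overloaded<Ts...>;

// A hypothetical sum type; std::visit then plays the role of a case
// expression over its alternatives.
using Shape = std::variant<int, std::pair<int, int>>;  // radius | width x height

inline int area_ish(const Shape &s) {
    return std::visit(overloaded{
        [](int r) { return 3 * r * r; },                               // crude circle
        [](const std::pair<int, int> &p) { return p.first * p.second; }
    }, s);
}
```

Each lambda is one "case arm"; the compiler errors out if an alternative is left unhandled, which is the closest C++17 gets to an exhaustive match.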
[18:40:19] <dyl> Your C++ just feels so much cleaner than mine, haha.
[18:40:48] <turol> decades of experience tends to do that
[18:41:31] <dyl> Yeah, I have nothing on that amount of experience.
[18:41:34] *** ratchetfreak <ratchetfreak!c351a8d8@gateway/web/freenode/ip.195.81.168.216> has quit IRC (Ping timeout: 260 seconds)
[18:41:36] <dyl> Least of all in C++.
[18:42:17] <dyl> I may shamelessly rip some of the patterns/conventions here.
[18:42:29] <turol> keep at it and eventually you will
[18:42:34] <dyl> I think I just need to kill off the line-width requirement in my own code.
[18:42:51] <dyl> As much as I like having parallel buffers open, it makes lambda-heavy C++ a bit harder/unwieldy.
[18:43:05] *** nsf <nsf!~nsf@jiss.convex.ru> has joined ##vulkan
[18:48:53] <dyl> turol I'm trying to grok what you're doing with createEphemeralBuffer/ringbuffering.
[18:49:10] <dyl> Seems far more complex than anything I'd probably have to do, trying to understand what the motivation is.
[18:49:35] <turol> an ephemeral buffer is a buffer which exists for the duration of current frame only
[18:49:45] <turol> its lifetime is managed by the Renderer
[18:49:49] <dyl> That part I understand clearly.
[18:50:00] <turol> they're backed by a persistently mapped ringbuffer
[18:50:11] <turol> in essence they're simply an offset/size pair
[18:50:26] <turol> so the code says "put this data in a buffer"
[18:50:35] <turol> and later "bind that buffer to this descriptor set"
[18:50:58] <turol> then Renderer takes care of everything else
[18:51:23] <turol> including making sure to not overwrite the data until the frame is done
[18:51:34] <turol> and reallocating the ringbuffer if things don't fit
[18:52:31] <dyl> I guess I don't understand the memory model in Vulkan well enough. Superficially I see why you would want to use a ringbuffer.
[18:52:46] <dyl> But I'm trying to ask myself what the simplest/dumbest approach would be, as this seems a bit more sophisticated.
[18:53:45] <turol> it's a performance optimization
[18:53:52] <turol> allocating/deallocating is slow
[18:54:03] <turol> so if we know these things are ephemeral we can optimize
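The allocation-avoidance turol describes can be sketched as a ring allocator handing out offset/size pairs into one persistently mapped buffer. Names here are made up; the real Renderer also handles alignment, fencing against in-flight frames, and regrowth:

```cpp
#include <cassert>
#include <cstddef>
#include <utility>

// Minimal sketch: an "ephemeral buffer" is just an {offset, size} pair
// into a single backing allocation, so per-frame allocations are a
// pointer bump instead of a driver call.
struct RingAllocator {
    std::size_t capacity;
    std::size_t head = 0;

    explicit RingAllocator(std::size_t cap) : capacity(cap) {}

    // Returns {offset, size} within the backing buffer, wrapping when the
    // allocation would run off the end. Real code must also verify the
    // GPU has finished reading the region being reused.
    std::pair<std::size_t, std::size_t> alloc(std::size_t size) {
        if (head + size > capacity) head = 0;
        std::size_t off = head;
        head += size;
        return {off, size};
    }
};
```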
[18:54:24] <dyl> So, if I'm understanding this correctly, RendererImpl.deleteResources is used to defer resource cleanup to after a given frame?
[18:54:30] <dyl> i.e. deleteResources.emplace(std::move(buffer));
[18:54:51] <turol> yes
[18:54:56] <dyl> The move constructors I saw all zero/reset the source, so that would effectively reset buffer, while placing the original on a queue for deletion.
[18:55:00] <turol> to make sure a resource is not deleted while the gpu is still using it
[18:55:02] <dyl> That makes a lot of sense.
[18:55:18] <dyl> Resource management is something I'm still struggling to model (lack of experience), so this is really helpful to study.
[18:55:38] <dyl> Unlike my code, this feels idiomatically consistent throughout haha.
[18:55:41] <turol> one thing that could be added is to track the last used frame of a resource
[18:55:54] <turol> so if the gpu is not using it we could delete it immediately
[18:56:12] <turol> but since that demo almost never deletes resources until the end i didn't bother
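The deferred-deletion scheme being discussed can be sketched as a per-frame bucket of pending destructions, assuming a fixed number of frames in flight. All names are made up, not turol's actual API:

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <utility>
#include <vector>

// Resources queued for deletion during frame N are only destroyed once
// the GPU has cycled back around to that slot, i.e. frame N is done.
struct DeferredDeleter {
    std::vector<std::vector<std::function<void()>>> pending;  // one bucket per in-flight frame
    std::size_t frame = 0;

    explicit DeferredDeleter(std::size_t framesInFlight) : pending(framesInFlight) {}

    // Analogue of "deleteResources.emplace(std::move(buffer))": capture
    // the moved-from resource's cleanup in a closure.
    void enqueue(std::function<void()> destroy) {
        pending[frame % pending.size()].push_back(std::move(destroy));
    }

    // Called at the start of a frame, after this slot's fence signalled:
    // everything queued the last time this slot was used is now safe to free.
    void beginFrame() {
        ++frame;
        auto &bucket = pending[frame % pending.size()];
        for (auto &d : bucket) d();
        bucket.clear();
    }
};
```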
[18:56:34] <dyl> This feels a lot like a lightweight garbage collector. deleteResources.emplace(std::move(...)) === mark, then every frame, sweep.
[18:56:55] <turol> no need for that since resources generally don't form cycles
[18:57:00] <turol> or even graphs really
[18:57:18] <dyl> That was one of the reasons I was leaning on UniqueHandle: to free myself from worrying about resource dependencies as much.
[18:57:43] <dyl> But now that I've had more time to work with it, it appears to be far less interconnected than it seemed initially.
[18:58:03] <dyl> And yeah, not that kind of GC, I meant it in a very abstract sense.
[18:58:41] <dyl> Ah hm wait, I did mix something up in my reading of this.
[19:00:33] <dyl> turol So, the renderer itself and each frame both use deleteResources.
[19:01:38] <dyl> A little perplexed by recreateRingBuffer still.
[19:01:54] <dyl> Namely, "create a Buffer object which we can put into deleteResources".
[19:02:19] <turol> normally a ringbuffer doesn't have a normal Buffer
[19:02:28] <turol> its lifetime is managed separately from them
[19:02:41] <dyl> Right, it's more like a pool?
[19:02:44] <turol> but when deleting it we need to defer until gpu is no longer using it
[19:02:51] <turol> not really a pool, there's just the one
[19:02:54] <dyl> So you wrap it up as if it were a normal buffer?
[19:03:13] <turol> so i construct a wrapper for it and then put it on the same deletion path as all other resources
[19:03:25] <turol> no need to special case it
[19:03:34] <dyl> Yes, I thought that was nice.
[19:03:49] <dyl> I may have to steal/adopt some of this resource management strategy as a learning exercise haha.
[19:09:57] <dyl> turol Another thing I'm debating, is whether I should just cave and use exceptions. I generally don't want to because they add more complexity in a language I don't know the ins and outs of very well yet.
[19:10:20] <dyl> And, coming from a more functional background I prefer using something more Either-ish anyhow.
[19:10:27] <dyl> But it ends up being very unwieldy in C++.
[19:10:41] <turol> yes
[19:10:51] <turol> and on linux c++ exceptions are effectively zero cost
[19:12:11] <dyl> Noted.
[19:12:27] <dyl> Why in particular for Linux are they effectively zero cost?
[19:13:14] <turol> it's a modern platform unlike win32
[19:13:21] <turol> google zero-cost exceptions
[19:13:40] <dyl> I’m, to be honest, not even worried about Win at all.
[19:14:09] <turol> win64 is supposed to also be zero-cost
[19:14:17] <turol> i don't care much about windows either
[19:14:31] <turol> as long as it kind of works it's good
[19:14:38] <turol> if it's slow just install linux
[19:14:53] <dyl> The Linux subsystem is kind of amusing.
[19:16:34] <dyl> It’s more for me that I’m in academia, so no one cares about targeting Windows anyways, at least until they are commercially licensing something
[19:17:15] <dyl> Linux > Mac >>> Windows is generally the order of preference, except for a lot of proprietary instrumentation software/drivers in chemistry.
[19:18:29] <Ralith> it's not that hard to have a pleasant Either-ish type in C++17 or later
[19:19:00] <dyl> turol: the situation is largely “everyone uses a Mac for their daily driver, but all of the servers/clusters are Linux and all software is written to run on Linux with Mac compatibility.”
[19:19:04] <Ralith> between explicit operator bool, std::variant, and ref qualifiers
[19:19:48] <dyl> The Windows-only users become pretty uncommon in the life sciences or CS as you go up the totem pole.
[19:20:01] <dyl> Ralith: hm?
[19:20:40] <dyl> ref qualifiers?
[19:22:22] <dyl> Ah I see, so you can apply &/&&/const & to the implicit this?
[19:22:45] <dyl> (A la const qualifier on a method?)
[19:24:04] <dyl> I should really pick up a good book on modern C++ practices.
[19:24:17] <dyl> The extent of my experience with it is mostly a lot of working with LLVM.
[19:26:38] <Ralith> yes, that's how const methods work too
[19:26:42] <Ralith> LLVM 6 actually uses an Either-ish type for error handling
[19:26:48] <Ralith> plus lots of artifice to make it really hard not to process one
[19:27:43] <Ralith> personally I settle for `[[nodiscard]]`
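A rough sketch of an Either-ish type built from the three ingredients Ralith lists — std::variant, explicit operator bool, and ref qualifiers — with `[[nodiscard]]` on top. Assumed names; this is not LLVM's ErrorOr:

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <variant>

// [[nodiscard]] on the class makes ignoring a returned Result a warning.
template <typename T, typename E>
class [[nodiscard]] Result {
    std::variant<T, E> v;
public:
    Result(T t) : v(std::move(t)) {}
    Result(E e) : v(std::move(e)) {}

    // Explicit so a Result can be tested in an if, but not silently
    // converted to bool elsewhere.
    explicit operator bool() const { return v.index() == 0; }

    // Ref qualifiers: calling value() on an rvalue moves the payload out.
    const T &value() const & { return std::get<0>(v); }
    T &&value() &&           { return std::get<0>(std::move(v)); }

    const E &error() const & { return std::get<1>(v); }
};
```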
[19:28:07] *** ratchetfreak <ratchetfreak!~ratchetfr@ptr-82s3g7lgupdkti53lhr.18120a2.ip6.access.telenet.be> has joined ##vulkan
[19:29:16] <dyl> You mean llvm::ErrorOr<T>?
[19:30:31] <Ralith> I forget what exactly it's called but probably
[19:35:38] *** davr0s <davr0s!~textual@host86-153-157-230.range86-153.btcentralplus.com> has joined ##vulkan
[19:39:30] *** Guest4630 <Guest4630!4d3b9548@gateway/web/cgi-irc/kiwiirc.com/ip.77.59.149.72> has quit IRC (Ping timeout: 260 seconds)
[20:04:55] <dyl> turol trying to adapt some of the structure you're using without it feeling like ripping it off haha.
[20:05:18] <dyl> It feels very canonicalized/clean though, so this is tricky.
[20:06:26] <dyl> e.g. the Handle/ResourceContainer setup is very nice.
[20:07:42] <dyl> Also wanted to ask: where do the magic constants MAX_... in Renderer.h come from?
[20:11:10] <turol> arbitrary
[20:12:26] <turol> that's just how much i happened to need and have had no reason to bump them
[20:18:39] <dyl> Makes sense.
[20:18:59] <dyl> turol do you mind me aping a bit of the resource management structure (adapted for my own needs) btw?
[20:19:39] <dyl> More than happy to ensure you're in the acknowledgements and the project is linked. Not sure if it strays into license-reproduction worthy territory as I have a very different use case here.
[20:19:52] <dyl> Something like Handle is pretty hard to write any other way for example haha.
[20:20:02] <dyl> It's just rule-of-5 + eq + bool.
[20:20:10] <turol> MIT license
[20:20:24] <turol> attribution and link would be nice but not required
[20:20:51] <dyl> I like to attribute anyone who even just helps me out anyhow.
[20:21:04] <turol> there are things that could be improved or done differently but i leave them as an exercise for the reader :)
[20:21:27] <dyl> One of the best phrases in the English language.
[20:24:01] <dyl> I have some different requirements that simplify or change a good bit though.
[20:24:23] <dyl> I will pretty much never have to do multiple passes (now), as while reflections and shadows are relevant, they're out of scope for what I'm trying to do right now.
[20:24:46] <dyl> I also don't need to interface across multiple backends.
[20:30:47] *** nsf <nsf!~nsf@jiss.convex.ru> has quit IRC (Quit: WeeChat 2.1)
[20:36:46] *** Nilesh___ <Nilesh___!uid116340@gateway/web/irccloud.com/x-iyzxiwsixkysjymr> has quit IRC (Quit: Connection closed for inactivity)
[20:42:09] *** derhass <derhass!~derhass@ipservice-092-208-158-056.092.208.pools.vodafone-ip.de> has joined ##vulkan
[20:42:17] *** Deluxe <Deluxe!~Deluxe@212.4.150.151> has quit IRC (Remote host closed the connection)
[20:43:46] *** Deluxe <Deluxe!~Deluxe@212.4.150.151> has joined ##vulkan
[21:00:00] *** Guest4630 <Guest4630!4d3b9548@gateway/web/cgi-irc/kiwiirc.com/ip.77.59.149.72> has joined ##vulkan
[21:02:44] <dyl> turol One speculative question (not that I'd want to do so...)
[21:03:05] <dyl> Can definitions like these (with identical bodies) be punned using some STL trickery?
[21:03:29] <dyl> i.e. forward the const qualifier to the return type automatically.
[21:03:38] <turol> not easily
[21:03:48] <turol> preprocessor could do it
[21:03:49] <dyl> I feel like it's possible but certainly not worthwhile.
[21:04:01] <dyl> Something in type_traits perhaps.
[21:04:04] <turol> but it's uglier than without it
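For what it's worth, the classic pre-C++23 answer to dyl's question is Meyers' const_cast delegation idiom: write the const overload once and have the non-const overload forward to it. Illustrative container, not from the code under discussion:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

class IntBag {
    std::vector<int> data{1, 2, 3};
public:
    const int &at(std::size_t i) const { return data[i]; }

    // Non-const overload reuses the const body. The const_cast is safe
    // because we know *this is really non-const here.
    int &at(std::size_t i) {
        return const_cast<int &>(static_cast<const IntBag &>(*this).at(i));
    }
};
```

C++23's deducing `this` removes the need for this trick entirely, but that postdates the standard levels being discussed here.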
[21:04:22] <dyl> Now did I just discover std::ref also :/.
[21:05:16] *** davr0s <davr0s!~textual@host86-153-157-230.range86-153.btcentralplus.com> has quit IRC (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[21:05:32] <dyl> Including some "based on" attributions in some of the more similar bits.
[21:10:07] <dyl> Huh. I hadn't thought to just do template <typename F> rather than have an explicit std::function type.
[21:10:12] <dyl> SFINAE strikes again.
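The difference dyl notices can be sketched as follows (hypothetical function names). Taking a template parameter lets the compiler see and inline the concrete callable; std::function erases the type behind an indirection and may allocate to store it:

```cpp
#include <cassert>
#include <functional>

// Generic version: F is deduced, so each callable gets its own
// instantiation and the call can be inlined.
template <typename F>
int apply_twice(F f, int x) { return f(f(x)); }

// Type-erased version: works with anything callable as int(int), but
// every invocation goes through std::function's indirection.
inline int apply_twice_erased(const std::function<int(int)> &f, int x) {
    return f(f(x));
}
```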
[21:12:02] * dyl attempts to absorb some of turol's experience by osmosis.
[21:13:20] <turol> studying the works of masters is a time-honored way of learning
[21:18:28] <dyl> One might even argue that it's the ur-way of learning.
[21:19:03] *** davr0s <davr0s!~textual@host86-153-157-230.range86-153.btcentralplus.com> has joined ##vulkan
[21:20:52] <dyl> It occurs to me that there is some analogy between using SFINAE as in template <typename F> in C++, and in Haskell keeping type constraints out of GADT constructors and enforcing them on functions operating over that GADT.
[21:21:09] <dyl> "Constraining on the margin" so to speak. I don't have a good word for the commonality.
[21:58:49] *** ImQ009 <ImQ009!~ImQ009@unaffiliated/imq009> has quit IRC (Quit: Leaving)
[22:00:34] *** nsf <nsf!~nsf@jiss.convex.ru> has joined ##vulkan
[22:09:16] *** psychicist__ <psychicist__!~psychicis@5356A22B.cm-6-7c.dynamic.ziggo.nl> has quit IRC (Quit: Lost terminal)
[22:19:40] <dyl> turol hey, one question for ya.
[22:19:52] <dyl> I notice that in a lot of your classes you copy-and-zero for move-assignment/construction.
[22:20:02] <dyl> Why not use the copy-and-swap idiom (and rely on the default constructor)?
[22:20:18] <dyl> (Not criticizing your code in any way, just trying to understand the decision making process!)
[22:22:59] <turol> swap uselessly constructs objects
[22:23:05] <turol> it matters for things like vector
[22:23:19] <turol> and move-and-clear is more explicit
[22:23:26] <turol> it's closer to the real meaning
[22:23:37] <turol> swap is kind of convoluted way of doing it
[22:23:48] <ratchetfreak> clear and swap is exception safe though
[22:24:02] <turol> so is move if your move constructor is noexcept
[22:24:17] <ratchetfreak> which you can implement as clear and swap
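The move-and-clear idiom turol describes, sketched on a hypothetical GPU-handle wrapper (not the actual Buffer class from the renderer): the moved-from object is reset to a null state, so its destructor has nothing left to release.

```cpp
#include <cassert>
#include <cstdint>
#include <utility>

struct Handle {
    uint64_t id = 0;   // 0 == null handle

    Handle() = default;
    explicit Handle(uint64_t i) : id(i) {}

    // Value type owning a resource: copying is forbidden.
    Handle(const Handle &) = delete;
    Handle &operator=(const Handle &) = delete;

    // Move-and-clear: take the payload, zero the source. noexcept so
    // containers like std::vector will actually move on reallocation.
    Handle(Handle &&other) noexcept : id(other.id) { other.id = 0; }
    Handle &operator=(Handle &&other) noexcept {
        if (this != &other) {
            id = other.id;   // real code would release the old `id` first
            other.id = 0;
        }
        return *this;
    }

    // Here a destructor would assert(id == 0), since the Renderer (which
    // holds the vkDevice) is responsible for the actual freeing.
    explicit operator bool() const { return id != 0; }
};
```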
[22:24:49] *** cheakoirccloud <cheakoirccloud!uid293319@gateway/web/irccloud.com/x-uepjkmqwjbxlkuue> has joined ##vulkan
[22:25:00] <dyl> turol that was what I thought.
[22:25:21] *** TSS_ <TSS_!~TSS@2a02:2f0a:4030:1447:d250:99ff:fe83:4a0a> has joined ##vulkan
[22:25:43] <dyl> re: uselessly constructing objects, though I'd suspect optimization would kick in here.
[22:26:43] <ratchetfreak> copy elision can kick in in a lot of places
[22:26:55] <dyl> I appreciate things being explicit though.
[22:27:01] <ratchetfreak> though I'm not entirely sure copy and swap is one of them
[22:27:09] <dyl> copy-and-swap is neat to me but (and this is part of why I bring this up) it seems forced in a lot of cases.
[22:27:45] <turol> it's sometimes useful
[22:27:47] <ratchetfreak> besides most of the time you don't even need to implement a rule of 5 class
[22:28:06] <turol> like keeping a locked region short by first constructing a new vector and then just swapping under the lock
[22:28:12] <turol> no memory allocation in a critical section
[22:28:37] <ratchetfreak> if you only need move only then you can leverage std::unique_ptr with a custom deleter and deleter::pointer
[22:29:14] <turol> unless you want actual value semantics
[22:29:16] <turol> like i did
[22:29:29] <turol> also visual studio doesn't support defaulting move constructor
[22:29:51] <ratchetfreak> but that is value semantics if you use the deleter::pointer mechanism
[22:30:06] <ratchetfreak> it then uses whatever that type is instead of T*
[22:30:19] <turol> not sure i get what you're saying
[22:30:44] <turol> for me the point is making sure the actual objects are in a container instead of just pointer
[22:30:50] <turol> to get less pointer chasing
[22:31:00] <ratchetfreak> you can store a vk object in it directly instead of a pointer to one
[22:32:09] <dyl> I generally really try to *not* rule-of-5 whenever possible, but when it's necessary it's necessary.
[22:32:38] <ratchetfreak> "Unlike std::shared_ptr, std::unique_ptr may manage an object through any custom handle type that satisfies NullablePointer. This allows, for example, managing objects located in shared memory, by supplying a Deleter that defines typedef boost::offset_ptr pointer; or another fancy pointer."
[22:32:43] <turol> yes but my objects also have other stuff in them
[22:32:51] <dyl> Yes.
[22:33:00] <dyl> Like for example Handle carries an extra parameter and is a friend to ResourceContainer.
[22:33:19] <turol> and there are debugging flags in most objects
[22:33:22] <dyl> So it's not really just a clone of std::unique_ptr<uint32_t>.
[22:33:44] <dyl> Also, I'm trying to remember where I did some funky stuff with unique_ptr for an LLVM related project.
[22:33:52] <dyl> I ended up actually using a unique_ptr with a nop'd deleter heh.
[22:34:19] <dyl> (I needed to extend something that worked for pointers to unique_ptr, and needed a sentinel value.)
[22:34:39] <dyl> (but a plain unique_ptr will try to free NULL if you don't nop the deleter)
[22:34:57] <dyl> It felt pretty dirty.
[22:36:10] <ratchetfreak> for example it would be a std::unique_ptr<vkInstance, My::vkInstanceDeleter> and vkInstanceDeleter has a using pointer = vkInstance;
[22:37:08] <ratchetfreak> perhaps not that simple because it needs to comply with NullablePointer
[22:37:22] <ratchetfreak> and get needs to return the actual instance
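The deleter::pointer mechanism ratchetfreak quotes can be sketched with a stand-in handle type. FakeHandle and FakeDeleter are hypothetical; a real deleter would call the matching vkDestroy* function, and the handle must satisfy NullablePointer (default/nullptr constructible, equality comparable, convertible to bool):

```cpp
#include <cassert>
#include <cstddef>
#include <memory>

static int destroyed = 0;  // counts deleter invocations for the demo

struct FakeHandle {
    int id = 0;
    FakeHandle(std::nullptr_t = nullptr) {}  // null state, as NullablePointer requires
    explicit FakeHandle(int i) : id(i) {}
    explicit operator bool() const { return id != 0; }
    friend bool operator==(FakeHandle a, FakeHandle b) { return a.id == b.id; }
    friend bool operator!=(FakeHandle a, FakeHandle b) { return a.id != b.id; }
};

struct FakeDeleter {
    // This alias is the trick: unique_ptr stores a FakeHandle by value
    // instead of a FakeHandle*.
    using pointer = FakeHandle;
    void operator()(FakeHandle h) const { if (h) ++destroyed; }
};

using UniqueFake = std::unique_ptr<FakeHandle, FakeDeleter>;
```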
[22:37:26] <dyl> That seems like what Vulkan.hpp does, but I noticed it’s a little flaky in some cases.
[22:37:40] <dyl> For example when you end up wanting to bundle things together like a buffer and its memory.
[22:37:51] <dyl> It’s a little *too* fine grained.
[22:38:13] <dyl> (I know next to nothing in the grand scheme of things though, just anecdotally what I ran into problem wise.)
[22:40:19] <dyl> Didn’t know you could use custom handle types in unique_ptr.
[22:42:27] <dyl> turol: I guess what I meant by my question before is that it seems like in many cases move-and-clear and copy-and-swap behave identically.
[22:42:40] <dyl> Provided you’re clearing to the default-initialized state.
[22:42:50] <dyl> So the advantage to me would seem to be not duplicating default values.
[22:42:52] <ratchetfreak> clearing to moved-from state
[22:43:22] <ratchetfreak> though most of the time you'll want default init and moved-from to be the same
[22:43:28] <turol> essentially yes
[22:43:35] <turol> then the destructor checks that it's been cleared
[22:43:46] <turol> because it's the responsibility of Renderer to free the resources
[22:44:02] <turol> since it requires access to the VkDevice object
[22:44:21] *** Guest4630 <Guest4630!4d3b9548@gateway/web/cgi-irc/kiwiirc.com/ip.77.59.149.72> has quit IRC (Ping timeout: 240 seconds)
[22:44:59] <dyl> It seems like if you’re using value semantics heavily, destructors often just become a bunch of assertions.
[22:45:26] <turol> yep
[22:46:58] <dyl> Any illustrative counterexamples?
[22:47:33] <ratchetfreak> whenever you have a value that ends up being a nop when freed
[22:47:55] <ratchetfreak> kicking the can down the road as it were
[22:48:56] <dyl> What do you mean?
[22:49:05] <dyl> A value that ends up being a nop?
[22:49:21] <ratchetfreak> free() on nullptr is a nop
[22:49:30] <ratchetfreak> same with delete on a nullptr
[22:49:55] <dyl> Ahh.
[22:50:02] <dyl> Didn’t know those didn’t just fault.
[22:50:06] <ratchetfreak> or size of vector set to 0 so the loop to destruct each value is empty
[22:50:48] <dyl> free(NULL) is also a nop, forgot.
[22:51:09] <dyl> ~monoids~
[22:51:21] *** sla_ro|master <sla_ro|master!~sla.ro@78.96.209.89> has quit IRC ()
[23:12:30] *** nsf <nsf!~nsf@jiss.convex.ru> has quit IRC (Quit: WeeChat 2.1)
[23:24:23] <dyl> turol: Why in your Renderer pimpl pattern do you use a raw ptr?
[23:24:50] <dyl> Wouldn't it work to instead have createRenderer(...) = { return std::make_unique<RendererImpl>(...) }
[23:24:55] <dyl> and then in the destructor just assign nullptr?
[23:25:27] <dyl> It seems like unique_ptr captures the ownership semantic better, I think?
[23:26:35] <dyl> the impl = other.impl / other.impl = nullptr is also implied by impl = std::move(other.impl)
[23:28:52] <dyl> Am I missing some obvious pitfall here?
[23:29:08] <dyl> It seems like it would also simplify the constructor (no body, std::move from initializer list takes care of it)
[23:29:39] <dyl> Ah, it'd be an incomplete type.
[23:29:41] <dyl> :\.
[23:29:46] <dyl> There's the obvious pitfall.
[23:32:06] *** davr0s <davr0s!~textual@host86-153-157-230.range86-153.btcentralplus.com> has quit IRC (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[23:36:31] <dyl> Is that private explicit constructor actually serving any purpose though?
[23:38:36] <dyl> The only place it's used is in createRenderer.
[23:38:45] *** Ralith_ <Ralith_!~ralith@c-24-143-116-108.customer.broadstripe.net> has quit IRC (Ping timeout: 264 seconds)
[23:38:54] <dyl> Seems like using std::unique_ptr saved a few lines, is more semantically clear, and requires one less constructor.
[23:50:10] *** Ralith_ <Ralith_!~ralith@c-24-143-116-108.customer.broadstripe.net> has joined ##vulkan
[23:50:49] *** TSS_ <TSS_!~TSS@2a02:2f0a:4030:1447:d250:99ff:fe83:4a0a> has quit IRC (Quit: ZNC 1.6.2 - http://znc.in)
[23:50:55] *** cramalho <cramalho!~cramalho@179.190.254.211> has joined ##vulkan
[23:52:43] <Ralith> dyl: you can use unique_ptr with incomplete types, you just need to be sure the destructor implementation is generated in the right place
[23:52:54] <dyl> Yeah, just noticed that.
[23:52:58] <dyl> = default in the .cpp file.
[23:53:50] *** TSS_ <TSS_!~TSS@2a02:2f0a:4030:1447:d250:99ff:fe83:4a0a> has joined ##vulkan
[23:54:23] <dyl> Still ending up with some issues though :/.
[23:55:19] <dyl> I'm still getting an instantiation of std::.......::~unique_ptr in my header.
[23:55:22] *** cramalho <cramalho!~cramalho@179.190.254.211> has quit IRC (Client Quit)
[23:55:46] *** cramalho <cramalho!~cramalho@179.190.254.211> has joined ##vulkan
[23:57:29] <dyl> Too bad you can't use using RendererImpl = ...; in the .cpp rather than have to rely on overlapping names with #include.
[23:57:37] <dyl> It'd be nice to be able to treat it more like type instantiation.
[23:58:23] *** bpmedley <bpmedley!~bpm@c-24-72-144-115.ni.gigamonster.net> has joined ##vulkan
[23:58:28] <dyl> e.g. in an #ifdef block do Renderer::RendererImpl = typename VulkanRenderer;
[23:58:30] <dyl> or something like that.
[23:59:20] <dyl> Ralith: do you know of any way to forward-declare a type like class RendererImpl; and then define it in the implementation file?
[23:59:21] <dyl> Would be handy.
[23:59:36] <dyl> In other words, anything that includes the header doesn't need to know what's behind the pimpl.
[23:59:46] <dyl> But it's resolved in the implementation file at compile time.
[23:59:59] <Ralith> dyl: you need to explicitly declare the destructor in the class definition so that a definition isn't automatically emitted