All of the resources I can find on render graph implementations pertain to low-level graphics APIs, which I only have a basic knowledge of. The main performance features a render graph seems to offer are:
- Usage of command buffers: WebGL doesn't support command buffers. It looks like WebGPU does, but I don't know whether Three.js would expose that level of control in the future.
- Minimize and batch resource barriers: there are no exposed controls for this in Three.js, and frankly it's outside my experience anyway.
- Optimize resource memory (aliasing): this is the main area where I can see optimization happening, by reusing render targets instead of reallocating/deallocating them in each pass I have (see the sketch below). However, I'm not sure how marginal the benefit would be.
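To make the reuse idea concrete, here's a minimal sketch of what I mean, assuming a hypothetical `RenderTargetPool` with an acquire/release API (none of this exists in three.js, and a real pool would also key on format, type, and depth settings):

```ts
import * as THREE from 'three';

// Minimal sketch (hypothetical API): a pool that hands out render targets
// keyed by size so that passes reuse GPU allocations instead of creating and
// disposing a target every frame.
class RenderTargetPool {
  private free = new Map<string, THREE.WebGLRenderTarget[]>();

  acquire(width: number, height: number): THREE.WebGLRenderTarget {
    const key = `${width}x${height}`;
    const cached = this.free.get(key)?.pop();
    if (cached) return cached;                         // reuse an existing allocation
    return new THREE.WebGLRenderTarget(width, height); // allocate only on a miss
  }

  release(target: THREE.WebGLRenderTarget): void {
    const key = `${target.width}x${target.height}`;
    const bucket = this.free.get(key) ?? [];
    bucket.push(target);                               // keep the GPU memory alive for later passes
    this.free.set(key, bucket);
  }

  dispose(): void {
    for (const bucket of this.free.values()) bucket.forEach((t) => t.dispose());
    this.free.clear();
  }
}

// Usage inside a pass: acquire at the start, release at the end, instead of
// constructing and disposing a THREE.WebGLRenderTarget per frame.
```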
Currently I have an application that relies on EffectComposer to manage my passes, but I'm beginning to run into dependency and modularity issues between all of my possible passes. That's why I thought a render graph could be a better design for managing render scheduling.
However, the complexity of my render passes may not be high enough to warrant the overhead a render graph would add, which is why I want to better understand the performance benefits I might gain from one. I'm sure the render graph design itself will help me manage my render passes, so I'd like to focus on the performance features here (correct me if I'm wrong).
I would like to have access to glFenceSync somehow, maybe in the form of an "onCached" callback on meshes/textures/shaders…
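Nothing like that exists in three.js today, but with a WebGL2 context you can approximate it yourself using fenceSync/clientWaitSync. A rough sketch (the helper name is made up, and it only tells you that all previously issued GPU commands have finished, not that one specific resource was cached):

```ts
// Sketch only: insert a fence after the first frame that references a new
// mesh/texture/shader, then poll it. When the fence signals, every command
// issued before it (including the uploads/compiles that frame triggered)
// has completed on the GPU. The helper name is hypothetical.
function whenGpuWorkFinished(gl: WebGL2RenderingContext): Promise<void> {
  const sync = gl.fenceSync(gl.SYNC_GPU_COMMANDS_COMPLETE, 0);
  if (!sync) return Promise.reject(new Error('fenceSync failed'));
  gl.flush(); // make sure the fence is actually submitted before polling

  return new Promise((resolve) => {
    const poll = () => {
      const status = gl.clientWaitSync(sync, 0, 0); // non-blocking check
      if (status === gl.ALREADY_SIGNALED || status === gl.CONDITION_SATISFIED) {
        gl.deleteSync(sync);
        resolve();
      } else {
        requestAnimationFrame(poll);
      }
    };
    poll();
  });
}

// Usage after a render that first touches the new resources:
// renderer.render(scene, camera);
// whenGpuWorkFinished(renderer.getContext() as WebGL2RenderingContext)
//   .then(() => { /* roughly an "onCached" notification */ });
```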
As far as render targets go, generally you should be reusing the same render targets in any passes you're doing, yeah? Just make sure you have a resize method on them…
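e.g. something along these lines (just a sketch, variable names are illustrative):

```ts
import * as THREE from 'three';

// One shared render target reused by every pass, resized on demand instead of
// being recreated.
const sharedTarget = new THREE.WebGLRenderTarget(window.innerWidth, window.innerHeight);

window.addEventListener('resize', () => {
  // setSize() only reallocates the underlying texture when the dimensions
  // actually change, so passes can keep holding the same object.
  sharedTarget.setSize(window.innerWidth, window.innerHeight);
});

// In each pass: render into the shared target, then sample its texture.
// renderer.setRenderTarget(sharedTarget);
// renderer.render(passScene, passCamera);
// renderer.setRenderTarget(null);
// material.uniforms.tDiffuse.value = sharedTarget.texture;
```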
Each pass I have is more or less isolated from the others, so any temporary render targets inside them are never reused. The render graph would have a resource-management system that analyzes the inputs and outputs declared by each pass and optimizes memory usage.
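To sketch what I mean (all the names here, RenderGraph, addPass, reads/writes, are placeholders I'm making up, not an existing API): each pass declares the logical resources it reads and writes, the graph orders the passes from those declarations, and the same declarations would drive the reuse decisions.

```ts
import * as THREE from 'three';

// Hypothetical render-graph skeleton: passes declare reads/writes by logical
// name, and compile() orders them so producers run before consumers.
interface PassDesc {
  name: string;
  reads: string[];   // logical resources consumed by the pass
  writes: string[];  // logical resources produced by the pass
  execute: (targets: Map<string, THREE.WebGLRenderTarget>) => void;
}

class RenderGraph {
  private passes: PassDesc[] = [];

  addPass(pass: PassDesc): void {
    this.passes.push(pass);
  }

  // externalInputs: resources that exist before the graph runs (e.g. the
  // scene color buffer). Very simplified: assumes the declarations are acyclic.
  compile(externalInputs: string[] = []): PassDesc[] {
    const produced = new Set<string>(externalInputs);
    const pending = [...this.passes];
    const ordered: PassDesc[] = [];
    while (pending.length > 0) {
      const i = pending.findIndex((p) => p.reads.every((r) => produced.has(r)));
      if (i === -1) throw new Error('cyclic or missing dependency');
      const next = pending.splice(i, 1)[0];
      next.writes.forEach((w) => produced.add(w));
      ordered.push(next);
    }
    return ordered;
  }
}
```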
The benefits are probably huge, as outlined in that paper. However, it has been used and developed by DICE, which has a large team of developers and makes AAA games. I think it would be a daunting task to write such an engine from scratch as a single person, especially one who doesn't understand these low-level graphics concepts. It's probably overkill for what I imagine is a web experience?
OK, this would definitely be good. Optimizing memory usage would leave more memory available for other things, and could well be faster than leaving it unoptimized.
Yes, I don't imagine my render graph would be nearly as complicated and robust as what DICE outlines. While it may be overkill, I do need to refactor the way I manage render passes to be more modular, because it's becoming quite monolithic and hard to change. My application is performance-sensitive, so the faster it can be the better. If there are any other designs I could use to manage the complexity, I'd be interested in learning about them.
As for the memory aliasing, I don't think it would do anything to save on memory consumption. Rather, it would decrease the overhead of reallocating/deallocating resources like render targets between passes by analyzing whether they can be reused in later passes. But again, I'm not sure how marginal the performance difference would be, given that I'd have to compute the aliasing on top of everything else the render graph has to do.
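Concretely, the analysis I picture works over the compiled pass order: compute each logical resource's first and last use, then hand a freed physical target with a matching descriptor to the next resource whose lifetime starts after that. A rough sketch (all names hypothetical):

```ts
// Hypothetical lifetime analysis: resources whose pass-index lifetimes don't
// overlap and whose descriptors match get assigned the same physical target.
interface ResourceUse {
  key: string;       // descriptor key, e.g. "1920x1080:RGBA"
  firstPass: number; // index of the first pass that uses the resource
  lastPass: number;  // index of the last pass that uses it
}

function assignPhysicalTargets(uses: ResourceUse[]): Map<ResourceUse, number> {
  const assignment = new Map<ResourceUse, number>();
  const freeByKey = new Map<string, number[]>(); // released physical targets per key
  const active: { use: ResourceUse; physical: number }[] = [];
  let nextPhysical = 0;

  for (const use of [...uses].sort((a, b) => a.firstPass - b.firstPass)) {
    // Release targets whose owning resource is no longer needed by the time
    // this resource comes alive.
    for (let i = active.length - 1; i >= 0; i--) {
      if (active[i].use.lastPass < use.firstPass) {
        const done = active.splice(i, 1)[0];
        const bucket = freeByKey.get(done.use.key) ?? [];
        bucket.push(done.physical);
        freeByKey.set(done.use.key, bucket);
      }
    }
    // Reuse a compatible freed target if one exists, otherwise allocate a new one.
    const physical = freeByKey.get(use.key)?.pop() ?? nextPhysical++;
    assignment.set(use, physical);
    active.push({ use, physical });
  }
  return assignment;
}
```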