Add isReady(), remove create*PipelineAsync() #2583
Conversation
This proposal (and the idea quoted, and the doc linked) seems to prioritize theoretical purity above everything else. This proposal is a huge and late structural change, and such changes always come with the risk that we find more problems with them later, whereas the current structure is fairly mature and its issues are known. But beyond "stop changing the spec", I think this has major issues.
Unless there are extraordinary things that can't be fixed otherwise, I don't think we should pursue this direction for v1. (and really, special casing …
I think one thing to consider here is still making this change, but having it ONLY allow GPUComputePipeline and GPURenderPipeline; that way we have a surface for adding this to other objects in the future if we want. More importantly, it solves the problem of having to choose ahead of time between synchronous and asynchronous pipeline creation, which I think is real (#2085).
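Purely as an illustration of what that restricted surface could look like (the isReady name comes from the PR title, but the exact shape, method vs. attribute and its promise semantics, is an assumption here, as are device, the descriptors, and pass):

```js
// Hypothetical sketch, not spec text: only pipelines expose a readiness promise.
const computePipeline = device.createComputePipeline(computeDescriptor);
const renderPipeline = device.createRenderPipeline(renderDescriptor);

// Wait until both compilations have finished before first use.
await Promise.all([computePipeline.isReady(), renderPipeline.isReady()]);

// Other object types (buffers, textures, shader modules, ...) would not get
// isReady() in this variant, leaving room to add it later if it proves useful.
```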
Agreed. We don't have to generalize this for the entire API if we don't want to. We could do …
Isn't that OK? Implementations are expected to cache pipeline compilation results, and sometimes you might end up compiling things twice because you decided late that you need a pipeline; so again, that's OK? (Plus implementations can optimize even that case down to a single pipeline compilation.)
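To illustrate the caching argument (this assumes implementations de-duplicate identical pipeline descriptors, which, as noted below, is not something the spec currently guarantees):

```js
// Kick off an async compile early so the work overlaps with other setup.
const pipelinePromise = device.createRenderPipelineAsync(descriptor);

// Later, if the pipeline turns out to be needed before that promise resolves,
// create it synchronously from the same descriptor. With descriptor-level
// caching in the implementation, this ideally does not compile a second time.
const pipeline = device.createRenderPipeline(descriptor);
```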
You would need at least shader modules in addition to pipelines.
If that's an expectation that we want developers to rely on, it should be in the spec, we should be writing tests for it to the best of our ability, and we should be advertising it loudly. Otherwise, as a developer, I won't know if using … This ambiguity will cause developers to avoid using … The benefit of the …
Can you do anything with a shader module besides create a pipeline with it or get the compilation messages (already an async operation)? I would expect that …
What about a proposal in between?

```js
const pipeline = device.createRenderPipeline(descriptor);

// syntax up for bikeshedding..
const [pipeline, isReady] = device.createRenderPipelineAsync(descriptor);
await isReady;
```
The implementation still has to make sure that asynchronously created pipelines check a mutex whenever the pipeline is used, but it would only need to happen for pipelines created using that path (so guarded by a branch or virtual method call). This is still a fair amount of implementation complexity to manage.
I think this is a solid option.
I'm not sure this is a good invariant to uphold. Yes, it's clear that it's happening now, but is it going to block a bunch of unrelated stuff before I use the pipeline? That seems bad, because naively I'd try to hoist my pipeline creation as early as possible so the latency can be hidden by other commands. Having it be "synchronous" (on the device timeline) means that doesn't work.
We can also just set a boolean once the compilation is completed so that even async pipelines don't have to check the mutex after they're done. Not sure this saves anything, though, since it may still have to be an atomic load.
I agree, I don't think it's needed for createShaderModule. To solve #2119 we only have to get createShaderModule off the "main thread", which I don't think necessarily requires this.
I talked with @Kangz this morning for a while about complexity concerns from this and other API changes in the pipeline. One point of complexity that I was unaware of is that in D3D12 and Vulkan pipeline creation is synchronous, and APIs that want it to not be a blocking operation are expected to manage their own threading. Thus one of the benefits of the current split sync/async design is that the synchronous version doesn't have to spin up a new thread for every pipeline unless that's specifically requested via the async method. This definitely adds to the complexity of implementing the …

Also, while I think it can't be avoided in some cases, I do fundamentally agree with the principle that we should be avoiding large-scale changes to the API at this stage, as we get closer to shipping. I think the above tips the scales for me on whether I consider this worth doing prior to shipping or not. I do think we can and should make strong guarantees (not non-normative notes) in the spec about the caching/syncing behavior of duplicate pipelines, which is probably the most viable route at this point.
That's an intriguing compromise! I think the only reason I'd be wary of it is that in JavaScript, unlike languages like Python, returning tuples from browser APIs is fairly uncommon and thus may catch some developers by surprise. Specifically, I think you'd see a lot of: … And the error is going to come not from the … That's not to say it's not a viable approach, it's just the primary concern I have.
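As a hedged illustration of the kind of mistake being described (assuming the tuple-returning variant sketched above; the exact example in the comment may differ):

```js
// Treating the return value as the pipeline itself instead of destructuring
// the [pipeline, isReady] pair:
const pipeline = device.createRenderPipelineAsync(descriptor);

// `pipeline` is actually the whole pair here, so the failure surfaces later,
// at the point of use, rather than at the creation call.
pass.setPipeline(pipeline);
```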
Yeah, I was thinking about that and would probably do something slightly different (also because WebIDL can't express tuples), like: …
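Purely as an illustration (not necessarily the shape this comment had in mind), WebIDL-friendly alternatives to a tuple could be a dictionary-style return value or a readiness promise on the pipeline object itself; all names here are hypothetical:

```js
// (a) A dictionary-like return value with named members.
const { pipeline, ready } = device.createRenderPipelineAsync(descriptor);
await ready;

// (b) A readiness promise exposed on the pipeline itself.
const otherPipeline = device.createRenderPipeline(descriptor);
await otherPipeline.isReady();
```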
I think this reasoning is flawed. Implementations never have to spin up any threads, even for the async method implementations. I like …
After some further discussion by the editors, we think that while there's still a usability issue here in terms of pipeline creation, this particular proposal isn't the way to solve it. Closing, though we expect additional discussion on the subject of pipeline creation ergonomics, possibly to be implemented post-v1.
A design that the editors have been discussing to resolve #2085 and some of #2119. Specifically, this design allows developers both to wait for an object (say, a pipeline) to be ready AND, if needed, to pre-emptively use it anyway before the promise resolves, accepting the stall that may incur.
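A hedged sketch of that usage pattern, assuming the isReady() surface implied by the PR title (exact naming, and whether it is a method or an attribute, are assumptions; canWait is a placeholder for application logic):

```js
const pipeline = device.createRenderPipeline(descriptor);

if (canWait) {
  // Wait for compilation to finish before first use.
  await pipeline.isReady();
}
// Otherwise use the pipeline right away and accept that the first draw or
// dispatch may stall until compilation completes.
pass.setPipeline(pipeline);
```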
This also prevents having to add ...Async() variants of every possible object in the future if we decide that we need them down the line.

One thing that's not explicitly mentioned in this CL but probably should be is an underlying assumption that the spec begins treating all GPU object creation as scheduling tasks. As Kai describes in https://hackmd.io/@webgpu/r1mBAEGCK#What-if-we-try-to-change-as-little-as-possible :