API lacks handling for async ML device errors on the context #477

@bbernhar

Description

What happens if a WebNN operation dispatched through MLContext encounters an internal error that causes the GPU device to be removed?

I would expect the WebNN spec to define how fatal (device) errors are handled so that WebNN developers can respond appropriately. If we want to do more with MLContext (e.g., creating buffers), I believe we'll need a more robust error mechanism like WebGPU's [1].

[1] https://www.w3.org/TR/webgpu/#errors-and-debugging
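For comparison, WebGPU surfaces fatal device errors through the `GPUDevice.lost` promise (and recoverable ones through error scopes and the `uncapturederror` event). Below is a minimal sketch of that pattern alongside a *hypothetical* analogue for WebNN; note that `context.lost` on `MLContext` is an assumption for illustration, not something the current spec defines:

```javascript
// How WebGPU exposes fatal device errors today:
function watchGpuDevice(device) {
  // `device.lost` is a real WebGPU API: a promise that resolves with
  // a GPUDeviceLostInfo once the device is removed.
  device.lost.then((info) => {
    console.warn(`GPU device lost: ${info.reason ?? 'unknown'} - ${info.message}`);
    // The application would recreate the device and its resources here.
  });
}

// A hypothetical analogue for WebNN, as this issue proposes.
// `context.lost` is NOT in the WebNN spec -- it is sketched here only
// to show what a WebGPU-style mechanism might look like on MLContext.
function watchMlContext(context, onLost) {
  if ('lost' in context) {
    context.lost.then((info) => onLost(info));
  } else {
    // Today there is no spec'd signal: dispatched operations can only
    // reject individually, with no way to observe device removal.
    onLost({ reason: 'unknown', message: 'no device-loss API available' });
  }
}
```

Without something like this, an app has no portable way to distinguish a one-off operation failure from a removed device that invalidates the whole context.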
