
Remove descriptions of outdated MLContext requirements#786

Closed
a-sully wants to merge 1 commit into webmachinelearning:main from a-sully:compute-does-not-depend-on-context-options

Conversation

@a-sully
Contributor

@a-sully a-sully commented Nov 14, 2024

These statements are no longer accurate:

  • an MLContext created with the "gpu" MLDeviceType does in fact accept inputs which are not ArrayBufferViews
  • compute() does not throw if the MLContext is not created with MLContextOptions

Preview | Diff


<div class="note">
- When the {{MLContext/[[contextType]]}} is set to [=context type/default=] with the {{MLContextOptions}}.{{MLContextOptions/deviceType}} set to {{MLDeviceType/"gpu"}}, the user agent is responsible for creating an internal GPU device that operates within the context and is capable of ML workload submission on behalf of the calling application. In this setting however, only {{ArrayBufferView}} inputs and outputs are allowed in and out of the graph execution since the application has no way to know what type of internal GPU device is being created on their behalf. In this case, the user agent is responsible for automatic uploads and downloads of the inputs and outputs to and from the GPU memory using this said internal device.
+ When the {{MLContext/[[contextType]]}} is set to [=context type/default=] with the {{MLContextOptions}}.{{MLContextOptions/deviceType}} set to {{MLDeviceType/"gpu"}}, the user agent is responsible for creating an internal GPU device that operates within the context and is capable of ML workload submission on behalf of the calling application.
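A minimal sketch of the behavior the note describes. The WebNN calls themselves (`navigator.ml`, `createContext()`, `compute()`) require a WebNN-capable browser and are shown as comments; only the typed-array setup below actually runs outside one. The graph and operand names are hypothetical.

```javascript
// Hedged sketch, assuming a browser with WebNN support.
//
// The user agent creates an internal GPU device for this context; the
// application never learns which device was chosen on its behalf:
//   const context = await navigator.ml.createContext({ deviceType: 'gpu' });
//
// Graph inputs and outputs are passed as plain ArrayBufferViews; the user
// agent handles uploads to and downloads from GPU memory automatically:
const inputBuffer = new Float32Array([0.5, 1.5, 2.5, 3.5]);
const outputBuffer = new Float32Array(4);
//
//   const results = await context.compute(
//     graph, { x: inputBuffer }, { y: outputBuffer });

console.log(inputBuffer.byteLength); // 16 (4 floats x 4 bytes each)
```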
Contributor Author

FYI I'm not touching the rest of this paragraph since it relates to #749, which is a can of worms I don't want this PR to open!

Contributor


Should we remove it together with the compute() method? compute() only takes ArrayBufferViews.

Contributor Author


We certainly could. I just figured it would be nice to chunk off bits like this which haven't been true for a while 🤷 Happy to abandon this PR if you'd prefer to remove this alongside compute()



ISSUE: {{MLContext/compute()}} will be deprecated and removed in favor of <code>[dispatch()](https://github.com/webmachinelearning/webnn/blob/main/mltensor-explainer.md#compute-vs-dispatch)</code>.

- Asynchronously carries out the computational workload of a compiled graph {{MLGraph}} on a separate timeline, either on a worker thread for the CPU execution, or on a GPU/NPU timeline for submitting a workload onto the command queue. The asynchronous nature of this call avoids blocking the calling thread while the computation for result is ongoing. This method of execution requires an {{MLContext}} created with {{MLContextOptions}}. Otherwise, it [=exception/throws=] an "{{OperationError}}" {{DOMException}}.
+ Asynchronously carries out the computational workload of a compiled graph {{MLGraph}} on a separate timeline, either on a worker thread for the CPU execution, or on a GPU/NPU timeline for submitting a workload onto the command queue. The asynchronous nature of this call avoids blocking the calling thread while the computation for result is ongoing.
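For context on the ISSUE above, a hedged sketch contrasting the deprecated compute() path with the dispatch() path from the MLTensor explainer. These calls require a WebNN-capable browser, so they appear only as comments; the operand names and the `readTensor()` step follow the explainer but are assumptions here.

```javascript
// Hedged comparison sketch, assuming a browser with WebNN support.
//
// compute(): takes and returns ArrayBufferViews and resolves once the
// results have been written back:
//   const { outputs } = await context.compute(
//     graph, { x: inputView }, { y: outputView });
//
// dispatch(): queues execution against MLTensors; reading results back
// is a separate, explicit step:
//   context.dispatch(graph, { x: inputTensor }, { y: outputTensor });
//   const resultBytes = await context.readTensor(outputTensor);

// Outside a browser, we can at least show the shape of the named
// input record both methods share:
const namedInputs = { x: new Float32Array([1, 2, 3]) };
console.log(Object.keys(namedInputs)); // [ 'x' ]
```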
Contributor


Same here: should we remove it together with the compute() method? The compute() method doesn't support WebGPU interop; if the context is created from a GPUDevice, I suppose it should throw.

@a-sully
Contributor Author

a-sully commented Nov 27, 2024

Closing this PR in favor of making these changes while removing compute() (I'll put up a PR soon)

@a-sully a-sully closed this Nov 27, 2024
@a-sully a-sully deleted the compute-does-not-depend-on-context-options branch November 27, 2024 02:11
