Remove descriptions of outdated MLContext requirements #786

a-sully wants to merge 1 commit into webmachinelearning:main
Conversation
```diff
 <div class="note">
-When the {{MLContext/[[contextType]]}} is set to [=context type/default=] with the {{MLContextOptions}}.{{MLContextOptions/deviceType}} set to {{MLDeviceType/"gpu"}}, the user agent is responsible for creating an internal GPU device that operates within the context and is capable of ML workload submission on behalf of the calling application. In this setting however, only {{ArrayBufferView}} inputs and outputs are allowed in and out of the graph execution since the application has no way to know what type of internal GPU device is being created on their behalf. In this case, the user agent is responsible for automatic uploads and downloads of the inputs and outputs to and from the GPU memory using this said internal device.
+When the {{MLContext/[[contextType]]}} is set to [=context type/default=] with the {{MLContextOptions}}.{{MLContextOptions/deviceType}} set to {{MLDeviceType/"gpu"}}, the user agent is responsible for creating an internal GPU device that operates within the context and is capable of ML workload submission on behalf of the calling application.
```
FYI I'm not touching the rest of this paragraph since it relates to #749, which is a can of worms I don't want this PR to open!
Should we remove it together with compute() method? Compute() only takes ArrayBufferViews.
We certainly could. I just figured it would be nice to chunk off bits like this which haven't been true for a while 🤷 Happy to abandon this PR if you'd prefer to remove this alongside compute()
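For context, here is a minimal sketch of the execution path the removed sentences described: {{ArrayBufferView}} inputs and outputs on a context created with `MLContextOptions`, with the user agent handling GPU uploads and downloads automatically. The browser-only WebNN calls are shown as comments (they require a WebNN-enabled user agent), and the `packInputs` helper is hypothetical, not part of the API.

```javascript
// Hypothetical helper: package named inputs as ArrayBufferViews, the only
// I/O form the (now-outdated) note said graph execution accepts on a
// "gpu" context created via MLContextOptions.
function packInputs(values) {
  const inputs = {};
  for (const [name, data] of Object.entries(values)) {
    inputs[name] = new Float32Array(data);
  }
  return inputs;
}

// Browser-only portion (requires a WebNN-enabled user agent):
// const context = await navigator.ml.createContext({ deviceType: "gpu" });
// const builder = new MLGraphBuilder(context);
// ...build and compile a graph...
// Per the removed note, the user agent would upload/download to and from
// the internal GPU device on the application's behalf:
// const { outputs } = await context.compute(
//     graph, packInputs({ x: [1, 2, 3, 4] }), { y: new Float32Array(4) });
```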
```diff
 ISSUE: {{MLContext/compute()}} will be deprecated and removed in favor of <code>[dispatch()](https://github.com/webmachinelearning/webnn/blob/main/mltensor-explainer.md#compute-vs-dispatch)</code>.
-Asynchronously carries out the computational workload of a compiled graph {{MLGraph}} on a separate timeline, either on a worker thread for the CPU execution, or on a GPU/NPU timeline for submitting a workload onto the command queue. The asynchronous nature of this call avoids blocking the calling thread while the computation for result is ongoing. This method of execution requires an {{MLContext}} created with {{MLContextOptions}}. Otherwise, it [=exception/throws=] an "{{OperationError}}" {{DOMException}}.
+Asynchronously carries out the computational workload of a compiled graph {{MLGraph}} on a separate timeline, either on a worker thread for the CPU execution, or on a GPU/NPU timeline for submitting a workload onto the command queue. The asynchronous nature of this call avoids blocking the calling thread while the computation for result is ongoing.
```
Same here, should we remove it together with the compute() method? The compute() method doesn't support WebGPU interop; if the context is created from a GPUDevice, I suppose it should throw.
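A sketch of the guard the comment above suggests. This is purely illustrative: the helper name is hypothetical, the real spec tracks the context type via the [[contextType]] internal slot, and (as noted elsewhere in this PR) implementations do not actually throw here today.

```javascript
// Hypothetical sketch of the validation discussed above: compute() was
// specified to require a context created with MLContextOptions (context
// type "default"), and the suggestion is that it should throw for
// contexts created from a GPUDevice (context type "webgpu").
function assertComputeAllowed(contextType) {
  if (contextType !== "default") {
    // In the spec's terms this would be an "OperationError" DOMException.
    throw new Error(
      "OperationError: compute() requires a context created with MLContextOptions");
  }
}
```

With this check, `assertComputeAllowed("default")` passes silently while `assertComputeAllowed("webgpu")` throws, mirroring the behavior the removed sentence described.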
Closing this PR in favor of making these changes while removing compute().
These statements are no longer accurate:

- An `MLContext` created with the `"gpu"` `MLDeviceType` does in fact accept inputs which are not `ArrayBufferView`s
- `compute()` does not throw if the `MLContext` is not created with `MLContextOptions`