1. Introduction
This section is non-normative.
Graphics Processing Units, or GPUs for short, have been essential in enabling rich rendering and computational applications in personal computing. WebGPU is an API that exposes the capabilities of GPU hardware for the Web. The API is designed from the ground up to efficiently map to (post-2014) native GPU APIs. WebGPU is not related to WebGL and does not explicitly target OpenGL ES.
WebGPU sees physical GPU hardware as GPUAdapters. It provides a connection to an adapter via
GPUDevice, which manages resources, and the device’s GPUQueues, which execute commands.
GPUDevice may have its own memory with high-speed access to the processing units.
GPUBuffer and GPUTexture are the physical resources backed by GPU memory.
GPUCommandBuffer and GPURenderBundle are containers for user-recorded commands.
GPUShaderModule contains shader code. The other resources,
such as GPUSampler or GPUBindGroup, configure the way physical resources are used by the GPU.
GPUs execute commands encoded in GPUCommandBuffers by feeding data through a pipeline,
which is a mix of fixed-function and programmable stages. Programmable stages execute
shaders, which are special programs designed to run on GPU hardware.
Most of the state of a pipeline is defined by
a GPURenderPipeline or a GPUComputePipeline object. The state not included
in these pipeline objects is set during encoding with commands,
such as beginRenderPass() or setBlendConstant().
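The object graph described above looks like the following in JavaScript (a browser-only sketch assuming a WebGPU-capable user agent; error handling is omitted and the recorded command stream is left empty for brevity):

```javascript
// Browser-only sketch; requires a user agent with WebGPU support.
const adapter = await navigator.gpu.requestAdapter(); // physical GPU, seen as a GPUAdapter
const device = await adapter.requestDevice();         // GPUDevice: manages resources
const queue = device.queue;                           // GPUQueue: executes commands

const encoder = device.createCommandEncoder();        // record commands into...
const commandBuffer = encoder.finish();               // ...a GPUCommandBuffer
queue.submit([commandBuffer]);                        // hand it to the queue for execution
```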
2. Malicious use considerations
This section is non-normative. It describes the risks associated with exposing this API on the Web.
2.1. Security Considerations
The security requirements for WebGPU are the same as for the rest of the Web platform, and are likewise non-negotiable. The general approach is to strictly validate all commands before they reach the GPU, ensuring that a page can only work with its own data.
2.1.1. CPU-based undefined behavior
A WebGPU implementation translates the workloads issued by the user into API commands specific to the target platform. Native APIs specify the valid usage for the commands (for example, see vkCreateDescriptorSetLayout) and generally don’t guarantee any outcome if the valid usage rules are not followed. This is called "undefined behavior", and it can be exploited by an attacker to access memory they don’t own, or force the driver to execute arbitrary code.
In order to disallow insecure usage, the range of allowed WebGPU behaviors is defined for any input.
An implementation has to validate all the input from the user and only reach the driver
with the valid workloads. This document specifies all the error conditions and handling semantics.
For example, specifying the same buffer with intersecting ranges in both "source" and "destination"
of copyBufferToBuffer() results in GPUCommandEncoder
generating an error, and no other operation occurring.
See § 22 Errors & Debugging for more information about error handling.
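The overlap condition behind this validation can be sketched as a plain JavaScript predicate (an illustrative model; `copyWouldOverlap` is a hypothetical name, not part of the API):

```javascript
// Hypothetical sketch of the overlap check an implementation might perform
// when validating copyBufferToBuffer(src, srcOffset, dst, dstOffset, size).
function copyWouldOverlap(srcBuffer, srcOffset, dstBuffer, dstOffset, size) {
  if (srcBuffer !== dstBuffer) return false; // distinct buffers never overlap
  // Ranges [srcOffset, srcOffset+size) and [dstOffset, dstOffset+size)
  // intersect exactly when each starts before the other ends.
  return srcOffset < dstOffset + size && dstOffset < srcOffset + size;
}
```

When this predicate holds for a same-buffer copy, the encoder generates an error and no copy occurs.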
2.1.2. GPU-based undefined behavior
WebGPU shaders are executed by the compute units inside GPU hardware. In native APIs,
some of the shader instructions may result in undefined behavior on the GPU.
In order to address that, the shader instruction set and its defined behaviors are
strictly defined by WebGPU. When a shader is provided to createShaderModule(),
the WebGPU implementation has to validate it
before doing any translation (to platform-specific shaders) or transformation passes.
2.1.3. Uninitialized data
Generally, allocating new memory may expose the leftover data of other applications running on the system. In order to address that, WebGPU conceptually initializes all the resources to zero, although in practice an implementation may skip this step if it sees the developer initializing the contents manually. This includes variables and shared workgroup memory inside shaders.
The precise mechanism of clearing the workgroup memory can differ between platforms. If the native API does not provide facilities to clear it, the WebGPU implementation transforms the compute shader to first do a clear across all invocations, synchronize them, and continue executing developer’s code.
Note: The initialization status of a resource used in a queue operation can only be known when the operation is enqueued (not when it is encoded into a command buffer, for example). Therefore, some implementations will require an unoptimized late-clear at enqueue time (for example, clearing a texture rather than changing GPULoadOp "load" to "clear").
As a result, all implementations should issue a developer console warning about this potential performance penalty, even if there is no penalty in that implementation.
2.1.4. Out-of-bounds access in shaders
Shaders can access physical resources either directly
(for example, as a "uniform" GPUBufferBinding), or via texture units,
which are fixed-function hardware blocks that handle texture coordinate conversions.
Validation in the WebGPU API can only guarantee that all the inputs to the shader are provided and have the correct usage and types. The WebGPU API cannot guarantee that the data is accessed within bounds if the texture units are not involved.
In order to prevent the shaders from accessing GPU memory an application doesn’t own, the WebGPU implementation may enable a special mode (called "robust buffer access") in the driver that guarantees that the access is limited to buffer bounds.
Alternatively, an implementation may transform the shader code by inserting manual bounds checks.
When this path is taken, the out-of-bounds checks only apply to array indexing. They aren’t needed for plain field access of shader structures due to the minBindingSize validation on the host side.
If the shader attempts to load data outside of physical resource bounds, the implementation is allowed to:
- return a value at a different location within the resource bounds
- return a value vector of "(0, 0, 0, X)" with any "X"
- partially discard the draw or dispatch call
If the shader attempts to write data outside of physical resource bounds, the implementation is allowed to:
- write the value to a different location within the resource bounds
- discard the write operation
- partially discard the draw or dispatch call
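One permitted policy in both cases, redirecting the access to a different location within the resource bounds, can be modeled by clamping the index (an illustrative sketch, not a spec algorithm):

```javascript
// Illustrative model of one permitted out-of-bounds policy: clamp the index
// so the access lands at a different, in-bounds location.
function robustLoad(array, index) {
  const clamped = Math.min(Math.max(index, 0), array.length - 1);
  return array[clamped];
}
```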
2.1.5. Invalid data
When uploading floating-point data from CPU to GPU, or generating it on the GPU, we may end up with a binary representation that doesn’t correspond to a valid number, such as infinity or NaN (not-a-number). The GPU behavior in this case is subject to the accuracy of the GPU hardware implementation of the IEEE-754 standard. WebGPU guarantees that introducing invalid floating-point numbers will only affect the results of arithmetic computations and will not have other side effects.
2.1.6. Driver bugs
GPU drivers are subject to bugs like any other software. If a bug occurs, an attacker could possibly exploit the incorrect behavior of the driver to get access to unprivileged data. In order to reduce the risk, the WebGPU working group will coordinate with GPU vendors to integrate the WebGPU Conformance Test Suite (CTS) as part of their driver testing process, like it was done for WebGL. WebGPU implementations are expected to have workarounds for some of the discovered bugs, and disable WebGPU on drivers with known bugs that can’t be worked around.
2.1.7. Timing attacks
2.1.7.1. Content-timeline timing
WebGPU does not expose new states to JavaScript (the content timeline) which are
shared between agents in an agent cluster.
Content timeline states such as [[mapping]] only change during
explicit content timeline tasks, like in plain JavaScript.
2.1.7.2. Device/queue-timeline timing
Writable storage buffers and other cross-invocation communication may be usable to construct high-precision timers on the queue timeline.
The optional "timestamp-query" feature also provides high precision
timing of GPU operations. To mitigate security and privacy concerns, the timing query
values are aligned to a lower precision: see current queue timestamp. Note in particular:
- The device timeline typically runs in a process that is shared by multiple origins, so cross-origin isolation (provided by COOP/COEP) does not provide isolation of device/queue-timeline timers.
- Queue timeline work is issued from the device timeline, and may execute on GPU hardware that does not provide the isolation expected of CPU processes (such as Meltdown mitigations).
- GPU hardware is not typically susceptible to Spectre-style attacks, but WebGPU may be implemented in software, and software implementations may run in a shared process, preventing isolation-based mitigations.
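The alignment of timestamp values to lower precision can be sketched as follows (a hypothetical illustration; the 100-microsecond quantum is an arbitrary choice, not a value from the spec):

```javascript
// Hypothetical sketch of aligning a timestamp to a coarser granularity,
// in the spirit of the mitigation applied to "timestamp-query" values.
// The quantum of 100 microseconds (100,000 ns) is an illustrative choice.
function alignTimestamp(nanoseconds, quantumNs = 100_000) {
  // Round down to the nearest multiple of the quantum.
  return Math.floor(nanoseconds / quantumNs) * quantumNs;
}
```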
2.1.8. Row hammer attacks
Row hammer is a class of attacks that exploit the leaking of states in DRAM cells. It could be used on a GPU. WebGPU does not have any specific mitigations in place, and relies on platform-level solutions, such as reduced memory refresh intervals.
2.1.9. Denial of service
WebGPU applications have access to GPU memory and compute units. A WebGPU implementation may limit the available GPU memory to an application, in order to keep other applications responsive. For GPU processing time, a WebGPU implementation may set up a "watchdog" timer that ensures an application doesn’t cause GPU unresponsiveness for more than a few seconds. These measures are similar to those used in WebGL.
2.1.10. Workload identification
WebGPU provides access to constrained global resources shared between different programs (and web pages) running on the same machine. An application can try to indirectly probe how constrained these global resources are, in order to reason about workloads performed by other open web pages, based on the patterns of usage of these shared resources. These issues are generally analogous to issues with JavaScript, such as system memory and CPU execution throughput. WebGPU does not provide any additional mitigations for this.
2.1.11. Memory resources
WebGPU exposes fallible allocations from machine-global memory heaps, such as VRAM. This allows for probing the size of the system’s remaining available memory (for a given heap type) by attempting to allocate and watching for allocation failures.
GPUs internally have one or more (typically only two) heaps of memory shared by all running applications. When a heap is depleted, WebGPU would fail to create a resource. This is observable, which may allow a malicious application to guess what heaps are used by other applications, and how much they allocate from them.
2.1.12. Computation resources
If one site uses WebGPU at the same time as another, it may observe the increase in time it takes to process some work. For example, if a site constantly submits compute workloads and tracks completion of work on the queue, it may observe that something else also started using the GPU.
A GPU has many parts that can be tested independently, such as the arithmetic units, texture sampling units, atomic units, etc. A malicious application may sense when some of these units are stressed, and attempt to guess the workload of another application by analyzing the stress patterns. This is analogous to the realities of CPU execution of JavaScript.
2.1.13. Abuse of capabilities
Malicious sites could abuse the capabilities exposed by WebGPU to run computations that don’t benefit the user or their experience and instead only benefit the site. Examples would be hidden crypto-mining, password cracking or rainbow tables computations.
It is not possible to guard against these types of uses of the API because the browser is not able to distinguish between valid workloads and abusive workloads. This is a general problem with all general-purpose computation capabilities on the Web: JavaScript, WebAssembly or WebGL. WebGPU only makes some workloads easier to implement, or slightly more efficient to run than using WebGL.
To mitigate this form of abuse, browsers can throttle operations on background tabs, warn that a tab is using a lot of resources, and restrict which contexts are allowed to use WebGPU.
User agents can heuristically issue warnings to users about high power use, especially due to potentially malicious usage. If a user agent implements such a warning, it should include WebGPU usage in its heuristics, in addition to JavaScript, WebAssembly, WebGL, and so on.
2.2. Privacy Considerations
The privacy considerations for WebGPU are similar to those of WebGL. GPU APIs are complex and must expose various aspects of a device’s capabilities out of necessity in order to enable developers to take advantage of those capabilities effectively. The general mitigation approach involves normalizing or binning potentially identifying information and enforcing uniform behavior where possible.
A user agent must not reveal more than 32 distinguishable configurations or buckets.
2.2.1. Machine-specific features and limits
WebGPU can expose a lot of detail on the underlying GPU architecture and the device geometry. This includes available physical adapters, many limits on the GPU and CPU resources that could be used (such as the maximum texture size), and any optional hardware-specific capabilities that are available.
User agents are not obligated to expose the real hardware limits; they are in full control of how much of the machine’s specifics is exposed. One strategy to reduce fingerprinting is to bin all the target platforms into a small number of buckets. In general, the privacy impact of exposing the hardware limits matches that of WebGL.
The default limits are also deliberately high enough to allow most applications to work without requesting higher limits. All the usage of the API is validated according to the requested limits, so the actual hardware capabilities are not exposed to the users by accident.
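A binning strategy like the one described can be sketched as follows (illustrative only; the bucket values are invented, and the sketch assumes the hardware limit is at least the smallest bucket):

```javascript
// Illustrative binning strategy (not normative): report the largest bucket
// value that does not exceed the real hardware limit, from a short fixed list.
// Assumes hardwareLimit >= buckets[0].
function binLimit(hardwareLimit, buckets = [2048, 4096, 8192, 16384]) {
  let binned = buckets[0];
  for (const b of buckets) {
    if (b <= hardwareLimit) binned = b;
  }
  return binned;
}
```

Many distinct devices then report one of only a few values, shrinking the fingerprinting surface.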
2.2.2. Machine-specific artifacts
There are some machine-specific rasterization/precision artifacts and performance differences that can be observed roughly in the same way as in WebGL. This applies to rasterization coverage and patterns, interpolation precision of the varyings between shader stages, compute unit scheduling, and more aspects of execution.
Generally, rasterization and precision fingerprints are identical across most or all of the devices of each vendor. Performance differences are relatively intractable, but also relatively low-signal (as with JS execution performance).
Privacy-critical applications and user agents should utilize software implementations to eliminate such artifacts.
2.2.3. Machine-specific performance
Another factor for differentiating users is measuring the performance of specific operations on the GPU. Even with low precision timing, repeated execution of an operation can show if the user’s machine is fast at specific workloads. This is a fairly common vector (present in both WebGL and JavaScript), but it’s also low-signal and relatively intractable to truly normalize.
WebGPU compute pipelines expose access to the GPU unobstructed by the fixed-function hardware. This poses an additional risk for unique device fingerprinting. User agents can take steps to dissociate logical GPU invocations from actual compute units to reduce this risk.
2.2.4. User Agent State
This specification doesn’t define any additional user-agent state for an origin.
However, it is expected that user agents will have compilation caches for the result of expensive
compilation like GPUShaderModule, GPURenderPipeline and GPUComputePipeline.
These caches are important to improve the loading time of WebGPU applications after the first
visit.
For the specification, these caches are indistinguishable from incredibly fast compilation, but
for applications it would be easy to measure how long createComputePipelineAsync()
takes to resolve. This can leak information across origins (like "did the user access a site with
this specific shader") so user agents should follow the best practices in
storage partitioning.
The system’s GPU driver may also have its own cache of compiled shaders and pipelines. User agents may want to disable these when at all possible, or add per-partition data to shaders in ways that will make the GPU driver consider them different.
2.2.5. Driver bugs
In addition to the concerns outlined in Security Considerations, driver bugs may introduce differences in behavior that can be observed as a method of differentiating users. The mitigations mentioned in Security Considerations apply here as well, including coordinating with GPU vendors and implementing workarounds for known issues in the user agent.
2.2.6. Adapter Identifiers
Past experience with WebGL has demonstrated that developers have a legitimate need to be able to identify the GPU their code is running on in order to create and maintain robust GPU-based content. For example, to identify adapters with known driver bugs in order to work around them or to avoid features that perform more poorly than expected on a given class of hardware.
But exposing adapter identifiers also naturally expands the amount of fingerprinting information available, so there’s a desire to limit the precision with which we identify the adapter.
There are several mitigations that can be applied to strike a balance between enabling robust content and preserving privacy. First is that user agents can reduce the burden on developers by identifying and working around known driver issues, as they have since browsers began making use of GPUs.
When adapter identifiers are exposed by default they should be as broad as possible while still being useful. Possibly identifying, for example, the adapter’s vendor and general architecture without identifying the specific adapter in use. Similarly, in some cases identifiers for an adapter that is considered a reasonable proxy for the actual adapter may be reported.
In cases where full and detailed information about the adapter is useful (for example: when filing bug reports) the user can be asked for consent to reveal additional information about their hardware to the page.
Finally, the user agent will always have the discretion to not report adapter identifiers at all if it considers it appropriate, such as in enhanced privacy modes.
3. Fundamentals
3.1. Conventions
3.1.1. Syntactic Shorthands
In this specification, the following syntactic shorthands are used:
- The . ("dot") syntax, common in programming languages.
  The phrasing "Foo.Bar" means "the Bar member of the value (or interface) Foo." If Foo is an ordered map and Bar does not exist in Foo, it returns undefined.
  The phrasing "Foo.Bar is provided" means "the Bar member exists in the map value Foo".
- The ?. ("optional chaining") syntax, adopted from JavaScript.
  The phrasing "Foo?.Bar" means "if Foo is null or undefined or Bar does not exist in Foo, undefined; otherwise, Foo.Bar".
  For example, where buffer is a GPUBuffer, buffer?.[[device]].[[adapter]] means "if buffer is null or undefined, then undefined; otherwise, the [[adapter]] internal slot of the [[device]] internal slot of buffer".
- The ?? ("nullish coalescing") syntax, adopted from JavaScript.
  The phrasing "x ?? y" means "x, if x is not null or undefined, and y otherwise".
- slot-backed attribute
  A WebIDL attribute which is backed by an internal slot of the same name. It may or may not be mutable.
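The ?. and ?? shorthands mirror their JavaScript namesakes, which behave as follows (a plain JavaScript illustration; the buffer object here is a stand-in, not a real GPUBuffer):

```javascript
// Plain JavaScript illustration of the ?. and ?? operators the spec's
// shorthands are borrowed from. `buffer` is a stand-in object.
const buffer = { device: { adapter: "adapter0" } };

const a1 = buffer?.device?.adapter;       // "adapter0": every link exists
const a2 = null?.device?.adapter;         // undefined: ?. short-circuits on null
const fallback = undefined ?? "default";  // "default": ?? replaces null/undefined
const zero = 0 ?? "default";              // 0: unlike ||, other falsy values pass through
```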
3.1.2. WebGPU Objects
A WebGPU object consists of a WebGPU Interface and an internal object.
The WebGPU interface defines the public interface and state of the WebGPU object. It can be used on the content timeline where it was created, where it is a JavaScript-exposed WebIDL interface.
Any interface which includes GPUObjectBase is a WebGPU interface.
The internal object tracks the state of the WebGPU object on the device timeline. All reads/writes to the mutable state of an internal object occur from steps executing on a single well-ordered device timeline.
The following special property types can be defined on WebGPU objects:
- immutable property
  A read-only slot set during initialization of the object. It can be accessed from any timeline.
  Note: Since the slot is immutable, implementations may have a copy on multiple timelines, as needed. Immutable properties are defined in this way to avoid describing multiple copies in this spec.
  If named [[with brackets]], it is an internal slot. If named withoutBrackets, it is a readonly slot-backed attribute of the WebGPU interface.
- content timeline property
  A property which is only accessible from the content timeline where the object was created.
  If named [[with brackets]], it is an internal slot. If named withoutBrackets, it is a slot-backed attribute of the WebGPU interface.
- device timeline property
  A property which tracks state of the internal object and is only accessible from the device timeline where the object was created. Device timeline properties may be mutable.
  Device timeline properties are named [[with brackets]] and are internal slots.
- queue timeline property
  A property which tracks state of the internal object and is only accessible from the queue timeline where the object was created. Queue timeline properties may be mutable.
  Queue timeline properties are named [[with brackets]] and are internal slots.
interface mixin GPUObjectBase {
    attribute USVString label;
};

To create a new WebGPU object (GPUObjectBase parent, interface T, GPUObjectDescriptorBase descriptor), where T extends GPUObjectBase, run the following content timeline steps:
1. Let device be parent.[[device]].
2. Let object be a new instance of T.
3. Set object.[[device]] to device.
4. Return object.
GPUObjectBase has the following immutable properties:

[[device]], of type device, readonly
  The device that owns the internal object.
  Operations on the contents of this object assert they are running on the device timeline, and that the device is valid.
GPUObjectBase has the following content timeline properties:

label, of type USVString
  A developer-provided label which is used in an implementation-defined way. It can be used by the browser, OS, or other tools to help identify the underlying internal object to the developer. Examples include displaying the label in GPUError messages, console warnings, browser developer tools, and platform debugging utilities.
  Note: Implementations should use labels to enhance error messages by using them to identify WebGPU objects. However, this need not be the only way of identifying objects: implementations should also use other available information, especially when no label is available. For example:
  - The label of the parent GPUTexture when printing a GPUTextureView.
  - The label of the parent GPUCommandEncoder when printing a GPURenderPassEncoder or GPUComputePassEncoder.
  - The label of the source GPUCommandEncoder when printing a GPUCommandBuffer.
  - The label of the source GPURenderBundleEncoder when printing a GPURenderBundle.
  Note: The label is a property of the GPUObjectBase. Two GPUObjectBase "wrapper" objects have completely separate label states, even if they refer to the same underlying object (for example, returned by getBindGroupLayout()). The label property will not change except by being set from JavaScript. This means one underlying object could be associated with multiple labels. This specification does not define how the label is propagated to the device timeline. How labels are used is completely implementation-defined: error messages could show the most recently set label, all known labels, or no labels at all.
  It is defined as a USVString because some user agents may supply it to the debug facilities of the underlying native APIs.
GPUObjectBase has the following device timeline properties:

[[valid]], of type boolean, initially true
  If true, indicates that the internal object is valid to use.
Note: WebGPU interfaces should not prevent their parent objects, such as the [[device]] that owns them, from being garbage collected. This cannot be guaranteed, however, as holding a strong reference to a parent object may be required in some implementations.
As a result, developers should assume that a WebGPU interface may remain live until all child objects of that interface have also been garbage collected, causing some resources to remain allocated longer than anticipated.
Calling the destroy method on a WebGPU interface (such as
GPUDevice.destroy() or GPUBuffer.destroy()) should be
favored over relying on garbage collection if predictable release of allocated resources is
needed.
3.1.3. Object Descriptors
An object descriptor holds the information needed to create an object,
which is typically done via one of the create* methods of GPUDevice.
dictionary GPUObjectDescriptorBase {
    USVString label = "";
};
GPUObjectDescriptorBase has the following members:
label, of type USVString, defaulting to ""
  The initial value of GPUObjectBase.label.
3.2. Asynchrony
3.2.1. Invalid Internal Objects & Contagious Invalidity
Object creation operations in WebGPU don’t return promises, but nonetheless are internally
asynchronous. Returned objects refer to internal objects which are manipulated on a
device timeline. Rather than fail with exceptions or rejections, most errors that occur on a
device timeline are communicated through GPUErrors generated on the associated device.
Internal objects are either valid or invalid. An invalid object will never become valid at a later time, but some valid objects may be invalidated.
Objects are invalid from creation if it wasn’t possible to create them.
This can happen, for example, if the object descriptor doesn’t describe a valid
object, or if there is not enough memory to allocate a resource.
It can also happen if an object is created with or from another invalid object (for example, calling createView() on an invalid GPUTexture): this case is referred to as contagious invalidity.
Internal objects of most types cannot become invalid after they are created, but still
may become unusable, e.g. if the owning device is lost or
destroyed, or the object has a special internal state,
like buffer state "destroyed".
Internal objects of some types can become invalid after they are created; specifically,
devices, adapters, GPUCommandBuffers, and command/pass/bundle encoders.
A GPUObjectBase object is valid to use with a targetObject if all of the requirements in the following device timeline steps are met:
1. object.[[valid]] must be true.
2. object.[[device]].[[valid]] must be true.
3. object.[[device]] must equal targetObject.[[device]].

To invalidate a GPUObjectBase object, run the following device timeline steps:
1. Set object.[[valid]] to false.
3.2.2. Promise Ordering
Several operations in WebGPU return promises.
WebGPU does not make any guarantees about the order in which these promises settle (resolve or reject), except for the following:
- For some GPUQueue q, if p1 = q.onSubmittedWorkDone() is called before p2 = q.onSubmittedWorkDone(), then p1 must settle before p2.
- For some GPUQueue q and GPUBuffer b on the same GPUDevice, if p1 = b.mapAsync() is called before p2 = q.onSubmittedWorkDone(), then p1 must settle before p2.
Applications must not rely on any other promise settlement ordering.
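The first guarantee can be modeled with a toy FIFO queue (not the real API; MockQueue and completeAllWork are invented for illustration) in which pending promises settle in the order the calls were made:

```javascript
// Toy model (not the real API) of the FIFO settlement guarantee for
// onSubmittedWorkDone(): pending promises settle in call order.
class MockQueue {
  constructor() { this.pending = []; }
  onSubmittedWorkDone() {
    let resolve;
    const p = new Promise((r) => { resolve = r; });
    this.pending.push(resolve); // remember resolvers in call order
    return p;
  }
  // Called when previously submitted work completes: settle in FIFO order.
  completeAllWork() {
    for (const resolve of this.pending) resolve();
    this.pending = [];
  }
}
```

Because the resolvers are stored and fired in call order, an earlier promise always settles before a later one.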
3.3. Coordinate Systems
Rendering operations use the following coordinate systems:

- Normalized device coordinates (or NDC) have three dimensions, where:
  - -1.0 ≤ x ≤ 1.0
  - -1.0 ≤ y ≤ 1.0
  - 0.0 ≤ z ≤ 1.0
  - The bottom-left corner is at (-1.0, -1.0, z).
  (Figure: Normalized device coordinates.)
  Note: Whether z = 0 or z = 1 is treated as the near plane is application specific. The above diagram presents z = 0 as the near plane, but the observed behavior is determined by a combination of the projection matrices used by shaders, the depthClearValue, and the depthCompare function.
- Clip space coordinates have four dimensions: (x, y, z, w)
  - Clip space coordinates are used for the clip position of a vertex (i.e. the position output of a vertex shader), and for the clip volume.
  - Normalized device coordinates and clip space coordinates are related as follows: if point p = (p.x, p.y, p.z, p.w) is in the clip volume, then its normalized device coordinates are (p.x ÷ p.w, p.y ÷ p.w, p.z ÷ p.w).
- Framebuffer coordinates address the pixels in the framebuffer.
  - They have two dimensions.
  - Each pixel extends 1 unit in the x and y dimensions.
  - The top-left corner is at (0.0, 0.0).
  - x increases to the right.
  - y increases down.
  - See § 17 Render Passes and § 23.2.5 Rasterization.
  (Figure: Framebuffer coordinates.)
- Viewport coordinates combine framebuffer coordinates in the x and y dimensions with depth in z.
  - Normally 0.0 ≤ z ≤ 1.0, but this can be modified by setting [[viewport]].minDepth and maxDepth via setViewport().
- Fragment coordinates match viewport coordinates.
- Texture coordinates, sometimes called "UV coordinates" in 2D, are used to sample textures and have a number of components matching the texture dimension.
  - 0 ≤ u ≤ 1.0
  - 0 ≤ v ≤ 1.0
  - 0 ≤ w ≤ 1.0
  - (0.0, 0.0, 0.0) is in the first texel in texture memory address order.
  - (1.0, 1.0, 1.0) is in the last texel in texture memory address order.
  (Figure: 2D Texture coordinates.)
- Window coordinates, or present coordinates, match framebuffer coordinates, and are used when interacting with an external display or conceptually similar interface.

Note: WebGPU’s coordinate systems match DirectX’s coordinate systems in a graphics pipeline.
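The relations above can be sketched as plain JavaScript helpers (illustrative only; clipToNdc and ndcToFramebuffer are not spec algorithms, and the framebuffer mapping assumes the default full-target viewport):

```javascript
// Illustrative helpers relating the coordinate systems above.

// Clip space -> NDC: perspective divide by w.
function clipToNdc([x, y, z, w]) {
  return [x / w, y / w, z / w];
}

// NDC -> framebuffer coordinates for a width x height target:
// NDC y is up with the bottom-left at (-1, -1); framebuffer y is down
// with the top-left at (0, 0), so the y axis flips.
function ndcToFramebuffer([x, y], width, height) {
  return [((x + 1) / 2) * width, ((1 - y) / 2) * height];
}
```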
3.4. Programming Model
3.4.1. Timelines
WebGPU’s behavior is described in terms of "timelines". Each operation (defined as algorithms) occurs on a timeline. Timelines clearly define both the order of operations, and which state is available to which operations.
Note: This "timeline" model describes the constraints of the multi-process models of browser engines (typically with a "content process" and "GPU process"), as well as the GPU itself as a separate execution unit in many implementations. Implementing WebGPU does not require timelines to execute in parallel, so does not require multiple processes, or even multiple threads. (It does require concurrency for cases like get a copy of the image contents of a context which synchronously blocks on another timeline to complete.)
- Content timeline
  Associated with the execution of the Web script. It includes calling all methods described by this specification.
  To issue steps to the content timeline from an operation on GPUDevice device, queue a global task for GPUDevice device with those steps.
- Device timeline
  Associated with the GPU device operations that are issued by the user agent. It includes creation of adapters, devices, and GPU resources and state objects, which are typically synchronous operations from the point of view of the user agent part that controls the GPU, but can live in a separate OS process.
- Queue timeline
  Associated with the execution of operations on the compute units of the GPU. It includes actual draw, copy, and compute jobs that run on the GPU.
- Timeline-agnostic
  Associated with any of the above timelines.
Steps may be issued to any timeline if they only operate on immutable properties or arguments passed from the calling steps.
- Immutable value example term definition
  Can be used on any timeline.
- Content-timeline example term definition
  Can only be used on the content timeline.
- Device-timeline example term definition
  Can only be used on the device timeline.
- Queue-timeline example term definition
  Can only be used on the queue timeline.
Immutable value example term usage.
Immutable value example term usage. Content-timeline example term usage.
Immutable value example term usage. Device-timeline example term usage.
Immutable value example term usage. Queue-timeline example term usage.
In this specification, asynchronous operations are used when the return value depends on work that happens on any timeline other than the Content timeline. They are represented by promises and events in the API.
GPUComputePassEncoder.dispatchWorkgroups():
-
User encodes a
dispatchWorkgroupscommand by calling a method of theGPUComputePassEncoderwhich happens on the Content timeline. -
User issues
GPUQueue.submit()that hands over theGPUCommandBufferto the user agent, which processes it on the Device timeline by calling the OS driver to do a low-level submission. -
The submit gets dispatched by the GPU invocation scheduler onto the actual compute units for execution, which happens on the Queue timeline.
GPUDevice.createBuffer():
-
User fills out a
GPUBufferDescriptorand creates aGPUBufferwith it, which happens on the Content timeline. -
User agent creates a low-level buffer on the Device timeline.
GPUBuffer.mapAsync():

1. The user requests to map a GPUBuffer on the Content timeline and gets a promise in return.
2. The user agent checks if the buffer is currently used by the GPU and makes a reminder to itself to check back when this usage is over.
3. After the GPU, operating on the Queue timeline, is done using the buffer, the user agent maps it to memory and resolves the promise.
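The split between recording on the Content timeline and execution after submission can be illustrated with a toy model. This is not the real API: the Fake* class names below are stand-ins, and real execution happens asynchronously on the GPU rather than inside submit().

```javascript
// Toy model of WebGPU's timeline split (illustrative only; not the real API).
// Encoder methods are cheap bookkeeping on the "content timeline";
// nothing executes until submit() hands the command buffer over.
class FakeComputePassEncoder {
  constructor(commandBuffer) { this.commandBuffer = commandBuffer; }
  dispatchWorkgroups(x, y = 1, z = 1) {
    // Content timeline: just record the command.
    this.commandBuffer.commands.push({ op: "dispatch", x, y, z });
  }
}

class FakeCommandEncoder {
  constructor() { this.commands = []; }
  beginComputePass() { return new FakeComputePassEncoder(this); }
  finish() { return { commands: this.commands }; }
}

class FakeQueue {
  constructor() { this.executed = []; }
  submit(commandBuffers) {
    // Device timeline: the user agent would hand these to the driver;
    // this model just marks them as executed.
    for (const cb of commandBuffers) this.executed.push(...cb.commands);
  }
}

const encoder = new FakeCommandEncoder();
const pass = encoder.beginComputePass();
pass.dispatchWorkgroups(8, 8);

const queue = new FakeQueue();
queue.submit([encoder.finish()]);
```

The point of the model is that dispatchWorkgroups() by itself performs no GPU work; only submit() transfers the recorded commands toward execution.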
3.4.2. Memory Model
This section is non-normative.
Once a GPUDevice has been obtained during an application initialization routine,
we can describe the WebGPU platform as consisting of the following layers:
1. The user agent, implementing this specification.
2. The operating system, with low-level native API drivers for this device.
3. The actual CPU and GPU hardware.
Each layer of the WebGPU platform may have different memory types that the user agent needs to consider when implementing the specification:
- The script-owned memory, such as an ArrayBuffer created by the script, is generally not accessible by a GPU driver.
- A user agent may have different processes responsible for running the content and for communicating with the GPU driver. In this case, it uses inter-process shared memory to transfer data.
- Dedicated GPUs have their own memory with high bandwidth, while integrated GPUs typically share memory with the system.
Most physical resources are allocated in a memory type that is efficient for computation or rendering by the GPU. When the user needs to provide new data to the GPU, the data may first need to cross the process boundary in order to reach the part of the user agent that communicates with the GPU driver. Then it may need to be made visible to the driver, which sometimes requires a copy into driver-allocated staging memory. Finally, it may need to be transferred to the dedicated GPU memory, potentially changing the internal layout into one that is most efficient for GPUs to operate on.
All of these transitions are done by the WebGPU implementation of the user agent.
Note: This example describes the worst case, while in practice
the implementation might not need to cross the process boundary,
or may be able to expose the driver-managed memory directly to
the user behind an ArrayBuffer, thus avoiding any data copies.
3.4.3. Resource Usages
A physical resource can be used with an internal usage by a GPU command:
- input: Buffer with input data for draw or dispatch calls. Preserves the contents. Allowed by buffer INDEX, buffer VERTEX, or buffer INDIRECT.
- constant: Resource bindings that are constant from the shader's point of view. Preserves the contents. Allowed by buffer UNIFORM or texture TEXTURE_BINDING.
- storage: Read/write storage resource binding. Allowed by buffer STORAGE or texture STORAGE_BINDING.
- storage-read: Read-only storage resource binding. Preserves the contents. Allowed by buffer STORAGE or texture STORAGE_BINDING.
- attachment: Texture used as a read/write output attachment or write-only resolve target in a render pass. Allowed by texture RENDER_ATTACHMENT.
- attachment-read: Texture used as a read-only attachment in a render pass. Preserves the contents. Allowed by texture RENDER_ATTACHMENT.
We define subresource to be either a whole buffer, or a texture subresource.
A list U of internal usages is a compatible usage list if it satisfies any of the following rules:

- Each usage in U is input, constant, storage-read, or attachment-read.
- Each usage in U is storage. Multiple such usages are allowed even though they are writable. This is the usage scope storage exception.
- Each usage in U is attachment. Multiple such usages are allowed even though they are writable. This is the usage scope attachment exception.
Enforcing that the usages are only combined into a compatible usage list allows the API to limit when data races can occur in working with memory. That property makes applications written against WebGPU more likely to run without modification on different platforms.
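The compatibility rules amount to a small predicate: either every usage is read-only, or every usage is storage, or every usage is attachment. The helper below is an illustrative sketch (isCompatibleUsageList is a hypothetical name, not part of the API), using strings for the internal usages listed above.

```javascript
// Sketch of the "compatible usage list" rules (hypothetical helper).
const READONLY_USAGES = new Set([
  "input", "constant", "storage-read", "attachment-read",
]);

function isCompatibleUsageList(usages) {
  if (usages.length === 0) return true;
  // Rule 1: every usage in the list preserves contents (read-only).
  if (usages.every((u) => READONLY_USAGES.has(u))) return true;
  // Rule 2: the usage scope storage exception (all "storage").
  if (usages.every((u) => u === "storage")) return true;
  // Rule 3: the usage scope attachment exception (all "attachment").
  return usages.every((u) => u === "attachment");
}
```

For example, combining input with constant is compatible, but mixing storage with input, or attachment with attachment-read, is not.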
For example, binding the same buffer both as a vertex buffer (input usage) and as a writable storage buffer (storage usage) within the same GPURenderPassEncoder results in a non-compatible usage list for that buffer.
By contrast, it is valid to use the same texture subresource in a render pass in both of the following ways at the same time:

- As a depth/stencil attachment with all aspects marked read-only (using depthReadOnly and/or stencilReadOnly as necessary).
- As a texture binding in a draw call.
- A buffer or texture may be bound as storage to two different draw calls in a render pass.
- Disjoint ranges of a single buffer may be bound to two different binding points as storage. Overlapping ranges must not be bound in a single dispatch/draw call; this is checked by "Encoder bind groups alias a writable resource".
- One slice must not be bound twice as two different attachments; this is checked by beginRenderPass().
3.4.4. Synchronization
A usage scope is a map from subresource to list<internal usage>. Each usage scope covers a range of operations which may execute concurrently with each other, and therefore may only use subresources in consistent compatible usage lists within the scope.
The main usage rule is that, for each [subresource, usageList] in scope, usageList must be a compatible usage list.
To merge usage scope A into usage scope B:

1. For each [subresource, usage] in A:
   1. Add subresource to B with usage usage.
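Modeling a usage scope as a Map from subresource to an array of usages, the merge above is a simple append. This is an illustrative sketch (mergeUsageScope is a hypothetical name), and after merging, each subresource's combined list would still have to form a compatible usage list.

```javascript
// Sketch: merge usage scope A into usage scope B.
// A scope maps a subresource key to a list of internal usage strings.
function mergeUsageScope(A, B) {
  for (const [subresource, usages] of A) {
    const list = B.get(subresource) ?? [];
    list.push(...usages); // add each of A's usages for this subresource
    B.set(subresource, list);
  }
}

const A = new Map([["buffer0", ["storage"]]]);
const B = new Map([
  ["buffer0", ["storage"]],
  ["tex0", ["attachment"]],
]);
mergeUsageScope(A, B);
```

Here buffer0 ends up with two storage usages, which is still compatible under the usage scope storage exception.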
Usage scopes are constructed and validated during encoding:
The usage scopes are as follows:
- In a compute pass, each dispatch command (dispatchWorkgroups() or dispatchWorkgroupsIndirect()) is one usage scope. A subresource is used in the usage scope if it is potentially accessible by the dispatched invocations, including:
  - All subresources referenced by bind groups in slots used by the current GPUComputePipeline's [[layout]].
  - Buffers used directly by dispatch calls (such as indirect buffers).

  Note: State-setting compute pass commands, like setBindGroup(), do not contribute their bound resources directly to a usage scope: they only change the state that is checked in dispatch commands.
- One render pass is one usage scope. A subresource is used in the usage scope if it's referenced by any command, including state-setting commands (unlike in compute passes), including:
  - Buffers set by setVertexBuffer().
  - Buffers set by setIndexBuffer().
  - All subresources referenced by bind groups set by setBindGroup().
  - Buffers used directly by draw calls (such as indirect buffers).

Note: Copy commands are standalone operations and don't use usage scopes for validation. They implement their own validation to prevent self-races.
For example, the following subresources are used in a render pass's usage scope even if they do not contribute to the rendering result:

- Subresources used in any setBindGroup() call, regardless of whether the currently bound pipeline's shader or layout actually depends on these bindings, or whether the bind group is shadowed by another 'set' call.
- A buffer used in any setVertexBuffer() call, regardless of whether any draw call depends on this buffer, or whether this buffer is shadowed by another 'set' call.
- A buffer used in any setIndexBuffer() call, regardless of whether any draw call depends on this buffer, or whether this buffer is shadowed by another 'set' call.
- A texture subresource used as a color attachment, resolve attachment, or depth/stencil attachment in the GPURenderPassDescriptor passed to beginRenderPass(), regardless of whether the shader actually depends on these attachments.
- Resources used in bind group entries with visibility 0, or visible only to the compute stage but used in a render pass (or vice versa).
3.5. Core Internal Objects
3.5.1. Adapters
An adapter identifies an implementation of WebGPU on the system: both an instance of compute/rendering functionality on the platform underlying a browser, and an instance of a browser’s implementation of WebGPU on top of that functionality.
Adapters are exposed via GPUAdapter.
Adapters do not uniquely represent underlying implementations:
calling requestAdapter() multiple times returns a different adapter
object each time.
Each adapter object can only be used to create one device:
upon a successful requestDevice() call, the adapter’s [[state]]
changes to "consumed".
Additionally, adapter objects may expire at any time.
Note:
This ensures applications use the latest system state for adapter selection when creating a device.
It also encourages robustness to more scenarios by making them look similar: first initialization,
reinitialization due to an unplugged adapter, reinitialization due to a test
GPUDevice.destroy() call, etc.
An adapter may be considered a fallback adapter if it has significant performance caveats in exchange for some combination of wider compatibility, more predictable behavior, or improved privacy. It is not required that a fallback adapter is available on every system.
adapter has the following immutable properties:
- [[features]], of type ordered set<GPUFeatureName>, readonly: The features which can be used to create devices on this adapter.
- [[limits]], of type supported limits, readonly: The best limits which can be used to create devices on this adapter. Each adapter limit must be the same or better than its default value in supported limits.
- [[fallback]], of type boolean, readonly: If set to true, indicates that the adapter is a fallback adapter.
- [[xrCompatible]], of type boolean: If set to true, indicates that the adapter was requested with compatibility with WebXR sessions.
- [[default feature level]], of type feature level string, readonly: Indicates the default feature level of devices created from this adapter.
adapter has the following device timeline properties:
- [[state]], initially "valid", one of:
  - "valid": The adapter can be used to create a device.
  - "consumed": The adapter has already been used to create a device, and cannot be used again.
  - "expired": The adapter has expired for some other reason.
To expire a GPUAdapter adapter, run the following device timeline steps:

1. Set adapter.[[adapter]].[[state]] to "expired".
3.5.2. Devices
A device is the logical instantiation of an adapter, through which internal objects are created.
Devices are exposed via GPUDevice.
A device is the exclusive owner of all internal objects created from it:
when the device becomes invalid
(is lost or destroyed),
it and all objects created on it (directly, e.g.
createTexture(), or indirectly, e.g. createView()) become
implicitly unusable.
device has the following immutable properties:
- [[adapter]], of type adapter, readonly: The adapter from which this device was created.
- [[features]], of type ordered set<GPUFeatureName>, readonly: The features which can be used on this device, as computed at creation. No additional features can be used, even if the underlying adapter can support them.
- [[limits]], of type supported limits, readonly: The limits which can be used on this device, as computed at creation. No better limits can be used, even if the underlying adapter can support them.
device has the following content timeline properties:
- [[content device]], of type GPUDevice, readonly: The Content timeline GPUDevice interface which this device is associated with.
To create a new device with adapter adapter and GPUDeviceDescriptor descriptor, run the following device timeline steps:

1. Let features be the set of values in descriptor.requiredFeatures.
2. If features contains "texture-formats-tier2":
   1. Append "texture-formats-tier1" to features.
3. If features contains "texture-formats-tier1":
   1. Append "rg11b10ufloat-renderable" to features.
4. Append any default GPUFeatureNames to features as defined by the adapter.[[default feature level]].
5. Let limits be a new supported limits object with the default limits as defined by the adapter.[[default feature level]].
6. For each (key, value) pair in descriptor.requiredLimits:
   1. If value is not undefined and value is better than limits[key]:
      1. Set limits[key] to value.
7. Set limits.maxStorageBuffersPerShaderStage to max(limits.maxStorageBuffersPerShaderStage, limits.maxStorageBuffersInVertexStage, limits.maxStorageBuffersInFragmentStage).
8. Set limits.maxStorageTexturesPerShaderStage to max(limits.maxStorageTexturesPerShaderStage, limits.maxStorageTexturesInVertexStage, limits.maxStorageTexturesInFragmentStage).
9. If features contains "core-features-and-limits":
   1. Set limits.maxStorageBuffersInVertexStage and limits.maxStorageBuffersInFragmentStage to limits.maxStorageBuffersPerShaderStage.
   2. Set limits.maxStorageTexturesInVertexStage and limits.maxStorageTexturesInFragmentStage to limits.maxStorageTexturesPerShaderStage.
10. Let device be a new device object.
11. Set device.[[adapter]] to adapter.
12. Set device.[[features]] to features.
13. Set device.[[limits]] to limits.
14. Return device.
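The limit-resolution portion of this algorithm (applying requested limits, then normalizing the combined per-stage storage limits against the stage-specific ones) can be sketched in script. This is an illustrative model, not a user agent implementation: it handles only maximum-class limits, where "better" simply means higher, and the resolveLimits name is hypothetical.

```javascript
// Sketch of limit resolution in "a new device" (maximum-class limits only,
// where "better" means higher; alignment-class limits compare the other way).
function resolveLimits(defaults, requiredLimits, features) {
  const limits = { ...defaults };
  for (const [key, value] of Object.entries(requiredLimits)) {
    // Only apply values that are "better" than the current default.
    if (value !== undefined && value > limits[key]) limits[key] = value;
  }
  // Normalize the combined per-stage limits to cover the stage-specific ones.
  limits.maxStorageBuffersPerShaderStage = Math.max(
    limits.maxStorageBuffersPerShaderStage,
    limits.maxStorageBuffersInVertexStage,
    limits.maxStorageBuffersInFragmentStage);
  limits.maxStorageTexturesPerShaderStage = Math.max(
    limits.maxStorageTexturesPerShaderStage,
    limits.maxStorageTexturesInVertexStage,
    limits.maxStorageTexturesInFragmentStage);
  if (features.has("core-features-and-limits")) {
    // In core mode, the per-stage limits are raised to the combined limit.
    limits.maxStorageBuffersInVertexStage = limits.maxStorageBuffersPerShaderStage;
    limits.maxStorageBuffersInFragmentStage = limits.maxStorageBuffersPerShaderStage;
    limits.maxStorageTexturesInVertexStage = limits.maxStorageTexturesPerShaderStage;
    limits.maxStorageTexturesInFragmentStage = limits.maxStorageTexturesPerShaderStage;
  }
  return limits;
}
```

For instance, requesting a higher maxStorageBuffersInVertexStage than the combined maxStorageBuffersPerShaderStage pulls the combined limit up with it.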
Any time the user agent needs to revoke access to a device, it calls
lose the device(device, "unknown") on the device’s device timeline,
potentially ahead of other operations currently queued on that timeline.
If an operation fails with side effects that would observably change the state of objects on the device or potentially corrupt internal implementation/driver state, the device should be lost to prevent these changes from being observable.
Note:
For all device losses not initiated by the application (via destroy()),
user agents should consider issuing developer-visible warnings unconditionally,
even if the lost promise is handled.
These scenarios should be rare, and the signal is vital to developers because most of the WebGPU
API tries to behave like nothing is wrong to avoid interrupting the runtime flow of the application:
no validation errors are raised, most promises resolve normally, etc.
To lose the device(device, reason), run the following device timeline steps:

1. Invalidate device.
2. Issue the following steps on the content timeline of device.[[content device]]:
   1. Resolve device.lost with a new GPUDeviceLostInfo with reason set to reason and message set to an implementation-defined value.

   Note: message should not disclose unnecessary user/system information and should never be parsed by applications.
3. Complete any outstanding steps that are waiting until device becomes lost.
Note: No errors are generated from a device which is lost. See § 22 Errors & Debugging.
If or when either of the following occurs:

- the device timeline has been informed of the completion of event, or
- device is lost already, or it becomes lost,

then issue steps on timeline.
3.6. Optional Capabilities
WebGPU adapters and devices have capabilities, which describe WebGPU functionality that differs between different implementations, typically due to hardware or system software constraints. A capability is either a feature or a limit.
A user agent must not reveal more than 32 distinguishable configurations or buckets.
The capabilities of an adapter must conform to § 4.2.1 Adapter Capability Guarantees.
Only supported capabilities may be requested in requestDevice();
requesting unsupported capabilities results in failure.
The capabilities of a device are determined in "a new device" by starting with the adapter’s
defaults (no features and the default supported limits)
and adding capabilities as requested in requestDevice().
These capabilities are enforced regardless of the capabilities of the adapter.
For privacy considerations, see § 2.2.1 Machine-specific features and limits.
3.6.1. Features
A feature is a set of optional WebGPU functionality that is not supported on all implementations, typically due to hardware or system software constraints.
All features are optional, but adapters make some guarantees about their availability (see § 4.2.1 Adapter Capability Guarantees).
A device supports the exact set of features determined at creation (see § 3.6 Optional Capabilities). API calls perform validation according to these features (not the adapter’s features):
There are several types of optional API surface:

- Using a new method or enum value always throws a TypeError.
- Using a new dictionary member with a (correctly-typed) non-default value typically results in a validation error.
- Using existing API surfaces in a new way typically results in a validation error.
- Using a new WGSL enable directive always results in a createShaderModule() validation error.
A GPUFeatureName feature is enabled for a GPUObjectBase object if and only if object.[[device]].[[features]] contains feature.
See the Feature Index for a description of the functionality each feature enables.
Note: Even where supported, enabling features is not necessarily desirable, as doing so may have a performance impact. Because of this, and to improve portability across devices and implementations, applications should generally only request features that they may actually require.
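Following this guidance, an application might intersect the features it can actually take advantage of with those the adapter supports before requesting a device. The sketch below models adapter.features with a plain Set; chooseRequiredFeatures is a hypothetical helper name, and the feature names are examples only.

```javascript
// Sketch: request only the optional features the app can actually use.
// `supported` models adapter.features (a setlike of GPUFeatureName strings).
function chooseRequiredFeatures(supported, desirable) {
  return desirable.filter((name) => supported.has(name));
}

const supported = new Set(["timestamp-query", "depth-clip-control"]);
const requiredFeatures = chooseRequiredFeatures(
  supported,
  ["timestamp-query", "shader-f16"]);
// requiredFeatures could then be passed in the GPUDeviceDescriptor,
// e.g. adapter.requestDevice({ requiredFeatures }).
```

Requesting only the intersection avoids the requestDevice() failure that would result from asking for an unsupported feature.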
3.6.2. Limits
Each limit is a numeric limit on the usage of WebGPU on a device.
Note: Even where supported, setting "better" limits is not necessarily desirable, as doing so may have a performance impact. Because of this, and to improve portability across devices and implementations, applications should generally only request limits better than the defaults if they may actually require them.
Each limit has a default value and a compatibility mode default.
Adapters are always guaranteed to support the defaults or better (see § 4.2.1 Adapter Capability Guarantees).
A device supports the exact set of limits determined at creation (see § 3.6 Optional Capabilities). API calls perform validation according to these limits (not the adapter’s limits), no better or worse.
For any given limit, some values are better than others. A better limit value always relaxes validation, enabling strictly more programs to be valid. For each limit class, "better" is defined.
Different limits have different limit classes:
- maximum: The limit enforces a maximum on some value passed into the API. Higher values are better. May only be set to values ≥ the default. Lower values are clamped to the default.
- alignment: The limit enforces a minimum alignment on some value passed into the API; that is, the value must be a multiple of the limit. Lower values are better. May only be set to powers of 2 which are ≤ the default. Values which are not powers of 2 are invalid. Higher powers of 2 are clamped to the default.
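A sketch of how a requested value resolves under these two limit classes (resolveLimitValue is a hypothetical helper, not part of the API):

```javascript
// Sketch of per-class limit resolution:
// - "maximum": higher is better; values below the default clamp to the default.
// - "alignment": lower is better; must be a power of 2, and powers of 2
//   above the default clamp to the default.
function resolveLimitValue(limitClass, requested, defaultValue) {
  if (limitClass === "maximum") {
    return Math.max(requested, defaultValue);
  }
  if (limitClass === "alignment") {
    const isPow2 = Number.isInteger(requested) && requested > 0 &&
      (requested & (requested - 1)) === 0;
    if (!isPow2) throw new TypeError("alignment limits must be powers of 2");
    return Math.min(requested, defaultValue);
  }
  throw new Error("unknown limit class");
}
```

For example, requesting maxTextureDimension2D = 4096 against a default of 8192 yields 8192, while requesting minUniformBufferOffsetAlignment = 128 against a default of 256 yields 128.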
A supported limits object has a value for every limit defined by WebGPU:
| Limit name | Type | Limit class | Default | Compatibility Mode Default |
|---|---|---|---|---|
maxTextureDimension1D
| GPUSize32
| maximum | 8192 | 4096 |
The maximum allowed value for the size.width
of a texture created with dimension "1d".
| ||||
maxTextureDimension2D
| GPUSize32
| maximum | 8192 | 4096 |
The maximum allowed value for the size.width and size.height
of a texture created with dimension "2d".
| ||||
maxTextureDimension3D
| GPUSize32
| maximum | 2048 | |
The maximum allowed value for the size.width, size.height and size.depthOrArrayLayers
of a texture created with dimension "3d".
| ||||
maxTextureArrayLayers
| GPUSize32
| maximum | 256 | |
The maximum allowed value for the size.depthOrArrayLayers
of a texture created with dimension "2d".
| ||||
maxBindGroups
| GPUSize32
| maximum | 4 | |
The maximum number of GPUBindGroupLayouts
allowed in bindGroupLayouts
when creating a GPUPipelineLayout.
| ||||
maxBindGroupsPlusVertexBuffers
| GPUSize32
| maximum | 24 | |
The maximum number of bind group and vertex buffer slots used simultaneously,
counting any empty slots below the highest index.
Validated in createRenderPipeline() and in draw calls.
| ||||
maxBindingsPerBindGroup
| GPUSize32
| maximum | 1000 | |
The number of binding indices available when creating a GPUBindGroupLayout.
Note: This limit is normative, but arbitrary.
With the default binding slot limits, it is impossible
to use 1000 bindings in one bind group, but this allows binding index values up to 999.
| ||||
maxDynamicUniformBuffersPerPipelineLayout
| GPUSize32
| maximum | 8 | |
The maximum number of GPUBindGroupLayoutEntry entries across a GPUPipelineLayout
which are uniform buffers with dynamic offsets.
See Exceeds the binding slot limits.
| ||||
maxDynamicStorageBuffersPerPipelineLayout
| GPUSize32
| maximum | 4 | |
The maximum number of GPUBindGroupLayoutEntry entries across a GPUPipelineLayout
which are storage buffers with dynamic offsets.
See Exceeds the binding slot limits.
| ||||
maxSampledTexturesPerShaderStage
| GPUSize32
| maximum | 16 | |
For each possible GPUShaderStage stage,
the maximum number of GPUBindGroupLayoutEntry entries across a GPUPipelineLayout
which are sampled textures.
See Exceeds the binding slot limits.
| ||||
maxSamplersPerShaderStage
| GPUSize32
| maximum | 16 | |
For each possible GPUShaderStage stage,
the maximum number of GPUBindGroupLayoutEntry entries across a GPUPipelineLayout
which are samplers.
See Exceeds the binding slot limits.
| ||||
maxStorageBuffersPerShaderStage
| GPUSize32
| maximum | 8 | |
For each possible GPUShaderStage stage,
the maximum number of GPUBindGroupLayoutEntry entries across a GPUPipelineLayout
which are storage buffers.
See Exceeds the binding slot limits.
Note: This limit applies to all stages. At device initialization, it is normalized to be at least the values of maxStorageBuffersInVertexStage and maxStorageBuffersInFragmentStage. | ||||
maxStorageBuffersInVertexStage
| GPUSize32
| maximum | 8 | 0 |
For the vertex stage, the maximum number of GPUBindGroupLayoutEntry entries across a GPUPipelineLayout
which are storage buffers.
See Exceeds the binding slot limits.
| ||||
maxStorageBuffersInFragmentStage
| GPUSize32
| maximum | 8 | 4 |
For the fragment stage, the maximum number of GPUBindGroupLayoutEntry entries across a GPUPipelineLayout
which are storage buffers.
See Exceeds the binding slot limits.
| ||||
maxStorageTexturesPerShaderStage
| GPUSize32
| maximum | 4 | |
For each possible GPUShaderStage stage,
the maximum number of GPUBindGroupLayoutEntry entries across a GPUPipelineLayout
which are storage textures.
See Exceeds the binding slot limits.
Note: This limit applies to all stages. At device initialization, it is normalized to be at least the values of maxStorageTexturesInVertexStage and maxStorageTexturesInFragmentStage. | ||||
maxStorageTexturesInVertexStage
| GPUSize32
| maximum | 4 | 0 |
For the vertex stage, the maximum number of GPUBindGroupLayoutEntry entries across a GPUPipelineLayout
which are storage textures.
See Exceeds the binding slot limits.
| ||||
maxStorageTexturesInFragmentStage
| GPUSize32
| maximum | 4 | |
For the fragment stage, the maximum number of GPUBindGroupLayoutEntry entries across a GPUPipelineLayout
which are storage textures.
See Exceeds the binding slot limits.
| ||||
maxUniformBuffersPerShaderStage
| GPUSize32
| maximum | 12 | |
For each possible GPUShaderStage stage,
the maximum number of GPUBindGroupLayoutEntry entries across a GPUPipelineLayout
which are uniform buffers.
See Exceeds the binding slot limits.
| ||||
maxUniformBufferBindingSize
| GPUSize64
| maximum | 65536 bytes | 16384 bytes |
The maximum GPUBufferBinding.size for bindings with a
GPUBindGroupLayoutEntry entry for which
entry.buffer?.type
is "uniform".
| ||||
maxStorageBufferBindingSize
| GPUSize64
| maximum | 134217728 bytes (128 MiB) | |
The maximum GPUBufferBinding.size for bindings with a
GPUBindGroupLayoutEntry entry for which
entry.buffer?.type
is "storage"
or "read-only-storage".
| ||||
minUniformBufferOffsetAlignment
| GPUSize32
| alignment | 256 bytes | |
The required alignment for GPUBufferBinding.offset and
the dynamic offsets provided in setBindGroup(),
for bindings with a GPUBindGroupLayoutEntry entry for which
entry.buffer?.type
is "uniform".
| ||||
minStorageBufferOffsetAlignment
| GPUSize32
| alignment | 256 bytes | |
The required alignment for GPUBufferBinding.offset and
the dynamic offsets provided in setBindGroup(),
for bindings with a GPUBindGroupLayoutEntry entry for which
entry.buffer?.type
is "storage"
or "read-only-storage".
| ||||
maxVertexBuffers
| GPUSize32
| maximum | 8 | |
The maximum number of buffers
when creating a GPURenderPipeline.
| ||||
maxBufferSize
| GPUSize64
| maximum | 268435456 bytes (256 MiB) | |
The maximum size of size
when creating a GPUBuffer.
| ||||
maxVertexAttributes
| GPUSize32
| maximum | 16 | |
The maximum number of attributes
in total across buffers
when creating a GPURenderPipeline.
| ||||
maxVertexBufferArrayStride
| GPUSize32
| maximum | 2048 bytes | |
The maximum allowed arrayStride
when creating a GPURenderPipeline.
| ||||
maxInterStageShaderVariables
| GPUSize32
| maximum | 16 | 15 |
| The maximum allowed number of input or output variables for inter-stage communication (like vertex outputs or fragment inputs). | ||||
maxColorAttachments
| GPUSize32
| maximum | 8 | 4 |
The maximum allowed number of color attachments in
GPURenderPipelineDescriptor.fragment.targets,
GPURenderPassDescriptor.colorAttachments,
and GPURenderPassLayout.colorFormats.
| ||||
maxColorAttachmentBytesPerSample
| GPUSize32
| maximum | 32 | |
| The maximum number of bytes necessary to hold one sample (pixel or subpixel) of render pipeline output data, across all color attachments. | ||||
maxComputeWorkgroupStorageSize
| GPUSize32
| maximum | 16384 bytes | |
The maximum number of bytes of workgroup storage used for a compute stage
GPUShaderModule entry-point.
| ||||
maxComputeInvocationsPerWorkgroup
| GPUSize32
| maximum | 256 | 128 |
The maximum value of the product of the workgroup_size dimensions for a
compute stage GPUShaderModule entry-point.
| ||||
maxComputeWorkgroupSizeX
| GPUSize32
| maximum | 256 | 128 |
The maximum value of the workgroup_size X dimension for a
compute stage GPUShaderModule entry-point.
| ||||
maxComputeWorkgroupSizeY
| GPUSize32
| maximum | 256 | 128 |
The maximum value of the workgroup_size Y dimension for a
compute stage GPUShaderModule entry-point.
| ||||
maxComputeWorkgroupSizeZ
| GPUSize32
| maximum | 64 | |
The maximum value of the workgroup_size Z dimension for a
compute stage GPUShaderModule entry-point.
| ||||
maxComputeWorkgroupsPerDimension
| GPUSize32
| maximum | 65535 | |
The maximum value for the arguments of
dispatchWorkgroups(workgroupCountX, workgroupCountY, workgroupCountZ).
| ||||
3.6.2.1. GPUSupportedLimits
GPUSupportedLimits exposes an adapter or device’s supported limits.
See GPUAdapter.limits and GPUDevice.limits.
[Exposed=(Window, Worker), SecureContext]
interface GPUSupportedLimits {
    readonly attribute unsigned long maxTextureDimension1D;
    readonly attribute unsigned long maxTextureDimension2D;
    readonly attribute unsigned long maxTextureDimension3D;
    readonly attribute unsigned long maxTextureArrayLayers;
    readonly attribute unsigned long maxBindGroups;
    readonly attribute unsigned long maxBindGroupsPlusVertexBuffers;
    readonly attribute unsigned long maxBindingsPerBindGroup;
    readonly attribute unsigned long maxDynamicUniformBuffersPerPipelineLayout;
    readonly attribute unsigned long maxDynamicStorageBuffersPerPipelineLayout;
    readonly attribute unsigned long maxSampledTexturesPerShaderStage;
    readonly attribute unsigned long maxSamplersPerShaderStage;
    readonly attribute unsigned long maxStorageBuffersPerShaderStage;
    readonly attribute unsigned long maxStorageBuffersInVertexStage;
    readonly attribute unsigned long maxStorageBuffersInFragmentStage;
    readonly attribute unsigned long maxStorageTexturesPerShaderStage;
    readonly attribute unsigned long maxStorageTexturesInVertexStage;
    readonly attribute unsigned long maxStorageTexturesInFragmentStage;
    readonly attribute unsigned long maxUniformBuffersPerShaderStage;
    readonly attribute unsigned long long maxUniformBufferBindingSize;
    readonly attribute unsigned long long maxStorageBufferBindingSize;
    readonly attribute unsigned long minUniformBufferOffsetAlignment;
    readonly attribute unsigned long minStorageBufferOffsetAlignment;
    readonly attribute unsigned long maxVertexBuffers;
    readonly attribute unsigned long long maxBufferSize;
    readonly attribute unsigned long maxVertexAttributes;
    readonly attribute unsigned long maxVertexBufferArrayStride;
    readonly attribute unsigned long maxInterStageShaderVariables;
    readonly attribute unsigned long maxColorAttachments;
    readonly attribute unsigned long maxColorAttachmentBytesPerSample;
    readonly attribute unsigned long maxComputeWorkgroupStorageSize;
    readonly attribute unsigned long maxComputeInvocationsPerWorkgroup;
    readonly attribute unsigned long maxComputeWorkgroupSizeX;
    readonly attribute unsigned long maxComputeWorkgroupSizeY;
    readonly attribute unsigned long maxComputeWorkgroupSizeZ;
    readonly attribute unsigned long maxComputeWorkgroupsPerDimension;
};
3.6.2.2. GPUSupportedFeatures
GPUSupportedFeatures is a setlike interface. Its set entries are
the GPUFeatureName values of the features supported by an adapter or
device. It must only contain strings from the GPUFeatureName enum.
[Exposed=(Window, Worker), SecureContext]
interface GPUSupportedFeatures {
    readonly setlike<DOMString>;
};
The type of GPUSupportedFeatures' set entries is DOMString, to allow user
agents to gracefully handle valid GPUFeatureNames which are added in later revisions of the spec
but which the user agent has not been updated to recognize yet. If the set entries type was
GPUFeatureName, querying the set with an unrecognized feature name would throw a TypeError rather than reporting false.
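This graceful behavior can be illustrated with a plain Set standing in for the setlike GPUSupportedFeatures (the feature names below are examples; "some-future-feature" is a hypothetical name):

```javascript
// Because set entries are DOMString, querying a feature name the user agent
// doesn't recognize simply reports false instead of throwing a TypeError.
// A plain Set stands in for GPUSupportedFeatures here.
const features = new Set(["depth-clip-control"]);

function supportsFeature(features, name) {
  return features.has(name); // false for unknown names, no exception
}
```

An application can therefore probe for features from future spec revisions without wrapping every check in exception handling.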
3.6.2.3. WGSLLanguageFeatures
WGSLLanguageFeatures is the setlike interface of
navigator.gpu.wgslLanguageFeatures.
Its set entries are the string names of the WGSL language extensions
supported by the implementation (regardless of the adapter or device).
[Exposed=(Window, Worker), SecureContext]
interface WGSLLanguageFeatures {
    readonly setlike<DOMString>;
};
3.6.2.4. GPUAdapterInfo
GPUAdapterInfo exposes various identifying information about an adapter.
None of the members in GPUAdapterInfo are guaranteed to be populated with any particular value;
if no value is provided, the attribute will return the empty string "". It is at the user
agent’s discretion which values to reveal, and it is likely that on some devices none of the values
will be populated. As such, applications must be able to handle any possible GPUAdapterInfo values,
including the absence of those values.
The GPUAdapterInfo for an adapter is exposed via GPUAdapter.info
and GPUDevice.adapterInfo.
This info is immutable:
for a given adapter, each GPUAdapterInfo attribute will return the same value every time it’s accessed.
Note:
Though the GPUAdapterInfo attributes are immutable once accessed, an implementation may delay the decision on
what to expose for each attribute until the first time it is accessed.
Note:
Other GPUAdapter instances, even if they represent the same physical adapter, may expose
different values in GPUAdapterInfo.
However, they should expose the same values unless a specific
event has increased the amount of identifying information the page is allowed to access.
(No such events are defined by this specification.)
For privacy considerations, see § 2.2.6 Adapter Identifiers.
[Exposed=(Window, Worker), SecureContext]
interface GPUAdapterInfo {
    readonly attribute DOMString vendor;
    readonly attribute DOMString architecture;
    readonly attribute DOMString device;
    readonly attribute DOMString description;
    readonly attribute unsigned long subgroupMinSize;
    readonly attribute unsigned long subgroupMaxSize;
    readonly attribute boolean isFallbackAdapter;
};
GPUAdapterInfo has the following attributes:
- vendor, of type DOMString, readonly: The name of the vendor of the adapter, if available. Empty string otherwise.
- architecture, of type DOMString, readonly: The name of the family or class of GPUs the adapter belongs to, if available. Empty string otherwise.
- device, of type DOMString, readonly: A vendor-specific identifier for the adapter, if available. Empty string otherwise.

  Note: This is a value that represents the type of adapter. For example, it may be a PCI device ID. It does not uniquely identify a given piece of hardware like a serial number.
- description, of type DOMString, readonly: A human readable string describing the adapter as reported by the driver, if available. Empty string otherwise.

  Note: Because no formatting is applied to description, attempting to parse this value is not recommended. Applications which change their behavior based on the GPUAdapterInfo, such as applying workarounds for known driver issues, should rely on the other fields when possible.
- subgroupMinSize, of type unsigned long, readonly: If the "subgroups" feature is supported, the minimum supported subgroup size for the adapter.
- subgroupMaxSize, of type unsigned long, readonly: If the "subgroups" feature is supported, the maximum supported subgroup size for the adapter.
- isFallbackAdapter, of type boolean, readonly: Whether the adapter is a fallback adapter.
-
Let adapterInfo be a new
GPUAdapterInfo. -
If the vendor is known, set adapterInfo.
vendorto the name of adapter’s vendor as a normalized identifier string. To preserve privacy, the user agent may instead set adapterInfo.vendorto the empty string or a reasonable approximation of the vendor as a normalized identifier string. -
If the architecture is known, set adapterInfo.
architecture to a normalized identifier string representing the family or class of adapters to which adapter belongs. To preserve privacy, the user agent may instead set adapterInfo.architecture to the empty string or a reasonable approximation of the architecture as a normalized identifier string. -
If the device is known, set adapterInfo.
device to a normalized identifier string representing a vendor-specific identifier for adapter. To preserve privacy, the user agent may instead set adapterInfo.device to the empty string or a reasonable approximation of a vendor-specific identifier as a normalized identifier string. -
If a description is known, set adapterInfo.
description to a description of the adapter as reported by the driver. To preserve privacy, the user agent may instead set adapterInfo.description to the empty string or a reasonable approximation of a description. -
If
"subgroups" is supported, set subgroupMinSize to the smallest supported subgroup size. Otherwise, set this value to 4. Note: To preserve privacy, the user agent may choose to not support some features or provide values for the property which do not distinguish different devices, but are still usable (e.g. use the default value of 4 for all devices).
-
If
"subgroups" is supported, set subgroupMaxSize to the largest supported subgroup size. Otherwise, set this value to 128. Note: To preserve privacy, the user agent may choose to not support some features or provide values for the property which do not distinguish different devices, but are still usable (e.g. use the default value of 128 for all devices).
-
Set adapterInfo.
isFallbackAdapter to adapter.[[fallback]]. -
Return adapterInfo.
A normalized identifier string is a string matching the regular expression [a-z0-9]+(-[a-z0-9]+)*.
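As a non-normative illustration, the pattern can be checked directly in JavaScript (the helper name below is ours, not part of the API):

```javascript
// Sketch: validating a normalized identifier string against the pattern
// [a-z0-9]+(-[a-z0-9]+)*. The ^...$ anchors make the whole string match.
const NORMALIZED_ID = /^[a-z0-9]+(-[a-z0-9]+)*$/;

function isNormalizedIdentifier(s) {
  return NORMALIZED_ID.test(s);
}

// Values in the style of GPUAdapterInfo fields:
console.log(isNormalizedIdentifier('gen-12lp'));  // true
console.log(isNormalizedIdentifier('Gen12LP'));   // false: uppercase not allowed
console.log(isNormalizedIdentifier('-leading'));  // false: must start with [a-z0-9]
```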
3.7. Feature Detection
This section is non-normative.
Fully implementing this specification requires implementation of everything it specifies, except where otherwise stated (like § 3.6 Optional Capabilities).
However, since new "core" additions are added to this specification before being exposed by implementations, many features are designed to be feature-detectable by applications:
-
Interface support can be detected with
typeof InterfaceName !== 'undefined'. -
Method and attribute support can be detected with
'itemName' in InterfaceName.prototype. -
New dictionary members, if they need to be detectable, generally document a specific mechanism for feature detection. For example:
-
unclippedDepth support is part of a device feature, "depth-clip-control". -
Canvas support for
toneMapping is detected using getConfiguration().
-
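Taken together, the detection patterns above look like the following non-normative sketch (the specific interface and member names checked are just illustrative):

```javascript
// Sketch: feature-detecting WebGPU API surface before use.
// In an environment without WebGPU (e.g. Node.js), every check reports false.

// Interface support:
const hasExternalTexture = typeof GPUExternalTexture !== 'undefined';

// Method/attribute support, guarded because the interface itself may be absent:
const hasPushErrorScope =
  typeof GPUDevice !== 'undefined' && 'pushErrorScope' in GPUDevice.prototype;

console.log({ hasExternalTexture, hasPushErrorScope });
```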
3.8. Extension Documents
"Extension Documents" are additional documents which describe new functionality that is
not part of the WebGPU/WGSL specifications.
They describe functionality that builds upon these specifications, often including one or more new
API feature flags and/or WGSL enable directives, or interactions with other draft
web specifications.
WebGPU implementations must not expose extension functionality. New functionality does not become part of the WebGPU standard until it is integrated into the WebGPU specification (this document) and/or WGSL specification.
3.9. Origin Restrictions
WebGPU allows accessing image data stored in images, videos, and canvases. Restrictions are imposed on the use of cross-domain media, because shaders can be used to indirectly deduce the contents of textures which have been uploaded to the GPU.
WebGPU disallows uploading an image source if it is not origin-clean.
This also implies that the origin-clean flag for a
canvas rendered using WebGPU will never be set to false.
For more information on issuing CORS requests for image and video elements, consult the CORS settings attributes section of the HTML specification.
3.10. Task Sources
3.10.1. WebGPU Task Source
WebGPU defines a new task source called the WebGPU task source.
It is used for the uncapturederror event and GPUDevice.lost.
To queue a global task for GPUDevice device,
with a series of steps steps on the content timeline:
-
Queue a global task on the WebGPU task source, with the global object that was used to create device, and the steps steps.
3.10.2. Automatic Expiry Task Source
WebGPU defines a new task source called the automatic expiry task source. It is used for the automatic, timed expiry (destruction) of certain objects:
-
GPUTextures returned by getCurrentTexture() -
GPUExternalTextures created from HTMLVideoElements
To queue an automatic expiry task with GPUDevice device and a series of steps steps on the content timeline:
-
Queue a global task on the automatic expiry task source, with the global object that was used to create device, and the steps steps.
Tasks from the automatic expiry task source should be processed with high priority; in particular, once queued, they should run before user-defined (JavaScript) tasks.
Implementation note: It is valid to implement a high-priority expiry "task" by instead inserting additional steps at a fixed point inside the event loop processing model rather than running an actual task.
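Because of this expiry, a texture returned by getCurrentTexture() should not be cached across frames. A non-normative sketch of the safe pattern (drawFrame stands in for the application’s own rendering code):

```javascript
// Sketch: re-acquire the canvas texture on every frame, since the texture from
// the previous frame is expired automatically by the user agent.
function renderLoop(context, drawFrame) {
  function frame() {
    // Correct: fetch a fresh texture each frame...
    const texture = context.getCurrentTexture();
    drawFrame(texture);
    // ...rather than storing `texture` for reuse in the next frame.
    requestAnimationFrame(frame);
  }
  requestAnimationFrame(frame);
}
```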
3.11. Color Spaces and Encoding
WebGPU does not provide color management. All values within WebGPU (such as texture elements) are raw numeric values, not color-managed color values.
WebGPU does interface with color-managed outputs (via GPUCanvasConfiguration) and inputs
(via copyExternalImageToTexture() and importExternalTexture()).
Thus, color conversion must be performed between the WebGPU numeric values and the external color values.
Each such interface point locally defines an encoding (color space, transfer function, and alpha
premultiplication) in which the WebGPU numeric values are to be interpreted.
WebGPU allows all of the color spaces in the PredefinedColorSpace enum.
Note, each color space is defined over an extended range, as defined by the referenced CSS definitions,
to represent color values outside of its space (in both chrominance and luminance).
GPUTextures are not color managed. This includes -srgb formats,
which despite their names are not tagged with an sRGB color space (like those described by
PredefinedColorSpace and the CSS color spaces srgb and
srgb-linear).
However, -srgb texture formats do have gamma-encoding/decoding properties which are
algorithmically close to those used for gamma encoding in "srgb" and
"display-p3". For example, a fragment
shader can output an "srgb-linear"-encoded (physically linear) color value into an -srgb
format texture, which will gamma-encode the value when it is written.
Then, the value in the texture will be correctly encoded for use on a
"srgb"-tagged (approximately perceptually-linear) canvas.
It is similarly possible to take advantage of these properties using
copyExternalImageToTexture(); see its description for additional information.
An out-of-gamut premultiplied RGBA value is one where any of the R/G/B channel values
exceeds the alpha channel value. For example, the premultiplied sRGB RGBA value [1.0, 0, 0, 0.5]
represents the (unpremultiplied) color [2, 0, 0] with 50% alpha, written color(srgb 2 0 0 / 50%) in CSS.
Just like any color value outside the sRGB color gamut, this is a well defined point in the extended color space
(except when alpha is 0, in which case there is no color).
However, when such values are output to a visible canvas, the result is undefined
(see GPUCanvasAlphaMode "premultiplied").
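The arithmetic behind that example is simple; a non-normative sketch (plain math, not a WebGPU API):

```javascript
// Sketch: recover the unpremultiplied color from a premultiplied RGBA value.
// Results may exceed 1.0; that is a well-defined extended-range color.
function unpremultiply([r, g, b, a]) {
  if (a === 0) return null; // no color information when alpha is 0
  return [r / a, g / a, b / a, a];
}

// The premultiplied value [1.0, 0, 0, 0.5] is red at twice the sRGB gamut
// bound, with 50% alpha:
console.log(unpremultiply([1.0, 0, 0, 0.5])); // [2, 0, 0, 0.5]
console.log(unpremultiply([0, 0, 0, 0]));     // null
```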
3.11.1. Color Space Conversions
A color is converted between spaces by translating its representation in one space to a representation in another according to the definitions above.
If the source value has fewer than 4 RGBA channels, the missing green/blue/alpha channels are set to
0, 0, 1, respectively, before converting for color space/encoding and alpha premultiplication.
After conversion, if the destination needs fewer than 4 channels, the additional channels
are ignored.
Note:
Grayscale images generally represent RGB values (V, V, V), or RGBA values (V, V, V, A) in their color space.
Colors are not lossily clamped during conversion: converting from one color space to another will result in values outside the range [0, 1] if the source color values are outside the destination color space’s gamut. For an sRGB destination, for example, this can occur if the source is rgba16float, in a wider color space like Display-P3, or is premultiplied and contains out-of-gamut values.
Similarly, if the source value has a high bit depth (e.g. PNG with 16 bits per component) or
extended range (e.g. canvas with float16 storage), these colors are preserved through color space
conversion, with intermediate computations having at least the precision of the source.
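The channel-defaulting rule above can be sketched as follows (non-normative; the helper name is ours):

```javascript
// Sketch: pad a source value with fewer than 4 channels to RGBA before
// conversion. Missing green/blue default to 0 and missing alpha to 1.
function padToRgba(channels) {
  const [r, g = 0, b = 0, a = 1] = channels;
  return [r, g, b, a];
}

console.log(padToRgba([0.25]));       // single channel -> [0.25, 0, 0, 1]
console.log(padToRgba([0.25, 0.5]));  // two channels   -> [0.25, 0.5, 0, 1]
console.log(padToRgba([1, 0, 0]));    // RGB            -> [1, 0, 0, 1]
```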
3.11.2. Color Space Conversion Elision
If the source and destination of a color space/encoding conversion are the same, then conversion is not necessary. In general, if any given step of the conversion is an identity function (no-op), implementations should elide it, for performance.
For optimal performance, applications should set their color space and encoding
options so that the number of necessary conversions is minimized throughout the process.
For various image sources of GPUCopyExternalImageSourceInfo:
-
ImageBitmap:
-
Premultiplication is controlled via
premultiplyAlpha. -
Color space is controlled via
colorSpaceConversion.
-
-
2d canvas:
-
Color space is controlled via the
colorSpace context creation attribute.
-
WebGL canvas:
-
Premultiplication is controlled via the
premultipliedAlpha option in WebGLContextAttributes. -
Color space is controlled via the
WebGLRenderingContextBase’s drawingBufferColorSpace state.
-
Note: Check browser implementation support for these features before relying on them.
3.12. Numeric conversions from JavaScript to WGSL
Several parts of the WebGPU API (pipeline-overridable constants and
render pass clear values) take numeric values from WebIDL (double or float) and convert
them to WGSL values (bool, i32, u32, f32, f16).
To convert an IDL value idlValue of type double or float to WGSL type T,
possibly throwing a TypeError, run the following device timeline steps:
Note: This TypeError is generated in the device timeline and never surfaced to JavaScript.
-
Assert idlValue is a finite value, since it is not
unrestricted double or unrestricted float. -
Let v be the ECMAScript Number resulting from ! converting idlValue to an ECMAScript value.
-
- If T is
bool -
Return the WGSL
bool value corresponding to the result of ! converting v to an IDL value of type boolean. Note: This algorithm is called after the conversion from an ECMAScript value to an IDL
double or float value. If the original ECMAScript value was a non-numeric, non-boolean value like [] or {}, then the WGSL bool result may be different than if the ECMAScript value had been converted to IDL boolean directly. - If T is
i32 -
Return the WGSL
i32 value corresponding to the result of ? converting v to an IDL value of type [EnforceRange] long. - If T is
u32 -
Return the WGSL
u32 value corresponding to the result of ? converting v to an IDL value of type [EnforceRange] unsigned long. - If T is
f32 -
Return the WGSL
f32 value corresponding to the result of ? converting v to an IDL value of type float. - If T is
f16 -
-
Let wgslF32 be the WGSL
f32 value corresponding to the result of ? converting v to an IDL value of type float. -
Return
f16(wgslF32), the result of ! converting the WGSL f32 value to f16 as defined in WGSL floating point conversion.
Note: As long as the value is in-range of
f32, no error is thrown, even if the value is out-of-range of f16. -
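For illustration, the i32 case behaves like the following non-normative sketch of WebIDL’s [EnforceRange] conversion (helper name ours):

```javascript
// Sketch: [EnforceRange]-style conversion of a JS number to a 32-bit signed
// integer, as used when converting to WGSL i32. Non-finite or out-of-range
// values throw a TypeError rather than wrapping or clamping.
function enforceRangeI32(v) {
  if (!Number.isFinite(v)) throw new TypeError('value must be finite');
  const truncated = Math.trunc(v); // round toward zero
  if (truncated < -(2 ** 31) || truncated > 2 ** 31 - 1) {
    throw new TypeError('value out of range for i32');
  }
  return truncated;
}

console.log(enforceRangeI32(3.7));  // 3
console.log(enforceRangeI32(-1.5)); // -1
// enforceRangeI32(2 ** 31) throws a TypeError.
```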
To convert a GPUColor color to a texel value of texture format format,
possibly throwing a TypeError, run the following device timeline steps:
Note: This TypeError is generated in the device timeline and never surfaced to JavaScript.
-
If the components of format (assert they all have the same type) are:
- floating-point types or normalized types
-
Let T be
f32. - signed integer types
-
Let T be
i32. - unsigned integer types
-
Let T be
u32.
-
Let wgslColor be a WGSL value of type
vec4<T>, where the 4 components are the RGBA channels of color, each ? converted to WGSL type T. -
Convert wgslColor to format using the same conversion rules as the § 23.2.7 Output Merging step, and return the result.
Note: For non-integer types, the exact choice of value is implementation-defined. For normalized types, the value is clamped to the range of the type.
Note:
In other words, the value written will be as if it was written by a WGSL shader that
outputs the value represented as a vec4 of f32, i32, or u32.
4. Initialization
4.1. navigator.gpu
A GPU object is available in the Window and WorkerGlobalScope contexts through the
Navigator and WorkerNavigator interfaces respectively and is exposed via navigator.gpu:
interface mixin NavigatorGPU { [SameObject ,SecureContext ]readonly attribute GPU gpu ; };Navigator includes NavigatorGPU ;WorkerNavigator includes NavigatorGPU ;
NavigatorGPU has the following attributes:
gpu, of type GPU, readonly-
A global singleton providing top-level entry points like
requestAdapter().
4.2. GPU
GPU is the entry point to WebGPU.
[Exposed =(Window ,Worker ),SecureContext ]interface GPU {Promise <GPUAdapter ?>requestAdapter (optional GPURequestAdapterOptions options = {});GPUTextureFormat getPreferredCanvasFormat (); [SameObject ]readonly attribute WGSLLanguageFeatures wgslLanguageFeatures ; };
GPU has the following methods:
requestAdapter(options)-
Requests an adapter from the user agent. The user agent chooses whether to return an adapter, and, if so, chooses according to the provided options.
Called on: GPU this. Arguments:
Arguments for the GPU.requestAdapter(options) method. Parameter Type Nullable Optional Description options GPURequestAdapterOptions ✘ ✔ Criteria used to select the adapter. Returns:
Promise<GPUAdapter?> Content timeline steps:
-
Let contentTimeline be the current Content timeline.
-
Let promise be a new promise.
-
Issue the initialization steps on the Device timeline of this.
-
Return promise.
Device timeline initialization steps:-
All of the requirements in the following steps must be met.
-
options.
featureLevel must be a feature level string.
If any are unmet:
-
Let adapter be
null, issue the resolution steps on contentTimeline, and return.
-
-
If options.
featureLevel is "compatibility":-
Set options.
featureLevel to "compatibility" if the user agent chooses to support it, or "core" if not. Note: This doesn’t modify the JavaScript object passed by the application.
-
-
Set adapter to either:
-
A new adapter object chosen according to the rules in § 4.2.2 Adapter Selection and the criteria in options, adhering to § 4.2.1 Adapter Capability Guarantees, with the capabilities determined in an implementation-defined way by the user agent.
-
null, if the user agent is unable to return an adapter, or makes an implementation-defined choice not to return an adapter.
If an adapter is returned, initialize its properties according to their definitions.
-
Set adapter.
[[limits]] and adapter.[[features]] according to the supported capabilities of the adapter. -
If adapter meets the criteria of a fallback adapter, set adapter.
[[fallback]] to true. Otherwise, set it to false. -
Set adapter.
[[xrCompatible]] to options.xrCompatible. -
Set adapter.
[[default feature level]] to options.featureLevel.
-
-
Issue the resolution steps on contentTimeline.
Content timeline resolution steps:-
If adapter is not
null:-
Resolve promise with a new
GPUAdapter encapsulating adapter.
Otherwise:
-
Resolve promise with
null.
-
-
getPreferredCanvasFormat()-
Returns an optimal
GPUTextureFormat for displaying 8-bit depth, standard dynamic range content on this system. Must only return "rgba8unorm" or "bgra8unorm". The returned value can be passed as the
format to configure() calls on a GPUCanvasContext to ensure the associated canvas is able to display its contents efficiently. Note: Canvases which are not displayed to the screen may or may not benefit from using this format.
Called on: GPU this. Returns:
GPUTextureFormat Content timeline steps:
-
Return either
"rgba8unorm" or "bgra8unorm", depending on which format is optimal for displaying WebGPU canvases on this system.
-
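A typical use of getPreferredCanvasFormat() when configuring a canvas, sketched as a helper that takes the gpu, context, and device objects explicitly so it can be exercised outside a browser (in a real page, gpu is navigator.gpu and context comes from canvas.getContext('webgpu')):

```javascript
// Sketch: configure a GPUCanvasContext with the platform's preferred format.
function configureForDisplay(gpu, context, device) {
  const format = gpu.getPreferredCanvasFormat(); // "rgba8unorm" or "bgra8unorm"
  context.configure({ device, format });
  return format;
}
```

Configuring with a non-preferred format still works, but may incur extra conversion work when the canvas is presented.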
GPU has the following attributes:
wgslLanguageFeatures, of type WGSLLanguageFeatures, readonly-
The names of supported WGSL language extensions. Supported language extensions are automatically enabled.
Adapters may expire at any time. Upon any change in the system’s state that could affect
the result of any requestAdapter() call, the user agent should expire all
previously-returned adapters. For example:
-
A physical adapter is added/removed (via plug/unplug, driver update, hang recovery, etc.)
-
The system’s power configuration has changed (laptop unplugged, power settings changed, etc.)
Note:
User agents may choose to expire adapters often, even when there has been no system
state change (e.g. seconds or minutes after the adapter was created).
This can help obfuscate real system state changes, and make developers more aware that calling
requestAdapter() again is always necessary before calling requestDevice().
If an application does encounter this situation, standard device-loss recovery
handling should allow it to recover.
4.2.1. Adapter Capability Guarantees
Any GPUAdapter returned by requestAdapter() must provide the following guarantees:
-
At least one of the following must be true:
-
"texture-compression-bc" is supported. -
Both
"texture-compression-etc2" and "texture-compression-astc" are supported.
-
-
If
"texture-compression-bc-sliced-3d" is supported, then "texture-compression-bc" must be supported. -
If
"texture-compression-astc-sliced-3d" is supported, then "texture-compression-astc" must be supported. -
All supported limits must be either the default value or better.
-
All alignment-class limits must be powers of 2.
-
maxBindingsPerBindGroup must be ≥ (max bindings per shader stage × max shader stages per pipeline), where:-
max bindings per shader stage is (
maxSampledTexturesPerShaderStage + maxSamplersPerShaderStage + maxStorageBuffersPerShaderStage + maxStorageTexturesPerShaderStage + maxUniformBuffersPerShaderStage). -
max shader stages per pipeline is
2, because a GPURenderPipeline supports both a vertex and a fragment shader.
Note:
maxBindingsPerBindGroup does not reflect a fundamental limit; implementations should raise it to conform to this requirement, rather than lowering the other limits. -
-
maxBindGroups must be ≤ maxBindGroupsPlusVertexBuffers. -
maxVertexBuffers must be ≤ maxBindGroupsPlusVertexBuffers. -
minUniformBufferOffsetAlignment and minStorageBufferOffsetAlignment must both be ≥ 32 bytes. Note: 32 bytes would be the alignment of
vec4<f64>. See WebGPU Shading Language § 14.4.1 Alignment and Size. -
maxUniformBufferBindingSize must be ≤ maxBufferSize. -
maxStorageBufferBindingSize must be ≤ maxBufferSize. -
maxStorageBufferBindingSize must be a multiple of 4 bytes. -
maxVertexBufferArrayStride must be a multiple of 4 bytes. -
maxComputeWorkgroupSizeX must be ≤ maxComputeInvocationsPerWorkgroup. -
maxComputeWorkgroupSizeY must be ≤ maxComputeInvocationsPerWorkgroup. -
maxComputeWorkgroupSizeZ must be ≤ maxComputeInvocationsPerWorkgroup. -
maxComputeInvocationsPerWorkgroup must be ≤ maxComputeWorkgroupSizeX × maxComputeWorkgroupSizeY × maxComputeWorkgroupSizeZ.
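The maxBindingsPerBindGroup guarantee can be checked mechanically against a limits object; a non-normative sketch (helper name ours):

```javascript
// Sketch: verify the guaranteed relationship
//   maxBindingsPerBindGroup >= (max bindings per shader stage) * 2
// where the per-stage count sums the five per-stage binding limits.
function meetsBindingGuarantee(limits) {
  const perStage =
    limits.maxSampledTexturesPerShaderStage +
    limits.maxSamplersPerShaderStage +
    limits.maxStorageBuffersPerShaderStage +
    limits.maxStorageTexturesPerShaderStage +
    limits.maxUniformBuffersPerShaderStage;
  return limits.maxBindingsPerBindGroup >= perStage * 2;
}
```

Conforming adapters always satisfy this, so such a check is only useful as a sanity assertion over reported limits.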
4.2.2. Adapter Selection
GPURequestAdapterOptions
provides hints to the user agent indicating what
configuration is suitable for the application.
dictionary GPURequestAdapterOptions {DOMString featureLevel = "core";GPUPowerPreference powerPreference ;boolean forceFallbackAdapter =false ;boolean xrCompatible =false ; };
enum GPUPowerPreference {"low-power" ,"high-performance" , };
GPURequestAdapterOptions has the following members:
featureLevel, of type DOMString, defaulting to"core"-
Requests an adapter that supports at least a particular set of capabilities. This influences the
[[default feature level]] of devices created from this adapter. The capabilities for each level are defined below, and the exact steps are defined in requestAdapter() and "a new device". If the implementation or system does not support all of the capabilities in the requested feature level,
requestAdapter() will return null. Note: Applications should typically make a single
requestAdapter() call with the lowest feature level they support, then inspect the adapter for additional capabilities they can use optionally, and request those in requestDevice(). The allowed feature level string values are:
- "core"
-
The following set of capabilities:
-
The Default limits.
Note: Adapters with this
[[default feature level]] may conventionally be referred to as "Core-defaulting". -
- "compatibility"
-
The following set of capabilities:
-
The Compatibility Mode Default limits.
-
No features. (It excludes the
"core-features-and-limits" feature.)
If the implementation cannot enforce the stricter "Compatibility Mode" validation rules,
requestAdapter() will ignore this request and treat it as a request for "core". Note: Adapters with this
[[default feature level]] may conventionally be referred to as "Compatibility-defaulting". -
powerPreference, of type GPUPowerPreference-
Optionally provides a hint indicating what class of adapter should be selected from the system’s available adapters.
The value of this hint may influence which adapter is chosen, but it must not influence whether an adapter is returned or not.
Note: The primary utility of this hint is to influence which GPU is used in a multi-GPU system. For instance, some laptops have a low-power integrated GPU and a high-performance discrete GPU. This hint may also affect the power configuration of the selected GPU to match the requested power preference.
Note: Depending on the exact hardware configuration, such as battery status and attached displays or removable GPUs, the user agent may select different adapters given the same power preference. Typically, given the same hardware configuration and state and
powerPreference, the user agent is likely to select the same adapter. It must be one of the following values:
undefined(or not present)-
Provides no hint to the user agent.
"low-power"-
Indicates a request to prioritize power savings over performance.
Note: Generally, content should use this if it is unlikely to be constrained by drawing performance; for example, if it renders only one frame per second, draws only relatively simple geometry with simple shaders, or uses a small HTML canvas element. Developers are encouraged to use this value if their content allows, since it may significantly improve battery life on portable devices.
"high-performance"-
Indicates a request to prioritize performance over power consumption.
Note: By choosing this value, developers should be aware that, for devices created on the resulting adapter, user agents are more likely to force device loss, in order to save power by switching to a lower-power adapter. Developers are encouraged to only specify this value if they believe it is absolutely necessary, since it may significantly decrease battery life on portable devices.
forceFallbackAdapter, of type boolean, defaulting tofalse-
When set to
true, indicates that only a fallback adapter may be returned. If the user agent does not support a fallback adapter, this will cause requestAdapter() to resolve to null. Note:
requestAdapter() may still return a fallback adapter if forceFallbackAdapter is set to false and either no other appropriate adapter is available or the user agent chooses to return a fallback adapter. Developers that wish to prevent their applications from running on fallback adapters should check the info.isFallbackAdapter attribute prior to requesting a GPUDevice. xrCompatible, of type boolean, defaulting to false -
When set to
true, indicates that the best adapter for rendering to a WebXR session must be returned. If the user agent or system does not support WebXR sessions then adapter selection may ignore this value. Note: If
xrCompatible is not set to true when the adapter is requested, GPUDevices created from the adapter cannot be used to render for WebXR sessions.
Requesting a "high-performance" GPUAdapter:
const gpuAdapter = await navigator.gpu.requestAdapter({
  powerPreference: 'high-performance',
});
4.3. GPUAdapter
A GPUAdapter encapsulates an adapter,
and describes its capabilities (features and limits).
To get a GPUAdapter, use requestAdapter().
[Exposed =(Window ,Worker ),SecureContext ]interface GPUAdapter { [SameObject ]readonly attribute GPUSupportedFeatures features ; [SameObject ]readonly attribute GPUSupportedLimits limits ; [SameObject ]readonly attribute GPUAdapterInfo info ;Promise <GPUDevice >requestDevice (optional GPUDeviceDescriptor descriptor = {}); };
GPUAdapter has the following immutable properties:
features, of type GPUSupportedFeatures, readonly-
The set of values in
this.[[adapter]].[[features]]. limits, of type GPUSupportedLimits, readonly-
The limits in
this.[[adapter]].[[limits]]. info, of type GPUAdapterInfo, readonly-
Information about the physical adapter underlying this
GPUAdapter. For a given
GPUAdapter, the GPUAdapterInfo values exposed are constant over time. The same object is returned each time. To create that object for the first time:
Called on: GPUAdapter this. Returns:
GPUAdapterInfo Content timeline steps:
-
Return a new adapter info for this.
[[adapter]].
-
[[adapter]], of type adapter, readonly-
The adapter to which this
GPUAdapter refers.
GPUAdapter has the following methods:
requestDevice(descriptor)-
Requests a device from the adapter.
This is a one-time action: if a device is returned successfully, the adapter becomes
"consumed". Called on: GPUAdapter this. Arguments:
Arguments for the GPUAdapter.requestDevice(descriptor) method. Parameter Type Nullable Optional Description descriptor GPUDeviceDescriptor ✘ ✔ Description of the GPUDevice to request. Content timeline steps:
-
Let contentTimeline be the current Content timeline.
-
Let promise be a new promise.
-
Let adapter be this.
[[adapter]]. -
Issue the initialization steps on the Device timeline of this.
-
Return promise.
Device timeline initialization steps:-
If any of the following requirements are unmet:
-
The set of values in descriptor.
requiredFeatures must be a subset of those in adapter.[[features]].
Then issue the following steps on contentTimeline and return:
Content timeline steps:
-
Reject promise with a TypeError.
Note: This is the same error that is produced if a feature name isn’t known by the browser at all (in its
GPUFeatureName definition). This converges the behavior when the browser doesn’t support a feature with the behavior when a particular adapter doesn’t support a feature. -
-
All of the requirements in the following steps must be met.
-
adapter.
[[state]] must not be "consumed". -
For each [key, value] in descriptor.
requiredLimits for which value is not undefined:-
key must be the name of a member of supported limits.
-
value must be no better than adapter.
[[limits]][key]. -
If key’s class is alignment, value must be a power of 2 less than 2^32.
Note: User agents should consider issuing developer-visible warnings when key is not recognized, even when value is
undefined. -
If any are unmet, issue the following steps on contentTimeline and return:
Content timeline steps:-
Reject promise with an
OperationError.
-
-
If adapter.
[[state]] is "expired" or the user agent otherwise cannot fulfill the request:-
Let device be a new device.
-
Lose the device(device,
"unknown"). -
Assert adapter.
[[state]] is "expired". Note: User agents should consider issuing developer-visible warnings in most or all cases when this occurs. Applications should perform reinitialization logic starting with
requestAdapter().
Otherwise:
-
Let device be the result of creating a new device from adapter with descriptor.
-
Expire adapter.
-
-
Issue the subsequent steps on contentTimeline.
Content timeline steps:-
Let gpuDevice be a new
GPUDevice instance. -
Set gpuDevice.
[[device]] to device. -
Set device.
[[content device]] to gpuDevice. -
Resolve promise with gpuDevice.
Note: If the device is already lost because the adapter could not fulfill the request, device.
lost has already resolved before promise resolves.
-
Requesting a GPUDevice with default features and limits:
const gpuAdapter = await navigator.gpu.requestAdapter();
const gpuDevice = await gpuAdapter.requestDevice();
4.3.1. GPUDeviceDescriptor
GPUDeviceDescriptor describes a device request.
dictionary GPUDeviceDescriptor :GPUObjectDescriptorBase {sequence <GPUFeatureName >requiredFeatures = [];record <DOMString , (GPUSize64 or undefined )>requiredLimits = {};GPUQueueDescriptor defaultQueue = {}; };
GPUDeviceDescriptor has the following members:
requiredFeatures, of type sequence<GPUFeatureName>, defaulting to[]-
Specifies the features that are required by the device request. The request will fail if the adapter cannot provide these features.
Exactly the specified set of features, and no more or less, will be allowed in validation of API calls on the resulting device.
requiredLimits, of typerecord<DOMString, (GPUSize64 or undefined)>, defaulting to{}-
Specifies the limits that are required by the device request. The request will fail if the adapter cannot provide these limits.
Each key with a non-
undefined value must be the name of a member of supported limits. API calls on the resulting device perform validation according to the exact limits of the device (not the adapter; see § 3.6.2 Limits).
defaultQueue, of type GPUQueueDescriptor, defaulting to{}-
The descriptor for the default
GPUQueue.
Requesting a GPUDevice with the "texture-compression-astc" feature if supported:
const gpuAdapter = await navigator.gpu.requestAdapter();

const requiredFeatures = [];
if (gpuAdapter.features.has('texture-compression-astc')) {
  requiredFeatures.push('texture-compression-astc');
}

const gpuDevice = await gpuAdapter.requestDevice({ requiredFeatures });
Requesting a GPUDevice with a higher maxColorAttachmentBytesPerSample limit:
const gpuAdapter = await navigator.gpu.requestAdapter();

if (gpuAdapter.limits.maxColorAttachmentBytesPerSample < 64) {
  // When the desired limit isn’t supported, take action to either fall back to a code
  // path that does not require the higher limit or notify the user that their device
  // does not meet minimum requirements.
}

// Request a higher limit of max color attachment bytes per sample.
const gpuDevice = await gpuAdapter.requestDevice({
  requiredLimits: { maxColorAttachmentBytesPerSample: 64 },
});
4.3.1.1. GPUFeatureName
Each GPUFeatureName identifies a set of functionality which, if available,
allows additional usages of WebGPU that would have otherwise been invalid.
enum GPUFeatureName {"core-features-and-limits" ,"depth-clip-control" ,"depth32float-stencil8" ,"texture-compression-bc" ,"texture-compression-bc-sliced-3d" ,"texture-compression-etc2" ,"texture-compression-astc" ,"texture-compression-astc-sliced-3d" ,"timestamp-query" ,"indirect-first-instance" ,"shader-f16" ,"rg11b10ufloat-renderable" ,"bgra8unorm-storage" ,"float32-filterable" ,"float32-blendable" ,"clip-distances" ,"dual-source-blending" ,"subgroups" ,"texture-formats-tier1" ,"texture-formats-tier2" ,"primitive-index" ,"texture-component-swizzle" , };
4.4. GPUDevice
A GPUDevice encapsulates a device and exposes
the functionality of that device.
GPUDevice is the top-level interface through which WebGPU interfaces are created.
To get a GPUDevice, use requestDevice().
[Exposed =(Window ,Worker ),SecureContext ]interface GPUDevice :EventTarget { [SameObject ]readonly attribute GPUSupportedFeatures features ; [SameObject ]readonly attribute GPUSupportedLimits limits ; [SameObject ]readonly attribute GPUAdapterInfo adapterInfo ; [SameObject ]readonly attribute GPUQueue queue ;undefined destroy ();GPUBuffer createBuffer (GPUBufferDescriptor descriptor );GPUTexture createTexture (GPUTextureDescriptor descriptor );GPUSampler createSampler (optional GPUSamplerDescriptor descriptor = {});GPUExternalTexture importExternalTexture (GPUExternalTextureDescriptor descriptor );GPUBindGroupLayout createBindGroupLayout (GPUBindGroupLayoutDescriptor descriptor );GPUPipelineLayout createPipelineLayout (GPUPipelineLayoutDescriptor descriptor );GPUBindGroup createBindGroup (GPUBindGroupDescriptor descriptor );GPUShaderModule createShaderModule (GPUShaderModuleDescriptor descriptor );GPUComputePipeline createComputePipeline (GPUComputePipelineDescriptor descriptor );GPURenderPipeline createRenderPipeline (GPURenderPipelineDescriptor descriptor );Promise <GPUComputePipeline >createComputePipelineAsync (GPUComputePipelineDescriptor descriptor );Promise <GPURenderPipeline >createRenderPipelineAsync (GPURenderPipelineDescriptor descriptor );GPUCommandEncoder createCommandEncoder (optional GPUCommandEncoderDescriptor descriptor = {});GPURenderBundleEncoder createRenderBundleEncoder (GPURenderBundleEncoderDescriptor descriptor );GPUQuerySet createQuerySet (GPUQuerySetDescriptor descriptor ); };GPUDevice includes GPUObjectBase ;
GPUDevice has the following immutable properties:

features, of type GPUSupportedFeatures, readonly
    A set containing the GPUFeatureName values of the features supported by the device ([[device]].[[features]]).
limits, of type GPUSupportedLimits, readonly
    The limits supported by the device ([[device]].[[limits]]).
queue, of type GPUQueue, readonly
    The primary GPUQueue for this device.
adapterInfo, of type GPUAdapterInfo, readonly
    Information about the physical adapter which created the device that this GPUDevice refers to.
    For a given GPUDevice, the GPUAdapterInfo values exposed are constant over time. The same object is returned each time. To create that object for the first time:
    Called on: GPUDevice this.
    Returns: GPUAdapterInfo
    Content timeline steps:
    1. Return a new adapter info for this.[[device]].[[adapter]].

The [[device]] for a GPUDevice is the device that the GPUDevice refers to.
GPUDevice has the following methods:
destroy()-
Destroys the device, preventing further operations on it. Outstanding asynchronous operations will fail.
Note: It is valid to destroy a device multiple times.
Called on: GPUDevice this.
Content timeline steps:
1. Issue the subsequent steps on the Device timeline of this.
Device timeline steps:
1. Lose the device(this.[[device]], "destroyed").
Note: Since no further operations can be enqueued on this device, implementations can abort outstanding asynchronous operations immediately and free resource allocations, including mapped memory that was just unmapped.
GPUDevice’s allowed buffer usages are:
- Always allowed: MAP_READ, MAP_WRITE, COPY_SRC, COPY_DST, INDEX, VERTEX, UNIFORM, STORAGE, INDIRECT, QUERY_RESOLVE

GPUDevice’s allowed texture usages are:
- Always allowed: COPY_SRC, COPY_DST, TEXTURE_BINDING, STORAGE_BINDING, RENDER_ATTACHMENT, TRANSIENT_ATTACHMENT
4.5. Example
GPUAdapter and GPUDevice with error handling:
let gpuDevice = null;

async function initializeWebGPU() {
    // Check to ensure the user agent supports WebGPU.
    if (!('gpu' in navigator)) {
        console.error("User agent doesn’t support WebGPU.");
        return false;
    }

    // Request an adapter.
    const gpuAdapter = await navigator.gpu.requestAdapter();

    // requestAdapter may resolve with null if no suitable adapters are found.
    if (!gpuAdapter) {
        console.error('No WebGPU adapters found.');
        return false;
    }

    // Request a device.
    // Note that the promise will reject if invalid options are passed to the optional
    // dictionary. To avoid the promise rejecting, always check any features and limits
    // against the adapter’s features and limits prior to calling requestDevice().
    gpuDevice = await gpuAdapter.requestDevice();

    // requestDevice will never return null, but if a valid device request can’t be
    // fulfilled for some reason it may resolve to a device which has already been lost.
    // Additionally, devices can be lost at any time after creation for a variety of reasons
    // (e.g. browser resource management, driver updates), so it’s a good idea to always
    // handle lost devices gracefully.
    gpuDevice.lost.then((info) => {
        console.error(`WebGPU device was lost: ${info.message}`);

        gpuDevice = null;

        // Many causes for lost devices are transient, so applications should try getting a
        // new device once a previous one has been lost unless the loss was caused by the
        // application intentionally destroying the device. Note that any WebGPU resources
        // created with the previous device (buffers, textures, etc) will need to be
        // re-created with the new one.
        if (info.reason != 'destroyed') {
            initializeWebGPU();
        }
    });

    onWebGPUInitialized();

    return true;
}

function onWebGPUInitialized() {
    // Begin creating WebGPU resources here...
}

initializeWebGPU();
5. Buffers
5.1. GPUBuffer
A GPUBuffer represents a block of memory that can be used in GPU operations.
Data is stored in linear layout, meaning that each byte of the allocation can be
addressed by its offset from the start of the GPUBuffer, subject to alignment
restrictions depending on the operation. Some GPUBuffers can be
mapped which makes the block of memory accessible via an ArrayBuffer called
its mapping.
GPUBuffers are created via createBuffer().
Buffers may be mappedAtCreation.
[Exposed=(Window, Worker), SecureContext]
interface GPUBuffer {
    readonly attribute GPUSize64Out size;
    readonly attribute GPUFlagsConstant usage;

    readonly attribute GPUBufferMapState mapState;

    Promise<undefined> mapAsync(GPUMapModeFlags mode, optional GPUSize64 offset = 0, optional GPUSize64 size);
    ArrayBuffer getMappedRange(optional GPUSize64 offset = 0, optional GPUSize64 size);
    undefined unmap();

    undefined destroy();
};
GPUBuffer includes GPUObjectBase;

enum GPUBufferMapState {
    "unmapped",
    "pending",
    "mapped",
};
GPUBuffer has the following immutable properties:
size, of type GPUSize64Out, readonly
    The length of the GPUBuffer allocation in bytes.
usage, of type GPUFlagsConstant, readonly
    The allowed usages for this GPUBuffer.
GPUBuffer has the following content timeline properties:
mapState, of type GPUBufferMapState, readonly
    The current GPUBufferMapState of the buffer:
    "unmapped"
        The buffer is not mapped for use by this.getMappedRange().
    "pending"
        A mapping of the buffer has been requested, but is pending. It may succeed, or fail validation in mapAsync().
    "mapped"
        The buffer is mapped and this.getMappedRange() may be used.
    The getter steps are:
    Content timeline steps:
    1. If this.[[mapping]] is not null, return "mapped".
    2. If this.[[pending_map]] is not null, return "pending".
    3. Return "unmapped".
[[pending_map]], of type Promise<void> or null, initially null
    The Promise returned by the currently-pending mapAsync() call.
    There is never more than one pending map, because mapAsync() will refuse immediately if a request is already in flight.
[[mapping]], of type active buffer mapping or null, initially null
    Set if and only if the buffer is currently mapped for use by getMappedRange(). Null otherwise (even if there is a [[pending_map]]).

An active buffer mapping is a structure with the following fields:

data, of type Data Block
    The mapping for this GPUBuffer. This data is accessed through ArrayBuffers which are views onto this data, returned by getMappedRange() and stored in views.
mode, of type GPUMapModeFlags
    The GPUMapModeFlags of the map, as specified in the corresponding call to mapAsync() or createBuffer().
range, of type tuple [unsigned long long, unsigned long long]
    The range of this GPUBuffer that is mapped.
views, of type list<ArrayBuffer>
    The ArrayBuffers returned via getMappedRange() to the application. They are tracked so they can be detached when unmap() is called.

To initialize an active buffer mapping with mode mode and range range, run the following content timeline steps:
1. Let size be range[1] - range[0].
2. Let data be ? CreateByteDataBlock(size).
   NOTE: This may result in a RangeError being thrown. For consistency and predictability:
   - For any size at which new ArrayBuffer() would succeed at a given moment, this allocation should succeed at that moment.
   - For any size at which new ArrayBuffer() deterministically throws a RangeError, this allocation should as well.
3. Return an active buffer mapping with:
   data: data
   mode: mode
   range: range
   views: []
GPUBuffer has the following device timeline properties:

[[internal state]]
    The current internal state of the buffer:
    "available"
        The buffer may be used in queue operations (unless it is invalid).
    "unavailable"
        The buffer may not be used in queue operations due to being mapped.
    "destroyed"
        The buffer may not be used in any operations due to being destroy()ed.
5.1.1. GPUBufferDescriptor
dictionary GPUBufferDescriptor : GPUObjectDescriptorBase {
    required GPUSize64 size;
    required GPUBufferUsageFlags usage;
    boolean mappedAtCreation = false;
};
GPUBufferDescriptor has the following members:
size, of type GPUSize64-
The size of the buffer in bytes.
usage, of type GPUBufferUsageFlags-
The allowed usages for the buffer.
mappedAtCreation, of type boolean, defaulting to false
    If true, creates the buffer in an already mapped state, allowing getMappedRange() to be called immediately. It is valid to set mappedAtCreation to true even if usage does not contain MAP_READ or MAP_WRITE. This can be used to set the buffer’s initial data.
    Guarantees that even if the buffer creation eventually fails, it will still appear as if the mapped range can be written/read to until it is unmapped.
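The mappedAtCreation path described above is the usual way to upload a buffer's initial data. A minimal sketch, assuming an initialized GPUDevice `device` (`createBufferWithData` is a hypothetical helper name):

```javascript
// Sketch: create a buffer with initial contents via mappedAtCreation, which
// works even when `usage` contains neither MAP_READ nor MAP_WRITE.
function createBufferWithData(device, data /* Float32Array */, usage) {
  const buffer = device.createBuffer({
    size: data.byteLength,  // must be a multiple of 4 when mappedAtCreation is true
    usage,
    mappedAtCreation: true,
  });
  // The mapping is immediately available; write the initial data into it.
  new Float32Array(buffer.getMappedRange()).set(data);
  buffer.unmap();  // makes the contents available to the GPU
  return buffer;
}
```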
5.1.2. Buffer Usages
typedef [EnforceRange] unsigned long GPUBufferUsageFlags;
[Exposed=(Window, Worker), SecureContext]
namespace GPUBufferUsage {
    const GPUFlagsConstant MAP_READ      = 0x0001;
    const GPUFlagsConstant MAP_WRITE     = 0x0002;
    const GPUFlagsConstant COPY_SRC      = 0x0004;
    const GPUFlagsConstant COPY_DST      = 0x0008;
    const GPUFlagsConstant INDEX         = 0x0010;
    const GPUFlagsConstant VERTEX        = 0x0020;
    const GPUFlagsConstant UNIFORM       = 0x0040;
    const GPUFlagsConstant STORAGE       = 0x0080;
    const GPUFlagsConstant INDIRECT      = 0x0100;
    const GPUFlagsConstant QUERY_RESOLVE = 0x0200;
};
The GPUBufferUsage flags determine how a GPUBuffer may be used after its creation:
MAP_READ
    The buffer can be mapped for reading. (Example: calling mapAsync() with GPUMapMode.READ.)
    May only be combined with COPY_DST.
MAP_WRITE
    The buffer can be mapped for writing. (Example: calling mapAsync() with GPUMapMode.WRITE.)
    May only be combined with COPY_SRC.
COPY_SRC
    The buffer can be used as the source of a copy operation. (Examples: as the source argument of a copyBufferToBuffer() or copyBufferToTexture() call.)
COPY_DST
    The buffer can be used as the destination of a copy or write operation. (Examples: as the destination argument of a copyBufferToBuffer() or copyTextureToBuffer() call, or as the target of a writeBuffer() call.)
INDEX
    The buffer can be used as an index buffer. (Example: passed to setIndexBuffer().)
VERTEX
    The buffer can be used as a vertex buffer. (Example: passed to setVertexBuffer().)
UNIFORM
    The buffer can be used as a uniform buffer. (Example: as a bind group entry for a GPUBufferBindingLayout with a buffer.type of "uniform".)
STORAGE
    The buffer can be used as a storage buffer. (Example: as a bind group entry for a GPUBufferBindingLayout with a buffer.type of "storage" or "read-only-storage".)
INDIRECT
    The buffer can be used to store indirect command arguments. (Examples: as the indirectBuffer argument of a drawIndirect() or dispatchWorkgroupsIndirect() call.)
QUERY_RESOLVE
    The buffer can be used to capture query results. (Example: as the destination argument of a resolveQuerySet() call.)
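Usage flags are bit flags and are combined with bitwise OR. A small sketch; the numeric values below mirror the GPUBufferUsage constants defined in the IDL above:

```javascript
// Local mirrors of the GPUBufferUsage constant values from the spec IDL.
const MAP_READ = 0x0001, MAP_WRITE = 0x0002, COPY_SRC = 0x0004, COPY_DST = 0x0008;

// A readback buffer: written by GPU copy commands, then mapped for reading.
// MAP_READ may only be combined with COPY_DST.
const readbackUsage = MAP_READ | COPY_DST;  // 0x0009

// This combination would fail validation at createBuffer(), since MAP_READ
// and MAP_WRITE cannot be combined with each other.
const invalidUsage = MAP_READ | MAP_WRITE;  // 0x0003
```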
5.1.3. Buffer Creation
createBuffer(descriptor)-
Creates a
GPUBuffer.Called on:GPUDevicethis.Arguments:
Arguments for the GPUDevice.createBuffer(descriptor) method. Parameter Type Nullable Optional Description descriptorGPUBufferDescriptor✘ ✘ Description of the GPUBufferto create.Returns:
GPUBufferContent timeline steps:
-
Let b be ! create a new WebGPU object(this,
GPUBuffer, descriptor). -
If descriptor.
mappedAtCreationistrue:-
If descriptor.
sizeis not a multiple of 4, throw aRangeError. -
Set b.[[mapping]] to ? initialize an active buffer mapping with mode WRITE and range [0, descriptor.size].
-
-
Issue the initialization steps on the Device timeline of this.
-
Return b.
Device timeline initialization steps:-
If any of the following requirements are unmet, generate a validation error, invalidate b and return.
-
this must not be lost.
-
descriptor.
usagemust not be 0. -
descriptor.
usagemust be a subset of the allowed buffer usages for this. -
descriptor.size must be ≤ this.[[device]].[[limits]].maxBufferSize.
-
Note: If buffer creation fails, and descriptor.
mappedAtCreationisfalse, any calls tomapAsync()will reject, so any resources allocated to enable mapping can and may be discarded or recycled.-
If descriptor.
mappedAtCreationistrue:-
Set b.
[[internal state]]to "unavailable".
Otherwise:
-
Set b.
[[internal state]]to "available".
-
-
Create a device allocation for b where each byte is zero.
If the allocation fails without side-effects, generate an out-of-memory error, invalidate b, and return.
-
const buffer = gpuDevice.createBuffer({
    size: 128,
    usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST
});
5.1.4. Buffer Destruction
An application that no longer requires a GPUBuffer can choose to lose
access to it before garbage collection by calling destroy(). Destroying a buffer also
unmaps it, freeing any memory allocated for the mapping.
Note: This allows the user agent to reclaim the GPU memory associated with the GPUBuffer
once all previously submitted operations using it are complete.
GPUBuffer has the following methods:
destroy()-
Destroys the GPUBuffer.
Note: It is valid to destroy a buffer multiple times.
Called on:GPUBufferthis.Returns:
undefinedContent timeline steps:
-
Call this.
unmap(). -
Issue the subsequent steps on the Device timeline of this.
[[device]].
Device timeline steps:-
Set this.
[[internal state]]to "destroyed".
Note: Since no further operations can be enqueued using this buffer, implementations can free resource allocations, including mapped memory that was just unmapped.
-
5.2. Buffer Mapping
An application can request to map a GPUBuffer so that they can access its
content via ArrayBuffers that represent part of the GPUBuffer’s
allocations. Mapping a GPUBuffer is requested asynchronously with
mapAsync() so that the user agent can ensure the GPU
finished using the GPUBuffer before the application can access its content.
A mapped GPUBuffer
cannot be used by the GPU and must be unmapped using unmap() before
work using it can be submitted to the Queue timeline.
Once the GPUBuffer is mapped, the application can synchronously ask for access
to ranges of its content with getMappedRange().
The returned ArrayBuffer can only be detached by unmap()
(directly, or via GPUBuffer.destroy() or GPUDevice.destroy()),
and cannot be transferred.
A TypeError is thrown by any other operation that attempts to do so.
typedef [EnforceRange] unsigned long GPUMapModeFlags;
[Exposed=(Window, Worker), SecureContext]
namespace GPUMapMode {
    const GPUFlagsConstant READ  = 0x0001;
    const GPUFlagsConstant WRITE = 0x0002;
};
The GPUMapMode flags determine how a GPUBuffer is mapped when calling
mapAsync():
READ
    Only valid with buffers created with the MAP_READ usage.
    Once the buffer is mapped, calls to getMappedRange() will return an ArrayBuffer containing the buffer’s current values. Changes to the returned ArrayBuffer will be discarded after unmap() is called.
WRITE
    Only valid with buffers created with the MAP_WRITE usage.
    Once the buffer is mapped, calls to getMappedRange() will return an ArrayBuffer containing the buffer’s current values. Changes to the returned ArrayBuffer will be stored in the GPUBuffer after unmap() is called.
    Note: Since the MAP_WRITE buffer usage may only be combined with the COPY_SRC buffer usage, mapping for writing can never return values produced by the GPU, and the returned ArrayBuffer will only ever contain the default initialized data (zeros) or data written by the webpage during a previous mapping.
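A consequence of the usage restrictions above is that reading values produced by the GPU requires a separate MAP_READ staging buffer that the results are copied into. A sketch of that readback pattern, assuming an initialized GPUDevice `device` and a GPUBuffer `source` created with COPY_SRC (`readback` is a hypothetical helper name):

```javascript
// Sketch: copy GPU-produced data into a MAP_READ staging buffer, map it, and
// return a CPU-side copy of the bytes.
async function readback(device, source, size) {
  // MAP_READ may only be combined with COPY_DST.
  const staging = device.createBuffer({
    size,
    usage: GPUBufferUsage.MAP_READ | GPUBufferUsage.COPY_DST,
  });
  const encoder = device.createCommandEncoder();
  encoder.copyBufferToBuffer(source, 0, staging, 0, size);
  device.queue.submit([encoder.finish()]);
  // Resolves once the GPU has finished using the buffer and its contents are visible.
  await staging.mapAsync(GPUMapMode.READ);
  // Copy the data out before unmap(), which detaches the returned ArrayBuffer.
  const data = new Uint8Array(staging.getMappedRange()).slice();
  staging.unmap();
  return data;
}
```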
GPUBuffer has the following methods:
mapAsync(mode, offset, size)-
Maps the given range of the GPUBuffer and resolves the returned Promise when the GPUBuffer’s content is ready to be accessed with getMappedRange().
The resolution of the returned Promise only indicates that the buffer has been mapped. It does not guarantee the completion of any other operations visible to the content timeline, and in particular does not imply that any other Promise returned from onSubmittedWorkDone() or mapAsync() on other GPUBuffers have resolved.
The resolution of the Promise returned from onSubmittedWorkDone() does imply the completion of mapAsync() calls made prior to that call, on GPUBuffers last used exclusively on that queue.
Called on: GPUBuffer this.
Arguments:
Arguments for the GPUBuffer.mapAsync(mode, offset, size) method. Parameter Type Nullable Optional Description modeGPUMapModeFlags✘ ✘ Whether the buffer should be mapped for reading or writing. offsetGPUSize64✘ ✔ Offset in bytes into the buffer to the start of the range to map. sizeGPUSize64✘ ✔ Size in bytes of the range to map. Content timeline steps:
-
Let contentTimeline be the current Content timeline.
-
If this.
mapStateis not"unmapped":-
Issue the early-reject steps on the Device timeline of this.
[[device]].
-
-
Let p be a new
Promise. -
Set this.
[[pending_map]]to p. -
Issue the validation steps on the Device timeline of this.
[[device]]. -
Return p.
Device timeline early-reject steps:-
Return.
Device timeline validation steps:-
If size is
undefined:-
Let rangeSize be max(0, this.
size- offset).
Otherwise:
-
Let rangeSize be size.
-
-
If any of the following conditions are unsatisfied:
-
this must be valid.
-
Set deviceLost to
true. -
Issue the map failure steps on contentTimeline.
-
Return.
-
-
If any of the following conditions are unsatisfied:
-
this.
[[internal state]]is "available". -
offset is a multiple of 8.
-
rangeSize is a multiple of 4.
-
offset + rangeSize ≤ this.
size -
mode contains only bits defined in
GPUMapMode. -
If mode contains
READthen this.usagemust containMAP_READ. -
If mode contains
WRITEthen this.usagemust containMAP_WRITE.
Then:
-
Set deviceLost to
false. -
Issue the map failure steps on contentTimeline.
-
Return.
-
-
Set this.
[[internal state]]to "unavailable".Note: Since the buffer is mapped, its contents cannot change between this step and
unmap(). -
When either of the following events occur (whichever comes first), or if either has already occurred:
-
The device timeline becomes informed of the completion of an unspecified queue timeline point:
-
after the completion of currently-enqueued operations that use this
-
and no later than the completion of all currently-enqueued operations (regardless of whether they use this).
-
-
this.
[[device]]becomes lost.
Then issue the subsequent steps on the device timeline of this.
[[device]]. -
Device timeline steps:-
Set deviceLost to
trueif this.[[device]]is lost, andfalseotherwise.Note: The device could have been lost between the previous block of steps and this one.
-
If deviceLost:
-
Issue the map failure steps on contentTimeline.
Otherwise:
-
Let internalStateAtCompletion be this.
[[internal state]].Note: If, and only if, at this point the buffer has become "available" again due to an
unmap()call, then[[pending_map]]!= p below, so mapping will not succeed in the steps below. -
Let dataForMappedRegion be the contents of this starting at offset offset, for rangeSize bytes.
-
Issue the map success steps on the contentTimeline.
-
Content timeline map success steps:-
If this.
[[pending_map]]!= p:Note: The map has been cancelled by
unmap().-
Assert p is rejected.
-
Return.
-
-
Assert p is pending.
-
Assert internalStateAtCompletion is "unavailable".
-
Let mapping be initialize an active buffer mapping with mode mode and range
[offset, offset + rangeSize].If this allocation fails:
-
Set this.
[[pending_map]]tonull, and reject p with aRangeError. -
Return.
-
-
Set the content of mapping.data to dataForMappedRegion.
-
Set this.
[[mapping]]to mapping. -
Set this.
[[pending_map]]tonull, and resolve p.
Content timeline map failure steps:-
If this.
[[pending_map]]!= p:Note: The map has been cancelled by
unmap().-
Assert p is already rejected.
-
Return.
-
-
Assert p is still pending.
-
Set this.
[[pending_map]]tonull. -
If deviceLost:
-
Reject p with an
AbortError.Note: This is the same error type produced by cancelling the map using
unmap().
Otherwise:
-
Reject p with an
OperationError.
-
-
getMappedRange(offset, size)-
Returns an
ArrayBufferwith the contents of theGPUBufferin the given mapped range.Called on:GPUBufferthis.Arguments:
Arguments for the GPUBuffer.getMappedRange(offset, size) method. Parameter Type Nullable Optional Description offsetGPUSize64✘ ✔ Offset in bytes into the buffer to return buffer contents from. sizeGPUSize64✘ ✔ Size in bytes of the ArrayBufferto return.Returns:
ArrayBufferContent timeline steps:
-
If size is missing:
-
Let rangeSize be max(0, this.
size- offset).
Otherwise, let rangeSize be size.
-
-
If any of the following conditions are unsatisfied, throw an
OperationErrorand return.-
this.
[[mapping]]is notnull. -
offset is a multiple of 8.
-
rangeSize is a multiple of 4.
-
offset ≥ this.
[[mapping]].range[0]. -
offset + rangeSize ≤ this.
[[mapping]].range[1]. -
[offset, offset + rangeSize) does not overlap another range in this.
[[mapping]].views.
Note: It is always valid to get mapped ranges of a
GPUBufferthat ismappedAtCreation, even if it is invalid, because the Content timeline might not know it is invalid. -
-
Let data be this.
[[mapping]].data. -
Let view be ! create an ArrayBuffer of size rangeSize, but with its pointer mutably referencing the content of data at offset (offset -
[[mapping]].range[0]).Note: A
RangeErrorcannot be thrown here, because the data has already been allocated duringmapAsync()orcreateBuffer(). -
Set view.
[[ArrayBufferDetachKey]]to "WebGPUBufferMapping".Note: This causes a
TypeErrorto be thrown if an attempt is made to DetachArrayBuffer, except byunmap(). -
Append view to this.
[[mapping]].views. -
Return view.
Note: User agents should consider issuing a developer-visible warning if
getMappedRange()succeeds without having checked the status of the map, by waiting formapAsync()to succeed, querying amapStateof"mapped", or waiting for a lateronSubmittedWorkDone()call to succeed. -
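The note above recommends not calling getMappedRange() without knowing the map completed. A minimal sketch of one such guard, using the mapState attribute (`tryGetRange` is a hypothetical helper name):

```javascript
// Sketch: only call getMappedRange() once the buffer reports "mapped";
// otherwise signal to the caller that the mapping is not ready.
function tryGetRange(buffer) {
  if (buffer.mapState !== 'mapped') return null;  // "unmapped" or "pending"
  return buffer.getMappedRange();
}
```

Awaiting the Promise returned by mapAsync() is the more direct way to get the same guarantee.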
unmap()-
Unmaps the mapped range of the
GPUBufferand makes its contents available for use by the GPU again.Called on:GPUBufferthis.Returns:
undefinedContent timeline steps:
-
If this.
[[pending_map]]is notnull:-
Reject this.
[[pending_map]]with anAbortError. -
Set this.
[[pending_map]]tonull.
-
-
If this.
[[mapping]]isnull:-
Return.
-
-
For each
ArrayBufferab in this.[[mapping]].views:-
Perform DetachArrayBuffer(ab, "WebGPUBufferMapping").
-
-
Let bufferUpdate be
null. -
If this.
[[mapping]].mode containsWRITE:-
Set bufferUpdate to {
data: this.[[mapping]].data,offset: this.[[mapping]].range[0] }.
Note: When a buffer is mapped without the
WRITEmode, then unmapped, any local modifications done by the application to the mapped rangesArrayBufferare discarded and will not affect the content of later mappings. -
-
Set this.
[[mapping]]tonull. -
Issue the subsequent steps on the Device timeline of this.
[[device]].
Device timeline steps:-
If any of the following conditions are unsatisfied, return.
-
this is valid to use with this.
[[device]].
-
-
Assert this.
[[internal state]]is "unavailable". -
If bufferUpdate is not
null:-
Issue the following steps on the Queue timeline of this.
[[device]].queue:Queue timeline steps:-
Update the contents of this at offset bufferUpdate.
offsetwith the data bufferUpdate.data.
-
-
-
Set this.
[[internal state]]to "available".
-
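The unmap() write-back behavior above is what makes the MAP_WRITE upload pattern work: data written into the mapped ArrayBuffer is enqueued to update the buffer when it is unmapped. A sketch, assuming a GPUBuffer `staging` created with MAP_WRITE | COPY_SRC (`writeStaging` is a hypothetical helper name):

```javascript
// Sketch: map a MAP_WRITE staging buffer, write bytes from the CPU, then
// unmap so the contents are uploaded and the buffer can be used as a copy source.
async function writeStaging(staging, bytes /* Uint8Array */) {
  await staging.mapAsync(GPUMapMode.WRITE);
  new Uint8Array(staging.getMappedRange()).set(bytes);
  // unmap() detaches the ArrayBuffer and enqueues the write-back of the data.
  staging.unmap();
}
```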
6. Textures and Texture Views
6.1. GPUTexture
A texture is made up of 1d, 2d,
or 3d arrays of data which can contain multiple values per-element to
represent things like colors. Textures can be read and written in many ways, depending on the
GPUTextureUsage they are created with. For example, textures can be sampled, read, and written
from render and compute pipeline shaders, and they can be written by render pass outputs.
Internally, textures are often stored in GPU memory with a layout optimized for
multidimensional access rather than linear access.
One texture consists of one or more texture subresources,
each uniquely identified by a mipmap level and,
for 2d textures only, array layer and aspect.
A texture subresource is a subresource: each can be used in different internal usages within a single usage scope.
Each subresource in a mipmap level is approximately half the size,
in each spatial dimension, of the corresponding resource in the lesser level
(see logical miplevel-specific texture extent).
The subresource in level 0 has the dimensions of the texture itself.
Smaller levels are typically used to store lower resolution versions of the same image.
GPUSampler and WGSL provide facilities for selecting and interpolating between levels of detail, explicitly or automatically.
A "2d" texture may be an array of array layers.
Each subresource in a layer is the same size as the corresponding resources in other layers.
For non-2d textures, all subresources have an array layer index of 0.
Each subresource has an aspect.
Color textures have just one aspect: color.
Depth-or-stencil format textures may have multiple aspects:
a depth aspect,
a stencil aspect, or both, and may be used in special ways, such as in
depthStencilAttachment and in "depth" bindings.
A "3d" texture may have multiple slices, each being the
two-dimensional image at a particular z value in the texture.
Slices are not separate subresources.
[Exposed=(Window, Worker), SecureContext]
interface GPUTexture {
    GPUTextureView createView(optional GPUTextureViewDescriptor descriptor = {});

    undefined destroy();

    readonly attribute GPUIntegerCoordinateOut width;
    readonly attribute GPUIntegerCoordinateOut height;
    readonly attribute GPUIntegerCoordinateOut depthOrArrayLayers;
    readonly attribute GPUIntegerCoordinateOut mipLevelCount;
    readonly attribute GPUSize32Out sampleCount;
    readonly attribute GPUTextureDimension dimension;
    readonly attribute GPUTextureFormat format;
    readonly attribute GPUFlagsConstant usage;
    readonly attribute (GPUTextureViewDimension or undefined) textureBindingViewDimension;
};
GPUTexture includes GPUObjectBase;
GPUTexture has the following immutable properties:

width, of type GPUIntegerCoordinateOut, readonly
    The width of this GPUTexture.
height, of type GPUIntegerCoordinateOut, readonly
    The height of this GPUTexture.
depthOrArrayLayers, of type GPUIntegerCoordinateOut, readonly
    The depth or layer count of this GPUTexture.
mipLevelCount, of type GPUIntegerCoordinateOut, readonly
    The number of mip levels of this GPUTexture.
sampleCount, of type GPUSize32Out, readonly
    The sample count of this GPUTexture.
dimension, of type GPUTextureDimension, readonly
    The dimension of the set of texels for each of this GPUTexture’s subresources.
format, of type GPUTextureFormat, readonly
    The format of this GPUTexture.
usage, of type GPUFlagsConstant, readonly
    The allowed usages for this GPUTexture.
[[viewFormats]], of type sequence<GPUTextureFormat>
    The set of GPUTextureFormats that can be used as the GPUTextureViewDescriptor.format when creating views on this GPUTexture.
textureBindingViewDimension, of type (GPUTextureViewDimension or undefined), readonly
    On devices without "core-features-and-limits", views created from this texture must have this as their dimension.
    On devices with "core-features-and-limits", this is undefined, and there is no such restriction.
GPUTexture has the following device timeline properties:
[[destroyed]], of typeboolean, initiallyfalse-
If the texture is destroyed, it can no longer be used in any operation, and its underlying memory can be freed.
The compute render extent of a texture at a given mip level is calculated by this procedure:

Arguments:
- GPUExtent3D baseSize
- GPUSize32 mipLevel

Returns: GPUExtent3DDict

Device timeline steps:
1. Let extent be a new GPUExtent3DDict object.
2. Set extent.width to max(1, baseSize.width ≫ mipLevel).
3. Set extent.height to max(1, baseSize.height ≫ mipLevel).
4. Set extent.depthOrArrayLayers to 1.
5. Return extent.
The logical miplevel-specific texture extent of a texture is the size of the texture in texels at a specific miplevel. It is calculated by this procedure:
Arguments:
-
GPUTextureDescriptordescriptor -
GPUSize32mipLevel
Returns: GPUExtent3DDict
1. Let extent be a new GPUExtent3DDict object.
2. If descriptor.dimension is:
   "1d"
       1. Set extent.width to max(1, descriptor.size.width ≫ mipLevel).
       2. Set extent.height to 1.
       3. Set extent.depthOrArrayLayers to 1.
   "2d"
       1. Set extent.width to max(1, descriptor.size.width ≫ mipLevel).
       2. Set extent.height to max(1, descriptor.size.height ≫ mipLevel).
       3. Set extent.depthOrArrayLayers to descriptor.size.depthOrArrayLayers.
   "3d"
       1. Set extent.width to max(1, descriptor.size.width ≫ mipLevel).
       2. Set extent.height to max(1, descriptor.size.height ≫ mipLevel).
       3. Set extent.depthOrArrayLayers to max(1, descriptor.size.depthOrArrayLayers ≫ mipLevel).
3. Return extent.
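The per-dimension cases above can be transcribed directly as a plain function. A sketch; `descriptor` is assumed to be any object shaped like a GPUTextureDescriptor, and `logicalMipExtent` is a hypothetical helper name:

```javascript
// Sketch of the "logical miplevel-specific texture extent" steps: each spatial
// dimension is halved per mip level (via right shift), clamped to at least 1.
function logicalMipExtent(descriptor, mipLevel) {
  const { width, height = 1, depthOrArrayLayers = 1 } = descriptor.size;
  switch (descriptor.dimension) {
    case '1d':
      return { width: Math.max(1, width >> mipLevel), height: 1, depthOrArrayLayers: 1 };
    case '2d':  // array layers are not reduced by mip level
      return { width: Math.max(1, width >> mipLevel),
               height: Math.max(1, height >> mipLevel),
               depthOrArrayLayers };
    case '3d':  // depth slices are reduced like the other dimensions
      return { width: Math.max(1, width >> mipLevel),
               height: Math.max(1, height >> mipLevel),
               depthOrArrayLayers: Math.max(1, depthOrArrayLayers >> mipLevel) };
  }
}
```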
The physical miplevel-specific texture extent of a texture is the size of the texture in texels at a specific miplevel that includes the possible extra padding to form complete texel blocks in the texture. It is calculated by this procedure:
Arguments:
-
GPUTextureDescriptordescriptor -
GPUSize32mipLevel
Returns: GPUExtent3DDict
1. Let extent be a new GPUExtent3DDict object.
2. Let logicalExtent be logical miplevel-specific texture extent(descriptor, mipLevel).
3. If descriptor.dimension is:
   "1d"
       1. Set extent.width to logicalExtent.width rounded up to the nearest multiple of descriptor’s texel block width.
       2. Set extent.height to 1.
       3. Set extent.depthOrArrayLayers to 1.
   "2d"
       1. Set extent.width to logicalExtent.width rounded up to the nearest multiple of descriptor’s texel block width.
       2. Set extent.height to logicalExtent.height rounded up to the nearest multiple of descriptor’s texel block height.
       3. Set extent.depthOrArrayLayers to logicalExtent.depthOrArrayLayers.
   "3d"
       1. Set extent.width to logicalExtent.width rounded up to the nearest multiple of descriptor’s texel block width.
       2. Set extent.height to logicalExtent.height rounded up to the nearest multiple of descriptor’s texel block height.
       3. Set extent.depthOrArrayLayers to logicalExtent.depthOrArrayLayers.
4. Return extent.
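The block rounding above can be sketched as a small function. The texel block dimensions are passed in as parameters here, since they depend on the texture format (1×1 for plain color formats; for example 4×4 for BC and ETC2 formats). `physicalMipExtent` is a hypothetical helper name:

```javascript
// Sketch of the "physical miplevel-specific texture extent": the logical
// extent, with width and height rounded up to whole texel blocks.
function physicalMipExtent(logicalExtent, blockWidth, blockHeight) {
  const roundUp = (n, k) => Math.ceil(n / k) * k;
  return {
    width: roundUp(logicalExtent.width, blockWidth),
    height: roundUp(logicalExtent.height, blockHeight),
    depthOrArrayLayers: logicalExtent.depthOrArrayLayers,  // never block-padded
  };
}
```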
6.1.1. GPUTextureDescriptor
dictionary GPUTextureDescriptor : GPUObjectDescriptorBase {
    required GPUExtent3D size;
    GPUIntegerCoordinate mipLevelCount = 1;
    GPUSize32 sampleCount = 1;
    GPUTextureDimension dimension = "2d";
    required GPUTextureFormat format;
    required GPUTextureUsageFlags usage;
    sequence<GPUTextureFormat> viewFormats = [];
    GPUTextureViewDimension textureBindingViewDimension;
};
GPUTextureDescriptor has the following members:
size, of type GPUExtent3D-
The width, height, and depth or layer count of the texture.
mipLevelCount, of type GPUIntegerCoordinate, defaulting to1-
The number of mip levels the texture will contain.
sampleCount, of type GPUSize32, defaulting to1-
The sample count of the texture. A
sampleCount>1indicates a multisampled texture. dimension, of type GPUTextureDimension, defaulting to"2d"-
Whether the texture is one-dimensional, an array of two-dimensional layers, or three-dimensional.
format, of type GPUTextureFormat-
The format of the texture.
usage, of type GPUTextureUsageFlags-
The allowed usages for the texture.
viewFormats, of type sequence<GPUTextureFormat>, defaulting to[]-
Specifies what view
formatvalues will be allowed when callingcreateView()on this texture (in addition to the texture’s actualformat).NOTE:Adding a format to this list may have a significant performance impact, so it is best to avoid adding formats unnecessarily.The actual performance impact is highly dependent on the target system; developers must test various systems to find out the impact on their particular application. For example, on some systems any texture with a
formatorviewFormatsentry including"rgba8unorm-srgb"will perform less optimally than a"rgba8unorm"texture which does not. Similar caveats exist for other formats and pairs of formats on other systems.Formats in this list must be texture view format compatible with the texture format.
TwoGPUTextureFormats format and viewFormat are texture view format compatible on a given device if:-
format equals viewFormat, or
-
format and viewFormat differ only in whether they are
srgbformats (have the-srgbsuffix) and device.[[features]]contains"core-features-and-limits".
-
textureBindingViewDimension, of type GPUTextureViewDimension-
On devices without
"core-features-and-limits", views created from this texture must have this as theirdimension. If not specified, a default is chosen.On devices with
"core-features-and-limits", this is ignored, and there is no such restriction.
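The viewFormats member above is how a texture opts in to later reinterpretation. A sketch using the srgb/non-srgb compatibility rule, assuming an initialized GPUDevice `device` (`createSwappableSrgbTexture` is a hypothetical helper name):

```javascript
// Sketch: declare the srgb view format at creation time, so createView() can
// later reinterpret the same texels with srgb encoding.
function createSwappableSrgbTexture(device) {
  const texture = device.createTexture({
    size: { width: 256, height: 256 },
    format: 'rgba8unorm',
    usage: GPUTextureUsage.TEXTURE_BINDING | GPUTextureUsage.RENDER_ATTACHMENT,
    // Only texture view format compatible formats are allowed here; for
    // 'rgba8unorm' that is its -srgb counterpart.
    viewFormats: ['rgba8unorm-srgb'],
  });
  return texture.createView({ format: 'rgba8unorm-srgb' });
}
```

Declaring only the formats actually needed keeps the performance cost mentioned in the note above to a minimum.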
enum GPUTextureDimension {
    "1d",
    "2d",
    "3d",
};
"1d"-
Specifies a texture that has one dimension, width.
"1d"textures cannot have mipmaps, be multisampled, use compressed or depth/stencil formats, or be used as a render target. "2d"-
Specifies a texture that has a width and height, and may have layers.
"3d"-
Specifies a texture that has a width, height, and depth.
"3d"textures cannot be multisampled, and their format must support 3d textures (all plain color formats and some packed/compressed formats).
6.1.2. Texture Usages
typedef [EnforceRange] unsigned long GPUTextureUsageFlags;
[Exposed=(Window, Worker), SecureContext]
namespace GPUTextureUsage {
    const GPUFlagsConstant COPY_SRC             = 0x01;
    const GPUFlagsConstant COPY_DST             = 0x02;
    const GPUFlagsConstant TEXTURE_BINDING      = 0x04;
    const GPUFlagsConstant STORAGE_BINDING      = 0x08;
    const GPUFlagsConstant RENDER_ATTACHMENT    = 0x10;
    const GPUFlagsConstant TRANSIENT_ATTACHMENT = 0x20;
};
The GPUTextureUsage flags determine how a GPUTexture may be used after its creation:
COPY_SRC
- The texture can be used as the source of a copy operation. (Examples: as the source argument of a copyTextureToTexture() or copyTextureToBuffer() call.)
COPY_DST
- The texture can be used as the destination of a copy or write operation. (Examples: as the destination argument of a copyTextureToTexture() or copyBufferToTexture() call, or as the target of a writeTexture() call.)
TEXTURE_BINDING
- The texture can be bound for use as a sampled texture in a shader. (Example: as a bind group entry for a GPUTextureBindingLayout.)
STORAGE_BINDING
- The texture can be bound for use as a storage texture in a shader. (Example: as a bind group entry for a GPUStorageTextureBindingLayout.)
RENDER_ATTACHMENT
- The texture can be used as a color or depth/stencil attachment in a render pass. (Example: as a GPURenderPassColorAttachment.view or GPURenderPassDepthStencilAttachment.view.)
TRANSIENT_ATTACHMENT
- The texture is intended to be temporary (a hint for optimization), as it is only used within a render pass.
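The usage flags above are bit flags; a texture's usage is their bitwise OR. As an illustration (in real code you would write GPUTextureUsage.RENDER_ATTACHMENT etc. rather than literal values):

```javascript
// Values mirror the GPUTextureUsage namespace above.
const TEXTURE_BINDING = 0x04;
const RENDER_ATTACHMENT = 0x10;

// A texture that is rendered to in one pass, then sampled in a later pass:
const usage = RENDER_ATTACHMENT | TEXTURE_BINDING;

// Testing whether a usage value includes a particular flag:
const canSample = (usage & TEXTURE_BINDING) !== 0;
```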
Arguments:
- GPUTextureDimension dimension
- GPUExtent3D size
6.1.3. Texture Creation
createTexture(descriptor)
- Creates a GPUTexture.
Called on: GPUDevice this.
Arguments:
Arguments for the GPUDevice.createTexture(descriptor) method:
  descriptor — GPUTextureDescriptor, not nullable, not optional. Description of the GPUTexture to create.
Returns: GPUTexture
Content timeline steps:
- ? validate GPUExtent3D shape(descriptor.size).
- ? Validate texture format required features of descriptor.format with this.[[device]].
- ? Validate texture format required features of each element of descriptor.viewFormats with this.[[device]].
- Let t be ! create a new WebGPU object(this, GPUTexture, descriptor).
- Set t.depthOrArrayLayers to descriptor.size.depthOrArrayLayers.
- Set t.mipLevelCount to descriptor.mipLevelCount.
- Set t.sampleCount to descriptor.sampleCount.
- If t.[[device]].[[features]] does not contain "core-features-and-limits":
  - If descriptor.textureBindingViewDimension is provided:
    - Set t.textureBindingViewDimension to descriptor.textureBindingViewDimension.
  - Otherwise, if descriptor.dimension is:
    "1d"
    - Set t.textureBindingViewDimension to "1d".
    "2d"
    - If the array layer count of t is 1:
      - Set t.textureBindingViewDimension to "2d".
      Otherwise:
      - Set t.textureBindingViewDimension to "2d-array".
    "3d"
    - Set t.textureBindingViewDimension to "3d".
- Issue the initialization steps on the Device timeline of this.
- Return t.
Device timeline initialization steps:
- If any of the following conditions are unsatisfied, generate a validation error, invalidate t, and return:
  - validating GPUTextureDescriptor(this, descriptor) returns true.
- Set t.[[viewFormats]] to descriptor.viewFormats.
- Create a device allocation for t where each block has an equivalent texel representation to a block with a bit representation of zero.
  If the allocation fails without side-effects, generate an out-of-memory error, invalidate t, and return.
-
Arguments:
- GPUDevice this
- GPUTextureDescriptor descriptor
Device timeline steps:
- Let limits be this.[[limits]].
- Return true if all of the following requirements are met, and false otherwise:
  - this must not be lost.
  - descriptor.usage must not be 0.
  - descriptor.usage must contain only bits present in this’s allowed texture usages.
  - descriptor.size.width, descriptor.size.height, and descriptor.size.depthOrArrayLayers must be > zero.
  - descriptor.mipLevelCount must be > zero.
  - descriptor.sampleCount must be either 1 or 4.
  - If descriptor.dimension is:
    "1d"
    - descriptor.size.width must be ≤ limits.maxTextureDimension1D.
    - descriptor.size.depthOrArrayLayers must be 1.
    - descriptor.sampleCount must be 1.
    - descriptor.format must not be a compressed format or depth-or-stencil format.
    "2d"
    - descriptor.size.width must be ≤ limits.maxTextureDimension2D.
    - descriptor.size.height must be ≤ limits.maxTextureDimension2D.
    - descriptor.size.depthOrArrayLayers must be ≤ limits.maxTextureArrayLayers.
    "3d"
    - descriptor.size.width must be ≤ limits.maxTextureDimension3D.
    - descriptor.size.height must be ≤ limits.maxTextureDimension3D.
    - descriptor.size.depthOrArrayLayers must be ≤ limits.maxTextureDimension3D.
    - descriptor.sampleCount must be 1.
    - descriptor.format must support "3d" textures according to § 26.1 Texture Format Capabilities.
  - If this.[[features]] does not contain "core-features-and-limits":
    - If descriptor.textureBindingViewDimension is "2d", descriptor.size.depthOrArrayLayers must be 1.
    - If descriptor.textureBindingViewDimension is "cube", descriptor.size.depthOrArrayLayers must be 6.
    - descriptor.textureBindingViewDimension must not be "cube-array".
    Note: This validation only applies to a user-specified textureBindingViewDimension. If no value is provided, the texture’s textureBindingViewDimension is set as described in createTexture(). That algorithm cannot produce invalid values, so the above validation is not required.
  - descriptor.size.width must be a multiple of the texel block width.
  - descriptor.size.height must be a multiple of the texel block height.
  - If descriptor.sampleCount > 1:
    - descriptor.mipLevelCount must be 1.
    - descriptor.size.depthOrArrayLayers must be 1.
    - descriptor.usage must not include the STORAGE_BINDING bit.
    - descriptor.usage must include the RENDER_ATTACHMENT bit.
    - descriptor.format must support multisampling according to § 26.1 Texture Format Capabilities.
  - descriptor.mipLevelCount must be ≤ maximum mipLevel count(descriptor.dimension, descriptor.size).
  - If descriptor.usage includes the RENDER_ATTACHMENT bit:
    - descriptor.format must be a renderable format.
  - If descriptor.usage includes the STORAGE_BINDING bit:
    - descriptor.format must be listed in § 26.1.1 Plain color formats with STORAGE_BINDING capability for at least one access mode.
  - If descriptor.usage includes the TRANSIENT_ATTACHMENT bit:
    - descriptor.usage must be equal to TRANSIENT_ATTACHMENT | RENDER_ATTACHMENT.
    - descriptor.mipLevelCount must be 1.
    - descriptor.size.depthOrArrayLayers must be 1.
  - For each viewFormat in descriptor.viewFormats, descriptor.format and viewFormat must be texture view format compatible on device this.
    Note: Implementations may consider issuing a developer-visible warning if viewFormat is not compatible with any of the given usage bits, as that viewFormat will be unusable.
-
const texture = gpuDevice.createTexture({
  size: { width: 16, height: 16 },
  format: 'rgba8unorm',
  usage: GPUTextureUsage.TEXTURE_BINDING,
});
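The "maximum mipLevel count" referenced by the validation above can be illustrated in plain JavaScript. This is a sketch of the computation (the function name is hypothetical): "1d" textures cannot have mipmaps, while "2d" and "3d" textures allow a full chain down to a single texel ("2d" array layers do not participate in the mip chain).

```javascript
// Sketch: how many mip levels a texture of the given dimension/size allows.
function maxMipLevelCount(dimension, size) {
  switch (dimension) {
    case '1d':
      return 1; // "1d" textures cannot have mipmaps.
    case '2d':
      return Math.floor(Math.log2(Math.max(size.width, size.height))) + 1;
    case '3d':
      return Math.floor(Math.log2(
        Math.max(size.width, size.height, size.depthOrArrayLayers))) + 1;
  }
}
```

For example, a 16×16 "2d" texture allows up to 5 mip levels (16, 8, 4, 2, 1).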
6.1.4. Texture Destruction
An application that no longer requires a GPUTexture can choose to lose access to it before
garbage collection by calling destroy().
Note: This allows the user agent to reclaim the GPU memory associated with the GPUTexture once
all previously submitted operations using it are complete.
GPUTexture has the following methods:
destroy()
- Destroys the GPUTexture.
Called on: GPUTexture this.
Returns: undefined
Content timeline steps:
- Issue the subsequent steps on the device timeline.
Device timeline steps:
- Set this.[[destroyed]] to true.
6.2. GPUTextureView
A GPUTextureView is a view onto some subset of the texture subresources defined by
a particular GPUTexture.
[Exposed=(Window, Worker), SecureContext]
interface GPUTextureView {
};
GPUTextureView includes GPUObjectBase;
GPUTextureView has the following immutable properties:
[[texture]], readonly
- The GPUTexture into which this is a view.
[[descriptor]], readonly
- The GPUTextureViewDescriptor describing this texture view.
  All optional fields of GPUTextureViewDescriptor are defined.
[[renderExtent]], readonly
- For renderable views, this is the effective GPUExtent3DDict for rendering.
  Note: This extent depends on the baseMipLevel.
The set of subresources of a texture view view, with [[descriptor]] desc, is the subset of the subresources of view.[[texture]] for which each subresource s satisfies the following:
- The mipmap level of s is ≥ desc.baseMipLevel and < desc.baseMipLevel + desc.mipLevelCount.
- The array layer of s is ≥ desc.baseArrayLayer and < desc.baseArrayLayer + desc.arrayLayerCount.
- The aspect of s is in the set of aspects of desc.aspect.
Two GPUTextureView objects are texture-view-aliasing if and only if
their sets of subresources intersect.
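The aliasing check can be sketched as interval intersection over the mip and array ranges. This is a simplified illustration with hypothetical helper names; it assumes both descriptors are fully resolved views of the same texture, and it omits the aspect-set intersection for brevity.

```javascript
// Half-open interval overlap: [start1, start1+count1) vs [start2, start2+count2).
function rangesOverlap(start1, count1, start2, count2) {
  return start1 < start2 + count2 && start2 < start1 + count1;
}

// Sketch: do two resolved view descriptors of the SAME texture alias?
// (A complete check would also intersect the aspect sets.)
function viewsAlias(a, b) {
  return rangesOverlap(a.baseMipLevel, a.mipLevelCount,
                       b.baseMipLevel, b.mipLevelCount) &&
         rangesOverlap(a.baseArrayLayer, a.arrayLayerCount,
                       b.baseArrayLayer, b.arrayLayerCount);
}
```

Two views alias only if both their mip ranges and their array-layer ranges intersect; disjoint layers alone are enough to make them non-aliasing.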
6.2.1. Texture View Creation
dictionary GPUTextureViewDescriptor : GPUObjectDescriptorBase {
    GPUTextureFormat format;
    GPUTextureViewDimension dimension;
    GPUTextureUsageFlags usage = 0;
    GPUTextureAspect aspect = "all";
    GPUIntegerCoordinate baseMipLevel = 0;
    GPUIntegerCoordinate mipLevelCount;
    GPUIntegerCoordinate baseArrayLayer = 0;
    GPUIntegerCoordinate arrayLayerCount;
    // Requires "texture-component-swizzle" feature.
    DOMString swizzle = "rgba";
};
GPUTextureViewDescriptor has the following members:
format, of type GPUTextureFormat
- The format of the texture view. Must be either the format of the texture or one of the viewFormats specified during its creation.
dimension, of type GPUTextureViewDimension
- The dimension to view the texture as.
usage, of type GPUTextureUsageFlags, defaulting to 0
- The allowed usage(s) for the texture view. Must be a subset of the usage flags of the texture. If 0, defaults to the full set of usage flags of the texture.
  Note: If the view’s format doesn’t support all of the texture’s usages, the default will fail, and the view’s usage must be specified explicitly.
aspect, of type GPUTextureAspect, defaulting to "all"
- Which aspect(s) of the texture are accessible to the texture view.
baseMipLevel, of type GPUIntegerCoordinate, defaulting to 0
- The first (most detailed) mipmap level accessible to the texture view.
mipLevelCount, of type GPUIntegerCoordinate
- How many mipmap levels, starting with baseMipLevel, are accessible to the texture view.
baseArrayLayer, of type GPUIntegerCoordinate, defaulting to 0
- The index of the first array layer accessible to the texture view.
arrayLayerCount, of type GPUIntegerCoordinate
- How many array layers, starting with baseArrayLayer, are accessible to the texture view.
swizzle, of type DOMString, defaulting to "rgba"
- A string of length four, with each character mapping to the texture view’s red/green/blue/alpha channels, respectively.
  When accessed by a shader, the red/green/blue/alpha channels are replaced by the value corresponding to the component specified in swizzle[0], swizzle[1], swizzle[2], and swizzle[3], respectively:
  - "r": Take its value from the red channel of the texture.
  - "g": Take its value from the green channel of the texture.
  - "b": Take its value from the blue channel of the texture.
  - "a": Take its value from the alpha channel of the texture.
  - "0": Force its value to 0.
  - "1": Force its value to 1.
  Requires the "texture-component-swizzle" feature to be enabled.
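The remapping rules above can be illustrated in plain JavaScript (this models the shader-visible effect only; it is not part of the API, and the function name is hypothetical):

```javascript
// texel is [r, g, b, a]; swizzle is a 4-character string like "bgra".
function applySwizzle(texel, swizzle) {
  const channel = { r: 0, g: 1, b: 2, a: 3 };
  return [...swizzle].map(c =>
    c === '0' ? 0 :
    c === '1' ? 1 :
    texel[channel[c]]);
}
```

For example, a swizzle of "bgra" swaps the red and blue channels, and "rgb1" forces the alpha channel to 1.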
enum GPUTextureViewDimension {
    "1d",
    "2d",
    "2d-array",
    "cube",
    "cube-array",
    "3d",
};
"1d"
- The texture is viewed as a 1-dimensional image.
  Corresponding WGSL types:
  - texture_1d
  - texture_storage_1d
"2d"
- The texture is viewed as a single 2-dimensional image.
  Corresponding WGSL types:
  - texture_2d
  - texture_storage_2d
  - texture_multisampled_2d
  - texture_depth_2d
  - texture_depth_multisampled_2d
"2d-array"
- The texture is viewed as an array of 2-dimensional images.
  Corresponding WGSL types:
  - texture_2d_array
  - texture_storage_2d_array
  - texture_depth_2d_array
"cube"
- The texture is viewed as a cubemap. The view has 6 array layers, each corresponding to a face of the cube in the order [+X, -X, +Y, -Y, +Z, -Z] and the following orientations:
  Cubemap faces. The +U/+V axes indicate the individual faces' texture coordinates, and thus the texel copy memory layout of each face.
  Note: When viewed from the inside, this results in a left-handed coordinate system where +X is right, +Y is up, and +Z is forward.
  Sampling is done seamlessly across the faces of the cubemap.
  Corresponding WGSL types:
  - texture_cube
  - texture_depth_cube
"cube-array"
- The texture is viewed as a packed array of n cubemaps, each with 6 array layers behaving like one "cube" view, for 6n array layers in total.
  Corresponding WGSL types:
  - texture_cube_array
  - texture_depth_cube_array
"3d"
- The texture is viewed as a 3-dimensional image.
  Corresponding WGSL types:
  - texture_3d
  - texture_storage_3d
Each GPUTextureAspect value corresponds to a set of aspects. The set of aspects is defined for each value below.
enum GPUTextureAspect {
    "all",
    "stencil-only",
    "depth-only",
};
"all"
- All available aspects of the texture format will be accessible to the texture view. For color formats the color aspect will be accessible. For combined depth-stencil formats both the depth and stencil aspects will be accessible. Depth-or-stencil formats with a single aspect will only make that aspect accessible.
  The set of aspects is [color, depth, stencil].
"stencil-only"
- Only the stencil aspect of a depth-or-stencil format will be accessible to the texture view.
  The set of aspects is [stencil].
"depth-only"
- Only the depth aspect of a depth-or-stencil format will be accessible to the texture view.
  The set of aspects is [depth].
createView(descriptor)
- Creates a GPUTextureView.
  Note: By default createView() will create a view with a dimension that can represent the entire texture. For example, calling createView() without specifying a dimension on a "2d" texture with more than one layer will create a "2d-array" GPUTextureView, even if an arrayLayerCount of 1 is specified.
  For textures created from sources where the layer count is unknown at the time of development, it is recommended that calls to createView() are provided with an explicit dimension to ensure shader compatibility.
Called on: GPUTexture this.
Arguments:
Arguments for the GPUTexture.createView(descriptor) method:
  descriptor — GPUTextureViewDescriptor, not nullable, optional. Description of the GPUTextureView to create.
Returns: view, of type GPUTextureView.
Content timeline steps:
- ? Validate texture format required features of descriptor.format with this.[[device]].
- ? Validate swizzle string of descriptor.swizzle.
- Let view be ! create a new WebGPU object(this, GPUTextureView, descriptor).
- Issue the initialization steps on the Device timeline of this.
- Return view.
Device timeline initialization steps:
- Set descriptor to the result of resolving GPUTextureViewDescriptor defaults for this with descriptor.
- If any of the following conditions are unsatisfied, generate a validation error, invalidate view, and return:
  - this is valid to use with this.[[device]].
  - If descriptor.aspect is "all":
    - descriptor.format must equal either this.format or one of the formats in this.[[viewFormats]].
    Otherwise:
    - descriptor.format must equal the result of resolving GPUTextureAspect(this.format, descriptor.aspect).
  - If descriptor.swizzle is not "rgba", "texture-component-swizzle" must be enabled for this.[[device]].
  - If descriptor.usage includes the RENDER_ATTACHMENT bit:
    - descriptor.format must be a renderable format.
  - If descriptor.usage includes the STORAGE_BINDING bit:
    - descriptor.format must be listed in § 26.1.1 Plain color formats with STORAGE_BINDING capability for at least one access mode.
  - descriptor.mipLevelCount must be > 0.
  - descriptor.baseMipLevel + descriptor.mipLevelCount must be ≤ this.mipLevelCount.
  - descriptor.arrayLayerCount must be > 0.
  - descriptor.baseArrayLayer + descriptor.arrayLayerCount must be ≤ the array layer count of this.
  - If this.sampleCount > 1, descriptor.dimension must be "2d".
  - If descriptor.dimension is:
    "1d"
    - descriptor.arrayLayerCount must be 1.
    "2d"
    - descriptor.arrayLayerCount must be 1.
    "2d-array"
    - (no additional arrayLayerCount requirement)
    "cube"
    - descriptor.arrayLayerCount must be 6.
    "cube-array"
    - descriptor.arrayLayerCount must be a multiple of 6.
    "3d"
    - descriptor.arrayLayerCount must be 1.
- Let view be a new GPUTextureView object.
- Set view.[[texture]] to this.
- Set view.[[descriptor]] to descriptor.
- If descriptor.usage contains RENDER_ATTACHMENT:
  - Let renderExtent be compute render extent([this.width, this.height, this.depthOrArrayLayers], descriptor.baseMipLevel).
  - Set view.[[renderExtent]] to renderExtent.
To resolve GPUTextureViewDescriptor defaults for a GPUTexture texture with a GPUTextureViewDescriptor descriptor, run the following device timeline steps:
- Let resolved be a copy of descriptor.
- If resolved.mipLevelCount is not provided: set resolved.mipLevelCount to texture.mipLevelCount − resolved.baseMipLevel.
- If resolved.dimension is not provided and texture.dimension is:
  "1d"
  - Set resolved.dimension to "1d".
  "2d"
  - Set resolved.dimension to "2d" if the array layer count of texture is 1, and "2d-array" otherwise.
  "3d"
  - Set resolved.dimension to "3d".
- If resolved.arrayLayerCount is not provided and resolved.dimension is:
  "1d", "2d", or "3d"
  - Set resolved.arrayLayerCount to 1.
  "cube"
  - Set resolved.arrayLayerCount to 6.
  "2d-array" or "cube-array"
  - Set resolved.arrayLayerCount to the array layer count of texture − resolved.baseArrayLayer.
- If resolved.usage is 0: set resolved.usage to texture.usage.
- Return resolved.
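The defaulting steps for mipLevelCount, arrayLayerCount, and usage can be sketched in JavaScript (an illustration with hypothetical names, not part of the API; dimension defaulting is omitted here and the descriptor is assumed to carry an explicit dimension):

```javascript
// The "array layer count" of a texture: depthOrArrayLayers for "2d", else 1.
function arrayLayerCountOf(texture) {
  return texture.dimension === '2d' ? texture.depthOrArrayLayers : 1;
}

// Sketch of resolving GPUTextureViewDescriptor defaults.
function resolveViewDefaults(texture, descriptor) {
  const resolved = { baseMipLevel: 0, baseArrayLayer: 0, usage: 0, ...descriptor };
  if (resolved.mipLevelCount === undefined) {
    resolved.mipLevelCount = texture.mipLevelCount - resolved.baseMipLevel;
  }
  if (resolved.arrayLayerCount === undefined) {
    switch (resolved.dimension) {
      case '1d': case '2d': case '3d':
        resolved.arrayLayerCount = 1;
        break;
      case 'cube':
        resolved.arrayLayerCount = 6;
        break;
      case '2d-array': case 'cube-array':
        resolved.arrayLayerCount = arrayLayerCountOf(texture) - resolved.baseArrayLayer;
        break;
    }
  }
  if (resolved.usage === 0) resolved.usage = texture.usage;
  return resolved;
}
```

For instance, a "2d-array" view of an 8-layer texture with baseMipLevel 1 defaults to the remaining mip levels, all 8 layers, and the texture's full usage.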
The array layer count of a GPUTexture texture is determined by running the following steps:
- If texture.dimension is:
  "1d" or "3d"
  - Return 1.
  "2d"
  - Return texture.depthOrArrayLayers.
To validate swizzle string of a DOMString swizzle, run the following content timeline steps:
- If swizzle does not match the [ECMAScript] regexp ^[rgba01]{4}$:
  - Throw a TypeError.
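The check above translates directly to JavaScript (the function name is hypothetical):

```javascript
// Throws a TypeError unless swizzle is exactly four characters from [rgba01].
function validateSwizzleString(swizzle) {
  if (!/^[rgba01]{4}$/.test(swizzle)) {
    throw new TypeError(`invalid swizzle: ${swizzle}`);
  }
}
```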
6.3. Texture Formats
The name of the format specifies the order of components, bits per component, and data type for the component.
- r, g, b, a = red, green, blue, alpha
- unorm = unsigned normalized
- snorm = signed normalized
- uint = unsigned int
- sint = signed int
- float = floating point
If the format has the -srgb suffix, then sRGB conversions from gamma to linear
and vice versa are applied during the reading and writing of color values in the
shader. Compressed texture formats are provided by features. Their naming
should follow the convention here, with the texture name as a prefix. e.g.
etc2-rgba8unorm.
The texel block is a single addressable element of the textures in pixel-based GPUTextureFormats, and a single compressed block of the textures in block-based compressed GPUTextureFormats. The texel block width and texel block height specify the dimensions of one texel block.
- For pixel-based GPUTextureFormats, the texel block width and texel block height are always 1.
- For block-based compressed GPUTextureFormats, the texel block width is the number of texels in each row of one texel block, and the texel block height is the number of texel rows in one texel block. See § 26.1 Texture Format Capabilities for an exhaustive list of values for every texture format.
The texel block copy footprint of an aspect of a GPUTextureFormat is the number of bytes one texel block occupies during a texel copy, if applicable.
Note: The texel block memory cost of a GPUTextureFormat is the number of bytes needed to store one texel block. It is not fully defined for all formats. This value is informative and non-normative.
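As an illustration of these definitions, the tightly packed byte width of one row of texel blocks can be computed from the per-format values in the capability tables. For example, "rgba8unorm" has 1×1 texel blocks with a 4-byte copy footprint, and "bc1-rgba-unorm" has 4×4 texel blocks with an 8-byte copy footprint. The helper below is a sketch, not part of the API:

```javascript
// Minimum tightly packed bytes per row of texel blocks for a copy.
// blockWidth and blockBytes are the format's texel block width and
// texel block copy footprint from the capability tables.
function tightBytesPerRow(widthInTexels, blockWidth, blockBytes) {
  const blocksPerRow = Math.ceil(widthInTexels / blockWidth);
  return blocksPerRow * blockBytes;
}
```

Note that actual buffer copies impose additional alignment on bytesPerRow beyond this tightly packed minimum.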
enum GPUTextureFormat {
    // 8-bit formats
    "r8unorm",
    "r8snorm",
    "r8uint",
    "r8sint",

    // 16-bit formats
    "r16unorm",
    "r16snorm",
    "r16uint",
    "r16sint",
    "r16float",
    "rg8unorm",
    "rg8snorm",
    "rg8uint",
    "rg8sint",

    // 32-bit formats
    "r32uint",
    "r32sint",
    "r32float",
    "rg16unorm",
    "rg16snorm",
    "rg16uint",
    "rg16sint",
    "rg16float",
    "rgba8unorm",
    "rgba8unorm-srgb",
    "rgba8snorm",
    "rgba8uint",
    "rgba8sint",
    "bgra8unorm",
    "bgra8unorm-srgb",
    // Packed 32-bit formats
    "rgb9e5ufloat",
    "rgb10a2uint",
    "rgb10a2unorm",
    "rg11b10ufloat",

    // 64-bit formats
    "rg32uint",
    "rg32sint",
    "rg32float",
    "rgba16unorm",
    "rgba16snorm",
    "rgba16uint",
    "rgba16sint",
    "rgba16float",

    // 128-bit formats
    "rgba32uint",
    "rgba32sint",
    "rgba32float",

    // Depth/stencil formats
    "stencil8",
    "depth16unorm",
    "depth24plus",
    "depth24plus-stencil8",
    "depth32float",

    // "depth32float-stencil8" feature
    "depth32float-stencil8",

    // BC compressed formats usable if "texture-compression-bc" is both
    // supported by the device/user agent and enabled in requestDevice.
    "bc1-rgba-unorm",
    "bc1-rgba-unorm-srgb",
    "bc2-rgba-unorm",
    "bc2-rgba-unorm-srgb",
    "bc3-rgba-unorm",
    "bc3-rgba-unorm-srgb",
    "bc4-r-unorm",
    "bc4-r-snorm",
    "bc5-rg-unorm",
    "bc5-rg-snorm",
    "bc6h-rgb-ufloat",
    "bc6h-rgb-float",
    "bc7-rgba-unorm",
    "bc7-rgba-unorm-srgb",

    // ETC2 compressed formats usable if "texture-compression-etc2" is both
    // supported by the device/user agent and enabled in requestDevice.
    "etc2-rgb8unorm",
    "etc2-rgb8unorm-srgb",
    "etc2-rgb8a1unorm",
    "etc2-rgb8a1unorm-srgb",
    "etc2-rgba8unorm",
    "etc2-rgba8unorm-srgb",
    "eac-r11unorm",
    "eac-r11snorm",
    "eac-rg11unorm",
    "eac-rg11snorm",

    // ASTC compressed formats usable if "texture-compression-astc" is both
    // supported by the device/user agent and enabled in requestDevice.
    "astc-4x4-unorm",
    "astc-4x4-unorm-srgb",
    "astc-5x4-unorm",
    "astc-5x4-unorm-srgb",
    "astc-5x5-unorm",
    "astc-5x5-unorm-srgb",
    "astc-6x5-unorm",
    "astc-6x5-unorm-srgb",
    "astc-6x6-unorm",
    "astc-6x6-unorm-srgb",
    "astc-8x5-unorm",
    "astc-8x5-unorm-srgb",
    "astc-8x6-unorm",
    "astc-8x6-unorm-srgb",
    "astc-8x8-unorm",
    "astc-8x8-unorm-srgb",
    "astc-10x5-unorm",
    "astc-10x5-unorm-srgb",
    "astc-10x6-unorm",
    "astc-10x6-unorm-srgb",
    "astc-10x8-unorm",
    "astc-10x8-unorm-srgb",
    "astc-10x10-unorm",
    "astc-10x10-unorm-srgb",
    "astc-12x10-unorm",
    "astc-12x10-unorm-srgb",
    "astc-12x12-unorm",
    "astc-12x12-unorm-srgb",
};
The depth component of the "depth24plus" and "depth24plus-stencil8"
formats may be implemented as either a 24-bit depth value or a "depth32float" value.
The stencil8 format may be implemented as
either a real "stencil8", or "depth24stencil8", where the depth aspect is
hidden and inaccessible.
- For 24-bit depth, 1 ULP has a constant value of 1 / (2^24 − 1).
- For depth32float, 1 ULP has a variable value no greater than 1 / (2^24).
A format is renderable if it is either a color renderable format, or a depth-or-stencil format.
If a format is listed in § 26.1.1 Plain color formats with RENDER_ATTACHMENT capability, it is a
color renderable format. Any other format is not a color renderable format.
All depth-or-stencil formats are renderable.
A renderable format is also blendable if it can be used with render pipeline blending. See § 26.1 Texture Format Capabilities.
A format is filterable if it supports the
GPUTextureSampleType "float"
(not just "unfilterable-float");
that is, it can be used with "filtering" GPUSamplers.
See § 26.1 Texture Format Capabilities.
To resolve GPUTextureAspect(format, aspect):
Arguments:
- GPUTextureFormat format
- GPUTextureAspect aspect
Returns: GPUTextureFormat or null
- If aspect is:
  "all"
  - Return format.
  "depth-only"
  "stencil-only"
  - If format is a depth-stencil-format: Return the aspect-specific format of format according to § 26.1.2 Depth-stencil formats, or null if the aspect is not present in format.
- Return null.
Use of some texture formats requires a feature to be enabled on the GPUDevice. Because new formats can be added to the specification, those enum values might not be known to the implementation. To normalize behavior across implementations, attempting to use a format that requires a feature throws an exception if the associated feature is not enabled on the device. This makes the behavior the same as when the format is unknown to the implementation.
See § 26.1 Texture Format Capabilities for information about which GPUTextureFormats require features.
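In practice, an application checks the device's feature set before selecting a feature-gated format. A sketch (the feature Set here is a stand-in; in real code it is device.features, which is setlike):

```javascript
// Hypothetical feature set; in real code, pass device.features.
const features = new Set(['texture-compression-bc']);

// Pick a compressed format whose required feature is enabled,
// falling back to an always-available format.
function pickCompressedFormat(features) {
  if (features.has('texture-compression-bc')) return 'bc3-rgba-unorm';
  if (features.has('texture-compression-etc2')) return 'etc2-rgba8unorm';
  if (features.has('texture-compression-astc')) return 'astc-4x4-unorm';
  return 'rgba8unorm'; // requires no feature
}
```

Remember that a feature must also have been requested in requestDevice() for it to appear in device.features.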
To validate texture format required features of a GPUTextureFormat format with logical device device, run the following content timeline steps:
- If format requires a feature and device.[[features]] does not contain the feature:
  - Throw a TypeError.
6.4. GPUExternalTexture
A GPUExternalTexture is a sampleable 2D texture wrapping an external video frame.
It is an immutable snapshot; its contents cannot change over time, either from inside WebGPU
(it is only sampleable) or from outside WebGPU (e.g. due to video frame advancement).
GPUExternalTextures can be bound into bind groups via the
externalTexture bind group layout entry member.
Note that this member uses several binding slots, as defined there.
GPUExternalTexture can be implemented without creating a copy of the imported source,
but this depends on implementation-defined factors.
Ownership of the underlying representation may either be exclusive or shared with other
owners (such as a video decoder), but this is not visible to the application.
The underlying representation of an external texture is unobservable (except for precise sampling behavior), but typically may include:
-
Up to three 2D planes of data (e.g. RGBA, Y+UV, Y+U+V).
-
Metadata for converting coordinates before reading from those planes (crop and rotation).
-
Metadata for converting values into the specified output color space (matrices, gammas, 3D LUT).
The configuration used internally by an implementation may be inconsistent across time, systems, user agents, media sources, or even frames within a single video source. In order to account for many possible representations, the binding conservatively uses the following, for each external texture:
-
three sampled texture bindings (for up to 3 planes),
-
one sampled texture binding for a 3D LUT,
-
one sampler binding to sample the 3D LUT, and
-
one uniform buffer binding for metadata.
[Exposed=(Window, Worker), SecureContext]
interface GPUExternalTexture {
};
GPUExternalTexture includes GPUObjectBase;
GPUExternalTexture has the following immutable properties:
[[descriptor]], of type GPUExternalTextureDescriptor, readonly
- The descriptor with which the texture was created.
GPUExternalTexture has the following internal slots:
[[expired]], of type boolean, initially false
- Indicates whether the object has expired (can no longer be used).
  Note: Unlike [[destroyed]] slots, which are similar, this can change from true back to false.
6.4.1. Importing External Textures
An external texture is created from an external video object
using importExternalTexture().
An external texture created from an HTMLVideoElement expires (is destroyed) automatically in a
task after it is imported, instead of manually or upon garbage collection like other resources.
When an external texture expires, its [[expired]] slot changes to true.
An external texture created from a VideoFrame expires (is destroyed) when, and only when,
the source VideoFrame is closed,
either explicitly by close(), or by other means.
Note: As noted in decode(), authors should call
close() on output VideoFrames to avoid decoder stalls.
If an imported VideoFrame is dropped without being closed, the imported
GPUExternalTexture object will keep it alive until it is also dropped.
The VideoFrame cannot be garbage collected until both objects are dropped.
Garbage collection is unpredictable, so this may still stall the video decoder.
Once the GPUExternalTexture expires, importExternalTexture() must be called again.
However, the user agent may un-expire and return the same GPUExternalTexture again, instead of
creating a new one. This will commonly happen unless the execution of the application is scheduled
to match the video’s frame rate (e.g. using requestVideoFrameCallback()).
If the same object is returned again, it will compare equal, and GPUBindGroups,
GPURenderBundles, etc. referencing the previous object can still be used.
dictionary GPUExternalTextureDescriptor : GPUObjectDescriptorBase {
    required (HTMLVideoElement or VideoFrame) source;
    PredefinedColorSpace colorSpace = "srgb";
};
GPUExternalTextureDescriptor dictionaries have the following members:
source, of type (HTMLVideoElement or VideoFrame)
- The video source to import the external texture from. Source size is determined as described by the external source dimensions table.
colorSpace, of type PredefinedColorSpace, defaulting to "srgb"
- The color space the image contents of source will be converted into when reading.
importExternalTexture(descriptor)
- Creates a GPUExternalTexture wrapping the provided image source.
Called on: GPUDevice this.
Arguments:
Arguments for the GPUDevice.importExternalTexture(descriptor) method:
  descriptor — GPUExternalTextureDescriptor, not nullable, not optional. Provides the external image source object (and any creation options).
Returns: GPUExternalTexture
Content timeline steps:
- Let source be descriptor.source.
- If the current image contents of source are the same as the most recent importExternalTexture() call with the same descriptor (ignoring label), and the user agent chooses to reuse it:
  - Let previousResult be the GPUExternalTexture returned previously.
  - Set previousResult.[[expired]] to false, renewing ownership of the underlying resource.
  - Let result be previousResult.
  Note: This allows the application to detect duplicate imports and avoid re-creating dependent objects (such as GPUBindGroups). Implementations still need to be able to handle a single frame being wrapped by multiple GPUExternalTextures, since import metadata like colorSpace can change even for the same frame.
  Otherwise:
  - If source is not origin-clean, throw a SecurityError and return.
  - Let usability be ? check the usability of the image argument(source).
  - If usability is not good:
    - Return an invalidated GPUExternalTexture.
  - Let data be the result of converting the current image contents of source into the color space descriptor.colorSpace with unpremultiplied alpha.
    This may result in values outside of the range [0, 1]. If clamping is desired, it may be performed after sampling.
    Note: This is described like a copy, but may be implemented as a reference to read-only underlying data plus appropriate metadata to perform conversion later.
  - Let result be a new GPUExternalTexture object wrapping data.
- If source is an HTMLVideoElement, queue an automatic expiry task with device this and the following steps:
  - Set result.[[expired]] to true, releasing ownership of the underlying resource.
  Note: An HTMLVideoElement should be imported in the same task that samples the texture (which should generally be scheduled using requestVideoFrameCallback() or requestAnimationFrame(), depending on the application). Otherwise, a texture could get destroyed by these steps before the application is finished using it.
- If source is a VideoFrame, then when source is closed, run the following steps:
  - Set result.[[expired]] to true.
- Return result.
const videoElement = document.createElement('video');
// ... set up videoElement, wait for it to be ready...

function frame() {
  requestAnimationFrame(frame);

  // Always re-import the video on every animation frame, because the
  // import is likely to have expired.
  // The browser may cache and reuse a past frame, and if it does it
  // may return the same GPUExternalTexture object again.
  // In this case, old bind groups are still valid.
  const externalTexture = gpuDevice.importExternalTexture({ source: videoElement });

  // ... render using externalTexture...
}
requestAnimationFrame(frame);
The same pattern, in cases where requestVideoFrameCallback is available:
const videoElement = document.createElement('video');
// ... set up videoElement...

function frame() {
  videoElement.requestVideoFrameCallback(frame);

  // Always re-import, because we know the video frame has advanced.
  const externalTexture = gpuDevice.importExternalTexture({ source: videoElement });

  // ... render using externalTexture...
}
videoElement.requestVideoFrameCallback(frame);
6.5. Sampling External Texture Bindings
The externalTexture binding point allows binding GPUExternalTexture
objects (from dynamic image sources like videos). It also supports GPUTexture and GPUTextureView.
Note:
When a GPUTexture or a GPUTextureView is bound to an externalTexture
binding, it is like a GPUExternalTexture with a single RGBA plane and no crop, rotation, or color
conversion.
External textures are represented in WGSL with texture_external and may be read using
textureLoad and textureSampleBaseClampToEdge.
The sampler provided to textureSampleBaseClampToEdge is used to sample the underlying textures.
When the binding resource type is a GPUExternalTexture, the result is in the color space set
by colorSpace.
It is implementation-dependent whether, for any given external texture, the sampler (and filtering)
is applied before or after conversion from underlying values into the specified color space.
Note: If the internal representation is an RGBA plane, sampling behaves as on a regular 2D texture. If there are several underlying planes (e.g. Y+UV), the sampler is used to sample each underlying texture separately, prior to conversion from YUV to the specified color space.
7. Samplers
7.1. GPUSampler
A GPUSampler encodes transformations and filtering information that can
be used in a shader to interpret texture resource data.
GPUSamplers are created via createSampler().
[Exposed=(Window, Worker), SecureContext]
interface GPUSampler {
};
GPUSampler includes GPUObjectBase;
GPUSampler has the following immutable properties:
[[descriptor]], of type GPUSamplerDescriptor, readonly
    The GPUSamplerDescriptor with which the GPUSampler was created.
[[isComparison]], of type boolean, readonly
    Whether the GPUSampler is used as a comparison sampler.
[[isFiltering]], of type boolean, readonly
    Whether the GPUSampler weights multiple samples of a texture.
7.1.1. GPUSamplerDescriptor
A GPUSamplerDescriptor specifies the options to use to create a GPUSampler.
dictionary GPUSamplerDescriptor : GPUObjectDescriptorBase {
    GPUAddressMode addressModeU = "clamp-to-edge";
    GPUAddressMode addressModeV = "clamp-to-edge";
    GPUAddressMode addressModeW = "clamp-to-edge";
    GPUFilterMode magFilter = "nearest";
    GPUFilterMode minFilter = "nearest";
    GPUMipmapFilterMode mipmapFilter = "nearest";
    float lodMinClamp = 0;
    float lodMaxClamp = 32;
    GPUCompareFunction compare;
    [Clamp] unsigned short maxAnisotropy = 1;
};
addressModeU, of type GPUAddressMode, defaulting to "clamp-to-edge"
addressModeV, of type GPUAddressMode, defaulting to "clamp-to-edge"
addressModeW, of type GPUAddressMode, defaulting to "clamp-to-edge"
    Specifies the address modes for the texture width, height, and depth coordinates, respectively.
magFilter, of type GPUFilterMode, defaulting to "nearest"
    Specifies the sampling behavior when the sampled area is smaller than or equal to one texel.
minFilter, of type GPUFilterMode, defaulting to "nearest"
    Specifies the sampling behavior when the sampled area is larger than one texel.
mipmapFilter, of type GPUMipmapFilterMode, defaulting to "nearest"
    Specifies behavior for sampling between mipmap levels.
lodMinClamp, of type float, defaulting to 0
lodMaxClamp, of type float, defaulting to 32
    Specifies the minimum and maximum levels of detail, respectively, used internally when sampling a texture.
compare, of type GPUCompareFunction
    When provided, the sampler will be a comparison sampler with the specified GPUCompareFunction.

    Note: Comparison samplers may use filtering, but the sampling results will be implementation-dependent and may differ from the normal filtering rules.
maxAnisotropy, of type unsigned short, defaulting to 1
    Specifies the maximum anisotropy value clamp used by the sampler. Anisotropic filtering is enabled when maxAnisotropy is > 1 and the implementation supports it.

    Anisotropic filtering improves the image quality of textures sampled at oblique viewing angles. Higher maxAnisotropy values indicate the maximum ratio of anisotropy supported when filtering.

    Note: Most implementations support maxAnisotropy values in range between 1 and 16, inclusive. The used value of maxAnisotropy will be clamped to the maximum value that the platform supports. The precise filtering behavior is implementation-dependent.
Level of detail (LOD) describes which mip level(s) are selected when sampling a texture. It may be specified explicitly through shader methods like textureSampleLevel or implicitly determined from the texture coordinate derivatives.
Note: See Scale Factor Operation, LOD Operation and Image Level Selection in the Vulkan 1.3 spec for an example of how implicit LODs may be calculated.
GPUAddressMode describes the behavior of the sampler if the sampled texels extend beyond the
bounds of the sampled texture.
enum GPUAddressMode {
    "clamp-to-edge",
    "repeat",
    "mirror-repeat",
};
"clamp-to-edge"-
Texture coordinates are clamped between 0.0 and 1.0, inclusive.
"repeat"-
Texture coordinates wrap to the other side of the texture.
"mirror-repeat"-
Texture coordinates wrap to the other side of the texture, but the texture is flipped when the integer part of the coordinate is odd.
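As a non-normative illustration, the three address modes can be sketched as a plain function mapping a normalized texture coordinate into [0, 1] (applyAddressMode is a hypothetical helper, not part of the API):

```javascript
// Hypothetical sketch: how each GPUAddressMode could map a normalized
// coordinate into the [0, 1] range before a texel is fetched.
function applyAddressMode(coord, mode) {
  switch (mode) {
    case 'clamp-to-edge':
      // Clamp into [0, 1], inclusive.
      return Math.min(Math.max(coord, 0), 1);
    case 'repeat':
      // Keep only the fractional part, wrapping to the other side.
      return coord - Math.floor(coord);
    case 'mirror-repeat': {
      // Flip the fractional part when the integer part is odd.
      const whole = Math.floor(coord);
      const frac = coord - whole;
      return (Math.abs(whole) % 2 === 1) ? 1 - frac : frac;
    }
  }
}
```

For example, a coordinate of 1.25 samples at 0.25 under "repeat" but at 0.75 under "mirror-repeat", because the integer part (1) is odd.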
GPUFilterMode and GPUMipmapFilterMode describe the behavior of the sampler if the sampled
area does not cover exactly one texel.
Note: See Texel Filtering in the Vulkan 1.3 spec for an example of how samplers may determine which texels are sampled from for the various filtering modes.
enum GPUFilterMode {
    "nearest",
    "linear",
};

enum GPUMipmapFilterMode {
    "nearest",
    "linear",
};
"nearest"-
Return the value of the texel nearest to the texture coordinates.
"linear"-
Select two texels in each dimension and return a linear interpolation between their values.
GPUCompareFunction specifies the behavior of a comparison sampler. If a comparison sampler is
used in a shader, the depth_ref is compared to the fetched texel value, and the result of this
comparison test is generated (1.0f for pass, or 0.0f for fail).
After comparison, if texture filtering is enabled, the filtering step occurs, so that comparison
results are mixed together resulting in values in the range [0, 1]. Filtering should behave
as usual, however it may be computed with lower precision or not mix results at all.
enum GPUCompareFunction {
    "never",
    "less",
    "equal",
    "less-equal",
    "greater",
    "not-equal",
    "greater-equal",
    "always",
};
"never"-
Comparison tests never pass.
"less"-
A provided value passes the comparison test if it is less than the sampled value.
"equal"-
A provided value passes the comparison test if it is equal to the sampled value.
"less-equal"-
A provided value passes the comparison test if it is less than or equal to the sampled value.
"greater"-
A provided value passes the comparison test if it is greater than the sampled value.
"not-equal"-
A provided value passes the comparison test if it is not equal to the sampled value.
"greater-equal"-
A provided value passes the comparison test if it is greater than or equal to the sampled value.
"always"-
Comparison tests always pass.
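The table above can be sketched, non-normatively, as a predicate returning 1.0 for pass and 0.0 for fail, with the provided (depth_ref) value on the left of each comparison (compareTest is a hypothetical helper, not part of the API):

```javascript
// Hypothetical sketch of the per-sample comparison test performed by a
// comparison sampler, before any filtering is applied.
function compareTest(fn, provided, sampled) {
  switch (fn) {
    case 'never':         return 0.0;
    case 'less':          return provided <   sampled ? 1.0 : 0.0;
    case 'equal':         return provided === sampled ? 1.0 : 0.0;
    case 'less-equal':    return provided <=  sampled ? 1.0 : 0.0;
    case 'greater':       return provided >   sampled ? 1.0 : 0.0;
    case 'not-equal':     return provided !== sampled ? 1.0 : 0.0;
    case 'greater-equal': return provided >=  sampled ? 1.0 : 0.0;
    case 'always':        return 1.0;
  }
}
```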
7.1.2. Sampler Creation
createSampler(descriptor)
    Creates a GPUSampler.

    Called on: GPUDevice this.

    Arguments for the GPUDevice.createSampler(descriptor) method:

    | Parameter | Type | Nullable | Optional | Description |
    |---|---|---|---|---|
    | descriptor | GPUSamplerDescriptor | ✘ | ✔ | Description of the GPUSampler to create. |

    Returns: GPUSampler

    Content timeline steps:
    1. Let s be ! create a new WebGPU object(this, GPUSampler, descriptor).
    2. Issue the initialization steps on the Device timeline of this.
    3. Return s.

    Device timeline initialization steps:
    1. If any of the following conditions are unsatisfied, generate a validation error, invalidate s, and return.
        - this must not be lost.
        - descriptor.lodMinClamp ≥ 0.
        - descriptor.lodMaxClamp ≥ descriptor.lodMinClamp.
        - descriptor.maxAnisotropy ≥ 1.
        - If descriptor.maxAnisotropy > 1: descriptor.magFilter, descriptor.minFilter, and descriptor.mipmapFilter must be "linear".

        Note: Most implementations support maxAnisotropy values in range between 1 and 16, inclusive. The provided maxAnisotropy value will be clamped to the maximum value that the platform supports.
    2. Set s.[[descriptor]] to descriptor.
    3. Set s.[[isComparison]] to false if the compare attribute of s.[[descriptor]] is null or undefined. Otherwise, set it to true.
    4. Set s.[[isFiltering]] to false if none of minFilter, magFilter, or mipmapFilter has the value of "linear". Otherwise, set it to true.
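The validation conditions above can be expressed, non-normatively, as a plain function over a GPUSamplerDescriptor-like object (validateSamplerDescriptor is a hypothetical helper, not part of the API; it returns a list of violated rules, empty meaning valid):

```javascript
// Hypothetical sketch of the createSampler() validation rules, applying the
// same defaults as GPUSamplerDescriptor.
function validateSamplerDescriptor(d = {}) {
  const {
    lodMinClamp = 0, lodMaxClamp = 32, maxAnisotropy = 1,
    magFilter = 'nearest', minFilter = 'nearest', mipmapFilter = 'nearest',
  } = d;
  const errors = [];
  if (!(lodMinClamp >= 0)) errors.push('lodMinClamp must be >= 0');
  if (!(lodMaxClamp >= lodMinClamp)) errors.push('lodMaxClamp must be >= lodMinClamp');
  if (!(maxAnisotropy >= 1)) errors.push('maxAnisotropy must be >= 1');
  // Anisotropic filtering requires all three filter modes to be "linear".
  if (maxAnisotropy > 1 &&
      ![magFilter, minFilter, mipmapFilter].every(f => f === 'linear')) {
    errors.push('maxAnisotropy > 1 requires all filters to be "linear"');
  }
  return errors;
}
```

For example, a descriptor that requests maxAnisotropy: 4 while leaving the filters at their "nearest" defaults fails validation.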
Create a GPUSampler that does trilinear filtering and repeats texture coordinates:

const sampler = gpuDevice.createSampler({
    addressModeU: 'repeat',
    addressModeV: 'repeat',
    magFilter: 'linear',
    minFilter: 'linear',
    mipmapFilter: 'linear',
});
8. Resource Binding
8.1. GPUBindGroupLayout
A GPUBindGroupLayout defines the interface between a set of resources bound in a GPUBindGroup and their accessibility in shader stages.
[Exposed=(Window, Worker), SecureContext]
interface GPUBindGroupLayout {
};
GPUBindGroupLayout includes GPUObjectBase;
GPUBindGroupLayout has the following immutable properties:
[[descriptor]], of type GPUBindGroupLayoutDescriptor, readonly
8.1.1. Bind Group Layout Creation
A GPUBindGroupLayout is created via GPUDevice.createBindGroupLayout().
dictionary GPUBindGroupLayoutDescriptor : GPUObjectDescriptorBase {
    required sequence<GPUBindGroupLayoutEntry> entries;
};
GPUBindGroupLayoutDescriptor dictionaries have the following members:
entries, of type sequence<GPUBindGroupLayoutEntry>
    A list of entries describing the shader resource bindings for a bind group.
A GPUBindGroupLayoutEntry describes a single shader resource binding to be included in a GPUBindGroupLayout.
dictionary GPUBindGroupLayoutEntry {
    required GPUIndex32 binding;
    required GPUShaderStageFlags visibility;
    GPUBufferBindingLayout buffer;
    GPUSamplerBindingLayout sampler;
    GPUTextureBindingLayout texture;
    GPUStorageTextureBindingLayout storageTexture;
    GPUExternalTextureBindingLayout externalTexture;
};
GPUBindGroupLayoutEntry dictionaries have the following members:
binding, of type GPUIndex32
    A unique identifier for a resource binding within the GPUBindGroupLayout, corresponding to a GPUBindGroupEntry.binding and a @binding attribute in the GPUShaderModule.
visibility, of type GPUShaderStageFlags
    A bitset of the members of GPUShaderStage. Each set bit indicates that a GPUBindGroupLayoutEntry's resource will be accessible from the associated shader stage.
buffer, of type GPUBufferBindingLayout
sampler, of type GPUSamplerBindingLayout
texture, of type GPUTextureBindingLayout
storageTexture, of type GPUStorageTextureBindingLayout
externalTexture, of type GPUExternalTextureBindingLayout
    Exactly one of these members must be set, indicating the binding type. The contents of the member specify options specific to that type. The corresponding resource in createBindGroup() requires the corresponding binding resource type for this binding.
typedef [EnforceRange] unsigned long GPUShaderStageFlags;

[Exposed=(Window, Worker), SecureContext]
namespace GPUShaderStage {
    const GPUFlagsConstant VERTEX = 0x1;
    const GPUFlagsConstant FRAGMENT = 0x2;
    const GPUFlagsConstant COMPUTE = 0x4;
};
GPUShaderStage contains the following flags, which describe which shader stages a
corresponding GPUBindGroupEntry for this GPUBindGroupLayoutEntry will be visible to:
VERTEX
    The bind group entry will be accessible to vertex shaders.
FRAGMENT
    The bind group entry will be accessible to fragment shaders.
COMPUTE
    The bind group entry will be accessible to compute shaders.
The binding member of a GPUBindGroupLayoutEntry is determined by which member of the
GPUBindGroupLayoutEntry is defined:
buffer, sampler,
texture, storageTexture, or
externalTexture.
Only one may be defined for any given GPUBindGroupLayoutEntry.
Each member has an associated GPUBindingResource
type and each binding type has an associated internal usage, given by this table:
| Binding member | Resource type | Binding type | Binding usage |
|---|---|---|---|
| buffer | GPUBufferBinding (or GPUBuffer as shorthand) | "uniform" | constant |
| | | "storage" | storage |
| | | "read-only-storage" | storage-read |
| sampler | GPUSampler | "filtering" | constant |
| | | "non-filtering" | constant |
| | | "comparison" | constant |
| texture | GPUTextureView (or GPUTexture as shorthand) | "float" | constant |
| | | "unfilterable-float" | constant |
| | | "depth" | constant |
| | | "sint" | constant |
| | | "uint" | constant |
| storageTexture | GPUTextureView (or GPUTexture as shorthand) | "write-only" | storage |
| | | "read-write" | storage |
| | | "read-only" | storage-read |
| externalTexture | GPUExternalTexture or GPUTextureView (or GPUTexture as shorthand) | | constant |
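The binding-type-to-usage mapping in the table above can be sketched, non-normatively, as a lookup over a GPUBindGroupLayoutEntry-like object (bindingUsage is a hypothetical helper, not part of the API):

```javascript
// Hypothetical sketch: derive the internal binding usage for an entry from
// its defined binding member, following the table above.
function bindingUsage(layoutEntry) {
  if (layoutEntry.buffer) {
    const type = layoutEntry.buffer.type ?? 'uniform';
    return type === 'uniform' ? 'constant'
         : type === 'read-only-storage' ? 'storage-read'
         : 'storage';
  }
  if (layoutEntry.storageTexture) {
    const access = layoutEntry.storageTexture.access ?? 'write-only';
    return access === 'read-only' ? 'storage-read' : 'storage';
  }
  // sampler, texture, and externalTexture bindings all use "constant".
  return 'constant';
}
```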
A list of GPUBindGroupLayoutEntry values entries exceeds the binding slot limits of supported limits limits if the number of slots used toward a limit exceeds the supported value in limits. Each entry may use multiple slots, toward multiple limits.
Device timeline steps:

1. For each entry in entries, if:

    entry.buffer?.type is "uniform" and entry.buffer?.hasDynamicOffset is true
        Consider 1 maxDynamicUniformBuffersPerPipelineLayout slot to be used.
    entry.buffer?.type is "storage" and entry.buffer?.hasDynamicOffset is true
        Consider 1 maxDynamicStorageBuffersPerPipelineLayout slot to be used.

2. For each shader stage stage in « VERTEX, FRAGMENT, COMPUTE »:
    1. For each entry in entries for which entry.visibility contains stage, if:

        entry.buffer?.type is "uniform"
            Consider 1 maxUniformBuffersPerShaderStage slot to be used.
        entry.buffer?.type is "storage" or "read-only-storage"
            If stage is:
            VERTEX
                Consider 1 maxStorageBuffersInVertexStage slot to be used.
            FRAGMENT
                Consider 1 maxStorageBuffersInFragmentStage slot to be used.
            COMPUTE
                Consider 1 maxStorageBuffersPerShaderStage slot to be used.
        entry.sampler is provided
            Consider 1 maxSamplersPerShaderStage slot to be used.
        entry.texture is provided
            Consider 1 maxSampledTexturesPerShaderStage slot to be used.
        entry.storageTexture is provided
            If stage is:
            VERTEX
                Consider 1 maxStorageTexturesInVertexStage slot to be used.
            FRAGMENT
                Consider 1 maxStorageTexturesInFragmentStage slot to be used.
            COMPUTE
                Consider 1 maxStorageTexturesPerShaderStage slot to be used.
        entry.externalTexture is provided
            Consider 4 maxSampledTexturesPerShaderStage slots, 1 maxSamplersPerShaderStage slot, and 1 maxUniformBuffersPerShaderStage slot to be used.

            Note: See GPUExternalTexture for an explanation of this behavior.
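The per-stage slot accounting above can be sketched, non-normatively, for a subset of the limits (countSlotsForStage is a hypothetical helper, not part of the API; it handles only uniform buffers, samplers, sampled textures, and external textures):

```javascript
// Hypothetical sketch of binding slot accounting for one shader stage.
// "entries" is a list of GPUBindGroupLayoutEntry-like objects.
const VERTEX = 0x1, FRAGMENT = 0x2, COMPUTE = 0x4;

function countSlotsForStage(entries, stage) {
  const used = {
    maxUniformBuffersPerShaderStage: 0,
    maxSamplersPerShaderStage: 0,
    maxSampledTexturesPerShaderStage: 0,
  };
  for (const e of entries) {
    if (!(e.visibility & stage)) continue;
    if (e.buffer && (e.buffer.type ?? 'uniform') === 'uniform') {
      used.maxUniformBuffersPerShaderStage += 1;
    } else if (e.sampler) {
      used.maxSamplersPerShaderStage += 1;
    } else if (e.texture) {
      used.maxSampledTexturesPerShaderStage += 1;
    } else if (e.externalTexture) {
      // An external texture uses 4 sampled-texture slots, 1 sampler slot,
      // and 1 uniform-buffer slot.
      used.maxSampledTexturesPerShaderStage += 4;
      used.maxSamplersPerShaderStage += 1;
      used.maxUniformBuffersPerShaderStage += 1;
    }
  }
  return used;
}
```

Note how a single externalTexture entry visible to the fragment stage accounts for slots toward three different limits at once.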
enum GPUBufferBindingType {
    "uniform",
    "storage",
    "read-only-storage",
};

dictionary GPUBufferBindingLayout {
    GPUBufferBindingType type = "uniform";
    boolean hasDynamicOffset = false;
    GPUSize64 minBindingSize = 0;
};
GPUBufferBindingLayout dictionaries have the following members:
type, of type GPUBufferBindingType, defaulting to "uniform"
    Indicates the type required for buffers bound to this binding.
hasDynamicOffset, of type boolean, defaulting to false
    Indicates whether this binding requires a dynamic offset.
minBindingSize, of type GPUSize64, defaulting to 0
    Indicates the minimum size of a buffer binding used with this bind point.

    Bindings are always validated against this size in createBindGroup().

    If this is not 0, pipeline creation additionally validates that this value ≥ the minimum buffer binding size of the variable.

    If this is 0, it is ignored by pipeline creation, and instead draw/dispatch commands validate that each binding in the GPUBindGroup satisfies the minimum buffer binding size of the variable.

    Note: Similar execution-time validation is theoretically possible for other binding-related fields specified for early validation, like sampleType and format, which currently can only be validated in pipeline creation. However, such execution-time validation could be costly or unnecessarily complex, so it is available only for minBindingSize, which is expected to have the most ergonomic impact.
enum GPUSamplerBindingType {
    "filtering",
    "non-filtering",
    "comparison",
};

dictionary GPUSamplerBindingLayout {
    GPUSamplerBindingType type = "filtering";
};
GPUSamplerBindingLayout dictionaries have the following members:
type, of type GPUSamplerBindingType, defaulting to "filtering"
    Indicates the required type of a sampler bound to this binding.
enum GPUTextureSampleType {
    "float",
    "unfilterable-float",
    "depth",
    "sint",
    "uint",
};

dictionary GPUTextureBindingLayout {
    GPUTextureSampleType sampleType = "float";
    GPUTextureViewDimension viewDimension = "2d";
    boolean multisampled = false;
};
GPUTextureBindingLayout dictionaries have the following members:
sampleType, of type GPUTextureSampleType, defaulting to "float"
    Indicates the type required for texture views bound to this binding.
viewDimension, of type GPUTextureViewDimension, defaulting to "2d"
    Indicates the required dimension for texture views bound to this binding.
multisampled, of type boolean, defaulting to false
    Indicates whether or not texture views bound to this binding must be multisampled.
enum GPUStorageTextureAccess {
    "write-only",
    "read-only",
    "read-write",
};

dictionary GPUStorageTextureBindingLayout {
    GPUStorageTextureAccess access = "write-only";
    required GPUTextureFormat format;
    GPUTextureViewDimension viewDimension = "2d";
};
GPUStorageTextureBindingLayout dictionaries have the following members:
access, of type GPUStorageTextureAccess, defaulting to "write-only"
    The access mode for this binding, indicating readability and writability.
format, of type GPUTextureFormat
    The required format of texture views bound to this binding.
viewDimension, of type GPUTextureViewDimension, defaulting to "2d"
    Indicates the required dimension for texture views bound to this binding.
dictionary GPUExternalTextureBindingLayout {
};
A GPUBindGroupLayout object has the following device timeline properties:
[[entryMap]], of type ordered map<GPUSize32, GPUBindGroupLayoutEntry>, readonly
    The map of binding indices pointing to the GPUBindGroupLayoutEntrys, which this GPUBindGroupLayout describes.
[[dynamicOffsetCount]], of type GPUSize32, readonly
    The number of buffer bindings with dynamic offsets in this GPUBindGroupLayout.
[[exclusivePipeline]], of type GPUPipelineBase?, readonly
    The pipeline that created this GPUBindGroupLayout, if it was created as part of a default pipeline layout. If not null, GPUBindGroups created with this GPUBindGroupLayout can only be used with the specified GPUPipelineBase.
createBindGroupLayout(descriptor)
    Creates a GPUBindGroupLayout.

    Called on: GPUDevice this.

    Arguments for the GPUDevice.createBindGroupLayout(descriptor) method:

    | Parameter | Type | Nullable | Optional | Description |
    |---|---|---|---|---|
    | descriptor | GPUBindGroupLayoutDescriptor | ✘ | ✘ | Description of the GPUBindGroupLayout to create. |

    Returns: GPUBindGroupLayout

    Content timeline steps:
    1. For each GPUBindGroupLayoutEntry entry in descriptor.entries:
        1. If entry.storageTexture is provided:
            1. ? Validate texture format required features for entry.storageTexture.format with this.[[device]].
    2. Let layout be ! create a new WebGPU object(this, GPUBindGroupLayout, descriptor).
    3. Issue the initialization steps on the Device timeline of this.
    4. Return layout.

    Device timeline initialization steps:
    1. Let limits be this.[[device]].[[limits]].
    2. If any of the following conditions are unsatisfied, generate a validation error, invalidate layout and return.
        - this must not be lost.
        - The binding of each entry in descriptor is unique.
        - The binding of each entry in descriptor must be < limits.maxBindingsPerBindGroup.
        - descriptor.entries must not exceed the binding slot limits of limits.
        - For each GPUBindGroupLayoutEntry entry in descriptor.entries:
            - Exactly one of entry.buffer, entry.sampler, entry.texture, entry.storageTexture, and entry.externalTexture is provided.
            - entry.visibility contains only bits defined in GPUShaderStage.
            - If entry.visibility includes VERTEX:
                - If entry.buffer is provided, entry.buffer.type must be "uniform" or "read-only-storage".
                - If entry.storageTexture is provided, entry.storageTexture.access must be "read-only".
            - If entry.texture?.multisampled is true:
                - entry.texture.viewDimension is "2d".
                - entry.texture.sampleType is not "float".
            - If entry.storageTexture is provided:
                - entry.storageTexture.viewDimension is not "cube" or "cube-array".
                - entry.storageTexture.format must be a format which can support storage usage for the given entry.storageTexture.access according to the § 26.1.1 Plain color formats table.
    3. Set layout.[[descriptor]] to descriptor.
    4. Set layout.[[dynamicOffsetCount]] to the number of entries in descriptor where buffer is provided and buffer.hasDynamicOffset is true.
    5. Set layout.[[exclusivePipeline]] to null.
    6. For each GPUBindGroupLayoutEntry entry in descriptor.entries:
        1. Insert entry into layout.[[entryMap]] with the key of entry.binding.
8.1.2. Compatibility
GPUBindGroupLayout objects a and b are considered group-equivalent if and only if, for any binding number binding, one of the following conditions is satisfied:

- it is missing from both a.[[entryMap]] and b.[[entryMap]].
- a.[[entryMap]][binding] == b.[[entryMap]][binding]

If bind group layouts are group-equivalent, they can be used interchangeably in all contexts.
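As a non-normative illustration, the group-equivalence check can be sketched over two [[entryMap]]-like Maps of binding index to entry (groupEquivalent is a hypothetical helper, not part of the API; JSON serialization stands in for deep dictionary equality and assumes consistent key order):

```javascript
// Hypothetical sketch: two entry maps are group-equivalent when every
// binding number is either missing from both, or maps to equal entries.
function groupEquivalent(entryMapA, entryMapB) {
  const bindings = new Set([...entryMapA.keys(), ...entryMapB.keys()]);
  for (const binding of bindings) {
    const a = entryMapA.get(binding);
    const b = entryMapB.get(binding);
    // Missing from both serializes to undefined on both sides, so it passes.
    if (JSON.stringify(a) !== JSON.stringify(b)) return false;
  }
  return true;
}
```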
8.2. GPUBindGroup
A GPUBindGroup defines a set of resources to be bound together in a group
and how the resources are used in shader stages.
[Exposed=(Window, Worker), SecureContext]
interface GPUBindGroup {
};
GPUBindGroup includes GPUObjectBase;
GPUBindGroup has the following device timeline properties:
[[layout]], of type GPUBindGroupLayout, readonly
    The GPUBindGroupLayout associated with this GPUBindGroup.
[[entries]], of type sequence<GPUBindGroupEntry>, readonly
    The set of GPUBindGroupEntrys this GPUBindGroup describes.
[[usedResources]], of type usage scope, readonly
    The set of buffer and texture subresources used by this bind group, associated with lists of the internal usage flags.
The bound buffer ranges of a GPUBindGroup bindGroup, given list<GPUBufferDynamicOffset> dynamicOffsets, are computed as follows:

1. Let result be a new set<(GPUBindGroupLayoutEntry, GPUBufferBinding)>.
2. Let dynamicOffsetIndex be 0.
3. For each GPUBindGroupEntry bindGroupEntry in bindGroup.[[entries]], sorted by bindGroupEntry.binding:
    1. Let bindGroupLayoutEntry be bindGroup.[[layout]].[[entryMap]][bindGroupEntry.binding].
    2. Let bound be get as buffer binding(bindGroupEntry.resource).
    3. If bindGroupLayoutEntry.buffer.hasDynamicOffset:
        1. Increment bound.offset by dynamicOffsets[dynamicOffsetIndex].
        2. Increment dynamicOffsetIndex by 1.
    4. Append (bindGroupLayoutEntry, bound) to result.
4. Return result.
8.2.1. Bind Group Creation
A GPUBindGroup is created via GPUDevice.createBindGroup().
dictionary GPUBindGroupDescriptor : GPUObjectDescriptorBase {
    required GPUBindGroupLayout layout;
    required sequence<GPUBindGroupEntry> entries;
};
GPUBindGroupDescriptor dictionaries have the following members:
layout, of type GPUBindGroupLayout
    The GPUBindGroupLayout the entries of this bind group will conform to.
entries, of type sequence<GPUBindGroupEntry>
    A list of entries describing the resources to expose to the shader for each binding described by the layout.
typedef (GPUSampler or GPUTexture or GPUTextureView or GPUBuffer or GPUBufferBinding or GPUExternalTexture) GPUBindingResource;

dictionary GPUBindGroupEntry {
    required GPUIndex32 binding;
    required GPUBindingResource resource;
};
A GPUBindGroupEntry describes a single resource to be bound in a GPUBindGroup, and has the
following members:
binding, of type GPUIndex32
    A unique identifier for a resource binding within the GPUBindGroup, corresponding to a GPUBindGroupLayoutEntry.binding and a @binding attribute in the GPUShaderModule.
resource, of type GPUBindingResource
    The resource to bind, which may be a GPUSampler, GPUTexture, GPUTextureView, GPUBuffer, GPUBufferBinding, or GPUExternalTexture.
GPUBindGroupEntry has the following device timeline properties:
[[prevalidatedSize]], of type boolean
    Whether or not this binding entry had its buffer size validated at time of creation.
dictionary GPUBufferBinding {
    required GPUBuffer buffer;
    GPUSize64 offset = 0;
    GPUSize64 size;
};
A GPUBufferBinding describes a buffer and optional range to bind as a resource, and has the
following members:
buffer, of type GPUBuffer
    The GPUBuffer to bind.
offset, of type GPUSize64, defaulting to 0
    The offset, in bytes, from the beginning of buffer to the beginning of the range exposed to the shader by the buffer binding.
size, of type GPUSize64
    The size, in bytes, of the buffer binding. If not provided, specifies the range starting at offset and ending at the end of buffer.
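The size rule above can be sketched, non-normatively, as a computation of the effective buffer binding size (effectiveBufferBindingSize is a hypothetical helper, not part of the API; bufferSize stands in for the size of the bound GPUBuffer):

```javascript
// Hypothetical sketch: the effective size of the range selected by a
// GPUBufferBinding-like object. When size is unspecified, the range runs
// from offset to the end of the buffer.
function effectiveBufferBindingSize({ offset = 0, size }, bufferSize) {
  return size !== undefined ? size : bufferSize - offset;
}
```

For example, binding at offset 256 into a 1024-byte buffer with no explicit size exposes 768 bytes to the shader.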
createBindGroup(descriptor)
    Creates a GPUBindGroup.

    Called on: GPUDevice this.

    Arguments for the GPUDevice.createBindGroup(descriptor) method:

    | Parameter | Type | Nullable | Optional | Description |
    |---|---|---|---|---|
    | descriptor | GPUBindGroupDescriptor | ✘ | ✘ | Description of the GPUBindGroup to create. |

    Returns: GPUBindGroup

    Content timeline steps:
    1. Let bindGroup be ! create a new WebGPU object(this, GPUBindGroup, descriptor).
    2. Issue the initialization steps on the Device timeline of this.
    3. Return bindGroup.

    Device timeline initialization steps:
    1. Let limits be this.[[device]].[[limits]].
    2. If any of the following conditions are unsatisfied, generate a validation error, invalidate bindGroup and return.
        - descriptor.layout is valid to use with this.
        - The number of entries of descriptor.layout is exactly equal to the number of descriptor.entries.
        - For each GPUBindGroupEntry bindingDescriptor in descriptor.entries:
            1. Let resource be bindingDescriptor.resource.
            2. There is exactly one GPUBindGroupLayoutEntry layoutBinding in descriptor.layout.entries such that layoutBinding.binding equals bindingDescriptor.binding.
            3. If the defined binding member for layoutBinding is:

                sampler
                    - resource is a GPUSampler.
                    - resource is valid to use with this.
                    - If layoutBinding.sampler.type is:
                        "filtering"
                            resource.[[isComparison]] is false.
                        "non-filtering"
                            resource.[[isFiltering]] is false. resource.[[isComparison]] is false.
                        "comparison"
                            resource.[[isComparison]] is true.
                texture
                    - resource is either a GPUTexture or a GPUTextureView.
                    - resource is valid to use with this.
                    - Let textureView be get as texture view(resource).
                    - Let texture be textureView.[[texture]].
                    - layoutBinding.texture.viewDimension is equal to textureView's dimension.
                    - layoutBinding.texture.sampleType is compatible with textureView's format.
                    - textureView.[[descriptor]].usage includes TEXTURE_BINDING.
                    - If layoutBinding.texture.multisampled is true, texture's sampleCount > 1. Otherwise, texture's sampleCount is 1.
                    - If texture.textureBindingViewDimension is not undefined:
                        - Assert this.[[device]].[[features]] does not contain "core-features-and-limits".
                        - texture.textureBindingViewDimension must be equal to textureView.dimension.
                storageTexture
                    - resource is either a GPUTexture or a GPUTextureView.
                    - resource is valid to use with this.
                    - Let storageTextureView be get as texture view(resource).
                    - Let texture be storageTextureView.[[texture]].
                    - layoutBinding.storageTexture.viewDimension is equal to storageTextureView's dimension.
                    - layoutBinding.storageTexture.format is equal to storageTextureView.[[descriptor]].format.
                    - storageTextureView.[[descriptor]].usage includes STORAGE_BINDING.
                    - storageTextureView.[[descriptor]].mipLevelCount must be 1.
                    - storageTextureView.[[descriptor]].swizzle must be "rgba".
                buffer
                    - resource is either a GPUBuffer or a GPUBufferBinding.
                    - Let bufferBinding be get as buffer binding(resource).
                    - bufferBinding.buffer is valid to use with this.
                    - The bound part designated by bufferBinding.offset and bufferBinding.size resides inside the buffer and has non-zero size.
                    - effective buffer binding size(bufferBinding) ≥ layoutBinding.buffer.minBindingSize.
                    - If layoutBinding.buffer.type is:
                        "uniform"
                            - effective buffer binding size(bufferBinding) ≤ limits.maxUniformBufferBindingSize.
                            - bufferBinding.offset is a multiple of limits.minUniformBufferOffsetAlignment.
                        "storage" or "read-only-storage"
                            - effective buffer binding size(bufferBinding) ≤ limits.maxStorageBufferBindingSize.
                            - effective buffer binding size(bufferBinding) is a multiple of 4.
                            - bufferBinding.offset is a multiple of limits.minStorageBufferOffsetAlignment.
                externalTexture
                    - resource is either a GPUExternalTexture, a GPUTexture, or a GPUTextureView.
                    - resource is valid to use with this.
                    - If resource is a:
                        GPUTexture or GPUTextureView
                            - Let view be get as texture view(resource).
                            - view.[[descriptor]].usage must include TEXTURE_BINDING.
                            - view.[[descriptor]].dimension must be "2d".
                            - view.[[descriptor]].mipLevelCount must be 1.
                            - view.[[descriptor]].format must be "rgba8unorm", "bgra8unorm", or "rgba16float".
                            - view.[[texture]].sampleCount must be 1.
        - If this.[[device]].[[features]] does not contain "core-features-and-limits":
            - For each GPUBindGroupEntry bindGroupEntry in descriptor.entries:
                - If bindGroupEntry.resource is a GPUTextureView:
                    - Let textureView be bindGroupEntry.resource.
                    - Let descriptor be textureView.[[descriptor]].
                    - descriptor.baseArrayLayer must be 0.
                    - descriptor.arrayLayerCount must be equal to textureView.[[texture]].depthOrArrayLayers.
    3. Let bindGroup.[[layout]] = descriptor.layout.
    4. Let bindGroup.[[entries]] = descriptor.entries.
    5. Let bindGroup.[[usedResources]] = {}.
    6. For each GPUBindGroupEntry bindingDescriptor in descriptor.entries:
        1. Let internalUsage be the binding usage for layoutBinding.
        2. Each subresource seen by resource is added to [[usedResources]] as internalUsage.
        3. Let bindingDescriptor.[[prevalidatedSize]] be false if the defined binding member for layoutBinding is buffer and layoutBinding.buffer.minBindingSize is 0, and true otherwise.
get as texture view(resource)
    Arguments:
        GPUBindingResource resource

    Returns: GPUTextureView

    1. Assert resource is either a GPUTexture or a GPUTextureView.
    2. If resource is a:
        GPUTexture
            Return resource.createView().
        GPUTextureView
            Return resource.
get as buffer binding(resource)
    Arguments:
        GPUBindingResource resource

    Returns: GPUBufferBinding

    1. Assert resource is either a GPUBuffer or a GPUBufferBinding.
    2. If resource is a:
        GPUBuffer
            1. Let bufferBinding be a new GPUBufferBinding.
            2. Set bufferBinding.buffer to resource.
            3. Return bufferBinding.
        GPUBufferBinding
            Return resource.
Two GPUBufferBinding objects a and b are considered buffer-binding-aliasing if and only if all of the following are true:

- a.buffer == b.buffer.
- The range formed by a.offset and a.size intersects the range formed by b.offset and b.size, where if a size is unspecified, the range extends to the end of the buffer.

Note: When doing this calculation, any dynamic offsets have already been applied to the ranges.
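The range-intersection part of this test can be sketched, non-normatively, for two bindings of the same buffer (bufferBindingsAlias is a hypothetical helper, not part of the API; bufferSize stands in for the size of the shared buffer, and dynamic offsets are assumed already applied):

```javascript
// Hypothetical sketch: do the ranges of two GPUBufferBinding-like objects
// bound to the same buffer of bufferSize bytes intersect?
function bufferBindingsAlias(a, b, bufferSize) {
  // When size is unspecified, the range extends to the end of the buffer.
  const end = ({ offset = 0, size }) =>
    size !== undefined ? offset + size : bufferSize;
  const aStart = a.offset ?? 0;
  const bStart = b.offset ?? 0;
  // Half-open ranges [start, end) intersect when each starts before the
  // other one ends.
  return aStart < end(b) && bStart < end(a);
}
```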
8.3. GPUPipelineLayout
A GPUPipelineLayout defines the mapping between resources of all GPUBindGroup objects set up during command encoding in setBindGroup(), and the shaders of the pipeline set by GPURenderCommandsMixin.setPipeline or GPUComputePassEncoder.setPipeline.
The full binding address of a resource can be defined as a trio of:
- shader stage mask, to which the resource is visible
- bind group index
- binding number
The components of this address can also be seen as the binding space of a pipeline. A GPUBindGroup (with the corresponding GPUBindGroupLayout) covers that space for a fixed bind group index. The contained bindings need to be a superset of the resources used by the shader at this bind group index.
[Exposed=(Window, Worker), SecureContext]
interface GPUPipelineLayout {
};
GPUPipelineLayout includes GPUObjectBase;
GPUPipelineLayout has the following device timeline properties:
[[bindGroupLayouts]], of type list<GPUBindGroupLayout>, readonly
    The GPUBindGroupLayout objects provided at creation in GPUPipelineLayoutDescriptor.bindGroupLayouts.
Note: using the same GPUPipelineLayout for many GPURenderPipeline or GPUComputePipeline pipelines guarantees that the user agent doesn’t need to rebind any resources internally when there is a switch between these pipelines.
For example, suppose GPUComputePipeline object X was created with GPUPipelineLayout.bindGroupLayouts A, B, C, and GPUComputePipeline object Y was created with GPUPipelineLayout.bindGroupLayouts A, D, C, and the command encoding sequence has two dispatches:

1. setBindGroup(0, ...)
2. setBindGroup(1, ...)
3. setBindGroup(2, ...)
4. setPipeline(X)
5. dispatchWorkgroups()
6. setBindGroup(1, ...)
7. setPipeline(Y)
8. dispatchWorkgroups()

In this scenario, the user agent would have to re-bind the group slot 2 for the second dispatch, even though neither the GPUBindGroupLayout at index 2 of GPUPipelineLayout.bindGroupLayouts, nor the GPUBindGroup at slot 2, change.
Note: the expected usage of the GPUPipelineLayout is placing the most common and the least frequently changing bind groups at the "bottom" of the layout, meaning lower bind group slot numbers, like 0 or 1. The more frequently a bind group needs to change between draw calls, the higher its index should be. This general guideline allows the user agent to minimize state changes between draw calls, and consequently lower the CPU overhead.
8.3.1. Pipeline Layout Creation
A GPUPipelineLayout is created via GPUDevice.createPipelineLayout().
dictionary GPUPipelineLayoutDescriptor : GPUObjectDescriptorBase {
    required sequence<GPUBindGroupLayout?> bindGroupLayouts;
};
GPUPipelineLayoutDescriptor dictionaries define all the GPUBindGroupLayouts used by a
pipeline, and have the following members:
bindGroupLayouts, of type sequence<GPUBindGroupLayout?>
    A list of optional GPUBindGroupLayouts the pipeline will use. Each element corresponds to a @group attribute in the GPUShaderModule, with the Nth element corresponding with @group(N).
createPipelineLayout(descriptor)
Creates a GPUPipelineLayout.
Called on: GPUDevice this.
Arguments:
    descriptor (GPUPipelineLayoutDescriptor, not nullable, not optional): Description of the GPUPipelineLayout to create.
Returns: GPUPipelineLayout
Content timeline steps:
1. Let pl be ! create a new WebGPU object(this, GPUPipelineLayout, descriptor).
2. Issue the initialization steps on the Device timeline of this.
3. Return pl.
Device timeline initialization steps:
1. Let limits be this.[[device]].[[limits]].
2. Let bindGroupLayouts be a list of null GPUBindGroupLayouts with size equal to limits.maxBindGroups.
3. For each bindGroupLayout at index i in descriptor.bindGroupLayouts:
    1. If bindGroupLayout is not null and bindGroupLayout.[[descriptor]].entries is not empty:
        1. Set bindGroupLayouts[i] to bindGroupLayout.
4. Let allEntries be the result of concatenating bgl.[[descriptor]].entries for all non-null bgl in bindGroupLayouts.
5. If any of the following conditions are unsatisfied, generate a validation error, invalidate pl, and return.
    - Every non-null GPUBindGroupLayout in bindGroupLayouts must be valid to use with this and have an [[exclusivePipeline]] of null.
    - The size of descriptor.bindGroupLayouts must be ≤ limits.maxBindGroups.
    - allEntries must not exceed the binding slot limits of limits.
6. Set pl.[[bindGroupLayouts]] to bindGroupLayouts.
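The defaulting performed in the device timeline steps can be sketched in plain JavaScript. This is a non-normative model, not implementation code: the resolveBindGroupLayouts name and the plain-object layouts are assumptions for illustration.

```javascript
// Non-normative sketch of the defaulting above: expand
// descriptor.bindGroupLayouts into a fixed-size list of maxBindGroups
// entries, keeping only non-null layouts whose entries list is non-empty.
function resolveBindGroupLayouts(descriptorLayouts, maxBindGroups) {
  if (descriptorLayouts.length > maxBindGroups) {
    // Corresponds to the "size must be ≤ limits.maxBindGroups" validation.
    throw new Error('validation error: too many bind group layouts');
  }
  const bindGroupLayouts = new Array(maxBindGroups).fill(null);
  descriptorLayouts.forEach((bgl, i) => {
    if (bgl !== null && bgl.entries.length > 0) {
      bindGroupLayouts[i] = bgl;
    }
  });
  return bindGroupLayouts;
}

const layoutA = { entries: [{ binding: 0, visibility: 2, buffer: {} }] };
const emptyLayout = { entries: [] };
console.log(resolveBindGroupLayouts([layoutA, null, emptyLayout], 4));
// Empty layouts behave like null slots: [layoutA, null, null, null]
```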
Note: two GPUPipelineLayout objects are considered equivalent for any usage
if their internal [[bindGroupLayouts]] sequences contain
GPUBindGroupLayout objects that are group-equivalent.
8.4. Example
Create a GPUBindGroupLayout that describes a binding with a uniform buffer, a texture, and a sampler.
Then create a GPUBindGroup and a GPUPipelineLayout using the GPUBindGroupLayout.
const bindGroupLayout = gpuDevice.createBindGroupLayout({
  entries: [{
    binding: 0,
    visibility: GPUShaderStage.VERTEX | GPUShaderStage.FRAGMENT,
    buffer: {}
  }, {
    binding: 1,
    visibility: GPUShaderStage.FRAGMENT,
    texture: {}
  }, {
    binding: 2,
    visibility: GPUShaderStage.FRAGMENT,
    sampler: {}
  }]
});

const bindGroup = gpuDevice.createBindGroup({
  layout: bindGroupLayout,
  entries: [{
    binding: 0,
    resource: { buffer: buffer },
  }, {
    binding: 1,
    resource: texture
  }, {
    binding: 2,
    resource: sampler
  }]
});

const pipelineLayout = gpuDevice.createPipelineLayout({
  bindGroupLayouts: [bindGroupLayout]
});
9. Shader Modules
9.1. GPUShaderModule
[Exposed=(Window, Worker), SecureContext]
interface GPUShaderModule {
    Promise<GPUCompilationInfo> getCompilationInfo();
};
GPUShaderModule includes GPUObjectBase;
GPUShaderModule is a reference to an internal shader module object.
9.1.1. Shader Module Creation
dictionary GPUShaderModuleDescriptor : GPUObjectDescriptorBase {
    required USVString code;
    sequence<GPUShaderModuleCompilationHint> compilationHints = [];
};
code, of type USVString
The WGSL source code for the shader module.

compilationHints, of type sequence<GPUShaderModuleCompilationHint>, defaulting to []
A list of GPUShaderModuleCompilationHints. Any hint provided by an application should contain information about one entry point of a pipeline that will eventually be created from the entry point.
Implementations should use any information present in the GPUShaderModuleCompilationHint to perform as much compilation as is possible within createShaderModule(). Aside from type-checking, these hints are not validated in any way.

Note: Supplying information in compilationHints does not have any observable effect, other than performance. It may be detrimental to performance to provide hints for pipelines that never end up being created.
Because a single shader module can hold multiple entry points, and multiple pipelines can be created from a single shader module, it can be more performant for an implementation to do as much compilation as possible once in createShaderModule() rather than multiple times in the multiple calls to createComputePipeline() or createRenderPipeline().
Hints are only applied to the entry points they explicitly name. Unlike GPUProgrammableStage.entryPoint, there is no default, even if only one entry point is present in the module.

Note: Hints are not validated in an observable way, but user agents may surface identifiable errors (like unknown entry point names or incompatible pipeline layouts) to developers, for example in the browser developer console.
createShaderModule(descriptor)
Creates a GPUShaderModule.
Called on: GPUDevice this.
Arguments:
    descriptor (GPUShaderModuleDescriptor, not nullable, not optional): Description of the GPUShaderModule to create.
Returns: GPUShaderModule
Content timeline steps:
1. Let sm be ! create a new WebGPU object(this, GPUShaderModule, descriptor).
2. Issue the initialization steps on the Device timeline of this.
3. Return sm.
Device timeline initialization steps:
1. Let error be any error that results from shader module creation with the WGSL source descriptor.code, or null if no errors occurred.
2. If any of the following requirements are unmet, generate a validation error, invalidate sm, and return.
    - this must not be lost.
    - error must not be a shader-creation program error.
    - For each enable directive in descriptor.code, the corresponding GPUFeatureName must be enabled (see the Feature Index).
Note: Uncategorized errors cannot arise from shader module creation. Implementations which detect such errors during shader module creation must behave as if the shader module is valid, and defer surfacing the error until pipeline creation.

Note: User agents should not include detailed compiler error messages or shader text in the message text of validation errors arising here: these details are accessible via getCompilationInfo(). User agents should surface human-readable, formatted error details to developers for easier debugging (for example as a warning in the browser developer console, expandable to show full shader source).
As shader compilation errors should be rare in production applications, user agents could choose to surface them to developers regardless of error handling (GPU error scopes or uncapturederror event handlers), e.g. as an expandable warning. If not, they should provide and document another way for developers to access human-readable error details, for example by adding a checkbox to show errors unconditionally, or by showing human-readable details when logging a GPUCompilationInfo object to the console.
Create a GPUShaderModule from WGSL code:
// A simple vertex and fragment shader pair that will fill the viewport with red.
const shaderSource = `
  var<private> pos : array<vec2<f32>, 3> = array<vec2<f32>, 3>(
    vec2(-1.0, -1.0), vec2(-1.0, 3.0), vec2(3.0, -1.0));

  @vertex
  fn vertexMain(@builtin(vertex_index) vertexIndex : u32) -> @builtin(position) vec4<f32> {
    return vec4(pos[vertexIndex], 1.0, 1.0);
  }

  @fragment
  fn fragmentMain() -> @location(0) vec4<f32> {
    return vec4(1.0, 0.0, 0.0, 1.0);
  }
`;

const shaderModule = gpuDevice.createShaderModule({
  code: shaderSource,
});
9.1.1.1. Shader Module Compilation Hints
Shader module compilation hints are optional, additional information indicating how a given
GPUShaderModule entry point is intended to be used in the future. For some implementations this
information may aid in compiling the shader module earlier, potentially increasing performance.
dictionary GPUShaderModuleCompilationHint {
    required USVString entryPoint;
    (GPUPipelineLayout or GPUAutoLayoutMode) layout;
};
layout, of type (GPUPipelineLayout or GPUAutoLayoutMode)
A GPUPipelineLayout that the GPUShaderModule may be used with in a future createComputePipeline() or createRenderPipeline() call. If set to "auto", the default pipeline layout for the entry point associated with this hint will be used.

Note: If possible, authors should supply the same information to createShaderModule() and createComputePipeline() / createRenderPipeline().
If an application is unable to provide hint information at the time of calling
createShaderModule(), it should usually not delay calling
createShaderModule(), but instead just omit the unknown information from
the compilationHints sequence or the individual members of
GPUShaderModuleCompilationHint. Omitting this information
may cause compilation to be deferred to createComputePipeline() /
createRenderPipeline().
If an author is not confident that the hint information passed to createShaderModule()
will match the information later passed to createComputePipeline() /
createRenderPipeline() with that same module, they should avoid passing that
information to createShaderModule(), as passing mismatched information to
createShaderModule() may cause unnecessary compilations to occur.
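As a concrete sketch of the guidance above, the helper below builds a GPUShaderModuleDescriptor whose compilationHints explicitly name each entry point (recall that hints have no default entry point, even for single-entry-point modules). The makeShaderModuleDescriptor helper and its argument shape are assumptions for illustration, not part of the API.

```javascript
// Hypothetical helper: build a GPUShaderModuleDescriptor with one
// compilation hint per entry point, since hints apply only to the
// entry points they explicitly name.
function makeShaderModuleDescriptor(code, entryPointLayouts) {
  return {
    code,
    compilationHints: Object.entries(entryPointLayouts).map(
      ([entryPoint, layout]) => ({ entryPoint, layout })),
  };
}

const wgsl = `
  @vertex fn vertexMain() -> @builtin(position) vec4<f32> {
    return vec4(0.0, 0.0, 0.0, 1.0);
  }
  @fragment fn fragmentMain() -> @location(0) vec4<f32> {
    return vec4(1.0, 0.0, 0.0, 1.0);
  }
`;

// "auto" requests the default pipeline layout of each named entry point.
const descriptor = makeShaderModuleDescriptor(wgsl, {
  vertexMain: 'auto',
  fragmentMain: 'auto',
});
console.log(descriptor.compilationHints);
// → [{ entryPoint: 'vertexMain', layout: 'auto' },
//    { entryPoint: 'fragmentMain', layout: 'auto' }]
```

In a real application, descriptor would then be passed to gpuDevice.createShaderModule(descriptor), with the same layout objects reused in the later createRenderPipeline() call so the hinted compilation is not wasted.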
9.1.2. Shader Module Compilation Information
enum GPUCompilationMessageType {
    "error",
    "warning",
    "info",
};

[Exposed=(Window, Worker), Serializable, SecureContext]
interface GPUCompilationMessage {
    readonly attribute DOMString message;
    readonly attribute GPUCompilationMessageType type;
    readonly attribute unsigned long long lineNum;
    readonly attribute unsigned long long linePos;
    readonly attribute unsigned long long offset;
    readonly attribute unsigned long long length;
};

[Exposed=(Window, Worker), Serializable, SecureContext]
interface GPUCompilationInfo {
    readonly attribute FrozenArray<GPUCompilationMessage> messages;
};
A GPUCompilationMessage is an informational, warning, or error message generated by the
GPUShaderModule compiler. The messages are intended to be human readable to help developers
diagnose issues with their shader code. Each message may correspond to
a single point or range of the shader source, or may be unassociated with any specific part of the code.
GPUCompilationMessage has the following attributes:
message, of type DOMString, readonly
The human-readable, localizable text for this compilation message.
Note: The message should follow the best practices for language and direction information. This includes making use of any future standards which may emerge regarding the reporting of string language and direction metadata.
Editorial note: At the time of this writing, no language/direction recommendation is available that provides compatibility and consistency with legacy APIs, but when there is, adopt it formally.

type, of type GPUCompilationMessageType, readonly
The severity level of the message.
If the type is "error", it corresponds to a shader-creation error.

lineNum, of type unsigned long long, readonly
The line number in the shader code the message corresponds to. Value is one-based, such that a lineNum of 1 indicates the first line of the shader code. Lines are delimited by line breaks.
If the message corresponds to a substring, this points to the line on which the substring begins. Must be 0 if the message does not correspond to any specific point in the shader code.

linePos, of type unsigned long long, readonly
The offset, in UTF-16 code units, from the beginning of line lineNum of the shader code to the point or beginning of the substring that the message corresponds to. Value is one-based, such that a linePos of 1 indicates the first code unit of the line.
If message corresponds to a substring, this points to the first UTF-16 code unit of the substring. Must be 0 if the message does not correspond to any specific point in the shader code.

offset, of type unsigned long long, readonly
The offset from the beginning of the shader code in UTF-16 code units to the point or beginning of the substring that message corresponds to. Must reference the same position as lineNum and linePos. Must be 0 if the message does not correspond to any specific point in the shader code.

length, of type unsigned long long, readonly
The number of UTF-16 code units in the substring that message corresponds to. If the message does not correspond with a substring, then length must be 0.
Note: GPUCompilationMessage.lineNum and
GPUCompilationMessage.linePos are one-based since the most common use
for them is expected to be printing human readable messages that can be correlated with the line and
column numbers shown in many text editors.
Note: GPUCompilationMessage.offset and
GPUCompilationMessage.length are appropriate to pass to
substr() in order to retrieve the substring of the shader code the
message corresponds to.
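The two notes above can be illustrated with a short sketch. The msg object here is hypothetical, with assumed values, not output from a real compiler:

```javascript
// Given a shader source and a GPUCompilationMessage-like object (values
// assumed for illustration), offset/length recover the reported substring
// via substr(), while the one-based lineNum/linePos print the same way a
// text editor shows line and column numbers.
const code = 'fn main() {\n  retrn 1.0;\n}';

// Hypothetical message pointing at the misspelled "retrn" on line 2:
const msg = { lineNum: 2, linePos: 3, offset: 14, length: 5 };

const reported = code.substr(msg.offset, msg.length);
console.log(reported);                        // → "retrn"
console.log(`${msg.lineNum}:${msg.linePos}`); // → "2:3"
```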
getCompilationInfo()
Returns any messages generated during the GPUShaderModule’s compilation. The locations, order, and contents of messages are implementation-defined. In particular, messages aren’t necessarily ordered by lineNum.
Called on: GPUShaderModule this.
Returns: Promise<GPUCompilationInfo>
Content timeline steps:
1. Let contentTimeline be the current Content timeline.
2. Let promise be a new promise.
3. Issue the synchronization steps on the Device timeline of this.
4. Return promise.
Device timeline synchronization steps:
1. Let event occur upon the (successful or unsuccessful) completion of shader module creation for this.
2. Listen for timeline event event on this.[[device]], handled by the subsequent steps on contentTimeline.
Content timeline steps:
1. Let info be a new GPUCompilationInfo.
2. Let messages be a list of any errors, warnings, or informational messages generated during shader module creation for this, or the empty list [] if the device was lost.
3. For each message in messages:
    1. Let m be a new GPUCompilationMessage.
    2. Set m.message to be the text of message.
    3. If message is associated with a specific substring or position within the shader code:
        1. Set m.lineNum to the one-based number of the first line that the message refers to.
        2. Set m.linePos to the one-based number of the first UTF-16 code unit on m.lineNum that the message refers to, or 1 if the message refers to the entire line.
        3. Set m.offset to the number of UTF-16 code units from the beginning of the shader to the beginning of the substring or position that message refers to.
        4. Set m.length to the length of the substring in UTF-16 code units that message refers to, or 0 if message refers to a position.
    4. Otherwise:
        1. Set m.lineNum, m.linePos, m.offset, and m.length to 0.
    5. Append m to info.messages.
4. Resolve promise with info.
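A small sketch of consuming these messages follows. Since no GPU device is available here, info is a plain stand-in object with the same attribute names; in a real application it would come from await shaderModule.getCompilationInfo(). The formatMessages helper is an assumption for illustration, not part of the API.

```javascript
// Format GPUCompilationInfo-style messages for a console. A lineNum of 0
// means the message is not tied to any specific point in the shader code.
function formatMessages(info) {
  return info.messages.map((m) =>
    m.lineNum === 0
      ? `${m.type}: ${m.message}`
      : `${m.type} at ${m.lineNum}:${m.linePos}: ${m.message}`);
}

// Stand-in for the result of `await shaderModule.getCompilationInfo()`:
const info = {
  messages: [
    { type: 'error', message: 'unknown identifier', lineNum: 2, linePos: 3 },
    { type: 'info', message: 'module compiled', lineNum: 0, linePos: 0 },
  ],
};
console.log(formatMessages(info));
// → ['error at 2:3: unknown identifier', 'info: module compiled']
```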