1. Introduction
This section is non-normative.
Graphics Processing Units, or GPUs for short, have been essential in enabling rich rendering and computational applications in personal computing. WebGPU is an API that exposes the capabilities of GPU hardware for the Web. The API is designed from the ground up to efficiently map to the Vulkan, Direct3D 12, and Metal native GPU APIs. WebGPU is not related to WebGL and does not explicitly target OpenGL ES.
WebGPU sees physical GPU hardware as GPUAdapters. It provides a connection to an adapter via GPUDevice, which manages resources, and the device's GPUQueues, which execute commands. GPUDevice may have its own memory with high-speed access to the processing units. GPUBuffer and GPUTexture are the physical resources backed by GPU memory. GPUCommandBuffer and GPURenderBundle are containers for user-recorded commands. GPUShaderModule contains shader code. The other resources, such as GPUSampler or GPUBindGroup, configure the way physical resources are used by the GPU.
GPUs execute commands encoded in GPUCommandBuffers by feeding data through a pipeline, which is a mix of fixed-function and programmable stages. Programmable stages execute shaders, which are special programs designed to run on GPU hardware. Most of the state of a pipeline is defined by a GPURenderPipeline or a GPUComputePipeline object. The state not included in these pipeline objects is set during encoding with commands, such as beginRenderPass() or setBlendConstant().
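For example, a non-normative sketch of this split (the shader module, entry point names, and attachment setup are placeholders; most state is fixed in the pipeline at creation, while the blend constant is supplied during encoding):

// Sketch only: `device` and `shaderModule` are assumed to exist already.
const pipeline = device.createRenderPipeline({
  vertex: { module: shaderModule, entryPoint: 'vs_main' },
  fragment: {
    module: shaderModule,
    entryPoint: 'fs_main',
    targets: [{ format: 'bgra8unorm' }],
  },
  primitive: { topology: 'triangle-list' },
});

// State not baked into the pipeline is set while encoding the render pass.
const encoder = device.createCommandEncoder();
const pass = encoder.beginRenderPass({ colorAttachments: [/* ... */] });
pass.setPipeline(pipeline);
pass.setBlendConstant([0.2, 0.2, 0.2, 1.0]); // encoder-set state
pass.draw(3);
pass.endPass();
device.queue.submit([encoder.finish()]);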
2. Malicious use considerations
This section is non-normative. It describes the risks associated with exposing this API on the Web.
2.1. Security
The security requirements for WebGPU are the same as ever for the web, and are likewise non-negotiable. The general approach is to strictly validate all commands before they reach the GPU, ensuring that a page can only work with its own data.
2.1.1. CPU-based undefined behavior
A WebGPU implementation translates the workloads issued by the user into API commands specific to the target platform. Native APIs specify the valid usage for the commands (for example, see vkCreateDescriptorSetLayout) and generally don’t guarantee any outcome if the valid usage rules are not followed. This is called "undefined behavior", and it can be exploited by an attacker to access memory they don’t own, or force the driver to execute arbitrary code.
In order to disallow insecure usage, the range of allowed WebGPU behaviors is defined for any input. An implementation has to validate all input from the user and only reach the driver with valid workloads. This document specifies all the error conditions and handling semantics. For example, specifying the same buffer with intersecting ranges in both the "source" and "destination" of copyBufferToBuffer() results in GPUCommandEncoder generating an error, and no other operation occurring.
See § 20 Errors & Debugging for more information about error handling.
2.2. GPU-based undefined behavior
WebGPU shaders are executed by the compute units inside GPU hardware. In native APIs, some of the shader instructions may result in undefined behavior on the GPU. In order to address that, the shader instruction set and its defined behaviors are strictly defined by WebGPU. When a shader is provided to createShaderModule(), the WebGPU implementation has to validate it before doing any translation (to platform-specific shaders) or transformation passes.
2.3. Uninitialized data
Generally, allocating new memory may expose the leftover data of other applications running on the system. In order to address that, WebGPU conceptually initializes all the resources to zero, although in practice an implementation may skip this step if it sees the developer initializing the contents manually. This includes variables and shared workgroup memory inside shaders.
The precise mechanism of clearing the workgroup memory can differ between platforms. If the native API does not provide facilities to clear it, the WebGPU implementation transforms the compute shader to first do a clear across all invocations, synchronize them, and then continue executing the developer's code.
2.4. Out-of-bounds access in shaders
Shaders can access physical resources either directly (for example, as a "uniform" GPUBufferBinding), or via texture units, which are fixed-function hardware blocks that handle texture coordinate conversions. Validation on the API side can only guarantee that all the inputs to the shader are provided and that they have the correct usage and types. The host API side cannot guarantee that the data is accessed within bounds if the texture units are not involved.
define the host API distinct from the shader API
In order to prevent the shaders from accessing GPU memory an application doesn’t own, the WebGPU implementation may enable a special mode (called "robust buffer access") in the driver that guarantees that the access is limited to buffer bounds.
Alternatively, an implementation may transform the shader code by inserting manual bounds checks.
When this path is taken, the out-of-bounds checks only apply to array indexing. They aren't needed for plain field access of shader structures due to the minBindingSize validation on the host side.
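For instance, a bind group layout entry can declare a minimum binding size up front, so that fixed-size (non-array) accesses are validated on the host side instead of requiring per-access checks in the shader. A sketch (the size, binding index, and visibility are illustrative only):

// Declaring minBindingSize lets the implementation validate fixed-size
// struct accesses when bind groups are created against this layout.
const layout = device.createBindGroupLayout({
  entries: [{
    binding: 0,
    visibility: GPUShaderStage.COMPUTE,
    buffer: { type: 'storage', minBindingSize: 32 },
  }],
});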
If the shader attempts to load data outside of physical resource bounds, the implementation is allowed to:
-
return a value at a different location within the resource bounds
-
return a value vector of "(0, 0, 0, X)" with any "X"
-
partially discard the draw or dispatch call
If the shader attempts to write data outside of physical resource bounds, the implementation is allowed to:
-
write the value to a different location within the resource bounds
-
discard the write operation
-
partially discard the draw or dispatch call
2.5. Invalid data
When uploading floating-point data from CPU to GPU, or generating it on the GPU, we may end up with a binary representation that doesn't correspond to a valid number, such as infinity or NaN (not-a-number). The GPU behavior in this case is subject to the accuracy of the GPU hardware implementation of the IEEE-754 standard. WebGPU guarantees that introducing invalid floating-point numbers will only affect the results of arithmetic computations and will not have other side effects.
2.5.1. Driver bugs
GPU drivers are subject to bugs like any other software. If a bug occurs, an attacker could possibly exploit the incorrect behavior of the driver to get access to unprivileged data. In order to reduce the risk, the WebGPU working group will coordinate with GPU vendors to integrate the WebGPU Conformance Test Suite (CTS) as part of their driver testing process, like it was done for WebGL. WebGPU implementations are expected to have workarounds for some of the discovered bugs, and disable WebGPU on drivers with known bugs that can’t be worked around.
2.5.2. Timing attacks
WebGPU is designed for multi-threaded use via Web Workers. As such, it is designed not to expose users to modern high-precision timing attacks. Some of the objects, like GPUBuffer or GPUQueue, have shared state which can be simultaneously accessed. This allows race conditions to occur, similar to those of accessing a SharedArrayBuffer from multiple Web Workers, which makes the thread scheduling observable. WebGPU addresses this by limiting the ability to deserialize (or share) objects only to the agents inside the same agent cluster, and only if the cross-origin isolated policies are in place. This restriction matches the mitigations against malicious SharedArrayBuffer use. Similarly, the user agent may also serialize the agents sharing any handles to prevent any concurrency entirely. In the end, the attack surface for races on shared state in WebGPU will be a small subset of the SharedArrayBuffer attacks.
WebGPU also specifies the "timestamp-query" feature, which provides high-precision timing of GPU operations. The feature is optional, and a WebGPU implementation may limit its exposure only to those scenarios that are trusted. Alternatively, the timing query results could be processed by a compute shader and aligned to a lower precision.
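A sketch of requesting the optional feature at device creation; since the feature may not be exposed, the availability check against the adapter is part of the example:

const adapter = await navigator.gpu.requestAdapter();

// "timestamp-query" is optional; only request it when the adapter lists it.
const device = await adapter.requestDevice({
  nonGuaranteedFeatures: adapter.features.has('timestamp-query')
    ? ['timestamp-query']
    : [],
});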
2.5.3. Row hammer attacks
Row hammer is a class of attacks that exploit the leaking of states in DRAM cells. It could be used on a GPU. WebGPU does not have any specific mitigations in place, and relies on platform-level solutions, such as reduced memory refresh intervals.
2.6. Denial of service
WebGPU applications have access to GPU memory and compute units. A WebGPU implementation may limit the available GPU memory to an application, in order to keep other applications responsive. For GPU processing time, a WebGPU implementation may set up a "watchdog" timer that makes sure an application doesn't cause GPU unresponsiveness for more than a few seconds. These measures are similar to those used in WebGL.
2.7. Workload identification
WebGPU provides access to constrained global resources shared between different programs (and web pages) running on the same machine. An application can try to indirectly probe how constrained these global resources are, in order to reason about workloads performed by other open web pages, based on the patterns of usage of these shared resources. These issues are generally analogous to issues with JavaScript, such as system memory and CPU execution throughput. WebGPU does not provide any additional mitigations for this.
2.7.1. Memory resources
WebGPU exposes fallible allocations from machine-global memory heaps, such as VRAM. This allows for probing the size of the system’s remaining available memory (for a given heap type) by attempting to allocate and watching for allocation failures.
GPUs internally have one or more (typically only two) heaps of memory shared by all running applications. When a heap is depleted, WebGPU would fail to create a resource. This is observable, which may allow a malicious application to guess what heaps are used by other applications, and how much they allocate from them.
2.7.2. Computation resources
If one site uses WebGPU at the same time as another, it may observe the increase in time it takes to process some work. For example, if a site constantly submits compute workloads and tracks completion of work on the queue, it may observe that something else also started using the GPU.
A GPU has many parts that can be tested independently, such as the arithmetic units, texture sampling units, atomic units, etc. A malicious application may sense when some of these units are stressed, and attempt to guess the workload of another application by analyzing the stress patterns. This is analogous to the realities of CPU execution of JavaScript.
2.8. Privacy
2.8.1. Machine-specific limits
WebGPU can expose a lot of detail on the underlying GPU architecture and the device geometry. This includes available physical adapters, many limits on the GPU and CPU resources that could be used (such as the maximum texture size), and any optional hardware-specific capabilities that are available.
User agents are not obligated to expose the real hardware limits; they are in full control of how much of the machine specifics are exposed. One strategy to reduce fingerprinting is to bin all the target platforms into a small number of bins. In general, the privacy impact of exposing the hardware limits matches that of WebGL.
The default limits are also deliberately high enough to allow most applications to work without requesting higher limits. All usage of the API is validated according to the requested limits, so the actual hardware capabilities are not exposed to users by accident.
2.8.2. Machine-specific artifacts
There are some machine-specific rasterization/precision artifacts and performance differences that can be observed roughly in the same way as in WebGL. This applies to rasterization coverage and patterns, interpolation precision of the varyings between shader stages, compute unit scheduling, and more aspects of execution.
Generally, rasterization and precision fingerprints are identical across most or all of the devices of each vendor. Performance differences are relatively intractable, but also relatively low-signal (as with JS execution performance).
Privacy-critical applications and user agents should utilize software implementations to eliminate such artifacts.
2.8.3. Machine-specific performance
Another factor for differentiating users is measuring the performance of specific operations on the GPU. Even with low precision timing, repeated execution of an operation can show if the user's machine is fast at specific workloads. This is a fairly common vector (present in both WebGL and JavaScript), but it's also low-signal and relatively intractable to truly normalize.
WebGPU compute pipelines expose access to the GPU unobstructed by the fixed-function hardware. This poses an additional risk for unique device fingerprinting. User agents can take steps to dissociate logical GPU invocations from actual compute units to reduce this risk.
2.8.4. User Agent State
This specification doesn’t define any additional user-agent state for an origin.
However it is expected that user agents will have compilation caches for the results of expensive compilation like GPUShaderModule, GPURenderPipeline and GPUComputePipeline. These caches are important to improve the loading time of WebGPU applications after the first visit.
For the specification, these caches are indistinguishable from incredibly fast compilation, but for applications it would be easy to measure how long createComputePipelineAsync() takes to resolve. This can leak information across origins (like "did the user access a site with this specific shader"), so user agents should follow best practices in storage partitioning.
The system’s GPU driver may also have its own cache of compiled shaders and pipelines. User agents may want to disable these when at all possible, or add per-partition data to shaders in ways that will make the GPU driver consider them different.
3. Fundamentals
3.1. Conventions
3.1.1. Dot Syntax
In this specification, the . ("dot") syntax, common in programming languages, is used. The phrasing "Foo.Bar" means "the Bar member of the value (or interface) Foo."

The ?. ("optional chaining") syntax, adopted from JavaScript, is also used. The phrasing "Foo?.Bar" means "if Foo is null or undefined, Foo; otherwise, Foo.Bar".

For example, where buffer is a GPUBuffer, buffer?.[[device]].[[adapter]] means "if buffer is null or undefined, then undefined; otherwise, the [[adapter]] internal slot of the [[device]] internal slot of buffer".
3.1.2. Internal Objects
An internal object is a conceptual, non-exposed WebGPU object. Internal objects track the state of an API object and hold any underlying implementation. If the state of a particular internal object can change in parallel from multiple agents, those changes are always atomic with respect to all agents.
Note: An "agent" refers to a JavaScript "thread" (i.e. main thread, or Web Worker).
3.1.3. WebGPU Interfaces
A WebGPU interface is an exposed interface which encapsulates an internal object. It provides the interface through which the internal object's state is changed.
As a matter of convention, if a WebGPU interface is referred to as invalid, it means that the internal object it encapsulates is invalid.
Any interface which includes GPUObjectBase is a WebGPU interface.

interface mixin GPUObjectBase {
    attribute USVString? label;
};

GPUObjectBase has the following attributes:

label, of type USVString, nullable
    A label which can be used by development tools (such as error/warning messages, browser developer tools, or platform debugging utilities) to identify the underlying internal object to the developer. It has no specified format, and therefore cannot be reliably machine-parsed.
    In any given situation, the user agent may or may not choose to use this label.

GPUObjectBase has the following internal slots:

[[device]], of type device, readonly
    An internal slot holding the device which owns the internal object.
3.1.4. Object Descriptors
An object descriptor holds the information needed to create an object, which is typically done via one of the create* methods of GPUDevice.

dictionary GPUObjectDescriptorBase {
    USVString label;
};

GPUObjectDescriptorBase has the following members:

label, of type USVString
    The initial value of GPUObjectBase.label.
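For example, a label supplied in a descriptor becomes the initial label of the created object (the label text and buffer parameters here are arbitrary):

const buffer = device.createBuffer({
  label: 'particle positions',   // aids debugging; no specified format
  size: 1024,
  usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_DST,
});
console.log(buffer.label); // "particle positions"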
3.2. Invalid Internal Objects & Contagious Invalidity
If an object is successfully created, it is valid at that moment. An internal object may be invalid. It may become invalid during its lifetime, but it will never become valid again.
Consider separating "invalid" from "destroyed". This would let validity be immutable, and only operations involving devices, buffers, and textures (e.g. submit, map) would check those objects' [[destroyed]] state (explicitly).
- If there is an error in the creation of an object, it is immediately invalid. This can happen, for example, if the object descriptor doesn't describe a valid object, or if there is not enough memory to allocate a resource.
- If an object is explicitly destroyed (e.g. GPUBuffer.destroy()), it becomes invalid.
- If the device that owns an object is lost, the object becomes invalid.
To determine if a GPUObjectBase object is valid to use with a targetObject, run the following steps:
- If any of the following conditions are unsatisfied, return false:
  - object is valid.
  - object.[[device]] is valid.
  - If targetObject is a GPUDevice, object.[[device]] is targetObject.
  - Otherwise, object.[[device]] is targetObject.[[device]].
- Return true.
3.3. Coordinate Systems
WebGPU’s coordinate systems match DirectX and Metal’s coordinate systems in a graphics pipeline.
- Y-axis is up in normalized device coordinates (NDC): point(-1.0, -1.0) in NDC is located at the bottom-left corner of NDC. In addition, x and y in NDC should be between -1.0 and 1.0 inclusive, while z in NDC should be between 0.0 and 1.0 inclusive. Vertices out of this range in NDC will not introduce any errors, but they will be clipped.
- Y-axis is down in framebuffer coordinates, viewport coordinates and fragment/pixel coordinates: origin(0, 0) is located at the top-left corner in these coordinate systems.
- Window/present coordinates match framebuffer coordinates.
- The UV of origin(0, 0) in texture coordinates represents the first texel (the lowest byte) in texture memory.
3.4. Programming Model
3.4.1. Timelines
This section is non-normative.
A computer system with a user agent at the front-end and GPU at the back-end has components working on different timelines in parallel:
- Content timeline
  Associated with the execution of the Web script. It includes calling all methods described by this specification.
  Steps executed on the content timeline look like this.
- Device timeline
  Associated with the GPU device operations that are issued by the user agent. It includes creation of adapters, devices, and GPU resources and state objects, which are typically synchronous operations from the point of view of the user agent part that controls the GPU, but can live in a separate OS process.
  Steps executed on the device timeline look like this.
- Queue timeline
  Associated with the execution of operations on the compute units of the GPU. It includes actual draw, copy, and compute jobs that run on the GPU.
  Steps executed on the queue timeline look like this.
In this specification, asynchronous operations are used when the result value depends on work that happens on any timeline other than the Content timeline. They are represented by callbacks and promises in JavaScript.
For example, GPUComputePassEncoder.dispatch():
- User encodes a dispatch command by calling a method of the GPUComputePassEncoder, which happens on the Content timeline.
- User issues GPUQueue.submit() that hands over the GPUCommandBuffer to the user agent, which processes it on the Device timeline by calling the OS driver to do a low-level submission.
- The submit gets dispatched by the GPU invocation scheduler onto the actual compute units for execution, which happens on the Queue timeline.
Similarly, GPUDevice.createBuffer():
- User fills out a GPUBufferDescriptor and creates a GPUBuffer with it, which happens on the Content timeline.
- User agent creates a low-level buffer on the Device timeline.
And GPUBuffer.mapAsync():
- User requests to map a GPUBuffer on the Content timeline and gets a promise in return.
- User agent checks if the buffer is currently used by the GPU and makes a reminder to itself to check back when this usage is over.
- After the GPU operating on the Queue timeline is done using the buffer, the user agent maps it to memory and resolves the promise.
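A non-normative sketch tying the three examples together (pipeline, bindGroup, storageBuffer and a readbackBuffer with MAP_READ | COPY_DST usage are assumed to exist): the method calls happen on the Content timeline, the submission is processed on the Device timeline, and the dispatch plus the readback wait happen on the Queue timeline.

// Content timeline: encode and submit work.
const encoder = device.createCommandEncoder();
const pass = encoder.beginComputePass();
pass.setPipeline(pipeline);
pass.setBindGroup(0, bindGroup);
pass.dispatch(64);            // executed later, on the Queue timeline
pass.endPass();
encoder.copyBufferToBuffer(storageBuffer, 0, readbackBuffer, 0, 256);
device.queue.submit([encoder.finish()]);  // handed to the Device timeline

// Resolves only after the Queue timeline is done using the buffer.
await readbackBuffer.mapAsync(GPUMapMode.READ);
const results = new Float32Array(readbackBuffer.getMappedRange());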
3.4.2. Memory Model
This section is non-normative.
Once a GPUDevice has been obtained during an application initialization routine, we can describe the WebGPU platform as consisting of the following layers:
-
User agent implementing the specification.
-
Operating system with low-level native API drivers for this device.
-
Actual CPU and GPU hardware.
Each layer of the WebGPU platform may have different memory types that the user agent needs to consider when implementing the specification:
- The script-owned memory, such as an ArrayBuffer created by the script, is generally not accessible by a GPU driver.
- A user agent may have different processes responsible for running the content and communication to the GPU driver. In this case, it uses inter-process shared memory to transfer data.
- Dedicated GPUs have their own memory with high bandwidth, while integrated GPUs typically share memory with the system.
Most physical resources are allocated in the memory of type that is efficient for computation or rendering by the GPU. When the user needs to provide new data to the GPU, the data may first need to cross the process boundary in order to reach the user agent part that communicates with the GPU driver. Then it may need to be made visible to the driver, which sometimes requires a copy into driver-allocated staging memory. Finally, it may need to be transferred to the dedicated GPU memory, potentially changing the internal layout into one that is most efficient for GPUs to operate on.
All of these transitions are done by the WebGPU implementation of the user agent.
Note: This example describes the worst case, while in practice the implementation may not need to cross the process boundary, or may be able to expose the driver-managed memory directly to the user behind an ArrayBuffer, thus avoiding any data copies.
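For example, a simple upload can be expressed with GPUQueue.writeBuffer(), leaving all of the staging and copies described above to the implementation (a sketch; dstBuffer is assumed to be a GPUBuffer with COPY_DST usage):

const data = new Float32Array([0, 1, 2, 3]);

// The implementation decides how the bytes cross the process boundary,
// whether a staging copy is needed, and when the GPU-side copy happens.
device.queue.writeBuffer(dstBuffer, /* bufferOffset */ 0, data);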
3.4.3. Multi-Threading
3.4.4. Resource Usages
A physical resource can be used on the GPU with an internal usage:
- input
  Buffer with input data for draw or dispatch calls. Preserves the contents. Allowed by buffer INDEX, buffer VERTEX, or buffer INDIRECT.
- constant
  Resource bindings that are constant from the shader point of view. Preserves the contents. Allowed by buffer UNIFORM or texture SAMPLED.
- storage
  Read-write storage resource binding. Allowed by buffer STORAGE.
- storage-read
  Read-only storage resource bindings. Preserves the contents. Allowed by buffer STORAGE or texture STORAGE.
- storage-write
  Write-only storage resource bindings. Allowed by texture STORAGE.
- attachment
  Texture used as an output attachment in a render pass. Allowed by texture RENDER_ATTACHMENT.
- attachment-read
  Texture used as a read-only attachment in a render pass. Preserves the contents. Allowed by texture RENDER_ATTACHMENT.
Textures may consist of separate mipmap levels and array layers, which can be used differently at any given time. Each such texture subresource is uniquely identified by a texture, mipmap level, (for 2d textures only) array layer, and aspect. We define subresource to be either a whole buffer, or a texture subresource.
A list of internal usages U is a compatible usage list if one of the following holds:
- Each usage in U is input, constant, storage-read, or attachment-read.
- Each usage in U is storage.
- Each usage in U is storage-write.
- U contains exactly one element: attachment.
Enforcing that the usages are only combined into a compatible usage list allows the API to limit when data races can occur in working with memory. That property makes applications written against WebGPU more likely to run without modification on different platforms.
Generally, when an implementation processes an operation that uses a subresource in a different way than its current usage allows, it schedules a transition of the resource into the new state. In some cases, like within an open GPURenderPassEncoder, such a transition is impossible due to hardware limitations. We define these places as usage scopes. The main usage rule is that, for any one subresource, its list of internal usages within one usage scope must be a compatible usage list.
For example, binding the same buffer for storage as well as for input within the same GPURenderPassEncoder would put the encoder, as well as the owning GPUCommandEncoder, into the error state. This combination of usages does not make a compatible usage list.
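A sketch of the distinction (computePipeline, renderPipeline, the bind group, renderPassDescriptor, and buffer with suitable usages are assumed to exist elsewhere): the same buffer may be used as storage in one usage scope and as a vertex input in another, but not both within a single render pass.

const encoder = device.createCommandEncoder();

// Usage scope 1: a compute dispatch writes the buffer as "storage".
const compute = encoder.beginComputePass();
compute.setPipeline(computePipeline);
compute.setBindGroup(0, bindGroupUsingBufferAsStorage);
compute.dispatch(64);
compute.endPass();

// Usage scope 2: a render pass reads the same buffer as "input" (vertex).
// This is fine because it is a separate usage scope.
const render = encoder.beginRenderPass(renderPassDescriptor);
render.setPipeline(renderPipeline);
render.setVertexBuffer(0, buffer);
render.draw(3);
render.endPass();

// Binding `buffer` as storage *and* as a vertex buffer inside the same
// render pass would not form a compatible usage list and would make the
// encoder invalid.
device.queue.submit([encoder.finish()]);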
Note: race conditions among multiple writable storage buffer/texture usages in a single usage scope are allowed.

The subresources of textures included in the views provided to GPURenderPassColorAttachment.view and GPURenderPassColorAttachment.resolveTarget are considered to be used as attachment for the usage scope of this render pass.
The physical size of a texture subresource is the dimension of the texture subresource in texels, including the possible extra padding needed to form complete texel blocks in the subresource.
- For pixel-based GPUTextureFormats, the physical size is always equal to the size of the texture subresource used by the sampling hardware.
- Textures in block-based compressed GPUTextureFormats always have a mipmap level 0 whose [[descriptor]].size is a multiple of the texel block size, but the lower mipmap levels might not be multiples of the texel block size and can have paddings.

For example, for a GPUTexture in a BC format whose [[descriptor]].size is {60, 60, 1}: when sampling the GPUTexture at mipmap level 2, the sampling hardware uses {15, 15, 1} as the size of the texture subresource, while its physical size is {16, 16, 1}, as the block-compression algorithm can only operate on 4x4 texel blocks.

3.4.5. Synchronization
For each subresource of a physical resource, its set of internal usage flags is tracked on the Queue timeline.
This section will need to be revised to support multiple queues.
On the Queue timeline, there is an ordered sequence of usage scopes. For the duration of each scope, the set of internal usage flags of any given subresource is constant. A subresource may transition to new usages at the boundaries between usage scopes.
This specification defines the following usage scopes:
- Outside of a pass (in GPUCommandEncoder), each (non-state-setting) command is one usage scope (e.g. copyBufferToTexture()).
- In a compute pass, each dispatch command (dispatch() or dispatchIndirect()) is one usage scope. A subresource is "used" in the usage scope if it's accessible by the command. Within a dispatch, every subresource in every currently bound GPUBindGroup is "used" in the usage scope. State-setting compute pass commands, like setBindGroup(index, bindGroup, dynamicOffsets), do not contribute directly to a usage scope; they instead change the state that is checked in dispatch commands.
- One render pass is one usage scope. A subresource is "used" in the usage scope if it's referenced by any (state-setting or non-state-setting) command. For example, in setBindGroup(index, bindGroup, dynamicOffsets), every subresource in bindGroup is "used" in the render pass's usage scope.
The above should probably talk about GPU commands. But we don’t have a way to reference specific GPU commands (like dispatch) yet.
In particular, the following are all "used" in the corresponding usage scope:
- In a render pass, subresources used in any setBindGroup() call, regardless of whether the currently bound pipeline's shader or layout actually depends on these bindings, or the bind group is shadowed by another 'set' call.
- A buffer used in any setVertexBuffer() call, regardless of whether any draw call depends on this buffer, or this buffer is shadowed by another 'set' call.
- A buffer used in any setIndexBuffer() call, regardless of whether any draw call depends on this buffer, or this buffer is shadowed by another 'set' call.
- A texture subresource used as a color attachment, resolve attachment, or depth/stencil attachment in GPURenderPassDescriptor by beginRenderPass(), regardless of whether the shader actually depends on these attachments.
- Resources used in bind group entries with visibility 0, or visible only to the compute stage but used in a render pass (or vice versa).
During command encoding, every usage of a subresource is recorded in one of the usage scopes in the command buffer. For each usage scope, the implementation performs usage scope validation by composing the list of all internal usage flags of each subresource used in the usage scope. If any of those lists is not a compatible usage list, GPUCommandEncoder.finish() generates a GPUValidationError in the current error scope.
3.5. Core Internal Objects
3.5.1. Adapters
An adapter represents an implementation of WebGPU on the system. Each adapter identifies both an instance of a hardware accelerator (e.g. GPU or CPU) and an instance of a browser’s implementation of WebGPU on top of that accelerator.
If an adapter becomes unavailable, it becomes invalid. Once invalid, it never becomes valid again. Any devices on the adapter, and internal objects owned by those devices, also become invalid.
Note: An adapter may be a physical display adapter (GPU), but it could also be a software renderer. A returned adapter could refer to different physical adapters, or to different browser codepaths or system drivers on the same physical adapters. Applications can hold onto multiple adapters at once (via GPUAdapter) (even if some are invalid), and two of these could refer to different instances of the same physical configuration (e.g. if the GPU was reset or disconnected and reconnected).
An adapter has the following internal slots:
[[features]], of type ordered set<GPUFeatureName>, readonly
    The features which can be used to create devices on this adapter.
[[limits]], of type supported limits, readonly
    The best limits which can be used to create devices on this adapter. Each adapter limit must be the same or better than its default value in supported limits.
[[current]], of type boolean
    Indicates whether the adapter is allowed to vend new devices at this time. Its value may change at any time. It is initially set to true inside requestAdapter(). It becomes false inside "lose the device" and "mark adapters stale". Once set to false, it cannot become true again.
    Note: This mechanism ensures that various adapter-creation scenarios look similar to applications, so they can easily be robust to more scenarios with less testing: first initialization, reinitialization due to an unplugged adapter, reinitialization due to a test GPUDevice.destroy() call, etc. It also ensures applications use the latest system state to make decisions about which adapter to use.

Adapters are exposed via GPUAdapter.
3.5.2. Devices
A device is the logical instantiation of an adapter, through which internal objects are created. It can be shared across multiple agents (e.g. dedicated workers).
A device is the exclusive owner of all internal objects created from it: when the device is lost, it and all objects created on it (directly, e.g. createTexture(), or indirectly, e.g. createView()) become invalid.

A device has the following internal slots:

[[adapter]], of type adapter, readonly
    The adapter from which this device was created.
[[features]], of type ordered set<GPUFeatureName>, readonly
    The features which can be used on this device. No additional features can be used, even if the underlying adapter can support them.
[[limits]], of type supported limits, readonly
    The limits which can be used on this device. No better limits can be used, even if the underlying adapter can support them.
To create a new device, device, from an adapter adapter and a GPUDeviceDescriptor descriptor:
- Set device.[[adapter]] to adapter.
- Set device.[[features]] to the set of values in descriptor.nonGuaranteedFeatures.
- Let device.[[limits]] be a supported limits object with the default values. For each (key, value) pair in descriptor.nonGuaranteedLimits, set the member corresponding to key in device.[[limits]] to the value value.
Any time the user agent needs to revoke access to a device, it calls lose the device(device, undefined).

To lose the device(device, reason):
- Set device.[[adapter]].[[current]] to false.
- explain how to get from device to its "primary" GPUDevice.
- Resolve GPUDevice.lost with a new GPUDeviceLostInfo with reason set to reason and message set to an implementation-defined value.
  Note: message should not disclose unnecessary user/system information and should never be parsed by applications.

Devices are exposed via GPUDevice.
3.6. Optional Capabilities
WebGPU adapters and devices have capabilities, which describe WebGPU functionality that differs between different implementations, typically due to hardware or system software constraints. A capability is either a feature or a limit.
3.6.1. Features
A feature is a set of optional WebGPU functionality that is not supported on all implementations, typically due to hardware or system software constraints.
Each GPUAdapter exposes a set of available features. Only those features may be requested in requestDevice().

Functionality that is part of a feature may only be used if the feature was requested at device creation. See the Feature Index for a description of the functionality each feature enables.
3.6.2. Limits
Each limit is a numeric limit on the usage of WebGPU on a device.
A supported limits object has a value for every defined limit.
Each adapter has a set of supported limits, and devices are created with specific supported limits in place. The device limits are enforced regardless of the adapter's limits.
One limit value may be better than another. A better limit value always relaxes validation, enabling strictly more programs to be valid. For each limit, "better" is defined.
Note: Setting "better" limits may not necessarily be desirable, as they may have a performance impact. Because of this, and to improve portability across devices and implementations, applications should generally request the "worst" limits that work for their content (ideally, the default values).
Each limit also has a default value. Every adapter is guaranteed to support the default value or better. The default is used if a value is not explicitly specified in nonGuaranteedLimits.
| Limit name | Type | Better | Default | Description |
|---|---|---|---|---|
| maxTextureDimension1D | GPUSize32 | Higher | 8192 | The maximum allowed value for the size.width of a texture created with dimension "1d". |
| maxTextureDimension2D | GPUSize32 | Higher | 8192 | The maximum allowed value for the size.width and size.height of a texture created with dimension "2d". |
| maxTextureDimension3D | GPUSize32 | Higher | 2048 | The maximum allowed value for the size.width, size.height and size.depthOrArrayLayers of a texture created with dimension "3d". |
| maxTextureArrayLayers | GPUSize32 | Higher | 2048 | The maximum allowed value for the size.depthOrArrayLayers of a texture created with dimension "1d" or "2d". |
| maxBindGroups | GPUSize32 | Higher | 4 | The maximum number of GPUBindGroupLayouts allowed in bindGroupLayouts when creating a GPUPipelineLayout. |
| maxDynamicUniformBuffersPerPipelineLayout | GPUSize32 | Higher | 8 | The maximum number of GPUBindGroupLayoutEntry entries across a GPUPipelineLayout which are uniform buffers with dynamic offsets. See Exceeds the binding slot limits. |
| maxDynamicStorageBuffersPerPipelineLayout | GPUSize32 | Higher | 4 | The maximum number of GPUBindGroupLayoutEntry entries across a GPUPipelineLayout which are storage buffers with dynamic offsets. See Exceeds the binding slot limits. |
| maxSampledTexturesPerShaderStage | GPUSize32 | Higher | 16 | For each possible GPUShaderStage stage, the maximum number of GPUBindGroupLayoutEntry entries across a GPUPipelineLayout which are sampled textures. See Exceeds the binding slot limits. |
| maxSamplersPerShaderStage | GPUSize32 | Higher | 16 | For each possible GPUShaderStage stage, the maximum number of GPUBindGroupLayoutEntry entries across a GPUPipelineLayout which are samplers. See Exceeds the binding slot limits. |
| maxStorageBuffersPerShaderStage | GPUSize32 | Higher | 4 | For each possible GPUShaderStage stage, the maximum number of GPUBindGroupLayoutEntry entries across a GPUPipelineLayout which are storage buffers. See Exceeds the binding slot limits. |
| maxStorageTexturesPerShaderStage | GPUSize32 | Higher | 4 | For each possible GPUShaderStage stage, the maximum number of GPUBindGroupLayoutEntry entries across a GPUPipelineLayout which are storage textures. See Exceeds the binding slot limits. |
| maxUniformBuffersPerShaderStage | GPUSize32 | Higher | 12 | For each possible GPUShaderStage stage, the maximum number of GPUBindGroupLayoutEntry entries across a GPUPipelineLayout which are uniform buffers. See Exceeds the binding slot limits. |
| maxUniformBufferBindingSize | GPUSize32 | Higher | 16384 | The maximum GPUBufferBinding.size for bindings with a GPUBindGroupLayoutEntry entry for which entry.buffer?.type is "uniform". |
| maxStorageBufferBindingSize | GPUSize32 | Higher | 134217728 (128 MiB) | The maximum GPUBufferBinding.size for bindings with a GPUBindGroupLayoutEntry entry for which entry.buffer?.type is "storage" or "read-only-storage". |
| maxVertexBuffers | GPUSize32 | Higher | 8 | The maximum number of buffers when creating a GPURenderPipeline. |
| maxVertexAttributes | GPUSize32 | Higher | 16 | The maximum number of attributes in total across buffers when creating a GPURenderPipeline. |
| maxVertexBufferArrayStride | GPUSize32 | Higher | 2048 | The maximum allowed arrayStride when creating a GPURenderPipeline. |
3.6.2.1. GPUAdapterLimits
GPUAdapterLimits exposes the limits supported by an adapter. See GPUAdapter.limits.

[Exposed=Window]
interface GPUAdapterLimits {
    readonly attribute unsigned long maxTextureDimension1D;
    readonly attribute unsigned long maxTextureDimension2D;
    readonly attribute unsigned long maxTextureDimension3D;
    readonly attribute unsigned long maxTextureArrayLayers;
    readonly attribute unsigned long maxBindGroups;
    readonly attribute unsigned long maxDynamicUniformBuffersPerPipelineLayout;
    readonly attribute unsigned long maxDynamicStorageBuffersPerPipelineLayout;
    readonly attribute unsigned long maxSampledTexturesPerShaderStage;
    readonly attribute unsigned long maxSamplersPerShaderStage;
    readonly attribute unsigned long maxStorageBuffersPerShaderStage;
    readonly attribute unsigned long maxStorageTexturesPerShaderStage;
    readonly attribute unsigned long maxUniformBuffersPerShaderStage;
    readonly attribute unsigned long maxUniformBufferBindingSize;
    readonly attribute unsigned long maxStorageBufferBindingSize;
    readonly attribute unsigned long maxVertexBuffers;
    readonly attribute unsigned long maxVertexAttributes;
    readonly attribute unsigned long maxVertexBufferArrayStride;
};
3.6.2.2. GPUSupportedFeatures
GPUSupportedFeatures is a setlike interface. Its set entries are the GPUFeatureName values of the features supported by an adapter or device. It must only contain strings from the GPUFeatureName enum.

[Exposed=Window]
interface GPUSupportedFeatures {
    readonly setlike<DOMString>;
};

Note: The type of the GPUSupportedFeatures set entries is DOMString to allow user agents to gracefully handle valid GPUFeatureNames which are added in later revisions of the spec but which the user agent has not been updated to recognize yet. If the set entries type was GPUFeatureName, the following code would throw a TypeError rather than reporting false:
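A minimal sketch of such a check; "texture-compression-astc" stands in for a feature name that is not in the GPUFeatureName enum of this revision:

// Returns false (rather than throwing) when the name is unknown,
// because the setlike entries are plain DOMStrings.
if (adapter.features.has('texture-compression-astc')) {
  // Use the feature.
}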
3.7. Origin Restrictions
WebGPU allows accessing image data stored in images, videos, and canvases. Restrictions are imposed on the use of cross-domain media, because shaders can be used to indirectly deduce the contents of textures which have been uploaded to the GPU.
WebGPU disallows uploading an image source if it is not origin-clean. This also implies that the origin-clean flag for a canvas rendered using WebGPU will never be set to false.
For more information on issuing CORS requests for image and video elements, consult:
3.8. Color Spaces
WebGPU does not provide color management. All values within WebGPU (such as texture elements) are raw numeric values, not color-managed color values.
WebGPU does interface with color-managed outputs (via GPUCanvasContext) and inputs (via copyExternalImageToTexture() and importExternalTexture()). Color conversion must be performed between the WebGPU numeric values and the external color values. These interface points locally define a color space, in which the WebGPU numeric values are to be interpreted, from the options defined in CSS Color 4 § 10.2 Predefined colorspaces: srgb, display-p3, a98-rgb, prophoto-rgb, rec2020, xyz, and lab.

enum GPUPredefinedColorSpace {
    "srgb",
};

Replace this with the PredefinedColorSpaceEnum from the canvas color space proposal.
Consider a path for uploading srgb-encoded images into linearly-encoded textures. <https://github.com/gpuweb/gpuweb/issues/1715>
"srgb"
-
The CSS predefined color space srgb.
4. Initialization
4.1. Examples
Need a robust example like the one in ErrorHandling.md, which handles all situations. Possibly also include a simple example with no handling.
4.2. navigator.gpu
A GPU object is available in the Window and DedicatedWorkerGlobalScope contexts through the Navigator and WorkerNavigator interfaces respectively and is exposed via navigator.gpu:

interface mixin NavigatorGPU {
    [SameObject] readonly attribute GPU gpu;
};
Navigator includes NavigatorGPU;
WorkerNavigator includes NavigatorGPU;
4.3. GPU
GPU is the entry point to WebGPU.

[Exposed=(Window, DedicatedWorker)]
interface GPU {
    Promise<GPUAdapter?> requestAdapter(optional GPURequestAdapterOptions options = {});
};

GPU has the following methods:
requestAdapter(options)
    Requests an adapter from the user agent. The user agent chooses whether to return an adapter, and, if so, chooses according to the provided options.
    Called on: GPU this.
    Arguments:
    | Parameter | Type | Nullable | Optional | Description |
    |---|---|---|---|---|
    | options | GPURequestAdapterOptions | ✘ | ✔ | Criteria used to select the adapter. |
    Returns: Promise<GPUAdapter?>
    - Let promise be a new promise.
    - Issue the following steps on the Device timeline of this:
      - If the user agent chooses to return an adapter, it should:
        - Create an adapter adapter with [[current]] set to true, chosen according to the rules in § 4.3.1 Adapter Selection and the criteria in options.
        - Resolve promise with a new GPUAdapter encapsulating adapter.
      - Otherwise, promise resolves with null.
    - Return promise.
GPU has the following internal slots:

[[previously_returned_adapters]], of type ordered set<adapter>
    The set of adapters that have been returned via requestAdapter(). It is used, then cleared, in mark adapters stale.
Upon any change in the system's state that could affect the result of any requestAdapter() call, the user agent should mark adapters stale. For example:
- A physical adapter is added/removed (via plug, driver update, TDR, etc.)
- The system's power configuration has changed (laptop unplugged, power settings changed, etc.)

Additionally, mark adapters stale may be scheduled at any time. User agents may choose to do this often even when there has been no system state change (e.g. several seconds after the last call to requestDevice()). This has no effect on well-formed applications, obfuscates real system state changes, and makes developers more aware that calling requestAdapter() again is always necessary before calling requestDevice().

To mark adapters stale:
- For each adapter in navigator.gpu.[[previously_returned_adapters]]:
  - Set adapter.[[adapter]].[[current]] to false.
- Empty navigator.gpu.[[previously_returned_adapters]].
Update here if an adaptersadded/adapterschanged event is introduced.

Requesting a GPUAdapter:
const adapter = await navigator.gpu.requestAdapter( /* ... */ );
const features = adapter.features;
// ...
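A sketch of reinitialization after a device loss, re-requesting the adapter first as recommended above (error handling is trimmed for brevity; the function name is illustrative):

async function initWebGPU() {
  // Always re-request the adapter: a previously returned adapter may be
  // stale and refuse to vend new devices.
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) throw new Error('WebGPU not available');
  const device = await adapter.requestDevice();

  device.lost.then((info) => {
    console.warn('Device lost:', info.message);
    // Start over from requestAdapter(), not requestDevice().
    initWebGPU();
  });

  return device;
}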
4.3.1. Adapter Selection
GPURequestAdapterOptions provides hints to the user agent indicating what configuration is suitable for the application.

dictionary GPURequestAdapterOptions {
    GPUPowerPreference powerPreference;
};

enum GPUPowerPreference {
    "low-power",
    "high-performance",
};
GPURequestAdapterOptions has the following members:

powerPreference, of type GPUPowerPreference
    Optionally provides a hint indicating what class of adapter should be selected from the system's available adapters. The value of this hint may influence which adapter is chosen, but it must not influence whether an adapter is returned or not.
    Note: The primary utility of this hint is to influence which GPU is used in a multi-GPU system. For instance, some laptops have a low-power integrated GPU and a high-performance discrete GPU.
    Note: Depending on the exact hardware configuration, such as battery status and attached displays or removable GPUs, the user agent may select different adapters given the same power preference. Typically, given the same hardware configuration and state and powerPreference, the user agent is likely to select the same adapter.
    It must be one of the following values:
    undefined (or not present)
        Provides no hint to the user agent.
    "low-power"
        Indicates a request to prioritize power savings over performance.
        Note: Generally, content should use this if it is unlikely to be constrained by drawing performance; for example, if it renders only one frame per second, draws only relatively simple geometry with simple shaders, or uses a small HTML canvas element. Developers are encouraged to use this value if their content allows, since it may significantly improve battery life on portable devices.
    "high-performance"
        Indicates a request to prioritize performance over power consumption.
        Note: By choosing this value, developers should be aware that, for devices created on the resulting adapter, user agents are more likely to force device loss, in order to save power by switching to a lower-power adapter. Developers are encouraged to only specify this value if they believe it is absolutely necessary, since it may significantly decrease battery life on portable devices.
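For example, a sketch of passing the hint (the choice of "high-performance" here is purely illustrative):

// The hint may influence *which* adapter is returned, but never *whether*
// one is returned.
const adapter = await navigator.gpu.requestAdapter({
  powerPreference: 'high-performance',
});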
4.4. GPUAdapter
A GPUAdapter encapsulates an adapter, and describes its capabilities (features and limits).

To get a GPUAdapter, use requestAdapter().

[Exposed=Window]
interface GPUAdapter {
    readonly attribute DOMString name;
    [SameObject] readonly attribute GPUSupportedFeatures features;
    [SameObject] readonly attribute GPUAdapterLimits limits;

    Promise<GPUDevice> requestDevice(optional GPUDeviceDescriptor descriptor = {});
};
GPUAdapter has the following attributes:

name, of type DOMString, readonly
    A human-readable name identifying the adapter. The contents are implementation-defined.
features, of type GPUSupportedFeatures, readonly
    The set of values in this.[[adapter]].[[features]].
limits, of type GPUAdapterLimits, readonly
    The limits in this.[[adapter]].[[limits]].

GPUAdapter has the following internal slots:

[[adapter]], of type adapter, readonly
    The adapter to which this GPUAdapter refers.
GPUAdapter has the following methods:

requestDevice(descriptor)
    Requests a device from the adapter.
    Called on: GPUAdapter this.
    Arguments:
    | Parameter | Type | Nullable | Optional | Description |
    |---|---|---|---|---|
    | descriptor | GPUDeviceDescriptor | ✘ | ✔ | Description of the GPUDevice to request. |
    - Let promise be a new promise.
    - Let adapter be this.[[adapter]].
    - Issue the following steps to the Device timeline:
      - If any of the following requirements are unmet, reject promise with a TypeError and stop.
        - The set of values in descriptor.nonGuaranteedFeatures must be a subset of those in adapter.[[features]].
        - Each key in descriptor.nonGuaranteedLimits must be the name of a member of supported limits.
      - If any of the following requirements are unmet, reject promise with an OperationError and stop.
        - For each type of limit in supported limits, the value of that limit in descriptor.nonGuaranteedLimits must be no better than the value of that limit in adapter.[[limits]].
      - If adapter.[[current]] is false, or the user agent otherwise cannot fulfill the request:
        - Let device be a new device.
        - Lose the device(device, undefined).
          Note: This makes adapter.[[current]] false, if it wasn't already.
          Note: User agents should consider issuing developer-visible warnings in most or all cases when this occurs. Applications should perform reinitialization logic starting with requestAdapter().
        - Resolve promise with a new GPUDevice encapsulating device, and stop.
      - Resolve promise with a new GPUDevice object encapsulating a new device with the capabilities described by descriptor.
    - Return promise.
4.4.1. GPUDeviceDescriptor
GPUDeviceDescriptor describes a device request.

dictionary GPUDeviceDescriptor : GPUObjectDescriptorBase {
    sequence<GPUFeatureName> nonGuaranteedFeatures = [];
    record<DOMString, GPUSize32> nonGuaranteedLimits = {};
};

GPUDeviceDescriptor has the following members:

nonGuaranteedFeatures, of type sequence<GPUFeatureName>, defaulting to []
    The set of GPUFeatureName values in this sequence defines the exact set of features that must be enabled on the device.
nonGuaranteedLimits, of type record<DOMString, GPUSize32>, defaulting to {}
    Defines the exact limits that must be enabled on the device. Each key must be the name of a member of supported limits.
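A sketch of a device request with explicit capabilities; the feature and limit values below are examples only, and a real application would check adapter.features and adapter.limits first, since requesting more than the adapter supports rejects the promise as described above:

const device = await adapter.requestDevice({
  // Must be a subset of adapter.features.
  nonGuaranteedFeatures: ['texture-compression-bc'],
  // Keys must name supported limits; values may not be better than the
  // adapter's limits.
  nonGuaranteedLimits: { maxBindGroups: 4 },
});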
4.4.1.1. GPUFeatureName
Each GPUFeatureName identifies a set of functionality which, if available, allows additional usages of WebGPU that would have otherwise been invalid.

enum GPUFeatureName {
    "depth-clamping",
    "depth24unorm-stencil8",
    "depth32float-stencil8",
    "pipeline-statistics-query",
    "texture-compression-bc",
    "timestamp-query",
};
4.5. GPUDevice
A GPUDevice encapsulates a device and exposes the functionality of that device.

GPUDevice is the top-level interface through which WebGPU interfaces are created.

To get a GPUDevice, use requestDevice().

[Exposed=(Window, DedicatedWorker), Serializable]
interface GPUDevice : EventTarget {
    [SameObject] readonly attribute GPUSupportedFeatures features;
    readonly attribute object limits;

    [SameObject] readonly attribute GPUQueue queue;

    undefined destroy();

    GPUBuffer createBuffer(GPUBufferDescriptor descriptor);
    GPUTexture createTexture(GPUTextureDescriptor descriptor);
    GPUSampler createSampler(optional GPUSamplerDescriptor descriptor = {});
    GPUExternalTexture importExternalTexture(GPUExternalTextureDescriptor descriptor);

    GPUBindGroupLayout createBindGroupLayout(GPUBindGroupLayoutDescriptor descriptor);
    GPUPipelineLayout createPipelineLayout(GPUPipelineLayoutDescriptor descriptor);
    GPUBindGroup createBindGroup(GPUBindGroupDescriptor descriptor);

    GPUShaderModule createShaderModule(GPUShaderModuleDescriptor descriptor);
    GPUComputePipeline createComputePipeline(GPUComputePipelineDescriptor descriptor);
    GPURenderPipeline createRenderPipeline(GPURenderPipelineDescriptor descriptor);
    Promise<GPUComputePipeline> createComputePipelineAsync(GPUComputePipelineDescriptor descriptor);
    Promise<GPURenderPipeline> createRenderPipelineAsync(GPURenderPipelineDescriptor descriptor);

    GPUCommandEncoder createCommandEncoder(optional GPUCommandEncoderDescriptor descriptor = {});
    GPURenderBundleEncoder createRenderBundleEncoder(GPURenderBundleEncoderDescriptor descriptor);

    GPUQuerySet createQuerySet(GPUQuerySetDescriptor descriptor);
};
GPUDevice includes GPUObjectBase;
GPUDevice has the following attributes:

features, of type GPUSupportedFeatures, readonly
    A set containing the GPUFeatureName values of the features supported by the device (i.e. the ones with which it was created).
limits, of type object, readonly
    Exposes the limits supported by the device (which are exactly the ones with which it was created).
queue, of type GPUQueue, readonly
    The primary GPUQueue for this device.
GPUDevice has the following internal slots:

GPUDevice has the methods listed in its WebIDL definition above. Those not defined here are defined elsewhere in this document.
destroy()
    Destroys the device, preventing further operations on it. Outstanding asynchronous operations will fail.
    Called on: GPUDevice this.
    - Lose the device(this.[[device]], "destroyed").
    Note: Since no further operations can occur on this device, implementations can free resource allocations and abort outstanding asynchronous operations immediately.
GPUDevice objects are serializable objects.

Their serialization steps, given value, serialized, and forStorage, are:
- Set serialized.agentCluster to be the surrounding agent's agent cluster.
- If serialized.agentCluster's cross-origin isolated capability is false, throw a "DataCloneError".
- If forStorage is true, throw a "DataCloneError".
- Set serialized.device to the value of value.[[device]].

Their deserialization steps, given serialized and value, are:
- If serialized.agentCluster is not the surrounding agent's agent cluster, throw a "DataCloneError".
- Set value.[[device]] to serialized.device.

GPUDevice doesn't really need the cross-origin policy restriction. It should be usable from multiple agents regardless. Once we describe the serialization of buffers, textures, and queues, the COOP+COEP logic should be moved in there.
5. Buffers
5.1. GPUBuffer
define buffer (internal object)
A GPUBuffer represents a block of memory that can be used in GPU operations. Data is stored in linear layout, meaning that each byte of the allocation can be addressed by its offset from the start of the GPUBuffer, subject to alignment restrictions depending on the operation. Some GPUBuffers can be mapped, which makes the block of memory accessible via an ArrayBuffer called its mapping.

GPUBuffers are created via GPUDevice.createBuffer(descriptor), which returns a new buffer in the mapped or unmapped state.

[Exposed=Window, Serializable]
interface GPUBuffer {
    Promise<undefined> mapAsync(GPUMapModeFlags mode, optional GPUSize64 offset = 0, optional GPUSize64 size);
    ArrayBuffer getMappedRange(optional GPUSize64 offset = 0, optional GPUSize64 size);
    undefined unmap();

    undefined destroy();
};
GPUBuffer includes GPUObjectBase;
GPUBuffer has the following internal slots:

[[size]], of type GPUSize64
    The length of the GPUBuffer allocation in bytes.
[[usage]], of type GPUBufferUsageFlags
    The allowed usages for this GPUBuffer.
[[state]], of type buffer state
    The current state of the GPUBuffer.
[[mapping]], of type ArrayBuffer or Promise or null
    The mapping for this GPUBuffer. The ArrayBuffer isn't directly accessible and is instead accessed through views into it, called the mapped ranges, that are stored in [[mapped_ranges]].
    Specify [[mapping]] in terms of DataBlock similarly to AllocateArrayBuffer? <https://github.com/gpuweb/gpuweb/issues/605>
[[mapping_range]], of type list<unsigned long long> or null
    The range of this GPUBuffer that is mapped.
[[mapped_ranges]], of type list<ArrayBuffer> or null
    The ArrayBuffers returned via getMappedRange to the application. They are tracked so they can be detached when unmap is called.
[[map_mode]], of type GPUMapModeFlags
    The GPUMapModeFlags of the last call to mapAsync() (if any).

[[usage]] is named differently from [[descriptor]].usage. We should make it consistent.
Each GPUBuffer has a current buffer state on the Content timeline which is one of the following:
- "mapped" where the GPUBuffer is available for CPU operations on its content.
- "mapped at creation" where the GPUBuffer was just created and is available for CPU operations on its content.
- "mapping pending" where the GPUBuffer is being made available for CPU operations on its content.
- "unmapped" where the GPUBuffer is available for GPU operations.
- "destroyed" where the GPUBuffer is no longer available for any operations except destroy.
Note: [[size]] and [[usage]] are immutable once the GPUBuffer has been created.

GPUBuffer has a state machine with the following states. ([[mapping]], [[mapping_range]], and [[mapped_ranges]] are null when not specified.)
- mapped or mapped at creation, with an ArrayBuffer-typed [[mapping]], a sequence of two numbers in [[mapping_range]], and a sequence of ArrayBuffers in [[mapped_ranges]].
- mapping pending, with a Promise-typed [[mapping]].
GPUBuffer is Serializable. It is a reference to an internal buffer object, and Serializable means that the reference can be copied between realms (threads/workers), allowing multiple realms to access it concurrently. Since GPUBuffer has internal state (mapped, destroyed), that state is internally-synchronized; these state changes occur atomically across realms.
5.2. Buffer Creation
5.2.1. GPUBufferDescriptor
This specifies the options to use in creating a GPUBuffer
.
dictionary GPUBufferDescriptor : GPUObjectDescriptorBase {
    required GPUSize64 size;
    required GPUBufferUsageFlags usage;
    boolean mappedAtCreation = false;
};
5.2.2. Buffer Usage
typedef [EnforceRange] unsigned long GPUBufferUsageFlags;

[Exposed=Window]
interface GPUBufferUsage {
    const GPUFlagsConstant MAP_READ      = 0x0001;
    const GPUFlagsConstant MAP_WRITE     = 0x0002;
    const GPUFlagsConstant COPY_SRC      = 0x0004;
    const GPUFlagsConstant COPY_DST      = 0x0008;
    const GPUFlagsConstant INDEX         = 0x0010;
    const GPUFlagsConstant VERTEX        = 0x0020;
    const GPUFlagsConstant UNIFORM       = 0x0040;
    const GPUFlagsConstant STORAGE       = 0x0080;
    const GPUFlagsConstant INDIRECT      = 0x0100;
    const GPUFlagsConstant QUERY_RESOLVE = 0x0200;
};
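For illustration, a non-normative TypeScript sketch (assuming a GPUDevice named device and ambient WebGPU type declarations) showing how these usage flags are combined with bitwise OR when creating a buffer:

// Sketch only: `device` is assumed to be an already-acquired GPUDevice.
function createVertexBuffer(device: GPUDevice, byteLength: number): GPUBuffer {
  // The buffer can receive copied data and be bound as a vertex buffer.
  return device.createBuffer({
    size: byteLength,
    usage: GPUBufferUsage.COPY_DST | GPUBufferUsage.VERTEX,
  });
}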
createBuffer(descriptor)
-
Creates a
GPUBuffer
.Called on:GPUDevice
this.Arguments:
Arguments for the GPUDevice.createBuffer(descriptor) method. Parameter Type Nullable Optional Description descriptor
GPUBufferDescriptor ✘ ✘ Description of the GPUBuffer
to create.Returns:
GPUBuffer
-
If any of the following conditions are unsatisfied, return an error buffer and stop.
-
descriptor.
usage
is a subset of this.[[allowed buffer usages]]. -
If descriptor.
mappedAtCreation
istrue
:-
descriptor.
size
is a multiple of 4.
-
Explain that the resulting error buffer can still be mapped at creation. <https://github.com/gpuweb/gpuweb/issues/605>
Explain what a GPUDevice's [[allowed buffer usages]] are. <https://github.com/gpuweb/gpuweb/issues/605> -
Let b be a new
GPUBuffer
object. -
If descriptor.
mappedAtCreation
istrue
:-
Set b.
[[mapping]]
to a newArrayBuffer
of size b.[[size]]
. -
Set b.
[[mapping_range]]
to[0, descriptor.size]
. -
Set b.
[[mapped_ranges]]
to[]
. -
Set b.
[[state]]
to mapped at creation.
Else:
-
Set b.
[[mapping]]
tonull
. -
Set b.
[[mapping_range]]
tonull
. -
Set b.
[[mapped_ranges]]
tonull
.
-
-
Set each byte of b’s allocation to zero.
-
Return b.
Note: it is valid to set
mappedAtCreation
totrue
withoutMAP_READ
orMAP_WRITE
inusage
. This can be used to set the buffer’s initial data. -
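A non-normative TypeScript sketch of this pattern (assuming a GPUDevice named device; the helper name is illustrative only):

// Sketch only: uploads initial data through mappedAtCreation, without MAP_WRITE usage.
function createBufferWithData(device: GPUDevice, data: Float32Array, usage: GPUBufferUsageFlags): GPUBuffer {
  const buffer = device.createBuffer({
    size: data.byteLength, // a Float32Array's byteLength is always a multiple of 4
    usage,
    mappedAtCreation: true,
  });
  // The buffer starts in the "mapped at creation" state, so its contents can be written immediately.
  new Float32Array(buffer.getMappedRange()).set(data);
  buffer.unmap(); // the buffer is now usable by the GPU
  return buffer;
}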
5.3. Buffer Destruction
An application that no longer requires a GPUBuffer
can choose to lose
access to it before garbage collection by calling destroy()
.
Note: This allows the user agent to reclaim the GPU memory associated with the GPUBuffer
once all previously submitted operations using it are complete.
destroy()
-
Destroys the
GPUBuffer
.
5.4. Buffer Mapping
An application can request to map a GPUBuffer
so that they can access its
content via ArrayBuffer
s that represent part of the GPUBuffer
's
allocations. Mapping a GPUBuffer
is requested asynchronously with mapAsync()
so that the user agent can ensure the GPU
finished using the GPUBuffer
before the application can access its content.
Once the GPUBuffer
is mapped the application can synchronously ask for access
to ranges of its content with getMappedRange
. A mapped GPUBuffer
cannot be used by the GPU and must be unmapped using unmap
before
work using it can be submitted to the Queue timeline.
Add client-side validation that a mapped buffer can
only be unmapped and destroyed on the worker on which it was mapped. Likewise getMappedRange
can only be called on that worker. <https://github.com/gpuweb/gpuweb/issues/605>
typedef [EnforceRange] unsigned long GPUMapModeFlags;

[Exposed=Window]
interface GPUMapMode {
    const GPUFlagsConstant READ  = 0x0001;
    const GPUFlagsConstant WRITE = 0x0002;
};
mapAsync(mode, offset, size)
-
Maps the given range of the
GPUBuffer
and resolves the returnedPromise
when theGPUBuffer
's content is ready to be accessed withgetMappedRange()
.Called on:GPUBuffer
this.Arguments:
Arguments for the GPUBuffer.mapAsync(mode, offset, size) method. Parameter Type Nullable Optional Description mode
GPUMapModeFlags ✘ ✘ Whether the buffer should be mapped for reading or writing. offset
GPUSize64 ✘ ✔ Offset in bytes into the buffer to the start of the range to map. size
GPUSize64 ✘ ✔ Size in bytes of the range to map. Handle error buffers once we have a description of the error monad. <https://github.com/gpuweb/gpuweb/issues/605>
-
If size is unspecified:
-
Let rangeSize be max(0, this.
[[size]]
- offset).
Otherwise, let rangeSize be size.
-
-
If any of the following conditions are unsatisfied:
Then:
-
Record a validation error on the current scope.
-
Return a promise rejected with an
OperationError
on the Device timeline.
-
-
Let p be a new
Promise
. -
Set this.
[[mapping]]
to p. -
Set this.
[[state]]
to mapping pending. -
Set this.
[[map_mode]]
to mode. -
Enqueue an operation on the default queue’s Queue timeline that will execute the following:
-
If this.
[[state]]
is mapping pending:-
Let m be a new
ArrayBuffer
of size rangeSize. -
Set the content of m to the content of this’s allocation starting at offset offset and for rangeSize bytes.
-
Set this.
[[mapping]]
to m. -
Set this.
[[mapping_range]]
to[offset, offset + rangeSize]
. -
Set this.
[[mapped_ranges]]
to[]
.
-
-
Resolve p.
-
-
Return p.
-
getMappedRange(offset, size)
-
Returns an ArrayBuffer
with the contents of theGPUBuffer
in the given mapped range.Called on:GPUBuffer
this.Arguments:
Arguments for the GPUBuffer.getMappedRange(offset, size) method. Parameter Type Nullable Optional Description offset
GPUSize64 ✘ ✔ Offset in bytes into the buffer to return buffer contents from. size
GPUSize64 ✘ ✔ Size in bytes of the ArrayBuffer
to return.Returns:
ArrayBuffer
-
If size is unspecified:
-
Let rangeSize be max(0, this.
[[size]]
- offset).
Otherwise, let rangeSize be size.
-
-
If any of the following conditions are unsatisfied, throw an
OperationError
and stop.-
this.
[[state]]
is mapped or mapped at creation. -
offset is a multiple of 8.
-
rangeSize is a multiple of 4.
-
offset is greater than or equal to this.
[[mapping_range]]
[0]. -
offset + rangeSize is less than or equal to this.
[[mapping_range]]
[1]. -
[offset, offset + rangeSize) does not overlap another range in this.
[[mapped_ranges]]
.
Note: It is always valid to get mapped ranges of a
GPUBuffer
that is mapped at creation, even if it is invalid, because the Content timeline might not know it is invalid. -
-
Let m be a new
ArrayBuffer
of size rangeSize pointing at the content of this.[[mapping]]
at offset offset - this.[[mapping_range]]
[0]. -
Append m to this.
[[mapped_ranges]]
. -
Return m.
-
unmap()
-
Unmaps the mapped range of the
GPUBuffer
and makes its contents available for use by the GPU again. Called on: GPUBuffer
this.Returns:
undefined
-
If any of the following conditions are unsatisfied, generate a validation error and stop.
Note: It is valid to unmap an error
GPUBuffer
that is mapped at creation because the Content timeline might not know it is an errorGPUBuffer
. -
If this.
[[state]]
is mapping pending:-
Reject
[[mapping]]
with anAbortError
. -
Set this.
[[mapping]]
tonull
.
-
-
If this.
[[state]]
is mapped or mapped at creation:-
If one of the two following conditions holds:
-
this.
[[state]]
is mapped at creation -
this.
[[state]]
is mapped and this.[[map_mode]]
containsWRITE
Then:
-
Enqueue an operation on the default queue’s Queue timeline that updates the this.
[[mapping_range]]
of this’s allocation to the content of this.[[mapping]]
.
-
-
Detach each
ArrayBuffer
in this.[[mapped_ranges]]
from its content. -
Set this.
[[mapping]]
tonull
. -
Set this.
[[mapping_range]]
tonull
. -
Set this.
[[mapped_ranges]]
tonull
.
-
Note: When a
MAP_READ
buffer (not currently mapped at creation) is unmapped, any local modifications done by the application to the mapped rangesArrayBuffer
are discarded and will not affect the content of follow-up mappings. -
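A non-normative TypeScript sketch of the map/read/unmap flow described in this section (assuming a buffer created with MAP_READ usage that the GPU has already written into, for example as the destination of a copy):

// Sketch only: reads back the full contents of a MAP_READ buffer.
async function readBack(buffer: GPUBuffer): Promise<Uint8Array> {
  // Resolves once the GPU is done with the buffer and its content is visible to the CPU.
  await buffer.mapAsync(GPUMapMode.READ);
  // Copy the data out; the view below is detached as soon as unmap() is called.
  const data = new Uint8Array(buffer.getMappedRange()).slice();
  buffer.unmap(); // returns the buffer to the GPU-usable ("unmapped") state
  return data;
}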
6. Textures and Texture Views
define texture (internal object)
define mipmap level, array layer, aspect, slice (concepts)
6.1. GPUTexture
GPUTextures
are created via GPUDevice.createTexture(descriptor)
that returns a new texture.
[Exposed=Window, Serializable]
interface GPUTexture {
    GPUTextureView createView(optional GPUTextureViewDescriptor descriptor = {});
    undefined destroy();
};
GPUTexture includes GPUObjectBase;
GPUTexture
has the following internal slots:
[[descriptor]]
- The GPUTextureDescriptor describing this texture. All optional fields of GPUTextureDescriptor are defined.

compute render extent(baseSize, mipLevel)
Arguments:
- GPUExtent3D baseSize
- GPUSize32 mipLevel
Returns: GPUExtent3DDict
- Let extent be a new GPUExtent3DDict object.
- Set extent.width to max(1, baseSize.width >> mipLevel).
- Set extent.height to max(1, baseSize.height >> mipLevel).
- Set extent.depthOrArrayLayers to 1.
- Return extent.
share this definition with the part of the specification that describes sampling.
6.1.1. Texture Creation
dictionary GPUTextureDescriptor : GPUObjectDescriptorBase {
    required GPUExtent3D size;
    GPUIntegerCoordinate mipLevelCount = 1;
    GPUSize32 sampleCount = 1;
    GPUTextureDimension dimension = "2d";
    required GPUTextureFormat format;
    required GPUTextureUsageFlags usage;
};

enum GPUTextureDimension {
    "1d",
    "2d",
    "3d",
};

typedef [EnforceRange] unsigned long GPUTextureUsageFlags;

[Exposed=Window]
interface GPUTextureUsage {
    const GPUFlagsConstant COPY_SRC          = 0x01;
    const GPUFlagsConstant COPY_DST          = 0x02;
    const GPUFlagsConstant SAMPLED           = 0x04;
    const GPUFlagsConstant STORAGE           = 0x08;
    const GPUFlagsConstant RENDER_ATTACHMENT = 0x10;
};
createTexture(descriptor)
-
Creates a
GPUTexture
.Called on:GPUDevice
this.Arguments:
Arguments for the GPUDevice.createTexture(descriptor) method. Parameter Type Nullable Optional Description descriptor
GPUTextureDescriptor ✘ ✘ Description of the GPUTexture
to create.Returns:
GPUTexture
-
Issue the following steps on the Device timeline of this:
-
If descriptor.
format
is aGPUTextureFormat
that requires a feature (see § 24.1 Texture Format Capabilities), but this.[[device]]
.[[features]]
does not contain the feature, throw aTypeError
. -
If any of the following requirements are unmet:
-
descriptor.
size
.width, descriptor.size
.height, and descriptor.size
.depthOrArrayLayers must be greater than zero. -
descriptor.
mipLevelCount
must be greater than zero. -
descriptor.
sampleCount
must be either 1 or 4. -
If descriptor.
dimension
is:"1d"
-
-
descriptor.
size
.width must be less than or equal to this.maxTextureDimension1D
. -
descriptor.
size
.depthOrArrayLayers must be 1. -
descriptor.
sampleCount
must be 1. -
descriptor.
format
must not be a compressed format or depth/stencil format.
-
"2d"
-
-
descriptor.
size
.width must be less than or equal to this.maxTextureDimension2D
. -
descriptor.
size
.height must be less than or equal to this.maxTextureDimension2D
. -
descriptor.
size
.depthOrArrayLayers must be less than or equal to this.maxTextureArrayLayers
.
-
"3d"
-
-
descriptor.
size
.width must be less than or equal to this.maxTextureDimension3D
. -
descriptor.
size
.height must be less than or equal to this.maxTextureDimension3D
. -
descriptor.
size
.depthOrArrayLayers must be less than or equal to this.maxTextureDimension3D
. -
descriptor.
sampleCount
must be 1. -
descriptor.
format
must not be a compressed format or depth/stencil format.
-
-
descriptor.size.width must be a multiple of the texel block width. -
descriptor.size.height must be a multiple of the texel block height. -
If descriptor.
sampleCount
> 1:-
descriptor.
mipLevelCount
must be 1. -
descriptor.
size
.depthOrArrayLayers must be 1. -
descriptor.
format
must be a renderable format.
-
-
descriptor.
mipLevelCount
must be less than or equal to maximum mipLevel count(descriptor.dimension
, descriptor.size
). -
descriptor.
usage
must be a combination ofGPUTextureUsage
values. -
If descriptor.
usage
includes theRENDER_ATTACHMENT
bit, descriptor.format
must be a renderable format. -
If descriptor.
usage
includes theSTORAGE
bit, descriptor.format
must be listed in § 24.1.1 Plain color formats table withSTORAGE
capability.
Then:
-
Generate a
GPUValidationError
in the current scope with appropriate error message. -
Return a new invalid
GPUTexture
.
-
Let t be a new
GPUTexture
object. -
Set t.
[[descriptor]]
to descriptor. -
Return t.
-
-
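A non-normative TypeScript sketch of texture creation (assuming a GPUDevice named device; the usage flag names follow this revision of the specification):

// Sketch only: a 2D color texture that can be sampled, rendered to, and copied into.
function createColorTexture(device: GPUDevice, width: number, height: number): GPUTexture {
  return device.createTexture({
    size: { width, height, depthOrArrayLayers: 1 },
    format: "rgba8unorm",
    usage: GPUTextureUsage.SAMPLED |
           GPUTextureUsage.RENDER_ATTACHMENT |
           GPUTextureUsage.COPY_DST,
  });
}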
6.1.2. Texture Destruction
An application that no longer requires a GPUTexture
can choose to lose access to it before
garbage collection by calling destroy()
.
Note: This allows the user agent to reclaim the GPU memory associated with the GPUTexture
once
all previously submitted operations using it are complete.
destroy()
-
Destroys the
GPUTexture
.
6.2. GPUTextureView
[Exposed=Window]
interface GPUTextureView {
};
GPUTextureView includes GPUObjectBase;
GPUTextureView
has the following internal slots:
[[texture]]
-
The
GPUTexture
into which this is a view. [[descriptor]]
-
The
GPUTextureViewDescriptor
describing this texture view.All optional fields of
GPUTextureViewDescriptor
are defined. [[renderExtent]]
-
For renderable views, this is the effective
GPUExtent3DDict
for rendering.Note: this extent depends on the
baseMipLevel
.
6.2.1. Texture View Creation
dictionary GPUTextureViewDescriptor : GPUObjectDescriptorBase {
    GPUTextureFormat format;
    GPUTextureViewDimension dimension;
    GPUTextureAspect aspect = "all";
    GPUIntegerCoordinate baseMipLevel = 0;
    GPUIntegerCoordinate mipLevelCount;
    GPUIntegerCoordinate baseArrayLayer = 0;
    GPUIntegerCoordinate arrayLayerCount;
};

enum GPUTextureViewDimension {
    "1d",
    "2d",
    "2d-array",
    "cube",
    "cube-array",
    "3d",
};

enum GPUTextureAspect {
    "all",
    "stencil-only",
    "depth-only",
};
createView(descriptor)
-
Creates a
GPUTextureView
.Called on:GPUTexture
this.Arguments:
Arguments for the GPUTexture.createView(descriptor) method. Parameter Type Nullable Optional Description descriptor
GPUTextureViewDescriptor ✘ ✔ Description of the GPUTextureView
to create.Returns: view, of type
GPUTextureView
.-
Set descriptor to the result of resolving GPUTextureViewDescriptor defaults with descriptor.
-
Issue the following steps on the Device timeline of this:
-
If any of the following requirements are unmet:
-
this is valid
-
If the descriptor.
aspect
is"stencil-only"
-
this.
[[descriptor]]
.format
must be a depth-stencil format which contains a stencil aspect. "depth-only"
-
this.
[[descriptor]]
.format
must be a depth-stencil format which contains a depth aspect.
-
descriptor.
mipLevelCount
must be > 0. -
descriptor.
baseMipLevel
+ descriptor.mipLevelCount
must be ≤ this.[[descriptor]]
.mipLevelCount
. -
descriptor.
arrayLayerCount
must be > 0. -
descriptor.
baseArrayLayer
+ descriptor.arrayLayerCount
must be ≤ the array layer count of this. -
descriptor.
format
must be this.[[descriptor]]
.format
. -
If descriptor.
dimension
is:"1d"
-
this.
[[descriptor]]
.dimension
must be"1d"
.descriptor.
arrayLayerCount
must be1
. "2d"
-
this.
[[descriptor]]
.dimension
must be"2d"
.descriptor.
arrayLayerCount
must be1
. "2d-array"
-
this.
[[descriptor]]
.dimension
must be"2d"
. "cube"
-
this.
[[descriptor]]
.dimension
must be"2d"
.descriptor.
arrayLayerCount
must be6
.this.
[[descriptor]]
.size
.width must be this.[[descriptor]]
.size
.height. "cube-array"
-
this.
[[descriptor]]
.dimension
must be"2d"
.descriptor.
arrayLayerCount
must be a multiple of6
.this.
[[descriptor]]
.size
.width must be this.[[descriptor]]
.size
.height. "3d"
-
this.
[[descriptor]]
.dimension
must be"3d"
.descriptor.
arrayLayerCount
must be1
.
Then:
-
Generate a
GPUValidationError
in the current scope with appropriate error message. -
Return a new invalid
GPUTextureView
.
-
-
Let view be a new
GPUTextureView
object. -
Set view.
[[texture]]
to this. -
Set view.
[[descriptor]]
to descriptor. -
If this.
[[descriptor]]
.usage
containsRENDER_ATTACHMENT
:-
Let renderExtent be compute render extent(this.
[[descriptor]]
.size
, descriptor.baseMipLevel
). -
Set view.
[[renderExtent]]
to renderExtent.
-
-
Return view.
-
-
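A non-normative TypeScript sketch of view creation (assuming cubeTexture is a "2d" texture with six array layers and square dimensions, as required by the validation rules above):

// Sketch only: interprets six consecutive array layers as the faces of a cube map.
function createCubeView(cubeTexture: GPUTexture): GPUTextureView {
  return cubeTexture.createView({
    dimension: "cube",
    baseArrayLayer: 0,
    arrayLayerCount: 6, // one layer per cube face
  });
}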
To resolve GPUTextureViewDescriptor defaults for a GPUTexture texture and a GPUTextureViewDescriptor descriptor, run the following steps:
-
Let resolved be a copy of descriptor.
-
If resolved.
format
isundefined
, set resolved.format
to texture.[[descriptor]]
.format
. -
If resolved.
mipLevelCount
isundefined
, set resolved.mipLevelCount
to texture.[[descriptor]]
.mipLevelCount
−baseMipLevel
. -
If resolved.
dimension
isundefined
and texture.[[descriptor]]
.dimension
is:
"1d"
- Set resolved.dimension to "1d".
"2d"
- Set resolved.dimension to "2d".
"3d"
- Set resolved.dimension to "3d".
-
If resolved.
arrayLayerCount
isundefined
and resolved.dimension
is:"1d"
,"2d"
, or"3d"
-
Set resolved.
arrayLayerCount
to1
. "cube"
-
Set resolved.
arrayLayerCount
to6
. "2d-array"
or"cube-array"
-
Set resolved.
arrayLayerCount
to texture.[[descriptor]]
.size
.depthOrArrayLayers −baseArrayLayer
.
-
Return resolved.
To determine the array layer count of a GPUTexture texture, run the following steps:
-
If texture.
[[descriptor]]
.dimension
is:"1d"
or"3d"
-
Return
1
. "2d"
-
Return texture.
[[descriptor]]
.size
.depthOrArrayLayers.
6.3. Texture Formats
The name of the format specifies the order of components, bits per component, and data type for the component.
-
r
,g
,b
,a
= red, green, blue, alpha -
unorm
= unsigned normalized -
snorm
= signed normalized -
uint
= unsigned int -
sint
= signed int -
float
= floating point
If the format has the -srgb
suffix, then sRGB conversions from gamma to linear
and vice versa are applied during the reading and writing of color values in the
shader. Compressed texture formats are provided by features. Their naming
should follow the convention here, with the texture name as a prefix. e.g. etc2-rgba8unorm
.
The texel block is a single addressable element of the textures in pixel-based GPUTextureFormat
s,
and a single compressed block of the textures in block-based compressed GPUTextureFormat
s.
The texel block width and texel block height specifies the dimension of one texel block.
-
For pixel-based
GPUTextureFormat
s, the texel block width and texel block height are always 1. -
For block-based compressed
GPUTextureFormat
s, the texel block width is the number of texels in each row of one texel block, and the texel block height is the number of texel rows in one texel block.
The texel block size of a GPUTextureFormat
is the number of bytes to store one texel block.
The texel block size of each GPUTextureFormat
is constant except for "stencil8"
, "depth24plus"
, and "depth24plus-stencil8"
.
enum GPUTextureFormat {
    // 8-bit formats
    "r8unorm",
    "r8snorm",
    "r8uint",
    "r8sint",

    // 16-bit formats
    "r16uint",
    "r16sint",
    "r16float",
    "rg8unorm",
    "rg8snorm",
    "rg8uint",
    "rg8sint",

    // 32-bit formats
    "r32uint",
    "r32sint",
    "r32float",
    "rg16uint",
    "rg16sint",
    "rg16float",
    "rgba8unorm",
    "rgba8unorm-srgb",
    "rgba8snorm",
    "rgba8uint",
    "rgba8sint",
    "bgra8unorm",
    "bgra8unorm-srgb",

    // Packed 32-bit formats
    "rgb9e5ufloat",
    "rgb10a2unorm",
    "rg11b10ufloat",

    // 64-bit formats
    "rg32uint",
    "rg32sint",
    "rg32float",
    "rgba16uint",
    "rgba16sint",
    "rgba16float",

    // 128-bit formats
    "rgba32uint",
    "rgba32sint",
    "rgba32float",

    // Depth and stencil formats
    "stencil8",
    "depth16unorm",
    "depth24plus",
    "depth24plus-stencil8",
    "depth32float",

    // BC compressed formats usable if "texture-compression-bc" is both
    // supported by the device/user agent and enabled in requestDevice.
    "bc1-rgba-unorm",
    "bc1-rgba-unorm-srgb",
    "bc2-rgba-unorm",
    "bc2-rgba-unorm-srgb",
    "bc3-rgba-unorm",
    "bc3-rgba-unorm-srgb",
    "bc4-r-unorm",
    "bc4-r-snorm",
    "bc5-rg-unorm",
    "bc5-rg-snorm",
    "bc6h-rgb-ufloat",
    "bc6h-rgb-float",
    "bc7-rgba-unorm",
    "bc7-rgba-unorm-srgb",

    // "depth24unorm-stencil8" feature
    "depth24unorm-stencil8",

    // "depth32float-stencil8" feature
    "depth32float-stencil8",
};
The depth aspect of the "depth24plus" and "depth24plus-stencil8" formats may be implemented as either a 24-bit unsigned normalized value ("depth24unorm") or a 32-bit IEEE 754 floating point value ("depth32float").
add something on GPUAdapter(?) that gives an estimate of the bytes per texel of "stencil8"
The "stencil8" format may be implemented as either a real "stencil8", or "depth24stencil8", where the depth aspect is hidden and inaccessible.
Note: While the precision of depth32float is strictly higher than the precision of depth24unorm for all values in the representable range (0.0 to 1.0), note that the set of representable values is not exactly the same: for depth24unorm, 1 ULP has a constant value of 1 / (2^24 − 1); for depth32float, 1 ULP has a variable value no greater than 1 / (2^24).
A renderable format is either color renderable format, or depth or stencil renderable format.
If a format is listed in § 24.1.1 Plain color formats with RENDER_ATTACHMENT
capability, it is a
color renderable format. Any other format is not a color renderable format. Any depth/stencil format is a
depth or stencil renderable format. Any other format is not a depth or stencil renderable format.
6.4. GPUExternalTexture
A GPUExternalTexture
is a sampleable texture wrapping an external video object.
The contents of a GPUExternalTexture
object may not change, either from inside WebGPU
(it is only sampleable) or from outside WebGPU (e.g. due to video frame advancement).
Update this description with canvas.
They are bound into bind group layouts using the externalTexture
bind group layout entry member.
External textures use several binding slots: see Exceeds the binding slot limits.
The underlying representation of an external texture is unobservable (except for sampling behavior) but typically may include
-
Up to three 2D planes of data (e.g. RGBA, Y+UV, Y+U+V).
-
Metadata for converting coordinates before reading from those planes (crop and rotation).
-
Metadata for converting values into the specified output color space (matrices, gammas, 3D LUT).
The configuration used may not be stable across time, systems, user agents, media sources, or frames within a single video source. In order to account for many possible representations, the binding conservatively uses the following, for each external texture:
-
three sampled texture bindings (for up to 3 planes),
-
one sampled texture binding for a 3D LUT,
-
one sampler binding to sample the 3D LUT, and
-
one uniform buffer binding for metadata.
[Exposed=Window]
interface GPUExternalTexture {
};
GPUExternalTexture includes GPUObjectBase;
GPUExternalTexture
has the following internal slots:
[[destroyed]]
, of typeboolean
-
Indicates whether the object has been destroyed (can no longer be used). Initially set to
false
.
6.4.1. Importing External Textures
An external texture is created from an external video object
using importExternalTexture()
.
Update this description with canvas.
External textures are destroyed automatically, as a microtask, instead of manually or upon garbage collection like other resources.
dictionary GPUExternalTextureDescriptor : GPUObjectDescriptorBase {
    required HTMLVideoElement source;
    GPUPredefinedColorSpace colorSpace = "srgb";
};
importExternalTexture(descriptor)
-
Creates a
GPUExternalTexture
wrapping the provided image source.Called on:GPUDevice
this.Arguments:
Arguments for the GPUDevice.importExternalTexture(descriptor) method. Parameter Type Nullable Optional Description descriptor
GPUExternalTextureDescriptor ✘ ✘ Provides the external image source object (and any creation options). Returns:
GPUExternalTexture
-
Let source be descriptor.
source
. -
If source is not origin-clean, throw a
SecurityError
and stop. -
Let data be the result of converting the current image contents of source into the color space descriptor.
colorSpace
.Note: This is described like a copy, but may be implemented as a reference to read-only underlying data plus appropriate metadata to perform conversion later.
It is currently undetermined whether the default colorSpace, "srgb", is extended-srgb or clamped-srgb. This will be determined upstream as the semantics around super-srgb image sources get defined. Unfortunately we can’t sidestep it for now because video sources can already go outside the srgb range. The upstream determination will change whether using the default colorSpace option can result in sampling values greater than 1.0 or not. If upstream decides to make "srgb" mean clamped-srgb, we also have the option of changing our default to
"extended-srgb"
. -
Let result be a new
GPUExternalTexture
object wrapping data. -
Queue a microtask to set result.
[[destroyed]]
totrue
, releasing the underlying resource. -
Return result.
-
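A non-normative TypeScript sketch (assuming a GPUDevice named device and a playing HTMLVideoElement named video):

// Sketch only: wraps the current video frame. Because the result is destroyed in a
// microtask, it is typically re-imported every frame before encoding GPU commands.
function importVideoFrame(device: GPUDevice, video: HTMLVideoElement): GPUExternalTexture {
  return device.importExternalTexture({ source: video });
}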
6.4.2. Sampling External Textures
External textures are represented in WGSL with texture_external
and may be read using textureLoad
and textureSampleLevel
.
The sampler
provided to textureSampleLevel
is used to sample the underlying textures.
The result is in the color space set by colorSpace
.
It is implementation-dependent whether, for any given external texture, the sampler (and filtering)
is applied before or after conversion from underlying values into the specified color space.
Note: If the internal representation is an RGBA plane, sampling behaves as on a regular 2D texture. If there are several underlying planes (e.g. Y+UV), the sampler is used to sample each underlying texture separately, prior to conversion from YUV to the specified color space.
7. Samplers
7.1. GPUSampler
A GPUSampler
encodes transformations and filtering information that can
be used in a shader to interpret texture resource data.
GPUSamplers
are created via GPUDevice.createSampler(optional descriptor)
that returns a new sampler object.
[Exposed=Window]
interface GPUSampler {
};
GPUSampler includes GPUObjectBase;
GPUSampler
has the following internal slots:
[[descriptor]]
, of typeGPUSamplerDescriptor
, readonly-
The
GPUSamplerDescriptor
with which theGPUSampler
was created. [[isComparison]]
of typeboolean
.-
Whether the
GPUSampler
is used as a comparison sampler. [[isFiltering]]
of typeboolean
.-
Whether the
GPUSampler
weights multiple samples of a texture.
7.2. Sampler Creation
7.2.1. GPUSamplerDescriptor
A GPUSamplerDescriptor
specifies the options to use to create a GPUSampler
.
dictionary GPUSamplerDescriptor : GPUObjectDescriptorBase {
    GPUAddressMode addressModeU = "clamp-to-edge";
    GPUAddressMode addressModeV = "clamp-to-edge";
    GPUAddressMode addressModeW = "clamp-to-edge";
    GPUFilterMode magFilter = "nearest";
    GPUFilterMode minFilter = "nearest";
    GPUFilterMode mipmapFilter = "nearest";
    float lodMinClamp = 0;
    float lodMaxClamp = 0xffffffff; // TODO: What should this be? Was Number.MAX_VALUE.
    GPUCompareFunction compare;
    [Clamp] unsigned short maxAnisotropy = 1;
};
-
addressModeU
,addressModeV
, andaddressModeW
specify the address modes for the texture width, height, and depth coordinates, respectively. -
magFilter
specifies the sampling behavior when the sample footprint is smaller than or equal to one texel. -
minFilter
specifies the sampling behavior when the sample footprint is larger than one texel. -
mipmapFilter
specifies behavior for sampling between two mipmap levels. -
lodMinClamp
andlodMaxClamp
specify the minimum and maximum levels of detail, respectively, used internally when sampling a texture. -
If
compare
is provided, the sampler will be a comparison sampler with the specifiedGPUCompareFunction
. -
maxAnisotropy
specifies the maximum anisotropy value clamp used by the sampler.Note: most implementations support
maxAnisotropy
values in range between 1 and 16, inclusive.
explain how LOD is calculated and if there are differences here between platforms. Issue: explain what anisotropic sampling is
GPUAddressMode
describes the behavior of the sampler if the sample footprint extends beyond
the bounds of the sampled texture.
Describe a "sample footprint" in greater detail.
enum GPUAddressMode {
    "clamp-to-edge",
    "repeat",
    "mirror-repeat",
};
"clamp-to-edge"
-
Texture coordinates are clamped between 0.0 and 1.0, inclusive.
"repeat"
-
Texture coordinates wrap to the other side of the texture.
"mirror-repeat"
-
Texture coordinates wrap to the other side of the texture, but the texture is flipped when the integer part of the coordinate is odd.
GPUFilterMode
describes the behavior of the sampler if the sample footprint does not exactly
match one texel.
enum GPUFilterMode {
    "nearest",
    "linear",
};
"nearest"
-
Return the value of the texel nearest to the texture coordinates.
"linear"
-
Select two texels in each dimension and return a linear interpolation between their values.
GPUCompareFunction
specifies the behavior of a comparison sampler. If a comparison sampler is
used in a shader, an input value is compared to the sampled texture value, and the result of this
comparison test (0.0f for fail, or 1.0f for pass) is used in the filtering operation.
describe how filtering interacts with comparison sampling.
enum GPUCompareFunction {
    "never",
    "less",
    "equal",
    "less-equal",
    "greater",
    "not-equal",
    "greater-equal",
    "always",
};
"never"
-
Comparison tests never pass.
"less"
-
A provided value passes the comparison test if it is less than the sampled value.
"equal"
-
A provided value passes the comparison test if it is equal to the sampled value.
"less-equal"
-
A provided value passes the comparison test if it is less than or equal to the sampled value.
"greater"
-
A provided value passes the comparison test if it is greater than the sampled value.
"not-equal"
-
A provided value passes the comparison test if it is not equal to the sampled value.
"greater-equal"
-
A provided value passes the comparison test if it is greater than or equal to the sampled value.
"always"
-
Comparison tests always pass.
validating GPUSamplerDescriptor(device, descriptor)
Arguments:
- GPUDevice device
- GPUSamplerDescriptor descriptor
Returns: boolean
Return true if and only if all of the following conditions are satisfied:
-
device is valid.
-
descriptor.
lodMinClamp
is greater than or equal to 0. -
descriptor.
lodMaxClamp
is greater than or equal to descriptor.lodMinClamp
. -
descriptor.
maxAnisotropy
is greater than or equal to 1. -
When descriptor.
maxAnisotropy
is greater than 1, descriptor.magFilter
, descriptor.minFilter
, and descriptor.mipmapFilter
must be equal to"linear"
.
createSampler(descriptor)
-
Creates a
GPUSampler
.Called on:GPUDevice
this.Arguments:
Arguments for the GPUDevice.createSampler(descriptor) method. Parameter Type Nullable Optional Description descriptor
GPUSamplerDescriptor ✘ ✔ Description of the GPUSampler
to create.Returns:
GPUSampler
-
Let s be a new
GPUSampler
object. -
Set s.
[[descriptor]]
to descriptor. -
Set s.
[[isComparison]]
tofalse
if thecompare
attribute of s.[[descriptor]]
isnull
or undefined. Otherwise, set it totrue
. -
Set s.
[[isFiltering]]
tofalse
if none ofminFilter
,magFilter
, ormipmapFilter
has the value of"linear"
. Otherwise, set it totrue
. -
Return s.
Valid Usage-
If descriptor is not
null
or undefined:-
If validating GPUSamplerDescriptor(this, descriptor) returns
false
:-
Generate a
GPUValidationError
in the current scope with appropriate error message. -
Create a new invalid
GPUSampler
and return the result.
-
-
-
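A non-normative TypeScript sketch of sampler creation (assuming a GPUDevice named device):

// Sketch only: a trilinear, repeating sampler (a filtering, non-comparison sampler).
function createTrilinearSampler(device: GPUDevice): GPUSampler {
  return device.createSampler({
    addressModeU: "repeat",
    addressModeV: "repeat",
    magFilter: "linear",
    minFilter: "linear",
    mipmapFilter: "linear", // also interpolate between mipmap levels
  });
}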
8. Resource Binding
8.1. GPUBindGroupLayout
A GPUBindGroupLayout
defines the interface between a set of resources bound in a GPUBindGroup
and their accessibility in shader stages.
[Exposed=Window, Serializable]
interface GPUBindGroupLayout {
};
GPUBindGroupLayout includes GPUObjectBase;
GPUBindGroupLayout
has the following internal slots:
[[descriptor]]
8.1.1. Creation
A GPUBindGroupLayout
is created via GPUDevice.createBindGroupLayout()
.
dictionary GPUBindGroupLayoutDescriptor : GPUObjectDescriptorBase {
    required sequence<GPUBindGroupLayoutEntry> entries;
};
A GPUBindGroupLayoutEntry
describes a single shader resource binding to be included in a GPUBindGroupLayout
.
typedef [EnforceRange] unsigned long GPUShaderStageFlags;

[Exposed=Window]
interface GPUShaderStage {
    const GPUFlagsConstant VERTEX   = 0x1;
    const GPUFlagsConstant FRAGMENT = 0x2;
    const GPUFlagsConstant COMPUTE  = 0x4;
};

dictionary GPUBindGroupLayoutEntry {
    required GPUIndex32 binding;
    required GPUShaderStageFlags visibility;

    GPUBufferBindingLayout buffer;
    GPUSamplerBindingLayout sampler;
    GPUTextureBindingLayout texture;
    GPUStorageTextureBindingLayout storageTexture;
    GPUExternalTextureBindingLayout externalTexture;
};
GPUBindGroupLayoutEntry
dictionaries have the following members:
binding
, of type GPUIndex32-
A unique identifier for a resource binding within a
GPUBindGroupLayoutEntry
, a correspondingGPUBindGroupEntry
, and theGPUShaderModule
s. visibility
, of type GPUShaderStageFlags-
A bitset of the members of
GPUShaderStage
. Each set bit indicates that aGPUBindGroupLayoutEntry
's resource will be accessible from the associated shader stage. buffer
, of type GPUBufferBindingLayout-
When not
undefined
, indicates the binding resource type for thisGPUBindGroupLayoutEntry
isGPUBufferBinding
. sampler
, of type GPUSamplerBindingLayout-
When not
undefined
, indicates the binding resource type for thisGPUBindGroupLayoutEntry
isGPUSampler
. texture
, of type GPUTextureBindingLayout-
When not
undefined
, indicates the binding resource type for thisGPUBindGroupLayoutEntry
isGPUTextureView
. storageTexture
, of type GPUStorageTextureBindingLayout-
When not
undefined
, indicates the binding resource type for thisGPUBindGroupLayoutEntry
isGPUTextureView
. externalTexture
, of type GPUExternalTextureBindingLayout-
When not
undefined
, indicates the binding resource type for thisGPUBindGroupLayoutEntry
isGPUExternalTexture
.
The binding member of a GPUBindGroupLayoutEntry
is determined by which member of the GPUBindGroupLayoutEntry
is defined: buffer
, sampler
, texture
, storageTexture
, or externalTexture
.
Only one may be defined for any given GPUBindGroupLayoutEntry
.
Each member has an associated GPUBindingResource
type and each binding type has an associated internal usage, given by this table:
Binding member | Resource type | Binding type | Binding usage
---|---|---|---
buffer | GPUBufferBinding | "uniform" | constant
buffer | GPUBufferBinding | "storage" | storage
buffer | GPUBufferBinding | "read-only-storage" | storage-read
sampler | GPUSampler | "filtering" | constant
sampler | GPUSampler | "non-filtering" | constant
sampler | GPUSampler | "comparison" | constant
texture | GPUTextureView | "float" | constant
texture | GPUTextureView | "unfilterable-float" | constant
texture | GPUTextureView | "depth" | constant
texture | GPUTextureView | "sint" | constant
texture | GPUTextureView | "uint" | constant
storageTexture | GPUTextureView | "read-only" | storage-read
storageTexture | GPUTextureView | "write-only" | storage-write
externalTexture | GPUExternalTexture | — | constant
A list of GPUBindGroupLayoutEntry values entries exceeds the binding slot limits of a supported limits object limits if the number of slots used toward a limit exceeds the supported value in limits.
Each entry may use multiple slots toward multiple limits.
-
For each entry in entries, if:
- entry.
buffer
?.type
is"uniform"
and entry.buffer
?.hasDynamicOffset
istrue
-
Consider 1
maxDynamicUniformBuffersPerPipelineLayout
slot to be used. - entry.
buffer
?.type
is"storage"
and entry.buffer
?.hasDynamicOffset
istrue
-
Consider 1
maxDynamicStorageBuffersPerPipelineLayout
slot to be used.
- entry.
-
For each shader stage stage in «
VERTEX
,FRAGMENT
,COMPUTE
»:-
For each entry in entries for which entry.
visibility
contains stage, if:- entry.
buffer
?.type
is"uniform"
-
Consider 1
maxUniformBuffersPerShaderStage
slot to be used. - entry.
buffer
?.type
is"storage"
or"read-only-storage"
-
Consider 1
maxStorageBuffersPerShaderStage
slot to be used. - entry.
sampler
is notundefined
-
Consider 1
maxSamplersPerShaderStage
slot to be used. - entry.
texture
is notundefined
-
Consider 1
maxSampledTexturesPerShaderStage
slot to be used. - entry.
storageTexture
is notundefined
-
Consider 1
maxStorageTexturesPerShaderStage
slot to be used. - entry.
externalTexture
is notundefined
-
Consider 4
maxSampledTexturesPerShaderStage
slot, 1maxSamplersPerShaderStage
slot, and 1maxUniformBuffersPerShaderStage
slot to be used.
- entry.
-
enum GPUBufferBindingType {
    "uniform",
    "storage",
    "read-only-storage",
};

dictionary GPUBufferBindingLayout {
    GPUBufferBindingType type = "uniform";
    boolean hasDynamicOffset = false;
    GPUSize64 minBindingSize = 0;
};
GPUBufferBindingLayout
dictionaries have the following members:
type
, of type GPUBufferBindingType, defaulting to"uniform"
-
Indicates the type required for buffers bound to this binding.
hasDynamicOffset
, of type boolean, defaulting tofalse
-
Indicates whether this binding requires a dynamic offset.
minBindingSize
, of type GPUSize64, defaulting to0
-
May be used to indicate the minimum buffer binding size.
enum GPUSamplerBindingType {
    "filtering",
    "non-filtering",
    "comparison",
};

dictionary GPUSamplerBindingLayout {
    GPUSamplerBindingType type = "filtering";
};
GPUSamplerBindingLayout
dictionaries have the following members:
type
, of type GPUSamplerBindingType, defaulting to"filtering"
-
Indicates the required type of a sampler bound to this binding.
enum GPUTextureSampleType {
    "float",
    "unfilterable-float",
    "depth",
    "sint",
    "uint",
};

dictionary GPUTextureBindingLayout {
    GPUTextureSampleType sampleType = "float";
    GPUTextureViewDimension viewDimension = "2d";
    boolean multisampled = false;
};
consider making sampleType
truly optional.
GPUTextureBindingLayout
dictionaries have the following members:
sampleType
, of type GPUTextureSampleType, defaulting to"float"
-
Indicates the type required for texture views bound to this binding.
viewDimension
, of type GPUTextureViewDimension, defaulting to"2d"
-
Indicates the required
dimension
for texture views bound to this binding.Note: This enables Metal-based WebGPU implementations to back the respective bind groups with
MTLArgumentBuffer
objects that are more efficient to bind at run-time. multisampled
, of type boolean, defaulting tofalse
-
Indicates whether or not texture views bound to this binding must be multisampled.
enum GPUStorageTextureAccess {
    "read-only",
    "write-only",
};

dictionary GPUStorageTextureBindingLayout {
    required GPUStorageTextureAccess access;
    required GPUTextureFormat format;
    GPUTextureViewDimension viewDimension = "2d";
};
consider making format
truly optional.
GPUStorageTextureBindingLayout
dictionaries have the following members:
access
, of type GPUStorageTextureAccess-
Indicates whether texture views bound to this binding will be bound for read-only or write-only access.
format
, of type GPUTextureFormat-
The required
format
of texture views bound to this binding. viewDimension
, of type GPUTextureViewDimension, defaulting to"2d"
-
Indicates the required
dimension
for texture views bound to this binding.Note: This enables Metal-based WebGPU implementations to back the respective bind groups with
MTLArgumentBuffer
objects that are more efficient to bind at run-time.
dictionary GPUExternalTextureBindingLayout {
};
A GPUBindGroupLayout
object has the following internal slots:
[[entryMap]]
of type ordered map<GPUSize32
,GPUBindGroupLayoutEntry
>.-
The map of binding indices pointing to the
GPUBindGroupLayoutEntry
s, which thisGPUBindGroupLayout
describes. [[dynamicOffsetCount]]
of typeGPUSize32
.-
The number of buffer bindings with dynamic offsets in this
GPUBindGroupLayout
.
createBindGroupLayout(descriptor)
-
Creates a
GPUBindGroupLayout
.Called on:GPUDevice
this.Arguments:
Arguments for the GPUDevice.createBindGroupLayout(descriptor) method. Parameter Type Nullable Optional Description descriptor
GPUBindGroupLayoutDescriptor ✘ ✘ Description of the GPUBindGroupLayout
to create.Returns:
GPUBindGroupLayout
-
Let layout be a new valid
GPUBindGroupLayout
object. -
Set layout.
[[descriptor]]
to descriptor. -
Issue the following steps on the Device timeline of this:
-
If any of the following conditions are unsatisfied:
-
The
binding
of each entry in descriptor is unique. -
descriptor.
entries
must not exceed the binding slot limits of this.[[device]]
.[[limits]]
. -
For each
GPUBindGroupLayoutEntry
entry in descriptor.entries
:-
Let bufferLayout be entry.
buffer
-
Let samplerLayout be entry.
sampler
-
Let textureLayout be entry.
texture
-
Let storageTextureLayout be entry.
storageTexture
-
Exactly one of bufferLayout, samplerLayout, textureLayout, or storageTextureLayout is not
undefined
. -
If entry.
visibility
includesVERTEX
:-
entry.
storageTexture
?.access
must not be"write-only"
.
-
If the textureLayout is not
undefined
and textureLayout.multisampled
istrue
:-
textureLayout.
viewDimension
is"2d"
. -
textureLayout.
sampleType
is not"float"
.
-
-
If storageTextureLayout is not
undefined
:-
storageTextureLayout.
viewDimension
is not"cube"
or"cube-array"
. -
storageTextureLayout.
format
must be a format which can support storage usage.
-
-
Then:
-
Generate a
GPUValidationError
in the current scope with appropriate error message. -
Make layout invalid and return layout.
-
Set layout.
[[dynamicOffsetCount]]
to the number of entries in descriptor wherebuffer
is notundefined
andbuffer
.hasDynamicOffset
istrue
. -
For each
GPUBindGroupLayoutEntry
entry in descriptor.entries
:-
Insert entry into layout.
[[entryMap]]
with the key of entry.binding
.
-
-
-
Return layout.
-
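A non-normative TypeScript sketch of a bind group layout with one buffer, one sampler, and one texture entry (assuming a GPUDevice named device; the layout shape is illustrative only):

// Sketch only: each entry defines exactly one binding member (buffer, sampler, or texture).
function createMaterialBindGroupLayout(device: GPUDevice): GPUBindGroupLayout {
  return device.createBindGroupLayout({
    entries: [
      { binding: 0,
        visibility: GPUShaderStage.VERTEX | GPUShaderStage.FRAGMENT,
        buffer: { type: "uniform" } },
      { binding: 1,
        visibility: GPUShaderStage.FRAGMENT,
        sampler: { type: "filtering" } },
      { binding: 2,
        visibility: GPUShaderStage.FRAGMENT,
        texture: { sampleType: "float", viewDimension: "2d" } },
    ],
  });
}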
8.1.2. Compatibility
GPUBindGroupLayout
objects a and b are considered group-equivalent if and only if, for any binding number binding, one of the following conditions is satisfied:
-
it’s missing from both a.
[[entryMap]]
and b.[[entryMap]]
. -
a.
[[entryMap]]
[binding] == b.[[entryMap]]
[binding]
If bind group layouts are group-equivalent they can be used interchangeably in all contexts.
8.2. GPUBindGroup
A GPUBindGroup
defines a set of resources to be bound together in a group
and how the resources are used in shader stages.
[Exposed=Window]
interface GPUBindGroup {
};
GPUBindGroup includes GPUObjectBase;
8.2.1. Bind Group Creation
A GPUBindGroup
is created via GPUDevice.createBindGroup()
.
dictionary GPUBindGroupDescriptor : GPUObjectDescriptorBase {
    required GPUBindGroupLayout layout;
    required sequence<GPUBindGroupEntry> entries;
};
A GPUBindGroupEntry
describes a single resource to be bound in a GPUBindGroup
.
typedef (GPUSampler or GPUTextureView or GPUBufferBinding or GPUExternalTexture) GPUBindingResource;

dictionary GPUBindGroupEntry {
    required GPUIndex32 binding;
    required GPUBindingResource resource;
};

dictionary GPUBufferBinding {
    required GPUBuffer buffer;
    GPUSize64 offset = 0;
    GPUSize64 size;
};
A GPUBindGroup
object has the following internal slots:
[[layout]]
of typeGPUBindGroupLayout
.-
The
GPUBindGroupLayout
associated with thisGPUBindGroup
. [[entries]]
of type sequence<GPUBindGroupEntry
>.-
The set of
GPUBindGroupEntry
s thisGPUBindGroup
describes. [[usedResources]]
of type ordered map<subresource, list<internal usage>>.-
The set of buffer and texture subresources used by this bind group, associated with lists of the internal usage flags.
createBindGroup(descriptor)
-
Creates a
GPUBindGroup
.Called on:GPUDevice
this.Arguments:
Arguments for the GPUDevice.createBindGroup(descriptor) method. Parameter Type Nullable Optional Description descriptor
GPUBindGroupDescriptor ✘ ✘ Description of the GPUBindGroup
to create.Returns:
GPUBindGroup
-
Let bindGroup be a new valid
GPUBindGroup
object. -
Let limits be this.
[[device]]
.[[limits]]
.maxUniformBufferBindingSize
. -
Issue the following steps on the Device timeline of this:
-
If any of the following conditions are unsatisfied:
-
descriptor.
layout
is valid to use with this. -
The number of
entries
of descriptor.layout
is exactly equal to the number of descriptor.entries
.
For each
GPUBindGroupEntry
bindingDescriptor in descriptor.entries
:-
Let resource be bindingDescriptor.
resource
. -
There is exactly one
GPUBindGroupLayoutEntry
layoutBinding in descriptor.layout
.entries
such that layoutBinding.binding
equals to bindingDescriptor.binding
. -
If the defined binding member for layoutBinding is
sampler
-
-
resource is a
GPUSampler
. -
resource is valid to use with this.
-
If layoutBinding.
sampler
.type
is:"filtering"
-
resource.
[[isComparison]]
isfalse
. "non-filtering"
-
resource.
[[isFiltering]]
isfalse
. resource.[[isComparison]]
isfalse
. "comparison"
-
resource.
[[isComparison]]
istrue
.
-
texture
-
-
resource is a
GPUTextureView
. -
resource is valid to use with this.
-
Let texture be resource.
[[texture]]
. -
layoutBinding.
texture
.viewDimension
is equal to resource’sdimension
. -
layoutBinding.
texture
.sampleType
is compatible with resource’sformat
. -
If layoutBinding.
texture
.multisampled
istrue
, texture’ssampleCount
>1
. Otherwise texture’s sampleCount
is1
.
-
storageTexture
-
-
resource is a
GPUTextureView
. -
resource is valid to use with this.
-
Let texture be resource.
[[texture]]
. -
layoutBinding.
storageTexture
.viewDimension
is equal to resource’sdimension
. -
layoutBinding.
storageTexture
.format
is equal to resource.[[descriptor]]
.format
.
-
buffer
-
-
resource is a
GPUBufferBinding
. -
resource.
buffer
is valid to use with this. -
The bound part designated by resource.
offset
and resource.size
resides inside the buffer. -
If layoutBinding.
buffer
.minBindingSize
is notundefined
:-
The effective binding size, that is either explicit in resource.
size
or derived from resource.offset
and the full size of the buffer, is greater than or equal to layoutBinding.buffer
.minBindingSize
.
-
-
If layoutBinding.
buffer
.type
is"uniform"
-
resource.
buffer
.usage
includesUNIFORM
.resource.
size
≤ limits.maxUniformBufferBindingSize
.This validation should take into account the default when
size
is not set. Also shouldsize
default to thebuffer.byteLength - offset
ormin(buffer.byteLength - offset, limits.maxUniformBufferBindingSize)
? "storage"
or"read-only-storage"
-
resource.
buffer
.usage
includesSTORAGE
.resource.
size
≤ limits.maxStorageBufferBindingSize
.
-
define the association between texture formats and component types
Then:
-
Generate a
GPUValidationError
in the current scope with appropriate error message. -
Make bindGroup invalid and return bindGroup.
-
-
Let bindGroup.
[[layout]]
= descriptor.layout
. -
Let bindGroup.
[[entries]]
= descriptor.entries
. -
Let bindGroup.
[[usedResources]]
= {}. -
For each
GPUBindGroupEntry
bindingDescriptor in descriptor.entries
:-
Let internalUsage be the binding usage for layoutBinding.
-
Each subresource seen by resource is added to
[[usedResources]]
as internalUsage.
-
-
-
Return bindGroup.
-
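A non-normative TypeScript sketch of creating a bind group against the layout sketched in § 8.1.1 (assuming the buffer, sampler, and texture view satisfy the corresponding entries):

// Sketch only: binding numbers must match the GPUBindGroupLayoutEntry binding values.
function createMaterialBindGroup(
  device: GPUDevice,
  layout: GPUBindGroupLayout,
  uniforms: GPUBuffer,
  sampler: GPUSampler,
  baseColorView: GPUTextureView
): GPUBindGroup {
  return device.createBindGroup({
    layout,
    entries: [
      { binding: 0, resource: { buffer: uniforms } }, // GPUBufferBinding; size defaults to the rest of the buffer
      { binding: 1, resource: sampler },
      { binding: 2, resource: baseColorView },
    ],
  });
}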
8.3. GPUPipelineLayout
A GPUPipelineLayout
defines the mapping between resources of all GPUBindGroup
objects set up during command encoding in setBindGroup
, and the shaders of the pipeline set by GPURenderEncoderBase.setPipeline
or GPUComputePassEncoder.setPipeline
.
The full binding address of a resource can be defined as a trio of:
-
shader stage mask, to which the resource is visible
-
bind group index
-
binding number
The components of this address can also be seen as the binding space of a pipeline. A GPUBindGroup
(with the corresponding GPUBindGroupLayout
) covers that space for a fixed bind group index. The contained bindings need to be a superset of the resources used by the shader at this bind group index.
[Exposed=Window, Serializable]
interface GPUPipelineLayout {
};
GPUPipelineLayout includes GPUObjectBase;
GPUPipelineLayout
has the following internal slots:
[[bindGroupLayouts]]
of type list<GPUBindGroupLayout
>.-
The
GPUBindGroupLayout
objects provided at creation inGPUPipelineLayoutDescriptor.bindGroupLayouts
.
Note: using the same GPUPipelineLayout
for many GPURenderPipeline
or GPUComputePipeline
pipelines guarantees that the user agent doesn’t need to rebind any resources internally when there is a switch between these pipelines.
GPUComputePipeline
object X was created with GPUPipelineLayout.bindGroupLayouts
A, B, C. GPUComputePipeline
object Y was created with GPUPipelineLayout.bindGroupLayouts
A, D, C. Supposing the command encoding sequence has two dispatches:
In this scenario, the user agent would have to re-bind the group slot 2 for the second dispatch, even though neither the GPUBindGroupLayout at index 2 of GPUPipelineLayout.bindGroupLayouts nor the GPUBindGroup at slot 2 changes.
should this example and the note be moved to some "best practices" document?
Note: the expected usage of the GPUPipelineLayout
is placing the most common and the least frequently changing bind groups at the "bottom" of the layout, meaning lower bind group slot numbers, like 0 or 1. The more frequently a bind group needs to change between draw calls, the higher its index should be. This general guideline allows the user agent to minimize state changes between draw calls, and consequently lower the CPU overhead.
8.3.1. Creation
A GPUPipelineLayout
is created via GPUDevice.createPipelineLayout()
.
dictionary GPUPipelineLayoutDescriptor : GPUObjectDescriptorBase {
    required sequence<GPUBindGroupLayout> bindGroupLayouts;
};
createPipelineLayout(descriptor)
-
Creates a
GPUPipelineLayout
.Called on:GPUDevice
this.Arguments:
Arguments for the GPUDevice.createPipelineLayout(descriptor) method. Parameter Type Nullable Optional Description descriptor
GPUPipelineLayoutDescriptor ✘ ✘ Description of the GPUPipelineLayout
to create.Returns:
GPUPipelineLayout
-
If any of the following requirements are unmet:
Let limits be this.[[device]]
.[[limits]]
.Let allEntries be the result of concatenating bgl.
[[descriptor]]
.entries
for all bgl in descriptor.bindGroupLayouts
.-
Every
GPUBindGroupLayout
in descriptor.bindGroupLayouts
must be valid to use with this. -
The size of descriptor.
bindGroupLayouts
must be ≤ limits.maxBindGroups
. -
allEntries must not exceed the binding slot limits of limits.
Then:
-
Generate a
GPUValidationError
in the current scope with appropriate error message. -
Create a new invalid
GPUPipelineLayout
and return the result.
-
-
Let pl be a new
GPUPipelineLayout
object. -
Set the pl.
[[bindGroupLayouts]]
to descriptor.bindGroupLayouts
. -
Return pl.
-
Note: two GPUPipelineLayout
objects are considered equivalent for any usage
if their internal [[bindGroupLayouts]]
sequences contain GPUBindGroupLayout
objects that are group-equivalent.
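A non-normative TypeScript sketch of pipeline layout creation, following the guideline above of placing the least frequently changing bind group layouts at the lowest indices (assuming the three layouts already exist):

// Sketch only: group 0 changes per frame, group 1 per material, group 2 per draw.
function createLayout(
  device: GPUDevice,
  perFrame: GPUBindGroupLayout,
  perMaterial: GPUBindGroupLayout,
  perDraw: GPUBindGroupLayout
): GPUPipelineLayout {
  return device.createPipelineLayout({
    bindGroupLayouts: [perFrame, perMaterial, perDraw],
  });
}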
9. Shader Modules
9.1. GPUShaderModule
[Exposed=Window, Serializable]
interface GPUShaderModule {
    Promise<GPUCompilationInfo> compilationInfo();
};
GPUShaderModule includes GPUObjectBase;
GPUShaderModule
is Serializable
. It is a reference to an internal
shader module object, and Serializable
means that the reference can be copied between realms (threads/workers), allowing multiple realms to access
it concurrently. Since GPUShaderModule
is immutable, there are no race
conditions.
9.1.1. Shader Module Creation
dictionary GPUShaderModuleDescriptor : GPUObjectDescriptorBase {
    required USVString code;
    object sourceMap;
};
sourceMap
, if defined, MAY be interpreted as a
source-map-v3 format.
Source maps are optional, but serve as a standardized way to support dev-tool
integration such as source-language debugging. [SourceMap]
createShaderModule(descriptor)
-
Creates a
GPUShaderModule
.Called on:GPUDevice
this.Arguments:
Arguments for the GPUDevice.createShaderModule(descriptor) method. Parameter Type Nullable Optional Description descriptor
GPUShaderModuleDescriptor ✘ ✘ Description of the GPUShaderModule
to create.Returns:
GPUShaderModule
Describe
createShaderModule()
algorithm steps.
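A non-normative TypeScript sketch of shader module creation (assuming a GPUDevice named device; the WGSL attribute syntax shown follows the WGSL draft contemporary with this specification):

// Sketch only: a minimal fragment-stage entry point compiled into a module.
function createRedFragmentModule(device: GPUDevice): GPUShaderModule {
  const code = `
    [[stage(fragment)]]
    fn main() -> [[location(0)]] vec4<f32> {
      return vec4<f32>(1.0, 0.0, 0.0, 1.0);
    }
  `;
  return device.createShaderModule({ code });
}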
9.1.2. Shader Module Compilation Information
enum GPUCompilationMessageType {
    "error",
    "warning",
    "info",
};

[Exposed=Window, Serializable]
interface GPUCompilationMessage {
    readonly attribute DOMString message;
    readonly attribute GPUCompilationMessageType type;
    readonly attribute unsigned long long lineNum;
    readonly attribute unsigned long long linePos;
    readonly attribute unsigned long long offset;
    readonly attribute unsigned long long length;
};

[Exposed=Window, Serializable]
interface GPUCompilationInfo {
    readonly attribute FrozenArray<GPUCompilationMessage> messages;
};
A GPUCompilationMessage
is an informational, warning, or error message generated by the GPUShaderModule
compiler. The messages are intended to be human readable to help developers
diagnose issues with their shader code
. Each message may correspond to
either a single point in the shader code, a substring of the shader code, or may not correspond to
any specific point in the code at all.
GPUCompilationMessage
has the following attributes:
message
, of type DOMString, readonly-
A human-readable string containing the message generated during the shader compilation.
type
, of type GPUCompilationMessageType, readonly-
The severity level of the message.
lineNum
, of type unsigned long long, readonly-
The line number in the shader
code
themessage
corresponds to. Value is one-based, such that a lineNum of1
indicates the first line of the shadercode
.If the
message
corresponds to a substring this points to the line on which the substring begins. Must be0
if themessage
does not correspond to any specific point in the shadercode
.Reference WGSL spec when it defines what a line is.
linePos
, of type unsigned long long, readonly-
The offset, in UTF-16 code units, from the beginning of line
lineNum
of the shadercode
to the point or beginning of the substring that themessage
corresponds to. Value is one-based, such that alinePos
of1
indicates the first character of the line.If
message
corresponds to a substring this points to the first UTF-16 code unit of the substring. Must be0
if themessage
does not correspond to any specific point in the shadercode
. offset
, of type unsigned long long, readonly-
The offset from the beginning of the shader
code
in UTF-16 code units to the point or beginning of the substring thatmessage
corresponds to. Must reference the same position aslineNum
andlinePos
. Must be0
if themessage
does not correspond to any specific point in the shadercode
. length
, of type unsigned long long, readonly-
The number of UTF-16 code units in the substring that
message
corresponds to. If the message does not correspond with a substring thenlength
must be 0.
Note: GPUCompilationMessage
.lineNum
and GPUCompilationMessage
.linePos
are one-based since the most common use
for them is expected to be printing human readable messages that can be correlated with the line and
column numbers shown in many text editors.
Note: GPUCompilationMessage
.offset
and GPUCompilationMessage
.length
are appropriate to pass to substr()
in order to retrieve the substring of the shader code
the message
corresponds to.
compilationInfo()
-
Returns any messages generated during the
GPUShaderModule
's compilation.Called on:GPUShaderModule
this.Returns:
Promise
<GPUCompilationInfo
>Describe
compilationInfo()
algorithm steps.
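A non-normative TypeScript sketch of consuming these messages (the formatting is illustrative only):

// Sketch only: logs every diagnostic produced while compiling a shader module.
async function logCompilationMessages(module: GPUShaderModule): Promise<void> {
  const info = await module.compilationInfo();
  for (const msg of info.messages) {
    // lineNum and linePos are one-based; 0 means "no specific location".
    console.log(`[${msg.type}] ${msg.lineNum}:${msg.linePos} ${msg.message}`);
  }
}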
10. Pipelines
A pipeline, be it GPUComputePipeline
or GPURenderPipeline
,
represents the complete function done by a combination of the GPU hardware, the driver,
and the user agent, that process the input data in the shape of bindings and vertex buffers,
and produces some output, like the colors in the output render targets.
Structurally, the pipeline consists of a sequence of programmable stages (shaders) and fixed-function states, such as the blending modes.
Note: Internally, depending on the target platform, the driver may convert some of the fixed-function states into shader code, and link it together with the shaders provided by the user. This linking is one of the reasons the object is created as a whole.
This combination state is created as a single object
(by GPUDevice.createComputePipeline()
or GPUDevice.createRenderPipeline()
),
and switched as one
(by GPUComputePassEncoder.setPipeline
or GPURenderEncoderBase.setPipeline
correspondingly).
10.1. Base pipelines
dictionary GPUPipelineDescriptorBase : GPUObjectDescriptorBase {
    GPUPipelineLayout layout;
};

interface mixin GPUPipelineBase {
    GPUBindGroupLayout getBindGroupLayout(unsigned long index);
};
GPUPipelineBase
has the following internal slots:
[[layout]]
of typeGPUPipelineLayout
.-
The definition of the layout of resources which can be used with
this
.
GPUPipelineBase
has the following methods:
getBindGroupLayout(index)
-
Gets a
GPUBindGroupLayout
that is compatible with theGPUPipelineBase
'sGPUBindGroupLayout
atindex
.Called on:GPUPipelineBase
this.Arguments:
Arguments for the GPUPipelineBase.getBindGroupLayout(index) method. Parameter Type Nullable Optional Description index
unsigned long ✘ ✘ Index into the pipeline layout’s [[bindGroupLayouts]]
sequence.Returns:
GPUBindGroupLayout
-
If index ≥ this.
[[device]]
.[[limits]]
.maxBindGroups
:-
Throw a
RangeError
.
-
-
If this is not valid:
-
Return a new error
GPUBindGroupLayout
.
-
-
Return a new
GPUBindGroupLayout
object that references the same internal object as this.[[layout]]
.[[bindGroupLayouts]]
[index].
Specify this more properly once we have internal objects for
GPUBindGroupLayout
. Alternatively, only spec it as a new internal object that’s group-equivalent.
Note: Only returning new
GPUBindGroupLayout
objects ensures no synchronization is necessary between the Content timeline and the Device timeline. -
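A non-normative TypeScript sketch of using getBindGroupLayout() so that a bind group matches the layout the pipeline actually uses, for example a default layout generated as described in § 10.1.1 (the binding shape is illustrative only):

// Sketch only: builds a bind group against whatever layout the pipeline has at group index 0.
function createGroupForPipeline(
  device: GPUDevice,
  pipeline: GPUComputePipeline,
  uniforms: GPUBuffer
): GPUBindGroup {
  return device.createBindGroup({
    layout: pipeline.getBindGroupLayout(0),
    entries: [{ binding: 0, resource: { buffer: uniforms } }],
  });
}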
10.1.1. Default pipeline layout
A GPUPipelineBase
object that was created without a layout
has a default layout created and used instead.
-
Let groupDescs be a sequence of device.
[[limits]]
.maxBindGroups
newGPUBindGroupLayoutDescriptor
objects. -
For each groupDesc in groupDescs:
-
Set groupDesc.
entries
to an empty sequence.
-
-
For each
GPUProgrammableStage
stageDesc in the descriptor used to create the pipeline:-
Let stageInfo be the "reflection information" for stageDesc.
Define the reflection information concept so that this spec can interface with the WGSL spec and get information what the interface is for a
GPUShaderModule
for a specific entrypoint. -
Let shaderStage be the
GPUShaderStageFlags
for stageDesc.entryPoint
in stageDesc.module
. -
For each resource resource in stageInfo’s resource interface:
-
Let group be resource’s "group" decoration.
-
Let binding be resource’s "binding" decoration.
-
Let entry be a new
GPUBindGroupLayoutEntry
. -
Set entry.
binding
to binding. -
Set entry.
visibility
to shaderStage. -
If resource is for a sampler binding:
-
Let samplerLayout be a new
GPUSamplerBindingLayout
. -
Set entry.
sampler
to samplerLayout.
-
-
If resource is for a comparison sampler binding:
-
Let samplerLayout be a new
GPUSamplerBindingLayout
. -
Set samplerLayout.
type
to"comparison"
. -
Set entry.
sampler
to samplerLayout.
-
-
If resource is for a buffer binding:
-
Let bufferLayout be a new
GPUBufferBindingLayout
. -
Set bufferLayout.
minBindingSize
to resource’s minimum buffer binding size.link to a definition for "minimum buffer binding size" in the "reflection information".
-
If resource is for a read-only storage buffer:
-
Set bufferLayout.
type
to"read-only-storage"
.
-
-
If resource is for a storage buffer:
-
Set entry.
buffer
to bufferLayout.
-
-
If resource is for a sampled texture binding:
-
Let textureLayout be a new
GPUTextureBindingLayout
. -
Set textureLayout.
sampleType
to resource’s component type. -
Set textureLayout.
viewDimension
to resource’s dimension. -
If resource is for a multisampled texture:
-
Set textureLayout.
multisampled
totrue
.
-
-
Set entry.
texture
to textureLayout.
-
-
If resource is for a storage texture binding:
-
Let storageTextureLayout be a new
GPUStorageTextureBindingLayout
. -
Set storageTextureLayout.
format
to resource’s format. -
Set storageTextureLayout.
viewDimension
to resource’s dimension. -
If resource is for a read-only storage texture:
-
Set storageTextureLayout.
access
to"read-only"
.
-
-
If resource is for a write-only storage texture:
-
Set storageTextureLayout.
access
to"write-only"
.
-
-
Set entry.
storageTexture
to storageTextureLayout.
-
-
If groupDescs[group] has an entry previousEntry with
binding
equal to binding:-
If entry has different
visibility
than previousEntry:-
Add the bits set in entry.
visibility
into previousEntry.visibility
-
-
If resource is for a buffer binding and entry has greater
buffer
.minBindingSize
than previousEntry:-
Set previousEntry.
buffer
.minBindingSize
to entry.buffer
.minBindingSize
.
-
-
If any other property is unequal between entry and previousEntry:
-
Return
null
(which will cause the creation of the pipeline to fail).
-
-
-
Else
-
Append entry to groupDescs[group].
-
-
-
-
Let groupLayouts be a new sequence.
-
For each groupDesc in groupDescs:
-
Append device.
createBindGroupLayout()
(groupDesc) to groupLayouts.
-
-
Let desc be a new
GPUPipelineLayoutDescriptor
. -
Set desc.
bindGroupLayouts
to groupLayouts. -
Return device.
createPipelineLayout()
(desc).
This fills the pipeline layout with empty bindgroups. Revisit once the behavior of empty bindgroups is specified.
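Note: A non-normative sketch of relying on the default layout, assuming device and shaderModule already exist:

// No layout is provided, so a default layout is derived from the shader.
const pipeline = device.createComputePipeline({
    compute: { module: shaderModule, entryPoint: "main" },
});
// getBindGroupLayout() returns a GPUBindGroupLayout compatible with group 0 of that default layout.
const bindGroupLayout0 = pipeline.getBindGroupLayout(0);
// Each call returns a new wrapper object referencing the same internal object.
console.log(pipeline.getBindGroupLayout(0) === bindGroupLayout0); // false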
10.1.2. GPUProgrammableStage
dictionary GPUProgrammableStage {
    required GPUShaderModule module;
    required USVString entryPoint;
};
A GPUProgrammableStage
describes the entry point in the user-provided GPUShaderModule
that controls one of the programmable stages of a pipeline.
-
GPUShaderStage
stage -
GPUProgrammableStage
descriptor -
GPUPipelineLayout
layout
Return true
if all of the following conditions are satisfied:
-
The descriptor.
module
is validGPUShaderModule
. -
The descriptor.
module
contains an entry point at stage named descriptor.entryPoint
. -
For each binding that is statically used by the shader entry point, the validating shader binding(binding, layout) returns
true
. -
For each texture sampling shader call that is statically used by the entry point:
-
Let texture be the
GPUBindGroupLayoutEntry
corresponding to the sampled texture in the call. -
Let sampler be the
GPUBindGroupLayoutEntry
corresponding to the used sampler in the call. -
One of the following conditions is
false
:-
texture.
sampleType
is"unfilterable-float"
: -
sampler.
type
is"filtering"
.
-
-
-
shader binding, reflected from the shader module
-
GPUPipelineLayout
layout
Consider the shader binding annotation of bindIndex for the binding index and bindGroup for the bind group index.
Return true
if all of the following conditions are satisfied:
-
layout.
[[bindGroupLayouts]]
[bindGroup] contains aGPUBindGroupLayoutEntry
entry whose entry.binding
== bindIndex. -
If the defined binding member for entry is:
buffer
-
"uniform"
-
The binding is a uniform buffer.
"storage"
-
The binding is a storage buffer.
"read-only-storage"
-
The binding is a read-only storage buffer.
If entry.
buffer
.minBindingSize
is not0
:-
If the last field of the corresponding structure defined in the shader has an unbounded array type, then the value of entry.
buffer
.minBindingSize
must be greater than or equal to the byte offset of that field plus the stride of the unbounded array. -
If the corresponding shader structure doesn’t end with an unbounded array type, then the value of entry.
buffer
.minBindingSize
must be greater than or equal to the size of the structure.
sampler
-
"filtering"
-
the binding is a non-comparison sampler
"non-filtering"
-
the binding is a non-comparison sampler
"comparison"
-
the binding is a comparison sampler
texture
-
If entry.
texture
.multisampled
is:true
-
the binding is a multisampled texture.
false
-
The binding is a sampled texture with a sample count of 1.
The component type of the texture matches entry.
texture
.sampleType
.The shader view dimension of the texture matches entry.
texture
.viewDimension
. storageTexture
-
If entry.
storageTexture
.access
is:"read-only"
-
The binding is a read-only storage texture.
"write-only"
-
The binding is a writable storage texture.
The format of the storage texture matches entry.
storageTexture
.format
.The shader view dimension of the storage texture matches entry.
storageTexture
.viewDimension
.
A resource binding is considered to be statically used by a shader entry point if and only if it’s reachable by the control flow graph of the shader module, starting at the entry point.
10.2. GPUComputePipeline
A GPUComputePipeline
is a kind of pipeline that controls the compute shader stage,
and can be used in GPUComputePassEncoder
.
Compute inputs and outputs are all contained in the bindings,
according to the given GPUPipelineLayout
.
The outputs correspond to buffer
bindings with a type of "storage"
and storageTexture
bindings with a type of "write-only"
.
Stages of a compute pipeline:
-
Compute shader
[Exposed=Window, Serializable]
interface GPUComputePipeline {
};
GPUComputePipeline includes GPUObjectBase;
GPUComputePipeline includes GPUPipelineBase;
10.2.1. Creation
dictionary GPUComputePipelineDescriptor : GPUPipelineDescriptorBase {
    required GPUProgrammableStage compute;
};
createComputePipeline(descriptor)
-
Creates a
GPUComputePipeline
.Called on:GPUDevice
this.Arguments:
Arguments for the GPUDevice.createComputePipeline(descriptor) method. Parameter Type Nullable Optional Description descriptor
GPUComputePipelineDescriptor ✘ ✘ Description of the GPUComputePipeline
to create.Returns:
GPUComputePipeline
If any of the following conditions are unsatisfied:
-
descriptor.
layout
is valid to use with this. -
validating GPUProgrammableStage(
COMPUTE
, descriptor.compute
, descriptor.layout
) succeeds.
Then:
-
Generate a
GPUValidationError
in the current scope with appropriate error message. -
Create a new invalid
GPUComputePipeline
and return the result.
-
createComputePipelineAsync(descriptor)
-
Creates a
GPUComputePipeline
. The returnedPromise
resolves when the created pipeline is ready to be used without additional delay.If pipeline creation fails, the returned
Promise
resolves to an invalidGPUComputePipeline
object.Note: Use of this method is preferred whenever possible, as it prevents blocking the queue timeline work on pipeline compilation.
Called on:GPUDevice
this.Arguments:
Arguments for the GPUDevice.createComputePipelineAsync(descriptor) method. Parameter Type Nullable Optional Description descriptor
GPUComputePipelineDescriptor ✘ ✘ Description of the GPUComputePipeline
to create.Returns:
Promise
<GPUComputePipeline
>-
Let promise be a new promise.
-
Issue the following steps on the Device timeline of this:
-
Let pipeline be a new
GPUComputePipeline
created as if this.createComputePipeline()
was called with descriptor; -
When pipeline is ready to be used, resolve promise with pipeline.
-
-
Return promise.
-
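Note: A non-normative sketch of asynchronous compute pipeline creation, assuming device and shaderModule already exist:

// Resolves once the pipeline is ready, so later use won't stall queue timeline work on compilation.
const pipeline = await device.createComputePipelineAsync({
    compute: { module: shaderModule, entryPoint: "main" },
});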
10.3. GPURenderPipeline
A GPURenderPipeline
is a kind of pipeline that controls the vertex
and fragment shader stages, and can be used in GPURenderPassEncoder
as well as GPURenderBundleEncoder
.
Render pipeline inputs are:
-
bindings, according to the given
GPUPipelineLayout
-
vertex and index buffers, described by
GPUVertexState
-
the color attachments, described by
GPUColorTargetState
-
optionally, the depth-stencil attachment, described by
GPUDepthStencilState
Render pipeline outputs are:
-
storageTexture
bindings with aaccess
of"write-only"
-
the color attachments, described by
GPUColorTargetState
-
optionally, depth-stencil attachment, described by
GPUDepthStencilState
A render pipeline is comprised of the following render stages:
-
Vertex fetch, controlled by
GPUVertexState.buffers
-
Vertex shader, controlled by
GPUVertexState
-
Primitive assembly, controlled by
GPUPrimitiveState
-
Rasterization, controlled by
GPUPrimitiveState
,GPUDepthStencilState
, andGPUMultisampleState
-
Fragment shader, controlled by
GPUFragmentState
-
Stencil test and operation, controlled by
GPUDepthStencilState
-
Depth test and write, controlled by
GPUDepthStencilState
-
Output merging, controlled by
GPUFragmentState.targets
[Exposed=Window, Serializable]
interface GPURenderPipeline {
};
GPURenderPipeline includes GPUObjectBase;
GPURenderPipeline includes GPUPipelineBase;
GPURenderPipeline
has the following internal slots:
[[descriptor]]
, of typeGPURenderPipelineDescriptor
-
The
GPURenderPipelineDescriptor
describing this pipeline.All optional fields of
GPURenderPipelineDescriptor
are defined. [[strip_index_format]]
, of typeGPUIndexFormat
?-
The format of index data this pipeline requires if using a strip primitive topology, initially
undefined
.
10.3.1. Creation
dictionary GPURenderPipelineDescriptor : GPUPipelineDescriptorBase {
    required GPUVertexState vertex;
    GPUPrimitiveState primitive = {};
    GPUDepthStencilState depthStencil;
    GPUMultisampleState multisample = {};
    GPUFragmentState fragment;
};
A GPURenderPipelineDescriptor
describes the state of a render pipeline by
configuring each of the render stages. See § 21.3 Rendering for the
details.
-
vertex
describes the vertex shader entry point of the pipeline and its input buffer layouts. -
primitive
describes the primitive-related properties of the pipeline. -
depthStencil
describes the optional depth-stencil properties, including the testing, operations, and bias. -
multisample
describes the multi-sampling properties of the pipeline. -
fragment
describes the fragment shader entry point of the pipeline and its output colors. If it’snull
, the § 21.3.7 No Color Output mode is enabled.
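Note: A non-normative sketch of a complete GPURenderPipelineDescriptor, assuming shaderModule contains "vs_main" and "fs_main" entry points and that "bgra8unorm" is a renderable color format here:

const renderPipeline = device.createRenderPipeline({
    vertex: {
        module: shaderModule,
        entryPoint: "vs_main",
        buffers: [{
            arrayStride: 16,
            attributes: [{ format: "float32x4", offset: 0, shaderLocation: 0 }],
        }],
    },
    primitive: { topology: "triangle-list" },
    multisample: { count: 1 },
    fragment: {
        module: shaderModule,
        entryPoint: "fs_main",
        targets: [{ format: "bgra8unorm" }],
    },
});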
createRenderPipeline(descriptor)
-
Creates a
GPURenderPipeline
.Called on:GPUDevice
this.Arguments:
Arguments for the GPUDevice.createRenderPipeline(descriptor) method. Parameter Type Nullable Optional Description descriptor
GPURenderPipelineDescriptor ✘ ✘ Description of the GPURenderPipeline
to create.Returns:
GPURenderPipeline
-
Let pipeline be a new valid
GPURenderPipeline
object. -
Issue the following steps on the Device timeline of this:
-
If any of the following conditions are unsatisfied:
-
descriptor.
layout
is valid to use with this. -
validating GPURenderPipelineDescriptor(descriptor, this) succeeds.
Then:
-
Generate a
GPUValidationError
in the current scope with appropriate error message. -
Make pipeline invalid.
-
-
Set pipeline.
[[descriptor]]
to descriptor. -
Set pipeline.
[[strip_index_format]]
to descriptor.primitive
.stripIndexFormat
.
-
-
Return pipeline.
-
createRenderPipelineAsync(descriptor)
-
Creates a
GPURenderPipeline
. The returnedPromise
resolves when the created pipeline is ready to be used without additional delay.If pipeline creation fails, the returned
Promise
resolves to an invalidGPURenderPipeline
object.Note: Use of this method is preferred whenever possible, as it prevents blocking the queue timeline work on pipeline compilation.
Called on:GPUDevice
this.Arguments:
Arguments for the GPUDevice.createRenderPipelineAsync(descriptor) method. Parameter Type Nullable Optional Description descriptor
GPURenderPipelineDescriptor ✘ ✘ Description of the GPURenderPipeline
to create.Returns:
Promise
<GPURenderPipeline
>-
Let promise be a new promise.
-
Issue the following steps on the Device timeline of this:
-
Let pipeline be a new
GPURenderPipeline
created as if this.createRenderPipeline()
was called with descriptor; -
When pipeline is ready to be used, resolve promise with pipeline.
-
-
Return promise.
-
-
GPURenderPipelineDescriptor
descriptor -
GPUDevice
device
Return true
if all of the following conditions are satisfied:
-
validating GPUProgrammableStage(
VERTEX
, descriptor.vertex
, descriptor.layout
) succeeds. -
validating GPUVertexState(device, descriptor.
vertex
, descriptor.vertex
) succeeds. -
If descriptor.
fragment
is notnull
:-
validating GPUProgrammableStage(
FRAGMENT
, descriptor.fragment
, descriptor.layout
) succeeds. -
validating GPUFragmentState(descriptor.
fragment
) succeeds. -
If the output SV_Coverage semantics is statically used by descriptor.
fragment
:-
descriptor.
multisample
.alphaToCoverageEnabled
isfalse
.
-
-
-
validating GPUPrimitiveState(descriptor.
primitive
, device.[[features]]
) succeeds. -
if descriptor.
depthStencil
is notnull
:-
validating GPUDepthStencilState(descriptor.
depthStencil
) succeeds.
-
-
validating GPUMultisampleState(descriptor.
multisample
) succeeds. -
For each user-defined output of descriptor.
vertex
there must be a user-defined input of descriptor.fragment
that matches the location, type, and interpolation of the output. -
For each user-defined input of descriptor.
fragment
there must be a user-defined output of descriptor.vertex
that matches the location, type, and interpolation of the input.
should we validate that cullMode
is none for points and lines?
define what "compatible" means for render target formats.
need a proper limit for the maximum number of color targets.
10.3.2. Primitive State
enum GPUPrimitiveTopology {
    "point-list",
    "line-list",
    "line-strip",
    "triangle-list",
    "triangle-strip"
};

dictionary GPUPrimitiveState {
    GPUPrimitiveTopology topology = "triangle-list";
    GPUIndexFormat stripIndexFormat;
    GPUFrontFace frontFace = "ccw";
    GPUCullMode cullMode = "none";

    // Enable depth clamping (requires "depth-clamping" feature)
    boolean clampDepth = false;
};
-
GPUPrimitiveState
descriptor -
list<
GPUFeatureName
> features
Return true
if all of the following conditions are satisfied:
-
If descriptor.
topology
is:"line-strip"
or"triangle-strip"
-
descriptor.
stripIndexFormat
is notundefined
- Otherwise
-
descriptor.
stripIndexFormat
isundefined
-
If descriptor.
clampDepth
istrue
:-
features must contain
"depth-clamping"
.
-
enum GPUFrontFace {
    "ccw",
    "cw"
};

enum GPUCullMode {
    "none",
    "front",
    "back"
};
10.3.3. Multisample State
dictionary GPUMultisampleState {
    GPUSize32 count = 1;
    GPUSampleMask mask = 0xFFFFFFFF;
    boolean alphaToCoverageEnabled = false;
};
-
GPUMultisampleState
descriptor
Return true
if all of the following conditions are satisfied:
-
If descriptor.
alphaToCoverageEnabled
istrue
:-
descriptor.
count
is greater than 1.
-
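Note: A non-normative sketch of a GPUMultisampleState; alphaToCoverageEnabled is only valid with a sample count greater than 1:

const multisample = {
    count: 4,
    mask: 0xFFFFFFFF,
    alphaToCoverageEnabled: true,
};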
10.3.4. Fragment State
dictionary GPUFragmentState : GPUProgrammableStage {
    required sequence<GPUColorTargetState> targets;
};
true
if all of the following requirements are met:
-
descriptor.
targets
.length must be ≤ 4. -
For each colorState layout descriptor in the list descriptor.
targets
:-
colorState.
format
must be listed in § 24.1.1 Plain color formats withRENDER_ATTACHMENT
capability. -
If colorState.
blend
is notundefined
:-
The colorState.
format
must be filterable according to the § 24.1.1 Plain color formats table. -
colorState.
blend
.color
must be a valid GPUBlendComponent. -
colorState.
blend
.alpha
must be a valid GPUBlendComponent.
-
-
colorState.
writeMask
must be < 16. -
descriptor.
module
must contain an output variable that:-
is statically used by descriptor.
entryPoint
, and -
has a type that is compatible with colorState.
format
.
-
-
define the area of reach for "statically used" things of GPUProgrammableStage
10.3.5. Color Target State
dictionary GPUColorTargetState {
    required GPUTextureFormat format;

    GPUBlendState blend;
    GPUColorWriteFlags writeMask = 0xF;  // GPUColorWrite.ALL
};

dictionary GPUBlendState {
    required GPUBlendComponent color;
    required GPUBlendComponent alpha;
};

typedef [EnforceRange] unsigned long GPUColorWriteFlags;
[Exposed=Window]
interface GPUColorWrite {
    const GPUFlagsConstant RED   = 0x1;
    const GPUFlagsConstant GREEN = 0x2;
    const GPUFlagsConstant BLUE  = 0x4;
    const GPUFlagsConstant ALPHA = 0x8;
    const GPUFlagsConstant ALL   = 0xF;
};
10.3.5.1. Blend State
dictionary GPUBlendComponent {
    GPUBlendFactor srcFactor = "one";
    GPUBlendFactor dstFactor = "zero";
    GPUBlendOperation operation = "add";
};

enum GPUBlendFactor {
    "zero",
    "one",
    "src",
    "one-minus-src",
    "src-alpha",
    "one-minus-src-alpha",
    "dst",
    "one-minus-dst",
    "dst-alpha",
    "one-minus-dst-alpha",
    "src-alpha-saturated",
    "constant",
    "one-minus-constant"
};

enum GPUBlendOperation {
    "add",
    "subtract",
    "reverse-subtract",
    "min",
    "max"
};
10.3.6. Depth/Stencil State
dictionary GPUDepthStencilState {
    required GPUTextureFormat format;

    boolean depthWriteEnabled = false;
    GPUCompareFunction depthCompare = "always";

    GPUStencilFaceState stencilFront = {};
    GPUStencilFaceState stencilBack = {};

    GPUStencilValue stencilReadMask = 0xFFFFFFFF;
    GPUStencilValue stencilWriteMask = 0xFFFFFFFF;

    GPUDepthBias depthBias = 0;
    float depthBiasSlopeScale = 0;
    float depthBiasClamp = 0;
};

dictionary GPUStencilFaceState {
    GPUCompareFunction compare = "always";
    GPUStencilOperation failOp = "keep";
    GPUStencilOperation depthFailOp = "keep";
    GPUStencilOperation passOp = "keep";
};

enum GPUStencilOperation {
    "keep",
    "zero",
    "replace",
    "invert",
    "increment-clamp",
    "decrement-clamp",
    "increment-wrap",
    "decrement-wrap"
};
-
GPUDepthStencilState
descriptor
Return true
, if and only if, all of the following conditions are satisfied:
-
descriptor.
format
is listed in § 24.1.2 Depth/stencil formats. -
if descriptor.
depthWriteEnabled
istrue
or descriptor.depthCompare
is not"always"
:-
descriptor.
format
must have a depth component.
-
-
if descriptor.
stencilFront
or descriptor.stencilBack
are not default values:-
descriptor.
format
must have a stencil component.
-
how can this algorithm support depth/stencil formats that are added in extensions?
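Note: A non-normative sketch of a GPUDepthStencilState enabling depth testing and writes, assuming "depth24plus" and the "less" compare function are supported here:

const depthStencil = {
    format: "depth24plus",
    depthWriteEnabled: true,
    depthCompare: "less",
};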
10.3.7. Vertex State
enum GPUIndexFormat {
    "uint16",
    "uint32"
};
The index format determines both the data type of index values in a buffer and, when used with
strip primitive topologies ("line-strip"
or "triangle-strip"
), the primitive restart value. The primitive restart value is the index value that signals that a new primitive
should be started rather than continuing to construct the strip with the prior indexed
vertices.
GPUPrimitiveState
s that specify a strip primitive topology must specify a stripIndexFormat
so that the primitive restart value that will be used
is known at pipeline creation time. GPUPrimitiveState
s that specify a list primitive
topology must set stripIndexFormat
to undefined
, and will use the index
format passed to setIndexBuffer()
when rendering.
Index format | Primitive restart value
---|---
"uint16" | 0xFFFF
"uint32" | 0xFFFFFFFF
10.3.7.1. Vertex Formats
The name of the format specifies the order of components, bits per component, and data type for the component.
-
unorm
= unsigned normalized -
snorm
= signed normalized -
uint
= unsigned int -
sint
= signed int -
float
= floating point
enum GPUVertexFormat {
    "uint8x2",
    "uint8x4",
    "sint8x2",
    "sint8x4",
    "unorm8x2",
    "unorm8x4",
    "snorm8x2",
    "snorm8x4",
    "uint16x2",
    "uint16x4",
    "sint16x2",
    "sint16x4",
    "unorm16x2",
    "unorm16x4",
    "snorm16x2",
    "snorm16x4",
    "float16x2",
    "float16x4",
    "float32",
    "float32x2",
    "float32x3",
    "float32x4",
    "uint32",
    "uint32x2",
    "uint32x3",
    "uint32x4",
    "sint32",
    "sint32x2",
    "sint32x3",
    "sint32x4"
};

enum GPUInputStepMode {
    "vertex",
    "instance"
};

dictionary GPUVertexState : GPUProgrammableStage {
    sequence<GPUVertexBufferLayout?> buffers = [];
};
A vertex buffer is, conceptually, a view into buffer memory as an array of structures. arrayStride
is the stride, in bytes, between elements of that array.
Each element of a vertex buffer is like a structure with a memory layout defined by its attributes
, which describe the members of the structure.
Each GPUVertexAttribute
describes its format
and its offset
, in bytes, within the structure.
Each attribute appears as a separate input in a vertex shader, each bound by a numeric location,
which is specified by shaderLocation
.
Every location must be unique within the GPUVertexState
.
dictionary GPUVertexBufferLayout {
    required GPUSize64 arrayStride;
    GPUInputStepMode stepMode = "vertex";
    required sequence<GPUVertexAttribute> attributes;
};

dictionary GPUVertexAttribute {
    required GPUVertexFormat format;
    required GPUSize64 offset;
    required GPUIndex32 shaderLocation;
};
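Note: A non-normative sketch of a GPUVertexState.buffers value with one per-vertex buffer and one per-instance buffer; the attribute layout is an assumption for illustration:

const buffers = [
    {
        arrayStride: 20, // 12 bytes of position + 8 bytes of uv per vertex
        stepMode: "vertex",
        attributes: [
            { format: "float32x3", offset: 0, shaderLocation: 0 },
            { format: "float32x2", offset: 12, shaderLocation: 1 },
        ],
    },
    {
        arrayStride: 16,
        stepMode: "instance",
        attributes: [{ format: "float32x4", offset: 0, shaderLocation: 2 }],
    },
];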
-
GPUDevice
device -
GPUVertexBufferLayout
descriptor -
GPUProgrammableStage
vertexStage
Return true
, if and only if, all of the following conditions are satisfied:
-
descriptor.
arrayStride
≤ device.[[device]]
.[[limits]]
.maxVertexBufferArrayStride
. -
descriptor.
arrayStride
is a multiple of 4. -
For each attribute attrib in the list descriptor.
attributes
:-
If descriptor.
arrayStride
is zero:-
attrib.
offset
+ sizeof(attrib.format
) ≤ device.[[device]]
.[[limits]]
.maxVertexBufferArrayStride
.
Otherwise:
-
attrib.
offset
+ sizeof(attrib.format
) ≤ descriptor.arrayStride
.
-
-
attrib.
offset
is a multiple of the size of one component of attrib.format
. -
attrib.
shaderLocation
is less than device.[[device]]
.[[limits]]
.maxVertexAttributes
.
-
-
For every vertex attribute in the shader reflection of vertexStage.
module
that is known to be statically used by vertexStage.entryPoint
, there is a corresponding attrib element of descriptor.attributes
for which all of the following are true:-
The shader format is attrib.
format
. -
The shader location is attrib.
shaderLocation
.
-
-
GPUDevice
device -
GPUVertexState
descriptor
Return true
, if and only if, all of the following conditions are satisfied:
-
descriptor.
buffers
.length is less than or equal to device.[[device]]
.[[limits]]
.maxVertexBuffers
. -
Each vertexBuffer layout descriptor in the list descriptor.
buffers
passes validating GPUVertexBufferLayout(device, vertexBuffer, descriptor) -
The sum of vertexBuffer.
attributes
.length, over every vertexBuffer in descriptor.buffers
, is less than or equal to device.[[device]]
.[[limits]]
.maxVertexAttributes
. -
Each attrib in the union of all
GPUVertexAttribute
across descriptor.buffers
has a distinct attrib.shaderLocation
value.
11. Command Buffers
Command buffers are pre-recorded lists of GPU commands that can be submitted to a GPUQueue
for execution. Each GPU command represents a task to be performed on the GPU, such as
setting state, drawing, copying resources, etc.
11.1. GPUCommandBuffer
[Exposed=Window]
interface GPUCommandBuffer {
    readonly attribute Promise<double> executionTime;
};
GPUCommandBuffer includes GPUObjectBase;
GPUCommandBuffer
has the following attributes:
executionTime
, of type Promise<double>, readonly-
The total time, in seconds, that the GPU took to execute this command buffer.
Note: If
measureExecutionTime
istrue
, this resolves after the command buffer executes. Otherwise, this rejects with anOperationError
.Specify the creation and resolution of the promise.In
finish()
, it should be specified that a new promise is created and stored in this attribute. The promise starts rejected ifmeasureExecutionTime
isfalse
. If the finish() fails, then the promise resolves to 0.In
submit()
, it should be specified that (ifmeasureExecutionTime
is set), work is issued to read back the execution time, and, when that completes, the promise is resolved with that value. If the submit() fails, then the promise resolves to 0.
GPUCommandBuffer
has the following internal slots:
[[command_list]]
of type list<GPU command>.-
A list of GPU commands to be executed on the Queue timeline when this command buffer is submitted.
11.1.1. Creation
dictionary GPUCommandBufferDescriptor : GPUObjectDescriptorBase {
};
12. Command Encoding
12.1. GPUCommandEncoder
[Exposed=Window]
interface GPUCommandEncoder {
    GPURenderPassEncoder beginRenderPass(GPURenderPassDescriptor descriptor);
    GPUComputePassEncoder beginComputePass(optional GPUComputePassDescriptor descriptor = {});

    undefined copyBufferToBuffer(
        GPUBuffer source,
        GPUSize64 sourceOffset,
        GPUBuffer destination,
        GPUSize64 destinationOffset,
        GPUSize64 size);

    undefined copyBufferToTexture(
        GPUImageCopyBuffer source,
        GPUImageCopyTexture destination,
        GPUExtent3D copySize);

    undefined copyTextureToBuffer(
        GPUImageCopyTexture source,
        GPUImageCopyBuffer destination,
        GPUExtent3D copySize);

    undefined copyTextureToTexture(
        GPUImageCopyTexture source,
        GPUImageCopyTexture destination,
        GPUExtent3D copySize);

    undefined pushDebugGroup(USVString groupLabel);
    undefined popDebugGroup();
    undefined insertDebugMarker(USVString markerLabel);

    undefined writeTimestamp(GPUQuerySet querySet, GPUSize32 queryIndex);

    undefined resolveQuerySet(
        GPUQuerySet querySet,
        GPUSize32 firstQuery,
        GPUSize32 queryCount,
        GPUBuffer destination,
        GPUSize64 destinationOffset);

    GPUCommandBuffer finish(optional GPUCommandBufferDescriptor descriptor = {});
};
GPUCommandEncoder includes GPUObjectBase;
GPUCommandEncoder
has the following internal slots:
[[command_list]]
of type list<GPU command>.-
A list of GPU command to be executed on the Queue timeline when the
GPUCommandBuffer
this encoder produces is submitted. [[state]]
of typeencoder state
.-
The current state of the
GPUCommandEncoder
, initially set toopen
. [[debug_group_stack]]
of type stack<USVString
>.-
A stack of active debug group labels.
Each GPUCommandEncoder
has a current encoder state
on the Content timeline which may be one of the following:
- "
open
" -
Indicates the
GPUCommandEncoder
is available to begin new operations. The[[state]]
isopen
any time theGPUCommandEncoder
is valid and has no activeGPURenderPassEncoder
orGPUComputePassEncoder
. - "
encoding a render pass
" -
Indicates the
GPUCommandEncoder
has an activeGPURenderPassEncoder
. The[[state]]
becomesencoding a render pass
oncebeginRenderPass()
is called successfully until endPass()
is called on the returnedGPURenderPassEncoder
, at which point the[[state]]
(if the encoder is still valid) reverts toopen
. - "
encoding a compute pass
" -
Indicates the
GPUCommandEncoder
has an activeGPUComputePassEncoder
. The[[state]]
becomesencoding a compute pass
oncebeginComputePass()
is called successfully until endPass()
is called on the returnedGPUComputePassEncoder
, at which point the[[state]]
(if the encoder is still valid) reverts toopen
. - "
closed
" -
Indicates the
GPUCommandEncoder
is no longer available for any operations. The[[state]]
becomesclosed
oncefinish()
is called or theGPUCommandEncoder
otherwise becomes invalid.
12.1.1. Creation
dictionary GPUCommandEncoderDescriptor : GPUObjectDescriptorBase {
    boolean measureExecutionTime = false;

    // TODO: reusability flag?
};
measureExecutionTime
, of type boolean, defaulting tofalse
-
Enable measurement of the GPU execution time of the entire command buffer.
createCommandEncoder(descriptor)
-
Creates a
GPUCommandEncoder
.Called on:GPUDevice
this.Arguments:
Arguments for the GPUDevice.createCommandEncoder(descriptor) method. Parameter Type Nullable Optional Description descriptor
GPUCommandEncoderDescriptor ✘ ✔ Description of the GPUCommandEncoder
to create.Returns:
GPUCommandEncoder
Describe
createCommandEncoder()
algorithm steps.
12.2. Pass Encoding
beginRenderPass(descriptor)
-
Begins encoding a render pass described by descriptor.
Called on:GPUCommandEncoder
this.Arguments:
Arguments for the GPUCommandEncoder.beginRenderPass(descriptor) method. Parameter Type Nullable Optional Description descriptor
GPURenderPassDescriptor ✘ ✘ Description of the GPURenderPassEncoder
to create.Returns:
GPURenderPassEncoder
Issue the following steps on the Device timeline of this:
-
If any of the following conditions are unsatisfied, generate a validation error and stop.
-
descriptor meets the Valid Usage rules.
-
Set this.
[[state]]
toencoding a render pass
. -
For each colorAttachment in descriptor.
colorAttachments
:-
The texture subresource seen by colorAttachment.
view
is considered to be used as attachment for the duration of the render pass.
-
-
Let depthStencilAttachment be descriptor.
depthStencilAttachment
. -
If depthStencilAttachment is not
null
:-
if depthStencilAttachment.
depthReadOnly
andstencilReadOnly
are set-
The texture subresources seen by depthStencilAttachment.
view
are considered to be used as attachment-read for the duration of the render pass.
-
-
Else, the texture subresource seen by depthStencilAttachment.
view
is considered to be used as attachment for the duration of the render pass.
-
specify the behavior of read-only depth/stencil. Enqueue attachment loads (with loadOp clear).
-
beginComputePass(descriptor)
-
Begins encoding a compute pass described by descriptor.
Called on:GPUCommandEncoder
this.Arguments:
Arguments for the GPUCommandEncoder.beginComputePass(descriptor) method. Parameter Type Nullable Optional Description descriptor
GPUComputePassDescriptor ✘ ✔ Returns:
GPUComputePassEncoder
Issue the following steps on the Device timeline of this:
-
If any of the following conditions are unsatisfied, generate a validation error and stop.
-
Set this.
[[state]]
toencoding a compute pass
.
-
12.3. Copy Commands
these dictionary definitions should be inside the image copies section.
12.3.1. GPUImageDataLayout
dictionary GPUImageDataLayout {
    GPUSize64 offset = 0;
    GPUSize32 bytesPerRow;
    GPUSize32 rowsPerImage;
};
A GPUImageDataLayout
is a layout of images within some linear memory.
It’s used when copying data between a texture and a buffer, or when scheduling a
write into a texture from the GPUQueue
.
-
For
2d
textures, data is copied between one or multiple contiguous images and array layers. -
For
3d
textures, data is copied between one or multiple contiguous images and depth slices.
Operations that copy between byte arrays and textures always work with rows of texel blocks, which we’ll call block rows. It’s not possible to update only a part of a texel block.
Define images more precisely. In particular, define them as being comprised of texel blocks.
Define the exact copy semantics, by reference to common algorithms shared by the copy methods.
bytesPerRow
, of type GPUSize32-
The stride, in bytes, between the beginning of each block row and the subsequent block row.
Required if there are multiple block rows (i.e. the height or depth is more than one block).
rowsPerImage
, of type GPUSize32-
Number of block rows per single image of the texture.
rowsPerImage
×bytesPerRow
is the stride, in bytes, between the beginning of each image of data and the subsequent image.Required if there are multiple images (i.e. the depth is more than one).
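Note: A non-normative sketch of a GPUImageDataLayout for a tightly packed 256×256 image with 4 bytes per texel (an assumed "rgba8unorm" format); 256 × 4 = 1024 bytes per row, which also satisfies the 256-byte bytesPerRow multiple required for buffer copies:

const layout = {
    offset: 0,
    bytesPerRow: 256 * 4,
    rowsPerImage: 256,
};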
12.3.2. GPUImageCopyBuffer
In an image copy operation, GPUImageCopyBuffer
defines a GPUBuffer
and, together with
the copySize
, how image data is laid out in the buffer’s memory (see GPUImageDataLayout
).
dictionary GPUImageCopyBuffer : GPUImageDataLayout {
    required GPUBuffer buffer;
};
Arguments:
-
GPUImageCopyBuffer
imageCopyBuffer
Returns: boolean
Return true
if and only if all of the following conditions are satisfied:
-
imageCopyBuffer.
bytesPerRow
must be a multiple of 256.
12.3.3. GPUImageCopyTexture
In an image copy operation, a GPUImageCopyTexture
defines a GPUTexture
and, together with
the copySize
, the sub-region of the texture (spanning one or more contiguous texture subresources at the same mip-map level).
dictionary GPUImageCopyTexture {
    required GPUTexture texture;
    GPUIntegerCoordinate mipLevel = 0;
    GPUOrigin3D origin = {};
    GPUTextureAspect aspect = "all";
};
texture
, of type GPUTexture-
Texture to copy to/from.
mipLevel
, of type GPUIntegerCoordinate, defaulting to0
-
Mip-map level of the
texture
to copy to/from. origin
, of type GPUOrigin3D, defaulting to{}
-
Defines the origin of the copy - the minimum corner of the texture sub-region to copy to/from. Together with
copySize
, defines the full copy sub-region. aspect
, of type GPUTextureAspect, defaulting to"all"
-
Defines which aspects of the
texture
to copy to/from.
Arguments:
-
GPUImageCopyTexture
imageCopyTexture -
GPUExtent3D
copySize
Returns: boolean
Let:
-
blockWidth be the texel block width of imageCopyTexture.
texture
.[[descriptor]]
.format
. -
blockHeight be the texel block height of imageCopyTexture.
texture
.[[descriptor]]
.format
.
Return true
if and only if all of the following conditions apply:
-
imageCopyTexture.
texture
must be a validGPUTexture
. -
imageCopyTexture.
mipLevel
must be less than the[[descriptor]]
.mipLevelCount
of imageCopyTexture.texture
. -
imageCopyTexture.
origin
.y must be a multiple of blockHeight. -
The imageCopyTexture subresource size of imageCopyTexture is equal to copySize if either of the following conditions is true:
-
imageCopyTexture.
texture
.[[descriptor]]
.format
is a depth-stencil format. -
imageCopyTexture.
texture
.[[descriptor]]
.sampleCount
is greater than 1.
-
Define the copies with 1d
and 3d
textures. <https://github.com/gpuweb/gpuweb/issues/69>
12.3.4. GPUImageCopyExternalImage
dictionary GPUImageCopyExternalImage {
    required (ImageBitmap or HTMLCanvasElement or OffscreenCanvas) source;
    GPUOrigin2D origin = {};
};
GPUImageCopyExternalImage
has the following members:
source
, of type(ImageBitmap or HTMLCanvasElement or OffscreenCanvas)
-
The source of the image copy. The copy source data is captured at the moment that
copyExternalImageToTexture()
is issued. origin
, of type GPUOrigin2D, defaulting to{}
-
Defines the origin of the copy - the minimum corner of the source sub-region to copy from. Together with
copySize
, defines the full copy sub-region.
Needs optional information about the target color encoding. ImageBitmap and canvas (and VideoElement) are color-managed: they encode colors. GPUTexture is not color-managed: it encodes raw numbers. Producing raw numbers requires knowing the target encoding. Probably there should be a particular default value (e.g. the default profile used by the browser for unmanaged content?), but eventually we’ll want to add knobs. <https://github.com/gpuweb/gpuweb/issues/1483>
Once that’s figured out, generally define (and test) the encoding of color values into the
various formats allowed by copyExternalImageToTexture()
.
12.3.5. Buffer Copies
copyBufferToBuffer(source, sourceOffset, destination, destinationOffset, size)
-
Encode a command into the
GPUCommandEncoder
that copies data from a sub-region of aGPUBuffer
to a sub-region of anotherGPUBuffer
.Called on:GPUCommandEncoder
this.Arguments:
Arguments for the GPUCommandEncoder.copyBufferToBuffer(source, sourceOffset, destination, destinationOffset, size) method. Parameter Type Nullable Optional Description source
GPUBuffer ✘ ✘ The GPUBuffer
to copy from.sourceOffset
GPUSize64 ✘ ✘ Offset in bytes into source to begin copying from. destination
GPUBuffer ✘ ✘ The GPUBuffer
to copy to.destinationOffset
GPUSize64 ✘ ✘ Offset in bytes into destination to place the copied data. size
GPUSize64 ✘ ✘ Bytes to copy. Returns:
undefined
If any of the following conditions are unsatisfied, generate a validation error and stop.
-
source is valid to use with this.
-
destination is valid to use with this.
-
size is a multiple of 4.
-
sourceOffset is a multiple of 4.
-
destinationOffset is a multiple of 4.
-
(sourceOffset + size) does not overflow a
GPUSize64
. -
(destinationOffset + size) does not overflow a
GPUSize64
. -
source.
[[size]]
is greater than or equal to (sourceOffset + size). -
destination.
[[size]]
is greater than or equal to (destinationOffset + size). -
source and destination are not the same
GPUBuffer
.
Define the state machine for GPUCommandEncoder. <https://github.com/gpuweb/gpuweb/issues/21>
figure out how to handle overflows in the spec. <https://github.com/gpuweb/gpuweb/issues/69>
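Note: A non-normative sketch of a buffer-to-buffer copy, assuming srcBuffer and dstBuffer already exist with COPY_SRC and COPY_DST usage respectively; the offsets and size are multiples of 4 as required above:

const encoder = device.createCommandEncoder();
encoder.copyBufferToBuffer(srcBuffer, 0, dstBuffer, 0, 256);
device.queue.submit([encoder.finish()]);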
12.3.6. Image Copies
WebGPU provides copyBufferToTexture()
for buffer-to-texture copies and copyTextureToBuffer()
for texture-to-buffer copies.
The following definitions and validation rules apply to both copyBufferToTexture()
and copyTextureToBuffer()
.
imageCopyTexture subresource size and Valid Texture Copy Range also applies to copyTextureToTexture()
.
imageCopyTexture subresource size
Arguments:
-
GPUImageCopyTexture
imageCopyTexture
Returns: GPUExtent3D
The imageCopyTexture subresource size of imageCopyTexture is calculated as follows:
Its width, height and depthOrArrayLayers are the width, height, and depth, respectively,
of the physical size of imageCopyTexture.texture
subresource at mipmap level imageCopyTexture.mipLevel
.
define this as an algorithm with (texture, mipmapLevel) parameters and use the call syntax instead of referring to the definition by label.
Arguments:
GPUImageDataLayout
layout-
Layout of the linear texture data.
GPUSize64
byteSize-
Total size of the linear data, in bytes.
GPUTextureFormat
format-
Format of the texture.
GPUExtent3D
copyExtent-
Extent of the texture to copy.
-
Let blockWidth, blockHeight, and blockSize be the texel block width, height, and size of format.
-
It is assumed that copyExtent.width is a multiple of blockWidth and copyExtent.height is a multiple of blockHeight. Let:
-
widthInBlocks be copyExtent.width ÷ blockWidth.
-
heightInBlocks be copyExtent.height ÷ blockHeight.
-
bytesInLastRow be blockSize × widthInBlocks.
-
Fail if the following conditions are not satisfied:
-
If heightInBlocks > 1, layout.
bytesPerRow
must be specified. -
If copyExtent.depthOrArrayLayers > 1, layout.
bytesPerRow
and layout.rowsPerImage
must be specified. -
If specified, layout.
bytesPerRow
must be greater than or equal to bytesInLastRow. -
If specified, layout.
rowsPerImage
must be greater than or equal to heightInBlocks.
-
-
Let requiredBytesInCopy be 0.
-
If copyExtent.depthOrArrayLayers > 1:
-
Let bytesPerImage be layout.
bytesPerRow
× layout.rowsPerImage
. -
Let bytesBeforeLastImage be bytesPerImage × (copyExtent.depthOrArrayLayers − 1).
-
Add bytesBeforeLastImage to requiredBytesInCopy.
-
-
If copyExtent.depthOrArrayLayers > 0:
-
If heightInBlocks > 1, add layout.
bytesPerRow
× (heightInBlocks − 1) to requiredBytesInCopy. -
If heightInBlocks > 0, add bytesInLastRow to requiredBytesInCopy.
-
-
Fail if the following conditions are not satisfied:
-
layout.
offset
+ requiredBytesInCopy ≤ byteSize.
-
Valid Texture Copy Range
Given a GPUImageCopyTexture
imageCopyTexture and a GPUExtent3D
copySize, let
-
blockWidth be the texel block width of imageCopyTexture.
texture
.[[descriptor]]
.format
. -
blockHeight be the texel block height of imageCopyTexture.
texture
.[[descriptor]]
.format
.
The following validation rules apply:
-
If the
[[descriptor]]
.dimension
of imageCopyTexture.texture
is1d
:-
Both copySize.height and copySize.depthOrArrayLayers must be 1.
-
-
If the
[[descriptor]]
.dimension
of imageCopyTexture.texture
is2d
:-
(imageCopyTexture.
origin
.x + copySize.width), (imageCopyTexture.origin
.y + copySize.height), and (imageCopyTexture.origin
.z + copySize.depthOrArrayLayers) must be less than or equal to the width, height, and depthOrArrayLayers, respectively, of the imageCopyTexture subresource size of imageCopyTexture.
-
-
copySize.width must be a multiple of blockWidth.
-
copySize.height must be a multiple of blockHeight.
Define the copies with 1d
and 3d
textures. <https://github.com/gpuweb/gpuweb/issues/69>
Additional restrictions on rowsPerImage if needed. <https://github.com/gpuweb/gpuweb/issues/537>
Define the copies with "depth24plus"
, "depth24plus-stencil8"
, and "stencil8"
. <https://github.com/gpuweb/gpuweb/issues/652>
convert "Valid Texture Copy Range" into an algorithm with parameters, similar to "validating linear texture data"
copyBufferToTexture(source, destination, copySize)
-
Encode a command into the
GPUCommandEncoder
that copies data from a sub-region of aGPUBuffer
to a sub-region of one or multiple contiguous texture subresources.Called on:GPUCommandEncoder
this.Arguments:
Arguments for the GPUCommandEncoder.copyBufferToTexture(source, destination, copySize) method. Parameter Type Nullable Optional Description source
GPUImageCopyBuffer ✘ ✘ Combined with copySize, defines the region of the source buffer. destination
GPUImageCopyTexture ✘ ✘ Combined with copySize, defines the region of the destination texture subresource. copySize
GPUExtent3D ✘ ✘ Returns:
undefined
If any of the following conditions are unsatisfied, generate a validation error and stop.
-
Let dstTextureDesc be destination.
texture
.[[descriptor]]
. -
validating GPUImageCopyBuffer(source) returns
true
. -
validating GPUImageCopyTexture(destination, copySize) returns
true
. -
dstTextureDesc.
sampleCount
is 1. -
If dstTextureDesc.
format
is a depth-stencil format:-
destination.
aspect
must refer to a single copyable aspect of dstTextureDesc.format
. See depth-formats.
-
-
Valid Texture Copy Range applies to destination and copySize.
-
If dstTextureDesc.
format
is not a depth/stencil format:-
source.
offset
is a multiple of the texel block size of dstTextureDesc.format
.
-
-
If dstTextureDesc.
format
is a depth/stencil format:-
source.
offset
is a multiple of 4.
-
-
validating linear texture data(source, source.
buffer
.[[size]]
, dstTextureDesc.format
, copySize) succeeds.
-
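Note: A non-normative sketch of a buffer-to-texture copy, assuming srcBuffer holds a 64×64 image with 4 bytes per texel matching dstTexture's format; 64 × 4 = 256 bytes per row satisfies the bytesPerRow alignment for GPUImageCopyBuffer:

encoder.copyBufferToTexture(
    { buffer: srcBuffer, offset: 0, bytesPerRow: 256, rowsPerImage: 64 },
    { texture: dstTexture, mipLevel: 0, origin: { x: 0, y: 0, z: 0 } },
    { width: 64, height: 64, depthOrArrayLayers: 1 });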
copyTextureToBuffer(source, destination, copySize)
-
Encode a command into the
GPUCommandEncoder
that copies data from a sub-region of one or multiple contiguous texture subresources to a sub-region of a GPUBuffer
.Called on:GPUCommandEncoder
this.Arguments:
Arguments for the GPUCommandEncoder.copyTextureToBuffer(source, destination, copySize) method. Parameter Type Nullable Optional Description source
GPUImageCopyTexture ✘ ✘ Combined with copySize, defines the region of the source texture subresources. destination
GPUImageCopyBuffer ✘ ✘ Combined with copySize, defines the region of the destination buffer. copySize
GPUExtent3D ✘ ✘ Returns:
undefined
If any of the following conditions are unsatisfied, generate a validation error and stop.
-
Let srcTextureDesc be source.
texture
.[[descriptor]]
. -
validating GPUImageCopyTexture(source, copySize) returns
true
. -
srcTextureDesc.
sampleCount
is 1. -
If srcTextureDesc.
format
is a depth-stencil format:-
source.
aspect
must refer to a single copyable aspect of srcTextureDesc.format
. See depth-formats.
-
-
validating GPUImageCopyBuffer(destination) returns
true
. -
Valid Texture Copy Range applies to source and copySize.
-
If srcTextureDesc.
format
is not a depth/stencil format:-
destination.
offset
is a multiple of the texel block size of srcTextureDesc.format
.
-
-
If srcTextureDesc.
format
is a depth/stencil format:-
destination.
offset
is a multiple of 4.
-
-
validating linear texture data(destination, destination.
buffer
.[[size]]
, srcTextureDesc.format
, copySize) succeeds.
-
copyTextureToTexture(source, destination, copySize)
-
Encode a command into the
GPUCommandEncoder
that copies data from a sub-region of one or multiple contiguous texture subresources to another sub-region of one or multiple contiguous texture subresources.Called on:GPUCommandEncoder
this.Arguments:
Arguments for the GPUCommandEncoder.copyTextureToTexture(source, destination, copySize) method. Parameter Type Nullable Optional Description source
GPUImageCopyTexture ✘ ✘ Combined with copySize, defines the region of the source texture subresources. destination
GPUImageCopyTexture ✘ ✘ Combined with copySize, defines the region of the destination texture subresources. copySize
GPUExtent3D ✘ ✘ Returns:
undefined
-
If any of the following conditions are unsatisfied, generate a validation error and stop.
-
Let srcTextureDesc be source.
texture
.[[descriptor]]
. -
Let dstTextureDesc be destination.
texture
.[[descriptor]]
. -
validating GPUImageCopyTexture(source, copySize) returns
true
. -
validating GPUImageCopyTexture(destination, copySize) returns
true
. -
srcTextureDesc.
sampleCount
is equal to dstTextureDesc.sampleCount
. -
If srcTextureDesc.
format
is a depth-stencil format: -
Valid Texture Copy Range applies to source and copySize.
-
Valid Texture Copy Range applies to destination and copySize.
-
The set of subresources for texture copy(source, copySize) and the set of subresources for texture copy(destination, copySize) are disjoint.
-
-
-
The set of subresources for texture copy(imageCopyTexture, copySize) is defined as follows:
If imageCopyTexture.
texture
.[[descriptor]]
.dimension
is"2d"
:-
For each arrayLayer of the copySize.depthOrArrayLayers array layers starting at imageCopyTexture.
origin
.z:-
The subresource of imageCopyTexture.
texture
at mipmap level imageCopyTexture.mipLevel
and array layer arrayLayer.
-
-
-
Otherwise:
-
The subresource of imageCopyTexture.
texture
at mipmap level imageCopyTexture.mipLevel
.
-
12.4. Debug Markers
Both command encoders and programmable pass encoders provide methods to apply debug labels to groups of commands or insert a single label into the command sequence. Debug groups can be nested to create a hierarchy of labeled commands. These labels may be passed to the native API backends for tooling, may be used by the user agent’s internal tooling, or may be a no-op when such tooling is not available or applicable.
Debug groups in a GPUCommandEncoder
or GPUProgrammablePassEncoder
must be well nested.
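Note: A non-normative sketch of well-nested debug commands on a command encoder:

encoder.pushDebugGroup("shadow pass");
encoder.insertDebugMarker("draw occluders");
encoder.popDebugGroup();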
pushDebugGroup(groupLabel)
-
Marks the beginning of a labeled group of commands for the
GPUCommandEncoder
.Called on:GPUCommandEncoder
this.Arguments:
Arguments for the GPUCommandEncoder.pushDebugGroup(groupLabel) method. Parameter Type Nullable Optional Description groupLabel
USVString ✘ ✘ The label for the command group. Returns:
undefined
Issue the following steps on the Device timeline of this:
-
If any of the following conditions are unsatisfied, make this invalid and stop.
-
Push groupLabel onto this.
[[debug_group_stack]]
.
-
popDebugGroup()
-
Marks the end of a labeled group of commands for the
GPUCommandEncoder
.Called on:GPUCommandEncoder
this.Returns:
undefined
Issue the following steps on the Device timeline of this:
-
If any of the following conditions are unsatisfied, make this invalid and stop.
-
this.
[[debug_group_stack]]
's size is greater than 0.
-
Pop an entry off this.
[[debug_group_stack]]
.
-
insertDebugMarker(markerLabel)
-
Marks a point in a stream of commands with a label string.
Called on:GPUCommandEncoder
this.Arguments:
Arguments for the GPUCommandEncoder.insertDebugMarker(markerLabel) method. Parameter Type Nullable Optional Description markerLabel
USVString ✘ ✘ The label to insert. Returns:
undefined
Issue the following steps on the Device timeline of this:
12.5. Queries
writeTimestamp(querySet, queryIndex)
-
Writes a timestamp value into querySet when all previous commands have completed executing.
Called on:GPUCommandEncoder
this.Arguments:
Arguments for the GPUCommandEncoder.writeTimestamp(querySet, queryIndex) method. Parameter Type Nullable Optional Description querySet
GPUQuerySet ✘ ✘ The query set that will store the timestamp values. queryIndex
GPUSize32 ✘ ✘ The index of the query in the query set. Returns:
undefined
-
If this.
[[device]]
.[[features]]
does not contain"timestamp-query"
, throw aTypeError
. -
If any of the following conditions are unsatisfied, generate a validation error and stop.
-
querySet is valid to use with this.
-
querySet.
[[descriptor]]
.type
is"timestamp"
. -
queryIndex < querySet.
[[descriptor]]
.count
.
Describe
writeTimestamp()
algorithm steps. -
resolveQuerySet(querySet, firstQuery, queryCount, destination, destinationOffset)
-
Called on:
GPUCommandEncoder
this.Arguments:
Arguments for the GPUCommandEncoder.resolveQuerySet(querySet, firstQuery, queryCount, destination, destinationOffset) method. Parameter Type Nullable Optional Description querySet
GPUQuerySet ✘ ✘ firstQuery
GPUSize32 ✘ ✘ queryCount
GPUSize32 ✘ ✘ destination
GPUBuffer ✘ ✘ destinationOffset
GPUSize64 ✘ ✘ Returns:
undefined
If any of the following conditions are unsatisfied, generate a
GPUValidationError
and stop.-
querySet is valid to use with this.
-
destination is valid to use with this.
-
destination.
[[usage]]
containsQUERY_RESOLVE
. -
firstQuery is less than the number of queries in querySet.
-
(firstQuery + queryCount) is less than or equal to the number of queries in querySet.
-
destinationOffset is a multiple of 8.
-
destinationOffset + 8 × queryCount ≤ destination.
[[size]]
.
Describe
resolveQuerySet()
algorithm steps.
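Note: A non-normative sketch of recording timestamps and resolving them, assuming the device was created with the "timestamp-query" feature, querySet is a "timestamp" GPUQuerySet with at least 2 entries, and resolveBuffer was created with QUERY_RESOLVE usage:

encoder.writeTimestamp(querySet, 0);
// ... encode the work being measured ...
encoder.writeTimestamp(querySet, 1);
// destinationOffset must be a multiple of 8.
encoder.resolveQuerySet(querySet, 0, 2, resolveBuffer, 0);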
12.6. Finalization
A GPUCommandBuffer
containing the commands recorded by the GPUCommandEncoder
can be created
by calling finish()
. Once finish()
has been called the
command encoder can no longer be used.
finish(descriptor)
-
Completes recording of the commands sequence and returns a corresponding
GPUCommandBuffer
.Called on:GPUCommandEncoder
this.Arguments:
Arguments for the GPUCommandEncoder.finish(descriptor) method. Parameter Type Nullable Optional Description descriptor
GPUCommandBufferDescriptor ✘ ✔ Returns:
GPUCommandBuffer
-
Let commandBuffer be a new
GPUCommandBuffer
. -
Issue the following steps on the Device timeline of this:
-
If any of the following conditions are unsatisfied, generate a validation error and stop.
-
this is valid.
-
this.
[[debug_group_stack]]
's size is 0. -
Every usage scope contained in this satisfies the usage scope validation.
-
-
Let commandBuffer.
[[command_list]]
be a clone of this.[[command_list]]
.
-
-
Return commandBuffer.
-
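Note: A non-normative sketch of finishing an encoder and submitting the result, assuming the encoder was created with measureExecutionTime set to true:

const commandBuffer = encoder.finish();
device.queue.submit([commandBuffer]);
// With measureExecutionTime set, executionTime resolves after the command buffer executes.
commandBuffer.executionTime.then((seconds) => console.log(`GPU time: ${seconds}s`));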
13. Programmable Passes
interface mixin GPUProgrammablePassEncoder {
    undefined setBindGroup(GPUIndex32 index, GPUBindGroup bindGroup,
                           optional sequence<GPUBufferDynamicOffset> dynamicOffsets = []);

    undefined setBindGroup(GPUIndex32 index, GPUBindGroup bindGroup,
                           Uint32Array dynamicOffsetsData,
                           GPUSize64 dynamicOffsetsDataStart,
                           GPUSize32 dynamicOffsetsDataLength);

    undefined pushDebugGroup(USVString groupLabel);
    undefined popDebugGroup();
    undefined insertDebugMarker(USVString markerLabel);
};
GPUProgrammablePassEncoder
has the following internal slots:
[[command_encoder]]
of typeGPUCommandEncoder
.-
The
GPUCommandEncoder
that created this programmable pass. [[debug_group_stack]]
of type stack<USVString
>.-
A stack of active debug group labels.
[[bind_groups]]
, of type ordered map<GPUIndex32
,GPUBindGroup
>-
The current
GPUBindGroup
for each index, initially empty.
13.1. Bind Groups
setBindGroup(index, bindGroup, dynamicOffsets)
-
Sets the current
GPUBindGroup
for the given index.Called on:GPUProgrammablePassEncoder
this.Arguments:
Arguments for the GPUProgrammablePassEncoder.setBindGroup(index, bindGroup, dynamicOffsets) method. Parameter Type Nullable Optional Description index
GPUIndex32 ✘ ✘ The index to set the bind group at. bindGroup
GPUBindGroup ✘ ✘ Bind group to use for subsequent render or compute commands. Resolve bikeshed conflict when using
argumentdef
with overloaded functions that prevents us from defining dynamicOffsets.Returns:
undefined
Issue the following steps on the Device timeline of this.
[[device]]
:-
If any of the following conditions are unsatisfied, make this invalid and stop.
-
bindGroup is valid to use with this.
-
index < this.
[[device]]
.[[limits]]
.maxBindGroups
. -
dynamicOffsets.length is bindGroup.
[[layout]]
.[[dynamicOffsetCount]]
. -
Iterate over each dynamic binding offset in bindGroup and run the following steps for each bufferBinding, minBindingSize, and dynamicOffsetIndex:
-
-
Set this.
[[bind_groups]]
[index] to be bindGroup.
-
setBindGroup(index, bindGroup, dynamicOffsetsData, dynamicOffsetsDataStart, dynamicOffsetsDataLength)
-
Sets the current
GPUBindGroup
for the given index, specifying dynamic offsets as a subset of aUint32Array
.Called on:GPUProgrammablePassEncoder
this.Arguments:
Arguments for the GPUProgrammablePassEncoder.setBindGroup(index, bindGroup, dynamicOffsetsData, dynamicOffsetsDataStart, dynamicOffsetsDataLength) method. Parameter Type Nullable Optional Description index
GPUIndex32 ✘ ✘ The index to set the bind group at. bindGroup
GPUBindGroup ✘ ✘ Bind group to use for subsequent render or compute commands. dynamicOffsetsData
Uint32Array ✘ ✘ Array containing buffer offsets in bytes for each entry in bindGroup marked as buffer
.hasDynamicOffset
.dynamicOffsetsDataStart
GPUSize64 ✘ ✘ Offset in elements into dynamicOffsetsData where the buffer offset data begins. dynamicOffsetsDataLength
GPUSize32 ✘ ✘ Number of buffer offsets to read from dynamicOffsetsData. Returns:
undefined
Issue the following steps on the Device timeline of this.
[[device]]
:-
If any of the following conditions are unsatisfied, make this invalid and stop.
-
bindGroup is valid to use with this.
-
index < this.
[[device]]
.[[limits]]
.maxBindGroups
. -
dynamicOffsetsDataLength is bindGroup.
[[layout]]
.[[dynamicOffsetCount]]
. -
dynamicOffsetsDataStart + dynamicOffsetsDataLength ≤ dynamicOffsetsData.length.
-
Iterate over each dynamic binding offset in bindGroup and run the following steps for each bufferBinding, minBindingSize, and dynamicOffsetIndex:
-
-
Set this.
[[bind_groups]]
[index] to be bindGroup.
-
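Note: A non-normative sketch of both setBindGroup() overloads, assuming bindGroup has a single dynamic-offset buffer binding:

// Dynamic offsets passed as a sequence:
passEncoder.setBindGroup(0, bindGroup, [256]);

// Equivalent call reading one offset out of a Uint32Array:
const dynamicOffsets = new Uint32Array([256]);
passEncoder.setBindGroup(0, bindGroup, dynamicOffsets, 0, 1);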
GPUBindGroup
bindGroup with a given list of steps to be executed for each dynamic offset:
-
Let dynamicOffsetIndex be
0
. -
Let layout be bindGroup.
[[layout]]
. -
For each
GPUBindGroupEntry
entry in bindGroup.[[entries]]
:-
Let bindingDescriptor be the
GPUBindGroupLayoutEntry
at layout.[[entryMap]]
[entry.binding
]: -
If bindingDescriptor.
buffer
is notundefined
and bindingDescriptor.buffer
.hasDynamicOffset
istrue
:-
Let bufferBinding be entry.
resource
. -
Let minBindingSize be bindingDescriptor.
buffer
.minBindingSize
. -
Call steps with bufferBinding, minBindingSize, and dynamicOffsetIndex.
-
Let dynamicOffsetIndex be dynamicOffsetIndex +
1
-
-
Arguments:
GPUProgrammablePassEncoder
encoder-
Encoder whose bind groups are being validated.
GPUPipelineBase
pipeline-
Pipeline to validate the encoder’s bind groups are compatible with.
If any of the following conditions are unsatisfied, return false
:
-
pipeline must not be
null
. -
For each pair of (
GPUIndex32
index,GPUBindGroupLayout
bindGroupLayout) in pipeline.[[layout]]
.[[bindGroupLayouts]]
.-
Let bindGroup be encoder.
[[bind_groups]]
[index]. -
bindGroup must not be
null
. -
bindGroup.
[[layout]]
must be group-equivalent with bindGroupLayout.
-
Otherwise return true
.
13.2. Debug Markers
Debug marker methods for programmable pass encoders provide the same functionality as command encoder debug markers while recording a programmable pass.
pushDebugGroup(groupLabel)
-
Marks the beginning of a labeled group of commands for the
GPUProgrammablePassEncoder
.Called on:GPUProgrammablePassEncoder
this.Arguments:
Arguments for the GPUProgrammablePassEncoder.pushDebugGroup(groupLabel) method. Parameter Type Nullable Optional Description groupLabel
USVString ✘ ✘ The label for the command group. Returns:
undefined
Issue the following steps on the Device timeline of this:
-
Push groupLabel onto this.
[[debug_group_stack]]
.
-
popDebugGroup()
-
Marks the end of a labeled group of commands for the
GPUProgrammablePassEncoder
.Called on:GPUProgrammablePassEncoder
this.Returns:
undefined
Issue the following steps on the Device timeline of this:
-
If any of the following conditions are unsatisfied, generate a validation error and stop.
-
this.
[[debug_group_stack]]
's size is greater than 0.
-
-
Pop an entry off of this.
[[debug_group_stack]]
.
-
insertDebugMarker(markerLabel)
-
Inserts a single debug marker label into the
GPUProgrammablePassEncoder
's commands sequence.Called on:GPUProgrammablePassEncoder
this.Arguments:
Arguments for the GPUProgrammablePassEncoder.insertDebugMarker(markerLabel) method. Parameter Type Nullable Optional Description markerLabel
USVString ✘ ✘ The label to insert. Returns:
undefined
14. Compute Passes
14.1. GPUComputePassEncoder
[Exposed=Window]
interface GPUComputePassEncoder {
    undefined setPipeline(GPUComputePipeline pipeline);
    undefined dispatch(GPUSize32 x, optional GPUSize32 y = 1, optional GPUSize32 z = 1);
    undefined dispatchIndirect(GPUBuffer indirectBuffer, GPUSize64 indirectOffset);

    undefined beginPipelineStatisticsQuery(GPUQuerySet querySet, GPUSize32 queryIndex);
    undefined endPipelineStatisticsQuery();

    undefined writeTimestamp(GPUQuerySet querySet, GPUSize32 queryIndex);

    undefined endPass();
};
GPUComputePassEncoder includes GPUObjectBase;
GPUComputePassEncoder includes GPUProgrammablePassEncoder;
GPUComputePassEncoder
has the following internal slots:
[[pipeline]]
, of typeGPUComputePipeline
-
The current
GPUComputePipeline
, initiallynull
.
14.1.1. Creation
dictionary GPUComputePassDescriptor : GPUObjectDescriptorBase {
};
14.1.2. Dispatch
setPipeline(pipeline)
-
Sets the current
GPUComputePipeline
.Called on:GPUComputePassEncoder
this.Arguments:
Arguments for the GPUComputePassEncoder.setPipeline(pipeline) method. Parameter Type Nullable Optional Description pipeline
GPUComputePipeline ✘ ✘ The compute pipeline to use for subsequent dispatch commands. Returns:
undefined
Issue the following steps on the Device timeline of this.
[[device]]
:-
If any of the following conditions are unsatisfied, make this invalid and stop.
-
pipeline is valid to use with this.
-
-
Set this.
[[pipeline]]
to be pipeline.
-
dispatch(x, y, z)
-
Dispatch work to be performed with the current
GPUComputePipeline
. See § 21.2 Computing for the detailed specification.Called on:GPUComputePassEncoder
this.Arguments:
Arguments for the GPUComputePassEncoder.dispatch(x, y, z) method. Parameter Type Nullable Optional Description x
GPUSize32 ✘ ✘ X dimension of the grid of workgroups to dispatch. y
GPUSize32 ✘ ✔ Y dimension of the grid of workgroups to dispatch. z
GPUSize32 ✘ ✔ Z dimension of the grid of workgroups to dispatch. Returns:
undefined
Issue the following steps on the Device timeline of this.
[[device]]
:-
If any of the following conditions are unsatisfied, make this invalid and stop.
-
Validate encoder bind groups(this, this.
[[pipeline]]
) istrue
.
-
-
Append a GPU command to this.
[[command_encoder]]
.[[command_list]]
that captures theGPUComputePassEncoder
state of this as passState and, when executed, issues the following steps on the appropriate Queue timeline:-
Dispatch a grid of workgroups with dimensions [x, y, z] with passState.
[[pipeline]]
using passState.[[bind_groups]]
.
-
-
dispatchIndirect(indirectBuffer, indirectOffset)
-
Dispatch work to be performed with the current
GPUComputePipeline
using parameters read from aGPUBuffer
. See § 21.2 Computing for the detailed specification.The indirect dispatch parameters encoded in the buffer must be a tightly packed block of three 32-bit unsigned integer values (12 bytes total), given in the same order as the arguments for
dispatch()
. For example:
let dispatchIndirectParameters = new Uint32Array(3);
dispatchIndirectParameters[0] = x;
dispatchIndirectParameters[1] = y;
dispatchIndirectParameters[2] = z;
Called on: GPUComputePassEncoder
this.Arguments:
Arguments for the GPUComputePassEncoder.dispatchIndirect(indirectBuffer, indirectOffset) method. Parameter Type Nullable Optional Description indirectBuffer
GPUBuffer ✘ ✘ Buffer containing the indirect dispatch parameters. indirectOffset
GPUSize64 ✘ ✘ Offset in bytes into indirectBuffer where the dispatch data begins. Returns:
undefined
Issue the following steps on the Device timeline of this.
[[device]]
:-
If any of the following conditions are unsatisfied, make this invalid and stop.
-
Validate encoder bind groups(this, this.
[[pipeline]]
) istrue
. -
indirectBuffer is valid to use with this.
-
indirectOffset + sizeof(indirect dispatch parameters) ≤ indirectBuffer.
[[size]]
. -
indirectOffset is a multiple of 4.
-
-
Add indirectBuffer to the usage scope as
INDIRECT
.
-
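For example (non-normative), a sketch of recording a complete compute pass; device, queue, computePipeline, and bindGroup are assumed to have been created elsewhere:
const commandEncoder = device.createCommandEncoder();
const passEncoder = commandEncoder.beginComputePass();
passEncoder.setPipeline(computePipeline);
passEncoder.setBindGroup(0, bindGroup);
passEncoder.dispatch(64);
passEncoder.endPass();
queue.submit([commandEncoder.finish()]);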
14.1.3. Queries
beginPipelineStatisticsQuery(querySet, queryIndex)
-
Called on:
GPUComputePassEncoder
this.Arguments:
Arguments for the GPUComputePassEncoder.beginPipelineStatisticsQuery(querySet, queryIndex) method. Parameter Type Nullable Optional Description querySet
GPUQuerySet ✘ ✘ queryIndex
GPUSize32 ✘ ✘ Returns:
undefined
-
If this.
[[device]]
.[[features]]
does not contain"pipeline-statistics-query"
, throw aTypeError
.
Describe
beginPipelineStatisticsQuery()
algorithm steps. -
endPipelineStatisticsQuery()
-
Called on:
GPUComputePassEncoder
this.Returns:
undefined
-
If this.
[[device]]
.[[features]]
does not contain"pipeline-statistics-query"
, throw aTypeError
.
Describe
endPipelineStatisticsQuery()
algorithm steps. -
writeTimestamp(querySet, queryIndex)
-
Writes a timestamp value into querySet when all previous commands have completed executing.
Called on:GPUComputePassEncoder
this.Arguments:
Arguments for the GPUComputePassEncoder.writeTimestamp(querySet, queryIndex) method. Parameter Type Nullable Optional Description querySet
GPUQuerySet ✘ ✘ The query set that will store the timestamp values. queryIndex
GPUSize32 ✘ ✘ The index of the query in the query set. Returns:
undefined
-
If this.
[[device]]
.[[features]]
does not contain"timestamp-query"
, throw aTypeError
. -
If any of the following conditions are unsatisfied, generate a validation error and stop.
-
querySet is valid to use with this.
-
querySet.
[[descriptor]]
.type
is"timestamp"
. -
queryIndex < querySet.
[[descriptor]]
.count
.
-
Describe
writeTimestamp()
algorithm steps. -
14.1.4. Finalization
The compute pass encoder can be ended by calling endPass()
once the user
has finished recording commands for the pass. Once endPass()
has been
called the compute pass encoder can no longer be used.
endPass()
-
Completes recording of the compute pass commands sequence.
Called on:GPUComputePassEncoder
this.Returns:
undefined
Issue the following steps on the Device timeline of this:
-
If any of the following conditions are unsatisfied, generate a validation error and stop.
-
this.
[[debug_group_stack]]
's size is 0.
-
-
15. Render Passes
15.1. GPURenderPassEncoder
interface mixin GPURenderEncoderBase {
    undefined setPipeline(GPURenderPipeline pipeline);

    undefined setIndexBuffer(GPUBuffer buffer, GPUIndexFormat indexFormat,
                             optional GPUSize64 offset = 0, optional GPUSize64 size = 0);
    undefined setVertexBuffer(GPUIndex32 slot, GPUBuffer buffer,
                              optional GPUSize64 offset = 0, optional GPUSize64 size = 0);

    undefined draw(GPUSize32 vertexCount, optional GPUSize32 instanceCount = 1,
                   optional GPUSize32 firstVertex = 0, optional GPUSize32 firstInstance = 0);
    undefined drawIndexed(GPUSize32 indexCount, optional GPUSize32 instanceCount = 1,
                          optional GPUSize32 firstIndex = 0,
                          optional GPUSignedOffset32 baseVertex = 0,
                          optional GPUSize32 firstInstance = 0);

    undefined drawIndirect(GPUBuffer indirectBuffer, GPUSize64 indirectOffset);
    undefined drawIndexedIndirect(GPUBuffer indirectBuffer, GPUSize64 indirectOffset);
};

[Exposed=Window]
interface GPURenderPassEncoder {
    undefined setViewport(float x, float y, float width, float height,
                          float minDepth, float maxDepth);
    undefined setScissorRect(GPUIntegerCoordinate x, GPUIntegerCoordinate y,
                             GPUIntegerCoordinate width, GPUIntegerCoordinate height);

    undefined setBlendConstant(GPUColor color);
    undefined setStencilReference(GPUStencilValue reference);

    undefined beginOcclusionQuery(GPUSize32 queryIndex);
    undefined endOcclusionQuery();

    undefined beginPipelineStatisticsQuery(GPUQuerySet querySet, GPUSize32 queryIndex);
    undefined endPipelineStatisticsQuery();

    undefined writeTimestamp(GPUQuerySet querySet, GPUSize32 queryIndex);

    undefined executeBundles(sequence<GPURenderBundle> bundles);
    undefined endPass();
};
GPURenderPassEncoder includes GPUObjectBase;
GPURenderPassEncoder includes GPUProgrammablePassEncoder;
GPURenderPassEncoder includes GPURenderEncoderBase;
-
In indirect draw calls, the base instance field (inside the indirect buffer data) must be set to zero.
GPURenderEncoderBase
has the following internal slots:
[[pipeline]]
, of typeGPURenderPipeline
-
The current
GPURenderPipeline
, initiallynull
. [[index_buffer]]
, of typeGPUBuffer
-
The current buffer to read index data from, initially
null
. [[index_format]]
, of typeGPUIndexFormat
-
The format of the index data in
[[index_buffer]]
. [[vertex_buffers]]
, of type ordered map<slot,GPUBuffer
>-
The current
GPUBuffer
s to read vertex data from for each slot, initially empty.
GPURenderPassEncoder
has the following internal slots:
[[attachment_size]]
-
Set to the following extents:
-
width, height
= the dimensions of the pass’s render attachments
-
[[occlusion_query_set]]
, of typeGPUQuerySet
.-
The
GPUQuerySet
to store occlusion query results for the pass, which is initialized withGPURenderPassDescriptor
.occlusionQuerySet
at pass creation time. [[occlusion_query_active]]
, of typeboolean
.-
Whether the pass’s
[[occlusion_query_set]]
is being written. [[viewport]]
-
Current viewport rectangle and depth range.
When a GPURenderPassEncoder
is created, it has the following default state:
-
Viewport:
-
x, y
=0.0, 0.0
-
width, height
= the dimensions of the pass’s render targets -
minDepth, maxDepth
=0.0, 1.0
-
-
Scissor rectangle:
-
x, y
=0, 0
-
width, height
= the dimensions of the pass’s render targets
-
15.1.1. Creation
dictionary GPURenderPassDescriptor : GPUObjectDescriptorBase {
    required sequence<GPURenderPassColorAttachment> colorAttachments;
    GPURenderPassDepthStencilAttachment depthStencilAttachment;
    GPUQuerySet occlusionQuerySet;
};
colorAttachments
, of type sequence<GPURenderPassColorAttachment>-
The set of
GPURenderPassColorAttachment
values in this sequence defines which color attachments will be output to when executing this render pass. depthStencilAttachment
, of type GPURenderPassDepthStencilAttachment-
The
GPURenderPassDepthStencilAttachment
value that defines the depth/stencil attachment that will be output to and tested against when executing this render pass. occlusionQuerySet
, of type GPUQuerySet-
The
GPUQuerySet
value defines where the occlusion query results will be stored for this pass.
Given a GPURenderPassDescriptor
this the following validation rules apply:
-
this.
colorAttachments
.length must be less than or equal to the maximum color attachments. -
this.
colorAttachments
.length must be greater than 0
or this.depthStencilAttachment
must not benull
. -
For each colorAttachment in this.
colorAttachments
:-
colorAttachment must meet the GPURenderPassColorAttachment Valid Usage rules.
-
-
If this.
depthStencilAttachment
is notnull
:-
this.
depthStencilAttachment
must meet the GPURenderPassDepthStencilAttachment Valid Usage rules.
-
-
Each
view
in this.colorAttachments
and this.depthStencilAttachment
.view
, if present, must all have the same [[descriptor]]
.sampleCount
. -
For each
view
in this.colorAttachments
and this.depthStencilAttachment
.view
, if present, the[[renderExtent]]
must match. -
If this.
occlusionQuerySet
is notnull
:-
this.
occlusionQuerySet
.[[descriptor]]
.type
must beocclusion
.
-
Define maximum color attachments
support for no attachments <https://github.com/gpuweb/gpuweb/issues/503>
For a given GPURenderPassDescriptor
value descriptor, the syntax:
-
descriptor.renderExtent refers to
[[renderExtent]]
of any[[descriptor]]
in either descriptor.depthStencilAttachment
.view
, or any of theview
in descriptor.colorAttachments
.
make it a define once we reference to this from other places
Note: the Valid Usage guarantees that all of the render extents of the attachments are the same, so we can take any of them, assuming the descriptor is valid.
15.1.1.1. Color Attachments
dictionary GPURenderPassColorAttachment {
    required GPUTextureView view;
    GPUTextureView resolveTarget;

    required (GPULoadOp or GPUColor) loadValue;
    required GPUStoreOp storeOp;
};
view
, of type GPUTextureView-
A
GPUTextureView
describing the texture subresource that will be output to for this color attachment. resolveTarget
, of type GPUTextureView-
A
GPUTextureView
describing the texture subresource that will receive the resolved output for this color attachment ifview
is multisampled. loadValue
, of type(GPULoadOp or GPUColor)
-
If a
GPULoadOp
, indicates the load operation to perform onview
prior to executing the render pass. If aGPUColor
, indicates the value to clearview
to prior to executing the render pass.Note: It is recommended to prefer a clear-value; see
"load"
. storeOp
, of type GPUStoreOp-
The store operation to perform on
view
after executing the render pass.
Given a GPURenderPassColorAttachment
this the following validation rules
apply:
-
Let renderTextureDesc be this.
view
.[[texture]]
.[[descriptor]]
. -
Let resolveTextureDesc be this.
resolveTarget
.[[texture]]
.[[descriptor]]
. -
this.
view
must have a color renderable format. -
renderTextureDesc.
usage
must containRENDER_ATTACHMENT
. -
this.
view
must be a view of a single subresource. -
If this.
resolveTarget
is notnull
:-
this.
view
must be multisampled. -
this.
resolveTarget
must not be multisampled. -
resolveTextureDesc.
usage
must containRENDER_ATTACHMENT
. -
this.
resolveTarget
must be a view of a single subresource. -
The dimensions of the subresources seen by this.
resolveTarget
and this.view
must match. -
resolveTextureDesc.
format
must match renderTextureDesc.format
.
-
15.1.1.2. Depth/Stencil Attachments
dictionary GPURenderPassDepthStencilAttachment {
    required GPUTextureView view;

    required (GPULoadOp or float) depthLoadValue;
    required GPUStoreOp depthStoreOp;
    boolean depthReadOnly = false;

    required (GPULoadOp or GPUStencilValue) stencilLoadValue;
    required GPUStoreOp stencilStoreOp;
    boolean stencilReadOnly = false;
};
view
, of type GPUTextureView-
A
GPUTextureView
describing the texture subresource that will be output to and read from for this depth/stencil attachment. depthLoadValue
, of type(GPULoadOp or float)
-
If a
GPULoadOp
, indicates the load operation to perform onview
's depth component prior to executing the render pass. If afloat
, indicates the value to clearview
's depth component to prior to executing the render pass.Note: It is recommended to prefer a clear-value; see
"load"
. depthStoreOp
, of type GPUStoreOp-
The store operation to perform on
view
's depth component after executing the render pass. depthReadOnly
, of type boolean, defaulting tofalse
-
Indicates that the depth component of
view
is read only. stencilLoadValue
, of type(GPULoadOp or GPUStencilValue)
-
If a
GPULoadOp
, indicates the load operation to perform onview
's stencil component prior to executing the render pass. If aGPUStencilValue
, indicates the value to clearview
's stencil component to prior to executing the render pass. stencilStoreOp
, of type GPUStoreOp-
The store operation to perform on
view
's stencil component after executing the render pass. stencilReadOnly
, of type boolean, defaulting tofalse
-
Indicates that the stencil component of
view
is read only.
Given a GPURenderPassDepthStencilAttachment
this the following validation
rules apply:
-
this.
view
must have a depth or stencil renderable format. -
this.
view
must be a view of a single texture subresource. -
this.
view
.[[descriptor]]
.usage
must containRENDER_ATTACHMENT
. -
If this.depthReadOnly is true, this.depthLoadValue must be "load" and this.depthStoreOp must be "store". -
If this.stencilReadOnly is true, this.stencilLoadValue must be "load" and this.stencilStoreOp must be "store".
15.1.1.3. Load & Store Operations
enum GPULoadOp {
    "load"
};
"load"
-
Loads the existing value for this attachment into the render pass.
Note: On some GPU hardware (primarily mobile), providing a clear-value is significantly cheaper because it avoids loading data from main memory into tile-local memory. On other GPU hardware, there isn’t a significant difference. As a result, it is recommended to use a clear-value, rather than
"load"
, in cases where the initial value doesn’t matter (e.g. the render target will be cleared using a skybox).
enum GPUStoreOp {
    "store",
    "clear"
};
15.1.2. Drawing
setPipeline(pipeline)
-
Sets the current
GPURenderPipeline
.Called on:GPURenderEncoderBase
this.Arguments:
Arguments for the GPURenderEncoderBase.setPipeline(pipeline) method. Parameter Type Nullable Optional Description pipeline
GPURenderPipeline ✘ ✘ The render pipeline to use for subsequent drawing commands. Returns:
undefined
Issue the following steps on the Device timeline of this.
[[device]]
:-
If any of the following conditions are unsatisfied, make this invalid and stop.
-
pipeline is valid to use with this.
Validate that pipeline is compatible with the render pass descriptor.
-
-
Set this.
[[pipeline]]
to be pipeline.
-
setIndexBuffer(buffer, indexFormat, offset, size)
-
Sets the current index buffer.
Called on:GPURenderEncoderBase
this.Arguments:
Arguments for the GPURenderEncoderBase.setIndexBuffer(buffer, indexFormat, offset, size) method. Parameter Type Nullable Optional Description buffer
GPUBuffer ✘ ✘ Buffer containing index data to use for subsequent drawing commands. indexFormat
GPUIndexFormat ✘ ✘ Format of the index data contained in buffer. offset
GPUSize64 ✘ ✔ Offset in bytes into buffer where the index data begins. size
GPUSize64 ✘ ✔ Size in bytes of the index data in buffer. If 0
, buffer.[[size]]
- offset is used.Returns:
undefined
Issue the following steps on the Device timeline of this.
[[device]]
:-
If any of the following conditions are unsatisfied, make this invalid and stop.
-
buffer is valid to use with this.
-
offset + size ≤ buffer.
[[size]]
.
-
-
Add buffer to the usage scope as input.
-
Set this.
[[index_buffer]]
to be buffer. -
Set this.
[[index_format]]
to be indexFormat.
-
setVertexBuffer(slot, buffer, offset, size)
-
Sets the current vertex buffer for the given slot.
Called on:GPURenderEncoderBase
this.Arguments:
Arguments for the GPURenderEncoderBase.setVertexBuffer(slot, buffer, offset, size) method. Parameter Type Nullable Optional Description slot
GPUIndex32 ✘ ✘ The vertex buffer slot to set the vertex buffer for. buffer
GPUBuffer ✘ ✘ Buffer containing vertex data to use for subsequent drawing commands. offset
GPUSize64 ✘ ✔ Offset in bytes into buffer where the vertex data begins. size
GPUSize64 ✘ ✔ Size in bytes of the vertex data in buffer. If 0
, buffer.[[size]]
- offset is used.Returns:
undefined
Issue the following steps on the Device timeline of this.
[[device]]
:-
If any of the following conditions are unsatisfied, make this invalid and stop.
-
buffer is valid to use with this.
-
slot < this.
[[device]]
.[[limits]]
.maxVertexBuffers
. -
offset + size ≤ buffer.
[[size]]
.
-
-
Add buffer to the usage scope as input.
-
Set this.
[[vertex_buffers]]
[slot] to be buffer.
-
draw(vertexCount, instanceCount, firstVertex, firstInstance)
-
Draws primitives. See § 21.3 Rendering for the detailed specification.
Called on:GPURenderEncoderBase
this.Arguments:
Arguments for the GPURenderEncoderBase.draw(vertexCount, instanceCount, firstVertex, firstInstance) method. Parameter Type Nullable Optional Description vertexCount
GPUSize32 ✘ ✘ The number of vertices to draw. instanceCount
GPUSize32 ✘ ✔ The number of instances to draw. firstVertex
GPUSize32 ✘ ✔ Offset into the vertex buffers, in vertices, to begin drawing from. firstInstance
GPUSize32 ✘ ✔ First instance to draw. Returns:
undefined
Issue the following steps on the Device timeline of this.
[[device]]
:If any of the following conditions are unsatisfied, make this invalid and stop.-
It is valid to draw with this.
-
drawIndexed(indexCount, instanceCount, firstIndex, baseVertex, firstInstance)
-
Draws indexed primitives. See § 21.3 Rendering for the detailed specification.
Called on:GPURenderEncoderBase
this.Arguments:
Arguments for the GPURenderEncoderBase.drawIndexed(indexCount, instanceCount, firstIndex, baseVertex, firstInstance) method. Parameter Type Nullable Optional Description indexCount
GPUSize32 ✘ ✘ The number of indices to draw. instanceCount
GPUSize32 ✘ ✔ The number of instances to draw. firstIndex
GPUSize32 ✘ ✔ Offset into the index buffer, in indices, to begin drawing from. baseVertex
GPUSignedOffset32 ✘ ✔ Added to each index value before indexing into the vertex buffers. firstInstance
GPUSize32 ✘ ✔ First instance to draw. Returns:
undefined
Issue the following steps on the Device timeline of this.
[[device]]
:If any of the following conditions are unsatisfied, make this invalid and stop.-
It is valid to draw indexed with this.
-
drawIndirect(indirectBuffer, indirectOffset)
-
Draws primitives using parameters read from a
GPUBuffer
. See § 21.3 Rendering for the detailed specification.The indirect draw parameters encoded in the buffer must be a tightly packed block of four 32-bit unsigned integer values (16 bytes total), given in the same order as the arguments for
draw()
. For example:
let drawIndirectParameters = new Uint32Array(4);
drawIndirectParameters[0] = vertexCount;
drawIndirectParameters[1] = instanceCount;
drawIndirectParameters[2] = firstVertex;
drawIndirectParameters[3] = firstInstance;
Called on: GPURenderEncoderBase
this.Arguments:
Arguments for the GPURenderEncoderBase.drawIndirect(indirectBuffer, indirectOffset) method. Parameter Type Nullable Optional Description indirectBuffer
GPUBuffer ✘ ✘ Buffer containing the indirect draw parameters. indirectOffset
GPUSize64 ✘ ✘ Offset in bytes into indirectBuffer where the drawing data begins. Returns:
undefined
Issue the following steps on the Device timeline of this.
[[device]]
:-
If any of the following conditions are unsatisfied, make this invalid and stop.
-
It is valid to draw with this.
-
indirectBuffer is valid to use with this.
-
indirectOffset + sizeof(indirect draw parameters) ≤ indirectBuffer.
[[size]]
. -
indirectOffset is a multiple of 4.
-
-
Add indirectBuffer to the usage scope as input.
-
drawIndexedIndirect(indirectBuffer, indirectOffset)
-
Draws indexed primitives using parameters read from a
GPUBuffer
. See § 21.3 Rendering for the detailed specification.The indirect drawIndexed parameters encoded in the buffer must be a tightly packed block of five 32-bit unsigned integer values (20 bytes total), given in the same order as the arguments for
drawIndexed()
. For example:
let drawIndexedIndirectParameters = new Uint32Array(5);
drawIndexedIndirectParameters[0] = indexCount;
drawIndexedIndirectParameters[1] = instanceCount;
drawIndexedIndirectParameters[2] = firstIndex;
drawIndexedIndirectParameters[3] = baseVertex;
drawIndexedIndirectParameters[4] = firstInstance;
Called on: GPURenderEncoderBase
this.Arguments:
Arguments for the GPURenderEncoderBase.drawIndexedIndirect(indirectBuffer, indirectOffset) method. Parameter Type Nullable Optional Description indirectBuffer
GPUBuffer ✘ ✘ Buffer containing the indirect drawIndexed parameters. indirectOffset
GPUSize64 ✘ ✘ Offset in bytes into indirectBuffer where the drawing data begins. Returns:
undefined
Issue the following steps on the Device timeline of this.
[[device]]
:-
If any of the following conditions are unsatisfied, make this invalid and stop.
-
It is valid to draw indexed with this.
-
indirectBuffer is valid to use with this.
-
indirectOffset + sizeof(indirect drawIndexed parameters) ≤ indirectBuffer.
[[size]]
. -
indirectOffset is a multiple of 4.
-
-
Add indirectBuffer to the usage scope as input.
-
To determine if it is valid to draw with a given GPURenderEncoderBase
encoder, run the following steps:
If any of the following conditions are unsatisfied, return false
:
-
Validate encoder bind groups(encoder, encoder.
[[pipeline]]
) must betrue
. -
Let pipelineDescriptor be encoder.
[[pipeline]]
.[[descriptor]]
. -
For each
GPUIndex32
slot0
to pipelineDescriptor.vertex
.buffers
.length:-
encoder.
[[vertex_buffers]]
[slot] must not benull
.
-
Otherwise return true
.
To determine if it is valid to draw indexed with a given GPURenderEncoderBase
encoder, run the following steps:
If any of the following conditions are unsatisfied, return false
:
-
It must be valid to draw with encoder.
-
encoder.
[[index_buffer]]
must not benull
. -
Let stripIndexFormat be encoder.
[[pipeline]]
.[[strip_index_format]]
. -
If stripIndexFormat is not
undefined
:-
encoder.
[[index_format]]
must be stripIndexFormat.
-
Otherwise return true
.
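For example (non-normative), a sketch of encoding an indexed draw; renderPipeline, bindGroup, vertexBuffer, and indexBuffer are assumed to have been created elsewhere, and renderPassEncoder is assumed to be an active GPURenderPassEncoder:
renderPassEncoder.setPipeline(renderPipeline);
renderPassEncoder.setBindGroup(0, bindGroup);
renderPassEncoder.setVertexBuffer(0, vertexBuffer);
renderPassEncoder.setIndexBuffer(indexBuffer, 'uint16');
renderPassEncoder.drawIndexed(36);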
15.1.3. Rasterization state
The GPURenderPassEncoder
has several methods which affect how draw commands are rasterized to
attachments used by this encoder.
setViewport(x, y, width, height, minDepth, maxDepth)
-
Sets the viewport used during the rasterization stage to linearly map from normalized device coordinates to viewport coordinates.
Called on:GPURenderPassEncoder
this.Arguments:
Arguments for the GPURenderPassEncoder.setViewport(x, y, width, height, minDepth, maxDepth) method. Parameter Type Nullable Optional Description x
float ✘ ✘ Minimum X value of the viewport in pixels. y
float ✘ ✘ Minimum Y value of the viewport in pixels. width
float ✘ ✘ Width of the viewport in pixels. height
float ✘ ✘ Height of the viewport in pixels. minDepth
float ✘ ✘ Minimum depth value of the viewport. maxDepth
float ✘ ✘ Maximum depth value of the viewport. Returns:
undefined
Issue the following steps on the Device timeline of this:
-
If any of the following conditions are unsatisfied, generate a validation error and stop.
-
x is greater than or equal to
0
. -
y is greater than or equal to
0
. -
width is greater than or equal to
0
. -
height is greater than or equal to
0
. -
x + width is less than or equal to this.
[[attachment_size]]
.width. -
y + height is less than or equal to this.
[[attachment_size]]
.height. -
minDepth is greater than or equal to
0.0
and less than or equal to1.0
. -
maxDepth is greater than or equal to
0.0
and less than or equal to1.0
. -
maxDepth is greater than minDepth.
-
-
Set this.
[[viewport]]
to the extents x, y, width, height, minDepth, and maxDepth.
Allowed for GPUs to use fixed point or rounded viewport coordinates
-
setScissorRect(x, y, width, height)
-
Sets the scissor rectangle used during the rasterization stage. After transformation into viewport coordinates any fragments which fall outside the scissor rectangle will be discarded.
Called on:GPURenderPassEncoder
this.Arguments:
Arguments for the GPURenderPassEncoder.setScissorRect(x, y, width, height) method. Parameter Type Nullable Optional Description x
GPUIntegerCoordinate ✘ ✘ Minimum X value of the scissor rectangle in pixels. y
GPUIntegerCoordinate ✘ ✘ Minimum Y value of the scissor rectangle in pixels. width
GPUIntegerCoordinate ✘ ✘ Width of the scissor rectangle in pixels. height
GPUIntegerCoordinate ✘ ✘ Height of the scissor rectangle in pixels. Returns:
undefined
Issue the following steps on the Device timeline of this:
-
If any of the following conditions are unsatisfied, generate a validation error and stop.
-
x+width is less than or equal to this.
[[attachment_size]]
.width. -
y+height is less than or equal to this.
[[attachment_size]]
.height.
-
-
Set the scissor rectangle to the extents x, y, width, and height.
-
setBlendConstant(color)
-
Sets the constant blend color and alpha values used with
"constant"
and"one-minus-constant"
GPUBlendFactor
s.Called on:GPURenderPassEncoder
this.Arguments:
Arguments for the GPURenderPassEncoder.setBlendConstant(color) method. Parameter Type Nullable Optional Description color
GPUColor ✘ ✘ The color to use when blending. setStencilReference(reference)
-
Sets the stencil reference value used during stencil tests with the
"replace"
GPUStencilOperation
.Called on:GPURenderPassEncoder
this.Arguments:
Arguments for the GPURenderPassEncoder.setStencilReference(reference) method. Parameter Type Nullable Optional Description reference
GPUStencilValue ✘ ✘ The stencil reference value.
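For example (non-normative), a sketch of overriding the default rasterization state of a pass whose attachments are assumed to be 640×480 texels; renderPassEncoder is assumed to be an active GPURenderPassEncoder:
renderPassEncoder.setViewport(0, 0, 640, 480, 0.0, 1.0);
renderPassEncoder.setScissorRect(0, 0, 320, 240);
renderPassEncoder.setBlendConstant({ r: 1.0, g: 1.0, b: 1.0, a: 1.0 });
renderPassEncoder.setStencilReference(1);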
15.1.4. Queries
beginOcclusionQuery(queryIndex)
-
Called on:
GPURenderPassEncoder
this.Arguments:
Arguments for the GPURenderPassEncoder.beginOcclusionQuery(queryIndex) method. Parameter Type Nullable Optional Description queryIndex
GPUSize32 ✘ ✘ The index of the query in the query set. Returns:
undefined
Issue the following steps on the Device timeline of this.
[[device]]
:-
If any of the following conditions are unsatisfied, generate a validation error and stop.
-
this.
[[occlusion_query_set]]
is notnull
. -
queryIndex < this.
[[occlusion_query_set]]
.[[descriptor]]
.count
. -
The query at the same queryIndex must not have been previously written to in this pass.
-
this.
[[occlusion_query_active]]
isfalse
.
-
-
Set this.
[[occlusion_query_active]]
totrue
.
-
endOcclusionQuery()
-
Called on:
GPURenderPassEncoder
this.Returns:
undefined
Issue the following steps on the Device timeline of this.
[[device]]
:-
If any of the following conditions are unsatisfied, generate a validation error and stop.
-
this.
[[occlusion_query_active]]
istrue
.
-
-
Set this.
[[occlusion_query_active]]
tofalse
.
-
beginPipelineStatisticsQuery(querySet, queryIndex)
-
Called on:
GPURenderPassEncoder
this.Arguments:
Arguments for the GPURenderPassEncoder.beginPipelineStatisticsQuery(querySet, queryIndex) method. Parameter Type Nullable Optional Description querySet
GPUQuerySet ✘ ✘ queryIndex
GPUSize32 ✘ ✘ Returns:
undefined
-
If this.
[[device]]
.[[features]]
does not contain"pipeline-statistics-query"
, throw aTypeError
.
Describe
beginPipelineStatisticsQuery()
algorithm steps. -
endPipelineStatisticsQuery()
-
Called on:
GPURenderPassEncoder
this.Returns:
undefined
-
If this.
[[device]]
.[[features]]
does not contain"pipeline-statistics-query"
, throw aTypeError
.
Describe
endPipelineStatisticsQuery()
algorithm steps. -
writeTimestamp(querySet, queryIndex)
-
Writes a timestamp value into querySet when all previous commands have completed executing.
Called on:GPURenderPassEncoder
this.Arguments:
Arguments for the GPURenderPassEncoder.writeTimestamp(querySet, queryIndex) method. Parameter Type Nullable Optional Description querySet
GPUQuerySet ✘ ✘ The query set that will store the timestamp values. queryIndex
GPUSize32 ✘ ✘ The index of the query in the query set. Returns:
undefined
-
If this.
[[device]]
.[[features]]
does not contain"timestamp-query"
, throw aTypeError
. -
If any of the following conditions are unsatisfied, generate a validation error and stop.
-
querySet is valid to use with this.
-
querySet.
[[descriptor]]
.type
is"timestamp"
. -
queryIndex < querySet.
[[descriptor]]
.count
. -
The query in querySet at index queryIndex has not been written earlier in this render pass.
-
Describe
writeTimestamp()
algorithm steps. -
15.1.5. Bundles
executeBundles(bundles)
-
Executes the commands previously recorded into the given
GPURenderBundle
s as part of this render pass.When a
GPURenderBundle
is executed, it does not inherit the render pass’s pipeline, bind groups, or vertex and index buffers. After aGPURenderBundle
has executed, the render pass’s pipeline, bind groups, and vertex and index buffers are cleared.Note: state is cleared even if zero
GPURenderBundles
are executed.Called on:GPURenderPassEncoder
this.Arguments:
Arguments for the GPURenderPassEncoder.executeBundles(bundles) method. Parameter Type Nullable Optional Description bundles
sequence<GPURenderBundle> ✘ ✘ List of render bundles to execute. Returns:
undefined
Describe
executeBundles()
algorithm steps.
15.1.6. Finalization
The render pass encoder can be ended by calling endPass()
once the user
has finished recording commands for the pass. Once endPass()
has been
called the render pass encoder can no longer be used.
endPass()
-
Completes recording of the render pass commands sequence.
Called on:GPURenderPassEncoder
this.Returns:
undefined
Issue the following steps on the Device timeline of this:
-
If any of the following conditions are unsatisfied, generate a validation error and stop.
-
this.
[[debug_group_stack]]
's size is 0. -
this.
[[occlusion_query_active]]
isfalse
.
-
-
16. Bundles
16.1. GPURenderBundle
[Exposed=Window]
interface GPURenderBundle {
};
GPURenderBundle includes GPUObjectBase;
16.1.1. Creation
dictionary GPURenderBundleDescriptor : GPUObjectDescriptorBase {
};
[Exposed=Window]
interface GPURenderBundleEncoder {
    GPURenderBundle finish(optional GPURenderBundleDescriptor descriptor = {});
};
GPURenderBundleEncoder includes GPUObjectBase;
GPURenderBundleEncoder includes GPUProgrammablePassEncoder;
GPURenderBundleEncoder includes GPURenderEncoderBase;
createRenderBundleEncoder(descriptor)
-
Creates a
GPURenderBundleEncoder
.Called on:GPUDevice
this.Arguments:
Arguments for the GPUDevice.createRenderBundleEncoder(descriptor) method. Parameter Type Nullable Optional Description descriptor
GPURenderBundleEncoderDescriptor ✘ ✘ Description of the GPURenderBundleEncoder
to create.Returns:
GPURenderBundleEncoder
Describe
createRenderBundleEncoder()
algorithm steps.
16.1.2. Encoding
dictionary GPURenderBundleEncoderDescriptor : GPUObjectDescriptorBase {
    required sequence<GPUTextureFormat> colorFormats;
    GPUTextureFormat depthStencilFormat;
    GPUSize32 sampleCount = 1;
};
16.1.3. Finalization
finish(descriptor)
-
Completes recording of the render bundle commands sequence.
Called on:GPURenderBundleEncoder
this.Arguments:
Arguments for the GPURenderBundleEncoder.finish(descriptor) method. Parameter Type Nullable Optional Description descriptor
GPURenderBundleDescriptor ✘ ✔ Returns:
GPURenderBundle
Describe
finish()
algorithm steps.
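For example (non-normative), a sketch of recording a bundle once and replaying it inside a render pass; device, renderPipeline, vertexBuffer, and renderPassEncoder are assumed to exist, and the color format is an illustrative choice that must match the pass's color attachments:
const bundleEncoder = device.createRenderBundleEncoder({
    colorFormats: ['bgra8unorm'],
});
bundleEncoder.setPipeline(renderPipeline);
bundleEncoder.setVertexBuffer(0, vertexBuffer);
bundleEncoder.draw(3);
const bundle = bundleEncoder.finish();

// Later, while encoding a compatible render pass:
renderPassEncoder.executeBundles([bundle]);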
17. Queues
[Exposed=Window]
interface GPUQueue {
    undefined submit(sequence<GPUCommandBuffer> commandBuffers);

    Promise<undefined> onSubmittedWorkDone();

    undefined writeBuffer(GPUBuffer buffer, GPUSize64 bufferOffset,
                          [AllowShared] BufferSource data,
                          optional GPUSize64 dataOffset = 0,
                          optional GPUSize64 size);

    undefined writeTexture(GPUImageCopyTexture destination,
                           [AllowShared] BufferSource data,
                           GPUImageDataLayout dataLayout,
                           GPUExtent3D size);

    undefined copyExternalImageToTexture(GPUImageCopyExternalImage source,
                                         GPUImageCopyTexture destination,
                                         GPUExtent3D copySize);
};
GPUQueue includes GPUObjectBase;
GPUQueue
has the following methods:
writeBuffer(buffer, bufferOffset, data, dataOffset, size)
-
Issues a write operation of the provided data into a
GPUBuffer
.Called on:GPUQueue
this.Arguments:
Arguments for the GPUQueue.writeBuffer(buffer, bufferOffset, data, dataOffset, size) method. Parameter Type Nullable Optional Description buffer
GPUBuffer ✘ ✘ The buffer to write to. bufferOffset
GPUSize64 ✘ ✘ Offset in bytes into buffer to begin writing at. data
BufferSource ✘ ✘ Data to write into buffer. dataOffset
GPUSize64 ✘ ✔ Offset into data to begin writing from. Given in elements if data is a TypedArray
and bytes otherwise.size
GPUSize64 ✘ ✔ Size of content to write from data to buffer. Given in elements if data is a TypedArray
and bytes otherwise.Returns:
undefined
-
If data is an
ArrayBuffer
orDataView
, let the element type be "byte". Otherwise, data is a TypedArray; let the element type be the type of the TypedArray. -
Let dataSize be the size of data, in elements.
-
If size is unspecified, let contentsSize be dataSize − dataOffset. Otherwise, let contentsSize be size.
-
If any of the following conditions are unsatisfied, throw
OperationError
and stop.-
contentsSize ≥ 0.
-
dataOffset + contentsSize ≤ dataSize.
-
contentsSize, converted to bytes, is a multiple of 4 bytes.
-
-
Let dataContents be a copy of the bytes held by the buffer source.
-
Let contents be the contentsSize elements of dataContents starting at an offset of dataOffset elements.
-
Issue the following steps on the Queue timeline of this:
-
If any of the following conditions are unsatisfied, generate a validation error and stop.
-
Write contents into buffer starting at bufferOffset.
-
-
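For example (non-normative), a sketch of uploading a small typed array; queue is assumed to be the device's GPUQueue and buffer a GPUBuffer created with COPY_DST usage and sufficient size:
const data = new Float32Array([0.0, 1.0, 2.0, 3.0]);
queue.writeBuffer(buffer, 0, data);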
writeTexture(destination, data, dataLayout, size)
-
Issues a write operation of the provided data into a
GPUTexture
.Called on:GPUQueue
this.Arguments:
Arguments for the GPUQueue.writeTexture(destination, data, dataLayout, size) method. Parameter Type Nullable Optional Description destination
GPUImageCopyTexture ✘ ✘ The texture subresource and origin to write to. data
BufferSource ✘ ✘ Data to write into destination. dataLayout
GPUImageDataLayout ✘ ✘ Layout of the content in data. size
GPUExtent3D ✘ ✘ Extents of the content to write from data to destination. Returns:
undefined
-
Let dataBytes be a copy of the bytes held by the buffer source data.
-
Let dataByteSize be the number of bytes in dataBytes.
-
Let textureDesc be destination.
texture
.[[descriptor]]
. -
If any of the following conditions are unsatisfied, throw
OperationError
and stop.-
validating linear texture data(dataLayout, dataByteSize, textureDesc.
format
, size) succeeds.
-
-
Let contents be the contents of the images seen by viewing dataBytes with dataLayout and size.
-
Issue the following steps on the Queue timeline of this:
-
If any of the following conditions are unsatisfied, generate a validation error and stop.
-
validating GPUImageCopyTexture(destination, size) returns
true
. -
textureDesc.
sampleCount
is 1. -
Valid Texture Copy Range(destination, size) is satisfied.
-
destination.
aspect
refers to a single copyable aspect of textureDesc.format
. See depth-formats.
Note: unlike
GPUCommandEncoder
.copyBufferToTexture()
, there is no alignment requirement on either dataLayout.bytesPerRow
or dataLayout.offset
. -
-
Write contents into destination.
-
-
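For example (non-normative), a sketch of uploading pixel data to a 2D texture assumed to have an 'rgba8unorm' format and COPY_DST usage; queue and texture are assumed to exist:
const width = 16, height = 16;
const pixels = new Uint8Array(width * height * 4); // 4 bytes per texel
queue.writeTexture(
    { texture },
    pixels,
    { bytesPerRow: width * 4 },
    [width, height, 1]);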
copyExternalImageToTexture(source, destination, copySize)
-
Issues a copy operation of the contents of a platform image/canvas into the destination texture.
A color conversion to a
GPUPredefinedColorSpace
is necessary here. Temporarily disallow float targets until there is an upstream decision on whether "srgb" means extended-srgb or clamped-srgb.Called on:GPUQueue
this.Arguments:
Arguments for the GPUQueue.copyExternalImageToTexture(source, destination, copySize) method. Parameter Type Nullable Optional Description source
GPUImageCopyExternalImage ✘ ✘ source image and origin to copy to destination. destination
GPUImageCopyTexture ✘ ✘ The texture subresource and origin to write to. copySize
GPUExtent3D ✘ ✘ Extents of the content to write from source to destination. Returns:
undefined
-
If source.
source
is not origin-clean, throw aSecurityError
and stop.
If any of the following requirements are unmet, throw an
OperationError
and stop.-
Let textureDesc be destination.
texture
.[[descriptor]]
. -
copySize.depthOrArrayLayers must be
1
. -
textureDesc.
usage
must include bothRENDER_ATTACHMENT
andCOPY_DST
. -
textureDesc.
format
must be one of the following:
The above list was written just for ImageBitmap. Re-evaluate it for canvas (and video). (It’s probably still fine, though
rgb10a2unorm
is a bit of an outlier.) -
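For example (non-normative), a sketch of copying an ImageBitmap into a texture; this assumes an async context, an imageElement that is origin-clean, and a texture whose format and usage (including RENDER_ATTACHMENT and COPY_DST) satisfy the requirements above:
const imageBitmap = await createImageBitmap(imageElement);
queue.copyExternalImageToTexture(
    { source: imageBitmap },
    { texture },
    [imageBitmap.width, imageBitmap.height, 1]);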
submit(commandBuffers)
-
Schedules the execution of the command buffers by the GPU on this queue.
Called on:GPUQueue
this.Arguments:
Arguments for the GPUQueue.submit(commandBuffers) method. Parameter Type Nullable Optional Description commandBuffers
sequence<GPUCommandBuffer> ✘ ✘ Returns:
undefined
Issue the following steps on the Device timeline of this:
-
If any of the following conditions are unsatisfied, generate a validation error and stop.
-
Every
GPUBuffer
referenced in any element of commandBuffers is in the"unmapped"
buffer state. -
Every
GPUQuerySet
referenced in a command in any element of commandBuffers is in the available state. For occlusion queries,occlusionQuerySet
inbeginRenderPass()
does not constitute a reference, whilebeginOcclusionQuery()
does.
-
-
Issue the following steps on the Queue timeline of this:
-
For each commandBuffer in commandBuffers:
-
Execute each command in commandBuffer.
[[command_list]]
.
-
-
-
onSubmittedWorkDone()
-
Returns a
Promise
that resolves once this queue finishes processing all the work submitted up to this moment.Called on:GPUQueue
this.Arguments:
Arguments for the GPUQueue.onSubmittedWorkDone() method. Parameter Type Nullable Optional Description Describe
onSubmittedWorkDone()
algorithm steps.
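For example (non-normative), a sketch of submitting recorded work and waiting for it to complete; this assumes an async context, with queue and commandEncoder created elsewhere:
queue.submit([commandEncoder.finish()]);
await queue.onSubmittedWorkDone();
// All work submitted above has completed on the GPU at this point.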
18. Queries
18.1. GPUQuerySet
[Exposed=Window]
interface GPUQuerySet {
    undefined destroy();
};
GPUQuerySet includes GPUObjectBase;
GPUQuerySet
has the following internal slots:
[[descriptor]]
, of typeGPUQuerySetDescriptor
-
The
GPUQuerySetDescriptor
describing this query set.All optional fields of
GPUTextureViewDescriptor
are defined. [[state]]
of type query set state.-
The current state of the
GPUQuerySet
.
Each GPUQuerySet
has a current query set state on the Device timeline which is one of the following:
-
"available" where the
GPUQuerySet
is available for GPU operations on its content. -
"destroyed" where the
GPUQuerySet
is no longer available for any operations exceptdestroy
.
18.1.1. QuerySet Creation
A GPUQuerySetDescriptor
specifies the options to use in creating a GPUQuerySet
.
dictionary GPUQuerySetDescriptor : GPUObjectDescriptorBase {
    required GPUQueryType type;
    required GPUSize32 count;
    sequence<GPUPipelineStatisticName> pipelineStatistics = [];
};
type
, of type GPUQueryType-
The type of queries managed by
GPUQuerySet
. count
, of type GPUSize32-
The number of queries managed by
GPUQuerySet
. pipelineStatistics
, of type sequence<GPUPipelineStatisticName>, defaulting to[]
-
The set of
GPUPipelineStatisticName
values in this sequence defines which pipeline statistics will be returned in the new query set.
createQuerySet(descriptor)
-
Creates a
GPUQuerySet
.Called on:GPUDevice
this.Arguments:
Arguments for the GPUDevice.createQuerySet(descriptor) method. Parameter Type Nullable Optional Description descriptor
GPUQuerySetDescriptor ✘ ✘ Description of the GPUQuerySet
to create.Returns:
GPUQuerySet
-
If descriptor.
type
is"pipeline-statistics"
, but this.[[device]]
.[[features]]
does not contain"pipeline-statistics-query"
, throw aTypeError
. -
If descriptor.
type
is"timestamp"
, but this.[[device]]
.[[features]]
does not contain"timestamp-query"
, throw aTypeError
. -
If any of the following requirements are unmet, return an error query set and stop.
-
descriptor.
count
must be ≤ 8192. -
If descriptor.
type
is"pipeline-statistics"
:-
descriptor.
pipelineStatistics
must not contain duplicate entries.
Otherwise:
-
descriptor.
pipelineStatistics
must be empty.
-
-
Let q be a new
GPUQuerySet
object. -
Set q.
[[descriptor]]
to descriptor. -
Return q.
-
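For example (non-normative), a sketch of creating a query set for occlusion queries; device is assumed to be a valid GPUDevice:
const occlusionQuerySet = device.createQuerySet({
    type: 'occlusion',
    count: 32,
});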
18.1.2. QuerySet Destruction
An application that no longer requires a GPUQuerySet
can choose to lose access to it before
garbage collection by calling destroy()
.
destroy()
-
Destroys the
GPUQuerySet
.
18.2. QueryType
enum GPUQueryType {
    "occlusion",
    "pipeline-statistics",
    "timestamp"
};
18.3. Occlusion Query
Occlusion query is only available on render passes, to query the number of fragment samples that pass all the per-fragment tests for a set of drawing commands, including scissor, sample mask, alpha to coverage, stencil, and depth tests. Any non-zero result value for the query indicates that at least one sample passed the tests and reached the output merging stage of the render pipeline; a result of 0 indicates that no samples passed the tests.
When beginning a render pass, GPURenderPassDescriptor
.occlusionQuerySet
must be set to be able to use occlusion queries during the pass. An occlusion query is begun
and ended by calling beginOcclusionQuery()
and endOcclusionQuery()
in pairs that cannot be nested.
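For example (non-normative), a sketch of surrounding a group of draw calls with an occlusion query; commandEncoder, occlusionQuerySet, and the color attachment setup are assumed to exist:
const renderPassEncoder = commandEncoder.beginRenderPass({
    colorAttachments: [ /* ... */ ],
    occlusionQuerySet,
});
renderPassEncoder.beginOcclusionQuery(0);
// ... draw calls whose passing samples are counted by query 0 ...
renderPassEncoder.endOcclusionQuery();
renderPassEncoder.endPass();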
18.4. Pipeline Statistics Query
enum GPUPipelineStatisticName {
    "vertex-shader-invocations",
    "clipper-invocations",
    "clipper-primitives-out",
    "fragment-shader-invocations",
    "compute-shader-invocations"
};
When resolving pipeline statistics query, each result is written into GPUSize64
, and the number and order of the results written to GPU buffer matches the number and order of GPUPipelineStatisticName
specified in pipelineStatistics
.
The beginPipelineStatisticsQuery()
and endPipelineStatisticsQuery()
(on both GPUComputePassEncoder
and GPURenderPassEncoder
) cannot be nested. A pipeline statistics query must be ended before beginning another one.
Pipeline statistics query requires pipeline-statistics-query
is available on the device.
18.5. Timestamp Query
Timestamp query allows application to write timestamp values to a GPUQuerySet
by calling writeTimestamp()
on GPUComputePassEncoder
or GPURenderPassEncoder
or GPUCommandEncoder
, and then resolve timestamp values in nanoseconds (type of GPUSize64
) to a GPUBuffer
(using resolveQuerySet()
).
Timestamp query requires timestamp-query
is available on the device.
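For example (non-normative), a sketch of measuring the GPU duration of some encoded work; device and queue are assumed to exist, "timestamp-query" is assumed to be enabled on the device, and resolveBuffer is assumed to be a GPUBuffer created with QUERY_RESOLVE usage and at least 16 bytes of space:
const querySet = device.createQuerySet({ type: 'timestamp', count: 2 });
const commandEncoder = device.createCommandEncoder();
commandEncoder.writeTimestamp(querySet, 0);
// ... encode the work to be measured ...
commandEncoder.writeTimestamp(querySet, 1);
commandEncoder.resolveQuerySet(querySet, 0, 2, resolveBuffer, 0);
queue.submit([commandEncoder.finish()]);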
Note: The timestamp values may be zero if the physical device has reset its timestamp counter; in that case, ignore that value and the values that follow it.
Write normative text about timestamp value resets.
Because timestamp query provides high-resolution GPU timestamp, we need to decide what constraints, if any, are on its availability.
19. Canvas Rendering & Swap Chains
19.1. HTMLCanvasElement.getContext()
A GPUCanvasContext
object can be obtained via the getContext()
method of an HTMLCanvasElement
instance by
passing the string literal 'gpupresent'
as its contextType
argument.
Example: getting a GPUCanvasContext
from an offscreen HTMLCanvasElement
:
const canvas = document.createElement('canvas');
const context = canvas.getContext('gpupresent');
const swapChain = context.configureSwapChain(/* ... */);
// ...
19.2. GPUCanvasContext
[Exposed=Window]
interface GPUCanvasContext {
    GPUSwapChain configureSwapChain(GPUSwapChainDescriptor descriptor);
    GPUTextureFormat getSwapChainPreferredFormat(GPUAdapter adapter);
};
GPUCanvasContext
has the following internal slots:
[[canvas]]
of typeHTMLCanvasElement
.-
The canvas this context was created from.
GPUCanvasContext
has the following methods:
configureSwapChain(descriptor)
-
Configures the swap chain for this canvas, and returns a new
GPUSwapChain
object representing it. Destroys any swapchain previously returned byconfigureSwapChain
, including all of the textures it has produced.Called on:GPUCanvasContext
this.Arguments:
Arguments for the GPUCanvasContext.configureSwapChain(descriptor) method. Parameter Type Nullable Optional Description descriptor
GPUSwapChainDescriptor ✘ ✘ Description of the GPUSwapChain
to configure.Returns:
GPUSwapChain
-
Issue the following steps on the Device timeline of this:
-
If any of the following conditions are unsatisfied, generate a validation error and stop.
Describe remaining
configureSwapChain()
algorithm steps. -
-
getSwapChainPreferredFormat(adapter)
-
Returns an optimal
GPUTextureFormat
to use for swap chains with this context and the given device.Called on:GPUCanvasContext
this.Arguments:
Arguments for the GPUCanvasContext.getSwapChainPreferredFormat(adapter) method. Parameter Type Nullable Optional Description adapter
GPUAdapter ✘ ✘ Adapter the swap chain format should be queried for. Returns:
GPUTextureFormat
-
Return an optimal
GPUTextureFormat
to use when creating aGPUSwapChain
with the given adapter. Must be one of the supported swap chain formats.
-
19.3. GPUSwapChainDescriptor
The supported swap chain formats are a set of GPUTextureFormat
s that must be
supported when specified as a GPUSwapChainDescriptor
.format
regardless
of the given GPUSwapChainDescriptor
.device
, initially set to:
«"bgra8unorm"
, "bgra8unorm-srgb"
, "rgba8unorm"
, "rgba8unorm-srgb"
».
enum GPUCanvasCompositingAlphaMode {
    "opaque",
    "premultiplied",
};
dictionary GPUSwapChainDescriptor : GPUObjectDescriptorBase {
    required GPUDevice device;
    required GPUTextureFormat format;
    GPUTextureUsageFlags usage = 0x10;  // GPUTextureUsage.RENDER_ATTACHMENT
    GPUCanvasCompositingAlphaMode compositingAlphaMode = "opaque";
};
A GPUPredefinedColorSpace
is necessary here.
Initially, for SDR, it can default to simply "srgb".
When we add HDR canvas output (pixel values > 1), some clarification may be needed depending on
upstream changes to canvas for color and HDR (e.g. to make sure we choose between extended-srgb
and clamped-srgb: they’re the same for SDR, but different for HDR).
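For example (non-normative), a sketch of configuring a swap chain with the context's preferred format; context, adapter, and device are assumed to have been obtained elsewhere:
const swapChain = context.configureSwapChain({
    device,
    format: context.getSwapChainPreferredFormat(adapter),
});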
19.4. GPUSwapChain
[Exposed=Window]
interface GPUSwapChain {
    GPUTexture getCurrentTexture();
};
GPUSwapChain includes GPUObjectBase;
GPUSwapChain
has the following internal slots:
[[context]]
of typeGPUCanvasContext
-
The context this swap chain was configured for.
[[descriptor]]
of typeGPUSwapChainDescriptor
-
The descriptor this swap chain was created with.
[[currentTexture]]
of typeGPUTexture
, nullable-
The current texture that will be returned by the swap chain when calling
getCurrentTexture()
, and the next one to be composited to the document. Initially set to the result of allocating a new swap chain texture for this swap chain.
GPUSwapChain
has the following methods:
getCurrentTexture()
-
Get the
GPUTexture
that will be composited to the document by theGPUCanvasContext
that created this swap chain next.Called on:GPUSwapChain
this.Returns:
GPUTexture
-
If this.
[[currentTexture]]
isnull
:-
Let this.
[[currentTexture]]
be the result of allocating a new swap chain texture for this.
-
-
Return this.
[[currentTexture]]
.
Note: Developers can expect that the same
GPUTexture
object will be returned by every call togetCurrentTexture()
made within the same frame (i.e. between invocations of Update the rendering). -
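For example (non-normative), a sketch of a per-frame loop that renders into the current swap chain texture; device, queue, and swapChain are assumed to have been created elsewhere, and the clear color and store operation are illustrative choices:
function frame() {
    const commandEncoder = device.createCommandEncoder();
    const renderPassEncoder = commandEncoder.beginRenderPass({
        colorAttachments: [{
            view: swapChain.getCurrentTexture().createView(),
            loadValue: { r: 0, g: 0, b: 0, a: 1 },
            storeOp: 'store',
        }],
    });
    // ... draw ...
    renderPassEncoder.endPass();
    queue.submit([commandEncoder.finish()]);
    requestAnimationFrame(frame);
}
requestAnimationFrame(frame);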
As part of the "Update the rendering" step of the HTML processing model
for each Document, each GPUSwapChain
swapChain must present the
swap chain content to the canvas by running the following steps:
-
Let texture be swapChain.
[[currentTexture]]
. -
If texture is
null
, stop. -
Set swapChain.
[[currentTexture]]
tonull
. -
Ensure that all submitted work items (e.g. queue submissions) have completed writing to texture.
-
Update swapChain.
[[context]]
.[[canvas]]
with the contents of texture. -
Call
destroy()
on texture.
The texture should mark its [[destroyed]]
field as true rather than calling the destroy()
method if we separate object invalid and destroyed states.
To allocate a new swap chain texture for a given GPUSwapChain
swapChain, run the following steps:
-
Let canvas be swapChain.
[[context]]
.[[canvas]]
. -
Let device be swapChain.
[[descriptor]]
.device
. -
Let descriptor be a new
GPUTextureDescriptor
. -
Set descriptor.
size
to [canvas.width, canvas.height, 1]. -
Set descriptor.
format
to swapChain.[[descriptor]]
.format
. -
Set descriptor.
usage
to swapChain.[[descriptor]]
.usage
. -
Let texture be a new
GPUTexture
created as if device.createTexture()
were called with descriptor.If a previously presented texture from swapChain matches the required criteria, its GPU memory may be re-used. -
Ensure texture is cleared to
(0, 0, 0, 0)
. -
Return texture.
19.5. GPUCanvasCompositingAlphaMode
This enum selects how the swap chain canvas will paint onto the page.
GPUCanvasCompositingAlphaMode | Description | dst.rgb | dst.a
---|---|---|---
opaque | Paint RGB as opaque and ignore alpha values. If the content is not already opaque, implementations may need to clear alpha to opaque during presentation. | dst.rgb = src.rgb | dst.a = 1
premultiplied | Composite assuming color values are premultiplied by their alpha value. 100% red 50% opaque is [0.5, 0, 0, 0.5]. Color values must be less than or equal to their alpha value. [1.0, 0, 0, 0.5] is "super-luminant" and cannot reliably be displayed. | dst.rgb = src.rgb + dst.rgb*(1-src.a) | dst.a = src.a + dst.a*(1-src.a)
20. Errors & Debugging
20.1. Fatal Errors
enum GPUDeviceLostReason {
    "destroyed",
};
[Exposed=Window]
interface GPUDeviceLostInfo {
    readonly attribute (GPUDeviceLostReason or undefined) reason;
    readonly attribute DOMString message;
};
partial interface GPUDevice {
    readonly attribute Promise<GPUDeviceLostInfo> lost;
};
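For example (non-normative), a sketch of observing device loss; device is assumed to be a valid GPUDevice:
device.lost.then((info) => {
    console.warn(`WebGPU device was lost: ${info.message}`);
    if (info.reason !== 'destroyed') {
        // The application may attempt to recover, e.g. by requesting a new adapter and device.
    }
});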
20.2. Error Scopes
enum GPUErrorFilter {
    "out-of-memory",
    "validation"
};
[Exposed=Window]
interface GPUOutOfMemoryError {
    constructor();
};
[Exposed=Window]
interface GPUValidationError {
    constructor(DOMString message);
    readonly attribute DOMString message;
};
typedef (GPUOutOfMemoryError or GPUValidationError) GPUError;
partial interface GPUDevice {
    undefined pushErrorScope(GPUErrorFilter filter);
    Promise<GPUError?> popErrorScope();
};
pushErrorScope(filter)
popErrorScope()
-
Rejects with
OperationError
if:-
The device is lost.
-
There are no error scopes on the stack.
-
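For example (non-normative), a sketch of capturing validation errors produced by a block of API calls; this assumes an async context and uses device.createSampler() as a representative fallible call:
device.pushErrorScope('validation');
const sampler = device.createSampler({ /* ... */ });
const error = await device.popErrorScope();
if (error) {
    console.warn(`Sampler creation failed: ${error.message}`);
}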
20.3. Telemetry
[Exposed=(Window, DedicatedWorker)]
interface GPUUncapturedErrorEvent : Event {
    constructor(DOMString type, GPUUncapturedErrorEventInit gpuUncapturedErrorEventInitDict);
    [SameObject] readonly attribute GPUError error;
};
dictionary GPUUncapturedErrorEventInit : EventInit {
    required GPUError error;
};
partial interface GPUDevice {
    [Exposed=(Window, DedicatedWorker)] attribute EventHandler onuncapturederror;
};
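For example (non-normative), a sketch of logging errors that were not observed by any error scope; device is assumed to be a valid GPUDevice:
device.onuncapturederror = (event) => {
    console.warn('Uncaptured WebGPU error:', event.error);
};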
21. Detailed Operations
This section describes the details of various GPU operations.
21.1. Transfer
describe the transfers at the high level
21.2. Computing
Computing operations provide direct access to GPU’s programmable hardware.
Compute shaders do not have pipeline inputs or outputs, their results are
side effects from writing data into storage bindings bound as GPUBufferBindingType."storage"
and GPUStorageTextureBindingLayout
.
These operations are encoded within GPUComputePassEncoder
as:
describe the computing algorithm
21.3. Rendering
Rendering is done by a set of GPU operations that are executed within GPURenderPassEncoder
,
and result in modifications of the texture data, viewed by the render pass attachments.
These operations are encoded with:
Note: rendering is the traditional use of GPUs, and is supported by multiple fixed-function blocks in hardware.
A RenderState is an internal object representing the state
of the current GPURenderPassEncoder
during command encoding. RenderState is a spec namespace for the following definitions:
For a given GPURenderPassEncoder
pass, the syntax:
-
pass.indexBuffer refers to the index buffer bound via
setIndexBuffer()
, if any. -
pass.vertexBuffers refers to list<vertex buffer> bound by
setVertexBuffer()
. -
pass.bindGroups refers to list<
GPUBindGroup
> bound bysetBindGroup(index, bindGroup, dynamicOffsets)
.
The main rendering algorithm:
Arguments:
-
descriptor: Description of the current
GPURenderPipeline
. -
drawCall: The draw call parameters.
-
state: RenderState of the
GPURenderEncoderBase
where the draw call is issued.
-
Resolve indices. See § 21.3.1 Index Resolution.
Let vertexList be the result of resolve indices(drawCall, state).
-
Process vertices. See § 21.3.2 Vertex Processing.
Execute process vertices(vertexList, drawCall, descriptor.
vertex
, state). -
Assemble primitives. See § 21.3.3 Primitive Assembly.
Execute assemble primitives(vertexList, drawCall, descriptor.
primitive
). -
Clip primitives. See § 21.3.4 Primitive Clipping.
-
Rasterize. See § 21.3.5 Rasterization.
-
Process fragments. Issue: fill out the section
-
Process depth/stencil. Issue: fill out the section
-
Write pixels. Issue: fill out the section
21.3.1. Index Resolution
At the first stage of rendering, the pipeline builds a list of vertices to process for each instance.
Arguments:
-
drawCall: The draw call parameters.
-
state: The active RenderState.
Returns: list of integer indices.
-
Let vertexIndexList be an empty list of indices.
-
If drawCall is an indexed draw call:
-
initialize the vertexIndexList with drawCall.indexCount integers.
-
for i in range 0 .. drawCall.indexCount (non-inclusive):
-
let vertexIndex be fetch index(i + drawCall.firstIndex, state.indexBuffer.buffer, state.indexBuffer.offset, state.indexBuffer.format) + drawCall.baseVertex
-
append vertexIndex to the vertexIndexList
-
-
-
Otherwise:
-
initialize the vertexIndexList with drawCall.vertexCount integers.
-
assign the vertexIndexList item i to be drawCall.firstVertex + i
-
-
Return vertexIndexList.
Note: in case of indirect draw calls, the indexCount
, vertexCount
,
and other properties of drawCall are read from the indirect buffer
instead of the draw command itself.
Arguments:
-
i: Index of a vertex index to fetch.
-
buffer:
GPUBuffer
containing index data. -
offset: Base offset into the buffer.
-
format:
GPUIndexFormat
of the index.
Let stride be defined by the format: 2 bytes for "uint16", or 4 bytes for "uint32".
Interpret the data in buffer starting at offset + i * stride, of size stride bytes, as an unsigned integer and return it.
21.3.2. Vertex Processing
Vertex processing stage is a programmable stage of the render pipeline that processes the vertex attribute data, and produces clip space positions for § 21.3.4 Primitive Clipping, as well as other data for the § 21.3.6 Fragment Processing.
Arguments:
-
vertexIndexList: List of vertex indices to process.
-
drawCall: The draw call parameters.
-
desc: The descriptor of type
GPUVertexState
. -
state: The active RenderState.
Each vertex vertexIndex in the vertexIndexList,
in each instance of index rawInstanceIndex, is processed independently.
The rawInstanceIndex is in range from 0 to drawCall.instanceCount - 1, inclusive.
This processing happens in parallel, and any side effects, such as
writes into GPUBufferBindingType."storage"
bindings,
may happen in any order.
- Let instanceIndex be rawInstanceIndex + drawCall.baseInstance.
- For each non-null vertexBufferLayout in the list of desc.buffers:
  - Let i be the index of the buffer layout in this list.
  - Let vertexBuffer and vertexBufferOffset be the buffer and offset of the state.vertexBuffers binding at slot i.
  - Let vertexElementIndex be dependent on vertexBufferLayout.stepMode:
    - "vertex": vertexIndex
    - "instance": instanceIndex
  - For each attributeDesc in vertexBufferLayout.attributes:
    - Let attributeOffset be vertexBufferOffset + vertexElementIndex × vertexBufferLayout.arrayStride + attributeDesc.offset.
    - Load the attribute data of format attributeDesc.format from vertexBuffer starting at offset attributeOffset.
    - Convert the data into a shader-visible format. For example, an attribute of format "unorm8x2" is converted from 2 bytes of fixed-point unsigned 8-bit integers into 2 floating-point values, seen as vec2<f32> in WGSL.
    - Bind the data to the vertex shader input at location attributeDesc.shaderLocation.
- For each GPUBindGroup group at index in state.bindGroups:
  - For each GPUBindingResource resource in the bind group:
    - Let entry be the corresponding GPUBindGroupLayoutEntry for this resource.
    - If entry.visibility includes VERTEX:
      - Bind the resource to the shader under the group index and the binding number entry.binding.
- Set the shader builtins:
  - Set the VertexIndex builtin, if any, to vertexIndex.
  - Set the InstanceIndex builtin, if any, to instanceIndex.
- Invoke the vertex shader entry point described by desc.
Note: The target platform caches the results of vertex shader invocations. There is no guarantee that any vertexIndex that repeats more than once will result in multiple invocations. Similarly, there is no guarantee that a single vertexIndex will only be processed once.
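As a non-normative illustration, the per-attribute address computation above and the "unorm8x2" conversion mentioned in the algorithm can be sketched as follows. The VertexAttribute and VertexBufferLayout shapes are simplified local stand-ins for GPUVertexAttribute and GPUVertexBufferLayout; decoding of other formats and the actual shader binding are omitted.

// Non-normative sketch of locating one vertex attribute in a bound vertex buffer.
interface VertexAttribute { format: string; offset: number; shaderLocation: number; }
interface VertexBufferLayout {
  arrayStride: number;
  stepMode: "vertex" | "instance";
  attributes: VertexAttribute[];
}

// Returns the byte offset of attributeDesc for the given vertex/instance,
// relative to the start of the bound vertex buffer.
function attributeByteOffset(
  layout: VertexBufferLayout,
  attributeDesc: VertexAttribute,
  vertexBufferOffset: number,   // offset passed to setVertexBuffer()
  vertexIndex: number,
  instanceIndex: number
): number {
  // The step mode selects which index advances through the buffer.
  const vertexElementIndex =
    layout.stepMode === "vertex" ? vertexIndex : instanceIndex;
  return vertexBufferOffset +
         vertexElementIndex * layout.arrayStride +
         attributeDesc.offset;
}

// Example conversion for "unorm8x2": two unsigned normalized bytes become
// two floating-point values in [0, 1] (vec2<f32> on the WGSL side).
function decodeUnorm8x2(data: DataView, byteOffset: number): [number, number] {
  return [data.getUint8(byteOffset) / 255, data.getUint8(byteOffset + 1) / 255];
}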
21.3.3. Primitive Assembly
Primitives are assembled by a fixed-function stage of GPUs.
Arguments:
- vertexIndexList: List of vertex indices to process.
- drawCall: The draw call parameters.
- desc: The descriptor of type GPUPrimitiveState.
For each instance, the primitives get assembled from the vertices that have been processed by the shaders, based on the vertexIndexList.
- First, if desc.stripIndexFormat is not null (which means the primitive topology is a strip), and the drawCall is indexed, the vertexIndexList is split into sub-lists using the maximum value of this index format as a separator.
  Example: a vertexIndexList with values [1, 2, 65535, 4, 5, 6] of type "uint16" will be split into sub-lists [1, 2] and [4, 5, 6].
- For each of the sub-lists vl, primitive generation is done according to the desc.topology:
  - "line-list": Line primitives are composed from (vl.0, vl.1), then (vl.2, vl.3), then (vl.4, vl.5), etc. Each subsequent primitive takes 2 vertices.
  - "line-strip": Line primitives are composed from (vl.0, vl.1), then (vl.1, vl.2), then (vl.2, vl.3), etc. Each subsequent primitive takes 1 vertex.
  - "triangle-list": Triangle primitives are composed from (vl.0, vl.1, vl.2), then (vl.3, vl.4, vl.5), then (vl.6, vl.7, vl.8), etc. Each subsequent primitive takes 3 vertices.
  - "triangle-strip": Triangle primitives are composed from (vl.0, vl.1, vl.2), then (vl.2, vl.1, vl.3), then (vl.2, vl.3, vl.4), then (vl.4, vl.3, vl.5), etc. Each subsequent primitive takes 1 vertex.
Issue: should this be defined more formally?
Any incomplete primitives are dropped.
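A non-normative sketch of the assembly rules above for the triangle topologies, assuming the strip separator is the maximum index value (0xFFFF for "uint16", 0xFFFFFFFF for "uint32"); line topologies follow the same pattern with pairs instead of triples.

// Non-normative sketch of primitive assembly for triangle topologies.
// Splits vertexIndexList at the strip separator value, then assembles triangles
// per the rules above. Incomplete primitives are dropped.
type Triangle = [number, number, number];

function assembleTriangles(
  vertexIndexList: number[],
  topology: "triangle-list" | "triangle-strip",
  stripIndexFormat: "uint16" | "uint32" | null,
  indexed: boolean
): Triangle[] {
  // 1. Split into sub-lists at the separator value.
  let subLists: number[][] = [vertexIndexList];
  if (stripIndexFormat !== null && indexed) {
    const separator = stripIndexFormat === "uint16" ? 0xFFFF : 0xFFFFFFFF;
    subLists = [[]];
    for (const v of vertexIndexList) {
      if (v === separator) subLists.push([]);
      else subLists[subLists.length - 1].push(v);
    }
  }
  // 2. Generate primitives for each sub-list.
  const triangles: Triangle[] = [];
  for (const vl of subLists) {
    if (topology === "triangle-list") {
      for (let i = 0; i + 2 < vl.length; i += 3) {
        triangles.push([vl[i], vl[i + 1], vl[i + 2]]);
      }
    } else {
      // triangle-strip: the winding alternates as described above.
      for (let i = 0; i + 2 < vl.length; i += 1) {
        triangles.push(i % 2 === 0
          ? [vl[i], vl[i + 1], vl[i + 2]]
          : [vl[i + 1], vl[i], vl[i + 2]]);
      }
    }
  }
  return triangles;
}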
21.3.4. Primitive Clipping
Vertex shaders have to produce a built-in "position" (of type vec4<f32>), which denotes the clip position of a vertex.
Primitives are clipped to the clip volume, which, for any clip position p inside a primitive, is defined by the following inequalities:
- −p.w ≤ p.x ≤ p.w
- −p.w ≤ p.y ≤ p.w
- 0 ≤ p.z ≤ p.w (depth clipping)
If descriptor.primitive.clampDepth is true, the depth clipping restriction of the clip volume is not applied.
A primitive passes through this stage unchanged if every one of its edges lies entirely inside the clip volume. If the edges of a primitive intersect the boundary of the clip volume, the intersecting edges are reconnected by new edges that lie along the boundary of the clip volume.
For triangular primitives (descriptor.primitive.topology is "triangle-list" or "triangle-strip"), this reconnection may result in the introduction of new vertices into the polygon, internally. If a primitive intersects an edge of the clip volume’s boundary, the clipped polygon must include a point on this boundary edge.
If the vertex shader outputs other floating-point values (scalars and vectors), qualified with "perspective" interpolation, they also get clipped. The output values associated with a vertex that lies within the clip volume are unaffected by clipping. If a primitive is clipped, however, the output values assigned to vertices produced by clipping are clipped.
Considering an edge between vertices a and b that got clipped, resulting in the vertex c, let’s define t to be the ratio between the edge vertices: c.p = t × a.p + (1 − t) × b.p, where x.p is the output clip position of a vertex x.
For each vertex output value "v" with a corresponding fragment input, a.v and b.v would be the outputs for a and b vertices respectively. The clipped shader output c.v is produced based on the interpolation qualifier:
- "flat"
-
Flat interpolation is unaffected, and is based on provoking vertex, which is the first vertex in the primitive. The output value is the same for the whole primitive, and matches the vertex output of the provoking vertex: c.v = provoking vertex.v
- "linear"
-
The interpolation ratio gets adjusted against the perspective coordinates of the clip positions, so that the result of interpolation is linear in screen space.
- "perspective"
-
The value is linearly interpolated in clip space, producing perspective-correct values:
c.v = t × a.v + (1 − t) × b.v
link to interpolation qualifiers in WGSL
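The interpolation of clipped outputs can be sketched non-normatively as follows, with t defined by c.p = t × a.p + (1 − t) × b.p as above. The "linear" branch shows one possible way to adjust the ratio by the w coordinates so that the result is linear in screen space; implementations may compute this differently.

// Non-normative sketch of producing the clipped vertex output c.v for an edge (a, b).
type Interpolation = "flat" | "linear" | "perspective";

function clipOutput(
  aV: number, bV: number,                // output value "v" at vertices a and b
  aW: number, bW: number,                // clip-position w components of a and b
  t: number,                             // ratio from c.p = t*a.p + (1-t)*b.p
  kind: Interpolation,
  provokingV: number                     // output of the provoking (first) vertex
): number {
  switch (kind) {
    case "flat":
      // Same value for the whole primitive.
      return provokingV;
    case "perspective":
      // Linear interpolation in clip space is perspective-correct in screen space.
      return t * aV + (1 - t) * bV;
    case "linear": {
      // Adjust the ratio by the perspective (w) coordinates so the interpolation
      // is linear in screen space (one possible formulation, for illustration).
      const tLinear = (t * aW) / (t * aW + (1 - t) * bW);
      return tLinear * aV + (1 - tLinear) * bV;
    }
  }
}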
21.3.5. Rasterization
Rasterization is the hardware processing stage that maps the generated primitives to the 2-dimensional rendering area of the framebuffer - the set of render attachments in the current GPURenderPassEncoder. This rendering area is split into an even grid of pixels.
Rasterization determines the set of pixels affected by a primitive. In case of multi-sampling, each pixel is further split into descriptor.multisample.count samples. The locations of samples are the same for each pixel, but not defined in this spec.
Issue: do we want to force-enable the "Standard sample locations" in Vulkan?
The framebuffer coordinates start from the top-left corner of the render targets. Each unit corresponds exactly to a pixel. See § 3.3 Coordinate Systems for more information.
- First, the clipped vertices are transformed into NDC - normalized device coordinates. Given the output position p, the NDC coordinates are computed as:
  ndc(p) = vector(p.x ÷ p.w, p.y ÷ p.w, p.z ÷ p.w)
- Let viewport be the [[viewport]] of the current render pass. Then the NDC coordinates n are converted into framebuffer coordinates, based on the size of the render targets (a non-normative sketch of these two transforms follows this list):
  framebufferCoords(n) = vector(viewport.x + 0.5 × (n.x + 1) × viewport.width, viewport.y + 0.5 × (n.y + 1) × viewport.height)
The specific rasterization algorithm depends on
primitive
.topology
:"point-list"
-
The point, if not filtered by § 21.3.4 Primitive Clipping, goes into § 21.3.5.1 Point Rasterization.
"line-list"
or"line-strip"
-
The line cut by § 21.3.4 Primitive Clipping goes into § 21.3.5.2 Line Rasterization.
"triangle-list"
or"triangle-strip"
-
The polygon produced in § 21.3.4 Primitive Clipping goes into § 21.3.5.3 Polygon Rasterization.
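A non-normative sketch of the two coordinate transforms in the list above, using the formulas as written (the Vec4 and Viewport shapes are simplified assumptions):

// Non-normative sketch of the NDC and viewport transforms above.
interface Vec4 { x: number; y: number; z: number; w: number; }
interface Viewport { x: number; y: number; width: number; height: number; }

// Clip position -> normalized device coordinates.
function ndc(p: Vec4): { x: number; y: number; z: number } {
  return { x: p.x / p.w, y: p.y / p.w, z: p.z / p.w };
}

// NDC -> framebuffer coordinates, per the formula above.
function framebufferCoords(n: { x: number; y: number }, viewport: Viewport) {
  return {
    x: viewport.x + 0.5 * (n.x + 1) * viewport.width,
    y: viewport.y + 0.5 * (n.y + 1) * viewport.height,
  };
}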
Let’s define fragment destination to be a combination of the pixel position with the sample index, in case § 21.3.9 Sample frequency shading is active.
The result of rasterization is a set of points, each associated with the following data:
- multisample coverage mask (see § 21.3.10 Sample Masking)
- depth, in NDC coordinates
- barycentric coordinates
Issue: define barycentric coordinates.
Issue: define the depth computation algorithm.
21.3.5.1. Point Rasterization
A single fragment destination is selected within the pixel containing the framebuffer coordinates of the point.
The coverage mask depends on the multi-sampling mode:
- sample-frequency: coverageMask = 1 ≪ sampleIndex
- pixel-frequency multi-sampling: coverageMask = (1 ≪ descriptor.multisample.count) − 1
- no multi-sampling: coverageMask = 1
21.3.5.2. Line Rasterization
TODO: fill out this section
21.3.5.3. Polygon Rasterization
Let v(i) be the framebuffer coordinates for the clipped vertex number i (starting with 1) in a rasterized polygon of n vertices.
Note: this section uses the term "polygon" instead of a "triangle", since § 21.3.4 Primitive Clipping stage may have introduced additional vertices. This is non-observable by the application.
The first step of polygon rasterization is determining if the polygon is front-facing or back-facing. This depends on the sign of the area occupied by the polygon in framebuffer coordinates:
area = 0.5 × ((v(1).x × v(n).y − v(n).x × v(1).y) + ∑i=1..n−1 (v(i+1).x × v(i).y − v(i).x × v(i+1).y))
The sign of area is interpreted based on the primitive.frontFace:
- "ccw": area > 0 is considered front-facing, otherwise back-facing
- "cw": area < 0 is considered front-facing, otherwise back-facing
- "linear"
The polygon can be culled by primitive.cullMode:
- "none": All polygons pass this test.
- "front": The front-facing polygons are discarded, and are not processed in later stages of the render pipeline.
- "back": The back-facing polygons are discarded.
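A non-normative sketch of the facing determination and culling above; v holds the framebuffer-space vertices of the clipped polygon (0-based here, while the formula above is 1-based):

// Non-normative sketch of facing determination and culling for a clipped polygon.
interface Vec2 { x: number; y: number; }

function signedArea(v: Vec2[]): number {
  const n = v.length;
  // The (v(1), v(n)) wrap-around term plus the sum over consecutive vertex pairs,
  // matching the area formula above.
  let sum = v[0].x * v[n - 1].y - v[n - 1].x * v[0].y;
  for (let i = 0; i + 1 < n; i++) {
    sum += v[i + 1].x * v[i].y - v[i].x * v[i + 1].y;
  }
  return 0.5 * sum;
}

function isCulled(
  v: Vec2[],
  frontFace: "ccw" | "cw",
  cullMode: "none" | "front" | "back"
): boolean {
  const area = signedArea(v);
  const frontFacing = frontFace === "ccw" ? area > 0 : area < 0;
  if (cullMode === "front") return frontFacing;
  if (cullMode === "back") return !frontFacing;
  return false; // "none": all polygons pass
}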
The next step is determining the set of fragments inside the polygon in framebuffer space - these are the locations scheduled for the per-fragment operations. The determination is based on descriptor.multisample:
- disabled: Fragments are associated with pixel centers. That is, all the points with coordinates C, where fract(C) = vector2(0.5, 0.5) in framebuffer space, enclosed in the polygon, are included. If a pixel center is on the edge of the polygon, whether or not it is included is not defined.
  Note: this becomes a subject of precision for the rasterizer.
- enabled: Each pixel is associated with descriptor.multisample.count locations, which are implementation-defined. The locations are ordered, and the list is the same for each pixel of the framebuffer. Each location corresponds to one fragment in the multisampled framebuffer. The rasterizer builds a mask of the locations hit inside each pixel and provides it as the "sample-mask" built-in to the fragment shader.
21.3.6. Fragment Processing
TODO: fill out this section
21.3.7. No Color Output
In no-color-output mode, the pipeline does not produce any color attachment outputs.
The pipeline still performs rasterization and produces depth values based on the vertex position output. The depth testing and stencil operations can still be used.
21.3.8. Alpha to Coverage
In alpha-to-coverage mode, an additional alpha-to-coverage mask of MSAA samples is generated based on the alpha component of the
fragment shader output value of the fragment
.targets
[0].
The algorithm of producing the extra mask is platform-dependent and can vary for different pixels. It guarantees that:
-
if alpha is 0.0 or less, the result is 0x0
-
if alpha is 1.0 or greater, the result is 0xFFFFFFFF
-
if alpha is greater than some other alpha1, then the produced sample mask has at least as many bits set to 1 as the mask for alpha1
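A non-normative sketch of one mask construction that satisfies the guarantees above; actual hardware is free to use a different, possibly per-pixel-varying pattern.

// Non-normative sketch of one possible alpha-to-coverage mask that meets the guarantees above.
function alphaToCoverageMask(alpha: number, sampleCount: number): number {
  if (alpha <= 0) return 0x0;
  if (alpha >= 1) return 0xFFFFFFFF;
  // Set the lowest round(alpha * sampleCount) sample bits; monotonic in alpha.
  // sampleCount is small in practice (e.g. 4), so the shift below does not overflow.
  const bits = Math.round(alpha * sampleCount);
  return (1 << bits) - 1;
}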
21.3.9. Sample frequency shading
TODO: fill out the section
21.3.10. Sample Masking
The final sample mask for a pixel is computed as: rasterization mask & mask & shader-output mask.
Only the lower count bits of the mask are considered.
If the bit at position N (counting from the least-significant bit) of the final sample mask has a value of "0", the sample color outputs (corresponding to sample N) to all attachments of the fragment shader are discarded. Also, no depth test or stencil operations are executed on the relevant samples of the depth-stencil attachment.
Note: the color output for sample N is produced by the fragment shader execution with SV_SampleIndex == N for the current pixel. If the fragment shader doesn’t use this semantics, it’s only executed once per pixel.
The rasterization mask is produced by the rasterization stage, based on the shape of the rasterized polygon. The samples included in the shape get the relevant bits set to 1 in the mask.
The shader-output mask takes the output value of the SV_Coverage semantics in the fragment shader. If the semantics is not statically used by the shader, and alphaToCoverageEnabled is enabled, the shader-output mask becomes the alpha-to-coverage mask. Otherwise, it defaults to 0xFFFFFFFF.
Issue: link to the semantics of SV_SampleIndex and SV_Coverage in the WGSL spec.
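A non-normative sketch of the final sample mask computation, where pipelineMask stands for the mask member of the multisample state and shaderOutputMask is the SV_Coverage output, the alpha-to-coverage mask, or the default 0xFFFFFFFF as described above:

// Non-normative sketch of combining the three masks described above.
function finalSampleMask(
  rasterizationMask: number,
  pipelineMask: number,       // the multisample state "mask"
  shaderOutputMask: number,   // SV_Coverage, alpha-to-coverage mask, or 0xFFFFFFFF
  sampleCount: number         // the multisample state "count" (small, e.g. 1 or 4)
): number {
  const combined = rasterizationMask & pipelineMask & shaderOutputMask;
  // Only the lower `count` bits are considered.
  return combined & ((1 << sampleCount) - 1);
}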
22. Type Definitions
typedef [EnforceRange] unsigned long GPUBufferDynamicOffset;
typedef [EnforceRange] unsigned long GPUStencilValue;
typedef [EnforceRange] unsigned long GPUSampleMask;
typedef [EnforceRange] long GPUDepthBias;
typedef [EnforceRange] unsigned long long GPUSize64;
typedef [EnforceRange] unsigned long GPUIntegerCoordinate;
typedef [EnforceRange] unsigned long GPUIndex32;
typedef [EnforceRange] unsigned long GPUSize32;
typedef [EnforceRange] long GPUSignedOffset32;
typedef unsigned long GPUFlagsConstant;
22.1. Colors & Vectors
dictionary GPUColorDict {
    required double r;
    required double g;
    required double b;
    required double a;
};
typedef (sequence<double> or GPUColorDict) GPUColor;
Note: double is large enough to precisely hold 32-bit signed/unsigned integers and single-precision floats.
dictionary GPUOrigin2DDict {
    GPUIntegerCoordinate x = 0;
    GPUIntegerCoordinate y = 0;
};
typedef (sequence<GPUIntegerCoordinate> or GPUOrigin2DDict) GPUOrigin2D;
An Origin2D is a GPUOrigin2D. Origin2D is a spec namespace for the following definitions:
For a given GPUOrigin2D value origin, depending on its type, the syntax:
- origin.x refers to either GPUOrigin2DDict.x or the first item of the sequence, or 0 if it isn’t present.
- origin.y refers to either GPUOrigin2DDict.y or the second item of the sequence, or 0 if it isn’t present.
dictionary GPUOrigin3DDict {
    GPUIntegerCoordinate x = 0;
    GPUIntegerCoordinate y = 0;
    GPUIntegerCoordinate z = 0;
};
typedef (sequence<GPUIntegerCoordinate> or GPUOrigin3DDict) GPUOrigin3D;
An Origin3D is a GPUOrigin3D. Origin3D is a spec namespace for the following definitions:
For a given GPUOrigin3D value origin, depending on its type, the syntax:
- origin.x refers to either GPUOrigin3DDict.x or the first item of the sequence, or 0 if it isn’t present.
- origin.y refers to either GPUOrigin3DDict.y or the second item of the sequence, or 0 if it isn’t present.
- origin.z refers to either GPUOrigin3DDict.z or the third item of the sequence, or 0 if it isn’t present.
dictionary GPUExtent3DDict {
    required GPUIntegerCoordinate width;
    GPUIntegerCoordinate height = 1;
    GPUIntegerCoordinate depthOrArrayLayers = 1;
};
typedef (sequence<GPUIntegerCoordinate> or GPUExtent3DDict) GPUExtent3D;
An Extent3D is a GPUExtent3D. Extent3D is a spec namespace for the following definitions:
For a given GPUExtent3D value extent, depending on its type, the syntax:
- extent.width refers to either GPUExtent3DDict.width or the first item of the sequence (1 if not present).
- extent.height refers to either GPUExtent3DDict.height or the second item of the sequence (1 if not present).
- extent.depthOrArrayLayers refers to either GPUExtent3DDict.depthOrArrayLayers or the third item of the sequence (1 if not present).
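A non-normative sketch of how the sequence and dictionary forms resolve to named components per the rules above; Origin3DLike and Extent3DLike are simplified local shapes for illustration, not the WebIDL definitions.

// Non-normative sketch: resolving the sequence/dictionary forms per the rules above.
interface Origin3DLike { x?: number; y?: number; z?: number; }
interface Extent3DLike { width: number; height?: number; depthOrArrayLayers?: number; }

function resolveOrigin3D(origin: number[] | Origin3DLike): { x: number; y: number; z: number } {
  if (Array.isArray(origin)) {
    // Sequence form: missing items default to 0.
    return { x: origin[0] ?? 0, y: origin[1] ?? 0, z: origin[2] ?? 0 };
  }
  // Dictionary form: members default to 0.
  return { x: origin.x ?? 0, y: origin.y ?? 0, z: origin.z ?? 0 };
}

function resolveExtent3D(extent: number[] | Extent3DLike) {
  if (Array.isArray(extent)) {
    // Sequence form: missing items default to 1.
    return { width: extent[0] ?? 1, height: extent[1] ?? 1, depthOrArrayLayers: extent[2] ?? 1 };
  }
  // Dictionary form: width is required; the rest default to 1.
  return {
    width: extent.width,
    height: extent.height ?? 1,
    depthOrArrayLayers: extent.depthOrArrayLayers ?? 1,
  };
}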
23. Feature Index
23.1. depth-clamping
Issue: Define functionality when the "depth-clamping" feature is enabled.
Feature Dictionary Values
The following dictionary values are supported if and only if the "depth-clamping" feature is enabled; otherwise they must be set to their default values:
- GPUPrimitiveState.clampDepth
23.2. depth24unorm-stencil8
Allows for explicit creation of textures of format "depth24unorm-stencil8".
Feature Enums
The following enums are supported if and only if the "depth24unorm-stencil8" feature is enabled:
- GPUTextureFormat "depth24unorm-stencil8"
23.3. depth32float-stencil8
Allows for explicit creation of textures of format "depth32float-stencil8".
Feature Enums
The following enums are supported if and only if the "depth32float-stencil8" feature is enabled:
- GPUTextureFormat "depth32float-stencil8"
23.4. pipeline-statistics-query
Issue: Define functionality when the "pipeline-statistics-query" feature is enabled.
Feature Enums
The following enums are supported if and only if the "pipeline-statistics-query" feature is enabled:
- GPUQueryType "pipeline-statistics"
23.5. texture-compression-bc
Allows for explicit creation of textures of BC compressed formats.
Feature Enums
The following enums are supported if and only if the "texture-compression-bc" feature is enabled:
- GPUTextureFormat: the BC compressed formats (see § 24.1.3 Packed formats)
23.6. timestamp-query
Issue: Define functionality when the "timestamp-query" feature is enabled.
Feature Enums
The following enums are supported if and only if the "timestamp-query" feature is enabled:
- GPUQueryType "timestamp"
24. Appendices
24.1. Texture Format Capabilities
24.1.1. Plain color formats
All plain color formats support COPY_SRC, COPY_DST, and SAMPLED usage.
Only formats with GPUTextureSampleType "float" can be blended.
The GPUTextureUsage.STORAGE column specifies the support for STORAGE usage in the core API, including both "read-only" and "write-only".
The GPUTextureUsage.RENDER_ATTACHMENT column specifies the support for RENDER_ATTACHMENT usage in the core API.
24.1.2. Depth/stencil formats
All depth formats support COPY_SRC, COPY_DST, SAMPLED, and RENDER_ATTACHMENT usage. However, the source/destination is restricted based on the format.
None of the depth formats can be filtered.
Format | Bytes per texel | Aspect | GPUTextureSampleType | Copy aspect from Buffer | Copy aspect into Buffer
---|---|---|---|---|---
stencil8 | 1 − 5 | stencil | "uint" | ✓ | 
depth16unorm | 2 | depth | "depth" | ✓ | 
depth24plus | 4 | depth | "depth" | ✗ | 
depth24plus-stencil8 | 4 − 8 | depth | "depth" | ✗ | 
depth24plus-stencil8 | 4 − 8 | stencil | "uint" | ✓ | 
depth32float | 4 | depth | "depth" | ✗ | ✓
Copies between depth textures can only happen within the following sets of formats:
- stencil8, depth24plus-stencil8 (stencil component), r8uint
- depth24plus, depth24plus-stencil8 (depth aspect)
Additionally, depth32float textures can be copied into depth32float and r32float textures.
Note: depth32float texel values have a limited range. As a result, copies into depth32float textures are only valid from other depth32float textures.
Issue: clarify if depth24plus-stencil8 is copyable into depth24plus in Metal.
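The copy-compatibility sets above can be summarized by the following non-normative sketch; the depthCopyAllowed helper and its class names are illustrative only and not part of the API.

// Non-normative sketch of the copy-compatibility sets above for texture-to-texture copies.
type Aspect = "depth" | "stencil" | "color";

function copyClass(format: string, aspect: Aspect): string | null {
  if (format === "stencil8" || format === "r8uint" ||
      (format === "depth24plus-stencil8" && aspect === "stencil")) {
    return "stencil8-compatible";
  }
  if (format === "depth24plus" ||
      (format === "depth24plus-stencil8" && aspect === "depth")) {
    return "depth24plus-compatible";
  }
  if (format === "depth32float" || format === "r32float") {
    return "depth32float-compatible";
  }
  return null; // not covered by the sets above
}

function depthCopyAllowed(
  src: { format: string; aspect: Aspect },
  dst: { format: string; aspect: Aspect }
): boolean {
  const cls = copyClass(src.format, src.aspect);
  if (cls === null || cls !== copyClass(dst.format, dst.aspect)) return false;
  // Per the note above, copies into depth32float are only valid from other depth32float textures.
  if (dst.format === "depth32float" && src.format !== "depth32float") return false;
  return true;
}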
24.1.3. Packed formats
All packed texture formats support COPY_SRC, COPY_DST, and SAMPLED usages. All of these formats have "float" type and can be filtered on sampling.
Format | Bytes per block | GPUTextureSampleType | Block Size | Feature
---|---|---|---|---
rgb9e5ufloat | 4 | "float", "unfilterable-float" | 1 × 1 | 
bc1-rgba-unorm | 8 | "float", "unfilterable-float" | 4 × 4 | texture-compression-bc
bc1-rgba-unorm-srgb | 8 | "float", "unfilterable-float" | 4 × 4 | texture-compression-bc
bc2-rgba-unorm | 16 | "float", "unfilterable-float" | 4 × 4 | texture-compression-bc
bc2-rgba-unorm-srgb | 16 | "float", "unfilterable-float" | 4 × 4 | texture-compression-bc
bc3-rgba-unorm | 16 | "float", "unfilterable-float" | 4 × 4 | texture-compression-bc
bc3-rgba-unorm-srgb | 16 | "float", "unfilterable-float" | 4 × 4 | texture-compression-bc
bc4-r-unorm | 8 | "float", "unfilterable-float" | 4 × 4 | texture-compression-bc
bc4-r-snorm | 8 | "float", "unfilterable-float" | 4 × 4 | texture-compression-bc
bc5-rg-unorm | 16 | "float", "unfilterable-float" | 4 × 4 | texture-compression-bc
bc5-rg-snorm | 16 | "float", "unfilterable-float" | 4 × 4 | texture-compression-bc
bc6h-rgb-ufloat | 16 | "float", "unfilterable-float" | 4 × 4 | texture-compression-bc
bc6h-rgb-float | 16 | "float", "unfilterable-float" | 4 × 4 | texture-compression-bc
bc7-rgba-unorm | 16 | "float", "unfilterable-float" | 4 × 4 | texture-compression-bc
bc7-rgba-unorm-srgb | 16 | "float", "unfilterable-float" | 4 × 4 | texture-compression-bc
24.2. Temporary usages of non-exported dfns
Eventually all of these should disappear, but they are useful to avoid warnings while building the specification.