Parallel Thread Execution ISA Version 9.2
The programming guide to using PTX (Parallel Thread Execution) and ISA (Instruction Set Architecture).
1. Introduction
This document describes PTX, a low-level parallel thread execution virtual machine and instruction set architecture (ISA). PTX exposes the GPU as a data-parallel computing device.
1.1. Scalable Data-Parallel Computing using GPUs
Driven by the insatiable market demand for real-time, high-definition 3D graphics, the programmable GPU has evolved into a highly parallel, multithreaded, many-core processor with tremendous computational horsepower and very high memory bandwidth. The GPU is especially well-suited to address problems that can be expressed as data-parallel computations - the same program is executed on many data elements in parallel - with high arithmetic intensity - the ratio of arithmetic operations to memory operations. Because the same program is executed for each data element, there is a lower requirement for sophisticated flow control; and because it is executed on many data elements and has high arithmetic intensity, the memory access latency can be hidden with calculations instead of big data caches.
Data-parallel processing maps data elements to parallel processing threads. Many applications that process large data sets can use a data-parallel programming model to speed up the computations. In 3D rendering large sets of pixels and vertices are mapped to parallel threads. Similarly, image and media processing applications such as post-processing of rendered images, video encoding and decoding, image scaling, stereo vision, and pattern recognition can map image blocks and pixels to parallel processing threads. In fact, many algorithms outside the field of image rendering and processing are accelerated by data-parallel processing, from general signal processing or physics simulation to computational finance or computational biology.
PTX defines a virtual machine and ISA for general purpose parallel thread execution. PTX programs are translated at install time to the target hardware instruction set. The PTX-to-GPU translator and driver enable NVIDIA GPUs to be used as programmable parallel computers.
1.2. Goals of PTX
PTX provides a stable programming model and instruction set for general purpose parallel programming. It is designed to be efficient on NVIDIA GPUs supporting the computation features defined by the NVIDIA Tesla architecture. High level language compilers for languages such as CUDA and C/C++ generate PTX instructions, which are optimized for and translated to native target-architecture instructions.
The goals for PTX include the following:
Provide a stable ISA that spans multiple GPU generations.
Achieve performance in compiled applications comparable to native GPU performance.
Provide a machine-independent ISA for C/C++ and other compilers to target.
Provide a code distribution ISA for application and middleware developers.
Provide a common source-level ISA for optimizing code generators and translators, which map PTX to specific target machines.
Facilitate hand-coding of libraries, performance kernels, and architecture tests.
Provide a scalable programming model that spans GPU sizes from a single unit to many parallel units.
1.3. PTX ISA Version 9.2
PTX ISA version 9.2 introduces the following new features:
- Adds support for .u8x4 and .s8x4 instruction types for add, sub, min, max, and neg instructions.
- Adds support for the add.sat.{u16x2/s16x2/u32} instruction.
- Adds support for the .b128 type for the st.async instruction.
- Adds support for the .ignore_oob qualifier for the cp.async.bulk instruction.
- Adds support for the .bf16x2 destination type for the cvt instruction with .e4m3x2, .e5m2x2, .e3m2x2, .e2m3x2, and .e2m1x2 source types.
1.4. Document Structure
The information in this document is organized into the following Chapters:
Programming Model outlines the programming model.
PTX Machine Model gives an overview of the PTX virtual machine model.
Syntax describes the basic syntax of the PTX language.
State Spaces, Types, and Variables describes state spaces, types, and variable declarations.
Instruction Operands describes instruction operands.
Abstracting the ABI describes the function and call syntax, calling convention, and PTX support for abstracting the Application Binary Interface (ABI).
Instruction Set describes the instruction set.
Special Registers lists special registers.
Directives lists the assembly directives supported in PTX.
Release Notes provides release notes for PTX ISA versions 2.x and beyond.
References
- 754-2008 IEEE Standard for Floating-Point Arithmetic. ISBN 978-0-7381-5752-8, 2008.
- The OpenCL Specification, Version 1.1, Document Revision 44, June 1, 2011.
- CUDA Programming Guide.
- CUDA Dynamic Parallelism Programming Guide.
- CUDA Atomicity Requirements.
- PTX Writers Guide to Interoperability.
2. Programming Model
2.1. A Highly Multithreaded Coprocessor
The GPU is a compute device capable of executing a very large number of threads in parallel. It operates as a coprocessor to the main CPU, or host: In other words, data-parallel, compute-intensive portions of applications running on the host are off-loaded onto the device.
More precisely, a portion of an application that is executed many times, but independently on different data, can be isolated into a kernel function that is executed on the GPU as many different threads. To that effect, such a function is compiled to the PTX instruction set and the resulting kernel is translated at install time to the target GPU instruction set.
2.2. Thread Hierarchy
The batch of threads that executes a kernel is organized as a grid. A grid consists of either cooperative thread arrays or clusters of cooperative thread arrays as described in this section and illustrated in Figure 1 and Figure 2. Cooperative thread arrays (CTAs) implement CUDA thread blocks and clusters implement CUDA thread block clusters.
2.2.1. Cooperative Thread Arrays
The Parallel Thread Execution (PTX) programming model is explicitly parallel: a PTX program specifies the execution of a given thread of a parallel thread array. A cooperative thread array, or CTA, is an array of threads that execute a kernel concurrently or in parallel.
Threads within a CTA can communicate with each other. To coordinate the communication of the threads within the CTA, one can specify synchronization points where threads wait until all threads in the CTA have arrived.
Each thread has a unique thread identifier within the CTA. Programs use a data parallel
decomposition to partition inputs, work, and results across the threads of the CTA. Each CTA thread
uses its thread identifier to determine its assigned role, assign specific input and output
positions, compute addresses, and select work to perform. The thread identifier is a three-element
vector tid, (with elements tid.x, tid.y, and tid.z) that specifies the thread’s
position within a 1D, 2D, or 3D CTA. Each thread identifier component ranges from zero up to the
number of thread ids in that CTA dimension.
Each CTA has a 1D, 2D, or 3D shape specified by a three-element vector ntid (with elements
ntid.x, ntid.y, and ntid.z). The vector ntid specifies the number of threads in each
CTA dimension.
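As an illustration, a thread in a 2D CTA might compute its linear thread id from these registers. This is an illustrative sketch; the register names are arbitrary:

```ptx
.reg .b32 %tx, %ty, %nx, %lin;
mov.u32    %tx, %tid.x;           // thread position in x
mov.u32    %ty, %tid.y;           // thread position in y
mov.u32    %nx, %ntid.x;          // CTA width in x
mad.lo.u32 %lin, %ty, %nx, %tx;   // linear id = tid.y * ntid.x + tid.x
```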
Threads within a CTA execute in SIMT (single-instruction, multiple-thread) fashion in groups called
warps. A warp is a maximal subset of threads from a single CTA, such that the threads execute
the same instructions at the same time. Threads within a warp are sequentially numbered. The warp
size is a machine-dependent constant. Typically, a warp has 32 threads. Some applications may be
able to maximize performance with knowledge of the warp size, so PTX includes a run-time immediate
constant, WARP_SZ, which may be used in any instruction where an immediate operand is allowed.
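For example, a thread might derive its warp and lane indices from its thread id using WARP_SZ as an immediate operand. This is an illustrative sketch; the register names are arbitrary:

```ptx
.reg .b32 %t, %warp, %lane;
mov.u32 %t, %tid.x;
div.u32 %warp, %t, WARP_SZ;   // warp index within a 1D CTA
rem.u32 %lane, %t, WARP_SZ;   // lane index within the warp
```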
2.2.2. Cluster of Cooperative Thread Arrays
A cluster is a group of CTAs that run concurrently or in parallel and that can synchronize and communicate with each other via shared memory. The executing CTA must ensure that the shared memory of the peer CTA exists before communicating with it via shared memory, and that the peer CTA has not exited before the shared memory operation completes.
Threads within the different CTAs in a cluster can synchronize and communicate with each other via
shared memory. Cluster-wide barriers can be used to synchronize all the threads within the
cluster. Each CTA in a cluster has a unique CTA identifier within its cluster
(cluster_ctaid). Each cluster of CTAs has 1D, 2D or 3D shape specified by the parameter
cluster_nctaid. Each CTA in the cluster also has a unique CTA identifier (cluster_ctarank)
across all dimensions. The total number of CTAs across all the dimensions in the cluster is
specified by cluster_nctarank. Threads may read and use these values through predefined, read-only
special registers %cluster_ctaid, %cluster_nctaid, %cluster_ctarank,
%cluster_nctarank.
Cluster level is applicable only on target architecture sm_90 or higher. Specifying cluster
level during launch time is optional. If the user specifies the cluster dimensions at launch time
then it will be treated as explicit cluster launch, otherwise it will be treated as implicit cluster
launch with default dimension 1x1x1. PTX provides read-only special register
%is_explicit_cluster to differentiate between explicit and implicit cluster launch.
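For illustration, a thread might read these cluster-related special registers as follows (the destination register names are arbitrary):

```ptx
.reg .b32  %rank, %total;
.reg .pred %p;
mov.u32  %rank,  %cluster_ctarank;    // this CTA's linear rank within its cluster
mov.u32  %total, %cluster_nctarank;   // total number of CTAs in the cluster
mov.pred %p, %is_explicit_cluster;    // true for an explicit cluster launch
```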
2.2.3. Grid of Clusters
There is a maximum number of threads that a CTA can contain and a maximum number of CTAs that a cluster can contain. However, clusters with CTAs that execute the same kernel can be batched together into a grid of clusters, so that the total number of threads that can be launched in a single kernel invocation is very large. This comes at the expense of reduced thread communication and synchronization, because threads in different clusters cannot communicate and synchronize with each other.
Each cluster has a unique cluster identifier (clusterid) within a grid of clusters. Each grid of
clusters has a 1D, 2D, or 3D shape specified by the parameter nclusterid. Each grid also has a
unique temporal grid identifier (gridid). Threads may read and use these values through
predefined, read-only special registers %tid, %ntid, %clusterid, %nclusterid, and
%gridid.
Each CTA has a unique identifier (ctaid) within a grid. Each grid of CTAs has 1D, 2D, or 3D shape
specified by the parameter nctaid. Threads may read and use these values through predefined,
read-only special registers %ctaid and %nctaid.
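A common idiom is to combine these registers to form a global element index. This is an illustrative sketch; the register names are arbitrary:

```ptx
.reg .b32 %i, %c, %n, %t;
mov.u32    %c, %ctaid.x;       // CTA index within the grid
mov.u32    %n, %ntid.x;        // threads per CTA
mov.u32    %t, %tid.x;         // thread index within the CTA
mad.lo.u32 %i, %c, %n, %t;     // global index = ctaid.x * ntid.x + tid.x
```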
Each kernel is executed as a batch of threads organized as a grid of clusters consisting of CTAs, where the cluster is an optional level applicable only to target architectures sm_90 and higher. Figure 1 shows a grid consisting of CTAs and Figure 2 shows a grid consisting of clusters.
Grids may be launched with dependencies between one another: a grid may be a dependent grid and/or a prerequisite grid. To understand how grid dependencies may be defined, refer to the section on CUDA Graphs in the CUDA Programming Guide.
Figure 1 Grid with CTAs
Figure 2 Grid with clusters
A cluster is a set of cooperative thread arrays (CTAs) where a CTA is a set of concurrent threads that execute the same kernel program. A grid is a set of clusters consisting of CTAs that execute independently.
2.3. Memory Hierarchy
PTX threads may access data from multiple state spaces during their execution as illustrated by
Figure 3 where cluster level is introduced from
target architecture sm_90 onwards. Each thread has a private local memory. Each thread block
(CTA) has a shared memory visible to all threads of the block and to all active blocks in the
cluster and with the same lifetime as the block. Finally, all threads have access to the same global
memory.
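For illustration, each of these memory levels is accessed through its own state-space qualifier on loads and stores. In this sketch, the register and address operand names are hypothetical:

```ptx
ld.global.f32 %f1, [gptr];   // read from global memory
ld.shared.f32 %f2, [sptr];   // read from the CTA's shared memory
st.local.f32  [lptr], %f1;   // write to thread-private local memory
```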
There are additional state spaces accessible by all threads: the constant, param, texture, and surface state spaces. Constant and texture memory are read-only; surface memory is readable and writable. The global, constant, param, texture, and surface state spaces are optimized for different memory usages. For example, texture memory offers different addressing modes as well as data filtering for specific data formats. Note that texture and surface memory is cached, and within the same kernel call, the cache is not kept coherent with respect to global memory writes and surface memory writes, so any texture fetch or surface read to an address that has been written to via a global or a surface write in the same kernel call returns undefined data. In other words, a thread can safely read some texture or surface memory location only if this memory location has been updated by a previous kernel call or memory copy, but not if it has been previously updated by the same thread or another thread from the same kernel call.
The global, constant, and texture state spaces are persistent across kernel launches by the same application.
Both the host and the device maintain their own local memory, referred to as host memory and device memory, respectively. The device memory may be mapped and read or written by the host, or, for more efficient transfer, copied from the host memory through optimized API calls that utilize the device’s high-performance Direct Memory Access (DMA) engine.
Figure 3 Memory Hierarchy
3. PTX Machine Model
3.1. A Set of SIMT Multiprocessors
The NVIDIA GPU architecture is built around a scalable array of multithreaded Streaming Multiprocessors (SMs). When a host program invokes a kernel grid, the blocks of the grid are enumerated and distributed to multiprocessors with available execution capacity. The threads of a thread block execute concurrently on one multiprocessor. As thread blocks terminate, new blocks are launched on the vacated multiprocessors.
A multiprocessor consists of multiple Scalar Processor (SP) cores, a multithreaded instruction unit, and on-chip shared memory. The multiprocessor creates, manages, and executes concurrent threads in hardware with zero scheduling overhead. It implements a single-instruction barrier synchronization. Fast barrier synchronization together with lightweight thread creation and zero-overhead thread scheduling efficiently support very fine-grained parallelism, allowing, for example, a low granularity decomposition of problems by assigning one thread to each data element (such as a pixel in an image, a voxel in a volume, a cell in a grid-based computation).
To manage hundreds of threads running several different programs, the multiprocessor employs an architecture we call SIMT (single-instruction, multiple-thread). The multiprocessor maps each thread to one scalar processor core, and each scalar thread executes independently with its own instruction address and register state. The multiprocessor SIMT unit creates, manages, schedules, and executes threads in groups of parallel threads called warps. (This term originates from weaving, the first parallel thread technology.) Individual threads composing a SIMT warp start together at the same program address but are otherwise free to branch and execute independently.
When a multiprocessor is given one or more thread blocks to execute, it splits them into warps that get scheduled by the SIMT unit. The way a block is split into warps is always the same; each warp contains threads of consecutive, increasing thread IDs with the first warp containing thread 0.
At every instruction issue time, the SIMT unit selects a warp that is ready to execute and issues the next instruction to the active threads of the warp. A warp executes one common instruction at a time, so full efficiency is realized when all threads of a warp agree on their execution path. If threads of a warp diverge via a data-dependent conditional branch, the warp serially executes each branch path taken, disabling threads that are not on that path, and when all paths complete, the threads converge back to the same execution path. Branch divergence occurs only within a warp; different warps execute independently regardless of whether they are executing common or disjoint code paths.
SIMT architecture is akin to SIMD (Single Instruction, Multiple Data) vector organizations in that a single instruction controls multiple processing elements. A key difference is that SIMD vector organizations expose the SIMD width to the software, whereas SIMT instructions specify the execution and branching behavior of a single thread. In contrast with SIMD vector machines, SIMT enables programmers to write thread-level parallel code for independent, scalar threads, as well as data-parallel code for coordinated threads. For the purposes of correctness, the programmer can essentially ignore the SIMT behavior; however, substantial performance improvements can be realized by taking care that the code seldom requires threads in a warp to diverge. In practice, this is analogous to the role of cache lines in traditional code: Cache line size can be safely ignored when designing for correctness but must be considered in the code structure when designing for peak performance. Vector architectures, on the other hand, require the software to coalesce loads into vectors and manage divergence manually.
How many blocks a multiprocessor can process at once depends on how many registers per thread and how much shared memory per block are required for a given kernel since the multiprocessor’s registers and shared memory are split among all the threads of the batch of blocks. If there are not enough registers or shared memory available per multiprocessor to process at least one block, the kernel will fail to launch.
Figure 4 Hardware Model
A set of SIMT multiprocessors with on-chip shared memory.
3.2. Independent Thread Scheduling
On architectures prior to Volta, warps used a single program counter shared amongst all 32 threads in the warp together with an active mask specifying the active threads of the warp. As a result, threads from the same warp in divergent regions or different states of execution cannot signal each other or exchange data, and algorithms requiring fine-grained sharing of data guarded by locks or mutexes can easily lead to deadlock, depending on which warp the contending threads come from.
Starting with the Volta architecture, Independent Thread Scheduling allows full concurrency between threads, regardless of warp. With Independent Thread Scheduling, the GPU maintains execution state per thread, including a program counter and call stack, and can yield execution at a per-thread granularity, either to make better use of execution resources or to allow one thread to wait for data to be produced by another. A schedule optimizer determines how to group active threads from the same warp together into SIMT units. This retains the high throughput of SIMT execution as in prior NVIDIA GPUs, but with much more flexibility: threads can now diverge and reconverge at sub-warp granularity.
Independent Thread Scheduling can lead to a rather different set of threads participating in the executed code than intended if the developer made assumptions about warp-synchronicity of previous hardware architectures. In particular, any warp-synchronous code (such as synchronization-free, intra-warp reductions) should be revisited to ensure compatibility with Volta and beyond. See the section on Compute Capability 7.x in the CUDA Programming Guide for further details.
4. Syntax
PTX programs are a collection of text source modules (files). PTX source modules have an assembly-language style syntax with instruction operation codes and operands. Pseudo-operations specify symbol and addressing management. The ptxas optimizing backend compiler optimizes and assembles PTX source modules to produce corresponding binary object files.
4.1. Source Format
Source modules are ASCII text. Lines are separated by the newline character (\n).
All whitespace characters are equivalent; whitespace is ignored except for its use in separating tokens in the language.
The C preprocessor cpp may be used to process PTX source modules. Lines beginning with # are
preprocessor directives. The following are common preprocessor directives:
#include, #define, #if, #ifdef, #else, #endif, #line, #file
C: A Reference Manual by Harbison and Steele provides a good description of the C preprocessor.
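For example, a module might use cpp macros before assembly. In this sketch, the macro names N and DEBUG are hypothetical:

```ptx
#define N 1024

.global .f32 data[N];        // cpp expands N to 1024 before ptxas runs
#ifdef DEBUG
.global .u32 dbg_counter;
#endif
```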
PTX is case sensitive and uses lowercase for keywords.
Each PTX module must begin with a .version directive specifying the PTX language version,
followed by a .target directive specifying the target architecture assumed. See
PTX Module Directives for more information on these directives.
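A minimal module skeleton might therefore look like the following. The kernel name and parameter are hypothetical, and .address_size is an optional directive commonly placed alongside .version and .target:

```ptx
.version 9.2
.target sm_90
.address_size 64

.visible .entry my_kernel ( .param .u64 pdata )
{
    ret;
}
```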
4.3. Statements
A PTX statement is either a directive or an instruction. Statements begin with an optional label and end with a semicolon.
Examples
.reg .b32 r1, r2;
.global .f32 array[N];
start: mov.b32 r1, %tid.x;
shl.b32 r1, r1, 2; // shift thread id by 2 bits
ld.global.b32 r2, array[r1]; // thread[tid] gets array[tid]
add.f32 r2, r2, 0.5; // add 1/2
4.3.1. Directive Statements
Directive keywords begin with a dot, so no conflict is possible with user-defined identifiers. The directives in PTX are listed in Table 1 and described in State Spaces, Types, and Variables and Directives.
4.3.2. Instruction Statements
Instructions are formed from an instruction opcode followed by a comma-separated list of zero or
more operands, and terminated with a semicolon. Operands may be register variables, constant
expressions, address expressions, or label names. Instructions have an optional guard predicate
which controls conditional execution. The guard predicate follows the optional label and precedes
the opcode, and is written as @p, where p is a predicate register. The guard predicate may
be optionally negated, written as @!p.
The destination operand is first, followed by source operands.
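For example, the following fragment shows a guard predicate preceding the opcode and a destination operand written first. This is an illustrative sketch; the register names are arbitrary:

```ptx
.reg .pred p;
.reg .u32  i, n, sum, x;
      setp.lt.u32 p, i, n;     // p = (i < n)
@p    add.u32 sum, sum, x;     // executes only where p is true
@!p   bra Done;                // negated guard: taken where p is false
Done: ret;
```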
Instruction keywords are listed in Table 2. All instruction keywords are reserved tokens in PTX.
4.4. Identifiers
User-defined identifiers follow extended C++ rules: they either start with a letter followed by zero or more letters, digits, underscore, or dollar characters; or they start with an underscore, dollar, or percentage character followed by one or more letters, digits, underscore, or dollar characters:
followsym: [a-zA-Z0-9_$]
identifier: [a-zA-Z]{followsym}* | [_$%]{followsym}+
PTX does not specify a maximum length for identifiers and suggests that all implementations support a minimum length of at least 1024 characters.
Many high-level languages such as C and C++ follow similar rules for identifier names, except that the percentage sign is not allowed. PTX allows the percentage sign as the first character of an identifier. The percentage sign can be used to avoid name conflicts, e.g., between user-defined variable names and compiler-generated names.
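For example, the following declaration uses several legal identifier forms, including one beginning with a percentage sign:

```ptx
.reg .b32 r1, _tmp, $cost, %warp_id;   // all legal PTX identifiers
```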
PTX predefines one constant and a small number of special registers that begin with the percentage sign, listed in Table 3.
4.5. Constants
PTX supports integer and floating-point constants and constant expressions. These constants may be
used in data initialization and as operands to instructions. Type checking rules remain the same for
integer, floating-point, and bit-size types. For predicate-type data and instructions, integer
constants are allowed and are interpreted as in C, i.e., zero values are False and non-zero
values are True.
4.5.1. Integer Constants
Integer constants are 64 bits in size and are either signed or unsigned, i.e., every integer
constant has type .s64 or .u64. The signed/unsigned nature of an integer constant is needed
to correctly evaluate constant expressions containing operations such as division and ordered
comparisons, where the behavior of the operation depends on the operand types. When used in an
instruction or data initialization, each integer constant is converted to the appropriate size based
on the data or instruction type at its use.
Integer literals may be written in decimal, hexadecimal, octal, or binary notation. The syntax
follows that of C. Integer literals may be followed immediately by the letter U to indicate that
the literal is unsigned.
hexadecimal literal: 0[xX]{hexdigit}+U?
octal literal: 0{octal digit}+U?
binary literal: 0[bB]{bit}+U?
decimal literal: {nonzero-digit}{digit}*U?
Integer literals are non-negative and have a type determined by their magnitude and optional type
suffix as follows: literals are signed (.s64) unless the value cannot be fully represented in
.s64 or the unsigned suffix is specified, in which case the literal is unsigned (.u64).
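The following declarations illustrate these rules (the variable names are illustrative):

```ptx
.global .s64 a = 42;                    // decimal, signed
.global .s64 b = 0123;                  // octal, value 83, signed
.global .s64 c = 0b1010;                // binary, value 10, signed
.global .u64 d = 42U;                   // U suffix forces unsigned
.global .u64 e = 0xfabc123400000000;    // exceeds .s64 range, so unsigned
```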
The predefined integer constant WARP_SZ specifies the number of threads per warp for the target
platform; to date, all target architectures have a WARP_SZ value of 32.
4.5.2. Floating-Point Constants
Floating-point constants are represented as 64-bit double-precision values, and all floating-point constant expressions are evaluated using 64-bit double precision arithmetic. The only exception is the 32-bit hex notation for expressing an exact single-precision floating-point value; such values retain their exact 32-bit single-precision value and may not be used in constant expressions. Each 64-bit floating-point constant is converted to the appropriate floating-point size based on the data or instruction type at its use.
Floating-point literals may be written with an optional decimal point and an optional signed exponent. Unlike C and C++, there is no suffix letter to specify size; literals are always represented in 64-bit double-precision format.
PTX includes a second representation of floating-point constants for specifying the exact machine
representation using a hexadecimal constant. To specify IEEE 754 double-precision floating point
values, the constant begins with 0d or 0D followed by 16 hex digits. To specify IEEE 754
single-precision floating point values, the constant begins with 0f or 0F followed by 8 hex
digits.
0[fF]{hexdigit}{8} // single-precision floating point
0[dD]{hexdigit}{16} // double-precision floating point
Example
mov.f32 $f3, 0F3f800000; // 1.0
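Double-precision and other single-precision constants can be written analogously (illustrative examples; register names are arbitrary):

```ptx
mov.f64 $d1, 0d3FF0000000000000; // 1.0 in double precision
mov.f32 $f4, 0f40490FDB;         // approximately pi in single precision
```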
4.5.3. Predicate Constants
In PTX, integer constants may be used as predicates. For predicate-type data initializers and
instruction operands, integer constants are interpreted as in C, i.e., zero values are False and
non-zero values are True.
4.5.4. Constant Expressions
In PTX, constant expressions are formed using operators as in C and are evaluated using rules similar to those in C, but simplified by restricting types and sizes, removing most casts, and defining full semantics to eliminate cases where expression evaluation in C is implementation dependent.
Constant expressions are formed from constant literals, unary plus and minus, basic arithmetic
operators (addition, subtraction, multiplication, division), comparison operators, the conditional
ternary operator ( ?: ), and parentheses. Integer constant expressions also allow unary logical
negation (!), bitwise complement (~), remainder (%), shift operators (<< and
>>), bit-type operators (&, |, and ^), and logical operators (&&, ||).
Constant expressions in PTX do not support casts between integer and floating-point.
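Constant expressions commonly appear in array-size declarations and data initializers, for example (illustrative declarations):

```ptx
.global .u32 buf[(1 << 10) / 4];   // array size: 256 elements
.global .s64 k  = (2 + 3) * 4;     // initializer evaluated at compile time: 20
```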
Constant expressions are evaluated using the same operator precedence as in C. Table 4 gives operator precedence and associativity. Operator precedence is highest for unary operators and decreases with each line in the chart. Operators on the same line have the same precedence and are evaluated right-to-left for unary operators and left-to-right for binary operators.
| Kind | Operator Symbols | Operator Names | Associates |
|---|---|---|---|
| Primary | () | parenthesis | n/a |
| Unary | + - ! ~ | plus, minus, negation, complement | right |
| | (.s64) (.u64) | casts | right |
| Binary | * / % | multiplication, division, remainder | left |
| | + - | addition, subtraction | left |
| | >> << | shifts | left |
| | < > <= >= | ordered comparisons | left |
| | == != | equal, not equal | left |
| | & | bitwise AND | left |
| | ^ | bitwise XOR | left |
| | \| | bitwise OR | left |
| | && | logical AND | left |
| | \|\| | logical OR | left |
| Ternary | ?: | conditional | right |
4.5.5. Integer Constant Expression Evaluation
Integer constant expressions are evaluated at compile time according to a set of rules that
determine the type (signed .s64 versus unsigned .u64) of each sub-expression. These rules
are based on the rules in C, but they’ve been simplified to apply only to 64-bit integers, and
behavior is fully defined in all cases (specifically, for remainder and shift operators).
- Literals are signed unless unsigned is needed to prevent overflow, or unless the literal uses a U suffix. For example: 42, 0x1234, 0123 are signed; 0xfabc123400000000, 42U, 0x1234U are unsigned.
- Unary plus and minus preserve the type of the input operand. For example: +123, -1, -(-42) are signed; -1U, -0xfabc123400000000 are unsigned.
- Unary logical negation (!) produces a signed result with value 0 or 1.
- Unary bitwise complement (~) interprets the source operand as unsigned and produces an unsigned result.
- Some binary operators require normalization of source operands. This normalization is known as the usual arithmetic conversions and simply converts both operands to unsigned type if either operand is unsigned.
- Addition, subtraction, multiplication, and division perform the usual arithmetic conversions and produce a result with the same type as the converted operands. That is, the operands and result are unsigned if either source operand is unsigned, and signed otherwise.
- Remainder (%) interprets the operands as unsigned. Note that this differs from C, which allows a negative divisor but defines the behavior to be implementation dependent.
- Left and right shift interpret the second operand as unsigned and produce a result with the same type as the first operand. Note that the behavior of right-shift is determined by the type of the first operand: right shift of a signed value is arithmetic and preserves the sign, and right shift of an unsigned value is logical and shifts in a zero bit.
- AND (&), OR (|), and XOR (^) perform the usual arithmetic conversions and produce a result with the same type as the converted operands.
- AND_OP (&&), OR_OP (||), Equal (==), and Not_Equal (!=) produce a signed result. The result value is 0 or 1.
- Ordered comparisons (<, <=, >, >=) perform the usual arithmetic conversions on source operands and produce a signed result. The result value is 0 or 1.
- Casting of expressions to signed or unsigned is supported using (.s64) and (.u64) casts.
- For the conditional operator (?:), the first operand must be an integer, and the second and third operands are either both integers or both floating-point. The usual arithmetic conversions are performed on the second and third operands, and the result type is the same as the converted type.
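The following illustrative initializers show several of these rules in action (the variable names are arbitrary):

```ptx
.global .s64 a = -1 >> 8;    // right shift of a signed value is arithmetic: -1
.global .u64 b = -1U >> 8;   // right shift of an unsigned value shifts in zeros
.global .s64 c = (3 > 2);    // ordered comparison yields a signed 0 or 1: here 1
.global .u64 d = 10 % 3;     // remainder interprets operands as unsigned: 1
```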
4.5.6. Summary of Constant Expression Evaluation Rules
Table 5 contains a summary of the constant expression evaluation rules.
| Kind | Operator | Operand Types | Operand Interpretation | Result Type |
|---|---|---|---|---|
| Primary | () | any type | same as source | same as source |
| | constant literal | n/a | n/a | .u64, .s64, or .f64 |
| Unary | + - | any type | same as source | same as source |
| | ! | integer | zero or non-zero | .s64 |
| | ~ | integer | .u64 | .u64 |
| Cast | (.u64) | integer | .u64 | .u64 |
| | (.s64) | integer | .s64 | .s64 |
| Binary | + - * / | .f64 | .f64 | .f64 |
| | | integer | use usual conversions | converted type |
| | < > <= >= | .f64 | .f64 | .s64 |
| | | integer | use usual conversions | .s64 |
| | == != | .f64 | .f64 | .s64 |
| | | integer | use usual conversions | .s64 |
| | % | integer | .u64 | .u64 |
| | >> << | integer | 1st unchanged, 2nd is .u64 | same as 1st operand |
| | & \| ^ | integer | .u64 | .u64 |
| | && \|\| | integer | zero or non-zero | .s64 |
| Ternary | ?: | int ? .f64 : .f64 | same as sources | .f64 |
| | | int ? int : int | use usual conversions | converted type |
5. State Spaces, Types, and Variables
While the specific resources available in a given target GPU will vary, the kinds of resources will be common across platforms, and these resources are abstracted in PTX through state spaces and data types.
5.1. State Spaces
A state space is a storage area with particular characteristics. All variables reside in some state space. The characteristics of a state space include its size, addressability, access speed, access rights, and level of sharing between threads.
The state spaces defined in PTX are a byproduct of parallel programming and graphics programming. The list of state spaces is shown in Table 6, and properties of state spaces are shown in Table 7.
| Name | Description |
|---|---|
| .reg | Registers, fast. |
| .sreg | Special registers. Read-only; pre-defined; platform-specific. |
| .const | Shared, read-only memory. |
| .global | Global memory, shared by all threads. |
| .local | Local memory, private to each thread. |
| .param | Kernel parameters, defined per-grid; or function or local parameters, defined per-thread. |
| .shared | Addressable memory, defined per CTA, accessible to all threads in the cluster throughout the lifetime of the CTA that defines it. |
| .tex | Global texture memory (deprecated). |
| Name | Addressable | Initializable | Access | Sharing |
|---|---|---|---|---|
| .reg | No | No | R/W | per-thread |
| .sreg | No | No | RO | per-CTA |
| .const | Yes | Yes1 | RO | per-grid |
| .global | Yes | Yes1 | R/W | Context |
| .local | Yes | No | R/W | per-thread |
| .param (as input to kernel) | Yes2 | No | RO | per-grid |
| .param (used in functions) | Restricted3 | No | R/W | per-thread |
| .shared | Yes | No | R/W | per-cluster5 |
| .tex | No4 | Yes, via driver | RO | Context |

Notes:
1 Variables in .const and .global state spaces are initialized to zero by default.
2 Accessible only via the ld.param instruction. The address can be taken via the mov instruction.
3 Accessible via ld.param and st.param instructions. Device function input and return parameters may have their address taken via mov; the parameter is then located on the stack frame, and its address is in the .local state space.
4 Accessible only via the tex instruction.
5 Visible to the owning CTA and other active CTAs in the cluster.
5.1.1. Register State Space
Registers (.reg state space) are fast storage locations. The number of registers is limited, and
will vary from platform to platform. When the limit is exceeded, register variables will be spilled
to memory, causing changes in performance. For each architecture, there is a recommended maximum
number of registers to use (see the CUDA Programming Guide for details).
Registers may be typed (signed integer, unsigned integer, floating point, predicate) or
untyped. Register size is restricted; aside from predicate registers which are 1-bit, scalar
registers have a width of 8-, 16-, 32-, 64-, or 128-bits, and vector registers have a width of
16-, 32-, 64-, or 128-bits. The most common use of 8-bit registers is with ld, st, and cvt
instructions, or as elements of vector tuples.
Registers differ from the other state spaces in that they are not fully addressable, i.e., it is not
possible to refer to the address of a register. When compiling to use the Application Binary
Interface (ABI), register variables are restricted to function scope and may not be declared at
module scope. When compiling legacy PTX code (ISA versions prior to 3.0) containing module-scoped
.reg variables, the compiler silently disables use of the ABI. Registers may have alignment
boundaries required by multi-word loads and stores.
5.1.2. Special Register State Space
The special register (.sreg) state space holds predefined, platform-specific registers, such as
grid, cluster, CTA, and thread parameters, clock counters, and performance monitoring registers. All
special registers are predefined.
5.1.3. Constant State Space
The constant (.const) state space is a read-only memory initialized by the host. Constant memory
is accessed with a ld.const instruction. Constant memory is restricted in size, currently
limited to 64 KB which can be used to hold statically-sized constant variables. There is an
additional 640 KB of constant memory, organized as ten independent 64 KB regions. The driver may
allocate and initialize constant buffers in these regions and pass pointers to the buffers as kernel
function parameters. Since the ten regions are not contiguous, the driver must ensure that constant
buffers are allocated so that each buffer fits entirely within a 64 KB region and does not span a
region boundary.
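The region-fit constraint described above amounts to a simple boundary check. The sketch below is illustrative only; the helper name is ours and is not part of any driver API.

```python
# Sketch: check that a constant buffer placed at a byte offset fits
# entirely within one 64 KB region (does not cross a region boundary).
REGION_SIZE = 64 * 1024

def fits_in_one_region(offset, size):
    """True if [offset, offset + size) lies within a single 64 KB region."""
    if size == 0 or size > REGION_SIZE:
        return False
    return offset // REGION_SIZE == (offset + size - 1) // REGION_SIZE

print(fits_in_one_region(0, 65536))      # True: fills exactly one region
print(fits_in_one_region(65000, 1024))   # False: spans a region boundary
```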
Statically-sized constant variables have an optional variable initializer; constant variables with no explicit initializer are initialized to zero by default. Constant buffers allocated by the driver are initialized by the host, and pointers to such buffers are passed to the kernel as parameters. See the description of kernel parameter attributes in Kernel Function Parameter Attributes for more details on passing pointers to constant buffers as kernel parameters.
5.1.3.1. Banked Constant State Space (deprecated)
Previous versions of PTX exposed constant memory as a set of eleven 64 KB banks, with explicit bank numbers required for variable declaration and during access.
Prior to PTX ISA version 2.2, the constant memory was organized into fixed size banks. There were
eleven 64 KB banks, and banks were specified using the .const[bank] modifier, where bank
ranged from 0 to 10. If no bank number was given, bank zero was assumed.
By convention, bank zero was used for all statically-sized constant variables. The remaining banks were used to declare incomplete constant arrays (as in C, for example), where the size is not known at compile time. For example, the declaration
.extern .const[2] .b32 const_buffer[];
resulted in const_buffer pointing to the start of constant bank two. This pointer could then be
used to access the entire 64 KB constant bank. Multiple incomplete array variables declared in the
same bank were aliased, with each pointing to the start address of the specified constant bank.
To access data in constant banks 1 through 10, the bank number was required in the state space of the load instruction. For example, an incomplete array in bank 2 was accessed as follows:
.extern .const[2] .b32 const_buffer[];
ld.const[2].b32 %r1, [const_buffer+4]; // load second word
In PTX ISA version 2.2, we eliminated explicit banks and replaced the incomplete array representation of driver-allocated constant buffers with kernel parameter attributes that allow pointers to constant buffers to be passed as kernel parameters.
5.1.4. Global State Space
The global (.global) state space is memory that is accessible by all threads in a context. It is
the mechanism by which threads in different CTAs, clusters, and grids can communicate. Use
ld.global, st.global, and atom.global to access global variables.
Global variables have an optional variable initializer; global variables with no explicit initializer are initialized to zero by default.
5.1.5. Local State Space
The local state space (.local) is private memory for each thread to keep its own data. It is
typically standard memory with cache. The size is limited, as it must be allocated on a per-thread
basis. Use ld.local and st.local to access local variables.
When compiling to use the Application Binary Interface (ABI), .local state-space variables
must be declared within function scope and are allocated on the stack. In implementations that do
not support a stack, all local memory variables are stored at fixed addresses, recursive function
calls are not supported, and .local variables may be declared at module scope. When compiling
legacy PTX code (ISA versions prior to 3.0) containing module-scoped .local variables, the
compiler silently disables use of the ABI.
5.1.6. Parameter State Space
The parameter (.param) state space is used (1) to pass input arguments from the host to the
kernel, (2a) to declare formal input and return parameters for device functions called from within
kernel execution, and (2b) to declare locally-scoped byte array variables that serve as function
call arguments, typically for passing large structures by value to a function. Kernel function
parameters differ from device function parameters in terms of access and sharing (read-only versus
read-write, per-kernel versus per-thread). Note that PTX ISA versions 1.x supports only kernel
function parameters in .param space; device function parameters were previously restricted to the
register state space. The use of parameter state space for device function parameters was introduced
in PTX ISA version 2.0 and requires target architecture sm_20 or higher. Additional sub-qualifiers
::entry or ::func can be specified on instructions with .param state space to indicate
whether the address refers to kernel function parameter or device function parameter. If no
sub-qualifier is specified with the .param state space, then the default sub-qualifier is specific
to and dependent on the exact instruction. For example, st.param is equivalent to st.param::func
whereas isspacep.param is equivalent to isspacep.param::entry. Refer to the instruction
description for more details on default sub-qualifier assumption.
Note
The location of parameter space is implementation specific. For example, in some implementations
kernel parameters reside in global memory. No access protection is provided between parameter and
global space in this case. Though the exact location of the kernel parameter space is
implementation specific, the kernel parameter space window is always contained within the global
space window. Similarly, function parameters are mapped to parameter passing registers and/or
stack locations based on the function calling conventions of the Application Binary Interface
(ABI). Therefore, PTX code should make no assumptions about the relative locations or ordering
of .param space variables.
5.1.6.1. Kernel Function Parameters
Each kernel function definition includes an optional list of parameters. These parameters are
addressable, read-only variables declared in the .param state space. Values passed from the host
to the kernel are accessed through these parameter variables using ld.param instructions. The
kernel parameter variables are shared across all CTAs from all clusters within a grid.
The address of a kernel parameter may be moved into a register using the mov instruction. The
resulting address is in the .param state space and is accessed using ld.param instructions.
Example
.entry foo ( .param .b32 N, .param .align 8 .b8 buffer[64] )
{
.reg .u32 %n;
.reg .f64 %d;
ld.param.u32 %n, [N];
ld.param.f64 %d, [buffer];
...
Example
.entry bar ( .param .b32 len )
{
.reg .u32 %ptr, %n;
mov.u32 %ptr, len;
ld.param.u32 %n, [%ptr];
...
Kernel function parameters may represent normal data values, or they may hold addresses to objects in constant, global, local, or shared state spaces. In the case of pointers, the compiler and runtime system need information about which parameters are pointers, and to which state space they point. Kernel parameter attribute directives are used to provide this information at the PTX level. See Kernel Function Parameter Attributes for a description of kernel parameter attribute directives.
Note
The current implementation does not allow creation of generic pointers to constant variables
(cvta.const) in programs that have pointers to constant buffers passed as kernel parameters.
5.1.6.2. Kernel Function Parameter Attributes
Kernel function parameters may be declared with an optional .ptr attribute to indicate that a
parameter is a pointer to memory, and also indicate the state space and alignment of the memory
being pointed to. Kernel Parameter Attribute: .ptr
describes the .ptr kernel parameter attribute.
5.1.6.3. Kernel Parameter Attribute: .ptr
.ptr
Kernel parameter alignment attribute.
Syntax
.param .type .ptr .space .align N varname
.param .type .ptr .align N varname
.space = { .const, .global, .local, .shared };
Description
Used to specify the state space and, optionally, the alignment of memory pointed to by a pointer type kernel parameter. The alignment value N, if present, must be a power of two. If no state space is specified, the pointer is assumed to be a generic address pointing to one of const, global, local, or shared memory. If no alignment is specified, the memory pointed to is assumed to be aligned to a 4 byte boundary.
Spaces between .ptr, .space, and .align may be eliminated to improve readability.
PTX ISA Notes
Introduced in PTX ISA version 2.2.
Support for generic addressing of .const space added in PTX ISA version 3.1.
Target ISA Notes
Supported on all target architectures.
Examples
.entry foo ( .param .u32 param1,
.param .u32 .ptr.global.align 16 param2,
.param .u32 .ptr.const.align 8 param3,
.param .u32 .ptr.align 16 param4 // generic address
// pointer
) { .. }
5.1.6.4. Device Function Parameters
PTX ISA version 2.0 extended the use of parameter space to device function parameters. The most
common use is for passing objects by value that do not fit within a PTX register, such as C
structures larger than 8 bytes. In this case, a byte array in parameter space is used. Typically,
the caller will declare a locally-scoped .param byte array variable that represents a flattened
C structure or union. This will be passed by value to a callee, which declares a .param formal
parameter having the same size and alignment as the passed argument.
Example
// pass object of type struct { double d; int y; };
.func foo ( .reg .b32 N, .param .align 8 .b8 buffer[12] )
{
.reg .f64 %d;
.reg .s32 %y;
ld.param.f64 %d, [buffer];
ld.param.s32 %y, [buffer+8];
...
}
// code snippet from the caller
// struct { double d; int y; } mystruct; is flattened, passed to foo
...
.reg .f64 dbl;
.reg .s32 x;
.param .align 8 .b8 mystruct;
...
st.param.f64 [mystruct+0], dbl;
st.param.s32 [mystruct+8], x;
call foo, (4, mystruct);
...
See the section on function call syntax for more details.
Function input parameters may be read via ld.param and function return parameters may be written
using st.param; it is illegal to write to an input parameter or read from a return parameter.
Aside from passing structures by value, .param space is also required whenever a formal
parameter has its address taken within the called function. In PTX, the address of a function input
parameter may be moved into a register using the mov instruction. Note that the parameter will
be copied to the stack if necessary, and so the address will be in the .local state space and is
accessed via ld.local and st.local instructions. It is not possible to use mov to get
the address of a locally-scoped .param space variable. Starting with PTX ISA version 6.0, the
mov instruction can be used to get the address of a device function's return parameter.
Example
// pass array of up to eight floating-point values in buffer
.func foo ( .param .b32 N, .param .b32 buffer[32] )
{
.reg .u32 %n, %r;
.reg .f32 %f;
.reg .pred %p;
ld.param.u32 %n, [N];
mov.u32 %r, buffer; // forces buffer to .local state space
Loop:
setp.eq.u32 %p, %n, 0;
@%p bra Done;
ld.local.f32 %f, [%r];
...
add.u32 %r, %r, 4;
sub.u32 %n, %n, 1;
bra Loop;
Done:
...
}
5.1.8. Texture State Space (deprecated)
The texture (.tex) state space is global memory accessed via the texture instruction. It is
shared by all threads in a context. Texture memory is read-only and cached, so accesses to texture
memory are not coherent with global memory stores to the texture image.
The GPU hardware has a fixed number of texture bindings that can be accessed within a single kernel
(typically 128). The .tex directive will bind the named texture memory variable to a hardware
texture identifier, where texture identifiers are allocated sequentially beginning with
zero. Multiple names may be bound to the same physical texture identifier. An error is generated if
the maximum number of physical resources is exceeded. The texture name must be of type .u32 or
.u64.
Physical texture resources are allocated on a per-kernel granularity, and .tex variables are
required to be defined in the global scope.
Texture memory is read-only. A texture’s base address is assumed to be aligned to a 16 byte boundary.
Example
.tex .u32 tex_a; // bound to physical texture 0
.tex .u32 tex_c, tex_d; // both bound to physical texture 1
.tex .u32 tex_d; // bound to physical texture 2
.tex .u32 tex_f; // bound to physical texture 3
Note
Explicit declarations of variables in the texture state space is deprecated, and programs should
instead reference texture memory through variables of type .texref. The .tex directive is
retained for backward compatibility, and variables declared in the .tex state space are
equivalent to module-scoped .texref variables in the .global state space.
For example, a legacy PTX definition such as
.tex .u32 tex_a;
is equivalent to:
.global .texref tex_a;
See Texture Sampler and Surface Types for the
description of the .texref type and Texture Instructions
for its use in texture instructions.
5.2. Types
5.2.1. Fundamental Types
In PTX, the fundamental types reflect the native data types supported by the target architectures. A fundamental type specifies both a basic type and a size. Register variables are always of a fundamental type, and instructions operate on these types. The same type-size specifiers are used for both variable definitions and for typing instructions, so their names are intentionally short.
Table 8 lists the fundamental type specifiers for each basic type:
| Basic Type | Fundamental Type Specifiers |
|---|---|
| Signed integer | .s8, .s16, .s32, .s64 |
| Unsigned integer | .u8, .u16, .u32, .u64 |
| Floating-point | .f16, .f16x2, .f32, .f64 |
| Bits (untyped) | .b8, .b16, .b32, .b64, .b128 |
| Predicate | .pred |
Most instructions have one or more type specifiers, needed to fully specify instruction behavior. Operand types and sizes are checked against instruction types for compatibility.
Two fundamental types are compatible if they have the same basic type and are the same size. Signed and unsigned integer types are compatible if they have the same size. The bit-size type is compatible with any fundamental type having the same size.
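The compatibility rule above can be sketched as a small predicate. This is an illustration of the stated rule for sized fundamental types only (it does not handle .pred); the function name is ours.

```python
# Sketch of the PTX fundamental-type compatibility rule (illustrative only).
# Handles sized types like '.s32', '.u16', '.f64', '.b8'; not '.pred'.

def compatible(t1, t2):
    """Two types are compatible if sizes match and either the basic types
    match, both are integers (signed/unsigned), or one is a bit-size type."""
    k1, k2 = t1[1], t2[1]               # basic-type letter: s, u, f, b
    s1, s2 = int(t1[2:]), int(t2[2:])   # size in bits
    if s1 != s2:
        return False                    # sizes must always agree
    if k1 == k2:
        return True                     # same basic type, same size
    if {k1, k2} <= {'s', 'u'}:
        return True                     # signed/unsigned of the same size
    return 'b' in (k1, k2)              # bit-size matches any same-size type

print(compatible('.s32', '.u32'))   # True
print(compatible('.b16', '.f16'))   # True
print(compatible('.f32', '.s32'))   # False
```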
In principle, all variables (aside from predicates) could be declared using only bit-size types, but typed variables enhance program readability and allow for better operand type checking.
5.2.2. Restricted Use of Sub-Word Sizes
The .u8, .s8, and .b8 instruction types are restricted to ld, st, add, sub,
min, max, neg and cvt instructions. The .f16 floating-point type is allowed
in half precision floating point instructions and texture fetch instructions.
The .f16x2 floating point type is allowed only in half precision
floating point arithmetic instructions and texture fetch instructions.
For convenience, ld, st, and cvt instructions permit source and destination data
operands to be wider than the instruction-type size, so that narrow values may be loaded, stored,
and converted using regular-width registers. For example, 8-bit or 16-bit values may be held
directly in 32-bit or 64-bit registers when being loaded, stored, or converted to other types and
sizes.
5.2.3. Alternate Floating-Point Data Formats
The fundamental floating-point types supported in PTX have implicit bit representations that
indicate the number of bits used to store exponent and mantissa. For example, the .f16 type
indicates 5 bits reserved for exponent and 10 bits reserved for mantissa. In addition to the
floating-point representations assumed by the fundamental types, PTX allows the following alternate
floating-point data formats:
- bf16 data format:
This data format is a 16-bit floating point format with 8 bits for exponent and 7 bits for mantissa. A register variable containing bf16 data must be declared with the .b16 type.
- e4m3 data format:
This data format is an 8-bit floating point format with 4 bits for exponent and 3 bits for mantissa. The e4m3 encoding does not support infinity, and NaN values are limited to 0x7f and 0xff. A register variable containing an e4m3 value must be declared using a bit-size type.
- e5m2 data format:
This data format is an 8-bit floating point format with 5 bits for exponent and 2 bits for mantissa. A register variable containing an e5m2 value must be declared using a bit-size type.
- tf32 data format:
This data format is a special 32-bit floating point format supported by the matrix multiply-and-accumulate instructions, with the same range as .f32 and reduced precision (>= 10 bits). The internal layout of the tf32 format is implementation defined. PTX facilitates conversion from the single precision .f32 type to the tf32 format. A register variable containing tf32 data must be declared with the .b32 type.
- e2m1 data format:
This data format is a 4-bit floating point format with 2 bits for exponent and 1 bit for mantissa. The e2m1 encoding does not support infinity and NaN. e2m1 values must be used in a packed format specified as e2m1x2. A register variable containing two e2m1 values must be declared with the .b8 type.
- e2m3 data format:
This data format is a 6-bit floating point format with 2 bits for exponent and 3 bits for mantissa. The e2m3 encoding does not support infinity and NaN. e2m3 values must be used in a packed format specified as e2m3x2. A register variable containing two e2m3 values must be declared with the .b16 type, where each .b8 element holds a 6-bit floating point value with the 2 MSB bits padded with zeros.
- e3m2 data format:
This data format is a 6-bit floating point format with 3 bits for exponent and 2 bits for mantissa. The e3m2 encoding does not support infinity and NaN. e3m2 values must be used in a packed format specified as e3m2x2. A register variable containing two e3m2 values must be declared with the .b16 type, where each .b8 element holds a 6-bit floating point value with the 2 MSB bits padded with zeros.
- ue8m0 data format:
This data format is an 8-bit unsigned floating-point format with 8 bits for exponent and 0 bits for mantissa. The ue8m0 encoding does not support infinity, and the NaN value is limited to 0xff. ue8m0 values must be used in a packed format specified as ue8m0x2. A register variable containing two ue8m0 values must be declared with the .b16 type.
- ue4m3 data format:
This data format is a 7-bit unsigned floating-point format with 4 bits for exponent and 3 bits for mantissa. The ue4m3 encoding does not support infinity, and the NaN value is limited to 0x7f. A register variable containing a single ue4m3 value must be declared with the .b8 type, with the MSB padded with zero.
Alternate data formats cannot be used as fundamental types. They are supported as source or destination formats by certain instructions.
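As a concrete illustration of two of these encodings, the sketch below decodes a bf16 bit pattern (by widening it to f32, since bf16 shares f32's 8-bit exponent) and an e5m2 byte. This is our own illustrative decoder, not a PTX or library API.

```python
import struct

def bf16_to_f32(bits16):
    """Widen a bf16 bit pattern to f32: append 16 zero mantissa bits."""
    return struct.unpack('>f', struct.pack('>I', bits16 << 16))[0]

def e5m2_to_float(bits8):
    """Decode an e5m2 byte: 1 sign bit, 5 exponent bits (bias 15), 2 mantissa bits."""
    sign = -1.0 if bits8 & 0x80 else 1.0
    exp = (bits8 >> 2) & 0x1F
    man = bits8 & 0x3
    if exp == 0:                        # subnormal: man/4 * 2**(1-15)
        return sign * man * 2.0 ** -16
    if exp == 0x1F:                     # all-ones exponent: inf or NaN
        return sign * float('inf') if man == 0 else float('nan')
    return sign * (1 + man / 4.0) * 2.0 ** (exp - 15)

print(bf16_to_f32(0x3F80))   # 1.0
print(e5m2_to_float(0x3C))   # 1.0 (exponent field == bias, mantissa 0)
```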
5.2.4. Fixed-point Data format
PTX supports the following fixed-point data formats:
- s2f6 data format:
This data format is an 8-bit signed 2's complement integer with 2 sign-integer bits and 6 fractional bits, of the form xx.xxxxxx. The s2f6 encoding does not support infinity and NaN.
s2f6 value = s8 value * 2^(-6)
Positive max representation = 01.111111 = 127 * 2^(-6) = 1.984375
Negative max representation = 10.000000 = -128 * 2^(-6) = -2.0
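The conversion rule above is easy to check numerically. The sketch below interprets an s2f6 byte as its fixed-point value; the function name is ours, for illustration only.

```python
# Sketch: interpret an s2f6 byte as a fixed-point value (value = s8 * 2**-6).
def s2f6_to_float(bits8):
    s8 = bits8 - 256 if bits8 & 0x80 else bits8   # sign-extend the s8 value
    return s8 * 2.0 ** -6

print(s2f6_to_float(0b01111111))   # 1.984375 (positive max, 127 * 2**-6)
print(s2f6_to_float(0b10000000))   # -2.0 (negative max, -128 * 2**-6)
```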
5.2.5. Packed Data Types
Certain PTX instructions operate on two or more sets of inputs in parallel, and produce two or more sets of outputs. Such instructions can use the data stored in a packed format. PTX supports either two or four values of the same scalar data type to be packed into a single, larger value. The packed value is considered as a value of a packed data type. In this section we describe the packed data types supported in PTX.
5.2.5.1. Packed Floating Point Data Types
PTX supports various variants of packed floating point data types. Out of them, only .f16x2 is
supported as a fundamental type, while others cannot be used as fundamental types - they are
supported as instruction types on certain instructions. When using an instruction with such
non-fundamental types, the operand data variables must be of bit type of appropriate size.
For example, all of the operand variables must be of type .b32 for an instruction with
instruction type as .bf16x2.
Table 9 describes the variants of packed floating point data types in PTX.
| Packed floating point type | Number of elements contained in a packed format | Type of each element | Register variable type to be used in the declaration |
|---|---|---|---|
| .f16x2 | Two | .f16 | .f16x2 or .b32 |
| .bf16x2 | Two | .bf16 | .b32 |
| .e4m3x2 | Two | .e4m3 | .b16 |
| .e5m2x2 | Two | .e5m2 | .b16 |
| .e2m1x2 | Two | .e2m1 | .b8 |
| .e2m3x2 | Two | .e2m3 | .b16 |
| .e3m2x2 | Two | .e3m2 | .b16 |
| .ue8m0x2 | Two | .ue8m0 | .b16 |
| .e4m3x4 | Four | .e4m3 | .b32 |
| .e5m2x4 | Four | .e5m2 | .b32 |
| .e2m1x4 | Four | .e2m1 | .b16 |
| .e2m3x4 | Four | .e2m3 | .b32 |
| .e3m2x4 | Four | .e3m2 | .b32 |
5.2.5.2. Packed Integer Data Types
PTX supports four variants of packed integer data types: .u16x2, .s16x2, .u8x4, and .s8x4.
The .u16x2, .s16x2 packed data types consist of two .u16 or .s16 values. The .u8x4,
.s8x4 packed data types consist of four .u8 or .s8 values. A register variable containing .u16x2,
.s16x2, .u8x4, .s8x4 data must be declared with .b32 type. Packed integer data types cannot be used as
fundamental types. They are supported as instruction types on certain instructions.
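The packed-integer layout can be sketched with plain bit manipulation. The sketch below packs two .u16 values into a 32-bit register image, assuming the first element occupies the low half-word (an assumption for illustration; consult the individual instruction descriptions for actual lane ordering).

```python
# Sketch: pack two .u16 values into a .b32 register image and unpack them.
# Assumption: element 0 in the low 16 bits, element 1 in the high 16 bits.

def pack_u16x2(lo, hi):
    return ((hi & 0xFFFF) << 16) | (lo & 0xFFFF)

def unpack_u16x2(b32):
    return b32 & 0xFFFF, (b32 >> 16) & 0xFFFF

packed = pack_u16x2(10, 20)
print(hex(packed))           # 0x14000a
print(unpack_u16x2(packed))  # (10, 20)
```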
5.2.5.3. Packed Fixed-Point Data Types
PTX supports .s2f6x2 packed fixed-point data type consisting of two .s2f6 packed
fixed-point values. A register variable containing .s2f6x2 value must be declared with
.b16 type. Packed fixed-point data type cannot be used as fundamental type and is only
supported as instruction type.
5.3. Texture Sampler and Surface Types
PTX includes built-in opaque types for defining texture, sampler, and surface descriptor variables. These types have named fields similar to structures, but all information about layout, field ordering, base address, and overall size is hidden to a PTX program, hence the term opaque. The use of these opaque types is limited to:
- Variable definition within global (module) scope and in kernel entry parameter lists.
- Static initialization of module-scope variables using comma-delimited static assignment expressions for the named members of the type.
- Referencing textures, samplers, or surfaces via texture and surface load/store instructions (tex, suld, sust, sured).
- Retrieving the value of a named member via query instructions (txq, suq).
- Creating pointers to opaque variables using mov, e.g., mov.u64 reg, opaque_var;. The resulting pointer may be stored to and loaded from memory, passed as a parameter to functions, and de-referenced by texture and surface load, store, and query instructions, but the pointer cannot otherwise be treated as an address; i.e., accessing the pointer with ld and st instructions, or performing pointer arithmetic, will result in undefined results.
- Opaque variables may not appear in initializers, e.g., to initialize a pointer to an opaque variable.
Note
Indirect access to textures and surfaces using pointers to opaque variables is supported
beginning with PTX ISA version 3.1 and requires target sm_20 or later.
Indirect access to textures is supported only in unified texture mode (see below).
The three built-in types are .texref, .samplerref, and .surfref. For working with
textures and samplers, PTX has two modes of operation. In the unified mode, texture and sampler
information is accessed through a single .texref handle. In the independent mode, texture and
sampler information each have their own handle, allowing them to be defined separately and combined
at the site of usage in the program. In independent mode, the fields of the .texref type that
describe sampler properties are ignored, since these properties are defined by .samplerref
variables.
Table 10 and
Table 11 list the named members
of each type for unified and independent texture modes. These members and their values have
precise mappings to methods and values defined in the texture HW class as well as
exposed values via the API.
| Member | .texref values | .surfref values |
|---|---|---|
| width | in elements | in elements |
| height | in elements | in elements |
| depth | in elements | in elements |
| channel_data_type | enum type corresponding to source language API | enum type corresponding to source language API |
| channel_order | enum type corresponding to source language API | enum type corresponding to source language API |
| normalized_coords | 0, 1 | N/A |
| filter_mode | nearest, linear | N/A |
| addr_mode_0, addr_mode_1, addr_mode_2 | wrap, mirror, clamp_ovr, clamp_to_edge, clamp_to_border | N/A |
| array_size | as number of textures in a texture array | as number of surfaces in a surface array |
| num_mipmap_levels | as number of levels in a mipmapped texture | N/A |
| num_samples | as number of samples in a multi-sample texture | N/A |
| memory_layout | N/A | 1 for surface with linear memory layout; 0 otherwise |
5.3.1. Texture and Surface Properties
Fields width, height, and depth specify the size of the texture or surface in number of
elements in each dimension.
The channel_data_type and channel_order fields specify these properties of the texture or
surface using enumeration types corresponding to the source language API. For example, see
Channel Data Type and Channel Order Fields for
the OpenCL enumeration types currently supported in PTX.
5.3.2. Sampler Properties
The normalized_coords field indicates whether the texture or surface uses normalized coordinates
in the range [0.0, 1.0) instead of unnormalized coordinates in the range [0, N). If no value is
specified, the default is set by the runtime system based on the source language.
The filter_mode field specifies how the values returned by texture reads are computed based on
the input texture coordinates.
The addr_mode_{0,1,2} fields define the addressing mode in each dimension, which determine how
out-of-range coordinates are handled.
See the CUDA C++ Programming Guide for more details of these properties.
| Member | .samplerref values | .texref values | .surfref values |
|---|---|---|---|
| width | N/A | in elements | in elements |
| height | N/A | in elements | in elements |
| depth | N/A | in elements | in elements |
| channel_data_type | N/A | enum type corresponding to source language API | enum type corresponding to source language API |
| channel_order | N/A | enum type corresponding to source language API | enum type corresponding to source language API |
| normalized_coords | N/A | 0, 1 | N/A |
| force_unnormalized_coords | 0, 1 | N/A | N/A |
| filter_mode | nearest, linear | ignored | N/A |
| addr_mode_0, addr_mode_1, addr_mode_2 | wrap, mirror, clamp_ovr, clamp_to_edge, clamp_to_border | N/A | N/A |
| array_size | N/A | as number of textures in a texture array | as number of surfaces in a surface array |
| num_mipmap_levels | N/A | as number of levels in a mipmapped texture | N/A |
| num_samples | N/A | as number of samples in a multi-sample texture | N/A |
| memory_layout | N/A | N/A | 1 for surface with linear memory layout; 0 otherwise |
In independent texture mode, the sampler properties are carried in an independent .samplerref
variable, and these fields are disabled in the .texref variables. One additional sampler
property, force_unnormalized_coords, is available in independent texture mode.
The force_unnormalized_coords field is a property of .samplerref variables that allows the
sampler to override the texture header normalized_coords property. This field is defined only in
independent texture mode. When True, the texture header setting is overridden and unnormalized
coordinates are used; when False, the texture header setting is used.
The force_unnormalized_coords property is used in compiling OpenCL; in OpenCL, the property of
normalized coordinates is carried in sampler headers. To compile OpenCL to PTX, texture headers are
always initialized with normalized_coords set to True, and the OpenCL sampler-based
normalized_coords flag maps (negated) to the PTX-level force_unnormalized_coords flag.
Variables using these types may be declared at module scope or within kernel entry parameter
lists. At module scope, these variables must be in the .global state space. As kernel
parameters, these variables are declared in the .param state space.
Example
.global .texref my_texture_name;
.global .samplerref my_sampler_name;
.global .surfref my_surface_name;
When declared at module scope, the types may be initialized using a list of static expressions assigning values to the named members.
Example
.global .texref tex1;
.global .samplerref tsamp1 = { addr_mode_0 = clamp_to_border,
filter_mode = nearest
};
5.3.3. Channel Data Type and Channel Order Fields
The channel_data_type and channel_order fields have enumeration types corresponding to the
source language API. Currently, OpenCL is the only source language that defines these
fields. Table 13 and
Table 12 show the
enumeration values defined in OpenCL version 1.0 for channel data type and channel order.
| Channel order (Table 12) | Value |
|---|---|
| CL_R | 0x10B0 |
| CL_A | 0x10B1 |
| CL_RG | 0x10B2 |
| CL_RA | 0x10B3 |
| CL_RGB | 0x10B4 |
| CL_RGBA | 0x10B5 |
| CL_BGRA | 0x10B6 |
| CL_ARGB | 0x10B7 |
| CL_INTENSITY | 0x10B8 |
| CL_LUMINANCE | 0x10B9 |

| Channel data type (Table 13) | Value |
|---|---|
| CL_SNORM_INT8 | 0x10D0 |
| CL_SNORM_INT16 | 0x10D1 |
| CL_UNORM_INT8 | 0x10D2 |
| CL_UNORM_INT16 | 0x10D3 |
| CL_UNORM_SHORT_565 | 0x10D4 |
| CL_UNORM_SHORT_555 | 0x10D5 |
| CL_UNORM_INT_101010 | 0x10D6 |
| CL_SIGNED_INT8 | 0x10D7 |
| CL_SIGNED_INT16 | 0x10D8 |
| CL_SIGNED_INT32 | 0x10D9 |
| CL_UNSIGNED_INT8 | 0x10DA |
| CL_UNSIGNED_INT16 | 0x10DB |
| CL_UNSIGNED_INT32 | 0x10DC |
| CL_HALF_FLOAT | 0x10DD |
| CL_FLOAT | 0x10DE |
5.4. Variables
In PTX, a variable declaration describes both the variable’s type and its state space. In addition to fundamental types, PTX supports types for simple aggregate objects such as vectors and arrays.
5.4.1. Variable Declarations
All storage for data is specified with variable declarations. Every variable must reside in one of the state spaces enumerated in the previous section.
A variable declaration names the space in which the variable resides, its type and size, its name, an optional array size, an optional initializer, and an optional fixed address for the variable.
Predicate variables may only be declared in the register state space.
Examples
.global .u32 loc;
.reg .s32 i;
.const .f32 bias[] = {-1.0, 1.0};
.global .u8 bg[4] = {0, 0, 0, 0};
.reg .v4 .f32 accel;
.reg .pred p, q, r;
5.4.2. Vectors
Limited-length vector types are supported. Vectors of length 2 and 4 of any non-predicate
fundamental type can be declared by prefixing the type with .v2 or .v4. Vectors must be
based on a fundamental type, and they may reside in the register space. Vectors cannot exceed
128-bits in length; for example, .v4 .f64 is not allowed. Three-element vectors may be
handled by using a .v4 vector, where the fourth element provides padding. This is a common case
for three-dimensional grids, textures, etc.
Examples
.global .v4 .f32 V; // a length-4 vector of floats
.shared .v2 .u16 uv; // a length-2 vector of unsigned 16-bit ints
.global .v4 .b8 v; // a length-4 vector of bytes
By default, vector variables are aligned to a multiple of their overall size (vector length times base-type size), to enable vector load and store instructions which require addresses aligned to a multiple of the access size.
5.4.3. Array Declarations
Array declarations are provided to allow the programmer to reserve space. To declare an array, the variable name is followed with dimensional declarations similar to fixed-size array declarations in C. The size of each dimension is a constant expression.
Examples
.local .u16 kernel[19][19];
.shared .u8 mailbox[128];
The size of the array specifies how many elements should be reserved. For the declaration of array kernel above, 19*19 = 361 halfwords are reserved, for a total of 722 bytes.
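The storage computation above generalizes to any fixed-size array; a small Python sketch (a hypothetical helper, not part of PTX):

```python
from math import prod

def array_storage_bytes(dims, elem_bytes):
    """Bytes reserved for a fixed-size array: product of the dimension
    sizes times the element size."""
    return prod(dims) * elem_bytes

# .local .u16 kernel[19][19]; -> 19*19 = 361 halfwords = 722 bytes
print(array_storage_bytes([19, 19], 2))   # → 722
# .shared .u8 mailbox[128]; -> 128 bytes
print(array_storage_bytes([128], 1))      # → 128
```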
When declared with an initializer, the first dimension of the array may be omitted. The size of the first array dimension is determined by the number of elements in the array initializer.
Examples
.global .u32 index[] = { 0, 1, 2, 3, 4, 5, 6, 7 };
.global .s32 offset[][2] = { {-1, 0}, {0, -1}, {1, 0}, {0, 1} };
Array index has eight elements, and array offset is a 4x2 array.
5.4.4. Initializers
Declared variables may specify an initial value using a syntax similar to C/C++, where the variable name is followed by an equals sign and the initial value or values for the variable. A scalar takes a single value, while vectors and arrays take nested lists of values inside of curly braces (the nesting matches the dimensionality of the declaration).
As in C, array initializers may be incomplete, i.e., the number of initializer elements may be less than the extent of the corresponding array dimension, with remaining array locations initialized to the default value for the specified array type.
Examples
.const .f32 vals[8] = { 0.33, 0.25, 0.125 };
.global .s32 x[3][2] = { {1,2}, {3} };
is equivalent to
.const .f32 vals[8] = { 0.33, 0.25, 0.125, 0.0, 0.0, 0.0, 0.0, 0.0 };
.global .s32 x[3][2] = { {1,2}, {3,0}, {0,0} };
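The zero-completion rule can be sketched in Python (a hypothetical model of the behavior, not part of any PTX tool):

```python
def complete(init, dims, default=0):
    """Pad an incomplete nested initializer out to the declared array
    extent, filling missing positions with the default value."""
    if not dims:
        return init if init is not None else default
    items = list(init) if init is not None else []
    items += [None] * (dims[0] - len(items))
    return [complete(e, dims[1:], default) for e in items]

print(complete([[1, 2], [3]], [3, 2]))   # → [[1, 2], [3, 0], [0, 0]]
```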
Currently, variable initialization is supported only for constant and global state spaces. Variables in constant and global state spaces with no explicit initializer are initialized to zero by default. Initializers are not allowed in external variable declarations.
Variable names appearing in initializers represent the address of the variable; this can be used to
statically initialize a pointer to a variable. Initializers may also contain var+offset
expressions, where offset is a byte offset added to the address of var. Only variables in
.global or .const state spaces may be used in initializers. By default, the resulting
address is the offset in the variable’s state space (as is the case when taking the address of a
variable with a mov instruction). An operator, generic(), is provided to create a generic
address for variables used in initializers.
Starting with PTX ISA version 7.1, an operator mask() is provided, where mask is an integer
immediate. The only allowed expressions in the mask() operator are integer constant expressions
and symbol expressions representing the address of a variable. The mask() operator extracts n
consecutive bits from the expression used in the initializer and inserts these bits at the lowest
position of the initialized variable. The number n and the starting position of the bits to be
extracted are specified by the integer immediate mask. PTX ISA version 7.1 only supports
extracting a single byte, starting at a byte boundary, from the address of the variable. PTX ISA
version 7.3 supports integer constant expressions as operands of the mask() operator.
Supported values for mask are: 0xFF, 0xFF00, 0xFF0000, 0xFF000000, 0xFF00000000, 0xFF0000000000,
0xFF000000000000, 0xFF00000000000000.
Examples
.const .u32 foo = 42;
.global .u32 bar[] = { 2, 3, 5 };
.global .u32 p1 = foo; // offset of foo in .const space
.global .u32 p2 = generic(foo); // generic address of foo
// array of generic-address pointers to elements of bar
.global .u32 parr[] = { generic(bar), generic(bar)+4,
generic(bar)+8 };
// examples using mask() operator are pruned for brevity
.global .u8 addr[] = {0xff(foo), 0xff00(foo), 0xff0000(foo), ...};
.global .u8 addr2[] = {0xff(foo+4), 0xff00(foo+4), 0xff0000(foo+4),...}
.global .u8 addr3[] = {0xff(generic(foo)), 0xff00(generic(foo)),...}
.global .u8 addr4[] = {0xff(generic(foo)+4), 0xff00(generic(foo)+4),...}
// mask() operator with integer const expression
.global .u8 addr5[] = { 0xFF(1000 + 546), 0xFF00(131187), ...};
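The bit-extraction behavior of mask() can be modeled in Python as follows (a sketch using a hypothetical address value, not a real symbol address):

```python
def mask_extract(value, mask):
    """Model of mask(): extract the bits selected by `mask` and place
    them at the lowest bit position of the result."""
    shift = (mask & -mask).bit_length() - 1   # position of lowest set bit
    return (value & mask) >> shift

addr = 0x12345678                          # hypothetical variable address
print(hex(mask_extract(addr, 0xFF)))       # → 0x78
print(hex(mask_extract(addr, 0xFF00)))     # → 0x56
print(hex(mask_extract(addr, 0xFF0000)))   # → 0x34
```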
Note
PTX 3.1 redefines the default addressing for global variables in initializers, from generic
addresses to offsets in the global state space. Legacy PTX code is treated as having an implicit
generic() operator for each global variable used in an initializer. PTX 3.1 code should
either include explicit generic() operators in initializers, use cvta.global to form
generic addresses at runtime, or load from the non-generic address using ld.global.
Device function names appearing in initializers represent the address of the first instruction in the function; this can be used to initialize a table of function pointers to be used with indirect calls. Beginning in PTX ISA version 3.1, kernel function names can be used as initializers, e.g., to initialize a table of kernel function pointers to be used with CUDA Dynamic Parallelism to launch kernels from the GPU. See the CUDA Dynamic Parallelism Programming Guide for details.
Labels cannot be used in initializers.
Variables that hold addresses of variables or functions should be of type .u8 or .u32 or
.u64.
Type .u8 is allowed only if the mask() operator is used.
Initializers are allowed for all types except .f16, .f16x2 and .pred.
Examples
.global .s32 n = 10;
.global .f32 blur_kernel[][3]
= {{.05,.1,.05},{.1,.4,.1},{.05,.1,.05}};
.global .u32 foo[] = { 2, 3, 5, 7, 9, 11 };
.global .u64 ptr = generic(foo); // generic address of foo[0]
.global .u64 ptr = generic(foo)+8; // generic address of foo[2]
5.4.5. Alignment
Byte alignment of storage for all addressable variables can be specified in the variable
declaration. Alignment is specified using an optional .align byte-count specifier immediately
following the state-space specifier. The variable will be aligned to an address which is an integer
multiple of byte-count. The alignment value byte-count must be a power of two. For arrays, alignment
specifies the address alignment for the starting address of the entire array, not for individual
elements.
The default alignment for scalar and array variables is to a multiple of the base-type size. The default alignment for vector variables is to a multiple of the overall vector size.
Examples
// allocate array at 4-byte aligned address. Elements are bytes.
.const .align 4 .b8 bar[8] = {0,0,0,0,2,0,0,0};
Note that all PTX instructions that access memory require that the address be aligned to a multiple
of the access size. The access size of a memory instruction is the total number of bytes accessed in
memory. For example, the access size of ld.v4.b32 is 16 bytes, while the access size of
atom.f16x2 is 4 bytes.
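The default-alignment and access-size rules can be modeled in Python (hypothetical helpers for illustration):

```python
def default_alignment(base_size, vec_len=1):
    """Default alignment: base-type size for scalars and arrays, the
    overall vector size for vector variables."""
    return base_size * vec_len

def is_access_aligned(addr, access_size):
    """Memory instructions require the address to be a multiple of the
    access size (the total number of bytes accessed)."""
    return addr % access_size == 0

print(default_alignment(4, 4))          # .v4 .f32 → 16
print(is_access_aligned(0x1010, 16))    # 16-byte access (e.g. ld.v4.b32) → True
print(is_access_aligned(0x1008, 16))    # → False
```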
5.4.6. Parameterized Variable Names
Since PTX supports virtual registers, it is quite common for a compiler frontend to generate a large number of register names. Rather than require explicit declaration of every name, PTX supports a syntax for creating a set of variables having a common prefix string appended with integer suffixes.
For example, suppose a program uses a large number, say one hundred, of .b32 variables, named
%r0, %r1, …, %r99. These 100 register variables can be declared as follows:
.reg .b32 %r<100>; // declare %r0, %r1, ..., %r99
This shorthand syntax may be used with any of the fundamental types and with any state space, and may be preceded by an alignment specifier. Array variables cannot be declared this way, nor are initializers permitted.
5.4.7. Variable Attributes
Variables may be declared with an optional .attribute directive which allows specifying special
attributes of variables. The keyword .attribute is followed by an attribute specification inside
parentheses. Multiple attributes are separated by commas.
Variable and Function Attribute Directive: .attribute describes the .attribute
directive.
5.4.8. Variable and Function Attribute Directive: .attribute
.attribute
Variable and function attributes
Description
Used to specify special attributes of a variable or a function.
The following attributes are supported.
.managed
The .managed attribute specifies that the variable will be allocated at a location in the unified virtual memory environment where the host and other devices in the system can reference the variable directly. This attribute can only be used with variables in the .global state space. See the CUDA UVM-Lite Programming Guide for details.
.unified
The .unified attribute specifies that the function has the same memory address on the host and on other devices in the system. The integer constants uuid1 and uuid2 respectively specify the upper and lower 64 bits of the unique identifier associated with the function or the variable. This attribute can only be used on device functions or on variables in the .global state space. Variables with the .unified attribute are read-only and must be loaded by specifying the .unified qualifier on the address operand of the ld instruction; otherwise, the behavior is undefined.
PTX ISA Notes
Introduced in PTX ISA version 4.0.
Support for function attributes introduced in PTX ISA version 8.0.
Target ISA Notes
The .managed attribute requires sm_30 or higher. The .unified attribute requires sm_90 or higher.
Examples
.global .attribute(.managed) .s32 g;
.global .attribute(.managed) .u64 x;
.global .attribute(.unified(19,95)) .f32 f;
.func .attribute(.unified(0xAB, 0xCD)) bar() { ... }
5.5. Tensors
A tensor is a multi-dimensional matrix structure in memory. A tensor is defined by the following properties:
Dimensionality
Dimension sizes across each dimension
Individual element types
Tensor stride across each dimension
PTX supports instructions which can operate on the tensor data. PTX Tensor instructions include:
Copying data between global and shared memories
Reducing the destination tensor data with the source.
The Tensor data can be operated on by various wmma.mma, mma and wgmma.mma_async
instructions.
PTX Tensor instructions treat the tensor data in the global memory as a multi-dimensional structure and treat the data in the shared memory as linear data.
5.5.1. Tensor Dimension, size and format
Tensors can have dimensions: 1D, 2D, 3D, 4D or 5D.
Each dimension has a size which represents the number of elements along that dimension. The elements can have one of the following types:
Bit-sized types: .b32, .b64
Sub-byte types: .b4x16, .b4x16_p64, .b6x16_p32, .b6p2x16
Integer types: .u8, .u16, .u32, .s32, .u64, .s64
Floating-point and alternate floating-point types: .f16, .bf16, .tf32, .f32, .f64 (rounded to nearest even)
A tensor can have padding at the end of each dimension to provide alignment for the data in the subsequent dimensions. The tensor stride can be used to specify the amount of padding in each dimension.
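One way to picture tensor strides is as per-dimension byte multipliers that may include padding. The following Python sketch assumes dimension 0 is densely packed, which is one plausible layout for illustration, not a statement of the hardware's exact addressing:

```python
def element_offset(coords, strides_bytes, elem_bytes):
    """Byte offset of an element: dimension 0 is densely packed and each
    higher dimension has an explicit byte stride that may include
    padding (an illustrative layout, not a hardware spec)."""
    off = coords[0] * elem_bytes
    for c, s in zip(coords[1:], strides_bytes):
        off += c * s
    return off

# 2D tensor: rows of 100 4-byte elements padded out to a 512-byte stride
print(element_offset([3, 2], [512], 4))   # → 1036
```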
5.5.1.1. Sub-byte Types
5.5.1.1.1. Padding and alignment of the sub-byte types
The sub-byte types are expected to be packed contiguously in the global memory, and the Tensor copy instruction will expand them by appending empty spaces as shown below:
Type .b4x16: With this type, there is no padding involved; the packed sixteen .b4 elements in a 64-bit container are copied as-is between the shared memory and the global memory.
Type .b4x16_p64: With this type, sixteen contiguous 4-bit elements are copied from global memory to the shared memory with 64 bits of padding appended, as shown in Figure 5
Figure 5 Layout for .b4x16_p64
The padded region that gets added is un-initialized.
Type .b6x16_p32: With this type, sixteen 6-bit elements are copied from global memory to the shared memory with 32 bits of padding appended, as shown in Figure 6
Figure 6 Layout for .b6x16_p32
The padded region that gets added is un-initialized.
Type .b6p2x16: With this type, sixteen elements, each containing 6 bits of data at the LSB and 2 bits of padding at the MSB, are copied from shared memory into the global memory by discarding the 2 bits of padding and packing the 6-bit data contiguously, as shown in Figure 7
Figure 7 Layout for .b6p2x16
In the case of .b6x16_p32 and .b4x16_p64, the padded region that gets added is
un-initialized.
The types .b6x16_p32 and .b6p2x16 share the same encoding value in the
descriptor (value 15) as the two types are applicable for different types of
tensor copy operations:
Type | Valid Tensor Copy Direction |
|---|---|
.b6x16_p32 | Global memory to shared memory |
.b6p2x16 | Shared memory to global memory |
5.5.2. Tensor Access Modes
Tensor data can be accessed in two modes:
-
Tiled mode:
In tiled mode, the source multi-dimensional tensor layout is preserved at the destination.
-
Im2col mode:
In im2col mode, the elements in the Bounding Box of the source tensor are rearranged into columns at the destination. Refer here for more details.
5.5.3. Tiled Mode
This section describes how tensors and tensor accesses work in tiled mode.
5.5.3.1. Bounding Box
A tensor can be accessed in chunks known as Bounding Boxes. A Bounding Box has the same dimensionality as the tensor it accesses. The size of each Bounding Box must be a multiple of 16 bytes, and the address of the Bounding Box must be aligned to 16 bytes.
Bounding Box has the following access properties:
Bounding Box dimension sizes
Out of boundary access mode
Traversal strides
The tensor-coordinates, specified in the PTX tensor instructions, specify the starting offset of the bounding box. Starting offset of the bounding box along with the rest of the bounding box information together are used to determine the elements which are to be accessed.
5.5.3.2. Traversal-Stride
While the Bounding Box is iterating over the tensor along a dimension, the traversal stride specifies the exact number of elements to be skipped. If no elements are to be skipped, the default value of 1 must be specified.
The traversal stride in dimension 0 can be used for the Interleave layout. For non-interleaved layout, the traversal stride in dimension 0 must always be 1.
Figure 8 illustrates tensor, tensor size, tensor stride, Bounding Box size and traversal stride.
Figure 8 Tiled mode bounding box, tensor size and traversal stride
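One plausible reading of the traversal stride, in which only every stride-th element within a dimension's extent is accessed, can be sketched in Python (an illustration, not the exact hardware iteration order):

```python
def accessed_indices(start, extent, stride):
    """Indices touched along one dimension: every stride-th element of
    the bounding box extent, beginning at the starting offset."""
    return [start + i for i in range(0, extent, stride)]

print(accessed_indices(0, 4, 1))   # stride 1 (no skipping) → [0, 1, 2, 3]
print(accessed_indices(2, 8, 2))   # skip every other element → [2, 4, 6, 8]
```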
5.5.3.3. Out of Boundary Access
PTX Tensor operation can detect and handle the case when the Bounding Box crosses the tensor boundary in any dimension. There are 2 modes:
-
Zero fill mode:
Elements in the Bounding Box which fall outside of the tensor boundary are set to 0.
-
OOB-NaN fill mode:
Elements in the Bounding Box which fall outside of the tensor boundary are set to a special NaN called OOB-NaN.
Figure 9 shows an example of the out of boundary access.
Figure 9 Out of boundary access
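The two fill modes can be modeled in Python for a 1D case (a sketch; the fill value would be 0 in zero-fill mode or a NaN in OOB-NaN fill mode):

```python
def read_with_fill(tensor, idx, fill=0.0):
    """Bounding-Box element read: indices outside the tensor boundary
    return the fill value instead of real data."""
    if 0 <= idx < len(tensor):
        return tensor[idx]
    return fill

data = [1.0, 2.0, 3.0]
print([read_with_fill(data, i) for i in range(-1, 4)])
# → [0.0, 1.0, 2.0, 3.0, 0.0]
```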
5.5.3.4. .tile::scatter4 and .tile::gather4 modes
These modes are similar to the tiled mode, with the restriction that they work only on 2D tensor data.
Tile::scatter4 and Tile::gather4 modes are used to access multiple non-contiguous rows of tensor data.
In Tile::scatter4 mode, a single 2D source tensor is divided into four rows in the 2D destination tensor.
In Tile::gather4 mode, four rows in the source 2D tensor are combined to form a single 2D destination tensor.
These modes work on four rows and hence the instruction will take:
four tensor coordinates across the dimension 0
one tensor coordinate across the dimension 1
The interleave layout is not supported for .tile::scatter4 and .tile::gather4 modes.
All other constraints and rules of the tile mode apply to these modes as well.
5.5.3.4.1. Bounding Box
For Tile::scatter4 and Tile::gather4 modes, four request coordinates will form four Bounding
Boxes in the tensor space.
Figure 10 shows an example of the same with start coordinates (1, 2), (1, 5), (1, 0) and (1, 9).
The size of the bounding box in the dimension 0 represents the length of the rows. The size of the bounding box in the dimension 1 must be one.
Figure 10 tiled::scatter4/tiled::gather4 mode bounding box example
5.5.4. im2col mode
Im2col mode supports the following tensor dimensionalities: 3D, 4D and 5D. In this mode, the tensor data is treated as a batch of images with the following properties:
N : number of images in the batch
D, H, W : size of a 3D image (depth, height and width)
C: channels per image element
The above properties are associated with 3D, 4D and 5D tensors as follows:
Dimension | N/D/H/W/C applicability |
|---|---|
3D | NWC |
4D | NHWC |
5D | NDHWC |
5.5.4.1. Bounding Box
In im2col mode, the Bounding Box is defined in DHW space. Boundaries along other dimensions are specified by Pixels-per-Column and Channels-per-Pixel parameters as described below.
The dimensionality of the Bounding Box is two less than the tensor dimensionality.
The following properties describe how the elements are accessed in im2col mode:
Bounding-Box Lower-Corner
Bounding-Box Upper-Corner
Pixels-per-Column
Channels-per-Pixel
Bounding-box Lower-Corner and Bounding-box Upper-Corner specify the two opposite corners of the Bounding Box in the DHW space. Bounding-box Lower-Corner specifies the corner with the smallest coordinate and Bounding-box Upper-Corner specifies the corner with the largest coordinate.
Bounding-box Upper- and Lower-Corners are 16-bit signed values whose limits vary across the dimensions as shown below:
| 3D | 4D | 5D |
|---|---|---|---|
Upper- / Lower- Corner sizes | [-2^15, 2^15-1] | [-2^7, 2^7-1] | [-2^4, 2^4-1] |
Figure 11 and Figure 12 show the Upper-Corners and Lower-Corners.
Figure 11 im2col mode bounding box example 1
Figure 12 im2col mode bounding box example 2
The Bounding-box Upper- and Lower- Corners specify only the boundaries and not the number of elements to be accessed. Pixels-per-Column specifies the number of elements to be accessed in the NDHW space.
Channels-per-Pixel specifies the number of elements to access across the C dimension.
The tensor coordinates, specified in the PTX tensor instructions, behave differently in different dimensions:
Across N and C dimensions: specify the starting offsets along the dimension, similar to the tiled mode.
Across DHW dimensions: specify the location of the convolution filter base in the tensor space. The filter corner location must be within the bounding box.
The im2col offsets, specified in the PTX tensor instructions in im2col mode, are added to the filter base coordinates to determine the starting location in the tensor space from where the elements are accessed.
The size of the im2col offsets varies across the dimensions and their valid ranges are as shown below:
| 3D | 4D | 5D |
|---|---|---|---|
im2col offsets range | [0, 2^16-1] | [0, 2^8-1] | [0, 2^5-1] |
Following are some examples of the im2col mode accesses:
-
Example 1 (Figure 13):
Tensor Size[0] = 64
Tensor Size[1] = 9
Tensor Size[2] = 14
Tensor Size[3] = 64
Pixels-per-Column = 64
channels-per-pixel = 8
Bounding-Box Lower-Corner W = -1
Bounding-Box Lower-Corner H = -1
Bounding-Box Upper-Corner W = -1
Bounding-Box Upper-Corner H = -1
tensor coordinates = (7, 7, 4, 0)
im2col offsets = (0, 0)
Figure 13 im2col mode example 1
-
Example 2 (Figure 14):
Tensor Size[0] = 64
Tensor Size[1] = 9
Tensor Size[2] = 14
Tensor Size[3] = 64
Pixels-per-Column = 64
channels-per-pixel = 8
Bounding-Box Lower-Corner W = 0
Bounding-Box Lower-Corner H = 0
Bounding-Box Upper-Corner W = -2
Bounding-Box Upper-Corner H = -2
tensor coordinates = (7, 7, 4, 0)
im2col offsets = (2, 2)
Figure 14 im2col mode example 2
5.5.4.2. Traversal Stride
Unlike in tiled mode, the traversal stride in im2col mode does not impact the total number of elements (or pixels) being accessed. Pixels-per-Column determines the total number of elements accessed in im2col mode.
The number of elements traversed along the D, H and W dimensions is strided by the traversal stride for that dimension.
The following example, together with Figure 15, illustrates an access with traversal strides:
Tensor Size[0] = 64
Tensor Size[1] = 8
Tensor Size[2] = 14
Tensor Size[3] = 64
Traversal Stride = 2
Pixels-per-Column = 32
channels-per-pixel = 16
Bounding-Box Lower-Corner W = -1
Bounding-Box Lower-Corner H = -1
Bounding-Box Upper-Corner W = -1
Bounding-Box Upper-Corner H = -1.
Tensor coordinates in the instruction = (7, 7, 5, 0)
Im2col offsets in the instruction : (1, 1)
Figure 15 im2col mode traversal stride example
5.5.4.3. Out of Boundary Access
In im2col mode, when the number of requested pixels in the NDHW space, specified by Pixels-per-Column, exceeds the number of available pixels in the image batch, an out-of-boundary access is performed.
Similar to tiled mode, zero fill or OOB-NaN fill can be performed based on the Fill-Mode
specified.
5.5.5. im2col::w and im2col::w::128 modes
These modes are similar to the im2col mode with the restriction that elements are accessed across
the W dimension only while keeping the H and D dimension constant.
All the constraints and rules of the im2col mode apply to these modes as well. Note that a valid Swizzling Mode must be set; in other words, the swizzling mode must be neither (i) the no-swizzle mode nor (ii) the 128-byte swizzle mode with 32-byte atomicity with 8-byte flip.
The number of elements accessed in the im2col::w::128 mode is fixed and is equal to 128.
The number of elements accessed in the im2col::w mode depends on the Pixels-per-Column
field in the TensorMap.
5.5.5.1. Bounding Box
In these modes, the size of the bounding box in D and H dimensions are 1.
The D and H dimensions in the tensor coordinates argument in the PTX instruction specify
the position of the bounding box in the tensor space.
The Bounding-Box Lower-Corner-W and Bounding-Box Upper-Corner-W specify the two opposite
corners of the Bounding Box in the W dimension.
The W dimension in the tensor coordinates argument in the PTX instruction specify the location
of the first element that is to be accessed in the bounding box.
Number of pixels loaded in im2col::w mode is as specified by Pixels-per-Column in the TensorMap.
Number of pixels loaded in im2col::w::128 mode is always 128. So, Pixels-per-Column is ignored
in im2col::w::128 mode.
Figure 16 shows an example of the im2col::w and im2col::w::128 modes.
Figure 16 im2col::w and im2col::w::128 modes example
The first element can lie outside of the Bounding Box in the W dimension only, and only on the left side of the Bounding Box. Figure 17 shows an example of this.
Figure 17 im2col::w and im2col::w::128 modes first element outside Bounding Box example
5.5.5.2. Traversal Stride
This is similar to the im2col mode, except that the number of elements traversed along only the
W dimension is strided by the traversal stride, as specified in the TensorMap.
5.5.5.3. wHalo
In im2col::w mode, the wHalo argument in the PTX instruction specifies how many filter
halo elements must be loaded at the end of the image.
In im2col::w::128 mode, the halo elements are loaded after every 32 elements in the bounding
box along the W dimension. The wHalo argument in the PTX instruction specifies how many
halo elements must be loaded after every 32 elements.
Following is an example of .im2col::w mode access:
Tensor Size [0] = 128
Tensor Size [1] = 9
Tensor Size [2] = 7
Tensor Size [3] = 64
Pixels-per-column = 128
Channels-per-pixel = 64
Bounding Box Lower Corner W = 0
Bounding Box Upper Corner W = 0
Tensor Coordinates in the instruction = (7, 2, 3, 0)
wHalo in the instruction = 2 (as 3x3 convolution filter is used)
A tensor copy operation with the above parameters loads 128 pixels and the two halo pixels as shown in Figure 18.
Figure 18 tensor copy operation with im2col::w mode example
The halo pixels are always loaded in the shared memory next to the main row pixels as shown in Figure 18.
Following is an example of .im2col::w::128 mode access:
Tensor Size [0] = 128
Tensor Size [1] = 9
Tensor Size [2] = 7
Tensor Size [3] = 64
Channels-per-pixel = 64
Bounding Box Lower Corner W = 0
Bounding Box Upper Corner W = 0
Tensor Coordinates in the instruction = (7, 2, 3, 0)
wHalo in the instruction = 2 (as 3x3 convolution filter is used)
A tensor copy operation with the above parameters loads 128 elements such that after every 32 elements, wHalo number of elements are loaded as shown in Figure 19.
Figure 19 tensor copy operation with im2col::w::128 mode example
5.5.5.4. wOffset
In the convolution calculations, the same elements along the W dimension are reused for different
locations within the convolution filter footprint. Based on the number of times a pixel is used, the
pixels may be loaded into different shared memory buffers. Each buffer can be loaded by a separate
tensor copy operation.
The wOffset argument in the tensor copy and prefetch instruction adjusts the source pixel location
for each buffer. The exact position of the buffer is adjusted along the W dimension using the
following formula:
Bounding Box Lower Corner W += wOffset
Bounding Box Upper Corner W += wOffset
W += wOffset
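The formulas above can be restated as a small Python helper (a model for illustration only):

```python
def apply_woffset(lower_w, upper_w, w, woffset):
    """Shift the bounding-box W corners and the W coordinate by wOffset,
    per the formulas above."""
    return lower_w + woffset, upper_w + woffset, w + woffset

# buffer 2 of Example 1 below: corners (-1, 0), W = -1, wOffset = 1
print(apply_woffset(-1, 0, -1, 1))   # → (0, 1, 0)
```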
Following are examples of tensor copy to multiple buffers with various wHalo and wOffset values:
Example 1:
Tensor Size [0] = 128
Tensor Size [1] = 9
Tensor Size [2] = 67
Tensor Size [3] = 64
Pixels-per-Column = 128
Channels-per-pixel = 64
Bounding Box Lower Corner W = -1
Bounding Box Upper Corner W = 0
Traversal Stride = 2
Tensor Coordinates in the instruction = (7, 2, -1, 0)
Shared memory buffer 1:
wHalo = 1
wOffset = 0
Shared memory buffer 2:
wHalo = 0
wOffset = 1
Figure 20 tensor copy operation to buffer 1 of Example 1
Figure 21 tensor copy operation to buffer 2 of Example 1
Example 2:
Tensor Size [0] = 128
Tensor Size [1] = 7
Tensor Size [2] = 7
Tensor Size [3] = 64
Pixels-per-Column = 128
Channels-per-pixel = 64
Bounding Box Lower Corner W = -1
Bounding Box Upper Corner W = -1
Traversal Stride = 3
Tensor Coordinates in the instruction = (7, 2, -1, 0)
Shared memory buffer 1:
wHalo = 0
wOffset = 0
Shared memory buffer 2:
wHalo = 0
wOffset = 1
Shared memory buffer 3:
wHalo = 0
wOffset = 2
Figure 22 tensor copy operation to buffer 1 of Example 2
Figure 23 tensor copy operation to buffer 2 of Example 2
Figure 24 tensor copy operation to buffer 3 of Example 2
5.5.6. Interleave layout
Tensors can be interleaved, and the following interleave layouts are supported:
No interleave (NDHWC)
8 byte interleave (NC/8DHWC8) : C8 utilizes 16 bytes in memory assuming 2B per channel.
16 byte interleave (NC/16HWC16) : C16 utilizes 32 bytes in memory assuming 4B per channel.
The C information is organized in slices where sequential C elements are grouped in 16 byte or 32 byte quantities.
If the total number of channels is not a multiple of the number of channels per slice, then the last slice must be padded with zeros to make it a complete 16B or 32B slice.
Interleaved layouts are supported only for the dimensionalities : 3D, 4D and 5D.
The interleave layout is not supported for .im2col::w and .im2col::w::128 modes.
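The slice-count and zero-padding rule for interleaved channels can be sketched in Python (a hypothetical helper):

```python
def num_slices(channels, channels_per_slice):
    """Number of C slices; a partial final slice is zero-padded to a
    complete 16B or 32B slice, so round up."""
    return -(-channels // channels_per_slice)   # ceiling division

print(num_slices(32, 16))   # 32 channels, 16 per slice → 2 full slices
print(num_slices(24, 16))   # 24 channels → 2 slices, the last zero-padded
```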
5.5.7. Swizzling Modes
The layout of the data in shared memory can be different from that of global memory, for access-performance reasons. The following describes the various swizzling modes:
-
No swizzle mode:
There is no swizzling in this mode; the destination data layout is identical to the source data layout.
0   1   2   3   4   5   6   7
0   1   2   3   4   5   6   7
… Pattern repeats …
-
32 byte swizzle mode:
The following table, where each element (numbered cell) is 16 bytes and the starting address is aligned to 256 bytes, shows the pattern of the destination data layout:
0   1   2   3   4   5   6   7
1   0   3   2   5   4   7   6
… Pattern repeats …
An example of the 32 byte swizzle mode for an NC/(32B)HWC(32B) tensor of dimension 1x2x10x10xC16, with the innermost dimension holding a slice of 16 channels at 2 bytes per channel, is shown in Figure 25.
Figure 25 32-byte swizzle mode example
Figure 26 shows the two fragments of the tensor: one for C/(32B) = 0 and another for C/(32B) = 1.
Figure 26 32-byte swizzle mode fragments
Figure 27 shows the destination data layout with 32 byte swizzling.
Figure 27 32-byte swizzle mode destination data layout
-
64 byte swizzle mode:
The following table, where each element (numbered cell) is 16 bytes and the starting address is aligned to 512 bytes, shows the pattern of the destination data layout:
0   1   2   3   4   5   6   7
1   0   3   2   5   4   7   6
2   3   0   1   6   7   4   5
3   2   1   0   7   6   5   4
… Pattern repeats …
An example of the 64 byte swizzle mode for NHWC tensor of 1x10x10x64 dimension, with 2 bytes / channel and 32 channels, is shown in Figure 28.
Figure 28 64-byte swizzle mode example
Each colored cell represents 8 channels. Figure 29 shows the source data layout.
Figure 29 64-byte swizzle mode source data layout
Figure 30 shows the destination data layout with 64 byte swizzling.
Figure 30 64-byte swizzle mode destination data layout
-
96 byte swizzle mode:
The following table, where each element (numbered cell) is 16 bytes, shows the swizzling pattern of the destination data layout:
0   1   2   3   4   5   6   7
1   0   3   2   5   4   7   6
… Pattern repeats …
An example of the data layout in global memory and its swizzled data layout in shared memory, where each element (colored cell) is 16 bytes and the starting address is aligned to 256 bytes, is shown in Figure 31.
Figure 31 96-byte swizzle mode example
-
128 byte swizzle mode:
The 128-byte swizzling mode supports the following sub-modes:
-
16-byte atomicity sub-mode:
In this sub-mode, each 16 bytes of data are kept intact while swizzling.
The following table, where each element (numbered cell) is 16 bytes and the starting address is aligned to 1024 bytes, shows the pattern of the destination data layout:
0   1   2   3   4   5   6   7
1   0   3   2   5   4   7   6
2   3   0   1   6   7   4   5
3   2   1   0   7   6   5   4
4   5   6   7   0   1   2   3
5   4   7   6   1   0   3   2
6   7   4   5   2   3   0   1
7   6   5   4   3   2   1   0
… Pattern repeats …
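The repeating patterns in the tables above follow an XOR rule: in row i, the cell at column j holds element j XOR (i mod k), where k is 2 for 32-byte swizzle, 4 for 64-byte swizzle, and 8 for 128-byte swizzle with 16-byte atomicity. This is an observation from the tables, sketched in Python:

```python
def swizzle_row(i, k, ncols=8):
    """Row i of the repeating destination pattern: column j holds the
    source element j XOR (i mod k)."""
    return [j ^ (i % k) for j in range(ncols)]

print(swizzle_row(1, 2))   # 32-byte swizzle, row 1 → [1, 0, 3, 2, 5, 4, 7, 6]
print(swizzle_row(3, 4))   # 64-byte swizzle, row 3 → [3, 2, 1, 0, 7, 6, 5, 4]
print(swizzle_row(5, 8))   # 128-byte swizzle, row 5 → [5, 4, 7, 6, 1, 0, 3, 2]
```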
An example of the 128 byte swizzle mode for NHWC tensor of 1x10x10x64 dimension, with 2 bytes / channel and 64 channels, is shown in Figure 32.
Figure 32 128-byte swizzle mode example
Each colored cell represents 8 channels. Figure 33 shows the source data layout.
Figure 33 128-byte swizzle mode source data layout
Figure 34 shows the destination data layout with 128 byte swizzling.
Figure 34 128-byte swizzle mode destination data layout
-
32-byte atomicity sub-mode:
In this sub-mode, each 32 bytes of data are kept intact while swizzling.
The following table, where each element (numbered cell) is 16 bytes, shows the swizzling pattern of the destination data layout:
0 1   2 3   4 5   6 7
2 3   0 1   6 7   4 5
4 5   6 7   0 1   2 3
6 7   4 5   2 3   0 1
… Pattern repeats …
This sub-mode requires 32-byte alignment in shared memory.
An example of the data layout in global memory and its swizzled data layout in shared memory, where each element (colored cell) is 16 bytes, is shown in Figure 35.
Figure 35 128-byte swizzle mode example with 32-byte atomicity
-
32-byte atomicity with 8-byte flip sub-mode:
The swizzling pattern for this sub-mode is similar to the 32-byte atomicity sub-mode, except that the adjacent 8 bytes within the 16-byte data are flipped on every alternate shared memory line. Note that this mode is legal only when cp.async.bulk.tensor specifies the copy direction as .shared::cluster.global or .shared::cta.global.
An example of the data layout in global memory and its swizzled data layout in shared memory, where each element (colored cell) is 16 bytes (the two 8-byte sub-elements of each 16-byte colored cell are shown to illustrate the flip), is shown in Figure 36
Figure 36 128-byte swizzle mode example with 32-byte atomicity with 8-byte flip
-
64-byte atomicity sub-mode:
In this sub-mode, each 64 bytes of data are kept intact while swizzling.
The following table, where each element (numbered cell) is 16 bytes, shows the swizzling pattern of the destination data layout:
0 1 2 3   4 5 6 7
4 5 6 7   0 1 2 3
… Pattern repeats …
This sub-mode requires 64-byte alignment in shared memory.
An example of the data layout in global memory and its swizzled data layout in shared memory, where each element (colored cell) is 16 bytes, is shown in Figure 37
Figure 37 128-byte swizzle mode example with 64-byte atomicity
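The swizzling patterns of both sub-modes follow a single rule: the index of each atomic chunk is XORed with the shared-memory line index within the repeating pattern. A minimal Python sketch (the function name and indexing convention are illustrative, not part of PTX):

```python
def element_at(line, pos, atomicity_bytes):
    """16-byte element index found at destination position `pos` (0..7) of a
    128-byte shared-memory line `line`, for 128-byte swizzle mode where each
    chunk of `atomicity_bytes` (32 or 64) moves as one unit."""
    unit = atomicity_bytes // 16   # 16-byte elements per atomic chunk
    period = 8 // unit             # pattern repeats every 4 (32B) or 2 (64B) lines
    src_chunk = (pos // unit) ^ (line % period)
    return src_chunk * unit + pos % unit

# Reproduces the destination-layout tables above, e.g. line 1 with 32-byte
# atomicity yields [2, 3, 0, 1, 6, 7, 4, 5]:
for line in range(4):
    print([element_at(line, p, 32) for p in range(8)])
```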
Table 14 lists the valid combinations of swizzle-atomicity with the swizzling mode.

| Swizzling Mode | Swizzle-Atomicity |
|---|---|
| No Swizzling | – |
| 32B Swizzling Mode | 16B |
| 64B Swizzling Mode | 16B |
| 96B Swizzling Mode | 16B |
| 128B Swizzling Mode | 16B, 32B, 32B with 8-byte flip, 64B |
The value of the swizzle base offset is 0 when the dstMem shared memory address is located
at the following boundary:

| Swizzling Mode | Starting address of the repeating pattern |
|---|---|
| 128-Byte swizzle | 1024-Byte boundary |
| 96-Byte swizzle | 256-Byte boundary |
| 64-Byte swizzle | 512-Byte boundary |
| 32-Byte swizzle | 256-Byte boundary |
Otherwise, the swizzle base offset is a non-zero value, computed using the following formula:

| Swizzling Mode | Formula |
|---|---|
| 128-Byte swizzle | base offset = (dstMem / 128) % 8 |
| 96-Byte swizzle | base offset = (dstMem / 128) % 2 |
| 64-Byte swizzle | base offset = (dstMem / 128) % 4 |
| 32-Byte swizzle | base offset = (dstMem / 128) % 2 |
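The boundary and formula tables are consistent with each other: an address on the repeating-pattern boundary makes the formula evaluate to 0. A Python sketch of the formulas above (`swizzle_base_offset` is an illustrative name, not a PTX construct):

```python
def swizzle_base_offset(dst_mem, mode_bytes):
    """Swizzle base offset for a shared-memory destination address `dst_mem`,
    per the formula table above. `mode_bytes` is the swizzle span (32/64/96/128)."""
    modulus = {128: 8, 96: 2, 64: 4, 32: 2}[mode_bytes]
    return (dst_mem // 128) % modulus

# An address on the repeating-pattern boundary yields a base offset of 0:
assert swizzle_base_offset(1024, 128) == 0
assert swizzle_base_offset(512, 64) == 0
# Otherwise the offset is non-zero, e.g. 384 = 3 * 128 under 128-byte swizzle:
assert swizzle_base_offset(384, 128) == 3
```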
5.5.8. Tensor-map
The tensor-map is a 128-byte opaque object in .const space, .param (kernel function
parameter) space, or .global space which describes the tensor properties and the access properties
of the tensor data described in the previous sections.
A tensor-map can be created using CUDA APIs. Refer to the CUDA programming guide for more details.
6. Instruction Operands
6.1. Operand Type Information
All operands in instructions have a known type from their declarations. Each operand type must be compatible with the type determined by the instruction template and instruction type. There is no automatic conversion between types.
The bit-size type is compatible with every type having the same size. Integer types of a common size are compatible with each other. Operands having type different from but compatible with the instruction type are silently cast to the instruction type.
6.2. Source Operands
The source operands are denoted in the instruction descriptions by the names a, b, and
c. PTX describes a load-store machine, so operands for ALU instructions must all be in variables
declared in the .reg register state space. For most operations, the sizes of the operands must
be consistent.
The cvt (convert) instruction takes a variety of operand types and sizes, as its job is to
convert from nearly any data type to any other data type (and size).
The ld, st, mov, and cvt instructions copy data from one location to
another. The ld and st instructions move data between addressable state spaces and
registers. The mov instruction copies data between registers.
Most instructions have an optional predicate guard that controls conditional execution, and a few
instructions have additional predicate source operands. Predicate operands are denoted by the names
p, q, r, s.
6.3. Destination Operands
PTX instructions that produce a single result store the result in the field denoted by d (for
destination) in the instruction descriptions. The result operand is a scalar or vector variable in
the register state space.
6.4. Using Addresses, Arrays, and Vectors
Using scalar variables as operands is straightforward. The interesting capabilities begin with addresses, arrays, and vectors.
6.4.1. Addresses as Operands
All the memory instructions take an address operand that specifies the memory location being accessed. This addressable operand is one of:
- [var]: the name of an addressable variable var.
- [reg]: an integer or bit-size type register reg containing a byte address.
- [reg+immOff]: a sum of register reg containing a byte address plus a constant integer byte offset (signed, 32-bit).
- [var+immOff]: a sum of the address of addressable variable var plus a constant integer byte offset (signed, 32-bit).
- [immAddr]: an immediate absolute byte address (unsigned, 32-bit).
- var[immOff]: an array element as described in Arrays as Operands.
The register containing an address may be declared as a bit-size type or integer type.
The access size of a memory instruction is the total number of bytes accessed in memory. For
example, the access size of ld.v4.b32 is 16 bytes, while the access size of atom.f16x2 is 4
bytes.
The address must be naturally aligned to a multiple of the access size. If an address is not properly aligned, the resulting behavior is undefined. For example, among other things, the access may proceed by silently masking off low-order address bits to achieve proper rounding, or the instruction may fault.
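The natural-alignment rule reduces to a single modulo check; a Python sketch with an illustrative helper name:

```python
def naturally_aligned(addr, access_size):
    """True when `addr` is a multiple of the access size in bytes; e.g. an
    ld.v4.b32 (16-byte access) requires a 16-byte-aligned address."""
    return addr % access_size == 0

assert naturally_aligned(0x1000, 16)
assert not naturally_aligned(0x1004, 16)   # misaligned: behavior is undefined
```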
The address size may be either 32-bit or 64-bit. 128-bit addresses are not supported. Addresses are zero-extended to the specified width as needed, and truncated if the register width exceeds the state space address width for the target architecture.
Address arithmetic is performed using integer arithmetic and logical instructions. Examples include pointer arithmetic and pointer comparisons. All addresses and address computations are byte-based; there is no support for C-style pointer arithmetic.
The mov instruction can be used to move the address of a variable into a pointer. The address is
an offset in the state space in which the variable is declared. Load and store operations move data
between registers and locations in addressable state spaces. The syntax is similar to that used in
many assembly languages, where scalar variables are simply named and addresses are de-referenced by
enclosing the address expression in square brackets. Address expressions include variable names,
address registers, address register plus byte offset, and immediate address expressions which
evaluate at compile-time to a constant address.
Here are a few examples:
.shared .u16 x;
.reg .u16 r0;
.global .v4 .f32 V;
.reg .v4 .f32 W;
.const .s32 tbl[256];
.reg .b32 p;
.reg .s32 q;
ld.shared.u16 r0,[x];
ld.global.v4.f32 W, [V];
ld.const.s32 q, [tbl+12];
mov.u32 p, tbl;
6.4.1.1. Generic Addressing
If a memory instruction does not specify a state space, the operation is performed using generic
addressing. The state spaces .const, Kernel Function Parameters
(.param), .local and .shared are modeled as
windows within the generic address space. Each window is defined by a window base and a window size
that is equal to the size of the corresponding state space. A generic address maps to global
memory unless it falls within the window for const, local, or shared memory. The
Kernel Function Parameters (.param) window is contained
within the .global window. Within each window, a generic address maps to an address in the
underlying state space by subtracting the window base from the generic address.
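The window mechanics can be sketched in Python. The window bases and sizes below are made-up placeholders (the real values are chosen by the driver and hardware, not specified by PTX):

```python
# Hypothetical window layout: state space -> (window base, window size).
WINDOWS = {
    ".const":  (0x10000, 64 * 1024),
    ".local":  (0x30000, 16 * 1024),
    ".shared": (0x40000, 228 * 1024),
}

def resolve_generic(addr):
    """Map a generic address to (state space, address within that space).
    A generic address maps to global memory unless it falls within a window;
    within a window, the window base is subtracted."""
    for space, (base, size) in WINDOWS.items():
        if base <= addr < base + size:
            return space, addr - base
    return ".global", addr

assert resolve_generic(0x10000 + 8) == (".const", 8)
assert resolve_generic(0x90000000)[0] == ".global"
```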
6.4.2. Arrays as Operands
Arrays of all types can be declared, and the identifier becomes an address constant in the space where the array is declared. The size of the array is a constant in the program.
Array elements can be accessed using an explicitly calculated byte address, or by indexing into the array using square-bracket notation. The expression within square brackets is either a constant integer, a register variable, or a simple register with constant offset expression, where the offset is a constant expression that is either added or subtracted from a register variable. If more complicated indexing is desired, it must be written as an address calculation prior to use. Examples are:
ld.global.u32 s, a[0];
ld.global.u32 s, a[N-1];
mov.u32 s, a[1]; // move address of a[1] into s
6.4.3. Vectors as Operands
Vector operands can be specified as source and destination operands for instructions. However, when specified as a destination operand, all elements in the vector expression must be unique; otherwise, the behavior is undefined. Vectors may also be passed as arguments to called functions.
Vector elements can be extracted from the vector with the suffixes .x, .y, .z and
.w, as well as the typical color fields .r, .g, .b and .a.
A brace-enclosed list is used for pattern matching to pull apart vectors.
.reg .v4 .f32 V;
.reg .f32 a, b, c, d;
mov.v4.f32 {a,b,c,d}, V;
Vector loads and stores can be used to implement wide loads and stores, which may improve memory performance. The registers in the load/store operations can be a vector, or a brace-enclosed list of similarly typed scalars. Here are examples:
ld.global.v4.f32 {a,b,c,d}, [addr+16];
ld.global.v2.u32 V2, [addr+8];
Elements in a brace-enclosed vector, say {Ra, Rb, Rc, Rd}, correspond to extracted elements as follows:
Ra = V.x = V.r
Rb = V.y = V.g
Rc = V.z = V.b
Rd = V.w = V.a
6.4.4. Labels and Function Names as Operands
Labels and function names can be used only in bra/brx.idx and call instructions,
respectively. Function names can also be used in the mov instruction to get the address of the
function into a register, for use in an indirect call.
Beginning in PTX ISA version 3.1, the mov instruction may be used to take the address of kernel
functions, to be passed to a system call that initiates a kernel launch from the GPU. This feature
is part of the support for CUDA Dynamic Parallelism. See the CUDA Dynamic Parallelism Programming
Guide for details.
6.5. Type Conversion
All operands to all arithmetic, logic, and data movement instructions must be of the same type and size, except for operations where changing the size and/or type is part of the definition of the instruction. Operands of different sizes or types must be converted prior to the operation.
6.5.1. Scalar Conversions
Table 15 and
Table 16 show what
precision and format the cvt instruction uses given operands of differing types. For example, if a
cvt.s32.u16 instruction is given a u16 source operand and s32 as a destination operand,
the u16 is zero-extended to s32.
Conversions to floating-point that are beyond the range of floating-point numbers are represented
with the maximum floating-point value (IEEE 754 Inf for f32 and f64, and ~131,000 for
f16).
| Source Format \ Destination Format | s8 | s16 | s32 | s64 | u8 | u16 | u32 | u64 | f16 | f32 | f64 | bf16 | tf32 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| s8 | – | sext | sext | sext | – | sext | sext | sext | s2f | s2f | s2f | s2f | – |
| s16 | chop1 | – | sext | sext | chop1 | – | sext | sext | s2f | s2f | s2f | s2f | – |
| s32 | chop1 | chop1 | – | sext | chop1 | chop1 | – | sext | s2f | s2f | s2f | s2f | – |
| s64 | chop1 | chop1 | chop1 | – | chop1 | chop1 | chop1 | – | s2f | s2f | s2f | s2f | – |
| u8 | – | zext | zext | zext | – | zext | zext | zext | u2f | u2f | u2f | u2f | – |
| u16 | chop1 | – | zext | zext | chop1 | – | zext | zext | u2f | u2f | u2f | u2f | – |
| u32 | chop1 | chop1 | – | zext | chop1 | chop1 | – | zext | u2f | u2f | u2f | u2f | – |
| u64 | chop1 | chop1 | chop1 | – | chop1 | chop1 | chop1 | – | u2f | u2f | u2f | u2f | – |
| f16 | f2s | f2s | f2s | f2s | f2u | f2u | f2u | f2u | – | f2f | f2f | f2f | – |
| f32 | f2s | f2s | f2s | f2s | f2u | f2u | f2u | f2u | f2f | – | f2f | f2f | f2f |
| f64 | f2s | f2s | f2s | f2s | f2u | f2u | f2u | f2u | f2f | f2f | – | f2f | – |
| bf16 | f2s | f2s | f2s | f2s | f2u | f2u | f2u | f2u | f2f | f2f | f2f | f2f | – |
| tf32 | – | – | – | – | – | – | – | – | – | – | – | – | – |
| Source Format \ Destination Format | f16 | f32 | bf16 | e4m3 | e5m2 | e2m3 | e3m2 | e2m1 | ue8m0 | s2f6 |
|---|---|---|---|---|---|---|---|---|---|---|
| f16 | – | f2f | f2f | f2f | f2f | f2f | f2f | f2f | – | – |
| f32 | f2f | – | f2f | f2f | f2f | f2f | f2f | f2f | f2f | f2f |
| bf16 | f2f | f2f | – | f2f | f2f | f2f | f2f | f2f | f2f | f2f |
| e4m3 | f2f | – | f2f | – | – | – | – | – | – | – |
| e5m2 | f2f | – | f2f | – | – | – | – | – | – | – |
| e2m3 | f2f | – | f2f | – | – | – | – | – | – | – |
| e3m2 | f2f | – | f2f | – | – | – | – | – | – | – |
| e2m1 | f2f | – | f2f | – | – | – | – | – | – | – |
| ue8m0 | – | – | f2f | – | – | – | – | – | – | – |
| s2f6 | – | – | f2f | – | – | – | – | – | – | – |
Notes
sext = sign-extend; zext = zero-extend; chop = keep only low bits that fit;
s2f = signed-to-float; f2s = float-to-signed; u2f = unsigned-to-float;
f2u = float-to-unsigned; f2f = float-to-float.
1 If the destination register is wider than the destination format, the result is extended to the destination register width after chopping. The type of extension (sign or zero) is based on the destination format. For example, cvt.s16.u32 targeting a 32-bit register first chops to 16-bit, then sign-extends to 32-bit.
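The chop-then-extend behavior in note 1 can be sketched as follows. `cvt_chop_extend` is an illustrative Python helper, not a PTX instruction, and it models only the integer bit manipulation:

```python
def cvt_chop_extend(value, dst_format_bits, dst_signed, reg_bits=32):
    """Keep the low bits that fit the destination format (chop), then sign- or
    zero-extend, per the destination format, to the register width."""
    chopped = value & ((1 << dst_format_bits) - 1)       # chop to format size
    if dst_signed and chopped >> (dst_format_bits - 1):  # negative in dst format
        chopped -= 1 << dst_format_bits                  # sign-extend
    return chopped & ((1 << reg_bits) - 1)

# cvt.s16.u32 of 0x0001FFFF: chop to 0xFFFF, then sign-extend to 32 bits.
assert cvt_chop_extend(0x0001FFFF, 16, dst_signed=True) == 0xFFFFFFFF
# An unsigned destination format would zero-extend instead.
assert cvt_chop_extend(0x0001FFFF, 16, dst_signed=False) == 0x0000FFFF
```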
6.5.2. Rounding Modifiers
Conversion instructions may specify a rounding modifier. In PTX, there are four integer rounding modifiers and six floating-point rounding modifiers. Table 17 and Table 18 summarize the rounding modifiers.
| Modifier | Description |
|---|---|
| .rn | rounds to nearest even |
| .rna | rounds to nearest, ties away from zero |
| .rz | rounds towards zero |
| .rm | rounds towards negative infinity |
| .rp | rounds towards positive infinity |
| .rs | rounds either towards zero or away from zero based on the carry out of the integer addition of random bits and the discarded bits of mantissa |
| Modifier | Description |
|---|---|
| .rni | round to nearest integer, choosing even integer if source is equidistant between two integers |
| .rzi | round to nearest integer in the direction of zero |
| .rmi | round to nearest integer in direction of negative infinity |
| .rpi | round to nearest integer in direction of positive infinity |
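PTX's integer rounding modifiers .rni, .rzi, .rmi, and .rpi map onto familiar rounding functions. A host-side Python analogy (the actual cvt rounding is performed by hardware on the binary floating-point representation):

```python
import math

# Integer rounding applied to a floating-point value, keyed by PTX modifier.
round_to_int = {
    "rni": round,        # nearest integer, ties to even
    "rzi": math.trunc,   # toward zero
    "rmi": math.floor,   # toward negative infinity
    "rpi": math.ceil,    # toward positive infinity
}

assert round_to_int["rni"](2.5) == 2 and round_to_int["rni"](3.5) == 4  # ties to even
assert round_to_int["rzi"](-1.7) == -1
assert round_to_int["rmi"](-1.2) == -2
assert round_to_int["rpi"](1.2) == 2
```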
6.6. Operand Costs
Operands from different state spaces affect the speed of an operation. Registers are fastest, while global memory is slowest. Much of the delay to memory can be hidden in a number of ways. The first is to have multiple threads of execution so that the hardware can issue a memory operation and then switch to other execution. Another way to hide latency is to issue the load instructions as early as possible, as execution is not blocked until the desired result is used in a subsequent (in time) instruction. The register in a store operation is available much more quickly. Table 19 gives estimates of the costs of using different kinds of memory.
| Space | Time | Notes |
|---|---|---|
| Register | 0 | |
| Shared | 0 | |
| Constant | 0 | Amortized cost is low; first access is high |
| Local | > 100 clocks | |
| Parameter | 0 | |
| Immediate | 0 | |
| Global | > 100 clocks | |
| Texture | > 100 clocks | |
| Surface | > 100 clocks | |
7. Abstracting the ABI
Rather than expose details of a particular calling convention, stack layout, and Application Binary
Interface (ABI), PTX provides a slightly higher-level abstraction and supports multiple ABI
implementations. In this section, we describe the features of PTX needed to achieve this hiding of
the ABI. These include syntax for function definitions, function calls, parameter passing, and
memory allocated on the stack (alloca).
Refer to the PTX Writers Guide to Interoperability for details on generating PTX compliant with the Application Binary Interface (ABI) for the CUDA® architecture.
7.1. Function Declarations and Definitions
In PTX, functions are declared and defined using the .func directive. A function declaration
specifies an optional list of return parameters, the function name, and an optional list of input
parameters; together these specify the function’s interface, or prototype. A function definition
specifies both the interface and the body of the function. A function must be declared or defined
prior to being called.
The simplest function has no parameters or return values, and is represented in PTX as follows:
.func foo
{
...
ret;
}
...
call foo;
...
Here, execution of the call instruction transfers control to foo, implicitly saving the
return address. Execution of the ret instruction within foo transfers control to the
instruction following the call.
Scalar and vector base-type input and return parameters may be represented simply as register variables. At the call, arguments may be register variables or constants, and return values may be placed directly into register variables. The arguments and return variables at the call must have type and size that match the callee’s corresponding formal parameters.
Example
.func (.reg .u32 %res) inc_ptr ( .reg .u32 %ptr, .reg .u32 %inc )
{
add.u32 %res, %ptr, %inc;
ret;
}
...
call (%r1), inc_ptr, (%r1,4);
...
When using the ABI, .reg state space parameters must be at least 32-bits in size. Subword scalar
objects in the source language should be promoted to 32-bit registers in PTX, or use .param
state space byte arrays described next.
Objects such as C structures and unions are flattened into registers or byte arrays in PTX and are
represented using .param space memory. For example, consider the following C structure, passed
by value to a function:
struct {
double dbl;
char c[4];
};
In PTX, this structure will be flattened into a byte array. Since memory accesses are required to be
aligned to a multiple of the access size, the structure in this example will be a 12-byte array with
8-byte alignment so that accesses to the .f64 field are aligned. The .param state space is
used to pass the structure by value:
Example
.func (.reg .s32 out) bar (.reg .s32 x, .param .align 8 .b8 y[12])
{
.reg .f64 f1;
.reg .b32 c1, c2, c3, c4;
...
ld.param.f64 f1, [y+0];
ld.param.b8 c1, [y+8];
ld.param.b8 c2, [y+9];
ld.param.b8 c3, [y+10];
ld.param.b8 c4, [y+11];
...
... // computation using x,f1,c1,c2,c3,c4;
}
{
.param .b8 .align 8 py[12];
...
st.param.b64 [py+ 0], %rd;
st.param.b8 [py+ 8], %rc1;
st.param.b8 [py+ 9], %rc2;
st.param.b8 [py+10], %rc3;
st.param.b8 [py+11], %rc4;
// scalar args in .reg space, byte array in .param space
call (%out), bar, (%x, py);
...
}
In this example, note that .param space variables are used in two ways. First, a .param
variable y is used in function definition bar to represent a formal parameter. Second, a
.param variable py is declared in the body of the calling function and used to set up the
structure being passed to bar.
The following is a conceptual way to think about the .param state space use in device functions.

For a caller, the .param state space is used to set values that will be passed to a called function and/or to receive return values from a called function. Typically, a .param byte array is used to collect together fields of a structure being passed by value.

For a callee, the .param state space is used to receive parameter values and/or pass return values back to the caller.
The following restrictions apply to parameter passing.

For a caller:

- Arguments may be .param variables, .reg variables, or constants.
- In the case of .param space formal parameters that are byte arrays, the argument must also be a .param space byte array with matching type, size, and alignment. A .param argument must be declared within the local scope of the caller.
- In the case of .param space formal parameters that are base-type scalar or vector variables, the corresponding argument may be either a .param or .reg space variable with matching type and size, or a constant that can be represented in the type of the formal parameter.
- In the case of .reg space formal parameters, the corresponding argument may be either a .param or .reg space variable of matching type and size, or a constant that can be represented in the type of the formal parameter.
- In the case of .reg space formal parameters, the register must be at least 32-bits in size.
- All st.param instructions used for passing arguments to a function call must immediately precede the corresponding call instruction, and ld.param instructions used for collecting return values must immediately follow the call instruction, without any control flow alteration. st.param and ld.param instructions used for argument passing cannot be predicated. This enables compiler optimization and ensures that the .param variable does not consume extra space in the caller's frame beyond that needed by the ABI. The .param variable simply allows a mapping to be made at the call site between data that may be in multiple locations (e.g., a structure being manipulated by the caller is located in registers and memory) to something that can be passed as a parameter or return value to the callee.

For a callee:

- Input and return parameters may be .param variables or .reg variables.
- Parameters in .param memory must be aligned to a multiple of 1, 2, 4, 8, or 16 bytes.
- Parameters in the .reg state space must be at least 32-bits in size.
- The .reg state space can be used to receive and return base-type scalar and vector values, including sub-word size objects when compiling in non-ABI mode. Supporting the .reg state space provides legacy support.
Note that the choice of .reg or .param state space for parameter passing has no impact on
whether the parameter is ultimately passed in physical registers or on the stack. The mapping of
parameters to physical registers and stack locations depends on the ABI definition and the order,
size, and alignment of parameters.
7.1.1. Changes from PTX ISA Version 1.x
In PTX ISA version 1.x, formal parameters were restricted to .reg state space, and there was no support for array parameters. Objects such as C structures were flattened and passed or returned using multiple registers. PTX ISA version 1.x supported multiple return values for this purpose.
Beginning with PTX ISA version 2.0, formal parameters may be in either .reg or .param state
space, and .param space parameters support arrays. For targets sm_20 or higher, PTX
restricts functions to a single return value, and a .param byte array should be used to return
objects that do not fit into a register. PTX continues to support multiple return registers for
sm_1x targets.
Note
PTX implements a stack-based ABI only for targets sm_20 or higher.
PTX ISA versions prior to 3.0 permitted variables in .reg and .local state spaces to be
defined at module scope. When compiling to use the ABI, PTX ISA version 3.0 and later disallows
module-scoped .reg and .local variables and restricts their use to within function
scope. When compiling without use of the ABI, module-scoped .reg and .local variables are
supported as before. When compiling legacy PTX code (ISA versions prior to 3.0) containing
module-scoped .reg or .local variables, the compiler silently disables use of the ABI.
7.2. Variadic Functions
Note
Support for variadic functions, which was never implemented, has been removed from the spec.
PTX ISA version 6.0 supports passing an unsized array parameter to a function, which can be used to implement variadic functions.
Refer to Kernel and Function Directives: .func for details.
7.3. Alloca
PTX provides alloca instruction for allocating storage at runtime on the per-thread local memory
stack. The allocated stack memory can be accessed with ld.local and st.local instructions
using the pointer returned by alloca.
In order to facilitate deallocation of memory allocated with alloca, PTX provides two additional
instructions: stacksave which allows reading the value of stack pointer in a local variable, and
stackrestore which can restore the stack pointer with the saved value.
alloca, stacksave, and stackrestore instructions are described in
Stack Manipulation Instructions.
8. Memory Consistency Model
In multi-threaded executions, the side-effects of memory operations performed by each thread become visible to other threads in a partial and non-identical order. This means that any two operations may appear to happen in no order, or in different orders, to different threads. The axioms introduced by the memory consistency model specify exactly which contradictions are forbidden between the orders observed by different threads.
In the absence of any constraint, each read operation returns the value committed by some write operation to the same memory location, including the initial write to that memory location. The memory consistency model effectively constrains the set of such candidate writes from which a read operation can return a value.
8.1. Scope and applicability of the model
The constraints specified under this model apply to PTX programs with any PTX ISA version number,
running on sm_70 or later architectures.
The memory consistency model does not apply to texture (including ld.global.nc) and surface
accesses.
8.1.1. Limitations on atomicity at system scope
When communicating with the host CPU, certain strong operations with system scope may not be performed atomically on some systems. For more details on atomicity guarantees to host memory, see the CUDA Atomicity Requirements.
8.2. Memory operations
The fundamental storage unit in the PTX memory model is a byte, consisting of 8 bits. Each state space available to a PTX program is a sequence of contiguous bytes in memory. Every byte in a PTX state space has a unique address relative to all threads that have access to the same state space.
Each PTX memory instruction specifies an address operand and a data type. The address operand contains a virtual address that gets converted to a physical address during memory access. The physical address and the size of the data type together define a physical memory location, which is the range of bytes starting from the physical address and extending up to the size of the data type in bytes.
The memory consistency model specification uses the terms “address” or “memory address” to indicate a virtual address, and the term “memory location” to indicate a physical memory location.
Each PTX memory instruction also specifies the operation — either a read, a write or an atomic read-modify-write — to be performed on all the bytes in the corresponding memory location.
8.2.1. Overlap
Two memory locations are said to overlap when the starting address of one location is within the range of bytes constituting the other location. Two memory operations are said to overlap when they specify the same virtual address and the corresponding memory locations overlap. The overlap is said to be complete when both memory locations are identical, and it is said to be partial otherwise.
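These definitions can be expressed as simple predicates. A Python sketch with illustrative helper names:

```python
def overlap(addr1, size1, addr2, size2):
    """Two memory locations overlap when the starting address of one lies
    within the byte range of the other."""
    return addr1 <= addr2 < addr1 + size1 or addr2 <= addr1 < addr2 + size2

def complete_overlap(addr1, size1, addr2, size2):
    # Overlap is complete when both locations are identical, partial otherwise.
    return (addr1, size1) == (addr2, size2)

assert overlap(0x100, 4, 0x102, 4)              # partial overlap
assert not complete_overlap(0x100, 4, 0x102, 4)
assert complete_overlap(0x100, 4, 0x100, 4)     # complete overlap
assert not overlap(0x100, 4, 0x104, 4)          # adjacent, no overlap
```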
8.2.2. Aliases
Two distinct virtual addresses are said to be aliases if they map to the same memory location.
8.2.3. Multimem Addresses
A multimem address is a virtual address which points to multiple distinct memory locations across devices.
Only multimem.* operations are valid on multimem addresses. That is, the behavior of accessing a multimem address in any other memory operation is undefined.
8.2.4. Memory Operations on Vector Data Types
The memory consistency model relates operations executed on memory locations with scalar data types, which have a maximum size and alignment of 64 bits. Memory operations with a vector data type are modelled as a set of equivalent memory operations with a scalar data type, executed in an unspecified order on the elements in the vector.
8.2.5. Memory Operations on Packed Data Types
A packed data type consists of two values of the same scalar data type, as described in Packed Data Types. These values are accessed in adjacent memory locations. A memory operation on a packed data type is modelled as a pair of equivalent memory operations on the scalar data type, executed in an unspecified order on each element of the packed data.
8.2.6. Initialization
Each byte in memory is initialized by a hypothetical write W0 executed before starting any thread in the program. If the byte is included in a program variable, and that variable has an initial value, then W0 writes the corresponding initial value for that byte; else W0 is assumed to have written an unknown but constant value to the byte.
8.3. State spaces
The relations defined in the memory consistency model are independent of state spaces. In
particular, causality order closes over all memory operations across all the state spaces. But the
side-effect of a memory operation in one state space can be observed directly only by operations
that also have access to the same state space. This further constrains the synchronizing effect of a
memory operation in addition to scope. For example, the synchronizing effect of the PTX instruction
ld.relaxed.shared.sys is identical to that of ld.relaxed.shared.cluster, since no thread
outside the same cluster can execute an operation that accesses the same memory location.
8.4. Operation types
For simplicity, the rest of the document refers to the following operation types, instead of mentioning specific instructions that give rise to them.
| Operation Type | Instruction/Operation |
|---|---|
| atomic operation | An atom or red instruction. |
| read operation | All variants of the ld instruction, and the read portion of an atomic operation. |
| write operation | All variants of the st instruction, and the write portion of an atomic operation. |
| memory operation | A read or write operation. |
| volatile operation | An instruction with the .volatile qualifier. |
| acquire operation | A memory operation with the .acquire or .acq_rel qualifier. |
| release operation | A memory operation with the .release or .acq_rel qualifier. |
| mmio operation | An ld or st instruction with the .mmio qualifier. |
| memory fence operation | A fence or membar instruction. |
| proxy fence operation | A fence.proxy or membar.proxy instruction. |
| strong operation | A memory fence operation, or a memory operation with a .relaxed, .acquire, .release, .acq_rel, or .volatile qualifier. |
| weak operation | An ld or st instruction with the .weak qualifier. |
| synchronizing operation | A bar or barrier.cluster instruction, a memory fence operation, or any acquire or release operation. |
8.4.1. mmio Operation
An mmio operation is a memory operation with .mmio qualifier specified. It is usually performed
on a memory location which is mapped to the control registers of peer I/O devices. It can also be
used for communication between threads but has poor performance relative to non-mmio operations.
The semantic meaning of an mmio operation cannot be defined precisely, since it is determined by the underlying I/O device. From the perspective of the memory consistency model, the formal semantics of an mmio operation are equivalent to those of a strong operation. In addition, an mmio operation follows a few implementation-specific properties, provided it meets the CUDA atomicity requirements at the specified scope:
- Writes are always performed and are never combined within the scope specified.
- Reads are always performed, and are not forwarded, prefetched, combined, or allowed to hit any cache within the scope specified. As an exception, in some implementations, the surrounding locations may also be loaded; in such cases the amount of data loaded is implementation specific and varies between 32 and 128 bytes in size.
8.4.2. volatile Operation
A volatile operation is a memory operation with .volatile qualifier specified.
The semantics of volatile operations are equivalent to a relaxed memory operation with system-scope
but with the following extra implementation-specific constraints:
- The number of volatile instructions (not operations) executed by a program is preserved. Hardware may combine and merge volatile operations issued by multiple different volatile instructions, that is, the number of volatile operations in the program is not preserved.
- Volatile instructions are not re-ordered around other volatile instructions, but the memory operations performed by those instructions may be re-ordered around each other.
Note
PTX volatile operations are intended for compilers to lower volatile read and write operations from CUDA C++, and other programming languages sharing CUDA C++ volatile semantics, to PTX.
Since volatile operations are relaxed at system-scope with extra constraints, prefer using other
strong read or write operations (e.g. ld.relaxed.sys or st.relaxed.sys) for
Inter-Thread Synchronization instead, which may deliver better performance.
PTX volatile operations are not suited for Memory Mapped IO (MMIO) because volatile operations do not preserve the number of memory operations performed, and may perform more or fewer operations than requested in a non-deterministic way. Use .mmio operations instead, which strictly preserve the number of operations performed.
8.5. Scope
Each strong operation must specify a scope, which is the set of threads that may interact directly with that operation and establish any of the relations described in the memory consistency model. There are four scopes:
| Scope | Description |
|---|---|
| .cta | The set of all threads executing in the same CTA as the current thread. |
| .cluster | The set of all threads executing in the same cluster as the current thread. |
| .gpu | The set of all threads in the current program executing on the same compute device as the current thread. This also includes other kernel grids invoked by the host program on the same compute device. |
| .sys | The set of all threads in the current program, including all kernel grids invoked by the host program on all compute devices, and all threads constituting the host program itself. |
Note that the warp is not a scope; the CTA is the smallest collection of threads that qualifies as a scope in the memory consistency model.
8.6. Proxies
A memory proxy, or proxy, is an abstract label applied to a method of memory access. When two memory operations use distinct methods of memory access, they are said to use different proxies.
Memory operations as defined in Operation types use the generic method of memory access, i.e., the generic proxy. Other operations, such as texture and surface accesses, use methods of memory access distinct from each other and from the generic method.
A proxy fence is required to synchronize memory operations across different proxies. Although virtual aliases use the generic method of memory access, accessing the same location through distinct virtual addresses behaves as if using different proxies, so virtual aliases also require a proxy fence to establish memory ordering.
8.7. Morally strong operations
Two operations are said to be morally strong relative to each other if they satisfy all of the following conditions:
The operations are related in program order (i.e., they are both executed by the same thread), or each operation is strong and specifies a scope that includes the thread executing the other operation.
Both operations are performed via the same proxy.
If both are memory operations, then they overlap completely.
Most (but not all) of the axioms in the memory consistency model depend on relations between morally strong operations.
8.7.1. Conflict and Data-races
Two overlapping memory operations are said to conflict when at least one of them is a write.
Two conflicting memory operations are said to be in a data-race if they are not related in causality order and they are not morally strong.
8.7.2. Limitations on Mixed-size Data-races
A data-race between operations that overlap completely is called a uniform-size data-race, while a data-race between operations that overlap partially is called a mixed-size data-race.
The axioms in the memory consistency model do not apply if a PTX program contains one or more mixed-size data-races. But these axioms are sufficient to describe the behavior of a PTX program with only uniform-size data-races.
Atomicity of mixed-size RMW operations
In any program with or without mixed-size data-races, the following property holds for every pair of overlapping atomic operations A1 and A2 such that each specifies a scope that includes the other: Either the read-modify-write operation specified by A1 is performed completely before A2 is initiated, or vice versa. This property holds irrespective of whether the two operations A1 and A2 overlap partially or completely.
8.8. Release and Acquire Patterns
Some sequences of instructions give rise to patterns that participate in memory synchronization as described later. The release pattern makes prior operations from the current thread1 visible to some operations from other threads. The acquire pattern makes some operations from other threads visible to later operations from the current thread.
A release pattern on a location M consists of one of the following:
- A release operation on M
  E.g.: st.release [M]; or atom.release [M]; or mbarrier.arrive.release [M];
- Or a release or acquire-release operation on M followed by a strong write on M in program order
  E.g.: st.release [M]; st.relaxed [M];
- Or a release or acquire-release memory fence followed by a strong write on M in program order
  E.g.: fence.release; st.relaxed [M]; or fence.release; atom.relaxed [M];
Any memory synchronization established by a release pattern only affects operations occurring in program order before the first instruction in that pattern.
An acquire pattern on a location M consists of one of the following:
- An acquire operation on M
  E.g.: ld.acquire [M]; or atom.acquire [M]; or mbarrier.test_wait.acquire [M];
- Or a strong read on M followed by an acquire operation on M in program order
  E.g.: ld.relaxed [M]; ld.acquire [M];
- Or a strong read on M followed by an acquire memory fence in program order
  E.g.: ld.relaxed [M]; fence.acquire; or atom.relaxed [M]; fence.acquire;
Any memory synchronization established by an acquire pattern only affects operations occurring in program order after the last instruction in that pattern.
Note that while atomic reductions conceptually perform a strong read as part of their read-modify-write sequence, this strong read does not form an acquire pattern.
E.g.: red.add [M], 1; fence.acquire; is not an acquire pattern.
1 For both release and acquire patterns, this effect is further extended to operations in other threads through the transitive nature of causality order.
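The message-passing use of these patterns can be sketched on the host in Python. This is only an illustration of the intended communication pattern, not of PTX semantics: the names are invented, and CPython's thread scheduling gives far stronger ordering than PTX, so no weak-memory behavior is observable here.

```python
import threading

# Illustrative sketch (not PTX): `flag` plays the role of location M.
# The producer's write to `flag` stands in for a release pattern
# (e.g. st.release [flag]) and the consumer's polling read for an
# acquire pattern (e.g. ld.acquire [flag]).
data = 0
flag = 0
result = []

def producer():
    global data, flag
    data = 42   # prior write, made visible by the release pattern
    flag = 1    # release pattern: all earlier writes become visible

def consumer():
    while flag != 1:        # acquire pattern: wait until the flag is seen
        pass
    result.append(data)     # must then observe 42

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(result[0])  # 42
```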
8.9. Ordering of memory operations
The sequence of operations performed by each thread is captured as program order while memory synchronization across threads is captured as causality order. The visibility of the side-effects of memory operations to other memory operations is captured as communication order. The memory consistency model defines contradictions that are disallowed between communication order on the one hand, and causality order and program order on the other.
8.9.1. Program Order
Program order relates all operations performed by a thread according to the order in which a sequential processor would execute the instructions in the corresponding PTX source. It is a transitive relation that forms a total order over the operations performed by the thread, but does not relate operations from different threads.
8.9.1.1. Asynchronous Operations
Some PTX instructions (all variants of cp.async, cp.async.bulk, cp.reduce.async.bulk, and wgmma.mma_async) perform operations that are asynchronous to the thread that executed the instruction. These asynchronous operations are ordered after prior instructions in the same thread (except in the case of wgmma.mma_async), but they are not part of the program order for that thread. Instead, they provide weaker ordering guarantees, as documented in the instruction descriptions.
For example, the loads and stores performed as part of a cp.async are ordered with respect to
each other, but not to those of any other cp.async instructions initiated by the same thread,
nor any other instruction subsequently issued by the thread with the exception of
cp.async.commit_group or cp.async.mbarrier.arrive. The asynchronous mbarrier arrive-on operation
performed by a cp.async.mbarrier.arrive instruction is ordered with respect to the memory
operations performed by all prior cp.async operations initiated by the same thread, but not to
those of any other instruction issued by the thread. The implicit mbarrier complete-tx
operation that is part of all variants of cp.async.bulk and cp.reduce.async.bulk
instructions is ordered only with respect to the memory operations performed by the same
asynchronous instruction, and in particular it does not transitively establish ordering with respect
to prior instructions from the issuing thread.
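A loose host-side analogy: an asynchronous copy behaves like work handed to an executor. It is initiated in program order, but later statements in the issuing thread are ordered after it only through an explicit completion mechanism (cp.async.wait_group, or an mbarrier). The Python sketch below is purely illustrative; the names and the executor stand-in are assumptions, not PTX semantics.

```python
from concurrent.futures import ThreadPoolExecutor

# Loose analogy for cp.async: the copy runs asynchronously to the
# issuing thread, and only the explicit wait (here future.result(),
# standing in for cp.async.wait_group) orders it before later reads.
src = list(range(8))
dst = [0] * 8

def async_copy(s, d):
    for i, v in enumerate(s):
        d[i] = v

with ThreadPoolExecutor(max_workers=1) as pool:
    fut = pool.submit(async_copy, src, dst)  # "cp.async": initiated, not complete
    # Reading dst here would race with the copy; it is NOT ordered after it.
    fut.result()                             # completion point: copy now ordered
    assert dst == src                        # safe to read after the wait
print(dst)
```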
8.9.2. Observation Order
Observation order relates a write W to a read R through an optional sequence of atomic read-modify-write operations.
A write W precedes a read R in observation order if:
R and W are morally strong and R reads the value written by W, or
For some atomic operation Z, W precedes Z and Z precedes R in observation order.
8.9.3. Fence-SC Order
The Fence-SC order is an acyclic partial order, determined at runtime, that relates every pair of morally strong fence.sc operations.
8.9.4. Memory synchronization
Synchronizing operations performed by different threads synchronize with each other at runtime as described here. The effect of such synchronization is to establish causality order across threads.
- A fence.sc operation X synchronizes with a fence.sc operation Y if X precedes Y in the Fence-SC order.
- A bar{.cta}.sync, bar{.cta}.red, or bar{.cta}.arrive operation synchronizes with a bar{.cta}.sync or bar{.cta}.red operation executed on the same barrier.
- A barrier.cluster.arrive operation synchronizes with a barrier.cluster.wait operation.
- A release pattern X synchronizes with an acquire pattern Y if a write operation in X precedes a read operation in Y in observation order, and the first operation in X and the last operation in Y are morally strong.
API synchronization
A synchronizes relation can also be established by certain CUDA APIs.
- Completion of a task enqueued in a CUDA stream synchronizes with the start of the following task in the same stream, if any. For purposes of the above, recording or waiting on a CUDA event in a stream, or causing a cross-stream barrier to be inserted due to cudaStreamLegacy, enqueues tasks in the associated streams even if there are no direct side effects. An event record task synchronizes with matching event wait tasks, and a barrier arrival task synchronizes with matching barrier wait tasks.
- Start of a CUDA kernel synchronizes with start of all threads in the kernel. End of all threads in a kernel synchronizes with end of the kernel.
- Start of a CUDA graph synchronizes with start of all source nodes in the graph. Completion of all sink nodes in a CUDA graph synchronizes with completion of the graph. Completion of a graph node synchronizes with start of all nodes with a direct dependency.
- Start of a CUDA API call to enqueue a task synchronizes with start of the task.
- Completion of the last task queued to a stream, if any, synchronizes with return from cudaStreamSynchronize. Completion of the most recently queued matching event record task, if any, synchronizes with return from cudaEventSynchronize. Synchronizing a CUDA device or context behaves as if synchronizing all streams in the context, including ones that have been destroyed.
- Returning cudaSuccess from an API to query a CUDA handle, such as a stream or event, behaves the same as return from the matching synchronization API.
In addition to establishing a synchronizes relation, the CUDA API synchronization mechanisms above also participate in proxy-preserved base causality order. The exception is the tensormap proxy, which is not acquired from the generic proxy at CUDA kernel start and must therefore be acquired explicitly using fence.proxy.tensormap::generic.acquire when needed.
8.9.5. Causality Order
Causality order captures how memory operations become visible across threads through synchronizing operations. The axiom “Causality” uses this order to constrain the set of write operations from which a read operation may read a value.
Relations in the causality order primarily consist of relations in Base causality order1, which is a transitive order determined at runtime.
Base causality order
An operation X precedes an operation Y in base causality order if:
- X precedes Y in program order, or
- X synchronizes with Y, or
- For some operation Z:
  - X precedes Z in program order and Z precedes Y in base causality order, or
  - X precedes Z in base causality order and Z precedes Y in program order, or
  - X precedes Z in base causality order and Z precedes Y in base causality order.
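Because program order is itself transitive and the closure clauses compose freely, base causality order works out to the transitive closure of program order together with the synchronizes-with edges. A small Python sketch (the event names and edge lists are illustrative, not part of the specification) computes it for the message-passing pattern:

```python
def base_causality(program_order, synchronizes_with):
    """Transitive closure of program-order and synchronizes-with edges."""
    rel = set(program_order) | set(synchronizes_with)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(rel):
            for (c, d) in list(rel):
                if b == c and (a, d) not in rel:
                    rel.add((a, d))
                    changed = True
    return rel

# Message passing: T1 performs W1 then W2; T2 performs R1 then R2.
po = [("W1", "W2"), ("R1", "R2")]
sw = [("W2", "R1")]   # the release pattern ending at W2 synchronizes with R1
bc = base_causality(po, sw)
assert ("W1", "R2") in bc   # W1 precedes R2 in base causality order
```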
Proxy-preserved base causality order
A memory operation X precedes a memory operation Y in proxy-preserved base causality order if X precedes Y in base causality order, and:
X and Y are performed to the same address, using the generic proxy, or
X and Y are performed to the same address, using the same proxy, and by the same thread block, or
X and Y are aliases and there is an alias proxy fence along the base causality path from X to Y.
Causality order
Causality order combines base causality order with some non-transitive relations as follows:
An operation X precedes an operation Y in causality order if:
X precedes Y in proxy-preserved base causality order, or
For some operation Z, X precedes Z in observation order, and Z precedes Y in proxy-preserved base causality order.
1 The transitivity of base causality order accounts for the “cumulativity” of synchronizing operations.
8.9.6. Coherence Order
There exists a partial transitive order that relates overlapping write operations, determined at runtime, called the coherence order1. Two overlapping write operations are related in coherence order if they are morally strong or if they are related in causality order. Two overlapping writes are unrelated in coherence order if they are in a data-race, which gives rise to the partial nature of coherence order.
1 Coherence order cannot be observed directly since it consists entirely of write operations. It may be observed indirectly by its use in constraining the set of candidate writes that a read operation may read from.
8.9.7. Communication Order
The communication order is a non-transitive order, determined at runtime, that relates write operations to other overlapping memory operations.
A write W precedes an overlapping read R in communication order if R returns the value of any byte that was written by W.
A write W precedes a write W’ in communication order if W precedes W’ in coherence order.
A read R precedes an overlapping write W in communication order if, for any byte accessed by both R and W, R returns the value written by a write W’ that precedes W in coherence order.
Communication order captures the visibility of memory operations — when a memory operation X1 precedes a memory operation X2 in communication order, X1 is said to be visible to X2.
8.10. Axioms
8.10.1. Coherence
If a write W precedes an overlapping write W’ in causality order, then W must precede W’ in coherence order.
8.10.2. Fence-SC
Fence-SC order cannot contradict causality order. For a pair of morally strong fence.sc operations F1 and F2, if F1 precedes F2 in causality order, then F1 must precede F2 in Fence-SC order.
8.10.3. Atomicity
Single-Copy Atomicity
Conflicting morally strong operations are performed with single-copy atomicity. When a read R and a write W are morally strong, then the following two communications cannot both exist in the same execution, for the set of bytes accessed by both R and W:
R reads any byte from W.
R reads any byte from any write W’ which precedes W in coherence order.
Atomicity of read-modify-write (RMW) operations
When an atomic operation A and a write W overlap and are morally strong, then the following two communications cannot both exist in the same execution, for the set of bytes accessed by both A and W:
A reads any byte from a write W’ that precedes W in coherence order.
A follows W in coherence order.
Litmus Test 1

.global .u32 x = 0;

| T1 | T2 |
|---|---|
| A1: atom.sys.inc.u32 %r0, [x]; | A2: atom.sys.inc.u32 %r0, [x]; |

FINAL STATE: x == 2
Atomicity is guaranteed when the operations are morally strong.
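The guarantee in Litmus Test 1 corresponds to the familiar host-side rule that two atomic read-modify-write operations on the same location cannot interleave. A Python sketch, using a lock as an illustrative stand-in for the single-copy atomicity of atom.inc (the names are invented for this example):

```python
import threading

# Host-side stand-in for two morally strong atomic increments: the lock
# models the atomicity of the RMW, so neither increment can observe a
# half-completed update from the other.
x = 0
lock = threading.Lock()

def atomic_inc():
    global x
    with lock:      # A1 / A2: the whole read-modify-write is indivisible
        x += 1

threads = [threading.Thread(target=atomic_inc) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(x)  # FINAL STATE: x == 2
```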
Litmus Test 2

.global .u32 x = 0;

| T1 | T2 (in a different CTA) |
|---|---|
| A1: atom.cta.inc.u32 %r0, [x]; | A2: atom.gpu.inc.u32 %r0, [x]; |

FINAL STATE: x == 1 OR x == 2
Atomicity is not guaranteed if the operations are not morally strong.
8.10.4. No Thin Air
Values may not appear “out of thin air”: an execution cannot speculatively produce a value in such a way that the speculation becomes self-satisfying through chains of instruction dependencies and inter-thread communication. This matches both programmer intuition and hardware reality, but is necessary to state explicitly when performing formal analysis.
Litmus Test: Load Buffering with true dependencies
.global .u32 x = 0;
.global .u32 y = 0;

| T1 | T2 |
|---|---|
| A1: ld.global.u32 %r0, [x]; | A2: ld.global.u32 %r1, [y]; |
| B1: st.global.u32 [y], %r0; | B2: st.global.u32 [x], %r1; |

FINAL STATE: x == 0 AND y == 0
The litmus test known as “LB+deps” (Load Buffering with dependencies) checks for such forbidden values arising out of thin air. Two threads T1 and T2 each read from a first variable and copy the observed result into a second variable, with the first and second variables exchanged between the threads. If each variable is initially zero, the final result must also be zero. If A1 reads from B2 and A2 reads from B1, then the values passing through the memory operations in this example form a cycle: A1->B1->A2->B2->A1. Only the values x == 0 and y == 0 can satisfy this cycle. If any of the memory operations in this example were to speculatively associate a different value with the corresponding memory location, the speculation would become self-fulfilling, and hence it is forbidden.
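The LB+deps pattern can be exercised on the host in Python; since the only value ever available to copy is the initial zero, every execution ends with both variables zero. This is an illustration of the dependency chain only, not of PTX's weak-memory semantics:

```python
import threading

x = 0
y = 0

def t1():
    global y
    r0 = x      # A1: read x
    y = r0      # B1: store the observed value to y (true dependency)

def t2():
    global x
    r1 = y      # A2: read y
    x = r1      # B2: store the observed value to x (true dependency)

a = threading.Thread(target=t1)
b = threading.Thread(target=t2)
a.start(); b.start()
a.join(); b.join()
print(x, y)  # 0 0: no value can appear out of thin air
```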
Litmus Test: Load Buffering without dependencies
.global .u32 x = 0;
.global .u32 y = 0;

| T1 | T2 |
|---|---|
| A1: ld.global.u32 %r0, [x]; | A2: ld.global.u32 %r1, [y]; |
| B1: st.global.u32 [y], 1; | B2: st.global.u32 [x], 1; |

FINAL STATE: x == 1 AND y == 1
This litmus test differs from the one above in that it unconditionally stores 1 to x and y. Here a final state of x == 1 and y == 1 is permitted. This execution does not contradict the requirement demonstrated by the previous litmus test: there is no self-fulfilling cycle, since the threads always and unconditionally store 1 to x and y, so the speculation is valid.
Here the lack of dependencies is plain, but the implementation may perform any chain of reasoning to determine that a store is not dependent on a prior load, and thus break self-fulfilling cycles which would otherwise apparently be forbidden by the No-Thin-Air axiom.
This form of load buffering is deliberately permitted in the PTX memory consistency model.
8.10.5. Sequential Consistency Per Location
Within any set of overlapping memory operations that are pairwise morally strong, communication order cannot contradict program order, i.e., a concatenation of program order between overlapping operations and morally strong relations in communication order cannot result in a cycle. This ensures that each program slice of overlapping pairwise morally strong operations is strictly sequentially-consistent.
Litmus Test: CoRR
.global .u32 x = 0;

| T1 | T2 |
|---|---|
| W1: st.global.relaxed.sys.u32 [x], 1; | R1: ld.global.relaxed.sys.u32 %r0, [x]; |
| | R2: ld.global.relaxed.sys.u32 %r1, [x]; |

IF %r0 == 1 THEN %r1 == 1
The litmus test “CoRR” (Coherent Read-Read) demonstrates one consequence of this guarantee. A thread T1 executes a write W1 on a location x, and a thread T2 executes two (or an infinite sequence of) reads R1 and R2 on the same location x. No other writes are executed on x, except the one modelling the initial value. The operations W1, R1 and R2 are pairwise morally strong. If R1 reads from W1, then the subsequent read R2 must also observe the same value. If R2 observed the initial value of x instead, then this would form a sequence of morally-strong relations R2->W1->R1 in communication order that contradicts the program order R1->R2 in thread T2. Hence R2 cannot read the initial value of x in such an execution.
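The CoRR pattern can be sketched on the host; since the writer writes 1 exactly once and never reverts it, a reader that has seen 1 cannot subsequently see 0. Again this illustrates the pattern only, as CPython gives much stronger ordering than PTX requires:

```python
import threading

x = 0
reads = []

def writer():
    global x
    x = 1               # W1: the only write to x

def reader():
    r0 = x              # R1
    r1 = x              # R2: must not observe an older value than R1 did
    reads.append((r0, r1))

w = threading.Thread(target=writer)
r = threading.Thread(target=reader)
w.start(); r.start()
w.join(); r.join()

r0, r1 = reads[0]
assert not (r0 == 1 and r1 == 0)  # IF r0 == 1 THEN r1 == 1
```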
8.10.6. Causality
Relations in communication order cannot contradict causality order. This constrains the set of candidate write operations that a read operation may read from:
If a read R precedes an overlapping write W in causality order, then R cannot read from W.
If a write W precedes an overlapping read R in causality order, then for any byte accessed by both R and W, R cannot read from any write W’ that precedes W in coherence order.
Litmus Test: Message Passing
.global .u32 data = 0;
.global .u32 flag = 0;

| T1 | T2 |
|---|---|
| W1: st.global.u32 [data], 1; | R1: ld.global.relaxed.sys.u32 %r0, [flag]; |
| F1: fence.sys; | F2: fence.sys; |
| W2: st.global.relaxed.sys.u32 [flag], 1; | R2: ld.global.u32 %r1, [data]; |

IF %r0 == 1 THEN %r1 == 1
The litmus test known as “MP” (Message Passing) represents the essence of typical synchronization algorithms. A vast majority of useful programs can be reduced to sequenced applications of this pattern.
Thread T1 first writes to a data variable and then to a flag variable while a second thread T2 first reads from the flag variable and then from the data variable. The operations on the flag are morally strong and the memory operations in each thread are separated by a fence, and these fences are morally strong.
If R1 observes W2, then the release pattern “F1; W2” synchronizes with the acquire pattern “R1; F2”. This establishes the causality order W1 -> F1 -> W2 -> R1 -> F2 -> R2. Then axiom causality guarantees that R2 cannot read from any write that precedes W1 in coherence order. In the absence of any other writes in this example, R2 must read from W1.
Litmus Test: CoWR
// These addresses are aliases
.global .u32 data_alias_1;
.global .u32 data_alias_2;

| T1 |
|---|
| W1: st.global.u32 [data_alias_1], 1; |
| F1: fence.proxy.alias; |
| R1: ld.global.u32 %r1, [data_alias_2]; |

%r1 == 1
Virtual aliases require an alias proxy fence along the synchronization path.
Litmus Test: Store Buffering
The litmus test known as “SB” (Store Buffering) demonstrates the sequential consistency enforced
by the fence.sc. A thread T1 writes to a first variable, and then reads the value of a second
variable, while a second thread T2 writes to the second variable and then reads the value of the
first variable. The memory operations in each thread are separated by fence.sc instructions,
and these fences are morally strong.
.global .u32 x = 0;
.global .u32 y = 0;

| T1 | T2 |
|---|---|
| W1: st.global.u32 [x], 1; | W2: st.global.u32 [y], 1; |
| F1: fence.sc.sys; | F2: fence.sc.sys; |
| R1: ld.global.u32 %r0, [y]; | R2: ld.global.u32 %r1, [x]; |

%r0 == 1 OR %r1 == 1
In any execution, either F1 precedes F2 in Fence-SC order, or vice versa. If F1 precedes F2 in Fence-SC order, then F1 synchronizes with F2. This establishes the causality order W1 -> F1 -> F2 -> R2. The Causality axiom ensures that R2 cannot read from any write that precedes W1 in coherence order. In the absence of any other write to that variable, R2 must read from W1. Similarly, in the case where F2 precedes F1 in Fence-SC order, R1 must read from W2. If each fence.sc in this example were replaced by a fence.acq_rel instruction, this outcome would not be guaranteed: an execution is possible in which the write from each thread remains unobserved by the other thread, i.e., both R1 and R2 return the initial value 0 for y and x respectively.
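The guaranteed outcome can be sketched on the host. CPython happens to execute these simple assignments sequentially consistently, standing in for the fence.sc fences; on weakly ordered hardware it is the fences that forbid the both-zero outcome. The names here are invented for illustration:

```python
import threading

x = 0
y = 0
r = {}

def t1():
    global x
    x = 1            # W1: st [x], 1
                     # F1: fence.sc.sys (implicit: CPython is sequentially
                     #     consistent for these simple assignments)
    r["r0"] = y      # R1: ld [y]

def t2():
    global y
    y = 1            # W2: st [y], 1
                     # F2: fence.sc.sys (implicit)
    r["r1"] = x      # R2: ld [x]

a = threading.Thread(target=t1)
b = threading.Thread(target=t2)
a.start(); b.start()
a.join(); b.join()
assert r["r0"] == 1 or r["r1"] == 1  # both reads returning 0 is forbidden
```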
8.11. Special Cases
8.11.1. Reductions do not form Acquire Patterns
Atomic reduction operations like red do not form acquire patterns with acquire fences.
Litmus Test: Message Passing with a Red Instruction
.global .u32 x = 0;
.global .u32 flag = 0;

| T1 | T2 |
|---|---|
| W1: st.u32 [x], 42; | RMW1: red.sys.global.add.u32 [flag], 1; |
| W2: st.release.gpu.u32 [flag], 1; | F2: fence.acquire.gpu; |
| | R2: ld.weak.u32 %r1, [x]; |

%r1 == 0 AND flag == 2
This variant of the litmus test known as “MP” (Message Passing) demonstrates the consequence of reductions being excluded from acquire patterns. It is possible to observe the outcome where R2 reads the value 0 from x while flag has the final value 2. This outcome is possible because the release pattern in T1 does not synchronize with T2: although RMW1 conceptually performs a strong read of flag, that read followed by the fence F2 does not form an acquire pattern, so no causality order is established between W1 and R2.
4.2. Comments
Comments in PTX follow C/C++ syntax, using non-nested /* and */ for comments that may span multiple lines, and using // to begin a comment that extends up to the next newline character, which terminates the current line. Comments cannot occur within character constants, string literals, or within other comments.

Comments in PTX are treated as whitespace.