I’ve recently started writing simulations with WebGPU and am interested in using ThreeJS for the rendering. I was curious if it’s possible to efficiently turn an output GPUBuffer into a BufferAttribute (for example, calculating vertex positions via a WebGPU compute shader but rendering them via a ThreeJS BufferGeometry).
The best way I can figure out right now is to copy the data from the GPU back to the CPU using GPUBuffer.mapAsync(), convert it to a TypedArray, and then plug it into the BufferAttribute.
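For reference, a minimal sketch of that CPU round-trip. The names `device` and `outputBuffer` are placeholders for your own WebGPU objects, and the staging-buffer approach is one common pattern, not the only one:

```javascript
// Copy the compute output into a MAP_READ staging buffer, map it,
// and snapshot the bytes as a Float32Array for a BufferAttribute.
async function readBackPositions(device, outputBuffer, byteLength) {
  const staging = device.createBuffer({
    size: byteLength,
    usage: GPUBufferUsage.COPY_DST | GPUBufferUsage.MAP_READ,
  });

  const encoder = device.createCommandEncoder();
  encoder.copyBufferToBuffer(outputBuffer, 0, staging, 0, byteLength);
  device.queue.submit([encoder.finish()]);

  await staging.mapAsync(GPUMapMode.READ);
  // Copy out before unmapping: the mapped range is detached afterwards.
  const positions = copyMappedFloats(staging.getMappedRange());
  staging.unmap();
  return positions; // e.g. new THREE.BufferAttribute(positions, 3)
}

// Pure helper: turn a mapped ArrayBuffer into an owned Float32Array.
function copyMappedFloats(arrayBuffer) {
  return new Float32Array(arrayBuffer.slice(0));
}
```

The `slice(0)` matters: once `unmap()` is called, the original mapped ArrayBuffer is detached, so you must copy before handing the data to ThreeJS.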
const positionBuffer = new THREE.StorageBufferAttribute( positions, 3 );
const normalBuffer = new THREE.StorageBufferAttribute( normals, 3 );
const uvBuffer = new THREE.StorageBufferAttribute( uvs, 2 );
You can use these buffers in compute shaders with read_write access and in vertex shaders with read access. I’m already doing that; I’d have to put together a little example of it. The big advantage is that with these buffers the vertex shader has access to all vertices, not just the vertex currently being processed, as is the case with classic attributes.
Depending on whether you prefer TSL or WGSL, the subsequent steps differ. In both cases you will need these accessors:
// without .toReadOnly() the buffer is read_write
storage( positionBuffer, 'vec3', positionBuffer.count ).toReadOnly(),
storage( normalBuffer, 'vec3', normalBuffer.count ).toReadOnly(),
storage( uvBuffer, 'vec2', uvBuffer.count ).toReadOnly(),
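If you go the WGSL route, the same buffers end up as storage bindings in your shader. A minimal sketch of the "access to all vertices" point above; the binding index and the smoothing step are assumptions for illustration, not taken from a real example:

```wgsl
// Tightly packed positions: 3 floats per vertex. (An array<vec3<f32>>
// in the storage address space has a 16-byte stride, so tightly packed
// attribute data is indexed here as plain floats instead.)
@group(0) @binding(0) var<storage, read_write> positions : array<f32>;

@compute @workgroup_size(64)
fn main(@builtin(global_invocation_id) id : vec3<u32>) {
  let vertexCount = arrayLength(&positions) / 3u;
  let i = id.x;
  if (i >= vertexCount) { return; }

  // The whole buffer is visible, so any vertex can be read. As a
  // hypothetical example, nudge this vertex toward the next one:
  let k = i * 3u;
  let j = ((i + 1u) % vertexCount) * 3u;
  positions[k]      = mix(positions[k],      positions[j],      0.1);
  positions[k + 1u] = mix(positions[k + 1u], positions[j + 1u], 0.1);
  positions[k + 2u] = mix(positions[k + 2u], positions[j + 2u], 0.1);
}
```

A classic vertex attribute could never read `positions[j]` for a neighbouring vertex; that cross-vertex access is exactly what the storage buffer buys you.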
P.S. As for “Might that be possible one day?” — all of this is already possible.
Here’s an example. I admit it’s not very creative, but it shows how you can manipulate the position buffer with a compute shader and how the vertex shader uses the same buffer: