WorldPosition in shader

I have found similar topics but not the full answer (How can I get world space position in a ShaderPass?), where I am missing the `getViewPosition()` function used in:

[quote="linsanda, post:1, topic:37051"]

```glsl
float depthTx = texture2D(tDepth,vUv).r;
float viewZ = getViewZ( depthTx );
float clipW = cameraProjectionMatrix[2][3] * viewZ + cameraProjectionMatrix[3][3];
vec4 e = getViewPosition(vUv,depthTx,clipW);
vec4 wPos = CameraMatrixWorld*e;
gl_FragColor = wPos;
```

[/quote]

I am trying to get the world position independently of the camera (orbiting). I gather I have to use the camera's world matrix, but how?
This is my current code:

```js
vertexShader: /* glsl */ `
	varying vec2 vUv;

	void main() {
		vUv = uv;
		gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
	}`,
```

```js
fragmentShader: /* glsl */ `
	uniform sampler2D tDepth;
	uniform float cameraNear;
	uniform float cameraFar;
	uniform mat4 cameraProjectionMatrix;
	uniform mat4 cameraWorldMatrix;
	varying vec2 vUv;

	#include <packing>

	float getLinearDepth( const in vec2 screenPosition ) {
		#if PERSPECTIVE_CAMERA == 1
			float fragCoordZ = texture2D( tDepth, screenPosition ).x;
			float viewZ = perspectiveDepthToViewZ( fragCoordZ, cameraNear, cameraFar );
			return viewZToOrthographicDepth( viewZ, cameraNear, cameraFar );
		#else
			return texture2D( tDepth, screenPosition ).x;
		#endif
	}

	float getViewZ( const in vec2 screenPosition ) {
		float fragCoordZ = texture2D( tDepth, screenPosition ).x;
		return perspectiveDepthToViewZ( fragCoordZ, cameraNear, cameraFar );
	}

	void main() {
		float depth = getLinearDepth( vUv );
		float viewZ = getViewZ( vUv );
		float clipW = cameraProjectionMatrix[2][3] * viewZ + cameraProjectionMatrix[3][3];
		vec4 e = getViewPosition( vUv, depth, clipW ); // getViewPosition() is what I am missing
		vec4 wPos = cameraWorldMatrix * e;
	}`,
```

To compute world-space positions from depth in a post-processing pass, you need to:

  1. Reconstruct the view-space position from the screen UV and the depth value.
  2. Multiply by the camera's world matrix to go from view space to world space.
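These two steps can be sanity-checked numerically outside the shader. Here is a plain-JavaScript sketch (no three.js; the row-major 4×4 helpers and the analytic perspective inverse are ad hoc, and the OpenGL convention of negative view-space z in front of the camera is assumed): a known view-space point is projected to UV + depth as the rasterizer would store it, then reconstructed and moved to world space.

```javascript
// Apply a row-major 4x4 matrix to a 4-component vector.
function mulMV(m, v) {
  const r = [0, 0, 0, 0];
  for (let i = 0; i < 4; i++)
    for (let j = 0; j < 4; j++) r[i] += m[i][j] * v[j];
  return r;
}

// Standard OpenGL-style perspective projection matrix.
function perspective(fovY, aspect, near, far) {
  const f = 1 / Math.tan(fovY / 2);
  return [
    [f / aspect, 0, 0, 0],
    [0, f, 0, 0],
    [0, 0, (far + near) / (near - far), (2 * far * near) / (near - far)],
    [0, 0, -1, 0],
  ];
}

// Analytic inverse of the matrix above (avoids a general 4x4 inverse).
function perspectiveInverse(fovY, aspect, near, far) {
  const f = 1 / Math.tan(fovY / 2);
  return [
    [aspect / f, 0, 0, 0],
    [0, 1 / f, 0, 0],
    [0, 0, 0, -1],
    [0, 0, (near - far) / (2 * far * near), (far + near) / (2 * far * near)],
  ];
}

const near = 0.1, far = 100, fovY = Math.PI / 3, aspect = 16 / 9;
const P = perspective(fovY, aspect, near, far);
const Pinv = perspectiveInverse(fovY, aspect, near, far);

// A view-space point in front of the camera (negative z).
const viewIn = [1.5, -0.75, -10, 1];

// Forward pass: what ends up in vUv and the depth texture.
const clip = mulMV(P, viewIn);
const ndc = clip.map((c) => c / clip[3]);
const uv = [ndc[0] * 0.5 + 0.5, ndc[1] * 0.5 + 0.5]; // vUv in the shader
const depth = ndc[2] * 0.5 + 0.5;                    // value read from tDepth

// Step 1: reconstruct view space from UV + depth.
const clipPos = [uv[0] * 2 - 1, uv[1] * 2 - 1, depth * 2 - 1, 1];
let viewPos = mulMV(Pinv, clipPos);
viewPos = viewPos.map((c) => c / viewPos[3]); // perspective divide

// Step 2: view -> world with the camera's world matrix (here: a translation).
const camWorld = [
  [1, 0, 0, 3],
  [0, 1, 0, 2],
  [0, 0, 1, 5],
  [0, 0, 0, 1],
];
const worldPos = mulMV(camWorld, viewPos);

console.log(viewPos.slice(0, 3));  // ≈ [1.5, -0.75, -10]  (original point recovered)
console.log(worldPos.slice(0, 3)); // ≈ [4.5, 1.25, -5]
```

The round trip recovers the original view-space point exactly (up to floating-point error), which is the invariant the shader version must satisfy as well.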

Below is a common pattern in Three.js (for a perspective camera):

```glsl
// Reconstructs the view-space position from UV + depth.
// 'fragDepth' is the raw [0..1] value read from the depth texture.
vec4 getViewPosition( vec2 uv, float fragDepth ) {
	// 1) Convert uv (0..1) and depth (0..1) to clip space / NDC (-1..1)
	vec4 clipPos = vec4( uv * 2.0 - 1.0, fragDepth * 2.0 - 1.0, 1.0 );

	// 2) Unproject with the inverse projection matrix to get view space.
	//    Note: inverse() requires GLSL ES 3.00 (WebGL2); on WebGL1,
	//    pass the precomputed inverse in as a uniform instead.
	vec4 viewPos = inverse( cameraProjectionMatrix ) * clipPos;

	// 3) Perspective divide
	viewPos /= viewPos.w;

	return viewPos;
}

void main() {
	float fragDepth = texture2D( tDepth, vUv ).r;

	// Reconstruct the view-space position
	vec4 viewPos = getViewPosition( vUv, fragDepth );

	// Finally, go to world space
	vec4 worldPos = cameraWorldMatrix * viewPos;

	gl_FragColor = vec4( worldPos.xyz, 1.0 );
}
```
  • cameraWorldMatrix transforms from view space to world space.
  • If you see mismatches, ensure you are converting the depth into the coordinate system that getViewPosition expects.
  • Three.js’s built-in perspectiveDepthToViewZ helps convert [0…1] depth to view-space Z, but the surrounding code must handle projection/unprojection consistently.
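Since the camera is orbiting, both matrices must be uploaded as uniforms and stay current as the camera moves. A minimal sketch of the wiring on the JavaScript side (a configuration fragment; `depthTexture` and `camera` are assumed to exist in your setup, while `projectionMatrixInverse` and `matrixWorld` are standard `THREE.Camera` properties):

```js
// Sketch: uniforms for the pass above. Assigning the camera's Matrix4
// objects by reference means the uniforms track the orbiting camera
// automatically, since three.js updates these matrices every frame.
const uniforms = {
	tDepth: { value: depthTexture },
	cameraNear: { value: camera.near },
	cameraFar: { value: camera.far },
	cameraProjectionMatrix: { value: camera.projectionMatrix },
	// With this uniform you can write
	//   vec4 viewPos = cameraProjectionMatrixInverse * clipPos;
	// in the shader and avoid GLSL's inverse(), which needs WebGL2.
	cameraProjectionMatrixInverse: { value: camera.projectionMatrixInverse },
	cameraWorldMatrix: { value: camera.matrixWorld },
};
```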

Thank you for such a detailed answer, it is very helpful.

Now I need to place world-space points in the shader. Is there a built-in conversion?