
Know your SSAO artifacts

Screen space ambient occlusion (SSAO) is a global illumination technique that approximates the occlusion of a surface by other nearby surfaces. The resulting term is generally incorporated into the ambient lighting equation. It works on a depth map of the scene, shooting rays from a particular pixel toward surrounding pixels. Using the depth map, the view space positions of those pixels can be calculated, providing a somewhat real basis for measuring occlusion.
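For illustration, here's roughly what that view space reconstruction might look like in HLSL. This is a minimal sketch, not code from this project; the resource names, the inverseProjection parameter, and the row-vector mul convention are all assumptions:

Texture2D DepthMap;
SamplerState PointSampler;

// Reconstruct the view space position of the pixel at the given texture
// coordinates from the hardware depth value.
float3 GetViewSpacePosition(float2 uv, float4x4 inverseProjection)
{
    float depth = DepthMap.SampleLevel(PointSampler, uv, 0).r;
    // Texture coordinates -> normalized device coordinates (flip y).
    float2 ndc = float2(uv.x * 2 - 1, 1 - uv.y * 2);
    // Unproject, then divide by w to undo the perspective projection.
    float4 viewPos = mul(float4(ndc, depth, 1), inverseProjection);
    return viewPos.xyz / viewPos.w;
}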

SSAO has been used in plenty of games in recent years, so it's a battle-tested technique in terms of real world performance and quality, and it imposes few limits on geometry. That said, I'm not a big fan of it. It's often overdone; that helps distinguish shapes, but can look unrealistic. I wasn't pleased with the results I got from using it in one of my projects: the noise artifacts and performance just weren't up to par. I instead went with a baked-in per-vertex ambient occlusion term, which provided a better look at almost no cost.

Nevertheless, I recently decided to experiment with SSAO again in a little side project. The technique seems to be somewhat difficult to get working properly, so I decided to do a deep dive and get to the source of some of the artifacts I was seeing. In this post I'll describe some of the problems I encountered and how to mitigate them.

With SSAO (top) and without SSAO (bottom).

The basics

There are a few steps to SSAO. First, depth and normal maps of the scene must be generated. If you're using deferred shading, then you probably already have these. Next comes the step that actually computes the occlusion term and writes it to a texture. This result will be noisy, so it then requires a depth-aware blurring step to remove high-frequency noise (most of the screenshots in this article don't have the blur applied). Finally, the AO texture is used in the lighting step to apply ambient lighting with the occlusion term.
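As an aside, the depth-aware blur can be as simple as averaging nearby AO values while rejecting samples whose depth differs too much from the center pixel's. Here's a minimal sketch, assuming AOMap and DepthMap textures and an arbitrary depth threshold:

Texture2D AOMap;
Texture2D DepthMap;
SamplerState PointSampler;

float BlurAO(float2 uv, float2 texelSize)
{
    float centerDepth = DepthMap.SampleLevel(PointSampler, uv, 0).r;
    float total = 0;
    float count = 0;
    for (int y = -2; y <= 2; y++)
    {
        for (int x = -2; x <= 2; x++)
        {
            float2 tapUV = uv + float2(x, y) * texelSize;
            float tapDepth = DepthMap.SampleLevel(PointSampler, tapUV, 0).r;
            // Only blend AO from pixels on (roughly) the same surface,
            // so occlusion doesn't bleed across depth discontinuities.
            if (abs(tapDepth - centerDepth) < 0.001)
            {
                total += AOMap.SampleLevel(PointSampler, tapUV, 0).r;
                count += 1;
            }
        }
    }
    return total / max(count, 1.0);
}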

I'm going to mainly discuss artifacts related to the sampling of points. Generally, for SSAO you use a number of randomly generated sample rays/points that end up within the unit hemisphere above the surface point in question. These are the rays that are projected out to see if they intersect nearby geometry. From articles and papers I've looked at, I've seen two main methods of generating the sample positions: spherical and hemispherical.

[Figure: the spherical and hemispherical methods of generating sample rays]

In the hemispherical method, the hemisphere is rotated so that it lies above the surface in question (determined by the surface normal). In the spherical method, rays that are below the surface (shown in blue above) are negated before sampling so that they lie above the surface.
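In code, the spherical method's negation step might look like this (a sketch; the function and parameter names are placeholders of mine):

// Flip any kernel ray that points into the surface, so that all rays
// end up in the hemisphere above the surface.
float3 GetSampleRay(float3 kernelRay, float3 surfaceNormal)
{
    return dot(kernelRay, surfaceNormal) < 0 ? -kernelRay : kernelRay;
}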

The artifacts

Some of these artifacts can be subtle and hard to diagnose when you have a lot of sample points. So when analyzing the problems I’ll often use just a single sample.

Depth resolution artifact

Consider what happens when we have a ray that is nearly parallel to the surface of an object (but still above it):

[Figure: depth buffer quantization along a ray nearly parallel to the surface]

Since we only have 24 or 32 bits to store depth, values in the depth buffer are of course quantized. This means we can erroneously decide that a ray which should not hit any geometry is actually under a surface, resulting in artifacts that look a little like z-fighting:

[Figure: z-fighting-like artifacts caused by depth quantization]

This tends to get worse as the object gets further away and the relative resolution of the depth buffer decreases.

The solution is to avoid rays that are close to parallel to the surface. We can take the dot product of the normalized ray and the surface normal and reject the ray if it is below a certain value. I am currently using 0.15 as my threshold value, although the SSAO code I grabbed from somewhere (I have no idea where) used 0.5. I think that's way too high, but it really depends on the resolution of your depth buffer and how far away your objects are.
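In shader code, the rejection might look something like this sketch (the Kernel array, NumSamples, and the surrounding loop are assumed context):

for (int i = 0; i < NumSamples; i++)
{
    float3 ray = GetSampleRay(Kernel[i], surfaceNormal);
    // Normalize so the dot product is the cosine of the angle between
    // the ray and the surface normal; reject near-parallel rays.
    if (dot(normalize(ray), surfaceNormal) < 0.15)
    {
        continue; // too close to the surface plane; skip this sample
    }
    // ... test this ray for occlusion ...
}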

[Figure: rays too close to the surface plane are rejected]

Note that in the case where we're using spherical sample points (and inverting those that fall below the surface), rejecting rays means we'll be using fewer sample points to determine AO, so we'll get a lower quality measurement. In the hemispherical case, we can just incorporate the threshold value into the original set of points themselves.

AO radius artifact

SSAO works by comparing the calculated depth of the end of the projected ray with the value from the depth map under the screen position of the end of the ray. It detects whether the ray ends up inside an object, and assumes that that object is occluding the point at the origin of the ray. Of course, the ray may end up "inside" an unrelated object that is far in front of it. This would produce unrealistic darkening, so SSAO algorithms generally impose an upper bound on the depth difference between the sampled and projected point. Usually this upper bound is set to the maximum length of our rays (but there's no reason it has to be):

float depthDiff = testDepth - calculatedSamplePosDepth;
// Only count occlusion when the depth map geometry is in front of the
// ray's end point, but no more than AORadius in front of it.
if (depthDiff > 0.0 && depthDiff < AORadius)
{
    // Add this sample to the occlusion term.
}
This can produce another artifact, however. If you're using just a single sample ray without random perturbations (like I do when debugging issues), you will come across this, possibly showing up as light bands near the corners:

[Figure: light bands near corners caused by the AO radius cutoff]

It’s easy to see why this occurs in the following diagram:

[Figure: how the AO radius upper bound produces the light bands]

It's important to note that I've never seen this artifact in real situations where we're taking multiple samples with a randomly rotated kernel. But it's possible it can occur with certain geometries, and it will definitely occur if you've simplified your algorithm for debugging purposes, so it's something to be aware of. Note that if your SSAO algorithm uses a much more expensive ray march, where the first intersection with any geometry is detected, this won't be an issue.

Artifacts with random kernel rotation

Any realistic SSAO algorithm needs to randomly rotate the sampling kernel, essentially providing a different set of relative sample points for each pixel. This is generally done with a texture of random normals. You can see how necessary this is in the following picture:

[Figure: SSAO with and without random kernel rotation]

There are a few different options for how this rotation is done. If you’re using spherical sample rays, you can reflect them about the random normal vector.

The original sampling kernel on the left. Reflections about two different random normals at center and right.

This is a simple way to give different orientations for the sampling kernel points for each pixel.
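With HLSL's reflect intrinsic, this is a one-liner per ray. A sketch, assuming a small tiled texture of random normals packed into [0, 1], with uv, NoiseScale, and the Kernel array coming from the surrounding shader:

Texture2D RandomNormals;
SamplerState WrapSampler;

// Fetch this pixel's random normal (unpack from [0,1] to [-1,1]) and
// reflect each kernel ray about it.
float3 randomNormal = normalize(RandomNormals.Sample(WrapSampler, uv * NoiseScale).xyz * 2 - 1);
float3 ray = reflect(Kernel[i], randomNormal);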

If you’re using a hemispherical set of sample points, you don’t have that option. If you reflect across an arbitrary normal your sample points will no longer be in a single hemisphere. You must orient your hemisphere along your surface, and then use the random normal to rotate it to an arbitrary position.

This is generally done by using the Gram-Schmidt process to construct an orthogonal basis, from which we generate a rotation matrix used to rotate the rays in the sample kernel.

float3x3 GetRotationMatrix(float3 surfaceNormal, float3 randomNormal)
{
    // Gram-Schmidt process: remove the component of randomNormal that
    // lies along surfaceNormal, leaving a tangent perpendicular to it.
    float3 tangent = normalize(randomNormal - surfaceNormal * dot(randomNormal, surfaceNormal));
    float3 bitangent = cross(surfaceNormal, tangent);
    // The rows form an orthonormal basis whose +z is the surface normal.
    return float3x3(tangent, bitangent, surfaceNormal);
}
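Usage might then look like the sketch below. Since the matrix rows are the basis vectors, a row-vector mul rotates a kernel ray (assumed to be generated in a hemisphere around +z) into place above the surface; NumSamples and Kernel are assumed context:

float3x3 rotation = GetRotationMatrix(surfaceNormal, randomNormal);
for (int i = 0; i < NumSamples; i++)
{
    // Rotate the kernel ray so the hemisphere lies above the surface.
    float3 ray = mul(Kernel[i], rotation);
    // ... test this ray for occlusion ...
}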

This works well when the surface faces towards the viewer. It can generate artifacts when the surface is more perpendicular to the screen, however. Take a look at the following image, which shows the same single sample kernel ray being rotated by 8 different "random" normals. The rotation normals aren't really random for the purposes of this image; they are evenly distributed around the z = 0 plane.

[Figure: the same ray rotated by 8 evenly distributed normals; even distribution on the right, clumped on the left]

You can see that the result is evenly distributed in the image on the right, but not in the image on the left. The rotations tend to be "clumped up" in portions of the range. The result is that your randomly-rotated sample kernel isn't so random after all. You can see the kinds of artifacts this causes at the bottom of the next image:

[Figure: artifacts from clumped kernel rotations on surfaces perpendicular to the screen]

The more samples you use, the less noticeable the effect gets, but it can be visible even with higher sample counts. Here is some banding caused by this artifact, with a 7 sample kernel:

[Figure: banding caused by clumped rotations, with a 7 sample kernel]

With some more math, I'm sure there is a way to re-balance the distribution here. Anyone want to take a shot? For now, I think the "spherical samples reflected about a random normal" approach produces better results, even if you need to occasionally discard a sample because it's too close to the tangent plane of the surface.

Banding

With the hemispherical random normals, you may also notice some banding:

[Figure: banding artifact with 5 samples]

This is prevalent at all angles if you aren't using many samples. The image above uses 5 samples, and there are 5 bands (the smallest isn't really visible). This is because, despite our random sampling kernel rotations, the relative orientations and lengths of the rays in the sampling kernel stay the same. So if we consider all the possible random rotations, we can essentially think of our 5 sample kernel as tracing out the paths of 5 cones. The image below shows this:

[Figure: under rotation, the 5 sample rays trace out 5 cones]

As you can see, each band boundary corresponds to the spot where one of the “sample cones” begins to sink into geometry, thus beginning to contribute to occlusion.

For the spherical sampling kernel – where the random normals result in random reflection – this artifact isn’t quite as noticeable, since the sampling rays can be at different relative orientations from each other and don’t trace out such smooth cones.

MSAA and SSAO together

If you’re rendering with multisampling enabled in your final pass, you’ll notice SSAO artifacts along the edges of your objects.

[Figure: SSAO artifacts along object edges with MSAA enabled]

The reason is that with MSAA on, the pixel shader is executed for some extra pixels around the edges of your object. For those pixels, we sample from outside the object's edge in the ambient occlusion buffer, mixing in the wrong AO value for your object.

Improvements

Remember the banding I previously mentioned? We can take some steps to eliminate this artifact. We create a larger number of samples in our kernel (say, 11), but only use a small subset of them (say, 4 or 5). We'll choose which ones based on another per-pixel random number (we have room in the unused component of our random normal texture).

Not only that, but we can separate the rays in our sampling kernel into a unit vector and a scale component. Then, we can index the unit vector and scale components separately in the pixel shader, essentially expanding our 11 sample kernel to 121 (11 x 11) possibilities (of which we’ll choose 4 or 5, say).

Let’s go through an example.

Say we have 11 rays in our kernel. We’ll denote them by Direction[0 – 10] and Scale[0 – 10].

Next, assume that we expand the extra random component in our “random normal texture” into a value from 0 to 17.

Finally, we'll use different "offset multipliers" to index Direction and Scale. Let's say 1 and 7, respectively.

So if we take 4 samples for each pixel, and the value from our random normal texture is 12, then our 4 rays will be:

Direction[((0 + 12) * 1) % 11] * Scale[((0 + 12) * 7) % 11]
Direction[((1 + 12) * 1) % 11] * Scale[((1 + 12) * 7) % 11]
Direction[((2 + 12) * 1) % 11] * Scale[((2 + 12) * 7) % 11]
Direction[((3 + 12) * 1) % 11] * Scale[((3 + 12) * 7) % 11]

or,

Direction[1] * Scale[7]
Direction[2] * Scale[3]
Direction[3] * Scale[10]
Direction[4] * Scale[6]

And if the value from our random normal texture is 5, then our 4 rays will be:

Direction[5] * Scale[2]
Direction[6] * Scale[9]
Direction[7] * Scale[5]
Direction[8] * Scale[1]

With a small bit of extra work, the numbers can be varied even more (for instance, adding in an offset depending on the current pixel position).
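In shader code, the indexing might look like this sketch (randomValue is the expanded extra component from the random normal texture; Direction and Scale are the kernel arrays described above; the rest of the names are placeholders):

static const int KernelSize = 11;
static const int NumSamples = 4;

for (int i = 0; i < NumSamples; i++)
{
    // Offset multipliers of 1 and 7 decouple the direction and scale
    // indices, giving 11 x 11 possible rays from an 11 entry kernel.
    int directionIndex = ((i + randomValue) * 1) % KernelSize;
    int scaleIndex = ((i + randomValue) * 7) % KernelSize;
    float3 ray = Direction[directionIndex] * Scale[scaleIndex];
    // ... test this ray for occlusion ...
}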

The improvement can clearly be seen here:

 

[Figure: banding before and after mixing direction and scale indices]

 

Note that this does have an impact on the shader: the sampling loop can no longer be unrolled, since our indices into the sampling kernel are no longer compile-time constants. It's possible this could have a performance impact.
