Rendering
Image from: MetaHuman, Unreal Engine / Epic Games, Inc.
The following content is licensed under a Creative Commons Attribution 4.0 International license (CC BY-SA 4.0)
Rendering
Rendering (or image synthesis) is the process of generating a
(non-)photorealistic image of a 2D/3D model (data structure)
Rendering involves solving three main computational tasks:
1. Occlusion: check an object’s visibility
2. Local illumination: the simulation of the appearance of surfaces with
respect to their material properties (see: Shaders)
3. Global illumination: the simulation of light distribution within the scene,
which can be solved directly and/or indirectly
Rendering
Albrecht Dürer, 1525
http://upload.wikimedia.org/wikipedia/commons/1/1a/D%C3%BCrer_-_Man_Drawing_a_Lute.jpg
Recap: Z-Buffer
Proposed (but not realized due to performance costs) by Strasser (1974)
For every pixel (x, y), a z-value (depth) is stored in addition to the color value. The Z-buffer is required for perspective-corrected texture projections
Z-values are initialized with the maximal possible z-value (infinity)
All scene objects are rendered in arbitrary order
A test is performed for every pixel:
If the z-value is smaller than the one currently stored at the pixel: store the new z-value, draw the pixel
View frustum, with near and far clip planes
Suitable for arbitrarily complex scenes
Easy to implement
Only one value per pixel (aliasing)
Limited accuracy (even with 32 bit)
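A minimal sketch of the per-pixel depth test described above, assuming a simple software framebuffer with one color and one depth value per pixel (the buffer names and the Color type are illustrative, not from a specific API):

#include <limits>

struct Color { float r, g, b; };

const int W = 640, H = 480;
Color colorBuffer[W * H];
float zBuffer[W * H];

// Initialize every depth entry with the maximal possible z-value (infinity).
void clearDepthBuffer() {
    for (int i = 0; i < W * H; ++i)
        zBuffer[i] = std::numeric_limits<float>::infinity();
}

// Called for every rasterized fragment, in arbitrary object order.
void writeFragment(int x, int y, float z, Color c) {
    int i = y * W + x;
    if (z < zBuffer[i]) {   // closer than what is currently stored?
        zBuffer[i] = z;     // store the new z-value
        colorBuffer[i] = c; // draw the pixel
    }
}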
Recap: Z-Buffer and Rasterization
(Figure: the visibility stages around rasterization: view-frustum culling, line clipping, backface culling, occlusion culling, and the Z-buffer.)
Why Clipping
Line clipping is the process of removing (clipping) lines or portions of lines
Rasterization buffers are limited and only the fragments of visible points are needed
Typically, any part of a line which is outside of the viewing area is removed
(Figure: a line partially outside the viewport; we need the point/edge where it crosses the viewport border)
Clipping Techniques
Point-wise algorithms (single points are clipped)
Bresenham + Z-Buffer: simple, direct determination of visibility, suitable for raster output
devices, large number of tests necessary
Line-wise algorithms (line segments are clipped)
Cohen-Sutherland (1967, today known as “fast-clipping”)
Polygon-wise algorithms (polygons are clipped)
Sutherland–Hodgman (1974)
Weiler–Atherton
Vatti/Greiner–Hormann
Object/Image-wise algorithms (objects are clipped)
Warnock algorithm (image space)
Painter’s algorithm (object space)
General rule: Clipping shouldn't cost more than drawing objects, but it helps…
Naive (Brute-force) Line Clipping
If both end points p_b and p_e are inside the viewport:
Accept the whole line
Otherwise, clip the line at each viewport edge:
Solve for t_line and t_edge; clip at the intersection within the segment if both 0 ≤ t_line ≤ 1 and 0 ≤ t_edge ≤ 1
Replace the suitable end point of the line by the intersection point
Many cases are irrelevant
(Figure: a line from p_b to p_e crossing the viewport edges e_b and e_e.)
Cohen-Sutherland Line Clipping
Divides a two-dimensional space into 9 regions and then efficiently determines the (portions of) lines that are visible in the central region (the viewport)
Each viewport edge defines a half space and is represented by a bit-encoded flag (= outcode); a bit is set if a vertex lies outside that edge

Outcodes of the 9 regions (bit order: top, bottom, right, left):
1001 1000 1010
0001 0000 0010
0101 0100 0110

Trivial cases in lineClipAndDraw(xmin, ymin, xmax, ymax):
Trivial accept: both end points are in the viewport
(ComputeOutCode(pb) | ComputeOutCode(pe)) == 0
Trivial reject: both lie outside w.r.t. at least one common edge
(ComputeOutCode(pb) & ComputeOutCode(pe)) != 0
else: the line has to be clipped against all edges where the XOR bits are set, i.e., the points lie on different sides of that edge
ComputeOutCode(pb) XOR ComputeOutCode(pe)

Intersection calculation, e.g. for the edge x = xmin:
y = ya + (ye − ya) / (xe − xa) · (xmin − xa)
Cohen-Sutherland C++ Implementation
typedef int OC;

const int INSIDE = 0; // 0000
const int LEFT   = 1; // 0001
const int RIGHT  = 2; // 0010
const int BOTTOM = 4; // 0100
const int TOP    = 8; // 1000

struct Point { double x, y; };

// xmax, xmin, ymax and ymin are global constants defined by the viewport

OC ComputeOC(double x, double y) {
    OC code = INSIDE;
    if (x < xmin)      code |= LEFT;
    else if (x > xmax) code |= RIGHT;
    if (y < ymin)      code |= BOTTOM;
    else if (y > ymax) code |= TOP;
    return code;
}

void lineClipAndDraw(Point P1, Point P2) {
    OC oc0 = ComputeOC(P1.x, P1.y), oc1 = ComputeOC(P2.x, P2.y);
    bool accept = false;
    while (true) {
        if (!(oc0 | oc1)) {          // trivial accept: both end points inside
            accept = true; break;
        } else if (oc0 & oc1) {      // trivial reject: both outside a common edge
            break;
        } else {                     // clip against one edge the outer point violates
            double x, y;
            OC ocOut = oc1 > oc0 ? oc1 : oc0;
            if (ocOut & TOP) {
                x = P1.x + (P2.x - P1.x) * (ymax - P1.y) / (P2.y - P1.y); y = ymax;
            } else if (ocOut & BOTTOM) {
                x = P1.x + (P2.x - P1.x) * (ymin - P1.y) / (P2.y - P1.y); y = ymin;
            } else if (ocOut & RIGHT) {
                y = P1.y + (P2.y - P1.y) * (xmax - P1.x) / (P2.x - P1.x); x = xmax;
            } else if (ocOut & LEFT) {
                y = P1.y + (P2.y - P1.y) * (xmin - P1.x) / (P2.x - P1.x); x = xmin;
            }
            if (ocOut == oc0) { P1.x = x; P1.y = y; oc0 = ComputeOC(P1.x, P1.y); }
            else              { P2.x = x; P2.y = y; oc1 = ComputeOC(P2.x, P2.y); }
        }
    }
    // if (accept): the clipped segment P1-P2 can now be rasterized
}
Naive Polygon Clipping
Line clipping is expensive and should generally be avoided
Intersection calculation
Increased number of variables: number of new points, new
triangles, new polygons,…
Alternative: artificial enlargement of the clipping region
(Much) larger than the viewport, but still avoiding overflow due to the fixed-point representation
Less clipping
(Figure: viewport inside an enlarged clipping region)
Applications should avoid drawing objects that are outside of the
viewport/viewing frustum
Objects that are partially outside will be implicitly clipped during
rasterization
Slight penalty because they will still be processed (triangle setup)
Sutherland-Hodgman Polygon Clipping
Extends each edge of a convex clip polygon in turn and selects only the vertices of the subject polygon that are on the visible side (OpenGL)
1. Begins with an input list of all vertices in the subject polygon
2. One side of the clip polygon is extended infinitely in both directions, and the path
of the subject polygon is traversed
3. Vertices from the input list are inserted into an output list if they lie on the visible
side of the extended clip polygon line
4. New vertices are added to the output list where the subject polygon path crosses
the extended clip polygon line
5. Repeated iteratively for each clip polygon side, using the output list from one
stage as the input list for the next one
Degenerate polygons/edges must be eliminated by post-processing, if necessary
Sutherland-Hodgman Polygon Clipping
(Figure: the four cases for an edge from S = p(i−1) to E = p(i) against a clip edge, and the vertices each case outputs.)

List outputList = subjectPolygon;
for (Edge clipEdge in clipPolygon) do
    List inputList = outputList;
    outputList.clear();
    Point S = inputList.last;
    for (Point E in inputList) do
        if (E inside clipEdge) then
            if (S not inside clipEdge) then
                outputList.add(ComputeIntersection(S, E, clipEdge));
            end if
            outputList.add(E);
        else if (S inside clipEdge) then
            outputList.add(ComputeIntersection(S, E, clipEdge));
        end if
        S = E;
    end for
end for
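The pseudocode assumes a ComputeIntersection(S, E, clipEdge) helper. A minimal C++ sketch of such a helper, assuming the clip edge is given by two points A and B (the function and parameter names are illustrative, not from a specific library), could look like this:

struct Point { double x, y; };

// Intersection of the segment S-E with the infinite line through A and B.
// Sutherland-Hodgman only calls this when S and E lie on different sides of
// the clip edge, so the two lines are never parallel here.
Point ComputeIntersection(Point S, Point E, Point A, Point B) {
    double a1 = E.y - S.y, b1 = S.x - E.x, c1 = a1 * S.x + b1 * S.y;
    double a2 = B.y - A.y, b2 = A.x - B.x, c2 = a2 * A.x + b2 * A.y;
    double det = a1 * b2 - a2 * b1;         // non-zero for non-parallel lines
    return { (b2 * c1 - b1 * c2) / det,     // solve the 2x2 system (Cramer's rule)
             (a1 * c2 - a2 * c1) / det };
}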
Other Clipping Algorithms
Weiler & Atherton (1977)
Arbitrary concave polygons with holes
Vatti (1992)
Supports self-overlap
Greiner & Hormann (1998)
Simpler and faster than Vatti
Supports Boolean operations
Idea:
Odd winding number rule
Each intersection with the polygon changes the winding number by ±1
Walk along both polygons
Alternate winding number value
Mark point of entry and point of exit
Combine results
Culling Techniques
View-frustum culling:
Most graphics toolkits allow the programmer to specify a "near" and "far" clip depth
(clipping usually done by perspective transform)
Backface culling
If the z-component of the surface normal (in camera space) is negative, we drop the triangle
Counter-clockwise winding order for triangles (see: Modeling) that are facing forwards. With backface culling, triangles facing the other direction will be automatically culled (OpenGL); see the sketch after this list
Occlusion culling
Hierarchical Z-Buffer: an octree contains the objects in model space, and a “Z-pyramid” contains the Z-buffer values of each hierarchy level.
Portals: divide a scene into cells/sectors (rooms) and portals (doors), and computes which
sectors are visible by clipping them against portals
…and many, many more!
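A minimal sketch of a backface test, assuming counter-clockwise winding for front faces and a triangle given by three vertices in camera/world space (Vec3 and the helper functions are illustrative, not a specific API):

struct Vec3 { float x, y, z; };

Vec3  sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
Vec3  cross(Vec3 a, Vec3 b) { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

// true if the triangle (v0, v1, v2) faces away from the eye and can be culled.
bool isBackface(Vec3 v0, Vec3 v1, Vec3 v2, Vec3 eye) {
    Vec3 n    = cross(sub(v1, v0), sub(v2, v0)); // face normal from the winding order
    Vec3 view = sub(v0, eye);                    // from the eye towards the triangle
    return dot(n, view) >= 0.0f;                 // normal points away from the viewer
}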
Occlusion of Light
Due to the occlusion of light, shadows can occur
When a light does not hit an object (because it gets occluded by some other
object), the object is in shadow
Occlusion and shadows add a great deal of realism to a scene and
make it easier for a viewer to observe spatial relationships between objects
We already implement occlusion algorithms and shadowing in local shaders (see: Shaders), but shadows are tricky to implement (particularly in real-time)
Most real-time applications use shadow mapping
(Image from: https://learnopengl.com/Advanced-Lighting/Shadows/Shadow-Mapping)
Orthographic Shadow Mapping
Renders the scene from the light’s direction: everything we see from the light’s orthographic perspective is lit, and everything we can’t see must be in shadow
(Figure, directional light: the Z-depth of the scene in the light’s coordinate space is stored as a texture (the depth/shadow map). For each fragment, the depth projected into light space (e.g., currentDepth = 0.9) is compared with the stored depth (e.g., closestDepth = 0.6):
return currentDepth > closestDepth ? 1.0 : 0.0;
Image from: https://learnopengl.com/Advanced-Lighting/Shadows/Shadow-Mapping)
Perspective Shadow Mapping
Renders the scene from the light’s point of view: everything we see from the light’s perspective is lit, and everything we can’t see must be in shadow
(Figure, spot light: the Z-depth of the scene in the light’s coordinate space is stored as a texture (the depth/shadow map). The fragment depth projected into light space (currentDepth = 0.9) is compared with the stored depth (closestDepth = 0.6):
return currentDepth > closestDepth ? 1.0 : 0.0;
Image from: https://learnopengl.com/Advanced-Lighting/Shadows/Shadow-Mapping)
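A minimal CPU-side sketch of this depth comparison, assuming the fragment has already been transformed into the light’s clip space, divided by w, and remapped to [0, 1], and that the shadow map is a plain float array (shadowMap, SM_SIZE and shadowFactor are illustrative names, not from a specific API):

#include <algorithm>

const int SM_SIZE = 1024;
float shadowMap[SM_SIZE * SM_SIZE]; // depth as seen from the light, in [0, 1]

// (u, v): fragment position in the shadow map; currentDepth: its light-space depth.
float shadowFactor(float u, float v, float currentDepth) {
    int x = std::min(SM_SIZE - 1, std::max(0, int(u * SM_SIZE)));
    int y = std::min(SM_SIZE - 1, std::max(0, int(v * SM_SIZE)));
    float closestDepth = shadowMap[y * SM_SIZE + x];
    // the fragment is farther from the light than the stored surface -> in shadow
    return currentDepth > closestDepth ? 1.0f : 0.0f;
}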
Shadow Acne
Multiple fragments can sample the same
value from the depth map when they're
relatively far away from the light. This often
causes a Moiré-like pattern called shadow
acne.
(Figure: fragment buffer resolution vs. shadow map resolution. Shadow acne, image from: https://learnopengl.com/Advanced-Lighting/Shadows/Shadow-Mapping)
Several fragments sample the same depth texel: the stored depth lies above the surface for some of them and below it for others. Thus, some fragments are considered to be in shadow and some are not.
Shadow Bias
A little “hack” called a shadow bias shifts the depth of the shadow map by a small constant amount such that the fragments are not incorrectly considered below the surface.
(Figure: fragment buffer resolution vs. shadow map resolution. Image from: https://learnopengl.com/Advanced-Lighting/Shadows/Shadow-Mapping)
Requires some tweaking as this can be different for each scene, but most of the time it’s simply a matter of slowly incrementing the bias until all acne is removed.
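In the shadow-map sketch from above this is a one-line change (0.005 is only an example value; a common refinement scales the bias with the angle between the surface and the light):

float bias = 0.005f; // scene-dependent constant
return (currentDepth - bias) > closestDepth ? 1.0f : 0.0f;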
Peter Panning
Problem: the shadow bias can become large enough that there is a visible offset between the shadows and the actual object silhouette. This artifact is called peter panning since objects seem slightly detached from their shadows.
By culling the front faces (instead of the back faces) in the shadow-map pass, we ignore shadows inside objects (we can’t see them anyway) and get the correct result.
(Figure: backface culling vs. frontface culling during the shadow-map pass.)
Percentage-Closer Filtering (PCF)
Because the depth map has a fixed resolution, the depth frequently spans more than one fragment per texel
By slightly shifting the shadow map’s texture coordinates and applying transparency, we can produce soft shadows
This is called Percentage-Closer Filtering and means we sample the texture more than once, based on a texel filter kernel: e.g., a 3 x 3 = 9 texel kernel with decreasing transparency at the edges
(Figures: aliasing in the shadow map texture, which depends on the shadow map resolution; PCF with a 3x3 kernel.)
Another approach is to apply a hard-coded Gaussian filter kernel. The Gaussian radius is today known as shadow blur.
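Building on the shadowFactor sketch from the shadow-mapping slide, a 3 x 3 PCF kernel could look like this (illustrative only; real implementations sample a depth texture, often with hardware comparison filtering):

// Averages the binary shadow test over a 3 x 3 neighbourhood of shadow-map texels.
float pcfShadow(float u, float v, float currentDepth, float bias) {
    float texel = 1.0f / SM_SIZE;
    float sum = 0.0f;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx)
            sum += shadowFactor(u + dx * texel, v + dy * texel, currentDepth - bias);
    return sum / 9.0f; // 0 = fully lit, 1 = fully in shadow, in-between = soft edge
}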
Percentage Closer Soft Shadows (PCSS)
The algorithm by Nvidia [1] simulates a more realistic
falloff where the shadows get progressively softer the
further the receiver is from the caster in three steps:
1. Blocker search: The shader searches the shadow map
and averages the depths that are closer to the light than
to the fragment (the “receiver”). The search size of the
region depends on the light size and the receiver’s
distance from the light source.
2. Penumbra estimation: estimates the penumbra width based on the light size and the blocker/receiver distances from the light, using a parallel-planes approximation
3. Filtering: now we perform a typical PCF step on the shadow map using a kernel size that is proportional to the penumbra estimate
(Images from: https://github.com/TheMasonX/UnityPCSS)
[1] Nvidia’s PCSS white paper (2005): http://developer.download.nvidia.com/shaderlibrary/docs/shadow_PCSS.pdf
Shadow Maps: The Problems
Sampling
Shadow maps are discretely and regularly sampled (e.g. grid)
Surfaces can have arbitrary orientation with respect to light
Can result in very bad sampling of a surface
Essentially impossible to solve in physical renderings
Resolution
Objects far from the camera should not be sampled finely
But shadow maps use a fixed grid
Must adapt to preferred resolution
Use several resolutions, e.g. Split or Cascaded Shadow Maps
Transform geometry appropriately, e.g. Perspective or Trapezoid Shadow Maps
(Figure: Cascaded Shadow Maps and perspective aliasing. Images from: https://docs.microsoft.com/en-us/windows/win32/dxtecharts/cascaded-shadow-maps)
Rasterization
After clipping, culling, and shadowing, the rasterizer projects each primitive
(mostly triangles) onto the image plane and lists all covered pixels in 2D
As already known, individual primitives are broken down into discrete
elements called fragments
The fragment shader is the program that then processes each of these fragments so that the GPU can quickly rasterize an image
A fragment is a collection of values produced by the rasterizer
A fragment represents a sample-sized segment of a rasterized primitive
The size covered by a fragment is related to the pixel area
Rasterization can produce multiple fragments from the same triangle per-pixel,
depending on various multisampling parameters (and OpenGL state)
BTW: Combining a fragment into a pixel is known as binning
GPU-Based Rasterization
Rasterizers are typically GPU-based and extremely fast
The process of rasterizing 3D models is mostly carried out by fixed-function (non-programmable) hardware within the graphics pipeline.
But this (GPU-based and non-programmable) hardware does not allow rendering physical light effects, so we fake them all the time
(Figure: the GPU pipeline: Input Assembler → Vertex Shader → tessellation stage (Hull Shader / Tessellation Control, Tessellator / Primitive Generator, Domain Shader / Tessellation Evaluation; DirectX / OpenGL naming) → Geometry Shader → Rasterizer.)
Things a rasterizer can‘t do
Images from: https://pxhere.com/en/photo/1025641 and https://pxhere.com/en/photo/1356653
Ray Tracing
Generating an image by tracing the path of light for each pixel in an
image plane
A ray is cast into the scene; the visible surface is identified, and the surface normal and the incoming light intensity at the visible point are computed
By following the path of light, a variety of optical effects become
possible:
Reflections
Refractions
Diffuse light scattering
Dispersion phenomena (e.g., chromatic aberration)
Optical effects (depth-of-field, etc.)
Ray Tracing
(Figure: a camera ray is cast through the image plane; at the intersection, a shadow ray toward the light and secondary reflection and refraction rays are spawned.)
Ray Tracing Algorithm
For each pixel do:
Raytrace(ray)
Find the first intersection of the ray with an object in the scene
Compute a shadow ray toward each light
Do the local illumination: C += L (see: Shaders)
If the object is reflective: compute reflection ray S
C += Raytrace(S)
If the object is refractive: compute refraction ray T
C += Raytrace(T)
Path Tracing Algorithm
For each pixel do:
Raytrace(ray)
Find the first intersection of the ray with an object in the scene
Compute a shadow ray toward each light
Do the local illumination: C += L (see: Shaders)
If the object is reflective: compute reflection ray S
C += Raytrace(S)
If the object is refractive: compute refraction ray T
C += Raytrace(T)
If the object is diffuse: compute a random ray R’ above the hemisphere using the BRDF
C += Raytrace(R’)
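A skeleton of this recursion in C++, with the scene-specific parts (intersection test, local shading, BRDF sampling) left as hypothetical, undefined helper functions; it only illustrates the control flow of the two slides above, not a complete renderer:

struct Vec3 { float x, y, z; };
struct Ray  { Vec3 origin, dir; };
struct Hit  { bool valid; Vec3 position, normal;
              bool reflective, refractive, diffuse; };

// Hypothetical helpers, assumed to be implemented elsewhere in the renderer:
Hit  intersectScene(const Ray& r);                  // first intersection with the scene
Vec3 localIllumination(const Hit& h, const Ray& r); // shadow rays + local shading
Ray  reflectionRay(const Hit& h, const Ray& r);
Ray  refractionRay(const Hit& h, const Ray& r);
Ray  sampleBRDF(const Hit& h);                      // random direction over the hemisphere

Vec3 add(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }

Vec3 Raytrace(const Ray& ray, int depth) {
    if (depth <= 0) return { 0, 0, 0 };             // stop after n bounces
    Hit hit = intersectScene(ray);
    if (!hit.valid) return { 0, 0, 0 };             // or background/environment radiance

    Vec3 C = localIllumination(hit, ray);           // direct light via shadow rays
    if (hit.reflective) C = add(C, Raytrace(reflectionRay(hit, ray), depth - 1));
    if (hit.refractive) C = add(C, Raytrace(refractionRay(hit, ray), depth - 1));
    if (hit.diffuse)    C = add(C, Raytrace(sampleBRDF(hit), depth - 1)); // path-tracing step
    return C;
}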
Ray Tracing Algorithm
Fundamental algorithm in computer science (not only used for computer graphics; also used to simulate the path of sound waves, e.g., echoes, in physics, etc.)
Automatic, simple and intuitive
Easy to understand and implement
Delivers “correct“ images by default
Powerful and efficient
Can work in parallel and distributed environments
Logarithmic scalability with scene size: O(log n) vs. O(n)
Real-time ray tracing has become standard on new commercial graphics cards (but is not yet fast enough to test every pixel)
Ray Tracing in Computer Graphics
In the past
Only used as an off-line (CPU-based) technique
Computationally very demanding (minutes to hours per frame)
More Recently
Interactive ray tracing on supercomputers [Parker, U. Utah‘98]
Interactive ray tracing on PCs and distributed on PC clusters [Wald’01]
RPU: First full HW implementation [Siggraph 2005]
Commercial tools: Embree/OSPRey (Intel/CPU), OptiX (Nvidia/GPU)
The entire film industry has switched to ray tracing (Monte Carlo)
Ray tracing systems
Research: PBRT (offline, physically-based, based on book), Mitsuba renderer (EPFL), imbatracer,
…
Commercial: V-Ray (Chaos Group), Corona (Render Legion), VRED (Autodesk), MentalRay/iRay
(MI), …
ReCap: What is Light?
A wave
Diffraction, interference
A particle (photon)
Light comes in discrete energy quanta
A field
Electromagnetic force: exchange of virtual photons
The propagation of light can be well-explained…
…in the large by Einstein’s theory of relativity
…in the small by Maxwell’s equations and quantum theory
…in macroscopic scales???
(Images from: https://wiki.seg.org/, https://sciencestruck.com/wavelength-of-visible-light-spectrum, and https://en.wikipedia.org/)
Rethinking: Light in Macroscopic Scales
Perceived by humans:
Tristimulus color model (Human Visual/Vestibular System)
Psychophysics: tone mapping, compression, motion, …
Assumptions and observations from macroscopic optics:
Scalar, real-valued quantity: radiometrics
Energy preservation
Linear and instant propagation
Superposition principle: light contributions add, do not interact
No polarization (not true)
No dispersion (not true)
No “emitters”, just temperatures that cause radiation
(Images from: https://en.wikipedia.org/wiki/Sun and https://pxhere.com/en/photo/548225)
Radiometric Quantities
Radiometry is the science of measuring radiant energy transfers
Radiometric quantities have physical meaning and can be directly
measured using proper equipment such as spectral photometers.
Radiometric quantities:
Energy [J]: number of photons x photon energy = n · hν
Radiant power [watt = J/s]: total flux
Intensity [watt/sr]: flux from a point per solid angle
Irradiance [watt/m²]: incoming flux per area
Radiosity [watt/m²]: outgoing flux per area
Radiance [watt/(m² sr)]: flux per area and projected solid angle
Light Transport in Computer Graphics
3D Scene
Lights (emitters)
Object surfaces (partially absorbing)
Illuminated objects can become emitters, too!
Radiosity: Irradiance minus absorbed photons flux density
Radiosity: photons per second per m² leaving surface
Irradiance: photons per second per m² incident on surface
Light bounces between all mutually visible surfaces
Invariance of radiance in free space
No absorption in-between objects
Dynamic energy equilibrium
Emitted photons = absorbed photons (+ escaping photons)
Light Types
Illumination refers to the propagation and
diffusion of light from concrete light sources.
These include
Omni lights
Spotlights
Area lights
Direct lights
Sky lights
Light portals (repeating emitters)
Other objects
(Figure: rendering with direct and indirect light, both with schematic light rays / renderstuff.com)
But: by which rule should we compute the photons…?
Color Bleeding and Metamerism
An important effect with indirect lighting is the
emission of colors (not only the radiance)
With strong and bright colors, the color often affects the entire scene.
The emitted colors of a scene can influence each
other and produce similar (or identical color
valences) in humans.
This happens very often in nature and in closed
rooms without neutral light and is known as
metamerism.
Renderers which separate the physical influence of the light color on the subjective perception of the human being are called metameric renderers.
(Figure: rendering with direct and indirect light, both with schematic light rays / renderstuff.com)
The Render Equation
Since Maxwell’s equations and the special theory of relativity do not provide
any results for scenes on a real scale in an acceptable computing time,
computer graphics needs new techniques of light distribution in macroscopic scales.
In 1986, James Kajiya published the so-called render equation to solve this
problem. He proved that all rendering techniques (raytracing, path tracing, light
tracing,…) can be derived from that equation
The rendering equation thus became the “ideal” of a realistic, global lighting
model and led to systematization, comparability and a better understanding
between the effects of light in space and at quantum level
Rendering techniques such as ray tracing, however, did not emerge from the rendering equation, but were later mathematically legitimized by it.
Today, all photorealistic renderers are built to approximate the render
equation
The Render Equation
The original equation published by Kajiya (1986) is:

I(x, x′) = g(x, x′) [ ε(x, x′) + ∫_S ρ(x, x′, x″) I(x′, x″) dx″ ]

James Kajiya: The Rendering Equation. ACM SIGGRAPH Computer Graphics 20, 4 (Aug. 1986)

To describe it more clearly, we select the following (and more common) form of representation of the term:

L_o(x, ω_o) = L_e(x, ω_o) + ∫_Ω f_r(x, ω_i, ω_o) · L_i(x, ω_i) · (ω_i · n) dω_i

with L_o(x, ω_o) the visible surface radiance at surface position x along the outgoing ray ω_o, L_e the term for a “non-realistic” self-emitter of light (sources), ∫_Ω the integral over all angles of the hemisphere, f_r the direction-dependent reflectance (the BRDF!), and L_i the incoming radiance from all directions ω_i.

The most important equation in CG! It can only be approximated, but how? A ray tracer simplifies this.
Radiosity Mesh Refinement
An early approach to computing light according to the rendering equation is the so-called radiosity method, which, unlike ray tracing, is independent of the viewpoint, as the light distribution of the entire scene is computed in a view-independent pre-processing step over the geometry or into textures.
(Image: dRad - A Discontinuity Meshing Radiosity Solver, softlab.ntua.gr)
Monte-Carlo-Integration (MC)
We turn the integral into a finite sum using N random samples. By increasing N:
Expected value remains the same
Variance decreases; the standard deviation (the error) converges with 1/√N
We can emit…
rays from the scene (light tracing)
highly inefficient because only few photons will hit the image plane (very very very
noisy)
rays from the camera (path tracing)
highly efficient because we only need to compute photons we see (just noisy)
easy to implement but how do we sample the scene?
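A minimal sketch of such a Monte Carlo estimator for a 1D integral over [a, b] with uniform samples; the same idea carries over to hemisphere integrals, where each sample is additionally weighted by its probability density:

#include <random>

double mcIntegrate(double (*f)(double), double a, double b, int n) {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> u(a, b);
    double sum = 0.0;
    for (int i = 0; i < n; ++i)
        sum += f(u(rng));       // evaluate the integrand at a random sample
    return (b - a) * sum / n;   // expected value = the integral; error ~ 1/sqrt(n)
}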
Stratified Sampling
With uniform sampling, we can get unlucky, e.g., all samples in a corner („bad day“)
To prevent this, we subdivide the domain Ω into non-overlapping regions Ω_i; each region is called a stratum
Using stratified sampling, the variance can decrease compared with purely uniform random sampling
This procedure is also known as Quasi-Random Monte-Carlo (or QMC); QMC starts from uniformly sending points, however,…
(Figure: rays into the scene (samples); a „bad day“ vs. a fair distribution over the strata.)
Primary samples are primary rays (from the camera). Through sending multiple rays per pixel the algorithm automatically performs anti-aliasing!
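A sketch of stratified (jittered) sample generation over a pixel: the pixel area is split into k x k strata and one uniform random sample is drawn inside each stratum (illustrative helper, not a specific API); exactly this kind of multi-sample pattern per pixel also gives the anti-aliasing mentioned above:

#include <random>
#include <vector>

struct Sample2D { double x, y; };

// One jittered sample per stratum of a k x k grid over the unit square.
std::vector<Sample2D> stratifiedSamples(int k, std::mt19937& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    std::vector<Sample2D> samples;
    samples.reserve(k * k);
    for (int j = 0; j < k; ++j)
        for (int i = 0; i < k; ++i)
            samples.push_back({ (i + u(rng)) / k,     // stratum i plus jitter
                                (j + u(rng)) / k });  // stratum j plus jitter
    return samples;
}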
Uniform vs Adaptive Sampling
(Figure: uniform sampling vs. adaptive sampling. Image from: https://www.academia.edu/13665801/Understanding_Molecular_Simulation_From_Algorithms_to_Applications_volume_1_of_Computational_Science_Series)
Importance Sampling
Uniform distributions are not the optimal way to use MC
e.g., integrating a glossy material in a rendering, it makes sense to
sample more directions around the specular one
We need a functional approximation instead of random
sampling
When the function is regular, the position of the samples has little
impact on the result
When the function is irregular, a small change in the sample position
may cause a large-scale change in the result of MC
Irregular functions (like in renderings) are difficult to sample and cause a high variance in the function estimator
Unfortunately, we have no prior knowledge of the function (the integrand) being integrated (unlike in maths)
It would be great to distribute these samples using a probability distribution function (PDF) that has the same shape as the function
(Image from: https://www.scratchapixel.com/lessons/mathematics-physics-for-computer-graphics/monte-carlo-methods-in-practice/variance-reduction-methods)
Example: Cosine-Weighted Importance Sampling
Importance sampling means: it is often useful to sample from a
distribution that has a shape similar to that of the integrand being
estimated
The render equation weighs the product of the BRDF and the incident
radiance based on a cosine term (the hemisphere)
It would be helpful to have a method that generates directions that are more
likely to be close to the top of the hemisphere, where the cosine term has a
large value, than the bottom, where the cosine term is small
Also known as „Malley‘s
method“
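A sketch of Malley’s method: pick a point uniformly on the unit disk and project it up onto the hemisphere around the local z-axis; the resulting directions are distributed proportionally to cos(θ) (Vec3 and the function name are illustrative):

#include <cmath>
#include <random>

struct Vec3 { double x, y, z; };

const double PI = 3.14159265358979323846;

Vec3 cosineSampleHemisphere(std::mt19937& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    double r   = std::sqrt(u(rng));    // uniform disk sampling: radius
    double phi = 2.0 * PI * u(rng);    // uniform disk sampling: angle
    double x = r * std::cos(phi);
    double y = r * std::sin(phi);
    double z = std::sqrt(1.0 - r * r); // project the disk point up onto the hemisphere
    return { x, y, z };
}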
Rejection Sampling
Rejection sampling is based on the observation that to sample a random
variable in one dimension, one can perform a uniformly random
sampling of the two-dimensional Cartesian graph, and keep the
samples in the region under the graph of its density function
1. Sample a point on the x-axis from the proposal distribution.
2. Draw a vertical line at this x-position, up to the maximum y-value of the PDF
of the proposal distribution.
3. Sample uniformly along this line from 0 to the maximum of the PDF.
If the sampled value is greater than the value of the desired distribution at this
vertical line, reject the x-value and return to step 1
else the x-value is a sample from the desired distribution
Can easily be extended to more dimensions!
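A minimal sketch of these steps with a uniform proposal distribution over [a, b] (pdf, pdfMax and the function name are illustrative; pdfMax must be an upper bound of the target density):

#include <random>

double rejectionSample(double (*pdf)(double), double a, double b,
                       double pdfMax, std::mt19937& rng) {
    std::uniform_real_distribution<double> ux(a, b);
    std::uniform_real_distribution<double> uy(0.0, pdfMax);
    while (true) {
        double x = ux(rng);   // step 1: sample x from the proposal distribution
        double y = uy(rng);   // steps 2-3: sample uniformly along the vertical line
        if (y <= pdf(x))      // keep only samples under the graph of the density
            return x;         // otherwise reject and return to step 1
    }
}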
Ray Casting
1. Cast a primary ray from the eye through each pixel on the camera
plane
2. Trace secondary rays (light, reflection, refraction), stop after n
bounces
3. Accumulate radiance contribution using random rays (approx. render
equation)
image plane
The easiest way to compute the average radiance per pixel is recursion
Using this algorithm, the eye (image
plane) only receives radiance, not
color (RGB) values. To map radiance
to color values, we need tone
mapping.
Light Tracing
Rays are shot from the image plane. If they hit a surface, they are
diffused, reflected or refracted. Except for refractions, new, random rays
are generated that approach the integral of the rendering equation. The
more rays you use, the lower the error in the variance (the noise)
image plane
Boxing scene with global lighting Boxing scene with global lighting
through path tracing, low sample through path tracing, high sample
count / Thomas Kabir / wikipedia.org count / Thomas Kabir / wikipedia.org
Bidirectional Path Tracing
The algorithm generates light paths from both ends. One segment is built
up starting from the camera while the other segment is starting from an
arbitrary point on the surface of a light source. Finally, the two paths are
connected using a deterministic step.
image plane
Bidirectional path tracing by Veach (1997), image from: http://rendering-memo.blogspot.com/2016/03/bidirectional-path-tracing-9-some.html
Gamma Correction
Human visual perception solves the tone mapping problem because it is able to
adapt to the prevailing brightness conditions. The eye reacts non-linearly to
different absolute brightness conditions (photopic, mesopic, scotopic).
Actually, the mapping is linked to a computer monitor’s power function as an intensity-to-voltage response curve (1 : 0.45 ~ 2.2) and can easily be solved using a power function (typically V_out = V_in^(1/γ))
This functional mapping is called gamma correction; the gamma value (the exponent) is typically 2.2 for CRT/LCD screens, however, it can vary!
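As a snippet, per color channel and assuming linear intensities in [0, 1] (2.2 is only the usual default for the exponent):

#include <cmath>

// Encode a linear intensity for display (gamma correction).
float gammaCorrect(float linear, float gamma = 2.2f) {
    return std::pow(linear, 1.0f / gamma);
}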
However, we have a huge problem…
Images from: https://learnopengl.com/Advanced-Lighting/Gamma-Correction
Linear Workflow
RGB image material, which is used e.g., for textures, is always gamma corrected – but in
some cases it should not be at all.
This applies in particular to texture maps that are not used as color maps e.g., as height,
normal, bump, reflection or refraction maps.
such maps should then not be processed with a gamma of 2.2, but linearly with 1.0!
Output material must not be corrected for further use in the compositing program but
must remain linear to avoid losses. Some formats and programs e.g., PNG, TIFF, JPGs with
sRGB are linear and are correctly displayed on the display - but many formats (BMP, JPG
standard) do not support this correction and are then incorrectly saved with a gamma of
2.2.
The user always sees the corrected and not the linear model when viewing the output material,
the real-time display and possibly shader previews must also be able to control the linear
workflow, or the virtual frame buffer must correct the display.
EXR or HDRI material often causes confusion: the brightness at a bit depth of 32 bit is always in LWF (linear workflow), since all HDRI formats store their values linearly
Ray-Traced Ambient Occlusion
Used to estimate how much each point in a scene is exposed to the ambient lighting
The occlusion at a point on a surface with normal N can be computed by integrating the ambient light visibility (dark and bright contribution) function over the hemisphere with respect to the projected solid angle
The integral can easily be approximated using the Monte Carlo method by casting random rays from x and testing for an intersection with other geometry (ray casting)
(Figure: crown model rendered with ambient occlusion. Left: 4 random rays per pixel, uniformly distributed over the hemisphere at the visible point. Right: a reference image of the converged solution rendered with 2048 rays per pixel. Image from: https://link.springer.com/content/pdf/10.1007%2F978-1-4842-4427-2_15.pdf)
Screen Space Ambient Occlusion (SSAO)
MC integration not practicable in real-time: Let’s use Z-buffer!
Horizontal and vertical position give position of point in x,y-direction (camera
space)
Z-buffer and normals gives position and orientation of point (camera space)
Contains discrete representation of all visible geometry
Use only “ray tracing” against this simplified scene (200+ samples needed)
A range check makes sure a fragment only contributes to the occlusion factor if its depth value is within the sample radius
(Figure: SSAO in a corner. SSAO with OpenGL, image from: https://learnopengl.com/Advanced-Lighting/SSAO)
Deferred Rendering
Screen-space shading technique that is performed on a new rendering
pass after vertex and fragment shaders (G-Buffer)
Positions, normals, and materials for each surface are rendered into the
G-buffer using the “render to texture” principle. After this, a pixel shader computes the direct and indirect lighting at each pixel using the information of the texture buffers in screen space (done since 1988, but became popular in games since 2010)
(Figure: a typical compositing graph of render passes: raw diffuse, lighting, ambient occlusion, reflections, specular, refractions, raw GI, background color and alpha are filtered, multiplied and added, gamma-corrected and clipped against the output matte.)
Deferred Rendering
(Figure: a new 3D light added in deferred rendering space, composed from a position pass, a normal pass, a diffuse pass, a new lighting pass and a filter pass.)
Anti-Aliasing
Aliasing everywhere… in 3D, in textures, in 2D
The naive approach to avoid any aliasing effects is to render the image
using a higher resolution and then to downscale the image (2x, 4x, 8x,...)
First implemented by 3dfx VSA-100 (today Nvidia) with best quality but worst
performance
This approach is known as FSAA (Full-Scene-Antialiasing)
Anti-Aliasing Types
Anti-aliasing (AA) is the reduction of undesired effects that arise from sampling a pixel grid or from the staircase effect (see the Nyquist-Shannon sampling theorem)
There are 3 AA types:
kernel-based filtering (best quality, worst performance):
e.g., Box or Gaussian blur
Full Scene Anti Aliasing (FSAA)
spatial (good quality, good performance):
Supersampling anti-aliasing (SSAA)
Multisample anti-aliasing (MSAA)*
Coverage sampling anti-aliasing (CSAA, Nvidia) / enhanced quality anti-aliasing (EQAA, AMD)
post-processed (acceptable quality, good performance):
Fast approximate anti-aliasing (FXAA, Nvidia) / Morphological anti-aliasing (MLAA, AMD)*
Enhanced subpixel morphological anti-aliasing (SMAA)*
Temporal anti-aliasing (TXAA)*
* supported by Unity
Enhanced Quality Anti-Aliasing (EQAA)
Samples multiple locations and combines these samples to produce the
final pixel
A hardware anti-aliasing method that can be used in tandem with other methods (post-processed effects, except temporal AA as it uses motion vectors)
EQAA/CSAA detects if a polygon is present using coverage samples and weights the neighbouring samples
(Figure: None, MSAA 2x, MSAA 4x, MSAA 8x. Images from: https://www.phoronix.com/scan.php?page=news_item&px=RadeonSI-EQAA-Anti-Aliasing and https://docs.unity3d.com/Packages/…/manual/Anti-Aliasing.html#MSAA)
Motion Blur
Motion Blur is a common post-processing effect that
simulates the image blur that happens when an object
is moving faster than the camera’s exposure time.
Can help a scene look more natural because it
replicates what the human eye sees.
Typically, the motion blur is based on motion vectors
provided by the application and blurs the scene using:
Shutter Angle: This is the angle of the rotary shutter. A
larger value results in longer exposures and increases
the strength of the blur effect.
Sample Count: The number of sample points will affect
the quality and performance of the blur.
Image from: https://learn.unity.com/tutorial/post-processing-effects-motion-blur-2019-3#5f49e6d8edbc2a29443e1bfd
Ray Tracing in Real-Time
Does a quick‘n‘dirty approximation of the render equation
Incredibly expensive, don‘t buy that
Noise is removed in post-processing
(Image: NVIDIA Reference PG132 Board for RTX 3090, from Yuuki_Ans / https://videocardz.com)
Noise Filtering
The Spatio-Temporal Variance Guided Filter (SVGF) is a denoiser that uses
spatio-temporal reprojection along with feature buffers like normals,
depth, and variance calculations to drive a bilateral filter to blur regions
of high variance.
e.g., Minecraft RTX uses a form of SVGF, with the addition of irradiance
caching, the use of ray length for better driving reflections, and splitting
rendering for transmissive surfaces such as water.
OIDN: a machine learning autoencoder that takes in an albedo, first-bounce normals, and your noisy input image, and outputs a filtered image.
Nvidia Optix 7 Denoising Autoencoder takes in the same inputs as OIDN,
an optional albedo, normal, and input noisy image, and outputs a filtered
image much faster than Intel's solution at the cost of quality
Deep Learning Super Sampling (DLSS)
Upscaling machine learning techniques that use a small color buffer and a direction map to multiply the resolution of the output 2-4 times.
A neural network is trained by Nvidia using "ideal" images of video
games of ultra-high resolution on supercomputers and low-resolution
images of the same games
The result is stored on the video card driver. It is said that Nvidia uses
DGX-1 servers to perform the training of the network.
Only available on GeForce RTX 20 and GeForce RTX 30 series GPUs, in
dedicated AI accelerators called Tensor Cores
DLSS is exclusive to developers pre-approved by NVIDIA, so there's
currently no way to use this publicly, that being said there's alternatives
such as DirectML's SuperResolution Sample
Deep Learning-based Color Mapping
Physically based simulation of light
transport, principled representation of
material appearance, and even
photogrammetric modeling do not
necessarily lead to photorealistic tone
mapping
Convolutional neural networks that leverage intermediate representations produced by conventional rendering pipelines can enhance typical CG rendering
A model can be trained based on photographs and via novel adversarial objectives that provide strong supervision at multiple perceptual levels
(Image from: https://intel-isl.github.io/PhotorealismEnhancement/ and https://youtu.be/P1IcaBn3ej0; paper available at https://arxiv.org/abs/2105.04619)
Dispersion using Spectral Ray Tracers
An effect that often occurs with real-world materials is dispersion, where a light beam is broken down into its wavelengths.
wavelengths.
The effect occurs because light is made up of different
waves superimposed on one another. These split as
soon as they hit a refracting medium at an angle and
different wavelengths are inclined in different directions.
A spectral ray tracer therefore carries additional ray dimensions: the wavelength and the ray composition.
An intersection with an object breaks the ray into its components and into different beams with different wavelengths.
(Figure: dispersion with V-Ray, source: Renderosity.com and AP PHYSICS & PHYSICAL SCIENCE, cdsd.k12.pa.us)
Relativistic Ray Tracing
Relativistic ray tracing takes into account both the influence of mass and the speed of light (over time). The most important feature of this technique is that light rays can form curves before they hit the image plane.
(Figure: light beams between camera and black hole, http://www.lehrer-online.de/854890.php)
To illustrate all relativistic effects, however, a special ray tracer is required that takes all properties of light into account (Doppler effect, aberration, dispersion, etc.).
However, relativistic ray tracers are only experimental in nature and are difficult to implement due to their enormous complexity. Another problem with relativistic effects is the diffusion of light within nebulae or refracting bodies.
(Figure: a relativistic ray tracer, source: http://www.anu.edu.au/physics/Searle/Obsolete/Raytracer.html)
„Renders Everything You Ever Saw“ (REYES)
REYES is a modular rendering system from Lucasfilm (today Disney/Pixar) developed in the 1980s as a collection of algorithms and data processing methods; it was used for the movie “Star Trek II: The Wrath of Khan”.
Today, REYES is an experimental but production-ready platform based on five
paradigms:
Complexity (Model / Shading): Every conceivable geometric structure should be able to be
created. Shaders can describe surfaces not only via textures, but also via so-called micro-
displacements, which influence transparency, reflectivity... Optimized to minimize CPU time
Minimal Raytracing: Raytracing is very computationally intensive. The aim of REYES is to
achieve photorealistic results without raytracing whenever possible
Speed: The rendering time of REYES must not exceed the calculation of a two-hour film with an average calculation time of three minutes per frame (= about one year per CPU).
Quality: Artifacts, flickers, noise and splotches are excluded
Flexibility: New techniques should be possible without a complete reimplementation
REYES Pipeline
(Figure: the REYES pipeline. 3D model / B-rep → efficient reading → computation of bounding volumes → on the screen? (no: hide) → dividable into smaller primitives? (yes: cut into smaller primitives) → cut objects into micropolygons (texture data, illumination, microdisplacements) → occlusion / visibility / clipping → filtering → rendering. Images from: Star Trek II: The Wrath of Khan, Copyright: Paramount Pictures)
References
Global Illumination Compendium: https://people.cs.kuleuven.be/~philip.dutre/GI/TotalCompendium.pdf
Physically Based Rendering book: https://www.pbrt.org/ and https://www.pbr-book.org
Enhancing Photorealism Enhancement: http://vladlen.info/papers/EPE.pdf
Glassner A. S.: An Introduction to Ray Tracing, 1st Edition, (1989)
Bloomenthal J.: Computer Graphics: Implementation and
Explanation (2019)
Shirley P.: Fundamentals of Computer Graphics, 4th Edition, (2015)