CCS347 Game Development
Unit I 3D Graphics for Game Design
Genres of Games, Basics of 2D and 3D Graphics for Game Avatar, Game
Components – 2D and 3D Transformations – Projections – Color Models –
Illumination and Shader Models – Animation – Controller Based Animation.
1. Genres of Games
1. Action Games: These games typically involve fast-paced gameplay with
a focus on combat, exploration, and reflexes. Examples include first-
person shooters (FPS), platformers, and hack-and-slash games.
2. Adventure Games: Adventure games emphasize exploration, puzzle-
solving, and storytelling. They often feature intricate narratives and
immersive worlds. Point-and-click adventures and role-playing games
(RPGs) are popular subgenres.
3. Role-Playing Games (RPGs): RPGs allow players to assume the roles of
characters within a fictional world. They often involve character
customization, decision-making, and progression through leveling up or
acquiring new abilities.
4. Strategy Games: Strategy games require players to use tactics and
planning to achieve victory. They can be divided into subgenres such as
real-time strategy (RTS), turn-based strategy (TBS), and 4X (explore,
expand, exploit, exterminate) games.
5. Simulation Games: Simulation games aim to replicate real-world
activities or systems. They can cover a wide range of topics, including
city-building, farming, flight simulation, and life simulation.
6. Sports Games: Sports games simulate real-life sports such as soccer,
basketball, and football. They often feature realistic physics, player
statistics, and multiplayer modes for competitive play.
7. Puzzle Games: Puzzle games challenge players to solve problems or
complete tasks using logic, spatial reasoning, and pattern recognition.
Examples include match-three games, Sudoku, and crossword puzzles.
8. Horror Games: Horror games focus on creating tension and fear
through atmospheric design, suspenseful gameplay, and frightening
imagery. They often incorporate elements of other genres, such as
action or adventure.
9. Racing Games: Racing games center around competitive driving
challenges, ranging from realistic simulations to arcade-style
experiences. Players compete against AI opponents or other players in
various vehicles.
10. Fighting Games: Fighting games feature one-on-one combat
between characters with unique abilities and movesets. They emphasize
timing, precision, and strategic thinking to defeat opponents.
11. Platformer Games: Platformers involve navigating characters
through levels filled with obstacles, enemies, and hazards. They often
require precise jumping and timing skills to progress.
12. MMORPGs (Massively Multiplayer Online Role-Playing
Games): MMORPGs are online games where thousands of players
interact in a virtual world simultaneously. They often feature persistent
worlds, character progression, and player-vs-player (PvP) or player-vs-
environment (PvE) gameplay.
13. Educational Games: Educational games aim to teach players
specific skills or knowledge while entertaining them. They cover a wide
range of subjects, including mathematics, language learning, and
history.
2. Basics of 2D and 3D Graphics for Game Avatar
Game avatars are the digital characters that represent players or NPCs (non-
playable characters) in video games. These avatars can be designed in either
2D or 3D, depending on the game's art style and technical requirements.
2D Graphics:
2D (two-dimensional) graphics have only height and width, meaning
they appear flat and lack depth.
Used in side-scrolling platformers, mobile games, and classic arcade
games.
Examples of 2D Games: Super Mario Bros., Hollow Knight, Celeste.
Common Art Styles:
o Pixel Art
o Vector Art
o Hand-drawn Art
Tools for 2D Graphics
Adobe Photoshop – Used for digital painting.
Aseprite – Ideal for pixel art and sprite animation.
Inkscape – For vector-based 2D graphics.
Spine – Used for 2D skeletal animation.
Create a 2D Game Avatar
1. Sprites: In 2D games, characters are often represented as sprites, which
are two-dimensional images or animations. These sprites can be created
using software like Adobe Photoshop or GIMP.
2. Animation: Animating 2D characters involves creating a series of
images that depict various movements and actions. These images are
then displayed sequentially to give the illusion of motion (a short code
sketch follows this list).
3. Character Design: Designing 2D characters involves creating visually
appealing and recognizable sprites that convey personality and
characteristics through their appearance and animations.
4. Resolution: 2D graphics are typically created at specific resolutions,
which dictate the level of detail and clarity of the images. Common
resolutions for 2D games include 640x480, 800x600, and 1024x768.
5. Layering: 2D graphics often utilize layers to organize elements within
the game world. This allows developers to create depth and add visual
complexity to scenes by arranging sprites in different layers.
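A minimal sketch of the sprite-animation idea from steps 1–2: the character is stored as a list of frames, and the frame index advances with elapsed time. The frame names and timing value are hypothetical; a real project would load actual image files with a library such as Pygame.

    # Minimal sprite-animation sketch (hypothetical frame names and timing).
    # Each animation is a list of frame identifiers played back in a loop.
    walk_frames = ["walk_0.png", "walk_1.png", "walk_2.png", "walk_3.png"]
    frame_duration = 0.1   # seconds each frame stays on screen (assumed value)

    def current_frame(frames, elapsed_time, duration=frame_duration):
        """Return the frame to draw after 'elapsed_time' seconds."""
        index = int(elapsed_time / duration) % len(frames)   # wrap around to loop
        return frames[index]

    # After 0.35 s of walking, the fourth frame (index 3) is shown.
    print(current_frame(walk_frames, 0.35))   # -> walk_3.png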
3D Graphics:
3D (three-dimensional) graphics add depth along with height and width,
making characters appear more realistic.
Used in modern games for a more immersive and realistic experience.
Examples of 3D Games: Fortnite, The Witcher 3, Cyberpunk 2077.
Common Techniques:
Polygon Modeling
Texture Mapping
Rigging & Animation
Tools for 3D Graphics
Blender – Free and powerful 3D modeling software.
Autodesk Maya – Industry-standard for character modeling.
ZBrush – Best for sculpting high-detail models.
Substance Painter – Used for advanced texturing.
Create a 3D Game Avatar
1. Modeling: In 3D games, characters are represented as three-
dimensional models composed of vertices, edges, and faces. These
models are created using specialized software like Blender, Maya, or 3ds
Max.
2. Texturing: Texturing involves applying two-dimensional images, called
textures, onto the surfaces of 3D models to give them color, detail, and
surface appearance. Textures are created using software like Substance
Painter or Adobe Photoshop.
3. Rigging and Animation: Rigging is the process of creating a skeletal
structure for a 3D model, which allows it to be animated realistically.
Animations for 3D characters involve manipulating the model's skeletal
rig to create movements and actions.
4. Character Design: 3D character design involves creating detailed and
anatomically accurate models that convey personality and expression
through their appearance and animations.
5. Rendering: Rendering is the process of generating the final 2D images
from the 3D scene. This involves applying lighting, shadows, reflections,
and other visual effects to create a realistic or stylized appearance.
6. Polycount: The polycount refers to the number of polygons (or
triangles) used to construct a 3D model. Higher polycounts allow for
greater detail but require more computational resources to render.
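As a rough illustration of how a model is stored (vertex positions plus triangular faces) and of why the polycount is simply the number of triangles, here is a small sketch of a flat quad built from two triangles; the data layout is generic and not tied to any particular tool.

    # A 3D mesh as plain data: vertex positions plus triangles that index into them.
    vertices = [
        (0.0, 0.0, 0.0),   # vertex 0
        (1.0, 0.0, 0.0),   # vertex 1
        (1.0, 1.0, 0.0),   # vertex 2
        (0.0, 1.0, 0.0),   # vertex 3
    ]
    triangles = [
        (0, 1, 2),   # first triangle of the quad
        (0, 2, 3),   # second triangle of the quad
    ]

    polycount = len(triangles)          # polycount = number of triangles
    print("polycount:", polycount)      # -> polycount: 2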
Both 2D and 3D graphics have their advantages and are used in various types
of games depending on the desired visual style, technical requirements, and
artistic preferences of the developers.
2D Avatar character design:
3D Avatar character design:
Key Differences Between 2D and 3D Graphics
3. Game Components
Game components are the fundamental elements that make up a game. These
components work together to create the overall gameplay experience. Some
common game components are:
1. Game Engine: The game engine is the software framework that
provides developers with tools and functionalities to create and manage
various aspects of the game, including graphics, physics, audio, and
artificial intelligence.
Popular Game Engines: Unity, Unreal Engine, Godot, CryEngine, GameMaker.
2. Graphics: (How the Game Looks) Graphics encompass all visual
elements of the game, including characters, environments, animations,
user interfaces, and special effects. Graphics can be 2D or 3D, depending
on the style and requirements of the game.
Types of Game Graphics:
2D Graphics – Used in platformers and mobile games.
3D Graphics – Used in modern AAA and open-world games.
Pixel Art – A retro style often used in indie games.
Vector Graphics – Clean and scalable, used in casual games.
Shaders & Lighting – Control reflections, shadows, and atmospheric
effects.
3. Audio: (How the Game Sounds) Audio components include background
music, sound effects, voice acting, and ambient sounds. These elements
contribute to the atmosphere, immersion, and overall experience of the
game.
Types of Game Audio:
Background Music – Sets the mood (e.g., horror games have eerie
soundtracks).
Sound Effects (SFX) – Footsteps, gunshots, item pickups, etc.
Character Voices – Dialogues and voice acting for NPCs.
Environmental Sounds – Wind, rain, explosions, crowd noise.
4. User Interface (UI): (How the Player Interacts) The user interface
comprises menus, HUD (heads-up display), buttons, icons, and other
interactive elements that allow players to navigate the game, access
options, and interact with the game world.
Key UI Elements:
HUD (Heads-Up Display) – Displays health, score, and inventory.
Menus & Options – Includes pause screens, settings, and inventory.
Minimap & Navigation – Helps players find their way in large game
worlds.
Dialogue & Notifications – Provides in-game text, mission updates,
and prompts.
5. Gameplay Mechanics: (How the Game Works) Gameplay mechanics
are the rules, systems, and interactions that define how the game is
played. This includes movement, combat, puzzles, resource
management, progression, and win/lose conditions.
Key Mechanics in Games:
Player Movement – Walking, jumping, running, flying, or swimming.
Combat Systems – Melee attacks, shooting, blocking, and dodging.
Physics & Collision – Gravity, object interactions, and collisions.
Level Progression – Unlocking new areas, gaining experience, and
upgrading skills.
Win/Lose Conditions – Defining how a player wins or loses a game.
6. Characters: Characters are the entities controlled by players or
controlled by the game's artificial intelligence. This includes player
avatars, NPCs (non-player characters), enemies, allies, and any other
entities that inhabit the game world.
7. Levels/Maps: Levels or maps are the playable areas within the game
world. They can vary in size, complexity, and design, offering different
challenges, environments, and objectives for players to explore and
complete.
8. Storyline/Narrative: (How the Game Engages) The storyline or
narrative provides context, plot, and structure to the game. It includes
dialogue, cutscenes, lore, backstory, and character development,
enriching the player's experience and immersion in the game world.
Types of Game Storytelling:
Linear Story – A fixed sequence of events (The Last of Us).
Branching Narrative – Player choices affect the outcome (The Witcher 3).
Emergent Storytelling – Story develops based on gameplay decisions
(Minecraft).
Lore & World-Building – Background stories that enhance immersion (Dark
Souls).
9. Physics: Physics simulation governs the behavior of objects and
characters within the game world, including movement, collision
detection, gravity, inertia, and other physical interactions. Realistic
physics can enhance immersion and gameplay realism.
10. Networking: Networking components enable multiplayer
functionality, allowing players to connect, communicate, and play with
each other over the internet or local network. This includes
matchmaking, multiplayer modes, peer-to-peer or client-server
architectures, and network synchronization.
11. Artificial Intelligence (AI): (How the Game Thinks) AI
components control the behavior and decision-making of non-player
characters (NPCs) and enemies within the game. This includes
pathfinding, enemy behaviors, adaptive difficulty, and other AI
techniques to create challenging and engaging gameplay experiences.
12. Input Controls: (How the Player Inputs Actions) Input controls
allow players to interact with the game using devices such as keyboards,
mice, controllers, touchscreens, or motion controllers. Responsive and
intuitive controls are essential for smooth and enjoyable gameplay.
Types of Controls:
Keyboard & Mouse – Used in PC gaming.
Game Controllers – Used in console gaming (PlayStation, Xbox).
Touch Controls – Used in mobile games.
Motion Controls – Used in VR and Wii games.
4. 2D and 3D Transformations
Transformations are essential operations in both 2D and 3D graphics that
manipulate the position, orientation, and scale of objects within a virtual
space.
2D Transformations:
A 2D transformation modifies the position, shape, or size of a 2D
object (e.g., a sprite in a game). It is represented using matrices for
efficient computation.
1. Translation: (Shifting Position) This involves moving an object from
one position to another along the x and y axes. The translation
operation is typically represented by adding or subtracting values to the
object's coordinates.
Moves an object from one place to another in a 2D plane.
Defined by Tx (change in X) and Ty (change in Y).
Formula: x′ = x + Tx, y′ = y + Ty
2. Rotation: (Spinning Around a Point) Rotation involves rotating an
object around a specified point by a certain angle. The rotation can be
clockwise or counterclockwise and is usually performed around the
object's origin or a specific pivot point.
Rotates an object by an angle θ around a pivot.
Formula (rotation about the origin by angle θ): x′ = x cos θ − y sin θ, y′ = x sin θ + y cos θ
3. Scaling: (Changing Size) Scaling modifies the size of an object along the
x and y axes. It involves multiplying or dividing the object's dimensions
by specified scale factors to make it larger or smaller.
Enlarges or shrinks an object.
Defined by Sx (scale in X) and Sy (scale in Y).
Formula: x′ = x · Sx, y′ = y · Sy
4. Shearing: (Slanting Effect) Shearing distorts an object by skewing its
shape along one axis while keeping the other axis unchanged. It is often
used to create perspective effects or simulate slanted surfaces.
Skews an object along the X or Y axis.
Formula (X-Shear): x′ = x + Shx · y, y′ = y
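The 2D transformations above can all be written as 3×3 matrices acting on homogeneous coordinates (x, y, 1), which is how engines typically combine them. A small numpy sketch, using assumed example values for the translation, rotation angle, and scale factors:

    import numpy as np

    def translate(tx, ty):
        return np.array([[1, 0, tx],
                         [0, 1, ty],
                         [0, 0, 1]], dtype=float)

    def rotate(theta):
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s, 0],
                         [s,  c, 0],
                         [0,  0, 1]], dtype=float)

    def scale(sx, sy):
        return np.array([[sx, 0, 0],
                         [0, sy, 0],
                         [0,  0, 1]], dtype=float)

    p = np.array([2.0, 1.0, 1.0])                          # point (2, 1) in homogeneous form
    M = translate(5, 3) @ rotate(np.pi / 2) @ scale(2, 2)  # scale, then rotate, then translate
    print(M @ p)                                           # transformed point -> [3. 7. 1.]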
3D Transformations:
A 3D transformation modifies objects in three dimensions (X, Y, Z). It
is used for modeling, animation, and rendering in 3D game
development.
1. Translation: (Moving in 3D Space) Similar to 2D translation, 3D
translation involves moving an object from one position to another
along the x, y, and z axes. Objects can be translated in any direction in
3D space.
Moves an object along X, Y, and Z axes.
Formula: x′ = x + Tx, y′ = y + Ty, z′ = z + Tz
2. Rotation: (Spinning in 3D) 3D rotation involves rotating an object
around an axis in 3D space. Unlike 2D rotation, 3D rotation can occur
around any arbitrary axis, such as the x, y, or z axis, or a custom axis
defined by the user.
Rotates an object around X, Y, or Z axes.
Rotation about the X-axis
Formula: x′ = x, y′ = y cos θ − z sin θ, z′ = y sin θ + z cos θ
3. Scaling: (Resizing in 3D) Scaling in 3D involves modifying the size of an
object along the x, y, and z axes independently. This allows for non-
uniform scaling, where the object can be stretched or squashed along
different axes.
Changes the size of an object in X, Y, and Z axes.
Formula: x′ = x · Sx, y′ = y · Sy, z′ = z · Sz
4. Shearing: Shearing in 3D distorts an object by skewing its shape along
one or more axes while keeping the others unchanged. It can be used to
create perspective effects, deformations, or simulate non-linear
transformations.
5. Projection: Projection transforms 3D objects onto a 2D plane for
rendering. There are various types of projections, including perspective
projection, which simulates how objects appear smaller as they move
away from the viewer, and orthographic projection, which preserves the
relative size of objects regardless of their distance from the viewer.
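In practice the 3D operations above are combined into a single 4×4 matrix applied to homogeneous points (x, y, z, 1). A brief sketch with assumed values, rotating a point about the X-axis and then translating it along Z:

    import numpy as np

    def translate3(tx, ty, tz):
        m = np.eye(4)
        m[:3, 3] = [tx, ty, tz]
        return m

    def rotate_x(theta):
        c, s = np.cos(theta), np.sin(theta)
        m = np.eye(4)
        m[1, 1], m[1, 2] = c, -s    # y' = y cos θ − z sin θ
        m[2, 1], m[2, 2] = s,  c    # z' = y sin θ + z cos θ
        return m

    def scale3(sx, sy, sz):
        return np.diag([sx, sy, sz, 1.0])

    p = np.array([1.0, 2.0, 0.0, 1.0])
    M = translate3(0, 0, 5) @ rotate_x(np.pi / 2) @ scale3(1, 1, 1)
    print(M @ p)    # ≈ [1, 0, 7, 1] (up to floating-point error)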
(Figure: orthographic, isometric, and perspective projections.)
These transformations are fundamental to creating dynamic and interactive
graphics in both 2D and 3D environments. They enable developers to
manipulate objects in space, create animations, simulate movement, and
achieve a wide range of visual effects.
Comparison of 2D vs. 3D Transformations

Aspect               | 2D Transformations                                                   | 3D Transformations
---------------------|----------------------------------------------------------------------|---------------------------------------------------------------------
Dimension            | Two-dimensional space (x and y axes)                                 | Three-dimensional space (x, y, and z axes)
Representation       | Objects are represented on a flat surface (e.g., a computer screen)  | Objects are represented in a 3D environment
Types                | Translation, rotation, scaling, shearing, reflection                 | Translation, rotation, scaling, shearing, reflection, perspective projection, etc.
Coordinates affected | Only x and y coordinates                                             | x, y, and z coordinates
Depth                | No depth information (z-coordinate is constant)                      | Depth information allows objects with volume and depth
Realism              | Limited to flat, 2D representations                                  | Enables more realistic 3D graphics and animations
Applications         | GUIs, image processing, 2D animations, CAD                           | 3D modeling, animation, virtual reality, simulations, game development
5. Projections
Projections are a crucial aspect of 2D and 3D graphics, particularly in
computer graphics, where they are used to convert three-dimensional objects
into two-dimensional representations for display on screens or other flat
surfaces.
2D Projections:
1. Identity Projection: This is the simplest form of projection, where
points in a 2D space remain unchanged. It's essentially a flat view with
no transformation.
2. Orthographic Projection: In orthographic projection, objects are
projected onto a plane parallel to the viewing plane. This means that all
lines perpendicular to the viewing plane remain parallel after
projection. It's commonly used in technical drawing and engineering to
represent objects accurately without perspective distortion.
3. Oblique Projection: Oblique projection involves projecting objects
onto a plane at an angle other than perpendicular. This can create a
sense of depth and perspective but isn't as realistic as perspective
projection.
3D Projections:
1. Orthographic Projection: Similar to 2D orthographic projection, in 3D
graphics, orthographic projection involves projecting 3D objects onto a
2D plane without accounting for perspective. This results in objects
appearing the same size regardless of their distance from the viewer.
It's often used in technical visualization and CAD applications.
2. Perspective Projection: Perspective projection simulates how objects
appear smaller as they move away from the viewer, creating a sense of
depth and realism. It's based on the principles of geometry and mimics
how the human eye perceives depth in the real world. Perspective
projection is commonly used in 3D graphics for rendering scenes in
video games, virtual reality, and computer-generated imagery (CGI) for
movies and animations.
3. Parallel Projection: Parallel projection is a type of projection where
lines from the viewer to the object remain parallel after projection. This
means that objects maintain their size and shape regardless of their
distance from the viewer. Parallel projection is often used in technical
drawing and architectural rendering.
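The practical difference between the two main 3D projections can be seen by projecting points onto an image plane at distance d: orthographic projection simply drops the depth coordinate, while perspective projection divides by it. A rough sketch (the focal distance d is an assumed value):

    def orthographic(x, y, z):
        # Depth is ignored; apparent size does not change with distance.
        return (x, y)

    def perspective(x, y, z, d=1.0):
        # Similar triangles: points farther away (larger z) project closer to the center.
        return (d * x / z, d * y / z)

    near_point = (2.0, 2.0, 2.0)
    far_point  = (2.0, 2.0, 8.0)

    print(orthographic(*near_point), orthographic(*far_point))   # (2.0, 2.0) (2.0, 2.0)
    print(perspective(*near_point),  perspective(*far_point))    # (1.0, 1.0) (0.25, 0.25)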
Projections play a vital role in transforming three-dimensional objects into
two-dimensional representations for display. They enable realistic rendering
of scenes in 3D graphics and accurate representation of objects in technical
drawings and engineering applications. Different types of projections offer
various trade-offs between accuracy, realism, and computational complexity,
depending on the requirements of the specific application.
6. Color Models
Color models are mathematical representations used to describe and define
colors in digital imaging, graphics, and display technologies. There are several
color models, each with its own way of representing colors based on different
principles. Here are some common color models:
1. RGB (Red, Green, Blue):
RGB is an additive color model used in digital displays, where colors are
created by mixing varying intensities of red, green, and blue light.
Each color channel (red, green, and blue) is typically represented as an
8-bit value, ranging from 0 to 255, where 0 represents no intensity and
255 represents full intensity.
Used in screens, monitors, TVs, and digital cameras.
The primary colors (Red, Green, and Blue) combine to form white light
at full intensity.
By combining different intensities of red, green, and blue light, a wide
range of colors can be produced.
How It Works:
1. (0,0,0) → Black (No light)
2. (255,255,255) → White (Full light)
3. (255,0,0) → Red, (0,255,0) → Green, (0,0,255) → Blue
Combining colors:
1. Red + Green = Yellow
2. Green + Blue = Cyan
3. Blue + Red = Magenta
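A small sketch of additive mixing in the RGB model: channel values are added and clamped to the 0–255 range, which reproduces the combinations listed above.

    def mix_rgb(c1, c2):
        # Additive mixing: add per channel and clamp to the 8-bit range.
        return tuple(min(a + b, 255) for a, b in zip(c1, c2))

    RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)

    print(mix_rgb(RED, GREEN))    # (255, 255, 0)   -> Yellow
    print(mix_rgb(GREEN, BLUE))   # (0, 255, 255)   -> Cyan
    print(mix_rgb(BLUE, RED))     # (255, 0, 255)   -> Magenta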
2. CMY (Cyan, Magenta, Yellow) and CMYK (Cyan, Magenta, Yellow,
Key/Black):
CMY is a subtractive color model used in printing and color mixing. In
this model, colors are created by subtracting varying amounts of cyan,
magenta, and yellow pigments from white.
CMYK is an extension of CMY, where the "K" stands for "Key," which
represents black. It is added to improve color reproduction and to
produce richer blacks in printed materials.
CMYK is commonly used in color printing, where colors are specified
using percentages of cyan, magenta, yellow, and black ink.
How It Works:
1. (0,0,0,100) → Black (Full black ink)
2. (0,0,0,0) → White (No ink)
3. (100,0,0,0) → Cyan, (0,100,0,0) → Magenta, (0,0,100,0) →
Yellow
Combining colors:
1. Cyan + Magenta = Blue
2. Magenta + Yellow = Red
3. Yellow + Cyan = Green
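One common, simplified way to relate the two models is to normalise RGB to the 0–1 range, take C = 1 − R, M = 1 − G, Y = 1 − B, and then pull out the shared black component K. A sketch (values expressed as percentages to match the examples above):

    def rgb_to_cmyk(r, g, b):
        # Normalise 8-bit RGB to the 0-1 range.
        r, g, b = r / 255.0, g / 255.0, b / 255.0
        k = 1.0 - max(r, g, b)                 # shared black component
        if k == 1.0:                           # pure black: avoid division by zero
            return (0.0, 0.0, 0.0, 100.0)
        c = (1.0 - r - k) / (1.0 - k)
        m = (1.0 - g - k) / (1.0 - k)
        y = (1.0 - b - k) / (1.0 - k)
        return tuple(round(v * 100.0, 1) for v in (c, m, y, k))

    print(rgb_to_cmyk(255, 0, 0))   # red   -> (0.0, 100.0, 100.0, 0.0)
    print(rgb_to_cmyk(0, 0, 0))     # black -> (0.0, 0.0, 0.0, 100.0)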
3. HSB/HSV (Hue, Saturation, Brightness/Value):
HSB/HSV is a cylindrical color model that represents colors based on
their hue, saturation, and brightness/value.
Hue represents the type of color (e.g., red, green, blue) and is
represented as an angle around a color wheel.
Saturation represents the intensity or purity of the color and is typically
represented as a percentage.
Brightness (or Value) represents the brightness of the color and is
typically represented as a percentage or value between 0 and 255.
Hue (H): The type of color (0°–360° in a color wheel).
Saturation (S): The intensity of the color (0% = grayscale, 100% = pure
color).
Value (V): The brightness of the color (0% = black, 100% = full
brightness).
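Python's standard colorsys module implements this RGB-to-HSV conversion. A quick sketch for pure red (colorsys works on 0–1 channel values, so the result is rescaled to the degree and percent ranges described above):

    import colorsys

    r, g, b = 255, 0, 0                              # pure red
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)

    print("Hue:        %.0f degrees" % (h * 360))    # 0 degrees
    print("Saturation: %.0f%%" % (s * 100))          # 100%
    print("Value:      %.0f%%" % (v * 100))          # 100%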
4. HSL (Hue, Saturation, Lightness):
Similar to HSB/HSV, HSL is a cylindrical color model that
represents colors based on their hue, saturation, and lightness.
Lightness represents the brightness of the color, but unlike
brightness in HSB/HSV, lightness is calculated by averaging the
maximum and minimum color component values.
5. Lab (CIELAB):
Lab is a color model defined by the International Commission on
Illumination (CIE) that is designed to be perceptually uniform,
meaning that equal distances in Lab space correspond to equal
perceptual differences in color.
Lab color space consists of three components: L* (lightness), a*
(green to red), and b* (blue to yellow). L* represents lightness on
a scale from 0 to 100, while a* and b* represent color opponent
dimensions.
These are some of the most common color models used in digital imaging,
graphics design, printing, and various other applications.
7. Illumination and Shader Models
Illumination and shader models are essential for rendering realistic lighting
and materials in computer graphics, gaming, and 3D simulations. They define
how light interacts with surfaces to create effects such as shading, reflections,
and shadows.
Illumination Models
An illumination model (or lighting model) describes how light interacts
with objects in a scene.
It determines the color and brightness of a surface based on light
sources and material properties.
Illumination or lighting refers to the techniques handling the interaction
between light sources and objects.
The lighting models are often divided into two categories:
1. Local illumination
2. Global illumination.
1. Local illumination
The local illumination model considers only direct lighting: the illumination
of a surface depends solely on the properties of the surface and the light
sources.
2. Global illumination
In the real world, however, every surface also receives light indirectly.
Even though a light source may be invisible from a particular point of the
scene, light can still reach that point through reflections or refractions
from other surfaces of the scene.
The global illumination model considers the scene objects as potential
indirect light sources.
Unfortunately, the cost for global illumination is often too high to permit
interactivity.
Consequently, local illumination has been dominant in games so as to cope
with the real-time constraints.
The rasterization-based architecture of the GPU is suitable for local
illumination.
On the other hand, the special effects industry has adopted global
illumination models for generating photorealistic visual effects, which are
usually produced off-line on the CPU. It may often take several minutes or
hours to generate an image.
There are two primary components of illumination:
1. Light Sources:
These are the virtual representations of light emitters within a scene.
Examples include directional lights (e.g., sunlight), point lights (e.g., light
bulbs), spotlights, and ambient lights.
Each type of light source contributes differently to the illumination of
objects based on its position, intensity, color, and other properties.
2. Surface Properties: Surfaces in a 3D scene have various properties that
determine how they interact with light. The most common properties
include:
Diffuse Reflectance: Determines how much light is diffusely
reflected from a surface in all directions.
Specular Reflectance: Determines how much light is reflected in
a specular (mirror-like) manner, creating highlights.
Ambient Reflectance: Represents the amount of light a surface
receives from indirect illumination in the environment.
Transparency: Determines how much light passes through a
surface, affecting its appearance and creating translucent or
transparent effects.
Types of Illumination Models:
(a) Flat Illumination Model
Simplest model, assigns a uniform color to an entire surface.
No shading or variation in brightness.
Used in early graphics and low-complexity applications.
Example: Classic arcade games like Pac-Man.
(b) Diffuse Illumination (Lambertian Reflection)
Surfaces reflect light uniformly in all directions.
The intensity depends on the angle between the light source and the
surface normal.
Used for matte surfaces like paper or cloth.
Example: Non-shiny walls in 3D games.
(c) Specular Illumination (Phong Reflection Model)
Simulates shiny or reflective surfaces.
Light reflects more in a specific direction, creating highlights.
Used for metal, glass, polished surfaces.
Example: Reflections on a car in Gran Turismo.
(d) Ambient Illumination
A constant light applied to all objects to simulate indirect light (e.g.,
room lighting).
Prevents complete darkness in shaded areas.
Example: Shadows in a room still having slight visibility.
(e) Global Illumination
Considers how light bounces between objects.
Includes advanced effects like radiosity, ray tracing, and path tracing.
Used in realistic rendering.
Example: RTX ray tracing in modern games.
1. Phong Lighting Model
For rendering a surface illuminated by a light source, we need to represent the
irradiance measured at the surface and the outgoing radiance reaching the
camera. The relationship between them is described by BRDF (bidirectional
reflectance distribution function). A simplified BRDF was proposed by Phong.
The Phong model is a local illumination technique.
Components of the Phong Lighting Model
In the Phong model, the perceived color of a surface point is defined by four
terms named diffuse, specular, ambient, and emissive.
The diffuse and specular terms deal with the light ray directly coming from
the light source to the surface point to be lit, whereas the ambient term
accounts for indirect lighting. The emissive term applies for the object
emitting light itself. Fig. 5.1 illustrates the four terms and their sum.
1. Diffuse Reflection
For lighting computation, it is necessary to specify the light source types.
Frequently used are the point, area, spot, and directional light sources. The
simplest among them is the directional light source, which is assumed to be
considerably distant from the scene. The sun is a good example.
The diffuse term is based on the Lambert’s law, which states that
reflections from ideally diffuse surfaces (called Lambertian surfaces) are
scattered with equal intensity in all directions, as illustrated in Fig. 5.1-(a).
Therefore, the amount of perceived reflection is independent of the view
direction and is just proportional to the amount of incoming light.
See Fig. 5.2-(a). The light source is considerably distant, and consequently
the light vector l that connects the surface point p and the light source is
constant for a scene.
The incident angle θ of light at p is between l and the surface normal n. If θ
becomes smaller, p receives more light. Assuming l and n are normalized,
the dot product of n and l is used to measure the amount of incident light:
n·l
When θ = 0, i.e., n = l, n · l equals 1, and therefore p receives the maximum
amount of light (Fig. 5.2-(b)). When θ = 90°, n · l equals 0, and p receives
no light (Fig. 5.2-(c)). Note that, when θ > 90°, p does not receive any light
(Fig. 5.2-(d)), so the amount of incident light should be zero, yet n · l
becomes negative. To resolve this problem, the n · l term is extended to the
following:
max(n · l, 0)
max(n · l, 0) determines only the ‘amount’ of incident light.
The perceived ‘color’ of the surface point p is defined as follows:
sd ⊗ md
where sd is the RGB color of the light source, md is the diffuse reflectance of
the object material, and ⊗ represents the component-wise multiplication.
The diffuse reflection term of the Phong model is defined by
max(n · l, 0) sd ⊗ md
Models how light spreads evenly across a matte surface.
The intensity depends on the angle between the light source and the
surface normal.
No shiny highlights, only soft shading.
2. Specular Reflection
The specular term is used to make a surface look shiny via highlights, and it
requires view vector and reflection vector in addition to the light vector l.
The normalized view vector, denoted by v in Fig. 5.3-(a), connects the
surface point p and the camera position. On the other hand, the light vector
l is reflected at p to define the reflection vector r.
Consider the angle ρ between r and v. For a perfectly shiny surface, the
highlight at p is visible to the camera only when ρ = 0.
For a surface that is not perfectly shiny, the maximum highlight occurs
when ρ = 0 but decreases rapidly as ρ increases.
Fig. 5.3-(b) illustrates the cone of the reflected light rays, the axis of which
is r. If v is located within the cone, the highlight is visible to the camera.
The rapid decrease of highlights within the cone is often approximated by
(r · v)^sh
where sh represents the shininess (smoothness) of the surface.
When r = v, (r · v)^sh = 1, regardless of the value of sh, and the maximum
highlight is visible to the camera. When r ≠ v, the highlight is less likely to
be visible as sh increases.
The specular term is defined as
(max(r · v, 0))^sh ss ⊗ ms
where ss is the RGB color of the specular light, and ms is the specular
reflectance of the object material.
Simulates the shiny highlights seen on polished surfaces.
The intensity depends on the viewer's position relative to the light source.
Produces bright spots where light is reflected directly.
3. Ambient Reflection
The ambient light describes the light reflected from the various objects in
the scene, i.e., it accounts for indirect lighting.
The ambient light has bounced around in the scene and arrives at a surface
point from all directions, rather than along a particular direction.
As a consequence, reflections from the surface point are also scattered with
equal intensity in all directions.
These facts imply that the amount of ambient light incident on a surface
point is independent of the surface orientation, and the amount of
perceived reflection is independent of the view direction.
Therefore, the ambient reflection term is simply defined as follows:
sa ⊗ ma
where sa is the RGB color of the ambient light, and ma is the ambient
reflectance of the object material.
Represents indirect light that is scattered evenly in all directions.
Ensures that objects are visible even if no direct light source hits them.
Prevents completely black shadows.
4. Emissive Light
The emissive term describes the amount of light emitted by a surface itself.
It is simply an RGB color value and is denoted by me. In the local
illumination model, an emissive object per se is not a light source, and it
does not illuminate the other objects in the scene.
The Phong model sums the four terms to determine the surface color:
max(n · l, 0) sd ⊗ md + (max(r · v, 0))^sh ss ⊗ ms + sa ⊗ ma + me
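The complete Phong equation above translates almost directly into code. A compact numpy sketch for a single surface point, with all vectors, colors, and the shininess value chosen as illustrative assumptions:

    import numpy as np

    def normalize(v):
        return v / np.linalg.norm(v)

    def phong(n, l, v, sd, md, ss, ms, sa, ma, me, sh):
        # Diffuse term: max(n . l, 0) * sd (component-wise) md
        diff = max(np.dot(n, l), 0.0) * sd * md
        # Reflection vector r of the light direction l about the normal n.
        r = normalize(2.0 * np.dot(n, l) * n - l)
        # Specular term: (max(r . v, 0))^sh * ss (component-wise) ms
        spec = max(np.dot(r, v), 0.0) ** sh * ss * ms
        # Ambient and emissive terms.
        return diff + spec + sa * ma + me

    n = normalize(np.array([0.0, 1.0, 0.0]))      # surface normal
    l = normalize(np.array([1.0, 1.0, 0.0]))      # direction towards the light
    v = normalize(np.array([0.0, 1.0, 1.0]))      # direction towards the camera

    color = phong(n, l, v,
                  sd=np.array([1.0, 1.0, 1.0]), md=np.array([0.8, 0.1, 0.1]),
                  ss=np.array([1.0, 1.0, 1.0]), ms=np.array([0.5, 0.5, 0.5]),
                  sa=np.array([0.2, 0.2, 0.2]), ma=np.array([0.8, 0.1, 0.1]),
                  me=np.array([0.0, 0.0, 0.0]), sh=32.0)
    print(np.clip(color, 0.0, 1.0))               # final RGB color of the surface point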
Shader Models:
Shader models are algorithms or programs used to calculate the
appearance of surfaces and objects in a 3D scene. They define how light
interacts with materials and determine the color, texture, and other visual
properties of rendered pixels.
Shader models define how objects are rendered in 3D applications by
determining how light interacts with surfaces.
Shaders are small programs that run on the GPU (Graphics Processing
Unit) and control the appearance of objects in real-time rendering and
offline rendering.
There are different types of shaders, each responsible for different aspects
of the rendering pipeline:
1. Vertex Shader: Operates on individual vertices of 3D models and is
responsible for transforming vertices from object space to screen space,
as well as performing per-vertex calculations such as lighting and
texture coordinates.
2. Fragment Shader (Pixel Shader): Operates on individual fragments
(pixels) generated by rasterizing primitives (e.g., triangles) and is
responsible for calculating the final color of each pixel. Fragment
shaders are often used for per-pixel lighting calculations, texture
mapping, and other effects.
3. Geometry Shader: Operates on entire primitives (e.g., triangles) and
can generate new geometry or perform operations such as tessellation
or particle effects.
Shader models are essential for achieving realistic lighting and material
effects in computer graphics, and they are widely used in rendering engines
for games, simulations, visualizations, and other applications.
Different shader models may be used depending on the complexity of the
scene, the hardware capabilities, and the desired visual style.
Shader Models (Versions)
High-Level Shading Language
A few languages have been developed for shader programming.
They include
High Level Shading Language (HLSL) developed by Microsoft for use
with the Direct3D API, and
Cg (C for graphics) developed by NVIDIA.
HLSL and Cg are very similar because Microsoft and NVIDIA have worked
in close collaboration for developing the shading languages.
Another popular language is GLSL (OpenGL Shading Language)
developed for use with the OpenGL API
Applications of Shader Models
1. Video Games: Realistic character shading
2. CGI in Movies: Special effects, fire, water, and fog in films (Marvel,
Pixar).
3. 3D Modeling & Rendering: Used in Blender, Maya, and Unreal Engine.
4. Scientific Simulations: Weather forecasting, medical imaging, and
physics calculations.
Shader Models
GPUs have continuously evolved. The evolution can be traced in terms
of the shader models of Direct3D.
A shader model corresponds to a particular GPU architecture with
distinct capabilities.
Shader Model 1 started with Direct3D 8 and included assembly level
and C-like instructions.
In 2002, Direct3D 9 was released with Shader Model 2. Since then
Direct3D 9 has been updated, and Shader Model 3 was introduced in
Direct3D 9.0c released in 2004.
Shader Model 4 came with Direct3D 10 in late 2006. In 2008,
Shader Model 5 was announced with Direct3D 11.
1. Shader Model 1 (SM 1.0 & 1.1) – The Birth of Programmable Shaders
Introduced in DirectX 8.
Allowed developers to write simple vertex and pixel shaders.
Still highly limited compared to later models.
Rendering Pipeline
1. Vertex Shader – Processes vertex positions & transformations.
2. Pixel Shader – Modifies pixel colors based on textures.
New Features
Basic per-vertex lighting.
Simple texture sampling in pixel shaders.
Advantages
First implementation of programmable shaders.
More control over object lighting and colors.
Disadvantages
Very limited instruction count (roughly 128 instructions for vertex shaders and far fewer for pixel shaders).
No dynamic branching (can’t change execution flow in real-time).
2. Shader Model 2 (SM 2.0) – More Realistic Texturing
Introduced in DirectX 9.0, improving shader complexity.
Allowed more instructions and multiple textures per shader.
Rendering Pipeline
1. Vertex Shader – Supports longer, more complex transformations.
2. Pixel Shader – Supports multiple texture samplers & blending.
New Features
Increased instruction limits (up to 256 instructions).
Multiple texture lookups per pixel shader.
Advantages
Allowed better lighting, shadow, and texture effects.
Higher precision calculations for colors and lighting.
Disadvantages
No dynamic flow control (can’t use loops or if-statements).
Limited floating-point precision, causing rendering artifacts.
3. Shader Model 3 (SM 3.0) – Dynamic Branching & Multi-Texturing
Introduced in DirectX 9.0c, adding branching and looping for better
shader efficiency.
Rendering Pipeline
1. Vertex Shader – Uses dynamic branching for efficient transformations.
2. Pixel Shader – Supports Multiple Render Targets (MRTs).
New Features
Dynamic branching & looping – Shaders skip unnecessary calculations,
improving performance.
Multiple Render Targets (MRTs) – Allows one shader to write to
multiple textures simultaneously.
Longer instruction limits (up to 512 instructions).
Advantages
Better shader efficiency with dynamic branching.
Supports real-time shadows and reflections.
Disadvantages
Branching can hurt performance if not optimized properly.
Still limited precision for complex lighting.
4. Shader Model 4 (SM 4.0) – Unified Shader Architecture & Geometry Shader
Evolution of GPU has made both the vertex and fragment shaders more
and more programmable, and their instruction sets can be converged.
One of the key features of Shader Model 4 is a unified shader
architecture, in which all programmable stages of the pipeline share the
cores and the resources including the graphics memory.
Another feature of Shader Model 4 is the addition of a new
programmable stage, named the geometry shader. It is located between
the vertex shader and the rasterizer.
See Fig. 7.1. The output of the vertex shader goes to either the geometry
shader if present, or the rasterizer otherwise.
The geometry shader performs per-primitive operations. Its input
consists of the vertices of a primitive, i.e., three vertices of a triangle,
two vertices of a line segment, or a single vertex of a point.
A notable feature of the geometry shader is that it can discard the input
primitive, or emit one or more new primitives.
Introduced in DirectX 10, replacing separate vertex & pixel shaders
with a unified shader system.
GPUs no longer have separate vertex/pixel processing units –
everything runs on general-purpose shaders.
Rendering Pipeline
1. Vertex Shader – Transforms vertex data.
2. Geometry Shader – Creates new geometry on the fly.
3. Pixel Shader – Computes final pixel colors.
New Features
Unified Shader Architecture – All shaders use the same hardware
units.
Geometry Shaders – Create or modify geometry dynamically.
Shader Model 4.1 – Adds improved anti-aliasing techniques.
Advantages
More efficient GPU usage (no separate vertex/pixel units).
Geometry shaders enable effects such as procedural geometry and particle generation.
Disadvantages
Not backward-compatible with DirectX 9 GPUs.
More complex shader programming.
5. Shader Model 5 (SM 5.0) – Tessellation & Compute Shaders
Introduced in DirectX 11, focusing on tessellation and general-purpose
GPU computing (GPGPU).
Rendering Pipeline
1. Hull Shader – Controls tessellation level.
2. Tessellation Shader – Creates high-resolution surfaces dynamically.
3. Domain Shader – Applies displacement mapping.
4. Pixel Shader – Computes final shading & lighting.
New Features
Tessellation Shaders – Generate highly detailed surfaces.
Compute Shaders – Use GPU for physics, AI, and general computations.
Advantages
High-detail rendering for landscapes & characters.
Supports real-time physics & AI via Compute Shaders.
Disadvantages
Requires DirectX 11 GPUs (no support on older hardware).
Higher computational cost for complex tessellation.
6. Shader Model 6 (SM 6.0+) – Ray Tracing & AI Integration
Introduced with DirectX 12 (with comparable capabilities in Vulkan), adding
real-time ray tracing (RTX) and AI-powered shaders.
Rendering Pipeline
1. Ray Tracing Pipeline – Traces light paths for realistic shadows &
reflections.
2. Mesh Shader – Handles complex models efficiently.
3. Machine Learning Shaders – AI-based denoising & upscaling
New Features
Real-Time Ray Tracing (RTX) – More realistic shadows & reflections.
Mesh Shaders – Optimize rendering of complex models.
AI-Powered Shaders – Used for upscaling
Advantages
Photorealistic graphics with real-time lighting.
Better AI-driven image quality improvements.
Disadvantages
Requires GPUs with hardware ray-tracing support.
High performance cost for complex scenes.
Comparison of Shader Models and Illumination Models
8. Animation
Animation is the process of creating the illusion of motion and change by
rapidly displaying a sequence of images or frames. It is a powerful technique
used in various fields such as film, television, video games, advertising,
education, and art. Animation can be produced using different methods and
techniques, each with its own unique characteristics and applications.
1. Traditional Animation:
Traditional animation, also known as hand-drawn or cel animation,
involves creating each frame manually by hand-drawing or painting on
transparent sheets called cels.
Animators draw keyframes, which represent the most important poses
or moments in the animation, and then create intermediate frames
called "in-betweens" to smooth out the motion.
Traditional animation has a rich history and has been used in classic
animated films such as Disney's "Snow White and the Seven Dwarfs"
and "The Lion King."
2. Stop-Motion Animation:
Stop-motion animation involves capturing a series of still images of
physical objects or puppets, with slight changes made between each
frame.
The objects are moved incrementally and photographed frame by frame
to create the illusion of movement when played back at normal speed.
Examples of stop-motion animation include claymation films like
"Wallace and Gromit" and "Chicken Run," as well as puppet animation in
films like "The Nightmare Before Christmas."
3. Computer Animation:
Computer animation involves creating animated sequences using digital
tools and software.
There are various techniques within computer animation, including:
2D Animation: Creating animations using digital drawing or
vector-based software, similar to traditional animation but done
digitally.
3D Animation: Creating animations using three-dimensional
computer graphics. This involves modeling objects in 3D space,
applying textures and materials, rigging characters with
skeletons, and animating them using keyframes or procedural
techniques.
Motion Capture (MoCap): Recording the movements of real
actors or objects using specialized cameras and sensors, and then
transferring that motion to digital characters or models.
Particle Animation: Simulating complex phenomena such as fire,
smoke, water, and explosions using particle systems and physics
simulations.
Computer animation is widely used in the film industry, video game
development, advertising, architectural visualization, and scientific
visualization.
4. Motion Graphics:
Motion graphics involve animating graphic elements such as text, logos,
and illustrations to create dynamic visual sequences.
Motion graphics are often used in title sequences, commercials,
explainer videos, user interfaces, and infographics.
Motion graphics can be created using various software tools such as
Adobe After Effects, Cinema 4D, and Autodesk Maya.
Animation is a versatile and expressive medium that allows creators to tell
stories, convey information, and evoke emotions through movement and
visual imagery. With advancements in technology, animation continues to
evolve and push the boundaries of creativity and innovation.
9. Controller Based Animation
Controller-based animation is a technique used in computer graphics and
game development to create animations by controlling and manipulating the
movement, appearance, and behavior of objects through the use of
controllers. In this approach, animations are defined and managed through a
system of controllers that drive the motion and interactions of objects within
a scene.
Components of Controller-Based Animation:
1. Controllers: Controllers are objects or scripts that govern the behavior
of animated objects. They can take various forms, such as keyframe
controllers, procedural controllers, or script controllers, depending on
the complexity and requirements of the animation.
Keyframe Controllers: Keyframe controllers interpolate
between keyframes, which are specific points in time where the
animator defines the desired state of the object (position, rotation,
scale, etc.). The controller calculates the object's state at each
frame based on the keyframes provided.
Procedural Controllers: Procedural controllers generate
animation in real-time using mathematical functions or
algorithms. They can be used to create dynamic and responsive
animations that react to user input or changes in the environment.
Script Controllers: Script controllers allow developers to write
custom scripts or code to control object behavior. This provides
flexibility and allows for complex interactions and animations that
cannot be achieved with simple keyframe or procedural
animations.
2. Animation Curves: Animation curves are mathematical
representations of how an object's properties change over time. They
define the rate and timing of animation transitions, allowing for smooth
and natural movement. Controllers use animation curves to interpolate
between keyframes or generate procedural animation.
3. Parameterization: Controller-based animation often involves
parameterizing various aspects of object behavior, such as speed,
acceleration, damping, and constraints. These parameters allow
animators to fine-tune animations and create specific effects or
behaviors.
4. Hierarchy and Constraints: In many animation systems, objects can be
organized into hierarchies, where the transformation of parent objects
affects the transformation of their child objects. Controllers can also
apply constraints to limit the movement or orientation of objects,
ensuring that they adhere to specific rules or conditions.
Advantages of Controller-Based Animation:
Flexibility: Controller-based animation provides flexibility in defining
and modifying animations, allowing animators to adjust parameters and
behaviors dynamically.
Interactivity: By using controllers, animations can respond to user
input or changes in the environment, creating interactive and dynamic
experiences.
Complexity: Controller-based animation supports complex
interactions, hierarchies, and constraints, enabling the creation of
sophisticated animations and simulations.
Examples:
In a game, a character's movement may be controlled by a keyframe
controller that interpolates between walking, running, and jumping
animations based on player input.
In a physics simulation, a procedural controller may generate realistic
motion for a bouncing ball based on the laws of physics and
environmental conditions.
In a 3D modeling software, a script controller may animate the
movement of a mechanical arm based on a set of predefined constraints
and parameters.
Need for Controller-Based Animation
Real-Time Interaction – Animations adapt to player actions (e.g., a
character stopping abruptly instead of completing a run cycle).
AI-Driven Movement – NPCs (Non-Player Characters) react to the
environment dynamically.
Physics-Based Motion – Objects respond realistically to gravity, force,
and collisions.
Example: In a fighting game, a character’s attack animation changes based on
the opponent’s position and defense stance, rather than playing a predefined
motion.
Key techniques used in controller-based animation:
Controller-based animation allows characters and objects in a game to
move dynamically based on user inputs, AI behavior, physics, or other
environmental factors. Unlike pre-rendered animations, which play in a
fixed sequence, controller-based animations are interactive and
adaptive, making them essential for modern gaming.
1. Keyframe Animation
Keyframe animation is one of the most fundamental animation
techniques. It involves defining specific frames (keyframes) where an
object is at certain positions, and the system interpolates the movement
between them.
Key Concepts
Keyframes: The main frames where an object's position, rotation, or
scale is explicitly defined.
Interpolation: The process of generating intermediate frames between
keyframes for smooth motion.
Bezier Curves: Often used for smooth transitions between keyframes.
Mathematical Representation
Interpolation between two keyframes can be done using linear interpolation
(LERP):
P(t) = P0 + t (P1 − P0), where P0 and P1 are the keyframe values and t ∈ [0, 1].
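A short sketch of keyframe sampling using this LERP formula, with made-up keyframe times and 2D positions:

    def lerp(p0, p1, t):
        # Linear interpolation between two keyframe values for t in [0, 1].
        return tuple(a + t * (b - a) for a, b in zip(p0, p1))

    # Hypothetical keyframes: (time in seconds, (x, y) position).
    keyframes = [(0.0, (0.0, 0.0)), (1.0, (10.0, 0.0)), (2.0, (10.0, 5.0))]

    def sample(keyframes, time):
        # Find the pair of keyframes surrounding 'time' and interpolate between them.
        for (t0, p0), (t1, p1) in zip(keyframes, keyframes[1:]):
            if t0 <= time <= t1:
                return lerp(p0, p1, (time - t0) / (t1 - t0))
        return keyframes[-1][1]          # past the last keyframe: hold the final pose

    print(sample(keyframes, 0.5))        # halfway through the first segment -> (5.0, 0.0)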
Example Applications
Character animations (running, jumping, attacking).
Cinematic cutscenes in games.
2. Inverse Kinematics (IK)
Inverse Kinematics (IK) is used to animate jointed structures (e.g.,
characters, robotic arms) by defining the end position of a chain and
calculating the joint angles needed to reach that position.
Key Concepts
Forward Kinematics (FK): Controls each joint manually.
Inverse Kinematics (IK): Moves the end effector, and the system
calculates how joints should move.
Bone Hierarchy: Used to structure character rigs.
Mathematical Representation
A common IK method is the Cyclic Coordinate Descent (CCD) algorithm,
where joint angles are iteratively adjusted to bring the end effector closer to
the target.
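A minimal 2D sketch of the CCD idea for a two-bone chain: each iteration rotates one joint so that the end effector swings towards the target. The bone lengths and target position are illustrative assumptions.

    import math

    bone_lengths = [2.0, 1.5]            # upper and lower bone (assumed lengths)
    angles = [0.0, 0.0]                  # joint angles in radians
    target = (1.0, 2.5)

    def forward(angles):
        # Compute the joint positions of the chain, starting at the origin.
        points, x, y, a = [(0.0, 0.0)], 0.0, 0.0, 0.0
        for length, angle in zip(bone_lengths, angles):
            a += angle
            x, y = x + length * math.cos(a), y + length * math.sin(a)
            points.append((x, y))
        return points

    for _ in range(20):                  # CCD iterations
        for j in reversed(range(len(angles))):
            pts = forward(angles)
            jx, jy = pts[j]              # current joint position
            ex, ey = pts[-1]             # end-effector position
            # Rotate joint j by the angle between (joint->effector) and (joint->target).
            a_eff = math.atan2(ey - jy, ex - jx)
            a_tgt = math.atan2(target[1] - jy, target[0] - jx)
            angles[j] += a_tgt - a_eff

    print(forward(angles)[-1])           # end effector ends up close to (1.0, 2.5)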
Example Applications
Character foot placement on uneven terrain.
Robotic arm movement in simulations.
3. Vertex Morphing (Blend Shapes)
Vertex morphing (also called blend shapes or shape keys) is a
technique where a 3D model smoothly transitions between different
predefined shapes by adjusting the position of vertices.
Key Concepts
Base Mesh: The original model.
Target Shapes: Variants of the mesh with modified vertex positions.
Blend Weights: Control how much influence each shape has.
Mathematical Representation
The final vertex position is calculated as:
V_final = V_base + Σi wi (V_target,i − V_base), where wi are the blend weights.
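A tiny sketch of blend-shape evaluation following that formula, with a single made-up target shape (a "smile") and one blend weight:

    # Base mesh and one target shape, stored as lists of (x, y, z) vertex positions.
    base_mesh   = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
    smile_shape = [(0.0, 0.5, 0.0), (1.0, 0.0, 0.0), (2.0, 0.5, 0.0)]   # corners raised

    def blend(base, target, weight):
        # V_final = V_base + weight * (V_target - V_base), applied per vertex.
        return [tuple(b + weight * (t - b) for b, t in zip(vb, vt))
                for vb, vt in zip(base, target)]

    print(blend(base_mesh, smile_shape, 0.5))
    # -> [(0.0, 0.25, 0.0), (1.0, 0.0, 0.0), (2.0, 0.25, 0.0)]   (a half smile)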
Example Applications
Facial expressions in character animation.
Lip-syncing in animated movies.
4. Skinning (Skeletal Animation)
Skinning is a technique where a 3D mesh (skin) is bound to a
skeleton (rig), allowing it to deform naturally when the skeleton
moves.
Key Concepts
Bones & Joints: The skeletal structure that controls the mesh.
Weights: Each vertex is influenced by multiple bones with different
weights.
Linear Blend Skinning (LBS): The most common skinning technique.
Mathematical Representation (LBS)
v′ = Σi wi Mi v, where v is the rest-pose vertex position, Mi is the transformation matrix of bone i, and wi are the per-vertex bone weights (Σi wi = 1).
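A brief numpy sketch of linear blend skinning for one vertex influenced by two bones; the bone transforms and weights are assumed example values:

    import numpy as np

    def translation(tx, ty, tz):
        m = np.eye(4)
        m[:3, 3] = [tx, ty, tz]
        return m

    # Two bone transforms (relative to the bind pose) and their weights for one vertex.
    bone_matrices = [translation(0.0, 0.0, 0.0),      # bone 0 does not move
                     translation(0.0, 1.0, 0.0)]      # bone 1 moves up by 1
    weights = [0.3, 0.7]                              # must sum to 1

    v = np.array([1.0, 0.0, 0.0, 1.0])                # rest-pose vertex (homogeneous)

    # v' = sum_i  w_i * M_i * v
    v_skinned = sum(w * (M @ v) for w, M in zip(weights, bone_matrices))
    print(v_skinned)    # -> [1.  0.7 0.  1. ]  (the vertex follows bone 1 most strongly)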
Example Applications
Character movement in games (e.g., running, jumping).
Realistic muscle and skin deformations.
--------------------------------------------------------------------------------