
Advanced Linux 3D Graphics Programming

Norman Lin

Wordware Publishing, Inc.


Library of Congress Cataloging-in-Publication Data

Lin, Norman.
Advanced Linux 3D graphics programming / by Norman Lin.
p. cm.
Includes index.
ISBN 1-55622-853-8 (pbk.)
1. Computer graphics. 2. Linux. 3. Three-dimensional display systems. I. Title.

T385 .L5555 2001


006.6'93--dc21 2001026370
CIP

© 2001, Wordware Publishing, Inc.


All Rights Reserved
2320 Los Rios Boulevard
Plano, Texas 75074

No part of this book may be reproduced in any form or by any means without permission in writing from Wordware Publishing, Inc.

Printed in the United States of America

ISBN 1-55622-853-8
10 9 8 7 6 5 4 3 2 1
0106

Blender is a registered trademark of Not a Number B. V.


Other product names mentioned are used for identification purposes only and may be trademarks of their respective companies.

All inquiries for volume purchases of this book should be addressed to Wordware Publishing, Inc., at the above
address. Telephone inquiries may be made by calling:
(972) 423-0090

Contents
Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . x
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
Chapter 1 Basic Linux 3D Graphics Concepts . . . . . . . . . . . . . . . . . . 1
2D Graphics Fundamentals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
3D Graphics Fundamentals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
3D Coordinate Systems and Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Perspective Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Specific Matrix Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Other Matrix Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
The l3d Library Classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Sample l3d Program . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
l3d Directory Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
The Five-Step Process of l3d Programs. . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Overview of l3d Classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Applications and Events. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2D Graphics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Concrete Factory Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Specifying Geometry and Behavior. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Fixed- and Floating-Point Math. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Summary of l3d Classes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Linux Programming Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Linux 3D Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Blender Interface and Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Exporting and Importing Blender Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Chapter 2 Rendering and Animation Techniques for 3D Polygons . . . . . . 39
Vertex Animation and 3D Morphing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Sample Program: morph3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Lighting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Mathematical Models for Computing Light . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Self Lighting. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Ambient Lighting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Diffuse Reflection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Specular Reflection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Multiple Light Sources and Components . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Radiosity and Ray Tracing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
Dynamic or Static Lighting Computations . . . . . . . . . . . . . . . . . . . . . . . . . . 58
Fog. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Rendering Techniques for Drawing Light . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Flat Shading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61

Gouraud Shading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Phong Shading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Light Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Texture Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Step 1: Define a Texture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Storing Texture Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Classes for Loading Textures from Disk . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Practical Issues in Dealing with Texture Image Files. . . . . . . . . . . . . . . . . . . . . 73
Step 2: Define a Texture Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Step 3: Map Between Texture Space and World Space . . . . . . . . . . . . . . . . . . . . . 76
Calc: A Symbolic Algebra Package. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Starting and Exiting Calc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Stack-Based Computation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
Entering and Editing Mathematical Entities . . . . . . . . . . . . . . . . . . . . . . . . . 82
Solving Systems of Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
Solving the Texture Mapping Equations with Calc . . . . . . . . . . . . . . . . . . . . . . 86
Step 4: Reverse Project from Screen Coordinates into Texture Coordinates . . . . . . . . . . 89
Step 5: Map Texture Coordinates to Integer Indices and Draw . . . . . . . . . . . . . . . . . 92
An Optimized Texture Mapping Strategy: u/z, v/z, 1/z . . . . . . . . . . . . . . . . . . . . . 93
The Division Operation and Texture Mapping . . . . . . . . . . . . . . . . . . . . . . . . 95
Associating Textures with 3D Polygons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
Rasterization Classes for 3D Polygons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
An Abstract 3D Rasterizer: l3d_rasterizer_3d . . . . . . . . . . . . . . . . . . . . . . . . 98
A Software 3D Rasterizer Implementation: l3d_rasterizer_3d_sw_imp . . . . . . . . . . 101
A Mesa/OpenGL 3D Rasterizer Implementation: l3d_rasterizer_3d_mesa_imp . . . . . . 115
Sample Program: textest . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Light Mapping Revisited . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
Software Light Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
Surfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
Surface Cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
Light Mapped Polygons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
Software Rasterization of Light Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
Hardware Light Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
Sample Program: lightmap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
Shadows and Light Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
Chapter 3 3D Modeling with Blender. . . . . . . . . . . . . . . . . . . . . 161
Tutorial: Creating and Exporting Compatible, Textured 3D Morph Targets . . . . . . . . . . . 161
The Starting Morph Mesh. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
Inserting Two Morph Targets into Blender . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
Deforming the Mesh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
Applying a Texture and Assigning Texture Coordinates. . . . . . . . . . . . . . . . . . . . 167
Testing the Morph in Blender. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
Exporting the Two Morph Targets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
Exporting the Texture Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
Importing the Morph Targets into a Program . . . . . . . . . . . . . . . . . . . . . . . . . 175
Tutorial: Using Inverse Kinematics and Rotoscoping to Model a
Walking Human Figure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
Inverse Kinematics: Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
Creating an Ika Chain in Blender . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183

Working with Ika Chains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183


Creating the Arm Ikas. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
Creating the Main Body Ika . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
Parenting the Ikas into a Hierarchy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
Testing the Ika Chains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
Animating the Ika Chains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
Connecting Ika Chains and Meshes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
Texturing and Exporting the Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
Importing the Textured Ika Meshes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
Rotoscoping and Inverse Kinematics. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
Programming IK and FK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
Chapter 4 Visible Surface Determination I: General Techniques . . . . . . 203
The Goals of VSD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
Back-Face Culling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
3D Convexity and Back-Face Culling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
Sample Program: backface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
Class l3d_World_Backface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
View Frustum Culling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
Defining a View Frustum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
Computing the Frustum in World Coordinates . . . . . . . . . . . . . . . . . . . . . . . 220
Class l3d_Viewing_Frustum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
Using the Frustum Planes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
Hierarchical View Frustum Culling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
Bounding Spheres and the View Frustum . . . . . . . . . . . . . . . . . . . . . . . . . . 225
Computing Bounding Spheres . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
Class l3d_bounding_sphere . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
Other Bounding Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
Clipping Against the View Frustum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
Sample Program: frustum. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
Class l3d_World_Frustum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
The Painter’s Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
The Z Buffer Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
General Observations about the Z Buffer . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
A Software Z Buffer: Class l3d_rasterizer_3d_zbuf_sw_imp . . . . . . . . . . . . . . . . . 248
Mesa/OpenGL Z Buffering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
Factory Manager for Z Buffered Rasterizers . . . . . . . . . . . . . . . . . . . . . . . . . 261
Sample Program: texzbuf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
Z Buffer-like Algorithms. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
Chapter 5 Visible Surface Determination II:
Space-partitioning Techniques . . . . . . . . . . . . . . . . . . 267
Binary Space Partitioning Trees, Octrees, and Regular Spatial Partitioning . . . . . . . . . . . 267
Using a BSP Tree to Partially Pre-sort Polygons . . . . . . . . . . . . . . . . . . . . . . . 271
Choosing a Splitting Plane. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
Back-to-Front Rendering (Painter’s Algorithm Revisited) . . . . . . . . . . . . . . . . . 274
Front-to-Back Rendering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
Combining BSP Trees and Bounding Volumes . . . . . . . . . . . . . . . . . . . . . . . 275
Sample Program: bsp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276

Classes l3d_halfspace and l3d_bsptree . . . . . . . . . . . . . . . . . . . . . . . . . . . 277


Class l3d_world_bsptree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
The Main Program . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
The World Database, Revisited . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
Leafy BSP Trees: Automatic Convex Partitioning of Space. . . . . . . . . . . . . . . . . . 293
Creating a Leafy BSP Tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
Methods for Leafy BSP Trees in Class l3d_bsptree . . . . . . . . . . . . . . . . . . . . . 296
Sample Program: leafybsp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
Axis-aligned BSP Trees and Mini BSP Trees . . . . . . . . . . . . . . . . . . . . . . . . . 302
BSP Tree as a Multi-resolution Solid-Modeling Representation . . . . . . . . . . . . . . . 303
BSP Trees and Dimension Independence . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
Octrees. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
Regular Spatial Partitioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
Portals and Cells . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
The Main Ideas Behind the Portal Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . 308
Rendering a Portal World . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
Observations about the Portal Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
Portals as a Connectivity Graph . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
Advantages and Disadvantages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
Back-Face Culling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
Clipping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
Convexity or Non-Convexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
Moving the Camera and Objects Within a Portal Environment . . . . . . . . . . . . . . . 315
Portals and the Near Z Plane. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
Shadows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
Mirrors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
Portals and Other Rendering Methods. . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
Classes for Portals and Sectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
Class l3d_polygon_3d_portal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
Class l3d_sector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
Class l3d_world_portal_textured_lightmapped_obj. . . . . . . . . . . . . . . . . . . . . 329
Class l3d_rasterizer_2d_sw_lighter_imp . . . . . . . . . . . . . . . . . . . . . . . . . . 344
Class l3d_pipeline_world_lightmapped . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
Sample Program: porlotex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
Other VSD Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
Chapter 6 Blender and World Editing . . . . . . . . . . . . . . . . . . . . 359
World Editing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
No World Editor. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
Write Your Own World Editor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
Adapt an Existing Editor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362
Using Blender for Portal Worlds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
Main Ideas of a Blender Portal World Editor . . . . . . . . . . . . . . . . . . . . . . . . . 364
Step-by-Step Guide to World Design. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
Data Flow within the World Editing System . . . . . . . . . . . . . . . . . . . . . . . . . . 368
Creating Sectors and Portals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
Tutorial: Creating Aligned Portals via Extrusion and Separation . . . . . . . . . . . . . . . 371
Tutorial: Aligning Portals from Separate Meshes . . . . . . . . . . . . . . . . . . . . . . . 374
Tips for Working with Portals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
Portalization: Generating Portal Connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . 385

Perl Scripts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387


Architecture of the Perl Portalization System . . . . . . . . . . . . . . . . . . . . . . . . . 389
Structural Modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
Parsing and Generator Modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415
Controlling Scripts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429
Embedding Location, Orientation, Texture, Actor, and Other Information into Meshes. . . . . 430
Basic Ideas of Associating Attributes with Objects . . . . . . . . . . . . . . . . . . . . . . 431
Store an ID, Location, and Orientation in Overlapping Edges . . . . . . . . . . . . . . . 431
The Tool Blend_at: Remote Control of Blender. . . . . . . . . . . . . . . . . . . . . . . 433
Configuration and Testing of blend_at . . . . . . . . . . . . . . . . . . . . . . . . . . . 434
Specific Mesh Attributes Used by the Portalization System. . . . . . . . . . . . . . . . . . 437
The Name Attribute . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
The Type Attribute . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
Attributes for Sectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
Attributes for Actors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
Parsing of Attributes by VidscParser.pm and vidinfo . . . . . . . . . . . . . . . . . . . . 440
Program Listings for blend_at . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 446
Class vertex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
Class blender_config. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
Class blender_controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 448
Class blender_xcontroller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 449
Tutorial: Creating a Textured Room with Actors . . . . . . . . . . . . . . . . . . . . . . . 463
Tips for Working with Attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 473
Summary of Blender and Portal Worlds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474
Other World Editing Ideas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475
Portalized Regular Spatial Partitioning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475
BSP Tree and Octree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476
Non-convex Sector-based Partitioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 478
Chapter 7 Additional Graphics Techniques. . . . . . . . . . . . . . . . . . 479
Special Effects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 479
Environment Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 480
Billboards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 484
Lens Flare . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 486
Particle Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 487
Physics and Particle Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 488
Real-Time Update . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 489
Sample Program: particle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 490
Comments on the Sample Program’s Physics . . . . . . . . . . . . . . . . . . . . . . . . 496
Some Ideas for You to Try . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 496
Natural Phenomena . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497
Bump Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499
Multi-pass Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 500
Advanced Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 501
Curved Surfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 501
Level of Detail . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 505
Billboards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 506
Edge Collapse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 506
BSP Tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 507
Texture LOD Techniques: MIP Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . 508

Landscapes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 509
Storing Landscapes as Height Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . 509
Generating Fractal Landscapes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 510
Rendering and LOD Techniques for Landscapes . . . . . . . . . . . . . . . . . . . . . . 511
Camera Tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 512
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 513
Chapter 8 Non-Graphical Techniques for Games and
Interactive Environments . . . . . . . . . . . . . . . . . . . . . 515
Sound . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 515
Basics of Digital Sound . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 516
The RPlay Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
Using TCP/IP Networking to Communicate with the Server . . . . . . . . . . . . . . . . . 520
Class l3d_sound_client . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521
Class l3d_sound_server_rplay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 522
TCP/IP Networking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 524
The Client . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 524
The Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 526
Running the Sample Server and Client . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 529
Non-Blocking Operations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 529
What Data to Send . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 530
Collision Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 530
Intersection Testing and Bounding Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . 531
Sphere-to-Sphere. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 532
Ray-to-Polygon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 532
Ray-to-Sphere . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535
Sphere-to-Polygon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 536
Tunneling and Sweep Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
Multiple Simultaneous Collisions and Collision Response . . . . . . . . . . . . . . . . . . 541
Allowing Penetration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 541
Avoiding Penetration with Temporal Search . . . . . . . . . . . . . . . . . . . . . . . . 542
Class l3d_collidable. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 543
Class l3d_collidable_sphere . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 544
Class l3d_polygon_3d_collidable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 548
Class l3d_polygon_3d_textured_lightmapped_collidable . . . . . . . . . . . . . . . . . . . 551
Class l3d_camera_collidable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 552
Class l3d_world_portal_textured_lightmapped_obj_colldet . . . . . . . . . . . . . . . . . 553
Plug-in Object Seeker, Class l3d_plugin_videoscape_mesh_seeker . . . . . . . . . . . . . 563
Sample Program: collide . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 574
More Advanced Collision Detection and Response . . . . . . . . . . . . . . . . . . . . . . 576
Physics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 577
Some Basic Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 577
Rigid Body Dynamics. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 578
Real-Time Update and Numerical Integration . . . . . . . . . . . . . . . . . . . . . . . . . 579
Artificial Intelligence. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 580
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 582
Chapter 9 What Lies Ahead? . . . . . . . . . . . . . . . . . . . . . . . . . 583
Content Development Systems. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 583
Game Blender/Blender 2.0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 583
World Foundry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 590

What Does This Mean for 3D Programmers? . . . . . . . . . . . . . . . . . . . . . . . . . 598


The Future . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 599
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 600
Perspective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 600
Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 603
CD Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 603
License . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 603
Contents of the CD-ROM. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 603
Quick Start Guide . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 604
Directories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 604
Installing the Sample Programs and Other Software . . . . . . . . . . . . . . . . . . . . 605
Troubleshooting the Sample Programs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 607
Some Comments on the Sample Programs . . . . . . . . . . . . . . . . . . . . . . . . . . . 607
Hardware Acceleration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 608
Porting the Code to Microsoft Windows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 609
Tools Used to Prepare this Book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 610
Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 611
3D Graphics Programming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 612
3D Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 612
3D Information and Applications. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 613
General Programming. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 613
Other. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 614
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 614

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 617

Acknowledgments

In addition to my parents, Forest and Vicki Lin, I would like to thank the following individuals who directly or indirectly played a role in the completion of this book. Thanks go to my brother Tony, who persuaded me to download and try out the game Doom—an experience that convinced me that interactive 3D graphics on the PC was finally possible. Special thanks also to Stan Hall, who provided encouragement and advice even when it seemed that the book might not see the light of day.

Solveig Haring and Margit Franz were kind enough to provide me with Internet access and a cup of coffee for some of the longer nights in the computer lab. Ton Roosendaal provided some very interesting insights into Blender and 3D graphics in general. My work colleagues Horst Hörtner, Werner Pankart, Klaus Starl, and Thomas Wieser were all supportive and understanding during those times when work on the book required absence from the office. Andreas Jalsovec and Dietmar Offenhuber gave me insight into some of the nuances of 3D modeling. Renate Eckmayr, Viju John, Azita Ghassemi, Manfred Grassegger, Ulrike Gratzer, Andrea Groisböck, Jogi and Reni Hofmueller, Angelika Kehrer, Astrid Kirchner, Dietmar Lampert, Christine Maitz, Paula McCaslin, Bernd Oswald, Gabi Raming, Regina Webhofer, and other individuals too numerous to mention all expressed interest upon hearing that I was writing this book, and gave me much needed inspiration and motivation.

Professor Deborah Trytten got me started on the right track in 3D graphics during my studies at the University of Oklahoma. Kevin Seghetti carefully read and checked the text for technical accuracy and provided many valuable suggestions. Thanks also to everyone at Wordware Publishing, especially Jim Hill, who shared my enthusiasm about the book and was key in actually getting this project out the door.

Last but not least, I would like to thank the countless individuals around the world involved with the creation and maintenance of the freely available, high quality, open source GNU/Linux operating system and tools.


Preface

A university professor of mine once mentioned that there was no danger that the computer
science community would ever run out of interesting problems to solve. As a community,
computer scientists try to understand the nature of computation by forming theories and
attempting to prove their validity. We try to answer questions. Those theories which correctly cap-
ture the nature of computing problems contribute to the common pool of academic knowledge.
Previously unanswered questions receive answers—some more complete, some less complete.
The less complete answers raise new questions for further research; the more complete answers
are eventually adopted by industry practitioners.
3D graphics is a field that illustrates this phenomenon well. In the early days, 3D graphics was
mostly confined to academic research labs. The mathematics and geometry of 3D graphics were
questioned and explored, and the field grew as a result. Today, research in 3D graphics is still very
active, but at the same time, 3D graphics has also become mainstream. A number of graphics tech-
niques from academia have established themselves as efficient and effective enough for
widespread use. A 3D programmer should be familiar with these techniques. The purpose of this
book is to communicate these important 3D techniques to the intermediate 3D programmer in a
clear and intuitive way, using geometrical explanations supported with numerous working code
examples.
This book uses Linux as the implementation platform. The free operating system has a num-
ber of advantages which make it ideal for learning and programming 3D graphics. The most
important advantage is accessibility: the free, open source nature of Linux makes it possible for
any programmer to have access to a top-quality operating system and development environment.
This open nature has encouraged the development of massive amounts of free software (where
free refers not only to cost, but mainly to the freedom to study and modify the source code), includ-
ing software important for 3D graphics. Therefore, Linux offers any programmer the chance to get
involved with 3D graphics programming today, at no cost, without forcing the programmer to
either pay thousands of dollars in software licensing fees or to spend literally man-years of soft-
ware development time creating customized tools. Linux already offers the tools you need to do
serious 3D programming—and the freedom to use, learn from, and modify these tools.
This book builds upon the foundation laid in the introductory companion volume Linux 3D
Graphics Programming. It is assumed that you have an understanding of all of the material pre-
sented in the introductory volume; the first chapter provides a quick review of this material.
Therefore, this book is not suited for the complete beginner to 3D graphics. Such readers should
work through the introductory companion book before attempting to read this book.

Introduction

Welcome, reader! I am glad to have you along and hope that you are as excited as I am
about Linux and interactive 3D graphics programming. Take your time and enjoy the
following few pages as we leisurely discuss the goals and contents of this book.
This book is the second volume of a two-volume work on interactive 3D graphics program-
ming under Linux. First, let’s look at the two-volume work as a whole, then we’ll look more
specifically at the contents of this volume.
Taken as a whole, the two-volume work aims to provide you with the knowledge, code, and
tools to program top-notch, object-oriented, real-time 3D games and interactive graphics applica-
tions for Linux, which can also easily be ported to other platforms. By working through both
volumes, you will learn to use the most important techniques, tools, and libraries for Linux 3D
graphics: portals, OpenGL/Mesa, Xlib, 3D hardware acceleration, collision detection, shadows,
object-oriented techniques, and more. We also cover the often neglected topic of 3D modeling,
illustrating in detail how to use the professional 3D modeling package Blender, which is included
on the CD-ROM, to create animated 3D models and portal worlds for use in our interactive 3D
programs.
This second volume, titled Advanced Linux 3D Graphics Programming, covers more
advanced techniques needed for realistic display of larger datasets often used in interactive 3D
environments. Topics covered include: rendering and animation techniques for 3D polygons (3D
morphing, texture mapping, light mapping, fog), the creation of more sophisticated 3D models
with Blender (including jointed figures animated with inverse kinematics), importing such models
from Blender into our programs, hidden surface removal (portals, BSP trees, octrees, z buffers),
non-graphical issues relevant to interactive environments (special effects, collision detection, dig-
ital sound, TCP/IP networking, particle systems), and tutorials on using advanced 3D content
development systems under Linux (Game Blender and World Foundry). Sample programs are
provided, both in text form and on the CD-ROM, illustrating the concepts.
This book builds on the foundation laid by the introductory companion volume, Linux 3D
Graphics Programming. The first chapter of this book serves as a brief review of the earlier
material.

Goals of This Text


This text has several objectives. A primary goal of this text is to give you a solid understanding of
the fundamental concepts involved in interactive 3D graphics programming at the intermediate to
advanced level. Such an understanding not only enables you to write your own 3D programs,
libraries, and games under Linux, but also gives you the knowledge and confidence you need to

analyze and use other 3D graphics texts and programs. In the open source world of Linux, under-
standing fundamental concepts is indeed important so that you can understand and possibly
contribute to the common pool of knowledge and code. Furthermore, learning fundamental 3D
graphics concepts also enables you to understand and effectively use sophisticated 3D applica-
tions and libraries such as 3D modelers and OpenGL.
A second goal of this text is to give you plenty of hands-on experience programming 3D
graphics applications under Linux. It is one thing to understand the theoretical mechanics of an
algorithm; it is another to actually implement, debug, and optimize that same algorithm using a
particular set of programming tools. Small standalone programs are scattered throughout this text
to demonstrate key 3D graphics concepts. It is often easy to lose sight of the forest for the trees,
particularly in the complicated world of 3D graphics. Standalone sample programs address this
problem by concisely illustrating how all the necessary components of a 3D program “fit
together.” They reduce the intimidation that often accompanies the study of large, complicated
programs, and give you confidence in developing and modifying complete 3D programs under
Linux.
A third goal of this text is to help you develop and understand the techniques for developing a
reusable 3D application framework or library. In addition to the standalone programs mentioned
above, the book also develops a series of generally reusable C++ library classes for 3D graphics,
called the l3d library. This library was introduced in the introductory companion book Linux 3D
Graphics Programming and is developed further in this book. This C++ library code follows an
object-oriented approach, relying heavily on virtual functions, (multiple) inheritance, and design
patterns. In this manner, the developed library classes are usable as is but still open for extension
through subclassing. Each chapter builds upon the library classes developed in previous chapters,
either adding new classes or combining existing classes in new ways. Through subclassing, the
library classes can be adapted to work with virtually any hardware or software platform or API;
currently, the code runs under Linux and Microsoft Windows, with or without hardware accelera-
tion. The techniques used to develop the 3D library classes illustrate both valuable 3D abstractions
and generally applicable object-oriented techniques.
A fourth goal of this text is to demonstrate the excellence of the Linux platform as a graphics
programming environment. For a programmer, Linux is a dream come true. All of the source code
is available, all of the operating system features are enabled, a large number of excellent first-rate
software development tools exist, and it is all freely available, being constantly tested and
improved by thousands of programmers around the world. Linux empowers the programmer with
open source, open information, and open standards. Given this outstanding basis for development,
it is no wonder that programmers in every conceivable application area (including 3D graphics)
have flocked to Linux. This has created a wealth of 3D libraries, tools, and applications for Linux.
Linux is, therefore, an outstanding software development platform with powerful 3D tools and
software—an ideal environment for learning and practicing 3D graphics programming.
A final, personal goal of this text, and the main reason I am writing this book, is to impart to
you a sense of the excitement that 3D graphics programming offers. You, the 3D programmer,
have the power to model reality. You control every single z-buffered, Gouraud-shaded, tex-
ture-mapped, perspective-correct, dynamically morphed, 24-bit, real-time pixel on the flat 2D
screen, and amazingly, your painstakingly coded bits and bytes merge to form a believable 3D

world. By working under Linux, you are no longer held back by a lack of tools or software. It’s all
out there—free for download and top quality. Linux software gives you the tools you need to real-
ize your 3D ideas.

Organization of the Book and the Code


This text follows a bottom-up organization for the presentation order of both concepts and pro-
gram code. This bottom-up organization serves two purposes: pedagogical and practical.
Seen pedagogically, a bottom-up approach means first covering fundamental concepts before
proceeding to more complex subjects. This is a fully natural progression which deals with com-
puter graphics at ever-increasing levels of abstraction. Seen practically, a bottom-up approach
means that simple C++ classes are developed first, with later, more complicated examples literally
“building upon” the foundation developed earlier through the object-oriented mechanism of
inheritance. This ensures compilable, executable code at each level of abstraction which is
incrementally understandable and extensible. Every chapter has complete, executable sample pro-
grams illustrating the concepts presented.
The bottom-up organization has a rather far-reaching impact on the structure of the code in
general. The principal goal I had in mind when structuring the code for the book was that all parts
of a class presented in a chapter should be explained within that same chapter. I tried very dili-
gently to achieve a code structure which allows me to avoid statements like “ignore this part of the
code for now; it will be explained in the next chapter.” When a class is presented, you should be
able to understand it fully within the context of the current chapter. The second most important
goal for the code was to reuse as much code as possible from previous chapters, typically through
subclassing, thus truly illustrating how more complex 3D concepts literally, at the code level,
build upon simpler concepts. To achieve these goals, the overall design of the code relies heavily
on indirection through virtual functions, even in fairly time-critical low-level routines such as
accessing elements of a list. The presence of so many virtual functions allows for a rather clean,
step-by-step, bottom-up, incrementally understandable presentation of the code. The design is
also very flexible; new concepts can be implemented through new subclasses, and behavior can be
swapped out at run time by plugging in new concrete classes. But as is always the case in computer
science, there is a tradeoff between flexibility and performance. The code design chosen for the
book is not as fast as it could be if all the virtual function calls were eliminated; of course, elimi-
nating virtual function calls leads to reduced flexibility and increased difficulty extending the
code later. Still, the code performs well: it achieves over 30 frames per second with software ren-
dering on a Pentium II 366 in a 320×240 window with 24-bit color, and over 30 frames per second
in 1024×768 with Voodoo3 hardware acceleration. In spite of its definite educational slant, it is
fast enough for real use. Again, this is one of the great things about 3D programming in the 21st
century: a straightforward, educationally biased code structure can still be executed fast enough
by consumer hardware for real-time, interactive 3D environments. Real-time 3D no longer forces
you to wrestle with assembly or to have access to expensive dedicated graphics workstations. If
you know how to program in C++ and you understand the geometrical concepts behind 3D graph-
ics, you can program real-time 3D graphics applications using free tools under Linux.

Let’s now look at the organization of the text itself.


Chapter 1 reviews the essentials of Linux 3D graphics, as covered in the introductory com-
panion volume Linux 3D Graphics Programming. We cover the fundamentals of 2D graphics, 3D
coordinate systems, perspective projection, vectors, matrices, and the C++ library classes—the
l3d library—used to implement these basic ideas. We also review the most important commands
for the 3D modeling package Blender. The information in this chapter is a prerequisite for under-
standing the rest of the book.
Chapter 2 explores some important techniques which greatly increase the visual realism of
polygonal models: texture mapping, lighting, light mapping, and morphing. All of these tech-
niques are implemented in C++ classes. We also take a tour of the symbolic algebra package Calc,
available as an extension to the Emacs editor. Calc helps us solve the tedious sets of equations
which arise when performing texture mapping.
Chapter 3 is the first of two chapters dealing with Blender, a free and powerful 3D modeling
and animation package for Linux (included on the CD-ROM). In two step-by-step tutorials, we
walk through the creation of a set of textured and compatible morph targets suitable for 3D
morphing, and a human-like figure animated with inverse kinematics. These 3D models are then
imported and displayed in a 3D program.
Chapter 4 deals with the correct and efficient drawing of visible surfaces, which becomes
especially important when polygon counts increase. Surfaces which are obscured by other sur-
faces or which are completely outside of the field of vision should be discarded from unnecessary
processing as early and as cheaply as possible. We discuss a number of generally applicable tech-
niques, each illustrated with a sample program: back-face culling, the painter’s algorithm, view
volume culling, and z-buffering.
Chapter 5 discusses special visible-surface algorithms based on space-partitioning tech-
niques. The techniques discussed include BSP trees, octrees, regular spatial partitioning, and
portals. We discuss the use of portals for special techniques such as mirrors, refraction, transpar-
ency, and volumetric shadows. A portal engine is implemented as a natural extension of the
existing polygon and object classes.
Chapter 6 continues the discussion of portals from a practical point of view. We explore how
we can use Blender’s powerful modeling features to create portal-based worlds, using a combina-
tion of an edge-coding technique, to encode arbitrary data within a 3D mesh, and postprocessing
scripts written in the Perl language. This system, using both Blender and custom-written tools,
allows us to create 3D worlds which may be used by the portal engine developed in the previous
chapter. A complete example world is constructed step by step, with practical tips on efficiently
working in Blender: using layers, hiding geometry, aligning objects and portals, and executing
interactive fly-throughs.
Chapter 7 covers some special 3D graphics techniques or “tricks”: billboards, lens flare, parti-
cle systems, fractal landscapes, dynamic level-of-detail, environment mapping, atmospheric
effects, curved surfaces, multi-pass techniques, and camera tracking in 3D.
Chapter 8 discusses non-graphical techniques that can greatly enhance the reality of 3D
graphics programs, such as games. Techniques discussed and implemented include: collision
detection, digital sound and music with the RPlay sound server, TCP/IP network communications,
physics, and artificial intelligence.

Chapter 9 takes a look at the possible future direction of Linux and 3D graphics. We begin
with a look at two existing and exciting 3D content development systems under Linux: Game
Blender and World Foundry. We go through a brief tutorial of 3D game creation with each of these
systems. Some speculation about the future of Linux 3D graphics follows. We close by relating the
contents of the book to the field of 3D graphics as a whole.
The Appendix provides installation instructions for the CD-ROM, information on porting the
graphics code to Windows, and a list of useful references, both in electronic (WWW) and in print
form. Notations in brackets, such as [MEYE97], are detailed in the “References” section of the
Appendix.
The CD-ROM contains all sample code from the book, the Blender 3D modeling and anima-
tion suite, the World Foundry game development kit, freely available Linux 3D libraries and
applications, and a series of animated videos illustrating some of the more difficult-to-visualize
3D concepts discussed in the text.

Reader and System Requirements


This book requires you to have a working Linux installation up and running with the XFree86
server for the X Window System on an IBM PC or compatible system with a Pentium or better
processor. If you don’t yet have Linux installed, you can download Linux for free from the
Internet, or obtain a CD-ROM containing a ready-to-install Linux distribution. Installing Linux is
no more difficult than installing other common PC operating systems, such as Microsoft Win-
dows. A 3D graphics card with Mesa drivers is recommended for optimum performance, but the
code will run acceptably fast without hardware acceleration through a custom software renderer.
If your graphics card is supported by the new XFree86 4.0 Direct Rendering Infrastructure or by
the Utah GLX project, you can also link the code with the appropriate OpenGL library (from the
DRI or from Utah GLX) to achieve hardware-accelerated rendering in a window.

Typographical Conventions
Used in This Book
The following typographical conventions are used in this book.
n Program code, class names, variable names, function names, filenames, and any other text
identifiers referenced by program code or the operating system are printed in a fixed-
width font.
n Commands or text to be typed in exactly as shown are printed in boldface.
n Key sequences connected by a plus (+) sign (such as Ctrl+C) mean to hold the first key while
typing the second key.

Chapter 1

Basic Linux 3D
Graphics Concepts

Linux 3D graphics is a young and exciting field. The purpose of this chapter is to review the
basic concepts of Linux 3D graphics programming in order to lay a groundwork for the more
involved material in the following chapters. This book was written based on the assumption
that you already know everything in this chapter; therefore, this chapter is intentionally terse. This
chapter is meant to serve as a review, not as an introduction.
If you find some of these topics unfamiliar, I suggest that you take the time to read the com-
panion volume to this book, Linux 3D Graphics Programming. The companion book is aimed at
the beginning 3D graphics programmer with little or no 3D experience. Essentially, this chapter is
a brief review of the most important concepts in the introductory companion volume.

2D Graphics Fundamentals
2D raster graphics consist of plotted pixels on a display. The pixels are arranged in a rectangular
grid, typically accessible in memory as a linear sequence of bytes. Though we specify pixels by
their addresses in memory at the lowest level, it is better to specify pixels in terms of a 2D coordi-
nate system, with horizontal x and vertical y axes. In this book, we define the origin of the 2D pixel
coordinate system to be the upper-left corner of the screen, with the x axis increasing to the right
and the y axis increasing downward.
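As a small illustrative sketch (not an l3d routine) of how such a 2D coordinate maps onto the linear sequence of bytes in memory, the following hypothetical helper computes a pixel's byte offset from its (x, y) position; the stride (bytes_per_line) and pixel size (bytes_per_pixel) would come from the actual display setup.

#include <cstddef>

// Hypothetical helper: map a pixel's 2D coordinate, with the origin at the
// upper-left corner, to its byte offset in a linear frame buffer.
inline std::size_t pixel_offset(int x, int y,
                                int bytes_per_pixel, int bytes_per_line)
{
  return static_cast<std::size_t>(y) * bytes_per_line +   // y selects the row
         static_cast<std::size_t>(x) * bytes_per_pixel;   // x selects the column
}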
Under Linux, we display our graphics under the X Window System. Specifically, the
approach chosen for this book is to use XImages in ZPixmap format to display 2D graphics. This
allows us direct access to the bytes (and thus the pixels) forming the image. Each pixel can have a
particular color. Exactly how this color is specified depends on the bit depth and color model of the
X server. The bit depth determines the total number of available colors and is usually 8, 15, 16, 24,
or 32 bits. The color model is typically either indexed color (meaning that colors are specified as
indices into a fixed-size palette of colors) or true color (meaning that colors are specified directly
as a combination of red, green, blue, and possibly alpha intensities). For maximum flexibility, we
must query at run time the bit depth and color model, and dynamically determine the exact bit for-
mat required to specify pixel colors.
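On TrueColor visuals, for instance, the native pixel value is typically built by scaling each color component into the range the visual supports and shifting it into its bit position. The following is a minimal sketch of that idea, assuming hypothetical shift and maximum values queried from the X server; it is an illustration only, not the l3d color conversion code.

//- Minimal sketch (not l3d code) of converting an RGB color into a native
//- TrueColor pixel value; the shift and maximum values are assumed to have
//- been queried from the X server at run time.
unsigned long rgb_to_native(int r, int g, int b,
                            int red_shift,   int red_max,
                            int green_shift, int green_max,
                            int blue_shift,  int blue_max)
{
  //- scale each 0-255 component into the range supported by the visual,
  //- then shift it into its position within the pixel value
  return ((unsigned long)(r * red_max   / 255) << red_shift)
       | ((unsigned long)(g * green_max / 255) << green_shift)
       | ((unsigned long)(b * blue_max  / 255) << blue_shift);
}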

Drawing lines can be done by an incremental algorithm, stepping along one axis by whole
pixels and using the line’s slope to determine how many pixels to step in the other axis. Drawing
polygons can be done by rasterizing the lines belonging to the left and right edges of the polygon,
and drawing horizontal lines, or spans, between the left and right edges.
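As a minimal sketch of the span idea (an illustration only, not the l3d rasterizer code), assume the left and right edge x coordinates of a polygon have already been computed for each scanline by stepping incrementally along the edges; filling the polygon then reduces to writing horizontal runs of pixels:

//- Hypothetical sketch of span-based polygon filling into an 8-bit frame
//- buffer. left_x[y] and right_x[y] are assumed to hold the polygon's edge
//- positions for each scanline, computed incrementally from the edge slopes.
void fill_spans(unsigned char *frame_buffer, int buffer_width,
                int y_top, int y_bottom,
                const int *left_x, const int *right_x,
                unsigned char color)
{
  for(int y = y_top; y <= y_bottom; y++) {
    unsigned char *row = frame_buffer + y * buffer_width;
    for(int x = left_x[y]; x <= right_x[y]; x++) {
      row[x] = color; //- one pixel of the horizontal span
    }
  }
}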

Figure 1-1: Drawing a 2D polygon.

Animation can be achieved through double buffering. With this technique, we have one
off-screen buffer and one on-screen buffer. We draw graphics in the off-screen buffer. When we
are finished, we copy the off-screen buffer into the on-screen buffer, at which point the graphics
become visible. Then, we draw the next frame of animation in the off-screen buffer, and repeat the
process. In this way, the on-screen buffer is continually updated with new and completed images
from the off-screen buffer, thus creating the illusion of animation.
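In its simplest form, presenting a finished frame is just a copy from the off-screen buffer to the on-screen buffer, as in the following sketch (an illustration only; l3d actually blits an XImage into the window):

#include <string.h>

//- Minimal sketch of presenting one frame with double buffering.
void present_frame(unsigned char *onscreen, const unsigned char *offscreen,
                   int size_in_bytes)
{
  memcpy(onscreen, offscreen, size_in_bytes); //- completed image becomes visible
  //- the next frame is then drawn into the off-screen buffer, and we repeat
}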
Hardware acceleration allows us to send compact instructions to dedicated hardware, with
higher-level commands such as “draw a line” or “draw a polygon.” By sending such higher-level
commands to the hardware, and by letting the dedicated hardware then do the actual lower-level
pixel operations, a great speed-up can be achieved. Under Linux, we use the 3D library Mesa to
achieve hardware acceleration. Mesa has a syntax essentially identical to OpenGL and supports
hardware acceleration. The XFree86 4.0 project uses Mesa as part of its Direct Rendering Infra-
structure (DRI), providing for hardware-accelerated 3D graphics within a window under the X
Window System. We use the terms Mesa and OpenGL essentially interchangeably in this book.

3D Graphics Fundamentals
3D graphics is the creation of a two-dimensional image or series of images on a flat computer
screen such that the visual interpretation of the image or series of images is that of a three-dimen-
sional image or series of images.

Figure 1-2: Definition of 3D graphics.

The visual interpretation of an image depends on the optics of light rays striking the retina.
Points emit light radially in all directions along straight lines, called light rays. Some subset of all
light rays enters the eye; we call this subset seen light rays. Seen light rays are refracted by the
eye’s lens, and focus onto the retina. The biochemical reactions within the retina produce the sen-
sation of vision.

Figure 1-3: Light rays.

If a 2D image causes the same light rays to strike the retina as those coming from a 3D object,
then the 2D image can be interpreted as a 3D object.
In 3D graphics, we want the seen light rays coming from the flat computer screen to corre-
spond to the seen light rays which would be seen if we were looking at a real 3D object located
behind the computer screen. To accomplish this, we compute intersections between the seen light
rays and the flat plane of the computer screen. These intersection points, since they lie along the
straight light rays going from the original 3D points to the eye, emit the same seen light rays as the

original points. Therefore, the projected points can be visually interpreted to be the original 3D
object.

Figure 1-4: A 2D image can be interpreted as a 3D object.

Computing an intersection between a seen light ray (coming from a point) and a flat plane is
called projecting the point onto the plane. In particular, this is a planar geometric perspective pro-
jection. The important term is “perspective.” The fact that the projection is perspective implies
that the resulting images appear realistically foreshortened, just as would be perceived by our own
eyes or by a physical camera taking a 2D snapshot of a 3D scene.

3D Coordinate Systems and Vectors


Before we can perform a perspective projection on points, we need a coordinate system to specify
the points in 3D. We can use a left-handed or a right-handed coordinate system. Define a coordi-
nate system such that x × y = z, where the symbol × represents the vector cross product. In a
right-handed system, the vector cross product is computed using the right-handed convention,
implying that in Figure 1-5, the z axis points out of the page. In a left-handed system, the vector
cross product is computed using the left-handed convention, implying that in Figure 1-6, the z axis
points into the page.
In this book, we use the left-handed coordinate system, and the left-handed convention for
computing our vector cross products. By using both a left-handed coordinate system and a
left-handed rule for computing cross products, the results we obtain and the equations we use are
identical to those which would be used with a right-handed coordinate system and a right-handed
rule for computing cross products.
Within a coordinate system, we can specify points and vectors. Points are locations in the
coordinate space; vectors are directed displacements between points in the coordinate space. Be
careful not to confuse points and vectors. One way to specify points and vectors is to use the nota-
tion (x,y,z). (Another way is homogeneous notation; see the next section.) In this notation, the
point (x,y,z) is the point located at a distance of x units from the origin along the x axis, y units from
the origin along the y axis, and z units from the origin along the z axis. On the other hand, the vec-
tor (x,y,z) specifies the displacement of x units along the x axis, y units along the y axis, and z units
along the z axis. Since it is a directed displacement, we can add the vector (x,y,z) to any point to
arrive at a new, displaced point.


Figure 1-5: Right-handed 3D coordinate system. Figure 1-6: Left-handed 3D coordinate system.

A number of standard operations are defined on vectors. Important for this book are compo-
nent-wise vector addition, scalar-vector multiplication, the vector dot product, and the vector
cross product.
Equation 1-1: Component-wise vector addition
Equation 1-2: Scalar-vector multiplication
Equation 1-3: Vector dot product
Equation 1-4: Vector cross product
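As a quick sketch of these four operations, using a plain structure for illustration rather than the l3d vector classes:

//- Illustration of the vector operations listed above (not the l3d classes).
struct vec3 { double x, y, z; };

vec3 add(vec3 a, vec3 b) {            //- Equation 1-1: component-wise addition
  vec3 r = {a.x + b.x, a.y + b.y, a.z + b.z}; return r;
}

vec3 scale(double s, vec3 a) {        //- Equation 1-2: scalar-vector multiplication
  vec3 r = {s * a.x, s * a.y, s * a.z}; return r;
}

double dot(vec3 a, vec3 b) {          //- Equation 1-3: dot product
  return a.x * b.x + a.y * b.y + a.z * b.z;
}

vec3 cross(vec3 a, vec3 b) {          //- Equation 1-4: cross product
  vec3 r = {a.y * b.z - a.z * b.y,
            a.z * b.x - a.x * b.z,
            a.x * b.y - a.y * b.x};
  return r;
}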

Perspective Projection
Given a coordinate system in which to specify (x,y,z) points, we can then apply a perspective pro-
jection to these points to obtain the projected points, for which the seen light rays are identical to
those of the original non-projected points. In its simplest form, the perspective projection of a
point (x,y,z) is:
Equation 1-5

This formula is derived by computing the intersection between the seen light ray, coming from the
point, and a flat 2D projection plane. The d term is essentially a scaling factor. A more complete
formulation of the perspective projection is:
Equation 1-6

This form of the equation explicitly specifies the use of the d term as a field of view angle theta.
Also, it reverses the y axis orientation because it maps the projected points to the 2D pixel

coordinate system. As we have seen, the 2D pixel coordinate system has y increasing downwards,
while the 3D coordinate system has y increasing upwards.
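A minimal sketch of the simple form of the projection (Equation 1-5), treating d as the distance from the eye to the projection plane, looks like this; the complete l3d code also applies the field of view term and the y-axis flip just described:

//- Illustration only: project a 3D point (x,y,z) onto the plane z = d.
//- By similar triangles, the projected coordinates are scaled by d/z.
void project_point(double x, double y, double z, double d,
                   double *x_proj, double *y_proj)
{
  *x_proj = x * d / z;
  *y_proj = y * d / z;
}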

Matrices
In this book, we use 4×4 matrices to effect transformations on points. We can also then use 4×1
column vectors to represent 3D points and 3D vectors. Do not confuse the terms “column vector”
and “3D vector.” The former refers to a notational convention; the latter, to a directed displacement
in 3D space. The latter can be expressed by using the notation provided by the former. A 3D
point expressed in column vector notation is [x, y, z, 1]^T. A 3D vector expressed in column vector
notation is [x, y, z, 0]^T. The superscripted “T” indicates that the vectors should actually be written
transposed, in a vertical format. The fourth coordinate is w, the homogeneous coordinate; points
and vectors expressed in this notation are said to be in homogeneous coordinates. The homogeneous
w coordinate typically has a value of 1 for points and 0 for vectors. In general, for any
arbitrary non-zero value of w, the homogeneous point [x, y, z, w]^T corresponds to the location in 3D
space given by [x/w, y/w, z/w, 1]^T. In other words, we divide by w. For example, the homogeneous
point [2, 4, 6, 2]^T corresponds to the 3D location (1, 2, 3).
We multiply two matrices A and B, with a result called C, as follows. Treat each column of B
as a four-element vector. Treat each row of A as a four-element vector. Then, compute the value of
each element in resultant matrix C located at row i and column j as the dot product of row i in A and
column j in B. Not all matrices may be multiplied with one another; the definition of matrix multi-
plication implies that the matrices to be multiplied must be size-compatible. Also, in general,
matrix multiplication is not commutative; AB is generally not the same as BA—the multiplication
BA might not even be possible.
Multiplying a 4×4 matrix by a second 4×4 matrix yields a resulting 4×4 matrix whose trans-
formation is the concatenation of the transformations represented by the first two matrices. The
resulting composite transformation applies the transformation of the original right-hand matrix
first, followed by the transformation of the original left-hand matrix. (An alternative interpreta-
tion, using a changing-coordinate system view rather than a changing-point view, allows for a
left-to-right interpretation of the transformation order.) Multiplying a 4×4 matrix by a 4×1 matrix
(in other words, by a column vector representing a 3D point or a 3D vector) yields another 4×1
matrix which represents the 3D point or 3D vector transformed by the 4×4 matrix.
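A sketch of both operations, using a plain 4×4 array for illustration rather than the l3d matrix class:

//- Illustration of 4x4 matrix-matrix and matrix-point multiplication.
//- Element C[i][j] is the dot product of row i of A with column j of B.
struct mat4 { double a[4][4]; };

mat4 multiply(const mat4 &A, const mat4 &B)
{
  mat4 C;
  for(int i = 0; i < 4; i++) {
    for(int j = 0; j < 4; j++) {
      double sum = 0.0;
      for(int k = 0; k < 4; k++) { sum += A.a[i][k] * B.a[k][j]; }
      C.a[i][j] = sum;
    }
  }
  return C;
}

//- Transform a homogeneous column vector [x,y,z,w]^T by a 4x4 matrix.
void transform(const mat4 &M, const double in[4], double out[4])
{
  for(int i = 0; i < 4; i++) {
    out[i] = M.a[i][0]*in[0] + M.a[i][1]*in[1]
           + M.a[i][2]*in[2] + M.a[i][3]*in[3];
  }
}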

Specific Matrix Transformations


The matrix forms of several important 3D transformations follow.

Equation 1-7: Rotation around the x axis by θ degrees

Equation 1-8: Rotation around the y axis by θ degrees

Equation 1-9: Rotation around the z axis by θ degrees

Equation 1-10: Scaling by factors sx, sy, and sz in the x, y, and z axes

Equation 1-11: Translation by an offset of (tx,ty,tz)
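For reference, the standard matrix forms of these five transformations, with θ denoting the rotation angle, are the following; as noted earlier in this chapter, the forms are the same under the book's left-handed conventions as under the usual right-handed ones:

R_x(\theta) = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta & 0 \\ 0 & \sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\qquad
R_y(\theta) = \begin{bmatrix} \cos\theta & 0 & \sin\theta & 0 \\ 0 & 1 & 0 & 0 \\ -\sin\theta & 0 & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}

R_z(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta & 0 & 0 \\ \sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\qquad
S = \begin{bmatrix} s_x & 0 & 0 & 0 \\ 0 & s_y & 0 & 0 \\ 0 & 0 & s_z & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\qquad
T = \begin{bmatrix} 1 & 0 & 0 & t_x \\ 0 & 1 & 0 & t_y \\ 0 & 0 & 1 & t_z \\ 0 & 0 & 0 & 1 \end{bmatrix}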

Equation 1-12: Transformation from camera space to world space

In the equation above, the camera is located at (VRPx,VRPy,VRPz) and is oriented with its
right-vector along (VRIx,VRIy,VRIz), its up-vector along (VUPx,VUPy,VUPz), and its for-
ward-vector along (VFWx,VFWy,VFWz).
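Based on this description and the column interpretation covered below under "Other Matrix Properties" (the first three columns of a matrix are the coordinate system's axes, the last column its origin), the expected form of this camera-to-world matrix is:

M_{cam \to world} =
\begin{bmatrix}
VRI_x & VUP_x & VFW_x & VRP_x \\
VRI_y & VUP_y & VFW_y & VRP_y \\
VRI_z & VUP_z & VFW_z & VRP_z \\
0 & 0 & 0 & 1
\end{bmatrix}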
Equation 1-13: Rotation by θ degrees about an arbitrary vector (u1,u2,u3)

Other Matrix Properties


The inverse of a matrix is the matrix which, when multiplied with the original matrix, yields the
identity matrix I. The identity matrix is a square matrix with all zero entries except for a series of
entries with value 1 located along the main diagonal of the matrix, from the upper-left to the
lower-right corner. When viewing matrices as transformations, the inverse of a matrix then repre-
sents the opposite of the transformation represented by the original matrix. We denote the inverse
of a matrix M as M^-1.
A 4×4 matrix can be viewed as a specification of a coordinate system. The first three columns
of the matrix represent the x, y, and z axes of the coordinate system. The last column of the matrix
represents the origin point of the coordinate system. By multiplying a point with this matrix, we
obtain the world coordinates of the point as seen relative to the coordinate system of the matrix. In
other words, if we have a matrix M and a point P, then in the matrix product MP, the matrix M rep-
resents the coordinate system in which P is specified. The product MP yields the location of the P

in the world coordinate system. By inverting a matrix representing a coordinate system, we obtain
the reverse transformation. Therefore, the matrix product M^-1P yields the coordinates relative to
M of the point as specified in the world coordinate system.
When you see the matrix product MP, think of this as answering the question “P, which has
been specified relative to M, is at what location in world coordinates?” When you see M^-1P, think
of this as answering the question “P, which has been specified in world coordinates, is at what
location relative to M?”

The l3d Library Classes


This book relies on the use of a series of C++ library classes implementing all of the 2D and 3D
graphics concepts described in the previous sections. This library is called the l3d library. It is
developed incrementally in the introductory companion book, Linux 3D Graphics Programming.
In this book, we use the classes presented in the first book, and continue to build on these classes
to illustrate newer and more advanced concepts. The l3d classes are on the CD-ROM and are
also available for download from the Internet at http://www.linux3dgraphics-
programming.org.

Sample l3d Program


Before looking at the l3d classes themselves, let’s first look at a sample program which uses l3d.
This will give you a practical perspective on l3d before looking at the following sections, which go
into more detail on the specific l3d classes.
The following sample program is called drawdot and illustrates usage of the l3d library
classes in order to move a green dot around the screen, thereby forming a simple drawing pro-
gram. This program works with visuals of any color depth, in either TrueColor or indexed color
modes. Notice that this program is rather short and declares only one class. This is because the l3d
library has already declared several useful classes to simplify application programs.

Figure 1-7: Output from sample program drawdot.

Listing 1-1: drawdot.cc


#include <stdlib.h>
#include <stdio.h>

#include "../lib/tool_2d/screen.h"
#include "../lib/tool_os/dispatch.h"
#include "../lib/raster/rasteriz.h"
#include "../lib/tool_2d/scrinfo.h"
#include "../lib/system/factorys.h"

//------------------------------------------------------------------------
//-
//- STEP 1: CHOOSE THE FACTORIES
//-
//------------------------------------------------------------------------

void choose_factories(void) {
factory_manager_v_0_1.choose_factories();
}

//------------------------------------------------------------------------
//-
//- STEP 2: DECLARE A PIPELINE SUBCLASS
//-
//------------------------------------------------------------------------

class my_pipeline : public l3d_pipeline {


protected:
l3d_rasterizer_2d_imp *ri;
l3d_rasterizer_2d *r;

int x, y, dx, dy;


unsigned long color;

public:
l3d_screen *s;
my_pipeline(void);
virtual ~my_pipeline(void);

void key_event(int ch); //- from dispatcher


void update_event(void); //- from dispatcher
void draw_event(void); //- from dispatcher
};

my_pipeline::my_pipeline(void) {
s = factory_manager_v_0_1.screen_factory->create(320,200);
ri = factory_manager_v_0_1.ras_2d_imp_factory->create(320,200,s->sinfo);
r = new l3d_rasterizer_2d(ri);

s->sinfo->ext_max_red =
s->sinfo->ext_max_green =
s->sinfo->ext_max_blue = 255;

s->sinfo->ext_to_native(0, 0, 0); //- allocate background color


color = s->sinfo->ext_to_native(0, 255, 128);
s->refresh_palette();

x = y = dx = dy = 0;
}

my_pipeline::~my_pipeline(void) {
delete s;
delete ri;
delete r;
}

void my_pipeline::key_event(int ch) {


switch(ch) {
case 'h': dx=-1; dy=0; break;
case 'l': dx=1; dy=0; break;
case 'j': dx=0; dy=1; break;
case 'k': dx=0; dy=-1; break;
case ' ': dx=0;dy=0; break;
case 'q': {
exit(0);
}
}
}

void my_pipeline::update_event() {
x += dx;
y += dy;

if(x < 0) x = 0;
if(x > s->xsize-1) x = s->xsize-1;
if(y < 0) y = 0;
if(y > s->ysize-1) y = s->ysize-1;
}

void my_pipeline::draw_event(void) {
r->draw_point(x,y, color);
s->blit_screen();
}

int main() {
choose_factories();

l3d_dispatcher *d;
my_pipeline *p;

//------------------------------------------------------------------------
//-
//- STEP 3: CREATE A DISPATCHER
//-
//------------------------------------------------------------------------

d = factory_manager_v_0_1.dispatcher_factory->create();

//------------------------------------------------------------------------
//-
//- STEP 4: CREATE A PIPELINE
//-
//------------------------------------------------------------------------

//- plug our custom behavior pipeline into the dispatcher


p = new my_pipeline();

//------------------------------------------------------------------------
//-
//- STEP 5: START DISPATCHER

//-
//------------------------------------------------------------------------

d->pipeline = p; //- polymorphic assignment


d->event_source = p->s;
d->start();

delete d;
delete p;
}

NOTE The following instructions assume you have already installed the source code as
described in the Appendix. In particular, the installation instructions require you to set the
$L3D environment variable to point to the installation directory. Also, you should execute
these commands in a shell window under the X Window System. See the Appendix for details
on initial installation and configuration of the source code.

First, let’s look at compiling the program. Then, we look at the structure of the program itself.
Finally, we discuss the l3d classes.
The source code for the sample program is located in directory $L3D/source/app/
drawdot. The source and binary files are located in different directory trees; the next section dis-
cusses this in detail. For now, compile and run the program as follows:
1. To compile the l3d library, type cd $L3D/source/app/lib, press Enter, type make -f
makeall.lnx, and press Enter. Notice that this makefile has a different filename than the stan-
dard name of Makefile; therefore, we specify the -f flag to tell the make command which
file is the makefile.
2. Change to the source directory for drawdot by typing cd $L3D/source/app/drawdot and
press Enter. Compile drawdot: type make -f makeall.lnx and press Enter.
3. Type cd $L3D/binaries/linux_x/float/app/drawdot to change to the binaries directory and
press Enter.
4. Notice the object and executable files from the compilation process are placed in the corre-
sponding binary directory. Type drawdot and press Enter to run the program.
5. Notice the question “which configuration?” Type 1 for now to select a normal X11 window,
and press Enter.
6. Notice the empty black window which appears. Type l. Notice the green line which moves
from left to right across the very top of the display, and that the line continues moving after
you release the key.
7. Type j. Notice that the green line moves downward.
8. Control the movement of the line with the following keys: h to move left, l to move right, j to
move down, k to move up, and Space to stop movement.
9. Type q to end the program.
Having now successfully executed the program, let’s now take a look at the organization of the l3d
directory structure, then examine the drawdot program itself.

l3d Directory Structure


The library classes and the applications using the library classes are split into separate source and
binary directory trees. The reason we split the source files and binary files into two separate trees
is that the same source code can be compiled on a number of different platforms. Keeping the
binary files, which vary from platform to platform, in the same directory as the source files, which
remain the same, leads to a rather chaotic organization. While chaos theory is a rich and fascinat-
ing field of scientific study, we don’t necessarily want to organize our directory structure on this
basis. Splitting source and binary directories allows for a neater multi-platform directory
structure.
Specifically, the following directory structure is used:
- $L3D/source: All source files.
- $L3D/source/util: Non-C++ source files (preprocessing scripts, etc.)
- $L3D/source/app: C++ source files related directly to 3D applications.
- $L3D/source/app/lib: C++ source for the l3d library classes.
- $L3D/source/app/[program_name]: C++ source for example programs. A few simple
programs place the binary files in the source directory, but most programs place them in the
binary directory.
- $L3D/binaries/linux_x: Linux binary files compiled for the X Window System.
- $L3D/binaries/linux_x/fixed: Linux binary files compiled with fixed-point math.
Fixed-point executables are currently not very well supported and do not always function
correctly. Subdirectory structure is the same as that under the float subdirectory.
- $L3D/binaries/linux_x/float: Linux binary files compiled with floating-point
math. This is the primary output directory for binary files.
- $L3D/binaries/linux_x/float/app/lib: Linux floating-point binary files for
the l3d library.
- $L3D/binaries/linux_x/float/app/[program name]: Linux floating-point
binary files for example programs.
The makefiles automatically place the binary files in the corresponding binary directory. You typi-
cally invoke make -f makeall.lnx in the source directory for an application program, which then
compiles all of the Linux binaries and places them in the appropriate binaries directories.

NOTE Remember, the Appendix provides instructions on how to compile all of the sample
programs at once. The preceding discussion is primarily to give you an idea of the directory
structure and the reasoning behind it.

To summarize, then, the source files for the l3d library are in $L3D/source/app/lib, and the
source files for the sample programs are all in $L3D/source/app. The primary binaries are in
$L3D/binaries/linux_x/float/app.

The Five-Step Process of l3d Programs


The drawdot program can be broken up into five steps, which are representative of program-
ming with l3d. In fact, these steps are representative of event-based programming in general, on a
variety of platforms, as evidenced by the fact that the following scheme also can be applied to
event-driven programs under various operating systems.
The five steps are as follows.
1. Choose the proper factories for three classes: the screen, the rasterizer implementation, and
the event dispatcher.
2. Declare a pipeline subclass. The pipeline must directly or indirectly ask the factory to create a
screen and a rasterizer implementation (typically in the pipeline constructor). Override the
abstract pipeline methods to allow your program to respond to events, to update itself, and to
draw to the screen using the rasterizer implementation.
3. Create a dispatcher by using the factory.
4. Create a pipeline. Your pipeline should in its constructor ask the factory to create a screen and
a rasterizer implementation, and store these objects locally. Connect the pipeline and the
screen to the dispatcher.
5. Start the dispatcher. The dispatcher enters an event loop, extracts events from the screen, and
calls your pipeline periodically to allow your pipeline to do its work, respond to input, and
draw to the screen.
Let’s examine each step in detail to understand the general l3d structure within the context of the
sample drawdot program. This serves two goals: first, to understand the general l3d structure,
and second, to understand the specific functions called by drawdot in order to draw to the
screen. Then, we will take a look at the l3d classes themselves, which we build upon throughout
the book to incorporate increasingly advanced and reusable 3D graphics concepts.
Step 1. Choose the Proper Factories
The first step in writing an l3d application is to choose the proper factories for the program. This is
done by calling the choose_factories function defined in the so-called factory manager. To
localize object creation and free applications from needing to know specific details of concrete
classes, we use the factory design pattern. The factory manager is the central location where all
concrete factories, which are globally visible to the entire program, are accessible. The following
line chooses the factories within the factory manager:
factory_manager_v_0_1.choose_factories();

NOTE The class name has the suffix v_0_1 to represent the fact that this is the first version
of the factory manager. Later versions of the factory manager class manage more factories.
This is an example of how subclassing can provide a historical record of program develop-
ment, within the source code itself.

Choosing the factories essentially means customizing, at run time, all customizable behavior,
which then takes effect for the duration of the program. In particular, the l3d factory manager man-
ages three factories:
1. A screen factory, producing objects corresponding to the abstract interface l3d_screen.
Class l3d_screen represents the physical output device—a window under X11, a
full-screen hardware-accelerated window using Mesa, or even a DIBSection under Microsoft
Windows.
2. A rasterizer implementation factory, producing objects corresponding to the abstract interface
l3d_rasterizer_2d_imp. Class l3d_rasterizer_2d_imp represents a particu-
lar implementation of 2D rasterization concepts. We use the term “rasterizer” to denote a
software interface to a rasterizer implementation. A rasterizer implementation is a particular
hardware or software component that draws 2D graphics primitives (triangles, lines, dots)

into a frame buffer. Two important types of rasterizer implementations are software rasterizer
implementations, which write directly into an off-screen buffer, and hardware rasterizer
implementations, which use specialized, faster functions for hardware-accelerated pixel
operations.
3. A dispatcher factory, producing objects corresponding to the abstract interface l3d_dis-
patcher. Class l3d_dispatcher represents a generalized event dispatcher in a

particular operating system environment. Under X, the dispatcher intercepts X events within
a window’s event loop and passes them on transparently to our application. Using hardware
acceleration with Mesa, the dispatcher works within the event framework provided by Mesa
and GLUT, again forwarding events in a transparent way to our application. Under another
operating system, the dispatcher would need to call any OS-specific routines necessary to
capture and forward events.
All of these factories represent system-specific information: the output device, the rasterizer
implementation, and the event dispatcher. Therefore, by choosing the factories, we are essentially
dynamically configuring the program to use the desired run-time environment. In our case, the
factory manager simply asks the user which factories should be used, but more sophisticated solu-
tions are also possible. We could, for instance, have an auto-detect routine that searches for the
existence of particular hardware, and that, depending on whether or not it finds it, configures the
factory to create the appropriate software component accordingly.
Step 2. Declare a Pipeline Subclass
The second step in writing an l3d application is to declare a pipeline subclass. A pipeline is simply
a sequence of operations on data. The main loop in a game or graphics program is typically called
the pipeline. Therefore, the pipeline contains, directly or indirectly, your application’s main data
and functionality.
We say “directly or indirectly” because the pipeline might do nothing other than create
another object, to which it then delegates the main program’s responsibility. In such a case, the
pipeline is not directly responsible for the application’s data and functionality, but instead merely
serves as an interface between the dispatcher and the object actually doing the real work.


The pipeline does not control execution of the program. Instead, it responds to events. The
abstract l3d_pipeline class provides a set of virtual event functions, which are automatically
called by an event dispatcher (class l3d_dispatcher, covered in the next section). By declar-
ing a subclass of l3d_pipeline, you can override the virtual event functions to provide
specific responses to specific events, without needing to know how or when these functions are
invoked.
In particular, an l3d_pipeline subclass should do three things:
1. Directly or indirectly create and store a screen object, a rasterizer implementation object, and
a rasterizer object. This is typically done in the constructor. The first two objects, the screen
and rasterizer implementation, must be created by using the already chosen factories (section
“Step 1: Choose the Proper Factories”). The third object, the rasterizer itself, is directly cre-
ated via the C++ operator new, since the rasterizer itself contains no platform-specific
dependencies. (Such dependencies are all in the rasterizer implementation, not the rasterizer.)
2. Declare internal variables, functions, and objects to store the current state of the virtual world.
3. Override the l3d_pipeline virtual event functions to handle input, update internal
objects, and draw output to the screen. Handling input and updating internal objects are both
done by using data structures specific to the application program. Drawing output to the
screen is done by using the screen, rasterizer, and rasterizer implementation objects created in
the constructor.
The first responsibility of an l3d_pipeline subclass is easy to understand. The pipeline repre-
sents the application. The application should display interactive graphics on the screen. We
therefore create and store a screen object, representing the output device, and a rasterizer imple-
mentation, representing a strategy for drawing graphics to the screen. The rasterizer itself presents
a high-level interface to rasterization functionality, implemented by the low-level tools offered by
a rasterizer implementation. Again, remember that a rasterizer implementation can be either a
software rasterizer implementation, directly manipulating bytes in an off-screen frame buffer, or a
hardware rasterizer implementation, calling hardware API functions to instruct the hardware to
draw the graphics for us. Therefore, through the rasterizer, rasterizer implementation, and screen,
our program has an interface to screen and screen-drawing functionality.

NOTE Theoretically, the screen object could also be created outside of the pipeline. (The
following discussion also applies to the rasterizer and rasterizer implementation objects.)
There is no technical reason why the screen absolutely must be created within the pipeline
constructor. In practice, though, this would make little sense. Consider that the pipeline repre-
sents the entire application logic. Creating a screen outside of the pipeline would also mean
needing to destroy the screen outside of the pipeline. This would imply some sort of a
“higher-level” layer of functionality which creates and destroys objects the pipeline needs in
order to function. This would only make sense if the screen object often needed to be used
outside of the context of the pipeline, at this “higher-level” layer. Given the current premise
that the pipeline is the application, a higher-level layer makes no sense. Therefore, in the cur-
rent architecture, there is no reason to move management of the screen object outside of the
pipeline.

The second responsibility of an l3d_pipeline subclass, declaring data, is also intuitive. Since
it represents the application, the pipeline subclass must contain all data necessary for maintaining
and updating the current state of everything within the virtual world. This might include such
things as the current positions and velocities for objects of interest, energy levels for spaceships,
the prevailing wind velocity, or anything else being modeled. All of this data is stored within the
l3d_pipeline subclass in the form of member variables or objects.
The third and final responsibility of an l3d_pipeline subclass is to override virtual event
functions to respond to events. An l3d_pipeline subclass can override any of the following
virtual functions declared in l3d_pipeline:
void key_event(int ch); //- from dispatcher
void update_event(void); //- from dispatcher
void draw_event(void); //- from dispatcher
The key_event function is automatically called whenever a key is pressed in the application
window. The function is called with a parameter indicating the ASCII value of the key pressed,
thereby allowing the application to respond to the particular key pressed.
The update_event function is automatically called whenever the application is allowed to
update itself. You can think of your program as being a giant clockwork, with everything happen-
ing at each “tick” of the clock. This event function represents one “tick” in your program. At this
point you update the internal variables storing the positions of various objects, update velocities,
check for collisions, and so on.

TIP The calling frequency of update_event is not necessarily guaranteed to be constant.


That is to say, the amount of physical time which elapses between successive calls may be
slightly different. For accurate physical simulations, where velocities or other physical quanti-
ties should be updated based on time, we can store an internal variable recording the value
of the system clock the last time that update_event was called. We can then compare the
current system clock to the value of the variable to determine how much physical time has
elapsed, and update the time-dependent quantities accordingly. The particle system program
later in this book presents one example of such code.
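A minimal sketch of this technique under Linux, using gettimeofday, might look as follows (an illustration only; the particle system code later in the book is the actual example):

#include <sys/time.h>

//- Sketch of a pipeline member updating a position from elapsed wall-clock time.
class my_timed_pipeline {
  struct timeval last_time;
  double x, velocity;                 //- position in pixels, velocity in pixels/second
public:
  my_timed_pipeline(void) {
    x = 0.0; velocity = 100.0;        //- hypothetical starting values
    gettimeofday(&last_time, 0);
  }
  void update_event(void) {
    struct timeval now;
    gettimeofday(&now, 0);
    double elapsed = (now.tv_sec  - last_time.tv_sec)
                   + (now.tv_usec - last_time.tv_usec) / 1000000.0;
    last_time = now;
    x += velocity * elapsed;          //- advance by velocity scaled by elapsed time
  }
};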

The draw_event function is called whenever the application is allowed to draw its output to the
screen. This function typically will be called immediately after update_event, but this does
not necessarily have to be the case. In other words, the updating of the virtual world and the draw-
ing of the virtual world can be thought of as two separate threads of control, which are usually but
not necessarily synchronized.
With this general understanding of a pipeline’s structure (creation of screen, storage of vari-
ables, and response to events), we can take a closer look at the particular details of the pipeline in
the drawdot program.
The constructor for the drawdot pipeline takes care of the first responsibility of an
l3d_pipeline subclass: creation of screen, rasterizer implementation, and rasterizer objects.
In the constructor, we first ask the screen factory to create a screen and the rasterizer implementa-
tion factory to create a rasterizer implementation. We then create a rasterizer which uses the
created rasterizer implementation. The member variables s, ri, and r represent the screen,
rasterizer implementation, and rasterizer, respectively.

The constructor also takes care of the second responsibility of an l3d_pipeline subclass:
management of data representing our virtual world. In our case, our virtual world consists of a sin-
gle pixel (a humble start). The following member variables are declared and initialized to keep
track of the dot’s status: color, x, y, dx, and dy. Variables x, y, dx, and dy represent the dot’s
current horizontal and vertical positions and velocities, and are all initialized to zero. The variable
color represents the dot’s current color, and is specified as follows. First, we logically define the
maximum red, green, and blue values to be 255. Then, we specify a color of (0, 255, 128), which
means a red intensity of 0, a green intensity of 255, and a blue intensity of 128, all being measured
in relation to the logical maximum of 255 which we just set. Finally, we convert this RGB color to
a “native” color appropriate for the current screen’s color depth and color model. The conversion
is done via an object of type l3d_screen_info, which encapsulates the complicated color
calculation discussed earlier. The color conversion function is called ext_to_native, as it
changes a color from an “external” RGB format into a format “native” to the XImage.
The drawdot pipeline then overrides the key_event, update_event, and draw_
event methods to respond to events. This fulfills the third and final responsibility of an
l3d_pipeline subclass, responding to events.
The key_event for the drawdot pipeline checks to see if any one of the directional keys
was pressed, and updates the dx and dy variables, representing the horizontal and vertical veloci-
ties, accordingly.
The update_event for the drawdot pipeline adds the velocities to the positional vari-
ables, and makes sure the position stays within the bounds of the screen. In other words, x += dx
and y += dy.
The draw_event for the drawdot pipeline first calls the draw_point routine of the
rasterizer, which then forwards the request to the rasterizer implementation to draw a pixel at a
particular point in a particular color. Remember that the drawing occurs off-screen (double buffer-
ing). The pixel color must be specified in “native” format for the current color depth and color
model. We already computed and stored this color earlier by using the function l3d_screen_
info::ext_to_native. After plotting the point, we call blit_screen to cause the
off-screen graphics to be copied to the screen.
Let us summarize the main idea behind the pipeline. A pipeline represents the main function-
ality of an application and is subclassed from l3d_pipeline. An l3d_pipeline subclass
has three responsibilities: creating screen-access objects, declaring world data, and responding to
events. Creating screen-access objects (screen, rasterizer implementation, and rasterizer) allows
access to the screen and screen-drawing functions. Declaring world data allows the program to
keep track of the state of all objects in the virtual world. Responding to events is how the pipeline
responds to input (through key_event), updates the virtual world (through update_event),
and draws to the screen (through draw_event, using the previously created screen-access
objects).
The pipeline does not need to worry about how or when events occur; it merely responds to
them. The pipeline’s virtual event functions are thus called from an outside source. This outside
source is the dispatcher.

Step 3. Create a Dispatcher


The third step in writing an l3d application is to create an event dispatcher object. The event dis-
patcher serves as an interface between an event source and an event receiver. The event receiver in
our case is the pipeline. The event source is a window created under a specific event-driven win-
dowing system. The role of the dispatcher is to receive events from the system-specific window,
and to call the appropriate pipeline functions to allow the pipeline to respond to the events.
The whole idea is to isolate the pipeline (i.e., your application logic) from the details of the
underlying event-generating mechanism. This way, the pipeline’s logic can focus exclusively on
application-specific responses to events, without needing to know exactly how the windowing
system generates and transmits events. The dispatcher handles all the messy details of event cap-
turing and translates this into a clean, simple, virtual function call to the pipeline. This allows your
pipeline to work on a variety of platforms, with a variety of event-generating mechanisms.
The event dispatcher must be created using the factory chosen in step 1. This is because the
dispatcher represents system-specific code, and should thus be created through an abstract factory.
Step 4. Create a Pipeline
The fourth step in creating an l3d application is to create your pipeline object. This step is easy.
Having already declared and defined an l3d_pipeline subclass, which fulfills the three pipe-
line responsibilities (creating screen-access objects, declaring world data, and overriding
event-handling functions), we simply create the pipeline directly with the C++ new operator.
This, in turn, invokes the pipeline’s constructor, which creates the screen, rasterizer implementa-
tion, and rasterizer objects.
At this point, the application is ready to respond to events. We just need to pump events to the
pipeline in order to allow it to respond to input, update itself internally, and draw to the screen. To
start the entire event process, we start the dispatcher.
Step 5. Start the Dispatcher
The fifth step in writing an l3d application is to start the dispatcher. We must do three things:
1. Assign a pipeline to the dispatcher.
2. Assign an event source to the dispatcher.
3. Call start.
A moment’s reflection makes it clear why the two assignments are necessary. The dispatcher takes
events from the event source, interprets them minimally, and calls the appropriate pipeline virtual
event function to allow the pipeline to respond. The pipeline member of the dispatcher object
is set to the pipeline we just created. The event_source member of the dispatcher object is set
to the screen object created in the pipeline’s constructor—in other words, the screen (in our case,
the X window) is the source of events. With these two member variables set, the dispatcher can
then begin to extract events from event_source and pass them on to pipeline—a process
set in motion by calling start.

Summary of Fundamental l3d Concepts


The five-step process presented above is typical of l3d programs. First, you choose the proper fac-
tories to configure the program to its environment. Then, you declare a pipeline representing your
application and its data. You create an instance of the pipeline, which in turn creates screen,
rasterizer implementation, and rasterizer objects. You “plug” this pipeline into an event dis-
patcher. Your application pipeline responds to events from the dispatcher by filling in the blanks
left by the virtual functions key_event, update_event, and draw_event. The application
pipeline draws to the screen by using the screen, rasterizer implementation, and rasterizer objects
it created within its constructor. This forms a complete, interactive, event-driven, hardware-inde-
pendent graphics program.

Overview of l3d Classes


We are now ready to look at the specific l3d classes. A much more detailed development of the fol-
lowing l3d classes appears in the introductory companion book Linux 3D Graphics Programming.
The descriptions below are more of a reference, but suffice for a high-level understanding of the
structure of the library classes.

TIP Although the descriptions of the l3d classes presented here are not as detailed as the
full-blown development in the companion book, all of the sample programs and library code
from the introductory companion book are also included on the CD-ROM. Therefore, you
can study the provided code to understand more precisely the library concepts summarized in
the following sections.

Applications and Events


Applications written with l3d are event-driven. The classes described in the following sections
deal with applications, events, and event handling.

An Event-driven Graphics Application: l3d_pipeline


The class l3d_pipeline, in file pipeline.cc, is an abstract class representing your appli-
cation. As we saw earlier, you subclass from l3d_pipeline to create your application, and
must fulfill three responsibilities: creating the screen-access objects, declaring world data, and
responding to events.
Event Generation: l3d_event_source
Class l3d_event_source, in file ev_src.h, is an empty class whose sole purpose is to
indicate that any class derived from this class is capable of serving as an event source for
an l3d_dispatcher object. A class will inherit, possibly multiply inherit, from l3d_
event_source if it is usable as a source of events. Class l3d_screen inherits from
l3d_event_source.

The Event Dispatching Loop: l3d_dispatcher


The class l3d_dispatcher, in file dispatch.h, is an abstract class representing an event-
dispatching mechanism. It extracts events from an underlying event-generating mechanism and
translates these into virtual function calls on a pipeline, thereby allowing the pipeline to respond to
events. Class l3d_dispatcher is a recognition of the general concept of an event dispatching
mechanism, which can be subclassed to provide support for event dispatching on a variety of plat-
forms other than Linux and X (e.g., Linux and Mesa, or Microsoft Windows and a Windows
window). This is an example of the strategy design pattern, where an algorithm (in this case, the
event loop) is turned into a class of its own.
Figure 1-8 illustrates the relationship among the abstract classes l3d_pipeline,
l3d_dispatcher, and l3d_event_source.

Figure 1-8: Class diagram for event-related classes.

The class l3d_dispatcher_x11, subclassed from l3d_dispatcher, represents a


dispatcher specific to the X Window System. The source file is dis_x11.cc. The class
l3d_dispatcher_mesa (file dis_mesa.cc) represents a dispatcher working within the
GLUT framework provided for OpenGL and Mesa.

2D Graphics
Current mainstream display hardware for personal computers is for all practical purposes flat and
two-dimensional. The classes described in the following section deal with accessing 2D screen
and drawing 2D raster graphics.

Control of the Screen: l3d_screen


The classes l3d_screen (file screen.cc) and l3d_screen_info (file scrinfo.h)
work closely together to control and provide access to the display hardware.
The class l3d_screen is an abstract interface to a display device. A screen is responsible
for the creation, setup, and display of the data which has been plotted to the screen. However, the
screen is not responsible for doing the plotting itself. In other words, the screen is a “dumb” dis-
play device. It can show itself, initialize a color palette, and so forth, but it does not know how to
draw polygons, lines, dots, or anything else. The screen is simply a passive display device which
other objects can use as a target to draw into. In particular, the l3d_rasterizer class handles
plotting (rasterization) tasks, manipulating data within the screen. Class l3d_screen is a recog-
nition of the general concept of a screen as a generalized output device—not just an X Window

with an XImage. The application program uses a screen through the abstract l3d_screen inter-
face, and is therefore not tied to any particular display device.
The class l3d_screen_x11, subclassed from l3d_screen, represents a screen under
the X Window System, in the form of a window using an XImage for graphics display. The source
file is sc_x11.cc. The class l3d_screen_mesa (file sc_mesa.cc) represents a screen
created using GLUT and Mesa.
Relevant Screen Attributes: l3d_screen_info
The class l3d_screen_info (file scrinfo.h) is an abstract interface to screen information.
We define screen information as follows: any information which an external class needs to know
about a screen object in order to be able to work with that screen. Class l3d_screen_info
encapsulates the vital statistics about a screen and makes them available through a clean, easy
interface.
In particular, when dealing with colors and XImages, the bytes making up a pixel and their
interpretation depend on the color depth and color model of the underlying X server. Class
l3d_screen_info is a recognition of the more general concept that external users of a screen
need to access such “screen information” in order to do anything useful with the screen.
Such screen information includes:
- Maximum ranges for the specification of red, green, and blue (RGB) values
- A conversion function to convert colors specified in RGB values into native color format (i.e.,
the exact bit format) needed by the underlying screen
- Number of bytes per pixel
- A pointer to the memory of the off-screen buffer (not applicable for hardware-accelerated
display devices, where such access is usually not possible)
- A method for setting internal color state (needed for OpenGL)
- Methods for computing lighting and fog tables for gradual fading of colors, needed for light
and fog effects
- Methods for applying lighting and fog tables to a particular color to obtain the resulting lit or
fogged color

NOTE It might appear tempting to merge l3d_screen_info into the l3d_screen class
itself. After all, isn’t screen information part of the screen itself? The problem appears when
we try to subclass the screen. Screen information can be handled fundamentally in two differ-
ent ways: as RGB (TrueColor) or as indexed color. Similarly, screens themselves come in a
variety of sorts: X11 screens, Mesa screens, Windows screens, DOS screens. If screen infor-
mation and the screen were merged into one class, we would have the unpleasant situation of
having several sorts of each screen: an X11 RGB screen, an X11 indexed screen, a Mesa RGB
screen, a Mesa indexed screen, a Windows RGB screen, a Windows indexed screen, a DOS
RGB screen, a DOS indexed screen. Extending the class hierarchy with a new information
type or a new screen type becomes a major headache. This situation is sometimes called a
nested generalization and indicates that the class should be split into two. For this reason, we
keep the screen information separate, in its own l3d_screen_info class hierarchy. The
l3d_screen is also separate, in its own l3d_screen class hierarchy. We can then, at run
time, mix and match screen information types and screen types freely, without the

multiplicative explosion of classes which would have resulted had we combined


l3d_screen_info and l3d_screen into one class.

The class l3d_screen_info_rgb, subclassed from l3d_screen_info, represents


screen information for X TrueColor visuals or other color models based upon a direct specifica-
tion of red, green, and blue pixel values. The source file is si_rgb.cc. The class
l3d_screen_info_indexed (file si_idx.cc) represents screen information for X
PseudoColor visuals or other color models based upon an indexed or paletted color model.
Figure 1-9 illustrates the class diagram for the screen-related classes.

Figure 1-9: Screen-related classes.

The Rasterizer: l3d_rasterizer_2d


The class l3d_rasterizer_2d (file rasteriz.cc) represents a 2D, double-buffered
rasterizer—a subsystem capable of drawing 2D geometric primitives on a double-buffered raster
output device. The class does not do any of the rasterization work itself, but instead relies on a
rasterizer implementation (covered in the next section) to provide a “toolkit” of functions that the
rasterizer calls. Class l3d_rasterizer_2d is a recognition of the general concept of a
rasterizer interface, which allows access to abstract rasterization concepts. The rasterizer interface
allows drawing not only of individual pixels, but also of groups of pixels forming geometric prim-
itives, such as lines and polygons.

NOTE Separating the rasterizer and the rasterizer implementation is an example of the
generally applicable bridge design pattern, where an abstraction and its implementation are
two separate classes. This allows for the class hierarchies for the abstraction and the imple-
mentation thereof to vary and be refined independently, avoiding the multiplicative explosion
of classes discussed earlier (nested generalizations). This technique is also referred to as
using a handle.

The class l3d_rasterizer_2d_imp (file rasteriz.cc) represents an implementation of


the tools necessary for a 2D rasterizer to do its work, and is called a rasterizer implementation. A
rasterizer implementation is an abstract class, and is subclassed to provide concrete rasterizer
implementations for specific software or hardware platforms. While the related class
l3d_rasterizer_2d represents an interface to 2D rasterizer functionality, the class
l3d_rasterizer_2d_imp represents an interface to an implementation of 2D rasterizer
functionality. This rasterizer implementation can (in subclasses) be based upon either hand-coded
software algorithms, or an API to hardware-accelerated rasterization functions.
The class l3d_rasterizer_2d_sw_imp, derived from class l3d_rasterizer_
2d_imp, represents a rasterizer implementation implemented in software (as opposed to hard-
ware). The source file is ras_sw.cc. The calculation of the bytes corresponding to a pixel and
setting these bytes in order to display geometric primitives are all operations typical of software
rasterization implementations, but not necessarily of hardware rasterizer implementations. This
is the reason that such byte and pixel calculations take place only in the concrete
l3d_rasterizer_2d_sw_imp subclass, which represents a software rasterizer implementa-
tion, and not in the abstract ancestor class l3d_rasterizer_2d_imp, which represents a
more abstract concept of a software or hardware rasterizer implementation.
Class l3d_rasterizer_2d_mesa_imp (file ras_mesa.cc) represents a rasterizer
implementation implemented using OpenGL/Mesa calls. Through hardware support in the Mesa
library, or through the Direct Rendering Interface, this rasterizer implementation can take advan-
tage of hardware acceleration.
Figure 1-10 illustrates the class diagram for rasterization-related classes.

Figure 1-10: Rasterization-related classes.


Concrete Factory Management


The class l3d_factory_manager (file factorys.cc) represents the factory manager
which contains all factories which any class might need in the entire program. Whenever an object
requests object creation through a “factory,” this factory is to be found in the central factory
manager.
Global variable factory_manager_v_0_1 is a global instance of the factory manager.
An application chooses its factories at startup through the global variable factory_man-
ager_v_0_1. Thus, the factory manager class is a singleton class, which is instantiated only
once in the entire program.
Notice that this is the only l3d class which directly manipulates the concrete factories by their
class name, and which includes the header files for the concrete factories. All other users of the
factories always go through an abstract factory interface, and do not include the header files for the
concrete factories. This is the whole point of the abstract factory: to completely isolate the


application from the underlying concrete types to allow for future extension of code without any
change of original code.

Specifying Geometry and Behavior


The classes described in the following sections allow l3d programs to specify objects in 3D and to
manipulate them.

Points
Specifying points is done through two classes: l3d_point (file point.cc) and l3d_coor-
dinate (file coord.h).
The class l3d_point represents a set of four numbers, of the form (x, y, z, w). These num-
bers represent a location in space in terms of displacements along coordinate axes.
The class l3d_coordinate represents a transformable coordinate location in space. It
stores two variables of type l3d_point, named original and transformed. The trans-
formed coordinate is always initialized to equal the original coordinate at the beginning of each
frame (method reset), after which it may be transformed by zero or more transformations
(method transform). After any number of transformations, the application may choose to save
the current location in one of the intermediate storage locations (member transformed_
intermediates), so that the old location may be referenced later. In the end, the transformed
coordinate must represent a point in 2D pixel space, since ultimately the points must be used to
plot images on-screen. The whole idea is that an l3d_coordinate is more than just a single
location in space: it is an original location, a number of optional intermediate locations, and a final
location.
The most important thing to remember about the original and transformed members
is that the transformed coordinate is the one that is finally used to display the pixels on the screen.
An application must at some point put the final pixel coordinates in the transformed member
variable.
Lists
Specifying more complex geometry (as we see in the coming section on polygons) requires us to
store lists of vertices. For this, we use a combination of two data structures: l3d_list and
l3d_two_part_list (both in file list.h). Class l3d_list is a dynamically growable
list that is like an array, but which automatically expands and allocates more space as necessary.
The elements are accessed as normal through the array index operator [ ]. Class l3d_two_
part_list is an extension of l3d_list and partitions the list of items into two parts: a fixed
part and a varying part. The fixed part is fixed in size and never changes; the varying part is based
on some dynamic calculation and changes often in size. However—and this is the whole point of
the l3d_two_part_list—both the fixed and the varying parts are accessed identically. If we
stored the fixed and varying parts in two separate lists, any references to list items would need to
specify if the item comes out of the fixed or the varying list, which makes for rather inconvenient
code. With the l3d_two_part_list, external users don’t need to worry about whether
the referenced item is fixed or varying. This uniformity of access is primarily of use in polygon

clipping. Clipping can create temporary (varying) vertices which must be accessible exactly like
normal (fixed) vertices, but which may vary wildly in number from frame to frame.
Using these list classes is easy. They are declared as template classes, so that they work with
any data type. Let’s first begin with the l3d_list class.
To use l3d_list, do the following:
1. Declare an instance of the list, providing the data type as the template parameter and passing
the initial size of the list to the constructor. The initial size must be greater than zero.
2. Call next_index before storing any new element into the list, to obtain the next free index
within the list. This also causes the list to grow if you exceed the current size of the list.
3. Store and access elements in the list using the array index operator [ ]. Never attempt to access
an element you have not already allocated by having called next_index.
4. Query the number of items via the num_items member. Valid indices range from 0 to
num_items –1.
5. Effectively empty or reduce the size of the list by setting num_items to zero or a smaller
number. Never set num_items to a larger number; use next_index to increase the size of
the list.
6. Copy the list by assigning it to another l3d_list object. This makes a full copy of the list
and its contents. The two copies are completely independent of one another and contain no
pointers to common objects (unless the contents of the list themselves are pointers, in which
case the two lists contain separate copies of pointer objects, which point to the same locations
in memory).
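As a brief sketch of these six steps (the element type int and the values are arbitrary; only the members named above are used):

l3d_list<int> scores(1);                 //- step 1: initial size must be greater than zero
int i;
i = scores.next_index();                 //- step 2: grow the list and get the next free index
scores[i] = 100;                         //- step 3: store through operator [ ]
i = scores.next_index();
scores[i] = 250;
for(int j=0; j<scores.num_items; j++) {  //- step 4: valid indices run from 0 to num_items-1
  printf("%d\n", scores[j]);
}
l3d_list<int> scores_copy(1);
scores_copy = scores;                    //- step 6: full, independent copy by assignment
scores.num_items = 0;                    //- step 5: effectively empty the original list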

NOTE The creation of the list elements occurs through an abstract factory; the copying of
list elements, through a possibly virtual assignment operator; the access of list elements,
through an abstract class pointer in the overloaded array index operator [ ]. This means that
by providing a new factory (via the l3d_list constructor) and overriding a virtual assign-
ment operator =, an existing l3d_list in already existing code can be extended after the
fact to work with new subclasses, and the original code is still guaranteed to work because the
internal access to the list elements is done through an abstract class pointer. A detailed exam-
ple of this technique appears in Chapter 2, when we extend the existing polygon class to store
texture coordinate information with each vertex, without needing to change any of the already
existing code. By default, an abstract factory is automatically created and destroyed if you do
not provide one; the flexibility is there if you need it, but does not burden you if you do not.

The other list class is the l3d_two_part_list. Again, a two-part list consists of a fixed part,
whose size never changes, and a varying part, whose size may change greatly over time. The list
elements in both the fixed part and the varying part are accessed identically (with the array opera-
tor [ ]). To use an l3d_two_part_list, do the following:
1. Declare an instance of the list, providing the data type as the template parameter and passing
the initial fixed size of the list to the constructor. The initial size must be greater than zero.
2. Store and access fixed items in the list by using the array index operator [ ]. Valid indices for
fixed items range from 0 to num_fixed_items –1. Do not store fixed items outside of this
range. Do not change the num_fixed_items variable.

3. Call next_varying_index before storing any element in the varying part of the list, to
obtain the next free index within the varying part. This also causes the varying part of the list
to grow if you exceed the current size of the list.
4. Store and access elements into the varying part of the list using the array index operator [ ].
Never attempt to access an element in the varying part which you have not already allocated
by having called next_varying_index.
5. Query the number of varying items via the num_varying_items member. Valid indices
for varying items range from num_fixed_items to num_fixed_items + num_
varying_items – 1.
6. Effectively empty or reduce the size of the varying part of the list by setting num_vary-
ing_items to zero or a smaller number. Never set num_varying_items to a larger
number; use next_varying_index to increase the size of the varying part of the list.
7. Copy the list by assigning it to another l3d_two_part_list object. This makes a full
copy of the fixed and varying parts of the list and its contents; the two copies are completely
independent of one another and contain no pointers to common objects (unless, again, the
contents are themselves pointers).
Notice that access to both fixed and varying items is done uniformly through the array access
operator [ ].

NOTE The earlier comments about factory creation, virtual assignment, and polymorphic
access all apply to l3d_two_part_list as well. This is because l3d_two_part_list
internally uses an l3d_list to store the items.

We use two-part lists for storing vertices (i.e., objects of type l3d_coordinate) which are
later used to define polygons. A typical creation of a vertex list looks like the following:
vlist = new l3d_two_part_list<l3d_coordinate> ( 4 );
(*vlist)[0].original.X_ = float_to_l3d_real(100.0);
(*vlist)[0].original.Y_ = float_to_l3d_real(100.0);

(*vlist)[1].original.X_ = float_to_l3d_real(200.0);
(*vlist)[1].original.Y_ = float_to_l3d_real(150.0);

(*vlist)[2].original.X_ = float_to_l3d_real(150.0);
(*vlist)[2].original.Y_ = float_to_l3d_real(200.0);

(*vlist)[3].original.X_ = float_to_l3d_real( 50.0);
(*vlist)[3].original.Y_ = float_to_l3d_real(120.0);

First, we create an object of type l3d_two_part_list with a data type of l3d_coordinate. The constructor parameter 4 indicates the size of the fixed part of the two-part list. We then
initialize the first four elements, with indices 0 through 3, simply by using the array access opera-
tor [ ]. Each element of the list is an object of type l3d_coordinate, which we can manipulate
directly. In the code snippet above, we initialize the original member of the l3d_coordi-
nate, which is what an application program does to initially define a vertex list. The varying part
of the two-part list is not used above, but is used during polygon clipping operations.

Vectors and Matrices


In addition to specifying points in 3D space, we also need to specify vectors, which are displace-
ments, and matrices, which represent transformations. Class l3d_vector (file vector.cc)
represents vectors; class l3d_matrix (file matrix.cc) represents matrices.
The l3d source code defines various operators allowing the appropriate mathematical opera-
tions among scalars, points, vectors, and matrices. For instance, we can multiply two matrices
together, multiply a matrix and a point, add a point and a vector, and so forth. Note that the pipe
character (the vertical bar “|”) is used to specify matrix multiplication.
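For instance, the following sketch composes a translation and a rotation; the set and l3d_mat_rotx calls are used exactly as they appear in the morph3d listings later in this chapter:

l3d_matrix translate, rotate, composite;
translate.set
  ( int_to_l3d_real(1), int_to_l3d_real(0), int_to_l3d_real(0), int_to_l3d_real(5),
    int_to_l3d_real(0), int_to_l3d_real(1), int_to_l3d_real(0), int_to_l3d_real(0),
    int_to_l3d_real(0), int_to_l3d_real(0), int_to_l3d_real(1), int_to_l3d_real(0),
    int_to_l3d_real(0), int_to_l3d_real(0), int_to_l3d_real(0), int_to_l3d_real(1) ); //- translate by (5,0,0)
rotate = l3d_mat_rotx(0);          //- rotation about the x axis, as used in shapes.cc
composite = translate | rotate;    //- the pipe operator performs matrix multiplication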
Polygons
The l3d classes make a distinction between the geometry and the appearance of a polygon. The
base classes l3d_polygon_2d and l3d_polygon_3d describe the geometry of a polygon
in 2D or 3D, respectively. Some typical geometric operations on polygons are clipping, transfor-
mation, or surface normal computation. We subclass the geometric polygon classes to obtain
drawable polygon classes. For instance, l3d_polygon_3d_flatshaded inherits the attrib-
utes and methods used for describing the geometry of a 3D polygon, and adds new attributes and
methods necessary to draw the polygon on-screen.
A polygon requires two lists for its definition: an external vertex list, and an internal list of
indices into the external vertex list. The external vertex list can be shared among several polygons,
as is the case when we combine several polygons to make an object. The internal list of indices
specifies the polygon in terms of the external vertices, specified in clockwise order when looking
at the front of the polygon (the left-handed rule). During an analytic clipping operation, new verti-
ces can be created; this requires addition of new vertices to the external list, and temporary
redefinition of the internal vertex index list to use the new clipped vertices.
The source code for the polygon classes is located on the companion CD-ROM in the direc-
tory $L3D/source/app/lib/geom/polygon.
Objects
Class l3d_object (file object3d.h) represents a 3D polygonal object. It declares a list of
vertices and a list of polygons. The vertices are shared among all polygons. Each polygon is
defined in terms of a list of indices into the shared vertex list of the object.
The class l3d_object stores a list of matrices representing transformations on the object.
The method reset sets all vertices of the polygon back to their original positions in object coor-
dinates; the method transform then applies a transformation to the vertices, so that the vertices
may be moved into world space, camera space, and finally, screen space. The method cam-
era_transform is identical to the method transform, except that it calls a user-definable
routine after the transformation. This allows us to save the camera-space coordinates, which we
need for texture mapping (see Chapter 2). Although the perspective projection could be handled
with a matrix, the class l3d_object implements a specialized routine apply_perspec-
tive_projection, which projects the vertices of the polygon from 3D into 2D.
Polygon culling is done with the help of the linked list nonculled_polygon_nodes.
Initially, at the beginning of each frame, the list contains all polygons in the object. Through vari-
ous culling processes, we can remove polygons from the list if we determine them to be not visible

for the current viewpoint (see Chapter 4). Then, for each frame, we only need to draw the polygons
that remain in the list.
Each object can update itself. This allows construction of 3D worlds in a modular fashion,
with a number of “intelligent” objects capable of acting on their own. An object can update itself
by changing its geometry (i.e., altering the vertices in the vertex list), or by applying a transforma-
tion matrix to its geometry. Such updates can take place either through subclassing or through
plug-ins. With subclassing, a new object subclass overrides the virtual method update, which
updates the object as needed. With a plug-in, we write a dynamic shared library file which is loaded
at run time and which then updates the object.
Worlds
Just as groups of 3D polygons form a 3D object, groups of 3D objects form a world. Class
l3d_world (file world.h) is the class we use for storing worlds. A world is a displayable,
interactive 3D environment. It contains 3D objects to populate the world, a camera to see the
world, a rasterizer to plot pixels, and a screen to display the output.
A simple world as implemented by l3d_world is really little more than a simple list of
objects displayed with the painter’s algorithm. Later, we subclass from l3d_world to perform
more advanced visibility operations on our 3D objects, leading to a graph-like structure of objects
connected via portals. See Chapter 5 for details.
For interaction with the external events (such as the user pressing a key on the keyboard), the
world class relies on a corresponding pipeline class l3d_pipeline_world (file pi_wor.h).
The world pipeline class merely forwards events on to the world to be handled.

Fixed- and Floating-Point Math


In 3D graphics, we generally need to be able to store and manipulate fractional values. For this, we
introduce the type l3d_real. The purpose of the l3d_real type is to allow all computations
using real values to be performed using either floating-point or fixed-point arithmetic.
Floating-point arithmetic is done by using the C type float, and for good real-time performance
requires a hardware floating-point unit. Fixed-point arithmetic requires no floating-point unit in
hardware, and simulates fractional values with integers by multiplying the fractional value so that
the fractional part moves into the integer part.
By using l3d_real any time we want a real value, we can configure our code at compile
time to use either fixed- or floating-point math. There are four functions you need to know for
using variables of type l3d_real:

n Type conversions. A variable of type l3d_real can be converted to and from an integer or
floating-point value through the macros l3d_real_to_float, l3d_real_to_int,
float_to_l3d_real, and int_to_l3d_real. Furthermore, a variable of type l3d_
real can be converted to and from a fixed-point number through the macros l3d_real_
to_fixed and fixed_to_l3d_real.
n Addition and subtraction. Variables of type l3d_real may be added to and subtracted from
other variables of type l3d_real with the normal plus (+) and minus (–) operators. Addi-
tion and subtraction with variables of any other type must first use one of the type conversion
macros to convert the variable of the other type to l3d_real.
n Multiplication and division. Variables of type l3d_real can be multiplied or divided by
variables of type l3d_real or of type int only. Such multiplications or divisions must use
one of the following macros, all of which return a type of l3d_real: l3d_mulrr (multi-
plies an l3d_real by an l3d_real), l3d_mulri (multiplies an l3d_real by an
int), l3d_divrr (divides an l3d_real by an l3d_real), and l3d_divri (divides
an l3d_real by an int).
n Other functions. The iceil function returns the integer ceiling of (i.e., the smallest integer
greater than or equal to) an l3d_real. ifloor returns the integer floor of (the largest integer
less than or equal to) an l3d_real. The function l3d_sqrt returns an l3d_real representing the
square root of the given l3d_real argument.

NOTE The reasons for not making l3d_real a class are discussed in the Appendix of the
introductory companion book, Linux 3D Graphics Programming.

If we choose to use floating-point math (at compile time via a #define), then the l3d_real
macros do nothing other than call the ordinary cast, multiplication, or division operators. If we
choose to use fixed-point math (again, at compile time via a #define), the l3d_real macros
are defined quite differently, applying the somewhat arcane rules for fixed-point math to simulate
real-valued computations using purely integer arithmetic.
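Purely as an illustration of the kind of rules involved, a fixed-point format might reserve 16 bits for the integer part and 16 bits for the fraction; the following sketch is generic and is not necessarily the precision or macro layout that l3d itself uses:

//- Illustration only: a 16.16 fixed-point value is the real value multiplied by 2^16.
typedef long fixed_16_16;
#define FIX_SHIFT 16
#define float_to_fix(f) ((fixed_16_16)((f) * (1L << FIX_SHIFT)))
#define fix_to_float(x) ((float)(x) / (1L << FIX_SHIFT))
//- Multiplying two fixed-point values doubles the scale factor, so the 64-bit
//- intermediate result must be shifted back down.
#define fix_mul(a,b) ((fixed_16_16)(((long long)(a) * (b)) >> FIX_SHIFT))
//- Division must scale the numerator up before dividing.
#define fix_div(a,b) ((fixed_16_16)((((long long)(a)) << FIX_SHIFT) / (b)))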
An unfortunate consequence of using the l3d_real macros for multiplication and division
is that this leads to somewhat awkward code. For instance, the following computation with float-
ing-point values:
x = a * ( ( b + c ) / d )
would need to be expressed with l3d_real types as:
x = l3d_mulrr( a , l3d_divrr( b + c , d ) )

The multiplication and division must therefore be realized as nested macro invocations. Notice,
however, that no macro invocation is necessary for the addition of b and c, since addition and
subtraction of l3d_real types can be done with the normal addition and subtraction operators.

Summary of l3d Classes


Figure 1-11 illustrates the major l3d classes covered up to this point. Study the previously pre-
sented drawdot program to keep a practical perspective on the use of the l3d classes, and
remember the five-step process for creating l3d applications. Also, remember that the CD-ROM
contains complete source code listings for all of these classes, and the introductory companion
book Linux 3D Graphics Programming develops the classes in detail, with several example
programs.

Figure 1-11: Class diagram for the most important l3d classes covered so far. For space reasons, not all classes are
displayed; the naming convention makes it obvious where the other classes would appear in the diagram. For instance,
l3d_screen_mesa is a descendant of l3d_screen, as is clear from the naming convention.

TIP See the source code for the sample program fltsim on the CD-ROM (taken from the
introductory companion book Linux 3D Graphics Programming) for a typical example of cre-
ating an animated world populated with polygonal objects and a user-controllable camera.

Linux Programming Tools


This book assumes you are already fairly comfortable programming on a Linux system. I won’t
spend any time explaining how to change directories, start the X Window System, set environ-
ment variables, or bring up a command shell. The following list, while not complete, mentions a
number of important Linux programming tools with which you should be familiar. You don’t have
to be an expert with all of these tools, but it is important to know of their existence, to be able to
perform basic operations with these tools, and to know how to find out more about these tools.
n man: The online manual. Use this command to find information about other commands.
n info: The online info documentation, often used for GNU projects or for very large documen-
tation. Use this command to find information about other programs not documented with
“man.”
n Emacs: The most powerful editor around. We use an Emacs module in Chapter 2 to perform
symbolic algebra. You should be comfortable editing files and executing commands in Emacs
in order to be able to carry out the symbolic algebra exercises in Chapter 2.
n gcc: The GNU C/C++ compiler.
n gdb: The GNU debugger. It is text-based, but several graphical front ends exist.
n make: The project-compiling utility. You should have a basic understanding of what
Makefiles are and how they are interpreted, since practically every non-trivial Linux project
uses “make” to direct the actual compilation of the source code files.
n binutils: A collective term for the GNU binary utilities. Use these tools to find out information
about system libraries and object files. This can help in finding out which library file contains
a particular function with which you need to link your program.
n find: A powerful file-finding program. Use “find” to look for files (such as C++ header files)
containing a particular string or matching some criteria.

Linux 3D Modeling
The introductory companion book Linux 3D Graphics Programming covered basic usage of the
Blender 3D modeling package for creating 3D models. We continue the coverage in this book.
Blender is zero-cost software; a fully functional version of Blender, with no artificially imposed
restrictions, is included on the CD-ROM.

NOTE The term “zero-cost software” means that you can use the software without having to
pay for the software. The difference between “zero-cost software” and “free software” is that
free software comes with source code, and can be freely modified. “Free software” refers to
freedom, not price. Free software by its open nature is usually zero-cost software, but the
reverse is not true. The GNU/Linux operating system is free software, as is the source code
included with this book. See http://www.gnu.org for a full definition of “free software.”
Remember, your freedom as a professional programmer is literally at stake here.

The rest of this section briefly goes over the most important features of Blender as discussed in the
introductory companion book.

Blender Interface and Commands


The Blender interface consists of a number of non-overlapping windows. Each window has a
header and a type. You may choose to display the window header either at the top or the bottom of
a window by right-clicking on the header. Change the type of the window by clicking and drag-
ging the button on the left side of the window header. You can change the type of a window at any
time. The types of windows are:
n InfoWindow: Contains global settings for the entire program, such as directory names.
n 3DWindow: Displays 3D objects and allows creation, positioning, and editing of these
objects.
n ButtonsWindow: Edits properties of the current selection in the 3D window, or of global edit-
ing or rendering parameters.
n FileWindow: Selects files for loading and saving.
n OopsWindow: Displays the internal object-oriented programming structure (OOPS) of the
currently selected scene in a tree format. Editing is not allowed in this window.
n SoundWindow: Allows integration of sounds into interactive 3D environments created with
Blender. In Blender 2.0 beta, this window type is not yet fully functional.
n IpoWindow: Displays and edits interpolators (IPOs), which are used for interpolating various
object parameters over time. Blender has a very flexible interpolation system where many
parameters can be interpolated and thus animated over time; interpolators can animate
objects, vertices, textures, lamps, or sequences.
n SequenceWindow: Combines and mixes among various animation sequences. This window
type is essentially a post-production video studio, where each animation sequence is repre-
sented as a horizontal strip with the length indicating the duration. Strips can be blended or
combined with one another, allowing, for instance, smooth fades between different parts of a
3D animation. Another exciting possibility is the combination of Blender rendered anima-
tions with existing video footage from another source, such as a video camera.
n TextWindow: Edits a text file. Used primarily for creating script files for the built-in Python
extension language.
n ImageWindow: Displays an image. Useful for previewing texture image files; also used for
assigning specific texture coordinates to specific faces of a mesh.

n ImageSelectWindow: Selects an image file for loading or saving, displaying a thumbnail pre-
view of each image file for easy differentiation.
The majority of the modeling work in Blender is done in the 3DWindow, where geometry is inter-
actively created in 3D, and the ButtonsWindow, where various properties and parameters can be
set. See Chapters 3 and 6 for further discussion of Blender and some of the other window types.

Figure 1-12: Window types in Blender and their appearance in the window header.

Figure 1-13: All Blender window types open simultaneously.

New windows in the Blender user interface are created by splitting existing windows. A bor-
der separates each window header from the window contents. Middle-click the border to split the
window. Right-click a border to join two adjacent windows.
A window can receive keyboard input only when the mouse cursor is positioned over that
window. As mentioned earlier, the actual 3D modeling work in Blender is done in the 3DWindow.
During modeling, the most important Blender commands can be executed by an appropriate sin-
gle-keystroke command within the 3DWindow. The following list summarizes the most important
single-keystroke commands in the 3DWindow:


n Ctrl+Up Arrow: Toggle maximization of window.


n 1, 3, 7 on the numeric keypad: Display front, side, or top view.
n 5 on the numeric keypad: Toggle between perspective and orthogonal viewing.
n Tab: Toggle EditMode. In EditMode, you edit and move individual vertices within one single
object; outside of EditMode, you position and move complete objects.
n Shift+middle-click+drag: Translate viewpoint.
n Ctrl+middle-click+drag: Zoom view.
n Middle-click+drag: Rotate view.
n Left-click: Position 3D cursor.
n Right-click: Select object (outside of EditMode) or vertex (in EditMode). Use Shift to extend
the selection.
n SPACE: Display the ToolBox, a menu for creating and performing some common operations
on objects.
n b: Begin border selection mode; end with left-click.
n x: Delete selection.
n g: Grab (translate) selection.
n r: Rotate selection.
n s: Scale selection.
n Shift+s: Snap (align). This operation can snap vertices or objects to the grid or cursor, or snap
the cursor to objects or the grid.
n h: Hide selection (only in EditMode). This is useful for temporarily hiding parts of a large,
complex object to allow interior parts to be more easily seen and edited.
n Shift+d: Create a separate duplicate copy of an object.
n Alt+d: Create a linked duplicate copy of an object. Changes to the geometry of one object are
mirrored automatically in the other object.
n Ctrl+p: Create a hierarchical parent relationship between selected objects.
n Alt+p: Clear the hierarchical parent relationship between the selected objects.
n 1 through 0: Switch the currently visible layer to be a layer between 1 and 10.
n Alt+1 through Alt+0: Switch the current layer between layers 11 and 20.
n Shift+1 through Shift+0: Toggle visibility of layers 1 through 10, allowing multiple selection
of layers.
n Shift+Alt+1 through Shift+Alt+0: Toggle visibility of layers 11 through 20.
n m: Move the currently selected objects to another layer.
n F1: Load file.
n F2: Save file.
n Alt+w: Export file as Videoscape, a simple ASCII format without textures.
n q: Quit Blender.

Exporting and Importing Blender Models


Models created in Blender can be imported into l3d programs. The Blender file format is very
complex and not documented for external users. For this reason, we need to export the Blender
models in a different file format in order to import them into l3d programs.
The simplest export format offered by Blender is the Videoscape format. This file format is
simple and straightforward; it contains vertices and polygons defined in terms of indices into a
common vertex list. Conveniently, this is exactly the same way that l3d stores polygonal objects.
We use a plug-in (file vidflat.h) to read the Videoscape file into l3d. The plug-in con-
structor is of particular interest. The data parameter passed to the constructor is a plug-in
environment object of type l3d_plugin_environment, so that the plug-in knows some-
thing about its calling environment. The environment object, in turn, stores another data
parameter. In the case of the Videoscape plug-in, this data parameter is a pointer to a character
string containing a list of additional parameters needed to load the Videoscape file.
The Videoscape parameter string in the environment object has the following form:
x y z A B C D E F G H I mesh.obj
The entries are interpreted as follows:
n x y z: Floating-point numbers indicating the origin of the object’s local coordinate system.
In other words, this is the location of the object.
n A B C D E F G H I: Nine floating-point numbers representing the object’s orientation as a
3×3 rotation matrix. The numbers are in row-major order (i.e., A B C is the first row of the
matrix, D E F the second row, G H I the last row). Recall the relationship between matrices
and coordinate systems (as explained in the introductory companion book Linux 3D Graph-
ics Programming). This relationship implies that the vector (A,D,G)—the first row of the
matrix—is the x axis of the object’s local coordinate system, (B,E,H) is the y axis, and (C,F,I)
is the z axis.
n mesh.obj: The filename of the Videoscape file containing the geometry of the object.
The plug-in constructor first parses the parameter string to get the position, orientation, and
required filename for the Videoscape object to be loaded. Next, it opens the mesh file and reads in
the vertices and faces. The Videoscape format stores coordinates in a right-handed system, with
the z axis going up; l3d uses a left-handed system with the y axis going up. To convert between
these two systems, we need to do two things: swap the y and the z values as we read them from the
Videoscape file, and reverse the ordering of the vertices in a face definition to preserve the
front/back orientation of the face after the vertex swap. After converting the coordinates in this
manner, the constructor then initializes each polygon by computing its center, surface normal, and
containing plane. At this point, the polygon is also assigned a random color.
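A sketch of this conversion follows; the variable names and the file-reading details here are illustrative only and are not the actual vidflat.h code:

//- Videoscape is right-handed with the z axis up; l3d is left-handed with y up.
float x, y, z;
fscanf(fp, "%f %f %f", &x, &y, &z);            //- read one vertex from the mesh file
vertex.original.set(float_to_l3d_real(x),
                    float_to_l3d_real(z),      //- swap: Videoscape z becomes l3d y
                    float_to_l3d_real(y),      //- and Videoscape y becomes l3d z
                    float_to_l3d_real(1.0));
//- When reading a face, reverse the order of the vertex indices so the polygon
//- still faces the same way after the axis swap:
for(int src = num_face_indices - 1; src >= 0; src--) {
  (*poly->ivertices)[poly->ivertices->next_index()].ivertex = face_indices[src];
}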
After reading in all vertices and faces, the constructor then defines two transformation matri-
ces, one for rotating the object according to the rotation matrix specified within the parameter
string, and a second matrix for translating the object to the specified position. These matrices are
concatenated together and stored in the object as a composite transformation.

TIP See sample program vidflat on the CD-ROM for an example of using the
Videoscape plug-in.

Summary
This chapter flew through the basics of Linux 3D graphics programming, as covered in depth in
the introductory companion book Linux 3D Graphics Programming. We looked at:
n 2D graphics, theory, and practice under Linux and X
n Definition of 3D graphics
n 3D coordinate systems, vectors, and matrices
n Perspective projection
n Storing points, polygons, objects, and worlds in computer memory
n l3d library classes in C++ for implementing event-driven, object-oriented 3D graphics
n 3D modeling, export, and import with Blender
In a nutshell, that is the prerequisite knowledge for effectively reading the coming chapters. Don’t
worry if you are rusty on one or two concepts—the important thing to ensure, before proceeding
with the rest of the book, is that you are fairly comfortable with all or almost all of the topics dis-
cussed in this chapter.
With the basics of polygonal 3D graphics now firmly in mind, we can proceed to the next
chapter, which discusses rendering and animation techniques for 3D polygons.

Chapter 2

Rendering and
Animation Techniques
for 3D Polygons

This chapter examines rendering and animation techniques for 3D polygons. In Chapter 1, we
saw that the l3d library classes provide facilities for storing, grouping, and manipulating 3D
polygons, objects, and worlds. We also saw that l3d classes can rasterize flat-shaded poly-
gons by stepping along the polygon’s edges from top to bottom, and drawing horizontal spans of
pixels between the left and right edges. In this chapter, we extend these basic polygonal tech-
niques to achieve more interesting and realistic visual effects.
We cover the following topics in this chapter:
n 3D morphing (allowing a 3D object to smoothly change shape into another)
n Adding surface detail through texture mapping
n Using the symbolic algebra package Calc to derive the texture mapping equations
n Mathematical models for computing light intensities
n Rendering computed light intensities with light mapping
n Shadows
This chapter presents the following sample programs: morph3d illustrating vertex animation and
morphing in 3D, textest illustrating texture mapping, and lightmap illustrating light
mapping.

Vertex Animation and 3D Morphing


The program morph2d from the companion book Linux 3D Graphics Programming already
explained and illustrated the concept of morphing in 2D. As we used the term earlier, morphing
referred to the change in shape of a polygon by interpolating the vertices in its vertex list between
certain key frames or morph targets.
Here, we wish to extend the concept of morphing to 3D, and also to groups of polygons
instead of single polygons. As it turns out, this is extremely easy. First of all, the vertex

interpolator class we used in 2D, l3d_vertex_interpolator, can be reused for the 3D
morphing case. Second, all polygons in a 3D object share a common vertex list; therefore, chang-
ing the vertex list in the object automatically redefines the geometry of all polygons in the object.

Figure 2-1: Morphing in 3D is a simple extension of the same concept in 2D. We need only interpolate the vertex list, shared by all polygons, from one key frame to another. Here, we have a cube defined by vertices c1-c8, and a pyramid defined by vertices p1-p8. Each vertex c1-c8 is morphed to the corresponding vertex p1-p8, which morphs the cube into the pyramid.

Note that in order to be completely correct, we would have to recompute the surface normal
vector for all affected polygons any time we made any change to the vertex list. If we change one
vertex in the vertex list, we would need to recompute the surface normals for all polygons using
that vertex. If we change all vertices in the vertex list, as is usually the case with morphing, then
we would need to recompute the surface normals for all polygons in the object. Also, any other
geometry-specific information stored with the polygons or the object would need to be recom-
puted whenever we change the vertex list, because changing the vertex list is equivalent to
changing the underlying geometry of the object.
In the following sample program morph3d, however, we won’t be recomputing the normals,
but rather just changing the vertex list. This leads to simpler and faster code, but is actually incor-
rect since the surface normals are no longer normal to the morphed geometry. Since we don’t use
the surface normal in this sample program, the fact that the post-morphing normals are incorrect is
actually not noticeable.
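If an application did need correct normals after each morph step, one straightforward approach would be to recompute them from the freshly interpolated vertex list, using the same compute_center and compute_sfcnormal calls that the pyramid constructor uses. A sketch, which could be placed at the end of update:

//- Recompute geometry-derived data after the vertex list has changed.
for(int pnum=0; pnum<polygons.num_items; pnum++) {
  polygons[pnum]->compute_center();
  polygons[pnum]->compute_sfcnormal();
}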

Figure 2-2: Surface normals or other additional 3D geometrical attributes are no longer accurate after morphing.

Sample Program: morph3d


Sample program morph3d displays a number of morphing pyramid objects in an interactive
scene. Each pyramid slowly changes shape from a compact pyramid to a more elongated shape,
then back again. You can interactively fly though the scene with the standard keyboard controls
used in l3d library:

j: Rotate left (rotates around the VUP axis)


l: Rotate right (rotates around the VUP axis)
i: Thrust forward (translates along the VFW axis)
k: Thrust backward (translates along the VFW axis)
J: Roll left (rotates around the VFW axis)
L: Roll right (rotates around the VFW axis)
I: Pitch forward (rotates around the VRI axis)
K: Pitch backward (rotates around the VRI axis)
s: Thrust left (translates along the VRI axis)
f: Thrust right (translates along the VRI axis)
e: Thrust up (translates along the VUP axis)
d: Thrust down (translates along the VUP axis)
v: Cycle through to the next thrust level

NOTE The VFW axis is the forward axis along which the camera is looking. The VUP axis is
the axis pointing straight up from the camera. The VRI axis is the axis pointing straight to the
right of the camera. Taken as a set, the vectors VRI, VUP, and VFW form a left-handed camera
coordinate system.

Figure 2-3: Output from program morph3d. Each pyramid slowly changes shape.

Listing 2-1: main.cc, the main file for program morph3d


#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <math.h>

#include "../lib/geom/object/object3d.h"
#include "../lib/geom/polygon/p3_flat.h"
#include "../lib/tool_2d/screen.h"
#include "../lib/tool_os/dispatch.h"
#include "../lib/raster/rast3.h"
#include "../lib/tool_2d/scrinfo.h"
#include "../lib/geom/world/world.h"

#include "../lib/system/fact0_2.h"
#include "../lib/pipeline/pi_wor.h"
#include "../lib/dynamics/plugins/plugenv.h"
#include "shapes.h"

#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <math.h>
#include <stdarg.h>

class my_world:public l3d_world {


public:
my_world(void);
};

my_world::my_world(void)
: l3d_world(640,400)
{
l3d_screen_info *si = screen->sinfo;

camera->VRP.set(0,0,-50,0);
camera->near_z = float_to_l3d_real(5.5);
camera->far_z = int_to_l3d_real(500);

int i,j,k,onum=0;

i=10; j=0; k=20;


k=0;

//- create some pyramid objects


for(i=1; i<100; i+=20) {
objects[onum=objects.next_index()] = new pyramid();
l3d_screen_info_indexed *si_idx;
l3d_screen_info_rgb *si_rgb;

l3d_polygon_3d_flatshaded *p;
for(int pnum=0; pnum<objects[onum]->polygons.num_items; pnum++) {
p = dynamic_cast<l3d_polygon_3d_flatshaded *>(objects[onum]->polygons[pnum]);
if(p) {
p->final_color = si->ext_to_native
(rand()%si->ext_max_red,
rand()%si->ext_max_green,
rand()%si->ext_max_blue);
}
}

if (objects[onum]==NULL) exit(1);
objects[onum]->modeling_xforms[1].set
( int_to_l3d_real(1), int_to_l3d_real(0), int_to_l3d_real(0), int_to_l3d_real(i),
int_to_l3d_real(0), int_to_l3d_real(1), int_to_l3d_real(0), int_to_l3d_real(1),
int_to_l3d_real(0), int_to_l3d_real(0), int_to_l3d_real(1), int_to_l3d_real(1),
int_to_l3d_real(0), int_to_l3d_real(0), int_to_l3d_real(0), int_to_l3d_real(1) );
objects[onum]->modeling_xform =
objects[onum]->modeling_xforms[1] |
objects[onum]->modeling_xforms[0] ;
}

screen->refresh_palette();
}

main() {
l3d_dispatcher *d;
l3d_pipeline_world *p;
my_world *w;

factory_manager_v_0_2.choose_factories();
d = factory_manager_v_0_2.dispatcher_factory->create();

w = new my_world();
p = new l3d_pipeline_world(w);
d->pipeline = p;
d->event_source = w->screen;

d->start();

delete d;
delete p;
delete w;
}

Listing 2-2: shapes.h, the header file for the 3D objects for program morph3d
#include "../lib/geom/object/object3d.h"
#include "geom/vertex/verint.h"

class pyramid:public l3d_object {


static const int num_keyframes = 2;
int keyframe_no;
l3d_two_part_list<l3d_coordinate> *keyframes[2];
l3d_vertex_interpolator interp;
bool currently_interpolating;

public:
pyramid(void);
virtual ~pyramid(void);
int update(void);
};

Listing 2-3: shapes.cc, the implementation file for the 3D objects for program morph3d
#include "shapes.h"

#include <stdlib.h>
#include <string.h>

pyramid::pyramid(void) :
l3d_object(4)
{
keyframes[0] = new l3d_two_part_list<l3d_coordinate>(4);
(*keyframes[0])[0].original.set
(float_to_l3d_real(0.),
float_to_l3d_real(0.),
float_to_l3d_real(0.),
float_to_l3d_real(1.));
(*keyframes[0])[1].original.set
(float_to_l3d_real(10.0),
float_to_l3d_real(0.),
float_to_l3d_real(0.),
float_to_l3d_real(1.));
(*keyframes[0])[2].original.set
(float_to_l3d_real(0.),
float_to_l3d_real(10.),

float_to_l3d_real(0.),
float_to_l3d_real(1.));
(*keyframes[0])[3].original.set
(float_to_l3d_real(0.),
float_to_l3d_real(0.),
float_to_l3d_real(10.),
float_to_l3d_real(1.));

keyframes[1] = new l3d_two_part_list<l3d_coordinate>(4);


(*keyframes[1])[0].original.set
(float_to_l3d_real(5.),
float_to_l3d_real(5.),
float_to_l3d_real(5.),
float_to_l3d_real(1.));
(*keyframes[1])[1].original.set
(float_to_l3d_real(-10.),
float_to_l3d_real(10.),
float_to_l3d_real(0.),
float_to_l3d_real(1.));
(*keyframes[1])[2].original.set
(float_to_l3d_real(10.),
float_to_l3d_real(0.),
float_to_l3d_real(0.),
float_to_l3d_real(1.));
(*keyframes[1])[3].original.set
(float_to_l3d_real(10.0),
float_to_l3d_real(15.0),
float_to_l3d_real(0.),
float_to_l3d_real(1.));

int pi;
pi = polygons.next_index();
polygons[pi] = new l3d_polygon_3d_flatshaded(3);
printf("before: %p", polygons[pi]->vlist);
polygons[pi]->vlist = &vertices;
printf("after: %p", polygons[pi]->vlist);
(*polygons[pi]->ivertices)[polygons[pi]->ivertices->next_index()].ivertex=0;
(*polygons[pi]->ivertices)[polygons[pi]->ivertices->next_index()].ivertex=1;
(*polygons[pi]->ivertices)[polygons[pi]->ivertices->next_index()].ivertex=3;
polygons[pi]->compute_center();polygons[pi]->compute_sfcnormal();

pi = polygons.next_index();
polygons[pi] = new l3d_polygon_3d_flatshaded(3);
polygons[pi]->vlist = &vertices;
(*polygons[pi]->ivertices)[polygons[pi]->ivertices->next_index()].ivertex=2;
(*polygons[pi]->ivertices)[polygons[pi]->ivertices->next_index()].ivertex=3;
(*polygons[pi]->ivertices)[polygons[pi]->ivertices->next_index()].ivertex=1;
polygons[pi]->compute_center();polygons[pi]->compute_sfcnormal();

pi = polygons.next_index();
polygons[pi] = new l3d_polygon_3d_flatshaded(3);
polygons[pi]->vlist = &vertices;
(*polygons[pi]->ivertices)[polygons[pi]->ivertices->next_index()].ivertex=0;
(*polygons[pi]->ivertices)[polygons[pi]->ivertices->next_index()].ivertex=2;
(*polygons[pi]->ivertices)[polygons[pi]->ivertices->next_index()].ivertex=1;
polygons[pi]->compute_center();polygons[pi]->compute_sfcnormal();

pi = polygons.next_index();
polygons[pi] = new l3d_polygon_3d_flatshaded(3);
polygons[pi]->vlist = &vertices;


(*polygons[pi]->ivertices)[polygons[pi]->ivertices->next_index()].ivertex=3;
(*polygons[pi]->ivertices)[polygons[pi]->ivertices->next_index()].ivertex=2;
(*polygons[pi]->ivertices)[polygons[pi]->ivertices->next_index()].ivertex=0;
polygons[pi]->compute_center();polygons[pi]->compute_sfcnormal();

num_xforms = 2;
modeling_xforms[0] = l3d_mat_rotx(0);
modeling_xforms[1].set
( float_to_l3d_real(1.), float_to_l3d_real(0.), float_to_l3d_real(0.), float_to_l3d_real(0.),
float_to_l3d_real(0.), float_to_l3d_real(1.), float_to_l3d_real(0.), float_to_l3d_real(0.),
float_to_l3d_real(0.), float_to_l3d_real(0.), float_to_l3d_real(1.), float_to_l3d_real(0.),
float_to_l3d_real(0.), float_to_l3d_real(0.), float_to_l3d_real(0.), float_to_l3d_real(1.) );

modeling_xform=
modeling_xforms[1] | modeling_xforms[0];

currently_interpolating = false;
keyframe_no=0;
vertices = keyframes[keyframe_no];
}

pyramid::~pyramid(void) {
for(register int i=0; i<polygons.num_items; i++) {delete polygons[i]; }
for(int i=0; i<num_keyframes; i++) {
delete keyframes[i];
}
}

int pyramid::update(void) {
if(currently_interpolating) {
vertices = interp.list;
if(! interp.step()) {
currently_interpolating = false;
}
}else {
keyframe_no++;
if(keyframe_no >= num_keyframes) {keyframe_no = 0; }
int next_keyframe = keyframe_no + 1;
if(next_keyframe >= num_keyframes) {next_keyframe = 0; }

vertices = keyframes[keyframe_no];
interp.start( *keyframes[keyframe_no], *keyframes[next_keyframe],
rand()%100 + 50, 3);

currently_interpolating = true;
}
  return 0;
}

The program morph3d uses the l3d_world and l3d_object classes to store the virtual
world. The file main.cc is very simple: it declares a world subclass which creates some pyra-
mid objects.
The pyramid class is a morphing 3D pyramid. It changes its shape between a compact and
an elongated pyramid. To accomplish this, we added the following lines to the pyramid class:
class pyramid:public l3d_object {
static const int num_keyframes = 2;
int keyframe_no;
l3d_two_part_list<l3d_coordinate> *keyframes[2];
l3d_vertex_interpolator interp;
bool currently_interpolating;

The keyframes variable holds two separate, different vertex lists. Each list is one complete set
of vertex positions defining a shape for the 3D object. We create and fill these lists within the
pyramid constructor.
keyframes[0] = new l3d_two_part_list<l3d_coordinate>(4);
(*keyframes[0])[0].original.set
(float_to_l3d_real(0.),
float_to_l3d_real(0.),
float_to_l3d_real(0.),
float_to_l3d_real(1.));
(*keyframes[0])[1].original.set
(float_to_l3d_real(10.0),
float_to_l3d_real(0.),
float_to_l3d_real(0.),
float_to_l3d_real(1.));
(*keyframes[0])[2].original.set
(float_to_l3d_real(0.),
float_to_l3d_real(10.),
float_to_l3d_real(0.),
float_to_l3d_real(1.));
(*keyframes[0])[3].original.set
(float_to_l3d_real(0.),
float_to_l3d_real(0.),
float_to_l3d_real(10.),
float_to_l3d_real(1.));

keyframes[1] = new l3d_two_part_list<l3d_coordinate>(4);


(*keyframes[1])[0].original.set
(float_to_l3d_real(5.),
float_to_l3d_real(5.),
float_to_l3d_real(5.),
float_to_l3d_real(1.));
(*keyframes[1])[1].original.set
(float_to_l3d_real(-10.),
float_to_l3d_real(10.),
float_to_l3d_real(0.),
float_to_l3d_real(1.));
(*keyframes[1])[2].original.set
(float_to_l3d_real(10.),
float_to_l3d_real(0.),
float_to_l3d_real(0.),
float_to_l3d_real(1.));
(*keyframes[1])[3].original.set
(float_to_l3d_real(10.0),
float_to_l3d_real(15.0),
float_to_l3d_real(0.),
float_to_l3d_real(1.));

Now, given two different shapes of the object stored in two separate vertex lists, we use the class
l3d_vertex_interpolator, exactly as we did in the earlier program morph2d, to inter-
polate the positions of the vertices between the two vertex lists. This takes place in the method
pyramid::update. The usage of the vertex interpolator is exactly the same as before: we
specify a starting vertex list, an ending vertex list, a total count of interpolation steps, and a dimen-
sion (2D or 3D) determining which elements of the vertex are interpolated. The vertex interpolator
has its own internal, separate vertex list, which contains the interpolated “in-between” positions
between the specified starting vertex list and ending vertex list. We interpolate between vertex
lists by calling interp.step, and assign the interpolated vertex list to the pyramid’s vertex list.

if(currently_interpolating) {
vertices = interp.list;
if(! interp.step()) {
currently_interpolating = false;
}
Once the interpolation has finished, after the specified number of steps have been taken and the
positions of the vertices in the vertex list have reached the positions in the target vertex list, we
then proceed to interpolate between the next two pairs of vertex lists in the array keyframes. In
this program, since we only have two vertex lists, this simply has the effect of morphing back from
the second to the first vertex list again. Notice that we pass to the vertex interpolator a random
parameter for the number of interpolation steps, so that each pyramid always morphs at a different
speed.
}else {
keyframe_no++;
if(keyframe_no >= num_keyframes) {keyframe_no = 0; }
int next_keyframe = keyframe_no + 1;
if(next_keyframe >= num_keyframes) {next_keyframe = 0; }

vertices = keyframes[keyframe_no];
interp.start( *keyframes[keyframe_no], *keyframes[next_keyframe],
rand()%100 + 50, 3);

currently_interpolating = true;
}

That’s all there is to morphing in 3D. It’s nothing more than a step-by-step interpolation of all ver-
tex positions from a starting list to an ending list, just as with the 2D case.

NOTE When morphing more complicated shapes, you will want to use a 3D modeling pro-
gram (also called a 3D modeler) to create the 3D objects. Chapter 3 covers using Blender to
do exactly this. To create models which morph into one another, it is vital that both models
have the same number of vertices and faces in the same order. The best way to ensure this is
simply to start with the first model and modify it within the 3D modeler to become the second,
target model. If you create the target model from scratch, it is almost certain that the number
or order of the vertices and faces will be different from those of the first model, which would
make morphing impossible or visually disturbing. See Chapter 3 for more details.

NOTE It is actually not “impossible” to morph between two objects with different numbers
of vertices or faces, but doing so is much more complicated than the case described above
(where a 1:1 correspondence does exist between the vertices of the two objects). You would
have to define some function to automatically map vertices to “reasonable” in-between posi-
tions, based on the shapes of the source and target objects, a topic which goes beyond the
scope of this book.

Lighting
Polygons, as we have treated them until now, have all been single-colored. We’ll now look at some
methods of combining color and light so that our polygons look more realistic.
For our purposes, we model light as a single, integer intensity value which changes the bright-
ness of a particular color. Therefore, we combine light and color to arrive at a final lighted color.
This is accomplished by means of lighting tables. In the lighting table, we simply look up the color
along one axis, and the light intensity along the other axis, and find an entry in the table that tells us
the final color with the specified intensity. For colors specified directly with red, green, and blue
values (as opposed to through a palette), we split the lighting tables into three separate red, green,
and blue tables, solely for the reason of saving memory.

Figure 2-4: Red, green, and blue lighting tables in use.

We define intensities as going from 0 to a certain predefined maximum integer light level,
MAX_LIGHT_LEVELS. A light intensity of 0 combined with any color yields black; a light inten-
sity of MAX_LIGHT_LEVELS combined with a color yields the original color at full intensity.
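A sketch of what such a table lookup might look like for an RGB color follows; the table names and dimensions here are illustrative assumptions, not the actual l3d declarations:

//- Illustration: one table per channel, indexed by channel value and light intensity.
unsigned char light_table_red  [256][MAX_LIGHT_LEVELS+1];
unsigned char light_table_green[256][MAX_LIGHT_LEVELS+1];
unsigned char light_table_blue [256][MAX_LIGHT_LEVELS+1];

void light_rgb(unsigned char r, unsigned char g, unsigned char b, int intensity,
               unsigned char &lit_r, unsigned char &lit_g, unsigned char &lit_b)
{
  //- intensity 0 yields black; intensity MAX_LIGHT_LEVELS yields the original color
  lit_r = light_table_red  [r][intensity];
  lit_g = light_table_green[g][intensity];
  lit_b = light_table_blue [b][intensity];
}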

The light model just described assumes that the light itself has a color of white. It is also possi-
ble to model colored light sources, which not only change the intensity of the color in question, but
also change the color itself by introducing new color components contained within the light itself.
For instance, shining a red light on a white surface yields a final color of red. This can be imple-
mented most easily using an RGB color model. The color of the light is represented as a red
intensity, a green intensity, and a blue intensity. The color of the surface is represented as a red
color, a green color, and a blue color. The color of the final, lighted surface is the red color lighted
by the red intensity, combined with the green color lighted by the green intensity, combined with
the blue color lighted by the blue intensity.
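For example, with colors and intensities on a 0-to-1 scale, a white surface (1.0, 1.0, 1.0) lit by a pure red light (1.0, 0.0, 0.0) yields (1.0·1.0, 1.0·0.0, 1.0·0.0) = (1.0, 0.0, 0.0), which is exactly the red result described above.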
We can view the lighting problem as two separate issues. The first issue is computing the light
intensity itself. The second issue is deciding where to compute and draw the light intensities (once
per polygon, once per vertex, once per pixel, and so forth). We now cover each of these topics sep-
arately, starting with the computation of light intensities.

Mathematical Models for Computing Light


We are physically able to see objects because they emit or reflect light energy. There are various
mathematical models we can use to model the amount of light emitted by or reflected from a
surface.
The lighting models presented in the next few sections attempt to approximate the effects of
light by computing the amount of light we see from a surface. Note that these lighting models do
not necessarily represent a realistic modeling of the physical phenomena underlying the emission
and reflection of light, but nevertheless provide adequate results in practice and are widely used in
interactive 3D graphics.
The lighting equations presented below all assume that light intensities are real values which
go from 0 to 1. We can then multiply these with a scaling factor to change them into integer values
to use as lookup indices into the light table. In the l3d library, the scaling factor is called
MAX_LIGHT_LEVELS and has a value of 255.

Self Lighting
One simple lighting model is to assume that each object emits its own light energy and reflects no
light energy. In this case, each object’s light intensity is determined solely by its own inherent
brightness; it does not depend on any external factors. We can express the equation for self light-
ing as follows:
Equation 2-1
$I_{self} = I_o$

In this equation, Iself represents the computed intensity based on the self lighting model, and Io
represents the inherent brightness of the object itself.
While self intensity is an easy lighting model to represent, it is more interesting to model the
reflection of light off of surfaces, since reflected light is the majority of light we see in a typical
real-world scene.

Ambient Lighting
The term ambient light refers to light which has been scattered and reflected so many times, off of
so many objects, that its original direction cannot be determined. Ambient light, therefore, is
nondirectional light which equally reflects off of and equally illuminates all objects. It also pro-
vides a convenient way within a scene of modeling a global minimum light level whose exact
source is unimportant. We can express ambient lighting as a simple global constant for the entire
scene:

Equation 2-2
$I = I_a$

We could also multiply the ambient light intensity by a scaling factor, ranging from 0 to 1, repre-
senting the reflectiveness of the surface to which the ambient light is applied. This can have the
effect of reducing the effect of the ambient light if the surface is less reflective. The ambient scal-
ing factor would be stored with each object or with each polygon.

Equation 2-3
$I = M_a I_a$

Such parameters particular to a surface which affect the incident light are called material
parameters.

Diffuse Reflection
Ambient light is still not a very satisfying light model because it affects all surfaces identically. We
may obtain a better result by modeling the diffuse reflection of light off of a surface. Matte sur-
faces, also called diffuse surfaces, scatter light rays with equal probability in all directions,
resulting in a diffuse illumination of the surface.

Figure 2-5: Diffuse reflection of light rays.

Mathematically, let us assume that we have a point light source and a surface for which we
wish to compute the diffuse illumination. A point light source is an infinitely small emitter of light
energy located at a single point some finite distance from the surface, which casts rays of light
energy (consisting of photons) equally in all directions. To compute the diffuse illumination of the
surface from the point light source, we must compute two quantities. The first is the amount of

light striking the surface. The second is the amount of light that the viewer sees reflected off of the
surface.

Figure 2-6: A point light source, a surface, and the viewer. We are interested in two quantities: how much the light illuminates the surface, and how much of the light the viewer sees reflected off of the surface.

We compute the amount of light striking the surface by considering an infinitesimally small
beam of light energy striking a surface. For a constant light intensity, this light beam imparts a cer-
tain fixed amount of light energy to the surface it strikes. It turns out that the angle that the light
beam forms with the surface normal vector is important. If the light beam strikes the surface
directly head-on, then the resulting light intensity on the surface is at its maximum. If the light
beam strikes the surface at an angle, the light intensity on the surface decreases. This is because
the amount of surface area covered by the light beam is greater when it strikes the surface at an
angle, and thus the fixed quantity of light energy from the light beam is divided across a larger sur-
face area, yielding an overall lesser intensity per unit area of surface.

Figure 2-7: A light beam covers the least surface area when striking the surface directly along its normal vector; as the angle to the normal vector increases, the beam must cover a larger surface area, resulting in decreased overall intensity.

Mathematically, we can express this relationship as follows.


Equation 2-4
$I = I_p \cos\theta$

Ip is the intensity of the point light source, ranging from the darkest value of 0 to the brightest
value of 1. The angle theta is the angle between the incident light ray and the surface normal. If the
angle is zero, then the light strikes the surface head-on, the cosine of the angle is 1, and the inten-
sity is maximum. As the angle approaches 90 or –90 degrees, the cosine approaches zero, causing
the light intensity to decrease.
We can also formulate the diffuse lighting equation as follows.
Equation 2-5
$I = I_p (\vec{N} \cdot \vec{L})$

Here, N is the surface normal vector, and L is the vector from the surface to the light source. Note
that both vectors must be normalized. Recall that the dot product of these two normalized vectors
is exactly the same as computing the cosine of the angle between them.
We can also multiply the computed light by a diffuse scaling factor Md, ranging from 0 to 1
depending on the diffuse reflectiveness of the material, to reduce the effect of the light for less
reflective materials.
Equation 2-6: I = Md Ip (N · L)

Finally, we can also multiply the light intensity by an attenuation factor fatt between 0 and 1, so
that the computed light decreases with its distance from the light source, in accordance with the
physical reality that the amount of light energy reaching a surface decreases with distance.
Equation 2-7: I = fatt Md Ip (N · L)

It is suggested in Computer Graphics: Principles and Practice that this distance-based falloff fac-
tor be computed as follows [FOLE92].

Equation 2-8: fatt = min( 1 / (c1 + c2 dL + c3 dL²), 1 )

Term dL represents the distance between the light source and the surface being illuminated, and
the terms c1, c2, and c3 are user-defined constants for the light source. The attenuation factor is
therefore inversely proportional to the distance, but is never greater than one (otherwise it would
increase, not decrease, the effect of the light). The constant c1 is chosen to prevent the denomina-
tor from being too small when dL is very small, and the constants c2 and c3 allow us to control
whether the light falls off proportional to the inverse square of the distance, or to the inverse dis-
tance, or to a combination of both. Physically, for a single point light source, the energy falls off as
the inverse square of the distance, but typical objects in real-world scenes receive light from more
than just a single point light source. This means that the inverse square energy falloff is much too
sudden, which is the reason that we introduce the c1 and c2dL terms in the denominator, allowing
for a more gradual light falloff.
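To make the diffuse computation concrete, the following sketch evaluates Equations 2-7 and 2-8 for a single point light. The vec3 type and all of the names here are purely illustrative; they are not part of the l3d library.

//- Sketch only: a minimal vector type and dot product for illustration.
struct vec3 { float x, y, z; };

inline float dot(const vec3 &a, const vec3 &b) {
  return a.x*b.x + a.y*b.y + a.z*b.z;
}

//- N: unit surface normal; L: unit vector from the surface point to the light;
//- Ip: light intensity (0..1); Md: diffuse material factor (0..1);
//- dL: distance to the light; c1, c2, c3: user-defined attenuation constants.
float diffuse_intensity(const vec3 &N, const vec3 &L, float Ip, float Md,
                        float dL, float c1, float c2, float c3)
{
  float n_dot_l = dot(N, L);
  if (n_dot_l < 0.0f) n_dot_l = 0.0f;   //- light behind the surface: no contribution

  float f_att = 1.0f / (c1 + c2*dL + c3*dL*dL);
  if (f_att > 1.0f) f_att = 1.0f;       //- attenuation never amplifies the light (Equation 2-8)

  return f_att * Md * Ip * n_dot_l;     //- Equation 2-7
}
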
Now, given the computed light intensity that reaches a surface from a point light source, the
second question we would like to answer is how much of this light a viewer sees from a particular
viewing position relative to the surface. This depends on how much light is reflected from the
surface.
As it turns out, the viewing position ends up having no effect on the final light intensity seen
off of a diffuse surface. This means that the diffuse light intensity as computed above represents
the amount of light seen reflected off of the surface, from any viewing angle or position. Let us
examine why this is so.

TIP Diffuse illumination depends only on the positions and orientations of the light source
and the surface. It does not depend on the position of the camera.

As we mentioned earlier, the amount of light the viewer sees from a surface depends on the
amount of light reflected by the surface. Matte surfaces, also referred to as Lambertian surfaces,
obey a law of physics called Lambert’s law. Lambert’s law states that, ideally, diffuse surfaces
reflect light, per unit area, directly proportional to the cosine between the surface normal vector
and the vector going from the surface to the viewer (call this last vector the view vector). This
means that if we consider a particular fixed amount of the surface, the greater the angle between
the normal vector and the view vector, the less reflected light we see. This would at first seem to
imply that the light intensity should decrease as the angle increases. But this is not the case,
because as the angle between the normal vector and the view vector increases, we also see more of
the surface—in particular, we see proportionally more of the surface according to the reciprocal of
the cosine of the angle between the normal and the view vector. The increase in seen surface area
exactly cancels out the decrease in light intensity per unit area. In other words, as the angle
increases, the reflected light energy per unit area decreases, but the amount of area we see
increases by exactly the same proportion, meaning that the amount of reflected light stays the
same regardless of the viewer’s position, for Lambertian surfaces.

Figure 2-8: As the angle between the viewer and the diffuse surface increases (from viewpoint v1 to v2), the reflected light decreases according to the cosine of the angle (Lambert’s law). But geometrically, we simultaneously see more of the surface, according to the reciprocal of the cosine of the angle (dS2 > dS1). The two factors cancel, yielding a viewpoint-independent light intensity.

A few comments about the fundamental assumptions of the preceding light model are in
order. The light model we just described works by computing the diffuse illumination from all
light sources off of all surfaces. This implies a strict separation between light sources and the sur-
faces being illuminated. This model is not complete because it does not account for the
inter-object reflections that take place in the real world. An object reflecting light then also
becomes a light source, further illuminating other objects, which again reflect light. One way to
simulate the effect of this complex inter-object illumination is simply to add an ambient term to
the overall lighting; since the inter-object light reflection is so complex, the resulting illumination
can be considered as without direction and ambient. A more accurate, though much more
computationally intensive, means of accounting for inter-object illumination is radiosity, which is
covered below.

Another assumption we made was that our light sources were point light sources, which are
located at a particular point in space and radiate light energy equally in all directions. Alternative
models of light sources are also possible. Sunlight illuminating objects on Earth, for instance, is
not really accurately modeled by a point light source, because the distance of the sun from the
earth is practically infinite in comparison to the dimensions of the objects being illuminated.
Because of the great distance separating the sun and the earth, all the light rays coming from the
sun are practically parallel to one another. This means that in the lighting equations presented
above, the vector L from the surface to the light source always points in the same direction for all
polygons, in the direction of the parallel rays of sunlight.

Figure 2-9: For a point light


source, the vector L points from
the surface to the light source;
each different surface orientation
yields a different vector L (left).
For sunlight, which can be
thought of as an infinitely distant
point light source, the vector L is
the same for all surface
orientations (right).

Finally, it is also possible to model spotlights, which have only an exactly defined area of
influence, such as a cone. Outside of the area of influence the light does not affect the surface’s
intensity at all. This allows for creating illumination effects on the surface with sharply defined
edges, much as a real-world spotlight only illuminates within a certain elliptical area on the
ground.
Figure 2-10: Spotlights clearly delineate
the area of influence of the light. The left
surface is illuminated with a point light;
the right, with a spotlight.


Specular Reflection
Specular reflection refers to the bright highlights which appear when a shiny surface is illumi-
nated. Such effects are not accounted for by the preceding lighting equations.

Figure 2-11: The left sphere has been rendered with a specular highlight; the right, only with diffuse lighting.

There are a number of ways of computing the specular reflection. One which we examine here
is the so-called Phong illumination model. Do not confuse Phong illumination, which deals with
the computation of specular light intensities, with Phong shading, which refers to a rendering
method (covered in the next section) for drawing already computed light intensities.
Phong illumination assumes the surface is shiny and reflective. Therefore, with Phong illumi-
nation, we compute a reflection vector, which is the reflection of the light vector L (going from the
surface to the light source) around the surface normal vector N.

Figure 2-12: The reflection vector R for Phong illumination.

The idea is that as the orientation of this reflection vector R comes nearer to the orientation of
the view vector V (going from the surface to the viewer), the amount of reflected light should
increase exponentially, causing a sudden, bright highlight to appear.
Mathematically, we can express Phong illumination as follows. Note that all vector quantities
are normalized.
Equation 2-9: Ispec = Ip (cos alpha)^n = Ip (R · V)^n

Thus, the specular intensity increases exponentially as the angle alpha between the reflection vec-
tor and the view vector decreases. The exponential cosine term has this effect; if the vectors lie on
top of one another, the cosine is one, and the intensity is at a maximum. As the cosine decreases,
the intensity falls off rapidly. The cosine of the angle can be computed by dotting the reflection
vector with the view vector. The exponential factor n is a parameter describing the shininess of the
surface; the greater the value of n, the shinier the surface appears because of an exponentially
faster increase and decrease of light intensity around the highlight.

The only issue which remains is how to find the reflection vector. We can calculate the reflec-
tion vector by the following equation.

Equation 2-10: R = 2N(N · L) - L

The easiest way to derive this is to use the rotation matrix for rotation about an arbitrary axis. Here,
we wish to rotate the vector L by 180 degrees about vector N. Recall the rotation matrix for rota-
tion about an arbitrary axis:
Equation 2-11

The elements of the matrix are the components of the vector about which the rotation is to take
place, in this case N. Replacing all u entries with corresponding N entries, multiplying the matrix
with the vector L to be rotated, and realizing that the cosine of 180 degrees is –1 and the sine of 180
degrees is 0, gives the following expression:

Equation 2-12

Simplifying further yields:

Equation 2-13

Performing the matrix multiplication gives us:


Equation 2-14

Notice that each row contains a subtraction with an isolated L term; thus, we can pull out
the entire vector [L1,L2,L3,0]T and express this as a vector subtraction. We can also isolate the
vector [N1,N2,N3,0]T. Rearranging the equation to be the difference of these two vectors yields
the following expression:
Equation 2-15

which simplifies to:


Equation 2-16: R = 2N(N · L) - L

For a geometrical derivation of the reflection vector, see Computer Graphics: Principles and
Practice [FOLE92].
Note that unlike diffuse reflection, the effect of which was independent of the viewer position,
specular lighting depends on the viewer position, because the position of the highlight changes as
the viewer looks at the object from a different angle or position.
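A possible implementation of the Phong specular term, reusing the vec3 type and dot helper from the diffuse sketch above, might look as follows. The function and parameter names are again assumptions for illustration, not l3d code.

#include <cmath>

//- N: unit normal; L: unit vector to the light; V: unit vector to the viewer;
//- Ip: light intensity; n: shininess exponent.
float specular_intensity(const vec3 &N, const vec3 &L, const vec3 &V,
                         float Ip, float n)
{
  float n_dot_l = dot(N, L);

  //- R = 2 N (N . L) - L (Equation 2-10)
  vec3 R;
  R.x = 2.0f*N.x*n_dot_l - L.x;
  R.y = 2.0f*N.y*n_dot_l - L.y;
  R.z = 2.0f*N.z*n_dot_l - L.z;

  float r_dot_v = dot(R, V);
  if (r_dot_v < 0.0f) return 0.0f;      //- highlight faces away from the viewer

  return Ip * std::pow(r_dot_v, n);     //- Equation 2-9
}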

NOTE Specular lighting, when it is unwanted, can cause some odd lighting artifacts. I
encountered this once while programming a virtual reality application for the CAVE environ-
ment (an immersive 3D environment which surrounds the viewer on all sides with four huge
screens and provides for true stereoscopic perception and head tracking through special
shutter glasses). One odd problem that cropped up was with the floor of a room we had mod-
eled. The surface of the floor appeared to change intensity suddenly when the viewer was
located at particular positions within the world. Changing the position, number, and type of
light sources did not eliminate the problem. It turned out that the problem was the 3D model
exported by the 3D artist included a specular lighting component, which then caused the
undesired camera-dependent specular “flashes” of light on the floor. These were particularly
problematic because the floor, which was not intended to be specular, was not highly tessel-
lated, meaning that the specular light computation was applied to very large triangles. This
caused very large and intermittent white flashes. The solution was to remove the specification
of the specular component in the exported 3D model.

Multiple Light Sources and Components


If we wish to compute the contributions of several light sources to a surface’s illumination, we
merely need to compute the light contribution of each light source, then add all the contributions
together. Similarly, if we wish to compute lighting based on several light components (for
instance, a combination of self-emitted light energy, diffuse reflected light, specular reflection,
and ambient light) we also merely need to compute each light contribution separately, then add
them all together to arrive at the final light intensity.
One practical problem which may arise is that after summing the light contributions from var-
ious sources, the resulting light value may be greater than the maximum allowable intensity of 1.0,
which, when multiplied with the integer scaling factor mentioned at the beginning of the discus-
sion on lighting, would yield a lighting table index that would be greater than the maximum
allowable index of MAX_LIGHT_LEVELS. In such a case, we can clamp the light intensity to the
maximum value, at the cost of some inaccuracy (the intensity should actually be brighter than the
maximum possible value). You can see an example of such clamping in Figure 2-11; there, the
specular highlight should not be a circle of uniform white intensity but should instead grow brighter
toward its center. It is impossible, however, for the color on the printed page to become any brighter than pure white.
The human eye also has a physically limited maximum input level, so clipping of input light levels
also occurs naturally.
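Summing several contributions and clamping the result to a lighting table index could be sketched as follows; how the individual contributions are gathered depends on the application, so they are simply passed in as an array here.

//- Sketch: combine precomputed light contributions into one table index.
int combined_light_index(const float *contributions, int num_contributions,
                         float ambient, int max_light_levels)
{
  float total = ambient;
  for (int i = 0; i < num_contributions; i++) {
    total += contributions[i];
  }
  if (total > 1.0f) total = 1.0f;       //- clamp overflowing intensity
  return (int)(total * max_light_levels);
}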

Radiosity and Ray Tracing


Radiosity is a very realistic technique for computing light intensities, based on a physical model of
light energy. It works by modeling the physical interaction of light with surfaces, and by ensuring
that all of the light energy in a scene is accounted for. Key to the radiosity method is the fact that it
also accounts for the illumination resulting from the reflection of light off of surfaces; the previous
methods all consider light sources separately from the illuminated surfaces. Radiosity generally
works by dividing an entire environment into a number of small patches, each of which is charac-
terized by the amount of light energy it emits or reflects. This implies that all parts of the model
can potentially emit light, providing for correct inter-object lighting. The difficult part about
radiosity is that the entire lighting solution across all patches must be consistent, meaning that for
a large number of patches, a large number of lighting equations must be simultaneously solved to
arrive at the correct intensity for each patch. This only makes sense; if each patch can emit light
and affect all other patches, which then also emit different amounts of light, a consistent solution
requires satisfying the lighting equation for every patch simultaneously. A popular method of
computing radiosity is a progressive refinement method, where the original model is subdivided
into patches only as necessary; if a large change in light intensity is observed for a particular patch,
then the patch is subdivided, and the light is recomputed for each smaller patch.
As realistic as radiosity is, it is slow to compute. Therefore, its role in interactive 3D graphics
is more of a preprocessing step. Radiosity can be used in an offline manner to compute light levels,
which are then saved along with the 3D geometry. Then, at run time, we simply draw the light
using the precomputed light levels.
Ray tracing is a rendering technique that can be used to model both reflection and refraction
of light. The technique works by tracing the paths of “reverse” light rays from the eye, through
each pixel in the image, and to an object in the scene (ordinarily light travels in the other direction,
from the objects to the eye). The reverse light rays, like real light rays, are allowed to bounce
among objects, and to be refracted after traveling through objects. This means that ray tracing can
also create very realistic lighting effects. Ray tracing is currently too slow for real-time applica-
tions. See Chapter 7 for more information on ray tracing, and a way of simulating some of the
effects of ray tracing through environment mapping.

Dynamic or Static Lighting Computations


One question we have not addressed is exactly when we carry out the lighting calculations. This
answer is partially dependent on the rendering technique we use for drawing the polygons (cov-
ered in the next section); it is also partially dependent on whether we want static lighting or
dynamic lighting. Static lighting is lighting which does not change throughout the course of a
user’s interaction with the virtual environment. In such a case, we compute light intensities in a
preprocessing stage and store these light intensities in some format needed by our rendering tech-
nique; storing one color per polygon, or one color per vertex, or one color per lumel in a light map
are all possible means of saving the computed light intensities. Then, when drawing the interactive
3D scene, we do not need to worry about computing any light intensities; the light has already
been “burned into” or “painted onto” our geometry, and only needs to be drawn using an appropri-
ate rendering technique.

Dynamic lighting is lighting that changes while a user is interacting with the 3D environment.
Dynamic lighting implies a change in any factor affecting any of the lighting equations used. Such
factors include surface geometry, surface orientation, surface attributes, light position, light attrib-
utes, and lighting parameters. Based on the new parameters, we dynamically recompute the
illumination for all affected parts of all geometry, and store the new illumination values with the
geometry for later use by the light rendering process. By defining lights to only have a certain area
of influence (such as a sphere), we can limit the amount of geometry that dynamic light affects and
therefore limit the amount of computation that needs to be done when this dynamic light changes
intensity.

Fog
Fog is not really a lighting effect, but it affects the intensity of the surface being viewed, so we con-
sider it here as well. Fog simply blends an original color with a certain fog color, where the amount
of the blending is based on the distance of the surface to the viewer. Objects very far away from the
viewer fade into the fog and are thus displayed with the fog color itself. Near the viewer, the effect
of fog is negligible, and thus the surface should be displayed with its original color. From near to
far the colors should fade gradually into the fog.
Mathematically, we can express the effect of fog on a particular surface color as follows.
Equation 2-17: c = (1 - f) cs + f cf

Here, c represents the final color of the surface with the effect of fog, cs is the original, unfogged
color of the surface, cf is the color of the fog, and f is the fog factor, which ranges from 0 to 1. If f is
0, no fog is applied and we see the original color. If f is 1, we see only the fogged color. Between 0
and 1 are varying amounts of fog. The way we actually implement this in the C++ code in the l3d
library is to scale the real-valued fog factor to lie within the integer range from zero to
MAX_LIGHT_LEVELS. We then use the integer scaled fog factor as a lookup index into a
precomputed fog table which slowly fades each color or color component (red, green, or blue)
from its original value into the fog color of white.

CAUTION The OpenGL definition of the fog factor reverses the sense of 0 and 1: a fog fac-
tor of 0 in OpenGL means that we see only the fog color; a fog factor of 1 means that we see
only the original color.

The question remains as to how to compute the fog factor f. The most correct way would be to use
radial fog, which computes the distance of each fogged surface to the viewer and uses this actual
distance as the basis of some blending function going from 0 to 1. But distance computations
require a time-consuming square root operation, and since fog is typically computed per pixel, this
can be slow. A simpler fog computation, which is less accurate, is based on the z values of each
pixel after the perspective transformation. The z values are then mapped to the range 0 to 1. See
Figure 2-13 for a comparison of radial fog versus z depth fog.

Figure 2-13: Correct radial fog versus simpler z depth fog.

Having decided on a distance to use (radial distance or z depth distance), we then need to map
this distance into a fog value from 0 to 1. We can choose to map the fog linearly or exponentially.
A linear mapping causes colors to fade out at a constant rate as they move into the fog. An expo-
nential mapping causes colors to fade out more quickly as they recede into the fog. We can even
use a discontinuous fog function to cause fog only to appear at certain distances from the viewer.

NOTE An option somewhere between z depth fog and radial fog is to approximate radial
fog by taking the z depth value, then subtracting a percentage of x and y deviation from the
center of the screen. This is more accurate than z depth fog, but still faster than true radial fog.

Notice that regardless of the means of computing fog, the computation is done in camera coordi-
nates because fog is an effect which depends on distance from the camera. This is another example
of why we need the camera_transform method in classes l3d_object and l3d_poly-
gon_3d, which allows us to save the camera space coordinates for later use. The fog computation
is typically done just before polygons are about to be drawn to the screen in the rasterizer; at this
point, however, the polygons have already undergone the perspective transformation, meaning
that we must have previously saved the camera space coordinates in order to perform this late fog
computation.
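A z depth version of the fog factor described above could be sketched like this; the near and far fog distances and the fog table mentioned in the comments are assumptions for illustration, not code shown in this chapter.

//- Sketch: linear z depth fog factor, 0 = no fog, 1 = fully fogged.
float fog_factor(float z_camera, float fog_near, float fog_far)
{
  float f = (z_camera - fog_near) / (fog_far - fog_near);
  if (f < 0.0f) f = 0.0f;               //- closer than the fog start: original color
  if (f > 1.0f) f = 1.0f;               //- beyond the fog end: pure fog color
  return f;
}

//- Per pixel, the scaled integer factor would then index a precomputed fog table:
//-   int i = (int)(fog_factor(z, fog_near, fog_far) * MAX_LIGHT_LEVELS);
//-   fogged_color = fog_table[i][original_color];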

Rendering Techniques for Drawing Light


The previous sections dealt with the mathematical computation of light intensities based on the
properties and orientations of the light sources and the surfaces. We haven’t yet addressed the
issue of how often and exactly at which points in 3D space we need to evaluate the lighting equa-
tions. It turns out that this issue depends on the rendering technique we use for drawing light on
our polygons. Such rendering techniques are also called shading techniques.
Let’s now look at a number of rendering techniques for light, and also at the implications that
each technique has on the frequency and location of lighting computations.

TIP The rendering or shading technique influences how many lighting calculations we per-
form and at which points in 3D space.

Flat Shading
Flat shading is the rendering technique we have been using so far: each polygon has one color.
With flat shading, we perform the lighting computations once per polygon, typically for the
center point of the polygon. This computed light intensity is then used for the entire polygon.
Flat shading leads to a faceted appearance of the model, because the light intensity changes
abruptly from one face to another. See Figure 2-14.

Figure 2-14: Flat shading.

Gouraud Shading
Gouraud shading is a simple technique for combating the faceted appearance of flat-shaded poly-
gons. With Gouraud shading, we perform the lighting computations once per vertex for each
polygon. This means that we need to store and transform vertex normals along with the geometry.
Given the light values at each vertex of the polygon, Gouraud shading then interpolates these light
values while the polygon is being drawn to arrive at a separate light intensity for each pixel. After
arriving at the light intensity for the pixel being drawn, this light intensity is combined with the
color of the polygon, using a lighting table. The final, lighted color is then drawn.

Figure 2-15:
Gouraud shading
computes light
intensity at each
vertex, then
interpolates these
precomputed
intensities as the
polygon is being
drawn, giving a
unique light intensity
to every pixel.

The effect of Gouraud shading is to smooth out over the entire polygon the changes in light
intensity which occur among the polygon’s vertices. Also, since light intensities are computed at
the vertices of the polygon, and since all polygons in an object share vertices, the light changes
over the entire 3D object are also smoothed out, resulting in an overall smooth appearance of the
surface. See Figure 2-16.

Figure 2-16: Gouraud shading makes a surface appear smooth and round.
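As a rough sketch, the per-scan-line interpolation of intensities might look like the following; i_left and i_right would already have been interpolated down the polygon's left and right edges, and all of the names are illustrative rather than actual l3d rasterizer code.

//- Sketch: Gouraud intensity interpolation along one scan line.
void gouraud_span(float *intensity_row, int x_left, int x_right,
                  float i_left, float i_right)
{
  if (x_right <= x_left) return;
  float di = (i_right - i_left) / (float)(x_right - x_left);
  float i = i_left;
  for (int x = x_left; x < x_right; x++) {
    intensity_row[x] = i;   //- per-pixel intensity, later combined with the
                            //- polygon color via the lighting table
    i += di;
  }
}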

Note that Gouraud shading is not always desirable, because it “washes out” all the sharp
changes in light intensity which occur over the entire 3D object. If the object being modeled is
supposed to appear round, then this is fine. But, if the object is supposed to have some sharp cor-
ners or edges, then the light intensity should indeed change suddenly from one face to the next
because of the physical properties of the surface. But simple Gouraud shading would smooth over
all sharp changes in light intensity, including the desired ones.

Figure 2-17: Gouraud shading eliminates all sharp edges, even if these edges should actually appear.

Solving this problem would require storing vertex normals per polygon instead of per object;
this would effectively allow each shared vertex to have several vertex normals, one for each face
touching the vertex. All vertex normals for one vertex would be the same if the surface is supposed
to appear rounded at that point; the vertex normals would be different if an edge should appear
between the adjacent faces.
A popular technique for allowing smooth edged polygons and sharp edged polygons in the
same model is to use smoothing groups. With this technique, we assign a 32-bit long integer to
each polygon, which is then treated as 32 Boolean flags. During the creation of a 3D model, the 3D
artist can turn each of these on or off. When determining each vertex normal for a polygon, the
long value is subjected to a logical AND operation with the long value of each neighbor. If the
result is not zero, the neighbor’s normal is included in the calculation; if it is zero, that neighbor is
excluded. This means that each vertex can have more than one vertex normal depending on which
polygon it is being considered for. By carefully setting the bits, the artist can specify which poly-
gons affect which adjacent polygons, and thereby control smoothing (or lack thereof) on each
edge.
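In code, the smoothing group test is just a bitwise AND. A small sketch, reusing the vec3 type from the lighting sketches earlier and using purely illustrative structure names, might look like this.

#include <math.h>

//- Sketch: compute a vertex normal under smoothing groups.
struct facet { vec3 normal; unsigned long smoothing_groups; };

vec3 vertex_normal(const facet &f, const facet *neighbors, int num_neighbors)
{
  vec3 sum = f.normal;
  for (int i = 0; i < num_neighbors; i++) {
    if (f.smoothing_groups & neighbors[i].smoothing_groups) {
      //- shared group: the neighbor smooths this vertex
      sum.x += neighbors[i].normal.x;
      sum.y += neighbors[i].normal.y;
      sum.z += neighbors[i].normal.z;
    }
    //- no shared group: the neighbor is excluded, leaving a sharp edge
  }
  float len = sqrt(sum.x*sum.x + sum.y*sum.y + sum.z*sum.z);
  sum.x /= len; sum.y /= len; sum.z /= len;   //- normalize the summed normal
  return sum;
}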

Phong Shading
Phong shading takes Gouraud shading one step further. With Gouraud shading, we evaluated the
lighting equation once per vertex, then we interpolated the precomputed intensities while drawing
the polygon, giving an interpolated light value per pixel.
With Phong shading, we actually re-evaluate the entire lighting equation at each pixel. To do
this, we again use vertex normals, and interpolate the normal vector itself while we are drawing
the polygon. We use the interpolated normal vector in the lighting equation, and recompute the
light for each pixel. Naturally, this requires a large amount of computation for each pixel.
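A per-scan-line sketch of Phong shading, reusing the vec3 type and diffuse_intensity function from the earlier sketches, might look like this. The edge normals n_left and n_right are assumed to have been interpolated down the polygon edges already; everything here is illustrative, not actual l3d code.

#include <math.h>

//- Sketch: Phong shading along one scan line (diffuse term only).
void phong_span(float *intensity_row, int x_left, int x_right,
                const vec3 &n_left, const vec3 &n_right,
                const vec3 &L, float Ip, float Md,
                float dL, float c1, float c2, float c3)
{
  for (int x = x_left; x < x_right; x++) {
    float t = (float)(x - x_left) / (float)(x_right - x_left);
    vec3 n;                               //- interpolate the normal itself...
    n.x = n_left.x + t*(n_right.x - n_left.x);
    n.y = n_left.y + t*(n_right.y - n_left.y);
    n.z = n_left.z + t*(n_right.z - n_left.z);
    float len = sqrt(n.x*n.x + n.y*n.y + n.z*n.z);
    n.x /= len; n.y /= len; n.z /= len;   //- ...renormalize it...
    //- ...and re-evaluate the lighting equation for this pixel
    intensity_row[x] = diffuse_intensity(n, L, Ip, Md, dL, c1, c2, c3);
  }
}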

Figure 2-18: Phong shading interpolates the vertex normals themselves while the polygon is being drawn, and re-evaluates the entire lighting equation for each pixel using the interpolated normal vector.

Light Maps
A light map is a special form of a texture map (covered in the next section). Essentially, the idea is
to associate a square grid, called the light map, with each polygon. The entire light map covers the
whole polygon, and thus each element in the light map covers a certain small area of the polygon.
We call each element in the light map a lumel, short for luminance element. To compute the light
intensities, we evaluate the lighting equation once for each lumel, meaning that we evaluate the
lighting equation several times for one polygon. The finer the grid, the more times the light equa-
tion needs to be evaluated, but also the greater the quality of the final result.

Figure 2-19: Light mapping.



Although we evaluate the lighting equation at several points on the polygon (at each lumel),
this is only a one-time preprocessing step (assuming static and not dynamic lighting). After com-
puting and saving the intensities in the light map, we simply store the precomputed light map with
the polygon. Then, when drawing the polygon, for each pixel we determine the corresponding
lumel in the light map (a non-trivial computation, as we see in the next section), combine the
intensity in the light map with the polygon’s color by using the lighting table, and finally draw the
lighted pixel to the screen.
Light mapping can be seen as being somewhere between Gouraud and Phong shading. With
Gouraud shading, we only computed the light at the vertices of the polygon; with Phong shading,
we computed light at every single pixel. With light mapping, we define an arbitrary grid of lumels
on top of the polygon, and compute the light intensities at every grid element.
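Filling a light map during preprocessing could look roughly like the following sketch, which again reuses the vec3 type, dot helper, and diffuse_intensity function from the earlier lighting sketches and assumes a single point light; the parameter names are illustrative only.

#include <math.h>

//- Sketch: evaluate the lighting once per lumel and store it in the light map.
//- O is the world position of lumel (0,0); u_step and v_step are the world-space
//- steps between neighboring lumels; N is the polygon's unit normal.
void fill_light_map(unsigned char *lumels, int lm_width, int lm_height,
                    const vec3 &O, const vec3 &u_step, const vec3 &v_step,
                    const vec3 &N, const vec3 &light_pos,
                    float Ip, float Md, float c1, float c2, float c3,
                    int max_light_levels)
{
  for (int v = 0; v < lm_height; v++) {
    for (int u = 0; u < lm_width; u++) {
      vec3 p;                             //- world position of this lumel
      p.x = O.x + u*u_step.x + v*v_step.x;
      p.y = O.y + u*u_step.y + v*v_step.y;
      p.z = O.z + u*u_step.z + v*v_step.z;

      vec3 L;                             //- unit vector from lumel to light
      L.x = light_pos.x - p.x; L.y = light_pos.y - p.y; L.z = light_pos.z - p.z;
      float dL = sqrt(dot(L, L));
      L.x /= dL; L.y /= dL; L.z /= dL;

      float i = diffuse_intensity(N, L, Ip, Md, dL, c1, c2, c3);
      lumels[v*lm_width + u] = (unsigned char)(i * max_light_levels);
    }
  }
}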

The most difficult part of light mapping is the determination, during the rasterization of a par-
ticular polygon, of which lumel in the light map corresponds to the current pixel on-screen which
we are drawing. When rasterizing a polygon, we know that all pixels we draw do correspond to
(i.e., result from the perspective projection of) some point on the polygon and thus to some point in
the light map, but we do not immediately know which point exactly. If you have a super-human
memory, you may recall that we briefly mentioned this problem in passing in the introductory
companion book Linux 3D Graphics Programming. To summarize, it is not exactly easy or intu-
itive to calculate which lumel on the polygon corresponds to the current pixel of the polygon being
drawn, but it is possible, as we will see in grueling detail in the next section. This calculation is
actually one particular application of the technique called texture mapping, which allows us to
associate arbitrary images with a polygon. Therefore, let us now turn to the topic of texture map-
ping, returning to light mapping again afterwards.

Texture Mapping
As recently as the mid-1990s, an interactive 3D graphics application with texture mapping was
quite a sensation for personal computers. Today, end users expect texture mapping even in simple
3D graphics applications. Therefore, it is important to understand this technique.
The goal of texture mapping is to draw the pixels of a polygon with colors coming from a sep-
arate 2D image. (The image can also be 3D, but for now assume the image is 2D.) The image is
called the texture; the individual pixels within the texture image are often called texels. Seen this
way, the next question is: given a pixel of a polygon on-screen, which texel from the texture
should be drawn?
The answer to this question requires us to map the image onto the polygon. For instance, we
might want to draw the pixels of the polygon so that it appears that the image were “glued on” to
the surface of the polygon. This is by far the most common application of texture mapping. To
“glue” an image onto a polygon, we need to consider the coordinate space of the polygon, and we
also need to define a new coordinate space to contain the image. This new coordinate space con-
taining the image is texture space, which we look at in more detail in the next section.


Figure 2-20: A texture, a non-texture-mapped polygon, and the same polygon with the texture map applied, in several different orientations. Notice that regardless of the orientation of the polygon, the texture map appears to be glued onto the surface of the polygon.

Figure 2-21: The fundamental question of texture mapping. Given a pixel coordinate of a polygon we are drawing, which pixel from the texture image should we draw?

Texture mapping requires us to perform the following steps:


1. Define a texture. The texture is simply a 2D image from which the colors of the polygon will
be taken, pixel by pixel, during rasterization.
2. Define a texture space. This is a coordinate system to specify the location, orientation, and
size of the image in 3D world coordinates.
3. Define a mapping between texture space and the world space coordinates of the polygon’s
vertices.
4. Reverse project screen coordinates into texture coordinates during rasterization. This means
we must determine, based on a 2D (x,y) pixel coordinate obtained while rasterizing a polygon,
the corresponding coordinates (u,v) in texture space, given the texture space of step 2 and
world-to-texture-space mapping of step 3.
5. Map the computed texture space coordinates to integer (u',v') coordinates in the 2D texture
image, and draw the pixel using the color at the (u',v') coordinates in the texture image.
This is a correct but somewhat slow means of performing texture mapping. First of all, let’s look at
each step to make sure we understand the basic procedure. Then, we’ll look at an optimized ver-
sion of the texture mapping process, based on the u/z, v/z, and 1/z terms. Based on this optimized
procedure, we’ll develop the C++ classes necessary to implement texture mapping.

Step 1: Define a Texture


The step which requires the least mathematical trickery is the first step: defining the texture itself.

Storing Texture Data


The class l3d_texture defines a texture in the l3d library. It inherits a texture orientation from
class l3d_texture_space, which is covered in the next section. It also manages an object
actually containing the texture image data, stored in variable tex_data and of type l3d_tex-
ture_data. The member variable owns_tex_data specifies whether this texture object
actually “owns” the texture data or not. Several textures, with different orientations or sizes, can
share the same texture data to save memory, but only one texture can actually own the data. The
owner of the data is the one responsible for freeing the memory of the texture data when it is no
longer needed.
The class l3d_texture_data actually stores the texture data. One l3d_texture_
data object may be referenced by several l3d_texture objects. Member variable id is an
integer variable used to uniquely identify the texture data. We’ll need this identifier when we write
Mesa routines to draw texture-mapped polygons. The member variables width and height are
the width and height of the texture data, in pixels. Note that the width and height must be powers of
two, solely for performance reasons: performing an arithmetic modulo operation (the operator %
in the C language) can be implemented as a bit-wise logical AND operation when using powers of
two, which is faster than the general purpose modulo operator. For instance, the expression
2335%256 in C can be rewritten as 2335&255, because 256 is a power of two, but 2335%17 can-
not be rewritten as a bit-wise logical AND operation because 17 is not a power of two.
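For example, wrapping a texture coordinate into the valid range can then use a mask instead of the general modulo operator; the masked form is valid only because the width is a power of two (and, in this sketch, assumes a non-negative coordinate).

//- Sketch: both functions wrap u into the range 0..tex_width-1, but the masked
//- form works only when tex_width is a power of two.
int wrap_modulo(int u, int tex_width) { return u % tex_width; }
int wrap_masked(int u, int tex_width) { return u & (tex_width - 1); }
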
The member variable data is a linear array of unsigned char. It contains the image data
and is of size width × height × bytes-per-pixel. The image data is stored linearly in left-to-right and
top-to-bottom order within the data array. Each pixel in the array is stored in the same format as
required by the screen. For example, if a pixel on-screen requires four bytes, one pixel in the tex-
ture is also stored using four bytes. Storing the textures in the native screen format means that the
rasterizer can directly copy pixels from the texture into the buffer without needing to do any pixel
format translation.

Figure 2-22: The storage in memory of the texture data: left to right and top to bottom.
The textures are internally stored in the same pixel format as required by the screen; this
means that each texel block may represent more than one byte.

Listing 2-4 presents the texture classes discussed so far. It also declares two classes, l3d_
texture_space and l3d_texture_computer, which we cover in the coming sections.
Listing 2-4: texture.h
#ifndef __TEXTURE_H
#define __TEXTURE_H
#include "../../tool_os/memman.h"

#include <stdlib.h>
#include <stdio.h>
#include "../vertex/coord.h"

class l3d_texture_data {
public:
int id;
l3d_texture_data(void) {id=0; data = NULL; }
virtual ~l3d_texture_data(void) {if(data) delete [] data; }
int width, height;
unsigned char *data;

};

class l3d_texture_space {
public:
l3d_coordinate O, U, V;
};

class l3d_texture : public l3d_texture_space


{
public:
bool owns_tex_data;
l3d_texture_data *tex_data;
l3d_texture(void) {owns_tex_data = false; }
~l3d_texture(void) {if (owns_tex_data) delete tex_data; }
};

class l3d_texture_computer {

private:
l3d_matrix _world_to_tex_matrix;
public:
static const l3d_real EPSILON_VECTOR = float_to_l3d_real(0.0001);
l3d_point O; l3d_vector U, V;
l3d_real *fovx,*fovy;
int *screen_xsize, *screen_ysize;

l3d_real u, v;

const l3d_matrix& world_to_tex_matrix(const l3d_texture_space &tex_space) {


l3d_vector
U_vector = tex_space.U.original - tex_space.O.original,
V_vector = tex_space.V.original - tex_space.O.original,
U_cross_V_vector = cross(U_vector, V_vector);

l3d_vector w = U_cross_V_vector;
l3d_vector v_x_u = cross(V_vector,U_vector),
u_x_w = cross(U_vector, w),
w_x_v = cross(w, V_vector),
o_vec(tex_space.O.original.X_,
tex_space.O.original.Y_,
tex_space.O.original.Z_,
int_to_l3d_real(0));

_world_to_tex_matrix.set(
w_x_v.X_, w_x_v.Y_, w_x_v.Z_, dot(o_vec,w_x_v)*int_to_l3d_real(-1),
u_x_w.X_, u_x_w.Y_, u_x_w.Z_, dot(o_vec,u_x_w)*int_to_l3d_real(-1),
v_x_u.X_, v_x_u.Y_, v_x_u.Z_, dot(o_vec,v_x_u)*int_to_l3d_real(-1),
int_to_l3d_real(0),
int_to_l3d_real(0),
int_to_l3d_real(0),
dot(U_vector,w_x_v));

return _world_to_tex_matrix;
}

};

#endif

Classes for Loading Textures from Disk


Now we know how to store texture data in memory, but what about creating the texture image data
in the first place? Typically, textures are created with a bitmap paint program, such as the GIMP
program under Linux. The textures should be created with dimensions that are powers of two, for
instance, 64 by 64 pixels. The textures need to be saved to disk in a file format which we can then
read in and place into our l3d_texture_data class.
The l3d classes for loading textures support the portable pixmap (PPM) file format developed
by Jef Poskanzer. Converters exist (under Linux and other operating systems) which convert most
graphics formats to and from PPM format, and the better bitmap graphics programs also directly
support loading and saving PPM files.
The main reason for using PPM files is simplicity: parsing PPM files is extremely easy. The
bytes in the file consist of an identifying header, a width specification, a height specification, a
maximum color value, and then the pixel image data in left-to-right and top-to-bottom order, with
three decimal values for each pixel corresponding to red, green, and blue intensities for the pixel.
All fields are separated by white space (blanks, tabs, carriage returns, or line feeds). The exact
specifications for the PPM format are in the online manual under the keyword “ppm.”
Class l3d_texture_loader is an abstract class that loads a texture from disk into mem-
ory. See Listing 2-5. The constructor requires a parameter of type l3d_screen_info, because
the texture loader needs to know the pixel format of the screen in order to convert the texture data,
during the loading process, into the pixel format needed by the screen. The virtual method load
loads the image file stored in the given filename. The width and height of the image file are read
from the file by the texture loader and are stored in member variables width and height. Mem-
ber variable data then holds the actual image data from the image file, in the proper pixel format
for the screen. This data then eventually gets assigned to an l3d_texture_data object, where
it can then be referenced by an l3d_texture object.
Listing 2-5: texload.h
#ifndef __TEXLOAD_H
#define __TEXLOAD_H
#include "../../tool_os/memman.h"

#include "../../tool_2d/scrinfo.h"
#include <stdio.h>

class l3d_texture_loader {
protected:
l3d_screen_info *si;
public:
l3d_texture_loader(l3d_screen_info *si) {
this->si = si;
data = NULL;
}
virtual ~l3d_texture_loader(void) {};

virtual void load(const char *fname) = 0;


unsigned char *data;
int width, height;
};

#endif

Class l3d_texture_loader_ppm is a texture loader specialized for loading PPM images


from disk. See Listings 2-6 and 2-7. The overridden load method loads a PPM file by calling the
protected routine process_tok. The process_tok method is a finite state machine which
moves from state to state. In a particular state, there are only a certain fixed number of valid inputs.
Each input causes a certain action to occur and a change to another state. By studying the manual
page for the PPM file format, it should be quite easy to follow the logic of the file parsing code.
The most important part of the code takes place in states 6 and 15 (for ASCII or binary format
PPM files, respectively). Here, we convert the RGB value we just read from the PPM file into
the native color format needed by the screen, by using the ext_to_native method of the
l3d_screen_info object. Remember that ext_to_native also conveniently automati-
cally allocates a new palette entry if we are using a palette color model.
Listing 2-6: tl_ppm.h
#ifndef __TL_PPM_H
#define __TL_PPM_H
#include "../../tool_os/memman.h"

#include "texload.h"

class l3d_texture_loader_ppm : public l3d_texture_loader {


protected:
int process_tok(char *tok);
unsigned char *cur_pdata;
int state;
int cur_r, cur_g, cur_b;

public:
l3d_texture_loader_ppm(l3d_screen_info *si) :
l3d_texture_loader(si) {};
void load(const char *fname);
};

#endif

Listing 2-7: tl_ppm.cc


#include "tl_ppm.h"
#include "../../system/sys_dep.h"

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "../../tool_os/memman.h"

int l3d_texture_loader_ppm::process_tok(char *tok) {


switch(state) {
case 0: {
if(strcmp(tok,"P3")==0) {
state=1;
return 1;
}
else if (strcmp(tok,"P6")==0) {
state=10;
return 1;
}
else return 0;
}

case 1: {
sscanf(tok,"%d",&width);
state=2;
return 1;
}

case 2: {
sscanf(tok,"%d",&height);
data = new unsigned char [width*height*si->bytes_per_pixel];

cur_pdata = data;

state = 3;
return 1;
}

case 3: {
int maxcol=0;
sscanf(tok,"%d",&maxcol);
si->ext_max_red = si->ext_max_green = si->ext_max_blue = maxcol;
state = 4;
return 1;
}

case 4: {
sscanf(tok,"%d",&cur_r);
state = 5;
return 1;
}

case 5: {
sscanf(tok,"%d",&cur_g);
state = 6;
return 1;
}

case 6: {
int blue=0;
sscanf(tok,"%d",&cur_b);

unsigned long native_col = si->ext_to_native(cur_r, cur_g, cur_b);

register int i;
unsigned long mask = MAX_BYTE;
char shift = 0;

for(i=0; i<si->bytes_per_pixel; i++) {


*cur_pdata++ = (native_col & mask) >> shift;
mask <<= BITS_PER_BYTE;
shift += BITS_PER_BYTE;
}

state = 4;
return 1;
}

case 10: {
sscanf(tok,"%d",&width);
state = 11;
return 1;
}

case 11: {
sscanf(tok,"%d",&height);
data = new unsigned char [width*height*si->bytes_per_pixel] ;

cur_pdata = data;

state = 12;
return 1;
}

case 12: {
int maxcol=0;
sscanf(tok,"%d",&maxcol);
si->ext_max_red = si->ext_max_green = si->ext_max_blue = maxcol;
state = 13;
return 2;
}

case 13: {
unsigned char c;
sscanf(tok,"%c",&c);
cur_r = c;
state = 14;
return 1;
}

case 14: {
unsigned char c;
sscanf(tok,"%c",&c);
cur_g = c;
state = 15;
return 1;
}

case 15: {
unsigned char c;
sscanf(tok,"%c",&c);
cur_b = c;

unsigned long native_col = si->ext_to_native(cur_r, cur_g, cur_b);

register int i;
unsigned long mask = MAX_BYTE;
char shift = 0;

for(i=0; i<si->bytes_per_pixel; i++) {


*cur_pdata++ = (native_col & mask) >> shift;
mask <<= BITS_PER_BYTE;
shift += BITS_PER_BYTE;
}

state = 13;
return 1;
}

default: fprintf(stderr,"unknown error parsing ppm file");


return -1;
}
}

void l3d_texture_loader_ppm::load(const char *fname) {


FILE *fp;
char *tok;
char s[256]="";
int result;

fp=fopen(fname, "r");
if(!fp) {
fprintf(stderr,"Error opening texture ppm file %s", fname);
return;
}

state = 0;
fgets(s, 256, fp);
while (!feof(fp)) {
tok = strtok(s," ");
while(tok) {
if(tok[0]=='#') break;

result = process_tok(tok);
tok = strtok(NULL," ");
}
if (result==2) break;
fgets(s, 256, fp);
}

if(!feof(fp) && result==2) { //- P6 binary ppm file


s[1] = 0;
s[0] = fgetc(fp);

//- skip comment lines before begin of binary image data


while (s[0] == '#') {
s[0] = fgetc(fp);
while(s[0] != '\n') {
s[0] = fgetc(fp);
}
}

//- process binary image data


while(!feof(fp)) {
process_tok(s);
s[0] = fgetc(fp);
}
}
}
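One possible way of wiring these classes together, once a texture file exists on disk, is sketched below. The l3d_screen_info pointer comes from the application's screen setup, the file name is just an example, and the exact place where this happens in a real l3d program may differ.

//- Sketch: load a PPM file and hand the converted pixel data over to the
//- texture classes from Listing 2-4.
l3d_texture *make_texture(l3d_screen_info *si, const char *fname)
{
  l3d_texture_loader_ppm loader(si);
  loader.load(fname);

  l3d_texture_data *tex_data = new l3d_texture_data;
  tex_data->width = loader.width;
  tex_data->height = loader.height;
  tex_data->data = loader.data;      //- take over the converted pixel buffer

  l3d_texture *tex = new l3d_texture;
  tex->tex_data = tex_data;
  tex->owns_tex_data = true;         //- this texture frees tex_data when destroyed
  return tex;
}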

Practical Issues in Dealing with Texture Image Files


You may have some existing image data that you wish to convert into a PPM file for use as a tex-
ture. The PPM utilities, available for many operating systems and included on the CD-ROM, offer
several command-line utilities for converting to and from various file formats. You can find these
by typing man -k ppm and man -k pnm. (PNM stands for portable anymap, meaning either a
bitmap with two colors or a pixmap with arbitrary colors.) Some of the more important utilities are
giftopnm and djpeg -pnm, which convert GIF or JPEG files to PPMs. Don’t forget that the
dimensions of the texture should be powers of two.
Another issue arises when using an indexed or palette color model. A single PPM file can
have more colors than the maximum number of entries in the palette, which would cause palette
overflow in the call to ext_to_native. A further problem is that we usually don’t have just
one texture, but instead have several texture files, each with many colors. Even if the palette is big
enough to hold the colors for one texture, the total number of colors in all texture files is probably
greater than the maximum palette size. If we are working with a palette, it is therefore essential to
generate a common palette for all textures. Conceptually, this works by examining all colors in all
input files and finding a palette with colors that most closely match most of the colors in most of
the textures—a perfect match for all colors in all textures is, of course, impossible. The program
ppmquantall generates a common palette of a specified size for a given set of input images,
maps the original colors in the images to the new colors in the common palette, and overwrites
the original input files. For instance, to generate a common palette consisting of 128 colors for all
PPM files in the current directory, you would type ppmquantall 128 *.ppm. Remember, this
overwrites the original input files, so make sure you have backups before you execute this com-
mand. Also, you might want to artificially make the common palette size smaller than the actual
palette size. By doing this, you slightly reduce the overall color quality for all textures, but also
leave a certain number of entries in the actual palette free for later use. Then, when we are comput-
ing the lighting and fog tables, we have some custom colors we can allocate in case no existing
color can be found that closely matches the desired light or fog level. If you generate a common
palette that uses up all of the entries in the actual palette, then you have no entries left over to com-
pensate for poor matches when computing the lighting and fog tables. This may or may not be a
problem. If the original common palette contains a wide variety of colors and a wide variety of
intensities, then maybe there will be a lot of good color matches when computing the lighting and
fog tables. The sad fact of life with a palette is that you quite simply have a limited number of col-
ors available for use; either you risk poor lighting and fog tables if you want the best possible
texture quality, or you sacrifice color resolution in the textures in exchange for more freedom
when computing the lighting and fog tables.
There are also a number of other useful PPM utilities for performing image processing opera-
tions on image files—smoothing, scaling, rotating, brightening, and so forth. (See the online
manual.) Also, the GIMP bitmap editing program under Linux offers many powerful image
manipulation functions in a graphical interface, allowing you to immediately preview and try out a
number of visual effects.

Step 2: Define a Texture Space


Having defined a 2D texture image, we need to somehow position this texture image in our 3D
world. This is because the 2D texture image is supposed to affect the colors of the 3D polygons
comprising our virtual world; it only makes sense, then, that the texture image must also be posi-
tioned within the 3D world.
Positioning the texture image in our 3D world first requires us to define a texture space con-
taining the texture image. Texture space is simply a normal coordinate system with three axes and
an origin. We name the three texture space axes the u axis, v axis, and w axis, corresponding to the
x, y, and z axes in our normal left-handed world coordinate system.

CAUTION Do not confuse the w axis, the third dimension of texture space, with the homo-
geneous w coordinate. These two terms have nothing to do with one another!

The special thing about texture space is that we associate a color with each point in the space.
Thus, you can think of texture space as a solid, colored, infinitely large cube, where each point in
the volume of the solid cube may be assigned a separate color. The convention we define for tex-
ture space is that the 2D texture image we defined in step 1 is located in the 2D square in texture
space defined by the corner points (0,0,0), (0,1,0), (1,1,0), (1,0,0). Thus, the texture image occu-
pies exactly one square unit, starting at the origin, and located in the uv plane of texture space.
Outside of this single square unit, but still in the uv plane, we can think of the texture as being
duplicated, meaning that each square unit in texture space, starting and ending on whole numbers,
contains the colors of the 2D texture image. As for the w axis, for 2D textures we impose the
restriction that the colors in texture space do not depend on w; they only depend on the (u,v) val-
ues. (3D textures also allow for color variation along the w axis.) Graphically, you can imagine
that the 2D uv plane is simply duplicated infinitely up and down the entire length of the w axis.

Figure 2-23: Texture space is a normal 3D space, with the extension that each point contains a color. The 2D texture image occupies the square area in texture coordinate points located from (0,0,0) to (1,1,0). Outside of this square area, the texture image is typically repeated.


It is important to understand that a coordinate in texture space (also often called a (u,v) coordi-
nate) is essentially equivalent to a pixel in the texture image. Texture space is a colored space; the
texture image defines the colors in the space. Therefore, each point in the texture space corre-
sponds to a pixel in the texture image. Here are some examples: the (0,0,0) coordinate in texture
space corresponds to the first pixel in the texture image; (0.5, 0.5, 0) is the pixel in the middle of
the image; (0.999, 0.999, 0) is the last pixel in the image; and (1,1,0) is again the first pixel in the
image, because the texture repeats itself.

Figure 2-24: Points in texture space correspond to colors in the texture image.

Since texture space is a normal 3D space, we can define it in terms of three vectors, defining
the axes, and an origin point. We name the origin point O, and the three axis vectors U, V, and W.
Since we have said that for 2D textures, which is all we use in this book, the w texture coordinate
does not affect the color in texture space, it turns out that we rarely need the W axis. When we do
need the W axis vector, we can compute it as U × V.

NOTE Computing the W vector as U × V has one disadvantage: the size of the resulting vec-
tor is then arbitrary. We could simply normalize the W vector after the cross product, but the
problem is that the sizes of the texture space axis vectors define the scaling of the texture in
world space. We look at this more closely in the next section. This means that by computing
the W vector dynamically, we lose the freedom to specify a custom scaling of the texture along
the w axis. Since for 2D textures, we ignore the w axis anyway, this is unimportant. But with
true 3D textures, where the w axis may be important, we should define our own W axis to have
full control over the entire specification of the 3D texture space.

This means that to store a texture space, we just need to store O, U, and V. Instead of storing the
vectors U and V, we instead store the tip points of these vectors; the base point of these vectors is
then O. As usual, we subtract base from tip to obtain the vector. We store the tip points to allow for
consistent transformation using the l3d_coordinate class, as mentioned earlier in this chap-
ter in the discussion about surface normal vectors.
Class l3d_texture_space, presented earlier in Listing 2-4, stores a texture space in this
manner. It simply has three member variables, all of type l3d_coordinate: O, U, and V. Class
l3d_texture, as we saw earlier, inherits from l3d_texture_space. This means that a
texture also has a texture space associated with it.
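For example, a texture space for a polygon might be set up along the following lines, where p0, p1, and p2 are assumed to be l3d_point values taken from the application; this is only a sketch of how the members could be filled in, not code from the l3d library.

//- Sketch: define a texture space from three world-space points. O is the
//- texture origin; U and V are stored as the tip points of the axis vectors,
//- so the axis vectors themselves are later recovered as U.original - O.original
//- and V.original - O.original (see world_to_tex_matrix in Listing 2-4).
l3d_texture tex;
tex.O.original = p0;   //- world position of texture coordinate (0,0)
tex.U.original = p1;   //- world position of texture coordinate (1,0)
tex.V.original = p2;   //- world position of texture coordinate (0,1)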

Step 3: Map Between Texture Space and World Space


Defining a texture space in terms of a texture coordinate system actually has the effect of position-
ing, orienting, and scaling the texture within 3D space. Recall that texture space is a colored space.
Our colored 2D texture image is located in the uv plane of texture space, in the unit square going
from (0,0,0) to (1,1,0). By positioning the origin point of the texture coordinate system, we are
also effectively positioning the first pixel of the texture image. By orienting the axes of the texture
coordinate system, we are also effectively orienting the texture image. By scaling the axes of the
texture coordinate system, we scale the size of the texture image.

Figure 2-25: Changing the position of the origin of the texture coordinate system has the effect of positioning the texture image.
Figure 2-26: Changing the orientation of the texture space axes has the effect of orienting the texture image.

Figure 2-27: Changing the size of the texture coordinate axes has the effect of scaling the texture image.

The purpose of the texture coordinate system, then, is to define the position, orientation, and
scaling of the texture image within the 3D world coordinate system.
Given a texture space, we should then define a mapping between texture space and world
space. This is nothing more than a change in coordinate system. This means that given a particular
coordinate in texture space, we should be able to find out the coordinate in world space. Con-
versely, and more commonly, we must also be able to determine, given a coordinate in world
space, what its coordinates are in texture space.
Let’s now look at each of these mappings separately.

Figure 2-28: Mapping between texture coordinates and world coordinates. If we know the coordinates of a point P in one of the coordinate systems, we should be able to find the coordinates of the point in the other coordinate system.

The mapping from texture space to world space is very easy to compute. Recalling the rela-
tionship between matrices and coordinate systems, we can simply write the texture coordinate
system as a matrix, using the first three columns as the axis vectors and the last column as the ori-
gin point. This matrix, when multiplied with a column vector representing a location in texture
space, then converts the texture space coordinates to world space coordinates, for the reasons dis-
cussed in the introductory companion book Linux 3D Graphics Programming.
Equation 2-18

The mapping from world space to texture space is more difficult, but it is the mapping which we
more often need. Why do we need it more often? The reason is that the final texture mapping strat-
egy we implement needs texture coordinates for each world space polygon vertex. The idea is that
we ultimately wish to associate texture coordinates—and thus colors from the texture image—
with every pixel we draw on-screen for a particular polygon. There are two ways of attacking the
problem. The first is the direct pixel-level approach, covered in the next section, where we explic-
itly compute a texture coordinate for every pixel as it is being drawn. The second is a vertex-level
approach, covered in the section titled “An Optimized Texture Mapping Strategy,” where we only
explicitly compute texture coordinates for each world space vertex of a polygon, and interpolate
these precomputed values to arrive at the texture coordinates for each pixel. For the second, ver-
tex-level approach, we know the world space coordinates of each polygon vertex, and need to
compute the texture coordinates for each vertex. In other words, given the world space coordinates
of a vertex, what are its coordinates in texture space?
So, how do we compute the matrix that maps from world space to texture space? Concep-
tually, the answer is simple: invert the matrix that converts texture space to world space. It is
tempting, but incorrect, to use the same strategy we used (in the introductory companion book
Linux 3D Graphics Programming) for deriving the camera transformation matrix. In that case, we
simply took the transpose of the matrix to find its inverse. But recall why this works: the vectors
for camera space are all mutually orthogonal, and are all of unit size. While our texture space vec-
tors are probably orthogonal, they are not always of unit size. This is because, as we said earlier,
we must allow for different-sized texture space axis vectors to allow for scaling of the texture.
Therefore, since the U, V, and W vectors are not of unit size, we cannot invert the matrix simply by
taking its transpose.
In this case, we need to perform a bit of tedious algebra to invert the matrix. We need to
express the matrix as a set of simultaneous equations, solve these equations algebraically, then
recast the expression into matrix form. We are going to go through every step of the algebra in
excruciating detail because it is important that you understand how to find the solution to such
mathematical problems, which arise often in 3D graphics. But don’t worry—we are going to use
the computer to do most of the algebra for us; specifically, we are going to use the symbolic alge-
bra package Calc, which is an extension to the Emacs editor. Calc is included on the CD-ROM.
The important thing is to understand what the problem is, how to cast it in a form that can be
solved by Calc, and how to use Calc.
Our problem is to invert a matrix. First of all, let’s rewrite the problem as a set of linear equa-
tions. Then, we’ll see how we can use Calc to solve the set of linear equations.
The matrix presented earlier converts from texture space to world space. This means that mul-
tiplying texture space coordinates (u,v,w) with this matrix gives us world space coordinates (x,y,z).
Mathematically, this can be expressed as:
Equation 2-19
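Written out explicitly, with the axis vectors U, V, and W as the first three columns and the origin O as the fourth column (as just described), the relationship should have the form:

\[
\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}
=
\begin{bmatrix}
U_x & V_x & W_x & O_x \\
U_y & V_y & W_y & O_y \\
U_z & V_z & W_z & O_z \\
0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix} u \\ v \\ w \\ 1 \end{bmatrix}
\]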

Performing the matrix multiplication yields the following system of equations.


Equation 2-20
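Component by component, this multiplication should yield:

\[
\begin{aligned}
x &= u\,U_x + v\,V_x + w\,W_x + O_x \\
y &= u\,U_y + v\,V_y + w\,W_y + O_y \\
z &= u\,U_z + v\,V_z + w\,W_z + O_z
\end{aligned}
\]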

This system of linear equations is merely a different representation for the same information con-
tained within the matrix equation. Given u, v, and w in texture space, the equations tell us how to
find x, y, and z in world space.
We now want to solve this system of equations for u, v, and w, based on known values of x, y,
and z. This takes us from world space to texture space. Solving this system of linear equations is
exactly the same as inverting the equivalent matrix.
The system of equations is solvable; there are three equations and three unknowns. You may
wish to try to solve the system of equations by hand, using the normal techniques of grade school
algebra. If your patience level is the same as mine, you will notice that this system is rather tedious
to solve. Scores of terms need to be copied from one step to the next, some terms cancel out sur-
prisingly, and others look like they will cancel out but actually do not. The danger of making a tiny
sign error somewhere is great, leading to incorrect results and wasted hours of scribbling, not to
mention a cramped hand. Most of the work involved in solving this system of equations seems
tedious, repetitive, manual, and error-prone. It seems like a perfect job for the computer. The Calc
package can help us here. So, let’s now take a detour to look at how to use the Calc package to
solve this system of equations.

NOTE If you are uninterested in understanding how to solve the system of equations, you
can skip this section and just use the results. However, I sincerely hope that you take the time
to learn to use Calc and to understand how to solve such problems using Calc as a tool.
Sooner or later, every 3D programmer will be confronted with solving systems of equations. A
technique as ubiquitous as texture mapping cannot be fully understood without understand-
ing the solutions to the equations.

Calc: A Symbolic Algebra Package


Calc is a free (zero cost and open source) mathematical tool which runs as part of the Emacs envi-
ronment. It can perform numeric and symbolic mathematical manipulations. Its ability to perform
numeric math operations means that Calc can compute quantities specified as numbers, just as
with a normal desk calculator—for instance, (37–140)/15.3. Its ability to perform symbolic math
operations means that Calc can solve systems of equations specified algebraically in terms of
known and unknown variables. For the problem at hand—converting from world space to texture
space—we are mostly interested in the symbolic capabilities of Calc.
Here is an example of Calc’s symbolic capabilities. If we have the system of equations:
Equation 2-21
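(This is the same system that is entered step by step in the hands-on walkthrough below:)

\[
x + y = 10, \qquad x - y = 5
\]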

then Calc can automatically solve this system of equations for x and y to give us the solution:
Equation 2-22
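(As confirmed later in the walkthrough, the solution Calc reports is:)

\[
x = 7.5, \qquad y = 2.5
\]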

Internally, therefore, Calc has performed the typical grade school algebraic manipulations for
solving systems of equations. For this example, the equations are very easy to solve by hand, but in
the case of inverting the texture space to world space matrix, the system of equations is quite sim-
ply tedious to solve by hand. The symbolic algebra capabilities of Calc relieve us of exactly this
burden. Let’s first take a brief tour of how to use Calc, then focus on using the symbolic algebra
capabilities to solve the texture mapping equations.

NOTE Calc has excellent online info documentation, including many hands-on tutorials.
Alternatively, within Calc, press ? and type i to display the info documentation from within
Calc. Since we don’t have enough space here to describe all of Calc’s features—many of
which are very interesting and useful for solving the types of problems which arise in 3D
graphics—I encourage you to have a look at Calc’s info documentation.

Starting and Exiting Calc


To use Calc, you must have both Emacs and the Calc package installed, and you must know how to
do basic editing in Emacs. See the Appendix for instructions on installing Calc.

With Emacs and Calc installed, you can get started as follows.
1. Type emacs and press Enter. The Emacs window appears.
2. In the Emacs window, press Alt+x to run an extended command. Notice the text “M–x” which
appears at the bottom of the screen. Type calc and press Enter. Two new small windows
appear at the bottom part of the existing Emacs window. See Figure 2-29. Notice that the cur-
sor is now located within the leftmost Calc window.
3. Press Ctrl+x, 1. This maximizes the Calc window to occupy the entire Emacs window.

Figure 2-29: Calc immediately after starting. The two windows at the bottom are the Calc windows.

To exit Calc, simply kill the Calc buffer. When the cursor is in the Calc window, press Ctrl+x,
Ctrl+k, Enter.
Next, let’s enter some simple expressions and let Calc evaluate them.

Stack-Based Computation
Fundamentally, Calc operates with a stack of mathematical expressions. The typical flow of oper-
ations in Calc is to enter entries on the stack, then to perform operations with the entries on the
stack. Some operations, such as negation, operate only on the topmost element of the stack. Other
operations, such as addition, operate on the top two elements of the stack, removing these and
replacing them with one new element, the result of the operation. This style of operation is also
called reverse Polish notation or RPN; some desk calculators operate in this mode.
Let’s now perform some basic computations to see the stack in action and to learn some typi-
cal Calc commands.

1. Press ', a single quote character. This tells Calc that you are about to input an algebraic expres-
sion, and that this expression should be placed on top of the stack. Notice the prompt
“Algebraic:” which appears at the bottom of the screen.
2. Type 125 and press Enter. Notice the text “1: 125” which appears within the Calc window.
This means that entry number 1 of the stack has just been set to value 125.
3. Type n. This negates the top entry on the stack. Notice the number 125 changes to –125.
4. Press ', type 25, and press Enter. This enters a new entry on the top of the stack. The existing
entries on the stack are pushed down by one position. Notice that the entry number next to the
number –125 has changed to 2, indicating that it is now entry number 2 on the stack. Notice
the new line “1: 25” which appears, indicating that the top entry of the stack is the number 25.
5. Press +. This adds the top two entries on the stack and replaces them with a new entry on the
top of the stack representing the result. Notice that the entries –125 and 25 on the stack disap-
pear and are replaced by a new entry with a value of –100, the result of adding –125 and 25.
6. Press Shift+u to undo the last operation. Notice that the original two entries reappear.
7. Press / to divide the second entry on the stack (–125) by the first entry (25). Notice that the
result (–5) appears on top of the stack.
8. Press Shift+u to undo the last operation.
9. Press Tab. This exchanges the top two entries on the stack.
10. Press / to divide the second entry on the stack (25) by the first entry (–125). Notice that the
result (–0.2) is different than before, because the order of the stack entries was different; we
have exchanged dividend and divisor.
11. Press Shift+u to undo the last operation.
12. Press Space. Notice that the top entry of the stack is duplicated. Press Space three more times
to duplicate the top entry three more times.
13. Press Backspace. Notice that the top entry of the stack disappears. Press Backspace three
more times to delete the next three entries from the top of the stack.
14. Press Ctrl+u. This allows you to enter a numerical argument to an upcoming command. Type
2 as a numerical argument. Now, enter the command to be executed with the argument of 2.
Press Space, the “duplicate entry” command.
15. Notice that the top two entries of the stack have been duplicated, because of the numerical
argument 2 which we entered earlier.
Now we know how to enter items onto the stack, how to duplicate and delete them, and how to per-
form basic arithmetic operations on them. Here is a summary of the most important Calc
commands:
- Press ' (single quote) to enter an algebraic expression.
- Press +, –, /, or * to add, subtract, divide, or multiply the top two entries of the stack, replacing them with a new entry containing the result.
- Type n to negate the top entry on the stack.
- Press Space to duplicate the entry on top of the stack.
- Press Ctrl+u, type number n, and press Space to duplicate the top n entries on the stack.
- Press Tab to exchange the top two entries on the stack.
- Press Shift+u to undo operations.
- Press Backspace to delete the entry on top of the stack.

Entering and Editing Mathematical Entities


We’ve now seen how to use Calc to manipulate numbers. Calc’s real attraction for us, though, is its
symbolic algebra facility. We can enter variables in Calc simply by using alphanumeric identifiers
in our expressions. Let’s try entering some symbolic equations for Calc to solve.
1. Press ' to allow algebraic entry.
2. Type x + y = 10 and press Enter. Notice that the formula, with variables, appears on the Calc
stack.
3. Press ', type x – y = 5, and press Enter. This second formula appears on the stack.
4. Press Ctrl+u and type 2. This enters an argument of two for the next command to come.
5. Type v, then press Shift+p to pack the top two entries on the stack into a vector of equations.
Notice that the two equations are placed into one list of equations, separated by commas and
surrounded by square brackets.

NOTE It would also have been possible to enter the vector of equations directly in one step,
instead of entering two separate equations then packing them. To enter both equations in one
step, simply separate the equations with a comma and place square brackets around the
entire list of equations during the entry of the expression.

6. Type a, then press Shift+s to solve the system of equations on the top of the stack. Notice the
prompt that appears at the bottom of the screen, “Variable to solve for:”
7. Type [x,y] and press Enter. This tells Calc that we want to solve the system of equations for
two variables, x and y. Do not forget to type the square brackets.
8. Notice that Calc solves the system and displays the solution x=7.5, y=2.5.
Note that when entering expressions containing terms that should be multiplied, you can either
explicitly specify the multiplication with the * (asterisk) character, as in “a*b”, or you can simply
separate the terms with a space, as in “a b”. You should not write the terms directly next to one
another, as in “ab”, because in this case, Calc interprets this as one term instead of two terms multi-
plied together.
Although we now know how to use Calc to solve systems of equations, we should not try just
yet to solve the texture mapping equations. Instead, we should continue to look at some other edit-
ing features of Calc. Later, when we actually solve the texture mapping equations, we will be
confronted face to face with the full complexity of the solutions, and will need all tools at our dis-
posal to reduce the solutions to the simplest form possible.
The following list summarizes the features of Calc that we use in the following sections to
solve and simplify the texture mapping equations.

- Editing expressions. Calc allows you to edit the top expression on the stack. This is extremely
useful for performing manual simplifications or rearranging an existing formula so that Calc
can more easily recognize and simplify expressions. Edit the top expression on the stack as
follows. Press ', a single straight quote character. A new window, with the title “Calc Edit
Mode,” opens, containing the top expression of the stack. Use the normal Emacs editing com-
mands to edit the text of the expression. Press Ctrl+c twice to accept your changes, or Alt+#,
x to cancel your changes.
- Display modes. Calc offers different ways of displaying algebraic expressions. The Normal
mode is a compact, straight-line representation, where two variables next to one another are
understood to be multiplied together, as with standard algebraic notation. The Big mode is an
easy-to-read ASCII representation, where fractions are displayed vertically instead of hori-
zontally. The C mode is a representation using the syntax and operators of C language expres-
sions. The TeX mode is a representation suitable for inclusion in the typesetting program TeX,
which is often used in academic circles for publishing documents with large amounts of math.
(The equations in this book were all typeset using the LyX front end to the LaTeX macro
extension to the TeX program.) Change the display mode in Calc as follows. Type d. Then, for
Normal mode, press Shift+n. For Big mode, press Shift+b. For C mode, press Shift+c. For
Tex Mode, press Shift+t.

Figure 2-30: Normal, Big, C, and TeX display modes in Calc.

- Vector entry. Calc has facilities for dealing with row and column vectors of numbers or vari-
ables. To enter a row vector, first press ' to allow algebraic entry. Then, press [ to begin the
vector. Enter each element of the vector, separated by the comma (,) character. Press ] to end
the vector, and press Enter to finish the input. To enter a column vector, follow the same pro-
cedure, but separate the elements of the vector with the semicolon (;) character.
- Matrix entry. Calc also allows entry and computation with matrices of numbers or variables.
To enter a matrix, begin as if you were entering a vector: press ' then [. Enter the first row of
elements in the matrix, separating the elements of the matrix with the comma (,) character. To
finish a row, enter a semicolon (;) character, and continue typing the elements of the next row.
Continue until the matrix is complete. Then press ] and Enter.

Figure 2-31: A matrix and a column vector entered within Calc. The matrix was entered with the key sequence '[a,b,c,d;e,f,g,h;i,j,k,l;m,n,o,p] followed by Enter. The vector was entered with the key sequence '[x;y;z;w] followed by Enter.

- Simplifying equations. Calc offers many commands to simplify algebraic expressions. The
most important commands are the simplify and normalize-rational commands. The simplify
command tries to simplify the expression, using rules which ordinarily might be too slow to
apply. For instance, simplify combines non-adjacent sums of terms: a+b+2a is simplified to
b+3a. The normalize-rational command attempts to rearrange a formula to be the quotient of
two polynomials. For instance, a+b/c+d/e is simplified to (ace+be+cd)/(ce). Invoke these
commands as follows. Type a to begin entering an algebraic command. Then, to apply the
simplify command, type S; to apply the normalize-rational command, type n. The simplifica-
tion occurs on the topmost element of the stack.
- Collecting terms. The collect command of Calc rearranges a formula as a polynomial in a
given variable. For instance, if we have the expression ax+bx+cx, then collecting the x terms
yields the expression (a+b+c)x. Invoke the collect command as follows. Type a to begin
entering an algebraic command, then type c. Notice the prompt “Collect terms involving:”
which appears at the bottom of the screen. Type the name of the term which should be col-
lected, and press Enter.
- Selections. Often, it is useful to operate on just one small part of an expression, rather than the
expression as a whole. Calc allows us to define a selection, which is simply a smaller portion
of an expression. After defining a selection, most Calc commands then operate only on the
selection. Most commonly, after a selection you will apply one of the simplification com-
mands (a, S or a, n) or the edit command (', the straight quote character). This allows you to
simplify or edit a smaller portion of the entire expression. Apply a selection within Calc as
follows. Move the cursor to be on top of the beginning of the subexpression that you want to
select. Type j to enter selection mode. Then, type one of the following keys to define a
selection: s for the smallest subexpression at the cursor position, m for more of the current
subexpression, n for the next subexpression after the current selection, or c to cancel selection
mode and select the entire formula. After defining a selection, notice that the rest of the equa-
tion is replaced with dot characters to indicate that these parts of the expression are tempo-
rarily inactive.
Now that we know some of the more important tools that Calc offers for editing and simplifying
expressions, let’s move on to the actual practical reality of solving systems of equations within
Calc.

Solving Systems of Equations


Recall, we want to use Calc to solve a system of equations. The last hurdle standing in our way is
an unfortunate quirk of Calc’s method of solving systems of equations. The quirk is that when
solving a system of equations for many variables, only one variable—usually the last variable
solved for—tends to have a reasonably simple form. The other variables tend to have incredibly
complicated, non-simplified forms, spanning ten lines or more and defying human comprehension.
For instance, say we have a system of three equations with three unknown variables u, v, and
w, and we ask Calc to solve this system by using the a, S command described earlier. But when
entering which variables to solve for, we can enter them in any order: we can enter [u,v,w],
[v,w,u], or [w,u,v] (among other permutations). In the first case, variable w would typically have
the simplest form; in the second case, u; in the third case, v.
This may sound unimportant, but in practice it is very important. You might think that the
complicated expressions for the non-last variables could be simplified with the simplify and nor-
malize-rational commands described in the previous section. Unfortunately, this is not the case.
The problem is that many common subexpressions differ only by a sign—for instance, a/(b–c) +
a/(c–b). In this case, the subexpression (c–b) is actually the same as the denominator (b–c) of the
first expression, only negated. When this same situation of identical-but-negated subexpressions
is expanded to expressions containing ten or more terms, Calc is not always able to recognize the
commonality among the subexpressions, and is thus often not able to simplify the expressions fur-
ther. If you notice such expressions, it is possible for you to edit the offending expression
manually, negating both numerator and denominator, so that both terms then have truly identical
denominators, which would make it explicitly clear to Calc that the expression can be simplified.
However, with extremely long expressions, even the human capability for pattern recognition can
begin to reach its limits.
Therefore, the best way to solve systems of equations in Calc is to solve the system repeat-
edly, each time allowing a different variable to be last in the solution vector, and therefore the
simplest. Assuming our system has the three unknown variables u, v, and w, we would proceed as
follows.
1. Enter the system of equations to be solved, within square brackets and separated by commas.
2. Duplicate the system of equations before solving it: press Space. In this way, you still have a
copy of the original equations, even after solving it (which replaces the original system of
equations with its solution).
3. Solve the duplicated system with one particular variable, say u, as the last one. Type a, S, type
[v,w,u] and press Enter.
4. The solution for all three variables appears, but only the solution for the last variable u is suffi-
ciently simple to be understandable. Edit the solution: press ' (straight quote). Delete
everything before and after the last solution. Press Ctrl+c twice to accept your changes. Now,
only the solution for the last variable remains on top of the stack.
5. Press Tab to exchange the top two entries of the stack. The system of equations now appears
on top of the stack, and the solution for one variable beneath it.
6. Repeat again starting at step 2, but with a different variable, say v, as the last one. Continue
until all variables have been solved for as the last variable in the solution vector.
This procedure gives you the best results from the equation solver. But often, even if a variable
was solved for as the last variable in the solution vector, the resulting equation can still be simpli-
fied further with the simplify and/or normalize-rational commands. In particular, it is often helpful
to simplify only parts of the equations at a time. Do this with the selection command. Select a rea-
sonably small subexpression, consisting of, say, four to five terms. First simplify the expression by
typing a, S, then apply the normalize-rational command by typing a, n. Then, select more of the
expression with j, m, and continue to simplify. If Calc ever fails to simplify, simply undo the oper-
ation, select a smaller subexpression, and try again. If even that fails, and you are convinced that
the solution can be simplified further, you need to help Calc out by making common terms easier
to recognize. Most often, common-but-negated terms will appear scattered throughout the expres-
sion. By editing the expression and judiciously negating terms so that no mathematical change in
value is effected (for instance, by negating both numerator and denominator of a fractional expres-
sion), it is often possible to make similar-but-negated terms exactly identical. After doing this, try
simplifying again; Calc will then usually recognize the common terms which you made explicit,
and simplify further.
To summarize, when solving systems of equations in Calc, you should solve the system multi-
ple times, each time with a different variable as the last one in the solution vector. Use the
duplicate command to avoid needing to enter the system of equations multiple times. Simplify the
resulting solutions a piece at a time by using selections, the simplify command, and the normal-
ize-rational command.

Solving the Texture Mapping Equations with Calc


Now, at long last, we have enough information to solve the texture mapping equations in Calc. We
solve the system with the procedure described in the previous section: duplicate, solve with one
variable as the last variable, repeat for other variables. Simplify each solution piece by piece by
using selections.
Immediately after entering the system of equations from Equation 2-20 into Calc, the Calc
window appears as in Figure 2-32.

Figure 2-32: The texture space to world space equations in Calc.

After solving the system with a, S followed by [v,w,u], the Calc window appears as in Figure
2-33. Notice the complexity of the solutions for v and w, and the relative simplicity of the solution
for u.

Figure 2-33: The system solved for [v,w,u].

We then edit the expression to remove everything but the solution for the last variable, u. After
doing this, the Calc window appears as in Figure 2-34.

Figure 2-34: The solution for u has been isolated.

We can simplify the solution for u further. A naive attempt to apply the simplify and normal-
ize-rational commands to the entire expression at once results in Calc reporting a “division by 0”
error. But the expression can be simplified. By simplifying and normalizing the equation piece by
piece using selections, and by collecting terms for x, y, z, ux, uy, and uz, the expression finally sim-
plifies to the form shown in Figure 2-35. I encourage you to try to arrive at the same result yourself
by using Calc.

Figure 2-35: The solution for u has been simplified.

By repeating the solution procedure for variables v and w, we arrive at the following simpli-
fied solutions for u, v, and w.
Equation 2-23

These equations are the solution to the world-to-texture-space problem. Given variables x, y, and z
representing world space coordinates, and a texture space definition as specified by the O point
and vectors U, V, and W, these equations give us the equivalent coordinates u, v, and w in texture
space. As you can see, the solution is not completely intuitive, and would have been rather tedious
to derive by hand.
There is still more structure to be discovered in these equations. Notice the recurring pairs of
differences which appear: (vz wy – vy wz), (vx wz – vz wx), and so forth. These terms are actually
components of cross product vectors. In particular, the solution for u contains the components of the cross product vector V × W; the solution for v contains the components of U × W; the solution for w contains the components of V × U. The denominator in each case is U·(V × W). Making this simplification, and recasting the equation into matrix form, gives us Equation 2-24. The subscripts on each cross product expression mean to take the corresponding component of the vector after the cross product operation. For instance, (V × W)x means to take the cross product of vectors V and W, and to use the x component of the resulting vector.
Equation 2-24

Verify that this matrix form of the equation, using cross products, is the same as the Calc solution
by performing the matrix multiplication, expanding the cross products, and dividing each compo-
nent of the resulting solution vector (u, v, and wtex) by the homogeneous component whom. We
have subscripted the w texture component and the homogeneous w divisor with tex and hom to dis-
tinguish their separate functions; we use the homogeneous whom component to effect a division
on all terms, since the solution equations for u, v, and w, as shown by Calc, all have a common
denominator.
Therefore, the matrix in Equation 2-24 converts from world space coordinates to texture
space coordinates; by multiplying a column vector in world coordinates with the matrix, we obtain
a new column vector with the texture coordinates. Since this is a useful operation, we have encap-
sulated it into an l3d class: class l3d_texture_computer. Any class needing the ability to
perform texture related computations inherits from this class. The method world_to_tex_
matrix takes an object of type l3d_texture_space as a parameter, and returns the matrix,
which converts coordinates from world coordinates into the texture space of the given
l3d_texture_space object. The member variables fovx, fovy, screen_xsize, and
screen_ysize are pointers to external integers specifying the horizontal and vertical screen
size and field of view terms. We need these variables later within the Mesa 3D rasterizer to per-
form a reverse projection.
Remember that with 2D textures, we generally disregard the third texture coordinate. This
means that when converting from world space to texture space, we can discard the third compo-
nent of the resulting vector; only the first two components, u and v, are relevant for 2D image
textures.
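As a rough sketch of the underlying computation—not the actual world_to_tex_matrix code, and using plain float vectors instead of the l3d math types—the world-to-texture conversion for a single point can also be written directly with dot and cross products; solving P = O + uU + vV + wW for u and v this way computes the same (u,v) result as the matrix approach:

#include <cstdio>

struct vec3 { float x, y, z; };

static vec3 sub(vec3 a, vec3 b) { return vec3{ a.x-b.x, a.y-b.y, a.z-b.z }; }
static vec3 cross(vec3 a, vec3 b) {
  return vec3{ a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
static float dot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Solve P = O + u*U + v*V + w*W for (u,v); w is ignored for 2D textures.
static void world_to_tex(vec3 P, vec3 O, vec3 U, vec3 V, vec3 W,
                         float *u, float *v)
{
  vec3 d = sub(P, O);                 // vector from the texture origin to the point
  float det = dot(U, cross(V, W));    // common denominator U . (V x W)
  *u = dot(d, cross(V, W)) / det;
  *v = dot(d, cross(W, U)) / det;
}

int main() {
  // Texture space: origin at (2,1,0), U along x, V along y, W = U x V along z.
  vec3 O{2,1,0}, U{1,0,0}, V{0,1,0}, W{0,0,1};
  float u, v;
  world_to_tex(vec3{2.5f, 1.25f, 0}, O, U, V, W, &u, &v);
  printf("u=%g v=%g\n", u, v);        // expect u=0.5, v=0.25
  return 0;
}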
Now we know how to convert from texture space to world space, and from world space to tex-
ture space. This is an important step in texture mapping; with this understanding, the relationship
between the texture space and world space is clearly defined, and thus the relationship between the
texture image and the world space is clearly defined. The next step is to understand the relation-
ship between screen space and texture space.

Step 4: Reverse Project from Screen Coordinates into Texture Coordinates

The practical goal of texture mapping is to determine, for each pixel of a polygon being drawn,
which pixel from the texture image should be drawn. We can see this goal as being a conversion
from screen coordinates to texture coordinates. We know the screen coordinates of each pixel we
are drawing on-screen; what we need to know are the texture coordinates in texture space, which
then give us the precise pixel coordinates within the texture image we need to display.
Given the previous section’s matrix converting world coordinates to texture coordinates, it
would seem logical to approach this problem by converting screen coordinates to world coordi-
nates, then from world coordinates to texture coordinates. In theory, this should work fine. But, as
far as I can tell, along that path lies madness, even with the help of a powerful symbolic algebra
tool like Calc. So, let’s assault the problem from a different angle—let’s convert directly from
screen space (pixel coordinates) back into texture space (texture coordinates), skipping the inter-
mediate step of world coordinates. Our strategy is to start with texture space, then convert to world
space, then convert to screen space with a perspective projection. Then, we take the resulting sys-
tem of equations, converting from texture space to screen space, and solve it for the texture space
variables, thus resulting in equations converting from screen space to texture space. Again, we
rely heavily on Calc’s help to save us from the tedium of solving the equations by hand.
Let’s start with the texture space to world space matrix, multiplied with a column vector rep-
resenting a location in texture space. The result is a location in world space.
Equation 2-25

Carrying out the matrix multiplication, we obtain:

Equation 2-26

We have said that for 2D texture images, we don’t use the w texture component, because there is
no color variation along the W axis in texture space. Therefore, the value of w does not affect the
color we obtain from the texture image; any value of w can be used in the equation. The simplest
value of w is 0, which causes the w term to conveniently disappear. After replacing w with 0, we
obtain:
Equation 2-27

We can rewrite this from matrix form into a system of linear equations.
Equation 2-28

Now, let’s apply a simple perspective projection. We do this by dividing x and y by z to yield xp
and yp (the subscript p stands for projected), resulting in Equation 2-29. This simple projection
incorrectly ignores the effect of the field of view parameters and the reversal of the y axis orienta-
tion which actually occurs with the real perspective projection. We can use Equation 2-30 to
convert from the real screen projected coordinates xs and ys, which include the field of view and y
reversal, into the simple projected coordinates xp and yp. Since it is easy to convert between the
real and the simple projected coordinates, and since the math is less tedious if we use the simple
projected coordinates, the rest of the derivation for the texture mapping equations uses xp and yp.
Any general observations about the variables xp and yp apply equally to the variables xs and ys.

Equation 2-29
Equation 2-30

Let’s now rearrange the first and second equations in Equation 2-29 to move the z to the left-hand
side of the equations. This yields:
Equation 2-31
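Based on the description in the following paragraphs (the right-hand sides are those of Equation 2-28, with the z moved over to the left-hand side), this system should read:

\[
\begin{aligned}
z\,x_p &= u\,U_x + v\,V_x + O_x \\
z\,y_p &= u\,U_y + v\,V_y + O_y \\
z      &= u\,U_z + v\,V_z + O_z
\end{aligned}
\]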

This is the system of equations we want to solve. First, let us consider what the equations in their
current form tell us, before solving the equations for other variables. In their current form, this sys-
tem of equations assumes we know the texture coordinates u and v. The equations then return three
values: the screen x coordinate multiplied by the world z coordinate, the screen y coordinate multi-
plied by the world z coordinate, and the world z coordinate. Notice that the 3D world space z
coordinate affects the first two equations. The reason is the perspective projection, which divides
by z. The texture space to world space matrix, developed earlier, relates texture space to non-pro-
jected 3D world coordinates. Therefore, to relate texture space to projected 2D screen
coordinates—which is the main goal, remember—we must “reverse project” the screen coordi-
nates by multiplying them with the world space z coordinate. Thus, the 3D z coordinate, due to the
perspective projection process, plays an important role in texture mapping.
Now that we understand what this system of equations tells us, let’s solve the system for the
variables we want to know. We wish to find the texture coordinates u and v. Since the 3D z coordi-
nate also plays a role here, it is also an unknown. So we solve the system for variables u, v, and z.
Do this by using Calc as described in the previous section, solving the system multiple times, each
time with a different variable as the last variable in the solution vector. After solving the system in
this way and simplifying the results, the final equations are as follows.
Equation 2-32

This is an important equation for texture mapping. It tells us explicitly, for a given projected coor-
dinate on-screen, the corresponding (u,v) coordinates in the texture space.
Notice that the u, v, and z variables are non-linear functions of the projected coordinates xp,
and yp. Non-linear means that a change in xp or yp does not always result in the same amount of
change in the u, v, or z variables. Mathematically, you can recognize this because the independent
variables xp and yp appear in both the numerator and denominator of a fraction, meaning that a
constant change in xp or yp leads to a non-constant change in the dependent variables u, v, or z.
Changing the denominator of a fraction by a constant amount leads to a non-constant change in the
final value; the difference between 1/2 and 1/3 is much greater than the difference between
1/10002 and 1/10003, though the denominator changes by a constant amount of one in both cases.
Geometrically, you can understand this non-linearity by examining Figure 2-36. When a polygon
is tilted away from the projection plane—which is essentially our computer screen—a movement
of one pixel on the screen corresponds to a non-constant movement on the polygon in 3D. This
non-linearity is a fundamental property of the perspective projection process and causes (or
indeed, is) the foreshortening which we observe in ordinary visual perception.

Figure 2-36: u, v, and z are not linear in screen space. Moving one pixel on-screen from a to b does not move you by a constant amount along the polygon, because of the perspective foreshortening.

A further observation about Equation 2-32 is that the numerator and denominator of each
term, taken separately, are indeed linear with respect to xp and yp; it is the result, dividing numera-
tor by denominator, which is non-linear.
Linearity of an equation with respect to xp and yp (and therefore, also with respect to xs and
ys) is important because it allows us to compute the value at a particular screen location based on
the value at a previous screen location. We used the same idea when developing the incremental
algorithm for line drawing: the value for the next pixel could be calculated from the value of the
current pixel. With regard to the texture mapping equations, this means that we can compute an
initial value for the numerator and denominator based on the starting values of xp and yp for the
first pixel of the texture mapped polygon we are drawing. Then, we can incrementally compute
the numerator and denominator (requiring just one addition operation) for all other pixels in the
polygon. We must perform the division, however, to obtain the u and v coordinates.

Step 5: Map Texture Coordinates to Integer Indices and Draw

After computing the (u,v) coordinates from the screen coordinates with the equation from the pre-
vious section, we map the real-valued (u,v) coordinates to integer indices within the 2D texture
map image. The coordinates u and v each go from 0 to 1, with the convention that the texture
image exists within the square in texture space defined by corner points (0,0) and (1,1). Outside of
the square (0,0) to (1,1)—that is, for u and v values greater than one or less than zero—we simply
regard the texture as being duplicated.
The coordinates within the texture map image are integer coordinates from 0 to width–1 hori-
zontally and from 0 to height–1 vertically, where width and height are both powers of two. So to
map from the real-valued (u,v) to the indices within the texture map, we simply multiply u by
width and v by height, yielding integer coordinates (i,j) within the texture map image. If u or v
were greater than one or less than zero, this could lead to integer texture map indices lying outside
of the texture image; to correct this and allow for repetition of the texture, we simply perform a
logical bit-wise AND of i and j with width–1 and height–1. This has the effect of causing any val-
ues greater than width or height to “wrap around” back to zero again. Notice that this only works
because width and height are powers of two; with powers of two, the AND operation is essentially
a modulo operation, which gives us the value of the remainder after a division. Also, note that this
wrap-around behavior helps compensate for the fact that pixels slightly outside of the polygon
might be drawn due to numerical inaccuracy inherent to the DDA polygon drawing algorithm. If a
pixel lies outside of the polygon, then a computed texture coordinate will lie outside of the texture
image, but the wrap-around behavior allows us still to retrieve a valid texel from the texture.
Instead of wrapping around, we could also clamp the values to force them to lie within the texture
image.
Given the integer coordinates (i,j) within the texture image, we simply use the color at this
location in the image (stored within the l3d_texture_data class) for drawing the current
pixel of the texture mapped polygon being rasterized on-screen.
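A minimal sketch of this index computation and texel lookup (texel_at is an illustrative helper, not an l3d routine; it assumes power-of-two texture dimensions and a simple array of texels):

#include <cstdio>

// Map real-valued (u,v) to a texel index in a width x height image, where
// width and height are powers of two. The bitwise AND wraps out-of-range
// indices back into the image, repeating the texture.
unsigned long texel_at(const unsigned long *texels, int width, int height,
                       float u, float v)
{
  int i = (int)(u * width)  & (width  - 1);
  int j = (int)(v * height) & (height - 1);
  return texels[j * width + i];
}

int main() {
  unsigned long tex[4 * 4];
  for (int k = 0; k < 16; k++) tex[k] = k;              // dummy 4x4 texture
  printf("%lu\n", texel_at(tex, 4, 4, 0.26f, 0.51f));   // row 2, column 1 -> 9
  printf("%lu\n", texel_at(tex, 4, 4, 1.26f, 0.51f));   // u > 1 wraps around -> 9
  return 0;
}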

An Optimized Texture Mapping Strategy: u/z, v/z, 1/z


The previous sections described a pixel-level strategy for texture mapping. During rasterization of
a polygon, we directly calculate from the screen coordinates the corresponding texture coordi-
nates for each pixel.
An alternative strategy is a vertex-level approach, which we hinted at earlier. With a ver-
tex-level approach, we only explicitly compute texture coordinates for the vertices of the polygon.
We perform this computation in world space, by using the world space to texture space matrix
developed earlier. Then, when drawing the polygon, we more or less interpolate the texture coor-
dinates between the vertices. We say “more or less” because we actually do not interpolate the
texture coordinates u and v themselves, but rather the related quantities u/z and v/z. Let’s look at
the texture mapping equations again to see where these u/z and v/z terms come from.
If we look at Equation 2-32, which converts projected coordinates into texture coordinates,
we notice that the equations for u, v, and z all look similar. In particular, we can relate the equations
for u and v with the equation for z. The denominator of z is the same as the denominator of u and v;
therefore, u and v can be expressed as their numerators multiplied by z, and finally divided by the
numerator of z. This is shown in Equation 2-33.

Equation 2-33

We can rearrange the equations as follows.


Equation 2-34

In this form, we can see that the u/z, v/z, and 1/z terms are all linear functions of xp and yp. This
means that calculating the next value of u/z, v/z, or 1/z for the next pixel location can be done with
a simple addition to the value at the current pixel location. Furthermore, by dividing u/z by 1/z, we
arrive at the desired texture coordinate u; similarly, by dividing v/z by 1/z, we arrive at v.
One relevant question is in which coordinate space z should be. Two obvious choices come to
mind: the world space z coordinate or the camera space z coordinate. The best coordinate space to
use for the z value is the camera space z coordinate. This is because the world space z coordinate
can vary wildly; its values can be extremely large, extremely small, positive, or negative; the
world z coordinate depends on the location of the object within world space, which can be com-
pletely arbitrary. On the other hand, the camera space z coordinate is a little more restrictive; it
cannot be negative, has a minimum value (the near z clipping plane), and has a maximum value
beyond which objects are too far from the camera to be seen. These additional restrictions on the
possible values of the camera space z coordinate make it more amenable for use within the texture
mapping computations.
The linearity of u/z, v/z, and 1/z suggests to us the following texture mapping strategy.
1. Define a texture and a texture space.
2. For each polygon, compute u and v for each vertex index of the polygon, using the world
space to texture space matrix. Store these u and v values in the new tex_coord member of
the derived l3d_polygon_ivertex_textured class, covered in the next section.
3. After transforming polygons into camera space, also save the camera space z coordinate in the
tex_coord member of l3d_polygon_ivertex_textured.
4. During rasterization, compute u/z, v/z, and 1/z for each vertex index of the polygon.
5. While stepping vertically between scanlines and horizontally between pixels, interpolate the
u/z, v/z, and 1/z values. Do this by maintaining two sets of variables, one set for the left side
and one set for the right side of the polygon. The left side of the polygon is always being
drawn from a starting left vertex to an ending left vertex; the same for the right side of the
polygon. Each vertical scanline step moves the current pixel from the starting vertex closer to
the ending vertex. Thus, we also move from the starting u/z to the ending u/z; similarly for v/z
and 1/z. This means that for each scanline, we always have a current u/z, v/z, and 1/z value for
the left side and the right side of the polygon. Then, for each pixel within the scanline, we
interpolate from the left u/z, v/z, and 1/z values to the right u/z, v/z, and 1/z values. This gives
us a u/z, v/z, and 1/z value for every pixel of the polygon we rasterize. See Figure 2-37.

Figure 2-37: Interpolating u/z, v/z, and 1/z vertically and horizontally between vertices to arrive at a u/z, v/z, and 1/z value for each pixel.

6. Before drawing each pixel, divide the current u/z by 1/z, and divide v/z by 1/z. This returns the
u and v values for the current pixel. Scale these to integer texture map coordinates, extract the
appropriate pixel color from the texture map, and draw the current pixel with the color from
the texture map.
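To make steps 5 and 6 concrete, here is a simplified, floating-point sketch of the horizontal part of this interpolation for a single scanline span. It assumes the left-edge and right-edge u/z, v/z, and 1/z values have already been interpolated vertically, and it only prints the recovered (u,v) instead of fetching and plotting texels. The function and variable names are illustrative, not those of the l3d rasterizer.

#include <cstdio>

// Interpolate u/z, v/z, and 1/z across one horizontal span, recovering
// (u,v) at every pixel with one divide (the reciprocal of 1/z).
void draw_span(int x_left, int x_right,
               float uoz_l, float voz_l, float ooz_l,   // u/z, v/z, 1/z at left edge
               float uoz_r, float voz_r, float ooz_r)   // u/z, v/z, 1/z at right edge
{
  int count = x_right - x_left;
  if (count <= 0) return;

  // Per-pixel increments: u/z, v/z, and 1/z are linear in screen x.
  float duoz = (uoz_r - uoz_l) / count;
  float dvoz = (voz_r - voz_l) / count;
  float dooz = (ooz_r - ooz_l) / count;

  float uoz = uoz_l, voz = voz_l, ooz = ooz_l;
  for (int x = x_left; x < x_right; x++) {
    float z = 1.0f / ooz;         // one divide per pixel
    float u = uoz * z;            // u = (u/z) / (1/z)
    float v = voz * z;            // v = (v/z) / (1/z)
    printf("x=%d u=%.3f v=%.3f\n", x, u, v);  // here we would fetch and plot the texel
    uoz += duoz; voz += dvoz; ooz += dooz;
  }
}

int main() {
  // Span from x=10 to x=14; left vertex at z=2 with (u,v)=(0,0),
  // right vertex at z=4 with (u,v)=(1,1).
  draw_span(10, 14, 0.0f/2, 0.0f/2, 1.0f/2, 1.0f/4, 1.0f/4, 1.0f/4);
  return 0;
}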

The Division Operation and Texture Mapping


Notice that regardless of whether we use the pixel-level or vertex-level approach to texture map-
ping, we must perform a divide to arrive at the u and v values. With the pixel-level approach, we
must divide the numerator by the denominator for the u and v terms; with the vertex-level
approach, we must divide u/z and v/z by 1/z. Thus, the pixel-level and the vertex-level approach
are mathematically equivalent; it is just that with the vertex-level approach, we have made the
relationship between u, v, and z more explicit.
The divide operation is time consuming. To perform texture mapping such that each pixel has
the correct u and v values, we must perform at least one division per pixel. (We say one division
and not two because we divide the u and v terms by the same denominator, meaning that we can
compute the reciprocal of the denominator with one divide operation, then multiply the u and v
terms with the precomputed reciprocal.)
It is not necessarily an absolute requirement that each and every pixel have the correct u and v
values; as long as most of the values are correct or not too far from correct, the image will still
likely be acceptable. Thus, one way of reducing the number of required divide operations, with a
slight reduction in image correctness, is only to compute the true u and v values every so many
pixels—say, every 16 pixels. Let’s call each group of 16 horizontally adjacent pixels a run of pix-
els. Then, at the beginning and end of each run, the u and v values are correct, because we compute
them as usual. For the pixels within a run, we linearly interpolate the u and v values between the
start of the run and the end of the run. The interpolated u and v values will not be the correct u and v
values as dictated by the texture mapping equations, because we cannot correctly interpolate u and
v in screen coordinates. However, they will not be “too far” from the original values, since we
know that the endpoints of the run are correct. The smaller the runs, the less noticeable the error,
but the larger the runs, the fewer divisions we must perform and the better the overall performance
of the texture mapping code.
Figure 2-38: Dividing a horizontal
row of pixels into runs. We only
compute the true u and v values
at the endpoints of the run. In
between the endpoints, we
linearly interpolate.
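A sketch of the same span loop reorganized into runs of 16 pixels follows; the perspective divide happens only at run boundaries, and u and v are linearly interpolated in between. Again, the names (RUN, plot_texel, draw_span_runs) are illustrative, not l3d routines.

#include <cstdio>

const int RUN = 16;   // pixels per run; larger runs mean fewer divides but more error

// Hypothetical pixel plot; a real rasterizer would fetch the texel at (u,v)
// and write it into the frame buffer.
void plot_texel(int x, float u, float v) { printf("x=%d u=%.3f v=%.3f\n", x, u, v); }

// Perspective divide only at run boundaries; linear interpolation of u and v inside each run.
void draw_span_runs(int x_left, int x_right,
                    float uoz, float voz, float ooz,      // u/z, v/z, 1/z at left edge
                    float duoz, float dvoz, float dooz)   // per-pixel increments
{
  for (int x = x_left; x < x_right; x += RUN) {
    int len = (x + RUN <= x_right) ? RUN : (x_right - x);

    // Correct (u,v) at both ends of the run: two divides per run, not one per pixel.
    float z0 = 1.0f / ooz;
    float u0 = uoz * z0, v0 = voz * z0;
    float z1 = 1.0f / (ooz + dooz * len);
    float u1 = (uoz + duoz * len) * z1, v1 = (voz + dvoz * len) * z1;

    // Linear interpolation in between: slightly wrong, but anchored at correct endpoints.
    float du = (u1 - u0) / len, dv = (v1 - v0) / len;
    float u = u0, v = v0;
    for (int i = 0; i < len; i++) {
      plot_texel(x + i, u, v);
      u += du; v += dv;
    }
    uoz += duoz * len; voz += dvoz * len; ooz += dooz * len;
  }
}

int main() {
  // 20-pixel span; left vertex at z=2 with (u,v)=(0,0), right at z=4 with (u,v)=(1,1).
  int count = 20;
  draw_span_runs(0, count,
                 0.0f, 0.0f, 0.5f,
                 (0.25f - 0.0f) / count, (0.25f - 0.0f) / count, (0.25f - 0.5f) / count);
  return 0;
}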

We’ve now covered the theory we need to perform texture mapping. The approach imple-
mented in l3d is the vertex-level approach, interpolating u/z, v/z, and 1/z. Let’s now look at the new
polygon class needed to implement this texture mapping strategy.

Associating Textures with 3D Polygons


The class l3d_polygon_3d_textured is a texture mapped polygon. It is a 3D polygon with
a texture and with texture coordinates stored with each vertex index. The class l3d_poly-
gon_ivertex_textured, inheriting from l3d_polygon_ivertex, allows us to store
the texture coordinates. It is an index to a polygon vertex with an additional member tex_coord
to store a texture coordinate. Note that we store the texture coordinates with the vertex indices and
not with the vertices themselves. The vertices themselves are shared among all polygons in an
object, but the vertex indices belong to each individual polygon. Since the texture coordinates
must be associated with each vertex of a polygon, and since adjacent polygons sharing a vertex
might nevertheless have different texture coordinates at the same vertex, we must store the texture
coordinates with the polygon’s own vertex index and not with the shared vertex. The class
l3d_polygon_ivertex_textured_items_factory is a factory returning items of the
new type l3d_polygon_ivertex_textured; recall, the l3d_list class internally
stores a factory for production of new list items during automatic expansion of the list. The factory
overrides the clone method to copy itself; when copying a polygon, we should also copy the ver-
tex index factory so that the copied polygon can produce the same kind of vertex indices (textured
or non-textured) as the original polygon.
Returning to the class l3d_polygon_3d_textured, the normal constructor replaces the
iv_items_factory, responsible for producing vertex index items for the (inherited) lists
ivertices, clip_ivertices, and temp_clip_ivertices, with the new factory so
that the vertex index lists all now contain the extended vertex indices with texture coordinates.
Therefore, all vertex index lists can store texture coordinates. Only after replacing the factory does
the constructor then call the init method, which then creates the vertex index lists using the new
factory. This is also why the empty base class constructors are called, to prevent the base class
constructors from creating the lists before the new factory is in place. Class l3d_poly-
gon_3d_textured also declares a copy constructor and a clone method, as usual.
The member variable texture stores the texture orientation and data.
The method assign_tex_coords_from_tex_space takes a texture space as a
parameter, computes the texture coordinates based on the world space to texture space matrix, and
stores the resulting texture coordinates with each vertex index in the ivertices list. Notice that
we only use the first and second (u and v) components of the vector, since the third texture coordi-
nate is unimportant for 2D textures.
The overridden methods init_transformed and transform perform the same trans-
formation functions as in the parent class l3d_polygon_3d, but have been extended also to
initialize and transform the O, U, and V member variables as well. This means that any transforma-
tion of the polygon also transforms the polygon’s associated texture space, as well.
The overridden method after_camera_transform copies the current camera space z
coordinate of each vertex into the corresponding vertex index object. We store the z coordinate in
the tex_coord.Z_ element of the vertex index. This means that the tex_coord.X_ and
tex_coord.Y_ elements are the (u,v) texture coordinates for the vertex index; the
tex_coord.Z_ element is not the third texture coordinate w, which we don’t need, but is
instead the camera space z value, which we do need.
The overridden methods clip_segment_to_near_z and clip_near_z perform the
same functions as in the parent class l3d_polygon_3d, but have been extended also to clip the
u and v coordinates against the near z plane. Near z clipping has the effect of finding edges which
cross the near z plane, and clipping these edges to become shorter so that they do not recede
beyond the near z plane. This means that the 3D location of the endpoints of the clipped edges
changes. Since we have now associated texture coordinates with the endpoints of these edges,
clipping off the endpoints and replacing them with new endpoints means that we also must clip the
texture coordinates and assign them to the new clipped endpoint. This is very simple; during nor-
mal clipping of the geometry, we already compute a time parameter t indicating the “time” of the
intersection, which is a relative value indicating the percentage displacement between the starting
point and the ending point. We multiply the time parameter with the x, y, and z displacements to
find the exact location of the clipped point. To clip the u and v values, we also use the same time
parameter, and multiply it with the u displacement and v displacement to obtain the (u,v) coordi-
nates at the new clipped point.
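In symbols, if t is the intersection parameter along the crossing edge, running from the vertex with texture coordinates (u0, v0) to the vertex with texture coordinates (u1, v1), the texture coordinates assigned to the new clipped endpoint are simply:

\[
u' = u_0 + t\,(u_1 - u_0), \qquad v' = v_0 + t\,(v_1 - v_0)
\]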
Similarly, the overridden methods clip_segment_to_edge_2d and clip_to_
edge_2d have been extended not only to clip the polygon in 2D, but also to clip the texture coor-
dinates in 2D. The only tricky part is that we cannot clip the u and v values directly in 2D; instead,
we must compute the u/z, v/z, and 1/z values, clip these by using the same time parameter t used to
clip the geometry, then divide u/z and v/z by 1/z to recover the clipped u and v values. This is differ-
ent than the near z clipping case; near z clipping occurs in 3D, where the u and v values have a
linear relationship with the 3D x, y, and z values, as evidenced by the linear equations in the world
to texture space matrix. But for 2D clipping, the clipping occurs in 2D screen coordinates; the u
and v values do not have a linear relationship to the screen coordinates, but the u/z, v/z, and 1/z val-
ues are linear in screen space. This is why we clip these values, and not the original u and v values,
in 2D.
The method clip_to_plane, which clips the polygon against an arbitrary plane in 3D
(contrast with the 2D clipping of the previous paragraph), is also overridden to clip the texture
coordinates in 3D in addition to the geometry. Since there is ordinarily a linear relationship
between texture space and 3D world space (unless we manually assign texture coordinates; see
Chapter 3), we directly clip the u and v values in 3D, in contrast to the u/z, v/z, and 1/z clipping nec-
essary in 2D. As we saw earlier, after clipping the crossing polygon segments in 3D against the
clipping plane, the clipping plane stores an intersection_t parameter representing the per-
centage displacement of the new clipped point along the path from the start point in the crossing
segment to the end point. To clip the u and v values in 3D, we simply use this same intersec-
tion_t parameter to compute new u and v values as the same percentage displacement along the
path from the starting (u,v) values at the first vertex in the crossing segment to the ending (u,v) val-
ues at the end vertex in the crossing segment. In other words, to perform clipping in 3D, we simply
linearly interpolate the (u,v) values just as we do with the world coordinate locations.
The overridden method draw causes the polygon to draw itself; the polygon, in turn, asks its
rasterizer to draw the polygon. This means that all existing code which uses polygons—such as
the entire 3D object class—can handle texture mapped polygons with no change; the virtual call to
the draw routine always gets bound at run time to the correct polygon-specific drawing routine.
Now, let’s look at the rasterization classes that actually draw the polygons on-screen.

Rasterization Classes for 3D Polygons


We’ve now seen everything we need to store the data for texture mapping. It’s now time to see the
actual texture mapping routines themselves. These are located within the rasterizer classes.

An Abstract 3D Rasterizer: l3d_rasterizer_3d


Class l3d_rasterizer_3d is a 3D rasterizer class that forwards requests for rasterization on
to a 3D rasterizer implementation. (Recall that the bridge design pattern separates the rasterizer
and its implementation.)
Listing 2-8: rast3.h
#ifndef __RAST3_H
#define __RAST3_H
#include "../tool_os/memman.h"

#include "rasteriz.h"
#include "../tool_2d/scrinfo.h"

#include "../geom/polygon/p3_flat.h"
#include "../geom/texture/texture.h"

class l3d_polygon_3d_textured;
class l3d_polygon_3d_textured_lightmapped;

class l3d_rasterizer_3d_imp :
virtual public l3d_rasterizer_2d_imp,
virtual public l3d_texture_computer
{
public:
l3d_rasterizer_3d_imp(int xs, int ys, l3d_screen_info *si);

/* virtual */ void clear_buffer(void) {};

virtual void draw_polygon_textured(const l3d_polygon_3d_textured *p_poly);


virtual void draw_polygon_textured_lightmapped
(const l3d_polygon_3d_textured_lightmapped *p_poly);
virtual void draw_text(int x, int y, const char *string) {};
virtual void set_text_color(int red, int green, int blue) {};
};

class l3d_rasterizer_3d : public l3d_rasterizer_2d {


protected:
l3d_rasterizer_3d_imp *imp3d;

public:
l3d_rasterizer_3d(l3d_rasterizer_2d_imp *i2,
l3d_rasterizer_3d_imp *i3) :
l3d_rasterizer_2d(i2)
{
imp3d = i3;
}
virtual ~l3d_rasterizer_3d(void) {}

void draw_point( int x , int y , unsigned long col ) {


imp3d->draw_point(x,y,col);
}

/* virtual */ void draw_line( int x0 , int y0 , int x1 , int y1 , unsigned long col ) {
imp3d->draw_line(x0,y0,x1,y1,col);
}

/* virtual */ void clear_buffer(void) {


imp3d->clear_buffer();
}

/* virtual */ void draw_polygon_flatshaded( const l3d_polygon_3d_flatshaded


*p_poly )
{imp3d->draw_polygon_flatshaded(p_poly); }

virtual void draw_polygon_textured(const l3d_polygon_3d_textured *p_poly)


{imp3d->draw_polygon_textured(p_poly); }

virtual void draw_polygon_textured_lightmapped


(const l3d_polygon_3d_textured_lightmapped *p_poly)
{imp3d->draw_polygon_textured_lightmapped(p_poly); }

void draw_text(int x, int y, const char *string)


{imp3d->draw_text(x, y, string); }

};

class l3d_rasterizer_3d_imp_factory {
public:
virtual l3d_rasterizer_3d_imp *create(int xs, int ys, l3d_screen_info *si)=0;

};

#ifndef __ACTIVE__P3_TEX_H
#ifndef __ACTIVE__P3_CLIP_H
#include "../geom/polygon/p3_tex.h"
#include "../geom/polygon/p3_ltex.h"
#endif
#endif

#endif

Listing 2-9: rast3.cc


#include "rast3.h"
#include "../tool_os/memman.h"

l3d_rasterizer_3d_imp::l3d_rasterizer_3d_imp
(int xs, int ys, l3d_screen_info *si) :
l3d_rasterizer_2d_imp(xs,ys,si)
{}

void l3d_rasterizer_3d_imp::draw_polygon_textured
(const l3d_polygon_3d_textured *p_poly)
{}

void l3d_rasterizer_3d_imp::draw_polygon_textured_lightmapped
(const l3d_polygon_3d_textured_lightmapped *p_poly)
{}

The member variable imp3d is a pointer to a 3D rasterizer implementation object, which actually
carries out the rasterizer’s drawing requests. The 3D rasterizer implementation object is of
(abstract) type l3d_rasterizer_3d_imp, and offers the new methods draw_poly-
gon_textured and draw_polygon_textured_lightmapped. These methods are
empty in the abstract l3d_rasterizer_3d_imp class; they are overridden in the actual con-
crete 3D rasterizer implementation (see next section).

NOTE The factory manager class l3d_factory_manager_v_0_2, which we saw earlier


in this chapter, creates and manages an object of type l3d_rasterizer_3d_imp. Thus, it
is through the factory manager that the application chooses which 3D rasterizer implementa-
tion (software or OpenGL/Mesa) to use.

The overridden methods draw_point, draw_line, clear_buffer, and draw_poly-


gon_flatshaded perform the same functions as in the parent class l3d_rasterizer_2d,
only the requests are forwarded to the 3D rasterizer implementation (imp3d) instead of to the 2D
rasterizer (imp2d).
The methods draw_polygon_textured and draw_polygon_textured_
lightmapped provide the new functionality offered by the 3D rasterizer interface. These methods
draw a textured and a light mapped polygon, respectively, by forwarding the request on to the
rasterizer implementation.
The new methods set_text_color and draw_text are used for drawing text strings
into the rasterizer’s buffer for display. We haven’t needed text output until now, but it can be useful to dis-
play diagnostic or other messages to the screen; since the screen contents are controlled by the
rasterizer, we need an interface in the rasterizer (and in the rasterizer implementation) offering text
drawing capabilities. This is exactly what these virtual text functions do; they are overridden in the
concrete rasterizer implementation subclasses covered below.
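To illustrate how these pieces fit together, the following sketch assembles a rasterizer by hand; it is only an illustration, since the sample programs obtain the implementation object through the factory manager rather than instantiating a concrete factory directly:

//- Sketch only: wiring a rasterizer interface to a concrete implementation.
//- In the sample programs, the factory manager chooses between the software
//- and Mesa factories at run time; the direct instantiation here is illustrative.
l3d_rasterizer_3d *make_rasterizer(int xsize, int ysize, l3d_screen_info *si)
{
  l3d_rasterizer_3d_imp_factory *factory = new l3d_rasterizer_3d_sw_imp_factory;
  l3d_rasterizer_3d_imp *imp3d = factory->create(xsize, ysize, si);

  //- imp3d is-a l3d_rasterizer_2d_imp as well (virtual inheritance), so the
  //- same implementation object can serve both roles of the constructor.
  return new l3d_rasterizer_3d(imp3d, imp3d);
}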

A Software 3D Rasterizer Implementation: l3d_rasterizer_3d_sw_imp
The class l3d_rasterizer_3d_sw_imp is a software implementation of the rasterization
services required by class l3d_rasterizer_3d.
The main function of interest is the new function draw_polygon_textured.
Fundamentally, it draws a polygon in much the same manner as we draw flat-shaded polygons: we
proceed from top to bottom, rasterizing each edge on the left and right sides of the polygon simul-
taneously. Between the left edge and the right edge we draw a horizontal span of pixels. Figure
2-39 summarizes the idea behind rasterizing flat-shaded polygons. The current left edge and cur-
rent right edge are drawn in bold; the left edge goes horizontally from left_x_start to
left_x_end; the right edge, from right_x_start to right_x_end. The current scanline
is the horizontal line labeled with y. For the current scanline, we have the current value of the left
edge left_x, the current value of the right edge right_x, and the current horizontal position
x. Within the scanline we move from left_x to right_x by increments of one pixel. After fin-
ishing drawing the current scanline, we move down to the next scanline, labeled with y+1; at this
point, we also move the left edge horizontally by left_dx_dy and the right edge by
right_dx_dy. After finishing drawing the left edge and the right edge, the left edge reaches the
horizontal value left_x_end; the right edge reaches right_x_end.

Figure 2-39: Variables for rasterizing flat-shaded polygons.

The main difference between the previous flat-shaded rasterization code and the current tex-
ture mapping code is that now, while stepping along the edges vertically and the span horizontally,
we must also keep track of the current u/z, v/z, and 1/z values. We do this using two sets of vari-
ables, as mentioned earlier: one for the left edge of the polygon and one for the right edge of the
polygon. Recall, the polygon drawing code works by searching simultaneously for edges of
non-zero height on the left and right sides of the polygon, and by drawing horizontal spans of
pixels from top to bottom between the left and right edges. With texture mapping, as soon as we
find an edge to be drawn for the left or the right side, we immediately make note of the u/z, v/z, and
1/z values for both the starting vertex and the ending vertex of the edge. Then, we compute three
delta values indicating how much the u/z, v/z, and 1/z values change from one scanline to the next
by dividing the total change by the height of the edge. In this manner, we always know the u/z, v/z,
and 1/z values at the left and right sides of the current scanline. Then, within the scanline, we draw
one horizontal span of pixels from the left edge to the right edge. Based on the left and right u/z,
v/z, and 1/z values, and the length of the horizontal span in pixels, we can interpolate the values
from the left edge to the right edge. This gives us a u/z, v/z, and 1/z value for each pixel. We then
simply divide u/z by 1/z and v/z by 1/z to obtain the u and v values, scale these real-valued u and v
values to integer texture map coordinates, retrieve the pixel from the texture map, and draw the
pixel on-screen. We actually divide the pixels into horizontal runs of 16 pixels, only computing the
u and v values every 16th pixel and linearly interpolating the u and v values between the ends of
the run.
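Stripped of the fixed-point arithmetic, sub-pixel correction, and 16-pixel runs that appear in the full listing below, the inner span loop amounts to the following floating-point sketch:

//- Simplified floating-point sketch of one horizontal span. lx and rx are the
//- current left and right screen x values; the remaining parameters are the
//- u/z, v/z, and 1/z values already interpolated down the two edges.
void draw_span_sketch(int lx, int rx,
                      float l_uz, float r_uz,
                      float l_vz, float r_vz,
                      float l_iz, float r_iz)
{
  int dx = rx - lx;
  if(dx <= 0) return;
  float duz = (r_uz - l_uz) / dx;
  float dvz = (r_vz - l_vz) / dx;
  float diz = (r_iz - l_iz) / dx;

  float uz = l_uz, vz = l_vz, iz = l_iz;
  for(int x = lx; x < rx; x++) {
    float z = 1.0f / iz;  //- one divide per pixel in this naive version
    float u = uz * z;     //- recover the perspective-correct u and v
    float v = vz * z;
    //- ...scale u and v to integer texel indices, fetch the texel, write the pixel...
    uz += duz; vz += dvz; iz += diz;
  }
}

The actual routine avoids the per-pixel division by performing it only at the ends of each 16-pixel run and interpolating u and v linearly within the run.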
See Figure 2-40 for a summary of the new variables needed while rasterizing texture mapped
polygons. Notice that this is simply an extension of the flat-shaded case; we keep track of addi-
tional variables at the starting and ending vertices of the left and right edges, interpolate these
values vertically between scanlines to obtain values for the left and right edges of the scanline, and
also interpolate these values horizontally within the scanline between the left and right values of
the current scanline.

Figure 2-40: Variables for rasterizing texture mapped polygons.

Listing 2-10: ras3_sw.h


#ifndef __RAS3_SW_H
#define __RAS3_SW_H
#include "../tool_os/memman.h"

#include "ras_sw.h"
#include "rast3.h"
#include "math.h"

#include "../system/sys_dep.h"

class l3d_rasterizer_3d_sw_imp :
virtual public l3d_rasterizer_2d_sw_imp,
virtual public l3d_rasterizer_3d_imp
{
protected:
void compute_surface_orientation_and_size(void);
unsigned char *text_color;
public:
l3d_rasterizer_3d_sw_imp(int xs, int ys, l3d_screen_info *si);
virtual ~l3d_rasterizer_3d_sw_imp(void);

/* virtual */ void draw_polygon_textured(const l3d_polygon_3d_textured *p_poly);


/* virtual */ void draw_polygon_textured_lightmapped
(const l3d_polygon_3d_textured_lightmapped *p_poly);
/* virtual */ void draw_text(int x, int y, const char *text);
/* virtual */ void set_text_color(int red, int green, int blue);
/* virtual */ void clear_buffer(void);

};

class l3d_rasterizer_3d_sw_imp_factory :
public l3d_rasterizer_3d_imp_factory
{
public:
l3d_rasterizer_3d_imp *create(int xs, int ys, l3d_screen_info *si);
};

#endif

Listing 2-11: ras3_sw.cc


#include "ras3_sw.h"
#include "../system/sys_dep.h"
#include <string.h>
#include <stdlib.h>
#include "../tool_os/memman.h"

static const unsigned char period[] =


{0x00, 0x00, 0x18, 0x18, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00};

static const unsigned char space[] =


{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00};

static const unsigned char digits[][13] = {


{0x0,0x0, 0x18,0x3C,0x66,0xC3,0xC3,0xC3,0xC3,0xC3,0x66,0x3C,0x18},
{0x0,0x0, 0x7F,0x7F,0xC,0xC,0xC,0xC,0xC,0xC,0x6C,0x3C,0xC},
{0x0,0x0, 0xFF,0xFF,0xC0,0x60,0x30,0x18,0xC,0x6,0xC6,0x66,0x3C},
{0x0,0x0, 0x7E,0xC3,0x3,0x6,0xC,0x38,0xC,0x6,0x3,0xC3,0x7E},
{0x0,0x0, 0x6,0x6,0x6,0x6,0xFF,0xFF,0xC6,0xC6,0x66,0x36,0x1E},
{0x0,0x0, 0x7E,0xFF,0xC3,0x3,0x3,0xFF,0xFE,0xC0,0xC0,0xC0,0xFE},
{0x0,0x0, 0x7E,0xFF,0xC3,0xC3,0xC3,0xFF,0xFE,0xC0,0xC0,0xC0,0x7E},
{0x0,0x0, 0x60,0x60,0x60,0x60,0x30,0x18,0xC,0x6,0x3,0x3,0xFF},
{0x0,0x0, 0x7E,0xFF,0xC3,0xC3,0xC3,0x7E,0xC3,0xC3,0xC3,0xC3,0x7E},
{0x0,0x0, 0x7E,0xFF,0xC3,0x3,0x3,0x7F,0xC3,0xC3,0xC3,0xC3,0x7E}
};

static const unsigned char letters[][13] = {


{0x00, 0x00, 0xc3, 0xc3, 0xc3, 0xc3, 0xff, 0xc3, 0xc3, 0xc3, 0x66, 0x3c, 0x18},
{0x00, 0x00, 0xfe, 0xc7, 0xc3, 0xc3, 0xc7, 0xfe, 0xc7, 0xc3, 0xc3, 0xc7, 0xfe},
{0x00, 0x00, 0x7e, 0xe7, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xe7, 0x7e},
{0x00, 0x00, 0xfc, 0xce, 0xc7, 0xc3, 0xc3, 0xc3, 0xc3, 0xc3, 0xc7, 0xce, 0xfc},
{0x00, 0x00, 0xff, 0xc0, 0xc0, 0xc0, 0xc0, 0xfc, 0xc0, 0xc0, 0xc0, 0xc0, 0xff},
{0x00, 0x00, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xfc, 0xc0, 0xc0, 0xc0, 0xff},
{0x00, 0x00, 0x7e, 0xe7, 0xc3, 0xc3, 0xcf, 0xc0, 0xc0, 0xc0, 0xc0, 0xe7, 0x7e},
{0x00, 0x00, 0xc3, 0xc3, 0xc3, 0xc3, 0xc3, 0xff, 0xc3, 0xc3, 0xc3, 0xc3, 0xc3},
{0x00, 0x00, 0x7e, 0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 0x7e},
{0x00, 0x00, 0x7c, 0xee, 0xc6, 0x06, 0x06, 0x06, 0x06, 0x06, 0x06, 0x06, 0x06},
{0x00, 0x00, 0xc3, 0xc6, 0xcc, 0xd8, 0xf0, 0xe0, 0xf0, 0xd8, 0xcc, 0xc6, 0xc3},
{0x00, 0x00, 0xff, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0},
{0x00, 0x00, 0xc3, 0xc3, 0xc3, 0xc3, 0xc3, 0xc3, 0xdb, 0xff, 0xff, 0xe7, 0xc3},
{0x00, 0x00, 0xc7, 0xc7, 0xcf, 0xcf, 0xdf, 0xdb, 0xfb, 0xf3, 0xf3, 0xe3, 0xe3},
{0x00, 0x00, 0x7e, 0xe7, 0xc3, 0xc3, 0xc3, 0xc3, 0xc3, 0xc3, 0xc3, 0xe7, 0x7e},
{0x00, 0x00, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xfe, 0xc7, 0xc3, 0xc3, 0xc7, 0xfe},
{0x00, 0x00, 0x3f, 0x6e, 0xdf, 0xdb, 0xc3, 0xc3, 0xc3, 0xc3, 0xc3, 0x66, 0x3c},
{0x00, 0x00, 0xc3, 0xc6, 0xcc, 0xd8, 0xf0, 0xfe, 0xc7, 0xc3, 0xc3, 0xc7, 0xfe},

{0x00, 0x00, 0x7e, 0xe7, 0x03, 0x03, 0x07, 0x7e, 0xe0, 0xc0, 0xc0, 0xe7, 0x7e},
{0x00, 0x00, 0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 0xff},
{0x00, 0x00, 0x7e, 0xe7, 0xc3, 0xc3, 0xc3, 0xc3, 0xc3, 0xc3, 0xc3, 0xc3, 0xc3},
{0x00, 0x00, 0x18, 0x3c, 0x3c, 0x66, 0x66, 0xc3, 0xc3, 0xc3, 0xc3, 0xc3, 0xc3},
{0x00, 0x00, 0xc3, 0xe7, 0xff, 0xff, 0xdb, 0xdb, 0xc3, 0xc3, 0xc3, 0xc3, 0xc3},
{0x00, 0x00, 0xc3, 0x66, 0x66, 0x3c, 0x3c, 0x18, 0x3c, 0x3c, 0x66, 0x66, 0xc3},
{0x00, 0x00, 0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 0x3c, 0x3c, 0x66, 0x66, 0xc3},
{0x00, 0x00, 0xff, 0xc0, 0xc0, 0x60, 0x30, 0x7e, 0x0c, 0x06, 0x03, 0x03, 0xff}
};

unsigned char font_bitmaps[256][13];

l3d_rasterizer_3d_imp * l3d_rasterizer_3d_sw_imp_factory::create
(int xs, int ys, l3d_screen_info *si)
{
return new l3d_rasterizer_3d_sw_imp(xs,ys,si);
}

l3d_rasterizer_3d_sw_imp::l3d_rasterizer_3d_sw_imp
(int xs, int ys, l3d_screen_info *si)
: l3d_rasterizer_3d_imp(xs,ys,si),
l3d_rasterizer_2d_sw_imp(xs,ys,si),
l3d_rasterizer_2d_imp(xs,ys,si)
{
text_color = new unsigned char [si->bytes_per_pixel];


set_text_color(si->ext_max_red, si->ext_max_green, si->ext_max_blue);

//- create font bitmaps


int i,j;

for(i=0; i<26; i++) {


for(j=0; j<13; j++) {
font_bitmaps['A' + i][j] = letters[i][j];
font_bitmaps['a' + i][j] = letters[i][j];
}
}

for(i=0; i<10; i++) {


for(j=0; j<13; j++) {
font_bitmaps['0' + i][j] = digits[i][j];
}
}

for(j=0; j<13; j++) {


font_bitmaps[' '][j] = space[j];
}

for(j=0; j<13; j++) {


font_bitmaps['.'][j] = period[j];
}
}

l3d_rasterizer_3d_sw_imp::~l3d_rasterizer_3d_sw_imp(void) {
delete [] text_color;
}

void l3d_rasterizer_3d_sw_imp::clear_buffer(void)
{
l3d_rasterizer_2d_sw_imp::clear_buffer();
}

#define Z_MULT_FACTOR 512


#define Z_ADD_FACTOR -0.42

//- convenience macro for accessing the vertex coordinates. Notice


//- that we access the clipped vertex index list (not the original),
//- and the transformed coordinate (not the original). This means
//- we draw the clipped version of the transformed polygon.
#define VTX(i) ((**(p_poly->vlist))[ (*(p_poly->clip_ivertices))[i].ivertex ].transformed)

#include "../math/fix_lowp.h"
#if 0
#define l3d_fixed float
#define l3d_real_to_fixed(x) (x)
#define fixfixdiv(a,b) ((a)/(b))
#define fixfixmul(a,b) ((a)*(b))
#define int2fix(a) ( (float)(a))
#define fix2int(a) ( (int)(a))
#define fix2float(a) (a)
#define float2fix(a) (a)
#define iceil_fix(a) ( (int)ceil((double)(a)) )

#endif

void l3d_rasterizer_3d_sw_imp::draw_polygon_textured
(const l3d_polygon_3d_textured *p_poly)
{

l3d_fixed x0,y0,x1,y1,x2,y2,x3;
l3d_fixed top_y, bottom_y;
int point_on_right=0;
int left_idx, right_idx, top_y_idx, bottom_y_idx;

int maxy_upper, iceil_fix_y0, iceil_fix_y1, iceil_fix_y2;

l3d_point *vtemp;

int i;

int scanline;

//- variables for the left edge, incremented BETWEEN scanlines


//- (inter-scanline)

l3d_fixed volatile
left_x, left_ui, left_vi, left_zi,
left_x_start, left_y_start, left_ui_start, left_vi_start, left_zi_start,
left_x_end, left_y_end, left_ui_end, left_vi_end, left_zi_end,
left_dx_dy, left_dui_dy, left_dvi_dy, left_dzi_dy;
int
left_ceilx,
left_ceilx_start, left_ceilx_end,
left_ceily_start, left_ceily_end;

//- variables for the right edge, incremented BETWEEN scanlines


//- (inter-scanline)

l3d_fixed volatile
right_x, right_ui, right_vi, right_zi,
right_x_start,right_y_start,right_ui_start,right_vi_start,right_zi_start,
right_x_end, right_y_end, right_ui_end, right_vi_end, right_zi_end,
right_dx_dy, right_dui_dy, right_dvi_dy, right_dzi_dy;

int
right_ceilx,
right_ceilx_start, right_ceilx_end,
right_ceily_start, right_ceily_end;

//- variables incremented WITHIN one scanline (intra-scanline)

l3d_fixed volatile
u,v,z,
ui, vi, zi, dui_dx, dvi_dx, dzi_dx,

//- for linear interp. within one run


u_left, u_right, du_dx,
v_left, v_right, dv_dx, inv_dx,
denom, inv_run_dx,

run_x, run_x_start, run_x_end,


u_run_end, v_run_end, du_dx_run, dv_dx_run;

long int cur_x;

top_y = l3d_real_to_fixed(VTX(0).Y_);
top_y_idx = 0;
bottom_y = top_y;
bottom_y_idx = top_y_idx;
for(i=0; i<p_poly->clip_ivertices->num_items; i++) {
if(l3d_real_to_fixed(VTX(i).Y_) < top_y) {
top_y = l3d_real_to_fixed(VTX(i).Y_);
top_y_idx = i;
}
if(l3d_real_to_fixed(VTX(i).Y_) > bottom_y) {
bottom_y = l3d_real_to_fixed(VTX(i).Y_);
bottom_y_idx = i;
}
}

left_idx = top_y_idx;
right_idx = top_y_idx;

left_x_start=l3d_real_to_fixed(VTX(top_y_idx).X_);
left_ceilx_start=iceil_fix(left_x_start);
left_y_start=l3d_real_to_fixed(VTX(top_y_idx).Y_);
left_ceily_start=iceil_fix(left_y_start);
left_ceily_end=left_ceily_start;
left_zi_start = fixfixdiv(int2fix(Z_MULT_FACTOR),
l3d_real_to_fixed(((l3d_polygon_ivertex_textured *)
& ((*(p_poly->clip_ivertices))[left_idx]))->tex_coord.Z_)
+ float2fix(Z_ADD_FACTOR));
left_zi_end = left_zi_start;
left_ui_start = fixfixmul(
l3d_real_to_fixed(((l3d_polygon_ivertex_textured *)
& ((*(p_poly->clip_ivertices))[left_idx]))->tex_coord.X_),
left_zi_start);
left_ui_end = left_ui_start;
left_vi_start = fixfixmul(
l3d_real_to_fixed(((l3d_polygon_ivertex_textured *)
& ((*(p_poly->clip_ivertices))[left_idx]))->tex_coord.Y_),
left_zi_start);
left_vi_end = left_vi_start;

right_x_start=left_x_start;
right_y_start=left_y_start;
right_ceily_start=left_ceily_start;
right_ceily_end=right_ceily_start;
right_ui_start = left_ui_start;
right_vi_start = left_vi_start;

right_zi_start = left_zi_start;

scanline = left_ceily_start;
int oneshot=1;

while(scanline < ysize) {


//- if needed, find next left-edge whose ceily_end > current scanline
while( left_ceily_end - scanline <= 0 )
{
if (left_idx == bottom_y_idx) {
return;
}
left_idx = p_poly->next_clipidx_left(left_idx,p_poly->clip_ivertices->num_items);
left_y_end=l3d_real_to_fixed(VTX(left_idx).Y_);
left_ceily_end = iceil_fix(left_y_end);
if(left_ceily_end - scanline) {
#define MIN_EDGE_HEIGHT float2fix(0.005)
#define MIN_EDGE_WIDTH float2fix(0.005)
left_x_end=l3d_real_to_fixed(VTX(left_idx).X_);
left_ceilx_end=iceil_fix(left_x_end);

left_zi_end = fixfixdiv(int2fix(Z_MULT_FACTOR),
l3d_real_to_fixed(((l3d_polygon_ivertex_textured *)
& ((*(p_poly->clip_ivertices))[left_idx]))->tex_coord.Z_)
+ float2fix(Z_ADD_FACTOR));
left_ui_end = fixfixmul(
l3d_real_to_fixed(((l3d_polygon_ivertex_textured *)
& ((*(p_poly->clip_ivertices))[left_idx]))->tex_coord.X_),
left_zi_end);
left_vi_end = fixfixmul(
l3d_real_to_fixed(((l3d_polygon_ivertex_textured *)
& ((*(p_poly->clip_ivertices))[left_idx]))->tex_coord.Y_),
left_zi_end);

if(left_y_end - left_y_start >= MIN_EDGE_HEIGHT) {


left_dx_dy = fixfixdiv(left_x_end-left_x_start,left_y_end-left_y_start);

left_dui_dy = fixfixdiv(left_ui_end - left_ui_start,


left_y_end - left_y_start);
left_dvi_dy = fixfixdiv(left_vi_end - left_vi_start,
left_y_end - left_y_start);
left_dzi_dy = fixfixdiv(left_zi_end - left_zi_start,
left_y_end - left_y_start);

}
else
{
left_dx_dy =
left_dui_dy =
left_dvi_dy =
left_dzi_dy = int2fix(0);
}

left_x = left_x_start + //- sub-pixel correction


fixfixmul(int2fix(left_ceily_start)-left_y_start , left_dx_dy);
left_ui = left_ui_start + //- sub-pixel correction
fixfixmul(int2fix(left_ceily_start)-left_y_start,
left_dui_dy);
left_vi = left_vi_start + //- sub-pixel correction
fixfixmul(int2fix(left_ceily_start)-left_y_start,
left_dvi_dy);
left_zi = left_zi_start + //- sub-pixel correction
fixfixmul(int2fix(left_ceily_start)-left_y_start,
left_dzi_dy);
}else {
left_x_start = l3d_real_to_fixed(VTX(left_idx).X_);
left_ceilx_start=iceil_fix(left_x_start);
left_y_start = l3d_real_to_fixed(VTX(left_idx).Y_);
left_ceily_start = iceil_fix(left_y_start);
left_zi_start = fixfixdiv(int2fix(Z_MULT_FACTOR),
l3d_real_to_fixed(((l3d_polygon_ivertex_textured *)
& ((*(p_poly->clip_ivertices))[left_idx]))->tex_coord.Z_)
+ float2fix(Z_ADD_FACTOR));
left_ui_start = fixfixmul(
l3d_real_to_fixed(((l3d_polygon_ivertex_textured *)
& ((*(p_poly->clip_ivertices))[left_idx]))->tex_coord.X_),
left_zi_start);
left_vi_start = fixfixmul(
l3d_real_to_fixed(((l3d_polygon_ivertex_textured *)
& ((*(p_poly->clip_ivertices))[left_idx]))->tex_coord.Y_),
left_zi_start);

}
}

//- if needed, find next right-edge whose ceily_end > current scanline
while(right_ceily_end - scanline <= 0 )
{
if (right_idx == bottom_y_idx) {
return;
}
right_idx = p_poly->next_clipidx_right(right_idx, p_poly->clip_ivertices->num_items);

right_y_end=l3d_real_to_fixed(VTX(right_idx).Y_);
right_ceily_end = iceil_fix(right_y_end);
if(right_ceily_end - scanline) {

right_x_end=l3d_real_to_fixed(VTX(right_idx).X_);
right_ceilx_end = iceil_fix(right_x_end);
right_zi_end = fixfixdiv(int2fix(Z_MULT_FACTOR),
l3d_real_to_fixed(((l3d_polygon_ivertex_textured *)
& ((*(p_poly->clip_ivertices))[right_idx]))->tex_coord.Z_)
+ float2fix(Z_ADD_FACTOR));
right_ui_end = fixfixmul(
l3d_real_to_fixed(((l3d_polygon_ivertex_textured *)
& ((*(p_poly->clip_ivertices))[right_idx]))->tex_coord.X_),
right_zi_end);
right_vi_end = fixfixmul(
l3d_real_to_fixed(((l3d_polygon_ivertex_textured *)
& ((*(p_poly->clip_ivertices))[right_idx]))->tex_coord.Y_),
right_zi_end);

if(right_y_end-right_y_start>=MIN_EDGE_HEIGHT) {
right_dx_dy =
fixfixdiv(right_x_end-right_x_start,right_y_end-right_y_start);

right_dui_dy = fixfixdiv(right_ui_end - right_ui_start,


right_y_end - right_y_start);
right_dvi_dy = fixfixdiv(right_vi_end - right_vi_start,
right_y_end - right_y_start);
right_dzi_dy = fixfixdiv(right_zi_end - right_zi_start,
right_y_end - right_y_start);

}
else
{
right_dx_dy =
right_dui_dy =
right_dvi_dy =
right_dzi_dy = int2fix(0);
}

right_x = right_x_start + //- sub-pixel correction


fixfixmul(int2fix(right_ceily_start)-right_y_start , right_dx_dy);

right_ui = right_ui_start + //- sub-pixel correction


fixfixmul(int2fix(right_ceily_start)-right_y_start,
right_dui_dy);
right_vi = right_vi_start + //- sub-pixel correction
fixfixmul(int2fix(right_ceily_start)-right_y_start,
right_dvi_dy);
right_zi = right_zi_start + //- sub-pixel correction
fixfixmul(int2fix(right_ceily_start)-right_y_start,
right_dzi_dy);

}else {
right_x_start = l3d_real_to_fixed(VTX(right_idx).X_);
right_ceilx_start = iceil_fix(right_x_start);
right_y_start = l3d_real_to_fixed(VTX(right_idx).Y_);
right_ceily_start = iceil_fix(right_y_start);

right_zi_start = fixfixdiv(int2fix(Z_MULT_FACTOR),
l3d_real_to_fixed(((l3d_polygon_ivertex_textured *)
& ((*(p_poly->clip_ivertices))[right_idx]))->tex_coord.Z_)
+float2fix(Z_ADD_FACTOR));
right_ui_start = fixfixmul(
l3d_real_to_fixed(((l3d_polygon_ivertex_textured *)
& ((*(p_poly->clip_ivertices))[right_idx]))->tex_coord.X_),
right_zi_start);
right_vi_start = fixfixmul(
l3d_real_to_fixed(((l3d_polygon_ivertex_textured *)
& ((*(p_poly->clip_ivertices))[right_idx]))->tex_coord.Y_),
right_zi_start);
}
}

if (left_ceily_end > ysize) left_ceily_end = ysize;


if (right_ceily_end > ysize) right_ceily_end = ysize;

while ( (scanline < left_ceily_end) && (scanline < right_ceily_end) )


{
if (left_x > right_x) {
}
else {
left_ceilx = iceil_fix(left_x);
right_ceilx = iceil_fix(right_x);

//- if the segment is (horizontally) visible, draw it.

if(right_x - left_x > MIN_EDGE_WIDTH) {

dui_dx = fixfixdiv(right_ui - left_ui , right_x - left_x);


dvi_dx = fixfixdiv(right_vi - left_vi , right_x - left_x);
dzi_dx = fixfixdiv(right_zi - left_zi , right_x - left_x);

}else {
//- a segment exactly one pixel wide (left_x == right_x)
dui_dx =
dvi_dx =
dzi_dx = int2fix(0);
}

ui = left_ui + //- sub-pixel correction


fixfixmul(int2fix(left_ceilx) - left_x, dui_dx);
vi = left_vi + //- sub-pixel correction
fixfixmul(int2fix(left_ceilx) - left_x, dvi_dx);
zi = left_zi + //- sub-pixel correction
fixfixmul(int2fix(left_ceilx) - left_x, dzi_dx);

cur_x=left_ceilx;

#define LINEAR_RUN_SHIFT 4
#define LINEAR_RUN_SIZE (1<<LINEAR_RUN_SHIFT)
z = fixfixdiv(int2fix(1), zi);
u = fixfixmul(ui,z);
v = fixfixmul(vi,z);

for(/*register */ unsigned char *pix=sinfo->p_screenbuf+(left_ceilx + SW_RAST_Y_REVERSAL(ysize,


scanline)*xsize)*sinfo->bytes_per_pixel;
pix< sinfo->p_screenbuf+(right_ceilx+ SW_RAST_Y_REVERSAL(ysize,
scanline)*xsize)*sinfo->bytes_per_pixel;
)
{

run_x_end = int2fix(cur_x) +
int2fix(LINEAR_RUN_SIZE);
if (run_x_end > int2fix(right_ceilx)) {
run_x_end = int2fix(right_ceilx);
}
denom = fixfixdiv
(int2fix(1) ,
zi + fixfixmul(dzi_dx, run_x_end-int2fix(cur_x)));
u_run_end=fixfixmul
(ui + fixfixmul(dui_dx,(run_x_end-int2fix(cur_x))),
denom) ;
v_run_end=fixfixmul
(vi + fixfixmul(dvi_dx,(run_x_end-int2fix(cur_x))),
denom) ;

inv_run_dx = fixfixdiv
(int2fix(1),
(run_x_end-int2fix(cur_x)));
du_dx_run = fixfixmul((u_run_end - u) ,inv_run_dx);
dv_dx_run = fixfixmul((v_run_end - v) ,inv_run_dx);

for(run_x = int2fix(cur_x);
run_x < run_x_end;
run_x+=int2fix(1))
{

unsigned char *texel =


p_poly->texture->tex_data->data +
((fix2int(l3d_mulri(v,p_poly->texture->tex_data->height-1))&
(p_poly->texture->tex_data->height-1))*(p_poly->texture->tex_data->width)
+
(fix2int(l3d_mulri(u,p_poly->texture->tex_data->width-1))&
(p_poly->texture->tex_data->width-1)) ) *
sinfo->bytes_per_pixel;

for(register int b=0; b<sinfo->bytes_per_pixel;b++) {


*pix++ = *texel++;
}
u += du_dx_run;
v += dv_dx_run;
}

cur_x += LINEAR_RUN_SIZE;
#if 0
ui += dui_dx<<LINEAR_RUN_SHIFT;
vi += dvi_dx<<LINEAR_RUN_SHIFT;
zi += dzi_dx<<LINEAR_RUN_SHIFT;
#else
ui += l3d_mulri(dui_dx, LINEAR_RUN_SIZE);
vi += l3d_mulri(dvi_dx, LINEAR_RUN_SIZE);
zi += l3d_mulri(dzi_dx, LINEAR_RUN_SIZE);
#endif
}
}

scanline++;
left_x += left_dx_dy;
right_x += right_dx_dy;

left_ui += left_dui_dy;

left_vi += left_dvi_dy;
left_zi += left_dzi_dy;
right_ui += right_dui_dy;
right_vi += right_dvi_dy;
right_zi += right_dzi_dy;
}

//- for the left and/or right segment(s) which just completed drawing
//- initialize xxx_start = xxx_end, to begin next segment. xxx_end is
//- then searched for in the next iteration (the while() loops)
if ( left_ceily_end - scanline <= 0 ) {
left_x_start=left_x_end;
left_y_start=left_y_end;
left_ceily_start=left_ceily_end;
left_ui_start=left_ui_end;
left_vi_start=left_vi_end;
left_zi_start=left_zi_end;

}
if ( right_ceily_end - scanline <= 0 ) {
right_x_start=right_x_end;
right_y_start=right_y_end;
right_ceily_start=right_ceily_end;
right_ui_start=right_ui_end;
right_vi_start=right_vi_end;
right_zi_start=right_zi_end;

}
}

}
#include "../math/fix_prec.h"

void l3d_rasterizer_3d_sw_imp::draw_polygon_textured_lightmapped
(const l3d_polygon_3d_textured_lightmapped *p_poly)
{
l3d_texture *old_tex;

if(p_poly->surface==NULL) {
p_poly->surface = p_poly->get_scache()->combine_lightmap_and_texmap
(p_poly->surface_xsize, p_poly->surface_ysize, p_poly);
}

p_poly->surface->O = p_poly->surface_orientation.O;
p_poly->surface->U = p_poly->surface_orientation.U;
p_poly->surface->V = p_poly->surface_orientation.V;

old_tex = p_poly->texture;
p_poly->texture = p_poly->surface;
draw_polygon_textured(p_poly);
p_poly->texture = old_tex;
}

void l3d_rasterizer_3d_sw_imp::set_text_color(int red, int green, int blue) {

unsigned long native_col = sinfo->ext_to_native


(red, green, blue);

register int i;
unsigned long mask = MAX_BYTE;

char shift = 0;

unsigned char *c = text_color;


for(i=0; i<sinfo->bytes_per_pixel; i++) {
*c++ = (native_col & mask) >> shift;
mask <<= BITS_PER_BYTE;
shift += BITS_PER_BYTE;
}
}

void l3d_rasterizer_3d_sw_imp::draw_text(int x, int y, const char *text) {

#define FONT_Y_SIZE 13
#define FONT_X_SIZE 8
#define FONT_SPACING 2

int i;
int font_y, font_x;
int x_offset;

x_offset = x * sinfo->bytes_per_pixel;
unsigned char *pix;

unsigned char *fg_color;

for(font_y=0; font_y < FONT_Y_SIZE; font_y++) {


pix=sinfo->p_screenbuf+(x_offset +
SW_RAST_Y_REVERSAL(ysize,y + font_y)*xsize)
*sinfo->bytes_per_pixel;

unsigned int c;
for(c=0; c<strlen(text); c++) {
int current_bit;
for(font_x=0, current_bit=0x80;
font_x < FONT_X_SIZE;
font_x++, current_bit >>= 1)
{
if(font_bitmaps[text[c]][FONT_Y_SIZE-font_y] & current_bit) {
fg_color = text_color;
for(register int b=0; b<sinfo->bytes_per_pixel;b++) {
*pix++ = *fg_color++;
}
}else {
pix += sinfo->bytes_per_pixel;
}
}
for(font_x=0; font_x < FONT_SPACING; font_x++) {
pix += sinfo->bytes_per_pixel;
}
}
}
}

Two other items deserve special mention in the texture mapping code. First, all of the real-valued
variables are of type l3d_fixed instead of l3d_real. This means that we explicitly force the
usage of fixed-point values within the texture mapping code. As with l3d_real values, we also
use macros to multiply and divide l3d_fixed values: fixfixmul multiplies two fixed-point values,
fixintmul multiplies a fixed-point value by an integer, and fixfixdiv and fixintdiv perform the
corresponding divisions. Addition and subtraction of l3d_fixed values, as with l3d_real, can be
done with the normal C++ addition and subtraction
operators.
The reason we use fixed-point values in the texture mapping code is speed. Fixed-point arith-
metic represents real-valued numbers as integers (see the Appendix); therefore, the conversion
from a fixed-point number to an integer is quite fast—it is simply a bit shift operation. On the other
hand, converting a floating-point number to an integer is comparatively slow. This might seem
trivial, but we must convert the real-valued (u,v) coordinates to integer texture map indices for
every pixel. In the l3d library, for instance, simply changing from floating point to fixed point dou-
bled the performance of the texture mapping code.
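To make the cost difference concrete, here is an illustrative 16.16 fixed-point layout. This is an assumption for demonstration only; the actual l3d fixed-point format lives in the math headers (fix_lowp.h, fix_prec.h), is covered in the Appendix, and may split the bits differently.

//- Illustrative 16.16 fixed-point helpers. These are NOT the actual l3d macros;
//- they only show why fixed-to-integer conversion is cheap (a bit shift).
typedef long fixed_16_16;
#define DEMO_FIX_SHIFT 16

inline fixed_16_16 float2fix_demo(float f) { return (fixed_16_16)(f * (1 << DEMO_FIX_SHIFT)); }
inline int fix2int_demo(fixed_16_16 a) { return (int)(a >> DEMO_FIX_SHIFT); } //- just a shift
inline fixed_16_16 fixfixmul_demo(fixed_16_16 a, fixed_16_16 b)
{ return (fixed_16_16)(((long long)a * b) >> DEMO_FIX_SHIFT); }
inline fixed_16_16 fixfixdiv_demo(fixed_16_16 a, fixed_16_16 b)
{ return (fixed_16_16)((((long long)a) << DEMO_FIX_SHIFT) / b); }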
The other item of note is the use of the Z_MULT_FACTOR and Z_ADD_FACTOR constants in
the code. Instead of computing 1/z, we compute Z_MULT_FACTOR/(z + Z_ADD_FACTOR). The

1/z values in camera space can be very small; for instance, if z is 1000, then 1/z is 0.001. The
Z_MULT_FACTOR has the effect of scaling the 1/z values to be slightly larger so that interpolation
between the 1/z values occurs between larger values and is thus more accurate. The
Z_ADD_FACTOR has the effect of making all z values slightly smaller before computing 1/z. The
idea is that because of the near z clipping plane, none of the z values will ever be smaller than a cer-
tain value. For instance, if the near z plane is at 5, then z will always be at least 5, meaning that 1/z
is at most 1/5, or 0.2. Even with the Z_MULT_FACTOR, there is still a certain range of larger 1/z
values which never occur due to the near z clipping. The purpose of the Z_ADD_FACTOR is to
move the clipped z values slightly closer to the origin, so that the range of possible 1/z values is
larger. These numerical range considerations are particularly important when using fixed-point
arithmetic since numerical accuracy is always a troublesome problem—troublesome because we
only have a fixed number of bits to store whole and fractional parts, and if our computations are
carelessly formulated, the resulting values will require either more or less precision than we have,
leading to incorrect results. Essentially, therefore, we use the Z_MULT_FACTOR and
Z_ADD_FACTOR to scale our computations such that the results always lie within the fixed preci-
sion of our fixed point values.
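For example, with the near z plane at 5 and the constants from the listing (Z_MULT_FACTOR of 512 and Z_ADD_FACTOR of -0.42), a plain 1/z would only range from 1/5 = 0.2 at the near plane down to 0.001 at z = 1000, wasting most of the available fraction bits. The scaled quantity 512/(z - 0.42) instead ranges from about 512/4.58 = 111.8 at z = 5 down to about 0.51 at z = 1000, a spread that the fixed-point format can represent and interpolate far more accurately.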
The text drawing code in the software rasterizer relies on a table of bitmaps, containing one
bitmap for each character, and the two overridden methods set_text_color and
draw_text. The constructor for the rasterizer implementation creates the table of bitmaps in
variable font_bitmaps. Each character is associated with an 8 by 13 bitmap, stored as a series
of 13 bytes. Each bit in each byte of each character’s bitmap represents one pixel forming the
shape of the character; a value of 1 for the bit means the pixel should be drawn to form the shape of
the character, while a value of 0 means the pixel should not be drawn. The bits are interpreted such
that the most-significant (leftmost) bit is the leftmost pixel to be displayed. Since each character is
eight pixels wide (an admittedly hard-coded assumption in the code), one byte represents the eight
horizontal pixels belonging to each character’s bitmap.


Figure 2-41: The character bitmap for the character “0.” Each horizontal line in the bitmap (left) is
interpreted as a byte, where a lit pixel represents bit value 1 and an unlit pixel represents bit value 0
(middle). The bits for each byte, represented in hex, are shown on the right, and are the actual values
stored in the font bitmap array. We store one such bitmap for each possible character we want to draw
(letters, numbers, punctuation, etc.).

We use the following methods to draw the text. Method set_text_color takes a red,
green, and blue color specification in external format, converts the color to native internal screen
format, then stores the color as a series of bytes in the array text_color. Then, method draw_
text draws a given string using the previously set color. It takes each character in the string to be
printed, finds the appropriate bitmap for the character by using its ASCII code, and, for each bit in
the character’s bitmap with a value of 1, draws the color stored in text_color into the
rasterizer buffer. Thus, the string is drawn character for character, pixel by pixel into the buffer.
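From the application’s point of view, the two methods are used together in the obvious way; a hypothetical snippet (the rasterizer and screen info pointers, the position, and the message are arbitrary) might read:

//- Sketch only: display a diagnostic string through the rasterizer interface.
//- Using ext_max_red from the screen info object gives "full red" regardless
//- of the screen's external color range.
rasterizer->set_text_color(screen_info->ext_max_red, 0, 0);
rasterizer->draw_text(10, 10, "frames per second 30.5");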

A Mesa/OpenGL 3D Rasterizer Implementation: l3d_rasterizer_3d_mesa_imp
Mesa can also draw texture mapped polygons for us. For a computer without hardware accelera-
tion, there is no speed benefit to be gained by using Mesa instead of a custom software texture
mapper, because Mesa must do exactly the same math. The picture looks quite different, though,
with a 3D graphics card. In this case, Mesa simply forwards the request to draw a texture mapped
polygon on to the hardware, which then draws the polygon at speeds impossible to achieve via
software alone. Therefore, let’s now take a look at a Mesa/OpenGL 3D rasterizer implementation.
As usual, the rasterizer implementation is chosen via a factory at the program start, and is plugged
into the rasterizer interface. Application code need not be modified to use either software or hard-
ware rasterizers.
Listing 2-12: ras3_mes.h
#ifndef __RAS3_MES_H
#define __RAS3_MES_H
#include "../tool_os/memman.h"

#include <GL/glut.h>
#include "rast3.h"
#include "ras_mesa.h"

class l3d_rasterizer_3d_mesa_imp :
virtual public l3d_rasterizer_2d_mesa_imp,
virtual public l3d_rasterizer_3d_imp
{
private:

GLuint fontOffset;
void make_raster_font(void);

protected:
void texture_setup(const l3d_polygon_3d_textured *p_poly);
void ogl_texturing_between_glbegin_end
(const l3d_polygon_3d_textured *p_poly);

l3d_real ui_xs, ui_ys, ui_c, vi_xs, vi_ys, vi_c, zi_xs, zi_ys, zi_c,
ui, vi, zi;

unsigned long text_color;

public:
l3d_rasterizer_3d_mesa_imp(int xs, int ys, l3d_screen_info *si);

void draw_point(int x, int y, unsigned long col);


void draw_line(int x0, int y0, int x1, int y1, unsigned long col);
void draw_polygon_flatshaded(const l3d_polygon_2d_flatshaded *p_poly);

void draw_polygon_textured(const l3d_polygon_3d_textured *p_poly);


void draw_polygon_textured_lightmapped
(const l3d_polygon_3d_textured_lightmapped *p_poly);

void draw_text(int x, int y, const char *string);


void set_text_color(int red, int green, int blue);

void clear_buffer(void);
};

class l3d_rasterizer_3d_mesa_imp_factory :
public l3d_rasterizer_3d_imp_factory {
public:
l3d_rasterizer_3d_imp *create(int xs, int ys, l3d_screen_info *si);
};

#endif

Listing 2-13: ras3_mes.cc


#include "ras3_mes.h"
#include <math.h>
#include "../tool_os/memman.h"

static const GLubyte period[] =


{0x00, 0x00, 0x18, 0x18, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00};

static const GLubyte space[] =


{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00};

static const GLubyte digits[][13] = {


{0x0,0x0, 0x18,0x3C,0x66,0xC3,0xC3,0xC3,0xC3,0xC3,0x66,0x3C,0x18},
{0x0,0x0, 0x7F,0x7F,0xC,0xC,0xC,0xC,0xC,0xC,0x6C,0x3C,0xC},
{0x0,0x0, 0xFF,0xFF,0xC0,0x60,0x30,0x18,0xC,0x6,0xC6,0x66,0x3C},
{0x0,0x0, 0x7E,0xC3,0x3,0x6,0xC,0x38,0xC,0x6,0x3,0xC3,0x7E},
{0x0,0x0, 0x6,0x6,0x6,0x6,0xFF,0xFF,0xC6,0xC6,0x66,0x36,0x1E},
{0x0,0x0, 0x7E,0xFF,0xC3,0x3,0x3,0xFF,0xFE,0xC0,0xC0,0xC0,0xFE},
{0x0,0x0, 0x7E,0xFF,0xC3,0xC3,0xC3,0xFF,0xFE,0xC0,0xC0,0xC0,0x7E},
{0x0,0x0, 0x60,0x60,0x60,0x60,0x30,0x18,0xC,0x6,0x3,0x3,0xFF},
{0x0,0x0, 0x7E,0xFF,0xC3,0xC3,0xC3,0x7E,0xC3,0xC3,0xC3,0xC3,0x7E},
{0x0,0x0, 0x7E,0xFF,0xC3,0x3,0x3,0x7F,0xC3,0xC3,0xC3,0xC3,0x7E}
};

static const GLubyte letters[][13] = {


{0x00, 0x00, 0xc3, 0xc3, 0xc3, 0xc3, 0xff, 0xc3, 0xc3, 0xc3, 0x66, 0x3c, 0x18},
{0x00, 0x00, 0xfe, 0xc7, 0xc3, 0xc3, 0xc7, 0xfe, 0xc7, 0xc3, 0xc3, 0xc7, 0xfe},
{0x00, 0x00, 0x7e, 0xe7, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xe7, 0x7e},
{0x00, 0x00, 0xfc, 0xce, 0xc7, 0xc3, 0xc3, 0xc3, 0xc3, 0xc3, 0xc7, 0xce, 0xfc},
{0x00, 0x00, 0xff, 0xc0, 0xc0, 0xc0, 0xc0, 0xfc, 0xc0, 0xc0, 0xc0, 0xc0, 0xff},
{0x00, 0x00, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xfc, 0xc0, 0xc0, 0xc0, 0xff},
{0x00, 0x00, 0x7e, 0xe7, 0xc3, 0xc3, 0xcf, 0xc0, 0xc0, 0xc0, 0xc0, 0xe7, 0x7e},
{0x00, 0x00, 0xc3, 0xc3, 0xc3, 0xc3, 0xc3, 0xff, 0xc3, 0xc3, 0xc3, 0xc3, 0xc3},
{0x00, 0x00, 0x7e, 0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 0x7e},
{0x00, 0x00, 0x7c, 0xee, 0xc6, 0x06, 0x06, 0x06, 0x06, 0x06, 0x06, 0x06, 0x06},
{0x00, 0x00, 0xc3, 0xc6, 0xcc, 0xd8, 0xf0, 0xe0, 0xf0, 0xd8, 0xcc, 0xc6, 0xc3},
{0x00, 0x00, 0xff, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0},
{0x00, 0x00, 0xc3, 0xc3, 0xc3, 0xc3, 0xc3, 0xc3, 0xdb, 0xff, 0xff, 0xe7, 0xc3},
{0x00, 0x00, 0xc7, 0xc7, 0xcf, 0xcf, 0xdf, 0xdb, 0xfb, 0xf3, 0xf3, 0xe3, 0xe3},
{0x00, 0x00, 0x7e, 0xe7, 0xc3, 0xc3, 0xc3, 0xc3, 0xc3, 0xc3, 0xc3, 0xe7, 0x7e},
{0x00, 0x00, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xfe, 0xc7, 0xc3, 0xc3, 0xc7, 0xfe},
{0x00, 0x00, 0x3f, 0x6e, 0xdf, 0xdb, 0xc3, 0xc3, 0xc3, 0xc3, 0xc3, 0x66, 0x3c},
{0x00, 0x00, 0xc3, 0xc6, 0xcc, 0xd8, 0xf0, 0xfe, 0xc7, 0xc3, 0xc3, 0xc7, 0xfe},
{0x00, 0x00, 0x7e, 0xe7, 0x03, 0x03, 0x07, 0x7e, 0xe0, 0xc0, 0xc0, 0xe7, 0x7e},
{0x00, 0x00, 0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 0xff},
{0x00, 0x00, 0x7e, 0xe7, 0xc3, 0xc3, 0xc3, 0xc3, 0xc3, 0xc3, 0xc3, 0xc3, 0xc3},
{0x00, 0x00, 0x18, 0x3c, 0x3c, 0x66, 0x66, 0xc3, 0xc3, 0xc3, 0xc3, 0xc3, 0xc3},
{0x00, 0x00, 0xc3, 0xe7, 0xff, 0xff, 0xdb, 0xdb, 0xc3, 0xc3, 0xc3, 0xc3, 0xc3},
{0x00, 0x00, 0xc3, 0x66, 0x66, 0x3c, 0x3c, 0x18, 0x3c, 0x3c, 0x66, 0x66, 0xc3},
{0x00, 0x00, 0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 0x3c, 0x3c, 0x66, 0x66, 0xc3},
{0x00, 0x00, 0xff, 0xc0, 0xc0, 0x60, 0x30, 0x7e, 0x0c, 0x06, 0x03, 0x03, 0xff}
};

l3d_rasterizer_3d_imp * l3d_rasterizer_3d_mesa_imp_factory::create
(int xs, int ys, l3d_screen_info *si)
{
return new l3d_rasterizer_3d_mesa_imp(xs,ys,si);
}

l3d_rasterizer_3d_mesa_imp::
l3d_rasterizer_3d_mesa_imp(int xs, int ys, l3d_screen_info *si):
l3d_rasterizer_3d_imp(xs,ys,si),
l3d_rasterizer_2d_imp(xs,ys,si),
l3d_rasterizer_2d_mesa_imp(xs,ys,si)
{
make_raster_font();
set_text_color(si->ext_max_red,
si->ext_max_green,
si->ext_max_blue);

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(-xsize/2,xsize/2,-ysize/2,ysize/2);

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glViewport(0,0,xsize,ysize);

//- prevent back-facing polygons - those which have a counterclockwise


//- vertex orientation - from being drawn, to mimic the behavior of the
//- software rasterizer.
glFrontFace(GL_CW);
glCullFace(GL_BACK);
glEnable(GL_CULL_FACE);
}

void l3d_rasterizer_3d_mesa_imp::clear_buffer(void)
{
l3d_rasterizer_2d_mesa_imp::clear_buffer();
}

void l3d_rasterizer_3d_mesa_imp::draw_point(int x, int y, unsigned long col) {


glBegin(GL_POINTS);
sinfo->set_color(col);
glVertex2i(x-(xsize>>1),(ysize>>1)-y);
glEnd();
}

void l3d_rasterizer_3d_mesa_imp::draw_line(int x0, int y0, int x1, int y1,


unsigned long col)
{
glBegin(GL_LINES);
sinfo->set_color(col);
glVertex2i(x0-(xsize>>1),(ysize>>1)-y0);
glVertex2i(x1-(xsize>>1),(ysize>>1)-y1);
glEnd();
}

void l3d_rasterizer_3d_mesa_imp::draw_polygon_flatshaded
(const l3d_polygon_2d_flatshaded *p_poly)
{
int i;

glBegin(GL_POLYGON);
sinfo->set_color(p_poly->final_color);
for(i=0; i<p_poly->clip_ivertices->num_items; i++) {
glVertex2i(iceil((**(p_poly->vlist))
[(*p_poly->clip_ivertices)[i].ivertex].transformed.X_)

- (xsize>>1),

(ysize>>1) -
iceil((**(p_poly->vlist))
[(*p_poly->clip_ivertices)[i].ivertex].transformed.Y_));
}
glEnd();
}

void l3d_rasterizer_3d_mesa_imp::texture_setup
(const l3d_polygon_3d_textured *p_poly)
{

if (!glIsTexture(p_poly->texture->tex_data->id)) {
printf("generating texture");
GLuint glid = (GLuint) (p_poly->texture->tex_data->id);
glGenTextures(1, & glid );
p_poly->texture->tex_data->id = glid;

glBindTexture(GL_TEXTURE_2D, p_poly->texture->tex_data->id);

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);


glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);

glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
glEnable(GL_TEXTURE_2D);

glTexImage2D(GL_TEXTURE_2D,
0,
GL_RGBA,
p_poly->texture->tex_data->width,
p_poly->texture->tex_data->height,
0,
GL_RGBA,
GL_UNSIGNED_BYTE,
p_poly->texture->tex_data->data);
}else {
glEnable(GL_TEXTURE_2D); //- texturing may have been turned off by another

glBindTexture(GL_TEXTURE_2D,p_poly->texture->tex_data->id);
}
}

float texx[4] = {0.0, 0.0, 1.0, 1.0 };


float texy[4] = {0.0, 1.0, 1.0, 0.0 };

void l3d_rasterizer_3d_mesa_imp::draw_polygon_textured
(const l3d_polygon_3d_textured *p_poly)

{
texture_setup(p_poly);
glBegin(GL_POLYGON);
ogl_texturing_between_glbegin_end(p_poly);
glEnd();
}

void l3d_rasterizer_3d_mesa_imp::ogl_texturing_between_glbegin_end
(const l3d_polygon_3d_textured *p_poly)
{
int i;

GLfloat plane[4];
l3d_vector plane_normal;

plane_normal = normalized(p_poly->sfcnormal.transformed
- p_poly->center.transformed);

plane[0] = plane_normal.a[0];
plane[1] = plane_normal.a[1];
plane[2] = plane_normal.a[2];
plane[3] = -plane[0] * p_poly->center.transformed.X_
- plane[1] * p_poly->center.transformed.Y_
- plane[2] * p_poly->center.transformed.Z_;

for(i=0; i<p_poly->clip_ivertices->num_items; i++) {

#define SCREEN_X ((**(p_poly->vlist)) [(*(p_poly->clip_ivertices))[i].ivertex].transformed.X_)


#define SCREEN_Y ((**(p_poly->vlist)) [(*(p_poly->clip_ivertices))[i].ivertex].transformed.Y_)

l3d_real hf=l3d_mulri(*fovx,*screen_xsize),
vf=l3d_mulri(*fovy,*screen_ysize);
float orig_z_denom =
(plane[0]*(SCREEN_X - xsize/2)/ hf -
plane[1]*(SCREEN_Y - ysize/2)/ vf +
plane[2]);
if(orig_z_denom > -0.0001) {

//- this would imply that the orig_z value is < 0.0001 ie non-positive
//- so we cancel the entire polygon drawing operation
return;
}
}

for(i=0; i<p_poly->clip_ivertices->num_items; i++) {

#define SCREEN_X ((**(p_poly->vlist)) [(*(p_poly->clip_ivertices))[i].ivertex].transformed.X_)


#define SCREEN_Y ((**(p_poly->vlist)) [(*(p_poly->clip_ivertices))[i].ivertex].transformed.Y_)

glTexCoord2f
(
((l3d_polygon_ivertex_textured *)&(*p_poly->clip_ivertices)[i])
->tex_coord.X_,
((l3d_polygon_ivertex_textured *)&(*p_poly->clip_ivertices)[i])
->tex_coord.Y_
);

l3d_real hf=l3d_mulri(*fovx,*screen_xsize),
vf=l3d_mulri(*fovy,*screen_ysize);
float orig_z = -plane[3] /
(plane[0]*(SCREEN_X - xsize/2)/ hf -
plane[1]*(SCREEN_Y - ysize/2)/ vf +
plane[2]);

GLfloat tmp[4];

tmp[0] =
(((**(p_poly->vlist))
[(*(p_poly->clip_ivertices))[i].ivertex].transformed.X_ - xsize/2)

* orig_z );

tmp[1] =
((-(**(p_poly->vlist))
[(*(p_poly->clip_ivertices))[i].ivertex].transformed.Y_ + ysize/2)

* orig_z );
tmp[2] = 1;
tmp[3] = orig_z;

glVertex4fv(tmp);
}
}

void l3d_rasterizer_3d_mesa_imp::make_raster_font(void)
{
GLuint i, j;

glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

fontOffset = glGenLists (128);


for (i = 0,j = 'A'; i < 26; i++,j++) {
glNewList(fontOffset + j, GL_COMPILE);
glBitmap(8, 13, 0.0, 2.0, 10.0, 0.0,
letters[i]);
glEndList();
}

for (i = 0,j = 'a'; i < 26; i++,j++) {


glNewList(fontOffset + j, GL_COMPILE);
glBitmap(8, 13, 0.0, 2.0, 10.0, 0.0,
letters[i]);
glEndList();
}

for (i = 0,j = '0'; i < 10; i++,j++) {


glNewList(fontOffset + j, GL_COMPILE);
glBitmap(8, 13, 0.0, 2.0, 10.0, 0.0,
digits[i]);
glEndList();
}

glNewList(fontOffset + ' ', GL_COMPILE);


glBitmap(8, 13, 0.0, 2.0, 10.0, 0.0,
space);
glEndList();

glNewList(fontOffset + '.', GL_COMPILE);


glBitmap(8, 13, 0.0, 2.0, 10.0, 0.0,
period);
glEndList();
}

void l3d_rasterizer_3d_mesa_imp::draw_text(int x, int y, const char *string) {


sinfo->set_color(text_color);
glDisable(GL_TEXTURE_2D); //- text should not be displayed using a texture
glRasterPos2i(x-(xsize>>1),(ysize>>1)-y - 13); //- 13 is height of 1 char

glPushAttrib (GL_LIST_BIT);
glListBase(fontOffset);
glCallLists((GLsizei) strlen(string), GL_UNSIGNED_BYTE, (GLubyte *) string);
glPopAttrib ();
}

void l3d_rasterizer_3d_mesa_imp::set_text_color(int red, int green, int blue) {


text_color = sinfo->ext_to_native(red, green, blue);
}

void l3d_rasterizer_3d_mesa_imp::draw_polygon_textured_lightmapped
(const l3d_polygon_3d_textured_lightmapped *p_poly)
{
GLuint glid = (GLuint) (p_poly->texture->tex_data->id);

l3d_list<l3d_point> old_texcoords(10);

//- try to find texture data in surface cache


if (!p_poly->get_scache()->find_texdata(p_poly->texture->tex_data)) {

if (glIsTexture(p_poly->texture->tex_data->id)) {

glDeleteTextures(1, &glid);
}

glGenTextures(1, & glid );


p_poly->texture->tex_data->id = glid;

glBindTexture(GL_TEXTURE_2D, p_poly->texture->tex_data->id);

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);


glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);

glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
glEnable(GL_TEXTURE_2D);

glTexImage2D(GL_TEXTURE_2D,
0,
GL_RGBA,
p_poly->texture->tex_data->width,
p_poly->texture->tex_data->height,
0,
GL_RGBA,
GL_UNSIGNED_BYTE,
p_poly->texture->tex_data->data);
}else {
glBindTexture(GL_TEXTURE_2D,p_poly->texture->tex_data->id);
}

//- based on texture orientation: compute uvz's for each vertex


for(int i=0; i<p_poly->clip_ivertices->num_items; i++) {

float px =
( (**p_poly->vlist) [(*p_poly->clip_ivertices)[i].ivertex].transformed.X_
- *screen_xsize/2 )
/ (*screen_xsize * *fovx);

float py =
(*screen_ysize/2
-
(**p_poly->vlist) [(*p_poly->clip_ivertices)[i].ivertex].transformed.Y_ )
/
(*screen_ysize * *fovy);

float ox = p_poly->texture->O.transformed.X_;
float oy = p_poly->texture->O.transformed.Y_;
float oz = p_poly->texture->O.transformed.Z_;
float ux = p_poly->texture->U.transformed.X_ - ox;
float uy = p_poly->texture->U.transformed.Y_ - oy;
float uz = p_poly->texture->U.transformed.Z_ - oz;
float vx = p_poly->texture->V.transformed.X_ - ox;
float vy = p_poly->texture->V.transformed.Y_ - oy;
float vz = p_poly->texture->V.transformed.Z_ - oz;

float u = (px*(oy*vz - oz*vy) + py*(oz*vx - ox*vz) - oy*vx + ox*vy)


/ (px*(uz*vy - uy*vz) + py*(ux*vz - uz*vx) + uy*vx - ux*vy);

float v = (px*(oy*uz - oz*uy) + py*(oz*ux - ox*uz) - oy*ux + ox*uy)


/ ((uy*vz - uz*vy)*px + py*(uz*vx - ux*vz) + ux*vy - uy*vx);
float z = (ox*(uz*vy - uy*vz) + oy*(ux*vz - uz*vx) + oz*(uy*vx - ux*vy))
/ (px*(uz*vy - uy*vz) + py*(ux*vz - uz*vx) + uy*vx - ux*vy);

old_texcoords[old_texcoords.next_index()] =
((l3d_polygon_ivertex_textured *)&(*p_poly->clip_ivertices)[i])
->tex_coord;

((l3d_polygon_ivertex_textured *)&(*p_poly->clip_ivertices)[i])
->tex_coord.set(float_to_l3d_real(u),
float_to_l3d_real(v),
float_to_l3d_real(z),
float_to_l3d_real(1.0));
}

draw_polygon_textured(p_poly);

//- set up mag/min filters to make light maps smoother


glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

//- set up blend function to combine light map and text map
glEnable(GL_BLEND);
glBlendFunc(GL_ZERO, GL_SRC_ALPHA);

int i;

//- try to find light map data in cache


glid = (GLuint) (p_poly->lightmap->tex_data->id);
if (!p_poly->get_scache()->find_texdata(p_poly->lightmap->tex_data)) {
if (glIsTexture(p_poly->lightmap->tex_data->id)) {

glDeleteTextures(1, &glid);
}
printf("generating texture");
glGenTextures(1, & glid );
p_poly->lightmap->tex_data->id = glid;
printf("ID was %d", p_poly->lightmap->tex_data->id);

glBindTexture(GL_TEXTURE_2D, p_poly->lightmap->tex_data->id);

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);


glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);

glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
glEnable(GL_TEXTURE_2D);

glPixelStorei(GL_UNPACK_ALIGNMENT,1);
glTexImage2D(GL_TEXTURE_2D,
0,
GL_RGBA,
p_poly->lightmap->tex_data->width,
p_poly->lightmap->tex_data->height,
0,
GL_ALPHA,
GL_UNSIGNED_BYTE,
p_poly->lightmap->tex_data->data);

}else {
glBindTexture(GL_TEXTURE_2D,p_poly->lightmap->tex_data->id);
}

//- restore lumel-space uv's for each clip_ivertex

for(int i=0; i<p_poly->clip_ivertices->num_items; i++) {


((l3d_polygon_ivertex_textured *)&(*p_poly->clip_ivertices)[i])
->tex_coord = old_texcoords[i];
}

//- draw the same polygon again, but this time with the light map
glBegin(GL_POLYGON);
ogl_texturing_between_glbegin_end(p_poly);
glEnd();

glDisable(GL_BLEND);
}

The class l3d_rasterizer_3d_mesa_imp is the 3D rasterizer implementation using Mesa.
The first thing to notice is the following line in the constructor:
gluOrtho2D(-xsize/2,xsize/2,-ysize/2,ysize/2);
This line has the effect of setting the origin of the screen coordinate system to be the center of the
screen, not the top left corner as usual. Furthermore, the y axis values increase as they move
upwards, not downwards; this is the normal y axis orientation, not the reversed y orientation. The
reason we must make this change is for the texture mapping code, as we see shortly.

Figure 2-42: The screen coordinate system for the Mesa 3D rasterizer implementation class. Notice
that the origin is at the center of the screen, and that the y axis orientation is normal.

Because we have redefined the origin and y axis orientation, we must rewrite all of the inher-
ited drawing routines from the 2D rasterizer to use the new coordinate system. The draw_
point, draw_line, and draw_polygon_flatshaded routines perform the same func-
tion and call the same OpenGL routines as in the parent class l3d_rasterizer_
2d_mesa_imp, only here in the 3D rasterizer we offset all of the incoming x and y coordinates to
use the new screen-centered coordinate system.


Like the software 3D rasterizer, the Mesa 3D rasterizer also overrides the routines
set_text_color and draw_text to draw text into the rasterizer buffer, although here we
must use OpenGL commands to draw text. Method set_text_color converts the given exter-
nal red, green, and blue color into the native pixel format, and stores it in the variable
text_color. Method draw_text then calls the set_color routine of the screen info
object with the color text_color, which for Mesa calls glColor4f. Then, after setting the
color, method draw_text disables texture mapping, since otherwise text might be drawn using
the currently active texture map if texture mapping had been enabled earlier, and sets the raster
position at which the text is to be drawn (in 2D screen coordinates, with the screen-centered coor-
dinate system) with glRasterPos2i. Finally, the text is drawn with the calls to
glPushAttrib, glListBase, glCallLists, and glPopAttrib. The calls to
glPushAttrib and glPopAttrib save and restore the OpenGL state variables before and
after the calls to glListBase and glCallLists. The calls to glListBase and
glCallLists execute a display list. A display list is an OpenGL-specific concept; it is a collec-
tion of OpenGL commands that can be executed as a logical unit, something like a subroutine
consisting of OpenGL commands. Each display list is assigned an integer identifier by OpenGL.
Execution of a display list may be faster than individually calling the separate OpenGL com-
mands, because the commands in the display list may be cached on or optimized by the graphics
hardware. For the case of drawing text, we use a separate display list for each character, created in
the initialization routine make_raster_font. This routine first allocates space for 128 con-
secutive display lists with glGenLists, which returns the integer identifier for the first of the
lists. Then, for each display list corresponding to a character we wish to display, we enter com-
mands into the display list by executing glNewList, which begins the list, glBitmap, which
draws the bitmap data to the screen using the current color, and glEndList to finish the list.
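As a minimal, generic illustration of the concept (not taken from the font code itself), a display list is recorded once and can then be replayed by its identifier:

//- Generic OpenGL display list usage: record a list of commands once,
//- then execute the whole list later by its integer identifier.
GLuint list_id = glGenLists(1);      //- reserve one list identifier
glNewList(list_id, GL_COMPILE);      //- begin recording (do not execute yet)
glBegin(GL_POINTS);
glVertex2i(0, 0);
glEnd();
glEndList();                         //- finish recording
glCallList(list_id);                 //- replay the recorded commands as a unit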
The method draw_polygon_textured is the new routine of interest which uses
OpenGL calls to draw a texture mapped polygon, using hardware acceleration if present. It calls
texture_setup to initialize the texture, then calls glBegin(GL_POLYGON) to draw a
polygon, and calls ogl_texturing_between_glbegin_end to actually issue the
OpenGL texture mapping commands, finishing with a call to glEnd.
The method texture_setup has the responsibility of making the texture image data avail-
able to OpenGL. Recall, OpenGL is a state machine; issued commands are affected based on the
currently set parameters. The texture is one such parameter; to draw a texture mapped polygon, we
must first select the texture. OpenGL (version 1.1 and later) stores textures in texture objects. A
texture object is referenced by a non-zero integer identifier, which we store in the id member of
the l3d_texture_data class. In method texture_setup, we first check to see if the
OpenGL texture data has already been allocated and initialized via glIsTexture, passing the
texture id. If the texture does exist, we simply make it the current texture for the subsequent draw-
ing commands, through the call to glBindTexture. If the texture does not exist, we must
create it for OpenGL. First, we call glGenTextures to return a new, unused texture id num-
ber, and store it with the texture data. We then call glBindTexture to make this texture object
current.

At this point we have created and bound a new texture to OpenGL, but the texture is still
empty. Therefore, we issue additional commands to make the texture image data known to
OpenGL. First, we call glTexParameteri several times to set up some parameters for the tex-
ture. The first two calls to glTexParameteri cause scaled-up (magnified) or scaled-down
(minified) textures to be drawn using the closest pixel within the texture map; the alternative
would be to use some sort of average of several pixels within the texture map, which produces a
smoother result but is slower. The second two calls to glTexParameteri cause the texture
coordinates to wrap around or repeat outside of the (0,0) to (1,1) square in texture space. Then,
we call glHint with the parameters GL_PERSPECTIVE_CORRECTION_HINT and GL_
NICEST, which causes OpenGL to try to perform the “nicest” perspective correction possible
when doing texture mapping. The perspective correction is simply the interpolation of u/z, v/z, and
1/z with a division to retrieve the mathematically correct u, v, and z values, as we already have
been doing; texture mapping without perspective correction interpolates the u, v, and z values
directly (which we did in software for short runs of 16 pixels) and is faster but leads to incorrect
and visually disturbing artifacts. The call to glEnable(GL_TEXTURE_2D) turns on the tex-
ture mapping in OpenGL.

TIP If you want to see the effect of texture mapping without perspective correction in the soft-
ware rasterizer, try setting the constant LINEAR_RUN_SHIFT in class l3d_rasterizer_
3d_sw_imp to be 16. This means that each linear run is 2^16 or 65,536 pixels long, mean-
ing that across an entire scanline, the u and v values (instead of u/z, v/z, and 1/z, which
would be correct) are linearly interpolated. This results in distortion and warping of the texture
image but faster performance because no division operations are performed within the
scanline.

Finally, and most importantly, we call glTexImage2D, which actually sends the texture image
data to OpenGL. The first parameter specifies that we are working with a 2D texture. The next
parameter is only used with multiple levels of detail, where a different texture is drawn depending
on the distance from the camera, so we set it to 0. The third parameter, which we set to GL_RGBA,
means that our texture data specifies red, green, blue, and also possibly alpha (transparency) com-
ponents for our colors. The next two parameters are the width and height of the texture; after that
comes an optional border parameter to surround the texture, which we set to 0 (no border). The
next two parameters specify the format of the texture image data: red, green, blue, and alpha com-
ponents, each as a byte—in other words, 32-bit RGBA, which is also the same pixel format
specified in the Mesa screen information class l3d_screen_info_rgb_mesa. The final
parameter is a pointer to the actual texture image data itself.
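Condensed into a standalone sketch, the whole texture_setup sequence looks roughly as follows. The small structure below is only a stand-in for the texture data members discussed above, not the actual l3d_texture_data class:

#include <GL/gl.h>

//- Hypothetical stand-in for the texture data: an OpenGL texture id (0 if
//- not yet created), the image dimensions, and raw 32-bit RGBA pixels.
struct tex_data_sketch {
  GLuint id;
  int width, height;
  unsigned char *data;
};

void texture_setup_sketch(tex_data_sketch *tex) {
  if(glIsTexture(tex->id)) {
    glBindTexture(GL_TEXTURE_2D, tex->id); //- already created: just select it
  } else {
    glGenTextures(1, &tex->id);            //- obtain a new, unused texture id
    glBindTexture(GL_TEXTURE_2D, tex->id); //- make the new texture current
    //- nearest-texel magnification/minification, repeat outside [0,1]
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
    //- send the 32-bit RGBA image to OpenGL: level 0, no border
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, tex->width, tex->height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, tex->data);
  }
  glEnable(GL_TEXTURE_2D);                 //- turn texture mapping on
}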
The method ogl_texturing_between_glbegin_glend issues the actual OpenGL
commands to draw the textured polygon. This routine must be called between glBegin(GL_
POLYGON) and glEnd, as the name of the routine implies. Fundamentally, the routine works as
follows: for each vertex in the polygon, we call glTexCoord2f with the (u,v) coordinates of the
polygon, which we stored in the l3d_polygon_ivertex_textured.tex_coord mem-
ber. Then, we call glVertex4fv with the spatial coordinates of the vertex. After specifying the
texture and spatial coordinates for all vertices of the polygon, the routine returns, glEnd is called,
and the texture mapped polygon is then drawn by OpenGL.

There is, however, one tricky computation which must be performed within the method
ogl_texturing_between_glbegin_glend. The problem is that we must pass a
pre-projection vertex to Mesa, and must allow Mesa to perform the perspective division, in order
for the texture mapping to occur correctly. We cause Mesa to do perspective division by using
homogeneous coordinates. The last operation performed by the hardware on a coordinate is the
division by the homogeneous w component. If we set w to be the 3D z component, this division
effects the perspective projection. Our problem is that we already performed the perspective divi-
sion, in l3d_object::apply_perspective_projection, and we already know
exactly which 2D pixel location we need. So our task is, from this already known 2D pixel loca-
tion, to compute the pre-projection 3D coordinate which, when re-projected by Mesa, yields
exactly the 2D pixel location we desire. This is a form of “reverse projection,” very similar to the
way that we “reverse projected” from screen space into texture space. Here, we must reverse pro-
ject from 2D screen space back into 3D world space.

CAUTION You may be tempted to believe that we can simply save the vertex coordinates
before perspective projection, in one of the transformed intermediates within the vertex.
Then, you might think, later on within the Mesa rasterizer, we would simply retrieve the
pre-projection coordinate and pass it to Mesa. The flaw with this reasoning is that after pro-
jection, we perform an additional clipping operation in 2D, clipping the projected polygon to
the view window. This clipping can produce additional vertices, which did not directly result
from a 3D to 2D projection operation. For such vertices produced by a 2D clip, we have no a
priori knowledge of the corresponding 3D point. Therefore, we cannot just save the
pre-projection 3D coordinate ahead of time; we must dynamically compute it later as
needed.

This reverse projection from screen space into world space is quite easy and relies on the plane
equation of the polygon which we developed earlier. Consider the following system of equations:
Equation 2-35

a*x + b*y + c*z + d = 0
px = x/z
py = y/z

This is a system of three equations. The first is the plane equation of the polygon, and it constrains
how the x, y, and z values in 3D relate to one another: they must lie on a plane. The a, b, and c con-
stants of the plane equation, recall, are simply the normalized x, y, and z components of the surface
normal to the plane; the d component can be calculated by substituting a known x, y, and z point on
the plane (such as the polygon center) into the plane equation and solving for d. The second two
equations are the simplified perspective projection equations ignoring field of view and y axis
reversal; as seen earlier, we can easily convert from these simple projected coordinates to the real
screen projected coordinates.

Solving this system for x, y, and z (by using Calc) yields the following system of equations.
Equation 2-36

z = -d / (a*px + b*py + c)
x = px*z = -d*px / (a*px + b*py + c)
y = py*z = -d*py / (a*px + b*py + c)

These equations give us the world space coordinates from the screen space coordinates—a
“reverse projection.”
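In code, the reverse projection of Equation 2-36 amounts to only a few lines. The fragment below is an illustrative sketch rather than the l3d source; px and py are the simple projected coordinates, and a, b, c, and d are the coefficients of the polygon's plane equation:

//- Sketch: recover the pre-projection 3D point that lies on the plane
//- a*x + b*y + c*z + d = 0 and that projects to the simple projected
//- point (px,py).
void reverse_project(float px, float py,
                     float a, float b, float c, float d,
                     float &x, float &y, float &z)
{
  float denom = a*px + b*py + c; //- nonzero as long as the projection ray
                                 //- is not parallel to the polygon's plane
  z = -d / denom;
  x = px * z;
  y = py * z;
}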
As you can see here, it is very important to express a problem in terms of known and unknown
quantities. If we are only given the perspective projection equations relating x, y, and z, we do not
have enough information to perform reverse projection. Mathematically, we have two equations
(px=x/z, py=y/z) but three unknowns (x, y, and z). Geometrically, the perspective projected point
could have come from any point in 3D which lies on the light ray. But, by adding the plane equa-
tion to the system of equations, we do have enough information to perform reverse projection.
Mathematically, we now have three equations and three unknowns. Geometrically, the additional
constraint of the plane equation means that the point in 3D must not only project to the given 2D
point, but it must also lie on the plane of the polygon. The general observation here is that an addi-
tional equation relating the known quantities allowed us to solve the system of equations. The
challenge is to determine which known information might be of use in finding some unknown
quantities.
So, to use these equations, we first convert from real screen projected coordinates to the sim-
ple projected coordinates, using the earlier Equation 2-30. Notice that this conversion relies on the
horizontal and vertical field of view terms and screen size, which is why the l3d_tex-
ture_computer class, from which the Mesa rasterizer inherits, stores these values.

CAUTION The fact that the Mesa texture mapping code requires the horizontal and vertical
field of view terms and the screen size means that you must initialize these values before
attempting to perform any texture mapping operations.

The following sample program textest initializes these field of view terms in the beginning of
the world constructor. The reason we must perform this initialization is that in the rasterizer we
store pointers to the actual variables storing the field of view and screen size terms. This allows us
to change values in the external variables (e.g., to interactively change the field of view) without
needing also to propagate the updated values to the rasterizer. But the fact that the rasterizer stores
pointers to the variables means that failure to initialize these pointers followed by an attempt to
perform Mesa texture mapping results in an attempt to access memory through an uninitialized
pointer—a practice which, to put it mildly, is best avoided. Ordinarily, such initialization would be
done by including a required parameter in the class’s constructor, so that it is not possible to create
an object incorrectly, but in this case we create the rasterizer implementation indirectly through a
factory, where the factory’s original creation interface (defined in the introductory companion
book Linux 3D Graphics Programming) was defined before we even saw the notion of field of
view. Since the code philosophy in this book emphasizes a bottom-up approach, we’ll have to live
with this small inconvenience for now. In a real production 3D program, you probably would have
no need to create 2D rasterizer implementations, only 3D rasterizer implementations, meaning
that you could just change the factory creation interface and break the old code creating 2D
rasterizer implementations. However, for pedagogical purposes, in this book we retain backward
compatibility with the code we developed earlier.
Then, having obtained the simple projected coordinates, we use the above equations to deter-
mine the 3D pre-projection coordinates. We finally pass these pre-projection 3D coordinates on to
OpenGL in the glVertex4fv command. The glVertex4fv function expects as a parameter
an array in which four x, y, z, and w components have been stored. We must specify the 3D z coor-
dinate as the fourth (homogeneous w) component, so that Mesa has the opportunity to perform the
homogeneous division by w, thus effecting the perspective division by z and therefore allowing
perspective correct texture mapping through OpenGL. It is also important that we not pass the 3D
z coordinate as the third (3D z) component to glVertex4fv, instead passing the constant 1.
What does this achieve? Remember that OpenGL divides all components by the w component.
This means that, after the division by w, which we set equal to the 3D z value, the third component
of the OpenGL vertex contains 1/z. As we see in Chapter 4, this 1/z value can be used for the z
buffer algorithm.
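A tiny sketch makes the packing of the homogeneous coordinate explicit. Here x and y are the reverse projected coordinates and z is the pre-projection z value; the helper name is hypothetical:

#include <GL/gl.h>

//- Sketch: hand a pre-projection coordinate to OpenGL with w set to z, so
//- that the hardware's division by w performs the perspective projection.
void emit_vertex(float x, float y, float z) {
  GLfloat v[4];
  v[0] = x;
  v[1] = y;
  v[2] = 1.0; //- becomes 1/z after the division by w; usable for z-buffering
  v[3] = z;   //- homogeneous w: OpenGL divides all components by this value
  glVertex4fv(v);
}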

NOTE Note that the last operation Mesa performs on the coordinate is the division by the
homogeneous component w. This is the reason that, for the Mesa 3D rasterizer, we redefined
the origin of our screen coordinate system to be the center of the screen with a normal y axis
orientation. Since the division by w is the last operation, we have no way of telling Mesa “oh,
and after the homogeneous division, please also offset the x and y values by half the screen
size and reverse the y axis orientation.” If we want Mesa to perform the perspective
division—which should be the case for texture mapping to occur correctly—then we have no
control over the post-projection coordinates, since they are taken over by the hardware at that
point.

Sample Program: textest


Congratulations, you now understand basic texture mapping! Let’s now take a look at a sample
program using texture mapping, so that all of this mind-boggling math actually produces a visual
result. With an understanding of the basic concepts of texture mapping, using the texture mapping
classes is comparatively easy. The next sample program, textest, creates a navigable scene
with two texture mapped pyramids. One of the pyramids rotates, to provide some animation in the
scene. You can fly through the scene with the normal navigation keys of the l3d_pipe-
line_world class.
The code is in Listings 2-14 through 2-18. File shapes.cc is the most significant file,
which loads textures and creates texture-mapped polygons.
Listing 2-14: shapes.h, the textured object class declaration for program textest
#include "../lib/geom/object/object3d.h"
#include "../lib/geom/polygon/p3_tex.h"
#include "../lib/geom/texture/texture.h"
#include "../lib/geom/texture/texload.h"

class pyramid:public l3d_object {


int xtheta;

public:
int dx_theta;
pyramid(l3d_texture_loader *l);
virtual ~pyramid(void);
/* virtual */ int update(void);
};

Listing 2-15: shapes.cc, the textured object class implementation for program textest
#include "shapes.h"
#include "../lib/tool_os/memman.h"
#include <stdlib.h>
#include <string.h>

pyramid::pyramid(l3d_texture_loader *l) :
l3d_object(4)
{
(*vertices)[0].original.set(float_to_l3d_real(0.),float_to_l3d_real(0.),float_to_l3d_real(0.),
float_to_l3d_real(1.));
(*vertices)[1].original.set(float_to_l3d_real(10.0),float_to_l3d_real(0.),float_to_l3d_real(0.),
float_to_l3d_real(1.));
(*vertices)[2].original.set(float_to_l3d_real(0.),float_to_l3d_real(10.),float_to_l3d_real(0.),
float_to_l3d_real(1.));
(*vertices)[3].original.set(float_to_l3d_real(0.),float_to_l3d_real(0.),float_to_l3d_real(10.),
float_to_l3d_real(1.));

l3d_polygon_3d_textured *p;
int pi;

pi = polygons.next_index();
polygons[pi] = p = new l3d_polygon_3d_textured(3);

polygons[pi]->vlist = &vertices;
(*(polygons[pi]->ivertices))[polygons[pi]->ivertices->next_index()].ivertex=0;
(*(polygons[pi]->ivertices))[polygons[pi]->ivertices->next_index()].ivertex=1;
(*(polygons[pi]->ivertices))[polygons[pi]->ivertices->next_index()].ivertex=3;

l->load("test.ppm");
p->texture = new l3d_texture;
p->texture->tex_data = new l3d_texture_data;
p->texture->tex_data->width = l->width;
p->texture->tex_data->height = l->height;
p->texture->tex_data->data = l->data;
p->texture->owns_tex_data = true;
p->texture->O = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[0].ivertex];
p->texture->U = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[1].ivertex];
p->texture->V = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[2].ivertex];
polygons[pi]->compute_center();polygons[pi]->compute_sfcnormal();
p->assign_tex_coords_from_tex_space(*p->texture);

pi = polygons.next_index();
polygons[pi] = p = new l3d_polygon_3d_textured(3);
polygons[pi]->vlist = &vertices;

(*(polygons[pi]->ivertices))[polygons[pi]->ivertices->next_index()].ivertex=2;
(*(polygons[pi]->ivertices))[polygons[pi]->ivertices->next_index()].ivertex=3;
(*(polygons[pi]->ivertices))[polygons[pi]->ivertices->next_index()].ivertex=1;

p->texture = new l3d_texture;


p->texture->tex_data = new l3d_texture_data;
p->texture->tex_data->width = l->width;
p->texture->tex_data->height = l->height;
p->texture->tex_data->data = l->data;
p->texture->owns_tex_data = false;

p->texture->O = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[0].ivertex];
p->texture->U = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[1].ivertex];
p->texture->V = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[2].ivertex];

polygons[pi]->compute_center();polygons[pi]->compute_sfcnormal();
p->assign_tex_coords_from_tex_space(*p->texture);

pi = polygons.next_index();
polygons[pi] = p = new l3d_polygon_3d_textured(3);
polygons[pi]->vlist = &vertices;
(*(polygons[pi]->ivertices))[polygons[pi]->ivertices->next_index()].ivertex=0;
(*(polygons[pi]->ivertices))[polygons[pi]->ivertices->next_index()].ivertex=2;
(*(polygons[pi]->ivertices))[polygons[pi]->ivertices->next_index()].ivertex=1;

p->texture = new l3d_texture;


p->texture->tex_data = new l3d_texture_data;
p->texture->tex_data->width = l->width;
p->texture->tex_data->height = l->height;
p->texture->tex_data->data = l->data;
p->texture->owns_tex_data = false;

p->texture->O = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[0].ivertex];
p->texture->U = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[1].ivertex];
p->texture->V = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[2].ivertex];

polygons[pi]->compute_center();polygons[pi]->compute_sfcnormal();
p->assign_tex_coords_from_tex_space(*p->texture);

pi = polygons.next_index();
polygons[pi] = p = new l3d_polygon_3d_textured(3);
polygons[pi]->vlist = &vertices;
(*(polygons[pi]->ivertices))[polygons[pi]->ivertices->next_index()].ivertex=3;
(*(polygons[pi]->ivertices))[polygons[pi]->ivertices->next_index()].ivertex=2;
(*(polygons[pi]->ivertices))[polygons[pi]->ivertices->next_index()].ivertex=0;

p->texture = new l3d_texture;


p->texture->tex_data = new l3d_texture_data;
p->texture->tex_data->width = l->width;
p->texture->tex_data->height = l->height;
p->texture->tex_data->data = l->data;
p->texture->owns_tex_data = false;

p->texture->O = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[0].ivertex];
p->texture->U = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[1].ivertex];
p->texture->V = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[2].ivertex];

polygons[pi]->compute_center();polygons[pi]->compute_sfcnormal();
p->assign_tex_coords_from_tex_space(*p->texture);

num_xforms = 2;
xtheta=0;
modeling_xforms[0] = l3d_mat_rotx(xtheta);
modeling_xforms[1].set
( float_to_l3d_real(1.), float_to_l3d_real(0.), float_to_l3d_real(0.), float_to_l3d_real(0.),
float_to_l3d_real(0.), float_to_l3d_real(1.), float_to_l3d_real(0.), float_to_l3d_real(0.),
float_to_l3d_real(0.), float_to_l3d_real(0.), float_to_l3d_real(1.), float_to_l3d_real(0.),
float_to_l3d_real(0.), float_to_l3d_real(0.), float_to_l3d_real(0.), float_to_l3d_real(1.) );

modeling_xform=
modeling_xforms[1] | modeling_xforms[0];
}

pyramid::~pyramid(void) {
}

int pyramid::update(void) {
xtheta += dx_theta; if (xtheta>359) xtheta-=360;
modeling_xforms[0]=l3d_mat_rotx(xtheta);
modeling_xform=
modeling_xforms[1] | modeling_xforms[0];
return 0;
}

Listing 2-16: textest.h, the custom world class declaration for program textest
#ifndef __TEXTEST_H
#define __TEXTEST_H

#include "../lib/geom/world/world.h"
#include "../lib/pipeline/pi_wor.h"
#include "../lib/geom/texture/texture.h"
#include "../lib/geom/texture/tl_ppm.h"

class my_world:public l3d_world {


public:
my_world(void);
void update_all(void);

virtual ~my_world(void) {delete texture_loader; }


l3d_texture_loader *texture_loader;
};

#endif

Listing 2-17: textest.cc, the custom world class implementation for program textest
#include "textest.h"

#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <math.h>
#include <stdarg.h>

#include "../lib/geom/object/object3d.h"
#include "../lib/tool_2d/screen.h"
#include "../lib/tool_os/dispatch.h"
#include "../lib/raster/rasteriz.h"
#include "../lib/tool_2d/scrinfo.h"
#include "../lib/geom/world/world.h"
#include "../lib/geom/texture/texture.h"
#include "../lib/geom/texture/tl_ppm.h"

#include "shapes.h"

void my_world::update_all(void) {
l3d_world::update_all();
}

my_world::my_world(void)
: l3d_world(400,300)
{
rasterizer_3d_imp->fovx = &(camera->fovx);
rasterizer_3d_imp->fovy = &(camera->fovy);
rasterizer_3d_imp->screen_xsize = &(screen->xsize);
rasterizer_3d_imp->screen_ysize = &(screen->ysize);

l3d_screen_info *si = screen->sinfo;


texture_loader = new l3d_texture_loader_ppm(screen->sinfo);

camera->VRP.set(0,0,-50,0);
camera->near_z = float_to_l3d_real(5.5);
camera->far_z = int_to_l3d_real(500);

int i,j,k,onum;

i=10; j=0; k=20;


k=0;

for(i=1; i<10; i+=5) {


pyramid *p;
objects[onum = objects.next_index()] = p = new pyramid(texture_loader);
p->dx_theta = i-1;

if (objects[onum]==NULL) exit(1);
objects[onum]->modeling_xforms[1].set
( int_to_l3d_real(1), int_to_l3d_real(0), int_to_l3d_real(0), int_to_l3d_real(-i),
int_to_l3d_real(0), int_to_l3d_real(1), int_to_l3d_real(0), int_to_l3d_real(i/2),
int_to_l3d_real(0), int_to_l3d_real(0), int_to_l3d_real(1), int_to_l3d_real(i),
int_to_l3d_real(0), int_to_l3d_real(0), int_to_l3d_real(0), int_to_l3d_real(1) );
objects[onum]->modeling_xform =
objects[onum]->modeling_xforms[1] |
objects[onum]->modeling_xforms[0] ;
onum++;
}

screen->refresh_palette();
screen->sinfo->compute_light_table();
screen->sinfo->compute_fog_table();
}

Listing 2-18: main.cc, the main program file for program textest
#include "textest.h"
#include "../lib/system/fact0_2.h"

#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <math.h>
#include <stdarg.h>

main() {
l3d_dispatcher *d;
l3d_pipeline_world *p;
my_world *w;

factory_manager_v_0_2.choose_factories();
d = factory_manager_v_0_2.dispatcher_factory->create();

w = new my_world();
p = new l3d_pipeline_world(w);
d->pipeline = p;
d->event_source = w->screen;

d->start();

delete p;
delete d;
delete w;
}

The main program file, main.cc, begins by choosing the factories using the new l3d_fac-
tory_manager_v_0_2 class. This class chooses the 3D rasterizer implementation factory:
either software or OpenGL/Mesa. Then, the main program simply creates the world, pipeline, and
dispatcher objects, hooks them together, and starts the dispatcher.
The class my_world is our custom world class. First, it stores pointers to the field of view
and screen size variables into the 3D rasterizer, since in the case of Mesa rasterization we need
these variables to perform the reverse projection (as mentioned earlier). Next, the class creates a
texture loader, of type l3d_texture_loader_ppm, which is used by the pyramid class
(covered in the next paragraph) to load the texture images from disk. Class my_world then popu-
lates the world with a number of objects of type pyramid.

The class pyramid is the real focus of interest for this program. Class pyramid, inheriting
from class l3d_object, is a texture mapped, rotating pyramid. The member variable xtheta
is the current rotation angle; dx_theta is the change in xtheta per unit time. The constructor
requires a parameter of type l3d_texture_loader so that the pyramid can load the texture
image data from disk. In the constructor, we first define a vertex list, as usual. Then, we create a
number of polygons referencing the vertex list, but of the new type l3d_polygon_3d_
textured.
pi = polygons.next_index();
polygons[pi] = p = new l3d_polygon_3d_textured(3);
After creating the polygon and assigning its vertex indices as usual, we then load a texture image
from disk by using the texture loader, then assign the newly loaded texture to the polygon. The tex-
ture image is a 64 by 64 pixel color image saved in PPM format. Since the polygon is the only one
referencing the texture, it is responsible for freeing the data—hence, the owns_tex_data
member is set to true.
l->load("test.ppm");
p->texture = new l3d_texture;
p->texture->tex_data = new l3d_texture_data;
p->texture->tex_data->width = l->width;
p->texture->tex_data->height = l->height;
p->texture->tex_data->data = l->data;
p->texture->owns_tex_data = true;

Next, we have to define the orientation of the texture space, which places the texture in 3D space
relative to the polygon. In this case, we directly use the first vertex of the polygon as the texture
space origin, and the two edges of the polygon as the texture space axes. This has the effect of
causing the texture image to lie exactly on the face of the polygon.


p->texture->O = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[0].ivertex];
p->texture->U = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[1].ivertex];
p->texture->V = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[2].ivertex];
Finally, we perform some last necessary computations in 3D. We compute the center and surface
normal of the polygon, and finally, based on the previous texture space definition, we assign (u,v)
texture coordinates to each of the polygon’s vertices by using the world to texture space matrix.
polygons[pi]->compute_center();polygons[pi]->compute_sfcnormal();
p->assign_tex_coords_from_tex_space(*p->texture);
Creating and initializing the texture mapped polygons in this manner is actually all we need to do
in order to make use of texture mapped polygons in our programs. Having already chosen a 3D
rasterizer implementation at program start that is capable of drawing texture mapped polygons,
we are free to create texture mapped polygons and assign them to our 3D objects. All routines that
draw or otherwise manipulate 3D polygons do so through abstract pointers and virtual function
calls, meaning that the texture mapped polygons can be used with no impact to existing code.
They are finally displayed through a virtual function call to draw, which then ends up calling the
rasterizer implementation to draw the texture mapped polygon on-screen.

Figure 2-43: Output from program textest, displaying two texture mapped pyramid objects.

With texture mapping under our belts, it is now time to return to the topic of light mapping,
which builds on the topic of texture mapping.

Light Mapping Revisited


Recall, light mapping is a means of applying computed light intensities to a polygon. It works by
defining a grid of lumels on top of the polygon; each lumel in the lumel grid may be assigned a
separate light intensity.
We mentioned earlier that the main question with light mapping is the determination of which
lumel in the light map corresponds to the current pixel. We can now recognize this as a texture
mapping problem: to solve it, we define and orient the light map exactly as we defined the texture
map, by means of an additional coordinate system. Then we use the texture mapping equations to
convert between screen space, world space, and texture (in this case, light map) space.

The difference between texture mapping and light mapping is that with texture mapping, the
color in the texture map defines the final color to be displayed on-screen; with light mapping, the
light intensity in the light map does not define the color to be displayed, but instead affects an
existing color, making it either brighter or darker via a lighting table.
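As a rough illustration of this idea, a lighting table lookup might look like the following. This is a hypothetical table layout for an indexed-color mode, not the l3d implementation, which performs the equivalent operation through the screen information object (light_native and the table built by compute_light_table):

//- Hypothetical lighting table: entry [intensity][color] holds the palette
//- index of "color" as it appears under light level "intensity".
const int NUM_LIGHT_LEVELS = 64;
const int NUM_COLORS = 256;
unsigned char light_table[NUM_LIGHT_LEVELS][NUM_COLORS];

//- Shade one already computed texel color: the lumel does not replace the
//- color, it selects a brighter or darker version of it.
unsigned char apply_lumel(unsigned char texel_color, unsigned char lumel) {
  return light_table[lumel][texel_color];
}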
There are a few ways to implement light mapping. One way would be to modify the inner loop
of the rasterizer to perform a double texture mapping calculation (once for the texture map, once
for the light map), then to use the lighting table to find the value of the lighted color and display it.
This requires rather invasive changes to the rasterization code, since we now have to handle both a
texture map and a light map. We don’t explore this option further here.
Another way to implement light mapping is to introduce the concept of a surface, which is
nothing more than a precombined texture map and light map. By precombining the texture map
and light map into a surface, we can treat the new surface as a single texture, and associate this sin-
gle texture with the polygon, which can then be rasterized using the exact same texture mapping
routine as before. We’ll look at this solution in more detail shortly.
A third, and perhaps simplest, way of realizing light mapping is to perform two entire
rasterization passes for each polygon. First, we rasterize the polygon with texture but no lighting.
Then, we re-rasterize the same polygon, but using the light map instead of the texture map. During
rasterization, instead of drawing the colors of the light map (which are all just various intensities
of white), we alter the existing, already drawn colors from the texture map. The end result is the
colors of the texture map lighted by the intensities in the light map. Since this strategy requires two
entire rasterization passes, it is only really practical in the presence of hardware acceleration. We
also look at this solution in the coming sections.

Software Light Mapping


Let’s begin by looking at a technique for implementing light mapping in software. The method
relies on surfaces and a surface cache.

Surfaces
A surface is a pre-tiled, pre-lit texture. A texture and a light map combine to form a surface; the
surface can then be treated as a normal texture. With this approach, light mapping becomes simply
a sort of preprocessing step before drawing the texture mapped polygons. In theory, this sounds
easy enough: for each texel in the texture map, simply light it according to the corresponding
lumel in the light map, and we’re done, right? In practice, several problems prevent such a
straightforward implementation. First of all, the texture map and the light map have different
sizes. Typically the light map will have a lower resolution than the texture map because a very fine
resolution of lighting detail is not always needed. Secondly, and this is where it becomes tricky,
the texture may be scaled. If we have scaled the texture down, this means that the texture then
repeats itself (we also say tiles itself) across the surface of the polygon. The problem is that
although the texels of the texture map repeat themselves across the surface of the polygon, the
lumels of the light map must not repeat themselves across the surface of the polygon. A few
moments of thought make this clear: if the lumels were also repeated across the polygon, then
altering the intensity of one lumel would alter the intensity of several points on the polygon, which
is not at all what is desired! Instead, each lumel should individually control the intensity of one
part of the polygon, regardless of whether the texture underneath repeats itself or not. See Figure
2-44.

Figure 2-44: We can tile the texture, but we cannot tile the light map.

The solution to this quandary is to create a surface which may be larger than the original tex-
ture. Specifically, the surface must be exactly large enough to contain the number of repetitions of
the texture occurring on the polygon; the exact number of repetitions depends on the relative sizes
and orientations of the polygon and the texture. This is what we mean when we say that a surface is
a “pre-tiled” texture. Then, we apply the light map to this larger, repeated texture. The result is a
single, large texture image, which (a) contains one or more repetitions of the original texture
image, and (b) has been illuminated by the light map.

Figure 2-45: Texture mapping without light mapping tiles the texture within the
rasterizer. Texture mapping with light mapping tiles the texture beforehand into a
surface, which is then treated as one large texture by the rasterizer.

After creating the pre-tiled, pre-lit surface (which may be larger than the original texture), we
must then redefine the texture coordinates of the associated polygon. If the surface is larger than
the original texture, the new texture coordinates must be appropriately smaller. For instance, if the
polygon contains exactly two repetitions of the original texture horizontally, then the original u
values go from 0 to 2. The larger surface is then twice as wide as the original texture and already
contains two repetitions of the original texture, which means that only one repetition of the surface
should be mapped to the polygon, meaning that the new u values should only go from 0 to 1.
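Schematically, the rescaling of a u coordinate can be written as a one-line function. This is an illustrative formula, not code lifted from l3d; min_u_rep and u_tilecount are hypothetical names for the first repetition covered by the surface and the number of repetitions it contains:

//- Sketch: rescale a texture coordinate expressed in repetitions of the
//- original texture into a coordinate relative to the pre-tiled surface.
float surface_u(float u, int min_u_rep, int u_tilecount) {
  return (u - min_u_rep) / u_tilecount;
}

With the example above, where u runs from 0 to 2 and the surface contains two horizontal repetitions, this maps the original coordinates onto the range 0 to 1.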
We create surfaces “on demand”—as soon as we attempt to draw a polygon, we first of all see
if we have created a surface for the polygon. Each polygon has a unique surface. If the polygon’s
surface has already been created beforehand, we simply reuse the previous surface and draw the
polygon with the surface as the texture. If the polygon’s surface has not been created, we combine
the texture and light maps into the surface and store the surface for future use. A surface is there-
fore created as soon as it is needed to be drawn. (A predictive algorithm could also attempt to
create surfaces for polygons which are not yet visible but which might soon be visible, to improve
performance.) This all implies that we need some means of managing which surfaces have been
created and which have not. This is the role of the surface cache, which determines for a particular
polygon if its surface has already been created or not. The surface cache can also discard old sur-
faces which have not been used in a long time to save memory.
Let’s now look at the l3d classes for light mapping. The class l3d_surface is a surface.
Member variable polygon points to the polygon with which the surface is associated. Member
variable surface is a pointer to an object of type l3d_texture—after all, a surface is noth-
ing more than a texture which has been pre-tiled and pre-lit. Member variable age keeps track of
the relative age of the surface, so that older surfaces can be discarded to make space for newer sur-
faces. Of course, if a discarded surface is then referenced again, it must be regenerated.
Listing 2-19: scache.h
#ifndef __SCACHE_H
#define __SCACHE_H
#include "../../tool_os/memman.h"

class l3d_texture;
class l3d_texture_data;

#include "../../tool_2d/scrinfo.h"
#include "../../datastr/list.h"

class l3d_polygon_3d_textured_lightmapped;

class l3d_surface {
public:

l3d_surface(void);
const l3d_polygon_3d_textured_lightmapped *polygon;
l3d_texture *surface;

int age;

virtual ~l3d_surface(void);
};

class l3d_surface_cache {
protected:
l3d_screen_info *sinfo;
l3d_list<l3d_surface> *surfaces;

public:
l3d_surface_cache(l3d_screen_info *sinfo) {
surfaces = new l3d_list<l3d_surface> (10);
this->sinfo = sinfo;
}

virtual ~l3d_surface_cache(void) {
int i;
delete surfaces;
}

l3d_texture *combine_lightmap_and_texmap(int width,
int height,
const l3d_polygon_3d_textured_lightmapped *polygon);

void reset(void) {
delete surfaces;
surfaces = new l3d_list<l3d_surface> (10);
}

l3d_texture_data *find_texdata(l3d_texture_data *texdata);


};

#include "../polygon/p3_tex.h"
#include "../polygon/p3_ltex.h"

#endif

Listing 2-20: scache.cc


#include "scache.h"

#include "../../tool_2d/si_rgb.h"
#include "../../tool_os/memman.h"

l3d_surface::l3d_surface(void) {
surface = NULL;
polygon = NULL;
}

l3d_surface::~l3d_surface(void)
{

if(surface) delete surface;


if(polygon) {polygon->surface = NULL; }
}

l3d_texture *l3d_surface_cache::
combine_lightmap_and_texmap(int width,
int height,
const l3d_polygon_3d_textured_lightmapped *polygon)
{

{
int i;
for(i=0; i<surfaces->num_items; i++) {


if((*surfaces)[i].polygon == polygon) {
return (*surfaces)[i].surface;
}
}
}

l3d_texture *new_tex;
l3d_surface *new_surface;

new_tex = new l3d_texture;

int next_surface_index = surfaces->next_index();


(*surfaces)[next_surface_index].polygon = polygon;
(*surfaces)[next_surface_index].surface = new_tex;

new_tex->tex_data = new l3d_texture_data;


new_tex->owns_tex_data = true;
new_tex->tex_data->width = width;
new_tex->tex_data->height = height;
new_tex->tex_data->data = new unsigned char [sinfo->bytes_per_pixel *
new_tex->tex_data->width *
new_tex->tex_data->height];

int i,j,u,v;

//- tile texture into larger tiled area


for(v=0; v<new_tex->tex_data->height; v++) {
for(u=0; u<new_tex->tex_data->width; u++) {
for(i=0; i<sinfo->bytes_per_pixel; i++) {
*(new_tex->tex_data->data
+ (v*new_tex->tex_data->width + u) * sinfo->bytes_per_pixel
+ i)
=
*(polygon->texture->tex_data->data
+ ( ((v&(polygon->texture->tex_data->height-1))*polygon->texture->tex_data->width
+ (u&(polygon->texture->tex_data->width-1))) * sinfo->bytes_per_pixel
+ i ) ) ;
}
}
}

unsigned char *c2 = new_tex->tex_data->data;


unsigned char *l = polygon->lightmap->tex_data->data;
l3d_real lx, ly, dlx, dly;

dly = l3d_divrr(int_to_l3d_real(polygon->lightmap->tex_data->height) ,
int_to_l3d_real(new_tex->tex_data->height));
dlx = l3d_divrr(int_to_l3d_real(polygon->lightmap->tex_data->width) ,
int_to_l3d_real(new_tex->tex_data->width));
for(j=0, ly=int_to_l3d_real(0); j<new_tex->tex_data->height; j++, ly+=dly) {
for(i=0, lx=int_to_l3d_real(0); i<new_tex->tex_data->width; i++, lx+=dlx) {

int i_texel=(j*new_tex->tex_data->width + i) * sinfo->bytes_per_pixel;


c2 = new_tex->tex_data->data + i_texel;

int lumel_idx = l3d_real_to_int(ly)*polygon->lightmap->tex_data->width+l3d_real_to_int(lx);

int lumel = l[lumel_idx];


sinfo->light_native(c2, lumel);
}
}

return new_tex;
}

l3d_texture_data *l3d_surface_cache::
find_texdata(l3d_texture_data *texdata)
{

int i;

for(i=0; i<surfaces->num_items; i++) {


if((*surfaces)[i].surface->tex_data == texdata) {

return (*surfaces)[i].surface->tex_data;
}
}

l3d_texture *new_tex;
l3d_surface *new_surface;

new_tex = new l3d_texture;

int next_surface_index = surfaces->next_index();


(*surfaces)[next_surface_index].polygon = NULL;
(*surfaces)[next_surface_index].surface = new_tex;

new_tex->tex_data = texdata;
new_tex->owns_tex_data = false;

return NULL;
}

Surface Cache
The surface cache is responsible for managing surfaces that have already been created. We create a
surface as soon as its associated polygon is drawn for the first time. Since creating a surface is a
time-consuming process (the texture data must be replicated and combined with the light map),
we save already created surfaces for later reuse in the surface cache.
Class l3d_surface_cache is the class implementing the surface cache. Member vari-
able sinfo points to the screen information object, so that we can know the pixel format of the
texture data. Member variable surfaces is a list of pointers to surface objects. Method reset
deletes all surfaces from the cache, thereby resetting the cache and causing all surfaces to be
regenerated as they are needed. Method find_texdata searches the cache for a particular tex-
ture data object, returning a pointer to the data if it was found, or inserting it into the cache and
returning NULL if it was not found.
Method combine_lightmap_and_texmap combines a light map and a texture map,
returning a pointer to the new surface. This method takes a width and a height as parameters,
which must be the new, possibly larger, size of the pre-tiled texture. (We’ll see in the next section
how we compute these size parameters.) First of all, we search to see if we have already created
the surface and have stored it in the cache; if so, we return a pointer to the existing surface. If not,
then we must create the surface. We do this by creating a new texture of the specified width and
height, which, remember, may be larger than the original texture size to allow for pre-tiling of the
texture. We then copy the original polygon texture into the newly allocated texture, repeating the
texture as necessary. Then, we loop through all texels of this pre-tiled texture, simultaneously
stepping through the lumels in the light map. This is a simple scaling operation; we merely stretch
the light map to the dimensions of the pre-tiled texture. Note that since the light map has a lower
resolution than the texture map, several texels will map to one lumel. For each texel, we call
l3d_screen_info::light_native with the intensity of the corresponding lumel; this
changes the color of the texel to correspond to the given light level.
The question remains as to how to compute the size of the pre-tiled texture. This depends, as
we said, on the relative sizes of the original texture and the polygon onto which the texture is
mapped. The routine compute_surface_orientation_and_size in class l3d_
polygon_3d_textured_lightmapped handles exactly this task. So, let’s now take a look
at the class l3d_polygon_3d_textured_lightmapped.

Light Mapped Polygons


Class l3d_polygon_3d_textured_lightmapped extends the textured polygon class to
allow for light mapping.
Listing 2-21: p3_ltex.h
#ifndef __P3_LTEX_H
#define __P3_LTEX_H
#include "../../tool_os/memman.h"

#include "p3_tex.h"
#include "../surface/scache.h"

class l3d_polygon_3d_textured_lightmapped :
virtual public l3d_polygon_3d_textured,
virtual public l3d_texture_computer
{
protected:
l3d_surface_cache *scache;
public:
mutable l3d_texture *surface;
//- mutable since the draw routine for s/w rendering needs to assign
//- this member (the tiled, lit surface) if the surface is not currently
//- in the cache, though the poly is otherwise passed as const. logically
//- the draw routine handles the polygon as a const, physically though
//- it can change the mutable surface member

l3d_surface_cache *get_scache(void) const {return scache; }

l3d_texture_space surface_orientation;
int surface_xsize, surface_ysize;

l3d_polygon_3d_textured_lightmapped(int num_pts, l3d_surface_cache *scache);

~l3d_polygon_3d_textured_lightmapped(void) {
//- DO NOT delete the surface! it belongs to the surface cache,
//- or might not even have been created if h/w accel with 2-pass
//- rendering was used
delete lightmap;
}

l3d_texture *lightmap;

void compute_surface_orientation_and_size(void);

void init_transformed(void);
void transform(const l3d_matrix &m, int count);

void draw(l3d_rasterizer_3d *r ) {
r->draw_polygon_textured_lightmapped(this);
}

l3d_polygon_3d_textured_lightmapped
(const l3d_polygon_3d_textured_lightmapped &r);
l3d_polygon_2d *clone(void);

};

#endif

Listing 2-22: p3_ltex.cc


#include "p3_ltex.h"

#include "../../raster/ras3_sw.h"
#include "../../tool_os/memman.h"

#define EPSILON_VECTOR 0.001

l3d_polygon_3d_textured_lightmapped::l3d_polygon_3d_textured_lightmapped
(int num_pts, l3d_surface_cache *scache):
l3d_polygon_2d(),
l3d_polygon_3d(),
l3d_polygon_3d_clippable(),
l3d_polygon_3d_textured(num_pts)
{
this->scache = scache;
lightmap = new l3d_texture;
lightmap->tex_data = new l3d_texture_data;
lightmap->owns_tex_data = true;
lightmap->tex_data->width = 64;
lightmap->tex_data->height = 64;
lightmap->tex_data->data = new unsigned char[lightmap->tex_data->width *
lightmap->tex_data->height];

int i,j;
for(j=0;j<lightmap->tex_data->height;j++) {
for(i=0;i<lightmap->tex_data->width;i++) {
lightmap->tex_data->data[i + j*lightmap->tex_data->width] = MAX_LIGHT_LEVELS/4;
}
}

surface = NULL;
}

void l3d_polygon_3d_textured_lightmapped::compute_surface_orientation_and_size
(void)
{
O = texture->O.original;
U = texture->U.original - O;
V = texture->V.original - O;

l3d_real minu,minv,maxu,maxv;

int i= 0;

l3d_point tex_point;
tex_point =
world_to_tex_matrix(*texture)
|
(**vlist) [(*ivertices)[i].ivertex].original;
u = tex_point.X_ = l3d_divrr(tex_point.X_, tex_point.W_);
v = tex_point.Y_ = l3d_divrr(tex_point.Y_, tex_point.W_);

maxu = minu = tex_point.X_;
maxv = minv = tex_point.Y_;

U = texture->U.original - texture->O.original;
V = texture->V.original - texture->O.original;
#define VTX(i) ((**(vlist))[ (*(clip_ivertices))[i].ivertex ].transformed)

for(i=1; i < ivertices->num_items; i++) {

l3d_real u,v;

tex_point =
world_to_tex_matrix(*texture)
|
(**vlist) [(*ivertices)[i].ivertex].original;
u = tex_point.X_ = l3d_divrr(tex_point.X_, tex_point.W_);
v = tex_point.Y_ = l3d_divrr(tex_point.Y_, tex_point.W_);

if(u<minu) minu=u;
if(u>maxu) maxu=u;
if(v<minv) minv=v;
if(v>maxv) maxv=v;
}

int iminu, iminv, imaxu, imaxv;

iminu = ifloor(minu);
iminv = ifloor(minv);
imaxu = iceil(maxu);
imaxv = iceil(maxv);

int u_tilecount = imaxu - iminu;


int v_tilecount = imaxv - iminv;

int u_tilecount_pow2 = 1 << iceil(float_to_l3d_real(log(u_tilecount) / log(2)));


int v_tilecount_pow2 = 1 << iceil(float_to_l3d_real(log(v_tilecount) / log(2)));
imaxu += u_tilecount_pow2 - u_tilecount;
imaxv += v_tilecount_pow2 - v_tilecount;

surface_xsize = texture->tex_data->width * (imaxu - iminu);


surface_ysize = texture->tex_data->height * (imaxv - iminv);


surface_orientation.O.original.set(O.X_ + U.X_*iminu + V.X_*iminv,


O.Y_ + U.Y_*iminu + V.Y_*iminv,
O.Z_ + U.Z_*iminu + V.Z_*iminv,
int_to_l3d_real(1));
surface_orientation.U.original.set(O.X_ + U.X_*imaxu + V.X_*iminv,
O.Y_ + U.Y_*imaxu + V.Y_*iminv,
O.Z_ + U.Z_*imaxu + V.Z_*iminv,
int_to_l3d_real(1));
surface_orientation.V.original.set(O.X_ + U.X_*iminu + V.X_*imaxv,
O.Y_ + U.Y_*iminu + V.Y_*imaxv,
O.Z_ + U.Z_*iminu + V.Z_*imaxv,
int_to_l3d_real(1));

assign_tex_coords_from_tex_space(surface_orientation);
}

void l3d_polygon_3d_textured_lightmapped::init_transformed(void) {
l3d_polygon_3d_textured::init_transformed();

surface_orientation.O.transformed = surface_orientation.O.original;
surface_orientation.U.transformed = surface_orientation.U.original;
surface_orientation.V.transformed = surface_orientation.V.original;
}

void l3d_polygon_3d_textured_lightmapped::transform(const l3d_matrix &m, int count) {


l3d_polygon_3d_textured::transform(m,count);

surface_orientation.O.transformed = m | surface_orientation.O.transformed;
surface_orientation.U.transformed = m | surface_orientation.U.transformed;
surface_orientation.V.transformed = m | surface_orientation.V.transformed;
}

l3d_polygon_2d* l3d_polygon_3d_textured_lightmapped::clone(void) {
return new l3d_polygon_3d_textured_lightmapped(*this);
}

l3d_polygon_3d_textured_lightmapped::l3d_polygon_3d_textured_lightmapped
(const l3d_polygon_3d_textured_lightmapped &r)
: l3d_polygon_2d(r),
l3d_polygon_3d(r),
l3d_polygon_3d_textured(r),
l3d_polygon_3d_clippable(r),
l3d_texture_computer(r)
{
scache = r.scache;
surface = r.surface;
surface_orientation = r.surface_orientation;
surface_xsize = r.surface_xsize;
surface_ysize = r.surface_ysize;
lightmap = r.lightmap;
}

The member variable scache is a pointer to the surface cache responsible for managing the sur-
face of the polygon. The member variable lightmap points to the light map, which is a small
texture containing light intensities. The member variable surface points to the surface itself,
which contains the tiled and lighted version of the texture. The surface is created by the surface
cache, which is also responsible for deleting the surface.

The method compute_surface_orientation_and_size determines how many
repetitions of the original texture appear on the polygon, based on the original texture space defi-
nition. This is done by applying the world space to texture space matrix to every vertex in the
polygon, and taking the maximum and minimum resulting u and v values. Then, since a distance
of 1 in texture space indicates one repetition of the texture, the number of repetitions horizontally
and vertically can be computed easily as the difference between the maximum u or v value and the
minimum. We round the horizontal and vertical repetition counts up to whole powers of two, then
multiply them by the width and height of the original texture, respectively. We store the results in
surface_xsize and surface_ysize. These variables represent the total pixel dimensions
of the pre-tiled surface, which is used by the method combine_lightmap_and_texmap.
The surface, like all textures, has dimensions that are powers of two, since this is what the original
software rasterizer requires.
After computing the size of the surface, we also have to define its orientation, which we store
in member variable surface_orientation, a texture space. We define the orientation of the
surface by defining its texture space as follows. The origin of the surface’s texture space is the
location in the original texture space with the minimum rounded (u,v) values for the polygon. The
tip of the u axis of the surface’s texture space is the location in the original texture space with the
maximum rounded u and minimum rounded v value. The tip of the v axis of the surface’s texture
space is the location in the original texture space with the minimum rounded u and maximum
rounded v value. In other words, the origin of the surface’s texture space lies on integer coordi-
nates within the original texture space. The u axis points straight in the u direction in the original
texture space; the v axis points straight in the v direction. Both u and v axes have tips that also lie
exactly on integer coordinates in the original texture space. This special integer alignment of the
surface’s texture space, combined with the fact that the surface contains an integer and not a frac-
tional number of repetitions of the texture, means that the relative positioning of the pre-tiled
surface and the original texture is the same—using the pre-tiled surface and the new surface orien-
tation appears exactly the same as using the original texture and the old texture orientation.
With the new size and orientation of the surface, we must also assign new texture coordinates
to the polygon’s vertices. Since the surface and its u and v axes are larger than in the original tex-
ture, the texture coordinates based on the surface will be different than (smaller than) the texture
coordinates based on the original texture. We simply call assign_tex_coords_from_
tex_space with the new surface orientation as a parameter.
The other members of class l3d_polygon_3d_textured_lightmapped are stan-
dard and need no lengthy explanation. The clone method and the copy constructor provide for
duplication of the polygon through an abstract pointer. The transform and init_trans-
formed methods have been extended to transform the light map orientation. The overridden
draw method causes the light mapped polygon to draw itself, by asking its rasterizer to do so.
Let’s now look at the methods for rasterizing light mapped polygons. First, we look at software
light mapping, then look at the same techniques with hardware acceleration. Surprisingly, this is
one of the cases where software rasterization is simpler than hardware rasterization.

Software Rasterization of Light Maps


Because the surface cache has already pre-tiled and pre-lit the texture, the software rasterizer
requires practically no changes to support light mapping. The method draw_polygon_tex-
tured_lightmapped in class l3d_rasterizer_3d_sw_imp handles rasterization of
light mapped polygons.
First, we must have access to the pre-lit, pre-tiled surface. If the surface of the polygon in
member variable surface is not currently assigned (either because the polygon is being drawn
for the first time or because the surface was created but later discarded from the cache), then we
create the surface by calling combine_lightmap_and_texmap, using the precomputed sur-
face width and height. Then, we assign the surface orientation, stored in the polygon’s variable
surface_orientation, to the orientation stored within the surface itself. Finally, we simply
assign the polygon’s texture member variable to be the same as the new surface variable,
and draw the polygon using the existing texture mapping routine. The surface is pre-tiled, pre-lit,
and has the correct orientation. We can therefore treat it exactly like any other texture, and
rasterize the polygon just like a normal texture mapped polygon.
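Put together, the software routine amounts to the following steps. This is a sketch of the logic just described, with simplified member access, and not the verbatim l3d_rasterizer_3d_sw_imp source:

//- Sketch: software rasterization of a light mapped polygon. The polygon is
//- drawn as an ordinary textured polygon whose texture is the pre-tiled,
//- pre-lit surface managed by the surface cache.
void draw_lightmapped_sketch(l3d_polygon_3d_textured_lightmapped *p,
                             l3d_rasterizer_3d *r)
{
  if(p->surface == NULL) {
    //- first use, or the surface was discarded from the cache: rebuild it
    p->surface = p->get_scache()->combine_lightmap_and_texmap(
      p->surface_xsize, p->surface_ysize, p);
    //- orient the new surface using the precomputed surface orientation
    p->surface->O = p->surface_orientation.O;
    p->surface->U = p->surface_orientation.U;
    p->surface->V = p->surface_orientation.V;
  }
  //- treat the surface exactly like a texture and reuse the existing
  //- texture mapped rasterization path
  p->texture = p->surface;
  r->draw_polygon_textured(p);
}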

Hardware Light Mapping


With the presence of hardware acceleration, we can use a different, more straightforward strategy
for light mapping polygons. Since 3D hardware can rasterize polygons so quickly, we simply
make two passes over every polygon. First, we draw the polygon textured but without lighting.
Then, we draw the same polygon again, this time using the light map and not replacing the existing
textured pixels, but instead altering their intensities based on the intensities stored in the light map.

Figure 2-46: With hardware, we draw the polygon in a first pass with texture but without light map applied (left). In this first pass, the texture may be tiled. In a second pass, we apply a light map (middle) to the existing textured pixels (right). The light map is not tiled.

This two-pass approach requires three main modifications to our texture mapping strategy:
1. We don’t use surfaces. In other words, we don’t combine textures and light maps into one sur-
face. Therefore, we use the surface cache to store textures and light maps separately, not
surfaces.
2. We must recompute and reassign the polygon’s texture coordinates to draw the texture map
correctly, then we must restore the original texture coordinates, originally computed in com-
pute_surface_orientation_and_size, which define the position and orientation
of the light map.
3. We must tell Mesa to blend the texture map and the light map.
The Mesa rasterizer draws light mapped polygons in routine draw_polygon_textured_
lightmapped. The routine first looks for the texture data in the cache, creating the texture if it
was not found. Texture creation takes place via the OpenGL calls we saw earlier when discussing
the Mesa texture mapping routines. Next comes the tricky part: we must recompute the polygon’s
texture coordinates twice, once for the texture and once for the light map. The light maps have sep-
arate sizes and orientations, leading to two separate sets of texture coordinates.
To recompute the texture coordinates, we use the pixel-level approach discussed earlier. Here,
we are already in the rasterization stage of the pipeline. The polygon has been projected into 2D
and also clipped in 2D. Remember, the 2D clipping operation can introduce new 2D vertices
which do not directly result from a 3D to 2D projection operation, meaning that for such vertices
we have no 3D information. This means that we only have 2D information, but we now need to
compute the texture coordinates in 3D. The Mesa rasterizer code uses the screen space to texture
space formulas developed earlier. Note that it would have also been possible to use the reverse
projection (covered earlier in the discussion of the Mesa texture mapping routines) to retrieve the
3D world coordinates from the pixel coordinates, then use the world-to-texture matrix to find the
texture coordinates, but it is easier to use the pixel-level formulas, since they go directly from 2D
screen coordinates to texture coordinates.
Let’s step through the new computation of the texture coordinates for the texture map.
First, we start a loop for all post-projection, clipped 2D vertices:
for(int i=0; i<p_poly->clip_ivertices->num_items; i++) {
Then, we compute the simple projection coordinates from the actual screen coordinates. This
undoes the effect of the field of view correction, the y axis reversal, and the top-left instead of
screen-centered origin. We do this computation because the simple projection coordinates are
what we need for the screen space to texture space formulas. Notice that to perform this computa-
tion, we need the field of view terms and the screen size, which is why these variables are
referenced in the l3d_texture_computer class.
float px =
( (**p_poly->vlist) [(*p_poly->clip_ivertices)[i].ivertex].transformed.X_
- *screen_xsize/2 )
/ (*screen_xsize * *fovx);

float py =
(*screen_ysize/2
-
(**p_poly->vlist) [(*p_poly->clip_ivertices)[i].ivertex].transformed.Y_ )
/
(*screen_ysize * *fovy);

Next, given the simple projection coordinates, we simply implement the screen space to texture
space conversion formulas derived earlier. We compute u, v, and z coordinates for the current ver-
tex—in other words, the horizontal and vertical positions in the texture space of the texture map,
and the camera space z coordinate.
float ox = p_poly->texture->O.transformed.X_;
float oy = p_poly->texture->O.transformed.Y_;
float oz = p_poly->texture->O.transformed.Z_;
float ux = p_poly->texture->U.transformed.X_ - ox;
float uy = p_poly->texture->U.transformed.Y_ - oy;
float uz = p_poly->texture->U.transformed.Z_ - oz;
float vx = p_poly->texture->V.transformed.X_ - ox;
float vy = p_poly->texture->V.transformed.Y_ - oy;
float vz = p_poly->texture->V.transformed.Z_ - oz;
float u = (px*(oy*vz - oz*vy) + py*(oz*vx - ox*vz) - oy*vx + ox*vy)
/ (px*(uz*vy - uy*vz) + py*(ux*vz - uz*vx) + uy*vx - ux*vy);
float v = (px*(oy*uz - oz*uy) + py*(oz*ux - ox*uz) - oy*ux + ox*uy)
/ ((uy*vz - uz*vy)*px + py*(uz*vx - ux*vz) + ux*vy - uy*vx);
float z = (ox*(uz*vy - uy*vz) + oy*(ux*vz - uz*vx) + oz*(uy*vx - ux*vy))
/ (px*(uz*vy - uy*vz) + py*(ux*vz - uz*vx) + uy*vx - ux*vy);
The last step is to store these values with the vertex (more precisely, the vertex index). We also
save the original texture coordinates, defining the light map orientation, so we can restore them
later when we are drawing the light map.
old_texcoords[old_texcoords.next_index()] =
((l3d_polygon_ivertex_textured *)&(*p_poly->clip_ivertices)[i])
->tex_coord;

((l3d_polygon_ivertex_textured *)&(*p_poly->clip_ivertices)[i])
->tex_coord.set(float_to_l3d_real(u),
float_to_l3d_real(v),
float_to_l3d_real(z),
float_to_l3d_real(1.0));

Once we have computed the u, v, and z values, based on the texture map orientation, for all vertex
indices within the polygon, we then draw the polygon using the previous Mesa texture mapping
routine, draw_polygon_textured.
At this point, we now have the texture mapped polygon drawn. Next, we make a second pass
to add in the effect of the light map. Before we make the second pass, though, we must tell
OpenGL that we want the next texture that is drawn, which will be the light map, to not overwrite
the current pixels, but instead to alter their intensity. The following lines do this:
//- set up mag/min filters to make light maps smoother
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

//- set up blend function to combine light map and text map
glEnable(GL_BLEND);
glBlendFunc(GL_ZERO, GL_SRC_ALPHA);

The two calls to glTexParameteri tell OpenGL to smooth the light maps. A texture map or
light map is (practically) always displayed on-screen either larger or smaller than its original
dimensions; in other words, it is very rare that one pixel in the texture map maps to exactly one and
only one pixel in the final image. More often, a texel in the texture map maps to either less than one
or more than one pixel in the final image. This leads to either stretching of the texture image,
which OpenGL calls maxification, or squashing of the image, which OpenGL calls minification.
In such cases, we can either choose the nearest texel to display, which is what we have been doing
so far in both the software and Mesa rasterizers, or we can compute an average of the nearest texel
and the surrounding texels, displaying the averaged color. Doing such averaging is, of course,
slower than simply choosing the nearest texel, but it also generally results in a better quality
image, smoothing out the changes in a texture map and making the individual texels harder to dis-
cern. Since light maps are typically low resolution, and since we are using Mesa mainly as an
interface to graphics acceleration hardware, we go ahead and turn on the averaging, assuming that
the underlying graphics acceleration hardware is capable of handling the additional computations.
The parameter GL_LINEAR tells OpenGL to perform a weighted linear average on the texels.
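To make the effect of GL_LINEAR more concrete, the following is a minimal software sketch of bilinear filtering, the kind of weighted average the hardware performs when linear filtering is enabled. The function and the grayscale texel layout are hypothetical illustrations only; this is not part of l3d or Mesa, and with hardware acceleration we never execute such code ourselves.

//- hypothetical illustration of bilinear filtering (GL_LINEAR): sample a
//- grayscale texture at the fractional texel position (fu,fv) by blending
//- the four nearest texels, weighted by the fractional distances.
//- not part of l3d or Mesa; the graphics hardware does this averaging for us.
unsigned char sample_bilinear(const unsigned char *texels,
                              int width, int height,
                              float fu, float fv)
{
  int u0 = (int)fu, v0 = (int)fv;
  int u1 = (u0+1 < width ) ? u0+1 : u0;   //- clamp at the texture edge
  int v1 = (v0+1 < height) ? v0+1 : v0;
  float du = fu - u0, dv = fv - v0;       //- fractional parts become blend weights

  float top    = texels[v0*width + u0]*(1.0f-du) + texels[v0*width + u1]*du;
  float bottom = texels[v1*width + u0]*(1.0f-du) + texels[v1*width + u1]*du;
  return (unsigned char)(top*(1.0f-dv) + bottom*dv);
}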

Figure 2-47: Normal texture mapping without linear filtering applied. Notice that the
individual texels of the texture image are clearly visible.

Figure 2-48: Texture mapping with linear filtering applied. Notice that the individual
texels are no longer visible, but that the image is slightly blurrier as a result.

Then, we also need to tell OpenGL to blend the light map with the existing pixels from the
texture map which were already drawn in the first pass. To do this, we call glEnable(GL_
BLEND), which enables some sort of blending for the coming drawing operations, and then call
glBlendFunc(GL_ZERO, GL_SRC_ALPHA) to specify exactly how the already existing
pixels should be blended with the new pixels to be drawn from the light map. The two parameters
to glBlendFunc control the relative weight of the new (or source) and the existing (or destina-
tion) pixels, respectively. Thus, the first parameter GL_ZERO means that we display none of the
color of the light map. The second parameter GL_SRC_ALPHA means that we display the existing
color from the texture map, but using the value from the source pixel (the light map) as an alpha
value. In computer graphics, an alpha value is simply a transparency value; the greater the alpha,
the more transparent a pixel is. Thus, by treating the light map values as transparency values, we
can “see through” the brighter areas of the light map to see more of the underlying light from the
texture; in darker areas of the light map, we see less or none of the original texture. This is exactly
the desired illumination effect.
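To spell out the arithmetic that glBlendFunc(GL_ZERO, GL_SRC_ALPHA) requests, the following sketch models the blend equation for a single pixel. The function and its parameter names are hypothetical; OpenGL performs this computation in the frame buffer during the second pass, so we never write such code in the rasterizer ourselves.

//- hypothetical model of the blend configured by glBlendFunc(GL_ZERO, GL_SRC_ALPHA):
//- result = (incoming light map color * GL_ZERO) + (existing textured pixel * incoming alpha).
//- the incoming color contributes nothing of its own; it only scales what is already
//- in the frame buffer. all values are normalized to the range 0..1.
float blend_lightmap_pixel(float textured_pixel, float lightmap_alpha)
{
  float src_factor = 0.0f;             //- GL_ZERO: discard the source color
  float dst_factor = lightmap_alpha;   //- GL_SRC_ALPHA: scale the destination
  return lightmap_alpha*src_factor + textured_pixel*dst_factor;
}

With this weighting, a full-intensity lumel leaves the first-pass texture color unchanged, while a zero-intensity lumel darkens it to black, which is the illumination behavior just described.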
Then, the rest of the routine draw_polygon_textured_lightmapped draws the same
polygon again, only this time with the light map as the texture. First, we look for the light map data
in the surface cache, creating it if necessary with OpenGL calls. Then, we restore the original light
map texture coordinates for each vertex, which we saved during the first texture coordinate
recomputation. Finally, we draw the polygon just as we did in the draw_polygon_textured
routine, performing a reverse projection using the polygon’s plane to obtain the pre-projection
coordinates Mesa needs to do perspective correct texture mapping.

Sample Program: lightmap


The last sample program in this chapter illustrates the use of light mapping and texture mapping
combined. We also see an implementation of the actual computation of light intensities for the
light maps, by using the diffuse reflection equations discussed earlier.
The program lightmap displays a number of static pyramid objects, all initially dark. You
can fly around the scene with the standard navigation keys provided by class l3d_pipe-
line_world. Press p to place a point light at the current camera position. This causes a
computation of light intensity for every lumel of every light map in the scene. Then, we reload all
light maps by resetting the surface cache. We, therefore, see the effect of the light immediately.
You can place any number of lights in the scene. The lights themselves are not stored in the pro-
gram; only their effect on the light maps is stored.
The program actually displays light mapping and texture mapping simultaneously. The effect
of the light mapping is easier to see with a blank texture, which is the default. To see light mapping
with a real texture, type cp test.ppm.colored test.ppm when in the program directory. This copies
a colored image into the file test.ppm, which is the texture file used by the program. Then run
the program as usual. To return to the blank texture image, type cp test.ppm.white test.ppm.

Figure 2-49: Output from the sample program lightmap.

Listing 2-23: shapes.h, the declaration of the light mapped pyramid objects for sample
program lightmap
#include "../lib/geom/object/object3d.h"
#include "../lib/geom/polygon/p3_ltex.h"
#include "../lib/geom/texture/texture.h"
#include "../lib/geom/texture/texload.h"

class pyramid:public l3d_object {


public:
pyramid(l3d_texture_loader *l, l3d_surface_cache *scache);
virtual ~pyramid(void);
};

Listing 2-24: shapes.cc, the implementation of the light mapped pyramid objects for sample
program lightmap
#include "shapes.h"

#include <stdlib.h>
#include <string.h>

pyramid::pyramid(l3d_texture_loader *l, l3d_surface_cache *scache) :


l3d_object(4)
{
(*vertices)[0].original.set(float_to_l3d_real(0.),float_to_l3d_real(0.),float_to_l3d_real(0.),
float_to_l3d_real(1.));
(*vertices)[1].original.set(float_to_l3d_real(50.0),float_to_l3d_real(0.),float_to_l3d_real(0.),
float_to_l3d_real(1.));
(*vertices)[2].original.set(float_to_l3d_real(0.),float_to_l3d_real(50.),float_to_l3d_real(0.),
float_to_l3d_real(1.));
(*vertices)[3].original.set(float_to_l3d_real(0.),float_to_l3d_real(0.),float_to_l3d_real(50.),
float_to_l3d_real(1.));

l3d_polygon_3d_textured_lightmapped *p;
int pi;

pi = polygons.next_index();
polygons[pi] = p = new l3d_polygon_3d_textured_lightmapped(3,scache);
polygons[pi]->vlist = &vertices;
(*(polygons[pi]->ivertices))[polygons[pi]->ivertices->next_index()].ivertex=0;
(*(polygons[pi]->ivertices))[polygons[pi]->ivertices->next_index()].ivertex=1;
(*(polygons[pi]->ivertices))[polygons[pi]->ivertices->next_index()].ivertex=3;
l->load("test.ppm");
p->texture = new l3d_texture;
p->texture->tex_data = new l3d_texture_data;
p->texture->tex_data->width = l->width;
p->texture->tex_data->height = l->height;
p->texture->tex_data->data = l->data;
p->texture->owns_tex_data = true;
p->texture->O = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[0].ivertex];
p->texture->U = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[1].ivertex];
p->texture->V = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[2].ivertex];
polygons[pi]->compute_center();polygons[pi]->compute_sfcnormal();
p->assign_tex_coords_from_tex_space(*p->texture);
p->compute_surface_orientation_and_size();
p->plane.align_with_point_normal(p->center.original, normalized(p->sfcnormal.original -
p->center.original));

pi = polygons.next_index();
polygons[pi] = p = new l3d_polygon_3d_textured_lightmapped(3,scache);
polygons[pi]->vlist = &vertices;

(*(polygons[pi]->ivertices))[polygons[pi]->ivertices->next_index()].ivertex=2;
(*(polygons[pi]->ivertices))[polygons[pi]->ivertices->next_index()].ivertex=3;
(*(polygons[pi]->ivertices))[polygons[pi]->ivertices->next_index()].ivertex=1;

p->texture = new l3d_texture;


p->texture->tex_data = new l3d_texture_data;
p->texture->tex_data->width = l->width;
p->texture->tex_data->height = l->height;
p->texture->tex_data->data = l->data;
p->texture->owns_tex_data = true;

p->texture->O = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[0].ivertex];
p->texture->U = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[1].ivertex];
p->texture->V = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[2].ivertex];

polygons[pi]->compute_center();polygons[pi]->compute_sfcnormal();
p->assign_tex_coords_from_tex_space(*p->texture);
p->compute_surface_orientation_and_size();
p->plane.align_with_point_normal(p->center.original, normalized(p->sfcnormal.original -
p->center.original));

pi = polygons.next_index();
polygons[pi] = p = new l3d_polygon_3d_textured_lightmapped(3,scache);
polygons[pi]->vlist = &vertices;
(*(polygons[pi]->ivertices))[polygons[pi]->ivertices->next_index()].ivertex=0;
(*(polygons[pi]->ivertices))[polygons[pi]->ivertices->next_index()].ivertex=2;
(*(polygons[pi]->ivertices))[polygons[pi]->ivertices->next_index()].ivertex=1;

p->texture = new l3d_texture;


p->texture->tex_data = new l3d_texture_data;
p->texture->tex_data->width = l->width;
p->texture->tex_data->height = l->height;
p->texture->tex_data->data = l->data;
p->texture->owns_tex_data = true;

p->texture->O = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[0].ivertex];
p->texture->U = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[1].ivertex];
p->texture->V = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[2].ivertex];

polygons[pi]->compute_center();polygons[pi]->compute_sfcnormal();
p->assign_tex_coords_from_tex_space(*p->texture);
p->compute_surface_orientation_and_size();
p->plane.align_with_point_normal(p->center.original, normalized(p->sfcnormal.original -
p->center.original));

pi = polygons.next_index();
polygons[pi] = p = new l3d_polygon_3d_textured_lightmapped(3,scache);
polygons[pi]->vlist = &vertices;
(*(polygons[pi]->ivertices))[polygons[pi]->ivertices->next_index()].ivertex=3;
(*(polygons[pi]->ivertices))[polygons[pi]->ivertices->next_index()].ivertex=2;
(*(polygons[pi]->ivertices))[polygons[pi]->ivertices->next_index()].ivertex=0;

p->texture = new l3d_texture;


p->texture->tex_data = new l3d_texture_data;
p->texture->tex_data->width = l->width;
p->texture->tex_data->height = l->height;
p->texture->tex_data->data = l->data;
p->texture->owns_tex_data = true;

p->texture->O = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[0].ivertex];
p->texture->U = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[1].ivertex];
p->texture->V = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[2].ivertex];

polygons[pi]->compute_center();polygons[pi]->compute_sfcnormal();
p->assign_tex_coords_from_tex_space(*p->texture);
p->compute_surface_orientation_and_size();
p->plane.align_with_point_normal(p->center.original, normalized(p->sfcnormal.original -
p->center.original));

num_xforms = 1;
modeling_xforms[0].set
( float_to_l3d_real(1.), float_to_l3d_real(0.), float_to_l3d_real(0.), float_to_l3d_real(0.),
float_to_l3d_real(0.), float_to_l3d_real(1.), float_to_l3d_real(0.), float_to_l3d_real(0.),

float_to_l3d_real(0.), float_to_l3d_real(0.), float_to_l3d_real(1.), float_to_l3d_real(0.),
float_to_l3d_real(0.), float_to_l3d_real(0.), float_to_l3d_real(0.), float_to_l3d_real(1.) );

modeling_xform=
modeling_xforms[0];
}
pyramid::~pyramid(void) {
for(register int i=0; i<polygons.num_items; i++) {delete polygons[i]; }
}

Listing 2-25: main.cc, the main program file for sample program lightmap
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <math.h>

#include "../lib/geom/object/object3d.h"
#include "../lib/tool_2d/screen.h"
#include "../lib/tool_os/dispatch.h"
#include "../lib/raster/rasteriz.h"
#include "../lib/tool_2d/scrinfo.h"
#include "../lib/geom/world/world.h"
#include "../lib/system/fact0_2.h"
#include "../lib/geom/texture/texture.h"
#include "../lib/geom/texture/tl_ppm.h"
#include "../lib/pipeline/pi_wor.h"

#include "shapes.h"

#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <math.h>
#include <stdarg.h>

class my_pipeline : public l3d_pipeline_world {


public:
my_pipeline(l3d_world *w) :
l3d_pipeline_world(w) {}
void key_event(int ch);
};

class my_world :

public l3d_world,
public l3d_texture_computer
{
public:
my_world(void);
void update_all(void);

virtual ~my_world(void) {
delete texture_loader;
delete scache;
}
l3d_texture_loader *texture_loader;
l3d_surface_cache *scache;

int compute_light_level(l3d_real u, l3d_real v);


void place_lamp(void);
void reset_surface_cache(void) {
scache->reset();
}
};

void my_pipeline::key_event(int c) {
l3d_pipeline_world::key_event(c);

switch(c) {
case 'p': {
my_world *w;
w = dynamic_cast<my_world *>(world);
if(w) {
w->place_lamp();
w->reset_surface_cache();
}
}
}
}

main() {
l3d_dispatcher *d;
my_pipeline *p;
my_world *w;

factory_manager_v_0_2.choose_factories();
d = factory_manager_v_0_2.dispatcher_factory->create();

w = new my_world();
p = new my_pipeline(w);
d->pipeline = p;
d->event_source = w->screen;

d->start();

delete p;
delete d;
delete w;
}

void my_world::update_all(void) {
l3d_world::update_all();
}

my_world::my_world(void)
: l3d_world(400,300)
{

rasterizer_3d_imp->fovx = &(camera->fovx);
rasterizer_3d_imp->fovy = &(camera->fovy);
rasterizer_3d_imp->screen_xsize = &(screen->xsize);
rasterizer_3d_imp->screen_ysize = &(screen->ysize);

l3d_screen_info *si = screen->sinfo;


texture_loader = new l3d_texture_loader_ppm(screen->sinfo);

camera->VRP.set(0,0,-50,0);
camera->near_z = float_to_l3d_real(5.5);
camera->far_z = int_to_l3d_real(500);

scache = new l3d_surface_cache(screen->sinfo);

int i,j,k,onum;

i=10; j=0; k=20;


k=0;

for(i=1; i<200; i+=50) {


objects[onum = objects.next_index()] = new pyramid(texture_loader,scache);

if (objects[onum]==NULL) exit(1);
objects[onum]->modeling_xforms[0].set
( int_to_l3d_real(1), int_to_l3d_real(0), int_to_l3d_real(0), int_to_l3d_real(i),
int_to_l3d_real(0), int_to_l3d_real(1), int_to_l3d_real(0), int_to_l3d_real(1),
int_to_l3d_real(0), int_to_l3d_real(0), int_to_l3d_real(1), int_to_l3d_real(1),
int_to_l3d_real(0), int_to_l3d_real(0), int_to_l3d_real(0), int_to_l3d_real(1) );
objects[onum]->modeling_xform =
objects[onum]->modeling_xforms[0] ;
}

screen->refresh_palette();
}

int my_world::compute_light_level
(l3d_real u, l3d_real v)
{
l3d_real x,y,z;

x = O.X_ + l3d_mulri(U.X_, u) + l3d_mulri(V.X_, v);


y = O.Y_ + l3d_mulri(U.Y_, u) + l3d_mulri(V.Y_, v);
z = O.Z_ + l3d_mulri(U.Z_, u) + l3d_mulri(V.Z_, v);

l3d_point aLight(0,0,0,1);

l3d_vector L(aLight.X_ - x ,
aLight.Y_ - y ,
aLight.Z_ - z ,
int_to_l3d_real(0)),

N(cross(U,V));
l3d_real intensity;

l3d_real f_att=l3d_divrr(int_to_l3d_real(1) ,
(float_to_l3d_real(0.1)+float_to_l3d_real(0.02*sqrt(l3d_real_to_float(dot(L,L))))));

if (f_att>int_to_l3d_real(1)) f_att=int_to_l3d_real(1);

intensity = l3d_mulrr ( l3d_mulrr(int_to_l3d_real(MAX_LIGHT_LEVELS/2) , f_att) ,


dot(normalized(N),normalized(L)) );
if (intensity>int_to_l3d_real(MAX_LIGHT_LEVELS/2)) {intensity = int_to_l3d_real(MAX_LIGHT_LEVELS/2);
}

if (intensity<int_to_l3d_real(0)) {intensity = int_to_l3d_real(0); }

return l3d_real_to_int(intensity);
}

void my_world::place_lamp(void)
{
int iObj, iFacet;

l3d_vector facet_to_cam_vec;
l3d_vector N;

for(iObj=0; iObj<objects.num_items; iObj++) {


l3d_object *object = objects[iObj];

l3d_polygon_3d_clippable *pc;

object->init_nonculled_list();
object->vertices->num_varying_items = 0;
object->transform_stage = 0;
for(register int ivtx=0; ivtx<object->vertices->num_fixed_items; ivtx++) {
(*(object->vertices))[ivtx].reset();
}

l3d_polygon_3d_node *n;
n = object->nonculled_polygon_nodes;
while(n) {
n->polygon->init_clip_ivertices();
n->polygon->init_transformed();
n=n->next;
}

if (object->num_xforms) {
object->transform(object->modeling_xform);
}

object->camera_transform(camera->viewing_xform);

n = object->nonculled_polygon_nodes;
while(n) {
l3d_polygon_3d_textured_lightmapped *pt;

pt = dynamic_cast<l3d_polygon_3d_textured_lightmapped *>
(n->polygon);
if(pt) {
int lumel_y,lumel_x;

O = pt->surface_orientation.O.transformed;
U = pt->surface_orientation.U.transformed - O;
V = pt->surface_orientation.V.transformed - O;
printf("obj %d ", iObj);
O.print();
U.print();
V.print();

for(lumel_y=0; lumel_y < pt->lightmap->tex_data->height; lumel_y++) {


for(lumel_x=0; lumel_x<pt->lightmap->tex_data->width;lumel_x++) {
l3d_real u,v;
u = l3d_divri(int_to_l3d_real(lumel_x),
pt->lightmap->tex_data->width);
v = l3d_divri(int_to_l3d_real(lumel_y),
pt->lightmap->tex_data->height);
int new_light =
(int)(pt->lightmap->tex_data->data
[ lumel_y * pt->lightmap->tex_data->width + lumel_x ]) +
compute_light_level(u, v);

if ( new_light > MAX_LIGHT_LEVELS ) new_light = MAX_LIGHT_LEVELS;

pt->lightmap->tex_data->data
[ lumel_y * pt->lightmap->tex_data->width + lumel_x ] = new_light;

}
}
}

n = n->next;
}
}
}

The pyramid class is the same as in the previous program textest; the only differences are that
the pyramid does not rotate and that the polygons used to make up the pyramid are of type
l3d_polygon_3d_textured_lightmapped. Since all important functions on polygons
are virtual functions, we need only create instances of our new light mapped polygons and assign
them to our 3D objects. Notice that the constructor of the pyramid, as well as the constructor of the
light mapped polygons, require a surface cache as a parameter. The surface cache is created and
destroyed by our custom world class, my_world.
The custom world class offers two new methods for light mapping: place_lamp and com-
pute_light_level. Method place_lamp transforms all polygons in the scene into camera
space. Since all polygons are light mapped polygons, this means that all light map orientations are
also transformed into camera space. Then, for each lumel in each light map, we compute the light
level by calling compute_light_level. This illuminates all lumels of all polygons in the
scene.
Method compute_light_level first converts the (u,v) coordinates of the lumel back
into camera space, by using the light map’s orientation vectors (as a coordinate system), which
have also been translated into camera space. We then compute the light intensity of the lumel
using the diffuse lighting equation presented earlier. The light position is set to be at the origin—in
other words, the location of the camera. We clamp the maximum light intensity from the lighting
equation only to be half of the maximum allowable light intensity. This ensures that cumulative
applications of light can still cause an increase in intensity.

NOTE We convert from 2D lumel (u,v) space into 3D camera space (3D space with the
camera at the origin in the standard orientation), instead of converting from 2D lumel space
into 3D world space (3D space with the camera somewhere other than the origin and ori-
ented arbitrarily), then perform the lighting calculations in 3D camera space. It would also be
possible to convert from lumel space to world space, by applying the inverse of the camera
transformation matrix to go from lumel space to camera space to world space. Then, we
would perform the lighting computations in world space. But this extra conversion to world
space is unnecessary. We just need to convert from 2D lumel space into some 3D space where
we can determine the 3D light positions, 3D light vectors, and 3D polygon surface normal
vectors, so that we can perform the 3D lighting computations. Both camera space and world
space are fine for this purpose. In this particular example, it is simply easier to use camera
space coordinates, though you may find it more intuitive to think of lighting computations as
occurring in world space. As long as we can examine the relative 3D positions of lights, light
vectors, and normal vectors, it doesn’t matter which 3D coordinate system we use to do the
lighting computations.

Shadows and Light Maps


The lighting computation in the last sample program does not take into account the fact that some
objects obstruct other objects. Light does not travel through opaque objects, creating areas of
shadow on objects behind the obstructing object. Shadow computation can be done by performing
an additional test before lighting each lumel. We check to see if any polygon from any object lies
between the light source and the camera space (or world space) coordinates of the lumel. If there is
such a polygon, the light rays from the light source are blocked and do not reach the lumel in ques-
tion; therefore, we proceed computing the light for the next lumel. If there is no such polygon, the
light rays from the light source reach the target lumel, and the lighting can be applied as normal.

Figure 2-50: Shadow computation with light maps.

Testing to see whether any polygon lies between the light source and the lumel requires the
creation of a vector from the light source to the lumel and an intersection test to see if the vector
intersects any polygon in the scene. Creating the vector is easy: its starting point in 3D is the light
source, and its ending point in 3D is the lumel. The intersection test is harder. We learned how to
intersect a parametric line with a plane when we looked at clipping against arbitrarily oriented
planes. The problem here is that we don’t want to intersect a line with an entire infinite plane, but
only with a polygon—a tiny polygonal cut-out portion of a plane. This requires the line-plane test
developed earlier in this chapter, plus an additional test to see if the point of intersection lies within
the bounds of the polygon. Chapter 8 develops exactly this line-polygon intersection test, for the
purposes of collision detection.
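As a rough sketch of this brute-force idea, the loop below checks whether any polygon in the scene blocks the segment from the light source to a lumel. The helper segment_hits_polygon is a hypothetical placeholder standing in for the line-polygon intersection test of Chapter 8, and the parameters are illustrative rather than actual l3d interfaces.

//- hypothetical sketch of the brute-force shadow test for a single lumel.
//- light_pos and lumel_pos must be in the same space (camera or world space).
//- segment_hits_polygon is a placeholder for the line-polygon intersection
//- test developed in Chapter 8; it is not an existing l3d routine.
bool lumel_is_in_shadow(const l3d_point &light_pos,
                        const l3d_point &lumel_pos,
                        l3d_object **objects, int num_objects)
{
  for(int iObj=0; iObj<num_objects; iObj++) {
    l3d_polygon_3d_node *n = objects[iObj]->nonculled_polygon_nodes;
    while(n) {
      //- if any polygon cuts the segment from the light to the lumel,
      //- the light rays cannot reach this lumel: it lies in shadow
      if( segment_hits_polygon(light_pos, lumel_pos, n->polygon) ) {
        return true;
      }
      n = n->next;
    }
  }
  return false;  //- nothing obstructs the light; illuminate the lumel as normal
}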
With the line-polygon intersection test defined, computing shadows simply reduces to a
brute-force check to see if any polygon obstructs any light source for every lumel in the scene.
This works, but requires a lot of computation. One way of getting around this is to divide the scene
up into separate larger regions, such as cubes, with each region containing many polygons. We
then intersect the light ray with the boundaries of each region. If the light ray does not intersect a region,
then it cannot intersect any of the polygons in the region. This can reduce the total amount of work
needed to compute shadows. A related way of computing shadows has to do with an explicit visi-
bility encoding by means of portals (see Chapter 5).

NOTE There are also many other ways of computing shadows. Visible surface determina-
tion, covered in Chapters 4 and 5, is a huge area in 3D graphics. Computing shadows is also
essentially a visible surface problem; we must determine which surfaces are visible from the
viewpoint of the light source. Surfaces which can be “seen” by the light are illuminated; those
invisible to the light are in shadow.

Summary
In this chapter, we saw several visual techniques for 3D polygons. These techniques include 3D
morphing, light computation, texture mapping with PPM image files, and light mapping. We
solved some equations using the symbolic algebra package Calc, and observed the linearity of u/z,
v/z, and 1/z in screen coordinates, and the non-linearity of u, v, and z in screen coordinates.
We now have a pretty good handle on 3D polygons, but our sample programs have still used
fairly basic 3D objects: pyramids consisting of just a few triangles. Creating more complicated
objects requires the use of a 3D modeling package. The next chapter covers using the free 3D
modeling package Blender to create more interesting models for use in our 3D programs.

Chapter 3

3D Modeling with Blender

The Blender 3D modeling package is a powerful tool for creating 3D objects. Chapter 1
already reviewed the basics of using Blender to create simple polygonal models for import
into an l3d program. In this chapter, we create more complex models, illustrating the use of
Blender’s animation and texture mapping features. In particular, we cover the following topics:
- Assigning texture coordinates to models
- Using VRML to export and import texture coordinates
- Creating compatible morph targets
- Viewing animations in Blender
- Using inverse kinematics and rotoscoping
This chapter is organized around two tutorials. The first illustrates a 3D morph between a cube and
a texture-mapped tree. The second creates a jointed, animated human figure.

Tutorial: Creating and Exporting Compatible, Textured 3D Morph Targets

We already know how to import arbitrary models into our 3D programs by using our Videoscape
plug-in for l3d objects. This tutorial takes the idea a step further, illustrating how to use Blender to
apply textures to our models, and how to import these applied textures into our 3D programs. Fur-
thermore, this tutorial also illustrates how to create compatible morph targets—two models with
the same number and ordering of vertices and faces that can morph into one another. Chapter 2
illustrated morphing in 3D, but with simple, uninteresting objects. Now it’s time to create an inter-
esting model in Blender and morph it.
Let’s start with the morphing aspect; we then deal with applying texture coordinates after we
have generated the morph targets.
First, clear everything in Blender by pressing Ctrl+x. Delete the default plane object.

The Starting Morph Mesh


As we touched upon in Chapter 2, in order for two models to be able to morph into one another,
they must have the same number and ordering of vertices and faces. The reason for this is that
morphing is simply interpolation of vertex positions from old positions to new positions, which is
only (easily) possible if the topology of both models is the same. The best way to ensure this is to
create both source and target models by using the same basis mesh in Blender, merely deforming
the mesh within Blender to shape it into the source and target models. It is not permitted to add or
delete vertices or faces; only deformation of the mesh (i.e., displacement of the mesh’s vertices) is
allowed.

Figure 3-1: Incompatible (left) and compatible (right) morph targets.

We’ll start out with a cube.


1. Move the 3D cursor to the origin if it is not already there.
2. Add a cube. Remain in EditMode, and make sure that all vertices are selected (displayed in
yellow).
3. Type w. The Specials pop-up menu appears. Select Subdivide. Blender subdivides every
selected face into four smaller, equally sized faces. Subdividing the cube in this manner pre-
serves the shape of the cube, but gives us more faces to work with.
4. Subdivide the cube again. Each side of the cube should now have 16 faces (4×4).
5. Scale the cube up, so that each side is approximately 8 grid units long.
Now, we’re going to define two morph targets for this mesh, then edit these targets.

Figure 3-2

Inserting Two Morph Targets into Blender


The next step is to insert two morph targets into Blender, so that we can see and test the morph
within Blender itself. Blender refers to these morph targets as vertex keys, a term which comes
from key-frame animation. Insert two vertex keys as follows:
1. Ensure that Blender is in EditMode.
2. Notice the numerical slider button on the right-hand side of the header of the ButtonsWindow.
This slider indicates the current frame number for animations. Ensure that the value of this
slider is 1. If not, change its value by dragging it with the mouse or by pressing Down Arrow
(decrease frame by 10) or Left Arrow (decrease frame by 1) until its value is 1.
3. Type i to insert a key. A pop-up menu appears, allowing you to select different kinds of keys.
Select Mesh. This saves the current positions of the vertices of the mesh into a key.
Figure 3-3

4. Exit and re-enter EditMode by pressing Tab twice.


5. Increase the frame number to 21 by pressing Up Arrow (increase frame by 10) twice, or press
Right Arrow (increase frame by 1) 20 times.
6. Type i to insert another key.
We now have inserted two vertex keys. You can see these keys by looking in an IpoWindow as
follows.
1. Exit EditMode. Select the cube object for which you just inserted the vertex keys.
2. Open an IpoWindow, preferably in a new window next to the 3DWindow (as opposed to
changing the existing 3DWindow to be of type IpoWindow).
3. In the window header of the IpoWindow, click on the ICONBUT with the image of a polygon
with vertices. This displays vertex keys in the IpoWindow.

4. Ensure that the ICONBUT with the image of a key is not active.
Notice the two horizontal lines within the window. The bottommost orange line represents the
first, baseline vertex key. The first vertex key is always displayed in orange. The second and all
additional vertex keys are displayed as blue horizontal lines. The order of the vertex keys is from
bottom to top.
Also notice the curved line. This is the interpolation curve, which interpolates from the first
key to the second key (vertically) over time (horizontally). The vertical green line indicates the
current frame number. If you change the current frame with the arrow keys, the green line moves
left or right along the time axis accordingly.

Figure 3-4: The IpoWindow with the two vertex keys.

Here are some operations you can perform in the IpoWindow. Don’t try these out just now
because the current configuration is important for completing this tutorial. You can right-click a
vertex key in the IpoWindow to select it, at which point it changes to a lighter color (light orange
for the first vertex key, light blue for the additional vertex keys). Type g to grab and move the
selected vertex key(s) to another location, or x to delete a vertex key. If you add vertex keys in the

3DWindow (remember that you must change the frame number to insert multiple vertex keys),
you can see how more and more blue lines (the vertex keys) appear within the IpoWindow.
For now, make sure that only two vertex keys are present, and that the second vertex key, the
topmost one, is selected (displayed in light blue).

Deforming the Mesh


We now have two vertex keys defined. The second one is selected, because it was the last one
inserted (or because you right-clicked it in the IpoWindow).
With a vertex key selected, any changes to the mesh apply to that vertex key only. This means
that if we go into EditMode now and change the mesh, it changes the mesh just for the second ver-
tex key, but the first vertex key still retains the original shape of the mesh.
This means that we now need to enter EditMode and deform the mesh into a new shape. Begin
as follows.
1. Select the upper-right vertex of the cube.
2. Move the vertex somewhere away from the corner, so that a sharp point is created.
3. Exit EditMode. At this point, by exiting EditMode with a modified mesh, we have just saved
the current shape into the currently selected vertex key.

Figure 3-5

4. Press Left Arrow a few times to decrease the frame number by a few frames. Notice that the
sharp point you created at the corner of the mesh slowly starts to disappear. As you decrease
the frame number, Blender automatically interpolates the vertex positions to the appropriate
vertex key (computed by Blender by intersecting the interpolation curve in the IpoWindow
with the vertex key for the current frame number), which in this case is the first vertex key.
Since the first vertex key contains the unmodified cube, the mesh morphs back to the original
cube the closer the frame number is to the first frame.

Figure 3-6

5. Press Right Arrow to return the frame number back to frame 21. Notice that the sharp point
reappears as you approach frame 21, the time at which the second vertex key was inserted.
Importantly, and quite fortunately, if we save a Videoscape mesh from Blender, the exported
geometry reflects the current state of the mesh for the currently active frame. This means that to
export two compatible morph targets, we simply change to frame 1, write the first mesh, then
change to frame 21, at which point Blender automatically interpolates the vertices to the second
vertex key, and write a second mesh.
Now, you should again be at frame 21. Your job now is to enter EditMode and deform the
mesh into something interesting. Any shape will do, but for this tutorial I chose to deform the
mesh into the shape of a palm tree. Here are some hints for making the palm tree:
1. Start by making the bottom half of the cube longer and thinner to represent the trunk. Next,
make the top flat and somewhat x-shaped. Finally, pull down the outer edges of the top x to
make the leaves droop down.
2. Select groups of vertices using border select mode, and translate, rotate, or scale these groups
of vertices. Creating the tree by moving each vertex individually would be very time
consuming.
3. Be careful not to pull vertices around in physically impossible ways. For instance, if a vertex
is originally on one side of an edge in the original model, you should not pull the vertex such
that it is on the other side of the edge. If you do so, this would reverse the front/back orienta-
tion of the faces connected with the vertex, which leads to incorrect display later. Think of
yourself as a sculptor working with a malleable balloon; you can stretch or compress the bal-
loon to change its shape, but if you try to poke the balloon through itself, the whole thing
blows up in your face.

Figure 3-7: Avoid pulling vertices across edges where the orientation would change. In this
example, the vertex 2 may be pulled to the point marked a while retaining the same orientation.
On the other hand, pulling vertex 2 to point b would change the orientation of the polygon; the
vertex sequence 1-2-3 would change from clockwise to counterclockwise. This is generally
undesirable.

The file treemorph.blend (in the same directory as the source code for the next sample pro-
gram, vids3d) contains the palm tree created by deforming the cube.

Figure 3-8: The completed palm tree as a compatible morph target to the original subdivided cube.

Next, let’s look at applying a texture to the tree.

Applying a Texture and Assigning Texture Coordinates


We saw in Chapter 2 the theory of defining a texture space and applying this to a polygon. Chapter
6 covers using Blender in this way to define texture spaces and apply them to polygons. The prob-
lem is, this method of applying textures to polygons is not directly supported by Blender (actually,
it is, but the results cannot be exported into a common file format), and requires us to use a sepa-
rate program in addition to Blender. For now, we’ll look at an alternative way of applying texture
coordinates to polygons, which is supported directly by Blender and which can be exported and
thus read into our programs. This alternative way of applying texture coordinates is a direct
assignment of (u,v) texture coordinates to polygon vertices.

NOTE The following paragraphs describe the use of the (u,v) texture editor, a relatively new
feature of Blender. This feature appears to have some small problems in Blender version
1.80, so you might want to use Blender 2.0 to do texturing.

Applying textures in this way requires us to use a special mode of Blender called FaceSelect
mode. Toggle this mode on and off by pressing f. (You can think of this as being analogous to
EditMode, which is toggled on and off with Tab.) In FaceSelect mode, right-clicking a face in the
3DWindow causes it to be selected. Press Shift+right-click to select multiple faces, b to select
faces using border select mode, a to select/deselect all faces, or h to temporarily hide selected
faces. Notice that these keys are the same ones that are used in EditMode for selecting vertices, or
outside of EditMode for selecting objects. Blender draws the selected faces with a yellow dotted
border.

NOTE In EditMode, you can select faces in a limited fashion by selecting all vertices belong-
ing to the face. The problem with EditMode selection is that it selects all faces sharing the
selected vertices; in other words, face selection in EditMode is actually an alternative interpre-
tation of vertex selection, which sometimes makes face selection awkward in EditMode. On
the other hand, FaceSelect mode allows you to selectively select or deselect faces at the face
level, independent of the vertices.

You may find it easiest to use FaceSelect mode in the perspective view, rotating the viewpoint as
needed to display the faces to be selected. Also, first hiding obstructing faces by pressing Alt+h is
a useful way of making selecting easier.
After selecting faces in FaceSelect mode, Blender then allows you to map a texture to the
selected faces in various ways. To do this, you first must load an image texture image. Do this in a
window of type ImageWindow. It is easiest if you have an ImageWindow open next to the
3DWindow. Load an image via the Load button in the window header of the ImageWindow. Note
that if the size of the ImageWindow is too small, the window header might be cut off, and the Load
button might be invisible. Temporarily maximize the ImageWindow in this case to see the Load
button. For the purposes of interactive texture mapping within Blender, the image should be
square; its width and height must be identical. Supported file formats are JPEG, Targa, Iris, or
HamX (a custom Blender format).

NOTE Note that Blender does not support the PPM format, which is the format used by l3d,
so we need to have two copies of our texture images, one for Blender and one for l3d. You
can use a program such as GIMP to convert between image formats. GIMP automatically rec-
ognizes file extensions, so converting from JPEG to PPM is really as simple as loading the
JPEG and saving as PPM. Alternatively, the programs cjpeg and djpeg allow for batch
mode conversion to and from the JPEG format.

Loading an image in the ImageWindow associates the texture image with the currently selected
faces. Since our Videoscape plug-in object only supports one texture image per object, as men-
tioned above, you should select all faces and then load an image to associate the same image file
with all faces. Do this as follows:

Figure 3-9: The ImageWindow.

1. Open an ImageWindow next to the 3DWindow.


2. In the 3DWindow, press f to enter FaceSelect mode.
3. If all faces are not selected, then press a until all faces are selected.
4. Press Alt+z. This turns on the textured display mode in the 3DWindow. Since we are going to
be editing texture coordinates on the mesh, it is important that we see the results of our work
immediately in the 3DWindow. (Type z to return to non-textured display and to toggle
between normal wire-frame display and solid non-textured display.)
5. In the ImageWindow, click the Load button, and select an image file with square dimensions
(width and height identical). Middle-click the selected filename to load it.
At this point, the texture image is loaded, and is mapped to every face of the object. The texture
is mapped so that each face has a complete copy of the image. This is very similar to the default
texture mapping used in the last sample program, along with the undesirable property that this is
not in general the mapping that we want.
The way we change the mapping is as follows. In the 3DWindow, first select the faces for
which you wish to apply the texture in a different way. Then, type u. This brings up a pop-up menu
that allows you to specify the mapping of the current texture image onto the selected faces. The
various entries in the menu have the following meanings.
- Cube: Applies a cubical mapping to the faces. From the object center, an imaginary cube
(consisting of six faces) is drawn around the object. The texture mapped onto the faces takes
on the orientation of one of the six faces of the cube which is oriented most closely with the
face. Mostly vertical faces get a texture mapping defined by the vertical sides of the cube;
mostly horizontal faces get a texture mapping defined by the horizontal (top or bottom) sides
of the cube. Mathematically, either the (x,y), (x,z), or (y,z) coordinates are used as the (u,v)
coordinates depending on the normal vector to the face; a sketch of this idea appears after
this list. A cubical mapping is good for mapping textures onto mostly horizontal or vertical
surfaces, such as walls, floors, and ceilings.
- Cylinder: Applies a cylindrical mapping. The texture space is an imaginary cylinder drawn
around the object, and the texture coordinates are determined by projecting vertices of each
polygon onto this cylinder. A cylindrical mapping is good for wrapping a label texture around
a bottle.
- Sphere: Applies a spherical mapping. The texture space is an imaginary sphere around the
object, and the texture coordinates are determined by projecting vertices of each polygon onto
the sphere. A spherical mapping is good for wrapping a texture around a sphere or sphere-like
object—a creature’s head, for instance.
- Bounds to 64, bounds to 128: Uses the current projection of the polygons in the 3DWindow
for computing the texture coordinates, rounding these to fit within a bounding box of 64×64
or 128×128 pixels, respectively.
- Standard 64, standard 128, standard 256: Uses the default square mapping, where every face
is mapped to an identical square subset of the texture with a size of 64×64, 128×128, or
256×256 pixels, respectively. This is the initial face mapping.
- From window: Uses the current projection of the polygons in the 3DWindow for computing
texture coordinates.
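The following sketch illustrates the idea behind the cubical mapping in the first entry above: the dominant axis of the face normal decides which two spatial coordinates become (u,v). It is a hypothetical approximation of what Blender computes internally for the Cube menu entry; the function name and the scale parameter are inventions for illustration only.

#include <math.h>

//- hypothetical sketch of a cubical mapping: use the two spatial coordinates
//- perpendicular to the dominant axis of the face normal as (u,v). this only
//- approximates Blender's internal computation for the "Cube" menu entry.
void cubical_uv(float x, float y, float z,     //- vertex position
                float nx, float ny, float nz,  //- face normal
                float scale,                   //- world units per texture repeat
                float *u, float *v)
{
  float ax = fabs(nx), ay = fabs(ny), az = fabs(nz);

  if(az >= ax && az >= ay) {         //- face points mostly along z: use (x,y)
    *u = x/scale; *v = y/scale;
  }else if(ay >= ax) {               //- face points mostly along y: use (x,z)
    *u = x/scale; *v = z/scale;
  }else {                            //- face points mostly along x: use (y,z)
    *u = y/scale; *v = z/scale;
  }
}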
After selecting one of these mappings, Blender computes the mapping and stores a (u,v) texture
coordinate with every vertex of every selected face. The results of the mapping are immediately
visible in the 3DWindow (assuming that textured display mode has been activated with Alt+z).
Furthermore, the (u,v) mapping for the selected faces is displayed as a set of connected verti-
ces in the ImageWindow. Remember that the ImageWindow displays the texture, which
represents the texture space; the (u,v) texture coordinates for each vertex are locations in texture
space, which can thus be displayed as points on the image. Within the ImageWindow, you can then
use the normal mesh editing keys to select, grab, scale, or rotate the (u,v) coordinates within the
ImageWindow, which has the effect of repositioning the point within texture space and thus
assigning new texture coordinates to the original polygon’s vertex. This allows you, for instance,

Figure 3-10: Texture coordinates in the ImageWindow (right), which correspond to the vertices
of the selected faces (left) and can be moved arbitrarily.

to exactly map a certain part of the texture to fill an entire face, by moving the points correspond-
ing to the face’s vertices onto the appropriate locations on the image in the ImageWindow.
The texture file treetex.jpg (in the same directory as the treemorph.blend file) has
been applied to the tree by using a cylindrical mapping for the trunk and a window-based mapping
for the leaves. The texture file contains two parts. The top part of the texture is green, for the
leaves; the bottom part of the texture is brown, for the trunk.
You can now see why we earlier stated that it is not necessarily very limiting that our
Videoscape plug-in only allows one texture per object. By defining a suitably large texture con-
taining many smaller images, and by assigning specific parts of the texture to specific faces, we
can effectively apply several textures to different faces, even though physically we are only using
one texture.
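As a small illustration of this packing idea, the sketch below converts a pixel rectangle inside a larger texture image into the (u,v) range that a group of faces should occupy; keeping a face's texture coordinates inside that range effectively gives it its own sub-texture. The struct and function names are hypothetical and not part of l3d.

//- hypothetical sketch: given the pixel rectangle [x0,x1] x [y0,y1] that a
//- sub-image occupies inside a tex_width x tex_height texture, compute the
//- corresponding (u,v) range in texture space (0..1 along each axis).
//- faces whose texture coordinates stay inside this range use only that
//- sub-image, even though only one physical texture exists.
struct uv_range { float u0, v0, u1, v1; };

uv_range subimage_uv_range(int x0, int y0, int x1, int y1,
                           int tex_width, int tex_height)
{
  uv_range r;
  r.u0 = (float)x0 / tex_width;   r.v0 = (float)y0 / tex_height;
  r.u1 = (float)x1 / tex_width;   r.v1 = (float)y1 / tex_height;
  return r;
}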
For this example of the palm tree, you can perform the mapping as follows:
1. Select all faces in the trunk of the tree model. You will need to rotate the model and extend the
selection with Shift+right-click a few times to do this, because it is not possible to select a
face when you are viewing it from its back side.
2. Press u and apply a cylindrical mapping to all the faces in the trunk of the tree.
3. In the ImageWindow, press a to select all texture vertices.
4. Press s to scale down the texture vertices vertically. Move the mouse cursor vertically towards
the center of the selected vertices and middle-click. This limits the scaling in the vertical
direction. Scale down the vertices until they are as high as the brown portion of the texture.
5. Press s to scale down the texture vertices horizontally. Move the mouse cursor horizontally
towards the center of the selected vertices and middle-click. Scale down the vertices until they
are as wide as the brown portion of the texture.
6. Press g to translate the texture vertices. Move them so that all vertices are located within the
brown portion of the texture.

Figure 3-11: The faces for the trunk are cylindrically mapped to just the brown part of the
texture image.

At this point, you have cylindrically wrapped the brown portion of the texture around the selected
faces, which is a fairly good way of doing the mapping, since the trunk of a tree is essentially a
cylinder.
Proceed in a similar fashion to wrap the green portion of the texture around the leaves, but use
the From Window mapping. First, look at the tree from the side view in the 3DWindow so that the
flat sides of the leaves are visible. Then, select all leaf faces, apply a texture mapping from the
window view, and scale and translate the vertices in the ImageWindow to be positioned over the
correct portion of the texture (the green part in this case).

Figure 3-12

If you want to tweak the texture coordinates, you simply need to drag the appropriate vertices
in the ImageWindow. Again, you would do this to exactly control which parts of the image get
mapped to which parts of which faces. After repositioning a vertex in the ImageWindow, the
3DWindow automatically displays the results of the new mapping. To see the results of the map-
ping while you are still moving the vertex and before you have confirmed its new position,
activate the Lock button in the header of the ImageWindow.

CAUTION Single-pass light mapping does not work if (u,v) coordinates have been directly
assigned as described above.

It is important to note that manually assigning (u,v) vertices in this manner creates a non-linear
mapping between texture space and world space. Each vertex in the ImageWindow represents a
vertex of a polygon in 3D space. By manually positioning these vertices in the ImageWindow, we
create a connection between the vertex location in 3D space and a (u,v) location in texture space.
In other words, we create an arbitrarily defined mapping between world space and texture space.
This means that with this method, we have no simple matrix that converts between texture and
world space, as we did in Chapter 2, where we always used the texture space to calculate the tex-
ture coordinates as opposed to letting the user arbitrarily position them. The lack of a simple
conversion between texture and world space when using direct (u,v) texture coordinate assign-
ment implies that any techniques requiring a texture-world space or world-texture space conver-
sion cannot be applied. In particular, the single-pass light mapping strategy of Chapter 2, using a
single pre-tiled surface combining texture and light map information, cannot be applied with
objects that have been textured through direct (u,v) assignment, because this technique requires us
to convert from texture (or lumel) space back into world space, which is not (easily) possible if the
(u,v) coordinates have been arbitrarily assigned. Note that a two-pass light mapping strategy is
still entirely possible even with arbitrarily assigned (u,v) coordinates, because in this case the sec-
ond lighting pass has nothing to do with the arbitrary (u,v) coordinates, instead using a
well-defined light map space. It’s only with the single-pass light mapping that we have problems.
For this reason, the Videoscape plug-in creates textured polygons, but not textured light mapped
polygons.

Testing the Morph in Blender


At this point, you should have a textured tree object, which was formed by deforming a cube in the
second morph target.
You can preview the morph in Blender simply by using the arrow keys to change the active
frame number; as mentioned earlier, Blender automatically interpolates the vertices between the
positions saved within the vertex keys. If you want to see the animation without keyboard inter-
vention, press Alt+a within the 3DWindow. This animates the mesh automatically for a certain
number of frames. Two buttons in the DisplayButtons control the starting and ending frames of the
animation: the Sta button specifies the starting frame, and the End button specifies the ending
frame. Press Esc to terminate the animation.

Figure 3-13: Testing the morph in Blender.

Exporting the Two Morph Targets


Once we are satisfied with the morphing animation, we can export the two (compatible) models as
follows:
1. Change to frame 1. Notice that the mesh appears cubical in the 3DWindow, as defined by the
first vertex key.
2. Press Alt+w and write the current mesh to the file cube.obj.
3. Change to frame 21. The mesh appears as defined by the second vertex key.
4. Press Alt+w and write the current mesh to file tree.obj.
That was the easy part. Now comes the more tricky part—exporting the texture coordinates.

Exporting the Texture Information


The Videoscape file format does not store texture coordinates, so by exporting the morph target
meshes to Videoscape, we lose the texture information. One simple solution is to export the file in
VRML format, which is also an ASCII file format but is slightly more complicated to parse than
Videoscape. With VRML export, Blender does save the texture coordinates in the VRML file.
Fortunately, the ordering of the vertices and faces is the same in the Videoscape and the VRML
files. This means that to associate texture coordinates with our Videoscape format objects, we can
simply cut out the texture coordinate mapping within the VRML file, save it to a separate texture
coordinates file, and read in this file later while reading in the Videoscape file.

NOTE The newest free version of Blender supports Python scripting (previously this was only
available in the commercial C-key version). Therefore, it should also be possible to write a
Python script to export the texture coordinates.

The texture coordinates file is the mesh.uv file we spoke of earlier when discussing the parame-
ter string passed to the Videoscape plug-in object. It is simply an ASCII file, where each line
contains a single (u,v) coordinate. The ordering of the lines in the texture coordinates file should
be the same as the order of the vertex indices defining the faces in the Videoscape file. Note that
the texture coordinates are associated with each vertex index defining a face, and not with the
common vertices shared by all faces. This is because even though two adjacent faces may share a
vertex, the texture coordinate at the shared vertex might be different for the two faces. Thus, tex-
ture coordinates are defined per face, and the texture coordinates file specifies one (u,v) coordinate
for each vertex index of each face.
Again, you can create this texture coordinates file simply by cutting out the appropriate por-
tion of a VRML exported file, as follows:
1. Save the file as VRML by pressing Space to bring up the Toolbox, choosing File, then Save
VRML.
2. Edit the VRML file in your favorite text editor. Search for the beginning of the definition of
your mesh object; the line begins with the keyword DEF (for definition) and contains the
name of the mesh.
3. Search for the texture coordinate definition for this mesh. It follows the DEF line and begins
with the keyword TextureCoordinate2. Following this line is a list of several texture
coordinates.
4. Save the list of texture coordinates into a separate file. I suggest giving this file an extension of
.uv, for instance, meshname.uv.
5. Specify this meshname.uv file in the parameter string passed to the Videoscape plug-in
object, as discussed earlier. In this way, the (u,v) coordinates are read out of the file while
loading the geometry from the Videoscape mesh file, and the texture mapping is applied.

CAUTION VRML export of texture coordinates on triangular faces does not appear to work
in Blender. So, only use quadrilaterals if you wish to export texture information.


Unfortunately, it appears that there is a small bug in the Blender VRML export of texture coordi-
nates with triangular faces. In the experiments I performed, the VRML exported texture
coordinates are simply not correct if the faces are triangular. Making the faces into quadrilaterals
(four-sided) seems to fix the problem. Thus, to export manually assigned texture coordinates in
this manner, make sure all your faces have four sides.
Also, notice that the orientation of the v axis in texture space is different in Blender/VRML
than it is in l3d. We have been using the convention that the upper-left pixel in the texture image is
(0,0) in texture space. Blender’s exported VRML texture coordinates, however, take the lower-
left pixel as point (0,0) in texture space. This means that, for texture coordinates in the range 0.0 to
1.0 (which is the case if all of the texture coordinates lie within the image in Blender’s
ImageWindow), the v coordinate we use in l3d is 1.0 - v from the mesh.uv file, which essentially
flips the orientation of the v axis.
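
A minimal sketch of this flip is shown below. The helper function and the uv_coord structure are
purely illustrative and are not part of l3d or of the vidmesh plug-in; the point is only that u
passes through unchanged while v is mirrored:

//- Illustrative only: convert a texture coordinate from the Blender/VRML
//- convention (origin at the lower-left pixel of the texture image) to the
//- l3d convention (origin at the upper-left pixel).
struct uv_coord { float u, v; };

uv_coord blender_to_l3d_uv(const uv_coord &in) {
  uv_coord out;
  out.u = in.u;          //- the u axis is the same in both conventions
  out.v = 1.0 - in.v;    //- flip the v axis
  return out;
}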

Importing the Morph Targets into a Program


The program vids3d imports our morph-compatible meshes and performs a morph between the
cube and tree shapes.
Listing 3-1: File shapes.h, program vids3d
#include "../lib/geom/object/object3d.h"
#include "geom/vertex/verint.h"
#include "dynamics/plugins/plugenv.h"

class pyramid:public l3d_object {


static const int num_keyframes = 2;
int keyframe_no;
l3d_two_part_list<l3d_coordinate> *keyframes[2];
l3d_vertex_interpolator interp;
bool currently_interpolating;

public:
pyramid(l3d_two_part_list<l3d_coordinate> *keyframe0,
l3d_two_part_list<l3d_coordinate> *keyframe1);
virtual ~pyramid(void);
int update(void);
};

Listing 3-2: File shapes.cc, program vids3d


#include "shapes.h"

#include <stdlib.h>
#include <string.h>
#include "../lib/tool_os/memman.h"

pyramid::pyramid(l3d_two_part_list<l3d_coordinate> *keyframe0,
l3d_two_part_list<l3d_coordinate> *keyframe1)
:
l3d_object(100)
{
keyframes[0] = keyframe0;
keyframes[1] = keyframe1;

currently_interpolating = false;
keyframe_no=0;
}

pyramid::~pyramid(void) {
//- set vertices pointer to NULL to prevent parent l3d_object from trying
//- to delete these vertices, because they do not belong to us
vertices = NULL;
}

int pyramid::update(void) {
if(currently_interpolating) {
vertices = interp.list;
if(! interp.step()) {
currently_interpolating = false;
}
}else {
keyframe_no++;
if(keyframe_no >= num_keyframes) {keyframe_no = 0; }
int next_keyframe = keyframe_no + 1;
if(next_keyframe >= num_keyframes) {next_keyframe = 0; }

vertices = keyframes[keyframe_no];
interp.start( *keyframes[keyframe_no], *keyframes[next_keyframe],
rand()%100 + 50, 3);

currently_interpolating = true;
}

//- it is important to recalculate the normals because the Mesa
//- texture mapping reverse projection relies on correct normals!
int i;
for(i=0; i<polygons.num_items; i++) {
polygons[i]->compute_center();
polygons[i]->compute_sfcnormal();
}

return 0;
}

Listing 3-3: File main.cc, program vids3d


#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <math.h>

#include "../lib/geom/object/object3d.h"
#include "../lib/geom/polygon/p3_flat.h"
#include "../lib/tool_2d/screen.h"
#include "../lib/tool_os/dispatch.h"
#include "../lib/raster/rast3.h"
#include "../lib/tool_2d/scrinfo.h"
#include "../lib/geom/world/world.h"
#include "../lib/system/fact0_2.h"
#include "../lib/pipeline/pi_wor.h"
#include "../lib/dynamics/plugins/plugenv.h"
#include "../lib/geom/texture/tl_ppm.h"
#include "../lib/dynamics/plugins/vidmesh/vidmesh.h"
#include "../lib/tool_os/memman.h"

#include "shapes.h"

#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <math.h>
#include <stdarg.h>

class my_world:public l3d_world {


public:
my_world(void);
};

my_world::my_world(void)
: l3d_world(400,300)
{
l3d_screen_info *si = screen->sinfo;

camera->VRP.set(0,0,-50,0);
camera->near_z = float_to_l3d_real(5.5);
camera->far_z = int_to_l3d_real(500);

rasterizer_3d_imp->fovx = &(camera->fovx);
rasterizer_3d_imp->fovy = &(camera->fovy);
rasterizer_3d_imp->screen_xsize = &(screen->xsize);
rasterizer_3d_imp->screen_ysize = &(screen->ysize);

int i,j,k,onum=0;

i=10; j=0; k=20;


k=0;

//- create two videoscape plug-in objects, one for the first morph
//- target and one for the second

l3d_object *morph_target0, *morph_target1;

l3d_texture_loader *texloader;
l3d_surface_cache *scache;
texloader = new l3d_texture_loader_ppm(si);
scache = new l3d_surface_cache(si);
l3d_plugin_environment *plugin_env0, *plugin_env1;

plugin_env0 = new l3d_plugin_environment(texloader, screen->sinfo, scache,
(void *)"0 0 0 1 0 0 0 1 0 0 0 1 cube.obj tree.ppm tree.uv");
morph_target0 = new l3d_object(10);
//- max 10 fixed vertices, can be overridden by plug-in if desired
//- by redefining the vertex list

morph_target0->plugin_loader =
factory_manager_v_0_2.plugin_loader_factory->create();
morph_target0->plugin_loader->load("../lib/dynamics/plugins/vidmesh/vidmesh.so");
morph_target0->plugin_constructor =
(void (*)(l3d_object *, void *))
morph_target0->plugin_loader->find_symbol("constructor");
morph_target0->plugin_update =
(void (*)(l3d_object *))
morph_target0->plugin_loader->find_symbol("update");
morph_target0->plugin_destructor =
(void (*)(l3d_object *))
morph_target0->plugin_loader->find_symbol("destructor");
morph_target0->plugin_copy_data =
(void (*)(const l3d_object *, l3d_object *))
morph_target0->plugin_loader->find_symbol("copy_data");

if(morph_target0->plugin_constructor) {
(*morph_target0->plugin_constructor) (morph_target0,plugin_env0);
}

plugin_env1 = new l3d_plugin_environment(texloader, screen->sinfo, scache,
(void *)"0 0 0 1 0 0 0 1 0 0 0 1 tree.obj tree.ppm tree.uv");
morph_target1 = new l3d_object(10);
//- max 10 fixed vertices, can be overridden by plug-in if desired
//- by redefining the vertex list

morph_target1->plugin_loader =
factory_manager_v_0_2.plugin_loader_factory->create();
morph_target1->plugin_loader->load("../lib/dynamics/plugins/vidmesh/vidmesh.so");
morph_target1->plugin_constructor =
(void (*)(l3d_object *, void *))
morph_target1->plugin_loader->find_symbol("constructor");
morph_target1->plugin_update =
(void (*)(l3d_object *))
morph_target1->plugin_loader->find_symbol("update");
morph_target1->plugin_destructor =
(void (*)(l3d_object *))
morph_target1->plugin_loader->find_symbol("destructor");
morph_target1->plugin_copy_data =
(void (*)(const l3d_object *, l3d_object *))
morph_target1->plugin_loader->find_symbol("copy_data");

if(morph_target1->plugin_constructor) {
(*morph_target1->plugin_constructor) (morph_target1,plugin_env1);
}

//- create some pyramid objects


for(i=1; i<100; i+=20) {
objects[onum=objects.next_index()] =
new pyramid(morph_target0->vertices, morph_target1->vertices);

*objects[onum] = *morph_target0;
objects[onum]->num_xforms = 2;
objects[onum]->modeling_xforms[0] = l3d_mat_rotx(0);
objects[onum]->modeling_xforms[1].set
( float_to_l3d_real(1.), float_to_l3d_real(0.), float_to_l3d_real(0.), float_to_l3d_real(0.),
float_to_l3d_real(0.), float_to_l3d_real(1.), float_to_l3d_real(0.), float_to_l3d_real(0.),
float_to_l3d_real(0.), float_to_l3d_real(0.), float_to_l3d_real(1.), float_to_l3d_real(0.),
float_to_l3d_real(0.), float_to_l3d_real(0.), float_to_l3d_real(0.), float_to_l3d_real(1.) );

objects[onum]->modeling_xform=
objects[onum]->modeling_xforms[1] | objects[onum]->modeling_xforms[0];
objects[onum]->vertices = morph_target0->vertices;

l3d_screen_info_indexed *si_idx;
l3d_screen_info_rgb *si_rgb;

l3d_polygon_3d_flatshaded *p;
for(int pnum=0; pnum<objects[onum]->polygons.num_items; pnum++) {
p = dynamic_cast<l3d_polygon_3d_flatshaded *>(objects[onum]->polygons[pnum]);
if(p) {
p->final_color = si->ext_to_native
(rand()%si->ext_max_red,
rand()%si->ext_max_green,
rand()%si->ext_max_blue);
}
}

if (objects[onum]==NULL) exit(0);
objects[onum]->modeling_xforms[1].set
( int_to_l3d_real(1), int_to_l3d_real(0), int_to_l3d_real(0), int_to_l3d_real(i),
int_to_l3d_real(0), int_to_l3d_real(1), int_to_l3d_real(0), int_to_l3d_real(1),
int_to_l3d_real(0), int_to_l3d_real(0), int_to_l3d_real(1), int_to_l3d_real(1),
int_to_l3d_real(0), int_to_l3d_real(0), int_to_l3d_real(0), int_to_l3d_real(1) );
objects[onum]->modeling_xform =
objects[onum]->modeling_xforms[1] |
objects[onum]->modeling_xforms[0] ;
}

screen->refresh_palette();
}

int main() {
l3d_dispatcher *d;
l3d_pipeline_world *p;
my_world *w;

factory_manager_v_0_2.choose_factories();
d = factory_manager_v_0_2.dispatcher_factory->create();

w = new my_world();
p = new l3d_pipeline_world(w);
d->pipeline = p;
d->event_source = w->screen;

d->start();

delete d;
delete p;
delete w;
}

The program is almost identical to the morphing pyramid program from Chapter 2. First, we cre-
ate two Videoscape plug-in objects and load the cube and the tree object into each of these objects.
Then, we create a series of morphable pyramid objects, just as we did in Chapter 2. (The name pyr-
amid is here somewhat of a misnomer, since the morphable objects are not pyramids in this case,
but the naming remains the same to emphasize the similarity of the code with that of Chapter 2.)
In this case, instead of explicitly assigning the vertices in the source and destination morph
vertex lists, we simply use the vertex lists from the already loaded plug-in objects. Then, each pyr-
amid object morphs itself from its source to its target vertex list, and back again, using the vertex
interpolator class.
The logic, therefore, is essentially the same as the simpler morphing program of Chapter 2,
only now we are morphing between two real 3D models, created and textured within Blender. The
logic is simple, yet the results look quite good in 3D.
Note that in this morphing program, in contrast to the morphing program of Chapter 2, we do
recalculate the surface normals during morphing. This is because the Mesa texture mapping rou-
tines require a reverse projection, as we saw in Chapter 2. Performing this reverse projection
during Mesa texture mapping requires us to have an accurate surface normal vector, which is why
we recalculate it whenever we interpolate the vertex positions.

Figure 3-14: Output from sample program vids3d.

Tutorial: Using Inverse Kinematics and Rotoscoping to Model a Walking Human Figure

This tutorial illustrates two very powerful features of Blender: inverse kinematics and
rotoscoping. These techniques allow us to animate a human-like jointed figure, which we then
import into a program.
This tutorial is a small introduction to the field of character animation. Character animation is
a special branch of animation dealing with animating characters—living entities such as humans,
dogs, or imaginary creatures such as aliens. Even ordinarily inanimate objects, such as a sack of
flour, can be animated as characters so that they appear to walk, move around, and come to life.
Indeed, to animate literally means to bring to life. Character animation is a very broad field and
requires years of study to master. This is because the subtle nuances of a character’s movement
convey much about that character’s personality. Creating believable characters requires a firm
concept of the character you wish to portray as well as an understanding of anatomy, proportion,
staging, posing, timing, weight, movement, and so forth.
While this small tutorial cannot show you all of the aspects of character animation, what it
does illustrate is the technical side of basic character animation using Blender. We’ll create a sim-
ple human-like character and move its limbs around to simulate a walking motion. One of the most
important tools for doing character animation is inverse kinematics, which we now look at in more
detail.

Inverse Kinematics: Definition


Inverse kinematics, abbreviated IK, is best thought of as a way of automatically positioning a
series of joints by specifying the position of just one point—the endpoint. An example will make
this clear. Let’s say we have modeled an arm object as a series of two segments: the top segment
and the bottom segment.

Figure 3-15: An arm modeled as a chain (or Ika) consisting of two bones (or limbs).

In Blender terminology, each of these segments is called a limb; in other literature, these seg-
ments are sometimes called bones. A series of connected limbs is called a chain or an IK chain.
Blender also refers to a chain as an Ika, short for inverse kinematics. The connecting points
between the limbs are called joints. The start of the chain is the root or base of the chain. The last
point in the chain is called the effector.
The important thing about an Ika chain is that moving the effector—a single point—leaves the
base fixed in place, and automatically moves all points in between the base and the effector. In the
arm example above, think of moving the effector as being equivalent to moving the hand. By mov-
ing the hand, which is located at the tip of the arm, we automatically cause the rest of the arm to be
correctly repositioned. This makes animation in many cases much easier; we just pull the hand
(effector) to the new position and the entire arm follows.
You can also move the base of an Ika chain. Moving the base of an Ika moves the entire chain
and the effector, without changing the relative positions of the joints. You move the base of an Ika
in order to reposition the entire Ika.
The alternative to inverse kinematics is forward kinematics, abbreviated FK. To position the
arm with forward kinematics, you move the top part of the arm, which also moves the bottom part
of the arm with it. Then, you move the bottom part of the arm, which then moves without affecting
the top. This finally determines the position of the hand, located at the end of the arm. FK implies a
simple hierarchical relationship between the top limb (the parent) and the bottom limb (the child).
You can create such a simple hierarchical relationship for use in forward kinematics by pressing
Ctrl+p (although not all hierarchical relationships are created for FK animation purposes).

Figure 3-16: Moving the effector of the Ika chain causes the rest of the chain to automatically
follow.

Figure 3-17: Moving the base of an Ika moves the entire Ika.

Just remember the following in order to distinguish between inverse and forward kinematics:
In inverse kinematics, you move the lowest item in a hierarchy (the effector), which automatically
successively moves all higher items in the hierarchy. In forward kinematics, you explicitly start at
the highest level in the hierarchy, and successively manually move the lower items in the hierar-
chy, which finally determines the position of the lowest item in the hierarchy. Inverse kinematics
works from the bottom up; forward kinematics works from the top down.
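
If you prefer to see the distinction in code, the following small sketch contrasts the two
approaches for the two-limb arm of Figure 3-15, working in 2D with fixed limb lengths. All of the
names and the simple law-of-cosines solver are invented for this illustration; this is not how
Blender computes its Ika solutions, and none of this code is part of l3d.

#include <math.h>
#include <stdio.h>

struct point2 { double x, y; };

const double len_upper = 2.0; //- length of the top limb
const double len_lower = 1.5; //- length of the bottom limb

//- Forward kinematics: given the two (absolute) limb angles, compute where
//- the elbow joint and the hand (the effector) end up.
void forward(double ang_upper, double ang_lower,
             point2 base, point2 &elbow, point2 &hand)
{
  elbow.x = base.x + len_upper * cos(ang_upper);
  elbow.y = base.y + len_upper * sin(ang_upper);
  hand.x = elbow.x + len_lower * cos(ang_lower);
  hand.y = elbow.y + len_lower * sin(ang_lower);
}

//- Inverse kinematics: given only the desired hand (effector) position,
//- compute limb angles that reach it, using the law of cosines. Returns
//- false if the target is out of reach. There are generally two solutions
//- ("elbow up" and "elbow down"); this sketch always picks one of them.
bool inverse(point2 base, point2 hand,
             double &ang_upper, double &ang_lower)
{
  double dx = hand.x - base.x, dy = hand.y - base.y;
  double dist = sqrt(dx*dx + dy*dy);
  if(dist > len_upper + len_lower || dist < fabs(len_upper - len_lower)) {
    return false; //- target unreachable with these limb lengths
  }

  //- angle at the base between the base-to-hand line and the upper limb
  double cos_a = (len_upper*len_upper + dist*dist - len_lower*len_lower)
    / (2.0 * len_upper * dist);
  double a = acos(cos_a);
  ang_upper = atan2(dy, dx) + a;

  //- the lower limb simply points from the resulting elbow to the hand
  point2 elbow;
  elbow.x = base.x + len_upper * cos(ang_upper);
  elbow.y = base.y + len_upper * sin(ang_upper);
  ang_lower = atan2(hand.y - elbow.y, hand.x - elbow.x);
  return true;
}

int main(void) {
  point2 base = {0.0, 0.0}, target = {2.5, 1.0}, elbow, hand;
  double au, al;
  if(inverse(base, target, au, al)) { //- "pull the hand" to the target
    forward(au, al, base, elbow, hand); //- FK reproduces the resulting pose
    printf("elbow (%.2f, %.2f), hand (%.2f, %.2f)\n",
           elbow.x, elbow.y, hand.x, hand.y);
  }
  return 0;
}

The inverse() routine is the whole point of an Ika chain: you specify only where the effector
should go, and the positions of the intermediate joints fall out of the computation. With
forward(), by contrast, you must supply every angle yourself, top limb first.
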
The choice of which is more appropriate—inverse kinematics or forward kinematics—
depends on the most convenient way you like to think about the animation process. Generally, it’s
easier to think of limb animation in terms of inverse kinematics. For instance, when animating an
arm, you usually think of the animation in terms of moving the hand (the effector) towards an
object, and not in terms of moving the upper and lower arms individually. But since inverse kine-
matics automatically recalculates the joint positions, it also gives you less control over the exact
animation. When animating a tail or a snake with many segments, it might be better to use forward
kinematics.
For this tutorial, we’ll use IK for animating the arms and legs of a human-like figure to create
a simple walking motion. The first step is to understand how to create and manipulate Ikas in
Blender.

Creating an Ika Chain in Blender


Let’s begin modeling our human-like figure by creating four Ika chains: two for the arms and two
for the legs. (The file ika.blend, located in the source code directory for the next sample pro-
gram ika, contains the finished Ika model we create in the next few sections.) Begin with an
empty Blender configuration (Ctrl+x). Then create the Ika chain for one leg as follows:
1. Position the 3D cursor as follows. Switch to front view (press 1 on the numeric keypad), and
move the 3D cursor one unit to the right of center. Switch to side view (press 3 on the numeric
keypad), and move the 3D cursor about two units above center.
2. Press Shift+a to add an item from the Toolbox, then select Ika. Blender adds an Ika with its
base at the 3D cursor position.
3. Move the mouse cursor to position the effector of the Ika and left-click. The position of the Ika
effector should be somewhat underneath and to the right of the Ika base. After left-clicking,
notice that Blender then draws the next limb in the Ika automatically. The previous effector is
now a joint between the first limb and the new limb; the tip of the new limb is now the new
effector.
4. Move the mouse cursor to position the new effector. The effector should be somewhat below
and to the left of the previous joi