Advanced Linux 3D Graphics Programming
Norman Lin
Lin, Norman.
Advanced Linux 3D graphics programming / by Norman Lin.
p. cm.
Includes index.
ISBN 1-55622-853-8 (pbk.)
1. Computer graphics. 2. Linux. 3. Three-dimensional display systems. I. Title.
ISBN 1-55622-853-8
10 9 8 7 6 5 4 3 2 1
0106
All inquiries for volume purchases of this book should be addressed to Wordware Publishing, Inc., at the above
address. Telephone inquiries may be made by calling:
(972) 423-0090
Contents
Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . x
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
Chapter 1 Basic Linux 3D Graphics Concepts . . . . . . . . . . . . . . . . . . 1
2D Graphics Fundamentals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
3D Graphics Fundamentals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
3D Coordinate Systems and Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Perspective Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Specific Matrix Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Other Matrix Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
The l3d Library Classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Sample l3d Program . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
l3d Directory Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
The Five-Step Process of l3d Programs. . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Overview of l3d Classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Applications and Events. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2D Graphics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Concrete Factory Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Specifying Geometry and Behavior. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Fixed- and Floating-Point Math. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Summary of l3d Classes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Linux Programming Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Linux 3D Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Blender Interface and Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Exporting and Importing Blender Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Chapter 2 Rendering and Animation Techniques for 3D Polygons . . . . . . 39
Vertex Animation and 3D Morphing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Sample Program: morph3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Lighting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Mathematical Models for Computing Light . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Self Lighting. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Ambient Lighting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Diffuse Reflection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Specular Reflection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Multiple Light Sources and Components . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Radiosity and Ray Tracing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
Dynamic or Static Lighting Computations . . . . . . . . . . . . . . . . . . . . . . . . . . 58
Fog. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Rendering Techniques for Drawing Light . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Flat Shading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Gouraud Shading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Phong Shading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Light Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Texture Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Step 1: Define a Texture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Storing Texture Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Classes for Loading Textures from Disk . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Practical Issues in Dealing with Texture Image Files. . . . . . . . . . . . . . . . . . . . . 73
Step 2: Define a Texture Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Step 3: Map Between Texture Space and World Space . . . . . . . . . . . . . . . . . . . . . 76
Calc: A Symbolic Algebra Package. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Starting and Exiting Calc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Stack-Based Computation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
Entering and Editing Mathematical Entities . . . . . . . . . . . . . . . . . . . . . . . . . 82
Solving Systems of Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
Solving the Texture Mapping Equations with Calc . . . . . . . . . . . . . . . . . . . . . . 86
Step 4: Reverse Project from Screen Coordinates into Texture Coordinates . . . . . . . . . . 89
Step 5: Map Texture Coordinates to Integer Indices and Draw . . . . . . . . . . . . . . . . . 92
An Optimized Texture Mapping Strategy: u/z, v/z, 1/z . . . . . . . . . . . . . . . . . . . . . 93
The Division Operation and Texture Mapping . . . . . . . . . . . . . . . . . . . . . . . . 95
Associating Textures with 3D Polygons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
Rasterization Classes for 3D Polygons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
An Abstract 3D Rasterizer: l3d_rasterizer_3d . . . . . . . . . . . . . . . . . . . . . . . . 98
A Software 3D Rasterizer Implementation: l3d_rasterizer_3d_sw_imp . . . . . . . . . . 101
A Mesa/OpenGL 3D Rasterizer Implementation: l3d_rasterizer_3d_mesa_imp . . . . . . 115
Sample Program: textest . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Light Mapping Revisited . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
Software Light Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
Surfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
Surface Cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
Light Mapped Polygons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
Software Rasterization of Light Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
Hardware Light Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
Sample Program: lightmap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
Shadows and Light Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
Chapter 3 3D Modeling with Blender. . . . . . . . . . . . . . . . . . . . . 161
Tutorial: Creating and Exporting Compatible, Textured 3D Morph Targets . . . . . . . . . . . 161
The Starting Morph Mesh. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
Inserting Two Morph Targets into Blender . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
Deforming the Mesh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
Applying a Texture and Assigning Texture Coordinates. . . . . . . . . . . . . . . . . . . . 167
Testing the Morph in Blender. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
Exporting the Two Morph Targets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
Exporting the Texture Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
Importing the Morph Targets into a Program . . . . . . . . . . . . . . . . . . . . . . . . . 175
Tutorial: Using Inverse Kinematics and Rotoscoping to Model a
Walking Human Figure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
Inverse Kinematics: Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
Creating an Ika Chain in Blender . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
Landscapes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 509
Storing Landscapes as Height Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . 509
Generating Fractal Landscapes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 510
Rendering and LOD Techniques for Landscapes . . . . . . . . . . . . . . . . . . . . . . 511
Camera Tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 512
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 513
Chapter 8 Non-Graphical Techniques for Games and
Interactive Environments . . . . . . . . . . . . . . . . . . . . . 515
Sound . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 515
Basics of Digital Sound . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 516
The RPlay Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
Using TCP/IP Networking to Communicate with the Server . . . . . . . . . . . . . . . . . 520
Class l3d_sound_client . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521
Class l3d_sound_server_rplay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 522
TCP/IP Networking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 524
The Client . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 524
The Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 526
Running the Sample Server and Client . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 529
Non-Blocking Operations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 529
What Data to Send . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 530
Collision Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 530
Intersection Testing and Bounding Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . 531
Sphere-to-Sphere. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 532
Ray-to-Polygon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 532
Ray-to-Sphere . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535
Sphere-to-Polygon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 536
Tunneling and Sweep Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
Multiple Simultaneous Collisions and Collision Response . . . . . . . . . . . . . . . . . . 541
Allowing Penetration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 541
Avoiding Penetration with Temporal Search . . . . . . . . . . . . . . . . . . . . . . . . 542
Class l3d_collidable. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 543
Class l3d_collidable_sphere . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 544
Class l3d_polygon_3d_collidable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 548
Class l3d_polygon_3d_textured_lightmapped_collidable . . . . . . . . . . . . . . . . . . . 551
Class l3d_camera_collidable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 552
Class l3d_world_portal_textured_lightmapped_obj_colldet . . . . . . . . . . . . . . . . . 553
Plug-in Object Seeker, Class l3d_plugin_videoscape_mesh_seeker . . . . . . . . . . . . . 563
Sample Program: collide . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 574
More Advanced Collision Detection and Response . . . . . . . . . . . . . . . . . . . . . . 576
Physics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 577
Some Basic Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 577
Rigid Body Dynamics. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 578
Real-Time Update and Numerical Integration . . . . . . . . . . . . . . . . . . . . . . . . . 579
Artificial Intelligence. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 580
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 582
Chapter 9 What Lies Ahead? . . . . . . . . . . . . . . . . . . . . . . . . . 583
Content Development Systems. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 583
Game Blender/Blender 2.0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 583
World Foundry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 590
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 617
Acknowledgments
In addition to my parents, Forest and Vicki Lin, I would like to thank the following individuals
who directly or indirectly played a role in the completion of this book. Thanks go to my brother
Tony, who persuaded me to download and try out the game Doom—an experience that con-
vinced me that interactive 3D graphics on the PC was finally possible. Special thanks also to Stan
Hall, who provided encouragement and advice even when it seemed that the book might not see
the light of day.
Solveig Haring and Margit Franz were kind enough to provide me with Internet access and a
cup of coffee for some of the longer nights in the computer lab. Ton Roosendaal provided some
very interesting insights into Blender and 3D graphics in general. My work colleagues Horst
Hörtner, Werner Pankart, Klaus Starl, and Thomas Wieser were all supportive and understanding
during those times when work on the book required absence from the office. Andreas Jalsovec and
Dietmar Offenhuber gave me insight into some of the nuances of 3D modeling. Renate Eckmayr,
Viju John, Azita Ghassemi, Manfred Grassegger, Ulrike Gratzer, Andrea Groisböck, Jogi and
Reni Hofmueller, Angelika Kehrer, Astrid Kirchner, Dietmar Lampert, Christine Maitz, Paula
McCaslin, Bernd Oswald, Gabi Raming, Regina Webhofer, and other individuals too numerous to
mention all expressed interest upon hearing that I was writing this book, and gave me much
needed inspiration and motivation.
Professor Deborah Trytten got me started on the right track in 3D graphics during my studies
at the University of Oklahoma. Kevin Seghetti carefully read and checked the text for technical
accuracy and provided many valuable suggestions. Thanks also to everyone at Wordware Pub-
lishing, especially Jim Hill, who shared my enthusiasm about the book and was key in actually
getting this project out the door.
Last but not least, I would like to thank the countless individuals around the world involved
with the creation and maintenance of the freely available, high quality, open source GNU/Linux
operating system and tools.
Preface
A university professor of mine once mentioned that there was no danger that the computer
science community would ever run out of interesting problems to solve. As a community,
computer scientists try to understand the nature of computation by forming theories and
attempting to prove their validity. We try to answer questions. Those theories which correctly cap-
ture the nature of computing problems contribute to the common pool of academic knowledge.
Previously unanswered questions receive answers—some more complete, some less complete.
The less complete answers raise new questions for further research; the more complete answers
are eventually adopted by industry practitioners.
3D graphics is a field that illustrates this phenomenon well. In the early days, 3D graphics was
mostly confined to academic research labs. The mathematics and geometry of 3D graphics were
questioned and explored, and the field grew as a result. Today, research in 3D graphics is still very
active, but at the same time, 3D graphics has also become mainstream. A number of graphics tech-
niques from academia have established themselves as efficient and effective enough for
widespread use. A 3D programmer should be familiar with these techniques. The purpose of this
book is to communicate these important 3D techniques to the intermediate 3D programmer in a
clear and intuitive way, using geometrical explanations supported with numerous working code
examples.
This book uses Linux as the implementation platform. The free operating system has a num-
ber of advantages which make it ideal for learning and programming 3D graphics. The most
important advantage is accessibility: the free, open source nature of Linux makes it possible for
any programmer to have access to a top-quality operating system and development environment.
This open nature has encouraged the development of massive amounts of free software (where
free refers not only to cost, but mainly to the freedom to study and modify the source code), includ-
ing software important for 3D graphics. Therefore, Linux offers any programmer the chance to get
involved with 3D graphics programming today, at no cost, without forcing the programmer to
either pay thousands of dollars in software licensing fees or to spend literally man-years of soft-
ware development time creating customized tools. Linux already offers the tools you need to do
serious 3D programming—and the freedom to use, learn from, and modify these tools.
This book builds upon the foundation laid in the introductory companion volume Linux 3D
Graphics Programming. It is assumed that you have an understanding of all of the material pre-
sented in the introductory volume; the first chapter provides a quick review of this material.
Therefore, this book is not suited for the complete beginner to 3D graphics. Such readers should
work through the introductory companion book before attempting to read this book.
Introduction
Welcome, reader! I am glad to have you along and hope that you are as excited as I am
about Linux and interactive 3D graphics programming. Take your time and enjoy the
following few pages as we leisurely discuss the goals and contents of this book.
This book is the second volume of a two-volume work on interactive 3D graphics program-
ming under Linux. First, let’s look at the two-volume work as a whole, then we’ll look more
specifically at the contents of this volume.
Taken as a whole, the two-volume work aims to provide you with the knowledge, code, and
tools to program top-notch, object-oriented, real-time 3D games and interactive graphics applica-
tions for Linux, which can also easily be ported to other platforms. By working through both
volumes, you will learn to use the most important techniques, tools, and libraries for Linux 3D
graphics: portals, OpenGL/Mesa, Xlib, 3D hardware acceleration, collision detection, shadows,
object-oriented techniques, and more. We also cover the often neglected topic of 3D modeling,
illustrating in detail how to use the professional 3D modeling package Blender, which is included
on the CD-ROM, to create animated 3D models and portal worlds for use in our interactive 3D
programs.
This second volume, titled Advanced Linux 3D Graphics Programming, covers more
advanced techniques needed for realistic display of larger datasets often used in interactive 3D
environments. Topics covered include: rendering and animation techniques for 3D polygons (3D
morphing, texture mapping, light mapping, fog), the creation of more sophisticated 3D models
with Blender (including jointed figures animated with inverse kinematics), importing such models
from Blender into our programs, hidden surface removal (portals, BSP trees, octrees, z buffers),
non-graphical issues relevant to interactive environments (special effects, collision detection, dig-
ital sound, TCP/IP networking, particle systems), and tutorials on using advanced 3D content
development systems under Linux (Game Blender and World Foundry). Sample programs are
provided, both in text form and on the CD-ROM, illustrating the concepts.
This book builds on the foundation laid by the introductory companion volume, Linux 3D
Graphics Programming. The first chapter of this book serves as a brief review of the earlier
material.
One goal of this text is to give you a solid understanding of fundamental 3D graphics concepts so that you can analyze and use other 3D graphics texts and programs. In the open source world of Linux, under-
standing fundamental concepts is indeed important so that you can understand and possibly
contribute to the common pool of knowledge and code. Furthermore, learning fundamental 3D
graphics concepts also enables you to understand and effectively use sophisticated 3D applica-
tions and libraries such as 3D modelers and OpenGL.
A second goal of this text is to give you plenty of hands-on experience programming 3D
graphics applications under Linux. It is one thing to understand the theoretical mechanics of an
algorithm; it is another to actually implement, debug, and optimize that same algorithm using a
particular set of programming tools. Small standalone programs are scattered throughout this text
to demonstrate key 3D graphics concepts. It is often easy to lose sight of the forest for the trees,
particularly in the complicated world of 3D graphics. Standalone sample programs address this
problem by concisely illustrating how all the necessary components of a 3D program “fit
together.” They reduce the intimidation that often accompanies the study of large, complicated
programs, and give you confidence in developing and modifying complete 3D programs under
Linux.
A third goal of this text is to help you develop and understand the techniques for developing a
reusable 3D application framework or library. In addition to the standalone programs mentioned
above, the book also develops a series of generally reusable C++ library classes for 3D graphics,
called the l3d library. This library was introduced in the introductory companion book Linux 3D
Graphics Programming and is developed further in this book. This C++ library code follows an
object-oriented approach, relying heavily on virtual functions, (multiple) inheritance, and design
patterns. In this manner, the developed library classes are usable as is but still open for extension
through subclassing. Each chapter builds upon the library classes developed in previous chapters,
either adding new classes or combining existing classes in new ways. Through subclassing, the
library classes can be adapted to work with virtually any hardware or software platform or API;
currently, the code runs under Linux and Microsoft Windows, with or without hardware accelera-
tion. The techniques used to develop the 3D library classes illustrate both valuable 3D abstractions
and generally applicable object-oriented techniques.
A fourth goal of this text is to demonstrate the excellence of the Linux platform as a graphics
programming environment. For a programmer, Linux is a dream come true. All of the source code
is available, all of the operating system features are enabled, a large number of excellent first-rate
software development tools exist, and it is all freely available, being constantly tested and
improved by thousands of programmers around the world. Linux empowers the programmer with
open source, open information, and open standards. Given this outstanding basis for development,
it is no wonder that programmers in every conceivable application area (including 3D graphics)
have flocked to Linux. This has created a wealth of 3D libraries, tools, and applications for Linux.
Linux is, therefore, an outstanding software development platform with powerful 3D tools and
software—an ideal environment for learning and practicing 3D graphics programming.
A final, personal goal of this text, and the main reason I am writing this book, is to impart to
you a sense of the excitement that 3D graphics programming offers. You, the 3D programmer,
have the power to model reality. You control every single z-buffered, Gouraud-shaded, tex-
ture-mapped, perspective-correct, dynamically morphed, 24-bit, real-time pixel on the flat 2D
screen, and amazingly, your painstakingly coded bits and bytes merge to form a believable 3D
world. By working under Linux, you are no longer held back by a lack of tools or software. It’s all
out there—free for download and top quality. Linux software gives you the tools you need to real-
ize your 3D ideas.
Chapter 9 takes a look at the possible future direction of Linux and 3D graphics. We begin
with a look at two existing and exciting 3D content development systems under Linux: Game
Blender and World Foundry. We go through a brief tutorial of 3D game creation with each of these
systems. Some speculation about the future of Linux 3D graphics follows. We close by relating the
contents of the book to the field of 3D graphics as a whole.
The Appendix provides installation instructions for the CD-ROM, information on porting the
graphics code to Windows, and a list of useful references, both in electronic (WWW) and in print
form. Notations in brackets, such as [MEYE97], are detailed in the “References” section of the
Appendix.
The CD-ROM contains all sample code from the book, the Blender 3D modeling and anima-
tion suite, the World Foundry game development kit, freely available Linux 3D libraries and
applications, and a series of animated videos illustrating some of the more difficult-to-visualize
3D concepts discussed in the text.
Typographical Conventions
Used in This Book
The following typographical conventions are used in this book.
n Program code, class names, variable names, function names, filenames, and any other text
identifiers referenced by program code or the operating system are printed in a fixed-
width font.
n Commands or text to be typed in exactly as shown are printed in boldface.
n Key sequences connected by a plus (+) sign (such as Ctrl+C) mean to hold the first key while
typing the second key.
Chapter 1
Basic Linux 3D
Graphics Concepts
Linux 3D graphics is a young and exciting field. The purpose of this chapter is to review the
basic concepts of Linux 3D graphics programming in order to lay a groundwork for the more
involved material in the following chapters. This book was written based on the assumption
that you already know everything in this chapter; therefore, this chapter is intentionally terse. This
chapter is meant to serve as a review, not as an introduction.
If you find some of these topics unfamiliar, I suggest that you take the time to read the com-
panion volume to this book, Linux 3D Graphics Programming. The companion book is aimed at
the beginning 3D graphics programmer with little or no 3D experience. Essentially, this chapter is
a brief review of the most important concepts in the introductory companion volume.
2D Graphics Fundamentals
2D raster graphics consist of plotted pixels on a display. The pixels are arranged in a rectangular
grid, typically accessible in memory as a linear sequence of bytes. Though we specify pixels by
their addresses in memory at the lowest level, it is better to specify pixels in terms of a 2D coordi-
nate system, with horizontal x and vertical y axes. In this book, we define the origin of the 2D pixel
coordinate system to be the upper-left corner of the screen, with the x axis increasing to the right
and the y axis increasing downward.
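For example, when the pixels are stored row by row as one linear block of memory, the starting byte of the pixel at (x, y) can be located as follows (a small sketch; the function and parameter names are only illustrative):

//- Byte offset of pixel (x,y) in a linear frame buffer. width is the screen
//- width in pixels; bytes_per_pixel depends on the screen's bit depth.
int pixel_byte_offset(int x, int y, int width, int bytes_per_pixel) {
  return (y * width + x) * bytes_per_pixel;
}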
Under Linux, we display our graphics under the X Window System. Specifically, the
approach chosen for this book is to use XImages in ZPixmap format to display 2D graphics. This
allows us direct access to the bytes (and thus the pixels) forming the image. Each pixel can have a
particular color. Exactly how this color is specified depends on the bit depth and color model of the
X server. The bit depth determines the total number of available colors and is usually 8, 15, 16, 24,
or 32 bits. The color model is typically either indexed color (meaning that colors are specified as
indices into a fixed-size palette of colors) or true color (meaning that colors are specified directly
as a combination of red, green, blue, and possibly alpha intensities). For maximum flexibility, we
must query at run time the bit depth and color model, and dynamically determine the exact bit for-
mat required to specify pixel colors.
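For instance, under Xlib the bit depth and color model of the default visual can be queried at run time with a few calls (a minimal sketch; note that the Visual structure's class member is spelled c_class when compiled as C++):

#include <X11/Xlib.h>
#include <stdio.h>

//- Query and print the bit depth and color model of the default visual.
void print_visual_info(Display *dpy) {
  int screen = DefaultScreen(dpy);
  int depth = DefaultDepth(dpy, screen);
  Visual *vis = DefaultVisual(dpy, screen);
  printf("bit depth: %d\n", depth);
  if(vis->c_class == TrueColor) {
    printf("true color; masks %lx %lx %lx\n",
           vis->red_mask, vis->green_mask, vis->blue_mask);
  } else if(vis->c_class == PseudoColor) {
    printf("indexed color (palette-based)\n");
  }
}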
Drawing lines can be done by an incremental algorithm, stepping along one axis by whole
pixels and using the line’s slope to determine how many pixels to step in the other axis. Drawing
polygons can be done by rasterizing the lines belonging to the left and right edges of the polygon,
and drawing horizontal lines, or spans, between the left and right edges.
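A minimal sketch of such an incremental line-drawing loop, stepping one pixel at a time along x for a line whose slope has magnitude at most 1, might look like this (plot_pixel stands in for whatever routine actually sets a pixel):

//- Draw a line from (x0,y0) to (x1,y1); assumes x1 > x0 and |slope| <= 1.
void draw_line_incremental(int x0, int y0, int x1, int y1,
                           void (*plot_pixel)(int x, int y))
{
  float slope = (float)(y1 - y0) / (float)(x1 - x0);
  float y = (float)y0;
  for(int x = x0; x <= x1; x++) {
    plot_pixel(x, (int)(y + 0.5));  //- round to the nearest pixel row
    y += slope;                     //- step in y by the slope for each x step
  }
}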
Animation can be achieved through double buffering. With this technique, we have one
off-screen buffer and one on-screen buffer. We draw graphics in the off-screen buffer. When we
are finished, we copy the off-screen buffer into the on-screen buffer, at which point the graphics
become visible. Then, we draw the next frame of animation in the off-screen buffer, and repeat the
process. In this way, the on-screen buffer is continually updated with new and completed images
from the off-screen buffer, thus creating the illusion of animation.
Hardware acceleration allows us to send compact instructions to dedicated hardware, with
higher-level commands such as “draw a line” or “draw a polygon.” By sending such higher-level
commands to the hardware, and by letting the dedicated hardware then do the actual lower-level
pixel operations, a great speed-up can be achieved. Under Linux, we use the 3D library Mesa to
achieve hardware acceleration. Mesa has a syntax essentially identical to OpenGL and supports
hardware acceleration. The XFree86 4.0 project uses Mesa as part of its Direct Rendering Infra-
structure (DRI), providing for hardware-accelerated 3D graphics within a window under the X
Window System. We use the terms Mesa and OpenGL essentially interchangeably in this book.
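For instance, a single higher-level “draw a line” request can be issued to Mesa/OpenGL as follows (immediate-mode OpenGL 1.x style, assuming a current rendering context has already been set up):

#include <GL/gl.h>

//- Ask OpenGL/Mesa (and thus, possibly, dedicated hardware) to draw one line;
//- the lower-level pixel operations are carried out for us.
void draw_one_line(void) {
  glBegin(GL_LINES);
    glVertex2i(0, 0);     //- line start point
    glVertex2i(100, 50);  //- line end point
  glEnd();
}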
3D Graphics Fundamentals
3D graphics is the creation of a two-dimensional image or series of images on a flat computer
screen such that the visual interpretation of the image or series of images is that of a three-dimen-
sional image or series of images.
The visual interpretation of an image depends on the optics of light rays striking the retina.
Points emit light radially in all directions along straight lines, called light rays. Some subset of all
light rays enters the eye; we call this subset seen light rays. Seen light rays are refracted by the
eye’s lens, and focus onto the retina. The biochemical reactions within the retina produce the sen-
sation of vision.
If a 2D image causes the same light rays to strike the retina as those coming from a 3D object,
then the 2D image can be interpreted as a 3D object.
In 3D graphics, we want the seen light rays coming from the flat computer screen to corre-
spond to the seen light rays which would be seen if we were looking at a real 3D object located
behind the computer screen. To accomplish this, we compute intersections between the seen light
rays and the flat plane of the computer screen. These intersection points, since they lie along the
straight light rays going from the original 3D points to the eye, emit the same seen light rays as the
original points. Therefore, the projected points can be visually interpreted to be the original 3D
object.
Computing an intersection between a seen light ray (coming from a point) and a flat plane is called projecting the point onto the plane. In particular, this is a planar geometric perspective projection. The important term is “perspective.” The fact that the projection is perspective implies
that the resulting images appear realistically foreshortened, just as would be perceived by our own
eyes or by a physical camera taking a 2D snapshot of a 3D scene.
A number of standard operations are defined on vectors. Important for this book are compo-
nent-wise vector addition, scalar-vector multiplication, the vector dot product, and the vector
cross product.
Equation 1-1
Equation 1-2
Equation 1-3
Equation 1-4
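For reference, with a = (a1,a2,a3), b = (b1,b2,b3), and scalar k, the standard component-wise definitions of these four operations are:

\mathbf{a} + \mathbf{b} = (a_1 + b_1,\; a_2 + b_2,\; a_3 + b_3)
k\mathbf{a} = (k a_1,\; k a_2,\; k a_3)
\mathbf{a} \cdot \mathbf{b} = a_1 b_1 + a_2 b_2 + a_3 b_3
\mathbf{a} \times \mathbf{b} = (a_2 b_3 - a_3 b_2,\; a_3 b_1 - a_1 b_3,\; a_1 b_2 - a_2 b_1)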
Perspective Projection
Given a coordinate system in which to specify (x,y,z) points, we can then apply a perspective pro-
jection to these points to obtain the projected points, for which the seen light rays are identical to
those of the original non-projected points. In its simplest form, the perspective projection of a
point (x,y,z) is:
Equation 1-5
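That is, with d interpreted as the distance from the viewpoint to the projection plane, the projected coordinates in their standard form are:

x_{proj} = d\,\frac{x}{z}, \qquad y_{proj} = d\,\frac{y}{z}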
This formula is derived by computing the intersection between the seen light ray, coming from the
point, and a flat 2D projection plane. The d term is essentially a scaling factor. A more complete
formulation of the perspective projection is:
Equation 1-6
This form of the equation explicitly specifies the use of the d term as a field of view angle theta.
Also, it reverses the y axis orientation because it maps the projected points to the 2D pixel
coordinate system. As we have seen, the 2D pixel coordinate system has y increasing downwards,
while the 3D coordinate system has y increasing upwards.
Matrices
In this book, we use 4×4 matrices to effect transformations on points. We can also then use 4×1
column vectors to represent 3D points and 3D vectors. Do not confuse the terms “column vector”
and “3D vector.” The former refers to a notational convention; the latter, to a directed displace-
ment in 3D space. The latter can be expressed by using the notation provided by the former. A 3D
point expressed in column vector notation is [x,y,z,1]T. A 3D vector expressed in column vector
notation is [x,y,z,0]T. The superscripted “T” indicates that the vectors should actually be written
transposed, in a vertical format. The fourth coordinate is w, the homogeneous coordinate; points
and vectors expressed in this notation are said to be in homogeneous coordinates. The homoge-
neous w coordinate typically has a value of 1 for points and 0 for vectors. In general, for any
arbitrary non-zero value of w, the homogeneous point [x,y,z,w]T corresponds to the location in 3D
space given by [x/w,y/w,z/w,1]T. In other words, we divide by w.
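For example, the homogeneous point [2,4,6,2]T corresponds to the 3D location [1,2,3,1]T, since dividing each of the first three components by w = 2 recovers the conventional form.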
We multiply two matrices A and B, with a result called C, as follows. Treat each column of B
as a four-element vector. Treat each row of A as a four-element vector. Then, compute the value of
each element in resultant matrix C located at row i and column j as the dot product of row i in A and
column j in B. Not all matrices may be multiplied with one another; the definition of matrix multi-
plication implies that the matrices to be multiplied must be size-compatible. Also, in general,
matrix multiplication is not commutative; AB is generally not the same as BA—the multiplication
BA might not even be possible.
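A direct transcription of this rule into code, as a sketch using plain 4×4 arrays of floats rather than the l3d matrix classes, looks like this:

//- C = A*B for 4x4 matrices: element (i,j) of C is the dot product of
//- row i of A with column j of B.
void matrix_multiply_4x4(const float A[4][4], const float B[4][4], float C[4][4]) {
  for(int i = 0; i < 4; i++) {
    for(int j = 0; j < 4; j++) {
      float sum = 0.0;
      for(int k = 0; k < 4; k++) {
        sum += A[i][k] * B[k][j];
      }
      C[i][j] = sum;
    }
  }
}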
Multiplying a 4×4 matrix by a second 4×4 matrix yields a resulting 4×4 matrix whose trans-
formation is the concatenation of the transformations represented by the first two matrices. The
resulting composite transformation applies the transformation of the original right-hand matrix
first, followed by the transformation of the original left-hand matrix. (An alternative interpreta-
tion, using a changing-coordinate system view rather than a changing-point view, allows for a
left-to-right interpretation of the transformation order.) Multiplying a 4×4 matrix by a 4×1 matrix
(in other words, by a column vector representing a 3D point or a 3D vector) yields another 4×1
matrix which represents the 3D point or 3D vector transformed by the 4×4 matrix.
Equation 1-7
Rotation around the x axis by θ degrees
Equation 1-8
Rotation around the y axis by θ degrees
Equation 1-9
Rotation around the z axis by θ degrees
Equation 1-10
Scaling by factors
sx, sy, and sz in
the x, y, and z axes
Equation 1-11
Translation by an
offset of (tx,ty,tz)
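In the standard homogeneous 4×4 form used with column vectors (the signs of the sine terms depend on the handedness and rotation-direction conventions adopted), rotation about the x axis, scaling, and translation look like this:

R_x(\theta) = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta & 0 \\ 0 & \sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\quad
S = \begin{bmatrix} s_x & 0 & 0 & 0 \\ 0 & s_y & 0 & 0 \\ 0 & 0 & s_z & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\quad
T = \begin{bmatrix} 1 & 0 & 0 & t_x \\ 0 & 1 & 0 & t_y \\ 0 & 0 & 1 & t_z \\ 0 & 0 & 0 & 1 \end{bmatrix}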
Equation 1-12
Transformation from camera space to world space
In the equation above, the camera is located at (VRPx,VRPy,VRPz) and is oriented with its
right-vector along (VRIx,VRIy,VRIz), its up-vector along (VUPx,VUPy,VUPz), and its for-
ward-vector along (VFWx,VFWy,VFWz).
Equation 1-13
Rotation by θ
degrees about an
arbitrary vector
(u1,u2,u3)
A matrix M can also represent a coordinate system; the matrix product MP then gives the location, in world coordinates, of a point P that was specified relative to M. By inverting a matrix representing a coordinate system, we obtain the reverse transformation. Therefore, the matrix product M⁻¹P yields the coordinates relative to M of the point as specified in the world coordinate system.
When you see the matrix product MP, think of this as answering the question “P, which has
been specified relative to M, is at what location in world coordinates?” When you see M⁻¹P, think
of this as answering the question “P, which has been specified in world coordinates, is at what
location relative to M?”
#include "../lib/tool_2d/screen.h"
#include "../lib/tool_os/dispatch.h"
#include "../lib/raster/rasteriz.h"
#include "../lib/tool_2d/scrinfo.h"
#include "../lib/system/factorys.h"
//------------------------------------------------------------------------
//-
//- STEP 1: CHOOSE THE FACTORIES
//-
//------------------------------------------------------------------------
void choose_factories(void) {
factory_manager_v_0_1.choose_factories();
}
//------------------------------------------------------------------------
//-
//- STEP 2: DECLARE A PIPELINE SUBCLASS
//-
//------------------------------------------------------------------------
class my_pipeline : public l3d_pipeline {
  protected:
    l3d_rasterizer_2d_imp *ri; //- rasterizer implementation, created by factory
    l3d_rasterizer_2d *r;      //- rasterizer working through the implementation
    unsigned long color;       //- dot color, in the screen's native format
    int x, y, dx, dy;          //- dot position and velocity
  public:
    l3d_screen *s;
    my_pipeline(void);
    virtual ~my_pipeline(void);
    void key_event(int ch);
    void update_event(void);
    void draw_event(void);
};
my_pipeline::my_pipeline(void) {
s = factory_manager_v_0_1.screen_factory->create(320,200);
ri = factory_manager_v_0_1.ras_2d_imp_factory->create(320,200,s->sinfo);
r = new l3d_rasterizer_2d(ri);
s->sinfo->ext_max_red =
s->sinfo->ext_max_green =
s->sinfo->ext_max_blue = 255;
color = s->sinfo->ext_to_native(0, 255, 128); //- RGB (0,255,128) in native format
x = y = dx = dy = 0;
}
my_pipeline::~my_pipeline(void) {
delete s;
delete ri;
delete r;
}
void my_pipeline::update_event() {
x += dx;
y += dy;
if(x < 0) x = 0;
if(x > s->xsize-1) x = s->xsize-1;
if(y < 0) y = 0;
if(y > s->ysize-1) y = s->ysize-1;
}
void my_pipeline::draw_event(void) {
r->draw_point(x,y, color);
s->blit_screen();
}
main() {
choose_factories();
l3d_dispatcher *d;
my_pipeline *p;
//------------------------------------------------------------------------
//-
//- STEP 3: CREATE A DISPATCHER
//-
//------------------------------------------------------------------------
d = factory_manager_v_0_1.dispatcher_factory->create();
//------------------------------------------------------------------------
//-
//- STEP 4: CREATE A PIPELINE
//-
//------------------------------------------------------------------------
p = new my_pipeline; //- creates the screen, rasterizer implementation, and rasterizer
//------------------------------------------------------------------------
//-
//- STEP 5: START DISPATCHER
//-
//------------------------------------------------------------------------
d->pipeline = p;        //- the pipeline whose event functions the dispatcher calls
d->event_source = p->s; //- the screen supplies the input events
d->start();             //- enter the event loop
delete d;
delete p;
}
NOTE The following instructions assume you have already installed the source code as
described in the Appendix. In particular, the installation instructions require you to set the
$L3D environment variable to point to the installation directory. Also, you should execute
these commands in a shell window under the X Window System. See the Appendix for details
on initial installation and configuration of the source code.
First, let’s look at compiling the program. Then, we look at the structure of the program itself.
Finally, we discuss the l3d classes.
The source code for the sample program is located in directory $L3D/source/app/
drawdot. The source and binary files are located in different directory trees; the next section dis-
cusses this in detail. For now, compile and run the program as follows:
1. To compile the l3d library, type cd $L3D/source/app/lib, press Enter, type make -f
makeall.lnx, and press Enter. Notice that this makefile has a different filename than the stan-
dard name of Makefile; therefore, we specify the -f flag to tell the make command which
file is the makefile.
2. Change to the source directory for drawdot by typing cd $L3D/source/app/drawdot and
press Enter. Compile drawdot: type make -f makeall.lnx and press Enter.
3. Type cd $L3D/binaries/linux_x/float/app/drawdot to change to the binaries directory and
press Enter.
4. Notice the object and executable files from the compilation process are placed in the corre-
sponding binary directory. Type drawdot and press Enter to run the program.
5. Notice the question “which configuration?” Type 1 for now to select a normal X11 window,
and press Enter.
6. Notice the empty black window which appears. Type l. Notice the green line which moves
from left to right across the very top of the display, and that the line continues moving after
you release the key.
7. Type j. Notice that the green line moves downward.
8. Control the movement of the line with the following keys: h to move left, l to move right, j to
move down, k to move up, and Space to stop movement.
9. Type q to end the program.
Having now successfully executed the program, let’s now take a look at the organization of the l3d
directory structure, then examine the drawdot program itself.
NOTE Remember, the Appendix provides instructions on how to compile all of the sample
programs at once. The preceding discussion is primarily to give you an idea of the directory
structure and the reasoning behind it.
To summarize, then, the source files for the l3d library are in $L3D/source/app/lib, and the
source files for the sample programs are all in $L3D/source/app. The primary binaries are in
$L3D/binaries/linux_x/float/app.
NOTE The class name has the suffix v_0_1 to represent the fact that this is the first version
of the factory manager. Later versions of the factory manager class manage more factories.
This is an example of how subclassing can provide a historical record of program develop-
ment, within the source code itself.
Choosing the factories essentially means customizing, at run time, all customizable behavior,
which then takes effect for the duration of the program. In particular, the l3d factory manager man-
ages three factories:
1. A screen factory, producing objects corresponding to the abstract interface l3d_screen.
Class l3d_screen represents the physical output device—a window under X11, a
full-screen hardware-accelerated window using Mesa, or even a DIBSection under Microsoft
Windows.
2. A rasterizer implementation factory, producing objects corresponding to the abstract interface
l3d_rasterizer_2d_imp. Class l3d_rasterizer_2d_imp represents a particu-
lar implementation of 2D rasterization concepts. We use the term “rasterizer” to denote a
software interface to a rasterizer implementation. A rasterizer implementation is a particular
hardware or software component that draws 2D graphics primitives (triangles, lines, dots)
into a frame buffer. Two important types of rasterizer implementations are software rasterizer
implementations, which write directly into an off-screen buffer, and hardware rasterizer
implementations, which use specialized, faster functions for hardware-accelerated pixel
operations.
3. A dispatcher factory, producing objects corresponding to the abstract interface l3d_dispatcher. Class l3d_dispatcher represents a generalized event dispatcher in a
particular operating system environment. Under X, the dispatcher intercepts X events within
a window’s event loop and passes them on transparently to our application. Using hardware
acceleration with Mesa, the dispatcher works within the event framework provided by Mesa
and GLUT, again forwarding events in a transparent way to our application. Under another
operating system, the dispatcher would need to call any OS-specific routines necessary to
capture and forward events.
All of these factories represent system-specific information: the output device, the rasterizer
implementation, and the event dispatcher. Therefore, by choosing the factories, we are essentially
dynamically configuring the program to use the desired run-time environment. In our case, the
factory manager simply asks the user which factories should be used, but more sophisticated solu-
tions are also possible. We could, for instance, have an auto-detect routine that searches for the
existence of particular hardware, and that, depending on whether or not it finds it, configures the
factory to create the appropriate software component accordingly.
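As a sketch of that idea, an auto-detecting selection routine might follow the pattern below; the detection helper and the concrete factory class names are hypothetical placeholders (only the three factory member names also appear in the real factory manager):

//- Hypothetical auto-detecting factory selection; the detection routine and
//- the concrete factory class names are placeholders for illustration only.
bool hardware_acceleration_present(void); //- e.g., probe for a usable 3D driver

void choose_factories_autodetect(void) {
  if(hardware_acceleration_present()) {
    //- hardware-accelerated configuration (Mesa)
    factory_manager_v_0_1.screen_factory = new hypothetical_screen_factory_mesa;
    factory_manager_v_0_1.ras_2d_imp_factory = new hypothetical_ras_2d_imp_factory_mesa;
    factory_manager_v_0_1.dispatcher_factory = new hypothetical_dispatcher_factory_mesa;
  } else {
    //- plain X11, software-rasterized configuration
    factory_manager_v_0_1.screen_factory = new hypothetical_screen_factory_x11;
    factory_manager_v_0_1.ras_2d_imp_factory = new hypothetical_ras_2d_imp_factory_sw;
    factory_manager_v_0_1.dispatcher_factory = new hypothetical_dispatcher_factory_x11;
  }
}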
Step 2. Declare a Pipeline Subclass
The second step in writing an l3d application is to declare a pipeline subclass. A pipeline is simply
a sequence of operations on data. The main loop in a game or graphics program is typically called
the pipeline. Therefore, the pipeline contains, directly or indirectly, your application’s main data
and functionality.
We say “directly or indirectly” because the pipeline might do nothing other than create
another object, to which it then delegates the main program’s responsibility. In such a case, the
pipeline is not directly responsible for the application’s data and functionality, but instead merely
serves as an interface between the dispatcher and the object actually doing the real work.
The pipeline does not control execution of the program. Instead, it responds to events. The
abstract l3d_pipeline class provides a set of virtual event functions, which are automatically
called by an event dispatcher (class l3d_dispatcher, covered in the next section). By declar-
ing a subclass of l3d_pipeline, you can override the virtual event functions to provide
specific responses to specific events, without needing to know how or when these functions are
invoked.
In particular, an l3d_pipeline subclass should do three things:
1. Directly or indirectly create and store a screen object, a rasterizer implementation object, and
a rasterizer object. This is typically done in the constructor. The first two objects, the screen
and rasterizer implementation, must be created by using the already chosen factories (section
“Step 1: Choose the Proper Factories”). The third object, the rasterizer itself, is directly cre-
ated via the C++ operator new, since the rasterizer itself contains no platform-specific
dependencies. (Such dependencies are all in the rasterizer implementation, not the rasterizer.)
2. Declare internal variables, functions, and objects to store the current state of the virtual world.
3. Override the l3d_pipeline virtual event functions to handle input, update internal
objects, and draw output to the screen. Handling input and updating internal objects are both
done by using data structures specific to the application program. Drawing output to the
screen is done by using the screen, rasterizer, and rasterizer implementation objects created in
the constructor.
The first responsibility of an l3d_pipeline subclass is easy to understand. The pipeline repre-
sents the application. The application should display interactive graphics on the screen. We
therefore create and store a screen object, representing the output device, and a rasterizer imple-
mentation, representing a strategy for drawing graphics to the screen. The rasterizer itself presents
a high-level interface to rasterization functionality, implemented by the low-level tools offered by
a rasterizer implementation. Again, remember that a rasterizer implementation can be either a
software rasterizer implementation, directly manipulating bytes in an off-screen frame buffer, or a
hardware rasterizer implementation, calling hardware API functions to instruct the hardware to
draw the graphics for us. Therefore, through the rasterizer, rasterizer implementation, and screen,
our program has an interface to screen and screen-drawing functionality.
NOTE Theoretically, the screen object could also be created outside of the pipeline. (The
following discussion also applies to the rasterizer and rasterizer implementation objects.)
There is no technical reason why the screen absolutely must be created within the pipeline
constructor. In practice, though, this would make little sense. Consider that the pipeline repre-
sents the entire application logic. Creating a screen outside of the pipeline would also mean
needing to destroy the screen outside of the pipeline. This would imply some sort of a
“higher-level” layer of functionality which creates and destroys objects the pipeline needs in
order to function. This would only make sense if the screen object often needed to be used
outside of the context of the pipeline, at this “higher-level” layer. Given the current premise
that the pipeline is the application, a higher-level layer makes no sense. Therefore, in the cur-
rent architecture, there is no reason to move management of the screen object outside of the
pipeline.
The second responsibility of an l3d_pipeline subclass, declaring data, is also intuitive. Since
it represents the application, the pipeline subclass must contain all data necessary for maintaining
and updating the current state of everything within the virtual world. This might include such
things as the current positions and velocities for objects of interest, energy levels for spaceships,
the prevailing wind velocity, or anything else being modeled. All of this data is stored within the
l3d_pipeline subclass in the form of member variables or objects.
The third and final responsibility of an l3d_pipeline subclass is to override virtual event
functions to respond to events. An l3d_pipeline subclass can override any of the following
virtual functions declared in l3d_pipeline:
void key_event(int ch); //- from dispatcher
void update_event(void); //- from dispatcher
void draw_event(void); //- from dispatcher
The key_event function is automatically called whenever a key is pressed in the application
window. The function is called with a parameter indicating the ASCII value of the key pressed,
thereby allowing the application to respond to the particular key pressed.
The update_event function is automatically called whenever the application is allowed to
update itself. You can think of your program as being a giant clockwork, with everything happen-
ing at each “tick” of the clock. This event function represents one “tick” in your program. At this
point you update the internal variables storing the positions of various objects, update velocities,
check for collisions, and so on.
The draw_event function is called whenever the application is allowed to draw its output to the
screen. This function typically will be called immediately after update_event, but this does
not necessarily have to be the case. In other words, the updating of the virtual world and the draw-
ing of the virtual world can be thought of as two separate threads of control, which are usually but
not necessarily synchronized.
With this general understanding of a pipeline’s structure (creation of screen, storage of vari-
ables, and response to events), we can take a closer look at the particular details of the pipeline in
the drawdot program.
The constructor for the drawdot pipeline takes care of the first responsibility of an
l3d_pipeline subclass: creation of screen, rasterizer implementation, and rasterizer objects.
In the constructor, we first ask the screen factory to create a screen and the rasterizer implementa-
tion factory to create a rasterizer implementation. We then create a rasterizer which uses the
created rasterizer implementation. The member variables s, ri, and r represent the screen,
rasterizer implementation, and rasterizer, respectively.
The constructor also takes care of the second responsibility of an l3d_pipeline subclass:
management of data representing our virtual world. In our case, our virtual world consists of a sin-
gle pixel (a humble start). The following member variables are declared and initialized to keep
track of the dot’s status: color, x, y, dx, and dy. Variables x, y, dx, and dy represent the dot’s
current horizontal and vertical positions and velocities, and are all initialized to zero. The variable
color represents the dot’s current color, and is specified as follows. First, we logically define the
maximum red, green, and blue values to be 255. Then, we specify a color of (0, 255, 128), which
means a red intensity of 0, a green intensity of 255, and a blue intensity of 128, all being measured
in relation to the logical maximum of 255 which we just set. Finally, we convert this RGB color to
a “native” color appropriate for the current screen’s color depth and color model. The conversion
is done via an object of type l3d_screen_info, which encapsulates the complicated color
calculation discussed earlier. The color conversion function is called ext_to_native, as it
changes a color from an “external” RGB format into a format “native” to the XImage.
The drawdot pipeline then overrides the key_event, update_event, and draw_event methods to respond to events. This fulfills the third and final responsibility of an
l3d_pipeline subclass, responding to events.
The key_event for the drawdot pipeline checks to see if any one of the directional keys
was pressed, and updates the dx and dy variables, representing the horizontal and vertical veloci-
ties, accordingly.
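Given the key bindings used when running the program (h, j, k, and l for movement, Space to stop), such a handler might look like the following sketch; the one-pixel-per-update velocity step is an assumption:

void my_pipeline::key_event(int ch) {
  switch(ch) {
    case 'h': dx = -1; break;          //- move left
    case 'l': dx =  1; break;          //- move right
    case 'k': dy = -1; break;          //- move up (screen y increases downward)
    case 'j': dy =  1; break;          //- move down
    case ' ': dx = 0; dy = 0; break;   //- stop all movement
  }
}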
The update_event for the drawdot pipeline adds the velocities to the positional vari-
ables, and makes sure the position stays within the bounds of the screen. In other words, x += dx
and y += dy.
The draw_event for the drawdot pipeline first calls the draw_point routine of the
rasterizer, which then forwards the request to the rasterizer implementation to draw a pixel at a
particular point in a particular color. Remember that the drawing occurs off-screen (double buffer-
ing). The pixel color must be specified in “native” format for the current color depth and color
model. We already computed and stored this color earlier by using the function l3d_screen_
info::ext_to_native. After plotting the point, we call blit_screen to cause the
off-screen graphics to be copied to the screen.
Let us summarize the main idea behind the pipeline. A pipeline represents the main function-
ality of an application and is subclassed from l3d_pipeline. An l3d_pipeline subclass
has three responsibilities: creating screen-access objects, declaring world data, and responding to
events. Creating screen-access objects (screen, rasterizer implementation, and rasterizer) allows
access to the screen and screen-drawing functions. Declaring world data allows the program to
keep track of the state of all objects in the virtual world. Responding to events is how the pipeline
responds to input (through key_event), updates the virtual world (through update_event),
and draws to the screen (through draw_event, using the previously created screen-access
objects).
The pipeline does not need to worry about how or when events occur; it merely responds to
them. The pipeline’s virtual event functions are thus called from an outside source. This outside
source is the dispatcher.
TIP Although the descriptions of the l3d classes presented here are not as detailed as the
full-blown development in the companion book, all of the sample programs and library code
from the introductory companion book are also included on the CD-ROM. Therefore, you
can study the provided code to understand more precisely the library concepts summarized in
the following sections.
2D Graphics
Current mainstream display hardware for personal computers is for all practical purposes flat and
two-dimensional. The classes described in the following section deal with accessing the 2D screen
and drawing 2D raster graphics.
Under X, the actual drawing is ultimately done with an XImage. The application program uses a screen through the abstract l3d_screen interface, and is therefore not tied to any particular display device.
The class l3d_screen_x11, subclassed from l3d_screen, represents a screen under
the X Window System, in the form of a window using an XImage for graphics display. The source
file is sc_x11.cc. The class l3d_screen_mesa (file sc_mesa.cc) represents a screen
created using GLUT and Mesa.
Relevant Screen Attributes: l3d_screen_info
The class l3d_screen_info (file scrinfo.h) is an abstract interface to screen information.
We define screen information as follows: any information which an external class needs to know
about a screen object in order to be able to work with that screen. Class l3d_screen_info
encapsulates the vital statistics about a screen and makes them available through a clean, easy
interface.
In particular, when dealing with colors and XImages, the bytes making up a pixel and their
interpretation depend on the color depth and color model of the underlying X server. Class
l3d_screen_info is a recognition of the more general concept that external users of a screen
need to access such “screen information” in order to do anything useful with the screen.
Such screen information includes:
- Maximum ranges for the specification of red, green, and blue (RGB) values
- A conversion function to convert colors specified in RGB values into native color format (i.e., the exact bit format) needed by the underlying screen
- Number of bytes per pixel
- A pointer to the memory of the off-screen buffer (not applicable for hardware-accelerated display devices, where such access is usually not possible)
- A method for setting internal color state (needed for OpenGL)
- Methods for computing lighting and fog tables for gradual fading of colors, needed for light and fog effects
- Methods for applying lighting and fog tables to a particular color to obtain the resulting lit or fogged color
NOTE It might appear tempting to merge l3d_screen_info into the l3d_screen class
itself. After all, isn’t screen information part of the screen itself? The problem appears when
we try to subclass the screen. Screen information can be handled fundamentally in two differ-
ent ways: as RGB (TrueColor) or as indexed color. Similarly, screens themselves come in a
variety of sorts: X11 screens, Mesa screens, Windows screens, DOS screens. If screen infor-
mation and the screen were merged into one class, we would have the unpleasant situation of
having several sorts of each screen: an X11 RGB screen, an X11 indexed screen, a Mesa RGB
screen, a Mesa indexed screen, a Windows RGB screen, a Windows indexed screen, a DOS
RGB screen, a DOS indexed screen. Extending the class hierarchy with a new information
type or a new screen type becomes a major headache. This situation is sometimes called a
nested generalization and indicates that the class should be split into two. For this reason, we
keep the screen information separate, in its own l3d_screen_info class hierarchy. The
l3d_screen is also separate, in its own l3d_screen class hierarchy. We can then, at run
time, mix and match screen information types and screen types freely, without the multiplicative explosion of classes that merging them would cause.
Figure 1-9: Screen-related classes.
NOTE Separating the rasterizer and the rasterizer implementation is an example of the
generally applicable bridge design pattern, where an abstraction and its implementation are
two separate classes. This allows for the class hierarchies for the abstraction and the imple-
mentation thereof to vary and be refined independently, avoiding the multiplicative explosion
of classes discussed earlier (nested generalizations). This technique is also referred to as
using a handle.
Figure 1-10: Rasterization-related classes.
The factory approach decouples the application from the underlying concrete types to allow for future extension of the code without any change to the original code.
Points
Specifying points is done through two classes: l3d_point (file point.cc) and l3d_coor-
dinate (file coord.h).
The class l3d_point represents a set of four numbers, of the form (x, y, z, w). These num-
bers represent a location in space in terms of displacements along coordinate axes.
The class l3d_coordinate represents a transformable coordinate location in space. It
stores two variables of type l3d_point, named original and transformed. The trans-
formed coordinate is always initialized to equal the original coordinate at the beginning of each
frame (method reset), after which it may be transformed by zero or more transformations
(method transform). After any number of transformations, the application may choose to save
the current location in one of the intermediate storage locations (member transformed_
intermediates), so that the old location may be referenced later. In the end, the transformed
coordinate must represent a point in 2D pixel space, since ultimately the points must be used to
plot images on-screen. The whole idea is that an l3d_coordinate is more than just a single
location in space: it is an original location, a number of optional intermediate locations, and a final
location.
The most important thing to remember about the original and transformed members
is that the transformed coordinate is the one that is finally used to display the pixels on the screen.
An application must at some point put the final pixel coordinates in the transformed member
variable.
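A small sketch of this usage pattern follows; the X_ and Y_ members appear in the vertex list example later in this section, and the exact reset signature is an assumption.

l3d_coordinate v;
v.original.X_ = float_to_l3d_real(100.0);   // original location, set once
v.original.Y_ = float_to_l3d_real(100.0);
v.reset();                                  // transformed = original, at the start of each frame
// ... zero or more transformations are then applied to v.transformed ...
// by the time the point is plotted, v.transformed must hold 2D pixel coordinates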
Lists
Specifying more complex geometry (as we see in the coming section on polygons) requires us to
store lists of vertices. For this, we use a combination of two data structures: l3d_list and
l3d_two_part_list (both in file list.h). Class l3d_list is a dynamically growable
list that is like an array, but which automatically expands and allocates more space as necessary.
The elements are accessed as normal through the array index operator [ ]. Class l3d_two_
part_list is an extension of l3d_list and partitions the list of items into two parts: a fixed
part and a varying part. The fixed part is fixed in size and never changes; the varying part is based
on some dynamic calculation and changes often in size. However—and this is the whole point of
the l3d_two_part_list—both the fixed and the varying parts are accessed identically. If we
stored the fixed and varying parts in two separate lists, any references to list items would need to
specify if the item comes out of the fixed or the varying list, which makes for rather inconvenient
code. With the l3d_two_part_list, external users don’t need to worry about whether
the referenced item is fixed or varying. This uniformity of access is primarily of use in polygon
clipping. Clipping can create temporary (varying) vertices which must be accessible exactly like
normal (fixed) vertices, but which may vary wildly in number from frame to frame.
Using these list classes is easy. They are declared as template classes, so that they work with
any data type. Let’s first begin with the l3d_list class.
To use l3d_list, do the following:
1. Declare an instance of the list, providing the data type as the template parameter and passing
the initial size of the list to the constructor. The initial size must be greater than zero.
2. Call next_index before storing any new element into the list, to obtain the next free index
within the list. This also causes the list to grow if you exceed the current size of the list.
3. Store and access elements in the list using the array index operator [ ]. Never attempt to access
an element you have not already allocated by having called next_index.
4. Query the number of items via the num_items member. Valid indices range from 0 to
num_items –1.
5. Effectively empty or reduce the size of the list by setting num_items to zero or a smaller
number. Never set num_items to a larger number; use next_index to increase the size of
the list.
6. Copy the list by assigning it to another l3d_list object. This makes a full copy of the list
and its contents. The two copies are completely independent of one another and contain no
pointers to common objects (unless the contents of the list themselves are pointers, in which
case the two lists contain separate copies of pointer objects, which point to the same locations
in memory).
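Putting these steps together gives code along the following lines; this is a sketch only, the include path for list.h is an assumption, and int is used as the element type purely for brevity.

#include <stdio.h>
#include "list.h"                        // path is an assumption; see the CD-ROM layout

void list_demo(void) {
  l3d_list<int> ages(1);                 // step 1: initial size must be greater than zero
  int i;
  i = ages.next_index();                 // step 2: grow the list, get the next free index
  ages[i] = 42;                          // step 3: store via operator []
  i = ages.next_index();
  ages[i] = 17;
  for(i=0; i<ages.num_items; i++) {      // step 4: valid indices are 0..num_items-1
    printf("%d\n", ages[i]);
  }
  l3d_list<int> copy_of_ages(1);
  copy_of_ages = ages;                   // step 6: full, independent copy via assignment
  ages.num_items = 0;                    // step 5: effectively empty the original list
}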
NOTE The creation of the list elements occurs through an abstract factory; the copying of
list elements, through a possibly virtual assignment operator; the access of list elements,
through an abstract class pointer in the overloaded array index operator [ ]. This means that
by providing a new factory (via the l3d_list constructor) and overriding a virtual assign-
ment operator =, an existing l3d_list in already existing code can be extended after the
fact to work with new subclasses, and the original code is still guaranteed to work because the
internal access to the list elements is done through an abstract class pointer. A detailed exam-
ple of this technique appears in Chapter 2, when we extend the existing polygon class to store
texture coordinate information with each vertex, without needing to change any of the already
existing code. By default, an abstract factory is automatically created and destroyed if you do
not provide one; the flexibility is there if you need it, but does not burden you if you do not.
The other list class is the l3d_two_part_list. Again, a two-part list consists of a fixed part,
whose size never changes, and a varying part, whose size may change greatly over time. The list
elements in both the fixed part and the varying part are accessed identically (with the array opera-
tor [ ]). To use an l3d_two_part_list, do the following:
1. Declare an instance of the list, providing the data type as the template parameter and passing
the initial fixed size of the list to the constructor. The initial size must be greater than zero.
2. Store and access fixed items in the list by using the array index operator [ ]. Valid indices for
fixed items range from 0 to num_fixed_items –1. Do not store fixed items outside of this
range. Do not change the num_fixed_items variable.
3. Call next_varying_index before storing any element in the varying part of the list, to
obtain the next free index within the varying part. This also causes the varying part of the list
to grow if you exceed the current size of the list.
4. Store and access elements into the varying part of the list using the array index operator [ ].
Never attempt to access an element in the varying part which you have not already allocated
by having called next_varying_index.
5. Query the number of varying items via the num_varying_items member. Valid indices
for varying items range from num_fixed_items to num_fixed_items + num_
varying_items – 1.
6. Effectively empty or reduce the size of the varying part of the list by setting num_vary-
ing_items to zero or a smaller number. Never set num_varying_items to a larger
number; use next_varying_index to increase the size of the varying part of the list.
7. Copy the list by assigning it to another l3d_two_part_list object. This makes a full
copy of the fixed and varying parts of the list and its contents; the two copies are completely
independent of one another and contain no pointers to common objects (unless, again, the
contents are themselves pointers).
Notice that access to both fixed and varying items is done uniformly through the array access
operator [ ].
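For example, clipping code might append a temporary vertex to the varying part roughly as follows; vlist here is an l3d_two_part_list<l3d_coordinate> pointer like the one created in the vertex list example below.

int i = vlist->next_varying_index();           // allocate a temporary (varying) vertex
(*vlist)[i].original.X_ = float_to_l3d_real(125.0);
(*vlist)[i].original.Y_ = float_to_l3d_real(175.0);
// ... (*vlist)[i] is now used exactly like a fixed vertex ...
vlist->num_varying_items = 0;                  // next frame: discard all temporary vertices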
NOTE The earlier comments about factory creation, virtual assignment, and polymorphic
access all apply to l3d_two_part_list as well. This is because l3d_two_part_list
internally uses an l3d_list to store the items.
We use two-part lists for storing vertices (i.e., objects of type l3d_coordinate) which are
later used to define polygons. A typical creation of a vertex list looks like the following:
vlist = new l3d_two_part_list<l3d_coordinate> ( 4 );
(*vlist)[0].original.X_ = float_to_l3d_real(100.0);
(*vlist)[0].original.Y_ = float_to_l3d_real(100.0);
(*vlist)[1].original.X_ = float_to_l3d_real(200.0);
(*vlist)[1].original.Y_ = float_to_l3d_real(150.0);
(*vlist)[2].original.X_ = float_to_l3d_real(150.0);
(*vlist)[2].original.Y_ = float_to_l3d_real(200.0);
for the current viewpoint (see Chapter 4). Then, for each frame, we only need to draw the polygons
that remain in the list.
Each object can update itself. This allows construction of 3D worlds in a modular fashion,
with a number of “intelligent” objects capable of acting on their own. An object can update itself
by changing its geometry (i.e., altering the vertices in the vertex list), or by applying a transforma-
tion matrix to its geometry. Such updates can take place either through subclassing or through
plug-ins. With subclassing, a new object subclass overrides the virtual method update, which
updates the object as needed. With a plug-in, we write a dynamic shared library file which is loaded
at run time and which then updates the object.
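With the subclassing approach, an update override might look roughly like the following sketch. The members num_xforms, modeling_xforms, modeling_xform, and the l3d_mat_rotx helper all appear in the pyramid listing in Chapter 2; the units of the angle argument passed to l3d_mat_rotx are an assumption here.

class spinning_object : public l3d_object {
    int angle;
  public:
    spinning_object(void) : l3d_object(8), angle(0) {
      num_xforms = 2;                     // two composed modeling transformations
    }
    int update(void) {
      angle++;                            // advance the rotation a little each frame
      modeling_xforms[0] = l3d_mat_rotx(angle);
      modeling_xform = modeling_xforms[1] | modeling_xforms[0];
      return 0;
    }
};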
Worlds
Just as groups of 3D polygons form a 3D object, groups of 3D objects form a world. Class
l3d_world (file world.h) is the class we use for storing worlds. A world is a displayable,
interactive 3D environment. It contains 3D objects to populate the world, a camera to see the
world, a rasterizer to plot pixels, and a screen to display the output.
A simple world as implemented by l3d_world is really little more than a simple list of
objects displayed with the painter’s algorithm. Later, we subclass from l3d_world to perform
more advanced visibility operations on our 3D objects, leading to a graph-like structure of objects
connected via portals. See Chapter 5 for details.
For interaction with the external events (such as the user pressing a key on the keyboard), the
world class relies on a corresponding pipeline class l3d_pipeline_world (file pi_wor.h).
The world pipeline class merely forwards events on to the world to be handled.
- Type conversions. A variable of type l3d_real can be converted to and from an integer or
floating-point value through the macros l3d_real_to_float, l3d_real_to_int,
float_to_l3d_real, and int_to_l3d_real. Furthermore, a variable of type l3d_
real can be converted to and from a fixed-point number through the macros l3d_real_
to_fixed and fixed_to_l3d_real.
- Addition and subtraction. Variables of type l3d_real may be added to and subtracted from
other variables of type l3d_real with the normal plus (+) and minus (–) operators. Addi-
tion and subtraction with variables of any other type must first use one of the type conversion
macros to convert the variable of the other type to l3d_real.
- Multiplication and division. Variables of type l3d_real can be multiplied or divided by
variables of type l3d_real or of type int only. Such multiplications or divisions must use
one of the following macros, all of which return a type of l3d_real: l3d_mulrr (multi-
plies an l3d_real by an l3d_real), l3d_mulri (multiplies an l3d_real by an
int), l3d_divrr (divides an l3d_real by an l3d_real), and l3d_divri (divides
an l3d_real by an int).
- Other functions. The iceil function returns the integer ceiling of (i.e., the smallest integer
larger than) an l3d_real. ifloor returns the integer floor of (the largest integer smaller
than) an l3d_real. The function l3d_sqrt returns an l3d_real representing the
square root of the given l3d_real argument.
NOTE The reasons for not making l3d_real a class are discussed in the Appendix of the
introductory companion book, Linux 3D Graphics Programming.
If we choose to use floating-point math (at compile time via a #define), then the l3d_real
macros do nothing other than call the ordinary cast, multiplication, or division operators. If we
choose to use fixed-point math (again, at compile time via a #define), the l3d_real macros
are defined quite differently, applying the somewhat arcane rules for fixed-point math to simulate
real-valued computations using purely integer arithmetic.
An unfortunate consequence of using the l3d_real macros for multiplication and division
is that this leads to somewhat awkward code. For instance, the following computation with float-
ing-point values:
x = a * ( ( b + c ) / d )
would need to be expressed with l3d_real types as:
x = l3d_mulrr( a , l3d_divrr( b + c , d ) )
The multiplication and division must therefore be realized as nested macro invocations. Notice,
however, that no macro invocation is necessary for the addition of b and c, since addition and
subtraction of l3d_real types can be done with the normal addition and subtraction operators.
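As another small example, a squared 2D distance written with these macros looks like the following sketch; it assumes the l3d math header defining l3d_real and the macros is included.

l3d_real dist_squared(l3d_real x0, l3d_real y0, l3d_real x1, l3d_real y1) {
  l3d_real dx = x1 - x0;              // + and - work directly on l3d_real values
  l3d_real dy = y1 - y0;
  return l3d_mulrr(dx,dx) + l3d_mulrr(dy,dy);   // multiplication must use the macro
}
// the distance itself would then be l3d_sqrt(dist_squared(x0,y0,x1,y1))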
Figure 1-11: Class diagram for the most important l3d classes covered so far. For space reasons, not all classes are
displayed; the naming convention makes it obvious where the other classes would appear in the diagram. For instance,
l3d_screen_mesa is a descendant of l3d_screen, as is clear from the naming convention.
TIP See the source code for the sample program fltsim on the CD-ROM (taken from the
introductory companion book Linux 3D Graphics Programming) for a typical example of cre-
ating an animated world populated with polygonal objects and a user-controllable camera.
Linux 3D Modeling
The introductory companion book Linux 3D Graphics Programming covered basic usage of the
Blender 3D modeling package for creating 3D models. We continue the coverage in this book.
Blender is zero-cost software; a fully functional version of Blender, with no artificially imposed
restrictions, is included on the CD-ROM.
NOTE The term “zero-cost software” means that you can use the software without having to
pay for the software. The difference between “zero-cost software” and “free software” is that
free software comes with source code, and can be freely modified. “Free software” refers to
freedom, not price. Free software by its open nature is usually zero-cost software, but the
reverse is not true. The GNU/Linux operating system is free software, as is the source code
included with this book. See http://www.gnu.org for a full definition of “free software.”
Remember, your freedom as a professional programmer is literally at stake here.
The rest of this section briefly goes over the most important features of Blender as discussed in the
introductory companion book.
- ImageSelectWindow: Selects an image file for loading or saving, displaying a thumbnail preview of each image file for easy differentiation.
The majority of the modeling work in Blender is done in the 3DWindow, where geometry is inter-
actively created in 3D, and the ButtonsWindow, where various properties and parameters can be
set. See Chapters 3 and 6 for further discussion of Blender and some of the other window types.
Figure 1-13: All Blender window types open simultaneously.
New windows in the Blender user interface are created by splitting existing windows. A bor-
der separates each window header from the window contents. Middle-click the border to split the
window. Right-click a border to join two adjacent windows.
A window can receive keyboard input only when the mouse cursor is positioned over that
window. As mentioned earlier, the actual 3D modeling work in Blender is done in the 3DWindow.
During modeling, the most important Blender commands can be executed by an appropriate sin-
gle-keystroke command within the 3DWindow. The following list summarizes the most important
single-keystroke commands in the 3DWindow:
TIP See sample program vidflat on the CD-ROM for an example of using the
Videoscape plug-in.
Summary
This chapter flew through the basics of Linux 3D graphics programming, as covered in depth in
the introductory companion book Linux 3D Graphics Programming. We looked at:
- 2D graphics theory and practice under Linux and X
- Definition of 3D graphics
- 3D coordinate systems, vectors, and matrices
- Perspective projection
- Storing points, polygons, objects, and worlds in computer memory
- l3d library classes in C++ for implementing event-driven, object-oriented 3D graphics
- 3D modeling, export, and import with Blender
In a nutshell, that is the prerequisite knowledge for effectively reading the coming chapters. Don’t
worry if you are rusty on one or two concepts—the important thing to ensure, before proceeding
with the rest of the book, is that you are fairly comfortable with all or almost all of the topics dis-
cussed in this chapter.
With the basics of polygonal 3D graphics now firmly in mind, we can proceed to the next
chapter, which discusses rendering and animation techniques for 3D polygons.
Chapter 2
Rendering and Animation Techniques for 3D Polygons
This chapter examines rendering and animation techniques for 3D polygons. In Chapter 1, we
saw that the l3d library classes provide facilities for storing, grouping, and manipulating 3D
polygons, objects, and worlds. We also saw that l3d classes can rasterize flat-shaded poly-
gons by stepping along the polygon’s edges from top to bottom, and drawing horizontal spans of
pixels between the left and right edges. In this chapter, we extend these basic polygonal tech-
niques to achieve more interesting and realistic visual effects.
We cover the following topics in this chapter:
- 3D morphing (allowing a 3D object to smoothly change shape into another)
- Adding surface detail through texture mapping
- Using the symbolic algebra package Calc to derive the texture mapping equations
- Mathematical models for computing light intensities
- Rendering computed light intensities with light mapping
- Shadows
This chapter presents the following sample programs: morph3d illustrating vertex animation and
morphing in 3D, textest illustrating texture mapping, and lightmap illustrating light
mapping.
Note that in order to be completely correct, we would have to recompute the surface normal
vector for all affected polygons any time we made any change to the vertex list. If we change one
vertex in the vertex list, we would need to recompute the surface normals for all polygons using
that vertex. If we change all vertices in the vertex list, as is usually the case with morphing, then
we would need to recompute the surface normals for all polygons in the object. Also, any other
geometry-specific information stored with the polygons or the object would need to be recom-
puted whenever we change the vertex list, because changing the vertex list is equivalent to
changing the underlying geometry of the object.
In the following sample program morph3d, however, we won’t be recomputing the normals,
but rather just changing the vertex list. This leads to simpler and faster code, but is actually incor-
rect since the surface normals are no longer normal to the morphed geometry. Since we don’t use
the surface normal in this sample program, the fact that the post-morphing normals are incorrect is
actually not noticeable.
NOTE The VFW axis is the forward axis along which the camera is looking. The VUP axis is
the axis pointing straight up from the camera. The VRI axis is the axis pointing straight to the
right of the camera. Taken as a set, the vectors VRI, VUP, and VFW form a left-handed camera
coordinate system.
#include "../lib/geom/object/object3d.h"
#include "../lib/geom/polygon/p3_flat.h"
#include "../lib/tool_2d/screen.h"
#include "../lib/tool_os/dispatch.h"
#include "../lib/raster/rast3.h"
#include "../lib/tool_2d/scrinfo.h"
#include "../lib/geom/world/world.h"
#include "../lib/system/fact0_2.h"
#include "../lib/pipeline/pi_wor.h"
#include "../lib/dynamics/plugins/plugenv.h"
#include "shapes.h"
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <math.h>
#include <stdarg.h>
my_world::my_world(void)
: l3d_world(640,400)
{
l3d_screen_info *si = screen->sinfo;
camera->VRP.set(0,0,-50,0);
camera->near_z = float_to_l3d_real(5.5);
camera->far_z = int_to_l3d_real(500);
int i,j,k,onum=0;
l3d_polygon_3d_flatshaded *p;
for(int pnum=0; pnum<objects[onum]->polygons.num_items; pnum++) {
p = dynamic_cast<l3d_polygon_3d_flatshaded *>(objects[onum]->polygons[pnum]);
if(p) {
p->final_color = si->ext_to_native
(rand()%si->ext_max_red,
rand()%si->ext_max_green,
rand()%si->ext_max_blue);
}
}
if (objects[onum]==NULL) exit(1);
objects[onum]->modeling_xforms[1].set
( int_to_l3d_real(1), int_to_l3d_real(0), int_to_l3d_real(0), int_to_l3d_real(i),
int_to_l3d_real(0), int_to_l3d_real(1), int_to_l3d_real(0), int_to_l3d_real(1),
int_to_l3d_real(0), int_to_l3d_real(0), int_to_l3d_real(1), int_to_l3d_real(1),
int_to_l3d_real(0), int_to_l3d_real(0), int_to_l3d_real(0), int_to_l3d_real(1) );
objects[onum]->modeling_xform =
objects[onum]->modeling_xforms[1] |
objects[onum]->modeling_xforms[0] ;
}
screen->refresh_palette();
}
int main() {
l3d_dispatcher *d;
l3d_pipeline_world *p;
my_world *w;
factory_manager_v_0_2.choose_factories();
d = factory_manager_v_0_2.dispatcher_factory->create();
w = new my_world();
p = new l3d_pipeline_world(w);
d->pipeline = p;
d->event_source = w->screen;
d->start();
delete d;
delete p;
delete w;
}
Listing 2-2: shapes.h, the header file for the 3D objects for program morph3d
#include "../lib/geom/object/object3d.h"
#include "geom/vertex/verint.h"
public:
pyramid(void);
virtual ~pyramid(void);
int update(void);
};
Listing 2-3: shapes.cc, the implementation file for the 3D objects for program morph3d
#include "shapes.h"
#include <stdlib.h>
#include <string.h>
pyramid::pyramid(void) :
l3d_object(4)
{
keyframes[0] = new l3d_two_part_list<l3d_coordinate>(4);
(*keyframes[0])[0].original.set
(float_to_l3d_real(0.),
float_to_l3d_real(0.),
float_to_l3d_real(0.),
float_to_l3d_real(1.));
(*keyframes[0])[1].original.set
(float_to_l3d_real(10.0),
float_to_l3d_real(0.),
float_to_l3d_real(0.),
float_to_l3d_real(1.));
(*keyframes[0])[2].original.set
(float_to_l3d_real(0.),
float_to_l3d_real(10.),
float_to_l3d_real(0.),
float_to_l3d_real(1.));
(*keyframes[0])[3].original.set
(float_to_l3d_real(0.),
float_to_l3d_real(0.),
float_to_l3d_real(10.),
float_to_l3d_real(1.));
float_to_l3d_real(10.),
float_to_l3d_real(0.),
float_to_l3d_real(1.));
(*keyframes[1])[2].original.set
(float_to_l3d_real(10.),
float_to_l3d_real(0.),
float_to_l3d_real(0.),
float_to_l3d_real(1.));
(*keyframes[1])[3].original.set
(float_to_l3d_real(10.0),
float_to_l3d_real(15.0),
float_to_l3d_real(0.),
float_to_l3d_real(1.));
int pi;
pi = polygons.next_index();
polygons[pi] = new l3d_polygon_3d_flatshaded(3);
printf("before: %p", polygons[pi]->vlist);
polygons[pi]->vlist = &vertices;
printf("after: %p", polygons[pi]->vlist);
(*polygons[pi]->ivertices)[polygons[pi]->ivertices->next_index()].ivertex=0;
(*polygons[pi]->ivertices)[polygons[pi]->ivertices->next_index()].ivertex=1;
(*polygons[pi]->ivertices)[polygons[pi]->ivertices->next_index()].ivertex=3;
polygons[pi]->compute_center();polygons[pi]->compute_sfcnormal();
pi = polygons.next_index();
polygons[pi] = new l3d_polygon_3d_flatshaded(3);
polygons[pi]->vlist = &vertices;
(*polygons[pi]->ivertices)[polygons[pi]->ivertices->next_index()].ivertex=2;
(*polygons[pi]->ivertices)[polygons[pi]->ivertices->next_index()].ivertex=3;
(*polygons[pi]->ivertices)[polygons[pi]->ivertices->next_index()].ivertex=1;
polygons[pi]->compute_center();polygons[pi]->compute_sfcnormal();
pi = polygons.next_index();
polygons[pi] = new l3d_polygon_3d_flatshaded(3);
polygons[pi]->vlist = &vertices;
(*polygons[pi]->ivertices)[polygons[pi]->ivertices->next_index()].ivertex=0;
(*polygons[pi]->ivertices)[polygons[pi]->ivertices->next_index()].ivertex=2;
(*polygons[pi]->ivertices)[polygons[pi]->ivertices->next_index()].ivertex=1;
polygons[pi]->compute_center();polygons[pi]->compute_sfcnormal();
pi = polygons.next_index();
polygons[pi] = new l3d_polygon_3d_flatshaded(3);
polygons[pi]->vlist = &vertices;
(*polygons[pi]->ivertices)[polygons[pi]->ivertices->next_index()].ivertex=3;
(*polygons[pi]->ivertices)[polygons[pi]->ivertices->next_index()].ivertex=2;
(*polygons[pi]->ivertices)[polygons[pi]->ivertices->next_index()].ivertex=0;
polygons[pi]->compute_center();polygons[pi]->compute_sfcnormal();
num_xforms = 2;
modeling_xforms[0] = l3d_mat_rotx(0);
modeling_xforms[1].set
( float_to_l3d_real(1.), float_to_l3d_real(0.), float_to_l3d_real(0.), float_to_l3d_real(0.),
float_to_l3d_real(0.), float_to_l3d_real(1.), float_to_l3d_real(0.), float_to_l3d_real(0.),
float_to_l3d_real(0.), float_to_l3d_real(0.), float_to_l3d_real(1.), float_to_l3d_real(0.),
float_to_l3d_real(0.), float_to_l3d_real(0.), float_to_l3d_real(0.), float_to_l3d_real(1.) );
modeling_xform=
modeling_xforms[1] | modeling_xforms[0];
currently_interpolating = false;
keyframe_no=0;
vertices = keyframes[keyframe_no];
}
pyramid::~pyramid(void) {
for(register int i=0; i<polygons.num_items; i++) {delete polygons[i]; }
for(int i=0; i<num_keyframes; i++) {
delete keyframes[i];
}
}
int pyramid::update(void) {
if(currently_interpolating) {
vertices = interp.list;
if(! interp.step()) {
currently_interpolating = false;
}
}else {
keyframe_no++;
if(keyframe_no >= num_keyframes) {keyframe_no = 0; }
int next_keyframe = keyframe_no + 1;
if(next_keyframe >= num_keyframes) {next_keyframe = 0; }
vertices = keyframes[keyframe_no];
interp.start( *keyframes[keyframe_no], *keyframes[next_keyframe],
rand()%100 + 50, 3);
currently_interpolating = true;
}
}
The program morph3d uses the l3d_world and l3d_object classes to store the virtual
world. The file main.cc is very simple: it declares a world subclass which creates some pyra-
mid objects.
The pyramid class is a morphing 3D pyramid. It changes its shape between a compact and
an elongated pyramid. To accomplish this, we added the following lines to the pyramid class:
class pyramid:public l3d_object {
static const int num_keyframes = 2;
int keyframe_no;
l3d_two_part_list<l3d_coordinate> *keyframes[2];
l3d_vertex_interpolator interp;
bool currently_interpolating;
The keyframes variable holds two separate, different vertex lists. Each list is one complete set
of vertex positions defining a shape for the 3D object. We create and fill these lists within the
pyramid constructor.
keyframes[0] = new l3d_two_part_list<l3d_coordinate>(4);
(*keyframes[0])[0].original.set
(float_to_l3d_real(0.),
float_to_l3d_real(0.),
float_to_l3d_real(0.),
float_to_l3d_real(1.));
(*keyframes[0])[1].original.set
(float_to_l3d_real(10.0),
float_to_l3d_real(0.),
float_to_l3d_real(0.),
float_to_l3d_real(1.));
(*keyframes[0])[2].original.set
(float_to_l3d_real(0.),
float_to_l3d_real(10.),
float_to_l3d_real(0.),
float_to_l3d_real(1.));
(*keyframes[0])[3].original.set
(float_to_l3d_real(0.),
float_to_l3d_real(0.),
float_to_l3d_real(10.),
float_to_l3d_real(1.));
Now, given two different shapes of the object stored in two separate vertex lists, we use the class
l3d_vertex_interpolator, exactly as we did in the earlier program morph2d, to inter-
polate the positions of the vertices between the two vertex lists. This takes place in the method
pyramid::update. The usage of the vertex interpolator is exactly the same as before: we
specify a starting vertex list, an ending vertex list, a total count of interpolation steps, and a dimen-
sion (2D or 3D) determining which elements of the vertex are interpolated. The vertex interpolator
has its own internal, separate vertex list, which contains the interpolated “in-between” positions
between the specified starting vertex list and ending vertex list. We interpolate between vertex
lists by calling interp.step, and assign the interpolated vertex list to the pyramid’s vertex list.
if(currently_interpolating) {
vertices = interp.list;
if(! interp.step()) {
currently_interpolating = false;
}
Once the interpolation has finished, after the specified number of steps have been taken and the
positions of the vertices in the vertex list have reached the positions in the target vertex list, we
then proceed to interpolate between the next two pairs of vertex lists in the array keyframes. In
this program, since we only have two vertex lists, this simply has the effect of morphing back from
the second to the first vertex list again. Notice that we pass to the vertex interpolator a random
parameter for the number of interpolation steps, so that each pyramid always morphs at a different
speed.
}else {
keyframe_no++;
if(keyframe_no >= num_keyframes) {keyframe_no = 0; }
int next_keyframe = keyframe_no + 1;
if(next_keyframe >= num_keyframes) {next_keyframe = 0; }
vertices = keyframes[keyframe_no];
interp.start( *keyframes[keyframe_no], *keyframes[next_keyframe],
rand()%100 + 50, 3);
currently_interpolating = true;
}
That’s all there is to morphing in 3D. It’s nothing more than a step-by-step interpolation of all ver-
tex positions from a starting list to an ending list, just as with the 2D case.
NOTE When morphing more complicated shapes, you will want to use a 3D modeling pro-
gram (also called a 3D modeler) to create the 3D objects. Chapter 3 covers using Blender to
do exactly this. To create models which morph into one another, it is vital that both models
have the same number of vertices and faces in the same order. The best way to ensure this is
simply to start with the first model and modify it within the 3D modeler to become the second,
target model. If you create the target model from scratch, it is almost certain that the number
or order of the vertices and faces will be different from those of the first model, which would
make morphing impossible or visually disturbing. See Chapter 3 for more details.
NOTE It is actually not “impossible” to morph between two objects with different numbers
of vertices or faces, but doing so is much more complicated than the case described above
(where a 1:1 correspondence does exist between the vertices of the two objects). You would
have to define some function to automatically map vertices to “reasonable” in-between posi-
tions, based on the shapes of the source and target objects, a topic which goes beyond the
scope of this book.
Lighting
Polygons, as we have treated them until now, have all been single-colored. We’ll now look at some
methods of combining color and light so that our polygons look more realistic.
For our purposes, we model light as a single, integer intensity value which changes the bright-
ness of a particular color. Therefore, we combine light and color to arrive at a final lighted color.
This is accomplished by means of lighting tables. In the lighting table, we simply look up the color
along one axis, and the light intensity along the other axis, and find an entry in the table that tells us
the final color with the specified intensity. For colors specified directly with red, green, and blue
values (as opposed to through a palette), we split the lighting tables into three separate red, green,
and blue tables, solely for the reason of saving memory.
We define intensities as going from 0 to a certain predefined maximum integer light level,
MAX_LIGHT_LEVELS. A light intensity of 0 combined with any color yields black; a light inten-
sity of MAX_LIGHT_LEVELS combined with a color yields the original color at full intensity.
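To make the idea concrete, the following self-contained sketch builds such a table for a single 8-bit color component. The real l3d tables are built by l3d_screen_info; the value of 255 for MAX_LIGHT_LEVELS here is purely illustrative.

const int MAX_LIGHT_LEVELS = 255;                    // illustrative value only
unsigned char light_table[MAX_LIGHT_LEVELS+1][256];  // [intensity][color component]

void build_light_table(void) {
  for(int intensity=0; intensity<=MAX_LIGHT_LEVELS; intensity++) {
    for(int color=0; color<256; color++) {
      // intensity 0 yields black; MAX_LIGHT_LEVELS yields the original color
      light_table[intensity][color] =
        (unsigned char)(color * intensity / MAX_LIGHT_LEVELS);
    }
  }
}
// looking up light_table[128][200] then gives 100: color 200 at roughly half intensity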
The light model just described assumes that the light itself has a color of white. It is also possi-
ble to model colored light sources, which not only change the intensity of the color in question, but
also change the color itself by introducing new color components contained within the light itself.
For instance, shining a red light on a white surface yields a final color of red. This can be imple-
mented most easily using an RGB color model. The color of the light is represented as a red
intensity, a green intensity, and a blue intensity. The color of the surface is represented as a red
color, a green color, and a blue color. The color of the final, lighted surface is the red color lighted
by the red intensity, combined with the green color lighted by the green intensity, combined with
the blue color lighted by the blue intensity.
We can view the lighting problem as two separate issues. The first issue is computing the light
intensity itself. The second issue is deciding where to compute and draw the light intensities (once
per polygon, once per vertex, once per pixel, and so forth). We now cover each of these topics sep-
arately, starting with the computation of light intensities.
Self Lighting
One simple lighting model is to assume that each object emits its own light energy and reflects no
light energy. In this case, each object’s light intensity is determined solely by its own inherent
brightness; it does not depend on any external factors. We can express the equation for self light-
ing as follows:
Equation 2-1: Iself = Io
In this equation, Iself represents the computed intensity based on the self lighting model, and Io
represents the inherent brightness of the object itself.
While self intensity is an easy lighting model to represent, it is more interesting to model the
reflection of light off of surfaces, since reflected light is the majority of light we see in a typical
real-world scene.
Ambient Lighting
The term ambient light refers to light which has been scattered and reflected so many times, off of
so many objects, that its original direction cannot be determined. Ambient light, therefore, is
nondirectional light which equally reflects off of and equally illuminates all objects. It also pro-
vides a convenient way within a scene of modeling a global minimum light level whose exact
source is unimportant. We can express ambient lighting as a simple global constant for the entire
scene:
Equation 2-2
We could also multiply the ambient light intensity by a scaling factor, ranging from 0 to 1, representing the reflectiveness of the surface to which the ambient light is applied. This reduces the contribution of the ambient light for less reflective surfaces. The ambient scaling factor would be stored with each object or with each polygon.
Equation 2-3
Such parameters particular to a surface which affect the incident light are called material
parameters.
Diffuse Reflection
Ambient light is still not a very satisfying light model because it affects all surfaces identically. We
may obtain a better result by modeling the diffuse reflection of light off of a surface. Matte sur-
faces, also called diffuse surfaces, scatter light rays with equal probability in all directions,
resulting in a diffuse illumination of the surface.
Mathematically, let us assume that we have a point light source and a surface for which we
wish to compute the diffuse illumination. A point light source is an infinitely small emitter of light
energy located at a single point some finite distance from the surface, which casts rays of light
energy (consisting of photons) equally in all directions. To compute the diffuse illumination of the
surface from the point light source, we must compute two quantities. The first is the amount of
light striking the surface. The second is the amount of light that the viewer sees reflected off of the
surface.
We compute the amount of light striking the surface by considering an infinitesimally small
beam of light energy striking a surface. For a constant light intensity, this light beam imparts a cer-
tain fixed amount of light energy to the surface it strikes. It turns out that the angle that the light
beam forms with the surface normal vector is important. If the light beam strikes the surface
directly head-on, then the resulting light intensity on the surface is at its maximum. If the light
beam strikes the surface at an angle, the light intensity on the surface decreases. This is because
the amount of surface area covered by the light beam is greater when it strikes the surface at an
angle, and thus the fixed quantity of light energy from the light beam is divided across a larger sur-
face area, yielding an overall lesser intensity per unit area of surface.
Ip is the intensity of the point light source, ranging from the darkest value of 0 to the brightest
value of 1. The angle theta is the angle between the incident light ray and the surface normal. If the
angle is zero, then the light strikes the surface head-on, the cosine of the angle is 1, and the inten-
sity is maximum. As the angle approaches 90 or –90 degrees, the cosine approaches zero, causing
the light intensity to decrease.
We can also formulate the diffuse lighting equation as follows.
Equation 2-5: Idiffuse = Ip (N · L)
Here, N is the surface normal vector, and L is the vector from the surface to the light source. Note
that both vectors must be normalized. Recall that the dot product of these two normalized vectors
is exactly the same as computing the cosine of the angle between them.
We can also multiply the computed light by a diffuse scaling factor Md, ranging from 0 to 1
depending on the diffuse reflectiveness of the material, to reduce the effect of the light for less
reflective materials.
Equation 2-6: Idiffuse = Md Ip (N · L)
Finally, we can also multiply the light intensity by an attenuation factor fatt between 0 and 1, so
that the computed light decreases with its distance from the light source, in accordance with the
physical reality that the amount of light energy reaching a surface decreases with distance.
Equation 2-7: Idiffuse = fatt Md Ip (N · L)
It is suggested in Computer Graphics: Principles and Practice that this distance-based falloff fac-
tor be computed as follows [FOLE92].
Equation 2-8: fatt = min( 1 / (c1 + c2 dL + c3 dL^2), 1 )
Term dL represents the distance between the light source and the surface being illuminated, and
the terms c1, c2, and c3 are user-defined constants for the light source. The attenuation factor is
therefore inversely proportional to the distance, but is never greater than one (otherwise it would
increase, not decrease, the effect of the light). The constant c1 is chosen to prevent the denomina-
tor from being too small when dL is very small, and the constants c2 and c3 allow us to control
whether the light falls off proportional to the inverse square of the distance, or to the inverse dis-
tance, or to a combination of both. Physically, for a single point light source, the energy falls off as
the inverse square of the distance, but typical objects in real-world scenes receive light from more
than just a single point light source. This means that the inverse square energy falloff is much too
sudden, which is the reason that we introduce the c1 and c2dL terms in the denominator, allowing
for a more gradual light falloff.
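Putting Equations 2-7 and 2-8 together in code gives something like the following self-contained sketch; plain floats are used instead of l3d_real to keep the example short, and the clamping of the dot product to zero for back-facing surfaces is an added practical detail.

struct vec3 { float x, y, z; };
float dot(const vec3 &a, const vec3 &b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// N and L must be normalized; dL is the distance from the surface to the light.
float diffuse_intensity(const vec3 &N, const vec3 &L, float dL,
                        float Ip, float Md,
                        float c1, float c2, float c3) {
  float fatt = 1.0f / (c1 + c2*dL + c3*dL*dL);
  if (fatt > 1.0f) fatt = 1.0f;            // attenuation may never brighten the light
  float n_dot_l = dot(N, L);
  if (n_dot_l < 0.0f) n_dot_l = 0.0f;      // surface faces away from the light
  return fatt * Md * Ip * n_dot_l;
}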
Now, given the computed light intensity that reaches a surface from a point light source, the
second question we would like to answer is how much of this light a viewer sees from a particular
viewing position relative to the surface. This depends on how much light is reflected from the
surface.
As it turns out, the viewing position ends up having no effect on the final light intensity seen
off of a diffuse surface. This means that the diffuse light intensity as computed above represents
the amount of light seen reflected off of the surface, from any viewing angle or position. Let us
examine why this is so.
TIP Diffuse illumination depends only on the positions and orientations of the light source
and the surface. It does not depend on the position of the camera.
As we mentioned earlier, the amount of light the viewer sees from a surface depends on the
amount of light reflected by the surface. Matte surfaces, also referred to as Lambertian surfaces,
obey a law of physics called Lambert’s law. Lambert’s law states that, ideally, diffuse surfaces
reflect light, per unit area, directly proportional to the cosine between the surface normal vector
and the vector going from the surface to the viewer (call this last vector the view vector). This
means that if we consider a particular fixed amount of the surface, the greater the angle between
the normal vector and the view vector, the less reflected light we see. This would at first seem to
imply that the light intensity should decrease as the angle increases. But this is not the case,
because as the angle between the normal vector and the view vector increases, we also see more of
the surface—in particular, we see proportionally more of the surface according to the reciprocal of
the cosine of the angle between the normal and the view vector. The increase in seen surface area
exactly cancels out the decrease in light intensity per unit area. In other words, as the angle
increases, the reflected light energy per unit area decreases, but the amount of area we see
increases by exactly the same proportion, meaning that the amount of reflected light stays the
same regardless of the viewer’s position, for Lambertian surfaces.
A few comments about the fundamental assumptions of the preceding light model are in
order. The light model we just described works by computing the diffuse illumination from all
light sources off of all surfaces. This implies a strict separation between light sources and the sur-
faces being illuminated. This model is not complete because it does not account for the
inter-object reflections that take place in the real world. An object reflecting light then also
becomes a light source, further illuminating other objects, which again reflect light. One way to
simulate the effect of this complex inter-object illumination is simply to add an ambient term to
the overall lighting; since the inter-object light reflection is so complex, the resulting illumination
can be considered as without direction and ambient. A more accurate, though much more
computationally intensive, means of accounting for inter-object illumination is radiosity, which is
covered below.
Another assumption we made was that our light sources were point light sources, which are
located at a particular point in space and radiate light energy equally in all directions. Alternative
models of light sources are also possible. Sunlight illuminating objects on Earth, for instance, is
not really accurately modeled by a point light source, because the distance of the sun from the
earth is practically infinite in comparison to the dimensions of the objects being illuminated.
Because of the great distance separating the sun and the earth, all the light rays coming from the
sun are practically parallel to one another. This means that in the lighting equations presented
above, the vector L from the surface to the light source always points in the same direction for all
polygons, in the direction of the parallel rays of sunlight.
Finally, it is also possible to model spotlights, which have only an exactly defined area of
influence, such as a cone. Outside of the area of influence the light does not affect the surface’s
intensity at all. This allows for creating illumination effects on the surface with sharply defined
edges, much as a real-world spotlight only illuminates within a certain elliptical area on the
ground.
Figure 2-10: Spotlights clearly delineate the area of influence of the light. The left surface is illuminated with a point light; the right, with a spotlight.
Specular Reflection
Specular reflection refers to the bright highlights which appear when a shiny surface is illumi-
nated. Such effects are not accounted for by the preceding lighting equations.
There are a number of ways of computing the specular reflection. One which we examine here
is the so-called Phong illumination model. Do not confuse Phong illumination, which deals with
the computation of specular light intensities, with Phong shading, which refers to a rendering
method (covered in the next section) for drawing already computed light intensities.
Phong illumination assumes the surface is shiny and reflective. Therefore, with Phong illumi-
nation, we compute a reflection vector, which is the reflection of the light vector L (going from the
surface to the light source) around the surface normal vector N.
The idea is that as the orientation of this reflection vector R comes nearer to the orientation of
the view vector V (going from the surface to the viewer), the amount of reflected light should
increase exponentially, causing a sudden, bright highlight to appear.
Mathematically, we can express Phong illumination as follows. Note that all vector quantities
are normalized.
Equation 2-9: Ispecular = Ip (cos α)^n = Ip (R · V)^n
Thus, the specular intensity increases exponentially as the angle alpha between the reflection vec-
tor and the view vector decreases. The exponential cosine term has this effect; if the vectors lie on
top of one another, the cosine is one, and the intensity is at a maximum. As the cosine decreases,
the intensity falls off rapidly. The cosine of the angle can be computed by dotting the reflection
vector with the view vector. The exponential factor n is a parameter describing the shininess of the
surface; the greater the value of n, the shinier the surface appears because of an exponentially
faster increase and decrease of light intensity around the highlight.
The only issue which remains is how to find the reflection vector. We can calculate the reflec-
tion vector by the following equation.
Equation 2-10: R = 2(N · L)N - L
The easiest way to derive this is to use the rotation matrix for rotation about an arbitrary axis. Here,
we wish to rotate the vector L by 180 degrees about vector N. Recall the rotation matrix for rota-
tion about an arbitrary axis:
Equation 2-11
The elements of the matrix are the components of the vector about which the rotation is to take
place, in this case N. Replacing all u entries with corresponding N entries, multiplying the matrix
with the vector L to be rotated, and realizing that the cosine of 180 degrees is –1 and the sine of 180
degrees is 0, gives the following expression:
Equation 2-12
Equation 2-13
Notice that each row contains a subtraction with an isolated L term; thus, we can pull out
the entire vector [L1,L2,L3,0]T and express this as a vector subtraction. We can also isolate the
vector [N1,N2,N3,0]T. Rearranging the equation to be the difference of these two vectors yields
the following expression:
Equation 2-15
For a geometrical derivation of the reflection vector, see Computer Graphics: Principles and
Practice [FOLE92].
Note that unlike diffuse reflection, the effect of which was independent of the viewer position,
specular lighting depends on the viewer position, because the position of the highlight changes as
the viewer looks at the object from a different angle or position.
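In code, Equations 2-9 and 2-10 combine into something like the following sketch; vec3 and dot are the small helpers from the diffuse sketch earlier, and all vectors are assumed to be normalized.

#include <math.h>

// R = 2(N · L)N - L: the light vector L reflected about the surface normal N
vec3 reflect_about_normal(const vec3 &N, const vec3 &L) {
  float k = 2.0f * dot(N, L);
  vec3 R = { k*N.x - L.x, k*N.y - L.y, k*N.z - L.z };
  return R;
}

// V is the normalized vector from the surface to the viewer; n is the shininess exponent.
float specular_intensity(const vec3 &N, const vec3 &L, const vec3 &V,
                         float Ip, float n) {
  vec3 R = reflect_about_normal(N, L);
  float r_dot_v = dot(R, V);
  if (r_dot_v < 0.0f) return 0.0f;         // the highlight faces away from the viewer
  return Ip * powf(r_dot_v, n);            // Ip * (cos alpha)^n
}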
NOTE Specular lighting, when it is unwanted, can cause some odd lighting artifacts. I
encountered this once while programming a virtual reality application for the CAVE environ-
ment (an immersive 3D environment which surrounds the viewer on all sides with four huge
screens and provides for true stereoscopic perception and head tracking through special
shutter glasses). One odd problem that cropped up was with the floor of a room we had mod-
eled. The surface of the floor appeared to change intensity suddenly when the viewer was
located at particular positions within the world. Changing the position, number, and type of
light sources did not eliminate the problem. It turned out that the 3D model exported by the
3D artist included a specular lighting component, which then caused the
undesired camera-dependent specular “flashes” of light on the floor. These were particularly
problematic because the floor, which was not intended to be specular, was not highly tessel-
lated, meaning that the specular light computation was applied to very large triangles. This
caused very large and intermittent white flashes. The solution was to remove the specification
of the specular component in the exported 3D model.
Dynamic lighting is lighting that changes while a user is interacting with the 3D environment.
Dynamic lighting implies a change in any factor affecting any of the lighting equations used. Such
factors include surface geometry, surface orientation, surface attributes, light position, light attrib-
utes, and lighting parameters. Based on the new parameters, we dynamically recompute the
illumination for all affected parts of all geometry, and store the new illumination values with the
geometry for later use by the light rendering process. By defining lights to only have a certain area
of influence (such as a sphere), we can limit the amount of geometry that dynamic light affects and
therefore limit the amount of computation that needs to be done when this dynamic light changes
intensity.
Fog
Fog is not really a lighting effect, but it affects the intensity of the surface being viewed, so we con-
sider it here as well. Fog simply blends an original color with a certain fog color, where the amount
of the blending is based on the distance of the surface to the viewer. Objects very far away from the
viewer fade into the fog and are thus displayed with the fog color itself. Near the viewer, the effect
of fog is negligible, and thus the surface should be displayed with its original color. From near to
far the colors should fade gradually into the fog.
Mathematically, we can express the effect of fog on a particular surface color as follows.
Equation 2-17: c = (1 - f) cs + f cf
Here, c represents the final color of the surface with the effect of fog, cs is the original, unfogged
color of the surface, cf is the color of the fog, and f is the fog factor, which ranges from 0 to 1. If f is
0, no fog is applied and we see the original color. If f is 1, we see only the fogged color. Between 0
and 1 are varying amounts of fog. The way we actually implement this in the C++ code in the l3d
library is to scale the real-valued fog factor to lie within the integer range from zero to
MAX_LIGHT_LEVELS. We then use the integer scaled fog factor as a lookup index into a
precomputed fog table which slowly fades each color or color component (red, green, or blue)
from its original value into the fog color of white.
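A self-contained sketch of the blend in Equation 2-17, using the integer scaling described above, follows; the value chosen for MAX_LIGHT_LEVELS is again illustrative only.

const int MAX_LIGHT_LEVELS = 255;            // illustrative value only

// fog_factor: 0 = no fog (original color), 1 = fully fogged, per the l3d convention
unsigned char fog_component(unsigned char surface_c, unsigned char fog_c,
                            float fog_factor) {
  int f = (int)(fog_factor * MAX_LIGHT_LEVELS);     // scale to the integer range
  // c = (1-f)*cs + f*cf, applied per red, green, or blue component
  return (unsigned char)
    ((surface_c * (MAX_LIGHT_LEVELS - f) + fog_c * f) / MAX_LIGHT_LEVELS);
}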
CAUTION The OpenGL definition of the fog factor reverses the sense of 0 and 1: a fog fac-
tor of 0 in OpenGL means that we see only the fog color; a fog factor of 1 means that we see
only the original color.
The question remains as to how to compute the fog factor f. The most correct way would be to use
radial fog, which computes the distance of each fogged surface to the viewer and uses this actual
distance as the basis of some blending function going from 0 to 1. But distance computations
require a time-consuming square root operation, and since fog is typically computed per pixel, this
can be slow. A simpler fog computation, which is less accurate, is based on the z values of each
pixel after the perspective transformation. The z values are then mapped to the range 0 to 1. See
Figure 2-13 for a comparison of radial fog versus z depth fog.
Having decided on a distance to use (radial distance or z depth distance), we then need to map
this distance into a fog value from 0 to 1. We can choose to map the fog linearly or exponentially.
A linear mapping causes colors to fade out at a constant rate as they move into the fog. An expo-
nential mapping causes colors to fade out more quickly as they recede into the fog. We can even
use a discontinuous fog function to cause fog only to appear at certain distances from the viewer.
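As a sketch of the two mappings (plain C++ with illustrative parameter names; not the l3d code itself), a linear and an exponential fog factor can be computed from the z depth as follows.

#include <cmath>
#include <algorithm>

// Map a camera space z depth to a fog factor in [0,1].
// 0 = no fog (near), 1 = fully fogged (far). Constants are illustrative.
float fog_linear(float z, float z_near, float z_far) {
  float f = (z - z_near) / (z_far - z_near);
  return std::max(0.0f, std::min(1.0f, f));
}

float fog_exponential(float z, float density) {
  // colors fade out more quickly as they recede; density controls how fast
  float f = 1.0f - std::exp(-density * z);
  return std::max(0.0f, std::min(1.0f, f));
}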
NOTE An option somewhere between z depth fog and radial fog is to approximate radial
fog by taking the z depth value, then subtracting a percentage of x and y deviation from the
center of the screen. This is more accurate than z depth fog, but still faster than true radial fog.
Notice that regardless of the means of computing fog, the computation is done in camera coordi-
nates because fog is an effect which depends on distance from the camera. This is another example
of why we need the camera_transform method in classes l3d_object and l3d_polygon_3d,
which allows us to save the camera space coordinates for later use. The fog computation
is typically done just before polygons are about to be drawn to the screen in the rasterizer; at this
point, however, the polygons have already undergone the perspective transformation, meaning
that we must have previously saved the camera space coordinates in order to perform this late fog
computation.
TIP The rendering or shading technique influences how many lighting calculations we per-
form and at which points in 3D space.
Flat Shading
Flat shading is the rendering technique we have been using so far: each polygon has one color.
With flat shading, we perform the lighting computations once per polygon, typically for the
center point of the polygon. This computed light intensity is then used for the entire polygon.
Flat shading leads to a faceted appearance of the model, because the light intensity changes
abruptly from one face to another. See Figure 2-14.
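A sketch of the idea, with a toy diffuse lighting function standing in for the full lighting equation (plain C++; not the l3d implementation): the lighting is evaluated once, at the polygon's center, and the single result is used for every pixel.

#include <algorithm>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// A toy diffuse term with a fixed light direction; stands in for the full
// lighting equation described earlier.
static float light_at(const Vec3 & /*point*/, const Vec3 &n) {
  const Vec3 light_dir = {0.0f, 0.0f, -1.0f};
  float d = n.x*light_dir.x + n.y*light_dir.y + n.z*light_dir.z;
  return std::max(0.0f, d);
}

// Flat shading: evaluate the lighting once, at the polygon's center point,
// and use that single intensity for the entire polygon.
float flat_shade_intensity(const std::vector<Vec3> &verts, const Vec3 &normal) {
  if (verts.empty()) return 0.0f;
  Vec3 c = {0, 0, 0};
  for (std::size_t i = 0; i < verts.size(); i++) {
    c.x += verts[i].x; c.y += verts[i].y; c.z += verts[i].z;
  }
  c.x /= verts.size(); c.y /= verts.size(); c.z /= verts.size();
  return light_at(c, normal);
}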
Gouraud Shading
Gouraud shading is a simple technique for combating the faceted appearance of flat-shaded poly-
gons. With Gouraud shading, we perform the lighting computations once per vertex for each
polygon. This means that we need to store and transform vertex normals along with the geometry.
Given the light values at each vertex of the polygon, Gouraud shading then interpolates these light
values while the polygon is being drawn to arrive at a separate light intensity for each pixel. After
arriving at the light intensity for the pixel being drawn, this light intensity is combined with the
color of the polygon, using a lighting table. The final, lighted color is then drawn.
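The per-pixel interpolation can be sketched as follows for a single horizontal span of the polygon; the edge intensities i_left and i_right are themselves interpolated along the polygon's edges from the precomputed vertex intensities (a plain C++ sketch, not the l3d rasterizer).

// Gouraud shading across one horizontal span: step linearly from the left
// edge intensity to the right edge intensity, giving each pixel its own
// intensity, which the caller then combines with the polygon's color
// (for example, through a lighting table) before writing the pixel.
void gouraud_span(int x_left, int x_right,
                  float i_left, float i_right,
                  void (*plot)(int x, float intensity))
{
  if (x_right <= x_left) return;
  float intensity = i_left;
  float di = (i_right - i_left) / (x_right - x_left);  // per-pixel increment
  for (int x = x_left; x < x_right; x++) {
    plot(x, intensity);
    intensity += di;
  }
}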
Figure 2-15: Gouraud shading computes light intensity at each vertex, then interpolates these precomputed intensities as the polygon is being drawn, giving a unique light intensity to every pixel.
The effect of Gouraud shading is to smooth out over the entire polygon the changes in light
intensity which occur among the polygon’s vertices. Also, since light intensities are computed at
the vertices of the polygon, and since all polygons in an object share vertices, the light changes
over the entire 3D object are also smoothed out, resulting in an overall smooth appearance of the
surface. See Figure 2-16.
Figure 2-16: Gouraud shading makes a surface appear smooth and round.
Note that Gouraud shading is not always desirable, because it “washes out” all the sharp
changes in light intensity which occur over the entire 3D object. If the object being modeled is
supposed to appear round, then this is fine. But, if the object is supposed to have some sharp cor-
ners or edges, then the light intensity should indeed change suddenly from one face to the next
because of the physical properties of the surface. But simple Gouraud shading would smooth over
all sharp changes in light intensity, including the desired ones.
Solving this problem would require storing vertex normals per polygon instead of per object;
this would effectively allow each shared vertex to have several vertex normals, one for each face
touching the vertex. All vertex normals for one vertex would be the same if the surface is supposed
to appear rounded at that point; the vertex normals would be different if an edge should appear
between the adjacent faces.
A popular technique for allowing smooth edged polygons and sharp edged polygons in the
same model is to use smoothing groups. With this technique, we assign a 32-bit long integer to
each polygon, which is then treated as 32 Boolean flags. During the creation of a 3D model, the 3D
artist can turn each of these on or off. When determining each vertex normal for a polygon, the
long value is subjected to a logical AND operation with the long value of each neighbor. If the
result is not zero, the neighbor’s normal is included in the calculation; if it is zero, that neighbor is
excluded. This means that each vertex can have more than one vertex normal depending on which
polygon it is being considered for. By carefully setting the bits, the artist can specify which poly-
gons affect which adjacent polygons, and thereby control smoothing (or lack thereof) on each
edge.
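A sketch of the smoothing group test when computing a vertex normal (plain C++; the structures are illustrative and not the l3d classes): a neighboring polygon's facet normal contributes only if its smoothing-group bits overlap with ours.

#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

struct Polygon {
  Vec3 facet_normal;
  unsigned long smoothing_groups;  // 32 boolean flags set by the 3D artist
};

// Vertex normal for 'poly' at one of its vertices: average the facet normals
// of all polygons sharing the vertex whose smoothing-group bits AND to a
// non-zero value with ours.
Vec3 vertex_normal(const Polygon &poly,
                   const std::vector<const Polygon *> &polys_sharing_vertex)
{
  Vec3 n = poly.facet_normal;
  for (std::size_t i = 0; i < polys_sharing_vertex.size(); i++) {
    const Polygon *neighbor = polys_sharing_vertex[i];
    if (neighbor == &poly) continue;
    if (poly.smoothing_groups & neighbor->smoothing_groups) {
      n.x += neighbor->facet_normal.x;
      n.y += neighbor->facet_normal.y;
      n.z += neighbor->facet_normal.z;
    }
  }
  // (normalization of n omitted for brevity)
  return n;
}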
Phong Shading
Phong shading takes Gouraud shading one step further. With Gouraud shading, we evaluated the
lighting equation once per vertex, then we interpolated the precomputed intensities while drawing
the polygon, giving an interpolated light value per pixel.
With Phong shading, we actually re-evaluate the entire lighting equation at each pixel. To do
this, we again use vertex normals, and interpolate the normal vector itself while we are drawing
the polygon. We use the interpolated normal vector in the lighting equation, and recompute the
light for each pixel. Naturally, this requires a large amount of computation for each pixel.
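A sketch of Phong shading across one horizontal span, re-evaluating a simple diffuse term at every pixel (plain C++ with floats; illustrative only, not the l3d code): the normal is interpolated from the left edge to the right edge, renormalized, and fed back into the lighting computation per pixel.

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 normalize(Vec3 v) {
  float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
  if (len > 0.0f) { v.x /= len; v.y /= len; v.z /= len; }
  return v;
}

// Phong shading across one span: interpolate the *normal*, renormalize it,
// and re-evaluate the lighting (here, just a diffuse term) at every pixel.
void phong_span(int x_left, int x_right, Vec3 n_left, Vec3 n_right,
                Vec3 light_dir, void (*plot)(int x, float intensity))
{
  if (x_right <= x_left) return;
  float t_step = 1.0f / (x_right - x_left);
  float t = 0.0f;
  for (int x = x_left; x < x_right; x++, t += t_step) {
    Vec3 n = { n_left.x + t*(n_right.x - n_left.x),
               n_left.y + t*(n_right.y - n_left.y),
               n_left.z + t*(n_right.z - n_left.z) };
    n = normalize(n);
    float diffuse = n.x*light_dir.x + n.y*light_dir.y + n.z*light_dir.z;
    plot(x, diffuse > 0.0f ? diffuse : 0.0f);
  }
}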
Light Maps
A light map is a special form of a texture map (covered in the next section). Essentially, the idea is
to associate a square grid, called the light map, with each polygon. The entire light map covers the
whole polygon, and thus each element in the light map covers a certain small area of the polygon.
We call each element in the light map a lumel, short for luminance element. To compute the light
intensities, we evaluate the lighting equation once for each lumel, meaning that we evaluate the
lighting equation several times for one polygon. The finer the grid, the more times the light equa-
tion needs to be evaluated, but also the greater the quality of the final result.
Although we evaluate the lighting equation at several points on the polygon (at each lumel),
this is only a one-time preprocessing step (assuming static and not dynamic lighting). After com-
puting and saving the intensities in the light map, we simply store the precomputed light map with
the polygon. Then, when drawing the polygon, for each pixel we determine the corresponding
lumel in the light map (a non-trivial computation, as we see in the next section), combine the
intensity in the light map with the polygon’s color by using the lighting table, and finally draw the
lighted pixel to the screen.
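The per-pixel work at drawing time is then just a lumel lookup and a table-based combine, roughly as in this sketch (array sizes and names are illustrative; the actual lumel-for-pixel computation is the subject of the texture mapping sections below).

const int MAX_LIGHT_LEVELS = 255;   // illustrative table resolution

// lighting_table[intensity][color] combines a light intensity with a color;
// it is assumed to have been filled in elsewhere, as described in the text.
static unsigned char lighting_table[MAX_LIGHT_LEVELS + 1][256];

// Draw-time work for one pixel: read the precomputed intensity of the lumel
// covering the pixel, then combine it with the polygon's color.
unsigned char shade_pixel(const unsigned char *light_map, int lm_width,
                          int lumel_u, int lumel_v,
                          unsigned char polygon_color)
{
  unsigned char intensity = light_map[lumel_v * lm_width + lumel_u];
  return lighting_table[intensity][polygon_color];
}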
Light mapping can be seen as being somewhere between Gouraud and Phong shading. With
Gouraud shading, we only computed the light at the vertices of the polygon; with Phong shading,
we computed light at every single pixel. With light mapping, we define an arbitrary grid of lumels
on top of the polygon, and compute the light intensities at every grid element.
The most difficult part of light mapping is the determination, during the rasterization of a par-
ticular polygon, of which lumel in the light map corresponds to the current pixel on-screen which
we are drawing. When rasterizing a polygon, we know that all pixels we draw do correspond to
(i.e., result from the perspective projection of) some point on the polygon and thus to some point in
the light map, but we do not immediately know which point exactly. If you have a super-human
memory, you may recall that we briefly mentioned this problem in passing in the introductory
companion book Linux 3D Graphics Programming. To summarize, it is not exactly easy or intu-
itive to calculate which lumel on the polygon corresponds to the current pixel of the polygon being
drawn, but it is possible, as we will see in grueling detail in the next section. This calculation is
actually one particular application of the technique called texture mapping, which allows us to
associate arbitrary images with a polygon. Therefore, let us now turn to the topic of texture map-
ping, returning to light mapping again afterwards.
Texture Mapping
As recently as the mid-1990s, an interactive 3D graphics application with texture mapping was
quite a sensation for personal computers. Today, end users expect texture mapping even in simple
3D graphics applications. Therefore, it is important to understand this technique.
The goal of texture mapping is to draw the pixels of a polygon with colors coming from a sep-
arate 2D image. (The image can also be 3D, but for now assume the image is 2D.) The image is
called the texture; the individual pixels within the texture image are often called texels. Seen this
way, the next question is: given a pixel of a polygon on-screen, which texel from the texture
should be drawn?
The answer to this question requires us to map the image onto the polygon. For instance, we
might want to draw the pixels of the polygon so that it appears that the image were “glued on” to
the surface of the polygon. This is by far the most common application of texture mapping. To
“glue” an image onto a polygon, we need to consider the coordinate space of the polygon, and we
also need to define a new coordinate space to contain the image. This new coordinate space con-
taining the image is texture space, which we look at in more detail in the next section.
Figure 2-22: The storage in memory of the texture data: left to right and top to bottom.
The textures are internally stored in the same pixel format as required by the screen; this
means that each texel may occupy more than one byte in the texel data block.
Listing 2-4 presents the texture classes discussed so far. It also declares two classes,
l3d_texture_space and l3d_texture_computer, which we cover in the coming sections.
Listing 2-4: texture.h
#ifndef __TEXTURE_H
#define __TEXTURE_H
#include "../../tool_os/memman.h"
#include <stdlib.h>
#include <stdio.h>
#include "../vertex/coord.h"
class l3d_texture_data {
  public:
    int id;
    l3d_texture_data(void) { id=0; data = NULL; }
    virtual ~l3d_texture_data(void) { if(data) delete [] data; }
    int width, height;
    unsigned char *data;
};

class l3d_texture_space {
  public:
    l3d_coordinate O, U, V;
};
class l3d_texture_computer {
  private:
    l3d_matrix _world_to_tex_matrix;
  public:
    static const l3d_real EPSILON_VECTOR = float_to_l3d_real(0.0001);
    l3d_point O; l3d_vector U, V;
    l3d_real *fovx,*fovy;
    int *screen_xsize, *screen_ysize;
    l3d_real u, v;

    // builds and returns the matrix converting world space coordinates into
    // the texture space of tex_space (the method header and the derivation
    // of the axis vectors are reconstructed here; the printed listing is
    // abridged)
    const l3d_matrix& world_to_tex_matrix(const l3d_texture_space &tex_space) {
      // axis vectors of the texture space: tip point minus base point O
      l3d_vector
        U_vector(tex_space.U.original.X_ - tex_space.O.original.X_,
                 tex_space.U.original.Y_ - tex_space.O.original.Y_,
                 tex_space.U.original.Z_ - tex_space.O.original.Z_,
                 int_to_l3d_real(0)),
        V_vector(tex_space.V.original.X_ - tex_space.O.original.X_,
                 tex_space.V.original.Y_ - tex_space.O.original.Y_,
                 tex_space.V.original.Z_ - tex_space.O.original.Z_,
                 int_to_l3d_real(0)),
        w = cross(U_vector, V_vector),
        v_x_u = cross(V_vector, U_vector),
        u_x_w = cross(U_vector, w),
        w_x_v = cross(w, V_vector),
        o_vec(tex_space.O.original.X_,
              tex_space.O.original.Y_,
              tex_space.O.original.Z_,
              int_to_l3d_real(0));

      _world_to_tex_matrix.set(
        w_x_v.X_, w_x_v.Y_, w_x_v.Z_, dot(o_vec,w_x_v)*int_to_l3d_real(-1),
        u_x_w.X_, u_x_w.Y_, u_x_w.Z_, dot(o_vec,u_x_w)*int_to_l3d_real(-1),
        v_x_u.X_, v_x_u.Y_, v_x_u.Z_, dot(o_vec,v_x_u)*int_to_l3d_real(-1),
        int_to_l3d_real(0),
        int_to_l3d_real(0),
        int_to_l3d_real(0),
        dot(U_vector,w_x_v));

      return _world_to_tex_matrix;
    }
};
#endif
#include "../../tool_os/memman.h"
#include "../../tool_2d/scrinfo.h"
#include <stdio.h>
class l3d_texture_loader {
protected:
l3d_screen_info *si;
public:
l3d_texture_loader(l3d_screen_info *si) {
this->si = si;
data = NULL;
}
virtual ~l3d_texture_loader(void) {};
#endif
#include "texload.h"
public:
l3d_texture_loader_ppm(l3d_screen_info *si) :
l3d_texture_loader(si) {};
void load(const char *fname);
};
#endif
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "../../tool_os/memman.h"
// (the header declaring l3d_texture_loader_ppm is also included in the full
// source; this listing is excerpted)

// process_tok is a state machine consuming one whitespace-separated token of
// the PPM file at a time. States 1 through 6 parse ASCII ("P3") files; states
// 10 through 15 parse binary ("P6") files. It returns 1 to continue
// tokenizing, or 2 once the header of a binary file is complete and the rest
// of the file should no longer be read token by token.
// (state 0, which examines the "P3"/"P6" magic number, is omitted here)
int l3d_texture_loader_ppm::process_tok(char *tok) {
switch(state) {
case 1: {
sscanf(tok,"%d",&width);
state=2;
return 1;
}
case 2: {
sscanf(tok,"%d",&height);
data = new unsigned char [width*height*si->bytes_per_pixel];
cur_pdata = data;
state = 3;
return 1;
}
case 3: {
int maxcol=0;
sscanf(tok,"%d",&maxcol);
si->ext_max_red = si->ext_max_green = si->ext_max_blue = maxcol;
state = 4;
return 1;
}
case 4: {
sscanf(tok,"%d",&cur_r);
state = 5;
return 1;
}
case 5: {
sscanf(tok,"%d",&cur_g);
state = 6;
return 1;
}
case 6: {
int blue=0;
sscanf(tok,"%d",&cur_b);
register int i;
unsigned long mask = MAX_BYTE;
char shift = 0;
// (the loop which packs cur_r, cur_g, and cur_b into the screen's pixel
// format at *cur_pdata, using i, mask, and shift, is omitted in this excerpt)
state = 4;
return 1;
}
case 10: {
sscanf(tok,"%d",&width);
state = 11;
return 1;
}
case 11: {
sscanf(tok,"%d",&height);
data = new unsigned char [width*height*si->bytes_per_pixel] ;
cur_pdata = data;
state = 12;
return 1;
}
case 12: {
int maxcol=0;
sscanf(tok,"%d",&maxcol);
si->ext_max_red = si->ext_max_green = si->ext_max_blue = maxcol;
state = 13;
return 2;
}
case 13: {
unsigned char c;
sscanf(tok,"%c",&c);
cur_r = c;
state = 14;
return 1;
}
case 14: {
unsigned char c;
sscanf(tok,"%c",&c);
cur_g = c;
state = 15;
return 1;
}
case 15: {
unsigned char c;
sscanf(tok,"%c",&c);
cur_b = c;
register int i;
unsigned long mask = MAX_BYTE;
char shift = 0;
// (the pixel-packing loop is omitted in this excerpt, as in case 6 above)
state = 13;
return 1;
}
fp=fopen(fname, "r");
if(!fp) {
fprintf(stderr,"Error opening texture ppm file %s", fname);
return;
}
state = 0;
fgets(s, 256, fp);
while (!feof(fp)) {
tok = strtok(s," ");
while(tok) {
if(tok[0]=='#') break;
result = process_tok(tok);
tok = strtok(NULL," ");
}
if (result==2) break;
fgets(s, 256, fp);
}
}
texture quality, or you sacrifice color resolution in the textures in exchange for more freedom
when computing the lighting and fog tables.
There are also a number of other useful PPM utilities for performing image processing opera-
tions on image files—smoothing, scaling, rotating, brightening, and so forth. (See the online
manual.) Also, the GIMP bitmap editing program under Linux offers many powerful image
manipulation functions in a graphical interface, allowing you to immediately preview and try out a
number of visual effects.
World space contains all of the objects comprising our virtual world; it only makes sense,
then, that the texture image must also be positioned within the 3D world.
Positioning the texture image in our 3D world first requires us to define a texture space con-
taining the texture image. Texture space is simply a normal coordinate system with three axes and
an origin. We name the three texture space axes the u axis, v axis, and w axis, corresponding to the
x, y, and z axes in our normal left-handed world coordinate system.
CAUTION Do not confuse the w axis, the third dimension of texture space, with the homo-
geneous w coordinate. These two terms have nothing to do with one another!
The special thing about texture space is that we associate a color with each point in the space.
Thus, you can think of texture space as a solid, colored, infinitely large cube, where each point in
the volume of the solid cube may be assigned a separate color. The convention we define for tex-
ture space is that the 2D texture image we defined in step 1 is located in the 2D square in texture
space defined by the corner points (0,0,0), (0,1,0), (1,1,0), (1,0,0). Thus, the texture image occu-
pies exactly one square unit, starting at the origin, and located in the uv plane of texture space.
Outside of this single square unit, but still in the uv plane, we can think of the texture as being
duplicated, meaning that each square unit in texture space, starting and ending on whole numbers,
contains the colors of the 2D texture image. As for the w axis, for 2D textures we impose the
restriction that the colors in texture space do not depend on w; they only depend on the (u,v) val-
ues. (3D textures also allow for color variation along the w axis.) Graphically, you can imagine
that the 2D uv plane is simply duplicated infinitely up and down the entire length of the w axis.
It is important to understand that a coordinate in texture space (also often called a (u,v) coordi-
nate) is essentially equivalent to a pixel in the texture image. Texture space is a colored space; the
texture image defines the colors in the space. Therefore, each point in the texture space corre-
sponds to a pixel in the texture image. Here are some examples: the (0,0,0) coordinate in texture
space corresponds to the first pixel in the texture image; (0.5, 0.5, 0) is the pixel in the middle of
the image; (0.999, 0.999, 0) is the last pixel in the image; and (1,1,0) is again the first pixel in the
image, because the texture repeats itself.
Since texture space is a normal 3D space, we can define it in terms of three vectors, defining
the axes, and an origin point. We name the origin point O, and the three axis vectors U, V, and W.
Since we have said that for 2D textures, which is all we use in this book, the w texture coordinate
does not affect the color in texture space, it turns out that we rarely need the W axis. When we do
need the W axis vector, we can compute it as U × V.
NOTE Computing the W vector as U × V has one disadvantage: the size of the resulting vec-
tor is then arbitrary. We could simply normalize the W vector after the cross product, but the
problem is that the sizes of the texture space axis vectors define the scaling of the texture in
world space. We look at this more closely in the next section. This means that by computing
the W vector dynamically, we lose the freedom to specify a custom scaling of the texture along
the w axis. Since for 2D textures, we ignore the w axis anyway, this is unimportant. But with
true 3D textures, where the w axis may be important, we should define our own W axis to have
full control over the entire specification of the 3D texture space.
This means that to store a texture space, we just need to store O, U, and V. Instead of storing the
vectors U and V, we instead store the tip points of these vectors; the base point of these vectors is
then O. As usual, we subtract base from tip to obtain the vector. We store the tip points to allow for
consistent transformation using the l3d_coordinate class, as mentioned earlier in this chap-
ter in the discussion about surface normal vectors.
Class l3d_texture_space, presented earlier in Listing 2-4, stores a texture space in this
manner. It simply has three member variables, all of type l3d_coordinate: O, U, and V. Class
l3d_texture, as we saw earlier, inherits from l3d_texture_space. This means that a
texture also has a texture space associated with it.
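A small sketch of the tip-minus-base convention, with plain structs standing in for l3d_texture_space and l3d_coordinate (illustrative only): the stored tip points become axis vectors once the origin O is subtracted from them.

struct Vec3 { float x, y, z; };

struct TextureSpace {           // stands in for l3d_texture_space
  Vec3 O, U, V;                 // origin and the *tip points* of the axes
};

static Vec3 sub(const Vec3 &a, const Vec3 &b) {
  Vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z };
  return r;
}

// Recover the actual axis vectors: tip minus base.
void texture_axes(const TextureSpace &t, Vec3 &U_vec, Vec3 &V_vec) {
  U_vec = sub(t.U, t.O);
  V_vec = sub(t.V, t.O);
}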
The purpose of the texture coordinate system, then, is to define the position, orientation, and
scaling of the texture image within the 3D world coordinate system.
Given a texture space, we should then define a mapping between texture space and world
space. This is nothing more than a change in coordinate system. This means that given a particular
coordinate in texture space, we should be able to find out the coordinate in world space. Con-
versely, and more commonly, we must also be able to determine, given a coordinate in world
space, what its coordinates are in texture space.
Let’s now look at each of these mappings separately.
The mapping from texture space to world space is very easy to compute. Recalling the rela-
tionship between matrices and coordinate systems, we can simply write the texture coordinate
system as a matrix, using the first three columns as the axis vectors and the last column as the ori-
gin point. This matrix, when multiplied with a column vector representing a location in texture
space, then converts the texture space coordinates to world space coordinates, for the reasons dis-
cussed in the introductory companion book Linux 3D Graphics Programming.
Equation 2-18

[x]   [Ux Vx Wx Ox]   [u]
[y] = [Uy Vy Wy Oy] · [v]
[z]   [Uz Vz Wz Oz]   [w]
[1]   [0  0  0  1 ]   [1]
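As a sketch, such a matrix can be built from O, U, V, and W with plain floats (row-major m[row][col]; not the l3d_matrix class). Multiplying it with a column vector (u, v, w, 1) in texture space yields the corresponding (x, y, z, 1) in world space.

struct Vec3 { float x, y, z; };

// Columns 0..2 are the axis vectors U, V, W; column 3 is the origin O.
void texture_to_world_matrix(const Vec3 &U, const Vec3 &V, const Vec3 &W,
                             const Vec3 &O, float m[4][4])
{
  m[0][0] = U.x; m[0][1] = V.x; m[0][2] = W.x; m[0][3] = O.x;
  m[1][0] = U.y; m[1][1] = V.y; m[1][2] = W.y; m[1][3] = O.y;
  m[2][0] = U.z; m[2][1] = V.z; m[2][2] = W.z; m[2][3] = O.z;
  m[3][0] = 0;   m[3][1] = 0;   m[3][2] = 0;   m[3][3] = 1;
}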
The mapping from world space to texture space is more difficult, but it is the mapping which we
more often need. Why do we need it more often? The reason is that the final texture mapping strat-
egy we implement needs texture coordinates for each world space polygon vertex. The idea is that
we ultimately wish to associate texture coordinates—and thus colors from the texture image—
with every pixel we draw on-screen for a particular polygon. There are two ways of attacking the
problem. The first is the direct pixel-level approach, covered in the next section, where we explic-
itly compute a texture coordinate for every pixel as it is being drawn. The second is a vertex-level
approach, covered in the section titled “An Optimized Texture Mapping Strategy,” where we only
explicitly compute texture coordinates for each world space vertex of a polygon, and interpolate
these precomputed values to arrive at the texture coordinates for each pixel. For the second, ver-
tex-level approach, we know the world space coordinates of each polygon vertex, and need to
compute the texture coordinates for each vertex. In other words, given the world space coordinates
of a vertex, what are its coordinates in texture space?
So, how do we compute the matrix that maps from world space to texture space? Concep-
tually, the answer is simple: invert the matrix that converts texture space to world space. It is
tempting, but incorrect, to use the same strategy we used (in the introductory companion book
Linux 3D Graphics Programming) for deriving the camera transformation matrix. In that case, we
simply took the transpose of the matrix to find its inverse. But recall why this works: the vectors
for camera space are all mutually orthogonal, and are all of unit size. While our texture space vec-
tors are probably orthogonal, they are not always of unit size. This is because, as we said earlier,
we must allow for different-sized texture space axis vectors to allow for scaling of the texture.
Therefore, since the U, V, and W vectors are not of unit size, we cannot invert the matrix simply by
taking its transpose.
In this case, we need to perform a bit of tedious algebra to invert the matrix. We need to
express the matrix as a set of simultaneous equations, solve these equations algebraically, then
recast the expression into matrix form. We are going to go through every step of the algebra in
excruciating detail because it is important that you understand how to find the solution to such
mathematical problems, which arise often in 3D graphics. But don’t worry—we are going to use
the computer to do most of the algebra for us; specifically, we are going to use the symbolic alge-
bra package Calc, which is an extension to the Emacs editor. Calc is included on the CD-ROM.
The important thing is to understand what the problem is, how to cast it in a form that can be
solved by Calc, and how to use Calc.
Our problem is to invert a matrix. First of all, let’s rewrite the problem as a set of linear equa-
tions. Then, we’ll see how we can use Calc to solve the set of linear equations.
The matrix presented earlier converts from texture space to world space. This means that mul-
tiplying texture space coordinates (u,v,w) with this matrix gives us world space coordinates (x,y,z).
Mathematically, this can be expressed as:
Equation 2-19

x = u·Ux + v·Vx + w·Wx + Ox
y = u·Uy + v·Vy + w·Wy + Oy
z = u·Uz + v·Vz + w·Wz + Oz
This system of linear equations is merely a different representation for the same information con-
tained within the matrix equation. Given u, v, and w in texture space, the equations tell us how to
find x, y, and z in world space.
We now want to solve this system of equations for u, v, and w, based on known values of x, y,
and z. This takes us from world space to texture space. Solving this system of linear equations is
exactly the same as inverting the equivalent matrix.
The system of equations is solvable; there are three equations and three unknowns. You may
wish to try to solve the system of equations by hand, using the normal techniques of grade school
algebra. If your patience level is the same as mine, you will notice that this system is rather tedious
to solve. Scores of terms need to be copied from one step to the next, some terms cancel out sur-
prisingly, and others look like they will cancel out but actually do not. The danger of making a tiny
sign error somewhere is great, leading to incorrect results and wasted hours of scribbling, not to
mention a cramped hand. Most of the work involved in solving this system of equations seems
tedious, repetitive, manual, and error-prone. It seems like a perfect job for the computer. The Calc
package can help us here. So, let’s now take a detour to look at how to use the Calc package to
solve this system of equations.
NOTE If you are uninterested in understanding how to solve the system of equations, you
can skip this section and just use the results. However, I sincerely hope that you take the time
to learn to use Calc and to understand how to solve such problems using Calc as a tool.
Sooner or later, every 3D programmer will be confronted with solving systems of equations. A
technique as ubiquitous as texture mapping cannot be fully understood without understand-
ing the solutions to the equations.
Calc can then automatically solve this system of equations for x and y to give us the solution:
Equation 2-22
Internally, therefore, Calc has performed the typical grade school algebraic manipulations for
solving systems of equations. For this example, the equations are very easy to solve by hand, but in
the case of inverting the texture space to world space matrix, the system of equations is quite sim-
ply tedious to solve by hand. The symbolic algebra capabilities of Calc relieve us of exactly this
burden. Let’s first take a brief tour of how to use Calc, then focus on using the symbolic algebra
capabilities to solve the texture mapping equations.
NOTE Calc has excellent online info documentation, including many hands-on tutorials.
Alternatively, within Calc, press ? and type i to display the info documentation from within
Calc. Since we don’t have enough space here to describe all of Calc’s features—many of
which are very interesting and useful for solving the types of problems which arise in 3D
graphics—I encourage you to have a look at Calc’s info documentation.
With Emacs and Calc installed, you can get started as follows.
1. Type emacs and press Enter. The Emacs window appears.
2. In the Emacs window, press Alt+x to run an extended command. Notice the text "M-x" which
appears at the bottom of the screen. Type calc and press Enter. Two new small windows
appear at the bottom part of the existing Emacs window. See Figure 2-29. Notice that the cur-
sor is now located within the leftmost Calc window.
3. Press Ctrl+x, 1. This maximizes the Calc window to occupy the entire Emacs window.
To exit Calc, simply kill the Calc buffer. When the cursor is in the Calc window, press Ctrl+x,
Ctrl+k, Enter.
Next, let’s enter some simple expressions and let Calc evaluate them.
Stack-Based Computation
Fundamentally, Calc operates with a stack of mathematical expressions. The typical flow of oper-
ations in Calc is to enter entries on the stack, then to perform operations with the entries on the
stack. Some operations, such as negation, operate only on the topmost element of the stack. Other
operations, such as addition, operate on the top two elements of the stack, removing these and
replacing them with one new element, the result of the operation. This style of operation is also
called reverse Polish notation or RPN; some desk calculators operate in this mode.
Let’s now perform some basic computations to see the stack in action and to learn some typi-
cal Calc commands.
1. Press ', a single quote character. This tells Calc that you are about to input an algebraic expres-
sion, and that this expression should be placed on top of the stack. Notice the prompt
“Algebraic:” which appears at the bottom of the screen.
2. Type 125 and press Enter. Notice the text “1: 125” which appears within the Calc window.
This means that entry number 1 of the stack has just been set to value 125.
3. Type n. This negates the top entry on the stack. Notice the number 125 changes to –125.
4. Press ', type 25, and press Enter. This enters a new entry on the top of the stack. The existing
entries on the stack are pushed down by one position. Notice that the entry number next to the
number –125 has changed to 2, indicating that it is now entry number 2 on the stack. Notice
the new line “1: 25” which appears, indicating that the top entry of the stack is the number 25.
5. Press +. This adds the top two entries on the stack and replaces them with a new entry on the
top of the stack representing the result. Notice that the entries –125 and 25 on the stack disap-
pear and are replaced by a new entry with a value of –100, the result of adding –125 and 25.
6. Press Shift+u to undo the last operation. Notice that the original two entries reappear.
7. Press / to divide the second entry on the stack (–125) by the first entry (25). Notice that the
result (–5) appears on top of the stack.
8. Press Shift+u to undo the last operation.
9. Press Tab. This exchanges the top two entries on the stack.
10. Press / to divide the second entry on the stack (25) by the first entry (–125). Notice that the
result (0.2) is different than before, because the order of the stack entries was different; we
have exchanged dividend and divisor.
11. Press Shift+u to undo the last operation.
12. Press Space. Notice that the top entry of the stack is duplicated. Press Space three more times
to duplicate the top entry three more times.
13. Press Backspace. Notice that the top entry of the stack disappears. Press Backspace three
more times to delete the next three entries from the top of the stack.
14. Press Ctrl+u. This allows you to enter a numerical argument to an upcoming command. Type
2 as a numerical argument. Now, enter the command to be executed with the argument of 2.
Press Space, the “duplicate entry” command.
15. Notice that the top two entries of the stack have been duplicated, because of the numerical
argument 2 which we entered earlier.
Now we know how to enter items onto the stack, how to duplicate and delete them, and how to per-
form basic arithmetic operations on them. Here is a summary of the most important Calc
commands:
n Press ' (single quote) to enter an algebraic expression.
n Press +, –, /, or * to add, subtract, divide, or multiply the top two entries of the stack, replacing
them with a new entry containing the result.
n Type n to negate the top entry on the stack.
n Press Space to duplicate the entry on top of the stack.
n Press Ctrl+u, type number n, and press Space to duplicate the top n entries on the stack.
n Press Tab to exchange the top two entries on the stack.
n Press Shift+u to undo operations.
n Press Backspace to delete the entry on top of the stack.
NOTE It would also have been possible to enter the vector of equations directly in one step,
instead of entering two separate equations then packing them. To enter both equations in one
step, simply separate the equations with a comma and place square brackets around the
entire list of equations during the entry of the expression.
6. Type a, then press Shift+s to solve the system of equations on the top of the stack. Notice the
prompt that appears at the bottom of the screen, “Variable to solve for:”
7. Type [x,y] and press Enter. This tells Calc that we want to solve the system of equations for
two variables, x and y. Do not forget to type the square brackets.
8. Notice that Calc solves the system and displays the solution x=7.5, y=2.5.
Note that when entering expressions containing terms that should be multiplied, you can either
explicitly specify the multiplication with the * (asterisk) character, as in “a*b”, or you can simply
separate the terms with a space, as in “a b”. You should not write the terms directly next to one
another, as in “ab”, because in this case, Calc interprets this as one term instead of two terms multi-
plied together.
Although we now know how to use Calc to solve systems of equations, we should not try just
yet to solve the texture mapping equations. Instead, we should continue to look at some other edit-
ing features of Calc. Later, when we actually solve the texture mapping equations, we will be
confronted face to face with the full complexity of the solutions, and will need all tools at our dis-
posal to reduce the solutions to the simplest form possible.
The following list summarizes the features of Calc that we use in the following sections to
solve and simplify the texture mapping equations.
n Editing expressions. Calc allows you to edit the top expression on the stack. This is extremely
useful for performing manual simplifications or rearranging an existing formula so that Calc
can more easily recognize and simplify expressions. Edit the top expression on the stack as
follows. Press `, a single backquote character. A new window, with the title "Calc Edit
Mode,” opens, containing the top expression of the stack. Use the normal Emacs editing com-
mands to edit the text of the expression. Press Ctrl+c twice to accept your changes, or Alt+#,
x to cancel your changes.
n Display modes. Calc offers different ways of displaying algebraic expressions. The Normal
mode is a compact, straight-line representation, where two variables next to one another are
understood to be multiplied together, as with standard algebraic notation. The Big mode is an
easy-to-read ASCII representation, where fractions are displayed vertically instead of hori-
zontally. The C mode is a representation using the syntax and operators of C language expres-
sions. The TeX mode is a representation suitable for inclusion in the typesetting program TeX,
which is often used in academic circles for publishing documents with large amounts of math.
(The equations in this book were all typeset using the LyX front end to the LaTeX macro
extension to the TeX program.) Change the display mode in Calc as follows. Type d. Then, for
Normal mode, press Shift+n. For Big mode, press Shift+b. For C mode, press Shift+c. For
TeX mode, press Shift+t.
n Vector entry. Calc has facilities for dealing with row and column vectors of numbers or vari-
ables. To enter a row vector, first press ' to allow algebraic entry. Then, press [ to begin the
vector. Enter each element of the vector, separated by the comma (,) character. Press ] to end
the vector, and press Enter to finish the input. To enter a column vector, follow the same pro-
cedure, but separate the elements of the vector with the semicolon (;) character.
n Matrix entry. Calc also allows entry and computation with matrices of numbers or variables.
To enter a matrix, begin as if you were entering a vector: press ' then [. Enter the first row of
elements in the matrix, separating the elements of the matrix with the comma (,) character. To
finish a row, enter a semicolon (;) character, and continue typing the elements of the next row.
Continue until the matrix is complete. Then press ] and Enter.
Figure: Entering a matrix and a vector in Calc. The matrix was entered with the key sequence '[a,b,c,d;e,f,g,h;i,j,k,l;m,n,o,p] followed by Enter; the vector was entered with the key sequence '[x;y;z;w] followed by Enter.
n Simplifying equations. Calc offers many commands to simplify algebraic expressions. The
most important commands are the simplify and normalize-rational commands. The simplify
command tries to simplify the expression, using rules which ordinarily might be too slow to
apply. For instance, simplify combines non-adjacent sums of terms: a+b+2a is simplified to
b+3a. The normalize-rational command attempts to rearrange a formula to be the quotient of
two polynomials. For instance, a+b/c+d/e is simplified to (ace+be+cd)/(ce). Invoke these
commands as follows. Type a to begin entering an algebraic command. Then, to apply the
simplify command, type S; to apply the normalize-rational command, type n. The simplifica-
tion occurs on the topmost element of the stack.
n Collecting terms. The collect command of Calc rearranges a formula as a polynomial in a
given variable. For instance, if we have the expression ax+bx+cx, then collecting the x terms
yields the expression (a+b+c)x. Invoke the collect command as follows. Type a to begin
entering an algebraic command, then type c. Notice the prompt “Collect terms involving:”
which appears at the bottom of the screen. Type the name of the term which should be col-
lected, and press Enter.
n Selections. Often, it is useful to operate on just one small part of an expression, rather than the
expression as a whole. Calc allows us to define a selection, which is simply a smaller portion
of an expression. After defining a selection, most Calc commands then operate only on the
selection. Most commonly, after a selection you will apply one of the simplification com-
mands (a, S or a, n) or the edit command (`, the backquote character). This allows you to
simplify or edit a smaller portion of the entire expression. Apply a selection within Calc as
follows. Move the cursor to be on top of the beginning of the subexpression that you want to
select. Type j to enter selection mode. Then, type one of the following keys to define a
selection: s for the smallest subexpression at the cursor position, m for more of the current
subexpression, n for the next subexpression after the current selection, or c to cancel selection
mode and select the entire formula. After defining a selection, notice that the rest of the equa-
tion is replaced with dot characters to indicate that these parts of the expression are tempo-
rarily inactive.
Now that we know some of the more important tools that Calc offers for editing and simplifying
expressions, let’s move on to the actual practical reality of solving systems of equations within
Calc.
1. Enter the system of equations onto the stack as a vector of equations.
2. Duplicate the system on the stack by pressing Space, so that the original system remains
available for the next solution pass.
3. Solve the duplicated system with one particular variable, say u, as the last one. Type a, S, type
[v,w,u] and press Enter.
4. The solution for all three variables appears, but only the solution for the last variable u is suffi-
ciently simple to be understandable. Edit the solution: press ` (backquote). Delete
everything before and after the last solution. Press Ctrl+c twice to accept your changes. Now,
only the solution for the last variable remains on top of the stack.
5. Press Tab to exchange the top two entries of the stack. The system of equations now appears
on top of the stack, and the solution for one variable beneath it.
6. Repeat again starting at step 2, but with a different variable, say v, as the last one. Continue
until all variables have been solved for as the last variable in the solution vector.
This procedure gives you the best results from the equation solver. But often, even if a variable
was solved for as the last variable in the solution vector, the resulting equation can still be simpli-
fied further with the simplify and/or normalize-rational commands. In particular, it is often helpful
to simplify only parts of the equations at a time. Do this with the selection command. Select a rea-
sonably small subexpression, consisting of, say, four to five terms. First simplify the expression by
typing a, S, then apply the normalize-rational command by typing a, n. Then, select more of the
expression with j, m, and continue to simplify. If Calc ever fails to simplify, simply undo the oper-
ation, select a smaller subexpression, and try again. If even that fails, and you are convinced that
the solution can be simplified further, you need to help Calc out by making common terms easier
to recognize. Most often, common-but-negated terms will appear scattered throughout the expres-
sion. By editing the expression and judiciously negating terms so that no mathematical change in
value is effected (for instance, by negating both numerator and denominator of a fractional expres-
sion), it is often possible to make similar-but-negated terms exactly identical. After doing this, try
simplifying again; Calc will then usually recognize the common terms which you made explicit,
and simplify further.
To summarize, when solving systems of equations in Calc, you should solve the system multi-
ple times, each time with a different variable as the last one in the solution vector. Use the
duplicate command to avoid needing to enter the system of equations multiple times. Simplify the
resulting solutions a piece at a time by using selections, the simplify command, and the normal-
ize-rational command.
After solving the system with a, S followed by [v,w,u], the Calc window appears as in Figure
2-33. Notice the complexity of the solutions for v and w, and the relative simplicity of the solution
for u.
We then edit the expression to remove everything but the solution for the last variable, u. After
doing this, the Calc window appears as in Figure 2-34.
We can simplify the solution for u further. A naive attempt to apply the simplify and normal-
ize-rational commands to the entire expression at once results in Calc reporting a “division by 0”
error. But the expression can be simplified. By simplifying and normalizing the equation piece for
piece using selections, and by collecting terms for x, y, z, ux, uy, and uz, the expression finally sim-
plifies to the form shown in Figure 2-35. I encourage you to try to arrive at the same result yourself
by using Calc.
By repeating the solution procedure for variables v and w, we arrive at the following simpli-
fied solutions for u, v, and w.
Equation 2-23
These equations are the solution to the world-to-texture-space problem. Given variables x, y, and z
representing world space coordinates, and a texture space definition as specified by the O point
and vectors U, V, and W, these equations give us the equivalent coordinates u, v, and w in texture
space. As you can see, the solution is not completely intuitive, and would have been rather tedious
to derive by hand.
There is still more structure to be discovered in these equations. Notice the recurring pairs of
differences which appear: (vz wy - vy wz), (vx wz - vz wx), and so forth. These terms are actually
components of cross product vectors. In particular, the solution for u contains the components of
the cross product vector V × W; the solution for v contains the components of U × W; the solution for
w contains the components of V × U. The denominator in each case is U·(V × W). Making this sim-
plification, and recasting the equation into matrix form, gives us Equation 2-24. The subscripts on
each cross product expression mean to take the corresponding component of the vector after the
cross product operation. For instance, (V × W)x means to take the cross product of vectors V and W,
and to use the x component of the resulting vector.
Equation 2-24
Verify that this matrix form of the equation, using cross products, is the same as the Calc solution
by performing the matrix multiplication, expanding the cross products, and dividing each compo-
nent of the resulting solution vector (u, v, and wtex) by the homogeneous component whom. We
have subscripted the w texture component and the homogeneous w divisor with tex and hom to dis-
tinguish their separate functions; we use the homogeneous whom component to effect a division
on all terms, since the solution equations for u, v, and w, as shown by Calc, all have a common
denominator.
Therefore, the matrix in Equation 2-24 converts from world space coordinates to texture
space coordinates; by multiplying a column vector in world coordinates with the matrix, we obtain
a new column vector with the texture coordinates. Since this is a useful operation, we have encap-
sulated it into an l3d class: class l3d_texture_computer. Any class needing the ability to
perform texture related computations inherits from this class. The method
world_to_tex_matrix takes an object of type l3d_texture_space as a parameter, and returns the matrix,
which converts coordinates from world coordinates into the texture space of the given
l3d_texture_space object. The member variables fovx, fovy, screen_xsize, and
screen_ysize are pointers to external integers specifying the horizontal and vertical screen
size and field of view terms. We need these variables later within the Mesa 3D rasterizer to per-
form a reverse projection.
Remember that with 2D textures, we generally disregard the third texture coordinate. This
means that when converting from world space to texture space, we can discard the third compo-
nent of the resulting vector; only the first two components, u and v, are relevant for 2D image
textures.
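The application of such a world-to-texture matrix to one world space vertex might be sketched as follows with plain floats (illustrative code, not the l3d implementation): the division by the homogeneous component and the discarding of the third component follow the discussion above.

struct Vec4 { float x, y, z, h; };

// Apply a world-to-texture-space matrix (row-major m[4][4]) to a world space
// point and return the 2D texture coordinates. The third component is
// discarded for 2D textures; the fourth (homogeneous) component effects the
// common division described in the text.
void world_to_uv(float m[4][4], const Vec4 &world, float &u, float &v) {
  float tu = m[0][0]*world.x + m[0][1]*world.y + m[0][2]*world.z + m[0][3]*world.h;
  float tv = m[1][0]*world.x + m[1][1]*world.y + m[1][2]*world.z + m[1][3]*world.h;
  float th = m[3][0]*world.x + m[3][1]*world.y + m[3][2]*world.z + m[3][3]*world.h;
  u = tu / th;
  v = tv / th;
}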
Now we know how to convert from texture space to world space, and from world space to tex-
ture space. This is an important step in texture mapping; with this understanding, the relationship
between the texture space and world space is clearly defined, and thus the relationship between the
texture image and the world space is clearly defined. The next step is to understand the relation-
ship between screen space and texture space.
We start with the equations converting from texture space to world
space, then convert to screen space with a perspective projection. Then, we take the resulting sys-
tem of equations, converting from texture space to screen space, and solve it for the texture space
variables, thus resulting in equations converting from screen space to texture space. Again, we
rely heavily on Calc’s help to save us from the tedium of solving the equations by hand.
Let’s start with the texture space to world space matrix, multiplied with a column vector rep-
resenting a location in texture space. The result is a location in world space.
Equation 2-25
Equation 2-26
We have said that for 2D texture images, we don’t use the w texture component, because there is
no color variation along the W axis in texture space. Therefore, the value of w does not affect the
color we obtain from the texture image; any value of w can be used in the equation. The simplest
value of w is 0, which causes the w term to conveniently disappear. After replacing w with 0, we
obtain:
Equation 2-27
We can rewrite this from matrix form into a system of linear equations.
Equation 2-28
Now, let’s apply a simple perspective projection. We do this by dividing x and y by z to yield xp
and yp (the subscript p stands for projected), resulting in Equation 2-29. This simple projection
incorrectly ignores the effect of the field of view parameters and the reversal of the y axis orienta-
tion which actually occurs with the real perspective projection. We can use Equation 2-30 to
convert from the real screen projected coordinates xs and ys, which include the field of view and y
reversal, into the simple projected coordinates xp and yp. Since it is easy to convert between the
real and the simple projected coordinates, and since the math is less tedious if we use the simple
projected coordinates, the rest of the derivation for the texture mapping equations uses xp and yp.
Any general observations about the variables xp and yp apply equally to the variables xs and ys.
Let’s now rearrange the first and second equations in Equation 2-29 to move the z to the left-hand
side of the equations. This yields:
Equation 2-31
This is the system of equations we want to solve. First, let us consider what the equations in their
current form tell us, before solving the equations for other variables. In their current form, this sys-
tem of equations assumes we know the texture coordinates u and v. The equations then return three
values: the screen x coordinate multiplied by the world z coordinate, the screen y coordinate multi-
plied by the world z coordinate, and the world z coordinate. Notice that the 3D world space z
coordinate affects the first two equations. The reason is the perspective projection, which divides
by z. The texture space to world space matrix, developed earlier, relates texture space to non-pro-
jected 3D world coordinates. Therefore, to relate texture space to projected 2D screen
coordinates—which is the main goal, remember—we must “reverse project” the screen coordi-
nates by multiplying them with the world space z coordinate. Thus, the 3D z coordinate, due to the
perspective projection process, plays an important role in texture mapping.
Now that we understand what this system of equations tells us, let’s solve the system for the
variables we want to know. We wish to find the texture coordinates u and v. Since the 3D z coordi-
nate also plays a role here, it is also an unknown. So we solve the system for variables u, v, and z.
Do this by using Calc as described in the previous section, solving the system multiple times, each
time with a different variable as the last variable in the solution vector. After solving the system in
this way and simplifying the results, the final equations are as follows.
Equation 2-32
This is an important equation for texture mapping. It tells us explicitly, for a given projected coor-
dinate on-screen, the corresponding (u,v) coordinates in the texture space.
Notice that the u, v, and z variables are non-linear functions of the projected coordinates xp,
and yp. Non-linear means that a change in xp or yp does not always result in the same amount of
change in the u, v, or z variables. Mathematically, you can recognize this because the independent
variables xp and yp appear in both the numerator and denominator of a fraction, meaning that a
constant change in xp or yp leads to a non-constant change in the dependent variables u, v, or z.
Changing the denominator of a fraction by a constant amount leads to a non-constant change in the
final value; the difference between 1/2 and 1/3 is much greater than the difference between
1/10002 and 1/10003, though the denominator changes by a constant amount of one in both cases.
Geometrically, you can understand this non-linearity by examining Figure 2-36. When a polygon
is tilted away from the projection plane—which is essentially our computer screen—a movement
of one pixel on the screen corresponds to a non-constant movement on the polygon in 3D. This
non-linearity is a fundamental property of the perspective projection process and causes (or
indeed, is) the foreshortening which we observe in ordinary visual perception.
A further observation about Equation 2-32 is that the numerator and denominator of each
term, taken separately, are indeed linear with respect to xp and yp; it is the result, dividing numera-
tor by denominator, which is non-linear.
Linearity of an equation with respect to xp and yp (and therefore, also with respect to xs and
ys) is important because it allows us to compute the value at a particular screen location based on
the value at a previous screen location. We used the same idea when developing the incremental
algorithm for line drawing: the value for the next pixel could be calculated from the value of the
current pixel. With regard to the texture mapping equations, this means that we can compute an
initial value for the numerator and denominator based on the starting values of xp and yp for the
first pixel of the texture mapped polygon we are drawing. Then, we can incrementally compute
the numerator and denominator (requiring just one addition operation) for all other pixels in the
polygon. We must perform the division, however, to obtain the u and v coordinates.
Recall that the texture image exists within the square in texture space defined by corner points (0,0) and (1,1). Outside of
the square (0,0) to (1,1)—that is, for u and v values greater than one or less than zero—we simply
regard the texture as being duplicated.
The coordinates within the texture map image are integer coordinates from 0 to width–1 hori-
zontally and from 0 to height–1 vertically, where width and height are both powers of two. So to
map from the real-valued (u,v) to the indices within the texture map, we simply multiply u by
width and v by height, yielding integer coordinates (i,j) within the texture map image. If u or v
were greater than one or less than zero, this could lead to integer texture map indices lying outside
of the texture image; to correct this and allow for repetition of the texture, we simply perform a
logical bit-wise AND of i and j with width–1 and height–1. This has the effect of causing any val-
ues greater than width or height to “wrap around” back to zero again. Notice that this only works
because width and height are powers of two; with powers of two, the AND operation is essentially
a modulo operation, which gives us the value of the remainder after a division. Also, note that this
wrap-around behavior helps compensate for the fact that pixels slightly outside of the polygon
might be drawn due to numerical inaccuracy inherent to the DDA polygon drawing algorithm. If a
pixel lies outside of the polygon, then a computed texture coordinate will lie outside of the texture
image, but the wrap-around behavior allows us still to retrieve a valid texel from the texture.
Instead of wrapping around, we could also clamp the values to force them to lie within the texture
image.
Given the integer coordinates (i,j) within the texture image, we simply use the color at this
location in the image (stored within the l3d_texture_data class) for drawing the current
pixel of the texture mapped polygon being rasterized on-screen.
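A sketch of this mapping from real-valued (u,v) to integer texel indices, assuming power-of-two texture dimensions (illustrative; not the l3d rasterizer code):

// Map real-valued (u, v) texture coordinates to integer texel indices (i, j).
// width and height must be powers of two so that the bitwise AND acts as a
// cheap modulo, wrapping out-of-range indices back into the texture.
void uv_to_texel(float u, float v, int width, int height, int &i, int &j) {
  // the cast truncates; values that wander slightly outside the polygon
  // still wrap into range via the AND
  i = ((int)(u * width))  & (width  - 1);
  j = ((int)(v * height)) & (height - 1);
}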
Equation 2-33
In this form, we can see that the u/z, v/z, and 1/z terms are all linear functions of xp and yp. This
means that calculating the next value of u/z, v/z, or 1/z for the next pixel location can be done with
a simple addition to the value at the current pixel location. Furthermore, by dividing u/z by 1/z, we
arrive at the desired texture coordinate u; similarly, by dividing v/z by 1/z, we arrive at v.
One relevant question is in which coordinate space z should be. Two obvious choices come to
mind: the world space z coordinate or the camera space z coordinate. The best coordinate space to
use for the z value is the camera space z coordinate. This is because the world space z coordinate
can vary wildly; its values can be extremely large, extremely small, positive, or negative; the
world z coordinate depends on the location of the object within world space, which can be com-
pletely arbitrary. On the other hand, the camera space z coordinate is a little more restrictive; it
cannot be negative, has a minimum value (the near z clipping plane), and has a maximum value
beyond which objects are too far from the camera to be seen. These additional restrictions on the
possible values of the camera space z coordinate make it more amenable for use within the texture
mapping computations.
The linearity of u/z, v/z, and 1/z suggests to us the following texture mapping strategy.
1. Define a texture and a texture space.
2. For each polygon, compute u, and v for each vertex index of the polygon, using the world
space to texture space matrix. Store these u and v values in the new tex_coord member of
the derived l3d_polygon_ivertex_textured class, covered in the next section.
3. After transforming polygons into camera space, also save the camera space z coordinate in the
tex_coord member of l3d_polygon_ivertex_textured.
4. During rasterization, compute u/z, v/z, and 1/z for each vertex index of the polygon.
5. While stepping vertically between scanlines and horizontally between pixels, interpolate the
u/z, v/z, and 1/z values. Do this by maintaining two sets of variables, one set for the left side
and one set for the right side of the polygon. The left side of the polygon is always being
drawn from a starting left vertex to an ending left vertex; the same for the right side of the
polygon. Each vertical scanline step moves the current pixel from the starting vertex closer to
the ending vertex. Thus, we also move from the starting u/z to the ending u/z; similarly for v/z
and 1/z. This means that for each scanline, we always have a current u/z, v/z, and 1/z value for
the left side and the right side of the polygon. Then, for each pixel within the scanline, we
interpolate from the left u/z, v/z, and 1/z values to the right u/z, v/z, and 1/z values. This gives
us a u/z, v/z, and 1/z value for every pixel of the polygon we rasterize. See Figure 2-37.
6. Before drawing each pixel, divide the current u/z by 1/z, and divide v/z by 1/z. This returns the
u and v values for the current pixel. Scale these to integer texture map coordinates, extract the
appropriate pixel color from the texture map, and draw the current pixel with the color from
the texture map.
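As an illustration of steps 5 and 6, the inner loop for one horizontal span could be sketched as follows. This sketch uses floating-point values and performs the division for every pixel; the actual l3d rasterizer uses fixed-point math and 16-pixel runs, as we see later. The helper put_pixel and the texture parameters are illustrative only.
void put_pixel(int x, int y, unsigned long color); //- hypothetical plotting helper, supplied elsewhere

//- Sketch of one horizontal span: interpolate u/z, v/z, and 1/z from the left edge to the
//- right edge, recover u and v per pixel, and fetch the texel (floating point for clarity).
void draw_textured_span(int y, int left_x, int right_x,
  float left_ui, float left_vi, float left_zi,    //- u/z, v/z, 1/z at the left edge
  float right_ui, float right_vi, float right_zi, //- u/z, v/z, 1/z at the right edge
  const unsigned char *tex, int tw, int th)       //- power-of-two texture, one byte per texel
{
  int span = right_x - left_x;
  if (span <= 0) return;
  float ui = left_ui, vi = left_vi, zi = left_zi;
  float dui_dx = (right_ui - ui) / span;          //- per-pixel increments
  float dvi_dx = (right_vi - vi) / span;
  float dzi_dx = (right_zi - zi) / span;
  for (int x = left_x; x < right_x; x++) {
    float z = 1.0f / zi;                          //- recover z from 1/z
    int i = (int)(ui * z * tw) & (tw - 1);        //- scale u to a texel column and wrap
    int j = (int)(vi * z * th) & (th - 1);        //- scale v to a texel row and wrap
    put_pixel(x, y, tex[j*tw + i]);               //- draw the texel color
    ui += dui_dx; vi += dvi_dx; zi += dzi_dx;     //- linear step in u/z, v/z, 1/z
  }
}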
v in screen coordinates. However, they will not be “too far” from the original values, since we
know that the endpoints of the run are correct. The smaller the runs, the less noticeable the error,
but the larger the runs, the fewer divisions we must perform and the better the overall performance
of the texture mapping code.
Figure 2-38: Dividing a horizontal
row of pixels into runs. We only
compute the true u and v values
at the endpoints of the run. In
between the endpoints, we
linearly interpolate.
We’ve now covered the theory we need to perform texture mapping. The approach imple-
mented in l3d is the vertex-level approach, interpolating u/z, v/z, and 1/z. Let’s now look at the new
polygon class needed to implement this texture mapping strategy.
that the vertex index lists all now contain the extended vertex indices with texture coordinates.
Therefore, all vertex index lists can store texture coordinates. Only after replacing the factory does
the constructor then call the init method, which then creates the vertex index lists using the new
factory. This is also why the empty base class constructors are called, to prevent the base class
constructors from creating the lists before the new factory is in place. Class l3d_poly-
gon_3d_textured also declares a copy constructor and a clone method, as usual.
The member variable texture stores the texture orientation and data.
The method assign_tex_coords_from_tex_space takes a texture space as a
parameter, computes the texture coordinates based on the world space to texture space matrix, and
stores the resulting texture coordinates with each vertex index in the ivertices list. Notice that
we only use the first and second (u and v) components of the vector, since the third texture coordi-
nate is unimportant for 2D textures.
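Conceptually, the computation is a matrix-point product followed by discarding the third component. The following sketch shows the idea with generic stand-in types; vec4, mat4, and the function names are illustrative, not the l3d declarations:
struct vec4 { float x, y, z, w; };
struct mat4 { float m[4][4]; };

//- Generic matrix * point product.
vec4 mul(const mat4 &t, const vec4 &p) {
  vec4 r;
  r.x = t.m[0][0]*p.x + t.m[0][1]*p.y + t.m[0][2]*p.z + t.m[0][3]*p.w;
  r.y = t.m[1][0]*p.x + t.m[1][1]*p.y + t.m[1][2]*p.z + t.m[1][3]*p.w;
  r.z = t.m[2][0]*p.x + t.m[2][1]*p.y + t.m[2][2]*p.z + t.m[2][3]*p.w;
  r.w = t.m[3][0]*p.x + t.m[3][1]*p.y + t.m[3][2]*p.z + t.m[3][3]*p.w;
  return r;
}

//- For each vertex of the polygon, transform the world space position into texture space
//- and keep only the first two components as (u,v); the third is ignored for 2D textures.
void assign_uv(const mat4 &world_to_tex, const vec4 *world_verts, int n, float *u, float *v) {
  for (int i = 0; i < n; i++) {
    vec4 t = mul(world_to_tex, world_verts[i]);
    u[i] = t.x;   //- u
    v[i] = t.y;   //- v
  }
}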
The overridden methods init_transformed and transform perform the same trans-
formation functions as in the parent class l3d_polygon_3d, but have been extended to also
initialize and transform the O, U, and V member variables. This means that any transformation
of the polygon also transforms the polygon's associated texture space.
The overridden method after_camera_transform copies the current camera space z
coordinate of each vertex into the corresponding vertex index object. We store the z coordinate in
the tex_coord.Z_ element of the vertex index. This means that the tex_coord.X_ and
tex_coord.Y_ elements are the (u,v) texture coordinates for the vertex index; the
tex_coord.Z_ element is not the third texture coordinate w, which we don’t need, but is
instead the camera space z value, which we do need.
The overridden methods clip_segment_to_near_z and clip_near_z perform the
same functions as in the parent class l3d_polygon_3d, but have been extended also to clip the
u and v coordinates against the near z plane. Near z clipping has the effect of finding edges which
cross the near z plane, and clipping these edges to become shorter so that they do not recede
beyond the near z plane. This means that the 3D location of the endpoints of the clipped edges
changes. Since we have now associated texture coordinates with the endpoints of these edges,
clipping off the endpoints and replacing them with new endpoints means that we also must clip the
texture coordinates and assign them to the new clipped endpoint. This is very simple; during nor-
mal clipping of the geometry, we already compute a time parameter t indicating the “time” of the
intersection, which is a relative value indicating the percentage displacement between the starting
point and the ending point. We multiply the time parameter with the x, y, and z displacements to
find the exact location of the clipped point. To clip the u and v values, we also use the same time
parameter, and multiply it with the u displacement and v displacement to obtain the (u,v) coordi-
nates at the new clipped point.
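A minimal sketch of this idea follows, assuming one edge with a vertex on each side of the near z plane; the struct and function names are illustrative, not the l3d interface:
//- Clip one edge against the near z plane, carrying the (u,v) texture coordinates along
//- with the same interpolation parameter t used for the geometry.
struct tex_vertex { float x, y, z, u, v; };

tex_vertex clip_edge_to_near_z(const tex_vertex &a, const tex_vertex &b, float near_z)
{
  float t = (near_z - a.z) / (b.z - a.z);   //- "time" of intersection along the edge
  tex_vertex out;
  out.x = a.x + t*(b.x - a.x);              //- the geometry is interpolated...
  out.y = a.y + t*(b.y - a.y);
  out.z = near_z;                           //- ...and lands exactly on the near plane
  out.u = a.u + t*(b.u - a.u);              //- u and v reuse the very same t, since they
  out.v = a.v + t*(b.v - a.v);              //- are linear in 3D space
  return out;
}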
Similarly, the overridden methods clip_segment_to_edge_2d and clip_to_
edge_2d have been extended not only to clip the polygon in 2D, but also to clip the texture coor-
dinates in 2D. The only tricky part is that we cannot clip the u and v values directly in 2D; instead,
we must compute the u/z, v/z, and 1/z values, clip these by using the same time parameter t used to
clip the geometry, then divide u/z and v/z by 1/z to recover the clipped u and v values. This is differ-
ent than the near z clipping case; near z clipping occurs in 3D, where the u and v values have a
linear relationship with the 3D x, y, and z values, as evidenced by the linear equations in the world
to texture space matrix. But for 2D clipping, the clipping occurs in 2D screen coordinates; the u
and v values do not have a linear relationship to the screen coordinates, but the u/z, v/z, and 1/z val-
ues are linear in screen space. This is why we clip these values, and not the original u and v values,
in 2D.
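The following sketch illustrates the difference for 2D clipping: we interpolate u/z, v/z, and 1/z with the same parameter t used to clip the 2D geometry, and only afterwards divide to recover u and v. The types and names are again illustrative:
//- Given the two endpoints of a crossing edge in screen space, their (u,v) coordinates and
//- camera space z values, and the clip parameter t, compute (u,v) at the new 2D clip point.
struct screen_tex_vertex { float x, y, u, v, z; };

void clip_uv_in_2d(const screen_tex_vertex &a, const screen_tex_vertex &b, float t,
                   float &u_out, float &v_out)
{
  float a_zi = 1.0f / a.z, b_zi = 1.0f / b.z;      //- 1/z at both endpoints
  float ui = a.u*a_zi + t*(b.u*b_zi - a.u*a_zi);   //- interpolate u/z, not u
  float vi = a.v*a_zi + t*(b.v*b_zi - a.v*a_zi);   //- interpolate v/z, not v
  float zi = a_zi + t*(b_zi - a_zi);               //- interpolate 1/z
  u_out = ui / zi;                                 //- divide to recover the clipped u
  v_out = vi / zi;                                 //- and the clipped v
}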
The method clip_to_plane, which clips the polygon against an arbitrary plane in 3D
(contrast with the 2D clipping of the previous paragraph), is also overridden to clip the texture
coordinates in 3D in addition to the geometry. Since there is ordinarily a linear relationship
between texture space and 3D world space (unless we manually assign texture coordinates; see
Chapter 3), we directly clip the u and v values in 3D, in contrast to the u/z, v/z, and 1/z clipping nec-
essary in 2D. As we saw earlier, after clipping the crossing polygon segments in 3D against the
clipping plane, the clipping plane stores an intersection_t parameter representing the per-
centage displacement of the new clipped point along the path from the start point in the crossing
segment to the end point. To clip the u and v values in 3D, we simply use this same intersec-
tion_t parameter to compute new u and v values as the same percentage displacement along the
path from the starting (u,v) values at the first vertex in the crossing segment to the ending (u,v) val-
ues at the end vertex in the crossing segment. In other words, to perform clipping in 3D, we simply
linearly interpolate the (u,v) values just as we do with the world coordinate locations.
The overridden method draw causes the polygon to draw itself; the polygon, in turn, asks its
rasterizer to draw the polygon. This means that all existing code which uses polygons—such as
the entire 3D object class—can handle texture mapped polygons with no change; the virtual call to
the draw routine always gets bound at run time to the correct polygon-specific drawing routine.
Now, let’s look at the rasterization classes that actually draw the polygons on-screen.
#include "rasteriz.h"
#include "../tool_2d/scrinfo.h"
#include "../geom/polygon/p3_flat.h"
#include "../geom/texture/texture.h"
class l3d_polygon_3d_textured;
class l3d_polygon_3d_textured_lightmapped;
class l3d_rasterizer_3d_imp :
virtual public l3d_rasterizer_2d_imp,
virtual public l3d_texture_computer
{
public:
l3d_rasterizer_3d_imp(int xs, int ys, l3d_screen_info *si);
public:
l3d_rasterizer_3d(l3d_rasterizer_2d_imp *i2,
l3d_rasterizer_3d_imp *i3) :
l3d_rasterizer_2d(i2)
{
imp3d = i3;
}
virtual ~l3d_rasterizer_3d(void) {}
/* virtual */ void draw_line( int x0 , int y0 , int x1 , int y1 , unsigned long col ) {
imp3d->draw_line(x0,y0,x1,y1,col);
}
};
class l3d_rasterizer_3d_imp_factory {
public:
virtual l3d_rasterizer_3d_imp *create(int xs, int ys, l3d_screen_info *si)=0;
};
#ifndef __ACTIVE__P3_TEX_H
#ifndef __ACTIVE__P3_CLIP_H
#include "../geom/polygon/p3_tex.h"
#include "../geom/polygon/p3_ltex.h"
#endif
#endif
#endif
l3d_rasterizer_3d_imp::l3d_rasterizer_3d_imp
(int xs, int ys, l3d_screen_info *si) :
l3d_rasterizer_2d_imp(xs,ys,si)
{}
void l3d_rasterizer_3d_imp::draw_polygon_textured
(const l3d_polygon_3d_textured *p_poly)
{}
void l3d_rasterizer_3d_imp::draw_polygon_textured_lightmapped
(const l3d_polygon_3d_textured_lightmapped *p_poly)
{}
The member variable imp3d is a pointer to a 3D rasterizer implementation object, which actually
carries out the rasterizer’s drawing requests. The 3D rasterizer implementation object is of
(abstract) type l3d_rasterizer_3d_imp, and offers the new methods draw_poly-
gon_textured and draw_polygon_textured_lightmapped. These methods are
empty in the abstract l3d_rasterizer_3d_imp class; they are overridden in the actual con-
crete 3D rasterizer implementation (see next section).
drawing capabilities. This is exactly what these virtual text functions do; they are overridden in the
concrete rasterizer implementation subclasses covered below.
The main difference between the previous flat-shaded rasterization code and the current tex-
ture mapping code is that now, while stepping along the edges vertically and the span horizontally,
we must also keep track of the current u/z, v/z, and 1/z values. We do this using two sets of vari-
ables, as mentioned earlier: one for the left edge of the polygon and one for the right edge of the
polygon. Recall, the polygon drawing code works by searching simultaneously for edges of
non-zero height on the left and right sides of the polygon, and by drawing horizontal spans of
pixels from top to bottom between the left and right edges. With texture mapping, as soon as we
find an edge to be drawn for the left or the right side, we immediately make note of the u/z, v/z, and
1/z values for both the starting vertex and the ending vertex of the edge. Then, we compute three
delta values indicating how much the u/z, v/z, and 1/z values change from one scanline to the next
by dividing the total change by the height of the edge. In this manner, we always know the u/z, v/z,
and 1/z values at the left and right sides of the current scanline. Then, within the scanline, we draw
one horizontal span of pixels from the left edge to the right edge. Based on the left and right u/z,
v/z, and 1/z values, and the length of the horizontal span in pixels, we can interpolate the values
from the left edge to the right edge. This gives us a u/z, v/z, and 1/z value for each pixel. We then
simply divide u/z by 1/z and v/z by 1/z to obtain the u and v values, scale these real-valued u and v
values to integer texture map coordinates, retrieve the pixel from the texture map, and draw the
pixel on-screen. We actually divide the pixels into horizontal runs of 16 pixels, only computing the
u and v values every 16th pixel and linearly interpolating the u and v values between the ends of
the run.
See Figure 2-40 for a summary of the new variables needed while rasterizing texture mapped
polygons. Notice that this is simply an extension of the flat-shaded case; we keep track of addi-
tional variables at the starting and ending vertices of the left and right edges, interpolate these
values vertically between scanlines to obtain values for the left and right edges of the scanline, and
also interpolate these values horizontally within the scanline between the left and right values of
the current scanline.
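For reference, setting up the per-scanline increments for one edge could be sketched as follows (floating point, with illustrative names; the actual code below does this in fixed point for both the left and the right edge):
//- Per-edge interpolation state: current values at the current scanline, plus the amount
//- by which each value changes per scanline.
struct edge_interp {
  float x, ui, vi, zi;                    //- x, u/z, v/z, 1/z at the current scanline
  float dx_dy, dui_dy, dvi_dy, dzi_dy;    //- per-scanline increments
};

edge_interp setup_edge(float x0, float ui0, float vi0, float zi0,
                       float x1, float ui1, float vi1, float zi1, float height)
{
  edge_interp e;
  e.x = x0; e.ui = ui0; e.vi = vi0; e.zi = zi0;
  e.dx_dy  = (x1 - x0) / height;          //- total change divided by the edge height
  e.dui_dy = (ui1 - ui0) / height;
  e.dvi_dy = (vi1 - vi0) / height;
  e.dzi_dy = (zi1 - zi0) / height;
  return e;
}
//- per scanline: e.x += e.dx_dy; e.ui += e.dui_dy; e.vi += e.dvi_dy; e.zi += e.dzi_dy;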
#include "ras_sw.h"
#include "rast3.h"
#include "math.h"
#include "../system/sys_dep.h"
class l3d_rasterizer_3d_sw_imp :
virtual public l3d_rasterizer_2d_sw_imp,
virtual public l3d_rasterizer_3d_imp
{
protected:
void compute_surface_orientation_and_size(void);
unsigned char *text_color;
public:
l3d_rasterizer_3d_sw_imp(int xs, int ys, l3d_screen_info *si);
virtual ~l3d_rasterizer_3d_sw_imp(void);
};
class l3d_rasterizer_3d_sw_imp_factory :
public l3d_rasterizer_3d_imp_factory
{
public:
l3d_rasterizer_3d_imp *create(int xs, int ys, l3d_screen_info *si);
};
#endif
{0x00, 0x00, 0x7e, 0xe7, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xe7, 0x7e},
{0x00, 0x00, 0xfc, 0xce, 0xc7, 0xc3, 0xc3, 0xc3, 0xc3, 0xc3, 0xc7, 0xce, 0xfc},
{0x00, 0x00, 0xff, 0xc0, 0xc0, 0xc0, 0xc0, 0xfc, 0xc0, 0xc0, 0xc0, 0xc0, 0xff},
{0x00, 0x00, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xfc, 0xc0, 0xc0, 0xc0, 0xff},
{0x00, 0x00, 0x7e, 0xe7, 0xc3, 0xc3, 0xcf, 0xc0, 0xc0, 0xc0, 0xc0, 0xe7, 0x7e},
{0x00, 0x00, 0xc3, 0xc3, 0xc3, 0xc3, 0xc3, 0xff, 0xc3, 0xc3, 0xc3, 0xc3, 0xc3},
{0x00, 0x00, 0x7e, 0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 0x7e},
{0x00, 0x00, 0x7c, 0xee, 0xc6, 0x06, 0x06, 0x06, 0x06, 0x06, 0x06, 0x06, 0x06},
{0x00, 0x00, 0xc3, 0xc6, 0xcc, 0xd8, 0xf0, 0xe0, 0xf0, 0xd8, 0xcc, 0xc6, 0xc3},
{0x00, 0x00, 0xff, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0},
{0x00, 0x00, 0xc3, 0xc3, 0xc3, 0xc3, 0xc3, 0xc3, 0xdb, 0xff, 0xff, 0xe7, 0xc3},
{0x00, 0x00, 0xc7, 0xc7, 0xcf, 0xcf, 0xdf, 0xdb, 0xfb, 0xf3, 0xf3, 0xe3, 0xe3},
{0x00, 0x00, 0x7e, 0xe7, 0xc3, 0xc3, 0xc3, 0xc3, 0xc3, 0xc3, 0xc3, 0xe7, 0x7e},
{0x00, 0x00, 0xc0, 0xc0, 0xc0, 0xc0, 0xc0, 0xfe, 0xc7, 0xc3, 0xc3, 0xc7, 0xfe},
{0x00, 0x00, 0x3f, 0x6e, 0xdf, 0xdb, 0xc3, 0xc3, 0xc3, 0xc3, 0xc3, 0x66, 0x3c},
{0x00, 0x00, 0xc3, 0xc6, 0xcc, 0xd8, 0xf0, 0xfe, 0xc7, 0xc3, 0xc3, 0xc7, 0xfe},
{0x00, 0x00, 0x7e, 0xe7, 0x03, 0x03, 0x07, 0x7e, 0xe0, 0xc0, 0xc0, 0xe7, 0x7e},
{0x00, 0x00, 0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 0xff},
{0x00, 0x00, 0x7e, 0xe7, 0xc3, 0xc3, 0xc3, 0xc3, 0xc3, 0xc3, 0xc3, 0xc3, 0xc3},
{0x00, 0x00, 0x18, 0x3c, 0x3c, 0x66, 0x66, 0xc3, 0xc3, 0xc3, 0xc3, 0xc3, 0xc3},
{0x00, 0x00, 0xc3, 0xe7, 0xff, 0xff, 0xdb, 0xdb, 0xc3, 0xc3, 0xc3, 0xc3, 0xc3},
{0x00, 0x00, 0xc3, 0x66, 0x66, 0x3c, 0x3c, 0x18, 0x3c, 0x3c, 0x66, 0x66, 0xc3},
{0x00, 0x00, 0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 0x3c, 0x3c, 0x66, 0x66, 0xc3},
{0x00, 0x00, 0xff, 0xc0, 0xc0, 0x60, 0x30, 0x7e, 0x0c, 0x06, 0x03, 0x03, 0xff}
};
l3d_rasterizer_3d_imp * l3d_rasterizer_3d_sw_imp_factory::create
(int xs, int ys, l3d_screen_info *si)
{
return new l3d_rasterizer_3d_sw_imp(xs,ys,si);
}
l3d_rasterizer_3d_sw_imp::l3d_rasterizer_3d_sw_imp
(int xs, int ys, l3d_screen_info *si)
: l3d_rasterizer_3d_imp(xs,ys,si),
l3d_rasterizer_2d_sw_imp(xs,ys,si),
l3d_rasterizer_2d_imp(xs,ys,si)
l3d_rasterizer_3d_sw_imp::~l3d_rasterizer_3d_sw_imp(void) {
delete [] text_color;
}
void l3d_rasterizer_3d_sw_imp::clear_buffer(void)
{
l3d_rasterizer_2d_sw_imp::clear_buffer();
}
#include "../math/fix_lowp.h"
#if 0
#define l3d_fixed float
#define l3d_real_to_fixed(x) (x)
#define fixfixdiv(a,b) ((a)/(b))
#define fixfixmul(a,b) ((a)*(b))
#define int2fix(a) ( (float)(a))
#define fix2int(a) ( (int)(a))
#define fix2float(a) (a)
#define float2fix(a) (a)
#define iceil_fix(a) ( (int)ceil((double)(a)) )
#endif
void l3d_rasterizer_3d_sw_imp::draw_polygon_textured
(const l3d_polygon_3d_textured *p_poly)
{
l3d_fixed x0,y0,x1,y1,x2,y2,x3;
l3d_fixed top_y, bottom_y;
int point_on_right=0;
int left_idx, right_idx, top_y_idx, bottom_y_idx;
l3d_point *vtemp;
int i;
int scanline;
//- (inter-scanline)
l3d_fixed volatile
left_x, left_ui, left_vi, left_zi,
left_x_start, left_y_start, left_ui_start, left_vi_start, left_zi_start,
left_x_end, left_y_end, left_ui_end, left_vi_end, left_zi_end,
left_dx_dy, left_dui_dy, left_dvi_dy, left_dzi_dy;
int
left_ceilx,
left_ceilx_start, left_ceilx_end,
left_ceily_start, left_ceily_end;
l3d_fixed volatile
right_x, right_ui, right_vi, right_zi,
right_x_start,right_y_start,right_ui_start,right_vi_start,right_zi_start,
right_x_end, right_y_end, right_ui_end, right_vi_end, right_zi_end,
right_dx_dy, right_dui_dy, right_dvi_dy, right_dzi_dy;
int
right_ceilx,
right_ceilx_start, right_ceilx_end,
right_ceily_start, right_ceily_end;
l3d_fixed volatile
u,v,z,
ui, vi, zi, dui_dx, dvi_dx, dzi_dx,
top_y = l3d_real_to_fixed(VTX(0).Y_);
top_y_idx = 0;
bottom_y = top_y;
bottom_y_idx = top_y_idx;
for(i=0; i<p_poly->clip_ivertices->num_items; i++) {
if(l3d_real_to_fixed(VTX(i).Y_) < top_y) {
top_y = l3d_real_to_fixed(VTX(i).Y_);
top_y_idx = i;
}
if(l3d_real_to_fixed(VTX(i).Y_) > bottom_y) {
bottom_y = l3d_real_to_fixed(VTX(i).Y_);
bottom_y_idx = i;
}
}
left_idx = top_y_idx;
right_idx = top_y_idx;
left_x_start=l3d_real_to_fixed(VTX(top_y_idx).X_);
left_ceilx_start=iceil_fix(left_x_start);
left_y_start=l3d_real_to_fixed(VTX(top_y_idx).Y_);
left_ceily_start=iceil_fix(left_y_start);
left_ceily_end=left_ceily_start;
left_zi_start = fixfixdiv(int2fix(Z_MULT_FACTOR),
l3d_real_to_fixed(((l3d_polygon_ivertex_textured *)
& ((*(p_poly->clip_ivertices))[left_idx]))->tex_coord.Z_)
+ float2fix(Z_ADD_FACTOR));
left_zi_end = left_zi_start;
left_ui_start = fixfixmul(
l3d_real_to_fixed(((l3d_polygon_ivertex_textured *)
& ((*(p_poly->clip_ivertices))[left_idx]))->tex_coord.X_),
left_zi_start);
left_ui_end = left_ui_start;
left_vi_start = fixfixmul(
l3d_real_to_fixed(((l3d_polygon_ivertex_textured *)
& ((*(p_poly->clip_ivertices))[left_idx]))->tex_coord.Y_),
left_zi_start);
left_vi_end = left_vi_start;
right_x_start=left_x_start;
right_y_start=left_y_start;
right_ceily_start=left_ceily_start;
right_ceily_end=right_ceily_start;
right_ui_start = left_ui_start;
right_vi_start = left_vi_start;
right_zi_start = left_zi_start;
scanline = left_ceily_start;
int oneshot=1;
left_zi_end = fixfixdiv(int2fix(Z_MULT_FACTOR),
l3d_real_to_fixed(((l3d_polygon_ivertex_textured *)
& ((*(p_poly->clip_ivertices))[left_idx]))->tex_coord.Z_)
+ float2fix(Z_ADD_FACTOR));
left_ui_end = fixfixmul(
l3d_real_to_fixed(((l3d_polygon_ivertex_textured *)
& ((*(p_poly->clip_ivertices))[left_idx]))->tex_coord.X_),
left_zi_end);
left_vi_end = fixfixmul(
l3d_real_to_fixed(((l3d_polygon_ivertex_textured *)
& ((*(p_poly->clip_ivertices))[left_idx]))->tex_coord.Y_),
left_zi_end);
}
else
{
left_dx_dy =
left_dui_dy =
left_dvi_dy =
left_dzi_dy = int2fix(0);
}
}
}
//- if needed, find next right-edge whose ceily_end > current scanline
while(right_ceily_end - scanline <= 0 )
{
if (right_idx == bottom_y_idx) {
return;
}
right_idx = p_poly->next_clipidx_right(right_idx, p_poly->clip_ivertices->num_items);
right_y_end=l3d_real_to_fixed(VTX(right_idx).Y_);
right_ceily_end = iceil_fix(right_y_end);
if(right_ceily_end - scanline) {
right_x_end=l3d_real_to_fixed(VTX(right_idx).X_);
right_ceilx_end = iceil_fix(right_x_end);
right_zi_end = fixfixdiv(int2fix(Z_MULT_FACTOR),
l3d_real_to_fixed(((l3d_polygon_ivertex_textured *)
& ((*(p_poly->clip_ivertices))[right_idx]))->tex_coord.Z_)
+ float2fix(Z_ADD_FACTOR));
right_ui_end = fixfixmul(
l3d_real_to_fixed(((l3d_polygon_ivertex_textured *)
& ((*(p_poly->clip_ivertices))[right_idx]))->tex_coord.X_),
right_zi_end);
right_vi_end = fixfixmul(
l3d_real_to_fixed(((l3d_polygon_ivertex_textured *)
& ((*(p_poly->clip_ivertices))[right_idx]))->tex_coord.Y_),
right_zi_end);
if(right_y_end-right_y_start>=MIN_EDGE_HEIGHT) {
right_dx_dy =
fixfixdiv(right_x_end-right_x_start,right_y_end-right_y_start);
}
else
{
right_dx_dy =
right_dui_dy =
right_dvi_dy =
right_dzi_dy = int2fix(0);
}
}else {
right_x_start = l3d_real_to_fixed(VTX(right_idx).X_);
right_ceilx_start = iceil_fix(right_x_start);
right_y_start = l3d_real_to_fixed(VTX(right_idx).Y_);
right_ceily_start = iceil_fix(right_y_start);
right_zi_start = fixfixdiv(int2fix(Z_MULT_FACTOR),
l3d_real_to_fixed(((l3d_polygon_ivertex_textured *)
& ((*(p_poly->clip_ivertices))[right_idx]))->tex_coord.Z_)
+float2fix(Z_ADD_FACTOR));
right_ui_start = fixfixmul(
l3d_real_to_fixed(((l3d_polygon_ivertex_textured *)
& ((*(p_poly->clip_ivertices))[right_idx]))->tex_coord.X_),
right_zi_start);
right_vi_start = fixfixmul(
l3d_real_to_fixed(((l3d_polygon_ivertex_textured *)
& ((*(p_poly->clip_ivertices))[right_idx]))->tex_coord.Y_),
right_zi_start);
}
}
}else {
//- a segment exactly one pixel wide (left_x == right_x)
dui_dx =
dvi_dx =
dzi_dx = int2fix(0);
}
cur_x=left_ceilx;
#define LINEAR_RUN_SHIFT 4
#define LINEAR_RUN_SIZE (1<<LINEAR_RUN_SHIFT)
z = fixfixdiv(int2fix(1), zi);
u = fixfixmul(ui,z);
v = fixfixmul(vi,z);
run_x_end = int2fix(cur_x) +
int2fix(LINEAR_RUN_SIZE);
if (run_x_end > int2fix(right_ceilx)) {
run_x_end = int2fix(right_ceilx);
}
denom = fixfixdiv
(int2fix(1) ,
zi + fixfixmul(dzi_dx, run_x_end-int2fix(cur_x)));
u_run_end=fixfixmul
(ui + fixfixmul(dui_dx,(run_x_end-int2fix(cur_x))),
denom) ;
v_run_end=fixfixmul
(vi + fixfixmul(dvi_dx,(run_x_end-int2fix(cur_x))),
denom) ;
inv_run_dx = fixfixdiv
(int2fix(1),
(run_x_end-int2fix(cur_x)));
du_dx_run = fixfixmul((u_run_end - u) ,inv_run_dx);
dv_dx_run = fixfixmul((v_run_end - v) ,inv_run_dx);
for(run_x = int2fix(cur_x);
run_x < run_x_end;
run_x+=int2fix(1))
{
cur_x += LINEAR_RUN_SIZE;
#if 0
ui += dui_dx<<LINEAR_RUN_SHIFT;
vi += dvi_dx<<LINEAR_RUN_SHIFT;
zi += dzi_dx<<LINEAR_RUN_SHIFT;
#else
ui += l3d_mulri(dui_dx, LINEAR_RUN_SIZE);
vi += l3d_mulri(dvi_dx, LINEAR_RUN_SIZE);
zi += l3d_mulri(dzi_dx, LINEAR_RUN_SIZE);
#endif
}
}
scanline++;
left_x += left_dx_dy;
right_x += right_dx_dy;
left_ui += left_dui_dy;
left_vi += left_dvi_dy;
left_zi += left_dzi_dy;
right_ui += right_dui_dy;
right_vi += right_dvi_dy;
right_zi += right_dzi_dy;
}
//- for the left and/or right segment(s) which just completed drawing
//- initialize xxx_start = xxx_end, to begin next segment. xxx_end is
//- then searched for in the next iteration (the while() loops)
if ( left_ceily_end - scanline <= 0 ) {
left_x_start=left_x_end;
left_y_start=left_y_end;
left_ceily_start=left_ceily_end;
left_ui_start=left_ui_end;
left_vi_start=left_vi_end;
left_zi_start=left_zi_end;
}
if ( right_ceily_end - scanline <= 0 ) {
right_x_start=right_x_end;
right_y_start=right_y_end;
right_ceily_start=right_ceily_end;
right_ui_start=right_ui_end;
right_vi_start=right_vi_end;
right_zi_start=right_zi_end;
}
}
}
#include "../math/fix_prec.h"
void l3d_rasterizer_3d_sw_imp::draw_polygon_textured_lightmapped
(const l3d_polygon_3d_textured_lightmapped *p_poly)
{
l3d_texture *old_tex;
if(p_poly->surface==NULL) {
p_poly->surface = p_poly->get_scache()->combine_lightmap_and_texmap
(p_poly->surface_xsize, p_poly->surface_ysize, p_poly);
}
p_poly->surface->O = p_poly->surface_orientation.O;
p_poly->surface->U = p_poly->surface_orientation.U;
p_poly->surface->V = p_poly->surface_orientation.V;
old_tex = p_poly->texture;
p_poly->texture = p_poly->surface;
draw_polygon_textured(p_poly);
p_poly->texture = old_tex;
}
register int i;
unsigned long mask = MAX_BYTE;
char shift = 0;
#define FONT_Y_SIZE 13
#define FONT_X_SIZE 8
#define FONT_SPACING 2
int i;
int font_y, font_x;
int x_offset;
x_offset = x * sinfo->bytes_per_pixel;
unsigned char *pix;
unsigned int c;
for(c=0; c<strlen(text); c++) {
int current_bit;
for(font_x=0, current_bit=0x80;
font_x < FONT_X_SIZE;
font_x++, current_bit >>= 1)
{
if(font_bitmaps[text[c]][FONT_Y_SIZE-font_y] & current_bit) {
fg_color = text_color;
for(register int b=0; b<sinfo->bytes_per_pixel;b++) {
*pix++ = *fg_color++;
}
}else {
pix += sinfo->bytes_per_pixel;
}
}
for(font_x=0; font_x < FONT_SPACING; font_x++) {
pix += sinfo->bytes_per_pixel;
}
}
}
}
Two other items deserve special mention in the texture mapping code. First, all of the real-valued
variables are of type l3d_fixed instead of l3d_real. This means that we explicitly force the
usage of fixed-point values within the texture mapping code. As with l3d_real values, we also
use macros to multiply and divide l3d_fixed values: fixfixmul multiplies two fixed-point
values and fixintmul multiplies a fixed-point value by an integer; likewise, fixfixdiv and
fixintdiv divide a fixed-point value by a fixed-point or integer value. Addition and subtraction of l3d_
fixed values, as with l3d_real, can be done with the normal C++ addition and subtraction
operators.
The reason we use fixed-point values in the texture mapping code is speed. Fixed-point arith-
metic represents real-valued numbers as integers (see the Appendix); therefore, the conversion
from a fixed-point number to an integer is quite fast—it is simply a bit shift operation. On the other
hand, converting a floating-point number to an integer is comparatively slow. This might seem
trivial, but we must convert the real-valued (u,v) coordinates to integer texture map indices for
every pixel. In the l3d library, for instance, simply changing from floating point to fixed point dou-
bled the performance of the texture mapping code.
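As an illustration of why the conversion is cheap, a minimal 16.16 fixed-point setup might look like the following. This is only a sketch; the actual l3d definitions live in fix_lowp.h and fix_prec.h and may choose a different split between whole and fractional bits.
#include <stdint.h>

//- Minimal 16.16 fixed-point sketch: 16 whole bits, 16 fractional bits.
typedef int32_t fixed16;
#define FIX_SHIFT 16
#define int2fix(a)     ((fixed16)((a) * (1 << FIX_SHIFT)))
#define float2fix(a)   ((fixed16)((a) * (1 << FIX_SHIFT)))
#define fix2int(a)     ((int)((a) >> FIX_SHIFT))                      //- one shift: very fast
#define fixfixmul(a,b) ((fixed16)(((int64_t)(a) * (b)) >> FIX_SHIFT)) //- widen, multiply, rescale
#define fixfixdiv(a,b) ((fixed16)((((int64_t)(a)) << FIX_SHIFT) / (b)))
Converting a floating-point number to an integer, by contrast, goes through a floating-point rounding operation, which is what makes it comparatively slow when done once per pixel.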
The other item of note is the use of the Z_MULT_FACTOR and Z_ADD_FACTOR constants in
the code. Instead of computing 1/z, we compute Z_MULT_FACTOR/(z + Z_ADD_FACTOR). The
1/z values in camera space can be very small; for instance, if z is 1000, then 1/z is 0.001. The
Z_MULT_FACTOR has the effect of scaling the 1/z values to be slightly larger so that interpolation
between the 1/z values occurs between larger values and is thus more accurate. The
Z_ADD_FACTOR has the effect of making all z values slightly smaller before computing 1/z. The
idea is that because of the near z clipping plane, none of the z values will ever be smaller than a cer-
tain value. For instance, if the near z plane is at 5, then z will always be at least 5, meaning that 1/z
is at most 1/5, or 0.2. Even with the Z_MULT_FACTOR, there is still a certain range of larger 1/z
values which never occur due to the near z clipping. The purpose of the Z_ADD_FACTOR is to
move the clipped z values slightly closer to the origin, so that the range of possible 1/z values is
larger. These numerical range considerations are particularly important when using fixed-point
arithmetic since numerical accuracy is always a troublesome problem—troublesome because we
only have a fixed number of bits to store whole and fractional parts, and if our computations are
carelessly formulated, the resulting values will require either more or less precision than we have,
leading to incorrect results. Essentially, therefore, we use the Z_MULT_FACTOR and
Z_ADD_FACTOR to scale our computations such that the results always lie within the fixed preci-
sion of our fixed point values.
The text drawing code in the software rasterizer relies on a table of bitmaps, containing one
bitmap for each character, and the two overridden methods set_text_color and
draw_text. The constructor for the rasterizer implementation creates the table of bitmaps in
variable font_bitmaps. Each character is associated with an 8 by 13 bitmap, stored as a series
of 13 bytes. Each bit in each byte of each character’s bitmap represents one pixel forming the
shape of the character; a value of 1 for the bit means the pixel should be drawn to form the shape of
the character, while a value of 0 means the pixel should not be drawn. The bits are interpreted such
that the most-significant (leftmost) bit is the leftmost pixel to be displayed. Since each byte is 8
bits (an admittedly hard-coded assumption in the code), one byte represents the eight horizontal
pixels belonging to each character's bitmap.
We use the following methods to draw the text. Method set_text_color takes a red,
green, and blue color specification in external format, converts the color to native internal screen
format, then stores the color as a series of bytes in the array text_color. Then, method draw_
text draws a given string using the previously set color. It takes each character in the string to be
printed, finds the appropriate bitmap for the character by using its ASCII code, and, for each bit in
the character’s bitmap with a value of 1, draws the color stored in text_color into the
rasterizer buffer. Thus, the string is drawn character for character, pixel by pixel into the buffer.
#include <GL/glut.h>
#include "rast3.h"
#include "ras_mesa.h"
class l3d_rasterizer_3d_mesa_imp :
virtual public l3d_rasterizer_2d_mesa_imp,
virtual public l3d_rasterizer_3d_imp
{
private:
GLuint fontOffset;
void make_raster_font(void);
protected:
void texture_setup(const l3d_polygon_3d_textured *p_poly);
void ogl_texturing_between_glbegin_end
(const l3d_polygon_3d_textured *p_poly);
l3d_real ui_xs, ui_ys, ui_c, vi_xs, vi_ys, vi_c, zi_xs, zi_ys, zi_c,
ui, vi, zi;
public:
l3d_rasterizer_3d_mesa_imp(int xs, int ys, l3d_screen_info *si);
void clear_buffer(void);
};
class l3d_rasterizer_3d_mesa_imp_factory :
public l3d_rasterizer_3d_imp_factory {
public:
l3d_rasterizer_3d_imp *create(int xs, int ys, l3d_screen_info *si);
};
#endif
{0x0,0x0, 0x60,0x60,0x60,0x60,0x30,0x18,0xC,0x6,0x3,0x3,0xFF},
{0x0,0x0, 0x7E,0xFF,0xC3,0xC3,0xC3,0x7E,0xC3,0xC3,0xC3,0xC3,0x7E},
{0x0,0x0, 0x7E,0xFF,0xC3,0x3,0x3,0x7F,0xC3,0xC3,0xC3,0xC3,0x7E}
};
l3d_rasterizer_3d_imp * l3d_rasterizer_3d_mesa_imp_factory::create
(int xs, int ys, l3d_screen_info *si)
{
return new l3d_rasterizer_3d_mesa_imp(xs,ys,si);
}
l3d_rasterizer_3d_mesa_imp::
l3d_rasterizer_3d_mesa_imp(int xs, int ys, l3d_screen_info *si):
l3d_rasterizer_3d_imp(xs,ys,si),
l3d_rasterizer_2d_imp(xs,ys,si),
l3d_rasterizer_2d_mesa_imp(xs,ys,si)
{
make_raster_font();
set_text_color(si->ext_max_red,
si->ext_max_green,
si->ext_max_blue);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(-xsize/2,xsize/2,-ysize/2,ysize/2);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glViewport(0,0,xsize,ysize);
//- vertex orientation - from being drawn, to mimic the behavior of the
//- software rasterizer.
glFrontFace(GL_CW);
glCullFace(GL_BACK);
glEnable(GL_CULL_FACE);
}
void l3d_rasterizer_3d_mesa_imp::clear_buffer(void)
{
l3d_rasterizer_2d_mesa_imp::clear_buffer();
}
void l3d_rasterizer_3d_mesa_imp::draw_polygon_flatshaded
(const l3d_polygon_2d_flatshaded *p_poly)
{
int i;
glBegin(GL_POLYGON);
sinfo->set_color(p_poly->final_color);
for(i=0; i<p_poly->clip_ivertices->num_items; i++) {
glVertex2i(iceil((**(p_poly->vlist))
[(*p_poly->clip_ivertices)[i].ivertex].transformed.X_)
- (xsize>>1),
(ysize>>1) -
iceil((**(p_poly->vlist))
[(*p_poly->clip_ivertices)[i].ivertex].transformed.Y_));
}
glEnd();
}
void l3d_rasterizer_3d_mesa_imp::texture_setup
(const l3d_polygon_3d_textured *p_poly)
{
if (!glIsTexture(p_poly->texture->tex_data->id)) {
printf("generating texture");
GLuint glid = (GLuint) (p_poly->texture->tex_data->id);
glGenTextures(1, & glid );
p_poly->texture->tex_data->id = glid;
glBindTexture(GL_TEXTURE_2D, p_poly->texture->tex_data->id);
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
glEnable(GL_TEXTURE_2D);
glTexImage2D(GL_TEXTURE_2D,
0,
GL_RGBA,
p_poly->texture->tex_data->width,
p_poly->texture->tex_data->height,
0,
GL_RGBA,
GL_UNSIGNED_BYTE,
p_poly->texture->tex_data->data);
}else {
glEnable(GL_TEXTURE_2D); //- texturing may have been turned off by another
glBindTexture(GL_TEXTURE_2D,p_poly->texture->tex_data->id);
}
void l3d_rasterizer_3d_mesa_imp::draw_polygon_textured
(const l3d_polygon_3d_textured *p_poly)
{
texture_setup(p_poly);
glBegin(GL_POLYGON);
ogl_texturing_between_glbegin_end(p_poly);
glEnd();
}
void l3d_rasterizer_3d_mesa_imp::ogl_texturing_between_glbegin_end
(const l3d_polygon_3d_textured *p_poly)
{
int i;
GLfloat plane[4];
l3d_vector plane_normal;
plane_normal = normalized(p_poly->sfcnormal.transformed
- p_poly->center.transformed);
plane[0] = plane_normal.a[0];
plane[1] = plane_normal.a[1];
plane[2] = plane_normal.a[2];
plane[3] = -plane[0] * p_poly->center.transformed.X_
- plane[1] * p_poly->center.transformed.Y_
- plane[2] * p_poly->center.transformed.Z_;
l3d_real hf=l3d_mulri(*fovx,*screen_xsize),
vf=l3d_mulri(*fovy,*screen_ysize);
float orig_z_denom =
(plane[0]*(SCREEN_X - xsize/2)/ hf -
plane[1]*(SCREEN_Y - ysize/2)/ vf +
plane[2]);
if(orig_z_denom > -0.0001) {
//- this would imply that the orig_z value is < 0.0001 ie non-positive
//- so we cancel the entire polygon drawing operation
return;
}
}
glTexCoord2f
(
((l3d_polygon_ivertex_textured *)&(*p_poly->clip_ivertices)[i])
->tex_coord.X_,
((l3d_polygon_ivertex_textured *)&(*p_poly->clip_ivertices)[i])
->tex_coord.Y_
);
l3d_real hf=l3d_mulri(*fovx,*screen_xsize),
vf=l3d_mulri(*fovy,*screen_ysize);
float orig_z = -plane[3] /
(plane[0]*(SCREEN_X - xsize/2)/ hf -
plane[1]*(SCREEN_Y - ysize/2)/ vf +
plane[2]);
GLfloat tmp[4];
tmp[0] =
(((**(p_poly->vlist))
[(*(p_poly->clip_ivertices))[i].ivertex].transformed.X_ - xsize/2)
* orig_z );
tmp[1] =
((-(**(p_poly->vlist))
[(*(p_poly->clip_ivertices))[i].ivertex].transformed.Y_ + ysize/2)
* orig_z );
tmp[2] = 1;
tmp[3] = orig_z;
glVertex4fv(tmp);
}
}
void l3d_rasterizer_3d_mesa_imp::make_raster_font(void)
{
GLuint i, j;
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glPushAttrib (GL_LIST_BIT);
glListBase(fontOffset);
glCallLists((GLsizei) strlen(string), GL_UNSIGNED_BYTE, (GLubyte *) string);
glPopAttrib ();
void l3d_rasterizer_3d_mesa_imp::draw_polygon_textured_lightmapped
(const l3d_polygon_3d_textured_lightmapped *p_poly)
{
GLuint glid = (GLuint) (p_poly->texture->tex_data->id);
l3d_list<l3d_point> old_texcoords(10);
if (!p_poly->get_scache()->find_texdata(p_poly->texture->tex_data)) {
if (glIsTexture(p_poly->texture->tex_data->id)) {
glDeleteTextures(1, &glid);
}
glBindTexture(GL_TEXTURE_2D, p_poly->texture->tex_data->id);
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
glEnable(GL_TEXTURE_2D);
glTexImage2D(GL_TEXTURE_2D,
0,
GL_RGBA,
p_poly->texture->tex_data->width,
p_poly->texture->tex_data->height,
0,
GL_RGBA,
GL_UNSIGNED_BYTE,
p_poly->texture->tex_data->data);
}else {
glBindTexture(GL_TEXTURE_2D,p_poly->texture->tex_data->id);
}
float px =
( (**p_poly->vlist) [(*p_poly->clip_ivertices)[i].ivertex].transformed.X_
- *screen_xsize/2 )
/ (*screen_xsize * *fovx);
float py =
(*screen_ysize/2
-
(**p_poly->vlist) [(*p_poly->clip_ivertices)[i].ivertex].transformed.Y_ )
/
(*screen_ysize * *fovy);
float ox = p_poly->texture->O.transformed.X_;
float oy = p_poly->texture->O.transformed.Y_;
float oz = p_poly->texture->O.transformed.Z_;
float ux = p_poly->texture->U.transformed.X_ - ox;
float uy = p_poly->texture->U.transformed.Y_ - oy;
float uz = p_poly->texture->U.transformed.Z_ - oz;
float vx = p_poly->texture->V.transformed.X_ - ox;
float vy = p_poly->texture->V.transformed.Y_ - oy;
float vz = p_poly->texture->V.transformed.Z_ - oz;
old_texcoords[old_texcoords.next_index()] =
((l3d_polygon_ivertex_textured *)&(*p_poly->clip_ivertices)[i])
->tex_coord;
((l3d_polygon_ivertex_textured *)&(*p_poly->clip_ivertices)[i])
->tex_coord.set(float_to_l3d_real(u),
float_to_l3d_real(v),
float_to_l3d_real(z),
float_to_l3d_real(1.0));
}
draw_polygon_textured(p_poly);
//- set up blend function to combine light map and text map
glEnable(GL_BLEND);
glBlendFunc(GL_ZERO, GL_SRC_ALPHA);
int i;
glDeleteTextures(1, &glid);
}
printf("generating texture");
glGenTextures(1, & glid );
p_poly->lightmap->tex_data->id = glid;
printf("ID was %d", p_poly->lightmap->tex_data->id);
glBindTexture(GL_TEXTURE_2D, p_poly->lightmap->tex_data->id);
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
glEnable(GL_TEXTURE_2D);
glPixelStorei(GL_UNPACK_ALIGNMENT,1);
glTexImage2D(GL_TEXTURE_2D,
0,
GL_RGBA,
p_poly->lightmap->tex_data->width,
p_poly->lightmap->tex_data->height,
0,
GL_ALPHA,
GL_UNSIGNED_BYTE,
p_poly->lightmap->tex_data->data);
}else {
glBindTexture(GL_TEXTURE_2D,p_poly->lightmap->tex_data->id);
}
//- draw the same polygon again, but this time with the light map
glBegin(GL_POLYGON);
ogl_texturing_between_glbegin_end(p_poly);
glEnd();
glDisable(GL_BLEND);
}
upwards, not downwards; this is the normal y axis orientation, not the reversed y orientation. The
reason we must make this change is for the texture mapping code, as we see shortly.
Because we have redefined the origin and y axis orientation, we must rewrite all of the inher-
ited drawing routines from the 2D rasterizer to use the new coordinate system. The draw_
point, draw_line, and draw_polygon_flatshaded routines perform the same func-
tion and call the same OpenGL routines as in the parent class l3d_rasterizer_
2d_mesa_imp, only here in the 3D rasterizer we offset all of the incoming x and y coordinates to
use the new screen-centered coordinate system.
Like the software 3D rasterizer, the Mesa 3D rasterizer also overrides the routines
set_text_color and draw_text to draw text into the rasterizer buffer, although here we
must use OpenGL commands to draw text. Method set_text_color converts the given exter-
nal red, green, and blue color into the native pixel format, and stores it in the variable
text_color. Method draw_text then calls the set_color routine of the screen info
object with the color text_color, which for Mesa calls glColor4f. Then, after setting the
color, method draw_text disables texture mapping, since otherwise text might be drawn using
the currently active texture map if texture mapping had been enabled earlier, and sets the raster
position at which the text is to be drawn (in 2D screen coordinates, with the screen-centered coor-
dinate system) with glRasterPos2i. Finally, the text is drawn with the calls to
glPushAttrib, glListBase, glCallLists, and glPopAttrib. The calls to
glPushAttrib and glPopAttrib save and restore the OpenGL state variables before and
after the calls to glListBase and glCallLists. The calls to glListBase and
glCallLists execute a display list. A display list is an OpenGL-specific concept; it is a collec-
tion of OpenGL commands that can be executed as a logical unit, something like a subroutine
consisting of OpenGL commands. Each display list is assigned an integer identifier by OpenGL.
Execution of a display list may be faster than individually calling the separate OpenGL com-
mands, because the commands in the display list may be cached on or optimized by the graphics
hardware. For the case of drawing text, we use a separate display list for each character, created in
the initialization routine make_raster_font. This routine first allocates space for 128 con-
secutive display lists with glGenLists, which returns the integer identifier for the first of the
lists. Then, for each display list corresponding to a character we wish to display, we enter com-
mands into the display list by executing glNewList, which begins the list, glBitmap, which
draws the bitmap data to the screen using the current color, and glEndList to finish the list.
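A hedged sketch of such a routine appears below; the exact list count and bitmap layout in l3d may differ, and font_bitmaps is assumed to hold one 8 by 13 bitmap (13 bytes) per character:
#include <GL/gl.h>

//- Sketch: build one display list per character; each list simply draws the character's
//- bitmap at the current raster position and advances the raster position.
void make_raster_font_sketch(GLuint &fontOffset,
                             const GLubyte font_bitmaps[][13], int num_chars)
{
  glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   //- bitmap rows are tightly packed bytes
  fontOffset = glGenLists(128);            //- reserve 128 consecutive display lists
  for (int i = 0; i < num_chars; i++) {
    glNewList(fontOffset + i, GL_COMPILE); //- begin the list for character i
    glBitmap(8, 13,                        //- 8x13 pixel character cell
             0.0, 0.0,                     //- origin within the bitmap
             10.0, 0.0,                    //- advance the raster position afterwards
             font_bitmaps[i]);             //- the character's bitmap data
    glEndList();                           //- finish the list
  }
}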
The method draw_polygon_textured is the new routine of interest which uses
OpenGL calls to draw a texture mapped polygon, using hardware acceleration if present. It calls
texture_setup to initialize the texture, then calls glBegin(GL_POLYGON) to draw a
polygon, and calls ogl_texturing_between_glbegin_end to actually issue the
OpenGL texture mapping commands, finishing with a call to glEnd.
The method texture_setup has the responsibility of making the texture image data avail-
able to OpenGL. Recall, OpenGL is a state machine; issued commands are affected based on the
currently set parameters. The texture is one such parameter; to draw a texture mapped polygon, we
must first select the texture. OpenGL (version 1.1 and later) stores textures in texture objects. A
texture object is referenced by a non-zero integer identifier, which we store in the id member of
the l3d_texture_data class. In method texture_setup, we first check to see if the
OpenGL texture data has already been allocated and initialized via glIsTexture, passing the
texture id. If the texture does exist, we simply make it the current texture for the subsequent draw-
ing commands, through the call to glBindTexture. If the texture does not exist, we must
create it for OpenGL. First, we call glGenTextures to return a new, unused texture id num-
ber, and store it with the texture data. We then call glBindTexture to make this texture object
current.
At this point we have created and bound a new texture to OpenGL, but the texture is still
empty. Therefore, we issue additional commands to make the texture image data known to
OpenGL. First, we call glTexParameteri several times to set up some parameters for the tex-
ture. The first two calls to glTexParameteri cause scaled-up (magnified) or scaled-down
(minified) textures to be drawn using the closest pixel within the texture map; the alternative
would be to use some sort of average of several pixels within the texture map, which produces a
smoother result but is slower. The second two calls to glTexParameteri cause the texture
coordinates to wrap around or repeat outside of the (0,0) to (1,1) square in texture space. Then,
we call glHint with the parameters GL_PERSPECTIVE_CORRECTION_HINT and GL_
NICEST, which causes OpenGL to try to perform the “nicest” perspective correction possible
when doing texture mapping. The perspective correction is simply the interpolation of u/z, v/z, and
1/z with a division to retrieve the mathematically correct u, v, and z values, as we already have
been doing; texture mapping without perspective correction interpolates the u, v, and z values
directly (which we did in software for short runs of 16 pixels) and is faster but leads to incorrect
and visually disturbing artifacts. The call to glEnable(GL_TEXTURE_2D) turns on the tex-
ture mapping in OpenGL.
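The glTexParameteri calls referred to above are not visible in the abbreviated listing; in OpenGL they would typically read as follows (a sketch of the calls the text describes, not necessarily the literal l3d source):
#include <GL/gl.h>

//- Sketch of the texture parameters described in the text: nearest-texel filtering for
//- magnification and minification, repeating texture coordinates, perspective correction
//- hint, and enabling 2D texturing.
void set_basic_texture_parameters(void)
{
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); //- magnified: closest texel
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); //- minified: closest texel
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);      //- wrap u outside [0,1]
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);      //- wrap v outside [0,1]
  glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);                 //- request true perspective correction
  glEnable(GL_TEXTURE_2D);                                           //- turn on 2D texturing
}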
TIP If you want to see the effect of texture mapping without perspective correction in the soft-
ware rasterizer, try setting the constant LINEAR_RUN_SHIFT in class l3d_rasterizer_
3d_sw_imp to be 16. This means that each linear run is 2^16 or 65,536 pixels long, mean-
ing that across an entire scanline, the u and v values (instead of u/z, v/z, and 1/z, which
would be correct) are linearly interpolated. This results in distortion and warping of the texture
image but faster performance because no division operations are performed within the
scanline.
Finally, and most importantly, we call glTexImage2D, which actually sends the texture image
data to OpenGL. The first parameter specifies that we are working with a 2D texture. The next
parameter is only used with multiple levels of detail, where a different texture is drawn depending
on the distance from the camera, so we set it to 0. The third parameter, which we set to GL_RGBA,
means that our texture data specifies red, green, blue, and also possibly alpha (transparency) com-
ponents for our colors. The next two parameters are the width and height of the texture; after that
comes an optional border parameter to surround the texture, which we set to 0 (no border). The
next two parameters specify the format of the texture image data: red, green, blue, and alpha com-
ponents, each as a byte—in other words, 32-bit RGBA, which is also the same pixel format
specified in the Mesa screen information class l3d_screen_info_rgb_mesa. The final
parameter is a pointer to the actual texture image data itself.
The method ogl_texturing_between_glbegin_end issues the actual OpenGL
commands to draw the textured polygon. This routine must be called between glBegin(GL_
POLYGON) and glEnd, as the name of the routine implies. Fundamentally, the routine works as
follows: for each vertex in the polygon, we call glTexCoord2f with the (u,v) coordinates of the
polygon, which we stored in the l3d_polygon_ivertex_textured.tex_coord mem-
ber. Then, we call glVertex4fv with the spatial coordinates of the vertex. After specifying the
texture and spatial coordinates for all vertices of the polygon, the routine returns, glEnd is called,
and the texture mapped polygon is then drawn by OpenGL.
There is, however, one tricky computation which must be performed within the method
ogl_texturing_between_glbegin_end. The problem is that we must pass a
pre-projection vertex to Mesa, and must allow Mesa to perform the perspective division, in order
for the texture mapping to occur correctly. We cause Mesa to do perspective division by using
homogeneous coordinates. The last operation performed by the hardware on a coordinate is the
division by the homogeneous w component. If we set w to be the 3D z component, this division
effects the perspective projection. Our problem is that we already performed the perspective divi-
sion, in l3d_object::apply_perspective_projection, and we already know
exactly which 2D pixel location we need. So our task is, from this already known 2D pixel loca-
tion, to compute the pre-projection 3D coordinate which, when re-projected by Mesa, yields
exactly the 2D pixel location we desire. This is a form of “reverse projection,” very similar to the
way that we “reverse projected” from screen space into texture space. Here, we must reverse pro-
ject from 2D screen space back into 3D world space.
CAUTION You may be tempted to believe that we can simply save the vertex coordinates
before perspective projection, in one of the transformed intermediates within the vertex.
Then, you might think, later on within the Mesa rasterizer, we would simply retrieve the
pre-projection coordinate and pass it to Mesa. The flaw with this reasoning is that after pro-
jection, we perform an additional clipping operation in 2D, clipping the projected polygon to
the view window. This clipping can produce additional vertices, which did not directly result
from a 3D to 2D projection operation. For such vertices produced by a 2D clip, we have no a
priori knowledge of the corresponding 3D point. Therefore, we cannot just save the
pre-projection 3D coordinate ahead of time; we must dynamically compute it later as
needed.
This reverse projection from screen space into world space is quite easy and relies on the plane
equation of the polygon which we developed earlier. Consider the following system of equations:
Equation 2-35
This is a system of three equations. The first is the plane equation of the polygon, and it constrains
how the x, y, and z values in 3D relate to one another: they must lie on a plane. The a, b, and c con-
stants of the plane equation, recall, are simply the normalized x, y, and z components of the surface
normal to the plane; the d component can be calculated by substituting a known x, y, and z point on
the plane (such as the polygon center) into the plane equation and solving for d. The second two
equations are the simplified perspective projection equations ignoring field of view and y axis
reversal; as seen earlier, we can easily convert from these simple projected coordinates to the real
screen projected coordinates.
Solving this system for x, y, and z (by using Calc) yields the following system of equations.
Equation 2-36
These equations give us the world space coordinates from the screen space coordinates—a
“reverse projection.”
As you can see here, it is very important to express a problem in terms of known and unknown
quantities. If we are only given the perspective projection equations relating x, y, and z, we do not
have enough information to perform reverse projection. Mathematically, we have two equations
(x=x/z, y=y/z) but three unknowns (x, y, and z). Geometrically, the perspective projected point
could have come from any point in 3D which lies on the light ray. But, by adding the plane equa-
tion to the system of equations, we do have enough information to perform reverse projection.
Mathematically, we now have three equations and three unknowns. Geometrically, the additional
constraint of the plane equation means that the point in 3D must not only project to the given 2D
point, but it must also lie on the plane of the polygon. The general observation here is that an addi-
tional equation relating the known quantities allowed us to solve the system of equations. The
challenge is to determine which known information might be of use in finding some unknown
quantities.
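In code, the reverse projection described by Equations 2-35 and 2-36 boils down to a few lines; the following sketch ignores field of view and y axis reversal, exactly as the simplified equations do (the names are illustrative, not the l3d interface):
//- Given the simple projected coordinates (xp, yp) and the polygon's plane
//- a*x + b*y + c*z + d = 0, recover the 3D camera space point that projects there.
struct point3 { float x, y, z; };

point3 reverse_project(float xp, float yp, float a, float b, float c, float d)
{
  point3 p;
  p.z = -d / (a*xp + b*yp + c);   //- substitute x = xp*z and y = yp*z into the plane equation
  p.x = xp * p.z;                 //- undo the perspective division for x
  p.y = yp * p.z;                 //- and for y
  return p;
}
Compare this with the orig_z computation in ogl_texturing_between_glbegin_end above, which is the same formula with the field of view and y axis reversal factored back in.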
So, to use these equations, we first convert from real screen projected coordinates to the sim-
ple projected coordinates, using the earlier Equation 2-30. Notice that this conversion relies on the
horizontal and vertical field of view terms and screen size, which is why the l3d_tex-
ture_computer class, from which the Mesa rasterizer inherits, stores these values.
CAUTION The fact that the Mesa texture mapping code requires the horizontal and vertical
field of view terms and the screen size means that you must initialize these values before
attempting to perform any texture mapping operations.
The following sample program textest initializes these field of view terms in the beginning of
the world constructor. The reason we must perform this initialization is that in the rasterizer we
store pointers to the actual variables storing the field of view and screen size terms. This allows us
to change values in the external variables (e.g., to interactively change the field of view) without
needing also to propagate the updated values to the rasterizer. But the fact that the rasterizer stores
pointers to the variables means that failure to initialize these pointers followed by an attempt to
perform Mesa texture mapping results in an attempt to access memory through an uninitialized
pointer—a practice which, to put it mildly, is best avoided. Ordinarily, such initialization would be
done by including a required parameter in the class’s constructor, so that it is not possible to create
an object incorrectly, but in this case we create the rasterizer implementation indirectly through a
factory, where the factory’s original creation interface (defined in the introductory companion
book Linux 3D Graphics Programming) was defined before we even saw the notion of field of
view. Since the code philosophy in this book emphasizes a bottom-up approach, we’ll have to live
with this small inconvenience for now. In a real production 3D program, you probably would have
NOTE Note that the last operation Mesa performs on the coordinate is the division by the
homogeneous component w. This is the reason that, for the Mesa 3D rasterizer, we redefined
the origin of our screen coordinate system to be the center of the screen with a normal y axis
orientation. Since the division by w is the last operation, we have no way of telling Mesa “oh,
and after the homogeneous division, please also offset the x and y values by half the screen
size and reverse the y axis orientation.” If we want Mesa to perform the perspective
division—which should be the case for texture mapping to occur correctly—then we have no
control over the post-projection coordinates, since they are taken over by the hardware at that
point.
public:
int dx_theta;
pyramid(l3d_texture_loader *l);
virtual ~pyramid(void);
/* virtual */ int update(void);
};
Listing 2-15: shapes.cc, the textured object class implementation for program textest
#include "shapes.h"
#include "../lib/tool_os/memman.h"
#include <stdlib.h>
#include <string.h>
pyramid::pyramid(l3d_texture_loader *l) :
l3d_object(4)
{
(*vertices)[0].original.set(float_to_l3d_real(0.),float_to_l3d_real(0.),float_to_l3d_real(0.),
float_to_l3d_real(1.));
(*vertices)[1].original.set(float_to_l3d_real(10.0),float_to_l3d_real(0.),float_to_l3d_real(0.),
float_to_l3d_real(1.));
(*vertices)[2].original.set(float_to_l3d_real(0.),float_to_l3d_real(10.),float_to_l3d_real(0.),
float_to_l3d_real(1.));
(*vertices)[3].original.set(float_to_l3d_real(0.),float_to_l3d_real(0.),float_to_l3d_real(10.),
float_to_l3d_real(1.));
l3d_polygon_3d_textured *p;
int pi;
pi = polygons.next_index();
polygons[pi] = p = new l3d_polygon_3d_textured(3);
polygons[pi]->vlist = &vertices;
(*(polygons[pi]->ivertices))[polygons[pi]->ivertices->next_index()].ivertex=0;
(*(polygons[pi]->ivertices))[polygons[pi]->ivertices->next_index()].ivertex=1;
(*(polygons[pi]->ivertices))[polygons[pi]->ivertices->next_index()].ivertex=3;
l->load("test.ppm");
p->texture = new l3d_texture;
p->texture->tex_data = new l3d_texture_data;
p->texture->tex_data->width = l->width;
p->texture->tex_data->height = l->height;
p->texture->tex_data->data = l->data;
p->texture->owns_tex_data = true;
p->texture->O = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[0].ivertex];
p->texture->U = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[1].ivertex];
p->texture->V = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[2].ivertex];
polygons[pi]->compute_center();polygons[pi]->compute_sfcnormal();
p->assign_tex_coords_from_tex_space(*p->texture);
pi = polygons.next_index();
polygons[pi] = p = new l3d_polygon_3d_textured(3);
polygons[pi]->vlist = &vertices;
(*(polygons[pi]->ivertices))[polygons[pi]->ivertices->next_index()].ivertex=2;
(*(polygons[pi]->ivertices))[polygons[pi]->ivertices->next_index()].ivertex=3;
(*(polygons[pi]->ivertices))[polygons[pi]->ivertices->next_index()].ivertex=1;
p->texture->tex_data->height = l->height;
p->texture->tex_data->data = l->data;
p->texture->owns_tex_data = false;
p->texture->O = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[0].ivertex];
p->texture->U = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[1].ivertex];
p->texture->V = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[2].ivertex];
polygons[pi]->compute_center();polygons[pi]->compute_sfcnormal();
p->assign_tex_coords_from_tex_space(*p->texture);
pi = polygons.next_index();
polygons[pi] = p = new l3d_polygon_3d_textured(3);
polygons[pi]->vlist = &vertices;
(*(polygons[pi]->ivertices))[polygons[pi]->ivertices->next_index()].ivertex=0;
(*(polygons[pi]->ivertices))[polygons[pi]->ivertices->next_index()].ivertex=2;
(*(polygons[pi]->ivertices))[polygons[pi]->ivertices->next_index()].ivertex=1;
p->texture->O = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[0].ivertex];
p->texture->U = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[1].ivertex];
p->texture->V = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[2].ivertex];
polygons[pi]->compute_center();polygons[pi]->compute_sfcnormal();
p->assign_tex_coords_from_tex_space(*p->texture);
pi = polygons.next_index();
polygons[pi] = p = new l3d_polygon_3d_textured(3);
polygons[pi]->vlist = &vertices;
(*(polygons[pi]->ivertices))[polygons[pi]->ivertices->next_index()].ivertex=3;
(*(polygons[pi]->ivertices))[polygons[pi]->ivertices->next_index()].ivertex=2;
(*(polygons[pi]->ivertices))[polygons[pi]->ivertices->next_index()].ivertex=0;
p->texture->O = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[0].ivertex];
p->texture->U = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[1].ivertex];
p->texture->V = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[2].ivertex];
polygons[pi]->compute_center();polygons[pi]->compute_sfcnormal();
p->assign_tex_coords_from_tex_space(*p->texture);
num_xforms = 2;
xtheta=0;
modeling_xforms[0] = l3d_mat_rotx(xtheta);
modeling_xforms[1].set
( float_to_l3d_real(1.), float_to_l3d_real(0.), float_to_l3d_real(0.), float_to_l3d_real(0.),
float_to_l3d_real(0.), float_to_l3d_real(1.), float_to_l3d_real(0.), float_to_l3d_real(0.),
float_to_l3d_real(0.), float_to_l3d_real(0.), float_to_l3d_real(1.), float_to_l3d_real(0.),
modeling_xform=
modeling_xforms[1] | modeling_xforms[0];
}
pyramid::~pyramid(void) {
}
int pyramid::update(void) {
xtheta += dx_theta; if (xtheta>359) xtheta-=360;
modeling_xforms[0]=l3d_mat_rotx(xtheta);
modeling_xform=
modeling_xforms[1] | modeling_xforms[0];
}
Listing 2-16: textest.h, the custom world class declaration for program textest
#ifndef __TEXTEST_H
#define __TEXTEST_H
#include "../lib/geom/world/world.h"
#include "../lib/pipeline/pi_wor.h"
#include "../lib/geom/texture/texture.h"
#include "../lib/geom/texture/tl_ppm.h"
#endif
Listing 2-17: textest.cc, the custom world class implementation for program textest
#include "textest.h"
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <math.h>
#include <stdarg.h>
#include "../lib/geom/object/object3d.h"
#include "../lib/tool_2d/screen.h"
#include "../lib/tool_os/dispatch.h"
#include "../lib/raster/rasteriz.h"
#include "../lib/tool_2d/scrinfo.h"
#include "../lib/geom/world/world.h"
#include "../lib/geom/texture/texture.h"
#include "../lib/geom/texture/tl_ppm.h"
#include "shapes.h"
void my_world::update_all(void) {
l3d_world::update_all();
}
my_world::my_world(void)
: l3d_world(400,300)
{
rasterizer_3d_imp->fovx = &(camera->fovx);
rasterizer_3d_imp->fovy = &(camera->fovy);
rasterizer_3d_imp->screen_xsize = &(screen->xsize);
rasterizer_3d_imp->screen_ysize = &(screen->ysize);
camera->VRP.set(0,0,-50,0);
camera->near_z = float_to_l3d_real(5.5);
camera->far_z = int_to_l3d_real(500);
int i,j,k,onum;
if (objects[onum]==NULL) exit(1);
objects[onum]->modeling_xforms[1].set
( int_to_l3d_real(1), int_to_l3d_real(0), int_to_l3d_real(0), int_to_l3d_real(-i),
int_to_l3d_real(0), int_to_l3d_real(1), int_to_l3d_real(0), int_to_l3d_real(i/2),
int_to_l3d_real(0), int_to_l3d_real(0), int_to_l3d_real(1), int_to_l3d_real(i),
int_to_l3d_real(0), int_to_l3d_real(0), int_to_l3d_real(0), int_to_l3d_real(1) );
objects[onum]->modeling_xform =
objects[onum]->modeling_xforms[1] |
objects[onum]->modeling_xforms[0] ;
onum++;
}
screen->refresh_palette();
screen->sinfo->compute_light_table();
screen->sinfo->compute_fog_table();
}
Listing 2-18: main.cc, the main program file for program textest
#include "textest.h"
#include "../lib/system/fact0_2.h"
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <math.h>
#include <stdarg.h>
main() {
l3d_dispatcher *d;
l3d_pipeline_world *p;
my_world *w;
factory_manager_v_0_2.choose_factories();
d = factory_manager_v_0_2.dispatcher_factory->create();
w = new my_world();
p = new l3d_pipeline_world(w);
d->pipeline = p;
d->event_source = w->screen;
d->start();
delete p;
delete d;
delete w;
}
The main program file, main.cc, begins by choosing the factories using the new
l3d_factory_manager_v_0_2 class. This class chooses the 3D rasterizer implementation factory:
either software or OpenGL/Mesa. Then, the main program simply creates the world, pipeline, and
dispatcher objects, hooks them together, and starts the dispatcher.
The class my_world is our custom world class. First, it stores pointers to the field of view
and screen size variables into the 3D rasterizer, since in the case of Mesa rasterization we need
these variables to perform the reverse projection (as mentioned earlier). Next, the class creates a
texture loader, of type l3d_texture_loader_ppm, which is used by the pyramid class
(covered in the next paragraph) to load the texture images from disk. Class my_world then
populates the world with a number of objects of type pyramid.
The class pyramid is the real focus of interest for this program. Class pyramid, inheriting
from class l3d_object, is a texture mapped, rotating pyramid. The member variable xtheta
is the current rotation angle; dx_theta is the change in xtheta per unit time. The constructor
requires a parameter of type l3d_texture_loader so that the pyramid can load the texture
image data from disk. In the constructor, we first define a vertex list, as usual. Then, we create a
number of polygons referencing the vertex list, but of the new type l3d_polygon_3d_textured.
pi = polygons.next_index();
polygons[pi] = p = new l3d_polygon_3d_textured(3);
After creating the polygon and assigning its vertex indices as usual, we then load a texture image
from disk by using the texture loader, then assign the newly loaded texture to the polygon. The tex-
ture image is a 64 by 64 pixel color image saved in PPM format. Since the polygon is the only one
referencing the texture, it is responsible for freeing the data—hence, the owns_tex_data
member is set to true.
l->load("test.ppm");
p->texture = new l3d_texture;
p->texture->tex_data = new l3d_texture_data;
p->texture->tex_data->width = l->width;
p->texture->tex_data->height = l->height;
p->texture->tex_data->data = l->data;
p->texture->owns_tex_data = true;
Next, we have to define the orientation of the texture space, which places the texture in 3D space
relative to the polygon. In this case, we directly use the first vertex of the polygon as the texture
space origin, and the two edges of the polygon as the texture space axes. This has the effect of
causing the texture image to lie exactly on the face of the polygon.
p->texture->O = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[0].ivertex];
p->texture->U = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[1].ivertex];
p->texture->V = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[2].ivertex];
Finally, we perform some last necessary computations in 3D. We compute the center and surface
normal of the polygon, and finally, based on the previous texture space definition, we assign (u,v)
texture coordinates to each of the polygon’s vertices by using the world to texture space matrix.
polygons[pi]->compute_center();polygons[pi]->compute_sfcnormal();
p->assign_tex_coords_from_tex_space(*p->texture);
Creating and initializing the texture mapped polygons in this manner is actually all we need to do
in order to make use of texture mapped polygons in our programs. Having already chosen a 3D
rasterizer implementation at program start that is capable of drawing texture mapped polygons,
we are free to create texture mapped polygons and assign them to our 3D objects. All routines that
draw or otherwise manipulate 3D polygons do so through abstract pointers and virtual function
calls, meaning that the texture mapped polygons can be used with no impact to existing code.
They are finally displayed through a virtual function call to draw, which then ends up calling the
rasterizer implementation to draw the texture mapped polygon on-screen.
With texture mapping under our belts, it is now time to return to the topic of light mapping,
which builds on the topic of texture mapping.
The difference between texture mapping and light mapping is that with texture mapping, the
color in the texture map defines the final color to be displayed on-screen; with light mapping, the
light intensity in the light map does not define the color to be displayed, but instead affects an
existing color, making it either brighter or darker via a lighting table.
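For instance, with an indexed (palette-based) display mode, the lighting table can be a simple
two-dimensional lookup. The following sketch uses illustrative names and sizes, not the actual
l3d table layout:
// Sketch: a lighting table maps (light intensity, original color) to a lit color.
enum { NUM_LIGHT_LEVELS = 32, NUM_COLORS = 256 };          // illustrative sizes only
unsigned char light_table[NUM_LIGHT_LEVELS][NUM_COLORS];   // filled in once at startup
inline unsigned char light_color(unsigned char color, int intensity) {
  return light_table[intensity][color];                    // a brighter or darker variant of color
}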
There are a few ways to implement light mapping. One way would be to modify the inner loop
of the rasterizer to perform a double texture mapping calculation (once for the texture map, once
for the light map), then to use the lighting table to find the value of the lighted color and display it.
This requires rather invasive changes to the rasterization code, since we now have to handle both a
texture map and a light map. We don’t explore this option further here.
Another way to implement light mapping is to introduce the concept of a surface, which is
nothing more than a precombined texture map and light map. By precombining the texture map
and light map into a surface, we can treat the new surface as a single texture, and associate this sin-
gle texture with the polygon, which can then be rasterized using the exact same texture mapping
routine as before. We’ll look at this solution in more detail shortly.
A third, and perhaps simplest, way of realizing light mapping is to perform two entire
rasterization passes for each polygon. First, we rasterize the polygon with texture but no lighting.
Then, we re-rasterize the same polygon, but using the light map instead of the texture map. During
rasterization, instead of drawing the colors of the light map (which are all just various intensities
of white), we alter the existing, already drawn colors from the texture map. The end result is the
colors of the texture map lighted by the intensities in the light map. Since this strategy requires two
entire rasterization passes, it is only really practical in the presence of hardware acceleration. We
also look at this solution in the coming sections.
Surfaces
A surface is a pre-tiled, pre-lit texture. A texture and a light map combine to form a surface; the
surface can then be treated as a normal texture. With this approach, light mapping becomes simply
a sort of preprocessing step before drawing the texture mapped polygons. In theory, this sounds
easy enough: for each texel in the texture map, simply light it according to the corresponding
lumel in the light map, and we’re done, right? In practice, several problems prevent such a
straightforward implementation. First of all, the texture map and the light map have different
sizes. Typically the light map will have a lower resolution than the texture map because a very fine
resolution of lighting detail is not always needed. Secondly, and this is where it becomes tricky,
the texture may be scaled. If we have scaled the texture down, this means that the texture then
repeats itself (we also say tiles itself) across the surface of the polygon. The problem is that
although the texels of the texture map repeat themselves across the surface of the polygon, the
lumels of the light map must not repeat themselves across the surface of the polygon. A few
moments of thought make this clear: if the lumels were also repeated across the polygon, then
altering the intensity of one lumel would alter the intensity of several points on the polygon, which
is not at all what is desired! Instead, each lumel should individually control the intensity of one
part of the polygon, regardless of whether the texture underneath repeats itself or not. See Figure
2-44.
The solution to this quandary is to create a surface which may be larger than the original tex-
ture. Specifically, the surface must be exactly large enough to contain the number of repetitions of
the texture occurring on the polygon; the exact number of repetitions depends on the relative sizes
and orientations of the polygon and the texture. This is what we mean when we say that a surface is
a “pre-tiled” texture. Then, we apply the light map to this larger, repeated texture. The result is a
single, large texture image, which (a) contains one or more repetitions of the original texture
image, and (b) has been illuminated by the light map.
Figure 2-45: Texture mapping without light mapping tiles the texture within the
rasterizer. Texture mapping with light mapping tiles the texture beforehand into a
surface, which is then treated as one large texture by the rasterizer.
After creating the pre-tiled, pre-lit surface (which may be larger than the original texture), we
must then redefine the texture coordinates of the associated polygon. If the surface is larger than
the original texture, the new texture coordinates must be appropriately smaller. For instance, if the
polygon contains exactly two repetitions of the original texture horizontally, then the original u
values go from 0 to 2. The larger surface is then twice as wide as the original texture and already
contains two repetitions of the original texture, which means that only one repetition of the surface
should be mapped to the polygon, meaning that the new u values should only go from 0 to 1.
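Concretely, if the polygon's original u coordinates span the range [minu, maxu], the pre-tiled
surface covers exactly that span, so each vertex coordinate is simply remapped into the [0,1]
range of the new surface. A minimal sketch, with illustrative variable names rather than the
actual l3d code:
// Remap an original texture coordinate into the pre-tiled surface's coordinate range.
float surface_u = (u - minu) / (maxu - minu);   // e.g. u in [0,2] becomes surface_u in [0,1]
float surface_v = (v - minv) / (maxv - minv);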
We create surfaces “on demand”—as soon as we attempt to draw a polygon, we first of all see
if we have created a surface for the polygon. Each polygon has a unique surface. If the polygon’s
surface has already been created beforehand, we simply reuse the previous surface and draw the
polygon with the surface as the texture. If the polygon’s surface has not been created, we combine
the texture and light maps into the surface and store the surface for future use. A surface is there-
fore created as soon as it is needed to be drawn. (A predictive algorithm could also attempt to
create surfaces for polygons which are not yet visible but which might soon be visible, to improve
performance.) This all implies that we need some means of managing which surfaces have been
created and which have not. This is the role of the surface cache, which determines for a particular
polygon if its surface has already been created or not. The surface cache can also discard old sur-
faces which have not been used in a long time to save memory.
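Schematically, the draw-time logic is an ordinary cache lookup. In rough pseudocode, with
hypothetical helper names rather than the literal l3d calls:
// On-demand surface creation at draw time (sketch).
if (poly->surface == NULL) {
  // not yet cached: tile the texture, apply the light map, and remember the result
  poly->surface = surface_cache->create_surface_for(poly);   // hypothetical helper
}
draw_textured_polygon(poly, poly->surface);   // the surface is used like any other texture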
Let’s now look at the l3d classes for light mapping. The class l3d_surface is a surface.
Member variable polygon points to the polygon with which the surface is associated. Member
variable surface is a pointer to an object of type l3d_texture—after all, a surface is noth-
ing more than a texture which has been pre-tiled and pre-lit. Member variable age keeps track of
the relative age of the surface, so that older surfaces can be discarded to make space for newer sur-
faces. Of course, if a discarded surface is then referenced again, it must be regenerated.
Listing 2-19: scache.h
#ifndef __SCACHE_H
#define __SCACHE_H
#include "../../tool_os/memman.h"
class l3d_texture;
class l3d_texture_data;
#include "../../tool_2d/scrinfo.h"
#include "../../datastr/list.h"
class l3d_polygon_3d_textured_lightmapped;
class l3d_surface {
public:
l3d_surface(void);
const l3d_polygon_3d_textured_lightmapped *polygon;
l3d_texture *surface;
int age;
virtual ~l3d_surface(void);
};
class l3d_surface_cache {
protected:
l3d_screen_info *sinfo;
l3d_list<l3d_surface> *surfaces;
public:
l3d_surface_cache(l3d_screen_info *sinfo) {
surfaces = new l3d_list<l3d_surface> (10);
this->sinfo = sinfo;
}
virtual ~l3d_surface_cache(void) {
int i;
delete surfaces;
}
void reset(void) {
delete surfaces;
surfaces = new l3d_list<l3d_surface> (10);
}
#include "../polygon/p3_tex.h"
#include "../polygon/p3_ltex.h"
#endif
#include "../../tool_2d/si_rgb.h"
#include "../../tool_os/memman.h"
l3d_surface::l3d_surface(void) {
surface = NULL;
polygon = NULL;
}
l3d_surface::~l3d_surface(void)
{
l3d_texture *l3d_surface_cache::
combine_lightmap_and_texmap(int width,
int height,
const l3d_polygon_3d_textured_lightmapped *polygon)
{
{
int i;
l3d_texture *new_tex;
l3d_surface *new_surface;
int i,j,u,v;
dly = l3d_divrr(int_to_l3d_real(polygon->lightmap->tex_data->height) ,
int_to_l3d_real(new_tex->tex_data->height));
dlx = l3d_divrr(int_to_l3d_real(polygon->lightmap->tex_data->width) ,
int_to_l3d_real(new_tex->tex_data->width));
for(j=0, ly=int_to_l3d_real(0); j<new_tex->tex_data->height; j++, ly+=dly) {
for(i=0, lx=int_to_l3d_real(0); i<new_tex->tex_data->width; i++, lx+=dlx) {
}
}
return new_tex;
}
l3d_texture_data *l3d_surface_cache::
find_texdata(l3d_texture_data *texdata)
{
int i;
return (*surfaces)[i].surface->tex_data;
}
}
l3d_texture *new_tex;
l3d_surface *new_surface;
new_tex->tex_data = texdata;
new_tex->owns_tex_data = false;
return NULL;
}
Surface Cache
The surface cache is responsible for managing surfaces that have already been created. We create a
surface as soon as its associated polygon is drawn for the first time. Since creating a surface is a
time-consuming process (the texture data must be replicated and combined with the light map),
we save already created surfaces for later reuse in the surface cache.
Class l3d_surface_cache is the class implementing the surface cache. Member vari-
able sinfo points to the screen information object, so that we can know the pixel format of the
texture data. Member variable surfaces is a list of pointers to surface objects. Method reset
deletes all surfaces from the cache, thereby resetting the cache and causing all surfaces to be
regenerated as they are needed. Method find_texdata searches the cache for a particular tex-
ture data object, returning a pointer to the data if it was found, or inserting it into the cache and
returning NULL if it was not found.
Method combine_lightmap_and_texmap combines a light map and a texture map,
returning a pointer to the new surface. This method takes a width and a height as parameters,
which must be the new, possibly larger, size of the pre-tiled texture. (We’ll see in the next section
how we compute these size parameters.) First of all, we search to see if we have already created
the surface and have stored it in the cache; if so, we return a pointer to the existing surface. If not,
then we must create the surface. We do this by creating a new texture of the specified width and
height, which, remember, may be larger than the original texture size to allow for pre-tiling of the
texture. We then copy the original polygon texture into the newly allocated texture, repeating the
texture as necessary. Then, we loop through all texels of this pre-tiled texture, simultaneously
stepping through the lumels in the light map. This is a simple scaling operation; we merely stretch
the light map to the dimensions of the pre-tiled texture. Note that since the light map has a lower
resolution than the texture map, several texels will map to one lumel. For each texel, we call
l3d_screen_info::light_native with the intensity of the corresponding lumel; this
changes the color of the texel to correspond to the given light level.
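The body of this copy-and-light loop is only excerpted in the listing above; structurally it is a
nested loop over the texels of the new surface, roughly as in the following sketch (surf_w,
surf_h, tex_w, tex_h, lm_w, lm_h and the data pointers are illustrative names, and light_color
stands in for l3d_screen_info::light_native):
// Sketch: build the pre-tiled, pre-lit surface from the texture map and the light map.
for (int j = 0; j < surf_h; j++) {
  for (int i = 0; i < surf_w; i++) {
    // tile: repeat the original texture across the larger surface
    unsigned char texel = tex_data[(j % tex_h) * tex_w + (i % tex_w)];
    // stretch: map the lower resolution light map over the whole surface
    int lumel_x = i * lm_w / surf_w;
    int lumel_y = j * lm_h / surf_h;
    unsigned char intensity = lm_data[lumel_y * lm_w + lumel_x];
    // light: brighten or darken the texel according to the lumel's intensity
    surf_data[j * surf_w + i] = light_color(texel, intensity);
  }
}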
The question remains as to how to compute the size of the pre-tiled texture. This depends, as
we said, on the relative sizes of the original texture and the polygon onto which the texture is
mapped. The routine compute_surface_orientation_and_size in class l3d_
polygon_3d_textured_lightmapped handles exactly this task. So, let’s now take a look
at the class l3d_polygon_3d_textured_lightmapped.
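The basic idea, as a rough sketch rather than the literal l3d computation, is to project the
polygon's vertices into the texture's (u,v) space, take the integer extents of those coordinates,
and make the surface large enough to hold that many repetitions of the texture:
// iminu/imaxu/iminv/imaxv are the floor/ceiling of the polygon's (u,v) extents,
// as computed in the compute_surface_orientation_and_size excerpt below.
int reps_u = imaxu - iminu;                           // horizontal repetitions of the texture
int reps_v = imaxv - iminv;                           // vertical repetitions
surface_xsize = reps_u * texture->tex_data->width;    // surface size in texels (sketch only)
surface_ysize = reps_v * texture->tex_data->height;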
#include "p3_tex.h"
#include "../surface/scache.h"
class l3d_polygon_3d_textured_lightmapped :
virtual public l3d_polygon_3d_textured,
virtual public l3d_texture_computer
{
protected:
l3d_surface_cache *scache;
public:
mutable l3d_texture *surface;
//- mutable since the draw routine for s/w rendering needs to assign
//- this member (the tiled, lit surface) if the surface is not currently
//- in the cache, though the poly is otherwise passed as const. logically
//- the draw routine handles the polygon as a const, physically though
//- it can change the mutable surface member
l3d_texture_space surface_orientation;
int surface_xsize, surface_ysize;
~l3d_polygon_3d_textured_lightmapped(void) {
//- DO NOT delete the surface! it belongs to the surface cache,
//- or might not even have been created if h/w accel with 2-pass
//- rendering was used
delete lightmap;
}
l3d_texture *lightmap;
void compute_surface_orientation_and_size(void);
void init_transformed(void);
void transform(const l3d_matrix &m, int count);
void draw(l3d_rasterizer_3d *r ) {
r->draw_polygon_textured_lightmapped(this);
}
l3d_polygon_3d_textured_lightmapped
(const l3d_polygon_3d_textured_lightmapped &r);
l3d_polygon_2d *clone(void);
};
#endif
#include "../../raster/ras3_sw.h"
#include "../../tool_os/memman.h"
l3d_polygon_3d_textured_lightmapped::l3d_polygon_3d_textured_lightmapped
(int num_pts, l3d_surface_cache *scache):
l3d_polygon_2d(),
l3d_polygon_3d(),
l3d_polygon_3d_clippable(),
l3d_polygon_3d_textured(num_pts)
{
this->scache = scache;
lightmap = new l3d_texture;
lightmap->tex_data = new l3d_texture_data;
lightmap->owns_tex_data = true;
lightmap->tex_data->width = 64;
lightmap->tex_data->height = 64;
lightmap->tex_data->data = new unsigned char[lightmap->tex_data->width *
lightmap->tex_data->height];
int i,j;
for(j=0;j<lightmap->tex_data->height;j++) {
for(i=0;i<lightmap->tex_data->width;i++) {
lightmap->tex_data->data[i + j*lightmap->tex_data->width] = MAX_LIGHT_LEVELS/4;
}
}
surface = NULL;
void l3d_polygon_3d_textured_lightmapped::compute_surface_orientation_and_size
(void)
{
O = texture->O.original;
U = texture->U.original - O;
V = texture->V.original - O;
l3d_real minu,minv,maxu,maxv;
int i= 0;
l3d_point tex_point;
tex_point =
world_to_tex_matrix(*texture)
|
(**vlist) [(*ivertices)[i].ivertex].original;
u = tex_point.X_ = l3d_divrr(tex_point.X_, tex_point.W_);
v = tex_point.Y_ = l3d_divrr(tex_point.Y_, tex_point.W_);
maxu = minu = tex_point.X_;
maxv = minv = tex_point.Y_;
U = texture->U.original - texture->O.original;
V = texture->V.original - texture->O.original;
#define VTX(i) ((**(vlist))[ (*(clip_ivertices))[i].ivertex ].transformed)
l3d_real u,v;
tex_point =
world_to_tex_matrix(*texture)
|
(**vlist) [(*ivertices)[i].ivertex].original;
u = tex_point.X_ = l3d_divrr(tex_point.X_, tex_point.W_);
v = tex_point.Y_ = l3d_divrr(tex_point.Y_, tex_point.W_);
if(u<minu) minu=u;
if(u>maxu) maxu=u;
if(v<minv) minv=v;
if(v>maxv) maxv=v;
}
iminu = ifloor(minu);
iminv = ifloor(minv);
imaxu = iceil(maxu);
imaxv = iceil(maxv);
assign_tex_coords_from_tex_space(surface_orientation);
}
void l3d_polygon_3d_textured_lightmapped::init_transformed(void) {
l3d_polygon_3d_textured::init_transformed();
surface_orientation.O.transformed = surface_orientation.O.original;
surface_orientation.U.transformed = surface_orientation.U.original;
surface_orientation.V.transformed = surface_orientation.V.original;
surface_orientation.O.transformed = m | surface_orientation.O.transformed;
surface_orientation.U.transformed = m | surface_orientation.U.transformed;
surface_orientation.V.transformed = m | surface_orientation.V.transformed;
l3d_polygon_2d* l3d_polygon_3d_textured_lightmapped::clone(void) {
return new l3d_polygon_3d_textured_lightmapped(*this);
}
l3d_polygon_3d_textured_lightmapped::l3d_polygon_3d_textured_lightmapped
(const l3d_polygon_3d_textured_lightmapped &r)
: l3d_polygon_2d(r),
l3d_polygon_3d(r),
l3d_polygon_3d_textured(r),
l3d_polygon_3d_clippable(r),
l3d_texture_computer(r)
{
scache = r.scache;
surface = r.surface;
surface_orientation = r.surface_orientation;
surface_xsize = r.surface_xsize;
surface_ysize = r.surface_ysize;
lightmap = r.lightmap;
}
The member variable scache is a pointer to the surface cache responsible for managing the sur-
face of the polygon. The member variable lightmap points to the light map, which is a small
texture containing light intensities. The member variable surface points to the surface itself,
which contains the tiled and lighted version of the texture. The surface is created by the surface
cache, which is also responsible for deleting the surface.
This two-pass approach requires three main modifications to our texture mapping strategy:
1. We don’t use surfaces. In other words, we don’t combine textures and light maps into one sur-
face. Therefore, we use the surface cache to store textures and light maps separately, not
surfaces.
2. We must recompute and reassign the polygon’s texture coordinates to draw the texture map
correctly, then we must restore the original texture coordinates, originally computed in com-
pute_surface_orientation_and_size, which define the position and orientation
of the light map.
3. We must tell Mesa to blend the texture map and the light map.
The Mesa rasterizer draws light mapped polygons in routine draw_polygon_textured_
lightmapped. The routine first looks for the texture data in the cache, creating the texture if it
was not found. Texture creation takes place via the OpenGL calls we saw earlier when discussing
the Mesa texture mapping routines. Next comes the tricky part: we must recompute the polygon’s
texture coordinates twice, once for the texture and once for the light map. The light maps have sep-
arate sizes and orientations, leading to two separate sets of texture coordinates.
To recompute the texture coordinates, we use the pixel-level approach discussed earlier. Here,
we are already in the rasterization stage of the pipeline. The polygon has been projected into 2D
and also clipped in 2D. Remember, the 2D clipping operation can introduce new 2D vertices
which do not directly result from a 3D to 2D projection operation, meaning that for such vertices
we have no 3D information. This means that we only have 2D information, but we now need to
compute the texture coordinates in 3D. The Mesa rasterizer code uses the screen space to texture
space formulas developed earlier. Note that it would have also been possible to use the reverse
projection (covered earlier in the discussion of the Mesa texture mapping routines) to retrieve the
3D world coordinates from the pixel coordinates, then use the world-to-texture matrix to find the
texture coordinates, but it is easier to use the pixel-level formulas, since they go directly from 2D
screen coordinates to texture coordinates.
Let’s step through the new computation of the texture coordinates for the texture map.
First, we start a loop for all post-projection, clipped 2D vertices:
for(int i=0; i<p_poly->clip_ivertices->num_items; i++) {
Then, we compute the simple projection coordinates from the actual screen coordinates. This
undoes the effect of the field of view correction, the y axis reversal, and the top-left instead of
screen-centered origin. We do this computation because the simple projection coordinates are
what we need for the screen space to texture space formulas. Notice that to perform this computa-
tion, we need the field of view terms and the screen size, which is why these variables are
referenced in the l3d_texture_computer class.
float px =
( (**p_poly->vlist) [(*p_poly->clip_ivertices)[i].ivertex].transformed.X_
- *screen_xsize/2 )
/ (*screen_xsize * *fovx);
float py =
(*screen_ysize/2
-
(**p_poly->vlist) [(*p_poly->clip_ivertices)[i].ivertex].transformed.Y_ )
/
(*screen_ysize * *fovy);
Next, given the simple projection coordinates, we simply implement the screen space to texture
space conversion formulas derived earlier. We compute u, v, and z coordinates for the current ver-
tex—in other words, the horizontal and vertical positions in the texture space of the texture map,
and the camera space z coordinate.
float ox = p_poly->texture->O.transformed.X_;
float oy = p_poly->texture->O.transformed.Y_;
float oz = p_poly->texture->O.transformed.Z_;
float ux = p_poly->texture->U.transformed.X_ - ox;
float uy = p_poly->texture->U.transformed.Y_ - oy;
float uz = p_poly->texture->U.transformed.Z_ - oz;
float vx = p_poly->texture->V.transformed.X_ - ox;
float vy = p_poly->texture->V.transformed.Y_ - oy;
float vz = p_poly->texture->V.transformed.Z_ - oz;
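The lines that actually solve for u, v, and z are not shown in this excerpt. Treating the pixel
ray Q = (px, py, 1) as intersecting the texture plane O + uU + vV, a reconstruction of that step
(not the verbatim l3d source) could read:
// Solve z*Q = O + u*U + v*V for u, v, and the camera space z, using cross products.
float qx = px, qy = py, qz = 1.0f;                                          // ray through the pixel
float qxv_x = qy*vz - qz*vy, qxv_y = qz*vx - qx*vz, qxv_z = qx*vy - qy*vx;  // Q x V
float qxu_x = qy*uz - qz*uy, qxu_y = qz*ux - qx*uz, qxu_z = qx*uy - qy*ux;  // Q x U
float uxv_x = uy*vz - uz*vy, uxv_y = uz*vx - ux*vz, uxv_z = ux*vy - uy*vx;  // U x V
float denom = qx*uxv_x + qy*uxv_y + qz*uxv_z;          // Q . (U x V); nonzero for a visible polygon
float u =  (ox*qxv_x + oy*qxv_y + oz*qxv_z) / denom;   //  O . (Q x V) / denom
float v = -(ox*qxu_x + oy*qxu_y + oz*qxu_z) / denom;   // -O . (Q x U) / denom
float z =  (ox*uxv_x + oy*uxv_y + oz*uxv_z) / denom;   //  O . (U x V) / denom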
((l3d_polygon_ivertex_textured *)&(*p_poly->clip_ivertices)[i])
->tex_coord.set(float_to_l3d_real(u),
float_to_l3d_real(v),
float_to_l3d_real(z),
float_to_l3d_real(1.0));
Once we have computed the u, v, and z values, based on the texture map orientation, for all vertex
indices within the polygon, we then draw the polygon using the previous Mesa texture mapping
routine, draw_polygon_textured.
At this point, we now have the texture mapped polygon drawn. Next, we make a second pass
to add in the effect of the light map. Before we make the second pass, though, we must tell
OpenGL that we want the next texture that is drawn, which will be the light map, to not overwrite
the current pixels, but instead to alter their intensity. The following lines do this:
//- set up mag/min filters to make light maps smoother
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
//- set up blend function to combine light map and text map
glEnable(GL_BLEND);
glBlendFunc(GL_ZERO, GL_SRC_ALPHA);
The two calls to glTexParameteri tell OpenGL to smooth the light maps. A texture map or
light map is (practically) always displayed on-screen either larger or smaller than its original
dimensions; in other words, it is very rare that one pixel in the texture map maps to exactly one and
only one pixel in the final image. More often, a texel in the texture map maps to either less than one
or more than one pixel in the final image. This leads to either stretching of the texture image,
which OpenGL calls magnification, or squashing of the image, which OpenGL calls minification.
In such cases, we can either choose the nearest texel to display, which is what we have been doing
so far in both the software and Mesa rasterizers, or we can compute an average of the nearest texel
and the surrounding texels, displaying the averaged color. Doing such averaging is, of course,
slower than simply choosing the nearest texel, but it also generally results in a better quality
image, smoothing out the changes in a texture map and making the individual texels harder to dis-
cern. Since light maps are typically low resolution, and since we are using Mesa mainly as an
interface to graphics acceleration hardware, we go ahead and turn on the averaging, assuming that
the underlying graphics acceleration hardware is capable of handling the additional computations.
The parameter GL_LINEAR tells OpenGL to perform a weighted linear average on the texels.
Then, we also need to tell OpenGL to blend the light map with the existing pixels from the
texture map which were already drawn in the first pass. To do this, we call glEnable(GL_
BLEND), which enables some sort of blending for the coming drawing operations, and then call
glBlendFunc(GL_ZERO, GL_SRC_ALPHA) to specify exactly how the already existing
pixels should be blended with the new pixels to be drawn from the light map. The two parameters
to glBlendFunc control the relative weight of the new (or source) and the existing (or destina-
tion) pixels, respectively. Thus, the first parameter GL_ZERO means that we display none of the
color of the light map. The second parameter GL_SRC_ALPHA means that we display the existing
color from the texture map, but using the value from the source pixel (the light map) as an alpha
value. In computer graphics, an alpha value is simply a transparency value; the greater the alpha,
the more transparent a pixel is. Thus, by treating the light map values as transparency values, we
can “see through” the brighter areas of the light map to see more of the underlying light from the
texture; in darker areas of the light map, we see less or none of the original texture. This is exactly
the desired illumination effect.
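In terms of OpenGL's blending equation, this choice of factors means that each already drawn
texture pixel is simply scaled by the light map's alpha value. Per color channel, the effect is
the following (a sketch of the arithmetic, not actual rendering code):
// glBlendFunc(GL_ZERO, GL_SRC_ALPHA):
//   final = source*GL_ZERO + destination*GL_SRC_ALPHA
//         = lightmap_color*0 + texture_color*lightmap_alpha
float final_r = dst_r * src_alpha;   // src_alpha near 1: keep most of the texture color
float final_g = dst_g * src_alpha;   // src_alpha near 0: the pixel fades toward black
float final_b = dst_b * src_alpha;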
Then, the rest of the routine draw_polygon_textured_lightmapped draws the same
polygon again, only this time with the light map as the texture. First, we look for the light map data
in the surface cache, creating it if necessary with OpenGL calls. Then, we restore the original light
map texture coordinates for each vertex, which we saved during the first texture coordinate
recomputation. Finally, we draw the polygon just as we did in the draw_polygon_textured
routine, performing a reverse projection using the polygon’s plane to obtain the pre-projection
coordinates Mesa needs to do perspective correct texture mapping.
Listing 2-23: shapes.h, the declaration of the light mapped pyramid objects for sample
program lightmap
#include "../lib/geom/object/object3d.h"
#include "../lib/geom/polygon/p3_ltex.h"
#include "../lib/geom/texture/texture.h"
#include "../lib/geom/texture/texload.h"
Listing 2-24: shapes.cc, the implementation of the light mapped pyramid objects for sample
program lightmap
#include "shapes.h"
#include <stdlib.h>
#include <string.h>
l3d_polygon_3d_textured_lightmapped *p;
int pi;
pi = polygons.next_index();
polygons[pi] = p = new l3d_polygon_3d_textured_lightmapped(3,scache);
polygons[pi]->vlist = &vertices;
(*(polygons[pi]->ivertices))[polygons[pi]->ivertices->next_index()].ivertex=0;
(*(polygons[pi]->ivertices))[polygons[pi]->ivertices->next_index()].ivertex=1;
(*(polygons[pi]->ivertices))[polygons[pi]->ivertices->next_index()].ivertex=3;
l->load("test.ppm");
p->texture = new l3d_texture;
p->texture->tex_data = new l3d_texture_data;
p->texture->tex_data->width = l->width;
p->texture->tex_data->height = l->height;
p->texture->tex_data->data = l->data;
p->texture->owns_tex_data = true;
p->texture->O = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[0].ivertex];
p->texture->U = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[1].ivertex];
p->texture->V = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[2].ivertex];
polygons[pi]->compute_center();polygons[pi]->compute_sfcnormal();
p->assign_tex_coords_from_tex_space(*p->texture);
p->compute_surface_orientation_and_size();
p->plane.align_with_point_normal(p->center.original, normalized(p->sfcnormal.original -
p->center.original));
pi = polygons.next_index();
polygons[pi] = p = new l3d_polygon_3d_textured_lightmapped(3,scache);
polygons[pi]->vlist = &vertices;
(*(polygons[pi]->ivertices))[polygons[pi]->ivertices->next_index()].ivertex=2;
(*(polygons[pi]->ivertices))[polygons[pi]->ivertices->next_index()].ivertex=3;
(*(polygons[pi]->ivertices))[polygons[pi]->ivertices->next_index()].ivertex=1;
p->texture->O = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[0].ivertex];
p->texture->U = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[1].ivertex];
p->texture->V = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[2].ivertex];
polygons[pi]->compute_center();polygons[pi]->compute_sfcnormal();
p->assign_tex_coords_from_tex_space(*p->texture);
p->compute_surface_orientation_and_size();
p->plane.align_with_point_normal(p->center.original, normalized(p->sfcnormal.original -
p->center.original));
pi = polygons.next_index();
polygons[pi] = p = new l3d_polygon_3d_textured_lightmapped(3,scache);
polygons[pi]->vlist = &vertices;
(*(polygons[pi]->ivertices))[polygons[pi]->ivertices->next_index()].ivertex=0;
(*(polygons[pi]->ivertices))[polygons[pi]->ivertices->next_index()].ivertex=2;
(*(polygons[pi]->ivertices))[polygons[pi]->ivertices->next_index()].ivertex=1;
p->texture->O = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[0].ivertex];
p->texture->U = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[1].ivertex];
p->texture->V = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[2].ivertex];
polygons[pi]->compute_center();polygons[pi]->compute_sfcnormal();
p->assign_tex_coords_from_tex_space(*p->texture);
p->compute_surface_orientation_and_size();
p->plane.align_with_point_normal(p->center.original, normalized(p->sfcnormal.original -
p->center.original));
pi = polygons.next_index();
polygons[pi] = p = new l3d_polygon_3d_textured_lightmapped(3,scache);
polygons[pi]->vlist = &vertices;
(*(polygons[pi]->ivertices))[polygons[pi]->ivertices->next_index()].ivertex=3;
(*(polygons[pi]->ivertices))[polygons[pi]->ivertices->next_index()].ivertex=2;
(*(polygons[pi]->ivertices))[polygons[pi]->ivertices->next_index()].ivertex=0;
p->texture->owns_tex_data = true;
p->texture->O = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[0].ivertex];
p->texture->U = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[1].ivertex];
p->texture->V = (**(polygons[pi]->vlist))[(*(polygons[pi]->ivertices))[2].ivertex];
polygons[pi]->compute_center();polygons[pi]->compute_sfcnormal();
p->assign_tex_coords_from_tex_space(*p->texture);
p->compute_surface_orientation_and_size();
p->plane.align_with_point_normal(p->center.original, normalized(p->sfcnormal.original -
p->center.original));
num_xforms = 1;
modeling_xforms[0].set
( float_to_l3d_real(1.), float_to_l3d_real(0.), float_to_l3d_real(0.), float_to_l3d_real(0.),
float_to_l3d_real(0.), float_to_l3d_real(1.), float_to_l3d_real(0.), float_to_l3d_real(0.),
float_to_l3d_real(0.), float_to_l3d_real(0.), float_to_l3d_real(1.), float_to_l3d_real(0.),
float_to_l3d_real(0.), float_to_l3d_real(0.), float_to_l3d_real(0.), float_to_l3d_real(1.) );
modeling_xform=
modeling_xforms[0];
}
pyramid::~pyramid(void) {
for(register int i=0; i<polygons.num_items; i++) {delete polygons[i]; }
}
Listing 2-25: main.cc, the main program file for sample program lightmap
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <math.h>
#include "../lib/geom/object/object3d.h"
#include "../lib/tool_2d/screen.h"
#include "../lib/tool_os/dispatch.h"
#include "../lib/raster/rasteriz.h"
#include "../lib/tool_2d/scrinfo.h"
#include "../lib/geom/world/world.h"
#include "../lib/system/fact0_2.h"
#include "../lib/geom/texture/texture.h"
#include "../lib/geom/texture/tl_ppm.h"
#include "../lib/pipeline/pi_wor.h"
#include "shapes.h"
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <math.h>
#include <stdarg.h>
class my_world :
public l3d_world,
public l3d_texture_computer
{
public:
my_world(void);
void update_all(void);
virtual ~my_world(void) {
delete texture_loader;
delete scache;
}
l3d_texture_loader *texture_loader;
l3d_surface_cache *scache;
void my_pipeline::key_event(int c) {
l3d_pipeline_world::key_event(c);
switch(c) {
case 'p': {
my_world *w;
w = dynamic_cast<my_world *>(world);
if(w) {
w->place_lamp();
w->reset_surface_cache();
}
}
}
}
main() {
l3d_dispatcher *d;
my_pipeline *p;
my_world *w;
factory_manager_v_0_2.choose_factories();
d = factory_manager_v_0_2.dispatcher_factory->create();
w = new my_world();
p = new my_pipeline(w);
d->pipeline = p;
d->event_source = w->screen;
d->start();
delete p;
delete d;
delete w;
}
void my_world::update_all(void) {
l3d_world::update_all();
}
my_world::my_world(void)
: l3d_world(400,300)
{
rasterizer_3d_imp->fovx = &(camera->fovx);
rasterizer_3d_imp->fovy = &(camera->fovy);
rasterizer_3d_imp->screen_xsize = &(screen->xsize);
rasterizer_3d_imp->screen_ysize = &(screen->ysize);
camera->VRP.set(0,0,-50,0);
camera->near_z = float_to_l3d_real(5.5);
camera->far_z = int_to_l3d_real(500);
int i,j,k,onum;
if (objects[onum]==NULL) exit(1);
objects[onum]->modeling_xforms[0].set
( int_to_l3d_real(1), int_to_l3d_real(0), int_to_l3d_real(0), int_to_l3d_real(i),
int_to_l3d_real(0), int_to_l3d_real(1), int_to_l3d_real(0), int_to_l3d_real(1),
int_to_l3d_real(0), int_to_l3d_real(0), int_to_l3d_real(1), int_to_l3d_real(1),
int_to_l3d_real(0), int_to_l3d_real(0), int_to_l3d_real(0), int_to_l3d_real(1) );
objects[onum]->modeling_xform =
objects[onum]->modeling_xforms[0] ;
}
screen->refresh_palette();
}
int my_world::compute_light_level
(l3d_real u, l3d_real v)
{
l3d_real x,y,z;
l3d_point aLight(0,0,0,1);
l3d_vector L(aLight.X_ - x ,
aLight.Y_ - y ,
aLight.Z_ - z ,
int_to_l3d_real(0)),
N(cross(U,V));
l3d_real intensity;
l3d_real f_att=l3d_divrr(int_to_l3d_real(1) ,
(float_to_l3d_real(0.1)+float_to_l3d_real(0.02*sqrt(l3d_real_to_float(dot(L,L))))));
if (f_att>int_to_l3d_real(1)) f_att=int_to_l3d_real(1);
return l3d_real_to_int(intensity);
}
void my_world::place_lamp(void)
{
int iObj, iFacet;
l3d_vector facet_to_cam_vec;
l3d_vector N;
l3d_polygon_3d_clippable *pc;
object->init_nonculled_list();
object->vertices->num_varying_items = 0;
object->transform_stage = 0;
for(register int ivtx=0; ivtx<object->vertices->num_fixed_items; ivtx++) {
(*(object->vertices))[ivtx].reset();
}
l3d_polygon_3d_node *n;
n = object->nonculled_polygon_nodes;
while(n) {
n->polygon->init_clip_ivertices();
n->polygon->init_transformed();
n=n->next;
}
if (object->num_xforms) {
object->transform(object->modeling_xform);
}
object->camera_transform(camera->viewing_xform);
n = object->nonculled_polygon_nodes;
while(n) {
l3d_polygon_3d_textured_lightmapped *pt;
pt = dynamic_cast<l3d_polygon_3d_textured_lightmapped *>
(n->polygon);
if(pt) {
int lumel_y,lumel_x;
O = pt->surface_orientation.O.transformed;
U = pt->surface_orientation.U.transformed - O;
V = pt->surface_orientation.V.transformed - O;
printf("obj %d ", iObj);
O.print();
U.print();
V.print();
pt->lightmap->tex_data->data
[ lumel_y * pt->lightmap->tex_data->width + lumel_x ] = new_light;
}
}
}
n = n->next;
}
}
}
The pyramid class is the same as in the previous program textest; the only differences are
the pyramid does not rotate and the polygons used to make up the pyramid are of type
l3d_polygon_3d_textured_lightmapped. Since all important functions on polygons
are virtual functions, we need only create instances of our new light mapped polygons and assign
them to our 3D objects. Notice that the constructor of the pyramid, as well as the constructor of the
light mapped polygons, require a surface cache as a parameter. The surface cache is created and
destroyed by our custom world class, my_world.
The custom world class offers two new methods for light mapping: place_lamp and com-
pute_light_level. Method place_lamp transforms all polygons in the scene into camera
space. Since all polygons are light mapped polygons, this means that all light map orientations are
also transformed into camera space. Then, for each lumel in each light map, we compute the light
level by calling compute_light_level. This illuminates all lumels of all polygons in the
scene.
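Schematically, the per-lumel loop inside place_lamp looks like the following sketch. It is
simplified (the real routine also works with the surface size and accumulates light levels rather
than just assigning them), but the names follow the excerpt above:
// Sketch of the per-lumel illumination loop for one light mapped polygon pt.
for (int lumel_y = 0; lumel_y < pt->lightmap->tex_data->height; lumel_y++) {
  for (int lumel_x = 0; lumel_x < pt->lightmap->tex_data->width; lumel_x++) {
    // (u,v) position of this lumel within the light map, in the range [0,1]
    l3d_real u = l3d_divrr(int_to_l3d_real(lumel_x),
                           int_to_l3d_real(pt->lightmap->tex_data->width));
    l3d_real v = l3d_divrr(int_to_l3d_real(lumel_y),
                           int_to_l3d_real(pt->lightmap->tex_data->height));
    int new_light = compute_light_level(u, v);   // diffuse lighting at this lumel
    pt->lightmap->tex_data->data
      [ lumel_y * pt->lightmap->tex_data->width + lumel_x ] = new_light;
  }
}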
Method compute_light_level first converts the (u,v) coordinates of the lumel back
into camera space, by using the light map’s orientation vectors (as a coordinate system), which
have also been translated into camera space. We then compute the light intensity of the lumel
using the diffuse lighting equation presented earlier. The light position is set to be at the origin—in
other words, the location of the camera. We clamp the maximum light intensity from the lighting
equation only to be half of the maximum allowable light intensity. This ensures that cumulative
applications of light can still cause an increase in intensity.
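The computation of the lumel's camera space position and of the final intensity is elided in the
excerpt above. A sketch of those missing steps (assuming the fixed-point multiply helper
l3d_mulrr and using the O, U, V members set up by place_lamp; not the verbatim l3d source):
// lumel (u,v) -> 3D camera space position on the light mapped polygon
x = O.X_ + l3d_mulrr(u, U.X_) + l3d_mulrr(v, V.X_);
y = O.Y_ + l3d_mulrr(u, U.Y_) + l3d_mulrr(v, V.Y_);
z = O.Z_ + l3d_mulrr(u, U.Z_) + l3d_mulrr(v, V.Z_);
// ... the vectors L and N and the attenuation f_att are built as in the excerpt ...
// diffuse term, clamped to half the maximum level so repeated lamp placements can still add light
intensity = l3d_mulrr(f_att,
            l3d_mulrr(int_to_l3d_real(MAX_LIGHT_LEVELS/2),
                      dot(normalized(N), normalized(L))));
if (intensity < int_to_l3d_real(0)) intensity = int_to_l3d_real(0);   // back-facing lumels get no light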
NOTE We convert from 2D lumel (u,v) space into 3D camera space (3D space with the
camera at the origin in the standard orientation), instead of converting from 2D lumel space
into 3D world space (3D space with the camera somewhere other than the origin and ori-
ented arbitrarily), then perform the lighting calculations in 3D camera space. It would also be
possible to convert from lumel space to world space, by applying the inverse of the camera
transformation matrix to go from lumel space to camera space to world space. Then, we
would perform the lighting computations in world space. But this extra conversion to world
space is unnecessary. We just need to convert from 2D lumel space into some 3D space where
we can determine the 3D light positions, 3D light vectors, and 3D polygon surface normal
vectors, so that we can perform the 3D lighting computations. Both camera space and world
space are fine for this purpose. In this particular example, it is simply easier to use camera
space coordinates, though you may find it more intuitive to think of lighting computations as
occurring in world space. As long as we can examine the relative 3D positions of lights, light
vectors, and normal vectors, it doesn’t matter which 3D coordinate system we use to do the
lighting computations.
Testing to see whether any polygon lies between the light source and the lumel requires the
creation of a vector from the light source to the lumel and an intersection test to see if the vector
intersects any polygon in the scene. Creating the vector is easy: its starting point in 3D is the light
source, and its ending point in 3D is the lumel. The intersection test is harder. We learned how to
intersect a parametric line with a plane when we looked at clipping against arbitrarily oriented
planes. The problem here is that we don’t want to intersect a line with an entire infinite plane, but
only with a polygon—a tiny polygonal cut-out portion of a plane. This requires the line-plane test
developed earlier in this chapter, plus an additional test to see if the point of intersection lies within
the bounds of the polygon. Chapter 8 develops exactly this line-polygon intersection test, for the
purposes of collision detection.
With the line-polygon intersection test defined, computing shadows simply reduces to a
brute-force check to see if any polygon obstructs any light source for every lumel in the scene.
This works, but requires a lot of computation. One way of getting around this is to divide the scene
up into separate larger regions, such as cubes, with each region containing many polygons. We
then intersect the light ray with the boundaries of each region. If the light ray does not intersect a region,
then it cannot intersect any of the polygons in the region. This can reduce the total amount of work
needed to compute shadows. A related way of computing shadows has to do with an explicit visi-
bility encoding by means of portals (see Chapter 5).
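As a sketch of the brute-force occlusion test (intersect_line_with_polygon is a hypothetical
stand-in for the line-polygon test that Chapter 8 develops):
// Is any polygon between the light source and the lumel's 3D position?
bool in_shadow(const l3d_point &light_pos, const l3d_point &lumel_pos,
               l3d_polygon_3d **polys, int num_polys)
{
  for (int i = 0; i < num_polys; i++) {
    // plane intersection plus a point-in-polygon containment test
    if (intersect_line_with_polygon(light_pos, lumel_pos, polys[i])) {
      return true;    // an occluder blocks the light; the lumel is in shadow
    }
  }
  return false;
}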
NOTE There are also many other ways of computing shadows. Visible surface determina-
tion, covered in Chapters 4 and 5, is a huge area in 3D graphics. Computing shadows is also
essentially a visible surface problem; we must determine which surfaces are visible from the
viewpoint of the light source. Surfaces which can be “seen” by the light are illuminated; those
invisible to the light are in shadow.
Summary
In this chapter, we saw several visual techniques for 3D polygons. These techniques include 3D
morphing, light computation, texture mapping with PPM image files, and light mapping. We
solved some equations using the symbolic algebra package Calc, and observed the linearity of u/z,
v/z, and 1/z in screen coordinates, and the non-linearity of u, v, and z in screen coordinates.
We now have a pretty good handle on 3D polygons, but our sample programs have still used
fairly basic 3D objects: pyramids consisting of just a few triangles. Creating more complicated
objects requires the use of a 3D modeling package. The next chapter covers using the free 3D
modeling package Blender to create more interesting models for use in our 3D programs.
Chapter 3
3D Modeling with
Blender
The Blender 3D modeling package is a powerful tool for creating 3D objects. Chapter 1
already reviewed the basics of using Blender to create simple polygonal models for import
into an l3d program. In this chapter, we create more complex models, illustrating the use of
Blender’s animation and texture mapping features. In particular, we cover the following topics:
n Assigning texture coordinates to models
n Using VRML to export and import texture coordinates
n Creating compatible morph targets
n Viewing animations in Blender
n Using inverse kinematics and rotoscoping
This chapter is organized around two tutorials. The first illustrates a 3D morph between a cube and
a texture-mapped tree. The second creates a jointed, animated human figure.
Figure 3-2
with vertices. This displays vertex keys in the IpoWindow.
4. Ensure that the ICONBUT with the image of a key is not active.
Notice the two horizontal lines within the window. The bottommost orange line represents the
first, baseline vertex key. The first vertex key is always displayed in orange. The second and all
additional vertex keys are displayed as blue horizontal lines. The order of the vertex keys is from
bottom to top.
Also notice the curved line. This is the interpolation curve, which interpolates from the first
key to the second key (vertically) over time (horizontally). The vertical green line indicates the
current frame number. If you change the current frame with the arrow keys, the green line moves
left or right along the time axis accordingly.
Here are some operations you can perform in the IpoWindow. Don’t try these out just now
because the current configuration is important for completing this tutorial. You can right-click a
vertex key in the IpoWindow to select it, at which point it changes to a lighter color (light orange
for the first vertex key, light blue for the additional vertex keys). Type g to grab and move the
selected vertex key(s) to another location, or x to delete a vertex key. If you add vertex keys in the
3DWindow (remember that you must change the frame number to insert multiple vertex keys),
you can see how more and more blue lines (the vertex keys) appear within the IpoWindow.
For now, make sure that only two vertex keys are present, and that the second vertex key, the
topmost one, is selected (displayed in light blue).
Figure 3-5
4. Press Left Arrow a few times to decrease the frame number by a few frames. Notice that the
sharp point you created at the corner of the mesh slowly starts to disappear. As you decrease
the frame number, Blender automatically interpolates the vertex positions to the appropriate
vertex key (computed by Blender by intersecting the interpolation curve in the IpoWindow
with the vertex key for the current frame number), which in this case is the first vertex key.
Since the first vertex key contains the unmodified cube, the mesh morphs back to the original
cube the closer the frame number is to the first frame.
Figure 3-6
5. Press Right Arrow to return the frame number back to frame 21. Notice that the sharp point
reappears as you approach frame 21, the time at which the second vertex key was inserted.
Importantly, and quite fortunately, if we save a Videoscape mesh from Blender, the exported
geometry reflects the current state of the mesh for the currently active frame. This means that to
export two compatible morph targets, we simply change to frame 1, write the first mesh, then
change to frame 21, at which point Blender automatically interpolates the vertices to the second
vertex key, and write a second mesh.
Now, you should again be at frame 21. Your job now is to enter EditMode and deform the
mesh into something interesting. Any shape will do, but for this tutorial I chose to deform the
mesh into the shape of a palm tree. Here are some hints for making the palm tree:
1. Start by making the bottom half of the cube longer and thinner to represent the trunk. Next,
make the top flat and somewhat x-shaped. Finally, pull down the outer edges of the top x to
make the leaves droop down.
2. Select groups of vertices using border select mode, and translate, rotate, or scale these groups
of vertices. Creating the tree by moving each vertex individually would be very time
consuming.
3. Be careful not to pull vertices around in physically impossible ways. For instance, if a vertex
is originally on one side of an edge in the original model, you should not pull the vertex such
that it is on the other side of the edge. If you do so, this would reverse the front/back orienta-
tion of the faces connected with the vertex, which leads to incorrect display later. Think of
yourself as a sculptor working with a malleable balloon; you can stretch or compress the bal-
loon to change its shape, but if you try to poke the balloon through itself, the whole thing
blows up in your face.
The file treemorph.blend (in the same directory as the source code for the next sample pro-
gram, vids3d) contains the palm tree created by deforming the cube.
NOTE The following paragraphs describe the use of the (u,v) texture editor, a relatively new
feature of Blender. This feature appears to have some small problems in Blender version
1.80, so you might want to use Blender 2.0 to do texturing.
Applying textures in this way requires us to use a special mode of Blender called FaceSelect
mode. Toggle this mode on and off by pressing f. (You can think of this as being analogous to
EditMode, which is toggled on and off with Tab.) In FaceSelect mode, right-clicking a face in the
3DWindow causes it to be selected. Press Shift+right-click to select multiple faces, b to select
faces using border select mode, a to select/deselect all faces, or h to temporarily hide selected
faces. Notice that these keys are the same ones that are used in EditMode for selecting vertices, or
outside of EditMode for selecting objects. Blender draws the selected faces with a yellow dotted
border.
NOTE In EditMode, you can select faces in a limited fashion by selecting all vertices belong-
ing to the face. The problem with EditMode selection is that it selects all faces sharing the
selected vertices; in other words, face selection in EditMode is actually an alternative interpre-
tation of vertex selection, which sometimes makes face selection awkward in EditMode. On
the other hand, FaceSelect mode lets you select or deselect individual faces directly at the face
level, independent of the vertices.
You may find it easiest to use FaceSelect mode in the perspective view, rotating the viewpoint as
needed to display the faces to be selected. Also, first hiding obstructing faces by pressing Alt+h is
a useful way of making selecting easier.
After selecting faces in FaceSelect mode, Blender then allows you to map a texture to the
selected faces in various ways. To do this, you first must load a texture image. Do this in a
window of type ImageWindow. It is easiest if you have an ImageWindow open next to the
3DWindow. Load an image via the Load button in the window header of the ImageWindow. Note
that if the size of the ImageWindow is too small, the window header might be cut off, and the Load
button might be invisible. Temporarily maximize the ImageWindow in this case to see the Load
button. For the purposes of interactive texture mapping within Blender, the image should be
square; its width and height must be identical. Supported file formats are JPEG, Targa, Iris, or
HamX (a custom Blender format).
NOTE Note that Blender does not support the PPM format, which is the format used by l3d,
so we need to have two copies of our texture images, one for Blender and one for l3d. You
can use a program such as GIMP to convert between image formats. GIMP automatically rec-
ognizes file extensions, so converting from JPEG to PPM is really as simple as loading the
JPEG and saving as PPM. Alternatively, the programs cjpeg and djpeg allow for batch
mode conversion to and from the JPEG format.
Loading an image in the ImageWindow associates the texture image with the currently selected
faces. Since our Videoscape plug-in object only supports one texture image per object, as men-
tioned above, you should select all faces and then load an image to associate the same image file
with all faces. Do this as follows:
n Cylinder: Applies a cylindrical mapping. The texture space is an imaginary cylinder drawn
around the object, and the texture coordinates are determined by projecting vertices of each
polygon onto this cylinder. A cylindrical mapping is good for wrapping a label texture around
a bottle.
n Sphere: Applies a spherical mapping. The texture space is an imaginary sphere around the
object, and the texture coordinates are determined by projecting vertices of each polygon onto
the sphere. A spherical mapping is good for wrapping a texture around a sphere or sphere-like
object—a creature’s head, for instance.
n Bounds to 64, bounds to 128: Uses the current projection of the polygons in the 3DWindow
for computing the texture coordinates, rounding these to fit within a bounding box of 64×64
or 128×128 pixels, respectively.
n Standard 64, standard 128, standard 256: Uses the default square mapping, where every face
is mapped to an identical square subset of the texture with a size of 64×64, 128×128, or
256×256 pixels, respectively. This is the initial face mapping.
n From window: Uses the current projection of the polygons in the 3DWindow for computing
texture coordinates.
After selecting one of these mappings, Blender computes the mapping and stores a (u,v) texture
coordinate with every vertex of every selected face. The results of the mapping are immediately
visible in the 3DWindow (assuming that textured display mode has been activated with Alt+z).
Furthermore, the (u,v) mapping for the selected faces is displayed as a set of connected verti-
ces in the ImageWindow. Remember that the ImageWindow displays the texture, which
represents the texture space; the (u,v) texture coordinates for each vertex are locations in texture
space, which can thus be displayed as points on the image. Within the ImageWindow, you can then
use the normal mesh editing keys to select, grab, scale, or rotate these (u,v) points, which has the
effect of repositioning them within texture space and thus assigning new texture coordinates to the
corresponding polygon vertices. This allows you, for instance, to map a certain part of the texture
exactly onto an entire face, by moving the points corresponding to the face’s vertices onto the
appropriate locations on the image in the ImageWindow.
The texture file treetex.jpg (in the same directory as the treemorph.blend file) has
been applied to the tree by using a cylindrical mapping for the trunk and a window-based mapping
for the leaves. The texture file contains two parts. The top part of the texture is green, for the
leaves; the bottom part of the texture is brown, for the trunk.
You can now see why we earlier stated that it is not necessarily very limiting that our
Videoscape plug-in only allows one texture per object. By defining a suitably large texture con-
taining many smaller images, and by assigning specific parts of the texture to specific faces, we
can effectively apply several textures to different faces, even though physically we are only using
one texture.
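To make the idea concrete, the following small sketch (purely illustrative; the struct, function, and
region values are hypothetical and not part of l3d) shows what moving (u,v) points into a
sub-rectangle of a combined texture amounts to numerically:

struct atlas_region {
  float u0, v0; //- one corner of the sub-rectangle in texture space
  float u1, v1; //- the opposite corner
};

//- Remap a (u,v) pair that spans the whole image (0..1 in both axes)
//- into the given sub-rectangle of the combined texture.
void remap_into_region(float &u, float &v, const atlas_region &r) {
  u = r.u0 + u*(r.u1 - r.u0);
  v = r.v0 + v*(r.v1 - r.v0);
}

//- Example (made-up bounds): with v=0 at the top of the image, the green
//- leaf strip might occupy the top half and the brown trunk strip the
//- bottom half:
//-   atlas_region leaves = { 0.0f, 0.0f, 1.0f, 0.5f };
//-   atlas_region trunk  = { 0.0f, 0.5f, 1.0f, 1.0f };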
For this example of the palm tree, you can perform the mapping as follows:
1. Select all faces in the trunk of the tree model. You will need to rotate the model and extend the
selection with Shift+right-click a few times to do this, because it is not possible to select a
face when you are viewing it from its back side.
2. Press u and apply a cylindrical mapping to all the faces in the trunk of the tree.
3. In the ImageWindow, press a to select all texture vertices.
4. Press s to scale down the texture vertices vertically. Move the mouse cursor vertically towards
the center of the selected vertices and middle-click. This limits the scaling in the vertical
direction. Scale down the vertices until they are as high as the brown portion of the texture.
5. Press s to scale down the texture vertices horizontally. Move the mouse cursor horizontally
towards the center of the selected vertices and middle-click. Scale down the vertices until they
are as wide as the brown portion of the texture.
6. Press g to translate the texture vertices. Move them so that all vertices are located within the
brown portion of the texture.
At this point, you have cylindrically wrapped the brown portion of the texture around the selected
faces, which is a fairly good way of doing the mapping, since the trunk of a tree is essentially a
cylinder.
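As a rough illustration of what a cylindrical mapping computes (a conceptual sketch only, not
Blender's actual algorithm; the axis choice and normalization range are assumptions), the (u,v)
coordinates of a vertex can be derived from its position in cylindrical coordinates around the
object's vertical axis:

#include <math.h>

//- Conceptual cylindrical mapping: u comes from the angle around the
//- vertical (y) axis, v from the height of the vertex between y_min and
//- y_max. Illustrative only; Blender's mapping differs in its details.
void cylindrical_uv(float x, float y, float z,
                    float y_min, float y_max,
                    float &u, float &v)
{
  u = (atan2f(z, x) + M_PI) / (2.0f * M_PI); //- 0..1 around the cylinder
  v = (y - y_min) / (y_max - y_min);         //- 0..1 along its height
}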
Proceed in a similar fashion to wrap the green portion of the texture around the leaves, but use
the From Window mapping. First, look at the tree from the side view in the 3DWindow so that the
flat sides of the leaves are visible. Then, select all leaf faces, apply a texture mapping from the
window view, and scale and translate the vertices in the ImageWindow to be positioned over the
correct portion of the texture (the green part in this case).
Figure 3-12
If you want to tweak the texture coordinates, you simply need to drag the appropriate vertices
in the ImageWindow. Again, you would do this to exactly control which parts of the image get
mapped to which parts of which faces. After repositioning a vertex in the ImageWindow, the
3DWindow automatically displays the results of the new mapping. To see the results of the map-
ping while you are still moving the vertex and before you have confirmed its new position,
activate the Lock button in the header of the ImageWindow.
CAUTION Single-pass light mapping does not work if (u,v) coordinates have been directly
assigned as described above.
It is important to note that manually assigning (u,v) vertices in this manner creates a non-linear
mapping between texture space and world space. Each vertex in the ImageWindow represents a
vertex of a polygon in 3D space. By manually positioning these vertices in the ImageWindow, we
create a connection between the vertex location in 3D space and a (u,v) location in texture space.
In other words, we create an arbitrarily defined mapping between world space and texture space.
This means that with this method, we have no simple matrix that converts between texture and
world space, as we did in Chapter 2, where we always used the texture space to calculate the tex-
ture coordinates as opposed to letting the user arbitrarily position them. The lack of a simple
conversion between texture and world space when using direct (u,v) texture coordinate assignment
implies that any techniques requiring a texture-world space or world-texture space conver-
sion cannot be applied. In particular, the single-pass light mapping strategy of Chapter 2, using a
single pre-tiled surface combining texture and light map information, cannot be applied with
objects that have been textured through direct (u,v) assignment, because this technique requires us
to convert from texture (or lumel) space back into world space, which is not (easily) possible if the
(u,v) coordinates have been arbitrarily assigned. Note that a two-pass light mapping strategy is
still entirely possible even with arbitrarily assigned (u,v) coordinates, because in this case the sec-
ond lighting pass has nothing to do with the arbitrary (u,v) coordinates, instead using a
well-defined light map space. It’s only with the single-pass light mapping that we have problems.
For this reason, the Videoscape plug-in creates textured polygons, but not textured light mapped
polygons.
NOTE The newest free version of Blender supports Python scripting (previously this was only
available in the commercial C-key version). Therefore, it should also be possible to write a
Python script to export the texture coordinates.
The texture coordinates file is the mesh.uv file we spoke of earlier when discussing the parameter
string passed to the Videoscape plug-in object. It is simply an ASCII file, where each line
contains a single (u,v) coordinate. The ordering of the lines in the texture coordinates file should
be the same as the order of the vertex indices defining the faces in the Videoscape file. Note that
the texture coordinates are associated with each vertex index defining a face, and not with the
common vertices shared by all faces. This is because even though two adjacent faces may share a
vertex, the texture coordinate at the shared vertex might be different for the two faces. Thus, tex-
ture coordinates are defined per face, and the texture coordinates file specifies one (u,v) coordinate
for each vertex index of each face.
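Reading such a file back is straightforward; the following sketch (a hypothetical helper, not the
actual vidmesh plug-in code) reads one (u,v) pair per line into caller-supplied arrays, in the same
order as the vertex indices of the faces:

#include <stdio.h>

//- Read a mesh.uv file: one ASCII "u v" pair per line, ordered exactly like
//- the vertex indices of the faces in the Videoscape mesh file. Returns the
//- number of pairs read; u and v must hold at least max_pairs entries.
int read_uv_file(const char *fname, float *u, float *v, int max_pairs) {
  FILE *fp = fopen(fname, "r");
  if(!fp) { return 0; }
  int count = 0;
  while(count < max_pairs &&
        fscanf(fp, "%f %f", &u[count], &v[count]) == 2)
  {
    count++;
  }
  fclose(fp);
  return count;
}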
Again, you can create this texture coordinates file simply by cutting out the appropriate portion
of a VRML exported file, as follows (a small sketch that automates these steps appears after the
list):
1. Save the file as VRML by pressing Space to bring up the Toolbox, choosing File, then Save
VRML.
2. Edit the VRML file in your favorite text editor. Search for the beginning of the definition of
your mesh object; the line begins with the keyword DEF (for definition) and contains the
name of the mesh.
3. Search for the texture coordinate definition for this mesh. It follows the DEF line and begins
with the keyword TextureCoordinate2. Following this line is a list of several texture
coordinates.
4. Save the list of texture coordinates into a separate file. I suggest giving this file an extension of
.uv, for instance, meshname.uv.
5. Specify this meshname.uv file in the parameter string passed to the Videoscape plug-in
object, as discussed earlier. In this way, the (u,v) coordinates are read out of the file while
loading the geometry from the Videoscape mesh file, and the texture mapping is applied.
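If you do this often, the cut-and-paste can be automated. The following rough sketch makes some
assumptions: the exported VRML 1.0 file uses the usual TextureCoordinate2 { point [ ... ] }
layout with one coordinate pair per line, and only the first such block is wanted. Under those
assumptions, it dumps the coordinates into a .uv file:

#include <stdio.h>
#include <string.h>

//- Copy the numbers inside the first TextureCoordinate2 point list of a
//- VRML 1.0 file into a .uv file, two numbers per line. Rough sketch only;
//- it assumes the layout Blender's exporter normally produces.
int extract_uv(const char *vrml_name, const char *uv_name) {
  FILE *in = fopen(vrml_name, "r");
  FILE *out = fopen(uv_name, "w");
  if(!in || !out) { if(in) fclose(in); if(out) fclose(out); return 1; }
  char line[1024];
  int inside = 0;
  while(fgets(line, sizeof(line), in)) {
    if(!inside && strstr(line, "TextureCoordinate2")) { inside = 1; continue; }
    if(inside) {
      if(strchr(line, ']')) { break; } //- end of the point list
      float u, v;
      if(sscanf(line, " %f %f", &u, &v) == 2) { //- lines look like "0.5 0.25,"
        fprintf(out, "%f %f\n", u, v);
      }
    }
  }
  fclose(in);
  fclose(out);
  return 0;
}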
CAUTION VRML export of texture coordinates on triangular faces does not appear to work
in Blender. So, only use quadrilaterals if you wish to export texture information.
Unfortunately, it appears that there is a small bug in the Blender VRML export of texture coordi-
nates with triangular faces. In the experiments I performed, the VRML exported texture
coordinates are simply not correct if the faces are triangular. Making the faces into quadrilaterals
(four-sided) seems to fix the problem. Thus, to export manually assigned texture coordinates in
this manner, make sure all your faces have four sides.
Also, notice that the orientation of the v axis in texture space is different in Blender/VRML
than it is in l3d. We have been using the convention that the upper-left pixel in the texture image is
(0,0) in texture space. Blender’s exported VRML texture coordinates, however, take the lower-
left pixel as point (0,0) in texture space. This means that, for texture coordinates in the range 0.0 to
1.0 (which is the case if all of the texture coordinates lie within the image in Blender’s
ImageWindow), the v coordinate we use in l3d is 1.0 - v from the mesh.uv file, which essentially
flips the orientation of the v axis.
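In code, the flip is a one-line adjustment applied while reading the coordinates; a minimal sketch
(the function name is hypothetical):

//- Convert a v coordinate from the Blender/VRML convention (v = 0 at the
//- bottom of the image) to the l3d convention (v = 0 at the top), assuming
//- the coordinates lie in the range 0.0 to 1.0.
inline float flip_v(float v_vrml) { return 1.0f - v_vrml; }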
public:
pyramid(l3d_two_part_list<l3d_coordinate> *keyframe0,
l3d_two_part_list<l3d_coordinate> *keyframe1);
virtual ~pyramid(void);
int update(void);
};
#include <stdlib.h>
#include <string.h>
#include "../lib/tool_os/memman.h"
pyramid::pyramid(l3d_two_part_list<l3d_coordinate> *keyframe0,
l3d_two_part_list<l3d_coordinate> *keyframe1)
:
l3d_object(100)
{
keyframes[0] = keyframe0;
keyframes[1] = keyframe1;
currently_interpolating = false;
keyframe_no=0;
  num_keyframes = 2; //- assumption: the printed listing is cut off at the page break here
}
pyramid::~pyramid(void) {
//- set vertices pointer to NULL to prevent parent l3d_object from trying
//- to delete these vertices, because they do not belong to us
vertices = NULL;
}
int pyramid::update(void) {
if(currently_interpolating) {
vertices = interp.list;
if(! interp.step()) {
currently_interpolating = false;
}
}else {
keyframe_no++;
if(keyframe_no >= num_keyframes) {keyframe_no = 0; }
int next_keyframe = keyframe_no + 1;
if(next_keyframe >= num_keyframes) {next_keyframe = 0; }
vertices = keyframes[keyframe_no];
interp.start( *keyframes[keyframe_no], *keyframes[next_keyframe],
rand()%100 + 50, 3);
currently_interpolating = true;
  }
  return 0; //- assumed return value; the printed listing is cut off at this point
}
#include "../lib/geom/object/object3d.h"
#include "../lib/geom/polygon/p3_flat.h"
#include "../lib/tool_2d/screen.h"
#include "../lib/tool_os/dispatch.h"
#include "../lib/raster/rast3.h"
#include "../lib/tool_2d/scrinfo.h"
#include "../lib/geom/world/world.h"
#include "../lib/system/fact0_2.h"
#include "../lib/pipeline/pi_wor.h"
#include "../lib/dynamics/plugins/plugenv.h"
#include "../lib/geom/texture/tl_ppm.h"
#include "../lib/dynamics/plugins/vidmesh/vidmesh.h"
#include "../lib/tool_os/memman.h"
#include "shapes.h"
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <math.h>
#include <stdarg.h>
my_world::my_world(void)
: l3d_world(400,300)
{
l3d_screen_info *si = screen->sinfo;
camera->VRP.set(0,0,-50,0);
camera->near_z = float_to_l3d_real(5.5);
camera->far_z = int_to_l3d_real(500);
rasterizer_3d_imp->fovx = &(camera->fovx);
rasterizer_3d_imp->fovy = &(camera->fovy);
rasterizer_3d_imp->screen_xsize = &(screen->xsize);
rasterizer_3d_imp->screen_ysize = &(screen->ysize);
int i,j,k,onum=0;
//- create two videoscape plug-in objects, one for the first morph
//- target and one for the second
l3d_texture_loader *texloader;
l3d_surface_cache *scache;
texloader = new l3d_texture_loader_ppm(si);
scache = new l3d_surface_cache(si);
l3d_plugin_environment *plugin_env0, *plugin_env1;
morph_target0->plugin_loader =
factory_manager_v_0_2.plugin_loader_factory->create();
morph_target0->plugin_loader->load("../lib/dynamics/plugins/vidmesh/vidmesh.so");
morph_target0->plugin_constructor =
(void (*)(l3d_object *, void *))
morph_target0->plugin_loader->find_symbol("constructor");
morph_target0->plugin_update =
(void (*)(l3d_object *))
morph_target0->plugin_loader->find_symbol("update");
morph_target0->plugin_destructor =
(void (*)(l3d_object *))
morph_target0->plugin_loader->find_symbol("destructor");
morph_target0->plugin_copy_data =
(void (*)(const l3d_object *, l3d_object *))
morph_target0->plugin_loader->find_symbol("copy_data");
if(morph_target0->plugin_constructor) {
(*morph_target0->plugin_constructor) (morph_target0,plugin_env0);
}
morph_target1->plugin_loader =
factory_manager_v_0_2.plugin_loader_factory->create();
morph_target1->plugin_loader->load("../lib/dynamics/plugins/vidmesh/vidmesh.so");
morph_target1->plugin_constructor =
(void (*)(l3d_object *, void *))
morph_target1->plugin_loader->find_symbol("constructor");
morph_target1->plugin_update =
(void (*)(l3d_object *))
morph_target1->plugin_loader->find_symbol("update");
morph_target1->plugin_destructor =
(void (*)(l3d_object *))
morph_target1->plugin_loader->find_symbol("destructor");
morph_target1->plugin_copy_data =
(void (*)(const l3d_object *, l3d_object *))
morph_target1->plugin_loader->find_symbol("copy_data");
if(morph_target1->plugin_constructor) {
(*morph_target1->plugin_constructor) (morph_target1,plugin_env1);
}
*objects[onum] = *morph_target0;
objects[onum]->num_xforms = 2;
objects[onum]->modeling_xforms[0] = l3d_mat_rotx(0);
objects[onum]->modeling_xforms[1].set
( float_to_l3d_real(1.), float_to_l3d_real(0.), float_to_l3d_real(0.), float_to_l3d_real(0.),
float_to_l3d_real(0.), float_to_l3d_real(1.), float_to_l3d_real(0.), float_to_l3d_real(0.),
float_to_l3d_real(0.), float_to_l3d_real(0.), float_to_l3d_real(1.), float_to_l3d_real(0.),
float_to_l3d_real(0.), float_to_l3d_real(0.), float_to_l3d_real(0.), float_to_l3d_real(1.) );
objects[onum]->modeling_xform=
objects[onum]->modeling_xforms[1] | objects[onum]->modeling_xforms[0];
objects[onum]->vertices = morph_target0->vertices;
l3d_screen_info_indexed *si_idx;
l3d_screen_info_rgb *si_rgb;
l3d_polygon_3d_flatshaded *p;
for(int pnum=0; pnum<objects[onum]->polygons.num_items; pnum++) {
p = dynamic_cast<l3d_polygon_3d_flatshaded *>(objects[onum]->polygons[pnum]);
if(p) {
p->final_color = si->ext_to_native
(rand()%si->ext_max_red,
rand()%si->ext_max_green,
rand()%si->ext_max_blue);
}
}
if (objects[onum]==NULL) { exit(1); }
objects[onum]->modeling_xforms[1].set
( int_to_l3d_real(1), int_to_l3d_real(0), int_to_l3d_real(0), int_to_l3d_real(i),
int_to_l3d_real(0), int_to_l3d_real(1), int_to_l3d_real(0), int_to_l3d_real(1),
int_to_l3d_real(0), int_to_l3d_real(0), int_to_l3d_real(1), int_to_l3d_real(1),
int_to_l3d_real(0), int_to_l3d_real(0), int_to_l3d_real(0), int_to_l3d_real(1) );
objects[onum]->modeling_xform =
objects[onum]->modeling_xforms[1] |
objects[onum]->modeling_xforms[0] ;
}
screen->refresh_palette();
}
int main() {
l3d_dispatcher *d;
l3d_pipeline_world *p;
my_world *w;
factory_manager_v_0_2.choose_factories();
d = factory_manager_v_0_2.dispatcher_factory->create();
w = new my_world();
p = new l3d_pipeline_world(w);
d->pipeline = p;
d->event_source = w->screen;
d->start();
delete d;
delete p;
delete w;
}
The program is almost identical to the morphing pyramid program from Chapter 2. First, we cre-
ate two Videoscape plug-in objects, loading the cube model into one and the tree model into the other.
Then, we create a series of morphable pyramid objects, just as we did in Chapter 2. (The name pyr-
amid is here somewhat of a misnomer, since the morphable objects are not pyramids in this case,
but the naming remains the same to emphasize the similarity of the code with that of Chapter 2.)
In this case, instead of explicitly assigning the vertices in the source and destination morph
vertex lists, we simply use the vertex lists from the already loaded plug-in objects. Then, each pyr-
amid object morphs itself from its source to its target vertex list, and back again, using the vertex
interpolator class.
The logic, therefore, is essentially the same as the simpler morphing program of Chapter 2,
only now we are morphing between two real 3D models, created and textured within Blender. The
logic is simple, yet the results look quite good in 3D.
Note that in this morphing program, in contrast to the morphing program of Chapter 2, we do
recalculate the surface normals during morphing. This is because the Mesa texture mapping rou-
tines require a reverse projection, as we saw in Chapter 2. Performing this reverse projection
during Mesa texture mapping requires us to have an accurate surface normal vector, which is why
we recalculate it whenever we interpolate the vertex positions.
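Conceptually, recomputing a face normal after the vertices have moved is just a cross product of
two edge vectors. The following sketch (plain floats for clarity; it is not the actual l3d
normal-update routine, which works with l3d_real values) shows the idea:

#include <math.h>

//- Recompute a unit face normal from three non-collinear vertices of a
//- polygon after its vertices have been interpolated.
void face_normal(const float p0[3], const float p1[3], const float p2[3],
                 float n[3])
{
  float a[3] = { p1[0]-p0[0], p1[1]-p0[1], p1[2]-p0[2] };
  float b[3] = { p2[0]-p0[0], p2[1]-p0[1], p2[2]-p0[2] };
  n[0] = a[1]*b[2] - a[2]*b[1];
  n[1] = a[2]*b[0] - a[0]*b[2];
  n[2] = a[0]*b[1] - a[1]*b[0];
  float len = sqrtf(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]);
  if(len > 0.0f) { n[0] /= len; n[1] /= len; n[2] /= len; }
}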
In Blender terminology, each of these segments is called a limb; in other literature, these seg-
ments are sometimes called bones. A series of connected limbs is called a chain or an IK chain.
Blender also refers to a chain as an Ika, short for inverse kinematics. The connecting points
between the chains are called joints. The start of the chain is the root or base of the chain. The last
point in the chain is called the effector.
The important thing about an Ika chain is that moving the effector—a single point—leaves the
base fixed in place, and automatically moves all points in between the base and the effector. In the
arm example above, think of moving the effector as being equivalent to moving the hand. By mov-
ing the hand, which is located at the tip of the arm, we automatically cause the rest of the arm to be
correctly repositioned. This makes animation in many cases much easier; we just pull the hand
(effector) to the new position and the entire arm follows.
You can also move the base of an Ika chain. Moving the base of an Ika moves the entire chain
and the effector, without changing the relative positions of the joints. You move the base of an Ika
in order to reposition the entire Ika.
The alternative to inverse kinematics is forward kinematics, abbreviated FK. To position the
arm with forward kinematics, you move the top part of the arm, which also moves the bottom part
of the arm with it. Then, you move the bottom part of the arm, which then moves without affecting
the top. This finally determines the position of the hand, located at the end of the arm. FK implies a
simple hierarchical relationship between the top limb (the parent) and the bottom limb (the child).
You can create such a simple hierarchical relationship for use in forward kinematics by pressing
Ctrl+p (although not all hierarchical relationships are created for FK animation purposes).
Just remember the following in order to distinguish between inverse and forward kinematics:
In inverse kinematics, you move the lowest item in a hierarchy (the effector), which automatically
successively moves all higher items in the hierarchy. In forward kinematics, you explicitly start at
the highest level in the hierarchy, and successively manually move the lower items in the hierar-
chy, which finally determines the position of the lowest item in the hierarchy. Inverse kinematics
works from the bottom up; forward kinematics works from the top down.
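To make the bottom-up idea concrete, here is a toy analytic solver for a planar chain of two limbs
(a standard textbook two-link IK formula, not Blender's Ika implementation). Given a target
position for the effector, it computes the two joint angles while the base stays fixed at the origin:

#include <math.h>

//- Toy two-link planar IK: limb lengths l1 and l2, base at the origin,
//- (tx,ty) the desired effector position. Outputs theta1 (angle of the
//- first limb at the base) and theta2 (bend at the middle joint).
void two_link_ik(float l1, float l2, float tx, float ty,
                 float &theta1, float &theta2)
{
  float d2 = tx*tx + ty*ty;
  float c2 = (d2 - l1*l1 - l2*l2) / (2.0f*l1*l2);
  if(c2 >  1.0f) { c2 =  1.0f; } //- target out of reach: stretch the chain
  if(c2 < -1.0f) { c2 = -1.0f; } //- target too close: fold the chain
  theta2 = acosf(c2);
  theta1 = atan2f(ty, tx) - atan2f(l2*sinf(theta2), l1 + l2*cosf(theta2));
}

Moving (tx,ty) and solving again repositions the middle joint automatically, which is exactly the
behavior described above for dragging an Ika's effector.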
The choice of which is more appropriate—inverse kinematics or forward kinematics—
depends on the most convenient way you like to think about the animation process. Generally, it’s
easier to think of limb animation in terms of inverse kinematics. For instance, when animating an
arm, you usually think of the animation in terms of moving the hand (the effector) towards an
object, and not in terms of moving the upper and lower arms individually. But since inverse kine-
matics automatically recalculates the joint positions, it also gives you less control over the exact
animation. When animating a tail or a snake with many segments, it might be better to use forward
kinematics.
For this tutorial, we’ll use IK for animating the arms and legs of a human-like figure to create
a simple walking motion. The first step is to understand how to create and manipulate Ikas in
Blender.