COMPUTER GRAPHICS
CIA 2018
1) RGB Color Model:
The RGB model is an additive color system where colors are created by combining Red, Green, and Blue light in varying intensities. Each
component typically ranges from 0–255, forming over 16 million colors. It’s widely used in displays, monitors, cameras, and digital imaging
because screens emit light directly, unlike printers which use subtractive color models.
2) Raster and Random-Scan Displays:
Raster displays use a pixel grid, updating the screen line by line, ideal for images, videos, and complex graphics. Random-scan (vector)
displays draw images directly as lines or curves using electron beams, producing sharp lines but inefficient for filled areas. Raster is
dominant in modern screens, while random-scan is mostly historical or specialized.
3) Plasma Panel and LCD Monitors:
Plasma panels use tiny cells containing ionized gas; when electrically charged, they emit ultraviolet light that excites phosphors to display
color. They offer high contrast and wide viewing angles. LCDs use liquid crystals modulated by electric fields to block or pass backlight
through RGB filters, consuming less power and being thinner. Both are flat-panel display technologies.
4) Major Application Areas of Computer Graphics:
Computer graphics are widely applied across industries for visualization, interaction, and design. In entertainment, they power animation,
video games, and special effects in films. In design and engineering (CAD/CAM), they enable architects and engineers to model, simulate,
and test products virtually. In education and training, interactive visualizations aid learning and flight simulators train pilots. In medical
imaging, graphics help visualize CT, MRI, and ultrasound scans for diagnosis. In scientific research, they represent large datasets,
simulations, and molecular structures. Business applications include data visualization, charts, and dashboards. Geographic Information
Systems (GIS) rely on graphics for maps and spatial analysis. Overall, computer graphics combine efficiency with realism, enabling better
communication, creativity, and decision-making.
5) Structure and Working of CRT:
A Cathode Ray Tube (CRT) is a vacuum tube display device that uses an electron gun and fluorescent screen. The electron gun emits a
focused electron beam, accelerated by anode plates. Deflection plates or magnetic coils control the beam’s direction to scan horizontally
and vertically across the screen. The inner screen is coated with phosphor, which glows when struck by electrons, forming images. For color
CRTs, red, green, and blue phosphor dots are arranged in patterns with a shadow mask to ensure electrons hit the correct color. The screen
is refreshed rapidly (50–100 times/sec) to avoid flicker. CRTs offer good brightness and fast response, but are bulky, power-hungry, and now
largely replaced by flat-panel displays like LCDs and LEDs.
CIA 2017
1) Shadow Mask and Beam Penetration:
In shadow mask CRTs, a perforated metal sheet ensures electron beams strike the correct red, green, or blue phosphor dots, producing
accurate colors. In beam penetration CRTs, two phosphor layers (red and green) are used. Low-speed electrons excite the red layer, while
higher-speed electrons penetrate deeper to excite green, creating additional colors like orange and yellow.
2) Two Flat Panel Monitors:
LCD (Liquid Crystal Display): Uses liquid crystals modulated by electric fields to block or allow backlight through RGB filters. Thin,
lightweight, and energy-efficient.
Plasma Display: Uses ionized gas-filled cells producing ultraviolet light, which excites phosphors to emit visible colors. Provides high
brightness, wide viewing angles, and excellent contrast but consumes more power and generates heat compared to LCDs.
3) Raster and Random-Scan Displays:
Raster displays scan the screen line by line using pixels; ideal for images, video, and complex graphics. They dominate modern monitors
and TVs.
Random-scan (vector) displays draw lines and curves directly via electron beams, producing sharp geometric images but remaining
unsuitable for detailed pictures. Raster offers flexibility, while random-scan provided precision for early CAD systems.
4) Different Color Models in Detail:
Color models provide mathematical methods to represent and manipulate colors in computer graphics.
1. RGB Model: An additive model using Red, Green, and Blue light. Intensities range from 0–255, producing millions of colors.
Common in monitors, cameras, and displays.
2. CMY/CMYK Model: A subtractive model using Cyan, Magenta, Yellow, and Black. Mainly used in printing, as pigments absorb
rather than emit light.
3. HSV (Hue, Saturation, Value): Represents color in terms of shade (hue), intensity (saturation), and brightness (value). Useful for
artistic design and image editing.
4. HLS (Hue, Lightness, Saturation): Similar to HSV but uses lightness instead of brightness, making it more intuitive for human
perception.
5. YIQ/YUV Models: Separate brightness (luminance) from color information (chrominance). These are widely used in television
broadcasting and video compression.
Each model is suited to a specific domain (display, printing, design, or broadcasting), ensuring accurate and context-appropriate color
representation.
MAIN PAPER 2018
Part-I (20 words each)
a) What is GUI?
GUI (Graphical User Interface) allows users to interact with a system using icons, menus, and windows instead of commands.
b) What is Plasma Channel?
A plasma channel is the gas-filled cell in plasma displays, which emits ultraviolet light to excite phosphors for producing visible colors.
c) Define Clipping.
Clipping is the process of displaying only the visible portion of objects within a defined window while discarding outside parts.
d) What do you understand by Virtual Reality?
Virtual Reality (VR) is a computer-simulated environment that immerses users in interactive, three-dimensional experiences using special
devices.
e) What is Image Processing?
Image processing involves manipulating digital images using algorithms for enhancement, compression, recognition, and analysis of useful
information.
f) What do you mean by Shearing?
Shearing is a 2D transformation that slants an object by shifting points proportionally along one axis while keeping lines parallel.
g) Write name of any two 2D Transformation techniques.
Translation and Rotation are two common 2D transformation techniques used to move and reorient objects in computer graphics.
h) Explain Viewport.
Viewport is the display area on the output device where the contents of a defined window are mapped for visualization.
i) Give any two advantages of Flat Panel Display Devices.
Flat panels are lightweight and compact, consuming less power while offering higher resolution compared to traditional CRT displays.
j) What do you mean by Scan Conversion?
Scan conversion converts geometric data (lines, shapes) into pixels for raster display, enabling computer graphics objects to be displayed.
k) By which transformation dragging can be achieved in Computer Graphics?
Dragging can be achieved using translation transformation, which repositions objects from one location to another within the coordinate
system.
l) What do you mean by Mapping?
Mapping is the process of transforming coordinates or data from one representation space to another, such as window-to-viewport
transformation.
Part-II (40–50 words each)
a) What is Flat Panel Display? Explain.
A Flat Panel Display is a lightweight, slim display technology replacing CRTs. It uses plasma cells or liquid crystals to produce images. Flat
panels consume less power, offer higher resolution, and support portability. Common types include LCD, LED, and Plasma screens, widely
used in laptops, televisions, and smartphones.
b) What do you understand by 2D Transformation? Discuss.
2D transformations are operations applied to two-dimensional objects to change their position, size, shape, or orientation. Basic
transformations include translation (moving objects), scaling (resizing), rotation (turning around origin), reflection (mirroring), and shearing
(distorting shape). These transformations are represented mathematically using matrices for efficient computation in computer graphics
systems.
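As an illustration (a minimal Python sketch; the function name is ours, not from any prescribed library), rotation about the origin applies x' = x cos a - y sin a and y' = x sin a + y cos a to every vertex:

```python
import math

def rotate(points, angle_deg):
    """Rotate points about the origin:
    x' = x*cos(a) - y*sin(a),  y' = x*sin(a) + y*cos(a)."""
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    return [(x * c - y * s, x * s + y * c) for x, y in points]

# Rotating (1, 0) by 90 degrees should land on the y-axis:
pts = rotate([(1, 0)], 90)
print([(round(x, 6), round(y, 6)) for x, y in pts])  # [(0.0, 1.0)]
```

The other basic transformations follow the same pattern, each with its own matrix.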
c) Differentiate between Window and Viewport.
A Window is a selected rectangular area in world coordinates that defines the portion of the scene to be displayed. A Viewport is the
corresponding area on the display device where this content is mapped. Window handles input space, while viewport handles output
mapping.
d) Discuss any one digital image processing technique in brief.
One digital image processing technique is Image Enhancement, which improves image quality for better interpretation. Techniques include
contrast adjustment, noise removal, and sharpening. For example, medical imaging uses enhancement to make CT or MRI scans clearer. It
helps highlight important features while suppressing irrelevant details, improving visual analysis.
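As a sketch of one such technique, here is linear contrast stretching in Python (illustrative only; assumes a flat list of grayscale values):

```python
def contrast_stretch(pixels, out_min=0, out_max=255):
    """Linearly map the image's [min, max] range onto [out_min, out_max]."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:                      # flat image: nothing to stretch
        return list(pixels)
    scale = (out_max - out_min) / (hi - lo)
    return [round(out_min + (p - lo) * scale) for p in pixels]

# A dull image occupying only [10, 40] is stretched to the full range:
print(contrast_stretch([10, 20, 40]))  # [0, 85, 255]
```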
Unit-I
Q1: Explain how RGB to CMY conversion is done. Discuss its uses.
RGB is an additive model, while CMY is subtractive. To convert, each CMY component is calculated as:
C = 1 – R, M = 1 – G, Y = 1 – B (where R, G, B values are normalized between 0–1). For example, if RGB = (0.2, 0.4, 0.6), then CMY = (0.8, 0.6,
0.4). CMYK (with black added) is commonly used in printers, as inks absorb light instead of emitting it. The conversion is essential in
desktop publishing, graphic design, and printing workflows. It ensures that digital colors created on displays (RGB-based) are accurately
reproduced on paper using inks (CMYK-based).
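The formulas above translate directly to code; a minimal Python sketch (function name ours):

```python
def rgb_to_cmy(r, g, b):
    """RGB -> CMY for components normalized to the range 0-1:
    C = 1 - R, M = 1 - G, Y = 1 - B."""
    return (1 - r, 1 - g, 1 - b)

# The worked example from the text:
print(rgb_to_cmy(0.2, 0.4, 0.6))  # (0.8, 0.6, 0.4)
```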
Q2: Differentiate between Random Scan Displays and Raster Scan Displays with suitable examples.
Random Scan Displays (vector displays) directly draw images using electron beams along the exact geometric paths of lines or curves. They
produce sharp, flicker-free lines but are inefficient for complex filled images. Example: early CAD systems.
Raster Scan Displays divide the screen into a pixel grid, refreshing line by line from top to bottom. They are well-suited for complex images,
pictures, and videos, though sometimes less sharp for lines. Example: modern TVs, monitors.
Key difference: random-scan works best for line drawings, while raster is universal, supporting text, images, and animation. Raster
dominates modern graphics systems due to its flexibility and hardware compatibility.
Unit-II
Q3: Implement the DDA Algorithm to draw a line from (0,0) to (6,6). Explain vector generation of line.
The Digital Differential Analyzer (DDA) algorithm increments one coordinate in equal steps while calculating the other using slope. For line
(0,0) → (6,6):
dx = 6, dy = 6, so steps = max(dx, dy) = 6.
X increment = dx/steps = 1, Y increment = dy/steps = 1.
Starting at (0,0), successive points are (1,1), (2,2), (3,3), … (6,6).
This gives the rasterized line.
Vector generation: Instead of pixels, vector displays use direct electron beam deflections to trace the line continuously, yielding
smooth images. DDA fits raster systems, while vector displays bypass pixel calculations.
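The hand trace above generalizes to a short DDA routine (a Python sketch, assuming integer endpoints):

```python
def dda_line(x0, y0, x1, y1):
    """DDA: step along the larger of |dx|, |dy|; accumulate the other
    coordinate in fractional increments and round to the nearest pixel."""
    dx, dy = x1 - x0, y1 - y0
    steps = max(abs(dx), abs(dy))
    if steps == 0:                       # degenerate line: a single point
        return [(x0, y0)]
    x_inc, y_inc = dx / steps, dy / steps
    x, y = float(x0), float(y0)
    points = [(x0, y0)]
    for _ in range(steps):
        x += x_inc
        y += y_inc
        points.append((round(x), round(y)))
    return points

print(dda_line(0, 0, 6, 6))
# [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4), (5, 5), (6, 6)]
```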
Q4: Write translation matrix for a triangle (10,2), (20,2), (15,5) with translation vector (-5,3). Also draw translated triangle.
Translation matrix in homogeneous coordinates:
| 1  0  Tx |
| 0  1  Ty |
| 0  0   1 |
Here, Tx = -5, Ty = 3. So, the matrix is:
| 1  0  -5 |
| 0  1   3 |
| 0  0   1 |
Applying it:
(10,2) → (5,5)
(20,2) → (15,5)
(15,5) → (10,8)
Thus, the translated triangle vertices are (5,5), (15,5), and (10,8). Graphically, the entire triangle shifts left by 5 units and upward
by 3 units. Translation does not distort shape—only position changes.
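The computation can be checked with a short Python sketch that multiplies the homogeneous matrix by each vertex (helper names are ours):

```python
def translation_matrix(tx, ty):
    """3x3 homogeneous translation matrix."""
    return [[1, 0, tx],
            [0, 1, ty],
            [0, 0, 1]]

def apply(m, point):
    """Multiply a 3x3 matrix by the column vector (x, y, 1)."""
    x, y = point
    v = (x, y, 1)
    return (sum(m[0][i] * v[i] for i in range(3)),
            sum(m[1][i] * v[i] for i in range(3)))

T = translation_matrix(-5, 3)
triangle = [(10, 2), (20, 2), (15, 5)]
print([apply(T, p) for p in triangle])  # [(5, 5), (15, 5), (10, 8)]
```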
Unit-III
Q5: What is Back Face Removal Algorithm? Discuss.
Back Face Removal (or Back-Face Culling) is used in 3D graphics to eliminate surfaces not visible to the viewer. A polygon face is considered
a back face if its normal vector points away from the viewer. Mathematically, if the dot product between the surface normal and the
viewing direction vector (taken as pointing from the viewer into the scene) is greater than zero, the face is hidden. This reduces
rendering load by avoiding unnecessary calculations for unseen surfaces. Widely
used in real-time graphics (games, simulations), it speeds up rendering and optimizes memory usage. However, it cannot handle cases like
overlapping objects—other algorithms (like depth buffer) are used additionally.
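The dot-product test reduces to a one-liner; a Python sketch, assuming the viewing direction points from the viewer into the scene:

```python
def is_back_face(normal, view_dir):
    """True when the face's outward normal points away from the viewer,
    i.e. its dot product with the viewing direction is positive.
    view_dir is assumed to point from the viewer into the scene."""
    nx, ny, nz = normal
    vx, vy, vz = view_dir
    return nx * vx + ny * vy + nz * vz > 0

view = (0, 0, -1)                       # viewer looks down the -z axis
print(is_back_face((0, 0, -1), view))   # True  -> face culled
print(is_back_face((0, 0, 1), view))    # False -> face kept
```

Note that the sign convention flips if the view vector is taken to point from the surface toward the viewer.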
Q6: Discuss two approaches used to determine hidden surfaces. Explain any one.
Two major hidden surface removal approaches are:
1. Z-buffer algorithm (Depth buffer): Stores depth values for each pixel. When a new surface projects on the same pixel, depth is
compared, and only the closest surface is displayed. Simple, hardware-supported, but memory-intensive.
2. Painter’s Algorithm: Surfaces are sorted by depth and painted from farthest to nearest. Closer surfaces overwrite farther ones.
Easy for convex scenes but fails with cyclic overlaps.
Among these, Z-buffer is widely used in modern GPUs due to efficiency and direct hardware implementation.
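The Z-buffer comparison can be sketched in a few lines of Python (illustrative; fragments are precomputed (x, y, depth, color) tuples, with smaller depth meaning closer to the viewer):

```python
def zbuffer_render(width, height, fragments):
    """Keep, per pixel, the color of the closest fragment seen so far.
    fragments: iterable of (x, y, depth, color); smaller depth = closer."""
    INF = float("inf")
    depth = [[INF] * width for _ in range(height)]
    frame = [[None] * width for _ in range(height)]
    for x, y, z, color in fragments:
        if z < depth[y][x]:          # new fragment is closer: overwrite
            depth[y][x] = z
            frame[y][x] = color
    return frame

frags = [(0, 0, 5.0, "red"), (0, 0, 2.0, "blue"), (1, 0, 7.0, "green")]
frame = zbuffer_render(2, 1, frags)
print(frame[0])  # ['blue', 'green'] -- blue wins at (0,0) since 2.0 < 5.0
```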
Unit-IV
Q7: How to capture and store a digital image? Discuss any two file formats for storage of a digital image.
Capturing a digital image involves converting light signals into electrical signals using a sensor (e.g., CCD/CMOS). These signals are digitized
into pixel values. Each pixel is assigned intensity and color information, forming a digital matrix. Storage requires compression and file
formatting.
Two formats:
JPEG: Compressed, lossy format for photos. Reduces size significantly but sacrifices some detail.
PNG: Lossless compression, supports transparency, best for graphics and logos.
Thus, image capture involves sensing, digitization, and encoding into efficient file formats suitable for use and transmission.
Q8: Discuss various elements and application areas of the digital image processing system.
A digital image processing system consists of:
Image Acquisition: Sensors capture raw images.
Preprocessing: Enhancements like noise removal and normalization.
Segmentation: Divides image into meaningful regions.
Representation/Compression: Encodes image efficiently.
Recognition/Interpretation: Extracts useful information.
Applications include medical imaging (MRI, CT scans), remote sensing (satellite images), biometrics (fingerprints, face
recognition), industrial inspection, document processing (OCR), and security surveillance. Image processing improves visual
quality, aids decision-making, and supports automation across domains.
MAIN PAPER 2017
Q1
1) What is Computer Graphics?
Computer graphics is creating, manipulating, and representing visual images using computers, including 2D/3D images, animations, and
interactive graphics.
2) What is Aspect Ratio?
Aspect ratio is the ratio of a display’s width to its height, determining image shape and screen proportion.
3) Define PHIGS
PHIGS (Programmer’s Hierarchical Interactive Graphics System) is a standard API for creating, storing, and manipulating 3D graphics
structures.
4) What is Flood Filling Algorithm?
Flood filling algorithm fills connected pixels with a specific color, starting from a seed, used in paint and graphics.
5) What do you mean by Scaling?
Scaling is a transformation resizing objects proportionally along x, y (and z) axes; types: uniform scaling, non-uniform scaling.
6) What is Transformation? Mention its Types
Transformation changes object position, orientation, or size; types include translation, rotation, scaling, reflection, shearing, and composite
transformations.
7) Define Area Subdivision Method
Area subdivision (Warnock's) method recursively divides the screen area into smaller regions until the visibility of surfaces in each region can be decided trivially; it is a hidden-surface removal technique.
8) Give an Example of Curve Clipping
Clipping a Bézier curve within a rectangular viewport is an example of curve clipping in computer graphics.
9) What do you mean by Polygon Clipping?
Polygon clipping is removing portions of a polygon outside a defined boundary or window, keeping only the visible section.
10) What is Digital Image Processing?
Digital image processing involves manipulating digital images using algorithms to enhance, restore, or extract information from them.
11) Define Quantization
Quantization converts continuous amplitude values of an image into discrete levels for digital representation and storage.
12) Steps to Get Reflected Image Through an Arbitrary Line
Translate line to origin, rotate line to x-axis, apply reflection, reverse rotation, reverse translation to get reflected image.
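The five steps compose into a single matrix; a Python sketch for a line y = m·x + c (helper names are ours):

```python
import math

def matmul(a, b):
    """Multiply two 3x3 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def reflect_about_line(m, c):
    """Compose: translate line to origin, rotate onto the x-axis,
    reflect in the x-axis, rotate back, translate back."""
    t = math.atan(m)
    T1 = [[1, 0, 0], [0, 1, -c], [0, 0, 1]]
    R1 = [[math.cos(-t), -math.sin(-t), 0],
          [math.sin(-t),  math.cos(-t), 0],
          [0, 0, 1]]
    Fx = [[1, 0, 0], [0, -1, 0], [0, 0, 1]]     # reflection in x-axis
    R2 = [[math.cos(t), -math.sin(t), 0],
          [math.sin(t),  math.cos(t), 0],
          [0, 0, 1]]
    T2 = [[1, 0, 0], [0, 1, c], [0, 0, 1]]
    M = T2
    for step in (R2, Fx, R1, T1):               # M = T2*R2*Fx*R1*T1
        M = matmul(M, step)
    return M

M = reflect_about_line(1, 0)                    # reflect about y = x
px = M[0][0] * 2 + M[0][1] * 5 + M[0][2]
py = M[1][0] * 2 + M[1][1] * 5 + M[1][2]
print(round(px), round(py))  # 5 2  (reflection of (2, 5) about y = x)
```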
Q2
1) Explain Color Model RGB. Compare it with HSV
RGB (Red, Green, Blue) represents colors by mixing primary light components. Used in displays and imaging. HSV (Hue, Saturation, Value)
represents colors by tone, vibrancy, and brightness. Unlike RGB, HSV is more intuitive for humans and useful in graphics editing, color
selection, and image processing.
2) Differentiate Between Circle and Ellipse Drawing Algorithm
Circle algorithms (Midpoint, Bresenham) draw points equidistant from center using radius. Ellipse algorithms calculate two radii (x and y
axes) with varying slope. Circle is a special case of ellipse. Ellipse drawing involves more complex decision parameters, while circle uses
simpler, symmetric calculations for faster computation.
3) Write Notes
a) Cyrus-Beck Algorithm
Cyrus-Beck is a parametric line-clipping algorithm for convex polygons. It uses line parameters and polygon edge normals to determine
intersections and visible portions efficiently. More general than Cohen-Sutherland, handling arbitrary convex windows.
b) Window and Viewport
Window defines the portion of the world to display. Viewport maps the window onto a display screen region. Clipping ensures only objects
inside the window appear in the viewport, allowing scaling, translation, and proper representation of graphics on output devices.
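The window-to-viewport mapping itself is two scale factors and an offset per axis; a Python sketch (the parameter packing is our choice):

```python
def window_to_viewport(xw, yw, win, vp):
    """Map world point (xw, yw) from window (xwmin, ywmin, xwmax, ywmax)
    onto viewport (xvmin, yvmin, xvmax, yvmax) by scaling each axis."""
    xwmin, ywmin, xwmax, ywmax = win
    xvmin, yvmin, xvmax, yvmax = vp
    sx = (xvmax - xvmin) / (xwmax - xwmin)
    sy = (yvmax - yvmin) / (ywmax - ywmin)
    return (xvmin + (xw - xwmin) * sx, yvmin + (yw - ywmin) * sy)

# Window (0,0)-(10,10) mapped onto viewport (100,100)-(300,300):
print(window_to_viewport(5, 5, (0, 0, 10, 10), (100, 100, 300, 300)))
# (200.0, 200.0)
```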
4) Write Short Notes
a) Image Compression
Image compression reduces file size by removing redundancy or irrelevant information. Lossless preserves all data; lossy sacrifices some
quality for higher compression. Used in storage, transmission, and faster processing, e.g., PNG (lossless), JPEG (lossy).
b) Resolution
Resolution is the number of pixels per unit length (e.g., pixels per inch) or the total pixel dimensions of an image, defining clarity and
detail. Higher resolution provides sharper, more detailed images; lower resolution may appear pixelated. Resolution affects printing,
display quality, and image processing accuracy.
1) Differentiate Between Random Scan Display and Raster Scan Display with Example
Random scan displays, also called vector displays, draw images by directing the electron beam along object lines. Only visible lines are
drawn, making it ideal for line graphics such as engineering CAD, architectural blueprints, and diagrams. It requires less memory but is
slower for complex images. Raster scan displays draw images line by line across the screen using a frame buffer. They are suitable for
realistic images, photographs, and animations. They consume more memory but can display detailed and colored images efficiently.
Refresh behavior differs: raster scan refreshes the whole frame at a fixed rate, while random scan redraws only the component lines, so its refresh time grows with image complexity.
Example: Random scan – architectural blueprint display; Raster scan – digital photograph on a monitor. Random scan uses vector
generators, whereas raster scan relies on frame buffers for storing pixel data.
2) Describe the Functionalities of Refresh Cathode Ray Tube (CRT)
A refresh CRT maintains a visible image on the screen by directing an electron beam across a phosphor-coated surface. When the beam
strikes the phosphor, it emits light, creating pixels that form the image. In raster scan systems, the beam scans line by line from top-left to
bottom-right, while in random scan systems, it moves along object lines directly. Refresh CRTs store image information either in a frame
buffer (raster) or in vector memory (random scan). Their key functionalities include sustaining the display without flicker, controlling beam
deflection and intensity, supporting color displays using red, green, and blue phosphors, and maintaining image stability over time. They
also allow real-time updates of graphics, making them essential for early interactive computer graphics systems, video displays, and
simulation applications. The refresh mechanism ensures continuous visibility of dynamic and static images.
3) Write an Example of Bresenham’s Line Drawing Algorithm and Trace the Algorithm for Points (2,1) to (10,12)
Bresenham’s line drawing algorithm selects pixels closest to a theoretical line using integer arithmetic, avoiding floating-point calculations.
For a line from (2,1) to (10,12), calculate dx = 8, dy = 11, slope > 1. Initialize decision parameter p = 2dx – dy = 5. Start at (2,1). Move
primarily along the y-axis, checking p. If p ≥ 0, increment both x and y and update p = p + 2dx – 2dy. If p < 0, increment y only and p = p +
2dx. Repeat until reaching (10,12). The selected pixels approximate the line with minimal error. Bresenham’s method is efficient for raster
devices, as it avoids multiplication or division. This makes it widely used in graphics libraries and embedded systems for line rendering on
screens.
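The trace above can be reproduced with a short Python routine for the steep (slope > 1) case described (a sketch assuming x1 > x0, y1 > y0 and dy > dx):

```python
def bresenham_steep(x0, y0, x1, y1):
    """Bresenham for slope > 1: y is the driving axis, x follows the
    decision parameter p. Assumes x1 > x0, y1 > y0 and dy > dx."""
    dx, dy = x1 - x0, y1 - y0
    p = 2 * dx - dy                  # initial decision parameter
    x, y = x0, y0
    points = [(x, y)]
    while y < y1:
        if p >= 0:
            x += 1
            p += 2 * dx - 2 * dy
        else:
            p += 2 * dx
        y += 1
        points.append((x, y))
    return points

print(bresenham_steep(2, 1, 10, 12))
# [(2, 1), (3, 2), (3, 3), (4, 4), (5, 5), (6, 6), (6, 7),
#  (7, 8), (8, 9), (9, 10), (9, 11), (10, 12)]
```

Note that only integer additions and comparisons appear in the loop, which is the point of the algorithm.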
4) Write Short Notes
a) Three Dimensional Transformation
3D transformations change the position, orientation, or size of objects in three-dimensional space. Types include translation (moving
objects along axes), scaling (resizing uniformly or non-uniformly), rotation (around X, Y, or Z axes), reflection, and shearing. Transformations
are represented using matrices and applied to vertices of 3D models. Composite transformations combine multiple operations to achieve
complex effects. They are essential in CAD, gaming, animation, and simulation, allowing realistic motion, modeling, and spatial
manipulation of objects.
b) Area Filling Algorithm
Area filling algorithms fill enclosed regions with color or patterns. Boundary fill algorithms fill until they reach a boundary color. Flood fill
algorithms fill connected pixels from a seed point, covering all reachable areas. These are used in painting software, CAD, and graphics
applications to efficiently color polygons or shapes. Proper implementation ensures uniform filling, avoiding gaps or leaks, and supports
both 4-connected and 8-connected pixel neighborhoods.
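A 4-connected flood fill can be sketched iteratively in Python (an explicit stack avoids deep recursion; the grid layout is our assumption):

```python
def flood_fill(grid, x, y, new_color):
    """4-connected flood fill from seed (x, y) on a mutable grid
    (list of rows) of color values."""
    old = grid[y][x]
    if old == new_color:             # nothing to do; also prevents looping
        return
    stack = [(x, y)]
    while stack:
        cx, cy = stack.pop()
        if (0 <= cy < len(grid) and 0 <= cx < len(grid[0])
                and grid[cy][cx] == old):
            grid[cy][cx] = new_color
            stack.extend([(cx + 1, cy), (cx - 1, cy),
                          (cx, cy + 1), (cx, cy - 1)])

canvas = [[0, 0, 1],
          [0, 1, 1],
          [1, 1, 0]]
flood_fill(canvas, 0, 0, 2)
print(canvas)  # [[2, 2, 1], [2, 1, 1], [1, 1, 0]]
```

The 0 in the bottom-right corner is untouched because it is not 4-connected to the seed.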
5) Explain the Three-Dimensional Viewing Pipeline in Detail
The 3D viewing pipeline transforms 3D world objects into 2D display coordinates. Steps: 1) Modeling transformation positions
objects in world coordinates. 2) Viewing transformation aligns objects relative to the virtual camera or view reference coordinate system.
3) Projection transformation converts 3D coordinates to 2D using parallel or perspective projections. 4) Clipping removes portions outside
the viewing volume. 5) Viewport transformation maps the clipped coordinates onto the display screen. 6) Scan conversion converts
coordinates into pixels for raster devices. This pipeline ensures correct positioning, scaling, and projection, maintaining spatial
relationships. It is essential for interactive graphics, CAD, simulations, and rendering realistic 3D scenes onto 2D screens, ensuring accurate
representation and visual coherence.
6) Explain Cohen-Sutherland Algorithm with Suitable Example
The Cohen-Sutherland algorithm clips lines against a rectangular window efficiently using region codes. Each line endpoint receives a 4-bit
code representing its position relative to the window (top, bottom, left, right). Lines entirely inside (0000) are accepted; those entirely
outside the same region are rejected. Partially inside lines are clipped by calculating intersections with window boundaries, updating
endpoints iteratively. Example: Window xmin=2, xmax=8, ymin=3, ymax=9; line from (1,4) to (10,6). With bit order top-bottom-right-left,
the endpoint codes are (1,4) = 0001 (left) and (10,6) = 0010 (right). Their OR is nonzero (no trivial accept) and their AND is zero (no
trivial reject), so the line is clipped at the left and right boundaries: with slope 2/9, the intersections are approximately (2, 4.22)
and (8, 5.56). The process repeats until the remaining segment is fully visible. The method efficiently reduces computations and is
widely used in computer graphics for line rendering inside viewports.
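The region codes for such an example can be computed with a small Python sketch (bit assignments follow one common convention: left=1, right=2, bottom=4, top=8):

```python
def region_code(x, y, xmin, ymin, xmax, ymax):
    """4-bit Cohen-Sutherland outcode for a point."""
    code = 0
    if x < xmin: code |= 1   # left
    if x > xmax: code |= 2   # right
    if y < ymin: code |= 4   # bottom
    if y > ymax: code |= 8   # top
    return code

# Window xmin=2, xmax=8, ymin=3, ymax=9; endpoints (1,4) and (10,6):
c1 = region_code(1, 4, 2, 3, 8, 9)    # 1 (0001: left of the window)
c2 = region_code(10, 6, 2, 3, 8, 9)   # 2 (0010: right of the window)
print((c1 | c2) == 0)  # False -> no trivial accept
print((c1 & c2) != 0)  # False -> no trivial reject; the line must be clipped
```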
7) How to Capture and Store a Digital Image? Discuss Two File Formats
Digital images are captured using sensors such as cameras or scanners, converting light into electronic signals. These signals are sampled to
produce pixel values representing intensity and color. After digitization, images are stored in files using specific formats.
File Formats:
1. JPEG – uses lossy compression to reduce size, suitable for photographs. It sacrifices minor detail for storage efficiency.
2. PNG – uses lossless compression, preserving full image quality and supporting transparency, suitable for graphics and web
images.
Captured images can be further processed, edited, or transmitted for visualization, analysis, or storage. Proper format selection ensures
optimal balance of quality, size, and application needs.
8) Write Short Notes
a) Digital Image Processing
Digital image processing uses computers to manipulate images to enhance, restore, analyze, or extract information. Techniques include
filtering, segmentation, edge detection, and morphological operations. Applications include medical imaging, satellite analysis, industrial
inspection, and pattern recognition.
b) Image Enhancement
Image enhancement improves visual quality or emphasizes features. Methods include contrast stretching, histogram equalization,
smoothing, sharpening, and color adjustment. Enhances interpretability for human observers or automated systems, aiding medical,
industrial, and remote sensing applications.