Cit Lecture Note Part B by P - K - Joseph
SECTION A
Topic: Computer Architecture and Computer Organization
Introduction
Definition of Computer Architecture and Computer Organization
Computer architecture refers to the design of a computer system, including its
structure and the way its components interact with each other. It involves
understanding how the hardware components like the CPU, memory, and input/output
devices are organized and connected to each other to enable the computer to perform
tasks efficiently. Think of it as the blueprint or design plan for building a computer,
outlining how all the parts work together to execute programs and process data.
Computer organization, on the other hand, focuses on the operational aspects of a
computer system. It deals with how the various hardware components are arranged
and controlled to carry out tasks. This includes the specific details of how data is
processed, stored, and transferred within the computer. Computer organization is
concerned with optimizing the performance of the hardware components to achieve
efficient computation. It's like the actual implementation or arrangement of the
components based on the architectural design to make the computer function
effectively.
Why Computer Architecture?
Understanding the inner workings of computers is crucial for several reasons:
1. Empowerment: Understanding how computers work empowers individuals to
use technology more effectively. It's like knowing how to drive a car - you can do
more than just sit in the passenger seat and let others take control.
2. Problem-solving: Knowing the basics of computer architecture and organization
enables individuals to troubleshoot issues when things go wrong. It's like being
able to diagnose a problem with your car when it breaks down.
3. Career Opportunities: In today's digital world, computer skills are in high
demand across various industries. Understanding computer architecture opens
up career opportunities in fields like software development, cybersecurity, data
analysis, and more.
4. Innovation: Understanding the inner workings of computers fosters innovation.
It allows individuals to come up with creative solutions, develop new software or
hardware technologies, and contribute to the advancement of computing.
5. Effective Communication: Whether you're collaborating with colleagues or
explaining technical concepts to non-technical stakeholders, having a grasp of
computer architecture helps in effective communication. It's like speaking the
language of computers, which facilitates smoother interactions in the tech
world.
6. Critical Thinking: Learning about computer architecture encourages critical
thinking and problem-solving skills. It involves breaking down complex systems
into smaller components, analyzing their interactions, and finding optimal
solutions.
7. Adaptability: Technology is constantly evolving, and understanding the
fundamentals of computer architecture prepares individuals to adapt to new
technologies and stay relevant in a rapidly changing digital landscape.
REVIEW QUESTION
TODO
Form groups among yourselves. In each group, take turns explaining to
your members your understanding of computer architecture and
computer organization, using some form of analogy!
Let us now examine the differences between computer architecture and computer
organization based on several aspects:
2. Memory Hierarchy
Types of memory:
i. RAM (Random Access Memory)
RAM is a type of volatile memory used by computers to temporarily store data
and program instructions that the CPU needs to access quickly.
It is called "random access" because any memory cell can be accessed directly
and quickly, regardless of its location.
RAM loses its data when the computer is turned off, which is why it's considered
volatile.
RAM is essential for running programs and multitasking, as it provides fast
access to data and instructions.
Examples of RAM types include DDR (Double Data Rate) and DDR2, DDR3,
DDR4, and DDR5, which represent different generations with varying speeds and
capacities.
ii. ROM (Read-Only Memory)
ROM is a type of non-volatile memory that stores firmware or permanent
instructions that are not meant to be modified.
It contains essential system instructions, such as the BIOS (Basic Input/Output
System), which initializes hardware components during the computer's boot-up
process.
Unlike RAM, ROM retains its data even when the computer is powered off,
making it non-volatile.
ROM is called "read-only" because its contents cannot be easily modified or
overwritten by the user.
Examples of ROM include PROM (Programmable Read-Only Memory), EPROM
(Erasable Programmable Read-Only Memory), and EEPROM (Electrically
Erasable Programmable Read-Only Memory), each with different characteristics
regarding programmability and erasability.
iii. Cache Memory
Cache memory is a small, high-speed memory located on the CPU chip or in
close proximity to it.
It serves as a buffer between the CPU and main memory (RAM), storing
frequently accessed data and instructions to speed up data processing.
Cache memory operates on the principle of locality, exploiting the tendency of
programs to access the same data or instructions repeatedly.
There are different levels of cache memory, including L1 (Level 1), L2 (Level 2),
and sometimes L3 cache, with each level providing progressively larger storage
capacity and slightly slower access speeds.
iv. Secondary Storage (Hard drives, SSDs)
Secondary storage refers to non-volatile storage devices used to store data
permanently or semi-permanently.
Examples of secondary storage devices include hard disk drives (HDDs) and
solid-state drives (SSDs), which store data magnetically and electronically,
respectively.
Secondary storage devices have larger capacities compared to RAM and ROM
but are slower in terms of data access and retrieval.
They are used for long-term storage of operating systems, applications, user
files, and other data that needs to be retained even when the computer is
powered off.
SSDs are becoming increasingly popular due to their faster read and write
speeds compared to traditional HDDs, although they are typically more
expensive.
3. Input/Output Organization
Input/Output devices and their interfaces.
Input/output (I/O) devices and their interfaces are essential components of a
computer system, facilitating communication between the user and the
machine. Input devices enable users to input data and commands into the
computer, while output devices present processed information to the user.
Common input devices include keyboards, mice, touchscreens, and
microphones, allowing users to interact with the computer by typing, clicking,
tapping, or speaking. Output devices such as monitors, printers, speakers, and
projectors display or present processed data, enabling users to visualize
information, print documents, listen to audio, or view presentations. Interfaces,
such as USB (Universal Serial Bus), HDMI (High-Definition Multimedia Interface),
and Ethernet, serve as communication channels between the computer and
external devices, enabling data transfer and connectivity. Together, input/output
devices and their interfaces play a crucial role in facilitating user interaction with
the computer and exchanging information between the computer system and
external devices.
I/O communication methods
This refers to the various techniques and protocols used to facilitate
communication between input/output (I/O) devices and the central processing
unit (CPU) within a computer system. We shall examine different ways in which
data is transferred between the CPU and external devices such as keyboards,
mice, printers, and storage devices.
i. Programmed I/O (polling)
Programmed I/O is the simplest and most basic method of I/O
communication, also known as polling.
In this method, the CPU continuously checks the status of the I/O device
by repeatedly reading a status register or flag associated with the device.
The CPU waits in a loop, periodically checking if the device is ready to
send or receive data.
When the device is ready, the CPU itself transfers data to or from the device
using ordinary data transfer (I/O) instructions.
Programmed I/O is straightforward to implement but can be inefficient, as it
ties up the CPU, wasting valuable processing time while waiting for I/O
operations to complete, as the polling loop sketched below illustrates.
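As an illustration, the sketch below polls the legacy PC keyboard controller (status port 0x64, data port 0x60) in x86 assembly; bit 0 of the status byte indicates that a data byte is waiting to be read. The label name is arbitrary and the ports are specific to that one controller, but the busy-wait pattern is the same for any polled device:
wait_for_key:
    in   al, 0x64       ; read the device's status register
    test al, 1          ; bit 0 set means a data byte is waiting
    jz   wait_for_key   ; not ready yet, so keep polling (the CPU is tied up here)
    in   al, 0x60       ; transfer the data byte (here, a keyboard scan code)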
ii. Interrupt-driven I/O
In interrupt-driven I/O, the CPU is not tied up continuously
polling the I/O device.
Instead, the CPU initiates an I/O operation and continues executing other
tasks while waiting for the device to signal completion.
When the I/O device has completed its operation or requires attention, it
sends an interrupt signal to the CPU, causing the CPU to temporarily
suspend its current task and service the interrupt.
The CPU then executes an interrupt service routine (ISR) to handle the
interrupt, which typically involves transferring data between the device
and memory.
Interrupt-driven I/O allows the CPU to perform other tasks while waiting
for I/O operations to complete, improving overall system efficiency
compared to programmed I/O.
iii. Direct Memory Access (DMA)
DMA is an I/O communication method that enables data transfer between
an I/O device and memory without involving the CPU.
In DMA, a specialized DMA controller is used to manage data transfer
between the I/O device and memory independently of the CPU.
The CPU initiates the DMA transfer by setting up the DMA controller with
the necessary transfer parameters, such as the source and destination
addresses and the number of bytes to transfer.
Once configured, the DMA controller takes control of the system bus and
transfers data directly between the I/O device and memory, bypassing the
CPU.
DMA significantly reduces CPU overhead and improves system
performance by offloading data transfer tasks from the CPU to the DMA
controller, allowing the CPU to focus on other tasks.
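To make the CPU's limited role concrete, the sketch below assumes a hypothetical memory-mapped DMA controller whose source, destination, count, and control registers sit at invented addresses (DMA_SRC, DMA_DST, DMA_COUNT, DMA_CTRL); real controllers differ in their register layout, but the pattern of "program the transfer, then let the controller run" is the same:
DMA_SRC    equ 0xFFFF0000   ; hypothetical register addresses, for illustration only
DMA_DST    equ 0xFFFF0004
DMA_COUNT  equ 0xFFFF0008
DMA_CTRL   equ 0xFFFF000C

    mov dword [DMA_SRC],   0x00200000   ; where the controller should read from
    mov dword [DMA_DST],   0x00100000   ; where the data should land in main memory
    mov dword [DMA_COUNT], 512          ; number of bytes to transfer
    mov dword [DMA_CTRL],  1            ; "start" bit: the controller now works on its own
    ; the CPU is free to execute other instructions; the controller raises an
    ; interrupt (or sets a status flag) when the transfer is complete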
REVIEW QUESTION
SECTION C
Topic: Computer operation (fetch and execute cycle, Interrupts, stack operations)
The fetch and execute cycle is the fundamental process by which the CPU runs a
program: it repeatedly fetches the next instruction from memory, decodes it,
executes it, and then moves on to the following instruction.
Introduction to Interrupts
Interrupts are signals generated by hardware devices or software to request
attention from the CPU. When an interrupt occurs, the CPU temporarily suspends
its current execution to handle the interrupt request. Interrupts are used to
handle events that require immediate attention, such as input/output (I/O)
operations, timer events, or hardware errors. They allow the CPU to efficiently
manage multiple tasks and respond promptly to external events without wasting
processing time.
Types of Interrupts: Hardware and Software:
Hardware Interrupts: These interrupts are generated by external hardware devices to
request attention from the CPU. Examples include I/O interrupts, timer interrupts, and
hardware malfunction interrupts.
Software Interrupts: Also known as traps or exceptions, software interrupts are
generated by the CPU in response to specific conditions detected during program
execution. Examples include division by zero, invalid memory access, or system calls
made by programs to request operating system services.
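As a concrete example of a software interrupt used for system calls, the short program below is a sketch assuming the NASM assembler and the historical 32-bit Linux int 0x80 convention (the label names msg and msg_len are arbitrary); it asks the operating system to print a message and then to terminate the process:
section .data
msg      db  "Hello from a software interrupt!", 10  ; the text to print (10 = newline)
msg_len  equ $ - msg                                 ; its length in bytes

section .text
global _start
_start:
    mov eax, 4          ; system call number 4 = write (32-bit Linux convention)
    mov ebx, 1          ; file descriptor 1 = standard output
    mov ecx, msg        ; address of the text
    mov edx, msg_len    ; number of bytes to write
    int 0x80            ; software interrupt: trap into the operating system
    mov eax, 1          ; system call number 1 = exit
    mov ebx, 0          ; exit code 0
    int 0x80            ; trap into the operating system again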
Interrupt Handling Mechanisms
Interrupt handling involves a series of steps performed by the CPU to respond to an
interrupt, as the sketch following this list illustrates:
1. Interrupt Detection: The CPU detects an interrupt request from an external
hardware device or as a result of a software-triggered event.
2. Interrupt Acknowledgment: The CPU acknowledges the interrupt request and
determines its type (hardware or software).
3. Interrupt Service Routine (ISR) Invocation: The CPU transfers control to the
appropriate Interrupt Service Routine (ISR) associated with the interrupt type.
4. ISR Execution: The ISR executes the necessary tasks to handle the interrupt,
which may involve saving the CPU's current state, processing the interrupt, and
performing any required actions.
5. Interrupt Completion: Once the ISR completes its tasks, control returns to the
interrupted program or task, and normal execution resumes.
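As an illustration of steps 3 to 5, the skeleton below shows the general shape of an interrupt service routine in x86 assembly. The handler name and the device-specific work are placeholders, and the final out instruction assumes the legacy PC interrupt controller (PIC), which expects an end-of-interrupt command at port 0x20:
device_isr:
    pushad              ; step 4 begins: save the interrupted program's registers
    ; ... device-specific handling goes here (read data, update a counter, etc.)
    mov  al, 0x20       ; end-of-interrupt command
    out  0x20, al       ; tell the legacy PIC the interrupt has been serviced
    popad               ; restore the saved registers
    iret                ; step 5: return control to the interrupted program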
Importance and Applications of Interrupts in Computer Systems
Interrupts are essential for efficient multitasking and real-time processing in computer
systems. They allow the CPU to respond promptly to external events and handle
concurrent tasks without wasting processing time. Interrupts are widely used in various
applications, including:
Input/Output (I/O) operations: Interrupts facilitate communication between the CPU
and external devices, enabling efficient data transfer and device control.
Time-sensitive operations: Interrupts are used to handle timer events and maintain
system timing for tasks such as scheduling, task switching, and real-time processing.
Error handling: Interrupts help detect and handle hardware errors, software exceptions,
and other abnormal conditions that may occur during program execution.
System management: Interrupts are used by the operating system to manage system
resources, handle system calls, and provide services to user programs.
REVIEW QUESTION
SECTION D
Topic: Assembly language programming, addressing modes
Introduction to low-level programming languages
Low-level programming languages are like the building blocks of computer software,
allowing programmers to communicate directly with the hardware. Unlike high-level
languages such as Python or Java, which offer abstraction and readability, low-level
languages like assembly language and machine code operate closer to the hardware,
providing detailed control over the computer's resources. In low-level languages,
programmers work with instructions that correspond directly to the machine's binary
code, manipulating memory, registers, and input/output operations at a granular level.
While low-level programming requires a deeper understanding of computer
architecture, it offers unparalleled efficiency and precision, making it essential for tasks
like device driver development, operating system design, and embedded systems
programming.
Basics of assembly language
Starting to learn assembly language requires setting up the necessary tools and
resources. Here's a step-by-step guide to help you get started:
1. Choose an Assembly Language and Platform
Assembly language varies depending on the processor architecture and platform you're
targeting (e.g., x86 for Intel-based PCs, ARM for mobile devices). Decide on the specific
assembly language and platform you want to learn.
2. Install an Assembler
Assemblers are tools that convert assembly language code into machine code
executable by the CPU. Choose and install an assembler that supports the assembly
language and platform you've chosen. Some popular assemblers include NASM
(Netwide Assembler) for x86 assembly and GNU Assembler (GAS) for various
architectures.
3. Choose a Text Editor
You'll need a text editor to write your assembly code. Choose a simple text editor or an
integrated development environment (IDE) that supports syntax highlighting for
assembly language. Examples include Visual Studio Code, Sublime Text, or Notepad++.
4. Syntax
Syntax in assembly language refers to the rules and conventions used to write assembly
code. It includes the structure of instructions, labels, comments, and directives.
Assembly language instructions consist of an operation mnemonic followed by
operands. For example, the syntax for adding two numbers in x86 assembly language
is:
add eax, ebx ; Adds the contents of EBX register to EAX register
5. Registers
Registers are small, high-speed storage locations within the CPU that hold data
temporarily during program execution. In x86 assembly language, common registers
include EAX, EBX, ECX, and EDX, among others. Each register has a specific purpose,
such as holding data, addresses, or status flags. For example:
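As a small illustration (the values and the label name are arbitrary), the snippet below uses ECX in one of its typical roles as a loop counter and EAX as an accumulator for a running total:
    mov ecx, 5          ; ECX often serves as a loop counter
    mov eax, 0          ; EAX commonly accumulates arithmetic results
add_loop:
    add eax, ecx        ; add the current counter value to the running total
    dec ecx             ; count down
    jnz add_loop        ; repeat until ECX reaches zero; EAX ends up holding 15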
6. Instructions
Instructions in assembly language correspond directly to machine code instructions
executed by the CPU. They perform operations such as arithmetic, logical, data
movement, and control flow. Each instruction consists of an operation mnemonic
followed by operands. For example:
mov eax, ebx ; Move the contents of EBX register into EAX register
add eax, 10 ; Add the value 10 to the EAX register
sub ebx, ecx ; Subtract the contents of ECX register from the EBX register
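Putting these pieces together, the listing below is a sketch of a complete NASM-style source file, assuming a 32-bit Linux environment in which system call number 1 (requested with int 0x80) terminates the process; the computed value is returned as the program's exit code. The file name is arbitrary:
; assemble and link (assumed 32-bit Linux toolchain):
;   nasm -f elf32 add_exit.asm -o add_exit.o
;   ld -m elf_i386 add_exit.o -o add_exit

section .text
global _start
_start:
    mov eax, 5          ; load the first value into EAX
    add eax, 10         ; add the second value; EAX now holds 15
    mov ebx, eax        ; the exit code is passed in EBX
    mov eax, 1          ; system call number 1 = exit (32-bit Linux convention)
    int 0x80            ; software interrupt: ask the operating system to terminate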
REVIEW QUESTION
SECTION E
In this section, we shall consider some case studies and modern trends in the discussion
of computer architecture and organization. You will be required to do some research
here and make a presentation thereafter. (Details of what to do will be given during the classes.)
Each department has been assigned a part to do; carefully undertake this assessment
and make all necessary submissions within three weeks.
Submissions will be through this form link [Link]
Make sure you save your work as firstname_lastname_regnumber.pdf
Case Studies
CSC Department: x86
CYB Department: ARM
Emerging trends in computer architecture
IFT Department: Quantum computing
SOE Department: Neuromorphic computing