FEDERAL UNIVERSITY OF TECHNOLOGY OWERRI

SCHOOL OF INFORMATION AND COMMUNICATION TECHNOLOGY


BY: P.K. Joseph (MSc, PhD in view)
Course Code: CIT204
Course Title: Computer Architecture and Organization I
Course Unit: 2
Department(s): CSC, CYB, SOE, IFT
Course Outlines PART B:
Assembly language programming, addressing modes. Design assignments. Review of
computer systems basic structure and types. Differences between computer
architecture and computer organization. Review of Computer operation, fetch and
execute cycle, interrupts, stack operations.

SECTION A
Topic: Computer Architecture and Computer Organization
Introduction
Definition of Computer Architecture and Computer Organization
Computer architecture refers to the design of a computer system, including its
structure and the way its components interact with each other. It involves
understanding how the hardware components like the CPU, memory, and input/output
devices are organized and connected to each other to enable the computer to perform
tasks efficiently. Think of it as the blueprint or design plan for building a computer,
outlining how all the parts work together to execute programs and process data.
Computer organization, on the other hand, focuses on the operational aspects of a
computer system. It deals with how the various hardware components are arranged
and controlled to carry out tasks. This includes the specific details of how data is
processed, stored, and transferred within the computer. Computer organization is
concerned with optimizing the performance of the hardware components to achieve
efficient computation. It's like the actual implementation or arrangement of the
components based on the architectural design to make the computer function
effectively.
Why Computer Architecture?
Understanding the inner workings of computers is crucial for several reasons:
1. Empowerment: Understanding how computers work empowers individuals to
use technology more effectively. It's like knowing how to drive a car - you can do
more than just sit in the passenger seat and let others take control.
2. Problem-solving: Knowing the basics of computer architecture and organization
enables individuals to troubleshoot issues when things go wrong. It's like being
able to diagnose a problem with your car when it breaks down.
3. Career Opportunities: In today's digital world, computer skills are in high
demand across various industries. Understanding computer architecture opens
up career opportunities in fields like software development, cybersecurity, data
analysis, and more.
4. Innovation: Understanding the inner workings of computers fosters innovation.
It allows individuals to come up with creative solutions, develop new software or
hardware technologies, and contribute to the advancement of computing.
5. Effective Communication: Whether you're collaborating with colleagues or
explaining technical concepts to non-technical stakeholders, having a grasp of
computer architecture helps in effective communication. It's like speaking the
language of computers, which facilitates smoother interactions in the tech
world.
6. Critical Thinking: Learning about computer architecture encourages critical
thinking and problem-solving skills. It involves breaking down complex systems
into smaller components, analyzing their interactions, and finding optimal
solutions.
7. Adaptability: Technology is constantly evolving, and understanding the
fundamentals of computer architecture prepares individuals to adapt to new
technologies and stay relevant in a rapidly changing digital landscape.

REVIEW QUESTION

Using 10 aspect areas, highlight the differences between computer architecture and computer organization.

TODO

Form groups; within each group, take turns explaining to your members your understanding of computer architecture and computer organization using some form of analogy!
Let us now examine the differences between computer architecture and computer
organization based on several aspect areas:

DEFINITION
Computer Architecture: Focuses on the design and attributes of a computer system's hardware and software interface.
Computer Organization: Concerned with the operational aspects and how components are interconnected to execute instructions.

SCOPE
Computer Architecture: Broadly covers the overall structure, behavior, and attributes of a computer system.
Computer Organization: Deals with the implementation and operational details of the hardware components within the system.

COMPONENTS
Computer Architecture: Includes the CPU, memory, input/output devices, and their interactions.
Computer Organization: Deals with CPU registers, the ALU, the control unit, the memory hierarchy, and their organization within the system.

FOCUS
Computer Architecture: Emphasizes the instruction set architecture (ISA) and the design principles of the hardware and software interface.
Computer Organization: Emphasizes the actual implementation and operational details of the hardware components.

LEVEL OF ABSTRACTION
Computer Architecture: Operates at a higher level of abstraction, focusing on the overall system structure and design concepts.
Computer Organization: Operates at a lower level of abstraction, focusing on the internal details and functionalities of components.

DESIGN DECISIONS
Computer Architecture: Involves decisions related to instruction set design, addressing modes, and memory organization.
Computer Organization: Involves decisions related to CPU design, memory hierarchy, instruction cycle, and data path design.

OPTIMIZATION
Computer Architecture: Focuses on optimizing performance, power efficiency, and compatibility with software applications.
Computer Organization: Focuses on optimizing hardware utilization, data transfer rates, and system reliability.
SECTION B
Topic: Review of computer systems basic structure and types
Historical Overview {For your reading pleasure}
"Computer Organization and Design: The Hardware/Software Interface" by David A. Patterson and John L. Hennessy.
"Structured Computer Organization" by Andrew S. Tanenbaum.
"Computer Architecture: A Quantitative Approach" by John L. Hennessy and David A. Patterson.

Basic components of a computer system
A computer system comprises several basic components that work together to execute
programs and process data. At its core is the Central Processing Unit (CPU), often
referred to as the brain of the computer, which carries out instructions and performs
arithmetic and logical operations. The CPU interacts with memory, including Random
Access Memory (RAM) for temporarily storing data and program instructions, and
Read-Only Memory (ROM) for holding firmware and essential system instructions.
Input devices such as keyboards, mice, and touchscreens allow users to input data,
while output devices like monitors, printers, and speakers display or present processed
information. Storage devices such as hard disk drives (HDDs) and solid-state drives
(SSDs) store data permanently or semi-permanently. These components are
interconnected through the motherboard, which provides power and data connections,
facilitating communication and data transfer within the system. Together, these basic
components form the foundation of a computer system, enabling it to perform a wide
range of tasks efficiently.
1. CPU Architecture
Role of the CPU in data processing
The Central Processing Unit (CPU) serves as the "brain" of a computer,
responsible for executing instructions and performing calculations required for
data processing. The CPU's role is crucial in processing data efficiently and
effectively. Some of the roles of the CPU in data processing, along with specific
examples, include:
a) Fetching Instructions: The CPU fetches instructions from the computer's
memory, decoding them to understand the operation to be performed.
Example: Consider a simple instruction like "add two numbers." The CPU fetches
this instruction from memory and decodes it to understand that it needs to
perform an addition operation.
b) Executing Instructions: Once the instructions are fetched and decoded, the CPU
executes them by performing the required operations, such as arithmetic
calculations, logical comparisons, or data movement.
Example: If the CPU fetches an instruction to add two numbers, it performs the
addition operation by fetching the numbers from memory, adding them
together using its Arithmetic Logic Unit (ALU), and storing the result back into
memory.
c) Control Unit Operation: The CPU's Control Unit coordinates and controls the
execution of instructions, managing the flow of data between different
components within the CPU and between the CPU and other parts of the
computer system.
Example: When executing a program, the Control Unit ensures that instructions
are executed in the correct sequence and that data is transferred to and from the
appropriate locations in memory.
d) Data Processing: The CPU processes data by manipulating it according to the
instructions it receives. This includes tasks such as performing arithmetic
operations, comparing values, moving data between memory locations, and
executing logical operations.
Example: In a spreadsheet application, the CPU processes data by performing
calculations on the numbers entered into cells, such as adding up a column of
numbers or calculating averages.
e) Parallel Processing: Modern CPUs often feature multiple cores, allowing them
to execute multiple instructions simultaneously through parallel processing. This
enhances the CPU's ability to process large amounts of data more quickly.
Example: In a multi-core CPU, different cores can work on separate tasks
concurrently, such as running multiple applications simultaneously or processing
multiple threads within a single application.
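As a small, hypothetical illustration of cores working on separate tasks at the same time, the Python sketch below splits a large sum into chunks and hands each chunk to a separate worker process; the chunk boundaries and the worker count of four are arbitrary choices made purely for this example.

from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    """Sum one chunk of the range; each chunk may run on a different CPU core."""
    start, stop = bounds
    return sum(range(start, stop))

if __name__ == "__main__":
    chunks = [(0, 250_000), (250_000, 500_000), (500_000, 750_000), (750_000, 1_000_000)]
    with ProcessPoolExecutor(max_workers=4) as pool:
        total = sum(pool.map(partial_sum, chunks))  # the four chunks are processed concurrently
    print("Sum of 0..999,999 =", total)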
Components of the CPU
i. ALU (Arithmetic Logic Unit)
The Arithmetic Logic Unit (ALU) is like the calculator of a computer, performing
all the math and logical operations needed to process data. It's responsible for
tasks like adding, subtracting, multiplying, and dividing numbers, as well as
comparing numbers to see if they're equal, greater than, or less than each other.
Additionally, the ALU handles logical operations, which involve making decisions
based on conditions like "if this is true, then do that." For example, if you're
playing a game and your character needs to jump over an obstacle, the ALU
helps the computer decide whether your character's jump is high enough to clear
the obstacle. In simpler terms, the ALU is the part of the computer that does all
the math and makes decisions based on the numbers it's working with.
ii. Control Unit
In simple terms, the Control Unit is like the manager of a computer. Its main job
is to oversee and coordinate all the activities happening inside the computer's
brain, known as the Central Processing Unit (CPU). The Control Unit tells the CPU
what to do next by fetching instructions from the computer's memory, decoding
them to understand what they mean, and then directing the CPU to carry out
those instructions. It manages the flow of data within the CPU and between the
CPU and other parts of the computer system, ensuring that everything happens
in the right order and at the right time. Just like a manager keeps things running
smoothly in a company, the Control Unit keeps the computer running smoothly
by making sure all the different parts work together properly.
iii. Registers
Registers are small, super-fast storage areas inside the Central Processing Unit
(CPU) of a computer. They're like tiny workspaces where the CPU can quickly access
and store small pieces of information it needs to perform calculations and carry out
instructions. Think of them as the CPU's scratchpad or temporary memory, where it
keeps track of important data while it's working on tasks. Registers are used to store
things like numbers, addresses in memory, and instructions that the CPU needs to
execute. Because they're so fast and located directly within the CPU, registers help
speed up the computer's performance by allowing it to quickly retrieve and
manipulate data without having to constantly access the slower main memory.
Overall, registers play a crucial role in enabling the CPU to efficiently process
information and execute programs.

2. Memory Hierarchy
Types of memory:
i. RAM (Random Access Memory)
RAM is a type of volatile memory used by computers to temporarily store data
and program instructions that the CPU needs to access quickly.
It is called "random access" because any memory cell can be accessed directly
and quickly, regardless of its location.
RAM loses its data when the computer is turned off, which is why it's considered
volatile.
RAM is essential for running programs and multitasking, as it provides fast
access to data and instructions.
Examples of RAM types include DDR (Double Data Rate) SDRAM and its successive
generations DDR2, DDR3, DDR4, and DDR5, which offer progressively higher speeds
and capacities.
ii. ROM (Read-Only Memory)
ROM is a type of non-volatile memory that stores firmware or permanent
instructions that are not meant to be modified.
It contains essential system instructions, such as the BIOS (Basic Input/Output
System), which initializes hardware components during the computer's boot-up
process.
Unlike RAM, ROM retains its data even when the computer is powered off,
making it non-volatile.
ROM is called "read-only" because its contents cannot be easily modified or
overwritten by the user.
Examples of ROM include PROM (Programmable Read-Only Memory), EPROM
(Erasable Programmable Read-Only Memory), and EEPROM (Electrically
Erasable Programmable Read-Only Memory), each with different characteristics
regarding programmability and erasability.
iii. Cache Memory
Cache memory is a small, high-speed memory located on the CPU chip or in
close proximity to it.
It serves as a buffer between the CPU and main memory (RAM), storing
frequently accessed data and instructions to speed up data processing.
Cache memory operates on the principle of locality, exploiting the tendency of
programs to access the same data or instructions repeatedly (a small timing sketch
illustrating locality appears after this list).
There are different levels of cache memory, including L1 (Level 1), L2 (Level 2),
and sometimes L3 cache, with each level providing progressively larger storage
capacity and slightly slower access speeds.
iv. Secondary Storage (Hard drives, SSDs)
Secondary storage refers to non-volatile storage devices used to store data
permanently or semi-permanently.
Examples of secondary storage devices include hard disk drives (HDDs) and
solid-state drives (SSDs), which store data magnetically and in flash memory,
respectively.
Secondary storage devices have larger capacities compared to RAM and ROM
but are slower in terms of data access and retrieval.
They are used for long-term storage of operating systems, applications, user
files, and other data that needs to be retained even when the computer is
powered off.
SSDs are becoming increasingly popular due to their faster read and write
speeds compared to traditional HDDs, although they are typically more
expensive.
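As a rough, hypothetical illustration of the principle of locality mentioned under cache memory, the Python sketch below times a sequential pass over an array against a strided pass over the same data. The array size and stride are arbitrary, and in pure Python the interpreter's own overhead often masks much of the cache effect, so treat any timing gap as illustrative only.

import time

N = 5_000_000
data = list(range(N))

# Sequential pass: consecutive elements, so neighbouring data fetched together is reused.
start = time.perf_counter()
total = 0
for i in range(N):
    total += data[i]
sequential = time.perf_counter() - start

# Strided pass over the same N elements: large jumps through memory reduce reuse.
stride = 4096
total = 0
start = time.perf_counter()
for offset in range(stride):
    for i in range(offset, N, stride):
        total += data[i]
strided = time.perf_counter() - start

print(f"Sequential pass: {sequential:.3f} s, strided pass: {strided:.3f} s")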

3. Input/Output Organization
Input/Output devices and their interfaces.
Input/output (I/O) devices and their interfaces are essential components of a
computer system, facilitating communication between the user and the
machine. Input devices enable users to input data and commands into the
computer, while output devices present processed information to the user.
Common input devices include keyboards, mice, touchscreens, and
microphones, allowing users to interact with the computer by typing, clicking,
tapping, or speaking. Output devices such as monitors, printers, speakers, and
projectors display or present processed data, enabling users to visualize
information, print documents, listen to audio, or view presentations. Interfaces,
such as USB (Universal Serial Bus), HDMI (High-Definition Multimedia Interface),
and Ethernet, serve as communication channels between the computer and
external devices, enabling data transfer and connectivity. Together, input/output
devices and their interfaces play a crucial role in facilitating user interaction with
the computer and exchanging information between the computer system and
external devices.
I/O communication methods
This refers to the various techniques and protocols used to facilitate
communication between input/output (I/O) devices and the central processing
unit (CPU) within a computer system. We shall examine different ways in which
data is transferred between the CPU and external devices such as keyboards,
mice, printers, and storage devices.
i. Programmed I/O (polling)
Programmed I/O is the simplest and most basic method of I/O
communication, also known as polling.
In this method, the CPU continuously checks the status of the I/O device
by repeatedly reading a status register or flag associated with the device.
The CPU waits in a loop, periodically checking if the device is ready to
send or receive data.
When the device is ready, the CPU itself transfers data to or from the device
using ordinary data transfer (I/O) instructions; no separate controller is involved.
Programmed I/O is straightforward to implement but can be inefficient, as it
ties up the CPU, wasting valuable processing time waiting for I/O operations
to complete (a short sketch contrasting polling with interrupt-driven I/O
follows after this list).
ii. Interrupt-driven I/O
In interrupt-driven I/O, the CPU is not actively involved in continuously
polling the I/O device.
Instead, the CPU initiates an I/O operation and continues executing other
tasks while waiting for the device to signal completion.
When the I/O device has completed its operation or requires attention, it
sends an interrupt signal to the CPU, causing the CPU to temporarily
suspend its current task and service the interrupt.
The CPU then executes an interrupt service routine (ISR) to handle the
interrupt, which typically involves transferring data between the device
and memory.
Interrupt-driven I/O allows the CPU to perform other tasks while waiting
for I/O operations to complete, improving overall system efficiency
compared to programmed I/O.
iii. Direct Memory Access (DMA)
DMA is an I/O communication method that enables data transfer between
an I/O device and memory without involving the CPU.
In DMA, a specialized DMA controller is used to manage data transfer
between the I/O device and memory independently of the CPU.
The CPU initiates the DMA transfer by setting up the DMA controller with
the necessary transfer parameters, such as the source and destination
addresses and the number of bytes to transfer.
Once configured, the DMA controller takes control of the system bus and
transfers data directly between the I/O device and memory, bypassing the
CPU.
DMA significantly reduces CPU overhead and improves system
performance by offloading data transfer tasks from the CPU to the DMA
controller, allowing the CPU to focus on other tasks.
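The sketch below, written in Python, contrasts programmed I/O with interrupt-driven I/O by counting how CPU time slices are spent while waiting on a hypothetical device; the Device class, its readiness counter, and the time-slice model are all invented for illustration and do not represent real hardware status registers or interrupt lines.

import random

class Device:
    """A hypothetical device that becomes ready after a random number of CPU time slices."""
    def __init__(self):
        self.slices_until_ready = random.randint(5, 10)
    def tick(self):
        self.slices_until_ready -= 1
    def is_ready(self):
        return self.slices_until_ready <= 0
    def read(self):
        return 42  # the data the device eventually delivers

def programmed_io(device):
    """Polling: every time slice is spent checking the device's status flag."""
    wasted = 0
    while not device.is_ready():
        device.tick()
        wasted += 1        # a slice burned on a status check; no useful work done
    return device.read(), wasted

def interrupt_driven_io(device):
    """Interrupt-driven: slices go to other work until the device signals readiness."""
    useful = 0
    while not device.is_ready():
        device.tick()
        useful += 1        # a slice spent on other useful work instead of polling
    return device.read(), useful   # this read stands in for the interrupt service routine

data, wasted = programmed_io(Device())
print(f"Programmed I/O: read {data}, wasted {wasted} time slices on polling")
data, useful = interrupt_driven_io(Device())
print(f"Interrupt-driven I/O: read {data}, did {useful} slices of useful work while waiting")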

REVIEW QUESTION

What are the primary differences between programmed I/O, interrupt-driven I/O, and Direct Memory Access (DMA) as I/O communication methods, and how do these differences impact system efficiency and CPU utilization?

SECTION C
Topic: Computer operation (fetch and execute cycle, interrupts, stack operations)

[Image extracted from CSC Wiki: the fetch and execute cycle]

The Fetch and Execute Cycle is the fundamental process by which a CPU (Central Processing Unit) retrieves instructions from memory, interprets them, and performs the necessary operations. It is a continuous loop that allows the CPU to execute instructions sequentially, one after the other.
The cycle consists of the following phases:
a) Fetch Phase: In this phase, the CPU retrieves the next instruction from memory using the program counter (PC), which stores the memory address of the next instruction to be executed. The instruction is then loaded into the instruction register (IR) within the CPU.
b) Decode Phase: Once the instruction is fetched, the CPU decodes it to determine
the operation to be performed and the operands involved. This involves
interpreting the opcode (operation code) and any accompanying operands or
addressing modes.
c) Execute Phase: In this phase, the CPU performs the operation specified by the
decoded instruction. This could involve arithmetic or logical operations, data
movement, control flow changes, or interactions with I/O devices.
After the execution phase, the cycle repeats, with the CPU fetching the next instruction
and continuing the process until the program is complete or interrupted. The Fetch and
Execute Cycle is crucial for the operation of a computer system, as it forms the basis for
executing programs and processing data. It ensures that instructions are fetched from
memory, decoded, and executed in a controlled and sequential manner, enabling the
CPU to perform complex tasks and execute programs efficiently.
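To make the cycle concrete, here is a minimal sketch, in Python, of a fetch-decode-execute loop for a hypothetical toy machine. The instruction names (LOAD, ADD, STORE, HALT), the single accumulator register, and the memory layout are invented purely for illustration and do not correspond to any real instruction set.

# A toy fetch-decode-execute loop for a hypothetical machine.
# Each instruction is an (opcode, operand) pair; the machine has one accumulator (ACC).

memory = [
    ("LOAD", 6),     # address 0: copy the value at address 6 into ACC
    ("ADD", 7),      # address 1: add the value at address 7 to ACC
    ("STORE", 8),    # address 2: store ACC into address 8
    ("HALT", None),  # address 3: stop the machine
    None, None,      # addresses 4-5: unused
    10, 32, 0,       # addresses 6-8: data
]

pc = 0    # program counter: holds the address of the next instruction
acc = 0   # accumulator register

while True:
    ir = memory[pc]           # FETCH: copy the instruction at PC into the instruction register
    pc += 1                   # advance the PC to the next instruction
    opcode, operand = ir      # DECODE: split the instruction into opcode and operand
    if opcode == "LOAD":      # EXECUTE: carry out the decoded operation
        acc = memory[operand]
    elif opcode == "ADD":
        acc += memory[operand]
    elif opcode == "STORE":
        memory[operand] = acc
    elif opcode == "HALT":
        break

print("Result stored at address 8:", memory[8])   # prints 42

Each pass through the loop mirrors the three phases described above, and the loop repeats until the HALT instruction ends the program.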

Introduction to Interrupts
Interrupts are signals generated by hardware
devices or software to request attention from
the CPU. When an interrupt occurs, the CPU
temporarily suspends its current execution to
handle the interrupt request. Interrupts are
used to handle events that require immediate
attention, such as input/output (I/O)
operations, timer events, or hardware errors.
They allow the CPU to efficiently manage
multiple tasks and respond promptly to
external events without wasting processing time.
Types of Interrupts: Hardware and Software:
Hardware Interrupts: These interrupts are generated by external hardware devices to
request attention from the CPU. Examples include I/O interrupts, timer interrupts, and
hardware malfunction interrupts.
Software Interrupts: Also known as traps or exceptions, software interrupts are
generated by the CPU in response to specific conditions detected during program
execution. Examples include division by zero, invalid memory access, or system calls
made by programs to request operating system services.
Interrupt Handling Mechanisms
Interrupt handling involves a series of steps performed by the CPU to respond to an
interrupt:
1. Interrupt Detection: The CPU detects an interrupt request from an external
hardware device or as a result of a software-triggered event.
2. Interrupt Acknowledgment: The CPU acknowledges the interrupt request and
determines its type (hardware or software).
3. Interrupt Service Routine (ISR) Invocation: The CPU transfers control to the
appropriate Interrupt Service Routine (ISR) associated with the interrupt type.
4. ISR Execution: The ISR executes the necessary tasks to handle the interrupt,
which may involve saving the CPU's current state, processing the interrupt, and
performing any required actions.
5. Interrupt Completion: Once the ISR completes its tasks, control returns to the
interrupted program or task, and normal execution resumes.
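The five steps above can be mirrored in a small, hypothetical Python sketch; the register names, the interrupt types, and the handler table are invented for illustration (a real CPU saves state on the stack in hardware and looks up handlers through an interrupt vector table).

cpu_state = {"pc": 100, "acc": 7}      # state of the program that gets interrupted

interrupt_vector = {                    # maps interrupt types to their service routines
    "timer":    lambda: print("ISR: timer tick handled"),
    "keyboard": lambda: print("ISR: key press read into a buffer"),
}

def handle_interrupt(interrupt_type):
    # Steps 1-2: detection and acknowledgment (here, simply receiving the request)
    print(f"Interrupt detected and acknowledged: {interrupt_type}")
    # Step 3: save the interrupted program's state before invoking the ISR
    saved_state = dict(cpu_state)
    # Step 4: execute the matching Interrupt Service Routine
    interrupt_vector[interrupt_type]()
    # Step 5: restore the saved state and resume the interrupted program
    cpu_state.update(saved_state)
    print(f"Resuming interrupted program at PC={cpu_state['pc']}")

handle_interrupt("timer")
handle_interrupt("keyboard")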
Importance and Applications of Interrupts in Computer Systems
Interrupts are essential for efficient multitasking and real-time processing in computer
systems. They allow the CPU to respond promptly to external events and handle
concurrent tasks without wasting processing time. Interrupts are widely used in various
applications, including:
Input/Output (I/O) operations: Interrupts facilitate communication between the CPU
and external devices, enabling efficient data transfer and device control.
Time-sensitive operations: Interrupts are used to handle timer events and maintain
system timing for tasks such as scheduling, task switching, and real-time processing.
Error handling: Interrupts help detect and handle hardware errors, software exceptions,
and other abnormal conditions that may occur during program execution.
System management: Interrupts are used by the operating system to manage system
resources, handle system calls, and provide services to user programs.
REVIEW QUESTION

This is the only review question that requires feedback.
First, do a personal study of the topic "Stack Operations", then use this form link [Link] and answer the questions therein.
TIPS
Ensure your feedback and answers are unique and complete.

SECTION D
Topic: Assembly language programming, addressing modes
Introduction to low-level programming languages
Low-level programming languages are like the building blocks of computer software,
allowing programmers to communicate directly with the hardware. Unlike high-level
languages such as Python or Java, which offer abstraction and readability, low-level
languages like assembly language and machine code operate closer to the hardware,
providing detailed control over the computer's resources. In low-level languages,
programmers work with instructions that correspond directly to the machine's binary
code, manipulating memory, registers, and input/output operations at a granular level.
While low-level programming requires a deeper understanding of computer
architecture, it offers unparalleled efficiency and precision, making it essential for tasks
like device driver development, operating system design, and embedded systems
programming.
Basics of assembly language
Starting to learn assembly language requires setting up the necessary tools and
resources. Here's a step-by-step guide to help you get started:
1. Choose an Assembly Language and Platform
Assembly language varies depending on the processor architecture and platform you're
targeting (e.g., x86 for Intel-based PCs, ARM for mobile devices). Decide on the specific
assembly language and platform you want to learn.
2. Install an Assembler
Assemblers are tools that convert assembly language code into machine code
executable by the CPU. Choose and install an assembler that supports the assembly
language and platform you've chosen. Some popular assemblers include NASM
(Netwide Assembler) for x86 assembly and GNU Assembler (GAS) for various
architectures.
3. Choose a Text Editor
You'll need a text editor to write your assembly code. Choose a simple text editor or an
integrated development environment (IDE) that supports syntax highlighting for
assembly language. Examples include Visual Studio Code, Sublime Text, or Notepad++.
4. Syntax
Syntax in assembly language refers to the rules and conventions used to write assembly
code. It includes the structure of instructions, labels, comments, and directives.
Assembly language instructions consist of an operation mnemonic followed by
operands. For example, the syntax for adding two numbers in x86 assembly language
is:

add eax, ebx ; Adds the contents of EBX register to EAX register

5. Registers
Registers are small, high-speed storage locations within the CPU that hold data
temporarily during program execution. In x86 assembly language, common registers
include EAX, EBX, ECX, and EDX, among others. Each register has a specific purpose,
such as holding data, addresses, or status flags. For example:

mov eax, 10 ; Move the value 10 into the EAX register
mov ebx, 20 ; Move the value 20 into the EBX register

6. Instructions
Instructions in assembly language correspond directly to machine code instructions
executed by the CPU. They perform operations such as arithmetic, logical, data
movement, and control flow. Each instruction consists of an operation mnemonic
followed by operands. For example:
mov eax, ebx ; Move the contents of EBX register into EAX register
add eax, 10 ; Add the value 10 to the EAX register
sub ebx, ecx ; Subtract the contents of ECX register from the EBX register

REVIEW QUESTION

Assembly language and machine code are very similar in literature and practice; outline 10 relationships that exist between them.
TIPS
You may need to do further studies on assembly language
programming.

SECTION E
In this section, we shall consider some case studies and modern trends in computer architecture and organization. You will be required to do some research here and make a presentation thereafter. (What to do will be given during the classes.)
Each department has been assigned a part; carefully undertake this assessment and make all necessary submissions within three weeks.
Submissions will be through this form link [Link]
Make sure you save your work as firstname_lastname_regnumber.pdf

Case Studies
CSC Department: x86
CYB Department: ARM
Emerging trends in computer architecture
IFT Department: Quantum computing
SOE Department: Neuromorphic computing
