Microprocessor Systems Learning Module: Chapter 1
1. Fundamentals of Microprocessors
A microprocessor is an integrated circuit that serves as the Central Processing Unit (CPU)
of a computer: it executes instructions and controls electronic devices.
The four main functions of a microprocessor:
o Fetch: Retrieve instructions from memory.
o Decode: Interpret the instructions.
o Execute: Perform the operation.
o Write-back: Store the result.
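The four-step cycle above can be sketched as a toy simulator. The instruction set here (LOAD, ADD, HALT) is invented purely for illustration and does not correspond to any real processor:

```python
# A minimal sketch of the fetch-decode-execute-write-back cycle.
# The LOAD/ADD/HALT instruction set is hypothetical.

def run(program):
    acc = 0  # accumulator register
    pc = 0   # program counter
    while True:
        op, arg = program[pc]   # Fetch: read the instruction at the PC
        pc += 1
        if op == "LOAD":        # Decode + Execute
            result = arg
        elif op == "ADD":
            result = acc + arg
        elif op == "HALT":
            return acc
        acc = result            # Write-back: store the result

program = [("LOAD", 5), ("ADD", 7), ("HALT", 0)]
print(run(program))  # 12
```

A real CPU performs the same loop in hardware, with the program counter, decoder, and registers implemented as digital circuits.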
Microprocessor properties:
o Clock Speed (Hz): Determines processing speed.
o Word Length: Number of bits processed per cycle (e.g., 8-bit, 16-bit, 32-bit, 64-bit).
o Cache Memory: Temporary high-speed memory for faster processing.
o Number of Cores: Multi-core processors allow parallel processing.
2. Evolution of Microprocessors (Intel Series)
4-bit Microprocessors
Intel 4004 (1971): First commercially available microprocessor.
8-bit Microprocessors
Intel 8008 (1972): First 8-bit processor.
Intel 8080 (1974): Faster than the 8008 and used in early personal computers.
Intel 8085 (1976): Included a built-in clock generator and ran from a single +5 V supply.
16-bit Microprocessors
Intel 8086 (1978): First x86 architecture processor.
Intel 8088 (1979): Used in IBM PC, had an 8-bit data bus.
Intel 80286 (1982): Introduced protected mode, enabling virtual memory support.
32-bit Microprocessors
Intel 80386 (1985): Intel's first 32-bit x86 microprocessor.
Intel 80486 (1989): First x86 processor with on-chip cache memory.
Modern Processors
Pentium Series (1993-2000s): Became dominant in personal computers.
Core i3/i5/i7 (2008-2010s): Used in modern PCs; added Turbo Boost and brought
Hyper-Threading to the Core line.
3. Microprocessor Architectures
CISC vs. RISC
CISC (Complex Instruction Set Computing)
o Complex, often variable-length instructions; a single instruction can perform a multi-step operation.
o Example: Intel x86, AMD64.
RISC (Reduced Instruction Set Computing)
o Simple, fixed-length instructions designed to execute quickly, typically one per clock cycle.
o Example: ARM, RISC-V, PowerPC.
4. Microcontroller Systems
A microcontroller is a small computer on a single chip, used for specific tasks.
Components of a Microcontroller:
o CPU (Processes instructions).
o Memory (ROM/RAM/EEPROM) (Stores programs and data).
o I/O Ports (Connects to external devices like sensors).
o ADC/DAC (Converts between analog and digital signals).
o Serial Communication Interfaces (UART, SPI, I2C, USB).
Types of Microcontrollers
1. 8-bit (Intel 8051, AVR, PIC16) – Used in basic applications.
2. 16-bit (PIC24, MSP430) – Used in industrial systems.
3. 32-bit (ARM Cortex-M, STM32, ESP32) – Used in IoT, automation.
5. Key Terms & Definitions
ALU (Arithmetic Logic Unit): Performs calculations and logic operations.
CU (Control Unit): Directs data flow in the CPU.
Registers: Small, fast memory for temporary data storage.
Cache Memory: High-speed memory that stores frequently used data.
Bus System: Pathway for data transfer between components.
o Data Bus: Transfers data.
o Address Bus: Specifies memory locations.
o Control Bus: Sends control signals.
Interrupts: Signals that pause CPU execution for urgent tasks.
Topic: Data Representation and Number Systems Chapter 2-1
1. Introduction to Data Representation
Data Representation refers to the way data is stored and processed in a computer.
Computers understand only binary (0s and 1s), so all data (numbers, text, images, audio,
and video) must be converted into binary.
2. Number Systems and Conversions
Conversions:
Binary to Decimal: Multiply each bit by 2ⁿ and sum.
o Example: 1011₂ = (1×2³) + (0×2²) + (1×2¹) + (1×2⁰) = 11₁₀
Decimal to Binary: Repeatedly divide by 2 and take the remainders.
o Example: 25₁₀ → 11001₂
Hexadecimal to Binary: Convert each hex digit into a 4-bit binary equivalent.
o Example: 3A₁₆ → 0011 1010₂
Binary to Hexadecimal: Group bits into 4s and convert.
o Example: 10110110₂ → B6₁₆
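The four conversions above can be checked directly with Python's built-in base handling:

```python
# Reproducing the conversion examples with Python built-ins.
print(int("1011", 2))           # binary to decimal  -> 11
print(format(25, "b"))          # decimal to binary  -> 11001
print(format(0x3A, "08b"))      # hex to binary      -> 00111010
print(format(0b10110110, "X"))  # binary to hex      -> B6
```

int(s, base) parses a string in any base from 2 to 36, and format() with the "b" or "X" specifier prints the binary or uppercase-hex form.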
3. Character Representation (Encoding Systems)
A. ASCII (American Standard Code for Information Interchange)
7-bit encoding (128 characters) with an 8-bit extended version (256 characters).
Example:
o 'A' → 01000001₂ (65₁₀)
o '5' → 00110101₂ (53₁₀)
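The ASCII examples above can be verified with ord() (character to code point) and chr() (code point to character):

```python
# ASCII codes of the characters used in the examples.
print(ord("A"), format(ord("A"), "08b"))  # 65 01000001
print(ord("5"), format(ord("5"), "08b"))  # 53 00110101
print(chr(65))                            # A
```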
B. EBCDIC (Extended Binary Coded Decimal Interchange Code)
8-bit encoding used mainly in IBM mainframes.
Example:
o 'A' → 11000001₂ (193₁₀)
C. Unicode
Supports all global languages and symbols.
Encoding types:
o UTF-8: 1-4 bytes, backward compatible with ASCII.
o UTF-16: 2 or 4 bytes.
o UTF-32: Fixed 4-byte representation.
Example:
o 'A' → U+0041
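The three Unicode encodings store the same code point in different numbers of bytes, which the str.encode() method makes visible ('€' is U+20AC, added here as an extra example of a multi-byte UTF-8 character):

```python
# 'A' (U+0041) in the three Unicode encoding forms.
for enc in ("utf-8", "utf-16-be", "utf-32-be"):
    print(enc, "A".encode(enc).hex())
# utf-8 41
# utf-16-be 0041
# utf-32-be 00000041

# A character outside ASCII needs 3 bytes in UTF-8:
print("€".encode("utf-8").hex())  # e282ac
```

Note how UTF-8 uses a single byte (41) for 'A', identical to its ASCII code, which is what "backward compatible with ASCII" means.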
4. Integer Representation
A. Unsigned Integers
Represent only non-negative numbers (0 to 255, i.e., 0 to 2⁸ − 1, for 8-bit).
Example: 1101₂ = 13₁₀
B. Signed Integers (Handling Negative Numbers)
1. Sign-Magnitude: First bit (MSB) = Sign bit (0 = positive, 1 = negative).
o Example: 10001100₂ = -12 (sign bit = 1, magnitude = 12).
2. One’s Complement: Invert all bits.
o Example: +5 → 00000101₂, -5 → 11111010₂
3. Two’s Complement: Invert all bits and add 1.
o Example: -5 → One’s complement = 11111010₂, add 1 → 11111011₂
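Two's complement can be computed in Python by masking a signed value to a fixed width, since the bitwise AND with 2ⁿ − 1 yields exactly the n-bit two's-complement pattern:

```python
def twos_complement(value, bits=8):
    """Return the n-bit two's-complement bit pattern of a signed integer."""
    return format(value & ((1 << bits) - 1), f"0{bits}b")

print(twos_complement(5))    # 00000101
print(twos_complement(-5))   # 11111011
print(twos_complement(-12))  # 11110100
```

This matches the manual procedure: invert the bits of 00000101 to get 11111010, then add 1 to get 11111011.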
5. Floating-Point Representation (IEEE 754 Standard)
Used to store real numbers (values with fractional parts).
Consists of three parts:
o Sign bit (1 bit): 0 = positive, 1 = negative.
o Exponent (8 bits for single-precision, 11 bits for double-precision).
o Mantissa (23 bits for single, 52 bits for double).
Single-Precision (32-bit) Example:
Convert -26.625₁₀ to IEEE 754:
1. Convert to binary: 26.625₁₀ = 11010.101₂
2. Normalize: 1.1010101 × 2⁴
3. Convert exponent: 4 + 127 (bias) = 131 → 10000011₂
4. Mantissa: 10101010000000000000000
5. Final representation:
o Sign bit: 1
o Exponent: 10000011
o Mantissa: 10101010000000000000000
o Final IEEE 754 binary: 1 10000011 10101010000000000000000
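The worked example can be confirmed with the standard struct module, which packs a Python float as a 32-bit IEEE 754 value and lets us inspect its bits:

```python
import struct

# Pack -26.625 as a big-endian 32-bit float, then reinterpret as an integer.
bits = struct.unpack(">I", struct.pack(">f", -26.625))[0]
b = format(bits, "032b")
print(b[0], b[1:9], b[9:])
# 1 10000011 10101010000000000000000
```

The printed sign, exponent, and mantissa fields match steps 1-5 above.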
6. Error Detection and Correction
1. Parity Bits: Adds an extra bit to detect errors.
o Even Parity: Ensures even number of 1s.
o Odd Parity: Ensures odd number of 1s.
2. Hamming Code: Corrects single-bit errors using parity bits.
3. Checksum: Used in data transmission to verify accuracy.
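Parity checking, the simplest of these schemes, can be sketched in a few lines (the helper function name is invented for this example):

```python
def add_even_parity(bits):
    """Append a parity bit so the total number of 1s is even."""
    return bits + ("0" if bits.count("1") % 2 == 0 else "1")

print(add_even_parity("1011001"))  # four 1s, already even -> 10110010
print(add_even_parity("1011"))     # three 1s -> parity bit 1 -> 10111
```

The receiver recounts the 1s: an odd total signals that some bit flipped in transit, although parity alone cannot say which bit, and it misses errors that flip an even number of bits.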
Topic: Data Representation and Number Systems Chapter 2-2
1. Introduction to Data Representation
Computers process and store all types of data as binary (0s and 1s).
Types of data representation:
o Numbers (Integers, Floating-Point)
o Characters (ASCII, EBCDIC, Unicode)
o Images, Audio, and Video (Binary encoding)
2. Number Systems and Conversions
Conversions:
Binary to Decimal: Multiply each bit by 2ⁿ and sum.
o Example: 1011₂ = (1×2³) + (0×2²) + (1×2¹) + (1×2⁰) = 11₁₀
Decimal to Binary: Divide by 2, record remainders.
Binary to Hexadecimal: Group into 4-bit chunks and convert.
o Example: 10111010₂ = BA₁₆
3. Character Encoding Standards
ASCII (American Standard Code for Information Interchange)
7-bit encoding (128 characters).
Extended ASCII (8-bit, 256 characters).
Example:
o ‘A’ = 65₁₀ = 01000001₂
o ‘a’ = 97₁₀ = 01100001₂
EBCDIC (Extended Binary Coded Decimal Interchange Code)
8-bit encoding, used in IBM systems.
Example:
o ‘A’ = C1₁₆
o ‘B’ = C2₁₆
Unicode
Supports multiple languages and symbols.
Encoding Types:
o UTF-8: 1-4 bytes per character (most common on the web).
o UTF-16: 2 or 4 bytes per character.
o UTF-32: Fixed 4-byte encoding.
Example: Unicode ‘A’ = U+0041
4. Fixed and Floating-Point Number Representation
Fixed-Point Representation
Used for integers (positive and negative).
Signed Magnitude, One’s Complement, Two’s Complement.
Example (Two’s Complement):
o +5 = 00000101₂
o -5 = 11111011₂
Floating-Point Representation (IEEE 754 Standard)
Used for real numbers (fractions, scientific notation).
Single Precision (32-bit):
o 1 bit (Sign) + 8 bits (Exponent) + 23 bits (Mantissa).
Double Precision (64-bit):
o 1 bit (Sign) + 11 bits (Exponent) + 52 bits (Mantissa).
Example:
o 3.5 in IEEE 754 (32-bit) = 0 10000000 11000000000000000000000
5. Error Detection and Correction Codes
Parity Bits: Add extra bit for even/odd parity checking.
Checksum: Sum of data values used for verification.
Hamming Code: Detects and corrects single-bit errors.
CRC (Cyclic Redundancy Check): Used in networks.
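As a sketch of the last two schemes: a simple checksum is just a modular sum of the data bytes, while CRC-32 (the variant used in Ethernet and ZIP) is available in Python's standard binascii module. The 8-bit modulo-256 checksum shown here is one common simple variant, not the only definition of "checksum":

```python
import binascii

data = b"HELLO"

# Simple 8-bit checksum: sum of byte values modulo 256.
checksum = sum(data) % 256
print(checksum)  # 116

# CRC-32 via the standard library.
print(format(binascii.crc32(data), "08x"))
```

A CRC catches far more error patterns than a plain sum (e.g., reordered bytes change the CRC but not the checksum), which is why networks use it.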