Definition of Interfacing
Interfacing is the systematic process by which a microprocessor or microcontroller communicates
with external devices, such as memory units and input/output (I/O) peripherals. It involves both
hardware design (circuitry) and software routines that enable two different components to work
together harmoniously. The external devices may include RAM, ROM, keyboards, displays, sensors,
or communication modules—each of which has its own electrical and logical requirements. Since the
microprocessor operates with a specific set of signals, voltages, timings, and protocols, it cannot
directly interact with external devices that have different characteristics. Therefore, interfacing
provides a bridge between the processor and these components, ensuring that all data transfers,
command executions, and control signals occur correctly and efficiently.
At its core, interfacing involves matching the electrical and logical signal requirements of the
external device with those of the microprocessor. This includes ensuring compatibility of voltage
levels, timing coordination, control signal logic, and data formats. The design may require using
buffers, latches, decoders, multiplexers, or specialized interfacing chips like the Intel 8255
Programmable Peripheral Interface. Additionally, software plays a crucial role in managing the
communication process, such as reading data from sensors, writing values to displays, or enabling
memory read/write operations through precise instruction sequences.
The ultimate goal of interfacing is to allow seamless integration of various subsystems into a cohesive
embedded or computing system, enabling the processor to read, write, and control external
elements reliably. Without proper interfacing, the microprocessor would remain isolated and incapable
of performing meaningful tasks in the real world, as it relies entirely on these connections to interact
with its environment.
Types of Interfacing
There are two major types of interfacing based on the devices involved:
1. Memory Interfacing
Memory interfacing is the process of connecting memory devices—such as RAM (Random Access
Memory) and ROM (Read-Only Memory)—to a microprocessor in such a way that the processor
can store, access, and retrieve data or instructions from these memory units correctly and
efficiently. Memory is an essential component of any computing system, and memory interfacing
ensures that the microprocessor can seamlessly communicate with the memory by aligning address,
data, and control signals between the two. Since the processor and memory units operate using
different electrical and logical specifications, interfacing acts as the critical mechanism to match these
requirements.
1. Purpose and Role of Memory Interfacing
In a microprocessor-based system, memory stores both the program instructions and the data needed
for execution. For the microprocessor to execute a program, it must be able to:
Fetch instructions from ROM (non-volatile memory used to store permanent programs),
Read and write temporary data in RAM (volatile memory used during program execution).
To accomplish this, the processor must send addresses to the memory, initiate appropriate control
signals (like Read or Write), and then either receive data from memory or send data to it. The hardware
circuitry and logical control needed to support this data exchange is the foundation of memory
interfacing.
2. Key Components Involved in Memory Interfacing
Interfacing is the backbone of any microprocessor-based system. It enables the connection,
communication, and coordination between the microprocessor and external devices, such as
memory units and input/output (I/O) peripherals. To achieve successful interfacing, several key
concepts and tasks must be understood and implemented correctly. These include the use of buses,
addressing mechanisms, control signaling, and timing coordination. Among these, the most
foundational concept is the system bus architecture, which serves as the main channel for data
movement and device interaction.
1. Buses: The Communication Pathway
A bus is a group of parallel electrical lines (wires or traces on a circuit board) that collectively transmit
data, addresses, or control signals. The bus system forms the central communication highway in a
microprocessor system, linking the CPU, memory units, and peripheral devices. It allows for the
organized exchange of information and ensures that all components can interact in a structured and
synchronized manner.
The system bus is typically divided into three main categories:
a) Address Bus
The address bus is responsible for carrying the address of a specific memory location or I/O device
that the microprocessor intends to access. Each device or memory cell in a system is assigned a
unique binary address, and the processor uses this address to identify where to read or write data.
Key Characteristics:
Unidirectional: The address flows only from the microprocessor to memory or I/O devices.
The number of address lines determines the addressable memory space.
o For example, the 8085 microprocessor has 16 address lines (A0–A15), allowing it
to access 2¹⁶ = 65,536 (or 64 KB) of memory or device locations.
These lines are connected to decoders or memory mapping circuits during interfacing to
select the correct device.
Task in Interfacing:
The microprocessor places the desired address on the address bus, which is then decoded by the
interfacing logic to generate a chip select signal that activates the specific memory or I/O device.
b) Data Bus
The data bus carries the actual data being transferred between the microprocessor and external
devices. This data may represent instruction codes, operands, or results from a computation.
Key Characteristics:
Bidirectional: Data flows in both directions. The processor can send data to or receive data
from memory or I/O devices.
The width of the data bus determines how much data can be transferred at one time.
o The 8085 microprocessor, for example, has an 8-bit data bus (D0–D7), meaning it
can transfer 8 bits of data (1 byte) at a time.
For processors like the 8086, the data bus is 16 bits wide, allowing for faster data handling.
Task in Interfacing:
The data bus must be correctly connected to the corresponding data lines of the memory or I/O
devices. Latches and buffers are often used to synchronize data transfers, to drive the data bus over
longer distances, and to interface with slower devices.
c) Control Bus
The control bus carries control signals that manage the direction and timing of data transfers. These
signals determine whether the current operation is a read or write, whether the operation is directed at
memory or an I/O device, and whether the device or processor is currently in control of the bus.
Common Control Signals:
MEMR’ (Memory Read): Activated when the processor wants to read data from memory.
MEMW’ (Memory Write): Activated to write data into memory.
IOR’ (I/O Read): Signals that the processor intends to read from an I/O device.
IOW’ (I/O Write): Signals that the processor is sending data to an I/O device.
RD’, WR’, IO/M’: Control lines used in conjunction with address decoding to distinguish
between memory and I/O operations.
Task in Interfacing:
Control signals are combined with address decoding outputs to generate precise control pulses that
enable or disable devices. This ensures that only the intended device responds during a given bus
cycle, preventing data corruption or bus conflicts.
2. Coordination Among Buses
For successful interfacing, all three buses must work in synchronization:
The address bus selects the target device or memory location.
The control bus indicates the type of operation (read/write and memory/I/O).
The data bus carries the actual data to be written or read.
Interfacing logic must decode addresses, manage control signals, and facilitate timely data transfer
while respecting the timing constraints of all connected devices.
3. Importance of These Tasks in Interfacing
These fundamental tasks—managing the address, data, and control buses—are crucial for:
Selecting the correct device to communicate with.
Ensuring the right operation is performed (read or write).
Protecting data integrity through proper timing and synchronization.
Enabling scalability, allowing more devices to be interfaced through techniques like
address decoding.
Conclusion
Understanding the roles of the address bus, data bus, and control bus is essential to mastering
interfacing in microprocessor systems. These buses form the foundation of all communication within
a system. Interfacing tasks built upon these buses, such as address decoding and control signal
management, ensure that devices can exchange information accurately and efficiently. Without these
key concepts and tasks, microprocessors would be unable to interact with memory or the external
world, rendering them non-functional in real-world applications.
3. Steps in Memory Interfacing
The process of memory interfacing involves several important steps:
Step 1: Determine Memory Size and Address Range
The first task is to decide the total amount of memory required and how it will be divided among
different types (e.g., 16KB ROM and 32KB RAM). Each memory chip is assigned a unique range of
addresses so the microprocessor knows where to access it.
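As a worked illustration (the specific ranges here are one possible assignment, not the only valid one): a 16KB ROM has 2¹⁴ = 16,384 locations and therefore needs 14 address lines (A0–A13), so it could be placed at 0000H–3FFFH; a 32KB RAM has 2¹⁵ = 32,768 locations, needs 15 address lines (A0–A14), and could then occupy 4000H–BFFFH.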
Step 2: Address Decoding
Since multiple memory devices may be connected to the microprocessor, it must be able to uniquely
identify each device. Address decoding logic is used to generate chip select (CS) signals for
individual memory chips. This can be achieved using:
Full decoding (each address corresponds to a unique location) using 3-to-8 or 4-to-16
decoders.
Partial decoding (some address bits are ignored), which simplifies hardware but can lead to
mirroring (multiple addresses pointing to the same location).
Step 3: Data Bus Connection
The data lines of the microprocessor are connected directly to the corresponding data lines of the
memory chips. If the data bus width of the processor and memory are different, data multiplexing or
latching techniques may be used.
Step 4: Control Signal Management
Proper control signals are needed to ensure correct timing and direction of data flow. For example,
when the processor wants to read data, the RD’ signal is activated, and when it wants to write, the WR’
signal is asserted. These are connected to the memory's read and write control inputs.
Step 5: Interfacing Circuit Implementation
The entire interfacing setup—address decoding, chip select logic, control signal routing, and data bus
connection—is implemented using logic gates, decoders, and buffers. The goal is to ensure that when
the microprocessor issues a memory address along with a control signal, the correct memory chip is
selected and the correct operation (read/write) is performed.
4. Example of Memory Interfacing with 8086
Suppose an 8086 microprocessor (which has 20 address lines and a 16-bit data bus) needs to interface
with a 32KB ROM and a 32KB RAM:
32KB = 2¹⁵ bytes, so 15 address lines (A0–A14) are needed for each memory chip.
The ROM may be mapped from address 00000H to 07FFFH.
The RAM may be mapped from 08000H to 0FFFFH.
The upper address lines (A15–A19) are decoded to generate the Chip Select signals for
ROM and RAM.
The M/IO, RD', and WR' signals from the 8086 are used to generate the control signals that
enable read/write operations in memory.
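With this mapping, the decoding conditions follow directly from the upper address bits: the ROM's chip select should go active when A19 A18 A17 A16 A15 = 0 0 0 0 0 (covering 00000H–07FFFH), and the RAM's chip select when A19 A18 A17 A16 = 0 0 0 0 and A15 = 1 (covering 08000H–0FFFFH). One simple realization is a small gate network (or one output each of a decoder) that tests exactly these bit patterns and drives the two chip select inputs.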
5. Importance of Memory Interfacing
Without proper memory interfacing, the microprocessor would not be able to execute programs, access
data, or perform any meaningful operations. Efficient memory interfacing ensures:
Fast and error-free access to instructions and data.
Proper allocation of memory space.
Flexibility to expand memory as needed.
Reduced hardware complexity using optimized decoding and control logic.
In conclusion, memory interfacing is a fundamental and intricate part of microprocessor system
design. It serves as the backbone for enabling the processor to interact with its most critical
resources—program and data memory. A thorough understanding of address decoding, control signals,
and bus architecture is essential for successful implementation of memory interfacing.
2. I/O Interfacing
Input/Output (I/O) interfacing is the process of connecting external input and output devices to the
microprocessor so that it can interact with the outside world. While a microprocessor by itself is a
computation engine, it becomes useful only when it can receive inputs from the external
environment (input devices) and send outputs (to output devices). These devices may include
keyboards, displays, LEDs, printers, sensors, motors, storage devices, and communication
interfaces like USB or serial ports. Since these peripherals operate using electrical and logical
standards that differ from those of the microprocessor, I/O interfacing ensures proper electrical
compatibility, signal synchronization, and control.
1. Purpose and Need for I/O Interfacing
Microprocessors need a structured way to send and receive data to/from devices. However, I/O devices
are generally slower, may use different data widths, and often require control signals such as "ready",
"enable", or "acknowledge". Hence, direct communication is not feasible without additional
circuitry. I/O interfacing provides:
A method for the processor to read data from input devices (like keyboards or sensors).
A mechanism to send data to output devices (like displays or actuators).
Signal buffering, latching, and timing adjustments.
Efficient data transfer using programmed, interrupt-driven, or DMA methods.
2. Types of I/O Interfacing
I/O interfacing can be broadly divided into two categories:
a) Programmed I/O
In programmed I/O, the processor continuously checks the status of the I/O device by executing
instructions. It sends or receives data only when the device is ready. This method is simple but
inefficient, as it consumes a lot of processing time in polling the devices.
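A minimal 8085 sketch of this polling style is shown below; the port addresses and the meaning of the status bit are assumptions chosen purely for illustration:

POLL:  IN  01H      ; read the device's status port (assumed to be at address 01H)
       ANI 01H      ; isolate bit 0, assumed to be the "data ready" flag
       JZ  POLL     ; not ready yet, so keep polling (this loop is the wasted CPU time)
       IN  02H      ; device is ready: read the data port (assumed address 02H) into the accumulator

The processor spends the entire polling loop doing no useful work, which is exactly the inefficiency noted above.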
b) Interrupt-driven I/O and DMA
In this method, the I/O device interrupts the processor only when it needs attention. It reduces the
processor's burden and improves efficiency. Direct Memory Access (DMA) allows devices to
communicate directly with memory without involving the processor, which is useful for high-speed
data transfer like in audio or video systems.
3. I/O Addressing Techniques
The microprocessor must be able to uniquely identify each I/O device. There are two ways to assign
addresses to I/O devices:
a) Memory-Mapped I/O
In this method, I/O devices are treated as memory locations. The same address, data, and control
lines used for memory are also used for I/O. Devices occupy memory space and are accessed using
standard memory access instructions.
b) Isolated I/O (Standard I/O)
Here, I/O devices have a separate address space. Special instructions like IN and OUT are used to
access them. The I/O and memory are clearly separated, allowing up to 256 input and 256 output ports
(for 8-bit addressing).
4. Components Used in I/O Interfacing
Several hardware components are used to interface I/O devices to the microprocessor:
a) Latches and Buffers
Buffers are used to match electrical characteristics and prevent loading the microprocessor.
Latches hold data temporarily so that slower I/O devices can read/write data after the
processor completes its operation.
b) Programmable Peripheral Interface (e.g., Intel 8255)
These chips allow multiple I/O ports to be configured as input or output. The 8255, for example, offers
three 8-bit ports (Port A, B, and C) which can be programmed individually.
c) Decoders and Address Decoding Logic
Used to generate unique control signals like I/O Chip Select (IOCS) based on the address issued by
the microprocessor. This ensures that only the selected device communicates with the processor at a
given time.
5. I/O Communication Modes
There are two major data communication methods used in I/O interfacing:
a) Serial Communication Interface
In serial I/O interfacing, data is transmitted one bit at a time over a single line. It uses fewer lines and
is ideal for long-distance communication. Devices like serial ports (RS-232, UART), modems, and
Bluetooth modules use this method.
b) Parallel Communication Interface
In parallel communication, multiple bits (usually a byte or word) are transferred simultaneously over
multiple lines. It is faster than serial communication but requires more wires and is suitable for short-
distance, high-speed data transfer. Devices like printers and data buses typically use parallel
interfacing.
6. I/O Interfacing Operation Example
Let’s consider how data is transferred using an input device (e.g., a keypad):
1. The keypad is scanned and data (say, the pressed key code) is loaded into a port (via latch).
2. The device raises a signal (like an interrupt or flag) to the microprocessor.
3. The processor sends a read signal to the latch and retrieves the data through the data bus.
4. The device clears the interrupt and loads the next data.
For output (e.g., LED display):
1. The microprocessor sends data to the output port.
2. The data is latched temporarily.
3. The connected LED or display device reads this data and shows the output.
4. The processor can then write the next data.
7. Summary of the Process
I/O interfacing involves:
Assigning address to each I/O device (via decoding).
Generating proper control signals for read/write operations.
Using buffers/latches for stable data transfer.
Sending or receiving data via parallel or serial methods.
Handling timing and speed differences between processor and devices.
8. Importance of I/O Interfacing
Without I/O interfacing, the microprocessor cannot perform real-world tasks. It enables the system to:
Take inputs from users or sensors.
Control actuators, displays, and other devices.
Communicate with external systems and peripherals.
Achieve automation and intelligent control in embedded systems.
In essence, I/O interfacing is what transforms a microprocessor from a theoretical computation
unit into a practical, interactive system that powers everyday technologies like digital clocks,
washing machines, robotic arms, and computers.
Communication Interfaces: Serial vs Parallel
When interfacing external devices with a microprocessor, data must be transmitted between the
processor and the peripheral. The method of data transmission significantly impacts the speed,
complexity, and cost of the system. Two primary data transfer techniques are used: serial
communication and parallel communication. Each has distinct characteristics, advantages,
limitations, and application areas. These communication interfaces determine how the microprocessor
interacts with the outside world and are selected based on the system's requirements for speed, wiring,
distance, and synchronization.
1. Serial Communication Interface
In serial communication, data is transferred one bit at a time over a single communication line or
channel. The bits are sent sequentially, starting from the most significant bit (MSB) or the least
significant bit (LSB), depending on the protocol. This approach drastically reduces the number of
physical connections required and is ideal for situations where wiring must be minimized, such as in
embedded systems, remote devices, or wireless communication modules.
Key Characteristics:
Transmission Line: Uses one line for data (plus one for ground, and sometimes additional
lines for control).
Speed: Slower compared to parallel due to bit-by-bit transmission.
Cost and Complexity: Lower cost and simpler physical design due to fewer wires.
Distance Suitability: Excellent for long-distance communication, as it is less susceptible
to interference and signal skew.
Synchronization: May use asynchronous (start/stop bits) or synchronous (clocked)
protocols.
Common Serial Standards:
RS-232: A traditional standard used in PCs for connecting modems and serial peripherals.
UART (Universal Asynchronous Receiver Transmitter): Converts data between serial
and parallel forms; used in embedded systems.
USB (Universal Serial Bus): A high-speed serial protocol commonly used in modern
computers and embedded applications.
How It Works:
A full byte (8 bits) from the processor is converted into a serial bit stream.
Bits are transmitted one after another over the serial line.
On the receiving end, the serial stream is reassembled into a byte using a shift register.
The processor then reads the byte as if it had arrived all at once.
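As a small worked example (assuming the common 8-N-1 asynchronous frame at 9600 baud): each byte is wrapped in one start bit and one stop bit, giving 10 bit times per byte, so transmitting one byte takes 10 / 9600 ≈ 1.04 ms and the effective throughput is about 960 bytes per second.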
Applications:
Communication between microcontrollers and sensors.
Data transmission over modems or long cables.
Interfaces like USB drives, Bluetooth, and GPS modules.
2. Parallel Communication Interface
In parallel communication, multiple bits are transmitted simultaneously across multiple data lines.
Each bit of the data word is assigned its own separate physical wire, allowing the entire byte or word
to be transmitted in one clock cycle. This leads to faster data transfer, making it ideal for short-
distance, high-speed communication, such as inside a computer or on a circuit board.
Key Characteristics:
Transmission Lines: Requires as many lines as the number of bits (e.g., 8 lines for an 8-bit
transfer), plus control and ground lines.
Speed: Significantly faster than serial communication for short distances due to
simultaneous bit transfer.
Cost and Complexity: More expensive and complex in terms of cabling and connectors.
Distance Suitability: Poor for long distances due to signal degradation, crosstalk, and
timing mismatch.
Synchronization: Typically relies on a common clock signal to synchronize data transfers.
Examples of Parallel Interfaces:
Data buses inside computers (e.g., connecting CPU and RAM).
Printer ports in older PCs (parallel port).
LCD and LED displays receiving data from microcontrollers.
How It Works:
The processor places an entire data byte or word on the data bus.
All bits are transferred simultaneously to the receiving device.
Control signals indicate whether the operation is a read or write and help synchronize the
transfer.
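For a rough comparison (the 1 MHz figure is assumed purely for illustration): an 8-bit parallel bus completing one transfer per clock at 1 MHz moves about 1,000,000 bytes per second, whereas a serial line clocked at the same 1 Mbit/s moves only about 125,000 bytes per second even before framing overhead is counted.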
Applications:
Internal communication in microprocessor-based systems.
Connecting processors to memory (RAM, ROM).
Interfacing with peripherals like parallel printers and displays.
Comparison Table: Serial vs Parallel Communication
Feature | Serial Communication | Parallel Communication
Number of Data Lines | 1 (plus ground and control lines) | Multiple lines (one per data bit)
Speed | Slower (bit-by-bit) | Faster (byte or word at once)
Wiring Complexity | Simple, fewer wires | Complex, many wires
Cost | Lower | Higher
Data Transfer Distance | Long distances | Short distances
Interference Susceptibility | Less (lower skew and crosstalk) | More (timing mismatches common)
Examples | RS-232, USB, UART | Printer ports, RAM data buses
Use Case | External, long-range devices | Internal, high-speed components
Conclusion
The choice between serial and parallel communication interfaces depends on the specific needs of
the application. Serial communication is optimal for long-distance, cost-sensitive, and low-pin-count
systems, while parallel communication is preferred in environments that require high data rates
over short distances, such as within digital systems and microcontroller boards. Both interfaces play a
vital role in designing efficient microprocessor-based systems and ensuring reliable communication
with a wide range of peripherals.
Key Steps in Interfacing Logic
Interfacing logic is essential to ensure accurate communication between a microprocessor and external
memory or I/O devices. Since multiple devices may be connected to the same buses (address, data, and
control), the microprocessor must be able to uniquely identify, select, and activate the desired device
at the correct time. This is achieved through a coordinated series of steps involving hardware circuits
and software instructions that together form the interfacing logic. The following are the key steps
involved in this process:
1. Address Decoding
The first and most fundamental step in interfacing logic is address decoding. Each device—whether a
memory chip or an I/O peripheral—must have a unique address so that the microprocessor knows
exactly which device to communicate with. When the processor places an address on the address bus,
the decoding circuit examines certain high-order address lines to determine which device the address
belongs to.
Function:
Converts the binary address from the processor into a selection signal for a specific device.
Uses combinational logic (such as decoders, multiplexers, or logic gates) to monitor
specific address lines.
Generates an address pulse (or preliminary chip select signal) when the address matches
the predefined range for a device.
Example:
If a device is mapped to addresses 8000H to 8FFFH, then the decoding logic is designed to activate
only when the processor outputs an address within this range.
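In this example the range 8000H to 8FFFH corresponds to A15 = 1 with A14 = A13 = A12 = 0, while A0–A11 select locations inside the device. A straightforward realization is therefore a small gate network whose output (the address pulse) goes active only when A15 is high and A14, A13, and A12 are all low.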
2. Generating Select Signals (Chip Select or I/O Select)
Once the address decoding logic has produced an address pulse, it must be further refined by
combining it with relevant control signals to ensure that the device is selected only during a valid
operation. This step involves logically ANDing the address pulse with signals like:
RD’ (Read) – to indicate a read operation.
WR’ (Write) – to indicate a write operation.
IO/M’ – to distinguish between memory and I/O operations.
This combination ensures that even if the correct address is generated, the device is selected only
when the processor intends to read or write.
Function:
Combines the address pulse with appropriate control lines.
Produces a Device Select Pulse (Chip Select for memory or I/O Select for peripherals).
Ensures that the selection is valid only during a specific operation (read or write).
Importance:
This prevents devices from being unintentionally activated and ensures data integrity and system
stability.
3. Enabling the Device
The final step in the interfacing logic is to activate the device using the device select pulse. This
signal enables the device's internal circuits to participate in data transfer. For memory devices, it
activates the memory chip, allowing it to place data on the data bus (read) or accept data from the
processor (write). For I/O devices, it enables the I/O port or latch to either send or receive data.
Function:
Triggers the selected device’s internal logic.
For memory, allows data to be accessed from or written into a specific memory location.
For I/O, allows the microprocessor to send data to an output device or receive data from an
input device.
Data Transfer Process:
If the operation is a read, the device places data on the data bus, and the processor reads it.
If the operation is a write, the processor places data on the data bus, and the device accepts
it.
Control Signals Role:
These operations are synchronized with the system clock and control lines to ensure
accurate timing and sequencing.
Summary of the Sequence
To summarize, the interfacing logic ensures precise device communication through this structured
process:
1. Address Decoding → Determines which device is being addressed.
2. Select Signal Generation → Validates the operation using control signals.
3. Device Enabling → Activates the selected device for data transfer.
This logical flow guarantees that only the correct device is accessed during each instruction cycle,
preventing data collision and ensuring reliable communication between the microprocessor and
external hardware. Proper implementation of these steps is critical in both memory and I/O interfacing
and forms the backbone of all microprocessor-based systems.
Data Transfer Operations in I/O Interfacing
In microprocessor-based systems, the fundamental goal of I/O interfacing is to enable the exchange of
data between the processor and external devices such as keyboards, displays, sensors, motors, and
storage units. However, these devices operate at speeds, timings, and protocols different from those of
the microprocessor. Therefore, a well-defined mechanism is necessary to ensure synchronized,
efficient, and error-free data transfer. These operations are carried out using I/O ports, control
signals, buffers, latches, and in many cases, interrupts or flags.
The data transfer operations in I/O interfacing can be classified into two categories: input operations
(reading data from an external device into the processor) and output operations (sending data from
the processor to an external device). Each follows a systematic sequence to ensure proper coordination
between the processor and peripheral devices.
1. Input Operations
An input operation involves transferring data from an external device (e.g., keyboard, sensor, switch)
into the microprocessor. Since the processor cannot directly read from most devices due to electrical or
timing mismatches, the data from the device is first latched or stored temporarily before being read by
the processor.
Step-by-Step Input Sequence:
1. Data Loading by the Input Device:
o The input device, such as a sensor or keyboard, senses or receives data from the
environment.
o This data is placed into a temporary storage register or latch (e.g., in an I/O port
like 8255).
2. Signaling the Processor:
o Once the data is ready, the device sends a signal to the processor to indicate its
status.
o This is done using a flag bit, status signal, or hardware interrupt.
o In programmed I/O, the processor polls the status register repeatedly to check for
readiness.
3. Data Read by the Processor:
o The processor initiates a read operation by asserting the RD’ control signal along
with the appropriate I/O address.
o The data from the latch or port is placed on the data bus and read into the processor.
4. Preparing for Next Input:
o After the data is read, the input device clears the flag or interrupt signal.
o It then captures or loads the next data item for future transfer.
Purpose:
This ensures that the processor only reads valid and complete data, and the input device does not
overwrite data before it has been read. This sequencing is especially important in real-time systems.
2. Output Operations
An output operation involves sending data from the microprocessor to an external device, such as an
LED display, printer, motor controller, or speaker. Similar to input operations, output devices typically
cannot accept data directly from the microprocessor and require intermediate storage.
Step-by-Step Output Sequence:
1. Data Transfer from Processor to Output Port:
o The processor places the data on the data bus.
o It then asserts the WR’ signal along with the I/O address corresponding to the output
port.
o The data is latched into the port or register connected to the output device.
2. Output Device Receives Data:
o The output device monitors the control signals or port status.
o Once data is latched, the device reads or processes it (e.g., an LED lights up, a motor
turns on).
3. Processor Readies Next Data:
o After ensuring the current data has been successfully sent and read, the processor
can send the next byte.
o In some cases, a ready or acknowledge signal is sent back from the output device.
Purpose:
This ordered flow prevents data from being missed or overwritten. It ensures that each byte is
successfully processed before the next one is sent, especially in time-sensitive applications.
Synchronized Communication and Error Prevention
The outlined sequences in both input and output operations are critical to maintaining
synchronization between the processor and I/O devices. Without such structured operations:
The processor might read incomplete or invalid data.
An output device might miss data or receive it out of sequence.
There could be data corruption, especially if timing mismatches are not handled.
Hence, additional techniques such as handshaking, status checking, interrupt-driven data transfer,
or Direct Memory Access (DMA) are often implemented to handle these operations in complex
systems more efficiently.
Conclusion
Data transfer operations in I/O interfacing follow a disciplined process involving:
Temporary data storage via latches or buffers.
Control signaling to manage timing and readiness.
Coordination between processor instructions and device responses.
This structured sequence ensures that data is neither lost nor corrupted, regardless of whether it is
being received from or sent to an external device. It enables the microprocessor to effectively control
and interact with the real world, making it a crucial part of any embedded or digital computing system.
Device Selection and Address Decoding
1. Device Selection and Address Decoding – Introduction
In microprocessor-based systems, the microprocessor communicates with many external devices such
as memory chips, input/output (I/O) ports, sensors, displays, and more. However, the microprocessor
alone does not have the ability to identify or directly control a specific external device. Instead, it
places a unique binary address on its address bus. The role of address decoding is to interpret this
binary address and generate a specific signal (called a select pulse) that enables the correct device to
respond. This process is known as device selection, and it ensures that at any given moment, only one
device is actively communicating with the microprocessor, avoiding conflicts or bus contention.
2. Role of Address Decoding
When a microprocessor sends out an address, it needs a way to ensure that the right device responds.
For example, if the address F000H is meant for a particular I/O port, only that port should respond and
no other. Address decoding logic is responsible for this. It compares the address sent by the
microprocessor with predefined address ranges assigned to various devices. If a match occurs, the
decoder generates a short pulse called a chip select or device select pulse. This pulse is used to
activate or enable the appropriate device. Devices not selected remain inactive, effectively ignoring
any activity on the data and control buses.
3. Use of Control Signals in Device Selection
Address decoding alone does not complete the process. To make device selection more accurate, the
address signal is further combined with control signals generated by the microprocessor. These
include:
RD’ (Read): Active-low signal used to read data from memory or I/O.
WR’ (Write): Active-low signal used to write data to memory or I/O.
IO/M’: Signal that differentiates between memory (0) and I/O (1) operations.
For example, even if the address matches a device's assigned address, the device will only be enabled
if the control signals indicate the right type of operation (read/write, I/O/memory). The logic that
combines the decoded address and control lines ensures that the Device Select Pulse is generated only
when all necessary conditions are met. This prevents errors and ensures precise communication.
4. Device Select Pulse and Activation
The Device Select Pulse, also known as Chip Select (CS’) or I/O Select Pulse, is the output
generated after successful decoding of the address and control signals. This pulse is sent to the target
device's select input pin, enabling it to either send data to or receive data from the microprocessor.
While the select signal is active, the device is allowed to place its data on the data bus (in case of a
read operation) or accept data from the bus (in case of a write operation). All other devices remain
disabled during this period, which ensures exclusive communication between the processor and the
selected device.
5. Absolute (Full) Address Decoding
In absolute decoding, all the higher-order address lines are used to generate a unique and specific
address for each device. This means only one exact address combination will activate a particular
device. For instance, if a device is assigned the address F000H, the decoder will check all relevant bits
(e.g., A4 to A15) to match this value precisely. If the bits match, it generates the select signal.
The primary advantage of absolute decoding is that there is no ambiguity—each device responds to
exactly one address. This is especially useful in complex systems where accurate address mapping is
necessary. However, a major disadvantage is that more hardware is needed to decode all address lines,
such as additional gates or larger decoder ICs. This increases the cost, space, and design complexity
of the system.
6. Partial Address Decoding
In contrast to absolute decoding, partial decoding uses only a few of the higher-order address lines
to identify devices. This method does not provide a unique address to each device but rather allows a
range of addresses to activate the same device. For example, if only A14 and A15 are used for
decoding, then several addresses (such as C000H, C400H, C800H, CC00H) will all select the same
device; in fact, if the decoder requires only A15 = A14 = 1, every address from C000H to FFFFH activates that device.
Partial decoding greatly reduces hardware complexity and cost. Fewer gates or smaller decoder ICs are
required, which is advantageous in small embedded systems or cost-sensitive designs. However, the
downside is the possibility of address mirroring, where multiple addresses point to the same device.
While this is acceptable in some systems, it can lead to confusion or software errors if not carefully
managed. Therefore, this approach is best suited for systems with limited address space or where exact
address mapping is not critical.
7. Application of Address Decoding in I/O Interfacing
In I/O interfacing, address decoding ensures that each I/O device (such as a keyboard, display, or
ADC) can be accessed uniquely using its assigned port address. For example, in the 8085
microprocessor, I/O addressing is supported via the IN and OUT instructions which use 8-bit port
addresses. During execution, the microprocessor places the port address on the address bus, and the
decoder generates a corresponding select pulse.
The control signal IO/M’ is set to 1 during I/O operations to distinguish them from memory accesses.
The decoder checks the upper address bits (like A8 to A15); this works because, during IN and OUT, the
8085 duplicates the 8-bit port address on both A0–A7 and A8–A15. When a match occurs, the decoder generates the
I/O Select signal. This signal is logically combined with RD’ or WR’ to create a Read Enable or
Write Enable pulse for the selected port. This ensures accurate and isolated communication between
the processor and I/O devices.
8. Importance of Proper Address Decoding
Proper address decoding is essential to ensure the reliability, predictability, and stability of
microprocessor-based systems. If two devices respond to the same address due to poor decoding, it can
lead to bus contention, data corruption, or system crashes. Conversely, if no device responds due to
a decoding error, the system may experience failure to read or write data.
Hence, a well-designed decoding logic guarantees that:
Each device responds to a unique address or range.
Only one device is active at a time.
Software can predictably communicate with hardware.
The system behaves in a controlled and error-free manner.
Conclusion
In summary, device selection and address decoding are fundamental mechanisms that allow a
microprocessor to communicate effectively with external memory and I/O devices. By translating
addresses into select signals using either full or partial decoding, the microprocessor can ensure
exclusive interaction with the intended peripheral. While absolute decoding offers precision, partial
decoding reduces hardware complexity. Whichever method is used, the goal is the same—to manage
multiple devices without conflict and maintain smooth system operation.
I/O-Mapped I/O (Isolated I/O)
I/O-mapped I/O, also called Isolated I/O, is a method of interfacing external input and output
devices with a microprocessor where I/O devices are assigned a dedicated and separate address
space, distinct from the memory address space. This means that the microprocessor treats I/O devices
differently from memory components during data transfers.
In this approach, the processor uses specialized instructions, such as IN (for reading data from an
input port) and OUT (for writing data to an output port), to communicate with the I/O devices. These
instructions are specifically designed to perform I/O operations and access the I/O port addresses that
are different from the memory addresses. For example, in the Intel 8085 microprocessor, I/O-mapped
I/O uses an 8-bit address space, which allows addressing up to 256 input ports and 256 output ports (addresses 00H to FFH).
One of the main advantages of I/O-mapped I/O is that it preserves the memory address space,
allowing the full range of memory addresses to be used exclusively for RAM or ROM. Additionally,
since the state of IO/M’, combined with RD’ and WR’, produces dedicated I/O strobes (IOR’ and IOW’)
rather than the memory strobes (MEMR’ and MEMW’), the processor can clearly distinguish between memory operations and
I/O operations. This also simplifies the design of interfacing circuits.
However, I/O-mapped I/O comes with a limitation: the number of I/O devices is limited to the size of
the dedicated I/O address space (e.g., 256 ports in an 8085 system). Moreover, standard memory
instructions like MOV, MVI, LDA, and STA cannot be used with I/O devices in this mode; only the IN
and OUT instructions are valid.
In summary, I/O-mapped I/O is an efficient and hardware-friendly way to interface external devices
when the number of peripherals is limited and memory space must be preserved. It separates memory
and I/O spaces logically and uses dedicated instructions and control signals to manage data transfer.
Peripheral I/O Instructions in 8085 (I/O-Mapped I/O)
In I/O-mapped I/O, also called isolated I/O, the microprocessor communicates with peripheral
devices using a separate 8-bit address space and two dedicated instructions: IN and OUT. These
instructions allow the microprocessor to perform data transfer operations with I/O ports without
interfering with the memory address space. In this scheme, input and output devices are treated as
completely separate from memory devices, which simplifies hardware design and saves memory
space.
IN Instruction
Purpose and Description:
The IN instruction is used when the microprocessor needs to receive data from an external input
device. Its primary purpose is to read a byte of data from a specified input port and place that data
into the accumulator. This instruction enables the processor to interface with peripherals such as
keyboards, sensors, or switches. It makes use of the 8-bit port address specified in the instruction and
activates the appropriate control signals to complete the input operation.
Syntax: IN port-address
Working:
1. The 8085 places the specified port address (8-bit) on the lower address bus lines A0–A7 (the same value also appears on A8–A15).
2. The control signal IO/M’ is set to 1 to indicate an I/O operation.
3. The RD’ (Read) signal is activated.
4. The selected input port places its data on the data bus.
5. The microprocessor reads the data and loads it into the accumulator.
Example:
IN 05H — Reads data from the input port at address 05H and stores it in the accumulator.
OUT Instruction
Purpose and Description:
The OUT instruction is used when the microprocessor needs to send data to an external output device.
Its primary purpose is to transfer the content of the accumulator to the output port whose address is
specified in the instruction. This instruction allows the processor to output data to peripherals like
LEDs, printers, displays, or actuators. It ensures that the data from the processor is correctly written to
the selected I/O device.
Syntax: OUT port-address
Working:
1. The microprocessor places the 8-bit port address on the address lines A0–A7.
2. The control signal IO/M’ is set to 1 to specify an I/O operation.
3. The WR’ (Write) signal is activated.
4. The content of the accumulator is placed on the data bus.
5. The selected output device receives the data from the data bus.
Example:
OUT 09H — Sends the content of the accumulator to the output port at address 09H.
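Putting the two instructions together, a minimal "echo" sketch (reusing the example port addresses above, which are of course arbitrary) reads a byte from the input port and immediately writes it back out:

LOOP:  IN  05H      ; read a byte from the input port at address 05H into the accumulator
       OUT 09H      ; send that same byte to the output port at address 09H
       JMP LOOP     ; repeat indefinitely

Because both instructions work only through the accumulator, no other register takes part in the transfer.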
Key Characteristics of Peripheral I/O Instructions
1. Limited Address Space (8-bit Port Addressing)
In I/O-mapped I/O, only 8 bits are used for addressing I/O ports, which limits the total
number of addressable I/O devices to 256 ports (00H to FFH). This is because the IN and
OUT instructions can specify only 8-bit port addresses. While this restricts the number of
I/O ports, it also simplifies the hardware as fewer address lines are needed.
2. Dedicated Instructions (IN and OUT)
The 8085 microprocessor uses two specific instructions—IN and OUT—to perform data
transfers with I/O devices. These instructions are reserved solely for communication with
I/O ports and cannot be used to access memory. This makes I/O operations distinct from
memory operations at both the hardware and instruction-set levels.
3. Accumulator-Based Data Transfer
Both IN and OUT instructions only use the accumulator for data movement. During an
IN instruction, the data from the input port is placed into the accumulator. In an OUT
instruction, data from the accumulator is sent to the output port. No other register can
directly send or receive data using these instructions.
4. Separate Address Space for I/O and Memory
I/O-mapped I/O maintains a clear distinction between memory and I/O address spaces.
The control signal IO/M’ = 1 differentiates I/O operations from memory operations (where
IO/M’ = 0). This ensures that memory and I/O devices do not conflict, even if they share
similar address values.
5. Simple Hardware Design
Because of the limited address space and the use of dedicated instructions, the hardware
required for I/O-mapped interfacing is relatively simple and cost-effective. Decoding logic
only needs to respond to 8 address lines, and interfacing logic can be compact due to the
exclusive use of accumulator-based data transfers.
6. Efficient for Small Systems
Peripheral I/O instructions are ideal for simple or small embedded systems where only a
limited number of I/O devices are used. Since fewer I/O ports are needed, and data transfer
logic is streamlined, the system remains efficient and easy to manage.
7. Cannot Perform Arithmetic or Logical Operations on I/O Ports Directly
Since data must pass through the accumulator, direct arithmetic or logical operations on
I/O ports are not possible. Data must be transferred into the accumulator before any
computation, making it less flexible than memory-mapped I/O in terms of data
manipulation.
These characteristics make peripheral I/O instructions in the 8085 microprocessor well-suited for
applications where clear separation between memory and I/O operations is beneficial and where
simplicity in hardware design is a priority.
Realization of Input and Output Ports
The realization of input and output (I/O) ports is a foundational concept in microprocessor systems,
where the goal is to establish a reliable communication pathway between the processor and external
peripherals such as keyboards, displays, sensors, or actuators. This involves designing hardware (logic
circuits) and using specific instructions that allow the CPU to either receive (input) or transmit
(output) data. These circuits are typically referred to as interfacing devices or I/O ports, and they act
as intermediaries that match the voltage levels, timing, and signal requirements of both the
microprocessor and the external devices.
Fundamental Hardware Elements
At its core, an input port is typically implemented using a tri-state buffer, whereas an output port is
realized using a latch. These components serve critical roles:
Input Port (Buffer): When the microprocessor wants to read data from an input device,
such as a keyboard or sensor, a control signal (usually IOR' or RD') enables the buffer. Once
the buffer is enabled, it allows the data from the external device to flow onto the system data
bus. The processor then issues a read instruction to capture this data into its accumulator or
another register. Buffers are often tri-state to ensure they do not interfere with the bus when
not in use.
Output Port (Latch): When the processor needs to send data to an output device, like an
LED display or motor controller, it places the data on the data bus and triggers a latch using
a clock or write control signal. The latch captures and holds the data, allowing the output
device to access it continuously even after the processor has moved on to other tasks. This
mechanism ensures data persistence, as bus signals are momentary.
Data Transfer Sequences
The typical data transfer operation between the microprocessor and peripheral devices follows
structured steps depending on whether the device is for input or output:
For Input Devices:
1. The input device loads data into the I/O port (buffer).
2. The port notifies the processor (via interrupt or polling) that data is available.
3. The processor reads the data from the port using a read instruction.
4. Once read, the device can load new data for the next operation.
For Output Devices:
1. The processor places data on the data bus and writes it to the output port (latch).
2. The port then signals the connected device to read the data.
3. The output device uses the data, and the process can repeat for the next data value.
Address Decoding: How Devices Are Selected
A crucial step in realizing I/O ports is address decoding, which ensures that each I/O device responds
only to its designated address. Address decoding involves two main tasks:
1. Generating an I/O Address Pulse: The address lines of the microprocessor are partially or
fully decoded using logic gates (like AND, NAND) to produce a signal that activates when a
specific address is present on the address bus.
2. Combining with Control Signals: The address pulse is then combined with control signals
such as IOR’ (I/O Read) or IOW’ (I/O Write) to create a final chip select or device enable
signal. Only when both the correct address and the appropriate control signal are present
will the device respond.
There are two methods of decoding:
Absolute Decoding: All high-order address lines are decoded to ensure each device has a
unique address, reducing the risk of address conflicts.
Partial Decoding: Only some lines are used for decoding, which simplifies hardware and
reduces cost but may lead to address aliasing (where one device appears at multiple
addresses).
Using Interfacing ICs (Example: 8255 PPI)
Specialized I/O chips like the Intel 8255 Programmable Peripheral Interface (PPI) greatly simplify
port realization. The 8255 provides three ports (Port A, B, and C), each 8 bits wide, and can be
configured using a control word to operate in various modes:
Mode 0 (Basic I/O): No handshaking; simple input/output. Outputs are latched, while inputs are only buffered (not latched).
Mode 1 (Strobed I/O): Uses handshake signals for data synchronization.
Mode 2 (Bidirectional I/O): Available only on Port A, which then carries data in both directions using handshake signals on Port C.
The IC uses control lines (RD’, WR’, CS’, A0, and A1) to coordinate operations and manage
communication with the CPU; A1 and A0 select Port A, Port B, Port C, or the control register.
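A short 8085 initialization sketch follows. The port addresses 80H–83H are an assumed decoding (they are not fixed by the 8255 itself), and the control word 90H selects Mode 0 with Port A as input and Ports B and C as outputs:

       MVI A, 90H   ; control word: Mode 0, Port A = input, Ports B and C = output
       OUT 83H      ; write the control word to the 8255 control register (assumed address 83H)
       IN  80H      ; read, for example, switches wired to Port A (assumed address 80H)
       OUT 81H      ; drive, for example, LEDs wired to Port B (assumed address 81H)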
I/O Ports in 8051 Microcontroller
Microcontrollers like the 8051 have internal I/O ports with built-in hardware for easy interfacing:
Port 0: True bidirectional with alternate function as address/data bus; needs external pull-up
resistors.
Port 1: Purely for I/O with internal pull-up resistors.
Port 2: I/O port that also provides higher-order address lines during external memory
access.
Port 3: Each pin has an alternate function (serial, interrupts, timers) and can still be used as
I/O.
The 8051 ports allow for direct manipulation of hardware and also distinguish between reading from
the latch and reading from the pin, making it versatile in embedded system design.
Definition of Memory-Mapped I/O:
Memory-mapped I/O is a method of interfacing input/output (I/O) devices with a microprocessor
where each I/O device is assigned a unique memory address, just like any standard memory location.
Instead of having a separate set of instructions and address space exclusively for I/O operations,
memory-mapped I/O allows the microprocessor to access I/O devices using the same instructions and
address space that are used to access memory. This means that both memory and I/O devices share the
same address bus and occupy the same memory space.
In simpler terms, the microprocessor treats an I/O device (like a keyboard, display, or sensor) as if it
were a part of the memory. For example, if a temperature sensor is mapped to memory location
F100H, the processor can access its data by simply reading from that memory address using normal
data transfer instructions such as LDA F100H. There is no need for special I/O instructions like IN or
OUT, as used in I/O-mapped I/O.
Principle of Memory-Mapped I/O:
The core principle behind memory-mapped I/O is uniformity and simplicity in data handling. The
microprocessor uses a unified memory space where both RAM and I/O devices coexist. Since the
processor cannot inherently distinguish between memory and I/O devices in this setup, the
differentiation is made based on how the address decoding hardware and control logic are designed.
When the processor wants to access an I/O device, it places the device’s assigned memory address on
the address bus. The address decoding circuit recognizes this address and activates the select line of
the respective I/O device. Data is then transferred using standard memory read (MEMR) or memory
write (MEMW) signals. This approach removes the need for separate control lines like IOR and IOW
used in I/O-mapped I/O.
Because standard memory instructions like MOV, LDA, STA, and MOVX (in 8051) are used,
programmers can interact with I/O devices using the same commands they use for memory, allowing
for more straightforward and flexible coding. This also enables the use of arithmetic and logic
operations directly on I/O data, as it resides in the same address space as regular memory.
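A small 8085 sketch illustrates this, reusing the sensor address F100H from above and assuming, for illustration only, an output device mapped at 9000H:

       LXI H, 0F100H  ; point HL at the memory-mapped sensor port
       MOV A, M       ; read the device exactly as if it were a RAM location
       ADI 05H        ; arithmetic applied directly to the value just read (the offset 5 is arbitrary)
       STA 9000H      ; store the result to the memory-mapped output device

No IN or OUT instruction appears anywhere; ordinary memory-reference instructions such as MOV, ADD M, or ANA M can even use the mapped port itself as an operand, which is impossible with I/O-mapped I/O.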
However, this comes with a trade-off—since both I/O devices and memory locations share the same
address range, some of the addressable memory space is sacrificed for I/O devices, effectively
reducing the maximum usable memory for programs and data.
Mechanism of Memory-Mapped I/O
The mechanism of memory-mapped I/O involves integrating I/O devices into the memory address
space of a microprocessor so that each I/O device is assigned a unique address just like a memory
location. This allows the processor to use regular memory access instructions to perform
input/output operations. The entire data exchange between the processor and the I/O devices is
handled using the same buses and control lines used for memory operations.
1. Shared Address Space
In memory-mapped I/O, there is no separate I/O address space. Both memory and I/O devices are part
of a single, unified address space. For example, if a processor like the Intel 8085 has a 16-bit address
bus, it can address up to 64KB (from 0000H to FFFFH). In memory-mapped I/O, certain address
ranges (e.g., 8000H to 8FFFH) may be reserved for I/O devices, while the rest is used for memory.
This means that accessing an I/O device is as simple as reading from or writing to a memory
address that has been mapped to that device. For instance, if a display device is mapped at location
9000H, the processor can send data to the display simply using a memory write instruction like STA
9000H.
2. Address Decoding
The address decoding circuit plays a crucial role in this mechanism. It interprets the address placed
on the address bus by the microprocessor and generates appropriate chip select signals to activate
either memory chips or I/O devices.
For example, if a particular I/O device is assigned the address F0F0H, then when the processor sends
this address on the bus, the decoder activates that specific I/O device. The decoder circuit ensures that
no memory or other device responds to this address, thereby avoiding address conflicts.
This decoding can be absolute (using all address lines for unique device identification) or partial
(using only selected higher-order address lines, which is simpler but may lead to address repetition for
multiple devices).
3. Control Signals
Since the I/O devices are treated as memory, memory control signals are used to manage read and
write operations. These include:
MEMR (Memory Read): Activated when the processor reads from a memory-mapped I/O
input device.
MEMW (Memory Write): Activated when the processor writes to a memory-mapped I/O
output device.
The generation of these signals depends on the microprocessor:
In the 8085, these are derived from the processor's control lines. When IO/M’ is low and RD’ is low,
it signals a memory read (MEMR). When IO/M’ is low and WR’ is low, it indicates a
memory write (MEMW).
In the 8086, the M/IO signal being logic 1 indicates a memory operation, and memory
control signals are used accordingly.
4. Data Transfer Instructions
The same load and store instructions used for accessing memory are used for I/O data transfer.
Examples include:
LDA address: Load data from a memory-mapped I/O input device into the accumulator.
STA address: Store the accumulator’s data into a memory-mapped I/O output device.
In the 8051 microcontroller, instructions like MOVX A, @DPTR are used to transfer data
from external memory-mapped I/O devices.
Because of this, no special I/O instructions like IN or OUT are needed.
5. Data and Address Bus Usage
Data Bus: Shared by both memory and I/O devices. It transfers data during input and output
operations.
Address Bus: Used to uniquely identify each I/O device using its mapped address.
Control Bus: Carries the control signals (MEMR, MEMW) necessary to execute the
memory-style read/write operations with the I/O devices.
6. Timing and Synchronization
Timing control is critical in memory-mapped I/O, especially when I/O devices are slower than the
processor. Wait states or ready signals may be employed to synchronize the processor with the device’s
speed, ensuring accurate data transfer without corruption.
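On the software side, the same synchronization is often handled by polling a memory-mapped status register before touching the data register. A minimal 8085 sketch, assuming a hypothetical device whose status register sits at A000H (bit 0 = ready) and whose data register sits at A001H:

        ORG  0000H
WAIT:   LDA  0A000H    ; read the device's memory-mapped status register
        ANI  01H       ; isolate the assumed ready bit
        JZ   WAIT      ; keep polling until the device signals ready
        LDA  0A001H    ; device ready: read its memory-mapped data register
        STA  2050H     ; store the byte in ordinary RAM
        HLT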
Summary
In memory-mapped I/O, I/O devices behave like memory locations. The microprocessor uses the same
buses, same instructions, and same control signals for both memory and I/O operations. Address
decoding logic differentiates memory from I/O devices, and standard memory read/write instructions
are used for data transfer. While this approach simplifies programming and hardware control, it
sacrifices part of the addressable memory space for I/O.
Memory-Mapped I/O Mechanism: Addressing, Control Signals, and Data Transfer Instructions
1. Addressing and Control Signals
In the memory-mapped I/O mechanism, both memory units and I/O devices reside in the same address
space. This means that the microprocessor does not differentiate between an I/O device and a memory
location based on the address alone. Hence, control signals must play a critical role in determining the
nature of the operation. In microprocessors like the Intel 8085, the IO/M̅ control signal is used to
distinguish memory operations from I/O operations. When IO/M̅ is low (logic 0), it indicates a
memory operation; when it is high (logic 1), it indicates an I/O operation. However, in memory-
mapped I/O, I/O devices are treated like memory, so the microprocessor always keeps IO/M̅ low when
accessing such devices.
To complete the read or write process, the RD̅ (Read) and WR̅ (Write) signals are combined with the
state of IO/M̅. When IO/M̅ is low and RD̅ is active (low), it triggers the MEMR (Memory Read)
signal. Similarly, if IO/M̅ is low and WR̅ is active (low), it activates the MEMW (Memory Write)
signal. These two control signals (MEMR and MEMW) are used to control both memory units and I/O
devices, since memory-mapped I/O devices behave like memory. In contrast, the Intel 8086 uses a
slightly different signal called M/IO. Here, a logic 1 indicates a memory operation and 0 indicates an
I/O operation, but for memory-mapped I/O, memory signals are still used because the devices are
mapped into memory space.
The main takeaway is that address decoding circuitry and control signals work together to ensure
that when a specific address corresponding to an I/O device is accessed, that device responds
appropriately using the same signals and procedures used for regular memory locations. This
allows a smooth and integrated method for handling both memory and I/O devices uniformly.
2. Data Transfer Instructions
One of the major benefits of memory-mapped I/O is the ability to use standard memory-access
instructions for performing I/O operations. This greatly simplifies programming and minimizes the
need for specialized input/output instructions. For example, if a keyboard input device is mapped to
the memory address F0F0H, the microprocessor can read data from it by executing the LDA F0F0H
instruction. This instruction loads the data from the specified memory address (which actually
corresponds to an I/O device) directly into the accumulator register of the processor. There is no need
for any specialized input instruction like IN.
Similarly, if an output device like an LED display is mapped to address 8000H, the processor can send
data to it by using the STA 8000H instruction. This command stores the contents of the accumulator
into the specified memory location (in this case, the address of the output device), effectively writing
data to the I/O port. In both cases, the memory read/write control signals (MEMR or MEMW) will
be activated, enabling proper communication with the device.
Because memory-mapped I/O uses the same instruction set as memory operations, it provides high
flexibility. Programmers can leverage instructions like MOV, LDA, STA, and even block transfer
operations (like using loops) for interacting with devices, which reduces coding effort and simplifies
system software. This also enables efficient interrupt-driven or DMA-based I/O schemes where
memory-mapped addresses can be directly manipulated by the CPU or external controllers. However,
a trade-off is that a portion of the memory address space must be reserved for I/O devices, slightly
reducing the available memory for regular data or programs.
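As a small illustration of the block-transfer point, the following 8085 sketch copies 16 bytes from a hypothetical input device mapped at F0F0H into a RAM buffer at 2050H using an ordinary counted loop; no IN instruction appears anywhere:

        ORG  0000H
        LXI  D, 2050H   ; DE points to the destination buffer in RAM
        MVI  C, 10H     ; 16 bytes to transfer
NEXT:   LDA  0F0F0H     ; ordinary memory read from the mapped input device
        STAX D          ; store the byte into the buffer
        INX  D          ; advance the buffer pointer
        DCR  C          ; count the byte
        JNZ  NEXT       ; repeat until all 16 bytes have been moved
        HLT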
Advantages of Memory-Mapped I/O
1. Direct Use of Arithmetic and Data Manipulation Instructions on I/O Data
One of the most significant advantages of memory-mapped I/O is that it allows the processor to use all
its standard arithmetic, logical, and data transfer instructions on I/O device data, just as it does with
regular memory. In conventional I/O-mapped systems, interaction with I/O devices requires special
instructions like IN and OUT, which simply move data between the accumulator and a port, without
allowing operations directly on the port contents. In contrast, memory-mapped I/O assigns a memory
address to each I/O device. Therefore, the processor can execute instructions like ADD, SUB, MOV, or
even more complex operations directly using data from the I/O port’s address. For example, it can
fetch data from an input device at a memory-mapped location, perform arithmetic on it, and store the
result back to another I/O-mapped location. This greatly enhances the programming flexibility and
computational efficiency of the system.
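A minimal 8085 sketch of this advantage, assuming a hypothetical input port at F0F0H, a running total kept in RAM at 2050H, and an output latch at 9000H: the mapped port is used directly as an ALU operand, something IN and OUT cannot do.

        ORG  0000H
        LXI  H, 0F0F0H  ; HL points at the memory-mapped input device
        LDA  2050H      ; fetch the running total from ordinary RAM
        ADD  M          ; add the port's current value directly to the accumulator
        STA  9000H      ; write the result straight to the memory-mapped output latch
        HLT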
2. Simplified Hardware Design
Memory-mapped I/O significantly reduces hardware complexity by eliminating the need for separate
control lines and data paths for I/O operations. Since I/O devices in this technique are treated as
memory locations, the existing address and data buses used for memory operations can be reused for
I/O transactions. The same control signals used for memory operations — such as MEMR (Memory
Read) and MEMW (Memory Write) — are also utilized for communicating with I/O devices. This
reduces the number of signal lines required on the microprocessor and minimizes the amount of logic
required to design the system. For instance, there is no need to design separate decoder logic or
allocate additional address decoding circuits for a separate I/O space. As a result, the overall system
becomes more compact, cost-effective, and easier to implement and maintain.
3. Uniform Access Mechanism Enables Potential for Memory Protection
Another benefit of memory-mapped I/O is that it allows I/O devices to benefit from the same memory
management and protection schemes used for standard memory. Because all devices are addressed as
part of the memory space, the processor's memory management unit (MMU), if present, can enforce
access control policies on both memory and I/O regions. This means that the system can prevent
unauthorized access to critical I/O devices in the same way it restricts access to protected memory
regions. For example, in multitasking systems or real-time applications, certain I/O operations may
need to be restricted to privileged modes or specific processes. Memory-mapped I/O makes such
protections feasible, enhancing the system's reliability, security, and robustness by reducing the
chances of accidental or malicious interference with hardware devices.
Disadvantages of Memory-Mapped I/O
1. Reduced Effective Memory Space
One of the primary drawbacks of memory-mapped I/O is that it consumes a portion of the processor's
addressable memory space for interfacing I/O devices. Since I/O devices are assigned memory
addresses in the same space as RAM and ROM, every I/O device occupies one or more addresses that
could otherwise be used for storing program code or data. For example, in the 8085 microprocessor,
which supports 64KB of addressable memory (0000H to FFFFH), if 4KB of address space is allocated
to I/O devices, only 60KB is left for actual program and data memory. This becomes a critical
limitation in applications where memory space is already constrained, such as embedded systems or
microcontroller-based designs. As the number of I/O devices increases, the available memory for code
and data progressively decreases, potentially affecting performance and limiting system capabilities.
2. Increased Software Complexity
While memory-mapped I/O allows the use of standard data manipulation instructions on I/O ports, this
flexibility can increase software complexity. Programmers must carefully manage the address space to
avoid conflicts between memory and I/O devices. There is also a need to ensure that the correct
memory-mapped addresses are used during every operation, and mistakes in address referencing can
lead to improper functioning of either memory access or I/O communication. Unlike I/O-mapped I/O
(which uses unique I/O instructions that clearly define device interaction), memory-mapped I/O does
not explicitly distinguish between memory and I/O in the program code. This ambiguity may lead to
bugs or errors during debugging or code maintenance, especially in large or modular systems.
3. Risk of Accidental Data Corruption
Because I/O devices are accessed using the same instructions and address space as memory, there is an
increased risk of accidental writes to I/O ports, especially if the address boundaries between memory
and I/O are not clearly defined or protected. For example, if a programmer or process mistakenly
writes data to an address thinking it belongs to a RAM location, but it actually maps to an output
device like a motor controller, it might unintentionally activate hardware components or change device
states. Such accidental operations can result in hardware malfunction, data corruption, or even safety
hazards in critical systems. Without robust memory protection or careful software design, this lack of
separation between memory and I/O increases the potential for unintended interactions.
Implementation in Specific Microprocessors
8085 Microprocessor
In the 8085, memory-mapped I/O treats I/O devices as full-fledged memory elements using 16-bit
addresses. When a device is mapped to a specific address, say F0F0H, and the instruction LDA F0F0H
is executed, the microprocessor places F0F0H on the address bus and activates MEMR. The connected
I/O device recognizes the address and places its data on the data bus for the processor to read.
Similarly, STA 8000H sends the accumulator’s content to the device at 8000H using the MEMW
control signal.
8086 Microprocessor
The 8086 has a larger addressable memory space of 1 MB and supports memory-mapped I/O by
design. It uses the M/IO control line to indicate whether a transaction is related to memory. In
memory-mapped I/O, this line is set to logic high. Since the 8086 allows data manipulation with
memory-mapped devices using its rich instruction set, it can perform operations like MOV [address],
AL to send data to an output device or MOV AL, [address] to read from an input device.
8051 Microcontroller
The 8051 follows a Harvard architecture, keeping program memory and data memory in separate address
spaces. External I/O devices are memory-mapped into the external data memory space and are accessed
with load/store-style instructions: the MOVX instructions handle external memory and I/O, e.g.,
MOVX A, @DPTR reads a byte from the external location pointed to by DPTR. Because external accesses
drive the multiplexed address/data bus through Ports 0 and 2, those port pins are given up, leaving
fewer general-purpose I/O lines. Internally, addresses 80H–FFH are reserved for the Special Function
Registers (SFRs), which include the I/O ports and timers and are accessed like memory using direct
addressing.
Comparison of Memory-Mapped I/O and Peripheral-Mapped I/O (I/O-Mapped I/O)

Address Space Usage
Memory-mapped I/O: I/O devices share the same address space as memory, so both memory and I/O are addressed over the same address bus.
Peripheral-mapped I/O: I/O devices have a separate address space, independent of the memory address space.

Number of I/O Ports Supported
Memory-mapped I/O: Depends on the width of the address bus; for example, a 16-bit address bus (8085) provides 64K addresses to be shared between memory and I/O.
Peripheral-mapped I/O: Limited to 256 input ports and 256 output ports (512 in total) because of 8-bit port addressing.

Instruction Set Used
Memory-mapped I/O: I/O operations use standard memory instructions such as LDA, STA, MOV, and LXI, enabling powerful operations.
Peripheral-mapped I/O: Requires dedicated I/O instructions such as IN and OUT for transferring data between the CPU and peripherals.

Control Signals Required
Memory-mapped I/O: Uses the memory control signals (MEMR for memory read, MEMW for memory write); no separate I/O signals are needed.
Peripheral-mapped I/O: Requires dedicated I/O control signals (IOR for I/O read, IOW for I/O write), which are distinct from the memory signals.

Instruction Complexity
Memory-mapped I/O: Slightly more complex, because instructions like LDA 8000H or STA 9000H carry 16-bit address operands.
Peripheral-mapped I/O: Simpler to use, since IN and OUT work with 8-bit port numbers (e.g., IN 05H, OUT 0AH).

Memory Utilization
Memory-mapped I/O: Reduces usable memory space, since I/O devices occupy part of the memory address space and limit the memory available for programs and data.
Peripheral-mapped I/O: Preserves the full memory space for programs and data, as I/O devices do not occupy memory addresses.

Data Manipulation Capabilities
Memory-mapped I/O: Arithmetic, logical, and data manipulation operations can be performed directly on I/O data because it is accessed like memory.
Peripheral-mapped I/O: No direct manipulation of I/O data is possible; data must first be read into registers before operations can be performed.

Hardware Complexity
Memory-mapped I/O: Hardware is generally simpler, as only one decoding scheme is required for both memory and I/O.
Peripheral-mapped I/O: Slightly more complex hardware, owing to separate address decoding and control logic for I/O.

Speed of Operation
Memory-mapped I/O: May be slightly slower because of longer instructions and full 16-bit addressing.
Peripheral-mapped I/O: Typically faster access to I/O ports using short, dedicated instructions.

Common Usage
Memory-mapped I/O: Used where high flexibility in I/O data processing is needed, or where I/O ports are limited and address space is not a major concern.
Peripheral-mapped I/O: Used where maximum memory utilization is important and devices are accessed with simple I/O instructions.

Example in 8085
Memory-mapped I/O: LDA F0F0H reads data from an input device at address F0F0H; STA 9000H sends data to an output device at 9000H.
Peripheral-mapped I/O: IN 05H reads from port 05H; OUT 0AH writes to port 0AH.

Suitability
Memory-mapped I/O: Better for microcontrollers and small embedded systems where tight integration of memory and I/O is beneficial.
Peripheral-mapped I/O: Preferred in general-purpose microprocessor systems where memory and I/O need to be independently scalable.
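To make the contrast concrete, the two fragments below do the same logical job, reading a byte from an input device and writing a byte to an output device, first with memory-mapped addresses and then with I/O-mapped ports (all addresses and port numbers are hypothetical):

        ORG  0000H
        ; Memory-mapped access: ordinary memory instructions, MEMR/MEMW strobes
        LDA  0F0F0H    ; read the input device mapped at F0F0H
        STA  9000H     ; send the byte to the output device mapped at 9000H
        ; Peripheral-mapped access: dedicated instructions, IOR/IOW strobes
        IN   05H       ; read the input device at port 05H
        OUT  0AH       ; send the accumulator to the output device at port 0AH
        HLT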