OSTIM TECHNICAL UNIVERSITY
SENG 360
SYSTEM
PROGRAMMING
Slides by Dr. Eray Yağdereli
Department of Software Engineering
Spring 2025
TOPIC – 01
Buses and I/O
Computer Components

[Figure: Memory]

© 2010 John Wiley & Sons Ltd.
Introduction to Buses and I/O

A computer's interconnection structure must support transfers among its components (such as memory-to-processor, processor-to-I/O, and I/O-to-memory transfers).
A bus is a communication pathway connecting two or more devices.
A key characteristic of a bus is that it is a shared transmission medium.
It can also be defined as:
“A collection of wires and protocols through which data is transmitted
from one of the source devices to its destinations”.
Computer Buses
A system bus typically consists of from about fifty to several hundred separate lines.
Each line is assigned a particular meaning or function. Although there are
many different bus designs, on any bus the lines can be classified into three
functional groups:
• Data lines,
• Address lines, and
• Control lines.
In addition, there may be power distribution lines that supply power to the
attached modules.
Computer Buses – Data Bus
The CPU is connected to memory and I/O devices via the data, address, and
control buses. The data bus is bidirectional and transfers information (memory
data and instructions, I/O data) to and from the CPU.
• Data lines that provide a path for moving data among system
modules and I/O devices.
• May consist of 32, 64, 128, or more separate lines and the number
of lines is referred to as the width of the data bus.
• The number of lines determines how many bits can be transferred
at a time.
• The width of the data bus is a key factor in determining overall
system performance.
[Figure: bus lines — "usually unidirectional" vs. "bidirectional" lines]
Computer Buses – Control Bus
Typical control lines include:
• Memory write: causes data on the bus to be written into the addressed location
• Memory read: causes data from the addressed location to be placed on the bus
• I/O write: causes data on the bus to be output to the addressed I/O port
• I/O read: causes data from the addressed I/O port to be placed on the bus
• Transfer ACK: indicates that data have been accepted from or placed on the bus
• Bus request: indicates that a module needs to gain control of the bus
• Bus grant: indicates that a requesting module has been granted control of the
bus
• Interrupt request: indicates that an interrupt is pending
• Interrupt ACK: acknowledges that the pending interrupt has been recognized
• Clock: is used to synchronize operations
• Reset: initializes all modules.
Computer Buses (cont.)
Each line of a bus has multiple sources and destinations. If two
devices transmit during the same time period, their signals will
overlap and become garbled. Thus, only one device at a time
can successfully transmit.
[Figure: Typical Computer Bus Structure]
Facilitation of Designing and Building Computer Systems:
• Provides a well-defined interface for peripheral devices with widely varying
characteristics, which eases the development of new computer systems and
components.
Versatility and Simplicity:
• Expandability: makes it easier to add new devices to a computer system,
• Portability: peripheral devices can be moved among computer systems that
use the same bus standard.
Low Cost:
• A single set of wires is utilized and shared in multiple ways.
Disadvantages of Buses
Bus data-transfer bandwidth limit: the data-transfer bandwidth of the bus is
fixed and does not scale as more components are added, which can limit the
maximum I/O throughput.
Lack of concurrent operation: only one transaction (transmission of data) can
take place on a bus at a time, which can create a communication bottleneck.
Maximum bus speed limit: it is largely determined by:
• The length of the bus (the longer the bus, the greater the propagation delay),
• The number of devices on the bus (the more devices attached, the greater
the bus length),
• The need to support a wide range of devices with:
  – Widely varying latencies,
  – Widely varying data transfer rates.
Types of Buses
In an ideal world, a single communication system (Single Bus)
would satisfy the needs and requirements of all system
components and I/O devices. However, for many practical reasons (including
the different data transfer rates of component devices, cost, backward
compatibility, and suitability for each application), numerous interconnection
schemes are employed in a single system.
Accordingly, most bus-based computer systems utilize multiple buses,
generally arranged in a hierarchy.
In the literature, the bus that connects the processor to main memory is
called the "system bus".
[Figure: Computer Buses (System Bus)]
Since CPU performance depends heavily on a high-bandwidth, low-latency
connection to main memory, processor buses (system buses) employ
leading-edge signaling technology running at very high frequencies.
Processor buses are also frequently updated and improved, usually with
every processor generation. Hence, all the devices that connect to the
processor bus (typically the CPU, the memory controller, and the I/O bridge,
often collectively referred to as the chipset) need to be updated at regular
intervals.
Because of this update requirement, there is little or no pressure on
processor buses to maintain backward compatibility beyond one processor
generation. Hence, not only does the signaling technology evolve quickly, but
the protocols used to communicate across the bus also adapt quickly to take
advantage of new opportunities for improving performance.
In contrast to the processor bus, a typical I/O bus evolves much more slowly,
since backward compatibility with legacy I/O adapters is a primary design
constraint. In fact, systems frequently support multiple generations of I/O
buses to enable the use of legacy adapters as well. For example, many PC
systems support both Peripheral Component Interconnect Express (PCI
Express, or PCIe), a point-to-point serial link, and its obsolete predecessor,
the 32-bit parallel Conventional PCI interface, which dates back to 1995.
Many PC motherboards still include at least one 32-bit PCI slot, since some
specialized expansion cards in use today have never transitioned to PCI
Express.
Peripheral developers and manufacturers tend to lag behind processor
developments; for example, Intel's 12th- and 13th-generation Core CPUs have
supported PCIe 5.0 since 2021, but PCIe 5.0-compatible graphics cards and
storage devices were slow to reach the market.
[Figure: Traditional Two-Bus Architecture]
• Only one or a small number of backplane buses tap into the
processor-memory bus,
• The processor-memory bus is used only for processor-memory traffic,
• I/O buses are connected to the backplane bus.
• Advantage: data traffic and load on the processor bus are greatly reduced.
• Separate pin sets for each function:
  – Memory bus,
  – Caches,
  – Graphics bus (for a fast frame buffer),
  – I/O buses connected to the backplane bus.
• Advantage: data traffic and load on the processor bus are greatly reduced.
[Figure: Intel's Northbridge and Southbridge Architecture]
I/O Address Mapping Methods
As the same address bus is used for both memory and I/O, how does the
hardware distinguish between memory reads/writes and I/O reads/writes?
Two approaches:
• Memory mapped I/O
– Devices and memory share the same address space,
– I/O looks just like memory read/write,
– No special commands for I/O,
• Large selection of memory access commands available for I/O.
• Isolated I/O
– Separate address spaces,
– Need I/O or memory select lines,
– Special commands for I/O,
• Limited (capability) command set for I/O.
Memory mapped I/O:
The same address space is used and shared by both memory and I/O devices.
Reads and writes to I/O device interface registers are done with the same
machine instructions (MOVE, LOAD, STORE, etc.) that are used to read and
write memory locations.
Memory Mapped I/O
The entire memory address space is divided into memory space
and I/O space.
Advantages:
• Simpler CPU design,
• No special instructions for I/O accesses.
Disadvantages:
• I/O devices reduce the amount of memory space available for
application programs.
Any address used for an I/O device interface will be unavailable for memory
devices. (If there is an I/O device at address 400, there cannot also be a
memory location with address 400.) Depending on how the I/O addresses are
chosen, not all of the available physical memory space may be contiguous.
• The address decoder needs to decode the full address bus to avoid
conflict with memory addresses.
Isolated I/O (Separate I/O):
A separate address space is occupied exclusively by I/O device interfaces. In
this approach, the CPU has a set of instructions dedicated to reading and
writing I/O ports. The most popular example of this approach is the Intel x86
architecture, which defines IN and OUT instructions for transferring data
between CPU registers and I/O devices.
• Two separate address spaces for memory and I/O.
– Less expensive address decoders than those needed for memory-
mapped I/O (Why?)
• For systems using a single bus for both memory and I/O devices, an
additional control signal, MEM/IO, is required to prevent memory and I/O
from both trying to place data on the bus simultaneously.
  – MEM/IO is low for I/O use and high for memory use.
• Special I/O instructions (such as IN and OUT) are required to access I/O
devices
[Figure: System with separate buses for I/O devices and memory]

[Figure: Single-bus system with separate I/O and memory address space]
I/O Operation Techniques - Data Transfer Methods
Four data transfer methods are possible for I/O operations between I/O
interfaces and memory:
• Programmed I/O - Polling
– Processor executes a program that gives it direct control of the I/O
operation
– Data are exchanged between the processor and the I/O module
• Interrupt-driven I/O
– Processor issues an I/O command, continues to execute other
instructions, and is interrupted by the I/O module when the latter has
completed its work
• Direct memory access (DMA)
– The I/O module and main memory exchange data directly without
processor involvement
• Channels (Input/Output Processors)
  – A channel is an additional, independent, special-purpose programmable
processor that supervises I/O; it is able to (among other things) perform
data transfers in DMA mode.
Programmed I/O – Polling
The CPU determines whether each device is ready by periodically querying
the appropriate bits in its status register. Because this monitoring and
control of I/O devices is done by a program written for that purpose, it is
referred to as program-controlled I/O.
Because the monitoring is done via status register polling, this method is also
sometimes known as polled I/O.
Advantages:
The main advantage of this technique is its obvious simplicity with respect to
hardware. Unlike the other methods we examine, NO signals or devices are
required for data transfer other than the CPU, the I/O device's interface
registers, and the bus that connects them.
This lack of hardware complexity is attractive because it reduces
implementation cost.
Disadvantages:
1. Data Transfer must be performed by CPU (OS)
2. Polling Overhead: Waste of CPU cycles and adverse implications for
system performance.
• Polling code has to be executed periodically for every device in the system,
even when devices have no data to send, a task made more complex by the
likelihood that some devices will need to transfer data more frequently than
others.
• Any time spent polling I/O devices and transferring data is time not spent
by the CPU performing other useful computations.
• If the polling loop is executed frequently, much time will be taken from
other activities; if it is executed infrequently, I/O devices will experience
long waits for service. (Data may even be lost if the device interfaces
provide insufficient buffering capability.)
3. Each data transfer consumes at least two bus cycles (one cycle for reading
the I/O device and a second bus cycle for writing to memory).
Programmed I/O Overhead – Polling Overhead
• Parameters:
  – 500 MHz CPU,
  – A polling event takes 400 cycles.
• Overhead for polling a mouse 30 times per second?
  (30 polls/s) × [(400 c/poll)/(500M c/s)] ≈ 0.0024%
  Not bad.
• Overhead for polling a 4 MB/s disk with a 16 B interface?
  (4M B/s)/(16 B/poll) × [(400 c/poll)/(500M c/s)] = 20%
  Not good.
• This is the overhead of polling alone, with or without data transfer.
• Really bad if the disk is not being used.
Interrupt-driven I/O
The CPU configures the I/O interface to send an interrupt request when it is
ready for a data transfer. The I/O device uses a (hardware) interrupt request
line to notify the CPU when it wants to send or receive data.
The interrupt service routine (executed by the CPU) contains the code to
perform the data transfer(s).
When the I/O interface sends an interrupt request, the processor interrupts
its current program, runs the interrupt service routine in which the data
transfer is executed, and then resumes its former processing.
In this technique, the CPU does not check the status, but it is still the
responsibility of the processor to perform the data transfer.
• I/O interrupts are asynchronous
– Not associated with any instruction
– Don’t need to be handled immediately
• I/O interrupts are prioritized
– Synchronous interrupts (e.g., page faults) have highest priority
– High-bandwidth I/O devices have higher priority than
low-bandwidth ones
Advantages:
• The CPU does not need to check the status continuously. The “polling
overhead“ problem does not exist in this technique.
• I/O code can be very efficient because each device can have its own
interrupt handler tailored to its unique characteristics. (For best
performance, there should be enough interrupt request lines that
each device can have one dedicated to it; otherwise, service routines
will have to be shared, and the processor will still have to perform
some processing to distinguish between interrupt sources.)
• The CPU can run other programs while a data transfer is taking place
between the I/O interface and the peripheral I/O device itself.
• Suitable for general-purpose systems in which there are a variety of
devices that require data transfers, especially when these transfers are of
varying sizes and occur at more or less random times.
Disadvantages:
• Interrupt processing has its own overhead (saving the return address,
program status, and registers, as well as performing some other
operations). At the end of the service routine, the return address and
program status are restored.
• The use of interrupts complicates the system and the CPU hardware
somewhat, but interrupts are standard equipment on modern
microprocessors.
• The overhead of the CPU having to execute instructions to perform the
data transfer.
• Interrupt-driven I/O is not suitable for applications where I/O
operations are performed very frequently or for those in which
transfers involve large blocks of data or must be done at very high
speeds.
Interrupt-driven I/O Overhead
• Parameters:
  – 500 MHz CPU,
  – Interrupt handler takes 400 cycles,
  – Data transfer takes 100 cycles,
  – 4 MB/s disk with a 16 B interface transfers data only 5% of the time.
• Data transfer (xfer) time?
  0.05 × (4M B/s)/(16 B/xfer) × [(100 c/xfer)/(500M c/s)] = 0.25%
• Overhead for interrupts?
  0.05 × (4M B/s)/(16 B/xfer) × [(400 c/xfer)/(500M c/s)] = 1%
• Overhead for polling?
  (4M B/s)/(16 B/poll) × [(400 c/poll)/(500M c/s)] = 20%
Direct Memory Access – (DMA)
In a system using either program-controlled or interrupt-driven I/O, the
obvious middleman (and potential bottleneck) is the CPU itself, which is
responsible for transferring data between memory and I/O interfaces.
The CPU must execute a number of instructions for each I/O transfer.
In order to expedite I/O by performing direct transfers of data between two
slave devices (such as main memory and an I/O port), there must be another
device in the system, in addition to the CPU, that is capable of becoming the
bus master and generating the control signals necessary to coordinate the
transfer. Such a device is referred to as a direct memory access controller
(DMA controller, or just DMAC).
Thus, the Direct Memory Access (DMA) technique consists of adding a DMAC
hardware module to the system bus and having the DMAC itself carry out data
transfers between I/O devices and memory without processor control.
A typical DMAC is not a programmable processor like the system CPU, but a
hardware state machine capable of carrying out certain operations when
commanded by the CPU.
When the CPU needs to read or write a block of data, it initializes the DMAC
by sending the necessary information (address, size, transfer mode etc.).
Specifically, the DMAC is capable of generating the address and control and
timing signals necessary to activate memory and I/O devices and transfer
data over the system bus. By simultaneously activating the signals that cause
a given device to place data on the bus and memory to accept data from the
bus (or vice versa), the DMAC can cause a direct input or output data transfer
to occur.
Thus, the CPU delegates responsibility for the I/O operation to the DMAC, and
it can continue with its other programs during the transfer of data.
The data does not go through the CPU. The DMAC uses the system bus only
when the processor does not need it, or it must force the processor to
suspend the bus operations temporarily. The DMA technique is suitable for
applications where large volumes of data are transferred and I/O operations
are performed very frequently (Why?).
[Figure: I/O device and DMAC on the system bus]
• Performs block I/O memory transfers without processor control,
• Transfers entire blocks (e.g., pages, video frames) at a time,
• Can use the bus "burst" transfer mode if available,
• Only interrupts the processor when done (or if an error occurs).

[Figure: Block Diagram of a Typical DMAC]
[Figure: Alternative DMA configurations]
Alternative Direct Memory Access (DMA) Configurations
The DMA mechanism can be configured in a variety of ways. Some
possibilities are shown in the Figure given in the previous slide. In the first
example, all modules share the same system bus. This configuration, while it
may be inexpensive, is clearly inefficient. As with processor controlled
programmed I/O, each transfer of a word consumes two bus cycles. (One
cycle for reading I/O device and second bus cycle for writing to memory)
In Figures (b) and (c), the system bus that the DMA module shares with the
processor and memory is used by the DMA module only to exchange data
with memory. The exchange of data between the DMA and I/O modules
takes place on I/O bus which is between I/O device and DMAC NOT on the
system bus.
Advantages:
• The CPU can run other programs while I/O operation is proceeding.
• Eliminates the need for the CPU fetching and executing instructions to
perform each data transfer.
• Large blocks of data are normally transferred at least twice as quickly as
when the CPU performs the transfers directly, because each byte or word
of data is moved only once (directly to or from memory) rather than twice
(into and then out of the CPU).
– Speed advantage in dealing with large block I/O transfers has
made DMA a standard feature, not just in high-performance
systems, but in general-purpose machines as well.
Disadvantages:
• Requires more hardware than either the program-controlled or the
interrupt-driven I/O approach. (The CPU still needs all the circuitry and
connections to be able to poll devices and receive interrupts so that it is
aware of the need for a transfer to occur, and the DMAC is an additional
device that adds its own cost and complexity to the system.)
• More complex way of handling I/O compared to program-controlled
or interrupt-driven I/O approaches.
• Increased software overhead of having to set up the DMA channel
parameters for each transfer.
– Therefore, DMA is usually less efficient than I/O performed directly
by the CPU for transfers of small amounts of data.
• The CPU cannot access main memory while the I/O operation is taking
place unless bus cycle stealing or split bus transactions are used.
DMA I/O Overhead
• Parameters:
  – 500 MHz CPU,
  – Interrupt handler takes 400 cycles,
  – Data transfer takes 100 cycles,
  – 4 MB/s disk with a 16 B interface transfers data 50% of the time,
  – DMA setup takes 1600 cycles; transfers one 16 KB page at a time.
• Processor overhead for interrupt-driven I/O?
  400 cycles (interrupt handler) + 100 cycles (data transfer) = 500 cycles
  0.5 × (4M B/s)/(16 B/xfer) × [(500 c/xfer)/(500M c/s)] = 12.5%
• Processor overhead with DMA?
  The processor gets involved only once per page, not once per 16 B.
  400 cycles (interrupt handler) + 1600 cycles (DMA setup) = 2000 cycles
  0.5 × (4M B/s)/(16K B/page) × [(2000 c/page)/(500M c/s)] = 0.05%
© 2010 John Wiley & Sons Ltd.
I/O Overhead Example Question
Calculate the processor overhead for interrupt-driven I/O and for DMA I/O
using the following parameters. (For simplicity, take 1 Megabyte = 10^6 bytes,
1 Kilobyte = 10^3 bytes, 1 Gigahertz = 10^9 Hz, and 512 ≈ 500.)
• Parameters:
  – 1 GHz CPU,
  – Interrupt handler takes 400 cycles,
  – Data transfer takes 100 cycles,
  – 128 MB/s disk with a 512-byte interface; the disk transfers data 50% of
the time,
  – DMA setup takes 1600 cycles, and it transfers a 64 KB page at a time.
I/O Overhead Example Solution
The interrupt-driven I/O overhead calculation is given below:
• Each interrupt-driven transfer consumes 400 + 100 = 500 cycles.
  [1/2 × (128 × 10^6 B/s)/(512 B/xfer)] × (500 cycles / 10^9 cycles/s)
  = (1/8 × 10^6) × (500 / 10^9)
  = 62.5 × 10^6 / 10^9 = 0.0625 = 6.25% overhead
The direct memory access (DMA) I/O overhead calculation is given below:
• Each DMA transfer consumes 1600 + 400 (for the interrupt handler used to
inform the CPU when the I/O is finished) = 2000 cycles.
  1/2 × [(128 × 10^6 B/s)/(64 × 10^3 B/page)] × (2000 cycles / 10^9 cycles/s)
  = 10^3 × 2000 / 10^9 = 0.002 = 0.2% overhead
Channels
A channel is an independent, special-purpose programmable processor that
manages I/O. It functions as a more sophisticated I/O controller than a DMA
device and can be programmed to perform complex data transfers. A channel
can also perform extensive error detection and correction, data formatting,
and code conversion. Unlike a DMAC, a channel can interrupt the CPU under
any error condition.
Channels as a separate system component date back to the IBM mainframes
of the 1960s. IBM referred to its I/O processors as channel processors. These
devices were very simple, von Neumann–type processors. The channel
processors had their own register sets and program counters but shared main
memory with the system CPU. Their programs were made up of channel
commands with a completely different instruction set architecture from the
main CPU. (This instruction set was very simple and optimized for I/O.)
Channel processors could not execute machine instructions, nor could the
CPU execute channel commands. The CPU and the channel processor
communicated by writing and reading information in a shared
communication area in memory.
There are two types of channels: multiplexer and selector.
A multiplexer channel is connected to several low- and medium-speed
devices (card readers, paper tape readers, etc.). The channel scans these
devices in turn and collects data into a buffer.
A selector channel interfaces high-speed devices such as magnetic tapes
and disks to the memory. These devices can keep a channel busy because of
their high data-transfer rates. Although several devices are connected to each
selector channel, the channel stays with one device until the data transfer
from that device is complete.