18CS34 COMPUTER ORGANIZATION III SEM MOD-2

MODULE 2:
Input / Output Organization: Accessing I/O Devices, Interrupts – Interrupt Hardware, Direct Memory
Access, Buses, Interface Circuits, Standard I/O Interfaces – PCI Bus, SCSI Bus, USB.

One of the basic features of a computer is its ability to exchange data with other devices. This
communication capability enables a human operator, for example, to use a keyboard and a display screen
to process text and graphics. We make extensive use of computers to communicate with other computers
over the Internet and access information around the globe. In other applications, computers are less visible but
equally important. They are an integral part of home appliances, manufacturing equipment, transportation
systems, banking and point-of-sale terminals. In such applications, input to a computer may come from a
sensor switch, a digital camera, a microphone, or a fire alarm. Output may be a sound signal to be sent to a
speaker or a digitally coded command to change the speed of a motor, open a valve, or cause a robot to
move in a specified manner. In short, a general-purpose computer should have the ability to exchange
information with a wide range of devices in varying environments.
ACCESSING I/O DEVICES
The bus enables all the devices connected to it to exchange information. Typically, it consists of three sets of
lines used to carry address, data, and control signals. Each I/O device is assigned a unique set of addresses. When
the processor places a particular address on the address lines, the device that recognizes this address responds to
the commands issued on the control lines. The processor requests either a read or a write operation, and the
requested data are transferred over the data lines. When I/O devices and the memory share the same address
space, the arrangement is called memory-mapped I/O.
Figure 4.1 A single-bus structure.

With memory-mapped I/O, any machine instruction that can access memory can be used to transfer data to or from an I/O device. For example, if DATAIN is the address of the input buffer associated with the keyboard, the instruction

Move DATAIN,R0

reads the data from DATAIN and stores them into processor register R0. Similarly, the instruction

Move R0,DATAOUT

sends the contents of register R0 to location DATAOUT, which may be the output data buffer of a display unit
or a printer.
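
The same access pattern can be written in a high-level language. The short C sketch below is only an illustration: the addresses 0x4000 and 0x4004, the register names, and the use of plain pointer accesses are assumptions made for the example, not details given in the text.

#include <stdint.h>

/* Hypothetical memory-mapped register addresses (assumed for illustration;
   the actual addresses depend on the particular computer). */
#define DATAIN   ((volatile uint8_t *)0x4000)   /* keyboard input buffer */
#define DATAOUT  ((volatile uint8_t *)0x4004)   /* display output buffer */

/* With memory-mapped I/O, reading or writing a device register is an
   ordinary memory access through a pointer. */
void echo_one_character(void)
{
    uint8_t ch = *DATAIN;    /* corresponds to  Move DATAIN,R0  */
    *DATAOUT = ch;           /* corresponds to  Move R0,DATAOUT */
}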
When building a computer system based on these processors, the designer has the option of connecting I/O
devices to use the special I/O address space or simply incorporating them as part of the memory address
space. The latter approach is by far the most common as it leads to simpler software. One advantage of a
separate I/O address space is that I/O devices deal with fewer address lines. A special signal on the bus
indicates that the requested read or write transfer is an I/O operation. When this signal is asserted, the
memory unit ignores the requested transfer. The I/O devices examine the low-order bits of the address bus to
determine whether they should respond.


The address decoder enables the device to recognize its address when this address appears on the address
lines. The data register holds the data being transferred to or from the processor. The status register contains
information relevant to the operation of the I/O device. Both the data and status registers are connected to
the data bus and assigned unique addresses.

The address decoder, the data and status registers, and the control circuitry required to coordinate I/O
transfers constitute the device's interface circuit.
I/O devices operate at speeds that are vastly different from that of the processor. An instruction that reads
a character from the keyboard should be executed only when a character is available in the input buffer of
the keyboard interface. Also, we must make sure that an input character is read only once.
For an input device such as a keyboard, a status flag, SIN, is included in the interface circuit as part of the
status register. This flag is set to 1 when a character is entered at the keyboard and cleared to 0 once this
character is read by the processor. Hence, by checking the SIN flag, the software can ensure that it is
always reading valid data. This is often accomplished in a program loop that repeatedly reads the status
register and checks the state of SIN. When SIN becomes equal to 1, the program reads the input data
register. A similar procedure can be used to control output operations using an output status flag,
SOUT.

Let us consider a simple example of I/O operations involving a keyboard and a display device in a computer system. Four registers, DATAIN, DATAOUT, STATUS, and CONTROL (shown in Figure 4.3), are used in the data transfer operations. Register STATUS contains
two control flags, SIN and SOUT, which provide status information for the keyboard and the display
unit, respectively. The two flags KIRQ and DIRQ in this register are used in conjunction with interrupts.


Figure 4.3 Registers in keyboard and display interfaces.


        Move     #LINE,R0       Initialize memory pointer.
WAITK   TestBit  #0,STATUS      Test SIN.
        Branch=0 WAITK          Wait for character to be entered.
        Move     DATAIN,R1      Read character.
WAITD   TestBit  #1,STATUS      Test SOUT.
        Branch=0 WAITD          Wait for display to become ready.
        Move     R1,DATAOUT     Send character to display.
        Move     R1,(R0)+       Store character and advance pointer.
        Compare  #$0D,R1        Check if Carriage Return.
        Branch≠0 WAITK          If not, get another character.
        Move     #$0A,DATAOUT   Otherwise, send Line Feed.
        Call     PROCESS        Call a subroutine to process the line.

This program reads a line of characters from the keyboard and stores it in a memory buffer starting at location
LINE. Then, it calls a subroutine PROCESS to process the input line. As each character is read, it is echoed
back to the display. Register R0 is used as a pointer to the memory buffer area. The contents of R0 are
updated using the Autoincrement addressing mode so that successive characters are stored in successive
memory locations.
Each character is checked to see if it is the Carriage Return (CR) character, which has the ASCII code 0D
(hex). If it is, a Line Feed character (ASCII code 0A) is sent to move the cursor one line down on the
display and subroutine PROCESS is called. Otherwise, the program loops back to wait for another
character from the keyboard.
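
The same wait loops can be expressed in C, which may make the flag testing easier to follow. This is a sketch under assumed details: STATUS, DATAIN and DATAOUT are memory-mapped registers at made-up addresses, SIN is bit 0 and SOUT is bit 1 of STATUS (as in Figure 4.3), and PROCESS stands for the line-processing subroutine.

#include <stdint.h>

#define DATAIN   ((volatile uint8_t *)0x4000)   /* assumed address */
#define DATAOUT  ((volatile uint8_t *)0x4004)   /* assumed address */
#define STATUS   ((volatile uint8_t *)0x4008)   /* assumed address */
#define SIN      0x01                           /* bit 0 of STATUS */
#define SOUT     0x02                           /* bit 1 of STATUS */

void PROCESS(char *line);                       /* provided elsewhere */

void read_line(char *line)
{
    char *p = line;                     /* plays the role of register R0 */
    char ch;

    do {
        while ((*STATUS & SIN) == 0)    /* wait for a character (SIN = 1) */
            ;
        ch = (char)*DATAIN;             /* read character */

        while ((*STATUS & SOUT) == 0)   /* wait for the display (SOUT = 1) */
            ;
        *DATAOUT = (uint8_t)ch;         /* echo character to the display */

        *p++ = ch;                      /* store character, advance pointer */
    } while (ch != 0x0D);               /* repeat until Carriage Return */

    while ((*STATUS & SOUT) == 0)
        ;
    *DATAOUT = 0x0A;                    /* send Line Feed */
    PROCESS(line);                      /* process the completed line */
}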
This example illustrates program-controlled I/O, in which the processor repeatedly checks a status flag to
achieve the required synchronization between the processor and an input or output device. We say that the
processor polls the device. There are two other commonly used mechanisms for implementing I/O
operations: interrupts and direct memory access. In the case of interrupts, synchronization is achieved by
having the I/O device send a special signal over the bus whenever it is ready for a data transfer operation. Direct
memory access is a technique used for high-speed I/O devices. It involves having the device interface
transfer data directly to or from the memory, without continuous involvement by the processor.

INTERRUPTS
With program-controlled I/O, the program enters a wait loop in which it repeatedly tests the device status; during this period, the processor is not performing any useful computation. There are many situations
where other tasks can be performed while waiting for an I/O device to become ready. To allow this to
happen, we can arrange for the I/O device to alert the processor when it becomes ready. It can do so by
sending a hardware signal called an interrupt to the processor. At least one of the bus control lines,
called an interrupt-request line, is usually dedicated for this purpose. Since the processor is no longer
required to continuously check the status of external devices, it can use the waiting period to perform other
useful functions. Indeed, by using interrupts, such waiting periods can ideally be eliminated.
Consider a task that requires some computations to be performed and the results to be printed on a
line printer. This is followed by more computations and output, and so on. Let the program consist of
two routines, COMPUTE and PRINT. Assume that COMPUTE produces a set of n lines of output, to be
printed by the PRINT routine.
The required task may be performed by repeatedly executing first the COMPUTE routine and then the
PRINT routine. The printer accepts only one line of text at a time. Hence, the PRINT routine must send
one line of text, wait for it to be printed, then send the next line, and so on, until all the results have
been printed. The disadvantage of this simple approach is that the processor spends a considerable
amount of time waiting for the printer to become ready. If it is possible to overlap printing and
computation, that is, to execute the COMPUTE routine while printing is in progress, a faster overall speed
of execution will result. This may be achieved as follows. First, the COMPUTE routine is executed to
produce the first n lines of output. Then, the PRINT routine is executed to send the first line of text to the
printer. At this point, instead of waiting for the line to be printed; the PRINT routine may be temporarily
suspended and execution of the COMPUTE routine continued. Whenever the printer becomes ready, it

alerts the processor by sending an interrupt-request signal. In response, the processor interrupts execution
of the COMPUTE routine and transfers control to the PRINT routine. The PRINT routine sends the
second line to the printer and is again suspended. Then the interrupted COMPUTE routine resumes
execution at the point of interruption. This process continues until all n lines have been printed and the
PRINT routine ends.
The PRINT routine will be restarted whenever the next set of n lines is available for printing. If
COMPUTE takes longer to generate n lines than the time required to print them, the processor will be
performing useful computations all the time.

This example illustrates the concept of interrupts. The routine executed in response to an interrupt request
is called the interrupt-service routine, which is the PRINT routine in our example. Interrupts bear
considerable resemblance to subroutine calls. Assume that an interrupt request arrives during execution of instruction i. The processor first completes execution of instruction i. Then, it
loads the program counter with the address of the first instruction of the interrupt-service routine. For the time
being, let us assume that this address is hardwired in the processor. After execution of the interrupt-service
routine, the processor has to come back to instruction i + 1. Therefore, when an interrupt occurs, the current contents
of the PC, which point to instruction i + 1, must be put in temporary storage in a known location. A Return from-
interrupt instruction at the end of the interrupt-service routine reloads the PC from that temporary storage
location, causing execution to resume at instruction i + 1. In many processors, the return address is saved on the
processor stack. Alternatively, it may be saved in a special location, such as a register provided for this purpose.
We should note that as part of handling interrupts, the processor must inform the device that its request has been
recognized so that it may remove its interrupt-request signal. This may be accomplished by means of a special control
signal on the bus. An interrupt-acknowledge signal, used in some of the interrupt schemes to be discussed later,
serves this function. A common alternative is to have the transfer of data between the processor and the I/O device
interface accomplish the same purpose. The execution of an instruction in the interrupt-service routine that accesses
a status or data register in the device interface implicitly informs the device that its interrupt request has been
recognized.
The interrupted program and the interrupt-service routine may be entirely unrelated; in fact, the two programs often belong to different users. Therefore, before starting execution of the interrupt-service
routine, any information that may be altered during the execution of that routine must be saved. This information must be
restored before execution of the interrupted program is resumed. In this way, the original program can continue
execution without being affected in any

way by the interruption, except for the time delay. The information that needs to be saved and restored typically
includes the condition code flags and the contents of any registers used by both the interrupted program and the
interrupt-service routine.
The task of saving and restoring information can be done automatically by the processor or by program
instructions. Most modern processors save only the minimum amount of information needed to maintain the
integrity of program execution. This is because the process of saving and restoring registers involves memory
transfers that increase the total execution time, and hence represent execution overhead. Saving registers also increases
the delay between the time an interrupt request is received and the start of execution of the interrupt-service routine.
This delay is called interrupt latency. The processor saves only the contents of the program counter and the
processor status register. Any additional information that needs to be saved must be saved by program instructions at
the beginning of the interrupt-service routine and restored at the end of the routine.
An interrupt is more than a simple mechanism for coordinating I/O transfers. In a general sense, interrupts
enable transfer of control from one program to another to be initiated by an event external to the computer. Execution
of the interrupted program resumes after the execution of the interrupt-service routine has been completed. The
concept of interrupts is used in operating systems and in many control applications where processing of certain
routines must be accurately timed relative to external events. The latter type of application is referred to as real-
time processing.

INTERRUPT HARDWARE
A single interrupt-request line may serve several devices, all of which are connected to the line via switches to ground. To request an interrupt, a device closes its associated
switch. Thus, if all interrupt-request signals INTR1 to INTRn are inactive, that is, if all switches are open, the
voltage on the interrupt-request line will be equal to Vdd. This is the inactive state of the line. When a device
requests an interrupt by closing its switch, the voltage on the line drops to 0, causing the interrupt-request signal,
INTR, received by the processor to go to 1. Since the closing of one or more switches will cause the line voltage
to drop to 0, the value of INTR is the logical OR of the requests from individual devices, that is,

Figure 4.6 An equivalent circuit for an open-drain bus used to implement a common interrupt-request line.

INTR = INTR1 + INTR2 + · · · + INTRn


Special gates known as open-collector (for bipolar circuits) or open-drain (for MOS circuits) are used to
drive the INTR line. The output of an open-collector or an open-drain gate is equivalent to a switch to
ground that is open when the gate's input is in the 0 state and closed when it is in the 1 state. The voltage
level, hence the logic state, at the output of the gate is determined by the data applied to all the gates connected
to the bus, according to the equation given above. Resistor R is called a pull-up resistor because it pulls the
line voltage up to the high-voltage state when the switches are open.

ENABLING AND DISABLING INTERRUPTS


The facilities provided in a computer must give the programmer complete control over the events that take
place during program execution. The arrival of an interrupt request from an external device causes the

processor to suspend the execution of one program and start the execution of another. A fundamental facility
found in all computers is the ability to enable and disable such interruptions as desired.
For example, an interrupt request from the printer should be accepted only if there are output lines to be printed. After
printing the last line of a set of n lines, interrupts should be disabled until another set becomes available for
printing. For these reasons, some means for enabling and disabling interrupts must be available to the
programmer. A simple way is to provide machine instructions, such as Interrupt-enable and Interrupt-disable
that perform these functions.
Let us consider in detail the specific case of a single interrupt request from one device. When a device
activates the interrupt-request signal, it keeps this signal activated until it learns that the processor has accepted its
request. This means that the interrupt-request signal will be active during execution of the interrupt-service
routine, perhaps until an instruction is reached that accesses the device in question. It is essential to ensure that
this active request signal does not lead to successive interruptions, causing the system to enter an infinite loop
from which it cannot recover. Several mechanisms are available to solve this problem.
The first possibility is to have the processor hardware ignore the interrupt-request line until the execution of
the first instruction of the interrupt-service routine has been completed. Then, by using an Interrupt-disable
instruction as the first instruction in the interrupt-service routine, the programmer can ensure that no further
interruptions will occur until an Interrupt-enable instruction is executed. Typically, the Interrupt-enable
instruction will be the last instruction in the interrupt-service routine before the Return-from-interrupt
instruction. The processor must guarantee that execution of the Return-from-interrupt instruction is
completed before further interruption can occur.
The second option, which is suitable for a simple processor with only one interrupt-request line, is to have the
processor automatically disable interrupts before starting the execution of the interrupt-service routine. After
saving the contents of the PC and the processor status register (PS) on the stack, the processor performs the
equivalent of executing an Interrupt-disable instruction. It is often the case that one bit in the PS register,
called Interrupt-enable, indicates whether interrupts are enabled.
In the third option, the processor has a special interrupt-request line for which the interrupt-handling circuit
responds only to the leading edge of the signal. Such a line is said to be edge-triggered. In this case, the
processor will receive only one request, regardless of how long the line is activated. Hence, there is no danger
of multiple interruptions and no need to explicitly disable interrupt requests from this line.
Consider the sequence of events involved in handling an interrupt request from a single device. Assuming that interrupts are enabled, the following is a typical scenario:
1. The device raises an interrupt request.
2. The processor interrupts the program currently being executed. Interrupts are disabled by changing the
control bits in the PS (except in the case of edge-triggered interrupts).
3. The device is informed that its request has been recognized, and in response, it deactivates the
interrupt-request signal.
4. The action requested by the interrupt is performed by the interrupt-service routine.
5. Interrupts are enabled and execution of the interrupted program is resumed.

HANDLING MULTIPLE DEVICES


Consider the situation where a number of devices capable of initiating interrupts are connected to the processor. Because these
devices are operationally independent, there is no definite order in which they will generate interrupts. For example,
device X may request an interrupt while an interrupt caused by device Y is being serviced, or several devices may
request interrupts at exactly the same time. This gives rise to a number of questions:
1. How can the processor recognize the device requesting an interrupt?
2. Given that different devices are likely to require different interrupt-service routines, how can the processor obtain the starting address of the appropriate routine in each case?
3. Should a device be allowed to interrupt the processor while another interrupt is being serviced?
4. How should two or more simultaneous interrupt requests be handled?


When a request is received over the common interrupt-request line additional information is needed to identify the
particular device that activated the line. Furthermore, if two devices have activated the line at the same time, it must be
possible to break the tie and select one of the two requests for service. When the interrupt-service routine for the
selected device has been completed, the second request can be serviced.
The information needed to determine whether a device is requesting an interrupt is available in its status register.
When a device raises an interrupt request, it sets to 1 one of the bits in its status register, which we will call the
IRQ bit. For example, bits KIRQ and DIRQ in Figure 4.3 are the interrupt request bits for the keyboard and the
display, respectively. The simplest way to identify the interrupting device is to have the interrupt-service routine poll
all the I/O devices connected to the bus. The first device encountered with its IRQ bit set is the device that should be
serviced. An appropriate subroutine is called to provide the requested service.
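
A minimal sketch of the polling approach is shown below. The device table, the bit position of IRQ, and the handler functions are all assumptions introduced for the illustration; only the idea of scanning the IRQ bits and calling the first matching service routine comes from the text.

#include <stdint.h>

#define IRQ  0x04   /* assumed position of the IRQ bit in a status register */

/* Hypothetical descriptor for each device on the bus. */
struct device {
    volatile uint8_t *status;    /* address of the device status register */
    void (*service)(void);       /* service subroutine for this device    */
};

extern struct device devices[]; /* table of all I/O devices */
extern int num_devices;

/* Interrupt-service routine: poll the devices in a fixed order and serve
   the first one found with its IRQ bit set. */
void interrupt_handler(void)
{
    for (int i = 0; i < num_devices; i++) {
        if (*devices[i].status & IRQ) {
            devices[i].service();
            return;
        }
    }
}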
The polling scheme is easy to implement. Its main disadvantage is the time spent interrogating the IRQ bits of all the
devices that may not be requesting any service. An alternative approach is to use vectored interrupts.
To reduce the time involved in the polling process, a device requesting an interrupt may identify itself directly to the
processor. Then, the processor can immediately start executing the corresponding interrupt-service routine. The term
vectored interrupts refers to all interrupt-handling schemes based on this approach.
A device requesting an interrupt can identify itself by sending a special code to the processor over the bus. This
enables the processor to identify individual devices even if they share a single interrupt-request line. The code supplied
by the device may represent the starting address of the interrupt-service routine for that device. The code length is
typically in the range of 4 to 8 bits. The remainder of the address is supplied by the processor based on the area in its
memory where the addresses for interrupt-service routines are located.
This arrangement implies that the interrupt-service routine for a given device must always start at the same location.
The programmer can gain some flexibility by storing in this location an instruction that causes a branch to the appropriate
routine. In many computers, this is done automatically by the interrupt-handling mechanism. The location pointed to
by the interrupting device is used to store the starting address of the interrupt-service routine. The processor reads this
address, called the interrupt vector, and loads it into the PC. The interrupt vector may also include a new value for the
processor status register.

Interrupt Nesting
Consider a computer that keeps track of the time of day using a real-time clock. This is a device that sends interrupt requests
to the processor at regular intervals. For each of these requests, the processor executes a short interrupt-service routine
to increment a set of counters in the memory that keep track of time in seconds, minutes, and so on. Proper operation
requires that the delay in responding to an interrupt request from the real-time clock be small in comparison with the
interval between two successive requests. To ensure that this requirement is satisfied in the presence of other
interrupting devices, it may be necessary to accept an interrupt request from the clock during the execution of an
interrupt-service routine for another device.
A multiple-level priority organization means that during execution of an interrupt-service routine, interrupt
requests will be accepted from some devices but not from others, depending upon the device's priority. To
implement this scheme, we can assign a priority level to the processor that can be changed under program
control. The priority level of the processor is the priority of the program that is currently being executed. The
processor accepts interrupts only from devices that have priorities higher than its own. At the time the
execution of an interrupt-service routine for some device is started, the priority of the processor is raised to that
of the device. This action disables interrupts from devices at the same level of priority or lower. However,
interrupt requests from higher-priority devices will continue to be accepted.
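
The acceptance rule amounts to a single comparison. The sketch below is illustrative only; proc_priority stands for the priority field in the processor status word and dev_priority for the priority assigned to the requesting device's interrupt line.

/* Illustrative model of priority-based interrupt acceptance. */
static int proc_priority = 0;          /* priority of the running program */

/* A request is accepted only from a device whose priority is strictly
   higher than that of the program currently being executed. */
int accept_interrupt(int dev_priority)
{
    return dev_priority > proc_priority;
}

/* While the interrupt-service routine for a device runs, the processor
   priority is raised to the device's level, disabling requests from
   devices at the same or lower priority; it is restored on return. */
void enter_isr(int dev_priority, int *saved_priority)
{
    *saved_priority = proc_priority;
    proc_priority = dev_priority;
}

void exit_isr(int saved_priority)
{
    proc_priority = saved_priority;
}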
The processor's priority is usually encoded in a few bits of the processor status word. It can be changed by
program instructions that write into the PS. These are privileged instructions, which can be executed only
while the processor is running in the supervisor mode. The processor is in the supervisor mode only when
executing operating system routines. It switches to the user mode before beginning to execute application
programs. Thus, a user program cannot accidentally, or intentionally, change the priority of the processor and
disrupt the system's operation. An attempt to execute a privileged instruction while in the user mode leads to a
special type of interrupt called a privilege exception.


Simultaneous Requests
When two or more devices request an interrupt at the same time, the processor needs a mechanism to decide which request to service first. Polling the status registers of the I/O devices is the simplest such mechanism. In this case, priority is determined by
the order in which the devices are polled. When vectored interrupts are used, we must ensure that only one device
is selected to send its interrupt vector code. A widely used scheme is to connect the devices to form a daisy
chain. The interrupt-request line INTR is common to all devices. The interrupt-acknowledge line, INTA, is
connected in a daisy-chain fashion, such that the INTA signal propagates serially through the devices. When
several devices raise an interrupt request and the INTR line is activated, the processor responds by setting the
INTA line to 1. This signal is received by device 1. Device 1 passes the signal on to device 2 only if it does
not require any service. If device 1 has a pending request for interrupt, it blocks the INTA signal and proceeds
to put its identifying code on the data lines. Therefore, in the daisy-chain arrangement, the device that is electrically
closest to the processor has the highest priority. The second device along the chain has second highest priority, and so
on.
The main advantage of the scheme is that it allows the processor to accept interrupt requests from some devices
but not from others, depending upon their priorities. The two schemes may be combined to produce the more
general structure in Figure 4.8b. Devices are organized in groups, and each group is connected at a different priority
level. Within a group, devices are connected in a daisy chain. This organization is used in many computer systems.


CONTROLLING DEVICE REQUESTS


There are two independent mechanisms for controlling interrupt requests. At the device end, an interrupt-enable bit in
a control register determines whether the device is allowed to generate an interrupt request. At the processor end,
either an interrupt enable bit in the PS register or a priority structure determines whether a given interrupt request will
be accepted.

Example 4.3: Consider a processor that uses the vectored interrupt scheme, where the starting address of the interrupt-service routine is stored at memory location INTVEC. Interrupts are enabled by setting to 1 an interrupt-enable bit, IE, in the processor status word, which we assume is bit 9. A keyboard and a display unit connected to this processor have the status, control, and data registers shown in Figure 4.3.
Assume that at some point in a program called Main we wish to read an input line
starting at location LINE. To perform this operation using interrupts, we need to initialize the interrupt process.
This may be accomplished as follows:
1. Load the starting address of the interrupt-service routine in location INTVEC.
2. Load the address LINE in a memory location PNTR. The interrupt-service routine will use this location as a pointer to store the input characters in the memory.
3. Enable keyboard interrupts by setting bit 2 in register CONTROL to 1.
4. Enable interrupts in the processor by setting to 1 the IE bit in the processor status register, PS.

Once this initialization is completed, typing a character on the keyboard will cause an interrupt request to be
generated by the keyboard interface. The program being executed at that time will be interrupted and the
interrupt-service routine will be executed. This routine has to perform the following tasks:
1. Read the input character from the keyboard input data register. This will cause the interface circuit to remove
its interrupt request.
2. Store the character in the memory location pointed to by PNTR, and increment PNTR.
3. When the end of the line is reached, disable keyboard interrupts and inform program Main.
4. Return from interrupt.
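
The initialization steps and the interrupt-service routine just listed can be sketched in C. The bit positions given in the text (keyboard interrupt-enable as bit 2 of CONTROL, IE as bit 9 of the PS) are used below; everything else, including the register addresses, the helper names, treating INTVEC and PS as if they were ordinary memory locations, and the flag used to inform program Main, is an assumption made for the sketch.

#include <stdint.h>

typedef void (*isr_t)(void);

#define DATAIN   ((volatile uint8_t  *)0x4000)   /* assumed addresses */
#define CONTROL  ((volatile uint8_t  *)0x400C)
#define INTVEC   ((volatile isr_t    *)0x0080)
#define PS       ((volatile uint32_t *)0x0100)

#define KEN  (1u << 2)      /* keyboard interrupt-enable bit in CONTROL */
#define IE   (1u << 9)      /* interrupt-enable bit in the PS           */

static char *PNTR;              /* pointer used by the service routine  */
static volatile int line_done;  /* set to inform program Main           */

void keyboard_isr(void);

void init_keyboard_input(char *line)
{
    *INTVEC = keyboard_isr;     /* 1. starting address of the routine    */
    PNTR = line;                /* 2. pointer to the memory buffer       */
    line_done = 0;
    *CONTROL |= KEN;            /* 3. enable keyboard interrupts         */
    *PS |= IE;                  /* 4. enable interrupts in the processor */
}

void keyboard_isr(void)
{
    char ch = (char)*DATAIN;        /* 1. read character; clears the request */
    *PNTR++ = ch;                   /* 2. store character, advance PNTR      */
    if (ch == 0x0D) {               /* 3. end of line (Carriage Return)?     */
        *CONTROL &= (uint8_t)~KEN;  /*    disable keyboard interrupts        */
        line_done = 1;              /*    inform program Main                */
    }
    /* 4. the return from interrupt is performed by the hardware/compiler    */
}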
EXCEPTIONS
An interrupt is an event that causes the execution of one program to be suspended and the execution of another
program to begin. So far, we have dealt only with interrupts caused by requests received during I/O data
transfers. However, the interrupt mechanism is used in a number of other situations.
The term exception is often used to refer to any event that causes an interruption. Hence, I/O interrupts are
one example of an exception. We now describe a few other kinds of exceptions.
Recovery from Errors
Computers use a variety of techniques to ensure that all hardware components are operating properly. For
example, many computers include an error-checking code in

the main memory, which allows detection of errors in the stored data. If an error occurs, the control hardware
detects it and informs the processor by raising an interrupt.
The processor may also interrupt a program if it detects an error or an unusual condition while executing
the instructions of this program.
When exception processing is initiated as a result of such errors, the processor proceeds in exactly the
same manner as in the case of an I/O interrupt request. It suspends the program being executed and starts an
exception-service routine. This routine takes appropriate action to recover from the error, if possible, or to
inform the user about it. Recall that in the case of an I/O interrupt, the processor completes execution of the
instruction in progress before accepting the interrupt. However, when an interrupt is caused by an error,
execution of the interrupted instruction cannot usually be completed, and the processor begins exception
processing immediately.
Debugging
Another important type of exception is used as an aid in debugging programs. System software usually
includes a program called a debugger, which helps the programmer find errors in a program. The debugger uses
exceptions to provide two important facilities called trace and breakpoints. When a processor is operating in
trace mode, an exception occurs after execution of every instruction, using the debugging program as the
exception-service routine. The debugging program enables the user to examine the contents of registers, memory
locations, and so on. On return from the debugging program, the next instruction in the program being debugged is
executed, then the debugging program is activated again. The trace exception is disabled during the execution of the
debugging program.

Breakpoints provide a similar facility, except that the program being debugged is interrupted only at specific points
selected by the user. An instruction called Trap or Software-interrupt is usually provided for this purpose.
Privilege Exception
To protect the operating system of a computer from being corrupted by user programs, certain instructions can be
executed only while the processor is in the supervisor mode. These are called privileged instructions. For example,
when the processor is running in the user mode, it will not execute an instruction that changes the priority level of
the processor or that enables a user program to access areas in the computer memory that have been allocated to other
users. An attempt to execute such an instruction will produce a privilege exception, causing the processor to switch to
the supervisor mode and begin executing an appropriate routine in the operating system.


DIRECT MEMORY ACCESS


Data are transferred by executing instructions such as
Move DATAIN, RO
An instruction to transfer input or output data is executed only after the processor determines that the I/O device is ready. To do this, the processor either polls a status flag in the device interface or waits for the device to send an interrupt request. In either case, considerable overhead is incurred, because several program instructions must be executed for each data word transferred.
To transfer large blocks of data at high speed, an alternative approach is used. A special control unit may be
provided to allow transfer of a block of data directly between an external device and the main memory, without
continuous intervention by the processor. This approach is called direct memory access, or DMA.
DMA transfers are performed by a control circuit that is part of the I/O device interface. We refer to this circuit as a DMA controller. The DMA controller performs the functions that would normally be carried out by the
processor when accessing the main memory. For each word transferred, it provides the memory address and
all the bus signals that control data transfer. Since it has to transfer blocks of data, the DMA controller must
increment the memory address for successive words and keep track of the number of transfers.
Although a DMA controller can transfer data without intervention by the processor, its operation must be under
the control of a program executed by the processor. To initiate the transfer of a block of words, the processor
sends the starting address, the number of words in the block, and the direction of the transfer. On receiving this
information, the DMA controller proceeds to perform the requested operation. When the entire block has been
transferred, the controller informs the processor by raising an interrupt signal.
While a DMA transfer is taking place, the program that requested the transfer cannot continue, and the processor
can be used to execute another program. After the DMA transfer is completed, the processor can return to the
program that requested the transfer.
I/O operations are always performed by the operating system of the computer in response to a request from
an application program. The OS is also responsible for suspending the execution of one program and starting
another. When the transfer is completed, the DMA controller informs the processor by sending an interrupt
request.

Two registers are used for storing the starting address and the word count. The third register contains status and
control flags. The R/W bit determines the direction of the transfer. When this bit is set to 1 by a program instruction,
the controller performs a read operation, that is, it transfers data from the memory to the I/O device. Otherwise, it
performs a write operation. When the controller has completed transferring a block of data and is ready to receive
another command, it sets the Done flag to 1. Bit 30 is the Interrupt-enable flag, IE. When this flag is set to 1, it causes
the controller to raise an interrupt after it has completed transferring a block of data. Finally, the controller sets the
IRQ bit to 1 when it has requested an interrupt.
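
A register-level view of one such controller might look like the following C sketch. Only the position of IE (bit 30) is stated in the text; the register layout, the other bit positions, and the function names are assumptions, and the programming sequence simply mirrors the description above (starting address, word count, then the control flags).

#include <stdint.h>

/* Assumed register layout of one DMA channel (illustrative only). */
struct dma_channel {
    volatile uint32_t start_addr;    /* starting memory address          */
    volatile uint32_t word_count;    /* number of words in the block     */
    volatile uint32_t status_ctrl;   /* status and control flags         */
};

/* Flag positions: IE (bit 30) is from the text; the others are assumed. */
#define DMA_RW    (1u << 0)    /* 1 = read, i.e. memory -> I/O device    */
#define DMA_IE    (1u << 30)   /* raise an interrupt when done           */
#define DMA_DONE  (1u << 31)   /* set by the controller when finished    */
#define DMA_IRQ   (1u << 29)   /* set when the controller requests IRQ   */

/* Program one channel to transfer a block of words. */
void dma_start(struct dma_channel *ch, uint32_t addr, uint32_t count,
               int read_from_memory)
{
    ch->start_addr  = addr;
    ch->word_count  = count;
    ch->status_ctrl = DMA_IE | (read_from_memory ? DMA_RW : 0u);
}

/* The processor may poll Done instead of waiting for the interrupt. */
int dma_done(const struct dma_channel *ch)
{
    return (ch->status_ctrl & DMA_DONE) != 0;
}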


A DMA controller connects a high-speed network to the computer bus. The disk controller, which controls two
disks, also has DMA capability and provides two DMA channels. It can perform two independent DMA operations, as
if each disk had its own DMA controller. The registers needed to store the memory address, the word count, and so on
are duplicated, so that one set can be used with each device.
To start a DMA transfer of a block of data from the main memory to one of the disks, a program writes the
address and word count information into the registers of the corresponding channel of the disk controller. It also provides
the disk controller with information to identify the data for future retrieval. The DMA controller proceeds independently
to implement the specified operation. When the DMA transfer is completed, this fact is recorded in the status and
control register of the DMA channel by setting the Done bit.
Memory accesses by the processor and the DMA controllers are interwoven. Requests by DMA devices for using the
bus are always given higher priority than processor requests. Among different DMA devices, top priority is given to
high-speed peripherals such as a disk, a high-speed network interface, or a graphics display device. Since the
processor originates most memory access cycles, the DMA controller can be said to "steal" memory cycles from
the processor. Hence, this interweaving technique is usually called cycle stealing. Alternatively, the DMA
controller may be given exclusive access to the main memory to transfer a block of data without interruption.
This is known as block or burst mode.
BUS ARBITRATION
The device that is allowed to initiate data transfers on the bus at any given time is called the bus master. When the
current master relinquishes control of the bus, another device can acquire this status. Bus arbitration is the process by
which the next device to become the bus master is selected and bus mastership is transferred to it. The selection of the
bus master must take into account the needs of various devices by establishing a priority system for gaining access to
the bus.
There are two approaches to bus arbitration: Centralized and Distributed.
In centralized arbitration, a single bus arbiter performs the required arbitration.
In distributed arbitration, all devices participate in the selection of the next bus master.
CENTRALIZED ARBITRATION
The bus arbiter may be the processor or a separate unit connected to the bus. In this case, the processor is normally
the bus master unless it grants bus mastership to one of the DMA controllers. A DMA controller indicates that it
needs to become the bus master by activating the Bus-Request line, BR.


The signal on the Bus-Request line is the logical OR of the bus requests from all the devices connected to it. When Bus-Request is activated, the processor activates the Bus-Grant signal, BG1, indicating to the DMA controllers that they may use the bus when it becomes free. This signal is propagated through the controllers in a daisy-chain fashion: if DMA controller 1 is requesting the bus, it blocks the grant from reaching the devices downstream; otherwise, it passes the grant downstream by asserting BG2. The
current bus master indicates to all devices that it is using the bus by activating another open-collector line called
Bus-Busy, BBSY. Hence, after receiving the Bus-Grant signal, a DMA controller waits for Bus-Busy to become
inactive, then assumes mastership of the bus. At this time, it activates Bus-Busy to prevent other devices from using
the bus at the same time.

The timing diagram shows the sequence of events for the devices as DMA controller 2 requests and acquires bus
mastership and later releases the bus. During its tenure as the bus master, it may perform one or more data transfer
operations, depending on whether it is operating in the cycle stealing or block mode. After it releases the bus, the
processor resumes bus mastership.
Several pairs of Bus-Request and Bus-Grant lines may be provided, in an arrangement similar to that used for multiple interrupt requests. This
arrangement leads to considerable flexibility in determining the order in which requests from different devices are
serviced. The arbiter circuit ensures that only one request is granted at any given time, according to a predefined
priority scheme. For example, if there are four bus request lines, BR1 through BR4, a fixed priority scheme may be
used in which BR1 is given top priority and BR4 is given lowest priority.
Distributed Arbitration
Distributed arbitration means that all devices waiting to use the bus have equal responsibility in carrying out the
arbitration process, without using a central arbiter. Each device on the bus is assigned a 4-bit identification number.
When one or more devices request the bus, they assert the Start-Arbitration signal and place their 4-bit ID
numbers on four open-collector lines, ARB0 through ARB3. A winner is selected as a result of the interaction among
the signals transmitted over these lines by all contenders. The net outcome is that the code on the four lines represents
the request that has the highest ID number.


Assume that two devices, A and B, having ID numbers 5 and 6, respectively, are requesting the use of
the bus. Device A transmits the pattern 0101, and device B transmits the pattern 0110. The code seen by both
devices is 0111. Each device compares the pattern on the arbitration lines to its own ID, starting from the
most significant bit. If it detects a difference at any bit position, it disables its drivers at that bit position and
for all lower-order bits. It does so by placing a 0 at the input of these drivers. In the case of our example,
device A detects a difference on line ARB1. Hence, it disables its drivers on lines ARB1 and ARB0. This causes the pattern on the arbitration lines to change to 0110, which means that B has won the contention. Note that, since the code on the priority lines is 0111 for a short period, device B may temporarily disable its driver on line ARB0. However, it will enable this driver again once it sees a 0 on line ARB1 resulting from
the action by device A.
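
The example can be checked with a small simulation. The C sketch below models the open-collector lines as a wired-OR of whatever each contender is still driving; a device backs off from the first bit position where its own bit is 0 but the line carries a 1 (the only kind of difference it can observe), as in the A/B example. The code is illustrative only, not a description of the actual arbitration hardware.

#include <stdio.h>
#include <stdint.h>

/* What a device drives onto ARB3..ARB0, given its 4-bit ID and the pattern
   currently seen on the lines: starting from the most significant bit, it
   keeps driving its own bits until it finds a position where its bit is 0
   but the line is 1 (a higher contender); from there down it backs off.  */
static uint8_t drive(uint8_t id, uint8_t line)
{
    uint8_t out = 0;
    for (int bit = 3; bit >= 0; bit--) {
        int mine = (id >> bit) & 1;
        int bus  = (line >> bit) & 1;
        if (mine == 0 && bus == 1)
            break;                        /* disable this and lower bits */
        out |= (uint8_t)(mine << bit);
    }
    return out;
}

int main(void)
{
    uint8_t ids[2]    = { 5, 6 };         /* devices A (0101) and B (0110) */
    uint8_t driven[2] = { 5, 6 };         /* initially each drives its ID  */

    for (int round = 0; round < 8; round++) {   /* iterate until it settles */
        uint8_t line = driven[0] | driven[1];   /* open-collector wired-OR  */
        driven[0] = drive(ids[0], line);
        driven[1] = drive(ids[1], line);
    }

    printf("winning code = %u\n", driven[0] | driven[1]);   /* prints 6 */
    return 0;
}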
Decentralized arbitration has the advantage of offering higher reliability, because operation of the bus is not dependent on any single device.

BUSES
The processor, main memory, and I/O devices can be interconnected by means of a common bus whose primary function is to provide a communications path for the transfer of data. The
bus includes the lines needed to support interrupts and arbitration.
The bus lines used for transferring data may be grouped into three types: data, address, and
control lines. The control signals specify whether a read or a write operation is to be performed. Usually, a
single R/W line is used. It specifies Read when set to 1 and Write when set to 0. When several operand
sizes are possible, such as byte, word, or long word, the required size of data is indicated.
The bus control signals also carry timing information. They specify the times at which the processor and
the I/O devices may place data on the bus or receive data from the bus. A variety of schemes have been
devised for the timing of data transfers over a bus. These can be broadly classified as either synchronous or
asynchronous schemes.
In any data transfer operation, one device plays the role of a master. This is the device that initiates data
transfers by issuing read or write commands on the bus; hence, it may be called an initiator. Normally, the
processor acts as the master, but other devices with DMA capability may also become bus masters. The
device addressed by the master is referred to as a slave or target.

SYNCHRONOUS BUS
In a synchronous bus, all devices derive timing information from a common clock line. Equally spaced pulses on this
line define equal time intervals. In the simplest form of a synchronous bus, each of these intervals constitutes a bus cycle
during which one data transfer can take place. The address and data lines in this and subsequent figures are shown as
high and low at the same time. This is a common convention indicating that some lines are high and some low,
depending on the particular address or data pattern being transmitted. The crossing points indicate the times at which
these patterns change. A signal line in an indeterminate or high impedance state is represented by an intermediate
level half-way between the low and high signal levels.


Let us consider the sequence of events during an input (read) operation. At time t0, the master places the device address on the address lines and sends an appropriate command on the control lines. In this case, the command will indicate an input operation and specify the length of the operand to be read, if necessary. Information travels over the bus at a speed determined by its physical and electrical characteristics. The clock pulse width, t1 − t0, must be longer than the maximum propagation delay between two devices connected to the bus. It also has to be long enough to allow all devices to decode the address and control signals so that the addressed device (the slave) can respond at time t1. It is important that slaves take no action or place any data on the bus before t1. The information on the bus is unreliable during the period t0 to t1 because signals are changing state. The addressed slave places the requested input data on the data lines at time t1.

At the end of the clock cycle, at time t2, the master strobes the data on the data lines into its input buffer. In this context, "strobe" means to capture the values of the data at a given instant and store them into a buffer. For data to be loaded correctly into any storage device, such as a register built with flip-flops, the data must be available at the input of that device for a period greater than the setup time of the device. Hence, the period t2 − t1 must be greater than the maximum propagation time on the bus plus the setup time of the input buffer register of the master.
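
As a hypothetical numerical illustration (the values are assumed, not taken from the text): if the maximum propagation delay on the bus is 4 ns and the setup time of the master's input buffer register is 2 ns, then t2 − t1 must exceed 4 + 2 = 6 ns. With t1 − t0 sized to cover propagation delay plus address decoding, say another 6 ns or more, the bus clock period could not be made shorter than about 12 ns, corresponding to a clock rate of roughly 80 MHz.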
The exact times at which signals actually change state are somewhat different from those shown because of
propagation delays on bus wires and in the circuits of the devices. A detailed timing diagram shows two views of each signal, except
the clock. Because signals take time to travel from one device to another, a given signal transition is seen by
different devices at different times. One view shows the signal as seen by the master and the other as seen
by the slave. We assume that the clock changes are seen at the same time by all devices on the bus. System
designers spend considerable effort to ensure that the clock signal satisfies this condition.

ASYNCHRONOUS BUS
An alternative scheme for controlling data transfers on the bus is based on the use of a handshake between
the master and the slave. The common clock is replaced by two timing control lines, Master-ready and
Slave-ready. The first is asserted by the master to indicate that it is ready for a transaction, and the second is a
response from the slave.
In principle, a data transfer controlled by a handshake protocol proceeds as follows. The master places the
address and command information on the bus. Then it indicates

to all devices that it has done so by activating the Master-ready line. This causes all devices on the bus to
decode the address. The selected slave performs the required operation and informs the processor it has done
so by activating the Slave-ready line. The master waits for Slave-ready to become asserted before it removes
its signals from the bus. In the case of a read operation, it also strobes the data into its input buffer.


The detailed timing of an input transfer proceeds as follows. At t0 the master places the address and command information on the bus; at t1, after allowing for bus skew and address decoding, it asserts Master-ready; at t2 the selected slave places the requested data on the data lines and asserts Slave-ready; and at t3 the Slave-ready signal arrives at the master, which strobes the data into its input buffer and drops Master-ready to 0. The remaining events are:
t4 — The master removes the address and command information from the bus. The delay between t3 and t4 is again intended to allow for bus skew. Erroneous addressing may take place if the address, as seen by some device on the bus, starts to change while the Master-ready signal is still equal to 1.
t5 — When the device interface receives the 1 to 0 transition of the Master-ready signal, it removes the data and the Slave-ready signal from the bus. This completes the input transfer.
In the case of an output operation, the master places the output data on the data lines at the same time that it transmits the address and command information. The selected slave strobes the data into its output buffer when it receives the Master-ready signal and indicates that it has done so by setting the Slave-ready signal to 1. The remainder of the cycle is identical to the input operation.
In the timing diagrams, it is assumed that the master compensates for bus skew and address decoding delay. It introduces the delays from t0 to t1 and from t3 to t4 for this purpose. If this delay provides sufficient time for the I/O device interface to decode the address, the interface circuit can use the Master-ready signal directly to gate other signals to or from the bus.
The handshake signals are fully interlocked. A change of state in one signal is followed by a change in the other signal. Hence this scheme is known as a full handshake. It provides the highest degree of flexibility and reliability.
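
The ordering of the handshake events can be mimicked with a small single-threaded C sketch. The shared variables stand for the bus lines; this is only an illustration of the interlocked sequence, not of real bus hardware, and all names and values are invented for the example.

#include <stdio.h>

/* "Bus lines" shared by the master and the slave in this toy model. */
static int master_ready, slave_ready;
static unsigned int address_lines, data_lines;

/* The slave's response to the current state of the lines. */
static void slave_respond(void)
{
    if (master_ready) {           /* address and command are now valid    */
        data_lines = 0xABu;       /* place the requested data on the bus  */
        slave_ready = 1;          /* tell the master the data is ready    */
    } else {                      /* Master-ready has returned to 0       */
        data_lines = 0;           /* remove the data and Slave-ready      */
        slave_ready = 0;
    }
}

int main(void)
{
    address_lines = 0x40u;        /* master places address and command    */
    master_ready = 1;             /* ... then asserts Master-ready        */
    slave_respond();              /* slave decodes the address, responds  */

    unsigned int captured = data_lines;  /* master strobes the data       */
    master_ready = 0;             /* master removes address and command   */
    slave_respond();              /* slave removes data and Slave-ready   */

    printf("read 0x%X, slave_ready = %d\n", captured, slave_ready);
    return 0;
}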

INTERFACE CIRCUITS
An I/O interface consists of the circuitry required to connect an I/O device to a computer bus. On one side of the
interface we have the bus signals for address, data, and control. On the other side we have a data path with its

associated controls to transfer data between the interface and the I/O device. This side is called a port, and it
can be classified as either a parallel or a serial port. A parallel port transfers data in the form of a number of
bits, typically 8 or 16, simultaneously to or from the device. A serial port transmits and receives data one bit at a
time. Communication with the bus is the same for both formats; the conversion from the parallel to the serial
format, and vice versa, takes place inside the interface circuit.
In the case of a parallel port, the connection between the device and the computer uses a multiple-pin
connector and a cable with as many wires, typically arranged in a flat configuration. The circuits at either end
are relatively simple, as there is no need to convert between parallel and serial formats. This arrangement is
suitable for devices that are physically close to the computer. Before discussing a specific interface circuit
example, let us recall the functions of an I/O interface.
An I/O interface does the following:
1. Provides a storage buffer for at least one word of data (or one byte, in the case of byte-oriented devices)
2. Contains status flags that can be accessed by the processor to determine whether the buffer is full (for
input) or empty (for output)
3. Contains address-decoding circuitry to determine when it is being addressed by the processor
4. Generates the appropriate timing signals required by the bus control scheme
5. Performs any format conversion that may be necessary to transfer data between the bus and the I/O device,
such as parallel-serial conversion in the case of a serial port

PARALLEL PORT
First, we describe circuits for an 8-bit input port and an 8-bit output port. Then, we combine the two circuits to
show how the interface for a general-purpose 8-bit parallel port can be designed. We assume that the interface
circuit is connected to a 32-bit processor that uses memory-mapped I/O and the asynchronous bus protocol
depicted in Figures 4.26 and 4.27.

Figure 4.28 shows the hardware components needed for connecting a keyboard to a processor. A typical
keyboard consists of mechanical switches that are normally open. When a key is pressed, its switch closes
and establishes a path for an electrical signal. This signal is detected by an encoder circuit that generates
the ASCII code for the corresponding character. A difficulty with such push-button switches is that
the contacts bounce when a key is pressed. Although bouncing may last only one or two milliseconds,
this is long enough for the computer to observe a single pressing of a key as several distinct electrical

events; this single pressing could be erroneously interpreted as the key being pressed and released rapidly
several times. The effect of bouncing must be eliminated.
The output of the encoder consists of the bits that represent the encoded character and one control
signal called Valid, which indicates that a key is being pressed. This information is sent to the interface
circuit, which contains a data register, DATAIN, and a status flag, SIN. When a key is pressed, the
Valid signal changes from 0 to 1, causing the ASCII code to be loaded into DATAIN and SIN to be set to
1. The status flag SIN is cleared to 0 when the processor reads the contents of the DATAIN register. The
interface circuit is connected to an asynchronous bus on which transfers are controlled using the
handshake signals Master-ready and Slave-ready. The third control line, R/W, distinguishes read and
write transfers.
Figure 4.29 shows a suitable circuit for an input interface. The output lines of the DATAIN register are
connected to the data lines of the bus by means of three-state drivers, which are turned on when the
processor issues a read instruction with the address that selects this register. The SIN signal is generated
by a status flag circuit. This signal is also sent to the bus through a three-state driver. It is connected to
bit D0, which means it will appear as bit 0 of the status register. Other bits of this register do not
contain valid information. An address decoder is used to select the input interface when the high-order
31 bits of an address correspond to any of the addresses assigned to this interface. Address bit A0 determines whether the status or the data register is to be read when the Master-ready signal is active.
The control handshake is accomplished by activating the Slave-ready signal when either Read-status or
Read-data is equal to 1.

A possible implementation of the status flag circuit is shown in Figure 4.30. An edge-triggered D flip-flop is set
to 1 by a rising edge on the Valid signal line. This event changes the state of the NOR latch such that SIN is set to
1. The state of this latch must not change while SIN is being read by the processor. Hence, the circuit ensures that

SIN can be set only while Master-ready is equal to 0. Both the flip-flop and the latch are reset to 0 when Read-
data is set to 1 to read the DATAIN register.

The input and output interfaces just described can be combined into a single interface. In this case, the overall interface is selected by the high-order 30 bits of the address. Address bits A1 and A0 select one of the three addressable locations in the interface, namely, the two data registers and the status register. The status register contains the flags SIN and SOUT in bits 0 and 1, respectively. Since such locations in I/O interfaces are often referred to as registers, we have used the labels RS1 and RS0 (for Register Select) to denote the two inputs that determine the register being selected.

In the more general parallel interface of Figure 4.34, the Ready and Accept lines are the handshake control lines on the processor bus side, and hence would be connected to Master-ready and Slave-ready. The input signal My-address should be connected to the output of an address decoder that recognizes the address assigned to the interface. There are three register select lines, allowing up to eight registers in the interface: input and output data, data direction, and control and status registers for various modes of operation. An interrupt request output, INTR, is also provided. It should be connected to the interrupt-request line on the computer bus.

Parallel interface circuits that have the features illustrated in Figure 4.34 are often encountered in practice. An example of their use in an embedded system is described in Chapter 9. Instead of having just one port for connecting an I/O device, two or more ports may be provided.
A timing diagram for an output operation is shown in Figure 4.36. The processor sends the data at the same
time as the address, in clock cycle 1. The Timing logic sets Go to 1 at the beginning of clock cycle 2, and the
rising edge of that signal loads the output data into register DATAOUT. An input operation that reads the
status register follows a similar timing pattern. The Timing logic block moves to the Respond state directly
from the Idle state because the requested data are available in a register and can be transmitted immediately.
As a result, the transfer is one clock cycle shorter than that shown in Figure 4.25. In a situation where some
time is needed before the data becomes available, the state machine should enter a wait state first and move to
Respond only when the data are ready.
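For the output direction of the combined interface, a program would test SOUT before writing to DATAOUT, mirroring the input example given earlier. The placement of SOUT in bit 1 of the status register follows the description above; the specific numeric addresses below are assumptions made only for the sketch.

```c
#include <stdint.h>

/* Hypothetical addresses for the combined parallel interface
   (DATAIN, DATAOUT, and a shared status register). */
#define PAR_STATUS   ((volatile uint8_t *)0x4010u)  /* SIN = bit 0, SOUT = bit 1 */
#define PAR_DATAOUT  ((volatile uint8_t *)0x4012u)

#define SOUT_MASK 0x02u

/* Wait until the output buffer is free, then write one byte.
   SOUT = 1 means DATAOUT may be written; the write clears SOUT
   until the device has taken the data. */
void write_parallel_byte(uint8_t b)
{
    while ((*PAR_STATUS & SOUT_MASK) == 0)
        ;                        /* poll SOUT until DATAOUT is available */
    *PAR_DATAOUT = b;
}
```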

SERIAL PORT
A serial port is used to connect the processor to I/O devices that require transmission of data one bit at a
time. The key feature of an interface circuit for a serial port is that it is capable of communicating in a bit-
serial fashion on the device side and in a bit-parallel fashion on the bus side. The transformation between the
parallel and serial formats is achieved with shift registers that have parallel access capability. The interface (Figure 4.37) includes the familiar DATAIN and DATAOUT registers. The input shift register accepts bit-serial input from the I/O
device. When all 8 bits of data have been received, the contents of this shift register are loaded in parallel into
the DATAIN register. Similarly, output data in the DATAOUT register are loaded into the output shift
register, from which the bits are shifted out and sent to the I/O device.

The part of the interface that deals with the bus is the same as in the parallel interface described earlier. The
status flags SIN and SOUT serve similar functions. The SIN flag is set to 1 when new data are loaded in
DATAIN; it is cleared to 0 when the processor reads the contents of DATAIN. As soon as the data are
transferred from the input shift register into the DATAIN register, the shift register can start accepting the
next 8-bit character from the I/O device. The SOUT flag indicates whether the output buffer is available. It is
cleared to 0 when the processor writes new data into the DATAOUT register and set to 1 when data are
transferred from DATAOUT into the output shift register.

The double buffering used in the input and output paths is important. A simpler interface could be
implemented by turning DATAIN and DATAOUT into shift registers and eliminating the shift registers in
Figure 4.37. However, this would impose awkward restrictions on the operation of the I/O device; after
receiving one character from the serial line, the device cannot start receiving the next character until the
processor reads the contents of DATAIN. Thus, a pause would be needed between two characters to allow
the processor to read the input data. With the double buffer, the transfer of the second character can begin as
soon as the first character is loaded from the shift register into the DATAIN register. Thus, provided the
processor reads the contents of DATAIN before the serial transfer of the second character is completed, the
interface can receive a continuous stream of serial data. An analogous situation occurs in the output path of
the interface.
Because it requires fewer wires, serial transmission is convenient for connecting devices that are physically far away from the computer. The speed of transmission, often given as a bit rate, depends on the nature of the devices connected. To accommodate a range of devices, a serial interface must be able to use a range of clock speeds.
Because serial interfaces play a vital role in connecting I/O devices, several widely used standards have been developed. A standard interface circuit of this kind is known as a Universal Asynchronous Receiver Transmitter (UART); it is intended for use with low-speed serial devices.
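To use a range of clock speeds, a UART typically derives its transmit and receive timing by dividing a reference clock, and many common UARTs sample each bit 16 times. The sketch below computes such a divisor; the 16x oversampling factor and the example clock value are typical of widely used parts (for example, 16550-style UARTs) but are assumptions here, not part of the interface described above.

```c
#include <stdint.h>

/* Compute the clock divisor a 16x-oversampling UART would need to
   approximate the requested bit rate (a common, but assumed, scheme). */
static uint32_t uart_divisor(uint32_t ref_clock_hz, uint32_t baud)
{
    return (ref_clock_hz + (8u * baud)) / (16u * baud);  /* rounded to nearest */
}

/* Example: a 1.8432 MHz reference clock and 9600 bits/s give
   uart_divisor(1843200, 9600) == 12, the familiar 16550-style setting. */
```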

STANDARD I/O INTERFACES


A typical personal computer, for example, includes a printed circuit board called the motherboard. This
board houses the processor chip, the main memory, and some I/O interfaces. It also has a few connectors into
which additional interfaces can be plugged.
The processor bus is the bus defined by the signals on the processor chip itself. Devices that require a very
high speed connection to the processor, such as the main memory, may be connected directly to this bus. The
motherboard usually provides another bus that can support more devices. The two buses are interconnected
by a circuit, which we will call a bridge, that translates the signals and protocols of one bus into those of the
other. Devices connected to the expansion bus appear to the processor as if they were connected directly to
the processor's own bus. The only difference is that the bridge circuit introduces a small delay in data
transfers between the processor and those devices.
The design of the processor bus is also dependent on the electrical characteristics of the processor chip, such as its clock speed. The
expansion bus is not subject to these limitations, and therefore it can use a standardized signaling scheme. A
number of standards have been developed. Some standards have been developed through industrial
cooperative efforts, even among competing companies driven by their common self-interest in having
compatible products.
Three widely used bus standards are PCI (Peripheral Component Interconnect), SCSI (Small Computer System Interface), and USB (Universal Serial Bus). The PCI standard defines an expansion bus on the motherboard. SCSI and USB are used for connecting additional devices, both inside and outside the computer box. The SCSI bus is a high-speed parallel bus intended for devices such as disks and video displays. The USB uses serial transmission to suit the needs of equipment ranging from keyboards to game controls to Internet connections. A typical system also includes a connection to an Ethernet, a widely used local area network that provides a high-speed connection among computers in a building or a university campus.

A given computer may use more than one bus standard. A typical Pentium computer has both a PCI bus
and an ISA bus, thus providing the user with a wide range of devices to choose from.

PERIPHERAL COMPONENT INTERCONNECT (PCI) BUS

The PCI bus is a good example of a system bus that grew out of the need for standardization. It supports
the functions found on a processor bus but in a standardized format that is independent of any particular
processor. Devices connected to the PCI bus appear to the processor as if they were connected directly to the
processor bus. They are assigned addresses in the memory address space of the processor.

The PCI follows a sequence of bus standards that were used primarily in IBM PCs. Early PCs used the 8-
bit XT bus, whose signals closely mimicked those of Intel's 80x86 processors. Later, the 16-bit bus used on
the PC AT computers became known as the ISA bus. Its extended 32-bit version is known as the EISA bus.
The PCI was developed as a low-cost bus that is truly processor independent. Its design anticipated a rapidly
growing demand for bus bandwidth to support high-speed disks and graphic and video devices, as well as the
specialized needs of multiprocessor systems. As a result, the PCI is still popular as an industry standard
almost a decade after it was first introduced in 1992.
An important feature that the PCI pioneered is a plug-and-play capability for connecting I/O devices. To
connect a new device, the user simply connects the device interface board to the bus. The software takes care
of the rest.

Data Transfer
Data are transferred between the cache and the main memory in bursts of several words each. The words involved in such a transfer are stored at successive memory locations. When the processor (actually the cache controller) specifies an address and requests a read operation from the main memory, the memory responds by sending a sequence of data words starting at that address. Similarly, during a write operation, the processor sends a memory address followed by a sequence of data words, to be written in successive memory locations starting at that address. The PCI is designed primarily to support this mode of operation. A read or a write operation involving a single word is simply treated as a burst of length one.
The bus supports three independent address spaces: memory, I/O, and configuration. The first two are self-explanatory. The I/O address space is intended for use with processors, such as the Pentium, that have a separate I/O address space.
The address is needed only long enough for the slave to be selected. The slave can store the address in its
internal buffer. Thus, the address is needed on the bus for one clock cycle only, freeing the address lines to be
used for sending data in subsequent clock cycles. The result is a significant cost reduction because the
number of wires on a bus is an important cost factor. This approach is used in the PCI bus.

At any given time, one device is the bus master. It has the right to initiate data transfers by issuing read and
write commands. A master is called an initiator in PCI terminology. This is either a processor or a DMA
controller. The addressed device that responds to read and write commands is called a target.
Device Configuration
When an I/O device is connected to a computer, several actions are needed to configure both the device and
the software that communicates with it. A typical device interface card for an ISA bus, for example, has a
number of jumpers or switches that have to be set by the user to select certain options. Once the device is
connected, the software needs to know the address of the device. It may also need to know relevant device
characteristics, such as the speed of the transmission link, whether parity bits are used, and so on.

The PCI simplifies this process by incorporating in each I/O device interface a small configuration ROM
memory that stores information about that device. The configuration ROMs of all devices are accessible in
the configuration address space. The PCI initialization software reads these ROMs whenever the system is
powered up or reset. In each case, it determines whether the device is a printer, a keyboard, an Ethernet
interface, or a disk controller. It can further learn about various device options and characteristics.
Devices are assigned addresses during the initialization process. This means that during the bus
configuration operation, devices cannot be accessed based on their address, as they have not yet been
assigned one. Hence, the configuration address space uses a different mechanism.
The configuration software scans all 21 locations in the configuration address space to identify which
devices are present. Each device may request an address in the I/O space or in the memory space. The device
is then assigned an address by writing that address into the appropriate device register. The configuration
software also sets such parameters as the device interrupt priority. The PCI bus has four interrupt-request
lines. By writing into a device configuration register, the software instructs the device as to which of these
lines it can use to request an interrupt. If a device requires initialization, the initialization code is stored in a
ROM in the device interface. The PCI software reads this code and executes it to perform the required
initialization.
This process relieves the user from having to be involved in the configuration process. The user simply
plugs in the interface board and turns on the power. The software does the rest. The device is ready to use.
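On x86 PCs, one long-standing way for software to reach the PCI configuration address space is through two I/O ports, CONFIG_ADDRESS (0xCF8) and CONFIG_DATA (0xCFC). The sketch below shows how initialization software might scan for devices by reading the vendor ID of each device on a bus; a vendor ID of 0xFFFF means no device responded. The port-access helpers outl/inl are assumed to be provided by the platform, so treat this as a sketch of the mechanism rather than portable application code.

```c
#include <stdint.h>

/* Platform-supplied port I/O helpers (assumed to exist). */
extern void     outl(uint16_t port, uint32_t value);
extern uint32_t inl(uint16_t port);

#define CONFIG_ADDRESS 0xCF8u
#define CONFIG_DATA    0xCFCu

/* Read one 32-bit register from the configuration space of
   bus/device/function at the given (dword-aligned) offset. */
static uint32_t pci_config_read(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t offset)
{
    uint32_t addr = (1u << 31)            /* enable bit */
                  | ((uint32_t)bus << 16)
                  | ((uint32_t)dev << 11)
                  | ((uint32_t)fn  << 8)
                  | (offset & 0xFCu);
    outl(CONFIG_ADDRESS, addr);
    return inl(CONFIG_DATA);
}

/* Scan bus 0 and report which device slots are populated. */
void pci_scan_bus0(void (*report)(uint8_t dev, uint16_t vendor, uint16_t device))
{
    for (uint8_t dev = 0; dev < 32; dev++) {
        uint32_t id = pci_config_read(0, dev, 0, 0x00);   /* vendor/device IDs */
        uint16_t vendor = (uint16_t)(id & 0xFFFFu);
        if (vendor != 0xFFFFu)                            /* 0xFFFF: empty slot */
            report(dev, vendor, (uint16_t)(id >> 16));
    }
}
```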
Electrical Characteristics
The PCI bus has been defined for operation with either a 5- or 3.3-V power supply. The motherboard may
be designed to operate with either signaling system. Connectors on expansion boards are designed to ensure
that they can be plugged only into a compatible motherboard.

SCSI Bus

The acronym SCSI stands for Small Computer System Interface. It refers to a standard bus defined by the
American National Standards Institute (ANSI) under the designation X3.131 [2]. In the original
specifications of the standard, devices such as disks are connected to a computer via a 50-wire cable, which
can be up to 25 meters in length and can transfer data at rates up to 5 megabytes/s.

The SCSI bus standard has undergone many revisions, and its data transfer capability has increased very
rapidly, almost doubling every two years. SCSI-2 and SCSI-3 have been defined, and each has several
options. A SCSI bus may have eight data lines, in which case it is called a narrow bus and transfers data one
byte at a time. Alternatively, a wide SCSI bus has 16 data lines and transfers data 16 bits at a time. There are
also several options for the electrical signaling scheme used. The bus may use single-ended transmission
(SE), where each signal uses one wire, with a common ground return for all signals. In another option,
differential signaling is used, where a separate return wire is provided for each signal.
The SCSI connector may have 50, 68, or 80 pins. The maximum transfer rate in commercial devices that
are currently available varies from 5 megabytes/s to 160 megabytes/s. The most recent version of the
standard is intended to support transfer rates up to 320 megabytes/s, and 640 megabytes/s is anticipated a
little later. The maximum transfer rate on a given bus is often a function of the length of the cable and the
number of devices connected, with higher rates for a shorter cable and fewer devices.
Devices connected to the SCSI bus are not part of the address space of the processor in the same way as
devices connected to the processor bus. The SCSI bus is connected to the processor bus through a SCSI
controller. This controller uses DMA to transfer data packets from the main memory to the device, or vice
versa. A packet may contain a block of data, commands from the processor to the device, or status
information about the device.
A controller connected to a SCSI bus is one of two types — an initiator or a target. An initiator has the ability to select a particular target and to send commands specifying the operations to be performed. Clearly, the controller on the processor side, such as the SCSI controller, must be able to operate as an initiator. The disk controller operates as a target. It carries out the commands it receives from the initiator. The initiator establishes a logical connection with the intended target.
Once this connection has been established, it can be suspended and restored as needed to transfer
commands and bursts of data. While a particular connection is suspended, other devices can use the bus to
transfer information. This ability to overlap data transfer requests is one of the key features of the SCSI bus
that leads to its high performance.
Data transfers on the SCSI bus are always controlled by the target controller. To send a command to a
target, an initiator requests control of the bus and, after winning arbitration, selects the controller it wants to
communicate with and hands control of the bus over to it. Then the controller starts a data transfer operation
to receive a command from the initiator.
Let us examine a complete disk read operation as an example. Although we describe the initiator controller as taking certain actions, it should be clear that it performs these actions after receiving appropriate commands from the processor. Assume that the processor wishes to read a block of data from a disk drive and that these data are stored in two disk sectors that are not contiguous. The processor sends a command to the SCSI controller, which causes the following sequence of events to take place:
1. The SCSI controller, acting as an initiator, contends for control of the bus.
2. When the initiator wins the arbitration process, it selects the target controller and hands over control of
the bus to it.
3. The target starts an output operation (from initiator to target); in response to this, the initiator sends a
command specifying the required read operation.
4. The target, realizing that it first needs to perform a disk seek operation, sends a message to the initiator
indicating that it will temporarily suspend the connection between them. Then it releases the bus.
5. The target controller sends a command to the disk drive to move the read head to the first sector
involved in the requested read operation. Then, it reads the data stored in that sector and stores them in a data
buffer. When it is ready to begin transferring data to the initiator, the target requests control of the bus. After it
wins arbitration, it reselects the initiator controller, thus restoring the suspended connection.
6. The target transfers the contents of the data buffer to the initiator and then suspends the connection
again. Data are transferred either 8 or 16 bits in parallel, depending on the width of the bus.

7. The target controller sends a command to the disk drive to perform another seek operation. Then, it
transfers the contents of the second disk sector to the initiator, as before. At the end of this transfer, the logical
connection between the two controllers is terminated.
8. As the initiator controller receives the data, it stores them into the main memory using the DMA
approach.
9. The SCSI controller sends an interrupt to the processor to inform it that the requested operation has been
completed.
The SCSI bus standard defines a wide range of control messages that can be exchanged between the
controllers to handle different types of I/O devices. Messages are also defined to deal with various error or
failure conditions that might arise during device operation or data transfer.
Bus Signals
For simplicity we show the signals for a narrow bus (8 data lines). Note that all signal names are preceded
by a minus sign. This indicates that the signals are active, or that a data line is equal to 1, when they are in the
low-voltage state. The bus has no address lines. Instead, the data lines are used to identify the bus controllers
involved during the selection or reselection process and during bus arbitration. For a narrow bus, there are
eight possible controllers, numbered 0 through 7, and each is associated with the data line that has the same
number. A wide bus accommodates up to 16 controllers. A controller places its own address or the address of
another controller on the bus by activating the corresponding data line. Thus, it is possible to have more than
one address on the bus at the same time, as in the arbitration process we describe next. Once a connection is
established between two controllers, there is no further need for addressing, and the data lines are used to carry data.

The main phases involved in the operation of the SCSI bus are
1. Arbitration
2. Selection
3. Information Transfer
4. Reselection.

Arbitration
The bus is free when the —BSY signal is in the inactive (high-voltage) state. Any controller can request
the use of the bus while it is in this state. Since two or more controllers may generate such a request at the
same time, an arbitration scheme must be implemented. A controller requests the bus by asserting the —BSY
signal and by asserting its associated data line to identify itself. The SCSI bus uses a simple distributed
arbitration scheme.
Each controller on the bus is assigned a fixed priority, with controller 7 having the highest priority. When
—BSY becomes active, all controllers that are requesting the bus examine the data lines and determine
whether a higher-priority device is requesting the bus at the same time. The controller using the highest-
numbered line realizes that it has won the arbitration process. All other controllers disconnect from the bus
and wait for —BSY to become inactive again.
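The distributed arbitration rule can be stated compactly: among all controllers currently driving their data lines, the one using the highest-numbered line wins. A minimal sketch, assuming the asserted lines are presented as a bit vector (bit i set means controller i is requesting the bus):

```c
/* Return the ID of the winning controller for a narrow (8-line) SCSI bus,
   given a bit vector of requesting controllers, or -1 if none request.
   Controller 7 has the highest priority. */
int scsi_arbitration_winner(unsigned char requests)
{
    for (int id = 7; id >= 0; id--) {
        if (requests & (1u << id))
            return id;
    }
    return -1;   /* bus stays free: no controller asserted its data line */
}
```

For example, if controllers 2 and 6 request the bus at the same time (requests = 0x44), the function returns 6, which matches the scenario assumed in Figure 4.42.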

In Figure 4.42, we have assumed that controller 6 is an initiator that wishes to establish a connection to
controller 5. After winning arbitration, controller 6 proceeds to the selection phase, in which it identifies the
target.

Selection

Having won arbitration, controller 6 continues to assert —BSY and —DB6 (its address). It indicates that it
wishes to select controller 5 by asserting the —SEL and then the —DB5 lines. Any other controller that may
have been involved in the arbitration phase, such as controller 2 in the figure, must stop driving the data lines
once the —SEL line becomes active, if it has not already done so. After placing the address of the target
controller on the bus, the initiator releases the —BSY line.
The selected target controller responds by asserting —BSY. This informs the initiator that the connection it is
requesting has been established, so that it may remove the address information from the data lines. The
selection process is now complete, and the target controller (controller 5) is asserting —BSY. From this point
on, controller 5 has control of the bus, as required for the information transfer phase.

Information Transfer
The information transferred between two controllers may consist of commands from the initiator to the
target, status responses from the target to the initiator, or data being transferred to or from the I/O device.
Handshake signaling is used to control information transfers, in the same manner as described earlier for the asynchronous bus, with the target controller taking the role of the bus master. The —REQ and —ACK signals replace Master-ready and Slave-ready.
The target asserts —I/O during an input operation (target to initiator), and it asserts —C/D to indicate that the
information being transferred is either a command or a status response rather than data.
At the end of the transfer, the target controller releases the —BSY signal, thus freeing the bus for use by
other devices. Later, it may reestablish the connection to the initiator controller when it is ready to transfer
more data. This is done in the reselection operation described next.
Reselection

When a logical connection is suspended and the target is ready to restore it, the target must first gain control of the bus. It starts an arbitration cycle, and after winning arbitration, it selects the initiator controller in exactly the same manner as described above. But now the roles of the target and the initiator are reversed, so it is the initiator that responds by asserting —BSY. Before data transfer can begin, the initiator must hand control over to the target. This is achieved by having the target controller assert —BSY after selecting the initiator.

UNIVERSAL SERIAL BUS (USB)


A modern computer system is likely to involve a wide variety of devices such as keyboards, microphones,
cameras, speakers, and display devices. Most computers also have a wired or wireless connection to the
Internet. A key requirement in such an environment is the availability of a simple, low-cost mechanism to
connect these devices to the computer, and an important recent development in this regard is the introduction
of the Universal Serial Bus (USB). This is an industry standard developed through a collaborative effort of
several computer and communications companies, including Compaq, Hewlett-Packard, Intel, Lucent,
Microsoft, Nortel Networks, and Philips.
The USB supports two speeds of operation, called low-speed (1.5 megabits/s) and full-speed (12 megabits/s). The most recent revision of the bus specification (USB 2.0) introduced a third speed of operation, called high-speed (480 megabits/s). The USB is quickly gaining acceptance in the marketplace, and with the
addition of the high-speed capability it may well become the interconnection method of choice for most
computer devices.
The USB has been designed to meet several key objectives:
 Provide a simple, low-cost, and easy to use interconnection system that overcomes
the difficulties due to the limited number of I/O ports available on a computer
 Accommodate a wide range of data transfer characteristics for I/O devices, including
telephone and Internet connections
 Enhance user convenience through a "plug-and-play" mode of operation
Port Limitation
The parallel and serial ports described earlier provide a general-purpose point of connection through which a variety of low- to medium-speed devices can be connected to a computer. However, only a few such ports are provided in a typical computer, and adding more usually means opening the computer box to install an additional interface card. The user may also need to know how to configure the device and the software. An objective of the USB is to make it possible to add many devices to a computer system at any time, without opening the computer box.
Device Characteristics
The kinds of devices that may be connected to a computer cover a wide range of functionality. The speed,
volume, and timing constraints associated with data transfers to and from such devices vary significantly.
In the case of a keyboard, one byte of data is generated every time a key is pressed, which may happen at any
time. These data should be transferred to the computer promptly. Since the event of pressing a key is not
synchronized to any other event in a computer system, the data generated by the keyboard are called
asynchronous.
A variety of simple devices that may be attached to a computer generate data of a similar nature — low speed
and asynchronous. Computer mice and the controls and manipulators used with video games are good
examples.
Now consider a device such as a microphone, whose analog output is sampled at regular intervals. The sampling process yields a continuous stream of digitized samples that arrive at regular intervals, synchronized with the sampling clock. Such a data stream is called isochronous, meaning that successive events are separated by equal periods of time.
An important requirement in dealing with sampled voice or music is to maintain precise timing in the
sampling and replay processes. A high degree of jitter (variability in sample timing) is unacceptable. Hence,
the data transfer mechanism between a computer and a music system must maintain consistent delays from
one sample to the next. Otherwise, complex buffering and retiming circuitry would be needed. On the other
hand, occasional errors or missed samples can be tolerated. They either go unnoticed by the listener or they
may cause an unobtrusive click. No sophisticated mechanisms are needed to ensure perfectly correct data
delivery.

Data transfers for images and video have similar requirements, but at much higher data transfer bandwidth.
The term bandwidth refers to the total data transfer capacity of a communications channel, measured in a
suitable unit such as bits or bytes per second.
Large storage devices such as magnetic disks have different requirements. Their connection to the computer must provide a data transfer bandwidth of at least 40 or 50 megabits/s. Delays on the order of a millisecond are introduced by the disk mechanism. Hence, a small additional delay introduced while transferring data to or from the computer is not important, and jitter is not an issue.
Plug-and-Play
As computers become part of everyday life, their existence should become increasingly transparent. For
example, when operating a home theater system, which includes at least one computer, the user should not
find it necessary to turn the computer off or to restart the system to connect or disconnect a device.
The plug-and-play feature means that a new device, such as an additional speaker, can be connected at any
time while the system is operating. The system should detect the existence of this new device automatically,
identify the appropriate device-driver software and any other facilities needed to service that device, and
establish the appropriate addresses and logical connections to enable them to communicate.
The plug-and-play requirement has many implications at all levels in the system, from the hardware to the
operating system and the applications software. One of the primary objectives of the design of the USB has
been to provide a plug-and-play capability.
USB Architecture
A serial transmission format has been chosen for the USB because a serial bus satisfies the low-cost and
flexibility requirements. Clock and data information are encoded together and transmitted as a single signal.
Hence, there are no limitations on clock frequency or distance arising from data skew. Therefore, it is
possible to provide a high data transfer bandwidth by using a high clock frequency. As pointed out earlier,
the USB offers three bit rates, ranging from 1.5 to 480 megabits/s, to suit the needs of different I/O devices.

To accommodate a large number of devices that can be added or removed at any time, the USB has a tree structure. Each node of the tree has a device called a hub, which acts as an intermediate control point
between the host and the I/O devices. At the root of the tree, a root hub connects the entire tree to the host
computer. The leaves of the tree are the I/O devices being served (for example, keyboard, Internet
connection, speaker, or digital TV), which are called functions in USB terminology. For consistency with the
rest of the discussion in the book, we will refer to these devices as I/O devices.
The tree structure enables many devices to be connected while using only simple point-to-point serial
links. Each hub has a number of ports where devices may be connected, including other hubs. In normal
operation, a hub copies a message that it receives from its upstream connection to all its downstream ports.

As a result, a message sent by the host computer is broadcast to all I/O devices, but only the addressed device
will respond to that message. The tree makes it possible to connect a large number of devices to a computer
through a few ports (the root hub). At the same time, each I/O device is connected through a serial point-to-
point connection. This is an important consideration in facilitating the plug-and-play feature, as we will see
shortly. Also, because of electrical transmission considerations, serial data transmission on such a connection
is much easier than parallel transmission on a shared bus. Much higher data rates and longer cables can
be used.
The USB operates strictly on the basis of polling. A device may send a message only in response to a poll
message from the host. Hence, upstream messages do not encounter conflicts or interfere with each other, as
no two devices can send messages at the same time. This restriction allows hubs to be simple, low-cost
devices.

The mode of operation described above is observed for all devices operating at either low speed or full
speed. However, one exception has been necessitated by the introduction of high-speed operation in USB
version 2.0. Consider the situation in Figure 4.44. Hub A is connected to the root hub by a high-speed link.
This hub serves one high-speed device, C, and one low-speed device, D. Normally, a message to device D
would be sent at low speed from the root hub. At 1.5 megabits/s, even a short message takes several tens of microseconds. For the duration of this message, no other data transfers could take place, thus reducing the effectiveness of the high-speed links and introducing unacceptable delays for high-speed devices. To mitigate this problem, USB 2.0 arranges for such messages to be carried at high speed over the high-speed portion of the tree, with the hub nearest the low-speed device completing the transfer at low speed while high-speed traffic elsewhere continues.
The USB standard specifies the hardware details of USB interconnections as well as the organization and requirements of the host software. The purpose of the USB software is to provide bidirectional communication links
between application software and I/O devices. These links are called pipes. Any data entering at one end of a
pipe is delivered at the other end. Issues such as addressing, timing, or error detection and recovery are
handled by the USB protocols.
The software that transfers data to or from a given I/O device is called the device driver for that device. The
device drivers depend on the characteristics of the devices they support. Hence, a more precise description of
the USB pipe is that it connects an I/O device to its device driver. It is established when a device is connected
and assigned a unique address by the USB software. Once established, data may flow through the pipe at any
time.

Addressing
I/O devices are normally identified by assigning them a unique memory address. In fact, a device usually
has several addressable locations to enable the software to send and receive control and status information
and to transfer data.
When a USB is connected to a host computer, its root hub is attached to the processor bus, where it appears
as a single device. The host software communicates with individual devices attached to the USB by sending
packets of information, which the root hub forwards to the appropriate device in the USB tree.
Each device on the USB, whether it is a hub or an I/0 device, is assigned a 7-bit address. This address is
local to the USB tree and is not related in any way to the addresses used on the processor bus. A hub may
have any number of devices or other hubs connected to it, and addresses are assigned arbitrarily. Periodically,
the host polls each hub to collect status information and learn about new devices that may have been added or
disconnected. When the host is informed that a new device has been connected, it uses a sequence of
commands to send a reset signal on the corresponding hub port, read information from the device about its
capabilities, send configuration information to the device, and assign the device a unique USB address. Once
this sequence is completed the device begins normal operation and responds only to the new address.
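The enumeration sequence described in this paragraph can be pictured as host-side pseudologic. Everything below — the function names, the port handle, and the descriptor buffers — is hypothetical; the sketch only mirrors the order of operations stated above: reset the port, read the device's capability information, configure it, and assign it a unique 7-bit address.

```c
#include <stdint.h>

/* Hypothetical host-controller primitives; real USB stacks differ. */
typedef struct { int hub; int port; } usb_port_t;
extern void usb_reset_port(usb_port_t p);
extern void usb_read_descriptor(usb_port_t p, void *buf, int len);
extern void usb_configure(usb_port_t p, const void *config, int len);
extern void usb_set_address(usb_port_t p, uint8_t addr);

/* Bring a newly detected device into normal operation.
   next_addr supplies the next unused 7-bit USB address (1..127). */
void usb_enumerate(usb_port_t p, uint8_t next_addr, void *desc, int desc_len,
                   const void *config, int config_len)
{
    usb_reset_port(p);                       /* reset signal on the hub port       */
    usb_read_descriptor(p, desc, desc_len);  /* learn the device's capabilities    */
    usb_configure(p, config, config_len);    /* enable/disable features as needed  */
    usb_set_address(p, next_addr & 0x7Fu);   /* device now answers only to this ID */
}
```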
The host software is in complete control of the procedure. It is able to sense that a device has been
connected, to read information about the device, which is typically stored in a small read-only memory in the
device hardware, to send commands that will configure the device by enabling or disabling certain features or
capabilities, and finally to assign a unique USB address to the device. The only action required from the user
is to plug the device into a hub port and to turn on its power switch.
Locations in the device to or from which data transfer can take place, such as status, control, and data registers, are called endpoints. They are identified by a 4-bit number. A USB pipe, being a bidirectional data transfer channel, is connected to a pair of endpoints, one for each direction of transfer. The pipe connected to endpoint pair number 0 exists all the time, including immediately after a device is powered on or reset. This is the control pipe that the USB software uses in the power-on procedure. As part of that procedure, other pipes using other endpoint pairs may be established, depending on the needs and complexity of the device.
USB Protocols
All information transferred over the USB is organized in packets, where a packet consists of one or more
bytes of information. There are many types of packets that perform a variety of control functions. We
illustrate the operation of the USB by giving a few examples of the key packet types and showing how they are used.
The information transferred on the USB can be divided into two broad categories: control and data.
Control packets perform such tasks as addressing a device to initiate data transfer, acknowledging that data
have been received correctly, or indicating an error. Data packets carry information that is delivered to a
device. For example, input and output data are transferred inside data packets.
A packet consists of one or more fields containing different kinds of information. The first field of any
packet is called the packet identifier, PID, which identifies the type of that packet. There are four bits of
information in this field, but they are transmitted twice. The first time they are sent with their true values, and
the second time with each bit complemented, as shown in Figure 4.45a. This enables the receiving device to
verify that the PID byte has been received correctly.
The four PID bits identify one of 16 different packet types. Some control packets, such as ACK
(Acknowledge), consist only of the PID byte. Control packets used for controlling data transfer operations are
called token packets. They have the format shown in Figure 4.45b. A token packet starts with the PID field,
using one of two PID values to distinguish between an IN packet and an OUT packet, which control input
and output transfers, respectively. The PID field is followed by the 7-bit address of a device and the 4-bit
endpoint number within that device. The packet ends with 5 bits for error checking, using a method called
cyclic redundancy check (CRC). The CRC bits are computed based on the contents of the address and
endpoint fields. By performing an inverse computation, the receiving device can determine whether the
packet has been received correctly.
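The PID encoding (four bits followed by their complements) and the token-packet layout can be captured directly in code. The sketch below builds the PID byte and packs the address, endpoint, and a caller-supplied CRC5 value into the 16 bits that follow it. The CRC computation itself and the exact bit ordering on the wire are left to the USB specification, so treat the packing order here as illustrative.

```c
#include <stdint.h>
#include <stdbool.h>

/* Build a PID byte: the 4-bit type value together with its
   bit-by-bit complement, as described in the text. */
uint8_t usb_pid_byte(uint8_t pid4)
{
    pid4 &= 0x0Fu;
    return (uint8_t)(((~pid4 & 0x0Fu) << 4) | pid4);
}

/* Verify that a received PID byte is self-consistent:
   the two halves must be complements of each other. */
bool usb_pid_valid(uint8_t pid_byte)
{
    return ((pid_byte & 0x0Fu) ^ (pid_byte >> 4)) == 0x0Fu;
}

/* Pack the body of a token packet: 7-bit device address, 4-bit endpoint
   number, and a 5-bit CRC supplied by the caller (16 bits in all,
   following the PID byte). */
uint16_t usb_token_body(uint8_t addr7, uint8_t endp4, uint8_t crc5)
{
    return (uint16_t)((addr7 & 0x7Fu)
                    | ((uint16_t)(endp4 & 0x0Fu) << 7)
                    | ((uint16_t)(crc5 & 0x1Fu) << 11));
}
```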

Data packets, which carry input and output data, have the format shown in Figure 4.45c. The packet identifier field is followed by up to 8192 bits of data, then 16 error-checking bits. Three different PID patterns are used to identify data packets, so that data packets may be numbered 0, 1, or 2.
Consider an output device connected to a USB hub, which in turn is connected to a host computer. An example of an output operation is shown in Figure 4.46. The host computer sends a token packet of type OUT to the hub, followed by a data packet containing the output data. The PID field of the data packet identifies it as data packet number 0. The hub verifies that the transmission has been error free by checking the error control bits, then sends an acknowledgment packet (ACK) back to the host. The hub forwards the token and data packets downstream. All I/O devices receive this sequence of packets, but only the device that recognizes its address in the token packet accepts the data in the packet that follows. After verifying that transmission has been error free, it sends an ACK packet to the hub.
Successive data packets on a full-speed or low-speed pipe carry the numbers 0 and 1, alternately. This simplifies recovery from transmission errors. If a token, data, or acknowledgment packet is lost as a result of a transmission error, the sender resends the entire sequence. By checking the data packet number in the PID field, the receiver can detect and discard duplicate packets. High-speed data packets are sequentially numbered 0, 1, 2, 0, and so on.
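A minimal sketch of the receiver-side rule just described: the receiver keeps the data-packet number it expects next; a packet carrying the previous number is a retransmission and is acknowledged but discarded. The structure and function names are hypothetical.

```c
#include <stdbool.h>
#include <stdint.h>

/* Per-pipe receiver state for full-/low-speed transfers,
   where data packets alternate between numbers 0 and 1. */
typedef struct {
    uint8_t expected;   /* data packet number we expect next (0 or 1) */
} usb_rx_state_t;

/* Returns true if the payload should be delivered to the device driver;
   false means the packet is a duplicate (still acknowledged, but dropped). */
bool usb_accept_data_packet(usb_rx_state_t *st, uint8_t packet_number)
{
    if (packet_number != st->expected)
        return false;                 /* retransmission of a packet we already have */
    st->expected ^= 1u;               /* toggle 0 <-> 1 for the next packet         */
    return true;
}
```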
Input operations follow a similar procedure. The host sends a token packet of type IN containing the device address. In effect, this packet is a poll asking the device to send any input data it may have. The device responds by sending a data packet followed by an ACK. If it has no data ready, it responds by sending a negative acknowledgment (NAK) instead.
Electrical Characteristics
The cables used for USB connections consist of four wires. Two are used to carry power, +5 V and
Ground. Thus, a hub or an I/O device may be powered directly from the bus, or it may have its own external
power connection. The other two wires are used to carry data. Different signaling schemes are used for
different speeds of transmission. At low speed, 1s and 0s are transmitted by sending a high-voltage state (5 V) on one or the other of the two signal wires. For high-speed links, differential transmission is used.

LIST OF IMPORTANT QUESTIONS


1. Define exceptions. Explain two kinds of exceptions.
2. Define bus arbitration. Explain in detail both approach of bus arbitration.
3. What is an interrupt? Illustrate the concept of interrupts with an example.
4. Explain in detail the situation where a number of devices capable of initiating interrupts are
connected to the processor? How to resolve the problems?
5. Explain the following terms a) interrupt service routine b) interrupt latency c) interrupt disabling.
6. With a diagram explain daisy chaining technique.
7. Draw the arrangement of a single bus structure and brief about memory mapped I/O.
8. Explain interrupt enabling, interrupt disabling, edge triggering with respect to interrupts
9. Draw the arrangement for bus arbitrations using a daisy chain and explain in brief.
10. With neat sketches explain various methods for handling multiple interrupt requests.
11. Define memory mapped I/O and I/O mapped I/O with examples
12. Explain how interrupt request from several I/O devices can be communicated to a processor through
a single INTR line.
13. What are the different methods of DMA? Explain in brief.
14. Define terms cycle stealing and block mode.
15. Explain the important functions of an I/O interface with a neat block diagram
16. What is DMA? Explain the hardware registers that are required in a DMA controller chip. Explain
the use of DMA controller in a computer system with a neat diagram
17. Define privileged instruction and explain how privileged exception occurs
18. Explain with a block diagram a general 8 bit parallel interface.
19. With the help of a data transfer signals explain how a read operation is performed using PCI bus.
20. Explain briefly bus arbitration phase in SCSI bus.
21. Draw the block diagram of the Universal Serial Bus (USB) structure connected to the host computer
22. Briefly explain all fields of packets that are used for communication between a host and a
device connected to a USB port.
23. Draw the hardware components needed for connecting a keyboard to a processor and explain
in brief.
24. List the SCSI bus signals with their functionalities.
