CA (Unit 5, 4)
Cache memory is a high-speed memory that acts as a buffer between the CPU and main memory, improving data access times. It has various levels, including registers, cache, main memory, and secondary memory, with performance measured by hit ratios. Additionally, the document discusses virtual memory, page replacement algorithms, types of interrupts, and Direct Memory Access (DMA), highlighting their advantages and disadvantages.


Unit 5

Cache Memory is a special, very high-speed memory. It is used to speed up and
synchronize with the high-speed CPU. Cache memory is costlier than main memory or
disk memory but more economical than CPU registers. It acts as a buffer between RAM
and the CPU, holding frequently requested data and instructions so that they are
immediately available to the CPU when needed. Cache memory is used to reduce the
average time to access data from the main memory. The cache is a smaller and faster
memory that stores copies of the data from frequently used main memory locations.
A CPU contains several independent caches, which store instructions and data.

Levels of memory:
 Level 1 or Registers – The data the CPU is working on immediately is stored
here, inside the CPU itself. Commonly used registers are the accumulator, the
program counter, the address register, etc.
 Level 2 or Cache memory – A very fast memory with a short access time, where
data is stored temporarily for faster access.
 Level 3 or Main Memory – The memory on which the computer currently works. It
is smaller than secondary memory, and once power is off the data no longer stays
in this memory.
 Level 4 or Secondary Memory – External memory which is not as fast as main
memory, but data stays in it permanently.
The performance of the cache is measured in terms of the hit ratio.

The CPU searches for the data in the cache whenever it needs to read or write any
data from the main memory. Two cases may occur:

 If the CPU finds that data in the cache, a cache hit occurs and it reads the
data from the cache.

 On the other hand, if it does not find that data in the cache, a cache miss
occurs. During a cache miss, the required block is brought into the cache from
main memory and the data is then read.

 Therefore, we can define the hit ratio as the number of hits divided by the
sum of hits and misses.
hit ratio = hits / (hits + misses)

= number of hits / total accesses
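
In code, the formula is just a division (a trivial sketch, not from the notes; the
counts used in the example call are made up):

# A minimal sketch: hit ratio = hits / (hits + misses).
def hit_ratio(hits, misses):
    return hits / (hits + misses)

# e.g. 45 hits out of 50 total accesses
print(hit_ratio(45, 5))    # 0.9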


Also, we can improve cache performance by:

 using a larger cache block size.

 higher associativity.

 reducing the miss rate.

 reducing the time to hit in the cache.

Advantages of Cache Memory

The advantages are as follows:

 It is faster than the main memory.

 The access time is quite less in comparison to the main memory.

 The speed of accessing data increases, and hence the CPU works faster.

 Moreover, the performance of the CPU also becomes better.

 Recently used data is stored in the cache and therefore the outputs are faster.

Disadvantages of Cache Memory

The disadvantages are as follows:

 It is quite expensive.
 The storage capacity is limited.
Cache Mapping: There are three different types of mapping used for the purpose of
cache memory, which are as follows: Direct mapping, Associative mapping, and Set-
Associative mapping. These are explained below.
A. Direct Mapping
The simplest technique, known as direct mapping, maps each block of main memory
into only one possible cache line. In other words, direct mapping assigns each
memory block to a specific line in the cache. If a line is already occupied by a
memory block when a new block needs to be loaded, the old block is discarded. An
address is split into two parts: an index field and a tag field. The tag field is
stored in the cache, while the index field selects the cache line. Direct mapping's
performance is directly proportional to the hit ratio.
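
To make the address split concrete, here is a small sketch (not from the notes; the
geometry of 64 lines with 16-byte blocks is an assumption) showing how direct
mapping divides an address into tag, index and byte offset:

# A minimal sketch (assumed geometry: 64 lines, 16-byte blocks).
LINES = 64          # number of cache lines  -> 6 index bits
BLOCK_SIZE = 16     # bytes per block        -> 4 offset bits

def split_address(addr):
    offset = addr % BLOCK_SIZE                 # byte within the block
    index = (addr // BLOCK_SIZE) % LINES       # cache line the block maps to
    tag = addr // (BLOCK_SIZE * LINES)         # stored in the cache for comparison
    return tag, index, offset

print(split_address(0x1A2B))   # -> (6, 34, 11)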

B. Associative Mapping
In this type of mapping, associative memory is used to store both the content and
the address of the memory word. Any block can go into any line of the cache, and
the word-ID bits are used to identify which word in the block is needed. This
enables the placement of any word at any place in the cache memory. It is considered
to be the fastest and the most flexible mapping form. In associative mapping there
are no index bits; the whole address (apart from the word bits) acts as the tag.

C. Set-Associative Mapping
Set-associative mapping allows each index address to correspond to two or more
lines in the cache, so several main-memory blocks that share the same index can be
resident at the same time. Set-associative cache mapping combines the best of the
direct and associative cache mapping techniques: the cache consists of a number of
sets, each of which consists of a number of lines.
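
Continuing the same hypothetical geometry, but organised as a 4-way set-associative
cache (16 sets of 4 lines), the address now selects a set instead of a single line:

# A minimal sketch (assumed geometry: 16-byte blocks, 64 lines as 4 ways -> 16 sets).
BLOCK_SIZE = 16
SETS = 16

def split_address_set_assoc(addr):
    offset = addr % BLOCK_SIZE                  # byte within the block
    set_index = (addr // BLOCK_SIZE) % SETS     # which set the block maps to
    tag = addr // (BLOCK_SIZE * SETS)           # compared against every line in that set
    return tag, set_index, offset

print(split_address_set_assoc(0x1A2B))   # -> (26, 2, 11)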

Virtual memory

Computers need memory (a temporary storage area which holds the data and
instructions that the CPU needs) to execute programs. Operating systems such as
Windows use virtual memory technology to increase the computer's effective memory
capacity.

This technique sets aside a part of the hard disk space to act as memory. Virtual
memory combines the computer's RAM with temporary space on the hard disk. When RAM
runs low, virtual memory moves data from RAM to a space called a paging file.
Moving data to the paging file frees up RAM so the computer can complete its work.

This disk area is therefore sometimes also known as the "page file".

Page Replacement Algorithms
1. First In First Out (FIFO): This is the simplest page replacement algorithm.
In this algorithm, the operating system keeps track of all pages in memory in a
queue, with the oldest page at the front of the queue. When a page needs to be
replaced, the page at the front of the queue is selected for removal.
Example 1: Consider the page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page
frames. Find the number of page faults. (A short sketch that replays all three
examples in code follows this list.)

Initially, all slots are empty, so when 1, 3, 0 come they are allocated to the
empty slots —> 3 Page Faults.
When 3 comes, it is already in memory, so —> 0 Page Faults. Then 5 comes; it is not
available in memory, so it replaces the oldest page, i.e. 1 —> 1 Page Fault. 6
comes; it is also not available in memory, so it replaces the oldest page, i.e. 3
—> 1 Page Fault. Finally, when 3 comes it is not available, so it replaces 0
—> 1 Page Fault.
2. Optimal Page Replacement: In this algorithm, the page replaced is the one that
will not be used for the longest duration of time in the future.
Example 2: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3,
2, 3 with 4 page frames. Find the number of page faults.
Initially, all slots are empty, so 7, 0, 1, 2 are allocated to the empty slots
—> 4 Page Faults.
0 is already there, so —> 0 Page Fault. When 3 comes it takes the place of 7
because 7 is not used for the longest duration of time in the future —> 1 Page
Fault. 0 is already there, so —> 0 Page Fault. 4 takes the place of 1 —> 1 Page
Fault.
For the rest of the page reference string —> 0 Page Faults, because those pages
are already available in memory.
3. Least Recently Used (LRU): In this algorithm, the page replaced is the one that
was least recently used.
Example 3: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3,
2, 3 with 4 page frames. Find the number of page faults.

Initially, all slots are empty, so 7, 0, 1, 2 are allocated to the empty slots
—> 4 Page Faults.
0 is already there, so —> 0 Page Fault. When 3 comes it takes the place of 7
because 7 is the least recently used page —> 1 Page Fault.
0 is already in memory, so —> 0 Page Fault.
4 takes the place of 1 —> 1 Page Fault.
For the rest of the page reference string —> 0 Page Faults, because those pages
are already available in memory.
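
The three walk-throughs above can be checked with a short sketch (not part of the
original notes; it simply replays each reference string and counts the faults):

# A minimal sketch of FIFO, Optimal and LRU page replacement.
from collections import deque, OrderedDict

def fifo_faults(refs, frames):
    memory, faults = deque(), 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.popleft()          # evict the oldest resident page
            memory.append(page)
    return faults

def optimal_faults(refs, frames):
    memory, faults = [], 0
    for i, page in enumerate(refs):
        if page in memory:
            continue
        faults += 1
        if len(memory) < frames:
            memory.append(page)
            continue
        future = refs[i + 1:]
        # evict the page whose next use is farthest away (or that is never used again)
        victim = max(memory, key=lambda p: future.index(p) if p in future else len(future))
        memory[memory.index(victim)] = page
    return faults

def lru_faults(refs, frames):
    memory, faults = OrderedDict(), 0     # insertion order tracks recency of use
    for page in refs:
        if page in memory:
            memory.move_to_end(page)      # hit: mark as most recently used
            continue
        faults += 1
        if len(memory) == frames:
            memory.popitem(last=False)    # evict the least recently used page
        memory[page] = True
    return faults

print(fifo_faults([1, 3, 0, 3, 5, 6, 3], 3))              # 6 page faults (Example 1)
refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3]
print(optimal_faults(refs, 4))                            # 6 page faults (Example 2)
print(lru_faults(refs, 4))                                # 6 page faults (Example 3)

Note that FIFO only needs the arrival order, LRU needs the recency of use, while
Optimal needs knowledge of future references, which is why Optimal serves only as a
benchmark rather than a practical policy.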

Input/Output Interrupt

In this method, the CPU issues a read command to the I/O device and then goes on to
do some other useful work. When the I/O device is ready, it sends an interrupt
signal to the processor.
When the CPU receives the interrupt signal from the I/O device, it checks the
status. If the status is ready, the CPU reads the word from the I/O device and
writes the word into main memory. If the operation was completed successfully, the
processor goes on to the next instruction.
Types of interrupts

1. External Interrupts

External interrupts come from input-output (I/O) devices, from a timing device,
from a circuit monitoring the power supply, or from any other external source. For
example, a timeout interrupt can result from a program that is in an endless loop
and has thus exceeded its time allocation.

2. Internal Interrupts

Internal interrupts are also called traps. Error conditions generally appear as a
result of premature termination of instruction execution. The service program that
processes the internal interrupt determines the corrective measure to be taken.
The main difference between internal and external interrupts is that the internal
interrupt is initiated by some exceptional condition caused by the program itself
rather than by an external event. Internal interrupts are synchronous with the
program while external interrupts are asynchronous. If the program is rerun, the
internal interrupts will appear in the same place each time. External interrupts
depend on external conditions that are independent of the program being executed at
the time.

3. Software Interrupts

A software interrupt is initiated by executing an instruction. It is a special call
instruction that behaves like an interrupt rather than a subroutine call. It can be
used by the programmer to initiate an interrupt procedure at any desired point in
the program.

Priority Interrupt

A priority interrupt is a system which decides the order in which various devices
that generate interrupt signals at the same time will be serviced by the CPU. The
system has the authority to decide which conditions are allowed to interrupt the
CPU; devices with high-speed transfer, such as magnetic disks, are given high
priority, and slow devices, such as keyboards, are given low priority.
When two or more devices interrupt the computer simultaneously, the computer
services the device with the higher priority first.

Daisy Chaining Priority

This way of deciding interrupt priority consists of a serial connection of all the
devices which generate an interrupt signal. The device with the highest priority is
placed at the first position, followed by lower-priority devices, and the device
with the lowest priority of all is placed last in the chain.

In a daisy-chaining system all the devices are connected in serial form. The
interrupt request line is common to all devices. If any device pulls its interrupt
signal to the low-level state, the interrupt line goes to the low-level state and
enables the interrupt input of the CPU.

The following figure shows the block diagram for daisy chaining priority system.
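
As a simple illustration of this selection order (a hypothetical sketch, not part of
the notes), the requesting device closest to the CPU in the chain is the one that
gets serviced:

# Devices are listed in chain order, highest priority first.
def daisy_chain_select(requests):
    """requests: list of (device_name, is_requesting) pairs in chain order."""
    for name, is_requesting in requests:
        if is_requesting:
            return name          # the acknowledge propagates no further down the chain
    return None                  # no device is requesting

print(daisy_chain_select([("disk", False), ("printer", True), ("keyboard", True)]))
# -> "printer": the keyboard, further down the chain, must wait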

Direct Memory Access (DMA):

A DMA controller is a hardware device that allows I/O devices to access memory
directly, with less participation of the processor. The DMA controller uses the
usual circuits of an interface to communicate with the CPU and the I/O devices.
The unit communicates with the CPU through the data bus and control lines. The CPU
selects a register within the DMA controller through the address bus by enabling
the DS (DMA select) and RS (register select) inputs.
DMA in Computer Architecture
DMA controller registers :
The DMA controller has three registers as follows.
 Address register – It contains the address to specify the desired location
in memory.
 Word count register – It contains the number of words to be transferred.
 Control register – It specifies the transfer mode.
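
As a rough software model of these three registers (purely illustrative; real DMA
transfers happen in hardware, and the names and memory layout here are made up), a
transfer advances the address and decrements the word count until it reaches zero:

# Illustrative model only: the register names and the memory list are assumptions.
class DMAController:
    def __init__(self, address, word_count, control):
        self.address = address          # address register: next memory location
        self.word_count = word_count    # word count register: words left to move
        self.control = control          # control register: e.g. "read" or "write"

    def transfer(self, memory, device_words):
        # copy words from the device buffer into memory without CPU involvement
        for word in device_words:
            if self.word_count == 0:
                break
            memory[self.address] = word
            self.address += 1           # advance to the next memory location
            self.word_count -= 1        # one fewer word left to transfer

memory = [0] * 16
dma = DMAController(address=4, word_count=3, control="write")
dma.transfer(memory, device_words=[11, 22, 33, 44])
print(memory[4:8])    # -> [11, 22, 33, 0]: only 3 words moved, as counted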

Advantages and Disadvantages

The advantages of the DMA controller include the following:

 It allows a peripheral device to read from or write to memory without going
through the CPU.
 It increases the throughput of memory operations by avoiding CPU involvement.
 It reduces the workload on the CPU.
 Only a few clock cycles are required for each transfer.
 DMA decreases the number of clock cycles required to read or write a block of
data.

Disadvantages

 DMA controller-based systems are expensive.
 The overall system price is increased.

Unit 4

Instruction Codes
• An instruction code is a group of bits that instructs the computer to perform a
specific operation.
• The operation code of an instruction is a group of bits that defines operations
such as addition, subtraction, shift, complement, etc.
• An instruction must also include one or more operands, which indicate the
registers and/or memory addresses from which data is taken or to which data is
deposited.
The basic computer has a processor register, and its instruction code has two
parts. The first part specifies the operation to be performed and the second
specifies an address. The memory address tells where the operand will be found in
memory.

Instructions are stored in one section of memory and data in another.

Common Bus System

The basic computer has 8 registers, a memory unit and a control unit. Paths must be
provided to transfer data from one register to another. An efficient method for
transferring data in a system is to use a common bus system. The outputs of the
registers and memory are connected to the common bus.

Computer Instructions

There are three types of instruction formats:

1. Memory Reference Instruction

It uses 12 bits to specify the address and 1 bit to specify the addressing mode
(I). I is equal to 0 for a direct address and 1 for an indirect address.

2. Register Reference Instruction

These instructions are recognized by the opcode 111 with a 0 in the leftmost bit of
the instruction. The other 12 bits specify the operation to be executed.

3. Input-Output Instruction

These instructions are recognized by the operation code 111 with a 1 in the
leftmost bit of the instruction. The remaining 12 bits are used to specify the
input-output operation.
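
Using the layout described above for the basic computer (1-bit I field, 3-bit
opcode, 12 low-order bits), a small sketch (not from the notes; the example
encodings are invented) can classify an instruction word:

# Bit 15 = I (mode), bits 14-12 = opcode, bits 11-0 = address / operation bits.
def classify(instruction):
    i_bit = (instruction >> 15) & 0x1
    opcode = (instruction >> 12) & 0x7
    low12 = instruction & 0xFFF
    if opcode == 0b111:
        if i_bit == 0:
            return ("register-reference", low12)
        return ("input-output", low12)
    mode = "indirect" if i_bit else "direct"
    return ("memory-reference", opcode, mode, low12)

print(classify(0x1234))   # opcode 001, direct  -> memory-reference
print(classify(0x7800))   # opcode 111, I = 0   -> register-reference
print(classify(0xF400))   # opcode 111, I = 1   -> input-output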

Format of Instruction

1. An operation code field that specifies the operation to be performed.

2. An address field that designates the memory address or register.

3. A mode field that specifies the way the operand or effective address is
determined.

Computers may have instructions of different lengths containing varying numbers of
addresses. The number of address fields in the instruction format depends upon the
internal organization of the computer's registers.

Addressing Modes and Instruction Cycle

The operation field of an instruction specifies the operation to be performed. This
operation is executed on some data which is stored in the computer's registers or
in main memory. The way any operand is selected during program execution depends on
the addressing mode of the instruction.

Types of Addressing Modes

Immediate Mode

In this mode, the operand is specified in the instruction itself. An immediate-mode
instruction has an operand field rather than an address field.

For example: ADD 7, which says add 7 to the contents of the accumulator; 7 is the
operand here.

Register Mode

In this mode the operand is stored in a register, and this register is inside the
CPU. The instruction contains the address of the register where the operand is
stored.
Advantages

 Shorter instructions and faster instruction fetch.

 Faster access to the operand(s), since no memory reference is needed.

Disadvantages

 Very limited address space.

 Using multiple registers helps performance but complicates the instructions.

Auto Increment/Decrement Mode

In this mode the register is incremented or decremented after or before its value
is used.

Direct Addressing Mode

In this mode, the effective address of the operand is given in the instruction
itself.

 A single memory reference is needed to access the data.

 No additional calculation is needed to find the effective address of the operand.

For example: ADD R1, 4000, where 4000 is the effective address of the operand.

NOTE: The effective address is the location where the operand is present.

Indirect Addressing Mode

In this mode, the address field of the instruction gives the address where the
effective address is stored in memory. This slows down execution, since it requires
multiple memory lookups to find the operand.
Relative Addressing Mode

In this mode the contents of the PC (Program Counter) are added to the address part
of the instruction to obtain the effective address.

Stack Addressing Mode

In this mode, the operand is at the top of the stack. For example: ADD, which will
POP the top two items from the stack, add them, and then PUSH the result onto the
top of the stack.
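
The differences between these modes can be summarised in a small sketch (not from
the notes; the tiny memory, register file and numbers are invented purely for
illustration) that resolves the operand for several of the modes above:

# Hypothetical machine state used only for illustration.
memory = {4000: 25, 5000: 4000, 4100: 77}
registers = {"R1": 9}
pc = 100

def operand(mode, field):
    if mode == "immediate":
        return field                         # operand is in the instruction itself
    if mode == "register":
        return registers[field]              # operand is held in a CPU register
    if mode == "direct":
        return memory[field]                 # field is the effective address
    if mode == "indirect":
        return memory[memory[field]]         # field points to the effective address
    if mode == "relative":
        return memory[pc + field]            # EA = PC + address part of instruction
    raise ValueError(mode)

print(operand("immediate", 7))      # 7
print(operand("register", "R1"))    # 9
print(operand("direct", 4000))      # 25
print(operand("indirect", 5000))    # 25  (5000 -> 4000 -> 25)
print(operand("relative", 4000))    # 77  (EA = 100 + 4000 = 4100)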

Instruction Cycle

An instruction cycle, also known as the fetch-decode-execute cycle, is the basic
operational process of a computer. This process is repeated continuously by the CPU
from boot-up to shut-down of the computer.

Following are the steps that occur during an instruction cycle:

1. Fetch the Instruction

The instruction is fetched from the memory address stored in the PC (Program
Counter) and placed in the instruction register IR. At the end of the fetch
operation, the PC is incremented by 1 so that it points to the next instruction to
be executed.

2. Decode the Instruction

The instruction in the IR is decoded by the decoder so that the CPU knows which
operation to perform.

3. Read the Effective Address

If the instruction has an indirect address, the effective address is read from
memory. Otherwise, in the case of an immediate operand instruction, the operand is
read directly.

4. Execute the Instruction

The control unit passes the information, in the form of control signals, to the
functional units of the CPU. The result generated is stored in main memory or sent
to an output device.

The cycle is then repeated by fetching the next instruction. In this way the
instruction cycle is repeated continuously.
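
As a toy model of this cycle (entirely hypothetical; the two-field instruction
format and the LOAD/ADD/HALT opcodes are invented for illustration), the four steps
repeat until a halting instruction is executed:

# Each instruction is (opcode, operand address); data lives in the same memory dict.
program = {0: ("LOAD", 10), 1: ("ADD", 11), 2: ("HALT", None), 10: 5, 11: 7}

pc, acc, running = 0, 0, True
while running:
    opcode, address = program[pc]     # 1. fetch: read the instruction the PC points to
    pc += 1                           #    ...and increment the PC to the next instruction
    # 2./3. decode and read the effective address, then 4. execute
    if opcode == "LOAD":
        acc = program[address]
    elif opcode == "ADD":
        acc += program[address]
    elif opcode == "HALT":
        running = False

print(acc)    # -> 12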
Flowchart for instruction cycle
Registers
• Computer instructions are stored in consecutive locations and are executed
sequentially; this requires a register which can store the address of the next
instruction: we call it the Program Counter.
• We need registers which can hold the address at which a memory operand is stored,
as well as the value itself.
• We need a place where we can store temporary data, the instruction being
executed, a character being read in, or a character being written out.
A register is a very fast computer memory, used to store data or instructions in
execution. A register is a group of flip-flops, with each flip-flop capable of
storing one bit of information. An n-bit register has a group of n flip-flops and
is capable of storing n bits of binary information.

A register consists of a group of flip-flops and gates. The flip-flops hold the
binary information and the gates control when and how new information is
transferred into the register. Various types of registers are available
commercially. The simplest register is one that consists of only flip-flops, with
no external gates.

Following are some commonly used registers:

1. Accumulator: This is the most commonly used register, used to store data taken
out of memory.

2. General Purpose Registers: These are used to store data and intermediate results
during program execution. They can be accessed via assembly programming.

3. Special Purpose Registers: Users do not access these registers. They are used by
the computer system itself, for example:

o MAR: The Memory Address Register holds the address for the memory unit.

o MBR: The Memory Buffer Register stores instructions and data received from,
or to be sent to, memory.

o PC: The Program Counter points to the next instruction to be executed.

o IR: The Instruction Register holds the instruction to be executed.
