Computer Organization and Architecture – Mapping Functions And Replacement Algorithms

Mapping Functions

A mapping function maps a particular block of main memory to a particular block of
cache, and so determines where a block is placed when it is transferred from main memory to cache memory.
Three different mapping functions are available:

Direct mapping:

A particular block of main memory can be placed only in one particular block of cache memory, so this
method is not flexible.

Associative mapping:

In this mapping function, any block of main memory can potentially reside in any cache block
position. This is a much more flexible mapping method.

Block-set-associative mapping:

In this method, blocks of cache are grouped into sets, and the mapping allows a block of main
memory to reside in any block of a specific set. In terms of flexibility, it lies between the
other two methods.

All three mapping methods are explained with the help of an example.

Consider a cache of 4096 (4K) words with a block size of 32 words. The cache is therefore organized
as 128 blocks. Addressing 4K words requires 12 address bits. Selecting one block out of 128
requires 7 address bits, and selecting one word out of 32 requires 5 address bits. So the 12-bit
address is divided into two groups: the lower 5 bits select a word within a block, and the higher
7 bits select a block of cache memory.

Let us consider a main memory consisting of 64K words, so the address bus is 16 bits wide. Since
the cache block size is 32 words, the main memory is also organized in blocks of 32 words.
Therefore, the total number of blocks in main memory is 2048 (2K × 32 words = 64K words).
Identifying one of the 2K blocks requires 11 address bits. Out of the 16 address bits of main
memory, the lower 5 bits select a word within a block and the higher 11 bits select a block out
of 2048 blocks.

The cache holds 128 blocks while main memory holds 2048, so at any instant only 128 of the 2048
blocks can reside in cache memory. Therefore, we need a mapping function to place a particular
block of main memory into an appropriate block of cache memory.
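The address-field widths quoted above follow directly from the word counts and the block size. A minimal sketch (the function name `address_fields` is just an illustrative choice, not from the original text):

```python
# Sketch: derive the address-field widths used in the running example
# (4K-word cache, 64K-word main memory, 32-word blocks).

def address_fields(total_words, block_size):
    """Return (block_bits, word_bits) for a memory of total_words
    organized into blocks of block_size words (both powers of two)."""
    word_bits = block_size.bit_length() - 1       # 32 words  -> 5 bits
    num_blocks = total_words // block_size
    block_bits = num_blocks.bit_length() - 1      # 128 -> 7 bits, 2048 -> 11 bits
    return block_bits, word_bits

print(address_fields(4096, 32))       # cache: (7, 5)
print(address_fields(64 * 1024, 32))  # main memory: (11, 5)
```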

Direct Mapping Technique:

The simplest way of associating main memory blocks with cache blocks is the direct mapping
technique. In this technique, block k of main memory maps into block k modulo m of the cache,
where m is the total number of blocks in the cache. In this example, m is 128. In the direct
mapping technique, a particular block of main memory can be transferred only to the one cache
block given by the modulo function.
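The modulo rule can be illustrated with a few sample block numbers from the example:

```python
# Sketch: direct mapping assigns main memory block k to cache block k mod m.
m = 128  # number of cache blocks in the running example

for k in (0, 1, 128, 129, 2047):
    print(f"main memory block {k:4d} -> cache block {k % m:3d}")
# Blocks 0, 128, 256, ... all compete for cache block 0.
```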

Since more than one main memory block is mapped onto a given cache block position, contention
may arise for that position. This situation may occur even when the cache is not full. Contention is
resolved by allowing the new block to overwrite the currently resident block, so the replacement
algorithm is trivial.

The detailed operation of the direct mapping technique is as follows:

The main memory address is divided into three fields. The field sizes depend on the memory capacity
and the cache block size. In this example, the lower 5 bits of the address identify a word
within a block. The next 7 bits select a block out of the 128 blocks (the capacity of the
cache). The remaining 4 bits are used as a TAG to identify which block of main memory is
mapped to that cache block.

When a new block is first brought into the cache, the high-order 4 bits of the main memory address
are stored in the four TAG bits associated with its location in the cache. When the CPU generates a
memory request, the 7-bit block field determines the corresponding cache block. The TAG field of
that block is compared to the TAG field of the address. If they match, the desired word, specified by
the low-order 5 bits of the address, is in that block of the cache.

If there is no match, the required word must be accessed from main memory: the contents of that
cache block are replaced by the new block specified by the address generated by the CPU, and the
TAG bits are correspondingly updated with the high-order 4 bits of the address. The whole
arrangement of the direct mapping technique is shown in the figure below.

Figure : Direct-mapping cache
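The lookup described above can be sketched in code. The class name and the hexadecimal test addresses are illustrative choices, not from the original text; only the field widths (4-bit TAG, 7-bit block, 5-bit word) come from the example:

```python
# Sketch of a direct-mapped cache lookup with the example's field widths:
# 16-bit address = 4-bit TAG | 7-bit block | 5-bit word.

class DirectMappedCache:
    def __init__(self, num_blocks=128):
        self.tags = [None] * num_blocks   # one stored TAG per cache block

    def access(self, address):
        block = (address >> 5) & 0x7F     # 7-bit block field
        tag   = (address >> 12) & 0xF     # high-order 4 TAG bits
        if self.tags[block] == tag:
            return "hit"
        # Miss: the resident block is simply overwritten (trivial replacement).
        self.tags[block] = tag
        return "miss"

cache = DirectMappedCache()
print(cache.access(0x0020))  # miss: block field 1, TAG 0 first loaded
print(cache.access(0x0025))  # hit: same block, different word
print(cache.access(0x1020))  # miss: TAG 1 contends for the same cache block
```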

Associative Mapping Technique:

In the associative mapping technique, a main memory block can potentially reside in any cache block
position. In this case, the main memory address is divided into two groups: the low-order bits
identify the location of a word within a block, and the high-order bits identify the block. In this
example, 11 bits are required to identify a main memory block when it is resident in the cache, so
the high-order 11 bits are used as TAG bits and the low-order 5 bits identify a word within the
block. The TAG bits of an address received from the CPU must be compared to the TAG bits of each
block of the cache to see if the desired block is present.

In associative mapping, any block of main memory can go to any block of the cache, so it offers
complete flexibility, and a proper replacement policy must be used to evict a cache block when the
currently accessed block of main memory is not present in the cache. It may not be practical to
exploit this complete flexibility because of the searching overhead: the TAG field of the main
memory address has to be compared with the TAG field of every cache block. In this example, there
are 128 blocks in the cache and the TAG is 11 bits. The whole arrangement of the associative
mapping technique is shown in the figure below.

Figure : Associative-mapping cache
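A fully associative lookup can be sketched along the same lines as before. The class name and test addresses are illustrative; the FIFO eviction used on a full cache is an assumption (the text only says that *some* replacement policy is needed):

```python
# Sketch of an associative lookup: the 11-bit TAG of the address is compared
# against the TAG of every resident cache block (up to 128 comparisons here).

class AssociativeCache:
    def __init__(self, num_blocks=128):
        self.num_blocks = num_blocks
        self.tags = []                    # TAGs of resident blocks, oldest first

    def access(self, address):
        tag = address >> 5                # high-order 11 bits of a 16-bit address
        if tag in self.tags:              # search all blocks
            return "hit"
        if len(self.tags) == self.num_blocks:
            self.tags.pop(0)              # FIFO eviction -- an assumed policy
        self.tags.append(tag)
        return "miss"

cache = AssociativeCache()
print(cache.access(0x0020))  # miss: TAG 1 loaded into some free block
print(cache.access(0x1020))  # miss: TAG 129 loaded -- no positional conflict
print(cache.access(0x0025))  # hit: same block as the first access
```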

Block-Set-Associative Mapping Technique:

This mapping technique is intermediate between the previous two. Blocks of the cache are
grouped into sets, and the mapping allows a block of main memory to reside in any block of a
specific set. Therefore, the flexibility of associative mapping is reduced from full freedom to a
set of specific blocks. This also reduces the searching overhead, because the search is restricted
to the number of sets instead of the number of blocks. The contention problem of direct mapping is
also eased by having a few choices for block replacement.

Consider the same cache and main memory organization as in the previous example, with the cache
organized as 4 blocks per set. The TAG field of the associative mapping technique is divided into
two groups: a SET field and a TAG field. Each set contains 4 blocks, so the total number of sets
is 32. The main memory address is grouped into three parts: the low-order 5 bits identify a word
within a block; since there are 32 sets, the next 5 bits identify the set; and the high-order 6
bits are used as TAG bits.

The 5-bit SET field of the address determines which set of the cache might contain the desired
block. This is similar to the direct mapping technique, except that direct mapping selects a block
while block-set-associative mapping selects a set. The TAG field of the address must then be
compared with the TAGs of the four blocks of that set. If a match occurs, the block is present in
the cache; otherwise the block containing the addressed word must be brought into the cache, and
it can only go into the corresponding set.

Since there are four blocks in a set, we must choose which block to replace when all of them are
occupied. Because the search is restricted to four blocks only, the searching complexity is
reduced. The whole arrangement of the block-set-associative mapping technique is shown in the
figure below.

It is clear that if we increase the number of blocks per set, the number of bits in the SET field
is reduced, while the search within each set becomes more complex. The extreme of 128 blocks per
set requires no SET bits and corresponds to the fully associative mapping technique with 11 TAG
bits. The other extreme of one block per set is the direct mapping method.

Figure : Block-set-associative mapping cache with 4 blocks per set
