Cache Memory Mapping Techniques with Diagram and Example

Cache memory mapping is an important topic in the domain of computer organisation. There are 3 different types of cache memory mapping techniques. In this article, we will discuss what cache memory mapping is, the 3 types of cache memory mapping techniques, and some important related concepts such as Cache HIT and Cache MISS in detail. If you want to know more about cache memory and the locality of reference, you can read this article here.

What is cache memory mapping?

So, first of all, what does the term cache memory mapping represent? It tells us which word of main memory will be placed at which location of the cache memory. The basic idea is that it is a mapping between cache addresses and main memory addresses that refer to the same unit of information.

A proper definition of cache memory mapping would be: “The transformation of data from the main memory to the cache memory is referred to as cache memory mapping.”

We know that cache memory is placed between the main memory and the CPU to enhance the performance of a computer system. When the CPU needs an instruction, it searches for the address of that instruction in the cache memory. If the required instruction is found in the cache, it is termed a Cache HIT. If the required instruction is not found in the cache memory, it is called a Cache MISS.

Some important facts related to Cache HIT and Cache MISS:

  • The ratio of Cache HITs to the total number of search attempts is called the HIT ratio.
  • The ratio of Cache MISSes to the total number of search attempts is called the MISS ratio.
  • The sum of the HIT ratio and the MISS ratio is always 1 (see the short example after this list).
  • The maximum value of the HIT ratio or the MISS ratio is 1.
  • The minimum value of the HIT ratio or the MISS ratio is 0.
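
As a quick sanity check of these facts, here is a minimal Python sketch with made-up access counts (the figures 100 and 90 are purely illustrative and not from the article):

  total_searches = 100        # hypothetical: 100 searches made in the cache
  cache_hits = 90             # hypothetical: 90 of them were found in the cache
  cache_misses = total_searches - cache_hits    # the remaining 10 were misses

  hit_ratio = cache_hits / total_searches       # 90 / 100 = 0.9
  miss_ratio = cache_misses / total_searches    # 10 / 100 = 0.1

  # The two ratios always add up to 1.
  assert abs(hit_ratio + miss_ratio - 1) < 1e-9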

When a Cache MISS occurs, that is, when the CPU does not find the required instruction in the cache memory, the CPU fetches the instruction from primary memory. Because the locality of reference applies in a computer system, instead of fetching a single instruction, a group of instructions is sent from primary memory to the cache. Such a group of instructions is called a block.

The entire primary memory is divided into equal-sized blocks. Cache memory is also divided into blocks of the same size. For placing primary memory blocks into cache memory blocks, 3 different mapping techniques are available.

Cache memory mapping techniques:

  1. Direct Mapping
  2. Associative Mapping
  3. Set-Associative Mapping

Direct Mapping:

The idea of direct mapping is that if the ith block of main memory has to be placed at the jth block of cache memory, then the mapping is defined as:

j = i % (number of blocks in cache memory)

Let’s see this with the help of an example.

Suppose there are 4096 blocks in primary memory and 128 blocks in the cache memory. If I want to map the 0th block of main memory into the cache memory, I apply the above formula and get:

0 % 128 = 0

So, the 0th block of main memory is mapped to the 0th block of cache memory. Here, 128 is the total number of blocks in the cache memory.

1 % 128 = 1

2 % 128 = 2

Similarly, the 1st block of main memory will be mapped to the 1st block of the cache, the 2nd block to the 2nd block of the cache, and so on.

So, this is how direct mapping in the cache memory is done. The following diagram illustrates the direct mapping process.

[Diagram: Direct mapping technique]
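
To make the formula concrete, here is a minimal Python sketch of the same calculation, assuming the 4096-block main memory and 128-block cache of the example above (the function name cache_block_for is just for illustration):

  NUM_CACHE_BLOCKS = 128     # blocks in cache memory
  NUM_MAIN_BLOCKS = 4096     # blocks in main memory

  def cache_block_for(main_block):
      # Direct mapping: each main memory block lands in exactly one cache block.
      return main_block % NUM_CACHE_BLOCKS

  print(cache_block_for(0))      # 0
  print(cache_block_for(1))      # 1
  print(cache_block_for(2))      # 2
  print(cache_block_for(128))    # 0 -> competes with block 0 for the same slot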

Associative Mapping:

In the direct cache memory mapping technique, the problem was that every block of main memory was mapped to exactly one fixed block of cache memory. The major drawback was a high number of conflict misses: a cache block had to be replaced even when other blocks in the cache memory were empty.

Suppose I have already loaded the 0th block of main memory into the 0th block of cache memory using the direct mapping technique, and the next block that I need is block 128. Even if blocks 1, 2, 3, ... of the cache memory are all empty, I still have to map the 128th block of main memory to the 0th block of cache memory since:

128 % 128 = 0

Therefore, I have to replace the previously loaded 0th block of main memory with the 128th block. That is the reason for the high number of conflict misses: a cache block has to be replaced even when other cache blocks are empty. To overcome this drawback of the direct mapping technique, the associative mapping technique was introduced.

The idea of the associative mapping technique is to avoid these conflict misses: any block of main memory can be placed anywhere in the cache memory. Associative mapping is the fastest and most flexible mapping technique, but a lookup has to compare the tags of all cache blocks.
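
As a rough illustration, here is a simplified Python model of the idea (a software sketch, not how the hardware performs its tag comparisons in parallel; the function names lookup and place are assumptions for illustration):

  NUM_CACHE_BLOCKS = 128

  # Each cache block stores the main memory block number it currently holds
  # (its tag), or None if the block is empty.
  cache = [None] * NUM_CACHE_BLOCKS

  def lookup(main_block):
      # Fully associative: the wanted block may sit in ANY cache block,
      # so the tag of every block has to be compared.
      for index, tag in enumerate(cache):
          if tag == main_block:
              return index           # Cache HIT at this block
      return None                    # Cache MISS

  def place(main_block):
      # On a miss, the block can be loaded into any empty cache block.
      for index, tag in enumerate(cache):
          if tag is None:
              cache[index] = main_block
              return index
      cache[0] = main_block          # cache full: a replacement policy would pick the victim
      return 0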

Set Associative Mapping:

Set associative mapping was introduced to overcome the high conflict miss rate of the direct mapping technique and the large number of tag comparisons required in associative mapping. In this cache memory mapping technique, the cache blocks are divided into sets. The set size is always a power of 2: if the cache has 2 blocks per set, it is called 2-way set associative; similarly, if it has 4 blocks per set, it is called 4-way set associative.

It basically means that instead of referring to a cache block directly, we refer to a particular set in the cache memory. The concept is that a particular block of main memory is mapped to a particular set of the cache, and within that set the block can be placed in any of the available cache blocks.

Consider a system with 128 cache memory blocks and 4096 primary memory blocks. Here we take 2 blocks in each set, i.e. a 2-way set associative cache. Since there are 2 blocks in each set, there will be a total of 64 sets in our cache memory.

Here, to determine the set in which a main memory block will be placed, we use a similar rule: if the ith block of main memory has to be placed in the jth set of cache memory, then:

j = i % (number of sets in cache)

After determining the set position, the primary memory block may be placed in any block inside that set. The following diagram illustrates this process.

[Diagram: Set associative mapping technique]
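
The following Python sketch models the 2-way set associative example above (64 sets of 2 blocks each); the helper names and the simple oldest-first eviction are illustrative assumptions, not part of the original article:

  BLOCKS_PER_SET = 2                     # 2-way set associative
  NUM_SETS = 128 // BLOCKS_PER_SET       # 128-block cache -> 64 sets

  # cache[s] holds the tags (main memory block numbers) stored in set s.
  cache = [[] for _ in range(NUM_SETS)]

  def set_for(main_block):
      # A main memory block is restricted to exactly one set ...
      return main_block % NUM_SETS

  def lookup(main_block):
      # ... but inside that set only BLOCKS_PER_SET tags have to be compared.
      return main_block in cache[set_for(main_block)]

  def place(main_block):
      s = set_for(main_block)
      if len(cache[s]) == BLOCKS_PER_SET:
          cache[s].pop(0)                # set full: evict the oldest block in this set
      cache[s].append(main_block)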
