Question



Q1/

A- What is cache memory and how does it work?

B- What are the three cache mapping approaches, and what are the pros and cons of each approach?

C- What are the cache replacement policies and read/write policies?

Solutions

Expert Solution

A:-

Cache memory is a small, fast area of computer memory in which frequently used data and instructions are temporarily stored so that they can be written and read very quickly, i.e. the access time is very low.

Cache memory temporarily holds the data, instructions, and programs that the CPU uses most often. When data is needed, the CPU automatically looks in the cache first for faster access, because RAM is slower and further away from the CPU. When the data is found in the cache, this is known as a cache hit. A cache hit lets the processor retrieve the data quickly, making the overall system more efficient.

Since cache memory is much smaller than RAM, the data it stores is held only temporarily, so it may not contain what the processor needs. When the cache does not have the required data, this is known as a cache miss, and in that case the CPU falls back to RAM (and ultimately to the hard drive).
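As a rough illustration of the hit/miss behaviour described above, the minimal Python sketch below models a cache as a small dictionary keyed by block address; the class name, capacity, and "memory" contents are made up for the example and are not part of any real hardware model.

```python
# Minimal illustration of cache hits and misses (toy model, not real hardware).
class TinyCache:
    def __init__(self, capacity=4):
        self.capacity = capacity      # number of blocks the cache can hold
        self.store = {}               # block_address -> data

    def access(self, block_address, memory):
        if block_address in self.store:           # cache hit: fast path
            return "hit", self.store[block_address]
        # cache miss: fetch from (slower) main memory; evict the first-inserted block if full
        if len(self.store) >= self.capacity:
            self.store.pop(next(iter(self.store)))
        data = memory[block_address]
        self.store[block_address] = data
        return "miss", data

memory = {addr: f"data@{addr}" for addr in range(16)}  # pretend main memory
cache = TinyCache()
for addr in [1, 2, 1, 5, 1]:
    print(addr, cache.access(addr, memory)[0])   # repeated accesses to 1 hit after the first miss
```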

B:-

The three cache mapping approaches are as follows:-

1-Direct Mapping

Pros:-

This cache mapping approach is very power efficient because it avoids searching all the cache lines: the index selects exactly one line to check.

It is also simple and inexpensive, because only one tag needs to be checked at a time, so the comparison hardware is cheap.

Cons:-

The main disadvantage of this approach is a lower cache hit rate: since each memory block can be placed in only one cache line, that line is replaced every time another block that maps to the same line is referenced, which is the main cause of conflict misses (see the sketch below).
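The small Python sketch below makes the "only one possible line" point concrete by computing the index and tag for a direct-mapped cache; the block size and line count are arbitrary example values.

```python
# Direct-mapped lookup: each block address maps to exactly one cache line.
BLOCK_SIZE = 16   # bytes per block (example value)
NUM_LINES  = 8    # number of cache lines (example value)

def direct_mapped_location(address):
    block = address // BLOCK_SIZE        # which memory block the address falls in
    index = block % NUM_LINES            # the single line this block may occupy
    tag   = block // NUM_LINES           # identifies which block currently occupies that line
    return index, tag

# Two addresses whose block numbers differ by a multiple of NUM_LINES collide:
print(direct_mapped_location(0x040))    # (4, 0)
print(direct_mapped_location(0x0C0))    # (4, 1) -> same index, different tag: conflict miss
```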

2- Fully associative cache

Pros:- A fully associative structure gives the flexibility to place a memory block in any cache line, so the cache can be fully utilized.
This approach provides a better cache hit rate than direct mapping.
It also offers the flexibility to use a wide range of replacement algorithms when a cache miss occurs.

Cons:- This approach is slow, because every cache line must be examined (all tags are compared) on each access.
It is power hungry, since the entire cache must be searched to locate a block.
It is also the most expensive of the three methods, due to the cost of the associative-comparison hardware.
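A minimal sketch of the fully associative search described above: there is no index field, so every line's tag is compared until a match is found. The function name, block size, and cache contents are illustrative only.

```python
# Fully associative lookup: the block may sit in any line, so all tags are compared.
BLOCK_SIZE = 16   # bytes per block (example value)

def fully_associative_lookup(address, lines):
    """lines is a list of (tag, data) pairs representing the whole cache."""
    tag = address // BLOCK_SIZE          # no index field: the tag covers the full block number
    for stored_tag, data in lines:       # this full scan is what costs time and power
        if stored_tag == tag:
            return "hit", data
    return "miss", None

lines = [(3, "block-3"), (7, "block-7")]
print(fully_associative_lookup(0x70, lines))   # 0x70 // 16 = 7 -> hit
print(fully_associative_lookup(0x90, lines))   # 0x90 // 16 = 9 -> miss
```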

3- Set-associative cache

Pros:- Set-associative mapping is a trade-off between direct-mapped and fully associative caches.
It offers the flexibility of using replacement algorithms when a cache miss occurs.

Cons:- This approach may not use all the available cache lines effectively and can still suffer from conflict misses when too many blocks map to the same set (a sketch follows below).
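The sketch below shows the set-associative compromise: the index selects one set, and only the ways within that set are searched associatively. The block size, set count, and associativity are example values chosen for illustration.

```python
# Set-associative lookup: the index picks a set, then the ways in that set are searched.
BLOCK_SIZE = 16   # bytes per block (example value)
NUM_SETS   = 4    # number of sets (example value)
NUM_WAYS   = 2    # 2-way set-associative (example value)

def set_associative_lookup(address, sets):
    """sets is a list of NUM_SETS lists, each holding up to NUM_WAYS (tag, data) pairs."""
    block = address // BLOCK_SIZE
    set_index = block % NUM_SETS               # only this set can hold the block
    tag = block // NUM_SETS
    for stored_tag, data in sets[set_index]:   # the search is limited to NUM_WAYS entries
        if stored_tag == tag:
            return "hit", data
    return "miss", None

sets = [[] for _ in range(NUM_SETS)]
sets[1].append((0, "block-1"))                 # block number 1 -> set 1, tag 0
print(set_associative_lookup(0x10, sets))      # block 1 -> hit
print(set_associative_lookup(0x50, sets))      # block 5 -> set 1, tag 1 -> miss
```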

C:- Cache Replacement policies are as follows:-

The goal of a cache replacement policy is to identify, or predict, which cache block will not be needed in the near future. Note that a replacement policy is not applicable to a direct-mapped cache, since each block has a fixed location there.

The replacement policies include several algorithms, such as random, least frequently used (LFU), least recently used (LRU), FIFO order, etc.

Random replacement picks a victim at random. It is simple and fast to implement, but it is just as likely to evict a useful block as a useless one.

LRU (least recently used) picks the victim that was accessed least recently. It takes advantage of temporal locality, but it is more complex to implement.

LFU (least frequently used) picks the victim that is accessed least frequently. It helps cache performance, but it is also complex to implement, since an access count must be kept for every block.

FIFO (first in, first out) order picks the oldest block as the victim; it approximates LRU with simpler hardware.
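As an example of one of the policies listed above, here is a compact LRU sketch for a single cache set using Python's OrderedDict; the class name, number of ways, and tag sequence are arbitrary choices for the example.

```python
from collections import OrderedDict

# LRU replacement for one cache set: the least recently used entry is evicted.
class LRUSet:
    def __init__(self, ways=2):
        self.ways = ways
        self.entries = OrderedDict()           # tag -> data, least recently used first

    def access(self, tag, fetch):
        if tag in self.entries:
            self.entries.move_to_end(tag)      # refresh recency on a hit
            return "hit", self.entries[tag]
        if len(self.entries) >= self.ways:
            self.entries.popitem(last=False)   # evict the least recently used tag
        data = fetch(tag)                      # simulate fetching the block on a miss
        self.entries[tag] = data
        return "miss", data

s = LRUSet(ways=2)
for tag in [1, 2, 1, 3, 2]:                    # 3 evicts 2 (the LRU entry), so the final 2 misses again
    print(tag, s.access(tag, lambda t: f"block-{t}")[0])
```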

Cache Read Policy:-

A cache can be direct-mapped, fully associative, or set-associative.

For a direct-mapped cache, a cache block can be placed in only one specific location, determined by the cache block number, so the system address is partitioned into tag, line-index, and block-offset fields. In this case the cache only stores the tag along with the data of the whole cache block.
For a fully associative cache, a cache block can be placed anywhere in the cache, so the system address is partitioned into tag and block-offset fields only. Here, too, the cache stores the tag as well as the cache block data.
A set-associative cache is something in between a direct-mapped cache and a fully associative cache. The cache is partitioned into a number of sets, and a cache block can be placed in only one particular set. However, each set has multiple ways, and the ways are fully associative, so a block can be stored anywhere within its set. The system address is therefore partitioned into tag, set-index, and block-offset fields. Again, the cache stores the tag as well as the cache block data.
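The widths of these address fields follow directly from the cache geometry. Below is a minimal sketch, assuming a byte-addressed machine with power-of-two block size, line count, and associativity; the parameter values in the calls are examples only.

```python
import math

# Partition a system address into tag / set-index / block-offset bit widths.
def address_fields(address_bits, block_size, num_lines, ways):
    offset_bits = int(math.log2(block_size))          # selects a byte within the block
    num_sets = num_lines // ways                      # fully associative => one set, no index
    index_bits = int(math.log2(num_sets)) if num_sets > 1 else 0
    tag_bits = address_bits - index_bits - offset_bits
    return {"tag": tag_bits, "index": index_bits, "offset": offset_bits}

print(address_fields(32, block_size=64, num_lines=128, ways=1))    # direct-mapped
print(address_fields(32, block_size=64, num_lines=128, ways=128))  # fully associative
print(address_fields(32, block_size=64, num_lines=128, ways=4))    # 4-way set-associative
```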
There are several factors that should be kept in mind when choosing the read policy:
Block conflict rate
Cache utilization
Control logic complexity
Cache access speed

Cache Write Policy:-

There are two kinds of write policies on a cache hit: write-through and write-back. A write-through cache updates both the cache and main memory on a write hit, while a write-back cache updates main memory only when a modified (dirty) cache block is evicted.

A write-back cache places a lower demand on memory bandwidth and cache-memory bus bandwidth, and it is faster on a write hit since there is no need to wait for main memory to be updated. A write-through cache simplifies I/O because main memory is always up to date, but its write-hit penalty is the same as the miss penalty, which introduces more complex scheduling and replacement.

There are two kinds of write policies on a cache miss: no-write-allocate (write-around) and write-allocate. The write-around technique only updates main memory on a write miss, while a write-allocate cache fetches the block into the cache and performs the write there.

Usually, the combinations are:

Write-through + write-around
Write-back + write-allocate
The first combination makes sense: even if there are subsequent writes to the same block, every write must still go to main memory, so there is little benefit in allocating the block. The second combination is used in the hope that subsequent writes to the same block can be "captured" by the cache.
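The sketch below contrasts the two usual combinations in miniature: the "memory" and "cache" here are plain dictionaries, and all class and method names are illustrative rather than taken from any real library.

```python
# Write-through + write-around vs. write-back + write-allocate, in miniature.
class WriteThroughCache:
    def __init__(self):
        self.cache, self.memory = {}, {}

    def write(self, addr, value):
        if addr in self.cache:
            self.cache[addr] = value      # update the cache on a write hit
        # write-around: on a miss the block is NOT brought into the cache
        self.memory[addr] = value         # main memory is always updated (write-through)

class WriteBackCache:
    def __init__(self):
        self.cache, self.dirty, self.memory = {}, set(), {}

    def write(self, addr, value):
        self.cache[addr] = value          # write-allocate: the block enters the cache on a miss
        self.dirty.add(addr)              # main memory is updated later, on eviction

    def evict(self, addr):
        if addr in self.dirty:
            self.memory[addr] = self.cache[addr]   # write the dirty block back now
            self.dirty.discard(addr)
        self.cache.pop(addr, None)

wt, wb = WriteThroughCache(), WriteBackCache()
wt.write(0x10, "A"); wb.write(0x10, "A")
print(wt.memory)          # {16: 'A'} -> memory already up to date
print(wb.memory)          # {}        -> not yet written back
wb.evict(0x10)
print(wb.memory)          # {16: 'A'}
```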

  

