In: Computer Science
Q1/
A- What is cache memory and how does it work?
B- What are the three cache mapping approaches, and what are the pros and cons of each approach?
C- What are the cache replacement policies and the read/write policies?
A:-
Cache memory is a small, fast area of computer memory in which frequently used data and instructions are temporarily stored so that they can be written and read quickly, i.e. the access time is very low.
Cache memory briefly holds the data, instructions and programs that are most frequently used by the CPU. When data is needed, the CPU first looks in the cache for faster access, because RAM is slower and further away from the CPU. When the required data is found in the cache, this is called a cache hit. A cache hit lets the processor retrieve the data quickly, making the overall system more efficient.
Since the cache is much smaller than RAM, the data it stores is only temporary, so it may not hold the data the processor needs. When the cache does not contain the processor's required data, this is called a cache miss, and in this case the CPU falls back to RAM (and ultimately the hard drive) to fetch the data.
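As a rough illustration of this hit/miss flow, here is a minimal Python sketch; the cache capacity, the slow_memory dictionary standing in for RAM, and all names are made-up assumptions for illustration, not a hardware model.

slow_memory = {addr: addr * 10 for addr in range(100)}  # stand-in for RAM
cache = {}          # address -> data
CACHE_CAPACITY = 4  # the cache is much smaller than "RAM"

def read(addr):
    if addr in cache:                 # cache hit: fast path
        return cache[addr], "hit"
    data = slow_memory[addr]          # cache miss: go to the slower memory
    if len(cache) >= CACHE_CAPACITY:  # cache is full, evict an arbitrary entry
        cache.pop(next(iter(cache)))
    cache[addr] = data                # keep a copy for future accesses
    return data, "miss"

for a in [1, 2, 1, 3, 4, 5, 1]:
    print(a, read(a))

Repeated accesses to the same address hit in the cache after the first miss, which is exactly the benefit described above.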
B:-
The three cache mapping approaches are as follows:-
1:-Direct-mapped cache
Pros:-
This cache mapping approach is very power efficient, because it avoids searching through all the cache lines.
It is also simple and inexpensive, since only one tag needs to be checked at a time and the comparison hardware is cheap.
Cons:-
The main disadvantage of this approach is its low cache hit rate: only one cache line is available per set, so the resident line is replaced every time a new memory block that maps to the same line is referenced. This is the main cause of conflict misses.
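A rough sketch of how a direct-mapped cache computes the single possible line for each block, and why two blocks sharing an index keep evicting each other; the line count and block numbers below are assumed values for illustration only.

# Sketch of direct-mapped placement: each block maps to exactly one line.
NUM_LINES = 8
lines = [None] * NUM_LINES   # each entry holds the tag currently stored

def access(block_number):
    index = block_number % NUM_LINES      # the only line this block can use
    tag = block_number // NUM_LINES
    if lines[index] == tag:
        return "hit"
    lines[index] = tag                    # replace whatever was there
    return "miss"

# Blocks 0 and 8 both map to line 0, so they keep evicting each other
# (conflict misses) even though the other 7 lines stay empty.
print([access(b) for b in [0, 8, 0, 8, 0]])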
2:-Fully associative cache
Pros:-
The fully associative cache structure provides the flexibility of placing a memory block in any of the cache lines, and hence allows full utilization of the cache.
This cache mapping approach provides a better cache hit rate.
It also offers the flexibility of using a wide variety of replacement algorithms when a cache miss occurs.
Cons:-
This cache mapping approach is slow, because it takes time to iterate through all the cache lines.
It is power hungry, because it has to compare every cache line in order to locate a block.
It is also the most expensive of all the methods, due to the high cost of the associative-comparison hardware.
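A minimal sketch of the full tag search that makes this approach slow and power hungry; the line count and the naive victim choice are assumptions made only for illustration.

# Sketch of a fully associative lookup: any block may sit in any line,
# so a lookup has to compare the tag of every line (done in parallel in
# real hardware, which is what makes it expensive and power hungry).
NUM_LINES = 8
lines = [None] * NUM_LINES   # each entry holds a block number (the tag)

def access(block_number):
    for stored in lines:                  # search ALL lines
        if stored == block_number:
            return "hit"
    victim = lines.index(None) if None in lines else 0  # naive victim choice
    lines[victim] = block_number
    return "miss"

# Blocks 0 and 8 no longer conflict: both can be resident at the same time.
print([access(b) for b in [0, 8, 0, 8, 0]])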
3:-Set-associative cache
Pros:-
This cache mapping approach is a trade-off between the direct-mapped and the fully associative cache.
It offers the flexibility of using replacement algorithms when a cache miss occurs.
Cons:-
This cache mapping approach will not always use all the available cache lines effectively, and it can still suffer from conflict misses.
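A small sketch of the compromise, assuming a 2-way set-associative cache with four sets (all sizes are illustrative assumptions): the set index is computed as in a direct-mapped cache, and only the ways inside that set are searched associatively.

# Sketch of a 2-way set-associative cache: a block maps to one set,
# but may occupy either way within that set.
NUM_SETS = 4
WAYS = 2
sets = [[None] * WAYS for _ in range(NUM_SETS)]

def access(block_number):
    index = block_number % NUM_SETS       # which set the block must use
    tag = block_number // NUM_SETS
    ways = sets[index]
    if tag in ways:                       # search only this set's ways
        return "hit"
    victim = ways.index(None) if None in ways else 0   # naive replacement
    ways[victim] = tag
    return "miss"

# Blocks 0 and 4 map to the same set but fit in its two ways, so they stop
# conflicting; a third block (8) in the same set forces an eviction.
print([access(b) for b in [0, 4, 0, 4, 8, 0]])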
C:- The cache replacement policies are as follows:-
The goal of a cache replacement policy is to identify and predict which cache block will not be needed in the near future. Note that a replacement policy is not applicable to a direct-mapped cache, since each memory block has a fixed location in a direct-mapped cache.
The replacement policies include algorithms such as random, least frequently used (LFU), least recently used (LRU), FIFO order, etc.
Random replacement picks a victim randomly. It is easy and fast to implement, but it is equally likely to evict a useful block as a useless one.
LRU (least recently used) picks the victim that was accessed least recently. It takes advantage of temporal locality, but it is complex to implement.
LFU (least frequently used) picks the victim that is accessed least frequently. It helps with cache performance, but it is also complex to implement, since it must keep an access count for every block.
FIFO (first in, first out) order picks the oldest block as the victim, and it approximates LRU with simpler hardware.
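A minimal sketch of LRU replacement for a single fully associative set; the capacity, the counter-based bookkeeping and the access pattern are assumptions for illustration, and real hardware usually only approximates this.

# Sketch of LRU replacement for one fully associative set of 4 lines.
# The per-line "last used" counters are the bookkeeping that makes LRU
# costly, and that FIFO or random replacement avoids.
CAPACITY = 4
lines = {}      # block_number -> last-used counter
clock = 0

def access(block_number):
    global clock
    clock += 1
    if block_number in lines:
        lines[block_number] = clock          # refresh recency on a hit
        return "hit"
    if len(lines) >= CAPACITY:
        victim = min(lines, key=lines.get)   # least recently used block
        del lines[victim]
    lines[block_number] = clock
    return "miss"

# Block 1 is reused before block 5 arrives, so LRU keeps block 1 and
# evicts block 2, the least recently used block.
print([access(b) for b in [1, 2, 3, 4, 1, 5, 2]])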
Cache Read Policy:-
A cache can be direct-mapped, fully associative, or set-associative.
For a direct-mapped cache, a cache block can only be placed in one specific location, determined by the cache block number, so the system address can be partitioned into tag, line index and block offset fields. In this case, the cache stores the tag along with the data of the entire cache block.
For a fully associative cache, a cache block can be placed anywhere in the cache, so the system address can be partitioned into just a tag and a block offset field. In this case, the cache again stores the tag as well as the cache block data.
A set-associative cache is something in between a direct-mapped cache and a fully associative cache. The cache memory is partitioned into a number of sets, and a cache block can only be placed in one particular set. However, each set can have multiple ways, and the ways are fully associative, so a cache block can be stored anywhere within its set. The system address can therefore be partitioned into tag, set index and block offset fields. Again, the cache stores the tag as well as the cache block data.
There are several factors that should be kept in mind when choosing
the read policy:
Block conflict rate
Cache utilization
Control logic complexity
Cache access speed
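To make the address partitioning concrete, here is a small sketch that splits an address into block offset, set index and tag, as a set-associative cache would; the block size and set count are assumed values, a fully associative cache would use zero index bits, and a direct-mapped cache corresponds to one way per set.

# Sketch of partitioning a system address into tag | index | offset fields.
BLOCK_SIZE = 64          # bytes per cache block -> 6 offset bits
NUM_SETS = 128           # sets in the cache     -> 7 index bits
OFFSET_BITS = BLOCK_SIZE.bit_length() - 1
INDEX_BITS = NUM_SETS.bit_length() - 1

def split(address):
    offset = address & (BLOCK_SIZE - 1)                 # byte within the block
    index = (address >> OFFSET_BITS) & (NUM_SETS - 1)   # which set to look in
    tag = address >> (OFFSET_BITS + INDEX_BITS)         # compared against stored tags
    return tag, index, offset

print(split(0x12345))   # -> (tag, set index, block offset)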
Cache Write Policy:-
There are two types of write policies on a cache hit: write-through and write-back. A write-through cache updates both the cache and main memory on a cache hit, while a write-back cache updates main memory only when a cache block is evicted.
A write-back cache has lower requirements for memory bandwidth and cache-memory bus bandwidth, and it is faster on a write hit, since there is no need to wait for the main memory update. A write-through cache simplifies I/O because main memory is always up-to-date, but its hit penalty is the same as its miss penalty, which introduces more complex scheduling and replacement.
There are two types of write policies on a cache miss: no-write-allocate (write-around) and write-allocate. The write-around technique only updates main memory on a write miss, while write-allocate first fetches the cache block into the cache and then performs the write.
Usually, the combinations are:
Write-through + write-around
Write-back + write-allocate
The first combination makes sense: even if there are subsequent writes to the same block, the write must still go to main memory anyway. The second combination is used in the hope that subsequent writes to the same block can be “captured” by the cache.
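A minimal sketch of the two usual combinations; the tiny dictionaries standing in for memory and cache, and the function names, are assumptions for illustration, and the eviction of dirty blocks is not modeled.

# Sketch of the two usual write-policy combinations on a toy cache.
memory = {}                         # stand-in for main memory
cache = {}                          # block -> (data, dirty flag)

def write_through_write_around(block, data):
    if block in cache:
        cache[block] = (data, False)    # hit: update the cache ...
    memory[block] = data                # ... and always update memory;
                                        # on a miss the block is NOT fetched

def write_back_write_allocate(block, data):
    if block not in cache:
        cache[block] = (memory.get(block, 0), False)   # miss: fetch the block first
    cache[block] = (data, True)         # write only the cache, mark it dirty
    # main memory is updated later, when the dirty block is evicted
    # (eviction is not modeled in this sketch)

write_through_write_around(1, 11)
write_back_write_allocate(2, 22)
print("memory:", memory)    # {1: 11}  -- block 2 is still only in the cache
print("cache:", cache)      # {2: (22, True)}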