In: Computer Science
The title of the course is OS. All answers should be based on that. Please do not copy and paste answers from Chegg or Google for me. All answers should be based on your own understanding of the course. Please try, as much as possible, to answer the questions as asked, rather than setting your own questions and answering them. Leave it unanswered if you only want to copy and paste answers.
**********************************************************************************************************************************************************************************************************************
(2)
(a). Designers are constantly undertaking research into
possible improvements that they can make to the various types
of memory that computers use. Oftentimes, the
designers grapple with issues in their design of
computer memories.
(i). Critically compare and contrast cache memory
and magnetic disk storage in a computer system
that you frequently use. You may draw a
suitable diagram and/or a table of comparisons to illustrate your
answer, if you desire.
(ii). Discuss three possible issues that cache memory
designers, in your opinion, must address in their
bid to come up with effective and efficient
cache memory to satisfy the demands of
computer users; and explain how they can solve
each of the issues you have discussed.
(b).A business student has approached you for assistance
in doing an ICT assignment that he has been
struggling with, regarding modes of execution of
instructions in a computer system.
(i). Clearly explain, in your own words, the
difference between User mode and Kernel
mode of execution of instructions in a
computer system; draw a diagram to illustrate
your answer.
(ii). Describe the circumstances for which System
Calls may be invoked; and explain how the
operating system responds to such invocation.
a)
i)
| Cache memory | Magnetic disk storage |
| --- | --- |
| Volatile: when power to the system is turned off, the data is lost. | Non-volatile: it retains data even without power. |
| Part of primary memory, alongside RAM, in the memory hierarchy. | Secondary storage; although storage is also a type of memory, it differs from cache and primary memory because it is non-volatile. |
| High-performance data located close to the CPU; SRAM is more expensive than DRAM, and DRAM is more expensive than storage media. | Slower speeds, but much higher capacity at a lower cost. |
| Upgradeable, but expensive compared to storage media. | Upgradeable; HDDs are affordable, and SSD prices are dropping closer to being commensurate with fast HDDs. |
| Much higher performance. | Storage performance is much slower than memory. |
| Caches frequently repeated instructions for better efficiency; primary memory communicates CPU instructions to other computer devices and components. | Stores data until scheduled data movement or deletion; an unpowered hard disk or tape will retain data indefinitely. |
| Data is stored temporarily. | Data is stored permanently as magnetized regions. |
ii) Issues
1) Cache fundamentals. CPU caches are normally associative memories; the key is a (real or virtual) memory address. Because of the difficulty of building highly associative memories, most CPU caches are organized as two-dimensional arrays: the first dimension is the set, and the second dimension is the set associativity. The set ID is determined by a function of the address bits of the memory request; the line ID within a set is determined by matching the address tags in the target set against the reference address. Caches with a set associativity of one are commonly referred to as direct-mapped caches, while caches with set associativity greater than one are referred to as set-associative caches. If there is only one set, the cache is called fully associative.

Each cache entry consists of some data and a tag that identifies the main-memory address of that data. Whether a memory request can be satisfied by the cache is determined by comparing the requested address with the address tags in the tag array. There are thus two parts to cache access: one is to access the tag array and perform the tag comparison to determine whether the data is in the cache; the other is to access the data array to bring out the requested data. For a set-associative cache, the results of the tag comparison are used to select the requested line from within the set driven out of the data array.
2) Performance evaluation. Trace-driven simulation is the standard methodology for the study and evaluation of cache memory designs. Trace-driven simulation is a form of event-driven simulation in which the events are collected from a real system rather than generated randomly. For cache memory studies, the traces consist of sequences of memory reference addresses, and may be collected by a variety of hardware and/or software methods; comprehensive discussions of the technique and its strengths and weaknesses appear in the literature. For example, one widely used trace is of the server side of a workload similar to the Transaction Processing Performance Council's benchmark C (TPC-C), collected with a software tracing tool on an IBM RISC System/6000 running AIX. Other commonly used traces consist of five integer-intensive programs (Compress, Gcc, Go, Li, and Vortex) and three floating-point-intensive applications (Apsi, Su2cor, and Turb3d) from the SPEC95 benchmark suite, collected with the Shade tool on Sun SPARC systems running Solaris. In such simulations, the first 50 million instructions of each trace are typically used for cache warm-up purposes.
3) Access time and miss-ratio targets. The performance of a cache is determined both by the fraction of memory requests it can satisfy (the hit/miss ratio) and by the speed at which it can satisfy them (the access time). There have been numerous studies of cache hit/miss ratios with respect to cache size, line size, and set associativity. In general, larger caches with higher set associativity have higher hit ratios. Unfortunately, such cache topologies tend to incur longer access times, because in a set-associative cache, after the tags for the lines in the set are read out, a comparison is performed (in parallel) and then a mux is used to select the data corresponding to the matching tag. For instance, results from the on-chip timing model CACTI suggest that a 16KB direct-mapped cache with 16-byte lines is about 20% faster than a similar 2-way set-associative cache. As addresses become longer, the tag comparisons become slower.

A general strategy for simultaneously achieving a fast access time and a high hit ratio is to provide both a fast and a slow access path: the fast path achieves a fast access time for the majority of memory references, while the slow path boosts the effective hit ratio. We refer to these two cases as fast access and slow access, respectively. Techniques for achieving fast cache access while maintaining a high hit ratio can be broadly classified into four categories:
- Decoupled cache: the data-array access and line selection are carried out independently of the tag-array access and comparison, so as to circumvent the delay imbalance between the paths through the tag and data arrays.
- Multiple-access cache: a direct-mapped cache is accessed sequentially more than once, in order to achieve the access time of a direct-mapped cache for the fast access and the hit ratio of a set-associative cache overall.
- Augmented cache: a direct-mapped cache is augmented with a small fully-associative cache to improve the overall hit ratio without lengthening the access time.
- Multi-level cache: a small and fast upstream cache is used for the fast access, while one or more larger and slower downstream caches capture the fast-access misses with minimal penalties.
b) i) Comparison of User mode and Kernel mode
| User Mode | Kernel Mode |
| --- | --- |
| The restricted mode in which application programs execute and start out. | The privileged mode, which the computer enters when accessing hardware resources. |
| Also called slave mode or restricted mode. | Also called system mode, master mode, or privileged mode. |
| Each process gets its own address space. | Processes share a single kernel address space. |
| If an interrupt occurs, only one process fails. | If an interrupt occurs, the whole operating system might fail. |
| Access to kernel programs is restricted; they cannot be accessed directly. | Both user programs and kernel programs can be accessed. |
ii) System calls
When a program in user mode requires access to RAM or a hardware resource, it must ask the kernel to provide access to that resource. This is done via something called a system call.
When a program makes a system call, a trap instruction switches execution from user mode to kernel mode. This is called a mode switch (sometimes loosely called a context switch).
The kernel then provides the resource the program requested. After that, another mode switch occurs, returning execution from kernel mode back to user mode.
Generally, system calls are made by user-level programs in the following situations:
- File operations: creating, opening, reading, writing, closing, or deleting files.
- Process control: creating and terminating processes, and waiting for them to finish.
- Device management: requesting, reading from, writing to, and releasing devices.
- Information maintenance: getting or setting the time, date, or process attributes.
- Communication: sending and receiving messages, or sharing memory between processes.