In: Computer Science
1. Explain how threads are used by the CPU to process tasks by describing a modern example, e.g., the multi-core mobile phone that you use every day has an interesting organisation of threads. However, it can be any other modern example of hardware that uses threads.
2. There are a number of techniques used by CPU designers to improve the performance of their processors. However, these optimisation strategies do not always work – for some workloads, they may have no effect, or even cause performance to degrade. What is a circumstance where simultaneous multi-threading (SMT) cannot offer any advantage, or possibly even cause a performance decrease?
1). ANSWER:
How threads are used by the CPU to process tasks:
How Threads Work
A thread is the unit of execution within a process. A process can have anywhere from just one thread to many threads.
Process vs. Thread
When a process starts, it is assigned memory and resources. Each thread in the process shares that memory and resources. In single-threaded processes, the process contains one thread. The process and the thread are one and the same, and there is only one thing happening.
In multithreaded processes, the process contains more than one thread, and the process is accomplishing a number of things at the same time (technically, it is sometimes only almost at the same time: the threads may take turns on the processor, which is concurrency rather than true parallelism).
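As a minimal sketch of this idea (assuming Python's standard threading module; the function and thread names are made up purely for illustration), the snippet below shows one process starting several threads, each working on its own task while sharing the process's resources:

```python
import threading
import time

def handle_task(task_id):
    # Each thread runs this function independently, inside the same process.
    print(f"{threading.current_thread().name} starting task {task_id}")
    time.sleep(0.1)  # Stand-in for real work, e.g. waiting on I/O.
    print(f"{threading.current_thread().name} finished task {task_id}")

# One process, several threads: the process works on several tasks
# at (almost) the same time.
threads = [
    threading.Thread(target=handle_task, args=(i,), name=f"worker-{i}")
    for i in range(4)
]
for t in threads:
    t.start()
for t in threads:
    t.join()  # Wait for every thread to finish before the process exits.
```

Whether these threads truly run in parallel depends on the hardware and the runtime; on a multi-core CPU they can execute simultaneously, while on a single core (or under CPython's global interpreter lock for CPU-bound work) they mostly interleave.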
There are two types of memory available to a process or a thread: the stack and the heap. It is important to distinguish between these two types of process memory because each thread will have its own stack, but all the threads in a process will share the heap.
Threads are sometimes called lightweight processes because they have their own stack but can access shared data. Because threads share the same address space as the process and the other threads within it, the operational cost of communication between threads is low, which is an advantage. The disadvantage is that a problem with one thread in a process can affect the other threads and the viability of the process itself.
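To make the stack/heap distinction and the low-cost communication concrete, here is a small illustrative sketch (again assuming Python's threading module; the variable names are hypothetical): each thread keeps its running total in its own call-stack frame, but all of them report results into a single dictionary that lives on the shared heap.

```python
import threading

shared_counts = {"processed": 0}   # One object on the heap, visible to every thread.
lock = threading.Lock()            # Coordinates writes to the shared object.

def worker(n_items):
    local_total = 0                # Local variable: private to this thread's stack frame.
    for _ in range(n_items):
        local_total += 1           # "Work" that only touches thread-private state.
    # Communicating the result is just a write to shared memory; nothing is
    # copied between address spaces, which keeps the cost of communication low.
    with lock:
        shared_counts["processed"] += local_total

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(shared_counts["processed"])  # 4000: every thread updated the same heap object.
```

The same shared-address-space property is what makes the disadvantage above possible: an unguarded write or an error in one thread can corrupt data that every other thread in the process depends on.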
2). ANSWER:
Techniques used by CPU designers to improve the performance of their processors:
CPUs no longer deliver the same kind of performance improvements as in the past, raising questions across the industry about what comes next.
The growth in processing power delivered by a single CPU core began stalling out at the beginning of the decade, when power-related issues such as heat and noise forced processor companies to add more cores rather than pushing up the clock frequency. Multi-core designs, plus a boost in power and performance at the next processor node, provided enough improvement in performance to sustain the processor industry for the past several process nodes. But as the benefits from technology scaling slow down, or for many companies stop completely, this is no longer a viable approach.
This reality has implications well beyond CPU design. Software developers have come to expect ever-growing compute and memory resources, but the CPU no longer can deliver the kinds of performance benefits that scaling used to provide. Software programmability and rich feature sets have been a luxury afforded by Moore’s Law, which has provided a cushion for both hardware and software engineers.
“Because of Moore’s Law, the way that computing has grown and accelerated is partly because Intel and others kept pushing on the next generation node, and thus the need to optimize the compute engine itself has been less important,” says Nilam Ruparelia, senior director for strategic marketing at Microsemi, a Microchip company. “But it also happened because software productivity has gone up faster than Moore’s Law. If you make it easy to program, you enable a greater number of people to program. The ability of software to do a variety of things has grown up significantly.”