In: Computer Science
Task 6 – CPU Architecture
Please explain in your own words each part of Question 6. Cite your references in your "References" / "Bibliography".
(a)
1. For modern CPU architectures, explain what is true about this statement:
"Why you cannot use CPU clock-speed only to compare computer performance."
Marks will be awarded for research and clear explanations of CPU concepts: threads, multi-threading, cores, the relationship between cores and threads, multi-tasking, and multi-processing.
2. Explain how threads are used by the CPU to process tasks by describing a modern example; e.g., the multi-core mobile phone that you use every day has an interesting organisation of threads. However, it can be any other modern example of hardware that uses "threads".
(b) There are a number of techniques used by CPU designers to improve the performance of their processors. However, these optimisation strategies do not always work: for some workloads, they may have no effect, or may even cause performance to degrade.
What is a circumstance where simultaneous multi-threading (S.M.T.) cannot offer any advantage, or may possibly even cause a performance decrease?
Why you cannot use CPU clock-speed only to compare computer performance.
Dynamic Clock Speed Adjustments
Modern CPUs are not fixed at a single speed, particularly laptop, smartphone, tablet, and other mobile CPUs where power efficiency and heat production are major concerns. Instead, the CPU runs at a slower speed when idle (or when you are not doing much) and a faster speed under load, dynamically adjusting its speed as needed. When doing something demanding, the CPU increases its clock rate, gets the work done as quickly as possible, and then drops back to the slower clock rate that saves power.
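The adjustment described above can be sketched as a toy policy (an assumed, simplified rule for illustration, not a real operating-system frequency governor):

```python
# Toy sketch of dynamic frequency scaling (assumed, simplified policy):
# raise the clock under heavy load, drop it again near idle to save power.
def next_freq(load, freq_ghz, fmin=1.0, fmax=4.0, step=0.5):
    """Return the clock speed for the next interval, given load in [0, 1]."""
    if load > 0.7:                        # busy: speed up to finish quickly
        return min(fmax, freq_ghz + step)
    if load < 0.3:                        # near idle: slow down to save power
        return max(fmin, freq_ghz - step)
    return freq_ghz                       # moderate load: hold steady

freq = 1.0
for load in [0.9, 0.9, 0.9, 0.1, 0.1]:    # a burst of work, then idle
    freq = next_freq(load, freq)
print(freq)                                # ramped up to 2.5 GHz, back to 1.5
```

The thresholds, step size, and frequency limits are all illustrative; real governors use far more elaborate heuristics, but the ramp-up/ramp-down shape is the same.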
So, if you are shopping for a laptop, you will also want to consider this. Bear in mind that cooling is a factor, too: a CPU in an Ultrabook may only be able to run at its top speed for a certain amount of time before dropping to a lower speed because it cannot be properly cooled. On the other hand, a computer with the exact same CPU but better cooling may deliver better, more consistent performance, because it can keep the CPU cool enough to run at those top speeds for longer.
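The overall point of part (a)1, that clock speed alone does not determine performance, can be illustrated with a toy model (all numbers below are illustrative, not benchmarks): effective throughput also depends on instructions per cycle (IPC), core count, and how parallel the workload is.

```python
# Toy model: throughput depends on clock, IPC, cores, and the workload's
# parallel fraction (Amdahl-style scaling), not on clock speed alone.
def throughput(clock_ghz, ipc, cores, parallel_fraction):
    single = clock_ghz * ipc   # work rate of one core
    # Only the parallel fraction of the workload benefits from extra cores.
    speedup = 1 / ((1 - parallel_fraction) + parallel_fraction / cores)
    return single * speedup

# A 3.0 GHz quad-core with higher IPC beats a 4.0 GHz dual-core with
# lower IPC on a mostly parallel workload, despite the slower clock:
fast_clock = throughput(4.0, 1.0, 2, 0.9)
slow_clock = throughput(3.0, 2.0, 4, 0.9)
print(slow_clock > fast_clock)   # True
```

This is why comparing CPUs by clock speed alone is misleading: the higher-clocked chip in the example loses because IPC and core count matter just as much.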
2. How threads are used by the CPU to process tasks
How Threads Work
A thread is the unit of execution within a process. A process can have anywhere from just one thread to many threads.
Process vs. Thread
When a process starts, it is assigned memory and resources. Each thread in the process shares that memory and resources. In single-threaded processes, the process contains one thread. The process and the thread are one and the same, and there is only one thing happening.
In multithreaded processes, the process contains more than one thread, and the process is accomplishing a number of things at the same time (technically, sometimes only almost at the same time: on a single core, threads run concurrently by interleaving rather than in true parallel).
A process or thread has two types of memory available to it: the stack and the heap. It is important to distinguish between these two types of process memory, because each thread has its own stack, but all the threads in a process share the heap.
Threads are sometimes called lightweight processes because they have their own stack but can access shared data. Because threads share the same address space as the process and other threads within the process, the operational cost of communication between the threads is low, which is an advantage. The disadvantage is that a problem with one thread in a process will certainly affect other threads and the viability of the process itself.
(b) Techniques used by CPU designers to improve the performance of their processors
CPUs no longer deliver the same kind of performance improvements as in the past, raising questions across the industry about what comes next.
The growth in processing power delivered by a single CPU core began stalling at the beginning of the decade, when power-related issues such as heat and noise forced processor companies to add more cores rather than pushing up the clock frequency. Multi-core designs, plus a boost in power and performance at the next processor node, provided enough improvement in performance to sustain the processor industry for the past several process nodes. But as the benefits from technology scaling slow down, or for many companies stop completely, this is no longer a viable approach.
This reality has implications well beyond CPU design. Software developers have come to expect ever-growing compute and memory resources, but the CPU no longer can deliver the kinds of performance benefits that scaling used to provide. Software programmability and rich feature sets have been a luxury provided by Moore’s Law, which has provided a cushion for both hardware and software engineers.
"Because of Moore's Law, the way that computing has grown and accelerated is partly because Intel and others kept pushing on the next-generation node, and thus the need to optimize the compute engine itself has been less important," says Nilam Ruparelia, senior director for strategic marketing at Microsemi, a Microchip company. "But it also happened because software productivity has gone up faster than Moore's Law. If you make it easy to program, you enable a greater number of people to program. The ability of software to do a variety of things has grown significantly."