Computer Memory

The document discusses the main design variables in multicore organizations, including the number of processor cores, cache memory allocation, interconnect methods, and hardware multithreading support. It highlights the advantages of shared L2 caches, such as improved inter-core communication and higher cache hit rates, compared to dedicated caches. Additionally, it outlines reasons for the shift to multicore designs, including power limitations, diminishing returns on instruction-level parallelism, and the focus on throughput over latency.

Uploaded by ashroughy.chy

1. At a top level, what are the main design variables in a multicore organization?

The main design variables in a multicore organization are:

● The number of processor cores on the chip.
● The amount and level of cache memory (L1, L2, L3).
● The allocation of cache memory, specifically whether caches are dedicated to each core or shared among multiple cores. This includes choices like:
  ○ Dedicated L1 caches per core (for instructions and data).
  ○ Choosing between a dedicated vs. shared L2 cache.
  ○ The inclusion of a shared L3 cache.
● The interconnect method (e.g., internal bus, crossbar switch) that the cores use to communicate with each other and with shared caches and memory.
● The support for hardware multithreading (e.g., SMT, simultaneous multithreading) within each core.
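
These variables can be pictured as a single configuration record. The sketch below is purely illustrative: every field name (num_cores, l2_shared, interconnect, smt_threads, and so on) is a hypothetical choice for this document, not the API of any real simulator or tool.

```python
from dataclasses import dataclass

@dataclass
class MulticoreConfig:
    num_cores: int     # number of processor cores on the chip
    l1_size_kb: int    # dedicated per-core L1 (instructions and data)
    l2_size_kb: int    # total L2 capacity
    l2_shared: bool    # shared L2 (True) vs. dedicated per-core L2 (False)
    l3_size_kb: int    # shared L3 capacity; 0 means no L3
    interconnect: str  # e.g. "bus" or "crossbar"
    smt_threads: int   # hardware threads per core (1 = no SMT)

# Example: four cores, shared L2 and L3, crossbar interconnect, 2-way SMT.
cfg = MulticoreConfig(num_cores=4, l1_size_kb=32, l2_size_kb=1024,
                      l2_shared=True, l3_size_kb=8192,
                      interconnect="crossbar", smt_threads=2)
```

Writing the variables down this way makes it clear that they are largely independent axes: a designer can vary one (say, the interconnect) while holding the others fixed.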

2. List some advantages of a shared L2 cache among cores compared to separate dedicated L2 caches for each core.

A shared L2 cache offers several advantages over dedicated per-core L2 caches:

1. Improved Inter-Core Communication: Cores can communicate and share data much faster through the shared cache than if they had to go out to main memory.
2. Dynamic Cache Allocation: The total cache space is shared. If one core has a high-demand task requiring more cache, it can use a larger portion of the shared cache, while an idle core uses less. This leads to more efficient use of the total cache resources.
3. Higher Cache Hit Rate: Data fetched by one core and potentially needed by another (a common scenario in parallel programs) is already in the shared cache, reducing access to slower main memory and improving overall performance.
4. Reduced Complexity: A single, large shared cache can sometimes be simpler to manage from a hardware perspective than coordinating multiple private caches to maintain coherence.
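
The communication and hit-rate advantages can be illustrated with a toy model that treats each cache as a set of block addresses. This is a deliberately minimal sketch: it ignores capacity limits, eviction, and real coherence protocols, and exists only to show why a block fetched by one core becomes a hit for another core under a shared cache.

```python
def access(cache, block):
    """Return True on a hit; on a miss, install the block and return False."""
    if block in cache:
        return True
    cache.add(block)
    return False

# Dedicated L2 per core: core 1 cannot see what core 0 fetched.
dedicated = [set(), set()]
access(dedicated[0], 0x40)                  # core 0 fetches block 0x40 (miss)
hit_dedicated = access(dedicated[1], 0x40)  # core 1 misses, must go to memory

# Shared L2: core 0's fetch fills the cache for core 1 as well.
shared = set()
access(shared, 0x40)                        # core 0 fetches block 0x40 (miss)
hit_shared = access(shared, 0x40)           # core 1 hits in the shared cache

print(hit_dedicated, hit_shared)            # False True
```

In the dedicated case, core 1's first access to block 0x40 is a miss even though core 0 already paid the memory latency for it; in the shared case it is a hit.
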
3. Give several reasons for designers' choice to move to a multicore
organization rather than increase parallelism within a single processor.

Designers moved to multicore organizations for several key reasons:

1. Power and Thermal Limitations: Increasing the clock speed and complexity of a single processor (e.g., using deeper pipelines for more instruction-level parallelism, ILP) leads to an exponential increase in power consumption and heat dissipation (the power wall). Multicore designs allow for better performance within strict power budgets by using multiple simpler, more power-efficient cores running at a moderate clock speed.
2. Diminishing Returns on ILP: It became increasingly difficult for hardware and compilers to find enough parallel instructions within a single thread of execution to keep a very complex, deeply pipelined processor busy. Multicore shifts the burden of finding parallelism to the software developer (thread-level parallelism, TLP), which is more abundant in modern applications.
3. Throughput over Latency: For many server and multimedia workloads, executing multiple tasks or threads simultaneously (increasing system throughput) is more important than reducing the latency of a single task. Multiple cores are inherently better at handling multiple threads.

4. A machine has 128 KB of virtual address space with a page size of 4 KB, and 32 KB of main memory. How many pages and page frames exist in this machine?

● Number of Pages (Virtual):
  Virtual address space = 128 KB
  Page size = 4 KB
  Number of pages = virtual address space / page size = 128 KB / 4 KB = 32 pages
● Number of Page Frames (Physical):
  Main memory size = 32 KB
  Page size = 4 KB (page frames are the same size as pages)
  Number of page frames = main memory size / page size = 32 KB / 4 KB = 8 page frames
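
The arithmetic above can be checked in a few lines of Python:

```python
KB = 1024

virtual_space = 128 * KB  # virtual address space
main_memory   = 32 * KB   # physical memory
page_size     = 4 * KB    # page and frame size

pages  = virtual_space // page_size  # 128 KB / 4 KB = 32 pages
frames = main_memory // page_size    # 32 KB / 4 KB  = 8 page frames
print(pages, frames)                 # 32 8
```

Since there are 32 virtual pages but only 8 physical frames, at most a quarter of the pages can be resident in memory at once; the rest live on backing store until referenced.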
