In: Computer Science
Pages not Associated with Files, Stored in Swap Space
On Write Attempts to Shared Pages, Copying Pages Then Writing to the Copy
Selecting the Page with the Highest Count
Page Replacement Model Based on Working-Set Strategy
Selecting the Page Not Accessed for the Longest Time
Demand Paging Scheme that Brings Pages Into Memory Only When Referenced
Frames Selected for Replacement by Page-Replacement Algorithms
Alternative to Paging Involving Compressing Frames to Decrease Memory Usage
Page Faults per Unit Time
A FIFO Algorithm that, When a Page's Reference Bit Is Set, Clears the Bit But Does Not Replace the Page
Operating-System Algorithms for Allocating Frames Among Processes Demanding Them
Paging Memory at High Rates When There is Insufficient Physical Memory for Virtual Memory Demand
Allowing Processes to Select Replacement Frames from Specifically Allocated Frame Pool
Computer Architecture Feature in Which Memory Access Time Varies Depending on Which Memory a Core Accesses
Splitting Allocated Memory into Chunks for Objects of a Given Size. As Objects Are Freed, Their Chunks Are Marked Free for Reuse, Eliminating Fragmentation
Exception Resulting from Reference to Non-Memory-Resident Memory Page
List of Pages Accessed Over a Period of Time
Selecting the Page with Lowest Count
Allowing Execution of Processes Not Completely in Memory. Separating Logical Address Space from Physical Memory for Easier Programming and a Larger Address Space.
Writing Zeros Into Pages Before Making Them Available (Prevents Old Data Access)
Instance of Class or Data Structure
Algorithm that Chooses Victim Frames
Allocating More Virtual Memory Than Existing Physical Memory
Assigning Equal Numbers of Frames to Processes
Allocating Memory from Fixed-Size Segments Consisting of Physically Contiguous Pages
Logical View of Process Memory Storage
Allowing Processes to Select Replacement Frames from Entire Frame Pool
Page Faults Per Memory Access Attempt
Buddy System Allocator that Satisfies Memory Requests in Powers of 2 from Fixed-Size Segments Consisting of Contiguous Pages
Buddy System Combining Adjacent Freed Memory into Larger Segments
Set of Pages in Most Recent Page References
Two or More Slabs
Routines that Scan Memory and Free Frames to Maintain a Minimum Level of Available Free Memory
Assigning Page Frames According to Process Size
Page Replacement Algorithm with Lowest Possible Page-Fault Rate
An MMU Bit Indicating Referenced Pages
Tendency of Processes to Reference Memory in Patterns.
Selecting Physical Memory Frames To Be Replaced on New Page Allocation
Memory Access Model Based on Tracking Most Recently Accessed Pages
Kernel Data Structure Containing List of Available Physical Memory Frames
Observation that page-fault rates may increase as the number of allocated frames increases
Compressed Space Divided by Original Space
Limited Set of Most Recently Accessed Pages
Algorithms that Avoid Thrashing by Not Allowing Processes to Steal Frames from Other Processes
Circular Queue Containing Candidate Victim Frames, Replaced only if Not Recently Referenced
Secondary Storage Backing-Store Used For Out-Of-Memory Pages
An MMU Bit Used to Indicate Modified Frames Requiring Save Before Replacement
Memory Management: Bringing in Pages from Storage as Needed Rather than, e.g., in their Entirety at Process Load Time
Linux Routine that Executes When Free Memory is Very Low, Terminating Processes to Free Memory
System Call That Duplicates a Process as a Child, Suspends the Parent, and Shares the Address Space Between Child and Parent for Both Read and Write Operations
================================================================
Match based on the above answers:
Virtual Memory
Virtual Address Space
Demand Paging
Page Fault
Pure Demand Paging
Locality of Reference
Swap Space
Free-Frame List
Zero-Fill-On-Demand
Page-Fault Rate
Anonymous Memory
Copy-On-Write
Virtual Memory Fork
Over-Allocating
Page Replacement
Victim Frames
Modify Bit
Dirty Bit
Frame-Allocation Algorithm
Page-Replacement Algorithm
Reference String
Belady's Anomaly
Optimal Page-Replacement Algorithm
Least Recently Used (LRU)
Reference Bit
Second-Chance Page-Replacement Algorithm
Clock
Least Frequently Used (LFU)
Most Frequently Used (MFU)
Equal Allocation
Proportional Allocation
Global Replacement
Local Replacement
Reapers
Out-Of-Memory (OOM) killer
Non-Uniform Memory Access (NUMA)
Thrashing
Local Replacement Algorithm
Priority Replacement Algorithm
Locality Model
Working-Set Model
Working-Set Window
Working Set
Page-Fault Frequency (PFF)
Memory Compression
Compression Ratio
Buddy System
Power-of-2 Allocator
Coalescing
Slab Allocation
Cache
Object
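Several of the definitions above describe the second-chance/clock
algorithm. A minimal sketch in C of how the clock hand sweeps a
circular queue of frames (illustrative names, not from the course
notes):

#include <stdbool.h>
#include <stddef.h>

#define NFRAMES 64

/* One entry per physical frame: the reference bit is set by the
 * MMU on access and cleared by the clock hand. */
struct frame {
    int  page;       /* page number held, -1 if free */
    bool referenced;
};

static struct frame frames[NFRAMES];
static size_t hand;  /* clock hand: next candidate victim */

/* Sweep the circular queue, clearing reference bits; evict the
 * first frame found with its bit already clear. */
size_t choose_victim(void)
{
    for (;;) {
        if (!frames[hand].referenced) {
            size_t victim = hand;            /* not recently used */
            hand = (hand + 1) % NFRAMES;
            return victim;
        }
        frames[hand].referenced = false;     /* second chance */
        hand = (hand + 1) % NFRAMES;
    }
}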
• A smaller page size leads to smaller page tables – False – need
more entries because we have more pages
• A smaller page size leads to more TLB misses – True – the TLB
covers less memory, so it is less likely that a TLB entry's page
encompasses the address we are after.
• A smaller page size leads to fewer page faults – True
In practice, initial faulting-in is usually less important than
page faults generated once the application is up and running.
Once the app is up and running, reducing the page size gives a
more accurate determination of the working set, which is in turn
smaller, which requires less memory, which in turn leads to a
lower page-fault rate.
• A smaller page size reduces paging I/O throughput – True
• Threads are cheaper to create than processes – True
• Kernel-scheduled threads are cheaper to create than user-level
threads – False
• A blocking kernel-scheduled thread blocks all threads in the
process – False. This is true for user level threads
• Threads are cheaper to context switch than processes – True –
don't have to switch the address space
• A blocking user-level thread blocks the process – True
• Different user-level threads of the same process can have
different scheduling priorities – False. User-level threads aren't
scheduled by the kernel; they yield() the CPU cooperatively
• All kernel-scheduled threads of a process share the same virtual
address space – True
• The optimal page replacement algorithm is the best choice in
practice – False - it’s impossible to implement
• The operating system is not responsible for resource allocation
between competing processes – False – it is responsible for
this
• System calls do not change the privilege mode of the processor –
False – we trap into the kernel, so we do change the privilege
mode
• The Challenge-Response authentication protocol is susceptible to
replay attacks by intruders snooping on the network – False
• A scheduler favouring I/O-bound processes usually does not
significantly delay the completion of CPU-bound processes – True –
the I/O-bound processes get their I/O and then yield the CPU again.
Q2 a)
.99 probability of a TLB hit, which needs only a single memory
access (the actual one required)
.0095 probability of a TLB miss: one read from memory to get the
page-table entry to reload the TLB, then one read to read the
actual memory.
.0005 probability of a page fault, plus the eventual read from
memory, plus something like one of the following (I'd accept a few
variations here):
1) A memory reference to update the page table
2) A memory reference to update the page table and a memory
reference for the hardware to re-read the same entry to refill the
TLB
3) Any reasonable scenario that results in a valid page table and a
refilled TLB.
I'll assume option 2 for the final answer, thus
.99 * 100ns + .0095 * 2 * 100ns + .0005 * (3 * 100ns + 5ms)
would be one answer I would accept.
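As a sanity check, the expression evaluates as follows (a quick C
snippet; times in nanoseconds, so 5ms = 5,000,000ns):

#include <stdio.h>

int main(void)
{
    double mem   = 100.0;  /* one memory access, in ns */
    double fault = 5e6;    /* page-fault service time: 5ms in ns */

    /* .99 hit (1 access), .0095 miss (2 accesses),
     * .0005 page fault (3 accesses + service time) */
    double eat = 0.99 * mem
               + 0.0095 * 2.0 * mem
               + 0.0005 * (3.0 * mem + fault);

    printf("Effective access time = %.2f ns\n", eat);  /* 2601.05 */
    return 0;
}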
Q3 a)
i) FCFS order traversed: 119,58,114,28,111,55,103,30,75
tracks traversed: 547
ii) SSTF order traversed: 75,58,55,30,28,103,111,114,119
tracks traversed: 143
iii) SCAN order traversed: 103,111,114,119,75,58,55,30,28
tracks traversed: 130
iv) C-SCAN order traversed: 103,111,114,119,28,30,55,58,75
tracks traversed: 177
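The question's starting head position isn't reproduced here; a
start at track 80 is consistent with all four totals. A small C
checker for the orders above (a sketch; note the C-SCAN total
counts the 119-to-28 return sweep as 91 tracks, as in the answer):

#include <stdio.h>
#include <stdlib.h>

/* Total tracks traversed servicing requests in the given order. */
static int seek_total(int head, const int *order, int n)
{
    int total = 0;
    for (int i = 0; i < n; i++) {
        total += abs(order[i] - head);
        head = order[i];
    }
    return total;
}

int main(void)
{
    int start   = 80;  /* assumed: reproduces the totals above */
    int fcfs[]  = {119, 58, 114, 28, 111, 55, 103, 30, 75};
    int sstf[]  = {75, 58, 55, 30, 28, 103, 111, 114, 119};
    int scan[]  = {103, 111, 114, 119, 75, 58, 55, 30, 28};
    int cscan[] = {103, 111, 114, 119, 28, 30, 55, 58, 75};

    printf("FCFS:   %d\n", seek_total(start, fcfs, 9));   /* 547 */
    printf("SSTF:   %d\n", seek_total(start, sstf, 9));   /* 143 */
    printf("SCAN:   %d\n", seek_total(start, scan, 9));   /* 130 */
    printf("C-SCAN: %d\n", seek_total(start, cscan, 9));  /* 177 */
    return 0;
}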
Q5 b)
It is in the multiprocessor lecture notes.
http://cgi.cse.unsw.edu.au/~cs3231/06s1/lectures/lect21.pdf
Starting at page 30.
Just some notes that may be worth pointing out:
Test-and-Set
• Hardware locks the bus during the TSL instruction to prevent
memory accesses by any other CPU.
• Issue: spinning on a lock requires bus locking, which slows all
other CPUs down
– Independent of whether other CPUs need a lock or not
– Causes bus contention
• Caching does not help this test-and-set.
Hence, to reduce bus contention:
• Read before TSL
– Spin reading the lock variable, waiting for it to change
– When it does, use TSL to acquire the lock
• Allows the lock to be shared read-only in all caches until it is
released
– No bus traffic until the actual release
• No race conditions, as acquisition is still with TSL.
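A sketch of this read-before-TSL (test-and-test-and-set) lock in C,
using GCC's __sync builtins (illustrative, not from the lecture
notes):

/* Test-and-test-and-set spinlock. */
typedef volatile int spinlock_t;

void spin_lock(spinlock_t *lock)
{
    for (;;) {
        /* Spin reading only: the lock's cache line stays shared
         * read-only in every cache, so spinning generates no bus
         * traffic while the lock is held. */
        while (*lock != 0)
            ;
        /* Lock looks free: try to acquire it atomically.
         * __sync_lock_test_and_set writes 1 and returns the old
         * value; 0 means we won the lock. */
        if (__sync_lock_test_and_set(lock, 1) == 0)
            return;
        /* Another CPU beat us to it: resume read-only spinning. */
    }
}

void spin_unlock(spinlock_t *lock)
{
    __sync_lock_release(lock);  /* atomically store 0, release */
}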