2. Background
Virtual memory – separation of user logical memory from physical
memory.
Only part of the program needs to be in memory for execution
Logical address space can therefore be much larger than physical
address space
Allows address spaces to be shared by several processes
Allows for more efficient process creation
Virtual memory can be implemented via:
Demand paging
Demand segmentation
4. Demand Paging
Bring a page into memory only when it is needed
Less I/O needed
Less memory needed
Faster response
More users
Page is needed ⇒ reference to it
invalid reference ⇒ abort
not-in-memory ⇒ bring to memory
Lazy swapper – never swaps a page into memory unless the page will be
needed
A swapper that deals with individual pages is a pager
6. Valid-Invalid Bit
With each page table entry a valid–invalid bit is associated
(v ⇒ in-memory, i ⇒ not-in-memory)
Initially valid–invalid bit is set to i on all entries
Example of a page table snapshot:
During address translation, if the valid–invalid bit in the page table entry
is i ⇒ page fault
(Figure: page table snapshot with frame # and valid–invalid bit columns;
entries for pages in memory are marked v, the rest i)
8. Page Fault
The first reference to a page will trap to the
operating system:
page fault
1. Operating system looks at another table to decide:
- Invalid reference ⇒ abort
- Just not in memory
2. Get empty frame
3. Swap page into frame
4. Reset tables
5. Set validation bit = v
6. Restart the instruction that caused the page fault
10. Performance of Demand Paging
Page Fault Rate: 0 ≤ p ≤ 1.0
if p = 0, no page faults
if p = 1, every reference is a fault
Effective Access Time (EAT)
EAT = (1 – p) × memory access
+ p × page fault time
where page fault time = page fault overhead
+ swap page out
+ swap page in
+ restart overhead
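The formula above can be checked with a small calculation. The 200 ns memory-access time and 8 ms fault-service time below are illustrative assumptions, not values from the slides:

```python
def effective_access_time(p, mem_access_ns, fault_service_ns):
    """EAT = (1 - p) * memory access + p * page fault time."""
    return (1 - p) * mem_access_ns + p * fault_service_ns

# Assumed numbers: 200 ns memory access, 8 ms (8,000,000 ns) fault service.
print(effective_access_time(0.0, 200, 8_000_000))    # 200.0  (p = 0: no faults)
print(effective_access_time(0.001, 200, 8_000_000))  # 8199.8 (one fault per 1000 refs)
```

Even a fault rate of one in a thousand slows the effective access time by a factor of roughly forty, which is why keeping p low matters so much.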
11. What happens if there is no free frame?
Page replacement – find some page in memory that is not
really in use, and swap it out
algorithm – which page should be the victim?
performance – want an algorithm which will result in the
minimum number of page faults
Same page may be brought into memory several times
12. Page Replacement
Prevent over-allocation of memory by modifying page-fault service
routine to include page replacement
Use modify (dirty) bit to reduce overhead of page transfers – only
modified pages are written to disk
Page replacement completes separation between logical memory and
physical memory – large virtual memory can be provided on a smaller
physical memory
14. Basic Page Replacement
1. Find the location of the desired page on disk
2. Find a free frame:
- If there is a free frame, use it
- If there is no free frame, use a page replacement
algorithm to select a victim frame
3. Bring the desired page into the (newly) free frame; update the page
and frame tables
4. Restart the process
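The four steps can be sketched as a service routine. Here `pick_victim`, `write_out`, and `read_in` are hypothetical helpers standing in for the replacement policy and disk I/O, and the dirty-bit check mirrors the optimization described under Page Replacement:

```python
def handle_page_fault(page, page_table, free_frames, pick_victim,
                      dirty, write_out, read_in):
    """Sketch of a page-fault service routine with replacement.
    pick_victim, write_out, read_in are assumed helpers, not real APIs."""
    if free_frames:                      # 2. a free frame exists: use it
        frame = free_frames.pop()
    else:                                # 2. otherwise select a victim frame
        victim = pick_victim(page_table)
        frame = page_table.pop(victim)
        if dirty.pop(victim, False):     # only modified pages go back to disk
            write_out(victim)
    read_in(page, frame)                 # 3. bring the desired page into the frame
    page_table[page] = frame             #    update the page and frame tables
    dirty[page] = False
    return frame                         # 4. caller restarts the faulting instruction
```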
16. Page Replacement Algorithms
Want lowest page-fault rate
Evaluate algorithm by running it on a particular string of memory
references (reference string) and computing the number of page faults
on that string
In all our examples, the reference string is
1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
17. First-In-First-Out (FIFO) Algorithm
Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
3 frames (3 pages can be in memory at a time per process)
4 frames
Belady’s Anomaly: more frames ⇒ more page faults
(Figure: FIFO frame contents over the reference string;
3 frames ⇒ 9 page faults,
4 frames ⇒ 10 page faults)
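A short simulation reproduces both fault counts and the anomaly. This is a sketch of FIFO replacement, not code from the slides:

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults under FIFO replacement with nframes frames."""
    frames = deque()               # oldest page sits at the left end
    faults = 0
    for page in refs:
        if page not in frames:     # page fault
            faults += 1
            if len(frames) == nframes:
                frames.popleft()   # evict the page that arrived first
            frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9
print(fifo_faults(refs, 4))  # 10  (Belady's anomaly: more frames, more faults)
```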
19. Optimal Page Replacement
Basic idea
replace the page that will not be referenced for the longest time
This gives the lowest possible fault rate
Impossible to implement (requires future knowledge of the reference string)
Does provide a good benchmark for other techniques
20. Optimal Algorithm
Replace page that will not be used for longest period of time
4 frames example
1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
How do you know this?
Used for measuring how well your algorithm performs
(Figure: optimal replacement with 4 frames on the reference string ⇒ 6 page faults)
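The 6-fault count can be reproduced by scanning forward in the reference string; that forward scan is exactly the future knowledge an online algorithm cannot have:

```python
def optimal_faults(refs, nframes):
    """Replace the page whose next use lies farthest in the future."""
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < nframes:
            frames.append(page)
        else:
            def next_use(p):
                # distance to next reference; never-used pages sort last
                future = refs[i + 1:]
                return future.index(p) if p in future else len(refs)
            victim = max(frames, key=next_use)
            frames[frames.index(victim)] = page
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(optimal_faults(refs, 4))  # 6
```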
24. LRU Algorithm
Basic idea
replace the page in memory that has not been accessed
for the longest time
The optimal policy looking back in time
as opposed to forward in time
fortunately, programs tend to behave similarly in both directions
25. LRU Algorithm (Cont.)
Stack implementation – keep a stack of page numbers in doubly linked
form:
Page referenced:
move it to the top
requires 6 pointers to be changed
No search for replacement
27. LRU
major solutions
counters
hardware clock “ticks” on every memory reference
the page referenced is marked with this “time”
the page with the smallest “time” value is replaced
stack
keep a stack of references
on every reference to a page, move it to top of stack
page at bottom of stack is next one to be replaced
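The stack scheme can be sketched with an ordered map standing in for the doubly linked list; move-to-end plays the role of "move it to the top":

```python
from collections import OrderedDict

def lru_faults(refs, nframes):
    """LRU via an ordered map: most recently used page kept at the end."""
    stack = OrderedDict()                  # stands in for the doubly linked stack
    faults = 0
    for page in refs:
        if page in stack:
            stack.move_to_end(page)        # referenced: move it to the top
        else:
            faults += 1
            if len(stack) == nframes:
                stack.popitem(last=False)  # bottom of the stack is the LRU victim
            stack[page] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(refs, 4))  # 8
```

No search is needed on replacement: the victim is always at the bottom of the stack, at the cost of updating the structure on every reference.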
28. LRU
Stack algorithms (a class of page replacement algorithms)
Optimal and LRU are stack algorithms
No Belady’s anomaly
A stack algorithm is an algorithm for which it can be shown that the set
of pages in memory for n frames is always a subset of the set of pages that
would be in memory with n + 1 frames
29. LRU Approximation Page Replacement
Hardware rarely supports true LRU directly
Reference bit for a page is set by hardware;
reference bits are associated with each entry of the
page table
Additional-reference-bits algorithm
(extra reference bits record history)
Second-chance algorithm – FIFO with a circular
queue – the clock algorithm
30. Counting Algorithms
Keep a counter of the number of references that have been made to
each page
LFU Algorithm: replaces page with smallest count
MFU Algorithm: based on the argument that the page with the
smallest count was probably just brought in and has yet to be used
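A minimal LFU sketch follows. Note that real LFU variants differ in whether counts are kept after eviction, aged, or reset; this sketch keeps them for simplicity:

```python
from collections import Counter

def lfu_faults(refs, nframes):
    """LFU: on a fault with full frames, evict the page with the smallest count."""
    frames, counts = set(), Counter()
    faults = 0
    for page in refs:
        counts[page] += 1                  # count every reference
        if page in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            victim = min(frames, key=lambda p: counts[p])
            frames.discard(victim)
        frames.add(page)
    return faults

print(lfu_faults([1, 1, 2, 3], 2))  # 3: page 2 (count 1) is evicted, not page 1
```

MFU would simply replace `min` with `max`, on the argument the slide gives.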
31. Allocation of frames
The number of frames allocated must not exceed the total number of
frames available.
At least minimum number of frames must be allocated.
As the number of frames allocated to each process decreases, number
of page faults increases.
Frame allocation:
Equal allocation: divide m frames among n processes, left out frames
can be used as buffer pool.
Proportional allocation: total frames are proportionally split among n
processes depending on their requirement
ai = (si / S) × m, where S = ∑ si,
si is the size of the virtual memory of process pi, m is the total number of
frames, and ai is the number of frames allocated to process pi
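With two processes of size 10 and 127 pages sharing 62 frames (numbers assumed for illustration), the proportional split works out as:

```python
def proportional_allocation(sizes, m):
    """a_i = (s_i / S) * m, with S the sum of all process sizes."""
    S = sum(sizes)
    # integer frame counts; any remainder can go to a free-frame buffer pool
    return [s * m // S for s in sizes]

print(proportional_allocation([10, 127], 62))  # [4, 57]
```

The remaining frame (62 − 4 − 57 = 1) is left over by integer division and can serve as the buffer pool mentioned under equal allocation.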
32. Global vs. Local Allocation
Global replacement – process selects a replacement frame from the
set of all frames; one process can take a frame from another
Local replacement – each process selects from only its own set of
allocated frames
33. Thrashing
If a process does not have “enough” pages, the page-fault rate is
very high (high paging activity). This leads to:
low CPU utilization
the operating system thinks that it needs to increase the degree of
multiprogramming
another process is added to the system
Thrashing ⇒ a process is busy swapping pages in and out
The effect of thrashing can be reduced by a local replacement algorithm.
35. Locality model
A process migrates from one locality to another; there are two
types of locality
Spatial locality: once a memory location is referenced, it is highly
likely that nearby locations will be referenced.
Temporal locality: memory locations referenced recently are
likely to be referenced again.
Localities may overlap
36. Working set model
The set of pages the process is currently using is called its working set.
If a page has not been used for a certain time (∆), it is dropped from the working
set.
The working set’s accuracy depends on the value of ∆:
If ∆ is too small, it will not cover the entire locality.
If ∆ is too large, it may overlap several localities.
Usage of working set model
• The OS allocates enough frames to cover the working set of each process.
• If extra frames are available, another process can be started.
• If the sum of the working set sizes exceeds the total number of frames available,
the OS selects and suspends a process.
• The suspended process is restarted later.
Advantages:
Prevents thrashing and optimizes CPU utilization.
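The working set at time t can be sketched as the set of pages referenced in the most recent ∆ references; the reference string below is an assumed example:

```python
def working_set(refs, t, delta):
    """Pages referenced in the window of delta references ending at time t."""
    return set(refs[max(0, t - delta + 1): t + 1])

refs = [1, 2, 1, 3, 4, 4, 4, 3, 3, 2]
print(working_set(refs, 9, 5))  # {2, 3, 4}: page 1 has aged out of the window
```

Summing `len(working_set(...))` over all processes gives the total frame demand the OS compares against the available frames.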
37. Page fault frequency(PFF)
Thrashing is accompanied by a high PFF, so the PFF strategy controls
the fault rate directly.
Upper and lower bounds on the desired page-fault rate must be
established.
If the page-fault rate is above the upper bound, another frame is allocated
to the process.
If the page-fault rate is below the lower bound, a frame is taken away from
the process.
If the page-fault rate increases and there are no free frames, the process
must be suspended.
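The control rule can be sketched as follows; the bounds 0.02 and 0.10 and the return convention are arbitrary placeholders, not values from the slides:

```python
def pff_adjust(fault_rate, frames, lower=0.02, upper=0.10, free_frames=0):
    """Adjust a process's frame allocation from its measured page-fault rate.
    Returns the new frame count, or None to signal the process must be suspended."""
    if fault_rate > upper:
        if free_frames > 0:
            return frames + 1          # faulting too often: give it another frame
        return None                    # no free frames available: suspend
    if fault_rate < lower and frames > 1:
        return frames - 1              # faulting rarely: reclaim a frame
    return frames                      # within bounds: leave allocation alone
```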
38. Demand segmentation
Memory is allocated in segments instead of pages.
Each segment has a segment descriptor, which keeps track of the segment
size.
The segment descriptor contains a valid bit which indicates whether the
segment is in memory.
If the segment is not in memory, a segment fault occurs: a trap to the OS
is generated, and the OS swaps in the required segment.
An accessed bit is set when the segment is either read or written.