2. What is Memory?
Memory:
A large array of words / bytes, each having its own address
Instruction Execution Cycle
Basic Hardware
Main memory and registers are the only storage the CPU can access directly
The hardware must protect user processes from one another
The base register holds the smallest legal physical memory address
The limit register specifies the size of the range
The base and limit registers can be loaded only by the operating system, using a special privileged instruction
[Figure: the storage hierarchy between the CPU, cache, and disk]
3. Base & Limit Registers
Ensure that each process has a separate memory space
Determine the range of legal addresses the process may access
Memory protection:
The CPU hardware compares every address generated in user mode with the base and limit registers
The base and limit registers can be loaded only by the OS, using a special privileged instruction
[Figure: physical memory from address 0 to 102400, with the operating system in low memory followed by Process 1 through Process N; the highlighted process starts at address 4250 with length 2500, so its base register holds 4250 and its limit register holds 2500, making addresses 4250 through 6749 legal for it]
4. Hardware address protection with base and limit registers
[Figure: every address the CPU generates is compared against the base register (address >= base?) and against base + limit (address < base + limit?); if both checks pass, the memory access proceeds; if either fails, the hardware traps to the operating-system monitor with an addressing error]
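The hardware check can be sketched in a few lines of Python; the base and limit values below are illustrative, taken from the earlier layout example.

```python
def legal(address, base, limit):
    """Hardware check: an address is legal only if base <= address < base + limit."""
    if address < base or address >= base + limit:
        raise MemoryError("trap to operating system monitor - addressing error")
    return address  # the access to memory proceeds

# Assumed example values: a process with base = 4250 and limit = 2500
print(legal(4250, 4250, 2500))   # first legal address -> 4250
print(legal(6749, 4250, 2500))   # last legal address  -> 6749
try:
    legal(6750, 4250, 2500)      # base + limit itself is already out of range
except MemoryError as e:
    print(e)
```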
5. Address Binding
A program must be brought into memory before it can execute
Processes waiting to be brought into memory form the input queue
Addresses may be represented in different ways during the stages of a program's life
A compiler binds symbolic addresses to relocatable addresses
The linkage editor (linker) or loader in turn binds the relocatable addresses to absolute addresses
Each binding is a mapping from one address space to another
Compile-time binding: absolute code
Load-time binding: relocatable code
Execution-time binding: binding is delayed until run time
6. Logical vs. Physical Address Space
Logical address: generated by the CPU
Physical address: the address seen by the memory unit
The two are:
Identical under compile-time and load-time binding
Different under execution-time binding
A logical address is also called a virtual address
The set of all logical addresses is the virtual address space; the set of corresponding physical addresses is the physical address space
Memory Management Unit (MMU):
Hardware that performs the run-time mapping from virtual to physical addresses
[Figure: dynamic relocation. The MMU adds the value in the relocation register (20000) to the logical address generated by the CPU (356) to form the physical address (20356) sent to memory]
The user program deals only with logical addresses; it never sees the real physical addresses
Dynamic relocation uses a relocation register
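A minimal sketch of the MMU's job under dynamic relocation, using the relocation-register value from the figure:

```python
RELOCATION_REGISTER = 20000  # value loaded by the OS, as in the figure

def mmu_translate(logical_address):
    """Dynamic relocation: the MMU adds the relocation register
    to every logical address the CPU generates."""
    return logical_address + RELOCATION_REGISTER

print(mmu_translate(356))  # -> 20356, matching the figure
```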
7. Dynamic Loading
A routine is not loaded until it is called
All routines are kept on disk in a relocatable load format
The relocatable linking loader loads the desired routine into memory on its first call
Advantages:
An unused routine is never loaded
Useful when large amounts of code are needed to handle infrequently occurring cases
Does not require special support from the operating system
Dynamic Linking and Shared Libraries
Static linking: libraries are combined into the program image at link time
Dynamic linking:
Linking is postponed until execution time
A stub is included in the image for each library-routine reference
The stub is a small piece of code indicating how to locate the appropriate memory-resident library routine, or how to load the library if the routine is not already present
Shared libraries
Dynamic linking requires help from the operating system
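The stub idea can be sketched in plain Python; the "library routine" and loader below are made up purely for illustration, a toy stand-in for what the linker and OS do with real shared libraries.

```python
def make_stub(load_routine):
    """Return a stub that loads the real routine on first call,
    then caches it so later calls go straight to the routine."""
    state = {"routine": None}
    def stub(*args, **kwargs):
        if state["routine"] is None:           # routine not memory-resident yet
            state["routine"] = load_routine()  # "load the library" on demand
        return state["routine"](*args, **kwargs)
    return stub

# Hypothetical library routine, "loaded" only when first needed
loads = []
def loader():
    loads.append("libm")          # record that loading happened
    return lambda x: x * x

square = make_stub(loader)
print(square(5), square(6), loads)   # loading happens exactly once
```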
8. Swapping
What is swapping?
A process can be swapped temporarily out of memory to a backing store and later brought back into memory for continued execution (e.g., under round-robin CPU scheduling)
[Figure: processes (P1, P9, P7) are swapped out of the user space of main memory to the backing store, and swapped back in, under operating-system control]
9. Swapping
Under priority-based scheduling, swapping is called roll out, roll in
With compile-time or load-time binding, a process that is swapped out must be swapped back into the same memory space it previously occupied
Requires a backing store:
A fast disk
Must provide direct access to the memory images
Swapping makes context-switch time high
The major part of the swap time is transfer time
Transfer time is directly proportional to the amount of memory swapped
A process must be completely idle to be swapped out
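A rough illustration of why transfer time dominates; the process size and disk rate below are assumed figures, not from the slides.

```python
def transfer_time_s(process_mb, rate_mb_per_s):
    """Transfer time is directly proportional to the amount of memory swapped."""
    return process_mb / rate_mb_per_s

# Assumed figures: a 100 MB process, a 50 MB/s backing store
one_way = transfer_time_s(100, 50)
print(one_way)       # -> 2.0 seconds each way
print(2 * one_way)   # -> 4.0 seconds for a roll out plus a roll in
```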
10. Contiguous Memory Allocation
Memory mapping and protection
Use a relocation register together with a limit register
The size of the operating system may change dynamically
e.g., transient operating-system code
Memory allocation
Divide memory into several fixed-sized partitions
A table keeps track of which partitions are occupied and which are available
Initially, all available memory is one large block: a hole
Eventually, a set of holes of various sizes is scattered throughout memory
The dynamic storage-allocation problem
How to satisfy a request of size n from a list of free holes:
First fit: allocate the first hole that is big enough
Best fit: allocate the smallest hole that is big enough
Worst fit: allocate the largest hole
[Figure: the operating system in low memory, with processes P1 and P2 allocated contiguously above it]
11. Dynamic Memory Allocation Strategies
[Figure: a memory map containing allocated regions A, B, C, D and free holes of 20K, 25K, 30K, and 40K; the panels trace where successive requests C (20K), B (30K), and X (20K) are placed under the first-fit, best-fit, and worst-fit strategies]
12. Example:
A process of size 12K arrives; the free holes, in memory order, are 6K, 14K, 19K, 11K, and 13K
Best fit places the 12K process in the 13K hole (the smallest hole that is big enough)
Worst fit places it in the 19K hole (the largest hole)
First fit places it in the 14K hole (the first hole that is big enough)
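As a sketch (not from the slides), the three placement rules can be written directly in Python; running them on the hole list of this example reproduces the placements.

```python
def first_fit(holes, n):
    """Index of the first hole that is big enough, else None."""
    return next((i for i, h in enumerate(holes) if h >= n), None)

def best_fit(holes, n):
    """Index of the smallest hole that is big enough, else None."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= n]
    return min(fits)[1] if fits else None

def worst_fit(holes, n):
    """Index of the largest hole, if it is big enough, else None."""
    h, i = max((h, i) for i, h in enumerate(holes))
    return i if h >= n else None

holes = [6, 14, 19, 11, 13]          # free holes in memory order, in KB
print(holes[first_fit(holes, 12)])   # -> 14 (first fit)
print(holes[best_fit(holes, 12)])    # -> 13 (best fit)
print(holes[worst_fit(holes, 12)])   # -> 19 (worst fit)
```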
13. Example:
Processes of size 300K, 100K, and 400K arrive, in order; the free partitions, in memory order, are 100K, 200K, 300K, 500K, and 600K
[Figure: the resulting placements under the worst-fit, best-fit, next-fit, and first-fit strategies]
14. Exercises:
1. Given five memory partitions of 100KB, 500KB, 200KB, 300KB, and 600KB (in order), how would the first-fit, best-fit, and worst-fit algorithms place processes of 212KB, 417KB, 112KB, and 426KB (in order)?
Which algorithm makes the most efficient use of memory?
2. Consider requests from processes in the order 300K, 25K, 125K, and 50K. Let there be two blocks of memory available: one of size 150K followed by one of size 350K.
Which of the partition-allocation schemes can satisfy the above requests?
3. Consider a swapping system in which memory consists of the following hole sizes, in memory order: 10K, 4K, 20K, 15K, and 9K. Which hole is taken for successive segment requests of (a) 8K, (b) 12K, and (c) 10K under first fit, best fit, and worst fit?
15. Contiguous Memory Allocation
Fragmentation
The first-fit and best-fit strategies suffer from external fragmentation
External fragmentation may be a minor or a major problem
Internal fragmentation is also possible:
memory that is internal to a partition but is not being used
Solutions to external fragmentation:
Compaction
Shuffle the memory contents so as to place all free memory together in one large block
Not possible if address binding is done statically, at assembly or load time
Alternatively, permit the logical address space of a process to be noncontiguous
[Figure: free holes of 44K, 16K, and 10K are scattered among allocated segments, so a pending 65K request cannot be satisfied even though 70K is free in total; compaction slides the segments together so the free memory forms one block large enough for the request]
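Compaction can be sketched as follows; the segment names and sizes are hypothetical, chosen to mirror the scattered-holes situation above, and execution-time binding is assumed so segments can move.

```python
def compact(layout):
    """Slide all allocated segments to low memory so the free space
    coalesces into one large hole at the top."""
    used = [(name, size) for name, size in layout if name is not None]
    free = sum(size for name, size in layout if name is None)
    return used + [(None, free)]

# Hypothetical layout: (segment name, size in KB); None marks a hole
layout = [("A", 30), (None, 44), ("B", 25), (None, 16), ("C", 40), (None, 10)]
print(compact(layout))
# the scattered 44K + 16K + 10K holes become one 70K hole,
# so a pending 65K request now fits
```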
16. Paging
Permits the physical address space of a process to be noncontiguous
Without it, the backing store has the same fragmentation problems as main memory
Basic method:
Break physical memory into fixed-sized blocks called frames
Break logical memory into blocks of the same size called pages
[Figure: paging hardware. The CPU generates logical address (p, d); the page number p indexes the page table to obtain frame number f, and physical address (f, d) is sent to physical memory]
17. Paging
The page size is defined by the hardware and is typically a power of 2
Every logical address is translated into a page number and a page offset
Suppose the logical address space has 2^m addressing units and the page size is 2^n addressing units
Then the high-order m - n bits of a logical address give the page number p, and the low-order n bits give the page offset d
Paging has no external fragmentation, but may have some internal fragmentation
Process size is expressed in pages
Each page of the process is loaded into one of the allocated frames, and the frame number is put in the page-table entry for that page
18. Frames: the physical address space divided into a number of fixed-size blocks
Pages: the logical address space divided into a number of fixed-size blocks
Frame size = page size
Number of frames = physical address space / frame size
Number of pages = logical address space / page size
19. An address generated by the CPU is divided into:
Page number (p): the number of bits required to identify a page in the logical address space
Page offset (d): the number of bits required to identify a particular location within a page
A physical address is divided into:
Frame number (f): the number of bits required to identify a frame in the physical address space
Frame offset (d): the number of bits required to identify a particular location within a frame
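The splits above are just bit shifts and masks; here is a small sketch with an assumed 10-bit offset (1024-unit pages) for illustration.

```python
def split_logical(address, n):
    """Split a logical address into (page number, offset), given an n-bit offset."""
    return address >> n, address & ((1 << n) - 1)

def join_physical(frame, offset, n):
    """Form a physical address from a frame number and an n-bit offset."""
    return (frame << n) | offset

# Assumed example: 1024-unit pages, so n = 10
print(split_logical(3073, 10))      # address 3073 -> page 3, offset 1
print(join_physical(8, 1, 10))      # frame 8, offset 1 -> physical 8193
```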
23. Example: a program consists of 8 pages; memory has 16 frames; a page holds 4096 words
Page 0 -> frame 2, page 4 -> frame 15
Page 6 -> frame 5, page 7 -> frame 9
No other pages are in memory.
With a 3-bit page number and a 12-bit offset, translate the logical addresses:
111000011110000 (page 7, offset 000011110000) -> physical 1001000011110000 (frame 9)
000000000000000 (page 0, offset 000000000000) -> physical 0010000000000000 (frame 2)
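The translation can be checked mechanically; the page table below is exactly the mapping given in this example.

```python
PAGE_BITS = 12                             # 4096 words per page
page_table = {0: 2, 4: 15, 6: 5, 7: 9}    # page -> frame, from the example

def translate(logical):
    """Split off the page number, look up its frame, and rebuild the address."""
    p, d = logical >> PAGE_BITS, logical & ((1 << PAGE_BITS) - 1)
    if p not in page_table:
        raise MemoryError("page %d is not in memory" % p)
    return (page_table[p] << PAGE_BITS) | d

print(format(translate(0b111000011110000), "016b"))  # -> 1001000011110000
print(format(translate(0b000000000000000), "016b"))  # -> 0010000000000000
```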
24. Paging
Paging gives a clear separation between the user's view of memory and the actual physical memory
Frame table
Records the status of each physical frame and, if allocated, to which page of which process
How are page tables implemented?
One page table per process, with a pointer to it stored in the PCB
A set of dedicated registers may hold the page table
Registers are efficient only when the page table is reasonably small
For large page tables, the page table is kept in main memory and a page-table base register (PTBR) points to it
Problem: each user memory reference then takes two memory accesses (one for the page-table entry, one for the data)
25. Paging
How are page tables implemented?
The standard solution is a Translation Look-aside Buffer (TLB)
Associative, high-speed memory
Each TLB entry is a (key, value) pair: a page number and its frame number
Searching the TLB is fast, but the hardware is expensive
So the number of entries is small; the TLB contains only a few of the page-table entries
What are a TLB hit and a TLB miss?
Hit: the page number is in the TLB, and the frame number is obtained immediately
Miss: a memory reference to the page table must be made
The frame number is obtained and used to access memory
The page number and frame number are then added to the TLB
Some TLBs store address-space identifiers (ASIDs) in each TLB entry
Used to provide address-space protection for the process
26. Paging hardware with TLB
[Figure: the CPU generates logical address (p, d); the page number p is looked up in the TLB, a table of (page number, frame number) pairs; on a TLB hit the frame number f comes directly from the TLB, on a TLB miss it is fetched from the page table in memory; in either case physical address (f, d) is sent to physical memory]
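A toy model of the hit/miss path, with the TLB as a small dictionary; the page-table contents are hypothetical.

```python
page_table = {0: 2, 1: 7, 2: 3}   # hypothetical page -> frame mapping in memory
tlb = {}                          # small associative cache of (page, frame) pairs

def lookup(p, stats):
    """Return the frame for page p, via the TLB when possible."""
    if p in tlb:
        stats["hits"] += 1        # TLB hit: frame number comes from the TLB
    else:
        stats["misses"] += 1      # TLB miss: consult the page table in memory
        tlb[p] = page_table[p]    # add the (page, frame) pair to the TLB
    return tlb[p]

stats = {"hits": 0, "misses": 0}
frames = [lookup(p, stats) for p in [0, 1, 0, 0, 2, 1]]
print(frames, stats)   # repeated references to pages 0 and 1 now hit in the TLB
```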
27. Effective Memory Access Time (EAT)
Hit ratio (H):
The fraction of references whose page number is found in the TLB
Effective access time = hit ratio * (hit access time) + miss ratio * (miss access time)
Let:
T: time required to search the TLB
P: time required to access the page table
M: time required to access memory
EAT = H*(T + M) + (1 - H)*(T + P + M)
28. Example:
An 80-percent hit ratio
20 nanoseconds to search the TLB
100 nanoseconds to access memory
Then:
Access time if the TLB hits?
Access time if the TLB misses?
Effective access time?
29. Solution:
1. Access time on a TLB hit = 20 + 100 = 120 ns
2. Access time on a TLB miss = 20 + 100 + 100 = 220 ns
3. Effective access time = hit ratio * hit time + miss ratio * miss time = 0.8*120 + 0.2*220 = 140 ns
30. Example:
A 60-percent hit ratio
10 nanoseconds to search the TLB
80 nanoseconds to access memory
Effective access time?
Solution: EAT = 0.6*(10+80) + (1-0.6)*(10+2*80)
= 0.6*(90) + 0.4*(170)
= 122 ns
31. Example:
Effective access time = 180 ns
TLB access time = 40 ns
Main-memory access time = 120 ns
Hit ratio?
Solution: 180 = h*(40+120) + (1-h)*(40+2*120), so h = 0.833
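The three worked examples can be checked with the formula from slide 27. Here the page-table access P is taken to equal one memory access M, which is the assumption these slides use.

```python
def eat(h, t, m, p=None):
    """EAT = H*(T+M) + (1-H)*(T+P+M); P defaults to M,
    since a page-table access is one memory access here."""
    if p is None:
        p = m
    return h * (t + m) + (1 - h) * (t + p + m)

print(round(eat(0.8, 20, 100), 3))   # -> 140.0 (slide 29)
print(round(eat(0.6, 10, 80), 3))    # -> 122.0 (slide 30)

# Slide 31: solve 180 = h*(T+M) + (1-h)*(T+2M) for h.
# Rearranging gives EAT = (T+2M) - h*M, so h = ((T+2M) - EAT) / M.
T, M, target = 40, 120, 180
h = ((T + 2 * M) - target) / M
print(round(h, 3))                   # -> 0.833
```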
32. Memory Protection in Paging
A protection bit is associated with each frame
Kept in the page table
One bit can mark a page read-write or read-only
Separate protection bits can also be provided for each kind of access
A valid-invalid bit
Valid:
the associated page is in the process's logical address space, and is thus a legal page
Invalid:
the page is not in the process's logical address space
The operating system sets this bit for each page to allow or disallow access to the page
Page-Table Length Register (PTLR)
Indicates the size of the page table
[Figure: a six-entry page table; entries 0 through 3 hold frame numbers 4, 6, 1, and 3 and are marked valid (v), while entries 4 and 5 are marked invalid (i)]
33. Shared Pages
Because common code can be shared, pages of common code can be shared
Such code is re-entrant code (or pure code):
non-self-modifying code
Heavily used programs can also be shared
To be sharable, the code must be re-entrant
[Figure: three processes share the editor code pages ed1 and ed2; the corresponding page-table entries of all three processes point to the same physical frames, while each process maps its own private data page (data1, data2, data3) to a distinct frame]
34. Segmentation
The user views memory as a collection of variable-sized segments
Each segment is of variable length
Elements within a segment are identified by their offset from the beginning of the segment
Segmentation:
A logical address space is a collection of segments
Each segment has a name and a length
Addresses are specified with:
the segment name
the offset within the segment
< segment-number, offset >
[Figure: the user's view of a program as segments, such as a subroutine, the stack, the symbol table, and the main program, each addressed by an offset within the segment]
35. Segmentation
An address generated by the CPU is divided into:
Segment number (s): the number of bits required to identify the segment
Segment offset (d): the number of bits required to represent the size of the segment
The hardware must map this two-dimensional user-defined address into a one-dimensional physical address
A segment table:
Each entry in the segment table contains:
Segment base: the starting physical address where the segment resides in memory
Segment limit: the length of the segment
The segment table is thus an array of base-limit register pairs
36. [Figure: segmentation hardware. The CPU generates logical address (s, d); s indexes the segment table to obtain the segment's limit and base; if d < limit, physical address base + d is sent to physical memory; otherwise the hardware traps with an addressing error]
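The segment-table lookup can be sketched as follows; the table contents are hypothetical example values, chosen only to exercise both outcomes.

```python
# Hypothetical segment table: segment number -> (limit, base)
segment_table = {0: (1000, 1400), 1: (400, 6300), 2: (1100, 4300)}

def translate(s, d):
    """If the offset is within the segment's limit, the physical
    address is base + d; otherwise the hardware traps."""
    limit, base = segment_table[s]
    if d >= limit:
        raise MemoryError("trap: addressing error")
    return base + d

print(translate(1, 53))    # offset 53 in segment 1 -> 6353
print(translate(2, 85))    # offset 85 in segment 2 -> 4385
try:
    translate(0, 1222)     # beyond segment 0's 1000-unit limit
except MemoryError as e:
    print(e)
```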
38. Advantages:
No internal fragmentation
A segment table consumes less space than a page table in paging
Disadvantage:
As processes are loaded and removed from memory, the free memory space is broken into little pieces, causing external fragmentation