Memory Management - Concepts
Beuth Hochschule

Summer Term 2014

Pictures (C) W. Stallings, if not stated otherwise
Operating Systems I PT / FF 14
Memory Management
• Besides compute time, memory is the most important operating system resource

• von Neumann model: Memory as linear set of bytes with constant access time

• Memory utilization approaches

• Without operating system, one control flow
• Every loaded program must communicate with the hardware by itself

• With operating system, one control flow
• Memory used by operating system, device drivers and the program itself

• With operating system and multi-tasking
• Better processor utilization on blocking I/O operations

• Multiple programs use the same memory
Memory Management - Address Space
• CPU fetches instructions from memory according to the program counter value

• Instructions may cause additional loading from and storing to memory locations
• Address space: Set of unique location identifiers (addresses) 

• Memory address regions that are available to the program at run-time

• All systems today work with a contiguous / linear address space per process

• In a concurrent system, address spaces must be isolated from each other

-> mapping of address spaces to physical memory

• Mapping approach is predefined by the hardware / operating system combination

• Not every mapping model works on all hardware

• Most systems today implement a virtual address space per process
Linear Address Space
[Figure: multiple linear address spaces (1-3) mapped into physical memory; from IBM developerWorks]
Addressing Schemes
• Physical / absolute address

• Actual location of data in physical memory

• If some data is available at address a, it is accessed at this location

• Relative address
• Location in relation to some known address

• Logical address
• Memory reference independent from current physical location of data 

• Access to data at physical location a is performed by using the logical address a'

• There exists a mapping f with f(a') = a

• f can be implemented in software or hardware -> MMU
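As a minimal sketch, the simplest hardware form of such a mapping f is a base/limit register pair per process; the names and layout here are illustrative, not a concrete MMU interface:

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical mapping f(a') = a for a simple relocation scheme:
 * the logical address a' is checked against a limit and then
 * shifted by a per-process base register. */
typedef struct {
    uint32_t base;   /* start of the partition in physical memory */
    uint32_t limit;  /* size of the partition in bytes */
} mapping_t;

/* Returns true and writes the physical address if a' is valid. */
bool translate(const mapping_t *m, uint32_t logical, uint32_t *physical) {
    if (logical >= m->limit)
        return false;            /* protection fault: outside the partition */
    *physical = m->base + logical;
    return true;
}
```

Implemented in hardware, this check and addition happen transparently on every memory reference.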
Logical Address Space
[Figure: logical vs. physical address space - a contiguous logical range (e.g. 00000000-02F9FFFF) mapped to scattered physical regions of the PC memory map (00000000-0009FFFF, 000A0000-000BFFFF, ..., FFFF0000-FFFFFFFF)]
Memory Management Unit (MMU)
• Hardware device that maps logical to physical addresses

• The MMU is part of the processor

• Re-programming the MMU is a privileged operation, which can only be performed in privileged (kernel) mode

• The MMU typically implements one or more mapping approaches
• The user program deals with logical addresses only

• Never sees the real physical addresses

• Transparent translation with each instruction executed
[Figure: addressing schemes - the core performs instruction fetches with logical addresses; the MMU uses its mapping information to translate them into physical addresses in the physical address space]
Memory Hierarchy
• If the operating system manages memory, it must consider all levels of memory
• Memory hierarchy

• Differences in capacity, costs and access time

• Bigger capacity for lower costs and speed

• Principle of locality

• Static RAM (SRAM)
• Fast, expensive, keeps the stored value once it is written

• Used only for register files and caches

• Dynamic RAM (DRAM)
• One capacitor per bit, demands a refresh every 10-100 ms
[Figure: memory hierarchy pyramid - fast / expensive / small and volatile at the top, slow / cheap / large and non-volatile at the bottom; Fusion Drive intro from Apple keynote 2012]
Memory Hierarchy
[Figure: memory hierarchy levels, from http://tjliu.myweb.hinet.net/]
• The operating system has to manage the memory hierarchy

• Programs should have comparable performance on different memory architectures

• In some systems, parts of the cache invalidation are a software task (e.g. TLB)
Memory Hierarchy
• Available main and secondary memory is a shared resource among all processes

• Can be allocated and released by operating system and application

• Programmers are not aware of other processes in the same system

• Main memory is expensive, volatile and fast, good for short-term usage

• Secondary memory is cheaper, typically non-volatile and slower, good for long-term usage

• Flow between levels in the memory hierarchy is necessary for performance

• Traditionally solved by overlaying and swapping
• Recurring task for software developers - delegation to the operating system

• In multiprogramming, this becomes a must
Swapping
• In a multiprogramming environment

• Blocked (and ready) processes can be temporarily swapped out of main to secondary memory

• Allows for execution of other processes

• With physical addresses

• Processes will be swapped back into the same memory space that they occupied previously

• With logical addresses

• Processes can be swapped in at arbitrary physical addresses

• Demands relocation support
[Figure: swapping - processes P1 and P2 in the user space of main memory are swapped out to the backing store and swapped back in; the operating system stays resident]
Memory Management - Protection and Sharing
• With relocation capabilities and concurrency, the operating system must implement
protection mechanisms against accidental or intentional unwanted process
interference

• Protection mechanism must work at run-time for each memory reference

• Program location in memory is unpredictable

• Dynamic address calculation in program code is possible for attackers

• Mechanism therefore must be supported by hardware

• Software solution would need to screen all (!) memory referencing 

• Sharing
• Allow controlled sharing of memory regions, if protection is satisfied

• Both features can be implemented with a relocation mechanism
Memory Management - Partitioning
• With relocation and isolation, the operating system
can manage memory partitions
• Reserve memory partitions on request

• Recycle unused / no longer used memory
partitions, implicitly or explicitly

• Swap out the content of temporarily unused
memory partitions

• Memory management must keep state per
memory partition

• Different partitioning approaches have different
properties

• Traditional approach was one partition per process
• Partitioning approaches can be evaluated by their fragmentation behavior, performance and overhead

• Hypothetical example: Fixed partition size, bit mask for partition state

• Small block size -> large bit mask -> small fragmentation

• Large block size -> small bit mask -> large fragmentation

• External Fragmentation

• Total memory space exists to satisfy a request, but it is not contiguous

• Internal Fragmentation

• Allocated memory may be slightly larger than requested memory

• Size difference is memory internal to a partition, but not being used
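The hypothetical bit-mask bookkeeping above can be sketched as follows; the block count and helper names are made up for illustration:

```c
#include <stdint.h>

/* Bit mask for fixed-size partitions: bit i set means block i is
 * allocated. With N blocks the mask needs N/8 bytes - halving the
 * block size doubles the mask, but shrinks the worst-case internal
 * fragmentation per allocation. */
#define NUM_BLOCKS 64

typedef struct {
    uint8_t mask[NUM_BLOCKS / 8];
} block_state_t;

void mark_allocated(block_state_t *s, int block) {
    s->mask[block / 8] |= (uint8_t)(1u << (block % 8));
}

void mark_free(block_state_t *s, int block) {
    s->mask[block / 8] &= (uint8_t)~(1u << (block % 8));
}

int is_allocated(const block_state_t *s, int block) {
    return (s->mask[block / 8] >> (block % 8)) & 1;
}

/* Linear scan for the first free block; returns -1 if memory is full. */
int find_free_block(const block_state_t *s) {
    for (int i = 0; i < NUM_BLOCKS; i++)
        if (!is_allocated(s, i))
            return i;
    return -1;
}
```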
Memory Management - Partitioning

[Figure-only slide: partitioning examples]
Memory Partitioning - Dynamic Partitioning
• Variable length of partitions, as needed by new processes

• Used in the IBM OS/MVT system

• Leads to external fragmentation

[Figure: dynamic partitioning over time - only P4 is ready, so swap out takes place]
Memory Partitioning - Dynamic Partitioning
• External fragmentation can be overcome by compaction

• Operating system shifts partitions so that free memory becomes one block

• Time investment vs. performance gain

• Demands relocation support

• Placement algorithms for unequal fixed and dynamic partitioning

• Best-fit: Choose the partition that is closest in size

• First-fit: Pick the first partition that is large enough

• Next-fit: Start checking from the last chosen partition and pick the next match

[Figure: example of a 16 MB allocation under the different placement strategies]
Memory Partitioning - Placement
• Best-fit: Use smallest free partition match

• Sort free partitions by increasing size, choose first match

• Leaves mostly unusable fragments, compaction must be done more frequently

• Overhead for sorting and searching, tendency for small fragments at list start

• Typically worst performer

• Worst-fit: Use largest free partition match

• Sort free partitions by decreasing size, choose first match

• Leaves fragments of large size, which are more likely to be usable for later requests

• Tendency for small fragments at list end and large fragments at list start

• Results in small search overhead, but higher fragmentation; sorting overhead
Memory Partitioning - Placement
• First-fit: 

• Maintain list of free partitions ordered by addresses

• Tendency for small fragments at list start and large fragments at list end

• In comparison to best-fit, it can work much faster with less external fragmentation

• Next-fit: 

• Tendency to fragment the large free block at the end of memory

• Leads to more equal distribution of fragments
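The three selection rules can be sketched over a plain array of hole sizes; a real allocator keeps linked lists of holes with start addresses, so this only shows the selection logic:

```c
#include <stddef.h>

/* First-fit: first hole that is large enough. */
int first_fit(const size_t *holes, int n, size_t req) {
    for (int i = 0; i < n; i++)
        if (holes[i] >= req)
            return i;
    return -1;  /* no hole fits -> compaction or swapping needed */
}

/* Best-fit: hole with the smallest sufficient size. */
int best_fit(const size_t *holes, int n, size_t req) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (holes[i] >= req && (best < 0 || holes[i] < holes[best]))
            best = i;
    return best;
}

/* Next-fit: like first-fit, but resume after the last chosen hole. */
int next_fit(const size_t *holes, int n, size_t req, int *last) {
    for (int k = 1; k <= n; k++) {
        int i = (*last + k) % n;
        if (holes[i] >= req) {
            *last = i;
            return i;
        }
    }
    return -1;
}
```

Note how best-fit must scan the whole (or a sorted) list, while first-fit and next-fit can stop at the first match - the search-overhead difference discussed above.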
Compaction
• Combine adjacent free memory regions to a larger region

• Removes external fragmentation

• Can be performed ...

• ... with each memory 

de-allocation

• ... when the system is inactive

• ... if an allocation request fails

• ... if a process terminates

• Compaction demands support for relocation
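A compaction pass over a partition table might look like this sketch; the structure and function names are invented, and the actual data copy and pointer fix-ups are only indicated by comments:

```c
#include <stddef.h>

/* Compaction sketch: slide every allocated partition towards the start
 * of memory so that all free space becomes one block at the end.
 * Every moved partition gets a new base address, which is why
 * compaction demands relocation support. */
typedef struct {
    size_t base;   /* start address of the partition */
    size_t size;   /* length of the partition */
    int    in_use; /* 0 = hole, 1 = allocated */
} partition_t;

/* parts[] must be sorted by base address; mem_size is total memory.
 * Returns the size of the single free block created at the end. */
size_t compact(partition_t *parts, int n, size_t mem_size) {
    size_t next_base = 0;
    int out = 0;
    for (int i = 0; i < n; i++) {
        if (!parts[i].in_use)
            continue;                /* drop holes */
        parts[out] = parts[i];
        parts[out].base = next_base; /* relocate: data move + fix-ups here */
        next_base += parts[out].size;
        out++;
    }
    return mem_size - next_base;     /* one contiguous free block */
}
```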
[Figure: compaction example (German original: "Kompaktifizierung - must be triggered at the latest when a request can no longer be satisfied") - starting layout of OS and jobs 1-4 with holes in 2100 words of memory, and three compaction alternatives moving 600, 400 or 200 words]
Compaction
• Very resource-consuming activity, avoid whenever possible

• Time for mandatory compaction can be delayed

• Placement strategies and memory de-allocation approaches

• Example: Linux

• First part of the algorithm looks for movable regions from memory start

• Second part of the algorithm looks for free regions from memory end back

• Uses existing mechanisms for memory migration (NUMA systems)

• Also very interesting research topic for heap management and garbage collection
Memory Management - Segmentation
• Each process has several data sections being mapped to its address space
• Application program code (procedures, modules, classes) and library program code
• Differences in purpose, developer, reliability and quality

• Stack(s)
• (X86) ESP register points to the topmost element of the stack, one per process

• (X86) PUSH / POP assembler instructions

• (X86) CALL leaves the return address on the stack, RET returns to the address currently on the stack

• Parameter hand-over in method calls, local variables

• Heap(s)
• Memory dynamically allocated and freed during runtime by the program code
Process Data Sections
[Figure: internal organization of an address space with interleaved code, data, heap and stack sections; Unix layout: code, initialised data, BSS (zeroed), expansion area between end of data (break) and top of stack, stack at the end of the address space]

BSS = "Block Started By Symbol" -> non-initialized static and global variables

(C) J. Nolte, BTU
• Segmentation:
• Split process address space into segments

• Variable length up to a given maximum

• Like dynamic partitioning, but

• Partitions don't need to be contiguous - no internal fragmentation

• External fragmentation is reduced with multiple partitions per process
• Large segments can be used for process isolation (like in the partitioning idea)

• Mid-size segments can be used for separating application code, libraries and stack

• Small segments can be used for object and record management

• Each logical memory address is a tuple of segment number and segment-relative address (offset)

• Translated to base / limit values by the MMU
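A sketch of this (segment number, offset) translation against a software segment table; the field names and the way traps are signaled are illustrative:

```c
#include <stdint.h>
#include <stdbool.h>

/* One base/limit pair per segment. */
typedef struct {
    uint32_t base;   /* physical start address of the segment */
    uint32_t limit;  /* segment length in bytes */
} segment_t;

/* Translate (seg, offset) to a physical address; a false return
 * stands in for the processor trap on a violation. */
bool seg_translate(const segment_t *table, int nsegs,
                   int seg, uint32_t offset, uint32_t *phys) {
    if (seg < 0 || seg >= nsegs)
        return false;               /* invalid segment number */
    if (offset >= table[seg].limit)
        return false;               /* limit violation */
    *phys = table[seg].base + offset;
    return true;
}
```

Since the limit check happens on every reference, a process can never address memory outside its own segments - this is the protection mechanism from the earlier slides.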
Partitioning by Segmentation

[Figure-only slide: an address space partitioned into segments]
Memory Protection - Bounds / Limit
• Every process has different limits, either a base/limit pair or a bounds pair

• Processor has only one valid configuration at a time

• Operating system manages the limits as part of the per-process processor context
[Figure: bounds - one hardware base/limit register pair, with per-user software prototypes of base and limit managed by the operating system; (C) J. Nolte, BTU]
Segmentation Granularity
[Figure: coarse segmentation - logical layout (code, initialised data, BSS (zeroed), stack, expansion area) mapped by three base/limit pairs into physical memory]
• One base/limit pair per segment
• Configuration of base/limit registers:

• Implicitly through activities (code / data fetch)

• Explicitly through the operating system

• Segmentation on module level can help to
implement shared memory

• Code or data sharing

• Shared libraries get their own segment, mapped to different processes

• Also good for inter-process communication

• Separated or combined utilization of a segment for
code and data of program modules
[Figure: modules 0-3 with code and data sections - separated utilization (one segment per code and data section) vs. combined utilization (all modules share one code and one data segment); (C) J. Nolte, BTU]
Segmentation Granularity
[Figure: mid-granular segmentation - per-module code and data segments (Code 0-2, Data 0-2), each with its own base/limit pair, mapped from the logical to the physical address space; (C) J. Nolte, BTU]
Segment Tables
• With multiple base/limit pairs per process, a segment table must be maintained

• Table is in main memory, but must be evaluated by the MMU
Memory Management - Paging
• Segmentation / partitioning always have a fragmentation problem

• Fixed-size partitions lead to internal fragmentation
• Variable-sized partitions lead to external fragmentation
• Solution: Paging

• Partition memory into small equal fixed-size chunks - (page) frames

• Partition process address space into chunks of the same size - pages

• No external fragmentation, only small internal fragmentation in the last page

• One page table per process

• Maps each process page to a frame - entries for all pages needed

• Used by the processor MMU to translate logical to physical addresses
Memory Management - Paging
• Page frame size depends on the processor

• 512 Byte (DEC VAX, IBM AS/400), 4K or 4MB (Intel X86)

• 4K (IBM 370, PowerPC), 4K up to 16MB (MIPS)

• 4KB up to 4MB (UltraSPARC), 512x48 Bit (Atlas)

• Non-default size only possible with multi-level paging (later)

• Each logical address is represented by a tuple (page number, offset) 

• Page number is an index into the page table

• Page Table Entry (PTE) contains physical start address of the frame

• Change of process -> change of logical address space -> change of active page table

• Start of the active page table in physical memory is stored in a MMU register
Page Table Sizes
33
Address Space Page Size Number of Pages
Page Table Size
per Process
2 2 2 2
2 2 2 2
2 2 2 256 GB
2 2 2 256 MB
2 2 2 16.7 PB
2 2 2 16 TB
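With a flat (single-level) table, every page of the address space needs one entry whether it is used or not, so the per-process table size is (address space / page size) × PTE size. A small helper makes the arithmetic explicit; the 4- and 8-byte PTE sizes in the examples are assumptions:

```c
#include <stdint.h>

/* Back-of-the-envelope sizing of a flat page table. */
uint64_t page_table_bytes(uint64_t addr_space_bytes,
                          uint64_t page_bytes,
                          uint64_t pte_bytes) {
    uint64_t pages = addr_space_bytes / page_bytes;  /* one PTE per page */
    return pages * pte_bytes;
}
```

For example, a 32-bit address space with 4 KB pages and 4-byte PTEs yields 2^20 entries, i.e. a 4 MB table per process - the growth that motivates multi-level paging.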
Address Translation
• Adding a page frame (physical address range) to a logical address space

• Add the frame number to a free position in the page table

• Adding logical address space range to another logical address space

• Compute frame number from page number

• Add to target page table

• Determining the physical address for a logical address

• Determine page number, lookup in the page table for the frame start address

• Add offset

• Determining the logical address for a physical address

• Determine page number from frame start address, add offset
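The page-number/offset split used in the steps above can be sketched for 4 KB pages; the flat table layout is illustrative:

```c
#include <stdint.h>
#include <stdbool.h>

#define PAGE_SHIFT 12              /* 4 KB pages */
#define PAGE_SIZE  (1u << PAGE_SHIFT)

typedef struct {
    uint32_t frame;  /* physical frame number */
    bool     valid;  /* invalid entries trap on access */
} pte_t;

/* Split the logical address into (page number, offset), look the page
 * up in the table, and prepend the frame number. A false return stands
 * in for the page-fault trap. */
bool page_translate(const pte_t *table, uint32_t npages,
                    uint32_t logical, uint32_t *phys) {
    uint32_t page   = logical >> PAGE_SHIFT;
    uint32_t offset = logical & (PAGE_SIZE - 1);
    if (page >= npages || !table[page].valid)
        return false;
    *phys = (table[page].frame << PAGE_SHIFT) | offset;
    return true;
}
```

Because the page size is a power of two, the split is a pure shift and mask - no arithmetic beyond the table lookup is needed.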
• Mapping of pages to frames can change constantly during run-time

• Each memory access demands another one for the page table information

• Necessary to cache page table lookup results for performance optimization

• Translation Lookaside Buffer (TLB)

• Page number + complete PTE per entry

• Hardware can check TLB entries in parallel for page number match
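A toy model of such a fully associative TLB; the sequential loop stands in for the parallel hardware comparison, and the round-robin replacement is an arbitrary choice:

```c
#include <stdint.h>
#include <stdbool.h>

#define TLB_ENTRIES 4  /* illustrative; real TLBs hold dozens to hundreds */

typedef struct {
    uint32_t page;   /* logical page number (the tag) */
    uint32_t frame;  /* cached PTE payload: the frame number */
    bool     valid;
} tlb_entry_t;

typedef struct {
    tlb_entry_t e[TLB_ENTRIES];
    unsigned next;   /* round-robin replacement pointer */
} tlb_t;

/* TLB hit: frame found without touching the in-memory page table. */
bool tlb_lookup(const tlb_t *t, uint32_t page, uint32_t *frame) {
    for (int i = 0; i < TLB_ENTRIES; i++)
        if (t->e[i].valid && t->e[i].page == page) {
            *frame = t->e[i].frame;
            return true;
        }
    return false;    /* miss: walk the page table, then tlb_fill() */
}

void tlb_fill(tlb_t *t, uint32_t page, uint32_t frame) {
    t->e[t->next] = (tlb_entry_t){ page, frame, true };
    t->next = (t->next + 1) % TLB_ENTRIES;
}
```

On an address space switch, such cached entries must be invalidated (or tagged per process) - one reason TLB maintenance is partly a software task on some architectures.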
Paging - Page Tables

[Figure-only slide: per-process page tables mapping pages to frames]
Protection and Sharing
• Logical addressing with paging allows sharing of address space regions

• Shared code - multiple program instances, libraries, operating system code

• Shared data - concurrent applications, inter-process communication

• Protection based on paging mechanisms

• Individual rights per page maintained in the page table (read, write, execute)

• On violation, the MMU triggers a processor exception (trap)

• Address space often has unused holes (e.g. between stack and heap)

• If neither process nor operating system allocated this region, it is marked as invalid in the page table

• On access, the processor traps
The NX Bit
• The No eXecute bit marks a page as not executable

• Very good protection against stack or heap-based overflow attacks

• Well-known for decades in non-X86 processor architectures

• AMD decided to add it to the AMD64 instruction set; Intel has adopted it since the Pentium 4

• Demands operating system support for new page table structure

• Support in all recent Windows and Linux versions
X86 Page Table Entry (PTE)
• Page tables are arrays of Page Table Entries (PTEs)

• Valid PTEs have two fields:

• Page Frame Number (PFN)

• Flags describing state and protection of the page

[Figure: x86 PTE layout - bits 31-12 hold the page frame number; flag bits: valid, Write (writable on MP systems), Owner, Write through, Cache disabled, Accessed, Dirty, Res (large page if PDE), Global, Res, Res (writable on MP systems); the reserved bits are used only when the PTE is not valid]

PTE Status and Protection Bits (Intel X86 only)
Virtual Memory
• Common concepts in segmentation and paging

• All memory references are translated to physical addresses at run-time

• Address spaces are transparently broken up into non-contiguous pieces

• The combination allows keeping only parts of an address space in main memory
• Commonly described as virtual memory concept

• Size of virtual memory is limited by address space size and secondary storage

• More processes can be maintained in main memory

• Process address space occupation may exceed the available physical memory

• First reported for Atlas computer (1962)
Virtual Memory with Paging
• Frames can be dynamically mapped into address spaces

• Example: Dynamic heap extension, creation of shared regions

• Demands page table modification for the address space owner

• Frames can be mapped into multiple different logical address spaces

• Page-in / page-out: Taking a frame out of the logical address space

• Mark the address space region as invalid in the page table

• Move the (page) data to somewhere else, release the frame 

• On access, the trap handler of the operating system is called

• Triggers the page swap-in to allow the access (same frame ?)

• This is often simply called swapping, even though it is page swapping
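The page-out / trap / page-in cycle can be sketched abstractly; all structures and the swap-slot bookkeeping are invented placeholders, with the actual data movement left as comments:

```c
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint32_t frame;      /* frame number while the page is resident */
    bool     valid;      /* false -> any access traps */
    uint32_t swap_slot;  /* where the data lives while paged out */
} vpte_t;

/* Page-out: invalidate the PTE and remember the backing store slot. */
void page_out(vpte_t *pte, uint32_t swap_slot) {
    /* the page data is written to the backing store here */
    pte->valid = false;
    pte->swap_slot = swap_slot;
}

/* Trap handler: called by the OS when an invalid page is accessed.
 * The page may land in a different frame than before - relocation
 * is handled transparently by updating the PTE. */
void page_fault_handler(vpte_t *pte, uint32_t free_frame) {
    /* the data is read back from pte->swap_slot into free_frame here */
    pte->frame = free_frame;
    pte->valid = true;   /* the retried access now succeeds */
}
```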
Page Swapping
[Figure: page swapping - a page table with valid bits tracking which pages are currently in main memory]
APM Welcome, APM North West Network Conference, Synergies Across Sectors
 
Class 11th Physics NEET formula sheet pdf
Class 11th Physics NEET formula sheet pdfClass 11th Physics NEET formula sheet pdf
Class 11th Physics NEET formula sheet pdf
 
Introduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The BasicsIntroduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The Basics
 
ICT Role in 21st Century Education & its Challenges.pptx
ICT Role in 21st Century Education & its Challenges.pptxICT Role in 21st Century Education & its Challenges.pptx
ICT Role in 21st Century Education & its Challenges.pptx
 
Beyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactBeyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global Impact
 
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
 
Sports & Fitness Value Added Course FY..
Sports & Fitness Value Added Course FY..Sports & Fitness Value Added Course FY..
Sports & Fitness Value Added Course FY..
 
Mattingly "AI & Prompt Design: The Basics of Prompt Design"
Mattingly "AI & Prompt Design: The Basics of Prompt Design"Mattingly "AI & Prompt Design: The Basics of Prompt Design"
Mattingly "AI & Prompt Design: The Basics of Prompt Design"
 
Key note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdfKey note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdf
 
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptxINDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
 
fourth grading exam for kindergarten in writing
fourth grading exam for kindergarten in writingfourth grading exam for kindergarten in writing
fourth grading exam for kindergarten in writing
 
Código Creativo y Arte de Software | Unidad 1
Código Creativo y Arte de Software | Unidad 1Código Creativo y Arte de Software | Unidad 1
Código Creativo y Arte de Software | Unidad 1
 
Z Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot GraphZ Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot Graph
 
Nutritional Needs Presentation - HLTH 104
Nutritional Needs Presentation - HLTH 104Nutritional Needs Presentation - HLTH 104
Nutritional Needs Presentation - HLTH 104
 
An Overview of Mutual Funds Bcom Project.pdf
An Overview of Mutual Funds Bcom Project.pdfAn Overview of Mutual Funds Bcom Project.pdf
An Overview of Mutual Funds Bcom Project.pdf
 
Advance Mobile Application Development class 07
Advance Mobile Application Development class 07Advance Mobile Application Development class 07
Advance Mobile Application Development class 07
 

Operating Systems 1 (9/12) - Memory Management Concepts

  • 1. Memory Management - Concepts Beuth Hochschule Summer Term 2014 ! Pictures (C) W. Stallings, if not stated otherwise
  • 2. Operating Systems I PT / FF 14 Memory Management • Besides compute time, memory is the most important operating system resource • von Neumann model: Memory as linear set of bytes with constant access time • Memory utilization approaches • Without operating system, one control flow • Every loaded program must communicate with the hardware by itself • With operating system, one control flow • Memory used by operating system, device drivers and the program itself • With operating system and multi-tasking • Better processor utilization on blocking I/O operations • Multiple programs use the same memory 2
  • 3. Operating Systems I PT / FF 14 Memory Management - Address Space • CPU fetches instructions from memory according to the program counter value • Instructions may cause additional loading from and storing to memory locations • Address space: Set of unique location identifiers (addresses) • Memory address regions that are available to the program at run-time • All systems today work with a contiguous / linear address space per process • In a concurrent system, address spaces must be isolated from each other -> mapping of address spaces to physical memory • Mapping approach is predefined by the hardware / operating system combination • Not every mapping model works on all hardware • Most systems today implement a virtual address space per process 3
  • 4. Operating Systems I PT / FF 14 Linear Address Space 4 [Figure: several linear address spaces mapped into physical memory (from IBM developerWorks)]
  • 5. Operating Systems I PT / FF 14 Addressing Schemes • Physical / absolute address • Actual location of data in physical memory • If some data is available at address a, it is accessed at this location • Relative address • Location in relation to some known address • Logical address • Memory reference independent from current physical location of data • Access to data at physical location a is performed by using the logical address a' • There exists a mapping f with f(a') = a • f can be implemented in software or hardware -> MMU 5
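The mapping f from logical to physical addresses can be made concrete with a small sketch (not from the slides; the base, limit and addresses are invented values), modeled here as simple base-register relocation with a bounds check:

```python
# Illustrative sketch: a logical-to-physical mapping f implemented in
# software. Real systems implement f in the MMU hardware.

BASE = 0x4000   # hypothetical physical start of the process partition
LIMIT = 0x1000  # hypothetical partition size in bytes

def f(logical_address):
    """Map a logical address a' to its physical address a = BASE + a'."""
    if not (0 <= logical_address < LIMIT):
        raise MemoryError("address out of bounds")  # would trap in hardware
    return BASE + logical_address

print(hex(f(0x10)))  # 0x4010
```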
  • 6. Operating Systems I PT / FF 14 Logical Address Space 6 [Figure: a contiguous logical address space mapped onto several disjoint physical address ranges]
  • 7. Operating Systems I PT / FF 14 Memory Management Unit (MMU) • Hardware device that maps logical to physical addresses • The MMU is part of the processor • Re-programming the MMU is a privileged operation, can only be performed in privileged (kernel) mode • The MMU typically implements one or more mapping approaches • The user program deals with logical addresses only • Never sees the real physical addresses • Transparent translation with each instruction executed 7
  • 8. Operating Systems I PT / FF 14 8 Addressing Schemes [Figure: the core fetches instructions using logical addresses; the MMU translates them via mapping information into physical addresses]
  • 9. Operating Systems I PT / FF 14 Memory Hierarchy • If the operating system manages the memory, it must consider all levels of memory • Memory hierarchy • Differences in capacity, costs and access time • Bigger capacity for lower costs and speed • Principle of locality • Static RAM (SRAM) • Fast, expensive, keeps stored value once it is written • Used only for register files and caches • Dynamic RAM (DRAM) • One capacitor per bit, demands refresh every 10-100 ms 9 [Figure: Fusion Drive intro from Apple keynote 2012 - fast/expensive/small/volatile at the top, slow/cheap/large/non-volatile at the bottom]
  • 10. Operating Systems I PT / FF 14 Memory Hierarchy 10 http://tjliu.myweb.hinet.net/ • The operating system has to manage the memory hierarchy • Programs should have comparable performance on different memory architectures • In some systems, parts of the cache invalidation are a software task (e.g. TLB)
  • 11. Operating Systems I PT / FF 14 Memory Hierarchy • Available main and secondary memory is a shared resource among all processes • Can be allocated and released by operating system and application • Programmers are not aware of other processes in the same system • Main memory is expensive, volatile and fast, good for short-term usage • Secondary memory is cheaper, typically not volatile and slower, good for long-term • Flow between levels in the memory hierarchy is necessary for performance • Traditionally solved by overlaying and swapping • Recurring task for software developers - delegation to operating system • In multiprogramming, this becomes a must 11
  • 12. Operating Systems I PT / FF 14 Swapping • In a multiprogramming environment • Blocked (and ready) processes can be temporarily swapped out of main to secondary memory • Allows for execution of other processes • With physical addresses • Processes will be swapped in into the same memory space that they occupied previously • With logical addresses • Processes can be swapped in at arbitrary physical addresses • Demands relocation support 12 [Figure: processes P1 and P2 swapped out of and into main memory (user space) from the backing store]
  • 13. Operating Systems I PT / FF 14 Memory Management - Protection and Sharing • With relocation capabilities and concurrency, the operating system must implement protection mechanisms against accidental or intentional unwanted process interference • Protection mechanism must work at run-time for each memory reference • Program location in memory is unpredictable • Dynamic address calculation in program code is possible for attackers • Mechanism therefore must be supported by hardware • Software solution would need to screen all (!) memory referencing • Sharing • Allow controlled sharing of memory regions, if protection is satisfied • Both features can be implemented with a relocation mechanism 13
  • 14. Operating Systems I PT / FF 14 Memory Management - Partitioning • With relocation and isolation, the operating system can manage memory partitions • Reserve memory partitions on request • Recycle unused / no longer used memory partitions, implicitly or explicitly • Swap out the content of temporarily unused memory partitions • Memory management must keep state per memory partition • Different partitioning approaches have different properties • Traditional approach was one partition per process 14
  • 15. • Partitioning approaches can be evaluated by their • Fragmentation behavior, performance, overhead • Hypothetical example: Fixed partition size, bit mask for partition state • Small block size -> large bit mask -> small fragmentation • Large block size -> small bit mask -> large fragmentation • External Fragmentation • Total memory space exists to satisfy a request, but it is not contiguous • Internal Fragmentation • Allocated memory may be slightly larger than requested memory • Size difference is memory internal to a partition, but not being used Operating Systems I PT / FF 14 Memory Management - Partitioning 15
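The two fragmentation kinds can be illustrated with made-up numbers (a sketch, not from the slides): internal fragmentation is wasted space inside allocated fixed-size partitions, external fragmentation is free space that exists in total but is not contiguous.

```python
# Internal fragmentation: fixed 4 KiB partitions, a 5000-byte request
# occupies two whole partitions; the surplus is wasted inside them.
PARTITION = 4096
request = 5000
allocated = -(-request // PARTITION) * PARTITION  # round up to whole partitions
internal_waste = allocated - request
print(internal_waste)  # 3192 bytes lost inside the partitions

# External fragmentation: free holes of 3 KiB and 2 KiB total 5 KiB,
# yet a contiguous 4 KiB request still fails.
holes = [3072, 2048]
request2 = 4096
can_satisfy = any(h >= request2 for h in holes)
print(sum(holes) >= request2, can_satisfy)  # True False
```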
  • 16. Operating Systems I PT / FF 14 Memory Partitioning - Dynamic Partitioning • Variable length of partitions, as needed by new processes • Used in the IBM OS/MVT system • Leads to external fragmentation 16 Only P4 is ready - swap out takes place
  • 17. Operating Systems I PT / FF 14 Memory Partitioning - Dynamic Partitioning • External fragmentation can be overcome by compaction • Operating system is shifting partitions so that free memory becomes one block • Time investment vs. performance gain • Demands relocation support • Placement algorithms for unequal fixed and dynamic partitioning • Best-fit: Choose the partition that is closest in size • First-fit: Pick first partition that is large enough • Next-fit: Start check from the last chosen partition and pick next match 17 Example: 16MB allocation
  • 18. Operating Systems I PT / FF 14 Memory Partitioning - Placement • Best-fit: Use smallest free partition match • Sort free partitions by increasing size, choose first match • Leaves mostly unusable fragments, compaction must be done more frequently • Overhead for sorting and searching, tendency for small fragments at list start • Typically worst performer • Worst-fit: Use largest free partition match • Sort free partitions by decreasing size, choose first match • Leave fragments with large size to reduce internal fragmentation • Tendency for small fragments at list end and large fragments at list start • Results in small search overhead, but higher fragmentation; sorting overhead 18
  • 19. Operating Systems I PT / FF 14 Memory Partitioning - Placement • First-fit: • Maintain list of free partitions ordered by addresses • Tendency for small fragments at list start and large fragments at list end • In comparison to best-fit, it can work much faster with less external fragmentation • Next-fit: • Tendency to fragment the large free block at the end of memory • Leads to more equal distribution of fragments 19
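The placement algorithms above can be sketched over a hypothetical free list (the hole sizes and the list-of-sizes model are illustrative, not from the slides):

```python
def first_fit(holes, size):
    """Return index of the first hole (in address order) that fits."""
    for i, h in enumerate(holes):
        if h >= size:
            return i
    return None

def best_fit(holes, size):
    """Return index of the smallest hole that still fits."""
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, size):
    """Return index of the largest hole that fits."""
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(candidates)[1] if candidates else None

def next_fit(holes, size, start):
    """Search circularly, starting from the last chosen position."""
    n = len(holes)
    for k in range(n):
        i = (start + k) % n
        if holes[i] >= size:
            return i
    return None

holes = [8, 12, 22, 18]  # free partition sizes in MB, in address order
print(first_fit(holes, 16), best_fit(holes, 16),
      worst_fit(holes, 16), next_fit(holes, 16, 3))  # 2 3 2 3
```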
  • 20. Operating Systems I PT / FF 14 Compaction • Combine adjacent free memory regions to a larger region • Removes external fragmentation • Can be performed ... • ... with each memory de-allocation • ... when the system is inactive • ... if an allocation request fails • ... if a process terminates • Compaction demands support for relocation 20 [Figure: compaction example - it must be triggered at the latest when a request can no longer be satisfied; three alternatives merge the free space by moving 600, 400 or 200 words]
  • 21. Operating Systems I PT / FF 14 Compaction • Very resource-consuming activity, avoid whenever possible • Time for mandatory compaction can be delayed • Placement strategies and memory de-allocation approaches • Example: Linux • First part of the algorithm looks for movable regions from memory start • Second part of the algorithm looks for free regions from memory end back • Uses existing mechanisms for memory migration (NUMA systems) • Also very interesting research topic for heap management and garbage collection 21
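A minimal model of compaction (illustrative only; the partition names and sizes are invented) shifts the used partitions together so that all free space coalesces into one hole at the end:

```python
def compact(partitions):
    """Shift allocated partitions toward low addresses so that all free
    space coalesces into one block at the end. Each partition is a
    (name, size) pair; a name of None marks a free hole."""
    used = [(n, s) for n, s in partitions if n is not None]
    free = sum(s for n, s in partitions if n is None)
    return used + ([(None, free)] if free else [])

before = [("job1", 300), (None, 200), ("job2", 100), (None, 400), ("job3", 500)]
after = compact(before)
print(after)  # [('job1', 300), ('job2', 100), ('job3', 500), (None, 600)]
```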
  • 22. Operating Systems I PT / FF 14 Memory Management - Segmentation • Each process has several data sections being mapped to its address space • Application program code (procedures, modules, classes) and library program code • Differences in purpose, developer, reliability and quality • Stack(s) • (X86) ESP register points to highest element in the stack, one per process • (X86) PUSH / POP assembler instructions • (X86) CALL leaves return address on the stack, RET returns to current address on the stack • Parameter hand-over in method calls, local variables • Heap(s) • Memory dynamically allocated and freed during runtime by the program code 22
  • 23. Operating Systems I PT / FF 14 Process Data Sections 23 [Figure: internal organization of a Unix address space - code, initialised data, BSS (zeroed), expansion area up to the end of data (break), and stack growing down from the end of the address space] Unix BSS = "Block Started By Symbol" -> non-initialized static and global variables (C) J. Nolte, BTU
  • 24. • Segmentation: • Split process address space into segments • Variable length up to a given maximum • Like dynamic partitioning, but • Partitions don't need to be contiguous - no internal fragmentation • External fragmentation is reduced with multiple partitions per process • Large segments can be used for process isolation (like in the partitioning idea) • Mid-size segments can be used for separating application code, libraries and stack • Small segments can be used for object and record management • Each logical memory address is a tuple of segment number and segment-relative address (offset) • Translated to base / limit values by the MMU Operating Systems I PT / FF 14 Partitioning by Segmentation 24
  • 25. Operating Systems I PT / FF 14 Memory Protection - Bounds / Limit • Every process has different limits, either base/limit pair or bounds pair • Processor has only one valid configuration at a time • Operating system manages limits as part of the processor context per process 25 [Figure: base and limit registers delimiting the partitions of user 1, user 2 and the operating system - hardware and software prototypes] (C) J. Nolte, BTU
  • 26. Operating Systems I PT / FF 14 Segmentation Granularity 26 [Figure: coarse-grained segmentation - code, initialised data, BSS and stack mapped from logical to physical memory via base/limit pairs] • One base/limit pair per segment • Configuration of base/limit registers: • Implicitly through activities (code / data fetch) • Explicitly through the operating system • Segmentation on module level can help to implement shared memory • Code or data sharing • Shared libraries get their own segment, mapped to different processes • Also good for inter-process communication • Separated or combined utilization of a segment for code and data of program modules [Figure: separated vs. combined code and data segments for modules 0..3] (C) J. Nolte, BTU
  • 27. Operating Systems I PT / FF 14 Segmentation Granularity 27 [Figure: mid-grained segmentation - per-module code and data segments mapped from logical to physical memory via individual base/limit pairs] (C) J. Nolte, BTU
  • 28. Operating Systems I PT / FF 14 Segment Tables • With multiple base/limit pairs per process, a segment table must be maintained • Table is in main memory, but must be evaluated by the MMU 28
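The segment-table lookup the MMU performs can be sketched in a few lines (the table contents and addresses are hypothetical):

```python
# Hypothetical segment table: segment number -> (base, limit)
segment_table = {
    0: (0x1000, 0x400),  # code
    1: (0x5000, 0x200),  # data
    2: (0x8000, 0x800),  # stack
}

def translate(segment, offset):
    """Translate a (segment, offset) tuple the way the MMU would:
    check the offset against the segment limit, then add the base."""
    base, limit = segment_table[segment]
    if offset >= limit:
        raise MemoryError("segmentation violation")  # hardware trap
    return base + offset

print(hex(translate(1, 0x10)))  # 0x5010
```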
  • 29. Operating Systems I PT / FF 14 Memory Management - Paging • Segmentation / partitioning always have a fragmentation problem • Fixed-size partitions lead to internal fragmentation • Variable-sized partitions lead to external fragmentation • Solution: Paging • Partition memory into small equal fixed-size chunks - (page) frames • Partition process address space into chunks of the same size - pages • No external fragmentation, only small internal fragmentation in the last page • One page table per process • Maps each process page to a frame - entries for all pages needed • Used by the processor MMU to translate logical to physical addresses 29
  • 30. 30
  • 31. Operating Systems I PT / FF 14 Memory Management - Paging 31
  • 32. Operating Systems I PT / FF 14 Memory Management - Paging • Page frame size depends on processor • 512 Byte (DEC VAX, IBM AS/400), 4K or 4MB (Intel X86) • 4K (IBM 370, PowerPC), 4K up to 16MB (MIPS) • 4KB up to 4MB (UltraSPARC), 512x48Bit (Atlas) • Non-default size only possible with multi-level paging (later) • Each logical address is represented by a tuple (page number, offset) • Page number is an index into the page table • Page Table Entry (PTE) contains physical start address of the frame • Change of process -> change of logical address space -> change of active page table • Start of the active page table in physical memory is stored in a MMU register 32
  • 33. Operating Systems I PT / FF 14 Page Table Sizes 33 [Table: page table size per process for combinations of address space size and page size - example values include 256 MB, 256 GB, 16 TB and 16.7 PB]
  • 34. Operating Systems I PT / FF 14 Address Translation • Adding a page frame (physical address range) to a logical address space • Add the frame number to a free position in the page table • Adding logical address space range to another logical address space • Compute frame number from page number • Add to target page table • Determining the physical address for a logical address • Determine page number, lookup in the page table for the frame start address • Add offset • Determining the logical address for a physical address • Determine page number from frame start address, add offset 34
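The translation steps above can be sketched for paging (page table contents are made up; 4 KiB pages assumed): split the logical address into (page number, offset), look up the frame, and recombine.

```python
PAGE_SIZE = 4096  # 4 KiB pages, as on x86

# Hypothetical per-process page table: page number -> frame number
page_table = {0: 5, 1: 9, 2: 3}

def logical_to_physical(addr):
    """Split the logical address into (page number, offset), look the
    page up in the page table, and prepend the frame number."""
    page, offset = divmod(addr, PAGE_SIZE)
    frame = page_table[page]  # a missing entry would be a page fault
    return frame * PAGE_SIZE + offset

print(hex(logical_to_physical(0x1234)))  # page 1, offset 0x234 -> 0x9234
```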
  • 35. • Mapping of pages to frames can change constantly during run-time • Each memory access demands another one for the page table information • Necessary to cache page table lookup results for performance optimization • Translation Lookaside Buffer (TLB) • Page number + complete PTE per entry • Hardware can check TLB entries in parallel for page number match Operating Systems I PT / FF 14 Paging - Page Tables 35
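A toy software model of a TLB may make the caching idea concrete (a sketch with invented capacity and page table; a real TLB is hardware and checks all entries in parallel):

```python
from collections import OrderedDict

PAGE_SIZE = 4096
page_table = {n: n + 100 for n in range(64)}  # hypothetical mapping

class TLB:
    """Tiny model of a TLB: a small cache of page-table entries
    with least-recently-used eviction."""
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = OrderedDict()  # page number -> frame number
        self.hits = self.misses = 0

    def lookup(self, page):
        if page in self.entries:
            self.hits += 1
            self.entries.move_to_end(page)  # refresh LRU position
        else:
            self.misses += 1                # walk the page table instead
            if len(self.entries) >= self.capacity:
                self.entries.popitem(last=False)
            self.entries[page] = page_table[page]
        return self.entries[page]

tlb = TLB()
for addr in [0x0000, 0x0010, 0x1000, 0x0020]:  # locality: mostly page 0
    tlb.lookup(addr // PAGE_SIZE)
print(tlb.hits, tlb.misses)  # 2 2
```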
  • 36. Operating Systems I PT / FF 14 Protection and Sharing • Logical addressing with paging allows sharing of address space regions • Shared code - multiple program instances, libraries, operating system code • Shared data - concurrent applications, inter-process communication • Protection based on paging mechanisms • Individual rights per page maintained in the page table (read, write, execute) • On violation, the MMU triggers a processor exception (trap) • Address space often has unused holes (e.g. between stack and heap) • If neither process nor operating system allocated this region, it is marked as invalid in the page table • On access, the processor traps 36
  • 37. Operating Systems I PT / FF 14 The NX Bit • The No eXecute bit marks a page as not executable • Very good protection against stack or heap-based overflow attacks • Well-known for decades in non-X86 processor architectures • AMD decided to add it to the AMD64 instruction set, Intel adopted it since the Pentium 4 • Demands operating system support for new page table structure • Support in all recent Windows and Linux versions 37
  • 38. Operating Systems I PT / FF 14 X86 Page Table Entry (PTE) 38 [Figure: Intel x86 PTE layout - bits 31-12 hold the page frame number; the low bits are status and protection flags: Valid, Write (writable on MP systems), Owner, Write through, Cache disabled, Accessed, Dirty, Large page (if PDE), Global; reserved bits are used only when the PTE is not valid]
  • 39. Operating Systems I PT / FF 14 Virtual Memory • Common concepts in segmentation and paging • All memory references are translated to physical addresses at run-time • Address spaces are transparently broken up into non-contiguous pieces • Combination allows to not have all parts of an address space in main memory • Commonly described as virtual memory concept • Size of virtual memory is limited by address space size and secondary storage • More processes can be maintained in main memory • Process address space occupation may exceed the available physical memory • First reported for Atlas computer (1962) 39
  • 40. Operating Systems I PT / FF 14 Virtual Memory with Paging • Frames can be dynamically mapped into address spaces • Example: Dynamic heap extension, creation of shared regions • Demands page table modification for the address space owner • Frames can be mapped into multiple different logical address spaces • Page-in / page-out: Taking a frame out of the logical address space • Mark the address space region as invalid in the page table • Move the (page) data to somewhere else, release the frame • On access, the trap handler of the operating system is called • Triggers the page swap-in to allow the access (same frame ?) • This is often simply called swapping, even though it is page swapping 40
  • 41. Operating Systems I PT / FF 14 Page Swapping 41 [Figure: animated example of pages being swapped between main memory frames and the backing store]
  • 57. Operating Systems I PT / FF 14 Thrashing • A state in which the system spends most of its time swapping process pieces rather than executing instructions • Avoidance by operating system • Tries to guess which pieces are least likely to be used in the near future • Based on historic data in the system run • Was a major research topic in the 1970s • Solutions rely on the principle of locality • Program and data references within a process tend to cluster • Only a few pieces of a process will be needed over a short period of time • Can be used by hardware and software for thrashing avoidance 42
  • 58. Operating Systems I PT / FF 14 Virtual Memory + Page Size • The smaller the page size, the less internal fragmentation • However, more pages are required per process • More pages per process means larger page tables • For large programs, some portion of the page tables must be in virtual memory • Most secondary-memory devices favor a larger page size for block transfer 43 Computer Page Size Atlas 512 48-bit words Honeywell-Multics 1024 36-bit words IBM 370/XA and 370/ESA 4 Kbytes VAX family 512 bytes IBM AS/400 512 bytes DEC Alpha 8 Kbytes MIPS 4 Kbytes to 16 Mbytes UltraSPARC 8 Kbytes to 4 Mbytes Pentium 4 Kbytes or 4 Mbytes IBM POWER 4 Kbytes Itanium 4 Kbytes to 256 Mbytes
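The page-size trade-off can be quantified with a small calculation (the 10 MB process size and the page sizes are illustrative): smaller pages waste less memory in the last page, but need far more page table entries.

```python
def waste_and_table_entries(process_size, page_size):
    """Return (internal fragmentation in the last page, number of
    page table entries) for a process of the given size."""
    pages = -(-process_size // page_size)  # ceiling division
    internal_waste = pages * page_size - process_size
    return internal_waste, pages

for page_size in (512, 4096, 65536):
    waste, entries = waste_and_table_entries(10_000_000, page_size)
    # e.g. 4096 -> 2432 bytes wasted, but only 2442 entries
    print(page_size, waste, entries)
```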
  • 59. Operating Systems I PT / FF 14 Policies for Virtual Memory • Ultimate goal: Minimize page faults for better performance • Fetch policy • Demand paging: Only bring pages to memory when they are referenced • Many page faults on process start • For long-running applications, principle of locality improves performance • Prepaging: Bring in more than only the demanded page into main memory • Facilitates the block device nature of secondary storage • Inefficient if extra pages are never used • Different from 'swapping' • Placement policy (e.g. with NUMA) and replacement policy 44
  • 60. Operating Systems I PT / FF 14 Replacement Policy • Frame locking: The page currently stored in that frame may not be replaced • Kernel of the OS as well as key control structures are held in locked frames • I/O buffers and time-critical areas may be locked into main memory frames • Locking is achieved by associating a lock bit with each frame • Permanent restriction on replacement policy algorithm • Algorithms • Optimal replacement policy • Selects the page for which the time to the next reference is the longest • Can be shown that this policy results in the fewest number of page faults • Impossible to implement, but good base-line for comparison 45
  • 61. Operating Systems I PT / FF 14 Replacement Policy • Algorithms • Least recently used (LRU) • Replaces the page that has not been referenced for the longest time • Should be the page least likely to be referenced in the near future • Difficult to implement, e.g. by tagging with last time of reference (overhead) • First-in-first-out (FIFO) • Treats page frames allocated to a process as a circular buffer • Pages are removed in round-robin style - simple to implement • Page that has been in memory the longest is replaced 46
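The FIFO and LRU policies above can be compared by simulation (the reference string and frame count are the classic textbook example, not from the slides); note that with this particular string FIFO actually causes fewer faults than LRU:

```python
from collections import OrderedDict, deque

def fifo_faults(refs, frames):
    """Count page faults with FIFO replacement (circular buffer of pages)."""
    resident, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in resident:
            faults += 1
            if len(resident) == frames:
                resident.discard(queue.popleft())  # evict oldest arrival
            resident.add(page)
            queue.append(page)
    return faults

def lru_faults(refs, frames):
    """Count page faults with least-recently-used replacement."""
    resident, faults = OrderedDict(), 0
    for page in refs:
        if page in resident:
            resident.move_to_end(page)             # refresh recency
        else:
            faults += 1
            if len(resident) == frames:
                resident.popitem(last=False)       # evict least recently used
            resident[page] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3), lru_faults(refs, 3))  # 9 10
```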
  • 62. Operating Systems I PT / FF 14 No Swapping ? • Some sources argue that systems without swapping perform better • Some counter-arguments: • Swapping removes information only used once from main memory • Initialization code or dead code • Event-driven code that may never be triggered in the current system run • Constant data • Resources being loaded on start-up • Extra memory generated by swapping is typically used for the file system cache • Operating systems are heavily optimized for not swapping the wrong pages • Memory 'hogs' would get an unfair advantage in system resource usage 47
  • 63. Operating Systems I PT / FF 14 Summary • Different well-established memory management principles • Logical vs. physical addresses • Segmentation, Paging, Swapping • Memory management relies on hardware support • Page table / segment table implementation • Translation Lookaside Buffer • Trap on page fault • In combination with secondary storage, virtual memory can be implemented • Address space size and usage decoupled from main memory organization 48