Amrita School of Engineering, Bangalore
Ms. Harika Pudugosula
Lecturer
Department of Electronics & Communication Engineering
• Background
• Swapping
• Contiguous Memory Allocation
• Segmentation
• Paging
• Structure of the Page Table
Swapping
• A process must be in memory to be executed
• A process can be swapped temporarily out of memory to a backing store and then
brought back into memory for continued execution
• Swapping makes it possible for the total physical address space of all processes to
exceed the real physical memory of the system, thus increasing the degree of
multiprogramming in a system
Swapping
fig: Swapping of two processes using a disk as a backing store
• Standard swapping involves moving processes between main memory and a backing
store
• The backing store is commonly a fast disk
• Backing store
• It must be large enough to accommodate copies of all memory images for all
users, and it must provide direct access to these memory images
• The system maintains a ready queue consisting of all processes whose memory
images are on the backing store or in memory and are ready to run
• Whenever the CPU scheduler decides to execute a process, it calls the dispatcher
• The dispatcher checks to see whether the next process in the queue is in memory
• If it is not, and if there is no free memory region, the dispatcher swaps out a
process currently in memory and swaps in the desired process
• It then reloads registers and transfers control to the selected process
• The context-switch time in such a swapping system is fairly high
Standard Swapping
• Let’s assume that the user process is 100 MB in size and the backing store is a
standard hard disk with a transfer rate of 50 MB per second
• The actual transfer of the 100-MB process to or from main memory takes
100 MB/50 MB per second = 2 seconds
• The swap time is 2,000 milliseconds per transfer
• Since we must swap both out and in, the total swap time is about 4,000 milliseconds
• In priority-based and round-robin (RR) scheduling algorithms, swapping can be used
to swap out a preempted or lower-priority process so another process can run
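The slide's arithmetic can be sketched as follows; as in the text, the transfer rate is assumed to dominate and disk latency is ignored:

```python
def swap_time_ms(process_mb, rate_mb_per_s):
    """One-way transfer time for a swap, in milliseconds."""
    return process_mb / rate_mb_per_s * 1000

one_way = swap_time_ms(100, 50)   # 100 MB at 50 MB/s -> 2000 ms
total = 2 * one_way               # swap out + swap in -> about 4000 ms
```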
• Usually, a swapped-out process is brought back into the same memory space it
previously occupied, if binding is done at load time
• If binding is done at execution time, the process can be moved to a different place,
because the physical addresses are computed at run time
• The major part of the swap time is transfer time
• The total transfer time is directly proportional to the amount of memory swapped
Standard Swapping
• Swapping is constrained by the following factors as well
• If we want to swap a process, we must be sure that it is completely idle. What
happens if a process is waiting for an I/O operation when we want to swap it out
to free up memory?
• Two options -
1. never swap a process with pending I/O
2. execute I/O operations only into operating-system buffers
• Double buffering -
• Copy the I/O data from the process's buffer to kernel memory before swapping
out the process, and then copy the same data from kernel memory to user
memory before the user process can access it
• Standard swapping is not used in modern OS as it requires too much swapping
time and provides too little execution time to be a reasonable memory-
management solution
Standard Swapping
• Modified versions of swapping are used instead; the following variants are common -
1. Common version
• Swapping is normally disabled but will start if the amount of free memory falls
below a threshold
• Swapping is halted when the amount of free memory increases
2. Another variation
• It involves swapping portions of processes—rather than entire processes—to
decrease swap time
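The first variant behaves like a watermark scheme. A minimal sketch, with hysteresis so swapping does not flap on and off; the threshold values are illustrative assumptions, not from the slide:

```python
LOW_WATERMARK_MB = 64    # assumed: start swapping below this much free memory
HIGH_WATERMARK_MB = 128  # assumed: stop swapping once free memory recovers

def swapping_enabled(free_mb, currently_swapping):
    """Enable swapping below the low mark, disable above the high mark."""
    if free_mb < LOW_WATERMARK_MB:
        return True
    if free_mb > HIGH_WATERMARK_MB:
        return False
    return currently_swapping  # in between, keep the current state
```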
Standard Swapping
• Most operating systems for PCs and servers support some modified version of
swapping
• Mobile systems typically do not support swapping in any form
• Mobile devices generally use flash memory rather than more spacious hard disks
as their persistent storage
• Reasons why mobile operating-system designers avoid swapping -
• space constraint is one reason
• the limited number of writes that flash memory can tolerate before it becomes
unreliable
• the poor throughput between main memory and flash memory in the mobile
devices
• Instead of swapping, mobile systems use other methods to free memory when it is low -
• iOS asks apps to voluntarily relinquish allocated memory
• Read-only data thrown out and reloaded from flash if needed
Swapping on Mobile Systems
• Failure to free enough memory can result in the app's termination
• Android terminates apps if low free memory, but first writes application state to
flash for fast restart
• So, developers for mobile systems must carefully allocate and release memory to
ensure that their applications do not use too much memory or suffer from memory
leaks
Swapping on Mobile Systems
Contiguous Memory Allocation
• The main memory must accommodate both the operating system and the various
user processes - need to allocate main memory in the most efficient way possible
• The memory is usually divided into two partitions:
• one for the resident operating system
• one for the user processes
• Operating System may be in either low memory or high memory
• The major factor affecting this decision is the location of the interrupt vector
• Since the interrupt vector is often in low memory, programmers usually place the
operating system in low memory
• How to allocate available memory to the processes that are in the input queue
waiting to be brought into memory?
• The issue of memory protection - how to prevent a process from accessing
memory that it does not own?
• Memory protection can be provided with a relocation register together with a limit
register
• The relocation register contains the value of the smallest physical address
• The limit register contains the range of logical addresses; each logical address
must fall within that range, or the hardware traps to the operating system
• The MMU maps the logical address dynamically by adding the value in the
relocation register; this mapped address is sent to memory
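The check-and-map step can be sketched as follows; the register values mirror the textbook's usual example, and the function name is ours:

```python
def mmu_translate(logical, relocation, limit):
    """Compare against the limit register, then add the relocation register."""
    if not (0 <= logical < limit):
        # the hardware would trap to the operating system here
        raise MemoryError("trap: addressing error")
    return logical + relocation

physical = mmu_translate(346, relocation=14000, limit=5000)  # -> 14346
```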
Memory Protection
• When the CPU scheduler selects a process for execution, the dispatcher loads the
relocation and limit registers with the correct values as part of the context switch
• Because every address generated by a CPU is checked against these registers, we can
protect both the operating system and the other users’ programs and data from
being modified by this running process
• The relocation-register scheme provides an effective way to allow the operating
system’s size to change dynamically
• Example -
• The operating system contains code and buffer space for device drivers
• If a device driver is not commonly used, we do not want to keep the code and
data in memory, as we might be able to use that space for other purposes
• Such code is sometimes called transient operating-system code; it comes and
goes as needed. Thus, using this code changes the size of the operating system
during program execution
Memory Protection
• One of the simplest methods for allocating memory is to divide memory into several
fixed-sized partitions. Each partition may contain exactly one process, so the degree
of multiprogramming is bound by the number of partitions
• Multiple partition method
• When a partition is free, a process is selected from the input queue and is
loaded into the free partition
• When the process terminates, the partition becomes available for another
process
• This method was originally used by the IBM OS/360 operating system (called MFT
- Multiprogramming with a Fixed number of Tasks) but is no longer in use
• Variable-partition scheme (called MVT - Multiprogramming with a Variable
number of Tasks)
• A generalization of the fixed-partition scheme, used primarily in batch
environments
• Many of the ideas presented here are also applicable to a time-sharing
environment in which pure segmentation is used for memory management
Memory Allocation
• Variable-partition scheme
• The operating system keeps a table indicating which parts of memory are
available and which are occupied
• Initially, all memory is available for user processes and is considered one large
block of available memory, a hole
• Eventually, memory contains a set of holes of various sizes
• When a process arrives, it is allocated memory from a hole large enough to
accommodate it
• When a process exits, it frees its partition, and adjacent free partitions are combined
Memory Allocation
• As processes enter the system, they are put into an input queue
• OS takes into account the memory requirements of each process and the amount of
available memory space in determining which processes are allocated memory
• When a process is allocated space, it is loaded into memory, and it can then
compete for CPU time
• When a process terminates, it releases its memory, which the operating system may
then fill with another process from the input queue
• At any given time, we have a list of available block sizes and an input queue
• Memory is allocated to processes until the memory requirements of the next
process cannot be satisfied - that is, no available block of memory is large enough
to hold that process
• The operating system can then wait until a large enough block is available, or it can
skip down the input queue to see whether the smaller memory requirements of
some other process can be met.
Memory Allocation
Memory Allocation
• A set of holes of various sizes scattered throughout memory
• When a process arrives and needs memory, OS searches the set for a hole that is
large enough for this process
• If the hole is too large, it is split into two parts.
• One part is allocated to the arriving process, the other is returned to the set of holes
• When a process terminates, it releases its block of memory, which is then placed
back in the set of holes
• If the new hole is adjacent to other holes, these adjacent holes are merged to form
one larger hole
• At this point, the system may need to check whether there are processes waiting for
memory and whether this newly freed and recombined memory could satisfy the
demands of any of these waiting processes
• This procedure is a particular instance of the general dynamic storage allocation
problem, which concerns how to satisfy a request of size n from a list of free holes
Memory Allocation
• Solution to dynamic storage allocation problem -
• The first-fit, best-fit, and worst-fit strategies are the ones most commonly used to
select a free hole from the set of available holes
• First fit -
• Allocate the first hole that is big enough
• Searching can start either at the beginning of the set of holes or at the location
where the previous first-fit search ended
• We can stop searching as soon as we find a free hole that is large enough
• Best fit -
• Allocate the smallest hole that is big enough
• We must search the entire list, unless the list is ordered by size
• This strategy produces the smallest leftover hole
• Worst fit -
• Allocate the largest hole
• We must search the entire list, unless it is sorted by size
• This strategy produces the largest leftover hole, which may be more useful than
the smaller leftover hole from a best-fit approach
Memory Allocation
• Simulations show that both first fit and best fit are better than worst fit in terms of
both time and storage utilization
• Neither first fit nor best fit is clearly better than the other in terms of storage
utilization, but first fit is generally faster
• Example -
Memory Allocation
fig: memory partitions of sizes 100, 500, 200, 300, and 600 in RAM, with processes of
sizes 212, 426, 417, and 112 requesting memory
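The three strategies can be traced over a list of hole sizes with a short sketch; the request order below (212, 417, 112, 426) is an assumption about the example, and the function name is ours:

```python
def allocate(holes, request, strategy):
    """Pick a hole index per strategy and shrink it in place; None if it must wait."""
    fits = [i for i, h in enumerate(holes) if h >= request]
    if not fits:
        return None                              # no hole is large enough
    if strategy == "first":
        i = fits[0]                              # first hole big enough
    elif strategy == "best":
        i = min(fits, key=lambda j: holes[j])    # smallest hole big enough
    else:                                        # "worst"
        i = max(fits, key=lambda j: holes[j])    # largest hole
    holes[i] -= request                          # split: remainder stays a hole
    return i

for strategy in ("first", "best", "worst"):
    holes = [100, 500, 200, 300, 600]
    placed = [allocate(holes, p, strategy) for p in (212, 417, 112, 426)]
    print(strategy, placed, holes)
```

With these numbers, only best fit places all four requests; under first fit and worst fit the 426 request finds no hole large enough and must wait.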
• Both the first-fit and best-fit strategies for memory allocation suffer from external
fragmentation
• As processes are loaded and removed from memory, the free memory space is
broken into little pieces
• External fragmentation -
• When there is enough total memory space to satisfy a request but the available
spaces are not contiguous
• Storage is fragmented into a large number of small holes
• This fragmentation problem can be severe; in the worst case, there is a block of free
(or wasted) memory between every two processes
• Another factor is which end of a free block is allocated
• No matter which algorithm is used, however, external fragmentation will be a
problem
• Given N allocated blocks, another 0.5 N blocks will be lost to fragmentation - that is,
0.5 N of 1.5 N blocks, or one-third of memory, may be unusable! This property is
known as the 50-percent rule
Fragmentation
• Memory fragmentation can be internal as well as external
• Internal fragmentation -
• Consider a multiple-partition allocation scheme with a hole of 18,464 bytes.
Suppose that the next process requests 18,462 bytes
• If we allocate exactly the requested block, we are left with a hole of 2 bytes
• The overhead to keep track of this hole will be substantially larger than the hole
itself
• The general approach to avoiding this problem is to break the physical memory
into fixed-sized blocks and allocate memory in units based on block size
• With this approach, the memory allocated to a process may be slightly larger
than the requested memory
• The difference between these two numbers is internal fragmentation—unused
memory that is internal to a partition
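With fixed-sized blocks, the internal fragmentation for a request is the round-up waste in the last block. A sketch; the 4 KB block size is an assumption for illustration:

```python
import math

BLOCK = 4096  # assumed fixed block size (4 KB)

def internal_fragmentation(request_bytes, block=BLOCK):
    """Unused bytes inside the allocated blocks for one request."""
    allocated = math.ceil(request_bytes / block) * block
    return allocated - request_bytes

internal_fragmentation(18462)  # 5 blocks = 20480 bytes allocated -> 2018 wasted
```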
Fragmentation
• Solutions to the problem of external fragmentation -
• Compaction -
• To shuffle the memory contents so as to place all free memory together in one
large block
• Compaction is possible only if relocation is dynamic and is done at execution
time, so that a process can be moved away from the address where it was first
loaded
• The simplest compaction algorithm moves all processes toward one end of
memory, all holes move in the other direction, producing one large hole of
available memory
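The "move everything toward one end" algorithm can be sketched over a segment list; the representation (name, size) pairs with None marking a hole is ours:

```python
def compact(segments):
    """segments: list of (name, size); name None marks a hole.
    Slide all processes to the low end and merge all holes into one large hole."""
    processes = [(name, size) for name, size in segments if name is not None]
    free = sum(size for name, size in segments if name is None)
    return processes + [(None, free)] if free else processes
```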
• Noncontiguous memory allocation -
• To permit the logical address space of the processes to be noncontiguous, thus
allowing a process to be allocated physical memory wherever such memory is
available
• Two complementary techniques achieve this solution - segmentation and paging
Fragmentation
References
1. Silberschatz and Galvin, “Operating System Concepts,” Ninth Edition,
John Wiley and Sons, 2012.
Thank you

More Related Content

Similar to Memory Management Strategies - II.pdf

MK Sistem Operasi.pdf
MK Sistem Operasi.pdfMK Sistem Operasi.pdf
MK Sistem Operasi.pdfwisard1
 
07-MemoryManagement.ppt
07-MemoryManagement.ppt07-MemoryManagement.ppt
07-MemoryManagement.ppthello509579
 
UNIT 3-EXPLAINING THE MEMORY MANAGEMENT LOGICAL AND AND PHYSICAL DATA FLOW DI...
UNIT 3-EXPLAINING THE MEMORY MANAGEMENT LOGICAL AND AND PHYSICAL DATA FLOW DI...UNIT 3-EXPLAINING THE MEMORY MANAGEMENT LOGICAL AND AND PHYSICAL DATA FLOW DI...
UNIT 3-EXPLAINING THE MEMORY MANAGEMENT LOGICAL AND AND PHYSICAL DATA FLOW DI...LeahRachael
 
Computer Architecture & Organization.ppt
Computer Architecture & Organization.pptComputer Architecture & Organization.ppt
Computer Architecture & Organization.pptFarhanaMariyam1
 
Operating Systems 1 (9/12) - Memory Management Concepts
Operating Systems 1 (9/12) - Memory Management ConceptsOperating Systems 1 (9/12) - Memory Management Concepts
Operating Systems 1 (9/12) - Memory Management ConceptsPeter Tröger
 
CSE2010- Module 4 V1.pptx
CSE2010- Module 4 V1.pptxCSE2010- Module 4 V1.pptx
CSE2010- Module 4 V1.pptxMadhuraK13
 
Chapter1 Computer System Overview.ppt
Chapter1 Computer System Overview.pptChapter1 Computer System Overview.ppt
Chapter1 Computer System Overview.pptShikhaManrai1
 
Module 1 Introduction.ppt
Module 1 Introduction.pptModule 1 Introduction.ppt
Module 1 Introduction.pptshreesha16
 
Operating system memory management
Operating system memory managementOperating system memory management
Operating system memory managementrprajat007
 
memory managment on computer science.ppt
memory managment on computer science.pptmemory managment on computer science.ppt
memory managment on computer science.pptfootydigarse
 
Parallel Processing & Pipelining in Computer Architecture_Prof.Sumalatha.pptx
Parallel Processing & Pipelining in Computer Architecture_Prof.Sumalatha.pptxParallel Processing & Pipelining in Computer Architecture_Prof.Sumalatha.pptx
Parallel Processing & Pipelining in Computer Architecture_Prof.Sumalatha.pptxSumalatha A
 
Operating System BCA 301
Operating System BCA 301Operating System BCA 301
Operating System BCA 301cpjcollege
 
Chapter1 Computer System Overview Part-1.ppt
Chapter1 Computer System Overview Part-1.pptChapter1 Computer System Overview Part-1.ppt
Chapter1 Computer System Overview Part-1.pptShikhaManrai1
 

Similar to Memory Management Strategies - II.pdf (20)

chapter1.ppt
chapter1.pptchapter1.ppt
chapter1.ppt
 
MK Sistem Operasi.pdf
MK Sistem Operasi.pdfMK Sistem Operasi.pdf
MK Sistem Operasi.pdf
 
08 operating system support
08 operating system support08 operating system support
08 operating system support
 
07-MemoryManagement.ppt
07-MemoryManagement.ppt07-MemoryManagement.ppt
07-MemoryManagement.ppt
 
UNIT 3-EXPLAINING THE MEMORY MANAGEMENT LOGICAL AND AND PHYSICAL DATA FLOW DI...
UNIT 3-EXPLAINING THE MEMORY MANAGEMENT LOGICAL AND AND PHYSICAL DATA FLOW DI...UNIT 3-EXPLAINING THE MEMORY MANAGEMENT LOGICAL AND AND PHYSICAL DATA FLOW DI...
UNIT 3-EXPLAINING THE MEMORY MANAGEMENT LOGICAL AND AND PHYSICAL DATA FLOW DI...
 
Ch4 memory management
Ch4 memory managementCh4 memory management
Ch4 memory management
 
Os unit 3
Os unit 3Os unit 3
Os unit 3
 
Computer Architecture & Organization.ppt
Computer Architecture & Organization.pptComputer Architecture & Organization.ppt
Computer Architecture & Organization.ppt
 
Operating Systems 1 (9/12) - Memory Management Concepts
Operating Systems 1 (9/12) - Memory Management ConceptsOperating Systems 1 (9/12) - Memory Management Concepts
Operating Systems 1 (9/12) - Memory Management Concepts
 
CSE2010- Module 4 V1.pptx
CSE2010- Module 4 V1.pptxCSE2010- Module 4 V1.pptx
CSE2010- Module 4 V1.pptx
 
Chapter1 Computer System Overview.ppt
Chapter1 Computer System Overview.pptChapter1 Computer System Overview.ppt
Chapter1 Computer System Overview.ppt
 
Chapter07_ds.ppt
Chapter07_ds.pptChapter07_ds.ppt
Chapter07_ds.ppt
 
Memory management
Memory managementMemory management
Memory management
 
Module 1 Introduction.ppt
Module 1 Introduction.pptModule 1 Introduction.ppt
Module 1 Introduction.ppt
 
Operating system memory management
Operating system memory managementOperating system memory management
Operating system memory management
 
ch1.ppt
ch1.pptch1.ppt
ch1.ppt
 
memory managment on computer science.ppt
memory managment on computer science.pptmemory managment on computer science.ppt
memory managment on computer science.ppt
 
Parallel Processing & Pipelining in Computer Architecture_Prof.Sumalatha.pptx
Parallel Processing & Pipelining in Computer Architecture_Prof.Sumalatha.pptxParallel Processing & Pipelining in Computer Architecture_Prof.Sumalatha.pptx
Parallel Processing & Pipelining in Computer Architecture_Prof.Sumalatha.pptx
 
Operating System BCA 301
Operating System BCA 301Operating System BCA 301
Operating System BCA 301
 
Chapter1 Computer System Overview Part-1.ppt
Chapter1 Computer System Overview Part-1.pptChapter1 Computer System Overview Part-1.ppt
Chapter1 Computer System Overview Part-1.ppt
 

More from Harika Pudugosula

Artificial Neural Networks_Part-2.pptx
Artificial Neural Networks_Part-2.pptxArtificial Neural Networks_Part-2.pptx
Artificial Neural Networks_Part-2.pptxHarika Pudugosula
 
Artificial Neural Networks_Part-1.pptx
Artificial Neural Networks_Part-1.pptxArtificial Neural Networks_Part-1.pptx
Artificial Neural Networks_Part-1.pptxHarika Pudugosula
 
Multithreaded Programming Part- III.pdf
Multithreaded Programming Part- III.pdfMultithreaded Programming Part- III.pdf
Multithreaded Programming Part- III.pdfHarika Pudugosula
 
Multithreaded Programming Part- II.pdf
Multithreaded Programming Part- II.pdfMultithreaded Programming Part- II.pdf
Multithreaded Programming Part- II.pdfHarika Pudugosula
 
Multithreaded Programming Part- I.pdf
Multithreaded Programming Part- I.pdfMultithreaded Programming Part- I.pdf
Multithreaded Programming Part- I.pdfHarika Pudugosula
 
Memory Management Strategies - IV.pdf
Memory Management Strategies - IV.pdfMemory Management Strategies - IV.pdf
Memory Management Strategies - IV.pdfHarika Pudugosula
 
Memory Management Strategies - III.pdf
Memory Management Strategies - III.pdfMemory Management Strategies - III.pdf
Memory Management Strategies - III.pdfHarika Pudugosula
 
Virtual Memory Management Part - II.pdf
Virtual Memory Management Part - II.pdfVirtual Memory Management Part - II.pdf
Virtual Memory Management Part - II.pdfHarika Pudugosula
 
Virtual Memory Management Part - I.pdf
Virtual Memory Management Part - I.pdfVirtual Memory Management Part - I.pdf
Virtual Memory Management Part - I.pdfHarika Pudugosula
 
Operating System Structure Part-II.pdf
Operating System Structure Part-II.pdfOperating System Structure Part-II.pdf
Operating System Structure Part-II.pdfHarika Pudugosula
 
Operating System Structure Part-I.pdf
Operating System Structure Part-I.pdfOperating System Structure Part-I.pdf
Operating System Structure Part-I.pdfHarika Pudugosula
 
Lecture-4_Process Management.pdf
Lecture-4_Process Management.pdfLecture-4_Process Management.pdf
Lecture-4_Process Management.pdfHarika Pudugosula
 
Lecture_3-Process Management.pdf
Lecture_3-Process Management.pdfLecture_3-Process Management.pdf
Lecture_3-Process Management.pdfHarika Pudugosula
 

More from Harika Pudugosula (20)

Artificial Neural Networks_Part-2.pptx
Artificial Neural Networks_Part-2.pptxArtificial Neural Networks_Part-2.pptx
Artificial Neural Networks_Part-2.pptx
 
Artificial Neural Networks_Part-1.pptx
Artificial Neural Networks_Part-1.pptxArtificial Neural Networks_Part-1.pptx
Artificial Neural Networks_Part-1.pptx
 
Introduction.pptx
Introduction.pptxIntroduction.pptx
Introduction.pptx
 
CPU Scheduling Part-III.pdf
CPU Scheduling Part-III.pdfCPU Scheduling Part-III.pdf
CPU Scheduling Part-III.pdf
 
CPU Scheduling Part-II.pdf
CPU Scheduling Part-II.pdfCPU Scheduling Part-II.pdf
CPU Scheduling Part-II.pdf
 
CPU Scheduling Part-I.pdf
CPU Scheduling Part-I.pdfCPU Scheduling Part-I.pdf
CPU Scheduling Part-I.pdf
 
Multithreaded Programming Part- III.pdf
Multithreaded Programming Part- III.pdfMultithreaded Programming Part- III.pdf
Multithreaded Programming Part- III.pdf
 
Multithreaded Programming Part- II.pdf
Multithreaded Programming Part- II.pdfMultithreaded Programming Part- II.pdf
Multithreaded Programming Part- II.pdf
 
Multithreaded Programming Part- I.pdf
Multithreaded Programming Part- I.pdfMultithreaded Programming Part- I.pdf
Multithreaded Programming Part- I.pdf
 
Deadlocks Part- III.pdf
Deadlocks Part- III.pdfDeadlocks Part- III.pdf
Deadlocks Part- III.pdf
 
Deadlocks Part- II.pdf
Deadlocks Part- II.pdfDeadlocks Part- II.pdf
Deadlocks Part- II.pdf
 
Deadlocks Part- I.pdf
Deadlocks Part- I.pdfDeadlocks Part- I.pdf
Deadlocks Part- I.pdf
 
Memory Management Strategies - IV.pdf
Memory Management Strategies - IV.pdfMemory Management Strategies - IV.pdf
Memory Management Strategies - IV.pdf
 
Memory Management Strategies - III.pdf
Memory Management Strategies - III.pdfMemory Management Strategies - III.pdf
Memory Management Strategies - III.pdf
 
Virtual Memory Management Part - II.pdf
Virtual Memory Management Part - II.pdfVirtual Memory Management Part - II.pdf
Virtual Memory Management Part - II.pdf
 
Virtual Memory Management Part - I.pdf
Virtual Memory Management Part - I.pdfVirtual Memory Management Part - I.pdf
Virtual Memory Management Part - I.pdf
 
Operating System Structure Part-II.pdf
Operating System Structure Part-II.pdfOperating System Structure Part-II.pdf
Operating System Structure Part-II.pdf
 
Operating System Structure Part-I.pdf
Operating System Structure Part-I.pdfOperating System Structure Part-I.pdf
Operating System Structure Part-I.pdf
 
Lecture-4_Process Management.pdf
Lecture-4_Process Management.pdfLecture-4_Process Management.pdf
Lecture-4_Process Management.pdf
 
Lecture_3-Process Management.pdf
Lecture_3-Process Management.pdfLecture_3-Process Management.pdf
Lecture_3-Process Management.pdf
 

Recently uploaded

Risk Assessment For Installation of Drainage Pipes.pdf
Risk Assessment For Installation of Drainage Pipes.pdfRisk Assessment For Installation of Drainage Pipes.pdf
Risk Assessment For Installation of Drainage Pipes.pdfROCENODodongVILLACER
 
Past, Present and Future of Generative AI
Past, Present and Future of Generative AIPast, Present and Future of Generative AI
Past, Present and Future of Generative AIabhishek36461
 
Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024hassan khalil
 
IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024Mark Billinghurst
 
8251 universal synchronous asynchronous receiver transmitter
8251 universal synchronous asynchronous receiver transmitter8251 universal synchronous asynchronous receiver transmitter
8251 universal synchronous asynchronous receiver transmitterShivangiSharma879191
 
Why does (not) Kafka need fsync: Eliminating tail latency spikes caused by fsync
Why does (not) Kafka need fsync: Eliminating tail latency spikes caused by fsyncWhy does (not) Kafka need fsync: Eliminating tail latency spikes caused by fsync
Why does (not) Kafka need fsync: Eliminating tail latency spikes caused by fsyncssuser2ae721
 
Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)
Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)
Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)dollysharma2066
 
Instrumentation, measurement and control of bio process parameters ( Temperat...
Instrumentation, measurement and control of bio process parameters ( Temperat...Instrumentation, measurement and control of bio process parameters ( Temperat...
Instrumentation, measurement and control of bio process parameters ( Temperat...121011101441
 
What are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptxWhat are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptxwendy cai
 
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdf
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdfCCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdf
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdfAsst.prof M.Gokilavani
 
An experimental study in using natural admixture as an alternative for chemic...
An experimental study in using natural admixture as an alternative for chemic...An experimental study in using natural admixture as an alternative for chemic...
An experimental study in using natural admixture as an alternative for chemic...Chandu841456
 
Introduction to Machine Learning Unit-3 for II MECH
Introduction to Machine Learning Unit-3 for II MECHIntroduction to Machine Learning Unit-3 for II MECH
Introduction to Machine Learning Unit-3 for II MECHC Sai Kiran
 
Sachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective IntroductionSachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective IntroductionDr.Costas Sachpazis
 
Oxy acetylene welding presentation note.
Oxy acetylene welding presentation note.Oxy acetylene welding presentation note.
Oxy acetylene welding presentation note.eptoze12
 
An introduction to Semiconductor and its types.pptx
An introduction to Semiconductor and its types.pptxAn introduction to Semiconductor and its types.pptx
An introduction to Semiconductor and its types.pptxPurva Nikam
 
INFLUENCE OF NANOSILICA ON THE PROPERTIES OF CONCRETE
INFLUENCE OF NANOSILICA ON THE PROPERTIES OF CONCRETEINFLUENCE OF NANOSILICA ON THE PROPERTIES OF CONCRETE
INFLUENCE OF NANOSILICA ON THE PROPERTIES OF CONCRETEroselinkalist12
 
Arduino_CSE ece ppt for working and principal of arduino.ppt
Arduino_CSE ece ppt for working and principal of arduino.pptArduino_CSE ece ppt for working and principal of arduino.ppt
Arduino_CSE ece ppt for working and principal of arduino.pptSAURABHKUMAR892774
 

Recently uploaded (20)

Risk Assessment For Installation of Drainage Pipes.pdf
Risk Assessment For Installation of Drainage Pipes.pdfRisk Assessment For Installation of Drainage Pipes.pdf
Risk Assessment For Installation of Drainage Pipes.pdf
 
Past, Present and Future of Generative AI
Past, Present and Future of Generative AIPast, Present and Future of Generative AI
Past, Present and Future of Generative AI
 
Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024
 
IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024
 
POWER SYSTEMS-1 Complete notes examples
POWER SYSTEMS-1 Complete notes  examplesPOWER SYSTEMS-1 Complete notes  examples
POWER SYSTEMS-1 Complete notes examples
 
8251 universal synchronous asynchronous receiver transmitter
8251 universal synchronous asynchronous receiver transmitter8251 universal synchronous asynchronous receiver transmitter
8251 universal synchronous asynchronous receiver transmitter
 
Why does (not) Kafka need fsync: Eliminating tail latency spikes caused by fsync
Memory Management Strategies - II

• Background
• Swapping
• Contiguous Memory Allocation
• Segmentation
• Paging
• Structure of the Page Table
2
Swapping
• A process must be in memory to be executed
• A process can be swapped temporarily out of memory to a backing store and then brought back into memory for continued execution
• Swapping makes it possible for the total physical address space of all processes to exceed the real physical memory of the system, thus increasing the degree of multiprogramming in a system
3
Swapping
fig: Swapping of two processes using a disk as a backing store
4
Standard Swapping
• Standard swapping involves moving processes between main memory and a backing store
• The backing store is commonly a fast disk
• Backing store
• It must be large enough to accommodate copies of the memory images of all users, and it must provide direct access to these memory images
• The system maintains a ready queue consisting of all processes whose memory images are on the backing store or in memory and are ready to run
• Whenever the CPU scheduler decides to execute a process, it calls the dispatcher
• The dispatcher checks to see whether the next process in the queue is in memory
• If it is not, and if there is no free memory region, the dispatcher swaps out a process currently in memory and swaps in the desired process
• It then reloads registers and transfers control to the selected process
• The context-switch time in such a swapping system is fairly high
5
Standard Swapping
• Let's assume that the user process is 100 MB in size and the backing store is a standard hard disk with a transfer rate of 50 MB per second
• The actual transfer of the 100-MB process to or from main memory takes 100 MB / 50 MB per second = 2 seconds
• The swap time is 2,000 milliseconds
• Since we must swap both out and in, the total swap time is about 4,000 milliseconds
• In priority and round-robin (RR) scheduling, swapping is used to roll a lower-priority process out so that another process can be loaded and executed
• If binding is done at load time, a swapped-out process is usually brought back into the same memory space it previously occupied
• If binding is done at execution time, the process can be moved to a different place, because physical addresses are computed at run time
• The major part of the swap time is transfer time
• The total transfer time is directly proportional to the amount of memory swapped
6
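The swap-time arithmetic above can be checked directly; the process size and transfer rate are taken from the slide's example:

```python
def swap_time_ms(process_mb, transfer_mb_per_s):
    """Time to transfer a process image to or from the backing store, in milliseconds."""
    return process_mb / transfer_mb_per_s * 1000

one_way = swap_time_ms(100, 50)  # 100-MB process, 50 MB/s disk
total = 2 * one_way              # must swap both out and in
print(one_way, total)            # 2000.0 ms one way, 4000.0 ms total
```

The same function makes it easy to see that swap time scales linearly with the amount of memory actually swapped, which is why swapping only portions of a process (mentioned later) reduces swap time.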
Standard Swapping
• Swapping is constrained by the following factors as well
• If we want to swap a process, we must be sure that it is completely idle. If a process is waiting for an I/O operation when we want to swap it out to free up memory, what happens?
• Two options -
1. Never swap a process with pending I/O
2. Execute I/O operations only into operating-system buffers
• Double buffering -
• Copy the I/O data from the process's buffer to kernel memory before swapping out the process, and then copy the same data from kernel memory to the user process's memory before the process can access it
• Standard swapping is not used in modern operating systems, as it requires too much swapping time and provides too little execution time to be a reasonable memory-management solution
7
Standard Swapping
• In modified versions of swapping, the following variants are used -
1. Common version
• Swapping is normally disabled but will start if the amount of free memory falls below a threshold (i.e., free memory runs low)
• Swapping is halted when the amount of free memory increases
2. Another variation
• It involves swapping portions of processes, rather than entire processes, to decrease swap time
8
Swapping on Mobile Systems
• Most operating systems for PCs and servers support some modified version of swapping
• Mobile systems typically do not support swapping in any form
• Mobile devices generally use flash memory rather than more spacious hard disks as their persistent storage
• Reasons why mobile operating-system designers avoid swapping -
• The space constraint is one reason
• The limited number of writes that flash memory can tolerate before it becomes unreliable
• The poor throughput between main memory and flash memory on mobile devices
• Instead of swapping, other methods are used to free memory when it runs low -
• iOS asks apps to voluntarily relinquish allocated memory
• Read-only data is thrown out and reloaded from flash if needed
9
Swapping on Mobile Systems
• Instead of swapping, other methods are used to free memory when it runs low -
• iOS asks apps to voluntarily relinquish allocated memory
• Read-only data is thrown out and reloaded from flash if needed
• Failure to free memory can result in termination
• Android terminates apps if free memory is low, but first writes application state to flash for fast restart
• So, developers for mobile systems must carefully allocate and release memory to ensure that their applications do not use too much memory or suffer from memory leaks
10
Contiguous Memory Allocation
• The main memory must accommodate both the operating system and the various user processes, so we need to allocate main memory in the most efficient way possible
• The memory is usually divided into two partitions:
• one for the resident operating system
• one for the user processes
• The operating system may be placed in either low memory or high memory
• The major factor affecting this decision is the location of the interrupt vector
• Since the interrupt vector is often in low memory, programmers usually place the operating system in low memory
• How do we allocate available memory to the processes that are in the input queue waiting to be brought into memory?
11
Contiguous Memory Allocation
• How do we allocate available memory to the processes that are in the input queue waiting to be brought into memory?
12
Memory Protection
• The issue of memory protection - how to prevent a process from accessing memory that it does not own?
• Suppose we have a system with a relocation register together with a limit register
• The relocation register contains the value of the smallest physical address
• The limit register contains the range of logical addresses
• The MMU maps each logical address dynamically by adding the value in the relocation register; the mapped address is sent to memory
13
Memory Protection
• When the CPU scheduler selects a process for execution, the dispatcher loads the relocation and limit registers with the correct values as part of the context switch
• Because every address generated by the CPU is checked against these registers, we can protect both the operating system and the other users' programs and data from being modified by this running process
• The relocation-register scheme provides an effective way to allow the operating system's size to change dynamically
• Example -
• The operating system contains code and buffer space for device drivers
• If a device driver is not commonly used, we do not want to keep the code and data in memory, as we might be able to use that space for other purposes
• Such code is sometimes called transient operating-system code; it comes and goes as needed. Thus, using this code changes the size of the operating system during program execution
14
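The relocation/limit check described above can be sketched in a few lines. This is a simplified model, not real MMU hardware; the register values in the example are hypothetical:

```python
class ProtectionFault(Exception):
    """Raised when an address fails the limit check (a trap to the OS)."""

def translate(logical, relocation, limit):
    """Map a logical address to a physical one using relocation and limit registers.

    Every logical address is checked against the limit register first;
    addresses outside [0, limit) trap to the operating system.
    """
    if logical < 0 or logical >= limit:
        raise ProtectionFault(f"trap: logical address {logical} out of range")
    return relocation + logical  # physical address sent to memory

# hypothetical registers: relocation = 14000, limit = 3000
print(translate(1500, 14000, 3000))  # 15500
```

An access such as `translate(3000, 14000, 3000)` raises `ProtectionFault`, mirroring how an out-of-range address causes a trap rather than a memory reference.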
Memory Allocation
• The simplest method for allocating memory is to divide memory into several fixed-sized partitions. Each partition may contain exactly one process, so the degree of multiprogramming is bound by the number of partitions
• Multiple-partition method
• When a partition is free, a process is selected from the input queue and is loaded into the free partition
• When the process terminates, the partition becomes available for another process
• This method was originally used by the IBM OS/360 operating system (called MFT, Multiprogramming with a Fixed number of Tasks) but is no longer in use
• Variable-partition scheme (called MVT, Multiprogramming with a Variable number of Tasks)
• A generalization of the fixed-partition scheme, used primarily in batch environments
• Many of the ideas presented here are also applicable to a time-sharing environment in which pure segmentation is used for memory management
15
Memory Allocation
• Variable-partition scheme
• The operating system keeps a table indicating which parts of memory are available and which are occupied
• Initially, all memory is available for user processes and is considered one large block of available memory, a hole
• Eventually, memory contains a set of holes of various sizes
• When a process arrives, it is allocated memory from a hole large enough to accommodate it
• When a process exits, it frees its partition; adjacent free partitions are combined
16
Memory Allocation
• As processes enter the system, they are put into an input queue
• The OS takes into account the memory requirements of each process and the amount of available memory space in determining which processes are allocated memory
• When a process is allocated space, it is loaded into memory, and it can then compete for CPU time
• When a process terminates, it releases its memory, which the operating system may then fill with another process from the input queue
• At any given time, we have a list of available block sizes and an input queue
• Memory is allocated to processes until the memory requirements of the next process cannot be satisfied - that is, no available block of memory is large enough to hold that process
• The operating system can then wait until a large enough block is available, or it can skip down the input queue to see whether the smaller memory requirements of some other process can be met
17
Memory Allocation
• A set of holes of various sizes is scattered throughout memory
• When a process arrives and needs memory, the OS searches the set for a hole that is large enough for this process
• If the hole is too large, it is split into two parts
• One part is allocated to the arriving process; the other is returned to the set of holes
• When a process terminates, it releases its block of memory, which is then placed back in the set of holes
• If the new hole is adjacent to other holes, these adjacent holes are merged to form one larger hole
• At this point, the system may need to check whether there are processes waiting for memory and whether this newly freed and recombined memory could satisfy the demands of any of these waiting processes
• This procedure is a particular instance of the general dynamic storage-allocation problem, which concerns how to satisfy a request of size n from a list of free holes
19
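The freeing-and-merging step above can be sketched with a sorted free list. This is a simplified model (real allocators also track allocated regions); each hole is a (start, size) pair:

```python
def free_block(holes, start, size):
    """Return a new free list after releasing the block [start, start + size),
    merging the released block with any adjacent holes.

    holes: list of (start, size) pairs; result is kept sorted by start address.
    """
    holes = sorted(holes + [(start, size)])
    merged = [holes[0]]
    for s, sz in holes[1:]:
        last_s, last_sz = merged[-1]
        if last_s + last_sz == s:          # adjacent holes: merge into one larger hole
            merged[-1] = (last_s, last_sz + sz)
        else:
            merged.append((s, sz))
    return merged

# holes at addresses 0..100 and 300..400; freeing 100..300 joins all three
print(free_block([(0, 100), (300, 100)], 100, 200))  # [(0, 400)]
```

After a merge like this, the system would check the input queue to see whether the newly recombined hole satisfies any waiting process.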
Memory Allocation
• Solutions to the dynamic storage-allocation problem -
• The first-fit, best-fit, and worst-fit strategies are the ones most commonly used to select a free hole from the set of available holes
• First fit -
• Allocate the first hole that is big enough
• Searching can start either at the beginning of the set of holes or at the location where the previous first-fit search ended
• We can stop searching as soon as we find a free hole that is large enough
• Best fit -
• Allocate the smallest hole that is big enough
• We must search the entire list, unless the list is ordered by size
• This strategy produces the smallest leftover hole
• Worst fit -
• Allocate the largest hole
• We must search the entire list, unless it is sorted by size
• This strategy produces the largest leftover hole, which may be more useful than the smaller leftover hole from a best-fit approach
20
Memory Allocation
• Simulations show that both first fit and best fit are better than worst fit in terms of both time and storage utilization
• Neither first fit nor best fit is clearly better in terms of storage utilization
• First fit is generally faster
• Example -
21
Memory partitions: 100, 500, 200, 300, 600
Processes: 212, 426, 417, 112
22
Fragmentation
• Both the first-fit and best-fit strategies for memory allocation suffer from external fragmentation
• As processes are loaded and removed from memory, the free memory space is broken into little pieces
• External fragmentation -
• Occurs when there is enough total memory space to satisfy a request but the available spaces are not contiguous
• Storage is fragmented into a large number of small holes
• This fragmentation problem can be severe; in the worst case, we could have a block of free (or wasted) memory between every two processes
• Another factor is which end of a free block is allocated
• No matter which algorithm is used, however, external fragmentation will be a problem
• Given N allocated blocks, another 0.5 N blocks will be lost to fragmentation. That is, one-third of memory may be unusable! This property is known as the 50-percent rule
24
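The "one-third" figure follows directly from the 50-percent rule, and the short check below makes the arithmetic explicit:

```python
# 50-percent rule: for N allocated blocks, about 0.5 * N further blocks
# are lost to external fragmentation, so the unusable fraction of memory is
# lost / (allocated + lost) = 0.5N / (N + 0.5N) = 1/3.
N = 1000                       # any number of allocated blocks
lost = 0.5 * N
fraction_unusable = lost / (N + lost)
print(fraction_unusable)       # 0.3333... -> one-third of memory
```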
Fragmentation
• Memory fragmentation can be internal as well as external
• Internal fragmentation -
• Consider a multiple-partition allocation scheme with a hole of 18,464 bytes. Suppose that the next process requests 18,462 bytes
• If we allocate exactly the requested block, we are left with a hole of 2 bytes
• The overhead to keep track of this hole will be substantially larger than the hole itself
• The general approach to avoiding this problem is to break the physical memory into fixed-sized blocks and allocate memory in units based on block size
• With this approach, the memory allocated to a process may be slightly larger than the requested memory
• The difference between these two numbers is internal fragmentation - unused memory that is internal to a partition
25
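Allocating in fixed-sized blocks, as described above, means rounding every request up to a whole number of blocks; the leftover inside the last block is the internal fragmentation. The 4096-byte block size below is a hypothetical choice for illustration:

```python
import math

def allocated_size(request, block_size):
    """Round a request up to a whole number of fixed-sized blocks."""
    return math.ceil(request / block_size) * block_size

# the slide's 18,462-byte request, with a hypothetical 4096-byte block size
request = 18462
alloc = allocated_size(request, 4096)
print(alloc, alloc - request)  # 20480 bytes allocated, 2018 bytes internal fragmentation
```

The trade-off is visible here: larger blocks waste more space per allocation but need less bookkeeping, which is why the 2-byte hole in the example is not worth tracking individually.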
Fragmentation
• Solutions to the problem of external fragmentation -
• Compaction -
• Shuffle the memory contents so as to place all free memory together in one large block
• Compaction is possible only when relocation is dynamic, so that a process can be moved anywhere in memory after it has been loaded
• The simplest compaction algorithm moves all processes toward one end of memory; all holes move in the other direction, producing one large hole of available memory
• Noncontiguous memory allocation -
• Permit the logical address space of the processes to be noncontiguous, thus allowing a process to be allocated physical memory wherever such memory is available
• Two complementary techniques achieve this solution - segmentation and paging
26
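The simplest compaction algorithm above can be sketched as follows. This is a simplified model (regions are (name, start, size) triples; dynamic relocation is assumed, so processes can be moved freely):

```python
def compact(regions):
    """Slide every allocated region toward low memory, leaving a single
    large hole at the high end. Returns (new_layout, hole_start)."""
    new_layout, next_free = [], 0
    for name, _, size in sorted(regions, key=lambda r: r[1]):
        new_layout.append((name, next_free, size))  # process moved down
        next_free += size
    return new_layout, next_free  # one large hole begins at next_free

# three processes with holes between them (hypothetical addresses)
layout, hole_start = compact([("P1", 0, 100), ("P2", 300, 200), ("P3", 700, 100)])
print(layout, hole_start)  # processes packed at 0, 100, 300; hole from 400 up
```

The cost that makes compaction expensive in practice is implicit here: every moved region corresponds to copying that process's entire memory image, and every relocation register must be updated.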
References
1. A. Silberschatz, P. B. Galvin, and G. Gagne, "Operating System Concepts," Ninth Edition, John Wiley & Sons, 2012.
27