UNIT II
 Memory management is the functionality of an operating system that handles or
manages primary memory and moves processes back and forth between main
memory and disk during execution.
 Memory management keeps track of each and every memory location, regardless
of whether it is allocated to some process or free.
 It checks how much memory is to be allocated to each process.
 It decides which process will get memory at what time.
 It tracks whenever some memory gets freed or unallocated and updates the
status accordingly.
 A Logical Address is generated by the CPU while a program is running.
 The logical address is a virtual address as it does not exist physically; therefore, it is
also known as a Virtual Address.
 This address is used by the CPU as a reference to access the physical memory
location.
 The hardware device called the Memory-Management Unit (MMU) is used for mapping
a logical address to its corresponding physical address.
 A Physical Address identifies a physical location of required data in memory.
 The user never deals directly with the physical address but can access it through its
corresponding logical address.
 The user program generates the logical address and thinks that the program is
running at this logical address, but the program needs physical memory for its
execution; therefore, the logical address must be mapped to the physical address
by the MMU before it is used.
 The term Physical Address Space is used for all physical addresses corresponding
to the logical addresses in a logical address space.
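The role of the MMU can be illustrated with the simplest mapping scheme, a relocation (base) register plus a limit register. A minimal Python sketch follows; the base and limit values are assumptions chosen only for illustration, not taken from the slides.

# Minimal sketch of MMU-style translation with a relocation (base) register
# and a limit register; the register values below are assumed.

BASE = 14000    # physical address where the process is loaded (assumed)
LIMIT = 3000    # size of the process's logical address space (assumed)

def translate(logical_address):
    """Map a logical address to a physical address, trapping on out-of-range access."""
    if logical_address < 0 or logical_address >= LIMIT:
        raise MemoryError("trap: logical address outside the process address space")
    return BASE + logical_address   # physical address = base + logical address

print(translate(346))   # -> 14346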
PARAMETER      | LOGICAL ADDRESS                                                      | PHYSICAL ADDRESS
Basic          | Generated by the CPU                                                 | Location in a memory unit
Address Space  | The set of all logical addresses generated by the CPU for a program  | The set of all physical addresses mapped to the corresponding logical addresses
Visibility     | The user can view the logical address of a program                   | The user can never view the physical address of a program
Generation     | Generated by the CPU                                                 | Computed by the MMU
Access         | The user uses the logical address to access the physical address     | The user can access the physical address only indirectly
 Swapping is a mechanism in which a process can be swapped temporarily out of
main memory to secondary storage (disk), making that memory available to other
processes. At some later time, the system swaps the process back from secondary
storage into main memory.
 Though performance is usually affected by the swapping process, it helps in
running multiple large processes in parallel, and that is why swapping is also
known as a technique for memory compaction.
 As processes are loaded and removed from memory, the free memory space is
broken into little pieces. After some time, processes cannot be allocated to memory
blocks because the blocks are too small, and these memory blocks remain unused.
This problem is known as Fragmentation.
Fragmentation is of two types −
S.N. Fragmentation & Description
1 External fragmentation
Total memory space is enough to satisfy a request or to hold a process, but it is
not contiguous, so it cannot be used.
2 Internal fragmentation
The memory block assigned to a process is bigger than requested. Some portion of the
block is left unused, as it cannot be used by another process.
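A small sketch of how internal fragmentation arises with fixed-size blocks; the block size and request size below are assumed values for illustration.

# Internal fragmentation with fixed-size blocks (illustrative sizes).
block_size = 4096          # bytes per memory block (assumed)
request = 3100             # bytes actually needed by the process (assumed)

blocks_used = -(-request // block_size)                 # ceiling division
internal_fragmentation = blocks_used * block_size - request
print(internal_fragmentation)                           # -> 996 bytes left unused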
In operating systems, Paging is a storage mechanism used to retrieve processes from
secondary storage into main memory in the form of pages.
 The main idea behind paging is to divide each process into pages. Main memory is
likewise divided into frames.
 One page of the process is stored in one of the frames of memory. The pages
can be stored at different locations in memory, but the priority is always to find
contiguous frames or holes.
 Pages of a process are brought into main memory only when they are required;
otherwise they reside in secondary storage.
 Different operating systems define different frame sizes, but all frames must be of
equal size. Since pages are mapped to frames in paging, the page size must be the
same as the frame size.
 Let us consider a main memory of 16 KB and a frame size of 1 KB; the main
memory will then be divided into a collection of 16 frames of 1 KB each.
 There are 4 processes in the system, P1, P2, P3 and P4, of 4 KB each. Each
process is divided into pages of 1 KB each so that one page can be stored in one
frame.
 Initially, all the frames are empty, so the pages of the processes will be stored
in a contiguous way.
 A page address is called a logical address and is represented by a page number and
an offset.
 A frame address is called a physical address and is represented by a frame number and
an offset.
Logical Address = Page number + page offset
Physical Address = Frame number + page offset
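Assuming the 1 KB pages and frames from the example above, the split of a logical address into page number and offset, and the page-table lookup that yields the physical address, can be sketched as follows (the page-table contents are hypothetical):

# Paging translation sketch: logical address -> (page number, offset) -> physical address.
PAGE_SIZE = 1024                        # 1 KB pages and frames, as in the example
page_table = {0: 5, 1: 9, 2: 2, 3: 7}   # page number -> frame number (assumed mapping)

def translate(logical_address):
    page_number = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    frame_number = page_table[page_number]     # a missing entry would mean a page fault
    return frame_number * PAGE_SIZE + offset

print(translate(2 * PAGE_SIZE + 100))   # page 2 maps to frame 2 -> physical address 2148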
Here is a list of advantages and disadvantages of paging −
 Paging reduces external fragmentation but still suffers from internal
fragmentation.
 Paging is simple to implement and is regarded as an efficient memory management
technique.
 Due to the equal size of pages and frames, swapping becomes very easy.
 The page table requires extra memory space, so paging may not be good for a system
with a small RAM.
 Segmentation is a memory management technique in which each job is divided into
several segments of different sizes, one for each module that contains pieces that
perform related functions. Each segment is actually a different logical address space of
the program.
 When a process is to be executed, its corresponding segments are loaded into non-
contiguous memory, though every segment is loaded into a contiguous block of
available memory.
 Segmentation works very similarly to paging, but here segments are of variable
length, whereas in paging pages are of fixed size.
 A program segment may contain the program's main function, utility functions, data
structures, and so on. The operating system maintains a segment map table for every
process and a list of free memory blocks along with segment numbers, their sizes and
corresponding memory locations in main memory. For each segment, the table stores
the starting address of the segment and the length of the segment. A reference to a
memory location includes a value that identifies a segment and an offset.
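A minimal sketch of the segment-table lookup described above: each entry holds a base (starting address) and a limit (segment length), and an offset beyond the limit traps. The table contents are hypothetical.

# Segmentation translation sketch: (segment number, offset) -> physical address.
segment_table = {               # segment number -> (base, limit); values are assumed
    0: (1400, 1000),            # e.g. main function
    1: (6300,  400),            # e.g. data structures
    2: (4300,  400),            # e.g. utility functions
}

def translate(segment, offset):
    base, limit = segment_table[segment]
    if offset < 0 or offset >= limit:
        raise MemoryError("trap: offset beyond segment length")
    return base + offset

print(translate(2, 53))   # -> 4353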
 A computer can address more memory than the amount physically installed on the
system. This extra memory is called virtual memory, and it is a section of a
hard disk that is set up to emulate the computer's RAM.
Following are situations in which the entire program is not required to be loaded fully
into main memory:
 User-written error handling routines are used only when an error occurs in the data
or computation.
 Certain options and features of a program may be used rarely.
 Many tables are assigned a fixed amount of address space even though only a small
amount of the table is actually used.
The ability to execute a program that is only partially in memory would confer many
benefits:
 Less I/O would be needed to load or swap each user program into memory.
 A program would no longer be constrained by the amount of physical memory that is
available.
 Each user program could take less physical memory, so more programs could be run at
the same time, with a corresponding increase in CPU utilization and throughput.
 In modern microprocessors intended for general-purpose use, a memory management
unit, or MMU, is built into the hardware. The MMU's job is to translate virtual
addresses into physical addresses.
 A demand paging system is quite similar to a paging system with swapping, where
processes reside in secondary memory and pages are loaded only on demand, not in
advance.
Advantages
 Large virtual memory.
 More efficient use of memory.
 There is no limit on the degree of multiprogramming.
Disadvantages
 The number of tables and the amount of processor overhead for handling page
interrupts are greater than in the case of simple paged management techniques.
 A page fault occurs when a program attempts to access a block of memory that is
not stored in the physical memory, or RAM. The fault notifies the operating
system that it must locate the data in virtual memory, then transfer it from the
storage device, such as an HDD or SSD, to the system RAM.
First In First Out (FIFO) algorithm
 The oldest page in main memory is the one that will be selected for replacement.
 Easy to implement: keep a list, replace pages from the tail and add new pages at
the head.
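A short simulation of FIFO replacement that counts page faults over a reference string; the reference string and frame count are hypothetical.

from collections import deque

def fifo_faults(reference_string, num_frames):
    """Count page faults under FIFO replacement."""
    frames = deque()                 # page loaded earliest sits at the left
    faults = 0
    for page in reference_string:
        if page not in frames:       # page fault: the page is not resident
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()     # evict the page loaded earliest
            frames.append(page)
    return faults

# Hypothetical reference string used only for illustration.
print(fifo_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 3))   # -> 10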
Optimal (OPT) page replacement algorithm
 An optimal page-replacement algorithm has the lowest page-fault rate of all
algorithms. Such an algorithm exists and has been called OPT or MIN.
 Replace the page that will not be used for the longest period of time; it uses
knowledge of the time at which each page will next be used.
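Because OPT needs to know future references, it is used offline as a benchmark rather than in a real kernel; a sketch that simply looks ahead in the reference string (hypothetical input again):

def optimal_faults(reference_string, num_frames):
    """Count page faults under the optimal (OPT/MIN) policy."""
    frames, faults = [], 0
    for i, page in enumerate(reference_string):
        if page in frames:
            continue
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
            continue
        # Evict the resident page whose next use lies farthest in the future
        # (or that is never used again).
        def next_use(p):
            future = reference_string[i + 1:]
            return future.index(p) if p in future else float("inf")
        frames.remove(max(frames, key=next_use))
        frames.append(page)
    return faults

print(optimal_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 3))   # -> 7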
Least Recently Used (LRU) algorithm
 The page that has not been used for the longest time in main memory is the
one that will be selected for replacement.
 Easy to implement: keep a list and replace pages by looking back in time.
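LRU can be sketched with an ordered map that keeps the most recently used page at one end; the reference string is the same hypothetical one used above.

from collections import OrderedDict

def lru_faults(reference_string, num_frames):
    """Count page faults under LRU replacement."""
    frames = OrderedDict()           # least recently used page sits at the front
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.move_to_end(page)             # mark as most recently used
            continue
        faults += 1
        if len(frames) == num_frames:
            frames.popitem(last=False)           # evict the least recently used page
        frames[page] = True
    return faults

print(lru_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 3))   # -> 9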
Least Frequently Used (LFU) algorithm
 The page with the smallest reference count is the one that will be selected for
replacement.
 This algorithm suffers from the situation in which a page is used heavily during
the initial phase of a process but then is never used again.
Most Frequently Used (MFU) algorithm
 This algorithm is based on the argument that the page with the smallest count
was probably just brought in and has yet to be used, so the page with the largest
count is replaced instead.
Performance of Demand Paging
 The performance of demand paging is often measured in terms of the effective access
time.
 Effective access time is the amount of time it takes to access memory if the cost of
page faults is amortized over all memory accesses.
 In some sense it is an average or expected access time.
ea = (1 - p) * ma + p * pft
ea = effective access time
ma = physical memory (core) access time
pft = page fault time
p = probability of a page fault occurring
(1 - p) = probability of accessing memory in an available frame
The page fault time is the sum of the additional overheads associated with accessing a
page in the backing store. This includes the overhead of executing an operating system
trap, additional context switches, and the disk latency and transfer time associated
with page-in and page-out operations.
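Plugging representative numbers into the formula above shows how strongly the page-fault probability dominates the effective access time; the timings below are illustrative assumptions, not measurements.

# Effective access time: ea = (1 - p) * ma + p * pft
ma  = 200e-9      # physical memory access time: 200 nanoseconds (assumed)
pft = 8e-3        # page fault service time: 8 milliseconds (assumed)

def effective_access_time(p):
    return (1 - p) * ma + p * pft

for p in (0.0, 1e-6, 1e-3):
    print(f"p = {p:g}: ea = {effective_access_time(p) * 1e9:.0f} ns")
# p = 0      -> 200 ns
# p = 1e-06  -> about 208 ns
# p = 0.001  -> about 8200 ns, a roughly 40x slowdown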
There are various constraints on the strategies for the allocation of frames:
 You cannot allocate more than the total number of available frames.
 At least a minimum number of frames should be allocated to each process. This
constraint is supported by two reasons. First, as fewer frames are allocated, the
page fault ratio increases, degrading the performance of the process. Second,
there should be enough frames to hold all the different pages that any single
instruction can reference.
 Equal allocation:
 In a system with x frames and y processes, each process gets an equal number of
frames, i.e. x/y (ignoring any remainder).
 For instance, if the system has 48 frames and 9 processes, each process will get 5
frames.
 The three frames which are not allocated to any process can be used as a free-
frame buffer pool.
 Disadvantage: In systems with processes of varying sizes, it does not make much
sense to give each process an equal number of frames. Allocating a large number of
frames to a small process will eventually lead to the wastage of many allocated but
unused frames.
 Proportional allocation: Frames are allocated to each process according to the
process size.
 For a process pi of size si, the number of allocated frames is ai = (si/S)*m,
where S is the sum of the sizes of all the processes and m is the number of frames
in the system.
 For instance, in a system with 62 frames, if there is a process of 10 KB and
another process of 127 KB, then the first process will be allocated (10/137)*62 = 4
frames and the other process will get (127/137)*62 = 57 frames (truncating to whole
frames), as computed in the sketch below.
 Advantage: All the processes share the available frames according to their needs,
rather than equally.
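A direct computation of the proportional-allocation formula, using the frame count and process sizes from the example above (allocations are truncated to whole frames):

def proportional_allocation(process_sizes, total_frames):
    """ai = (si / S) * m, truncated to whole frames."""
    S = sum(process_sizes)
    return [int(si / S * total_frames) for si in process_sizes]

# Values from the example above: 62 frames, processes of 10 KB and 127 KB.
print(proportional_allocation([10, 127], 62))   # -> [4, 57]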
 Local replacement: When a process needs a page that is not in memory, it can
bring in the new page and allocate it a frame from its own set of allocated frames only.
 Advantage: The set of pages in memory for a particular process and its page fault
ratio are affected by the paging behavior of only that process.
 Disadvantage: A low-priority process may hinder a high-priority process by not making
its frames available to the high-priority process.
 Global replacement: When a process needs a page that is not in memory, it can
bring in the new page and allocate it a frame from the set of all frames, even if that
frame is currently allocated to some other process; that is, one process can take a
frame from another.
 Advantage: Does not hinder the performance of processes and hence results in greater
system throughput.
 Disadvantage: The page fault ratio of a process cannot be controlled solely by the
process itself. The set of pages in memory for a process depends on the paging behavior
of other processes as well.
 If page faults and the resulting swapping happen very frequently, the operating
system has to spend more time swapping these pages than doing useful work. This
state is called thrashing. Because of this, CPU utilization is reduced.
 Working Set Model
 This model is based on locality: a page that was used recently is likely to be used
again, and the pages near it are also likely to be used. The working set is the set
of pages referenced in the most recent D time units. A page that has gone D time
units without being referenced is automatically dropped from the working set. So the
accuracy of the working set depends on the D we have chosen. The working-set model
avoids thrashing while keeping the degree of multiprogramming as high as
possible.
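A sketch of computing the working set over a window of the last D references of a hypothetical reference string (here D counts memory references rather than wall-clock time):

def working_set(reference_string, t, D):
    """Pages referenced in the window of the last D references ending at time t."""
    start = max(0, t - D + 1)
    return set(reference_string[start:t + 1])

# Hypothetical reference string; D (the window size) is chosen for illustration.
refs = [1, 2, 1, 5, 7, 7, 7, 7, 5, 1, 6, 2, 3, 4]
print(working_set(refs, t=7, D=5))    # -> {5, 7}
print(working_set(refs, t=13, D=5))   # -> {1, 2, 3, 4, 6}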
 Page Fault Frequency
 This is a more direct approach than the working set model. When a process is
thrashing, we know that it has too few frames; if it is not thrashing, it may have
too many frames. Based on this property, we set an upper and a lower bound on
the desired page fault rate and allocate or remove frames according to the measured
rate. If the page fault rate falls below the lower limit, frames can be removed from
the process. Similarly, if the page fault rate rises above the upper limit, more
frames can be allocated to the process. If no frames are available despite a high
page fault rate, we simply suspend the process and restart it later when frames
become available.
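The page-fault-frequency policy can be sketched as a simple control rule applied periodically to each process; the fault-rate bounds and the way suspension is signalled are assumptions for illustration.

# Page-fault-frequency control sketch; the rate bounds are assumed values.
LOWER_BOUND = 2.0     # faults per second below which the process has spare frames
UPPER_BOUND = 10.0    # faults per second above which the process needs more frames

def adjust_frames(current_frames, fault_rate, free_frames):
    """Return the new frame allocation for one process (0 signals suspension)."""
    if fault_rate < LOWER_BOUND and current_frames > 1:
        return current_frames - 1          # give a frame back to the free pool
    if fault_rate > UPPER_BOUND:
        if free_frames > 0:
            return current_frames + 1      # grab a frame from the free pool
        return 0                           # no free frames: suspend (swap out) the process
    return current_frames                  # fault rate within bounds: leave allocation alone

print(adjust_frames(8, fault_rate=12.5, free_frames=3))   # -> 9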
 END OF UNIT II