Part III
  Storage Management
Chapter 8: Memory Management
Address Generation
qAddress generation has three stages:
    vCompile: compiler
    vLink: linker or linkage editor
    vLoad: loader




[Figure: source code → compiler → object module → linker → load module → loader → memory]
Three Address Binding Schemes
qCompile Time: If the compiler knows the
 location at which a program will reside, it
 generates absolute code. Examples: compile-and-go
 systems and MS-DOS .COM-format programs.
qLoad Time: The compiler may not know the
 absolute address, so it generates relocatable
 code. Address binding is delayed until load time.
qExecution Time: If the process may be moved
 in memory during its execution, then address
 binding must be delayed until run time. This is
 the most commonly used scheme.
Address Generation: Compile Time
Linking and Loading
Address Generation: Static Linking
[Figure: the statically linked load module placed in memory.]
qCode and data are
 loaded into memory at
 addresses 10000 and
 20000, respectively.
qEvery unresolved
 address must be
 adjusted.
Logical, Virtual, Physical Address
qLogical Address: the address generated by the
 CPU.
qPhysical Address: the address seen and used by
 the memory unit.
qVirtual Address: Run-time binding may make the
 logical address differ from the physical address.
 In this case, the logical address is also
 referred to as the virtual address. (Logical = Virtual
 in this course)
Dynamic Loading
qSome routines in a program (e.g., error handling)
 may not be used frequently.
qWith dynamic loading, a routine is not loaded until
 it is called.
qTo use dynamic loading, all routines must be in a
 relocatable format.
qThe main program is loaded and executes.
qWhen routine A calls routine B, A checks whether B is
 loaded. If B is not loaded, the relocatable linking
 loader is called to load B and update the address
 table. Then, control is passed to B (see the sketch below).
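
The check-then-load step can be sketched with the POSIX dlopen/dlsym API. This is only an analogue of the relocatable linking loader described above, and the library name "libb.so" and symbol "B" are purely illustrative.

/* Sketch: load routine B only when it is first called.
 * Compile with: cc a.c -ldl */
#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>

typedef void (*b_func)(void);

static b_func B_ptr = NULL;   /* acts as the "address table" entry for B */

void call_B(void)
{
    if (B_ptr == NULL) {                       /* B not loaded yet? */
        void *handle = dlopen("libb.so", RTLD_NOW);
        if (handle == NULL) {
            fprintf(stderr, "load failed: %s\n", dlerror());
            exit(1);
        }
        B_ptr = (b_func)dlsym(handle, "B");    /* update the address table */
        if (B_ptr == NULL) {
            fprintf(stderr, "symbol lookup failed: %s\n", dlerror());
            exit(1);
        }
    }
    B_ptr();                                   /* pass control to B */
}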
Dynamic Linking
qDynamic loading postpones the loading of routines
 until run-time. Dynamic linking postpones both
 linking and loading until run-time.
qA stub is added for each reference to a library routine.
 A stub is a small piece of code that indicates how to
 locate and load the routine if it is not already loaded.
qWhen a routine is called, its stub is executed: the
 routine is loaded, its address replaces the stub, and
 the routine is then executed.
qDynamic linking usually applies to language and
 system libraries. A Windows DLL is a dynamic-link
 library.
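
As a rough illustration of the stub idea (not the actual linker-generated stub machinery), a hand-rolled stub in C can patch a function pointer on first use. The routine names below are invented.

/* The first call goes through a small resolver that locates the real
 * routine, replaces the pointer, and runs it; later calls go straight
 * to the routine. */
#include <stdio.h>

static void real_init_tables(void) { puts("real routine runs"); }

static void stub_first_call(void);                     /* forward declaration */
static void (*init_tables)(void) = stub_first_call;    /* the "stub" slot */

static void stub_first_call(void)
{
    /* locate/load the routine (here it is already in the program) */
    init_tables = real_init_tables;   /* replace the stub with the real address */
    init_tables();                    /* execute the routine */
}

int main(void)
{
    init_tables();   /* first call: goes through the stub */
    init_tables();   /* later calls: direct */
    return 0;
}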
Major Memory Management Schemes
 qMonoprogramming Systems: MS-DOS
 qMultiprogramming Systems:
  vFixed Partitions
  vVariable Partitions
  vPaging
Monoprogramming Systems

[Figure: three monoprogramming layouts, each running from address 0 to max:
 (a) the OS in RAM at the bottom of memory with the user program above it;
 (b) the OS in ROM at the top of memory with the user program below it;
 (c) device drivers in ROM at the top, the user program in the middle, and
 the OS in RAM at the bottom.]
Why Multiprogramming?

qSuppose a process spends a fraction p of its
 time in the I/O wait state.
qThen, the probability that all n processes are in
 the wait state at the same time is p^n.
qThe CPU utilization is therefore 1 - p^n.
qThus, the more processes in the system, the
 higher the CPU utilization.
qHowever, since CPU power is limited, throughput
 decreases when n is sufficiently large.
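
As a quick sanity check, the sketch below tabulates 1 - p^n for p = 0.8; the numbers are illustrative only.

/* Utilization grows with the degree of multiprogramming n,
 * but with diminishing returns. Compile with: cc util.c -lm */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double p = 0.8;                       /* fraction of time in I/O wait */
    for (int n = 1; n <= 10; n++)
        printf("n = %2d  utilization = %.1f%%\n", n, 100.0 * (1.0 - pow(p, n)));
    return 0;
}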
Multiprogramming with Fixed Partitions
       q Memory is divided into n (possibly unequal) partitions.
       q Partitioning can be done at startup time and altered
         later on.
       q Each partition may have a job queue. Or, all partitions
         share the same job queue.

[Figure: two fixed-partition memory layouts, each with the OS at the top of
 memory followed by partitions 1-4, marked 300k, 200k, 150k, and 100k.]
Relocation and Protection
[Figure: a user program in one partition, with the OS region and the
 base/limit register pair marked.]
q Because executables may run in any partition, relocation
  and protection are needed.
q Recall the base/limit register pair for memory protection.
q It could also be used for relocation.
q The linker generates relocatable code starting at 0. The
  base register contains the starting address.
Relocation and Protection
[Figure: the CPU's logical address is compared against the limit register
 (protection); if it is smaller, the base register is added (relocation) to
 form the physical address sent to memory; otherwise the reference is not
 in the process's space and the hardware traps to the OS with an
 addressing error.]
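
The hardware check in the figure can be written out in a few lines of C; this is a simplified model, not any particular MMU.

/* Sketch of the base/limit relocation-and-protection check
 * performed on every memory reference. */
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint32_t base;    /* partition start (relocation register) */
    uint32_t limit;   /* partition size  (protection register) */
} relocation_regs;

/* Returns true and fills *physical on success; false means
 * "addressing error", where real hardware would trap to the OS. */
bool bl_translate(relocation_regs r, uint32_t logical, uint32_t *physical)
{
    if (logical >= r.limit)
        return false;             /* not your space: trap to the OS */
    *physical = r.base + logical; /* relocation: add the base */
    return true;
}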
Relocation: How does it work?
Multiprogramming with Variable Partitions
 qThe OS maintains a memory pool. When a job
  comes in, the OS allocates whatever the job needs.
 qThus, partition sizes are not fixed, and the number
  of partitions also varies.

[Figure: four snapshots of memory over time: job A is loaded; then B; then
 C; later B terminates, leaving a free hole between A and C.]
Memory Allocation: 1/2
qWhen a memory request comes, we must
 search all free spaces (i.e., holes) to find a
 suitable one.
qThere are several commonly used methods (a sketch of
 first fit and best fit follows this list):
   vFirst Fit: Search starts at the beginning of the set of
    holes and allocates the first hole that is large enough.
   vNext Fit: Search starts from where the previous first-
    fit search ended.
   vBest-Fit: Allocate the smallest hole that is larger
    than the requested size.
   vWorst-Fit: Allocate the largest hole that is larger
    than the requested size.
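
A minimal sketch of first fit and best fit over a linked list of holes; the hole layout is invented for illustration, and the splitting of the chosen hole (next slide) is omitted.

#include <stddef.h>

typedef struct hole {
    size_t start;        /* start address of the free block */
    size_t size;         /* size of the free block */
    struct hole *next;
} hole;

hole *first_fit(hole *free_list, size_t request)
{
    for (hole *h = free_list; h != NULL; h = h->next)
        if (h->size >= request)
            return h;                 /* first hole large enough */
    return NULL;                      /* no hole fits */
}

hole *best_fit(hole *free_list, size_t request)
{
    hole *best = NULL;
    for (hole *h = free_list; h != NULL; h = h->next)
        if (h->size >= request && (best == NULL || h->size < best->size))
            best = h;                 /* smallest hole that still fits */
    return best;
}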
Memory Allocation: 2/2
qIf the hole is larger than the requested size, it is cut
 into two: the part of the requested size is given to the
 process, and the remainder becomes a new hole.
qWhen a process returns a memory block, the block becomes
 a hole and must be combined with any adjacent holes.
[Figure: the four cases when block X is freed: both neighbors A and B
 allocated (X simply becomes a hole); the space after X already free
 (the two merge); the space before X already free; both sides free
 (all three merge into one hole).]
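
A sketch of the coalescing step, assuming a free list kept sorted by start address; the hole layout matches the invented one in the allocation sketch above.

#include <stdlib.h>

typedef struct hole {
    size_t start, size;
    struct hole *next;
} hole;

void free_block(hole **free_list, size_t start, size_t size)
{
    /* find the insertion point so the list stays sorted by address */
    hole *prev = NULL, *cur = *free_list;
    while (cur != NULL && cur->start < start) {
        prev = cur;
        cur = cur->next;
    }

    hole *h = malloc(sizeof *h);   /* the returned block becomes a hole */
    h->start = start;
    h->size  = size;
    h->next  = cur;
    if (prev != NULL) prev->next = h; else *free_list = h;

    /* merge with the following hole if they touch */
    if (cur != NULL && h->start + h->size == cur->start) {
        h->size += cur->size;
        h->next  = cur->next;
        free(cur);
    }
    /* merge with the preceding hole if they touch */
    if (prev != NULL && prev->start + prev->size == h->start) {
        prev->size += h->size;
        prev->next  = h->next;
        free(h);
    }
}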
Fragmentation
qAs processes are loaded into and removed from memory,
 the memory is eventually cut into small holes that are
 not large enough to run any incoming process.
qFree memory holes between allocated ones are
 called external fragmentation.
qIt is unwise to allocate exactly the requested
 amount of memory to a process, because of
 address boundary alignment requirements or the
 minimum allocation unit of the memory manager.
qThus, memory that is allocated to a partition but
 not used is called internal fragmentation.
External/Internal Fragmentation
[Figure: left, memory with used blocks separated by free holes (external
 fragmentation); right, a single allocated partition whose unused tail is
 internal fragmentation.]
Compaction for External Fragmentation
qIf processes are relocatable, we may move used
 memory blocks together to make a larger free
 memory block.
[Figure: before compaction, used blocks are separated by free holes; after
 compaction, all used blocks sit together and the free space forms one
 large block.]
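
A toy compaction sketch, assuming memory is modeled as a byte array and each relocatable partition is recorded as a (base, size) pair sorted by increasing base; the data structures are invented for illustration.

#include <string.h>
#include <stddef.h>

#define MEM_SIZE 1024

typedef struct { size_t base, size; } partition;

unsigned char memory[MEM_SIZE];

/* Compact n partitions (sorted by base) toward low memory;
 * returns the start of the single free block left at the top. */
size_t compact(partition parts[], size_t n)
{
    size_t next_free = 0;
    for (size_t i = 0; i < n; i++) {
        if (parts[i].base != next_free) {
            /* move the partition's contents down and rebind its base
             * (a real system would also reload its base register) */
            memmove(&memory[next_free], &memory[parts[i].base], parts[i].size);
            parts[i].base = next_free;
        }
        next_free += parts[i].size;
    }
    return next_free;   /* everything from here to MEM_SIZE is free */
}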
Paging: 1/2
qThe physical memory is divided into fixed-sized
 page frames, or frames.
qThe virtual address space is also divided into
 blocks of the same size, called pages.
qWhen a process runs, its pages are loaded into
 page frames.
qA page table stores the page numbers and their
 corresponding page frame numbers.
qThe virtual address is divided into two fields:
 page number and offset (within that page).
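
Assuming a 4 KB page size and a simple one-level page table, the split and translation can be sketched as follows.

#include <stdint.h>

#define PAGE_SIZE   4096u                  /* assumed page size */
#define OFFSET_BITS 12                     /* log2(PAGE_SIZE) */

/* page_table[p] holds the frame number for page p (no bounds or
 * validity checks in this sketch). */
uint32_t page_translate(uint32_t logical, const uint32_t page_table[])
{
    uint32_t page   = logical >> OFFSET_BITS;      /* page number p */
    uint32_t offset = logical & (PAGE_SIZE - 1);   /* offset d */
    uint32_t frame  = page_table[page];            /* frame number */
    return (frame << OFFSET_BITS) | offset;        /* physical address */
}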
Paging: 2/2
[Figure: a logical memory of four pages is mapped through a page table
 (page 0 → frame 7, page 1 → frame 2, page 2 → frame 9, page 3 → frame 5)
 into physical memory; logical address <1, d> translates to physical
 address <2, d>.]
Address Translation
Address Translation: Example
Hardware Support
qPage table may be stored in special registers if the
 number of pages is small.
qPage table may be stored in physical memory, and a
 special register, page-table base register, points to
 the page table.
qUse a translation look-aside buffer (TLB). The TLB stores
 recently used (page #, frame #) pairs. It compares the
 input page # against the stored ones; if a match is
 found, the corresponding frame # is the output. Thus, no
 memory access is needed to read the page table.
qThe comparison is carried out in parallel and is fast.
qTLB normally has 64 to 1,024 entries.
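
A software sketch of the TLB lookup; real TLBs compare all entries in parallel in hardware, so the loop here is only illustrative, and the sizes are assumed.

#include <stdint.h>
#include <stdbool.h>

#define TLB_ENTRIES 64

typedef struct {
    bool     valid;
    uint32_t page;
    uint32_t frame;
} tlb_entry;

tlb_entry tlb[TLB_ENTRIES];

/* Returns true on a TLB hit and fills *frame; on a miss the caller
 * falls back to a page-table lookup and then refills the TLB. */
bool tlb_lookup(uint32_t page, uint32_t *frame)
{
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid && tlb[i].page == page) {
            *frame = tlb[i].frame;
            return true;
        }
    }
    return false;
}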
Translation Look-Aside Buffer
[Figure: a TLB whose entries hold (valid, page #, frame #); looking up
 page # 767 matches a valid entry and outputs frame # 100.]
If the TLB reports no hit, then we go for a page table look up!
Fragmentation in a Paging System
qDoes a paging system have fragmentation?
qPaging systems do not have external
 fragmentation, because un-used page frames
 can be used by the next process.
qPaging systems do have internal fragmentation.
qBecause the address space is divided into equal-
 size pages, all but the last page are filled
 completely. The last page therefore contains
 internal fragmentation; on average it is only half full.
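
A quick illustration with made-up numbers: with 4 KB pages, a request of 72,700 bytes needs 18 pages and wastes 1,028 bytes in the last page.

#include <stdio.h>

int main(void)
{
    unsigned page_size = 4096;
    unsigned request   = 72700;
    unsigned pages     = (request + page_size - 1) / page_size;  /* round up */
    unsigned waste     = pages * page_size - request;            /* unused tail */
    printf("%u pages allocated, %u bytes of internal fragmentation\n",
           pages, waste);
    return 0;
}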
Protection in a Paging System
qIs it necessary to protect users from one another in a
 paging system? No, because different processes use
 different page tables.
qHowever, we can use a page-table length register
 that stores the length of a process's page table. In
 this way, a process cannot access memory
 beyond its region. Compare this with the
 base/limit register pair.
qWe can also add read-only, read-write, or execute
 bits to page table entries to enforce r-w-e permissions.
qWe can also add a valid/invalid bit to each page table
 entry to indicate whether the page is in memory.
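
A hypothetical page-table-entry layout carrying these bits; real formats are architecture-specific, so this is only a sketch.

#include <stdint.h>

#define PTE_VALID       (1u << 0)   /* page is in memory */
#define PTE_READ        (1u << 1)
#define PTE_WRITE       (1u << 2)
#define PTE_EXEC        (1u << 3)
#define PTE_FRAME_SHIFT 12          /* frame number kept in the upper bits */

typedef uint32_t pte_t;

/* A reference is legal only if the page is valid and every requested
 * permission bit (read/write/execute) is set in the entry. */
static inline int access_ok(pte_t pte, uint32_t need)
{
    return (pte & PTE_VALID) && ((pte & need) == need);
}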
Shared Pages
  q Pages may be shared by multiple processes.
  q If the code is re-entrant (or pure), i.e., the program does
    not modify itself, its routines can also be shared!
[Figure: two processes with logical spaces (A, B, X) and (M, N, X) map
 through their own page tables into physical memory; the shared page X
 occupies a single frame, while A, B, M, and N each occupy their own
 frames.]
Multi-Level Page Table
[Figure: the virtual address is split into index 1 (8 bits), index 2
 (6 bits), index 3 (6 bits), and a 12-bit offset; the page-table base
 register locates the level-1 table, whose entry selects a level-2 table,
 whose entry selects a level-3 table, whose entry gives the frame in
 memory.]
q There are 256, 64 and 64 entries in level 1, 2 and 3
  page tables.
q Page size is 4K = 4,096 bytes.
q Virtual space size = (2^8 × 2^6 × 2^6 pages) × 4K = 2^32 bytes
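
Splitting a 32-bit virtual address according to the 8/6/6/12 layout above can be sketched as follows.

#include <stdint.h>

typedef struct {
    uint32_t index1;   /* 8 bits: entry in the level-1 table (256 entries) */
    uint32_t index2;   /* 6 bits: entry in a level-2 table (64 entries)    */
    uint32_t index3;   /* 6 bits: entry in a level-3 table (64 entries)    */
    uint32_t offset;   /* 12 bits: byte within the 4K page                 */
} va_parts;

va_parts split(uint32_t va)
{
    va_parts p;
    p.index1 = (va >> 24) & 0xFF;   /* bits 31..24 */
    p.index2 = (va >> 18) & 0x3F;   /* bits 23..18 */
    p.index3 = (va >> 12) & 0x3F;   /* bits 17..12 */
    p.offset =  va        & 0xFFF;  /* bits 11..0  */
    return p;
}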
Inverted Page Table: 1/2
qIn a paging system, each process has its own page
 table, which usually has many entries.
qTo save space, we can build a page table which has
 one entry for each page frame. Thus, the size of this
 inverted page table is equal to the number of page
 frames.
qEach entry in an inverted page table has two items:
  vProcess ID: the owner of this frame
  vPage Number: the page number in this frame
qEach virtual address has three sections:
            <process-id, page #, offset>
Inverted Page Table: 2/2
[Figure: the CPU issues logical address <pid, p, d>; the inverted page
 table is searched for the entry whose (pid, page #) pair matches; if the
 match is found at index k, the physical address is <k, d>. This search
 can be implemented with hashing.]
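
A sketch of the lookup, using a linear search for clarity; the hashing mentioned in the figure would replace the loop, and the table size, page size, and entry layout here are assumptions.

#include <stdint.h>

#define NUM_FRAMES  1024
#define OFFSET_BITS 12          /* assumed 4K pages */

typedef struct {
    int      used;
    uint32_t pid;
    uint32_t page;
} ipt_entry;

ipt_entry ipt[NUM_FRAMES];      /* index = frame number */

/* Returns the physical address, or -1 if the page is not resident. */
int64_t ipt_translate(uint32_t pid, uint32_t page, uint32_t offset)
{
    for (uint32_t k = 0; k < NUM_FRAMES; k++)
        if (ipt[k].used && ipt[k].pid == pid && ipt[k].page == page)
            return ((int64_t)k << OFFSET_BITS) | offset;
    return -1;                  /* page fault: the page is not in memory */
}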
