Unit VI
MEMORY MANAGEMENT
1 Background
"Memory" most often means main memory (MM). Memory is central to the operation of a
modern computer system. It consists of a large array of words or bytes, each with its own
address, and it is a resource that must be allocated and deallocated.
Memory accesses and memory management are a very important part of modern computer
operation. Every instruction has to be fetched from memory before it can be executed, and most
instructions involve retrieving data from memory or storing data in memory or both.
Basic Hardware
Main memory and the registers built into the processor itself are the only storage that the CPU can
access directly. There are machine instructions that take memory addresses as arguments, but none
that take disk addresses. Therefore, any instructions in execution, and any data being used by the
instructions, must be in one of these direct-access storage devices. If the data are not in memory,
they must be moved there before the CPU can operate on them.
Accesses to registers are very fast, while accesses to main memory are comparatively slow,
so an intermediary fast memory, the cache, is built into CPUs. The basic idea of the cache is to
transfer chunks of memory at a time from main memory to the cache, and then to access
individual memory locations one at a time from the cache.
Along with the relative speed of accessing physical memory, we must also ensure correct
operation: the operating system must be protected from access by user processes, and user
processes must be protected from one another. This protection must be provided by the hardware.
To make sure that each process has a separate memory space, we need the ability to determine the
range of legal addresses that the process may access and to ensure that the process can access only
these legal addresses. We can provide this protection by using two registers, usually a base and a
limit, as illustrated in following Figure:
The base register holds the smallest legal physical memory address; the limit register specifies the
size of the range. For example, if the base register holds 300040 and the limit register is 120900,
then the program can legally access all addresses from 300040 through 420939 (inclusive).
Every memory access made by a user process is checked against these two registers, and if a
memory access is attempted outside the valid range, then a fatal error is generated. The OS
obviously has access to all existing memory locations, as this is necessary to swap users' code and
data in and out of memory. It should also be obvious that changing the contents of the base and limit
registers is a privileged activity, allowed only to the OS kernel.
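The base/limit check described above can be sketched in a few lines (the register values are taken from the example; `check_access` is a hypothetical helper standing in for what the hardware does on every access):

```python
BASE = 300040   # base register: smallest legal physical address
LIMIT = 120900  # limit register: size of the legal range

def check_access(addr):
    """Model the hardware check: trap on any access outside [BASE, BASE + LIMIT)."""
    if BASE <= addr < BASE + LIMIT:
        return True
    raise MemoryError(f"trap to OS: illegal access to address {addr}")

print(check_access(300040))  # True (first legal address)
print(check_access(420939))  # True (last legal address)
```

An access to 420940 or beyond raises the trap, modelling the fatal error mentioned above.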
2. Logical versus Physical Address Space
The address generated by the CPU is a logical address, whereas the address actually seen by the
memory hardware is a physical address. The set of all logical addresses used by a program
composes the logical address space, and the set of all corresponding physical addresses composes
the physical address space.
The run-time mapping of logical to physical addresses is handled by the memory-management unit
(MMU), which can use several different methods. One method is a generalization of the
base-register scheme, in which the base register is termed a relocation register. The value in the
relocation register is added to every address generated by a user process at the time the address
is sent to memory (see Figure given on next page).
For example, if the base is at 14000, then an attempt by the user to address location 0 is
dynamically relocated to location 14000; an access to location 346 is mapped to location 14346.
User programs and CPU never see physical addresses. User programs work entirely in logical
address space, and any memory references or manipulations are done using purely logical
addresses. Only when the address gets sent to the physical memory chips is the physical memory
address generated.
We now have two different types of addresses: logical addresses (in the range 0 to max) and
physical addresses (in the range R+0 to R+max for a base value R). The user generates only logical
addresses and thinks that the process runs in locations 0 to max. However, these logical addresses
must be mapped to physical addresses by the memory-mapping hardware before they are used.
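The relocation-register mapping can be sketched as follows (14000 is the base from the example; the limit value is an added assumption mirroring the base/limit scheme):

```python
RELOCATION = 14000   # relocation (base) register
LIMIT = 30000        # hypothetical size of the logical address space

def to_physical(logical):
    """MMU model: every CPU-generated address is checked, then relocated."""
    if not 0 <= logical < LIMIT:
        raise MemoryError("trap: logical address out of range")
    return logical + RELOCATION

print(to_physical(0))    # 14000
print(to_physical(346))  # 14346
```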
3. Swapping
A process must be loaded into memory in order to execute. If there is not enough memory available
to keep all running processes in memory at the same time, then some processes that are not
currently using the CPU may have their memory swapped out to a backing store, commonly a fast
local disk. The backing store must be large enough to accommodate copies of all memory images
for all users, and it must provide direct access to these memory images.
For example, in a multiprogramming environment with a round-robin CPU-scheduling algorithm,
when a quantum expires, the memory manager will start to swap out the process that just finished
and to swap another process into the memory space that has been freed (Figure given below).
A variant of this swapping policy is used for priority-based scheduling algorithms. If a higher-
priority process arrives and wants service, the memory manager can swap out the lower-priority
process and then load and execute the higher-priority process. When the higher-priority process
finishes, the lower-priority process can be swapped back in and continued. This variant of swapping
is sometimes called roll out, roll in.
If we want to swap a process, we must be sure that it is completely idle, with no pending I/O
operations. (Otherwise a pending I/O operation could write into the wrong process's memory space.)
The solution is either to swap only totally idle processes, or to do I/O operations only into and out
of OS buffers, which are then transferred to or from the process's main memory as a second step.
4. Contiguous Memory Allocation
The main memory must accommodate both the operating system and the various user processes. We
therefore need to allocate main memory in the most efficient way possible. This section explains
one common method, contiguous memory allocation.
The memory is usually divided into two partitions: one for the resident operating system and one for
the user processes. We can place the operating system in either low memory or high memory. Here,
we consider only the situation in which the operating system resides in low memory.
It is desirable for several user processes to reside in memory at the same time. In contiguous
memory allocation, each process is contained in a single contiguous section of memory.
Memory Allocation
• One of the simplest methods for allocating memory is to divide memory into several fixed-
sized partitions. Each partition may contain exactly one process. Thus, the degree of
multiprogramming is bound by the number of partitions. In this multiple-partition method,
when a partition is free, a process is selected from the input queue and is loaded into the free
partition. When the process terminates, the partition becomes available for another process.
• In the variable-partition scheme, the operating system keeps a table indicating which parts
of memory are available and which are occupied. Initially, all memory is available for user
processes and is considered one large block of available memory, a hole.
As processes enter the system, they are put into an input queue. The operating system takes into
account the memory requirements of each process and the amount of available memory space in
determining which processes are allocated memory. When a process is allocated space, it is loaded
into memory, and it can then compete for CPU time. When a process terminates, it releases its
memory, which the operating system may then fill with another process from the input queue.
At any given time, then, we have a list of available block sizes and an input queue. The operating
system can order the input queue according to a scheduling algorithm. If no available block of
memory (or hole) is large enough to hold the next process, the operating system can either wait
until a large enough block is available or skip down the input queue to see whether the smaller
memory requirements of some other process can be met.
When a process arrives and needs memory, the system searches the set for a hole that is large
enough for this process. If the hole is too large, it is split into two parts. One part is allocated to the
arriving process; the other is returned to the set of holes. When a process terminates, it releases its
block of memory, which is then placed back in the set of holes. If the new hole is adjacent to other
holes, these adjacent holes are merged to form one larger hole. At this point, the system may need to
check whether there are processes waiting for memory and whether this newly freed and
recombined memory could satisfy the demands of any of these waiting processes.
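The splitting and coalescing of holes described above can be sketched with (start, size) pairs; `free_and_merge` is a hypothetical helper, not a standard routine:

```python
def free_and_merge(holes, start, size):
    """Free the block [start, start + size) and merge any adjacent holes."""
    holes = sorted(holes + [(start, size)])
    merged = [holes[0]]
    for s, sz in holes[1:]:
        last_start, last_size = merged[-1]
        if last_start + last_size == s:        # adjacent: coalesce into one hole
            merged[-1] = (last_start, last_size + sz)
        else:
            merged.append((s, sz))
    return merged

# Freeing [100, 300) bridges the two existing holes into one large hole.
print(free_and_merge([(0, 100), (300, 50)], 100, 200))  # [(0, 350)]
```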
Partition Selection Algorithm
To satisfy a request of size n from a list of free holes, the most common strategies are first fit,
best fit, and worst fit.
• First fit. Search the list of holes until one is found that is big enough to satisfy the request,
and assign a portion of that hole to that process. Whatever fraction of the hole not needed by
the request is left on the free list as a smaller hole. Subsequent requests may start looking
either from the beginning of the list or from the point at which this search ended.
• Best fit. Allocate the smallest hole that is big enough to satisfy the request. This saves large
holes for other process requests that may need them later, but the resulting unused portions
of holes may be too small to be of any use, and will therefore be wasted. Keeping the free
list sorted can speed up the process of finding the right hole.
• Worst fit. Allocate the largest hole available. This strategy produces the largest leftover
hole, which may be more useful than the smaller leftover hole from a best-fit approach.
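The three strategies can be sketched over a list of hole sizes (each function returns the index of the chosen hole, or None; the splitting of the chosen hole is omitted):

```python
def first_fit(holes, n):
    """First hole large enough, searching from the beginning of the list."""
    for i, size in enumerate(holes):
        if size >= n:
            return i
    return None

def best_fit(holes, n):
    """Smallest hole that still fits."""
    fits = [(size, i) for i, size in enumerate(holes) if size >= n]
    return min(fits)[1] if fits else None

def worst_fit(holes, n):
    """Largest hole, provided it fits."""
    size, i = max((size, i) for i, size in enumerate(holes))
    return i if size >= n else None

holes = [100, 500, 200, 300, 600]     # free hole sizes, in list order
print(first_fit(holes, 212))  # 1 (the 500-byte hole)
print(best_fit(holes, 212))   # 3 (the 300-byte hole)
print(worst_fit(holes, 212))  # 4 (the 600-byte hole)
```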
Fragmentation
As processes are loaded and removed from memory in first-fit and best-fit (strategies of multiple-
partition allocation scheme), the free memory space is broken into little pieces. External
fragmentation exists when there is enough total memory space to satisfy a request but the available
spaces are not contiguous; storage is fragmented into a large number of small holes. This
fragmentation problem can be severe. In the worst case, we could have a block of free (or wasted)
memory between every two processes. If all these small pieces of memory were in one big free
block instead, we might be able to run several more processes.
In the fixed-partition allocation scheme, physical memory is broken into fixed-sized blocks, and
memory is allocated in units based on the block size. With this approach, the memory allocated to a
process may be slightly larger than the requested memory. The difference between these two numbers
is internal fragmentation – unused memory that is internal to a partition.
One solution to the problem of external fragmentation is compaction. The goal is to shuffle the
memory contents so as to place all free memory together in one large block. We must weigh the
cost of the compaction algorithm. The simplest approach is to move all processes toward one end
of memory, so that all holes move in the other direction, producing one large hole of available
memory. This scheme can be expensive.
Another possible solution to the external-fragmentation problem is to allow a process to use non-
contiguous blocks of physical memory (Non-contiguous memory allocation). Two complementary
techniques achieve this solution: paging and segmentation.
5. Paging
Paging is a memory-management scheme that:
• permits the physical address space of a process to be noncontiguous.
• avoids external fragmentation and the need for compaction by allocating memory in equal-
sized blocks known as pages.
Basic Method
• The basic method for implementing paging involves breaking physical memory into fixed-
sized blocks called frames and breaking logical memory into blocks of the same size called
pages.
• When a process is to be executed, its pages are loaded into any available memory frames
from their source (a file system or the backing store). The backing store is divided into
fixed-sized blocks that are of the same size as the memory frames.
• Every address generated by the CPU (logical address) is divided into two parts: a page
number (p) and a page offset (d). The page number is used as an index into a page table.
• A page table contains the base address of each page in physical memory i.e. corresponding
frame. So, it (page table) maps the page number to a frame number, to yield a physical
address which also has two parts: The frame number and the offset within that frame.
• This base address of frame is combined with the page offset to define the physical memory
address that is sent to the memory unit. The paging model of memory is shown below:
Example for mapping of logical and physical memory: page 2 of the program's logical memory
is currently stored in frame 3 of physical memory:
Page numbers, frame numbers, and frame sizes are determined by the computer architecture
(hardware). The size of a page is typically a power of 2, varying between 512 bytes and 16
MB per page. For example, if the logical address size is 2^m and the page size is 2^n, then the
high-order m-n bits of a logical address designate the page number and the remaining n bits
represent the offset. Thus, the logical address is as follows, where p is an index into the page
table and d is the displacement within the page.
Paging example for a 32-byte memory with 4-byte pages: In this example, a process has
16 bytes of logical memory, mapped in 4 byte pages into 32 bytes of physical memory.
Here, n= 2 and m = 4 for page size and logical address respectively.
▪ Logical address 0 is page 0, offset 0. Indexing into the page table, we find that page
0 is in frame 5. Thus, logical address 0 maps to physical address 20 [= (5 x 4) + 0].
(Logical address 0 = 0000; the bold high-order m-n = 4-2 = 2 digits give the page
number, and the remaining digits give the offset.)
▪ Logical address 3 (page 0, offset 3) maps to physical address 23 [= (5 x 4) + 3].
(Logical address 3 = 0011)
▪ Logical address 4 is page 1, offset 0; according to the page table, page 1 is mapped
to frame 6. Thus, logical address 4 maps to physical address 24 [= (6 x 4) + 0].
(Logical address 4 = 0100)
▪ Logical address 13 is page 3, offset 1; page 3 is mapped to frame 2, so logical
address 13 maps to physical address 9 [= (2 x 4) + 1]. (Logical address 13 = 1101)
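The translations above can be reproduced with a small model (the page-to-frame mapping below is assumed from the standard figure for this example):

```python
PAGE_SIZE = 4                            # 2^n bytes, with n = 2
PAGE_TABLE = {0: 5, 1: 6, 2: 1, 3: 2}    # page -> frame (assumed figure values)

def translate(logical):
    page, offset = divmod(logical, PAGE_SIZE)   # split address into p and d
    return PAGE_TABLE[page] * PAGE_SIZE + offset

print(translate(0))   # 20
print(translate(3))   # 23
print(translate(4))   # 24
print(translate(13))  # 9
```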
When we use a paging scheme, we have no external fragmentation: any free frame can be allocated
to a process that needs it. However, we may have some internal fragmentation. If page size is 2,048
bytes, a process of 72,766 bytes will need 35 pages plus 1,086 bytes. It will be allocated 36 frames,
resulting in internal fragmentation of 2,048 – 1,086 = 962 bytes. In the worst case, if a process
would need n pages plus 1 byte, it would be allocated n+1 frames, resulting in internal
fragmentation of almost an entire frame.
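The internal-fragmentation arithmetic above can be checked directly:

```python
import math

PAGE_SIZE = 2048
process_size = 72766

frames = math.ceil(process_size / PAGE_SIZE)       # 35 full pages + 1,086 bytes
internal_frag = frames * PAGE_SIZE - process_size  # unused tail of the last frame

print(frames)         # 36
print(internal_frag)  # 962
```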
Small page sizes are desirable. However, overhead is involved in each page-table entry, and this
overhead is reduced as the size of the pages increases. Also, disk I/O is more efficient when the
amount of data being transferred is larger. Today, pages are typically between 4 KB and 8 KB in
size (and some systems support even larger page sizes). Usually, each page-table entry is 4 bytes
long, but that size can vary as well. A 32-bit entry can point to one of 2^32 physical page frames.
If the frame size is 4 KB (2^12 bytes), this translates to 2^44 bytes (or 16 TB) of addressable
physical memory (32 + 12 = 44 bits of physical address space).
When a process arrives in the system to be executed, its size is examined in terms of pages. Each
page of the process needs one frame. Thus, if the process requires n pages, at least n frames must be
available in memory. If n frames are available, they are allocated to this arriving process from a
free-frame list, and inserted into that process's page table. (Figure given below shows Free frames
(a) before allocation and (b) after allocation).
Processes are blocked from accessing one another's memory because all of their memory requests
are mapped through their page tables. There is no way for a process to generate an address that
maps into any other process's memory space.
Since the operating system is managing physical memory, it must be aware of the allocation details
of physical memory: which frames are allocated, which frames are available, the total number of
frames, and so on. This information is generally kept in a data structure called a frame table. The
frame table has one entry for each physical page frame, indicating whether the frame is free or
allocated and, if it is allocated, to which page of which process.
The operating system maintains a copy of the page table for each process, just as it maintains a copy
of the instruction counter and register contents. This copy is used to translate logical addresses to
physical addresses. In addition, the operating system must ensure that if a user makes a system call
(to do I/O, for example) and provides an address as a parameter (a buffer, for instance), that address
is mapped to produce the correct physical address.
Hardware Support for paging
Page lookups must be done for every memory reference, and whenever a process gets swapped in or
out of the CPU, its page table must be swapped in and out too, along with the instruction registers,
etc. It is therefore appropriate to provide hardware support for this operation, in order to make it as
fast as possible and to make process switches as fast as possible also. The hardware implementation
of the page table can be done in several ways:
1. In the simplest case, the page table is implemented as a set of dedicated registers. The CPU
dispatcher reloads these registers, just as it reloads the other registers. Instructions to load or
modify the page-table registers are, of course, privileged, so that only the operating system
can change them.
2. The use of registers for the page table is satisfactory if the page table is reasonably small
(for example, 256 entries). For machines that allow the page table to be very large (for
example, 1 million entries), the use of fast registers to implement the page table is not
feasible. Rather, the page table is kept in main memory, and a page-table base register
(PTBR) points to the page table. Changing page tables requires changing only this one
register, substantially reducing context-switch time.
The problem (disadvantage) with this approach is that every memory access now requires
two memory references: first one for the page-table entry (using the PTBR to find the page
frame number) and then another to access the desired memory location. Thus, memory
access is slowed by a factor of 2.
The standard solution to this problem is to use a special, small, fast-lookup hardware cache,
called a translation look-aside buffer (TLB) (or associative memory). Each entry in the
TLB consists of two parts: a key (or tag) and a value. The benefit of the TLB is that it can
search an entire table for a key value in parallel, and if it is found anywhere in the table, then
the corresponding lookup value is returned. So, when the associative memory is presented
with an item, the item is compared with all keys simultaneously. If the item is found, the
corresponding value field is returned. The search is fast, but the hardware is expensive. So,
the number of entries in a TLB is small, numbering between 64 and 1,024. It is therefore
used as a cache device.
The TLB is used with page tables in the following way:
1. The TLB contains only a few of the page-table entries.
2. When a logical address is generated by the CPU, its page number is presented to the
TLB. If the page number is found, its corresponding frame number is used to access
the physical memory. (TLB hit)
3. If the page number is not in the TLB (TLB miss), a memory reference is made to the
page table (kept in main memory) to obtain frame number for that page number,
using which we access the desired memory location. In addition, we add the page
number and frame number to the TLB, so that they will be found quickly on the next
reference.
4. If the TLB is already full of entries, the operating system must select one entry for
replacement. Replacement policies range from least recently used (LRU) to random.
5. Some TLBs allow some entries to be wired down, which means that they cannot be
removed from the TLB. Typically TLB entries for kernel code are wired down.
Paging with TLB is shown in following figure.
The flow chart for TLB miss is shown below:
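Steps 1–4 can be modelled with a tiny TLB using LRU replacement (one of the policies mentioned above; the capacity and the toy page table are illustrative):

```python
from collections import OrderedDict

class TLB:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = OrderedDict()      # page -> frame, oldest first
        self.hits = self.misses = 0

    def lookup(self, page, page_table):
        if page in self.entries:                      # TLB hit
            self.hits += 1
            self.entries.move_to_end(page)            # mark most recently used
        else:                                         # TLB miss
            self.misses += 1
            if len(self.entries) >= self.capacity:
                self.entries.popitem(last=False)      # evict the LRU entry
            self.entries[page] = page_table[page]     # extra memory reference
        return self.entries[page]

page_table = {p: p + 100 for p in range(10)}          # toy page table
tlb = TLB()
for p in [0, 1, 0, 2, 3, 4, 1]:
    tlb.lookup(p, page_table)
print(tlb.hits, tlb.misses)  # 1 6
```

The final reference to page 1 misses even though page 1 was seen earlier: it was the least recently used entry and had already been evicted.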
The percentage of times that a particular page number is found in the TLB is called the hit ratio. An
80-percent hit ratio, for example, means that we find the desired page number in the TLB 80
percent of the time. If it takes 20 nanoseconds to search the TLB and 100 nanoseconds to access
memory, then a mapped-memory access takes 120 nanoseconds when the page number is in the
TLB (in case of TLB hit).
If we fail to find the page number in the TLB (20 nanoseconds), then we must first access memory
for the page table and frame number (100 nanoseconds) and then access the desired byte in memory
(100 nanoseconds), for a total of 220 nanoseconds.
To find the effective memory-access time, we weight each case by its probability:
effective access time = 0.80 x 120 + 0.20 x 220 = 140 nanoseconds.
In this example, we suffer a 40-percent slowdown in memory-access time (from 100 to 140
nanoseconds).
For a 98-percent hit ratio, we have,
effective access time = 0.98 x 120 + 0.02 x 220 = 122 nanoseconds.
This increased hit rate produces only a 22 percent slowdown in access time.
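Both calculations above follow from one formula:

```python
def effective_access_time(hit_ratio, tlb_time=20, mem_time=100):
    hit = tlb_time + mem_time            # TLB hit: one memory access
    miss = tlb_time + 2 * mem_time       # TLB miss: page table, then data
    return hit_ratio * hit + (1 - hit_ratio) * miss

print(round(effective_access_time(0.80), 1))  # 140.0
print(round(effective_access_time(0.98), 1))  # 122.0
```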
Structure of the page table/Types of Page Table
1. Hierarchical paging: In this structure, the page table is divided into smaller pieces. One way
is to use a two-level paging algorithm, in which the page table itself is also paged (as shown
below).
Two-level paging example: A logical address (on a 32-bit machine, giving a 2^32 logical
address space, with a 4K (2^12) page size) is divided into:
1. a page number consisting of 20 bits (2^32 / 2^12 = 2^20, or 1 million, entries).
2. a page offset consisting of 12 bits.
Since the page table with its 2^20 entries is itself paged, the page number is further divided into:
1. a 10-bit page number (p1).
2. a 10-bit page offset (p2).
Thus, a logical address is as follows:
where p1 is an index into the outer page table, which identifies where in memory to find one
page of an inner page table and p2 is the displacement within the page of the outer page
table, which finds a specific entry in that inner page table. This inner page table entry
identifies a particular frame in physical memory. ( The remaining 12 bits of the 32 bit logical
address are the offset within the 4K frame. )
The address-translation method for this architecture is shown in following figure.
Because address translation works from the outer page table inward, this scheme is also
known as a forward-mapped page table.
More levels of page tables can exist, creating a deeper hierarchical structure, depending on
the system architecture (for example, a 64-bit machine may find a two-level paging scheme
inappropriate).
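The 10/10/12 split of a 32-bit logical address can be sketched with bit operations (the sample address is arbitrary):

```python
def split_two_level(addr):
    """Return (p1, p2, d): outer index, inner index, page offset."""
    d = addr & 0xFFF            # low 12 bits: offset within the 4K page
    p2 = (addr >> 12) & 0x3FF   # next 10 bits: index into the inner page table
    p1 = addr >> 22             # top 10 bits: index into the outer page table
    return p1, p2, d

print(split_two_level(0x12345678))  # (72, 837, 1656)
```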
2. Hashed Page Tables: A common approach for handling address spaces larger than 32 bits is
to use a hashed page table, with the hash value being the virtual page number. Each entry
in the hash table contains a linked list of elements that hash to the same location (to handle
collisions). Each element in this table consists of three fields:
(1) the virtual page number,
(2) the value of the mapped page frame, and
(3) a pointer to the next element in the linked list.
The algorithm works as follows:
1. The virtual page number in the virtual address is hashed into the hash table.
2. The virtual page number is compared with field 1 in the first element in the linked
list.
3. If there is a match, the corresponding page frame (field 2) is used to form the desired
physical address.
4. If there is no match, subsequent entries in the linked list are searched for a matching
virtual page number.
This scheme is shown in following figure:
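A minimal hashed page table with chained collisions, following the three-field layout above (the class and names are illustrative; Python's list references play the role of the next-element pointers):

```python
class HashedPageTable:
    def __init__(self, buckets=8):
        self.table = [[] for _ in range(buckets)]   # each bucket is a chain

    def insert(self, vpn, frame):
        self.table[hash(vpn) % len(self.table)].append((vpn, frame))

    def lookup(self, vpn):
        # Walk the chain at the hashed slot, comparing virtual page numbers.
        for entry_vpn, frame in self.table[hash(vpn) % len(self.table)]:
            if entry_vpn == vpn:
                return frame
        raise KeyError("no mapping: page fault")

hpt = HashedPageTable()
hpt.insert(0x9ABCD, 42)
print(hpt.lookup(0x9ABCD))  # 42
```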
3. Inverted Page Tables: Usually, each process has an associated page table. The page table
has one entry for each page that the process is using, so each page table may consist of
millions of entries. These tables may consume large amounts of physical memory.
Another approach is to use an inverted page table. Instead of a table listing all of the pages
for a particular process, an inverted page table lists all of the pages currently loaded in
memory, for all processes. ( i.e. there is one entry per frame instead of one entry per page. )
A simple example illustrates this method. Here, each virtual address in
the system consists of a triple:
<process-id, page-number, offset>.
Each inverted page-table entry is a pair <process-id, page-number> where the process-id
assumes the role of the address-space identifier. When a memory reference occurs, part of
the virtual (logical) address, consisting of <process-id, page-number>, is presented to the
memory subsystem. The inverted page table is then searched for a match. If a match is
found, say at entry i, then the physical address <i, offset> is generated. If no match is found,
then an illegal address access has been attempted.
Although this scheme decreases the amount of memory needed to store each page table, it increases
the amount of time needed to search the table when a page reference occurs, as it may be
necessary to search the entire table to find the desired page or to discover that it is not
there.
Also, systems that use inverted page tables have difficulty in implementing shared memory. Shared
memory is used to map multiple logical pages to a common physical frame, but inverted page tables
map each frame to one and only one process.
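The inverted-table search can be sketched as a linear scan in which an entry's index i is the frame number:

```python
def ipt_lookup(ipt, pid, page):
    """Search the inverted page table for <pid, page>; the index is the frame."""
    for i, entry in enumerate(ipt):
        if entry == (pid, page):
            return i
    raise MemoryError("illegal address access")

# Frame i holds the page identified by (process-id, page-number).
ipt = [(1, 0), (2, 3), (1, 7)]
print(ipt_lookup(ipt, 2, 3))  # 1
```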
6. Segmentation
Paging separates the user's view of memory (logical memory) from the actual physical memory.
Segmentation is a memory-management scheme that implements the user's view of memory. Users
prefer to view memory as a collection of variable-sized segments, such as the main program,
routines, variables, and stacks, with no necessary ordering among them.
So, logical address space can be viewed as collection of segments, each segment having a name and
a length. User specifies each logical address consisting of a segment name and the offset within the
segment. For simplicity of implementation, segments are numbered and are referred to by a segment
number, rather than by a segment name. Thus, a logical address consists of a two-tuple:
<segment-number, offset>
(Whereas in paging scheme, user specifies only a single address, which is partitioned by the
hardware into a page number and an offset, all invisible to the programmer.)
Segmentation Hardware
To keep track of each segment, a segment table is maintained by the OS. Each entry in the segment
table consists of two fields: the segment base and the segment limit. The segment base contains the
starting address of the segment in physical memory, and the segment limit specifies the length of
the segment. The segment number is used as an index into the segment table.
A logical address consists of two parts: a segment number, s, and an offset into that segment, d. The
segment number is used as an index to the segment table. The offset d of the logical address must be
between 0 and the segment limit. If it is not, an invalid-address error is generated (trap to the OS).
(Figure: a user's view of a program as segments, e.g. Segment 0: Main Program, Segment 1: Stack,
Segment 2: Variables.)
When an offset is legal, it is added to the segment base to produce the address in physical memory
of the desired byte. The segment table is thus essentially an array of base-limit register pairs.
As an example, consider the situation shown in figure given below. We have five segments
numbered from 0 through 4. The segments are stored in physical memory as shown. The segment
table has a separate entry for each segment, giving the beginning address of the segment in physical
memory (or base) and the length of that segment (or limit). For example, segment 2 is 400 bytes
long and begins at location 4300. Thus, a reference to byte 53 of segment 2 is mapped onto location
4300 +53= 4353. A reference to segment 3, byte 852, is mapped to 3200 (the base of segment 3) +
852 = 4052. A reference to byte 1222 of segment 0 would result in a trap to the operating system, as
this segment is only 1,000 bytes long.
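The two translations above can be verified with a segment-table sketch (the base/limit values for segments 0, 1, and 4 are assumed from the usual figure; segments 2 and 3 match the text):

```python
# segment -> (base, limit); segment 2 and 3 values are given in the text.
SEG_TABLE = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 400),
             3: (3200, 1100), 4: (4700, 1000)}

def translate(segment, offset):
    base, limit = SEG_TABLE[segment]
    if offset >= limit:                    # offset must be below the limit
        raise MemoryError("trap: invalid segment offset")
    return base + offset

print(translate(2, 53))    # 4353
print(translate(3, 852))   # 4052
```

A reference to byte 1222 of segment 0 raises the trap, since that segment's limit is 1,000.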

 
java-Unit4 chap2- awt controls and layout managers of applet
java-Unit4 chap2- awt controls and layout managers of appletjava-Unit4 chap2- awt controls and layout managers of applet
java-Unit4 chap2- awt controls and layout managers of appletraksharao
 
Chapter iv(modern gui)
Chapter iv(modern gui)Chapter iv(modern gui)
Chapter iv(modern gui)Chhom Karath
 
Java Swing Custom GUI MVC Component Tutorial
Java Swing Custom GUI MVC Component TutorialJava Swing Custom GUI MVC Component Tutorial
Java Swing Custom GUI MVC Component TutorialSagun Dhakhwa
 
Chapter 10 - File System Interface
Chapter 10 - File System InterfaceChapter 10 - File System Interface
Chapter 10 - File System InterfaceWayne Jones Jnr
 
Implementation of page table
Implementation of page tableImplementation of page table
Implementation of page tableguestff64339
 
Virtual memory managment
Virtual memory managmentVirtual memory managment
Virtual memory managmentSantu Kumar
 
Operating system notes
Operating system notesOperating system notes
Operating system notesSANTOSH RATH
 
Teaching demonstration evaluation form
Teaching demonstration evaluation formTeaching demonstration evaluation form
Teaching demonstration evaluation forms junaid
 
GUI Programming In Java
GUI Programming In JavaGUI Programming In Java
GUI Programming In Javayht4ever
 

Destaque (20)

File Systems
File SystemsFile Systems
File Systems
 
Chapter 1 swings
Chapter 1 swingsChapter 1 swings
Chapter 1 swings
 
Ch11: File System Interface
Ch11: File System InterfaceCh11: File System Interface
Ch11: File System Interface
 
Os solved question paper
Os solved question paperOs solved question paper
Os solved question paper
 
Operating system concepts (notes)
Operating system concepts (notes)Operating system concepts (notes)
Operating system concepts (notes)
 
Operating Systems - File Management
Operating Systems -  File ManagementOperating Systems -  File Management
Operating Systems - File Management
 
FINAL DEMO TOPIC
FINAL DEMO TOPICFINAL DEMO TOPIC
FINAL DEMO TOPIC
 
TKP notes for Java
TKP notes for JavaTKP notes for Java
TKP notes for Java
 
java-Unit4 chap2- awt controls and layout managers of applet
java-Unit4 chap2- awt controls and layout managers of appletjava-Unit4 chap2- awt controls and layout managers of applet
java-Unit4 chap2- awt controls and layout managers of applet
 
Chapter iv(modern gui)
Chapter iv(modern gui)Chapter iv(modern gui)
Chapter iv(modern gui)
 
Java Swing Custom GUI MVC Component Tutorial
Java Swing Custom GUI MVC Component TutorialJava Swing Custom GUI MVC Component Tutorial
Java Swing Custom GUI MVC Component Tutorial
 
Chapter 10 - File System Interface
Chapter 10 - File System InterfaceChapter 10 - File System Interface
Chapter 10 - File System Interface
 
Java packages
Java packagesJava packages
Java packages
 
Implementation of page table
Implementation of page tableImplementation of page table
Implementation of page table
 
Java swing
Java swingJava swing
Java swing
 
Virtual memory managment
Virtual memory managmentVirtual memory managment
Virtual memory managment
 
Operating system notes
Operating system notesOperating system notes
Operating system notes
 
10 File System
10 File System10 File System
10 File System
 
Teaching demonstration evaluation form
Teaching demonstration evaluation formTeaching demonstration evaluation form
Teaching demonstration evaluation form
 
GUI Programming In Java
GUI Programming In JavaGUI Programming In Java
GUI Programming In Java
 

Semelhante a Memory Management

Unit iiios Storage Management
Unit iiios Storage ManagementUnit iiios Storage Management
Unit iiios Storage Managementdonny101
 
Operating Systems - memory management
Operating Systems - memory managementOperating Systems - memory management
Operating Systems - memory managementMukesh Chinta
 
Memory Managementgggffffffffffgggggggggg
Memory ManagementgggffffffffffggggggggggMemory Managementgggffffffffffgggggggggg
Memory ManagementgggffffffffffggggggggggBHUPESHRAHANGDALE200
 
Chapter 9 OS
Chapter 9 OSChapter 9 OS
Chapter 9 OSC.U
 
Paging +Algorithem+Segmentation+memory management
Paging +Algorithem+Segmentation+memory managementPaging +Algorithem+Segmentation+memory management
Paging +Algorithem+Segmentation+memory managementkazim Hussain
 
Operating system Memory management
Operating system Memory management Operating system Memory management
Operating system Memory management Shashank Asthana
 
Mainmemoryfinal 161019122029
Mainmemoryfinal 161019122029Mainmemoryfinal 161019122029
Mainmemoryfinal 161019122029marangburu42
 
Bab 4
Bab 4Bab 4
Bab 4n k
 

Semelhante a Memory Management (20)

Unit iiios Storage Management
Unit iiios Storage ManagementUnit iiios Storage Management
Unit iiios Storage Management
 
Opetating System Memory management
Opetating System Memory managementOpetating System Memory management
Opetating System Memory management
 
Operating Systems - memory management
Operating Systems - memory managementOperating Systems - memory management
Operating Systems - memory management
 
Dynamic loading
Dynamic loadingDynamic loading
Dynamic loading
 
Operating system
Operating systemOperating system
Operating system
 
Memory Management
Memory ManagementMemory Management
Memory Management
 
UNIT-2 OS.pptx
UNIT-2 OS.pptxUNIT-2 OS.pptx
UNIT-2 OS.pptx
 
Memory Managementgggffffffffffgggggggggg
Memory ManagementgggffffffffffggggggggggMemory Managementgggffffffffffgggggggggg
Memory Managementgggffffffffffgggggggggg
 
CS6401 OPERATING SYSTEMS Unit 3
CS6401 OPERATING SYSTEMS Unit 3CS6401 OPERATING SYSTEMS Unit 3
CS6401 OPERATING SYSTEMS Unit 3
 
Operating System
Operating SystemOperating System
Operating System
 
Chapter 9 OS
Chapter 9 OSChapter 9 OS
Chapter 9 OS
 
Module 4 memory management
Module 4 memory managementModule 4 memory management
Module 4 memory management
 
Paging +Algorithem+Segmentation+memory management
Paging +Algorithem+Segmentation+memory managementPaging +Algorithem+Segmentation+memory management
Paging +Algorithem+Segmentation+memory management
 
Operating system Memory management
Operating system Memory management Operating system Memory management
Operating system Memory management
 
Main memoryfinal
Main memoryfinalMain memoryfinal
Main memoryfinal
 
Mainmemoryfinal 161019122029
Mainmemoryfinal 161019122029Mainmemoryfinal 161019122029
Mainmemoryfinal 161019122029
 
unit5_os (1).pptx
unit5_os (1).pptxunit5_os (1).pptx
unit5_os (1).pptx
 
Cs8493 unit 3
Cs8493 unit 3Cs8493 unit 3
Cs8493 unit 3
 
Os unit 2
Os unit 2Os unit 2
Os unit 2
 
Bab 4
Bab 4Bab 4
Bab 4
 

Mais de Shipra Swati

Operating System-Process Scheduling
Operating System-Process SchedulingOperating System-Process Scheduling
Operating System-Process SchedulingShipra Swati
 
Operating System-Introduction
Operating System-IntroductionOperating System-Introduction
Operating System-IntroductionShipra Swati
 
Fundamental of Information Technology - UNIT 8
Fundamental of Information Technology - UNIT 8Fundamental of Information Technology - UNIT 8
Fundamental of Information Technology - UNIT 8Shipra Swati
 
Fundamental of Information Technology - UNIT 7
Fundamental of Information Technology - UNIT 7Fundamental of Information Technology - UNIT 7
Fundamental of Information Technology - UNIT 7Shipra Swati
 
Fundamental of Information Technology - UNIT 6
Fundamental of Information Technology - UNIT 6Fundamental of Information Technology - UNIT 6
Fundamental of Information Technology - UNIT 6Shipra Swati
 
Fundamental of Information Technology
Fundamental of Information TechnologyFundamental of Information Technology
Fundamental of Information TechnologyShipra Swati
 
Process Synchronization
Process SynchronizationProcess Synchronization
Process SynchronizationShipra Swati
 
Preparing for Infrastructure Management (Part 1)
Preparing for Infrastructure Management (Part 1)Preparing for Infrastructure Management (Part 1)
Preparing for Infrastructure Management (Part 1)Shipra Swati
 

Mais de Shipra Swati (20)

Operating System-Process Scheduling
Operating System-Process SchedulingOperating System-Process Scheduling
Operating System-Process Scheduling
 
Operating System-Introduction
Operating System-IntroductionOperating System-Introduction
Operating System-Introduction
 
Java unit 11
Java unit 11Java unit 11
Java unit 11
 
Java unit 14
Java unit 14Java unit 14
Java unit 14
 
Java unit 12
Java unit 12Java unit 12
Java unit 12
 
Java unit 7
Java unit 7Java unit 7
Java unit 7
 
Java unit 3
Java unit 3Java unit 3
Java unit 3
 
Java unit 2
Java unit 2Java unit 2
Java unit 2
 
Java unit 1
Java unit 1Java unit 1
Java unit 1
 
OOPS_Unit_1
OOPS_Unit_1OOPS_Unit_1
OOPS_Unit_1
 
Ai lab manual
Ai lab manualAi lab manual
Ai lab manual
 
Fundamental of Information Technology - UNIT 8
Fundamental of Information Technology - UNIT 8Fundamental of Information Technology - UNIT 8
Fundamental of Information Technology - UNIT 8
 
Fundamental of Information Technology - UNIT 7
Fundamental of Information Technology - UNIT 7Fundamental of Information Technology - UNIT 7
Fundamental of Information Technology - UNIT 7
 
Fundamental of Information Technology - UNIT 6
Fundamental of Information Technology - UNIT 6Fundamental of Information Technology - UNIT 6
Fundamental of Information Technology - UNIT 6
 
Fundamental of Information Technology
Fundamental of Information TechnologyFundamental of Information Technology
Fundamental of Information Technology
 
Disk Management
Disk ManagementDisk Management
Disk Management
 
Deadlocks
DeadlocksDeadlocks
Deadlocks
 
Process Synchronization
Process SynchronizationProcess Synchronization
Process Synchronization
 
Avl excercise
Avl excerciseAvl excercise
Avl excercise
 
Preparing for Infrastructure Management (Part 1)
Preparing for Infrastructure Management (Part 1)Preparing for Infrastructure Management (Part 1)
Preparing for Infrastructure Management (Part 1)
 

Último

Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions  for the students and aspirants of Chemistry12th.pptxOrganic Name Reactions  for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions for the students and aspirants of Chemistry12th.pptxVS Mahajan Coaching Centre
 
Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)eniolaolutunde
 
JAPAN: ORGANISATION OF PMDA, PHARMACEUTICAL LAWS & REGULATIONS, TYPES OF REGI...
JAPAN: ORGANISATION OF PMDA, PHARMACEUTICAL LAWS & REGULATIONS, TYPES OF REGI...JAPAN: ORGANISATION OF PMDA, PHARMACEUTICAL LAWS & REGULATIONS, TYPES OF REGI...
JAPAN: ORGANISATION OF PMDA, PHARMACEUTICAL LAWS & REGULATIONS, TYPES OF REGI...anjaliyadav012327
 
Sanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfSanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfsanyamsingh5019
 
Advanced Views - Calendar View in Odoo 17
Advanced Views - Calendar View in Odoo 17Advanced Views - Calendar View in Odoo 17
Advanced Views - Calendar View in Odoo 17Celine George
 
Separation of Lanthanides/ Lanthanides and Actinides
Separation of Lanthanides/ Lanthanides and ActinidesSeparation of Lanthanides/ Lanthanides and Actinides
Separation of Lanthanides/ Lanthanides and ActinidesFatimaKhan178732
 
A Critique of the Proposed National Education Policy Reform
A Critique of the Proposed National Education Policy ReformA Critique of the Proposed National Education Policy Reform
A Critique of the Proposed National Education Policy ReformChameera Dedduwage
 
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Sapana Sha
 
9548086042 for call girls in Indira Nagar with room service
9548086042  for call girls in Indira Nagar  with room service9548086042  for call girls in Indira Nagar  with room service
9548086042 for call girls in Indira Nagar with room servicediscovermytutordmt
 
Beyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactBeyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactPECB
 
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...Krashi Coaching
 
The byproduct of sericulture in different industries.pptx
The byproduct of sericulture in different industries.pptxThe byproduct of sericulture in different industries.pptx
The byproduct of sericulture in different industries.pptxShobhayan Kirtania
 
Arihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdfArihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdfchloefrazer622
 
Interactive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationInteractive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationnomboosow
 
The basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxThe basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxheathfieldcps1
 
Paris 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityParis 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityGeoBlogs
 
Nutritional Needs Presentation - HLTH 104
Nutritional Needs Presentation - HLTH 104Nutritional Needs Presentation - HLTH 104
Nutritional Needs Presentation - HLTH 104misteraugie
 

Último (20)

Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions  for the students and aspirants of Chemistry12th.pptxOrganic Name Reactions  for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
 
Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)
 
JAPAN: ORGANISATION OF PMDA, PHARMACEUTICAL LAWS & REGULATIONS, TYPES OF REGI...
JAPAN: ORGANISATION OF PMDA, PHARMACEUTICAL LAWS & REGULATIONS, TYPES OF REGI...JAPAN: ORGANISATION OF PMDA, PHARMACEUTICAL LAWS & REGULATIONS, TYPES OF REGI...
JAPAN: ORGANISATION OF PMDA, PHARMACEUTICAL LAWS & REGULATIONS, TYPES OF REGI...
 
Sanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfSanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdf
 
Advanced Views - Calendar View in Odoo 17
Advanced Views - Calendar View in Odoo 17Advanced Views - Calendar View in Odoo 17
Advanced Views - Calendar View in Odoo 17
 
Separation of Lanthanides/ Lanthanides and Actinides
Separation of Lanthanides/ Lanthanides and ActinidesSeparation of Lanthanides/ Lanthanides and Actinides
Separation of Lanthanides/ Lanthanides and Actinides
 
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptxINDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
 
A Critique of the Proposed National Education Policy Reform
A Critique of the Proposed National Education Policy ReformA Critique of the Proposed National Education Policy Reform
A Critique of the Proposed National Education Policy Reform
 
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
 
9548086042 for call girls in Indira Nagar with room service
9548086042  for call girls in Indira Nagar  with room service9548086042  for call girls in Indira Nagar  with room service
9548086042 for call girls in Indira Nagar with room service
 
Beyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactBeyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global Impact
 
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
 
The byproduct of sericulture in different industries.pptx
The byproduct of sericulture in different industries.pptxThe byproduct of sericulture in different industries.pptx
The byproduct of sericulture in different industries.pptx
 
Mattingly "AI & Prompt Design: The Basics of Prompt Design"
Mattingly "AI & Prompt Design: The Basics of Prompt Design"Mattingly "AI & Prompt Design: The Basics of Prompt Design"
Mattingly "AI & Prompt Design: The Basics of Prompt Design"
 
Arihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdfArihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdf
 
Interactive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationInteractive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communication
 
The basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxThe basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptx
 
Paris 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityParis 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activity
 
Nutritional Needs Presentation - HLTH 104
Nutritional Needs Presentation - HLTH 104Nutritional Needs Presentation - HLTH 104
Nutritional Needs Presentation - HLTH 104
 
Mattingly "AI & Prompt Design: Structured Data, Assistants, & RAG"
Mattingly "AI & Prompt Design: Structured Data, Assistants, & RAG"Mattingly "AI & Prompt Design: Structured Data, Assistants, & RAG"
Mattingly "AI & Prompt Design: Structured Data, Assistants, & RAG"
 

Memory Management

limit, as illustrated in the following figure:
The base register holds the smallest legal physical memory address; the limit register specifies the size of the range. For example, if the base register holds 300040 and the limit register holds 120900, then the program can legally access all addresses from 300040 through 420939 (inclusive). Every memory access made by a user process is checked against these two registers, and if an access is attempted outside the valid range, a fatal error is generated.

The OS obviously has access to all existing memory locations, as this is necessary to swap users' code and data in and out of memory. It should also be obvious that changing the contents of the base and limit registers is a privileged activity, allowed only to the OS kernel.

2. Logical versus Physical Address Space

The address generated by the CPU is a logical address, whereas the address actually seen by the memory hardware is a physical address. The set of all logical addresses used by a program composes the logical address space, and the set of all corresponding physical addresses composes the physical address space.

The run-time mapping of logical to physical addresses is handled by the memory-management unit (MMU), for which several methods are available. One such method is a generalization of the base-register scheme. Here, the base register is termed a relocation register. The value in the relocation register is added to every address generated by a user process at the time the address is sent to memory (see the following figure).
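As a minimal sketch (not actual hardware logic), the base/limit protection check just described, using the example values above, can be expressed as:

```python
# Sketch of the base/limit check performed in hardware on every memory
# access by a user process (values taken from the example above).

BASE = 300040   # smallest legal physical address for this process
LIMIT = 120900  # size of the legal address range

def check_access(address: int) -> bool:
    """Return True if the address is legal for this process."""
    return BASE <= address < BASE + LIMIT

# The process may access addresses 300040 .. 420939 inclusive:
assert check_access(300040)      # first legal address
assert check_access(420939)      # last legal address (300040 + 120900 - 1)
assert not check_access(420940)  # one past the range: would trap to the OS
assert not check_access(299999)  # below the base: would trap to the OS
```

An access that fails this check does not fault silently; the hardware traps to the operating system, which typically terminates the offending process.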
For example, if the base is at 14000, then an attempt by the user to address location 0 is dynamically relocated to location 14000; an access to location 346 is mapped to location 14346.

User programs and the CPU never see physical addresses. User programs work entirely in logical address space, and any memory references or manipulations are done using purely logical addresses. Only when an address is sent to the physical memory chips is the physical memory address generated. We now have two different types of addresses: logical addresses (in the range 0 to max) and physical addresses (in the range R+0 to R+max for a base value R). The user generates only logical addresses and thinks that the process runs in locations 0 to max. However, these logical addresses must be mapped to physical addresses by the memory-mapping hardware before they are used.

3. Swapping

A process must be loaded into memory in order to execute. If there is not enough memory available to keep all running processes in memory at the same time, then some processes that are not currently using the CPU may have their memory swapped out to a backing store. The backing store is commonly a fast local disk. It must be large enough to accommodate copies of all memory images for all users, and it must provide direct access to these memory images.

For example, in a multiprogramming environment with a round-robin CPU-scheduling algorithm, when a quantum expires, the memory manager starts to swap out the process that just finished and to swap another process into the memory space that has been freed (see the figure below).

A variant of this swapping policy is used for priority-based scheduling algorithms. If a higher-priority process arrives and wants service, the memory manager can swap out a lower-priority process and then load and execute the higher-priority process. When the higher-priority process finishes, the lower-priority process can be swapped back in and continued.
This variant of swapping is sometimes called roll out, roll in. If we want to swap a process, we must be sure that it is completely idle, with no pending I/O operations. (Otherwise a pending I/O operation could write into the wrong process's memory space.) The solution is either to swap only totally idle processes, or to do I/O operations only into and out of OS buffers, which are then transferred to or from the process's memory as a second step.
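Returning to the relocation-register scheme of Section 2, the dynamic translation it performs (base 14000, as in the example there) can be sketched as:

```python
# Simplified sketch of dynamic relocation by the MMU: the value of the
# relocation register is added to every logical address on its way to
# memory. Base value 14000 is taken from the example in Section 2.

RELOCATION_REGISTER = 14000

def mmu_translate(logical_address: int) -> int:
    """Map a logical address to a physical address by adding the base."""
    return RELOCATION_REGISTER + logical_address

assert mmu_translate(0) == 14000    # logical 0   -> physical 14000
assert mmu_translate(346) == 14346  # logical 346 -> physical 14346
```

Note that the user program only ever manipulates the values 0 and 346; the physical addresses 14000 and 14346 exist only on the memory bus.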
4. Contiguous Memory Allocation

The main memory must accommodate both the operating system and the various user processes. We therefore need to allocate main memory in the most efficient way possible. This section explains one common method, contiguous memory allocation.

The memory is usually divided into two partitions: one for the resident operating system and one for the user processes. We can place the operating system in either low memory or high memory. Here, we consider only the situation in which the operating system resides in low memory. It is desirable to have several user processes resident in memory at the same time. In contiguous memory allocation, each process is contained in a single contiguous section of memory.

Memory Allocation

• One of the simplest methods for allocating memory is to divide memory into several fixed-sized partitions. Each partition may contain exactly one process. Thus, the degree of multiprogramming is bounded by the number of partitions. In this multiple-partition method, when a partition is free, a process is selected from the input queue and is loaded into the free partition. When the process terminates, the partition becomes available for another process.

• In the variable-partition scheme, the operating system keeps a table indicating which parts of memory are available and which are occupied. Initially, all memory is available for user processes and is considered one large block of available memory, a hole. As processes enter the system, they are put into an input queue. The operating system takes into account the memory requirements of each process and the amount of available memory space in determining which processes are allocated memory. When a process is allocated space, it is loaded into memory, and it can then compete for CPU time. When a process terminates, it releases its memory, which the operating system may then fill with another process from the input queue.
At any given time, then, we have a list of available block sizes and an input queue. The operating system can order the input queue according to a scheduling algorithm. If no available block of memory (or hole) is large enough to hold an arriving process, the operating system can either wait until a large enough block is available or skip down the input queue to see whether the smaller memory requirements of some other process can be met.

When a process arrives and needs memory, the system searches the set of holes for one that is large enough. If the hole is too large, it is split into two parts: one part is allocated to the arriving process, and the other is returned to the set of holes. When a process terminates, it releases its block of memory, which is then placed back in the set of holes. If the new hole is adjacent to other holes, these adjacent holes are merged to form one larger hole. At this point, the system may need to check whether there are processes waiting for memory and whether this newly freed and recombined memory could satisfy the demands of any of these waiting processes.

Partition Selection Algorithms

To satisfy a request of size n from a list of free holes, the most common strategies are first fit, best fit, and worst fit.

• First fit. Search the list of holes until one is found that is big enough to satisfy the request, and assign a portion of that hole to the process. Whatever fraction of the hole is not needed by the request is left on the free list as a smaller hole. Subsequent requests may start searching either from the beginning of the list or from the point at which the previous search ended.
• Best fit. Allocate the smallest hole that is big enough to satisfy the request. This saves large holes for other process requests that may need them later, but the resulting unused portions of holes may be too small to be of any use, and will therefore be wasted. Keeping the free list sorted can speed up the process of finding the right hole.

• Worst fit. Allocate the largest hole available. This strategy produces the largest leftover hole, which may be more useful than the smaller leftover hole from a best-fit approach.

Fragmentation

As processes are loaded and removed from memory under first fit and best fit (strategies of the multiple-partition allocation scheme), the free memory space is broken into little pieces. External fragmentation exists when there is enough total memory space to satisfy a request but the available spaces are not contiguous: storage is fragmented into a large number of small holes. This fragmentation problem can be severe. In the worst case, we could have a block of free (wasted) memory between every two processes. If all these small pieces of memory were in one big free block instead, we might be able to run several more processes.

In the fixed-partition allocation scheme, physical memory is broken into fixed-sized blocks, and memory is allocated in units based on the block size. With this approach, the memory allocated to a process may be slightly larger than the requested memory. The difference between these two numbers is internal fragmentation: unused memory that is internal to a partition.

One solution to the problem of external fragmentation is compaction. The goal is to shuffle the memory contents so as to place all free memory together in one large block. We must also consider the cost of the compaction algorithm. The simplest compaction algorithm is to move all processes toward one end of memory; all holes move in the other direction, producing one large hole of available memory. This scheme can be expensive.
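To make the hole-management discussion concrete, here is a small illustrative sketch of a free list with the three placement strategies, splitting on allocation, and coalescing of adjacent holes on release. The representation (a sorted list of `(start, size)` pairs) and all function names are assumptions made for illustration, not a prescribed implementation:

```python
# Illustrative sketch of variable-partition allocation: a free list of
# holes, the three placement strategies, and merging of adjacent holes
# when a block is released.

def pick_hole(holes, n, strategy):
    """Return the index of the hole to use for a request of size n, or None."""
    candidates = [(i, size) for i, (start, size) in enumerate(holes) if size >= n]
    if not candidates:
        return None                              # request cannot be satisfied now
    if strategy == "first":
        return candidates[0][0]                  # first hole big enough
    if strategy == "best":
        return min(candidates, key=lambda c: c[1])[0]   # smallest adequate hole
    if strategy == "worst":
        return max(candidates, key=lambda c: c[1])[0]   # largest hole
    raise ValueError(strategy)

def allocate(holes, n, strategy="first"):
    """Carve n units out of a hole; return the start address, or None."""
    i = pick_hole(holes, n, strategy)
    if i is None:
        return None
    start, size = holes.pop(i)
    if size > n:                                 # split: leftover stays on the list
        holes.insert(i, (start + n, size - n))
    return start

def free(holes, start, size):
    """Return a block to the free list, merging with adjacent holes."""
    holes.append((start, size))
    holes.sort()
    merged = [holes[0]]
    for s, sz in holes[1:]:
        ps, psz = merged[-1]
        if ps + psz == s:                        # adjacent: coalesce into one hole
            merged[-1] = (ps, psz + sz)
        else:
            merged.append((s, sz))
    holes[:] = merged

# Two holes: address 0 (size 10) and address 50 (size 30).
holes = [(0, 10), (50, 30)]
assert allocate(holes, 8, "best") == 0      # best fit picks the size-10 hole
assert holes == [(8, 2), (50, 30)]          # the size-2 remainder is a new hole
free(holes, 0, 8)                           # release; merges with the (8, 2) hole
assert holes == [(0, 10), (50, 30)]
```

The size-2 leftover in this run is exactly the kind of unusable sliver that best fit tends to produce, as noted above.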
Another possible solution to the external-fragmentation problem is to allow a process to use non-contiguous blocks of physical memory (non-contiguous memory allocation). Two complementary techniques achieve this solution: paging and segmentation.
5. Paging
Paging is a memory-management scheme that:
• permits the physical address space of a process to be noncontiguous.
• avoids external fragmentation and the need for compaction by allocating memory in equal-sized blocks known as pages.
Basic Method
• The basic method for implementing paging involves breaking physical memory into fixed-sized blocks called frames and breaking logical memory into blocks of the same size called pages.
• When a process is to be executed, its pages are loaded into any available memory frames from their source (a file system or the backing store). The backing store is divided into fixed-sized blocks that are of the same size as the memory frames.
• Every address generated by the CPU (logical address) is divided into two parts: a page number (p) and a page offset (d). The page number is used as an index into a page table.
• A page table contains the base address of each page in physical memory, i.e. the corresponding frame. So, the page table maps the page number to a frame number, yielding a physical address which also has two parts: the frame number and the offset within that frame.
• This base address of the frame is combined with the page offset to define the physical memory address that is sent to the memory unit.
The paging model of memory is shown below:
Example for mapping of logical and physical memory: page 2 of the program's logical memory is currently stored in frame 3 of physical memory:
Page numbers, frame numbers, and frame sizes are determined by the computer architecture (hardware). The size of a page is typically a power of 2, varying between 512 bytes and 16 MB per page. For example, if the logical address size is 2^m and the page size is 2^n, then the high-order m-n bits of a logical address designate the page number and the remaining n bits represent the offset. Thus, the logical address is as follows, where p is an index into the page table and d is the displacement within the page.
Paging example for a 32-byte memory with 4-byte pages: In this example, a process has 16 bytes of logical memory, mapped in 4-byte pages into 32 bytes of physical memory. Here, n = 2 (page size 2^2) and m = 4 (logical address size 2^4).
▪ Logical address 0 is page 0, offset 0. Indexing into the page table, we find that page 0 is in frame 5. Thus, logical address 0 maps to physical address 20 [= (5 x 4) + 0]. (Logical address 0 = 0000; the high-order m-n = 4-2 = 2 bits give the page number, the remaining bits the offset.)
▪ Logical address 3 (page 0, offset 3) maps to physical address 23 [= (5 x 4) + 3]. (Logical address 3 = 0011)
▪ Logical address 4 is page 1, offset 0; according to the page table, page 1 is mapped to frame 6. Thus, logical address 4 maps to physical address 24 [= (6 x 4) + 0]. (Logical address 4 = 0100)
▪ Logical address 13 maps to physical address 9. (Logical address 13 = 1101.) Note: the following figure does not show this mapping in the page table, as index 3 of the page table contains some other page. To access this logical address, a page must be replaced in the page table so that it corresponds to the proper physical address. Different page replacement algorithms are available, which are discussed later.
When we use a paging scheme, we have no external fragmentation: any free frame can be allocated to a process that needs it. However, we may have some internal fragmentation. If the page size is 2,048 bytes, a process of 72,766 bytes will need 35 pages plus 1,086 bytes. It will be allocated 36 frames, resulting in internal fragmentation of 2,048 - 1,086 = 962 bytes. In the worst case, if a process needs n pages plus 1 byte, it will be allocated n+1 frames, resulting in internal fragmentation of almost an entire frame.
Small page sizes are therefore desirable. However, overhead is involved in each page-table entry, and this overhead is reduced as the size of the pages increases. Also, disk I/O is more efficient when the amount of data being transferred is larger. Today, pages typically are between 4 KB and 8 KB in size (and some systems support even larger page sizes). Usually, each page-table entry is 4 bytes long, but that size can vary as well. A 32-bit entry can point to one of 2^32 physical page frames. If the frame size is 4 KB (2^12 bytes), this translates to 2^44 bytes (or 16 TB) of addressable physical memory (32 + 12 = 44 bits of physical address space).
When a process arrives in the system to be executed, its size is examined in terms of pages. Each page of the process needs one frame. Thus, if the process requires n pages, at least n frames must be available in memory. If n frames are available, they are allocated to this arriving process from a free-frame list and inserted into that process's page table.
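Both the 32-byte translation example and the internal-fragmentation arithmetic above can be checked with a short sketch (the page table below assumes pages 0-3 map to frames 5, 6, 1, 2, consistent with the stated results):

```python
import math

PAGE_SIZE = 4                          # 2^n bytes per page, n = 2
page_table = {0: 5, 1: 6, 2: 1, 3: 2}  # page -> frame (assumed mapping)

def translate(logical):
    p, d = divmod(logical, PAGE_SIZE)  # high bits: page number; low bits: offset
    return page_table[p] * PAGE_SIZE + d

print(translate(0))    # 20: page 0 -> frame 5
print(translate(3))    # 23: same page, offset 3
print(translate(4))    # 24: page 1 -> frame 6
print(translate(13))   # 9:  page 3 -> frame 2

# Internal fragmentation for a 72,766-byte process with 2,048-byte pages:
frames = math.ceil(72766 / 2048)       # 36 frames allocated
print(frames * 2048 - 72766)           # 962 bytes wasted in the last frame
```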
(Figure given below shows Free frames (a) before allocation and (b) after allocation).
Processes are blocked from accessing any other process's memory because all of their memory requests are mapped through their page table. There is no way for them to generate an address that maps into any other process's memory space.
Since the operating system is managing physical memory, it must be aware of the allocation details of physical memory – which frames are allocated, which frames are available, the total number of frames, and so on. This information is generally kept in a data structure called a frame table. The frame table has one entry for each physical page frame, indicating whether the frame is free or allocated and, if it is allocated, to which page of which process or processes.
The operating system maintains a copy of the page table for each process, just as it maintains a copy of the instruction counter and register contents. This copy is used to translate logical addresses to physical addresses. In addition, the operating system must ensure that if a user makes a system call (to do I/O, for example) and provides an address as a parameter (a buffer, for instance), that address is mapped to produce the correct physical address.
Hardware Support for Paging
Page lookups must be done for every memory reference, and whenever a process gets swapped in or out of the CPU, its page table must be swapped in and out too, along with the instruction registers, etc. It is therefore appropriate to provide hardware support for this operation, in order to make it as fast as possible and to make process switches as fast as possible also. The hardware implementation of the page table can be done in several ways:
1. In the simplest case, the page table is implemented as a set of dedicated registers. The CPU dispatcher reloads these registers, just as it reloads the other registers. Instructions to load or modify the page-table registers are, of course, privileged, so that only the operating system can change them.
2.
The use of registers for the page table is satisfactory if the page table is reasonably small (for example, 256 entries). For machines that allow the page table to be very large (for example, 1 million entries), the use of fast registers to implement the page table is not feasible. Rather, the page table is kept in main memory, and a page-table base register (PTBR) points to the page table. Changing page tables requires changing only this one register, substantially reducing context-switch time.
The problem (disadvantage) with this approach is that every memory access now requires two memory references – one for the page-table entry (located via the PTBR) to find the frame number, and another to access the desired memory location. Thus, memory access is slowed by a factor of 2.
The standard solution to this problem is to use a special, small, fast-lookup hardware cache called a translation look-aside buffer (TLB), or associative memory. Each entry in the TLB consists of two parts: a key (or tag) and a value. The benefit of the TLB is that it can search an entire table for a key in parallel: when the associative memory is presented with an item, the item is compared with all keys simultaneously, and if it is found, the corresponding value field is returned. The search is fast, but the hardware is expensive, so the number of entries in a TLB is small, typically between 64 and 1,024. It is therefore used as a cache device.
The TLB is used with page tables in the following way:
1. The TLB contains only a few of the page-table entries.
2. When a logical address is generated by the CPU, its page number is presented to the TLB. If the page number is found, its corresponding frame number is used to access physical memory (a TLB hit).
3. If the page number is not in the TLB (a TLB miss), a memory reference is made to the page table (kept in main memory) to obtain the frame number for that page, which is then used to access the desired memory location. In addition, we add the page number and frame number to the TLB, so that they will be found quickly on the next reference.
4. If the TLB is already full of entries, the operating system must select one entry for replacement. Replacement policies range from least recently used (LRU) to random.
5. Some TLBs allow some entries to be wired down, which means that they cannot be removed from the TLB. Typically, TLB entries for kernel code are wired down.
Paging with a TLB is shown in the following figure.
The flow chart for a TLB miss is shown below:
The percentage of times that a particular page number is found in the TLB is called the hit ratio. An 80-percent hit ratio, for example, means that we find the desired page number in the TLB 80 percent of the time. If it takes 20 nanoseconds to search the TLB and 100 nanoseconds to access memory, then a mapped-memory access takes 120 nanoseconds when the page number is in the TLB (a TLB hit). If we fail to find the page number in the TLB (20 nanoseconds), then we must first access memory for the page-table entry (100 nanoseconds) and then access the desired byte in memory (100 nanoseconds), for a total of 220 nanoseconds. To find the effective memory-access time, we weight each case by its probability:
effective access time = 0.80 x 120 + 0.20 x 220 = 140 nanoseconds.
In this example, we suffer a 40-percent slowdown in memory-access time (from 100 to 140 nanoseconds). For a 98-percent hit ratio, we have
effective access time = 0.98 x 120 + 0.02 x 220 = 122 nanoseconds.
This increased hit ratio produces only a 22-percent slowdown in access time.
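The effective-access-time calculation is a weighted average of the hit and miss costs, which a one-line function makes explicit (the 120 ns and 220 ns defaults come from the text's numbers):

```python
def effective_access_time(hit_ratio, hit_ns=120, miss_ns=220):
    # Weight the hit and miss costs by their probabilities.
    return hit_ratio * hit_ns + (1 - hit_ratio) * miss_ns

print(round(effective_access_time(0.80), 1))  # 140.0 ns
print(round(effective_access_time(0.98), 1))  # 122.0 ns
```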
Structure of the Page Table / Types of Page Table
1. Hierarchical Paging: In this structure, the page table is divided into smaller pieces. One way is to use a two-level paging algorithm, in which the page table itself is also paged (as shown below).
Two-level paging example: A logical address (on a 32-bit machine, giving a 2^32 logical address space, with a 4K (2^12) page size) is divided into:
1. a page number consisting of 20 bits (2^32 / 2^12 = 2^20, or about 1 million, pages).
2. a page offset consisting of 12 bits.
Since the page table with its 2^20 entries is itself paged, the page number is further divided into:
1. a 10-bit page number (p1).
2. a 10-bit page offset (p2).
Thus, a logical address is as follows: p1 is an index into the outer page table, which identifies where in memory to find one page of an inner page table, and p2 is the displacement within that page of the outer page table, which finds a specific entry in that inner page table. This inner page-table entry identifies a particular frame in physical memory. (The remaining 12 bits of the 32-bit logical address are the offset within the 4K frame.)
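The 10/10/12 split described above can be shown with a few bit operations (the sample address is a made-up value chosen to exercise all three fields):

```python
def split(addr):
    d  = addr & 0xFFF            # low 12 bits: offset within the 4K page
    p2 = (addr >> 12) & 0x3FF    # next 10 bits: index into the inner table
    p1 = (addr >> 22) & 0x3FF    # high 10 bits: index into the outer table
    return p1, p2, d

print(split(0x00403005))  # (1, 3, 5)
```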
The address-translation method for this architecture is shown in the following figure. Because address translation works from the outer page table inward, this scheme is also known as a forward-mapped page table. More levels of page tables can exist, creating deeper hierarchical structures, depending on the system architecture (for instance, a 64-bit machine may find a two-level paging scheme inappropriate).
2. Hashed Page Tables: A common approach for handling address spaces larger than 32 bits is to use a hashed page table, with the hash value being the virtual page number. Each entry in the hash table contains a linked list of elements that hash to the same location (to handle collisions). Each element consists of three fields: (1) the virtual page number, (2) the value of the mapped page frame, and (3) a pointer to the next element in the linked list. The algorithm works as follows:
1. The virtual page number in the virtual address is hashed into the hash table.
2. The virtual page number is compared with field 1 of the first element in the linked list.
3. If there is a match, the corresponding page frame (field 2) is used to form the desired physical address.
4. If there is no match, subsequent entries in the linked list are searched for a matching virtual page number.
This scheme is shown in following figure:
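A minimal sketch of the hashed scheme: each bucket chains (virtual page number, frame) pairs so that pages hashing to the same slot coexist. The modulo hash and table size are illustrative assumptions, not from the text.

```python
NBUCKETS = 8
buckets = [[] for _ in range(NBUCKETS)]   # each bucket is a collision chain

def insert(vpn, frame):
    buckets[vpn % NBUCKETS].append((vpn, frame))

def lookup(vpn):
    # Hash, then walk the chain comparing virtual page numbers (steps 1-4).
    for v, f in buckets[vpn % NBUCKETS]:
        if v == vpn:
            return f
    return None                           # no match anywhere in the chain

insert(3, 7)
insert(11, 2)        # 11 % 8 == 3: collides with vpn 3, chained in-bucket
print(lookup(11))    # 2
print(lookup(5))     # None
```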
3. Inverted Page Tables: Usually, each process has an associated page table. The page table has one entry for each page that the process is using, so each page table may consist of millions of entries. These tables may consume large amounts of physical memory.
Another approach is to use an inverted page table. Instead of a table listing all of the pages for a particular process, an inverted page table lists all of the pages currently loaded in memory, for all processes (i.e. there is one entry per frame instead of one entry per page).
A simple example illustrates this method: each virtual address in the system consists of a triple <process-id, page-number, offset>. Each inverted page-table entry is a pair <process-id, page-number>, where the process-id assumes the role of the address-space identifier. When a memory reference occurs, part of the virtual (logical) address, consisting of <process-id, page-number>, is presented to the memory subsystem. The inverted page table is then searched for a match. If a match is found, say at entry i, then the physical address <i, offset> is generated. If no match is found, then an illegal address access has been attempted.
Although this scheme decreases the amount of memory needed to store each page table, it increases the amount of time needed to search the table when a page reference occurs, as it may be necessary to search the entire table to find the desired page or to discover that it is not there. Also, systems that use inverted page tables have difficulty implementing shared memory. Shared memory is used to map multiple logical pages to a common physical frame, but inverted page tables map each frame to one and only one process.
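A sketch of the inverted scheme: one entry per physical frame holding the (pid, page) that occupies it, with the matching entry's index serving as the frame number. The table contents, process IDs, and frame size are illustrative assumptions.

```python
FRAME_SIZE = 4096
inverted = [("P1", 0), ("P2", 0), ("P1", 1), None]   # frame i -> (pid, page)

def translate(pid, page, offset):
    # Linear search over all frames - the search cost the text warns about.
    for i, entry in enumerate(inverted):
        if entry == (pid, page):
            return i * FRAME_SIZE + offset           # physical <i, offset>
    raise ValueError("illegal address access")

print(translate("P1", 1, 10))  # entry 2 matches: 2*4096 + 10 = 8202
```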
6. Segmentation
Paging separates the user's view of memory (logical memory) from the actual physical memory. Segmentation is a memory-management scheme that implements the user view of memory. Users prefer to view memory as a collection of variable-sized segments such as the main program, routines, variables, stacks, etc., with no necessary ordering among them. So, the logical address space can be viewed as a collection of segments, each segment having a name and a length. The user specifies each logical address as a segment name and an offset within the segment. For simplicity of implementation, segments are numbered and are referred to by a segment number rather than by a segment name. Thus, a logical address consists of a two-tuple: <segment-number, offset>. (In the paging scheme, by contrast, the user specifies only a single address, which is partitioned by the hardware into a page number and an offset, all invisible to the programmer.)
(Figure: user view of a program as segments, e.g. main program = segment 0, stack = segment 1, variables = segment 2.)
Segmentation Hardware
To keep track of each segment, a segment table is maintained by the OS. Each entry in the segment table consists of two fields: the segment base and the segment limit. The segment base contains the starting address of the segment in physical memory, and the segment limit specifies the length of the segment. A logical address consists of two parts: a segment number, s, and an offset into that segment, d. The segment number is used as an index into the segment table. The offset d of the logical address must be between 0 and the segment limit. If it is not, an invalid-address error is generated (a trap to the OS).
When an offset is legal, it is added to the segment base to produce the address in physical memory of the desired byte. The segment table is thus essentially an array of base-limit register pairs.
As an example, consider the situation shown in the figure given below. We have five segments, numbered 0 through 4. The segments are stored in physical memory as shown. The segment table has a separate entry for each segment, giving the beginning address of the segment in physical memory (the base) and the length of that segment (the limit). For example, segment 2 is 400 bytes long and begins at location 4300. Thus, a reference to byte 53 of segment 2 is mapped onto location 4300 + 53 = 4353. A reference to segment 3, byte 852, is mapped to 3200 (the base of segment 3) + 852 = 4052. A reference to byte 1222 of segment 0 would result in a trap to the operating system, as this segment is only 1,000 bytes long.
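The limit check and base addition above can be sketched directly. The text fixes segment 2 as (base 4300, limit 400), segment 3's base as 3200, and segment 0's limit as 1,000; the remaining bases and limits below are illustrative assumptions filled in to make the table complete.

```python
# s -> (base, limit); values not given in the text are assumed
seg_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 400),
             3: (3200, 1100), 4: (4700, 1000)}

def translate(s, d):
    base, limit = seg_table[s]
    if not 0 <= d < limit:
        raise ValueError("trap: invalid address")  # offset outside segment
    return base + d                                # legal: base + offset

print(translate(2, 53))    # 4300 + 53 = 4353
print(translate(3, 852))   # 3200 + 852 = 4052
```

A reference such as translate(0, 1222) raises the trap, since segment 0's limit is 1,000.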