2. TOPICS
■ CPU Scheduling: Process Concept
■ Scheduling Concepts
■ Types of Schedulers
■ Process State Diagram
■ Scheduling Algorithms
■ Algorithms Evaluation
■ System calls for Process Management
■ Multiple Processor Scheduling
■ Concept of Threads
3. ■ Memory Management
■ Different Memory Management Techniques
■ Partitioning, Swapping, Segmentation, Paging, Paged
Segmentation, Comparison of these techniques,
■ Techniques for supporting the execution of large programs
■ Overlay, Dynamic Linking and Loading
■ Virtual Memory – Concept, Implementation by Demand
Paging etc.
4. PROGRAM
A program is a piece of code which may be a single line or millions of lines.
A computer program is usually written by a computer programmer in a
programming language.
For example, here is a simple program written in C programming language −
#include<stdio.h>
int main ()
{
printf("Hello, World!\n");
return 0;
}
A computer program is a collection of instructions that performs a specific
task when executed by a computer.
5. Process
A process is defined as an entity which represents the basic unit of work
to be implemented in the system.
To put it in simple terms, we write our computer programs in a text file
and when we execute this program, it becomes a process which performs
all the tasks mentioned in the program
A process is basically a program in execution. The execution of a process
must progress in a sequential fashion.
When a program is loaded into the memory and it becomes a process,
it can be divided into four sections ─ stack, heap, data and text.
6. The following image shows a simplified layout of a
process inside main memory
1. Stack: The process stack contains temporary data
such as function parameters, return addresses, and
local variables.
2. Heap: This is the memory allocated to a process
dynamically, during its run time.
3. Data: This holds the global and static variables.
4. Text: This contains the executable code, along with
the current activity represented by the value of the
program counter and the contents of the processor's
registers.
7. When we compare a program with a process, we can conclude that a
process is a dynamic instance of a computer program.
A part of a computer program that performs a well-defined task is
known as an algorithm.
A collection of computer programs, libraries and related data are
referred to as software.
8. Process Control Block(PCB)
The Process Control Block is also known as the Task Control Block (TCB).
A process control block (PCB) contains information about the process, i.e. registers,
quantum, priority, etc. The process table is an array of PCBs; logically, it contains a
PCB for each of the current processes in the system.
While creating a process the operating system performs several operations. To identify
the processes, it assigns a process identification number (PID) to each process. As the
operating system supports multi-programming, it needs to keep track of all the processes.
For this task, the process control block (PCB) is used to track the process’s execution
status.
A process control block (PCB) is a data structure used by computer operating systems to
store all the information about a process. It is also known as a process descriptor. When a
process is created (initialized or installed), the operating system creates a corresponding
process control block.
9. Process state – It stores the respective state of the process.
Process number – Every process is assigned with a unique id known as
process ID or PID which stores the process identifier.
Program counter – It stores the counter which contains the address of the
next instruction that is to be executed for the process.
Registers – These are the CPU registers, which include the accumulator, base and
index registers, and general-purpose registers.
Memory limits – This field contains information about the memory-management
system used by the operating system. This may include the page tables, segment
tables, etc.
Open files list – This information includes the list of files opened for a
process.
11. Context Switching
A context switch is the mechanism to store and restore the state or context of a
CPU in Process Control block so that a process execution can be resumed from the
same point at a later time.
Whenever the CPU shifts from one process to another, it needs to save the context
of the running process so that it can be loaded again when that process gets the CPU
next time. Therefore, this context is represented in the PCB of the process. Switching the
CPU from one process to another process requires performing a state save of the
current process and a state restore of a different process. This task is known as
context switch.
When the scheduler switches the CPU from executing one process to executing
another, the state from the current running process is stored into the process
control block. After this, the state for the process to run next is loaded from its own
PCB and used to set the PC, registers, etc. At that point, the second process can
start executing.
12. When the process is switched, the following information is stored for later use.
1. Program Counter
2. Scheduling information
3. Memory limit
4. Currently used registers
5. Changed process state
6. I/O State information
7. Accounting information
13. Scheduling Concepts
Process scheduling is the activity of the process manager that handles
the removal of the running process from the CPU and the selection of
another process on the basis of a particular strategy.
Process scheduling is an essential part of Multiprogramming operating
systems. Such operating systems allow more than one process to be loaded
into the executable memory at a time and the loaded process shares the
CPU using time multiplexing.
14. Process Scheduling Queues
The OS maintains all PCBs in Process Scheduling Queues. The OS maintains a
separate queue for each of the process states and PCBs of all processes in the
same execution state are placed in the same queue.
The Operating System maintains the following important process scheduling
queues −
Job queue − This queue keeps all the processes in the system.
Ready queue − This queue keeps the set of all processes residing in main memory,
ready and waiting to execute. A new process is always put in this queue.
Device queues − The processes which are blocked due to the unavailability of an I/O
device constitute this queue.
15. Types of Schedulers
A scheduler is a type of system software that handles process scheduling (it manages
processes).
There are three types of schedulers:
1. Long Term Scheduler (LTS)
2. Short Term Scheduler (STS)
3. Medium Term Scheduler (MTS)
The long term scheduler regulates the degree of multiprogramming: it selects
processes from the queue and loads them into memory for execution.
16. Long Term Scheduler
Long-term scheduling involves selecting the processes from the storage pool in
the secondary memory and loading them into the ready queue in the main
memory for execution. This is handled by the long-term scheduler or job
scheduler. The long-term scheduler controls the degree of multiprogramming.
On some systems, the long-term scheduler may be absent or minimal; time-sharing
operating systems, for example, have no long-term scheduler. The long-term scheduler
is used when a process changes state from new to ready.
17. Short Term Scheduler
It is also called the CPU scheduler or dispatcher.
The STS selects a process from among the processes that are ready to execute and
allocates the CPU to one of them.
That means the STS/CPU scheduler makes the decision of which process to execute
next.
The short-term scheduler executes much more frequently than the long-term
scheduler as a process may execute only for a few milliseconds.
The choices of the short term scheduler are very important. If it selects a process
with a long burst time, then all the processes after that will have to wait for a long
time in the ready queue. This is known as starvation and it may happen if a wrong
decision is made by the short-term scheduler.
The main goal of short term scheduler is to boost the system performance
according to set criteria.
18. Medium Term Scheduler
Medium-term scheduling involves swapping out a process from main memory.
The process can be swapped in later from the point it stopped executing. This can
also be called as suspending and resuming the process and is done by the
medium-term scheduler.
This is helpful in reducing the degree of multiprogramming. Swapping is also
useful to improve the mix of I/O bound and CPU bound processes in the memory.
A running process can become suspended if it makes an I/O request. A
suspended process can't make any progress towards completion. In order to
remove the process from memory and make space for other processes, the
suspended process should be moved to secondary storage.
20. Process States
1. New - A program which is about to be picked up by the operating system into
main memory.
2. Ready - Whenever a process is created, it directly enters the ready state, in
which it waits for the CPU. The processes which are ready for execution and reside
in main memory are called ready-state processes.
3. Running - One of the processes from the ready state will be chosen by the
operating system depending upon the scheduling algorithm. The process that is
currently using the CPU is in the running state.
4. Wait - When a process waits for a certain resource or for input/output, the
operating system moves it to the wait state.
21. Termination state - When a process finishes its execution, it enters the
termination state.
Suspended wait - If a process in the wait state requires a resource that is
unavailable, the OS removes that process from main memory and puts it in secondary
memory. Such processes complete their execution once main memory becomes
available and their wait is finished.
22. Scheduling Algorithm
Scheduling algorithms schedule processes on the processor in an efficient
and effective manner. This scheduling is done by a Process Scheduler. It
maximizes CPU utilization by increasing throughput.
These algorithms are either non-preemptive or preemptive. Non-preemptive
algorithms are designed so that once a process enters the running state, it cannot
be preempted until it terminates or blocks, whereas preemptive scheduling is
priority based: the scheduler may preempt a low-priority running process at any
time when a high-priority process enters the ready state.
23. There are several popular process scheduling algorithms, which we are going to
discuss −
1. First Come First Serve (FCFS) is an operating system scheduling algorithm that
executes queued requests and processes in order of their arrival. It is the simplest
CPU scheduling algorithm. In this type of algorithm, the process which requests the
CPU first gets the CPU first. This is managed with a FIFO queue.
24. 2. Shortest Job First SJF-
Shortest Job First gives the minimum average waiting time among all scheduling
algorithms.
It is a greedy algorithm.
It may cause starvation of longer processes if shorter processes keep arriving. This
problem can be solved using the concept of ageing.
It is often impractical, as the operating system may not know burst times in advance
and therefore cannot sort the processes by them. While it is not possible to predict
execution time exactly,
several methods can be used to estimate the execution time for a job, such as a
weighted average of previous execution times. SJF can be used in specialized
environments where accurate estimates of running time are available.
25. 3. Priority Scheduling -
Priority Scheduling is a method of scheduling processes that is based on
priority. In this algorithm, the scheduler selects the tasks to work as per the
priority.
The processes with higher priority should be carried out first, whereas jobs
with equal priorities are carried out on a round-robin or FCFS basis. Priority
depends upon memory requirements, time requirements, etc.
4. Round Robin - The name of this algorithm comes from the round-robin
principle, where each person gets an equal share of something in turns. It is
the oldest, simplest scheduling algorithm, which is mostly used for
multitasking.
In Round-robin scheduling, each ready task runs turn by turn only in a cyclic
queue for a limited time slice. This algorithm also offers starvation free
execution of processes.
26. Multilevel queue scheduling
Multilevel queue scheduling is used when processes in the ready queue can be
divided into different classes where each class has its own scheduling needs.
For instance, foreground or interactive processes and background or batch
processes are commonly divided.
Advantages of Multilevel Queue Scheduling
With the help of this scheduling we can apply different kinds of scheduling to
different kinds of processes:
For System Processes: First Come First Serve (FCFS) Scheduling.
For Interactive Processes: Shortest Job First (SJF) Scheduling.
For Batch Processes: Round Robin (RR) Scheduling.
27. System calls for process management
System calls are usually made when a process in user mode requires access to a
resource; it then requests the kernel to provide the resource via a system call.
The interface between a process and an operating system is provided by system calls. In
general, system calls are available as assembly language instructions. They are also
included in the manuals used by the assembly level programmers.
Process Control
fork() - To create a child process
exec() - Loads the selected program into the memory.
exit()- Terminates the process.
wait() - Makes a process wait until another process (its child) completes execution.
getpid() - Returns the process ID of the calling process.
getppid() - Returns the process ID of the parent.
28. Algorithm Evaluation
There are many scheduling algorithms, each with its own parameters. As a result, selecting an
algorithm can be difficult. The first problem is defining the criteria to be used in selecting an
algorithm. Criteria are often defined in terms of CPU utilization, response time, or throughput.
To select an algorithm, we must first define the relative importance of these measures. Our
criteria may include several measures, such as:
Maximizing CPU utilization under the constraint that the maximum response time is 1
second
Maximizing throughput such that turnaround time is (on average) linearly
proportional to total execution time.
Once the selection criteria have been defined, we want to evaluate the algorithms
under consideration. We next describe the various evaluation methods we can use.
29. Threads
A thread is the smallest executable unit of a process. For example, when you run a
notepad program, the operating system creates a process and starts executing the main
thread of that process.
A process can have multiple threads. Each thread has its own task and its own path of
execution within the process. For example, in a notepad program, one thread may take
user input while another thread prints a document.
All threads of the same process share memory of that process. As threads of the same
process share the same memory, communication between the threads is fast.
A thread is the smallest sequence of programmed instructions that can be managed
independently by a scheduler.
31. A thread comprises its own:
Thread ID – A unique ID for a thread in execution.
Program counter – Keeps track of the next instruction to execute.
Register set – The thread's active working variables.
Stack – The thread's execution history (can be used for debugging).
Different Types of Thread Models
Also, there are two different types of processes:
Single-threaded processes
Multi-threaded processes
Types of Threads
Threads are implemented in 2 ways –
User-level threads - User-managed threads.
Kernel-level threads - OS-managed threads acting in the kernel, the operating
system core.
32. User-Level Thread
The user-level threads are implemented by users and the kernel is not aware
of the existence of these threads. It handles them as if they were single-
threaded processes. User-level threads are small and much faster than
kernel level threads. They are represented by a program counter(PC), stack,
registers and a small process control block. Also, there is no kernel
involvement in synchronization for user-level threads.
Kernel-Level Threads
Kernel-level threads are handled by the operating system directly and the
thread management is done by the kernel. The context information for the
process as well as the process threads is all managed by the kernel. Because
of this, kernel-level threads are slower than user-level threads.