Chapter 6: CPU Scheduling

Scheduling
• Always want to have CPU working
• Usually many processes in ready queue
– Ready to run on CPU
– Consider a single CPU here
• Need strategies
– Selecting next process to run
– For allocating CPU time
– What happens after a process does a system call?
• Short-term scheduling
– Must not take much CPU time

Preemptive vs. Non-preemptive
• Non-preemptive scheduling
– A new process is selected to run either
• when a process terminates, or
• when an explicit system request causes a wait state (e.g., I/O or wait for child)
• Preemptive scheduling
– A new process may also be selected to run when
• an interrupt occurs
• a new process becomes ready
Performance Criteria
• CPU utilization
– Percentage of time that the CPU is busy (and not idle), over some period of time
• Throughput
– Number of jobs completed per unit time
• Turnaround time
– Time interval from submission of a process until completion of the process
• Waiting time
– Sum of the time periods spent in the ready queue
• Response time
– Time from submission until first output/input
– May approximate by time from submission until first access to CPU

Scheduling Algorithms
• First-Come, First-Served (FCFS)
– Complete the jobs in order of arrival
• Shortest Job First (SJF)
– Complete the job with the shortest next CPU requirement (e.g., burst)
– Provably optimal w.r.t. average waiting time
• Priority
– Processes have a priority number
– Allocate CPU to the process with the highest priority
• Round-Robin (RR)
– Each process gets a small unit of time on the CPU (time quantum or time slice)
– For now, assume a FIFO queue of processes
FCFS: First-Come First-Served
• Implement with a FIFO ready queue
• Major disadvantage can be long wait times
• Example
– Draw Gantt chart
– Compute the average wait time for processes with the following burst times and queue order:
• P1: 20, P2: 12, P3: 8, P4: 16, P5: 4

Solution: Gantt Chart Method
P1 (0–20) | P2 (20–32) | P3 (32–40) | P4 (40–56) | P5 (56–60)
• Waiting times:
– P1: 0
– P2: 20
– P3: 32
– P4: 40
– P5: 56
• Average wait time: (0 + 20 + 32 + 40 + 56) / 5 = 29.6
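The FCFS arithmetic above can be checked with a few lines of Python (a sketch; the function name is ours, not from the slides):

```python
def fcfs_wait_times(bursts):
    """FCFS wait times: each process waits for the total burst time
    of the processes ahead of it in the FIFO ready queue."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)      # wait until all earlier jobs finish
        elapsed += burst
    return waits

bursts = [20, 12, 8, 16, 4]        # P1..P5 in queue order
waits = fcfs_wait_times(bursts)    # [0, 20, 32, 40, 56]
average = sum(waits) / len(waits)  # 29.6
```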
SJF: Shortest Job First
• The job with the shortest next CPU burst time is selected
• Example (from before):
– CPU job burst times:
• P1: 20, P2: 12, P3: 8, P4: 16, P5: 4
– Draw Gantt chart and compute the average waiting time given SJF CPU scheduling

SJF Solution
P5 (0–4) | P3 (4–12) | P2 (12–24) | P4 (24–40) | P1 (40–60)
• Waiting times:
– P1: 40
– P2: 12
– P3: 4
– P4: 24
– P5: 0
• Average wait time: (40 + 12 + 4 + 24 + 0) / 5 = 16
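A small Python sketch (function name is ours) reproduces the SJF wait times by running jobs in ascending order of burst length:

```python
def sjf_wait_times(bursts):
    """Non-preemptive SJF: run jobs shortest-first; return each job's
    wait time in the original P1..Pn order."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits, elapsed = [0] * len(bursts), 0
    for i in order:
        waits[i] = elapsed         # job i starts after all shorter jobs
        elapsed += bursts[i]
    return waits

waits = sjf_wait_times([20, 12, 8, 16, 4])  # [40, 12, 4, 24, 0]
average = sum(waits) / len(waits)           # 16.0
```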
SJF
• Provably shortest average wait time
• However, requires future knowledge
• May have an estimate, to predict the next CPU burst
– E.g., based on the last CPU burst and a number summarizing the history of CPU bursts:
τn+1 = α · t + (1 − α) · τn
– where t is the last CPU burst value, α is a constant indicating how much to base the estimate on the last CPU burst, and τn is the last estimate

Example Estimate
• Say, α = 0.5
• τ0 = 10
• CPU burst, t = 6
• What is the estimate of the next CPU burst?
τ1 = 0.5 · 6 + 0.5 · 10 = 8
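The exponential average is one line of code; this sketch reproduces the example values:

```python
def next_burst_estimate(alpha, t, tau):
    """Exponential average: tau_next = alpha * t + (1 - alpha) * tau,
    where t is the last observed burst and tau the previous estimate."""
    return alpha * t + (1 - alpha) * tau

tau1 = next_burst_estimate(0.5, 6, 10)  # 0.5*6 + 0.5*10 = 8.0
```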
Priority Scheduling
• Have to decide on a numbering scheme
– 0 can be highest or lowest
• FCFS as priority: all processes have equal priorities
• SJF as priority: priority is the reciprocal of the predicted CPU burst
• Priorities can be
– Internal
• according to O/S factors (e.g., memory requirements)
– External: e.g., user importance
– Static: fixed for the duration of the process
– Dynamic
• Changing during processing
• E.g., as a function of amount of CPU usage, or length of time waiting (a solution to indefinite blocking or starvation)

Which Scheduling Algorithms Can Be Preemptive?
• FCFS (First-Come, First-Served)
– Non-preemptive
• SJF (Shortest Job First)
– Can be either
– Choice when a new job arrives
– Can preempt or not
• Priority
– Can be either
– Choice when a process's priority changes or when a higher-priority process arrives
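Priority dispatch is commonly implemented with a heap; a minimal sketch, assuming the lower-number-is-higher-priority convention (the slides note this is a design choice) and FCFS tie-breaking by arrival position. The job list is hypothetical, for illustration only:

```python
import heapq

def priority_order(jobs):
    """jobs: list of (name, priority); lower number = higher priority here.
    Returns the dispatch order, breaking ties by arrival position (FCFS)."""
    heap = [(prio, idx, name) for idx, (name, prio) in enumerate(jobs)]
    heapq.heapify(heap)
    n = len(heap)
    return [heapq.heappop(heap)[2] for _ in range(n)]

# Hypothetical priorities, for illustration only:
order = priority_order([("P1", 3), ("P2", 1), ("P3", 2)])  # ['P2', 'P3', 'P1']
```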
RR (Round Robin) Scheduling
• Give each process a unit of time (time slice, quantum) of execution on CPU
• Then move to the next process
• Continue until all processes are completed
• Example
– CPU job burst times & order in queue
• P1: 20, P2: 12, P3: 8, P4: 16, P5: 4
– Draw Gantt chart, and compute average wait time (quantum = 4)

Solution
Gantt chart (quantum = 4):
P1 (0–4) | P2 (4–8) | P3 (8–12) | P4 (12–16) | P5 (16–20, completes) |
P1 (20–24) | P2 (24–28) | P3 (28–32, completes) | P4 (32–36) | P1 (36–40) |
P2 (40–44, completes) | P4 (44–48) | P1 (48–52) | P4 (52–56, completes) | P1 (56–60, completes)
• Waiting times:
– P1: 16 + 12 + 8 + 4 = 40
– P2: 4 + 16 + 12 = 32
– P3: 8 + 16 = 24
– P4: 12 + 16 + 8 + 4 = 40 (it also waits 48–52, before its final slice)
– P5: 16
• Average wait time: (40 + 32 + 24 + 40 + 16) / 5 = 30.4
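Simulating the schedule confirms the Gantt chart (a sketch; all jobs arrive at time 0). Note that P4's wait works out to 40, since it also waits from 48 to 52 before its final slice, giving an average of 30.4:

```python
from collections import deque

def rr_wait_times(bursts, quantum):
    """Round-robin simulation, all jobs arriving at time 0.
    Wait time = completion time - burst time."""
    remaining = list(bursts)
    finish = [0] * len(bursts)
    queue = deque(range(len(bursts)))
    clock = 0
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])
        clock += run
        remaining[i] -= run
        if remaining[i]:
            queue.append(i)      # preempted: back to the tail
        else:
            finish[i] = clock    # completed
    return [finish[i] - bursts[i] for i in range(len(bursts))]

waits = rr_wait_times([20, 12, 8, 16, 4], 4)  # [40, 32, 24, 40, 16]
average = sum(waits) / len(waits)             # 30.4
```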
Calculate Other Measurements
• Response time
– Estimate by time from job submission to first CPU dispatch
– Assume all jobs submitted at the same time, in the order given
• Turnaround time
– Time interval from submission of a process until completion of the process

FCFS: P1 (0–20) | P2 (20–32) | P3 (32–40) | P4 (40–56) | P5 (56–60)
SJF: P5 (0–4) | P3 (4–12) | P2 (12–24) | P4 (24–40) | P1 (40–60)
RR (quantum = 4): P1 | P2 | P3 | P4 | P5 | P1 | P2 | P3 | P4 | P1 | P2 | P4 | P1 | P4 | P1, with slice boundaries at 4, 8, 12, …, 60

Response Time Calculations

Job      FCFS   SJF   RR
P1        0     40     0
P2       20     12     4
P3       32      4     8
P4       40     24    12
P5       56      0    16
Average  29.6   16     8
Turnaround Time Calculations

Job      FCFS   SJF   RR
P1       20     60    60
P2       32     24    44
P3       40     12    32
P4       56     40    56
P5       60      4    20
Average  41.6   28    42.4

Assume all processes are submitted at the same time.

Performance Characteristics of Scheduling Algorithms
• Different algorithms will have different performance characteristics
• RR (Round Robin)
– Good average response time
• Important for interactive or timesharing systems
• SJF
– Best average waiting time
– Some overhead w.r.t. estimates of CPU burst length
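Both tables can be derived mechanically from the Gantt charts: response time is a job's first dispatch, turnaround its completion (all submissions at time 0). A sketch, with the chart encoded as (job, start, end) slices:

```python
def response_and_turnaround(slices):
    """slices: a Gantt chart as (job, start, end) tuples in time order.
    Returns ({job: first dispatch time}, {job: completion time})."""
    response, turnaround = {}, {}
    for job, start, end in slices:
        response.setdefault(job, start)  # first slice only
        turnaround[job] = end            # last slice wins
    return response, turnaround

fcfs = [("P1", 0, 20), ("P2", 20, 32), ("P3", 32, 40),
        ("P4", 40, 56), ("P5", 56, 60)]
resp, turn = response_and_turnaround(fcfs)
# resp["P2"] == 20, turn["P4"] == 56
```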
Context Switching Issues
• These calculations have not taken context-switch duration into account
– In general, the context switch will take time
– Just like the CPU burst of a process takes time
– Response time, wait time, etc. will be affected by context-switch time
• RR (Round Robin) & quantum duration
– The smaller the time quantum, the better the average response time, but the more system overhead
– Want the quantum large compared to the context-switch time

Example
• Calculate average wait time for RR (round robin) scheduling, for
– Processes: P1: 24, P2: 4, P3: 4
– Assume this arrival order
– Quantum = 4; context switch time = 1
Solution: Average Wait Time With Context Switch Time

P1 (0–4) | cs | P2 (5–9) | cs | P3 (10–14) | cs | P1 (15–19) | cs | P1 (20–24) | cs | P1 (25–29) | cs | P1 (30–34) | cs | P1 (35–39)
(cs = 1-unit context switch)
• P1: 0 + 11 + 4 = 15 (initial 0, then waiting 4–15, plus the four 1-unit switches before its later slices)
• P2: 5
• P3: 10
• Average wait time: (15 + 5 + 10) / 3 = 10

(This is a case for dynamically varying the time quantum, as in Mach.)

Multi-level Ready Queues
• Multiple ready queues
– For different types of processes (e.g., system vs. user processes)
– For different priority processes (e.g., Mach)
• Each queue can
– Have a different scheduling algorithm
– Receive a different amount of CPU time
– Have movement of processes to another queue (feedback)
• E.g., if a process uses too much CPU time, put it in a lower priority queue
• If a process is getting too little CPU time, put it in a higher priority queue
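Returning to the context-switch example: a round-robin simulator can charge a fixed switch cost before every dispatch after the first (a sketch; all jobs arrive at time 0, and wait time is completion minus burst):

```python
from collections import deque

def rr_wait_with_switch(bursts, quantum, switch):
    """Round-robin with a fixed context-switch cost paid before every
    dispatch except the first. Wait time = completion - burst."""
    remaining = list(bursts)
    finish = [0] * len(bursts)
    queue = deque(range(len(bursts)))
    clock, first = 0, True
    while queue:
        i = queue.popleft()
        if not first:
            clock += switch   # context switch before this slice
        first = False
        run = min(quantum, remaining[i])
        clock += run
        remaining[i] -= run
        if remaining[i]:
            queue.append(i)
        else:
            finish[i] = clock
    return [finish[i] - bursts[i] for i in range(len(bursts))]

waits = rr_wait_with_switch([24, 4, 4], 4, 1)  # [15, 5, 10] -> average 10
```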
Multiprocessor Scheduling
• When a computer has more than one processor, need a method of dispatching processes
• Types of ready queues
– Local: dispatch to a specific processor
– Global: dispatch to any processor ("load sharing")
• Processor/process relationship
– Run on only a specific processor (e.g., if it must use a device on that processor's private bus)
– Run on any processor
• Symmetric: each processor does its own scheduling
• Master/slave
– Master processor dispatches processes to slaves

Synchronization Issues
• Symmetric multiprocessing involves synchronization of access to the global ready queue
– E.g., only one processor must execute a job at one time
• Processors: CPU1, CPU2, CPU3, …
• When a processor (e.g., CPU1) accesses the ready queue
– All other processors (CPU2, CPU3, …) must wait, and be denied access to the ready queue
– The accessing processor (e.g., CPU1) will remove a process from the ready queue, and dispatch it on itself
– Then that processor will make the ready queue available for use by the other CPUs (CPU2, CPU3, …)
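The mutual exclusion described above maps directly onto a lock around the shared queue; a minimal Python sketch, with threads standing in for processors and all names ours:

```python
import threading
from collections import deque

class GlobalReadyQueue:
    """Global ready queue shared by all processors: a lock ensures only
    one processor manipulates it at a time, so no process is dispatched
    twice."""
    def __init__(self, processes):
        self._queue = deque(processes)
        self._lock = threading.Lock()

    def dispatch(self):
        """Atomically remove and return the next process (None if empty);
        other processors block on the lock meanwhile."""
        with self._lock:
            return self._queue.popleft() if self._queue else None

rq = GlobalReadyQueue(["P1", "P2", "P3"])
# each CPU's scheduler loop would do: p = rq.dispatch(); if p: run p
```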
Pre-emptive Scheduling & Operating System Design
• With pre-emptive CPU scheduling, a new process can run when an interrupt occurs
• What if thread A was in the middle of updating data structures, and was put back in the ready queue?
– Either on disk or in shared memory
• If thread B also accesses the same data structures
– It may access the data structures in an inconsistent state
• Need mechanisms for cooperative data access
– Both in the kernel and by multiple processes/threads
• Kernel, in general, needs to handle interrupts
– Don't want to lose interrupts
• Real-time & multi-processor issues
– May need preemption in the kernel itself