Process Synchronization (Galvin)
Outline
 CHAPTER OBJECTIVES
 To introduce the critical-section problem, whose solutions can be used to ensure the consistency of shared data.
 To present both software and hardware solutions of the critical-section problem.
 To examine several classical process-synchronization problems.
 To explore several tools that are used to solve process synchronization problems.
 BACKGROUND
 THE CRITICAL SECTION PROBLEM
 PETERSON'S SOLUTION
 SYNCHRONIZATION HARDWARE
 MUTEX LOCKS
 SEMAPHORES
o Semaphore Usage
o Semaphore Implementation
o Deadlocks and Starvation
o Priority Inversion
 CLASSIC PROBLEMS OF SYNCHRONIZATION
o The Bounded-Buffer Problem
o The Readers–Writers Problem
o The Dining-Philosophers Problem
 MONITORS
o Monitor Usage
o Dining-Philosophers Solution
o Using Monitors
o Implementing a Monitor
o Using Semaphores
o Resuming Processes within a Monitor
 SYNCHRONIZATION EXAMPLES
o Synchronization in Windows
o Synchronization in Linux
o Synchronization in Solaris
o Pthreads Synchronization
 ALTERNATIVE APPROACHES
o Transactional Memory
o OpenMP
o Functional Programming Languages
Contents
A cooperating process is one that can affect or be affected by other processes executing in the system. Cooperating processes can either directly share a logical address space (that is, both code and data) or be allowed to share data only through files or messages. The former case is achieved through the use of threads, discussed in Chapter 4. Concurrent access to shared data may result in data inconsistency, however. In this chapter, we discuss various mechanisms to ensure the orderly execution of cooperating processes that share a logical address space, so that data consistency is maintained.
BACKGROUND
 We’ve already seen that processes can execute concurrently or in parallel. Section 3.2.2 introduced the role of process scheduling and described how the CPU scheduler switches rapidly between processes to provide concurrent execution. This means that one process may only partially complete execution before another process is scheduled. In fact, a process may be interrupted at any point in its instruction
stream, and the processing core may be assigned to execute instructions of another process. Additionally, Section 4.2 introduced parallel execution, in which two instruction streams (representing different processes) execute simultaneously on separate processing cores. In this chapter, we explain how concurrent or parallel execution can contribute to issues involving the integrity of data shared by several processes.
 In Chapter 3, we developed a model of a system consisting of cooperating sequential processes or threads, all running asynchronously and possibly sharing data. We illustrated this model with the producer–consumer problem, which is representative of operating systems. Specifically, in Section 3.4.1, we described how a bounded buffer could be used to enable processes to share memory.
 Coming to the bounded-buffer problem, as we pointed out, our original solution allowed at most BUFFER_SIZE − 1 items in the buffer at the same time. Suppose we want to modify the algorithm to remedy this deficiency. One possibility is to add an integer variable counter, initialized to 0. counter is incremented every time we add a new item to the buffer and is decremented every time we remove one item from the buffer. The code for the producer and consumer processes can be modified as follows:
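The code block itself did not survive extraction. The following sketch matches the text's description (the names buffer, in, out, and counter follow the book's conventions; the infinite producer/consumer loops are wrapped in functions here so they can be exercised one call at a time):

```c
#include <stdbool.h>

#define BUFFER_SIZE 8

int buffer[BUFFER_SIZE];
int in = 0, out = 0;      /* next free slot / next full slot */
int counter = 0;          /* number of items currently in the buffer */

/* Producer: busy-waits while the buffer is full, then inserts an item. */
void produce(int item) {
    while (counter == BUFFER_SIZE)
        ;                 /* do nothing -- buffer is full */
    buffer[in] = item;
    in = (in + 1) % BUFFER_SIZE;
    counter++;            /* NOT atomic: the race discussed in the text */
}

/* Consumer: busy-waits while the buffer is empty, then removes an item. */
int consume(void) {
    while (counter == 0)
        ;                 /* do nothing -- buffer is empty */
    int item = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;            /* NOT atomic either */
    return item;
}
```

Run sequentially the routines behave correctly; the trouble described next arises only under concurrent execution.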
Although the producer and consumer routines shown above are correct separately, they may not function correctly when executed concurrently. As an illustration, suppose that the value of the variable counter is currently 5 and that the producer and consumer processes concurrently execute the statements “counter++” and “counter--”. Following the execution of these two statements, the value of the variable counter may be 4, 5, or 6! The only correct result, though, is counter == 5, which is generated correctly if the producer and consumer execute separately.
Note: Page 205 of the 9th edition (which we have read well) shows why the value of the counter may be incorrect. It is due to the way the statements "counter++" and "counter--" are implemented in assembly (and hence machine language) on a typical machine. Since we know it well, we don't clutter the content here. The following starts after that part in the book.
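As a concrete illustration of that note, the three-instruction expansion of counter++ and counter-- can be interleaved by hand. The register names below are illustrative stand-ins for real machine registers, and the interleaving is simulated as straight-line code:

```c
/* Simulate one harmful interleaving of "counter++" (producer) and
 * "counter--" (consumer), each compiled to load / modify / store. */
int interleave_from(int start) {
    int counter = start;
    int reg1, reg2;

    reg1 = counter;       /* producer: load counter (5)           */
    reg2 = counter;       /* consumer: load counter (still 5)     */
    reg1 = reg1 + 1;      /* producer: compute 6                  */
    reg2 = reg2 - 1;      /* consumer: compute 4                  */
    counter = reg1;       /* producer: store 6                    */
    counter = reg2;       /* consumer: store 4 -- overwrites the 6 */

    return counter;       /* 4, not the correct value 5           */
}
```

Reversing the final two stores yields 6 instead; only a non-interleaved execution yields 5.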
 We would arrive at this incorrect state because we allowed both processes to manipulate the variable counter concurrently. A situation like this, where several processes access and manipulate the same data concurrently and the outcome of the execution depends on the particular order in which the access takes place, is called a race condition. To guard against the race condition above, we need to ensure that only one process at a time can be manipulating the variable counter. To make such a guarantee, we require that the processes be synchronized in some way.
 Situations such as the one just described occur frequently in operating systems as different parts of the system manipulate resources. Furthermore, as we have emphasized in earlier chapters, the growing importance of multicore systems has brought an increased emphasis on developing multithreaded applications. In such applications, several threads—which are quite possibly sharing data—are running in parallel on different processing cores. Clearly, we want any changes that result from such activities not to interfere with one another. Because of the importance of this issue, we devote a major portion of this chapter to process synchronization and coordination among cooperating processes.
THE CRITICAL SECTION PROBLEM
We begin our consideration of process synchronization by discussing the so-called critical-section problem. Consider a system consisting of n processes {P0, P1, ..., Pn−1}. Each process has a segment of code, called a critical section, in which the process may be changing common variables, updating a table, writing a file, and so on. The important feature of the system is that, when one process is executing in its critical section, no other process is allowed to execute in its critical section. That is, no two processes are executing in their critical sections at the same time. The critical-section problem is to design a protocol that the processes can use to cooperate. Each process must request permission to enter its critical section. The section of code implementing this request is the entry section. The critical section may be followed by an exit section. The remaining code is the remainder section. The general structure of a typical process Pi is shown in Figure 5.1. The entry section and exit section are enclosed in boxes to highlight these important segments of code.
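Figure 5.1 itself did not survive extraction; a sketch of the general structure it shows, using the section names from the text, is:

```
do {
    [entry section]       /* request permission to enter */
        critical section
    [exit section]
        remainder section
} while (true);
```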
A solution to the critical-section problem must satisfy the following three requirements:
 Mutual exclusion. If process Pi is executing in its critical section, then no other processes can be executing in their critical sections.
 Progress. If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in deciding which will enter its critical section next, and this selection cannot be postponed indefinitely.
 Bounded waiting. There exists a bound, or limit, on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
At a given point in time, many kernel-mode processes may be active in the operating system. As a result, the code implementing an operating system (kernel code) is subject to several possible race conditions. Consider as an example a kernel data structure that maintains a list of all open files in the system. This list must be modified when a new file is opened or closed (adding the file to the list or removing it from the list). If two processes were to open files simultaneously, the separate updates to this list could result in a race condition. Other kernel data structures that are prone to possible race conditions include structures for maintaining memory allocation, for maintaining process lists, and for interrupt handling. It is up to kernel developers to ensure that the operating system is free from such race conditions.
Two general approaches are used to handle critical sections in operating systems: preemptive kernels and nonpreemptive kernels. A preemptive kernel allows a process to be preempted while it is running in kernel mode. A nonpreemptive kernel does not allow a process running in kernel mode to be preempted; a kernel-mode process will run until it exits kernel mode, blocks, or voluntarily yields control of the CPU. Obviously, a nonpreemptive kernel is essentially free from race conditions on kernel data structures, as only one process is active in the kernel at a time. We cannot say the same about preemptive kernels, so they must be carefully designed to ensure that shared kernel data are free from race conditions. Preemptive kernels are especially difficult to design for SMP architectures, since in these environments it is possible for two kernel-mode processes to run simultaneously on different processors.
PETERSON’S SOLUTION
We now illustrate a classic software-based solution to the critical-section problem known as Peterson’s solution. Because of the way modern computer architectures perform basic machine-language instructions, such as load and store, there are no guarantees that Peterson’s solution will work correctly on such architectures. However, we present the solution because it provides a good algorithmic description of solving the critical-section problem and illustrates some of the complexities involved in designing software that addresses the requirements of mutual exclusion, progress, and bounded waiting.
Peterson’s solution is restricted to two processes that alternate execution between their critical sections and remainder sections. The processes are numbered P0 and P1. For convenience, when presenting Pi, we use Pj to denote the other process; that is, j equals 1 − i. Peterson’s solution requires the two processes to share two data items: int turn; and boolean flag[2];
The variable turn indicates whose turn it is to enter its critical section. That is, if turn == i, then process Pi is allowed to execute in its critical section. The flag array is used to indicate if a process is ready to enter its critical section. For example, if flag[i] is true, this value indicates that Pi is ready to enter its critical section. With an explanation of these data structures complete, we are now ready to describe the algorithm shown in Figure 5.2. To enter the critical section, process Pi first sets flag[i] to be true and then sets turn to the value j, thereby asserting that if the other process wishes to enter the critical section, it can do so. If both processes try to enter at the same time, turn will be set to both i and j at roughly the same time. Only one of these assignments will last; the other will occur but will be overwritten immediately. The eventual value of turn determines which of the two processes is allowed to enter its critical section first.
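The structure of Figure 5.2 can be sketched as below. Since, as the text warns, plain loads and stores give no guarantees on modern hardware, this sketch uses C11 sequentially consistent atomics (an assumption beyond the book's pseudocode) so the algorithm actually works when compiled:

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

atomic_bool flag[2];    /* flag[i]: Pi is ready to enter          */
atomic_int turn;        /* whose turn it is to defer to           */
long shared_count = 0;  /* data protected by the critical section */

void enter(int i) {
    int j = 1 - i;
    atomic_store(&flag[i], true);
    atomic_store(&turn, j);        /* give the other process priority */
    while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
        ;                          /* busy wait */
}

void leave(int i) {
    atomic_store(&flag[i], false);
}

void *worker(void *arg) {
    int i = *(int *)arg;
    for (int k = 0; k < 50000; k++) {
        enter(i);
        shared_count++;            /* critical section */
        leave(i);
    }
    return NULL;
}
```

With two threads each performing 50000 protected increments, the final count is exactly 100000, which it would not reliably be without the lock.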
We now prove that this solution is correct. We need to show that: 1. Mutual exclusion is preserved. 2. The progress requirement is satisfied. 3. The bounded-waiting requirement is met.
To prove property 1, we note that each Pi enters its critical section only if either flag[j] == false or turn == i. Also note that, if both processes can be executing in their critical sections at the same time, then flag[0] == flag[1] == true. These two observations imply that P0 and P1 could not have successfully executed their while statements at about the same time, since the value of turn can be either 0 or 1 but cannot be both. Hence, one of the processes—say, Pj—must have successfully executed the while statement, whereas Pi had to execute at least one additional statement (“turn == j”). However, at that time, flag[j] == true and turn == j, and this condition will persist as long as Pj is in its critical section; as a result, mutual exclusion is preserved.
To prove properties 2 and 3, we note that a process Pi can be prevented from entering the critical section only if it is stuck in the while loop with the condition flag[j] == true and turn == j; this loop is the only one possible. If Pj is not ready to enter the critical section, then flag[j] == false, and Pi can enter its critical section. If Pj has set flag[j] to true and is also executing in its while statement, then either turn == i or turn == j. If turn == i, then Pi will enter the critical section. If turn == j, then Pj will enter the critical section. However, once Pj exits its critical section, it will reset flag[j] to false, allowing Pi to enter its critical section. If Pj resets flag[j] to true, it must also set turn to i. Thus, since Pi does not change the value of the variable turn while executing the while statement, Pi will enter the critical section (progress) after at most one entry by Pj (bounded waiting).
PETERSON’S SOLUTION (WIKIPEDIA)
The algorithm uses two variables, flag and turn. A flag[n] value of true indicates that process n wants to enter the critical section. Entrance to the critical section is granted for process P0 if P1 does not want to enter its critical section or if P1 has given priority to P0 by setting turn to 0.
The algorithm satisfies the three essential criteria to solve the critical-section problem, provided that changes to the variables turn, flag[0], and flag[1] propagate immediately and atomically. The while condition works even with preemption.
The three criteria are mutual exclusion, progress, and bounded waiting. Since turn can take on one of two values, it can be replaced by a single bit, meaning that the algorithm requires only three bits of memory.
Mutual exclusion
P0 and P1 can never be in the critical section at the same time: If P0 is in its critical section, then flag[0] is true. In addition, either flag[1] is false (meaning P1 has left its critical section), or turn is 0 (meaning P1 is just now trying to enter the critical section, but graciously waiting), or P1 is at label P1_gate (trying to enter its critical section, after setting flag[1] to true but before setting turn to 0 and busy waiting). So if both processes are in their critical sections, then we conclude that the state must satisfy flag[0] and flag[1] and turn = 0 and turn = 1. No state can satisfy both turn = 0 and turn = 1, so there can be no state where both processes are in their critical sections. (This recounts an argument that is made rigorous in [5].)
Progress
Progress is defined as the following: if no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in making the decision as to which process will enter its critical section next. This selection cannot be postponed indefinitely.[3] A process cannot immediately re-enter the critical section if the other process has set its flag to say that it would like to enter its critical section.
Bounded waiting
Bounded waiting, or bounded bypass, means that the number of times a process is bypassed by another process after it has indicated its desire to enter the critical section is bounded by a function of the number of processes in the system.[3][4] In Peterson's algorithm, a process will never wait longer than one turn for entrance to the critical section: After giving priority to the other process, this process will run to completion and set its flag to 0, thereby allowing the other process to enter the critical section.
SYNCHRONIZATION HARDWARE
As mentioned, software-based solutions such as Peterson’s are not guaranteed to work on modern computer architectures. In the following discussions, we explore several more solutions to the critical-section problem using techniques ranging from hardware to software-based APIs available to both kernel developers and application programmers. All these solutions are based on the premise of locking—that is, protecting critical regions through the use of locks. As we shall see, the designs of such locks can be quite sophisticated. We start by presenting some simple hardware instructions that are available on many systems and showing how they can be used effectively in solving the critical-section problem. Hardware features can make any programming task easier and improve system efficiency.
The critical-section problem could be solved simply in a single-processor environment if we could prevent interrupts from occurring while a shared variable was being modified. In this way, we could be sure that the current sequence of instructions would be allowed to execute in order without preemption. No other instructions would be run, so no unexpected modifications could be made to the shared variable. This is often the approach taken by nonpreemptive kernels. Unfortunately, this solution is not as feasible in a multiprocessor environment. Disabling interrupts on a multiprocessor can be time consuming, since the message is passed to all the processors. This message passing delays entry into each critical section, and system efficiency decreases. Also consider the effect on a system’s clock if the clock is kept updated by interrupts.
Many modern computer systems therefore provide special hardware instructions that allow us either to test and modify the content of a word or to swap the contents of two words atomically—that is, as one uninterruptible unit. We can use these special instructions to solve the critical-section problem in a relatively simple manner. We abstract the main concepts behind these types of instructions by describing the test_and_set() and compare_and_swap() instructions.
The atomic test_and_set() instruction can be defined as shown in Figure 5.3. If the machine supports the test_and_set() instruction, then we can implement mutual exclusion by declaring a boolean variable lock, initialized to false. The structure of process Pi is shown in Figure 5.4.
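Figures 5.3 and 5.4 did not survive extraction; the following sketch reproduces their structure. The function body executes as one uninterruptible unit on real hardware and is written in plain C here only to show its semantics:

```c
#include <stdbool.h>

/* Figure 5.3-style definition of test_and_set(). */
bool test_and_set(bool *target) {
    bool rv = *target;   /* remember the old value            */
    *target = true;      /* unconditionally set the lock      */
    return rv;           /* caller proceeds only if this was false */
}

bool lock = false;       /* shared lock variable, initially false */

/* Figure 5.4-style structure of process Pi (one round shown,
 * rather than the book's infinite do-while). */
void critical_round(void) {
    while (test_and_set(&lock))
        ;                /* spin until we observe lock == false */
    /* ... critical section ... */
    lock = false;        /* exit section: release the lock */
    /* ... remainder section ... */
}
```

The first caller to execute test_and_set() sees false and enters; every later caller sees true and spins.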
The compare_and_swap() instruction, in contrast to the test_and_set() instruction, operates on three operands; it is defined in Figure 5.5. The operand value is set to new_value only if the expression (*value == expected) is true. Regardless, compare_and_swap() always returns the original value of the variable value. Like the test_and_set() instruction, compare_and_swap() is executed atomically. Mutual exclusion can be provided as follows: a global variable (lock) is declared and is initialized to 0. The first process that invokes compare_and_swap() will set lock to 1. It will then enter its critical section, because the original value of lock was equal to the expected value of 0. Subsequent calls to compare_and_swap() will not succeed, because lock now is not equal to the expected value of 0. When a process exits its critical section, it sets lock back to 0, which allows another process to enter its critical section. The structure of process Pi is shown in Figure 5.6.
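As with the previous figures, a sketch of the Figure 5.5/5.6 structure follows; the definition is atomic on real hardware and is plain C here only to show its semantics:

```c
/* Figure 5.5-style definition of compare_and_swap(). */
int compare_and_swap(int *value, int expected, int new_value) {
    int temp = *value;            /* always return the original value      */
    if (*value == expected)
        *value = new_value;       /* swap only when *value == expected     */
    return temp;
}

int lock = 0;                     /* 0 = free, 1 = held */

/* Figure 5.6-style structure of process Pi (one round shown). */
void critical_round(void) {
    while (compare_and_swap(&lock, 0, 1) != 0)
        ;                         /* spin until the swap succeeds          */
    /* ... critical section ... */
    lock = 0;                     /* exit section: release the lock        */
    /* ... remainder section ... */
}
```

Only the call that observes lock == 0 returns 0 and enters; every other call returns 1 and keeps spinning.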
Although these algorithms satisfy the mutual-exclusion requirement, they do not satisfy the bounded-waiting requirement. In Figure 5.7, we present another algorithm using the test_and_set() instruction that satisfies all the critical-section requirements. The common data structures are
boolean waiting[n];
boolean lock;
These data structures are initialized to false. To prove that the mutual-exclusion requirement is met, we note that process Pi can enter its critical section only if either waiting[i] == false or key == false. The value of key can become false only if the test_and_set() is executed. The first process to execute the test_and_set() will find key == false; all others must wait. The variable waiting[i] can become false only if another process leaves its critical section; only one waiting[i] is set to false, maintaining the mutual-exclusion requirement. To prove that the progress requirement is met, we note that the arguments presented for mutual exclusion also apply here, since a process exiting the critical section either sets lock to false or sets waiting[j] to false. Both allow a process that is waiting to enter its critical section to proceed. To prove that the bounded-waiting requirement is met, we note that, when a process leaves its critical section, it scans the array waiting in the cyclic ordering (i + 1, i + 2, ..., n − 1, 0, ..., i − 1). It designates the first process in this ordering that is in the entry section (waiting[j] == true) as the next one to enter the critical section. Any process waiting to enter its critical section will thus do so within n − 1 turns.
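A sketch of the Figure 5.7 algorithm, with the entry and exit sections split into functions so each step can be inspected (test_and_set() is again atomic on real hardware, plain C here):

```c
#include <stdbool.h>

#define N 4                       /* number of processes */

bool waiting[N];                  /* initialized to false */
bool lock = false;

bool test_and_set(bool *target) { /* atomic on real hardware */
    bool rv = *target;
    *target = true;
    return rv;
}

/* Entry section for process i. */
void enter(int i) {
    bool key;
    waiting[i] = true;
    key = true;
    while (waiting[i] && key)
        key = test_and_set(&lock);
    waiting[i] = false;
}

/* Exit section: hand the lock to the next waiter in cyclic order
 * (i + 1, i + 2, ..., wrapping around), or free it if nobody waits. */
void leave(int i) {
    int j = (i + 1) % N;
    while (j != i && !waiting[j])
        j = (j + 1) % N;
    if (j == i)
        lock = false;             /* nobody waiting: release the lock */
    else
        waiting[j] = false;       /* pass the turn directly to Pj     */
}
```

The direct hand-off in leave() is what bounds the waiting: a spinning process is admitted by having its waiting flag cleared, without ever re-competing for lock.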
Details describing the implementation of the atomic test_and_set() and compare_and_swap() instructions are discussed more fully in books on computer architecture.
MUTEX LOCKS
 The hardware-based solutions to the critical-section problem presented in Section 5.4 are complicated as well as generally inaccessible to application programmers. Instead, operating-system designers build software tools to solve the critical-section problem. The simplest of these tools is the mutex lock. (In fact, the term mutex is short for mutual exclusion.) We use the mutex lock to protect critical regions and thus prevent race conditions. That is, a process must acquire the lock before entering a critical section; it releases the lock when it exits the critical section. The acquire() function acquires the lock, and the release() function releases the lock, as illustrated in Figure 5.8.
 A mutex lock has a boolean variable available whose value indicates if the lock is available or not. If the lock is available, a call to acquire() succeeds, and the lock is then considered unavailable. A process that attempts to acquire an unavailable lock is blocked until the lock is released. The definitions of acquire() and release() are as follows:
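The definitions themselves were lost in extraction; a sketch consistent with the text (the test-and-assignment pair in acquire() must be atomic in a real implementation, e.g. built on the hardware instructions of Section 5.4):

```c
#include <stdbool.h>

bool available = true;        /* true when the lock is free */

/* Busy-wait until the lock becomes available, then take it. */
void acquire(void) {
    while (!available)
        ;                     /* busy wait */
    available = false;
}

/* Release the lock so another process may acquire it. */
void release(void) {
    available = true;
}
```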
 Calls to either acquire() or release() must be performed atomically. Thus, mutex locks are often implemented using one of the hardware mechanisms described in Section 5.4, and we leave the description of this technique as an exercise.
 The main disadvantage of the implementation given here is that it requires busy waiting. While a process is in its critical section, any other process that tries to enter its critical section must loop continuously in the call to acquire(). In fact, this type of mutex lock is also called a spinlock because the process “spins” while waiting for the lock to become available. (We see the same issue with the code examples illustrating the test_and_set() instruction and the compare_and_swap() instruction.) This continual looping is clearly a problem in a real multiprogramming system, where a single CPU is shared among many processes. Busy waiting wastes CPU cycles that some other process might be able to use productively.
 Spinlocks do have an advantage, however, in that no context switch is required when a process must wait on a lock, and a context switch may take considerable time. Thus, when locks are expected to be held for short times, spinlocks are useful. They are often employed on multiprocessor systems where one thread can “spin” on one processor while another thread performs its critical section on another processor.
Later in this chapter (Section 5.7), we examine how mutex locks can be used to solve classical synchronization problems. We also discuss how these locks are used in several operating systems, as well as in Pthreads.
SEMAPHORES
 Mutex locks, as we mentioned earlier, are generally considered the simplest of synchronization tools. In this section, we examine a more robust tool that can behave similarly to a mutex lock but can also provide more sophisticated ways for processes to synchronize their activities. A semaphore S is an integer variable that, apart from initialization, is accessed only through two standard atomic operations: wait() and signal(). The definitions of wait() and signal() are as follows:
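The classical busy-waiting definitions, sketched here (the functions are suffixed _sem because wait() and signal() already exist in the C library; the test and the decrement in wait_sem() must execute without interruption in a real kernel):

```c
/* wait(S): block (here, spin) while S <= 0, then decrement. */
void wait_sem(int *S) {
    while (*S <= 0)
        ;                     /* busy wait */
    (*S)--;
}

/* signal(S): increment the semaphore. */
void signal_sem(int *S) {
    (*S)++;
}
```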
 All modifications to the integer value of the semaphore in the wait() and signal() operations must be executed indivisibly. That is, when one process modifies the semaphore value, no other process can simultaneously modify that same semaphore value. In addition, in the case of wait(S), the testing of the integer value of S (S ≤ 0), as well as its possible modification (S--), must be executed without interruption. We shall see how these operations can be implemented in Section 5.6.2. First, let’s see how semaphores can be used.
Semaphore Usage
 Operating systems often distinguish between counting and binary semaphores. The value of a counting semaphore can range over an unrestricted domain. The value of a binary semaphore can range only between 0 and 1. Thus, binary semaphores behave similarly to mutex locks. In fact, on systems that do not provide mutex locks, binary semaphores can be used instead for providing mutual exclusion.
 Counting semaphores can be used to control access to a given resource consisting of a finite number of instances. The semaphore is initialized to the number of resources available. Each process that wishes to use a resource performs a wait() operation on the semaphore (thereby decrementing the count). When a process releases a resource, it performs a signal() operation (incrementing the count). When the count for the semaphore goes to 0, all resources are being used. After that, processes that wish to use a resource will block until the count becomes greater than 0.
 We can also use semaphores to solve various synchronization problems. For example, consider two concurrently running processes: P1 with a statement S1 and P2 with a statement S2. Suppose we require that S2 be executed only after S1 has completed. We can implement this scheme readily by letting P1 and P2 share a common semaphore synch, initialized to 0. In process P1, we insert the statements
S1;
signal(synch);
In process P2, we insert the statements
wait(synch);
S2;
Because synch is initialized to 0, P2 will execute S2 only after P1 has invoked signal(synch), which is after statement S1 has been executed.
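The same S1-before-S2 scheme can be demonstrated with a POSIX semaphore initialized to 0 (sem_t stands in for the abstract semaphore; the order array is just instrumentation added for the demonstration):

```c
#include <pthread.h>
#include <semaphore.h>

sem_t synch;             /* initialized to 0 before the threads start */
int order[2];            /* records which statement ran first/second  */
int pos = 0;

void *p1(void *arg) {
    order[pos++] = 1;    /* statement S1 */
    sem_post(&synch);    /* signal(synch) */
    return NULL;
}

void *p2(void *arg) {
    sem_wait(&synch);    /* wait(synch): blocks until P1 signals */
    order[pos++] = 2;    /* statement S2 */
    return NULL;
}
```

Whatever order the scheduler starts the threads in, S2 can only run after S1, so the recorded order is always 1 then 2.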
Semaphore Implementation
 Recall that the implementation of mutex locks discussed in Section 5.5 suffers from busy waiting. The definitions of the wait() and signal() semaphore operations just described present the same problem. To overcome the need for busy waiting, we can modify the definition of the wait() and signal() operations as follows: When a process executes the wait() operation and finds that the semaphore value is not positive, it must wait. However, rather than engaging in busy waiting, the process can block itself. The block operation places a process into a waiting queue associated with the semaphore, and the state of the process is switched to the waiting state. Then control is transferred to the CPU scheduler, which selects another process to execute.
A process that is blocked, waiting on a semaphore S, should be restarted when some other process executes a signal() operation. The process is restarted by a wakeup() operation, which changes the process from the waiting state to the ready state. The process is then placed in the ready queue. (The CPU may or may not be switched from the running process to the newly ready process, depending on the CPU-scheduling algorithm.) To implement semaphores under this definition, we define a semaphore as follows:
Each semaphore has an integer value and a list of processes. When a process must wait on a semaphore, it is added to the list of processes. A signal() operation removes one process from the list of waiting processes and awakens that process. Now, the wait() and signal() semaphore operations can be defined as:
The block() operation suspends the process that invokes it. The wakeup(P) operation resumes the execution of a blocked process P. These two operations are provided by the operating system as basic system calls.
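A sketch of this definition. Since block() and wakeup() are OS system calls, the waiting list is modeled here as a FIFO queue of process ids, and "blocking" and "waking" are represented by enqueueing and dequeueing (an assumption made purely to make the structure testable):

```c
#define MAXPROCS 16

/* An integer value plus a FIFO list of waiting process ids. */
typedef struct {
    int value;
    int list[MAXPROCS];
    int head, tail;
} semaphore;

/* wait(S): decrement first; a negative value means this process
 * must join the waiting list (where a kernel would call block()). */
void sem_wait_op(semaphore *S, int pid) {
    S->value--;
    if (S->value < 0)
        S->list[S->tail++ % MAXPROCS] = pid;
}

/* signal(S): increment; if any process was waiting, remove the first
 * one from the list (where a kernel would call wakeup(P)).
 * Returns the woken pid, or -1 if nobody was waiting. */
int sem_signal_op(semaphore *S) {
    S->value++;
    if (S->value <= 0)
        return S->list[S->head++ % MAXPROCS];
    return -1;
}
```

Note how a value of −1 after two waits on a semaphore initialized to 1 directly encodes "one process is waiting", exactly as the text describes.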
Note that in this implementation, semaphore values may be negative, whereas semaphore values are never negative under the classical definition of semaphores with busy waiting. If a semaphore value is negative, its magnitude is the number of processes waiting on that
semaphore. This fact results from switching the order of the decrement and the test in the implementation of the wait() operation.
The list of waiting processes can be easily implemented by a link field in each process control block (PCB). Each semaphore contains an integer value and a pointer to a list of PCBs. One way to add and remove processes from the list so as to ensure bounded waiting is to use a FIFO queue, where the semaphore contains both head and tail pointers to the queue. In general, however, the list can use any queueing strategy.
 It is critical that semaphore operations be executed atomically. We must guarantee that no two processes can execute wait() and signal() operations on the same semaphore at the same time. This is a critical-section problem; and in a single-processor environment, we can solve it by simply inhibiting interrupts during the time the wait() and signal() operations are executing. This scheme works in a single-processor environment because, once interrupts are inhibited, instructions from different processes cannot be interleaved. Only the currently running process executes until interrupts are reenabled and the scheduler can regain control. In a multiprocessor environment, interrupts must be disabled on every processor. Otherwise, instructions from different processes (running on different processors) may be interleaved in some arbitrary way. Disabling interrupts on every processor can be a difficult task and furthermore can seriously diminish performance. Therefore, SMP systems must provide alternative locking techniques—such as compare_and_swap() or spinlocks—to ensure that wait() and signal() are performed atomically.
 It is important to admit that we have not completely eliminated busy waiting with this definition of the wait() and signal() operations. Rather, we have moved busy waiting from the entry section to the critical sections of application programs. Furthermore, we have limited busy waiting to the critical sections of the wait() and signal() operations, and these sections are short (if properly coded, they should be no more than about ten instructions). Thus, the critical section is almost never occupied, and busy waiting occurs rarely, and then for only a short time. An entirely different situation exists with application programs whose critical sections may be long (minutes or even hours) or may almost always be occupied. In such cases, busy waiting is extremely inefficient.
Deadlocks and Starvation
 The implementation of a semaphore with a waiting queue may result in a situation where two or more processes are waiting indefinitely for an event that can be caused only by one of the waiting processes. The event in question is the execution of a signal() operation. When such a state is reached, these processes are said to be deadlocked. To illustrate this, consider a system consisting of two processes, P0 and P1, each accessing two semaphores, S and Q, set to the value 1:
Suppose that P0 executes wait(S) and then P1 executes wait(Q). When P0 executes wait(Q), it must wait until P1 executes signal(Q). Similarly, when P1 executes wait(S), it must wait until P0 executes signal(S). Since these signal() operations cannot be executed, P0 and P1 are deadlocked.
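The side-by-side code of the two processes, reconstructed from the text's description (the key point being that each acquires the semaphores in the opposite order):

```
      P0                  P1
   --------            --------
   wait(S);            wait(Q);
   wait(Q);            wait(S);
     ...                 ...
   signal(S);          signal(Q);
   signal(Q);          signal(S);
```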
 Another problem related to deadlocks is indefinite blocking or starvation, a situation in which processes wait indefinitely within the semaphore. Indefinite blocking may occur if we remove processes from the list associated with a semaphore in LIFO (last-in, first-out) order.
Priority Inversion
 A scheduling challenge arises when a higher-priority process needs to read or modify kernel data that are currently being accessed by a lower-priority process—or a chain of lower-priority processes. Since kernel data are typically protected with a lock, the higher-priority process will have to wait for a lower-priority one to finish with the resource. The situation becomes more complicated if the lower-priority process is preempted in favor of another process with a higher priority. As an example, assume we have three processes—L, M, and H—whose priorities follow the order L < M < H. Assume that process H requires resource R, which is currently being accessed by process L. Ordinarily, process H would wait for L to finish using resource R. However, now suppose that process M becomes runnable, thereby preempting process L. Indirectly, a process with a lower priority—process M—has affected how long process H must wait for L to relinquish resource R.
This problem is known as priority inversion. It occurs onlyinsystems withmore thantwo priorities, soone solution is to have onlytwo
priorities. That is insufficient for most general-purpose operating systems, however. Typicallythese systems solve the problem by
implementing a priority-inheritance protocol. According to thisprotocol, allprocesses that are accessing resources needed bya higher-priority
process inherit the higher priorityuntil theyare finishedwith the resources in question. Whenthey are finished, their prioritiesrevert to their
originalvalues. Inthe example above, a priority-inheritance protocol wouldallowprocess L to temporarilyinherit the priorityof processH,
therebypreventing process Mfrom preempting its execution. Whenprocess L had finishedusing resource R, it wouldrelinquishits inherited
priorityfromH and assume its original priority. Because resource R wouldnowbe available, process H—not M—wouldrunnext.
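The priority-inheritance bookkeeping described above can be modeled with a toy lock class. This is not how a real kernel implements the protocol (a scheduler and wait queues are involved); every class and field name below is invented for illustration.

```python
class Proc:
    """A process with a base priority and a possibly boosted effective priority."""
    def __init__(self, name, priority):
        self.name = name
        self.base = priority
        self.effective = priority

class PILock:
    """Toy priority-inheritance lock: a blocked higher-priority waiter donates
    its priority to the current holder; release reverts the holder's priority."""
    def __init__(self):
        self.holder = None

    def acquire(self, proc):
        if self.holder is None:
            self.holder = proc
            return True
        # Waiter donates its priority to the holder if it is higher.
        self.holder.effective = max(self.holder.effective, proc.effective)
        return False

    def release(self):
        holder, self.holder = self.holder, None
        holder.effective = holder.base  # revert to the original priority

L = Proc("L", 1)   # low priority, holds resource R
M = Proc("M", 2)   # medium priority, would otherwise preempt L
H = Proc("H", 3)   # high priority, needs resource R

lock = PILock()
lock.acquire(L)            # L enters its critical section
lock.acquire(H)            # H blocks; L inherits H's priority
boosted = L.effective      # now higher than M's, so M cannot preempt L
lock.release()             # L finishes with R
reverted = L.effective     # back to its base priority
```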
CLASSIC PROBLEMS OF SYNCHRONIZATION
In this section, we present a number of synchronization problems as examples of a large class of concurrency-control problems. These problems are
used for testing nearly every newly proposed synchronization scheme. In our solutions to the problems, we use semaphores for synchronization, since
that is the traditional way to present such solutions. However, actual implementations of these solutions could use mutex locks in place of binary
semaphores.
The Bounded-Buffer Problem
The bounded-buffer problem was introduced in Section 5.1; it is commonly used to illustrate the power of
synchronization primitives. Here, we present a general structure of this scheme without committing
ourselves to any particular implementation. We provide a related programming project in the exercises at
the end of the chapter.
In our problem, the producer and consumer processes share the following data structures:
int n;
semaphore mutex = 1;
semaphore empty = n;
semaphore full = 0;
We assume that the pool consists of n buffers, each capable of holding one item. The mutex
semaphore provides mutual exclusion for accesses to the buffer pool and is initialized to the value
1. The empty and full semaphores count the number of empty and full buffers. The semaphore
empty is initialized to the value n; the semaphore full is initialized to the value 0.
The code for the producer process is shown in Figure 5.9, and the code for the consumer
process is shown in Figure 5.10. Note the symmetry between the producer and the consumer. We can interpret this code as the producer producing full
buffers for the consumer or as the consumer producing empty buffers for the producer.
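Since Figures 5.9 and 5.10 are not reproduced here, the structure they describe can be sketched with Python's threading.Semaphore. The buffer size, the item values, and the deque standing in for the buffer pool are our choices, not the book's.

```python
import threading
from collections import deque

n = 5
buffer = deque()                 # stands in for the pool of n buffers
mutex = threading.Semaphore(1)   # mutual exclusion on the buffer pool
empty = threading.Semaphore(n)   # counts empty buffer slots
full = threading.Semaphore(0)    # counts full buffer slots
consumed = []

def producer(items):
    for item in items:
        empty.acquire()          # wait(empty): block if no slot is free
        mutex.acquire()          # wait(mutex)
        buffer.append(item)      # add the item to the buffer
        mutex.release()          # signal(mutex)
        full.release()           # signal(full): one more full slot

def consumer(count):
    for _ in range(count):
        full.acquire()           # wait(full): block if nothing to consume
        mutex.acquire()          # wait(mutex)
        consumed.append(buffer.popleft())  # remove an item
        mutex.release()          # signal(mutex)
        empty.release()          # signal(empty): one more empty slot

p = threading.Thread(target=producer, args=(list(range(20)),))
c = threading.Thread(target=consumer, args=(20,))
p.start(); c.start(); p.join(); c.join()
# With one producer, one consumer, and a FIFO buffer, items arrive in order.
```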
The Readers–Writers Problem
 Suppose that a database is to be shared among several concurrent processes. Some of these processes may want only to read the database,
whereas others may want to update (that is, to read and write) the database. We distinguish between these two types of processes by
referring to the former as readers and to the latter as writers. Obviously, if two readers access the shared data simultaneously, no adverse
effects will result. However, if a writer and some other process (either a reader or a writer) access the database simultaneously, chaos may
ensue. To ensure that these difficulties do not arise, we require that the writers have exclusive access to the shared database while writing to
the database. This synchronization problem is referred to as the readers–writers problem. Since it was originally stated, it has been used to
test nearly every new synchronization primitive.
 The readers–writers problem has several variations, all involving priorities. The simplest one, referred to as the first readers–writers problem,
requires that no reader be kept waiting unless a writer has already obtained permission to use the shared object. In other words, no reader
should wait for other readers to finish simply because a writer is waiting. The second readers–writers problem requires that, once a writer is
ready, that writer perform its write as soon as possible. In other words, if a writer is waiting to access the object, no new readers may start
reading.
 A solution to either problem may result in starvation. In the first case, writers may starve; in the second case, readers may starve. For this
reason, other variants of the problem have been proposed. Next, we present a solution to the first readers–writers problem. See the
bibliographical notes at the end of the chapter for references describing starvation-free solutions to the second readers–writers problem.
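The classic first readers–writers solution uses a read_count guarded by a mutex, plus an rw_mutex held by any writer and by the first reader in (released by the last reader out). A Python rendering of that standard structure follows; the shared dict, the value written, and the sequencing of calls are our scaffolding for demonstration.

```python
import threading

rw_mutex = threading.Semaphore(1)  # held by writers and by the first reader
mutex = threading.Semaphore(1)     # protects read_count
read_count = 0
shared = {"value": 0}              # stands in for the shared database
seen = []

def writer(v):
    rw_mutex.acquire()             # wait(rw_mutex): exclusive access
    shared["value"] = v            # writing is performed
    rw_mutex.release()             # signal(rw_mutex)

def reader():
    global read_count
    mutex.acquire()
    read_count += 1
    if read_count == 1:
        rw_mutex.acquire()         # first reader locks out writers
    mutex.release()
    seen.append(shared["value"])   # reading is performed
    mutex.acquire()
    read_count -= 1
    if read_count == 0:
        rw_mutex.release()         # last reader lets writers back in
    mutex.release()

writer(42)
readers = [threading.Thread(target=reader) for _ in range(4)]
for t in readers: t.start()
for t in readers: t.join()
# All four readers may overlap with each other, but none overlaps a writer.
```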
  • 1. 1 Process Synchronization (Galvin) Outline  CHAPTER OBJECTIVES  To introduce the critical-section problem, whose solutions can be used to ensure the consistency of shared data.  To present both software and hardware solutions of the critical-section problem.  To examine several classical process-synchronization problems.  To explore several tools that are used to solve process synchronization problems.  BACKGROUND  THE CRITICAL SECTION PROBLEM  PETERSON'S SOLUTION  SYNCHRONIZATION HARDWARE  MUTEX LOCKS  SEMAPHORES o Semaphore Usage o Semaphore Implementation o Deadlocks and Starvation o Priority Inversion  CLASSIC PROBLEMS OF SYNCHRONIZATION o The Bounded-Buffer Problem o The Readers–Writers Problem o The Dining-Philosophers Problem  MONITORS o Monitor Usage o Dining-Philosophers Solution o Using Monitors o Implementing a Monitor o Using Semaphores o Resuming Processes within a Monitor  SYNCHRONIZATION EXAMPLES o Synchronization in Windows o Synchronization in Linux o Synchronization in Solaris o Pthreads Synchronization  ALTERNATIVE APPROACHES o Transactional Memory o OpenMP o Functional Programming Languages Contents A cooperating process is one that canaffect or be affectedbyother processes executing inthe system. Cooperating processescaneither directlyshare a logical address space (that is, both code anddata)or be allowed to share data onlythroughfiles or messages. The former case is achievedthroughthe use of threads, discussedinChapter 4. Concurrent accessto shareddata mayresult in data inconsistency, however. Inthis chapter, we discussvarious mechanisms to ensure the orderlyexecution ofcooperating processesthat share a logical address space, sothat data consistencyis maintained. BACKGROUND  We’ve alreadyseen that processes canexecute concurrentlyor in parallel. Section 3.2.2 introducedthe role of processsched ulingand described how the CPU scheduler switches rapidlybetween processesto provide concurrent execution. 
This means that one process may onlypartiallycomplete execution before another processis scheduled. Infact, a process maybe interruptedat anypoint in its instruction
  • 2. 2 Process Synchronization (Galvin) stream, andthe processing core maybe assigned to execute instructions of another process. Additionally, Section4.2 introducedparallel execution, in whichtwo instruction streams (representing different processes) execute simultaneouslyon separate processing cores. Inthis chapter, we explain how concurrent or parallelexecutioncan contribute to issues involving the integrityof data sharedbyseveral processes.  In Chapter 3, we developeda modelof a system consisting of cooperating sequential processesor threads, all running asynchronouslyand possiblysharing data. We illustratedthismodel withthe producer–consumer problem, whichis representative ofoperatingsystems. Specifically, in Section3.4.1, we describedhow a boundedbuffer couldbe usedto enable processes to share memory.  Coming to the bounded buffer problem, as we pointedout, our original solutionallowedat most BUFFER SIZE − 1 items inthe b uffer at the same time. Suppose we want to modifythe algorithm to remedythis deficiency. One possibilityis to addaninteger variable counter, initializedto 0. counter is incrementedeverytime we adda newitemto the buffer andis decrementedeverytime we remove o ne itemfrom the buffer. The code for the producer and consumer processes canbe modifiedas follows:  Although the producer andconsumer routines shown above are correct separately, theymaynot function correctlywhenexecuted concurrently. As anillustration, suppose that the value ofthe variable counter is currently5 andthat the producer andconsumer processes concurrentlyexecute the statements “counter++” and“counter--”. Following the execution ofthese two statements, the value of the variable counter maybe 4, 5, or 6! The onlycorrect result, though, is counter == 5, which is generated correctlyif the producer andconsumer execute separately. Note: Page 205 of 9th edition (which we have read well) shows whythe value of the counter may be incorrect. 
It is due to the way the statements “counter++” and “counter--” are implemented in assembly (and hence machine language) on a typical machine. Since we know it well, we don’t clutter the content here. The following starts after that part in the book.

 We would arrive at this incorrect state because we allowed both processes to manipulate the variable counter concurrently. A situation like this, where several processes access and manipulate the same data concurrently and the outcome of the execution depends on the particular order in which the access takes place, is called a race condition. To guard against the race condition above, we need to ensure that only one process at a time can be manipulating the variable counter. To make such a guarantee, we require that the processes be synchronized in some way.

 Situations such as the one just described occur frequently in operating systems as different parts of the system manipulate resources. Furthermore, as we have emphasized in earlier chapters, the growing importance of multicore systems has brought an increased emphasis on developing multithreaded applications. In such applications, several threads, which are quite possibly sharing data, are running in parallel on different processing cores. Clearly, we want any changes that result from such activities not to interfere with one another. Because of the importance of this issue, we devote a major portion of this chapter to process synchronization and coordination among cooperating processes.

THE CRITICAL SECTION PROBLEM

We begin our consideration of process synchronization by discussing the so-called critical-section problem. Consider a system consisting of n processes {P0, P1, ..., Pn−1}. Each process has a segment of code, called a critical section, in which the process may be changing common variables, updating a table, writing a file, and so on. The important feature of the system is that, when one process is executing in its critical section, no other process is allowed to execute in its critical section.
That is, no two processes are executing in their critical sections at the same time. The critical-section problem is to design a protocol that the processes can use to cooperate. Each process must request permission to enter its critical section. The section of code implementing this request is the entry section. The critical section may be followed by an exit section. The remaining code is the remainder section. The general structure of a typical process Pi is shown in Figure 5.1. The entry section and exit section are enclosed in boxes to highlight these important segments of code. A solution to the critical-section problem must satisfy the following three requirements:

 Mutual exclusion. If process Pi is executing in its critical section, then no other processes can be executing in their critical sections.

 Progress. If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in deciding which will enter its critical section next, and this selection cannot be postponed indefinitely.
 Bounded waiting. There exists a bound, or limit, on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

At a given point in time, many kernel-mode processes may be active in the operating system. As a result, the code implementing an operating system (kernel code) is subject to several possible race conditions. Consider as an example a kernel data structure that maintains a list of all open files in the system. This list must be modified when a new file is opened or closed (adding the file to the list or removing it from the list). If two processes were to open files simultaneously, the separate updates to this list could result in a race condition. Other kernel data structures that are prone to possible race conditions include structures for maintaining memory allocation, for maintaining process lists, and for interrupt handling. It is up to kernel developers to ensure that the operating system is free from such race conditions.

Two general approaches are used to handle critical sections in operating systems: preemptive kernels and nonpreemptive kernels. A preemptive kernel allows a process to be preempted while it is running in kernel mode. A nonpreemptive kernel does not allow a process running in kernel mode to be preempted; a kernel-mode process will run until it exits kernel mode, blocks, or voluntarily yields control of the CPU. Obviously, a nonpreemptive kernel is essentially free from race conditions on kernel data structures, as only one process is active in the kernel at a time. We cannot say the same about preemptive kernels, so they must be carefully designed to ensure that shared kernel data are free from race conditions. Preemptive kernels are especially difficult to design for SMP architectures, since in these environments it is possible for two kernel-mode processes to run simultaneously on different processors.
PETERSON’S SOLUTION

We now illustrate a classic software-based solution to the critical-section problem known as Peterson’s solution. Because of the way modern computer architectures perform basic machine-language instructions, such as load and store, there are no guarantees that Peterson’s solution will work correctly on such architectures. However, we present the solution because it provides a good algorithmic description of solving the critical-section problem and illustrates some of the complexities involved in designing software that addresses the requirements of mutual exclusion, progress, and bounded waiting.

Peterson’s solution is restricted to two processes that alternate execution between their critical sections and remainder sections. The processes are numbered P0 and P1. For convenience, when presenting Pi, we use Pj to denote the other process; that is, j equals 1 − i. Peterson’s solution requires the two processes to share two data items: int turn; and boolean flag[2];. The variable turn indicates whose turn it is to enter its critical section. That is, if turn == i, then process Pi is allowed to execute in its critical section. The flag array is used to indicate if a process is ready to enter its critical section. For example, if flag[i] is true, this value indicates that Pi is ready to enter its critical section.

With an explanation of these data structures complete, we are now ready to describe the algorithm shown in Figure 5.2. To enter the critical section, process Pi first sets flag[i] to be true and then sets turn to the value j, thereby asserting that if the other process wishes to enter the critical section, it can do so. If both processes try to enter at the same time, turn will be set to both i and j at roughly the same time. Only one of these assignments will last; the other will occur but will be overwritten immediately. The eventual value of turn determines which of the two processes is allowed to enter its critical section first.

We now prove that this solution is correct. We need to show that: 1.
Mutual exclusion is preserved. 2. The progress requirement is satisfied. 3. The bounded-waiting requirement is met.

To prove property 1, we note that each Pi enters its critical section only if either flag[j] == false or turn == i. Also note that, if both processes can be executing in their critical sections at the same time, then flag[0] == flag[1] == true. These two observations imply that P0 and P1 could not have successfully executed their while statements at about the same time, since the value of turn can be either 0 or 1 but cannot be both. Hence, one of the processes, say Pj, must have successfully executed the while statement, whereas Pi had to execute at least one additional statement (“turn == j”). However, at that time, flag[j] == true and turn == j, and this condition will persist as long as Pj is in its critical section; as a result, mutual exclusion is preserved.

To prove properties 2 and 3, we note that a process Pi can be prevented from entering the critical section only if it is stuck in the while loop with the condition flag[j] == true and turn == j; this loop is the only one possible. If Pj is not ready to enter the critical section, then flag[j] == false, and Pi can enter its critical section. If Pj has set flag[j] to true and is also executing in its while statement, then either turn == i or turn == j. If turn == i, then Pi will enter the critical section. If turn == j, then Pj will enter the critical section. However, once Pj exits its critical section, it will reset flag[j] to false, allowing Pi to enter its critical section. If Pj resets flag[j] to true, it must also set turn to i. Thus, since Pi does not change the value of the variable turn while executing the while statement, Pi will enter the critical section (progress) after at most one entry by Pj (bounded waiting).

PETERSON’S SOLUTION (WIKIPEDIA)
The algorithm uses two variables, flag and turn. A flag[n] value of true indicates that process n wants to enter the critical section. Entrance to the critical section is granted for process P0 if P1 does not want to enter its critical section, or if P1 has given priority to P0 by setting turn to 0.

The algorithm satisfies the three essential criteria to solve the critical-section problem, provided that changes to the variables turn, flag[0], and flag[1] propagate immediately and atomically. The while condition works even with preemption. The three criteria are mutual exclusion, progress, and bounded waiting. Since turn can take on one of two values, it can be replaced by a single bit, meaning that the algorithm requires only three bits of memory.

Mutual exclusion

P0 and P1 can never be in the critical section at the same time: if P0 is in its critical section, then flag[0] is true. In addition, either flag[1] is false (meaning P1 has left its critical section), or turn is 0 (meaning P1 is just now trying to enter the critical section, but graciously waiting), or P1 is at label P1_gate (trying to enter its critical section, after setting flag[1] to true but before setting turn to 0, and busy waiting). So if both processes are in their critical sections, then we conclude that the state must satisfy flag[0] and flag[1] and turn = 0 and turn = 1. No state can satisfy both turn = 0 and turn = 1, so there can be no state where both processes are in their critical sections. (This recounts an argument that is made rigorous in [5].)

Progress

Progress is defined as the following: if no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in making the decision as to which process will enter its critical section next.
This selection cannot be postponed indefinitely.[3] A process cannot immediately re-enter the critical section if the other process has set its flag to say that it would like to enter its critical section.

Bounded waiting

Bounded waiting, or bounded bypass, means that the number of times a process is bypassed by another process after it has indicated its desire to enter the critical section is bounded by a function of the number of processes in the system.[3][4] In Peterson's algorithm, a process will never wait longer than one turn for entrance to the critical section: after giving priority to the other process, this process will run to completion and set its flag to 1, thereby never allowing the other process to enter the critical section.

SYNCHRONIZATION HARDWARE

As mentioned, software-based solutions such as Peterson’s are not guaranteed to work on modern computer architectures. In the following discussions, we explore several more solutions to the critical-section problem using techniques ranging from hardware to software-based APIs available to both kernel developers and application programmers. All these solutions are based on the premise of locking, that is, protecting critical regions through the use of locks. As we shall see, the designs of such locks can be quite sophisticated.

We start by presenting some simple hardware instructions that are available on many systems and showing how they can be used effectively in solving the critical-section problem. Hardware features can make any programming task easier and improve system efficiency. The critical-section problem could be solved simply in a single-processor environment if we could prevent interrupts from occurring while a shared variable was being modified. In this way, we could be sure that the current sequence of instructions would be allowed to execute in order without preemption. No other instructions would be run, so no unexpected modifications could be made to the shared variable. This is often the approach taken by nonpreemptive kernels.
Unfortunately, this solution is not as feasible in a multiprocessor environment. Disabling interrupts on a multiprocessor can be time consuming, since the message is passed to all the processors. This message passing delays entry into each critical section, and system efficiency decreases. Also consider the effect on a system’s clock if the clock is kept updated by interrupts.

Many modern computer systems therefore provide special hardware instructions that allow us either to test and modify the content of a word or to swap the contents of two words atomically, that is, as one uninterruptible unit. We can use these special instructions to solve the critical-section problem in a relatively simple manner. We abstract the main concepts behind these types of instructions by describing the test_and_set() and compare_and_swap() instructions.

The atomic test_and_set() instruction can be defined as shown in Figure 5.3. If the machine supports the test_and_set() instruction, then we can implement mutual exclusion by declaring a boolean variable lock, initialized to false. The structure of process Pi is shown in Figure 5.4.
The compare_and_swap() instruction, in contrast to the test_and_set() instruction, operates on three operands; it is defined in Figure 5.5. The operand value is set to new_value only if the expression (*value == expected) is true. Regardless, compare_and_swap() always returns the original value of the variable value. Like the test_and_set() instruction, compare_and_swap() is executed atomically. Mutual exclusion can be provided as follows: a global variable (lock) is declared and is initialized to 0. The first process that invokes compare_and_swap() will set lock to 1. It will then enter its critical section, because the original value of lock was equal to the expected value of 0. Subsequent calls to compare_and_swap() will not succeed, because lock now is not equal to the expected value of 0. When a process exits its critical section, it sets lock back to 0, which allows another process to enter its critical section. The structure of process Pi is shown in Figure 5.6.

Although these algorithms satisfy the mutual-exclusion requirement, they do not satisfy the bounded-waiting requirement. In Figure 5.7, we present another algorithm using the test_and_set() instruction that satisfies all the critical-section requirements. The common data structures are boolean waiting[n]; and boolean lock; These data structures are initialized to false. To prove that the mutual-exclusion requirement is met, we note that process Pi can enter its critical section only if either waiting[i] == false or key == false. The value of key can become false only if the test_and_set() is executed. The first process to execute the test_and_set() will find key == false; all others must wait. The variable waiting[i] can become false only if another process leaves its critical section; only one waiting[i] is set to false, maintaining the mutual-exclusion requirement.
To prove that the progress requirement is met, we note that the arguments presented for mutual exclusion also apply here, since a process exiting the critical section either sets lock to false or sets waiting[j] to false. Both allow a process that is waiting to enter its critical section to proceed. To prove that the bounded-waiting requirement is met, we note that, when a process leaves its critical section, it scans the array waiting in the cyclic ordering (i + 1, i + 2, ..., n − 1, 0, ..., i − 1). It designates the first process in this ordering that is in the entry section (waiting[j] == true) as the next one to enter the critical section. Any process waiting to enter its critical section will thus do so within n − 1 turns. Details describing the implementation of the atomic test_and_set() and compare_and_swap() instructions are discussed more fully in books on computer architecture.

MUTEX LOCKS

 The hardware-based solutions to the critical-section problem presented in Section 5.4 are complicated as well as generally inaccessible to application programmers. Instead, operating-system designers build software tools to solve the critical-section problem. The simplest of these tools is the mutex lock. (In fact, the term mutex is short for mutual exclusion.) We use the mutex lock to protect critical regions and thus prevent race conditions. That is, a process must acquire the lock before entering a critical section; it releases the lock when it exits the critical section. The acquire() function acquires the lock, and the release() function releases the lock, as illustrated in Figure 5.8.

 A mutex lock has a boolean variable available whose value indicates if the lock is available or not. If the lock is available, a call to acquire() succeeds, and the lock is then considered unavailable. A process that attempts to acquire an unavailable lock is blocked until the lock is released. The definitions of acquire() and release() are as follows:

 Calls to either acquire() or release() must be performed atomically.
Thus, mutex locks are often implemented using one of the hardware mechanisms described in Section 5.4, and we leave the description of this technique as an exercise.

 The main disadvantage of the implementation given here is that it requires busy waiting. While a process is in its critical section, any other process that tries to enter its critical section must loop continuously in the call to acquire(). In fact, this type of mutex lock is also called a spinlock because the process “spins” while waiting for the lock to become available. (We see the same issue with the code examples illustrating the test_and_set() instruction and the compare_and_swap() instruction.) This continual looping is clearly a problem in a real multiprogramming system, where a single CPU is shared among many processes. Busy waiting wastes CPU cycles that some other process might be able to use productively.

 Spinlocks do have an advantage, however, in that no context switch is required when a process must wait on a lock, and a context switch may take considerable time. Thus, when locks are expected to be held for short times, spinlocks are useful. They are often employed on multiprocessor systems where one thread can “spin” on one processor while another thread performs its critical section on another processor.
Later in this chapter (Section 5.7), we examine how mutex locks can be used to solve classical synchronization problems. We also discuss how these locks are used in several operating systems, as well as in Pthreads.

SEMAPHORES

 Mutex locks, as we mentioned earlier, are generally considered the simplest of synchronization tools. In this section, we examine a more robust tool that can behave similarly to a mutex lock but can also provide more sophisticated ways for processes to synchronize their activities. A semaphore S is an integer variable that, apart from initialization, is accessed only through two standard atomic operations: wait() and signal(). The definitions of wait() and signal() are as follows:

 All modifications to the integer value of the semaphore in the wait() and signal() operations must be executed indivisibly. That is, when one process modifies the semaphore value, no other process can simultaneously modify that same semaphore value. In addition, in the case of wait(S), the testing of the integer value of S (S ≤ 0), as well as its possible modification (S--), must be executed without interruption. We shall see how these operations can be implemented in Section 5.6.2. First, let’s see how semaphores can be used.

Semaphore Usage

 Operating systems often distinguish between counting and binary semaphores. The value of a counting semaphore can range over an unrestricted domain. The value of a binary semaphore can range only between 0 and 1. Thus, binary semaphores behave similarly to mutex locks. In fact, on systems that do not provide mutex locks, binary semaphores can be used instead for providing mutual exclusion.

 Counting semaphores can be used to control access to a given resource consisting of a finite number of instances. The semaphore is initialized to the number of resources available. Each process that wishes to use a resource performs a wait() operation on the semaphore (thereby decrementing the count).
When a process releases a resource, it performs a signal() operation (incrementing the count). When the count for the semaphore goes to 0, all resources are being used. After that, processes that wish to use a resource will block until the count becomes greater than 0.

 We can also use semaphores to solve various synchronization problems. For example, consider two concurrently running processes: P1 with a statement S1 and P2 with a statement S2. Suppose we require that S2 be executed only after S1 has completed. We can implement this scheme readily by letting P1 and P2 share a common semaphore synch, initialized to 0. In process P1, we insert the statements S1; signal(synch); In process P2, we insert the statements wait(synch); S2; Because synch is initialized to 0, P2 will execute S2 only after P1 has invoked signal(synch), which is after statement S1 has been executed.

Semaphore Implementation

 Recall that the implementation of mutex locks discussed in Section 5.5 suffers from busy waiting. The definitions of the wait() and signal() semaphore operations just described present the same problem. To overcome the need for busy waiting, we can modify the definition of the wait() and signal() operations as follows: when a process executes the wait() operation and finds that the semaphore value is not positive, it must wait. However, rather than engaging in busy waiting, the process can block itself. The block operation places a process into a waiting queue associated with the semaphore, and the state of the process is switched to the waiting state. Then control is transferred to the CPU scheduler, which selects another process to execute.

 A process that is blocked, waiting on a semaphore S, should be restarted when some other process executes a signal() operation. The process is restarted by a wakeup() operation, which changes the process from the waiting state to the ready state. The process is then placed in the ready queue.
(The CPU may or may not be switched from the running process to the newly ready process, depending on the CPU-scheduling algorithm.) To implement semaphores under this definition, we define a semaphore as follows: each semaphore has an integer value and a list of processes. When a process must wait on a semaphore, it is added to the list of processes. A signal() operation removes one process from the list of waiting processes and awakens that process. Now, the wait() and signal() semaphore operations can be defined as: The block() operation suspends the process that invokes it. The wakeup(P) operation resumes the execution of a blocked process P. These two operations are provided by the operating system as basic system calls.

Note that in this implementation, semaphore values may be negative, whereas semaphore values are never negative under the classical definition of semaphores with busy waiting. If a semaphore value is negative, its magnitude is the number of processes waiting on that
semaphore. This fact results from switching the order of the decrement and the test in the implementation of the wait() operation.

The list of waiting processes can be easily implemented by a link field in each process control block (PCB). Each semaphore contains an integer value and a pointer to a list of PCBs. One way to add and remove processes from the list so as to ensure bounded waiting is to use a FIFO queue, where the semaphore contains both head and tail pointers to the queue. In general, however, the list can use any queueing strategy.

 It is critical that semaphore operations be executed atomically. We must guarantee that no two processes can execute wait() and signal() operations on the same semaphore at the same time. This is a critical-section problem; and in a single-processor environment, we can solve it by simply inhibiting interrupts during the time the wait() and signal() operations are executing. This scheme works in a single-processor environment because, once interrupts are inhibited, instructions from different processes cannot be interleaved. Only the currently running process executes until interrupts are reenabled and the scheduler can regain control. In a multiprocessor environment, interrupts must be disabled on every processor. Otherwise, instructions from different processes (running on different processors) may be interleaved in some arbitrary way. Disabling interrupts on every processor can be a difficult task and furthermore can seriously diminish performance. Therefore, SMP systems must provide alternative locking techniques, such as compare_and_swap() or spinlocks, to ensure that wait() and signal() are performed atomically.

 It is important to admit that we have not completely eliminated busy waiting with this definition of the wait() and signal() operations. Rather, we have moved busy waiting from the entry section to the critical sections of application programs.
Furthermore, we have limited busy waiting to the critical sections of the wait() and signal() operations, and these sections are short (if properly coded, they should be no more than about ten instructions). Thus, the critical section is almost never occupied, and busy waiting occurs rarely, and then for only a short time. An entirely different situation exists with application programs whose critical sections may be long (minutes or even hours) or may almost always be occupied. In such cases, busy waiting is extremely inefficient.

Deadlocks and Starvation

 The implementation of a semaphore with a waiting queue may result in a situation where two or more processes are waiting indefinitely for an event that can be caused only by one of the waiting processes. The event in question is the execution of a signal() operation. When such a state is reached, these processes are said to be deadlocked. To illustrate this, consider a system consisting of two processes, P0 and P1, each accessing two semaphores, S and Q, set to the value 1: Suppose that P0 executes wait(S) and then P1 executes wait(Q). When P0 executes wait(Q), it must wait until P1 executes signal(Q). Similarly, when P1 executes wait(S), it must wait until P0 executes signal(S). Since these signal() operations cannot be executed, P0 and P1 are deadlocked.

 Another problem related to deadlocks is indefinite blocking or starvation, a situation in which processes wait indefinitely within the semaphore. Indefinite blocking may occur if we remove processes from the list associated with a semaphore in LIFO (last-in, first-out) order.
The situation becomes more complicated if the lower-priority process is preempted in favor of another process with a higher priority. As an example, assume we have three processes, L, M, and H, whose priorities follow the order L < M < H. Assume that process H requires resource R, which is currently being accessed by process L. Ordinarily, process H would wait for L to finish using resource R. However, now suppose that process M becomes runnable, thereby preempting process L. Indirectly, a process with a lower priority, process M, has affected how long process H must wait for L to relinquish resource R.

This problem is known as priority inversion. It occurs only in systems with more than two priorities, so one solution is to have only two priorities. That is insufficient for most general-purpose operating systems, however. Typically these systems solve the problem by implementing a priority-inheritance protocol. According to this protocol, all processes that are accessing resources needed by a higher-priority process inherit the higher priority until they are finished with the resources in question. When they are finished, their priorities revert to their original values. In the example above, a priority-inheritance protocol would allow process L to temporarily inherit the priority of process H, thereby preventing process M from preempting its execution. When process L had finished using resource R, it would relinquish its inherited priority from H and assume its original priority. Because resource R would now be available, process H, not M, would run next.

CLASSIC PROBLEMS OF SYNCHRONIZATION

In this section, we present a number of synchronization problems as examples of a large class of concurrency-control problems. These problems are used for testing nearly every newly proposed synchronization scheme. In our solutions to the problems, we use semaphores for synchronization, since that is the traditional way to present such solutions. However, actual implementations of these solutions could use mutex locks in place of binary semaphores.
The Bounded-Buffer Problem
The bounded-buffer problem was introduced in Section 5.1; it is commonly used to illustrate the power of synchronization primitives. Here, we present a general structure of this scheme without committing ourselves to any particular implementation. We provide a related programming project in the exercises at the end of the chapter. In our problem, the producer and consumer processes share the following data structures: int n; semaphore mutex = 1; semaphore empty = n; semaphore full = 0;

We assume that the pool consists of n buffers, each capable of holding one item. The mutex semaphore provides mutual exclusion for accesses to the buffer pool and is initialized to the value 1. The empty and full semaphores count the number of empty and full buffers. The semaphore empty is initialized to the value n; the semaphore full is initialized to the value 0. The code for the producer process is shown in Figure 5.9, and the code for the consumer process is shown in Figure 5.10. Note the symmetry between the producer and the consumer. We can interpret this code as the producer producing full buffers for the consumer or as the consumer producing empty buffers for the producer.

The Readers–Writers Problem

 Suppose that a database is to be shared among several concurrent processes. Some of these processes may want only to read the database, whereas others may want to update (that is, to read and write) the database. We distinguish between these two types of processes by referring to the former as readers and to the latter as writers. Obviously, if two readers access the shared data simultaneously, no adverse effects will result. However, if a writer and some other process (either a reader or a writer) access the database simultaneously, chaos may ensue. To ensure that these difficulties do not arise, we require that the writers have exclusive access to the shared database while writing to the database. This synchronization problem is referred to as the readers–writers problem.
Since it was originally stated, it has been used to test nearly every new synchronization primitive.

 The readers–writers problem has several variations, all involving priorities. The simplest one, referred to as the first readers–writers problem, requires that no reader be kept waiting unless a writer has already obtained permission to use the shared object. In other words, no reader should wait for other readers to finish simply because a writer is waiting. The second readers–writers problem requires that, once a writer is ready, that writer perform its write as soon as possible. In other words, if a writer is waiting to access the object, no new readers may start reading.

 A solution to either problem may result in starvation. In the first case, writers may starve; in the second case, readers may starve. For this reason, other variants of the problem have been proposed. Next, we present a solution to the first readers–writers problem. See the bibliographical notes at the end of the chapter for references describing starvation-free solutions to the second readers–writers problem.