Chapter 6 Concurrency, Mutual Exclusion and Synchronization


  • Concurrency arises in three different contexts:

    • Multiple applications – Multiple programs are allowed to dynamically share processing time.

    • Structured applications – Some applications can be effectively programmed as a set of concurrent processes.

    • Operating system structure – The OS itself is implemented as a set of processes.
  • Concurrent processes (or threads) often need access to shared data and shared resources.

    • Processes use and update shared data such as shared variables, files, and databases.

  • Writing must be mutually exclusive to prevent a condition leading to inconsistent data views.

  • Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes.

Example 1:

  • Shared-memory solution to bounded-buffer problem allows at most n – 1 items in buffer at the same time. A solution where all n buffers are used is not simple.

  • Suppose that we modify the producer-consumer code by adding a variable counter, initialized to 0 and incremented each time a new item is added to the buffer.

Shared data

#define BUFFER_SIZE 10

typedef struct {

. . .

} item;
item buffer[BUFFER_SIZE];

int in = 0;

int out = 0;

int counter = 0;

Producer process
item nextProduced;

while (1) {
    while (counter == BUFFER_SIZE)
        ; /* do nothing */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

Consumer process

item nextConsumed;

while (1) {
    while (counter == 0)
        ; /* do nothing */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
}

  • The statements counter++ and counter-- must be performed atomically.

  • Atomic operation means an operation that completes in its entirety without interruption.

  • The statement “counter++” may be implemented in machine language as:

register1 = counter

register1 = register1 + 1

counter = register1

  • The statement “counter--” may be implemented as:

register2 = counter
register2 = register2 – 1
counter = register2

  • If both the producer and consumer attempt to update the buffer concurrently, the assembly language statements may get interleaved.

  • Interleaving depends upon how the producer and consumer processes are scheduled.

  • Assume counter is initially 5. One interleaving of statements is:

producer: register1=counter (register1=5)

producer: register1=register1+1 (register1=6)

consumer: register2=counter (register2 = 5)

consumer: register2=register2–1 (register2 = 4)

producer: counter=register1 (counter = 6)

consumer: counter=register2 (counter = 4)

  • The value of counter may be either 4 or 6, where the correct result should be 5.

Example 2: Spooler Directory

Race Condition

  • The race condition is a situation where several processes access (read/write) shared data concurrently and the final value of the shared data depends upon which process finishes last.

    • The actions performed by concurrent processes will then depend on the order in which their execution is interleaved.

    • Synchronization, by contrast, means that neither process will proceed beyond a certain point in the computation until both have reached their respective synchronization points.

Critical Section/Region

  1. Consider a system consisting of n processes all competing to use some shared data.

  2. Each process has a code segment, called critical section, in which the shared data is accessed.

Example: Race condition updating a variable

Critical Section and Race Condition

  • Multiprogramming allows logical parallelism, uses devices efficiently - but we lose correctness when there is a race condition.

  • So we forbid logical parallelism inside critical section

    • We lose some parallelism but we regain correctness.

Wherein Lies the Problem

  • The problem stems from interruption of a software-based process while it is executing its critical section (low-level).

The Critical-Section Problem

  1. The critical-section problem is to design a protocol that the processes can use to cooperate. The protocol must ensure that when one process is executing in its critical section, no other process is allowed to execute in its critical section.

  2. The critical section problem is to design a protocol that the processes can use so that their action will not depend on the order in which their execution is interleaved (possibly on many processors).

Solution to Critical Section Problem - Requirements

  • A solution to the critical-section problem must satisfy the following three requirements:

    1. Mutual Exclusion. If process Pi is executing in its critical section, then no other processes can be executing in their critical sections.

      • Implications:

        • Critical sections better be focused and short.

        • Better not get into an infinite loop in there.

        • If a process somehow halts/waits in its critical section, it must not interfere with other processes.

    2. Progress. If no process is executing in its critical section and there exist some processes that wish to enter their critical section, then the selection of the processes that will enter the critical section next cannot be postponed indefinitely.

    3. Bounded Waiting. A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
      • Assume that each process executes at a nonzero speed

      • No assumption concerning relative speed of the n processes.

Types of Solutions

  • Software solutions

    • Algorithms whose correctness does not rely on any other assumptions.

  • Hardware solutions

    • Rely on some special machine instructions.

  • Operating System solutions

    • Provide some functions and data structures to the programmer through system/library calls.

    • Linguistic constructs provided as part of a language.

Initial Attempts to Solve the Problem

  • The execution of critical sections must be mutually exclusive: at any time, only one process is allowed to execute in its critical section (even with multiple processors).

  • So each process must first request permission to enter its critical section.

  • The section of code implementing this request is called the Entry Section (ES).

  • The critical section (CS) might be followed by a Leave/Exit Section (LS).

  • The remaining code is the Remainder Section (RS).

Software Solutions

  • We consider first the case of 2 processes:

    • Algorithm 1, 2 and 3

  • Then we generalize to n processes:

    • The Bakery algorithm

  • Initial notation

    • Only 2 processes, P0 and P1

    • When presenting process Pi, Pj always denotes the other process (i != j).

  • General structure of process Pi (other process Pj)

do {
    entry section
    critical section
    exit section
    remainder section
} while (1);

  • Processes may share some common variables to synchronize/coordinate their actions.

Algorithm 1

Shared variables:

int turn; //initially turn = 0

// turn = i, Pi can enter its critical section
Process Pi
do {
    while (turn != i);
    critical section
    turn = j;
    remainder section
} while (1);

  • Satisfies mutual exclusion, but not progress.

Algorithm 2

Shared variables

boolean flag[2];

// initially flag [0]=flag [1]=false.

// flag [i]=true means that Pi is ready

// to enter its critical section
Process Pi
do {
    flag[i] = true;
    while (flag[j]);
    critical section
    flag[i] = false;
    remainder section
} while (1);

  • Satisfies mutual exclusion, but not progress requirement.

Algorithm 3

  • Combined shared variables of algorithms 1 and 2.

Process Pi

do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j);
    critical section
    flag[i] = false;
    remainder section
} while (1);

  • Meets all three requirements; solves the critical-section problem for two processes.



Bakery Algorithm

  • The algorithm solves the critical-section problem for n processes.

  • Before entering its critical section, each process receives a number. The holder of the smallest number enters the critical section.

  • If processes Pi and Pj receive the same number, if i < j, then Pi is served first; else Pj is served first.

  • The numbering scheme always generates numbers in nondecreasing order of enumeration; i.e., 1,2,3,3,3,3,4,5...

  • Notation

Lexicographical order (ticket #, process id #)

(a,b) < (c,d) if a < c or if a = c and b < d

max(a0, …, an-1) is a number, k, such that

k >= ai for i = 0, …, n – 1

  • Shared data

boolean choosing[n];

int number[n];
Data structures are initialized to false and 0 respectively.

do {
    choosing[i] = true;
    number[i] = max(number[0], number[1], …,
                    number[n-1]) + 1;
    choosing[i] = false;
    for (j = 0; j < n; j++) {
        while (choosing[j]);
        while ((number[j] != 0) &&
               ((number[j], j) < (number[i], i)));
    }
    critical section
    number[i] = 0;
    remainder section
} while (1);

  • To prove that the Bakery algorithm works, we have to show that if Pi is in its critical section and Pk (k != i) has already chosen its number[k] != 0, then

(number[i], i) < (number[k], k)

  • Consider that Pi is already in its critical section and Pk is trying to enter its own critical section. When process Pk executes the second while loop for j = i, it finds that:

number[i] != 0 and (number[i], i) < (number[k], k)

Thus it continues looping in the while loop until Pi leaves the critical section.

  • Note that processes enter their critical sections on a first-come, first-served basis.

What about Process Failure

  • If all 3 criteria (ME, progress, bounded waiting) are satisfied, then a valid solution will provide robustness against failure of a process in its remainder section (RS).

    • Since failure in RS is just like having an infinitely long RS.

  • However, no valid solution can provide robustness against a process failing in its critical section (CS).

    • A process Pi that fails in its CS does not signal that fact to other processes: for them Pi is still in its CS.

Drawback of Software Solutions

  • Processes that are requesting to enter their critical section are busy waiting (consuming processor time needlessly).

  • If critical sections are long, it would be more efficient to block processes that are waiting.

Mutual Exclusion – Hardware Support

  • Interrupt disabling

    • A process runs until it invokes an operating-system service or until it is interrupted

    • Disabling interrupts guarantees mutual exclusion on a uniprocessor

    • Price: the processor is limited in its ability to interleave programs

    • Does not work with multiprocessing: disabling interrupts on one processor does not prevent processes on other processors from entering their critical sections

  • Special machine instructions

    • Normally, access to a memory location excludes other access to that same location.

    • Extension: designers have proposed machine instructions that perform 2 actions atomically (indivisible) on the same memory location (e.g., reading and writing).

    • The execution of such an instruction is also mutually exclusive (even on Multiprocessors).

    • They can be used to provide mutual exclusion but need to be complemented by other mechanisms to satisfy the other 2 requirements of the CS problem.

TestAndSet Synchronization Hardware

  • Test and modify the content of a word atomically.

boolean TestAndSet(boolean &target) {
    boolean rv = target;
    target = true;
    return rv;
}

Mutual Exclusion with Test-and-Set

Shared data:

boolean lock = false;

Process Pi
do {
    while (TestAndSet(lock));
    critical section
    lock = false;
    remainder section
} while (1);

Swap Synchronization Hardware

  • Atomically swap two variables.

void Swap(boolean &a, boolean &b) {
    boolean temp = a;
    a = b;
    b = temp;
}

Mutual Exclusion with Swap
Shared data:

boolean lock = false;

Process Pi

boolean key;

do {
    key = true;
    while (key == true)
        Swap(lock, key);
    critical section
    lock = false;
    remainder section
} while (1);



Semaphores

  • Synchronization tool (provided by the OS) that does not require busy waiting.

  • Logically, a semaphore S is an integer variable that, apart from initialization, can only be accessed through 2 atomic and mutually exclusive operations:

    • wait(S)

    • signal(S)

Wait and Signal Operations

wait(S) {
    while (S <= 0)
        ; /* do nothing */
    S--;
}

signal(S) {
    S++;
}

  • Modification to the value of the semaphore (S) in wait and signal is executed indivisibly.

  • In wait, the testing and possible modification of S must also be executed without interruption.

Usage of Semaphore

  • The fundamental principle of semaphore is this: Two or more processes can cooperate by means of simple signals such that a process can be forced to stop at a specified place until it has received a specified signal.

Usage 1: Critical Section of n Processes Problem
Shared data:

semaphore mutex; //initially mutex=1

Process Pi:

do {
    wait(mutex);
    critical section
    signal(mutex);
    remainder section
} while (1);

Usage 2: synchronization of 2 processes

  • Suppose that we have two processes, P1 with statement S1 and P2 with statement S2. We require that S2 be executed only after S1 has completed. This can be done with a shared semaphore variable, mutex, initialized to 0, and by inserting the following statements in P1

S1;
signal(mutex);

And the following statements in P2

wait(mutex);
S2;

Semaphore Implementation

  • The semaphore discussed so far requires busy waiting. That is, if a process is in its critical section, any other process that tries to enter its critical section must loop continuously in the entry code.

  • To overcome the busy waiting problem, the definition of the semaphore operations wait and signal should be modified.

  • When a process executes the wait operation and finds that the semaphore value is not positive, the process can block itself. The block operation places the process into a waiting queue associated with the semaphore.

  • A process that is blocked waiting on a semaphore should be restarted when some other process executes a signal operation. The blocked process should be restarted by a wakeup operation which put that process into ready queue.

  • To implement the semaphore, we define a semaphore as a record:

typedef struct {

int value;

struct process *L;

} semaphore;

  • Assume two simple operations:

  • block suspends the process that invokes it.

  • wakeup(P) resumes the execution of a blocked process P.

  • Semaphore operations now defined as



wait(S) {
    S.value--;
    if (S.value < 0) {
        add this process to S.L;
        block();
    }
}

signal(S) {
    S.value++;
    if (S.value <= 0) {
        remove a process P from S.L;
        wakeup(P);
    }
}


Types of Semaphores

  • Counting semaphore – integer value can range over an unrestricted domain.

  • Binary semaphore – integer value can range only between 0 and 1; can be simpler to implement.

  • Can implement a counting semaphore S as a binary semaphore.

Implementing S as a Binary Semaphore


Data structures:

binary-semaphore S1, S2;

int C;

S1 = 1

S2 = 0

C = initial value of semaphore S

Implementing S
wait operation


wait(S1);
C--;
if (C < 0) {
    signal(S1);
    wait(S2);
}
signal(S1);

signal operation

wait(S1);
C++;
if (C <= 0)
    signal(S2);
else
    signal(S1);


Classical Problems of Synchronization

  • Bounded-Buffer Problem

  • Readers and Writers Problem

  • Dining-Philosophers Problem

Bounded-Buffer Problem
Shared data

semaphore full, empty, mutex;

// initially full = 0, empty = n, mutex = 1

Producer Process

do {
    produce an item in nextp
    wait(empty);
    wait(mutex);
    add nextp to buffer
    signal(mutex);
    signal(full);
} while (1);
Consumer Process
do {
    wait(full);
    wait(mutex);
    remove an item from buffer to nextc
    signal(mutex);
    signal(empty);
    consume the item in nextc
} while (1);

Prepared by Dr. Amjad Mahmood
