### Semaphores and Condition Variables

Counting Semaphores

Now that we have hardware support and a very basic primitive, the mutex, we can build higher-level synchronization constructs that make our lives easier.

The first of these higher-level primitives that we'll discuss is a new type of variable called a semaphore. It is initialized to an integer value. After initialization, its value can be changed only by two operations:

• P(x)
• V(x)

P(x) is named after the Dutch word proberen, which means to test.

V(x) is named after the Dutch word verhogen, which means to increment.

The pseudo-code below illustrates the semantics of the two semaphore operations. This time the operations are made atomic in software, built on the hardware support that we discussed earlier -- but more on that soon.

```
/* proberen - to test */
P(sem)
{
    while (sem <= 0)
        ;
    sem = sem - 1;
}

/* verhogen - to increment */
V(sem)
{
    sem = sem + 1;
}
```
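Real systems expose counting semaphores directly. For example, POSIX provides the sem_t type, where sem_wait() corresponds to P and sem_post() to V. The sketch below uses a semaphore initialized to 2 to model a pool of two identical resources; sem_trywait() stands in for a P that would otherwise block:

```c
#include <semaphore.h>

/* A POSIX counting semaphore guarding a pool of two identical resources.
 * Returns 1 if a third P attempt was correctly refused while the pool
 * was empty. */
int try_take_two_then_fail(void)
{
    sem_t pool;
    sem_init(&pool, 0, 2);   /* value = 2: two resources available */

    sem_wait(&pool);         /* P: take one resource (value 2 -> 1) */
    sem_wait(&pool);         /* P: take the other    (value 1 -> 0) */

    /* A third P would block; sem_trywait fails instead of spinning */
    int blocked = (sem_trywait(&pool) != 0);

    sem_post(&pool);         /* V: return a resource (value 0 -> 1) */
    sem_wait(&pool);         /* now P succeeds immediately          */

    sem_destroy(&pool);
    return blocked;
}
```

Note how the integer value tracks exactly how many resources remain; no thread ever examines or changes it except through P and V.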

In order to ensure that the critical sections within the semaphores themselves remain protected, we make use of mutexes. In this way we again grow a smaller guarantee into a larger one:

```
P(csem)
{
    while (1) {
        Acquire_mutex(csem.mutex);
        if (csem.value <= 0) {
            Release_mutex(csem.mutex);
            continue;
        }
        else {
            csem.value = csem.value - 1;
            Release_mutex(csem.mutex);
            break;
        }
    }
}

V(csem)
{
    Acquire_mutex(csem.mutex);
    csem.value = csem.value + 1;
    Release_mutex(csem.mutex);
}
```

But let's carefully consider our implementation of P(csem). If contention is high and/or the critical section is large, we could spend a great deal of time spinning: many processes could occupy the CPU doing nothing but wasting cycles, while the process that will eventually release the critical section sits in the runnable queue waiting for its turn to run. This busy-waiting makes already-high resource contention worse.

But all is not lost. With the help of the OS, we can implement semaphores so that the calling process blocks in the P() operation instead of spinning, and waits for the V() operation to "wake it up," making it runnable again.

The pseudo-code below shows the implementation of such a semaphore, called a blocking semaphore:

```
P(csem)
{
    while (1) {
        Acquire_mutex(csem.mutex);
        if (csem.value <= 0) {
            insert_queue(getpid(), csem.queue);
            Release_mutex_and_block(csem.mutex); /* atomic, to avoid a lost wake-up */
        }
        else {
            csem.value = csem.value - 1;
            Release_mutex(csem.mutex);
            break;
        }
    }
}

V(csem)
{
    Acquire_mutex(csem.mutex);

    csem.value = csem.value + 1;
    dequeue_and_wakeup(csem.queue);

    Release_mutex(csem.mutex);
}
```

Please notice that the P()ing process must atomically become unrunnable and release the mutex. This is because of the risk of a lost wake-up. Imagine that these were two separate operations: release_mutex(csem.mutex) followed by sleep(). If a context switch occurred between the release_mutex() and the sleep(), it would be possible for another process to perform a V() operation and attempt to dequeue_and_wakeup() the first process. Unfortunately, the first process isn't yet asleep, so it misses the wake-up; when it runs again, it immediately goes to sleep with no one left to wake it up.
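The race can be seen in the following interleaving, a sketch in which the release and the sleep are written as two separate steps:

```
/* Process A (in P)                    Process B (in V)                */
release_mutex(csem.mutex);
        /* ---- context switch ---- */
                                       Acquire_mutex(csem.mutex);
                                       csem.value = csem.value + 1;
                                       dequeue_and_wakeup(csem.queue);
                                       /* A is queued but not asleep,  */
                                       /* so the wake-up is lost       */
                                       Release_mutex(csem.mutex);
        /* ---- context switch ---- */
sleep();   /* sleeps forever */
```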

Operating systems generally provide this support in the form of a sleep() system call that takes the mutex as a parameter. The kernel can then release the mutex and put the process to sleep in an environment free of interruptions (or otherwise protected).

Binary Semaphores

In many cases, it isn't necessary to count resources -- there is only one. A special type of semaphore, called a binary semaphore, may be used for this purpose. Binary semaphores may only take the values 0 and 1. In most systems, binary semaphores are just a special case of counting semaphores, also known as general semaphores.
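A binary semaphore initialized to 1 behaves like a lock: P acquires it and V releases it. A minimal sketch using a POSIX semaphore (the counting type, simply never raised above 1):

```c
#include <semaphore.h>

/* A binary semaphore (value 0 or 1) used as a lock: P = acquire,
 * V = release. The counter update is the critical section. */
long locked_increment(long iterations)
{
    sem_t lock;
    long counter = 0;

    sem_init(&lock, 0, 1);       /* 1 = unlocked */
    for (long i = 0; i < iterations; i++) {
        sem_wait(&lock);         /* P: 1 -> 0, lock acquired */
        counter++;               /* critical section */
        sem_post(&lock);         /* V: 0 -> 1, lock released */
    }
    sem_destroy(&lock);
    return counter;
}
```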

Condition Variables - Overview

The condition variable is a synchronization primative that provides a queue for threads waiting for a resource. A thread tests to see if the resource is available. If it is available, it uses it. Otherwise it adds itself to the queue of threads waiting for the resource. When a thread has finished with a resource, it wakes up exactly one thread from the queue (or none, if the queue is empty). In the case of a sharable resource, a broadcast can be sent to wake up all sleeping threads.

Proper use of condition variables provides safe access both to the queue and to the test of the resource, even with concurrency. As we will see, the implementation of condition variables itself involves mutexes.

Condition variables support three operations:

• wait - add the calling thread to the queue and put it to sleep
• signal - remove a thread from the queue and wake it up
• broadcast - wake up every thread in the queue

When using condition variables, an additional mutex must be used to protect the critical sections of code that test the lock or change the lock's state.

Condition Variables - Typical Use

The following code illustrates a typical use of condition variables to acquire a resource. Note that both the mutex mx and the condition variable cv are passed into the wait() function.

If you examine the implementation of wait() below, you will find that it atomically releases the mutex and puts the thread to sleep; this prevents a lost wake-up. After the thread is signalled and wakes up, it reacquires the mutex. The lost wake-up situation is discussed in the section describing the implementation of condition variables.

```
int lock = UNLOCKED;   /* state of the resource being guarded */

GetLock(condition cv, mutex mx)
{
    mutex_acquire(mx);
    while (lock == LOCKED)
        wait(cv, mx);

    lock = LOCKED;
    mutex_release(mx);
}

ReleaseLock(condition cv, mutex mx)
{
    mutex_acquire(mx);
    lock = UNLOCKED;
    signal(cv);
    mutex_release(mx);
}
```
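POSIX threads provide this pattern directly: pthread_cond_wait() atomically releases the mutex and sleeps, then reacquires the mutex before returning. The sketch below is a hypothetical reslock type mirroring GetLock/ReleaseLock above:

```c
#include <pthread.h>

/* Hypothetical resource lock built from a pthread mutex and condition
 * variable, mirroring the GetLock/ReleaseLock pseudo-code. */
typedef struct {
    pthread_mutex_t mx;
    pthread_cond_t  cv;
    int             locked;
} reslock;

void reslock_init(reslock *r)
{
    pthread_mutex_init(&r->mx, NULL);
    pthread_cond_init(&r->cv, NULL);
    r->locked = 0;
}

void reslock_acquire(reslock *r)
{
    pthread_mutex_lock(&r->mx);
    while (r->locked)                      /* re-test after every wake-up */
        pthread_cond_wait(&r->cv, &r->mx); /* atomically releases mx, sleeps,
                                              and reacquires mx on wake-up */
    r->locked = 1;
    pthread_mutex_unlock(&r->mx);
}

void reslock_release(reslock *r)
{
    pthread_mutex_lock(&r->mx);
    r->locked = 0;
    pthread_cond_signal(&r->cv);           /* wake one waiter, if any */
    pthread_mutex_unlock(&r->mx);
}
```

The while loop (rather than an if) matters: a woken thread must re-test the condition, since another thread may have taken the resource between the signal and the wake-up.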

Condition Variables - Implementation

This is just one implementation of condition variables; others are possible.

Data Structure

The condition variable data structure contains a doubly linked list to use as a queue. It also contains a mutex to protect operations on this queue. This mutex should be a spin-lock, since it is held only for very short periods of time.

```
struct condition {
    proc *next;      /* doubly linked list implementation of */
    proc *prev;      /* queue for blocked threads            */
    mutex listLock;  /* protects queue */
};
```

wait()

The wait() operation adds a thread to the list and then puts it to sleep. The mutex that protects the critical section in the calling function is passed as a parameter to wait(). This allows wait to atomically release the mutex and put the process to sleep.

If this operation were not atomic and a context switch occurred after the mutex_release(mx) but before the thread suspended itself, it would be possible for another thread to signal() before this thread goes to sleep. The waiting thread is already in the queue, so the signal would dequeue it and attempt to wake it -- but it isn't asleep yet. When it finally does suspend itself, the wake-up is forever gone.

```
void wait(condition *cv, mutex *mx)
{
    mutex_acquire(&cv->listLock);  /* protect the queue */
    enqueue(&cv->next, &cv->prev, thr_self()); /* enqueue this thread */
    mutex_release(&cv->listLock);  /* we're done with the list */

    /* The mutex_release() and suspend should be one atomic operation */
    mutex_release(mx);
    thr_suspend(thr_self());  /* sleep 'til someone wakes us */

    mutex_acquire(mx); /* woke up -- our turn, reacquire resource lock */

    return;
}
```

signal()

The signal() operation gets the next thread from the queue and wakes it up. If the queue is empty, it does nothing.

```
void signal(condition *cv)
{
    int tid;

    mutex_acquire(&cv->listLock); /* protect the queue */
    tid = dequeue(&cv->next, &cv->prev);
    mutex_release(&cv->listLock);

    if (tid > 0)
        thr_continue(tid); /* wake it up; do nothing if queue was empty */

    return;
}
```

The broadcast operation wakes up every thread waiting for a particular resource. This generally makes sense only with sharable resources. Perhaps a writer just completed so all of the readers can be awakened.

```
void broadcast(condition *cv)
{
    int tid;

    mutex_acquire(&cv->listLock); /* protect the queue */
    while ((tid = dequeue(&cv->next, &cv->prev)) > 0)
        thr_continue(tid); /* wake every waiting thread */
    mutex_release(&cv->listLock);

    return;
}
```