Lecture 6: Introduction to Concurrency Control

Reading

Read Chapters 30 and 31

Sharing or Fighting?

Cooperating processes (or threads) often share resources. Some of these resources can be concurrently used by any number of processes. Others can only be used by one process at a time.

The air in this room can be shared by everyone without coordination -- we don't have to coordinate our breathing. But the printer wouldn't be much use to any of us if all of us were to use it at the same time. I'm not sure exactly how that would work -- perhaps it would print all of our images superimposed, or perhaps a piece here and there from each of our jobs. But in any case, we would want to do something to coordinate our use of the resource. Another example might be an intersection -- unless we have some way of controlling our use of the intersection (stop signs? traffic lights?) -- smack!

The policy that defines how a resource should be shared is known as a sharing discipline. The most common sharing discipline is mutual exclusion, but many others are possible. When the mutual exclusion policy is used, the use of a resource by one process excludes its use by any other process.

How do we manipulate a resource from within a program? With code, of course. The portion of a program that manipulates a resource in a way that requires mutual exclusion, or some other type of protection, is known as a critical section. Note that in practice, even a single line of HLL code can create a critical section, because one line of HLL code may be translated into several lines of interruptible machine-language code.

Characteristics of a Solution

A solution to the mutual exclusion problem must satisfy three conditions:
  1. Mutual Exclusion: Only one process can be in the critical section at a time -- otherwise it isn't much of a critical section.
  2. Progress: No process is forced to wait for an available resource -- otherwise the resource is wasted.
  3. Bounded Waiting: No process can wait forever for a resource -- otherwise "no one ever gets in" would be an easy (and useless) solution.

Algorithm #1 (Incorrect)

One approach to solving the problem might be to have the processes take turns accessing the resource. Consider the following proposed solution for a two-process problem:

    /* i is this process; j is the other process */

    while (true) 
    {
       while (turn != i);  /* spin until it’s my turn */

       <<< critical section >>>

       turn = j;

       <<< code outside critical section >>>
    }

To understand why this code is incorrect, we must consider the characteristics of a solution that we discussed earlier.

This solution does ensure mutual exclusion, but it is not correct: the proposed solution violates both the progress criterion and the bounded waiting criterion.

Observations of Algorithm #1

  1. The solution forces the processes into strict alternation: P0, then P1, then P0, and so on.
  2. If it is Pj's turn, but Pj does not want the critical section, Pi must wait for a resource that is actually available -- a progress violation.
  3. If Pj never asks for the critical section again (or terminates), Pi waits forever -- a bounded waiting violation.

Let's Try Again - Algorithm #2 (Incorrect)

Let's put what we learned from Algorithm #1 to work and try again. This time, we won't worry about turns. Instead, we'll yield only if the other process is inside the critical section. If the other process isn't using it, we won't have to forgo our own use.

    /* i is this process; j is the other process */

    while (true) 
    {
       while (state[j]  == inside);  /* is the other one inside? */

       state[i] = inside;  /* get in and flip state */

       <<< critical section >>>

       state[i] = outside;  /* revert state */

       <<< code outside critical section >>>
    }

This proposal has some nice features: the processes are no longer forced into strict alternation, a process can enter repeatedly if the other is not interested, and a process waits only while the other is actually inside the critical section.

But we still don't have a solution. To understand why this code is incorrect, we must remember two things: what it means for operations to be atomic, and that a context-switch can occur at any time.

Atomicity is the property of being executed as a single, indivisible unit. This algorithm assumes that the test of (state[j] == inside) and the set of (state[i] = inside) are atomic. That is to say, this algorithm assumes that nothing can come in between those two operations.

That assumption is inaccurate. A race condition exists between testing and setting the state. P0 can be preempted by P1 between the two operations. The result is that P1 will test state[0], find it outside, and enter the critical section.

Consider the following trace:

  1. P0 finds (state[1] == outside)
  2. The scheduler forces a context-switch
  3. P1 finds (state[0] == outside)
  4. P1 sets (state[1] = inside)
  5. P1 enters the critical section
  6. The scheduler forces a context-switch
  7. P0 sets (state[0] = inside)
  8. P0 enters the critical section
  9. Both P0 and P1 are now in the critical section

With both processes in the critical section, the mutual exclusion criterion has been violated.

Algorithm #3 (Incorrect)

Let's try again. This time, let's avoid the race-condition by expressing our intent first, and then checking the other process's state:
    /* i is this process; j is the other process */

    while (true) 
    {
       state[i] = interested;  /* declare interest */

       while (state[j]  == interested);  /* stay clear till safe */

       <<< critical section >>>

       state[i] = notinterested;  /* we're done */

       <<< code outside critical section >>>
    }

Okay. This does guarantee mutual exclusion, but not bounded wait. This approach allows a livelock. A livelock is a special type of deadlock in which the affected processes are not blocked, but instead consume (waste) CPU cycles by looping forever.

Consider the following trace:

  1. P0 sets state[0] to interested
  2. A context-switch occurs
  3. P1 sets state[1] to interested
  4. P1 loops in while
  5. A context-switch occurs
  6. P0 loops in while

Both P0 and P1 loop forever. This is the livelock.

Algorithm #4: Peterson's Algorithm (Correct)

This time, let's try using Algorithm #3, but taking turns to break ties:

    /* i is this process; j is the other process */

    while (true) 
    {
       state[i] = interested;  /* declare interest */
       turn = j;  /* be nice to other guy */

       while (state[j] == interested && turn == j);

       <<< critical section >>>

       state[i] = notinterested;  /* we're done */

       <<< code outside critical section >>>
    }

This code satisfies all three properties:

  1. Mutual Exclusion: For both processes to be in the critical section at once, each must have fallen through its while loop with the other's state still interested, which requires turn == i for one and turn == j for the other at the same time -- impossible, since turn holds only one value.
  2. Progress: If Pj is not interested, state[j] is notinterested, and Pi's while loop falls through immediately.
  3. Bounded Waiting: If Pi is waiting and Pj finishes and tries to reenter, Pj sets turn = i before reaching its own while loop, so Pi enters before Pj can go around a second time.

It is interesting to note that being truly convinced of this requires a formal proof. Proving incorrectness is easy: just give a counter-example. But proving correctness is much more difficult, since no number of examples is convincing. Instead, we'll focus on justifying the correctness with the same level of rigor that you might use during a discussion with your project partner.

A Multi-Process Solution: Lamport's Algorithm (Correct)

We didn't cover this in class today, but I figured I'd add it to the notes, just for fun. I like it because I think it motivates the need for better tools, especially when it comes to synchronizing more than two processes.

    int choosing[N] = {false, false, ...};  /* is thread x picking a number? */
    int number[N] = {0, 0, ...};  /* each thread's ticket number */

    while (true) 
    {
       choosing[i] = true;  /* picking our number is a very imperfect lock */
       new = 0;  /* lowest possible number */

       for (x=0; x < N; x++)  /* look for the highest number in use */
       {
          if (number[x] > new)
             new = number[x];
       }

       number[i] = new+1;  /* give us a number one higher */
       choosing[i] = false;  /* done picking our number */

       for (x=0; x < N; x++)  /* wait until it is our turn */
       {
          while (choosing[x])  /* wait -- x could be picking a number we should defer to */
             ;

          while ( (number[x] != 0) &&
                  ( (number[x] < number[i]) ||
                    ((number[x] == number[i]) && (x < i)) ) )
             ;  /* if x holds a lower number, wait; break ties with the id# */
       }

       <<< critical section >>>

       number[i] = 0;  /* need to ask again if I want in again */

       <<< code outside critical section >>>
    }

This algorithm works by giving each thread a number as it enters the competition for the critical section. Threads gain access to the critical resource in order of this number. The number assigned to the thread is higher than the number assigned to any thread thus far.

The assignment of this number is not protected, so two threads can get the same number. In the event of this tie, the thread with the lowest ID (its index in the number[] array) will enter the critical section first.

While this may not be fair, it doesn't have to be. The bounded wait condition doesn't require fairness, just determinism. This approach guarantees that each thread will eventually get into the critical section. Although part of the decision about when a thread will enter the critical section is based on an arbitrary factor (the thread id), the thread's position in line is determined when it enters. This means that another thread cannot "cut" in line and violate the bounded wait requirement.

Hyman Mutual Exclusion "Solution"

We didn't cover this in class, either. But I also thought it might be fun to look at, if you happen to have some extra time on your hands. Below is an example of a potential solution to the problem of providing mutual exclusion for a critical section between two processes. This solution was proposed by Hyman in 1966. The really interesting thing is that it was believed to be correct for decades -- but it is not.

If you have time, spend some time, preferably with friends, trying to understand it. Once you feel like you've done that, or run out of time, check the trace below.

    shared boolean flag[2] = {0, 0};
    shared int turn = 0;

    Process (int id)
0.  {
1.     while (1)
2.     {
3.        flag[id] = 1;
4.        while (turn !=id)
5.        {
6.           while (flag[1 - id] == 1)
7.           ;
8.           turn = id;
9.        }
   
11.       <<< critical section >>>
   
12.       flag[id] = 0;
         
13.       <<< non-critical code >>>
14.    }
15. }
    

Hyman Mutual Exclusion "Solution": An Exploitive Trace

The following trace demonstrates that this algorithm is incorrect.

    Time    P0    P1
      0      1     1
      1      1     3
      2      1     4
      3      1     4
      4      3     6
      5      4     6
      6     11     6
      7     11     6
      8     11     8
      9     11     9
     10     11     4
     11     11    11

Each entry is the line number (from the listing above) at which that process is executing. At time 11, both P0 and P1 are at line 11 -- both are inside the critical section at the same time, so mutual exclusion is violated.

What to say? Playing with concurrency this way is difficult. We need better tools and better approaches!

Disabling Interrupts

Although we have learned how to synchronize any arbitrary number of processes without hardware support, we have also learned that this is a messy business. As with many other aspects of software, hardware support can make it both simpler for the programmer and more efficient.

One rudimentary form of synchronization supported by hardware is frequently used within the kernel: disabling interrupts. The kernel often guarantees that its critical sections are mutually exclusive simply by maintaining control over the execution. Most kernels disable interrupts around many critical regions, ensuring that they cannot be interrupted. Interrupts are reenabled immediately after the critical section.

Unfortunately, this approach only works in kernel mode, and it can result in delayed service of interrupts -- or lost interrupts (if two of the same type occur before interrupts are reenabled). And it only works on uniprocessors, because disabling interrupts on one processor does not stop the others, and disabling them on every processor is very time-consuming and messy.

    Disable_interrupts();

    <<< critical section >>>

    Enable_interrupts();
    

Special Instructions

Another approach is to build into the hardware very simple instructions that can give us a limited guarantee of mutual exclusion in hardware. From these small guarantees, we can build more complex constructs in software.

Test-and-Set

One such instruction that is commonly implemented in hardware is test-and-set. This instruction allows the atomic testing and setting of a value.

The semantics of the instruction are below. Please remember that it is atomic. It is not interruptible.

    TS (<mem loc>)
    {
       if (<mem loc> == 0)
       {
           <mem loc> = 1;
           return 0;
       }
       else
       {
          return 1;
       }
    }

Given the test-and-set instruction, we can build a simple synchronization primitive called the mutex or spin-lock. A process busy-waits until the test-and-set succeeds, at which time it can move into the critical section. When done, it can mark the critical section as available by setting the mutex's value back to 0 -- it is assumed that this store is atomic.

    Acquire_mutex(<mutex>) /* Before entering critical section */
    {
        while (TS(<mutex>))
        ;
    }

    Release_mutex(<mutex>) /* After exiting critical section */
    {
        <mutex> = 0;
    }
    

Compare-and-Swap

Another common hardware-supplied instruction that is useful for building synchronization primitives is compare-and-swap. We only briefly mentioned it in class, so it is just bonus material here. But it is more similar to the actual instruction on many processors than test-and-set. Much like test-and-set, it is atomic in hardware -- the pseudo-code below just illustrates the semantics; it is not meant to suggest in any way that the instruction is interruptible.

    CS (<mem loc>, <expected value>, <new value>)
    {
        if (<mem loc> == <expected value>)
        {
           <mem loc> = <new value>;
           return 0;
        }
        else
        {
           return 1;
        }
    }

You may also want to note that test-and-set is just a special case of compare-and-swap:

TS(x) == CS (x, 0, 1)

The pseudo-code below illustrates the creation of a mutex (spin lock) using compare-and-swap:

    Acquire_mutex(<mutex>) /* Before entering critical section */
    {
       while (CS (<mutex>, 1, 0))
       ;
    }

    Release_mutex(<mutex>) /* After exiting critical section */
    {
       <mutex> =  1;
    }