CSE 120: Homework #2

Winter 2007

Out: 1/23
In: 1/30 (beginning of lecture)

For the homework questions below, if you believe that you cannot answer a question without making some assumptions, state those assumptions in your answer.

  1. Chapter 5: 5.1, 5.2, 5.3

    5.1  Provide two programming examples of multithreading that improve performance over a single-threaded solution.

    5.2  Provide two programming examples of multithreading that do not improve performance over a single-threaded solution.

    5.3  What are the differences between user-level threads and kernel-level threads? Under what circumstances is one type better than the other?

  2. What would be a possible problem if you executed the following program? How can you solve it?
    #include <signal.h>
    #include <sys/wait.h>
    #include <stdlib.h>
    #include <unistd.h>
    
    int main(void) {
        for ( ; ; ) {
            if (! fork() ) { exit(0); }
            sleep(1);
        }
    }
    

  3. If one thread in a program calls fork(), does the new process duplicate all the threads, or is the new process single-threaded? If a thread invokes exec(), will the program specified in the parameter to exec() replace the entire process, including ALL of the threads?

  4. Suppose you have an operating system that has only binary semaphores. You wish to use counting semaphores. Show how you can implement counting semaphores using binary semaphores.

    Hints: You will need two binary semaphores to implement one counting semaphore. There is no need to use a queue - the queueing on the binary semaphores is all you'll need. You should not use busy waiting. The wait() operation for the counting semaphore will first wait on one of the two binary semaphores, and then on the other. The wait on the first semaphore implements the queueing on the counting semaphore and the wait on the second semaphore is for mutual exclusion.
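    One possible shape for a solution following these hints is sketched below. The binary_sem type and the binary_wait()/binary_signal() operations are assumed placeholders for whatever binary semaphore primitives the OS provides, not a real API, and this is only a sketch of the approach, not the required answer.

    /* Sketch only: binary_sem, binary_wait(), and binary_signal() stand in
     * for the OS's binary semaphore primitives (assumed names). */
    typedef struct binary_sem binary_sem;
    extern void binary_wait(binary_sem *s);    /* P on a binary semaphore */
    extern void binary_signal(binary_sem *s);  /* V on a binary semaphore */
    
    struct counting_sem {
        int count;           /* current value of the counting semaphore      */
        binary_sem *mutex;   /* protects count; initialized to 1             */
        binary_sem *queue;   /* waiters block here; initialized to 1 if the  */
                             /* initial count is greater than 0, else to 0   */
    };
    
    void counting_wait(struct counting_sem *s)
    {
        binary_wait(s->queue);          /* queue on the counting semaphore */
        binary_wait(s->mutex);          /* mutual exclusion on count       */
        s->count--;
        if (s->count > 0)
            binary_signal(s->queue);    /* resources remain: admit the next waiter */
        binary_signal(s->mutex);
    }
    
    void counting_signal(struct counting_sem *s)
    {
        binary_wait(s->mutex);
        s->count++;
        if (s->count == 1)
            binary_signal(s->queue);    /* semaphore was empty: release one waiter */
        binary_signal(s->mutex);
    }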

  5. Show how XCHG can be used instead of test-and-set to implement the acquire() and release() functions of the spinlock data structure described in the "Synchronization" lecture.
    struct lock {
      ...
    };
    
    void acquire (struct lock *lock) {
      ...
    }
    
    void release (struct lock *lock) {
      ...
    }
    
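    As a reminder, XCHG atomically swaps the contents of a register and a memory location. Its effect can be sketched in C as below; the atomicity comes from the hardware instruction itself, not from this code, and the function name xchg is just a placeholder.

    /* Conceptual effect of XCHG: the read and the write below happen as a
     * single indivisible step in hardware. */
    int xchg(int *addr, int newval)
    {
        int oldval = *addr;    /* read the old contents of memory */
        *addr = newval;        /* store the new value             */
        return oldval;         /* return the old contents         */
    }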

  6. Imagine you've implemented Peterson's algorithm on a uniprocessor. The program has two threads, both repeatedly trying to enter the critical section. The critical section only increments a counter shared by both threads. The value of this counter is displayed every second.

    If you did this, you would notice the following behavior: for a while (some seconds), the number of critical section entries per second would be very high. At some point, though, that rate would suddenly drop to a much lower one, indeed many thousands of times slower.

    (If you wish, give it a try! A rough test harness is sketched below.)

    Explain why this happens.
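
    In case you do want to try it, a rough sketch of such a test program follows. The Peterson's algorithm code is the standard two-thread formulation; the thread setup, the function and variable names (enter_region, worker, count), and the once-per-second reporting loop are illustrative assumptions. On modern compilers and hardware you may also need volatile declarations and memory barriers for Peterson's algorithm to behave as written.

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>
    
    volatile int flag[2];              /* flag[i] is set while thread i wants in */
    volatile int turn;                 /* which thread yields when both want in  */
    volatile unsigned long count;      /* shared counter; the critical section   */
    
    /* Peterson's algorithm, standard two-thread formulation. */
    void enter_region(int me)
    {
        int other = 1 - me;
        flag[me] = 1;
        turn = other;
        while (flag[other] && turn == other)
            ;                          /* busy-wait until it is safe to enter */
    }
    
    void leave_region(int me)
    {
        flag[me] = 0;
    }
    
    void *worker(void *arg)
    {
        int me = *(int *) arg;
        for ( ; ; ) {
            enter_region(me);
            count++;                   /* the entire critical section */
            leave_region(me);
        }
        return NULL;
    }
    
    int main(void)
    {
        pthread_t t[2];
        int id[2] = { 0, 1 };
        unsigned long prev = 0, now;
    
        pthread_create(&t[0], NULL, worker, &id[0]);
        pthread_create(&t[1], NULL, worker, &id[1]);
    
        for ( ; ; ) {                  /* report entries per second */
            sleep(1);
            now = count;
            printf("%lu entries/sec\n", now - prev);
            prev = now;
        }
    }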




voelker@cs.ucsd.edu, marzullo@cs.ucsd.edu