CSE 121 Spring 2003 Homework 2
Due Midnight, 2 May 2003 

Please turn in a tar file that contains your program and your solutions for Problems 1 and 2. For the problems that don't require a program, you can just submit a simple ASCII file.

Problem 1

Please read the section on checkpointing and recovery in Rosenblum's LFS paper that was passed out in class. In Sprite LFS, segments are written from back to front. For example, below is a diagram showing a segment that has had one write performed on it.

After one more write, the segment could look like this:

A third write would fill the segment (in this diagram, it would write two data/inode blocks and one summary block) and then start filling a new segment.
    1. Consider the roll-forward procedure in crash recovery for Sprite LFS. How does the roll-forward procedure determine when to stop rolling forward? If you can't determine this from the paper, then make an educated guess and justify your guess.
       
    2. Sprite LFS wrote the segments as outlined above because it allowed the cleaner and the roll-forward procedure to make the assumption that if a segment summary block was safely written to disk (such as the last block in the segment shown above), then all of the blocks described by the segment were also safely written to disk. Unfortunately, this is based on the assumption that a disk controller writes blocks in the sequence they are stored in memory. In order to reduce rotational latency, some controllers start the write transfer depending on the current rotational position of the head. For example, in the first write shown above, the controller could first write out the last three green blocks, and then the segment summary block, and then finally the first three green blocks.
What impact does this have on recovery? How would you change the way that LFS writes data to disk (the order, the contents, or both) in order to accommodate such disk controllers?

Problem 2

Consider the following code that inserts a node into a doubly linked list (using fields prev and next) sorted on the field key. It is not an unreasonable approximation to what Rio might use to maintain the list of buffers that the recovery code will reference upon rebooting.
void InsertNode (Node **first, Node **last, Node *new) {
   Node *at = *first;
   while (at != NULL && at->key > new->key) at = at->next;
   if (at == NULL) {          // Insert as last node in list
     new->prev = *last;
     new->next = NULL;
     if (*last == NULL) *first = new;
     else (*last)->next = new;
     *last = new;
     }
   else {                     // Insert before at
     new->prev = at->prev;
     new->next = at;
     if (at->prev == NULL) *first = new;
     else at->prev->next = new;
     at->prev = new;
     }
   }
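For concreteness, here is one minimal, hypothetical way to declare Node and exercise the routine above; the Node layout and the driver are assumptions made for illustration, not something taken from Rio.

   #include <stdlib.h>

   /* Assumed Node layout; place this typedef before InsertNode in the
      same file so the routine compiles against it. */
   typedef struct Node {
      struct Node *prev, *next;
      int key;
   } Node;

   /* Hypothetical driver: build a small list kept sorted on key. */
   int main(void) {
      Node *first = NULL, *last = NULL;
      int k;
      for (k = 0; k < 4; k++) {
         Node *n = malloc(sizeof(Node));
         n->key = k;
         InsertNode(&first, &last, n);
      }
      return 0;
   }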
  1. Consider using the Rio cache approach: this routine is used to maintain a data structure that is read after the machine recovers from a crash. How could a machine failure leave this doubly linked list in an unsafe condition?
     
  2. Suppose that Rio indeed used such a doubly-linked list. How would you design Rio to deal with the problem? (Note that you have two general choices to consider: either doing something so that the data structure is not seen in an unsafe condition or doing something to make it consistent.)
     
  3. The Rio paper does not discuss the impact of either the L1 or L2 cache. The existence of such caches complicates building Rio. Explain why, and how one could change Rio to accommodate such hardware caches.

Problem 3

Design and write a C program that will allow you to determine the size of the file cache in an operating system, assuming (reasonably) that the cache is finite and that it uses an LRU replacement policy. Use your program to see how large the file cache is for some machine. Your answer should include a description of the ideas behind your program, and your analysis that led to your measurement of the file cache size.

Note that if you decide to measure the file cache size of ieng9, you are most likely going to thrash the file cache, and you'll be competing with other users for the use of the file cache. You need to run your program either on your own personal computer or on one of the individual workstations in EBU2 313, EBU1 3327, EBU3329, or APM B402 so that you are the only user of the computer. The room EBU2 313 usually has quite low traffic, so you might try there first.

If you use one of the individual workstations, then you may find the file bigfile in public/hw2/ useful. This is a 1 GB file that contains a random number as an ASCII character string (not null-terminated).

If you wish your program to measure times, then you might want to use the routine gethrtime. Use man gethrtime for details.
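For example, here is a minimal sketch (Solaris-specific; the file name is just a placeholder) of timing a single read with gethrtime:

   #include <sys/time.h>      /* hrtime_t, gethrtime */
   #include <fcntl.h>
   #include <stdio.h>
   #include <unistd.h>

   int main(void) {
      char buf[8192];
      int fd;
      ssize_t n;
      hrtime_t start, stop;

      fd = open("bigfile", O_RDONLY);   /* placeholder path */
      if (fd < 0) { perror("open"); return 1; }
      start = gethrtime();
      n = read(fd, buf, sizeof buf);
      stop = gethrtime();
      printf("read %ld bytes in %lld ns\n", (long) n, (long long) (stop - start));
      close(fd);
      return 0;
   }

Timing a single 8 KB read is only meant to show the call; measuring the cache size will of course require reading much more data and comparing times across repeated passes.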

Hint: If you use one of the ieng9 workstations, then there will be two caches: one associated with your workstation and one associated with the NFS server. Be careful to report the size of the first cache (although, if you wish, you could measure the size of each. If you do this, then try to choose a time when that server should be lightly loaded, since otherwise you'll be competing with other users for the NFS server's cache).
last edited by Keith Marzullo on 22 April 2003