CSE 221: Homework 3

Winter 2014

Due: Tuesday, March 11, 2014 at 8:00am in class



  1. The Levy and Lipman paper on VAX/VMS virtual memory management states that the stack used by the operating system when servicing user-process system calls (running in kernel mode) resides in the user-level address space (the typical practice today is to allocate such a stack in the OS address space):
    "The P1 region [user address space] also contains fixed-sized stacks for use by executive code that executes on behalf of the process." (p. 37)

    This arrangement means that the user-level process has access to the memory region storing stack frames used by the kernel, including local variables holding pointers to kernel data structures as well as return addresses that control where the kernel will execute when returning from a procedure call. Assume such stacks are mapped with read/write access in the user-level address space. (A simplified sketch of the VAX/VMS address-space layout appears after the questions below.)

    1. Why do you think they allocated kernel stacks in the user-level portion of the address space?
    2. Why is this arrangement safe (does not violate user/kernel protection) given the process model described in the VAX/VMS paper?
    3. Modern operating systems like Linux allocate kernel stacks in the address space of the OS. Why is it necessary to do so to maintain safety?
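
    For reference, a simplified sketch of the per-process VAX/VMS address-space layout described in the paper (sizes and many details omitted):

        P0 ("program" region):  user code, data, and heap
        P1 ("control" region):  user stack, plus the fixed-size
                                kernel/executive stacks quoted above
        System region:          OS code and data, shared across all
                                processes and protected from user mode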

  2. The Scheduler Activations paper states that deadlock is potentially an issue when activations perform an upcall:
    "One issue we have not yet addressed is that a user-level thread could be executing in a critical section at the instant when it is blocked or preempted...[a] possible ill effect ... [is] deadlock (e.g., the preempted thread could be holding a lock on the user-level thread ready list; if so, deadlock would occur if the upcall attempted to place the preempted thread onto the ready list)." (p. 102)

    Why is this not a concern with standard kernel threads? That is, why do scheduler activations have to worry about this deadlock issue when standard kernel thread implementations do not?
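
    For concreteness, the following sketch illustrates the deadlock in the quoted passage. All names here (make_runnable, preemption_upcall, the ready-list representation) are invented for illustration and are not APIs from the paper:

        #include <pthread.h>

        struct thread { struct thread *next; };   /* user-level thread */

        static pthread_mutex_t ready_lock = PTHREAD_MUTEX_INITIALIZER;
        static struct thread *ready_head;          /* user-level ready list */

        static void ready_enqueue(struct thread *t)
        {
            t->next = ready_head;
            ready_head = t;
        }

        /* Ordinary user-level scheduler path. */
        void make_runnable(struct thread *t)
        {
            pthread_mutex_lock(&ready_lock);
            /* If the kernel preempts this thread HERE, it is suspended
             * while still holding ready_lock. */
            ready_enqueue(t);
            pthread_mutex_unlock(&ready_lock);
        }

        /* Upcall run on a fresh activation after a preemption. */
        void preemption_upcall(struct thread *preempted)
        {
            /* Deadlock: if `preempted` still holds ready_lock, this
             * acquire never succeeds, because the lock holder cannot
             * run again until this very upcall completes. */
            pthread_mutex_lock(&ready_lock);
            ready_enqueue(preempted);
            pthread_mutex_unlock(&ready_lock);
        }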

  3. In the FFS paper, when contrasting the old and new file system implementations, the authors make the following observation:
    "Unlike the old file system, the transfer rates for the new file system do not appear to change over time. The throughput rate is tied much more strongly to the amount of free space that is maintained." (p. 191)
    Why do transfer rates in the old file system degrade over time, while rates for the new file system depend on free space much more than the age of the file system?
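
    As a rough illustration of why fragmentation is costly (numbers invented, not taken from the paper): a disk that transfers 1,000 KB/s when reading contiguous blocks, but pays roughly 25 ms of seek and rotational delay before each scattered 1 KB block, delivers about 1 KB per 26 ms, or under 40 KB/s, less than 4% of its raw bandwidth.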

  4. [Snoeren] A reliability-induced synchronous write is a synchronous write that is issued by the file system to ensure that the file system's state (as represented by the system's metadata) is not left inconsistent if the system crashes at an inconvenient time. (A minimal sketch of the synchronous-write pattern appears after the questions below.)

    1. Let f be a new file created in a directory d. The file system will issue at least three disk operations to complete this operation. Ignoring any data blocks allocated for the directory or new file, what are these three disk operations for?
    2. In Unix FFS, at least two of these writes will be issued synchronously. Which are they, and what order should they be performed in? Briefly explain why.
    3. Consider the Soft Updates solution to this problem. Does it do any reliability-induced synchronous writes? If so, how does it differ from FFS? If not, why can it avoid doing so? Explain.
    4. Consider the same operation in LFS. Does LFS generate any reliability-induced synchronous writes? Explain.
    5. Consider the same operation with the Rio file cache. Does Rio generate any reliability-induced synchronous writes? Explain.
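
    To make the definition concrete, here is a minimal sketch, in the style of the classic BSD buffer cache (bwrite vs. bdwrite), contrasting a synchronous write, where the caller waits for the disk, with a delayed write. The stub bodies are invented for illustration; this is not actual FFS source:

        #include <stdio.h>

        struct buf { int blkno; };    /* in-core copy of one disk block */

        /* Stub standing in for BSD bwrite(): start the write and WAIT
         * until the block is on disk before returning. */
        static void bwrite(struct buf *bp)
        {
            printf("synchronous write of block %d\n", bp->blkno);
        }

        /* Stub standing in for BSD bdwrite(): mark the block dirty and
         * let it be flushed asynchronously some time later. */
        static void bdwrite(struct buf *bp)
        {
            printf("delayed write of block %d\n", bp->blkno);
        }

        void update(struct buf *metadata_bp, struct buf *data_bp)
        {
            /* Reliability-induced: the file system must know this
             * metadata is durable before making it reachable from
             * other metadata, so it blocks until the disk confirms. */
            bwrite(metadata_bp);

            /* Not reliability-induced: losing recently written file
             * data in a crash does not corrupt the metadata, so the
             * write can be delayed for performance. */
            bdwrite(data_bp);
        }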

  5. In 1999, Wang et al. proposed a disk drive architecture supporting a service called "eager write". Rather than update a block in place, as with normal disks, an eager writing disk simply writes to the next free block near the disk head (the disk internally keeps track of this mapping by maintaining a table mapping "logical" disk blocks to physical disk blocks). Argue whether using such a disk would improve the performance of a Log-Structured File System, hurt its performance, or make little difference.
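
    The following sketch shows the kind of logical-to-physical remapping such a disk maintains internally. The table layout and the "nearest free block" policy here are invented for illustration; the actual firmware design is more sophisticated:

        #include <stdint.h>

        #define NBLOCKS 1024

        /* Internal disk state (initialization omitted): a remapping
         * table over logical blocks plus a free map over physical
         * blocks. */
        static uint32_t remap[NBLOCKS];   /* logical -> physical block */
        static uint8_t  freemap[NBLOCKS]; /* 1 if physical block free */
        static uint32_t head_pos;         /* physical block under head */

        /* Stand-in for "the next free block near the disk head": pick
         * the free physical block closest to the current position. */
        static uint32_t nearest_free(void)
        {
            for (uint32_t d = 0; d < NBLOCKS; d++) {
                uint32_t p = (head_pos + d) % NBLOCKS;
                if (freemap[p])
                    return p;
            }
            return UINT32_MAX;            /* disk full */
        }

        /* Eager write: rather than seeking to the block's current
         * physical home, write wherever is cheapest and remap. */
        void eager_write(uint32_t lblk /*, const void *data */)
        {
            uint32_t p = nearest_free();
            freemap[remap[lblk]] = 1;     /* old location becomes free */
            remap[lblk] = p;
            freemap[p] = 0;
            head_pos = p;                 /* head ends at the new block */
            /* ... transfer the data to physical block p ... */
        }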


voelker@cs.ucsd.edu