CSE 221: Homework 3

Fall 2010

Due: Thursday, December 2, 2010 at 3:30pm in class



  1. The Levy and Lipman paper on VAX/VMS virtual memory management states that the stacks used by the operating system to service user-process system calls reside in the user-level address space (!):
    "The P1 region [user address space] also contains fixed-sized stacks for use by executive code that executes on behalf of the process." (p. 37)

    This arrangement means that the user-level process can access the memory holding the stack frames used by the kernel, including local variables containing pointers to kernel data structures as well as the return addresses that determine where the kernel will resume execution when returning from a procedure call. (A toy illustration of this last point follows the questions below.)

    1. Why do you think they allocated kernel stacks in the user-level portion of the address space?
    2. Why is this arrangement safe (does not violate user/kernel protection) given the process model described in the VAX/VMS paper?
    3. Modern operating systems like Linux allocate kernel stacks in the address space of the OS. Why is it necessary to do so to maintain safety?
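
    The following toy C program illustrates the point above: whoever can write a saved return address decides where execution resumes. It is not VAX/VMS code; the names, and the use of an ordinary function pointer in place of a real saved return address, are purely illustrative.

      #include <stdio.h>

      /* Stand-ins for "return to the expected caller" and "return wherever
       * the process chose to point the saved return address". */
      static void expected_caller(void)  { printf("resumed at the expected caller\n"); }
      static void arbitrary_target(void) { printf("resumed at a process-chosen target\n"); }

      int main(void) {
          /* Pretend this variable lives on an executive stack that is mapped
           * into the user-writable P1 region. */
          void (*saved_return_address)(void) = expected_caller;

          /* A process that can write that memory can redirect the "return". */
          saved_return_address = arbitrary_target;

          saved_return_address();   /* control transfers to the overwritten target */
          return 0;
      }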

  2. The Scheduler Activations paper states that deadlock is potentially an issue when activations perform an upcall:
    "One issue we have not yet addressed is that a user-level thread could be executing in a critical section at the instant when it is blocked or preempted...[a] possible ill effect ... [is] deadlock (e.g., the preempted thread could be holding a lock on the user-level thread ready list; if so, deadlock would occur if the upcall attempted to place the preempted thread onto the ready list)." (p. 102)

    Why is this not a concern with standard kernel threads? That is, why do scheduler activations have to worry about this deadlock issue (sketched below), while standard kernel thread implementations do not?
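
    The following single-threaded C sketch is my own simplification of the quoted scenario, not code from the paper. A flag stands in for the user-level ready-list lock; the "upcall" merely detects that the lock is already held, where a real spinlock acquire would spin forever.

      #include <stdio.h>
      #include <stdbool.h>

      static bool ready_list_locked = false;   /* user-level ready-list lock (simplified) */

      static void user_thread_body(void) {
          ready_list_locked = true;            /* enter the critical section */
          /* ... suppose the kernel preempts the thread right here ... */
      }

      static void upcall_thread_preempted(void) {
          /* The upcall wants to put the preempted thread back on the ready
           * list, which requires the same lock the preempted thread holds. */
          if (ready_list_locked)
              printf("deadlock: lock is held by the thread we just preempted\n");
          else
              printf("ok: lock acquired, enqueue the preempted thread\n");
      }

      int main(void) {
          user_thread_body();          /* thread preempted inside its critical section */
          upcall_thread_preempted();   /* upcall runs on the same processor */
          return 0;
      }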

  3. In 2002, Witchel et al. proposed new hardware support for memory isolation called Mondriaan Memory Protection (MMP). Within the same address space, MMP makes it possible to implement very fine-grained protection domains (protecting regions as small as a single memory word) with very little overhead (a few percent or less), thanks to hardware support. Every execution context within the address space has a list of the domains and rights to which it has access, and MMP checks every load, store, and instruction execution to validate that the operation is permitted for the current context. (A toy model of such a check follows the questions below.)

    1. Consider a contemporary monolithic kernel OS implementation like Linux or Windows. What advantages could such a kernel implementation gain by using MMP to provide multiple protection domains within the kernel address space? What would be compelling uses of MMP?
    2. An alternative to hardware support for providing protection domains within the same address space is to use strongly typed languages. Compare and contrast the two approaches by giving two examples that represent an advantage for one approach and a disadvantage for the other.
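
    The following is a rough, software-only model of the kind of check MMP performs in hardware. The types, table layout, and addresses are invented for illustration and are not taken from Witchel et al.; the idea is simply that the current domain carries a table of (range, rights) entries and every access is validated against it.

      #include <stdio.h>
      #include <stdint.h>
      #include <stdbool.h>

      enum { PERM_READ = 1, PERM_WRITE = 2, PERM_EXEC = 4 };

      struct entry  { uintptr_t start, end; unsigned rights; };  /* word-granular range */
      struct domain { const struct entry *table; int n; };

      /* Return true if the current domain may perform the requested access. */
      static bool mmp_check(const struct domain *d, uintptr_t addr, unsigned need) {
          for (int i = 0; i < d->n; i++)
              if (addr >= d->table[i].start && addr < d->table[i].end)
                  return (d->table[i].rights & need) == need;
          return false;                /* no entry: the access faults */
      }

      int main(void) {
          /* Hypothetical layout: a driver domain may read a shared buffer
           * but may only write its own small region. */
          static const struct entry driver_tbl[] = {
              { 0x1000, 0x1040, PERM_READ },
              { 0x2000, 0x2008, PERM_READ | PERM_WRITE },
          };
          struct domain driver = { driver_tbl, 2 };

          printf("read  0x1008 -> %s\n", mmp_check(&driver, 0x1008, PERM_READ)  ? "ok" : "fault");
          printf("write 0x1008 -> %s\n", mmp_check(&driver, 0x1008, PERM_WRITE) ? "ok" : "fault");
          printf("write 0x3000 -> %s\n", mmp_check(&driver, 0x3000, PERM_WRITE) ? "ok" : "fault");
          return 0;
      }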

  4. [Snoeren] A reliability-induced synchronous write is a synchronous write issued by the file system to ensure that the file system's state (as represented by its metadata) is not left inconsistent if the system crashes at an inconvenient time. (A user-level illustration of a synchronous write follows the questions below.)

    1. Let f be a new file created in a directory d. The file system will issue at least three disk operations to complete this operation. Ignoring any data blocks allocated for the directory or new file, what are these three disk operations for?
    2. In Unix FFS, at least two of these writes will be issued synchronously. Which are they, and what order should they be performed in? Briefly explain why.
    3. Consider the Soft Updates solution to this problem. Does it do any reliability-induced synchronous writes? If so, how does it differ from FFS? If not, why can it avoid doing so? Explain.
    4. Consider the same operation in LFS. Does LFS generate any reliability-induced synchronous writes? Explain.
    5. Consider the same operation with the Rio file cache. Does Rio generate any reliability-induced synchronous writes? Explain.
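
    For reference, this user-level C sketch shows what "synchronous" means in this context: the writer does not proceed until the data has reached stable storage. It is only an analogy; the reliability-induced writes in the questions above are metadata writes issued inside the file system, not application calls to fsync.

      #include <fcntl.h>
      #include <stdio.h>
      #include <string.h>
      #include <unistd.h>

      int main(void) {
          int fd = open("example.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
          if (fd < 0) { perror("open"); return 1; }

          const char *msg = "hello\n";
          if (write(fd, msg, strlen(msg)) < 0) { perror("write"); return 1; }

          /* Block here until the data is durable; an asynchronous write would
           * let the kernel flush the dirty buffer whenever it likes. */
          if (fsync(fd) < 0) { perror("fsync"); return 1; }

          close(fd);
          return 0;
      }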

  5. In 1999, Wang et al. proposed a disk drive architecture supporting a service called "eager write". Rather than update a block in place, as a normal disk does, an eager-writing disk simply writes to the next free block near the disk head; internally, the disk maintains a table mapping "logical" disk blocks to the physical blocks that currently hold them. Argue whether using such a disk would improve the performance of a Log-Structured File System, hurt its performance, or make little difference.
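
    The following toy C model (my own sketch, not Wang et al.'s design) captures the remapping behavior described above: every write, including an "update" of an existing block, goes to the next free physical block, and an internal table records where each logical block currently lives.

      #include <stdio.h>

      #define NBLOCKS 16

      static int remap[NBLOCKS];   /* logical block -> physical block; -1 if unwritten */
      static int next_free = 0;    /* "next free block near the head" (simplified) */

      static void eager_write(int logical) {
          remap[logical] = next_free++;        /* never update in place */
      }

      int main(void) {
          for (int i = 0; i < NBLOCKS; i++) remap[i] = -1;

          eager_write(5);   /* first write of logical block 5 */
          eager_write(9);
          eager_write(5);   /* "update" of block 5 lands in a new physical block */

          for (int l = 0; l < NBLOCKS; l++)
              if (remap[l] != -1)
                  printf("logical %2d -> physical %2d\n", l, remap[l]);
          return 0;
      }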


voelker@cs.ucsd.edu