CSE 221: Homework 3

Winter 2015

Due: Tuesday, March 10, 2015 at 8:00am in class



  1. The Scheduler Activations paper states that deadlock is potentially an issue when activations perform an upcall:
    "One issue we have not yet addressed is that a user-level thread could be executing in a critical section at the instant when it is blocked or preempted...[a] possible ill effect ... [is] deadlock (e.g., the preempted thread could be holding a lock on the user-level thread ready list; if so, deadlock would occur if the upcall attempted to place the preempted thread onto the ready list)." (p. 102)

    Why is this not a concern with standard kernel threads? That is, why do scheduler activations have to worry about this deadlock, while standard kernel thread implementations do not? (A minimal sketch of the scenario appears below.)
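
    To make the quoted scenario concrete, here is a minimal C sketch of the deadlock (hypothetical thread-library code; the names and data structures are illustrative, not from the paper's implementation):

        #include <pthread.h>

        struct thread { struct thread *next; /* saved context, etc. */ };

        /* Assume pthread_spin_init(&ready_lock, 0) was called at startup. */
        static pthread_spinlock_t ready_lock;   /* protects the run queue */
        static struct thread *ready_list;

        static void ready_enqueue(struct thread *t)
        {
            t->next = ready_list;
            ready_list = t;
        }

        /* Ordinary user-level thread code. */
        void make_ready(struct thread *t)
        {
            pthread_spin_lock(&ready_lock);
            /* ... kernel preempts this thread HERE and delivers an
             * upcall on a fresh activation ... */
            ready_enqueue(t);
            pthread_spin_unlock(&ready_lock);
        }

        /* Upcall handler: told that some thread was preempted, it tries
         * to put that thread back on the ready list. */
        void upcall_preempted(struct thread *preempted)
        {
            pthread_spin_lock(&ready_lock);  /* DEADLOCK: the lock is held
                                              * by `preempted`, which cannot
                                              * run again until this upcall
                                              * finishes. */
            ready_enqueue(preempted);
            pthread_spin_unlock(&ready_lock);
        }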

  2. The IX system design is tailored to data center services that have short messages and short service times (i.e., per request, very little CPU time is spent in the application). The paper motivates IX with other applications (e.g., a Web server), but evaluates only memcached.

    1. If instead requests spent a significant amount of CPU time in the application (say, 3x the time a request would spend in the networking stack on a normal Linux OS), would IX still be a good system to use? Explain why or why not. (See the note below.)
    2. If instead every request initiated an I/O to disk, would IX still be a good system to use? Explain why or why not.
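
    A back-of-the-envelope bound for part 1 (illustrative arithmetic, not from the paper): if a request spends time t in the Linux networking stack and 3t in the application, then even a kernel-bypass stack that drove the networking cost to zero would cut per-request time from 4t to 3t, a speedup of at most 4/3, or about 1.33x. You may find this framing useful when arguing part 1 either way.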

  3. The LFS paper is strongly motivated by perceived trends in both hardware performance and user access patterns. Thus, its improved performance is driven by a set of underlying assumptions. (A back-of-the-envelope illustration of the hardware trend follows the parts below.)

    1. Explain the assumptions about hardware (technology) and access patterns (workload) that LFS depends on.
    2. Explain which mechanisms in LFS depend on each assumption.

    For the purposes of the following two parts, NVRAM is persistent storage (i.e., it will survive the loss of power) with access times, throughput, and other access characteristics similar to DRAM.

    3. At the time the LFS paper was written, non-volatile RAM (NVRAM) was not commonly available. If disks were replaced entirely with NVRAM, would the LFS design still be appropriate? Explain why or why not, and be specific in justifying your answer.
    4. In practice, the cost per byte of disk is likely to remain far lower than that of NVRAM for some time. So instead, consider a situation where some NVRAM is available (e.g., 1/10th of the disk size). This NVRAM might be used for caching reads, caching writes, or storing particular metadata. Argue which use might be most appropriate for improving the performance and reliability of LFS.
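
    For part 1, a back-of-the-envelope illustration of the hardware trend (circa-1991 numbers, illustrative rather than taken from the paper): at roughly 15 ms per seek plus rotational delay, a disk doing random 4 KB writes completes about 67 operations per second, or roughly 0.25 MB/s, while the same disk can transfer sequentially at better than 1 MB/s. Batching many small writes into large sequential segments is what lets a log-structured design run at the disk's transfer bandwidth rather than at its seek rate.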

  4. [Snoeren] A reliability-induced synchronous write is a synchronous write issued by the file system to ensure that its state (as represented by the system's metadata) is not left inconsistent if the system crashes at an inconvenient time. (A schematic sketch of synchronous vs. delayed writes follows the subquestions below.)

    1. Let f be a new file created in a directory d. To complete this operation, the file system will issue at least three disk operations. Ignoring any data blocks allocated for the directory or the new file, what are these three disk operations for?
    2. In Unix FFS, at least two of these writes will be issued synchronously. Which are they, and what order should they be performed in? Briefly explain why.
    3. Consider the Soft Updates solution to this problem. Does it do any reliability-induced synchronous writes? If so, how does it differ from FFS? If not, why can it avoid doing so? Explain.
    4. Consider the same operation in LFS. Does LFS generate any reliability-induced synchronous writes? Explain.
    5. Consider the same operation with the Rio file cache. Does Rio generate any reliability-induced synchronous writes? Explain.
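
    To make the distinction above concrete, here is a schematic C sketch (hypothetical helpers in the style of the classic BSD buffer cache's bwrite/bdwrite; not actual FFS source) contrasting a reliability-induced synchronous write with an ordinary delayed write:

        /* Hypothetical buffer-cache interface (illustrative only). */
        struct buf;                      /* a cached disk block */
        void bwrite(struct buf *bp);     /* synchronous: caller blocks
                                            until the block is on disk */
        void bdwrite(struct buf *bp);    /* delayed: mark dirty, let the
                                            cache write it back later */

        /* Metadata whose on-disk ordering matters for crash consistency
         * is forced out immediately; an update whose loss would not
         * leave the file system inconsistent can be delayed. */
        void metadata_update(struct buf *bp, int ordering_critical)
        {
            if (ordering_critical)
                bwrite(bp);     /* reliability-induced synchronous write */
            else
                bdwrite(bp);    /* cheap, batched, asynchronous */
        }

    When answering parts 2-5, consider which metadata writes in each system must be treated like bwrite() here, and why.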


voelker@cs.ucsd.edu