CSE 221: Homework 3

Fall 2018

Hardcopy due Tuesday, December 4 at the start of class

Answer the following questions. For questions asking for short answers, there may not necessarily be a "right" answer, although some answers may be more compelling and/or much easier to justify. But I am interested in your explanation (the "why") as much as the answer itself. Also, do not use shorthand: write your answers using complete sentences.

When grading homeworks, we will grade one question in detail and assign full credit for technical answers to the others.

  1. The Scheduler Activations paper states that deadlock is potentially an issue when activations perform an upcall:
    "One issue we have not yet addressed is that a user-level thread could be executing in a critical section at the instant when it is blocked or preempted...[a] possible ill effect ... [is] deadlock (e.g., the preempted thread could be holding a lock on the user-level thread ready list; if so, deadlock would occur if the upcall attempted to place the preempted thread onto the ready list)." (p. 102)

    Why is this not a concern with standard kernel threads? That is, why do scheduler activations have to worry about this deadlock issue, while standard kernel thread implementations do not?

  2. Both the BN-RPC and IX systems include a variety of optimizations to improve communication performance. For each of the optimizations from BN-RPC below:

    1. Frequent vs. infrequent requests
    2. Connection management
    3. Process/thread management
    4. Communication protocol

    state whether IX incorporates a similar optimization. If it does, briefly describe how IX implements it; if it does not, explain why not.

  3. The FFS paper describes a series of optimizations to improve file system performance on hard disks. We essentially still use most of these mechanisms in file systems today on both hard disks and SSDs. However, new non-volatile memory (NVM) technologies such as spin-transfer torque memory (STT-RAM), phase-change memory (PCM), and memristors offer levels of performance that will fundamentally change file system design and implementation.

    For each of the following mechanisms introduced for FFS, which would be useful for a file system using NVM and which would not? Briefly justify your answer. For the purposes of this question, assume that NVM technologies perform and behave exactly like DRAM (except that data is persistent), and assume that wear leveling is not an issue.

    1. Using cylinder groups to allocate data blocks in a file close together (to address random data block placement in aging file systems) and to allocate inodes and data blocks physically near each other (to address inodes being located far from data blocks).
    2. Using larger block sizes to improve bandwidth utilization.
    3. Using larger block sizes to increase the max file size.
    4. Using fragments to address waste from larger block sizes.
    5. Replicating the superblock for reliability.
    6. Parameterizing the file system with device characteristics for rotationally optimal block placement.