CSE 221: Homework 3
Fall 2018
Hardcopy due Tuesday, December 4 at the start of class
Answer the following questions. For questions asking for short
answers, there may not necessarily be a "right" answer, although some
answers may be more compelling and/or much easier to justify. But I
am interested in your explanation (the "why") as much as the answer
itself. Also, do not use shorthand: write your answers using complete
sentences.
When grading homeworks, we will grade one question in detail and
assign full credit for technical answers to the others.
- The Scheduler Activations paper states that deadlock is
potentially an issue when activations perform an upcall:
"One issue we have not yet addressed is that a user-level thread could
be executing in a critical section at the instant when it is blocked
or preempted...[a] possible ill effect ... [is] deadlock (e.g., the
preempted thread could be holding a lock on the user-level thread
ready list; if so, deadlock would occur if the upcall attempted to
place the preempted thread onto the ready list)." (p. 102)
Why is this not a concern with standard kernel threads? That is, why
do scheduler activations have to worry about this deadlock issue while
standard kernel thread implementations do not?
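The quoted scenario can be sketched in a few lines: the upcall wants the same ready-list lock that the preempted user-level thread still holds, and neither side can make progress. A minimal illustration in Python (the names `ready_list` and `ready_list_lock` are hypothetical, not from the paper; a non-blocking acquire stands in for the point where a real blocking acquire would deadlock):

```python
import threading

# Hypothetical user-level ready list protected by a non-reentrant lock.
ready_list = []
ready_list_lock = threading.Lock()

def preempted_thread_body():
    # The user-level thread was preempted *inside* its critical section,
    # so the lock is still held when the kernel delivers the upcall.
    ready_list_lock.acquire()

def upcall_handler(preempted_thread):
    # The upcall wants to put the preempted thread back on the ready
    # list, which requires the very lock that thread is still holding.
    got_it = ready_list_lock.acquire(blocking=False)
    if not got_it:
        # A blocking acquire here would deadlock: the lock holder cannot
        # run until this upcall finishes, and this upcall cannot finish
        # until the lock is released.
        return "would deadlock"
    ready_list.append(preempted_thread)
    ready_list_lock.release()
    return "enqueued"

preempted_thread_body()
print(upcall_handler("T1"))  # → would deadlock
```

With kernel threads, the thread holding the lock is simply descheduled and will run again later to release it; no other code path is forced to reacquire its lock before it can resume.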
- Both the BN-RPC and IX systems include a variety of optimizations
to improve communication performance. For each of the
optimizations from BN-RPC below:
- State whether IX also optimized this aspect of the system.
- If IX did not optimize it, explain why.
- If IX did optimize it, compare and contrast IX's optimization
relative to BN-RPC. Also, if the papers report the performance
benefits of the optimization, quote the performance results
reported.
- Frequent vs. infrequent requests
- Connection management
- Process/thread management
- Communication protocol
- The FFS paper describes a series of optimizations to improve file
system performance on hard disks. Most of these mechanisms are still
used in file systems today, on both hard disks and SSDs.
However, new non-volatile memory (NVM) technologies such as
spin-transfer torque memory (STT-RAM), phase-change memory (PCM), and
memristors offer levels of performance that will fundamentally change
file system design and implementation.
For each of the following mechanisms introduced for FFS, which
would be useful for a file system using NVM and which would not?
Briefly justify your answer. For the purposes of this question,
assume that NVM technologies perform and behave exactly like DRAM
(except that data is persistent), and assume that wear leveling is
not an issue.
- Using cylinder groups to allocate data blocks in a file close
together (to address random data block placement in aging file
systems) and to allocate inodes and data blocks physically near
each other (to address inodes being located far from data
blocks).
- Using larger block sizes to improve bandwidth utilization.
- Using larger block sizes to increase the maximum file size.
- Using fragments to address waste from larger block sizes.
- Replicating the superblock for reliability.
- Parameterizing the file system with device characteristics to
achieve rotationally optimal block placement.
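The block-size/fragment tradeoff above comes down to simple internal-fragmentation arithmetic: a larger block improves transfer bandwidth, but a small file (or a file's tail) then wastes most of its last block unless fragments allow sub-block allocation. A sketch of that arithmetic (the helper `wasted_bytes` and the example sizes are illustrative, not from the paper):

```python
def wasted_bytes(file_size, block_size, fragment_size=None):
    """Internal fragmentation in the last allocation unit of a file.

    With fragments, the tail of a file can be stored in fragment-sized
    pieces instead of a whole block, shrinking the wasted remainder.
    """
    alloc_unit = fragment_size if fragment_size else block_size
    remainder = file_size % alloc_unit
    return 0 if remainder == 0 else alloc_unit - remainder

# A 100-byte file on 4 KB blocks wastes nearly the whole block...
print(wasted_bytes(100, 4096))        # → 3996
# ...but with 1 KB fragments (block/4) it wastes under 1 KB.
print(wasted_bytes(100, 4096, 1024))  # → 924
```

Note that this waste argument is about capacity, not device geometry, so it applies whether the blocks live on a spinning disk or in byte-addressable NVM.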