CSE 221: Homework 3   (Winter 2016)

Hardcopy due Thursday, March 10, 2016 at the start of class

Answer the following questions. For the short-answer questions, there may not be a single "right" answer, although some answers may be more compelling and/or much easier to justify than others. We are interested in your explanation as much as in the answer itself. Also, do not use shorthand: write your answers using complete sentences.

When grading homeworks, we will grade one question in detail and assign full credit for your answers to the others.

  1. Exokernel and L4 represent approaches for providing protection and extensibility. Xen represents an approach for providing virtualization and isolation (or, alternately, is an extreme version of extensibility since it goes even beyond Exokernel in exposing the hardware interface to unprivileged code). Consider a Web server as a motivating application-level service running on each of these three system structures, each hosting the OS described in the paper.

    For each of the three systems, consider the path a network packet containing an HTTP request takes as it travels from the network interface card to a Web server process running at user level:

    1. Identify the various protection domains in the system for this scenario. Which domains are privileged, and which are unprivileged? (Feel free to draw "boxes-and-kernel-boundary" diagrams if you find them helpful.)

      For example, if the system were standard monolithic Linux, the protection domains would be the kernel and the Web server process with its address space. The kernel is privileged, and the server process unprivileged.

    2. Describe the journey of the packet as a sequence of steps through the protection domains identified above. For each protection domain crossing, state the communication mechanism used for that packet to cross protection domains.
    3. Argue which of these systems will likely provide the highest-performance Web service without violating protection (e.g., not simply moving the Web server code into the kernel and running it in privileged mode). Justify your argument and be sure to state any assumptions you make.
    4. Further consider the Web server process triggering a page fault on a page in its address space. As with the network packet, trace the propagation of the page fault through protection domains. Which domain handles the page fault? Whose pool of physical memory is used to satisfy the page fault?

      For example, if the system were standard monolithic Linux, the CPU would raise a page-fault exception, halting the Web server process and vectoring to the Linux kernel's page fault handler. The handler would allocate a physical page from Linux's free physical page list and update the page table entry with the valid mapping. The Linux kernel would then return from the exception, and the faulting access would be retried.
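
      The monolithic-Linux fault path above can be sketched as a toy model. This is purely illustrative (not Linux source); names like FREE_PAGES and PAGE_TABLE are hypothetical stand-ins for the kernel's free page list and a process page table.

      ```python
      # Toy model of the monolithic-kernel page fault path: a not-present
      # access "traps" into the handler, which takes a frame from the
      # kernel's free list and installs a valid mapping.

      FREE_PAGES = [7, 8, 9]           # kernel's free physical page list
      PAGE_TABLE = {0x1000: None}      # virtual page -> frame (None = not present)

      def handle_page_fault(vaddr):
          """Kernel's handler: allocate a frame, update the PTE."""
          vpage = vaddr & ~0xFFF       # page-align the faulting address (4 KB pages)
          frame = FREE_PAGES.pop()     # allocate from the kernel's pool
          PAGE_TABLE[vpage] = frame    # install the valid mapping
          return frame                 # "return from the exception"

      def access(vaddr):
          vpage = vaddr & ~0xFFF
          if PAGE_TABLE.get(vpage) is None:   # hardware would fault here
              handle_page_fault(vaddr)        # vector into the kernel
          return PAGE_TABLE[vpage]            # retried access now succeeds
      ```

      Note that both the fault handling and the physical memory pool live in a single privileged domain; contrast this with where they live in Exokernel, L4, and Xen.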

  2. The Scheduler Activations paper states that deadlock is potentially an issue when activations perform an upcall:
    "One issue we have not yet addressed is that a user-level thread could be executing in a critical section at the instant when it is blocked or preempted...[a] possible ill effect ... [is] deadlock (e.g., the preempted thread could be holding a lock on the user-level thread ready list; if so, deadlock would occur if the upcall attempted to place the preempted thread onto the ready list)." (p. 102)

    Why is this not a concern with standard kernel threads? That is, why do scheduler activations have to worry about this deadlock issue while standard kernel thread implementations do not?
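
    The scenario the paper describes can be sketched as follows. This is a hypothetical illustration (the class and function names are not from the paper): a user-level thread is preempted while holding the ready-list lock, and the upcall that should enqueue it needs that same lock.

    ```python
    # Sketch of the circular wait: the upcall must take the ready-list lock
    # to enqueue the preempted thread, but the preempted thread holds that
    # lock and cannot run again until the upcall returns.

    class ReadyList:
        def __init__(self):
            self.lock_holder = None   # thread holding the user-level lock
            self.queue = []

    def upcall_place_on_ready_list(ready, preempted_thread):
        if ready.lock_holder == preempted_thread:
            return "deadlock"                 # real code would spin forever
        ready.lock_holder = "upcall"          # acquire the lock
        ready.queue.append(preempted_thread)  # enqueue the preempted thread
        ready.lock_holder = None              # release the lock
        return "ok"
    ```

    In answering, consider who runs the scheduling code that needs this lock in each design, and in which protection domain it executes.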

  3. The FFS, LFS, and Soft Updates file systems introduced new designs and optimizations to improve upon a previous file system implementation. Consider the following three changes in underlying workload and storage technology. For each of the three file systems, explain whether the improvements achieved by its design and optimizations would still hold under each of these changes. For instance, would FFS still see similar improvements relative to the old Unix file system under a read-dominated workload?

    1. Read-dominated workload (100x reads per write)
    2. Latency improves by 10x, bandwidth improves by 10x ("SSD")
    3. Latency degrades by 10x, bandwidth degrades by 10x ("Internet Cloud storage")

voelker@cs.ucsd.edu, snoeren@cs.ucsd.edu