04/19/2000 - Research paper reviews

Bryan Wang (bryan@clip.dhs.org)
Thu, 20 Apr 2000 01:02:13 -0700

<<medusa.txt>> <<staros.txt>>
Bryan Wang, 4/19/00

Medusa: An Experiment in Distributed Operating System Structure

The Medusa project is the second-generation OS for the Cm*
architecture, based partly on StarOS yet with different design goals.
Their goals emphasized performance more heavily than StarOS, believing
the key to performance is to make the hardware structure intimately
visible to the user. The goals of modularity and robustness remain
the same from StarOS.

Medusa implements this by imposing the restriction that a processor
cannot run code that resides in the memory subsystem of another
processor. If Job 1, running on processor A, wishes to run some
code that is on processor B, Job 1 sends a message to processor B
requesting that it run the code and supplying input parameters.
Processor B returns a message to Job 1 (still running on processor A)
with its result.
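This request/reply exchange can be sketched as a toy simulation; the names (request_q, processor_b, square) are mine, and Python queues stand in for messages over the Cm* interconnect:

```python
import queue
import threading

# Sketch of Medusa's restriction: "processor A" never executes code
# resident in "processor B"'s memory; it sends a request message and
# waits for a reply. All names here are illustrative, not from the paper.

request_q = queue.Queue()   # messages into processor B
reply_q = queue.Queue()     # results back to processor A


def square(x):              # code resident only on processor B
    return x * x


def processor_b():
    """Processor B: owns the code, runs it on request, replies by message."""
    func, args = request_q.get()
    reply_q.put(func(*args))


threading.Thread(target=processor_b).start()
request_q.put((square, (7,)))   # Job 1 on processor A asks B to run square(7)
result = reply_q.get()          # Job 1 blocks until B's reply arrives
print(result)
```

The point of the sketch is that the caller never touches the remote code itself; only data (arguments and the result) crosses the boundary.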

Their paper addresses its three goals satisfactorily, except for
providing hard numbers for performance. (This seems to be a common
trend, but hey, it's just a "minor detail." :) I learned that the
technique of distributing OS utilities is very practical under the set of
hardware constraints they were working with.

I enjoyed reading this paper, and I feel that many concepts it
outlines will be useful in distributed systems over LAN or WAN. I
think the constraint of saving system RAM on each processor is
considerably less important today than it was in 1980 though.

Bryan Wang, 4/19/00

StarOS, A Multiprocessor Operating System for the Support of Task Forces

StarOS attempts to exploit parallelism in multiprocessor computer
systems through the use of task forces. The system should be
arbitrarily extendable and offer reliability through redundancy.

By maintaining a performance hierarchy of memory and low-overhead
nucleus functions for process communications, StarOS is designed to
run well with inherently parallel programs. Creating a task force
requires cooperation from a programmer and compiler, which will split
a large parallelizable process into N smaller processes.
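That split can be sketched as follows, using a sum-of-squares job as the parallelizable process; the names task_force and worker are mine, not StarOS primitives, and a thread pool stands in for the N smaller processes:

```python
from concurrent.futures import ThreadPoolExecutor


def worker(chunk):
    """One of the N smaller processes: handles its own slice of the work."""
    return sum(x * x for x in chunk)


def task_force(data, n):
    """Split one large job into n pieces, run them in parallel, combine."""
    chunks = [data[i::n] for i in range(n)]      # n roughly equal slices
    with ThreadPoolExecutor(max_workers=n) as pool:
        return sum(pool.map(worker, chunks))     # combine partial results


total = task_force(list(range(100)), 4)          # same answer as a serial sum
```

Here the programmer does the splitting by hand; the paper's point is that StarOS expects the programmer and compiler to cooperate in producing that decomposition.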

I learned that a kernel implemented in microcode as well as software
might offer higher performance than a purely software kernel. Also,
the concept of mailbox objects seems to be a quick method for process
communication.

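A minimal sketch of a mailbox as a blocking message buffer, with all names my own rather than StarOS's capability-addressed objects:

```python
import queue
import threading

# A mailbox decouples sender and receiver: the sender deposits messages,
# the receiver picks them up in order, and either side blocks as needed.
mailbox = queue.Queue(maxsize=4)


def producer():
    for i in range(3):
        mailbox.put(("msg", i))    # blocks if the mailbox is full


def consumer(out):
    for _ in range(3):
        out.append(mailbox.get())  # blocks until a message arrives


received = []
t = threading.Thread(target=producer)
t.start()
consumer(received)
t.join()
```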
I found the paper to be generally well-written but a little fuzzy in
certain areas (bailout mailboxes, capability name spaces). Also,
performance issues weren't well-understood at the time of the paper's
writing, suggesting a topic for future research.