Paper Evals

Carnevali Silvio
Mon, 08 May 2000 22:12:28 PDT

Why Aren't Operating Systems Getting Faster as Fast as Hardware?

This paper is not dedicated to the development of a new OS, but to comparing
the performance of existing systems. It points out that CPU power is
increasing much faster than OS performance.

A series of benchmarks is applied to different OS/HW combinations to
demonstrate this fact. Most of them are simple operations, useful for
understanding which OS features constitute the main bottlenecks. Only one
large-scale benchmark is applied, in order to track the evolution of overall
OS performance. The simple benchmarks test the performance of kernel calls,
context switching between processes, copying data through memory and cache,
and basic file operations, while the large-scale one is a combination of the
above plus some computationally intensive operations (compilation).
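A minimal sketch of the kind of kernel-call microbenchmark described above (my own illustration, not the paper's actual harness) simply times a trivial system call in a tight loop:

```python
import os
import time

def time_syscall(n=100_000):
    """Time n invocations of a trivial kernel call (getpid) and
    return the average cost per call in microseconds."""
    start = time.perf_counter()
    for _ in range(n):
        os.getpid()  # near-no-op syscall: measures kernel entry/exit overhead
    elapsed = time.perf_counter() - start
    return elapsed / n * 1e6

if __name__ == "__main__":
    print(f"getpid: {time_syscall():.3f} us per call")
```

Because the call does essentially no work, the measured time is dominated by the user/kernel crossing itself, which is exactly the overhead the paper shows failing to scale with raw CPU speed.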
The main conclusion is that MIPS-relative OS speed is decreasing as CPU
power increases, with a few exceptions. An improvement was noticed in some
systems that tend to reduce disk usage, like Sprite. The obvious reason is
that the architecture of the system (diskless workstations accessing data
over the network) required a reduction in disk accesses due to the overhead
of network communication. Thus, data was written to disk only after 30
seconds, which reduced disk access time.
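Sprite's delayed-write policy can be sketched as a write-back buffer that only flushes blocks older than a threshold. This is a simplified toy illustration; the 30-second age comes from the paper, while the class and method names are my own assumptions:

```python
import time

class DelayedWriteCache:
    """Toy write-back cache: writes stay in memory and reach the
    backing store only once they are older than `delay` seconds,
    absorbing overwrites and short-lived files in the meantime."""

    def __init__(self, backing_store, delay=30.0):
        self.backing_store = backing_store  # dict-like: block -> data
        self.delay = delay
        self.dirty = {}                     # block -> (data, first_write_time)

    def write(self, block, data):
        # An overwrite keeps the block's original age, so repeated
        # writes to a hot block never reach the disk at all.
        first = self.dirty.get(block, (None, time.monotonic()))[1]
        self.dirty[block] = (data, first)

    def flush_old(self, now=None):
        """Push blocks that have been dirty longer than `delay` to disk."""
        now = time.monotonic() if now is None else now
        old = [b for b, (_, t) in self.dirty.items() if now - t >= self.delay]
        for block in old:
            data, _ = self.dirty.pop(block)
            self.backing_store[block] = data
```

The win is that data which is overwritten or deleted within the delay window never costs a disk access, which is why Sprite bucked the trend in the paper's measurements.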

It was interesting to read this paper, as it points out a problem that is
still very common today: memory performance vs. CPU performance. I think the
ideas are presented in an orderly manner, with lots of experimental data to
support them. Future work is not suggested, but we know that one of the
solutions to this problem is integrating more and more memory on a single
chip, since there is always a performance degradation for external accesses.

The Interaction of Architecture and OS Design

This paper also addresses the mismatch between OS needs and architecture
design for modern RISCs. The point is that architectures were steadily
shifting toward RISC, while OSes still had requirements that were best met
by specific instructions in CISC architectures. A similar study was done by
Ousterhout in the previous paper, although this paper treats more general
issues rather than mere memory vs. CPU performance.

The main problem is that OS research was based on older HW generations quite
different from RISC architectures, while HW design wasn't concerned enough
with OS needs. Thus, as kernels were getting smaller and relied more on
communication, the HW considerably increased the number of instructions
needed to handle that communication, making network latency a minimal part
of overall processing time. The management of new HW features also imposed a
significant processing overhead on system calls and interrupt handling.
With the increasing complexity of pipelines in RISC computers, handling
memory management faults and TLB misses became a more complex task requiring
a high number of instructions, even with the simplified memory management
adopted by the new architectures.
Finally, perhaps the worst handicap is the high cost of context switching.
The increasing parallelism exploited by modern kernels seems to be in open
conflict with the increasing sequential performance of RISC architectures,
with a consequent performance degradation for the context switching (down 50
times for SPARC Synapse!) needed to implement parallelism among processes.
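The context-switch cost discussed above is the kind of thing a pipe "ping-pong" microbenchmark exposes: two processes bounce one byte back and forth, forcing a switch on every hop. A rough Unix-only sketch of my own (not a benchmark from either paper):

```python
import os
import time

def ping_pong(rounds=1000):
    """Rough estimate of context-switch (plus pipe) cost: parent and
    child exchange one byte per round, so every round-trip forces at
    least two context switches. Returns average round-trip time in
    microseconds. Unix-only (uses fork)."""
    p2c_r, p2c_w = os.pipe()   # parent -> child
    c2p_r, c2p_w = os.pipe()   # child -> parent
    pid = os.fork()
    if pid == 0:               # child: echo every byte back
        for _ in range(rounds):
            os.read(p2c_r, 1)
            os.write(c2p_w, b"x")
        os._exit(0)
    start = time.perf_counter()
    for _ in range(rounds):
        os.write(p2c_w, b"x")  # wake the child ...
        os.read(c2p_r, 1)      # ... then block until it answers
    elapsed = time.perf_counter() - start
    os.waitpid(pid, 0)
    return elapsed / rounds * 1e6
```

On architectures with large register state (such as SPARC's register windows, which must be spilled on a switch), this number grows far faster than the machine's straight-line MIPS rating would suggest.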

This paper, like the previous one, points out the discrepancy between new
architecture features and old OS needs, but gives more importance to IPC, VM,
and parallel processing rather than mere memory access as the basis for the
analysis. I personally learned the importance of HW support for better OS
performance, and vice versa, meaning that the two should not be designed
independently. Future work is not explicitly mentioned, even though the
purpose of the paper is clearly to bring about a change in how HW and OS are
designed.
