CSE 221: System Measurement Project
Due dates:
1) Intro, Machine Description, and CPU (draft):
Thursday, March 3, 2005 in class
2) Final report with all measurements plus code:
Wednesday, March 16, 2005 at noon
In building an operating system, it is important to be able to
determine the performance characteristics of underlying hardware
components (CPU, RAM, disk, network, etc.), and to understand how
their performance influences or constrains operating system services.
Likewise, in building an application, one should understand the
performance of the underlying hardware and operating system, and how
they relate to the user's subjective sense of that application's
"responsiveness". While some of the relevant quantities can be found
in specs and documentation, many must be determined experimentally.
While some values may be used to predict others, the relations between
lower- and higher-level performance are often subtle and difficult to
predict.
In this project, you will create, justify, and apply a set of
experiments to a system to characterize and understand its
performance. In addition, you may explore the relations between some
of these quantities. In doing so, you will study how to use
benchmarks to usefully characterize a complex system. You should also
gain an intuitive feel for the relative speeds of different basic
operations, which is invaluable in identifying performance bottlenecks.
You may work either alone or in two-person groups. In groups, both
members receive the same grade. If collaboration issues arise,
contact me as soon as possible: flexibility in dealing with such
issues decreases as the deadline approaches.
This project has two parts. First, you will implement and perform
a series of experiments. Second, you will write a report documenting
the methodology and results of your experiments. When you finish, you
will submit your report as well as the code used to perform your
measurements.
Your report will have a number of sections including an
introduction, a machine description, and descriptions and discussions
of your experiments.
1) Introduction
Describe the goals of the project and, if you are in a group, who
performed which experiments. State the language you used to implement
your measurements, and the compiler version and optimization settings
you used to compile your code. Estimate the amount of time you spent
on this project.
2) Machine Description
Your report should contain a reasonably detailed description of the
test machine(s). The relevant information should be available either
from the system (e.g., sysctl on BSD, /proc on Linux, System Profiler
on Mac OS X), or online. You will not be graded on
this part, and it should not require much work, but in explaining and
analyzing your results you will find these numbers useful. You should
report at least the following quantities:
- Processor: model, cycle time, cache sizes (L1, L2, instruction, data).
- Memory bus.
- I/O bus.
- RAM size.
- Disk: capacity, RPM, controller cache size.
- Network card speed.
- Operating system (including version/release)
3) Experiments
For each quantity, perform your experiments by following these steps,
and address each step in your report:
- Estimate the base hardware performance of the operation and cite
the source you used to determine this quantity (system info, a
particular document). For example, when measuring disk read
performance for a particular size, you can refer to the disk
specification (easily found online) to determine seek, rotation, and
transfer performance. Based on these values, you can estimate the
average time to read a given amount of data from the disk assuming no
caching.
- Make a guess as to how much overhead the OS will add to the base
hardware performance. For a disk read, this will include the system
call, arranging the read I/O operation, handling the completed read,
and copying the data read into the user buffer. We will not grade you
on your guess; this is for you to test your intuition. (Obviously you
can do this after performing the experiment to derive an accurate
"guess", but where's the fun in that?)
- Combine the base hardware performance and your estimate
of software overhead into an overall prediction of performance.
- Implement and perform the measurement. In all cases, you should
run your experiment multiple times, for long enough to obtain
repeatable measurements, and average the results.
- Clearly explain the methodology of your experiment.
- Present your results:
- For measurements of single quantities (e.g., system call
overhead), use a table to summarize your results. In the table
report the base hardware performance, your estimate of software
overhead, your prediction of operation time, and your measured
performance.
- For measurements of operations as a function of some other
quantity, report your results as a graph with operation time on the
y-axis and the varied quantity on the x-axis. Include your estimates
of base hardware performance and overall prediction of operation time
as curves on the graph as well.
- Discuss your results:
- Cite the source for the base hardware performance.
- Compare the measured performance with the predicted performance.
If they are wildly different, speculate on reasons why. What
may be contributing to the overhead?
- Evaluate the success of your methodology. How accurate
do you think your results are?
- For graphs, explain any interesting features of the curves.
- Answer any questions specifically mentioned with the operation.
Do not underestimate the time it takes to describe your methodology
and discuss your results in your report.
- CPU, Scheduling, and OS services
- Procedure call overhead:
Report as a function of number of integer arguments from 0-7.
What is the incremental overhead of an argument?
- System call overhead:
Report the cost of a minimal system call. How does it
compare to the cost of a procedure call?
- Task creation time:
Report the time to create and run both a process and
a kernel thread. How do they compare?
- Context switch time:
Report the time to context switch from one process to
another, and from one kernel thread to another. How
do they compare?
- Memory
- RAM access time:
Report latency for integer accesses to main memory and the L1
and L2 caches.
- RAM bandwidth:
Report bandwidth for both reading and writing.
- Network
- Round trip time.
- Peak bandwidth.
- Connection overhead: Report setup and tear-down.
Evaluate for the TCP protocol. For each quantity, compare both
remote and loopback interfaces. Comparing the remote and
loopback results, what can you deduce about baseline network
performance and the overhead of OS software? For both round
trip time and bandwidth, how close to ideal hardware performance
do you achieve? In describing your
methodology for the remote case, either provide a machine
description for the second machine (as above), or use two identical
machines.
- File System
- Size of file cache: Note that this may be very sensitive
to other load on the machine.
- File read time: Report for both sequential and random access
as a function of file size. Discuss the sense in which
your "sequential" access might not be sequential. Ensure
that you are not measuring cached data.
- Remote file read time: Repeat the previous experiment for
a remote file system. What is the "network penalty" of
accessing files over the network?
- Contention: Report the average time to read one file
system block of data as a function of the number of
processes simultaneously performing the same operation on
different files on the same disk (and not in the file buffer cache).
During the quarter you have read a number of papers describing various
system measurements, including V, Sprite, microkernels, Scheduler
Activations, LRPC, LFS, and IO-Lite. You may find these papers useful as
points of reference. In addition, other papers you may find useful for
help with system measurement include:
- John K. Ousterhout, Why
Aren't Operating Systems Getting Faster as Fast as Hardware?,
Proc. of USENIX Summer Conference, pp. 247-256, June 1990.
- J. Bradley Chen, Yasuhiro Endo, Kee Chan, David Mazieres,
Antonio Dias, Margo Seltzer, and Michael D. Smith, The
measured performance of personal computer operating systems,
Proc. of ACM SOSP, pp. 299-313, December 1995.
- Larry McVoy and Carl Staelin, lmbench:
Portable Tools for Performance Analysis, Proc. of USENIX Annual
Technical Conference, January 1996.
- Aaron B. Brown and Margo I. Seltzer, Operating
system benchmarking in the wake of lmbench: a case study of the
performance of NetBSD on the Intel x86 architecture, Proc. of ACM
SIGMETRICS, pp. 214-224, June 1997.
You may read these papers, or other references, for strategies on
performing measurements, but you may not examine code to copy or
replicate the implementation of a measurement. For example, reading
the lmbench paper is fine, but downloading and looking at the
lmbench code violates the intent of the project.
Finally, it goes almost without saying that you must implement all
of your measurements. You may not download a tool to perform the
measurements for you.
We will grade your project on the relative accuracy of your
measurement results (disk reads performing faster than the buffer
cache are a bad sign) as well as the quality of your report in terms
of methodology description (can we understand what you did and why?),
discussion of results (answering specific questions, discussing
unexpected behavior), and the writing (lazy writing will hurt your grade).