Paper Evaluations

Carnevali Silvio
Wed, 31 May 2000 18:44:03 PDT


This paper presents the log-structured file system (LFS), a disk storage
technique, alternative to existing file systems, that speeds up both file
writing and crash recovery. The idea of using a log as permanent data storage
is new, and departs somewhat from existing file system designs, which used
logs only as temporary storage.

The log is a sequential disk structure divided into segments. A segment
cleaner is used to defragment the system by separating old, slowly changing
data from young, rapidly changing data; this is important because the
efficiency of the system depends on the availability of large amounts of
free space.
One of the main advantages of an LFS is that the time spent seeking data is
much lower than in a conventional UNIX FS, which enables better disk
bandwidth utilisation; this is very important because recent technological
developments have greatly improved CPU performance and disk capacities but
have left disk performance behind, so disk performance has become the main
bottleneck for overall system performance.

In an LFS, inode maps are cached, and writes of many small data units are
batched into a single large write; this reduces the number of disk accesses,
thus optimizing bandwidth utilisation. Since writing large chunks of data
requires large amounts of free space, special care is devoted to free space
management; for this reason, the segment cleaning mechanism is carefully
analyzed in order to provide the highest ratio of benefit to cost.
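The cleaning policy described in the paper can be sketched as follows: a segment's priority is its benefit-to-cost ratio ((1 - u) * age) / (1 + u), where u is the fraction of the segment still live (reading the segment costs 1, writing its live data back costs u, and the space recovered is 1 - u, weighted by the age of the data). The function and data names below are illustrative, not the paper's actual code:

```python
def cleaning_priority(utilization: float, age: float) -> float:
    """Benefit-to-cost ratio for cleaning one segment.

    Reading the whole segment costs 1 unit, writing its live data back
    costs `utilization` units, and the free space recovered is
    (1 - utilization), weighted by the age of the segment's data.
    """
    return ((1.0 - utilization) * age) / (1.0 + utilization)

# A cold, mostly empty segment beats a hot, nearly full one:
segments = [
    {"id": 0, "u": 0.9, "age": 2.0},   # hot, nearly full
    {"id": 1, "u": 0.3, "age": 50.0},  # cold, mostly free
]
best = max(segments, key=lambda s: cleaning_priority(s["u"], s["age"]))
```

Here `best` is segment 1: cleaning it yields far more free space per unit of I/O than cleaning the hot segment.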
Finally, checkpoints are introduced as a mechanism to save consistent log
states at pre-determined intervals; after a crash, recovery starts from the
last checkpoint and rolls forward through the rest of the log.
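The roll-forward idea can be sketched as follows: restore the state saved at the last checkpoint, then replay only the log records written after it, skipping any torn partial writes. All names and record layouts here are illustrative assumptions, not the paper's actual on-disk structures:

```python
def recover(checkpoint: dict, log: list) -> dict:
    """Roll-forward sketch: restore the checkpointed inode map, then
    replay log records newer than the checkpoint sequence number."""
    state = dict(checkpoint["inode_map"])        # state as of the checkpoint
    for record in log:
        if record["seq"] <= checkpoint["seq"]:
            continue                             # already in the checkpoint
        if record["valid"]:                      # skip torn/partial writes
            state[record["inode"]] = record["addr"]
    return state

checkpoint = {"seq": 10, "inode_map": {1: "A", 2: "B"}}
log = [
    {"seq": 10, "inode": 1, "addr": "A", "valid": True},
    {"seq": 11, "inode": 2, "addr": "C", "valid": True},
    {"seq": 12, "inode": 3, "addr": "D", "valid": False},  # torn write
]
state = recover(checkpoint, log)
```

Only the record with sequence 11 is applied, so recovery time depends on the amount of log written since the last checkpoint, not on the size of the disk.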

From this paper I learned how an LFS can be used to speed up data access to
disk, thus enabling better bandwidth utilisation. The ideas are clearly
presented, even though I would have liked more details about the log concept
itself, which is still unclear to me. Besides that, this paper is pretty good
and suggests new improvements here and there (like the idea of taking
checkpoints after a certain amount of data has been written to the log,
instead of at fixed time intervals).

A Fast FS for UNIX

This paper presents an upgrade of the existing UNIX FS that provides better
utilisation of disk bandwidth without increasing the percentage of wasted
space. Most concepts were preserved from the existing UNIX FS for
compatibility purposes, with some new ideas added on top.

The main enhancement relies on the fact that larger block sizes speed up
data transfer by reducing the overhead due to access times. However, wasted
space is also directly related to block size, so special care was given to
optimal storage utilisation. For this reason, blocks were divided into small
fragments, and a file expansion policy was designed to allocate data
optimally among blocks and fragments; as a result, the percentage of waste
for a given fragment size was equivalent to the waste in a non-fragmented
system with block size equal to the fragment size, since the overhead of the
block/fragment map is negligible.
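The waste equivalence can be checked with a small sketch: only a file's last partial block is stored in fragments, so internal fragmentation is bounded by the fragment size, just as in a plain file system whose block size equals the fragment size. The function and sizes are illustrative:

```python
import math

def wasted_bytes(file_size: int, block: int, fragment: int) -> int:
    """Internal fragmentation for one file: full blocks waste nothing,
    and the tail is rounded up to a whole number of fragments."""
    tail = file_size % block
    if tail == 0:
        return 0
    return math.ceil(tail / fragment) * fragment - tail

# 4096-byte blocks with 1024-byte fragments waste exactly as much as a
# non-fragmented system using 1024-byte blocks:
same = wasted_bytes(5000, 4096, 1024) == wasted_bytes(5000, 1024, 1024)
```

For a 5000-byte file both layouts waste 120 bytes, while the fragmented system still gets the transfer benefits of the 4096-byte block.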
Furthermore, the new FS takes the parameters of the disk hardware into
account in order to get the most out of it. For example, the disk rotation
speed was considered when computing the required spacing between two
consecutive blocks of a file, so that the next block arrives under the head
just as the system is ready to transfer it.
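The rotational spacing computation can be sketched like this: count how many sectors pass under the head during the per-transfer service time, and place the next logical block that far ahead on the track. The geometry and timing numbers are illustrative assumptions, not the paper's measured values:

```python
import math

def blocks_to_skip(rpm: int, sectors_per_track: int, service_us: float) -> int:
    """Sectors that rotate past the head while the CPU prepares the next
    transfer; the next block is laid out that many sectors ahead."""
    us_per_rev = 60_000_000 / rpm                 # microseconds per revolution
    us_per_sector = us_per_rev / sectors_per_track
    return math.ceil(service_us / us_per_sector)

# A 3600 rpm disk with 32 sectors per track and a 1 ms service time:
skip = blocks_to_skip(3600, 32, 1000.0)
```

With these numbers two sectors pass during the service time, so consecutive blocks are spaced two sectors apart; placing them contiguously would instead cost a full extra rotation per block.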
Some new functional enhancements not related to performance were also added
to the system. The new FS thus supported long file names, several kinds of
file locking, symbolic links across file systems, file renaming in a single
system call, and per-user disk quotas.

This paper helped me see one of the steps in the evolution of the UNIX FS
over time. Most of the added features are still in use today, while some
others were added later. I think this research was useful for improving the
performance of the existing UNIX FS; future work is not mentioned, though;
actually, there isn't even a conclusion...