Yu XU (yxu@cs.ucsd.edu)
Thu, 1 Jun 2000 00:33:09 -0700 (PDT)

Evaluation of "A Fast File System for Unix"

This paper describes a reimplementation of the Unix file system. In a
word, the new file system clusters data that is sequentially accessed and
provides two block sizes, allowing fast access to large files without
wasting large amounts of space on small files.
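The two-size scheme can be sketched as follows: a file occupies full blocks, and only its tail is stored in smaller fragments. This is my own illustration, using the 4096-byte block / 1024-byte fragment example from the paper; the function name and return shape are invented.

```python
# Sketch of FFS-style two-level allocation: full blocks for the body
# of a file, smaller fragments for its tail.
BLOCK = 4096
FRAG = 1024   # each block is divided into 4 fragments here

def layout(file_size):
    """Return (full_blocks, tail_fragments, wasted_bytes)."""
    full_blocks, tail = divmod(file_size, BLOCK)
    # The tail is rounded up to whole fragments, not to a whole block.
    tail_frags = -(-tail // FRAG)          # ceiling division
    wasted = tail_frags * FRAG - tail
    return full_blocks, tail_frags, wasted

# An 11000-byte file: 2 full blocks + 3 fragments, wasting 264 bytes,
# versus 1288 bytes wasted if the tail took a whole 4096-byte block.
```

The point of the sketch is that internal fragmentation is bounded by the fragment size rather than the block size, which is why large blocks become affordable for small files.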
The Fast File System (FFS) has several important features:
FFS allows parameterizing a file system. Each file system is
parameterized so that it can be adapted to the characteristics of the disk on
which it is placed. To ease the calculation of finding rotationally optimal
blocks, the superblock contains rotational layout tables.
FFS has two distinct layout policies: a global policy and a local policy.
The global policy routines call the local allocation routines with requests
for specific blocks.
FFS supports long file names.
FFS provides a mechanism to place advisory locks on files.
FFS supports symbolic links, which make it possible to extend the name
space across file systems and even across physical machines.
The rename system call is more robust in FFS.
A quota mechanism (with soft and hard limits) gives administrative control
over resource usage.

Several questions about FFS:

In FFS, almost no deadlock detection is attempted. I think part of the
reason might be that the kernel already provides some kind of deadlock
avoidance or deadlock recovery.

I agree that 512 bytes is too small a block size for access efficiency.
But 4096 was chosen according to the authors' experience. I think it would be
better if the block size could be chosen at configuration time (I don't find
the paper saying that the block size can also be parameterized). In a pure
audio and video archive system, say, a much larger block size would be better.
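A quick back-of-envelope calculation shows the tradeoff at stake. The file sizes below are made up for illustration; the function simply measures internal fragmentation in the last block.

```python
# Bigger blocks waste more space on small files: compute the bytes
# lost to internal fragmentation in the final block of each file.
def waste(file_size, block_size):
    """Bytes of internal fragmentation in the last block."""
    tail = file_size % block_size
    return (block_size - tail) % block_size

small_files = [100, 700, 1500, 3000]     # hypothetical small files
for bs in (512, 4096, 65536):
    total = sum(waste(f, bs) for f in small_files)
    print(bs, total)
```

For a media archive made of multi-megabyte files, the waste in the last block is negligible, which is why a configurable, much larger block size looks attractive there.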

Evaluation of "The Design and Implementation of a Log-Structured File System"

Basically, this paper keeps reminding me of database design principles.
The main idea of a log-structured file system is that it collects large
amounts of new data in a file cache in main memory, then writes the data to
disk sequentially in a log-like structure, thereby speeding up both file
writing and crash recovery. The log is the only structure on disk and
contains indexing information so that files can be read back from the log
efficiently.
The key to a log-structured file system is that there should always be
large extents of free space available for writing new data. To achieve that,
LFS divides the log into segments and uses a segment cleaner to compress the
live information from heavily fragmented segments.

Several implementation details in LFS:

LFS uses an inode map to maintain the current location of each inode. A
combination of threading and copying is used for free-space management; there
is no bitmap or free-block list in LFS.
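The inode map idea can be sketched in a few lines. This is my own toy model, not the paper's code: the log is an append-only list, and the map records the newest location of each inode.

```python
# Minimal sketch of an LFS inode map: inodes live in the log rather
# than at fixed disk addresses, so a map tracks each inode's latest copy.
log = []          # the append-only log, modeled as a list of records
inode_map = {}    # inode number -> index of the newest copy in the log

def write_inode(inum, inode_data):
    log.append((inum, inode_data))       # append the new version at the tail
    inode_map[inum] = len(log) - 1       # remember where it landed

def read_inode(inum):
    return log[inode_map[inum]][1]       # one map lookup, one "disk read"

write_inode(7, {"size": 100})
write_inode(7, {"size": 200})            # supersedes the earlier copy
# read_inode(7) now returns the newest version, {"size": 200}
```

The earlier copy of inode 7 is what the segment cleaner will eventually reclaim as dead data.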

A cost-benefit policy is used to select segments to clean. The assumption
behind this policy is that the older the data in a segment, the longer it is
likely to remain unchanged, so stability can be estimated by the age of the
data. To support the cost-benefit cleaning policy, LFS maintains a segment
usage table.
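The policy reduces to one ratio, as given in the LFS paper: cleaning a segment costs reading it (1) plus rewriting its live fraction u, and yields (1 - u) of free space, weighted by the data's age.

```python
# The LFS cost-benefit ratio: benefit/cost = (1 - u) * age / (1 + u),
# where u is the live fraction of the segment and age estimates stability.
def cost_benefit(u, age):
    """u: fraction of live data in the segment (0..1);
    age: time since the youngest block in the segment was modified."""
    return (1 - u) * age / (1 + u)

# An old, half-utilized segment beats a young, nearly empty one:
segments = [(0.5, 1000), (0.1, 10)]      # hypothetical (u, age) pairs
best = max(segments, key=lambda s: cost_benefit(*s))
```

This is what makes the cleaner prefer cold segments: even at fairly high utilization, old data is worth compacting because it will stay compacted.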
For crash recovery, checkpoints and roll-forward are used. They are quite
similar to the techniques used in database systems.
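The database analogy can be made concrete with a toy recovery loop. This is a heavy simplification of what LFS actually recovers (it also rebuilds the inode map and segment usage table); the record format and state shape here are invented for illustration.

```python
# Sketch of checkpoint + roll-forward recovery: restore state from the
# last checkpoint, then replay log records written after that point.
def recover(checkpoint_state, checkpoint_pos, log):
    state = dict(checkpoint_state)       # start from the checkpointed state
    for inum, data in log[checkpoint_pos:]:
        state[inum] = data               # roll forward newer log records
    return state

log = [(1, "a"), (2, "b"), (1, "c")]
# Checkpoint taken after the first record; the last two are replayed.
state = recover({1: "a"}, 1, log)
```

Because only the log tail after the checkpoint is scanned, recovery time depends on checkpoint frequency rather than on disk size.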

Basically, we can find all the technologies used in this paper (the
log-structure concept, the inode map, cost-benefit segment selection, age
sorting, roll-forward, checkpoints, ...) in previous papers. The point is
that in LFS these technologies work together to form an efficient new file
system design.