Qiao XIN (qxin@cs.ucsd.edu)
Wed, 31 May 2000 23:39:28 -0700 (PDT)

Qiao Xin

Evaluation of the Paper: A Fast File System for UNIX

This paper describes the reimplementation of the UNIX file system as
the Fast File System and the changes from the original design. The
original UNIX file system used a small 512-byte block size and limited
read-ahead, and the many seeks these caused made the system incapable
of providing the data throughput rates that many applications
require. To improve throughput, the old file system's basic block size
was changed from 512 to 1024 bytes. Although this doubled throughput,
the old file system still used only about 4 percent of the disk
bandwidth. The problem was that the initially ordered free list became
entirely random as files were created and removed, causing files to
have their blocks allocated randomly over the disk and forcing a seek
before every block transfer.

In the new file system, the block size can be any power of 2 greater
than or equal to 4096 bytes. Data are laid out so that large blocks
can be transferred in a single disk transaction, greatly increasing
throughput. A single file system block can be divided into fragments,
so that small files are stored efficiently even with large blocks and
without undue waste. The result is about the same disk utilization
when the new file system's fragment size equals the old file system's
block size. A goal of the new file system is to parameterize the
processor capabilities and mass storage characteristics so that blocks
can be allocated in an optimal, configuration-dependent way. The new
file system tries to allocate each new block of a file on the same
cylinder as the previous block, at a rotationally optimal position.
The layout policies also try to improve performance by placing the
inodes of all files in a directory in the same cylinder group,
localizing data that are accessed concurrently while spreading out
unrelated data.
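The block-and-fragment trade-off above can be illustrated with a toy
space calculation. This is a hypothetical sketch of the accounting,
not the actual FFS allocator; the function names and parameters are
my own:

```python
import math

def space_used(size, block_size, frag_size=None):
    """Bytes of disk consumed to store a `size`-byte file.

    Without fragments, the tail of the file still occupies a full
    block.  With fragments (block_size a multiple of frag_size),
    the tail is stored in just enough fragments.
    """
    if frag_size is None:
        return math.ceil(size / block_size) * block_size
    full_blocks, tail = divmod(size, block_size)
    used = full_blocks * block_size
    if tail:
        used += math.ceil(tail / frag_size) * frag_size
    return used

# A 500-byte file on a file system with 4096-byte blocks:
print(space_used(500, 4096))         # whole block: 4096 bytes
print(space_used(500, 4096, 1024))   # one 1024-byte fragment: 1024 bytes
```

With a 1024-byte fragment size, the small file wastes only 524 bytes
instead of 3596, which is why utilization matches an old file system
whose block size equals the new fragment size.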

On the other hand, there are constraints in the new system. The
throughput rate is tied strongly to the amount of free space that is
maintained, and this reserved free space adds to the amount of
waste. There is still room to improve the percentage of disk bandwidth
used. Performance is also limited by memory-to-memory copy
operations. Further implementation work, such as rewriting disk
drivers to chain together kernel buffers and batching up allocations,
could be done for better performance.

Evaluation of the Paper: Log-structured File System

The log-structured file system Sprite LFS is a new disk storage
management technique that uses disks much more efficiently than
traditional file systems. It tries to lessen the disk-bound problem
caused by the widening gap between improvements in CPU speed and
improvements in disk access time.

A log-structured file system stores data permanently in the log, the
only structure on disk. It buffers a sequence of file system changes
in the file cache and then writes all of the changes to disk
sequentially in a single disk write operation, thus improving write
performance. For small files, it converts the many small synchronous
random writes of traditional file systems into large asynchronous
sequential transfers that can utilize nearly all of the raw disk
bandwidth.
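The buffering idea can be sketched as a toy model. This is my own
illustration of the batching behavior, not Sprite LFS code; the class
and its threshold parameter are hypothetical:

```python
class LogBuffer:
    """Toy model of LFS write buffering: changes accumulate in the
    file cache and are flushed to the end of an append-only log in
    one sequential write."""

    def __init__(self, flush_threshold):
        self.flush_threshold = flush_threshold
        self.pending = []      # buffered (file_id, data) changes
        self.log = []          # the on-disk log (append-only)
        self.disk_writes = 0   # count of sequential write operations

    def write(self, file_id, data):
        self.pending.append((file_id, data))
        if len(self.pending) >= self.flush_threshold:
            self.flush()

    def flush(self):
        if self.pending:
            self.log.extend(self.pending)  # one sequential transfer
            self.disk_writes += 1
            self.pending = []

buf = LogBuffer(flush_threshold=4)
for i in range(8):
    buf.write(i, b"x")        # 8 small logical writes...
print(buf.disk_writes)        # ...become 2 sequential disk writes
```

Eight small writes reach the disk as two large sequential transfers,
which is the source of the write-performance gain for small files.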

Sprite LFS does not place inodes at fixed locations. Instead, an inode
map gives the disk address of each inode, and the map entries for
active inodes are compact enough to be cached in main memory. The disk
is divided into segments, and a combination of threading and copying
is used for free space management. A segment cleaner compresses the
live data out of heavily fragmented segments. A simple cleaning policy
based on cost and benefit allows high overall disk capacity
utilization yet provides a low write cost.
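The cost-benefit policy ranks each segment by the free space cleaning
it would generate, weighted by the age (stability) of its data, over
the cost of reading the segment and writing its live data back. A
small sketch of that ratio, with a hypothetical list of candidate
segments:

```python
def cost_benefit(u, age):
    """Cost-benefit ratio for choosing segments to clean.

    u   -- segment utilization (fraction of live data, 0 <= u < 1)
    age -- time since the segment's youngest data was modified,
           used as a proxy for how long the free space will last

    Cleaning costs 1 (read the segment) + u (write live data back)
    and frees (1 - u) of a segment's worth of space.
    """
    return (1 - u) * age / (1 + u)

# Hypothetical (utilization, age) pairs for three segments:
segments = [(0.9, 100), (0.2, 5), (0.5, 50)]
best = max(segments, key=lambda s: cost_benefit(*s))
print(best)  # the cleaner picks the segment with the best ratio
```

Note that a moderately utilized but old, stable segment can beat a
nearly empty but recently written one, since cold free space stays
free longer.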

One problem is that the system achieves temporal locality, which is
not efficient for handling sequential re-reads of files that were
written randomly. There are also aspects of the cleaning policy that
remain to be clarified.