Paper Evaluations: 05-30-2000

Tue, 30 May 2000 00:08:34 -0700 (PDT)

Title: "Memory Coherence in Shared Virtual Memory Systems"

This paper presents algorithms for implementing shared virtual memory
in loosely-coupled multiprocessors. The algorithms provide memory
coherence, so that the value returned by a read operation is always the
same as the value written by the most recent write operation. In
loosely-coupled multiprocessors, each processor has its own physical
memory. The virtual memory described provides a single address space
which is shared among all the processors. Pages in the virtual memory
are swapped not only from memory to disk, but between the physical
memories of the individual processors. This approach facilitates
process migration.

Processors can access any memory location in the address space
directly. The mapping between local memories and the shared virtual
address space is implemented by memory mapping managers, which are
also responsible for guaranteeing address space coherence. Mapping
managers can be implemented in the system in different ways. For all
the algorithms described, the ownership of a page is not
fixed. Moreover, upon a write fault, the ownership of the respective
page is changed, and messages are sent to processors currently holding
a read copy of the requested page in order to invalidate the copies.
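The write-fault handling described above can be sketched as follows. This is an illustrative sketch only, assuming a simple copy-set representation; the names (Page, copy_set, handle_write_fault) are mine, not the paper's.

```python
# Sketch of write-invalidate ownership transfer (illustrative names,
# not taken from the paper).

class Page:
    def __init__(self):
        self.owner = None      # processor currently owning the page
        self.copy_set = set()  # processors holding read copies

def handle_write_fault(page, faulting_proc, send_invalidate):
    """Transfer ownership of the page to the faulting processor and
    invalidate all outstanding read copies, as coherence requires."""
    for proc in page.copy_set:
        if proc != faulting_proc:
            send_invalidate(proc, page)  # read copies become stale
    page.copy_set = {faulting_proc}
    page.owner = faulting_proc
```

After the invalidations complete, only the new owner holds a valid copy, so a subsequent read on any other processor must fetch the page from it.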

A centralized manager algorithm assumes that there is one processor
responsible for keeping the information about page ownership. The
manager may be a bottleneck in this approach. To avoid scalability
problems, distributed manager algorithms are proposed. In these
algorithms, the task of keeping information about page ownership is
shared among the processors. In a fixed distributed manager algorithm,
a distribution function indicates which processor holds the
information about the requested page. A good distribution function
is difficult to find, though. Instead of using a fixed distribution,
dynamic distributed manager algorithms keep page information in the
current owner of the page. Each processor has a probOwner field for
each page in its page table which indicates a possible owner. If the
processor indicated as possible owner does not hold the page, it
forwards the message to the processor in the probOwner field of its
page table. Thus, the request is forwarded until it reaches the
owner. A mechanism based on broadcasts is described to avoid the
growth of forwarding chains.
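The probOwner forwarding chain can be sketched roughly as below. The sketch is mine, not the paper's code: it follows the hint chain until the true owner is reached, and then updates the hints along the path so that later requests take shorter chains (the paper likewise updates probOwner fields as requests are forwarded).

```python
# Illustrative sketch of probOwner request forwarding. All names
# (locate_owner, prob_owner, owner) are assumptions for illustration.

def locate_owner(page, requester, prob_owner, owner):
    """Forward a request along probOwner hints until the real owner
    of `page` is found; then point every processor on the path
    directly at the owner to shorten future chains."""
    path = []
    proc = requester
    while proc != owner[page]:
        path.append(proc)
        proc = prob_owner[proc][page]  # forward to the hinted owner
    for p in path:
        prob_owner[p][page] = proc     # collapse the chain
    return proc
```

With the chain collapsed, a second request from any processor on the old path reaches the owner in a single hop.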

In my opinion, the paper is well structured and the goal of the paper
is very clear. To show the reader how the described algorithms
perform, tests were run on four parallel programs. The tests show
that the speedup is very good for
some classes of programs, but not for others that require, for
instance, data sets which are read only once. The algorithms, however,
do not include mechanisms to recover from processor
failures. Moreover, the algorithms do not tolerate message losses:
a lost message may leave the system in an inconsistent state. In a
forwarding chain under the dynamic distributed manager algorithms,
for instance, a lost request may cause some processors to point the
requester at a processor that is no longer the owner of the page.


Title: "Implementing Global Memory Management in a Workstation
Cluster"

This paper describes the design and implementation of a global memory
service for a cluster of workstations. This service is included in the
lowest level of the operating system so that higher-level services can
benefit from it. Among these higher-level services are virtual memory
paging, mapped files, and file system buffering.

The global memory service uses the idle physical memory of processors
in the same local area network to cache pages evicted from the
physical memory of other processors. Thus, an application requiring
more physical memory than is available on the processor where it is
currently running can use the physical memory of other processors
instead of swapping pages to disk and later retrieving them. Hence,
the latency of page faults decreases.

Addition and deletion of nodes occurs dynamically. A new node checks
with a master node, which then notifies all existing members of the
new addition. The master node also checks for the liveness of the
other nodes. Another important aspect of the memory service is that no
data is lost when a cluster node crashes: a modified page is written
to disk before it moves from local memory to global memory. Here,
local memory holds the pages currently in use by a processor, while
global memory holds pages evicted from some other processor's local
memory. In addition, memory coherency is not provided by the service,
and hence it must be enforced by the application if it is required.

When a page fault occurs, the requested page may be cached in the
global memory. Thus, a local page has to be evicted so that the
processor can bring in the requested page. The evicted page may be
either moved to
the global memory or discarded. This decision depends on the age
information of the page. Hence, a mechanism based on epochs is
provided to guarantee that each node has approximate information about
global pages.
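The age-based eviction decision might be sketched as below. This is a simplified illustration under my own assumptions, not the paper's actual algorithm: at the start of an epoch an age threshold is estimated from the cluster-wide age distribution, and at fault time a page older than the threshold is discarded rather than sent to global memory.

```python
# Simplified sketch of epoch-based age information guiding eviction.
# Names (plan_epoch, eviction_target, min_age) are illustrative.
import heapq

def plan_epoch(all_page_ages, expected_replacements):
    """Estimate the epoch's age threshold: the `expected_replacements`
    oldest pages cluster-wide are the ones expected to be replaced,
    so their minimum age serves as the cutoff."""
    oldest = heapq.nlargest(expected_replacements, all_page_ages)
    return min(oldest) if oldest else float("inf")

def eviction_target(page_age, min_age):
    """At fault time, decide the fate of the evicted page: pages at
    or beyond the threshold are likely dead and are discarded;
    younger pages are forwarded to global memory on an idle node."""
    return "discard" if page_age >= min_age else "global_memory"
```

Because the threshold is computed once per epoch from approximate global information, each node can make this decision locally without consulting the rest of the cluster on every fault.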

A prototype for this service was implemented and some benchmarks were
executed to evaluate performance, presenting some interesting
results. Varying the frequency with which nodes change from idle to
non-idle does not significantly affect the speedup obtained for the
application used. The speedup might depend on the idle memory
distribution, but, as the results show, the speedup is approximately
constant for different distributions. Thus, it is not important if the
idle memory is concentrated in a few nodes or spread among several
nodes. The global memory service, however, may affect non-idle
nodes in the cluster. The results demonstrate that the service causes
no slowdown on non-idle nodes without idle memory. On the other hand,
if a node is non-idle but has idle memory, then the service does
impact the node's performance.

This paper presents solid research work, which includes a proposed
service design, a prototype implementation and a prototype performance
evaluation. Moreover, the performance evaluation is not only in terms
of microbenchmarks, but also in terms of applications that could
benefit from a system implementing the global memory service. Compared
to the previous paper, however, the service does not provide memory
coherence. This approach may restrict the execution of parallel
programs in clusters using this memory service.

-- Flavio Junqueira

Flavio Paiva Junqueira, PhD Student
Department of Computer Science and Engineering
University of California, San Diego

Office location: AP&M 4349 e-mail: