CSE 221 Paper Evaluations

Greg Johnson (johnson@SDSC.EDU)
Mon, 15 May 2000 21:38:44 -0700 (PDT)

PERFORMING REMOTE OPERATIONS EFFICIENTLY ON A LOCAL COMPUTER NETWORK
A. Spector, 1982.

This paper describes a rich, efficient communication model for processors
connected by a local area network, the implementation issues the model
raises, and the performance of an example implementation. The model is
intended to form the basis of an intermediate-level communication subsystem
upon which high-level primitives might be built. Two distinguishing aspects
of Spector's design are:

* avoidance of a layered approach for reasons of efficiency, and

* selection and scope of the included functions driven by the desire to
  simplify the implementation of high-level primitives.

The model itself is a master-slave design in which a process (the master)
issues a remote reference that causes a remote operation to be performed by
another process (the slave), possibly returning a result. By providing
primitives that cause actual operations to be performed (instead of just
moving data), the design achieves a simpler, more reliable, more efficient
subsystem, one with a straightforward mapping to higher-level functions.
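
To make this concrete, here is a minimal sketch of the remote-reference /
remote-operation pattern in Python. It is my own illustration, not Spector's
microcode implementation; the operation codes, packet layout, port number,
and use of UDP are all assumptions.

    # Master issues remote references; the slave performs the operations on
    # its own memory and possibly returns a result (not just moving data).
    import socket
    import struct
    import threading

    OP_READ, OP_WRITE = 1, 2          # remote operations on the slave's memory
    REQUEST = struct.Struct("!BII")   # operation, address, value
    REPLY   = struct.Struct("!I")     # returned value (if any)

    def slave(sock, memory):
        """Perform each incoming remote reference as an actual operation."""
        while True:
            packet, master = sock.recvfrom(REQUEST.size)
            op, address, value = REQUEST.unpack(packet)
            if op == OP_WRITE:
                memory[address] = value
            result = memory[address]          # READ result, or WRITE echo
            sock.sendto(REPLY.pack(result), master)

    def remote_reference(sock, server, op, address, value=0):
        """Master side: issue a reference, block until the operation is done."""
        sock.sendto(REQUEST.pack(op, address, value), server)
        (result,) = REPLY.unpack(sock.recv(REPLY.size))
        return result

    if __name__ == "__main__":
        server_addr = ("127.0.0.1", 9999)
        srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        srv.bind(server_addr)
        threading.Thread(target=slave, args=(srv, [0] * 64), daemon=True).start()

        cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        remote_reference(cli, server_addr, OP_WRITE, address=7, value=42)
        print(remote_reference(cli, server_addr, OP_READ, address=7))  # -> 42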

The paper goes on to define a taxonomy of remote references, which vary
in reliability characteristics, temporal relationship between reference
and corresponding operation, necessity for flow control, whether or not
a value is returned, and the type of process performing the operation.
The resulting attribute classes provide a very rich set of primitives.
This richness results in a specificity that allows processes to use only
the amount of communication they require and enables efficient operation of
the primitives themselves.
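
The sketch below encodes those attribute dimensions as a small data
structure. The dimensions come from the paper as summarized above; the
particular attribute values (and all names) are my own illustrative choices,
not Spector's exact terms.

    # Attribute dimensions of a remote reference; values are illustrative.
    from dataclasses import dataclass
    from enum import Enum, auto

    class Reliability(Enum):
        MAYBE = auto()           # the reference may be lost
        AT_LEAST_ONCE = auto()   # operation performed one or more times
        EXACTLY_ONCE = auto()    # operation performed once despite retries

    class Timing(Enum):
        SYNCHRONOUS = auto()     # master waits for the operation to complete
        ASYNCHRONOUS = auto()    # master continues; operation happens later

    class Executor(Enum):
        DEDICATED_PROCESS = auto()   # a process reserved for this master
        SHARED_SERVER = auto()       # a general server process

    @dataclass(frozen=True)
    class RemoteReferenceClass:
        reliability: Reliability
        timing: Timing
        flow_controlled: bool        # does the subsystem pace the master?
        returns_value: bool
        executor: Executor

    # One point in the taxonomy: a synchronous, value-returning reference
    # with exactly-once semantics, roughly what a higher-level RPC would want.
    rpc_like = RemoteReferenceClass(Reliability.EXACTLY_ONCE, Timing.SYNCHRONOUS,
                                    flow_controlled=True, returns_value=True,
                                    executor=Executor.SHARED_SERVER)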

In his discussion of the performance of this model through both software
and microcode implementations, Spector shows that the software
implementation is two orders of magnitude slower. More importantly, he
shows how detrimental even a small number of remote references can be to
the effective performance of the CPU. Exactly how detrimental they are
determines the appropriate level of communication granularity to adopt in
higher-level programs and protocols.
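
A back-of-envelope calculation makes the point, though the costs below are
purely illustrative and not Spector's measurements: if a remote reference
costs one hundred times a local one, making just 1% of references remote
roughly halves effective CPU performance.

    # Average cost per reference when a fraction f of references is remote.
    def slowdown(f, local_cost=1.0, remote_cost=100.0):
        """Factor by which the average reference cost grows vs. all-local."""
        return ((1 - f) * local_cost + f * remote_cost) / local_cost

    for f in (0.001, 0.01, 0.05):
        print(f"{f:.1%} remote -> {slowdown(f):.2f}x slower")
    # 0.1% remote -> 1.10x slower
    # 1.0% remote -> 1.99x slower
    # 5.0% remote -> 5.95x slower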

----------------------------------------------------------------------------

IMPLEMENTING REMOTE PROCEDURE CALLS
A. Birrell and B. Nelson, 1984.

This paper describes the design goals, implementation, and performance of
remote procedure calls (RPC). The idea behind RPC is simply the transfer
of control and data between two processes across a network. RPC was
specifically designed for use in environments with lightly loaded local
area networks. Design goals include semantics akin to local procedure
calls, efficiency, and generality. The authors hoped to encourage the
development of distributed applications by providing a simple means of
performing remote computation (thereby removing "artificial" complexities
and reducing the problem to those characteristics unique to distributed
systems).

To support fine-grained calling (such as might be found locally), RPC is
heavily performance oriented, an aim generally accomplished through
simplicity of design. Another goal of RPC is powerful semantics (except,
like Asimov's Three Laws of Robotics, where this goal would conflict with
either of the first two: simplicity or efficiency).

A key point the authors make relates to the decision to adopt procedure
calls as the primary paradigm over message passing or another alternative.
Candidly, Birrell and Nelson state that their reasons for selecting
procedure calls had more to do with features of the Mesa language than with
any fundamental flaw in the alternatives.

RPC works by hiding communication details from the caller and server code
behind "stubs". Stubs perform the work of encoding and decoding arguments
and results for transmission. The amount of message traffic per call is
best suited to simple calls on local area networks; we see the effect of
this in the performance discussion later in the paper. RPC does well until
larger amounts of data (more than will fit into a single packet) are sent,
in which case an acknowledgement is sent for each packet.
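
To make the stub idea concrete, here is a minimal sketch of my own, not
Birrell and Nelson's Cedar/Mesa implementation; the JSON encoding, UDP
transport, port number, and method names are all assumptions. The caller
invokes what looks like an ordinary procedure while the stubs marshal
arguments and results into one packet each way.

    import json
    import socket
    import threading

    def server_stub(sock, implementation):
        """Decode each call packet, invoke the real procedure, encode the result."""
        while True:
            packet, caller = sock.recvfrom(4096)
            request = json.loads(packet)                      # unmarshal
            result = getattr(implementation, request["proc"])(*request["args"])
            sock.sendto(json.dumps({"result": result}).encode(), caller)

    class ClientStub:
        """Makes remote procedures look like ordinary local calls."""
        def __init__(self, server_addr):
            self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            self.server_addr = server_addr
        def __getattr__(self, proc):
            def call(*args):
                packet = json.dumps({"proc": proc, "args": args}).encode()
                self.sock.sendto(packet, self.server_addr)    # one packet per call
                reply, _ = self.sock.recvfrom(4096)           # reply doubles as ack
                return json.loads(reply)["result"]
            return call

    class Calculator:    # the "server code"; it knows nothing about the network
        def add(self, a, b):
            return a + b

    if __name__ == "__main__":
        addr = ("127.0.0.1", 9998)
        srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        srv.bind(addr)
        threading.Thread(target=server_stub, args=(srv, Calculator()),
                         daemon=True).start()
        print(ClientStub(addr).add(2, 3))   # caller sees an ordinary call -> 5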

=============================================================================
Greg Johnson                           office: (858) 534-8367
Senior Programmer Analyst              fax:    (858) 534-5152
San Diego Supercomputer Center         email:  johnson@sdsc.edu
University of California San Diego