8. The Net, the Web, and Some Economics
Let's begin with some technical points about the net with which many of you are probably already more or less familiar. The internet rests on the four layers of the TCP/IP protocol suite (the seven-layer model belongs to OSI, not to TCP/IP). The design is intended to allow building new protocols on top of this basis; for example, audio and video streaming are now in common use, but would have seemed almost science fiction in the earliest days of the internet. The web, which uses the http protocol, is a less recent example. The basic idea of the internet is packet switching, which means breaking information up into small "packets," each of which may get to its destination by a different route, and which are resent if not received. Thus it is by design highly decentralized, flexible, reliable, and extensible; this is in sharp contrast to most other media, e.g., books, newspapers, TV, telephones. (But strictly speaking, the internet is not a medium, but rather a carrier of media.)
Some aspects of the internet, and more specifically of electronic commerce ("ecommerce"), can be difficult to understand - and I don't mean technically. A vast array of questions is being raised by these extremely rapidly growing new media for communication. Many of these questions have actually vexed communication of all kinds for centuries, such as privacy, politics, and pornography; others may seem new, such as disintermediation, spam, and convergence - but are they really so new?
It will help to bear in mind the general principles that are listed below (some of which we have already discussed):
1. First, it should be clear that the internet exists in the context of society; it is not an autonomous force for change; it is subject to regulation, and in fact, it is increasingly being regulated; the internet and society are inextricably tied together. Technological determinism is not true for the internet; nor is social determinism.
2. Like anything else, in order to discuss the internet, we need appropriate concepts, with methods for applying those concepts, and theories within which they are given meaning. We also need values, in order to decide what is important. It is important to distinguish among many different kinds of concepts and assertions that are relevant to the internet. Some are (more or less) factual, while others are deeply intertwined with dubious theory, and still others are pure speculation. All socio-technical discourse is theory laden, even if the theories are vague or unarticulated, although it is certainly better to have explicit theories, of a high quality. Similarly, all socio-technical discourse is necessarily value laden, i.e., it involves values in an essential way, even if those values are implicit rather than explicit. It is always a good exercise to ask what are the values in some socio-technical discourse or text.
The following list is intended to illustrate the great diversity of kinds of concept that exist, each with its own loading:
8.1 Internet Mythology and Hyperbole
Urban legends are stories that appear mysteriously and spread spontaneously in varying forms, typically containing elements of humor and/or horror, with the latter often reflecting punishment to someone who flouts ordinary social conventions; they do not have to be false, and in fact often have a basis in reality, but assume a life of their own once in circulation. Originally they were spread within urban areas (hence their name) but now are typically spread over the internet. There are many websites devoted to collecting and analyzing urban legends, and the above discussion was taken from one of them (namely http://www.urbanlegends.com, where the text is attributed to Peter van der Linden and Terry Chan). It is especially interesting to investigate the values found in these stories, since these often reflect unrecognized but powerful tendencies in our society, e.g., in relation to the net.
The article Hopes and horrors: Technological utopianism and anti-utopianism in narratives of computerization, by Rob Kling, in Computerization and Controversy, ed. Rob Kling, pp. 40-58 lays out the problem rather well, but fails to describe case studies in any detail, and also fails to indicate what methods one might use in carrying out a case study. Of course, we advocate the use of narratology, especially the determination of values in narrative, along lines described earlier in the course. ... More to come here. ...
The writing of Ted Lewis provides many good examples of buzzword journalism and internet mythology; connoisseurs of these art forms will also enjoy the works of George Gilder and many items in back issues of Wired magazine. One can find many failed predictions in these sources, though of course, one can sometimes also find some bits of common sense, and perhaps occasionally even wisdom.
8.2 Some Fundamentals of Neo-Classical Economics

Every educated person should know a little economics, and engineers especially should, because so much of their professional work involves making cost/benefit tradeoffs. This subsection discusses some fundamentals of neo-classical economics, with a bit of emphasis on its basic assumptions.
Classical economics begins with Adam Smith's famous 1776 book An Inquiry into the Nature and Causes of the Wealth of Nations, usually known simply as The Wealth of Nations. Smith wanted to know what causes (what we would call) a high and rising standard of living. Although Smith thought that division of labor was the most important cause, today's economics puts its prime emphasis on resources, which are taken to include labor and capital, as well as so-called natural resources (oil, gas, etc., plus water, air, etc.); in addition, technological innovation is considered important by both Smith and modern economists, and recent ideas also call for considering intellectual and social capital. The role of resources in improving production can be either to use existing resources more productively, or to use more resources; these two strategies correspond (roughly) to micro- and macro-economics, respectively, which are the two main branches of modern economics. All this can be found in Roger McCain's webtext Essential Principles of Economics.
McCain's webtext characterizes economics as what he calls a reasonable dialog; but I would rather call it a dialog of practical reasoning (this emphasis on dialog goes way back, to Plato's version of Socrates in ancient Greece). Practical reasoning is what we do every day; it is a huge topic that is still poorly understood, in stark contrast to the formal reasoning of classical logic, which is very well developed, but only applies to an ideal world of Platonic truths. However, modern philosophers have made some progress with practical reasoning, and one important concept is defeasible reasoning, the idea of assertions that can be "defeated" by later assertions in a dialogue. For example, one might replace a default general assumption by some more specific information. Another important idea in this general area is the theory of vagueness, which notes that most words have fuzzy meanings; the theory of fuzzy sets provides a precise logic of this kind, which however fails to capture all the phenomenology of vagueness. (Work in cognitive psychology by Eleanor Rosch and others does better.)
McCain's wish to judge economics by its practical applications seems very good; however, this unfortunately yields a very harsh judgement, since economists do a terrible job of making predictions and giving advice, e.g., look at their predictions about the US economy, or their advice for restructuring Russia's economy. In reality, economics is one of the least scientific and least successful of the sciences, if it is a science at all.
Perhaps Adam Smith's most important concept is that of a free market; the idea is that free trade among buyers and sellers will result in optimal pricing and an optimal distribution of goods; the catch-phrase invisible hand is also used for this (alleged) process. This idea is still enormously influential today, but it is vital to recognize that Smith's theory depends upon the assumption that all players involved have approximately equal power; this fundamental assumption is called symmetry, and is obviously false in (most of) today's world; equal power must be taken in particular to include equal access to information, for which economists often use the technical term perfect information, following game theory. Note that these and other important issues surrounding symmetry are not discussed in McCain's webtext.
Classical economics was out of fashion for many years, one of its strongest critics being John Maynard Keynes, who (along with Karl Marx) pointed out the tendency for boom-bust cycles (i.e., alternating expansion and contraction of the economy, i.e., inflation and recession), and who suggested remedies for this based on government intervention. But in a revised form called neo-classical economics (hereafter abbreviated "NCE"), Adam Smith and his free market are now very much back in fashion, and much loved by conservative politicians and big business, for what seem to me very often to be propagandistic purposes, e.g., to justify the deregulation of markets in ways that benefit large corporations and (often) hurt consumers and/or poor countries.
McCain's webtext says that in NCE, economics is defined as "the study of the allocation of resources", or as "the study that considers human behavior as a relation between scarce means and alternative ends." McCain also says that "neoclassical economists believe that free markets usually bring about an efficient allocation of resources." We will see from the way that NCE treats resources that it is not permissible to interpret this term in a very broad way, despite the fact that this is often done in journalism and propaganda. We will also question the meaning of "efficient."
Although he does a poor job in discussing symmetry, McCain does a better job of discussing two other major assumptions of NCE, (1) rationality and (2) self-interest. McCain admits that these assumptions are obviously not true in most cases, but he gives arguments that they are still reasonable starting points for constructing models, which is what positive economics is supposed to do. He emphasizes that these are not normative assumptions, about how people ought to behave. However, this seems to be a rather disingenuous rhetorical move, because the way that NCE is often used in policy arguments is something like the following: we want to have a successful nation (i.e., a wealthy nation, after the style of Adam Smith), and we know that free markets provide for the optimal allocation of resources, therefore we should de-regulate markets; that is, in practice, NCE often is used normatively rather than positively. However, there certainly are markets that are sufficiently close to free, so that NCE can be used scientifically.
McCain does not discuss two further very basic assumptions of
neo-classical economics, one of which is that (3) modelling only the
allocation of resources is adequate. It is easy to see from McCain's
Chapter 3 that these assumptions are implicit in his approach, even though
he does not state them. It is perhaps less easy to see the implicit values
that arise from these assumptions of neo-classical economics, but we will
try to do so a bit later.
8.3 Economic Models; Supply and Demand
The first model introduced in McCain's webtext is the so-called "production
possibility frontier model" - a ridiculous name to be sure, and as we will
see, an over-rated model. One purpose of this model is to make the notions of
allocation and scarcity more precise; this is significant because these are
very basic concepts for NCE.
But first, let us note that the level of mathematical sophistication in
McCain's text is far below what one would expect in an engineering course,
since in particular, not even the very easy idea of a function is
exploited; we will see that the use of simple functional notation can greatly
clarify and simplify the exposition of the basic ideas with which McCain is
struggling.
An important concept is that of monotone function, which actually
comes in two varieties: A monotone increasing function is a real valued
function g of a real variable satisfying the condition x >
y implies g(x) > g(y); and dually, a monotone decreasing
function is a real valued function g of a real variable
satisfying the condition x > y implies g(x) < g(y).
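Since these definitions do a lot of work in what follows, here is a minimal numerical sketch (mine, not McCain's) that checks monotonicity of a function over a finite sample of points:

```python
def is_monotone_increasing(g, xs):
    """True if g(x) < g(y) whenever x < y, over the sample of points xs."""
    values = [g(x) for x in sorted(xs)]
    return all(a < b for a, b in zip(values, values[1:]))


def is_monotone_decreasing(g, xs):
    """True if g(x) > g(y) whenever x < y, over the sample of points xs."""
    values = [g(x) for x in sorted(xs)]
    return all(a > b for a, b in zip(values, values[1:]))


print(is_monotone_increasing(lambda x: 2 * x + 1, range(10)))     # True
print(is_monotone_decreasing(lambda x: 10 - 3 * x, range(10)))    # True
print(is_monotone_increasing(lambda x: (x - 5) ** 2, range(10)))  # False
```

The third example shows why both conditions can fail: a parabola first decreases and then increases, so it is monotone in neither sense.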
The production possibility frontier model, in the case of just two
goods, say G1 and G2, says that there is a trade-off in the possibilities of
producing those goods, i.e., if the amounts of the goods are g1, g2,
both of which must be non-negative, there is a monotone decreasing function
f(g2) which describes the maximum amount of G1 that can be produced
if the amount of G2 is g2; any amount g1 that is less than
(or equal to) f(g2) is said to be feasible, whereas any
amount that is greater than f(g2) is said to be infeasible.
In other words, this model says that the feasible possibilities are the
members of the set

{ <g1, g2> | f(g2) >= g1 >= 0 and g2 >= 0 } .

An important (but unstated by McCain) assumption of this model is that
demand is elastic, in the sense that the amount of these goods that people
are willing to buy can be "stretched" either upward or downward as long as it
is feasible. However, this assumption is clearly false: no one is going to
buy more bean sprouts or lettuce than they can use within a few days; no one
is going to buy more electricity than they can use; etc. And in the other
direction, no one is going to buy less food than they need to live, or less
clothing, etc. So the extreme ends of the set are not really feasible at all;
it is only in some middle range that one can reasonably make trade-offs, i.e.,
demand in general is inelastic outside some narrow middle range.
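To make the feasible set concrete, here is a small sketch in code; the particular frontier function f below is made up purely for illustration:

```python
def f(g2):
    """Hypothetical monotone decreasing frontier: the maximum amount of G1
    that can be produced when the amount of G2 is g2 (made-up numbers)."""
    return max(0.0, 100.0 - 0.5 * g2)


def feasible(g1, g2):
    """<g1, g2> is feasible iff 0 <= g1 <= f(g2) and g2 >= 0."""
    return g2 >= 0 and 0 <= g1 <= f(g2)


print(feasible(80, 20))   # True:  f(20) = 90, and 80 <= 90
print(feasible(95, 20))   # False: f(20) = 90, and 95 > 90
```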
Another important but unstated assumption is that the variables g1,
g2 are relatively independent (or in a language more likely to be used in
economics, they are at most loosely coupled), except near the frontiers. But
in McCain's example of food vs. machines, these variables are far from
independent, because if the workers do not have food, they cannot produce
machines; also, in a modern economy, food cannot be produced without machines.
However, once again, trade-offs may still be possible within a limited
intermediate range of values; however, the frontier curve may be rather
complex, very hard to compute, and moreover, will change with time, due to the
delayed propagation of effects through the system, e.g., there will be a time
lag before the lack of new machines begins to reduce agricultural production.
(More technically, we have to take account of states of an economy,
and of their transition function.)
In my opinion, the main purpose of the production possibility frontier
model is rhetorical, to emphasize the notion of scarcity and the
consequent need for trade-offs. However, at least to me, its lack of realism
tends to undermine these very conclusions, and also tends to be applied to
resources, such as information, where the model clearly does not apply at all.
(In passing, I would also emphasize that the models which McCain claims are
realistic rather than made up, cannot possibly be realistic, due to the
factors discussed above. Probably these models are interpolations of some
sort of more or less reasonable data in the middle range; but interpolations
to cases where the amount of one of the goods is zero or close to zero are not
realistic. See Table 1 and Figure 3 in Chapter 2.)
An aspect of Adam Smith's "free market" that we have not discussed before
is the notion of natural price: this is the theory that every good has
some price above which it will not sell, and below which it cannot be sold
(because its costs cannot be covered). The purpose of a perfect market is to
achieve an optimal distribution of goods, and the mechanism by which it does
so is to find the natural price for all goods. As I have repeatedly
emphasized, this is not what happens in most real markets, but instead, it
represents an ideal, which is sometimes almost achieved in practice.
The famous theory of supply and demand, due to John Stuart Mill, is
a refinement of Smith's theory of natural price. This theory says that both
the supply and the demand for a good are functions of its price, say
s(p) and d(p), respectively, where p is the price.
The so called Law of Demand says that d(p) is a monotone
decreasing function, i.e., that a higher price always produces a lower demand.
Similarly, s(p) is a monotone increasing function, i.e., a higher
price always produces a greater supply. McCain of course does not discuss
this topic with anything like the mathematical precision of our discussion,
since he does not use the notion of function, but he does provide the very
important (though somewhat vague) qualification ceteris paribus, which
is Latin for "if nothing else changes." This phrase is often used in
economics, and as we have noted, it is often a very unrealistic assumption,
because of course things do change, because there are interdependencies among
the variables. Moreover, as noted above, demand is in general inelastic
outside of a certain range, and of course the same applies to supply.
Coupling of variables and inelasticity can also result in the supply and
demand functions being non-monotone.
However, assuming that we are in a situation where the above assumptions
are realistic, a little bit of somewhat interesting mathematics can now be
done. Adam Smith's natural price can be captured by solving the following
equation for p,
s(p) = d(p) .
Theorem: The above equation has at most one solution, i.e., the
solution is unique if it exists.
Proof: This is due to the monotonicity assumptions. Suppose that
p0 is a solution, and consider the function f(p) = s(p) -
d(p). Then f(p0) = 0, and p > p0 implies that
f(p) > 0 because the value of s(p) increases while that of
d(p) decreases. Similarly, p < p0 implies f(p) <
0 because the value of s(p) decreases while that of
d(p) increases. Q.E.D.
If such a solution exists, it is usually called the equilibrium
price in this context; this is the price at which supply and demand are
balanced, which is why we can consider this to be a refinement of Smith's
notion of natural price. (See the pictures in McCain's Chapter 3 to enhance
your intuition for this situation, especially for why there is at most one
solution.)
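Under these monotonicity assumptions, the equilibrium price can also be computed numerically; here is a minimal sketch using bisection, with made-up linear supply and demand functions (none of this is from McCain's text):

```python
def equilibrium_price(s, d, lo, hi, tol=1e-9):
    """Solve s(p) = d(p) by bisection, assuming s is monotone increasing,
    d is monotone decreasing, and the solution lies in [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if s(mid) < d(mid):
            lo = mid   # excess demand: the price must rise
        else:
            hi = mid   # excess supply: the price must fall
    return (lo + hi) / 2


supply = lambda p: 2 * p       # made-up monotone increasing supply
demand = lambda p: 30 - p      # made-up monotone decreasing demand

p_star = equilibrium_price(supply, demand, 0, 100)
print(round(p_star, 6))        # 10.0, since 2*10 = 30 - 10 = 20
```

The function s(p) - d(p) is monotone increasing, which is exactly why bisection works, and why the theorem above guarantees there is at most one root.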
One example where this did not happen is the California power market, where
prices have been set in part by regulation, and in part by greed. Another
example is the Sony robot dog Aibo, where the price was arbitrarily set at
$20,000 and the quantity was fixed at 2,000. (Of course, a free market for
re-selling them arose after the initial supply was exhausted.) Manufacturers
and retailers are always trying to "game" the market, with advertising,
tie-ins, and other gimmicks; indeed, almost the entire apparatus of marketing
can be described as an attempt to make markets un-free, and instead biased
against the buyer. As explained earlier, a genuinely free market requires
free competition among a relatively large number of relatively small suppliers
and relatively small consumers, all with approximately equal (i.e. symmetric)
information and power.
A good example of the rhetorical use of ideas from neo-classical economics is
the so called supply side economics heavily promoted during the
administration of former President Reagan, and therefore also called
Reaganomics; in addition, and not without reason, it was also called
voodoo economics. This theory said that if tax breaks and other
advantages were given to producers, due to the nature of supply and demand,
economic benefits would also trickle down to consumers. It now seems
clear that very little of this sort of effect actually happened, and that the
purpose of this theory was simply to justify rewarding the traditional power
base of the Republican Party. I have found that appeals to principles of
neo-classical economics are very often used as arguments to justify various
public policies that are advantageous to the large corporations and the rich
minority in America. Of course, this is not the fault of the theory itself,
since it could equally well be used to justify completely different policies,
such as regulations that would reduce the size and information advantages of
suppliers, i.e., that would promote greater symmetry.
One rather fun thing that can be done with this simple theory of supply and
demand is to examine the effect of adding constants to (or subtracting them
from) the price in the supply or demand functions, e.g., replacing
d(p) by d(p + p0) where p0 is a positive constant. Notice
that this decreases the demand for the good at a given price p,
because d(p + p0) < d(p) by monotonicity, and it also decreases the
price at which any given quantity is demanded. A similar analysis for the
case of replacing d(p) by d(p - p0) shows that this increases the
demand at a given price, and increases the price at which a given quantity
is demanded.
An interesting application of the first case is the imposition of a tax
which must be paid by the consumer. Here we get a new demand function,
d'(p) = d(p + t) where t is the amount of the tax, and we need
to solve for p in the equation
s(p) = d'(p) ,
which is the same as solving for p in
s(p) = d(p + t) .
A little thought will show that this will decrease the price received by
sellers, increase the total price p + t paid by consumers, and decrease the
equilibrium production and consumption. This is exactly the
reasoning that is applied when health professionals call for increasing the
tax on tobacco in order to decrease consumption.
A very similar analysis applies to the case of a subsidy that is paid by
the government to producers. Here the equation to be solved for p
is

s'(p) = d(p) ,

where s'(p) = s(p + b), where b is the amount of the
subsidy (since producers receive the market price plus the subsidy). This
is the same as solving for p in

s(p + b) = d(p) ,

and a little thought will show that the new solution will show a decrease
in the equilibrium price and an increase in the equilibrium production and
consumption. This is exactly the reasoning that is applied when a government
wants to make more of some basic food good, such as corn or wheat, available
to the population.
It is left as an interesting exercise to analyze the effect of a tax cut
in the same style.
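These shifts are easy to check numerically. The sketch below uses made-up linear supply and demand functions, and models the tax as consumers paying p + t and the subsidy as producers receiving p + b; none of this is from McCain's text:

```python
def solve(f, lo, hi, tol=1e-9):
    """Bisection root-finder for a monotone increasing function f
    with f(lo) < 0 < f(hi)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2


s = lambda p: 2 * p     # made-up increasing supply
d = lambda p: 30 - p    # made-up decreasing demand
t, b = 3.0, 3.0         # hypothetical tax and subsidy amounts

p_free = solve(lambda p: s(p) - d(p), 0, 100)      # no intervention
p_tax = solve(lambda p: s(p) - d(p + t), 0, 100)   # consumers pay p + t
p_sub = solve(lambda p: s(p + b) - d(p), 0, 100)   # producers receive p + b

print(round(p_free, 4), round(p_tax, 4), round(p_sub, 4))
# the tax lowers the market price, but consumers pay p_tax + t > p_free;
# the subsidy lowers the price and raises the quantity traded
```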
I close this section by noting that because every strictly monotone function
has an inverse on its range, it would be possible to write equations for price as functions of the
quantities supplied and demanded, instead of the way we have done it above,
and in fact, this would perhaps be closer to the exposition in McCain's
webtext. With this in mind, I would urge the reader to have a look at the
diagrams in Chapter 3 of that text.
8.4 Applications of Economics to the Net
In The Market and the Net,
Phil Agre highlights a kind of "convergence" between recent concerns of
neoclassical economists and "cyberspace pundits," having to do with the
alleged abolition of institutions caused by the internet. Agre describes an
approach due to Ronald Coase, which says that if some market does not behave
like Adam Smith's free market, it must be because some assumption behind that
market is not satisfied. There are implicit assumptions here, that markets
will behave in an optimal manner if the right conditions exist, and that free
markets are what we should have; there is also (I think) a confusion of the
technical economic meaning of "optimal" with its more everyday meaning.
Agre then explains what transaction costs and coordination
costs are, and goes over the argument that large firms should cease to
exist if transaction costs can be brought down far enough, which is supposedly
exactly what the internet is doing, i.e., bringing about Adam Smith's perfect
markets. He also gives Douglass North's definition (see also Agre's Review of Institutions, Institutional
Change, and Economic Performance) of an institution as "rules
of the game" that define the most basic relationships in social life, which
are especially taken to be economic relationships. Agre then goes over the
argument, associated with Richard Posner, that free markets require impersonal
institutions and perfect information. Agre relates all this to four myths
about the internet, which he calls community, cyberspace,
disintermediation, and decentralization.
Agre then introduces his own ideas about privacy, which are very different
from the idea of having perfect information about all participants in internet
markets; the contrast is brought out by discussing the (undesirable)
psychological condition called instant intimacy. After that, Agre
argues against Posner, by pointing out two fallacies.
Agre concludes by saying that in bringing the internet into wider use, we
are necessarily building institutions, not destroying them; and that what is
really at issue here are questions of human freedom, and human relations, not
merely economic freedom and economic relations. I would like to highlight the
point from Agre's paper that privacy is under attack today not just from legal
and technical innovations, but also from changes in the philosophies and myths
to which people tend to subscribe, most often unconsciously.
This course argues that neo-classical economics is an oversimplified theory
which does not take account of many very important factors (such as the
depletion of human and natural resources, to say nothing of emotional, aesthetic, and
ethical issues) and which often yields incorrect results, but which
nonetheless has a strong intuitive appeal. In these respects it is much like
technological determinism; however, it is better than technological
determinism in that its way of thinking is genuinely interesting and even
potentially useful, if applied sufficiently carefully.
For a very utopian view of ecommerce and encryption, you might want to read
the novel Cryptonomicon by Neal Stephenson; this book also has some
entertaining World War II history, especially of the war in the Philippines,
of course mixed in with some even more entertaining fiction.
It is also interesting to notice that the argument for one of the
alternatives to ATM technology suggested in Dan Tebbutt's interview with Odlyzko is that it would be
effective because it would operate as a free market, in just the sense
imagined by Adam Smith. This system would have a number of identical subnets,
each charging a different rate, so that (presumably) the most expensive will
be the least loaded, and hence provide the best service - or if not, then
users will migrate to less costly subnets, thus again making it the least
loaded. (Here "users" are large corporations, not individuals.)
Dan Geer, in Risk Management is where
the Money is, draws the distinction between risk and security,
arguing that the key concept is risk, not security (or "trust"), even though
the computer science community has concentrated almost exclusively on security
(encryption, etc.). This point can be illustrated by some facts from a more
familiar mediator of commerce, the credit card. Its security is actually
fairly low, because it is easy to get someone else's credit card number, and
identities are not very carefully checked in most outlets, and hardly at all
over the internet. The usual agreement between the cardholder and the card
issuer involves a bank that accepts all risk except the first $50, which the
cardholder must accept. The system works well because the delay involved in
finalizing the transaction lets the cardholder confirm transactions by default
(if you do nothing, then the transaction is accepted). Credit cards would not
work if the risk were not dealt with in some way that was at least as
effective as this. The situation for ecommerce is similar: most book buyers
prefer to deal with some well established company like amazon.com rather than
take on the (perceived) higher risk of a less well known firm, even though
they know that they are likely to pay a slightly higher cost for this risk
reduction; it is like buying insurance. Recent problems with eBay also
illustrate this point.
Douglass North's definition of an institution as "the rules of the
game" is inconsistent with the kind of sociology that we have been studying in
this course. Whereas North takes rules as stable and self-sustaining, modern
sociologists would focus on the work needed to sustain institutions, in order
to make them appear stable; the "rules" are then merely a reification of the
results of that work, rather than self-existing entities. This should be
obvious from the fact that the rules of an institution can change, even
dramatically and quickly, while the institution continues to be obviously the
same institution to everyone involved, even though operating in a somewhat
different way.
For example, many corporations undergo significant transformations,
sometimes called re-engineering, and yet are clearly still the same
corporation; General Motors went through such a re-engineering some years ago,
as Hewlett-Packard did recently, and Sony is doing now. Similarly, the
business models of MP3.com and Napster were greatly revised, but the name, and
much else remain the same. Many many other examples could be given, in which
no one doubts that the institution involved is still the same institution.
For a somewhat different example, the Olympics are (allegedly) undergoing
significant reform, but also striving to maintain continuity with past
traditions. Scientific revolutions in the sense of Thomas Kuhn provide other
examples of great change over a relatively short time, such as the supersession
of classical mechanics by quantum mechanics. But physics departments did
not fire their faculty or change their names when this happened; the
institutions of physics carried on almost as if nothing had happened. I would
also emphasize that any institution necessarily embodies values, which because
of the stability of an institution, reflect successful chains of translation
among actants in the larger actor network in which it is involved. (For
example, think of banks, though they too are undergoing some change now.)
More generally, modern ANT suggests looking for the infrastructure
of institutions, seeking the hidden work that keeps things going. This is a
nearly opposite perspective from that of economics, especially neo-classical
economics, which deliberately seeks to abstract away as much as it can,
especially those things that cannot be measured.
For a simple but significant example, consider the "institution" of
personal computing. It's reasonably clear what the "rules" are - how to use
floppy discs, word processors, etc. The advantages of using PCs are also
pretty clear, and so there have been numerous calls for "transferring" this
institution to underdeveloped countries. But actual attempts to do so have
often run into deep troubles with infrastructure: PCs require a reliable
source of electricity; they require rapid delivery of spare parts; they
require reliable advice; and so on. In our culture, these are pretty much
taken for granted, i.e., they are invisible infrastructure; this is
why it has been easy to ignore these factors when imagining the use of PCs in
situations that may be very different.
We can also use ANT to help explain why open source software development
works, whereas it seems quite counterintuitive from the viewpoint of
economics, which is biased to ignore social aspects, such as the cohesion and
peer pressure among programmers; for this reason, open source development for
a long time was not taken seriously by large corporations like Microsoft. In
Homesteading the Noosphere,
Eric Raymond argues that it is exactly the social links that power this new
mode of software production; one part of his argument uses the anthropological
concept of a gift culture.
Gift cultures are adaptations not to scarcity but to abundance. They arise in
populations that do not have significant material-scarcity problems with
survival goods... We can also observe them in certain strata of our own
society, especially in show business and among the wealthy. In gift cultures,
social status is determined not by what you control but by what you give
away. And thus the hacker's long hours of effort to produce high-quality
open-source code. - p.8.
There is also an argument in favor of open source software that is based on
complexity theory: Brooks' Law says that communication costs in
software development grow as the square of the number of developers; but this
does not apply to debugging, which can be done in parallel with costs growing
only linearly with the number of debuggers, due solely to communication with a
coordinating developer. Experience with Linux seems to confirm this. Eric
Raymond has summarized this phenomenon (The Cathedral and the Bazaar,
p.6) in what he calls Linus' Law,
Given enough eyeballs, all bugs are shallow.
(However, the linear complexity formulation has more content, though less
zing.) The point here is that bugs are found, understood and fixed relatively
quickly in a sufficiently large open source community (not usually by the same
person).
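The quadratic-versus-linear contrast is simple enough to tabulate; the cost functions below just restate the counting argument above:

```python
def coordination_cost(n):
    """Pairwise communication channels among n developers: n(n-1)/2,
    which grows quadratically (the Brooks' Law case)."""
    return n * (n - 1) // 2


def debugging_cost(n):
    """Each of n debuggers communicates only with one coordinating
    developer, so the number of channels grows linearly."""
    return n


for n in (10, 100, 1000):
    print(n, coordination_cost(n), debugging_cost(n))
# at n = 1000 there are 499500 pairwise channels, but only 1000 links
# to a coordinator
```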
Maintained by Joseph Goguen
© 2000 - 2003 Joseph Goguen, all rights reserved.
Last modified: Thu Nov 20 20:27:01 PST 2003