(FOGA4)

The fourth Foundations of Genetic Algorithms workshop (FOGA) was held August 2--5, 1996, at the University of San Diego. These workshops have been held biennially, starting in 1990. FOGA alternates with the International Conference on Genetic Algorithms (ICGA), which is held in odd years. (ICGA's European sister conference, Parallel Problem Solving from Nature, is held in even years.) Both FOGA and ICGA are sponsored and organized under the auspices of the International Society for Genetic Algorithms. In keeping with past tradition, the papers in these proceedings are longer than typical conference papers, and were subjected to two rounds of reviewing to improve clarity. All participants had copies of early drafts available to them during the meeting. This did not, however, dampen the workshop spirit which FOGA embraces: a pleasant venue where researchers can advance hypotheses and discuss tentative ideas, interacting openly -- and with vigor! -- among a small group of colleagues.

A variety of theoretical directions and styles are exemplified by this year's papers, but two themes continue to dominate current research into evolutionary algorithms:

- How can we model the dynamics of the genetic *search process*?
- How do features of the *representation* used to encode solutions on the genome fare with respect to these dynamics?

The ``simple GA'' is a model of the dynamical behavior of a population of fixed-length, binary genomes based on Markov chain analysis. The simple GA has proven itself a useful foundation for both theoretical and empirical investigation of the nature of genetic search. Two papers in the former category are ``Diagonalizing the Simple GA Mixing Matrix'' and ``A Further Result on the Markov Chain Model of Genetic Algorithms and Its Application to a Simulated Annealing-like Strategy.'' The first shows how a sparse similarity transformation diagonalizes the simple GA's mixing matrix. Applications include simplification of the fixed point component equations from $O(4^{l})$ terms to ...
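For readers new to the area, the generational loop of a simple GA can be sketched in a few lines. The following minimal implementation is illustrative only: the parameter values, function names, and the ONE-MAX fitness (count of 1 bits) are assumptions for the sketch, not drawn from any of the papers above.

```python
import random

def simple_ga(fitness, l=8, pop_size=20, generations=30,
              p_c=0.7, p_m=0.01, rng=None):
    """A minimal 'simple GA': fitness-proportional selection, one-point
    crossover, and bitwise mutation on fixed-length binary genomes."""
    rng = rng or random.Random(0)
    pop = [[rng.randint(0, 1) for _ in range(l)] for _ in range(pop_size)]
    for _ in range(generations):
        weights = [fitness(g) for g in pop]
        new_pop = []
        while len(new_pop) < pop_size:
            # Fitness-proportional (roulette-wheel) selection of two parents.
            p1, p2 = rng.choices(pop, weights=weights, k=2)
            c1, c2 = p1[:], p2[:]
            if rng.random() < p_c:              # one-point crossover
                cut = rng.randrange(1, l)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (c1, c2):              # bitwise mutation
                for i in range(l):
                    if rng.random() < p_m:
                        child[i] ^= 1
                new_pop.append(child)
        pop = new_pop[:pop_size]
    return max(pop, key=fitness)

best = simple_ga(sum)  # ONE-MAX: fitness is the number of 1 bits
```

The Markov chain analyses discussed above study the distribution over populations induced by exactly this kind of stochastic generational loop.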

Two papers investigating the simple GA empirically are ``A Search for Counterexamples to Two Conjectures on the Simple Genetic Algorithm'' and ``Analyzing GAs Using Markov Models with Semantically Ordered and Lumped States.'' The first paper concerns properties of the signal component of the simple GA, as opposed to its stochastic aspects. It sheds light on a fundamental conjecture, and provides a nontrivial example of cyclic behavior in the infinite population case. The second paper touches on the inherent emergent behavior of genetic search and reports on preliminary empirical inquiry into reducing the complexity of exact Markov chain models by aggregation of states.

In contrast, another group of papers aims at *modeling* selected statistics of GA behavior. Here the goal becomes qualitative agreement of modeled statistics with observed simulation behavior, ideally using a simple and computationally efficient model. ``Genetic Algorithm Dynamics in a Two-well Potential'' uses a model of key population statistics to investigate GA dynamics and contrast them with simulated annealing on a sample problem. The model is motivated by principles from statistical mechanics and is experimentally verified. ``Noisy Fitness Evaluation in Genetic Algorithms and the Dynamics of Learning'' uses similar modeling techniques to predict population statistics for two sample problems involving a stochastic fitness measure. ``Probing Genetic Algorithm Performance of Fitness Landscapes'' is similar in spirit, except that fitness is deterministic and model parameters are experimentally calibrated to observed behavior.

``Replicators, Majorization and Genetic Algorithms: New Models and Analytical Tools'' appeals to a different set of tools. Replicator equations have proven useful in population genetics without recombination, and majorization is recommended as a way to exploit these techniques for the analysis of a broader class of evolutionary algorithms, to connect them to other optimization methods, and to suggest more analytically tractable operators.
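For readers unfamiliar with replicator equations, the underlying dynamic is simple: each type's frequency grows in proportion to the gap between its fitness and the population average. A minimal numerical sketch follows; the step size and fitness values are arbitrary choices for illustration, not taken from the paper.

```python
def replicator_step(x, f, dt=0.1):
    """One Euler step of the replicator equation dx_i/dt = x_i (f_i - fbar),
    where x is a vector of type frequencies and f the per-type fitnesses."""
    fbar = sum(xi * fi for xi, fi in zip(x, f))
    x = [xi + dt * xi * (fi - fbar) for xi, fi in zip(x, f)]
    s = sum(x)
    return [xi / s for xi in x]   # renormalize against discretization drift

x = [1 / 3, 1 / 3, 1 / 3]        # three types, initially equal frequencies
for _ in range(200):
    x = replicator_step(x, [1.0, 2.0, 3.0])
# With constant fitnesses and no recombination, the fittest type takes over.
```

This selection-only dynamic is the recombination-free baseline that majorization arguments then extend to richer evolutionary operators.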

Finally, being able to characterize the final convergence behavior of the evolutionary search process is critical to the acceptance of these algorithms as effective optimization procedures. ``A Stationary Point Convergence Theory for Evolutionary Algorithms'' adapts the convergence theory for generalized pattern search methods to a class of real-coded stochastic algorithms for which generally effective stopping rules may be available.

Representations and the operators that work effectively on them have an enormous impact on the trajectory of evolutionary search. For example, one important feature of the recombination operator is that it cannot introduce any allele values that were not present in the parents; the relation between this operator and population diversity therefore becomes important. ``Convergence Controlled Variation'' investigates advantages that pair-wise crossover may have over larger parent ``pools.'' It also compares these to other convergence-control reproductive mechanisms, revisiting the basic concepts of exploration, schemata propagation, and hitchhiking in the process.
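The allele-preservation property mentioned above is easy to verify directly: at every locus, a child produced by crossover alone carries a value present in one of its parents, so crossover by itself can never restore diversity the population has lost. A small sketch (function names are illustrative):

```python
import random

def one_point_crossover(p1, p2, rng):
    """Exchange tails of two equal-length genomes at a random cut point."""
    cut = rng.randrange(1, len(p1))
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

rng = random.Random(42)
p1 = [rng.randint(0, 1) for _ in range(16)]
p2 = [rng.randint(0, 1) for _ in range(16)]
c1, c2 = one_point_crossover(p1, p2, rng)

# At every locus, each child's allele came from one of the two parents.
for child in (c1, c2):
    assert all(child[i] in (p1[i], p2[i]) for i in range(16))
```

In particular, once every member of the population carries the same allele at some locus, only mutation can reintroduce the alternative value there.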

Just as the simple GA provides a foundation for much of our understanding of the evolutionary search dynamic, the analysis of schemata -- small components of the representation shared across the population and propagated into subsequent generations -- continues to provide insight about how individual features in the solutions' representation come to be selected and recombined with one another. ``Nonlinearity, Hyperplane Ranking and the Simple Genetic Algorithm'' correlates the analysis of hyperplanes (corresponding to binary schemata) with nonlinearities in the objective function, and relates this to convergence of the simple GA. ``Fitness Landscape Characterization by Variance of Decompositions'' presents a framework in which a variety of schema-based statistics are defined and reviewed, and some are related to Walsh coefficients. Crossover correlation is defined, related to variance coefficients, and used to contrast various crossover types. Since crossover is known to favor the ``linkages'' of some representational features over others (typically, those closer together on the genome), ``Learning Linkage'' investigates a crossover operator designed to facilitate appropriate linkages. ``SEARCH, Blackbox Optimization, and Sample Complexity'' suggests a framework within which evolutionary algorithms can be applied with minimal knowledge of the objective function, and also relates them to other symbolic learning methods.
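For binary genomes, a schema is simply a template over $\{0, 1, *\}$, and the hyperplane it denotes is the set of strings matching it. A tiny sketch of schema membership and order (illustrative only, not tied to any one paper):

```python
def matches(schema, genome):
    """True if genome lies in the hyperplane denoted by schema ('*' = don't care)."""
    return all(s == '*' or s == g for s, g in zip(schema, genome))

def order(schema):
    """Number of fixed (non-'*') positions in the schema."""
    return sum(s != '*' for s in schema)

assert matches('1**0', '1010')
assert not matches('1**0', '0010')
assert order('1**0') == 2
# An order-k schema over length-l strings denotes a hyperplane of 2**(l-k) genomes:
assert sum(matches('1**0', format(i, '04b')) for i in range(16)) == 2 ** (4 - 2)
```

Schema-based statistics of the kind surveyed above average fitness over exactly these hyperplane memberships.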

Rather than beginning with the crossover operator and analyzing the schemata processed by it, a number of papers consider particular classes of representations, examining their special properties and the forms of recombination that might be appropriate for them. Most generally, ``On Searching $\alpha$-ary Hypercubes and Related Graphs'' analyzes the landscapes induced by various operators and fitness functions. Real coefficients are one of the most common epistemic atoms arising in optimization, and ``Real Representations'' presents a new representation for reals, based on Dedekind cuts. It also advances the perspective that representation-independent characterizations of genetic search can be specialized to particular problems in a principled fashion. ``A Study of Fixed-Length Subset Recombination'' considers alternate representations for subset problems and investigates operators designed for them. ``Fitness Functions for Multiple Objective Optimization Problems: Combining Preferences with Pareto Rankings'' illustrates a design methodology for representing a multiple objective function by a single value incorporating some preferences between Pareto optimal solutions.
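The Pareto-ranking idea behind that last paper can be sketched generically for a minimization problem: nondominated points receive rank 1, the points that become nondominated once rank 1 is removed receive rank 2, and so on. This is a generic ranking sketch, not the paper's specific preference-combination scheme:

```python
def dominates(a, b):
    """a dominates b (minimization): no worse in every objective, better in one."""
    return (all(x <= y for x, y in zip(a, b)) and
            any(x < y for x, y in zip(a, b)))

def pareto_ranks(points):
    """Assign each point the index of the nondominated front it belongs to."""
    ranks, remaining, rank = {}, set(range(len(points))), 1
    while remaining:
        front = {i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)}
        for i in front:
            ranks[i] = rank
        remaining -= front
        rank += 1
    return [ranks[i] for i in range(len(points))]

pts = [(1, 4), (2, 2), (4, 1), (3, 3), (5, 5)]
print(pareto_ranks(pts))  # → [1, 1, 1, 2, 3]
```

A preference-aware fitness function of the kind the paper describes would then break ties within a front according to the decision maker's preferences.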

Treating the genome as a set of grammatical production rules yields an interesting special class of representations that relates back to Holland's Classifier System. ``Stochastic Context-Free Grammar Induction with a Genetic Algorithm Using Local Search'' relies on our knowledge of formal languages to provide insight into the evolutionary search process, coupling this ``global'' search for good grammars with a ``local'' tuning of the probabilities associated with each rule of the stochastic grammar. Compared to binary or real-valued representations, a grammatical representation is more difficult to analyze formally. But the recent excitement surrounding ``genetic programming'' shows how much potential there is when evolutionary dynamics are applied to richer representations. ``Exact Uniform Initialization For Genetic Programming'' considers the special problem of population initialization, when these individuals must be judiciously selected from the vast space of complete derivation trees of a bounded context-free language.

In summary, this volume echoes themes that have been found in earlier FOGA meetings, extends them in new directions, introduces new analytic tools we can expect to see again, and addresses new issues of growing importance. We express our gratitude to the authors, meeting participants, and most especially the program committee for their efforts in helping to take this next step towards a computational understanding of the evolutionary process.

Richard K. Belew

Computer Science & Engr. Dept.

Univ. California -- San Diego

La Jolla, CA 92093-0114

rik@cs.ucsd.edu