
Evolution of Neural Networks

Much of my current work with the GA focuses on interactions between this evolutionary algorithm and neural network (NNet) learning techniques. I have found these two instances of adaptive systems to provide an extremely rich basis for comparison, contrast and hybridization. As alternative models of cognitive systems, connectionist networks and GA/Classifier Systems both represent novel proposals. During the Winter 1988 term I led a seminar for the Cognitive Science department in which the Parallel Distributed Processing volumes of McClelland and Rumelhart (very familiar to the UCSD connectionist community) were contrasted with the contemporaneous Induction volume of Holland, Holyoak, Nisbett and Thagard. One outgrowth of this discussion was an examination of just how the two computational systems are related, for example how back-propagation learning could be realized in the Classifier System [26].

One of the most straightforward relationships between NNets and the GA is as models of within-lifetime learning by individuals and generational evolution by species, respectively. Interactions between these two adaptive phenomena have been a topic of interest since Darwin, and of potential confusion since Lamarck. Extending an elegant model of G. Hinton and S. Nowlan, I used the GA to better understand a strictly neo-Darwinian, non-Lamarckian influence that learning can have on the evolutionary process, known as the Baldwin Effect [5] (a preliminary version of these results was also presented [22]). In this paper I also introduced an extension of the model that allows culture to be modeled as a third form of adaptation, by societies, interposed between species' evolution and individuals' learning.
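The Hinton and Nowlan model is simple enough to sketch directly. The following Python sketch uses illustrative parameter values and an arbitrary all-ones target, not necessarily the published settings: genes are 0, 1 or ?, learning is random guessing of the ?'s, and fitness rewards genotypes whose innate alleles let learning find the target quickly.

import random

# Illustrative parameters; not necessarily the published settings.
GENES, TRIALS, POP, GENERATIONS = 20, 1000, 500, 50
TARGET = ['1'] * GENES              # the single "good" phenotype (arbitrary here)

def fitness(genotype):
    """Learning = random guessing of the '?' alleles; fitness rewards
    genotypes whose innate alleles let learning find TARGET quickly."""
    if any(g != '?' and g != t for g, t in zip(genotype, TARGET)):
        return 1.0                  # a wrong innate allele: learning cannot help
    unknowns = genotype.count('?')
    for trial in range(1, TRIALS + 1):
        if all(random.random() < 0.5 for _ in range(unknowns)):
            return 1.0 + 19.0 * (TRIALS - trial) / TRIALS
    return 1.0                      # never guessed the target within its lifetime

def offspring(pop, scores):
    """Fitness-proportional selection of two parents, one-point crossover."""
    a, b = random.choices(pop, weights=scores, k=2)
    cut = random.randrange(1, GENES)
    return a[:cut] + b[cut:]

pop = [[random.choice('01?') for _ in range(GENES)] for _ in range(POP)]
for gen in range(GENERATIONS):
    scores = [fitness(ind) for ind in pop]
    pop = [offspring(pop, scores) for _ in range(POP)]
    print(gen, sum(scores) / POP)   # mean fitness rises as learned alleles are assimilated

Even in this stripped-down form the Baldwinian signature appears: alleles that learning would otherwise have to guess are gradually fixed genetically, without any direct (Lamarckian) inheritance of what was learned.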

This investigation of qualitative features of the interaction between evolution and learning led to a more detailed consideration of potential hybridizations of the GA and NNet algorithms [13]. This report surveys a range of experiments combining GAs and NNets, and reports on extensive experiments using the GA to find good initial weights from which NNets then learn. These experiments were so successful that we have continued to investigate hybrids in which the GA's ``global sampling'' behavior (and that of other sampling procedures, such as Monte Carlo) is combined with the NNet's ``local searching'' behavior (and that of other first-order, second-order and conjugate gradient search procedures), as part of Bill Hart's Ph.D. thesis under my supervision. One early result is negative, showing that the typical characterization of the GA as a problem-independent, ``generic'' search method is inappropriate [34]. Hart's thesis demonstrates a number of problems, including both standard optimization test functions and practical drug-docking applications, for which hybrids of the GA and local search methods not only solve more difficult problems than either alone, but also outperform extensive simulated annealing attempts on the same problems [33].
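As a concrete illustration of the kind of hybrid meant here (a sketch under assumed settings, not the actual experiments of [13] or [33]), the following Python/NumPy fragment evolves real-valued initial weight vectors for a tiny 2-2-1 network on XOR; each candidate's fitness is its error after a short burst of gradient descent from that starting point, and the learned weights are discarded so the scheme stays non-Lamarckian.

import numpy as np

# Toy task, network size and GA settings are illustrative only.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)             # XOR

N_W = 9                              # a 2-2-1 net: W1(2x2) + b1(2) + W2(2x1) + b2(1)

def unpack(w):
    return (w[:4].reshape(2, 2), w[4:6], w[6:8].reshape(2, 1), w[8:])

def loss_after_learning(w, steps=50, lr=0.5):
    """Run a few gradient-descent steps from initial weights w and return the
    resulting mean squared error (lower is fitter).  The learned weights are
    thrown away, so the genotype itself is never modified (non-Lamarckian)."""
    W1, b1, W2, b2 = (a.copy() for a in unpack(w))
    for _ in range(steps):
        H = np.tanh(X @ W1 + b1)                             # hidden layer
        out = 1.0 / (1.0 + np.exp(-(H @ W2 + b2)))           # sigmoid output
        d_out = (out - Y) * out * (1.0 - out)
        dW2, db2 = H.T @ d_out, d_out.sum(axis=0)
        d_hid = (d_out @ W2.T) * (1.0 - H ** 2)
        dW1, db1 = X.T @ d_hid, d_hid.sum(axis=0)
        W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2
    H = np.tanh(X @ W1 + b1)
    out = 1.0 / (1.0 + np.exp(-(H @ W2 + b2)))
    return float(np.mean((out - Y) ** 2))

POP, GENS = 30, 40
pop = rng.normal(0.0, 1.0, size=(POP, N_W))                  # real-valued genotypes
for g in range(GENS):
    losses = np.array([loss_after_learning(w) for w in pop])
    parents = pop[np.argsort(losses)[:POP // 2]]              # truncation selection
    children = parents + rng.normal(0.0, 0.1, parents.shape)  # Gaussian mutation
    pop = np.vstack([parents, children])
    print(g, losses.min())

The GA here supplies the ``global sampling'' over weight space, while the inner gradient-descent loop supplies the ``local search''; swapping in a different local procedure (conjugate gradient, second-order methods, etc.) changes only the inner loop.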

A technical issue arising in this hybridization concerns the encoding of the real-valued weights used by NNets onto the binary strings typically used by the GA. N. Schraudolph and I have developed a ``dynamic parameter encoding'' algorithm that adaptively increases the resolution of the GA's search as it homes in on progressively more focused regions of the search space [48].
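The core idea can be hedged into a few lines of Python; the zoom trigger and interval-halving rule below are illustrative stand-ins, not the actual algorithm of [48].

BITS = 8                              # fixed, coarse precision per parameter

def decode(bits, lo, hi):
    """Map a bit string (list of 0/1) to a real value in the interval [lo, hi]."""
    i = int("".join(str(b) for b in bits), 2)
    return lo + (hi - lo) * i / (2 ** BITS - 1)

def maybe_zoom(values, lo, hi, trigger=0.75):
    """If most decoded values fall in one half of the interval, halve the
    interval so the same BITS give twice the resolution on the next pass."""
    mid = (lo + hi) / 2.0
    frac_low = sum(v < mid for v in values) / len(values)
    if frac_low > trigger:
        return lo, mid                # zoom into the lower half
    if frac_low < 1.0 - trigger:
        return mid, hi                # zoom into the upper half
    return lo, hi                     # population not yet converged; keep interval

After each generation the surviving bit strings would be re-encoded with respect to the new, narrower interval, so a short string can eventually specify a weight to arbitrarily fine precision.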

My current research has taken yet another page from Nature's book, interposing a model of the developmental process by which genotypes (like those manipulated by the GA) are transformed into phenotypes (like ``mature'' NNets) [35, 9]. Incorporating development not only makes for a more accurate model of the underlying biology, but is algorithmically well motivated as well.
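For illustration only, here is a minimal grammar-style developmental mapping in Python, in the spirit of matrix-rewriting encodings but not the specific scheme of [35, 9]: a tiny genotype of 2x2 growth rules is expanded recursively into a much larger connection matrix.

import numpy as np

# Two "symbols", each with a 2x2 expansion rule; the rules here are random,
# but in a GA they would constitute the evolving genotype.
rng = np.random.default_rng(1)
rules = rng.integers(0, 2, size=(2, 2, 2))   # rules[s] is the 2x2 block for symbol s

def develop(seed_symbol, steps):
    """Grow a connection matrix by repeatedly replacing each entry with the
    2x2 block its rule prescribes (doubling the matrix size each step)."""
    mat = np.array([[seed_symbol]])
    for _ in range(steps):
        mat = np.block([[rules[mat[i, j]] for j in range(mat.shape[1])]
                        for i in range(mat.shape[0])])
    return mat

print(develop(1, steps=3))                   # an 8x8 connectivity matrix grown from one gene

The point of such an encoding is compression with structure: a short genotype of growth rules can specify large, regular networks, and the GA then searches the space of developmental programs rather than the space of individual connection weights.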




