Submitted to 8th Joint Symposium on Neural Computation
27 Apr 01
We study the role of developmental processes in the combined dynamics of learning and evolution in a population of agents inside an artificial environment. Earlier simulations have used neural networks of fixed architecture, or have provided an unrealistic supervised training signal. Our simulations make no such assumptions, allowing evolution to explore the composition of a wide range of associative Hebbian neural networks built from sensors, effectors, and hidden neurons. We extend previous work by using a grammar to model the developmental process that transforms the evolving genotypes into the phenotypic neural networks. The rules of the grammar govern how neurons divide and specialize, as well as axon growth and connectivity. The grammar can express both fixed and adaptable components, allowing natural selection to evolve both innate and learned behaviors. We present results using the simulation to evolve novel solutions to two well-understood problems.
Many ``artificial life'' experiments place an agent in an environment and use its performance there as a measure of fitness, applying evolutionary algorithms to shape subsequent generations. Typically the agent is controlled by a neural network, allowing the interaction of two levels of adaptation: evolution by the entire population and learning by the individual. The coupled dynamics of the evolving and learning levels of adaptation are complex and difficult to analyze, but have been shown to give rise to interesting interactions such as the Baldwin effect.
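The two coupled levels of adaptation described above can be illustrated with a minimal sketch (not the authors' code): a population evolves initial weights, each individual then adapts its copy during a "lifetime" of learning, and fitness measured after learning drives selection. For brevity the sketch uses a simple error-correcting weight update on a toy AND task in place of the paper's unsupervised Hebbian rule; all names and parameters here are our own illustration.

```python
import random

random.seed(0)

PATTERNS = [([1, 1], 1), ([1, 0], 0), ([0, 1], 0), ([0, 0], 0)]  # toy task: AND

def lifetime_fitness(weights, steps=20, lr=0.1):
    """Fitness = fraction of correct decisions over one learning lifetime."""
    w = list(weights)          # the individual adapts a copy of its genotype
    correct = 0
    for _ in range(steps):
        x, target = random.choice(PATTERNS)
        out = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0.5 else 0
        correct += (out == target)
        for i in range(len(w)):            # within-lifetime weight adaptation
            w[i] += lr * (target - out) * x[i]
    return correct / steps

def evolve(pop_size=20, generations=30):
    """Outer loop: selection and mutation act on the *initial* weights."""
    pop = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lifetime_fitness, reverse=True)
        parents = scored[: pop_size // 2]          # truncation selection
        pop = [[wi + random.gauss(0, 0.1) for wi in random.choice(parents)]
               for _ in range(pop_size)]           # mutated offspring
    return max(lifetime_fitness(ind) for ind in pop)

best = evolve()
```

Because fitness is measured after learning, genotypes that merely start *near* a solution score well, letting evolution gradually assimilate the learned behavior into the innate weights, which is the mechanism behind the Baldwin effect.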
Our group and others have extended this work to incorporate another dynamic, ontogenesis: the transformation of the genotypic representations manipulated by a genetic algorithm into the phenotypes which actually interact with the world. We use a grammar to model this developmental process. Our grammar uses context-free rules that determine how cells divide and specialize to become neurons, as well as the subsequent axon growth between neurons. It also permits the specification of both fixed and learnable synaptic connections, allowing natural selection to evolve both innate and learned behaviors.
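The flavor of such a developmental grammar can be sketched as follows. This is a hypothetical illustration, not the authors' encoding: each symbol names a cell type, and a context-free rule either divides a cell into two daughters or terminates it as a neuron whose connections are marked fixed (innate) or learnable.

```python
# Genotype: each cell symbol maps to one context-free production.
RULES = {
    "S": ("divide", "A", "B"),            # start cell divides into A and B
    "A": ("neuron", "sensor", "fixed"),   # terminal: a sensor with fixed synapses
    "B": ("divide", "C", "C"),
    "C": ("neuron", "hidden", "learnable"),
}

def develop(symbol, rules=RULES):
    """Recursively expand a cell symbol into a list of (role, plasticity) neurons."""
    production = rules[symbol]
    if production[0] == "neuron":
        _, role, plasticity = production
        return [(role, plasticity)]
    _, left, right = production           # "divide": expand both daughter cells
    return develop(left, rules) + develop(right, rules)

network = develop("S")
# → [('sensor', 'fixed'), ('hidden', 'learnable'), ('hidden', 'learnable')]
```

A genetic algorithm operating on the rule set, rather than on the network directly, can change the phenotype's size and composition with a single rule mutation, which is what opens the wide design space discussed next.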
The space of potential neural network designs explored by populations of such grammars is vast. In all cases we ensure that this design space includes the simpler network designs assumed in earlier work, as well as many others. Such developmental mechanisms introduce a third, intermediate dynamical level of adaptation between the population's evolution and the individual's neural network learning. The present work explores these dynamics, considering well-understood learning problems and using prior insights to guide our analysis of the adaptation observed at all three levels.
The first task considered is taken from [Todd & Miller, 1991]. A
decision-making neural network must be built that receives input from
both deterministic and probabilistic sources, where the sensors for
these informational sources are placed in ``environments'' of variable
conditions. Because models of both generational variation across
populations and within-lifetime variation across individuals are
possible, this has proven a useful task for observing developmental dynamics.
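The structure of this task can be sketched minimally as follows. This is our own simplified reading of the Todd & Miller setting, with invented parameter names: each trial presents one of two states, one sensor reports the state deterministically, the other is correct only with probability `p_reliable`, and that reliability is what varies across ``environments''.

```python
import random

random.seed(1)

def trial(p_reliable):
    """One trial: a hidden state plus two sensor readings of it."""
    state = random.choice([+1, -1])        # e.g. +1 = food, -1 = poison
    deterministic = state                  # this sensor is always correct
    probabilistic = state if random.random() < p_reliable else -state
    return state, deterministic, probabilistic

def sensor_accuracy(which, p_reliable, trials=2000):
    """Empirical accuracy of deciding from a single sensor."""
    hits = 0
    for _ in range(trials):
        state, det, prob = trial(p_reliable)
        reading = det if which == "deterministic" else prob
        hits += (reading == state)
    return hits / trials
```

The interesting question for evolution is which sensor a network should innately trust, and when it pays to keep that weighting learnable so an individual can discover its own environment's reliability within its lifetime.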
We have analyzed solutions from 50 separate evolutionary runs on XOR. The figure classifies these solutions by the Boolean functions the hidden units use to compute XOR. The Combo category represents solutions that combined two or more Boolean functions. The ?? category contains solutions that exploit recurrence in the network, allowing neurons to have temporal dependencies. These networks are difficult to analyze because the activations of the intermediary neurons may not be consistent over time.
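The classification step described above can be sketched as follows (our own illustration of the analysis, not the authors' tooling): a hidden unit's outputs over the four input pairs are matched against named truth tables. A unit whose output varies over time, as in the recurrent case, produces no single truth table, which is precisely why those networks resist this kind of classification.

```python
INPUTS = [(0, 0), (0, 1), (1, 0), (1, 1)]

# Truth tables for the named two-input Boolean functions, ordered as INPUTS.
TRUTH_TABLES = {
    "AND":  (0, 0, 0, 1),
    "OR":   (0, 1, 1, 1),
    "NAND": (1, 1, 1, 0),
    "NOR":  (1, 0, 0, 0),
    "XOR":  (0, 1, 1, 0),
}

def classify(unit):
    """Name the Boolean function a unit computes, or 'unclassified'."""
    outputs = tuple(unit(a, b) for a, b in INPUTS)
    for name, table in TRUTH_TABLES.items():
        if outputs == table:
            return name
    return "unclassified"
```

For example, a threshold unit `lambda a, b: int(a + b >= 1)` classifies as OR, and an XOR solution built from an OR unit and a NAND unit feeding an AND output would fall into the Combo category.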
We are currently working on new problem domains in which the agent is situated in an environment through which it must move, using the discrete-space ``maze'' environments that are prevalent in the reinforcement learning literature. This will allow comparison against well-known models based on partially observable Markov decision processes.
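A minimal sketch of such a maze, with a layout and observation encoding of our own choosing, shows where the partial observability comes from: the agent senses only which of its four neighbouring cells are walls, so distinct locations can produce identical observations, as in a POMDP.

```python
MAZE = [
    "#####",
    "#   #",
    "# # #",
    "#   #",
    "#####",
]
MOVES = {"N": (-1, 0), "S": (1, 0), "E": (0, 1), "W": (0, -1)}

def observe(pos):
    """4-bit local observation: is each neighbouring cell a wall?"""
    r, c = pos
    return tuple(int(MAZE[r + dr][c + dc] == "#") for dr, dc in MOVES.values())

def step(pos, action):
    """Move one cell unless blocked by a wall; return the new position."""
    dr, dc = MOVES[action]
    r, c = pos[0] + dr, pos[1] + dc
    return (r, c) if MAZE[r][c] != "#" else pos
```

In this layout the cells above and below the central block yield the same observation, so a purely reactive network cannot distinguish them; resolving such aliased states is what makes these tasks a probe of memory and recurrence.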