My research focus is a characterization of adaptive knowledge representations. Issues of representation have always played a central role in artificial intelligence (AI), as well as in computer science and theories of mind more generally. But I would argue that most of this work has (implicitly or explicitly) assumed that the representational language is wielded manually, by humans encoding an explicit characterization of what they believe to be true of the world. Philosophical difficulties aside, some modern machine learning techniques can automatically develop elaborate representations of the world. A central result of the mathematical theory of induction is that the selection of an appropriate language for representing learned concepts is critical to their identification. It is therefore appropriate to reconsider basic notions of what makes for good knowledge representation, treating the constraints imposed by the learning process as sine qua non, alongside those (expressive adequacy, valid inference, etc.) more typically considered in AI.
I have found it productive to pursue this general interest through two more specific research projects. The first of these uses connectionist (neural) networks as a representation for the information retrieval (IR) problem. This construction allows an IR system to learn a more effective indexing representation of free-text documents as a simple by-product of the browsing behaviors of its users. Second, and more recently, I have investigated genetic algorithms (GAs), particularly interactions between neural networks and the GA, both as algorithmic techniques and as models of natural phenomena (learning and evolution, respectively). I have found that my work in these two areas allows a ``stereoscopic'' view, encompassing both the low-level biological constraints and the high-level cultural issues that lie at the heart of modern AI and cognitive science.
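The idea of an index that adapts as a by-product of browsing can be illustrated with a minimal sketch. This is not the actual system described above; it is a toy stand-in in which a term-by-document weight matrix is reinforced, Hebbian-style, whenever a user selects a document retrieved for a query. All names (`retrieve`, `reinforce`, the term list, the learning rate) are hypothetical illustrations.

```python
import numpy as np

# Toy index: rows are terms, columns are documents.
terms = ["neural", "network", "retrieval", "genetic"]
weights = np.full((len(terms), 3), 0.5)  # uniform initial term-document weights

def retrieve(query_terms, weights, terms):
    """Score each document by summing the weights of the query's terms."""
    idx = [terms.index(t) for t in query_terms if t in terms]
    return weights[idx].sum(axis=0)

def reinforce(query_terms, doc, weights, terms, lr=0.1):
    """Strengthen query-term/document associations when a user browses
    (selects) a retrieved document -- a crude stand-in for learning an
    indexing representation from browsing behavior."""
    for t in query_terms:
        if t in terms:
            i = terms.index(t)
            weights[i, doc] = min(1.0, weights[i, doc] + lr)

# A user issues the query "neural retrieval" and settles on document 2.
query = ["neural", "retrieval"]
before = retrieve(query, weights, terms).copy()
reinforce(query, doc=2, weights=weights, terms=terms)
after = retrieve(query, weights, terms)
# Document 2 now scores higher for the same query than it did before.
```

The point of the sketch is only that relevance information arrives implicitly, as a side effect of ordinary use, rather than through manual re-indexing.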