A NEUROCOMPUTATIONAL MODEL OF CAUSAL REASONING AND COMPOSITIONAL WORKING MEMORY FOR IMITATION LEARNING


Date

2022

Abstract

Although contemporary neural models excel in a surprisingly diverse range of application domains, they struggle to capture several key qualities of human cognition that are considered crucial for human-level artificial intelligence (AI). Some of these qualities, such as compositionality and interpretability, are readily achieved with traditional symbolic programming, leading some researchers to suggest hybrid neuro-symbolic programming as a viable route to human-level AI. However, the cognitive capabilities of biological nervous systems indicate that it should be possible to achieve human-level reasoning in artificial neural networks without the support of non-neural symbolic algorithms. Furthermore, the computational explanatory gap between cognitive and neural algorithms is a major obstacle to understanding the neural basis of cognition, an endeavor that is mutually beneficial to researchers in AI, neuroscience, and cognitive science. A viable approach to bridging this gap involves "programmable neural networks" that learn to store and evaluate symbolic expressions directly in neural memory, such as the recently developed "Neural Virtual Machine" (NVM). While the NVM achieves Turing-complete universal neural programming, its assembly-like programming language makes it difficult to express the complex algorithms and data structures that are common in symbolic AI, limiting its ability to learn human-level cognitive procedures.

I present an approach to high-level neural programming that supports human-like reasoning using only biologically plausible neural computations. First, I introduce a neural model that represents graph-based data structures as systems of dynamical attractor states called attractor graphs. This model serves as a temporally local compositional working memory that can be controlled via top-down neural gating. Then, I present a programmable neural network called NeuroLISP that learns an interpreter for a subset of Common Lisp. NeuroLISP features native support for compositional data structures, scoped variable binding, and a shared memory space in which programs can be modified as data. Empirical experiments demonstrate that NeuroLISP can learn algorithms for multiway tree processing, compositional sequence manipulation, and symbolic unification in first-order logic. Finally, I present NeuroCERIL, a neural model that performs hierarchical causal reasoning for robotic imitation learning and successfully learns a battery of procedural maintenance tasks from human demonstrations. NeuroCERIL implements a cognitively plausible and computationally efficient algorithm for hypothetico-deductive reasoning, which combines bottom-up abductive inference with top-down predictive verification. Because the hypothetico-deductive approach is broadly relevant to a variety of cognitive domains, including problem-solving and diagnostic reasoning, NeuroCERIL is a significant step toward human-level cognition in neural networks.
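To make the hypothetico-deductive cycle concrete, the following is a minimal symbolic sketch of the two phases described above: abduction proposes candidate causes from observed actions, and deduction verifies each candidate against its predicted observations. The rule names and task vocabulary here are invented for illustration; NeuroCERIL itself realizes this style of inference with neural computations rather than a symbolic lookup table.

```python
# Toy hypothetico-deductive loop (illustrative only; rules are hypothetical).
# Causal rules map a hypothesized intention to the observations it predicts.
RULES = {
    "replace-part": ["remove-old-part", "insert-new-part"],
    "inspect-part": ["remove-old-part", "insert-old-part"],
}

def abduce(observations):
    """Bottom-up: propose every intention whose predicted effects
    overlap the observed actions."""
    return [h for h, effects in RULES.items()
            if any(o in effects for o in observations)]

def verify(hypothesis, observations):
    """Top-down: a hypothesis is confirmed only if every observation
    it predicts actually occurred in the demonstration."""
    return all(e in observations for e in RULES[hypothesis])

def interpret(observations):
    """Hypothetico-deduction: abduce candidates, keep the verified ones."""
    return [h for h in abduce(observations) if verify(h, observations)]

demo = ["remove-old-part", "insert-new-part"]
print(interpret(demo))  # only "replace-part" survives verification
```

Both "replace-part" and "inspect-part" are abduced from the shared observation "remove-old-part", but only "replace-part" passes predictive verification, mirroring how top-down prediction prunes spurious bottom-up hypotheses.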
