Neurocomputational Methods for Autonomous Cognitive Control


Date

2014

Abstract

Artificial Intelligence can be divided into symbolic and sub-symbolic methods, with neural networks making up a majority of the latter. Symbolic systems have the advantage when capabilities such as deduction and planning are required, while sub-symbolic ones are preferable for tasks requiring skills such as perception and generalization. One of the domains in which neural approaches tend to fare poorly is cognitive control: maintaining short-term memory, inhibiting distractions, and shifting attention. Our own biological neural networks are more than capable of these sorts of executive functions, but artificial neural networks struggle with them. This work addresses that gap: the cognitive control that is possible with both symbolic AI systems and biological neural networks, but not yet with artificial neural networks. To do so, I identify a set of general-purpose, regional-level functions and interactions that support cognitive control in large-scale neural architectures. My approach rests on three main pillars: a region-and-pathway architecture inspired by the human cerebral cortex and trained with biologically plausible Hebbian learning; neural regions that each serve as an attractor network able to learn sequences; and neural regions that learn not only to exchange information but also to modulate the functions of other regions. The resulting networks behave according to their own memory contents rather than exclusively according to their structure. Because they learn not just memories of the environment but also procedures for tasks, it is possible to "program" these neural networks with the desired behaviors.
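
To make the region-and-pathway idea concrete, the following Python sketch shows one way a Hebbian attractor "region" and a gated pathway between regions could look. This is an illustration only: the Region class, the sign-threshold update, and the scalar gating rule are simplifying assumptions of mine, not the dissertation's implementation.

import numpy as np

class Region:
    """A small attractor-style region that stores patterns with Hebbian learning."""
    def __init__(self, n):
        self.n = n
        self.W = np.zeros((n, n))   # recurrent weights within the region
        self.state = np.ones(n)     # +/-1 activity vector

    def learn(self, patterns):
        # Hebbian outer-product learning of fixed-point attractors
        for p in patterns:
            self.W += np.outer(p, p) / self.n
        np.fill_diagonal(self.W, 0.0)

    def step(self, external=0.0):
        # Sign-threshold update driven by recurrent plus external (pathway) input
        self.state = np.sign(self.W @ self.state + external)
        return self.state

def gated_pathway(source, target, pathway_W, gate, threshold=0.0):
    """Send source activity to target only when the gating region is 'open'."""
    gate_open = gate.state.mean() > threshold   # crude scalar gate (an assumption)
    external = pathway_W @ source.state if gate_open else np.zeros(target.n)
    return target.step(external)

# Example: a small 'gate' region decides whether activity in 'src' reaches 'dst'.
rng = np.random.default_rng(0)
src, dst, gate = Region(32), Region(32), Region(8)
src.state = np.sign(rng.standard_normal(32))
_ = gated_pathway(src, dst, 0.1 * rng.standard_normal((32, 32)), gate)

The point of the sketch is the division of labor described above: within-region weights store memories, pathway weights exchange information between regions, and a third region modulates whether that exchange happens at all.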

This research makes four primary contributions. First, the extension of Hopfield-like attractor networks from processing only fixed-point attractors to processing sequential ones, accomplished through a novel technique that I developed: the introduction of temporally asymmetric weights. Second, the combination of several such networks to create models capable of autonomously directing their own performance of cognitive control tasks; by learning procedural memories for a task, these models perform in ways that match those of human subjects in key respects. Third, the extension of this approach to spatial domains, binding together visuospatial data to perform a complex memory task at the level observed in humans and in a comparable symbolic model. Finally, the integration of these new memories and learning procedures so that models can respond to feedback from the environment, improving with experience by refining their own internal representations of their instructions. These results establish that regional networks, sequential attractor dynamics, and gated connections together provide an effective way to accomplish the difficult task of neurally based cognitive control.
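
The first contribution, adding temporally asymmetric weights so that a Hopfield-like network steps through a stored sequence rather than settling into a single fixed point, can be sketched in a few lines. This is a simplified illustration under assumptions of my own (synchronous updates, one pattern per step, and a hand-picked asymmetric gain lam), not the dissertation's exact formulation.

import numpy as np

def train_sequence(patterns, lam=1.5):
    """Return symmetric (pattern-storage) and asymmetric (transition) weights."""
    n = patterns.shape[1]
    W_sym = np.zeros((n, n))
    W_asym = np.zeros((n, n))
    for p in patterns:                          # standard Hopfield storage
        W_sym += np.outer(p, p) / n
    for t in range(len(patterns) - 1):          # asymmetric term maps p[t] -> p[t+1]
        W_asym += np.outer(patterns[t + 1], patterns[t]) / n
    np.fill_diagonal(W_sym, 0.0)
    return W_sym, lam * W_asym

def recall_sequence(W_sym, W_asym, start, steps):
    """Synchronous recall: the asymmetric term pushes the state along the sequence."""
    s = np.sign(start.copy())
    trajectory = [s.copy()]
    for _ in range(steps):
        s = np.sign((W_sym + W_asym) @ s)
        trajectory.append(s.copy())
    return trajectory

# Cue the network with the first pattern; recall then steps through the rest.
rng = np.random.default_rng(0)
seq = np.sign(rng.standard_normal((3, 64)))
W_sym, W_asym = train_sequence(seq)
states = recall_sequence(W_sym, W_asym, seq[0], steps=3)

With lam greater than 1, the asymmetric term dominates the recurrent input at each synchronous update, so the state visits the stored patterns in order rather than remaining at a fixed point; the contributions above combine this kind of sequential attractor dynamic with the regional gating sketched earlier.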
