Neural Network Generation of Temporal Sequences from Single Static Vector Inputs using Varying Length Distal Target Sequences

Files

umi-umd-4183.pdf (1.33 MB)

Date

2007-04-10

Abstract

Training an agent to operate in an environment whose mappings are largely unknown is generally recognized to be exceptionally difficult. Further, granting such a learning agent the ability to produce an appropriate sequence of actions entirely from a single input stimulus remains a key problem. Various reinforcement learning techniques have been applied to such learning tasks, but many of them carry no guarantee of convergence to optimal policies. Traditional supervised learning methods offer stronger assurances of convergence, but they are not well suited to tasks where the desired actions in the learner's output space, termed proximal actions, are unavailable for training. Rather, target outputs from the environment are distal from where the learning takes place. For example, a child acquiring language who makes speech errors must learn to correct them based on heard information that reaches the auditory cortex, which is distant from the motor cortical regions that control speech output. While distal supervised learning techniques for neural networks have been devised, it remains to be established how they can be trained to produce sequences of proximal actions from only a single static input.
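To make the proximal/distal distinction concrete, the following minimal sketch (not drawn from the thesis; the module sizes, toy environment, and targets are all assumptions) illustrates the standard two-phase forward-modeling approach to distal supervised learning: a differentiable forward model is first fit to observed action-outcome pairs, and the learner is then trained by backpropagating the distal error through the frozen forward model.

# Hedged toy sketch of distal supervised learning; not the thesis code.
import torch
import torch.nn as nn

torch.manual_seed(0)

def env(u):
    # Toy stand-in for the unknown environment mapping (proximal action -> distal outcome).
    return torch.sin(u)

learner = nn.Sequential(nn.Linear(4, 16), nn.Tanh(), nn.Linear(16, 2))        # input x -> proximal action u
forward_model = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 2))  # u -> predicted distal outcome

# Phase 1: fit the forward model on observed (action, outcome) pairs ("motor babbling").
opt_f = torch.optim.Adam(forward_model.parameters(), lr=1e-2)
for _ in range(2000):
    u = torch.rand(64, 2) * 4 - 2
    loss = ((forward_model(u) - env(u)) ** 2).mean()
    opt_f.zero_grad(); loss.backward(); opt_f.step()

# Phase 2: train the learner through the *frozen* forward model, because desired
# proximal actions are never available -- only distal targets are.
for p in forward_model.parameters():
    p.requires_grad_(False)
opt_l = torch.optim.Adam(learner.parameters(), lr=1e-2)
for _ in range(2000):
    x = torch.rand(64, 4)                  # static input stimuli (toy)
    y_star = x[:, :2]                      # hypothetical distal targets
    y_pred = forward_model(learner(x))     # distal prediction of the learner's action
    loss = ((y_pred - y_star) ** 2).mean()
    opt_l.zero_grad(); loss.backward(); opt_l.step()

Because only the learner's parameters are updated in the second phase, the backward pass through the fixed forward model is what converts the distal error signal into a proximal one.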

The architecture demonstrated here incorporates recurrent multi-layered neural networks, each maintaining a form of memory in a context vector, into the distal supervised learning framework. This makes it possible to train learners that generate correct proximal sequences from single static input stimuli, in contrast to existing distal learning methods designed for non-recurrent learners, which retain no memory of their prior behavior. In addition, a technique known as teacher forcing was adapted for distal sequential learning settings and is shown to yield more efficient use of the recurrent network's context layer. The effectiveness of this approach is demonstrated by training recurrent learners to acquire phoneme-sequence-generating behavior using only previously heard and stored auditory phoneme sequences. The results indicate that recurrent networks can be integrated with distal learning methods to create effective sequence generators even when continuously updated state information is unavailable.
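As a rough illustration only (an assumed Elman-style context cell, toy dimensions, a fixed sequence length, and synthetic targets; the thesis's adapted teacher-forcing procedure is not reproduced here), the sketch below shows how a recurrent learner can unroll a proximal sequence from one static input vector and be trained solely against distal target sequences through a frozen forward model.

# Hedged toy sketch of a recurrent distal sequence generator; not the thesis code.
import torch
import torch.nn as nn

class StaticInputGenerator(nn.Module):
    """Maps one static vector plus an evolving context (memory) to a proximal action sequence."""
    def __init__(self, in_dim=4, ctx_dim=16, out_dim=2, steps=5):
        super().__init__()
        self.steps = steps
        self.cell = nn.Linear(in_dim + ctx_dim, ctx_dim)  # context update
        self.out = nn.Linear(ctx_dim, out_dim)            # proximal action read-out

    def forward(self, x):
        ctx = torch.zeros(x.size(0), self.cell.out_features)
        actions = []
        for _ in range(self.steps):                       # the same static x is fed at every step
            ctx = torch.tanh(self.cell(torch.cat([x, ctx], dim=1)))
            actions.append(self.out(ctx))
        return torch.stack(actions, dim=1)                # (batch, steps, out_dim)

# Frozen forward model standing in for the environment (e.g. articulation -> audition).
forward_model = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 3))
for p in forward_model.parameters():
    p.requires_grad_(False)

gen = StaticInputGenerator()
opt = torch.optim.Adam(gen.parameters(), lr=1e-2)
x = torch.randn(8, 4)                                  # 8 static input stimuli (toy)
y_star = forward_model(torch.randn(8, 5, 2))           # hypothetical stored distal target sequences
for _ in range(500):
    u_seq = gen(x)                                     # proximal action sequence
    y_seq = forward_model(u_seq)                       # distal consequence of each action
    loss = ((y_seq - y_star) ** 2).mean()              # error exists only in distal coordinates
    opt.zero_grad(); loss.backward(); opt.step()

Since the same static vector is presented at every time step, all temporal structure in the generated sequence must come from the evolving context vector, which is the role the abstract assigns to the recurrent network's memory.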
