University of Maryland Libraries
Digital Repository at the University of Maryland (DRUM)

    Neural Network Generation of Temporal Sequences from Single Static Vector Inputs using Varying Length Distal Target Sequences

umi-umd-4183.pdf (1.329 MB)
    No. of downloads: 608

Date: 2007-04-10
Author: Gittens, Shaun
Advisor: Reggia, James
    Abstract
Training an agent to operate in an environment whose mappings are largely unknown is generally recognized to be exceptionally difficult. Further, granting such a learning agent the ability to produce an appropriate sequence of actions entirely from a single input stimulus remains a key problem. Various reinforcement learning techniques have been utilized to handle such learning tasks, but convergence to optimal policies is not guaranteed for many of these methods. Traditional supervised learning methods hold more assurances of convergence, but these methods are not well suited for tasks where desired actions in the output space of the learner, termed proximal actions, are not available for training. Rather, target outputs from the environment are distal from where the learning takes place. For example, a child acquiring language skill who makes speech errors must learn to correct them based on heard information that reaches his/her auditory cortex, which is distant from the motor cortical regions that control speech output. While distal supervised learning techniques for neural networks have been devised, it remains to be established how they can be trained to produce sequences of proximal actions from only a single static input.
The architecture demonstrated here incorporates recurrent multi-layered neural networks, each maintaining some manner of memory in the form of a context vector, into the distal supervised learning framework. This enables it to train learners capable of generating correct proximal sequences from single static input stimuli. This is in contrast to existing distal learning methods, which were designed for non-recurrent neural network learners that maintain no memory of their prior behavior. Also, a technique known as teacher forcing was adapted for use in distal sequential learning settings; this adaptation is shown to result in more efficient usage of the recurrent neural network's context layer.
The effectiveness of this approach is demonstrated by applying it in training recurrent learners to acquire phoneme sequence generating behavior using only previously heard and stored auditory phoneme sequences. The results indicate that recurrent networks can be integrated with distal learning methods to create effective sequence generators even when continually updated state information is unavailable.
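The core mechanism the abstract describes — a recurrent network unrolling a whole proximal action sequence from one static input vector, with a context vector as memory and an optional teacher-forced context — can be sketched as follows. This is an illustrative Elman-style rollout under stated assumptions, not the dissertation's implementation: all layer sizes, weights, and the feedback mapping `W_fb` are invented for the example, and real teacher forcing in the distal setting involves targets in the environment's output space rather than the learner's.

```python
import numpy as np

# Illustrative sizes and random weights (assumptions, not from the thesis).
rng = np.random.default_rng(0)
n_in, n_ctx, n_out, seq_len = 4, 8, 3, 5
W_in = rng.normal(scale=0.1, size=(n_ctx, n_in))    # static input -> hidden
W_ctx = rng.normal(scale=0.1, size=(n_ctx, n_ctx))  # context -> hidden (recurrence)
W_out = rng.normal(scale=0.1, size=(n_out, n_ctx))  # hidden -> proximal action
W_fb = rng.normal(scale=0.1, size=(n_ctx, n_out))   # hypothetical target-feedback map

def generate(x, targets=None):
    """Unroll seq_len proximal outputs from a single static input x.

    With `targets` given, the next-step context is driven by the desired
    output (a simple teacher-forcing variant); otherwise the network runs
    free, feeding its own hidden state back as context.
    """
    ctx = np.zeros(n_ctx)  # context vector: the network's only memory
    outputs = []
    for t in range(seq_len):
        h = np.tanh(W_in @ x + W_ctx @ ctx)  # the SAME static x at every step
        y = np.tanh(W_out @ h)               # proximal action for step t
        outputs.append(y)
        if targets is not None:
            ctx = np.tanh(W_fb @ targets[t])  # teacher-forced context
        else:
            ctx = h                           # free-running context
    return np.stack(outputs)

x = rng.normal(size=n_in)                     # one static input stimulus
free_run = generate(x)                        # shape (seq_len, n_out)
forced = generate(x, targets=rng.normal(size=(seq_len, n_out)))
```

Because the context starts at zero and the static input is identical, the first output of the free-running and teacher-forced rollouts coincides; the trajectories diverge only through the context vector, which is exactly the memory mechanism the architecture adds to the distal learning framework.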
    URI
    http://hdl.handle.net/1903/6710
    Collections
    • Computer Science Theses and Dissertations
    • UMD Theses and Dissertations

    DRUM is brought to you by the University of Maryland Libraries
    University of Maryland, College Park, MD 20742-7011 (301)314-1328.
    Please send us your comments.
    Web Accessibility