The Neural Network Pushdown Automaton: Model, Stack and Learning Simulations

dc.contributor.author: Sun, G.Z.
dc.contributor.author: Giles, C. Lee
dc.contributor.author: Chen, H.H.
dc.contributor.author: Lee, Y.C.
dc.date.accessioned: 2004-05-31T22:23:50Z
dc.date.available: 2004-05-31T22:23:50Z
dc.date.created: 1993-08
dc.date.issued: 1998-10-15
dc.description.abstract: In order for neural networks to learn complex languages or grammars, they must have sufficient computational power or resources to recognize or generate such languages. Though many approaches have been discussed, one obvious approach to enhancing the processing power of a recurrent neural network is to couple it with an external stack memory - in effect creating a neural network pushdown automaton (NNPDA). This paper discusses this NNPDA in detail - its construction, how it can be trained, and how useful symbolic information can be extracted from the trained network. In order to couple the external stack to the neural network, an optimization method is developed which uses an error function that connects the learning of the state automaton of the neural network to the learning of the operation of the external stack. To minimize the error function using gradient descent learning, an analog stack is designed such that the action and storage of information in the stack are continuous. One interpretation of a continuous stack is the probabilistic storage of and action on data. After training on sample strings of an unknown source grammar, a quantization procedure extracts from the analog stack and neural network a discrete pushdown automaton (PDA). Simulations show that in learning deterministic context-free grammars - the balanced parenthesis language, 1^n0^n, and the deterministic palindrome language - the extracted PDA is correct in the sense that it can correctly recognize unseen strings of arbitrary length. In addition, the extracted PDAs can be shown to be identical or equivalent to the PDAs of the source grammars which were used to generate the training strings. (Also cross-referenced as UMIACS-TR-93-77.)
dc.format.extent: 371023 bytes
dc.format.mimetype: application/postscript
dc.identifier.uri: http://hdl.handle.net/1903/594
dc.language.iso: en_US
dc.relation.isAvailableAt: Digital Repository at the University of Maryland
dc.relation.isAvailableAt: University of Maryland (College Park, Md.)
dc.relation.isAvailableAt: Tech Reports in Computer Science and Engineering
dc.relation.isAvailableAt: UMIACS Technical Reports
dc.relation.ispartofseries: UM Computer Science Department; CS-TR-3118
dc.relation.ispartofseries: UMIACS; UMIACS-TR-93-77
dc.title: The Neural Network Pushdown Automaton: Model, Stack and Learning Simulations
dc.type: Technical Report
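The analog stack described in the abstract stores symbols with a continuous "thickness," so push and pop can act by fractional (probability-like) amounts rather than whole symbols, which is what makes gradient descent over stack actions possible. A minimal sketch of such a continuous stack, under the interpretation given in the abstract (class and method names are illustrative, not taken from the report):

```python
class ContinuousStack:
    """A stack whose elements carry continuous 'thickness' in [0, 1]."""

    def __init__(self):
        self.items = []  # list of (symbol, thickness) pairs; top is at the end

    def push(self, symbol, amount):
        """Push `symbol` with continuous thickness `amount`."""
        if amount > 0:
            self.items.append((symbol, amount))

    def pop(self, amount):
        """Remove `amount` of total thickness from the top of the stack."""
        while amount > 0 and self.items:
            symbol, thickness = self.items[-1]
            if thickness > amount:
                self.items[-1] = (symbol, thickness - amount)
                amount = 0
            else:
                self.items.pop()
                amount -= thickness

    def read(self):
        """Return the symbols within total depth 1 of the top, weighted by
        thickness - one reading of 'probabilistic storage of data'."""
        weights = {}
        depth = 0.0
        for symbol, thickness in reversed(self.items):
            take = min(thickness, 1.0 - depth)
            weights[symbol] = weights.get(symbol, 0.0) + take
            depth += take
            if depth >= 1.0:
                break
        return weights


stack = ContinuousStack()
stack.push('(', 0.7)
stack.push('(', 0.6)
stack.pop(0.5)          # removes only part of the top element's thickness
print(stack.read())     # roughly {'(': 0.8}
```

Because push and pop amounts are continuous, the stack contents vary smoothly with the network's outputs, so the error function over the coupled network-plus-stack system remains differentiable; the quantization step then rounds these continuous actions back to the discrete push/pop moves of an ordinary PDA.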

Files

Original bundle

Name: CS-TR-3118.ps
Size: 362.33 KB
Format: PostScript

Name: CS-TR-3118.pdf
Size: 142.45 KB
Format: Adobe Portable Document Format
Description: Auto-generated copy of CS-TR-3118.ps