Now showing items 1-10 of 24
Computational Capabilities of Recurrent NARX Neural Networks
(1998-10-15)
Recently, fully connected recurrent neural networks have been proven to be
computationally rich: at least as powerful as Turing machines. This
work focuses on another network which is popular in control applications
and ...
The Neural Network Pushdown Automaton: Model, Stack and Learning Simulations
(1998-10-15)
In order for neural networks to learn complex languages or grammars, they
must have sufficient computational power or resources to recognize or generate
such languages. Though many approaches have been discussed, one ...
Using Recurrent Neural Networks to Learn the Structure of Interconnection Networks
(1998-10-15)
A modified Recurrent Neural Network (RNN) is used to learn a
Self-Routing Interconnection Network (SRIN)
from a set of routing examples. The RNN is modified so
that it has several distinct initial states. This
is ...
Extraction of Rules from Discrete-Time Recurrent Neural Networks
(1998-10-15)
The extraction of symbolic knowledge from trained neural networks and the
direct encoding of (partial) knowledge into networks prior to training are
important issues. They allow the exchange of information between symbolic
and ...
Face Recognition: A Hybrid Neural Network Approach
(1998-10-15)
Faces represent complex, multidimensional, meaningful visual stimuli,
and developing a computational model for face recognition is
difficult. We present a hybrid neural network solution which compares
favorably with other ...
How Embedded Memory in Recurrent Neural Network Architectures Helps Learning Long-term Dependencies
(1998-10-15)
Learning long-term temporal dependencies with recurrent neural
networks can be a difficult problem. It has recently been
shown that a class of recurrent neural networks called NARX
networks performs much better than ...
Product Unit Learning
(1998-10-15)
Product units provide a method of automatically learning the
higher-order input combinations required for the efficient synthesis of
Boolean logic functions by neural networks. Product units also have a
higher information ...
Neural Learning of Chaotic Dynamics: The Error Propagation Algorithm
(1998-10-15)
An algorithm is introduced that trains a neural network to identify
chaotic dynamics from a single measured time-series. The algorithm has
four special features:
1. The state of the system is extracted from the time-series ...
Learning a Class of Large Finite State Machines with a Recurrent Neural Network
(1998-10-15)
One of the issues in any learning model is how it scales with problem
size. Neural networks have not been immune to scaling issues. We show that a
dynamically driven discrete-time recurrent network (DRNN) can learn rather ...
Fixed Points in Two-Neuron Discrete Time Recurrent Networks: Stability and Bifurcation Considerations
(1998-10-15)
The position, number and stability types of fixed points of a two-neuron
recurrent network with nonzero weights are investigated. Using simple
geometrical arguments in the space of derivatives of the sigmoid transfer
function ...
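The fixed-point question in the last item can be probed numerically. A minimal sketch, with illustrative weights that are not taken from the paper: iterating a two-neuron discrete-time sigmoid network from an initial state converges to a stable fixed point when the Jacobian's eigenvalues lie inside the unit circle.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def step(x, W, b):
    # One update of a two-neuron discrete-time recurrent network:
    # x_{t+1} = sigmoid(W x_t + b)
    return sigmoid(W @ x + b)

# Hypothetical weight matrix and bias (chosen for illustration only)
W = np.array([[2.0, -1.0],
              [1.0,  2.0]])
b = np.array([-0.5, -0.5])

# Iterate the map until it settles onto a fixed point
x = np.array([0.5, 0.5])
for _ in range(1000):
    x = step(x, W, b)

# At convergence, x satisfies x = sigmoid(W x + b) to numerical precision
residual = np.max(np.abs(step(x, W, b) - x))
```

Depending on the weights and biases, such a two-neuron map can have one or several fixed points of differing stability, which is the kind of enumeration the abstract describes.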