Browsing by Author "Horne, Bill G."
Now showing 1 - 11 of 11
Item
Alternative Discrete-Time Operators and Their Application to Nonlinear Models (1998-10-15)
Back, Andrew D.; Tsoi, Ah Chung; Horne, Bill G.; Giles, C. Lee
The shift operator, defined as q x(t) = x(t+1), is the basis for almost all discrete-time models. It has been shown, however, that linear models based on the shift operator suffer problems when used to model lightly-damped low-frequency (LDLF) systems, with poles near $(1,0)$ on the unit circle in the complex plane. This problem occurs under fast sampling conditions: as the sampling rate increases, coefficient sensitivity and round-off noise become a problem, because the difference between successive sampled inputs becomes smaller and smaller. The resulting coefficients of the model approach the coefficients obtained in a binomial expansion, regardless of the underlying continuous-time system. This implies that for a given finite wordlength, severe inaccuracies may result. Wordlengths for the coefficients may also need to be made longer to accommodate models which have low-frequency characteristics, corresponding to poles in the neighbourhood of $(1,0)$. These problems also arise in neural network models which comprise linear parts and nonlinear neural activation functions. Various alternative discrete-time operators can be introduced which offer numerical and computational advantages over the conventional shift operator. The alternative discrete-time operators have been proposed independently of each other in the fields of digital filtering, adaptive control and neural networks. These include the delta, rho, gamma and bilinear operators. In this paper we first review these operators and examine some of their properties. An analysis of the TDNN and FIR MLP network structures is given which shows their susceptibility to parameter sensitivity problems. Subsequently, it is shown that models may be formulated using alternative discrete-time operators which have low sensitivity properties. Consideration is given to the problem of finding parameters for stable alternative discrete-time operators. A learning algorithm which adapts the alternative discrete-time operator parameters on-line is presented for MLP neural network models based on alternative discrete-time operators. It is shown that neural network models which use these alternative discrete-time operators perform better than those using the shift operator alone. (Also cross-referenced as UMIACS-TR-97-03)
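To make the operator comparison concrete, the sketch below simulates the same first-order low-pass filter in shift-operator and delta-operator form, where the delta operator is defined as $\delta = (q-1)/T$ for sampling interval $T$. The filter, its pole and the sampling rates are illustrative assumptions, not values from the paper; the point is that the shift-form coefficient drifts toward 1 under fast sampling while the delta-form coefficient stays at the continuous-time value.

```python
import numpy as np

def simulate_shift(y0, u, c):
    """First-order filter in shift-operator form: y(t+1) = c*y(t) + (1-c)*u(t)."""
    y, out = y0, []
    for ut in u:
        y = c * y + (1.0 - c) * ut
        out.append(y)
    return np.array(out)

def simulate_delta(y0, u, a, T):
    """Same filter in delta-operator form: delta y = -a*y + a*u with
    delta = (q - 1)/T, i.e. y(t+1) = y(t) + T*(-a*y(t) + a*u(t))."""
    y, out = y0, []
    for ut in u:
        y = y + T * (-a * y + a * ut)
        out.append(y)
    return np.array(out)

a = 1.0                       # continuous-time pole location (illustrative)
u = np.ones(1000)             # step input
for T in (0.1, 0.01, 0.001):  # faster and faster sampling
    c = 1.0 - a * T           # shift-form coefficient drifts toward 1.0
    ys = simulate_shift(0.0, u, c)
    yd = simulate_delta(0.0, u, a, T)
    # The two forms agree in exact arithmetic; only their coefficients differ.
    print(f"T={T}: shift coeff={c:.6f}, delta coeff={a}, "
          f"max |difference|={np.max(np.abs(ys - yd)):.2e}")
```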
Item
Computational Capabilities of Recurrent NARX Neural Networks (1998-10-15)
Siegelmann, Hava T.; Horne, Bill G.; Giles, C. Lee
Recently, fully connected recurrent neural networks have been proven to be computationally rich --- at least as powerful as Turing machines. This work focuses on another network which is popular in control applications and has been found to be very effective at learning a variety of problems. These networks are based upon Nonlinear AutoRegressive models with eXogenous Inputs (NARX models), and are therefore called NARX networks. As opposed to other recurrent networks, NARX networks have limited feedback which comes only from the output neuron rather than from hidden states. They are formalized by
\[ y(t) = \Psi\left( u(t-n_u), \ldots, u(t-1), u(t), y(t-n_y), \ldots, y(t-1) \right), \]
where $u(t)$ and $y(t)$ represent the input and output of the network at time $t$, $n_u$ and $n_y$ are the input and output orders, and the function $\Psi$ is the mapping performed by a Multilayer Perceptron. We constructively prove that NARX networks with a finite number of parameters are computationally as strong as fully connected recurrent networks, and thus as Turing machines. We conclude that in theory one can use NARX models rather than conventional recurrent networks, without any computational loss, even though their feedback is limited. Furthermore, these results raise the issue of what amount of feedback or recurrence is necessary for any network to be Turing equivalent, and what restrictions on feedback limit computational power. (Also cross-referenced as UMIACS-TR-95-12)
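The formula above is easy to realize as a tapped delay line feeding an MLP. The following is a minimal sketch of a NARX forward pass; the one-hidden-layer tanh MLP for $\Psi$, the layer width and the random weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_u, n_y, n_hidden = 3, 2, 8  # input order, output order, MLP width (assumed)
W1 = rng.normal(size=(n_hidden, n_u + 1 + n_y))  # taps u(t-n_u..t), y(t-n_y..t-1)
b1 = np.zeros(n_hidden)
W2 = rng.normal(size=n_hidden)
b2 = 0.0

def psi(taps):
    """One-hidden-layer tanh MLP standing in for the mapping Psi."""
    return W2 @ np.tanh(W1 @ taps + b1) + b2

def narx_run(u):
    """y(t) = Psi(u(t-n_u),...,u(t), y(t-n_y),...,y(t-1)), zero initial taps."""
    u_taps = np.zeros(n_u + 1)  # holds u(t-n_u), ..., u(t)
    y_taps = np.zeros(n_y)      # holds y(t-n_y), ..., y(t-1)
    ys = []
    for ut in u:
        u_taps = np.roll(u_taps, -1); u_taps[-1] = ut
        y = psi(np.concatenate([u_taps, y_taps]))
        y_taps = np.roll(y_taps, -1); y_taps[-1] = y
        ys.append(y)
    return np.array(ys)

print(narx_run(np.sin(np.linspace(0.0, 6.28, 20)))[:5])
```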
Item
A Delay Damage Model Selection Algorithm for NARX Neural Networks (1998-10-15)
Lin, Tsungnan; Giles, C. Lee; Horne, Bill G.; Kung, Sun-Yang
Recurrent neural networks have become popular models for system identification and time series prediction. NARX (Nonlinear AutoRegressive models with eXogenous inputs) neural network models are a popular subclass of recurrent networks and have been used in many applications. Though embedded memory can be found in all recurrent network models, it is particularly prominent in NARX models. We show that using intelligent memory order selection through pruning and good initial heuristics significantly improves the generalization and predictive performance of these nonlinear systems on problems as diverse as grammatical inference and time series prediction. (Also cross-referenced as UMIACS-TR-96-77)

Item
Finite State Machines and Recurrent Neural Networks -- Automata and Dynamical Systems Approaches (1998-10-15)
Tino, Peter; Horne, Bill G.; Giles, C. Lee
We present two approaches to the analysis of the relationship between a recurrent neural network (RNN) and the finite state machine \( {\cal M} \) the network is able to exactly mimic. First, the network is treated as a state machine and the relationship between the RNN and \( {\cal M} \) is established in the context of the algebraic theory of automata. In the second approach, the RNN is viewed as a set of discrete-time dynamical systems associated with the input symbols of \( {\cal M} \). In particular, issues concerning the network representation of loops and cycles in the state transition diagram of \( {\cal M} \) are shown to provide a basis for interpreting the learning process from the point of view of bifurcation analysis. The circumstances under which a loop corresponding to an input symbol \( x \) is represented by an attractive fixed point of the underlying dynamical system associated with \( x \) are investigated. For the case of two recurrent neurons, under some assumptions on weight values, bifurcations can be understood in the geometrical context of the intersection of increasing and decreasing parts of curves defining fixed points. The most typical bifurcation responsible for the creation of a new fixed point is the saddle-node bifurcation. (Also cross-referenced as UMIACS-TR-95-1)

Item
Fixed Points in Two-Neuron Discrete Time Recurrent Networks: Stability and Bifurcation Considerations (1998-10-15)
Tino, Peter; Horne, Bill G.; Giles, C. Lee
The position, number and stability types of fixed points of a two-neuron recurrent network with nonzero weights are investigated. Using simple geometrical arguments in the space of derivatives of the sigmoid transfer function with respect to the weighted sum of neuron inputs, we partition the network state space into several regions corresponding to stability types of the fixed points. If the neurons have the same mutual interaction pattern, i.e. they either mutually inhibit or mutually excite each other, a lower bound on the rate of convergence of the attractive fixed points towards the saturation values, as the absolute values of the weights on the self-loops grow, is given. The role of weights in the location of fixed points is explored through an intuitively appealing characterization of neurons according to their inhibition/excitation performance in the network. In particular, each neuron can be of one of four types: greedy, enthusiastic, altruistic or depressed. Both with and without external inhibition/excitation sources, we investigate the position and number of fixed points according to the character of the neurons. When both neurons self-excite (or self-inhibit) themselves and have the same mutual interaction pattern, the mechanism of creation of a new attractive fixed point is shown to be that of saddle-node bifurcation. (Also cross-referenced as UMIACS-TR-95-51)
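The fixed-point and stability analysis described in the two abstracts above can be explored numerically for small networks. The sketch below iterates a two-neuron sigmoid network from a grid of initial states to locate its attractive fixed points, then classifies each by the spectral radius of the Jacobian of the state-update map; the weight and bias values are arbitrary illustrative choices, not taken from the papers.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Two-neuron discrete-time recurrent network: s(t+1) = sigmoid(W @ s(t) + b).
# Both neurons self-excite and mutually inhibit; the values are illustrative.
W = np.array([[6.0, -2.0],
              [-2.0, 6.0]])
b = np.array([-3.0, -3.0])

def step(s):
    return sigmoid(W @ s + b)

def jacobian(s):
    # Chain rule: J_ij = sigmoid'(a_i) * W_ij with a = W s + b,
    # and sigmoid'(a) = y (1 - y) for y = sigmoid(a).
    y = step(s)
    return (y * (1.0 - y))[:, None] * W

# Iterate from a grid of initial states; record distinct fixed points reached.
fixed_points = []
for i, j in np.ndindex(5, 5):
    s = np.array([i, j], dtype=float) / 4.0
    for _ in range(200):
        s = step(s)
    if np.allclose(step(s), s, atol=1e-9) and \
       not any(np.allclose(s, fp, atol=1e-6) for fp in fixed_points):
        fixed_points.append(s)

for fp in fixed_points:
    rho = max(abs(np.linalg.eigvals(jacobian(fp))))
    kind = "attractive" if rho < 1.0 else "not attractive"
    print(f"fixed point {np.round(fp, 4)}  spectral radius {rho:.3f}  ({kind})")
```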
Item
How Embedded Memory in Recurrent Neural Network Architectures Helps Learning Long-term Dependencies (1998-10-15)
Lin, Tsungnan; Horne, Bill G.; Giles, C. Lee
Learning long-term temporal dependencies with recurrent neural networks can be a difficult problem. It has recently been shown that a class of recurrent neural networks called NARX networks performs much better than conventional recurrent neural networks at learning certain simple long-term dependency problems. The intuitive explanation for this behavior is that the output memories of a NARX network can be manifested as jump-ahead connections in the time-unfolded network. These jump-ahead connections can propagate gradient information more efficiently, thus reducing the sensitivity of the network to long-term dependencies. This work gives empirical justification to our hypothesis that similar improvements in learning long-term dependencies can be achieved with other classes of recurrent neural network architectures simply by increasing the order of the embedded memory. In particular, we explore the impact of learning simple long-term dependency problems with three classes of recurrent neural network architectures: globally recurrent networks, locally recurrent networks, and NARX (output feedback) networks. Comparing the performance of these architectures with different orders of embedded memory on two simple long-term dependency problems shows that all of these classes of network architectures demonstrate significant improvement in learning long-term dependencies when the order of embedded memory is increased. These results can be important to a user comfortable with a specific recurrent neural network architecture, because simply increasing the embedded memory order will make the architecture more robust to the problem of long-term dependency learning. (Also cross-referenced as UMIACS-TR-96-28)

Item
Learning a Class of Large Finite State Machines with a Recurrent Neural Network (1998-10-15)
Giles, C. Lee; Horne, Bill G.; Lin, T.
One of the issues in any learning model is how it scales with problem size. Neural networks have not been immune to scaling issues. We show that a dynamically-driven discrete-time recurrent network (DRNN) can learn rather large grammatical inference problems when the strings of a finite memory machine (FMM) are encoded as temporal sequences. FMMs are a subclass of finite state machines which have a finite memory, or a finite order of inputs and outputs. The DRNN that learns the FMM is a neural network that maps directly from the sequential machine implementation of the FMM. It has feedback only from the output and not from any hidden units; an example is the recurrent network of Narendra and Parthasarathy. (FMMs that have zero order in the feedback of outputs are called definite memory machines and are analogous to time-delay or Finite Impulse Response neural networks.) Due to their topology, these DRNNs are at least as powerful as any sequential machine implementation of an FMM and should be capable of representing any FMM. We choose to learn particular FMMs: specifically, FMMs which have a large number of states (simulations are for $256$ and $512$ state FMMs) but minimal order, relatively small depth and little logic when the FMM is implemented as a sequential machine. Simulations of the number of training examples versus generalization performance and FMM extraction size show that the number of training samples necessary for perfect generalization is less than that necessary to completely characterize the FMM to be learned. This is in a sense a best-case learning problem, since any arbitrarily chosen FMM with a minimal number of states would have much higher order and string depth, and would most likely require more logic in its sequential machine implementation. (Also cross-referenced as UMIACS-TR-94-94)

Item
Learning Long-Term Dependencies is Not as Difficult With NARX Recurrent Neural Networks (1998-10-15)
Lin, Tsungnan; Horne, Bill G.; Tino, Peter; Giles, C. Lee
It has recently been shown that gradient descent learning algorithms for recurrent neural networks can perform poorly on tasks that involve long-term dependencies, i.e. those problems for which the desired output depends on inputs presented at times far in the past. In this paper we explore the long-term dependencies problem for a class of architectures called NARX recurrent neural networks, which have powerful representational capabilities. We have previously reported that gradient descent learning is more effective in NARX networks than in recurrent neural network architectures that have ``hidden states'' on problems including grammatical inference and nonlinear system identification. Typically, the network converges much faster and generalizes better than other networks. The results in this paper are an attempt to explain this phenomenon. We present some experimental results which show that NARX networks can often retain information for two to three times as long as conventional recurrent neural networks. We show that although NARX networks do not circumvent the problem of long-term dependencies, they can greatly improve performance on long-term dependency problems. We also describe in detail some of the assumptions regarding what it means to latch information robustly, and suggest possible ways to loosen these assumptions. (Also cross-referenced as UMIACS-TR-95-78)

Item
Performance of On-Line Learning Methods in Predicting Multiprocessor Memory Access Patterns (1998-10-15)
Sakr, Majd F.; Levitan, Steven P.; Chiarulli, Donald M.; Horne, Bill G.; Giles, C. Lee
Shared memory multiprocessors require reconfigurable interconnection networks (INs) for scalability. These INs are reconfigured by an IN control unit. However, these INs are often plagued by undesirable reconfiguration time that is primarily due to control latency, the time the control unit takes to decide on a desired new IN configuration. To reduce control latency, a trainable prediction unit (PU) was devised and added to the IN controller. The PU's job is to anticipate and reduce control configuration time, the major component of the control latency. Three different on-line prediction techniques were tested to learn and predict repetitive memory access patterns for three typical parallel processing applications: the 2-D relaxation algorithm, matrix multiply and the Fast Fourier Transform. The predictions were then used by a routing control algorithm to reduce control latency by configuring the IN to provide needed memory access paths before they were requested. The three prediction techniques tested were (1) a Markov predictor, (2) a linear predictor and (3) a time-delay neural network (TDNN) predictor. As expected, different predictors performed best on different applications; however, the TDNN produced the best overall results. (Also cross-referenced as UMIACS-TR-96-59)
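Of the three predictors compared above, the Markov predictor is the simplest to sketch: it counts observed transitions between successive symbols (e.g. memory access addresses) and predicts the most frequent successor of the current one. The code below is a generic first-order version under that assumption, not the paper's implementation.

```python
from collections import Counter, defaultdict

class MarkovPredictor:
    """First-order Markov predictor: predicts the most frequently observed
    successor of the current symbol (e.g. a memory access address)."""

    def __init__(self):
        self.counts = defaultdict(Counter)  # counts[prev][next] = frequency
        self.prev = None

    def observe(self, symbol):
        if self.prev is not None:
            self.counts[self.prev][symbol] += 1
        self.prev = symbol

    def predict(self):
        successors = self.counts[self.prev]
        return successors.most_common(1)[0][0] if successors else None

# A repetitive access pattern such as a sweep might produce (illustrative).
trace = [0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3]
p, hits = MarkovPredictor(), 0
for addr in trace:
    if p.predict() == addr:
        hits += 1
    p.observe(addr)
print(f"correct predictions: {hits}/{len(trace)}")
```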
Item
Product Unit Learning (1998-10-15)
Leerink, Laurens R.; Giles, C. Lee; Horne, Bill G.; Jabri, Marwan A.
Product units provide a method of automatically learning the higher-order input combinations required for the efficient synthesis of Boolean logic functions by neural networks. Product units also have a higher information capacity than sigmoidal networks. However, this activation function has not received much attention in the literature. A possible reason for this is that one encounters some problems when using standard backpropagation to train networks containing these units. This report examines these problems and evaluates the performance of three training algorithms on networks of this type. Empirical results indicate that the error surface of networks containing product units has more local minima than that of corresponding networks with summation units. For this reason, a combination of local and global training algorithms was found to provide the most reliable convergence. We then investigate how `hints' can be added to the training algorithm. By extracting a common frequency from the input weights, and training this frequency separately, we show that convergence can be accelerated. A constructive algorithm is then introduced which adds product units to a network as required by the problem. Simulations show that for the same problems this method creates a network with significantly fewer neurons than those constructed by the tiling and upstart algorithms. In order to compare their performance with other transfer functions, product units were implemented as candidate units in the Cascade Correlation (CC) \cite{Fahlman90} system. Using these candidate units resulted in smaller networks which trained faster than when any of the standard (three sigmoidal types and one Gaussian) transfer functions were used. This superiority was confirmed when a pool of candidate units with four different nonlinear activation functions, which have to compete for addition to the network, was used. Extensive simulations showed that for the problem of implementing random Boolean logic functions, product units are always chosen over any of the other transfer functions. (Also cross-referenced as UMIACS-TR-95-80)
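A product unit computes a weighted product rather than a weighted sum: its activation is $\prod_i x_i^{w_i}$, which for strictly positive inputs can be evaluated as $\exp(\sum_i w_i \ln x_i)$. The sketch below shows this forward pass and the weight gradient used by backpropagation; it is a generic illustration, not the report's code.

```python
import numpy as np

def product_unit(x, w):
    """Product unit activation: prod_i x_i ** w_i, computed in log space.
    Assumes strictly positive inputs; general inputs need complex-log handling."""
    return np.exp(np.dot(w, np.log(x)))

def product_unit_grad(x, w):
    """Gradient of the activation with respect to the weights:
    d/dw_i [exp(sum_j w_j ln x_j)] = ln(x_i) * activation."""
    return np.log(x) * product_unit(x, w)

x = np.array([2.0, 3.0, 0.5])
w = np.array([1.0, 2.0, -1.0])
print(product_unit(x, w))        # 2^1 * 3^2 * 0.5^-1 = 36.0
print(product_unit_grad(x, w))
```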
Item
Synaptic Noise in Dynamically-driven Recurrent Neural Networks: Convergence and Generalization (1998-10-15)
Jim, Kam; Giles, C. Lee; Horne, Bill G.
There has been much interest in applying noise to feedforward neural networks in order to observe its effect on network performance. We extend these results by introducing and analyzing various methods of injecting synaptic noise into dynamically-driven recurrent networks during training. By analyzing and comparing the effects of these noise models on the error function, we found that applying a controlled amount of noise during training can improve convergence time and generalization performance. In addition, we analyze the effects of various noise parameters (additive vs. multiplicative, cumulative vs. non-cumulative, per time step vs. per sequence) and predict that the best overall performance can be achieved by injecting additive noise at each time step. Noise contributes a second-order gradient term to the error function which can be viewed as an anticipatory agent that aids convergence. This term appears to find promising regions of weight space in the beginning stages of training, when the training error is large, and should improve convergence on error surfaces with local minima. Synaptic noise also enhances the error function by favoring internal representations where state nodes are operating in the saturated regions of the sigmoid discriminant function, thus improving generalization to longer sequences. We substantiate these predictions by performing extensive simulations on learning the dual parity grammar from grammatical strings encoded as temporal sequences with a second-order fully recurrent neural network. (Also cross-referenced as UMIACS-TR-94-89)
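As a minimal sketch of the noise model the abstract favors (additive, non-cumulative synaptic noise injected at each time step), the loop below perturbs the weights with fresh zero-mean Gaussian noise before every state update during training, leaving the stored weights themselves untouched. The network shape, tanh state update and noise level are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_state, n_in = 4, 2
W = rng.normal(scale=0.5, size=(n_state, n_state + n_in))  # recurrent + input weights
noise_std = 0.05                                           # controlled noise level (assumed)

def forward_with_synaptic_noise(inputs, W, train=True):
    """Run the recurrent net, adding fresh additive weight noise at each time
    step. Non-cumulative: the perturbation is never written back into W."""
    s = np.zeros(n_state)
    states = []
    for u in inputs:
        W_noisy = W + rng.normal(scale=noise_std, size=W.shape) if train else W
        s = np.tanh(W_noisy @ np.concatenate([s, u]))
        states.append(s)
    return np.array(states)

inputs = rng.normal(size=(10, n_in))
print(forward_with_synaptic_noise(inputs, W)[-1])
```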