Institute for Systems Research Technical Reports
Permanent URI for this collection: http://hdl.handle.net/1903/4376
This archive contains a collection of reports generated by the faculty and students of the Institute for Systems Research (ISR), a permanent, interdisciplinary research unit in the A. James Clark School of Engineering at the University of Maryland. ISR-based projects are conducted through partnerships with industry and government, bringing together faculty and students from multiple academic departments and colleges across the university.
Search Results (10 items)
Item: Commodity Trading Using Neural Networks: Models for the Gold Market (1997)
Brauner, Erik; Dayhoff, Judith E.; Sun, Xiaoyun; ISR

Essential to building a good financial forecasting model is having a realistic trading model with which to evaluate forecasting performance. Using gold trading as a test platform, we present a profit-based model that we use to evaluate a number of different approaches to forecasting. Using novel training techniques, we show that neural network forecasting systems are capable of generating returns far above those of classical regression models.

Item: Dynamic Attractors and Basin Class Capacity in Binary Neural Networks (1995)
Dayhoff, Judith E.; Palmadesso, Peter J.; ISR

The wide repertoire of attractors and basins of attraction that appear in dynamic neural networks not only serves as a model of brain activity patterns but also creates possibilities for new computational paradigms that use attractors and their basins. To develop such paradigms, it is first critical to assess neural network capacity for attractors and for differing basins of attraction, depending on the number of neurons and the weights. In this paper we analyze the attractors and basins of attraction of recurrent, fully connected, single-layer binary networks. We use the network transition graph - a graph that shows all transitions from one state to another for a given neural network - to display all oscillations and fixed-point attractors, along with their basins of attraction. Conditions are shown under which particular pairs of transitions are possible in the same neural network. We derive a lower bound of 2^(n^2 - n) on the number of possible transition graphs for an n-neuron network. Simulation results show a wide variety of transition graphs and basins of attraction, and networks sometimes have more attractors than neurons. We count thousands of basin classes - networks with differing basins of attraction - in networks with as few as five neurons. Dynamic networks show promise for overcoming the limitations of static neural networks through the use of dynamic attractors and their basins. We show that dynamic networks have high capacity for basin classes, can have more attractors than neurons, and have more stable basin boundaries than the Hopfield associative memory.
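As a rough illustration of the kind of analysis described in the basin-capacity item above, the sketch below enumerates the state-transition graph of a small fully connected binary network and reports its attractors and basin sizes. The bipolar states, synchronous sign-threshold update, and random weight matrix are assumptions chosen for illustration, not the report's specific construction.

```python
# Minimal sketch (assumed setup, not the report's code): enumerate the transition
# graph of a small fully connected binary network and list attractors and basins.
from itertools import product
import numpy as np

def next_state(state, W):
    """Synchronous update: each neuron takes the sign of its weighted input sum."""
    return tuple(1 if v >= 0 else -1 for v in W @ np.array(state))

def transition_graph(W):
    """Map every state of the n-neuron binary network to its successor state."""
    n = W.shape[0]
    return {s: next_state(s, W) for s in product((-1, 1), repeat=n)}

def attractors_and_basins(graph):
    """Follow each state until it reaches a known basin or closes a cycle."""
    basin_of = {}
    for start in graph:
        path, s = [], start
        while s not in basin_of and s not in path:
            path.append(s)
            s = graph[s]
        attractor = basin_of[s] if s in basin_of else tuple(path[path.index(s):])
        for p in path:
            basin_of[p] = attractor
    basins = {}
    for s, a in basin_of.items():
        basins.setdefault(a, []).append(s)
    return basins

rng = np.random.default_rng(0)
W = rng.normal(size=(5, 5))        # a random 5-neuron network, the scale studied in the report
np.fill_diagonal(W, 0.0)
for attractor, basin in attractors_and_basins(transition_graph(W)).items():
    print(f"attractor of length {len(attractor)} with basin of size {len(basin)}")
```

Re-drawing W and counting how many distinct basin structures appear gives a rough empirical view of the basin-class capacity the report measures.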
Item: Target Discrimination with Neural Networks (1995)
Lin, Daw-Tung; Dayhoff, Judith E.; Resch, C.L.; ISR

The feasibility of distinguishing multiple types of components of exo-atmospheric targets is demonstrated by applying the Time-Delay Neural Network (TDNN) and the Adaptive Time-Delay Neural Network (ATNN). Exo-atmospheric targets are especially difficult to distinguish with currently available techniques because all target parts follow the same spatial trajectory, so classification must be based on light sensors that record signal over time. Results demonstrate that the trained networks were able to identify warheads among other missile parts in a variety of simulated scenarios, including differing angles and tumbling. The network with adaptive time delays (the ATNN) performs highly complex mapping on a limited set of training data and achieves better generalization to overall trends than the TDNN, which includes time delays but adapts only its weights. The ATNN was trained on data with additive noise, and it is shown to be robust to environmental variations.

Item: Sampling Effects on Trajectory Learning and Production (1995)
Lin, Daw-Tung; Dayhoff, Judith E.; ISR

The time-delay neural network (TDNN) and the adaptive time-delay neural network (ATNN) are effective tools for signal production and trajectory generation. Previous studies have shown production of circular and figure-eight trajectories to be robust after training. We show here the effects of different sampling rates on the production of trajectories by the ATNN, including the influence of sampling rate on the robustness and noise resilience of the resulting system. Although training was fast with few samples per trajectory, and the trajectory was learned successfully, more resilience to noise was observed with higher numbers of samples per trajectory. The effects of changing the initial segment that begins trajectory generation were evaluated; a minimum length of initial segment is required, but the location of that segment does not influence trajectory generation, even when different initial segments are used during training and recall. A major conclusion from these results is that the network learns the inherent features of the trajectory rather than memorizing each point. When a recurrent loop was added from the output to the input of the ATNN, training was shown to produce an attractor of the network for a figure-eight trajectory, which involves more complexity, due to its crossover, than the previously trained circular-trajectory attractor. Furthermore, when the trajectory length was not a multiple of the sampling interval, the trained network generated intervening points on subsequent repetitions of the trajectory, a feature of limit cycle attractors observed in dynamic networks. Thus an effective method of training an individual dynamic attractor into a neural network is extended to more complex trajectories and shown to exhibit the properties of a limit cycle attractor.
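As a small illustration of the trajectory-learning setup discussed in the sampling-effects item above, the sketch below samples a figure-eight at two different rates and builds time-history/next-point training pairs of the sort a time-delay network would be trained on. The Lissajous parameterization, delay depth, and sample counts are assumptions, not the report's data.

```python
# Illustrative sketch: sample a figure-eight trajectory at a chosen rate and build
# time-history -> next-point training pairs for a time-delay style network.
import numpy as np

def figure_eight(samples_per_cycle):
    """Return one cycle of a 2-D figure-eight sampled at the given rate."""
    t = np.linspace(0.0, 2.0 * np.pi, samples_per_cycle, endpoint=False)
    return np.stack([np.sin(t), np.sin(2.0 * t)], axis=1)      # shape (samples, 2)

def delay_pairs(trajectory, delays):
    """Pair each window of `delays` past points with the point that follows it."""
    X, y = [], []
    for k in range(delays, len(trajectory)):
        X.append(trajectory[k - delays:k].ravel())              # flattened time history
        y.append(trajectory[k])
    return np.array(X), np.array(y)

for rate in (16, 64):                                           # coarse vs. fine sampling
    X, y = delay_pairs(figure_eight(rate), delays=4)
    print(f"{rate} samples/cycle -> {X.shape[0]} training pairs, input dim {X.shape[1]}")
```

Training on the coarser set is faster, while the finer set supplies the redundancy that the report associates with greater noise resilience.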
Item: Network Unfolding Algorithm and Universal Spatiotemporal Function Approximation (1995)
Lin, Daw-Tung; Dayhoff, Judith E.; ISR

It has previously been known that a feed-forward network with time delays can be unfolded into a conventional feed-forward network with a time history as input. In this paper, we show explicitly how this unfolding can be carried out, with a newly defined Network Unfolding Algorithm (NUA) that creates virtual units and moves all time delays into a preprocessing stage consisting of the time histories. The NUA provides a tool for analyzing the complexity of the ATNN. From this tool, we conclude that the ATNN reduces the cost of network complexity by at least a factor of O(n) compared to an unfolded backpropagation network. We then apply the theorems of Funahashi and of Hornik et al., together with the Stone-Weierstrass theorem, to establish the general function approximation ability of the ATNN. We furthermore show a lemma (Lemma 1) that the adaptation of time delays is mathematically equivalent to the adjustment of interconnections in an unfolded feed-forward network, provided there is a large enough number (h2nd) of hidden units. Since this number of hidden units is often impractically large, we conclude that the TDNN and ATNN are more powerful than backpropagation with a time history.

Item: A Population-Based Search from Genetic Algorithms through Thermodynamic Operation (1994)
Sun, Ray-Long; Dayhoff, Judith E.; Weigand, William A.; ISR

Guided random search techniques such as genetic algorithms and simulated annealing are very promising strategies, and both are analogs of biological and physical systems. Through genetic algorithms, the simulation of evolution for the purpose of parameter optimization has generally proved to be a robust and rapid optimization technique. The simulated annealing algorithm often finds high-quality candidate solutions. Limitations in performance occur, however, because optimization may take large numbers of iterations, or final parameter values may be found that are not at global minimum (or maximum) points. In this paper we propose a population-based search algorithm that combines the approaches of genetic algorithms and simulated annealing. The combined approach, called GASA, maintains a population of individuals over a period of generations. In the GASA technique, simulated annealing is used in choosing the subset of individuals that undergo crossover and mutation. We show that the GASA technique outperforms a genetic algorithm on the Bohachevsky function, an objective function with many local minima. The methodology and the test results on function optimization are given and compared with classical genetic algorithms.

Item: Learning with the Adaptive Time-Delay Neural Network (1993)
Lin, Daw-Tung; Ligomenides, Panos A.; Dayhoff, Judith E.; ISR

The Adaptive Time-Delay Neural Network (ATNN), a paradigm for training a nonlinear neural network with adaptive time delays, is described. Both time delays and connection weights are adapted on-line according to a gradient descent approach, with time delays unconstrained with respect to one another and with an arbitrary number of interconnections, each carrying a different time delay, allowed between any two processing units. Weight and time-delay adaptations evolve based on inputs and target outputs consisting of spatiotemporal patterns (e.g., multichannel temporal sequences). The ATNN is used to generate circular and figure-eight trajectories, to model harmonic waves, and to perform chaotic time series prediction. Its performance surpasses that of the time-delay neural network (TDNN), which has adaptable weights but fixed time delays. Identification and control, as well as signal processing and speech recognition, are domains to which this type of network can appropriately be applied.

Item: Fast Gravity: An n-Squared Algorithm for Identification of Synchronous Neural Assemblies (1992)
Dayhoff, Judith E.; ISR

The identification of synchronously active neural assemblies in simultaneous recordings of neuron activities is an important research issue and a difficult algorithmic problem. A gravitational analysis method was developed previously to detect and identify groups of neurons that tend to generate action potentials in near-synchrony from among a larger population of simultaneously recorded units. In this paper we present an improved algorithm for the gravitational clustering method: where the original algorithm ran in n^3 time (n = the number of neurons), the new algorithm runs in n^2 time. Neurons are represented as particles in n-space that "gravitate" toward one another whenever near-synchronous electrical activity occurs. Ensembles of neurons that tend to fire together then become clustered together. The gravitational technique not only identifies the synchronous groups present but can also be used for graphical display of changing activity patterns and changing synchronies within a larger population of neurons.
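As a rough sketch of the gravitational clustering idea in the Fast Gravity item above, the code below treats each neuron as a particle in n-space carrying a charge that jumps at every spike and decays in between; every pair attracts in proportion to the product of their charges, so each time step costs O(n^2) pairwise work. The charge kernel, mean-rate offset, step sizes, and toy spike trains are assumptions for illustration; this is not the report's fast algorithm.

```python
# Rough sketch of gravitational clustering (assumed parameters, not the report's code).
import numpy as np

def gravity_cluster(spikes, dt=1.0, decay=0.9, kappa=0.01):
    """spikes: (steps, n) array of 0/1 spike indicators for n simultaneously recorded neurons."""
    steps, n = spikes.shape
    rates = spikes.mean(axis=0)        # offset by mean rate so uncorrelated neurons do not drift together
    positions = np.eye(n)              # start each particle at a unit vector in n-space
    charge = np.zeros(n)
    for t in range(steps):
        charge = decay * charge + (spikes[t] - rates)   # charge jumps at spikes, decays in between
        for i in range(n):             # O(n^2) pairwise attraction per time step
            for j in range(n):
                if i != j:
                    positions[i] += kappa * dt * charge[i] * charge[j] * (positions[j] - positions[i])
    return positions

# Toy data: neurons 0 and 1 fire near-synchronously; neuron 2 fires independently.
rng = np.random.default_rng(1)
steps, n = 400, 3
spikes = np.zeros((steps, n))
sync = rng.random(steps) < 0.2
spikes[:, 0] = sync
spikes[:, 1] = sync
spikes[:, 2] = rng.random(steps) < 0.2
pos = gravity_cluster(spikes)
print("distance 0-1:", round(np.linalg.norm(pos[0] - pos[1]), 3))   # synchronous pair draws together
print("distance 0-2:", round(np.linalg.norm(pos[0] - pos[2]), 3))   # independent pair stays apart
```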
Item: A Learning Algorithm for Adaptive Time-Delays in a Temporal Neural Network (1992)
Lin, Daw-Tung; Dayhoff, Judith E.; Ligomenides, Panos A.; ISR

The time-delay neural network (TDNN) is an effective tool for speech recognition and spatiotemporal classification. This network learns by example, adapts its weights according to gradient descent, and incorporates a time delay on each interconnection. In the TDNN, time delays are fixed throughout training, and strong weights evolve for interconnections whose delay values are important to the pattern classification task. Here we present an adaptive time-delay neural network (ATNN) that adapts its time-delay values during training to better accommodate the pattern classification task; connection strengths are adapted as well. We demonstrate the effectiveness of the ATNN on chaotic series prediction.

Item: Biological Plausibility of Back-Error Propagation through Microtubules (1992)
Dayhoff, Judith E.; Hameroff, Stuart; Swenberg, Charles E.; Lahoz-Beltra, Rafael; ISR

We propose a plausible model for learning by back-error propagation in biological neurons. Forward propagation occurs as action potentials carry signals along branching axons and transmit those signals across axo-dendritic synapses, whereupon post-synaptic neurons sum their incoming signals. In our model, back-error propagation is proposed to occur via signals within intraneuronal cytoskeletal microtubules; these signals modify the effective strengths of synapses during learning. Differences between network outputs and desired (target) outputs are computed at synapses or by synaptic complexes. Biophysical mechanisms are suggested for the summing of errors and the propagation of errors backward through microtubules within each neuron of the network. We discuss issues and assumptions of the model, alternative candidate mechanisms, and the degree of biological plausibility.
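Relating to the adaptive time-delay items above (the 1992 and 1993 ATNN reports), the following minimal sketch shows the core idea of adapting a delay along with a weight by gradient descent on a single connection, using the input signal's time derivative for the delay gradient. The sine input, teacher connection, and learning rates are hypothetical; this is not the reports' full ATNN.

```python
# Minimal sketch of adaptive time-delay learning on one connection (assumed setup).
import numpy as np

def x(t):
    """Input signal (assumed for illustration)."""
    return np.sin(t)

def dx(t, h=1e-4):
    """Numerical time derivative of the input, used in the delay gradient."""
    return (x(t + h) - x(t - h)) / (2.0 * h)

true_w, true_tau = 1.5, 0.8        # hypothetical teacher connection: y(t) = true_w * x(t - true_tau)
w, tau = 0.5, 0.1                  # initial guesses for the learned weight and delay
lr_w, lr_tau = 0.05, 0.05

ts = np.linspace(0.0, 20.0, 400)
for epoch in range(100):
    for t in ts:
        y = true_w * x(t - true_tau)            # target output
        y_hat = w * x(t - tau)                  # single adaptive-delay connection
        err = y_hat - y
        grad_w = err * x(t - tau)               # dE/dw   =  err * x(t - tau)
        grad_tau = -err * w * dx(t - tau)       # dE/dtau = -err * w * x'(t - tau)
        w -= lr_w * grad_w
        tau -= lr_tau * grad_tau

print(f"learned w = {w:.3f} (true {true_w}), learned tau = {tau:.3f} (true {true_tau})")
```

The same gradient-descent step, applied per interconnection, is what lets both the weight and the delay of each link settle toward values that fit the temporal pattern.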