Institute for Systems Research Technical Reports
Permanent URI for this collection: http://hdl.handle.net/1903/4376
This archive contains a collection of reports generated by the faculty and students of the Institute for Systems Research (ISR), a permanent, interdisciplinary research unit in the A. James Clark School of Engineering at the University of Maryland. ISR-based projects are conducted through partnerships with industry and government, bringing together faculty and students from multiple academic departments and colleges across the university.
4 results
Search Results
Item: Learning with the Adaptive Time-Delay Neural Network (1993)
Lin, Daw-Tung; Ligomenides, Panos A.; Dayhoff, Judith E.; ISR
The Adaptive Time-Delay Neural Network (ATNN), a paradigm for training a nonlinear neural network with adaptive time delays, is described. Both time delays and connection weights are adapted on-line according to a gradient descent approach, with time delays unconstrained with respect to one another and an arbitrary number of interconnections with different time delays placed between any two processing units. Weight and time-delay adaptations evolve based on inputs and target outputs consisting of spatiotemporal patterns (e.g., multichannel temporal sequences). The ATNN is used to generate circular and figure-eight trajectories, to model harmonic waves, and to perform chaotic time series prediction. Its performance outstrips that of the time-delay neural network (TDNN), which has adaptable weights but fixed time delays. Identification and control, signal processing, and speech recognition are domains to which this type of network can be appropriately applied.

Item: Fast Gravity: An n-Squared Algorithm for Identification of Synchronous Neural Assemblies (1992)
Dayhoff, Judith E.; ISR
The identification of synchronously active neural assemblies in simultaneous recordings of neuron activities is an important research issue and a difficult algorithmic problem. A gravitational analysis method was developed previously to detect and identify groups of neurons that tend to generate action potentials in near-synchrony from among a larger population of simultaneously recorded units. In this paper we present an improved algorithm for the gravitational clustering method. Where the original algorithm ran in n³ time (n = the number of neurons), the new algorithm runs in n² time. Neurons are represented as particles in n-space that "gravitate" towards one another whenever near-synchronous electrical activity occurs.
Ensembles of neurons that tend to fire together then become clustered together. The gravitational technique not only identifies the synchronous groups present but can also be used for graphical display of changing activity patterns and changing synchronies among a larger population of neurons.

Item: A Learning Algorithm for Adaptive Time-Delays in a Temporal Neural Network (1992)
Lin, Daw-Tung; Dayhoff, Judith E.; Ligomenides, Panos A.; ISR
The time-delay neural network (TDNN) is an effective tool for speech recognition and spatiotemporal classification. This network learns by example, adapts its weights according to gradient descent, and incorporates a time delay on each interconnection. In the TDNN, time delays are fixed throughout training, and strong weights evolve for interconnections whose delay values are important to the pattern classification task. Here we present an adaptive time-delay neural network (ATNN) that adapts its time-delay values during training to better accommodate the pattern classification task. Connection strengths are adapted as well in the ATNN. We demonstrate the effectiveness of the ATNN on chaotic time series prediction.

Item: Biological Plausibility of Back-Error Propagation through Microtubules (1992)
Dayhoff, Judith E.; Hameroff, Stuart; Swenberg, Charles E.; Lahoz-Beltra, Rafael; ISR
We propose a plausible model for learning by back-error propagation in biological neurons. Forward propagation occurs as action potentials propagate signals along branching axons and transmit those signals across axo-dendritic synapses, whereupon post-synaptic neurons sum their incoming signals. In our model, back-error propagation is proposed to occur via signals within intraneuronal cytoskeletal microtubules. These signals modify the effective strengths of synapses during learning. Differences between network outputs and desired (target) outputs are computed at synapses or by synaptic complexes.
Biophysical mechanisms are suggested for the summing of errors and the propagation of errors backwards through microtubules within each neuron of the network. We discuss issues and assumptions of the model, alternative candidate mechanisms, and the degree of biological plausibility.
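The Fast Gravity abstract above describes neurons as particles in n-space that "gravitate" toward one another whenever near-synchronous activity occurs. A minimal sketch of that idea is given below; the starting positions, charge-decay constant, and step size are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
import numpy as np

def gravity_cluster(spike_trains, step=0.01, decay=0.5):
    """Toy sketch of gravitational clustering of spike trains.

    Each neuron is a particle in n-space (started here at the i-th unit
    vector, an illustrative choice).  A spike gives its particle a
    transient "charge"; charged particle pairs attract, so neurons that
    fire in near-synchrony drift toward one another.  Each time step
    does O(n^2) pairwise work.
    """
    n, n_steps = spike_trains.shape
    pos = np.eye(n)                  # particle i starts at unit vector e_i
    charge = np.zeros(n)
    for t in range(n_steps):
        charge = decay * charge + spike_trains[:, t]   # fading spike charge
        w = charge[:, None] * charge[None, :]          # pairwise attraction
        diff = pos[None, :, :] - pos[:, None, :]       # diff[i, j] = pos_j - pos_i
        pos = pos + step * (w[:, :, None] * diff).sum(axis=1)
    return pos
```

After processing a recording, neurons that tended to fire together sit close in the final particle positions, so synchronous assemblies can be read off with any ordinary clustering of those positions.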