Alternative Discrete-Time Operators and Their Application to Nonlinear Models

dc.contributor.author: Back, Andrew D.
dc.contributor.author: Tsoi, Ah Chung
dc.contributor.author: Horne, Bill G.
dc.contributor.author: Giles, C. Lee
dc.date.accessioned: 2004-05-31T22:43:32Z
dc.date.available: 2004-05-31T22:43:32Z
dc.date.created: 1997-01
dc.date.issued: 1998-10-15
dc.description.abstract: The shift operator, defined as q x(t) = x(t+1), is the basis for almost all discrete-time models. It has been shown, however, that linear models based on the shift operator suffer problems when used to model lightly damped low-frequency (LDLF) systems, whose poles lie near (1,0) on the unit circle in the complex plane. This problem occurs under fast sampling conditions. As the sampling rate increases, coefficient sensitivity and round-off noise become a problem because the difference between successive sampled inputs becomes smaller and smaller. The resulting coefficients of the model approach the coefficients obtained in a binomial expansion, regardless of the underlying continuous-time system. This implies that for a given finite wordlength, severe inaccuracies may result. Wordlengths for the coefficients may also need to be lengthened to accommodate models with low-frequency characteristics, corresponding to poles in the neighbourhood of (1,0). These problems also arise in neural network models which comprise linear parts and nonlinear neural activation functions. Various alternative discrete-time operators can be introduced which offer numerical and computational advantages over the conventional shift operator. These alternative discrete-time operators have been proposed independently of one another in the fields of digital filtering, adaptive control and neural networks; they include the delta, rho, gamma and bilinear operators. In this paper we first review these operators and examine some of their properties. An analysis of the TDNN and FIR MLP network structures is given which shows their susceptibility to parameter sensitivity problems. Subsequently, it is shown that models may be formulated using alternative discrete-time operators which have low sensitivity properties. Consideration is given to the problem of finding parameters for stable alternative discrete-time operators. A learning algorithm which adapts the alternative discrete-time operator parameters on-line is presented for MLP neural network models based on these operators. It is shown that neural network models which use these alternative discrete-time operators perform better than those using the shift operator alone. (Also cross-referenced as UMIACS-TR-97-03)
dc.format.extent: 449871 bytes
dc.format.mimetype: application/postscript
dc.identifier.uri: http://hdl.handle.net/1903/870
dc.language.iso: en_US
dc.relation.isAvailableAt: Digital Repository at the University of Maryland
dc.relation.isAvailableAt: University of Maryland (College Park, Md.)
dc.relation.isAvailableAt: Tech Reports in Computer Science and Engineering
dc.relation.isAvailableAt: UMIACS Technical Reports
dc.relation.ispartofseries: UM Computer Science Department; CS-TR-3738
dc.relation.ispartofseries: UMIACS; UMIACS-TR-97-03
dc.title: Alternative Discrete-Time Operators and Their Application to Nonlinear Models
dc.type: Technical Report
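
Note: the following is an illustrative sketch, not material from the report. It uses Python/NumPy to reproduce the coefficient-sensitivity effect described in the abstract: as the sampling period T shrinks, the shift-operator (q) denominator of a sampled lightly damped low-frequency system collapses toward the binomial expansion of (q - 1)^2, while the same poles expressed in the delta operator, delta = (q - 1)/T, keep coefficients near their continuous-time values. The natural frequency wn, damping ratio zeta and sampling periods used here are arbitrary assumptions chosen only for illustration.

    import numpy as np

    # Hypothetical second-order LDLF system: natural frequency wn, damping zeta
    # (values chosen for illustration; not taken from the report).
    wn, zeta = 1.0, 0.05
    for T in (0.1, 0.01, 0.001):                        # increasingly fast sampling
        s = wn * (-zeta + 1j * np.sqrt(1.0 - zeta**2))  # continuous-time pole (one of the complex pair)
        z = np.exp(s * T)                               # corresponding discrete-time pole, near (1,0)
        # Denominator polynomial in the shift operator q (roots z and conj(z)).
        shift_coeffs = np.real(np.poly([z, np.conj(z)]))
        # Same poles re-expressed in the delta operator, delta = (q - 1)/T.
        delta_coeffs = np.real(np.poly([(z - 1.0) / T, (np.conj(z) - 1.0) / T]))
        print(f"T={T:g}: q coefficients {np.round(shift_coeffs, 6)}, "
              f"delta coefficients {np.round(delta_coeffs, 4)}")
    # As T decreases, the q coefficients approach [1, -2, 1] (a binomial expansion),
    # while the delta coefficients stay near [1, 2*zeta*wn, wn**2] = [1, 0.1, 1].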

Files

Original bundle

Name: CS-TR-3738.ps
Size: 439.33 KB
Format: PostScript

Name: CS-TR-3738.pdf
Size: 326.54 KB
Format: Adobe Portable Document Format
Description: Auto-generated copy of CS-TR-3738.ps