Alternative Discrete-Time Operators and Their Application to Nonlinear
Models
Abstract
The shift operator, defined as q x(t) = x(t+1), is the basis for almost
all discrete-time models. It has been shown, however, that linear models
based on the shift operator suffer problems when used to model
lightly damped, low-frequency (LDLF) systems, with poles near (1,0) on
the unit circle in the complex plane. This problem occurs under fast
sampling conditions. As the sampling rate increases, coefficient
sensitivity and round-off noise become problematic, because the differences
between successive sampled inputs become smaller and smaller. The resulting
coefficients of the model approach the coefficients obtained in a
binomial expansion, regardless of the underlying continuous-time system.
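This collapse toward binomial coefficients is easy to check numerically. The
sketch below (illustrative only; the pole values are assumptions, not taken
from the report) discretizes a lightly damped continuous-time system at faster
and faster sampling rates and compares the resulting shift-operator
denominator coefficients with the binomial expansion of (z - 1)^n:

    import numpy as np
    from math import comb

    # Illustrative poles for a lightly damped, low-frequency system;
    # the specific values are assumptions for demonstration only.
    poles = np.array([-0.1 + 1.0j, -0.1 - 1.0j, -0.5])
    n = len(poles)

    for dt in (0.1, 0.01, 0.001):          # increasingly fast sampling
        z_poles = np.exp(poles * dt)       # discrete poles cluster near (1,0)
        a = np.real(np.poly(z_poles))      # shift-operator denominator coefficients
        binomial = [(-1) ** k * comb(n, k) for k in range(n + 1)]
        print(f"dt={dt}: coeffs={np.round(a, 4)}  binomial={binomial}")

As dt shrinks, the computed coefficients approach (1, -3, 3, -1), the
expansion of (z - 1)^3, regardless of the original pole locations.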
This implies that for a given finite wordlength, severe inaccuracies may
result. Wordlengths for the coefficients may also need to be made longer
to accommodate models which have low-frequency characteristics,
corresponding to poles in the neighbourhood of (1,0). These problems also
arise in neural network models which comprise linear parts and
nonlinear neural activation functions. Various alternative discrete-time
operators can be introduced which offer numerical and computational
advantages over the conventional shift operator. The alternative
discrete-time operators have been proposed independently of each other in
the fields of digital filtering, adaptive control and neural networks.
These include the delta, rho, gamma and bilinear operators. In this paper
we first review these operators and examine some of their properties.
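For reference, the two operators with fixed, standard definitions can be
written down directly (with \Delta the sampling period and \beta denoting the
bilinear operator); the rho and gamma operators are parameterized variants
whose exact forms are reviewed in the paper:

    \delta = \frac{q - 1}{\Delta},
    \qquad
    \beta = \frac{2}{\Delta} \cdot \frac{q - 1}{q + 1}

As \Delta \to 0, \delta approaches the differential operator d/dt, which is
why delta-operator models retain coefficients close to those of the
underlying continuous-time system instead of collapsing toward binomial
values.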
An analysis of the TDNN and FIR MLP network structures is given which shows
their susceptibility to parameter sensitivity problems. Subsequently, it
is shown that models may be formulated using alternative discrete-time
operators which have low sensitivity properties. Consideration is given
to the problem of finding parameters for stable alternative discrete-time
operators. A learning algorithm which adapts the parameters of the
alternative discrete-time operators on-line is presented for MLP neural
network models based on these operators.
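The report gives the algorithm itself; purely as an illustration of the idea,
the sketch below adapts both the tap weights and the memory parameter mu of a
single linear gamma filter by on-line gradient descent (the toy target signal
and all constants are assumptions, and this is not the report's algorithm):

    import numpy as np

    # Minimal sketch: on-line gradient adaptation of a linear gamma filter's
    # memory parameter mu alongside its tap weights (LMS-style).  This is an
    # illustrative stand-in, not the algorithm from the report.

    rng = np.random.default_rng(0)
    K, T, lr = 5, 2000, 0.01                # taps, time steps, learning rate
    mu = 0.5                                # gamma memory parameter
    w = rng.normal(scale=0.1, size=K + 1)   # tap weights w_0 .. w_K

    u = rng.normal(size=T)                  # input signal
    d = np.convolve(u, np.ones(8) / 8)[:T]  # toy target: a moving average of u

    x = np.zeros(K + 1)                     # memory states x_0 .. x_K
    s = np.zeros(K + 1)                     # sensitivities dx_k / dmu

    for t in range(T):
        x_prev, s_prev = x.copy(), s.copy()
        x[0], s[0] = u[t], 0.0              # tap 0 is the raw input
        for k in range(1, K + 1):
            # gamma recursion: x_k(t) = (1 - mu) x_k(t-1) + mu x_{k-1}(t-1)
            x[k] = (1 - mu) * x_prev[k] + mu * x_prev[k - 1]
            # chain rule through the recursion gives the mu-sensitivity
            s[k] = ((1 - mu) * s_prev[k] - x_prev[k]
                    + mu * s_prev[k - 1] + x_prev[k - 1])
        e = d[t] - w @ x                    # instantaneous output error
        w += lr * e * x                     # LMS update for the tap weights
        mu += lr * e * (w @ s)              # gradient update for mu
        mu = min(max(mu, 1e-3), 1.999)      # pole at 1 - mu stays in unit circle

    print("adapted mu:", round(mu, 3))

In this simple case, keeping the operator stable reduces to keeping the pole
at 1 - mu inside the unit circle, the kind of constraint discussed above for
the general operators.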
It is shown that neural network models which use these alternative
discrete-time operators perform better than those using the shift operator
alone.
(Also cross-referenced as UMIACS-TR-97-03)