### Browsing by Author "Farvardin, Nariman"


Item: Adaptive Block Transform Coding of Speech Based on LPC Vector Quantization (1989). Hussain, Yunus; Farvardin, Nariman; ISR.

In this paper we describe an adaptive block transform speech coding system based on vector quantization of LPC parameters. To account for power fluctuations, the speech signal is normalized to have a unit-energy prediction residual. The temporal variations in the short-term spectrum, on the other hand, are taken into account by vector quantizing the LPC parameters associated with the vector of speech samples and transmitting the codeword index. For each block, an optimum bit assignment map based on the codevector associated with the input vector is used to quantize the transform coefficients. We consider two types of zero-memory quantizers for encoding the transform coefficients, namely the Lloyd-Max quantizer and the entropy-coded quantizer. The performance of these schemes is compared with other adaptive transform coding schemes. We show by means of simulations that the system based on the entropy-coded quantizer design leads to very high performance; in most cases, an improvement of as much as 5 dB in segmental signal-to-noise ratio is observed over the adaptive block transform coding scheme of Noll and Zelinski [1]. The effects of the bit rate and the size of the codebook on the performance of the systems are also studied in detail.

Item: Channel-Matched Hierarchical Table-Lookup Vector Quantization for Transmission of Video Over Wireless Channels (1997). Jafarkhani, Hamid; Farvardin, Nariman; ISR.

We propose a channel-matched hierarchical table-lookup vector quantizer (CM-HTVQ) which provides some robustness against channel noise. We use a finite-state channel to model slow fading channels and propose an adaptive coding scheme to transmit a source over wireless channels. The performance of CM-HTVQ is in general slightly inferior to that of the channel-optimized vector quantizer (COVQ) (the performances coincide in some cases); however, the encoder complexity of CM-HTVQ is much lower than that of COVQ. A copy of this report has been published in the proceedings of The 1st Annual Advanced Telecommunications/Information Distribution Research Program Conference, January 21-22, 1997.

Item: Detection of Binary Sources Over Discrete Channels with Additive Markov Noise (1994). Alajaji, Fady; Phamdo, N.; Farvardin, Nariman; Fuja, Tom E.; ISR.

We consider the problem of directly transmitting a binary source with inherent redundancy over a binary channel with additive stationary ergodic Markov noise. Our objective is to design an optimum receiver which fully utilizes the source redundancy in order to combat the channel noise. We investigate the problem of detecting a binary iid non-uniform source transmitted across the Markov channel. Two maximum a posteriori (MAP) formulations are considered: sequence MAP detection and instantaneous MAP detection. The two MAP detection problems are implemented using a modified version of the Viterbi decoding algorithm and a recursive algorithm, respectively. Necessary and sufficient conditions under which the sequence MAP detector becomes useless are derived, and simulation results are presented. A comparison of the proposed system with a (substantially more complex) traditional tandem source-channel coding scheme shows better performance for the proposed scheme at relatively high channel bit error rates.

The same detection problem is then analyzed for the case of a binary symmetric Markov source. Analytical and simulation results show the existence of a "mismatch" between the source and the channel. This mismatch is reduced by the use of a rate-one convolutional encoder. Finally, the detection problem is generalized for the case of a binary non-symmetric Markov source.
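The core of such a detector is the MAP decision rule. As a minimal illustration only (not the paper's Markov-channel detector), the memoryless special case, an iid non-uniform binary source over a binary symmetric channel, admits a bitwise MAP rule; the source probability `p1` and crossover probability `eps` below are hypothetical parameters:

```python
def map_detect_bit(y, p1, eps):
    """Bitwise MAP detection of an iid non-uniform binary source
    (P(x=1) = p1) observed through a binary symmetric channel with
    crossover probability eps: pick x maximizing P(y | x) * P(x)."""
    post0 = ((1 - eps) if y == 0 else eps) * (1 - p1)  # P(y|0) P(0)
    post1 = ((1 - eps) if y == 1 else eps) * p1        # P(y|1) P(1)
    return 1 if post1 > post0 else 0

# With a skewed source (p1 = 0.9) and a very noisy channel (eps = 0.2),
# the detector ignores the observation and always outputs 1 -- the
# "useless detector" regime discussed in the abstract. At eps = 0.05
# the observation is informative again.
```

The same comparison of prior-weighted likelihoods, carried along survivor paths, is what the Viterbi-based sequence detector performs when the noise has Markov memory.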

Item: Efficient Encoding of Speech LSP Parameters Using the Discrete Cosine Transformation (1989). Farvardin, Nariman; Laroia, Rajiv; ISR.

In this paper, the intraframe and interframe correlation properties are used to develop two efficient encoding algorithms for speech line spectrum pair (LSP) parameters. The first algorithm (2-D DCT), which requires relatively large coding delays, is based on two-dimensional (time and frequency) discrete cosine transform coding techniques; the second algorithm (DCT-DPCM), which does not need any coding delay, uses a one-dimensional discrete cosine transform in the frequency domain and DPCM in the time domain. The performance of these systems for different bit rates and delays is studied and appropriate comparisons are made. It is shown that an average spectral distortion of approximately 1 dB² can be achieved with 21 and 25 bits/frame using the 2-D DCT and DCT-DPCM schemes, respectively. This is a noticeable improvement over the previously reported bit rates of 32 bits/frame and above [1], [2].

Item: Entropy-Constrained Trellis Coded Quantization: Implementation and Adaptation (1993). Lee, Cheng-Chieh; Farvardin, Nariman; ISR.

Entropy-constrained trellis coded quantization (ECTCQ) of memoryless sources is known to be an efficient source coding technique in the rate-distortion sense. We develop an ECTCQ scheme that employs a symmetric reproduction codebook. The symmetry of the reproduction codebook, while costing essentially no performance loss, is exploited to reduce the memory requirement in entropy coding the ECTCQ output. In practice, a buffer of finite, and preferably small, size is needed to interface the variable-length codewords to the fixed-rate channel. An adaptive ECTCQ (A-ECTCQ) scheme, which uses a buffer-state feedback to control the quantizer characteristics and avoid buffer overflow/underflow, is studied in this work. The choice of encoding delay is an important issue in A-ECTCQ, as too long a delay will adversely impact the performance of the feedback control. We propose a pathwise-adaptive ECTCQ (PA-ECTCQ) that solves the encoding delay problem. Simulation results indicate that, while the buffer overflow/underflow problems of the PA-ECTCQ can be practically eliminated, the overall quantization distortion increases only negligibly over theoretical performance predictions. Our experiments also suggest that PA-ECTCQ is robust with respect to source mismatch.

Item: Extension of the Fixed-Rate Structured Vector Quantizer to Vector Sources (1991). Laroia, Rajiv; Farvardin, Nariman; ISR.

The fixed-rate structured vector quantizer (SVQ) derived from a variable-length scalar quantizer was originally proposed for quantizing stationary memoryless sources. In this paper, the SVQ is extended to a specific type of vector source in which each component is a stationary memoryless scalar subsource independent of the other components. Algorithms for the design and implementation of the original SVQ are modified to apply to this case. The resulting SVQ, referred to as the extended SVQ (ESVQ), is then used to quantize stationary sources with memory (with known autocorrelation function). This is done by first using a linear orthonormal block transformation, such as the Karhunen-Loeve transform, to decorrelate a block of source samples. The transform output vectors, which can be approximated as the output of an independent-component vector source, are then quantized using the ESVQ. Numerical results are presented for the quantization of first-order Gauss-Markov sources using this scheme. It is shown that the ESVQ-based scheme performs very close to entropy-coded transform quantization while maintaining a fixed-rate output, and outperforms the fixed-rate scheme which uses scalar Lloyd-Max quantization of the transform coefficients. Finally, it is shown that this scheme also performs better than implementable vector quantizers, especially at high rates.

Item: Fast Reconstruction of Subband-Decomposed Progressively-Transmitted Signals (1997). Jafarkhani, Hamid; Farvardin, Nariman; ISR.

In this paper we propose a fast reconstruction method for a progressive subband-decomposed signal coding system. It is shown that, unlike the normal approach, which has a fixed computational complexity, the computational complexity of the proposed approach is proportional to the number of refined coefficients. Therefore, using the proposed approach in image coding applications, we can update the image after receiving each new coefficient and create a continuously refined perception. This can be done without any extra computational cost compared to the normal case where the image is reconstructed after receiving a predefined number of bits. A copy of this report has been published in the proceedings of The 1st Annual Advanced Telecommunications/Information Distribution Research Program Conference, January 21-22, 1997.

Item: Finite-State Vector Quantization for Noisy Channels (1992). Hussain, Yunus; Farvardin, Nariman; ISR.

Under noiseless channel conditions and for sources with memory, finite-state vector quantizers (FSVQs) exhibit improvements over memoryless vector quantizers. It is shown, however, that in the presence of channel noise, the performance of the FSVQ degrades significantly. This suggests that for noisy channels, the FSVQ design problem needs to be reformulated by taking the channel noise into account. Using some of the developments in joint source-channel trellis coding, we describe two different methods leading to two types of noisy channel FSVQs. We show by means of simulations on the Gauss-Markov source and speech LSP parameters, and for a binary symmetric channel, that both schemes are fairly robust to channel noise. For the Gauss-Markov source, the proposed noisy channel FSVQs perform at least as well as or better than the channel-optimized VQ, while for speech LSP parameters, they lead to a saving of 1.5-4 bits/frame over the channel-optimized VQ, depending on the level of noise in the channel.

Item: Joint Design of Block Source Codes and Modulation Signal Sets (1990). Vaishampayan, V.; Farvardin, Nariman; ISR.

We consider the problem of designing a bandwidth-efficient, power-limited digital communication system for transmitting information from a source with known statistics over a noisy waveform channel. Each output vector of the source is encoded by a block encoder to one of a finite number of signals in a modulation signal set. The received waveform is processed in the receiver by an estimation-based decoder. The goal is to design an encoder, decoder and modulation signal set so as to minimize the mean squared-error (MSE) between the source vector and its estimate in the receiver.
For highly noisy Gaussian channels we justify restricting the estimator to the class of linear estimators. With this restriction, we derive necessary conditions for optimality of the encoder, decoder and signal set, and develop a convergent algorithm for solving these necessary conditions. We prove that the MSE of the digital system designed here is bounded from below by the MSE of an analog modulation system. Performance results for the digital system and signal constellation designs are presented for first-order Gauss-Markov sources and a white Gaussian channel. Comparisons are made against a standard vector quantizer (VQ)-based system, the bounding analog modulation system and the optimum performance theoretically attainable. The results indicate that for a correlated source, a sufficiently noisy channel and specific source block sizes and bandwidths, the digital system performance coincides with the optimum performance theoretically attainable. Further, significant performance improvements over the standard VQ-based system are demonstrated when the channel is noisy. Situations in which the linearity assumption results in poor performance are also identified.

Item: Low Bit-Rate Image Coding Using a Three-Component Image Model (1992). Ran, X.; Farvardin, Nariman; ISR.

In this paper the use of a perceptually-motivated image model in the context of image compression is investigated. The model consists of a so-called primary component which contains the strong edge information of the image, a smooth component which represents the background slow-intensity variations, and a texture component which contains the textures. The primary component, which is known to be perceptually important, is encoded separately by encoding the intensity and geometric information of the strong edge brim contours. Two alternatives for coding the smooth and texture components are studied: entropy-coded adaptive DCT and entropy-coded subband coding. It is shown via extensive simulations that the proposed schemes, which can be thought of as a hybrid of waveform coding and feature-based coding techniques, result in both subjective and objective performance improvements over several other image coding schemes and, in particular, over the JPEG continuous-tone image compression standard. These improvements are especially noticeable at low bit rates. Furthermore, it is shown that a perceptual tuning based on the contrast sensitivity of the human visual system can be used in the DCT-based scheme, which, in conjunction with the three-component model, leads to additional subjective performance improvements.

Item: On SVQ Shaping of Multidimensional Constellations - High-Rate Large-Dimensional Constellations (1992). Laroia, Rajiv; Farvardin, Nariman; Tretter, S.; ISR.

An optimal shaping scheme for multidimensional constellations, motivated by some ideas from a fixed-rate structured vector quantizer (SVQ), was recently proposed by Laroia. It was shown that optimal shaping could be performed subject to a constraint on the CER² or PAR² by expressing the (optimally shaped) constellation as the codebook of an SVQ and using the SVQ encoding/decoding algorithms to index the constellation points. Further, compatibility with trellis coded modulation was demonstrated. The complexity of the proposed scheme was reasonable but dependent on the data transmission rate. In this paper, we use recent results due to Calderbank and Ozarow to show that the complexity of this scheme can be reduced and made independent of the data rate with essentially no effect on the shaping gain. Also, we modify the SVQ encoding/decoding algorithms to reduce the implementation complexity even further. It is shown that SVQ shaping can achieve a shaping gain of about 1.20 dB with a PAR² of 3.75 at a very reasonable complexity (about 15 multiply-adds/baud and a memory requirement of 1.5 kbytes). Further, a shaping gain of 1 dB results in a PAR² of less than 3.
This is considerably less than the PAR² of 3.75 for Forney's trellis shaping scheme, which gives about 1 dB shaping gain.

Item: On the Performance and Complexity of Channel-Optimized Vector Quantizers (1989). Farvardin, Nariman; ISR.

In this correspondence, the performance and complexity of channel-optimized vector quantizers are studied for the Gauss-Markov source. Some interesting observations on the geometric structure of these quantizers are made which have an important implication for the encoding complexity. For the squared-error distortion measure, it is shown that while the optimum partition is not described by the nearest-neighbor rule, an operation equivalent to a Euclidean distance measurement with respect to an appropriately defined set of points (used to identify the encoding regions) can be used to perform the encoding. This implies that the encoding complexity is proportional to the number of encoding regions. It is then demonstrated that for very noisy channels and a heavily correlated source, when the codebook size is large the number of encoding regions is considerably smaller than the codebook size, implying a reduction in encoding complexity.

Item: Optimal Block Cosine Transform Image Coding for Noisy Channels (1986). Vaishampayan, V.; Farvardin, Nariman; ISR.

The two-dimensional block transform coding scheme based on the discrete cosine transform has been studied extensively for image coding applications. While this scheme has proven to be efficient in the absence of channel errors, its performance degrades rapidly over noisy channels. In this paper we present a method for the joint source-channel coding optimization of a scheme based on the 2-D block cosine transform when the output of the encoder is to be transmitted via a memoryless binary symmetric channel. Our approach involves an iterative algorithm for the design of the quantizers (in the presence of channel errors) used for encoding the transform coefficients. This algorithm produces a set of locally optimum (in the mean squared-error sense) quantizers and the corresponding binary code assignment for the assumed transform coefficient statistics. To determine the optimum bit assignment among the transform coefficients, we have used an algorithm based on the steepest descent method, which, under certain convexity conditions on the performance of the channel-optimized quantizers, yields the optimal bit allocation. Comprehensive simulation results for the performance of this locally optimum system over noisy channels have been obtained and appropriate comparisons against a reference system designed for no channel errors have been rendered. It is shown that substantial performance improvements can be obtained by using this scheme. Furthermore, theoretically predicted results for an assumed 2-D image model are provided.

Item: Optimal Detection of Discrete Markov Sources Over Discrete Memoryless Channels - Applications to Combined Source-Channel Coding (1992). Phamdo, N.; Farvardin, Nariman; ISR.

We consider the problem of detecting a discrete Markov source which is transmitted across a discrete memoryless channel. The detection is based upon the maximum a posteriori (MAP) criterion, which yields the minimum probability of error for a given observation. Two formulations of this problem are considered: (i) sequence MAP detection, in which the objective is to determine the most probable transmitted sequence given the observed sequence, and (ii) instantaneous MAP detection, which is to determine the most probable transmitted symbol at time n given all the observations prior to and including time n. The solution to the first problem results in a "Viterbi-like" implementation of the MAP detector (with large delay), while the latter problem results in a recursive implementation (with no delay). For the special case of the binary symmetric Markov source and binary symmetric channel, simulation results are presented and an analysis of these two systems yields explicit critical channel bit error rates above which the MAP detectors become useful.

Applications of the MAP detection problem in a combined source-channel coding system are considered. Here it is assumed that the source is highly correlated and that the source encoder (in our case, a vector quantizer (VQ)) fails to remove all of the source redundancy. The remaining redundancy at the output of the source encoder is referred to as the "residual" redundancy. It is shown, through simulation, that the residual redundancy can be used by the MAP detectors to combat channel errors. For small block sizes, the proposed system beats Farvardin and Vaishampayan's channel-optimized VQ by wide margins. Finally, it is shown that the instantaneous MAP detector can be combined with the VQ decoder to form a minimum mean-squared error decoder. Simulation results are also given for this case.
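The no-delay instantaneous MAP detector described above can be sketched as a forward recursion over the posterior of the current symbol. This is a minimal illustration, not the authors' implementation; the source flip probability `q` and channel crossover probability `eps` are illustrative parameters, and a uniform prior is assumed for the first symbol:

```python
def instantaneous_map(ys, q, eps):
    """Instantaneous MAP detection of a binary symmetric Markov source
    (probability q of changing state) observed through a binary
    symmetric channel (crossover probability eps). Each estimate uses
    only the observations up to and including its own time index."""
    trans = [[1 - q, q], [q, 1 - q]]               # P(x_n | x_{n-1})
    lik = lambda y, x: (1 - eps) if y == x else eps  # P(y | x)
    alpha = [0.5 * lik(ys[0], 0), 0.5 * lik(ys[0], 1)]
    est = [alpha.index(max(alpha))]
    for y in ys[1:]:
        # Predict through the Markov chain, then weight by the likelihood.
        alpha = [lik(y, x) * sum(trans[xp][x] * alpha[xp] for xp in (0, 1))
                 for x in (0, 1)]
        s = sum(alpha)
        alpha = [a / s for a in alpha]             # normalize to avoid underflow
        est.append(alpha.index(max(alpha)))
    return est
```

The sequence MAP detector replaces the sum over previous states with a maximization and a traceback, which is what gives it its Viterbi-like structure and delay.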

Item: Optimal Quantizer Design for Noisy Channels: An Approach to Combined Source-Channel Coding (1985). Farvardin, Nariman; Vaishampayan, Vinay; ISR.

In this paper, we present an analysis of the zero-memory quantization of memoryless sources when the quantizer output is to be encoded and transmitted across a noisy channel. Necessary conditions for the joint optimization of the quantizer and the encoder/decoder pair are presented, and a recursive algorithm for obtaining a locally optimum system is developed. The performance of this locally optimal system, obtained for the class of generalized Gaussian distributions and the binary symmetric channel, is compared against the optimum performance theoretically attainable (using rate-distortion theoretic arguments), as well as against the performance of Lloyd-Max quantizers encoded using natural binary codes. It is shown that this optimal design can result in substantial performance improvements. The performance improvements are more noticeable at high bit rates and for more broad-tailed densities.

Item: A Perceptually Motivated Three-Component Image Model (1992). Ran, X.; Farvardin, Nariman; ISR.

In this paper, results of psychovisual studies of the human visual system are discussed and interpreted in a mathematical framework. The formation of the perception is described by appropriate minimization problems, and the edge information is found to be of primary importance in visual perception. Having introduced the concept of edge strength, it is demonstrated that strong edges are of higher perceptual importance than weaker edges (textures). We have also found that smooth areas of an image influence our perception together with the edge information, and that this influence can be mathematically described via a minimization problem. Based on this study, we propose to decompose the image into three components: (i) primary, (ii) smooth and (iii) texture, which contain, respectively, the strong edges, the background and the textures. An algorithm is developed to generate the three-component image model, and an example is provided in which the resulting three components demonstrate the expected properties. Finally, it is shown that the primary component provides a superior representation of the strong edge information as compared with the Laplacian-Gaussian operator scheme, a popular edge extraction method.

Item: Performance of Entropy-Constrained Block Transform Quantizers (1985). Farvardin, Nariman; Lin, F.Y.; ISR.

An analysis of the rate-distortion performance of an entropy-constrained block transform quantization scheme operating on discrete-time stationary autoregressive processes is presented. Uniform-threshold quantization is employed to quantize the transform coefficients. An algorithm for optimum stepsize (or, equivalently, entropy) assignment among the quantizers is developed. A simple asymptotic formula indicating the high-rate performance of the block transform quantization schemes is presented. Finally, specific results determining the rate-distortion performance of the entropy-constrained block transform quantization scheme operating upon first-order Gauss-Markov and Laplace-Markov sources are presented, and appropriate comparisons with the Huang and Schultheiss block transform quantization, vector quantization and predictive encoding are rendered.

Item: Quantization of Memoryless and Gauss-Markov Sources Over Binary Markov Channels (1994). Phamdo, N.; Alajaji, Fady; Farvardin, Nariman; ISR.

Joint source-channel coding for stationary memoryless and Gauss-Markov sources and binary Markov channels is considered. The channel is an additive-noise channel where the noise process is an M-th order Markov chain. Two joint source-channel coding schemes are considered.
The first is a channel-optimized vector quantizer, optimized for both source and channel. The second scheme consists of a scalar quantizer and a maximum a posteriori detector. In this scheme, it is assumed that the scalar quantizer output has residual redundancy that can be exploited by the maximum a posteriori detector to combat the correlated channel noise. These two schemes are then compared against two schemes which use channel interleaving. Numerical results show that the proposed schemes outperform the interleaving schemes. For very noisy channels with high noise correlation, gains of 4 to 5 dB in signal-to-noise ratio are possible.

Item: Quantization Over Discrete Noisy Channels Using Rate-One Convolutional Codes (1993). Phamdo, N.; Farvardin, Nariman; ISR.

We consider high-rate scalar quantization of a memoryless source for transmission over a binary symmetric channel. It is assumed that, due to its suboptimality, the quantizer's output is redundant. Our aim is to make use of this redundancy to combat channel noise. A rate-one convolutional code is introduced to convert this natural redundancy into a usable form. At the receiver, a maximum a posteriori decoder is employed. An upper bound on the average distortion of the proposed system is derived. An approximation of this bound is computable, and we search for the convolutional code which minimizes the approximate upper bound. Simulation results for a generalized Gaussian source with parameter α = 0.5 at rate 4 bits/sample and channel crossover probability 0.005 show improvements of 11.9 dB in signal-to-noise ratio over the Lloyd-Max quantizer and 4.6 dB over Farvardin and Vaishampayan's channel-optimized scalar quantizer.

Item: Quantizer Design in LSP Speech Analysis-Synthesis (1987). Sugamura, N.; Farvardin, Nariman; ISR.

The LSP speech analysis-synthesis method is known as one of the most efficient vocoders. An important issue in encoding the LSP parameters is that a certain ordering relationship between them is required to ensure the stability of the synthesis filter. This requirement has an important impact on the design of quantizers for the LSP parameters. In this paper, the performance of several algorithms for the quantization of the LSP parameters is studied. A new adaptive method which utilizes the ordering property of the LSP parameters is presented. A combination of this adaptive algorithm with non-uniform step-size quantization is shown to be a very effective method for encoding the LSP parameters. The performance of the different quantization schemes is studied on a long sequence of speech samples. For the spectral distortion measure, appropriate performance comparisons between the different quantization schemes are rendered.
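The ordering constraint on LSP parameters mentioned in the last abstract can be made concrete with a small sketch. This is a hypothetical safeguard, not the adaptive method proposed in the paper: after quantization, any pair of LSP values left out of order (which would make the synthesis filter unstable) is restored to a strictly increasing sequence, with `min_gap` an illustrative minimum spacing:

```python
def enforce_lsp_ordering(lsp, min_gap=0.0):
    """Restore the strictly increasing ordering that quantized LSP
    parameters must satisfy for synthesis-filter stability.
    min_gap is a hypothetical minimum spacing between neighbours."""
    out = sorted(lsp)                    # undo any order inversions
    for i in range(1, len(out)):
        if out[i] - out[i - 1] < min_gap:
            out[i] = out[i - 1] + min_gap  # push apart coincident values
    return out
```

An ordering-aware quantizer like the adaptive scheme in the paper avoids producing inversions in the first place, by encoding each parameter relative to its already-quantized lower neighbour.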