Institute for Systems Research

Permanent URI for this community: http://hdl.handle.net/1903/4375

Search Results

Now showing 1 - 10 of 62
  • Item
    Modeling and Simulation of a Tungsten Chemical Vapor Deposition Reactor
    (2000) Chang, Hsiao-Yung; Adomaitis, Raymond A.; ISR
    Chemical vapor deposition (CVD) processes are widely used in semiconductor device fabrication to deposit thin films of electronic materials. Physically based CVD modeling and simulation methods have been adopted for reactor design and process optimization applications to satisfy increasingly stringent processing requirements.

    In this research, an ULVAC ERA-1000 selective tungsten chemical vapor deposition system located at the University of Maryland was studied. In the manual processing mode, a temperature difference as large as 120 °C was found between the system's wafer temperature reading and the measurement from a thermocouple-instrumented wafer.

    The goal of this research was to develop a simplified, but accurate, three-dimensional transport model that is capable of describing the observed reactor behavior.

    A hybrid approach combining experimental and simulation studies was used for model development. Several sets of experiments were conducted to investigate the effects of process parameters on wafer temperature.

    A three-dimensional gas flow and temperature model was developed and used to compute the energy transferred across the gas/wafer interface. System-dependent heat transfer parameters were formulated as a nonlinear parameter estimation problem and identified using experimental measurements (a toy sketch of such a fit follows this abstract).

    Good agreement was found between the steady-state wafer temperature predictions and experimental data at various gas compositions, and the wafer temperature dynamics were successfully predicted using a temperature model considering the energy exchanges between the thermocouple, wafer, and showerhead.
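    The following is a minimal sketch of the kind of nonlinear parameter estimation mentioned above, assuming a toy lumped energy balance in which convective gain from the gas offsets radiative loss to the chamber walls; the balance equation, the parameter names (h_gas, eps), and all numbers are illustrative assumptions, not the report's actual model.

        import numpy as np
        from scipy.optimize import brentq, least_squares

        SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

        def predicted_wafer_temp(params, T_gas, T_wall):
            """Solve h_gas*(T_gas - T) = eps*SIGMA*(T^4 - T_wall^4) for the
            steady-state wafer temperature T at each operating point."""
            h_gas, eps = params

            def balance(T, tg, tw):
                return h_gas * (tg - T) - eps * SIGMA * (T**4 - tw**4)

            # The root is bracketed by the wall and gas temperatures.
            return np.array([brentq(balance, tw, tg, args=(tg, tw))
                             for tg, tw in zip(T_gas, T_wall)])

        def residuals(params, T_gas, T_wall, T_meas):
            return predicted_wafer_temp(params, T_gas, T_wall) - T_meas

        # Synthetic stand-ins for the thermocouple measurements (kelvin).
        T_gas = np.array([700.0, 750.0, 800.0, 850.0])
        T_wall = np.full(4, 300.0)
        T_meas = np.array([634.0, 667.0, 699.0, 730.0])

        fit = least_squares(residuals, x0=[50.0, 0.3],
                            args=(T_gas, T_wall, T_meas),
                            bounds=([1.0, 1e-6], [1e3, 1.0]))
        print("estimated (h_gas, eps):", fit.x)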

  • Item
    Comparison of Run-to-Run Control Methods in Semiconductor Manufacturing Processes
    (2000) Zhang, Chang; Deng, Hao; Baras, John S.; ISR
    Run-to-Run (RtR) control plays an important role in semiconductor manufacturing.

    In this paper, RtR control methods are generalized. Set-valued RtR controllers with ellipsoid approximation are compared with other RtR controllers by simulation according to the following criteria: a good RtR controller should be able to compensate for various disturbances, such as process drifts, process shifts (step disturbances), and model errors; moreover, it should be able to deal with the limitations, bounds, cost requirements, multiple targets, and time delays that are often encountered in real processes.

    Preliminary results show the good performance of the set-valued RtR controller. Furthermore, this paper shows that it is insufficient to use linear models to approximate nonlinear processes and that it is necessary to develop nonlinear model based RtR controllers. (A toy sketch of a classical RtR scheme follows.)
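    For orientation, here is a minimal sketch of the classical EWMA controller, one of the standard RtR schemes such comparisons cover, rejecting a drifting disturbance in the presence of gain-model error. All process numbers are invented; this is not the paper's set-valued controller.

        import numpy as np

        target = 100.0
        beta_model = 2.0    # process gain assumed by the controller
        beta_true = 2.4     # actual process gain (model error)
        drift = 0.3         # per-run drift disturbance
        lam = 0.4           # EWMA smoothing weight

        rng = np.random.default_rng(0)
        intercept_est = 10.0                 # running offset estimate
        recipe = (target - intercept_est) / beta_model

        for run in range(25):
            # True process response for this run; its offset drifts.
            y = 12.0 + drift * run + beta_true * recipe + rng.normal(0.0, 0.5)

            # EWMA update of the offset estimate, then recompute the recipe.
            intercept_est = (lam * (y - beta_model * recipe)
                             + (1 - lam) * intercept_est)
            recipe = (target - intercept_est) / beta_model
            print(f"run {run:2d}: output {y:7.2f}")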

  • Item
    Statistical Parameter Learning for Belief Networks with Fixed Structure
    (1999) Li, Hongjun; Baras, John S.; ISR; CSHCN
    In this report, we address the problem of parameter learning for belief networks with fixed structure, based on empirical observations. Both complete and incomplete observations are considered. Given complete data, we first describe the simple problem of single-parameter learning to build intuition, and then extend it to belief networks under an appropriate system decomposition. If the observations are incomplete, we first estimate the "missing" observations and treat them as though they were "real" observations, after which parameter learning proceeds as in the complete-data case. We derive a uniform algorithm based on this idea for the incomplete-data case and present its convergence and optimality properties. The algorithm applies trivially when the observations are complete.
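    A toy illustration of the complete-data case: maximum-likelihood CPT learning for a fixed two-node network A -> B from binary data by relative-frequency counts. The network and data are invented; the report's decomposition and incomplete-data algorithm generalize this counting idea.

        import numpy as np

        # Each row is one complete observation (a, b), with a, b in {0, 1}.
        data = np.array([[1, 1], [1, 1], [1, 0], [0, 0],
                         [0, 1], [0, 0], [1, 1], [0, 0]])

        # P(A = 1): relative frequency over all observations.
        p_a = data[:, 0].mean()

        # P(B = 1 | A = a): relative frequency within each slice of A.
        p_b_given_a = [data[data[:, 0] == a, 1].mean() for a in (0, 1)]

        print(f"P(A=1) = {p_a:.3f}")
        print(f"P(B=1|A=0) = {p_b_given_a[0]:.3f}, "
              f"P(B=1|A=1) = {p_b_given_a[1]:.3f}")

        # With incomplete data, the report's approach fills in each missing
        # value with an estimate under the current parameters and then
        # reuses these same counting formulas.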
  • Item
    An Introduction to Belief Networks
    (1999) Li, Hongjun; Baras, John S.; ISR; CSHCN
    Belief networks, also called Bayesian networks or probabilistic causal networks, were developed in the late 1970s to model distributed processing in reading comprehension. Since then they have attracted much attention and have become popular within the AI probability and uncertainty community. As a natural and efficient model of human inferential reasoning, belief networks have emerged as a general scheme for knowledge representation under uncertainty.

    In this report, we first introduce belief networks in light of knowledge representation under uncertainty; in the remaining sections we then describe their semantics, inference mechanisms, and some issues related to learning belief networks. This report is not intended to be a tutorial for beginners. Rather, it aims to point out some important aspects of belief networks and to summarize some important algorithms.
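    A toy example of the semantics and inference such networks support: the joint distribution factorizes into per-node conditionals, and queries are answered by marginalization. The two-node network (Rain -> WetGrass) and its numbers are invented for illustration.

        # P(Rain) and P(WetGrass | Rain) fully specify the network.
        p_rain = 0.2
        p_wet_given_rain = {True: 0.9, False: 0.1}

        def joint(rain, wet):
            """P(rain, wet) = P(rain) * P(wet | rain)."""
            pr = p_rain if rain else 1.0 - p_rain
            pw = p_wet_given_rain[rain]
            return pr * (pw if wet else 1.0 - pw)

        # Diagnostic inference: P(rain | wet) = P(rain, wet) / P(wet).
        p_wet = joint(True, True) + joint(False, True)
        print("P(rain | wet) =", joint(True, True) / p_wet)  # ~0.692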

  • Item
    Discrete-Time Risk-Sensitive Filters with Non-Gaussian Initial Conditions and their Ergodic Properties
    (1998) Dey, Subhrakanti; Charalambous, Charalambos D.; ISR
    In this paper, we study asymptotic stability properties of risk-sensitive filters with respect to their initial conditions. In particular, we consider a linear time-invariant system with initial conditions that are not necessarily Gaussian. We show that in the case of Gaussian initial conditions, the optimal risk-sensitive filter asymptotically converges, in the mean square sense, to any suboptimal filter initialized with an incorrect covariance matrix for the initial state vector, provided the incorrect initializing value results in a risk-sensitive filter that is asymptotically stable (that is, results in an asymptotically stabilizing solution of a Riccati equation). For non-Gaussian initial conditions, we derive the expression for the risk-sensitive filter in terms of a finite number of parameters. Under a boundedness assumption satisfied by the fourth-order moments of the initial state variable and a slow-growth condition satisfied by a certain Radon-Nikodym derivative, we show that a suboptimal risk-sensitive filter initialized with Gaussian initial conditions asymptotically approaches the optimal risk-sensitive filter for non-Gaussian initial conditions in the mean square sense. (A numerical illustration of this forgetting property appears after this abstract.)

    The research and scientific content in this material has been submitted to the 1999 American Control Conference, San Diego, June 1999.
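    The following sketch illustrates numerically, with invented system matrices, the forgetting property discussed above: two Riccati recursions started from different initial covariances converge to the same limit. For simplicity it iterates the standard Kalman filter Riccati recursion; the paper's risk-sensitive filter obeys an analogous modified Riccati equation.

        import numpy as np

        A = np.array([[0.9, 0.1], [0.0, 0.8]])   # stable state matrix
        C = np.array([[1.0, 0.0]])               # observation matrix
        Q = 0.1 * np.eye(2)                      # process noise covariance
        R = np.array([[0.5]])                    # measurement noise covariance

        def riccati_step(P):
            """One measurement update followed by one time update."""
            K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
            P_post = P - K @ C @ P
            return A @ P_post @ A.T + Q

        P1 = np.eye(2)           # "correct" initialization
        P2 = 50.0 * np.eye(2)    # badly misinitialized covariance
        for _ in range(60):
            P1, P2 = riccati_step(P1), riccati_step(P2)

        print("gap after 60 steps:", np.linalg.norm(P1 - P2))  # ~0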
  • Item
    Accurate Segmentation and Estimation of Parametric Motion Fields for Object-based Video Coding using Mean Field Theory
    (1997) Haridasan, Radhakrishan; Baras, John S.; ISR; CSHCN
    We formulate the problem of decomposing a scene into its constituent objects as one of partitioning the current frame into the objects comprising it. The motion parameter is modeled as a nonrandom but unknown quantity and the problem is posed as one of Maximum Likelihood (ML) estimation. The MRF potentials which characterize the underlying segmentation field are defined so that the spatio-temporal segmentation is constrained by the static image segmentation of the current frame. To compute the motion parameter vector and the segmentation simultaneously, we use the Expectation Maximization (EM) algorithm. The E-step of the EM algorithm, which computes the conditional expectation of the segmentation field, now reflects interdependencies more accurately because of neighborhood interactions. We take recourse to Mean Field theory to compute the expected value of the conditional MRF. Robust M-estimation methods are used in the M-step. To allow for motions of large magnitude, image frames are represented at various scales and the EM procedure is embedded in a hierarchical coarse-to-fine framework. Our formulation results in a highly parallel algorithm that computes robust and accurate segmentations as well as motion vectors for use in low bit rate video coding. (A toy sketch of the robust M-step idea follows this abstract.)

    This report has been submitted as a paper to the SPIE conference on Visual Communications and Image Processing (VCIP98), to be held in San Jose, California on Jan 24-30, 1998.
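    A minimal sketch of the robust M-estimation idea used in the M-step: iteratively reweighted least squares (IRLS) with Huber weights, applied here to a toy scalar linear model contaminated by outliers. The data and model are invented; the paper fits parametric motion fields over image regions.

        import numpy as np

        rng = np.random.default_rng(1)
        x = np.linspace(0.0, 1.0, 50)
        y = 2.0 * x + 0.5 + rng.normal(0.0, 0.05, 50)
        y[::10] += 3.0                    # gross outliers (mismatched pixels)

        X = np.column_stack([x, np.ones_like(x)])
        theta = np.linalg.lstsq(X, y, rcond=None)[0]  # ordinary LS start
        delta = 0.1                                   # Huber threshold

        for _ in range(20):
            r = y - X @ theta
            # Huber weights: quadratic near zero, linear in the tails.
            w = np.where(np.abs(r) <= delta, 1.0, delta / np.abs(r))
            sw = np.sqrt(w)
            theta = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]

        print("robust (slope, intercept):", theta)  # near (2.0, 0.5)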
  • Item
    Minimum Chi-Square vs Least Squares in Grouped Data
    (1997) Kedem, Benjamin; Wu, Y.; ISR
    Estimation of parameters from grouped data is considered using a least squares estimator popular in scientific applications. The method minimizes the squared distance between the empirical and hypothesized cumulative distribution functions, and is reminiscent of a discrete version of the Cramer-von Mises statistic. The resulting least squares estimator is related to the minimum chi-square estimator and is likewise asymptotically normal. The two methods are compared briefly for categorized mixed lognormal data with a jump at zero.
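    An illustrative sketch of the least squares idea described above: estimate distribution parameters by minimizing the squared distance between the empirical and hypothesized CDFs at the bin edges of grouped data. The lognormal family, the bins, and the counts are invented for the example.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import lognorm

        edges = np.array([0.5, 1.0, 2.0, 4.0, 8.0])   # upper group bounds
        counts = np.array([90, 260, 340, 230, 80])    # observations per group
        F_emp = np.cumsum(counts) / counts.sum()      # empirical CDF at edges

        def sq_distance(params):
            mu, sigma = params
            if sigma <= 0.0:
                return np.inf
            F_hyp = lognorm.cdf(edges, s=sigma, scale=np.exp(mu))
            return np.sum((F_emp - F_hyp) ** 2)

        fit = minimize(sq_distance, x0=[0.5, 0.8], method="Nelder-Mead")
        print("estimated (mu, sigma):", fit.x)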
  • Item
    Mathematical Programming Algorithms for Regression-based Nonlinear Filtering in R^N
    (1997) Sidiropoulos, N.D.; Bro, R.; ISR
    Constrained regression problems appear in the context of optimal nonlinear filtering, as well as in a variety of other contexts, e.g., chromatographic analysis in chemometrics and manufacturing, and spectral estimation. This paper presents novel mathematical programming algorithms for some important constrained regression problems in R^N. For brevity, we focus on four key problems, namely locally monotonic regression (the optimal counterpart of iterated median filtering), the related problem of piecewise monotonic regression, runlength-constrained regression (a useful segmentation and edge detection technique), and uni- and oligo-modal regression (of interest in chromatography and spectral estimation). The proposed algorithms are exact and efficient, and they also naturally suggest slightly suboptimal but very fast approximate algorithms, which may be preferable in practice.
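    For orientation, here is the simplest relative of the problems listed above: (globally) monotonic least squares regression, solved by the classical pool-adjacent-violators algorithm (PAVA). This is not one of the paper's algorithms; its locally monotonic, piecewise monotonic, and runlength-constrained variants need the more elaborate mathematical programming methods the paper develops.

        import numpy as np

        def pava(y):
            """Nondecreasing least squares fit via pool-adjacent-violators."""
            merged = []                      # list of [block mean, block size]
            for v in map(float, y):
                merged.append([v, 1])
                # Merge backwards while monotonicity is violated.
                while len(merged) > 1 and merged[-2][0] > merged[-1][0]:
                    m2, s2 = merged.pop()
                    m1, s1 = merged.pop()
                    merged.append([(m1 * s1 + m2 * s2) / (s1 + s2), s1 + s2])
            return np.concatenate([np.full(int(s), m) for m, s in merged])

        print(pava([1.0, 3.0, 2.0, 4.0, 3.5, 5.0]))
        # -> [1.   2.5  2.5  3.75 3.75 5.  ]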
  • Item
    Recovering Information from Summary Data
    (1997) Faloutsos, Christos; Jagadish, H.V.; Sidiropoulos, N.D.; ISR
    Data is often stored in summarized form, as a histogram of aggregates (COUNTs, SUMs, or AVeraGes) over specified ranges. Queries regarding specific values, or ranges different from those stored, cannot be answered exactly from the summarized data. In this paper we study how to estimate the original detail data from the stored summary.

    We formulate this task as an inverse problem, specifying a well-defined cost function that has to be optimized under constraints.

    In particular, we propose the use of a Linear Regularization method, which "maximizes the smoothness" of the estimate. Our main theoretical contribution is a Theorem which shows that, for smooth enough distributions, we can achieve full recovery from summary data.

    Our theorem is closely related to the well known Shannon-Nyquist sampling theorem.

    We describe how to apply this theory to a variety of database problems that involve partial information, such as OLAP, data warehousing, and histograms in query optimization. Our main practical contribution is that the Linear Regularization method is extremely effective, both on synthetic and on real data. Our experiments show that the proposed approach almost consistently outperforms the "uniformity" assumption, achieving significant savings in root-mean-square error: up to 20% for stock price data, and up to 90% for smoother data sets.
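    A toy sketch of the linear-regularization idea: recover a detail vector x from coarse range SUMs b = A x by trading data fit against smoothness, minimizing ||A x - b||^2 + lam * ||D x||^2 with D a second-difference operator. The operator, penalty weight, and data are invented; the paper develops the method and its recovery guarantees in full.

        import numpy as np

        n, width = 32, 4   # 32 detail cells summarized into 8 range SUMs
        A = np.kron(np.eye(n // width), np.ones((1, width)))  # SUM operator

        t = np.linspace(0.0, 1.0, n)
        x_true = np.exp(-((t - 0.4) ** 2) / 0.02)   # smooth detail data
        b = A @ x_true                              # the stored summary

        D = np.diff(np.eye(n), n=2, axis=0)         # smoothness penalty
        lam = 1e-3
        x_hat = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ b)

        print("max recovery error:", np.max(np.abs(x_hat - x_true)))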
