Institute for Systems Research Technical Reports
Permanent URI for this collection: http://hdl.handle.net/1903/4376
This archive contains a collection of reports generated by the faculty and students of the Institute for Systems Research (ISR), a permanent, interdisciplinary research unit in the A. James Clark School of Engineering at the University of Maryland. ISR-based projects are conducted through partnerships with industry and government, bringing together faculty and students from multiple academic departments and colleges across the university.
Search Results
Item Sensitivity Analysis and Discrete Stochastic Optimization for Semiconductor Manufacturing Systems (2000) Mellacheruvu, Praveen V.; Herrmann, Jeffrey W.; Fu, Michael C.; ISR
The semiconductor industry is a capital-intensive industry with rapid time-to-market, short product development cycles, complex product flows and other characteristics. These factors make it necessary to utilize equipment efficiently and reduce cycle times. Further, the complexity and highly stochastic nature of these manufacturing systems make it difficult to study their characteristics through analytical models. Hence we resort to simulation-based methodologies to model these systems. This research aims at developing and implementing simulation-based operations research techniques to facilitate System Control (through sensitivity analysis) and System Design (through optimization) for semiconductor manufacturing systems.
Sensitivity analysis for small changes in input parameters is performed using gradient estimation techniques. Gradient estimation methods are evaluated by studying the state of the art and by comparing the finite difference and simultaneous perturbation methods, applying both to a stochastic manufacturing system. The results are compared with the gradients obtained through analytical queueing models. The finite difference method is implemented in a heterogeneous simulation environment (HSE)-based decision support tool for process engineers. This tool performs heterogeneous simulations and sensitivity analyses.
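For context, the analytical benchmark in such comparisons is typically a closed-form queueing result. For example, assuming an M/M/1 station (the report does not state the specific model used), the mean time in system and its sensitivity to the service rate are:

```latex
% Illustrative analytical benchmark; an M/M/1 station is an assumption here.
% Mean time in system and its sensitivity to the service rate \mu:
\begin{align}
  W(\lambda,\mu) &= \frac{1}{\mu - \lambda}, \qquad \lambda < \mu, \\
  \frac{\partial W}{\partial \mu} &= -\frac{1}{(\mu - \lambda)^{2}}.
\end{align}
```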
The gradient-based techniques used for sensitivity analysis form the building blocks for a gradient-based discrete stochastic optimization procedure. This procedure is applied to the problem of allocating a limited budget to machine purchases so as to meet throughput requirements and minimize cycle time. The performance of the algorithm is evaluated by applying it to a wide range of problem instances.
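As a rough illustration of this kind of procedure, the sketch below greedily allocates a machine-purchase budget using simulated finite-difference estimates of cycle-time reduction per dollar. The simulation oracle, cost figures, and stopping rule are assumptions for illustration, not the report's actual algorithm.

```python
# Illustrative sketch only: a greedy, gradient-guided budget allocation for
# machine purchases.  simulate_cycle_time is a hypothetical stand-in for the
# stochastic fab simulation; the report's procedure is not reproduced here.
import random

def simulate_cycle_time(machine_counts):
    """Placeholder simulation: returns a noisy estimate of average total
    cycle time for the given machine counts."""
    base = sum(100.0 / n for n in machine_counts)   # fake congestion model
    return base + random.gauss(0.0, 0.5)            # simulation noise

def allocate_budget(initial_counts, unit_costs, budget):
    counts = list(initial_counts)
    spent = 0.0
    while True:
        current = simulate_cycle_time(counts)
        best_gain, best_i = 0.0, None
        for i, cost in enumerate(unit_costs):
            if spent + cost > budget:
                continue
            trial = counts[:]
            trial[i] += 1                            # forward difference in one discrete direction
            gain = (current - simulate_cycle_time(trial)) / cost
            if gain > best_gain:
                best_gain, best_i = gain, i
        if best_i is None:                           # budget exhausted or no estimated improvement
            return counts
        counts[best_i] += 1
        spent += unit_costs[best_i]

print(allocate_budget([2, 2, 1], unit_costs=[1.0, 2.0, 3.0], budget=6.0))
```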
Item Optimal Risk Sensitive Control of Semi-Markov Decision Processes (2000) Chawla, Jay P.; Marcus, Steven I.; Shayman, Mark A.; ISR
In this thesis, we study risk-sensitive cost minimization in semi-Markov decision processes. The main thrust of the thesis concerns the minimization of average risk-sensitive costs over the infinite horizon. Existing theory is expanded in two directions: the semi-Markov case is considered, and non-irreducible chains are considered. In particular, the analysis of the non-irreducible case is a significant addition to the literature, since many real-world systems do not exhibit irreducibility under all stationary Markov policies. Extension of existing results to the semi-Markov case is significant because it requires the definition of a new dynamic programming equation and a technically challenging adaptation of the Perron-Frobenius eigenvalue from the discrete-time case.
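For orientation, a common way to write the infinite-horizon average risk-sensitive cost in the discrete-time case is shown below; the thesis works with the semi-Markov generalization, and its exact formulation may differ.

```latex
% A standard form of the average risk-sensitive cost for a controlled chain
% with one-stage costs c(x_k, a_k) and risk parameter \gamma > 0.  The
% semi-Markov analogue replaces the horizon n by elapsed time.
J_\gamma(\pi, x) \;=\; \limsup_{n \to \infty} \frac{1}{n\,\gamma}
  \log \mathbb{E}^{\pi}_{x}\!\left[\exp\Big(\gamma \sum_{k=0}^{n-1} c(x_k, a_k)\Big)\right]
```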
In order to determine an optimal policy, new concepts in the classification of Markov chains need to be introduced. This is because in the non-irreducible case, the average risk-sensitive cost objective function permits extremely unlikely events to exert a controlling influence on costs. We define equivalence classes of states called `strongly communicating classes' and formulate in terms of them a new characterization of the underlying structure of Markov decision problems and Markov chains.
In the risk-sensitive case, the expected cost incurred prior to a stopping time with finite expected value can be infinite. For this reason, we introduce an assumption: reachability with finite cost. This is the fundamental assumption required to achieve the major results of this thesis.
We explore existence conditions for an optimal policy, optimality equations, and behavior for large and small values of the risk-sensitivity parameter. (Only non-negative risk parameters are discussed in this thesis, i.e., the risk-averse and risk-neutral cases, not the risk-seeking case.) Ramifications for the risk-neutral objective function are also analyzed. Furthermore, a simple solution technique we call `recursive computation' for finding an optimal policy, applicable to small state spaces, is described through examples.
The countable state space case is explored, and results that hold only for a finite state space are also presented. Other, related objective functions, such as sample path cost, are analyzed and discussed.
We also explore finite time horizon semi-Markov problems and present a general technique for solving them. We define a new objective function, the minimization of which is called the `deadline problem'. This is a problem in which the probability of reaching the goal state within a set period of time is maximized. We transform the deadline problem objective function into an equivalent finite-horizon risk-sensitive objective function.
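One plausible way to formalize the deadline objective, in notation assumed here rather than taken from the thesis, is:

```latex
% Assumed notation: \tau_G is the first hitting time of the goal state G under
% policy \pi starting from state x, and T is the deadline.
\max_{\pi} \;\; \mathbb{P}^{\pi}_{x}\!\left(\tau_G \le T\right)
```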
Item Stochastic Approximation and Optimization for Markov Chains (2000) Bartusek, John D.; Makowski, Armand M.; ISR
We study the convergence properties of the projected stochastic approximation (SA) algorithm, which may be used to find the root of an unknown steady state function of a parameterized family of Markov chains. The analysis is based on the ODE method, and we develop a set of application-oriented conditions which imply almost sure convergence and are verifiable in terms of typically available model data. Specific results are obtained for geometrically ergodic Markov chains satisfying a uniform Foster-Lyapunov drift inequality. Stochastic optimization is a direct application of the above root-finding problem if the SA is driven by a gradient estimate of steady state performance. We study the convergence properties of an SA driven by a gradient estimator which observes an increasing number of samples from the Markov chain at each step of the SA's recursion. To show almost sure convergence to the optimizer, a framework of verifiable conditions is introduced which builds on the general SA conditions proposed for the root-finding problem.
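The projected SA recursion referred to above is typically of the following form (the step sizes, projection set, and observation sequence are problem-specific):

```latex
% Projected stochastic approximation: \Pi_C projects onto the constraint set C,
% a_n is the step-size sequence, and Y_n is the noisy observation (e.g., a
% gradient estimate built from Markov chain samples at the current parameter).
\theta_{n+1} \;=\; \Pi_C\!\left(\theta_n + a_n\, Y_n\right),
\qquad \sum_n a_n = \infty, \quad \sum_n a_n^2 < \infty
```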
We also consider a difficulty sometimes encountered in applications when selecting the set used in the projection operator of the SA algorithm. Suppose there exists a well-behaved positive recurrent region of the state process parameter space where the convergence conditions are satisfied; this is the ideal set to project onto. Unfortunately, the boundaries of this projection set are not known a priori when implementing the SA. Therefore, we consider the convergence properties when the projection set is chosen to include regions outside the well-behaved region. Specifically, we consider an SA applied to an M/M/1 queue which adjusts the service rate parameter when the projection set includes parameters that cause the queue to be transient.
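A rough sketch of such an experiment is given below, assuming an illustrative cost (service cost plus time-average queue length) and finite-difference observations rather than the report's exact gradient estimator; the constants are arbitrary.

```python
# Sketch of a projected SA adjusting the M/M/1 service rate mu, where the
# projection interval [MU_LO, MU_HI] deliberately includes mu < LAM, i.e. the
# transient regime.  Cost, step sizes, and observation scheme are assumptions.
import random

LAM = 1.0                      # arrival rate
MU_LO, MU_HI = 0.5, 5.0        # projection set includes the transient region mu < LAM
C_SERVICE = 2.0                # cost per unit of service capacity

def observed_cost(mu, horizon=200):
    """Simulate an M/M/1 for a fixed number of events and return a noisy cost:
    service cost plus time-average queue length over the observation window."""
    q, t, area = 0, 0.0, 0.0
    for _ in range(horizon):
        rate = LAM + (mu if q > 0 else 0.0)
        dt = random.expovariate(rate)
        area += q * dt
        t += dt
        if random.random() < LAM / rate:
            q += 1                                   # arrival
        else:
            q -= 1                                   # departure
    return C_SERVICE * mu + area / t

mu = 0.8                                             # start inside the badly-behaved region
for n in range(1, 2001):
    a_n = 0.5 / n                                    # SA gain
    c_n = 0.2 / n ** 0.25                            # finite-difference half-width
    mu_p = min(MU_HI, mu + c_n)                      # keep perturbed points inside the set
    mu_m = max(MU_LO, mu - c_n)
    grad = (observed_cost(mu_p) - observed_cost(mu_m)) / (mu_p - mu_m)
    mu = min(MU_HI, max(MU_LO, mu - a_n * grad))     # projected SA step
print("estimated service rate:", round(mu, 2))
```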
Finally, we consider an alternative SA where the recursion is driven by a sample average of observations. We develop conditions implying convergence for this algorithm which are based on a uniform large deviation upper bound, and we present specialized conditions implying this property for finite state Markov chains.
Item Randomized Difference Two-Timescale Simultaneous Perturbation Stochastic Approximation Algorithms for Simulation Optimization of Hidden Markov Models (2000) Bhatnagar, Shalabh; Fu, Michael C.; Marcus, Steven I.; Bhatnagar, Shashank; ISR
We propose two finite difference two-timescale simultaneous perturbation stochastic approximation (SPSA) algorithms for simulation optimization of hidden Markov models. Stability and convergence of both algorithms are proved. Numerical experiments on a queueing model with high-dimensional parameter vectors demonstrate orders of magnitude faster convergence using these algorithms over related $(N+1)$-simulation finite difference analogues and another two-simulation finite difference algorithm that updates in cycles.
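A generic two-timescale SPSA sketch is given below for orientation; the noisy quadratic objective and all constants are assumptions, and the report's algorithms operate on simulations of hidden Markov models rather than this toy problem.

```python
# Illustrative two-timescale SPSA: the faster recursion (step b_n) averages the
# SPSA gradient estimates, while the slower recursion (step a_n, with
# a_n / b_n -> 0) updates the parameter.  Two simulations are used per update.
import numpy as np

rng = np.random.default_rng(0)
DIM = 10

def noisy_cost(theta):
    """Stand-in for one simulation run: a noisy quadratic with minimum at 1."""
    return float(np.sum((theta - 1.0) ** 2) + rng.normal(0.0, 0.1))

theta = np.zeros(DIM)
g_avg = np.zeros(DIM)                        # faster-timescale gradient average
for n in range(1, 5001):
    a_n = 1.0 / n                            # slower timescale
    b_n = 1.0 / n ** 0.6                     # faster timescale
    c_n = 1.0 / n ** 0.25                    # perturbation size
    delta = rng.choice([-1.0, 1.0], size=DIM)            # random +-1 perturbations
    g_hat = (noisy_cost(theta + c_n * delta) -
             noisy_cost(theta - c_n * delta)) / (2 * c_n * delta)
    g_avg += b_n * (g_hat - g_avg)           # fast averaging of SPSA estimates
    theta -= a_n * g_avg                     # slow parameter update
print("theta ~", np.round(theta, 2))
```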
Item Comparing Gradient Estimation Methods Applied to Stochastic Manufacturing Systems (2000) Mellacheruvu, Praveen V.; Fu, Michael C.; Herrmann, Jeffrey W.; ISR
This paper compares two gradient estimation methods that can be used for estimating the sensitivities of output metrics with respect to the input parameters of a stochastic manufacturing system. A brief description of the methods currently in use is followed by a description of the two methods: the finite difference method and the simultaneous perturbation method. While the finite difference method has been in use for a long time, simultaneous perturbation is a relatively new method that has been applied with stochastic approximation for optimization, with good results. The methods described are used to analyze a stochastic manufacturing system and estimate gradients. The results are compared to the gradients calculated from analytical queueing system models. These gradient methods are of significant use in complex manufacturing systems, such as semiconductor manufacturing systems, where a large number of input parameters affect the average total cycle time. These gradient estimation methods can estimate the impact that these input parameters have and identify the parameters that have the maximum impact on system performance.
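The practical difference between the two estimators is the simulation budget per gradient estimate: central finite differences require two simulations per parameter, while simultaneous perturbation requires two in total. The sketch below illustrates this on a noisy quadratic that stands in for the manufacturing simulation (an assumption for illustration only).

```python
# Central finite differences vs. simultaneous perturbation on a noisy objective.
# sim_output is a hypothetical stand-in for one run of the manufacturing
# simulation; the true gradient of the quadratic is known for comparison.
import numpy as np

rng = np.random.default_rng(1)
P = 8                                               # number of input parameters

def sim_output(x):
    """Stand-in for one run of the stochastic manufacturing simulation."""
    return float(np.sum(x ** 2) + rng.normal(0.0, 0.05))

def fd_gradient(x, c=0.1):
    g = np.zeros(P)
    for i in range(P):                              # 2*P simulations in total
        e = np.zeros(P)
        e[i] = c
        g[i] = (sim_output(x + e) - sim_output(x - e)) / (2 * c)
    return g

def sp_gradient(x, c=0.1):
    delta = rng.choice([-1.0, 1.0], size=P)         # random +-1 perturbation
    diff = sim_output(x + c * delta) - sim_output(x - c * delta)   # 2 simulations
    return diff / (2 * c * delta)

x0 = np.ones(P)
print("true gradient     :", 2 * x0)
print("finite differences:", np.round(fd_gradient(x0), 2))
print("simultaneous pert.:", np.round(sp_gradient(x0), 2))
```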
Item Optimal Multilevel Feedback Policies for ABR Flow Control using Two Timescale SPSA (1999) Bhatnagar, Shalabh; Fu, Michael C.; Marcus, Steven I.; ISR
Optimal multilevel control policies for rate-based flow control in available bit rate (ABR) service in asynchronous transfer mode (ATM) networks are obtained in the presence of information and propagation delays, using a numerically efficient two-timescale simultaneous perturbation stochastic approximation (SPSA) algorithm. Numerical experiments demonstrate fast convergence even in the presence of significant delays and a large number of parametrized parameter levels.
Item Gradient Estimation of Two-Stage Continuous Transfer Lines Subject to Operation-Dependent Failures (1998) Fu, Michael C.; Xie, Xiaolan; ISR
This paper addresses gradient estimation for transfer lines comprising two machines separated by a buffer of finite capacity. A continuous flow model is considered, in which machines are subject to operation-dependent failures, i.e., a machine cannot fail while it is idle. Both repair times and failure times may be general, i.e., they need not be exponentially distributed. The system is hybrid in the sense that it has both continuous dynamics, resulting from the continuous material flow, and discrete events: failures and repairs. The purpose of this paper is to estimate the gradient of the throughput rate with respect to the buffer capacity. Both IPA estimators and SPA estimators are derived. Simulation results show that the IPA estimators do not work, contradicting the common belief that IPA always works for continuous flow models.
Item Application of Perturbation Analysis to the Design and Analysis of Control Charts (1997) Fu, Michael C.; Hu, Jian-Qiang; ISR
The design of control charts in statistical quality control addresses the optimal selection of design parameters such as the sampling frequency and the control limits, and includes sensitivity analysis with respect to system parameters such as the various process parameters and the economic costs of sampling. The advent of more complicated control chart schemes has necessitated the use of Monte Carlo simulation in the design process, particularly in the evaluation of performance measures such as the average run length. In this paper, we apply perturbation analysis to derive gradient estimators that can be used in gradient-based optimization algorithms and in sensitivity analysis when Monte Carlo simulation is employed. We illustrate the technique on a simple Shewhart control chart and on a more complicated control chart that includes the exponentially weighted moving average control chart as a special case.
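For orientation, the sketch below shows a Monte Carlo evaluation of the average run length (ARL) of a simple Shewhart chart and, for contrast only, a naive finite-difference sensitivity to the control-limit width; the paper's contribution is perturbation analysis estimators that avoid this brute-force rerunning, and the shift size and all constants here are assumptions.

```python
# Monte Carlo ARL estimation for a Shewhart X-bar chart with limits at +-k
# standard errors and a sustained mean shift.  The finite-difference sensitivity
# is shown only as a naive baseline; it is not the paper's perturbation analysis.
import random

def run_length(k, shift=0.5):
    """Number of samples until an out-of-control signal."""
    n = 0
    while True:
        n += 1
        xbar = random.gauss(shift, 1.0)              # standardized sample mean
        if abs(xbar) > k:
            return n

def arl(k, reps=10000):
    return sum(run_length(k) for _ in range(reps)) / reps

k = 3.0
print("ARL(k=3)           :", round(arl(k), 1))
print("d ARL / d k (naive):", round((arl(k + 0.05) - arl(k - 0.05)) / 0.1, 1))
```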