Decision, Operations & Information Technologies Research Works

Recent Submissions

  • Item
    Estimating the Tour Length for the Close Enough Traveling Salesman Problem
    (MDPI, 2021-04-12) Roy, Debdatta Sinha; Golden, Bruce; Wang, Xingyin; Wasil, Edward
    We construct empirically based regression models for estimating the tour length in the Close Enough Traveling Salesman Problem (CETSP). In the CETSP, a customer is considered visited when the salesman visits any point in the customer’s service region. We build our models using as many as 14 independent variables on a set of 780 benchmark instances of the CETSP and compare the estimated tour lengths to the results from a Steiner zone heuristic. We validate our results on a new set of 234 instances that are similar to the 780 benchmark instances. We also generate results for a new set of 72 larger instances. Overall, our models fit the data well and do a very good job of estimating the tour length. In addition, we show that our modeling approach can be used to accurately estimate the optimal tour lengths for the CETSP.
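    A rough illustration of the modeling approach: fit a linear regression of tour length on instance features. The two features below (a Beardwood-Halton-Hammersley-style sqrt(n * area) term and a radius correction) and the synthetic instance generator are hypothetical stand-ins, not the paper's 14 variables or its 780 benchmark instances.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)

    def synthetic_instance():
        n = rng.integers(50, 500)        # number of customers
        area = rng.uniform(100, 1000)    # bounding-box area
        r = rng.uniform(0.1, 5.0)        # mean service-region radius
        # Hypothetical ground truth: a TSP-style sqrt(n * area) term, shrunk
        # because larger service regions let the salesman stop farther away.
        tour = 0.7 * np.sqrt(n * area) - 2.0 * r * np.sqrt(n) + rng.normal(0, 5)
        return [np.sqrt(n * area), r * np.sqrt(n)], tour

    X, y = map(np.array, zip(*(synthetic_instance() for _ in range(780))))
    model = LinearRegression().fit(X, y)
    print(f"R^2 = {model.score(X, y):.3f}, coefficients = {model.coef_}")
    ```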
  • Item
    Individual differences in regulatory mode moderate the effectiveness of a pilot mHealth trial for diabetes management among older veterans
    (PLoS (Public Library of Science), 2018-03-07) Dugas, Michelle; Crowley, Kenyon; Gao, Guodong Gordon; Xu, Timothy; Agarwal, Ritu; Kruglanski, Arie W.; Steinle, Nanette
    mHealth tools to help people manage chronic illnesses have surged in popularity, but evidence of their effectiveness remains mixed. The aim of this study was to address a gap in the mHealth and health psychology literatures by investigating how individual differences in psychological traits are associated with mHealth effectiveness. Drawing from regulatory mode theory, we tested the role of locomotion and assessment in explaining why mHealth tools are effective for some but not everyone. A 13-week pilot study investigated the effectiveness of an mHealth app in improving health behaviors among older veterans (n = 27) with poorly controlled Type 2 diabetes. We developed a gamified mHealth tool (DiaSocial) aimed at encouraging tracking of glucose control, exercise, nutrition, and medication adherence. We found important individual differences in longitudinal trends of adherence, operationalized as points earned for healthy behavior, over the 13-week study period. Specifically, low locomotion was associated with unchanging levels of adherence during the course of the study. In contrast, high locomotion was associated with generally stronger adherence, although it exhibited a quadratic longitudinal trend. In addition, high assessment was associated with a marginal, positive trend in adherence over time, while low assessment was associated with a marginal, negative trend. Next, we examined the relationship between adherence and clinical outcomes, finding that greater adherence was associated with greater reductions in glycated hemoglobin (HbA1c) levels. Findings from the pilot study suggest that mHealth technologies can help older adults improve their diabetes management, but a “one size fits all” approach may yield suboptimal outcomes.
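    A hedged sketch of one way such longitudinal trends can be modeled: a mixed-effects growth curve with linear and quadratic time terms moderated by locomotion, fit to synthetic data. The variable names, effect sizes, and model form are illustrative assumptions, not the study's actual analysis.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    rows = []
    for pid in range(27):                    # n = 27 veterans, as in the study
        locomotion = rng.normal(0, 1)        # standardized trait score
        for week in range(13):               # 13-week pilot
            points = (50 + 8 * locomotion + 2 * locomotion * week
                      - 0.15 * locomotion * week**2 + rng.normal(0, 5))
            rows.append((pid, week, locomotion, points))
    df = pd.DataFrame(rows, columns=["pid", "week", "locomotion", "points"])

    # Random intercept per participant; quadratic time trend moderated by trait
    m = smf.mixedlm("points ~ (week + I(week**2)) * locomotion",
                    df, groups=df["pid"]).fit()
    print(m.summary())
    ```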
  • Item
    Online Appendix for “Gradient-Based Myopic Allocation Policy: An Efficient Sampling Procedure in a Low-Confidence Scenario”
    (2017) Peng, Yijie; Chen, Chun-Hung; Fu, Michael; Hu, Jian-Qiang
    This is the online appendix, which includes theoretical and numerical supplements containing technical details and three additional numerical examples that could not fit in the main body due to the journal’s page limit for technical notes. The abstract of the main body is as follows: In this note, we study a simulation optimization problem of selecting the alternative with the best performance from a finite set, a so-called ranking and selection problem, in a special low-confidence scenario. The most popular sampling allocation procedures in ranking and selection do not perform well in this scenario because they ignore certain induced correlations that significantly affect the probability of correct selection. We propose a gradient-based myopic allocation policy (G-MAP) that takes the induced correlations into account, reflecting a trade-off between the induced correlation and the two factors (mean and variance) found in the optimal computing budget allocation formula. Numerical experiments substantiate the efficiency of the new procedure in the low-confidence scenario.
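    For intuition, here is a generic sequential ranking-and-selection loop with a myopic, OCBA-flavored sampling rule; it is not the G-MAP policy itself (whose gradient-based allocation is specified in the paper), and the test problem is invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    true_means = np.array([1.0, 1.2, 1.5, 1.1])   # alternative 2 is best
    samples = [list(rng.normal(m, 1.0, size=5)) for m in true_means]  # n0 = 5

    for _ in range(200):                           # sequential sampling budget
        means = np.array([np.mean(s) for s in samples])
        ses = np.array([np.std(s, ddof=1) / np.sqrt(len(s)) for s in samples])
        best = int(means.argmax())
        min_gap = np.min(np.abs(means[best] - np.delete(means, best)))
        gaps = np.where(np.arange(len(means)) == best,
                        min_gap, np.abs(means[best] - means))
        # Myopically sample where noise is large relative to the gap to the best
        k = int(np.argmax(ses / np.maximum(gaps, 1e-9)))
        samples[k].append(rng.normal(true_means[k], 1.0))

    print("selected:", int(np.argmax([np.mean(s) for s in samples])))
    ```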
  • Item
    Online Appendix for “Ranking and Selection as Stochastic Control”
    (2017-04) Peng, Yijie; Chong, Edwin K. P.; Chen, Chun-Hung; Fu, Michael C.
  • Item
    Online Supplement to ‘Myopic Allocation Policy with Asymptotically Optimal Sampling Rate’
    (2016) Peng, Yijie; Fu, Michael
    In this online appendix, we test the performance of the AOMAP (asymptotically optimal myopic allocation policy) algorithm under the unknown variances scenario and compare it with EI (expected improvement) and OCBA (optimal computing budget allocation).
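    For reference, the classic OCBA allocation ratios used as one benchmark above can be computed in a few lines; the sample statistics below are made up.

    ```python
    import numpy as np

    means = np.array([1.0, 1.2, 1.5, 1.1])   # sample means of the alternatives
    stds  = np.array([1.0, 0.8, 1.2, 0.9])   # sample standard deviations
    budget = 1000                              # total replications to allocate

    b = int(means.argmax())                    # current-best alternative
    deltas = means[b] - means                  # gaps to the best
    nonbest = np.arange(len(means)) != b
    ratios = np.zeros_like(means)
    # N_i proportional to (sigma_i / delta_{b,i})^2 for i != b
    ratios[nonbest] = (stds[nonbest] / deltas[nonbest]) ** 2
    # N_b = sigma_b * sqrt( sum_{i != b} (N_i / sigma_i)^2 )
    ratios[b] = stds[b] * np.sqrt(np.sum(ratios[nonbest] ** 2
                                         / stds[nonbest] ** 2))
    alloc = budget * ratios / ratios.sum()
    print("OCBA allocation:", np.round(alloc))
    ```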
  • Item
    Instances for the Generalized Regenerator Location Problem
    (2015) Chen, Si; Ljubic, Ivana; Raghavan, S.
  • Item
    Instances for the Recoverable Robust Two-Level Network Design Problem
    (2014) Alvarez-Miranda, Eduardo; Ljubic, Ivana; Raghavan, S.; Toth, Paolo
    We provide the instances used in the paper “The Recoverable Robust Two-Level Network Design Problem” by E. Alvarez-Miranda, I. Ljubic, S. Raghavan, and P. Toth, accepted for publication in the INFORMS Journal on Computing, 2014 (http://dx.doi.org/10.1287/ijoc.2014.0606). This repository contains both the instances used in the paper and the results obtained by the proposed algorithm.
  • Item
    Online Supplement to ‘Efficient Simulation Resource Sharing and Allocation for Selecting the Best’
    (2012) Peng, Yijie; Chen, Chun-Hung; Fu, Michael; Hu, Jian-Qiang
    This is the online supplement to the article by the same authors, "Efficient Simulation Resource Sharing and Allocation for Selecting the Best," published in the IEEE Transactions on Automatic Control.
  • Item
    Note: An Application of the EOQ Model with Nonlinear Holding Cost to Inventory Management of Perishables
    (2005-07-19) Souza, Gilvan; Ferguson, Mark; Jayaraman, Vaidy
    We consider a variation of the economic order quantity (EOQ) model in which the cumulative holding cost is a nonlinear function of time. This problem was studied by Weiss (1982); here we show that it approximates the optimal order quantity for perishable goods, such as milk and produce, sold in small to medium-size grocery stores, where delivery surcharges make ordering infrequent and managers often use markdowns to stabilize demand as a product’s expiration date nears. We show how the holding cost curve parameters can be estimated via a regression approach from the product’s usual holding cost (storage plus capital costs), lifetime, and markdown policy. In a numerical study, the model provides a significant cost improvement over the classic EOQ model, with a median improvement of 40%. The improvement is greater for higher daily demand rates, lower holding costs, shorter lifetimes, and markdown policies with steeper discounts.
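    A numerical sketch of the trade-off, assuming (as in Weiss 1982) a cumulative holding cost of the power form H(t) = h * t**gamma per unit; the parameter values are illustrative, not estimates from the paper.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    K, D, h, gamma = 50.0, 20.0, 0.4, 1.6  # order cost, demand/day, holding, curvature

    def avg_cost(Q):
        T = Q / D                           # cycle length
        # Each unit sold at time t was held t units, so per-cycle holding is
        # D * integral_0^T h * t**gamma dt = D * h * T**(gamma+1) / (gamma+1).
        holding = D * h * T ** (gamma + 1) / (gamma + 1)
        return (K + holding) / T            # average cost per unit time

    res = minimize_scalar(avg_cost, bounds=(1.0, 2000.0), method="bounded")
    eoq = np.sqrt(2 * K * D / h)            # classic EOQ benchmark (gamma = 1)
    print(f"nonlinear-optimal Q = {res.x:.1f}, classic EOQ = {eoq:.1f}")
    ```

    As a sanity check, setting gamma = 1 reduces the objective to the classic EOQ cost K*D/Q + h*Q/2.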
  • Item
    A Large Deviations Analysis of Quantile Estimation with Application to Value at Risk
    (2005-07-01) Jin, Xing; Fu, Michael C.
    Quantile estimation has become increasingly important, particularly in the financial industry, where Value-at-Risk has emerged as a standard measurement tool for controlling portfolio risk. In this paper we apply the theory of large deviations to analyze various simulation-based quantile estimators. First, we show that the coverage probability of the standard quantile estimator converges to one exponentially fast with sample size. Then we introduce a new quantile estimator that has a provably faster convergence rate. Furthermore, we show that the coverage probability for this new estimator can be guaranteed to be 100% with sufficiently large, but finite, sample size. Numerical experiments on a VaR example illustrate the potential for dramatic variance reduction.
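    For context, the standard simulation-based quantile estimator analyzed in the paper is an order statistic of the sorted sample; a minimal VaR illustration with a made-up loss distribution:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    alpha = 0.99
    losses = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)  # toy loss model
    k = int(np.ceil(alpha * len(losses)))      # index of the order statistic
    var_hat = np.sort(losses)[k - 1]           # standard quantile estimator
    print(f"{alpha:.0%} VaR estimate: {var_hat:.3f}")
    ```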
  • Item
    Multi-Echelon Models for Repairable Items: A Review
    (2005-07-01) Diaz, Angel; Fu, Michael C.
    We review multi-echelon inventory models for repairable items. Such models have been widely applied to the management of critical spare parts for military equipment for around three decades, but their application to manufacturing and service industries is much less documented. We feel that the appropriate use of such models for managing spare parts for heavily utilized equipment in industry can yield significant cost savings, particularly in settings where repair facilities are resource constrained. In our review, we provide a strategic framework for making these decisions, place the modeling problem in the broader context of inventory control, and review the prominent models in the literature in a unified setting, highlighting some key relationships. We concentrate on the models we consider most suitable for practical application, revisiting in detail the Multi-Echelon Technique for Recoverable Item Control (METRIC) model and its variations, and then discussing a variety of more general queueing models. We conclude with the components that must be addressed for these models to be applied in industrial settings.
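    The METRIC building block in code: by Palm's theorem, with Poisson demand at rate lam and mean repair time tau, the number of units in the repair pipeline is Poisson(lam * tau), and expected backorders at base-stock level s equal E[(X - s)+]. The numbers below are illustrative.

    ```python
    from scipy.stats import poisson

    lam, tau = 2.0, 5.0        # demand rate (units/period), mean repair time
    mu = lam * tau             # mean number of units in the repair pipeline

    def ebo(s):
        # E[(X - s)+] = sum_{j >= s} P(X > j), truncated far in the right tail
        cutoff = int(mu + 20 * mu ** 0.5) + 1
        return sum(poisson.sf(j, mu) for j in range(s, cutoff))

    for s in range(0, 21, 5):
        print(f"base-stock s = {s:2d}  expected backorders = {ebo(s):.4f}")
    ```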
  • Item
    Sensitivity Analysis for Monte Carlo Simulation of Option Pricing
    (1995) Fu, Michael C.; Hu, Jian-Qiang
    Monte Carlo simulation is one alternative for analyzing options markets when the assumptions of simpler analytical models are violated. We introduce techniques for the sensitivity analysis of option pricing which can be efficiently carried out in the simulation. In particular, using these techniques, a single run of the simulation would often provide not only an estimate of the option value but also estimates of the sensitivities of the option value to various parameters of the model. Both European and American options are considered, starting with simple analytically tractable models to present the idea and proceeding to more complicated examples. We then propose an approach for the pricing of options with early exercise features by incorporating the gradient estimates in an iterative stochastic approximation algorithm. The procedure is illustrated in a simple example estimating the option value of an American call. Numerical results indicate that the additional computational effort required over that required to estimate a European option is relatively small.
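    A minimal example of the single-run idea for a European call under Black-Scholes dynamics: the pathwise (IPA) derivative dS_T/dS_0 = S_T/S_0 yields a delta estimate from the same sample paths used for the price. This is a standard textbook estimator, not the paper's full machinery.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    S0, K, r, sigma, T, n = 100.0, 105.0, 0.05, 0.2, 1.0, 200_000

    Z = rng.standard_normal(n)
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
    disc = np.exp(-r * T)
    price = disc * np.maximum(ST - K, 0.0)   # discounted payoff samples
    delta = disc * (ST > K) * ST / S0        # IPA estimator, same sample paths
    print(f"price = {price.mean():.3f} +/- {price.std(ddof=1)/np.sqrt(n):.3f}")
    print(f"delta = {delta.mean():.4f}")
    ```

    Both estimates come from one set of simulated paths, which is exactly the efficiency the abstract highlights.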
  • Item
    Stochastic Gradient Estimation
    (2005-07-01) Fu, Michael C.
    We consider the problem of efficiently estimating gradients from stochastic simulation. Although the primary motivation is their use in simulation optimization, the resulting estimators can also be useful in other ways, e.g., sensitivity analysis. The main approaches described are finite differences (including simultaneous perturbations), perturbation analysis, the likelihood ratio/score function method, and the use of weak derivatives.
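    As a concrete instance of one approach named above, a simultaneous perturbation gradient estimate uses just two noisy function evaluations per step, regardless of dimension; the quadratic test function and gain sequence below are arbitrary choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def f_noisy(theta):                     # noisy simulation output
        return np.sum((theta - 1.0) ** 2) + rng.normal(0, 0.1)

    def sp_gradient(theta, c=0.1):
        delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher directions
        g = (f_noisy(theta + c * delta) - f_noisy(theta - c * delta)) / (2 * c)
        return g / delta                    # componentwise division by +/-1

    theta = np.zeros(10)
    for k in range(1, 2001):                # simple SPSA descent loop
        theta -= (0.5 / k**0.602) * sp_gradient(theta)
    print("theta (first 3 coords):", np.round(theta[:3], 3))  # converges near 1
    ```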
  • Item
    Supply Chain Coordination for False Failure Returns
    (2005-04-11) Souza, Gilvan; Ferguson, Mark; Guide, V. Daniel, Jr.
    False failure returns are products that are returned by consumers to retailers with no functional or cosmetic defect. The cost of a false failure return includes the processing actions of testing, refurbishing if necessary, and repackaging, the loss in value during the time the product spends in the reverse supply chain (a time that can exceed several months for many firms), and the loss in revenue because the product is sold at a discounted price. This cost is significant and is incurred primarily by the manufacturer. Reducing false failure returns, however, requires effort primarily by the retailer, for example, informing consumers about the exact product that best fits their needs. We address the problem of reducing false failure returns via supply chain coordination methods. Specifically, we propose a target rebate contract that pays the retailer a specific dollar amount for each unit of false failure returns below a target. This target rebate gives the retailer an incentive to increase her effort, thus decreasing the number of false failures and (potentially) increasing net sales. We show that this contract is Pareto-improving in the majority of cases. Our results also indicate that the profit improvement to both parties, and to the supply chain, is substantial.
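    A toy rendering of the contract's incentive logic, under an assumed linear effort-to-returns response and a quadratic effort cost; all names and numbers are hypothetical, not the paper's model.

    ```python
    def retailer_profit(effort, u=8.0, target=450, base_returns=500,
                        slope=30.0, cost=20.0):
        # Returns fall linearly in effort; the rebate pays u dollars for each
        # false-failure return below the target.
        returns = max(base_returns - slope * effort, 0.0)
        rebate = u * max(target - returns, 0.0)
        return rebate - cost * effort ** 2  # convex cost of retailer effort

    best = max(range(11), key=retailer_profit)
    print("optimal retailer effort:", best,
          "profit:", retailer_profit(best))
    ```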
  • Item
    Time Value of Commercial Product Returns
    (2005-01-13) Souza, Gilvan; Guide, V. Daniel; Van Wassenhove, Luk; Blackburn, Joseph
    Manufacturers and their distributors must cope with an increased flow of returned products from their customers. The value of commercial product returns, which we define as products returned for any reason within 90 days of sale, now exceeds US $100 billion annually in the US. Although the reverse supply chain of returned products represents a sizeable flow of potentially recoverable assets, only a relatively small fraction of the value is currently extracted by manufacturers; a large proportion of the product value erodes away due to long processing delays. Thus, there are significant opportunities to build competitive advantage from making the appropriate reverse supply chain design choices. In this paper, we present a simple queuing network model that includes the marginal value of time to identify the drivers of reverse supply chain design. We illustrate our approach with specific examples from two companies in different industries and then examine how industry clockspeed generally affects the choice between an efficient and a responsive returns network.
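    A stylized version of the marginal-value-of-time logic, assuming an M/M/1 processing queue and exponential value decay with delay; the functional forms and parameters are invented for illustration.

    ```python
    import numpy as np

    lam, v0, theta, c = 100.0, 100.0, 0.01, 5.0  # returns/day, $/unit, decay/day, $/capacity
    mus = np.linspace(100.5, 115.0, 300)  # candidate processing capacities (units/day)
    W = 1.0 / (mus - lam)                 # M/M/1 mean time in system (days)
    # Recovered value decays with delay; capacity costs scale linearly.
    profit = lam * v0 * np.exp(-theta * W) - c * mus
    i = int(profit.argmax())
    print(f"best capacity ~ {mus[i]:.1f} units/day, mean delay {W[i]:.2f} days")
    ```

    Faster clockspeed corresponds to a larger theta, which pushes the optimum toward a more responsive (higher-capacity) network, mirroring the paper's efficient-versus-responsive trade-off.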
  • Item
    The Optimal Pace of Product Updates
    (2005-01-13) Souza, Gilvan; Druehl, Cheryl; Schmidt, Glen
    Some firms (such as Intel and Medtronic) use a time-pacing strategy for new product development, introducing new generations at regular intervals. If the firm adopts a fast pace (introducing frequently), it prematurely cannibalizes its old generation and incurs high development costs; if it waits too long, it fails to capitalize on customer willingness-to-pay for more advanced technology. We develop a model to gain insight into which factors drive the pace. We consider the degree to which a new generation stimulates market growth, the rate at which it diffuses (its coefficients of innovation and imitation), the rate of decline in its margin over time, and the cost of new product development. The optimization problem is non-concave; however, we are able to solve it numerically for a wide range of parameters because there is a finite number of possible solutions for each case. Somewhat intuitively, we find that a faster pace is associated with a higher market growth rate and faster margin decay. Less intuitively, we find that relatively minor differences in the new product development cost function can significantly impact the optimal pace. Regarding the Bass coefficients of innovation and imitation, we find that a higher sum of these coefficients leads to a faster pace, but with diminishing effects, and that for relatively higher sums the coefficients are effectively substitutes.
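    For reference, the Bass diffusion quantities mentioned above: with coefficient of innovation p and imitation q, the cumulative adoption fraction is F(t) = (1 - e^(-(p+q)t)) / (1 + (q/p) e^(-(p+q)t)). The parameter values below are common textbook averages, not the paper's.

    ```python
    import numpy as np

    def bass_F(t, p=0.03, q=0.38):
        # Cumulative fraction of the market that has adopted by time t
        e = np.exp(-(p + q) * t)
        return (1.0 - e) / (1.0 + (q / p) * e)

    t = np.arange(0, 11)
    print(np.round(bass_F(t), 3))   # adoption fraction by year, years 0..10
    ```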
  • Item
    Finding the Value of Information About a State Variable in a Markov Decision Process
    (2005-01-13) Souza, Gilvan
    In this paper we present a mixed-integer programming formulation that computes the optimal solution for a certain class of Markov decision processes with finite state and action spaces, where a state comprises multiple state variables and one of the state variables is unobservable to the decision maker. Our approach is a much simpler modeling alternative to the theory of partially observable Markov decision processes (POMDP), where an information and updating structure for the unobservable state variable needs to be defined. We illustrate the approach with an example of a duopoly in which one firm’s actions are not immediately observable by the other firm, and present computational results. We believe this approach can be used in a variety of applications where the decision maker wants to assess the value of information about an additional state variable.
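    As background, here is the standard linear program for a fully observed finite discounted MDP, the building block that a mixed-integer formulation of this kind extends to handle an unobservable state variable; the two-state, two-action example is invented.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    gamma = 0.9
    # P[a, s, s'] transition probabilities and R[a, s] rewards, 2 states x 2 actions
    P = np.array([[[0.8, 0.2], [0.3, 0.7]],
                  [[0.5, 0.5], [0.1, 0.9]]])
    R = np.array([[1.0, 0.0],
                  [2.0, -0.5]])

    # minimize sum_s v(s)  s.t.  v(s) >= R[a, s] + gamma * sum_s' P[a, s, s'] v(s')
    nS = 2
    A_ub, b_ub = [], []
    for a in range(2):
        for s in range(nS):
            A_ub.append(gamma * P[a, s] - np.eye(nS)[s])  # gamma*Pv - v <= -R
            b_ub.append(-R[a, s])
    res = linprog(c=np.ones(nS), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(None, None)] * nS)
    print("optimal state values:", np.round(res.x, 3))
    ```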