Theses and Dissertations from UMD

Permanent URI for this community: http://hdl.handle.net/1903/2

New submissions to the thesis/dissertation collections are added automatically as they are received from the Graduate School. Currently, the Graduate School deposits all theses and dissertations from a given semester after the official graduation date. This means that there may be up to a four-month delay in the appearance of a given thesis/dissertation in DRUM.

More information is available at Theses and Dissertations at University of Maryland Libraries.

Search Results

Now showing 1 - 10 of 102
  • Equilibrium Programming for Improved Management of Water-Resource Systems
    (2024) Boyd, Nathan Tyler; Gabriel, Steven A; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Effective water-resources management requires the joint consideration of multiple decision-makers as well as the physical flow of water in both built and natural environments. Traditionally, game-theory models were developed to explain the interactions of water decision-makers such as states, cities, industries, and regulators. These models account for socio-economic factors such as water supply and demand. However, they often lack insight into how water or pollution should be physically managed with respect to overland flow, streams, reservoirs, and infrastructure. Conversely, optimization-based models have accounted for these physical features but usually assume a single decision-maker who acts as a central planner. Equilibrium programming, which was developed in the field of operations research, provides a solution to this modeling dilemma. First, it can incorporate the optimization problems of multiple decision-makers into a single model. Second, it can also model the socio-economic interactions of these decision-makers, such as a market for balancing water supply and demand. Equilibrium programming has been widely applied to energy problems, but a few recent works have begun to explore applications in water-resource systems. These works model water-allocation markets subject to the flow of water supply from upstream to downstream as well as the nexus of water-quality management with energy markets. This dissertation applies equilibrium programming to a broader set of physical characteristics and socio-economic interactions than these recent works. Chapter 2 likewise focuses on the flow of water from upstream to downstream but incorporates markets for water recycling and reuse. Chapter 3 likewise focuses on water-quality management but uses a credit market to implement water-pollution regulations in a globally optimal manner. Chapter 4 explores alternative conceptions of socio-economic interactions beyond market-based approaches; specifically, social learning is modeled as a means to lower the cost of water-treatment technologies. This dissertation's research contributions are significant to both the operations research community and the water-resources community. For the operations research community, the models in this dissertation could serve as archetypes for future research into equilibrium programming and water-resource systems; for instance, Chapter 1 organizes the research in this dissertation in terms of three themes: stream, land, and sea. For the water-resources community, this dissertation could make equilibrium programming more relevant in practice. Chapter 2 applies equilibrium programming to the Duck River Watershed (Tennessee, USA), and Chapter 3 applies it to the Anacostia River Watershed (Washington, DC and Maryland, USA). The results also reinforce the importance of the relationships between socio-economic interactions and physical features in water-resource systems. However, the risk aversion of the players plays an important mediating role in the significance of these relationships. Future research could investigate mechanisms for the emergence of altruistic decision-making to improve equity among the players in water-resource systems.
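    To make the equilibrium-programming structure concrete, here is a minimal, hypothetical water-allocation market sketch in my own notation (not the dissertation's formulation): each user i chooses a withdrawal q_i to maximize a concave benefit B_i(q_i) minus the payment p q_i, and a market-clearing condition sets the price p so that total withdrawals do not exceed the available supply S. Stacking each user's optimality (KKT) condition with the clearing condition gives a single complementarity system:
        0 \le q_i \;\perp\; p - B_i'(q_i) \ge 0, \quad i = 1,\dots,n, \qquad 0 \le p \;\perp\; S - \sum_{i=1}^{n} q_i \ge 0.
    Solving such stacked conditions simultaneously, rather than a single planner's optimization, is what lets multiple self-interested decision-makers and a market coexist in one model; the chapters described above add physical flow constraints, recycling and reuse markets, pollution-credit trading, and player risk aversion on top of this basic pattern.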
  • Quantum Algorithms for Nonconvex Optimization: Theory and Implementation
    (2024) Leng, Jiaqi; Wu, Xiaodi; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Continuous optimization problems arise in virtually all disciplines of quantitative research. While convex optimization has been well-studied in recent decades, large-scale nonconvex optimization problems remain intractable in both theory and practice. Quantum computers are expected to outperform classical computers in certain challenging computational problems. Some quantum algorithms for convex optimization have shown asymptotic speedups, while the quantum advantage for nonconvex optimization is yet to be fully understood. This thesis focuses on Quantum Hamiltonian Descent (QHD), a quantum algorithm for continuous optimization problems. A systematic study of Quantum Hamiltonian Descent is presented, including theoretical results concerning nonconvex optimization and efficient implementation techniques for quantum computers. Quantum Hamiltonian Descent is derived as the path integral of classical gradient descent algorithms. Due to the quantum interference of classical descent trajectories, Quantum Hamiltonian Descent exhibits drastically different behavior from classical gradient descent, especially for nonconvex problems. Under mild assumptions, we prove that Quantum Hamiltonian Descent can always find the global minimum of an unconstrained optimization problem given a sufficiently long runtime. Moreover, we demonstrate that Quantum Hamiltonian Descent can efficiently solve a family of nonconvex optimization problems with exponentially many local minima, for which most commonly used classical optimizers require super-polynomial time. Additionally, by using Quantum Hamiltonian Descent as an algorithmic primitive, we show a quadratic oracular separation between quantum and classical computing. We consider the implementation of Quantum Hamiltonian Descent for two important paradigms of quantum computing, namely digital (fault-tolerant) and analog quantum computers. Exploiting the product formula for quantum Hamiltonian simulation, we demonstrate that a digital quantum computer can implement Quantum Hamiltonian Descent with gate complexity nearly linear in problem dimension and evolution time. With a hardware-efficient sparse Hamiltonian simulation technique known as Hamiltonian embedding, we develop an analog implementation recipe for Quantum Hamiltonian Descent that addresses a broad class of nonlinear optimization problems, including nonconvex quadratic programming. This analog implementation approach is deployed on large-scale quantum spin-glass simulators, and the empirical results strongly suggest that Quantum Hamiltonian Descent has great potential for highly nonconvex and nonlinear optimization tasks.
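    For readers unfamiliar with QHD, its continuous-time evolution is commonly written (in the notation of the published QHD papers; the dissertation's conventions may differ slightly) as a Schrödinger equation whose Hamiltonian interpolates between a kinetic term and the objective f:
        i\,\frac{\mathrm{d}}{\mathrm{d}t}\,\psi(t) \;=\; \Big[\, e^{\varphi_t}\big(-\tfrac{1}{2}\nabla^2\big) \,+\, e^{\chi_t}\, f(x) \,\Big]\,\psi(t),
    where the schedules e^{\varphi_t} and e^{\chi_t} play the role that step sizes play in classical gradient descent: as the kinetic term is damped and the potential term is amplified, the wave packet concentrates near minimizers of f, which is the mechanism behind the global-convergence and implementation results summarized above.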
  • REAL-TIME DISPATCHING AND REDEPLOYMENT OF HETEROGENEOUS EMERGENCY VEHICLES FLEET WITH A BALANCED WORKLOAD
    (2023) Fang, Chenyang; Haghani, Ali; Civil Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    The emergency management service (EMS) system is a complicated system that tries to coordinate each of its components to provide a quick response to emergencies. Different types of vehicles cooperate to complete tasks under unified command. The EMS system tries to respond quickly to emergency calls and communicates with each department to balance resources and provide maximal coverage for the whole system. This work aims to develop a highly efficient model for the EMS system that assists the coordinator in making dispatching and relocation decisions simultaneously. The model also makes routing decisions to provide vehicle drivers with route guidance. Heterogeneous emergency vehicle fleets consisting of police vehicles, Basic Life Support (BLS) vehicles, Advanced Life Support (ALS) vehicles, Fire Engines, Fire Trucks, and Fire Quints are considered. Moreover, a coverage strategy is proposed, and different coverage types are considered according to vehicle function. The model seeks to provide maximal coverage by advanced vehicles while ensuring full coverage by basic vehicles. The workload balance of the vehicle crews is considered in the model to ensure fairness. A mathematical model is proposed, and a numerical study is conducted to test the model's performance. The results show that the proposed model performs well in large-scale problems with significant demand. A comprehensive analysis is conducted on real-case historical medical data, and a discrete-event simulation system is built; its framework mimics the evolution of the entire operation of an emergency response system over time. Finally, the proposed model and the discrete-event simulation system are applied to the real-case historical medical data. Three categories of performance measurements are collected, analyzed, and compared with the real-case data. A comprehensive sensitivity analysis is conducted to test the model's ability to handle different situations. The final results illustrate that the proposed model can improve overall performance across various evaluation metrics.
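    As a stylized illustration of the coverage hierarchy described above (hypothetical notation of mine; the dissertation's full model jointly handles dispatching, relocation, routing, and workload balance), let x_i and y_i indicate whether a basic or an advanced unit is stationed at location i, and let N_j^B and N_j^A be the locations that can reach demand zone j within the basic and advanced response standards. One common way to express "maximal advanced coverage under full basic coverage" is
        \max \ \sum_{j} w_j z_j \quad \text{s.t.} \quad \sum_{i \in N_j^{B}} x_i \ge 1, \quad z_j \le \sum_{i \in N_j^{A}} y_i, \quad z_j \in \{0,1\} \quad \forall j,
    where z_j = 1 only if zone j is also covered by an advanced unit and w_j weights zones by demand; workload balance can then be added, for example, by bounding each crew's assigned workload within a tolerance of the fleet average.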
  • THREE ESSAYS ON OPTIMIZATION, MACHINE LEARNING, AND GAME THEORY IN ENERGY
    (2023) Chanpiwat, Pattanun; Gabriel, Steven A.; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    This dissertation comprises three main essays that share a common theme: developing methods, from an application perspective, to promote sustainable and renewable energy from both the supply and demand sides. The first essay (Chapter 2) addresses demand response (DR) scheduling using dynamic programming (DP) and customer classification. The goal is to analyze and cluster residential households into homogeneous groups based on their electricity load. This allows retail electric providers (REPs) to reduce energy use and financial risks during peak demand periods. Compared to a business-as-usual heuristic, the proposed approach achieves an average 2.3% improvement in profitability and runs approximately 70 times faster because it avoids running the DR dynamic program separately for each household. The second essay, in Chapter 3, analyzes the integration of renewable energy sources and battery storage in energy systems. It develops a stochastic mixed complementarity problem (MCP) for analyzing oligopolistic generation with battery storage, taking into account both conventional and variable renewable energy supplies. This contribution is novel because it considers multi-stage stochastic MCPs with recourse decisions. The sensitivity analysis shows that increasing battery capacity can reduce price volatility and the variance of power generation; however, it has a small impact on carbon emissions reduction. Using a stochastic MCP approach can increase power producers' profits by almost 20 percent, as measured by the value of the stochastic equilibrium solution. Higher battery storage capacity reduces system uncertainty in average delivered prices in all cases. Nevertheless, beyond a certain point, investing in larger battery storage yields diminishing returns to producers' profits because of market limitations such as supply and demand or the pricing structure. The third essay (Chapter 4) proposes a new practical application of the stochastic dual dynamic programming (SDDP) algorithm that considers uncertainties in the electricity market, such as electricity prices, residential photovoltaic (PV) generation, and loads. The SDDP model optimizes the scheduling of battery storage usage for sequential decision-making over a planning horizon by considering predicted uncertainty scenarios and their associated probabilities. After examining the benefits of shared battery storage in housing companies, the results show that the SDDP model improves the average objective function values (i.e., costs) by approximately 32% compared to a model without shared battery storage. The results also indicate that the mean objective function values at the end of the first stage of the proposed SDDP model with battery storage and the equivalent deterministic LP model (with perfect foresight) with battery storage differ by less than 30%. The models and insights developed in this dissertation are valuable for facilitating energy policy-making in a rapidly evolving industry. Furthermore, these contributions can advance computational techniques, encourage the use and development of renewable energy sources, and increase public education on energy efficiency and environmental awareness.
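    For readers unfamiliar with the MCP format used in the second essay, a stylized deterministic oligopoly example (notation my own, not the essay's model) looks as follows: each producer i chooses output q_i to maximize p(Q) q_i - c_i(q_i) with total output Q, and stacking the producers' first-order conditions yields the complementarity system
        0 \le q_i \;\perp\; c_i'(q_i) - p(Q) - q_i\,p'(Q) \ge 0, \qquad Q = \sum_{j} q_j, \qquad i = 1,\dots,n.
    The essay extends this pattern to multiple stages and scenarios with recourse decisions (e.g., battery operation), which is what the "stochastic MCP" and the reported value of the stochastic equilibrium solution refer to.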
  • A NOVEL APPLICATION OF SELECT AGILE CONCEPTS AND STOCHASTIC ANALYSIS FOR THE OPTIMIZATION OF TRAINING PROGRAMS WITHIN HIGH RELIABILITY ORGANIZATIONS IN HIGH TURN-OVER ENVIRONMENTS AT EDUCATIONAL INSTITUTES AND IN INDUSTRY
    (2023) Blanton, Richard L; Cui, Qingbin; Civil Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    High-turnover environments have been extensively studied, with the bulk of the literature focusing on the negative effects on business operations.[1] They challenge the resilience of the organization while also limiting potential profitability, since time must consistently be spent training new staff. Furthermore, in manufacturing environments, inexperienced staff are prone to mistakes and uncertainty, which can lead to increased scrap material and lower production rates due to a lack of mastery of the process. From an organizational standpoint, a high-turnover environment presents an unmitigated risk from the continuous loss of institutional knowledge. This loss can harm the organization in numerous ways, such as capital equipment that no longer has staff qualified or experienced enough to use it, leading to costly retraining by the manufacturer or an increased risk of a catastrophic failure resulting in damage to the equipment or injury to the staff. Furthermore, the loss of institutional history means losing the reasons why operations are performed a certain way. As the common saying goes, “those who forget history are bound to repeat it,” which can lead to substantial costs for the organization as old solutions that were previously rejected for lack of merit are constantly rehashed because of a lack of understanding of how the organization arrived at its current policies. This thesis presents a novel framework to mitigate the potential loss of institutional knowledge via a multifaceted approach. To achieve this, a specific topic was identified and used to frame the questions that guided the research: mitigation of the negative impacts of high turnover in manufacturing environments, with a specific focus on the optimization of training programs. This topic led to the formulation of the following research questions. What steps can be taken to reduce the chance of lost institutional knowledge in a high-turnover environment? What steps can be taken to reduce the time needed to train a high-performing replacement employee while maintaining strict performance and safety standards? What steps should be taken to improve the accuracy of budgetary projections? What steps need to be taken to enable accurate analysis of potential future investment opportunities in a training program? The answers to these research questions are compiled and presented with the aim of providing professionals who are responsible for training programs in high-turnover environments requiring high organizational reliability with a framework and analysis toolset that enables data-driven decision making about the program. Additionally, this thesis provides a framework for addressing the continuous risk of loss of institutional knowledge. When contrasted with a standard training model, where a trainee is presented with new material and then tested for retention before moving to the next topic, the proposed model implements a schema that can be rapidly iterated upon and improved until the desired performance outcome is achieved, while increasing the potential accuracy of budgetary estimation by as much as 57%. Throughout the process, decision makers gain insight into the long-term effects of their potential actions by running simulations that reveal not only the expected steady-state cost of a program but also the rough volume of trainees required to achieve that steady state.
  • Simulation Optimization: Efficient Selection and Dynamic Programming
    (2023) Zhou, Yi; Fu, Michael C; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Many real-world problems can be modeled as stochastic systems whose performance can be estimated by simulation. Important topics include statistical ranking and selection (R&S) problems, response surface problems, and stochastic estimation problems. The first part of the dissertation focuses on R&S problems, where there is a finite set of feasible solutions ("systems" or "alternatives") with unknown performance values that must be estimated from noisy observations, e.g., from expensive stochastic simulation experiments. The goal in R&S problems is to identify the highest-valued alternative as quickly as possible. In many practical settings, alternatives with similar attributes could have similar performance; we thus focus on settings where prior similarity information between the performance outputs of different alternatives can be learned from data. Incorporating similarity information enables efficient budget allocation for faster identification of the best alternative in sequential selection. Using a new selection criterion, the similarity selection index, we develop two new allocation methods: one based on a mathematical programming characterization of the asymptotically optimal budget allocation, and the other based on a myopic expected improvement measure. For the former, we present a novel sequential implementation that provably learns the optimal allocation without tuning. For the latter, we also derive its asymptotic sampling ratios. We also propose a practical way to update the prior similarity information as new samples are collected. The second part of the dissertation considers the stochastic estimation problem of estimating a conditional expectation. We first formulate the conditional expectation as a ratio of two stochastic derivatives. Applying stochastic gradient estimation techniques, we express the two derivatives using ordinary expectations, which can then be estimated by Monte Carlo methods. Based on an empirical distribution estimated from simulation, we provide guidance on selecting the appropriate formulation of the derivatives to reduce the variance of the ratio estimator. The third part of this dissertation further explores the application of estimators for conditional expectations in policy evaluation, an important topic in stochastic dynamic programming. In particular, we focus on a finite-horizon setting with continuous state variables. The Bellman equation represents the value function as a conditional expectation. By using the finite difference method and the generalized likelihood ratio method, we propose new estimators for policy evaluation and show how the value of any given state can be estimated using sample paths starting from various other states.
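    The "ratio of two stochastic derivatives" idea in the second part can be made concrete with a standard identity (shown here in generic notation; the dissertation's estimators are more general): when X has a positive density at x,
        \mathbb{E}[\,Y \mid X = x\,] \;=\; \frac{\partial_x\, \mathbb{E}\big[\,Y\,\mathbf{1}\{X \le x\}\,\big]}{\partial_x\, \mathbb{E}\big[\,\mathbf{1}\{X \le x\}\,\big]},
    so both the numerator and the denominator are derivatives of ordinary expectations, and stochastic gradient estimation techniques (such as the generalized likelihood ratio method mentioned in the third part) turn each into an expectation that plain Monte Carlo can estimate.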
  • ECONOMETRIC ANALYSIS OF BIKE SHARING SYSTEMS
    (2022) Cao, Huan; Tunca, Tunay; Business and Management: Decision & Information Technologies; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    I study the efficiency of dockless bike-sharing systems and how operational decisions can be used to improve system efficiency and profit. In the first chapter, I empirically analyze riders' economic incentives in a dockless bike-sharing system and explore how to improve the efficiency of this business model. Specifically, I aim to answer three main questions: (i) What is the impact of the number of bicycles in the system on efficiency? (ii) How can bike relocation be best used to improve utilization? (iii) How does the efficiency of dockless and dock-based systems compare? To address these questions, I first build a microeconomic model of user decision-making in a dockless bike-sharing system. I then use this model together with transaction-level data from a major dockless bike-sharing firm to structurally estimate the customer utility and demand parameters. Using this estimation in counterfactual analysis, we find that the company can decrease the bicycle fleet size by 40% while maintaining 90% of transactions, leading to estimated savings of $6.5 million. We further find that a spatial bicycle rebalancing system based on our customer utility model can improve daily transactions by approximately 19%. Finally, we demonstrate that without bicycle redistribution, a smartly designed dock-based system can significantly outperform a dockless system. Our framework provides a utility-based model that allows companies to estimate not only transactions but also the time and location of lost potential demand, which can be used to make targeted improvements to the geographic bike distribution. It also allows managers to fine-tune bicycle fleet sizes and spatial rebalancing parameters. Further, our structural demand modeling can be used to improve the efficiency of dock-based systems by helping with targeted dock-location decisions. In the second chapter, using data from a major dockless bike-sharing system in Beijing, I study subscription behavior and its relationship with the service level and price. Specifically, I develop an econometric model of subscription behavior for both existing subscribers and new sign-ups, establish a functional relationship between the service level and the system demand level, and reveal the dynamics and interplay among subscription, system demand, and service level, which helps recover the evolution of the number of subscribers over time. Based on these estimation results and functional relationships, I then construct an empirical framework and characterize the relationship between company profit and bicycle fleet size and subscription price. The counterfactual results show that the marginal benefit of deploying a larger bicycle fleet is decreasing, and the company should be cautious in determining and adjusting the bicycle fleet size. In addition, the analysis demonstrates that the current price is too low and that raising it appropriately can achieve about a 25% profit increase. The results also show the value of understanding riders' price sensitivity and how it can be used to improve operational decisions and achieve better financial results. The counterfactual framework I propose can be utilized in various policy evaluations and provides important insights and recommendations to bike-share companies and regulators.
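    As an illustration of what "structurally estimating customer utility" typically involves (a textbook discrete-choice specification, not necessarily the chapter's exact model): rider n facing choice set J_{nt} at time t, which includes nearby available bicycles j and an outside no-ride option, derives utility
        U_{njt} = \beta^{\top} x_{njt} + \varepsilon_{njt}, \qquad \Pr(n \text{ chooses } j) = \frac{\exp(\beta^{\top} x_{njt})}{\sum_{k \in J_{nt}} \exp(\beta^{\top} x_{nkt})},
    where x_{njt} collects observables such as walking distance to bicycle j and price, and the logit probabilities follow from i.i.d. Type-I extreme-value errors. Riders who end up with the outside option correspond to the "lost potential demand" whose time and location such an estimated model can recover.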
  • SENSITIVITY ANALYSIS AND STOCHASTIC OPTIMIZATIONS IN STOCHASTIC ACTIVITY NETWORKS
    (2022) Wan, Peng; Fu, Michael C; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Activity networks are a powerful tool for modeling and analysis in project management and in many other applications, such as circuit design and parallel computing. An activity network can be represented by a directed acyclic graph with one source node and one sink node. The directed arcs between nodes in an activity network represent the precedence relationships between different activities in the project. In a stochastic activity network (SAN), the arc lengths are random variables. This dissertation studies stochastic gradient estimators for SANs using Monte Carlo simulation, and the application of stochastic gradient estimators to network optimization problems. A new algorithm called Threshold Arc Criticality (TAC) for estimating the arc criticalities of stochastic activity networks is proposed. TAC is based on the following result: given the lengths of all arcs in a SAN except for the one arc of interest, that arc is on the critical path (longest path) if and only if its length is greater than a threshold. By applying Infinitesimal Perturbation Analysis (IPA) to TAC, an unbiased estimator of the derivative of the arc criticalities with respect to parameters of the arc length distributions can be derived. The stochastic derivative estimator can be used for sensitivity analysis of arc criticalities via simulation. Using TAC, a new IPA gradient estimator of the first and second moments of the project completion time (PCT) is proposed. Combining the new PCT stochastic gradient estimator with a Taylor series approximation, a functional estimation procedure for estimating the change in PCT moments caused by a large perturbation in an activity duration's distribution parameter is proposed and applied to optimization problems involving time-cost tradeoffs. In activity networks, crashing an activity means reducing the activity's duration (deterministic or stochastic) by a given percentage at an associated cost. A crashing plan for a project aims to shorten the PCT by reducing the durations of a set of activities under a limited budget. A disruption is an event that occurs at an uncertain time; examples of disruptions are natural disasters, electrical outages, and labor strikes. For an activity network, a disruption may cause delays in unfinished activities. Previous work formulates finding the optimal crashing plan of an activity network under a single disruption as a two-stage stochastic mixed-integer programming problem and applies a sample average approximation technique to find the optimal solution. In this dissertation, a new stochastic gradient estimator is derived, and a gradient-based simulation optimization algorithm is applied to the problem of optimizing crashing under disruption.
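    The threshold behind TAC can be illustrated with a small script (hypothetical helper names and toy data; the dissertation's estimator builds IPA derivatives on top of this idea). With every arc length fixed except that of the arc of interest a = (u, v), the arc lies on the longest source-to-sink path exactly when its own length exceeds Q - P, where Q is the longest-path length avoiding a and P is the longest source-to-u length plus the longest v-to-sink length:

    from collections import defaultdict

    def longest_path(arcs, src, dst):
        """Longest src->dst path length in a DAG given as {(i, j): length}; -inf if unreachable."""
        nodes = {i for i, _ in arcs} | {j for _, j in arcs}
        out, indeg = defaultdict(list), defaultdict(int)
        for (i, j), w in arcs.items():
            out[i].append((j, w))
            indeg[j] += 1
        order, stack = [], [n for n in nodes if indeg[n] == 0]
        while stack:                       # Kahn's algorithm for a topological order
            n = stack.pop()
            order.append(n)
            for j, _ in out[n]:
                indeg[j] -= 1
                if indeg[j] == 0:
                    stack.append(j)
        dist = {n: float("-inf") for n in nodes}
        dist[src] = 0.0
        for n in order:                    # relax outgoing arcs in topological order
            if dist[n] > float("-inf"):
                for j, w in out[n]:
                    dist[j] = max(dist[j], dist[n] + w)
        return dist.get(dst, float("-inf"))

    def criticality_threshold(arcs, arc, src, dst):
        """Arc (u, v) is on the critical path iff its own length exceeds this value."""
        u, v = arc
        others = {a: w for a, w in arcs.items() if a != arc}
        Q = longest_path(others, src, dst)                     # best path avoiding the arc
        P = longest_path(others, src, u) + longest_path(others, v, dst)
        return Q - P

    # Toy SAN: one realization of the other arc lengths; arc of interest is (s, b).
    arcs = {("s", "a"): 2.0, ("a", "t"): 3.0, ("s", "b"): 1.0, ("b", "t"): 1.0}
    print(criticality_threshold(arcs, ("s", "b"), "s", "t"))   # 5.0 - 1.0 = 4.0

    The arc criticality is then the probability of this threshold-exceedance event, and the IPA-based estimator described above targets the derivative of that probability with respect to the arc-length distribution parameters.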
  • Working in Reverse: Advancing Inverse Optimization in the Fields of Equilibrium and Infrastructure Modeling
    (2022) Allen, Stephanie Ann; Gabriel, Steven A; Dickerson, John P; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Transportation and infrastructure modeling allows us to pursue societal aims such as improved disaster management, traffic flow, and water allocation. Equilibrium programming enables us to represent the entities involved in these applications so that we can learn more about their dynamics. These entities include transportation users and market players. However, determining the parameters in these models can be a difficult task because the entities involved in these equilibrium processes may not be able to articulate or disclose the parameterizations that motivate them. The field of inverse optimization (IO) offers a potential solution to this problem by taking observed equilibria of these systems and using them to parameterize equilibrium models. In this dissertation, we explore the use of inverse optimization to parameterize multiple new or understudied subclasses of equilibrium problems and expand inverse optimization's application to new infrastructure domains. In the first project of our dissertation, our contribution to the literature is to propose that IO can be used to parameterize cost functions in multi-stage stochastic programs for disaster management and can be used in disaster support systems. We demonstrate in most of our experiments that using IO to obtain the hidden cost parameters for travel on a road network changes the protection decisions made on that road network when compared to utilizing the mean of the parameter range for the hidden parameters (also referred to as "uniform cost"). The protection decisions made under the IO cost parameterizations versus the true cost parameterizations are similar for most of the experiments, thus lending credibility to the IO parameterizations. In the second project of our dissertation, we extend a well-known framework in the IO community to the case of jointly convex generalized Nash equilibrium problems (GNEPs). We demonstrate the utility of this framework in a multi-player transportation game in which we vary the number of players, the capacity level, and the network topology, and we run experiments both with identical costs among players and with different costs among players. Our promising results provide evidence that our work could be used to regulate traffic flow toward aims such as reducing emissions. In the final project of our dissertation, we explore the general parameterization of the constant vector in linear complementarity problems (LCPs), which are mathematical expressions that can represent optimization, game-theory, and market models (Gabriel et al., 2012). Unlike the limited previous work on inverse optimization for LCPs, we characterize theoretical considerations regarding the inverse optimization problem for LCPs, prove that a previously proposed IO solution model can be dramatically simplified, and handle the case of multiple solution data points for the IO LCP problem. Additionally, we use our knowledge of LCPs and IO in a water market allocation case study, an application not previously explored in the IO literature, and we find that charging an additional tax on the upstream players enables the market to reach a system optimum. In sum, this dissertation contributes to the inverse optimization literature by expanding its reach in the equilibrium problem domain and by reaching new infrastructure applications.
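    For reference, the LCP studied in the final project has the standard form (notation follows common usage, e.g., Gabriel et al., 2012): given a matrix M and constant vector q, find z such that
        z \ge 0, \qquad q + Mz \ge 0, \qquad z^{\top}(q + Mz) = 0.
    In the inverse problem, an observed solution z^{(k)} is data and q is the unknown; note that with z^{(k)} fixed, all of the conditions above are linear in q, so recovering a q consistent with one or more observed solutions reduces to optimization over linear constraints (my observation for illustration; the dissertation develops the general theory, including multiple solution data points).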
  • An Operations Management Framework to Improve Geographic Equity in Liver Transplantation
    (2022) Akshat, Shubham; Raghavan, S.; Business and Management: Decision & Information Technologies; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    In the United States (U.S.), on average three people die every day awaiting a liver transplant, for a total of 1,133 lives lost in 2021. While 13,439 patients were added to the waiting list in 2021, only 9,236 patients received liver transplants. To make matters worse, there is significant geographic disparity across the U.S. in transplant candidates' access to deceased-donor organs. The U.S. Department of Health and Human Services (HHS) is keen to improve transplant policy to mitigate these disparities. The deceased-donor liver allocation policy has undergone three major revisions in the last nine years, yet the issue persists. This dissertation applies operations management models to (i) understand transplant candidate behavior and (ii) suggest improvements to transplant policy that mitigate geographic disparity. In the first essay, we focus on reducing disparities in the organ supply to candidate demand (s/d) ratios across transplant centers. We develop a nonlinear integer programming model that allocates organ supply to maximize the minimum s/d ratio across all transplant centers. We focus on circular donation regions that address legal issues raised with earlier organ distribution frameworks. This enables reformulating our model as a set-partitioning problem, and our proposal can be viewed as a heterogeneous donor circle policy. Compared to the current Acuity Circles policy, which uses fixed-radius circles around donation locations, the heterogeneous donor circle policy greatly improves both the worst s/d ratio and the range of s/d ratios. With the fixed-radius policy of 500 nautical miles (NM), the s/d ratio ranges from 0.37 to 0.84 across transplant centers, while with the heterogeneous circle policy capped at a maximum radius of 500 NM it ranges from 0.55 to 0.60, closely matching the national s/d ratio of 0.5983. Broader sharing of organs is believed to mitigate geographic disparity, and recent policies are moving toward broader sharing in principle. In the second essay, we develop a dynamic patient-choice model to analyze a patient's strategic response to a policy change. First, we study the impact of the Share 35 policy, a variant of broader sharing introduced in 2013, on the behavioral change of patients at the transplant centers (i.e., the change in their organ acceptance probability), geographic equity, and efficiency (transplant quality, offer refusals, survival benefit from a transplant, and organ travel distance). We find that sicker patients became more selective in accepting organs (their acceptance probability decreased) under the Share 35 policy. Second, we study the current Acuity Circles policy and conclude that it results in lower efficiency (more offer refusals and a lower transplant benefit) than the previous Share 35 policy. Finally, we show that broader sharing in its current form may not be the best strategy for balancing geographic equity and efficiency. The intuition is that when the pool of supply locations from which patients can receive offers is indiscriminately enlarged, patients tend to become more selective, resulting in more offer refusals and lower efficiency. We illustrate that the heterogeneous donor circle policy, which equalizes s/d ratios across geographies, is better than Acuity Circles at achieving geographic equity while incurring the lowest trade-off in efficiency metrics. The previous two essays demonstrate the benefit of equalizing the s/d ratios across geographies.
    In December 2018, the Organ Procurement and Transplantation Network (OPTN) Board of Directors approved the continuous distribution framework as the desired policy goal for all organ allocation systems. In this framework, waiting-list candidates will be prioritized based on several factors, each contributing some points toward a candidate's total score. The factors under consideration are medical severity, expected post-transplant outcome, the efficient management of organ placement, and equity; however, the respective weights for these potential factors have not yet been decided. In the third essay, we consider two factors, medical severity and the efficient management of organ placement (captured using the distance between the donor hospital and the transplant center), and we design an allocation policy that maximizes geographic equity. We develop a mathematical model to calculate the s/d ratio of deceased-donor organs at a transplant center within a continuous scoring framework of organ allocation policy. We then formulate a set-partitioning optimization problem and test our proposals using simulation. Our experiments suggest that reducing inherent differences in s/d ratios at the transplant centers results in lives saved and reduced geographic disparity.
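    A stylized version of the first essay's optimization (simplified notation of mine; the dissertation's model specifies exactly how supply inside a circle is apportioned) chooses one sharing radius r for each donor hospital h, via binary variables x_{h,r}, to maximize the worst supply-to-demand ratio across transplant centers c:
        \max_{x,\,z} \; z \quad \text{s.t.} \quad z \le \frac{s_c(x)}{d_c} \;\; \forall c, \qquad \sum_{r \in R} x_{h,r} = 1 \;\; \forall h, \qquad x_{h,r} \in \{0,1\},
    where s_c(x) is the expected deceased-donor supply reaching center c under the chosen circles (capped at 500 NM) and d_c is its candidate demand. Selecting exactly one radius option per hospital is one way to see the set-partitioning structure referred to above, and a similar max-min template underlies the third essay's set-partitioning formulation within the continuous distribution framework.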