UMD Theses and Dissertations
Permanent URI for this collection: http://hdl.handle.net/1903/3
New submissions to the thesis/dissertation collections are added automatically as they are received from the Graduate School. Currently, the Graduate School deposits all theses and dissertations from a given semester after the official graduation date. This means that there may be up to a 4-month delay in the appearance of a given thesis/dissertation in DRUM.
More information is available at Theses and Dissertations at University of Maryland Libraries.
7 results
Search Results
Item PREDICTION AND CLOSED-LOOP CONTROL OF BLOOD PRESSURE FOR HEMORRHAGE RESUSCITATION (2023) Hohenhaus, Drew Xavier; Hahn, Jin-Oh; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

Hemorrhage is responsible for a large percentage of mortality worldwide and the majority of fatalities on the battlefield. Resuscitation procedures for hemorrhage trauma patients are critical for their recovery. Currently, during resuscitation, physicians manually monitor blood pressure and use intuition to determine when fluid should be administered and how much. Due to factors such as exhaustion, distraction, and inexperience of the physician, this method has often been reported as fallible. This thesis proposes two methods to assist in automating hemorrhage resuscitation. The first is a blood pressure prediction algorithm for decision-support systems. The algorithm individualizes itself to different subjects using extended Kalman filtering (EKF), to account for high inter-subject variability, before accurately forecasting future blood pressure. The second method is an observer-based feedback controller that regulates blood pressure from a hypotensive state back to a “healthy” setpoint. The controller was designed using linear matrix inequality (LMI) techniques to ensure absolute stability, which allowed a portion of the hemodynamic plant model to remain unspecified and enabled performance over a range of physiologies. Both strategies were evaluated in silico on a cohort of 100 virtual patients generated from an experimental dataset. The prediction algorithm showed accuracy superior to conventional assumptions. The controller tracked the given setpoint with accuracy and performance comparable to more complex adaptive methods. Further work on the prediction algorithm includes developing it into a full decision-support system and incorporating disturbance-rejecting components to account for common issues such as rebleed. The controller's performance deteriorates for high-speed applications, suggesting that further study is required to increase its situational flexibility.

Item STRUCTANT: A CONTEXT-AWARE TASK MANAGEMENT FRAMEWORK FOR HETEROGENEOUS COMPUTATIONAL ENVIRONMENTS (2019) Pachulski, Andrew J; Agrawala, Ashok; Computer Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

The Internet of Things has produced a plethora of devices, systems, and networks able to produce, transmit, and process data at unprecedented rates. These data can have tremendous value for businesses, organizations, and researchers who wish to better serve an audience or understand a topic. Pipelining is a common technique used to automate the scraping, processing, transport, and analytic steps necessary for collecting and utilizing these data. Each step in a pipeline may have specific physical, virtual, and organizational processing requirements that dictate when the step can run and which machines can run it. Physical processing requirements may include hardware-specific computing capabilities such as the presence of Graphics Processing Units (GPUs), memory capacity, and specific CPU instruction sets. Virtual processing requirements may include job precedence, machine architecture, and the availability of input datasets, runtime libraries, and executable code. Organizational processing requirements may include encryption standards for data transport and data at rest, physical server security, and monetary budget constraints. Moreover, these processing requirements may have dynamic or temporal properties not known until schedule time. These processing requirements can greatly impact the ability of organizations to use these data. Despite the popularity of Big Data and cloud computing and the plethora of tools they provide, organizations still face challenges when attempting to adopt these solutions. These challenges include the need to recreate the pipeline, cryptic configuration parameters, and the inability to support rapid deployment and modification for data exploration. Prior work has focused on solutions that apply only to specific steps, platforms, or algorithms in the pipeline, without considering the abundance of information that describes the processing environment and operations.

In this dissertation, we present Structant, a context-aware task management framework and scheduler that helps users manage complex physical, virtual, and organizational processing requirements. Structant models jobs, machines, links, and datasets by storing contextual information for each entity in the Computational Environment. Through inference over this contextual information, Structant creates mappings of jobs to resources that satisfy all relevant processing requirements. As jobs execute, Structant observes performance and creates runtime estimates for new jobs based on prior execution traces and relevant context selection. Using runtime estimates, Structant can schedule jobs with respect to dynamic and temporal processing requirements.

We present results from three experiments to demonstrate how Structant can aid a user in running both simple and complex pipelines. In our first experiment, we demonstrate how Structant can schedule data collection, processing, and movement with virtual processing requirements to facilitate forward prediction of communities at risk for opioid epidemics. In our second experiment, we demonstrate how Structant can profile operations and obey temporal organizational policies to schedule data movement with fewer preemptions than two naive scheduling algorithms. In our third experiment, we demonstrate how Structant can acquire external contextual information from server room monitors and maintain regulatory compliance of the processing environment by shutting down machines according to a predetermined pipeline.

Item Study of real-time traffic state estimation and short-term prediction of signalized arterial network considering heterogeneous information sources (2013) Lu, Yang; Haghani, Ali; Civil Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

Compared with freeway networks, real-time traffic state estimation and prediction for a signalized arterial network is a challenging yet under-studied field. Starting from a discussion of arterial traffic flow dynamics, this study proposes a novel framework for real-time traffic state estimation and short-term prediction for signalized corridors. Particle filter techniques are used to integrate field measurements from different sources to improve the accuracy and robustness of the model.
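To illustrate the general particle-filter fusion idea referenced in this abstract, the sketch below performs one bootstrap particle-filter update for a single scalar traffic state (queue length at one approach), fusing two hypothetical measurement sources. The dynamics model, the measurement sources, and all numeric values are illustrative assumptions only; this is not the estimation framework developed in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, control, measurements, noise_sds):
    """One bootstrap particle-filter update for a scalar traffic state
    (e.g., queue length at a signalized approach). The dynamics and
    measurement models here are generic placeholders for illustration."""
    # 1. Propagate each particle through a simple dynamics model with process noise.
    particles = particles + control + rng.normal(0.0, noise_sds["process"], size=particles.shape)
    particles = np.clip(particles, 0.0, None)            # queue length cannot be negative

    # 2. Weight particles by the likelihood of each heterogeneous measurement
    #    (e.g., a loop-detector count and a probe-vehicle estimate).
    log_w = np.zeros_like(particles)
    for source, z in measurements.items():
        sd = noise_sds[source]
        log_w += -0.5 * ((z - particles) / sd) ** 2      # Gaussian log-likelihood
    w = np.exp(log_w - log_w.max())
    w /= w.sum()

    # 3. Resample to avoid weight degeneracy, then report the posterior mean.
    idx = rng.choice(len(particles), size=len(particles), p=w)
    particles = particles[idx]
    return particles, particles.mean()

# Tiny usage example with made-up numbers.
particles = rng.uniform(0, 20, size=1000)                # prior over queue length (vehicles)
measurements = {"loop_detector": 12.0, "probe_vehicle": 10.5}
noise_sds = {"process": 1.5, "loop_detector": 2.0, "probe_vehicle": 3.0}
particles, estimate = particle_filter_step(particles, control=1.0,
                                            measurements=measurements, noise_sds=noise_sds)
print(f"estimated queue length: {estimate:.1f} vehicles")
```

A realistic arterial framework would carry a richer state (per-link queues, densities, and travel times) and condition the propagation step on signal timing, but the propagate, weight, and resample structure stays the same.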
Several comprehensive numerical studies based on both real-world and simulated datasets showed that the proposed model can generate reliable estimation and short-term prediction of different traffic states, including queue length, flow density, speed, and travel time, with a high degree of accuracy. The proposed model can serve as the key component in both ATIS (Advanced Traveler Information Systems) and proactive traffic control systems.

Item PREDICTION IN SOCIAL MEDIA FOR MONITORING AND RECOMMENDATION (2012) Wu, Shanchan; Raschid, Louiqa; Computer Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

Social media, including blogs and microblogs, provide a rich window into user online activity. Monitoring social media datasets can be expensive due to the scale and inherent noise of such data streams. Monitoring and prediction can provide significant benefit for many applications, including brand monitoring and making recommendations. Consider a focal topic and posts on multiple blog channels on this topic. Being able to target a few potentially influential blog channels that will contain relevant posts is valuable. Once these channels have been identified, a user can proactively join the conversation to encourage positive word-of-mouth and to mitigate negative word-of-mouth. Links between different blog channels, and retweets and mentions between different microblog users, are a proxy for information flow and influence. When trying to monitor where information will flow and who will be influenced by a focal user, it is valuable to predict future links, retweets, and mentions. Predictions of users who will post on a focal topic or who will be influenced by a focal user can yield valuable recommendations.

In this thesis we address the problem of prediction in social media to select social media channels for monitoring and recommendation. Our analysis focuses on individual authors and linkers. We address a series of prediction problems, including the future author prediction problem and the future link prediction problem in the blogosphere, as well as prediction in microblogs such as Twitter. For future author prediction in the blogosphere, where there are network properties and content properties, we develop prediction methods inspired by information retrieval approaches that use historical posts in the blog channel for prediction. We also train a ranking support vector machine (SVM) to solve the problem, considering both network properties and content properties, and identify a number of features that affect prediction accuracy. For future link prediction in the blogosphere, we compare multiple link prediction methods and show that our proposed solution, which combines the network properties of the blog with content properties, does better than methods that examine network properties or content properties in isolation; most previous work has looked at only one or the other. For prediction in microblogs, where there are follower, retweet, and mention networks, we propose a prediction model that utilizes this hybrid network. In this model, we define a potential function that reflects the likelihood of a candidate user having a specific type of link to a focal user in the future, and we formulate an optimization problem based on the principle of maximum likelihood to determine the parameters of the model. We propose different approximate approaches based on the prediction model.
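As a loose illustration of potential-function-style link scoring over a hybrid network, the sketch below ranks candidate users with a simple log-linear potential over follower, retweet, and mention features. The feature set, weights, and normalization are placeholder assumptions; in the thesis the parameters would be determined by the maximum-likelihood optimization described above, which this sketch does not implement.

```python
import numpy as np

def candidate_scores(features, weights):
    """Log-linear 'potential' over candidate users: score_i = exp(w . x_i).
    features: (n_candidates, n_features) array of hybrid-network features,
    e.g., [common followers, past retweets, past mentions] with the focal user.
    weights: parameters that a full model would fit by maximum likelihood;
    the values used below are placeholders for illustration only."""
    potentials = np.exp(features @ weights)
    return potentials / potentials.sum()      # normalize into a predictive distribution

# Hypothetical feature matrix for three candidate users.
features = np.array([
    [5.0, 2.0, 1.0],      # candidate A: common followers, retweets, mentions
    [1.0, 0.0, 0.0],      # candidate B
    [3.0, 4.0, 2.0],      # candidate C
])
weights = np.array([0.4, 0.8, 0.6])           # placeholder parameters
probs = candidate_scores(features, weights)
ranked = np.argsort(-probs)
print("candidates ranked by predicted link probability:", ranked, probs[ranked])
```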
Our approaches are demonstrated to outperform baseline methods that consider only one network or utilize the hybrid network in a naive way. The prediction model can be applied to other similar problems where hybrid networks exist.

Item COMMITMENT AND FLEXIBILITY IN THE DEVELOPING PARSER (2010) Omaki, Akira; Phillips, Colin; Linguistics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

This dissertation investigates adults' and children's sentence processing mechanisms, with a special focus on how multiple levels of linguistic representation are incrementally computed in real time, and how this process affects the parser's ability to later revise its early commitments. Using cross-methodological and cross-linguistic investigations of long-distance dependency processing, this dissertation demonstrates how paying explicit attention to the procedures by which linguistic representations are computed is vital to understanding both adults' real-time linguistic computation and children's reanalysis mechanisms. The first part of the dissertation uses time-course evidence from self-paced reading and eye-tracking studies (reading and visual world) to show that long-distance dependency processing can be decomposed into a sequence of syntactic and interpretive processes. First, the reading experiments provide evidence suggesting that filler-gap dependencies are constructed before verb information is accessed. Second, visual world experiments show that, in the absence of information that would allow hearers to predict verb content in advance, interpretive processes in filler-gap dependency computation take around 600 ms. These results argue for a predictive model of sentence interpretation in which syntactic representations are computed in advance of interpretive processes. The second part of the dissertation capitalizes on this procedural account of filler-gap dependency processing and reports cross-linguistic studies on children's long-distance dependency processing. Interpretation data from English and Japanese demonstrate that children actively associate a fronted wh-phrase with the first VP in the sentence, and successfully retract such active syntactic commitments when the lack of a felicitous interpretation is signaled by verb information, but not when it is signaled by syntactic information. A comparison of the process of anaphor reconstruction in adults and children further suggests that verb-based thematic information is an effective revision cue for children. Finally, distributional analyses of wh-dependencies in child-directed speech are conducted to investigate how parsing constraints impact language acquisition. It is shown that the actual properties of the child parser can skew the input distribution, such that the effective distribution differs drastically from the input distribution seen from a researcher's perspective. This suggests that properties of developing perceptual mechanisms deserve more attention in language acquisition research.

Item The predictive nature of language comprehension (2009) Lau, Ellen Frances; Phillips, Colin; Linguistics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

This dissertation explores the hypothesis that predictive processing, the access and construction of internal representations in advance of the external input that supports them, plays a central role in language comprehension. Linguistic input is frequently noisy, variable, and rapid, but it is also subject to numerous constraints. Predictive processing could be a particularly useful approach in language comprehension, as predictions based on the constraints imposed by the prior context could allow computation to be speeded and noisy input to be disambiguated. Decades of previous research have demonstrated that the broader sentence context has an effect on how new input is processed, but less progress has been made in determining the mechanisms underlying such contextual effects. This dissertation aims to advance that second goal, using both behavioral and neurophysiological methods to motivate predictive or top-down interpretations of contextual effects and to test particular hypotheses about the nature of the predictive mechanisms in question. The first part of the dissertation focuses on the lexical-semantic predictions made possible by word and sentence contexts. MEG and fMRI experiments, in conjunction with a meta-analysis of the previous neuroimaging literature, support the claim that an ERP effect classically observed in response to contextual manipulations, the N400 effect, reflects facilitation in processing due to lexical-semantic predictions, and that these predictions are realized at least in part through top-down changes in activity in left posterior middle temporal cortex, the cortical region thought to represent lexical-semantic information in long-term memory. The second part of the dissertation focuses on syntactic predictions. ERP and reaction time data suggest that the syntactic requirements of the prior context impact processing of the current input very early, and that predicting the syntactic position in which those requirements can be fulfilled may allow the processor to avoid a retrieval mechanism that is prone to similarity-based interference errors. In sum, the results described here are consistent with the hypothesis that a significant amount of language comprehension takes place in advance of the external input, and they suggest future avenues of investigation toward understanding the mechanisms that make this possible.

Item Predicting Success in the Montgomery County Pre-Release Center: The Actuarial Efficacy of the Selection Suitability Scale (2007-05-03) Flower, Shawn Marie; Simpson, Sally S.; Criminology and Criminal Justice; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

The rising costs of incarceration and a renewed interest in rehabilitation have prompted a resurgence of interest in community corrections. A major concern is determining which offenders are appropriate for community corrections without compromising public safety. The Montgomery County Pre-Release Center (PRC) is a work release facility that offers comprehensive services designed to assist offenders with transitioning back to the community after a period of incarceration. The PRC uses the "Selection Suitability Scale" (SSS), a structured instrument created by PRC staff over 20 years ago, to ascertain which offenders are appropriate for admission to the institution. The SSS quantifies criteria believed to influence the applicant's probability of success in the PRC and classifies the applicant's level of risk to the community. Criteria include measures of criminal history, employment history, and residential stability, as well as mental health and substance abuse. Those with higher scores on the SSS are hypothesized to be more likely to succeed in the institution.
This study assessed whether the instrument predicted an offender's performance on three outcome measures, and whether the SSS, both as a total scale score and disaggregated into sub-category component scores, predicted the applicant's performance above and beyond demographic and criminal history information easily obtained from institutional records. Three outcome measures of success were examined using multivariate regression: whether the resident incurred an infraction, whether the resident was discharged in good standing, and a composite scale score covering 13 performance areas assessed by staff during the resident's last month of program participation. Study subjects included 600 residents, 427 male and 173 female, from 2001 to 2004. The SSS performed as expected: those with higher scores on the scale performed better than those with lower scores. Further, the total SSS score provided a small improvement over demographic and criminal history factors alone. Likewise, several SSS component scores, depending on the outcome examined, are predictive. The general conclusion is that, despite the modest predictive power of the SSS, these results should not chill additional experimentation with this or other predictive tools. Study limitations, including the fact that these results were not cross-validated, and future research plans are explicated.
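For readers unfamiliar with this modeling setup, the sketch below shows, on fabricated data, the kind of incremental-value comparison the abstract describes: a baseline logistic regression on demographic and criminal history variables versus a model that also includes the SSS total score. The variable names, coefficients, and data are invented for illustration and bear no relation to the study's actual data or results.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic stand-in data: SSS total score, age, and prior convictions for 600
# hypothetical residents, with a binary "discharged in good standing" outcome.
n = 600
sss_total = rng.normal(60, 10, n)
age = rng.normal(34, 9, n)
priors = rng.poisson(2.0, n)
logit = 0.08 * (sss_total - 60) - 0.15 * priors + 0.01 * (age - 34)
good_standing = rng.random(n) < 1 / (1 + np.exp(-logit))

# Baseline model: demographic and criminal history variables only.
X_base = np.column_stack([age, priors])
base = LogisticRegression().fit(X_base, good_standing)

# Full model: add the SSS total score and check the incremental improvement,
# analogous to asking whether the scale predicts "above and beyond" the
# easy-to-obtain institutional variables.
X_full = np.column_stack([age, priors, sss_total])
full = LogisticRegression().fit(X_full, good_standing)

print("baseline accuracy:", round(base.score(X_base, good_standing), 3))
print("with SSS accuracy:", round(full.score(X_full, good_standing), 3))
```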