Theses and Dissertations from UMD

Permanent URI for this community: http://hdl.handle.net/1903/2

New submissions to the thesis/dissertation collections are added automatically as they are received from the Graduate School. Currently, the Graduate School deposits all theses and dissertations from a given semester after the official graduation date. This means that there may be up to a four-month delay in the appearance of a given thesis/dissertation in DRUM.

More information is available at Theses and Dissertations at University of Maryland Libraries.


Search Results

Now showing 1 - 10 of 10
  • Item
    BEYOND LABELS: BUILDING INTEGRATED COMMUNITIES FOR COGNITIVE ACCESSIBILITY
    (2024) LaQuey, Madison Amanda; Bennett, Ralph; Architecture; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    This project aims to create a tailored community for the mentally handicapped. It will emphasize connectivity, creating a network of spaces that promote mental well-being and social interaction within the community. This planned community would provide a place where residents can live, learn, work, and enjoy outdoor spaces. The architecture will prioritize innovative and adaptable designs that cater to the specific needs of the mentally handicapped. Housing will offer flexibility to accommodate diverse needs and ensure a supportive living environment, emphasizing independence while maintaining a focus on safety. The incorporation of communal living spaces fosters a sense of community and shared responsibility, contributing to a holistic support system. The planned community incorporates vocational training centers and employment hubs specifically designed to cater to the unique abilities and challenges of this population. These spaces aim to create a supportive work environment that promotes skill development, independence, and a sense of purpose, contributing to the overall inclusion of mentally handicapped individuals in the workforce.
  • Item
    TRANSITIONING VISUALLY IMPAIRED USERS TO UTILIZE ACCESSIBILITY TECHNOLOGY
    (2024) Jo, Hyejin; Reitz, Galina; Library & Information Services; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    In a world increasingly driven by visual information, this research develops the Transition Experience Interface (TEI), dedicated to supporting individuals adapting to visual impairments with advanced accessibility technologies. TEI features a user-centric design with a mobile user interface that includes tutorials, updates on new features, a voice command guide, and a progress dashboard. These components aim to reduce dependency on visual cues, enhancing digital inclusivity and promoting independence by encouraging the use of built-in accessibility features on smartphones. TEI educates users on their devices’ capabilities and fosters habitual use of these features, preparing them to rely less on vision and more on voice commands and other settings. This proactive approach helps users operate their smartphones confidently and independently as their visual function changes, bridging the gap between traditional tools and user needs, and highlighting the potential of inclusive design.
  • Item
    A NOVEL MEASUREMENT OF JOB ACCESSIBILITY BASED ON MOBILE DEVICE LOCATION DATA
    (2022) Zhao, Guangchen; Zhang, Lei LZ; Civil Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Mobile device location data (MDLD) can offer a new perspective on measuring accessibility. Compared with traditional accessibility measures, MDLD can capture people’s preferences through their observed locations. This study proposes a job accessibility measure based on home and work locations identified from MDLD, evaluating job accessibility as the proportion of workers identified as working in zones within a certain travel-time threshold. In a case study of the Baltimore region, job accessibility from the MDLD-based measure is compared with results from a widely used traditional measure. Generalized additive models (GAMs) are then built to analyze the socio-demographic impact on job accessibility from both the MDLD-based measure and the traditional measure, with a feature-to-feature comparison. Finally, the socio-demographic characteristics of regions with major disparities between the job accessibility from the traditional measure and the MDLD-based measure are evaluated using Student's t-tests.
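The abstract defines the MDLD-based measure as the proportion of identified workers working in zones within a certain travel-time threshold. A minimal sketch of that definition follows; the function name, zone labels, and data shapes are illustrative assumptions, not the study's actual code:

```python
def job_accessibility(workers, travel_time, home_zone, threshold=30.0):
    """Share of a home zone's workers whose workplace is reachable
    within `threshold` minutes.

    workers: list of (home_zone, work_zone) pairs, as would be
             inferred from mobile device location data (MDLD).
    travel_time: dict mapping (origin, destination) -> minutes.
    """
    in_zone = [(h, w) for h, w in workers if h == home_zone]
    if not in_zone:
        return 0.0
    reachable = sum(
        1 for h, w in in_zone
        if travel_time.get((h, w), float("inf")) <= threshold
    )
    return reachable / len(in_zone)
```

With real MDLD, the (home, work) pairs would come from the home/work identification step described in the abstract, and travel times from a transportation network model.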
  • Item
    Exploring Blind and Sighted Users’ Interactions With Error-Prone Speech and Image Recognition
    (2021) Hong, Jonggi; Kacorri, Hernisa; Computer Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Speech and image recognition, already employed in many mainstream and assistive applications, hold great promise for increasing independence and improving the quality of life for people with visual impairments. However, their error-prone nature, combined with the challenges of visually inspecting errors, can hold back their use for more independent living. This thesis explores blind users’ challenges and strategies in handling speech and image recognition errors through non-visual interactions, looking at two perspectives: that of an end-user interacting with already trained and deployed models, such as automatic speech and image recognizers, and that of an end-user empowered to attune a model to their idiosyncratic characteristics, as with teachable image recognizers. To better contextualize the findings and account for human factors beyond visual impairments, the user studies also involve sighted participants on a parallel thread. More specifically, Part I of this thesis explores blind and sighted participants' experience with speech recognition errors through audio-only interactions. Here, the recognition result from a pre-trained model is not displayed; instead, it is played back through text-to-speech. Through carefully engineered speech dictation tasks in both crowdsourcing and controlled-lab settings, this part investigates the percentage and type of errors that users miss, their strategies in identifying errors, and potential manipulations of the synthesized speech that may help users better identify errors. Part II investigates blind and sighted participants' experience with image recognition errors. Here, we consider both pre-trained image recognition models and those fine-tuned by the users. 
Through carefully engineered questions and tasks in both crowdsourcing and semi-controlled remote lab settings, this part investigates the percentage and type of errors that users miss, their strategies in identifying errors, as well as potential interfaces for accessing training examples that may help users better avoid prediction errors when fine-tuning models for personalization.
  • Item
    Integrating Human Performance Models into Early Design Stages to Support Accessibility
    (2021) Knisely, Benjamin Martin; Vaughn-Cooke, Monifa; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Humans have heterogeneous physical and cognitive capabilities. Engineers must cater to this heterogeneity to minimize opportunities for user error and system failure. Human factors considerations are typically evaluated late in the design process, risking expensive redesign when new human concerns become apparent. Evaluating user capability earlier could mitigate this risk. One critical early-stage design decision is function allocation – assigning system functions to humans and machines. Automating functions can eliminate the need for users to perform risky tasks but increases resource requirements. Engineers require guidance to evaluate and optimize function allocation that acknowledges the trade-offs between user accommodation and system complexity. In this dissertation, a multi-stage design methodology is proposed to facilitate the efficient allocation of system functions to humans and machines in heterogeneous user populations. The first stage of the methodology introduces a process to model population user groups to guide product customization. User characteristics that drive performance of generalized product interaction tasks are identified and corresponding variables from a national population database are clustered. In stage two, expert elicitation is proposed as a cost-effective means to quantify risk of user error for the user group models. Probabilistic estimates of user group performance are elicited from internal medicine physicians for generalized product interaction tasks. In the final stage, the data (user groups, performance estimations) are integrated into a multi-objective optimization model to allocate functions in a product family when considering user accommodation and system complexity. The methodology was demonstrated on a design case study involving self-management technology use by diabetes patients, a heterogeneous population in a safety-critical domain. 
The population modeling approach produced quantitatively and qualitatively validated clusters. For the expert elicitation, experts provided internally validated, distinct estimates for each user group-task pair. To validate the utility of the proposed method (acquired data, optimization model), engineering students (n=16) performed the function allocation task manually. Results indicated that participants were unable to allocate functions as efficiently as the model despite indicating user capability and cost were priorities. This research demonstrated that the proposed methodology can provide engineers valuable information regarding user capability and system functionality to drive accessible early-stage design decisions.
  • Item
    Ubiquitous Accessibility Digital-Maps for Smart Cities: Principles and Realization
    (2019) Ismail, Heba; Agrawala, Ashok; Computer Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    To support disabled individuals' active participation in society, the Americans with Disabilities Act (ADA) requires installing various accessibility measures in roads and public accommodation spaces such as malls and airports. For example, curb ramps are installed on sidewalks to help wheelchair users transition to and from sidewalks smoothly. However, to comply with ADA requirements, it is sufficient for a place to have one accessible route, and usually there are no clear directions on how to reach that route. Hence, even ADA-compliant facilities can still be challenging for a disabled individual to access. To improve spaces' accessibility, systems have recently been proposed to rate the accessibility of outdoor walkways and intersections through active crowdsourcing, where individuals mark and/or validate a map’s accessibility assessments. Yet depending on humans limits the ubiquity, accuracy, and update rate of the generated maps. In this dissertation, we propose the AccessMap (Accessibility Digital Maps) system to build ubiquitous accessibility digital maps automatically, in which indoor/outdoor spaces are annotated with various accessibility semantics and marked with assessments of their accessibility levels for the vision- and mobility-impairment disability types. To build the maps automatically, we propose a passive crowdsourcing approach in which signals from the spatiotemporal sensors of users’ smartphones (e.g., barometer, accelerometer) are analyzed to detect and map accessibility semantics. We present algorithms to passively detect various semantics, such as accessible pedestrian signals and missing curb ramps. We also present a probabilistic framework to construct the map while taking into account the uncertainty in the detected semantics and the sensors. AccessMap was evaluated in two different countries; the evaluation results show high detection accuracy for the different accessibility semantics. 
Moreover, the crowdsourcing framework helps further improve map integrity over time. Additionally, to tag the crowdsourced data with location stamps, GPS is the de facto standard localization method, but it fails in indoor environments. Thus, we present Hapi, a WiFi-based localization system that estimates crowdsourcers’ locations indoors. WiFi is a promising technology for indoor localization due to its worldwide deployment. Nevertheless, current systems either rely on a tedious, expensive offline calibration phase and/or focus on a single-floor area of interest. To address these limitations, Hapi combines signal processing, deep learning, and probabilistic models to estimate a user’s 2.5D location (i.e., the user's floor level and her 2D location within that floor) in a calibration-free manner. Our evaluation results show that, in high-rise buildings, we achieve significant improvements over state-of-the-art indoor-localization systems.
  • Item
    EXPLORING THE ACCESSIBILITY OF HOME-BASED, VOICE-CONTROLLED INTELLIGENT PERSONAL ASSISTANTS
    (2018) Pradhan, Alisha; Lazar, Amanda; Library & Information Services; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    From an accessibility perspective, home-based, voice-controlled intelligent personal assistants (IPAs) have the potential to greatly expand speech interaction beyond dictation and screenreader output. This research examines the accessibility of off-the-shelf IPAs (e.g., Amazon Echo) by conducting two exploratory studies. To explore the use of IPAs by people with disabilities, we analyzed 346 Amazon Echo reviews mentioning users with disabilities, followed by interviews with 16 visually impaired IPA users. Although some accessibility challenges exist, individuals with a range of disabilities are using IPAs, including unexpected uses such as speech therapy and memory aids. The second study involved a three-week deployment of Echo Dot, a popular IPA, with five older adults who use technology infrequently. Findings indicate preferences for using IPAs over traditional computing devices. We identify design implications to improve IPAs for this population. Both studies highlight issues of discoverability and the need for feature-rich voice-based applications. The findings of this research can inform future work on accessible voice-based IPAs.
  • Item
    Accessible On-Body Interaction for People With Visual Impairments
    (2016) Oh, Uran Oh; Findlater, Leah; Computer Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    While mobile devices offer people with disabilities new opportunities to gain independence in everyday activities, modern touchscreen-based interfaces can present accessibility challenges for low-vision and blind users. Even with state-of-the-art screenreaders, it can be difficult or time-consuming to select specific items without visual feedback. The smooth surface of the touchscreen provides little tactile feedback compared to physical button-based phones. Furthermore, in a mobile context, hand-held devices present additional accessibility issues when both of the user’s hands are not available for interaction (e.g., one hand may be holding a cane or a dog leash). To improve mobile accessibility for people with visual impairments, I investigate on-body interaction, which employs the user’s own skin surface as the input space. On-body interaction may offer an alternative or complementary means of mobile interaction for people with visual impairments by enabling non-visual interaction with additional tactile and proprioceptive feedback compared to a touchscreen. In addition, on-body input may free users’ hands and offer efficient interaction, as it can eliminate the need to pull out or hold the device. Despite this potential, little work has investigated the accessibility of on-body interaction for people with visual impairments. Thus, I begin by identifying needs and preferences for accessible on-body interaction. From there, I evaluate user performance in target acquisition and shape drawing tasks on the hand compared to on a touchscreen. Building on these studies, I focus on the design, implementation, and evaluation of an accessible on-body interaction system for visually impaired users. 
The contributions of this dissertation are: (1) identification of perceived advantages and limitations of on-body input compared to a touchscreen phone, (2) empirical evidence of the performance benefits of on-body input over touchscreen input in terms of speed and accuracy, (3) implementation and evaluation of an on-body gesture recognizer using finger- and wrist-mounted sensors, and (4) design implications for accessible non-visual on-body interaction for people with visual impairments.
  • Item
    TRANSPORTATION RESILIENCE ARCHITECTURE: A FRAMEWORK FOR ANALYSIS OF INFRASTRUCTURE, AGENCY AND USERS
    (2015) Urena Serulle, Nayel; Cirillo, Cinzia; Civil Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    How do some countries, or sectors within them, overcome potentially disastrous events while others fail? The answer lies in the concept of resilience, whose importance grows as our environment's deterioration escalates, limiting access to economic, social, and natural resources. This study evaluates resilience from a transportation perspective and defines it as “the ability for the system to maintain its demonstrated level of service or to restore itself to that level of service in a specified timeframe” (Heaslip, Louisell, & Collura, 2009). The literature shows that previous evaluation approaches usually do not directly integrate all perspectives of a transportation system. To this end, this study introduces the concept of Transportation Resilience Architecture (TRA) as a framework for evaluating the resilience of a transportation system through the cumulative effect of the system's Infrastructure, Agency, and User layers. This research introduces three quantitative methodologies for evaluating resilience through TRA. For Infrastructure, a practical tool for measuring the level of accessibility to “safe zones” is presented, which takes advantage of the logsum measure resulting from Statewide Transportation Models. Results from the two locations analyzed (Frederick, MD and Anacostia, D.C.) suggest a positive correlation between income and accessibility. For Agency, metrics collected through a thorough literature review were combined with survey data to develop an evaluation framework, based on fuzzy algorithms, that yields an index. The end product highlights the importance of interoperability as a practice that enhances disaster preparedness and response. Finally, for User, a dynamic discrete choice model was adapted to evaluate evacuation behavior, taking into account the disaster's characteristics and the population's expectations of them, a first from an evacuation perspective. 
The proposed framework is estimated using SP evacuation data collected on Louisiana residents. The result indicates that the dynamic discrete choice model excels in incorporating demographic information of respondents, a key input in policy evaluation, and yields significantly more accurate evacuation percentages per forecast.
  • Item
    Interactive Sonification of Abstract Data - Framework, Design Space, Evaluation, and User Tool
    (2006-04-24) Zhao, Haixia; Shneiderman, Ben; Computer Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    For people with visual impairments, sound is an important information channel. The traditional accommodation for visually impaired users to access data is to rely on screen readers to speak the data in tabular form. While speech can accurately describe information, such data presentation tends to be lengthy and makes complex information hard to grasp. This is particularly true in exploratory data analysis, in which users often need to examine the data from different aspects. Sonification, the use of non-speech sound, has been shown to help data comprehension. Previous data sonifications focus on data-to-sound attribute mapping and typically lack support for task-oriented data interaction. This dissertation makes four contributions. (1) An Action-by-Design-Component (ADC) framework guides auditory interface designs for exploratory data analysis. The framework characterizes data interaction in the auditory mode as a set of Auditory Information Seeking Actions (AISAs). It also discusses design considerations for a set of Design Components to support AISAs, contrasted with actions in visualizations. (2) Applying the framework to geo-referenced statistical data, I explore its design space. Through user evaluations, effective design options were identified and insights were obtained regarding human ability to perceive complex information, especially information with spatial structure, from interactive sounds. (3) A tool, iSonic, was developed, with synchronized visual and auditory displays. Forty-two hours of case studies with seven blind users show that iSonic enables them to effectively explore data in highly coordinated map and table views without special devices, and to find facts and discover data trends even in unfamiliar geographical contexts. Preliminary algorithms are also described to automatically generate spatial sweep orders of geographical regions for arbitrary maps. 
(4) The application to geo-referenced data demonstrated that the ADC framework provides a rich set of task-oriented actions (AISAs) that are effective for blind users to accomplish complex tasks with multiple highly coordinated data views. It also showed that some widely used techniques in visualization can be adapted to the auditory mode. By applying the framework to scatterplots and line graphs, I show that the framework can be generalized and lead to the design of a unified auditory workspace for general exploratory data analysis.
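As a toy illustration of the "data to sound attribute mapping" that this abstract contrasts with task-oriented AISAs, one common mapping assigns each data value a pitch. The sketch below (function name and frequency range are my assumptions, not iSonic's actual design) maps values linearly onto a frequency band:

```python
def value_to_pitch(value, vmin, vmax, f_low=220.0, f_high=880.0):
    """Linearly map a data value onto a pitch range in Hz (A3..A5 here).

    Purely illustrative of attribute mapping; iSonic's actual mapping
    and interaction design are richer and task-oriented.
    """
    if vmax == vmin:
        return f_low  # degenerate range: fall back to the lowest pitch
    frac = (value - vmin) / (vmax - vmin)
    return f_low + frac * (f_high - f_low)
```

A sonification would then synthesize a tone at the returned frequency for each data point; the dissertation's point is that such mappings alone, without interaction support, are insufficient for exploratory analysis.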