Theses and Dissertations from UMD

Permanent URI for this community: http://hdl.handle.net/1903/2

New submissions to the thesis/dissertation collections are added automatically as they are received from the Graduate School. Currently, the Graduate School deposits all theses and dissertations from a given semester after the official graduation date. This means that there may be up to a four-month delay in the appearance of a given thesis/dissertation in DRUM.

More information is available at Theses and Dissertations at University of Maryland Libraries.

Search Results

Now showing 1 - 8 of 8
  • Item
    Egocentric Vision in Assistive Technologies For and By the Blind
    (2022) Lee, Kyungjun; Kacorri, Hernisa; Computer Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Visual information in our surroundings, such as everyday objects and passersby, is often inaccessible to people who are blind. Cameras that leverage egocentric vision, in an attempt to approximate the visual field of the camera wearer, hold great promise for making the visual world more accessible for this population. Typically, such applications rely on pre-trained computer vision models and thus are limited. Moreover, as with any AI system that augments sensory abilities, conversations around ethical implications and privacy concerns lie at the core of their design and regulation. However, early efforts tend to decouple perspectives, considering only either those of the blind users or potential bystanders. In this dissertation, we revisit egocentric vision for the blind. Through a holistic approach, we examine the following dimensions: type of application (objects and passersby), camera form factor (handheld and wearable), user’s role (a passive consumer and an active director of technology), and privacy concerns (from both end-users and bystanders). Specifically, we propose to design egocentric vision models that capture blind users’ intent and are fine-tuned by the user in the context of object recognition. We seek to explore societal issues that AI-powered cameras may lead to, considering perspectives from both blind users and nearby people whose faces or objects might be captured by the cameras. Last, we investigate interactions and perceptions across different camera form factors to reveal design implications for future work.
  • Item
    Digital Frost: Accessibility and Public Humanities
    (2020) Yokoyama, Setsuko; Smith, Martha Nell; English Language and Literature; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
His frequently recirculated televised reading at President John F. Kennedy’s inauguration ceremony attests to the fact that Robert Frost is often remembered as one of the iconic popular poets of the early twentieth century. What is less remembered today is the fact that Frost gave talks and readings at universities, colleges, and other public venues for nearly five decades to make poetry accessible to general readers. These talks epitomize Frost’s dedication to the democratic discussion of literature and daily discourse as he demonstrated, through humor, how to practice auditory attentiveness to the figures of speech used by poets, scientists, politicians, and other authority figures. Though central to his career and his contribution to American culture and literary history, Frost’s public performance as a genre has long been overlooked primarily due to the inaccessibility of audio recordings housed in archives. Digital Frost: Accessibility and Public Humanities investigates how best to redress such critical neglect of Frost’s public talks and readings through the development of a pilot audio edition and the discussion of the theoretical underpinnings of that edition’s design. As part of the larger effort to build a cross-institutional platform in partnership with literary scholars, special collections librarians, Frost’s family members and friends, as well as the poet’s literary estate and publisher, the pilot audio edition tests the feasibility of critical collaboration and expands on the disciplinary responsibility of textual scholarship. In its accompanying chapters, Digital Frost contests the seemingly monolithic discourse around “accessibility” via analyses of its sociohistorical meanings from archival, literary, disability, and digital studies perspectives.
Digital Frost argues that only when technical accessibility is concomitantly considered from a sociohistorical perspective, are we equipped to invent a culturally appropriate access design for online literary collections.
  • Item
    EMBODIED HAMLET: DISABILITY, ACCESSIBILITY, GENDER, AND SCIENCE FICTION
    (2019) Hands, Christine; Widrig, Patrik; Dance; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
“Hamlet” was a thirty-eight-minute work of dance art premiered at the Clarice Smith Performing Arts Center at the University of Maryland on October 12 and 14, 2018. The work explored four pillars of research through embodied exploration: representation, accessibility, inclusion, and reinvention. These four themes are discussed in the following paper as theoretical points of inquiry. The first chapter discusses the representation of people with disabilities. The second chapter explores the accessibility features for audience members which were available at the performance. The third chapter considers inclusion and challenges the canon of traditional white, male casting of the role of Hamlet. The fourth chapter discusses the use of science fiction to tie everything together by creating a space of transformative play-acting where people can exercise their imaginations to create a more inclusive and accessible society. Theoretical and scholarly research informs and then reflects the work onstage in “Hamlet.”
  • Item
    HandSight: A Touch-Based Wearable System to Increase Information Accessibility for People with Visual Impairments
    (2018) Stearns, Lee Stephan; Froehlich, Jon E; Chellappa, Rama; Computer Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Many activities of daily living such as getting dressed, preparing food, wayfinding, or shopping rely heavily on visual information, and the inability to access that information can negatively impact the quality of life for people with vision impairments. While numerous researchers have explored solutions for assisting with visual tasks that can be performed at a distance, such as identifying landmarks for navigation or recognizing people and objects, few have attempted to provide access to nearby visual information through touch. Touch is a highly attuned means of acquiring tactile and spatial information, especially for people with vision impairments. By supporting touch-based access to information, we may help users to better understand how a surface appears (e.g., document layout, clothing patterns), thereby improving their quality of life. To address this gap in research, this dissertation explores methods to augment a visually impaired user’s sense of touch with interactive, real-time computer vision to access information about the physical world. These explorations span three application areas: reading and exploring printed documents, controlling mobile devices, and identifying colors and visual textures. At the core of each application is a system called HandSight that uses wearable cameras and other sensors to detect touch events and identify surface content beneath the user’s finger. To create HandSight, we designed and implemented the physical hardware, developed signal processing and computer vision algorithms, and designed real-time feedback that enables users to interpret visual or digital content. We involved visually impaired users throughout the design and development process, conducting several user studies to assess usability and robustness and to improve our prototype designs.
The contributions of this dissertation include: (i) developing and iteratively refining HandSight, a novel wearable system to assist visually impaired users in their daily lives; (ii) evaluating HandSight across a diverse set of tasks, and identifying tradeoffs of a finger-worn approach in terms of physical design, algorithmic complexity and robustness, and usability; and (iii) identifying broader design implications for future wearable systems and for the fields of accessibility, computer vision, augmented and virtual reality, and human-computer interaction.
  • Item
    DESIGNING AND IMPLEMENTING ACCESSIBLE WEARABLE INTERACTIONS FOR PEOPLE WITH MOTOR IMPAIRMENTS
    (2018) Malu, Meethu; Findlater, Leah; Computer Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Emerging wearable technologies like fitness bands, smartwatches, and head-mounted displays (HMDs) are entering the mainstream market. Unlike smartphones and tablets, these wearables, worn on the body or clothing, are always available and have the potential to provide quick access to information [7]. For instance, HMDs can provide relatively hands-free interaction compared to smartphones, and smartwatches and activity trackers can collect continuous health and fitness-related information about their wearer. However, there are over 20 million people in the U.S. with upper body motor impairments [133] who may not be able to realize the potential benefits of these wearables. For example, the small interaction spaces of smartwatches may present accessibility challenges. Yet, few studies have explored the potential impacts or evaluated the accessibility of these wearables or investigated ways to design accessible wearable interactions for people with motor impairments. To inform the design of future wearable technologies, my dissertation investigates three threads of research: (1) assessing the accessibility of wearable technologies like HMDs, smartwatches, and fitness trackers; (2) understanding the potential impacts of sharing automatically tracked fitness-related information for people with mobility impairments; and (3) implementing and evaluating accessible interactions for HMDs and smartwatches. As part of my first research thread, I conducted two formative studies investigating the accessibility of HMDs and fitness trackers and found that people with motor impairments experienced accessibility challenges like problematic form factors, irrelevant data tracking, and difficulty with existing input. For my second research thread, I investigated the potential impacts of sharing automatically tracked data from fitness trackers with peers with similar impairments and with therapists, and presented design opportunities to build tools to support sharing.
Towards my third research thread, I addressed the earlier issues identified with HMD accessibility by building custom wearable touchpads to control a commercial HMD. Next, I explored the touchscreen and non-touchscreen areas (bezel, wristband and user’s body) of smartwatches for accessible interaction. And, lastly, I built and compared bezel input with touchscreen input for accessible smartwatch interaction. The techniques implemented and evaluated in this dissertation will enable more equitable and independent use of wearable technologies for people with motor impairments.
  • Item
Temporal Tracking of Urban Areas using Google Street View
    (2016) Najafizadeh, Ladan; Froehlich, Jon E; Computer Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Tracking the evolution of built environments is a challenging problem in computer vision due to the intrinsic complexity of urban scenes, as well as the dearth of temporal visual information from urban areas. Emerging technologies such as street view cars provide massive amounts of high-quality imagery of urban environments at street level (e.g., sidewalks, buildings, and aesthetics of streets). Such datasets are consistent with respect to space and time; hence, they could be a potential source for exploring the temporal changes transpiring in built environments. However, using street view images to detect temporal changes in urban scenes introduces new challenges such as variation in illumination, camera pose, and appearance/disappearance of objects. In this thesis, we leverage Google Street View’s new feature, “time machine”, to track and label the temporal changes of built environments, specifically accessibility features (e.g., existence of curb ramps, condition of sidewalks). The main contributions of this thesis are: (i) an initial proof-of-concept automated method for tracking accessibility features through panorama images across time, (ii) a framework for processing and analyzing time-series panoramas at scale, and (iii) a geo-temporal dataset including different types of accessibility features for the task of detection.
  • Item
    The Cost of Turning Heads - The Design and Evaluation of Vocabulary Prompts on a Head-Worn Display to Support Persons with Aphasia in Conversation
    (2015) Williams, Kristin; Findlater, Leah; Geography/Library & Information Systems; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Symbol-based dictionaries could provide persons with aphasia a resource for finding needed words, but they can detract from conversation. This research explores the potential of head-worn displays (HWDs) to provide glanceable vocabulary support that is unobtrusive and always available. Two formative studies explored the benefits and challenges of using a HWD, and evaluated a proof-of-concept prototype in both lab and field settings. These studies showed that a HWD may allow wearers to maintain focus on the conversation, reduce reliance on external support (e.g., paper and pen, or people), and minimize the visibility of that support to others. A third study compared use of a HWD to a smartphone, and found preliminary evidence that the HWD may offer a better overall experience with assistive vocabulary and may better support the wearer in advancing through conversation. These studies should motivate further investigation of head-worn conversational support.
  • Item
    ACCESSIBILITY IN CONTEXT: UNDERSTANDING THE TRULY MOBILE EXPERIENCE OF USERS WITH MOTOR IMPAIRMENTS
    (2014) Naftali, Maia; Findlater, Leah; History/Library & Information Systems; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Touchscreen smartphones are becoming broadly adopted by the US population. Ensuring that these devices are accessible for people with disabilities is critical for equal access. For people with motor impairments, the vast majority of studies on touchscreen mobile accessibility have taken place in the laboratory. These studies show that while touchscreen input offers advantages, such as requiring less strength than physical buttons, it also presents accessibility challenges, such as the difficulty of tapping on small targets or making multitouch gestures. However, because of the focus on controlled lab settings, past work does not provide an understanding of the contextual factors that impact smartphone use in everyday life, or of the activities these devices enable for people with motor impairments. To investigate these issues, this thesis research includes two studies: first, an in-person study with four participants with motor impairments, which included diary entries and an observational session, and second, an online survey with nine respondents. Using case study analysis for the in-person participants, we found that mobile devices have the potential to help motor-impaired users reduce the physical effort required for everyday tasks (e.g., turning on a TV, checking transit accessibility in advance), that challenges in touchscreen input still exist, and that situational impairments can substantially impede this population. The online survey results confirm these findings, for example, highlighting the difficulty of text input, particularly when users are out and mobile rather than at home. Based on these findings, future research should focus on the enhancement of current touchscreen input, exploring the potential of wearable devices for mobile accessibility, and designing more applications and services to improve access to the physical world.