Computer Science Theses and Dissertations
Permanent URI for this collection: http://hdl.handle.net/1903/2756
Item: DESIGNING AND IMPLEMENTING ACCESSIBLE WEARABLE INTERACTIONS FOR PEOPLE WITH MOTOR IMPAIRMENTS (2018)
Malu, Meethu; Findlater, Leah; Computer Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Emerging wearable technologies like fitness bands, smartwatches, and head-mounted displays (HMDs) are entering the mainstream market. Unlike smartphones and tablets, these wearables, worn on the body or clothing, are always available and have the potential to provide quick access to information [7]. For instance, HMDs can provide relatively hands-free interaction compared to smartphones, and smartwatches and activity trackers can collect continuous health- and fitness-related information about their wearer. However, there are over 20 million people in the U.S. with upper-body motor impairments [133] who may not be able to benefit from these wearables. For example, the small interaction spaces of smartwatches may present accessibility challenges. Yet few studies have evaluated the accessibility of these wearables, explored their potential impacts, or investigated ways to design accessible wearable interactions for people with motor impairments. To inform the design of future wearable technologies, my dissertation investigates three threads of research: (1) assessing the accessibility of wearable technologies such as HMDs, smartwatches, and fitness trackers; (2) understanding the potential impacts of sharing automatically tracked fitness-related information for people with mobility impairments; and (3) implementing and evaluating accessible interactions for HMDs and smartwatches. As part of my first research thread, I conducted two formative studies investigating the accessibility of HMDs and fitness trackers and found that people with motor impairments experienced accessibility challenges such as problematic form factors, irrelevant data tracking, and difficulty with existing input methods. For my second research thread, I investigated the potential impacts of sharing automatically tracked data from fitness trackers with peers with similar impairments and with therapists, and presented design opportunities for tools that support such sharing. For my third research thread, I addressed the issues identified earlier with HMD accessibility by building custom wearable touchpads to control a commercial HMD. Next, I explored the touchscreen and non-touchscreen areas (bezel, wristband, and the user's body) of smartwatches for accessible interaction. Lastly, I implemented bezel input and compared it with touchscreen input for accessible smartwatch interaction. The techniques implemented and evaluated in this dissertation will enable more equitable and independent use of wearable technologies for people with motor impairments.
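The bezel-versus-touchscreen comparison described above can be made concrete with a small sketch. The Python below is a hypothetical illustration only, not the dissertation's implementation: it assumes a round smartwatch display with a stated pixel radius and an arbitrary bezel margin, and simply labels where a raw touch point falls (touchscreen, bezel ring, or off-screen).

    # Hypothetical sketch (not the dissertation's system): classify a raw touch
    # point on a round smartwatch as bezel vs. on-screen input. The radius and
    # margin values are illustrative; a real system would also debounce and
    # smooth sensor noise.
    import math

    SCREEN_RADIUS_PX = 227   # assumed display radius (e.g., a 454x454 round watch)
    BEZEL_MARGIN_PX = 30     # assumed width of the bezel target band

    def classify_touch(x: float, y: float) -> str:
        """Label a touch point relative to the display centre."""
        cx = cy = SCREEN_RADIUS_PX
        r = math.hypot(x - cx, y - cy)
        if r > SCREEN_RADIUS_PX:
            return "off-screen"                      # e.g., wristband or on-body input
        if r >= SCREEN_RADIUS_PX - BEZEL_MARGIN_PX:
            return "bezel"                           # ring near the display edge
        return "touchscreen"                         # ordinary on-screen touch

    if __name__ == "__main__":
        for point in [(227, 10), (227, 227), (500, 500)]:
            print(point, "->", classify_touch(*point))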
Item: Egocentric Vision in Assistive Technologies For and By the Blind (2022)
Lee, Kyungjun; Kacorri, Hernisa; Computer Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Visual information in our surroundings, such as everyday objects and passersby, is often inaccessible to people who are blind. Cameras that leverage egocentric vision, approximating the visual field of the camera wearer, hold great promise for making the visual world more accessible to this population. Typically, however, such applications rely on pre-trained computer vision models and are thus limited in what they can recognize. Moreover, as with any AI system that augments sensory abilities, conversations around ethical implications and privacy concerns lie at the core of their design and regulation. However, early efforts tend to decouple these perspectives, considering those of either blind users or potential bystanders, but not both. In this dissertation, we revisit egocentric vision for the blind. Through a holistic approach, we examine the following dimensions: type of application (objects and passersby), camera form factor (handheld and wearable), the user's role (passive consumer versus active director of the technology), and privacy concerns (from both end users and bystanders). Specifically, we propose to design egocentric vision models that capture blind users' intent and are fine-tuned by the user in the context of object recognition. We seek to explore the societal issues that AI-powered cameras may raise, considering perspectives from both blind users and nearby people whose faces or objects might be captured by the cameras. Lastly, we investigate interactions and perceptions across different camera form factors to reveal design implications for future work.

Item: HandSight: A Touch-Based Wearable System to Increase Information Accessibility for People with Visual Impairments (2018)
Stearns, Lee Stephan; Froehlich, Jon E.; Chellappa, Rama; Computer Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Many activities of daily living, such as getting dressed, preparing food, wayfinding, and shopping, rely heavily on visual information, and the inability to access that information can negatively impact the quality of life of people with vision impairments. While numerous researchers have explored solutions for assisting with visual tasks that can be performed at a distance, such as identifying landmarks for navigation or recognizing people and objects, few have attempted to provide access to nearby visual information through touch. Touch is a highly attuned means of acquiring tactile and spatial information, especially for people with vision impairments. By supporting touch-based access to information, we may help users better understand how a surface appears (e.g., document layout, clothing patterns), thereby improving their quality of life. To address this gap in research, this dissertation explores methods to augment a visually impaired user's sense of touch with interactive, real-time computer vision to access information about the physical world. These explorations span three application areas: reading and exploring printed documents, controlling mobile devices, and identifying colors and visual textures. At the core of each application is a system called HandSight that uses wearable cameras and other sensors to detect touch events and identify surface content beneath the user's finger. To create HandSight, we designed and implemented the physical hardware, developed signal processing and computer vision algorithms, and designed real-time feedback that enables users to interpret visual or digital content. We involved visually impaired users throughout the design and development process, conducting several user studies to assess usability and robustness and to improve our prototype designs. The contributions of this dissertation include: (i) developing and iteratively refining HandSight, a novel wearable system to assist visually impaired users in their daily lives; (ii) evaluating HandSight across a diverse set of tasks and identifying tradeoffs of a finger-worn approach in terms of physical design, algorithmic complexity and robustness, and usability; and (iii) identifying broader design implications for future wearable systems and for the fields of accessibility, computer vision, augmented and virtual reality, and human-computer interaction.
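The HandSight pipeline summarized above, detecting a touch event and then identifying the surface content beneath the finger, can be sketched at a very high level. The Python below is a hypothetical illustration under stated assumptions (a fingertip pressure reading, an arbitrary contact threshold, an illustrative crop region, and off-the-shelf OCR); it is not the dissertation's actual hardware or algorithms.

    # Hypothetical sketch, not HandSight's pipeline: given one frame from a
    # finger-mounted camera and a fingertip pressure reading, report a touch
    # event and hand the region near the finger to an OCR engine. The sensor
    # interface, threshold, and crop geometry are all illustrative.
    import cv2                      # pip install opencv-python
    import pytesseract              # pip install pytesseract (requires the Tesseract binary)

    PRESSURE_THRESHOLD = 0.5        # assumed normalized force indicating surface contact

    def read_under_finger(frame_path: str, pressure: float) -> str:
        """Return OCR'd text from the patch the finger is resting on, or '' if no touch."""
        if pressure < PRESSURE_THRESHOLD:
            return ""                                   # finger not in contact with the surface
        frame = cv2.imread(frame_path, cv2.IMREAD_GRAYSCALE)
        h, w = frame.shape
        patch = frame[: h // 2, w // 4 : 3 * w // 4]    # illustrative crop ahead of the fingertip
        _, binarized = cv2.threshold(patch, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        return pytesseract.image_to_string(binarized)

    if __name__ == "__main__":
        print(read_under_finger("frame.png", pressure=0.8))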
Item: Temporal Tracking Urban Areas using Google Street View (2016)
Najafizadeh, Ladan; Froehlich, Jon E.; Computer Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Tracking the evolution of built environments is a challenging problem in computer vision due to the intrinsic complexity of urban scenes, as well as the dearth of temporal visual information from urban areas. Emerging technologies such as street view cars provide massive amounts of high-quality imagery of urban environments at street level (e.g., sidewalks, buildings, and street aesthetics). Such datasets are consistent with respect to space and time; hence, they are a potential source for exploring the temporal changes occurring in built environments. However, using street view images to detect temporal changes in urban scenes introduces new challenges, such as variations in illumination and camera pose and the appearance or disappearance of objects. In this thesis, we leverage Google Street View's "time machine" feature to track and label temporal changes in built environments, specifically accessibility features (e.g., the existence of curb ramps, the condition of sidewalks). The main contributions of this thesis are: (i) an initial proof-of-concept automated method for tracking accessibility features through panorama images across time, (ii) a framework for processing and analyzing time-series panoramas at scale, and (iii) a geo-temporal dataset including different types of accessibility features for the task of detection.
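A minimal, hypothetical sketch of the change-tracking idea above, not the thesis's actual framework: compare the same region of interest in two Street View crops of one corner captured in different years, and flag low structural similarity as a candidate change. The filenames, region, and threshold are placeholders, and the alignment, pose, and illumination challenges the thesis highlights are ignored here.

    # Hypothetical sketch: flag whether the region where a curb ramp would appear
    # has changed between two time points, using a simple structural-similarity test.
    import cv2                                           # pip install opencv-python
    from skimage.metrics import structural_similarity    # pip install scikit-image

    CHANGE_THRESHOLD = 0.7                               # assumed SSIM below which we flag a change

    def changed(img_path_t0: str, img_path_t1: str, roi=(100, 300, 200, 200)) -> bool:
        """Compare the same region of interest (x, y, w, h) across two time points."""
        x, y, w, h = roi
        a = cv2.imread(img_path_t0, cv2.IMREAD_GRAYSCALE)[y:y + h, x:x + w]
        b = cv2.imread(img_path_t1, cv2.IMREAD_GRAYSCALE)[y:y + h, x:x + w]
        score = structural_similarity(a, b)
        return score < CHANGE_THRESHOLD

    if __name__ == "__main__":
        # Placeholder filenames for two crops of the same location, years apart.
        print("curb ramp region changed:", changed("pano_2011.jpg", "pano_2015.jpg"))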