Information Studies Theses and Dissertations

Permanent URI for this collection: http://hdl.handle.net/1903/2780

Now showing 1 - 10 of 117
  • Item
    The "Extra Layer of Things": Everyday Information Management Strategies and Unmet Needs of Moms with ADHD
    (2024) Walsh, Sheila Ann; St. Jean, Beth; Information Studies; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Mothers with ADHD need to manage their symptoms while balancing parenting responsibilities. Although technology is often recommended to people with ADHD, related research in human-computer interaction (HCI) remains limited. To help fill this gap, the author interviewed five mothers diagnosed with ADHD. The mothers, whose voices are largely unheard in HCI research, vividly describe their challenges managing everyday information and their attempts to adapt existing systems. The study uncovers a previously unrecognized tendency among moms with ADHD to frequently switch, and sometimes abandon, tools and systems. The study contributes to HCI by linking each finding to a design consideration. The study builds upon previous findings that neurodivergent individuals benefit from externalizing thoughts, providing new insights into how and why this occurs. These findings lay the groundwork for further HCI research and human-centered design initiatives to help parents with ADHD, and their families, thrive.
  • Item
    Information Avoidance in the Archival Context
    (2024) Beland II, Scott; St. Jean, Beth; Library & Information Services; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Information avoidance (IA) has been researched across several disciplines like psychology, economics, consumer health informatics, communications, and the information sciences, but the exploration of this phenomenon in archives is nearly non-existent. Archivists, as information professionals, should see IA as a relevant concern, as it may impact how people interact with archival materials and, more importantly, how they may avoid certain materials, or the archives altogether. My study provides an extensive overview of IA in the archival context with a systematic literature review across disciplines and through qualitative interviews with 12 archivists across the United States of varying experience levels and from varying institution types. The aim is to explore how they think about IA in archives and how they may have experienced it in their work to answer the two research questions: 1) What abstract ideas do archivists have about IA as it relates to archives? 2) How do archivists experience IA in their daily work? Thematic analysis and synthesis grids were used to distill the transcripts into five key themes and findings about who is susceptible to IA, the contributing variables that impact and are impacted by IA, how IA manifests, real-life applications of IA, and specific archival practices and concepts that impact and are impacted by IA in the context of archival work and research. Interpretations of this data resulted in theoretical models and implications that draw on existing understandings, as well as new understandings of IA that impact the information lifecycle of archival records and how people interact with them. These contributions to the archival and IA literatures can be used as a roadmap that will allow archivists to approach their work with a more mindful, and hopefully empathetic, ethic of care in handling information, understanding the costs and benefits of those decisions and actions, and better serving their patrons.
  • Item
    TRANSFORMING ENVIRONMENTAL EDUCATION: EXPLORING THE IMPACT OF DATA PHYSICALIZATION ON CHILDREN'S LEARNING
    (2024) Lin, Yi-Hsieh; Aston, Jason; Information Studies; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    This paper explores the integration of data physicalization in Education for Sustainable Development (ESD), focusing on its potential to enhance the learning experience for young audiences, particularly those aged 7-12. By examining current approaches in ESD and analyzing the impact of tangible data interactions on children's understanding and engagement with sustainability issues, the study underscores the importance of innovative educational methods. Preliminary findings indicate that data physicalization helps enhance comprehension, engagement, and active learning among young learners. The research contributes to the discourse on effective ESD practices, advocating for the inclusion of data physicalization techniques in educational curricula to better prepare youth for addressing global environmental challenges.
  • Item
    When Good MT Goes Bad: Understanding and Mitigating Misleading Machine Translations
    (2024) Martindale, Marianna; Carpuat, Marine; Information Studies; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Machine Translation (MT) has long been viewed as a force multiplier, enabling monolingual users to assist in processing foreign language text. In ideal situations, Neural MT (NMT) provides unprecedented MT quality, potentially increasing productivity and user acceptance of the technology. However, outside of ideal circumstances, NMT introduces new types of errors that may be difficult for users who don't understand the source language to recognize, resulting in misleading output. This dissertation seeks to understand the prevalence, nature, and impact of potentially misleading output and whether a simple intervention can mitigate its effects on monolingual users. To understand the prevalence of misleading MT output, we conduct a study to quantify the potential impact of output that is fluent but not adequate, or "fluently inadequate," by observing the relative frequency of these types of errors in two types of MT models, statistical and early neural models. We find that neural models were consistently more prone to this type of error than traditional statistical models. However, improving the overall quality of the MT system, such as through domain adaptation, reduces these errors. We examine the nature of misleading MT output by moving from an intrinsic feature (fluency) to a more user-centered feature, believability, defined as a monolingual user's perception of the likelihood that the meaning of the MT output matches the meaning of the input, without understanding the source. We find that fluency accounts for most believability judgments, but semantic features like plausibility also play a role. Finally, we turn to mitigating the impacts of potentially misleading NMT output. We propose two simple interventions to help users more effectively handle inadequate output: providing output from a second NMT system and providing output from a rule-based MT (RBMT) system.
We test these interventions for one use case with a user study designed to mimic typical intelligence analysis triage workflows and with actual intelligence analysts as participants. We see significant increases in performance on relevance judgment tasks with output from two NMT systems and in performance on relevant entity identification tasks with the addition of RBMT output.
  • Item
    Value sets for the analysis of real-world patient data: Problems, theory, and solutions
    (2024) Gold, Sigfried; Lutters, Wayne; Information Studies; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Observational, retrospective, in silico studies based on real-world data—that is, data for research collected from sources other than randomized clinical trials—cost a minute fraction of randomized clinical trials and are essential for clinical research, pharmacoepidemiology, clinical quality measurement, health system administration, value-based care, clinical guideline compliance, and public health surveillance. They offer an alternative when randomized trials cannot provide large enough patient cohorts or patients representative of real populations in terms of comorbidities, age range, disease severity, or rare conditions. Improvements in the speed, frequency, and quality of research investigations using real-world data have accelerated with the emergence of distributed research networks based on common data models over the past ten years. Analyses of repositories of coded patient data involve data models, controlled medical vocabularies and ontologies, analytic protocols, implementations of query logic, value sets of vocabulary terms, and software platforms for developing and using these. These studies generally rely on clinical data represented using controlled medical vocabularies and ontologies—like ICD10, SNOMED, RxNorm, CPT, and LOINC—which catalogue and organize clinical phenomena such as conditions, treatments, and observations. Clinicians, researchers, and other medical staff collect patient data into electronic health records, registries, and claims databases with each phenomenon represented by a code, a concept identifier, from a medical vocabulary. Value sets are groupings of these identifiers that facilitate data collection, representation, harmonization, and analysis. Although medical vocabularies use hierarchical classification and other data structures to represent phenomena at different levels of granularity, value sets are needed for concepts that cover a number of codes.
These lists of codes representing medical terms are a common feature of the cohort, phenotype, or other variable definitions that are used to specify patients with particular clinical conditions in analytic algorithms. Developing and validating original value sets is difficult to do well; it is a relatively small but ubiquitous part of real-world data analysis, it is time-consuming, and it requires a range of clinical, terminological, and informatics expertise. When a value set fails to match all the appropriate records or matches records that do not indicate the phenomenon of interest, study results are compromised. An inaccurate value set can lead to completely wrong study results. When value set inaccuracy causes more subtle errors in study results, conclusions may be incorrect without catching researchers’ attention. One hopes in this case that the researchers will notice a problem and track it down to a value set issue. Verifying or measuring value set accuracy is difficult and costly, often impractical, sometimes impossible. Literature recognizing the deleterious effects of value set quality on the reliability of observational research results frequently recommends public repositories where high-quality value sets for reuse can be stored, maintained, and refined by successive users. Though such repositories have been available for years and populated with hundreds or thousands of value sets, regular reuse has not been demonstrated. Value set quality has continued to be questioned in the literature, but the value of reuse has continued to be recommended and generally accepted at face value. The hope for value set repositories has been not only for researchers to have access to expertly designed value sets but for incremental refinement, that, over time, researchers will take advantage of others’ work, building on it where possible instead of repeating it, evaluating the accuracy of the value sets, and contributing their changes back to the repository. 
Rather than incremental improvement or indications of value sets being vetted and validated, what we see in repositories is proliferation and clutter: new value sets that may or may not have been vetted in any way and junk concept sets, created for some reason but never finished. We have found general agreement in our data that the presence of many alternative value sets for a given condition often leads value set developers to ignore all of them and start from scratch, as there is generally no easy way to tell which will be more appropriate for the researcher's needs. And if they share their value set back to the repository, they further compound the problem, especially if they neglect to document the new value set's intention and provenance. The research offered here casts doubt on the value of reuse with currently available software and infrastructure for value set management. It is about understanding the challenges value sets present; understanding how they are made, used, and reused; and offering practice and software design recommendations to advance the ability of researchers to efficiently make or find accurate value sets for their studies, leveraging and adding to prior value set development efforts. This required field work, and, with my advisors, I conducted a qualitative study of professionals in the field: an observational user study with the aim of understanding and characterizing normative and real-world practices in value set construction and validation, with a particular focus on how researchers use the knowledge embedded in medical terminologies and ontologies to inform that work. I collected data through an online survey of RWD analysts and researchers, interviews with a subset of survey participants, and observation of certain participants performing actual work to create value sets. We performed open coding and thematic analysis on interview and observation transcripts, interview notes, and open-ended question text from the surveys.
The requirements, recommendations, and theoretical contributions in prior literature have not been sufficient to guide the design of software that could make effective leveraging of shared value sets a reality. This dissertation presents a conceptual framework, real-world experience, and a deep, detailed account of the challenges to reuse, and makes up that deficit with a high-level requirements roadmap for improved value set creation tools. I argue, based on the evidence marshalled throughout, that there is one way to get researchers to reuse appropriate value sets, or to follow best practices in determining whether a new one is absolutely needed before creating their own, and to dedicate sufficient and appropriate effort to creating them well and preparing them for reuse by others. That is, giving them software that pushes them to do these things, mostly by making it easy and obviously beneficial to do them. I offer a start in building such software with Value Set Hub, a platform for browsing, comparing, analyzing, and authoring value sets—a tool in which the presence of multiple, sometimes redundant, value sets for the same condition strengthens rather than stymies efforts to build on the work of prior value set developers. Particular innovations include the presentation of multiple value sets on the same screen for easy comparison, the display of compared value sets in the context of vocabulary hierarchies, the integration of these analytic features and value set authoring, and value set browsing features that encourage users to review existing value sets that may be relevant to their needs. Fitness-for-use is identified as the central challenge for value set developers, and the strategies for addressing this challenge are categorized into two approaches: value-set-focused and code-focused. The concluding recommendations offer a roadmap for future work in building the next generation of value set repository platforms and authoring tools.
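    The abstract above describes value sets as groupings of vocabulary codes used to select patient records in analytic algorithms. As a minimal, purely illustrative sketch of that idea — all codes, records, and names below are invented for illustration, not drawn from the dissertation; real analyses use codes from vocabularies like ICD10 or SNOMED:

    ```python
    # Hypothetical sketch: a value set as a set of concept codes, applied
    # to coded patient records to select a study cohort. Codes and records
    # here are invented; real value sets are curated from vocabularies
    # such as ICD10, SNOMED, RxNorm, CPT, or LOINC.

    # A value set for a condition might group several ICD-10-style codes.
    condition_value_set = {"E11.9", "E11.65", "E11.8"}  # hypothetical selection

    # Coded patient records as (patient_id, code) pairs.
    records = [
        (1, "E11.9"),   # in the value set
        (2, "I10"),     # not in the value set
        (3, "E11.65"),  # in the value set
    ]

    # Cohort selection: patients with at least one matching code.
    # A value set that misses codes (or includes wrong ones) changes
    # this cohort — the accuracy problem the abstract describes.
    cohort = {pid for pid, code in records if code in condition_value_set}
    print(sorted(cohort))  # [1, 3]
    ```

    This toy example only shows why value set membership directly determines which patients enter a study; the curation, validation, and reuse of such sets is the hard part the dissertation addresses.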
  • Item
    Understanding Sustainability Practices and Challenges in Making and Prototyping
    (2024) Dhaygude, Mrunal Sanjay; Peng, Huaishu; Information Studies; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Democratization of prototyping technologies like 3D printers and laser cutters has accelerated rapid prototyping for research, product development, and personal interest. While prototyping is becoming an easier and faster process, many of its sustainability implications are neglected. To investigate the current sustainability landscape within the realm of making, we conducted a comprehensive semi-structured interview study involving 15 participants, encompassing researchers, makerspace managers, entrepreneurs, and casual makers. In this paper, we present the findings from this study, shedding light on the challenges, knowledge gaps, motivations, and opportunities that influence sustainable making practices. We discuss potential future paradigms of HCI research to help resolve sustainability challenges in the maker community.
  • Item
    Studying the Effects of Colors Within Virtual Reality (VR) on Psychological and Physical Behavior
    (2024) Fabian, Ciara Aliese; Aston, Jason; Library & Information Services; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Color theory is an important aspect of today's world, especially when considering user design, technology, and art. The primary objective of this thesis is to examine how the color groups, warm and cool, affect individuals psychologically and physiologically. By combining technological advancements, physiological methods, and psychological analyses, I will try to discover the emotional associations with specific color groups and determine the psychological and physiological impact of color groups on individuals. I hypothesize that warm colors will increase heart rate and skin conductance response, which will directly correlate to emotions of stress and excitement, and cool colors will decrease heart rate and skin conductance, which is associated with the emotions of calmness and positivity. This study demonstrated that the two color groups exhibited a notable influence on heart rate. Using the skin conductance response method yielded unanticipated results in comparison to prior research. Prior studies have shown that there is a relationship between heart rate and skin conductance response, and therefore, if one increases, then the other should also increase. This study found that when the heart rate increased, many participants experienced a decrease in skin conductance response, showcasing a contrast in physiological reaction. Furthermore, the study demonstrated a correlation between physiological changes, such as heart rate variations, and corresponding changes in participants' psychological behavior.
  • Item
    REVISITING SHAKESPEARE'S WORLD: OPTIMIZING DATA OUTCOMES AND INVESTIGATING CONTRIBUTOR DYNAMICS
    (2024) Wang, ZhiCheng; Van Hyning, Victoria; Library & Information Services; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    In this study, we present our work processing data output from Shakespeare's World (2015-2019), an early transcription project hosted on the Zooniverse online crowdsourcing platform. We refined the dataset to make it more amenable to low-code tools such as OpenRefine, enabling easier exploration and reuse. Utilizing the cleaned dataset, we also explored Shakespeare's World volunteers’ contribution patterns. By documenting our process of cleaning the outcome dataset, we provide steps and insights that may be useful for other transcription projects working with data derived from the Zooniverse platform. In addition to offering one plausible way to clean and analyze Zooniverse outcome data, our study also reveals the significant contributions from both anonymous and registered Shakespeare’s World volunteers; the challenges in maintaining participation over the project’s lifespan; and how the original aggregation protocol, which was designed specifically to combine multiple transcriptions by Shakespeare’s World volunteers, resulted in fewer successfully transcribed lines than expected. These findings have broader implications for project design, volunteer engagement, and data management practices in online crowdsourced transcription projects.
  • Item
    Behavior Displacement in Sedentary and Screen Time Among Older Adults
    (2024) Li, Mengying; Choe, Eun Kyoung; Library & Information Services; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    In this thesis, I examine sedentary and screen-based activities among older adults, aiming to offer insights for designing effective behavior displacement interventions. While displacement represents a potentially effective intervention in reducing sedentary behavior, research in this area has largely overlooked older adults. Through a 7-day diary study and debriefing interviews, I examine reasons and factors that influence older adults' decisions to displace sedentary and screen-based activities. I find that attention demand and overall productivity and quality of activities are key factors that influence older adults' decisions to engage in displacement. I identify internal and external catalysts for displacement and preferred displacement strategies by older adults in various conditions. These findings emphasize the importance of designing personalized and adaptive interventions to reduce sedentary time, considering the diverse preferences and agency of older adults.
  • Item
    The Role of 3D Spatiotemporal Telemetry Analysis in Combat Flight Simulation
    (2024) Mane, Sourabh Vijaykumar; Elmqvist, Niklas; Library & Information Services; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Analyzing 3D telemetry data collected from competitive video games on the internet can support players in improving performance as well as spectators in viewing data-driven narratives of the gameplay. In this thesis, we conduct an in-depth qualitative study on the use of telemetry analysis by embedding over several weeks in a virtual F-14A Tomcat squadron in the multiplayer combat flight simulator DCS World (DCS) (2008). Based on formative interviews with DCS pilots, we design a web-based game analytics framework for rendering 3D telemetry from the flight simulator in a live 3D player, incorporating many of the data displays and visualizations requested by the participants. We then evaluate the framework with real mission data from several air-to-air engagements involving the virtual squadron. Our findings highlight the key role of 3D telemetry playback in competitive multiplayer gaming.