SIMULATION, REPRESENTATION, AND AUTOMATION: HUMAN-CENTERED ARTIFICIAL INTELLIGENCE FOR AUGMENTING VISUALIZATION DESIGN

dc.contributor.advisor: Elmqvist, Niklas
dc.contributor.author: Shin, Sungbok
dc.contributor.department: Computer Science
dc.contributor.publisher: Digital Repository at the University of Maryland
dc.contributor.publisher: University of Maryland (College Park, Md.)
dc.date.accessioned: 2024-09-23T05:59:25Z
dc.date.available: 2024-09-23T05:59:25Z
dc.date.issued: 2024
dc.description.abstract: Data visualization is a powerful strategy for using graphics to represent data for effective communication and analysis. Unfortunately, creating effective data visualizations is challenging for novice and expert designers alike. The task often involves an iterative process of trial and error that is, by nature, time-consuming. Designers frequently seek feedback to ensure their visualizations convey the intended message clearly to their target audience. However, obtaining feedback from peers can be difficult, and alternatives such as user studies or crowdsourcing are costly and time-consuming. This suggests the potential for a tool that provides design feedback on visualizations. To that end, I create a virtual, human vision-inspired system that examines a visualization design and provides feedback on it using various AI techniques. The goal is not to replicate an exact version of the human eye. Instead, my work aims to develop a practical and effective system that delivers design feedback to visualization designers, utilizing advanced AI techniques such as deep neural networks (DNNs) and large language models (LLMs). My thesis comprises three distinct works, each contributing to a virtual system inspired by human vision. Specifically, these works focus on simulation, representation, and automation, collectively progressing toward that aim. First, I develop a methodology to simulate human perception in machines through a virtual eye tracker named A SCANNER DEEPLY; this involves gathering eye-gaze data on chart images and training a DNN on them. Second, I focus on effectively and pragmatically representing a virtual human vision-inspired system by creating PERCEPTUAL PAT, which includes a suite of perceptually based filters. Third, I automate the feedback-generation process with VISUALIZATIONARY, leveraging large language models to enhance the automation.
I report on challenges and lessons learned about the key components and design considerations that help visualization designers. Finally, I conclude the dissertation by discussing future research directions for using AI to augment the visualization design process.
dc.identifier: https://doi.org/10.13016/emzi-aa5u
dc.identifier.uri: http://hdl.handle.net/1903/33362
dc.language.iso: en
dc.subject.pqcontrolled: Computer science
dc.subject.pquncontrolled: Artificial Intelligence
dc.subject.pquncontrolled: Data Science
dc.subject.pquncontrolled: Data Visualization
dc.subject.pquncontrolled: Human-Centered Artificial Intelligence
dc.subject.pquncontrolled: Human-Computer Interaction
dc.subject.pquncontrolled: Information Visualization
dc.title: SIMULATION, REPRESENTATION, AND AUTOMATION: HUMAN-CENTERED ARTIFICIAL INTELLIGENCE FOR AUGMENTING VISUALIZATION DESIGN
dc.type: Dissertation

Files

Original bundle

Name: Shin_umd_0117E_24545.pdf
Size: 78.34 MB
Format: Adobe Portable Document Format