Designing for the Human in the Loop: Transparency and Control in Interactive Machine Learning
DC Field | Value | Language
dc.contributor.advisor | Boyd-Graber, Jordan | en_US |
dc.contributor.advisor | Findlater, Leah | en_US |
dc.contributor.author | Renner, Alison Marie | en_US |
dc.contributor.department | Computer Science | en_US |
dc.contributor.publisher | Digital Repository at the University of Maryland | en_US |
dc.contributor.publisher | University of Maryland (College Park, Md.) | en_US |
dc.date.accessioned | 2020-07-08T05:34:15Z | |
dc.date.available | 2020-07-08T05:34:15Z | |
dc.date.issued | 2020 | en_US |
dc.description.abstract | Interactive machine learning techniques inject domain expertise to improve or adapt models. Prior research has focused on adapting underlying algorithms and optimizing system performance, which comes at the expense of user experience. This dissertation advances our understanding of how to design for human-machine collaboration, improving both user experience and system performance, through four studies of end users' experience, perceptions, and behaviors with interactive machine learning systems. In particular, we focus on two critical aspects of interactive machine learning: how systems explain themselves to users (transparency) and how users provide feedback or guide systems (control). We first explored how explanations shape users' experience of a simple text classifier, with or without the ability to provide feedback to it. Users were frustrated when given explanations without a means to provide feedback, and they expected the model to improve over time even in the absence of feedback. To explore transparency and control in the context of more complex models and subjective tasks, we chose an unsupervised machine learning case: topic modeling. First, we developed a novel topic visualization technique and compared it against common topic representations (e.g., word lists) for interpretability. While users quickly understood topics with simple word lists, our visualization exposed phrases that other representations obscured. Next, we developed a novel, "human-centered" interactive topic modeling system supporting users' desired control mechanisms. A formative user study with this system identified two aspects of control exposed by transparency: adherence, or whether models incorporate user feedback as expected, and stability, or whether other unexpected model updates occur. Finally, we further studied adherence and stability by comparing user experience across three interactive topic modeling approaches. These approaches incorporate input differently, resulting in varied adherence, stability, and update speeds. Participants disliked slow updates most, followed by lack of adherence. Instability was polarizing: some participants liked it when it surfaced interesting information, while others did not. Across modeling approaches, participants differed only in whether they noticed adherence. This dissertation contributes to our understanding of how end users comprehend and interact with machine learning models and provides guidelines for designing systems for the "human in the loop." | en_US |
dc.identifier | https://doi.org/10.13016/ze3u-bfbq | |
dc.identifier.uri | http://hdl.handle.net/1903/26063 | |
dc.language.iso | en | en_US |
dc.subject.pqcontrolled | Computer science | en_US |
dc.subject.pquncontrolled | control | en_US |
dc.subject.pquncontrolled | human-centered machine learning | en_US |
dc.subject.pquncontrolled | human-in-the-loop | en_US |
dc.subject.pquncontrolled | interactive machine learning | en_US |
dc.subject.pquncontrolled | interactive topic modeling | en_US |
dc.subject.pquncontrolled | transparency | en_US |
dc.title | Designing for the Human in the Loop: Transparency and Control in Interactive Machine Learning | en_US |
dc.type | Dissertation | en_US |
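The abstract compares a novel topic visualization against common topic representations such as top-word lists. As a rough illustration of that baseline representation only (this is not code from the dissertation; it assumes the third-party gensim library and a hypothetical toy corpus), a standard LDA topic model can be fit and each topic rendered as a word list:

# A minimal sketch of the "word list" topic representation discussed in the
# abstract. Assumes the gensim library and a hypothetical toy corpus; it is
# not the dissertation's system.
from gensim import corpora
from gensim.models import LdaModel

# Hypothetical pre-tokenized documents spanning two rough themes.
docs = [
    ["model", "topic", "word", "document", "corpus"],
    ["user", "feedback", "interface", "control", "study"],
    ["topic", "word", "corpus", "model", "inference"],
    ["user", "study", "interface", "feedback", "participant"],
]

dictionary = corpora.Dictionary(docs)                # vocabulary mapping
corpus = [dictionary.doc2bow(doc) for doc in docs]   # bag-of-words vectors

# Fit a small LDA model; random_state makes the toy run repeatable.
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2,
               passes=20, random_state=0)

# Render each topic as a simple top-word list, the representation the
# dissertation's visualization is compared against.
for topic_id in range(lda.num_topics):
    top_words = [word for word, _ in lda.show_topic(topic_id, topn=5)]
    print(f"Topic {topic_id}: " + ", ".join(top_words))

On a real corpus, each printed line would be one topic's word list; the dissertation's finding is that users understand such lists quickly, but they can obscure multi-word phrases that a richer visualization exposes.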