Designing for the Human in the Loop: Transparency and Control in Interactive Machine Learning

dc.contributor.advisor: Boyd-Graber, Jordan
dc.contributor.advisor: Findlater, Leah
dc.contributor.author: Renner, Alison Marie
dc.contributor.department: Computer Science
dc.contributor.publisher: Digital Repository at the University of Maryland
dc.contributor.publisher: University of Maryland (College Park, Md.)
dc.date.accessioned: 2020-07-08T05:34:15Z
dc.date.available: 2020-07-08T05:34:15Z
dc.date.issued: 2020
dc.description.abstract: Interactive machine learning techniques inject domain expertise to improve or adapt models. Prior research has focused on adapting underlying algorithms and optimizing system performance, often at the expense of user experience. This dissertation advances our understanding of how to design for human-machine collaboration, improving both user experience and system performance, through four studies of end users' experience, perceptions, and behaviors with interactive machine learning systems. In particular, we focus on two critical aspects of interactive machine learning: how systems explain themselves to users (transparency) and how users provide feedback to or guide systems (control). We first explored how explanations shape users' experience of a simple text classifier, with or without the ability to provide feedback to it. Users were frustrated when given explanations without a means for feedback, and they expected the model to improve over time even in the absence of feedback. To explore transparency and control with more complex models and subjective tasks, we turned to an unsupervised machine learning case: topic modeling. First, we developed a novel topic visualization technique and compared it against common topic representations (e.g., word lists) for interpretability. While users quickly understood topics from simple word lists, our visualization exposed phrases that other representations obscured. Next, we developed a novel, "human-centered" interactive topic modeling system supporting users' desired control mechanisms. A formative user study with this system identified two aspects of control exposed by transparency: adherence, or whether models incorporate user feedback as expected, and stability, or whether other, unexpected model updates occur. Finally, we studied adherence and stability further by comparing user experience across three interactive topic modeling approaches. These approaches incorporate user input differently, resulting in varied adherence, stability, and update speeds. Participants disliked slow updates most, followed by lack of adherence. Instability was polarizing: some participants liked it when it surfaced interesting information, while others did not. Across modeling approaches, participants differed only in whether they noticed adherence. This dissertation contributes to our understanding of how end users comprehend and interact with machine learning models, and it provides guidelines for designing systems for the "human in the loop."
dc.identifier: https://doi.org/10.13016/ze3u-bfbq
dc.identifier.uri: http://hdl.handle.net/1903/26063
dc.language.iso: en
dc.subject.pqcontrolled: Computer science
dc.subject.pquncontrolled: control
dc.subject.pquncontrolled: human-centered machine learning
dc.subject.pquncontrolled: human-in-the-loop
dc.subject.pquncontrolled: interactive machine learning
dc.subject.pquncontrolled: interactive topic modeling
dc.subject.pquncontrolled: transparency
dc.title: Designing for the Human in the Loop: Transparency and Control in Interactive Machine Learning
dc.type: Dissertation

Files

Name: Renner_umd_0117E_20671.pdf
Size: 9.63 MB
Format: Adobe Portable Document Format