Designing for the Human in the Loop: Transparency and Control in Interactive Machine Learning
Renner, Alison Marie
Interactive machine learning techniques inject domain expertise to improve or adapt models. Prior research has focused on adapting underlying algorithms and optimizing system performance, often at the expense of user experience. This dissertation advances our understanding of how to design for human-machine collaboration--improving both user experience and system performance--through four studies of end users' experiences, perceptions, and behaviors with interactive machine learning systems. In particular, we focus on two critical aspects of interactive machine learning: how systems explain themselves to users (transparency) and how users provide feedback to or guide systems (control).

We first explored how explanations shape users' experience of a simple text classifier, with and without the ability to provide feedback to it. Users were frustrated when given explanations without a means of feedback, and they expected the model to improve over time even in the absence of feedback. To explore transparency and control with more complex models and subjective tasks, we turned to an unsupervised machine learning case: topic modeling. First, we developed a novel topic visualization technique and compared it against common topic representations (e.g., word lists) for interpretability. While users quickly understood topics from simple word lists, our visualization exposed phrases that other representations obscured. Next, we developed a novel, "human-centered" interactive topic modeling system supporting users' desired control mechanisms. A formative user study with this system identified two aspects of control exposed by transparency: adherence, or whether models incorporate user feedback as expected, and stability, or whether other, unexpected model updates occur. Finally, we studied adherence and stability further by comparing user experience across three interactive topic modeling approaches.
These approaches incorporate user input differently, resulting in varied adherence, stability, and update speeds. Participants disliked slow updates most, followed by lack of adherence. Instability was polarizing: some participants liked it when it surfaced interesting information, while others did not. Across modeling approaches, participants differed only in whether they noticed adherence. This dissertation contributes to our understanding of how end users comprehend and interact with machine learning models, and it provides guidelines for designing systems for the "human in the loop."