Promoting Rich and Low-Burden Self-Tracking With Multimodal Data Input

dc.contributor.advisor: Choe, Eun Kyoung (en_US)
dc.contributor.author: Luo, Yuhan (en_US)
dc.contributor.department: Library & Information Services (en_US)
dc.contributor.publisher: Digital Repository at the University of Maryland (en_US)
dc.contributor.publisher: University of Maryland (College Park, Md.) (en_US)
dc.date.accessioned: 2022-06-15T05:40:27Z
dc.date.available: 2022-06-15T05:40:27Z
dc.date.issued: 2022 (en_US)
dc.description.abstract: Manual tracking of personal data offers many benefits, such as increased engagement and situated awareness. However, existing self-tracking tools often rely on touch-based input to support manual tracking, imposing a heavy input burden and limiting the richness of the collected data. Inspired by the fast and flexible nature of speech, this dissertation examines how speech input works with traditional touch input to manually capture personal data in different contexts: food practice, productivity, and exercise. As a first step, I conducted co-design workshops with registered dietitians to explore opportunities for customizing food trackers composed of multimodal input. The workshops generated diverse tracker designs to meet dietitians' information needs, spanning a wide range of tracking items, timing, data formats, and input modalities. In the second study, I specifically examined how speech input supports capturing everyday food practice. I created FoodScrap, a speech-based food journaling app, and conducted a data collection study in which FoodScrap not only collected rich details of meals and food decisions but was also recognized for encouraging self-reflection. To further integrate touch and speech on mobile phones, I developed NoteWordy, a multimodal system that combines touch and speech input to capture multiple types of data. Deploying NoteWordy in the context of productivity tracking, I found several input patterns varying by data type as well as by participants' input habits, error tolerance, and social surroundings. Additionally, speech input led to faster entry completion and enhanced the richness of the free-form text. Furthermore, I expanded the research scope to speech input on smart speakers by developing TandemTrack, a multimodal exercise assistant coupling a mobile app with an Alexa skill. A four-week deployment study demonstrated the convenience of hands-free speech input for capturing exercise data and highlighted the importance of visual feedback on the mobile app for data exploration. Across these studies, I describe the strengths and limitations of speech as an input modality for capturing personal data in various contexts, and I discuss opportunities for improving the data capture experience with natural language input. Lastly, I conclude the dissertation with design recommendations toward a low-burden, rich, and reflective self-tracking experience. (en_US)
dc.identifier: https://doi.org/10.13016/pqfg-3ow7
dc.identifier.uri: http://hdl.handle.net/1903/28753
dc.language.iso: en (en_US)
dc.subject.pqcontrolled: Information technology (en_US)
dc.subject.pqcontrolled: Information science (en_US)
dc.subject.pqcontrolled: Computer engineering (en_US)
dc.subject.pquncontrolled: health (en_US)
dc.subject.pquncontrolled: multimodal interaction (en_US)
dc.subject.pquncontrolled: personal informatics (en_US)
dc.subject.pquncontrolled: productivity (en_US)
dc.subject.pquncontrolled: self-tracking (en_US)
dc.subject.pquncontrolled: speech input (en_US)
dc.title: Promoting Rich and Low-Burden Self-Tracking With Multimodal Data Input (en_US)
dc.type: Dissertation (en_US)

Files

Original bundle
Name: Luo_umd_0117E_22389.pdf
Size: 14.69 MB
Format: Adobe Portable Document Format