Computer Science Research Works
Permanent URI for this collection: http://hdl.handle.net/1903/1593
Search Results
Item  Just Do Something: Comparing Self-proposed and Machine-recommended Stress Interventions among Online Workers with Home Sweet Office (Association for Computing Machinery (ACM), 2023-04-23) Tong, Xin; Mauriello, Matthew Louis; Mora-Mendoza, Marco Antonio; Prabhu, Nina; Kim, Jane Paik; Paredes, Pablo E.

Modern stress management techniques have been shown to be effective, particularly when applied systematically and with the supervision of an instructor. However, online workers usually lack sufficient support from therapists and learning resources to self-manage their stress. To better assist these users, we implemented a browser-based application, Home Sweet Office (HSO), to administer a set of stress micro-interventions which mimic existing therapeutic techniques, including somatic, positive psychology, metacognitive, and cognitive behavioral categories. In a four-week field study, we compared random and machine-recommended interventions to interventions that were self-proposed by participants in order to investigate effective content and recommendation methods. Our primary findings suggest that both machine-recommended and self-proposed interventions had significantly higher momentary efficacy than random selection, whereas machine-recommended interventions offered more activity diversity than self-proposed interventions. We conclude with reflections on these results, discuss features and mechanisms which might improve efficacy, and suggest areas for future work. (A hypothetical sketch of such a recommender appears after the listing.)

Item  Code Code Evolution: Understanding How People Change Data Science Notebooks Over Time (Association for Computing Machinery (ACM), 2023-04) Raghunandan, Deepthi; Roy, Aayushi; Shi, Shenzhi; Elmqvist, Niklas; Battle, Leilani

Sensemaking is the iterative process of identifying, extracting, and explaining insights from data, where each iteration is referred to as the "sensemaking loop." However, little is known about how sensemaking behavior evolves between exploration and explanation during this process. This gap limits our ability to understand the full scope of sensemaking, which in turn inhibits the design of tools that support the process. We contribute the first mixed-method study to characterize how sensemaking evolves within computational notebooks. We study 2,574 Jupyter notebooks mined from GitHub by identifying data science notebooks that have undergone significant iterations, presenting a regression model that automatically characterizes sensemaking activity, and using this regression model to calculate and analyze shifts in activity across GitHub versions. Our results show that notebook authors engage in a variety of sensemaking tasks over time, such as annotation, branching analysis, and documentation. We use our insights to recommend extensions to current notebook environments.
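The first item above compares machine-recommended interventions against self-proposed and random ones, but the abstract does not describe the recommendation algorithm itself. Purely as a hypothetical illustration of how a system could learn which intervention category suits a given worker, the Python sketch below implements an epsilon-greedy bandit over the four category names taken from the abstract, using a self-reported momentary efficacy rating as the reward signal; the class, its parameters, and the rating scale are all assumptions for illustration, not details from the paper.

import random
from collections import defaultdict

# The four intervention categories named in the abstract.
CATEGORIES = ["somatic", "positive psychology",
              "metacognitive", "cognitive behavioral"]

class InterventionRecommender:
    # Hypothetical epsilon-greedy recommender; not the method used by HSO.
    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon           # exploration rate (assumed value)
        self.counts = defaultdict(int)   # times each category was tried
        self.means = defaultdict(float)  # running mean efficacy per category

    def recommend(self):
        # Occasionally explore a random category; otherwise exploit the
        # category with the highest observed mean efficacy so far.
        if not self.means or random.random() < self.epsilon:
            return random.choice(CATEGORIES)
        return max(self.means, key=self.means.get)

    def record(self, category, efficacy):
        # Incrementally update the running mean rating for this category.
        self.counts[category] += 1
        self.means[category] += (efficacy - self.means[category]) / self.counts[category]

In use, each completed intervention would be logged with record(category, rating), gradually biasing recommend() toward the categories that have worked best for that particular worker while still sampling the others occasionally.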
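The second item describes mining 2,574 Jupyter notebooks and keeping only those that underwent significant iteration. As a minimal sketch of what such a filter might look like (not the authors' actual pipeline), the Python below uses the nbformat library and difflib to count code cells added, removed, or replaced between two saved versions of a notebook; the function names and the cell-level granularity are assumptions for illustration.

import difflib
import nbformat

def code_cells(path):
    # Read a notebook and return the source text of each code cell.
    nb = nbformat.read(path, as_version=4)
    return [cell.source for cell in nb.cells if cell.cell_type == "code"]

def cell_changes(old_path, new_path):
    # Align the two cell sequences and classify cell-level differences.
    old, new = code_cells(old_path), code_cells(new_path)
    matcher = difflib.SequenceMatcher(a=old, b=new, autojunk=False)
    added = removed = edited = 0
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "insert":
            added += j2 - j1
        elif tag == "delete":
            removed += i2 - i1
        elif tag == "replace":
            # Crude proxy: treat each replaced slot as an edited cell.
            edited += max(i2 - i1, j2 - j1)
    return {"added": added, "removed": removed, "edited": edited}

# Example call (paths are hypothetical):
# print(cell_changes("analysis_v1.ipynb", "analysis_v2.ipynb"))

A mining pipeline could apply a function like this to consecutive GitHub revisions of each notebook and retain only notebooks whose cumulative change counts cross some threshold, before the downstream regression analysis the abstract describes.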