Design and Evaluation of a Conversational Agent for Mental Health Support: Forming Human-Agent Sociotechnical and Therapeutic Relationships
Many people with mental health disorders face significant barriers to getting the help they need, including the cost of psychological counseling or psychiatric services and the fear of being stigmatized. To address these barriers, text-based conversational agents (chatbots) have gained traction as a new form of e-therapy. Powered by artificial intelligence (AI) and natural language processing, this technology offers more natural interactions and a “judgment-free zone” for clients concerned about stigma. However, the literature on psychotherapeutic chatbots is sparse in both psychology and human-computer interaction (HCI). While recent studies indicate that chatbots provide an affordable and effective therapy delivery method, this research has not thoroughly explained the underlying mechanisms for increasing acceptance of chatbots and making them more engaging. Don Norman (1994) argued that the main difficulties of utilizing intelligent agents are social, not technical, and center on people’s perceptions of agents. In exploring the use of chatbots in psychotherapy, we must therefore investigate how this technology is conceptually understood, and what thoughts and feelings chatbots evoke when people interact with them. This dissertation focuses on two types of relationships critical to the success of chatbot-based mental health interventions: sociotechnical relationships and therapeutic relationships. A sociotechnical relationship concerns technology adoption, usability, and the compatibility between humans and chatbots; a therapeutic relationship encompasses people’s feelings and attitudes toward a chatbot therapist. This dissertation therefore asks: What design principles for a conversational agent best facilitate the development of both sociotechnical and therapeutic relationships to help people manage their mental health?
To investigate this question, I designed an original conversational system with eight gendered and racially heterogeneous personas and one neutral, robot-like persona. Using a mixed-methods approach (an online experiment and interviews), I evaluated factors related to the adoption and use of conversational agents for psychotherapeutic purposes. I also unpacked human-agent relational dynamics and evaluated how anthropomorphism and perceived racial similarity shape people’s perceptions of and interactions with the chatbot. These findings contribute to a wider understanding of conversational AI in mental health support and provide actionable design recommendations.