THE EFFECT OF EXPLANATIONS FROM A ROBOT TEAMMATE ON HUMANS
Abstract
Technological advances have given rise to AI-powered humanoid robots that, via explainable artificial intelligence (XAI) systems, can explain their actions to their human teammates while working to collaboratively complete a task. While the focus has been on advancing XAI algorithms, their effect on humans' neurocognitive processes remains unknown. Although XAI benefits individuals, most investigations have employed simple virtual tasks and behavioral measures. Moreover, while it has been proposed that the quality of explanations can influence the human's response, gradations in quality have received little attention. Further, prior efforts suggested that a human's skill level affects the understanding of explanations, especially when their quality is manipulated. However, XAI work has not used experimental paradigms to robustly assess human skill levels, relying instead on subjective self-reports. Finally, prior human-focused XAI investigations employed behavioral examination without electroencephalography (EEG), which can reveal underlying neural mechanisms. Therefore, by combining behavioral and EEG analyses, this work aimed to investigate how explanation quality affects performance and neurocognitive processes in unskilled and skilled individuals as they collaboratively execute a complex sequential task with a humanoid robot partner that has explanatory capabilities. A series of three studies was conducted in which individuals learned (assessed via a well-established approach) to complete a sequential task. Before and after learning, individuals had to assess computer-generated explanations with several degrees of correctness. The first study was behavioral, with the human performing alone, to validate our approach; the last two studies combined behavioral (performance, trust, subjective mental workload) and EEG (theta, low/high-alpha activity) analyses in a human-robot teaming context.
Collectively, findings from these studies suggest that learning led to the encoding of an internal model of the trained sequence, resulting in enhanced performance and refined neurocognitive processes. In turn, such an internal model would facilitate the processing of explanation quality, decreasing the need to engage neurocognitive mechanisms. This work can inform the understanding of neurocognitive mechanisms in unskilled and skilled humans when teaming with a partner able to explain its behavior, as well as the design and validation of XAI systems embedded in robots that work collaboratively with humans.