A tale of two explanations: Enhancing human trust by explaining robot behavior

Mark Edmonds1, Feng Gao2, Hangxin Liu1, Xu Xie2, Siyuan Qi1, Brandon Rothrock 3, Yixin Zhu2, Ying Nian Wu2, Hongjing Lu2,4, Song-Chun Zhu1,2
1 Department of Computer Science, UCLA | 2 Department of Statistics, UCLA | 3 Jet Propulsion Lab, Caltech | 4 Department of Psychology, UCLA

Abstract

The ability to provide comprehensive explanations of chosen actions is a hallmark of intelligence. Lack of this ability impedes the general acceptance of AI and robot systems in critical tasks. This paper examines what forms of explanations best foster human trust in machines and proposes a framework in which explanations are generated from both functional and mechanistic perspectives. The robot system learns from human demonstrations to open medicine bottles using (i) an embodied haptic prediction model to extract knowledge from sensory feedback, (ii) a stochastic grammar model induced to capture the compositional structure of a multistep task, and (iii) an improved Earley parsing algorithm to jointly leverage both the haptic and grammar models. The robot system not only shows the ability to learn from human demonstrators but also succeeds in opening new, unseen bottles. Using different forms of explanations generated by the robot system, we conducted a psychological experiment to examine what forms of explanations best foster human trust in the robot. We found that comprehensive and real-time visualizations of the robot’s internal decisions were more effective in promoting human trust than explanations based on summary text descriptions. In addition, forms of explanation that are best suited to foster trust do not necessarily correspond to the model components contributing to the best task performance. This divergence shows a need for the robotics community to integrate model components to enhance both task execution and human trust in machines.
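The abstract describes jointly leveraging a stochastic grammar model and a haptic prediction model when choosing the robot's next action. The following is a minimal, illustrative Python sketch of how such a combination could score candidate actions; the action set and both scoring functions are placeholder assumptions, not the paper's implementation or API.

# Minimal sketch (not the authors' code) of combining a symbolic grammar prior
# with a haptic prediction model to select the next action. All names below
# (ACTIONS, grammar_prior, haptic_likelihood) are illustrative assumptions.

ACTIONS = ["approach", "grasp", "push", "twist", "pull", "move"]

def grammar_prior(history, action):
    """Probability of `action` given the parsed action history under an
    induced stochastic grammar (placeholder: uniform over actions)."""
    return 1.0 / len(ACTIONS)

def haptic_likelihood(force_reading, action):
    """Probability that the current haptic (force) reading supports taking
    `action` next, e.g. from a learned haptic predictor (placeholder)."""
    return 1.0 / len(ACTIONS)

def next_action(history, force_reading):
    """Score each candidate by the product of the grammar prior and the
    haptic likelihood, mirroring the joint symbolic + haptic inference."""
    scores = {a: grammar_prior(history, a) * haptic_likelihood(force_reading, a)
              for a in ACTIONS}
    return max(scores, key=scores.get)

Taking the product of the two scores treats the grammar as a prior over action sequences and the haptic model as evidence from the current sensory state, which is one simple way to read the integration described above.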

Selected Figures

[Figure 1: System architecture]
Fig. 1 Overview of demonstration, learning, evaluation, and explainability. By observing human demonstrations, the robot learns, performs, and explains using both a symbolic representation and a haptic representation. (A) Fine-grained human manipulation data were collected using a tactile glove. On the basis of the human demonstrations, the model learns (B) symbolic representations by inducing a grammar model that encodes long-term task structure to generate mechanistic explanations and (C) embodied haptic representations using an autoencoder to bridge the human and robot sensory input in a common space, providing a functional explanation of robot action. These two components are integrated using (D) the generalized Earley parser (GEP) for action planning. These processes complement each other in both (E) improving robot performance and (F) generating effective explanations that foster human trust.
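Panel (C) of the caption describes an autoencoder that embeds human and robot sensory input in a common space. Below is a minimal sketch, assuming PyTorch, of what such an embodied haptic representation could look like; the layer sizes and input/output dimensions are illustrative assumptions, not the paper's values.

# Minimal sketch of an autoencoder bridging human tactile-glove forces and the
# robot gripper's force space via a shared embedding. Dimensions are assumed.

import torch
import torch.nn as nn

class HapticAutoencoder(nn.Module):
    def __init__(self, human_dim=26, embed_dim=8, robot_dim=2):
        super().__init__()
        # Encode high-dimensional human glove forces into a shared embedding.
        self.encoder = nn.Sequential(nn.Linear(human_dim, 32), nn.ReLU(),
                                     nn.Linear(32, embed_dim))
        # Decode the embedding back to the human force space (reconstruction).
        self.decoder = nn.Sequential(nn.Linear(embed_dim, 32), nn.ReLU(),
                                     nn.Linear(32, human_dim))
        # Map the shared embedding onto the robot gripper's force space.
        self.to_robot = nn.Linear(embed_dim, robot_dim)

    def forward(self, human_forces):
        z = self.encoder(human_forces)
        return self.decoder(z), self.to_robot(z)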
[Figure 5: Explanation panels over time]
Fig. 5 Explanations generated by the symbolic planner and the haptic model. (A) Symbolic (mechanistic) and haptic (functional) explanations at a0 of the robot action sequence. (B to D) Explanations at times a2, a8, and a9, respectively, where ai refers to the ith action. Note that the red on the robot gripper’s palm indicates a large magnitude of force applied by the gripper, and green indicates no force; other values are interpolated. These explanations are provided in real time as the robot executes.
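The caption's red/green force rendering can be read as a linear interpolation between two colors by force magnitude. The sketch below shows one simple way to implement such a mapping; the normalization constant max_force is an assumed value, not one taken from the paper.

# Minimal sketch of the red/green force visualization: zero force maps to
# green, max_force (an assumed constant) maps to red, values in between are
# linearly interpolated.

def force_to_rgb(force, max_force=20.0):
    """Map a force magnitude (e.g., newtons) to an (r, g, b) tuple in [0, 1]."""
    t = min(max(force / max_force, 0.0), 1.0)  # clamp to [0, 1]
    return (t, 1.0 - t, 0.0)  # interpolate green -> red

# Example: a light touch renders mostly green, a hard squeeze mostly red.
print(force_to_rgb(2.0))   # approximately (0.1, 0.9, 0.0)
print(force_to_rgb(18.0))  # approximately (0.9, 0.1, 0.0)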

Videos

Bibtex

@article{edmonds2019tale,
  title={A tale of two explanations: Enhancing human trust by explaining robot behavior},
  author={Edmonds, Mark and Gao, Feng and Liu, Hangxin and Xie, Xu and Qi, Siyuan and Rothrock, Brandon and Zhu, Yixin and Wu, Ying Nian and Lu, Hongjing and Zhu, Song-Chun},
  journal={Science Robotics},
  volume={4},
  number={37},
  pages={eaay4663},
  year={2019},
  publisher={AAAS}
}

News Coverage