Natural Language Interaction with Explainable AI Models.

CVPR Workshops (2019)

Cited 27 | Views 457
Abstract
This paper presents an explainable AI (XAI) system that provides explanations for its predictions. The system consists of two key components: the prediction And-Or graph (AOG) model, which recognizes and localizes concepts of interest in the input data, and the XAI model, which explains the AOG's predictions to the user. In this work, we focus on an XAI model designed to interact with the user in natural language, whereas the AOG's predictions are taken as given and represented by the corresponding parse graphs (pg's) of the AOG. Our XAI model takes pg's as input and answers the user's questions using the following types of reasoning: direct evidence (e.g., detection scores), part-based inference (e.g., detected parts provide evidence for the concept asked about), and evidence from spatio-temporal context (e.g., constraints from the spatio-temporal surround). We identify several correlations between the user's questions and the XAI answers using the YouTube Action dataset.
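The abstract does not include code; the sketch below is only an illustration, under assumed data structures, of how a parse graph might feed the three reasoning types named above (direct evidence, part-based inference, spatio-temporal context) into a natural-language answer. All names (`PgNode`, `answer_question`, `find`) are hypothetical and are not taken from the paper.

```python
# Illustrative sketch (not the authors' code): answering a "why" question
# from an AOG parse graph using the three reasoning types in the abstract.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class PgNode:
    """One parse-graph node: a detected concept and its evidence (hypothetical schema)."""
    concept: str                                          # e.g. "throwing", "arm"
    score: float                                          # direct evidence: detection score
    parts: List["PgNode"] = field(default_factory=list)   # part-based evidence
    context: List[str] = field(default_factory=list)      # spatio-temporal cues


def find(node: PgNode, concept: str) -> Optional[PgNode]:
    """Depth-first search for a concept in the parse graph."""
    if node.concept == concept:
        return node
    for part in node.parts:
        hit = find(part, concept)
        if hit is not None:
            return hit
    return None


def answer_question(pg: PgNode, concept: str) -> Optional[str]:
    """Answer 'why do you think <concept> is present?' from the parse graph."""
    node = find(pg, concept)
    if node is None:
        return None
    reasons = [f"it was detected with score {node.score:.2f}"]          # direct evidence
    if node.parts:
        part_names = ", ".join(p.concept for p in node.parts)
        reasons.append(f"its parts ({part_names}) were also detected")  # part-based inference
    if node.context:
        reasons.append("the spatio-temporal context supports it "
                       f"({', '.join(node.context)})")                  # contextual evidence
    return f"I believe '{concept}' is present because " + "; ".join(reasons) + "."


# Example: a tiny parse graph for a "throwing" action
pg = PgNode("throwing", 0.87,
            parts=[PgNode("arm", 0.91), PgNode("ball", 0.78)],
            context=["arm raised in preceding frames"])
print(answer_question(pg, "throwing"))
```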