Driving through the Concept Gridlock: Unraveling Explainability Bottlenecks in Automated Driving.

IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2024

Abstract
Concept bottleneck models have been used successfully for explainable machine learning by encoding information within the model through a set of human-defined concepts. In the context of human-assisted or autonomous driving, explainability models can improve user acceptance and understanding of the decisions made by the autonomous vehicle, which can be used to rationalize and explain driver or vehicle behavior. We propose a new approach that uses concept bottlenecks as visual features for control command prediction and for explanations of user and vehicle behavior. We learn a human-understandable concept layer that we use to explain sequential driving scenes while learning vehicle control commands. This approach can then be used to determine whether a change in a preferred gap or in steering commands from a human (or autonomous vehicle) is caused by an external stimulus or by a change in preferences. We achieve performance competitive with latent visual features while gaining interpretability within our model setup.
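To make the concept-bottleneck idea in the abstract concrete, the sketch below shows one plausible way a control head could be restricted to read only an interpretable concept layer. This is an illustrative assumption, not the paper's actual architecture: the module names, concept dimensionality, two-command output (gap preference, steering), and joint loss weighting are all hypothetical.

```python
import torch
import torch.nn as nn

class ConceptBottleneckController(nn.Module):
    """Minimal concept-bottleneck sketch: a visual encoder predicts
    human-defined concepts, and the control head sees only those concepts."""

    def __init__(self, encoder: nn.Module, feat_dim: int, n_concepts: int):
        super().__init__()
        self.encoder = encoder                                # any backbone returning (B, feat_dim)
        self.concept_head = nn.Linear(feat_dim, n_concepts)   # concept logits (interpretable layer)
        self.control_head = nn.Linear(n_concepts, 2)          # e.g. preferred gap, steering (assumed)

    def forward(self, frames: torch.Tensor):
        feats = self.encoder(frames)                          # latent visual features
        concepts = torch.sigmoid(self.concept_head(feats))    # human-understandable bottleneck
        controls = self.control_head(concepts)                # control commands from concepts only
        return concepts, controls

def joint_loss(concepts, concept_labels, controls, control_labels, alpha=1.0):
    """Supervise the concept layer and the control commands together (sketch)."""
    concept_loss = nn.functional.binary_cross_entropy(concepts, concept_labels)
    control_loss = nn.functional.mse_loss(controls, control_labels)
    return control_loss + alpha * concept_loss
```

Because the control head receives only the concept activations, an inspection of those activations offers one way to attribute a change in predicted gap or steering to an external stimulus versus a change in preferences, which is the kind of explanation the abstract describes.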
Keywords
Applications, Autonomous Driving, Algorithms, Explainable, fair, accountable, privacy-preserving, ethical computer vision