On the Value of Labeled Data and Symbolic Methods for Hidden Neuron Activation Analysis
arXiv (2024)
Abstract
A major challenge in Explainable AI is in correctly interpreting activations
of hidden neurons: accurate interpretations would help answer the question of
what a deep learning system internally detects as relevant in the input,
demystifying the otherwise black-box nature of deep learning systems. The state
of the art indicates that hidden node activations can, in some cases, be
interpretable in a way that makes sense to humans, but systematic automated
methods that would be able to hypothesize and verify interpretations of hidden
neuron activations are underexplored. This is particularly the case for
approaches that can both draw explanations from substantial background
knowledge, and that are based on inherently explainable (symbolic) methods.
In this paper, we introduce a novel model-agnostic post-hoc Explainable AI
method and demonstrate that it provides meaningful interpretations. Our approach
is based on using a Wikipedia-derived concept hierarchy with approximately 2
million classes as background knowledge, and utilizes OWL-reasoning-based
Concept Induction for explanation generation. Additionally, we explore and
compare the capabilities of off-the-shelf, pre-trained multimodal-based
explainability methods.
Our results indicate that our approach can automatically attach meaningful
class expressions as explanations to individual neurons in the dense layer of a
Convolutional Neural Network. Evaluation through statistical analysis and
the degree of concept activation in the hidden layer shows that our method provides
a competitive edge in both quantitative and qualitative aspects compared to
prior work.
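
To make the neuron-explanation step concrete, below is a minimal, hypothetical sketch of coverage-based concept induction; it is not the paper's actual OWL-reasoner pipeline. Assuming we have the set of inputs that strongly activate one hidden neuron (positives) and a sample of inputs that do not (negatives), each candidate class from the background hierarchy is scored by how well its instance set separates the two. The names `Candidate`, `score`, and `explain_neuron` are illustrative, not from the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Candidate:
    name: str                     # class label from the concept hierarchy
    instances: frozenset          # inputs annotated with this class

def score(c: Candidate, pos: set, neg: set) -> float:
    """F1-style coverage score: reward covering positives, penalize
    covering negatives."""
    tp = len(c.instances & pos)   # positives the class covers
    fp = len(c.instances & neg)   # negatives the class covers
    fn = len(pos - c.instances)   # positives the class misses
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

def explain_neuron(candidates, pos, neg, top_k=3):
    """Rank candidate classes for one neuron; return the top-k with scores."""
    ranked = sorted(candidates, key=lambda c: score(c, pos, neg), reverse=True)
    return [(c.name, round(score(c, pos, neg), 3)) for c in ranked[:top_k]]

# Toy run: images 1-6; the neuron fires strongly on images 1-3.
pos, neg = {1, 2, 3}, {4, 5, 6}
candidates = [
    Candidate("Dog", frozenset({1, 2, 3})),
    Candidate("Animal", frozenset({1, 2, 3, 4})),
    Candidate("Vehicle", frozenset({5, 6})),
]
print(explain_neuron(candidates, pos, neg))
# [('Dog', 1.0), ('Animal', 0.857), ('Vehicle', 0.0)]
```

The actual method replaces this set-based scoring with OWL-reasoning-based Concept Induction over the roughly 2-million-class Wikipedia-derived hierarchy, but the ranking intuition is similar: prefer class expressions whose instances best match the neuron's high-activation inputs.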