Is Epistemic Uncertainty Faithfully Represented by Evidential Deep Learning Methods?
CoRR (2024)
Abstract
Trustworthy ML systems should not only return accurate predictions, but also
a reliable representation of their uncertainty. Bayesian methods are commonly
used to quantify both aleatoric and epistemic uncertainty, but alternative
approaches, such as evidential deep learning methods, have become popular in
recent years. The latter group of methods essentially extends empirical risk
minimization (ERM) to the prediction of second-order probability distributions
over outcomes, from which measures of epistemic (and aleatoric) uncertainty can
be extracted. This paper presents novel theoretical insights into evidential
deep learning, highlighting the difficulties in optimizing second-order loss
functions and in interpreting the resulting epistemic uncertainty measures.
Using a systematic setup that covers a wide range of approaches for
classification, regression, and counts, it provides novel insights into issues
of identifiability and convergence in second-order loss minimization, and into
the relative (rather than absolute) nature of epistemic uncertainty measures.
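To make the notion of a second-order prediction concrete, below is a minimal sketch for the classification case. It assumes the standard Dirichlet parameterization used in the evidential classification literature (e.g., the "vacuity" measure K / alpha0 of Sensoy et al.); the function name and example values are our own illustration, not code or results from this paper.

import numpy as np

def dirichlet_uncertainties(alpha):
    """Aleatoric and epistemic uncertainty measures from a second-order
    (Dirichlet) prediction with concentration parameters alpha."""
    alpha = np.asarray(alpha, dtype=float)
    alpha0 = alpha.sum()            # total evidence (Dirichlet precision)
    p_mean = alpha / alpha0         # expected first-order distribution

    # Aleatoric: entropy of the expected categorical distribution.
    aleatoric = -np.sum(p_mean * np.log(p_mean))

    # Epistemic: a common proxy is "vacuity" K / alpha0, which shrinks as
    # evidence accumulates. The paper argues such measures are meaningful
    # only in a relative, not an absolute, sense.
    epistemic = len(alpha) / alpha0
    return aleatoric, epistemic

# A prediction backed by much evidence vs. a vacuous one (K = 3 classes):
print(dirichlet_uncertainties([50.0, 1.0, 1.0]))  # low epistemic uncertainty
print(dirichlet_uncertainties([1.0, 1.0, 1.0]))   # high epistemic uncertainty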