Explainable Regression via Prototypes

ACM Trans. Evol. Learn. Optim. (2022)

Abstract
Model interpretability/explainability is increasingly a concern when applying machine learning to real-world problems. In this article, we are interested in explaining regression models by exploiting prototypes, which are exemplar cases in the problem domain. Previous works focused on finding prototypes that are representative of all training data while ignoring the model predictions, i.e., they explain the data distribution but not necessarily the predictions. We propose a two-level model-agnostic method that uses prototypes to provide global and local explanations for regression problems and that accounts for both the input features and the model output. M-PEER (Multiobjective Prototype-basEd Explanation for Regression) is based on a multi-objective evolutionary method that optimizes both the error of the explainable model and two other "semantics"-based measures of interpretability adapted from the context of classification, namely, model fidelity and stability. We compare the proposed method with the state-of-the-art prototype-based explanation method, ProtoDash, and with other methods widely used in related areas of machine learning, such as instance selection and clustering. We conduct experiments on 25 datasets, and the results demonstrate significant gains of M-PEER over the other strategies, with an average improvement of 12% in the proposed metrics (i.e., model fidelity and stability) and of 17% in root mean squared error (RMSE) when compared to ProtoDash.
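The abstract does not spell out how a prototype-based surrogate makes predictions or how fidelity is measured; the following is a minimal sketch of the general idea only, not the paper's actual M-PEER formulation. It assumes a simple 1-nearest-prototype surrogate and measures fidelity as the RMSE between the black-box predictions and the surrogate's predictions (the function names and the toy black box are illustrative).

```python
import numpy as np

def prototype_predict(X, prototypes_X, prototypes_y):
    """Predict each point with the output of its nearest prototype
    (1-NN over the prototype set): a simple explainable surrogate."""
    # pairwise squared distances between points and prototypes, shape (n, k)
    d = ((X[:, None, :] - prototypes_X[None, :, :]) ** 2).sum(axis=2)
    nearest = d.argmin(axis=1)
    return prototypes_y[nearest]

def fidelity_rmse(black_box_pred, surrogate_pred):
    """Fidelity as RMSE between black-box and surrogate predictions
    (lower means the surrogate is more faithful to the model)."""
    return float(np.sqrt(np.mean((black_box_pred - surrogate_pred) ** 2)))

# Toy example: a stand-in "black box" and 5 randomly chosen prototypes.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y_black_box = 2.0 * X[:, 0] + X[:, 1]  # illustrative model output
proto_idx = rng.choice(100, size=5, replace=False)
surrogate = prototype_predict(X, X[proto_idx], y_black_box[proto_idx])
print(fidelity_rmse(y_black_box, surrogate))
```

In M-PEER the prototype set itself is what the evolutionary search optimizes, trading off such a fidelity term against the surrogate's own error and a stability measure; here the prototypes are merely sampled at random to keep the sketch short.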
Keywords
Regression, explanation, example-based explanations, interpretability