Information-theoretic metric learning

Proceedings of the 24th International Conference on Machine Learning (2007)

Abstract
In this paper, we present an information-theoretic approach to learning a Mahalanobis distance function. We formulate the problem as that of minimizing the differential relative entropy between two multivariate Gaussians under constraints on the distance function. We express this problem as a particular Bregman optimization problem: that of minimizing the LogDet divergence subject to linear constraints. Our resulting algorithm has several advantages over existing methods. First, our method can handle a wide variety of constraints and can optionally incorporate a prior on the distance function. Second, it is fast and scalable. Unlike most existing methods, no eigenvalue computations or semi-definite programming are required. We also present an online version and derive regret bounds for the resulting algorithm. Finally, we evaluate our method on a recent error reporting system for software called Clarify, in the context of metric learning for nearest neighbor classification, as well as on standard data sets.
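The abstract describes learning a Mahalanobis matrix A by minimizing the LogDet divergence to a prior matrix A0, subject to linear constraints that force similar pairs to be close and dissimilar pairs to be far apart. The sketch below illustrates this idea with cyclic Bregman projections onto one pairwise constraint at a time. It is a minimal illustration under simplifying assumptions, not the paper's reference implementation: it omits the slack variables and prior options the full method supports, and the function names (`itml_sketch`, `mahalanobis_sq`) and the percentile-based thresholds are illustrative choices.

```python
import numpy as np

def mahalanobis_sq(A, x, y):
    """Squared Mahalanobis distance d_A(x, y) = (x - y)^T A (x - y)."""
    v = x - y
    return float(v @ A @ v)

def logdet_divergence(A, A0):
    """LogDet (Burg) divergence D_ld(A, A0) = tr(A A0^{-1}) - log det(A A0^{-1}) - d."""
    d = A.shape[0]
    M = A @ np.linalg.inv(A0)
    sign, logdet = np.linalg.slogdet(M)
    return float(np.trace(M) - logdet - d)

def itml_sketch(X, pairs, A0=None, n_sweeps=50):
    """Simplified LogDet metric learning via cyclic Bregman projections (no slack).

    X     : (n, d) data matrix.
    pairs : list of (i, j, kind), kind 'sim' (distance should be <= u)
            or 'dis' (distance should be >= l).
    Returns a positive-definite Mahalanobis matrix A.
    """
    n, d = X.shape
    A = np.eye(d) if A0 is None else A0.copy()

    # Heuristic thresholds: 5th / 95th percentiles of the initial pairwise distances.
    dists = [mahalanobis_sq(A, X[i], X[j]) for i, j, _ in pairs]
    u, l = np.percentile(dists, 5), np.percentile(dists, 95)

    for _ in range(n_sweeps):
        for i, j, kind in pairs:
            v = X[i] - X[j]
            p = float(v @ A @ v)
            if p < 1e-12:
                continue
            target = u if kind == 'sim' else l
            violated = (p > u) if kind == 'sim' else (p < l)
            if not violated:
                continue
            # Exact Bregman (LogDet) projection onto {A : v^T A v = target}.
            # The rank-one update A <- A + beta * (A v)(A v)^T keeps A positive
            # definite, since the scaled eigenvalue 1 + beta * p = target / p > 0.
            beta = (target - p) / (p * p)
            Av = A @ v
            A = A + beta * np.outer(Av, Av)
    return A
```

For nearest-neighbor classification, as in the Clarify experiments mentioned above, the learned A would then simply replace the identity matrix in the distance computation, i.e. neighbors are ranked by `mahalanobis_sq(A, x, y)`.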
Keywords
particular Bregman optimization problem, derive regret bound, existing method, LogDet divergence subject, metric learning, information-theoretic metric learning, eigenvalue computation, resulting algorithm, Mahalanobis distance function, differential relative entropy, distance function