Perceptron Learning with Random Coordinate Descent

msra(2005)

Abstract
A perceptron is a linear threshold classifier that separates examples with a hyperplane. It is perhaps the simplest learning model that is used standalone. In this paper, we propose a family of random coordinate descent algorithms for perceptron learning on binary classification problems. Unlike most perceptron learning algorithms, which require smooth cost functions, our algorithms directly minimize the training error, and usually achieve the lowest training error compared with other algorithms. The algorithms are also computationally efficient. Such advantages make them favorable for both standalone use and ensemble learning, on problems that are not linearly separable. Experiments show that our algorithms work very well with AdaBoost, and achieve the lowest test errors for half of the data sets.
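The abstract's core idea, descending the 0/1 training error along randomly chosen coordinates rather than a smooth surrogate cost, can be sketched roughly as below. All function names here are hypothetical, and the fixed grid of candidate values is a simplification of the paper's exact line search along random directions:

```python
import numpy as np

def train_error(w, b, X, y):
    # Fraction of examples misclassified by the threshold classifier sign(w.x + b).
    pred = np.where(X @ w + b >= 0.0, 1, -1)
    return float(np.mean(pred != y))

def rcd_perceptron(X, y, iters=300, seed=0):
    # Hypothetical sketch of random coordinate descent on the 0/1 training
    # error: each round picks one random coordinate (or the bias term) and
    # scans a grid of candidate values, keeping any value that lowers the
    # training error. Because only the error itself is compared, no smooth
    # cost function is needed.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    best = train_error(w, b, X, y)
    grid = np.linspace(-2.0, 2.0, 41)
    for _ in range(iters):
        j = int(rng.integers(d + 1))  # index d plays the role of the bias
        for v in grid:
            if j < d:
                old, w[j] = w[j], v
                if train_error(w, b, X, y) < best:
                    best = train_error(w, b, X, y)
                else:
                    w[j] = old  # revert: this candidate did not improve
            else:
                old, b = b, v
                if train_error(w, b, X, y) < best:
                    best = train_error(w, b, X, y)
                else:
                    b = old
    return w, b, best
```

Since the accepted error is monotonically non-increasing, the returned training error never exceeds that of the zero classifier; on linearly separable data the sketch typically drives it near zero.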
Keywords
ensemble learning, cost function