Investigating the impact of hybrid optimization strategies on distributed machine learning algorithms

2014

Abstract
Many parallel data-driven systems have been successful in their ability to store and process large volumes of data. This has led to increased interest in performing large-scale analytics on such data. Much acclaimed for its ability to scale to petabytes of data, the MapReduce framework has nevertheless been found limiting for iterative algorithms, which form the basis of many domains of data analysis. To address these challenges, various new techniques have been proposed, usually revolving around either extending existing systems or building specialized, domain-specific systems. Tackling the problem at the algorithmic level instead, we propose a set of optimization techniques that train either locally, producing a sub-optimal but fast solution, or globally, producing a slower yet optimal solution. We evaluate the tradeoffs between these training approaches along the dimensions of quality and performance. Further, we propose and investigate hybrid training techniques as a possible "middle ground" that aim for a better solution while taking substantially less time than the global approaches. Initial experiments show that the proposed architecture yields accurate predictions in a shorter training time within an easy-to-use framework. Our study aims to provide guidelines that help data scientists choose the most effective combination for the performance and cost requirements of a given learning task.
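To make the local/global/hybrid distinction concrete, here is a minimal sketch, not the paper's actual framework: least-squares linear regression trained by gradient descent over data partitions. Local training fits each partition independently and averages once; global training averages gradients every step; hybrid training runs local steps with periodic model averaging. The partition count, step size, and `sync_every` period are illustrative assumptions, not values from the paper.

```python
# Sketch of local vs. global vs. hybrid training over data partitions
# (illustrative only; hyperparameters are assumptions, not the paper's).
import numpy as np

def gradient(w, X, y):
    """Gradient of the mean squared error 0.5 * ||Xw - y||^2 / n."""
    return X.T @ (X @ w - y) / len(y)

def train_local(parts, steps=100, lr=0.1):
    """Local: each partition trains independently; average once at the end.
    Fast (no communication during training) but generally sub-optimal."""
    models = []
    for X, y in parts:
        w = np.zeros(X.shape[1])
        for _ in range(steps):
            w -= lr * gradient(w, X, y)
        models.append(w)
    return np.mean(models, axis=0)

def train_global(parts, steps=100, lr=0.1):
    """Global: average gradients across partitions on every step.
    Matches centralized training but pays communication per step."""
    w = np.zeros(parts[0][0].shape[1])
    for _ in range(steps):
        w -= lr * np.mean([gradient(w, X, y) for X, y in parts], axis=0)
    return w

def train_hybrid(parts, steps=100, lr=0.1, sync_every=10):
    """Hybrid: local steps with periodic model averaging, trading a
    little communication for much of the global solution quality."""
    models = [np.zeros(X.shape[1]) for X, _ in parts]
    for t in range(steps):
        models = [w - lr * gradient(w, X, y)
                  for w, (X, y) in zip(models, parts)]
        if (t + 1) % sync_every == 0:
            avg = np.mean(models, axis=0)
            models = [avg.copy() for _ in models]
    return np.mean(models, axis=0)

# Toy usage: two partitions drawn from the same linear model.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
parts = []
for _ in range(2):
    X = rng.normal(size=(200, 2))
    parts.append((X, X @ w_true + 0.1 * rng.normal(size=200)))
for name, fn in [("local", train_local), ("global", train_global),
                 ("hybrid", train_hybrid)]:
    print(name, np.round(fn(parts), 3))
```

In this sketch the `sync_every` parameter is the knob the abstract's "middle ground" alludes to: at `sync_every=1` the hybrid degenerates to global training, and at `sync_every >= steps` it degenerates to local training.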