21st Century Algorithms Project Report

Semantic Scholar (2018)

Abstract
This paper studies the Maximum Matching Problem, a classical and widely studied problem in Theoretical Computer Science. A matching is a set of edges in a graph no two of which share an endpoint. In the Maximum Cardinality Matching (MCM) problem, given an unweighted graph, we need to find a matching of maximum size. A generalization of this problem is the Maximum Weighted Matching (MWM) problem, where the input is a weighted graph and we are to find a matching of maximum total weight. Both problems have also been studied specifically for the class of Bipartite Graphs, as many problems commonly encountered in Computer Science naturally reduce to finding bipartite matchings. This paper considers both the MCM and MWM problems in the semi-streaming model, and obtains qualitatively better results for the special case of Bipartite Graphs.

It is important to understand the model that this paper considers. The streaming model is a widely studied model for processing massive data sets. In this model, the input is defined by a stream of data that arrives sequentially; for graph problems, for instance, the stream could be the stream of edges of the graph. An algorithm in this model has to process the input stream in the order it arrives, using a limited amount of memory. The streaming model in general also allows multiple passes over the data: for various problems that appear hard to solve in limited space with a single pass, it has been explored whether they can be solved with a small number of passes over the data stream. For approximation algorithms, several works have shown that there is often a natural trade-off between the number of passes and the approximation factor. Solving most graph problems in the streaming model is difficult if the space allowed is sublinear in the number of vertices, since in that case it is not even possible to store individual information about all vertices simultaneously. On the other hand, for massive graphs there is not enough space to store the entire graph, i.e., all of its edges. Hence, having space roughly proportional to the size of the vertex set, which is essentially sublinear in the number of edges, is an ideal middle ground. The semi-streaming model allows O(n · polylog(n)) space, where n is the number of vertices in the graph, and is hence the most extensively studied model for solving graph problems.

Massive graphs arise naturally in various domains where there is data about entities as well as the relationships between them: people and their friendships on a social network, web pages and hyperlinks, information retrieval, neurons and synapses, papers and their citations, to name a few. So, with limited memory, we need to process such massive graphs.
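To make the model concrete, the standard baseline for MCM in a single pass is the greedy maximal-matching algorithm: keep an arriving edge only if neither of its endpoints is already matched. This uses O(n) space and yields a maximal matching, which is well known to be a 2-approximation of the maximum matching. The sketch below is illustrative and is not taken from the paper:

```python
def greedy_streaming_matching(edge_stream):
    """One-pass greedy matching over a stream of edges (u, v).

    Stores only the set of matched endpoints and the chosen edges,
    i.e. O(n) space, fitting the semi-streaming budget.
    """
    matched = set()   # endpoints already used by the matching
    matching = []
    for u, v in edge_stream:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.add(u)
            matched.add(v)
    return matching

# Example: on the path 1-2-3-4 the greedy pass keeps (1, 2),
# skips (2, 3) because vertex 2 is taken, then keeps (3, 4).
print(greedy_streaming_matching([(1, 2), (2, 3), (3, 4)]))
# [(1, 2), (3, 4)]
```

Note that the output is order-dependent, as is typical for streaming algorithms: the same edges in a different arrival order can produce a different (but still maximal) matching.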