Hypergraph Self-supervised Learning with Sampling-efficient Signals
CoRR (2024)
Abstract
Self-supervised learning (SSL) provides a promising alternative for
representation learning on hypergraphs without costly labels. However, existing
hypergraph SSL models are mostly based on contrastive methods with an
instance-level discrimination strategy, which suffers from two significant
limitations: (1) They select negative samples arbitrarily, making the choice of
similar and dissimilar pairs unreliable and causing training bias. (2) They
often require a large number of negative samples, resulting in high
computational costs. To address these issues, we propose SE-HSSL, a
hypergraph SSL framework with three sampling-efficient self-supervised signals.
Specifically, we introduce two sampling-free objectives leveraging canonical
correlation analysis as the node-level and group-level self-supervised signals.
Additionally, we develop a novel hierarchical membership-level contrast
objective motivated by the cascading overlap relationship in hypergraphs, which
further reduces membership sampling bias and improves the efficiency of sample
utilization. Through comprehensive experiments on 7 real-world hypergraphs, we
demonstrate the superiority of our approach over state-of-the-art methods in
terms of both effectiveness and efficiency.
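The abstract's key efficiency claim is that a canonical-correlation-style objective needs no negative samples at all. As a rough illustration of that idea (not the paper's exact loss; the function name, hyperparameter `lam`, and normalization details are assumptions in the style of CCA-based SSL objectives), such a loss pulls two augmented views of the same nodes together while decorrelating embedding dimensions:

```python
import numpy as np

def cca_ssl_loss(z1, z2, lam=1e-3):
    """Sketch of a sampling-free, CCA-style SSL objective.

    z1, z2: (N, D) node embeddings from two augmented views of a hypergraph.
    Returns invariance + lam * decorrelation; no negative pairs are drawn,
    so the cost is O(N * D^2) rather than growing with a negative-sample count.
    """
    n = z1.shape[0]
    # Standardize each embedding dimension, scaled so Z.T @ Z approximates
    # the (D x D) correlation matrix of the view.
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-8) / np.sqrt(n)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-8) / np.sqrt(n)
    # Invariance term: the two views of each node should agree.
    invariance = ((z1 - z2) ** 2).sum()
    # Decorrelation term: push each view's correlation matrix toward identity,
    # preventing the collapsed solution where all embeddings are equal.
    eye = np.eye(z1.shape[1])
    decorrelation = (((z1.T @ z1) - eye) ** 2).sum() + (((z2.T @ z2) - eye) ** 2).sum()
    return invariance + lam * decorrelation
```

Because the loss depends only on per-view correlation statistics, identical views minimize the invariance term exactly, which is what removes the need for negative sampling.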