
The Graph Contrastive Learning Framework PyGCL || Quickly Implement Graph Data Augmentation, Contrasting Modes, Contrastive Objectives, and Negative Sampling


This article is from the 图与推荐 (Graph & Recommendation) WeChat public account. Original link: https://mp.weixin.qq.com/s/TxTse0zv6DYliFcxhfrd3A

Let me recommend a graph contrastive learning framework, PyGCL: Graph Contrastive Learning for PyTorch. It implements a number of recent graph contrastive learning papers from top venues (e.g., DGI, GRACE, and GraphCL) and is under active development.

PyGCL modularizes graph contrastive learning into graph data augmentation, contrasting modes, contrastive objectives, and negative sampling strategies. By configuring each module, you can quickly run a wide range of experiments. It is a pleasure to use.


Link: https://github.com/GraphCL/PyGCL

Package Overview

Our PyGCL implements four main components of graph contrastive learning algorithms:

  • Graph augmentation: transforms input graphs into congruent graph views.
  • Contrasting modes: specifies positive and negative pairs.
  • Contrastive objectives: computes the likelihood score for positive and negative pairs.
  • Negative mining strategies: improves the negative sample set by considering the relative similarity (i.e., the hardness) of negative samples.

Implementations and Examples

  • DGI (P. Veličković et al., Deep Graph Infomax, ICLR, 2019)
  • InfoGraph (F.-Y. Sun et al., InfoGraph: Unsupervised and Semi-supervised Graph-Level Representation Learning via Mutual Information Maximization, ICLR, 2020)
  • MVGRL (K. Hassani et al., Contrastive Multi-View Representation Learning on Graphs, ICML, 2020)
  • GRACE (Y. Zhu et al., Deep Graph Contrastive Representation Learning, GRL+@ICML, 2020)
  • GraphCL (Y. You et al., Graph Contrastive Learning with Augmentations, NeurIPS, 2020)
  • SupCon (P. Khosla et al., Supervised Contrastive Learning, NeurIPS, 2020)
  • HardMixing (Y. Kalantidis et al., Hard Negative Mixing for Contrastive Learning, NeurIPS, 2020)
  • DCL (C.-Y. Chuang et al., Debiased Contrastive Learning, NeurIPS, 2020)
  • HCL (J. Robinson et al., Contrastive Learning with Hard Negative Samples, ICLR, 2021)
  • Ring (M. Wu et al., Conditional Negative Sampling for Contrastive Learning of Visual Representations, ICLR, 2021)
  • Exemplar (N. Zhao et al., What Makes Instance Discrimination Good for Transfer Learning?, ICLR, 2021)
  • BGRL (S. Thakoor et al., Bootstrapped Representation Learning on Graphs, arXiv, 2021)
  • G-BT (P. Bielak et al., Graph Barlow Twins: A Self-Supervised Representation Learning Framework for Graphs, arXiv, 2021)
  • VICReg (A. Bardes et al., VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning, arXiv, 2021)

Building Your Own GCL Algorithms

Besides trying the above examples for node and graph classification tasks, you can also straightforwardly build your own graph contrastive learning algorithms.

Graph Augmentation

In GCL.augmentors, PyGCL provides the Augmentor base class, which offers a universal interface for graph augmentation functions. Specifically, PyGCL implements the following augmentation functions:

| Augmentation | Class name |
| --- | --- |
| Edge Adding (EA) | EdgeAdding |
| Edge Removing (ER) | EdgeRemoving |
| Feature Masking (FM) | FeatureMasking |
| Feature Dropout (FD) | FeatureDropout |
| Personalized PageRank (PPR) | PPRDiffusion |
| Markov Diffusion Kernel (MDK) | MarkovDiffusion |
| Node Dropping (ND) | NodeDropping |
| Subgraphs induced by Random Walks (RWS) | RWSampling |
| Ego-net Sampling (ES) | Identity |

Calling an augmentation function on a graph given as a tuple of node features, edge index, and edge weights (x, edge_index, edge_weights) produces the corresponding augmented graph, as sketched below.
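For instance, here is a minimal sketch of applying a single augmentor; the constructor parameter pe (the edge-removal probability) follows the repository's examples, and the toy graph is made up for illustration:

```python
import torch
import GCL.augmentors as A

# Toy graph: 4 nodes with 8-dim features and 4 directed edges
x = torch.randn(4, 8)
edge_index = torch.tensor([[0, 1, 2, 3],
                           [1, 2, 3, 0]])
edge_weights = torch.ones(edge_index.size(1))

# EdgeRemoving drops each edge independently with probability pe
aug = A.EdgeRemoving(pe=0.3)
x_aug, edge_index_aug, edge_weights_aug = aug(x, edge_index, edge_weights)
```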

PyGCL also supports composing an arbitrary number of augmentations. To compose a list of augmentation instances augmentors, use the right-shift operator >>:

# Chain a list of augmentor instances into a single composed augmentor
aug = augmentors[0]
for a in augmentors[1:]:
    aug = aug >> a
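For example, reusing the toy graph above (the parameter names pe and pf follow the repository's examples):

```python
import GCL.augmentors as A

# First drop edges, then mask feature dimensions, in one composed augmentor
aug = A.EdgeRemoving(pe=0.3) >> A.FeatureMasking(pf=0.3)
x_aug, edge_index_aug, edge_weights_aug = aug(x, edge_index, edge_weights)
```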

You can also write your own augmentation function by subclassing Augmentor and defining its augment method; a sketch follows.
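Here is a minimal sketch of a custom augmentor that perturbs node features with Gaussian noise; the FeatureNoise class is hypothetical, and the import path and the Graph container with its unfold method are assumed to match the repository at the time of writing:

```python
import torch
from GCL.augmentors import Augmentor, Graph  # import path assumed

class FeatureNoise(Augmentor):
    """Hypothetical augmentor: add Gaussian noise to node features."""
    def __init__(self, std: float = 0.1):
        super().__init__()
        self.std = std

    def augment(self, g: Graph) -> Graph:
        # Unpack the graph, perturb the features, leave the topology untouched
        x, edge_index, edge_weights = g.unfold()
        x = x + self.std * torch.randn_like(x)
        return Graph(x=x, edge_index=edge_index, edge_weights=edge_weights)
```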

Contrasting Modes

PyGCL implements three contrasting modes: (a) local-local, which contrasts node-level embeddings across views (e.g., GRACE); (b) global-local, which contrasts a graph-level summary with node-level embeddings (e.g., DGI); and (c) global-global, which contrasts graph-level embeddings of different views or graphs (e.g., GraphCL).

Contrastive Objectives

In GCL.losses, PyGCL implements the following contrastive objectives (a standalone InfoNCE sketch follows the table):

| Contrastive objective | Class name |
| --- | --- |
| InfoNCE loss | InfoNCELoss |
| Jensen-Shannon Divergence (JSD) loss | JSDLoss |
| Triplet Margin (TM) loss | TripletLoss |
| Bootstrapping Latent (BL) loss | BootstrapLoss |
| Barlow Twins (BT) loss | BTLoss |
| VICReg loss | VICRegLoss |
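To make the most common objective concrete, here is a standalone sketch of the InfoNCE (NT-Xent) loss in the local-local setting; it is not the library's InfoNCELoss, and it is simplified to use cross-view negatives only:

```python
import torch
import torch.nn.functional as F

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    # z1, z2: [N, D] embeddings of the same N nodes under two augmented views.
    # Row i of z1 pairs positively with row i of z2; all other rows of z2
    # act as negatives (cross-view negatives only, for brevity).
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / tau              # [N, N] scaled cosine similarities
    targets = torch.arange(z1.size(0))   # positives lie on the diagonal
    return F.cross_entropy(sim, targets)
```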

Negative Mining Strategies

In GCL.losses, PyGCL further implements four negative mining strategies that are built upon the InfoNCE contrastive objective (a sketch of the debiased variant follows the table):

| Hard negative mining strategy | Class name |
| --- | --- |
| Hard negative mixing | HardMixingLoss |
| Conditional negative sampling | RingLoss |
| Debiased contrastive objective | InfoNCELoss(debiased_nt_xent_loss) |
| Hardness-biased negative sampling | InfoNCELoss(hardness_nt_xent_loss) |
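To illustrate the debiased variant (Chuang et al., 2020), here is a standalone sketch of the debiased InfoNCE estimator; it is not the library's debiased_nt_xent_loss, and tau_plus (the class-prior probability that a sampled negative actually shares the anchor's class) is an assumed hyperparameter:

```python
import math
import torch
import torch.nn.functional as F

def debiased_infonce(anchor, pos, neg, tau_plus: float = 0.1, t: float = 0.5):
    # anchor, pos: [B, D]; neg: [B, N, D] candidate negatives per anchor.
    # The negative term is corrected for the chance that a sampled
    # "negative" actually shares the anchor's latent class.
    anchor = F.normalize(anchor, dim=-1)
    pos = F.normalize(pos, dim=-1)
    neg = F.normalize(neg, dim=-1)
    pos_sim = torch.exp((anchor * pos).sum(dim=-1) / t)               # [B]
    neg_sim = torch.exp(torch.einsum('bd,bnd->bn', anchor, neg) / t)  # [B, N]
    n = neg.size(1)
    # Debiased negative mass, clamped at its theoretical minimum n * e^(-1/t)
    ng = (neg_sim.sum(dim=-1) - tau_plus * n * pos_sim) / (1 - tau_plus)
    ng = ng.clamp(min=n * math.exp(-1.0 / t))
    return (-torch.log(pos_sim / (pos_sim + ng))).mean()
```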
