1) The document summarizes a research paper that proposes DropAGG, a novel dropout mechanism for graph neural networks (GNNs) that randomly selects nodes to not participate in message aggregation.
2) It introduces a Graph Random Aggregation Network (GRANet) that uses DropAGG to enhance GNN robustness against structural noise and adversarial attacks while alleviating over-smoothing.
3) Experiments show GRANet outperforms other GNNs under different attacks and maintains better performance with increased propagation steps, demonstrating the effectiveness of DropAGG in improving GNN robustness.
NS - CUK Seminar: S.T.Nguyen, Review on "DropAGG: Robust Graph Neural Networks via Drop Aggregation", Neural Netw 2023
LAB SEMINAR
Nguyen Thanh Sang
Network Science Lab
Dept. of Artificial Intelligence
The Catholic University of Korea
E-mail: sang.ngt99@gmail.com
DropAGG: Robust Graph Neural Networks via Drop Aggregation
--- Bo Jiang, Yong Chen, Beibei Wang, Haiyun Xu, Bin Luo ---
2023-06-08
Introduction
Graph Neural Networks (GNNs)
+ GNNs have attracted increasing attention due to their effectiveness in graph representation and learning tasks.
Introduction
Dropout in GNN
+ DropEdge: randomly drops a certain rate of edges (the dropout rate) from the original edge set.
+ DropNode: randomly samples a certain rate of nodes according to a Bernoulli distribution (with a drop-probability parameter) and sets the features of these nodes to zero. A sketch of both follows below.
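A minimal sketch of these two baselines (illustrative semantics only, not the original implementations; the parameter names and the 2 x E edge_index layout are assumptions):

import torch

def drop_edge(edge_index, drop_rate=0.2):
    # DropEdge: randomly remove a fraction `drop_rate` of edges.
    num_edges = edge_index.size(1)
    keep = torch.rand(num_edges, device=edge_index.device) >= drop_rate  # Bernoulli keep mask
    return edge_index[:, keep]

def drop_node(x, delta=0.5):
    # DropNode: zero the features of nodes sampled with probability `delta`.
    mask = torch.bernoulli(torch.full((x.size(0), 1), 1.0 - delta, device=x.device))
    return x * mask  # dropped nodes -> all-zero feature rows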
Problems
+ Existing GNNs generally adopt a deterministic message propagation mechanism, which leads to:
- Non-robust performance w.r.t. structural noise and adversarial attacks.
- The over-smoothing issue.
Contributions
• Propose DropAGG, a dropout mechanism for GNN message propagation, to enhance the robustness of GNNs and alleviate the over-smoothing issue.
• Propose an end-to-end Graph Random Aggregation Network (GRANet) for robust graph data representation and learning.
DropAGG
• Randomly sample an indicator variable for each node from a Bernoulli distribution (the distribution parameter controls the participation rate); only nodes whose indicator is active participate in message aggregation.
• Aggregation function: sum or mean.
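A minimal sketch of DropAGG under this reading (assumed semantics, not the authors' code; `p` is the Bernoulli participation probability, and the dense adjacency layout with self-loops is an assumption):

import torch

def drop_agg(x, adj, p=0.5, agg="mean", training=True):
    # x: (N, d) node features; adj: (N, N) adjacency, self-loops assumed added.
    if agg == "mean":
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        messages = (adj @ x) / deg  # mean aggregation over neighbors
    else:
        messages = adj @ x          # sum aggregation
    if not training:
        return messages
    # Indicator r_i ~ Bernoulli(p): r_i = 1 -> node i aggregates,
    # r_i = 0 -> node i skips aggregation and keeps its own features.
    r = torch.bernoulli(torch.full((x.size(0), 1), p, device=x.device))
    return r * messages + (1 - r) * x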
Graph Random Aggregation Network
• Main idea: use DropAGG.
• Multiple DropAGG branches in GRANet perform DropAGG multiple times with different random configurations and obtain multiple predictions.
• The final output takes the average of the branch representations.
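A minimal sketch of this multi-branch averaging (the branch count is an assumption; drop_agg refers to the sketch above):

import torch

def granet_forward(x, adj, num_branches=4, p=0.5):
    # Run DropAGG several times with independent random indicators
    # and average the resulting representations.
    outs = [drop_agg(x, adj, p=p, training=True) for _ in range(num_branches)]
    return torch.stack(outs, dim=0).mean(dim=0)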
Training
• A regularization loss across the branch predictions conducts self-supervised learning in GRANet.
• The overall objective focuses on semi-supervised learning: the semi-supervised cross-entropy loss is combined with the regularization loss via a balancing parameter.
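A minimal sketch of this objective (the exact regularizer form is an assumption; a GRAND-style consistency to the averaged prediction is used here for illustration, with `lam` as the balancing parameter):

import torch
import torch.nn.functional as F

def granet_loss(branch_logits, labels, labeled_mask, lam=1.0):
    probs = [F.softmax(z, dim=-1) for z in branch_logits]
    avg = torch.stack(probs, dim=0).mean(dim=0)
    # Semi-supervised cross-entropy on the labeled nodes (averaged logits).
    mean_logits = torch.stack(branch_logits, dim=0).mean(dim=0)
    ce = F.cross_entropy(mean_logits[labeled_mask], labels[labeled_mask])
    # Consistency regularization: each branch should match the average.
    reg = sum(((p - avg) ** 2).sum(dim=-1).mean() for p in probs) / len(probs)
    return ce + lam * reg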
Experiments
Baseline comparisons
• Compared with recent GNNs including GCN, GAT, APPNP and GMNN, the proposed GRANet generally obtains better performance, which demonstrates the effectiveness of the proposed DropAGG in guiding graph learning tasks.
• GRANet outperforms other popular dropout methods including DropEdge and DropNode (GRAND), which demonstrates that the proposed dropout technique is more effective.
Experiments
Robustness results on noisy datasets
• Compared with recent GNNs, GRANet generally obtains better performance under different adversarial attacks, especially under the Random attack.
• GRANet outperforms DropEdge, which also uses a dropout strategy in GNNs.
=> demonstrates the effectiveness of the proposed DropAGG scheme against graph structural attacks.
• GRANet generally performs better than the recent competing DropNode (GRAND).
=> demonstrates the robustness of GRANet in representing noisy graphs.
Experiments
Over-smoothing results
• As the propagation step increases, GRANet maintains better learning performance and also performs better than the baselines.
=> indicates the effectiveness of the proposed DropAGG in alleviating the over-smoothing issue.
Experiments
Ablation study
• The DropAGG (DA) mechanism clearly improves the learning performance of the baseline method.
• The self-supervised learning (SL) strategy can generally further improve the learning performance.
Experiments
Experiments on pure DropAGG
• The DropAGG mechanism improves the learning performance of GCN and GAT.
• DropAGG generally shows better performance than DropEdge, especially when based on GCN.
• DropAGG combined with different baselines generally obtains better performance.
Experiments
Supplementary experiments
• DropEdge GNNs may be unreliable on very sparse graphs, while DropNode may fail on graphs with identity features.
• DropAGG obtains clear improvements on sparse graphs and on graphs with identity features.
• Bernoulli sampling obtains better performance than some other sampling distributions.
• The model achieves better performance on graph learning with imbalanced data.
Conclusions
• Propose a novel random message-passing mechanism, DropAGG, from which a robust GNN for graph data learning can be derived.
• The main idea of DropAGG is to randomly select some nodes that do not perform message aggregation.
• Using DropAGG, a Graph Random Aggregation Network (GRANet) is constructed for robust graph data learning, dealing with over-smoothing and non-robustness in semi-supervised learning tasks.