NS-CUK Seminar: V.T.Hoang, Review on "Exploiting Neighbor Effect: Conv-Agnostic GNNs Framework for Graphs with Heterophily", TNNLS 2023
1. Van Thuy Hoang
Dept. of Artificial Intelligence,
The Catholic University of Korea
hoangvanthuy90@gmail.com
2. The inter-class edges
The identifiability of neighbors: nodes with the same color share the same class label.
In the top case, the neighbor distribution for the blue class is random; in the bottom case, it is not.
Inter-class edges may help the node classification task if the neighbor distribution is identifiable rather than random.
3. Contributions
A new perspective on heterophily based on neighbor identifiability, quantified with a metric inspired by the von Neumann entropy.
CAGNNs, a general framework that improves classical GNNs by learning the neighbor effect for each node.
Experiments on nine well-known benchmark datasets verifying the effectiveness, interpretability, and robustness of CAGNNs.
4. Graph Neural Networks
Most GNNs assume the local Markov property on node features, i.e., for each node, its label is assumed to depend only on its own features and those of its 1-hop neighbors.
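As a reference point, here is a minimal mean-aggregation layer in NumPy (an illustrative sketch, not the paper's architecture) showing how each node's update uses only itself and its 1-hop neighbors:

import numpy as np

def gnn_layer(adj, features, weight):
    # One generic message-passing layer: each node's new representation
    # depends only on itself and its 1-hop neighbors (local Markov property).
    # adj: (N, N) adjacency with self-loops; features: (N, F_in); weight: (F_in, F_out).
    deg = adj.sum(axis=1, keepdims=True)
    agg = (adj / np.clip(deg, 1, None)) @ features  # average over the local neighborhood
    return np.maximum(agg @ weight, 0.0)            # linear transform + ReLU

# Toy usage: a 4-node path graph, 3-dim features projected to 2 dims.
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
X = np.random.randn(4, 3)
W = np.random.randn(3, 2)
print(gnn_layer(A, X, W).shape)  # (4, 2)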
6. Class-level von Neumann entropy
Measures the information in the neighbors' label distribution matrix.
The metric lies in [0, 1] and quantifies the identifiability of neighbors for a specific class: a lower value indicates more identifiable neighbors.
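For reference, the von Neumann entropy of a density matrix \rho with eigenvalues \lambda_i is the standard quantity below; the paper derives \rho from the class-level neighbor label distribution matrix, and the normalization by \log C (with C classes, assuming a C x C density matrix) is one way to bound the value in [0, 1]:

S(\rho) = -\operatorname{Tr}(\rho \log \rho) = -\sum_{i} \lambda_i \log \lambda_i,
\qquad
\tilde{S}(\rho) = \frac{S(\rho)}{\log C} \in [0, 1].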
7. PROPOSED METHOD
The Conv-Agnostic GNN framework (CAGNNs) improves the performance of traditional GNNs by adaptively learning the node-level neighbor effect.
9. PROPOSED METHOD
Graph Convolution (GC): because the framework is conv-agnostic, any standard graph convolution layer (e.g., GCN, GAT, or GIN) can be applied in this component to aggregate each node's neighborhood information and update the aggregated representation H (see the sketch below).
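A minimal sketch of the conv-agnostic GC component using PyTorch Geometric; build_conv is a hypothetical helper, and the listed layers are only examples of standard convolutions that could be plugged in:

from torch import nn
from torch_geometric.nn import GCNConv, GATConv, GINConv

def build_conv(name, in_dim, out_dim):
    # Any standard graph convolution can serve as the GC component.
    if name == "gcn":
        return GCNConv(in_dim, out_dim)
    if name == "gat":
        return GATConv(in_dim, out_dim, heads=1)
    if name == "gin":
        mlp = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU(),
                            nn.Linear(out_dim, out_dim))
        return GINConv(mlp)
    raise ValueError(f"unknown convolution: {name}")

# Inside each layer: H = conv(x, edge_index) gives the aggregated representation.
conv = build_conv("gcn", in_dim=64, out_dim=64)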
10. PROPOSED METHOD
Mixer: the goal of the mixer function is to estimate the neighbor effect of each node and then selectively incorporate the neighbors' information (see the sketch below).
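A hypothetical gating mixer illustrating this idea (the paper's exact formulation may differ): a per-node gate estimates the neighbor effect and decides how much of the aggregated representation H to blend with the node's own representation:

import torch
from torch import nn

class GatingMixer(nn.Module):
    # Illustrative sketch: learns a per-node neighbor effect alpha in (0, 1)
    # and blends the ego representation x with the aggregated representation h.
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, 1)

    def forward(self, x, h):
        alpha = torch.sigmoid(self.gate(torch.cat([x, h], dim=-1)))
        return (1 - alpha) * x + alpha * h

# Usage: z = GatingMixer(64)(x, h) with x, h of shape (num_nodes, 64).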