Paper Review: “Few-shot Graph Classification with Contrastive Loss and Meta-classifier” by Chao Wei, Zhidong Deng


  1. Paper: “Few-shot Graph Classification with Contrastive Loss and Meta-classifier” by Chao Wei, Zhidong Deng. Review by Akanksha Rawat
  2. Few-Shot Learning: Few-shot learning (FSL) uses just a few samples plus prior knowledge to acquire representations that generalize effectively to novel classes. Existing FSL models can generally be divided into three categories: (1) optimization-based methods, (2) memory-based techniques, and (3) metric-based approaches.
  3. Few-Shot Classification: Few-shot classification seeks to train a classifier to recognize previously unseen classes from a small number of labeled instances. Recent works suggest incorporating few-shot learning frameworks for quick adaptation to graph classes with few labeled graphs, addressing the label-scarcity barrier.
  4. Contrastive Representation Learning: Contrastive learning compares samples against one another to identify characteristics shared within a data class and characteristics that distinguish one class from another, improving performance on visual tasks. The fundamental setup involves choosing an “anchor” data sample, a “positive” sample drawn from the same distribution as the anchor, and a “negative” sample drawn from a different distribution (see the sketch below).
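To make the anchor/positive/negative setup concrete, here is a minimal sketch of a margin-based contrastive (triplet) loss in PyTorch. It illustrates the general idea only; the batch size, embedding dimension, and margin are assumed values, and this is not the exact loss formulation from the paper.

```python
import torch
import torch.nn.functional as F

def triplet_contrastive_loss(anchor, positive, negative, margin=1.0):
    """Pull same-class pairs together, push different-class pairs apart.

    anchor/positive/negative: (batch, dim) embedding tensors.
    margin: assumed hyperparameter, not taken from the paper.
    """
    d_pos = F.pairwise_distance(anchor, positive)  # distance to same-class sample
    d_neg = F.pairwise_distance(anchor, negative)  # distance to different-class sample
    return F.relu(d_pos - d_neg + margin).mean()   # zero loss once d_neg > d_pos + margin

# Toy usage with random 64-dimensional embeddings.
a, p, n = (torch.randn(8, 64) for _ in range(3))
print(triplet_contrastive_loss(a, p, n))
```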
  5. Introduction: This paper investigates the problem of few-shot graph classification. The authors present a novel graph contrastive relation network (GCRNet), a practical yet straightforward graph meta-baseline with a contrastive loss and a meta-classifier, which achieves comparable performance for graph few-shot learning.
  6. Paper’s main contributions:
  ● It proposes a contrastive loss that obtains a strong contrastive representation by pushing apart samples from different classes and grouping together graph features belonging to the same class.
  ● It proposes a meta-classifier that starts from the mean feature of the support set and extracts a global feature using GNode, in order to learn a more appropriate similarity metric.
  ● Even with very small support sets, such as 1-shot or 5-shot, SOTA results are obtained in experiments on all four datasets.
  ● The proposed method outperforms the current SOTA method by 10%.
  7. Problem Definition: In an N-way K-shot few-shot task, the support set contains N classes with K samples in each class, and the query set contains the same N classes with Q samples per class. The goal is to classify the N × Q unlabeled samples in the query set into the N classes (an episode-sampling sketch follows below).
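The episodic setup is easy to pin down in code. Below is a minimal sketch of N-way K-shot episode sampling; the `label_to_indices` mapping and the default task sizes are hypothetical choices for illustration, not details from the paper.

```python
import random

def sample_episode(label_to_indices, n_way=5, k_shot=1, q_query=5):
    """Sample one N-way K-shot episode: a support set and a query set.

    label_to_indices: dict mapping each class label to a list of sample
    indices (a hypothetical data structure used here for illustration).
    """
    classes = random.sample(sorted(label_to_indices), n_way)
    support, query = [], []
    for cls in classes:
        picks = random.sample(label_to_indices[cls], k_shot + q_query)
        support += [(idx, cls) for idx in picks[:k_shot]]  # K labeled samples per class
        query += [(idx, cls) for idx in picks[k_shot:]]    # Q samples to classify per class
    return support, query  # |support| = N*K, |query| = N*Q

# Toy usage: 10 classes with 20 samples each.
data = {c: list(range(c * 20, (c + 1) * 20)) for c in range(10)}
support, query = sample_episode(data)
```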
  8. Architecture: Graph Neural Network. A special node called the global node (GNode) is added to the graph, with a directed edge from each graph node to GNode. In the AGGREGATE/UPDATE step, the representation of GNode is updated like that of a normal node, while GNode has no impact on how the GNN learns the other nodes’ properties. Finally, a linear projection is applied, followed by a softmax, to make the prediction (see the sketch below).
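A minimal sketch of the GNode construction, assuming graphs given as dense adjacency matrices: directed edges run from every graph node into the global node, so GNode aggregates information from the whole graph but sends no messages back. This is an illustrative reading of the slide, not the authors' implementation.

```python
import torch

def add_global_node(adj, x):
    """Append a GNode to a graph given as a dense adjacency matrix.

    adj: (n, n) adjacency where adj[i, j] = 1 means a message flows from j to i
         (a convention assumed for this sketch).
    x:   (n, d) node feature matrix.
    """
    n, d = x.shape
    adj_aug = torch.zeros(n + 1, n + 1)
    adj_aug[:n, :n] = adj
    adj_aug[n, :n] = 1.0                       # directed edges: every node -> GNode
    # GNode's column stays zero, so it cannot influence the original nodes.
    x_aug = torch.cat([x, x.new_zeros(1, d)])  # GNode starts from a zero feature
    return adj_aug, x_aug

# Toy usage: a 3-node path graph with 4-dimensional features.
adj = torch.tensor([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
adj_aug, x_aug = add_global_node(adj, torch.randn(3, 4))
```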
  9. Architecture: Meta-learning Algorithm. Existing GNNs together with the novel GNode are chosen as the graph feature extractor to learn a contrastive representation, and a linear layer with a softmax function is chosen as the meta-classifier. The few-shot classification method is considered meta-learning because the training procedure explicitly learns to learn from a given small support set.
  10. Architecture: Meta-learning Framework. 1. First, pre-train GCRNet with a series of meta-training tasks sampled from the base graph set to obtain a feature extractor Fθ. 2. Then, fine-tune the classifier on the support set (a training-loop sketch follows below).
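A minimal sketch of this two-stage recipe in PyTorch, using random tensors as stand-ins for graph embeddings: stage 1 meta-trains the feature extractor with a margin-based contrastive loss, and stage 2 freezes it and fine-tunes only a linear + softmax classifier on the support set. The encoder architecture, loss, and hyperparameters here are assumptions for illustration, not the paper's settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stage 1: pre-train the feature extractor F_theta on meta-training tasks
# sampled from the base graph set (random tensors stand in for graph batches).
encoder = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 64))  # stand-in for GNN + GNode
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
for task in range(1000):                                       # assumed number of meta-training tasks
    a, p, n = (encoder(torch.randn(8, 64)) for _ in range(3))  # anchor / positive / negative batches
    loss = F.relu(F.pairwise_distance(a, p) - F.pairwise_distance(a, n) + 1.0).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: freeze F_theta and fine-tune only a linear + softmax classifier
# on the support set of a novel task.
classifier = nn.Linear(64, 5)                                  # N = 5 ways, assumed
clf_opt = torch.optim.Adam(classifier.parameters(), lr=1e-2)
support_x, support_y = torch.randn(5, 64), torch.arange(5)     # toy 5-way 1-shot support set
for step in range(50):
    logits = classifier(encoder(support_x).detach())           # detach keeps the extractor frozen
    loss = F.cross_entropy(logits, support_y)                  # cross-entropy = softmax + NLL
    clf_opt.zero_grad(); loss.backward(); clf_opt.step()
```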
  11. Architecture
  12. Experiment and Results: Datasets and Backbone. Datasets: Reddit-12K, ENZYMES, Letter-High, and TRIANGLES. Baseline and implementation details: the authors adopted a five-layer graph isomorphism network (GIN) with 64-dimensional hidden units for the performance comparison. To compare fairly with other baselines, they ran the model partitioned into a feature extractor, i.e., GIN (backbone) + GNode, and a classifier (a GIN-layer sketch follows below).
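For reference, a minimal dense-adjacency sketch of a single GIN layer; stacking five of these with 64-dimensional hidden units matches the backbone configuration named on the slide, though the readout and other implementation details here are assumptions.

```python
import torch
import torch.nn as nn

class GINLayer(nn.Module):
    """One GIN layer: h' = MLP((1 + eps) * h + sum of neighbor features)."""
    def __init__(self, dim=64):
        super().__init__()
        self.eps = nn.Parameter(torch.zeros(1))  # learnable epsilon
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, adj, h):
        # adj: (n, n) dense adjacency; h: (n, dim) node features.
        return self.mlp((1 + self.eps) * h + adj @ h)  # adj @ h sums neighbor features

# Five layers with 64-dimensional hidden units, as on the slide.
layers = nn.ModuleList(GINLayer(64) for _ in range(5))
adj, h = torch.eye(10), torch.randn(10, 64)
for layer in layers:
    h = layer(adj, h)
graph_feature = h.sum(dim=0)  # sum readout to a graph-level vector (assumed choice)
```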
  13. Experiment and Results
  14. Experiment and Results
  15. Experiment and Results: Few-Shot Results. The proposed method, GCRNet, achieved the best performance on all four datasets, strongly indicating that the improvements can primarily be attributed to the graph meta-classifier fed with the contrastive loss.
  16. Experiment and Results: Results on Different GNNs. The authors compared four competitive GNN models, i.e., GCN, GraphSAGE, GAT, and GIN, as the backbone of the proposed GCRNet. The model achieves the best results with GIN on all four datasets, which indicates that GIN is more powerful for learning graph-level representations.
  17. Experiment and Results
  18. References
  1. Chao Wei, Zhidong Deng, “Few-shot Graph Classification with Contrastive Loss and Meta-classifier.” https://ieeexplore-ieee-org.libaccess.sjlibrary.org/stamp/stamp.jsp?tp=&arnumber=9892886&tag=1
  19. Thank you!
