Out-of-Distribution Generalization in Graph Neural Networks via Contrastive Learning

Document Type : Research Paper

Authors

1 Semnan University, Electrical and Computer Engineering Faculty

2 Damghan University

Abstract

Graph Neural Networks (GNNs) excel at learning from graph-structured data but suffer significant performance degradation under distribution shifts between training and test environments. This paper proposes a Siamese-based contrastive learning framework for improving out-of-distribution (OOD) generalization in node classification tasks. Our approach generates positive samples through feature matrix perturbation without requiring negative samples, thereby reducing computational complexity. The model employs dual GCN encoders and MLP classifiers with shared weights, optimized using a three-component loss function that maximizes representation similarity, prediction consistency, and classification accuracy. Experimental evaluation on GOOD benchmark datasets across both covariate and concept shift scenarios demonstrates that our method outperforms baseline approaches. This work shows that contrastive learning with a Siamese architecture offers a computationally efficient and effective solution for enhancing GNN robustness under distribution shifts, with promising implications for real-world applications requiring reliable model performance in dynamic environments. On average, the proposed method reduces the performance gap between IID and OOD scenarios (the GAP metric) by 19.75%, while achieving an average OOD accuracy of 55.04%.
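The pipeline described in the abstract, a perturbed positive view, shared-weight (Siamese) GCN encoders and MLP classifiers, and a three-component loss, can be sketched as follows. This is an illustrative NumPy mock-up under stated assumptions, not the authors' implementation: the perturbation scale, the cosine-similarity and mean-squared-consistency terms, and the loss weights (0.5) are all hypothetical choices consistent with the abstract's description.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize_adj(A):
    # Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return (A_hat * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]

def gcn_encode(A_norm, X, W):
    # One GCN layer (shared weights W -> Siamese branches) with ReLU
    return np.maximum(A_norm @ X @ W, 0.0)

def mlp_classify(H, W_c):
    # Linear classifier head + softmax (weights shared across branches)
    logits = H @ W_c
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cosine_sim(Z1, Z2):
    num = (Z1 * Z2).sum(axis=1)
    den = np.linalg.norm(Z1, axis=1) * np.linalg.norm(Z2, axis=1) + 1e-8
    return num / den

# Toy graph: 4 nodes on a path, 3 features, 2 classes (synthetic data)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 3))
y = np.array([0, 0, 1, 1])

W = rng.normal(size=(3, 5))    # shared GCN encoder weights
W_c = rng.normal(size=(5, 2))  # shared classifier weights

A_norm = normalize_adj(A)
# Positive view via feature-matrix perturbation (no negative samples needed)
X_pert = X + 0.1 * rng.normal(size=X.shape)

Z1, Z2 = gcn_encode(A_norm, X, W), gcn_encode(A_norm, X_pert, W)
P1, P2 = mlp_classify(Z1, W_c), mlp_classify(Z2, W_c)

# Three-component loss: representation similarity, prediction
# consistency, and supervised classification (cross-entropy)
L_sim = 1.0 - cosine_sim(Z1, Z2).mean()
L_cons = ((P1 - P2) ** 2).mean()
L_ce = -np.log(P1[np.arange(len(y)), y] + 1e-8).mean()
loss = L_ce + 0.5 * L_sim + 0.5 * L_cons  # 0.5 weights are hypothetical
```

In an actual training loop the shared weights would be updated by gradient descent on `loss`; the sketch only shows how the two branches and the three loss terms fit together.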

Articles in Press, Accepted Manuscript
Available Online from 13 April 2026
  • Receive Date: 04 November 2025
  • Revise Date: 04 January 2026
  • Accept Date: 26 January 2026