Article Type: Research Article
Authors
1 Semnan University, Faculty of Electrical and Computer Engineering
2 Semnan University
3 Damghan University
Abstract
Keywords
Subjects
Article Title [English]
Authors [English]
Graph Neural Networks (GNNs) excel at learning from graph-structured data but suffer significant performance degradation under distribution shifts between training and test environments. This paper proposes a Siamese-based contrastive learning framework for improving out-of-distribution (OOD) generalization in node classification tasks. Our approach generates positive samples through feature-matrix perturbation without requiring negative samples, thereby reducing computational complexity. The model employs dual GCN encoders and MLP classifiers with shared weights, optimized with a three-component loss function that maximizes representation similarity, enforces prediction consistency, and promotes classification accuracy. Experimental evaluation on GOOD benchmark datasets, covering both covariate and concept shift scenarios, shows that our method outperforms baseline approaches. These results indicate that contrastive learning with a Siamese architecture offers a computationally efficient and effective way to enhance GNN robustness under distribution shifts, with promising implications for real-world applications that require reliable model performance in dynamic environments. On average, the proposed method reduces the performance gap (GAP metric) between IID and OOD scenarios by 19.75%, while achieving a mean OOD accuracy of 55.04%.
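The architecture described in the abstract can be sketched in plain PyTorch. This is a minimal illustration, not the authors' implementation: the two-layer GCN encoder, the noise scale `noise_std`, the loss weights `w_sim`/`w_cons`, and the specific similarity (cosine) and consistency (KL divergence) terms are all assumptions chosen to match the described three-component objective (representation similarity, prediction consistency, classification accuracy). Because the positive view is a perturbed copy of the feature matrix passed through the same shared-weight branch, no negative samples are needed.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGCN(nn.Module):
    """Minimal two-layer GCN encoder (sketch): H = A_hat @ relu(A_hat @ X W1) W2."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)
        self.w2 = nn.Linear(hid_dim, hid_dim)

    def forward(self, a_hat, x):
        # a_hat: normalized adjacency (n x n), x: node features (n x d)
        h = F.relu(a_hat @ self.w1(x))
        return a_hat @ self.w2(h)

class SiameseGNN(nn.Module):
    """Both views pass through the SAME encoder and MLP head, so weights are shared."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.encoder = SimpleGCN(in_dim, hid_dim)
        self.classifier = nn.Sequential(
            nn.Linear(hid_dim, hid_dim), nn.ReLU(),
            nn.Linear(hid_dim, n_classes),
        )

    def forward(self, a_hat, x):
        z = self.encoder(a_hat, x)
        return z, self.classifier(z)

def three_part_loss(model, a_hat, x, y, mask,
                    noise_std=0.1, w_sim=1.0, w_cons=1.0):
    """Hypothetical combined objective: classification + similarity + consistency."""
    # Positive view via feature-matrix perturbation (no negatives required).
    x_pos = x + noise_std * torch.randn_like(x)
    z1, logits1 = model(a_hat, x)
    z2, logits2 = model(a_hat, x_pos)
    # (1) Maximize representation similarity between the two views.
    sim = 1.0 - F.cosine_similarity(z1, z2, dim=-1).mean()
    # (2) Keep the perturbed view's predictions consistent with the clean view.
    cons = F.kl_div(F.log_softmax(logits2, dim=-1),
                    F.softmax(logits1, dim=-1).detach(),
                    reduction="batchmean")
    # (3) Supervised cross-entropy on labeled (training-mask) nodes.
    ce = F.cross_entropy(logits1[mask], y[mask])
    return ce + w_sim * sim + w_cons * cons
```

A self-supervised similarity term of this form avoids the large negative-sample batches of InfoNCE-style contrastive losses, which is one plausible source of the reduced computational cost the abstract mentions.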
Keywords [English]