TY - GEN
T1 - Every Node Counts
T2 - 27th European Conference on Artificial Intelligence, ECAI 2024
AU - Eliasof, Moshe
AU - Haber, Eldad
AU - Treister, Eran
N1 - Publisher Copyright: © 2024 The Authors.
PY - 2024/10/16
Y1 - 2024/10/16
N2 - Graph Neural Networks (GNNs) are prominent in handling sparse and unstructured data efficiently and effectively. Specifically, GNNs have been shown to be highly effective for node classification tasks, where labelled information is available for only a fraction of the nodes. Typically, the optimization process, through the objective function, considers only labelled nodes while ignoring the rest. In this paper, we propose novel objective terms for the training of GNNs for node classification, aiming to exploit all the available data and improve accuracy. Our first term seeks to maximize the mutual information between node and label features, considering both labelled and unlabelled nodes in the optimization process. Our second term promotes anisotropic smoothness in the prediction maps. Lastly, we propose a cross-validating gradients approach to enhance the learning from labelled data. Our proposed objectives are general, can be applied to various GNNs, and require no architectural modifications. Extensive experiments demonstrate our approach using popular GNNs such as Graph Convolutional Networks (e.g., GCN and GCNII) and Graph Attention Networks (e.g., GAT), reaching a consistent and significant accuracy improvement on 10 real-world node classification datasets.
AB - Graph Neural Networks (GNNs) are prominent in handling sparse and unstructured data efficiently and effectively. Specifically, GNNs have been shown to be highly effective for node classification tasks, where labelled information is available for only a fraction of the nodes. Typically, the optimization process, through the objective function, considers only labelled nodes while ignoring the rest. In this paper, we propose novel objective terms for the training of GNNs for node classification, aiming to exploit all the available data and improve accuracy. Our first term seeks to maximize the mutual information between node and label features, considering both labelled and unlabelled nodes in the optimization process. Our second term promotes anisotropic smoothness in the prediction maps. Lastly, we propose a cross-validating gradients approach to enhance the learning from labelled data. Our proposed objectives are general, can be applied to various GNNs, and require no architectural modifications. Extensive experiments demonstrate our approach using popular GNNs such as Graph Convolutional Networks (e.g., GCN and GCNII) and Graph Attention Networks (e.g., GAT), reaching a consistent and significant accuracy improvement on 10 real-world node classification datasets.
UR - http://www.scopus.com/inward/record.url?scp=85213329671&partnerID=8YFLogxK
U2 - 10.3233/FAIA240779
DO - 10.3233/FAIA240779
M3 - Conference contribution
T3 - Frontiers in Artificial Intelligence and Applications
SP - 2508
EP - 2515
BT - ECAI 2024 - 27th European Conference on Artificial Intelligence, Including 13th Conference on Prestigious Applications of Intelligent Systems, PAIS 2024, Proceedings
A2 - Endriss, Ulle
A2 - Melo, Francisco S.
A2 - Bach, Kerstin
A2 - Bugarin-Diz, Alberto
A2 - Alonso-Moral, Jose M.
A2 - Barro, Senen
A2 - Heintz, Fredrik
PB - IOS Press BV
Y2 - 19 October 2024 through 24 October 2024
ER -