Details
| Original language | English |
|---|---|
| Journal | Advances in Neural Information Processing Systems |
| Volume | 37 |
| Publication status | Published - 2024 |
| Event | 38th Conference on Neural Information Processing Systems, NeurIPS 2024, Vancouver, Canada. Duration: 10 Dec 2024 → 15 Dec 2024 |
Abstract
We characterize the computational power of neural networks that follow the graph neural network (GNN) architecture, not restricted to aggregate-combine GNNs or other particular types. We establish an exact correspondence between the expressivity of GNNs using diverse activation functions and arithmetic circuits over the real numbers. In our results, the activation function of the network becomes a gate type in the circuit. Our result holds for families of constant-depth circuits and networks, both uniformly and non-uniformly, for all common activation functions.
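To make the correspondence concrete, here is a minimal illustrative sketch (not the paper's actual construction, and simpler than the general architectures the result covers): one aggregate-combine-style GNN layer on a three-node path graph, evaluated as an arithmetic circuit over the reals whose sum and product gates compute the aggregation and combination, and whose activation function acts as an additional gate type. The graph, features, and weights are hypothetical.

```python
# Illustrative sketch: a single GNN message-passing layer read as an
# arithmetic circuit over the reals, with the activation as a gate type.

def relu(x):
    """Activation function, viewed here as its own gate type in the circuit."""
    return max(0.0, x)

# Hypothetical path graph 0 - 1 - 2 with one real-valued feature per node.
adj = {0: [1], 1: [0, 2], 2: [1]}
feats = {0: 1.0, 1: -2.0, 2: 3.0}

# Hypothetical layer weights.
w_self, w_neigh, bias = 0.5, 1.0, 0.0

def gnn_layer(adj, feats):
    """One layer: aggregation (sum gates), combination (product and sum
    gates with constant inputs), then one activation gate per node."""
    out = {}
    for v, neighbours in adj.items():
        agg = sum(feats[u] for u in neighbours)          # + gates
        pre = w_self * feats[v] + w_neigh * agg + bias   # x and + gates
        out[v] = relu(pre)                               # activation gate
    return out

print(gnn_layer(adj, feats))
```

Each node's output value is the value computed at the root of a constant-depth circuit; stacking layers composes such circuits, which is the shape of the depth-preserving correspondence the abstract describes.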
ASJC Scopus subject areas
- Computer Science(all)
- Computer Networks and Communications
- Information Systems
- Signal Processing
Cite this
In: Advances in Neural Information Processing Systems, Vol. 37, 2024.
Research output: Contribution to journal › Conference article › Research › peer review
TY - JOUR
T1 - Graph Neural Networks and Arithmetic Circuits
AU - Barlag, Timon
AU - Holzapfel, Vivian
AU - Strieker, Laura
AU - Virtema, Jonni
AU - Vollmer, Heribert
N1 - Publisher Copyright: © 2024 Neural information processing systems foundation. All rights reserved.
PY - 2024
Y1 - 2024
N2 - We characterize the computational power of neural networks that follow the graph neural network (GNN) architecture, not restricted to aggregate-combine GNNs or other particular types. We establish an exact correspondence between the expressivity of GNNs using diverse activation functions and arithmetic circuits over real numbers. In our results the activation function of the network becomes a gate type in the circuit. Our result holds for families of constant depth circuits and networks, both uniformly and non-uniformly, for all common activation functions.
AB - We characterize the computational power of neural networks that follow the graph neural network (GNN) architecture, not restricted to aggregate-combine GNNs or other particular types. We establish an exact correspondence between the expressivity of GNNs using diverse activation functions and arithmetic circuits over real numbers. In our results the activation function of the network becomes a gate type in the circuit. Our result holds for families of constant depth circuits and networks, both uniformly and non-uniformly, for all common activation functions.
UR - http://www.scopus.com/inward/record.url?scp=105000491402&partnerID=8YFLogxK
U2 - 10.48550/arXiv.2402.17805
DO - 10.48550/arXiv.2402.17805
M3 - Conference article
AN - SCOPUS:105000491402
VL - 37
JO - Advances in Neural Information Processing Systems
JF - Advances in Neural Information Processing Systems
SN - 1049-5258
T2 - 38th Conference on Neural Information Processing Systems, NeurIPS 2024
Y2 - 10 December 2024 through 15 December 2024
ER -