Details
Original language | English |
---|---|
Journal | Advances in Neural Information Processing Systems |
Volume | 37 |
Publication status | Published - 2024 |
Event | 38th Conference on Neural Information Processing Systems, NeurIPS 2024 - Vancouver, Canada; Duration: 10 Dec 2024 → 15 Dec 2024 |
Abstract
We characterize the computational power of neural networks that follow the graph neural network (GNN) architecture, not restricted to aggregate-combine GNNs or other particular types. We establish an exact correspondence between the expressivity of GNNs using diverse activation functions and arithmetic circuits over real numbers. In our results the activation function of the network becomes a gate type in the circuit. Our result holds for families of constant depth circuits and networks, both uniformly and non-uniformly, for all common activation functions.
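The abstract states a correspondence between GNNs and arithmetic circuits over the reals in which the activation function of the network appears as its own gate type in the circuit. The sketch below is a minimal, hypothetical illustration of that idea and not the paper's construction: it unrolls a single aggregate-combine GNN layer on a fixed graph into a circuit built from `+`, `*`, and `relu` gates, then checks that both compute the same node features. All names here (`Gate`, `gnn_layer`, `unroll_layer`) are invented for this example.

```python
# Illustrative sketch only (not the paper's construction): one aggregate-combine
# GNN layer unrolled into an arithmetic-circuit-like DAG in which the activation
# function is modelled as a separate gate type alongside + and *.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def gnn_layer(adj, feats, w_self, w_neigh, act):
    """One aggregate-combine layer: combine a node's own features with the
    sum of its neighbours' features, then apply the activation componentwise."""
    agg = adj @ feats                      # sum-aggregation over neighbours
    return act(feats @ w_self + agg @ w_neigh)

class Gate:
    """A gate in an arithmetic circuit over the reals. 'kind' is one of
    'input', 'const', '+', '*', or the name of an activation gate."""
    def __init__(self, kind, inputs=(), value=None):
        self.kind, self.inputs, self.value = kind, tuple(inputs), value

    def eval(self):
        if self.kind in ("input", "const"):
            return self.value
        vals = [g.eval() for g in self.inputs]
        if self.kind == "+":
            return sum(vals)
        if self.kind == "*":
            return np.prod(vals)
        if self.kind == "relu":            # the activation as a gate type
            return max(vals[0], 0.0)
        raise ValueError(f"unknown gate kind {self.kind}")

def unroll_layer(adj, w_self, w_neigh):
    """Unroll one GNN layer on a fixed graph into a circuit: one output gate
    per (node, feature) pair, built from +, * and relu gates."""
    n, d, d_out = adj.shape[0], w_self.shape[0], w_self.shape[1]
    inputs = [[Gate("input") for _ in range(d)] for _ in range(n)]
    outputs = []
    for v in range(n):
        row = []
        for j in range(d_out):
            terms = []
            for k in range(d):
                terms.append(Gate("*", [inputs[v][k], Gate("const", value=w_self[k, j])]))
                for u in range(n):
                    if adj[v, u]:
                        terms.append(Gate("*", [inputs[u][k], Gate("const", value=w_neigh[k, j])]))
            row.append(Gate("relu", [Gate("+", terms)]))
        outputs.append(row)
    return inputs, outputs

# Tiny example: a path graph on 3 nodes with 2-dimensional features.
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
feats = rng.normal(size=(3, 2))
w_self, w_neigh = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))

direct = gnn_layer(adj, feats, w_self, w_neigh, relu)

inputs, outputs = unroll_layer(adj, w_self, w_neigh)
for v in range(3):
    for k in range(2):
        inputs[v][k].value = feats[v, k]
via_circuit = np.array([[g.eval() for g in row] for row in outputs])

assert np.allclose(direct, via_circuit)    # both compute the same function
```

The activation is deliberately kept as its own gate kind rather than folded into the weighted sum, mirroring the abstract's statement that the activation function of the network becomes a gate type in the circuit; how this extends to whole circuit families, uniformity, and other activations is the subject of the paper itself.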
ASJC Scopus subject areas
- Computer Science (all)
- Computer Networks and Communications
- Computer Science (all)
- Information Systems
- Computer Science (all)
- Signal Processing
Cite
- Standard
- Harvard
- APA
- Vancouver
- BibTeX
- RIS
In: Advances in Neural Information Processing Systems, Vol. 37, 2024.
Publication: Contribution to journal › Conference article in journal › Research › Peer review
TY - JOUR
T1 - Graph Neural Networks and Arithmetic Circuits
AU - Barlag, Timon
AU - Holzapfel, Vivian
AU - Strieker, Laura
AU - Virtema, Jonni
AU - Vollmer, Heribert
N1 - Publisher Copyright: © 2024 Neural information processing systems foundation. All rights reserved.
PY - 2024
Y1 - 2024
N2 - We characterize the computational power of neural networks that follow the graph neural network (GNN) architecture, not restricted to aggregate-combine GNNs or other particular types. We establish an exact correspondence between the expressivity of GNNs using diverse activation functions and arithmetic circuits over real numbers. In our results the activation function of the network becomes a gate type in the circuit. Our result holds for families of constant depth circuits and networks, both uniformly and non-uniformly, for all common activation functions.
AB - We characterize the computational power of neural networks that follow the graph neural network (GNN) architecture, not restricted to aggregate-combine GNNs or other particular types. We establish an exact correspondence between the expressivity of GNNs using diverse activation functions and arithmetic circuits over real numbers. In our results the activation function of the network becomes a gate type in the circuit. Our result holds for families of constant depth circuits and networks, both uniformly and non-uniformly, for all common activation functions.
UR - http://www.scopus.com/inward/record.url?scp=105000491402&partnerID=8YFLogxK
U2 - 10.48550/arXiv.2402.17805
DO - 10.48550/arXiv.2402.17805
M3 - Conference article
AN - SCOPUS:105000491402
VL - 37
JO - Advances in Neural Information Processing Systems
JF - Advances in Neural Information Processing Systems
SN - 1049-5258
T2 - 38th Conference on Neural Information Processing Systems, NeurIPS 2024
Y2 - 10 December 2024 through 15 December 2024
ER -