MPSketch: Message Passing Networks via Randomized Hashing for Efficient Attributed Network Embedding

Publication: Contribution to journal › Article › Research › Peer reviewed

Authors

  • Wei Wu
  • Bin Li
  • Chuan Luo
  • Wolfgang Nejdl
  • Xuan Tan

Organisational units

External organisations

  • Central South University
  • Fudan University
  • Beihang University

Details

Original language: English
Pages (from-to): 2941-2954
Number of pages: 14
Journal: IEEE Transactions on Cybernetics
Volume: 54
Issue number: 5
Early online date: 24 Feb 2023
Publication status: Published - May 2024

Abstract

Attributed network embedding represents each node of a network in a low-dimensional space and thus brings considerable benefits to numerous graph mining tasks: a diverse set of graph tasks can be processed efficiently via a compact representation that preserves both content and structure information. Most attributed network embedding approaches, especially graph neural network (GNN) algorithms, are substantially costly in either time or space due to their expensive learning process, whereas locality-sensitive hashing (LSH), a randomized hashing technique that requires no learning, can speed up the embedding process at the expense of some accuracy. In this article, we propose the MPSketch model, which bridges the performance gap between the GNN and LSH frameworks by adopting the LSH technique to pass messages and capture high-order proximity from a larger aggregated information pool over the neighborhood. Extensive experimental results confirm that, in node classification and link prediction, the proposed MPSketch algorithm achieves performance comparable to state-of-the-art learning-based algorithms, outperforms existing LSH algorithms, and runs 3–4 orders of magnitude faster than GNN algorithms; on average, MPSketch runs 2121, 1167, and 1155 times faster than GraphSAGE, GraphZoom, and FATNet, respectively.
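To make the idea concrete, below is a minimal, illustrative sketch (not the authors' implementation) of LSH-style message passing for attributed network embedding. It aggregates neighborhood attributes over several hops into a growing information pool and compresses the pool with SimHash (random-hyperplane LSH); the function name, the dense adjacency format, the choice of SimHash, and all parameter values are assumptions for illustration, since MPSketch's actual hashing and aggregation scheme is specified in the paper.

# Illustrative sketch only: LSH-style message passing for attributed network
# embedding. Each hop sums neighbors' attribute vectors; the accumulated pool is
# compressed with SimHash (random-hyperplane LSH) into short binary signatures.
# This is NOT the MPSketch algorithm itself, just a hedged approximation of the
# general idea described in the abstract.
import numpy as np

def lsh_message_passing_embedding(adj, features, num_hops=2, num_bits=128, seed=0):
    """adj: (n, n) binary adjacency matrix; features: (n, d) node attributes."""
    rng = np.random.default_rng(seed)
    n, d = features.shape
    # Aggregated information pool: start from each node's own attributes.
    pool = features.astype(float).copy()
    agg = features.astype(float).copy()
    for _ in range(num_hops):
        # Message passing: each node sums its neighbors' current representations.
        agg = adj @ agg
        # Grow the pool so higher-order neighborhoods contribute as well.
        pool += agg
    # SimHash: the sign of random projections yields a num_bits binary code per node.
    hyperplanes = rng.standard_normal((d, num_bits))
    return (pool @ hyperplanes > 0).astype(np.uint8)

# Toy usage: a 4-node path graph with 5-dimensional attributes.
if __name__ == "__main__":
    adj = np.array([[0, 1, 0, 0],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [0, 0, 1, 0]])
    feats = np.random.default_rng(1).random((4, 5))
    emb = lsh_message_passing_embedding(adj, feats, num_hops=2, num_bits=16)
    print(emb.shape)  # (4, 16): one 16-bit signature per node

Because the signatures are binary, downstream tasks such as node classification or link prediction can compare nodes via Hamming distance, which approximates the cosine similarity of their aggregated neighborhood features.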

ASJC Scopus subject areas

Cite

MPSketch: Message Passing Networks via Randomized Hashing for Efficient Attributed Network Embedding. / Wu, Wei; Li, Bin; Luo, Chuan et al.
In: IEEE Transactions on Cybernetics, Vol. 54, No. 5, 05.2024, p. 2941-2954.


Wu W, Li B, Luo C, Nejdl W, Tan X. MPSketch: Message Passing Networks via Randomized Hashing for Efficient Attributed Network Embedding. IEEE Transactions on Cybernetics. 2024 May;54(5):2941-2954. Epub 2023 Feb 24. doi: 10.1109/TCYB.2023.3243763
BibTeX
@article{3c38d86b44084885b602199598ef7fb0,
title = "MPSketch: Message Passing Networks via Randomized Hashing for Efficient Attributed Network Embedding",
abstract = "Given a network, it is well recognized that attributed network embedding represents each node of the network in a low-dimensional space, and, thus, brings considerable benefits for numerous graph mining tasks. In practice, a diverse set of graph tasks can be processed efficiently via the compact representation that preserves content and structure information. The majority of attributed network embedding approaches, especially, the graph neural network (GNN) algorithms, are substantially costly in either time or space due to the expensive learning process, while the randomized hashing technique, locality-sensitive hashing (LSH), which does not need learning, can speedup the embedding process at the expense of losing some accuracy. In this article, we propose the MPSketch model, which bridges the performance gap between the GNN framework and the LSH framework by adopting the LSH technique to pass messages and capture high-order proximity in a larger aggregated information pool from the neighborhood. The extensive experimental results confirm that in node classification and link prediction, the proposed MPSketch algorithm enjoys performance comparable to the state-of-the-art learning-based algorithms and outperforms the existing LSH algorithms, while running faster than the GNN algorithms by 3–4 orders of magnitude. More precisely, MPSketch runs 2121, 1167, and 1155 times faster than GraphSAGE, GraphZoom, and FATNet on average, respectively.",
keywords = "Approximation algorithms, Computer science, Electronic mail, Graph neural networks, Graph neural networks (GNNs), hashing, Message passing, message passing (MP), network embedding, Prediction algorithms, Task analysis",
author = "Wei Wu and Bin Li and Chuan Luo and Wolfgang Nejdl and Xuan Tan",
note = "Publisher Copyright: IEEE",
year = "2024",
month = may,
doi = "10.1109/TCYB.2023.3243763",
language = "English",
volume = "54",
pages = "2941--2954",
journal = "IEEE transactions on cybernetics",
issn = "2168-2267",
publisher = "IEEE Advancing Technology for Humanity",
number = "5",

}

RIS

TY - JOUR

T1 - MPSketch

T2 - Message Passing Networks via Randomized Hashing for Efficient Attributed Network Embedding

AU - Wu, Wei

AU - Li, Bin

AU - Luo, Chuan

AU - Nejdl, Wolfgang

AU - Tan, Xuan

N1 - Publisher Copyright: IEEE

PY - 2024/5

Y1 - 2024/5

N2 - Attributed network embedding represents each node of a network in a low-dimensional space and thus brings considerable benefits to numerous graph mining tasks: a diverse set of graph tasks can be processed efficiently via a compact representation that preserves both content and structure information. Most attributed network embedding approaches, especially graph neural network (GNN) algorithms, are substantially costly in either time or space due to their expensive learning process, whereas locality-sensitive hashing (LSH), a randomized hashing technique that requires no learning, can speed up the embedding process at the expense of some accuracy. In this article, we propose the MPSketch model, which bridges the performance gap between the GNN and LSH frameworks by adopting the LSH technique to pass messages and capture high-order proximity from a larger aggregated information pool over the neighborhood. Extensive experimental results confirm that, in node classification and link prediction, the proposed MPSketch algorithm achieves performance comparable to state-of-the-art learning-based algorithms, outperforms existing LSH algorithms, and runs 3–4 orders of magnitude faster than GNN algorithms; on average, MPSketch runs 2121, 1167, and 1155 times faster than GraphSAGE, GraphZoom, and FATNet, respectively.

AB - Attributed network embedding represents each node of a network in a low-dimensional space and thus brings considerable benefits to numerous graph mining tasks: a diverse set of graph tasks can be processed efficiently via a compact representation that preserves both content and structure information. Most attributed network embedding approaches, especially graph neural network (GNN) algorithms, are substantially costly in either time or space due to their expensive learning process, whereas locality-sensitive hashing (LSH), a randomized hashing technique that requires no learning, can speed up the embedding process at the expense of some accuracy. In this article, we propose the MPSketch model, which bridges the performance gap between the GNN and LSH frameworks by adopting the LSH technique to pass messages and capture high-order proximity from a larger aggregated information pool over the neighborhood. Extensive experimental results confirm that, in node classification and link prediction, the proposed MPSketch algorithm achieves performance comparable to state-of-the-art learning-based algorithms, outperforms existing LSH algorithms, and runs 3–4 orders of magnitude faster than GNN algorithms; on average, MPSketch runs 2121, 1167, and 1155 times faster than GraphSAGE, GraphZoom, and FATNet, respectively.

KW - Approximation algorithms

KW - Computer science

KW - Electronic mail

KW - Graph neural networks

KW - Graph neural networks (GNNs)

KW - hashing

KW - Message passing

KW - message passing (MP)

KW - network embedding

KW - Prediction algorithms

KW - Task analysis

UR - http://www.scopus.com/inward/record.url?scp=85149385879&partnerID=8YFLogxK

U2 - 10.1109/TCYB.2023.3243763

DO - 10.1109/TCYB.2023.3243763

M3 - Article

AN - SCOPUS:85149385879

VL - 54

SP - 2941

EP - 2954

JO - IEEE transactions on cybernetics

JF - IEEE transactions on cybernetics

SN - 2168-2267

IS - 5

ER -
