Details
| Field | Value |
|---|---|
| Original language | English |
| Title of host publication | 2023 International Joint Conference on Neural Networks (IJCNN) |
| Publisher | Institute of Electrical and Electronics Engineers Inc. |
| ISBN (electronic) | 978-1-6654-8867-9 |
| ISBN (print) | 978-1-6654-8868-6 |
| Publication status | Published - 2023 |
| Event | International Joint Conference on Neural Networks, IJCNN 2023, Gold Coast, Australia, 18 Jun 2023 → 23 Jun 2023 |
Publication series
| Field | Value |
|---|---|
| Name | Proceedings of the International Joint Conference on Neural Networks |
| Volume | 2023-June |
| ISSN (print) | 2161-4393 |
| ISSN (electronic) | 2161-4407 |
Abstract
The goal of subgraph matching is to determine the presence of a particular query pattern within a large collection of data graphs. Despite being a hard problem, subgraph matching is essential in various disciplines, including bioinformatics, text matching, and graph retrieval. Although traditional approaches can provide exact solutions, the underlying problem is NP-complete, leading to overwhelming query latency. While recent neural-based approaches have been shown to improve response time, their oversimplified first-order network assumption may limit generalisability in fully capturing patterns of varying sizes, causing performance to drop significantly on datasets from various domains. To overcome these limitations, this paper proposes xDualSM, a dual matching neural network model with interleaved diffusion attention. Specifically, we first embed the structural information of the graphs into different adjacency matrices, which explicitly capture the intra-graph and cross-graph structures between the query pattern and the target graph. Then, we introduce a dual matching network with interleaved diffusion attention to carefully capture intra-graph and cross-graph information while reducing computational complexity. Empirically, our proposed framework not only speeds up subgraph matching by more than 10x compared to the fastest baseline but also achieves significant improvements of 47.64% in Recall and 34.39% in F1-score over the state-of-the-art approximate approach on the COX2 dataset. In addition, our results are comparable with those of exact methods.
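To make the problem setting concrete: the abstract contrasts exact methods (NP-complete, high query latency) with fast neural approximations such as xDualSM. The sketch below is not the paper's neural model; it is a minimal brute-force formulation of the exact subgraph-matching decision problem the paper targets, with hypothetical function and variable names, illustrating why exact approaches scale poorly (the search enumerates node assignments, which grows factorially with graph size).

```python
from itertools import permutations

def has_subgraph_match(query_edges, query_n, target_edges, target_n):
    """Exact (brute-force) subgraph matching: does the target graph
    contain an edge-preserving embedding of the query pattern?
    Enumerating node assignments is factorial in target size, which is
    why exact methods suffer the query latency the abstract describes."""
    # Undirected adjacency of the target graph as a set of ordered pairs.
    target_adj = {(u, v) for u, v in target_edges} | {(v, u) for u, v in target_edges}
    for mapping in permutations(range(target_n), query_n):
        # mapping[i] = target node assigned to query node i;
        # every query edge must map onto a target edge.
        if all((mapping[u], mapping[v]) in target_adj for u, v in query_edges):
            return True
    return False

# Toy example: a triangle query inside a 4-cycle with a chord.
query = [(0, 1), (1, 2), (2, 0)]                    # triangle pattern
target = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # 4-cycle plus chord 0-2
print(has_subgraph_match(query, 3, target, 4))      # True: nodes 0, 1, 2 form a triangle
```

Neural approaches such as the one described above sidestep this enumeration by comparing learned embeddings of the query and target, trading exactness for the reported >10x speedup.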
Keywords
- diffusion attention
- graph mining
- graph neural networks
- matching by embedding
- subgraph matching
ASJC Scopus subject areas
- Computer Science (all)
- Software
- Artificial Intelligence
Cite this
2023 International Joint Conference on Neural Networks (IJCNN). Institute of Electrical and Electronics Engineers Inc., 2023. (Proceedings of the International Joint Conference on Neural Networks; Vol. 2023-June).
Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review
TY - GEN
T1 - 10X Faster Subgraph Matching
T2 - International Joint Conference on Neural Networks, IJCNN 2023
AU - Nguyen, Quang Duc
AU - Ren, Zhao
AU - Jo, Jun
AU - Nguyen, Quoc Viet Hung
AU - Toan Nguyen, Thanh
AU - Tam Nguyen, Thanh
PY - 2023
Y1 - 2023
N2 - The goal of subgraph matching is to determine the presence of a particular query pattern within a large collection of data graphs. Despite being a hard problem, subgraph matching is essential in various disciplines, including bioinformatics, text matching, and graph retrieval. Although traditional approaches can provide exact solutions, the underlying problem is NP-complete, leading to overwhelming query latency. While recent neural-based approaches have been shown to improve response time, their oversimplified first-order network assumption may limit generalisability in fully capturing patterns of varying sizes, causing performance to drop significantly on datasets from various domains. To overcome these limitations, this paper proposes xDualSM, a dual matching neural network model with interleaved diffusion attention. Specifically, we first embed the structural information of the graphs into different adjacency matrices, which explicitly capture the intra-graph and cross-graph structures between the query pattern and the target graph. Then, we introduce a dual matching network with interleaved diffusion attention to carefully capture intra-graph and cross-graph information while reducing computational complexity. Empirically, our proposed framework not only speeds up subgraph matching by more than 10x compared to the fastest baseline but also achieves significant improvements of 47.64% in Recall and 34.39% in F1-score over the state-of-the-art approximate approach on the COX2 dataset. In addition, our results are comparable with those of exact methods.
AB - The goal of subgraph matching is to determine the presence of a particular query pattern within a large collection of data graphs. Despite being a hard problem, subgraph matching is essential in various disciplines, including bioinformatics, text matching, and graph retrieval. Although traditional approaches can provide exact solutions, the underlying problem is NP-complete, leading to overwhelming query latency. While recent neural-based approaches have been shown to improve response time, their oversimplified first-order network assumption may limit generalisability in fully capturing patterns of varying sizes, causing performance to drop significantly on datasets from various domains. To overcome these limitations, this paper proposes xDualSM, a dual matching neural network model with interleaved diffusion attention. Specifically, we first embed the structural information of the graphs into different adjacency matrices, which explicitly capture the intra-graph and cross-graph structures between the query pattern and the target graph. Then, we introduce a dual matching network with interleaved diffusion attention to carefully capture intra-graph and cross-graph information while reducing computational complexity. Empirically, our proposed framework not only speeds up subgraph matching by more than 10x compared to the fastest baseline but also achieves significant improvements of 47.64% in Recall and 34.39% in F1-score over the state-of-the-art approximate approach on the COX2 dataset. In addition, our results are comparable with those of exact methods.
KW - diffusion attention
KW - graph mining
KW - graph neural networks
KW - matching by embedding
KW - subgraph matching
UR - http://www.scopus.com/inward/record.url?scp=85169594176&partnerID=8YFLogxK
U2 - 10.1109/IJCNN54540.2023.10191159
DO - 10.1109/IJCNN54540.2023.10191159
M3 - Conference contribution
AN - SCOPUS:85169594176
SN - 978-1-6654-8868-6
T3 - Proceedings of the International Joint Conference on Neural Networks
BT - 2023 International Joint Conference on Neural Networks (IJCNN)
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 18 June 2023 through 23 June 2023
ER -