Beyond 100 Million Entities: Large-scale Blocking-based Resolution for Heterogeneous Data

Publication: Contribution to book/report/anthology/conference proceedings › Conference paper › Research › Peer-reviewed

Authors

  • George Papadakis
  • Ekaterini Ioannou
  • Claudia Niederée
  • Themis Palpanas
  • Wolfgang Nejdl

Organisational units

External organisations

  • National Technical University of Athens (NTUA)
  • Technical University of Crete
  • Università degli Studi di Trento

Details

Original language: English
Title of host publication: WSDM 2012 - Proceedings of the 5th ACM International Conference on Web Search and Data Mining
Pages: 53-62
Number of pages: 10
Publication status: Published - 8 Feb 2012
Event: 5th ACM International Conference on Web Search and Data Mining, WSDM 2012 - Seattle, WA, United States
Duration: 8 Feb 2012 - 12 Feb 2012

Publication series

Name: WSDM 2012 - Proceedings of the 5th ACM International Conference on Web Search and Data Mining

Abstract

A prerequisite for leveraging the vast amount of data available on the Web is Entity Resolution, i.e., the process of identifying and linking data that describe the same real-world objects. To make this inherently quadratic process applicable to large data sets, blocking is typically employed: entities (records) are grouped into clusters - the blocks - of matching candidates, and only entities of the same block are compared. However, novel blocking techniques are required for dealing with the noisy, heterogeneous, semi-structured, user-generated data on the Web, as traditional blocking techniques are inapplicable due to their reliance on schema information. The introduction of redundancy improves the robustness of blocking methods, but comes at the price of additional computational cost. In this paper, we present methods for enhancing the efficiency of redundancy-bearing blocking methods, such as our attribute-agnostic blocking approach. We introduce novel blocking schemes that build blocks based on a variety of evidence, including entity identifiers and relationships between entities; they significantly reduce the required number of comparisons, while maintaining blocking effectiveness at very high levels. We also introduce two theoretical measures that provide a reliable estimation of the performance of a blocking method, without requiring the analytical processing of its blocks. Based on these measures, we develop two techniques for improving the performance of blocking: combining individual, complementary blocking schemes, and purging blocks until given criteria are satisfied. We test our methods through an extensive experimental evaluation, using a voluminous data set with 182 million heterogeneous entities. The outcomes of our study show the applicability and the high performance of our approach.
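
For illustration only, here is a minimal sketch of the general idea described above: blocking for schema-free data can treat every token of every attribute value as a blocking key (attribute-agnostic), compare only entities that share a block, and purge oversized blocks to curb the quadratic comparison cost. The toy entities, the token-based keys, and the purge threshold below are illustrative assumptions, not the paper's actual algorithms or measures.

# Hedged sketch of attribute-agnostic, redundancy-bearing blocking (illustrative;
# not the authors' implementation). Entities are schema-free attribute-value maps.
from collections import defaultdict
from itertools import combinations

entities = {
    "e1": {"name": "J. Smith", "city": "Seattle"},
    "e2": {"fullName": "John Smith", "location": "Seattle WA"},
    "e3": {"title": "Jane Doe"},
}

def build_blocks(entities):
    """Group entity IDs under every token of every value, ignoring attribute names."""
    blocks = defaultdict(set)
    for eid, attrs in entities.items():
        for value in attrs.values():
            for token in value.lower().split():
                blocks[token].add(eid)
    return blocks

def purge_blocks(blocks, max_size=2):
    """Drop singleton and oversized blocks; large blocks dominate cost but add little evidence."""
    return {key: ids for key, ids in blocks.items() if 1 < len(ids) <= max_size}

def candidate_pairs(blocks):
    """Compare only entities that share a block; redundancy repeats pairs, so deduplicate them."""
    pairs = set()
    for ids in blocks.values():
        for a, b in combinations(sorted(ids), 2):
            pairs.add((a, b))
    return pairs

print(candidate_pairs(purge_blocks(build_blocks(entities))))
# -> {('e1', 'e2')}: e1 and e2 co-occur in the "smith" and "seattle" blocks; e3 is never compared.

Instead of a brute-force check of all three pairs, only the single pair that shares a block would be passed on to an entity-matching function.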

ASJC Scopus subject areas

Cite

Beyond 100 Million Entities: Large-scale Blocking-based Resolution for Heterogeneous Data. / Papadakis, George; Ioannou, Ekaterini; Niederée, Claudia et al.
WSDM 2012 - Proceedings of the 5th ACM International Conference on Web Search and Data Mining. 2012. pp. 53-62 (WSDM 2012 - Proceedings of the 5th ACM International Conference on Web Search and Data Mining).

Papadakis, G, Ioannou, E, Niederée, C, Palpanas, T & Nejdl, W 2012, Beyond 100 Million Entities: Large-scale Blocking-based Resolution for Heterogeneous Data. in WSDM 2012 - Proceedings of the 5th ACM International Conference on Web Search and Data Mining. WSDM 2012 - Proceedings of the 5th ACM International Conference on Web Search and Data Mining, pp. 53-62, 5th ACM International Conference on Web Search and Data Mining, WSDM 2012, Seattle, WA, United States, 8 Feb 2012. https://doi.org/10.1145/2124295.2124305
Papadakis, G., Ioannou, E., Niederée, C., Palpanas, T., & Nejdl, W. (2012). Beyond 100 Million Entities: Large-scale Blocking-based Resolution for Heterogeneous Data. In WSDM 2012 - Proceedings of the 5th ACM International Conference on Web Search and Data Mining (pp. 53-62). (WSDM 2012 - Proceedings of the 5th ACM International Conference on Web Search and Data Mining). https://doi.org/10.1145/2124295.2124305
Papadakis G, Ioannou E, Niederée C, Palpanas T, Nejdl W. Beyond 100 Million Entities: Large-scale Blocking-based Resolution for Heterogeneous Data. In: WSDM 2012 - Proceedings of the 5th ACM International Conference on Web Search and Data Mining. 2012. p. 53-62. (WSDM 2012 - Proceedings of the 5th ACM International Conference on Web Search and Data Mining). doi: 10.1145/2124295.2124305
Papadakis, George; Ioannou, Ekaterini; Niederée, Claudia et al. / Beyond 100 Million Entities: Large-scale Blocking-based Resolution for Heterogeneous Data. WSDM 2012 - Proceedings of the 5th ACM International Conference on Web Search and Data Mining. 2012. pp. 53-62 (WSDM 2012 - Proceedings of the 5th ACM International Conference on Web Search and Data Mining).
@inproceedings{e8ea5aa82ee943b58f21546219d6ba40,
title = "Beyond 100 Million Entities: Large-scale Blocking-based Resolution for Heterogeneous Data",
keywords = "Attribute-agnostic blocking, Data cleaning, Entity resolution",
author = "George Papadakis and Ekaterini Ioannou and Claudia Nieder{\'e}e and Themis Palpanas and Wolfgang Nejdl",
year = "2012",
month = feb,
day = "8",
doi = "10.1145/2124295.2124305",
language = "English",
isbn = "9781450307475",
series = "WSDM 2012 - Proceedings of the 5th ACM International Conference on Web Search and Data Mining",
pages = "53--62",
booktitle = "WSDM 2012 - Proceedings of the 5th ACM International Conference on Web Search and Data Mining",
note = "5th ACM International Conference on Web Search and Data Mining, WSDM 2012 ; Conference date: 08-02-2012 Through 12-02-2012",

}


TY - GEN

T1 - Beyond 100 Million Entities: Large-scale Blocking-based Resolution for Heterogeneous Data

T2 - 5th ACM International Conference on Web Search and Data Mining, WSDM 2012

AU - Papadakis, George

AU - Ioannou, Ekaterini

AU - Niederée, Claudia

AU - Palpanas, Themis

AU - Nejdl, Wolfgang

PY - 2012/2/8

Y1 - 2012/2/8

KW - Attribute-agnostic blocking

KW - Data cleaning

KW - Entity resolution

UR - http://www.scopus.com/inward/record.url?scp=84858041897&partnerID=8YFLogxK

U2 - 10.1145/2124295.2124305

DO - 10.1145/2124295.2124305

M3 - Conference contribution

AN - SCOPUS:84858041897

SN - 9781450307475

T3 - WSDM 2012 - Proceedings of the 5th ACM International Conference on Web Search and Data Mining

SP - 53

EP - 62

BT - WSDM 2012 - Proceedings of the 5th ACM International Conference on Web Search and Data Mining

Y2 - 8 February 2012 through 12 February 2012

ER -
