Overview of Touché 2020: Argument Retrieval

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review

Authors

  • Alexander Bondarenko
  • Maik Fröbe
  • Meriem Beloucif
  • Lukas Gienapp
  • Yamen Ajjour
  • Alexander Panchenko
  • Chris Biemann
  • Benno Stein
  • Henning Wachsmuth
  • Martin Potthast
  • Matthias Hagen

External Research Organisations

  • Martin Luther University Halle-Wittenberg
  • Universität Hamburg
  • Leipzig University
  • Skolkovo Institute of Science and Technology
  • Bauhaus-Universität Weimar
  • Paderborn University

Details

Original language: English
Title of host publication: CLEF 2020 Working Notes
Subtitle of host publication: Working Notes of CLEF 2020 - Conference and Labs of the Evaluation Forum
Number of pages: 22
Publication status: Published - 2020
Externally published: Yes
Event: 11th Conference and Labs of the Evaluation Forum, CLEF 2020 - Thessaloniki, Greece
Duration: 22 Sept 2020 – 25 Sept 2020

Publication series

Name: CEUR Workshop Proceedings
Publisher: CEUR Workshop Proceedings
Volume: 2696
ISSN (Print): 1613-0073

Abstract

Argumentation is essential for opinion formation, both when debating socially important topics and when making everyday personal decisions. The web provides an enormous source of argumentative texts, in which well-reasoned argumentation is mixed with biased, faked, and populist content. Research on argument retrieval technologies thus focuses not only on retrieving relevant arguments for a given argumentative information need, but also on retrieving arguments of high quality. In this overview of the first shared task on argument retrieval at the CLEF 2020 Touché lab, we survey and evaluate 41 approaches submitted by 17 participating teams for two tasks: (1) retrieval of arguments on socially important topics, and (2) retrieval of arguments on everyday personal decisions. The most effective submitted approaches share common techniques, such as query expansion and taking argument quality into account. Still, the evaluation results show that only a few of the submitted approaches (slightly) improve upon relatively simple argumentation-agnostic baselines, indicating that argument retrieval is still in its infancy and merits further research.

Cite this

Overview of Touché 2020: Argument Retrieval. / Bondarenko, Alexander; Fröbe, Maik; Beloucif, Meriem et al.
CLEF 2020 Working Notes: Working Notes of CLEF 2020 - Conference and Labs of the Evaluation Forum. 2020. (CEUR Workshop Proceedings; Vol. 2696).

Bondarenko, A, Fröbe, M, Beloucif, M, Gienapp, L, Ajjour, Y, Panchenko, A, Biemann, C, Stein, B, Wachsmuth, H, Potthast, M & Hagen, M 2020, Overview of Touché 2020: Argument Retrieval. in CLEF 2020 Working Notes: Working Notes of CLEF 2020 - Conference and Labs of the Evaluation Forum. CEUR Workshop Proceedings, vol. 2696, 11th Conference and Labs of the Evaluation Forum, CLEF 2020, Thessaloniki, Greece, 22 Sept 2020. <https://ceur-ws.org/Vol-2696/paper_261.pdf>
Bondarenko, A., Fröbe, M., Beloucif, M., Gienapp, L., Ajjour, Y., Panchenko, A., Biemann, C., Stein, B., Wachsmuth, H., Potthast, M., & Hagen, M. (2020). Overview of Touché 2020: Argument Retrieval. In CLEF 2020 Working Notes: Working Notes of CLEF 2020 - Conference and Labs of the Evaluation Forum (CEUR Workshop Proceedings; Vol. 2696). https://ceur-ws.org/Vol-2696/paper_261.pdf
Bondarenko A, Fröbe M, Beloucif M, Gienapp L, Ajjour Y, Panchenko A et al. Overview of Touché 2020: Argument Retrieval. In CLEF 2020 Working Notes: Working Notes of CLEF 2020 - Conference and Labs of the Evaluation Forum. 2020. (CEUR Workshop Proceedings).
Bondarenko, Alexander; Fröbe, Maik; Beloucif, Meriem et al. / Overview of Touché 2020: Argument Retrieval. CLEF 2020 Working Notes: Working Notes of CLEF 2020 - Conference and Labs of the Evaluation Forum. 2020. (CEUR Workshop Proceedings).
@inproceedings{1879dadfb0fa44ff823268e0589f757d,
title = "Overview of Touch{\'e} 2020: Argument Retrieval",
abstract = "Argumentation is essential for opinion formation, both when debating socially important topics and when making everyday personal decisions. The web provides an enormous source of argumentative texts, in which well-reasoned argumentation is mixed with biased, faked, and populist content. Research on argument retrieval technologies thus focuses not only on retrieving relevant arguments for a given argumentative information need, but also on retrieving arguments of high quality. In this overview of the first shared task on argument retrieval at the CLEF 2020 Touch{\'e} lab, we survey and evaluate 41 approaches submitted by 17 participating teams for two tasks: (1) retrieval of arguments on socially important topics, and (2) retrieval of arguments on everyday personal decisions. The most effective submitted approaches share common techniques, such as query expansion and taking argument quality into account. Still, the evaluation results show that only a few of the submitted approaches (slightly) improve upon relatively simple argumentation-agnostic baselines, indicating that argument retrieval is still in its infancy and merits further research.",
author = "Alexander Bondarenko and Maik Fr{\"o}be and Meriem Beloucif and Lukas Gienapp and Yamen Ajjour and Alexander Panchenko and Chris Biemann and Benno Stein and Henning Wachsmuth and Martin Potthast and Matthias Hagen",
note = "Funding Information: This work was supported by the DFG through the project “ACQuA: Answering Comparative Questions with Arguments” (grants BI 1544/7-1 and HA 5851/2-1) as part of the priority program “RATIO: Robust Argumentation Machines” (SPP 1999).; 11th Conference and Labs of the Evaluation Forum, CLEF 2020 ; Conference date: 22-09-2020 Through 25-09-2020",
year = "2020",
language = "English",
series = "CEUR Workshop Proceedings",
publisher = "CEUR Workshop Proceedings",
booktitle = "CLEF 2020 Working Notes",

}

TY - GEN

T1 - Overview of Touché 2020

T2 - 11th Conference and Labs of the Evaluation Forum, CLEF 2020

AU - Bondarenko, Alexander

AU - Fröbe, Maik

AU - Beloucif, Meriem

AU - Gienapp, Lukas

AU - Ajjour, Yamen

AU - Panchenko, Alexander

AU - Biemann, Chris

AU - Stein, Benno

AU - Wachsmuth, Henning

AU - Potthast, Martin

AU - Hagen, Matthias

N1 - Funding Information: This work was supported by the DFG through the project “ACQuA: Answering Comparative Questions with Arguments” (grants BI 1544/7-1 and HA 5851/2-1) as part of the priority program “RATIO: Robust Argumentation Machines” (SPP 1999).

PY - 2020

Y1 - 2020

N2 - Argumentation is essential for opinion formation, both when debating socially important topics and when making everyday personal decisions. The web provides an enormous source of argumentative texts, in which well-reasoned argumentation is mixed with biased, faked, and populist content. Research on argument retrieval technologies thus focuses not only on retrieving relevant arguments for a given argumentative information need, but also on retrieving arguments of high quality. In this overview of the first shared task on argument retrieval at the CLEF 2020 Touché lab, we survey and evaluate 41 approaches submitted by 17 participating teams for two tasks: (1) retrieval of arguments on socially important topics, and (2) retrieval of arguments on everyday personal decisions. The most effective submitted approaches share common techniques, such as query expansion and taking argument quality into account. Still, the evaluation results show that only a few of the submitted approaches (slightly) improve upon relatively simple argumentation-agnostic baselines, indicating that argument retrieval is still in its infancy and merits further research.

AB - Argumentation is essential for opinion formation, both when debating socially important topics and when making everyday personal decisions. The web provides an enormous source of argumentative texts, in which well-reasoned argumentation is mixed with biased, faked, and populist content. Research on argument retrieval technologies thus focuses not only on retrieving relevant arguments for a given argumentative information need, but also on retrieving arguments of high quality. In this overview of the first shared task on argument retrieval at the CLEF 2020 Touché lab, we survey and evaluate 41 approaches submitted by 17 participating teams for two tasks: (1) retrieval of arguments on socially important topics, and (2) retrieval of arguments on everyday personal decisions. The most effective submitted approaches share common techniques, such as query expansion and taking argument quality into account. Still, the evaluation results show that only a few of the submitted approaches (slightly) improve upon relatively simple argumentation-agnostic baselines, indicating that argument retrieval is still in its infancy and merits further research.

UR - http://www.scopus.com/inward/record.url?scp=85121796632&partnerID=8YFLogxK

M3 - Conference contribution

AN - SCOPUS:85121796632

T3 - CEUR Workshop Proceedings

BT - CLEF 2020 Working Notes

Y2 - 22 September 2020 through 25 September 2020

ER -
