Details
Original language | English |
---|---|
Title of host publication | CLEF 2020 Working Notes |
Subtitle | Working Notes of CLEF 2020 - Conference and Labs of the Evaluation Forum |
Number of pages | 22 |
Publication status | Published - 2020 |
Externally published | Yes |
Event | 11th Conference and Labs of the Evaluation Forum, CLEF 2020 - Thessaloniki, Greece Duration: 22 Sept. 2020 → 25 Sept. 2020 |
Publication series
Name | CEUR Workshop Proceedings |
---|---|
Publisher | CEUR Workshop Proceedings |
Volume | 2696 |
ISSN (Print) | 1613-0073 |
Abstract
Argumentation is essential for opinion formation when it comes to debating socially important topics as well as when making everyday personal decisions. The web provides an enormous source of argumentative texts, where well-reasoned argumentations are mixed with biased, faked, and populist ones. The research direction of developing argument retrieval technologies thus focuses not only on retrieving relevant arguments for some argumentative information need, but also on retrieving arguments of high quality. In this overview of the first shared task on argument retrieval at the CLEF 2020 Touché lab, we survey and evaluate 41 approaches submitted by 17 participating teams for two tasks: (1) retrieval of arguments on socially important topics, and (2) retrieval of arguments on everyday personal decisions. The most effective approaches submitted share some common techniques, such as query expansion and taking argument quality into account. Still, the evaluation results show that only a few of the submitted approaches (slightly) improve upon relatively simple argumentation-agnostic baselines, indicating that argument retrieval is still in its infancy and merits further research in this direction.
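To make the two techniques highlighted in the abstract more concrete (query expansion and taking argument quality into account), the following is a minimal, self-contained Python sketch. The mini-corpus, the synonym table, the tf-idf scoring, and the weighting parameter `alpha` are all illustrative assumptions and do not reflect any particular Touché submission or the lab's evaluation setup.

```python
# Illustrative sketch only: a tiny argument retrieval pipeline that combines
# synonym-based query expansion with quality-aware re-ranking.
# The corpus, synonym table, and weights are hypothetical.
from collections import Counter
import math

# Hypothetical mini-corpus: each argument has a text and a quality score in [0, 1].
CORPUS = [
    {"id": "a1", "text": "school uniforms reduce bullying and peer pressure", "quality": 0.9},
    {"id": "a2", "text": "uniforms are expensive for many families", "quality": 0.6},
    {"id": "a3", "text": "dress codes limit students freedom of expression", "quality": 0.8},
]

# Hypothetical synonym table used for query expansion.
SYNONYMS = {"uniforms": ["dress", "clothing"], "school": ["students"]}

def expand(query_terms):
    """Simple query expansion: add synonyms of each query term."""
    expanded = list(query_terms)
    for term in query_terms:
        expanded.extend(SYNONYMS.get(term, []))
    return expanded

def tf_idf_score(query_terms, doc_terms, n_docs, df):
    """Plain tf-idf relevance score of one document for the (expanded) query."""
    counts = Counter(doc_terms)
    score = 0.0
    for term in query_terms:
        if counts[term]:
            idf = math.log((n_docs + 1) / (df[term] + 1)) + 1
            score += counts[term] * idf
    return score

def retrieve(query, alpha=0.7):
    """Rank arguments by a weighted mix of relevance and argument quality.
    alpha trades off relevance against quality; the value is arbitrary."""
    terms = expand(query.lower().split())
    docs = [(arg, arg["text"].split()) for arg in CORPUS]
    df = Counter(t for _, toks in docs for t in set(toks))
    ranked = []
    for arg, toks in docs:
        rel = tf_idf_score(terms, toks, len(docs), df)
        ranked.append((alpha * rel + (1 - alpha) * arg["quality"], arg["id"]))
    return sorted(ranked, reverse=True)

if __name__ == "__main__":
    print(retrieve("should students wear school uniforms"))
```

Running the script prints the arguments in descending order of the combined score; lowering `alpha` shifts the ranking toward the (assumed) quality scores, which mirrors, in spirit, how quality-aware re-ranking changes results compared to a purely argumentation-agnostic baseline.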
ASJC Scopus subject areas
- Computer Science (all)
- General Computer Science
Cite
CLEF 2020 Working Notes: Working Notes of CLEF 2020 - Conference and Labs of the Evaluation Forum. 2020. (CEUR Workshop Proceedings; Vol. 2696).
Publication: Chapter in Book/Report/Conference proceeding › Conference contribution › Research › Peer-reviewed
TY - GEN
T1 - Overview of Touché 2020
T2 - 11th Conference and Labs of the Evaluation Forum, CLEF 2020
AU - Bondarenko, Alexander
AU - Fröbe, Maik
AU - Beloucif, Meriem
AU - Gienapp, Lukas
AU - Ajjour, Yamen
AU - Panchenko, Alexander
AU - Biemann, Chris
AU - Stein, Benno
AU - Wachsmuth, Henning
AU - Potthast, Martin
AU - Hagen, Matthias
N1 - Funding Information: This work was supported by the DFG through the project “ACQuA: Answering Comparative Questions with Arguments” (grants BI 1544/7-1 and HA 5851/2-1) as part of the priority program “RATIO: Robust Argumentation Machines” (SPP 1999).
PY - 2020
Y1 - 2020
N2 - Argumentation is essential for opinion formation when it comes to debating socially important topics as well as when making everyday personal decisions. The web provides an enormous source of argumentative texts, where well-reasoned argumentations are mixed with biased, faked, and populist ones. The research direction of developing argument retrieval technologies thus focuses not only on retrieving relevant arguments for some argumentative information need, but also on retrieving arguments of high quality. In this overview of the first shared task on argument retrieval at the CLEF 2020 Touché lab, we survey and evaluate 41 approaches submitted by 17 participating teams for two tasks: (1) retrieval of arguments on socially important topics, and (2) retrieval of arguments on everyday personal decisions. The most effective approaches submitted share some common techniques, such as query expansion and taking argument quality into account. Still, the evaluation results show that only a few of the submitted approaches (slightly) improve upon relatively simple argumentation-agnostic baselines, indicating that argument retrieval is still in its infancy and merits further research in this direction.
AB - Argumentation is essential for opinion formation when it comes to debating socially important topics as well as when making everyday personal decisions. The web provides an enormous source of argumentative texts, where well-reasoned argumentations are mixed with biased, faked, and populist ones. The research direction of developing argument retrieval technologies thus focuses not only on retrieving relevant arguments for some argumentative information need, but also on retrieving arguments of high quality. In this overview of the first shared task on argument retrieval at the CLEF 2020 Touché lab, we survey and evaluate 41 approaches submitted by 17 participating teams for two tasks: (1) retrieval of arguments on socially important topics, and (2) retrieval of arguments on everyday personal decisions. The most effective approaches submitted share some common techniques, such as query expansion and taking argument quality into account. Still, the evaluation results show that only a few of the submitted approaches (slightly) improve upon relatively simple argumentation-agnostic baselines, indicating that argument retrieval is still in its infancy and merits further research in this direction.
UR - http://www.scopus.com/inward/record.url?scp=85121796632&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85121796632
T3 - CEUR Workshop Proceedings
BT - CLEF 2020 Working Notes
Y2 - 22 September 2020 through 25 September 2020
ER -