What Is Unclear? Computational Assessment of Task Clarity in Crowdsourcing

Publication: Contribution to book/report/anthology/conference proceedings › Conference paper › Research › Peer-reviewed

Authors

  • Zahra Nouri
  • Ujwal Gadiraju
  • Gregor Engels
  • Henning Wachsmuth

External organisations

  • Universität Paderborn
  • Delft University of Technology

Details

Original language: English
Title of host publication: HT 2021 - Proceedings of the 32nd ACM Conference on Hypertext and Social Media
Pages: 165-175
Number of pages: 11
Publication status: Published - 29 Aug. 2021
Externally published: Yes
Event: 32nd ACM Conference on Hypertext and Social Media, HT 2021 - Virtual, Online
Duration: 30 Aug. 2021 - 2 Sept. 2021

Abstract

Designing tasks clearly to facilitate accurate task completion is a challenging endeavor for requesters on crowdsourcing platforms. Prior research shows that inexperienced requesters fail to write clear and complete task descriptions, which directly leads to low-quality submissions from workers. Complementing existing work that has aimed to address this challenge, in this paper we study whether clarity flaws in task descriptions can be identified automatically using natural language processing methods. We identify and synthesize seven clarity flaws in task descriptions that are grounded in relevant literature. We build both BERT-based and feature-based binary classifiers in order to study the extent to which clarity flaws in task descriptions can be computationally assessed, and to understand textual properties of descriptions that affect task clarity. Through a crowdsourced study, we collect annotations of clarity flaws in 1332 real task descriptions. Using this dataset, we evaluate several configurations of the classifiers. Our results indicate that nearly all the clarity flaws in task descriptions can be assessed reasonably well by the classifiers. We found that the content, style, and readability of task descriptions are particularly important in shaping their clarity. This work has important implications for the design of tools to help requesters improve task clarity on crowdsourcing platforms. Flaw-specific properties can provide valuable guidance for improving task descriptions.
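To make the classification setup concrete, below is a minimal sketch, under stated assumptions, of a BERT-based binary classifier for one clarity flaw. It is not the authors' implementation: the bert-base-uncased checkpoint, the toy task descriptions, and the label convention (1 = flaw present) are illustrative assumptions, and the Hugging Face transformers and PyTorch APIs stand in for whatever tooling the paper used.

# A minimal sketch (not the paper's implementation) of fine-tuning a BERT-based
# binary classifier to flag one clarity flaw in task descriptions.
# Assumptions: bert-base-uncased as the checkpoint, toy labeled examples,
# label 1 = flaw present, label 0 = flaw absent.
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-uncased"  # assumed checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# Toy annotated task descriptions (placeholders for the 1332 crowdsourced annotations).
texts = [
    "Transcribe the text shown in each image. Skip images that are too blurry to read.",
    "Do the task well and submit quickly.",
]
labels = torch.tensor([0, 1])  # 0 = clear, 1 = unclear with respect to this flaw

enc = tokenizer(texts, truncation=True, padding=True, return_tensors="pt")
dataset = TensorDataset(enc["input_ids"], enc["attention_mask"], labels)
loader = DataLoader(dataset, batch_size=2, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for _ in range(1):  # a single epoch, just to illustrate the training loop
    for input_ids, attention_mask, y in loader:
        out = model(input_ids=input_ids, attention_mask=attention_mask, labels=y)
        out.loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# Inference: flag whether a new description exhibits the flaw.
model.eval()
with torch.no_grad():
    new = tokenizer(["Label each tweet as positive or negative."],
                    truncation=True, padding=True, return_tensors="pt")
    prediction = model(**new).logits.argmax(dim=-1)
print(prediction)  # tensor([0]) -> clear, tensor([1]) -> flawed

The abstract suggests one such binary decision per flaw (seven in total), evaluated alongside feature-based classifiers built on content, style, and readability properties; the 1332 annotated real task descriptions would replace the toy examples above.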


Cite

What Is Unclear? Computational Assessment of Task Clarity in Crowdsourcing. / Nouri, Zahra; Gadiraju, Ujwal; Engels, Gregor et al.
HT 2021 - Proceedings of the 32nd ACM Conference on Hypertext and Social Media. 2021. pp. 165-175.

Publication: Contribution to book/report/anthology/conference proceedings › Conference paper › Research › Peer-reviewed

Nouri, Z, Gadiraju, U, Engels, G & Wachsmuth, H 2021, What Is Unclear? Computational Assessment of Task Clarity in Crowdsourcing. in HT 2021 - Proceedings of the 32nd ACM Conference on Hypertext and Social Media. pp. 165-175, 32nd ACM Conference on Hypertext and Social Media, HT 2021, Virtual, Online, 30 Aug. 2021. https://doi.org/10.1145/3465336.3475109
Nouri, Z., Gadiraju, U., Engels, G., & Wachsmuth, H. (2021). What Is Unclear? Computational Assessment of Task Clarity in Crowdsourcing. In HT 2021 - Proceedings of the 32nd ACM Conference on Hypertext and Social Media (pp. 165-175). https://doi.org/10.1145/3465336.3475109
Nouri Z, Gadiraju U, Engels G, Wachsmuth H. What Is Unclear? Computational Assessment of Task Clarity in Crowdsourcing. In HT 2021 - Proceedings of the 32nd ACM Conference on Hypertext and Social Media. 2021. p. 165-175. doi: 10.1145/3465336.3475109
Nouri, Zahra ; Gadiraju, Ujwal ; Engels, Gregor et al. / What Is Unclear? Computational Assessment of Task Clarity in Crowdsourcing. HT 2021 - Proceedings of the 32nd ACM Conference on Hypertext and Social Media. 2021. pp. 165-175.
@inproceedings{5dd61c646d5d407eb1559fc682975afe,
title = "What Is Unclear?: Computational Assessment of Task Clarity in Crowdsourcing",
abstract = "Designing tasks clearly to facilitate accurate task completion is a challenging endeavor for requesters on crowdsourcing platforms. Prior research shows that inexperienced requesters fail to write clear and complete task descriptions which directly leads to low quality submissions from workers. By complementing existing works that have aimed to address this challenge, in this paper we study whether clarity flaws in task descriptions can be identified automatically using natural language processing methods. We identify and synthesize seven clarity flaws in task descriptions that are grounded in relevant literature. We build both BERT-based and feature-based binary classifiers, in order to study the extent to which clarity flaws in task descriptions can be computationally assessed, and understand textual properties of descriptions that affect task clarity. Through a crowdsourced study, we collect annotations of clarity flaws in 1332 real task descriptions. Using this dataset, we evaluate several configurations of the classifiers. Our results indicate that nearly all the clarity flaws in task descriptions can be assessed reasonably by the classifiers. We found that the content, style, and readability of tasks descriptions are particularly important in shaping their clarity. This work has important implications on the design of tools to help requesters in improving task clarity on crowdsourcing platforms. Flaw-specific properties can provide for valuable guidance in improving task descriptions.",
keywords = "BERT-based binary classification, crowdsourcing, feature-based binary classification, task clarity assessment, task design, unclear task descriptions",
author = "Zahra Nouri and Ujwal Gadiraju and Gregor Engels and Henning Wachsmuth",
year = "2021",
month = aug,
day = "29",
doi = "10.1145/3465336.3475109",
language = "English",
isbn = "9781450385510",
pages = "165--175",
booktitle = "HT 2021 - Proceedings of the 32nd ACM Conference on Hypertext and Social Media",
note = "32nd ACM Conference on Hypertext and Social Media, HT 2021 ; Conference date: 30-08-2021 Through 02-09-2021",

}


TY - GEN

T1 - What Is Unclear? Computational Assessment of Task Clarity in Crowdsourcing

T2 - 32nd ACM Conference on Hypertext and Social Media, HT 2021

AU - Nouri, Zahra

AU - Gadiraju, Ujwal

AU - Engels, Gregor

AU - Wachsmuth, Henning

PY - 2021/8/29

Y1 - 2021/8/29

N2 - Designing tasks clearly to facilitate accurate task completion is a challenging endeavor for requesters on crowdsourcing platforms. Prior research shows that inexperienced requesters fail to write clear and complete task descriptions, which directly leads to low-quality submissions from workers. Complementing existing work that has aimed to address this challenge, in this paper we study whether clarity flaws in task descriptions can be identified automatically using natural language processing methods. We identify and synthesize seven clarity flaws in task descriptions that are grounded in relevant literature. We build both BERT-based and feature-based binary classifiers in order to study the extent to which clarity flaws in task descriptions can be computationally assessed, and to understand textual properties of descriptions that affect task clarity. Through a crowdsourced study, we collect annotations of clarity flaws in 1332 real task descriptions. Using this dataset, we evaluate several configurations of the classifiers. Our results indicate that nearly all the clarity flaws in task descriptions can be assessed reasonably well by the classifiers. We found that the content, style, and readability of task descriptions are particularly important in shaping their clarity. This work has important implications for the design of tools to help requesters improve task clarity on crowdsourcing platforms. Flaw-specific properties can provide valuable guidance for improving task descriptions.

AB - Designing tasks clearly to facilitate accurate task completion is a challenging endeavor for requesters on crowdsourcing platforms. Prior research shows that inexperienced requesters fail to write clear and complete task descriptions, which directly leads to low-quality submissions from workers. Complementing existing work that has aimed to address this challenge, in this paper we study whether clarity flaws in task descriptions can be identified automatically using natural language processing methods. We identify and synthesize seven clarity flaws in task descriptions that are grounded in relevant literature. We build both BERT-based and feature-based binary classifiers in order to study the extent to which clarity flaws in task descriptions can be computationally assessed, and to understand textual properties of descriptions that affect task clarity. Through a crowdsourced study, we collect annotations of clarity flaws in 1332 real task descriptions. Using this dataset, we evaluate several configurations of the classifiers. Our results indicate that nearly all the clarity flaws in task descriptions can be assessed reasonably well by the classifiers. We found that the content, style, and readability of task descriptions are particularly important in shaping their clarity. This work has important implications for the design of tools to help requesters improve task clarity on crowdsourcing platforms. Flaw-specific properties can provide valuable guidance for improving task descriptions.

KW - BERT-based binary classification

KW - crowdsourcing

KW - feature-based binary classification

KW - task clarity assessment

KW - task design

KW - unclear task descriptions

UR - http://www.scopus.com/inward/record.url?scp=85114834391&partnerID=8YFLogxK

U2 - 10.1145/3465336.3475109

DO - 10.1145/3465336.3475109

M3 - Conference contribution

AN - SCOPUS:85114834391

SN - 9781450385510

SP - 165

EP - 175

BT - HT 2021 - Proceedings of the 32nd ACM Conference on Hypertext and Social Media

Y2 - 30 August 2021 through 2 September 2021

ER -
