Supporting Requesters in Writing Clear Crowdsourcing Task Descriptions Through Computational Flaw Assessment

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research

Authors

  • Zahra Nouri
  • Nikhil Prakash
  • Ujwal Gadiraju
  • Henning Wachsmuth

Research Organisations

External Research Organisations

  • Paderborn University
  • Northeastern University
  • Delft University of Technology

Details

Original language: English
Title of host publication: IUI 2023 - Proceedings of the 28th International Conference on Intelligent User Interfaces
Place of Publication: New York, NY, USA
Publisher: Association for Computing Machinery (ACM)
Pages: 737–749
Number of pages: 13
ISBN (electronic): 9798400701061
ISBN (print): 9798400701061
Publication status: Published - 27 Mar 2023

Abstract

Quality control is an essential, if not the essential, challenge in crowdsourcing. Unsatisfactory responses from crowd workers have been found to result particularly from ambiguous and incomplete task descriptions, often written by inexperienced task requesters. However, creating clear task descriptions with sufficient information is a complex process for requesters in crowdsourcing marketplaces. In this paper, we investigate the extent to which requesters can be supported effectively in this process through computational techniques. To this end, we developed a tool that enables requesters to iteratively identify and correct eight common clarity flaws in their task descriptions before deployment on the platform. The tool can be used to write task descriptions from scratch or to assess and improve the clarity of prepared descriptions. It employs machine learning-based natural language processing models, trained on real-world task descriptions, that score a given task description for each of the eight clarity flaws. On this basis, the requester can iteratively revise and reassess the task description until it reaches a sufficient level of clarity. In a first user study, we had requesters create task descriptions using the tool and then rate different aspects of its helpfulness. We then carried out a second user study with crowd workers, who are confronted with such descriptions in practice, to rate the clarity of the created task descriptions. According to our results, 65% of the requesters rated the helpfulness of the information provided by the tool as high or very high (only 12% as low or very low). The requesters saw some room for improvement, though, for example concerning the display of bad examples. Nevertheless, 76% of the crowd workers judged that the overall clarity of the task descriptions created by the requesters using the tool improved over the initial versions. In line with this, the automatically computed clarity scores of the edited task descriptions were generally higher than those of the initial descriptions, indicating that the tool reliably predicts the overall clarity of task descriptions.
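The abstract outlines an iterative assess-and-revise workflow: per-flaw models score a description, and the requester revises it until the scores are acceptable. The Python sketch below only mirrors that control flow under stated assumptions; every name (FlawScorer, assess, revise_until_clear), the threshold, and the toy scorers are hypothetical stand-ins, and the paper's actual ML-based NLP models are not reproduced here.

# Minimal sketch of the assess-revise loop; all names are illustrative
# assumptions, not the authors' implementation.
from typing import Callable, Dict

# One scoring function per clarity flaw. In the tool these are ML-based NLP
# models trained on real-world task descriptions; here they are stand-ins
# mapping a description to a clarity score in [0, 1] (higher = clearer).
FlawScorer = Callable[[str], float]

def assess(description: str, scorers: Dict[str, FlawScorer]) -> Dict[str, float]:
    """Score a task description against every clarity flaw."""
    return {flaw: scorer(description) for flaw, scorer in scorers.items()}

def revise_until_clear(
    description: str,
    scorers: Dict[str, FlawScorer],
    revise: Callable[[str, Dict[str, float]], str],
    threshold: float = 0.8,   # assumed per-flaw acceptance level
    max_rounds: int = 10,
) -> str:
    """Reassess and revise until all flaw scores pass the threshold.

    In the tool, the revise step is the human requester editing the text,
    guided by per-flaw feedback; here it is an injected callback.
    """
    for _ in range(max_rounds):
        scores = assess(description, scorers)
        if all(score >= threshold for score in scores.values()):
            break
        description = revise(description, scores)
    return description

# Toy usage: eight placeholder flaws, each "scored" by description length.
if __name__ == "__main__":
    toy_scorers = {f"flaw_{i}": (lambda text: min(1.0, len(text) / 200))
                   for i in range(1, 9)}
    result = revise_until_clear(
        "Label the sentiment of each tweet.",
        toy_scorers,
        revise=lambda text, scores: text + " Please also handle edge cases.",
    )
    print(result)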

Cite this

Supporting Requesters in Writing Clear Crowdsourcing Task Descriptions Through Computational Flaw Assessment. / Nouri, Zahra; Prakash, Nikhil; Gadiraju, Ujwal et al.
IUI 2023 - Proceedings of the 28th International Conference on Intelligent User Interfaces. New York, NY, USA: Association for Computing Machinery (ACM), 2023. p. 737–749.

Research output: Chapter in book/report/conference proceedingConference contributionResearch

Nouri, Z, Prakash, N, Gadiraju, U & Wachsmuth, H 2023, Supporting Requesters in Writing Clear Crowdsourcing Task Descriptions Through Computational Flaw Assessment. in IUI 2023 - Proceedings of the 28th International Conference on Intelligent User Interfaces. Association for Computing Machinery (ACM), New York, NY, USA, pp. 737–749. https://doi.org/10.1145/3581641.3584039
Nouri, Z., Prakash, N., Gadiraju, U., & Wachsmuth, H. (2023). Supporting Requesters in Writing Clear Crowdsourcing Task Descriptions Through Computational Flaw Assessment. In IUI 2023 - Proceedings of the 28th International Conference on Intelligent User Interfaces (pp. 737–749). Association for Computing Machinery (ACM). https://doi.org/10.1145/3581641.3584039
Nouri Z, Prakash N, Gadiraju U, Wachsmuth H. Supporting Requesters in Writing Clear Crowdsourcing Task Descriptions Through Computational Flaw Assessment. In IUI 2023 - Proceedings of the 28th International Conference on Intelligent User Interfaces. New York, NY, USA: Association for Computing Machinery (ACM). 2023. p. 737–749. doi: 10.1145/3581641.3584039
Nouri, Zahra ; Prakash, Nikhil ; Gadiraju, Ujwal et al. / Supporting Requesters in Writing Clear Crowdsourcing Task Descriptions Through Computational Flaw Assessment. IUI 2023 - Proceedings of the 28th International Conference on Intelligent User Interfaces. New York, NY, USA : Association for Computing Machinery (ACM), 2023. pp. 737–749
@inproceedings{c8805a9337804332af8733a51c0d2bcb,
title = "Supporting Requesters in Writing Clear Crowdsourcing Task Descriptions Through Computational Flaw Assessment",
abstract = "Quality control is an, if not the, essential challenge in crowdsourcing. Unsatisfactory responses from crowd workers have been found to particularly result from ambiguous and incomplete task descriptions, often from inexperienced task requesters. However, creating clear task descriptions with sufficient information is a complex process for requesters in crowdsourcing marketplaces. In this paper, we investigate the extent to which requesters can be supported effectively in this process through computational techniques. To this end, we developed a tool that enables requesters to iteratively identify and correct eight common clarity flaws in their task descriptions before deployment on the platform. The tool can be used to write task descriptions from scratch or to assess and improve the clarity of prepared descriptions. It employs machine learning-based natural language processing models trained on real-world task descriptions that score a given task description for the eight clarity flaws. On this basis, the requester can iteratively revise and reassess the task description until it reaches a sufficient level of clarity. In a first user study, we let requesters create task descriptions using the tool and rate the tool{\textquoteright}s different aspects of helpfulness thereafter. We then carried out a second user study with crowd workers, as those who are confronted with such descriptions in practice, to rate the clarity of the created task descriptions. According to our results, 65\% of the requesters classified the helpfulness of the information provided by the tool high or very high (only 12\% as low or very low). The requesters saw some room for improvement though, for example, concerning the display of bad examples. Nevertheless, 76\% of the crowd workers believe that the overall clarity of the task descriptions created by the requesters using the tool improves over the initial version. In line with this, the automatically-computed clarity scores of the edited task descriptions were generally higher than those of the initial descriptions, indicating that the tool reliably predicts the clarity of task descriptions in overall terms.",
author = "Zahra Nouri and Nikhil Prakash and Ujwal Gadiraju and Henning Wachsmuth",
year = "2023",
month = mar,
day = "27",
doi = "10.1145/3581641.3584039",
language = "English",
isbn = "9798400701061",
pages = "737–749",
booktitle = "IUI 2023 - Proceedings of the 28th International Conference on Intelligent User Interfaces",
publisher = "Association for Computing Machinery (ACM)",
address = "United States",

}


TY - GEN

T1 - Supporting Requesters in Writing Clear Crowdsourcing Task Descriptions Through Computational Flaw Assessment

AU - Nouri, Zahra

AU - Prakash, Nikhil

AU - Gadiraju, Ujwal

AU - Wachsmuth, Henning

PY - 2023/3/27

Y1 - 2023/3/27

N2 - Quality control is an, if not the, essential challenge in crowdsourcing. Unsatisfactory responses from crowd workers have been found to particularly result from ambiguous and incomplete task descriptions, often from inexperienced task requesters. However, creating clear task descriptions with sufficient information is a complex process for requesters in crowdsourcing marketplaces. In this paper, we investigate the extent to which requesters can be supported effectively in this process through computational techniques. To this end, we developed a tool that enables requesters to iteratively identify and correct eight common clarity flaws in their task descriptions before deployment on the platform. The tool can be used to write task descriptions from scratch or to assess and improve the clarity of prepared descriptions. It employs machine learning-based natural language processing models trained on real-world task descriptions that score a given task description for the eight clarity flaws. On this basis, the requester can iteratively revise and reassess the task description until it reaches a sufficient level of clarity. In a first user study, we let requesters create task descriptions using the tool and rate the tool’s different aspects of helpfulness thereafter. We then carried out a second user study with crowd workers, as those who are confronted with such descriptions in practice, to rate the clarity of the created task descriptions. According to our results, 65% of the requesters classified the helpfulness of the information provided by the tool high or very high (only 12% as low or very low). The requesters saw some room for improvement though, for example, concerning the display of bad examples. Nevertheless, 76% of the crowd workers believe that the overall clarity of the task descriptions created by the requesters using the tool improves over the initial version. In line with this, the automatically-computed clarity scores of the edited task descriptions were generally higher than those of the initial descriptions, indicating that the tool reliably predicts the clarity of task descriptions in overall terms.

AB - Quality control is an, if not the, essential challenge in crowdsourcing. Unsatisfactory responses from crowd workers have been found to particularly result from ambiguous and incomplete task descriptions, often from inexperienced task requesters. However, creating clear task descriptions with sufficient information is a complex process for requesters in crowdsourcing marketplaces. In this paper, we investigate the extent to which requesters can be supported effectively in this process through computational techniques. To this end, we developed a tool that enables requesters to iteratively identify and correct eight common clarity flaws in their task descriptions before deployment on the platform. The tool can be used to write task descriptions from scratch or to assess and improve the clarity of prepared descriptions. It employs machine learning-based natural language processing models trained on real-world task descriptions that score a given task description for the eight clarity flaws. On this basis, the requester can iteratively revise and reassess the task description until it reaches a sufficient level of clarity. In a first user study, we let requesters create task descriptions using the tool and rate the tool’s different aspects of helpfulness thereafter. We then carried out a second user study with crowd workers, as those who are confronted with such descriptions in practice, to rate the clarity of the created task descriptions. According to our results, 65% of the requesters classified the helpfulness of the information provided by the tool high or very high (only 12% as low or very low). The requesters saw some room for improvement though, for example, concerning the display of bad examples. Nevertheless, 76% of the crowd workers believe that the overall clarity of the task descriptions created by the requesters using the tool improves over the initial version. In line with this, the automatically-computed clarity scores of the edited task descriptions were generally higher than those of the initial descriptions, indicating that the tool reliably predicts the clarity of task descriptions in overall terms.

UR - http://www.scopus.com/inward/record.url?scp=85152130464&partnerID=8YFLogxK

U2 - 10.1145/3581641.3584039

DO - 10.1145/3581641.3584039

M3 - Conference contribution

SN - 9798400701061

SP - 737

EP - 749

BT - IUI 2023 - Proceedings of the 28th International Conference on Intelligent User Interfaces

PB - Association for Computing Machinery (ACM)

CY - New York, NY, USA

ER -
