Details
Original language | English |
---|---|
Title of host publication | WSDM 2020 - Proceedings of the 13th International Conference on Web Search and Data Mining |
Pages | 241-249 |
Number of pages | 9 |
ISBN (electronic) | 9781450368223 |
Publication status | Published - 20 Jan 2020 |
Event | 13th ACM International Conference on Web Search and Data Mining, WSDM 2020 - Houston, United States. Duration: 3 Feb 2020 → 7 Feb 2020 |
Publication series
Name | WSDM 2020 - Proceedings of the 13th International Conference on Web Search and Data Mining |
---|
Abstract
Crowdsourcing is a popular technique for collecting large amounts of human-generated labels, such as the relevance judgments used to create information retrieval (IR) evaluation collections. Previous research has shown that collecting high-quality labels from a crowdsourcing platform can be challenging. Existing quality assurance techniques focus on answer aggregation or on the use of gold questions, where ground-truth data allows one to check the quality of the responses. In this paper, we present qualitative and quantitative results revealing how different crowd workers adopt different work strategies to complete relevance judgment tasks efficiently, and the consequent impact on quality. We delve into the techniques and tools that highly experienced crowd workers use to complete crowdsourcing micro-tasks more efficiently. To this end, we use qualitative results from worker interviews and surveys, as well as the results of a data-driven study of behavioral log data (i.e., clicks, keystrokes, and keyboard shortcuts) collected from crowd workers performing relevance judgment tasks. Our results highlight the presence of frequently used shortcut patterns that can speed up task completion, thus increasing the hourly wage of efficient workers. We observe how crowd work experience results in different types of working strategies, productivity levels, and quality and diversity of the crowdsourced judgments.
ASJC Scopus subject areas
- Computer Science (all)
- Computer Networks and Communications
- Computer Science (all)
- Software
- Computer Science (all)
- Computer Science Applications
Cite this
Han, L., Maddalena, E., Checco, A., Sarasua, C., Gadiraju, U., Roitero, K., & Demartini, G. (2020). Crowd worker strategies in relevance judgment tasks. In WSDM 2020 - Proceedings of the 13th International Conference on Web Search and Data Mining (pp. 241-249). (WSDM 2020 - Proceedings of the 13th International Conference on Web Search and Data Mining). https://doi.org/10.1145/3336191.3371857
Publication: Contribution to book/report/collection/conference proceedings › Conference paper › Research › Peer-review
TY - GEN
T1 - Crowd worker strategies in relevance judgment tasks
AU - Han, Lei
AU - Maddalena, Eddy
AU - Checco, Alessandro
AU - Sarasua, Cristina
AU - Gadiraju, Ujwal
AU - Roitero, Kevin
AU - Demartini, Gianluca
N1 - Funding Information: This work is supported by the ARC Discovery Project (DP190102141) and the Erasmus+ project DISKOW (60171990).
PY - 2020/1/20
Y1 - 2020/1/20
N2 - Crowdsourcing is a popular technique for collecting large amounts of human-generated labels, such as the relevance judgments used to create information retrieval (IR) evaluation collections. Previous research has shown that collecting high-quality labels from a crowdsourcing platform can be challenging. Existing quality assurance techniques focus on answer aggregation or on the use of gold questions, where ground-truth data allows one to check the quality of the responses. In this paper, we present qualitative and quantitative results revealing how different crowd workers adopt different work strategies to complete relevance judgment tasks efficiently, and the consequent impact on quality. We delve into the techniques and tools that highly experienced crowd workers use to complete crowdsourcing micro-tasks more efficiently. To this end, we use qualitative results from worker interviews and surveys, as well as the results of a data-driven study of behavioral log data (i.e., clicks, keystrokes, and keyboard shortcuts) collected from crowd workers performing relevance judgment tasks. Our results highlight the presence of frequently used shortcut patterns that can speed up task completion, thus increasing the hourly wage of efficient workers. We observe how crowd work experience results in different types of working strategies, productivity levels, and quality and diversity of the crowdsourced judgments.
AB - Crowdsourcing is a popular technique for collecting large amounts of human-generated labels, such as the relevance judgments used to create information retrieval (IR) evaluation collections. Previous research has shown that collecting high-quality labels from a crowdsourcing platform can be challenging. Existing quality assurance techniques focus on answer aggregation or on the use of gold questions, where ground-truth data allows one to check the quality of the responses. In this paper, we present qualitative and quantitative results revealing how different crowd workers adopt different work strategies to complete relevance judgment tasks efficiently, and the consequent impact on quality. We delve into the techniques and tools that highly experienced crowd workers use to complete crowdsourcing micro-tasks more efficiently. To this end, we use qualitative results from worker interviews and surveys, as well as the results of a data-driven study of behavioral log data (i.e., clicks, keystrokes, and keyboard shortcuts) collected from crowd workers performing relevance judgment tasks. Our results highlight the presence of frequently used shortcut patterns that can speed up task completion, thus increasing the hourly wage of efficient workers. We observe how crowd work experience results in different types of working strategies, productivity levels, and quality and diversity of the crowdsourced judgments.
KW - Crowdsourcing
KW - IR evaluation
KW - Relevance judgment
KW - User behavior
UR - http://www.scopus.com/inward/record.url?scp=85079523647&partnerID=8YFLogxK
U2 - 10.1145/3336191.3371857
DO - 10.1145/3336191.3371857
M3 - Conference contribution
AN - SCOPUS:85079523647
T3 - WSDM 2020 - Proceedings of the 13th International Conference on Web Search and Data Mining
SP - 241
EP - 249
BT - WSDM 2020 - Proceedings of the 13th International Conference on Web Search and Data Mining
T2 - 13th ACM International Conference on Web Search and Data Mining, WSDM 2020
Y2 - 3 February 2020 through 7 February 2020
ER -