Details
Original language | English |
---|---|
Title of host publication | Advances in Information Retrieval |
Subtitle | 39th European Conference on IR Research, ECIR 2017, Proceedings |
Editors | Claudia Hauff, Joemon M. Jose, Dyaa Albakour, Ismail Sengor Altingovde, John Tait, Dawei Song, Stuart Watt |
Publisher | Springer Verlag |
Pages | 186-198 |
Number of pages | 13 |
ISBN (Print) | 9783319566078 |
Publication status | Published - 2017 |
Event | 39th European Conference on Information Retrieval, ECIR 2017 - Aberdeen, United Kingdom. Duration: 8 Apr 2017 → 13 Apr 2017 |
Publication series
Name | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
---|---|
Volume | 10193 LNCS |
ISSN (Print) | 0302-9743 |
ISSN (electronic) | 1611-3349 |
Abstract
“Do machines perform better than humans in visual recognition tasks?” Not so long ago, this question would have been considered somewhat provocative, and the answer would have been a clear “No”. In this paper, we present a comparison of human and machine performance in annotation for multimedia retrieval tasks. Going beyond recent crowdsourcing studies in this area, we also report the results of two extensive user studies: in total, 23 participants were asked to annotate more than 1000 images of a benchmark dataset, making this the most comprehensive study in the field so far. Krippendorff’s α is used to measure inter-coder agreement among several coders, and the results are compared with the best machine results. The study is preceded by a summary of studies that compared human and machine performance in different visual and auditory recognition tasks. We discuss the results and derive a methodology for comparing machine performance in multimedia annotation tasks at the human level. This allows us to formally answer the question of whether a recognition problem can be considered solved. Finally, we answer the initial question.
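The abstract names Krippendorff’s α as the inter-coder agreement measure but, naturally, does not spell out the computation. The following minimal Python sketch (illustrative only, not code from the paper) computes the nominal-data variant of α, the form that applies to categorical image tags; the function name, data layout, and toy labels are all assumptions made for this example.

```python
from collections import Counter

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal labels.

    `units` maps each unit (e.g. an image) to the list of labels the
    coders assigned to it; coders who skipped a unit are simply absent.
    Units with fewer than two labels carry no agreement information.
    Assumes at least two distinct label categories occur overall.
    """
    # Coincidence matrix: every ordered pair of labels within a unit
    # contributes 1 / (m - 1), where m is the number of labels there.
    o = Counter()
    for labels in units.values():
        m = len(labels)
        if m < 2:
            continue
        for i, c in enumerate(labels):
            for j, k in enumerate(labels):
                if i != j:
                    o[(c, k)] += 1.0 / (m - 1)

    n_c = Counter()                    # marginal total per label category
    for (c, _), w in o.items():
        n_c[c] += w
    n = sum(n_c.values())              # total number of pairable labels

    # Observed vs. expected disagreement under the nominal metric.
    d_o = sum(w for (c, k), w in o.items() if c != k)
    d_e = sum(n_c[c] * n_c[k] for c in n_c for k in n_c if c != k) / (n - 1)
    return 1.0 - d_o / d_e

# Toy example: three coders tag images as "cat" or "dog".
ratings = {
    "img_001": ["cat", "cat", "dog"],
    "img_002": ["dog", "dog", "dog"],
    "img_003": ["cat", "cat"],         # one coder skipped this image
}
print(krippendorff_alpha_nominal(ratings))  # -> 0.5625
```

Building the coincidence matrix from ordered pairs within each unit is what lets α handle a varying number of coders per item, which matters in user studies like this one where not every participant annotates every image.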
ASJC Scopus subject areas
- Mathematics (all)
- Theoretical Computer Science
- Computer Science (all)
Cite this
Advances in Information Retrieval: 39th European Conference on IR Research, ECIR 2017, Proceedings. Ed. / Claudia Hauff; Joemon M. Jose; Dyaa Albakour; Ismail Sengor Altingovde; John Tait; Dawei Song; Stuart Watt. Springer Verlag, 2017. pp. 186-198 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 10193 LNCS).
Publication: Contribution to book/report/anthology/conference proceedings › Conference contribution › Research › Peer review
TY - GEN
T1 - “Are machines better than humans in image tagging?” - A user study adds to the puzzle
AU - Ewerth, Ralph
AU - Springstein, Matthias
AU - Phan-Vogtmann, Lo An
AU - Schütze, Juliane
N1 - Publisher Copyright: © The Author(s) 2017. Copyright: Copyright 2017 Elsevier B.V., All rights reserved.
PY - 2017
Y1 - 2017
N2 - “Do machines perform better than humans in visual recognition tasks?” Not so long ago, this question would have been considered somewhat provocative, and the answer would have been a clear “No”. In this paper, we present a comparison of human and machine performance in annotation for multimedia retrieval tasks. Going beyond recent crowdsourcing studies in this area, we also report the results of two extensive user studies: in total, 23 participants were asked to annotate more than 1000 images of a benchmark dataset, making this the most comprehensive study in the field so far. Krippendorff’s α is used to measure inter-coder agreement among several coders, and the results are compared with the best machine results. The study is preceded by a summary of studies that compared human and machine performance in different visual and auditory recognition tasks. We discuss the results and derive a methodology for comparing machine performance in multimedia annotation tasks at the human level. This allows us to formally answer the question of whether a recognition problem can be considered solved. Finally, we answer the initial question.
AB - “Do machines perform better than humans in visual recognition tasks?” Not so long ago, this question would have been considered somewhat provocative, and the answer would have been a clear “No”. In this paper, we present a comparison of human and machine performance in annotation for multimedia retrieval tasks. Going beyond recent crowdsourcing studies in this area, we also report the results of two extensive user studies: in total, 23 participants were asked to annotate more than 1000 images of a benchmark dataset, making this the most comprehensive study in the field so far. Krippendorff’s α is used to measure inter-coder agreement among several coders, and the results are compared with the best machine results. The study is preceded by a summary of studies that compared human and machine performance in different visual and auditory recognition tasks. We discuss the results and derive a methodology for comparing machine performance in multimedia annotation tasks at the human level. This allows us to formally answer the question of whether a recognition problem can be considered solved. Finally, we answer the initial question.
UR - http://www.scopus.com/inward/record.url?scp=85018704962&partnerID=8YFLogxK
U2 - 10.1007/978-3-319-56608-5_15
DO - 10.1007/978-3-319-56608-5_15
M3 - Conference contribution
AN - SCOPUS:85018704962
SN - 9783319566078
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 186
EP - 198
BT - Advances in Information Retrieval
A2 - Hauff, Claudia
A2 - Jose, Joemon M.
A2 - Albakour, Dyaa
A2 - Altingovde, Ismail Sengor
A2 - Tait, John
A2 - Song, Dawei
A2 - Watt, Stuart
PB - Springer Verlag
T2 - 39th European Conference on Information Retrieval, ECIR 2017
Y2 - 8 April 2017 through 13 April 2017
ER -