A Representative Study on Human Detection of Artificially Generated Media Across Countries

Publication: Contribution to book/report/anthology/conference proceedings › Conference paper › Research › Peer-reviewed

Authors

  • Joel Frank
  • Franziska Herbert
  • Jonas Ricker
  • Lea Schönherr
  • Thorsten Eisenhofer
  • Asja Fischer
  • Markus Dürmuth
  • Thorsten Holz

External organisations

  • Ruhr-Universität Bochum
  • Helmholtz-Zentrum für Informationssicherheit (CISPA)
  • Technische Universität Berlin

Details

Original language: English
Title of host publication: Proceedings 45th IEEE Symposium on Security and Privacy
Pages: 55-73
Number of pages: 19
ISBN (electronic): 979-8-3503-3130-1
Publication status: Published - 23 May 2024

Publication series

Name: Proceedings - IEEE Symposium on Security and Privacy
ISSN (Print): 1081-6011

Abstract

AI-generated media has become a threat to our digital society as we know it. Forgeries can be created automatically and on a large scale based on publicly available technologies. Recognizing this challenge, academics and practitioners have proposed a multitude of automatic detection strategies to detect such artificial media. However, in contrast to these technological advances, the human perception of generated media has not been thoroughly studied yet. In this paper, we aim to close this research gap. We conduct the first comprehensive survey on people's ability to detect generated media, spanning three countries (USA, Germany, and China), with 3,002 participants covering audio, image, and text media. Our results indicate that state-of-the-art forgeries are almost indistinguishable from "real" media, with the majority of participants simply guessing when asked to rate them as human- or machine-generated. In addition, AI-generated media is rated as more likely to be human-generated across all media types and all countries. To further understand which factors influence people's ability to detect AI-generated media, we include personal variables, chosen based on a literature review in the domains of deepfake and fake news research. In a regression analysis, we found that generalized trust, cognitive reflection, and self-reported familiarity with deepfakes significantly influence participants' decisions across all media categories.
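The regression mentioned in the abstract can be sketched roughly as follows. This is a minimal, hypothetical illustration, not the authors' analysis code or data: it assumes a binary per-stimulus rating ("human-generated" vs. "machine-generated") modelled with a logistic regression on synthetic predictors named generalized_trust, crt_score, and deepfake_familiarity, using Python and statsmodels.

# Illustrative sketch only (not from the paper): a logistic regression of the
# kind described in the abstract. Variable names, scales, and the synthetic
# data below are assumptions made for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 3002  # number of participants reported in the abstract

df = pd.DataFrame({
    "generalized_trust": rng.normal(0, 1, n),        # standardized scale score
    "crt_score": rng.integers(0, 4, n),               # 0-3 correct CRT items
    "deepfake_familiarity": rng.integers(1, 6, n),    # 1-5 self-report
    "country": rng.choice(["USA", "Germany", "China"], n),
})

# Synthetic outcome: 1 = stimulus rated as human-generated, 0 = machine-generated
logits = (0.3 * df["generalized_trust"]
          - 0.2 * df["crt_score"]
          - 0.15 * df["deepfake_familiarity"])
df["rated_human"] = rng.binomial(1, 1 / (1 + np.exp(-logits)))

# Logistic regression of the rating on the personal variables, with country
# included as a categorical covariate.
model = smf.logit(
    "rated_human ~ generalized_trust + crt_score + deepfake_familiarity + C(country)",
    data=df,
).fit(disp=False)
print(model.summary())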

ASJC Scopus subject areas

Cite this

A Representative Study on Human Detection of Artificially Generated Media Across Countries. / Frank, Joel ; Herbert, Franziska ; Ricker, Jonas et al.
Proceedings 45th IEEE Symposium on Security and Privacy. 2024. pp. 55-73 (Proceedings - IEEE Symposium on Security and Privacy).


Frank, J, Herbert, F, Ricker, J, Schönherr, L, Eisenhofer, T, Fischer, A, Dürmuth, M & Holz, T 2024, A Representative Study on Human Detection of Artificially Generated Media Across Countries. in Proceedings 45th IEEE Symposium on Security and Privacy. Proceedings - IEEE Symposium on Security and Privacy, pp. 55-73. https://doi.org/10.48550/arXiv.2312.05976, https://doi.org/10.1109/SP54263.2024.00159
Frank, J., Herbert, F., Ricker, J., Schönherr, L., Eisenhofer, T., Fischer, A., Dürmuth, M., & Holz, T. (2024). A Representative Study on Human Detection of Artificially Generated Media Across Countries. In Proceedings 45th IEEE Symposium on Security and Privacy (pp. 55-73). (Proceedings - IEEE Symposium on Security and Privacy). https://doi.org/10.48550/arXiv.2312.05976, https://doi.org/10.1109/SP54263.2024.00159
Frank J, Herbert F, Ricker J, Schönherr L, Eisenhofer T, Fischer A et al. A Representative Study on Human Detection of Artificially Generated Media Across Countries. in Proceedings 45th IEEE Symposium on Security and Privacy. 2024. pp. 55-73. (Proceedings - IEEE Symposium on Security and Privacy). doi: 10.48550/arXiv.2312.05976, 10.1109/SP54263.2024.00159
Frank, Joel ; Herbert, Franziska ; Ricker, Jonas et al. / A Representative Study on Human Detection of Artificially Generated Media Across Countries. Proceedings 45th IEEE Symposium on Security and Privacy. 2024. pp. 55-73 (Proceedings - IEEE Symposium on Security and Privacy).
BibTeX
@inproceedings{edf9270049f04fddad5e554b359ad454,
title = "A Representative Study on Human Detection of Artificially Generated Media Across Countries",
abstract = "AI-generated media has become a threat to our digital society as we know it. Forgeries can be created automatically and on a large scale based on publicly available technologies. Recognizing this challenge, academics and practitioners have proposed a multitude of automatic detection strategies to detect such artificial media. However, in contrast to these technological advances, the human perception of generated media has not been thoroughly studied yet.In this paper, we aim to close this research gap. We conduct the first comprehensive survey on people's ability to detect generated media, spanning three countries (USA, Germany, and China), with 3,002 participants covering audio, image, and text media. Our results indicate that state-of-the-art forgeries are almost indistinguishable from {"}real{"}media, with the majority of participants simply guessing when asked to rate them as human- or machine-generated. In addition, AI-generated media is rated as more likely to be human-generated across all media types and all countries. To further understand which factors influence people's ability to detect AI-generated media, we include personal variables, chosen based on a literature review in the domains of deepfake and fake news research. In a regression analysis, we found that generalized trust, cognitive reflection, and self-reported familiarity with deepfakes significantly influence participants' decisions across all media categories.",
author = "Joel Frank and Franziska Herbert and Jonas Ricker and Lea Sch{\"o}nherr and Thorsten Eisenhofer and Asja Fischer and Markus D{\"u}rmuth and Thorsten Holz",
note = "Publisher Copyright: {\textcopyright} 2024 IEEE.",
year = "2024",
month = may,
day = "23",
doi = "10.48550/arXiv.2312.05976",
language = "English",
series = "Proceedings - IEEE Symposium on Security and Privacy",
pages = "55--73",
booktitle = "Proceedings 45th IEEE Symposium on Security and Privacy",

}

RIS

TY - GEN

T1 - A Representative Study on Human Detection of Artificially Generated Media Across Countries

AU - Frank, Joel

AU - Herbert, Franziska

AU - Ricker, Jonas

AU - Schönherr, Lea

AU - Eisenhofer, Thorsten

AU - Fischer, Asja

AU - Dürmuth, Markus

AU - Holz, Thorsten

N1 - Publisher Copyright: © 2024 IEEE.

PY - 2024/5/23

Y1 - 2024/5/23

N2 - AI-generated media has become a threat to our digital society as we know it. Forgeries can be created automatically and on a large scale based on publicly available technologies. Recognizing this challenge, academics and practitioners have proposed a multitude of automatic detection strategies to detect such artificial media. However, in contrast to these technological advances, the human perception of generated media has not been thoroughly studied yet. In this paper, we aim to close this research gap. We conduct the first comprehensive survey on people's ability to detect generated media, spanning three countries (USA, Germany, and China), with 3,002 participants covering audio, image, and text media. Our results indicate that state-of-the-art forgeries are almost indistinguishable from "real" media, with the majority of participants simply guessing when asked to rate them as human- or machine-generated. In addition, AI-generated media is rated as more likely to be human-generated across all media types and all countries. To further understand which factors influence people's ability to detect AI-generated media, we include personal variables, chosen based on a literature review in the domains of deepfake and fake news research. In a regression analysis, we found that generalized trust, cognitive reflection, and self-reported familiarity with deepfakes significantly influence participants' decisions across all media categories.

AB - AI-generated media has become a threat to our digital society as we know it. Forgeries can be created automatically and on a large scale based on publicly available technologies. Recognizing this challenge, academics and practitioners have proposed a multitude of automatic detection strategies to detect such artificial media. However, in contrast to these technological advances, the human perception of generated media has not been thoroughly studied yet. In this paper, we aim to close this research gap. We conduct the first comprehensive survey on people's ability to detect generated media, spanning three countries (USA, Germany, and China), with 3,002 participants covering audio, image, and text media. Our results indicate that state-of-the-art forgeries are almost indistinguishable from "real" media, with the majority of participants simply guessing when asked to rate them as human- or machine-generated. In addition, AI-generated media is rated as more likely to be human-generated across all media types and all countries. To further understand which factors influence people's ability to detect AI-generated media, we include personal variables, chosen based on a literature review in the domains of deepfake and fake news research. In a regression analysis, we found that generalized trust, cognitive reflection, and self-reported familiarity with deepfakes significantly influence participants' decisions across all media categories.

UR - http://www.scopus.com/inward/record.url?scp=85202204511&partnerID=8YFLogxK

U2 - 10.48550/arXiv.2312.05976

DO - 10.48550/arXiv.2312.05976

M3 - Conference contribution

T3 - Proceedings - IEEE Symposium on Security and Privacy

SP - 55

EP - 73

BT - Proceedings 45th IEEE Symposium on Security and Privacy

ER -
