Effects of Algorithmic Decision-Making and Interpretability on Human Behavior: Experiments using Crowdsourcing

Publication: Contribution to book/report/anthology/conference proceedings › Conference paper › Research › Peer-reviewed

Authors

  • Avishek Anand
  • Kilian Bizer
  • Alexander Erlei
  • Ujwal Gadiraju
  • Christian Heinze
  • Lukas Meub
  • Wolfgang Nejdl
  • Björn Steinrötter

External organizations

  • Georg-August-Universität Göttingen

Details

Original language: English
Title of host publication: HCOMP 2018 Works in Progress and Demonstration Papers
Subtitle: Proceedings of the HCOMP 2018 Works in Progress and Demonstration Papers Track of the Sixth AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2018)
Publication status: Published - 2018
Event: 2018 HCOMP Works in Progress and Demonstration Papers, HCOMP WIP and DEMO 2018 - Zurich, Switzerland
Duration: 5 July 2018 - 8 July 2018

Publication series

Name: CEUR Workshop Proceedings
Publisher: CEUR Workshop Proceedings
Volume: 2173
ISSN (Print): 1613-0073

Abstract

Today, algorithmic decision-making (ADM) is prevalent in several fields, including medicine, the criminal justice system, and financial markets. On the one hand, this is testament to the ever-improving performance and capabilities of complex machine learning models. On the other hand, the increased complexity has resulted in a lack of transparency and interpretability, which has led to critical decision-making models being deployed as functional black boxes. There is a general consensus that being able to explain the actions of such systems will help to address legal issues like transparency (ex ante) and compliance requirements (interim) as well as liability (ex post). Moreover, it may build trust, expose biases and, in turn, lead to improved models. This has most recently led to research on extracting post-hoc explanations from black-box classifiers and sequence generators in tasks like image captioning, text classification and machine translation. However, no work has yet investigated the impact of model explanations on the nature of human decision-making. We undertake a large-scale study using crowdsourcing as a means to measure how interpretability affects human decision-making, drawing on well-understood principles of behavioral economics. To our knowledge, this is the first interdisciplinary study of its kind involving interpretability in ADM models.

Cite

Effects of Algorithmic Decision-Making and Interpretability on Human Behavior: Experiments using Crowdsourcing. / Anand, Avishek; Bizer, Kilian; Erlei, Alexander et al.
HCOMP 2018 Works in Progress and Demonstration Papers: Proceedings of the HCOMP 2018 Works in Progress and Demonstration Papers Track of the Sixth AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2018). 2018. (CEUR Workshop Proceedings; Vol. 2173).


Anand, A, Bizer, K, Erlei, A, Gadiraju, U, Heinze, C, Meub, L, Nejdl, W & Steinrötter, B 2018, Effects of Algorithmic Decision-Making and Interpretability on Human Behavior: Experiments using Crowdsourcing. in HCOMP 2018 Works in Progress and Demonstration Papers: Proceedings of the HCOMP 2018 Works in Progress and Demonstration Papers Track of the Sixth AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2018). CEUR Workshop Proceedings, Vol. 2173, 2018 HCOMP Works in Progress and Demonstration Papers, HCOMP WIP and DEMO 2018, Zurich, Switzerland, 5 July 2018. <http://ceur-ws.org/Vol-2173/paper5.pdf>
Anand, A., Bizer, K., Erlei, A., Gadiraju, U., Heinze, C., Meub, L., Nejdl, W., & Steinrötter, B. (2018). Effects of Algorithmic Decision-Making and Interpretability on Human Behavior: Experiments using Crowdsourcing. In HCOMP 2018 Works in Progress and Demonstration Papers: Proceedings of the HCOMP 2018 Works in Progress and Demonstration Papers Track of the Sixth AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2018) (CEUR Workshop Proceedings; Vol. 2173). http://ceur-ws.org/Vol-2173/paper5.pdf
Anand A, Bizer K, Erlei A, Gadiraju U, Heinze C, Meub L et al. Effects of Algorithmic Decision-Making and Interpretability on Human Behavior: Experiments using Crowdsourcing. in HCOMP 2018 Works in Progress and Demonstration Papers: Proceedings of the HCOMP 2018 Works in Progress and Demonstration Papers Track of the Sixth AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2018). 2018. (CEUR Workshop Proceedings).
Anand, Avishek ; Bizer, Kilian ; Erlei, Alexander et al. / Effects of Algorithmic Decision-Making and Interpretability on Human Behavior: Experiments using Crowdsourcing. HCOMP 2018 Works in Progress and Demonstration Papers: Proceedings of the HCOMP 2018 Works in Progress and Demonstration Papers Track of the Sixth AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2018). 2018. (CEUR Workshop Proceedings).
BibTeX
@inproceedings{d2efdd5366fc49b8be8f72f2af507416,
title = "Effects of Algorithmic Decision-Making and Interpretability on Human Behavior: Experiments using Crowdsourcing",
abstract = "Today algorithmic decision-making (ADM) is prevalent in several fields including medicine, the criminal justice system, financial markets etc. On the one hand, this is testament to the ever improving performance and capabilities of complex machine learning models. On the other hand, the increased complexity has resulted in a lack of transparency and interpretability which has led to critical decision-making models being deployed as functional black boxes. There is a general consensus that being able to explain the actions of such systems will help to address legal issues like transparency (ex ante) and compliance requirements (interim) as well as liability (ex post). Moreover it may build trust, expose biases and in turn lead to improved models. This has most recently led to research on extracting post-hoc explanations from black box classifiers and sequence generators in tasks like image captioning, text classification and machine translation. However, there is no work yet that has investigated and revealed the impact of model explanations on the nature of human decision-making. We undertake a large scale study using crowd-sourcing as a means to measure how interpretability affects human-decision making using well understood principles of behavioral economics. To our knowledge this is the first of its kind of an inter-disciplinary study involving interpretability in ADM models.",
author = "Avishek Anand and Kilian Bizer and Alexander Erlei and Ujwal Gadiraju and Christian Heinze and Lukas Meub and Wolfgang Nejdl and Bj{\"o}rn Steinr{\"o}tter",
year = "2018",
language = "English",
series = "CEUR Workshop Proceedings",
publisher = "CEUR Workshop Proceedings",
booktitle = "HCOMP 2018 Works in Progress and Demonstration Papers",
note = "2018 HCOMP Works in Progress and Demonstration Papers, HCOMP WIP and DEMO 2018 ; Conference date: 05-07-2018 Through 08-07-2018",

}

RIS

TY - GEN

T1 - Effects of Algorithmic Decision-Making and Interpretability on Human Behavior: Experiments using Crowdsourcing

AU - Anand, Avishek

AU - Bizer, Kilian

AU - Erlei, Alexander

AU - Gadiraju, Ujwal

AU - Heinze, Christian

AU - Meub, Lukas

AU - Nejdl, Wolfgang

AU - Steinrötter, Björn

PY - 2018

Y1 - 2018

N2 - Today, algorithmic decision-making (ADM) is prevalent in several fields, including medicine, the criminal justice system, and financial markets. On the one hand, this is testament to the ever-improving performance and capabilities of complex machine learning models. On the other hand, the increased complexity has resulted in a lack of transparency and interpretability, which has led to critical decision-making models being deployed as functional black boxes. There is a general consensus that being able to explain the actions of such systems will help to address legal issues like transparency (ex ante) and compliance requirements (interim) as well as liability (ex post). Moreover, it may build trust, expose biases and, in turn, lead to improved models. This has most recently led to research on extracting post-hoc explanations from black-box classifiers and sequence generators in tasks like image captioning, text classification and machine translation. However, no work has yet investigated the impact of model explanations on the nature of human decision-making. We undertake a large-scale study using crowdsourcing as a means to measure how interpretability affects human decision-making, drawing on well-understood principles of behavioral economics. To our knowledge, this is the first interdisciplinary study of its kind involving interpretability in ADM models.

AB - Today, algorithmic decision-making (ADM) is prevalent in several fields, including medicine, the criminal justice system, and financial markets. On the one hand, this is testament to the ever-improving performance and capabilities of complex machine learning models. On the other hand, the increased complexity has resulted in a lack of transparency and interpretability, which has led to critical decision-making models being deployed as functional black boxes. There is a general consensus that being able to explain the actions of such systems will help to address legal issues like transparency (ex ante) and compliance requirements (interim) as well as liability (ex post). Moreover, it may build trust, expose biases and, in turn, lead to improved models. This has most recently led to research on extracting post-hoc explanations from black-box classifiers and sequence generators in tasks like image captioning, text classification and machine translation. However, no work has yet investigated the impact of model explanations on the nature of human decision-making. We undertake a large-scale study using crowdsourcing as a means to measure how interpretability affects human decision-making, drawing on well-understood principles of behavioral economics. To our knowledge, this is the first interdisciplinary study of its kind involving interpretability in ADM models.

UR - http://www.scopus.com/inward/record.url?scp=85052384790&partnerID=8YFLogxK

M3 - Conference contribution

AN - SCOPUS:85052384790

T3 - CEUR Workshop Proceedings

BT - HCOMP 2018 Works in Progress and Demonstration Papers

T2 - 2018 HCOMP Works in Progress and Demonstration Papers, HCOMP WIP and DEMO 2018

Y2 - 5 July 2018 through 8 July 2018

ER -
