Effects of Algorithmic Decision-Making and Interpretability on Human Behavior: Experiments using Crowdsourcing

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review

Authors

  • Avishek Anand
  • Kilian Bizer
  • Alexander Erlei
  • Ujwal Gadiraju
  • Christian Heinze
  • Lukas Meub
  • Wolfgang Nejdl
  • Björn Steinrötter

External Research Organisations

  • University of Göttingen

Details

Original language: English
Title of host publication: HCOMP 2018 Works in Progress and Demonstration Papers
Subtitle of host publication: Proceedings of the HCOMP 2018 Works in Progress and Demonstration Papers Track of the Sixth AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2018)
Publication status: Published - 2018
Event: 2018 HCOMP Works in Progress and Demonstration Papers, HCOMP WIP and DEMO 2018 - Zurich, Switzerland
Duration: 5 Jul 2018 – 8 Jul 2018

Publication series

Name: CEUR Workshop Proceedings
Publisher: CEUR Workshop Proceedings
Volume: 2173
ISSN (Print): 1613-0073

Abstract

Today, algorithmic decision-making (ADM) is prevalent in fields such as medicine, the criminal justice system, and financial markets. On the one hand, this is a testament to the ever-improving performance and capabilities of complex machine learning models. On the other hand, their increased complexity has resulted in a lack of transparency and interpretability, which has led to critical decision-making models being deployed as functional black boxes. There is a general consensus that being able to explain the actions of such systems will help address legal issues such as transparency (ex ante) and compliance requirements (interim), as well as liability (ex post). Moreover, it may build trust, expose biases, and in turn lead to improved models. This has recently motivated research on extracting post-hoc explanations from black-box classifiers and sequence generators in tasks such as image captioning, text classification, and machine translation. However, no work has yet investigated the impact of model explanations on the nature of human decision-making. We undertake a large-scale study using crowdsourcing to measure how interpretability affects human decision-making, drawing on well-understood principles of behavioral economics. To our knowledge, this is the first interdisciplinary study of its kind involving interpretability in ADM models.

Cite this

Anand, A., Bizer, K., Erlei, A., Gadiraju, U., Heinze, C., Meub, L., Nejdl, W., & Steinrötter, B. (2018). Effects of Algorithmic Decision-Making and Interpretability on Human Behavior: Experiments using Crowdsourcing. In HCOMP 2018 Works in Progress and Demonstration Papers: Proceedings of the HCOMP 2018 Works in Progress and Demonstration Papers Track of the Sixth AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2018) (CEUR Workshop Proceedings; Vol. 2173). http://ceur-ws.org/Vol-2173/paper5.pdf

Scopus: SCOPUS:85052384790 — http://www.scopus.com/inward/record.url?scp=85052384790&partnerID=8YFLogxK

