Details
| Original language | English |
|---|---|
| Title of host publication | HCOMP 2018 Works in Progress and Demonstration Papers |
| Subtitle of host publication | Proceedings of the HCOMP 2018 Works in Progress and Demonstration Papers Track of the Sixth AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2018) |
| Publication status | Published - 2018 |
| Event | 2018 HCOMP Works in Progress and Demonstration Papers, HCOMP WIP and DEMO 2018 - Zurich, Switzerland |
| Duration | 5 Jul 2018 → 8 Jul 2018 |
Publication series
| Name | CEUR Workshop Proceedings |
|---|---|
| Publisher | CEUR Workshop Proceedings |
| Volume | 2173 |
| ISSN (Print) | 1613-0073 |
Abstract
Today, algorithmic decision-making (ADM) is prevalent in several fields, including medicine, the criminal justice system, and financial markets. On the one hand, this is a testament to the ever-improving performance and capabilities of complex machine learning models. On the other hand, the increased complexity has resulted in a lack of transparency and interpretability, which has led to critical decision-making models being deployed as functional black boxes. There is a general consensus that being able to explain the actions of such systems will help to address legal issues like transparency (ex ante) and compliance requirements (interim), as well as liability (ex post). Moreover, it may build trust, expose biases, and in turn lead to improved models. This has most recently led to research on extracting post-hoc explanations from black-box classifiers and sequence generators in tasks like image captioning, text classification, and machine translation. However, no work has yet investigated the impact of model explanations on the nature of human decision-making. We undertake a large-scale crowdsourcing study to measure how interpretability affects human decision-making, drawing on well-understood principles of behavioral economics. To our knowledge, this is the first interdisciplinary study of its kind involving interpretability in ADM models.
Cite this
HCOMP 2018 Works in Progress and Demonstration Papers: Proceedings of the HCOMP 2018 Works in Progress and Demonstration Papers Track of the Sixth AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2018). 2018. (CEUR Workshop Proceedings; Vol. 2173).
Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review
TY - GEN
T1 - Effects of Algorithmic Decision-Making and Interpretability on Human Behavior: Experiments using Crowdsourcing
AU - Anand, Avishek
AU - Bizer, Kilian
AU - Erlei, Alexander
AU - Gadiraju, Ujwal
AU - Heinze, Christian
AU - Meub, Lukas
AU - Nejdl, Wolfgang
AU - Steinrötter, Björn
PY - 2018
Y1 - 2018
N2 - Today, algorithmic decision-making (ADM) is prevalent in several fields, including medicine, the criminal justice system, and financial markets. On the one hand, this is a testament to the ever-improving performance and capabilities of complex machine learning models. On the other hand, the increased complexity has resulted in a lack of transparency and interpretability, which has led to critical decision-making models being deployed as functional black boxes. There is a general consensus that being able to explain the actions of such systems will help to address legal issues like transparency (ex ante) and compliance requirements (interim), as well as liability (ex post). Moreover, it may build trust, expose biases, and in turn lead to improved models. This has most recently led to research on extracting post-hoc explanations from black-box classifiers and sequence generators in tasks like image captioning, text classification, and machine translation. However, no work has yet investigated the impact of model explanations on the nature of human decision-making. We undertake a large-scale crowdsourcing study to measure how interpretability affects human decision-making, drawing on well-understood principles of behavioral economics. To our knowledge, this is the first interdisciplinary study of its kind involving interpretability in ADM models.
UR - http://www.scopus.com/inward/record.url?scp=85052384790&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85052384790
T3 - CEUR Workshop Proceedings
BT - HCOMP 2018 Works in Progress and Demonstration Papers
T2 - 2018 HCOMP Works in Progress and Demonstration Papers, HCOMP WIP and DEMO 2018
Y2 - 5 July 2018 through 8 July 2018
ER -