Details
Original language | English |
---|---|
Title of host publication | HCOMP 2018 Works in Progress and Demonstration Papers |
Subtitle | Proceedings of the HCOMP 2018 Works in Progress and Demonstration Papers Track of the Sixth AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2018) |
Publication status | Published - 2018 |
Event | 2018 HCOMP Works in Progress and Demonstration Papers, HCOMP WIP and DEMO 2018 - Zurich, Switzerland Duration: 5 July 2018 → 8 July 2018
Publication series
Name | CEUR Workshop Proceedings |
---|---|
Publisher | CEUR Workshop Proceedings |
Volume | 2173 |
ISSN (Print) | 1613-0073 |
Abstract
Today, algorithmic decision-making (ADM) is prevalent in several fields, including medicine, the criminal justice system, and financial markets. On the one hand, this is a testament to the ever-improving performance and capabilities of complex machine learning models. On the other hand, the increased complexity has resulted in a lack of transparency and interpretability, which has led to critical decision-making models being deployed as functional black boxes. There is a general consensus that being able to explain the actions of such systems will help to address legal issues such as transparency (ex ante) and compliance requirements (interim) as well as liability (ex post). Moreover, it may build trust, expose biases, and in turn lead to improved models. This has most recently led to research on extracting post-hoc explanations from black-box classifiers and sequence generators in tasks like image captioning, text classification, and machine translation. However, no work yet has investigated the impact of model explanations on the nature of human decision-making. We undertake a large-scale study using crowdsourcing to measure how interpretability affects human decision-making, drawing on well-understood principles of behavioral economics. To our knowledge, this is the first interdisciplinary study of its kind involving interpretability in ADM models.
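The abstract refers to extracting post-hoc explanations from black-box classifiers. As an illustrative sketch only (the paper's own method is not reproduced here, and all names below are hypothetical), one widely used idea is to fit a weighted local linear surrogate around a single prediction, in the spirit of LIME:

```python
import math
import random

def black_box(x):
    """Stand-in for an opaque model that we can only query."""
    return x * x

def local_surrogate_slope(f, x0, n=500, radius=0.5, seed=0):
    """Fit a weighted linear surrogate to f around x0 (1-D sketch).

    Samples perturbations near x0, weights them by proximity to x0
    with a Gaussian kernel, and returns the slope of the weighted
    least-squares line: a local 'explanation' of how f responds to
    the input feature near this instance.
    """
    rng = random.Random(seed)
    xs = [x0 + rng.uniform(-radius, radius) for _ in range(n)]
    ys = [f(x) for x in xs]
    # proximity weights: nearby perturbations count more
    ws = [math.exp(-((x - x0) ** 2) / (2 * (radius / 2) ** 2)) for x in xs]
    sw = sum(ws)
    xbar = sum(w * x for w, x in zip(ws, xs)) / sw
    ybar = sum(w * y for w, y in zip(ws, ys)) / sw
    num = sum(w * (x - xbar) * (y - ybar) for w, x, y in zip(ws, xs, ys))
    den = sum(w * (x - xbar) ** 2 for w, x in zip(ws, xs))
    return num / den

# For f(x) = x**2, the local slope near x0 = 3 should be close to
# the true derivative 2 * x0 = 6.
slope = local_surrogate_slope(black_box, 3.0)
```

The surrogate is faithful only in the sampled neighborhood; the point of such explanations, and of the study described above, is precisely how this kind of simplified account changes human decisions.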
ASJC Scopus subject areas
- Computer Science (all)
- General Computer Science
Sustainable Development Goals
Cite this
- Standard
- Harvard
- Apa
- Vancouver
- BibTex
- RIS
HCOMP 2018 Works in Progress and Demonstration Papers: Proceedings of the HCOMP 2018 Works in Progress and Demonstration Papers Track of the Sixth AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2018). 2018. (CEUR Workshop Proceedings; Vol. 2173).
Publication: Chapter in book/report/conference proceeding › Conference contribution › Research › Peer-reviewed
TY - GEN
T1 - Effects of Algorithmic Decision-Making and Interpretability on Human Behavior: Experiments using Crowdsourcing
AU - Anand, Avishek
AU - Bizer, Kilian
AU - Erlei, Alexander
AU - Gadiraju, Ujwal
AU - Heinze, Christian
AU - Meub, Lukas
AU - Nejdl, Wolfgang
AU - Steinrötter, Björn
PY - 2018
Y1 - 2018
N2 - Today, algorithmic decision-making (ADM) is prevalent in several fields, including medicine, the criminal justice system, and financial markets. On the one hand, this is a testament to the ever-improving performance and capabilities of complex machine learning models. On the other hand, the increased complexity has resulted in a lack of transparency and interpretability, which has led to critical decision-making models being deployed as functional black boxes. There is a general consensus that being able to explain the actions of such systems will help to address legal issues such as transparency (ex ante) and compliance requirements (interim) as well as liability (ex post). Moreover, it may build trust, expose biases, and in turn lead to improved models. This has most recently led to research on extracting post-hoc explanations from black-box classifiers and sequence generators in tasks like image captioning, text classification, and machine translation. However, no work yet has investigated the impact of model explanations on the nature of human decision-making. We undertake a large-scale study using crowdsourcing to measure how interpretability affects human decision-making, drawing on well-understood principles of behavioral economics. To our knowledge, this is the first interdisciplinary study of its kind involving interpretability in ADM models.
AB - Today, algorithmic decision-making (ADM) is prevalent in several fields, including medicine, the criminal justice system, and financial markets. On the one hand, this is a testament to the ever-improving performance and capabilities of complex machine learning models. On the other hand, the increased complexity has resulted in a lack of transparency and interpretability, which has led to critical decision-making models being deployed as functional black boxes. There is a general consensus that being able to explain the actions of such systems will help to address legal issues such as transparency (ex ante) and compliance requirements (interim) as well as liability (ex post). Moreover, it may build trust, expose biases, and in turn lead to improved models. This has most recently led to research on extracting post-hoc explanations from black-box classifiers and sequence generators in tasks like image captioning, text classification, and machine translation. However, no work yet has investigated the impact of model explanations on the nature of human decision-making. We undertake a large-scale study using crowdsourcing to measure how interpretability affects human decision-making, drawing on well-understood principles of behavioral economics. To our knowledge, this is the first interdisciplinary study of its kind involving interpretability in ADM models.
UR - http://www.scopus.com/inward/record.url?scp=85052384790&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85052384790
T3 - CEUR Workshop Proceedings
BT - HCOMP 2018 Works in Progress and Demonstration Papers
T2 - 2018 HCOMP Works in Progress and Demonstration Papers, HCOMP WIP and DEMO 2018
Y2 - 5 July 2018 through 8 July 2018
ER -