Dissonance Between Human and Machine Understanding

Research output: Contribution to journal › Article › Research › peer review

Authors

  • Zijian Zhang
  • Jaspreet Singh
  • Ujwal Gadiraju
  • Avishek Anand


Details

Original language: English
Article number: 56
Journal: Proceedings of the ACM on Human-Computer Interaction
Volume: 3
Issue number: CSCW
Publication status: Published - 7 Nov 2019

Abstract

Complex machine learning models are deployed in several critical domains including healthcare and autonomous vehicles nowadays, albeit as functional blackboxes. Consequently, there has been a recent surge in interpreting decisions of such complex models in order to explain their actions to humans. Models which correspond to human interpretation of a task are more desirable in certain contexts and can help attribute liability, build trust, expose biases and in turn build better models. It is therefore crucial to understand how and which models conform to human understanding of tasks. In this paper we present a large-scale crowdsourcing study that reveals and quantifies the dissonance between human and machine understanding, through the lens of an image classification task. In particular, we seek to answer the following questions: Which (well performing) complex ML models are closer to humans in their use of features to make accurate predictions? How does task difficulty affect the feature selection capability of machines in comparison to humans? Are humans consistently better at selecting features that make image recognition more accurate? Our findings have important implications on human-machine collaboration, considering that a long term goal in the field of artificial intelligence is to make machines capable of learning and reasoning like humans.

Keywords

    Crowdsourcing, Dissonance, Human Intelligence, Humans, Image Understanding, Interpretability, Machine Learning Models, Machines, Neural Networks, Object Recognition


Cite this

Dissonance Between Human and Machine Understanding. / Zhang, Zijian; Singh, Jaspreet; Gadiraju, Ujwal et al.
In: Proceedings of the ACM on Human-Computer Interaction, Vol. 3, No. CSCW, 56, 07.11.2019.


Zhang, Z, Singh, J, Gadiraju, U & Anand, A 2019, 'Dissonance Between Human and Machine Understanding', Proceedings of the ACM on Human-Computer Interaction, vol. 3, no. CSCW, 56. https://doi.org/10.1145/3359158
Zhang, Z., Singh, J., Gadiraju, U., & Anand, A. (2019). Dissonance Between Human and Machine Understanding. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), Article 56. https://doi.org/10.1145/3359158
Zhang Z, Singh J, Gadiraju U, Anand A. Dissonance Between Human and Machine Understanding. Proceedings of the ACM on Human-Computer Interaction. 2019 Nov 7;3(CSCW):56. doi: 10.1145/3359158
Zhang, Zijian ; Singh, Jaspreet ; Gadiraju, Ujwal et al. / Dissonance Between Human and Machine Understanding. In: Proceedings of the ACM on Human-Computer Interaction. 2019 ; Vol. 3, No. CSCW.
@article{8b2404a1bc5c4a87bd4e7d0cf2d7eec7,
title = "Dissonance Between Human and Machine Understanding",
abstract = "Complex machine learning models are deployed in several critical domains including healthcare and autonomous vehicles nowadays, albeit as functional blackboxes. Consequently, there has been a recent surge in interpreting decisions of such complex models in order to explain their actions to humans. Models which correspond to human interpretation of a task are more desirable in certain contexts and can help attribute liability, build trust, expose biases and in turn build better models. It is therefore crucial to understand how and which models conform to human understanding of tasks. In this paper we present a large-scale crowdsourcing study that reveals and quantifies the dissonance between human and machine understanding, through the lens of an image classification task. In particular, we seek to answer the following questions: Which (well performing) complex ML models are closer to humans in their use of features to make accurate predictions? How does task difficulty affect the feature selection capability of machines in comparison to humans? Are humans consistently better at selecting features that make image recognition more accurate? Our findings have important implications on human-machine collaboration, considering that a long term goal in the field of artificial intelligence is to make machines capable of learning and reasoning like humans.",
keywords = "Crowdsourcing, Dissonance, Human Intelligence, Humans, Image Understanding, Interpretability, Machine Learning Models, Machines, Neural Networks, Object Recognition",
author = "Zijian Zhang and Jaspreet Singh and Ujwal Gadiraju and Avishek Anand",
note = "Funding information: We thank all the anonymous crowd workers who participated in our experiments. This research has been supported in part by the Amazon Research Awards, and the Erasmus+ project DISKOW (grant no. 60171990).",
year = "2019",
month = nov,
day = "7",
doi = "10.1145/3359158",
language = "English",
volume = "3",
number = "CSCW",
journal = "Proceedings of the ACM on Human-Computer Interaction",
}

TY - JOUR

T1 - Dissonance Between Human and Machine Understanding

AU - Zhang, Zijian

AU - Singh, Jaspreet

AU - Gadiraju, Ujwal

AU - Anand, Avishek

N1 - Funding information: We thank all the anonymous crowd workers who participated in our experiments. This research has been supported in part by the Amazon Research Awards, and the Erasmus+ project DISKOW (grant no. 60171990).

PY - 2019/11/7

Y1 - 2019/11/7

N2 - Complex machine learning models are deployed in several critical domains including healthcare and autonomous vehicles nowadays, albeit as functional blackboxes. Consequently, there has been a recent surge in interpreting decisions of such complex models in order to explain their actions to humans. Models which correspond to human interpretation of a task are more desirable in certain contexts and can help attribute liability, build trust, expose biases and in turn build better models. It is therefore crucial to understand how and which models conform to human understanding of tasks. In this paper we present a large-scale crowdsourcing study that reveals and quantifies the dissonance between human and machine understanding, through the lens of an image classification task. In particular, we seek to answer the following questions: Which (well performing) complex ML models are closer to humans in their use of features to make accurate predictions? How does task difficulty affect the feature selection capability of machines in comparison to humans? Are humans consistently better at selecting features that make image recognition more accurate? Our findings have important implications on human-machine collaboration, considering that a long term goal in the field of artificial intelligence is to make machines capable of learning and reasoning like humans.

AB - Complex machine learning models are deployed in several critical domains including healthcare and autonomous vehicles nowadays, albeit as functional blackboxes. Consequently, there has been a recent surge in interpreting decisions of such complex models in order to explain their actions to humans. Models which correspond to human interpretation of a task are more desirable in certain contexts and can help attribute liability, build trust, expose biases and in turn build better models. It is therefore crucial to understand how and which models conform to human understanding of tasks. In this paper we present a large-scale crowdsourcing study that reveals and quantifies the dissonance between human and machine understanding, through the lens of an image classification task. In particular, we seek to answer the following questions: Which (well performing) complex ML models are closer to humans in their use of features to make accurate predictions? How does task difficulty affect the feature selection capability of machines in comparison to humans? Are humans consistently better at selecting features that make image recognition more accurate? Our findings have important implications on human-machine collaboration, considering that a long term goal in the field of artificial intelligence is to make machines capable of learning and reasoning like humans.

KW - Crowdsourcing

KW - Dissonance

KW - Human Intelligence

KW - Humans

KW - Image Understanding

KW - Interpretability

KW - Machine Learning Models

KW - Machines

KW - Neural Networks

KW - Object Recognition

UR - http://www.scopus.com/inward/record.url?scp=85075088294&partnerID=8YFLogxK

U2 - 10.1145/3359158

DO - 10.1145/3359158

M3 - Article

AN - SCOPUS:85075088294

VL - 3

JO - Proceedings of the ACM on Human-Computer Interaction

JF - Proceedings of the ACM on Human-Computer Interaction

IS - CSCW

M1 - 56

ER -