Explainable deep convolutional learning for intuitive model development by non-machine learning domain experts

Research output: Contribution to journal › Article › Research › peer review

Authors

  • Sundaravelpandian Singaravel
  • Johan Suykens
  • Hans Janssen
  • Philipp Florian Geyer

External Research Organisations

  • KU Leuven

Details

Original language: English
Article number: e23
Number of pages: 27
Journal: Design Science
Volume: 6
Issue number: 23
Early online date: 5 Oct 2020
Publication status: Published - 2020
Externally published: Yes

Abstract

During the design stage, quick and accurate predictions are required for effective design decisions. Model developers prefer simple, interpretable models for their high computation speed. Given that deep learning (DL) offers both high computational speed and accuracy, it would be beneficial if these models were also explainable. Furthermore, current DL development tools simplify the model development process. This article proposes a method for making the learning of a DL model explainable, enabling non-machine learning (ML) experts to reason about model generalization and reusability. The proposed method uses dimensionality reduction (t-Distributed Stochastic Neighbour Embedding) and mutual information (MI). Results indicate that the convolutional layers capture design-related interpretations, while the fully connected layer captures performance-related interpretations. Furthermore, the global geometric structure of the representations learned by models that generalize well and models that generalize poorly is similar; the key difference indicating poor generalization is the smoothness of the low-dimensional embedding. MI makes it possible to quantify the reasons for good and poor generalization. Such interpretation gives a non-ML expert additional information about model behaviour.
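
The approach summarized in the abstract (projecting hidden-layer activations with t-SNE and scoring them with mutual information) can be illustrated with a short script. The sketch below is an assumption-laden illustration, not the authors' implementation: it presumes a trained Keras CNN surrogate `model`, design inputs `X`, and a performance target `y`, and uses scikit-learn's t-SNE and MI estimator as stand-ins for whatever the article actually used.

    # Illustrative sketch only; Keras surrogate and scikit-learn estimators are assumptions.
    import numpy as np
    from sklearn.manifold import TSNE
    from sklearn.feature_selection import mutual_info_regression
    from tensorflow import keras

    def layer_activations(model, X):
        """Return the activations of every layer of `model` for the inputs `X`."""
        extractor = keras.Model(inputs=model.inputs,
                                outputs=[layer.output for layer in model.layers])
        return extractor.predict(X)

    def inspect_layers(model, X, y, perplexity=30.0):
        """Per-layer 2-D t-SNE embedding plus a mutual-information score against `y`."""
        results = {}
        for layer, act in zip(model.layers, layer_activations(model, X)):
            flat = np.asarray(act).reshape(len(X), -1)   # flatten conv feature maps
            emb = TSNE(n_components=2, perplexity=perplexity).fit_transform(flat)
            mi = mutual_info_regression(emb, y).sum()    # MI between embedding and target
            results[layer.name] = {"embedding": emb, "mutual_information": mi}
        return results

A non-ML expert would then inspect the per-layer embeddings (and their smoothness) to judge what the network has captured, and compare the MI scores of a well-generalizing and a poorly generalizing model.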

Keywords

    design space representation, model exploration, reasoning


Cite this

Explainable deep convolutional learning for intuitive model development by non-machine learning domain experts. / Singaravel, Sundaravelpandian; Suykens, Johan; Janssen, Hans et al.
In: Design Science, Vol. 6, No. 23, e23, 2020.


Singaravel, S, Suykens, J, Janssen, H & Geyer, PF 2020, 'Explainable deep convolutional learning for intuitive model development by non-machine learning domain experts', Design Science, vol. 6, no. 23, e23. https://doi.org/10.1017/dsj.2020.22
Singaravel, S., Suykens, J., Janssen, H., & Geyer, P. F. (2020). Explainable deep convolutional learning for intuitive model development by non-machine learning domain experts. Design Science, 6(23), Article e23. https://doi.org/10.1017/dsj.2020.22
Singaravel S, Suykens J, Janssen H, Geyer PF. Explainable deep convolutional learning for intuitive model development by non-machine learning domain experts. Design Science. 2020;6(23):e23. Epub 2020 Oct 5. doi: 10.1017/dsj.2020.22
Singaravel, Sundaravelpandian ; Suykens, Johan ; Janssen, Hans et al. / Explainable deep convolutional learning for intuitive model development by non-machine learning domain experts. In: Design Science. 2020 ; Vol. 6, No. 23.
@article{c1e857ef384d42e2bf05f17703d90196,
title = "Explainable deep convolutional learning for intuitive model development by non-machine learning domain experts",
abstract = "During the design stage, quick and accurate predictions are required for effective design decisions. Model developers prefer simple interpretable models for high computation speed. Given that deep learning (DL) has high computational speed and accuracy, it will be beneficial if these models are explainable. Furthermore, current DL development tools simplify the model development process. The article proposes a method to make the learning of the DL model explainable to enable non–machine learning (ML) experts to infer on model generalization and reusability. The proposed method utilizes dimensionality reduction (t-Distribution Stochastic Neighbour Embedding) and mutual information (MI). Results indicate that the convolutional layers capture design-related interpretations, and the fully connected layer captures performance-related interpretations. Furthermore, the global geometric structure within a model that generalized well and poorly is similar. The key difference indicating poor generalization is smoothness in the low-dimensional embedding. MI enables quantifying the reason for good and poor generalization. Such interpretation adds more information on model behaviour to a non-ML expert.",
keywords = "design space representation, model exploration, reasoning",
author = "Sundaravelpandian Singaravel and Johan Suykens and Hans Janssen and Geyer, {Philipp Florian}",
note = "Publisher Copyright: {\textcopyright} The Author(s), 2020. Published by Cambridge University Press.",
year = "2020",
doi = "10.1017/dsj.2020.22",
language = "English",
volume = "6",
number = "23",

}


TY - JOUR

T1 - Explainable deep convolutional learning for intuitive model development by non-machine learning domain experts

AU - Singaravel, Sundaravelpandian

AU - Suykens, Johan

AU - Janssen, Hans

AU - Geyer, Philipp Florian

N1 - Publisher Copyright: © The Author(s), 2020. Published by Cambridge University Press.

PY - 2020

Y1 - 2020

N2 - During the design stage, quick and accurate predictions are required for effective design decisions. Model developers prefer simple, interpretable models for their high computation speed. Given that deep learning (DL) offers both high computational speed and accuracy, it would be beneficial if these models were also explainable. Furthermore, current DL development tools simplify the model development process. This article proposes a method for making the learning of a DL model explainable, enabling non-machine learning (ML) experts to reason about model generalization and reusability. The proposed method uses dimensionality reduction (t-Distributed Stochastic Neighbour Embedding) and mutual information (MI). Results indicate that the convolutional layers capture design-related interpretations, while the fully connected layer captures performance-related interpretations. Furthermore, the global geometric structure of the representations learned by models that generalize well and models that generalize poorly is similar; the key difference indicating poor generalization is the smoothness of the low-dimensional embedding. MI makes it possible to quantify the reasons for good and poor generalization. Such interpretation gives a non-ML expert additional information about model behaviour.

AB - During the design stage, quick and accurate predictions are required for effective design decisions. Model developers prefer simple, interpretable models for their high computation speed. Given that deep learning (DL) offers both high computational speed and accuracy, it would be beneficial if these models were also explainable. Furthermore, current DL development tools simplify the model development process. This article proposes a method for making the learning of a DL model explainable, enabling non-machine learning (ML) experts to reason about model generalization and reusability. The proposed method uses dimensionality reduction (t-Distributed Stochastic Neighbour Embedding) and mutual information (MI). Results indicate that the convolutional layers capture design-related interpretations, while the fully connected layer captures performance-related interpretations. Furthermore, the global geometric structure of the representations learned by models that generalize well and models that generalize poorly is similar; the key difference indicating poor generalization is the smoothness of the low-dimensional embedding. MI makes it possible to quantify the reasons for good and poor generalization. Such interpretation gives a non-ML expert additional information about model behaviour.

KW - design space representation

KW - model exploration

KW - reasoning

UR - http://www.scopus.com/inward/record.url?scp=85093523412&partnerID=8YFLogxK

U2 - 10.1017/dsj.2020.22

DO - 10.1017/dsj.2020.22

M3 - Article

AN - SCOPUS:85093523412

VL - 6

JO - Design Science

JF - Design Science

IS - 23

M1 - e23

ER -