Details
Original language | English
---|---
Article number | e23
Number of pages | 27
Journal | Design Science
Volume | 6
Issue number | 23
Early online date | 5 Oct 2020
Publication status | Published - 2020
Externally published | Yes
Abstract
During the design stage, quick and accurate predictions are required for effective design decisions, so model developers prefer simple, interpretable models for their high computation speed. Given that deep learning (DL) offers both high computational speed and accuracy, it would be beneficial if these models were also explainable; moreover, current DL development tools simplify the model development process. This article proposes a method that makes the learning of a DL model explainable, enabling non-machine-learning (ML) experts to draw inferences about model generalization and reusability. The proposed method uses dimensionality reduction (t-Distributed Stochastic Neighbour Embedding, t-SNE) and mutual information (MI). Results indicate that the convolutional layers capture design-related interpretations, while the fully connected layer captures performance-related interpretations. Furthermore, models that generalize well and models that generalize poorly share a similar global geometric structure in their learned representations; the key difference indicating poor generalization is the smoothness of the low-dimensional embedding. MI makes it possible to quantify the reasons for good and poor generalization. Such interpretations give a non-ML expert additional information on model behaviour.
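The pipeline the abstract describes (embed a layer's activations with t-SNE, then score the embedding against design and performance variables with MI) can be illustrated in a few lines. The sketch below is a minimal, hypothetical example using scikit-learn on synthetic data; the array names (`activations`, `design_param`, `performance`) and all numbers are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)

# Stand-ins for real data: activations of one network layer for N designs,
# one design variable and one simulated performance output.
N = 500
activations = rng.normal(size=(N, 64))   # e.g. flattened conv-layer output
design_param = rng.uniform(size=N)       # e.g. a window-to-wall ratio
performance = rng.uniform(size=N)        # e.g. heating energy demand

# 2-D t-SNE embedding of the layer's learned representation.
embedding = TSNE(n_components=2, perplexity=30.0,
                 random_state=0).fit_transform(activations)

# MI between each embedding axis and the two variables. Under the abstract's
# reading, high MI with the design variable would mark a "design-related"
# layer, and high MI with performance a "performance-related" layer.
mi_design = mutual_info_regression(embedding, design_param, random_state=0)
mi_perf = mutual_info_regression(embedding, performance, random_state=0)
print("MI with design parameter:", mi_design)
print("MI with performance:", mi_perf)
```

Run per layer, such MI scores give a non-ML expert a quantitative handle on what each layer has learned, in line with the convolutional-versus-fully-connected finding reported above.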
Keywords
- design space representation
- model exploration
- reasoning
ASJC Scopus subject areas
- Mathematics (all)
- Modelling and Simulation
- Arts and Humanities (all)
- Visual Arts and Performing Arts
- Engineering (all)
Cite this
Singaravel, S., Suykens, J., Janssen, H., & Geyer, P. F. (2020). Explainable deep convolutional learning for intuitive model development by non-machine learning domain experts. In: Design Science, Vol. 6, No. 23, e23. https://doi.org/10.1017/dsj.2020.22
Research output: Contribution to journal › Article › Research › peer review
TY - JOUR
T1 - Explainable deep convolutional learning for intuitive model development by non-machine learning domain experts
AU - Singaravel, Sundaravelpandian
AU - Suykens, Johan
AU - Janssen, Hans
AU - Geyer, Philipp Florian
N1 - Publisher Copyright: © The Author(s), 2020. Published by Cambridge University Press.
PY - 2020
Y1 - 2020
AB - During the design stage, quick and accurate predictions are required for effective design decisions, so model developers prefer simple, interpretable models for their high computation speed. Given that deep learning (DL) offers both high computational speed and accuracy, it would be beneficial if these models were also explainable; moreover, current DL development tools simplify the model development process. This article proposes a method that makes the learning of a DL model explainable, enabling non-machine-learning (ML) experts to draw inferences about model generalization and reusability. The proposed method uses dimensionality reduction (t-Distributed Stochastic Neighbour Embedding, t-SNE) and mutual information (MI). Results indicate that the convolutional layers capture design-related interpretations, while the fully connected layer captures performance-related interpretations. Furthermore, models that generalize well and models that generalize poorly share a similar global geometric structure in their learned representations; the key difference indicating poor generalization is the smoothness of the low-dimensional embedding. MI makes it possible to quantify the reasons for good and poor generalization. Such interpretations give a non-ML expert additional information on model behaviour.
KW - design space representation
KW - model exploration
KW - reasoning
UR - http://www.scopus.com/inward/record.url?scp=85093523412&partnerID=8YFLogxK
U2 - 10.1017/dsj.2020.22
DO - 10.1017/dsj.2020.22
M3 - Article
AN - SCOPUS:85093523412
VL - 6
JO - Design Science
JF - Design Science
IS - 23
M1 - e23
ER -