Details
Original language | English |
---|---|
Pages (from-to) | 2139-2158 |
Number of pages | 20 |
Journal | Electronic Markets |
Volume | 32 |
Issue number | 4 |
Early online date | 23 Nov 2022 |
Publication status | Published - Dec 2022 |
Abstract
The black-box nature of Artificial Intelligence (AI) models and their associated explainability limitations create a major adoption barrier. Explainable Artificial Intelligence (XAI) aims to make AI models more transparent to address this challenge. Researchers and practitioners apply XAI services to explore relationships in data, improve AI methods, justify AI decisions, and control AI technologies, with the goals of improving knowledge about AI and addressing user needs. The market volume of XAI services has grown significantly. As a result, trustworthiness, reliability, transferability, fairness, and accessibility are required capabilities of XAI for a range of relevant stakeholders, including managers, regulators, users of XAI models, developers, and consumers. We contribute to theory and practice by deducing XAI archetypes and developing a user-centric decision support framework to identify the XAI services most suitable for the requirements of relevant stakeholders. Our decision tree is founded on a literature-based morphological box and a classification of real-world XAI services. Finally, we discuss archetypical business models of XAI services and exemplary use cases.
Keywords
- Archetypes, Artificial intelligence, Business models, Decision tree, Explainability, Morphological analysis
ASJC Scopus subject areas
- Business, Management and Accounting (all)
- Business and International Management
- Economics, Econometrics and Finance (all)
- Economics and Econometrics
- Computer Science (all)
- Computer Science Applications
- Marketing
- Management of Technology and Innovation
Cite this
In: Electronic Markets, Vol. 32, No. 4, 12.2022, p. 2139-2158.
Research output: Contribution to journal › Article › Research › peer review
TY - JOUR
T1 - Decision support for efficient XAI services
T2 - A morphological analysis, business model archetypes, and a decision tree
AU - Gerlach, Jana
AU - Hoppe, Paul
AU - Jagels, Sarah
AU - Licker, Luisa
AU - Breitner, Michael H.
N1 - Funding Information: Open Access funding enabled and organized by Projekt DEAL. The research project 'SiNED – Systemdienstleistungen für sichere Stromnetze in Zeiten fortschreitender Energiewende und digitaler Transformation' acknowledges the support of the Lower Saxony Ministry of Science and Culture through the 'Niedersächsisches Vorab' grant programme (grant ZN3563) and of the Energy Research Centre of Lower Saxony.
PY - 2022/12
Y1 - 2022/12
N2 - The black-box nature of Artificial Intelligence (AI) models and their associated explainability limitations create a major adoption barrier. Explainable Artificial Intelligence (XAI) aims to make AI models more transparent to address this challenge. Researchers and practitioners apply XAI services to explore relationships in data, improve AI methods, justify AI decisions, and control AI technologies, with the goals of improving knowledge about AI and addressing user needs. The market volume of XAI services has grown significantly. As a result, trustworthiness, reliability, transferability, fairness, and accessibility are required capabilities of XAI for a range of relevant stakeholders, including managers, regulators, users of XAI models, developers, and consumers. We contribute to theory and practice by deducing XAI archetypes and developing a user-centric decision support framework to identify the XAI services most suitable for the requirements of relevant stakeholders. Our decision tree is founded on a literature-based morphological box and a classification of real-world XAI services. Finally, we discuss archetypical business models of XAI services and exemplary use cases.
AB - The black-box nature of Artificial Intelligence (AI) models and their associated explainability limitations create a major adoption barrier. Explainable Artificial Intelligence (XAI) aims to make AI models more transparent to address this challenge. Researchers and practitioners apply XAI services to explore relationships in data, improve AI methods, justify AI decisions, and control AI technologies, with the goals of improving knowledge about AI and addressing user needs. The market volume of XAI services has grown significantly. As a result, trustworthiness, reliability, transferability, fairness, and accessibility are required capabilities of XAI for a range of relevant stakeholders, including managers, regulators, users of XAI models, developers, and consumers. We contribute to theory and practice by deducing XAI archetypes and developing a user-centric decision support framework to identify the XAI services most suitable for the requirements of relevant stakeholders. Our decision tree is founded on a literature-based morphological box and a classification of real-world XAI services. Finally, we discuss archetypical business models of XAI services and exemplary use cases.
KW - Archetypes
KW - Artificial intelligence
KW - Business models
KW - Decision tree
KW - Explainability
KW - Morphological analysis
UR - http://www.scopus.com/inward/record.url?scp=85142454660&partnerID=8YFLogxK
U2 - 10.1007/s12525-022-00603-6
DO - 10.1007/s12525-022-00603-6
M3 - Article
AN - SCOPUS:85142454660
VL - 32
SP - 2139
EP - 2158
JO - Electronic Markets
JF - Electronic Markets
SN - 1019-6781
IS - 4
ER -