Bias in data-driven artificial intelligence systems: An introductory survey

Research output: Contribution to journal › Review article › Research › peer review

Authors

  • Eirini Ntoutsi
  • Pavlos Fafalios
  • Ujwal Gadiraju
  • Vasileios Iosifidis
  • Wolfgang Nejdl
  • Maria Esther Vidal
  • Salvatore Ruggieri
  • Franco Turini
  • Symeon Papadopoulos
  • Emmanouil Krasanakis
  • Ioannis Kompatsiaris
  • Katharina Kinder-Kurlanda
  • Claudia Wagner
  • Fariba Karimi
  • Miriam Fernandez
  • Harith Alani
  • Bettina Berendt
  • Tina Kruegel
  • Christian Heinze
  • Klaus Broelemann
  • Gjergji Kasneci
  • Thanassis Tiropanis
  • Steffen Staab

External Research Organisations

  • University of Pisa
  • Center For Research And Technology - Hellas
  • GESIS - Leibniz Institute for the Social Sciences
  • Open University
  • Technische Universität Berlin
  • KU Leuven
  • SCHUFA Holding AG
  • University of Southampton
  • University of Stuttgart
  • Foundation for Research & Technology - Hellas (FORTH)
  • German National Library of Science and Technology (TIB)

Details

Original language: English
Article number: e1356
Journal: Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery
Volume: 10
Issue number: 3
Early online date: 3 Feb 2020
Publication status: Published - 16 Apr 2020

Abstract

Artificial Intelligence (AI)-based systems are widely employed nowadays to make decisions that have a far-reaching impact on individuals and society. Their decisions might affect everyone, everywhere, and anytime, entailing concerns about potential human rights issues. Therefore, it is necessary to move beyond traditional AI algorithms optimized for predictive performance and embed ethical and legal principles in their design, training, and deployment to ensure social good while still benefiting from the huge potential of AI technology. The goal of this survey is to provide a broad multidisciplinary overview of the area of bias in AI systems, focusing on technical challenges and solutions, and to suggest new research directions towards approaches well-grounded in a legal frame. In this survey, we focus on data-driven AI, as a large part of AI is powered nowadays by (big) data and powerful machine learning algorithms. Unless otherwise specified, we use the general term bias to describe problems related to the gathering or processing of data that might result in prejudiced decisions on the basis of demographic features such as race, sex, and so forth. This article is categorized under: Commercial, Legal, and Ethical Issues > Fairness in Data Mining; Commercial, Legal, and Ethical Issues > Ethical Considerations; Commercial, Legal, and Ethical Issues > Legal Issues.
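The survey itself contains no code; as a minimal, hypothetical Python sketch of the kind of group-fairness measurement covered by the "fairness-aware machine learning" keyword, the snippet below computes the statistical parity difference, i.e., the gap in favourable-outcome rates between two demographic groups. The function name and the toy data are illustrative assumptions, not taken from the article.

    import numpy as np

    def statistical_parity_difference(y_pred, group):
        # Gap in favourable-outcome (positive-prediction) rates between group 1 and group 0.
        # A value of 0 indicates parity; positive values mean group 1 is favoured.
        y_pred, group = np.asarray(y_pred), np.asarray(group)
        rate_g1 = y_pred[group == 1].mean()
        rate_g0 = y_pred[group == 0].mean()
        return rate_g1 - rate_g0

    # Toy, hypothetical data: binary predictions for eight individuals, four per group.
    y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
    group  = [1, 1, 1, 1, 0, 0, 0, 0]
    print(statistical_parity_difference(y_pred, group))  # 0.75 - 0.25 = 0.5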

Keywords

    fairness, fairness-aware AI, fairness-aware machine learning, interpretability, responsible AI

Cite this

Bias in data-driven artificial intelligence systems: An introductory survey. / Ntoutsi, Eirini; Fafalios, Pavlos; Gadiraju, Ujwal et al.
In: Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, Vol. 10, No. 3, e1356, 16.04.2020.

Ntoutsi, E, Fafalios, P, Gadiraju, U, Iosifidis, V, Nejdl, W, Vidal, ME, Ruggieri, S, Turini, F, Papadopoulos, S, Krasanakis, E, Kompatsiaris, I, Kinder-Kurlanda, K, Wagner, C, Karimi, F, Fernandez, M, Alani, H, Berendt, B, Kruegel, T, Heinze, C, Broelemann, K, Kasneci, G, Tiropanis, T & Staab, S 2020, 'Bias in data-driven artificial intelligence systems: An introductory survey', Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, vol. 10, no. 3, e1356. https://doi.org/10.1002/widm.1356
Ntoutsi, E., Fafalios, P., Gadiraju, U., Iosifidis, V., Nejdl, W., Vidal, M. E., Ruggieri, S., Turini, F., Papadopoulos, S., Krasanakis, E., Kompatsiaris, I., Kinder-Kurlanda, K., Wagner, C., Karimi, F., Fernandez, M., Alani, H., Berendt, B., Kruegel, T., Heinze, C., ... Staab, S. (2020). Bias in data-driven artificial intelligence systems: An introductory survey. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 10(3), Article e1356. https://doi.org/10.1002/widm.1356
Ntoutsi E, Fafalios P, Gadiraju U, Iosifidis V, Nejdl W, Vidal ME et al. Bias in data-driven artificial intelligence systems: An introductory survey. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery. 2020 Apr 16;10(3):e1356. Epub 2020 Feb 3. doi: 10.1002/widm.1356
Ntoutsi, Eirini; Fafalios, Pavlos; Gadiraju, Ujwal et al. / Bias in data-driven artificial intelligence systems: An introductory survey. In: Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery. 2020; Vol. 10, No. 3.
BibTeX
@article{550d3e3edbbc46a5a850ee3c34708270,
title = "Bias in data-driven artificial intelligence systems: An introductory survey",
abstract = "Artificial Intelligence (AI)-based systems are widely employed nowadays to make decisions that have far-reaching impact on individuals and society. Their decisions might affect everyone, everywhere, and anytime, entailing concerns about potential human rights issues. Therefore, it is necessary to move beyond traditional AI algorithms optimized for predictive performance and embed ethical and legal principles in their design, training, and deployment to ensure social good while still benefiting from the huge potential of the AI technology. The goal of this survey is to provide a broad multidisciplinary overview of the area of bias in AI systems, focusing on technical challenges and solutions as well as to suggest new research directions towards approaches well-grounded in a legal frame. In this survey, we focus on data-driven AI, as a large part of AI is powered nowadays by (big) data and powerful machine learning algorithms. If otherwise not specified, we use the general term bias to describe problems related to the gathering or processing of data that might result in prejudiced decisions on the bases of demographic features such as race, sex, and so forth. This article is categorized under: Commercial, Legal, and Ethical Issues > Fairness in Data Mining Commercial, Legal, and Ethical Issues > Ethical Considerations Commercial, Legal, and Ethical Issues > Legal Issues.",
keywords = "fairness, fairness-aware AI, fairness-aware machine learning, interpretability, responsible AI",
author = "Eirini Ntoutsi and Pavlos Fafalios and Ujwal Gadiraju and Vasileios Iosifidis and Wolfgang Nejdl and Vidal, {Maria Esther} and Salvatore Ruggieri and Franco Turini and Symeon Papadopoulos and Emmanouil Krasanakis and Ioannis Kompatsiaris and Katharina Kinder-Kurlanda and Claudia Wagner and Fariba Karimi and Miriam Fernandez and Harith Alani and Bettina Berendt and Tina Kruegel and Christian Heinze and Klaus Broelemann and Gjergji Kasneci and Thanassis Tiropanis and Steffen Staab",
note = "Funding Information: This work is supported by the project “NoBias ‐ Artificial Intelligence without Bias,” which has received funding from the European Union's Horizon 2020 research and innovation programme, under the Marie Sk{\l}odowska‐Curie (Innovative Training Network) grant agreement no. 860630.",
year = "2020",
month = apr,
day = "16",
doi = "10.1002/widm.1356",
language = "English",
volume = "10",
number = "3",

}

RIS

TY - JOUR

T1 - Bias in data-driven artificial intelligence systems

T2 - An introductory survey

AU - Ntoutsi, Eirini

AU - Fafalios, Pavlos

AU - Gadiraju, Ujwal

AU - Iosifidis, Vasileios

AU - Nejdl, Wolfgang

AU - Vidal, Maria Esther

AU - Ruggieri, Salvatore

AU - Turini, Franco

AU - Papadopoulos, Symeon

AU - Krasanakis, Emmanouil

AU - Kompatsiaris, Ioannis

AU - Kinder-Kurlanda, Katharina

AU - Wagner, Claudia

AU - Karimi, Fariba

AU - Fernandez, Miriam

AU - Alani, Harith

AU - Berendt, Bettina

AU - Kruegel, Tina

AU - Heinze, Christian

AU - Broelemann, Klaus

AU - Kasneci, Gjergji

AU - Tiropanis, Thanassis

AU - Staab, Steffen

N1 - Funding Information: This work is supported by the project “NoBias ‐ Artificial Intelligence without Bias,” which has received funding from the European Union's Horizon 2020 research and innovation programme, under the Marie Skłodowska‐Curie (Innovative Training Network) grant agreement no. 860630.

PY - 2020/4/16

Y1 - 2020/4/16

N2 - Artificial Intelligence (AI)-based systems are widely employed nowadays to make decisions that have far-reaching impact on individuals and society. Their decisions might affect everyone, everywhere, and anytime, entailing concerns about potential human rights issues. Therefore, it is necessary to move beyond traditional AI algorithms optimized for predictive performance and embed ethical and legal principles in their design, training, and deployment to ensure social good while still benefiting from the huge potential of the AI technology. The goal of this survey is to provide a broad multidisciplinary overview of the area of bias in AI systems, focusing on technical challenges and solutions as well as to suggest new research directions towards approaches well-grounded in a legal frame. In this survey, we focus on data-driven AI, as a large part of AI is powered nowadays by (big) data and powerful machine learning algorithms. If otherwise not specified, we use the general term bias to describe problems related to the gathering or processing of data that might result in prejudiced decisions on the bases of demographic features such as race, sex, and so forth. This article is categorized under: Commercial, Legal, and Ethical Issues > Fairness in Data Mining Commercial, Legal, and Ethical Issues > Ethical Considerations Commercial, Legal, and Ethical Issues > Legal Issues.

AB - Artificial Intelligence (AI)-based systems are widely employed nowadays to make decisions that have far-reaching impact on individuals and society. Their decisions might affect everyone, everywhere, and anytime, entailing concerns about potential human rights issues. Therefore, it is necessary to move beyond traditional AI algorithms optimized for predictive performance and embed ethical and legal principles in their design, training, and deployment to ensure social good while still benefiting from the huge potential of the AI technology. The goal of this survey is to provide a broad multidisciplinary overview of the area of bias in AI systems, focusing on technical challenges and solutions as well as to suggest new research directions towards approaches well-grounded in a legal frame. In this survey, we focus on data-driven AI, as a large part of AI is powered nowadays by (big) data and powerful machine learning algorithms. If otherwise not specified, we use the general term bias to describe problems related to the gathering or processing of data that might result in prejudiced decisions on the bases of demographic features such as race, sex, and so forth. This article is categorized under: Commercial, Legal, and Ethical Issues > Fairness in Data Mining Commercial, Legal, and Ethical Issues > Ethical Considerations Commercial, Legal, and Ethical Issues > Legal Issues.

KW - fairness

KW - fairness-aware AI

KW - fairness-aware machine learning

KW - interpretability

KW - responsible AI

UR - http://www.scopus.com/inward/record.url?scp=85078894838&partnerID=8YFLogxK

U2 - 10.1002/widm.1356

DO - 10.1002/widm.1356

M3 - Review article

AN - SCOPUS:85078894838

VL - 10

JO - Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery

JF - Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery

SN - 1942-4787

IS - 3

M1 - e1356

ER -
