A Comprehensive Study of Modern Architectures and Regularization Approaches on CheXpert5000

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review

Authors

  • Sontje Ihler
  • Felix Kuhnke
  • Svenja Spindeldreier

Details

Original language: English
Title of host publication: Medical Image Computing and Computer Assisted Intervention – MICCAI 2022 - 25th International Conference, Proceedings
Editors: Linwei Wang, Qi Dou, P. Thomas Fletcher, Stefanie Speidel, Shuo Li
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 654-663
Number of pages: 10
ISBN (electronic): 9783031164316
ISBN (print): 9783031164309
Publication status: Published - 15 Sept 2022
Event: 25th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2022 - Singapore, Singapore
Duration: 18 Sept 2022 – 22 Sept 2022

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 13431 LNCS
ISSN (print): 0302-9743
ISSN (electronic): 1611-3349

Abstract

Computer-aided diagnosis (CAD) has gained increased attention in the general research community over the last years as an example of a typical limited-data application, with experiments on datasets of 100k–200k labeled samples. Although these datasets are still small compared to natural image datasets like ImageNet1k, ImageNet21k and JFT, they are large for annotated medical datasets, where 1k–10k labeled samples are much more common. There is no baseline indicating which methods to build on in the low-data regime. In this work we bridge this gap by providing an extensive study of medical image classification with limited annotations (5k). We present a study of modern architectures applied to a fixed low-data regime of 5000 images on the CheXpert dataset. We find that models pretrained on ImageNet21k achieve a higher AUC and that larger models require fewer training steps. All models are quite well calibrated even though we only fine-tuned on 5000 training samples. All ‘modern’ architectures have a higher AUC than ResNet50. Regularizing Big Transfer Models with MixUp or Mean Teacher improves calibration; MixUp also improves accuracy. Vision Transformers achieve results comparable or on par to Big Transfer Models.
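The MixUp regularization mentioned in the abstract blends pairs of training samples and their labels with a Beta-distributed weight. A minimal sketch of the general technique follows; the function name, the `alpha` default, and the use of NumPy are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.3, rng=None):
    """Return a convex combination of two samples and their (one-hot) labels.

    lam is drawn from Beta(alpha, alpha); small alpha concentrates lam
    near 0 or 1, so mixed samples stay close to one of the originals.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    x_mixed = lam * x1 + (1 - lam) * x2
    y_mixed = lam * y1 + (1 - lam) * y2
    return x_mixed, y_mixed
```

In practice the same interpolation is applied batch-wise during fine-tuning, pairing each image with a randomly shuffled partner from the same batch.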

Keywords

    Limited data, Medical image classification, Transfer learning

Cite this

A Comprehensive Study of Modern Architectures and Regularization Approaches on CheXpert5000. / Ihler, Sontje; Kuhnke, Felix; Spindeldreier, Svenja.
Medical Image Computing and Computer Assisted Intervention – MICCAI 2022 - 25th International Conference, Proceedings. ed. / Linwei Wang; Qi Dou; P. Thomas Fletcher; Stefanie Speidel; Shuo Li. Springer Science and Business Media Deutschland GmbH, 2022. p. 654-663 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 13431 LNCS).

Ihler, S, Kuhnke, F & Spindeldreier, S 2022, A Comprehensive Study of Modern Architectures and Regularization Approaches on CheXpert5000. in L Wang, Q Dou, PT Fletcher, S Speidel & S Li (eds), Medical Image Computing and Computer Assisted Intervention – MICCAI 2022 - 25th International Conference, Proceedings. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 13431 LNCS, Springer Science and Business Media Deutschland GmbH, pp. 654-663, 25th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2022, Singapore, Singapore, 18 Sept 2022. https://doi.org/10.1007/978-3-031-16431-6_62, https://doi.org/10.48550/arXiv.2302.06684
Ihler, S., Kuhnke, F., & Spindeldreier, S. (2022). A Comprehensive Study of Modern Architectures and Regularization Approaches on CheXpert5000. In L. Wang, Q. Dou, P. T. Fletcher, S. Speidel, & S. Li (Eds.), Medical Image Computing and Computer Assisted Intervention – MICCAI 2022 - 25th International Conference, Proceedings (pp. 654-663). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 13431 LNCS). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-16431-6_62, https://doi.org/10.48550/arXiv.2302.06684
Ihler S, Kuhnke F, Spindeldreier S. A Comprehensive Study of Modern Architectures and Regularization Approaches on CheXpert5000. In Wang L, Dou Q, Fletcher PT, Speidel S, Li S, editors, Medical Image Computing and Computer Assisted Intervention – MICCAI 2022 - 25th International Conference, Proceedings. Springer Science and Business Media Deutschland GmbH. 2022. p. 654-663. (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)). doi: 10.1007/978-3-031-16431-6_62, 10.48550/arXiv.2302.06684
Ihler, Sontje ; Kuhnke, Felix ; Spindeldreier, Svenja. / A Comprehensive Study of Modern Architectures and Regularization Approaches on CheXpert5000. Medical Image Computing and Computer Assisted Intervention – MICCAI 2022 - 25th International Conference, Proceedings. editor / Linwei Wang ; Qi Dou ; P. Thomas Fletcher ; Stefanie Speidel ; Shuo Li. Springer Science and Business Media Deutschland GmbH, 2022. pp. 654-663 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)).
BibTeX
@inproceedings{b869318f961a4c95858d72b3a4ab5415,
title = "A Comprehensive Study of Modern Architectures and Regularization Approaches on CheXpert5000",
abstract = "Computer aided diagnosis (CAD) has gained an increased amount of attention in the general research community over the last years as an example of a typical limited data application - with experiments on labeled 100k–200k datasets. Although these datasets are still small compared to natural image datasets like ImageNet1k, ImageNet21k and JFT, they are large for annotated medical datasets, where 1k–10k labeled samples are much more common. There is no baseline on which methods to build on in the low data regime. In this work we bridge this gap by providing an extensive study on medical image classification with limited annotations (5k). We present a study of modern architectures applied to a fixed low data regime of 5000 images on the CheXpert dataset. Conclusively we find that models pretrained on ImageNet21k achieve a higher AUC and larger models require less training steps. All models are quite well calibrated even though we only fine-tuned on 5000 training samples. All {\textquoteleft}modern{\textquoteright} architectures have higher AUC than ResNet50. Regularization of Big Transfer Models with MixUp or Mean Teacher improves calibration, MixUp also improves accuracy. Vision Transformer achieve comparable or on par results to Big Transfer Models.",
keywords = "Limited data, Medical image classification, Transfer learning",
author = "Sontje Ihler and Felix Kuhnke and Svenja Spindeldreier",
year = "2022",
month = sep,
day = "15",
doi = "10.1007/978-3-031-16431-6_62",
language = "English",
isbn = "9783031164309",
series = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
publisher = "Springer Science and Business Media Deutschland GmbH",
pages = "654--663",
editor = "Linwei Wang and Qi Dou and Fletcher, {P. Thomas} and Stefanie Speidel and Shuo Li",
booktitle = "Medical Image Computing and Computer Assisted Intervention – MICCAI 2022 - 25th International Conference, Proceedings",
address = "Germany",
note = "25th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2022 ; Conference date: 18-09-2022 Through 22-09-2022",

}

RIS

TY - GEN

T1 - A Comprehensive Study of Modern Architectures and Regularization Approaches on CheXpert5000

AU - Ihler, Sontje

AU - Kuhnke, Felix

AU - Spindeldreier, Svenja

PY - 2022/9/15

Y1 - 2022/9/15

N2 - Computer aided diagnosis (CAD) has gained an increased amount of attention in the general research community over the last years as an example of a typical limited data application - with experiments on labeled 100k–200k datasets. Although these datasets are still small compared to natural image datasets like ImageNet1k, ImageNet21k and JFT, they are large for annotated medical datasets, where 1k–10k labeled samples are much more common. There is no baseline on which methods to build on in the low data regime. In this work we bridge this gap by providing an extensive study on medical image classification with limited annotations (5k). We present a study of modern architectures applied to a fixed low data regime of 5000 images on the CheXpert dataset. Conclusively we find that models pretrained on ImageNet21k achieve a higher AUC and larger models require less training steps. All models are quite well calibrated even though we only fine-tuned on 5000 training samples. All ‘modern’ architectures have higher AUC than ResNet50. Regularization of Big Transfer Models with MixUp or Mean Teacher improves calibration, MixUp also improves accuracy. Vision Transformer achieve comparable or on par results to Big Transfer Models.

AB - Computer aided diagnosis (CAD) has gained an increased amount of attention in the general research community over the last years as an example of a typical limited data application - with experiments on labeled 100k–200k datasets. Although these datasets are still small compared to natural image datasets like ImageNet1k, ImageNet21k and JFT, they are large for annotated medical datasets, where 1k–10k labeled samples are much more common. There is no baseline on which methods to build on in the low data regime. In this work we bridge this gap by providing an extensive study on medical image classification with limited annotations (5k). We present a study of modern architectures applied to a fixed low data regime of 5000 images on the CheXpert dataset. Conclusively we find that models pretrained on ImageNet21k achieve a higher AUC and larger models require less training steps. All models are quite well calibrated even though we only fine-tuned on 5000 training samples. All ‘modern’ architectures have higher AUC than ResNet50. Regularization of Big Transfer Models with MixUp or Mean Teacher improves calibration, MixUp also improves accuracy. Vision Transformer achieve comparable or on par results to Big Transfer Models.

KW - Limited data

KW - Medical image classification

KW - Transfer learning

UR - http://www.scopus.com/inward/record.url?scp=85138802602&partnerID=8YFLogxK

U2 - 10.1007/978-3-031-16431-6_62

DO - 10.1007/978-3-031-16431-6_62

M3 - Conference contribution

AN - SCOPUS:85138802602

SN - 9783031164309

T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

SP - 654

EP - 663

BT - Medical Image Computing and Computer Assisted Intervention – MICCAI 2022 - 25th International Conference, Proceedings

A2 - Wang, Linwei

A2 - Dou, Qi

A2 - Fletcher, P. Thomas

A2 - Speidel, Stefanie

A2 - Li, Shuo

PB - Springer Science and Business Media Deutschland GmbH

T2 - 25th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2022

Y2 - 18 September 2022 through 22 September 2022

ER -