N2V2PRO: Neural Network Mapping Framework for a Custom Vector Processor Architecture

Publication: Contribution to book/report/anthology/conference proceedings › Conference paper › Research › Peer-reviewed

Authors

  • Sven Gesper
  • Gia Bao Thieu
  • Daniel Kohler
  • Markus Kock
  • Tim Berthold
  • Oliver Renke
  • Holger Blume
  • Guillermo Paya-Vaya

External organizations

  • Technische Universität Braunschweig
  • Robert Bosch GmbH
  • Dream Chip Technologies GmbH

Details

Original language: English
Title of host publication: 2023 IEEE 13th International Conference on Consumer Electronics - Berlin, ICCE-Berlin 2023
Publisher: IEEE Computer Society
Pages: 94-99
Number of pages: 6
ISBN (electronic): 9798350324150
Publication status: Published - 2023
Event: 13th IEEE International Conference on Consumer Electronics - Berlin, ICCE-Berlin 2023 - Berlin, Germany
Duration: 4 Sept 2022 - 5 Sept 2022

Publication series

Name: IEEE International Conference on Consumer Electronics - Berlin, ICCE-Berlin
ISSN (print): 2166-6814
ISSN (electronic): 2166-6822

Abstract

Convolutional neural networks (CNNs) have been demonstrated to be a successful approach in the field of artificial intelligence (AI). Deploying CNNs on embedded devices at a large scale would contribute significantly to the advancement and practical implementation of AI in various industries. However, the complexity of CNNs in terms of memory and operation requirements poses challenges in terms of computing performance, memory bandwidth, and flexibility of the executing hardware. This paper introduces a framework that addresses these issues through model quantization and hardware acceleration on a scalable vertical vector processor architecture. Firstly, the framework includes a method for layer fusion, which is designed to optimize the hardware utilization. Secondly, data storage is optimized to enhance memory efficiency. Lastly, CNNs are mapped onto the vertical vector processing concept of the hardware accelerator. The effectiveness of the proposed framework is evaluated by analyzing the accelerator efficiency based on a field-programmable gate array (FPGA). The results demonstrate that the framework offers flexibility, configurability, and efficient mapping for typical CNN implementations. The framework achieves up to 84% of the peak performance of the vector processor for the VGG net.
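To make the layer-fusion idea from the abstract concrete, the snippet below sketches one common instance of the technique: folding batch-normalization parameters into the weights and bias of the preceding convolution so that both layers execute as a single pass. This is a minimal NumPy sketch of the general concept, not code from the paper; the function name fuse_conv_bn and the specific fusion rule are illustrative assumptions, and N2V2PRO may fuse layers differently for the vector processor.

import numpy as np

# Hypothetical illustration: fold batch-norm parameters into the preceding
# convolution so that conv + BN collapse into a single convolution.
def fuse_conv_bn(weight, bias, gamma, beta, mean, var, eps=1e-5):
    # weight: (out_ch, in_ch, kh, kw) kernel, bias: (out_ch,) conv bias
    # gamma, beta, mean, var: (out_ch,) batch-norm parameters
    scale = gamma / np.sqrt(var + eps)               # per-channel BN scale
    fused_weight = weight * scale[:, None, None, None]
    fused_bias = (bias - mean) * scale + beta
    return fused_weight, fused_bias

After such a fusion step, quantization would map the folded weights and bias to the fixed-point formats supported by the accelerator, reducing memory traffic and the number of passes per layer.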

ASJC Scopus subject areas

Cite

N2V2PRO: Neural Network Mapping Framework for a Custom Vector Processor Architecture. / Gesper, Sven; Thieu, Gia Bao; Kohler, Daniel et al.
2023 IEEE 13th International Conference on Consumer Electronics - Berlin, ICCE-Berlin 2023. IEEE Computer Society, 2023. pp. 94-99 (IEEE International Conference on Consumer Electronics - Berlin, ICCE-Berlin).


Gesper, S, Thieu, GB, Kohler, D, Kock, M, Berthold, T, Renke, O, Blume, H & Paya-Vaya, G 2023, N2V2PRO: Neural Network Mapping Framework for a Custom Vector Processor Architecture. in 2023 IEEE 13th International Conference on Consumer Electronics - Berlin, ICCE-Berlin 2023. IEEE International Conference on Consumer Electronics - Berlin, ICCE-Berlin, IEEE Computer Society, pp. 94-99, 13th IEEE International Conference on Consumer Electronics - Berlin, ICCE-Berlin 2023, Berlin, Germany, 4 Sept 2022. https://doi.org/10.1109/icce-berlin58801.2023.10375652
Gesper, S., Thieu, G. B., Kohler, D., Kock, M., Berthold, T., Renke, O., Blume, H., & Paya-Vaya, G. (2023). N2V2PRO: Neural Network Mapping Framework for a Custom Vector Processor Architecture. In 2023 IEEE 13th International Conference on Consumer Electronics - Berlin, ICCE-Berlin 2023 (pp. 94-99). (IEEE International Conference on Consumer Electronics - Berlin, ICCE-Berlin). IEEE Computer Society. https://doi.org/10.1109/icce-berlin58801.2023.10375652
Gesper S, Thieu GB, Kohler D, Kock M, Berthold T, Renke O et al. N2V2PRO: Neural Network Mapping Framework for a Custom Vector Processor Architecture. In 2023 IEEE 13th International Conference on Consumer Electronics - Berlin, ICCE-Berlin 2023. IEEE Computer Society. 2023. p. 94-99. (IEEE International Conference on Consumer Electronics - Berlin, ICCE-Berlin). doi: 10.1109/icce-berlin58801.2023.10375652
Gesper, Sven ; Thieu, Gia Bao ; Kohler, Daniel et al. / N2V2PRO: Neural Network Mapping Framework for a Custom Vector Processor Architecture. 2023 IEEE 13th International Conference on Consumer Electronics - Berlin, ICCE-Berlin 2023. IEEE Computer Society, 2023. pp. 94-99 (IEEE International Conference on Consumer Electronics - Berlin, ICCE-Berlin).
BibTeX
@inproceedings{f01c92096de54ed4b81f1e5f2d91f80b,
title = "N2V2PRO: Neural Network Mapping Framework for a Custom Vector Processor Architecture",
abstract = "Convolutional neural networks (CNNs) have been demonstrated to be a successful approach in the field of artificial intelligence (AI). Deploying CNNs on embedded devices at a large scale would contribute significantly to the advancement and practical implementation of AI in various industries. However, the complexity of CNNs in terms of memory and operation requirements poses challenges in terms of computing performance, memory bandwidth, and flexibility of the executing hardware. This paper introduces a framework that addresses these issues through model quantization and hardware acceleration on a scalable vertical vector processor architecture. Firstly, the framework includes a method for layer fusion, which is designed to optimize the hardware utilization. Secondly, data storage is optimized to enhance memory efficiency. Lastly, CNNs are mapped onto the vertical vector processing concept of the hardware accelerator. The effectiveness of the proposed framework is evaluated by analyzing the accelerator efficiency based on a field-programmable gate array (FPGA). The results demonstrate that the framework offers flexibility, configurability, and efficient mapping for typical CNN implementations. The framework achieves up to 84% of the peak performance of the vector processor for the VGG net.",
keywords = "CNN Layer Conversion, Custom Accelerator, Neural Network Hardware Mapping, Neural Network Quantization",
author = "Sven Gesper and Thieu, {Gia Bao} and Daniel Kohler and Markus Kock and Tim Berthold and Oliver Renke and Holger Blume and Guillermo Paya-Vaya",
note = "Funding information: Acknowledgment This work was partly funded by the German Federal Ministry of Education and Research (BMBF) under project number 16ME0379 (ZuSE-KI-AVF).; 13th IEEE International Conference on Consumer Electronics - Berlin, ICCE-Berlin 2023 ; Conference date: 04-09-2022 Through 05-09-2022",
year = "2023",
doi = "10.1109/icce-berlin58801.2023.10375652",
language = "English",
series = "IEEE International Conference on Consumer Electronics - Berlin, ICCE-Berlin",
publisher = "IEEE Computer Society",
pages = "94--99",
booktitle = "2023 IEEE 13th International Conference on Consumer Electronics - Berlin, ICCE-Berlin 2023",
address = "United States",

}

RIS

TY - GEN
T1 - N2V2PRO
T2 - 13th IEEE International Conference on Consumer Electronics - Berlin, ICCE-Berlin 2023
AU - Gesper, Sven
AU - Thieu, Gia Bao
AU - Kohler, Daniel
AU - Kock, Markus
AU - Berthold, Tim
AU - Renke, Oliver
AU - Blume, Holger
AU - Paya-Vaya, Guillermo
N1 - Funding information: This work was partly funded by the German Federal Ministry of Education and Research (BMBF) under project number 16ME0379 (ZuSE-KI-AVF).
PY - 2023
Y1 - 2023
N2 - Convolutional neural networks (CNNs) have been demonstrated to be a successful approach in the field of artificial intelligence (AI). Deploying CNNs on embedded devices at a large scale would contribute significantly to the advancement and practical implementation of AI in various industries. However, the complexity of CNNs in terms of memory and operation requirements poses challenges in terms of computing performance, memory bandwidth, and flexibility of the executing hardware. This paper introduces a framework that addresses these issues through model quantization and hardware acceleration on a scalable vertical vector processor architecture. Firstly, the framework includes a method for layer fusion, which is designed to optimize the hardware utilization. Secondly, data storage is optimized to enhance memory efficiency. Lastly, CNNs are mapped onto the vertical vector processing concept of the hardware accelerator. The effectiveness of the proposed framework is evaluated by analyzing the accelerator efficiency based on a field-programmable gate array (FPGA). The results demonstrate that the framework offers flexibility, configurability, and efficient mapping for typical CNN implementations. The framework achieves up to 84% of the peak performance of the vector processor for the VGG net.
AB - Convolutional neural networks (CNNs) have been demonstrated to be a successful approach in the field of artificial intelligence (AI). Deploying CNNs on embedded devices at a large scale would contribute significantly to the advancement and practical implementation of AI in various industries. However, the complexity of CNNs in terms of memory and operation requirements poses challenges in terms of computing performance, memory bandwidth, and flexibility of the executing hardware. This paper introduces a framework that addresses these issues through model quantization and hardware acceleration on a scalable vertical vector processor architecture. Firstly, the framework includes a method for layer fusion, which is designed to optimize the hardware utilization. Secondly, data storage is optimized to enhance memory efficiency. Lastly, CNNs are mapped onto the vertical vector processing concept of the hardware accelerator. The effectiveness of the proposed framework is evaluated by analyzing the accelerator efficiency based on a field-programmable gate array (FPGA). The results demonstrate that the framework offers flexibility, configurability, and efficient mapping for typical CNN implementations. The framework achieves up to 84% of the peak performance of the vector processor for the VGG net.
KW - CNN Layer Conversion
KW - Custom Accelerator
KW - Neural Network Hardware Mapping
KW - Neural Network Quantization
UR - http://www.scopus.com/inward/record.url?scp=85182920276&partnerID=8YFLogxK
U2 - 10.1109/icce-berlin58801.2023.10375652
DO - 10.1109/icce-berlin58801.2023.10375652
M3 - Conference contribution
AN - SCOPUS:85182920276
T3 - IEEE International Conference on Consumer Electronics - Berlin, ICCE-Berlin
SP - 94
EP - 99
BT - 2023 IEEE 13th International Conference on Consumer Electronics - Berlin, ICCE-Berlin 2023
PB - IEEE Computer Society
Y2 - 4 September 2022 through 5 September 2022
ER -
