Two Worlds in One Network: Fusing Deep Learning and Random Forests for Classification and Object Detection

Publication: Contribution to book/report/anthology/conference proceedings › Contribution to book/anthology › Research › Peer-reviewed

Authors

  • Christoph Reinders
  • Michael Ying Yang
  • Bodo Rosenhahn

External organisations

  • University of Twente

Details

Original language: English
Title of host publication: Volunteered Geographic Information
Subtitle: Interpretation, Visualization and Social Context
Publisher: Springer Nature
Pages: 103-130
Number of pages: 28
ISBN (electronic): 9783031353741
ISBN (print): 9783031353734
Publication status: Published - 9 Dec 2023

Abstract

Neural networks have demonstrated great success; however, large amounts of labeled data are usually required for training. In this work, a framework for analyzing road and traffic situations for cyclists and pedestrians is presented that requires only very few labeled examples. We address this problem by combining convolutional neural networks and random forests, transforming the random forest into a neural network, and generating a fully convolutional network for detecting objects. Because existing methods for transforming random forests into neural networks propose a direct mapping and produce inefficient architectures, we present neural random forest imitation, an imitation learning approach that generates training data from a random forest and learns a neural network that imitates its behavior. This implicit transformation creates very efficient neural networks that learn the decision boundaries of a random forest. The generated model is differentiable, can be used as a warm start for fine-tuning, and enables end-to-end optimization. Experiments on several real-world benchmark datasets demonstrate superior performance, especially when training with very few examples. Compared to state-of-the-art methods, we significantly reduce the number of network parameters while achieving the same or even improved accuracy due to better generalization.
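The abstract describes the imitation step only in prose: training data is generated from a random forest, and a neural network is trained to reproduce the forest's predictions. A minimal, hypothetical sketch of that idea with scikit-learn is given below; the sampling scheme, network size, and all names are illustrative assumptions, not the chapter's actual implementation.

# Hypothetical sketch: distill a random forest into a small neural network by
# labeling generated inputs with the forest (not the authors' code).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# 1) Train the random forest on the few labeled examples that are available.
X, y = load_iris(return_X_y=True)
few = rng.choice(len(X), size=30, replace=False)  # simulate "very few" labels
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[few], y[few])

# 2) Generate unlabeled inputs (here simply uniform samples within the feature
#    ranges) and let the forest provide the targets.
lo, hi = X.min(axis=0), X.max(axis=0)
X_gen = rng.uniform(lo, hi, size=(5000, X.shape[1]))
y_gen = forest.predict(X_gen)

# 3) Fit a compact network on the generated data so it imitates the forest's
#    decision boundaries; being differentiable, it can later be fine-tuned.
imitator = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
imitator.fit(X_gen, y_gen)

agreement = (imitator.predict(X) == forest.predict(X)).mean()
print(f"Imitator agrees with the forest on {agreement:.1%} of the inputs")

This imitation step is also what makes the warm start mentioned in the abstract possible: once the forest's behavior is encoded in network weights, standard gradient-based fine-tuning and end-to-end optimization can be applied.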

Cite

Two Worlds in One Network: Fusing Deep Learning and Random Forests for Classification and Object Detection. / Reinders, Christoph; Yang, Michael Ying; Rosenhahn, Bodo.
Volunteered Geographic Information: Interpretation, Visualization and Social Context. Springer Nature, 2023. pp. 103-130.


Reinders, C, Yang, MY & Rosenhahn, B 2023, Two Worlds in One Network: Fusing Deep Learning and Random Forests for Classification and Object Detection. in Volunteered Geographic Information: Interpretation, Visualization and Social Context. Springer Nature, pp. 103-130. https://doi.org/10.1007/978-3-031-35374-1_5
Reinders, C., Yang, M. Y., & Rosenhahn, B. (2023). Two Worlds in One Network: Fusing Deep Learning and Random Forests for Classification and Object Detection. In Volunteered Geographic Information: Interpretation, Visualization and Social Context (pp. 103-130). Springer Nature. https://doi.org/10.1007/978-3-031-35374-1_5
Reinders C, Yang MY, Rosenhahn B. Two Worlds in One Network: Fusing Deep Learning and Random Forests for Classification and Object Detection. In: Volunteered Geographic Information: Interpretation, Visualization and Social Context. Springer Nature. 2023. p. 103-130. doi: 10.1007/978-3-031-35374-1_5
Reinders, Christoph ; Yang, Michael Ying ; Rosenhahn, Bodo. / Two Worlds in One Network : Fusing Deep Learning and Random Forests for Classification and Object Detection. Volunteered Geographic Information: Interpretation, Visualization and Social Context. Springer Nature, 2023. pp. 103-130
BibTeX
@inbook{108e14f7a53340d3bb9f85e07c1a2a7a,
title = "Two Worlds in One Network: Fusing Deep Learning and Random Forests for Classification and Object Detection",
abstract = "Neural networks have demonstrated great success; however, large amounts of labeled data are usually required for training the networks. In this work, a framework for analyzing the road and traffic situations for cyclists and pedestrians is presented, which only requires very few labeled examples. We address this problem by combining convolutional neural networks and random forests, transforming the random forest into a neural network, and generating a fully convolutional network for detecting objects. Because existing methods for transforming random forests into neural networks propose a direct mapping and produce inefficient architectures, we present neural random forest imitation-an imitation learning approach by generating training data from a random forest and learning a neural network that imitates its behavior. This implicit transformation creates very efficient neural networks that learn the decision boundaries of a random forest. The generated model is differentiable, can be used as a warm start for fine-tuning, and enables end-to-end optimization. Experiments on several real- world benchmark datasets demonstrate superior performance, especially when training with very few training examples. Compared to state-of-the-art methods, we significantly reduce the number of network parameters while achieving the same or even improved accuracy due to better generalization.",
keywords = "Classification, Imitation learning, Localization, Neural networks, Object detection, Random forests",
author = "Christoph Reinders and Yang, {Michael Ying} and Bodo Rosenhahn",
note = "Publisher Copyright: {\textcopyright} The Author(s) 2024. All rights reserved.",
year = "2023",
month = dec,
day = "9",
doi = "10.1007/9783031353741_5",
language = "English",
isbn = "9783031353734",
pages = "103--130",
booktitle = "Volunteered Geographic Information",
publisher = "Springer Nature",
address = "United States",

}

RIS

TY - CHAP

T1 - Two Worlds in One Network

T2 - Fusing Deep Learning and Random Forests for Classification and Object Detection

AU - Reinders, Christoph

AU - Yang, Michael Ying

AU - Rosenhahn, Bodo

N1 - Publisher Copyright: © The Author(s) 2024. All rights reserved.

PY - 2023/12/9

Y1 - 2023/12/9

N2 - Neural networks have demonstrated great success; however, large amounts of labeled data are usually required for training. In this work, a framework for analyzing road and traffic situations for cyclists and pedestrians is presented that requires only very few labeled examples. We address this problem by combining convolutional neural networks and random forests, transforming the random forest into a neural network, and generating a fully convolutional network for detecting objects. Because existing methods for transforming random forests into neural networks propose a direct mapping and produce inefficient architectures, we present neural random forest imitation, an imitation learning approach that generates training data from a random forest and learns a neural network that imitates its behavior. This implicit transformation creates very efficient neural networks that learn the decision boundaries of a random forest. The generated model is differentiable, can be used as a warm start for fine-tuning, and enables end-to-end optimization. Experiments on several real-world benchmark datasets demonstrate superior performance, especially when training with very few examples. Compared to state-of-the-art methods, we significantly reduce the number of network parameters while achieving the same or even improved accuracy due to better generalization.

AB - Neural networks have demonstrated great success; however, large amounts of labeled data are usually required for training. In this work, a framework for analyzing road and traffic situations for cyclists and pedestrians is presented that requires only very few labeled examples. We address this problem by combining convolutional neural networks and random forests, transforming the random forest into a neural network, and generating a fully convolutional network for detecting objects. Because existing methods for transforming random forests into neural networks propose a direct mapping and produce inefficient architectures, we present neural random forest imitation, an imitation learning approach that generates training data from a random forest and learns a neural network that imitates its behavior. This implicit transformation creates very efficient neural networks that learn the decision boundaries of a random forest. The generated model is differentiable, can be used as a warm start for fine-tuning, and enables end-to-end optimization. Experiments on several real-world benchmark datasets demonstrate superior performance, especially when training with very few examples. Compared to state-of-the-art methods, we significantly reduce the number of network parameters while achieving the same or even improved accuracy due to better generalization.

KW - Classification

KW - Imitation learning

KW - Localization

KW - Neural networks

KW - Object detection

KW - Random forests

UR - http://www.scopus.com/inward/record.url?scp=85195113085&partnerID=8YFLogxK

U2 - 10.1007/978-3-031-35374-1_5

DO - 10.1007/978-3-031-35374-1_5

M3 - Contribution to book/anthology

AN - SCOPUS:85195113085

SN - 9783031353734

SP - 103

EP - 130

BT - Volunteered Geographic Information

PB - Springer Nature

ER -
