Details
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings |
| Subtitle of host publication | 2015 12th Conference on Computer and Robot Vision, CRV 2015 |
| Publisher | Institute of Electrical and Electronics Engineers Inc. |
| Pages | 155-160 |
| Number of pages | 6 |
| ISBN (electronic) | 9781479919864 |
| Publication status | Published - 14 Jul 2015 |
| Event | 12th Conference on Computer and Robot Vision, CRV 2015, Halifax, Canada. Duration: 3 Jun 2015 → 5 Jun 2015 |
Abstract
Random Forest is a well-known ensemble learning method that achieves high recognition accuracy while preserving a fast training procedure. To construct a Random Forest classifier, several decision trees are arranged in a forest, and a majority vote yields the final decision. To split each node of a decision tree into two children, several candidate variables are randomly selected and a splitting criterion is computed for each of them. From this pool of candidate splits, the Random Forest algorithm selects the best variable according to the splitting criterion. This selection is often unreliable, which reduces recognition accuracy. In this paper, we propose an additional condition for selecting the best variable that improves recognition accuracy, especially for smaller numbers of trees. We enhance the standard threshold selection with a quality estimate computed using a calibration method. The proposed method is evaluated on machine learning as well as object recognition datasets.
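The record does not include the paper's exact calibration procedure, but the core idea of the abstract, selecting a node split only when it also passes an additional quality condition, can be sketched. The snippet below is a minimal illustration, not the authors' method: it uses plain Gini impurity gain as a hypothetical stand-in for the calibrated quality estimate, and the `min_quality` parameter and `best_split` function are assumptions for illustration.

```python
import numpy as np

def gini(labels):
    """Gini impurity of a 1-D array of class labels."""
    if len(labels) == 0:
        return 0.0
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split(X, y, n_candidates=3, min_quality=0.1, rng=None):
    """Pick the best (feature, threshold) among randomly chosen candidates.

    As in standard Random Forest node splitting, a random subset of
    features is scored with a splitting criterion. The twist mirrored
    from the abstract: a split is accepted only if its quality estimate
    (here simply the impurity reduction, a stand-in for the paper's
    calibrated probability) exceeds min_quality; otherwise None is
    returned and the node would be left as a leaf.
    """
    rng = rng or np.random.default_rng(0)
    n_samples, n_features = X.shape
    parent = gini(y)
    best = None
    best_gain = min_quality  # additional acceptance condition
    for f in rng.choice(n_features, size=min(n_candidates, n_features),
                        replace=False):
        for t in np.unique(X[:, f]):
            left = y[X[:, f] <= t]
            right = y[X[:, f] > t]
            if len(left) == 0 or len(right) == 0:
                continue
            # weighted impurity reduction of this candidate split
            gain = parent - (len(left) * gini(left)
                             + len(right) * gini(right)) / n_samples
            if gain > best_gain:
                best_gain, best = gain, (f, t)
    return best, best_gain
```

For a perfectly separable toy set such as `X = [[0],[1],[2],[3]]`, `y = [0,0,1,1]`, the sketch finds the threshold between classes; for noisy nodes where no candidate clears `min_quality`, it returns no split, which is the kind of extra filtering the abstract describes.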
Keywords
- Machine Learning
- Object Recognition
- Random Forest
- Splitting
ASJC Scopus subject areas
- Computer Science(all)
- Computer Vision and Pattern Recognition
- Computer Science Applications
Cite this
Proceedings: 2015 12th Conference on Computer and Robot Vision, CRV 2015. Institute of Electrical and Electronics Engineers Inc., 2015. p. 155-160. Article 7158334.
Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review
TY - GEN
T1 - Improved Threshold Selection by Using Calibrated Probabilities for Random Forest Classifiers
AU - Baumann, Florian
AU - Chen, Jinghui
AU - Vogt, Karsten
AU - Rosenhahn, Bodo
PY - 2015/7/14
Y1 - 2015/7/14
N2 - Random Forest is a well-known ensemble learning method that achieves high recognition accuracy while preserving a fast training procedure. To construct a Random Forest classifier, several decision trees are arranged in a forest, and a majority vote yields the final decision. To split each node of a decision tree into two children, several candidate variables are randomly selected and a splitting criterion is computed for each of them. From this pool of candidate splits, the Random Forest algorithm selects the best variable according to the splitting criterion. This selection is often unreliable, which reduces recognition accuracy. In this paper, we propose an additional condition for selecting the best variable that improves recognition accuracy, especially for smaller numbers of trees. We enhance the standard threshold selection with a quality estimate computed using a calibration method. The proposed method is evaluated on machine learning as well as object recognition datasets.
AB - Random Forest is a well-known ensemble learning method that achieves high recognition accuracy while preserving a fast training procedure. To construct a Random Forest classifier, several decision trees are arranged in a forest, and a majority vote yields the final decision. To split each node of a decision tree into two children, several candidate variables are randomly selected and a splitting criterion is computed for each of them. From this pool of candidate splits, the Random Forest algorithm selects the best variable according to the splitting criterion. This selection is often unreliable, which reduces recognition accuracy. In this paper, we propose an additional condition for selecting the best variable that improves recognition accuracy, especially for smaller numbers of trees. We enhance the standard threshold selection with a quality estimate computed using a calibration method. The proposed method is evaluated on machine learning as well as object recognition datasets.
KW - Machine Learning
KW - Object Recognition
KW - Random Forest
KW - Splitting
UR - http://www.scopus.com/inward/record.url?scp=84943159205&partnerID=8YFLogxK
U2 - 10.1109/crv.2015.28
DO - 10.1109/crv.2015.28
M3 - Conference contribution
AN - SCOPUS:84943159205
SP - 155
EP - 160
BT - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 12th Conference on Computer and Robot Vision, CRV 2015
Y2 - 3 June 2015 through 5 June 2015
ER -