Details
| Original language | English |
| --- | --- |
| Title of host publication | 2018 IEEE International Conference on Big Knowledge (ICBK) |
| Editors | Ong Yew Soon, Huanhuan Chen, Xindong Wu, Charu Aggarwal |
| Publisher | Institute of Electrical and Electronics Engineers Inc. |
| Pages | 328-335 |
| Number of pages | 8 |
| ISBN (electronic) | 9781538691243 |
| ISBN (print) | 9781538691267 |
| Publication status | Published - 27 Dec 2018 |
| Event | 9th IEEE International Conference on Big Knowledge, ICBK 2018, Singapore, Singapore, 17 Nov 2018 → 18 Nov 2018 |
Abstract
Traditional decision tree induction algorithms are greedy, making locally optimal decisions at each node based on splitting criteria such as information gain or the Gini index. A reinforcement learning approach to decision tree building seems more suitable, as it aims to maximize the long-term return rather than a short-term goal. In this paper, a reinforcement learning approach is used to train a Markov Decision Process (MDP), which enables the creation of a short and highly accurate decision tree. Moreover, the use of reinforcement learning naturally enables additional functionality, such as learning under concept drifts, feature importance weighting, inclusion of new features and forgetting of obsolete ones, as well as classification with incomplete data. To deal with concept drifts, a reset operation is proposed that allows for local re-learning of outdated parts of the tree. Preliminary experiments show that this approach adapts better to concept drifts and changing feature spaces, while still producing a short and highly accurate decision tree.
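The record contains no code, but the abstract's idea can be illustrated with a toy sketch: tree induction cast as an episodic MDP whose states are root-to-node paths, whose actions either split on an unused feature or stop and predict, and whose reward is leaf accuracy, learned with a tabular Q-learning-style update. Every concrete choice below (the state encoding, the reward, the `reset` helper, the dataset) is an illustrative assumption, not the authors' actual formulation.

```python
# Minimal sketch (assumptions, not the paper's code): decision tree induction
# as an episodic MDP learned with a tabular Q-learning-style update.
#   state  = tuple of (feature, value) tests on the path from the root
#   action = ("split", f) on an unused feature, or ("predict", None)
#   reward = fraction of the node's rows that the majority-class prediction gets right
import random
from collections import Counter, defaultdict

# Toy binary dataset: (features, label). Feature 0 happens to determine the label.
DATA = [((0, 0, 1), 0), ((0, 1, 1), 0), ((1, 0, 0), 1),
        ((1, 1, 0), 1), ((0, 0, 0), 0), ((1, 1, 1), 1)]
N_FEATURES = 3
ALPHA, EPSILON = 0.5, 0.2            # learning rate, exploration rate
Q = defaultdict(float)               # Q[(state, action)] -> estimated return

def actions(state):
    used = {f for f, _ in state}
    return [("split", f) for f in range(N_FEATURES) if f not in used] + [("predict", None)]

def choose(state):
    acts = actions(state)
    if random.random() < EPSILON:
        return random.choice(acts)
    return max(acts, key=lambda a: Q[(state, a)])

def leaf_reward(rows):
    if not rows:
        return 0.0
    return Counter(y for _, y in rows).most_common(1)[0][1] / len(rows)

def episode(state, rows):
    """Grow one tree top-down; the return is the weighted accuracy of its leaves."""
    action = choose(state)
    if action[0] == "predict":
        r = leaf_reward(rows)
    else:
        f, r = action[1], 0.0
        for v in (0, 1):                       # recurse into both branches
            branch = [(x, y) for x, y in rows if x[f] == v]
            weight = len(branch) / len(rows) if rows else 0.0
            r += weight * episode(state + ((f, v),), branch)
    Q[(state, action)] += ALPHA * (r - Q[(state, action)])   # credit long-term return
    return r

def reset(prefix):
    """Illustration of a reset operation: forget the learned values for the
    subtree rooted at `prefix`, so only that outdated part is re-learned."""
    for key in [k for k in Q if k[0][:len(prefix)] == prefix]:
        del Q[key]

for _ in range(300):                  # train
    episode((), DATA)
EPSILON = 0.0                         # evaluate the greedy policy
print("greedy return:", episode((), DATA))
reset(((0, 0),))                      # e.g. drift detected where feature 0 == 0
```

Note that the update backs up a weighted sum over both branch returns rather than a single sampled transition, so it is closer to an expected (tree-backup) update than textbook Q-learning; either variant conveys the key point that split choices are credited by long-term leaf accuracy rather than a one-step purity gain.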
Keywords
- Concept drifts, Decision trees, Reinforcement learning, Stream mining
ASJC Scopus subject areas
- Computer Science (all)
- Artificial Intelligence
- Computer Networks and Communications
- Decision Sciences (all)
- Information Systems and Management
Cite this
Reinforcement Learning Based Decision Tree Induction over Data Streams with Concept Drifts. / Blake, Christopher; Ntoutsi, Eirini. 2018 IEEE International Conference on Big Knowledge (ICBK). ed. / Ong Yew Soon; Huanhuan Chen; Xindong Wu; Charu Aggarwal. Institute of Electrical and Electronics Engineers Inc., 2018. p. 328-335 00051.
Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer-review
TY - GEN
T1 - Reinforcement Learning Based Decision Tree Induction over Data Streams with Concept Drifts
AU - Blake, Christopher
AU - Ntoutsi, Eirini
N1 - Funding information: The work was partially funded by the European Commission for the ERC Advanced Grant ALEXANDRIA under grant No. 339233 and inspired by the German Research Foundation (DFG) project OSCAR (Opinion Stream Classification with Ensembles and Active leaRners) for which the last author is Co-Principal Investigator.
PY - 2018/12/27
Y1 - 2018/12/27
N2 - Traditional decision tree induction algorithms are greedy, making locally optimal decisions at each node based on splitting criteria such as information gain or the Gini index. A reinforcement learning approach to decision tree building seems more suitable, as it aims to maximize the long-term return rather than a short-term goal. In this paper, a reinforcement learning approach is used to train a Markov Decision Process (MDP), which enables the creation of a short and highly accurate decision tree. Moreover, the use of reinforcement learning naturally enables additional functionality, such as learning under concept drifts, feature importance weighting, inclusion of new features and forgetting of obsolete ones, as well as classification with incomplete data. To deal with concept drifts, a reset operation is proposed that allows for local re-learning of outdated parts of the tree. Preliminary experiments show that this approach adapts better to concept drifts and changing feature spaces, while still producing a short and highly accurate decision tree.
AB - Traditional decision tree induction algorithms are greedy, making locally optimal decisions at each node based on splitting criteria such as information gain or the Gini index. A reinforcement learning approach to decision tree building seems more suitable, as it aims to maximize the long-term return rather than a short-term goal. In this paper, a reinforcement learning approach is used to train a Markov Decision Process (MDP), which enables the creation of a short and highly accurate decision tree. Moreover, the use of reinforcement learning naturally enables additional functionality, such as learning under concept drifts, feature importance weighting, inclusion of new features and forgetting of obsolete ones, as well as classification with incomplete data. To deal with concept drifts, a reset operation is proposed that allows for local re-learning of outdated parts of the tree. Preliminary experiments show that this approach adapts better to concept drifts and changing feature spaces, while still producing a short and highly accurate decision tree.
KW - Concept drifts
KW - Decision trees
KW - Reinforcement learning
KW - Stream mining
UR - http://www.scopus.com/inward/record.url?scp=85061377032&partnerID=8YFLogxK
U2 - 10.1109/ICBK.2018.00051
DO - 10.1109/ICBK.2018.00051
M3 - Conference contribution
AN - SCOPUS:85061377032
SN - 9781538691267
SP - 328
EP - 335
BT - 2018 IEEE International Conference on Big Knowledge (ICBK)
A2 - Ong, Yew Soon
A2 - Chen, Huanhuan
A2 - Wu, Xindong
A2 - Aggarwal, Charu
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 9th IEEE International Conference on Big Knowledge, ICBK 2018
Y2 - 17 November 2018 through 18 November 2018
ER -