PULNS: Positive-Unlabeled Learning with Effective Negative Sample Selector

Publication: Conference article in journal › Research › Peer-reviewed

Authors

  • Chuan Luo
  • Pu Zhao
  • Chen Chen
  • Bo Qiao
  • Wei Wu

External organizations

  • Microsoft Corporation
  • Microsoft Research
  • University of Newcastle
  • CAS - Institute of Software
  • Graduate University of Chinese Academy of Sciences

Details

Original language: English
Pages (from-to): 8784-8792
Number of pages: 9
Journal: Proceedings of the AAAI Conference on Artificial Intelligence
Volume: 35
Issue number: 10
Publication status: Published - 18 May 2021
Event: 35th AAAI Conference on Artificial Intelligence, AAAI 2021 - Virtual, Online
Duration: 2 Feb 2021 - 9 Feb 2021

Abstract

Positive-unlabeled learning (PU learning) is an important case of binary classification where the training data contains only positive and unlabeled samples. The current state-of-the-art approach for PU learning is the cost-sensitive approach, which casts PU learning as a cost-sensitive classification problem and relies on an unbiased risk estimator to correct the bias introduced by the unlabeled samples. However, this approach requires knowledge of the class prior and is sensitive to potential label noise. In this paper, we propose a novel PU learning approach dubbed PULNS, equipped with an effective negative sample selector that is optimized by reinforcement learning. PULNS employs the negative sample selector as an agent responsible for selecting negative samples from the unlabeled data. While the selected, likely negative samples are used to improve the classifier, the classifier's performance in turn serves as the reward that improves the selector through the REINFORCE algorithm. By alternating updates of the selector and the classifier, the performance of both is improved. Extensive experimental studies on 7 real-world application benchmarks demonstrate that PULNS consistently outperforms the current state-of-the-art methods in PU learning, and our experimental results also confirm the effectiveness of the negative sample selector underlying PULNS.
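The alternating scheme described in the abstract can be illustrated with a minimal sketch: a selector samples likely negatives from the unlabeled set, a classifier is trained on positives versus those selected samples, and the classifier's performance is fed back as a REINFORCE reward for the selector. All names and the 1-D toy data below are hypothetical stand-ins; in the actual PULNS approach both the selector and the classifier are learned neural networks, and this is only an assumed toy instantiation of the idea.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy 1-D data: true positives cluster around +2, true negatives around -2.
# The unlabeled set mixes both, as in the PU setting.
positives = [random.gauss(2.0, 1.0) for _ in range(50)]
unlabeled = [random.gauss(2.0, 1.0) for _ in range(25)] + \
            [random.gauss(-2.0, 1.0) for _ in range(75)]

# Selector: one logit per unlabeled sample, giving the probability of
# selecting that sample as a negative (stand-in for a selector network).
sel_logits = [0.0] * len(unlabeled)

w, b = 0.0, 0.0            # logistic-regression classifier parameters
lr_cls, lr_sel = 0.1, 0.5

for _ in range(200):       # alternate selector and classifier updates
    # 1) Selector samples a set of likely negatives (Bernoulli per sample).
    probs = [sigmoid(l) for l in sel_logits]
    chosen = [random.random() < p for p in probs]
    negs = [x for x, c in zip(unlabeled, chosen) if c] or unlabeled[:1]

    # 2) One classifier pass on positives (y=1) vs. selected negatives (y=0).
    for x, y in [(x, 1.0) for x in positives] + [(x, 0.0) for x in negs]:
        p = sigmoid(w * x + b)
        w -= lr_cls * (p - y) * x
        b -= lr_cls * (p - y)

    # 3) Reward = classifier accuracy on positives vs. selected negatives;
    #    REINFORCE nudges selection probabilities toward higher reward.
    correct = sum(sigmoid(w * x + b) > 0.5 for x in positives)
    correct += sum(sigmoid(w * x + b) <= 0.5 for x in negs)
    reward = correct / (len(positives) + len(negs)) - 0.5   # 0.5 = baseline
    for i, c in enumerate(chosen):
        grad = (1.0 - probs[i]) if c else -probs[i]         # d log pi / d logit
        sel_logits[i] += lr_sel * reward * grad

# After the alternating updates, check the classifier on the positives.
acc_pos = sum(sigmoid(w * x + b) > 0.5 for x in positives) / len(positives)
print(round(acc_pos, 2))
```

The point of the sketch is the feedback loop in step 3: the selector is never told which unlabeled samples are negative, yet improving the classifier's reward pushes its selection probabilities toward the true negatives.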

Cite this

PULNS: Positive-Unlabeled Learning with Effective Negative Sample Selector. / Luo, Chuan; Zhao, Pu; Chen, Chen et al.
In: Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 35, No. 10, 18.05.2021, pp. 8784-8792.


Luo, C, Zhao, P, Chen, C, Qiao, B, Du, C, Zhang, H, Wu, W, Cai, S, He, B, Rajmohan, S & Lin, Q 2021, 'PULNS: Positive-Unlabeled Learning with Effective Negative Sample Selector', Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, no. 10, pp. 8784-8792. https://doi.org/10.1609/aaai.v35i10.17064
Luo, C., Zhao, P., Chen, C., Qiao, B., Du, C., Zhang, H., Wu, W., Cai, S., He, B., Rajmohan, S., & Lin, Q. (2021). PULNS: Positive-Unlabeled Learning with Effective Negative Sample Selector. Proceedings of the AAAI Conference on Artificial Intelligence, 35(10), 8784-8792. https://doi.org/10.1609/aaai.v35i10.17064
Luo C, Zhao P, Chen C, Qiao B, Du C, Zhang H et al. PULNS: Positive-Unlabeled Learning with Effective Negative Sample Selector. Proceedings of the AAAI Conference on Artificial Intelligence. 2021 May 18;35(10):8784-8792. doi: 10.1609/aaai.v35i10.17064
Luo, Chuan ; Zhao, Pu ; Chen, Chen et al. / PULNS: Positive-Unlabeled Learning with Effective Negative Sample Selector. In: Proceedings of the AAAI Conference on Artificial Intelligence. 2021 ; Vol. 35, No. 10. pp. 8784-8792.
BibTeX
@article{f72a1df11dc143569b884096c2760b82,
title = "PULNS: Positive-Unlabeled Learning with Effective Negative Sample Selector",
abstract = "Positive-unlabeled learning (PU learning) is an important case of binary classification where the training data only contains positive and unlabeled samples. The current state-of-the-art approach for PU learning is the cost-sensitive approach, which casts PU learning as a cost-sensitive classification problem and relies on unbiased risk estimator for correcting the bias introduced by the unlabeled samples. However, this approach requires the knowledge of class prior and is subject to the potential label noise. In this paper, we propose a novel PU learning approach dubbed PULNS, equipped with an effective negative sample selector, which is optimized by reinforcement learning. Our PULNS approach employs an effective negative sample selector as the agent responsible for selecting negative samples from the unlabeled data. While the selected, likely negative samples can be used to improve the classifier, the performance of classifier is also used as the reward to improve the selector through the REINFORCE algorithm. By alternating the updates of the selector and the classifier, the performance of both is improved. Extensive experimental studies on 7 real-world application benchmarks demonstrate that PULNS consistently outperforms the current state-of-the-art methods in PU learning, and our experimental results also confirm the effectiveness of the negative sample selector underlying PULNS.",
author = "Chuan Luo and Pu Zhao and Chen Chen and Bo Qiao and Chao Du and Hongyu Zhang and Wei Wu and Shaowei Cai and Bing He and Saravanakumar Rajmohan and Qingwei Lin",
year = "2021",
month = may,
day = "18",
doi = "10.1609/aaai.v35i10.17064",
language = "English",
volume = "35",
pages = "8784--8792",
journal = "Proceedings of the AAAI Conference on Artificial Intelligence",
number = "10",
note = "35th AAAI Conference on Artificial Intelligence, AAAI 2021 ; Conference date: 02-02-2021 Through 09-02-2021",
}

RIS

TY  - JOUR
T1  - PULNS: Positive-Unlabeled Learning with Effective Negative Sample Selector
T2  - 35th AAAI Conference on Artificial Intelligence, AAAI 2021
AU  - Luo, Chuan
AU  - Zhao, Pu
AU  - Chen, Chen
AU  - Qiao, Bo
AU  - Du, Chao
AU  - Zhang, Hongyu
AU  - Wu, Wei
AU  - Cai, Shaowei
AU  - He, Bing
AU  - Rajmohan, Saravanakumar
AU  - Lin, Qingwei
PY  - 2021/5/18
Y1  - 2021/5/18
N2  - Positive-unlabeled learning (PU learning) is an important case of binary classification where the training data only contains positive and unlabeled samples. The current state-of-the-art approach for PU learning is the cost-sensitive approach, which casts PU learning as a cost-sensitive classification problem and relies on unbiased risk estimator for correcting the bias introduced by the unlabeled samples. However, this approach requires the knowledge of class prior and is subject to the potential label noise. In this paper, we propose a novel PU learning approach dubbed PULNS, equipped with an effective negative sample selector, which is optimized by reinforcement learning. Our PULNS approach employs an effective negative sample selector as the agent responsible for selecting negative samples from the unlabeled data. While the selected, likely negative samples can be used to improve the classifier, the performance of classifier is also used as the reward to improve the selector through the REINFORCE algorithm. By alternating the updates of the selector and the classifier, the performance of both is improved. Extensive experimental studies on 7 real-world application benchmarks demonstrate that PULNS consistently outperforms the current state-of-the-art methods in PU learning, and our experimental results also confirm the effectiveness of the negative sample selector underlying PULNS.
AB  - Positive-unlabeled learning (PU learning) is an important case of binary classification where the training data only contains positive and unlabeled samples. The current state-of-the-art approach for PU learning is the cost-sensitive approach, which casts PU learning as a cost-sensitive classification problem and relies on unbiased risk estimator for correcting the bias introduced by the unlabeled samples. However, this approach requires the knowledge of class prior and is subject to the potential label noise. In this paper, we propose a novel PU learning approach dubbed PULNS, equipped with an effective negative sample selector, which is optimized by reinforcement learning. Our PULNS approach employs an effective negative sample selector as the agent responsible for selecting negative samples from the unlabeled data. While the selected, likely negative samples can be used to improve the classifier, the performance of classifier is also used as the reward to improve the selector through the REINFORCE algorithm. By alternating the updates of the selector and the classifier, the performance of both is improved. Extensive experimental studies on 7 real-world application benchmarks demonstrate that PULNS consistently outperforms the current state-of-the-art methods in PU learning, and our experimental results also confirm the effectiveness of the negative sample selector underlying PULNS.
UR  - http://www.scopus.com/inward/record.url?scp=85107947831&partnerID=8YFLogxK
U2  - 10.1609/aaai.v35i10.17064
DO  - 10.1609/aaai.v35i10.17064
M3  - Conference article
AN  - SCOPUS:85107947831
VL  - 35
SP  - 8784
EP  - 8792
JO  - Proceedings of the AAAI Conference on Artificial Intelligence
JF  - Proceedings of the AAAI Conference on Artificial Intelligence
IS  - 10
Y2  - 2 February 2021 through 9 February 2021
ER  - 