
PULNS: Positive-Unlabeled Learning with Effective Negative Sample Selector

Research output: Contribution to journal › Conference article › peer review

Authors

  • Chuan Luo
  • Pu Zhao
  • Chen Chen
  • Bo Qiao
  • Wei Wu

External Research Organisations

  • Microsoft Corporation
  • Microsoft Research
  • University of Newcastle
  • CAS - Institute of Software
  • University of the Chinese Academy of Sciences (UCAS)
Metrics

  • Citations (Citation Indexes): 31
  • Captures (Readers): 31

Details

Original language: English
Pages (from-to): 8784-8792
Number of pages: 9
Journal: Proceedings of the AAAI Conference on Artificial Intelligence
Volume: 35
Issue number: 10
Publication status: Published - 18 May 2021
Event: 35th AAAI Conference on Artificial Intelligence, AAAI 2021 - Virtual, Online
Duration: 2 Feb 2021 - 9 Feb 2021

Abstract

Positive-unlabeled learning (PU learning) is an important case of binary classification in which the training data contains only positive and unlabeled samples. The current state-of-the-art approach to PU learning is the cost-sensitive approach, which casts PU learning as a cost-sensitive classification problem and relies on an unbiased risk estimator to correct the bias introduced by the unlabeled samples. However, this approach requires knowledge of the class prior and is susceptible to label noise. In this paper, we propose a novel PU learning approach dubbed PULNS, equipped with an effective negative sample selector that is optimized by reinforcement learning. PULNS employs the negative sample selector as an agent responsible for selecting negative samples from the unlabeled data. While the selected, likely negative samples are used to improve the classifier, the classifier's performance is in turn used as the reward for improving the selector through the REINFORCE algorithm. By alternating updates of the selector and the classifier, the performance of both improves. Extensive experimental studies on 7 real-world application benchmarks demonstrate that PULNS consistently outperforms the current state-of-the-art PU learning methods, and our results also confirm the effectiveness of the negative sample selector underlying PULNS.
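The alternating scheme the abstract describes can be sketched end to end on toy data. The following is a minimal illustration, not the authors' implementation: the Gaussian data, the logistic classifier and selector, the hyperparameters, and the small labeled validation set used to compute the reward are all illustrative assumptions made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def add_bias(X):
    return np.hstack([X, np.ones((len(X), 1))])

# Toy PU data: class-conditional Gaussians, positives near +1, negatives near -1.
X_pos = rng.normal(+1.0, 1.0, size=(100, 2))                  # labeled positives
X_unl = np.vstack([rng.normal(+1.0, 1.0, size=(200, 2)),      # hidden positives
                   rng.normal(-1.0, 1.0, size=(200, 2))])     # hidden negatives
# Small labeled validation set, used here only to compute the selector's reward.
X_val = np.vstack([rng.normal(+1.0, 1.0, size=(50, 2)),
                   rng.normal(-1.0, 1.0, size=(50, 2))])
y_val = np.array([1.0] * 50 + [0.0] * 50)

def train_classifier(X, y, steps=200, lr=0.5):
    """Plain logistic regression trained by gradient descent."""
    w = np.zeros(X.shape[1] + 1)
    Xb = add_bias(X)
    for _ in range(steps):
        w -= lr * Xb.T @ (sigmoid(Xb @ w) - y) / len(y)
    return w

w_sel = np.zeros(3)   # selector policy weights (Bernoulli policy per sample)
baseline = 0.0        # running reward baseline to reduce REINFORCE variance
for it in range(40):
    # Selector step: action 1 = "use this unlabeled point as a negative".
    p_neg = sigmoid(add_bias(X_unl) @ w_sel)
    actions = (rng.random(len(X_unl)) < p_neg).astype(float)
    if actions.sum() < 5:
        continue  # need at least a few selected negatives to train on
    # Classifier step: positives vs. the selected likely negatives.
    X_train = np.vstack([X_pos, X_unl[actions == 1]])
    y_train = np.concatenate([np.ones(len(X_pos)), np.zeros(int(actions.sum()))])
    w_cls = train_classifier(X_train, y_train)
    # Reward = validation accuracy of the resulting classifier.
    reward = np.mean((sigmoid(add_bias(X_val) @ w_cls) > 0.5) == (y_val == 1))
    # REINFORCE: the log-prob gradient of a Bernoulli policy is (a - p) * x.
    grad = add_bias(X_unl).T @ (actions - p_neg) / len(X_unl)
    w_sel += (reward - baseline) * grad
    baseline = 0.9 * baseline + 0.1 * reward

test_acc = np.mean((sigmoid(add_bias(X_val) @ w_cls) > 0.5) == (y_val == 1))
```

The key design point mirrored from the abstract is that the classifier's performance, not any per-sample label, is the only feedback the selector receives, so the selector can be trained even though the unlabeled pool carries no negative labels.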

Cite this

PULNS: Positive-Unlabeled Learning with Effective Negative Sample Selector. / Luo, Chuan; Zhao, Pu; Chen, Chen et al.
In: Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 35, No. 10, 18.05.2021, p. 8784-8792.

Luo, C, Zhao, P, Chen, C, Qiao, B, Du, C, Zhang, H, Wu, W, Cai, S, He, B, Rajmohan, S & Lin, Q 2021, 'PULNS: Positive-Unlabeled Learning with Effective Negative Sample Selector', Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, no. 10, pp. 8784-8792. https://doi.org/10.1609/aaai.v35i10.17064
Luo, C., Zhao, P., Chen, C., Qiao, B., Du, C., Zhang, H., Wu, W., Cai, S., He, B., Rajmohan, S., & Lin, Q. (2021). PULNS: Positive-Unlabeled Learning with Effective Negative Sample Selector. Proceedings of the AAAI Conference on Artificial Intelligence, 35(10), 8784-8792. https://doi.org/10.1609/aaai.v35i10.17064
Luo C, Zhao P, Chen C, Qiao B, Du C, Zhang H et al. PULNS: Positive-Unlabeled Learning with Effective Negative Sample Selector. Proceedings of the AAAI Conference on Artificial Intelligence. 2021 May 18;35(10):8784-8792. doi: 10.1609/aaai.v35i10.17064
Luo, Chuan; Zhao, Pu; Chen, Chen et al. / PULNS: Positive-Unlabeled Learning with Effective Negative Sample Selector. In: Proceedings of the AAAI Conference on Artificial Intelligence. 2021; Vol. 35, No. 10. pp. 8784-8792.
@article{f72a1df11dc143569b884096c2760b82,
title = "PULNS: Positive-Unlabeled Learning with Effective Negative Sample Selector",
abstract = "Positive-unlabeled learning (PU learning) is an important case of binary classification where the training data only contains positive and unlabeled samples. The current state-of-the-art approach for PU learning is the cost-sensitive approach, which casts PU learning as a cost-sensitive classification problem and relies on unbiased risk estimator for correcting the bias introduced by the unlabeled samples. However, this approach requires the knowledge of class prior and is subject to the potential label noise. In this paper, we propose a novel PU learning approach dubbed PULNS, equipped with an effective negative sample selector, which is optimized by reinforcement learning. Our PULNS approach employs an effective negative sample selector as the agent responsible for selecting negative samples from the unlabeled data. While the selected, likely negative samples can be used to improve the classifier, the performance of classifier is also used as the reward to improve the selector through the REINFORCE algorithm. By alternating the updates of the selector and the classifier, the performance of both is improved. Extensive experimental studies on 7 real-world application benchmarks demonstrate that PULNS consistently outperforms the current state-of-the-art methods in PU learning, and our experimental results also confirm the effectiveness of the negative sample selector underlying PULNS.",
author = "Chuan Luo and Pu Zhao and Chen Chen and Bo Qiao and Chao Du and Hongyu Zhang and Wei Wu and Shaowei Cai and Bing He and Saravanakumar Rajmohan and Qingwei Lin",
year = "2021",
month = may,
day = "18",
doi = "10.1609/aaai.v35i10.17064",
language = "English",
volume = "35",
pages = "8784--8792",
number = "10",
journal = "Proceedings of the AAAI Conference on Artificial Intelligence",
note = "35th AAAI Conference on Artificial Intelligence, AAAI 2021 ; Conference date: 02-02-2021 Through 09-02-2021",

}


TY - JOUR
T1 - PULNS: Positive-Unlabeled Learning with Effective Negative Sample Selector
T2 - 35th AAAI Conference on Artificial Intelligence, AAAI 2021
AU - Luo, Chuan
AU - Zhao, Pu
AU - Chen, Chen
AU - Qiao, Bo
AU - Du, Chao
AU - Zhang, Hongyu
AU - Wu, Wei
AU - Cai, Shaowei
AU - He, Bing
AU - Rajmohan, Saravanakumar
AU - Lin, Qingwei
PY - 2021/5/18
Y1 - 2021/5/18
AB - Positive-unlabeled learning (PU learning) is an important case of binary classification where the training data only contains positive and unlabeled samples. The current state-of-the-art approach for PU learning is the cost-sensitive approach, which casts PU learning as a cost-sensitive classification problem and relies on unbiased risk estimator for correcting the bias introduced by the unlabeled samples. However, this approach requires the knowledge of class prior and is subject to the potential label noise. In this paper, we propose a novel PU learning approach dubbed PULNS, equipped with an effective negative sample selector, which is optimized by reinforcement learning. Our PULNS approach employs an effective negative sample selector as the agent responsible for selecting negative samples from the unlabeled data. While the selected, likely negative samples can be used to improve the classifier, the performance of classifier is also used as the reward to improve the selector through the REINFORCE algorithm. By alternating the updates of the selector and the classifier, the performance of both is improved. Extensive experimental studies on 7 real-world application benchmarks demonstrate that PULNS consistently outperforms the current state-of-the-art methods in PU learning, and our experimental results also confirm the effectiveness of the negative sample selector underlying PULNS.
UR - http://www.scopus.com/inward/record.url?scp=85107947831&partnerID=8YFLogxK
U2 - 10.1609/aaai.v35i10.17064
DO - 10.1609/aaai.v35i10.17064
M3 - Conference article
AN - SCOPUS:85107947831
VL - 35
SP - 8784
EP - 8792
JO - Proceedings of the AAAI Conference on Artificial Intelligence
JF - Proceedings of the AAAI Conference on Artificial Intelligence
IS - 10
Y2 - 2 February 2021 through 9 February 2021
ER -