FaxPlainAC: A Fact-Checking Tool Based on EXPLAINable Models with HumAn Correction in the Loop

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review

Authors

  • Zijian Zhang
  • Koustav Rudra
  • Avishek Anand


Details

Original language: English
Title of host publication: CIKM '21
Subtitle of host publication: Proceedings of the 30th ACM International Conference on Information & Knowledge Management
Publisher: Association for Computing Machinery (ACM)
Pages: 4823-4827
Number of pages: 5
ISBN (electronic): 9781450384469
Publication status: Published - 30 Oct 2021
Event: 30th ACM International Conference on Information and Knowledge Management, CIKM 2021 - Virtual, Online, Australia
Duration: 1 Nov 2021 - 5 Nov 2021

Publication series

Name: International Conference on Information and Knowledge Management, Proceedings

Abstract

Fact-checking on the Web has become the main mechanism through which we assess the credibility of news and information. Existing fact-checkers verify the authenticity of a piece of information (supporting or refuting the claim) based on secondary sources of information. However, existing approaches do not consider the problem of updating models as training data grows continually through user feedback. It is therefore important to gather user feedback to correct models' inference biases and improve the models in a life-long learning manner. In this paper, we present FaxPlainAC, a tool that gathers user feedback on the output of explainable fact-checking models. FaxPlainAC outputs both the model decision, i.e., whether the input fact is true or not, and the supporting/refuting evidence considered by the model. Additionally, FaxPlainAC accepts user feedback on both the prediction and the explanation. Developed in Python, FaxPlainAC is designed as a modular and easily deployable tool. It can be integrated with other downstream tasks, allowing for fact-checking human annotation gathering and life-long learning.
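The feedback-gathering workflow the abstract describes can be sketched as a minimal data record: the model's verdict and its evidence are shown to the user, and corrections on both are stored for later life-long learning. The names below (`FeedbackRecord`, `Evidence`, `corrected_label`) are hypothetical illustrations, not FaxPlainAC's actual API.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Evidence:
    """One piece of supporting/refuting evidence shown to the user."""
    text: str
    user_accepts: bool = True  # feedback: is this evidence actually relevant?

@dataclass
class FeedbackRecord:
    """Pairs a model prediction with human corrections on it."""
    claim: str
    model_verdict: bool                               # model decision: claim true or false
    evidence: List[Evidence] = field(default_factory=list)
    user_verdict: Optional[bool] = None               # user's corrected label, if given

    def corrected_label(self) -> bool:
        # Prefer the human correction over the model output when present;
        # records like this could later be used as new training data.
        return self.model_verdict if self.user_verdict is None else self.user_verdict

# Example: the model judges the claim true, the user disagrees and
# also rejects one piece of evidence; both signals are retained.
rec = FeedbackRecord(
    claim="Example claim",
    model_verdict=True,
    evidence=[Evidence("retrieved supporting sentence", user_accepts=False)],
    user_verdict=False,
)
print(rec.corrected_label())  # False
```

The key design point the paper emphasizes is that feedback is collected on the explanation (the evidence) as well as the prediction, so a record like this carries more supervision signal than a corrected label alone.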

Keywords

    data gathering, fact-checking, human-in-the-loop machine learning, interpretable machine learning

Cite this

FaxPlainAC: A Fact-Checking Tool Based on EXPLAINable Models with HumAn Correction in the Loop. / Zhang, Zijian; Rudra, Koustav; Anand, Avishek.
CIKM '21: Proceedings of the 30th ACM International Conference on Information & Knowledge Management. Association for Computing Machinery (ACM), 2021. p. 4823-4827 (International Conference on Information and Knowledge Management, Proceedings).


Zhang, Z, Rudra, K & Anand, A 2021, FaxPlainAC: A Fact-Checking Tool Based on EXPLAINable Models with HumAn Correction in the Loop. in CIKM '21: Proceedings of the 30th ACM International Conference on Information & Knowledge Management. International Conference on Information and Knowledge Management, Proceedings, Association for Computing Machinery (ACM), pp. 4823-4827, 30th ACM International Conference on Information and Knowledge Management, CIKM 2021, Virtual, Online, Australia, 1 Nov 2021. https://doi.org/10.48550/arXiv.2110.10144, https://doi.org/10.1145/3459637.3481985
Zhang, Z., Rudra, K., & Anand, A. (2021). FaxPlainAC: A Fact-Checking Tool Based on EXPLAINable Models with HumAn Correction in the Loop. In CIKM '21: Proceedings of the 30th ACM International Conference on Information & Knowledge Management (pp. 4823-4827). (International Conference on Information and Knowledge Management, Proceedings). Association for Computing Machinery (ACM). https://doi.org/10.48550/arXiv.2110.10144, https://doi.org/10.1145/3459637.3481985
Zhang Z, Rudra K, Anand A. FaxPlainAC: A Fact-Checking Tool Based on EXPLAINable Models with HumAn Correction in the Loop. In CIKM '21: Proceedings of the 30th ACM International Conference on Information & Knowledge Management. Association for Computing Machinery (ACM). 2021. p. 4823-4827. (International Conference on Information and Knowledge Management, Proceedings). doi: 10.48550/arXiv.2110.10144, 10.1145/3459637.3481985
Zhang, Zijian ; Rudra, Koustav ; Anand, Avishek. / FaxPlainAC : A Fact-Checking Tool Based on EXPLAINable Models with HumAn Correction in the Loop. CIKM '21: Proceedings of the 30th ACM International Conference on Information & Knowledge Management. Association for Computing Machinery (ACM), 2021. pp. 4823-4827 (International Conference on Information and Knowledge Management, Proceedings).
@inproceedings{6b9b21f229aa43ee94b0ff0e23856625,
title = "FaxPlainAC: A Fact-Checking Tool Based on EXPLAINable Models with HumAn Correction in the Loop",
abstract = "Fact-checking on the Web has become the main mechanism through which we assess the credibility of news and information. Existing fact-checkers verify the authenticity of a piece of information (supporting or refuting the claim) based on secondary sources of information. However, existing approaches do not consider the problem of updating models as training data grows continually through user feedback. It is therefore important to gather user feedback to correct models' inference biases and improve the models in a life-long learning manner. In this paper, we present FaxPlainAC, a tool that gathers user feedback on the output of explainable fact-checking models. FaxPlainAC outputs both the model decision, i.e., whether the input fact is true or not, and the supporting/refuting evidence considered by the model. Additionally, FaxPlainAC accepts user feedback on both the prediction and the explanation. Developed in Python, FaxPlainAC is designed as a modular and easily deployable tool. It can be integrated with other downstream tasks, allowing for fact-checking human annotation gathering and life-long learning.",
keywords = "data gathering, fact-checking, human-in-the-loop machine learning, interpretable machine learning",
author = "Zijian Zhang and Koustav Rudra and Avishek Anand",
note = "Funding Information: Acknowledgement: Funding for this project was in part provided by the European Union{\textquoteright}s Horizon 2020 research and innovation program under grant agreement No 832921 and No 871042. ; 30th ACM International Conference on Information and Knowledge Management, CIKM 2021 ; Conference date: 01-11-2021 Through 05-11-2021",
year = "2021",
month = oct,
day = "30",
doi = "10.48550/arXiv.2110.10144",
language = "English",
series = "International Conference on Information and Knowledge Management, Proceedings",
publisher = "Association for Computing Machinery (ACM)",
pages = "4823--4827",
booktitle = "CIKM '21",
address = "United States",

}


TY - GEN

T1 - FaxPlainAC

T2 - 30th ACM International Conference on Information and Knowledge Management, CIKM 2021

AU - Zhang, Zijian

AU - Rudra, Koustav

AU - Anand, Avishek

N1 - Funding Information: Acknowledgement: Funding for this project was in part provided by the European Union’s Horizon 2020 research and innovation program under grant agreement No 832921 and No 871042.

PY - 2021/10/30

Y1 - 2021/10/30

N2 - Fact-checking on the Web has become the main mechanism through which we assess the credibility of news and information. Existing fact-checkers verify the authenticity of a piece of information (supporting or refuting the claim) based on secondary sources of information. However, existing approaches do not consider the problem of updating models as training data grows continually through user feedback. It is therefore important to gather user feedback to correct models' inference biases and improve the models in a life-long learning manner. In this paper, we present FaxPlainAC, a tool that gathers user feedback on the output of explainable fact-checking models. FaxPlainAC outputs both the model decision, i.e., whether the input fact is true or not, and the supporting/refuting evidence considered by the model. Additionally, FaxPlainAC accepts user feedback on both the prediction and the explanation. Developed in Python, FaxPlainAC is designed as a modular and easily deployable tool. It can be integrated with other downstream tasks, allowing for fact-checking human annotation gathering and life-long learning.

AB - Fact-checking on the Web has become the main mechanism through which we assess the credibility of news and information. Existing fact-checkers verify the authenticity of a piece of information (supporting or refuting the claim) based on secondary sources of information. However, existing approaches do not consider the problem of updating models as training data grows continually through user feedback. It is therefore important to gather user feedback to correct models' inference biases and improve the models in a life-long learning manner. In this paper, we present FaxPlainAC, a tool that gathers user feedback on the output of explainable fact-checking models. FaxPlainAC outputs both the model decision, i.e., whether the input fact is true or not, and the supporting/refuting evidence considered by the model. Additionally, FaxPlainAC accepts user feedback on both the prediction and the explanation. Developed in Python, FaxPlainAC is designed as a modular and easily deployable tool. It can be integrated with other downstream tasks, allowing for fact-checking human annotation gathering and life-long learning.

KW - data gathering

KW - fact-checking

KW - human-in-the-loop machine learning

KW - interpretable machine learning

UR - http://www.scopus.com/inward/record.url?scp=85119204381&partnerID=8YFLogxK

U2 - 10.48550/arXiv.2110.10144

DO - 10.48550/arXiv.2110.10144

M3 - Conference contribution

AN - SCOPUS:85119204381

T3 - International Conference on Information and Knowledge Management, Proceedings

SP - 4823

EP - 4827

BT - CIKM '21

PB - Association for Computing Machinery (ACM)

Y2 - 1 November 2021 through 5 November 2021

ER -