SparCAssist: A Model Risk Assessment Assistant Based on Sparse Generated Counterfactuals

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review

Authors

  • Zijian Zhang
  • Vinay Setty
  • Avishek Anand

External Research Organisations

  • University of Stavanger
  • Delft University of Technology

Details

Original language: English
Title of host publication: SIGIR 2022
Subtitle of host publication: Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval
Pages: 3219-3223
Number of pages: 5
ISBN (electronic): 9781450387323
Publication status: Published - 7 Jul 2022
Event: 45th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2022 - Madrid, Spain
Duration: 11 Jul 2022 - 15 Jul 2022

Abstract

We introduce SparCAssist, a general-purpose risk assessment tool for the machine learning models trained for language tasks. It evaluates models' risk by inspecting their behavior on counterfactuals, namely out-of-distribution instances generated based on the given data instance. The counterfactuals are generated by replacing tokens in rational subsequences identified by ExPred, while the replacements are retrieved using HotFlip or the Masked-Language-Model-based algorithms. The main purpose of our system is to help the human annotators to assess the model's risk on deployment. The counterfactual instances generated during the assessment are the by-product and can be used to train more robust NLP models in the future.
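The core idea in the abstract — generate counterfactuals by swapping tokens only inside the rationale subsequence — can be illustrated with a minimal sketch. This is not the authors' implementation: the rationale span is given directly (standing in for ExPred's output), and the candidate replacements come from a fixed list (standing in for HotFlip or masked-LM retrieval).

```python
# Minimal sketch of rationale-constrained counterfactual generation.
# Assumptions (not from the paper's code): the rationale is a (start, end)
# token-index span, and `candidates` is a precomputed replacement list.

def generate_counterfactuals(tokens, rationale_span, candidates):
    """Yield variants of `tokens` where exactly one token inside the
    rationale span is swapped for a candidate replacement."""
    start, end = rationale_span
    for i in range(start, end):
        for cand in candidates:
            if cand != tokens[i]:  # skip no-op replacements
                variant = list(tokens)
                variant[i] = cand
                yield variant

tokens = ["the", "movie", "was", "great", "overall"]
# rationale span covers the sentiment-bearing token "great"
cfs = list(generate_counterfactuals(tokens, (3, 4), ["terrible", "great"]))
```

Each generated instance differs from the input in a single rationale token, which keeps the edit sparse; the assessed model's prediction on each variant can then be compared against the original prediction.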

Keywords

    counterfactual interpretation, data-annotation tools, human-in-the-loop machine learning, interpretable machine learning

Cite this

SparCAssist: A Model Risk Assessment Assistant Based on Sparse Generated Counterfactuals. / Zhang, Zijian; Setty, Vinay; Anand, Avishek.
SIGIR 2022 : Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval. 2022. p. 3219-3223.


Zhang, Z, Setty, V & Anand, A 2022, SparCAssist: A Model Risk Assessment Assistant Based on Sparse Generated Counterfactuals. in SIGIR 2022 : Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval. pp. 3219-3223, 45th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2022, Madrid, Spain, 11 Jul 2022. https://doi.org/10.48550/arXiv.2205.01588, https://doi.org/10.1145/3477495.3531677
Zhang, Z., Setty, V., & Anand, A. (2022). SparCAssist: A Model Risk Assessment Assistant Based on Sparse Generated Counterfactuals. In SIGIR 2022 : Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 3219-3223) https://doi.org/10.48550/arXiv.2205.01588, https://doi.org/10.1145/3477495.3531677
Zhang Z, Setty V, Anand A. SparCAssist: A Model Risk Assessment Assistant Based on Sparse Generated Counterfactuals. In SIGIR 2022 : Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval. 2022. p. 3219-3223 doi: 10.48550/arXiv.2205.01588, 10.1145/3477495.3531677
Zhang, Zijian ; Setty, Vinay ; Anand, Avishek. / SparCAssist : A Model Risk Assessment Assistant Based on Sparse Generated Counterfactuals. SIGIR 2022 : Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval. 2022. pp. 3219-3223
@inproceedings{c4365e9f870b46f184a001972087d438,
title = "SparCAssist: A Model Risk Assessment Assistant Based on Sparse Generated Counterfactuals",
abstract = "We introduce SparCAssist, a general-purpose risk assessment tool for the machine learning models trained for language tasks. It evaluates models' risk by inspecting their behavior on counterfactuals, namely out-of-distribution instances generated based on the given data instance. The counterfactuals are generated by replacing tokens in rational subsequences identified by ExPred, while the replacements are retrieved using HotFlip or the Masked-Language-Model-based algorithms. The main purpose of our system is to help the human annotators to assess the model's risk on deployment. The counterfactual instances generated during the assessment are the by-product and can be used to train more robust NLP models in the future.",
keywords = "counterfactual interpretation, data-annotation tools, human-in-the-loop machine learning, interpretable machine learning",
author = "Zijian Zhang and Vinay Setty and Avishek Anand",
note = "Funding Information: This work is partially funded by project MIRROR under grant agreement No. 832921 (project MIRROR from the European Commission: Migration-Related Risks caused by misconceptions of Opportunities and Requirement) and project ROXANNE, the European Union{\textquoteright}s Horizon 2020 research and innovation program under grant agreement No. 833635.; 45th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2022 ; Conference date: 11-07-2022 Through 15-07-2022",
year = "2022",
month = jul,
day = "7",
doi = "10.48550/arXiv.2205.01588",
language = "English",
pages = "3219--3223",
booktitle = "SIGIR 2022",

}


TY - GEN

T1 - SparCAssist

T2 - 45th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2022

AU - Zhang, Zijian

AU - Setty, Vinay

AU - Anand, Avishek

N1 - Funding Information: This work is partially funded by project MIRROR under grant agreement No. 832921 (project MIRROR from the European Commission: Migration-Related Risks caused by misconceptions of Opportunities and Requirement) and project ROXANNE, the European Union’s Horizon 2020 research and innovation program under grant agreement No. 833635.

PY - 2022/7/7

Y1 - 2022/7/7

N2 - We introduce SparCAssist, a general-purpose risk assessment tool for the machine learning models trained for language tasks. It evaluates models' risk by inspecting their behavior on counterfactuals, namely out-of-distribution instances generated based on the given data instance. The counterfactuals are generated by replacing tokens in rational subsequences identified by ExPred, while the replacements are retrieved using HotFlip or the Masked-Language-Model-based algorithms. The main purpose of our system is to help the human annotators to assess the model's risk on deployment. The counterfactual instances generated during the assessment are the by-product and can be used to train more robust NLP models in the future.

AB - We introduce SparCAssist, a general-purpose risk assessment tool for the machine learning models trained for language tasks. It evaluates models' risk by inspecting their behavior on counterfactuals, namely out-of-distribution instances generated based on the given data instance. The counterfactuals are generated by replacing tokens in rational subsequences identified by ExPred, while the replacements are retrieved using HotFlip or the Masked-Language-Model-based algorithms. The main purpose of our system is to help the human annotators to assess the model's risk on deployment. The counterfactual instances generated during the assessment are the by-product and can be used to train more robust NLP models in the future.

KW - counterfactual interpretation

KW - data-annotation tools

KW - human-in-the-loop machine learning

KW - interpretable machine learning

UR - http://www.scopus.com/inward/record.url?scp=85135007122&partnerID=8YFLogxK

U2 - 10.48550/arXiv.2205.01588

DO - 10.48550/arXiv.2205.01588

M3 - Conference contribution

AN - SCOPUS:85135007122

SP - 3219

EP - 3223

BT - SIGIR 2022

Y2 - 11 July 2022 through 15 July 2022

ER -