What Works Better? A Study of Classifying Requirements

Publication: Contribution to book/report/anthology/conference proceedings › Conference paper › Research › Peer-reviewed

Authors

  • Zahra Shakeri Hossein Abad
  • Oliver Karras
  • Parisa Ghazi
  • Martin Glinz
  • Guenther Ruhe
  • Kurt Schneider


Details

Original language: English
Title of host publication: Proceedings - 2017 IEEE 25th International Requirements Engineering Conference, RE 2017
Pages: 496-501
Number of pages: 6
ISBN (electronic): 9781538631911
Publication status: Published - 22 Sep 2017

Abstract

Classifying requirements into functional requirements (FR) and non-functional ones (NFR) is an important task in requirements engineering. However, automated classification of requirements written in natural language is not straightforward, due to the variability of natural language and the absence of a controlled vocabulary. This paper investigates how automated classification of requirements into FR and NFR can be improved and how well several machine learning approaches work in this context. We contribute an approach for preprocessing requirements that standardizes and normalizes requirements before applying classification algorithms. Further, we report on how well several existing machine learning methods perform for automated classification of NFRs into sub-categories such as usability, availability, or performance. Our study is performed on 625 requirements provided by the OpenScience tera-PROMISE repository. We found that our preprocessing improved the performance of an existing classification method. We further found significant differences in the performance of approaches such as Latent Dirichlet Allocation, Biterm Topic Modeling, or Naïve Bayes for the sub-classification of NFRs.
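To illustrate the kind of classification the abstract describes, the sketch below trains a minimal multinomial Naive Bayes classifier (one of the methods the study compares) to separate FR from NFR statements. This is a hedged, self-contained toy: the four training sentences are invented in the style of PROMISE-repository requirements, tokenization is plain whitespace splitting, and the preprocessing, features, and data are not the authors' actual pipeline.

```python
# Toy Naive Bayes FR/NFR classifier (illustrative only; not the paper's
# implementation). Whitespace tokenization, Laplace (add-one) smoothing.
import math
from collections import Counter, defaultdict

def train(samples):
    """samples: list of (text, label). Returns (priors, word_counts, vocab)."""
    label_counts = Counter(label for _, label in samples)
    word_counts = defaultdict(Counter)  # label -> token frequencies
    vocab = set()
    for text, label in samples:
        for tok in text.lower().split():
            word_counts[label][tok] += 1
            vocab.add(tok)
    priors = {lab: n / len(samples) for lab, n in label_counts.items()}
    return priors, word_counts, vocab

def classify(text, priors, word_counts, vocab):
    """Pick the label maximizing log P(label) + sum log P(token | label)."""
    best, best_lp = None, float("-inf")
    for label, prior in priors.items():
        total = sum(word_counts[label].values())
        lp = math.log(prior)
        for tok in text.lower().split():
            # Add-one smoothing over the shared vocabulary avoids zero
            # probabilities for unseen tokens.
            lp += math.log((word_counts[label][tok] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Hypothetical toy requirements, loosely in the style of PROMISE entries.
training = [
    ("the system shall allow users to create accounts", "FR"),
    ("the system shall generate monthly reports", "FR"),
    ("the system shall respond within two seconds", "NFR"),
    ("the interface shall be easy to use for novice users", "NFR"),
]
model = train(training)
print(classify("the system shall export reports", *model))  # → FR
```

The same scheme extends to the NFR sub-categories (usability, availability, performance, etc.) by training on sub-category labels instead of the binary FR/NFR split; the paper's point is that preprocessing quality and the choice among Naive Bayes, LDA, and Biterm Topic Modeling materially affect the result.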


Cite

What Works Better? A Study of Classifying Requirements. / Abad, Zahra Shakeri Hossein; Karras, Oliver; Ghazi, Parisa et al.
Proceedings - 2017 IEEE 25th International Requirements Engineering Conference, RE 2017. 2017. pp. 496-501 8049172.


Abad, ZSH, Karras, O, Ghazi, P, Glinz, M, Ruhe, G & Schneider, K 2017, What Works Better? A Study of Classifying Requirements. in Proceedings - 2017 IEEE 25th International Requirements Engineering Conference, RE 2017., 8049172, pp. 496-501. https://doi.org/10.48550/arXiv.1707.02358, https://doi.org/10.1109/RE.2017.36
Abad, Z. S. H., Karras, O., Ghazi, P., Glinz, M., Ruhe, G., & Schneider, K. (2017). What Works Better? A Study of Classifying Requirements. In Proceedings - 2017 IEEE 25th International Requirements Engineering Conference, RE 2017 (pp. 496-501). Article 8049172 https://doi.org/10.48550/arXiv.1707.02358, https://doi.org/10.1109/RE.2017.36
Abad ZSH, Karras O, Ghazi P, Glinz M, Ruhe G, Schneider K. What Works Better? A Study of Classifying Requirements. In Proceedings - 2017 IEEE 25th International Requirements Engineering Conference, RE 2017. 2017. pp. 496-501. 8049172 doi: 10.48550/arXiv.1707.02358, 10.1109/RE.2017.36
Abad, Zahra Shakeri Hossein ; Karras, Oliver ; Ghazi, Parisa et al. / What Works Better? A Study of Classifying Requirements. Proceedings - 2017 IEEE 25th International Requirements Engineering Conference, RE 2017. 2017. pp. 496-501
BibTeX
@inproceedings{ed2c46ddf6a84df2baecc7e88555d720,
title = "What Works Better? A Study of Classifying Requirements",
abstract = "Classifying requirements into functional requirements (FR) and non-functional ones (NFR) is an important task in requirements engineering. However, automated classification of requirements written in natural language is not straightforward, due to the variability of natural language and the absence of a controlled vocabulary. This paper investigates how automated classification of requirements into FR and NFR can be improved and how well several machine learning approaches work in this context. We contribute an approach for preprocessing requirements that standardizes and normalizes requirements before applying classification algorithms. Further, we report on how well several existing machine learning methods perform for automated classification of NFRs into sub-categories such as usability, availability, or performance. Our study is performed on 625 requirements provided by the OpenScience tera-PROMISE repository. We found that our preprocessing improved the performance of an existing classification method. We further found significant differences in the performance of approaches such as Latent Dirichlet Allocation, Biterm Topic Modeling, or Na{\"i}ve Bayes for the sub-classification of NFRs.",
keywords = "Classification, Clustering, Functional and Non-Functional Requirements, Naive Bayes, Topic Modeling",
author = "Abad, {Zahra Shakeri Hossein} and Oliver Karras and Parisa Ghazi and Martin Glinz and Guenther Ruhe and Kurt Schneider",
note = "Publisher Copyright: {\textcopyright} 2017 IEEE. Copyright: Copyright 2017 Elsevier B.V., All rights reserved.",
year = "2017",
month = sep,
day = "22",
doi = "10.48550/arXiv.1707.02358",
language = "English",
pages = "496--501",
booktitle = "Proceedings - 2017 IEEE 25th International Requirements Engineering Conference, RE 2017",

}

RIS

TY - GEN

T1 - What Works Better? A Study of Classifying Requirements

AU - Abad, Zahra Shakeri Hossein

AU - Karras, Oliver

AU - Ghazi, Parisa

AU - Glinz, Martin

AU - Ruhe, Guenther

AU - Schneider, Kurt

N1 - Publisher Copyright: © 2017 IEEE. Copyright: Copyright 2017 Elsevier B.V., All rights reserved.

PY - 2017/9/22

Y1 - 2017/9/22

N2 - Classifying requirements into functional requirements (FR) and non-functional ones (NFR) is an important task in requirements engineering. However, automated classification of requirements written in natural language is not straightforward, due to the variability of natural language and the absence of a controlled vocabulary. This paper investigates how automated classification of requirements into FR and NFR can be improved and how well several machine learning approaches work in this context. We contribute an approach for preprocessing requirements that standardizes and normalizes requirements before applying classification algorithms. Further, we report on how well several existing machine learning methods perform for automated classification of NFRs into sub-categories such as usability, availability, or performance. Our study is performed on 625 requirements provided by the OpenScience tera-PROMISE repository. We found that our preprocessing improved the performance of an existing classification method. We further found significant differences in the performance of approaches such as Latent Dirichlet Allocation, Biterm Topic Modeling, or Naïve Bayes for the sub-classification of NFRs.

AB - Classifying requirements into functional requirements (FR) and non-functional ones (NFR) is an important task in requirements engineering. However, automated classification of requirements written in natural language is not straightforward, due to the variability of natural language and the absence of a controlled vocabulary. This paper investigates how automated classification of requirements into FR and NFR can be improved and how well several machine learning approaches work in this context. We contribute an approach for preprocessing requirements that standardizes and normalizes requirements before applying classification algorithms. Further, we report on how well several existing machine learning methods perform for automated classification of NFRs into sub-categories such as usability, availability, or performance. Our study is performed on 625 requirements provided by the OpenScience tera-PROMISE repository. We found that our preprocessing improved the performance of an existing classification method. We further found significant differences in the performance of approaches such as Latent Dirichlet Allocation, Biterm Topic Modeling, or Naïve Bayes for the sub-classification of NFRs.

KW - Classification

KW - Clustering

KW - Functional and Non-Functional Requirements

KW - Naive Bayes

KW - Topic Modeling

UR - http://www.scopus.com/inward/record.url?scp=85032825139&partnerID=8YFLogxK

U2 - 10.48550/arXiv.1707.02358

DO - 10.48550/arXiv.1707.02358

M3 - Conference contribution

SP - 496

EP - 501

BT - Proceedings - 2017 IEEE 25th International Requirements Engineering Conference, RE 2017

ER -
