Details
Original language | English |
---|---|
Title of host publication | Proceedings - 2017 IEEE 25th International Requirements Engineering Conference, RE 2017 |
Pages | 496-501 |
Number of pages | 6 |
ISBN (electronic) | 9781538631911 |
Publication status | Published - 22 Sept 2017 |
Abstract
Classifying requirements into functional requirements (FR) and non-functional ones (NFR) is an important task in requirements engineering. However, automated classification of requirements written in natural language is not straightforward, due to the variability of natural language and the absence of a controlled vocabulary. This paper investigates how automated classification of requirements into FR and NFR can be improved and how well several machine learning approaches work in this context. We contribute an approach for preprocessing requirements that standardizes and normalizes requirements before applying classification algorithms. Further, we report on how well several existing machine learning methods perform for automated classification of NFRs into sub-categories such as usability, availability, or performance. Our study is performed on 625 requirements provided by the OpenScience tera-PROMISE repository. We found that our preprocessing improved the performance of an existing classification method. We further found significant differences in the performance of approaches such as Latent Dirichlet Allocation, Biterm Topic Modeling, or Naïve Bayes for the sub-classification of NFRs.
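The abstract names Naïve Bayes among the compared classifiers. As an illustration only, here is a minimal from-scratch multinomial Naive Bayes sketch for the FR/NFR split, with a toy normalization step standing in for the paper's preprocessing. The training sentences are invented examples, not drawn from the tera-PROMISE repository, and this is not the authors' implementation.

```python
import math
import re
from collections import Counter, defaultdict

# Toy labeled requirements (hypothetical, NOT from tera-PROMISE).
TRAIN = [
    ("The system shall compute the monthly invoice total.", "FR"),
    ("The user shall be able to export reports as PDF.", "FR"),
    ("The system shall respond to queries within 2 seconds.", "NFR"),
    ("The interface shall be usable without prior training.", "NFR"),
]

def preprocess(text):
    """Stand-in normalization: lowercase and keep alphabetic tokens.
    The paper's actual standardization/normalization is more elaborate."""
    return re.findall(r"[a-z]+", text.lower())

# Count class priors and per-class word frequencies.
class_counts = Counter()
word_counts = defaultdict(Counter)
vocab = set()
for text, label in TRAIN:
    class_counts[label] += 1
    for tok in preprocess(text):
        word_counts[label][tok] += 1
        vocab.add(tok)

def classify(text):
    """Multinomial Naive Bayes with add-one (Laplace) smoothing."""
    total_docs = sum(class_counts.values())
    best_label, best_score = None, float("-inf")
    for label in class_counts:
        # log prior + sum of smoothed log likelihoods
        score = math.log(class_counts[label] / total_docs)
        denom = sum(word_counts[label].values()) + len(vocab)
        for tok in preprocess(text):
            score += math.log((word_counts[label][tok] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

print(classify("The system shall respond within 3 seconds."))  # → NFR
```

On this toy data the response-time requirement is assigned NFR because its tokens ("respond", "within", "seconds") occur only in the NFR training examples; the paper's study evaluates this family of methods, alongside topic models such as LDA and BTM, on the full 625-requirement dataset.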
Keywords
- Classification
- Clustering
- Functional and Non-Functional Requirements
- Naive Bayes
- Topic Modeling
ASJC Scopus subject areas
- Computer Science(all)
- Software
- Engineering(all)
- Safety, Risk, Reliability and Quality
- Business, Management and Accounting(all)
- Management of Technology and Innovation
Cite this
Proceedings - 2017 IEEE 25th International Requirements Engineering Conference, RE 2017. 2017. p. 496-501, Article 8049172.
Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review
TY - GEN
T1 - What Works Better? A Study of Classifying Requirements
AU - Abad, Zahra Shakeri Hossein
AU - Karras, Oliver
AU - Ghazi, Parisa
AU - Glinz, Martin
AU - Ruhe, Guenther
AU - Schneider, Kurt
N1 - Publisher Copyright: © 2017 IEEE. Copyright: Copyright 2017 Elsevier B.V., All rights reserved.
PY - 2017/9/22
Y1 - 2017/9/22
N2 - Classifying requirements into functional requirements (FR) and non-functional ones (NFR) is an important task in requirements engineering. However, automated classification of requirements written in natural language is not straightforward, due to the variability of natural language and the absence of a controlled vocabulary. This paper investigates how automated classification of requirements into FR and NFR can be improved and how well several machine learning approaches work in this context. We contribute an approach for preprocessing requirements that standardizes and normalizes requirements before applying classification algorithms. Further, we report on how well several existing machine learning methods perform for automated classification of NFRs into sub-categories such as usability, availability, or performance. Our study is performed on 625 requirements provided by the OpenScience tera-PROMISE repository. We found that our preprocessing improved the performance of an existing classification method. We further found significant differences in the performance of approaches such as Latent Dirichlet Allocation, Biterm Topic Modeling, or Naïve Bayes for the sub-classification of NFRs.
KW - Classification
KW - Clustering
KW - Functional and Non-Functional Requirements
KW - Naive Bayes
KW - Topic Modeling
UR - http://www.scopus.com/inward/record.url?scp=85032825139&partnerID=8YFLogxK
U2 - 10.48550/arXiv.1707.02358
DO - 10.48550/arXiv.1707.02358
M3 - Conference contribution
SP - 496
EP - 501
BT - Proceedings - 2017 IEEE 25th International Requirements Engineering Conference, RE 2017
ER -