Question Generation Capabilities of “Small” Large Language Models

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review

Authors

  • Joshua Berger
  • Jonathan Koß
  • Markos Stamatakis
  • Anett Hoppe
  • Ralph Ewerth
  • Christian Wartena

Research Organisations

External Research Organisations

  • University of Applied Sciences and Arts Hannover (HsH)
  • German National Library of Science and Technology (TIB)

Details

Original language: English
Title of host publication: Natural Language Processing and Information Systems
Subtitle of host publication: 29th International Conference on Applications of Natural Language to Information Systems, NLDB 2024, Proceedings
Editors: Amon Rapp, Luigi Di Caro, Farid Meziane, Vijayan Sugumaran
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 183-194
Number of pages: 12
ISBN (electronic): 978-3-031-70242-6
ISBN (print): 978-3-031-70241-9
Publication status: Published - 20 Sept 2024
Event: 29th International Conference on Natural Language and Information Systems, NLDB 2024 - Turin, Italy
Duration: 25 Jun 2024 – 27 Jun 2024

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 14763 LNCS
ISSN (print): 0302-9743
ISSN (electronic): 1611-3349

Abstract

Questions are an integral part of test formats in education. Online learning platforms such as Coursera and Udemy also use questions to check learners’ understanding. However, the manual creation of questions can be very time-intensive. This problem can be mitigated through automatic question generation. In this paper, we present a comparison of fine-tuned text-generating transformers for question generation. Our methods include (i) a comparison of multiple fine-tuned transformers to identify differences in the generated output, (ii) a comparison of multiple token search strategies, evaluated on each model, to find differences in the generated questions across strategies, and (iii) a newly developed manual evaluation metric that assesses generated questions with regard to naturalness and suitability. Our experiments show differences in question length, structure and quality depending on the transformer architecture used, which indicates a correlation between transformer architecture and question structure. Furthermore, different search strategies for the same model architecture do not greatly impact structure or quality.

Keywords

    Automatic Question Generation, Pre-trained Transformer, Transformer Architecture


Cite this

Question Generation Capabilities of “Small" Large Language Models. / Berger, Joshua; Koß, Jonathan; Stamatakis, Markos et al.
Natural Language Processing and Information Systems : 29th International Conference on Applications of Natural Language to Information Systems, NLDB 2024, Proceedings. ed. / Amon Rapp; Luigi Di Caro; Farid Meziane; Vijayan Sugumaran. Springer Science and Business Media Deutschland GmbH, 2024. p. 183-194 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 14763 LNCS).


Berger, J, Koß, J, Stamatakis, M, Hoppe, A, Ewerth, R & Wartena, C 2024, Question Generation Capabilities of “Small" Large Language Models. in A Rapp, L Di Caro, F Meziane & V Sugumaran (eds), Natural Language Processing and Information Systems : 29th International Conference on Applications of Natural Language to Information Systems, NLDB 2024, Proceedings. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 14763 LNCS, Springer Science and Business Media Deutschland GmbH, pp. 183-194, 29th International Conference on Natural Language and Information Systems, NLDB 2024, Turin, Italy, 25 Jun 2024. https://doi.org/10.1007/978-3-031-70242-6_18
Berger, J., Koß, J., Stamatakis, M., Hoppe, A., Ewerth, R., & Wartena, C. (2024). Question Generation Capabilities of “Small" Large Language Models. In A. Rapp, L. Di Caro, F. Meziane, & V. Sugumaran (Eds.), Natural Language Processing and Information Systems : 29th International Conference on Applications of Natural Language to Information Systems, NLDB 2024, Proceedings (pp. 183-194). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 14763 LNCS). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-70242-6_18
Berger J, Koß J, Stamatakis M, Hoppe A, Ewerth R, Wartena C. Question Generation Capabilities of “Small" Large Language Models. In Rapp A, Di Caro L, Meziane F, Sugumaran V, editors, Natural Language Processing and Information Systems : 29th International Conference on Applications of Natural Language to Information Systems, NLDB 2024, Proceedings. Springer Science and Business Media Deutschland GmbH. 2024. p. 183-194. (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)). doi: 10.1007/978-3-031-70242-6_18
Berger, Joshua ; Koß, Jonathan ; Stamatakis, Markos et al. / Question Generation Capabilities of “Small" Large Language Models. Natural Language Processing and Information Systems : 29th International Conference on Applications of Natural Language to Information Systems, NLDB 2024, Proceedings. editor / Amon Rapp ; Luigi Di Caro ; Farid Meziane ; Vijayan Sugumaran. Springer Science and Business Media Deutschland GmbH, 2024. pp. 183-194 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)).
BibTeX
@inproceedings{2c41e6fa34954e20afabdb8a335b3d0d,
title = "Question Generation Capabilities of “Small{"} Large Language Models",
abstract = "Questions are an integral part of test formats in education. Online learning platforms such as Coursera and Udemy also use questions to check learners{\textquoteright} understanding. However, the manual creation of questions can be very time-intensive. This problem can be mitigated through automatic question generation. In this paper, we present a comparison of fine-tuned text-generating transformers for question generation. Our methods include (i) a comparison of multiple fine-tuned transformers to identify differences in the generated output, (ii) a comparison of multiple token search strategies, evaluated on each model, to find differences in the generated questions across strategies, and (iii) a newly developed manual evaluation metric that assesses generated questions with regard to naturalness and suitability. Our experiments show differences in question length, structure and quality depending on the transformer architecture used, which indicates a correlation between transformer architecture and question structure. Furthermore, different search strategies for the same model architecture do not greatly impact structure or quality.",
keywords = "Automatic Question Generation, Pre-trained Transformer, Transformer Architecture",
author = "Joshua Berger and Jonathan Ko{\ss} and Markos Stamatakis and Anett Hoppe and Ralph Ewerth and Christian Wartena",
note = "Publisher Copyright: {\textcopyright} The Author(s), under exclusive license to Springer Nature Switzerland AG 2024.; 29th International Conference on Natural Language and Information Systems, NLDB 2024 ; Conference date: 25-06-2024 Through 27-06-2024",
year = "2024",
month = sep,
day = "20",
doi = "10.1007/978-3-031-70242-6_18",
language = "English",
isbn = "9783031702419",
series = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
publisher = "Springer Science and Business Media Deutschland GmbH",
pages = "183--194",
editor = "Amon Rapp and {Di Caro}, Luigi and Farid Meziane and Vijayan Sugumaran",
booktitle = "Natural Language Processing and Information Systems",
address = "Germany",

}

RIS

TY - GEN

T1 - Question Generation Capabilities of “Small" Large Language Models

AU - Berger, Joshua

AU - Koß, Jonathan

AU - Stamatakis, Markos

AU - Hoppe, Anett

AU - Ewerth, Ralph

AU - Wartena, Christian

N1 - Publisher Copyright: © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024.

PY - 2024/9/20

Y1 - 2024/9/20

N2 - Questions are an integral part of test formats in education. Online learning platforms such as Coursera and Udemy also use questions to check learners’ understanding. However, the manual creation of questions can be very time-intensive. This problem can be mitigated through automatic question generation. In this paper, we present a comparison of fine-tuned text-generating transformers for question generation. Our methods include (i) a comparison of multiple fine-tuned transformers to identify differences in the generated output, (ii) a comparison of multiple token search strategies, evaluated on each model, to find differences in the generated questions across strategies, and (iii) a newly developed manual evaluation metric that assesses generated questions with regard to naturalness and suitability. Our experiments show differences in question length, structure and quality depending on the transformer architecture used, which indicates a correlation between transformer architecture and question structure. Furthermore, different search strategies for the same model architecture do not greatly impact structure or quality.

AB - Questions are an integral part of test formats in education. Online learning platforms such as Coursera and Udemy also use questions to check learners’ understanding. However, the manual creation of questions can be very time-intensive. This problem can be mitigated through automatic question generation. In this paper, we present a comparison of fine-tuned text-generating transformers for question generation. Our methods include (i) a comparison of multiple fine-tuned transformers to identify differences in the generated output, (ii) a comparison of multiple token search strategies, evaluated on each model, to find differences in the generated questions across strategies, and (iii) a newly developed manual evaluation metric that assesses generated questions with regard to naturalness and suitability. Our experiments show differences in question length, structure and quality depending on the transformer architecture used, which indicates a correlation between transformer architecture and question structure. Furthermore, different search strategies for the same model architecture do not greatly impact structure or quality.

KW - Automatic Question Generation

KW - Pre-trained Transformer

KW - Transformer Architecture

UR - http://www.scopus.com/inward/record.url?scp=85205514065&partnerID=8YFLogxK

U2 - 10.1007/978-3-031-70242-6_18

DO - 10.1007/978-3-031-70242-6_18

M3 - Conference contribution

AN - SCOPUS:85205514065

SN - 9783031702419

T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

SP - 183

EP - 194

BT - Natural Language Processing and Information Systems

A2 - Rapp, Amon

A2 - Di Caro, Luigi

A2 - Meziane, Farid

A2 - Sugumaran, Vijayan

PB - Springer Science and Business Media Deutschland GmbH

T2 - 29th International Conference on Natural Language and Information Systems, NLDB 2024

Y2 - 25 June 2024 through 27 June 2024

ER -