Investigating Correlations of Automatically Extracted Multimodal Features and Lecture Video Quality

Publication: Contribution to book/report/anthology/conference proceedings › Conference paper › Research › Peer-reviewed

Authors

  • Jianwei Shi
  • Christian Otto
  • Ralph Ewerth
  • Anett Hoppe
  • Peter Holtz


External organisations

  • Technische Informationsbibliothek (TIB) Leibniz-Informationszentrum Technik und Naturwissenschaften und Universitätsbibliothek
  • Leibniz-Institut für Wissensmedien (IWM)

Details

Original language: English
Title of host publication: SALMM 2019 - Proceedings of the 1st International Workshop on Search as Learning with Multimedia Information, co-located with MM 2019
Subtitle: Proceedings of the 1st International Workshop on Search as Learning with Multimedia Information
Pages: 11-19
Number of pages: 9
Publication status: Published - Oct. 2020
Event: The 27th ACM International Conference on Multimedia - Nice, France
Duration: 21 Oct. 2019 - 25 Oct. 2019
Conference number: 27

Abstract

Ranking and recommendation of multimedia content such as videos is usually realized with respect to the relevance to a user query. However, for lecture videos and MOOCs (Massive Open Online Courses) it is not only required to retrieve relevant videos, but particularly to find lecture videos of high quality that facilitate learning, for instance, independent of the video's or speaker's popularity. Thus, metadata about a lecture video's quality are crucial features for learning contexts, e.g., lecture video recommendation in search as learning scenarios. In this paper, we investigate whether automatically extracted features are correlated with quality aspects of a video. A set of scholarly videos from a Massive Open Online Course (MOOC) is analyzed regarding audio, linguistic, and visual features. Furthermore, a set of cross-modal features is proposed, derived by combining transcripts, audio, video, and slide content. A user study is conducted to investigate the correlations between the automatically collected features and human ratings of quality aspects of a lecture video. Finally, the impact of our features on the knowledge gain of the participants is discussed.
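To make the kind of analysis the abstract describes more concrete, the minimal sketch below computes rank correlations (Spearman's rho) between a few automatically extracted lecture-video features and mean human quality ratings per video. The feature names, the rating scale, and the data layout are illustrative assumptions and not the authors' actual pipeline or results; the paper's feature set and statistics may differ.

# Minimal sketch (not the authors' code): correlate automatically extracted
# lecture-video features with mean human quality ratings per video.
# Feature names, values, and the 1-5 rating scale are hypothetical.
from scipy.stats import spearmanr

# One entry per video: hypothetical feature values and a mean quality rating
# aggregated from a user study.
videos = [
    {"speech_rate": 2.1, "words_per_slide": 38, "quality": 3.8},
    {"speech_rate": 2.9, "words_per_slide": 55, "quality": 3.1},
    {"speech_rate": 2.4, "words_per_slide": 30, "quality": 4.2},
    {"speech_rate": 3.2, "words_per_slide": 61, "quality": 2.9},
]

ratings = [v["quality"] for v in videos]
for feature in ("speech_rate", "words_per_slide"):
    values = [v[feature] for v in videos]
    # Rank correlation is robust to the scale of each feature.
    rho, p = spearmanr(values, ratings)
    print(f"{feature}: rho={rho:.2f}, p={p:.3f}")

The same pattern would extend to cross-modal features (e.g., alignment between transcript and slide text) by adding further per-video values and correlating them with the ratings.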


Cite

Investigating Correlations of Automatically Extracted Multimodal Features and Lecture Video Quality. / Shi, Jianwei; Otto, Christian; Ewerth, Ralph et al.
SALMM 2019 - Proceedings of the 1st International Workshop on Search as Learning with Multimedia Information, co-located with MM 2019: Proceedings of the 1st International Workshop on Search as Learning with Multimedia Information. 2020. pp. 11-19.


Shi, J, Otto, C, Ewerth, R, Hoppe, A & Holtz, P 2020, Investigating Correlations of Automatically Extracted Multimodal Features and Lecture Video Quality. in SALMM 2019 - Proceedings of the 1st International Workshop on Search as Learning with Multimedia Information, co-located with MM 2019: Proceedings of the 1st International Workshop on Search as Learning with Multimedia Information. pp. 11-19, The 27th ACM International Conference on Multimedia, Nice, France, 21 Oct. 2019. https://doi.org/10.48550/arXiv.2005.13876, https://doi.org/10.1145/3347451.3356731
Shi, J., Otto, C., Ewerth, R., Hoppe, A., & Holtz, P. (2020). Investigating Correlations of Automatically Extracted Multimodal Features and Lecture Video Quality. In SALMM 2019 - Proceedings of the 1st International Workshop on Search as Learning with Multimedia Information, co-located with MM 2019: Proceedings of the 1st International Workshop on Search as Learning with Multimedia Information (pp. 11-19). https://doi.org/10.48550/arXiv.2005.13876, https://doi.org/10.1145/3347451.3356731
Shi J, Otto C, Ewerth R, Hoppe A, Holtz P. Investigating Correlations of Automatically Extracted Multimodal Features and Lecture Video Quality. In SALMM 2019 - Proceedings of the 1st International Workshop on Search as Learning with Multimedia Information, co-located with MM 2019: Proceedings of the 1st International Workshop on Search as Learning with Multimedia Information. 2020. p. 11-19. Epub 2020 Oct 15. doi: 10.48550/arXiv.2005.13876, 10.1145/3347451.3356731
Shi, Jianwei ; Otto, Christian ; Ewerth, Ralph et al. / Investigating Correlations of Automatically Extracted Multimodal Features and Lecture Video Quality. SALMM 2019 - Proceedings of the 1st International Workshop on Search as Learning with Multimedia Information, co-located with MM 2019: Proceedings of the 1st International Workshop on Search as Learning with Multimedia Information. 2020. pp. 11-19
BibTeX
@inproceedings{e0fc63a340054b2297689b38a9f1831a,
title = "Investigating Correlations of Automatically Extracted Multimodal Features and Lecture Video Quality",
abstract = " Ranking and recommendation of multimedia content such as videos is usually realized with respect to the relevance to a user query. However, for lecture videos and MOOCs (Massive Open Online Courses) it is not only required to retrieve relevant videos, but particularly to find lecture videos of high quality that facilitate learning, for instance, independent of the video's or speaker's popularity. Thus, metadata about a lecture video's quality are crucial features for learning contexts, e.g., lecture video recommendation in search as learning scenarios. In this paper, we investigate whether automatically extracted features are correlated to quality aspects of a video. A set of scholarly videos from a Mass Open Online Course (MOOC) is analyzed regarding audio, linguistic, and visual features. Furthermore, a set of cross-modal features is proposed which are derived by combining transcripts, audio, video, and slide content. A user study is conducted to investigate the correlations between the automatically collected features and human ratings of quality aspects of a lecture video. Finally, the impact of our features on the knowledge gain of the participants is discussed. ",
keywords = "cs.MM, H.5.1, Correlation, Multimodal, Video assessment, Knowledge gain",
author = "Jianwei Shi and Christian Otto and Ralph Ewerth and Anett Hoppe and Peter Holtz",
note = "Funding Information: Part of this work is financially supported by the Leibniz Association, Germany (Leibniz Competition 2018, funding line {"}Collaborative Excellence{"}, project SALIENT [K68/2017]).; The 27th ACM International Conference on Multimedia ; Conference date: 21-10-2019 Through 25-10-2019",
year = "2020",
month = oct,
doi = "10.48550/arXiv.2005.13876",
language = "English",
isbn = "978-1-4503-6919-0",
pages = "11--19",
booktitle = "SALMM 2019 - Proceedings of the 1st International Workshop on Search as Learning with Multimedia Information, co-located with MM 2019",

}

RIS

TY - GEN

T1 - Investigating Correlations of Automatically Extracted Multimodal Features and Lecture Video Quality

AU - Shi, Jianwei

AU - Otto, Christian

AU - Ewerth, Ralph

AU - Hoppe, Anett

AU - Holtz, Peter

N1 - Conference code: 27

PY - 2020/10

Y1 - 2020/10

N2 - Ranking and recommendation of multimedia content such as videos is usually realized with respect to the relevance to a user query. However, for lecture videos and MOOCs (Massive Open Online Courses) it is not only required to retrieve relevant videos, but particularly to find lecture videos of high quality that facilitate learning, for instance, independent of the video's or speaker's popularity. Thus, metadata about a lecture video's quality are crucial features for learning contexts, e.g., lecture video recommendation in search as learning scenarios. In this paper, we investigate whether automatically extracted features are correlated to quality aspects of a video. A set of scholarly videos from a Mass Open Online Course (MOOC) is analyzed regarding audio, linguistic, and visual features. Furthermore, a set of cross-modal features is proposed which are derived by combining transcripts, audio, video, and slide content. A user study is conducted to investigate the correlations between the automatically collected features and human ratings of quality aspects of a lecture video. Finally, the impact of our features on the knowledge gain of the participants is discussed.

AB - Ranking and recommendation of multimedia content such as videos is usually realized with respect to the relevance to a user query. However, for lecture videos and MOOCs (Massive Open Online Courses) it is not only required to retrieve relevant videos, but particularly to find lecture videos of high quality that facilitate learning, for instance, independent of the video's or speaker's popularity. Thus, metadata about a lecture video's quality are crucial features for learning contexts, e.g., lecture video recommendation in search as learning scenarios. In this paper, we investigate whether automatically extracted features are correlated to quality aspects of a video. A set of scholarly videos from a Mass Open Online Course (MOOC) is analyzed regarding audio, linguistic, and visual features. Furthermore, a set of cross-modal features is proposed which are derived by combining transcripts, audio, video, and slide content. A user study is conducted to investigate the correlations between the automatically collected features and human ratings of quality aspects of a lecture video. Finally, the impact of our features on the knowledge gain of the participants is discussed.

KW - cs.MM

KW - H.5.1

KW - Correlation

KW - Multimodal

KW - Video assessment

KW - Knowledge gain

UR - http://www.scopus.com/inward/record.url?scp=85074761015&partnerID=8YFLogxK

U2 - 10.48550/arXiv.2005.13876

DO - 10.48550/arXiv.2005.13876

M3 - Conference contribution

SN - 978-1-4503-6919-0

SP - 11

EP - 19

BT - SALMM 2019 - Proceedings of the 1st International Workshop on Search as Learning with Multimedia Information, co-located with MM 2019

T2 - The 27th ACM International Conference on Multimedia

Y2 - 21 October 2019 through 25 October 2019

ER -
