Details
| Original language | English |
|---|---|
| Title of host publication | CIKM 2024 |
| Subtitle of host publication | Proceedings of the 33rd ACM International Conference on Information and Knowledge Management |
| Pages | 1743-1751 |
| Number of pages | 9 |
| ISBN (electronic) | 9798400704369 |
| Publication status | Published - 21 Oct 2024 |
| Event | 33rd ACM International Conference on Information and Knowledge Management, CIKM 2024 - Boise, United States |
| Event duration | 21 Oct 2024 → 25 Oct 2024 |
Abstract
Identifying the regions of a learning resource that a learner pays attention to is crucial for assessing the material's impact and improving its design and related support systems. Saliency detection in videos addresses the automatic recognition of attention-drawing regions in single frames. In educational settings, the recognition of pertinent regions in a video's visual stream can enhance content accessibility and information retrieval tasks such as video segmentation, navigation, and summarization. Such advancements can pave the way for the development of advanced AI-assisted technologies that support learning with greater efficacy. However, this task becomes particularly challenging for educational videos due to the combination of unique characteristics such as text, voice, illustrations, animations, and more. To the best of our knowledge, there is currently no study that evaluates saliency detection approaches in educational videos. In this paper, we address this gap by evaluating four state-of-the-art saliency detection approaches for educational videos. We reproduce the original studies and explore the replication capabilities for general-purpose (non-educational) datasets. Then, we investigate the generalization capabilities of the models and evaluate their performance on educational videos. We conduct a comprehensive analysis to identify common failure scenarios and possible areas of improvement. Our experimental results show that educational videos remain a challenging context for generic video saliency detection models.
Keywords
- educational videos
- video saliency detection
- video-based learning
ASJC Scopus subject areas
- General Business, Management and Accounting
- General Decision Sciences
Cite this
Navarrete, E., Ewerth, R., & Hoppe, A. (2024). Saliency Detection in Educational Videos. In CIKM 2024: Proceedings of the 33rd ACM International Conference on Information and Knowledge Management (pp. 1743-1751). https://doi.org/10.48550/arXiv.2408.04515
Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review
TY - GEN
T1 - Saliency Detection in Educational Videos
T2 - 33rd ACM International Conference on Information and Knowledge Management, CIKM 2024
AU - Navarrete, Evelyn
AU - Ewerth, Ralph
AU - Hoppe, Anett
N1 - Publisher Copyright: © 2024 Owner/Author.
PY - 2024/10/21
Y1 - 2024/10/21
AB - Identifying the regions of a learning resource that a learner pays attention to is crucial for assessing the material's impact and improving its design and related support systems. Saliency detection in videos addresses the automatic recognition of attention-drawing regions in single frames. In educational settings, the recognition of pertinent regions in a video's visual stream can enhance content accessibility and information retrieval tasks such as video segmentation, navigation, and summarization. Such advancements can pave the way for the development of advanced AI-assisted technologies that support learning with greater efficacy. However, this task becomes particularly challenging for educational videos due to the combination of unique characteristics such as text, voice, illustrations, animations, and more. To the best of our knowledge, there is currently no study that evaluates saliency detection approaches in educational videos. In this paper, we address this gap by evaluating four state-of-the-art saliency detection approaches for educational videos. We reproduce the original studies and explore the replication capabilities for general-purpose (non-educational) datasets. Then, we investigate the generalization capabilities of the models and evaluate their performance on educational videos. We conduct a comprehensive analysis to identify common failure scenarios and possible areas of improvement. Our experimental results show that educational videos remain a challenging context for generic video saliency detection models.
KW - educational videos
KW - video saliency detection
KW - video-based learning
UR - http://www.scopus.com/inward/record.url?scp=85210038718&partnerID=8YFLogxK
U2 - 10.48550/arXiv.2408.04515
DO - 10.48550/arXiv.2408.04515
M3 - Conference contribution
AN - SCOPUS:85210038718
SP - 1743
EP - 1751
BT - CIKM 2024
Y2 - 21 October 2024 through 25 October 2024
ER -