University of Marburg at TRECVID 2006: Shot boundary detection and rushes task results

Research output: Contribution to conference › Paper › Research › peer review

Authors

  • Ralph Ewerth
  • Markus Mühling
  • Thilo Stadelmann
  • Ermir Qeli
  • Björn Agel
  • Dominik Seiler
  • Bernd Freisleben

External Research Organisations

  • University of Siegen
  • Philipps-Universität Marburg

Details

Original language: English
Publication status: Published - 2006
Externally published: Yes
Event: TREC Video Retrieval Evaluation, TRECVID 2006 - Gaithersburg, MD, United States
Duration: 13 Nov 2006 - 14 Nov 2006

Conference

Conference: TREC Video Retrieval Evaluation, TRECVID 2006
Country/Territory: United States
City: Gaithersburg, MD
Period: 13 Nov 2006 - 14 Nov 2006

Abstract

In this paper, we summarize our results for the shot boundary detection task and the rushes task at TRECVID 2006. The shot boundary detection approach evaluated at TRECVID 2005 served as the basis for this year's experiments and was modified in several ways. First, we investigated different parameter settings for the unsupervised approach. Second, we experimented with creating an unsupervised ensemble consisting of several clusterings obtained with different parameter settings. Our prototype for the rushes task consists of a summarization component and a retrieval component. Rushes videos are segmented on a sub-shot basis in order to separate redundant from non-redundant sequences within a shot. The summarization component is based on sub-shot clustering, and an appropriate visualization of the clusters is presented to the user. The sub-shots are clustered with respect to a number of low-level and mid-level features and visualized such that the user can navigate through them. The retrieval component enables the user to search the rushes material automatically according to several features: camera motion, audio features (silence, speech, music, action, and background), speaker identity and interviews, shot sizes, face appearances, and query-by-example based on color and texture features.
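
To make the ensemble idea in the abstract concrete, the following sketch illustrates how several unsupervised clusterings obtained under different parameter settings could be combined by majority vote to decide which frame transitions are cuts. It is a minimal illustration, assuming Python with NumPy and scikit-learn; the dissimilarity input, the power-transform "parameter settings", and the voting rule are hypothetical and are not the configuration used by the authors at TRECVID 2006.

# Minimal sketch of an unsupervised clustering ensemble for cut detection.
# Assumptions (not the authors' TRECVID 2006 configuration): the input is a
# 1-D series of frame-to-frame dissimilarity scores, the per-run "parameter
# settings" are simple power transforms of that series, and the per-run
# decisions are combined by simple majority vote.
import numpy as np
from sklearn.cluster import KMeans

def detect_cuts_ensemble(dissimilarity, exponents=(1.0, 2.0, 4.0)):
    """Label frame transitions as cuts by majority vote over several
    two-cluster k-means runs, each on a differently transformed feature."""
    d = np.asarray(dissimilarity, dtype=float).reshape(-1, 1)
    votes = np.zeros(len(d), dtype=int)
    for p in exponents:                      # hypothetical parameter settings
        features = d ** p                    # per-run feature variation
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
        # The cluster with the larger mean dissimilarity is treated as "cut".
        cut_cluster = int(np.argmax([features[labels == c].mean() for c in (0, 1)]))
        votes += (labels == cut_cluster).astype(int)
    # A transition counts as a cut if most of the clusterings agree.
    return votes > len(exponents) / 2

# Toy example: a synthetic dissimilarity series with two obvious cuts.
scores = [0.05, 0.04, 0.90, 0.06, 0.05, 0.85, 0.03]
print(np.nonzero(detect_cuts_ensemble(scores))[0])   # -> [2 5]

The same voting scheme applies to any per-run variation (different features, distance measures, or cluster initializations), which is the sense in which an ensemble of clusterings can make a single unsupervised detector less sensitive to one particular parameter choice.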

Cite this

University of Marburg at TRECVID 2006: Shot boundary detection and rushes task results. / Ewerth, Ralph; Mühling, Markus; Stadelmann, Thilo et al.
2006. Paper presented at TREC Video Retrieval Evaluation, TRECVID 2006, Gaithersburg, MD, United States.

Links

Scopus: http://www.scopus.com/inward/record.url?scp=84905178040&partnerID=8YFLogxK