SSGVS: Semantic Scene Graph-to-Video Synthesis

Publication: Contribution to book/report/anthology/conference proceedings › Paper in conference proceedings › Research › Peer-reviewed

Authors

  • Yuren Cong
  • Jinhui Yi
  • Bodo Rosenhahn
  • Michael Ying Yang

External organizations

  • Rheinische Friedrich-Wilhelms-Universität Bonn
  • University of Twente

Details

Original language: English
Title of host publication: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops
Subtitle: CVPRW 2023
Publisher: IEEE Computer Society
Pages: 2555-2565
Number of pages: 11
ISBN (electronic): 9798350302493
ISBN (print): 979-8-3503-0250-9
Publication status: Published - 2023
Event: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2023 - Vancouver, Canada
Duration: 17 June 2023 - 24 June 2023

Publication series

Name: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
Volume: 2023-June
ISSN (print): 2160-7508
ISSN (electronic): 2160-7516

Abstract

As a natural extension of the image synthesis task, video synthesis has attracted a lot of interest recently. Many image synthesis works utilize class labels or text as guidance. However, neither labels nor text can provide explicit temporal guidance, such as when an action starts or ends. To overcome this limitation, we introduce semantic video scene graphs as input for video synthesis, as they represent the spatial and temporal relationships between objects in the scene. Since video scene graphs are usually temporally discrete annotations, we propose a video scene graph (VSG) encoder that not only encodes the existing video scene graphs but also predicts the graph representations for unlabeled frames. The VSG encoder is pre-trained with different contrastive multi-modal losses. A semantic scene graph-to-video synthesis framework (SSGVS), based on the pre-trained VSG encoder, VQ-VAE, and auto-regressive Transformer, is proposed to synthesize a video given an initial scene image and a non-fixed number of semantic scene graphs. We evaluate SSGVS and other state-of-the-art video synthesis models on the Action Genome dataset and demonstrate the positive significance of video scene graphs in video synthesis. The source code is available at https://github.com/yrcong/SSGVS.
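
The abstract describes a three-stage pipeline: a VSG encoder that densifies temporally sparse scene-graph annotations, a VQ-VAE that turns frames into discrete tokens, and an auto-regressive Transformer that predicts future frame tokens conditioned on the graph representations. The sketch below illustrates that flow in PyTorch. It is a minimal toy, not the authors' released implementation (see https://github.com/yrcong/SSGVS): all module names (VSGEncoder, ToyVQVAE, GraphConditionedPrior), dimensions, the prefix-token conditioning, and the greedy decoding loop are illustrative assumptions.

import torch
import torch.nn as nn

D, T, GRID = 256, 8, 4            # embedding dim, video length, token grid side (assumed)
VQ_CODES, SG_VOCAB = 512, 100     # codebook size, scene-graph vocabulary (assumed)

class VSGEncoder(nn.Module):
    # Embeds (subject, predicate, object) triplets of the annotated frames and
    # runs a temporal Transformer so that unlabeled frames also receive a graph
    # representation, mirroring the densification idea in the abstract.
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(SG_VOCAB, D)
        self.pos = nn.Parameter(torch.zeros(1, T, D))
        layer = nn.TransformerEncoderLayer(D, nhead=8, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, triplets, frame_ids):
        # triplets: (B, A, 3) ids; frame_ids: (B, A) indices of annotated frames
        g = self.emb(triplets).mean(dim=2)               # (B, A, D), one vector per graph
        dense = torch.zeros(triplets.size(0), T, D)      # sparse -> dense timeline
        dense[torch.arange(triplets.size(0)).unsqueeze(1), frame_ids] = g
        return self.temporal(dense + self.pos)           # (B, T, D) for all frames

class ToyVQVAE(nn.Module):
    # Stand-in VQ-VAE: maps a 64x64 frame to GRID*GRID discrete codes and back.
    def __init__(self):
        super().__init__()
        self.enc = nn.Conv2d(3, D, kernel_size=16, stride=16)   # 64x64 -> 4x4
        self.codebook = nn.Embedding(VQ_CODES, D)
        self.dec = nn.ConvTranspose2d(D, 3, kernel_size=16, stride=16)

    def encode(self, frame):                              # frame: (B, 3, 64, 64)
        z = self.enc(frame).flatten(2).transpose(1, 2)    # (B, 16, D)
        dist = torch.cdist(z, self.codebook.weight.unsqueeze(0))
        return dist.argmin(-1)                            # (B, 16) nearest code ids

    def decode(self, ids):                                # ids: (B, 16)
        z = self.codebook(ids).transpose(1, 2).reshape(-1, D, GRID, GRID)
        return self.dec(z)                                # (B, 3, 64, 64)

class GraphConditionedPrior(nn.Module):
    # Auto-regressive Transformer over frame tokens, conditioned on the graph
    # embeddings by prepending them as prefix tokens (one common conditioning scheme).
    def __init__(self):
        super().__init__()
        self.tok = nn.Embedding(VQ_CODES, D)
        layer = nn.TransformerEncoderLayer(D, nhead=8, batch_first=True)
        self.core = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(D, VQ_CODES)

    def forward(self, graph_feats, token_ids):
        x = torch.cat([graph_feats, self.tok(token_ids)], dim=1)
        mask = torch.triu(torch.full((x.size(1), x.size(1)), float("-inf")), diagonal=1)
        h = self.core(x, mask=mask)                       # causal self-attention
        return self.head(h[:, graph_feats.size(1):])      # next-token logits

# One synthesis step: given the initial frame and three annotated scene graphs,
# grow the token sequence by one frame and decode it back to pixels.
vsg, vqvae, prior = VSGEncoder(), ToyVQVAE(), GraphConditionedPrior()
triplets = torch.randint(0, SG_VOCAB, (1, 3, 3))          # 3 annotated graphs
frame_ids = torch.tensor([[0, 3, 7]])                     # the frames they annotate
first_frame = torch.rand(1, 3, 64, 64)

graph_feats = vsg(triplets, frame_ids)
tokens = vqvae.encode(first_frame)                        # tokens of the initial frame
for _ in range(GRID * GRID):                              # greedy decoding of one frame
    logits = prior(graph_feats, tokens)
    tokens = torch.cat([tokens, logits[:, -1:].argmax(-1)], dim=1)
next_frame = vqvae.decode(tokens[:, -GRID * GRID:])
print(next_frame.shape)                                   # torch.Size([1, 3, 64, 64])

In the paper's setting the prior would be trained with cross-entropy on ground-truth frame tokens, and the VSG encoder pre-trained with the contrastive multi-modal losses mentioned above; both are omitted here for brevity.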


Cite

SSGVS: Semantic Scene Graph-to-Video Synthesis. / Cong, Yuren; Yi, Jinhui; Rosenhahn, Bodo et al.
2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops: CVPRW 2023. IEEE Computer Society, 2023. pp. 2555-2565 (IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops; Vol. 2023-June).


Cong, Y, Yi, J, Rosenhahn, B & Yang, MY 2023, SSGVS: Semantic Scene Graph-to-Video Synthesis. in 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops: CVPRW 2023. IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Vol. 2023-June, IEEE Computer Society, pp. 2555-2565, 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2023, Vancouver, Canada, 17 June 2023. https://doi.org/10.48550/arXiv.2211.06119, https://doi.org/10.1109/CVPRW59228.2023.00254
Cong, Y., Yi, J., Rosenhahn, B., & Yang, M. Y. (2023). SSGVS: Semantic Scene Graph-to-Video Synthesis. In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops: CVPRW 2023 (pp. 2555-2565). (IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops; Vol. 2023-June). IEEE Computer Society. https://doi.org/10.48550/arXiv.2211.06119, https://doi.org/10.1109/CVPRW59228.2023.00254
Cong Y, Yi J, Rosenhahn B, Yang MY. SSGVS: Semantic Scene Graph-to-Video Synthesis. In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops: CVPRW 2023. IEEE Computer Society. 2023. p. 2555-2565. (IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops). doi: 10.48550/arXiv.2211.06119, 10.1109/CVPRW59228.2023.00254
Cong, Yuren ; Yi, Jinhui ; Rosenhahn, Bodo et al. / SSGVS : Semantic Scene Graph-to-Video Synthesis. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops: CVPRW 2023. IEEE Computer Society, 2023. pp. 2555-2565 (IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops).
BibTeX
@inproceedings{55d68d3b046542d89cab7dac76e02879,
title = "SSGVS: Semantic Scene Graph-to-Video Synthesis",
abstract = "As a natural extension of the image synthesis task, video synthesis has attracted a lot of interest recently. Many image synthesis works utilize class labels or text as guidance. However, neither labels nor text can provide explicit temporal guidance, such as when an action starts or ends. To overcome this limitation, we introduce semantic video scene graphs as input for video synthesis, as they represent the spatial and temporal relationships between objects in the scene. Since video scene graphs are usually temporally discrete annotations, we propose a video scene graph (VSG) encoder that not only encodes the existing video scene graphs but also predicts the graph representations for unlabeled frames. The VSG encoder is pre-trained with different contrastive multi-modal losses. A semantic scene graph-to-video synthesis framework (SSGVS), based on the pre-trained VSG encoder, VQ-VAE, and auto-regressive Transformer, is proposed to synthesize a video given an initial scene image and a non-fixed number of semantic scene graphs. We evaluate SSGVS and other state-of-the-art video synthesis models on the Action Genome dataset and demonstrate the positive significance of video scene graphs in video synthesis. The source code is available at https://github.com/yrcong/SSGVS.",
author = "Yuren Cong and Jinhui Yi and Bodo Rosenhahn and Yang, {Michael Ying}",
note = "Funding Information: Acknowledgements This work was supported by the Federal Ministry of Education and Research (BMBF), Germany under the project LeibnizKILabor (grant no. 01DD20003) and the AI service center KISSKI (grant no. 01IS22093C), ZDIN and DFG under Germany{\textquoteright}s Excellence Strategy within the Cluster of Excellence PhoenixD (EXC 2122). ; 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2023 ; Conference date: 17-06-2023 Through 24-06-2023",
year = "2023",
doi = "10.48550/arXiv.2211.06119",
language = "English",
isbn = "979-8-3503-0250-9",
series = "IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops",
publisher = "IEEE Computer Society",
pages = "2555--2565",
booktitle = "2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops",
address = "United States",

}

RIS

TY - GEN
T1 - SSGVS
T2 - 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2023
AU - Cong, Yuren
AU - Yi, Jinhui
AU - Rosenhahn, Bodo
AU - Yang, Michael Ying
N1 - Funding Information: Acknowledgements This work was supported by the Federal Ministry of Education and Research (BMBF), Germany under the project LeibnizKILabor (grant no. 01DD20003) and the AI service center KISSKI (grant no. 01IS22093C), ZDIN and DFG under Germany’s Excellence Strategy within the Cluster of Excellence PhoenixD (EXC 2122).
PY - 2023
Y1 - 2023
N2 - As a natural extension of the image synthesis task, video synthesis has attracted a lot of interest recently. Many image synthesis works utilize class labels or text as guidance. However, neither labels nor text can provide explicit temporal guidance, such as when an action starts or ends. To overcome this limitation, we introduce semantic video scene graphs as input for video synthesis, as they represent the spatial and temporal relationships between objects in the scene. Since video scene graphs are usually temporally discrete annotations, we propose a video scene graph (VSG) encoder that not only encodes the existing video scene graphs but also predicts the graph representations for unlabeled frames. The VSG encoder is pre-trained with different contrastive multi-modal losses. A semantic scene graph-to-video synthesis framework (SSGVS), based on the pre-trained VSG encoder, VQ-VAE, and auto-regressive Transformer, is proposed to synthesize a video given an initial scene image and a non-fixed number of semantic scene graphs. We evaluate SSGVS and other state-of-the-art video synthesis models on the Action Genome dataset and demonstrate the positive significance of video scene graphs in video synthesis. The source code is available at https://github.com/yrcong/SSGVS.
AB - As a natural extension of the image synthesis task, video synthesis has attracted a lot of interest recently. Many image synthesis works utilize class labels or text as guidance. However, neither labels nor text can provide explicit temporal guidance, such as when an action starts or ends. To overcome this limitation, we introduce semantic video scene graphs as input for video synthesis, as they represent the spatial and temporal relationships between objects in the scene. Since video scene graphs are usually temporally discrete annotations, we propose a video scene graph (VSG) encoder that not only encodes the existing video scene graphs but also predicts the graph representations for unlabeled frames. The VSG encoder is pre-trained with different contrastive multi-modal losses. A semantic scene graph-to-video synthesis framework (SSGVS), based on the pre-trained VSG encoder, VQ-VAE, and auto-regressive Transformer, is proposed to synthesize a video given an initial scene image and a non-fixed number of semantic scene graphs. We evaluate SSGVS and other state-of-the-art video synthesis models on the Action Genome dataset and demonstrate the positive significance of video scene graphs in video synthesis. The source code is available at https://github.com/yrcong/SSGVS.
UR - http://www.scopus.com/inward/record.url?scp=85168723164&partnerID=8YFLogxK
U2 - 10.48550/arXiv.2211.06119
DO - 10.48550/arXiv.2211.06119
M3 - Conference contribution
AN - SCOPUS:85168723164
SN - 979-8-3503-0250-9
T3 - IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
SP - 2555
EP - 2565
BT - 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops
PB - IEEE Computer Society
Y2 - 17 June 2023 through 24 June 2023
ER -
