Animated Talking Head with Personalized 3D Head Model

Research output: Contribution to journal › Article › Research › peer review

Authors

  • Jörn Ostermann
  • Lawrence S. Chen
  • Thomas S. Huang

External Research Organisations

  • University of Illinois at Urbana-Champaign
  • AT&T Labs

Details

Original language: English
Pages (from-to): 97-105
Number of pages: 9
Journal: Journal of signal processing systems for signal, image, and video technology
Volume: 20
Issue number: 1-2
Publication status: Published - 1 Oct 1998
Externally published: Yes

Abstract

Natural Human-Computer Interface requires integration of realistic audio and visual information for perception and display. An example of such an interface is an animated talking head displayed on the computer screen in the form of a human-like computer agent. This system converts text to acoustic speech with synchronized animation of mouth movements. The talking head is based on a generic 3D human head model, but to improve realism, natural-looking personalized models are necessary. In this paper, we report a semi-automatic method for adapting a generic head model to 3D range data of a human head obtained from a 3D laser range scanner. This personalized model is incorporated into the talking head system. With texture mapping, the personalized model offers a more natural and realistic look than the generic model. The model created with the proposed method compares favorably to generic models.
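
The abstract outlines a pipeline in which a generic 3D head model is adapted to laser-scanned range data and then texture-mapped to obtain a personalized, more realistic model. As a purely illustrative sketch of that adaptation idea, and not a reproduction of the paper's semi-automatic method, the following Python snippet coarsely aligns a generic head mesh to a scanned point cloud (centroid and scale matching) and then snaps each mesh vertex to its nearest scan point; the function name, the synthetic data, and the use of NumPy/SciPy are assumptions made for this sketch.

# Illustrative sketch only: a crude stand-in for adapting a generic head
# mesh to 3D range-scan data; the paper's semi-automatic method is not
# reproduced here.
import numpy as np
from scipy.spatial import cKDTree

def adapt_generic_mesh(generic_vertices, scan_points):
    """Roughly align a generic mesh (N x 3) to a scan point cloud (M x 3),
    then pull each vertex onto the nearest scanned surface point."""
    # Coarse alignment: match centroids and average radial scale.
    g_center = generic_vertices.mean(axis=0)
    s_center = scan_points.mean(axis=0)
    g_scale = np.linalg.norm(generic_vertices - g_center, axis=1).mean()
    s_scale = np.linalg.norm(scan_points - s_center, axis=1).mean()
    aligned = (generic_vertices - g_center) * (s_scale / g_scale) + s_center

    # Simplified personalization: snap each vertex to its nearest scan point.
    _, nearest = cKDTree(scan_points).query(aligned)
    return scan_points[nearest]

# Synthetic example (random points stand in for a real laser scan).
rng = np.random.default_rng(0)
generic = rng.normal(size=(500, 3))              # generic head vertices
scan = rng.normal(size=(2000, 3)) * 1.2 + 5.0    # laser-scan point cloud
personalized = adapt_generic_mesh(generic, scan)
print(personalized.shape)                        # (500, 3)

Because this kind of snapping keeps the generic mesh's vertex count and topology, the existing animation controls could, in principle, still drive the personalized shape; texture mapping would then supply the realistic appearance described in the abstract.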

Cite this

Animated Talking Head with Personalized 3D Head Model. / Ostermann, Jörn; Chen, Lawrence S.; Huang, Thomas S.
In: Journal of signal processing systems for signal, image, and video technology, Vol. 20, No. 1-2, 01.10.1998, p. 97-105.

BibTeX
@article{eaec4430e92b44de8798386a476e8468,
title = "Animated Talking Head with Personalized 3D Head Model",
abstract = "Natural Human-Computer Interface requires integration of realistic audio and visual information for perception and display. An example of such an interface is an animated talking head displayed on the computer screen in the form of a human-like computer agent. This system converts text to acoustic speech with synchronized animation of mouth movements. The talking head is based on a generic 3D human head model, but to improve realism, natural looking personalized models are necessary. In this paper, we report a semi-automatic method for adapting a generic head model to 3D range data of a human head obtained from a 3D-laser range scanner. This personalized model is incorporated into the talking head system. With texture mapping, the personalized model offers a more natural and realistic look than the generic model. The model created with the proposed method compares favorable to generic models.",
author = "J{\"o}rn Ostermann and Chen, {Lawrence S.} and Huang, {Thomas S.}",
note = "{\textcopyright} Copyright 2012 Elsevier B.V., All rights reserved.",
year = "1998",
month = oct,
day = "1",
doi = "10.1023/A:1008070323952",
language = "English",
volume = "20",
pages = "97--105",
journal = "Journal of signal processing systems for signal, image, and video technology",
issn = "1387-5485",
publisher = "Springer New York",
number = "1-2",

}

RIS

TY - JOUR
T1 - Animated Talking Head with Personalized 3D Head Model
AU - Ostermann, Jörn
AU - Chen, Lawrence S.
AU - Huang, Thomas S.
N1 - © Copyright 2012 Elsevier B.V., All rights reserved.
PY - 1998/10/1
Y1 - 1998/10/1
N2 - Natural Human-Computer Interface requires integration of realistic audio and visual information for perception and display. An example of such an interface is an animated talking head displayed on the computer screen in the form of a human-like computer agent. This system converts text to acoustic speech with synchronized animation of mouth movements. The talking head is based on a generic 3D human head model, but to improve realism, natural-looking personalized models are necessary. In this paper, we report a semi-automatic method for adapting a generic head model to 3D range data of a human head obtained from a 3D laser range scanner. This personalized model is incorporated into the talking head system. With texture mapping, the personalized model offers a more natural and realistic look than the generic model. The model created with the proposed method compares favorably to generic models.
AB - Natural Human-Computer Interface requires integration of realistic audio and visual information for perception and display. An example of such an interface is an animated talking head displayed on the computer screen in the form of a human-like computer agent. This system converts text to acoustic speech with synchronized animation of mouth movements. The talking head is based on a generic 3D human head model, but to improve realism, natural-looking personalized models are necessary. In this paper, we report a semi-automatic method for adapting a generic head model to 3D range data of a human head obtained from a 3D laser range scanner. This personalized model is incorporated into the talking head system. With texture mapping, the personalized model offers a more natural and realistic look than the generic model. The model created with the proposed method compares favorably to generic models.
UR - http://www.scopus.com/inward/record.url?scp=0032178446&partnerID=8YFLogxK
U2 - 10.1023/A:1008070323952
DO - 10.1023/A:1008070323952
M3 - Article
AN - SCOPUS:0032178446
VL - 20
SP - 97
EP - 105
JO - Journal of signal processing systems for signal, image, and video technology
JF - Journal of signal processing systems for signal, image, and video technology
SN - 1387-5485
IS - 1-2
ER -
