Details
Original language | English |
---|---|
Title of host publication | Proceedings of the 15th International Conference on PErvasive Technologies Related to Assistive Environments, PETRA 2022 |
Pages | 260-268 |
Number of pages | 9 |
ISBN (electronic) | 9781450396318 |
Publication status | Published - 11 July 2022 |
Externally published | Yes |
Event | 15th International Conference on Pervasive Technologies Related to Assistive Environments, PETRA 2022 - Corfu, Greece Duration: 29 June 2022 → 1 July 2022 |
Publication series
Name | ACM International Conference Proceeding Series |
---|---|
Abstract
In everyday life, Deaf people face barriers because information is often only available in spoken or written language. Producing sign language videos showing a human interpreter is often not feasible due to the amount of data required or because the information changes frequently. The ongoing AVASAG project addresses this issue by developing a 3D sign language avatar for the automatic translation of texts into sign language for public services. The avatar is trained using recordings of human interpreters translating text into sign language. For this purpose, we create a corpus with video and motion capture data and an annotation scheme that allows for real-time translation and subsequent correction without requiring the animation frames to be corrected manually. This paper presents the general translation pipeline, focusing on innovative points such as adjusting an existing annotation system to the specific requirements of sign language and making it usable to annotators from the Deaf communities.
ASJC Scopus subject areas
- Computer Science (all)
- Software
- Computer Science (all)
- Human-Computer Interaction
- Computer Science (all)
- Computer Vision and Pattern Recognition
- Computer Science (all)
- Computer Networks and Communications
Cite
- Standard
- Harvard
- APA
- Vancouver
- BibTeX
- RIS
Proceedings of the 15th International Conference on PErvasive Technologies Related to Assistive Environments, PETRA 2022. 2022. pp. 260-268 (ACM International Conference Proceeding Series).
Publication: Chapter in book/report/conference proceeding › Conference contribution › Research › Peer-reviewed
TY - GEN
T1 - Towards Automated Sign Language Production
T2 - 15th International Conference on Pervasive Technologies Related to Assistive Environments, PETRA 2022
AU - Bernhard, Lucas
AU - Nunnari, Fabrizio
AU - Unger, Amelie
AU - Bauerdiek, Judith
AU - Dold, Christian
AU - Hauck, Marcel
AU - Stricker, Alexander
AU - Baur, Tobias
AU - Heimerl, Alexander
AU - André, Elisabeth
AU - Reinecker, Melissa
AU - España-Bonet, Cristina
AU - Hamidullah, Yasser
AU - Busemann, Stephan
AU - Gebhard, Patrick
AU - Jäger, Corinna
AU - Wecker, Sonja
AU - Kossel, Yvonne
AU - Müller, Henrik
AU - Waldow, Kristoffer
AU - Fuhrmann, Arnulph
AU - Misiak, Martin
AU - Wallach, Dieter
N1 - Publisher Copyright: © 2022 ACM.
PY - 2022/7/11
Y1 - 2022/7/11
N2 - In everyday life, Deaf people face barriers because information is often only available in spoken or written language. Producing sign language videos showing a human interpreter is often not feasible due to the amount of data required or because the information changes frequently. The ongoing AVASAG project addresses this issue by developing a 3D sign language avatar for the automatic translation of texts into sign language for public services. The avatar is trained using recordings of human interpreters translating text into sign language. For this purpose, we create a corpus with video and motion capture data and an annotation scheme that allows for real-time translation and subsequent correction without requiring the animation frames to be corrected manually. This paper presents the general translation pipeline, focusing on innovative points such as adjusting an existing annotation system to the specific requirements of sign language and making it usable to annotators from the Deaf communities.
AB - In everyday life, Deaf people face barriers because information is often only available in spoken or written language. Producing sign language videos showing a human interpreter is often not feasible due to the amount of data required or because the information changes frequently. The ongoing AVASAG project addresses this issue by developing a 3D sign language avatar for the automatic translation of texts into sign language for public services. The avatar is trained using recordings of human interpreters translating text into sign language. For this purpose, we create a corpus with video and motion capture data and an annotation scheme that allows for real-time translation and subsequent correction without requiring the animation frames to be corrected manually. This paper presents the general translation pipeline, focusing on innovative points such as adjusting an existing annotation system to the specific requirements of sign language and making it usable to annotators from the Deaf communities.
KW - annotation
KW - automatic translation
KW - corpus
KW - motion capture
KW - sign language production
UR - http://www.scopus.com/inward/record.url?scp=85134433383&partnerID=8YFLogxK
U2 - 10.1145/3529190.3529202
DO - 10.1145/3529190.3529202
M3 - Conference contribution
AN - SCOPUS:85134433383
T3 - ACM International Conference Proceeding Series
SP - 260
EP - 268
BT - Proceedings of the 15th International Conference on PErvasive Technologies Related to Assistive Environments, PETRA 2022
Y2 - 29 June 2022 through 1 July 2022
ER -