Details
| Original language | English |
| --- | --- |
| Publication status | Published - 21 Aug 2019 |
| Event | 2019 AES International Conference on Headphone Technology, San Francisco, United States (27 Aug 2019 → 29 Aug 2019) |
Conference
| Conference | 2019 AES International Conference on Headphone Technology |
| --- | --- |
| Country/Territory | United States |
| City | San Francisco |
| Period | 27 Aug 2019 → 29 Aug 2019 |
Abstract
Non-individual head-related transfer functions (HRTFs) are commonly used in binaural rendering systems because measuring personal HRTFs is difficult in consumer scenarios. However, the perceived externalization of such rendered virtual sound images is usually low, especially for frontal and rear sound sources. This study proposed a method for improving the perceived externalization of virtual frontal and rear sound images presented over headphones, consisting mainly of direction-dependent peak and notch filters for the direct sound part and filter-bank-based decorrelation filters for the early-reflection part. The results of the subjective listening experiment showed that the perceived externalization of frontal and rear sound sources was substantially improved by the proposed method compared to the conventional method.
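The abstract names two processing stages: direction-dependent peak/notch filtering of the direct sound, and filter-bank-based decorrelation of the early reflections. The sketch below is NOT the authors' implementation; it only illustrates the general shape of such a pipeline, assuming standard RBJ peaking biquads for the spectral peaks/notches and a crude per-band random-delay scheme for decorrelation. All filter parameters (center frequencies, gains, band counts, delays) are illustrative placeholders.

```python
# Hypothetical sketch of the two stages described in the abstract.
# Assumptions (not from the paper): RBJ peaking-EQ biquads model the
# direction-dependent peaks/notches; decorrelation is approximated by
# splitting the early reflections into bands and delaying each band.
import numpy as np
from scipy.signal import lfilter, butter

def peaking_biquad(fs, f0, gain_db, q=4.0):
    """RBJ peaking-EQ biquad; a negative gain_db produces a notch-like dip."""
    a = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * a, -2 * np.cos(w0), 1 - alpha * a])
    den = np.array([1 + alpha / a, -2 * np.cos(w0), 1 - alpha / a])
    return b / den[0], den / den[0]

def enhance_direct(direct, fs, peaks):
    """Cascade direction-dependent peak/notch filters over the direct sound.
    `peaks` is a list of (center_freq_hz, gain_db) pairs; the values would
    depend on the target direction (hypothetical here)."""
    out = direct
    for f0, gain_db in peaks:
        b, a = peaking_biquad(fs, f0, gain_db)
        out = lfilter(b, a, out)
    return out

def decorrelate_early(early, fs, n_bands=8, max_delay=32, seed=0):
    """Filter-bank decorrelation stand-in: split the early reflections into
    log-spaced bands, delay each band by a random amount, and resum."""
    rng = np.random.default_rng(seed)
    edges = np.geomspace(100.0, 0.45 * fs, n_bands + 1)
    out = np.zeros_like(early)
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        band = lfilter(b, a, early)
        d = int(rng.integers(0, max_delay))
        out += np.concatenate([np.zeros(d), band[:len(band) - d]])
    return out
```

In a binaural renderer, the two stages would run on the separately rendered direct and early-reflection parts before they are mixed back together; using a different decorrelation seed per ear reduces interaural coherence, which is commonly associated with increased externalization.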
ASJC Scopus subject areas
- Engineering(all)
- Electrical and Electronic Engineering
- Physics and Astronomy(all)
- Acoustics and Ultrasonics
Cite this
Li, S., Schlieper, R., & Peissig, J. (2019). Externalization Enhancement for Headphone-Reproduced Virtual Frontal and Rear Sound Images. Paper presented at 2019 AES International Conference on Headphone Technology, San Francisco, United States.
Research output: Contribution to conference › Paper › Research › peer review
TY - CONF
T1 - Externalization Enhancement for Headphone-Reproduced Virtual Frontal and Rear Sound Images
AU - Li, Song
AU - Schlieper, Roman
AU - Peissig, Jürgen
N1 - Funding information: This work is supported by Huawei Innovation Research Program FLAGSHIP (HIRP FLAGSHIP) project. The authors would like to thank those who participated in the listening tests.
PY - 2019/8/21
Y1 - 2019/8/21
N2 - Non-individual head-related transfer functions (HRTFs) are commonly used in binaural rendering systems because measuring personal HRTFs is difficult in consumer scenarios. However, the perceived externalization of such rendered virtual sound images is usually low, especially for frontal and rear sound sources. This study proposed a method for improving perceived externalization of virtual frontal and rear sound images presented over headphones, consisting mainly of direction-dependent peak and notch filters for direct sound parts, and filter bank based decorrelation filters for early reflection parts. The result of the subjective listening experiment showed that the perceived externalization of frontal and rear sound sources was substantially improved by our proposed method compared to the conventional method.
AB - Non-individual head-related transfer functions (HRTFs) are commonly used in binaural rendering systems because measuring personal HRTFs is difficult in consumer scenarios. However, the perceived externalization of such rendered virtual sound images is usually low, especially for frontal and rear sound sources. This study proposed a method for improving perceived externalization of virtual frontal and rear sound images presented over headphones, consisting mainly of direction-dependent peak and notch filters for direct sound parts, and filter bank based decorrelation filters for early reflection parts. The result of the subjective listening experiment showed that the perceived externalization of frontal and rear sound sources was substantially improved by our proposed method compared to the conventional method.
UR - http://www.scopus.com/inward/record.url?scp=85074588719&partnerID=8YFLogxK
M3 - Paper
T2 - 2019 AES International Conference on Headphone Technology
Y2 - 27 August 2019 through 29 August 2019
ER -