Ultrasonic Sensor-Based Personalized Multichannel Audio Rendering for Multiview Broadcasting Services
- Abstract
- An ultrasonic sensor-based personalized multichannel audio rendering method is proposed for multiview broadcasting services. Multiview broadcasting, a representative next-generation broadcasting technique, renders video sequences captured by several stereoscopic cameras from different viewpoints. To make multiview broadcasting realistic, multichannel audio synchronized with the user's viewpoint must be rendered in real time. This requires both a real-time person-tracking technique to estimate the user's position and a multichannel audio rendering technique for virtual sound localization. The proposed method therefore consists of two parts: a person-tracking method using ultrasonic sensors and a multichannel audio rendering method using MPEG Surround parameters. To evaluate the perceptual quality and localization performance of the proposed method, a MUSHRA listening test is conducted and the directivity patterns are investigated. These experiments show that the proposed method provides better perceptual quality and localization performance than a conventional multichannel audio rendering method that also uses MPEG Surround parameters.
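- The two-part pipeline the abstract describes, estimating the listener's position from ultrasonic ranging and then steering the rendered audio toward that position, can be illustrated with a minimal sketch. This is not the authors' implementation: the two-sensor geometry, the trilateration step, and the constant-power panning law are illustrative assumptions standing in for the paper's MPEG Surround-based rendering.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C (assumed constant)

def tof_to_distance(round_trip_s: float) -> float:
    """Convert an ultrasonic echo's round-trip time to a one-way distance."""
    return round_trip_s * SPEED_OF_SOUND / 2.0

def trilaterate_2d(p1, r1, p2, r2):
    """Intersect two range circles around sensors p1, p2 (x, y positions)
    and return the intersection in front of the sensor baseline (y > 0)."""
    d = math.dist(p1, p2)
    a = (r1**2 - r2**2 + d**2) / (2 * d)
    h = math.sqrt(max(r1**2 - a**2, 0.0))
    # Foot point on the line p1 -> p2, at distance a from p1.
    xm = p1[0] + a * (p2[0] - p1[0]) / d
    ym = p1[1] + a * (p2[1] - p1[1]) / d
    # Offset perpendicular to the baseline; keep the y > 0 branch.
    x = xm + h * (p2[1] - p1[1]) / d
    y = ym - h * (p2[0] - p1[0]) / d
    if y < 0:
        x = xm - h * (p2[1] - p1[1]) / d
        y = ym + h * (p2[0] - p1[0]) / d
    return (x, y)

def constant_power_pan(azimuth_rad: float, half_width_rad: float = math.pi / 4):
    """Map an azimuth in [-half_width, half_width] to left/right gains
    with g_l**2 + g_r**2 = 1 (constant acoustic power)."""
    t = (azimuth_rad + half_width_rad) / (2 * half_width_rad)  # 0..1
    theta = t * math.pi / 2
    return math.cos(theta), math.sin(theta)
```

- For a listener centered between the speakers (azimuth 0), the panning law yields equal left/right gains whose squared sum is 1, so perceived loudness stays constant as the estimated position moves.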
- Author(s)
- Kim, Yong Guk; Moon, Sang-Taeck; Choi, Seung Ho; Kim, Hong Kook
- Issued Date
- 2013-03
- Type
- Article
- DOI
- 10.1155/2013/417574
- URI
- https://scholar.gist.ac.kr/handle/local/15640
Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.