
Ultrasonic Sensor-Based Personalized Multichannel Audio Rendering for Multiview Broadcasting Services

Abstract
An ultrasonic sensor-based personalized multichannel audio rendering method is proposed for multiview broadcasting services. Multiview broadcasting, a representative next-generation broadcasting technique, renders video image sequences captured by several stereoscopic cameras from different viewpoints. To achieve realistic multiview broadcasting, multichannel audio that is synchronized with a user's viewpoint should be rendered in real time. For this reason, both a real-time person-tracking technique for estimating the user's position and a multichannel audio rendering technique for virtual sound localization are necessary in order to provide realistic audio. Therefore, the proposed method is composed of two parts: a person-tracking method using ultrasonic sensors and a multichannel audio rendering method using MPEG Surround parameters. In order to evaluate the perceptual quality and localization performance of the proposed method, a MUSHRA listening test is conducted, and the directivity patterns are investigated. It is shown from these experiments that the proposed method provides better perceptual quality and localization performance than a conventional multichannel audio rendering method that also uses MPEG Surround parameters.
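The paper's own algorithms are not reproduced here, but the two ingredients the abstract names — ultrasonic ranging to estimate the listener's position, and position-dependent gain panning for virtual sound localization — can be sketched in a minimal form. The sketch below assumes two ultrasonic sensors mounted on a shared baseline and a standard sine panning law; all function names and parameters are illustrative, not taken from the article:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def distance_from_echo(round_trip_s):
    """Range from one ultrasonic sensor: half the echo round-trip path."""
    return SPEED_OF_SOUND * round_trip_s / 2.0

def estimate_position(d_left, d_right, baseline):
    """Trilaterate the listener's (x, y) from two sensors placed at
    (0, 0) and (baseline, 0) on the same wall (2-D simplification)."""
    x = (d_left**2 - d_right**2 + baseline**2) / (2.0 * baseline)
    y = math.sqrt(max(d_left**2 - x**2, 0.0))
    return x, y

def pan_gains(azimuth_deg, span_deg=30.0):
    """Constant-power stereo gains via the sine panning law for a
    loudspeaker pair spanning +/- span_deg around the median plane."""
    theta = math.radians(max(-span_deg, min(span_deg, azimuth_deg)))
    ratio = math.sin(theta) / math.sin(math.radians(span_deg))
    g_left = math.sqrt((1.0 - ratio) / 2.0)
    g_right = math.sqrt((1.0 + ratio) / 2.0)
    return g_left, g_right
```

In a real multiview setup, the estimated position would drive the rendering per audio frame so the virtual sound stage follows the user's viewpoint; the article achieves this by manipulating MPEG Surround spatial parameters rather than raw channel gains.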
Author(s)
Kim, Yong Guk; Moon, Sang-Taeck; Choi, Seung Ho; Kim, Hong Kook
Issued Date
2013-03
Type
Article
DOI
10.1155/2013/417574
URI
https://scholar.gist.ac.kr/handle/local/15640
Publisher
Hindawi Publishing Corporation
Citation
International Journal of Distributed Sensor Networks
ISSN
1550-1329
Appears in Collections:
Department of Electrical Engineering and Computer Science > 1. Journal Articles
Access & License
  • Access: Open
File List
  • No associated files.

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.