
Integrated DNN-Based Parameter Estimation for Multichannel Speech Enhancement

Author(s)
Cheong, Sein; Kim, Minseung; Shin, Jong-won
Type
Article
Citation
IEEE Signal Processing Letters, v.32, pp. 3320-3324
Issued Date
2025-08
Abstract
One popular configuration for statistical model-based multichannel speech enhancement (SE) is to apply a spatial filter, such as the minimum-variance distortionless response (MVDR) beamformer, followed by a single-channel post-filter, and some deep neural network (DNN)-based approaches mimic it. While many DNN-based SE methods have focused on directly estimating clean speech features or the masks from which clean speech is estimated, other efforts have been devoted to estimating the statistical parameters. DNN-based parameter estimation with two DNNs, one for the beamforming stage and one for the post-filtering stage, has demonstrated impressive performance, but the parameter estimation for the beamformer and that for the post-filter operate separately, which may be suboptimal in that the post-filter cannot utilize spatial information from the multi-microphone signals. In this letter, we propose integrated DNN-based parameter estimation for multichannel SE based on both the beamformer output and the multi-microphone signals. The speech presence probability and the power spectral densities of speech and noise estimated in the beamforming stage are utilized in the post-filtering stage for better parameter estimation. We also adopt a dual-path conformer structure with an encoder and decoders to enhance performance. Experimental results show that the proposed method achieved the best wideband perceptual evaluation of speech quality (PESQ) scores on the CHiME-4 dataset among all methods with comparable computational complexity. © 2025 IEEE. All rights reserved.
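
To make the configuration concrete, the following is a minimal NumPy sketch of the classical MVDR-beamformer-plus-post-filter pipeline the abstract refers to, for a single time-frequency bin. The covariance matrices, speech presence probability (SPP), and PSD values here are toy stand-ins for quantities that the letter proposes to estimate with DNNs; the function names and values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mvdr_weights(phi_s, phi_n, ref=0):
    # MVDR beamformer weights from the speech and noise spatial
    # covariance (PSD) matrices of one time-frequency bin
    # (reference-microphone formulation).
    num = np.linalg.solve(phi_n, phi_s)      # Phi_n^{-1} Phi_s
    return num[:, ref] / np.trace(num)       # (M,) complex weights

def postfilter_gain(spp, psd_s, psd_n, eps=1e-12):
    # Single-channel Wiener-style gain weighted by the speech
    # presence probability; all inputs are per-bin scalars.
    return spp * psd_s / (psd_s + psd_n + eps)

# Toy usage for one time-frequency bin with M = 4 microphones.
M = 4
rng = np.random.default_rng(0)
a = rng.standard_normal(M) + 1j * rng.standard_normal(M)  # steering-like vector
n = rng.standard_normal(M) + 1j * rng.standard_normal(M)  # noise snapshot
phi_s = np.outer(a, a.conj())                    # rank-1 speech covariance
phi_n = np.eye(M) + 0.5 * np.outer(n, n.conj())  # Hermitian PSD noise covariance

y = a + 0.3 * n                                  # noisy multichannel observation
w = mvdr_weights(phi_s, phi_n)
beamformed = np.vdot(w, y)                       # w^H y: beamformer output
enhanced = postfilter_gain(spp=0.9, psd_s=1.0, psd_n=0.2) * beamformed
```

In this two-stage view, the beamformer and post-filter each depend on the same statistical parameters (SPP and speech/noise PSDs), which is what motivates the letter's integrated estimation: parameters computed in the beamforming stage are reused in the post-filtering stage rather than estimated by a separate, disconnected network.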
Publisher
Institute of Electrical and Electronics Engineers Inc.
ISSN
1070-9908
DOI
10.1109/LSP.2025.3599455
URI
https://scholar.gist.ac.kr/handle/local/32029
Access & License
  • Access type: Open
File List
  • No associated files.

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.