Class-dependent and differential Huffman coding of compressed feature parameters for distributed speech recognition
- Abstract
- In this paper, we propose an entropy coding method for compressing quantized mel-frequency cepstral coefficients (MFCCs) used for distributed speech recognition (DSR). In the European Telecommunications Standards Institute (ETSI) extended DSR standard, MFCCs are compressed along with additional parameters such as pitch and voicing class. The entropy of the compressed MFCCs in each analysis frame varies with the voicing class of the frame, which enables the design of a different Huffman tree for each voicing class, referred to here as class-dependent Huffman coding. In addition to the voicing class, the correlation among subvectors is exploited for Huffman coding, which we call subvector-wise Huffman coding. We also show that differential Huffman coding can further improve the coding gain over both class-dependent and subvector-wise Huffman coding. Building on these benefits, hybrid Huffman coding schemes that combine class-dependent or subvector-wise coding with differential Huffman coding are compared in this paper. Experiments show that subvector-wise differential Huffman coding achieves an average bitrate of 33.93 bits/frame, whereas conventional Huffman coding, which ignores voicing class and encodes all subvectors with a single Huffman tree, requires 42.22 bits/frame. ©2009 IEEE.
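The two key ideas in the abstract, coding the frame-to-frame index differences rather than the raw quantizer indices (differential coding), and training a separate Huffman table per voicing class (class-dependent coding), can be illustrated with a minimal sketch. The function names, the toy symbol streams, and the two-class labels below are illustrative assumptions, not the paper's actual codebooks or the ETSI bitstream format:

```python
import heapq
from collections import Counter
from itertools import count

def build_huffman_code(symbols):
    """Build a Huffman code (symbol -> bitstring) from an observed symbol stream."""
    freq = Counter(symbols)
    if len(freq) == 1:  # degenerate case: a single symbol still needs one bit
        return {next(iter(freq)): "0"}
    tiebreak = count()  # keeps heap comparisons well-defined for equal frequencies
    heap = [(f, next(tiebreak), {s: ""}) for s, f in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        # Merge the two least-frequent subtrees, prefixing their codewords.
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (f1 + f2, next(tiebreak), merged))
    return heap[0][2]

def encode_frames(indices, classes):
    """Class-dependent differential Huffman coding of one quantizer-index stream.

    indices: quantizer index per frame; classes: voicing class label per frame.
    """
    # Differential step: first index sent as-is, later frames as deltas,
    # which concentrates the symbol distribution around zero.
    deltas = [indices[0]] + [b - a for a, b in zip(indices, indices[1:])]
    # Class-dependent step: one Huffman table per voicing class.
    tables = {}
    for cls in set(classes):
        tables[cls] = build_huffman_code(
            [d for d, c in zip(deltas, classes) if c == cls])
    bits = "".join(tables[c][d] for d, c in zip(deltas, classes))
    return bits, tables
```

In a subvector-wise variant, the same procedure would be repeated per MFCC subvector with a separate table for each, instead of one table shared across all subvectors.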
- Author(s)
- Lee, Young Han; Kim, Deok Su; Kim, Hong Kook
- Issued Date
- 2009-04-20
- Type
- Conference Paper
- DOI
- 10.1109/ICASSP.2009.4960546
- URI
- https://scholar.gist.ac.kr/handle/local/25797
- Access & License
-
- File List
-
Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.