In this paper, we propose a Convolutional-Transformer speech codec that uses stacks of convolutions and self-attention layers to remove redundant information at the downsampling and upsampling blocks of a U-Net-style encoder-decoder neural codec architecture. We design the Transformers to use channel and temporal attention with any number of attention stages and heads while maintaining causality. This allows us to account for the characteristics of the input vectors and to flexibly exploit temporal and channel-wise relationships at different scales when encoding the salient information present in speech. As a result, our model can reduce the dimensionality of its latent embeddings and improve its quantization efficiency while maintaining quality. Experimental results demonstrate that our approach achieves significantly better performance than convolution-only baselines.
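The sketch below illustrates, under our own assumptions, how one encoder downsampling block of such an architecture might combine a causal strided convolution with a temporal-then-channel attention stage. It is not the authors' implementation: the PyTorch layers, layer sizes, head count, single attention stage, and per-frame channel attention are all illustrative choices. Temporal attention is kept causal with a triangular mask, and channel attention is restricted to one frame at a time so it never looks at future samples.

# Minimal sketch (illustrative, not the paper's implementation) of one downsampling
# block: causal strided convolution followed by temporal and channel self-attention.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CausalConvDown(nn.Module):
    """Strided 1-D convolution with left-only padding, so no future samples leak in."""
    def __init__(self, in_ch, out_ch, kernel=4, stride=2):
        super().__init__()
        self.pad = kernel - stride                       # left padding keeps it causal
        self.conv = nn.Conv1d(in_ch, out_ch, kernel, stride=stride)

    def forward(self, x):                                # x: (batch, channels, time)
        return self.conv(F.pad(x, (self.pad, 0)))


class TemporalChannelAttention(nn.Module):
    """One attention stage: causal attention over time, then attention over channels."""
    def __init__(self, channels, heads=4, chan_dim=8):
        super().__init__()
        self.time_attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.chan_in = nn.Linear(1, chan_dim)            # lift each channel scalar to a small embedding
        self.chan_attn = nn.MultiheadAttention(chan_dim, 1, batch_first=True)
        self.chan_out = nn.Linear(chan_dim, 1)
        self.norm_t = nn.LayerNorm(channels)
        self.norm_c = nn.LayerNorm(channels)

    def forward(self, x):                                # x: (batch, channels, time)
        b, c, t = x.shape
        seq = x.transpose(1, 2)                          # (batch, time, channels)

        # Temporal attention with a causal mask: frame i attends only to frames <= i.
        mask = torch.triu(torch.ones(t, t, dtype=torch.bool, device=x.device), diagonal=1)
        h, _ = self.time_attn(seq, seq, seq, attn_mask=mask)
        seq = self.norm_t(seq + h)

        # Channel attention within each frame: channels attend to each other, which
        # stays causal because no information crosses time steps.
        ch = self.chan_in(seq.reshape(b * t, c, 1))      # tokens are the c channels
        h, _ = self.chan_attn(ch, ch, ch)
        seq = self.norm_c(seq + self.chan_out(h).reshape(b, t, c))
        return seq.transpose(1, 2)                       # back to (batch, channels, time)


class DownBlock(nn.Module):
    """Downsampling block: causal strided conv followed by an attention stage."""
    def __init__(self, in_ch, out_ch, heads=4):
        super().__init__()
        self.down = CausalConvDown(in_ch, out_ch)
        self.attn = TemporalChannelAttention(out_ch, heads)

    def forward(self, x):
        return self.attn(self.down(x))


if __name__ == "__main__":
    x = torch.randn(1, 32, 160)                          # (batch, channels, time)
    print(DownBlock(32, 64)(x).shape)                    # torch.Size([1, 64, 80])

In this reading, the strided convolution halves the frame rate while the attention stage reweights what the convolution kept, along time and across channels, before the next downsampling step; the paper's design allows an arbitrary number of such attention stages and heads per block.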
Authors:
Kang, Hong-Goo; Kleijn, W. Bastiaan; Skoglund, Jan; Chinen, Michael
Affiliations:
Google; Google; Google; Google
AES Convention:
155 (October 2023)
Paper Number:
10668
Publication Date:
October 25, 2023
Subject:
Signal Processing