In this article we present a Deep Neural Network (DNN)-based version of the VOCALISE (Voice Comparison and Analysis of the Likelihood of Speech Evidence) forensic automatic speaker recognition system. DNNs mark a new phase in the evolution of automatic speaker recognition technology, providing a powerful framework for extracting highly discriminative speaker-specific features from a recording of speech. The latest version of VOCALISE preserves the ‘open-box’ philosophy of its predecessors, offering the forensic practitioner flexibility in configuring and training all parts of the automatic speaker recognition pipeline. VOCALISE continues to support both legacy and state-of-the-art speaker modelling algorithms, the most recent being the ‘x-vector’ framework, a state-of-the-art approach that uses a DNN to extract compact speaker representations. Here, we introduce the x-vector framework and its implementation in VOCALISE, and demonstrate its performance on forensically relevant data.
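The x-vector approach maps a variable-length speech recording to a fixed-dimensional embedding, after which two recordings can be compared with a simple similarity measure such as cosine scoring. The sketch below illustrates that comparison step only; the 512-dimensional embeddings and the plain cosine backend are illustrative assumptions, not VOCALISE's actual implementation (which the paper describes), and the random vectors stand in for embeddings a trained DNN would extract.

```python
import numpy as np

def cosine_score(emb_a: np.ndarray, emb_b: np.ndarray) -> float:
    """Cosine similarity between two speaker embeddings.

    Higher scores indicate the two recordings are more likely
    to come from the same speaker.
    """
    a = emb_a / np.linalg.norm(emb_a)
    b = emb_b / np.linalg.norm(emb_b)
    return float(np.dot(a, b))

# Hypothetical 512-dimensional x-vectors. In a real system these would
# come from a DNN; random vectors are used here purely for illustration.
rng = np.random.default_rng(0)
xvec_reference = rng.standard_normal(512)
xvec_questioned = rng.standard_normal(512)

score = cosine_score(xvec_reference, xvec_questioned)
print(f"comparison score: {score:.3f}")
```

In practice, forensic systems typically calibrate such raw scores into likelihood ratios before they are reported; cosine scoring is shown here only as the simplest common backend for fixed-dimensional speaker embeddings.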
Authors:
Kelly, Finnian; Forth, Oscar; Kent, Samuel; Gerlach, Linda; Alexander, Anil
Affiliations:
Oxford Wave Research Ltd., Oxford, UK (Kelly, Forth, Kent, Alexander); Philipps-Universität Marburg, Germany (Gerlach)
AES Conference:
2019 AES International Conference on Audio Forensics (June 2019)
Paper Number:
27
Publication Date:
June 8, 2019
This paper is Open Access and may be downloaded for free.