Historic recordings usually suffer from degraded audio quality because of their age, improper storage, and the shortcomings of the original media. One typical problem is the presence of impulsive disturbances. Recordings affected by clicks and crackle can be processed by impulse-restoration algorithms to improve their audio quality. This report presents a new algorithm, based on supervised learning, that classifies one-second frames of an audio recording according to the presence of impulsive disturbances. It is shown that existing impulse-restoration algorithms degrade the desired signal when the input SNR is high and no manual parameter adjustment is possible, which would make automatic restoration of large amounts of diverse archive audio material infeasible. The proposed classification algorithm can be used as a supplement to an existing impulse-restoration algorithm to alleviate this drawback. An evaluation on a large number of test signals shows that high classification accuracy can be achieved, making automatic impulse restoration possible. Results also show that prewhitening the input signal by means of a phase-only transform increases the detectability of disturbance impulses, so the transform can additionally serve as a detection enhancement for impulse-restoration algorithms.
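As a rough illustration of the prewhitening step mentioned in the abstract, the sketch below applies a phase-only transform to a one-second frame: the DFT magnitude is flattened to unity while the phase is kept, which whitens the tonal content and lets a broadband click stand out. This is a minimal sketch of the general technique, not the authors' implementation; the function name, sampling rate, and test signal are illustrative assumptions.

```python
import numpy as np

def phase_only_transform(frame: np.ndarray) -> np.ndarray:
    """Whiten a signal frame by discarding its magnitude spectrum.

    The DFT magnitude is set to unity (phase-only transform), so broadband
    impulsive events, whose energy is spread across all frequency bins, are
    emphasized relative to narrowband tonal components.
    """
    spectrum = np.fft.rfft(frame)
    magnitude = np.abs(spectrum)
    magnitude[magnitude == 0.0] = 1.0  # guard against division by zero
    # Keep the phase, flatten the magnitude to 1.
    return np.fft.irfft(spectrum / magnitude, n=len(frame))

# Example (hypothetical signal): a click buried in a sinusoid becomes
# prominent after whitening.
fs = 44100                            # sampling rate in Hz (assumption)
t = np.arange(fs) / fs                # one-second frame, as in the paper
frame = 0.5 * np.sin(2 * np.pi * 440.0 * t)
frame[fs // 2] += 0.5                 # synthetic impulsive disturbance
whitened = phase_only_transform(frame)
print(np.argmax(np.abs(whitened)))    # peak at the click position, fs // 2
```

In the whitened output, the sinusoid no longer dominates the energy, so even a weak impulse produces a clear local peak; this is the effect the paper exploits for detection enhancement.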
Authors:
Brandt, Matthias; Doclo, Simon; Gerkmann, Timo; Bitzer, Joerg
Affiliations:
University of Oldenburg, Dept. of Medical Physics and Acoustics and Cluster of Excellence Hearing4all, Oldenburg, Germany; University of Hamburg, Dept. of Informatics, Signal Processing Group, Hamburg, Germany; Jade University of Applied Sciences, Oldenburg, Germany
JAES Volume 65 Issue 10 pp. 826-840; October 2017
Publication Date:
October 30, 2017