Spectroscopic food analysis has been studied extensively over many years and remains an active topic of research. In this paper, we propose a spectroscopic diagnostic method that generates an audio output in order to discriminate between two classes of data, based on the features of spectral datasets. To do this, we first perform spectral pre-processing and extract appropriate features from the spectra, and then apply different selection criteria to narrow down the number of features selected. To optimise the process, we compare three selection criteria, applied to two spectroscopic food datasets, in order to evaluate the performance of sonification as a method for discriminating data. Lastly, the salient features are mapped to the parameters of a frequency modulation (FM) synthesizer to generate audio samples. The results indicate that the model is able to provide relevant auditory information and, most importantly, allows users to discriminate consistently between two classes of spectral data. This sonification of spectroscopic data is shown to be useful to food analysts as a new method of investigating the purity of food.
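The abstract does not specify the paper's actual feature-to-parameter mapping, so as an illustrative sketch only, a mapping from selected spectral features to the parameters of a simple FM synthesizer might look like the following (all feature names, ranges, and scalings here are hypothetical, not taken from the paper):

```python
import numpy as np

def fm_synth(carrier_hz, mod_hz, mod_index, dur=1.0, sr=44100):
    """Basic two-operator FM: y(t) = sin(2*pi*fc*t + I*sin(2*pi*fm*t))."""
    t = np.arange(int(dur * sr)) / sr
    return np.sin(2 * np.pi * carrier_hz * t
                  + mod_index * np.sin(2 * np.pi * mod_hz * t))

# Hypothetical selected features, normalised to [0, 1] after pre-processing.
features = {"peak_position": 0.62, "band_ratio": 0.35, "spectral_slope": 0.80}

# Illustrative linear mappings onto audible FM parameter ranges.
audio = fm_synth(
    carrier_hz=220 + 660 * features["peak_position"],  # 220-880 Hz
    mod_hz=50 + 450 * features["band_ratio"],          # 50-500 Hz
    mod_index=1 + 9 * features["spectral_slope"],      # 1-10
)
```

In such a scheme, two classes of spectra would yield audibly different timbres because their feature values drive different modulation depths and frequencies, which is the premise the sonification approach relies on.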
Authors: Kew, Hsein; Stables, Ryan
Affiliation: Digital Media Technology Lab (DMT Lab), Birmingham City University, UK
AES Convention: 149 (October 2020) Paper Number: 10393
Publication Date: October 22, 2020
Subject: Audio Applications and Technologies