Food safety is a global concern, and with the rise of automation, novel methods of categorising, sorting, and discriminating food types are being explored. These techniques require a reliable way of rapidly identifying food sources. In this paper, we propose a method of spectroscopic food analysis in which audio is generated from spectra, allowing users to discriminate between two classes of a given food type. To do this, we develop a system that first extracts features and applies dimensionality reduction, then maps the resulting components to the parameters of a synthesizer. To optimise the process, we compare Amplitude Modulation (AM) and Frequency Modulation (FM) synthesis on two real-life datasets, evaluating sonification as a method for discriminating data. The results indicate that the model provides relevant auditory information and, most importantly, allows users to consistently discriminate between two classes of spectral data. This provides a complementary tool to supplement current food detection methods.
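As a rough illustration of the pipeline the abstract describes, the sketch below reduces a set of spectra to a few components and maps them to the parameters of a simple two-operator FM synthesizer. The choice of PCA for dimensionality reduction, the parameter ranges, and the component-to-parameter mapping are all assumptions made for illustration; the paper's actual features, reduction method, and mappings are not specified here.

```python
# Minimal sketch of a spectra-to-sound pipeline, assuming PCA for
# dimensionality reduction and a two-operator FM synthesizer. The
# parameter ranges below are illustrative, not the paper's values.
import numpy as np
from sklearn.decomposition import PCA

SR = 44100   # audio sample rate (Hz)
DUR = 1.0    # duration of each rendered tone (s)

def spectra_to_params(spectra, n_components=3):
    """Reduce raw spectra to a few components, then rescale each
    component into a plausible FM-parameter range."""
    reduced = PCA(n_components=n_components).fit_transform(spectra)
    lo, hi = reduced.min(axis=0), reduced.max(axis=0)
    norm = (reduced - lo) / (hi - lo + 1e-12)      # normalise to [0, 1]
    carrier = 220.0 + norm[:, 0] * 660.0           # carrier freq: 220-880 Hz
    ratio = 0.5 + norm[:, 1] * 3.5                 # modulator/carrier ratio
    index = norm[:, 2] * 10.0                      # modulation index
    return carrier, ratio, index

def fm_tone(fc, ratio, index, sr=SR, dur=DUR):
    """Render one FM tone: y(t) = sin(2*pi*fc*t + index*sin(2*pi*fm*t))."""
    t = np.arange(int(sr * dur)) / sr
    fm = fc * ratio
    return np.sin(2 * np.pi * fc * t + index * np.sin(2 * np.pi * fm * t))

# Example: sonify two synthetic classes of spectra whose peaks sit at
# different wavelengths, so the rendered tones should sound distinct.
rng = np.random.default_rng(0)
bins = np.linspace(0, 1, 128)
class_a = np.exp(-((bins - 0.3) ** 2) / 0.01) + 0.05 * rng.normal(size=(10, 128))
class_b = np.exp(-((bins - 0.6) ** 2) / 0.01) + 0.05 * rng.normal(size=(10, 128))
spectra = np.vstack([class_a, class_b])

carrier, ratio, index = spectra_to_params(spectra)
tones = [fm_tone(fc, r, i) for fc, r, i in zip(carrier, ratio, index)]
```

An AM variant of the same mapping would instead drive a carrier's amplitude with a low-frequency modulator, e.g. (1 + depth * sin(2*pi*fm*t)) * sin(2*pi*fc*t); the abstract compares the two schemes but does not fix either mapping, so the above should be read as one plausible instantiation.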
Authors: Kew, Hsein; Stables, Ryan
Affiliation: Digital Media Technology Lab (DMT Lab), Birmingham City University, UK
AES Convention: 149 (October 2020)
Paper Number: 10394
Publication Date: October 22, 2020
Subject: Audio Applications and Technologies