Semantic Audio is an emerging field at the intersection of signal processing, machine learning, knowledge representation, and ontologies, unifying techniques from audio analysis and the Semantic Web. These techniques enable new applications and user experiences for music communities. We present a case study of what Semantic Audio can offer to a particular fan base, that of the Grateful Dead, characterized by a strong affinity with technology and the internet. We discuss an application that combines information drawn from existing platforms with results from the automatic analysis of audio content to infer higher-level musical information, providing novel user experiences, particularly in the context of live music events.
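To illustrate the kind of combination the abstract describes, the following is a minimal sketch, not the system presented in the eBrief: it retrieves performance metadata via SPARQL and pairs it with automatically extracted audio features. The endpoint URL, the use of the Music Ontology terms, and the audio file name are hypothetical placeholders.

# Minimal sketch (assumed workflow, not the authors' implementation):
# combine Linked Data metadata about live performances with features
# extracted automatically from an audio recording.
from SPARQLWrapper import SPARQLWrapper, JSON
import librosa

# 1. Query a (hypothetical) Linked Data endpoint for performance metadata.
sparql = SPARQLWrapper("https://example.org/sparql")  # placeholder endpoint
sparql.setQuery("""
    PREFIX mo: <http://purl.org/ontology/mo/>
    PREFIX dc: <http://purl.org/dc/elements/1.1/>
    SELECT ?performance ?date WHERE {
        ?performance a mo:Performance ;
                     dc:date ?date .
    } LIMIT 10
""")
sparql.setReturnFormat(JSON)
performances = sparql.query().convert()["results"]["bindings"]

# 2. Analyse audio content to infer higher-level musical information,
#    here a global tempo estimate and beat positions.
y, sr = librosa.load("gd_1977-05-08_track01.wav")  # placeholder file
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)

# 3. Merge both sources into a single record an application could expose.
for p in performances:
    print(p["performance"]["value"], p["date"]["value"],
          "estimated tempo:", tempo)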
Authors:
Wilmering, Thomas; Thalmann, Florian; Fazekas, György; Sandler, Mark B.
Affiliations:
Centre for Digital Music (C4DM), Queen Mary University of London, London, UK
AES Convention:
143 (October 2017)
eBrief:
387
Publication Date:
October 8, 2017
Subject:
Posters—Part 2