Sonification can be defined as any technique that translates data into non-speech sound using a systematic, describable, and reproducible method, in order to reveal or facilitate the communication, interpretation, or discovery of meaning latent in the data. This paper describes an approach to communicating cross-cultural differences in sentiment data through sonification, a technique well suited to translating patterns into sounds that are understandable, accessible, and musically pleasant. A machine-learning classifier was trained on sentiment information from two samples of Tweets, from Singapore and New York, containing the keyword "happiness." Positive-valence words related to the concept of happiness influenced the classifier more strongly than negative words. For the mapping, differences in Tweet frequency of the semantic variable "anticipation" affected tempo, "positive" affected pitch, "joy" affected loudness, and "trust" affected rhythmic regularity. In a listening experiment, the authors evaluated sonifications of the original data from the two cities alongside a control condition generated from random mappings. Results suggest that the original sonifications were rated as significantly more pleasant.
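The parameter mapping described in the abstract (anticipation to tempo, positive to pitch, joy to loudness, trust to rhythmic regularity) can be sketched as a simple function from normalized sentiment frequencies to sound-synthesis parameters. This is a minimal illustrative sketch only: the parameter ranges, linear scaling, and function names below are assumptions, not the authors' actual mapping values.

```python
def lerp(lo, hi, x):
    """Linearly interpolate between lo and hi, clamping x to [0, 1]."""
    return lo + (hi - lo) * max(0.0, min(1.0, x))

def map_sentiment_to_sound(anticipation, positive, joy, trust):
    """Map normalized (0..1) sentiment frequencies to sound parameters.

    Mapping follows the abstract:
      anticipation -> tempo (BPM)
      positive     -> pitch (MIDI note number)
      joy          -> loudness (nominal dB)
      trust        -> rhythmic regularity (0 = free, 1 = strictly metric)
    All numeric ranges here are hypothetical placeholders.
    """
    return {
        "tempo_bpm": lerp(60, 140, anticipation),
        "pitch_midi": lerp(48, 84, positive),
        "loudness_db": lerp(50, 80, joy),
        "regularity": lerp(0.0, 1.0, trust),
    }

# Example: a hypothetical city profile with high anticipation and joy.
params = map_sentiment_to_sound(0.8, 0.5, 0.9, 0.3)
print(params)
```

A real sonification would feed these parameters into a synthesis engine (e.g. a MIDI or audio renderer); the point of the sketch is only the systematic, reproducible data-to-parameter mapping that the definition of sonification requires.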
Authors:
Liew, Kongmeng; Lindborg, PerMagnus
Affiliations:
Kyoto University, Kyoto, Japan; Seoul National University, Seoul, South Korea
JAES Volume 68 Issue 1/2 pp. 25-33; January 2020
Publication Date:
February 5, 2020