Melodic Emotions
Insight and Prediction
Pauline Mouawad
pmouawad@ndu.edu.lb
Supervisors: Robin Laney, Chris Dobbyn
Department/Institute: Computing Department
Status: Part-time Overseas
Probation viva: After
Starting date: October 2009
Research Question
What is the role of melody in conveying emotions to listeners, and are there
particular characteristics of melody that would allow us to predict whether other
melodic lines carrying those characteristics would have the same emotional impact
on listeners?
The presentation will first review some music information retrieval approaches,
then give a general idea of the correlation of melody with emotions, followed by a
discussion of the influence of melody on emotions through the findings of a pilot
study. Future work will investigate the second part of the research question.
Related Works
The large collection of musical audio available online necessitates methods to find
and retrieve musical recordings relevant to the user’s need. This requires the
development of efficient music retrieval systems.
Several approaches have been adopted to address problems in music information
retrieval; namely, retrieval by metadata, retrieval through social tags and content-
based retrieval by analysis of audio signals.
Retrieval by metadata involves searching for and retrieving music by artist
information, release date, or music title. This has limitations because many of
these descriptions have been created by people with very different encoding concepts.
Furthermore, current commercial systems that rely heavily on metadata do not
provide users with options to find music they don't already know, or music they
know but do not know how to search for.
Social tags are free, informal, user-generated text labels applied to music in
order to describe its content, such as the singer, composer, story subject, and
emotions conveyed. Tags are not drawn from a controlled vocabulary; they are
generally supplied by a community of internet users to help find a music item by
browsing or searching. This type of tagging is collaborative: any user can tag any
resource and view others' tags and resources. Their significance derives from the
fact that they capture a considerable amount of musically relevant information as
perceived by the listener. Limitations of social tags include the cold-start
problem, spelling variants, irrelevant tags, malicious tags and tagger bias.
Content-based retrieval involves analyzing the audio signal in an attempt to find
information about a musical work and music performance. A major interest in this
field is the ability to query and process large quantities of music without prior
metadata knowledge, so there is a need for automatic extraction of content related
to various music features. This is a bottom-up strategy: it starts from the audio
signal and works upward to high-level content descriptions such as genre and
emotion.
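To make the bottom-up idea concrete, the sketch below computes a few low-level descriptors directly from a raw audio signal. This is an illustrative toy example, not part of the study's method: the feature choices (RMS energy, zero-crossing rate, spectral centroid) and the synthesized test tone are assumptions for demonstration.

```python
import numpy as np

def low_level_features(signal, sr):
    """Compute a few low-level descriptors from a raw audio signal."""
    # RMS energy: a rough proxy for perceived loudness.
    rms = np.sqrt(np.mean(signal ** 2))
    # Zero-crossing rate: a crude indicator of noisiness/brightness.
    zcr = np.mean(np.abs(np.diff(np.sign(signal)))) / 2
    # Spectral centroid: the "centre of mass" of the magnitude spectrum, in Hz.
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    centroid = np.sum(freqs * spectrum) / np.sum(spectrum)
    return {"rms": rms, "zcr": zcr, "centroid": centroid}

# Toy input: one second of a 440 Hz sine tone at a 22050 Hz sample rate.
sr = 22050
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 440 * t)
features = low_level_features(tone, sr)
```

Higher-level descriptions such as genre or emotion would then be inferred from vectors of such features, typically by a trained classifier.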
It is believed that the emotion experienced by music listeners is among the main
reasons why they seek music. As a result, considerable research effort has gone
into understanding which specific music features induce listeners' emotions and
motivate them to seek out music.
Methods used
Various music features have been studied and it has been found that some music
features with bipolar attributes are closely related to the emotions experienced by
listeners. Among the most emotionally influential music features are mode, tempo,
loudness, pitch and melodic contour [1-6].
In order to find dependencies between objective sound parameters and subjective
emotional labels as assigned by the listeners, musical features should be extracted.
Some of the methods used [3, 4] consist of collecting music data such as MP3 music,
labeling the excerpts with emotional terms then extracting the music features such as
pitch and finally classifying music segments according to emotional labels for
algorithm training purposes. Some studies have used the Matlab MIRtoolbox to
extract features such as melody, pitch and timbre.
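The pipeline described above (label excerpts with emotional terms, extract features, then classify) can be sketched as a minimal nearest-centroid classifier. All feature values and labels below are invented for illustration; a real system would use features extracted from actual audio.

```python
import numpy as np

# Hypothetical training data: each excerpt reduced to a feature vector
# (tempo in BPM, mean pitch in Hz) and hand-labelled with an emotion.
train = [
    ([140.0, 520.0], "happy"),
    ([132.0, 480.0], "happy"),
    ([ 62.0, 220.0], "sad"),
    ([ 70.0, 260.0], "sad"),
]

def centroids(data):
    """Average the feature vectors belonging to each emotion label."""
    by_label = {}
    for vec, label in data:
        by_label.setdefault(label, []).append(vec)
    return {label: np.mean(vecs, axis=0) for label, vecs in by_label.items()}

def classify(vec, cents):
    """Assign the label of the nearest centroid (Euclidean distance)."""
    return min(cents, key=lambda lbl: np.linalg.norm(np.array(vec) - cents[lbl]))

cents = centroids(train)
label = classify([128.0, 500.0], cents)  # a fast, high-pitched excerpt -> "happy"
```

Published studies typically use richer feature sets and stronger classifiers, but the label-extract-train structure is the same.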
Findings
Mode is found to be particularly relevant for conveying happiness or sadness,
through the major and minor modes respectively. Tempo is believed to influence
emotions, with fast tempi associated with happy feelings and slow tempi with sad
ones. Loud music is perceived as happy, while softer music is perceived as sad or
peaceful.
Melodic line is believed to have the potential to convey discriminable emotional
meaning [1]. Although there are only two characteristics that belong exclusively to
melodic line – up and down movement of pitch – there are several ways in which
melodic lines can vary with respect to these characteristics [1]. This may be why
melodic line has so far failed to provide clear-cut results concerning the emotions
conveyed.
Some studies have shown that when the emotions in the melody conflict with the
emotions expressed by the lyrics, for example, the emotions of the melody are dominant
[7]. Furthermore, a previous study has shown that melody is a dominant music feature
in conveying emotions and that it conveys emotional meaning in a principled way [1].
Particularly, it conveys basic emotions such as happiness, sadness and fear. This
suggests that there must be attributes of music which are consistently associated with
certain emotions and which can therefore serve as the basis of a code communicating
the intentions of the composer to the listener [1].
A pilot study was done in an attempt to better understand the emotions perceived by
listeners from melodies.
Pilot study and Method used
The study asked users to listen to 11 music excerpts and then to choose, from
lists of predefined labels, the emotion and mood perceived, the genre of the music,
the instruments recognized, and the colour they might associate with the music.
The music corpus was chosen randomly; the excerpts were 30-45 second recordings of
instrumental music, including the introductory segments, with no associated lyrics.
Most of the excerpts were of the classical genre. There were five users, aged 27 to
33, from broadly similar cultural backgrounds. The predefined emotion labels were
happy, sad, angry, dreamy, depressing and passionate. The mood labels were content,
melancholic-blue, anger-choleric, nostalgic-happy, nostalgic-sad, pensive, dark and
zealous-energetic. The instrument list included guitar, piano, violin or
orchestral, and the genre labels were classic, jazz, pop, oldies, disco, rock and
country. Under any category, the user could enter 'I don't know'.
Findings
According to the study, basic emotions such as happy, sad and dreamy received the
highest ratings, while other emotions such as depressing and passionate received
the lowest. This confirms previous findings that basic emotions are more dominantly
perceived in music by listeners.
A close correlation between emotion and mood labels was found; emotional labels
‘sad’, ‘depressing’, ‘happy’ and ‘dreamy’ were highly associated with mood labels
‘nostalgic-sad’, ‘melancholic-blue’, ‘zealous-energetic’ and ‘pensive’ respectively.
An emotion-mood valence table was built, which showed that 9 of the 11 excerpts
received positive-emotion, positive-mood labels. These findings suggest that
listeners closely correlate their emotional state with their mood, and that it
would be possible to build an emotion-mood space to allow for a greater set of labels.
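A valence table of this kind can be tallied directly from per-excerpt responses. The responses and the valence assignments below are invented for illustration; they are not the pilot study's data.

```python
from collections import Counter

# Hypothetical per-excerpt (emotion, mood) responses.
responses = [
    ("happy", "zealous-energetic"),
    ("sad", "nostalgic-sad"),
    ("dreamy", "pensive"),
    ("happy", "zealous-energetic"),
    ("depressing", "melancholic-blue"),
]

# Valence assigned to each label (an assumption for illustration).
valence = {
    "happy": "+", "dreamy": "+", "zealous-energetic": "+", "pensive": "+",
    "sad": "-", "depressing": "-", "nostalgic-sad": "-", "melancholic-blue": "-",
}

# Cross-tabulate emotion valence against mood valence.
table = Counter((valence[e], valence[m]) for e, m in responses)
```

Entries off the diagonal of such a table, e.g. `("+", "-")`, would flag excerpts where emotion and mood valence disagree.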
The ‘genre’ category received high ratings for ‘classic’ and ‘oldies’. This is
largely because most of the music corpus belonged to these genres, and may also
reflect the users’ familiarity with classical music. Conversely, ‘I don’t know’
also received a high rating, which indicates that users had difficulty matching the
music they heard to its genre.
The ‘blue’ label in the ‘Colour’ category was associated with ‘sad, depressing’
music, while the ‘pink’ and ‘red’ labels were highly associated with ‘happy’ music.
This indicates that users made intuitive visual associations between colour and the
emotions perceived, possibly owing to their common cultural background.
Users were also asked to describe in their own words the excerpts they listened to;
30 out of 55 labels were affective descriptions such as ‘lovely’, ‘disturbing’,
‘sad’, ‘miserable’, ‘nice but sad’, ‘innocence’, ‘glass of whisky’, ‘uplifting’,
‘let’s dance Greek’, etc. Only 14 were objective labels such as a music title
(‘Edelweiss’, ‘Belle’, ‘Love Story’), ‘seventies’ or ‘I don’t know’. This
highlights listeners’ strong emotional involvement with music and emphasizes the
importance of emotion in music selection.
Finally, users were asked how they would search for each music item on the
internet. Two out of five users did not provide answers, possibly because they were
not familiar with the music and no metadata was provided; this may suggest that
music without textual information is less accessible to users than music with
metadata. Of the 33 answers given, 16 would search by instrument, 5 of which were
associated with mood and 2 of which included emotion, suggesting that users may
correlate the instrument with mood or emotion. Two out of 33 would search by
‘mood’, 1 by ‘mood and emotion’, and 1 by ‘mood and genre’; 11 out of 33 would
search by textual description if available. The findings suggest that the
instrument is the primary mode of search for music without textual descriptions.
Future work
In future work, we shall investigate whether there are particular characteristics
in melody that induce particular emotions and that can be used for prediction. To
that end we will select short music samples, compare happy versus sad and
low-pitched versus high-pitched ones, and try to determine what in the melody may
transform it from happy to sad, and whether these characteristics could be used for
prediction.
References
1. Melodic Line and Emotion: Cooke’s Theory Revisited, 2000.
2. Emotion and the Experience of Listening to Music, 2001.
3. A. Wieczorkowska, Towards Extracting Emotions from Music, 2005.
4. A. Wieczorkowska, Extracting Emotions from Music Data, 2005.
5. A. Goldman, Emotions in Music, 1995.
6. Extraction of Emotional Content from Music Data, 2008.
7. J. Robinson, The Expression and Arousal of Emotions in Music.
8. S. O. Ali, Songs and Emotions: Are Lyrics and Melodies Equal Partners?, 2006.
9. Fritz, Universal Recognition of Three Basic Emotions in Music, 2009.
10. Lundqvist, Carlsson, Hilmersson, P. Juslin, Emotional Responses to Music:
Experience, Expression and Physiology, 2009.
11. G. Kreutz, Using Music to Induce Emotions: Influences of Musical Preference
and Absorption, 2008.
12. P. Evans, Relationship between Expressed and Felt Emotions in Music, 2006.