A Whole New World—Exploring Emotion in Music

Author: Rachel Brodsky || Scientific Reviewer: Kirvani Buddhiraju || Lay Reviewer: Ola Szmacinski || General Editor: Katie Calaku || Artist: Sam Hobson || Graduate Scientific Reviewer: Maria Brucato

Publication Date: May 10, 2021

 

When Disney movies open with a murmur of classical music, crescendoing into powerful waves, they immediately transport us to animated lands of princes and princesses, talking animals and evil stepmothers. Or if you haven’t watched a Disney movie in a while, what about the recent allure of the sea shanty? When listening to the now-familiar rhythms of the folk songs that traditionally accompanied laborious tasks at sea, we imagine ourselves on a 19th-century ship, helping to raise the sail or hoist the anchor. How do we create entire worlds for ourselves, whether familiar or from centuries past, based on the music we hear? Human perception of music is influenced by pitch, key, tempo, and other factors, which evoke emotion by activating the limbic and paralimbic systems [1, 2]. That said, the whole story behind music is still being sounded out, and some current theories are explored below.

In order to understand music’s role in our interpretation and imagination of fantastical worlds, we should first consider its evolution throughout and alongside human development. In terms of brain structure, non-human primates devote a larger area to visual processing than humans do, while Homo sapiens dedicate a greater area to processing auditory information [3]. This suggests that as modern humans evolved, we became less dependent on observing our world and more reliant on our advanced language system to inform our lives. Sounds are an explicit method of communication, and humans are exposed to rhythms from the earliest stages of life, starting with the pulsing cocoon of the mother’s heartbeat. The heartbeat not only signals life by its very nature, it also inspires music and its design. Accordingly, the earliest forms of music were likely the beating of hands or drums, but as the voice offered a wider range of manipulation, it was incorporated as well [4].

Evolution may also have favored the ability to synchronize with a musical beat. This would have been particularly useful when cooperation was needed for safety or progress, such as the coordinated construction of shelter [5]. The beat that humans naturally identify when listening to a composition is called the “tactus.” Aside from tactus, several other characteristics influence the interpretation of a piece. Tempo, the pace of the music, is closely related to tactus, and variations in tempo are interpreted similarly across cultures. Music with a slower tempo is often perceived as melancholy or contemplative, whereas a faster tempo signals happiness and heightened levels of activity [3]. Even young children, around the age of five, can sense the different emotions associated with varying tempos [5]. This further hints at a possible human inclination for interpreting rhythmic sound.

 
[Illustration: Heartbeat]
 

In music common to Western culture, there are major and minor keys, and the choice between them plays a large part in shaping perception. A key is the general group of notes, or scale, that forms the basis of a composition, and it is generally accepted that music in a major key feels upbeat and positive, whereas minor keys sound darker and sadder [6, 7]. In addition, music with a slower tempo is perceived as sadder than music with a faster tempo. To illustrate these differences, think of “Happy” by Pharrell Williams, written in F major with a fast tempo, versus “Take Me To Church” by Hozier, written in E minor with a slower tempo. Keys are distinguished by the relative ratios of the frequencies they contain, and the difference between major and minor centers on the third note of the scale [8, 9]. The major third has a frequency ratio of 5/4, roughly 400 cents (a cent is a unit of pitch; 1200 cents make up an octave), whereas the minor third has a ratio of 6/5, roughly 300 cents [10]; a quick check of these numbers appears in the sketch below. In a study [11] testing the correlations between the emotions associated with musical keys and those expressed in speech, nine actresses were recorded saying four bi-syllabic phrases, such as “Let’s go,” each in four emotional tones (anger, happiness, pleasantness, and sadness). When the recordings for sadness were analyzed, the actresses’ pitch intervals clustered around -300 cents, corresponding to a descending minor third, one that starts on a higher pitch and falls [11]. Since the musical minor third is typically associated with melancholic emotions, its presence in spoken communication may signal a common basis between language and music. The ‘anger’ analysis also revealed two distinct peaks; in contrast to the previous pattern, both corresponded to ascending intervals, with pitch rising throughout the utterances [11]. While sadness and anger had fairly strong interval signatures, pleasantness and happiness showed less clear patterns, which may indicate that frequency ratios are more critical to the transmission of negative or urgent emotion [11]. It is unclear why this is the case, but one possibility is that the consequences of missing a negative or urgent message are typically worse than those of missing a positive remark, so this sensitivity may have developed as an evolutionary safeguard. That leaves other factors as potentially more influential on the perception of positive emotion.
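For readers who want to check these numbers, a frequency ratio converts to cents with a single formula: cents = 1200 × log2(ratio). The short Python sketch below is a worked example added here for illustration, not part of the original article; it shows that the exact just-intonation values come out near 386 and 316 cents, close to the equal-tempered 400 and 300 cents used as shorthand in the text.

```python
import math

def ratio_to_cents(ratio: float) -> float:
    """Convert a frequency ratio to cents: 1200 cents span one octave."""
    return 1200 * math.log2(ratio)

# Just-intonation thirds mentioned in the text
print(f"major third (5/4): {ratio_to_cents(5 / 4):.1f} cents")  # ~386.3, near the equal-tempered 400
print(f"minor third (6/5): {ratio_to_cents(6 / 5):.1f} cents")  # ~315.6, near the equal-tempered 300
```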

There are also theories that the pleasure of listening to music centers on the generation of expectations about how a piece will progress and the fulfillment, or subversion, of those predictions. The limbic system within the brain is responsible for emotional responses to stimuli, and the mesolimbic dopamine pathway in particular is theorized to be critical for processing pleasurable and rewarding experiences [2]. A significant motivating factor in much of what we do is the potential reward of dopamine, a neurotransmitter responsible in large part for positive emotions [12]. Because humans tend to pursue pleasing experiences, dopamine plays a role in many actions and decisions throughout life [12]. Midbrain dopamine neurons anticipate potential reward and are theorized to “encode the degree to which an outcome matches expectations” [2]. This feedback refines the brain’s predictive model and increases the likelihood of engaging in rewarding activity and receiving a rush of dopamine. The degree to which the outcome exceeds the expectation determines the strength of the response and the amount of neurotransmitter released. In the case of music, the buildup to the dopamine release comes from the brain’s activity as it interprets incoming auditory information: while someone listens to a piece, their brain recognizes patterns in the sounds and, in turn, creates a projection of how the rest will unfold [2, 13].
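To make the idea of “encoding the degree to which an outcome matches expectations” concrete, here is a minimal toy sketch of a prediction-error update. It illustrates the general concept only, not any model from the cited studies; the reward values and learning rate are arbitrary assumptions.

```python
def prediction_error(expected: float, actual: float) -> float:
    """Positive when the outcome exceeds expectation, negative when it falls short."""
    return actual - expected

def update_expectation(expected: float, actual: float, learning_rate: float = 0.2) -> float:
    """Nudge the expectation toward the observed outcome, refining future predictions."""
    return expected + learning_rate * prediction_error(expected, actual)

# Hypothetical "reward" values for a series of musical moments, some expected, one surprising
expectation = 0.0
for outcome in [1.0, 1.0, 0.2, 1.0]:
    surprise = prediction_error(expectation, outcome)
    expectation = update_expectation(expectation, outcome)
    print(f"surprise: {surprise:+.2f}  updated expectation: {expectation:.2f}")
```

In this toy run, the surprise values shrink as the expectation catches up with the outcomes, loosely mirroring how repeated exposure dulls, but does not eliminate, the response to a familiar passage.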

 
[Illustration: Path]
 

Our brains can anticipate the structure of music mainly through knowledge of its genre, recognition of common rhythms, and prior experience with the specific piece. These predictive mental images are built on familiarity and experience, which encompass both the implicit and explicit aspects of music. It might seem that a song would stop bringing joy after enough listens, but because our mental representations are often incomplete, we can still enjoy a piece of music we know well. The complexity of many compositions means that the brain may not predict every part perfectly. This results in minor elements of surprise or, when the mental construction is accurate, the fulfillment of hearing a desirable sound [2]. As the brain’s expectations are fulfilled or subverted by a surprising turn, dopamine is released and triggers positive emotions [14].

The exact processes behind how and why humans perceive music the way they do are still being investigated, but several theories, and the evidence behind them, provide a strong framework for future research. The similarities between traditionally “darker” music (by Western standards) and the sad and angry tones present in speech point to connections between the perception of music and of language. Positive emotions, on the other hand, are thought to result from mental expectations about a piece’s progression and the fulfillment or subversion of those expectations. While composers follow a more organic path rather than writing music based on specific scientific discoveries, the next time you’re listening to a sea shanty, pause to consider what your brain is interpreting as you’re transported to a fantasy land.


References

  1. Blood, A., Zatorre, R., Bermudez, P., & Evans, A.C. (1999). Emotional responses to pleasant and unpleasant music correlate with activity in paralimbic brain regions. Nature Neuroscience, 2, 382–387. https://doi.org/10.1038/7299

  2. Salimpoor, V.N., Zald, D.H., Zatorre, R.J., Dagher, A., & McIntosh, A.R. (2014). Predictions and the brain: How musical sounds become rewarding. Trends in Cognitive Sciences, 19(2), 86-91. https://doi.org/10.1016/j.tics.2014.12.001

  3. Trimble, M., & Hesdorffer, D. (2017). Music and the brain: The neuroscience of music and musical appreciation. BJPsych International, 14(2), 28–31. https://doi.org/10.1192/s2056474000001720

  4. Falk, D. (2018). The music moves us — but how? Knowable Magazine. https://knowablemagazine.org/article/mind/2018/music-moves-us-how

  5. Levitin, D. J., Grahn, J. A., & London, J. (2018). The psychology of music: Rhythm and movement. Annual Review of Psychology, 69(1), 51–75. https://doi.org/10.1146/annurev-psych-122216-011740

  6. Chase, S. (2021). What Is A Key in Music? A Complete Guide. Hello Music Theory: Learn Music Theory Online. https://hellomusictheory.com/learn/keys/

  7. Poon, M., & Schutz, M. (2015). Cueing musical emotions: An empirical analysis of 24-piece sets by Bach and Chopin documents parallels with emotional speech. Frontiers in Psychology, 6(1664), 1419. https://doi.org/10.3389/fpsyg.2015.01419

  8. Wikimedia Foundation. (2021). Major third. Wikipedia. https://en.wikipedia.org/wiki/Major_third

  9. Wikimedia Foundation. (2021). Minor third. Wikipedia. https://en.wikipedia.org/wiki/Minor_third#cite_note-2

  10. Harlan, B., & Chidambaram, A. (n.d.). Harry Partch Ratio Representation Project. http://www-classes.usc.edu/engr/ise/599muscog/2004/projects/harlan-chidambaram/ratios.htm

  11. Curtis, M.E., & Bharucha, J.J. (2010). The minor third communicates sadness in speech, mirroring its use in music. Emotion, 10(3), 335-48. https://doi.org/10.1037/a0017928

  12. Bhandari, S. (2019). Dopamine: What it is & what it does. WebMD. https://www.webmd.com/mental-health/what-is-dopamine#1-1

  13. Zatorre, R. J., & Salimpoor, V. N. (2013). From perception to pleasure: Music and its neural substrates. Proceedings of the National Academy of Sciences of the United States of America, 110(Supplement 2), 10430–10437. https://doi.org/10.1073/pnas.1301228110

  14. Shany, O., Singer, N., Gold, B. P., Jacoby, N., Tarrasch, R., Hendler, T., & Granot, R. (2019). Surprise-related activation in the nucleus accumbens interacts with music-induced pleasantness. Social Cognitive and Affective Neuroscience, 14(4), 459–470. https://doi.org/10.1093/scan/nsz019

 