Priyanka Thirumurti

How Music is Processed in the Brain and Its Role in Brain Connectivity

Updated: Nov 3


How the Brain Perceives and Processes Music

To grasp how the brain perceives and processes music, it helps to start with some basics: how sound is picked up by the ears, key music theory concepts like pitch and harmony, and how the brain responds to musical events. Scientists measure these responses as event-related potentials, tiny brain signals evoked by sounds. Research into how the brain reacts to music has a fascinating history, especially through methods like electrophysiology, which records electrical activity in the brain to show how we process musical sounds in real time.



How the Ear Processes Sound Waves





When sound waves, like music, reach our ear, they travel down the ear canal until they hit the eardrum (also called the tympanic membrane), setting it in motion. This movement then travels through the middle ear bones—the malleus, incus, and stapes—which amplify the vibrations (small middle ear muscles can also dampen very loud sounds). These vibrations create pressure changes in the fluid-filled inner ear, specifically within the cochlea’s three main chambers: the scala vestibuli, scala media, and scala tympani.


Inside the cochlea, a special structure called the Organ of Corti converts these pressure changes into electrical signals that the brain can interpret. This conversion happens through hair cells whose tiny bundles bend back and forth with the fluid motion. Bending in one direction makes the hair cells fire more; bending the other way quiets them down, creating the signals needed for us to perceive sound.


Another important part of the cochlea is the basilar membrane, which responds to different pitches along its length. The narrow, stiff part at the base picks up high-pitched sounds, while the wider, more flexible part at the apex responds to lower-pitched sounds. This arrangement allows us to detect a wide range of frequencies and appreciate the full spectrum of sounds in music.
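
One common way to summarize this tonotopic layout is the Greenwood place-frequency function. The sketch below is a rough illustration using the commonly cited constants for the human cochlea; treat the numbers as an approximation, not as values taken from the sources cited here:

```python
import numpy as np

def greenwood_frequency(x):
    """Approximate best frequency (Hz) at a position along the human
    basilar membrane, using the Greenwood place-frequency function.

    x: position as a fraction of membrane length, measured from the
    apex: 0.0 = apex (wide, flexible, low frequencies) up to
    1.0 = base (narrow, stiff, high frequencies).
    """
    # Commonly cited human constants for the Greenwood function
    # (an assumption here, not from the article's sources).
    A, a, k = 165.4, 2.1, 0.88
    return A * (10 ** (a * x) - k)

# Sampling from apex to base spans roughly 20 Hz up to ~20 kHz,
# matching the full range of human hearing.
for x in np.linspace(0.0, 1.0, 5):
    print(f"position {x:.2f} -> ~{greenwood_frequency(x):,.0f} Hz")
```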


How Sound Travels from the Ear to the Brain





Once sound is processed in the inner ear, it travels along the cochlear nerve, which connects to the central nervous system in the brain stem. Here, the sound information from the ear’s hair cells is sent through a network of connections. Some of these connections cross to the opposite side of the brain (contralateral connections), while others stay on the same side (ipsilateral connections).


The signals then move up to different areas of the auditory cortex, where the brain interprets the sound. This process is complex, as the brain picks apart various features of the sound—like pitch, rhythm, and tone—to create the final perception of music or sound.

Music Theory Basics



Now that we have a basic idea of how sound and music are processed in the brain, let’s dive into some key concepts from music theory that help explain how music is structured.


In music theory, the distance between two notes is called an interval. If the frequency ratio between two pitches is 1:2, we call the interval an octave. An octave is divided into 12 equally spaced steps, called semitones, creating the 12 pitches of the chromatic scale. This scale forms the foundation of Western music and applies to many instruments, although each has a unique sound quality, or timbre.
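
To make the arithmetic concrete, here is a minimal Python sketch of equal temperament, assuming the common 440 Hz concert-pitch reference for A4 (the reference value is a convention, not something from the sources above):

```python
# Equal temperament: an octave (frequency ratio 2:1) is split into 12
# semitones, so each semitone multiplies frequency by 2**(1/12).
A4 = 440.0  # common concert-pitch reference, assumed here

def semitones_above_a4(n):
    """Frequency of the pitch n semitones above (or below, if negative) A4."""
    return A4 * 2 ** (n / 12)

print(semitones_above_a4(12))  # 880.0 -> one octave up, exactly double
print(semitones_above_a4(1))   # ~466.16 -> one semitone up (A#4/Bb4)
print(semitones_above_a4(-9))  # ~261.63 -> middle C (C4)
```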


Combining semitone steps in different patterns within an octave creates various scales, like the major scale, melodic minor, and harmonic minor. Each scale has a specific set of notes called scale degrees. The first note, or tonic, serves as the scale’s “home base,” and each following note is numbered (second degree, third degree, etc.).


Chords are built on these scale degrees. For example, in a major scale, the chords based on the first, fourth, and fifth degrees are major chords, while those on the second, third, and sixth degrees are minor chords. The chord on the seventh degree is a diminished chord.
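
A short sketch tying the last two ideas together: building a major scale from its semitone pattern and labeling the chord quality on each degree. The note spellings are simplified (sharps only), so treat this as an illustration rather than a full theory engine:

```python
CHROMATIC = ["C", "C#", "D", "D#", "E", "F",
             "F#", "G", "G#", "A", "A#", "B"]

# The major scale's semitone steps between consecutive degrees:
# whole, whole, half, whole, whole, whole, half.
MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]

# Chord quality built on each scale degree of a major key.
DEGREE_QUALITY = ["major", "minor", "minor", "major",
                  "major", "minor", "diminished"]

def major_scale(tonic):
    """Return the seven scale degrees of the major scale on `tonic`."""
    i = CHROMATIC.index(tonic)
    notes = []
    for step in MAJOR_STEPS:
        notes.append(CHROMATIC[i % 12])
        i += step
    return notes

for degree, (note, quality) in enumerate(zip(major_scale("C"), DEGREE_QUALITY), 1):
    print(f"degree {degree}: {note} ({quality})")
# degree 1: C (major) ... degree 5: G (major) ... degree 7: B (diminished)
```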


Sometimes a chord other than the tonic is temporarily treated as a local tonic. This happens when the chord is preceded by a secondary dominant, a chord that functions as the dominant of that local tonic. When the chord being tonicized is itself the dominant, the secondary dominant is called a double dominant, sometimes referred to as the chromatic supertonic in major keys.



Understanding How We Perceive Pitch

To truly grasp how we process music, it’s essential to understand pitch, which plays a crucial role in our musical experience. Musical patterns survive changes in pitch: you can shift a melody or harmony up or down, and it will still be recognized as the same musical idea because the relationships between its notes are preserved.


Octave equivalence is the clearest example. When two notes are an octave apart, their frequencies stand in a simple 2:1 ratio, and they sound remarkably similar to our ears. Even if you hear a melody shifted an octave higher or lower, you can still recognize it as the same musical pattern.
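
A tiny sketch of that idea, assuming the same 440 Hz reference as before: mapping any frequency to one of 12 pitch classes, so notes an octave apart land on the same class:

```python
import math

def pitch_class(freq_hz, ref=440.0):
    """Map a frequency to one of the 12 pitch classes (0 = the class of
    the reference pitch). Notes an octave apart land on the same class."""
    semitones = round(12 * math.log2(freq_hz / ref))
    return semitones % 12

print(pitch_class(440.0))   # 0 (A)
print(pitch_class(880.0))   # 0 (A, one octave up -- same class)
print(pitch_class(261.63))  # 3 (C: -9 semitones from A4, mod 12)
```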



Tools Used to Measure and Analyze Musical Processing in the Brain







To understand how music is processed in the brain, we need to know a bit about brain activity and how it’s measured. Researchers often use tools like EEG (electroencephalography) and MEG (magnetoencephalography) to track how the brain responds to sounds and events, providing a window into musical processing.


In the brain, neurons communicate through small electrical signals. These signals are either excitatory (EPSPs), making a neuron more likely to fire, or inhibitory (IPSPs), making it less likely to fire. When these signals happen in large groups of neurons, they create electric potentials that can be recorded on the surface of the scalp using EEG. EEG captures these brain signals by placing electrodes on the scalp, one over the area of brain activity (active electrode) and another at a reference point further away (indifferent electrode).


Event-related potentials (ERPs) are the specific EEG signals that occur in response to a stimulus, like a sound. ERPs can be grouped into two types: exogenous ERPs, which happen within the first 100 milliseconds after a sound and are related to sensory processing, and endogenous ERPs, which happen later and are linked to cognitive processing. ERPs are described by their amplitude (signal strength) and latency (the time from the stimulus to the peak of the response). Together, these measures help researchers understand how the brain processes different aspects of music over time.
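
In practice, an ERP is usually estimated by averaging the EEG across many repetitions of the same stimulus, so that random background activity cancels out while the stimulus-locked response remains. The sketch below simulates this; the sampling rate, trial count, and N100-like component are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 250                       # sampling rate in Hz (assumed)
t = np.arange(0, 0.6, 1 / fs)  # 600 ms of post-stimulus time

# Simulate 200 single-trial epochs: a small N100-like negative
# deflection around 100 ms, buried in much larger random noise.
true_erp = -3e-6 * np.exp(-((t - 0.1) ** 2) / (2 * 0.02 ** 2))
epochs = true_erp + rng.normal(0, 10e-6, size=(200, t.size))

# Averaging across trials suppresses the noise, revealing the ERP.
erp = epochs.mean(axis=0)

# Peak amplitude and latency of the negative deflection.
peak = erp.argmin()
print(f"peak amplitude: {erp[peak] * 1e6:.1f} uV "
      f"at latency {t[peak] * 1000:.0f} ms")
```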






Different types of ERP (event-related potential) waveforms help scientists track how the brain responds to sounds and language over time. Each wave has a distinct timing and purpose:


1. P50 Wave: This wave appears around 40-75 milliseconds (ms) after a sound and shows the brain’s early reaction to auditory signals. Its strength, or “amplitude,” is the difference between its peak and the dip right before it.

2. N100 (or N1) Wave: This wave, peaking around 90-200 ms after a sound, shows the brain’s response as it initially processes the stimulus. It’s a negative deflection, meaning it dips down on the graph.

3. N200 (or N2) Wave: The N200 shows up around 200 ms, especially when the brain notices a sudden change in a repeated sound pattern. A related wave, MMN (Mismatch Negativity), signals when the brain detects something unexpected in the usual rhythm or sound environment.

4. N2b Wave: This wave appears a bit later than MMN and specifically responds when a change in the sound is relevant to a task. The brain flags this as “important,” which is helpful for focused listening.

5. P300 (or P3) Wave: Peaking between 250-400 ms, this wave’s amplitude increases with attention. If we’re really tuned in to a sound, the P3 wave gets stronger, reflecting how focused we are.

6. N400 Wave: The N400, occurring between 300-600 ms, is closely tied to language. It reacts when something unexpected happens in a sentence. For example, if a sentence ends with an unusual word, a larger N400 reflects our brain’s surprise.

7. P600 Wave: This wave responds to grammar or syntax errors in language. If a sentence has a complicated or incorrect structure, the P600 wave activates, showing the brain’s work to understand or correct the error.


Each of these waves provides insight into how the brain reacts to and processes different kinds of information, from basic sounds to complex language structure. By studying these waves, researchers can better understand how attention, expectation, and even error correction work in our brains.
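
As a rough illustration, the latency windows above can be turned into a simple lookup that lists which components could plausibly explain a measured peak. Real ERP identification also depends on polarity, scalp location, and the task; the N200 and P600 windows below are assumptions, since the list above gives only approximate timings:

```python
# Typical latency windows (ms) restated from the list above,
# plus the polarity of each deflection (+ positive, - negative).
ERP_WINDOWS = {
    "P50":  (40, 75, "+"),
    "N100": (90, 200, "-"),
    "N200": (180, 250, "-"),   # window assumed around the ~200 ms mark
    "P300": (250, 400, "+"),
    "N400": (300, 600, "-"),
    "P600": (500, 800, "+"),   # window assumed; list above gives none
}

def candidate_components(latency_ms, polarity):
    """Return components whose typical window and polarity match a peak."""
    return [name for name, (lo, hi, pol) in ERP_WINDOWS.items()
            if lo <= latency_ms <= hi and pol == polarity]

print(candidate_components(350, "+"))  # ['P300']
print(candidate_components(120, "-"))  # ['N100']
```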







Neurocognitive Model of Music Perception

To understand how our brains process music, we can look at the neurocognitive model of music perception. This model breaks down the journey of sound into a series of steps that take place in different parts of the brain, transforming raw sounds into the rich experience of music we recognize.


1. Feature Extraction: Music perception starts with decoding the raw sounds we hear. This is called feature extraction, where the auditory brainstem, thalamus, and auditory cortex (mainly in areas known as BA41, 42, and 52) work together to interpret basic sound features like rhythm, pitch, and volume. The auditory cortex’s main role here is transforming these acoustic details into sensory perceptions like pitch height (how high or low a note sounds) and loudness.

2. Echoic Memory (Creating Patterns): As the brain processes these sound features, it stores them in auditory sensory memory, allowing us to recognize patterns over time. Here, sounds combine into a “Gestalt,” or a unified whole—this is where we start to hear melody and rhythm as connected pieces rather than random sounds. The process is reflected in specific brain activity called MMN (mismatch negativity), which activates when there’s a change in an expected auditory pattern.

3. Interval Analysis and Syntactic Structure Building: As we continue to process music, the brain analyzes the intervals, or distances, between notes (see the sketch after this list). This part involves building a syntactic structure, similar to how we process language, where our brains follow the “grammar” of music. A specific brain response, ERAN (early right anterior negativity), appears when we encounter unexpected shifts in musical structure.

4. Structural Re-Analysis: When music takes an unexpected turn, such as an unexpected chord or key change, the brain adjusts its understanding, a process known as structural re-analysis. This allows us to follow and adapt to complex musical pieces as they unfold.

5. Vitalization and Movement Activation: In the final stages of music perception, brain areas linked to motivation and movement get involved. Music often triggers emotional and physical responses, preparing our bodies to react—whether by tapping a foot, swaying, or even dancing. In these final steps, music becomes not just something we hear but something that physically and emotionally moves us.
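
To make step 3 a little more concrete, interval analysis can be pictured as differencing pitch over time. The sketch below computes the semitone intervals of a short melody from MIDI note numbers; the melody and its encoding are illustrative, not part of the model itself:

```python
import numpy as np

# Opening of "Ode to Joy" as MIDI note numbers (E4=64, F4=65, G4=67, D4=62).
melody = np.array([64, 64, 65, 67, 67, 65, 64, 62])

# The interval pattern is the difference between consecutive notes,
# in semitones: 0 = repeated note, +2 = whole tone up, -1 = semitone down.
intervals = np.diff(melody)
print(intervals)  # [ 0  1  2  0 -2 -1 -2]
```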


This journey, from decoding sound to feeling rhythm in our bodies, shows how the brain transforms simple vibrations in the air into a fully immersive experience. The neurocognitive model of music perception reveals how music taps into our memory, sensory processing, emotional centers, and even physical movement, uniting these systems to create the rich experience we enjoy.

The Effects of Background Music on Different Types of Memory

How different kinds of music are processed, and how they affect neurological functions such as mood, arousal, and memory, is an interesting topic to investigate. In one study, researchers examined the effects of background music on different types of memory, including verbal memory. Previous literature suggests that background music acts as a homeostatic mechanism for listeners' internal mood and arousal, putting them in a state where they can perform accurately on memory tasks (2). In this study, however, background music was not found to improve verbal memory performance compared with silence in the control group (1). There was no significant difference between the two groups, so in this case background music was not more effective (2).




How EEG Helps Uncover Emotions Triggered by Music

In an innovative study, researchers used EEG (electroencephalography) to explore how different emotions are triggered in people when they listen to music. By analyzing the brain activity recorded by EEG, the team aimed to identify which emotions participants experienced and then connected these emotional responses to specific acoustic qualities in the music itself. This approach allowed them to explore the emotional impact of music without relying on existing theories or assumptions about music and emotion.


Using a regression model, they successfully predicted the emotions that listeners might feel in response to previously unheard music with a modest correlation coefficient (r = 0.234). Essentially, the model could anticipate how people might emotionally respond to new music based on EEG data, providing a fresh lens on how our brains connect with sound.
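
The paper’s exact features and model are not spelled out here, but the general shape of such an analysis can be sketched: fit a regression from combined brain/acoustic features to emotion ratings, then correlate predictions with actual ratings on held-out music. Everything below is random placeholder data, so the resulting r is meaningless:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Placeholder data: 100 music excerpts x 20 EEG/acoustic features,
# and one emotion rating (e.g. valence) per excerpt.
X = rng.normal(size=(100, 20))
y = X @ rng.normal(size=20) + rng.normal(scale=5.0, size=100)

# Fit on the first 80 excerpts, predict on 20 held-out ("unheard") ones.
model = LinearRegression().fit(X[:80], y[:80])
pred = model.predict(X[80:])

# Pearson correlation between predicted and actual ratings,
# analogous to the r reported in the study.
r = np.corrcoef(pred, y[80:])[0, 1]
print(f"r = {r:.3f}")
```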


This line of research reveals how music processing in the brain can impact various neurological functions, like memory and emotion. It also opens doors for studying brain connectivity networks, using tools like functional connectivity analysis, to understand even more deeply how music shapes and interacts with our minds.


How Network Science Can Be Used to Understand the Effects of Music Preference on Functional Brain Connectivity

In a fascinating study, researchers used network science to explore how listening to favorite music affects connections in the brain. Past research has shown that preferred music often triggers personal memories and thoughts, making it uniquely meaningful. The study found that listening to favorite songs activated functional connections between the auditory cortex (where sounds are processed) and the hippocampus (key for memory) in ways the researchers didn’t expect, given the complexity of music and the individuality of musical taste.


Using fMRI scans of 21 participants listening to pre-selected songs, the researchers created brain maps to see how different regions interacted. When participants listened to music they liked, their brain’s default mode network—the area linked to self-reflection and mind-wandering—was highly active. But when they listened to disliked music, the network connectivity shifted: the precuneus (in the parietal lobe, important for visual-spatial and self-related processing) was mostly isolated, connecting primarily to itself rather than other parts of the brain. This study highlights how our music preferences are reflected in distinct patterns of brain activity, showing the powerful role personal taste plays in shaping our brain’s response to music.
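
Analyses like this typically start from a functional connectivity matrix: the pairwise correlation between regional fMRI time series, thresholded into a network whose connection patterns can then be compared across listening conditions. A minimal sketch with simulated data (the region count, scan length, and threshold are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n_regions, n_timepoints = 10, 200

# Placeholder BOLD time series: one row per brain region.
ts = rng.normal(size=(n_regions, n_timepoints))

# Functional connectivity: pairwise Pearson correlation between regions.
fc = np.corrcoef(ts)

# Threshold into a binary network: an edge wherever correlation is
# strong, excluding each region's trivial self-correlation.
adjacency = (np.abs(fc) > 0.3) & ~np.eye(n_regions, dtype=bool)
print("edges per region:", adjacency.sum(axis=1))
```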



How Sound Stimulation Can Shape Brain Connections in Early Development

Just like our favorite songs can change how our brains connect and communicate, sound stimulation in the early days of life can have a profound impact on brain development. Research shows that listening to certain sounds during this crucial period can strengthen neural connections, boost cognitive abilities, and even help repair damage from various neurological and psychiatric disorders. This effect isn’t limited to humans; sound stimulation influences brain structures in many species. In their review, scientists explore the fascinating ways that sound can shape neural connectivity, emphasizing how auditory experiences activate the BDNF-Trk pathway—a biological route similar to that activated by enriched environments. By understanding these mechanisms, we can appreciate the powerful role sound plays in our brain development and overall mental health.




Sources

  1. Koelsch, Stefan. Brain and Music. John Wiley & Sons, 2013.

  2. Nguyen, Trăm, and Jessica A. Grahn. "Mind Your Music: The Effects of Music-Induced Mood and Arousal Across Different Memory Tasks." Psychomusicology: Music, Mind, and Brain 27(2): 81-94, 2017.

  3. Daly, Ian, et al. "Music-Induced Emotions Can Be Predicted from a Combination of Brain Activity and Acoustic Features." Brain and Cognition, 2015.

  4. Wilkins, R. W., et al. "Network Science and the Effects of Music Preference on Functional Brain Connectivity: From Beethoven to Eminem." Scientific Reports, 2014.

  5. Chaudhury, Sraboni, et al. "Role of Sound Stimulation in Preprogramming Brain Connectivity." Journal of Biosciences 38: 605-614, 2013.



