April 6, 2007 | David F. Coppedge

Preprocessed Sound Produces Tone Map in the Brain

Most of us know that our ears comprise three regions: the outer ear, the middle ear, and the inner ear.  We learned in school how the eardrum passes sound to tiny bones that transmit it to the fluid of the cochlea, which stimulates hair cells that send impulses down the auditory nerve to the brain.  What happens after that?  Scientists know surprisingly little, reported Andrew J. King and Jan W. H. Schnupp in Current Biology,1 but are beginning to find out.  “Research on the auditory cortex is at an exciting stage,” they said as they shared some of the current knowledge about how the brain hears sound.
    The article mentions nothing about the origin of hearing by evolution or design.  It begins, though, with this accolade for the sensitivity and complexity of the system:

Recognizing other people, animals or objects by the sound they make is something that most of us take for granted.  In fact, this ability relies on a series of rich and complex processes that begin when sounds are transduced into electrical signals by the exquisitely sensitive hair cell receptors that lie inside the cochlea of the inner ear.  These messages are then encoded as volleys of action potentials by the axons of the vestibulo-cochlear nerve and transmitted via a complex chain of nuclei in the brainstem, midbrain and thalamus towards the auditory cortex (Figure 1A),* where the interpretation and recognition of sounds is thought to take place.  Compared to other sensory systems, in which information reaches the cortex more directly, auditory signals are heavily pre-processed by the time they arrive at the cortex, and, in many animal species, this subcortical processing can mediate quite complex auditory tasks.
*Figure 1A of the original article is a diagram of the auditory cortex regions in the brains of the rhesus monkey and the cat.

The pre-processing is so extensive, in fact, that they “wonder what is left for the auditory cortex to do.”  Quite a lot, as it turns out.  We get clues from studies of people with brain damage to the auditory cortex, which can result in “severe hearing loss, at least temporarily, and an inability to recognize complex sounds or to pinpoint sound source locations,” they continue.  “Auditory cortex thus plays a crucial role in hearing, but how it does this is still very poorly understood.”  One thing is known: each sense “maps” the incoming information onto the brain:

A common feature of the primary cortical areas in different sensory systems is that they contain topographic representations or maps of the appropriate receptor surface.  Thus, neighbouring neurons in the primary visual cortex (V1) receive inputs from adjacent parts of the retina in the eye, which results in the presence of a map of the visual world across the surface of the cortex.  Similarly, each region of the skin is represented in a different part of the primary somatosensory cortex (S1), producing a cortical map of the body surface.  The same principle applies in the auditory system, except that hair cells located at different points along the length of the cochlea are tuned to different sound frequencies rather than to different locations in space.  The topographically organized projection from the thalamus to the primary auditory cortex (A1) therefore gives rise to a ‘tonotopic’ map of sound frequency.
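
To make the tonotopic idea concrete, here is a minimal Python sketch (ours, not from the article) of the Greenwood place-frequency function, a standard published approximation of how position along the human cochlea maps to characteristic frequency.  Evenly spaced steps along the cochlea come out roughly logarithmically spaced in frequency, which is the raw material of the tonotopic map:

```python
import numpy as np

# Greenwood place-frequency function for the human cochlea:
#     f(x) = A * (10**(a * x) - k)
# where x is the relative distance from apex (0.0) to base (1.0).
# Standard human constants: A = 165.4 Hz, a = 2.1, k = 0.88.
A, a, k = 165.4, 2.1, 0.88

def place_to_frequency(x):
    """Characteristic frequency (Hz) at relative cochlear position x in [0, 1]."""
    return A * (10 ** (a * x) - k)

# Equal steps along the cochlea span roughly 20 Hz to 20 kHz logarithmically.
for x in np.linspace(0.0, 1.0, 11):
    print(f"position {x:.1f} -> {place_to_frequency(x):8.1f} Hz")
```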

These cortical regions for the different senses appear so similar, in fact, that it raises the question of whether one could substitute for another.  To some degree, this appears to be the case.  Experiments with “rewiring” ferret brains showed that the auditory cortex could, after a fashion, “see” what was coming through the eyes.  We all know how the blind read Braille, and some have been given devices that allow them to “see” through their skin.  Then there is the phenomenon called synesthesia, in which some people “taste” color or “smell” sound.  We each experience some of these mixed cues while falling asleep or dreaming.  The parts are not completely interchangeable, however.  The visual cortex appears optimally organized for sensing motion, while the auditory cortex appears to work as “linear filters of the acoustic stimulus,” detecting edges in frequency rather than edges of moving objects.
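
The “edge detection” analogy can be made literal with a toy example.  In the sketch below (our illustration, not the authors’ model), a spectrogram is treated as a two-dimensional array of frequency channels over time, and a simple linear difference filter along the frequency axis responds only at the upper and lower frequency edges of a band of sound, much as a V1-style filter responds at the spatial edges of an object:

```python
import numpy as np

# Toy spectrogram: 64 frequency channels x 100 time steps,
# with a band-limited sound occupying channels 20-39.
spectrogram = np.zeros((64, 100))
spectrogram[20:40, :] = 1.0

# Linear filter: difference between adjacent frequency channels.
# Response is zero inside the band and in silence, peaking only
# where the sound's spectrum has an "edge."
edge_response = np.abs(np.diff(spectrogram, axis=0))

print(np.nonzero(edge_response[:, 0])[0])  # -> [19 39], the two band edges
```
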
    The auditory neurons are more than simple filters.  King and Schnupp describe how they can adapt to the circumstances:

A number of studies have now shown that the response properties of A1 neurons can change over different time scales, indicating that they are sensitive to the context in which stimuli are presented.  This plasticity allows the filter properties of the neurons to be rapidly retuned according to the stimuli that have occurred previously and the task that is being performed.  These findings have important consequences for the way in which combinations of different sounds are represented in the cortex and argue against the presence within A1 of an invariant representation of the physical features of sound sources.

They next describe how portions of the auditory cortex seem to respond to specific properties of sound, like the controls on an oscilloscope: “response threshold, dynamic range and shape of response-level functions, sharpness of frequency tuning, sensitivity to frequency modulation, and the type of binaural interaction exhibited by the neurons” (i.e., differences in the data coming from the left and right ears).  This information is mapped onto the brain.  Sounds of a certain frequency, for instance, might form an “isofrequency contour” with intensity orthogonal to it.  It’s more complex than that, though: “more recent studies have characterized the interactions between the ears in more detail and shown that they are organized into smaller clusters, rather than continuous bands of neurons with similar properties.”
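
One of those binaural cues is easy to illustrate.  The sketch below (a hypothetical illustration, not anything from the article) estimates the interaural time difference, the tiny delay between a sound’s arrival at the two ears, by cross-correlating the left and right signals, in the spirit of the classic Jeffress coincidence-detection model:

```python
import numpy as np

fs = 44100                      # sample rate (Hz)
t = np.arange(0, 0.01, 1 / fs)  # 10 ms of signal
delay = 20                      # samples (~0.45 ms): sound reaches the right ear later

left = np.sin(2 * np.pi * 500 * t) * np.exp(-200 * t)  # decaying 500 Hz tone
right = np.roll(left, delay)                            # delayed copy at the far ear
right[:delay] = 0.0

# The lag at which the cross-correlation peaks is the interaural time difference.
corr = np.correlate(right, left, mode="full")
lag = corr.argmax() - (len(left) - 1)
print(f"estimated interaural delay: {lag / fs * 1e6:.0f} microseconds")
```
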
    Another aspect of the brain’s interpretation of sound is “division of labor.”  Researchers have found areas outside the auditory cortex involved in the perception of pitch, and other areas involved in processing spatial orientation of sound.  These areas are not distinct, however, and some overlapping of function occurs; “it is possible that this segregation of function relates more to differences in how information is processed than to clear categorical distinctions in what is processed there.”
    Another interesting finding involves the two-way communication of the brain and the ear.  Sound is not just dropped off at the brain’s doorstep like a postal package.  The brain talks back to the ear and tells it what to focus on.  Surprisingly, the brain replies more than it listens:

As in other sensory systems, the auditory thalamus receives a massive descending projection, with four times more inputs arising from the cortex than from the ascending pathways.  Cortical neurons also innervate the midbrain as well as various targets in the brainstem, nuclei that do not have direct access to the cortex, indicating that their influence on subcortical processing is likely to be very pervasive.

Thus, auditory inputs, after processing by the brain, set off a massive response of signals to the ears and other parts of the body.  Think of how your body responds to a loud sound like a gunshot.  You might start breathing faster, your head will turn, and your adrenaline may flow – all before you take any conscious action.  That’s what these “corticofugal” signals trigger.  But they might also send messages back to the ears to filter out unwanted information.  The constant hum of a motor, for instance, or the sound of a passing train – while detected by the ears – is effectively tuned out by a brain that has learned that these inputs are uninteresting during work or sleep:

These findings have led to the suggestion that corticofugal axons may be involved in selectively filtering information in the midbrain and thalamus, which may enable us to pay particular attention to certain aspects of our auditory environment while ignoring others.  This, in turn, would lead to an enhanced representation of stimuli that are frequently encountered or of particular significance, and could trigger longer-term, use-dependent plasticity.
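
In engineering terms (a loose analogy of ours, not the authors’ model), this kind of selective filtering resembles background subtraction: track each frequency channel’s long-run average level and pass only what exceeds it, so that a steady hum habituates away while a novel transient punches through:

```python
import numpy as np

def habituate(levels, alpha=0.05):
    """levels: array of shape (time steps, frequency channels)."""
    background = np.zeros(levels.shape[1])
    out = np.empty_like(levels)
    for i, frame in enumerate(levels):
        out[i] = np.maximum(frame - background, 0.0)   # pass only the unexpected
        background = (1 - alpha) * background + alpha * frame  # slowly learn the steady state
    return out

# Channel 3 carries a constant motor hum; channel 10 gets a brief "gunshot" at step 150.
levels = np.zeros((200, 16))
levels[:, 3] = 1.0
levels[150, 10] = 5.0

filtered = habituate(levels)
print(filtered[199, 3])   # ~0: the hum has habituated away
print(filtered[150, 10])  # 5.0: the transient passes through untouched
```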

That last sentence indicates we can train our ears to hear things.  Hope, perhaps, for the tone-deaf?  Or for husbands who don’t pay attention?  Practice makes perfect – maybe even perfect pitch.  In closing, King and Schnupp remark that these are exciting times for research on the brain and hearing.  Scientists continue to watch what happens to different parts of the brain when selected auditory “probes” are used.

A better understanding of the transformations that take place from the thalamus to the cortex and between different cortical fields will shed light on the extent to which the processing of biologically important information is parsed into parallel functional streams.  At the same time, elucidating the functions and mechanisms of action of the many descending corticofugal projections will provide insights into both the dynamic coding of information throughout the auditory pathway and the role of the cortex itself.  Finally, a complete description of how the auditory cortex works also has to take into account how inputs from other sensory modalities – now known to be widespread in the temporal lobe – as well as cognitive factors, such as attention and memory, influence the activity of its neurons.


1Andrew J. King and Jan W. H. Schnupp, “Primer: The auditory cortex,” Current Biology, Volume 17, Issue 7, 3 April 2007, pages R236-R239.

Philosophers have a field day with information like this.  Are we really hearing what is “out there” in the world?  We say that we “hear” a Beethoven symphony, but in reality, there is horsehair scraping on catgut, vibrations of air columns in tubes, impacts of cotton on stretched plastic or metal on metal, and other physical activity generating pressure waves in gas (the air).  By the time the eardrum has sympathetically vibrated and sent these pressure waves through the bones and fluids and nerves, a great deal of preprocessing has occurred.  Then the brain effectively shuts out what it doesn’t care to hear, either consciously or by habit.  A “trained ear” is going to hear much more in the performance than someone unfamiliar with the nuances of music.
    Similarly, what do we know about things heard in conversation?  We cannot get “outside our heads” to connect directly with someone else’s thoughts and feelings.  My thought has to be modulated through a voice and tongue (with feedback from my ears adjusting the pitch and intensity of my words) to set up pressure waves, which your ear picks up and processes before your auditory cortex delivers them to your conscious mind – and vice versa.  We all know people who tune in and out of a conversation or lecture.  We find ourselves doing it, too.  We joke about things going “in one ear and out the other.”  An amusing line sums up the problem: “I know you believe you understand what you think I said, but I’m not sure you realize that what you heard is not what I meant.”
    This filtering and processing happens in all our senses, individually and in concert.  To what extent, then, can we know anything outside our minds as it “really is”?  Interesting questions – with no simple answers.  It’s what leads some philosophers to become solipsists (“only I exist”) and others to become realists, trusting that our senses provide reliable representations of reality, while still others stake out a variety of positions in between.  We know how realistic dreams can seem, complete with sounds, sights and physiological responses.  Exercise: try proving that you are not just a brain in a vat, with someone sending you impulses from an elaborate program called “This is your life.”
    Enough of that.  Assuming a degree of realism and trustworthiness in our senses, we are at the threshold of understanding the mental processes involved in hearing.  As if the ears themselves were not remarkable enough, what the brain does after receiving the nerve impulses remains a vast uncharted territory.  We have only the first glimpses of what is going on in the black box.  All that these two authors have described, moreover, still involves the physical – the midbrain, the auditory cortex, the thalamus.  Above that is an additional layer we call “consciousness” (as if giving it a name confers understanding).  How these layers upon layers of complexity interact to give us a life that is simultaneously physical, mental, emotional and spiritual is a puzzle whose sophistication is underscored by each attempt to tease out the details.  This is irreducible complexity in the extreme.
    That is why we think it is essential to be reminded daily of the details under the hood of life.  Every time someone comes along claiming that something as elaborate as hearing emerged from deaf chemicals by mistake, through long processes of purposeless, directionless, disinterested collisions of matter, you can ask some probing questions.  Hold up a head-shaped rock next to his head, and ask him to explain the difference in response when pressure waves impinge on the two shapes.  Read this article to him.  If he refuses to listen, you can say to the rock: he who has ears to hear, let him hear.
