Christine Portfors, a neuroscientist, tends a lair of 23 tropical moustached bats at WSU Vancouver to tease apart how they distinguish between sounds, for example, between those they use for echolocation and those they use to communicate.

Bat communication sounds, like speech sounds, are very complex in terms of frequency and timing, says Portfors. Beyond that, “We don’t know anything about how the brain actually processes those types of sounds.”

Earlier work by Portfors revealed that bats have neurons that are very sensitive to the timing of the echolocation sound: the delay between when they emit the call and when the echo comes back. Because different neurons fire at different preferred delays, together they create a mental map of target distance. Other neurons are so sensitive that bats can pick out a particular species of moth based on the amplitude modulation of the echolocation signal.
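
The geometry behind that distance map is simple, even if the neural circuitry is not. The sketch below is only an illustration of the idea, not Portfors's model; the speed of sound, the Gaussian tuning curves, and the preferred delays are all assumed for the example.

```python
import numpy as np

# Illustrative sketch only: a delay-tuned "neuron" prefers a particular gap
# between the emitted call and the returning echo. Target range follows from
# the round-trip delay: range = speed_of_sound * delay / 2.

SPEED_OF_SOUND = 343.0  # m/s in air, assumed value for the example

def target_range(echo_delay_s):
    """Convert a call-to-echo delay (seconds) into target distance (meters)."""
    return SPEED_OF_SOUND * echo_delay_s / 2.0

def delay_tuned_response(echo_delay_s, best_delay_s, tuning_width_s=0.5e-3):
    """Toy Gaussian tuning curve: the unit fires most strongly when the echo
    delay matches its preferred (best) delay."""
    return np.exp(-((echo_delay_s - best_delay_s) ** 2) / (2 * tuning_width_s ** 2))

# A bank of units with different best delays forms a map of target distance.
best_delays = np.linspace(1e-3, 20e-3, 8)            # preferred delays, 1 to 20 ms
echo_delay = 6e-3                                    # a 6 ms echo arrives...
responses = delay_tuned_response(echo_delay, best_delays)

print(f"target range ~ {target_range(echo_delay):.2f} m")          # ~1.03 m
print(f"most active unit prefers {best_delays[np.argmax(responses)]*1e3:.1f} ms")
```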

Bat talk. 

Portfors is currently focusing on the sounds bats use to communicate with each other. The way their brains process communication sounds appears to be very similar to the way human brains process speech; neural strategies seem to follow a common pattern among mammals.

Portfors is conducting experiments to determine what these communication sounds actually mean. How, for example, does a mother bat distinguish between her pup’s call and that of another?

Our understanding of how the auditory system does this is poor, says Portfors.

This current focus reflects Portfors's interest in behavior, an unusual inclination for a neuroscientist. The ultimate question piquing her curiosity, however, is neurological.

When you hear a sound, its frequencies are processed in your ear by the cochlea, the spiral-shaped cavity in the inner ear that contains the nerve endings necessary for hearing. There, the sound is split into its different frequency components. Like a piano, says Portfors: high frequencies at one end, low at the other.
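
To make the piano analogy concrete, here is a small illustrative sketch, not anything from Portfors's lab, that splits a toy two-tone "call" into its frequency components with a Fourier transform. The sample rate and tone frequencies are invented for the example.

```python
import numpy as np

# Illustrative only: decompose a mixed sound into frequency components,
# roughly the way the cochlea spreads frequencies along its length.

sample_rate = 250_000                           # samples per second (ultrasound-capable)
t = np.arange(0, 0.1, 1 / sample_rate)          # 100 ms of signal
# A toy "call": a 30 kHz component plus a quieter 8 kHz component.
sound = np.sin(2 * np.pi * 30_000 * t) + 0.4 * np.sin(2 * np.pi * 8_000 * t)

# The Fourier transform separates the waveform into frequency components,
# high frequencies in some bins, low frequencies in others.
spectrum = np.abs(np.fft.rfft(sound))
freqs = np.fft.rfftfreq(len(sound), d=1 / sample_rate)

# The two strongest components recover the tones that were mixed together.
top_two = freqs[np.argsort(spectrum)[-2:]]
print("dominant components (Hz):", sorted(top_two.round()))   # ~[8000.0, 30000.0]
```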

Conventional scientific wisdom has it that these individual frequency components stay separate within this sequential process, each running on its own through the auditory system. An initial neuron that responds to a high frequency projects its signal to another neuron higher up the auditory pathway that responds to the same frequency. But at what point, asks Portfors, does the brain put the signals together? At what point, and how, does that complex mixture of frequency modulation and timing become a sound in the brain?

Portfors has shown that this integration occurs at a lower, evolutionarily older level of the brain than was previously thought. Rather than taking place at a very high level of the cortex, as was long believed, it happens somewhere in the more primitive midbrain.
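
One minimal way to picture what such integration means: a toy "combination-sensitive" unit that responds only when a low-frequency channel and a high-frequency channel are active at the same time, rather than to either alone. This is an illustration of the concept only, not a model of midbrain circuitry; the bands, thresholds, and example sounds are invented.

```python
import numpy as np

# Toy model: two frequency channels carry their components separately, and a
# hypothetical combination-sensitive unit fires only when both are active.

def channel_activity(spectrum_db, band, threshold_db=30.0):
    """Is there energy above threshold in this frequency band (lo, hi) in Hz?"""
    lo, hi = band
    return any(level > threshold_db for freq, level in spectrum_db if lo <= freq <= hi)

def combination_sensitive(spectrum_db):
    """Fires only when a low band AND a high band are active at the same time,
    i.e. when the components are put together rather than kept separate."""
    low = channel_activity(spectrum_db, (5_000, 15_000))
    high = channel_activity(spectrum_db, (25_000, 60_000))
    return low and high

# (frequency Hz, level dB) pairs standing in for one moment of sound
echolocation_echo = [(30_000, 55.0), (60_000, 40.0)]
social_call = [(12_000, 50.0), (28_000, 48.0)]

print(combination_sensitive(echolocation_echo))  # False: no low-band component
print(combination_sensitive(social_call))        # True: low + high together
```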

Recognizing a voice.

Besides filling some big gaps in our knowledge about how the auditory system works and suggesting some very tantalizing evolutionary implications, Portfors’s work also has practical applications. She is part of a scientific advisory board for a company that is developing software for voice identification.

“Basically, we’re modeling what we know about the auditory system,” she says.

Her work on this project, which is directly related to her basic research, concerns how we group the different components of sounds together. Even the best computerized voice-recognition systems have great difficulty interpreting more than one voice; interpreting even a single voice is a struggle in itself. A human voice is unique, composed of a number of components working together. It may contain components identical to those of another voice, but it is the combination that makes it distinct. Software that could isolate and analyze these components would greatly improve voice-recognition systems. By drawing on the work of research scientists, says Portfors, the company she works with is trying to reverse-engineer the brain.
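
As a rough illustration of the "combination of components" idea, and not the company's software or Portfors's method, the sketch below summarizes a voice as the relative energy in a few frequency bands and treats two recordings as the same speaker only when the whole combination matches. The bands and tolerance are invented for the example.

```python
import numpy as np

# Illustrative only: a voice is distinguished by the combination of its
# components, not by any single one. The "components" here are just relative
# energies in a few frequency bands; real systems use far richer features.

BANDS = [(0, 300), (300, 1_000), (1_000, 3_000), (3_000, 8_000)]  # Hz

def voice_signature(waveform, sample_rate):
    """Summarize a recording as the relative energy in a few frequency bands."""
    spectrum = np.abs(np.fft.rfft(waveform)) ** 2
    freqs = np.fft.rfftfreq(len(waveform), d=1 / sample_rate)
    energy = np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in BANDS])
    return energy / energy.sum()   # the combination, normalized

def same_speaker(sig_a, sig_b, tolerance=0.05):
    """Two voices may share individual components; they match only when the
    whole combination lines up within the tolerance."""
    return np.max(np.abs(sig_a - sig_b)) < tolerance

# Usage sketch: compare signatures computed from two recordings
# sig_a = voice_signature(recording_a, 16_000)
# sig_b = voice_signature(recording_b, 16_000)
# print(same_speaker(sig_a, sig_b))
```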