Christine Portfors is putting together a puzzle that makes a 10,000-piece pop-art jigsaw look simple. She’s trying to figure out how the brain converts a complex sound such as a baby’s cry into a meaningful message—and how hearing loss affects our ability to understand it.

Portfors, a neuroscientist at WSU Vancouver, combines neurophysiology—finding which brain cells fire in response to what sounds—with behavioral studies—observing what an animal does when it hears a certain sound.

Even the simplest vocalizations are dizzyingly complex. A frequency diagram of a single spoken word shows dozens of peaks, each one representing a different frequency, or pitch. The inner ear starts processing all that information by separating the peaks and converting them to electrical signals. Yet “what we hear is a real sound, a full, complex sound,” says Portfors. “We don’t hear the individual tones. That’s the major goal [of my work]: how do the cells in the brain combine all the different frequencies so that what we perceive is a whole sound?”
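
For a concrete picture of what that separation involves, here is a minimal Python sketch (the three tone frequencies are arbitrary stand-ins, not data from Portfors’s lab). It mixes three pure tones into one “complex sound,” then uses a Fourier transform to pull the individual frequency peaks back out, roughly the feat the cochlea performs mechanically:

```python
import numpy as np

# Build a crude "complex sound" from three pure tones at once.
sample_rate = 44100                                  # samples per second
t = np.linspace(0, 0.5, int(sample_rate * 0.5), endpoint=False)
signal = (1.0 * np.sin(2 * np.pi * 440 * t)          # 440 Hz
          + 0.6 * np.sin(2 * np.pi * 880 * t)        # 880 Hz
          + 0.3 * np.sin(2 * np.pi * 1320 * t))      # 1320 Hz

# The Fourier transform separates the mixture into individual peaks,
# each peak one frequency component of the whole sound.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)

# The three strongest peaks recover the original tones.
top = freqs[np.argsort(spectrum)[-3:]]
print(sorted(int(round(f)) for f in top))            # [440, 880, 1320]
```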

For decades, the conventional view has been that the various frequencies are processed in parallel, with information about each one conveyed along a dedicated pathway to a location in the cortex, where the pieces are reintegrated into the whole sound. Working with mice, Portfors is finding that the situation is much more complicated. Rather than traveling along neat parallel streams, signals split and recombine in more of a web-like arrangement.

“Probably the most important aspect of the work I do is showing that neurons are getting connections from lots of different frequency areas,” she says. “Once you start looking for these [connections], you find them all over the place in the brain.”
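
A deliberately simplified sketch can make the contrast concrete (the eight channels, the chosen inputs, and the threshold below are illustrative assumptions, not recordings from her lab). In a strictly parallel readout, each frequency channel reports on its own; a neuron wired to several frequency areas instead responds only when those frequencies arrive together:

```python
import numpy as np

# Eight tonotopic channels; a "sound" is the set of channels it activates.
N_CHANNELS = 8
pure_tone = np.zeros(N_CHANNELS)
pure_tone[2] = 1.0                      # a single frequency
complex_call = np.zeros(N_CHANNELS)
complex_call[[2, 5, 7]] = 1.0           # several frequencies at once

def parallel_readout(sound):
    # Parallel view: each channel is relayed independently, so the
    # response in one channel ignores what the others are doing.
    return sound.copy()

def combination_neuron(sound, inputs=(2, 5), threshold=1.5):
    # Web-like view: one neuron receives connections from several
    # frequency regions and fires only when they are active together.
    drive = sound[list(inputs)].sum()
    return float(drive > threshold)

print(parallel_readout(pure_tone))      # channel 2 responds either way
print(combination_neuron(pure_tone))    # 0.0 -- one frequency isn't enough
print(combination_neuron(complex_call)) # 1.0 -- the combination fires it
```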

Portfors has earned funding from the National Institutes of Health and the National Science Foundation for her real-life approach. Most researchers in the field anesthetize their mice during the physiological tests and use synthesized, “pure” tones—easy to measure, but meaningless to the subjects. Portfors keeps her mice awake and offers them “real” sounds that carry important information, such as “help” calls emitted by pups. Female mice that hear a pup call rush to the source of the sound to carry the lost pup back to the nest. In physiological tests, they show clear, strong responses at several locations in the brain. Pure tones don’t evoke the same neurological responses.

Using sounds with meaning lets Portfors investigate the loss of meaning that occurs when hearing fails. By removing some frequencies from the total sound before playing it for the mice, she mimics partial deafness and explores what parts of the sound are needed for the mouse to understand the message. She also works with a strain of mice that lose the ability to hear high pitches, a condition that afflicts almost all humans starting at age 25.
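
A rough sketch of that kind of manipulation in Python, using SciPy’s standard filter tools (the sample rate, call frequencies, and 40 kHz cutoff here are assumptions for illustration, not Portfors’s actual protocol): a low-pass filter strips everything above the cutoff before playback, mimicking high-frequency hearing loss.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def remove_high_frequencies(call, sample_rate, cutoff_hz):
    """Low-pass filter a recorded call, discarding all energy above
    cutoff_hz -- a crude stand-in for high-frequency hearing loss."""
    # 8th-order Butterworth low-pass, run forward and backward
    # (sosfiltfilt) so the filtered call stays time-aligned.
    sos = butter(8, cutoff_hz, btype="low", fs=sample_rate, output="sos")
    return sosfiltfilt(sos, call)

# Hypothetical stand-in for a pup call: mouse vocalizations are
# ultrasonic, so the recording needs a very high sample rate.
sample_rate = 250_000
t = np.arange(int(0.1 * sample_rate)) / sample_rate
call = np.sin(2 * np.pi * 70_000 * t) + np.sin(2 * np.pi * 20_000 * t)

filtered = remove_high_frequencies(call, sample_rate, cutoff_hz=40_000)

# The 70 kHz component is gone; the 20 kHz component survives.
peak_hz = np.abs(np.fft.rfft(filtered)).argmax() * sample_rate / len(filtered)
print(peak_hz)                                       # 20000.0
```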

In addition to using natural sounds, Portfors houses her mice in mixed-sex, “environmentally enriched” cages. The conventional arrangement of housing two or three mice of the same sex in a “shoebox” cage just didn’t do the job, she says. Males won’t vocalize unless they’re in the presence of a female, and mice of either sex don’t develop normal behavioral or neural responses if they don’t hear the full range of sounds from other mice.

“They’re growing up in deprived acoustical environments,” she says, “and then we’re studying their auditory system.”

She shakes her head.

“If you’re going to understand anything, you’ve got to come up with a way to study these processes in an animal that’s awake and listening to the sound, and as a further-along goal, is actually behaving, is doing something,” she says, “instead of just an animal sitting there, anesthetized, hearing tones. That’s not going to tell us how the system works.”