Mammals Compute Sound Timing in the Microsecond Range
Evolution News | @DiscoveryCSC
At a basic level, we all know that two ears give us the ability to detect the direction of a sound. Cover one ear, and it’s hard to tell where a sound is coming from; uncover it, and we hear in stereo again. But when you look into the physics of sound localization, the requirements are stringent.
Sound waves coming from the left hit your left eardrum only microseconds (millionths of a second) before they hit the right eardrum. Your ears must not only capture that tiny difference in arrival time, but also preserve the information through noisy channels on the way to the brain. And they must do so continuously. Consider an ambulance siren moving left to right; the interaural time difference (ITD) is constantly changing. Your ears need to keep up with the microsecond-by-microsecond changes as they occur, without the prior information getting swamped by the new information.
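To get a feel for the numbers, here is a minimal Python sketch of the simple path-length model ITD ≈ (d / c) × sin(azimuth). The 0.21 m interaural distance and 343 m/s speed of sound are typical textbook values assumed for illustration, not figures from the article.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air at room temperature (assumed)
EAR_SEPARATION = 0.21    # m, a typical adult human value (assumed)

def itd_microseconds(azimuth_deg, ear_separation=EAR_SEPARATION):
    """Approximate ITD for a distant source at the given azimuth.

    Simple path-length model: ITD = (d / c) * sin(azimuth), where
    0 degrees is straight ahead and 90 degrees is directly to one side.
    """
    return ear_separation / SPEED_OF_SOUND * math.sin(math.radians(azimuth_deg)) * 1e6

for angle in (90, 45, 10, 1):
    print(f"{angle:>2} deg off the midline -> ITD ~ {itd_microseconds(angle):6.1f} microseconds")
# 90 deg -> ~612 µs, 45 deg -> ~433 µs, 10 deg -> ~106 µs, 1 deg -> ~11 µs
```

Even on this crude model, telling a source one degree off the midline from one dead ahead comes down to roughly ten microseconds.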
Now consider being in an auditorium, listening to an orchestra with your eyes closed. You can tell where each instrument is located, even when they are playing together, just by the ITDs from each player. How amazing is that?
This can only work if the auditory system maintains the information all the way to the brain. The brain receives the timing differences only after a delay: the eardrum converts pressure waves into membrane vibrations; those vibrations drive the mechanical movements of the middle-ear bones (ossicles); the ossicles set up fluid waves in the cochlea; and the cochlea converts the fluid waves into electrical impulses in the auditory neurons. These things take time, but we’re still not there.
The signal traveling along each neuron’s axon must also cross synapses, where the electrical information is converted to chemical information and back again in the next neuron. This is getting very complicated! There’s bound to be some noise in the transmission pathway. How can the ITD at the outer ear be maintained all the way to the brain through these multiple energy conversions?
Two neurobiologists from Ludwig-Maximilian University of Munich (LMU), appreciating the problem of maintaining sound localization information, decided to run experiments on mice and gerbils. Think how much closer together those ears are than human ears! The smaller interaural distance compounds the problem, tightening the requirements even more. Under the news headline “Auditory perception: where microseconds matter,” Drs. Grothe and Pecka announce what they found.
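Because the maximum possible ITD is simply the ear separation divided by the speed of sound, the whole timing budget shrinks with head size. The ear separations below are rough ballpark figures assumed purely for illustration, not measurements from the paper.

```python
SPEED_OF_SOUND = 343.0  # m/s in air (assumed)

# Rough, assumed ear separations; only the order of magnitude matters here.
for species, separation_m in [("human", 0.21), ("gerbil", 0.03), ("mouse", 0.01)]:
    max_itd_us = separation_m / SPEED_OF_SOUND * 1e6
    print(f"{species:>6}: maximum possible ITD ~ {max_itd_us:6.1f} microseconds")
# human ~612 µs, gerbil ~87 µs, mouse ~29 µs: the rodents' entire usable
# range is only a small fraction of ours.
```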
Gerbils (who depend on sound localization more than mice) use multiple mechanisms to maintain accurate ITD information in their sound transmission apparatus. The researchers explain the challenge:
In the mammalian auditory system, sound waves impinging on the tympanic membrane of the ear are transduced into electrical signals by sensory hair cells and transmitted via the auditory nerve to the brainstem. The spatial localization of sound sources, especially low-frequency sounds, presents the neuronal processing system with a daunting challenge, for it depends on resolving the difference between the arrival times of the acoustic stimulus at the two ears. The ear that is closer to the source receives the signal before the contralateral ear. But since this interval – referred to as the interaural timing difference (ITD) — is on the order of a few microseconds, its neuronal processing requires exceptional temporal precision. [Emphasis added.]
Grothe and Pecka, along with seven other colleagues, published the results of their research in an open-access paper in the Proceedings of the National Academy of Sciences (PNAS). They report “a specific combination of mechanisms, which plays a crucial role in ensuring that auditory neurons can measure ITDs with the required accuracy.”
Back in 2015, the team observed structural modifications of the myelin sheaths wrapping the auditory nerves. The axons of these neurons, they also noted, were particularly thick. Discontinuities in the sheaths, coupled with the axon thickness, seemed to turbo-charge the neurons “to enable rapid signal transmission.” That’s necessary for sound localization, but it’s not enough. If the synapses introduce additional varying delays, you’ll just get faulty information transmitted faster. There must be something else going on. Here’s what they found this time:
Before cells in the auditory brainstem can determine the ITD, the signals from both ears must first be transmitted to them via chemical synapses that connect them with the sensory neurons. Depending on the signal intensity, synapses themselves can introduce varying degrees of delay in signal transmission. The LMU team, however, has identified a pathway in which the synapses involved respond with a minimal and constant delay. “Indeed, the duration of the delay remains constant even when rates of activation are altered, and that is vital for the precise processing of interaural timing differences,” Benedikt Grothe explains.
Specifically, the team discovered “stable synaptic delays” in the transmission neurons, produced by a mechanism previously unknown in other neural circuits. Without a unique “inhibitory pathway” described in the paper, synapse transmission times would vary under continuous excitation, wiping out the ITD information. (This can happen, for instance, as a result of changes in the abundance of vesicles needed to carry neurotransmitter molecules across a synapse.)
Functionally, stable synaptic delays seem to represent a specific adaptation for faithful ITD processing, because it would prevent fluctuations in the relative timing of direct excitation and indirect inhibition for responses to onsets vs. ongoing sounds in the range of tens to hundreds of microseconds. Such fluctuations may be negligible for most neuronal computations, but not for microsecond ITD processing of low-frequency sounds.
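A small Monte Carlo sketch makes the point concrete: if each ear’s pathway adds its own random delay of a few tens of microseconds, the measured timing difference can even come out with the wrong sign. The 50 µs true ITD and the 40 µs jitter are illustrative assumptions, not numbers from the study.

```python
import random

random.seed(0)
TRUE_ITD_US = 50.0      # assumed true interaural timing difference
JITTER_STD_US = 40.0    # assumed per-pathway synaptic delay jitter
N_TRIALS = 10_000

wrong_side = 0
for _ in range(N_TRIALS):
    left_delay = random.gauss(0.0, JITTER_STD_US)
    right_delay = random.gauss(0.0, JITTER_STD_US)
    measured_itd = TRUE_ITD_US + (right_delay - left_delay)
    if measured_itd < 0:    # the listener would localize the sound to the wrong side
        wrong_side += 1

print(f"Wrong-side localizations: {100 * wrong_side / N_TRIALS:.1f}% of trials")
# With ~40 µs of independent jitter per ear, a 50 µs ITD points the wrong way
# in roughly one trial in five; with stable delays the error disappears.
```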
We now know the challenge: something needs to keep these synapses in a consistent state of readiness, so that the delay in crossing each one stays constant. One method might be buffering, so that enough vesicles are always at the ready. That’s one mechanism they observed, but not the only one. The solution also involves computation. There are two bodies at the receiving end, the lateral superior olive (LSO) and the medial superior olive (MSO), that share information. The LSO deals with sound levels, and is less stringent about timing. The MSO, however, requires precise timing information to calculate ITDs. By comparing one another’s inputs, the LSO and MSO can “detect coincidences between inputs from the two ears.” The authors note that another “striking shared structural feature is the contralateral inhibitory pathway that is specialized for speed and reliability.”
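To make “detecting coincidences” concrete, here is a toy, Jeffress-style coincidence detector in Python: a bank of candidate internal delays is applied to one ear’s signal, and the delay that best lines the two ears up is read out as the ITD. The sample rate, tone frequency, and delay values are all illustrative assumptions, and the paper’s actual excitatory-inhibitory circuit is considerably more sophisticated than this cartoon.

```python
import math

SAMPLE_RATE = 1_000_000   # 1 MHz sampling, i.e. 1 microsecond resolution (assumed)
FREQ = 500.0              # a low-frequency tone, in Hz (assumed)
TRUE_ITD_US = 300         # the sound reaches the right ear 300 µs later (assumed)

def tone(n_samples, delay_samples=0):
    """A pure tone, optionally delayed by a whole number of samples."""
    return [math.sin(2 * math.pi * FREQ * (i - delay_samples) / SAMPLE_RATE)
            for i in range(n_samples)]

n = 3700
left = tone(n)
right = tone(n, delay_samples=TRUE_ITD_US)

# Bank of candidate internal delays: the detector whose delay exactly cancels
# the external ITD sees the two inputs coincide and scores highest.
best_delay, best_score = None, -math.inf
for candidate_us in range(0, 700, 10):
    # Delay the left input internally and compare it with the right input over
    # a fixed window, so every candidate is scored on the same samples.
    score = sum(left[i - candidate_us] * right[i] for i in range(700, n))
    if score > best_score:
        best_delay, best_score = candidate_us, score

print(f"Estimated ITD ~ {best_delay} microseconds (true value {TRUE_ITD_US})")
```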
That’s still not all. Two other structures upstream of the MSO are involved, but they cannot inhibit too much, or they will introduce noise of their own. So they, too, are finely tuned:
Recently we showed that the inhibitory pathway conquers this challenge via a two- to threefold thicker axon diameter of GBCs [globular bushy cells] compared with the spherical bushy cells, which comprise the excitatory input. Moreover, we revealed the presence of a dramatic decrease of internode length toward the terminal region in both fiber classes.
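A back-of-the-envelope calculation suggests why axon caliber matters so much for timing. The sketch below leans on the common physiological rule of thumb that a myelinated axon conducts at roughly 6 m/s per micrometer of diameter; that rule, the 5 mm path length, and the specific diameters are illustrative assumptions, not values from the paper.

```python
def conduction_delay_us(path_length_mm, diameter_um, velocity_per_um=6.0):
    """Delay (in microseconds) over a myelinated axon, using the rough
    rule of thumb velocity ~ 6 m/s per micrometer of diameter."""
    velocity_m_per_s = velocity_per_um * diameter_um
    return (path_length_mm / 1000.0) / velocity_m_per_s * 1e6

for diameter_um in (1.5, 3.0):   # a thin axon vs. one roughly twice as thick
    delay = conduction_delay_us(path_length_mm=5.0, diameter_um=diameter_um)
    print(f"{diameter_um} um axon over 5 mm -> ~{delay:5.1f} microseconds")
# Doubling the diameter roughly halves the delay, saving a few hundred
# microseconds over just a few millimeters of pathway, which is enormous
# on the scale of the ITDs being measured.
```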
The details of these specializations need not concern us here. Suffice it to say that multiple mechanisms ensure that ITD information is preserved from eardrum to brain: structural properties of axon diameter and sheathing patterns, buffering of vesicles, and computation of differences between the inputs received in the auditory brainstem. No other part of the body requires this level of timing precision, and no other circuit achieves it.
For a real-world application of this need for precision, consider the echolocating bat. This creature darts about in the air, making sudden turns every second, listening to echoes from its high-frequency chirps. Research at Johns Hopkins finds that bats respond to a noisy environment by turning up the volume. We humans do that, too, but bats do it in 30 milliseconds: 10 times faster than the blink of an eye! That means that these little flying mammals, with ears much closer together than ours, are able to respond to the sound location information calculated from their ITDs extremely fast, while simultaneously operating their wings in a constantly changing auditory environment.
Our brief look into the complexity of auditory localization in mammals provides a good example not only of Behe’s irreducible complexity, but also of what Douglas Axe calls functional coherence, “the hierarchical arrangement of parts needed for anything to produce high-level function — each part contributing in a coordinated way to the whole” (Undeniable, p. 144). None of these parts (MSO, myelin, synapses) performs sound localization on its own, but collectively, they do.
We could explore the hierarchy further by looking more closely at how molecular machines within the neuron cells participate in the “functional whole” of sound localization. Taking the wide-angle view, we see how all the lower levels in the hierarchy contribute to the bat’s amazing ability to catch food on the wing. Functional coherence is not just beyond the reach of chance (Axe, p. 160), it provides positive evidence for intelligent design. In all our uniform human experience, only minds are capable of engineering complex, hierarchical systems exhibiting functional coherence. The complexity of this one circuit — sound localization — makes that loud and clear.