Design at Your Fingertips: Researchers Struggle to Model Sense of Touch
Evolution News | @DiscoveryCSC
The late pianist Victor Borge (1909-2000) was beloved not only for his comedy shtick but also for the sensitivity of his keyboard touch. He maintained the ability to interpret the most subtle pieces, such as Debussy’s Clair de Lune, with extreme delicacy all the way to age 90, when he was still giving 60 performances a year. It would be hard to design a robot with that level of durability, reliability, or sensitivity. Scientists know, because they’re having a hard time understanding the human sense of touch, let alone imitating it.
Four researchers from the University of Chicago and the University of Sheffield (UK) have made major progress over previous attempts to model the sense of touch. In a paper in the Proceedings of the National Academy of Sciences, “Simulating tactile signals from the whole hand with millisecond precision,” they announce a new mathematical model of a single hand’s neural responses under a variety of fingertip-touch experiments, hoping to assist robotics engineers who wish to imitate human touch response. Note the words encoded and information:
When we grasp an object, thousands of tactile nerve fibers become activated and inform us about its physical properties (e.g., shape, size, and texture). Although the properties of individual fibers have been described, our understanding of how object information is encoded in populations of fibers remains primitive. To fill this gap, we have developed a simulation of tactile fibers that incorporates much of what is known about skin mechanics and tactile nerve fibers. We show that simulated fibers match biological ones across a wide range of conditions sampled from the literature. We then show how this simulation can reveal previously unknown ways in which populations of nerve fibers cooperate to convey sensory information and discuss the implications for bionic hands. [Emphasis added.]
Unlike previous experiments that attempted to measure neural spikes from individual sensors in the skin of monkeys or humans, this new model simulates the responses of thousands of sensors based on knowledge of their classifications and distributions in the skin of the human hand. The team incorporated three classes of nerve fibers into the model:
- Slowly adapting (SA) sensors: these respond primarily to spatial information from the stimulus.
- Rapidly adapting (RA) sensors: twice as densely packed as SA sensors, these provide a mix of spatial and vibration responses.
- Pacinian sensors: less densely packed than the other types, these neurons are sensitive to vibrations and waves generated by movement across the skin.
Each of these fibers produces spike trains that encode different aspects of the stimulus, such as edges, compression, and vibration. One type alone might not convey much about the source, but together they give the brain a rich array of data. Interpreted correctly, this information allows the brain to draw conclusions about the size, shape, and texture of an object by touch alone. A blind person can thus “see” Braille letters with the fingertips, where these neurons are most densely packed: “each fingertip contains just under 1,000 fibers,” the paper states, providing fine resolution, especially from the high-resolution SA1 fibers.
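To make that division of labor concrete, here is a toy sketch in Python. It is emphatically not the authors’ model: the stimulus, the firing-rate rules, and the gains are all invented for illustration. SA fibers follow sustained pressure, RA fibers follow changes in pressure, and PC fibers follow vibration, with Poisson spike trains drawn at millisecond resolution.

```python
# Toy illustration (not the PNAS model): three afferent classes responding
# to a one-second press-hold-release stimulus, sampled every millisecond.
import numpy as np

rng = np.random.default_rng(0)
dt = 0.001                                   # 1 ms time step
t = np.arange(0.0, 1.0, dt)

# Stimulus: indentation ramps on at 100 ms, holds, ramps off at 800 ms,
# with a small 250 Hz vibration superimposed while in contact.
depth = np.clip((t - 0.1) / 0.05, 0, 1) * np.clip((0.8 - t) / 0.05, 0, 1)
vibration = 0.05 * np.sin(2 * np.pi * 250 * t) * (depth > 0)

# Illustrative firing-rate rules (spikes/s); the gains are made up.
sa_rate = 80 * depth                               # SA: sustained pressure
ra_rate = 10 * np.abs(np.gradient(depth, dt))      # RA: change in pressure
pc_rate = 2000 * np.abs(vibration)                 # PC: vibration

# Poisson spike trains with millisecond resolution.
spikes = {name: rng.random(t.size) < rate * dt
          for name, rate in [("SA", sa_rate), ("RA", ra_rate), ("PC", pc_rate)]}

for name, train in spikes.items():
    print(f"{name}: {int(train.sum())} spikes in 1 s")
```

Run it and the SA train dominates during the hold, the RA train clusters at the press and release, and the PC train tracks the vibration: a crude picture of how complementary channels carry different features of a single touch.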
The spike trains become more complex as the fingertip is moved into or across the stimulus, activating more of the RA and PC fibers. Simply pressing a key on a computer keyboard is a complex act, with surrounding neurons becoming involved as pressure is applied or released. Moving a finger across a surface sets up waves that propagate throughout the hand, activating more sensors along the length of the finger and into the palm. This all happens within milliseconds (thousandths of a second), as it must when you consider the fast action of typing or playing a rapid piano piece. Even though PC fibers are less densely packed, their activity “dwarfs that of active SA1 or RA fibers,” the authors say, since nearly all of them become activated during a grasping operation or when feeling vibrations.
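The timing argument can be sketched with back-of-the-envelope numbers. Assuming, purely for illustration, a surface-wave speed of a few meters per second in skin, sensors at increasing distances from a fingertip tap would fire a few milliseconds apart:

```python
# Sketch of the timing argument only (assumed, illustrative numbers): a tap
# on the fingertip launches a wave; sensors farther up the finger and into
# the palm respond later, all within a few milliseconds.
import numpy as np

wave_speed_m_s = 5.0                              # assumed propagation speed
sensor_dist_cm = np.array([0.5, 2.0, 5.0, 9.0])   # fingertip ... palm

delay_ms = sensor_dist_cm / 100 / wave_speed_m_s * 1000
for d, ms in zip(sensor_dist_cm, delay_ms):
    print(f"sensor at {d:4.1f} cm -> wave arrives after {ms:5.1f} ms")
```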
The authors describe their efforts to “tune” or “fit” their model to known facts about neurons in the hand. Eventually, they achieved a good match for things like edge detection, edge orientation, and direction of motion for simple actions. Nevertheless, they omitted capabilities such as temperature and pain sensing, two important inputs that can generate reflex actions, activating arm muscles to jerk the hand away before the brain is aware of danger. Needless to say, their model completely overlooks things like sweat glands, blood vessels, immune cells, and all the other equipment packed into a fingertip.
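Tuning of this sort is, at bottom, parameter fitting. A minimal sketch, with made-up numbers and a far simpler model than the paper’s: choose the gain of a linear rate model that best matches measured firing rates in the least-squares sense.

```python
# Hedged sketch of "tuning" a rate model to data (hypothetical numbers, not
# the paper's fitting procedure): find the gain that minimizes squared error
# between predicted and measured firing rates.
import numpy as np

indentation_mm = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
measured_rate = np.array([18.0, 41.0, 58.0, 83.0, 99.0])   # made-up spikes/s

# Linear model: rate = gain * indentation. Closed-form least-squares gain.
gain = (indentation_mm @ measured_rate) / (indentation_mm @ indentation_mm)
residual = measured_rate - gain * indentation_mm
print(f"fitted gain: {gain:.1f} spikes/s per mm, "
      f"RMS error: {np.sqrt(np.mean(residual**2)):.1f} spikes/s")
```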
While the new model reflects admirable progress in understanding the sense of touch, and while it will undoubtedly help engineers seeking to improve prosthetic devices and robotic capabilities, the authors admit in the last section a number of limitations of their model. For instance, they tuned their model to information from rhesus macaques, knowing that humans have an additional type of tactile sensor called the SA2 fiber. They also fit their model to compression actions but not to sliding actions. In addition, they didn’t take fingerprints into account. Here’s why that could be a serious shortcoming of the model:
Third, the skin mechanics model treats the skin as a flat surface, when in reality, it is not. The 3D shape of the skin matters during large deformations of the fingertip. For example, pressing the fingerpad on a flat surface causes the skin on the side of the fingertip to bulge out, which in turn, causes receptors located there to respond. Such complicated mechanical effects can be replicated using finite element mechanical models but not using the continuum mechanics (CM) model adopted here. To the extent that friction is a critical feature of a stimulus — for example, when sliding a finger across a smooth, sticky surface — or that the finger geometry plays a critical role in the interaction between skin and stimulus — as in the example of high-force loading described above — the accuracy is compromised. Under most circumstances, the model will capture the essential elements of the nerves’ response.
Another limitation may be even more significant. They didn’t take into account the networking of responses in adjacent nerves. Their model treats an affected area as an isotropic “hotspot” wherein all the fibers react the same way, but nerve fibers are known to branch out and affect neighboring fibers. This can produce complex interactions between neurons, adding to the encoded tactile information the brain receives.
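A toy sketch of why branching matters, with assumed geometry: if one fiber has several endings, its drive is the sum of contributions from each ending, so its receptive field is a lumpy composite rather than a single isotropic hotspot.

```python
# Toy sketch (assumed geometry, not from the paper): a fiber with several
# branch endings pools stimulus drive from each ending, so its response
# extends beyond any single "hotspot."
import numpy as np

endings = np.array([[0.0, 0.0], [1.5, 0.5], [0.5, 1.8]])  # ending positions (mm)

def fiber_drive(contact_xy, sigma_mm=1.0):
    """Total drive: Gaussian falloff from the contact point to each ending."""
    d2 = np.sum((endings - contact_xy) ** 2, axis=1)
    return np.sum(np.exp(-d2 / (2 * sigma_mm ** 2)))

for xy in [(0.0, 0.0), (1.0, 1.0), (3.0, 3.0)]:
    print(f"contact at {xy}: drive = {fiber_drive(np.array(xy)):.2f}")
```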
Let’s dive one level deeper into the details to consider what goes on at the cellular level. A neuron embedded in the skin does not see anything. It “feels” the outer skin deforming slightly because it contains mechanosensitive channels (portals) in its membrane. These channels let some ions in, and others out, creating a wave train of signals down the cell’s length. That’s the electrical “spike” the authors talk about, but it doesn’t just happen without each neural cell first being equipped with molecular machines able to respond to pressure, and able to quickly reset and re-fire as the stimulus changes. As the signals propagate toward the brain, they must cross synapses that convert the electrical signals to chemical signals and back again, preserving the information and the timing of the signals, as we saw in the case of 3-D hearing.
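That fire-reset-refire cycle can be caricatured with a textbook leaky integrate-and-fire neuron, a standard simplification and not the paper’s machinery: a pressure-driven current charges the membrane, a threshold crossing counts as a spike, and the voltage resets so the cell can fire again.

```python
# Minimal leaky integrate-and-fire sketch (a textbook model, not the paper's):
# a current from mechanosensitive channels charges the membrane; crossing
# threshold emits a "spike" and the cell resets, ready to fire again.
import numpy as np

dt, tau = 0.001, 0.02                       # 1 ms step, 20 ms membrane time
v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0   # arbitrary units
t = np.arange(0.0, 0.5, dt)
pressure = (t > 0.1) & (t < 0.4)            # simple on/off indentation
current = 3.0 * pressure                    # assumed drive from the channels

v, spike_times = v_rest, []
for i, ti in enumerate(t):
    v += dt / tau * (v_rest - v) + dt / tau * current[i]
    if v >= v_thresh:                       # threshold crossed: spike and reset
        spike_times.append(ti)
        v = v_reset
print(f"{len(spike_times)} spikes; first at {spike_times[0]*1000:.0f} ms"
      if spike_times else "no spikes")
```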
Once again, the simplest, most ordinary action of touching a fingertip to a surface is vastly more complex than we could conceive, challenging scientists to come up with simplified models to understand it. With this in mind, try an experiment: with your eyes closed, touch your index finger to a variety of surfaces around you: a table top, clothing, bread, liquid, the skin of your arm, a puff of air from your lips. Try to discern by touch alone information about each object’s friction, temperature, smoothness, shape, and hardness. Think of all those thousands of sensors providing that information to the brain with millisecond precision! Imagine what the brain has to deal with when you plunge your whole body into a cold pool on a hot summer day.
The authors say nothing about evolution in their paper. Design is so abundantly obvious in the human body, as Steve Laufmann discussed in his recent ID the Future podcasts about Howard Glicksman’s series on physiology, that our best engineers cannot even conceive of approximating that level of functional coherence, performance, and integration. Not even close.