Speaker: Kathleen Cullen
Department: Department of Biomedical Engineering, Johns Hopkins University
Subject: Internal Models of active self-motion in the primate cerebellum: Implications for perception and action
Location: Erasmus Medical Centre
Date: 6 February 2017
Author: Romano van Genderen
The first talk of this two-part seminar series organized by the Department of Neurology was given by Professor Cullen. She started off slowly, explaining that there are two types of sensory stimuli: those caused by one's own actions (reafference) and those caused by the external environment (exafference). She illustrated the difference by pointing out that you perceive the world differently when you move your own eye than when your eye is pushed from the outside.
She then turned to the vestibular organ, which plays a large role in posture but also in the perception of self-motion. She explained how it helps keep posture upright, but also how it can work against active motion, using a clever example of an ice skater versus someone slipping on ice. With this example she made the point that the vestibular organ itself does not distinguish between intentional movement and externally imposed movement. She then showed experiments in which monkeys were placed on turntables. These revealed that there are specialized neurons that cancel the vestibular signals generated in response to self-motion.
But how does the nervous system tell the difference? The cancellation signal is not a response to the proprioceptors, the receptors that report the position of the body to the brain. Motor commands alone are not responsible either. And an experiment in which the monkey was trained to turn a wheel instead of moving itself showed that higher-level knowledge was not the single cause either. This led her to hypothesize that the inhibition occurs when the sensory feedback matches the expected result.
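This cancellation rule, suppressing the vestibular response only when the actual feedback matches the internally predicted feedback, can be sketched in a few lines of Python. This is a toy model with made-up scalar values, not anything presented in the talk:

```python
def residual_signal(predicted, actual, tolerance=1e-6):
    """Vestibular signal that survives cancellation.

    The reafferent (self-generated) part is suppressed when the
    actual feedback matches the prediction; any mismatch, i.e. the
    exafferent part, passes through unchanged.
    """
    error = actual - predicted
    return 0.0 if abs(error) < tolerance else error

# Active head turn: the brain predicts the resulting velocity,
# so the signal is cancelled.
print(residual_signal(predicted=5.0, actual=5.0))  # 0.0

# Passive rotation on a turntable: nothing was predicted,
# so the full signal passes through.
print(residual_signal(predicted=0.0, actual=5.0))  # 5.0
```

In this picture, the turntable experiments probe the `predicted=0` case, while active head movements probe the matched case.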
Next she wanted to discuss the role the central nervous system plays in this process. But first she mentioned an amusing recent development in the field: a so-called "Tickle Robot". It is well known that you cannot tickle yourself, but this robot solves that "problem". It works by having you first tickle the "skin" of the robot; the robot then adds a small amount of noise to the signal and delivers the tickle back to you. Because of the added noise, the feedback no longer matches the expected result, which produces a tickling sensation. The more noise is added, the stronger the tickling sensation becomes.
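The robot's trick can be expressed in the same toy terms: the perceived tickle is the mismatch between the touch you commanded and the touch you receive. This is a hypothetical sketch with an invented noise parameter, not the robot's actual design:

```python
import random

def tickle_strength(command, noise_std, seed=0):
    """Perceived tickle = mismatch between expected and delivered touch.

    With zero noise the delivered touch equals the self-generated
    command, the prediction cancels it, and nothing is felt.
    """
    rng = random.Random(seed)  # fixed seed keeps the sketch deterministic
    delivered = command + rng.gauss(0.0, noise_std)
    return abs(delivered - command)

print(tickle_strength(1.0, 0.0))                              # 0.0: you can't tickle yourself
print(tickle_strength(1.0, 0.5) > 0.0)                        # True: the noisy robot tickles
print(tickle_strength(1.0, 1.0) > tickle_strength(1.0, 0.5))  # True: more noise, stronger tickle
```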
She then explained that in fish a specific lobe has been found that compares the expected and the actual sensory result. This lobe operates according to a so-called forward model, which also plays a large role in motor learning.
Figure 1: Model of motion, using both a predicted and a sensed angular velocity
But she also wanted direct evidence for the hypothesis. They applied a resistive torque to the head of a monkey and then used a banana to make it turn its head. On the first few trials the monkey was unable to overcome the load, but after a while it learned to push harder. More interestingly, the neurons re-tuned themselves so that they completely ignored the extra load: because the active motor command and the expected sensory result initially differed, the neural responses changed. The same was then done in reverse, by removing the load and observing how the monkey adapted to the new situation. This supported the hypothesis that inhibition occurs when sensory feedback matches the expected result, now with physiological evidence.
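One way to picture the adaptation described above is a forward model whose torque prediction is nudged toward the sensed torque on every trial; once prediction and sensation agree, the mismatch (and the learning) stops. The following is a hypothetical sketch of that idea, not the actual model from the talk:

```python
def adapt_to_load(predicted, sensed, trials=50, learning_rate=0.2):
    """Trial-by-trial update of a forward model's torque prediction.

    The prediction error drives the update; as the prediction
    converges on the sensed torque, the residual signal vanishes.
    """
    errors = []
    for _ in range(trials):
        error = sensed - predicted       # mismatch drives plasticity
        errors.append(error)
        predicted += learning_rate * error
    return predicted, errors

# A new resistive load appears (sensed torque 1.0, predicted 0.0):
final, errors = adapt_to_load(predicted=0.0, sensed=1.0)
print(round(final, 3))                    # 1.0: the load is now expected
print(abs(errors[-1]) < abs(errors[0]))   # True: the error shrank over trials
```

Removing the load corresponds to running the same update with `sensed=0.0` starting from the adapted prediction, which mirrors the second half of the experiment.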
The second part, on vestibular dysfunction, was covered very quickly, but it showed that when the vestibular organ is damaged, proprioceptors in the skin can partly take over its function. The vestibular organ on the other side of the head can also take over some of the lost function.
I really liked this talk, although the fact that monkeys are used so often in brain research was rather shocking to me. The practical applications were not that clear, but the theory was very interesting, if a bit abstract.