MOTORLAB
University of Pittsburgh

Image caption: Action potentials from neurons are used to control man-made devices. In this cartoon, the axon of a pyramidal neuron (reconstructed from 20-micron histological sections) wraps around a cylinder to form a solenoid that activates a watch escapement.

RESEARCH

Our research interests span:

  • Neural Correlates of Action: Single Neurons and Neural Populations
  • Cortical Physiology
  • Cortical-Muscular Activation
  • Skeletal Biomechanics
  • Visual Motor Transformation
  • Arm Reaching
  • Reach-to-Grasp
  • Control of Dexterity
  • Neural Prosthetics
  • Anthropomorphic Robotics
  • Neural Statistics
  • System Control

Neural correlates of action

We are interested in how motor intentions, the way we want to act on our surroundings, are represented in the firing patterns of individual cortical neurons, as well as in the patterns of neural activity within cortical populations. We study this from the perspective of encoding (models of the factors predictive of a neuron's firing rate) and decoding (the extraction of intended movement from populations of recorded neural activity). Our experimental paradigms are based on behavioral tasks that give subjects a great deal of freedom in making movements. Tasks include reaching to targets, orienting the wrist, and grasping and manipulating objects. In the laboratory, we train monkeys to perform these tasks, and then record single-unit neural activity from motor-related cortical areas as the tasks are carried out. Since 2012, we have been recording the same type of neural activity in a paralyzed human subject who is learning to use her own brain activity to control a prosthetic arm and hand.

We have built encoding models based primarily on cosine tuning to movement direction. Over the years we have increased the number of movement parameters in the model to include wrist orientation and hand shape, in addition to the original three dimensions of hand velocity from our early work (Georgopoulos et al., 1986). Most of the extraction algorithms we use for decoding are based on the population vector, although we have also explored more elaborate methods such as Kalman, Laplace-Gaussian, and particle filters (a minimal sketch of the encoding and decoding steps appears below).

In addition to encoding and decoding, we are also interested in the structure of interactions between simultaneously active neurons in a population. In particular, we look at the sources that generate correlations between neurons. Historically, most examinations of neuron-neuron interactions have focused on synchrony effects that "bind" groups of neurons together, the idea being that precise, millisecond-scale synchronous activity is efficacious in activating neurons through excitatory synaptic integration. In contrast, our recent work has emphasized correlations on longer timescales and the idea that neurons correlated on these timescales share a "common drive." We view these drivers as inputs to the population, and have used dimensionality reduction to identify them from simultaneously recorded population activity. One of our research objectives is to characterize these drivers in different behavioral contexts, to relate them to behavioral and extrinsic parameters, and to describe the neurons that are sensitive to the same drivers.
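To make the encoding and decoding steps concrete, here is a minimal sketch in Python of cosine tuning and population-vector decoding. The population size, baseline rates, modulation depths, and noise model are illustrative assumptions, not values taken from our recordings.

    import numpy as np

    rng = np.random.default_rng(0)

    # Cosine tuning: a neuron's expected rate is b0 + m * cos(theta), where
    # theta is the angle between the movement direction and the neuron's
    # preferred direction. For unit vectors p and d this is b0 + m * (p . d).
    def cosine_rate(b0, m, pref, move):
        return b0 + m * (pref @ move)

    # Simulated population with random preferred directions in 3-D.
    n = 100
    prefs = rng.normal(size=(n, 3))
    prefs /= np.linalg.norm(prefs, axis=1, keepdims=True)
    base = rng.uniform(10, 30, n)          # baseline rates (spikes/s), assumed
    mod = rng.uniform(5, 15, n)            # modulation depths, assumed

    true_dir = np.array([0.0, 1.0, 0.0])   # direction the population encodes

    # Observed rates: cosine tuning plus additive noise.
    rates = cosine_rate(base, mod, prefs, true_dir) + rng.normal(0, 2, n)

    # Population-vector decode: weight each neuron's preferred direction by
    # its normalized rate deviation from baseline, then sum over the population.
    weights = (rates - base) / mod
    pop_vec = (weights[:, None] * prefs).sum(axis=0)
    pop_vec /= np.linalg.norm(pop_vec)

    print("decoded direction:", np.round(pop_vec, 3))   # close to (0, 1, 0)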

We are now extending our studies to include interaction with objects. This complex behavior requires explicit and implicit knowledge of the object. This knowledge is key to the ability to place the fingers correctly on an object, to exert the forces needed to maintain a stable grasp, to move the object and, finally, to use the object as a tool. Cognitive processing and action in the motor areas of cortex are essential for this type of behavior. In time, these models can be used to study cognitive operations and may provide a rigorous, robust substrate for research in cognitive theory.

Cortical relation to muscle activity

Although the primary motor cortex has anatomical connectivity to motoneurons in the spinal cord, the functional consequence of motor cortical activity on muscle excitability is complex. The effect of a cortical action potential on muscle activation is determined by the background excitability of the motoneuron, which is set by its many synaptic inputs. Whether an action potential in a cortical neuron can excite an anatomically connected muscle is therefore conditional on the probability that the other inputs to the same motoneuron are also active. This suggests that M1-muscle functional connectivity is context-dependent. We are using a correlation approach to compare single-unit activity recorded in motor cortex with EMG activity during a variety of arm-movement tasks. So far, we have found that the correlation between cortical and muscle activity varies in a consistent way within a single task. For instance, during ellipse drawing, a neuron-muscle pair will be correlated for only a small segment of the trajectory. This correspondence appears to be determined by the relation between a cortical cell's preferred direction, measured in a hand-centered coordinate system, and the movement of the hand induced by an impulse contraction of the studied muscle. This non-stationary functional connectivity is being modeled, and new data are being gathered to detail the general features of the corticomuscular system.
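The flavor of this correlation analysis can be illustrated with a toy simulation. In the sketch below, a simulated unit and EMG envelope share a common drive during only one segment of a movement cycle, and a sliding-window correlation recovers that segment; all signal parameters are invented for illustration and do not come from our data.

    import numpy as np

    rng = np.random.default_rng(1)
    dt = 0.01                            # 10-ms bins
    t = np.arange(0, 4, dt)              # one simulated 4-s drawing cycle

    # Toy signals: the unit and the muscle share a common drive only during
    # one segment of the trajectory (here, 1-2 s into the cycle).
    drive = np.sin(2 * np.pi * t)
    coupled = (t > 1) & (t < 2)
    rate = 20 + 10 * drive * coupled + rng.normal(0, 2, t.size)  # spikes/s
    emg = 5 + 3 * drive * coupled + rng.normal(0, 1, t.size)     # EMG envelope (a.u.)

    # Sliding-window correlation between unit rate and EMG envelope.
    half = 25                            # +/- 0.25-s window
    r = np.full(t.size, np.nan)
    for i in range(half, t.size - half):
        r[i] = np.corrcoef(rate[i - half:i + half], emg[i - half:i + half])[0, 1]

    # The correlation is strong only inside the coupled segment, illustrating
    # non-stationary functional connectivity within a single task.
    print("peak r inside segment :", np.nanmax(r[coupled]))
    print("peak |r| outside      :", np.nanmax(np.abs(r[t < 0.75])))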

Cortical prosthetics

Over the last 20 years we have developed technology to transform cortical activity into a signal that can be used to control a robotic arm, performing movements that closely resemble those carried out by able-bodied individuals with their own limbs.

Arrays of microelectrodes are chronically implanted in the motor cortical areas of monkeys trained to move their arms in three-dimensional space. Single-unit activity recorded from these electrodes is discriminated, and the resulting firing rates are processed with an extraction algorithm that generates a hand-velocity signal every 30 ms.

Early in our experiments, the monkeys worked in a virtual-reality environment, reaching with their hands for targets located in different parts of the 3D workspace. In those experiments, the hand was tracked and displayed as a ball-shaped cursor. Once the animal was trained in the task and the electrodes were implanted, the extracted signal allowed the animal to direct a 3D computer cursor to targets in the virtual-reality environment without moving its own limbs. The monkey accomplished this by modulating its own cortical activity to produce a velocity signal.

Recently, we have replaced the virtual-reality environment with an anthropomorphic robot arm. The extracted velocity signal is used as input to an inverse-kinematics algorithm that computes joint angles for each of the robot's four motors. Initially, a child-sized arm with a fully mobile shoulder and elbow was outfitted with a simple gripper. Using the principles we developed with the VR task, monkeys were trained to use their cortical signals to reach out with the arm in natural movements to grasp and retrieve pieces of food.
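Our controller itself is not reproduced here, but the general step, converting a decoded endpoint-velocity command into joint-angle updates, can be sketched with a damped least-squares inverse of the arm's Jacobian. The 4-DOF kinematic model below (three shoulder angles plus elbow flexion) and the segment lengths are simplifying assumptions for illustration.

    import numpy as np

    L1, L2 = 0.20, 0.20   # upper-arm and forearm lengths (m), illustrative

    def forward_kinematics(q):
        """Endpoint position of a toy 4-DOF arm: three shoulder angles
        (ZYX rotations) followed by elbow flexion about the local y-axis."""
        a, b, c, e = q
        Rz = np.array([[np.cos(a), -np.sin(a), 0],
                       [np.sin(a),  np.cos(a), 0],
                       [0, 0, 1]])
        Ry = np.array([[np.cos(b), 0, np.sin(b)],
                       [0, 1, 0],
                       [-np.sin(b), 0, np.cos(b)]])
        Rx = np.array([[1, 0, 0],
                       [0, np.cos(c), -np.sin(c)],
                       [0, np.sin(c),  np.cos(c)]])
        R_sh = Rz @ Ry @ Rx
        elbow = R_sh @ np.array([L1, 0, 0])
        Re = np.array([[np.cos(e), 0, np.sin(e)],
                       [0, 1, 0],
                       [-np.sin(e), 0, np.cos(e)]])
        return elbow + R_sh @ Re @ np.array([L2, 0, 0])

    def jacobian(q, eps=1e-6):
        """Numerical Jacobian of endpoint position w.r.t. joint angles."""
        J = np.zeros((3, 4))
        for j in range(4):
            dq = np.zeros(4)
            dq[j] = eps
            J[:, j] = (forward_kinematics(q + dq) -
                       forward_kinematics(q - dq)) / (2 * eps)
        return J

    def ik_step(q, v_cmd, dt=0.03, damping=1e-2):
        """Damped least-squares IK: joint velocities that track a decoded
        endpoint-velocity command, one update per 30-ms decode cycle."""
        J = jacobian(q)
        q_dot = J.T @ np.linalg.solve(J @ J.T + damping * np.eye(3), v_cmd)
        return q + q_dot * dt

    # One decode cycle: move the hand at 5 cm/s along +x from a bent posture.
    q = np.array([0.3, 0.2, 0.1, 1.0])
    q_new = ik_step(q, np.array([0.05, 0.0, 0.0]))
    print("hand displacement:", forward_kinematics(q_new) - forward_kinematics(q))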

We extended this control to the wrist and fingers of a more elaborate effector, the Modular Prosthetic Limb ("MPL") developed at the Johns Hopkins University Applied Physics Laboratory. In February 2012, a quadriplegic woman was implanted with two recording arrays at the University of Pittsburgh. Using our approach, she was able to operate the advanced prosthetic to pick up and manipulate a variety of objects and to feed herself.

Using brain signals to control a prosthetic device can be thought of as a control loop in which behavioral output (in this case, motor intention) is estimated from brain activity and then expressed as movement of the prosthetic device, which the subject can see. When the subject recognizes that her movements are "off," she can learn to correct them by modifying the patterns of neural activity that are being recorded by the electrode arrays. This capacity significantly improves subjects' ability to control the device. At the same time, it gives us a rich paradigm for studying learning at the level of single neurons and populations.

Other projects

Engineered learning

Task difficulty can be regulated in a number of different ways, but, in general, learning can be described as the successive completion of increasingly difficult tasks. In information terms, difficulty can be expressed as the ratio between the volume of movements that count as success and the overall volume (entropy) of all possible movements: the smaller this ratio, the more difficult the task. In our reaching paradigm, difficulty is easily controlled by target size. The concept of "target" can be generalized; during object grasping, for instance, the target is the set of finger placements on the object that result in a stable grasp.

Utility curves plot success rate against task difficulty. These curves are sigmoidal: easy tasks have a high success rate, and difficult tasks are rarely completed successfully. The issue for training is where to position the task difficulty on the curve for each training session so as to maximize the learning rate; the objective is to shift the utility curve toward increased difficulty. Within a single training session, if the difficulty is too great, the subject will give up; if it is too low, too few errors will be experienced to generate learning. If the proper point is chosen, the subject's performance will improve during the session, an increase that can be characterized by a classic learning curve (success rate vs. time). This learned improvement can be expected to carry over to the next training session, where it defines a point on a new utility curve that has shifted toward increased difficulty. The overall idea is to estimate each day's utility curve from a limited sample and then set that session's difficulty at an optimal position on the curve. This general approach can be used in a wide range of training paradigms.
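A minimal sketch of this procedure, assuming a logistic form for the utility curve: probe trials estimate the curve's midpoint, and the session difficulty is then set where the predicted success rate sits at an intermediate level. The logistic form, the grid-search fit, and the 70% criterion are illustrative choices, not our actual training protocol.

    import numpy as np

    rng = np.random.default_rng(2)

    def utility(d, d50, slope=1.5):
        """Sigmoidal utility curve: success rate as a function of task
        difficulty d. d50 is the difficulty at 50% success."""
        return 1.0 / (1.0 + np.exp(slope * (d - d50)))

    def fit_d50(d, success, slope=1.5):
        """Crude maximum-likelihood estimate of d50 from a small sample of
        (difficulty, outcome) probe trials, by grid search."""
        grid = np.linspace(0, 10, 201)
        nll = []
        for g in grid:
            p = np.clip(utility(d, g, slope), 1e-9, 1 - 1e-9)
            nll.append(-np.sum(success * np.log(p) + (1 - success) * np.log(1 - p)))
        return grid[int(np.argmin(nll))]

    # Probe trials at a few difficulty levels to estimate today's curve
    # (the "true" d50 of 4.5 is of course unknown to the experimenter).
    probe_d = np.repeat([2.0, 4.0, 6.0], 10)
    probe_s = (rng.random(probe_d.size) < utility(probe_d, d50=4.5)).astype(float)

    d50_est = fit_d50(probe_d, probe_s)

    # Set the session difficulty where predicted success is ~70%: hard enough
    # to generate the errors that drive learning, easy enough that the subject
    # keeps working. The 70% criterion is an illustrative assumption.
    target = 0.7
    session_d = d50_est + np.log(1 / target - 1) / 1.5
    print(f"estimated d50 = {d50_est:.2f}, session difficulty = {session_d:.2f}")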

Tuning function stability

Recently, it has been reported that the tuning functions used to describe the relation between firing rate and movement direction are labile: that their preferred directions shift within a single movement. We have been studying this phenomenon in detail to determine whether it reflects a shortcoming of the tuning model, either in its form or in the parameters on which it is based. Alternatively, the apparent shifts may be due to the noise structure of the firing rates themselves, which makes the preferred direction difficult to estimate.
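The estimation-noise alternative is easy to demonstrate in simulation. In the sketch below, spike counts are drawn repeatedly from a perfectly stationary cosine tuning curve, yet the fitted preferred direction scatters from sample to sample; the trial counts and firing rates are arbitrary illustrative values.

    import numpy as np

    rng = np.random.default_rng(3)

    def fit_pd(dirs, counts):
        """Estimate a preferred direction by linear regression of spike
        counts on cos/sin of movement direction (cosine-tuning model)."""
        X = np.column_stack([np.ones(dirs.size), np.cos(dirs), np.sin(dirs)])
        b = np.linalg.lstsq(X, counts, rcond=None)[0]
        return np.arctan2(b[2], b[1])

    true_pd = np.deg2rad(45.0)
    dirs = rng.uniform(0, 2 * np.pi, 40)       # sampled movement directions
    mean = 2.0 + 1.5 * np.cos(dirs - true_pd)  # expected counts per short window

    # Resample Poisson counts many times from the SAME stationary tuning
    # curve and re-fit the preferred direction each time.
    pds = np.array([fit_pd(dirs, rng.poisson(mean)) for _ in range(1000)])

    # Even though the underlying tuning never changes, the estimated PD
    # scatters from fit to fit; such scatter can masquerade as a "shift".
    print(f"SD of estimated preferred direction: {np.rad2deg(pds).std():.1f} deg")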

Information estimation

Following the work of Paul Fitts, performance can be characterized as information transmission (bits/s). Physical systems can be considered to have a limited information rate, and the maximum transmission rate is referred to as the information capacity. Task duration is therefore a function of the number of bits that need to be transmitted (task difficulty) and the channel capacity. We are exploring this concept as monkeys perform a number of tasks of varying difficulty, both with their own limbs and with brain-controlled devices. This metric will be used to assess different prosthetic approaches, which can be compared to each other as well as to natural performance. Information capacity can be estimated directly from recorded neural activity, and estimates can then be made of how this information is degraded through signal extraction and movement execution.
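As a concrete illustration, Fitts' index of difficulty and the resulting throughput can be computed directly from task geometry and movement times. The sketch below uses the Shannon formulation of the index of difficulty; the distances, widths, and movement times are invented for illustration.

    import numpy as np

    def index_of_difficulty(distance, width):
        """Shannon formulation of Fitts' index of difficulty, in bits."""
        return np.log2(distance / width + 1)

    # Illustrative reaching conditions: target distance and width in cm,
    # with hypothetical mean movement times in seconds.
    distance = np.array([8.0, 16.0, 16.0, 32.0])
    width    = np.array([4.0,  4.0,  2.0,  2.0])
    mt       = np.array([0.45, 0.60, 0.75, 0.90])

    ID = index_of_difficulty(distance, width)

    # Fitts' law: MT = a + b * ID. Fit a and b by least squares;
    # throughput (information capacity) is 1/b in bits/s.
    A = np.column_stack([np.ones(ID.size), ID])
    (a, b), *_ = np.linalg.lstsq(A, mt, rcond=None)
    print(f"ID (bits): {np.round(ID, 2)}")
    print(f"fit: MT = {a:.2f} + {b:.2f} * ID -> throughput ~ {1/b:.1f} bits/s")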

Sensory replacement

Tactile and torque-related signals from the MPL prosthetic arm will be encoded as electrical stimuli used to activate S1 sensory cortex in monkeys carrying out brain-computer-interface-controlled movements. The encoding of these signals will be optimized in the laboratory to maximize their utility for controlling prosthetic movement during object manipulation. This approach will then be used in the next patient study, in which a paralyzed individual will receive two recording arrays in motor cortex and two stimulating arrays in S1.