
Visual Information Following Object Grasp Supports Digit Position Variability and Swift Anticipatory Force Control

Journal of Neurophysiology (2023)

University of Oregon

Abstract
Anticipatory force control underlying dexterous manipulation has historically been understood to rely on visual object properties and on sensorimotor memories associated with previous experiences with similar objects. However, it is becoming increasingly recognized that anticipatory force control also relies on how an object is grasped. Experiments that allow unconstrained grasp contact points while requiring participants to prevent tilting of an object with an off-centered mass show trial-to-trial variations in digit position and subsequent scaling of lift forces, all before feedback of object properties becomes available. Here, we manipulated the availability of visual information before reach onset and after grasp contact (with no vision during the reach) to determine the contribution and timing of visual information processing to the scaling of fingertip forces during dexterous manipulation at flexible contact points. Results showed that anticipatory force control was similarly successful irrespective of the timing and availability of visual information, quantified as an appropriate compensatory torque at lift onset that counters the external torque of an object with a left or right center of mass. However, the way in which anticipatory force control was achieved varied depending on the availability of visual information. Visual information following grasp contact was associated with greater use of an asymmetric thumb and index finger grasp configuration to generate a compensatory torque, greater digit position variability, and faster fingertip force scaling and sensorimotor learning. This result supports the hypothesis that visual information at a critical and functionally relevant time point following grasp contact supports variable and swift digit-based force control for dexterous object manipulation.

NEW & NOTEWORTHY: Humans excel in dexterous object manipulation by precisely coordinating grasp points and fingertip forces, as highlighted in scenarios requiring an object's torque to be countered in advance; e.g., lifting a teacup without spilling demands a unique digit force pattern based on the grip configuration at lift onset. Here, we show that visual information following grasp contact, a critical and functionally relevant time point, supports digit position variability and swift anticipatory force control to achieve a dexterous motor goal.
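To make the torque accounting concrete, here is a minimal sketch (not from the paper; the function name, variable names, and sample values are illustrative assumptions) of the decomposition commonly used in this line of grasp research, in which the compensatory torque at lift onset sums a load-force-sharing term and a digit-position term:

def compensatory_torque(grip_width, d_thumb, d_index,
                        fn_thumb, fn_index, lf_thumb, lf_index):
    # Compensatory torque (N*m) about the grip axis at lift onset.
    # grip_width: horizontal distance between thumb and index contacts (m)
    # d_thumb, d_index: vertical centers of pressure of each digit (m)
    # fn_*: normal (grip) forces (N); lf_*: tangential (load) forces (N)
    load_term = (lf_thumb - lf_index) * grip_width / 2.0               # unequal load sharing
    position_term = (d_thumb - d_index) * (fn_thumb + fn_index) / 2.0  # vertical digit offset
    return load_term + position_term

# A collinear grasp (d_thumb == d_index) must generate the torque entirely
# through unequal load forces; an offset (asymmetric) grasp, as favored when
# vision is available after contact, recruits the normal forces as well.
print(compensatory_torque(grip_width=0.06, d_thumb=0.02, d_index=0.0,
                          fn_thumb=10.0, fn_index=10.0,
                          lf_thumb=2.0, lf_index=2.0))   # -> 0.2 N*m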
Key words
anticipatory force control, feedforward motor control, grasp, object manipulation, visual feedback