Location- and Object-Based Representational Mechanisms Account for Bilateral Field Advantage in Multiple-Object Tracking.

eNeuro (2024)

Abstract
Keeping track of multiple visually identical and independently moving objects is a remarkable feature of the human visual system. Theoretical accounts of this ability focus on resource-based models that describe parametric decreases in performance with increasing task demands (i.e., more relevant items, closer distances, higher speed). Additionally, the presence of two central tracking resources, one within each hemisphere, has been proposed, allowing for independent maintenance of moving targets within each visual hemifield. Behavioral evidence in favor of such a model shows that human subjects can track almost twice as many targets across both hemifields as within a single hemifield. A number of recent publications argue for two separate and parallel tracking mechanisms during standard object-tracking tasks that allow the relevant information to be maintained in a location-based and an object-based manner, and unique electrophysiological correlates have been identified for each of these processes. The current study shows that these electrophysiological components are differentially present during tracking within the left or the right hemifield. The present results suggest that targets are mostly maintained as object-based representations during left-hemifield tracking, while location-based resources are preferentially engaged during right-hemifield tracking. Interestingly, the manner of representation does not seem to affect behavioral performance within subjects, whereas the electrophysiological component indicating object-based tracking does correlate with performance across subjects. We propose that hemifield independence during multiple-object tracking may reflect an underlying hemispheric bias for parallel location-based and object-based tracking mechanisms.