Transfer of Task-Probability-Induced Biases in Parallel Dual-Task Processing Occurs in Similar, but Is Constrained in Distinct Task Sets
Journal of Experimental Psychology: Learning, Memory, and Cognition (2024)
University of Greifswald
Abstract
Although humans often multitask, little is known about how the processing of concurrent tasks is managed. The present study investigated whether adjustments in parallel processing during multitasking are local (task-specific) or global (task-unspecific). In three experiments, participants performed one of three tasks: a primary task or, if this task did not require a response, one of two background tasks (i.e., prioritized processing paradigm). To manipulate the degree of parallel processing, we presented blocks consisting mainly of primary or background task trials. In Experiment 1, the frequency manipulation was distributed equally across the two background tasks. In Experiments 2 and 3, only one background task was frequency-biased (inducer task). The other background task was presented equally often in all blocks (diagnostic task) and served to test whether processing adjustments transferred. In all experiments, blocks with frequent background tasks yielded stronger interference between primary and background tasks (primary task performance) and improved background task performance. Thus, resource sharing appeared to increase with high background task probabilities even under triple task requirements. Importantly, these adjustments generalized across the background tasks when they were conceptually and visually similar (Experiment 2). Implementing more distinct background tasks limited the transfer: Adjustments were restricted to the inducer task in background task performance and only small transfer was observed in primary task performance (Experiment 3). Overall, the results indicate that the transfer of adjustments in parallel processing is unrestricted for similar, but limited for distinct tasks, suggesting that task similarity affects the generality of resource allocation in multitasking. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
Key words
multitasking, prioritized processing paradigm, task probability, crosstalk, resource allocation