How to Teach Bayesian Reasoning: An Empirical Study Comparing Four Different Probability Training Courses
LEARNING AND INSTRUCTION (2025)
University of Regensburg
Abstract
Background: Bayesian reasoning is understood as the updating of hypotheses based on new evidence (e.g., the likelihood of an infection based on medical test results). As experts and students alike often struggle with Bayesian reasoning, previous research has emphasised the importance of identifying supportive strategies for instruction.
Aims: This study examines the learning of Bayesian reasoning by comparing five experimental conditions: two "level-2" training courses (double tree and unit square, each based on natural frequencies), two "level-1" training courses (natural frequencies only and the school-specific "probability tree" visualisation), and a "level-0" control group (no training course). Ultimately, the aim is to enable experts to make the right decisions in high-stakes situations.
Sample: N = 515 university students (of law or medicine).
Method: In a pre-post-follow-up training study, participants' judgments regarding Bayesian reasoning were investigated across the five experimental conditions. Furthermore, prior mathematical achievement was used to predict Bayesian reasoning skills with a linear mixed model.
Results: All training courses increase Bayesian reasoning, yet learning with the double tree shows the best results. Interactions with prior mathematical achievement generally imply that students with higher prior mathematical achievement learn more, yet with notable differences: instruction with the unit square is better suited for high achievers than for low achievers, whereas the double tree training course is the only one equally suited to all levels of prior mathematical achievement.
Conclusion: The best learning of Bayesian reasoning occurs with strategies not yet commonly used in school.
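The abstract defines Bayesian reasoning as updating a hypothesis (e.g., an infection) in light of new evidence (a test result) and notes that the training courses build on natural frequencies. The minimal sketch below illustrates that computation in both formats; the prevalence, sensitivity, and false-positive rate are hypothetical values chosen purely for illustration and are not taken from the study.

```python
# Illustrative sketch (hypothetical numbers, not from the paper):
# P(infection | positive test), computed via Bayes' theorem and via
# natural frequencies.

prevalence = 0.01            # assumed P(infection)
sensitivity = 0.90           # assumed P(positive | infection)
false_positive_rate = 0.05   # assumed P(positive | no infection)

# Bayes' theorem with conditional probabilities
p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
posterior = sensitivity * prevalence / p_positive

# The same reasoning with natural frequencies (per 10,000 people)
population = 10_000
infected = prevalence * population                                # 100 people
true_positives = sensitivity * infected                           # 90 people
false_positives = false_positive_rate * (population - infected)   # 495 people
posterior_nf = true_positives / (true_positives + false_positives)

print(f"P(infection | positive) = {posterior:.3f}")       # ~0.154
print(f"Natural-frequency version = {posterior_nf:.3f}")  # same value
```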
Key words
Bayesian reasoning, Training study, Double tree, Unit square, Natural frequencies, Probability tree