
Combining Mendelian Randomization with the Sibling Comparison Design

Statistics in Medicine (2024)

Karolinska Institutet

Abstract
Mendelian randomization (MR) is a popular epidemiologic study design that uses genetic variants as instrumental variables (IVs) to estimate causal effects, while accounting for unmeasured confounding. The validity of the MR design hinges on certain IV assumptions, which may sometimes be violated due to dynastic effects, population stratification, or assortative mating. Since these mechanisms act through parental factors, it was recently suggested that the bias resulting from violations of the IV assumptions can be reduced by combining the MR design with the sibling comparison design, which implicitly controls for all factors that are constant within families. In this article, we provide a formal discussion of this combined MR-sibling design. We derive conditions under which the MR-sibling design is unbiased, and we relate these to the corresponding conditions for the standard MR and sibling comparison designs. We proceed by considering scenarios where all three designs are biased to some extent, and discuss under which conditions the MR-sibling design can be expected to have less bias than the other two designs. We finally illustrate the theoretical results and conclusions with an application to real data, in a study of low-density lipoprotein and diastolic blood pressure using data from the Swedish Twin Registry.
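To make the core idea concrete, below is a minimal simulation sketch in Python. It is not taken from the paper; all variable names, effect sizes, and the simple linear data-generating model are illustrative assumptions. It shows how a family-level factor that influences both the offspring genotype and the outcome (as with dynastic effects or population stratification) biases a standard Wald-ratio MR estimate, whereas the same ratio computed on within-sibling-pair differences cancels anything constant within a family.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pairs = 50_000

# Family-level factor shared by both siblings (e.g., parental genotype or
# family environment). It violates the IV assumptions by influencing the
# offspring genotypes as well as the exposure and the outcome.
F = rng.normal(size=n_pairs)

# Sibling genotypes: correlated within a family through F, plus
# independent Mendelian segregation noise for each sibling.
G1 = 0.5 * F + rng.normal(size=n_pairs)
G2 = 0.5 * F + rng.normal(size=n_pairs)

beta = 0.3  # assumed true causal effect of exposure on outcome

def simulate(G):
    X = G + F + rng.normal(size=n_pairs)         # exposure
    Y = beta * X + F + rng.normal(size=n_pairs)  # outcome, confounded by F
    return X, Y

X1, Y1 = simulate(G1)
X2, Y2 = simulate(G2)

def wald_ratio(g, x, y):
    """One-sample Wald-ratio MR estimate: cov(G, Y) / cov(G, X)."""
    return np.cov(g, y)[0, 1] / np.cov(g, x)[0, 1]

# Standard MR pools all individuals; biased here because F affects G.
G = np.concatenate([G1, G2])
X = np.concatenate([X1, X2])
Y = np.concatenate([Y1, Y2])
print(f"standard MR: {wald_ratio(G, X, Y):.3f}")   # drifts above 0.3

# MR-sibling: within-pair differences cancel the family-constant F,
# so the genotype difference is again a valid instrument.
print(f"MR-sibling:  {wald_ratio(G1 - G2, X1 - X2, Y1 - Y2):.3f}")  # ~0.3
```

Under these assumptions the pooled estimate drifts away from the true effect of 0.3 while the within-pair estimate recovers it, mirroring the paper's motivation that sibling differencing implicitly controls for factors constant within families. The sketch does not reproduce the paper's formal bias conditions or its comparison of the three designs.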
Key words
bias, causal inference, Mendelian randomization, sibling comparison