ContIG: Self-supervised Multimodal Contrastive Learning for Medical Imaging with Genetics

IEEE Conference on Computer Vision and Pattern Recognition (2022)

Abstract
High annotation costs are a substantial bottleneck in applying modern deep learning architectures to clinically relevant medical use cases, substantiating the need for novel algorithms to learn from unlabeled data. In this work, we propose ContIG, a self-supervised method that can learn from large datasets of unlabeled medical images and genetic data. Our approach aligns images and several genetic modalities in the feature space using a contrastive loss. We design our method to integrate multiple modalities of each individual person in the same model end-to-end, even when the available modalities vary across individuals. Our procedure outperforms state-of-the-art self-supervised methods on all evaluated downstream benchmark tasks. We also adapt gradient-based explainability algorithms to better understand the learned cross-modal associations between the images and genetic modalities. Finally, we perform genome-wide association studies on the features learned by our models, uncovering interesting relationships between images and genetic data. Source code at: https://github.com/HealthML/ContIG
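The abstract does not spell out the contrastive objective. As a rough illustration only, the minimal PyTorch sketch below shows a symmetric InfoNCE-style alignment between per-individual image and genetic embeddings, where the positive pair for each image is the genetic sample of the same individual. The function name, temperature value, and the two-direction symmetric formulation are assumptions for illustration, not the paper's exact loss; ContIG additionally handles multiple genetic modalities and individuals with missing modalities, which this sketch omits.

```python
import torch
import torch.nn.functional as F

def image_genetics_nce(img_emb: torch.Tensor, gen_emb: torch.Tensor,
                       temperature: float = 0.1) -> torch.Tensor:
    """Symmetric InfoNCE between batches of image and genetic embeddings."""
    # Normalize so the dot product equals cosine similarity.
    img = F.normalize(img_emb, dim=-1)
    gen = F.normalize(gen_emb, dim=-1)
    # (B, B) matrix: entry (i, j) is the similarity of image i to genetics j.
    logits = img @ gen.t() / temperature
    # Positives lie on the diagonal: image i pairs with genetics of person i.
    targets = torch.arange(img.size(0), device=img.device)
    loss_i2g = F.cross_entropy(logits, targets)      # image -> genetics
    loss_g2i = F.cross_entropy(logits.t(), targets)  # genetics -> image
    return 0.5 * (loss_i2g + loss_g2i)

# Toy usage: 8 individuals, 128-dim outputs from modality-specific encoders.
img_emb = torch.randn(8, 128)
gen_emb = torch.randn(8, 128)
print(image_genetics_nce(img_emb, gen_emb))
```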
Keywords
Medical, biological and cell microscopy; Explainable computer vision; Representation learning; Self- & semi- & meta-learning; Vision + X; Vision applications and systems