Self Supervised Contrastive Learning on Multiple Breast Modalities Boosts Classification Performance.

PRIME@MICCAI (2021)

Abstract
Medical imaging classification tasks require models that deliver high-accuracy results, and training such models requires large annotated datasets. These datasets are not openly available, are costly to build, and their annotation requires professional knowledge in the medical domain; in the medical field specifically, datasets can also be inherently small. Self-supervised methods allow the construction of models that learn image representations on large unlabeled image sets; these models can then be fine-tuned on smaller datasets for various tasks. With breast cancer being a leading cause of death among women worldwide, precise lesion classification is crucial for detecting malignant cases. Through a set of experiments on 30K unlabeled mammography (MG) and ultrasound (US) breast images, we demonstrate a practical way to use self-supervised contrastive learning to improve breast cancer classification. Contrastive learning is a machine learning technique that teaches a model which data points are similar or different by learning representations that pull similar elements together and push dissimilar elements apart. Our goal is to show the advantages of self-supervised pre-training on a large unlabeled set compared to training on small sets from scratch. We compare training from scratch on small labeled MG and US datasets against self-supervised contrastive pre-training and supervised pre-training. Our results demonstrate that the improvement in biopsy classification from self-supervision is consistent across both modalities. We show how to apply self-supervised methods to medical data and propose a novel way of training contrastive learning on MG that yields higher-specificity classification.
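The abstract does not name the exact contrastive objective used, so the following is only an illustrative sketch of a SimCLR-style NT-Xent loss, the most common formulation of the "pull similar together, push dissimilar apart" idea it describes. The function name, temperature value, and toy tensors are our assumptions, not details from the paper.

```python
# Minimal sketch of a SimCLR-style NT-Xent contrastive loss (assumed,
# not taken from the paper). Each image yields two augmented views;
# matching views form positive pairs, all other views act as negatives.
import torch
import torch.nn.functional as F


def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor,
                 temperature: float = 0.5) -> torch.Tensor:
    """NT-Xent loss over a batch of positive pairs (z1[i], z2[i]).

    z1, z2: [N, D] projections of two augmented views of the same N images.
    """
    n = z1.size(0)
    # L2-normalize so the dot product below is cosine similarity.
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # [2N, D]
    sim = z @ z.t() / temperature                        # [2N, 2N] similarities
    sim.fill_diagonal_(float("-inf"))                    # exclude self-similarity
    # The positive for row i is row i+N (and vice versa).
    targets = torch.cat([torch.arange(n, 2 * n),
                         torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)


# Toy usage: random "projections" of two augmented views of 8 images.
z_a, z_b = torch.randn(8, 128), torch.randn(8, 128)
loss = nt_xent_loss(z_a, z_b)
```

In the setup the abstract describes, two augmentations of the same unlabeled MG or US image would form a positive pair, other images in the batch serve as negatives, and the pre-trained encoder is subsequently fine-tuned on the small labeled biopsy-classification sets.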
Keywords
Self-supervised, Mammography, Ultrasound, Contrastive learning