Flexible Distribution Alignment: Towards Long-tailed Semi-supervised Learning with Proper Calibration
arXiv (2023)
Abstract
Long-tailed semi-supervised learning (LTSSL) represents a practical scenario
for semi-supervised applications, challenged by skewed labeled distributions
that bias classifiers. This problem is often aggravated by discrepancies
between labeled and unlabeled class distributions, leading to biased
pseudo-labels, neglect of rare classes, and poorly calibrated probabilities. To
address these issues, we introduce Flexible Distribution Alignment (FlexDA), a
novel adaptive logit-adjusted loss framework designed to dynamically estimate
and align predictions with the actual distribution of unlabeled data and
achieve a balanced classifier by the end of training. FlexDA is further
enhanced by a distillation-based consistency loss, promoting fair data usage
across classes and effectively leveraging underconfident samples. This method,
encapsulated in ADELLO (Align and Distill Everything All at Once), proves
robust against label shift, significantly improves model calibration in LTSSL
contexts, and surpasses previous state-of-the-art approaches across multiple
benchmarks, including CIFAR100-LT, STL10-LT, and ImageNet127, addressing class
imbalance challenges in semi-supervised learning. Our code will be made
available upon paper acceptance.
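For background on the kind of adjustment the abstract refers to: the sketch below is a generic logit-adjusted cross-entropy, not the paper's FlexDA loss. The class prior, temperature `tau`, and toy inputs are illustrative assumptions; in the paper's setting the prior over unlabeled data would be estimated dynamically during training.

```python
import numpy as np

def logit_adjusted_loss(logits, labels, class_prior, tau=1.0):
    """Cross-entropy on prior-adjusted logits (generic sketch, not FlexDA).

    Adding tau * log(prior) to the logits makes frequent (head) classes
    harder to predict during training, so the unadjusted classifier is
    more balanced across head and tail classes at test time.
    """
    adjusted = logits + tau * np.log(class_prior)          # (N, C)
    adjusted = adjusted - adjusted.max(axis=1, keepdims=True)  # stable log-softmax
    log_probs = adjusted - np.log(np.exp(adjusted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

# Toy long-tailed prior over 3 classes (hypothetical numbers).
prior = np.array([0.7, 0.2, 0.1])
logits = np.array([[2.0, 1.0, 0.5],
                   [0.3, 1.5, 0.2]])
labels = np.array([0, 1])
loss = logit_adjusted_loss(logits, labels, prior)
```

With a uniform prior the adjustment adds the same constant to every logit and cancels in the softmax, recovering plain cross-entropy; the skew of the prior is what drives the rebalancing.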