Uniform Priors for Data-Efficient Learning.

IEEE Conference on Computer Vision and Pattern Recognition (2022)

Abstract
Few- or zero-shot adaptation to novel tasks is important for the scalability and deployment of machine learning models. It is therefore crucial to find properties that encourage more transferable features in deep networks for generalization. In this paper, we show that models that learn uniformly distributed features from the training data are able to perform better transfer learning at test time. Motivated by this, we evaluate our method, uniformity regularization (UR), on its ability to facilitate adaptation to unseen tasks and data across six distinct domains: Few-shot Learning with Images, Few-shot Learning with Language, Deep Metric Learning, 0-Shot Domain Adaptation, Out-of-Distribution Classification, and Neural Radiance Fields. Across all experiments, we show that with UR we learn robust vision systems that consistently offer benefits over baselines trained without uniformity regularization, and we achieve state-of-the-art performance in Deep Metric Learning and in Few-shot Learning with images and language.
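For reference, below is a minimal sketch of how a uniformity regularizer of this kind could be implemented in PyTorch. It uses the standard hypersphere uniformity term of Wang & Isola (2020), which penalizes the mean pairwise Gaussian potential of L2-normalized embeddings; `encoder`, `task_loss`, and `lambda_u` are hypothetical placeholders, and the paper's exact formulation of UR may differ.

```python
import torch
import torch.nn.functional as F

def uniformity_loss(features: torch.Tensor, t: float = 2.0) -> torch.Tensor:
    """Sketch of a uniformity regularizer: log of the mean pairwise
    Gaussian potential of L2-normalized embeddings on the unit
    hypersphere (cf. Wang & Isola, 2020). The paper's UR may differ."""
    z = F.normalize(features, dim=1)            # project embeddings onto the unit hypersphere
    sq_dists = torch.pdist(z, p=2).pow(2)       # all pairwise squared Euclidean distances
    return sq_dists.mul(-t).exp().mean().log()  # lower when embeddings spread out uniformly

# Hypothetical usage: add the regularizer to a task loss with a small weight.
# embeddings = encoder(images)
# loss = task_loss(embeddings, labels) + lambda_u * uniformity_loss(embeddings)
```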
Keywords
data-efficient learning, zero-shot adaptation, machine learning models, transferable features, deep networks, uniformly distributed features, training data, uniformity regularization, UR, unseen tasks, Few-shot Learning, Deep Metric Learning, 0-Shot Domain Adaptation, Out-of-Distribution classification, Few-shot learning, uniform priors