Scaling Laws for Galaxy Images
arXiv (2024)
Abstract
We present the first systematic investigation of supervised scaling laws
outside of an ImageNet-like context: on images of galaxies. We use 840k galaxy
images and over 100M annotations by Galaxy Zoo volunteers, comparable in scale
to ImageNet-1K. We find that adding annotated galaxy images provides a power-law
improvement in performance across all architectures and all tasks, while
adding trainable parameters is effective only for some (typically more
subjectively challenging) tasks. We then compare the downstream performance of
finetuned models pretrained on either ImageNet-12k alone vs. additionally
pretrained on our galaxy images. We achieve an average relative error rate
reduction of 31% across 5 downstream tasks of scientific interest. Our
galaxy-pretrained finetuned models are more label-efficient and, unlike their
ImageNet-12k-pretrained equivalents, often achieve linear transfer performance
equal to that of end-to-end finetuning. We find relatively modest additional
downstream benefits from scaling model size, implying that scaling alone is not
sufficient to address our domain gap, and suggest that practitioners with
qualitatively different images might benefit more from in-domain adaptation
followed by targeted downstream labelling.
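
The power-law data scaling described above can be illustrated with a minimal fit. The sketch below is not the paper's code: the dataset sizes and error values are invented for illustration, and fitting error = a * N^(-b) by linear regression in log-log space is one common way such scaling exponents are estimated.

```python
# Illustrative sketch only (not the paper's code): fit a power law
# error = a * N^(-b) to hypothetical (dataset size, error) pairs.
# A power law is linear in log-log space: log(error) = log(a) - b*log(N).
import numpy as np

# Hypothetical measurements; the paper fits curves of this kind across
# architectures and Galaxy Zoo tasks, up to its full 840k images.
n_images = np.array([50_000, 100_000, 200_000, 400_000, 840_000])
val_error = np.array([0.120, 0.105, 0.092, 0.081, 0.071])

slope, intercept = np.polyfit(np.log(n_images), np.log(val_error), deg=1)
a, b = np.exp(intercept), -slope
print(f"error ~ {a:.3f} * N^(-{b:.3f})")  # b is the scaling exponent
```

A positive fitted exponent b indicates that error keeps falling as annotated images are added, which is the "power law improvement" the abstract refers to.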