Analyzing and Exploring Training Recipes for Large-Scale Transformer-Based Weather Prediction
arXiv (2024)
Abstract
The rapid rise of deep learning (DL) in numerical weather prediction (NWP)
has led to a proliferation of models that forecast atmospheric variables with
skill comparable or superior to traditional physics-based NWP. However, among
these leading DL models, there is wide variance in both the training settings
and the architectures used. Further, the lack of thorough ablation studies
makes it hard to discern which components are most critical to success. In
this work, we show that it is possible to attain high forecast skill even with
relatively off-the-shelf architectures, simple training procedures, and
moderate compute budgets. Specifically, we train a minimally modified SwinV2
transformer on ERA5 data and find that it attains superior forecast skill when
compared against IFS. We present ablations on key aspects of the training
pipeline, exploring different loss functions, model sizes and depths, and
multi-step fine-tuning to investigate their effect. We also examine model
performance with metrics beyond the typical ACC and RMSE, and investigate how
performance scales with model size.
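
The abstract mentions multi-step fine-tuning, a standard technique in DL weather models in which the model's own prediction is fed back as input and the loss is accumulated across the rollout so that gradients flow through every step. The sketch below illustrates the general idea only; it is not the authors' code, and the names model, loss_fn, and the data layout are assumptions.

    # Minimal sketch of multi-step (autoregressive) fine-tuning.
    # Assumptions: `model` maps a state tensor to the next state,
    # `targets` lists the ground-truth future states in order.
    import torch

    def rollout_loss(model, x0, targets, loss_fn):
        """x0: initial state (B, C, H, W); targets: one future state per step."""
        state, total = x0, 0.0
        for target in targets:
            state = model(state)  # feed the previous prediction back in
            total = total + loss_fn(state, target)
        return total / len(targets)

In practice, training typically starts with single-step prediction and the rollout length is increased gradually during fine-tuning, since backpropagating through many steps is memory-intensive.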
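For readers unfamiliar with the headline metrics, ACC (anomaly correlation coefficient) and RMSE are conventionally computed with cosine-of-latitude weighting on ERA5's regular lat-lon grid, as in WeatherBench-style evaluation. The following is a minimal sketch under that assumption, not the paper's evaluation code; array shapes and names are illustrative.

    # Latitude-weighted RMSE and ACC on a regular (lat, lon) grid.
    import numpy as np

    def lat_weights(lat_deg):
        """Cosine-of-latitude weights, normalized to mean 1."""
        w = np.cos(np.deg2rad(lat_deg))
        return w / w.mean()

    def weighted_rmse(forecast, truth, lat_deg):
        """Latitude-weighted RMSE over a (lat, lon) field."""
        w = lat_weights(lat_deg)[:, None]  # broadcast over longitude
        return np.sqrt(np.mean(w * (forecast - truth) ** 2))

    def weighted_acc(forecast, truth, climatology, lat_deg):
        """Latitude-weighted correlation of anomalies (field minus climatology)."""
        w = lat_weights(lat_deg)[:, None]
        fa = forecast - climatology
        ta = truth - climatology
        num = np.sum(w * fa * ta)
        den = np.sqrt(np.sum(w * fa ** 2) * np.sum(w * ta ** 2))
        return num / den

On the 0.25-degree ERA5 grid commonly used by these models, lat_deg would be, e.g., np.linspace(90, -90, 721).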