Parametric Augmentation for Time Series Contrastive Learning
CoRR (2024)
Abstract
Modern techniques like contrastive learning have been effectively used in
many areas, including computer vision, natural language processing, and
graph-structured data. Creating positive examples that assist the model in
learning robust and discriminative representations is a crucial stage in
contrastive learning approaches. The selection of suitable data augmentations is
usually guided by preset human intuition. Because the relevant patterns are easily
recognized by humans, this rule of thumb works well in the vision and language
domains. However, it is impractical to visually inspect the temporal structures
in time series. The diversity of time series augmentations at both the dataset
and instance levels makes it difficult to choose meaningful augmentations on
the fly. In this study, we address this gap by analyzing time series data
augmentation using information theory and summarizing the most commonly adopted
augmentations in a unified format. We then propose a contrastive learning
framework with parametric augmentation, AutoTCL, which can be adaptively
employed to support time series representation learning. The proposed approach
is encoder-agnostic, allowing it to be seamlessly integrated with different
backbone encoders. Experiments on univariate forecasting tasks demonstrate the
highly competitive results of our method, with an average 6.5% reduction in
MSE and 4.7% in MAE over the leading baselines. In classification tasks,
AutoTCL achieves a 1.2% increase in average accuracy.
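
To make the general idea concrete, below is a minimal, illustrative sketch of contrastive learning with a learnable (parametric) augmentation for time series. The framework, class names (ParametricAugmenter, Encoder), the masking form of the augmentation, and the InfoNCE loss are assumptions chosen for illustration; they do not reproduce AutoTCL's actual architecture or training objective, only the pattern of jointly training an augmentation network and an encoder-agnostic backbone on original and augmented views.

```python
# Illustrative sketch only: a learnable augmentation network and a stand-in
# encoder trained jointly with an InfoNCE contrastive loss. Not the AutoTCL
# implementation; names and architecture choices are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ParametricAugmenter(nn.Module):
    """Learns an instance-dependent scaling mask applied to the input series."""
    def __init__(self, seq_len: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(seq_len, seq_len), nn.ReLU(), nn.Linear(seq_len, seq_len)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len); sigmoid keeps the learned factors in (0, 1)
        mask = torch.sigmoid(self.net(x))
        return x * mask


class Encoder(nn.Module):
    """A stand-in backbone; the approach is encoder-agnostic, so any encoder works."""
    def __init__(self, seq_len: int, dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(seq_len, dim), nn.ReLU(), nn.Linear(dim, dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def info_nce(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """Standard InfoNCE: matched rows of z1 and z2 are positives, all others negatives."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)


# Usage: jointly optimize the augmenter and the encoder on a toy batch of series.
seq_len, batch = 128, 32
augmenter, encoder = ParametricAugmenter(seq_len), Encoder(seq_len)
optimizer = torch.optim.Adam(
    list(augmenter.parameters()) + list(encoder.parameters()), lr=1e-3
)

x = torch.randn(batch, seq_len)  # toy univariate series
loss = info_nce(encoder(augmenter(x)), encoder(x))
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"contrastive loss: {loss.item():.4f}")
```

In this sketch the augmentation is produced by a trainable network rather than a fixed, hand-picked transform, which is the property the abstract refers to as parametric augmentation; the specific masking design here is a simplifying assumption.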
Keywords
contrastive learning, neural networks, stream data analysis