Time Series Representation Learning with Supervised Contrastive Temporal Transformer
arXiv (2024)
Abstract
Finding effective representations for time series data is a useful but
challenging task. Several works utilize self-supervised or unsupervised
learning methods to address this. However, there still remains the open
question of how to leverage available label information for better
representations. To answer this question, we exploit pre-existing techniques in
time series and representation learning domains and develop a simple, yet novel
fusion model called Supervised COntrastive
Temporal Transformer (SCOTT). We first investigate suitable
augmentation methods for various types of time series data to assist with
learning change-invariant representations. Second, we combine Transformer and
Temporal Convolutional Networks in a simple way to efficiently learn both
global and local features. Finally, we simplify Supervised Contrastive Loss for
representation learning of labelled time series data. We preliminarily evaluate
SCOTT on a downstream task, Time Series Classification, using 45 datasets from
the UCR archive. The results show that with the representations learnt by
SCOTT, even a weak classifier can perform similarly to or better than existing
state-of-the-art models (best performance on 23/45 datasets and highest rank
against 9 baseline models). Afterwards, we investigate SCOTT's ability to
address a real-world task, online Change Point Detection (CPD), on two
datasets: a human activity dataset and a surgical patient dataset. We show that
the model performs with high reliability and efficiency on the online CPD
problem (∼98% and ∼97% area under precision-recall curve
respectively). Furthermore, we demonstrate the model's potential in tackling
early detection and show it performs best compared to other candidates.
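The abstract's third contribution builds on Supervised Contrastive Loss (Khosla et al., 2020), which pulls together embeddings of same-class samples and pushes apart different-class ones. The paper's simplified variant is not specified here, so the sketch below implements the standard batch-wise supervised contrastive loss over L2-normalised embeddings; the function name, NumPy formulation, and default temperature are illustrative assumptions, not the authors' code.

```python
import numpy as np

def supervised_contrastive_loss(z, labels, tau=0.1):
    """Standard supervised contrastive loss over a batch.

    z      : (n, d) array of embeddings (normalised to unit length below)
    labels : (n,) integer class labels
    tau    : temperature (0.1 is a common default, assumed here)
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit-normalise rows
    sim = z @ z.T / tau                                # pairwise similarities
    n = len(labels)
    eye = np.eye(n, dtype=bool)
    # Denominator: softmax over all *other* samples (self excluded).
    sim_no_self = np.where(eye, -np.inf, sim)          # exp(-inf) -> 0
    log_prob = sim - np.log(np.exp(sim_no_self).sum(axis=1, keepdims=True))
    # Positives: same-label pairs, excluding the anchor itself.
    pos = (labels[:, None] == labels[None, :]) & ~eye
    # Average -log p over each anchor's positives, then over anchors.
    per_anchor = -(log_prob * pos).sum(axis=1) / np.maximum(pos.sum(axis=1), 1)
    return per_anchor.mean()
```

With embeddings that already cluster by class, the loss is small; assigning labels that cut across those clusters raises it, which is the gradient signal the encoder trains against.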