
Semi-supervised Prediction Method for Time Series Based on Monte Carlo and Time Fusion Feature Attention

Yang, Jing Zhang, Lulu Wang

APPLIED SOFT COMPUTING (2024)

Kunming University of Science and Technology

Abstract
Accurate and reliable forecasting of time series is a challenging task in cyber-physical systems (CPS). Traditional time series prediction models struggle to provide accurate predictions for diverse types of time series data, especially with missing data, due to variations in scale and type and the complexity of real-world production environments. In this paper, we introduce a hybrid model named Temporal Feature Fusion Attention-based Monte Carlo Semi-supervised Long Short-Term Memory (LSTM) network to address this issue. The model encodes the current state and historical information using a current-time feature state vector. It then calculates hidden feature vectors for the time series at different time points (past, present, and future), as well as for the current Monte Carlo-filtered sequences. This approach leverages the correlation of time series features and transfers and fuses crucial historical features within the sequence with the optimized sequence features produced by the Monte Carlo algorithm. Our experiments confirm that with 10% of the labeled data missing, the proposed method improves the Mean Absolute Percentage Error (MAPE) by 17.827% compared to the baseline LSTM model. Moreover, our method surpasses other state-of-the-art methods across four distinct time series datasets, achieving the best prediction results.
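The abstract describes fusing attention-weighted LSTM hidden states with features extracted from a Monte Carlo-filtered copy of the input window. The sketch below is only a minimal illustration of that fusion idea under our own assumptions, not the authors' implementation: the class and module names (AttentionFusionLSTM, encoder, mc_encoder, attn, head) and the single-layer architecture are hypothetical, and the paper's actual model, filtering procedure, and semi-supervised training loop are more elaborate.

```python
# Minimal sketch (not the authors' implementation): soft-attention fusion of
# LSTM hidden states with features from a Monte Carlo-filtered copy of the
# same window, plus the MAPE metric reported in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionFusionLSTM(nn.Module):  # hypothetical class name
    def __init__(self, input_dim: int, hidden_dim: int):
        super().__init__()
        self.encoder = nn.LSTM(input_dim, hidden_dim, batch_first=True)     # raw sequence
        self.mc_encoder = nn.LSTM(input_dim, hidden_dim, batch_first=True)  # filtered sequence
        self.attn = nn.Linear(hidden_dim, 1)      # soft-attention scores over time steps
        self.head = nn.Linear(2 * hidden_dim, 1)  # one-step-ahead prediction

    def forward(self, x: torch.Tensor, x_mc: torch.Tensor) -> torch.Tensor:
        # x, x_mc: (batch, T, input_dim); x_mc is the Monte Carlo-filtered window
        h, _ = self.encoder(x)                # hidden feature vectors for every time step
        _, (h_mc, _) = self.mc_encoder(x_mc)  # final hidden state of the filtered window
        weights = F.softmax(self.attn(h).squeeze(-1), dim=-1)  # (batch, T) attention weights
        context = (weights.unsqueeze(-1) * h).sum(dim=1)       # attention-fused temporal features
        fused = torch.cat([context, h_mc[-1]], dim=-1)         # concatenate with MC features
        return self.head(fused)

def mape(y_true: torch.Tensor, y_pred: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # Mean Absolute Percentage Error, in percent
    return 100.0 * torch.mean(torch.abs((y_true - y_pred) / (y_true + eps)))
```

A forward pass takes both the raw window and its Monte Carlo-filtered counterpart; how the filtering and the semi-supervised use of unlabeled data are carried out is defined in the paper itself and is not reproduced here.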
Key words
Time series forecasting, Cyber-physical system, Semi-supervised, Monte Carlo approach, Soft-attention mechanism