DPHM-Net: De-Redundant Multi-Period Hybrid Modeling Network for Long-Term Series Forecasting

World Wide Web - Internet and Web Information Systems (2024)

Shandong University

Cited 1 | Views 21
Abstract
Deep learning models have been widely applied to long-term forecasting and have achieved significant success; incorporating inductive biases such as periodicity to model multi-granularity representations of time series is a commonly employed design approach in forecasting methods. However, existing methods still face challenges related to information redundancy during the extraction of the inductive bias and the learning of multi-granularity features. Redundant information can prevent the model from acquiring a comprehensive temporal representation, thereby degrading its predictive performance. To address these issues, we propose a De-Redundant Multi-Period Hybrid Modeling Network (DPHM-Net) that effectively eliminates redundant information both from the series inductive bias extraction mechanism and from the multi-granularity series features learned in the time series representation. In DPHM-Net, we propose an efficient time series representation learning process based on a period inductive bias and introduce the concept of de-redundancy among multiple time series into the representation learning process for a single time series. Additionally, we design a specialized gated unit to dynamically balance the elimination weights between series features and redundant semantic information. Extensive experiments on real-world datasets demonstrate the advanced performance and high efficiency of our method in long-term forecasting tasks compared with previous state-of-the-art approaches.
Key words
Time series forecasting, Multi-granularity learning, Information de-redundancy, Inductive bias extraction
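
The gated de-redundancy idea described in the abstract can be pictured with a minimal sketch. The snippet below is an illustrative reconstruction only, written in a PyTorch style with hypothetical names (GatedDeRedundancyUnit, feature, redundant); it is not the authors' released implementation. It shows a gate, computed from a granularity-specific series feature and a shared (potentially redundant) representation, that weights how much of the redundant component is removed from the feature.

    # Minimal sketch of a gated de-redundancy unit (assumption: the gate decides
    # how much of a redundant shared representation to subtract from a
    # period-specific feature). Illustrative only, not the paper's code.
    import torch
    import torch.nn as nn

    class GatedDeRedundancyUnit(nn.Module):
        """Balances a period-specific feature against redundant semantic
        information shared across granularities (hypothetical design)."""
        def __init__(self, d_model: int):
            super().__init__()
            # Gate is computed from both inputs, as in standard gating mechanisms.
            self.gate = nn.Sequential(
                nn.Linear(2 * d_model, d_model),
                nn.Sigmoid(),
            )
            self.proj = nn.Linear(d_model, d_model)

        def forward(self, feature: torch.Tensor, redundant: torch.Tensor) -> torch.Tensor:
            # feature, redundant: [batch, seq_len, d_model]
            g = self.gate(torch.cat([feature, redundant], dim=-1))  # elimination weights in (0, 1)
            # Keep the informative part of the feature and remove the gated
            # share of the redundant representation.
            return feature - g * self.proj(redundant)

    if __name__ == "__main__":
        unit = GatedDeRedundancyUnit(d_model=64)
        x = torch.randn(8, 96, 64)       # period-specific series features
        shared = torch.randn(8, 96, 64)  # redundant cross-granularity information
        out = unit(x, shared)
        print(out.shape)  # torch.Size([8, 96, 64])

The subtractive form is one plausible reading of "eliminating redundant information while balancing elimination weights"; a residual or attention-based variant would fit the same description in the abstract.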