AI-Driven Iterative Receiver for Superimposed Pilot Schemes in MIMO-OFDM Systems

Xinjie Li, Xingyu Zhou, Jing Zhang, Chao-Kai Wen, Shi Jin

IEEE Wireless Communications and Networking Conference (2025)

National Mobile Communications Research Laboratory

Abstract
The superimposed pilot (SIP) transmission scheme shows great potential for improving spectral efficiency in MIMO-OFDM systems. However, it also introduces complex challenges for receiver design, particularly due to pilot contamination and data interference. To address these issues, the joint channel estimation, signal detection, and decoding (JCDD) framework has emerged as a promising solution, utilizing iterative refinement to enhance receiver performance. Despite this, existing JCDD methods either focus heavily on theoretical analysis, often neglecting practical application scenarios, or experience performance limitations due to inherent design flaws. In this paper, we propose an advanced iterative JCDD receiver that effectively mitigates the negative effects of pilot contamination and data interference. Our approach improves traditional linear minimum mean-square error (LMMSE) channel estimation by incorporating state-of-the-art techniques—specifically variational message passing (VMP) and deep learning (DL)—allowing for better adaptation to varying channel conditions. Extensive empirical evaluations demonstrate that our proposed SIP receiver not only surpasses the conventional orthogonal pilot (OP) scheme but also exhibits outstanding adaptability in mismatched channel environments, thanks to the VMP and DL-based improvements.
Key words
MIMO-OFDM, superimposed pilots, iterative receivers, deep learning, variational message passing
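
To make the estimation step concrete, the following is a minimal sketch (not the authors' implementation) of conventional per-subcarrier LMMSE channel estimation under a superimposed pilot, i.e., the baseline that the paper's VMP- and DL-aided iterative receiver refines. The SISO setup, parameter values, and variable names are illustrative assumptions; the unknown data symbols are simply treated as additional zero-mean interference, as a first receiver iteration would do before any soft data estimates are available.

# Minimal sketch (illustrative, not the paper's receiver): per-subcarrier LMMSE
# channel estimation when a superimposed pilot (SIP) shares resources with data.
# Assumptions: SISO link, unit total transmit power split between pilot and data,
# i.i.d. Rayleigh channel per subcarrier, data treated as extra interference.
import numpy as np

rng = np.random.default_rng(0)

K = 64                  # number of OFDM subcarriers
sigma_h2 = 1.0          # prior channel variance per subcarrier
rho = 0.2               # fraction of transmit power assigned to the pilot
snr_db = 10.0
sigma_n2 = 10 ** (-snr_db / 10)

# Known superimposed pilot and unknown QPSK data on the same resource elements.
x_p = np.sqrt(rho) * np.exp(1j * 2 * np.pi * rng.random(K))               # pilot
x_d = np.sqrt((1 - rho) / 2) * ((2 * rng.integers(0, 2, K) - 1)
                                + 1j * (2 * rng.integers(0, 2, K) - 1))   # data
sigma_d2 = 1 - rho      # average data power per subcarrier

# True channel and received signal: y = h * (x_p + x_d) + n
h = np.sqrt(sigma_h2 / 2) * (rng.standard_normal(K) + 1j * rng.standard_normal(K))
n = np.sqrt(sigma_n2 / 2) * (rng.standard_normal(K) + 1j * rng.standard_normal(K))
y = h * (x_p + x_d) + n

# LMMSE gain: the data term h * x_d contributes zero-mean interference of
# variance sigma_h2 * sigma_d2, which is what an iterative JCDD receiver would
# progressively cancel once soft data estimates become available.
gain = sigma_h2 * np.conj(x_p) / (sigma_h2 * np.abs(x_p) ** 2
                                  + sigma_h2 * sigma_d2 + sigma_n2)
h_hat = gain * y

mse = np.mean(np.abs(h - h_hat) ** 2)
print(f"per-subcarrier LMMSE MSE with SIP: {mse:.4f}")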