Video Compressed Sensing Reconstruction via an Untrained Network with Low-Rank Regularization


Deep image prior (DIP) is an emerging technique showing that the structure of an untrained network can serve as an excellent prior for image restoration. It bridges the gap between training-based and training-free methods and exhibits considerable potential for image compressed sensing (CS) reconstruction. In this article, we extend DIP and propose a novel Low-Rank Regularization Video Compressed Sensing Network for CS video reconstruction (dubbed LRR-VCSNet). We explore the use of a low-rank latent tensor with an untrained network for global low-rank regularization of video reconstruction, and we also exploit interframe low-rank approximation for framewise nonlocal low-rank regularization in the data space. In addition, we design the untrained network around an encoder-decoder architecture to improve performance. Extensive experiments on six standard CIF video sequences show that LRR-VCSNet significantly outperforms traditional video CS methods and achieves competitive results compared with state-of-the-art training-based video CS methods.
Keywords
Image reconstruction, Video sequences, Electronics packaging, Correlation, Compressed sensing, Training, Loss measurement, Deep image prior, latent space and data space, low-rank regularization, spatiotemporal correlation, Video compressed sensing
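The interframe low-rank approximation mentioned in the abstract rests on a standard idea: stacking temporally correlated frames as rows of a matrix yields an approximately low-rank matrix, which can be projected onto its leading singular subspace. The sketch below illustrates this building block via truncated SVD; the function name, shapes, and rank choice are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def low_rank_approx(frames, rank):
    """Project a stack of video frames onto a rank-`rank` subspace.

    frames: array of shape (T, H, W). The T frames are flattened into a
    T x (H*W) matrix, so interframe correlation appears as approximate
    low-rankness across the rows. (Illustrative helper, not from the paper.)
    """
    T, H, W = frames.shape
    M = frames.reshape(T, H * W)
    # Truncated SVD: zero out all but the `rank` largest singular values.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s[rank:] = 0.0
    M_lr = (U * s) @ Vt
    return M_lr.reshape(T, H, W)

# Example: frames that are scaled copies of one pattern form a rank-1 stack,
# so the rank-1 projection recovers them exactly.
rng = np.random.default_rng(0)
base = rng.standard_normal((8, 8))
frames = np.stack([(0.5 + 0.1 * t) * base for t in range(5)])
approx = low_rank_approx(frames, rank=1)
print(np.allclose(frames, approx))  # True: rank-1 stack is reproduced exactly
```

In a reconstruction loop, such a projection (or a soft-thresholded variant penalizing the nuclear norm) would act as the regularization step applied between network updates.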