Trainable TV-L^1 model as recurrent nets for low-level vision

Neural Computing and Applications (2020)

Abstract
TV-L^1 is a classical diffusion–reaction model for low-level vision tasks, which can be solved by a duality-based iterative algorithm. Considering the recent success of end-to-end learned representations, we propose a TV-LSTM network that unfolds the duality-based iterations of TV-L^1 into long short-term memory (LSTM) cells. In particular, we formulate the iterations as customized layers of an LSTM neural network. The proposed end-to-end trainable TV-LSTMs can then be naturally connected with various task-specific networks, e.g., for optical flow, image decomposition, and event-based optical flow estimation. Extensive experiments on optical flow estimation and structure + texture decomposition demonstrate the effectiveness and efficiency of the proposed method.
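The unfolding idea the abstract describes — run a fixed number of duality-based iterations and treat each iteration as one recurrent cell — can be sketched with Chambolle's projection algorithm for the closely related TV-L^2 (ROF) model. This is a minimal stand-in, not the paper's TV-LSTM: the function names, the parameter values `lam`, `tau`, `n_steps`, and the test signal are all illustrative assumptions.

```python
import numpy as np

def grad(u):
    # Forward difference with Neumann boundary (last entry zero).
    g = np.zeros_like(u)
    g[:-1] = u[1:] - u[:-1]
    return g

def div(p):
    # Backward difference: negative adjoint of grad.
    d = np.empty_like(p)
    d[0] = p[0]
    d[1:] = p[1:] - p[:-1]
    return d

def tv_unrolled(f, lam=0.5, tau=0.25, n_steps=50):
    """Chambolle's dual iteration for 1D TV-L^2 denoising, unrolled
    into a fixed number of steps. Each loop body is the kind of
    recurrent 'cell' that the paper replaces with an LSTM layer
    (lam, tau, n_steps are illustrative choices, not the paper's)."""
    p = np.zeros_like(f)          # dual variable: the recurrent state
    for _ in range(n_steps):
        g = grad(div(p) - f / lam)
        p = (p + tau * g) / (1.0 + tau * np.abs(g))
    return f - lam * div(p)       # recover the primal solution

noisy = np.array([0., 0., 5., 0., 0., 4., 4., 4., 0., 0.])
u = tv_unrolled(noisy)            # edges preserved, spike shrunk
```

In the learned version, the fixed step size `tau` and the hand-set projection would become trainable gates inside each cell, which is what makes the unrolled solver end-to-end differentiable.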
Key words
Total variation, Optical flow, Recurrent network, Image decomposition