Overcomplete Transform Learning With The Log Regularizer

IEEE Access (2018)

Abstract
Transform learning has been proposed as a new and effective formulation of analysis dictionary learning, in which the l(0) norm or the l(1) norm is generally used as the sparsity constraint. Sparse solutions can then be obtained by hard or soft thresholding. Hard thresholding is essentially a greedy procedure that yields only approximate solutions, while soft thresholding introduces a bias on the large elements. In this paper, we propose to employ the log regularizer instead of the l(0) or l(1) norm in the overcomplete transform learning problem. The resulting minimization problem is nonconvex because of the log regularizer. We solve it with a simple proximal alternating minimization method, in which the proximal operator of the log penalty admits a closed-form solution. This yields an efficient and fast overcomplete transform learning algorithm that alternates between an analysis coding stage and a transform update stage. Theoretical analysis shows that the proposed algorithm obtains sparser solutions and more accurate results. Numerical experiments verify that the proposed algorithm outperforms existing transform learning approaches based on the l(0) or l(1) norm. Furthermore, the proposed algorithm is on par with state-of-the-art image denoising algorithms.
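For concreteness, below is a minimal sketch of the elementwise proximal operator that the analysis coding stage would rely on, assuming the log penalty takes the common form lam * log(eps + |x|); the exact parameterization used in the paper, as well as the names prox_log, lam, eps, W, and Y, are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def prox_log(y, lam, eps):
    """Elementwise proximal operator of g(x) = lam * log(eps + |x|):
    argmin_x 0.5 * (x - y)**2 + lam * log(eps + |x|).
    For each entry, the stationarity condition reduces to a quadratic in |x|,
    which gives a closed-form candidate; the candidate is kept only if it
    attains a lower objective value than x = 0 (otherwise the entry is set to 0)."""
    y = np.asarray(y, dtype=float)
    a = np.abs(y)
    disc = (a + eps) ** 2 - 4.0 * lam                  # discriminant of the quadratic
    root = 0.5 * ((a - eps) + np.sqrt(np.maximum(disc, 0.0)))
    cand = np.where(disc > 0.0, np.maximum(root, 0.0), 0.0)
    # keep the nonzero candidate only where it beats the zero solution
    f_cand = 0.5 * (cand - a) ** 2 + lam * np.log(eps + cand)
    f_zero = 0.5 * a ** 2 + lam * np.log(eps)
    mag = np.where(f_cand < f_zero, cand, 0.0)
    return np.sign(y) * mag

# Analysis coding stage (sketch): with an overcomplete transform W and training
# signals Y as columns, the sparse codes are the proximal map of the transformed data.
W = np.random.randn(128, 64)    # overcomplete transform: more rows than the signal dimension
Y = np.random.randn(64, 500)    # training signals
X = prox_log(W @ Y, lam=0.1, eps=0.05)
```

Compared with this operator, hard thresholding applies a plain cutoff and soft thresholding shrinks every surviving entry by a constant, whereas the log prox shrinks small entries strongly while leaving large entries nearly unbiased, which matches the motivation stated in the abstract.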
Keywords
Analysis dictionary learning, transform learning, log regularizer, proximal alternating minimization