The Local Geometry Of Orthogonal Dictionary Learning Using L1 Minimization

CONFERENCE RECORD OF THE 2019 FIFTY-THIRD ASILOMAR CONFERENCE ON SIGNALS, SYSTEMS & COMPUTERS (2019)

Abstract
Feature learning, which extracts concise and generalizable representations of data, is one of the central problems in machine learning and signal processing. Sparse dictionary learning, also known as sparse coding, is distinguished from other feature learning techniques by its exploitation of sparsity, which allows the formulation of nonconvex optimizations that simultaneously uncover a structured dictionary and sparse representations. Despite the popularity of dictionary learning in applications, the landscapes of the optimizations that enable effective learning largely remain a mystery. This work characterizes the local optimization geometry for a simplified version of sparse coding in which the L1 norm of the sparse coefficient matrix is minimized subject to orthogonal dictionary constraints. In particular, we show that the ground-truth dictionary and coefficient matrix are locally identifiable under the assumption that the coefficient matrix is sufficiently sparse and the number of training data columns is sufficiently large.
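The formulation described in the abstract can be sketched numerically. The code below is a minimal illustration, not the paper's method: it assumes the standard setup in which data Y = AX is generated by an orthogonal ground-truth dictionary A and a sparse coefficient matrix X, so that the L1 objective over orthogonal candidates reduces to ||A^T Y||_1 (all variable names and the sparsity level are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)

n, p = 4, 50  # dictionary size, number of training data columns (illustrative)
# Ground-truth orthogonal dictionary via QR factorization of a random matrix.
A_true = np.linalg.qr(rng.standard_normal((n, n)))[0]
# Sparse coefficient matrix: each entry nonzero with probability 0.2.
X_true = rng.standard_normal((n, p)) * (rng.random((n, p)) < 0.2)
Y = A_true @ X_true  # observed training data

def l1_objective(A, Y):
    """L1 norm of the coefficient matrix X = A.T @ Y.

    Valid for orthogonal A, since then A is invertible with inverse A.T,
    so Y = A X implies X = A.T @ Y.
    """
    return np.abs(A.T @ Y).sum()

# At the ground-truth dictionary, the objective equals ||X_true||_1.
assert np.isclose(l1_objective(A_true, Y), np.abs(X_true).sum())
```

A local identifiability result of the kind stated above would mean that, among orthogonal dictionaries near A_true (up to sign and permutation ambiguities), A_true attains a strict local minimum of this objective when X_true is sufficiently sparse and p is sufficiently large.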
Keywords
local geometry, orthogonal dictionary learning, feature learning, generalizable representations, machine learning, signal processing, sparse dictionary learning, sparse coding, sparsity exploitation, nonconvex optimizations, structured dictionary, sparse representations, local optimization geometry, sparse coefficient matrix, orthogonal dictionary constraints, ground-truth dictionary, L1 minimization