Convergence of a Neural Network for Sparse Approximation Using the Nonsmooth Łojasiewicz Inequality

Neural Networks (2013)

Citations: 12 | Views: 9
Abstract
Sparse approximation is an optimization program that produces state-of-the-art results in many applications in signal processing and engineering. To deploy this approach in real time, it is necessary to develop faster solvers than are currently available in digital hardware. The Locally Competitive Algorithm (LCA) is a dynamical system designed to solve the class of sparse approximation problems in continuous time, but before implementing this network in analog VLSI, it is essential to provide performance guarantees. This paper presents new results on the convergence of the LCA neural network. Using recently developed methods that make use of the Łojasiewicz inequality for nonsmooth functions, we prove that the output and state trajectories converge to a single fixed point. This improves on previous results by guaranteeing convergence to a singleton even when the optimization program has infinitely many, nonisolated solution points.
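For readers unfamiliar with the LCA, the sketch below simulates the standard continuous-time LCA dynamics from the literature (Rozell et al.), τ u̇ = −u + Φᵀy − (ΦᵀΦ − I)a with output a = T_λ(u) given by soft thresholding, using simple forward-Euler integration. The dictionary Φ, threshold λ, time constant τ, step size, and iteration count are illustrative assumptions, not values from the paper, which analyzes the continuous-time system itself.

```python
import numpy as np

def soft_threshold(u, lam):
    """Soft-thresholding activation a = T_lam(u)."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lca(y, Phi, lam=0.1, tau=0.01, dt=0.001, steps=5000):
    """Forward-Euler sketch of the LCA ODE
    tau * du/dt = -u + Phi^T y - (Phi^T Phi - I) a,  a = T_lam(u).
    All parameter values here are illustrative choices."""
    n = Phi.shape[1]
    u = np.zeros(n)                  # internal state u(t)
    b = Phi.T @ y                    # constant driving input
    G = Phi.T @ Phi - np.eye(n)      # lateral-inhibition matrix
    for _ in range(steps):
        a = soft_threshold(u, lam)   # output a(t)
        u += (dt / tau) * (-u + b - G @ a)
    return soft_threshold(u, lam)

# Toy usage: recover a sparse code under a random unit-norm dictionary.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((64, 256))
Phi /= np.linalg.norm(Phi, axis=0)   # normalize dictionary columns
x_true = np.zeros(256)
x_true[[3, 77]] = [1.0, -0.5]        # sparse ground truth
a_hat = lca(Phi @ x_true, Phi)
```

In a digital simulation like this one, the fixed point the trajectory settles at is the quantity the paper's convergence guarantee concerns; the result cited in the abstract ensures the continuous-time trajectories converge to a single point even when the underlying optimization program has nonisolated minimizers.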
Keywords
approximation theory, convergence of numerical methods, neural nets, optimisation, signal processing, LCA neural network, dynamical system, locally competitive algorithm, neural network convergence, nonsmooth Łojasiewicz inequality, nonsmooth functions, optimization program, output trajectory, signal engineering, sparse approximation, state trajectory