Better and simpler error analysis of the Sinkhorn–Knopp algorithm for matrix scaling

Symposium on Discrete Algorithms (2020)

Abstract
Given a non-negative n × m real matrix A, the matrix scaling problem is to determine whether the rows and columns can be scaled so that each row and each column sums to a specified positive target value. The Sinkhorn–Knopp algorithm is a simple, classic procedure that alternately scales all rows and all columns to meet these targets. The focus of this paper is the worst-case theoretical analysis of this algorithm. We present an elementary convergence analysis that improves upon the previous best bound. In a nutshell, our approach is to (i) give a simple bound on the number of iterations needed for the KL-divergence between the current row sums and the target row sums to drop below a specified threshold δ, and (ii) show that, for a suitable choice of δ, whenever the KL-divergence is below δ, the ℓ₁-error or the ℓ₂-error is below ε. The well-known Pinsker's inequality immediately allows us to translate a bound on the KL-divergence into a bound on the ℓ₁-error. To bound the ℓ₂-error in terms of the KL-divergence, we establish a new inequality, referred to as (KL vs ℓ₁/ℓ₂). This inequality strengthens Pinsker's inequality and may be of independent interest.
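
The alternating scaling step the abstract describes is short enough to sketch. Below is a minimal Python/NumPy sketch of the Sinkhorn–Knopp iteration with an ℓ₁ stopping rule on the row sums; the function name, parameters, and the particular stopping rule are illustrative assumptions, not the paper's exact algorithm or analysis.

```python
import numpy as np

def sinkhorn_knopp(A, r, c, eps=1e-6, max_iter=10_000):
    """Alternately rescale the rows and columns of a non-negative matrix A
    toward target row sums r and target column sums c.

    A minimal sketch of the classic procedure; assumes a solution exists
    (in particular, sum(r) == sum(c)).
    """
    B = A.astype(float).copy()
    for _ in range(max_iter):
        # Scale row i by r_i / (current sum of row i).
        B *= (r / B.sum(axis=1))[:, None]
        # Scale column j by c_j / (current sum of column j).
        B *= (c / B.sum(axis=0))[None, :]
        # After column scaling the column sums are exact,
        # so only the row-sum error remains to be checked.
        if np.abs(B.sum(axis=1) - r).sum() <= eps:
            break
    return B

# Example: scale a positive 3x3 matrix to be doubly stochastic
# (all row and column targets equal to 1).
A = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]])
B = sinkhorn_knopp(A, r=np.ones(3), c=np.ones(3))
print(B.sum(axis=1), B.sum(axis=0))  # both approximately [1, 1, 1]
```

For reference, the Pinsker's inequality invoked above states that ‖p − q‖₁ ≤ √(2 · KL(p‖q)) for probability distributions p and q, which is what converts the KL bound from step (i) into an ℓ₁ bound.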
Keywords
Matrix scaling, Alternating minimization, KL divergence, Matchings