Conditional Rényi Divergence Saddlepoint and the Maximization of α-Mutual Information.

Entropy (2019)

Abstract
Rényi-type generalizations of entropy, relative entropy and mutual information have found numerous applications throughout information theory and beyond. While there is consensus that the ways A. Rényi generalized entropy and relative entropy in 1961 are the "right" ones, several candidates have been put forth as possible mutual informations of order α. In this paper we lend further evidence to the notion that a Bayesian measure of statistical distinctness introduced by R. Sibson in 1969 (closely related to Gallager's E0 function) is the most natural generalization, lending itself to explicit computation and maximization, as well as closed-form formulas. This paper considers general (not necessarily discrete) alphabets and extends the major analytical results on the saddle-point and saddle-level of the conditional relative entropy to the conditional Rényi divergence. Several examples illustrate the main application of these results, namely, the maximization of α-mutual information with and without constraints.
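
For orientation, a minimal sketch of the standard definitions behind the abstract, written in the notation commonly used in this literature; the paper's own conventions may differ in detail. The Rényi divergence of order α between distributions P and Q is

\[ D_\alpha(P \,\|\, Q) = \frac{1}{\alpha-1} \log \int \left(\frac{dP}{dQ}\right)^{\alpha} dQ, \qquad \alpha \in (0,1)\cup(1,\infty). \]

Sibson's α-mutual information is obtained by minimizing the conditional Rényi divergence over output distributions,

\[ I_\alpha(X;Y) = \min_{Q_Y} D_\alpha\!\left(P_{Y|X} \,\|\, Q_Y \,\middle|\, P_X\right), \qquad D_\alpha(P_{Y|X} \,\|\, Q_Y \mid P_X) = D_\alpha(P_{Y|X} P_X \,\|\, Q_Y \times P_X), \]

which for discrete alphabets admits the closed form

\[ I_\alpha(X;Y) = \frac{\alpha}{\alpha-1} \log \sum_{y} \left( \sum_{x} P_X(x)\, P_{Y|X}(y \mid x)^{\alpha} \right)^{1/\alpha}, \]

and relates to Gallager's E0 function via E_0(\rho, P_X) = \rho\, I_{1/(1+\rho)}(X;Y).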
Keywords
information measures, relative entropy, conditional relative entropy, mutual information, Rényi divergence, α-mutual information, channel capacity, minimax redundancy