Correntropy For Random Processes: Properties And Applications In Signal Processing
Information Theoretic Learning: Renyi's Entropy and Kernel Perspectives (2010)
Abstract
The previous chapter defined cross-correntropy for the case of a pair of scalar random variables, and presented applications
in statistical inference. This chapter extends the definition of correntropy for the case of random (or stochastic) processes,
which are index sets of random variables. In statistical signal processing the index set is time; we are interested in random
variables that are a function of time and the goal is to quantify their statistical dependencies (although the index set can
also be defined over inputs or channels of multivariate random variables). The autocorrelation function, which measures the
statistical dependency between random variables at two different times, is conventionally utilized for this goal. Hence, we
generalize the definition of autocorrelation to an autocorrentropy function. The name correntropy was coined to reflect the fact that the function "looks like" correlation but the sum over the lags (or over dimensions of
the multivariate random variable) is the information potential (i.e., the argument of Renyi’s quadratic entropy). The definition
of cross-correntropy for random variables carries over to time series with a minor but important change in the domain of the
variables that now are an index set of lags. When it is clear from the context, we simplify the terminology and refer to the
different functions (autocorrentropy or cross-correntropy) simply as the correntropy function, but keep the word "function" to distinguish them from the quantities of Chapter 10.
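To make the definition concrete, the sketch below estimates the autocorrentropy function of a time series from samples. It is a minimal illustration, not the book's implementation: it assumes a Gaussian kernel of bandwidth `sigma` (a common choice in the correntropy literature) and estimates V(tau) = E[kappa(x_t, x_{t-tau})] by a sample average over all pairs at each lag; the function name `autocorrentropy` and its parameters are hypothetical.

```python
import numpy as np

def autocorrentropy(x, max_lag, sigma=1.0):
    """Sample estimate of the autocorrentropy function V(tau).

    V(tau) = E[kappa_sigma(x_t, x_{t-tau})], estimated here by averaging
    a Gaussian kernel over all available sample pairs at each lag.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    v = np.empty(max_lag + 1)
    for tau in range(max_lag + 1):
        # Differences between samples separated by `tau` time steps.
        d = x[tau:] - x[:n - tau]
        # Gaussian kernel evaluated on each pair, then averaged.
        v[tau] = np.mean(np.exp(-d**2 / (2.0 * sigma**2)))
    return v
```

Note that V(0) is always 1 for the Gaussian kernel (zero distance between a sample and itself), and averaging the estimated V(tau) over lags yields an information-potential-like quantity, mirroring the connection to Renyi's quadratic entropy mentioned above.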