Group Action Equivariance And Generalized Convolution In Multi-Layer Neural Networks

2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019

Abstract
Convolutional neural networks have achieved great success in speech, image, and video signal processing tasks in recent years. There have been several attempts to justify the convolutional architecture and to generalize the convolution operation to other data types such as graphs and manifolds. Based on group representation theory and noncommutative harmonic analysis, it has recently been shown that the so-called group equivariance requirement of a feed-forward neural network necessitates a convolutional architecture. In this paper, building on the familiar concepts of linear time-invariant systems, we develop an elementary proof of the same result. The nonlinear activation function, a necessary component of practical deep neural networks, has been glossed over in previous analyses of the connection between equivariance and convolution. We identify sufficient conditions for nonlinear activation functions to preserve equivariance, and hence the necessity of the group convolution structure. Our analysis is simple and intuitive, and holds the potential to be applied to more challenging scenarios such as non-transitive domains and multiple simultaneous equivariances.
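The equivariance property discussed in the abstract can be illustrated numerically for the simplest case, the cyclic group Z_n acting by translation. The sketch below (a minimal NumPy illustration, not the paper's construction; all function names are ours) checks that circular convolution commutes with cyclic shifts, and that a pointwise nonlinearity such as ReLU also commutes with the shift, so stacking the two preserves equivariance:

```python
import numpy as np

def cyclic_shift(x, s):
    """Group action of Z_n: translate x by s positions (circularly)."""
    return np.roll(x, s)

def group_conv(x, w):
    """Circular (group) convolution on Z_n: y[i] = sum_j x[(i - j) mod n] w[j]."""
    n = len(x)
    return np.array([sum(x[(i - j) % n] * w[j] for j in range(n)) for i in range(n)])

def relu(x):
    """Pointwise nonlinearity; acts identically at every group element."""
    return np.maximum(x, 0.0)

rng = np.random.default_rng(0)
x = rng.standard_normal(8)   # input signal on Z_8
w = rng.standard_normal(8)   # filter on Z_8
s = 3                        # shift amount

# Equivariance of the linear layer: conv(shift(x)) == shift(conv(x))
assert np.allclose(group_conv(cyclic_shift(x, s), w),
                   cyclic_shift(group_conv(x, w), s))

# The pointwise nonlinearity preserves equivariance: relu(shift(y)) == shift(relu(y))
y = group_conv(x, w)
assert np.allclose(relu(cyclic_shift(y, s)), cyclic_shift(relu(y), s))
```

The second assertion reflects the abstract's point about activation functions: a nonlinearity applied identically at every group element commutes with the group action, so the composed layer remains equivariant.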
Keywords
group equivariance, convolutional neural network, algebraic convolution, nonlinear activation function