Invertibility of Convolutional Generative Networks from Partial Measurements

ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018)

Cited by 88 | Views 59
Abstract
The problem of inverting generative neural networks (i.e., recovering the input latent code from partial network output), motivated by image inpainting, has recently been studied in prior work focused on fully-connected networks. In this work, we present new theoretical results for convolutional networks, which are more widely used in practice. The network inversion problem is highly non-convex, and hence is typically computationally intractable and comes without optimality guarantees. However, we rigorously prove that, for a 2-layer convolutional generative network with ReLU activations and Gaussian-distributed random weights, the input latent code can be recovered from the network output efficiently using simple gradient descent. This new theoretical finding implies that, under our assumptions, the mapping from the low-dimensional latent space to the high-dimensional image space is one-to-one. Moreover, the same conclusion holds even when the network output is only partially observed (i.e., with missing pixels). We further demonstrate empirically that the conclusion extends to networks with multiple layers, other activation functions (leaky ReLU, sigmoid, and tanh), and weights trained on real datasets.
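
The following is a minimal sketch (not the authors' code) of the inversion procedure the abstract describes: a 2-layer transposed-convolution generator with ReLU activations and Gaussian random weights, inverted from partially observed output by plain gradient descent on the latent code. All sizes, the mask ratio, the step size, and the iteration count are illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# 2-layer convolutional generator G: latent z (4x8x8) -> image (3x32x32).
G = nn.Sequential(
    nn.ConvTranspose2d(4, 8, kernel_size=4, stride=2, padding=1, bias=False),
    nn.ReLU(),
    nn.ConvTranspose2d(8, 3, kernel_size=4, stride=2, padding=1, bias=False),
    nn.ReLU(),
)
for p in G.parameters():
    nn.init.normal_(p, mean=0.0, std=0.1)  # Gaussian-distributed random weights
    p.requires_grad_(False)                # weights stay fixed; only z is optimized

# Ground-truth latent code and its partially observed image (inpainting setting).
z_true = torch.randn(1, 4, 8, 8)
x_true = G(z_true)
mask = (torch.rand_like(x_true) > 0.3).float()  # ~70% of pixels observed
y = mask * x_true                               # partial measurements

# Invert the network: simple gradient descent on z from a random start,
# minimizing the squared error over the observed pixels only.
z = torch.randn(1, 4, 8, 8, requires_grad=True)
opt = torch.optim.SGD([z], lr=1.0)
for step in range(5000):
    opt.zero_grad()
    loss = ((mask * (G(z) - y)) ** 2).mean()
    loss.backward()
    opt.step()

rel_err = torch.norm(z.detach() - z_true) / torch.norm(z_true)
print(f"final loss {loss.item():.3e}, relative latent error {rel_err:.3e}")
```

The layer widths above are chosen so that each layer expands the dimension (256 → 2048 → 3072), roughly in the spirit of the expansivity assumptions under which results of this kind are proved; the exact conditions in the paper may differ.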
Keywords
gradient descent,neural networks,convolutional neural networks,activation function