Neural networks with linear threshold activations: structure and algorithms

Integer Programming and Combinatorial Optimization, IPCO 2022 (2023)

Abstract
In this article we present new results on neural networks with linear threshold activation functions x ↦ 1_{x > 0}. We precisely characterize the class of functions representable by such neural networks and show that 2 hidden layers are necessary and sufficient to represent any function in the class. This is a surprising result in light of recent exact representability investigations for neural networks using other popular activation functions such as rectified linear units (ReLU). We also give upper and lower bounds on the sizes of the neural networks required to represent any function in the class. Finally, we design an algorithm that solves the empirical risk minimization (ERM) problem to global optimality for these neural networks with a fixed architecture. The algorithm's running time is polynomial in the size of the data sample, provided the input dimension and the size of the network architecture are treated as fixed constants. The algorithm is unique in that it works for any architecture with any number of layers, whereas previous polynomial-time globally optimal algorithms work only for restricted classes of architectures. Using these insights, we propose a new class of neural networks that we call shortcut linear threshold neural networks. To the best of our knowledge, this way of designing neural networks has not been explored before in the literature. We show that these neural networks have several desirable theoretical properties.
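To make the activation concrete: the forward pass of a network with linear threshold activations is sketched below. This is only an illustrative toy (the function `ltn_forward` and the random weights are our own, not the paper's construction); it shows why such a network computes a piecewise constant function of its input, since every hidden unit outputs only 0 or 1.

```python
import numpy as np

def threshold(z):
    # Linear threshold activation: x -> 1_{x > 0}, applied elementwise.
    return (z > 0).astype(float)

def ltn_forward(x, layers, w_out, b_out):
    """Forward pass of a linear threshold network.

    layers: list of (W, b) pairs, one per hidden layer.
    The hidden activations are always 0/1 vectors, so the output
    takes only finitely many values (a piecewise constant function).
    """
    h = x
    for W, b in layers:
        h = threshold(W @ h + b)
    return float(w_out @ h + b_out)

# Toy example: 1 input, two hidden layers of widths 3 and 2
# (arbitrary random weights, for illustration only).
rng = np.random.default_rng(0)
layers = [(rng.standard_normal((3, 1)), rng.standard_normal(3)),
          (rng.standard_normal((2, 3)), rng.standard_normal(2))]
y = ltn_forward(np.array([0.5]), layers, rng.standard_normal(2), 0.0)
```

Because each hidden unit's output is binary, the network partitions the input space into polyhedral regions on which the output is constant, which is the structural fact underlying the paper's representability and ERM results.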
Keywords
Neural Networks, Complexity bounds, Polyhedral theory