Defining Neural Network Architecture through Polytope Structures of Dataset
CoRR (2024)
Abstract
Current theoretical and empirical research in neural networks suggests that
complex datasets require large network architectures for thorough
classification, yet the precise nature of this relationship remains unclear.
This paper tackles this issue by defining upper and lower bounds for neural
network widths, which are informed by the polytope structure of the dataset in
question. We also delve into the application of these principles to simplicial
complexes and specific manifold shapes, explaining how the requirement for
network width varies in accordance with the geometric complexity of the
dataset. Moreover, we develop an algorithm to investigate a converse situation
where the polytope structure of a dataset can be inferred from its
corresponding trained neural networks. Through our algorithm, it is established
that popular datasets such as MNIST, Fashion-MNIST, and CIFAR10 can be
efficiently encapsulated using no more than two polytopes with a small number
of faces.