Self-Improved Learning for Scalable Neural Combinatorial Optimization
arXiv (2024)
Abstract
The end-to-end neural combinatorial optimization (NCO) method shows promising
performance in solving complex combinatorial optimization problems without the
need for expert design. However, existing methods struggle with large-scale
problems, hindering their practical applicability. To overcome this limitation,
this work proposes a novel Self-Improved Learning (SIL) method for better
scalability of neural combinatorial optimization. Specifically, we develop an
efficient self-improved mechanism that enables direct model training on
large-scale problem instances without any labeled data. Powered by an
innovative local reconstruction approach, the model iteratively generates
better solutions on its own, which then serve as pseudo-labels to guide
efficient model training.
In addition, we design a linear complexity attention mechanism for the model to
efficiently handle large-scale combinatorial problem instances with low
computation overhead. Comprehensive experiments on the Travelling Salesman
Problem (TSP) and the Capacitated Vehicle Routing Problem (CVRP) with up to
100K nodes in both uniform and real-world distributions demonstrate the
superior scalability of our method.
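To make the self-improved training loop concrete, here is a minimal, self-contained sketch in plain NumPy. It is not the authors' implementation: the neural policy and its supervised update are stubbed out, and the `reconstruct_segment` helper is a hypothetical greedy stand-in for the paper's model-driven local reconstruction.

```python
# Conceptual sketch of self-improved learning for TSP: locally reconstruct
# part of the current tour, keep the result if it is better, and treat the
# improved tour as a pseudo-label. All names here are illustrative.
import numpy as np

def tour_length(coords, tour):
    """Total Euclidean length of the closed tour visiting coords[tour]."""
    pts = coords[tour]
    return float(np.linalg.norm(pts - np.roll(pts, -1, axis=0), axis=1).sum())

def reconstruct_segment(coords, tour, start, length):
    """Rebuild one contiguous tour segment with a greedy nearest-neighbor
    pass; a heuristic stand-in for model-driven local reconstruction."""
    idx = [(start + i) % len(tour) for i in range(length)]
    seg = [tour[i] for i in idx]
    rebuilt, remaining = [seg[0]], seg[1:]
    while remaining:
        last = coords[rebuilt[-1]]
        nxt = min(remaining, key=lambda n: np.linalg.norm(coords[n] - last))
        rebuilt.append(nxt)
        remaining.remove(nxt)
    new_tour = list(tour)
    for i, node in zip(idx, rebuilt):
        new_tour[i] = node
    return new_tour

rng = np.random.default_rng(0)
coords = rng.random((200, 2))           # one unlabeled instance
tour = list(rng.permutation(200))       # initial solution (random "policy")

for step in range(500):                 # self-improvement iterations
    start = int(rng.integers(200))
    candidate = reconstruct_segment(coords, tour, start, length=20)
    if tour_length(coords, candidate) < tour_length(coords, tour):
        tour = candidate  # the improved tour becomes the pseudo-label;
        # the real method would now update the model (e.g. cross-entropy
        # on the reconstructed segment) so it proposes better
        # reconstructions in the next iteration.

print(f"final tour length: {tour_length(coords, tour):.3f}")
```

The key point the sketch illustrates is that the training signal is produced by the method itself: whenever a locally reconstructed tour beats the current one, it supervises the next model update, so no optimal solutions are ever required.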
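The abstract also claims a linear-complexity attention mechanism. One standard way to reach O(n) cost is the kernelized formulation phi(Q)(phi(K)^T V), which never materializes the n-by-n attention matrix; the sketch below uses that as an illustrative stand-in. The feature map, shapes, and scale are assumptions, and the paper's actual mechanism may differ.

```python
# Kernelized linear attention: compute phi(K)^T V once (d x d), then apply
# it to each query, giving O(n * d^2) time instead of O(n^2 * d).
import numpy as np

def linear_attention(Q, K, V, eps=1e-6):
    """Q, K, V: (n, d) arrays for n nodes with feature dimension d."""
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))  # elu(x) + 1, always > 0
    Qf, Kf = phi(Q), phi(K)
    kv = Kf.T @ V                      # (d, d): keys/values aggregated once
    z = Qf @ Kf.sum(axis=0)            # (n,): per-query normalizer
    return (Qf @ kv) / (z[:, None] + eps)

n, d = 100_000, 64                     # instance sizes like those in the paper
rng = np.random.default_rng(0)
Q, K, V = (0.1 * rng.standard_normal((n, d)) for _ in range(3))
out = linear_attention(Q, K, V)
print(out.shape)                       # (100000, 64); no n x n matrix is formed
```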