# Graph Colouring Meets Deep Learning: Effective Graph Neural Network Models For Combinatorial Problems

2019 IEEE 31ST INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE (ICTAI 2019), (2019): 879-885

Abstract

Deep learning has consistently defied state-of-the-art techniques in many fields over the last decade. However, we are just beginning to understand the capabilities of neural learning in symbolic domains. Deep learning architectures that employ parameter sharing over graphs can produce models which can be trained on complex properties of …

Introduction

- Deep Learning (DL) models have defied several state-of-the-art techniques in tasks such as image recognition [1]–[3] and natural language processing [4], [5].
- In this sense, applying DL models to combinatorial problems arises as one of the main approaches towards achieving integrated machine learning (ML) and reasoning [12]
- This family of problems does not exhibit a simple mathematical structure but, in many cases, plenty of exact solvers are available for them, which makes it possible to produce labelled datasets in any desired amount, even for DL models whose training-data requirements can be substantial.
- While (1) is due to the natural combinatorial optimisation structure, (2) is the main principle of all machine learning strategies.

Highlights

- Deep Learning (DL) models have defied several state-of-the-art techniques in tasks such as image recognition [1]–[3] and natural language processing [4], [5]
- We stopped the training procedure when the model achieved 82% accuracy and a Binary Cross-Entropy loss of 0.35, averaged over 128 batches of 16 instances each, at the end of 5300 epochs
- We compared the performance of our Graph Neural Network model for the graph colouring problem (GNN-GCP) with two heuristics: Tabucol [24], a local search algorithm that inserts single moves into a tabu list, and a greedy algorithm that assigns to each vertex the first available colour
- We have shown how Graph Neural Networks models effectively tackle the Graph Colouring Problem
- After 32 message-passing iterations between adjacent vertices and between vertices and colours, each vertex voted for a final answer on whether the given graph admits a C-colouring
- In spite of being trained on the verge of satisfiability, we showed a curve depicting how our model behaved for values of the target colour count C above and below the chromatic number (Figure 3)
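The voting readout described above can be illustrated with a toy sketch. This is not the authors' actual architecture (which uses trained MLPs and RNNs for message computation and updates); all dimensions, weights, and function names here are illustrative assumptions, and the weights are random rather than learned.

```python
# Toy sketch of the GNN-GCP message-passing loop: vertex embeddings exchange
# messages with adjacent vertices and with colour embeddings for a fixed
# number of iterations, then each vertex casts a vote that is averaged into
# a single "is the graph C-colourable?" probability. Untrained, illustrative.
import numpy as np

def gnn_gcp_sketch(adj, n_colours, d=16, iters=32, seed=0):
    """adj: (n, n) 0/1 adjacency matrix. Returns the averaged vertex vote."""
    rng = np.random.default_rng(seed)
    V = rng.normal(size=(adj.shape[0], d))   # one embedding per vertex
    C = rng.normal(size=(n_colours, d))      # one embedding per colour
    Wv = rng.normal(size=(d, d))             # vertex-to-vertex message weights
    Wc = rng.normal(size=(d, d))             # colour-to-vertex message weights
    vote_w = rng.normal(size=d)              # per-vertex voting weights
    for _ in range(iters):
        # aggregate messages from adjacent vertices and from the colours
        msg = adj @ V @ Wv + C.mean(axis=0) @ Wc
        V = np.tanh(V + 0.1 * msg)           # simple damped residual update
    logits = V @ vote_w                      # one vote per vertex
    return 1.0 / (1.0 + np.exp(-logits.mean()))  # averaged vote in (0, 1)
```

In the paper the update functions are learned end-to-end; here they are fixed random matrices, so the output only demonstrates the information flow, not a meaningful prediction.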

Methods

- To train these message computing and updating modules, MLPs and RNNs respectively, the authors used the Stochastic Gradient Descent algorithm implemented via TensorFlow’s Adam optimiser.
- The authors' training instances, with number of vertices n ∼ U(40, 60), were produced on the verge of the phase transition: for each instance I = (G = (V, E), C) with C = χ(G), there is an adversarial instance I′ = (G′ = (V, E′), C) with C + 1 = χ(G′), where E′ differs from E by a single edge.
- An example of such a training instance is depicted in Fig. 1.
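A minimal sketch of generating such adversarial pairs, assuming an exact (brute-force) chromatic number routine, which is only practical for small graphs; the function names and the edge-growing strategy are illustrative, not the authors' generator.

```python
# Sketch: grow a random graph one edge at a time and stop at the first edge
# whose insertion pushes the chromatic number past a target C. The graphs
# just before and after that edge differ in exactly one edge and have
# chromatic numbers C and C + 1 -- an adversarial training pair.
import itertools
import random

def chi(edges, n):
    """Exact chromatic number by exhaustive search (small n only)."""
    for c in range(1, n + 1):
        for colours in itertools.product(range(c), repeat=n):
            if all(colours[u] != colours[v] for u, v in edges):
                return c
    return n

def adversarial_pair(n, target_c, seed=0):
    """Return edge sets (E, E') with chi(E) = target_c, chi(E') = target_c + 1
    and E' = E plus one edge."""
    rng = random.Random(seed)
    candidates = [(u, v) for u in range(n) for v in range(u + 1, n)]
    rng.shuffle(candidates)
    edges = []
    for e in candidates:
        prev = list(edges)
        edges.append(e)
        # adding one edge raises chi by at most 1, so when chi first reaches
        # target_c + 1 the previous graph had chi exactly target_c
        if chi(edges, n) == target_c + 1:
            return prev, list(edges)
    raise ValueError("target chromatic number not reached")
```

Because each edge insertion can raise χ by at most one, the pair straddles the satisfiability boundary exactly, matching the "verge of phase transition" construction described above.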

Results

**EXPERIMENTAL RESULTS AND ANALYSES**

- The authors stopped the training procedure when the model achieved 82% accuracy and a Binary Cross-Entropy loss of 0.35, averaged over 128 batches of 16 instances each, at the end of 5300 epochs.
- The authors compared GNN-GCP's performance with two heuristics: Tabucol [24], a local search algorithm that inserts single moves into a tabu list, and a greedy algorithm that assigns to each vertex the first available colour.
- As both heuristics' outcomes are valid colouring assignments, they never underestimate the chromatic number, unlike the model
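The greedy baseline mentioned above can be sketched in a few lines: visit the vertices in order and give each one the first colour not already used by a coloured neighbour (the visiting order is an assumption; the paper does not specify one).

```python
# Greedy graph colouring: assign each vertex the first available colour.
def greedy_colouring(adj_list):
    """adj_list: dict mapping each vertex to an iterable of its neighbours.
    Returns a dict mapping each vertex to a colour index."""
    colour = {}
    for v in adj_list:
        # colours already used by v's coloured neighbours
        taken = {colour[u] for u in adj_list[v] if u in colour}
        c = 0
        while c in taken:
            c += 1               # first colour not taken by a neighbour
        colour[v] = c
    return colour
```

Any assignment this produces is a proper colouring, so the number of colours it uses is always at least χ(G); it can overestimate the chromatic number but, as noted above, never underestimate it.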

Conclusion

**CONCLUSIONS AND FUTURE WORK**

In this paper, the authors have shown how GNN models effectively tackle the Graph Colouring Problem.
- The authors demonstrated how this trained model was able to generalise its results to previously unseen target values of C and to structured and larger instances, yielding performance comparable to a well-known heuristic (Tabucol).
- In spite of being trained on the verge of satisfiability, the authors showed a curve depicting how the model behaved for values of the target colour count C above and below the chromatic number (Figure 3)

- Table 1: The chromatic number produced by our model and two heuristics on some instances of the COLOR02/03/04 dataset. As …
- Table 2: Strict accuracy of our model and the two algorithms

Funding

- This research was partly supported by Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES), Finance Code 001, and by the Brazilian Research Council CNPq.

Reference

- A. Krizhevsky, I. Sutskever, and G. Hinton, “Imagenet classification with deep convolutional neural networks,” in NIPS, 2012, pp. 1097–1105.
- K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
- H. Li, Z. Lin, X. Shen, J. Brandt, and G. Hua, “A convolutional neural network cascade for face detection,” in CVPR, 2015.
- K. Cho, B. van Merrienboer, D. Bahdanau, and Y. Bengio, “On the properties of neural machine translation: Encoder-decoder approaches,” in Proc. of SSST@EMNLP, 2014, pp. 103–111. [Online]. Available: http://aclweb.org/anthology/W/W14/W14-4012.pdf
- D. Bahdanau, K. Cho, and Y. Bengio, “Neural machine translation by jointly learning to align and translate,” arXiv preprint arXiv:1409.0473, 2014.
- V. Mnih, K. Kavukcuoglu, D. Silver, A. Rusu, J. Veness, M. Bellemare, A. Graves et al., “Human-level control through deep reinforcement learning,” Nature, vol. 518, no. 7540, p. 529, 2015.
- D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez et al., “Mastering the game of go without human knowledge,” Nature, vol. 550, no. 7676, p. 354, 2017.
- A. d’Avila Garcez, T. Besold, L. De Raedt, P. Foldiak, P. Hitzler, T. Icard, K. Kuhnberger, L. Lamb, R. Miikkulainen, and D. Silver, “Neural-symbolic learning and reasoning: contributions and challenges,” in Proc. AAAI Spring Symposium on Knowledge Representation and Reasoning: Integrating Symbolic and Neural Approaches, Stanford, 2015.
- S. Bader and P. Hitzler, “Dimensions of neural-symbolic integration A structured survey,” in We Will Show Them! Essays in Honour of Dov Gabbay, 2005, pp. 167–194.
- A. d’Avila Garcez, L. Lamb, and D. Gabbay, Neural-Symbolic Cognitive Reasoning, ser. Cognitive Technologies. Springer, 2009. [Online]. Available: https://doi.org/10.1007/978-3-540-73246-4
- R. Khardon and D. Roth, “Learning to reason with a restricted view,” Machine Learning, vol. 35, no. 2, pp. 95–116, 1999. [Online]. Available: https://doi.org/10.1023/A:1007581123604
- P. Battaglia, J. Hamrick, V. Bapst, A. Sanchez-Gonzalez, V. Zambaldi, Malinowski et al., “Relational inductive biases, deep learning, and graph networks,” arXiv preprint arXiv:1806.01261, 2018.
- Y. Bengio, A. Lodi, and A. Prouvost, “Machine learning for combinatorial optimization: a methodological tour d’horizon,” arXiv preprint arXiv:1811.06128, 2018.
- J. Gilmer, S. Schoenholz, P. Riley, O. Vinyals, and G. Dahl, “Neural message passing for quantum chemistry,” arXiv preprint arXiv:1704.01212, 2017.
- R. Palm, U. Paquet, and O. Winther, “Recurrent relational networks for complex relational reasoning,” arXiv preprint arXiv:1711.08028, 2017.
- M. Gori, G. Monfardini, and F. Scarselli, “A new model for learning in graph domains,” in IJCNN-05. IEEE, 2005.
- D. Selsam, M. Lamm, B. Bunz, P. Liang, L. de Moura, and D. Dill, “Learning a sat solver from single-bit supervision,” arXiv preprint arXiv:1802.03685, 2018.
- F. Scarselli, M. Gori, A. Tsoi, M. Hagenbuchner, and G. Monfardini, “The graph neural network model,” IEEE Tran. Neural Networks, vol. 20, no. 1, pp. 61–80, 2009.
- N. Barnier and P. Brisset, “Graph coloring for air traffic flow management,” Annals of Operations Research, vol. 130, no. 1, pp. 163–178, Aug 2004.
- S. Thevenin, N. Zufferey, and J. Potvin, “Graph multi-coloring for a job scheduling application,” Discrete App. Math., vol. 234, pp. 218 – 235, 2018.
- W. Chen, G. Lueh, P. Ashar, K. Chen, and B. Cheng, “Register allocation for intel processor graphics,” in CGO 2018, 2018, pp. 352–364. [Online]. Available: http://doi.acm.org/10.1145/3168806
- M. Prates, P. Avelar, H. Lemos, L. Lamb, and M. Vardi, “Learning to solve NP-complete problems - a graph neural network for decision TSP,” arXiv preprint arXiv:1809.02721, 2018.
- J. Culberson and I. Gent, “Frozen development in graph coloring,” Theoretical Computer Science, vol. 265, no. 1, pp. 227 – 264, 2001. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0304397501001645
- A. Hertz and D. de Werra, “Using tabu search techniques for graph coloring,” Computing, vol. 39, no. 4, pp. 345–351, Dec. 1987. [Online]. Available: http://dx.doi.org/10.1007/BF02239976
- J. Dudek, K. Meel, and M. Vardi, “Combining the k-CNF and XOR phase-transitions,” in IJCAI-16, 2016, pp. 727–734. [Online]. Available: http://www.ijcai.org/Abstract/16/109
- L. Zdeborová and F. Krzakala, “Phase transitions in the coloring of random graphs,” Phys. Rev. E, vol. 76, p. 031131, Sep 2007. [Online]. Available: https://link.aps.org/doi/10.1103/PhysRevE.76.031131
- D. Watts and S. Strogatz, “Collective dynamics of ‘small-world’ networks,” Nature, vol. 393, no. 6684, p. 440, 1998.
- P. Holme and B. Kim, “Growing scale-free networks with tunable clustering,” Phys Rev E, vol. 65, no. 2, p. 026107, 2002.
