Graph Neural Architecture Search

Hong Yang
Peng Zhang

IJCAI, pp. 1403-1409, 2020.

Keywords:
powerful tool, policy gradient, search space, good graph, network data (16+ more)
Weibo:
We present a graph neural architecture search method that enables automatic design of the best graph neural architecture based on reinforcement learning.

Abstract:

Graph neural networks (GNNs) emerged recently as a powerful tool for analyzing non-Euclidean data such as social network data. Despite their success, the design of graph neural networks requires heavy manual work and domain knowledge. In this paper, we present a graph neural architecture search method (GraphNAS) that enables automatic design of the best graph neural architecture based on reinforcement learning.

Introduction
  • Graph neural networks (GNNs) emerged recently as a powerful tool for analyzing non-Euclidean data such as social network data.
  • Despite the success of GNNs, the design of graph neural architectures requires both heavy manual work and domain knowledge.
  • Similar to CNNs, which contain many manually chosen parameters such as the size of filters and the type of pooling layers, the results of GNNs heavily rely on the graph neural architecture, including the receptive fields, message functions and aggregation functions; a minimal sketch of such a layer follows this list.
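To make these design choices concrete, here is a minimal, framework-free sketch of one message-passing layer. The function and argument names are illustrative assumptions, not the paper's code or any library's API; only the message/aggregation/activation roles matter.

```python
# Minimal sketch of one message-passing layer, illustrating the design choices
# named above: a message function, an aggregation over the receptive field, and
# an activation. Names are illustrative, not taken from the GraphNAS code.
import numpy as np

def gnn_layer(h, edges, W, aggregate="mean"):
    """h: (N, d) node features; edges: iterable of (src, dst); W: (d, d') weights."""
    n, d_out = h.shape[0], W.shape[1]
    inbox = [[] for _ in range(n)]
    for src, dst in edges:
        inbox[dst].append(h[src] @ W)          # message function: linear map of the sender
    h_new = np.zeros((n, d_out))
    for v, msgs in enumerate(inbox):
        if msgs:
            stacked = np.stack(msgs)
            h_new[v] = stacked.mean(axis=0) if aggregate == "mean" else stacked.max(axis=0)
    return np.maximum(h_new, 0.0)              # activation: relu

# Example: 3 nodes with 4-dim features, two directed edges, 8-dim output.
h = np.random.randn(3, 4)
out = gnn_layer(h, [(0, 1), (2, 1)], np.random.randn(4, 8), aggregate="max")
```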
Highlights
  • Graph neural networks (GNNs) emerged recently as a powerful tool for analyzing non-Euclidean data such as social network data
  • We present a new model GraphNAS to enable the automatic search of the best graph neural architecture, where a new search space is designed that covers the operators from the state-of-the-art GNNs, and a policy gradient algorithm is used to iteratively solve the problem.
  • To validate the performance of node classification, we compare the models designed by GraphNAS with the benchmark GNNs on both semi-supervised and supervised tasks.
  • We study the challenging problem of graph neural architecture search using reinforcement learning
  • A new search space is designed to include the operators from the state-of-the-art GNNs, and a policy gradient algorithm is used to iteratively solve the learning problem.
  • Experiment results on real-world datasets show that GraphNAS can design a novel network architecture that rivals the best human-invented architecture in terms of validation set accuracy
Methods
  • The authors first formulate the problem of graph neural architecture search with reinforcement learning.
  • Given a search space of graph neural architectures M and a validation set D, the authors aim to find the best architecture m∗ ∈ M that maximizes the expected accuracy on D, i.e., m∗ = argmax_{m∈M} E[R(m, D)] (Eq. 1).
  • Figure 1 shows the reinforcement learning framework used to solve Eq. (1) by continuously sampling architectures m ∈ M and evaluating their accuracy R on the validation set D.
  • The generated model m is trained on a given graph G and tested on the validation set D.
  • The test result is taken as a reward signal R to update the controller by reinforcement learning; a sketch of this search loop follows this list.
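The loop above can be sketched as follows. This is a hedged illustration of the reinforcement-learning search, not the released GraphNAS implementation: the controller is reduced to a softmax policy over a toy slice of the operator choices, and SEARCH_SPACE, Controller and train_and_evaluate are hypothetical names; in the real system the reward R comes from training the sampled child GNN on the graph G and testing on D.

```python
# Hedged sketch of the search loop: sample an architecture, obtain its
# validation accuracy as reward R, and update the controller with a
# REINFORCE-style policy gradient [Williams, 1992]. All names below
# (SEARCH_SPACE, Controller, train_and_evaluate) are illustrative placeholders.
import math
import random

SEARCH_SPACE = {                     # toy slice of the operator choices in Table 1
    "attention": ["const", "gcn", "gat"],
    "aggregate": ["sum", "mean", "max"],
    "activation": ["relu", "elu", "tanh"],
    "heads": [1, 4, 8],
    "hidden_units": [16, 64, 128],
}

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

class Controller:
    """Stand-in for the recurrent controller: one categorical distribution per slot."""
    def __init__(self):
        self.logits = {slot: [0.0] * len(ops) for slot, ops in SEARCH_SPACE.items()}

    def sample(self):
        arch = {}
        for slot, ops in SEARCH_SPACE.items():
            probs = softmax(self.logits[slot])
            arch[slot] = random.choices(ops, weights=probs)[0]
        return arch

    def update(self, arch, reward, baseline, lr=0.5):
        # REINFORCE: move logits along the gradient of the log-probability of the
        # sampled operators, scaled by the advantage (reward - baseline).
        for slot, ops in SEARCH_SPACE.items():
            probs = softmax(self.logits[slot])
            chosen = ops.index(arch[slot])
            for i in range(len(ops)):
                grad = (1.0 if i == chosen else 0.0) - probs[i]
                self.logits[slot][i] += lr * (reward - baseline) * grad

def train_and_evaluate(arch):
    """Placeholder: would train the child GNN on graph G and return accuracy on D."""
    return random.random()

controller, baseline = Controller(), 0.0
for step in range(100):
    m = controller.sample()                      # sample m ∈ M
    R = train_and_evaluate(m)                    # reward = validation accuracy
    baseline = 0.9 * baseline + 0.1 * R          # moving-average baseline reduces variance
    controller.update(m, R, baseline)
```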
Results
  • To validate the performance of node classification, the authors compare the models designed by GraphNAS with the benchmark GNNs on both semi-supervised and supervised tasks.
  • The architectures designed by GraphNAS are shown in Figure 5.
  • In this part, the authors transfer the architectures discovered by GraphNAS on the citation networks to supervised node classification on other datasets, such as the coauthor networks MS-CS and MS-Physics and the product networks Amazon Computers and Amazon Photo.
  • Based on the original GraphNAS, the authors construct two variants: GraphNAS-R, which randomly selects graph neural architectures from the given search space, and GraphNAS-S, which searches for an entire graph neural architecture in which each layer contains only one computational node and takes the last layer's output as input; a sketch of the random baseline GraphNAS-R follows this list.
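For contrast, the GraphNAS-R baseline amounts to dropping the controller update and keeping the best randomly sampled architecture. A hedged sketch, reusing the hypothetical SEARCH_SPACE and train_and_evaluate names from the Methods sketch:

```python
# Hedged sketch of the GraphNAS-R baseline: uniform random sampling from the
# same search space, keeping the architecture with the best validation accuracy.
# search_space and train_and_evaluate are the hypothetical names from the
# Methods sketch, not the released GraphNAS code.
import random

def random_search(search_space, train_and_evaluate, n_trials=100):
    best_arch, best_acc = None, float("-inf")
    for _ in range(n_trials):
        arch = {slot: random.choice(ops) for slot, ops in search_space.items()}
        acc = train_and_evaluate(arch)           # validation accuracy of the sampled GNN
        if acc > best_acc:
            best_arch, best_acc = arch, acc
    return best_arch, best_acc

# Usage (with the toy names from the Methods sketch):
# best_arch, best_acc = random_search(SEARCH_SPACE, train_and_evaluate)
```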
Conclusion
  • The authors study the challenging problem of graph neural architecture search using reinforcement learning.
  • The authors present a new model GraphNAS which can automatically design the best graph neural architectures.
  • A new search space is designed to include the operators from the state-of-the-art GNNs, and a policy gradient algorithm is used to iteratively solve the learning problem.
  • Experiment results on real-world datasets show that GraphNAS can design a novel network architecture that rivals the best human-invented architecture in terms of validation set accuracy.
  • The authors release the Python code on GitHub for comparison.
Tables
  • Table1: Operators of search space M
  • Table2: Correlation coefficients of the entire L layers. For example, consider a GNN with two layers. The first layer consists of GCN with 16 hidden units and a relu activation function. The second layer consists of GAT with eight heads, 16 hidden units and an elu activation function. The architecture is then described by concatenating the operators of the two layers, which forms a longer list of operators; this example is written out as a list after the table captions.
  • Table3: Node classification results w.r.t. accuracy, where "semi" stands for semi-supervised learning experiments, "sup" for supervised learning experiments, and "rand" for supervised learning experiments with randomly split data
  • Table4: Transferring architectures designed by GraphNAS on the citation networks to the other four datasets
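The two-layer example in the Table 2 caption can be written out as the flat operator list that describes an architecture. The slot names below are illustrative assumptions; the operator values come from the caption.

```python
# The two-layer GNN from the Table 2 caption as a flat architecture description:
# layer 1 uses GCN with 16 hidden units and relu; layer 2 uses GAT with 8 heads,
# 16 hidden units and elu. The slot names are illustrative, not the paper's exact keys.
architecture_description = [
    # layer 1
    ("attention", "gcn"),
    ("hidden_units", 16),
    ("activation", "relu"),
    # layer 2
    ("attention", "gat"),
    ("heads", 8),
    ("hidden_units", 16),
    ("activation", "elu"),
]
```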
Related work
  • 2.1 Neural Architecture Search (NAS)

    NAS has been widely used to design convolutional architectures [Zoph and Le, 2016; Pham et al., 2018; Xie et al., 2019; Bello et al., 2017; Liu et al., 2018a; Cai et al., 2019]. The basic idea of NAS is to use reinforcement learning to find the best neural architectures. Specifically, NAS uses a recurrent network to generate architecture descriptions of CNNs and RNNs. Building on NAS, evolution-based NAS [Real et al., 2018] uses evolutionary algorithms to optimize the topology alongside the parameters. ENAS [Pham et al., 2018] allows parameters to be shared among child models, which makes the search roughly 1000 times faster than standard NAS and finds a new convolutional architecture in 0.45 GPU days. DARTS [Liu et al., 2018a] formulates the task in a differentiable manner, which reduces the search for high-performance convolutional architectures to four GPU days. Following DARTS [Liu et al., 2018a], GDAS [Dong and Yang, 2019] reduces the search time to four GPU hours, and ProxylessNAS [Cai et al., 2019] claims that the search can operate directly on large-scale target tasks and target hardware platforms. Because NAS-based search algorithms achieve promising results in designing new architectures for CNNs and RNNs, we extend NAS to design graph neural architectures for GNNs in this paper.
Funding
  • This work was supported in part by the National Key Research and Development Program of China (No 2017YFB0803300), the NSFC (No 61872360), the Youth Innovation Promotion Association CAS (No 2017210), and an Australian Government Research Training Program Scholarship
References
  • [Bello et al., 2017] Irwan Bello, Barret Zoph, Vijay Vasudevan, and Quoc V. Le. Neural optimizer search with reinforcement learning. In ICML, 2017.
  • [Bianchi et al., 2019] Filippo Maria Bianchi, Daniele Grattarola, Lorenzo Livi, and Cesare Alippi. Graph neural networks with convolutional ARMA filters. arXiv, abs/1901.01343, 2019.
  • [Cai et al., 2019] Han Cai, Ligeng Zhu, and Song Han. ProxylessNAS: Direct neural architecture search on target task and hardware. In ICLR, 2019.
  • [Defferrard et al., 2016] Michael Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In NIPS, 2016.
  • [Dong and Yang, 2019] Xuanyi Dong and Yi Yang. Searching for a robust neural architecture in four GPU hours. In CVPR, pages 1761–1770, 2019.
  • [Fey and Lenssen, 2019] Matthias Fey and Jan E. Lenssen. Fast graph representation learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds, 2019.
  • [Gori et al., 2005] M. Gori, G. Monfardini, and F. Scarselli. A new model for learning in graph domains. In IEEE International Joint Conference on Neural Networks, 2005.
  • [Hamilton et al., 2017] William L. Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In NIPS, 2017.
  • [Hu et al., 2019] Fenyu Hu, Yanqiao Zhu, Shu Wu, Liang Wang, and Tieniu Tan. Hierarchical graph convolutional networks for semi-supervised node classification. In IJCAI, 2019.
  • [Kipf and Welling, 2017] Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In ICLR, 2017.
  • [Klicpera et al., 2019] Johannes Klicpera, Aleksandar Bojchevski, and Stephan Günnemann. Combining neural networks with personalized pagerank for classification on graphs. In ICLR, 2019.
  • [Liu et al., 2018a] Hanxiao Liu, Karen Simonyan, and Yiming Yang. DARTS: Differentiable architecture search. CoRR, abs/1806.09055, 2018.
  • [Liu et al., 2018b] Ziqi Liu, Chaochao Chen, Longfei Li, Jun Zhou, Xiaolong Li, and Le Song. GeniePath: Graph neural networks with adaptive receptive paths. CoRR, abs/1802.00910, 2018.
  • [Niepert et al., 2016] Mathias Niepert, Mohamed Ahmed, and Konstantin Kutzkov. Learning convolutional neural networks for graphs. In ICML, 2016.
  • [Pham et al., 2018] Hieu Pham, Melody Y. Guan, Barret Zoph, Quoc V. Le, and Jeff Dean. Efficient neural architecture search via parameter sharing. In ICML, 2018.
  • [Real et al., 2018] Esteban Real, Alok Aggarwal, Yanping Huang, and Quoc V. Le. Regularized evolution for image classifier architecture search. CoRR, abs/1802.01548, 2018.
  • [Shchur et al., 2018] Oleksandr Shchur, Maximilian Mumme, Aleksandar Bojchevski, and Stephan Günnemann. Pitfalls of graph neural network evaluation. CoRR, abs/1811.05868, 2018.
  • [Vaswani et al., 2017] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NIPS, 2017.
  • [Velickovic et al., 2017] Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. CoRR, abs/1710.10903, 2017.
  • [Williams, 1992] Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256, 1992.
  • [Wu et al., 2019a] Felix Wu, Amauri H. Souza, Tianyi Zhang, Christopher Fifty, Rui Zhang, and Kilian Q. Weinberger. Simplifying graph convolutional networks. In ICML, 2019.
  • [Wu et al., 2019b] Shu Wu, Yuyuan Tang, Yanqiao Zhu, Liang Wang, Xing Xie, and Tieniu Tan. Session-based recommendation with graph neural networks. In AAAI, 2019.
  • [Xie et al., 2019] Sirui Xie, Hehui Zheng, Chunxiao Liu, and Liang Lin. SNAS: Stochastic neural architecture search. In ICLR, 2019.
  • [Xu et al., 2018] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? CoRR, abs/1810.00826, 2018.
  • [Yan et al., 2018] Sijie Yan, Yuanjun Xiong, and Dahua Lin. Spatial temporal graph convolutional networks for skeleton-based action recognition. In AAAI, 2018.
  • [You et al., 2019] Jiaxuan You, Rex Ying, and Jure Leskovec. Position-aware graph neural networks. In ICML, 2019.
  • [Yu et al., 2018] Bing Yu, Haoteng Yin, and Zhanxing Zhu. Spatio-temporal graph convolutional networks: A deep learning framework for traffic forecasting. In IJCAI, 2018.
  • [Zaheer et al., 2017] Manzil Zaheer, Satwik Kottur, Siamak Ravanbhakhsh, Barnabas Poczos, and Alexander Smola. Deep sets. In NIPS, 2017.
  • [Zoph and Le, 2016] Barret Zoph and Quoc V. Le. Neural architecture search with reinforcement learning. CoRR, abs/1611.01578, 2016.