Fast subtree kernels on graphs.

NIPS, pp. 1660–1668, 2009

Cited by: 274

Abstract

In this article, we propose fast subtree kernels on graphs. On graphs with n nodes, m edges and maximum degree d, these kernels comparing subtrees of height h can be computed in O(mh), whereas the classic subtree kernel by Ramon & Gärtner scales as O(n^2 4^d h). Key to this efficiency is the observation that the Weisfeiler-Lehman test of isomorphism computes a subtree kernel as a byproduct.
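To see how large this gap is, consider an illustrative back-of-the-envelope comparison (numbers chosen for illustration, not taken from the paper): for a graph with n = 100 nodes, maximum degree d = 4 (hence at most m = nd/2 = 200 edges) and subtree height h = 2,

    classic subtree kernel:      n^2 · 4^d · h = 10,000 · 256 · 2 ≈ 5.1 × 10^6 operations
    Weisfeiler-Lehman kernel:    m · h = 200 · 2 = 400 operations

a difference of roughly four orders of magnitude per pair of graphs.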

Introduction
  • Graph kernels have recently evolved into a branch of kernel machines that reaches deep into graph mining.
  • While fast computation techniques have been developed for graph kernels based on walks [12] and on limited-size subgraphs [11], it is unclear how to compute subtree kernels efficiently.
  • As a consequence, they have been applied to relatively small graphs representing chemical compounds [9] or handwritten digits [1], with approximately twenty nodes on average.
  • In Section 4, the authors compare these two subtree kernels to each other, as well as to a set of four other state-of-the-art graph kernels and report results on kernel computation runtime and classification accuracy on graph benchmark datasets
Highlights
  • Graph kernels have recently evolved into a branch of kernel machines that reaches deep into graph mining
  • Several different graph kernels have been defined in machine learning; they can be categorized into three classes: graph kernels based on walks [5, 7] and paths [2], graph kernels based on limited-size subgraphs [6, 11], and graph kernels based on subtree patterns [9, 10]
  • While fast computation techniques have been developed for graph kernels based on walks [12] and on limited-size subgraphs [11], it is unclear how to compute subtree kernels efficiently
  • The N^2 sparse vector multiplications that have to be performed for kernel computation with global WL do not dominate the runtime here
  • We have defined a fast subtree kernel on graphs that combines scalability with the ability to deal with node labels
  • It is competitive with state-of-the-art kernels in terms of accuracy on several classification benchmark datasets, reaching the highest accuracy level on three out of four datasets, and significantly outperforms them in terms of runtime on large graphs, including the recently proposed efficient computation schemes for random walk kernels [12] and graphlet kernels [11]. This new kernel opens the door to applications of graph kernels on large graphs in bioinformatics, for instance protein function prediction via detailed graph models of protein structure at the amino acid level, or phenotype prediction on gene networks
Methods
  • The authors empirically compared the runtime behaviour of the two variants of the Weisfeiler-Lehman (WL) kernel.
  • The first variant computes kernel values pairwise in O(N^2 hm).
  • The second variant computes all kernel values on the dataset simultaneously in O(Nhm + N^2 hn).
  • The authors refer to the former variant as the ‘pairwise’ WL and to the latter as the ‘global’ WL; a minimal sketch of the global scheme follows this list
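As an illustration of the global scheme, the following minimal Python sketch (not the authors' implementation; all names are hypothetical) assumes that the multiset of WL labels accumulated over the h iterations has already been collected for each graph. It then builds one sparse count vector per graph and fills the N × N kernel matrix with pairwise sparse dot products, which is the part that costs N^2 vector multiplications.

  from collections import Counter
  import numpy as np

  def gram_from_label_multisets(label_multisets):
      # label_multisets: list of N lists; entry i holds every (original and
      # compressed) WL label occurring in graph i over the h iterations.
      # Computing those labels is the O(Nhm) part and is assumed done here.
      counts = [Counter(labels) for labels in label_multisets]

      N = len(counts)
      K = np.zeros((N, N))
      # N^2 sparse dot products between the per-graph count vectors.
      for i in range(N):
          for j in range(i, N):
              shared = counts[i].keys() & counts[j].keys()
              K[i, j] = K[j, i] = sum(counts[i][l] * counts[j][l] for l in shared)
      return K

  # Toy usage:
  # K = gram_from_label_multisets([['a', 'b', 'ab'], ['a', 'a', 'aa'], ['b', 'b', 'ab']])

In contrast, the pairwise variant recomputes the WL labels jointly for every pair of graphs, which is what brings its cost to O(N^2 hm).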
Results
  • The authors observe that the pairwise kernel scales quadratically with dataset size N.
  • When varying the number of nodes n per graph, the authors observe that the runtime of global WL scales linearly with n, and is much faster than the pairwise WL for large graphs.
  • The authors observe the same picture for the height h of the subtree patterns: the runtime of both kernels grows linearly with h, but the global WL is faster in absolute terms
  • The graphlet kernel is faster than the WL kernel on MUTAG and the NCI datasets, and about a factor of 3 slower on D&D.
  • However, this efficiency comes at a price, as the kernel based on size-3 graphlets leads to poor accuracy levels on three datasets; larger graphlets with 4 or 5 nodes, which might have been more expressive, led to infeasible runtime requirements in initial experiments (not shown here)
Conclusion
  • The authors have defined a fast subtree kernel on graphs that combines scalability with the ability to deal with node labels
  • It is competitive with state-of-the-art kernels in terms of accuracy on several classification benchmark datasets, reaching the highest accuracy level on three out of four datasets, and significantly outperforms them in terms of runtime on large graphs, including the recently proposed efficient computation schemes for random walk kernels [12] and graphlet kernels [11].
  • An exciting algorithmic question for further studies will be to consider kernels on graphs with continuous or high-dimensional node labels and their efficient computation
Tables
  • Table1: Prediction accuracy (± standard error) on graph classification benchmark datasets
  • Table2: CPU runtime for kernel computation on graph classification benchmark datasets
Related work
  • The subtree kernels in [9] and [1] refine the above definition for applications in chemoinformatics and hand-written digit recognition. Mahé and Vert [9] define extensions of the classic subtree kernel that avoid tottering [8] and consider unbalanced subtrees. Both [9] and [1] propose to consider α-ary subtrees with at most α children per node. This restricts the set of matchings to matchings of up to α nodes, but the runtime complexity is still exponential in the parameter α, which both papers describe as feasible on small graphs (with approximately 20 nodes) with many distinct node labels. Next, we present a subtree kernel that is efficient to compute on graphs with hundreds or thousands of nodes.

    3.1 The Weisfeiler-Lehman test of isomorphism

    Our algorithm for computing a fast subtree kernel builds upon the Weisfeiler-Lehman test of isomorphism [14], more specifically its 1-dimensional variant, also known as “naive vertex refinement”, which we describe in the following.

    Assume we are given two graphs G and G′ and we would like to test whether they are isomorphic. The 1-dimensional Weisfeiler-Lehman test proceeds in iterations, which we index by h. One iteration (Algorithm 1 in the paper) begins with multiset-label determination: each node is assigned a multiset consisting of its own label and the labels of its neighbours; the iteration then sorts each multiset, compresses each sorted multiset into a new, shorter label, and relabels the nodes with these compressed labels.
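    A minimal Python sketch of one such iteration (names are hypothetical, not from the paper; a graph is assumed to be given as an adjacency list with one label per node, and the compression dictionary is assumed to be shared across graphs so that identical multisets receive identical compressed labels):

    def wl_iteration(adjacency, labels, compression):
        # adjacency:   dict mapping each node to a list of neighbouring nodes
        # labels:      dict mapping each node to its current label (a string)
        # compression: dict shared across graphs, mapping a multiset signature
        #              to a new, shorter label
        new_labels = {}
        for node, neighbours in adjacency.items():
            # 1. Multiset-label determination and 2. sorting of the multiset
            multiset = sorted(labels[n] for n in neighbours)
            signature = labels[node] + '|' + ','.join(multiset)
            # 3. Label compression: identical signatures get identical new labels
            if signature not in compression:
                compression[signature] = 'c%d' % len(compression)
            # 4. Relabeling
            new_labels[node] = compression[signature]
        return new_labels

    # Toy usage on a labeled path a-b-a:
    # adjacency = {0: [1], 1: [0, 2], 2: [1]}
    # labels = {0: 'a', 1: 'b', 2: 'a'}
    # labels = wl_iteration(adjacency, labels, compression={})

    Repeating this h times and counting how often each original or compressed label occurs in each graph yields one count vector per graph; the sparse dot products between these vectors are the kernel values referred to in the Methods section above.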
References
  • F. R. Bach. Graph kernels between point clouds. In ICML, pages 25–32, 2008.
  • K. M. Borgwardt and H.-P. Kriegel. Shortest-path kernels on graphs. In Proc. Intl. Conf. Data Mining, pages 74–81, 2005.
  • A. K. Debnath, R. L. Lopez de Compadre, G. Debnath, A. J. Shusterman, and C. Hansch. Structure-activity relationship of mutagenic aromatic and heteroaromatic nitro compounds. Correlation with molecular orbital energies and hydrophobicity. J Med Chem, 34:786–797, 1991.
  • P. D. Dobson and A. J. Doig. Distinguishing enzyme structures from non-enzymes without alignments. J Mol Biol, 330(4):771–783, Jul 2003.
  • T. Gärtner, P. A. Flach, and S. Wrobel. On graph kernels: Hardness results and efficient alternatives. In B. Schölkopf and M. Warmuth, editors, Sixteenth Annual Conference on Computational Learning Theory and Seventh Kernel Workshop, COLT. Springer, 2003.
  • T. Horváth, T. Gärtner, and S. Wrobel. Cyclic pattern kernels for predictive graph mining. In Proceedings of the International Conference on Knowledge Discovery and Data Mining, 2004.
  • H. Kashima, K. Tsuda, and A. Inokuchi. Marginalized kernels between labeled graphs. In Proceedings of the 20th International Conference on Machine Learning (ICML), Washington, DC, United States, 2003.
  • P. Mahé, N. Ueda, T. Akutsu, J.-L. Perret, and J.-P. Vert. Extensions of marginalized graph kernels. In Proceedings of the Twenty-First International Conference on Machine Learning, 2004.
  • P. Mahé and J.-P. Vert. Graph kernels based on tree patterns for molecules. q-bio/0609024, September 2006.
  • J. Ramon and T. Gärtner. Expressivity versus efficiency of graph kernels. Technical report, First International Workshop on Mining Graphs, Trees and Sequences (held with ECML/PKDD’03), 2003.
  • N. Shervashidze, S. V. N. Vishwanathan, T. Petri, K. Mehlhorn, and K. M. Borgwardt. Efficient graphlet kernels for large graph comparison. In Artificial Intelligence and Statistics, 2009.
  • S. V. N. Vishwanathan, K. M. Borgwardt, and N. N. Schraudolph. Fast computation of graph kernels. In B. Schölkopf, J. Platt, and T. Hofmann, editors, Advances in Neural Information Processing Systems 19, Cambridge, MA, 2007. MIT Press.
  • N. Wale and G. Karypis. Comparison of descriptor spaces for chemical compound retrieval and classification. In Proc. of ICDM, pages 678–689, Hong Kong, 2006.
  • B. Weisfeiler and A. A. Lehman. A reduction of a graph to a canonical form and an algebra arising during this reduction. Nauchno-Technicheskaya Informatsia, Ser. 2, 9, 1968.