NLocalSAT: Boosting Local Search with Solution Prediction

IJCAI, pp. 1177-1183, 2020.

Keywords:
NP-complete problem; CDCL solver; SAT competition; conjunctive normal form; boolean satisfiability
This paper explores a novel perspective of combining stochastic local search with a solution prediction model

Abstract:

The boolean satisfiability (SAT) problem is a famous NP-complete problem in computer science. Stochastic local search (SLS) is an effective approach to this problem. However, SLS solvers assign the initial values of variables at random, which impacts their effectiveness. To address this problem, we propose NLocalSAT. NLocalSAT [...]

Introduction
  • Boolean satisfiability (SAT) is the problem of determining whether there exists an assignment to the variables of a given boolean formula that makes the formula evaluate to true.
  • SAT is widely used in solving combinatorial problems generated from various applications, such as program analysis [Harris et al., 2010], program verification [Leino, 2010], and scheduling [Kasi and Sarma, 2013].
  • These applications first reduce the target problem to a SAT formula and then find a solution using a SAT solver.
  • SLS solvers initialize an assignment for all variables and search for a solution by repeatedly flipping variable assignments to optimize a score function
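The SLS loop described above (random initialization followed by score-driven flips) can be sketched as follows. This is a generic WalkSAT-style illustration, not the authors' implementation; the CNF encoding (clauses as lists of signed integers) and the greedy flip rule are assumptions for the example.

```python
import random

def evaluate(clauses, assignment):
    """Count the clauses satisfied by the assignment (CNF clauses as signed ints)."""
    return sum(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

def local_search(clauses, num_vars, max_flips=10000, seed=0):
    """Random initialization, then flips that greedily improve the satisfied-clause count."""
    rng = random.Random(seed)
    assignment = {v: rng.choice([True, False]) for v in range(1, num_vars + 1)}
    for _ in range(max_flips):
        if evaluate(clauses, assignment) == len(clauses):
            return assignment  # all clauses satisfied
        # pick an unsatisfied clause and flip the variable in it that scores best
        unsat = [c for c in clauses
                 if not any(assignment[abs(l)] == (l > 0) for l in c)]
        clause = rng.choice(unsat)
        best_var, best_score = None, -1
        for lit in clause:
            v = abs(lit)
            assignment[v] = not assignment[v]       # try the flip
            s = evaluate(clauses, assignment)
            assignment[v] = not assignment[v]       # undo it
            if s > best_score:
                best_var, best_score = v, s
        assignment[best_var] = not assignment[best_var]
    return None  # no solution found within the flip budget

# (x1 ∨ x2) ∧ (¬x1 ∨ x2) ∧ (x1 ∨ ¬x2) — satisfied by x1 = x2 = True
solution = local_search([[1, 2], [-1, 2], [1, -2]], num_vars=2)
```

NLocalSAT's observation is that the `rng.choice` initialization above is what it replaces with a neural prediction.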
Highlights
  • Boolean satisfiability (SAT) is the problem of determining whether there exists an assignment to the variables of a given boolean formula that makes the formula evaluate to true
  • SAT is widely used in solving combinatorial problems generated from various applications, such as program analysis [Harris et al., 2010], program verification [Leino, 2010], and scheduling [Kasi and Sarma, 2013]
  • The experiments with stochastic local search solvers are performed three times to reduce the randomness of the results
  • This paper explores a novel perspective of combining stochastic local search with a solution prediction model
  • We propose NLocalSAT to boost stochastic local search solvers
  • CCAnr, Sparrow, CPSparrow, YalSAT, and probSAT with NLocalSAT solve 41%, 30%, 27%, 62%, and 62% more problems, respectively, than the original solvers
  • Experimental results show that NLocalSAT significantly increases the number of problems solved and decreases the solving time for hard problems
Methods
  • The authors' model was trained on a dataset of generated small SAT instances and a dataset of problems from the random tracks of the SAT Competitions in 2012, 2013, 2014, 2016, and 2017 (the 2015 event was called SAT Race 2015).
  • The authors' model was evaluated on problems in the random track of SAT Competition 2018.
  • The authors found several problems duplicated between 2018 and previous years, so they removed them from the training and validation datasets to ensure that problems in Dataseteval are generated with different random seeds from those in Datasetcomp.
  • There will still be similar substructures between the training set and the test set, so the neural network can make predictions by learning these substructures
Results
  • 4.1 Number of Problems Solved

    Table 1 shows the number of problems solved within a 1000-second time limit.
  • Each row represents a tested solver.
  • The experiments with SLS solvers are performed three times to reduce the randomness of the results.
  • Each number in the rows of SLS solvers is the average and the standard deviation of results in the three experiments.
  • Each column in the table represents a category of problems in the dataset.
  • The number in
Conclusion
  • Conclusion and Future Work

    This paper explores a novel perspective of combining SLS with a solution prediction model.
  • The authors propose NLocalSAT to boost SLS solvers.
  • Sparrow and CPSparrow with the proposed NLocalSAT perform better than state-of-the-art CDCL, SLS, and hybrid solvers on the random track of SAT Competition 2018.
  • NLocalSAT can boost SLS SAT solvers effectively.
  • With this learning-based method, the authors may build a domain-specific SAT solver without expertise in the domain by training NLocalSAT with SAT instances from the domain.
  • It is interesting to further explore building domain-specific solvers with NLocalSAT
Tables
  • Table 1: Number of problems solved within the time limit by different solvers (mean values and standard deviations of results with three different random seeds)
  • Table 2: Average running time with timeout penalty for different solvers (mean values and standard deviations of results with three different random seeds)
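Table 2's "average running time with timeout penalty" is a PAR-style metric. The exact penalty scheme is not stated in this summary, so the sketch below assumes a PAR-2-like convention (an unsolved run counts as twice the time limit); the penalty factor is an assumption.

```python
def penalized_average(runtimes, time_limit=1000.0, penalty_factor=2):
    """PAR-style score: solved runs contribute their runtime; unsolved runs
    (recorded as None) contribute penalty_factor * time_limit. Lower is better."""
    total = sum(
        t if t is not None else penalty_factor * time_limit
        for t in runtimes
    )
    return total / len(runtimes)

# three solved runs and one timeout under a 1000 s limit:
# (12.5 + 300 + 999 + 2000) / 4 = 827.875
score = penalized_average([12.5, 300.0, 999.0, None])
```

Penalizing timeouts this way keeps a single scalar comparable across solvers that solve different subsets of the benchmark.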
Related work
  • Recently, several studies have investigated how to use neural networks in solving NP-complete constraint problems. These methods fall into two categories. The first is end-to-end approaches, in which a neural network takes the problem as input and outputs a solution directly; the network learns to solve the problem itself [Amizadeh et al., 2019; Galassi et al., 2018; Selsam et al., 2019; Xu et al., 2018; Prates et al., 2019]. However, due to the accuracy and structural limitations of neural networks, end-to-end methods can only solve small problems. The other category is heuristic methods, which treat neural networks as heuristics [Balunovic et al., 2018; Li et al., 2018; Selsam and Bjørner, 2019]. In these methods, traditional solvers work together with neural networks: the network generates predictions, and the solver uses them as heuristics. Because the solver maintains the problem's constraints, these methods can handle large-scale problems. The proposed method (i.e., NLocalSAT) belongs to the second category, so it can be applied to larger instances than end-to-end methods. Balunovic et al. [2018] proposed a method to learn a strategy for Z3, which greatly improves its efficiency. Li et al. [2018] proposed a model for solving maximal independent set problems with a tree search algorithm. NeuroCore [Selsam and Bjørner, 2019] improves CDCL solvers with predictions of unsat-cores. None of these methods boosts stochastic local search solvers with solution predictions, and none is an offline method for boosting SAT solvers.
The training data used in NLocalSAT are solutions of problems or the solution-space distribution of problems, which also differs from previous work: NeuroSAT uses the satisfiability of problems, and NeuroCore uses unsat-core predictions.
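As a concrete illustration of the heuristic category described above, the sketch below shows how a predicted solution could replace an SLS solver's random initialization. The `predicted` input (standing in for the neural network's output) and the `noise` perturbation are illustrative assumptions, not the paper's exact mechanism.

```python
import random

def initialize(num_vars, predicted=None, noise=0.1, seed=0):
    """Build an initial SLS assignment over variables 1..num_vars.

    With a predicted solution (a dict mapping variable -> bool), start from it,
    optionally flipping a small fraction of variables so the search does not
    commit entirely to the prediction. Without one, fall back to the usual
    uniform-random initialization.
    """
    rng = random.Random(seed)
    if predicted is None:
        # baseline SLS behavior: every variable is a coin flip
        return {v: rng.choice([True, False]) for v in range(1, num_vars + 1)}
    # prediction-guided behavior: follow the prediction, perturbed with prob. noise
    return {
        v: (not predicted[v]) if rng.random() < noise else predicted[v]
        for v in range(1, num_vars + 1)
    }
```

If the prediction is close to a true solution, the flip loop starts near a satisfying assignment and needs far fewer flips, which is the intuition behind boosting SLS this way.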
References
  • [Amizadeh et al., 2019] Saeed Amizadeh, Sergiy Matusevych, and Markus Weimer. Learning to solve Circuit-SAT: An unsupervised differentiable approach. In ICLR, 2019.
  • [Balint and Fröhlich, 2010] Adrian Balint and Andreas Fröhlich. Improving stochastic local search for SAT with a new probability distribution. In SAT, 2010.
  • [Balint and Manthey, 2013] Adrian Balint and Norbert Manthey. Boosting the performance of SLS and CDCL solvers by preprocessor tuning. In POS, 2013.
  • [Balint and Manthey, 2018] Adrian Balint and Norbert Manthey. SparrowToRiss. Proceedings of SAT Competition 2018, 2018.
  • [Balint and Schöning, 2014] Adrian Balint and Uwe Schöning. probSAT. Proceedings of SAT Competition 2018, 2014.
  • [Balunovic et al., 2018] Mislav Balunovic, Pavol Bielik, and Martin T. Vechev. Learning to solve SMT formulas. In NeurIPS, 2018.
  • [Balyo and Chrpa, 2018] Tomas Balyo and Lukas Chrpa. Using algorithm configuration tools to generate hard SAT benchmarks. In SOCS, 2018.
  • [Biere, 2016] Armin Biere. Splatz, Lingeling, Plingeling, Treengeling, YalSAT entering the SAT Competition 2016. Proceedings of SAT Competition 2016, 2016.
  • [Cai and Su, 2013] Shaowei Cai and Kaile Su. Local search for boolean satisfiability with configuration checking and subscore. Artif. Intell., 204, 2013.
  • [Cai et al., 2015] Shaowei Cai, Chuan Luo, and Kaile Su. CCAnr: A configuration checking based local search solver for non-random satisfiability. In SAT, 2015.
  • [Cook, 1971] Stephen A. Cook. The complexity of theorem-proving procedures. In Proceedings of the 3rd Annual ACM Symposium on Theory of Computing, 1971.
  • [Edwards and Xie, 2016] Michael Edwards and Xianghua Xie. Graph based convolutional neural network. arXiv preprint arXiv:1609.08965, 2016.
  • [Galassi et al., 2018] Andrea Galassi, Michele Lombardi, Paola Mello, and Michela Milano. Model agnostic solution of CSPs via deep learning: A preliminary study. In CPAIOR, 2018.
  • [Harris et al., 2010] William R. Harris, Sriram Sankaranarayanan, Franjo Ivančić, and Aarti Gupta. Program analysis via satisfiability modulo path programs. In POPL, 2010.
  • [Heule, 2018] Marijn J. H. Heule. Generating the uniform random benchmarks. Proceedings of SAT Competition 2018, 2018.
  • [Kasi and Sarma, 2013] Bakhtiar Khan Kasi and Anita Sarma. Cassandra: Proactive conflict minimization through optimized task scheduling. In ICSE, 2013.
  • [Kingma and Ba, 2015] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
  • [Leino, 2010] K. Rustan M. Leino. Dafny: An automatic program verifier for functional correctness. In LPAR-16, 2010.
  • [Li et al., 2018] Zhuwen Li, Qifeng Chen, and Vladlen Koltun. Combinatorial optimization with graph convolutional networks and guided tree search. In NeurIPS, 2018.
  • [Luo et al., 2017] Mao Luo, Chu-Min Li, Fan Xiao, Felip Manyà, and Zhipeng Lü. An effective learnt clause minimization approach for CDCL SAT solvers. In IJCAI, 2017.
  • [Mikolov et al., 2010] Tomas Mikolov, Martin Karafiát, Lukas Burget, Jan Černocký, and Sanjeev Khudanpur. Recurrent neural network based language model. In INTERSPEECH, 2010.
  • [Nadel and Ryvchin, 2018] Alexander Nadel and Vadim Ryvchin. Chronological backtracking. In SAT, 2018.
  • [Prates et al., 2019] Marcelo O. R. Prates, Pedro H. C. Avelar, Henrique Lemos, Luís C. Lamb, and Moshe Y. Vardi. Learning to solve NP-complete problems: A graph neural network for decision TSP. In AAAI, 2019.
  • [Ryvchin and Nadel, 2018] Vadim Ryvchin and Alexander Nadel. Maple_LCM_Dist_ChronoBT: Featuring chronological backtracking. Proceedings of SAT Competition 2018, 2018.
  • [Selsam and Bjørner, 2019] Daniel Selsam and Nikolaj Bjørner. Guiding high-performance SAT solvers with unsat-core predictions. In SAT, 2019.
  • [Selsam et al., 2019] Daniel Selsam, Matthew Lamm, Benedikt Bünz, Percy Liang, Leonardo de Moura, and David L. Dill. Learning a SAT solver from single-bit supervision. In ICLR, 2019.
  • [Simonyan and Zisserman, 2015] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
  • [Tseitin, 1983] Grigori S. Tseitin. On the complexity of derivation in propositional calculus. In Automation of Reasoning. Springer, 1983.
  • [Xu et al., 2018] Hong Xu, Sven Koenig, and T. K. Satish Kumar. Towards effective deep learning for constraint satisfaction problems. In CP, 2018.