Generative Pre-Trained Transformer for Symbolic Regression Based on In-Context Reinforcement Learning
arXiv (2024)
Abstract
Mathematical formulas are the language humans use to describe nature and are the essence of scientific research. Discovering mathematical formulas from observational data is a major demand of scientific research and a major challenge for artificial intelligence; this field is called symbolic regression (SR). Traditionally, symbolic regression has been formulated as a combinatorial optimization problem and solved with genetic programming (GP) or reinforcement learning algorithms. These two families of algorithms offer strong noise robustness and good versatility, but their inference typically takes a long time, so search efficiency is low. Later, methods based on large-scale pre-training were proposed: they use large numbers of synthetic data-point and expression pairs to train a Generative Pre-Trained Transformer (GPT). The trained GPT needs only a single forward pass to produce a result, so inference is very fast. However, its performance depends heavily on the training data and degrades on data outside the training distribution, which leads to poor noise robustness and versatility. Can we combine the advantages of these two categories of SR algorithms? In this paper, we propose FormulaGPT, which trains a GPT on massive sparse-reward learning histories collected from reinforcement-learning-based SR algorithms. After training, the reinforcement-learning-based SR algorithm is distilled into a Transformer: when new test data arrives, FormulaGPT directly generates a "reinforcement learning process" and automatically updates its learning policy in context. Tested on more than ten datasets including SRBench, FormulaGPT achieves state-of-the-art fitting performance compared with four baselines. In addition, it achieves satisfactory results in noise robustness, versatility, and inference efficiency.
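
To make the in-context distillation idea more concrete, below is a minimal, hypothetical Python sketch of how a sparse-reward RL search history might be serialized into a single training sequence for a decoder-only Transformer. The token format, reward encoding, and the `serialize_history` helper are illustrative assumptions only, not the authors' implementation.

```python
# Hypothetical sketch (not the paper's code): flatten one RL-based SR search run
# into a token sequence that a GPT-style model could be trained on, so that the
# model learns to reproduce a "reinforcement learning process" in context.

from typing import List, Tuple

def serialize_history(history: List[Tuple[str, float]]) -> List[str]:
    """Serialize (expression, reward) pairs from one search run into tokens,
    ending with the best expression found.

    Assumptions: `history` is ordered by search step, expressions are given in
    prefix notation, and rewards lie in [0, 1] (e.g. a squashed fitting score).
    """
    tokens: List[str] = ["<bos>"]
    best_expr, _ = max(history, key=lambda pair: pair[1])
    for expr, reward in history:
        tokens.extend(expr.split())           # candidate expression tokens
        tokens.append(f"<r={reward:.2f}>")    # reward feedback token (assumed format)
        tokens.append("<sep>")
    # Supervising the final segment on the best expression encourages the model
    # to converge toward high-reward formulas within its context window.
    tokens.extend(best_expr.split())
    tokens.append("<eos>")
    return tokens

if __name__ == "__main__":
    # Toy trajectory for a target like y = x1 * x2 + sin(x1), in prefix notation.
    run = [
        ("add mul x1 x1 x2", 0.31),
        ("add mul x1 x2 x2", 0.57),
        ("add mul x1 x2 sin x1", 0.99),
    ]
    print(" ".join(serialize_history(run)))
```

At test time, the same format would let the model condition on the observed data and generate such a trajectory autoregressively, refining its candidates as the generated rewards grow, which is one plausible reading of the "learning policy updated in context" described above.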