Back translation for molecule generation

BIOINFORMATICS (2022)

Abstract
Motivation: Molecule generation, i.e. generating new molecules, is an important problem in bioinformatics. Typical tasks include generating molecules with given properties, molecular property improvement (i.e. improving specific properties of an input molecule), and retrosynthesis (i.e. predicting the molecules that can be used to synthesize a target molecule). Recently, deep-learning-based methods have received increasing attention for molecule generation. Labeled data in bioinformatics are usually costly to obtain, whereas millions of unlabeled molecules are available. Inspired by the success of sequence generation with unlabeled data in natural language processing, we explore an effective way of using unlabeled molecules for molecule generation.

Results: We propose a new method, back translation for molecule generation, which is a simple yet effective semi-supervised method. Let X be the source domain, which is the collection of properties, molecules to be optimized, etc., and let Y be the target domain, which is the collection of molecules. Given a main task, which is to learn a mapping from the source domain X to the target domain Y, we first train a reversed model g for the Y-to-X mapping. We then use g to back translate the unlabeled data in Y to X and obtain additional synthetic data. Finally, we combine the synthetic data with the labeled data and train a model for the main task. We conduct experiments on molecular property improvement and retrosynthesis, and achieve state-of-the-art results on four molecule generation tasks and one retrosynthesis benchmark, USPTO-50k.

Availability and implementation: Our code and data are available at https://github.com/fyabc/BT4MolGen.
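The back-translation recipe described above can be summarized as a simple data-flow. The following Python sketch is a minimal illustration of that recipe, not the authors' implementation (see the linked repository for that); the `Seq2SeqModel` class and its `fit`/`translate` methods are hypothetical placeholders standing in for any sequence-to-sequence model over molecule representations.

```python
# Minimal conceptual sketch of back translation for molecule generation,
# assuming a hypothetical Seq2SeqModel interface (not the authors' code).

from typing import List, Tuple


class Seq2SeqModel:
    """Hypothetical stand-in for a sequence-to-sequence model
    (e.g. a Transformer over SMILES strings)."""

    def fit(self, pairs: List[Tuple[str, str]]) -> "Seq2SeqModel":
        # Real training logic would go here.
        return self

    def translate(self, sources: List[str]) -> List[str]:
        # Real inference logic would go here; identity used as a placeholder.
        return list(sources)


def back_translation_training(
    labeled_pairs: List[Tuple[str, str]],  # (x, y): source-domain item, target molecule
    unlabeled_targets: List[str],          # unlabeled molecules in the target domain Y
) -> Seq2SeqModel:
    # 1. Train the reversed model g on swapped pairs: Y -> X.
    reversed_pairs = [(y, x) for x, y in labeled_pairs]
    g = Seq2SeqModel().fit(reversed_pairs)

    # 2. Back translate unlabeled molecules to obtain synthetic source data.
    synthetic_sources = g.translate(unlabeled_targets)
    synthetic_pairs = list(zip(synthetic_sources, unlabeled_targets))

    # 3. Train the main model f on labeled plus synthetic data: X -> Y.
    f = Seq2SeqModel().fit(labeled_pairs + synthetic_pairs)
    return f
```

The key design point, as stated in the abstract, is that the reversed model only needs the existing labeled pairs, while the large pool of unlabeled molecules is turned into extra training data for the main X-to-Y task.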