Towards Foundational Models for Molecular Learning on Large-Scale Multi-Task Datasets

Dominique Beaini, Shenyang Huang, João Paulo Cunha, Zhiyi Li, Gabriela Moisescu-Pareja, Oleksandr Dymov, S. Maddrell-Mander, Cameron McLean, Ali Parviz, Luis T. Díaz Müller, Jama Hussein Mohamud, Frederik Wenkel, Michael Craig, Michał Koziarski, Jiarui Lu, Zhaocheng Zhu, Cristian Gabellini, Guillaume Rabusseau, Reihaneh Rabbany, Jian Tang, Christopher G. Morris, Mirco Ravanelli, Guy Wolf, Prudencio Tossou, Hadrien Mary, Błażej Banaszewski, Christian Martín, Dominic Masters

Zenodo (CERN European Organization for Nuclear Research), 2023

Abstract
Recently, pre-trained foundation models have enabled significant advancements in multiple fields. In molecular machine learning, however, where datasets are often hand-curated, and hence typically small, the lack of datasets with labeled features, and of codebases to manage those datasets, has hindered the development of foundation models. In this work, we present seven novel datasets categorized by size into three distinct categories: ToyMix, LargeMix and UltraLarge. These datasets push the boundaries in both the scale and the diversity of supervised labels for molecular learning. They cover nearly 100 million molecules and over 3000 sparsely defined tasks, totaling more than 13 billion individual labels of both quantum and biological nature. In comparison, our datasets contain 300 times more data points than the widely used OGB-LSC PCQM4Mv2 dataset, and 13 times more than the quantum-only QM1B dataset. In addition, to support the development of foundational models based on our proposed datasets, we present the Graphium graph machine learning library, which simplifies the process of building and training molecular machine learning models for multi-task and multi-level molecular datasets. Finally, we present a range of baseline results as a starting point for multi-task and multi-level training on these datasets. Empirically, we observe that performance on low-resource biological datasets improves when models are also trained on large amounts of quantum data. This indicates that there may be potential in multi-task and multi-level training of a foundation model and fine-tuning it to resource-constrained downstream tasks.
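A central technical point in the abstract is training on sparsely defined tasks, where each molecule carries labels for only a subset of the 3000+ tasks. A common way to handle this is to mask the loss so that only defined (molecule, task) pairs contribute. The following is a minimal PyTorch sketch of that masking idea; it is not Graphium's actual API, and all class, function, and variable names here are illustrative assumptions.

# Minimal sketch of masked multi-task training on sparse labels.
# Missing labels are encoded as NaN, and the loss is averaged over
# only the entries where a label actually exists.
import torch
import torch.nn as nn

class MultiTaskHead(nn.Module):
    """Maps a shared encoder embedding to one prediction per task."""
    def __init__(self, hidden_dim: int, num_tasks: int):
        super().__init__()
        self.heads = nn.Linear(hidden_dim, num_tasks)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return self.heads(h)  # shape: (batch, num_tasks)

def masked_multitask_loss(preds: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """MSE over only the defined labels; NaN marks a missing label."""
    mask = ~torch.isnan(targets)               # True where a label exists
    safe_targets = torch.nan_to_num(targets)   # replace NaN so the loss is finite
    per_label = (preds - safe_targets) ** 2
    return (per_label * mask).sum() / mask.sum().clamp(min=1)

# Toy usage: 4 molecules, 3 tasks, with missing labels as NaN.
encoder_out = torch.randn(4, 16)               # stand-in for GNN embeddings
head = MultiTaskHead(hidden_dim=16, num_tasks=3)
targets = torch.tensor([[0.1, float("nan"), 2.0],
                        [float("nan")] * 3,
                        [1.0, 0.5, float("nan")],
                        [0.0, 0.2, 0.3]])
loss = masked_multitask_loss(head(encoder_out), targets)
loss.backward()

This style of masking lets quantum and biological tasks with very different label coverage share one encoder, which is the setting in which the abstract reports gains on low-resource biological datasets.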
Keywords
molecular learning, foundational models, large-scale, multi-task