MiniMol: A Parameter-Efficient Foundation Model for Molecular Learning
arXiv (2024)
Abstract
In biological tasks, data is rarely plentiful, as it is generated from
hard-to-gather measurements. Pre-training foundation models on large
quantities of available data and then transferring them to low-data downstream
tasks is therefore a promising direction. However, how to design effective
foundation models for molecular learning remains an open question, with
existing approaches typically focusing on models with large parameter
capacities. In this work, we propose MiniMol, a foundation model for molecular
learning with 10 million parameters. MiniMol is pre-trained on a mix of
roughly 3300 sparsely defined graph- and node-level tasks of both quantum and
biological nature. The pre-training dataset includes approximately 6 million
molecules and 500 million labels. To demonstrate the generalizability of
MiniMol across tasks, we evaluate it on downstream tasks from the Therapeutics
Data Commons (TDC) ADMET group, showing significant improvements over the
prior state-of-the-art foundation model across 17 tasks. MiniMol will be
released as a public, open-source model for future research.
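
Because the pre-training labels are sparse (most of the ~3300 tasks are undefined for most molecules), the multi-task training loss must skip missing entries rather than average over them. Below is a minimal PyTorch sketch of one common way to do this, masking NaN-coded labels out of a per-task regression loss; the function name, the NaN-sentinel convention, and the choice of MSE are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def masked_multitask_loss(preds: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Average the loss over only the labels that are actually present.

    preds, labels: shape (batch, num_tasks); absent labels are encoded as NaN.
    Hypothetical illustration of sparse multi-task training, not the
    authors' implementation.
    """
    mask = ~torch.isnan(labels)                                  # True where a label exists
    safe = torch.where(mask, labels, torch.zeros_like(labels))   # keep elementwise loss finite
    per_label = F.mse_loss(preds, safe, reduction="none")        # (batch, num_tasks)
    return (per_label * mask).sum() / mask.sum().clamp(min=1)    # mean over observed labels only

# Example: 4 molecules, 6 tasks, roughly half of the labels missing.
preds = torch.randn(4, 6)
labels = torch.randn(4, 6)
labels[torch.rand(4, 6) < 0.5] = float("nan")
print(masked_multitask_loss(preds, labels))
```

The `clamp(min=1)` guards against a batch in which no task has any label, which would otherwise divide by zero.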