TensorBank: Tensor Lakehouse for Foundation Model Training

Romeo Kienzler, Leonardo Pondian Tizzei, Benedikt Blumenstiel, Zoltan Arnold Nagy, S. Karthik Mukkavilli, Johannes Schmude, Marcus Freitag, Michael Behrendt, Daniel Salles Civitarese, Naomi Simumba, Daiki Kimura, Hendrik Hamann

arXiv (2023)

Abstract
Storing and streaming high dimensional data for foundation model training has become a critical requirement with the rise of foundation models beyond natural language. In this paper we introduce TensorBank, a petabyte-scale tensor lakehouse capable of streaming tensors from Cloud Object Store (COS) to GPU memory at wire speed based on complex relational queries. We use Hierarchical Statistical Indices (HSI) for query acceleration. Our architecture allows tensors to be addressed directly at the block level using HTTP range reads. Once in GPU memory, data can be transformed using PyTorch transforms. We provide a generic PyTorch dataset type with a corresponding dataset factory that translates relational queries and requested transformations into dataset instances. By making use of the HSI, irrelevant blocks can be skipped without reading them, as these indices contain statistics on their content at different hierarchical resolution levels. This is an opinionated architecture powered by open standards and making heavy use of open-source technology. Although hardened for production use with geospatial-temporal data, the architecture generalizes to other use cases such as computer vision, computational neuroscience, biological sequence analysis, and more.
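
The abstract describes a dataset factory that turns a relational query plus requested transforms into a PyTorch dataset whose blocks are fetched via HTTP range reads and pruned using per-block index statistics. The sketch below illustrates that pattern only; the class names, query predicate, and range-read helper are illustrative assumptions, not the paper's actual API.

```python
# Minimal sketch of the dataset-factory pattern described in the abstract.
# All names (Block, TensorBankDataset, build_dataset) are hypothetical.
from dataclasses import dataclass
from typing import Callable, List, Optional, Sequence

import torch
from torch.utils.data import Dataset, DataLoader


@dataclass
class Block:
    """A tensor chunk addressable on COS via an HTTP range read."""
    url: str
    byte_offset: int
    byte_length: int
    stats: dict  # per-block statistics from the hierarchical index


class TensorBankDataset(Dataset):
    """Generic dataset yielding tensors for blocks that survive index pruning."""

    def __init__(self, blocks: Sequence[Block], transform: Optional[Callable] = None):
        self.blocks = list(blocks)
        self.transform = transform

    def __len__(self) -> int:
        return len(self.blocks)

    def __getitem__(self, idx: int) -> torch.Tensor:
        block = self.blocks[idx]
        tensor = self._range_read(block)      # placeholder for an HTTP range read
        if self.transform is not None:
            tensor = self.transform(tensor)   # PyTorch transforms applied after loading
        return tensor

    @staticmethod
    def _range_read(block: Block) -> torch.Tensor:
        # Stand-in for fetching `byte_length` bytes at `byte_offset` from COS
        # and decoding them into a tensor; here we just return dummy data.
        return torch.zeros(4, 4)


def build_dataset(blocks: List[Block],
                  predicate: Callable[[dict], bool],
                  transform: Optional[Callable] = None) -> TensorBankDataset:
    """Factory: prune blocks via their index statistics, wrap the rest in a dataset."""
    kept = [b for b in blocks if predicate(b.stats)]
    return TensorBankDataset(kept, transform=transform)


if __name__ == "__main__":
    blocks = [Block(f"https://cos.example/obj-{i}", 0, 1024, {"max": float(i)})
              for i in range(8)]
    # Keep only blocks whose max statistic exceeds a threshold,
    # mimicking how HSI lets irrelevant blocks be skipped without reading them.
    ds = build_dataset(blocks, predicate=lambda s: s["max"] > 3.0)
    for batch in DataLoader(ds, batch_size=2):
        print(batch.shape)
```

In the architecture described by the paper, the pruning decision would be driven by the relational query against the hierarchical index rather than the simple in-memory predicate shown here.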