TensorBank: Tensor Lakehouse for Foundation Model Training
arXiv (2023)
Abstract
Storing and streaming high-dimensional data for foundation model training
has become a critical requirement with the rise of foundation models beyond
natural language. In this paper we introduce TensorBank, a petabyte-scale
tensor lakehouse capable of streaming tensors from Cloud Object Store (COS)
to GPU memory at wire speed based on complex relational queries. We use
Hierarchical Statistical Indices (HSI) for query acceleration. Our
architecture allows tensors to be addressed directly at the block level
using HTTP range reads. Once in GPU memory, data can be transformed using
PyTorch transforms. We provide a generic PyTorch dataset type with a
corresponding dataset factory that translates relational queries and
requested transformations into dataset instances. Using the HSI, irrelevant
blocks can be skipped without being read, since the indices contain
statistics on block content at different hierarchical resolution levels.
This is an opinionated architecture built on open standards and making
heavy use of open-source technology. Although hardened for production use
with geospatial-temporal data, the architecture generalizes to other use
cases such as computer vision, computational neuroscience, biological
sequence analysis, and more.
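The abstract describes skipping irrelevant blocks using per-block statistics and then fetching only the matching blocks via HTTP range reads. The following is a minimal illustrative sketch of that idea, not the paper's actual API: the names (`BlockStats`, `blocks_to_fetch`) and the single-level min/max statistics are assumptions standing in for the hierarchical indices described above.

```python
# Hypothetical sketch of statistics-based block skipping, as in an HSI:
# each block of a stored tensor carries summary statistics (here min/max),
# so blocks that cannot satisfy a query predicate are never read.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class BlockStats:
    offset: int      # byte offset of the block within the object
    length: int      # byte length of the block
    min_val: float   # per-block statistics kept in the index
    max_val: float


def blocks_to_fetch(index: List[BlockStats],
                    lo: float, hi: float) -> List[Tuple[int, int]]:
    """Return (offset, length) ranges whose statistics overlap [lo, hi].

    Blocks whose [min_val, max_val] range is disjoint from the query
    interval are skipped without any I/O.
    """
    return [(b.offset, b.length) for b in index
            if b.max_val >= lo and b.min_val <= hi]


# Toy index for one object split into three 8-byte blocks.
index = [BlockStats(0, 8, 0.0, 1.0),
         BlockStats(8, 8, 5.0, 9.0),
         BlockStats(16, 8, 2.0, 3.0)]

# Query for values in [4, 10]: only the second block can match.
ranges = blocks_to_fetch(index, 4.0, 10.0)
print(ranges)  # → [(8, 8)]
```

In a setup like the one described, each surviving `(offset, length)` pair would map to an HTTP range read against COS (e.g. a `Range: bytes=8-15` request header), and the fetched blocks would feed a PyTorch dataset where transforms are applied.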