From Bricks to Bridges: Product of Invariances to Enhance Latent Space Communication
arXiv (2023)

Abstract
It has been observed that representations learned by distinct neural networks
conceal structural similarities when the models are trained under similar
inductive biases. From a geometric perspective, identifying the classes of
transformations and the related invariances that connect these representations
is fundamental to unlocking applications, such as merging, stitching, and
reusing different neural modules. However, estimating task-specific
transformations a priori can be challenging and expensive due to several
factors (e.g., weight initialization, training hyperparameters, or data
modality). To this end, we introduce a versatile method to directly incorporate
a set of invariances into the representations, constructing a product space of
invariant components on top of the latent representations without requiring
prior knowledge about the optimal invariance to infuse. We validate our
solution on classification and reconstruction tasks, observing consistent
latent similarity and downstream performance improvements in a zero-shot
stitching setting. The experimental analysis comprises three modalities
(vision, text, and graphs), twelve pretrained foundational models, nine
benchmarks, and several architectures trained from scratch.
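The abstract's core idea of building a product space of invariant components on top of a latent representation can be illustrated with a minimal sketch. The version below assumes a relative-projection scheme (representing each sample by its similarities to a fixed set of anchor samples) and two example similarity functions; all names and the particular invariance choices are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def relative_projection(x, anchors, sim):
    """Represent each sample by its similarities to a fixed set of anchors."""
    return np.array([[sim(xi, a) for a in anchors] for xi in x])

def cosine_sim(u, v):
    # Component invariant to rescaling of the latent space.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

def neg_euclidean(u, v):
    # Component invariant to isometries (rotations, translations).
    return -float(np.linalg.norm(u - v))

def product_space(x, anchors, sims):
    # Concatenate the invariant components into one product representation,
    # so no single "optimal" invariance has to be chosen a priori.
    return np.concatenate([relative_projection(x, anchors, s) for s in sims],
                          axis=1)

# Usage: 5 latent vectors, 4 anchors, 2 invariance components -> shape (5, 8)
rng = np.random.default_rng(0)
z = rng.normal(size=(5, 3))
anchors = rng.normal(size=(4, 3))
print(product_space(z, anchors, [cosine_sim, neg_euclidean]).shape)
```

Because each component is computed relative to shared anchors, two independently trained encoders mapped through the same product space become directly comparable, which is what enables zero-shot stitching of modules.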