Visually Grounded Speech Models have a Mutual Exclusivity Bias
CoRR (2024)
Abstract
When children learn new words, they employ constraints such as the mutual
exclusivity (ME) bias: a novel word is mapped to a novel object rather than a
familiar one. This bias has been studied computationally, but only in models
that use discrete word representations as input, ignoring the high variability
of spoken words. We investigate the ME bias in the context of visually grounded
speech models that learn from natural images and continuous speech audio.
Concretely, we train a model on familiar words and test its ME bias by asking
it to select between a novel and a familiar object when queried with a novel
word. To simulate prior acoustic and visual knowledge, we experiment with
several initialisation strategies using pretrained speech and vision networks.
Our findings reveal the ME bias across the different initialisation approaches,
with a stronger bias in models with more prior (in particular, visual)
knowledge. Additional tests confirm the robustness of our results, even when
different loss functions are considered.
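The evaluation protocol described in the abstract can be made concrete with a short sketch. The following is an illustrative outline only, not the paper's actual code: it assumes a visually grounded speech model that embeds audio and images into a shared space, with hypothetical `audio_encoder` and `image_encoder` functions, and measures how often a novel spoken word is matched to the novel rather than the familiar object.

```python
import torch
import torch.nn.functional as F

def me_bias_test(audio_encoder, image_encoder, novel_queries):
    """Estimate the mutual exclusivity (ME) bias of a model.

    novel_queries: iterable of (novel_word_audio, novel_image,
    familiar_image) triples. Returns the fraction of trials in which
    the novel spoken word is mapped to the novel object; values above
    0.5 indicate an ME bias. Encoders are assumed to map their inputs
    into a shared embedding space (an assumption for this sketch).
    """
    novel_choices = 0
    total = 0
    with torch.no_grad():
        for audio, novel_img, familiar_img in novel_queries:
            q = F.normalize(audio_encoder(audio), dim=-1)
            n = F.normalize(image_encoder(novel_img), dim=-1)
            f = F.normalize(image_encoder(familiar_img), dim=-1)
            # Cosine similarity in the shared space decides which
            # object the model "selects" for the novel word.
            if torch.sum(q * n) > torch.sum(q * f):
                novel_choices += 1
            total += 1
    return novel_choices / total
```

The same harness accommodates the different initialisation strategies the paper studies: one simply swaps in encoders initialised from pretrained speech or vision networks and compares the resulting ME-bias scores.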