Bridging the data gap between children and large language models.

Trends in Cognitive Sciences (2023)

Abstract
Large language models (LLMs) show intriguing emergent behaviors, yet they receive around four or five orders of magnitude more language data than human children. What accounts for this vast difference in sample efficiency? Candidate explanations include children's pre-existing conceptual knowledge, their use of multimodal grounding, and the interactive, social nature of their input.
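To make the "four or five orders of magnitude" figure concrete, here is a minimal back-of-envelope sketch in Python. The corpus sizes are illustrative assumptions, not figures quoted from the abstract: roughly 10^8 words heard by a child through early adolescence versus 10^12 to 10^13 training tokens for a modern LLM.

```python
import math

# Assumed, illustrative figures (not from the abstract):
# a child hearing a few million words per year has accumulated
# on the order of 1e8 words by early adolescence, while modern
# LLM training corpora run on the order of 1e12-1e13 tokens.
child_words = 1e8                                # assumed cumulative child input
llm_tokens_low, llm_tokens_high = 1e12, 1e13     # assumed LLM training-set sizes

# The "data gap" is the base-10 log of the ratio of training data.
gap_low = math.log10(llm_tokens_low / child_words)
gap_high = math.log10(llm_tokens_high / child_words)
print(f"data gap: {gap_low:.0f} to {gap_high:.0f} orders of magnitude")
# -> data gap: 4 to 5 orders of magnitude
```

Under these assumed figures, the ratio works out to the four-to-five orders of magnitude cited in the abstract.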