ChinaOpen: A Dataset for Open-world Multimodal Learning

CoRR (2023)

Abstract
This paper introduces ChinaOpen, a dataset sourced from Bilibili, a popular Chinese video-sharing website, for open-world multimodal learning. While state-of-the-art multimodal learning networks have shown impressive performance in automated video annotation and cross-modal video retrieval, their training and evaluation have primarily been conducted on YouTube videos with English text. Their effectiveness on Chinese data remains to be verified. To support multimodal learning in this new context, we construct ChinaOpen-50k, a webly annotated training set of 50k Bilibili videos associated with user-generated titles and tags. Both text-based and content-based data cleaning are performed to remove low-quality videos in advance. For a multi-faceted evaluation, we build ChinaOpen-1k, a manually labeled test set of 1k videos, where each video is accompanied by a manually checked user title and a manually written caption. In addition, each test video is manually tagged to describe which visual entities / actions / scenes are present in the visual content. The original user tags are also manually checked. Moreover, with all the Chinese text translated into English, ChinaOpen-1k is also suited for evaluating models trained on English data. In addition to ChinaOpen, we propose the Generative Video-to-text Transformer (GVT) for Chinese video captioning. We conduct an extensive evaluation of state-of-the-art single-task / multi-task models on the new dataset, yielding a number of novel findings and insights.
Keywords
dataset,learning,open-world