Learning Effective Representations from Sparse Multimodal Data on Content Curation Social Networks

ICDM Workshops (2019)

Abstract
Content curation social networks (CCSNs), which give users a platform to share their interests through multimedia content, are among the most rapidly growing social networks in recent years. Since CCSN users generate large-scale multimodal data, learning multimodal representations of content has become key to many applications, such as user interest analysis and recommender systems for curation networks. Representation learning for CCSNs faces a vital challenge: the sparsity of multimodal data. Most existing approaches struggle to learn effective representations for multimodal CCSNs because they do not address how to model sparse and noisy multimodal data. In this paper, we propose a two-step approach to learn accurate multimodal representations from sparse multimodal data. First, we propose a novel Board-Image-Word (BIW) graph to model the multimodal data. Benefiting from the unique board-image relation on CCSNs, embeddings of images and texts that capture semantic relations are learned from the network topology of the BIW graph. In the second step, a deep vision model with a modified loss function is trained by minimizing the distance between the visual features of content items and their corresponding semantic-relation embeddings, yielding representations that incorporate both visual information and graph-based semantic relations. Experiments on a dataset from Huaban.com demonstrate that, under sparser text modality, our method significantly outperforms multimodal DBN, DBM, and unimodal representation learning methods on pin classification and board recommendation tasks.
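The second training step can be sketched in code. The abstract does not specify the exact modified loss, so the sketch below is a minimal, assumption-laden illustration: it stands in for the vision model with a single learned linear projection, uses random placeholder arrays for the CNN visual features and the BIW graph embeddings, and takes the "distance" to be the mean squared Euclidean distance, trained by plain gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 2048-d visual features, 128-d BIW graph embeddings.
n, d_vis, d_emb = 32, 2048, 128

visual = rng.normal(size=(n, d_vis))     # placeholder for CNN features of pins
graph_emb = rng.normal(size=(n, d_emb))  # placeholder for BIW node embeddings

# Linear projection head standing in for the deep vision model's output layer.
W = rng.normal(scale=0.01, size=(d_vis, d_emb))

def loss_and_grad(W):
    """Mean squared Euclidean distance between projected visual features
    and their graph-based semantic-relation embeddings, plus its gradient."""
    diff = visual @ W - graph_emb
    loss = float((diff ** 2).mean())
    grad = 2.0 * visual.T @ diff / diff.size
    return loss, grad

# A few gradient-descent steps on the distance objective.
losses = []
for _ in range(20):
    loss, grad = loss_and_grad(W)
    losses.append(loss)
    W -= 1e-3 * grad
```

In the paper's actual method, `visual` would come from a deep network trained end to end and `graph_emb` from the BIW graph embedding step; the loop above only illustrates the idea of pulling visual features toward fixed semantic embeddings.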
Keywords
representation learning,content curation social networks,multimodal analysis,deep learning