Inferring User Interests On Social Media From Text And Images

2015 IEEE International Conference on Data Mining Workshop (ICDMW)(2015)

Abstract
We address inferring user interests on social media from text and images as a multi-class classification problem, and propose approaches for social media settings where multi-modal data (text, images, etc.) is common. We use user-generated data from Pinterest.com as a natural expression of users' interests: users collect images they like on the platform and often assign each pin (an image-text pair) a category label, which we treat as representing a broad user interest. This task is useful beyond Pinterest, because most user-generated data on the Web is not readily categorized into interest labels. Beyond predicting users' interests, our main contribution is exploiting a multi-modal space composed of images and text. This is a natural approach, since humans express their interests through a combination of modalities, yet it has received little attention in the literature. We performed eleven experiments using state-of-the-art image and textual representations, including convolutional neural networks, word embeddings, and bags of visual and textual words. Our experimental results show that jointly processing images and text increases overall interest classification accuracy compared to uni-modal representations (i.e., using only text or only images).
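The multi-modal idea described above, combining an image representation and a text representation of each pin before classification (often called early fusion), can be sketched as follows. This is a minimal illustration with synthetic features and a nearest-centroid classifier; the feature dimensions, data generation, and classifier are assumptions for demonstration, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for real pin features: a 128-d image vector
# (e.g., a CNN activation) and a 64-d text vector (e.g., an averaged
# word embedding) per pin, over 3 interest categories.
n_classes, img_dim, txt_dim = 3, 128, 64

def make_split(n_per_class):
    """Generate synthetic image/text features with class-dependent means."""
    X_img, X_txt, y = [], [], []
    for c in range(n_classes):
        X_img.append(rng.normal(loc=c, scale=1.0, size=(n_per_class, img_dim)))
        X_txt.append(rng.normal(loc=-c, scale=1.0, size=(n_per_class, txt_dim)))
        y.append(np.full(n_per_class, c))
    return np.vstack(X_img), np.vstack(X_txt), np.concatenate(y)

Xi_tr, Xt_tr, y_tr = make_split(50)
Xi_te, Xt_te, y_te = make_split(20)

def nearest_centroid_acc(X_tr, y_tr, X_te, y_te):
    """Classify each test point by its nearest class centroid."""
    centroids = np.stack([X_tr[y_tr == c].mean(axis=0) for c in range(n_classes)])
    dists = ((X_te[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    pred = np.argmin(dists, axis=1)
    return float((pred == y_te).mean())

# Early fusion: concatenate the image and text features of each pin.
fused_tr = np.hstack([Xi_tr, Xt_tr])
fused_te = np.hstack([Xi_te, Xt_te])

acc_img = nearest_centroid_acc(Xi_tr, y_tr, Xi_te, y_te)
acc_txt = nearest_centroid_acc(Xt_tr, y_tr, Xt_te, y_te)
acc_fused = nearest_centroid_acc(fused_tr, y_tr, fused_te, y_te)
```

In practice one would replace the synthetic vectors with real CNN features and word-embedding (or BoW) features and use a stronger classifier; the concatenation step is the essence of the joint image-text space the abstract describes.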
Keywords
inferring user interests, user modeling, term frequencies, bag of words (BoW), convolutional neural networks (CNN), word embeddings