Supervised Multi-Modal Dictionary Learning For Clothing Representation

Proceedings of the Fifteenth IAPR International Conference on Machine Vision Applications (MVA 2017)

Abstract
Clothing appearance has complex visual properties, such as color, texture, shape, and structure. Different modalities of visual features provide complementary information, so combining multi-modal visual features can yield a comprehensive description of clothing appearance. Meanwhile, category labels provide rich semantic information, which leads to discriminative representations, and clothing categories exhibit a hierarchical structure that the learning algorithm can exploit. In this paper, we propose a multi-view learning algorithm, named Supervised Multi-modal Dictionary Learning (SMMDL), which learns a latent space encoding both the multi-modal visual properties of clothing samples and the semantic relationships between them. Experiments on the image classification task show that SMMDL outperforms baseline methods.
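The abstract does not state the objective function, so as an illustration only, a generic supervised multi-modal dictionary learning objective of the kind SMMDL builds on couples modality-specific dictionaries through shared codes and a label-prediction term. The symbols below (X_m, D_m, A, Y, W, and the weights lambda, gamma) are assumed notation for this sketch, not notation taken from the paper:

\[
\min_{\{D_m\},\,A,\,W} \;\; \sum_{m=1}^{M} \left\| X_m - D_m A \right\|_F^2
\;+\; \lambda \left\| A \right\|_1
\;+\; \gamma \left\| Y - W A \right\|_F^2
\quad \text{s.t. } \|d_{m,k}\|_2 \le 1 \;\; \forall m, k
\]

Here X_m stacks the features of modality m (e.g., color, texture, shape), D_m is that modality's dictionary, A holds the shared sparse codes that act as the latent representation, and Y and W are the label matrix and a linear predictor. Under this kind of formulation, alternating minimization over {D_m}, A, and W is the usual optimization strategy; the hierarchical category structure mentioned in the abstract would enter through the design of the supervision term.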
Keywords
supervised multimodal dictionary learning, clothing representation, clothing appearances, visual feature modalities, multimodal visual features, discriminative representations, clothing categories, hierarchical structure, multiview learning algorithm, SMMDL, latent space encoding, multimodal visual properties, image classification