Multi-modal feature fusion for geographic image annotation.

Pattern Recognition (2018)

Abstract
• Multi-modal feature construction: for the shallow modality, we propose a mixed shallow feature model that combines Color, LBP, and SIFT features to represent the extrinsic visual properties of geographic images; for the deep modality, we design a specialized DCNN to extract the intrinsic semantic information of geographic images.
• Multi-modal feature fusion: we propose a multi-modal feature fusion model based on DBNs and an RBM to build a powerful joint representation for geographic images. The model has been shown to be effective in capturing both the intrinsic and extrinsic semantic information.
• Open geographic image dataset: we have built a geographic image dataset containing 300 images (600 × 600) covering six typical area types, such as urban, rural, and mountain.
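The following is a minimal sketch, not the authors' code, of the two ideas named in the abstract: a mixed shallow descriptor built from Color, LBP, and SIFT statistics, and a joint representation learned over the concatenated shallow and deep features with an RBM layer standing in for the DBN/RBM fusion model. The library choices (OpenCV, scikit-image, scikit-learn), all parameter values, the file names, and the placeholder DCNN features are illustrative assumptions.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.neural_network import BernoulliRBM
from sklearn.preprocessing import MinMaxScaler

def shallow_features(bgr_image):
    """Concatenate a color histogram, an LBP histogram, and mean SIFT statistics."""
    # Color: per-channel histograms (8 bins each), normalized to sum to 1.
    color = np.concatenate([
        cv2.calcHist([bgr_image], [c], None, [8], [0, 256]).ravel()
        for c in range(3)
    ])
    color /= color.sum() + 1e-8

    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)

    # LBP: uniform-pattern histogram with P=8 neighbors, radius R=1.
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

    # SIFT: mean of the local descriptors as a crude fixed-length summary
    # (the paper may use a codebook instead; this is only a placeholder).
    sift = cv2.SIFT_create()
    _, desc = sift.detectAndCompute(gray, None)
    sift_vec = desc.mean(axis=0) if desc is not None else np.zeros(128)

    return np.concatenate([color, lbp_hist, sift_vec]).astype(np.float32)

def joint_representation(shallow, deep, n_hidden=256):
    """Fuse shallow and deep feature matrices with one RBM layer
    (a simplified stand-in for the DBN/RBM fusion described in the abstract)."""
    fused = np.hstack([shallow, deep])
    fused = MinMaxScaler().fit_transform(fused)  # BernoulliRBM expects values in [0, 1]
    rbm = BernoulliRBM(n_components=n_hidden, learning_rate=0.05, n_iter=20)
    return rbm.fit_transform(fused)

if __name__ == "__main__":
    # Hypothetical usage: image files and a random matrix stand in for the
    # geographic dataset and the DCNN outputs, respectively.
    images = [cv2.imread(f"img{i}.jpg") for i in range(4)]
    shallow = np.stack([shallow_features(im) for im in images])
    deep = np.random.rand(len(images), 512).astype(np.float32)  # placeholder deep features
    joint = joint_representation(shallow, deep)
    print(joint.shape)  # (4, 256) joint representation
```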
Keywords
Convolutional neural networks (CNNs), Deep learning, Geographic image annotation, Multi-modal feature fusion