Bottom-up and top-down reasoning with convolutional latent-variable models

arXiv: Computer Vision and Pattern Recognition (2015)

Abstract
Convolutional neural nets (CNNs) have demonstrated remarkable performance in recent years. Such approaches tend to work in a unidirectional, bottom-up feed-forward fashion. However, biological evidence suggests that feedback plays a crucial role, particularly for detailed spatial understanding tasks. This work introduces bidirectional architectures that also reason with top-down feedback: neural units are influenced by both lower and higher-level units. We do so by treating units as latent variables in a global energy function. We call our models convolutional latent-variable models (CLVMs). From a theoretical perspective, CLVMs unify several approaches for recognition, including CNNs, generative deep models (e.g., Boltzmann machines), and discriminative latent-variable models (e.g., DPMs). From a practical perspective, CLVMs are particularly well-suited for multi-task learning. We describe a single architecture that simultaneously achieves state-of-the-art accuracy for tasks spanning both high-level recognition (part detection/localization) and low-level grouping (pixel segmentation). Bidirectional reasoning is particularly helpful for detailed low-level tasks, since they can take advantage of top-down feedback. Our architectures are quite efficient, capable of processing an image in milliseconds. We present results on benchmark datasets with both part/keypoint labels and segmentation masks (such as PASCAL and LFW) that demonstrate a significant improvement over prior art, in both speed and accuracy.
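To make the bidirectional idea concrete, the following is a minimal sketch of coordinate (mean-field-style) updates in which each hidden layer is influenced by both the layer below and the layer above. It is an assumption-laden toy, not the paper's method: it uses dense weight matrices W1 and W2 and sigmoid updates for a two-hidden-layer model, whereas the actual CLVMs use convolutional weights and a specific global energy function not reproduced here.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bidirectional_inference(x, W1, W2, n_iters=10):
    """Toy coordinate updates for a two-hidden-layer latent-variable model.

    Each hidden layer receives bottom-up input from the layer below and
    top-down feedback from the layer above. W1, W2, and the sigmoid update
    rule are illustrative assumptions; the paper's CLVMs use convolutional
    weights inside a global energy function.
    """
    # Bottom-up initialization: a standard feed-forward pass.
    h1 = sigmoid(W1 @ x)
    h2 = sigmoid(W2 @ h1)
    for _ in range(n_iters):
        # h1 is influenced by both the input below and h2 above (feedback).
        h1 = sigmoid(W1 @ x + W2.T @ h2)
        # h2 then sees the updated lower layer.
        h2 = sigmoid(W2 @ h1)
    return h1, h2

# Toy usage with random weights (shapes are illustrative only).
rng = np.random.default_rng(0)
x = rng.normal(size=64)
W1 = rng.normal(scale=0.1, size=(32, 64))
W2 = rng.normal(scale=0.1, size=(16, 32))
h1, h2 = bidirectional_inference(x, W1, W2)
```

The key design point the sketch illustrates is that, unlike a purely feed-forward CNN, the lower-layer units are re-estimated using information from higher layers, which is what the abstract argues helps detailed low-level tasks such as pixel segmentation.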