Combining Bottom-Up And Top-Down Attentional Influences

Human Vision and Electronic Imaging XI (2006)

Abstract
Visual attention to salient and relevant scene regions is crucial for an animal's survival in the natural world. It is guided by a complex interplay of at least two factors: image-driven, bottom-up salience (1) and knowledge-driven, top-down guidance (2, 3). For instance, a ripe red fruit among green leaves captures visual attention due to its bottom-up salience, while a non-salient camouflaged predator is detected through top-down guidance to known predator locations and features. Although both bottom-up and top-down factors are important for guiding visual attention, most existing models and theories are either purely top-down (4) or bottom-up (5, 6). Here, we present a combined model of bottom-up and top-down visual attention. Our proposed model first computes the naive, bottom-up salience of every scene location for different local visual features (e.g., different colors, orientations and intensities) at multiple spatial scales in a manner described in (6). Next, the top-down component uses learnt statistical knowledge of the local features of the target and distracting clutter to optimize the relative weights of the bottom-up maps such that the overall salience of the target is maximized relative to the surrounding clutter. Such optimization renders the target more salient than the distractors, thereby maximizing target detection speed (7). Finding the optimal top-down weights that maximize the target's salience relative to
Keywords
top-down, bottom-up
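
The weighting scheme sketched in the abstract can be illustrated with a short example. The Python sketch below is a hypothetical illustration, not the authors' implementation: given pre-computed bottom-up feature maps and binary masks marking target and distractor regions, each feature's top-down gain is set from the ratio of the target's mean response to the distractors' mean response (a signal-to-noise-style criterion), and the gain-weighted maps are summed into an overall salience map. The function names, the synthetic 32x32 maps, and the normalization step are assumptions made for the example.

```python
import numpy as np

def topdown_weights(feature_maps, target_mask, distractor_mask, eps=1e-6):
    """Illustrative top-down gains: for each bottom-up feature map, take the
    ratio of the target's mean response to the distractors' mean response."""
    weights = []
    for fmap in feature_maps:
        target_response = fmap[target_mask].mean()
        distractor_response = fmap[distractor_mask].mean()
        weights.append(target_response / (distractor_response + eps))
    weights = np.array(weights)
    return weights / (weights.sum() + eps)  # normalize gains to sum to 1

def combined_salience(feature_maps, weights):
    """Weighted sum of bottom-up feature maps -> overall salience map."""
    return sum(w * fmap for w, fmap in zip(weights, feature_maps))

# Toy usage on synthetic 32x32 maps (e.g., color / orientation / intensity).
rng = np.random.default_rng(0)
maps = [rng.random((32, 32)) for _ in range(3)]
target_mask = np.zeros((32, 32), dtype=bool)
target_mask[10:14, 10:14] = True
distractor_mask = ~target_mask

maps[1][target_mask] += 2.0  # make feature 1 diagnostic of the target
w = topdown_weights(maps, target_mask, distractor_mask)
salience = combined_salience(maps, w)
print("learned gains:", np.round(w, 3))
print("target vs. background salience:",
      salience[target_mask].mean(), salience[distractor_mask].mean())
```

In this toy setup, the feature that distinguishes the target receives the largest gain, so the combined map makes the target region more salient than the surrounding clutter, which is the intuition behind the optimization described in the abstract.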