Distributed Real-Time Embedded Video Processing

msra (2004)

Abstract
The embedded systems group at Princeton University is building a distributed system for real-time analysis of video from multiple cameras. Most work in multiple-camera video systems relies on centralized processing. However, performing video computations at a central server has several disadvantages: it introduces latency that reduces the response time of the video system; it increases the amount of buffer memory required; and it consumes network bandwidth. As a result, centralized video processing systems not only provide lower performance but also consume excess power. A deployable multi-camera video system must perform distributed computation, including computation near the camera as well as remote computation, in order to meet performance and power requirements.

Smart cameras combine sensing and computation to perform real-time image and video analysis. A smart camera can be used for many applications, including face recognition and tracking. We have developed a smart camera system (Wol02) that performs real-time gesture recognition. This system, which currently runs at 25 frames/sec on a Trimedia TM-100 VLIW processor, classifies gestures such as walking, standing, and waving arms. The application uses a number of standard vision algorithms as well as some improvements of our own; the details of the algorithms are not critical to the distributed system research we propose here. However, real-time vision is very well suited to distributed system implementation.

Using multiple cameras simplifies some important problems in video analysis. Occlusion causes many problems in vision; for example, when the subject turns such that only one arm can be seen from a single camera, the algorithms must infer that the arm exists in order to confirm that the subject in front of the camera is a person and not something else. When views are available from multiple cameras, the data can be fused to provide a global view of the subject that provides more complete information for higher-level analysis. Multiple cameras also allow us to replace mechanical panning and zooming with electronic panning and zooming. Electronically panned/zoomed cameras do not have inertia that affects tracking; they are also more reliable under harsh environmental conditions.
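The bandwidth argument can be made concrete with a back-of-the-envelope comparison between shipping raw frames to a central server and shipping only compact per-frame results computed near the camera. The sketch below is not from the paper: the 320x240 8-bit frame format and the 16-byte per-frame result record are illustrative assumptions, while the 25 frames/sec rate is the one quoted in the abstract.

    /* Minimal sketch (illustrative assumptions, not the paper's figures):
     * compare per-camera network load of centralized vs. near-camera processing. */
    #include <stdio.h>

    int main(void)
    {
        const double fps = 25.0;                  /* frame rate quoted in the abstract */
        const double frame_bytes = 320.0 * 240.0; /* assumed 320x240, 8-bit grayscale */
        const double result_bytes = 16.0;         /* assumed gesture label + bounding box per frame */

        double raw_stream = fps * frame_bytes;     /* centralized: raw video sent to the server */
        double result_stream = fps * result_bytes; /* distributed: results computed at the camera */

        printf("centralized : %.1f KB/s per camera\n", raw_stream / 1024.0);
        printf("distributed : %.1f KB/s per camera\n", result_stream / 1024.0);
        return 0;
    }

Under these assumptions the raw stream is roughly 1.9 MB/s per camera while the result stream is a few hundred bytes per second, which is the kind of gap that motivates pushing computation toward the camera.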
Keywords
real time, algorithms