Distributed visual processing for augmented reality

Mixed and Augmented Reality (2012)

Cited by 13
Abstract
Recent advances have made augmented reality on smartphones possible, but these applications are still constrained by the limited computational power available. This paper presents a system that combines smartphones with networked infrastructure and fixed sensors, and shows how these elements can be combined to deliver real-time augmented reality. A key feature of this framework is the asymmetric nature of the distributed computing environment: smartphones have high-bandwidth video cameras but limited computational ability. Our system connects multiple smartphones through relatively low-bandwidth network links to a server with large computational resources, which is in turn connected to fixed sensors that observe the environment. In contrast to other systems that use preprocessed static models or markers, our system can rapidly build dynamic models of the environment on the fly at frame rate. We achieve this by processing data from a Microsoft Kinect to build a trackable point cloud model of each frame. The smartphones process their video camera data on-board to extract their own set of compact and efficient feature descriptors, which are sent via WiFi to a server. The server runs computationally intensive algorithms, including feature matching, pose estimation, and occlusion testing, for each smartphone. Our system demonstrates real-time performance for two smartphones.
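The asymmetric split described above (compact descriptors computed on-phone, heavy matching done server-side) can be sketched as follows. This is an illustrative outline only, not the authors' implementation: the serialization format, function names, and the brute-force matcher with a ratio test are all assumptions made for the sketch.

```python
import struct
from math import sqrt

def pack_descriptors(descs):
    """Phone side (hypothetical): serialize compact float descriptors
    into a small binary payload for a low-bandwidth WiFi link."""
    n, dim = len(descs), len(descs[0])
    payload = b"".join(struct.pack(f"{dim}f", *d) for d in descs)
    return struct.pack("II", n, dim) + payload

def unpack_descriptors(blob):
    """Server side (hypothetical): recover the descriptor list."""
    n, dim = struct.unpack_from("II", blob)
    off, out = struct.calcsize("II"), []
    for _ in range(n):
        out.append(list(struct.unpack_from(f"{dim}f", blob, off)))
        off += struct.calcsize(f"{dim}f")
    return out

def match_descriptors(query, model, ratio=0.8):
    """Server side (hypothetical): brute-force nearest-neighbour matching
    of phone descriptors against the per-frame point-cloud model,
    with a ratio test to reject ambiguous matches."""
    matches = []
    for qi, q in enumerate(query):
        dists = sorted(
            (sqrt(sum((a - b) ** 2 for a, b in zip(q, m))), mi)
            for mi, m in enumerate(model)
        )
        if len(dists) > 1 and dists[0][0] < ratio * dists[1][0]:
            matches.append((qi, dists[0][1]))
    return matches
```

The matched correspondences would then feed the server's pose-estimation and occlusion-testing stages; a real system would use binary descriptors and an approximate-nearest-neighbour index rather than this brute-force loop.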
Keywords
limited computational ability, fixed sensor, efficient feature descriptors, augmented reality, limited computational power, large computational resource, key feature, feature matching, multiple smartphones, frame rate, visual processing, pose estimation, feature extraction, pattern matching, data visualisation