Analyzing Before Solving: Which Parameters Influence Low-Level Surgical Activity Recognition.

arXiv: Human-Computer Interaction (2017)

Citations: 23 | Views: 5
Abstract
Automatic low-level surgical activity recognition is today a well-known technical bottleneck for smart, situation-aware assistance in the operating room of the future. Our study sought to discover which sensors and signals could facilitate this recognition. A low-level surgical activity represents semantic information about a surgical procedure and is usually expressed by three elements: an action verb, a surgical instrument, and the operated anatomical structure. We hypothesized that activity recognition does not require sensors for all three elements. We conducted a large-scale study using deep learning on semantic data from 154 operations across four different types of surgery. The results demonstrated that the instrument and the verb encode similar information, meaning that only one needs to be tracked, preferably the instrument. The anatomical structure, however, provides some unique cues, so recognizing it is crucial. For all the surgeries studied, a combination of two elements, always including the structure, proved sufficient to confidently recognize the activities. We also found that, in the presence of noise, combining information about the instrument, the structure, and the historical context produced better results than a simple composition of all three elements. Several relevant observations about surgical practice were also made. Such findings provide cues for designing a new generation of operating rooms.
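To make the activity representation concrete, the triple structure described above can be sketched as follows. This is a minimal illustrative sketch, not code from the paper: the field names, example values, and the `project` helper (mimicking the study's ablation of input elements) are all hypothetical.

```python
from collections import namedtuple

# Hypothetical encoding of a low-level surgical activity as the triple
# described in the abstract: an action verb, a surgical instrument, and
# the operated anatomical structure. Example values are illustrative.
Activity = namedtuple("Activity", ["verb", "instrument", "structure"])


def project(activity, keep):
    """Keep only a subset of the three elements, mimicking the ablation
    of input signals studied in the paper (e.g. instrument + structure,
    with the verb dropped)."""
    return tuple(getattr(activity, field) for field in keep)


activity = Activity(verb="cut", instrument="scalpel", structure="skin")

# Instrument + structure: a two-element combination that, per the
# abstract, always includes the structure.
print(project(activity, ["instrument", "structure"]))  # ('scalpel', 'skin')
```

Framing the recognition input this way makes the paper's central question explicit: which projections of the full triple still carry enough information for a model to recognize the activity.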