Tri-modal Confluence with Temporal Dynamics for Scene Graph Generation in Operating Rooms
arXiv (2024)
Abstract
A comprehensive understanding of surgical scenes allows for monitoring of the
surgical process, reducing the occurrence of accidents and enhancing efficiency
for medical professionals. Semantic modeling within operating rooms, as a scene
graph generation (SGG) task, is challenging since it involves consecutive
recognition of subtle surgical actions over prolonged periods. To address this
challenge, we propose a Tri-modal (i.e., images, point clouds, and language)
confluence with Temporal dynamics framework, termed TriTemp-OR. Diverging from
previous approaches that integrated temporal information via memory graphs, our
method offers two advantages: 1) we directly exploit bi-modal temporal
information from video streams for hierarchical feature interaction, and
2) the prior knowledge from Large Language Models (LLMs) is embedded to
alleviate the class-imbalance problem in the operating theatre. Specifically,
our model performs temporal interactions across 2D frames and 3D point clouds,
including a scale-adaptive multi-view temporal interaction (ViewTemp) and a
geometric-temporal point aggregation (PointTemp). Furthermore, we transfer
knowledge from the biomedical LLM, LLaVA-Med, to deepen the comprehension of
intraoperative relations. The proposed TriTemp-OR aggregates tri-modal
features through relation-aware unification to predict relations and
generate scene graphs. Experimental results on the 4D-OR benchmark
demonstrate the superior performance of our model on long-term OR video streams.