
MM-OR: A Large Multimodal Operating Room Dataset for Semantic Understanding of High-Intensity Surgical Environments

CVPR 2025

Abstract
Operating rooms (ORs) are complex, high-stakes environments requiring precise understanding of interactions among medical staff, tools, and equipment for enhancing surgical assistance, situational awareness, and patient safety. Current datasets fall short in scale and realism, and do not capture the multimodal nature of OR scenes, limiting progress in OR modeling. To this end, we introduce MM-OR, a realistic and large-scale multimodal spatiotemporal OR dataset, and the first dataset to enable multimodal scene graph generation. MM-OR captures comprehensive OR scenes containing RGB-D data, detail views, audio, speech transcripts, robotic logs, and tracking data, and is annotated with panoptic segmentations, semantic scene graphs, and downstream task labels. Further, we propose MM2SG, the first multimodal large vision-language model for scene graph generation, and through extensive experiments demonstrate its ability to effectively leverage multimodal inputs. Together, MM-OR and MM2SG establish a new benchmark for holistic OR understanding and open the path towards multimodal scene analysis in complex, high-stakes environments. Our code and data are available at https://github.com/egeozsoy/MM-OR.
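To make the scene graph annotation concrete, below is a minimal sketch in Python of how one OR timepoint with multimodal streams and (subject, predicate, object) triplets could be represented. The entity names, predicates, and field layout are illustrative assumptions, not the actual MM-OR schema or label set; see the repository above for the real data format.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SceneGraphTriplet:
    # One relation in the semantic scene graph, e.g. a staff member acting on equipment.
    subject: str    # hypothetical entity name, not the MM-OR label set
    predicate: str  # hypothetical action or spatial relation
    obj: str        # target entity of the relation

@dataclass
class ORFrame:
    # One annotated timepoint; the available modality streams are listed per frame.
    timestamp: float                   # seconds into the recording
    modalities: List[str]              # e.g. RGB-D, audio, robot log, tracking
    triplets: List[SceneGraphTriplet]  # scene graph annotation for this frame

# Example frame with made-up annotations for illustration only.
frame = ORFrame(
    timestamp=312.4,
    modalities=["rgbd", "audio", "robot_log", "tracking"],
    triplets=[
        SceneGraphTriplet("head_surgeon", "holding", "drill"),
        SceneGraphTriplet("assistant", "close_to", "operating_table"),
    ],
)

# Print the graph as readable edges.
for t in frame.triplets:
    print(f"{t.subject} --{t.predicate}--> {t.obj}")
```

A structure like this makes the generation task explicit: given the multimodal inputs at a timepoint, a model such as MM2SG predicts the set of triplets describing the scene.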
Keywords
Large Datasets, Multimodal Dataset, Tracking Data, Holistic Understanding, Situational Awareness, Audio Data, Multimodal Analysis, Multimodal Model, Realistic Dataset, Scene Graph, Semantic Graph, RGB-D Data, Point Cloud, Language Model, Video Data, Temporal Data, Activity Prediction, Depth Camera, Human-robot Interaction, Point Cloud Data, Unicompartmental Knee Arthroplasty, Surgical Staff, Surgical Activity, High-resolution View, Scene Understanding, Token Embedding, Robotic Setup, Surgical Tools, Long-term Context, Audio Cues