Automated multimodal sensemaking: Ontology-based integration of linguistic frames and visual data

Computers in Human Behavior (2024)

Abstract
Frame evocation from visual data is an essential process for multimodal sensemaking, owing to the multimodal abstraction provided by frame semantics. However, data-driven approaches and tools to automate it are scarce. We propose a novel approach for explainable automated multimodal sensemaking that links linguistic frames to their physical visual occurrences, using ontology-based knowledge engineering techniques. We term this extension of linguistic frame evocation from text to visual data "framal visual manifestations". We present a deep ontological analysis of the implicit data model of the Visual Genome image dataset, and its formalization in the novel Visual Sense Ontology (VSO). To enrich the multimodal data of this dataset, we introduce a framal knowledge expansion pipeline that extracts linguistic frames, including values and emotions, and connects them to images, using multiple linguistic resources for disambiguation. The pipeline then produces the Visual Sense Knowledge Graph (VSKG), a novel resource: a knowledge graph, queryable via SPARQL, that improves the accessibility and comprehensibility of Visual Genome's multimodal data. VSKG includes frame visual evocation data, enabling more advanced forms of explicit reasoning, analysis, and sensemaking. Our work represents a significant advancement in the automation of frame evocation and multimodal sensemaking, performed in a fully interpretable and transparent way, with potential applications in knowledge representation, computer vision, and natural language processing.
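Since the abstract describes VSKG as queryable via SPARQL, a minimal query sketch illustrates the intended usage: retrieving images whose annotated regions evoke a given linguistic frame. The namespace and property names below (vso:, vso:hasRegion, vso:evokesFrame) are hypothetical placeholders for illustration only; the abstract does not specify the published VSO vocabulary.

```sparql
# Illustrative sketch only: find images whose annotated regions evoke
# a given linguistic frame. Prefix and property names are hypothetical
# placeholders, not the published VSO/VSKG vocabulary.
PREFIX vso:  <http://example.org/vso#>    # hypothetical namespace
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT DISTINCT ?image ?region ?frameLabel
WHERE {
  ?image  a vso:Image ;
          vso:hasRegion  ?region .        # hypothetical property
  ?region vso:evokesFrame ?frame .        # hypothetical property
  ?frame  rdfs:label      ?frameLabel .
  FILTER (?frameLabel = "Emotion_directed"@en)
}
LIMIT 20
```

An actual query against VSKG would substitute the ontology's published terms; the shape of the query (image, region, evoked frame) follows the frame visual evocation data the abstract describes.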
Keywords
Multimodal sensemaking, Ontology engineering, Knowledge graph construction, Frame-based reasoning, Visual and linguistic frames