Fusion of Multimodal Textual and Visual Descriptors for Analyzing Disaster Response

2023 5th International Conference on Smart Systems and Inventive Technology (ICSSIT), 2023

Abstract
People are increasingly using social media (SM) platforms such as Twitter, Facebook, and Instagram to report serious catastrophes and disaster situations. The multimodal data published on these platforms frequently reveals the scale of the event, the number of victims, and the extent of infrastructure damage. This information can give local government officials and aid organizations a comprehensive overview of the situation, and it can be used to plan relief efforts quickly and efficiently. The goal of the proposed work is to address the difficulty of locating pertinent information among the large number of published SM posts. In particular, embeddings that capture the relatedness of multimodal SM posts in the context of disaster events are created using pretrained deep learning models. CrisisMMD, a multimodal dataset of social media posts captured during natural disasters, provides annotations in addition to textual and visual data and can be used by researchers to build crisis response systems. This study examines CrisisMMD's multimodal data on seven significant natural disasters, including earthquakes, floods, hurricanes, and fires, and develops an efficient model for categorizing social media data into useful and non-useful categories. The proposed model uses a transfer learning approach, extracting image features with DenseNet and textual features with the transformer-based BERT model. This multimodal fusion approach achieves an accuracy of 85.33%, which is shown to be better than state-of-the-art techniques.
Keywords
Pretrained Models, DenseNet, BERT, Multimodal Fusion
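
The abstract does not include an implementation, but the architecture it describes maps naturally onto standard libraries. Below is a minimal sketch of one plausible reading: DenseNet image features and BERT text features concatenated and passed to a small classification head. The specific DenseNet variant (densenet121), BERT checkpoint (bert-base-uncased), layer sizes, dropout rate, and concatenation as the fusion operator are all assumptions; the paper itself only names DenseNet and BERT as the feature extractors.

# Hedged sketch of the multimodal fusion classifier; variant choices,
# fusion by concatenation, and head dimensions are assumptions.
import torch
import torch.nn as nn
from torchvision.models import densenet121
from transformers import BertModel

class MultimodalFusionClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Visual branch: pretrained DenseNet-121 with its classifier removed,
        # leaving the 1024-d pooled feature vector.
        self.cnn = densenet121(weights="IMAGENET1K_V1")
        img_dim = self.cnn.classifier.in_features  # 1024
        self.cnn.classifier = nn.Identity()
        # Textual branch: pretrained BERT encoder (768-d hidden size).
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        txt_dim = self.bert.config.hidden_size  # 768
        # Fusion head over the concatenated image + text descriptors.
        self.head = nn.Sequential(
            nn.Linear(img_dim + txt_dim, 256),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(256, num_classes),  # useful vs. non-useful
        )

    def forward(self, images, input_ids, attention_mask):
        img_feat = self.cnn(images)  # (batch, 1024)
        txt_out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        txt_feat = txt_out.pooler_output  # (batch, 768) pooled [CLS] embedding
        return self.head(torch.cat([img_feat, txt_feat], dim=1))

In this sketch both branches follow the transfer-learning recipe from the abstract: they start from pretrained weights and only the fusion head is trained from scratch (the backbones may additionally be fine-tuned, a detail the abstract leaves open).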