Two heads are better than one: Enhancing medical representations by pre-training over structured and unstructured electronic health records

arXiv (2022)

Abstract
The massive amount of electronic health records (EHRs) has created enormous potential for improving healthcare. Within EHRs, structured (coded) data and unstructured (clinical narrative) data are two important textual modalities; they do not exist in isolation and can complement each other in many real-life clinical scenarios. Most existing studies in medical informatics, however, either focus on a single modality or simply concatenate data from different modalities, ignoring the interactions between them. To address these issues, we propose a Unified Medical Multimodal Pre-trained Language Model, named UMM-PLM, to jointly learn enhanced representations from both structured and unstructured EHRs. In UMM-PLM, a unimodal information extraction module learns representative features from each data modality using two Transformer-based components. A cross-modal module is then introduced to model the interactions between the two modalities. We pre-trained the model on a large EHR dataset containing both structured and unstructured data, and verified its effectiveness on three downstream clinical tasks, i.e., medication recommendation, 30-day readmission prediction, and ICD coding, through extensive experiments. The results demonstrate the advantages of UMM-PLM over benchmark methods and state-of-the-art baselines. Further analyses show that UMM-PLM can effectively integrate multimodal textual information and potentially provide more comprehensive interpretations for clinical decision-making.
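To make the described architecture concrete, the sketch below illustrates one plausible reading of the abstract: two Transformer-based unimodal encoders (one for coded data, one for clinical narratives) followed by a cross-modal module that lets each modality attend to the other. All module names, dimensions, and the pooling strategy are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch of a UMM-PLM-style model (assumed shapes/names, not the paper's code).
import torch
import torch.nn as nn

class UnimodalEncoder(nn.Module):
    """Transformer encoder for one EHR modality (medical codes or clinical text)."""
    def __init__(self, vocab_size, d_model=256, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) -> (batch, seq_len, d_model)
        return self.encoder(self.embed(token_ids))

class CrossModalFusion(nn.Module):
    """Cross-attention in both directions: each modality attends to the other."""
    def __init__(self, d_model=256, nhead=4):
        super().__init__()
        self.code_to_text = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        self.text_to_code = nn.MultiheadAttention(d_model, nhead, batch_first=True)

    def forward(self, code_repr, text_repr):
        code_fused, _ = self.code_to_text(code_repr, text_repr, text_repr)
        text_fused, _ = self.text_to_code(text_repr, code_repr, code_repr)
        return code_fused, text_fused

class UMMPLMSketch(nn.Module):
    """Joint patient representation from structured codes and clinical notes."""
    def __init__(self, code_vocab=5000, text_vocab=30000, d_model=256):
        super().__init__()
        self.code_encoder = UnimodalEncoder(code_vocab, d_model)
        self.text_encoder = UnimodalEncoder(text_vocab, d_model)
        self.fusion = CrossModalFusion(d_model)

    def forward(self, code_ids, text_ids):
        code_repr = self.code_encoder(code_ids)
        text_repr = self.text_encoder(text_ids)
        code_fused, text_fused = self.fusion(code_repr, text_repr)
        # Mean-pool each fused sequence and concatenate into a single vector
        # that downstream heads (medication recommendation, 30-day readmission,
        # ICD coding) could consume.
        return torch.cat([code_fused.mean(dim=1), text_fused.mean(dim=1)], dim=-1)

# Usage (hypothetical batch of tokenized codes and note tokens):
# joint = UMMPLMSketch()(code_batch, text_batch)  # (batch, 2 * d_model)
```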