Combining Multi-task Learning with Transfer Learning for Biomedical Named Entity Recognition

Procedia Computer Science (2020)

Cited by 10 | Views 12
Abstract
Multi-task learning approaches have shown significant improvements in different fields by training related tasks simultaneously. The multi-task model learns features common to the different tasks through shared layers. However, multi-task learning can suffer performance degradation relative to single-task learning on some natural language processing tasks, particularly sequence labelling problems. To address this limitation, we formulate a simple but effective approach that combines multi-task learning with transfer learning. We use a simple model comprising a bidirectional long short-term memory network and a conditional random field. With this simple model, we achieve a better F1-score than our single-task and multi-task models, as well as state-of-the-art multi-task models.
Keywords
Biomedical Named Entity Recognition, Multi-task Learning, Transfer Learning, Deep Learning, Long Short-Term Memory
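
The abstract describes a shared bidirectional LSTM encoder with a CRF output layer, trained jointly across related tasks and then transferred to the target task. Below is a minimal, hypothetical sketch of that kind of architecture in PyTorch, assuming the third-party pytorch-crf package for the CRF layer; it is not the authors' implementation, and the class names, dimensions, and task names are placeholders.

```python
# Minimal sketch (not the paper's code) of a BiLSTM-CRF tagger with a shared
# encoder for multi-task learning and simple weight transfer to a target task.
# Assumes PyTorch and the third-party `pytorch-crf` package (pip install pytorch-crf).
import torch
import torch.nn as nn
from torchcrf import CRF


class SharedBiLSTM(nn.Module):
    """Encoder shared across tasks: word embeddings + bidirectional LSTM."""
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=200):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim // 2,
                              batch_first=True, bidirectional=True)

    def forward(self, token_ids):
        # Returns contextual features of shape (batch, seq_len, hidden_dim).
        return self.bilstm(self.embed(token_ids))[0]


class TaskHead(nn.Module):
    """Task-specific tag projection followed by a CRF layer."""
    def __init__(self, hidden_dim, num_tags):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def loss(self, features, tags, mask):
        # Negative log-likelihood of the gold tag sequence under the CRF.
        return -self.crf(self.proj(features), tags, mask=mask, reduction='mean')

    def decode(self, features, mask):
        # Viterbi decoding of the best tag sequence per sentence.
        return self.crf.decode(self.proj(features), mask=mask)


# Multi-task phase: one shared encoder, one head per (hypothetical) corpus.
shared = SharedBiLSTM(vocab_size=20000)
heads = {name: TaskHead(hidden_dim=200, num_tags=5)
         for name in ('task_a', 'task_b')}

# Transfer phase: reuse the pretrained encoder weights for the target corpus
# and fine-tune together with a fresh task-specific head.
target_encoder = SharedBiLSTM(vocab_size=20000)
target_encoder.load_state_dict(shared.state_dict())  # transfer shared weights
target_head = TaskHead(hidden_dim=200, num_tags=5)
```

In this sketch the shared encoder carries the features learned across auxiliary tasks, and transfer amounts to initializing the target-task encoder from those weights before fine-tuning, which mirrors the combination of multi-task and transfer learning described in the abstract.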