Using Mention Segmentation to Improve Event Detection with Multi-head Attention

2019 International Conference on Asian Language Processing (IALP)

Abstract
Sentence-level event detection (ED) is the task of detecting words that describe specific types of events, comprising the subtasks of trigger word identification and event type classification. Previous work straightforwardly feeds a sentence into neural classification models and analyzes the deep semantics of the words in the sentence one by one. Based on these semantics, the probability of each event class can be predicted for every word, covering the carefully defined ACE event classes and an "N/A" class (i.e., non-trigger word). Such models have achieved remarkable success. However, our findings show that a natural sentence may possess more than one trigger word and thus entail different types of events. In particular, the information closely related to each event lies only in a unique sentence segment and has nothing to do with the other segments. To reduce the negative influence of noise from other segments, we propose to perform semantic learning for event detection only within the scope of a segment rather than the whole sentence. Accordingly, we develop a novel ED method that integrates sentence segmentation into the neural event classification architecture. A Bidirectional Long Short-Term Memory (Bi-LSTM) network with multi-head attention is used as the classification model. Sentence segmentation is cast as a sequence labeling problem, for which BERT is used. We combine the resulting embeddings and use them as the input of the neural classification model. The experimental results show that our method reaches 76.8% and 74.2% F1-scores for trigger identification and event type classification respectively, outperforming the state of the art.
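As a rough illustration of the classification architecture described in the abstract (a Bi-LSTM encoder with multi-head attention producing per-token event-class scores over segments), the following Python/PyTorch sketch shows one plausible reading. The layer sizes, the class count (33 ACE event subtypes plus "N/A"), and the assumption that segment-level embeddings are fed in directly are our own assumptions for illustration, not details confirmed by the paper.

```python
import torch
import torch.nn as nn

class SegmentEventClassifier(nn.Module):
    """Hypothetical sketch: Bi-LSTM encoder, multi-head self-attention,
    and a per-token classifier over event classes plus an "N/A" class.
    Dimensions and the way segment information is injected are assumptions,
    not the authors' released implementation."""

    def __init__(self, embed_dim=768, hidden=256, heads=8, num_classes=34):
        super().__init__()
        # Bi-LSTM over the embeddings of the tokens in one segment
        self.bilstm = nn.LSTM(embed_dim, hidden,
                              bidirectional=True, batch_first=True)
        # Multi-head self-attention over the Bi-LSTM hidden states
        self.attn = nn.MultiheadAttention(2 * hidden, heads, batch_first=True)
        # Per-token scores: ACE event classes + the "N/A" (non-trigger) class
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, segment_embeddings, padding_mask=None):
        # segment_embeddings: (batch, seg_len, embed_dim)
        h, _ = self.bilstm(segment_embeddings)
        a, _ = self.attn(h, h, h, key_padding_mask=padding_mask)
        return self.classifier(a)  # (batch, seg_len, num_classes)
```

Restricting the input to a single segment, rather than the whole sentence, is what the abstract argues reduces noise from unrelated parts of the sentence; the sketch simply assumes such segments have already been produced by the BERT-based sequence labeler.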
Keywords
Event Detection, Mention Segmentation, Multi-head Attention