Contextual-Semantic-Aware Linkable Knowledge Prediction in Stack Overflow via Self-Attention

2021 IEEE 32nd International Symposium on Software Reliability Engineering (ISSRE), 2021

Abstract
In Stack Overflow, a question together with its answers is defined as a knowledge unit. Knowledge units can be linked together for different purposes, and these links are typically subdivided into four classes: Duplicate, Directly linkable, Indirectly linkable, and Isolated. Developers commonly follow such links between knowledge units to search for more targeted information. Prior studies have found that deep learning and SVM techniques can effectively predict the class of a link between knowledge units. However, these approaches focus on short-distance semantic relationships: they fail to capture global information (the semantic relationship between a word and all the words in the same knowledge unit) and ignore joint semantics (the semantic relationship between a word and all the words in a different knowledge unit). To address these issues, we propose a Self-Attention-based contextual-semantic-aware Linkable Knowledge prediction model (SALKU). SALKU leverages self-attention to attend to all the words in a knowledge unit, fully capturing the global information needed by each word, and then utilizes a variant of self-attention to extract the joint semantics between two knowledge units. Experimental results on an existing dataset show that SALKU outperforms the state-of-the-art approaches CNN, Tuning SVM, and Soft-cos SVM on all three evaluation metrics. Additionally, SALKU is faster than the three baseline approaches.
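
The paper's implementation is not reproduced here; the following is a minimal PyTorch sketch of the two attention stages the abstract describes: self-attention within one knowledge unit for global information, and attention across two knowledge units for joint semantics. The class name SALKUSketch, the dimensions, and the pooling/classifier head are illustrative assumptions, not the authors' actual architecture.

```python
# Hypothetical sketch of the mechanism described in the abstract.
# All names and hyperparameters (d_model, n_heads, num_classes=4)
# are assumptions for illustration only.
import torch
import torch.nn as nn

class SALKUSketch(nn.Module):
    def __init__(self, vocab_size, d_model=128, n_heads=4, num_classes=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Self-attention: each word attends to all words in its own
        # knowledge unit, capturing per-word "global information".
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Cross-attention (a variant of self-attention): words in one
        # knowledge unit attend to all words in the other unit,
        # modeling the "joint semantics" between the pair.
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.classifier = nn.Linear(2 * d_model, num_classes)

    def encode(self, tokens):
        x = self.embed(tokens)
        out, _ = self.self_attn(x, x, x)  # global info within one unit
        return out

    def forward(self, unit_a, unit_b):
        a, b = self.encode(unit_a), self.encode(unit_b)
        # Each unit queries the other to extract joint semantics.
        ab, _ = self.cross_attn(a, b, b)
        ba, _ = self.cross_attn(b, a, a)
        # Mean-pool both directions and classify into the four classes:
        # Duplicate, Directly linkable, Indirectly linkable, Isolated.
        pooled = torch.cat([ab.mean(dim=1), ba.mean(dim=1)], dim=-1)
        return self.classifier(pooled)

# Usage on toy data: a pair of knowledge units of 10 tokens each.
model = SALKUSketch(vocab_size=5000)
unit_a = torch.randint(0, 5000, (1, 10))
unit_b = torch.randint(0, 5000, (1, 10))
logits = model(unit_a, unit_b)  # shape: (1, 4)
```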
Keywords
self-attention,link prediction,joint semantic,Stack Overflow