Domain Model Extraction from User-authored Scenarios and Word Embeddings

2022 IEEE 30th International Requirements Engineering Conference Workshops (REW), 2022

Abstract
Domain models are used by requirements analysts to rationalize domain phenomena into discrete entities that drive requirements elicitation and analysis. Domain models include entities, actors or agents, their actions, and desired qualities assigned to states in the domain. Domain models are acquired through a wide range of sources, including interviews with subject matter experts, and by analyzing text-based scenarios, regulations and policies. Requirements automation to assist with elicitation or text analysis can be supported using masked language models (MLM), which have been used to learn contextual information from natural language sentences and transfer this learning to natural language processing (NLP) tasks. The MLM can be used to predict the most likely missing word in a sentence, and thus be used to explore domain concepts encoded in a word embedding. In this paper, we explore an approach of extracting domain knowledge from user-authored scenarios using typed dependency parsing techniques. We also explore the efficacy of a complementary approach of using a BERT-based MLM to identify entities and associated qualities to build a domain model from a single-word seed term.
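The abstract describes two complementary steps: typed dependency parsing over user-authored scenario sentences, and masked-word prediction with a BERT-based MLM seeded by a domain term. The minimal sketch below illustrates both ideas under stated assumptions; it is not the authors' implementation, and the spaCy model, BERT checkpoint, template sentence, and seed term are illustrative choices.

```python
# Hedged sketch (not the paper's implementation): extract candidate
# actor-action-entity triples from a scenario sentence using spaCy's
# typed dependency parse, then probe a BERT MLM for related domain
# terms via masked-word prediction. Model names and the example
# sentence are illustrative assumptions.
import spacy
from transformers import pipeline

nlp = spacy.load("en_core_web_sm")
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

scenario = "The customer submits an order to the warehouse."

# Typed dependency parsing: collect (actor, action, entity) triples
# from nominal subjects and direct objects of the same verb.
doc = nlp(scenario)
for token in doc:
    if token.dep_ == "nsubj" and token.head.pos_ == "VERB":
        verb = token.head
        for obj in (c for c in verb.children if c.dep_ == "dobj"):
            print("triple:", token.lemma_, verb.lemma_, obj.lemma_)

# Masked-word prediction: ask the MLM which entities plausibly
# complete a template built around a seed term ("order" here).
for prediction in fill_mask("The customer submits an order to the [MASK]."):
    print("candidate:", prediction["token_str"], round(prediction["score"], 3))
```

In this reading, the dependency triples supply candidate entities and actions for the domain model, while the MLM's ranked completions suggest additional entities associated with the seed term.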
Keywords
requirements, domain model, word embedding