Detecting Mistakes in a Domain Model

ACM/IEEE International Conference on Model Driven Engineering Languages and Systems (MoDELS), 2022

Abstract
Domain models are a fundamental part of software engineering, and every software engineer should be taught the principles of domain modeling. Instructors play a vital role in teaching students the skills required to understand and design domain models. They check models created by students for mistakes by comparing them with a correct solution. While this was once a manageable task, it is no longer so: the rapid increase in the number of students wanting to become software engineers has led to larger class sizes. As a result, students may have to wait longer for feedback on their solutions, and that feedback may be more superficial due to time constraints. In this paper, we propose a mistake detection system (MDS) that aims to automate the manual checking of student solutions and save both students' and instructors' time. MDS automatically indicates to the student the exact location and type of each mistake. At present, MDS accurately detects 83 of the 97 different types of mistakes identified as possible in a student solution. A prototype tool verifies the feasibility of the proposed approach. When MDS considers synonyms, it achieves a recall of 0.93 and a precision of 0.79 on real student solutions. The proposed MDS takes us one step closer to automating the existing manual approach, freeing up instructor time and helping students learn domain modeling more effectively.
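The reported recall (0.93) and precision (0.79) follow the standard definitions for a detector evaluated against a reference set. A minimal sketch, assuming a hypothetical set-based representation of mistakes (the mistake labels and function name below are illustrative, not taken from the paper):

```python
# Illustrative sketch (not the paper's implementation): precision and recall
# for a mistake detector, computed by comparing the set of mistakes the tool
# reports against the instructor's reference set of actual mistakes.

def precision_recall(reported: set, reference: set):
    """Precision = correct reports / all reports;
    recall = correct reports / all actual mistakes."""
    true_positives = len(reported & reference)
    precision = true_positives / len(reported) if reported else 0.0
    recall = true_positives / len(reference) if reference else 0.0
    return precision, recall

# Hypothetical example: 4 mistakes reported, 3 of them real, 1 real mistake missed.
reported = {"missing_class:Order", "wrong_multiplicity:Customer-Order",
            "extra_attribute:Order.id", "misnamed_class:Invoce"}
reference = {"missing_class:Order", "wrong_multiplicity:Customer-Order",
             "misnamed_class:Invoce", "missing_association:Order-Item"}

p, r = precision_recall(reported, reference)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.75 recall=0.75
```

Under these definitions, the paper's figures mean MDS finds 93% of actual mistakes, and 79% of the mistakes it flags are genuine.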