Parametrized Gradual Semantics Dealing with Varied Degrees of Compensation

IJCAI 2023 (2023)

Abstract
Compensation is a strategy that a semantics may follow when it faces dilemmas between the quality and the quantity of attackers. It allows several weak attacks to compensate one strong attack. It is thus based on a \textit{compensation degree}, which is a pair of two parameters: i) a parameter showing to what extent an attack is weak, and ii) a parameter indicating the number of weak attackers needed to compensate a strong one. Existing principles on compensation do not specify these parameters, so it is unclear whether semantics satisfying them compensate at only one degree or at several degrees, and which ones. This paper proposes a parametrized family of gradual semantics based on a parameter $\alpha$ taking values from the interval $(0,+\infty)$, each of which leads to a different semantics. The family unifies multiple semantics that share some principles but differ in their strategy for solving dilemmas. Indeed, we show that the two semantics taking the extreme values of $\alpha$ favor quantity and quality respectively, while all the remaining ones compensate at any degree. We define three classes of compensation degrees and show that the novel family is able to compensate at any of them, while none of the existing gradual semantics does.
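The abstract does not spell out the family's definition, but the quantity/quality trade-off it describes can be illustrated with a small hypothetical sketch. Below, attacker strengths are aggregated with an $\alpha$-norm, $(\sum_i d_i^\alpha)^{1/\alpha}$, inside an h-categorizer-style fixed point: a small $\alpha$ inflates the aggregate with the number of attackers (quantity), while a large $\alpha$ approaches the strength of the single strongest attacker (quality). The function name, the aggregation rule, and the example graph are all assumptions for illustration, not the paper's actual construction.

```python
# Hypothetical sketch (NOT the paper's definition): an alpha-parametrized
# gradual semantics where attacker degrees are combined with an alpha-norm.
def alpha_semantics(attackers, weights, alpha, iters=200):
    """attackers: dict mapping each argument to the list of its attackers.
    weights: dict mapping each argument to an intrinsic weight in [0, 1].
    Returns an acceptability degree per argument via fixed-point iteration."""
    degree = dict(weights)  # start from the intrinsic weights
    for _ in range(iters):
        new = {}
        for a in attackers:
            ds = [degree[b] for b in attackers[a]]
            # alpha-norm aggregation of attacker strengths:
            # small alpha -> dominated by the count of attackers (quantity),
            # large alpha -> dominated by the strongest attacker (quality).
            agg = sum(d ** alpha for d in ds) ** (1 / alpha) if ds else 0.0
            new[a] = weights[a] / (1 + agg)
        degree = new
    return degree

# Example: argument 'a' attacked by two unattacked arguments 'b' and 'c'.
graph = {'a': ['b', 'c'], 'b': [], 'c': []}
w = {'a': 1.0, 'b': 1.0, 'c': 1.0}
d1 = alpha_semantics(graph, w, alpha=1.0)     # balanced: degree of 'a' is 1/3
d_hi = alpha_semantics(graph, w, alpha=50.0)  # quality-leaning: close to 1/2
d_lo = alpha_semantics(graph, w, alpha=0.1)   # quantity-leaning: close to 0
```

In this toy graph the two extremes behave as the abstract describes: with a large $\alpha$ the two attacks on `a` count roughly as one strong attack, while with a small $\alpha$ their sheer number drives the degree of `a` toward zero.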