Enabling Explainable AI in Cybersecurity Solutions

Imdad Ali Shah, Noor Zaman Jhanjhi, Sayan Kumar Ray

Advances in Explainable AI Applications for Smart Cities, Advances in Computational Intelligence and Robotics (2024)

Abstract
If AI is to earn the public's trust, people must be able to understand and accept its decision-making. A compelling justification outlines the reasoning behind a choice in terms the listener finds comfortable, combining facts at a suitable level of complexity. As AI grows more complex, humans find it increasingly difficult to comprehend and trace an algorithm's actions; such "black box" models are built directly from data, and their internal reasoning remains hidden. An explanation may be required to meet regulatory standards, or it may be essential to give people affected by a decision the opportunity to contest it. With explainable AI, an organisation can improve model performance and troubleshoot issues while helping stakeholders understand how its AI models behave. Displaying both the positive and negative contributions behind the model's behaviour, and using data to generate an explanation, speeds up model evaluation.
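The idea of surfacing positive and negative contributions can be sketched with a toy attribution scheme. This is a minimal illustration, not the chapter's method: it assumes a hypothetical linear alert-scoring model with made-up feature names and weights, and attributes each feature's signed contribution relative to a baseline input.

```python
# Hypothetical example: per-feature attribution for a linear alert-scoring model.
# Each feature's contribution is w_i * (x_i - baseline_i); a positive value
# pushes the score toward "suspicious", a negative value pushes it away.

weights = {"failed_logins": 0.8, "bytes_out": 0.5, "known_ip": -1.2}
baseline = {"failed_logins": 1.0, "bytes_out": 2.0, "known_ip": 1.0}
sample = {"failed_logins": 6.0, "bytes_out": 1.0, "known_ip": 0.0}

def attribute(weights, baseline, sample):
    """Return each feature's signed contribution to the score shift
    from the baseline input to the observed sample."""
    return {f: weights[f] * (sample[f] - baseline[f]) for f in weights}

contribs = attribute(weights, baseline, sample)
# Present features ordered by influence, with the sign made explicit,
# so a stakeholder can see what raised or lowered the alert score.
for feature, c in sorted(contribs.items(), key=lambda kv: -abs(kv[1])):
    direction = "raises" if c > 0 else "lowers"
    print(f"{feature}: {c:+.2f} ({direction} the alert score)")
```

For a linear model this decomposition is exact; for complex black-box models, techniques in the explainable-AI literature approximate the same kind of signed, per-feature breakdown.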