Making Smart Cities Explainable: What XAI Can Learn from the “Ghost Map”

CHI Extended Abstracts (2023)

Abstract
How can we visualize civic algorithms in ways that illuminate both their positive and negative spatial impacts? Civic algorithms guide everyday decisions that cumulatively create city life. Yet, their broader effects remain invisible to their creators and city inhabitants. Recent scholarship on “algorithmic harms” presents an urgent need to make smart cities explainable. We argue that existing Explainable AI (XAI) approaches are limited across four important dimensions: accessibility, cultural reflexivity, situatedness, and visibility into internal representations. Our research explores the potential of conventional maps in addressing these limits and providing what we call “grounded explanations”. As a salient example, we harness the historical case of the “Ghost Map”, designed by John Snow to visualize and resolve the 1854 London Cholera epidemic. We believe that such examples can help the XAI community learn from the cultural history of city representations, as they seek to establish public processes for explaining and evaluating “smart cities”.