Budgeted And Non-Budgeted Causal Bandits

24th International Conference on Artificial Intelligence and Statistics (AISTATS), 2021

Abstract
Learning good interventions in a causal graph can be modeled as a stochastic multi-armed bandit problem with side-information. First, we study this problem when interventions are more expensive than observations and a budget is specified. If there are no backdoor paths from the intervenable nodes to the reward node, we propose an algorithm that minimizes simple regret by optimally trading off observations and interventions based on the cost of interventions. We also propose an algorithm that accounts for the cost of interventions, utilizes causal side-information, and minimizes the expected cumulative regret without exceeding the budget. This algorithm outperforms standard algorithms that do not take side-information into account. Finally, we study the problem of learning the best interventions in general graphs without a budget constraint, and give an algorithm that achieves constant expected cumulative regret in terms of the instance parameters when the distribution of the reward variable's parents under each intervention is known. Our results are experimentally validated and compared to the best known bounds in the current literature.
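The observation/intervention trade-off the abstract describes can be illustrated with a toy simulation. The sketch below is a minimal illustration, not the paper's algorithm: it assumes a single binary parent Z of the reward Y with no backdoor paths, unit-cost observations, interventions of cost `cost_intervention`, and an arbitrary 50/50 budget split between observing and intervening. All names, costs, and probabilities are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy causal graph: one binary parent Z of the binary reward Y, no backdoors.
# These distributions are illustrative assumptions, unknown to the learner.
P_Z = 0.7                       # P(Z = 1) under pure observation
P_Y_GIVEN_Z = {0: 0.3, 1: 0.6}  # P(Y = 1 | Z = z)

def observe():
    """One observational sample (cost 1): nature sets Z, then Y."""
    z = int(rng.random() < P_Z)
    y = int(rng.random() < P_Y_GIVEN_Z[z])
    return z, y

def intervene(z):
    """One interventional sample do(Z=z) (cost cost_intervention)."""
    return int(rng.random() < P_Y_GIVEN_Z[z])

def budgeted_simple_regret(budget=200, cost_intervention=5):
    # Spend an (arbitrary, illustrative) half of the budget on cheap observations.
    n_obs = budget // 2
    counts = np.zeros((2, 2))   # counts[z, y] from observational samples
    for _ in range(n_obs):
        z, y = observe()
        counts[z, y] += 1
    # Spend the remainder on expensive interventions, split evenly
    # over the two candidate arms do(Z=0) and do(Z=1).
    n_int = (budget - n_obs) // cost_intervention
    rew_sum = np.zeros(2)
    pulls = np.zeros(2)
    for t in range(n_int):
        z = t % 2
        rew_sum[z] += intervene(z)
        pulls[z] += 1
    # With no backdoor paths, P(Y=1 | Z=z) = P(Y=1 | do(Z=z)), so
    # observational and interventional samples estimate the same quantity
    # and can be pooled.
    est = (counts[:, 1] + rew_sum) / (counts.sum(axis=1) + pulls)
    return int(np.argmax(est))  # recommend the arm do(Z = returned value)

if __name__ == "__main__":
    print("recommended arm: do(Z =", budgeted_simple_regret(), ")")
```

The key property the sketch relies on is the no-backdoor assumption stated in the abstract: it lets cheap observational samples stand in for costly interventional ones, which is why splitting the budget matters at all. The paper's contribution is choosing that split optimally as a function of the intervention cost, rather than fixing it at 50/50 as done here.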
Keywords
non-budgeted