Basic Information
Career Trajectory
Biography
I studied computer science and received my PhD from the Technical University of Berlin, after which I worked as a postdoc at the University of Oxford. This September I started as an assistant professor in the Algorithmics group within the Department of Software Technology at the Delft University of Technology.
My research interest lies at the intersection of inductive and deductive reasoning in Artificial Intelligence (AI). Traditionally, deductive approaches like Operations Research (OR) have dominated AI, but over the last two decades, inductive approaches like Machine Learning (ML) have captured both the name AI and the majority of public perception. While these new techniques address one of the major underlying flaws of deductive reasoning, namely the mismatch between model and reality, they come with their own blind spots. Nowhere is this more visible than in Reinforcement Learning (RL), which inductively learns to interact with an unknown and possibly non-deterministic environment. Over the last 10 years, this paradigm has set world records in learning to play a variety of computer and board games, and is generally considered one of the most promising paths to general AI. At the same time, however, RL violates some of the most basic assumptions that make ML so successful in practical applications. These discrepancies lie at the heart of inductive reasoning: generalization from examples. In RL, these examples are interactions with the environment, which by the very nature of interactivity change with the agent's behavior and are limited to the exact circumstances encountered during training. Not only do methods developed for ML cope poorly with these challenges; learned solutions also run counter to the core competency of inductive reasoning: adaptation to reality.
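The point that RL training examples change with the agent's behavior can be sketched in a toy example (the chain environment and the two policies below are hypothetical and purely illustrative): unlike a fixed i.i.d. training set in supervised ML, the "dataset" an RL agent sees is the set of states its own policy happens to visit.

```python
import random

def step(state, action):
    """A trivial 5-state chain environment: action +1/-1 moves the state."""
    return max(0, min(4, state + action))

def collect(policy, n=1000, seed=0):
    """Roll out a policy and record which states it actually visits."""
    rng = random.Random(seed)
    state, visits = 2, [0] * 5
    for _ in range(n):
        state = step(state, policy(state, rng))
        visits[state] += 1
    return visits

# Two behaviors in the same environment.
left_biased  = lambda s, rng: -1 if rng.random() < 0.8 else 1
right_biased = lambda s, rng:  1 if rng.random() < 0.8 else -1

# The same environment yields very different state distributions under
# different policies: the training data shifts with the agent's behavior.
print(collect(left_biased))
print(collect(right_biased))
```

A model fit to transitions gathered by one policy has seen almost nothing of the states the other policy frequents, which is the distribution-shift problem the paragraph above describes.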
Research Interests
Papers (42)
COMPUTER VISION AND IMAGE UNDERSTANDING (2024): 103876
CoRR (2023)
arXiv (Cornell University) (2023)
CoRR (2023)
2023 INTERNATIONAL SYMPOSIUM ON MULTI-ROBOT AND MULTI-AGENT SYSTEMS, MRS (2023): 149-155
AUTONOMOUS ROBOTS, no. 8 (2023): 1275-1297