Offline Reinforcement Learning with Imbalanced Datasets
arXiv (2023)
Abstract
The prevalent use of benchmarks in current offline reinforcement learning
(RL) research has led to neglect of imbalanced real-world dataset
distributions in the development of models. Real-world offline RL datasets
are often imbalanced over the state space due to the challenge of exploration
or safety considerations. In this paper, we specify properties of imbalanced
datasets in offline RL, where the state coverage follows a power-law
distribution characterized by skewed policies. Theoretically and empirically,
we show that typical offline RL methods based on distributional constraints,
such as conservative Q-learning (CQL), are ineffective in extracting policies
from imbalanced datasets. Inspired by natural intelligence, we propose a
novel offline RL method that augments CQL with a retrieval process to recall
past related experiences, effectively alleviating the challenges posed by
imbalanced datasets. We evaluate our method on several tasks with varying
levels of dataset imbalance, using a variant of D4RL. Empirical results
demonstrate the superiority of our method over other baselines.
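The retrieval process described above recalls past experiences related to the current state. A minimal sketch of one plausible realization, nearest-neighbor lookup over dataset states (all names and the L2-distance choice are illustrative assumptions, not details from the paper):

```python
import numpy as np

def retrieve_neighbors(query_states, dataset_states, k=4):
    """Recall the k transitions whose states are closest (L2 distance)
    to each query state; the recalled transitions could then augment a
    CQL training batch. Hypothetical helper, not the paper's code."""
    # Pairwise squared distances, shape (num_queries, num_dataset)
    d2 = ((query_states[:, None, :] - dataset_states[None, :, :]) ** 2).sum(-1)
    # Indices of the k nearest dataset states for each query
    return np.argsort(d2, axis=1)[:, :k]

# Example: 3 query states, 10 dataset states, 2-D state space
rng = np.random.default_rng(0)
queries = rng.normal(size=(3, 2))
dataset = rng.normal(size=(10, 2))
idx = retrieve_neighbors(queries, dataset, k=4)
print(idx.shape)  # (3, 4)
```

In practice a retrieval-augmented agent would index a large dataset with an approximate nearest-neighbor structure rather than the brute-force distance matrix shown here.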