Deep reinforcement learning applied to a sparse-reward trading environment with intraday data

Expert Systems with Applications (2024)

Abstract
Deep reinforcement learning (DRL) has made remarkable strides in empowering computational models to tackle intricate decision-making tasks. In quantitative trading, DRL trading agents have emerged as a means to optimize decisions across diverse market scenarios, yielding profitable trading strategies by assimilating knowledge from past experiences. This study introduces a trading system built on the Deep Q-Network (DQN) algorithm, called Extended Trading DQN (ETDQN). ETDQN stands out for its ability to adapt its learning process to trade effectively across varying market conditions, with feedback received exclusively upon trade liquidation. This contrasts with models that inundate agents with continuous feedback signals. ETDQN leverages distributional learning and several other independent extensions to enhance its DRL capabilities, streamlining its decision-making process. It accomplishes this by prioritizing experiences that cover diverse sub-objectives, maximizing accumulated profit while obviating the need for intricate reward fine-tuning. Trained on three distinct financial time series signals, ETDQN demonstrates its proficiency in identifying trading opportunities, particularly during periods of heightened price volatility. Notably, the model manages annualized return volatility more assertively than the conventional DQN model, outperforming it by factors of 1.46 and 7.13 in average daily cumulative returns on the historical data of Western Digital Corporation and the Cosmos cryptocurrency, respectively.
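The abstract's central mechanism is a sparse-reward setting in which the agent receives feedback only when a position is liquidated, rather than a shaped reward at every step. Below is a minimal sketch of such an environment, assuming a gym-style reset/step interface and long-only positions; the class name `SparseRewardTradingEnv`, the action encoding, and the price-window observation are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

class SparseRewardTradingEnv:
    """Minimal sparse-reward trading environment (illustrative sketch).

    Actions: 0 = hold, 1 = open a long position, 2 = liquidate.
    Reward is zero at every step except upon liquidation,
    when the realized return of the closed trade is paid out.
    """

    def __init__(self, prices, window=10):
        self.prices = np.asarray(prices, dtype=float)
        self.window = window
        self.reset()

    def reset(self):
        self.t = self.window       # current time index
        self.entry_price = None    # price at which the position was opened
        return self._observation()

    def _observation(self):
        # Trailing price window, normalized by its most recent value.
        w = self.prices[self.t - self.window:self.t]
        return w / w[-1] - 1.0

    def step(self, action):
        reward = 0.0
        if action == 1 and self.entry_price is None:
            self.entry_price = self.prices[self.t]   # open long
        elif action == 2 and self.entry_price is not None:
            # Sparse feedback: realized return, paid only on liquidation.
            reward = self.prices[self.t] / self.entry_price - 1.0
            self.entry_price = None
        self.t += 1
        done = self.t >= len(self.prices)
        obs = self._observation() if not done else None
        return obs, reward, done
```

Because intermediate steps return zero reward, a vanilla DQN trained here receives a weak learning signal; this is the setting the paper's distributional and prioritized-replay extensions are meant to address.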
Keywords
Quantitative trading, Deep reinforcement learning, Deep Q-learning, Sparse-reward trading environment, Financial signal processing