A Reinforcement Learning based Collection Approach

SIU (2023)

Abstract
Reaching out to customers for debt collection is an important process for banks. The most commonly used channels are text messages and phone calls. While phone calls are more effective, there is a daily capacity limit on how many can be made. Currently, the customers to be called are determined by a rule-based system whose rules depend on the customer's risk segment and the number of days past due. It is anticipated that making customer-specific decisions is more efficient than relying on general segments. In this study, an offline reinforcement learning-based approach that uses existing data to make call decisions for individual customers has been developed. In this formulation, customer information, customer behaviors, and previous collection actions define the state space, while the choice of whether or not to call the customer defines the action space. Furthermore, a reward function based on call costs and days past due is designed. This formulation is then used to learn a Q-value model with the random ensemble method. A call-decision approach is developed that combines the output of this model with the daily call capacity and additional rules. In live A/B tests, the developed method was observed to yield better results than the current rule-based method.
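The call-decision step described above can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: it assumes an already-trained ensemble of Q-heads (in the spirit of random-ensemble Q-learning), averages their Q-values for the two actions, and builds the daily call list from the customers with the highest positive call advantage, capped by the daily capacity. All names, shapes, and the random stand-in weights are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: customers, ensemble heads, state features.
n_customers, n_heads, state_dim = 100, 5, 8
states = rng.normal(size=(n_customers, state_dim))

# Stand-in for trained ensemble heads: each head linearly maps a state
# to Q-values for the two actions (0 = do not call, 1 = call).
head_weights = rng.normal(size=(n_heads, state_dim, 2))

# q[h, i, a]: head h's Q-value for customer i taking action a.
q = np.einsum("hda,id->hia", head_weights, states)

# Ensemble aggregation: average the heads' Q-values per customer/action.
q_mean = q.mean(axis=0)                  # shape (n_customers, 2)
advantage = q_mean[:, 1] - q_mean[:, 0]  # value of calling vs. not calling

daily_capacity = 20

# Call the customers with the highest positive call advantage,
# up to the daily capacity limit; the rest receive no call today.
ranked = np.argsort(-advantage)
call_list = [i for i in ranked[:daily_capacity] if advantage[i] > 0]
```

In a real deployment the averaged Q-values would come from the learned offline model, and the abstract notes that additional business rules are applied on top of this ranking before the final call list is produced.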
Keywords
offline reinforcement learning, banking, decision aids