Reinforcement Learning-based Kalman Filter for Adaptive Brain Control in Brain-Machine Interface

2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), 2021

Abstract
Brain-Machine Interfaces (BMIs) convert the neural signals of paralyzed people into commands for a neuro-prosthesis. During the subject's brain control (BC) process, neural patterns may change over time, making it crucial and challenging for the decoder to co-adapt with the dynamic neural patterns. The Kalman Filter (KF) is commonly used for continuous control in BC. However, if the neural patterns become quite different from the training data, the KF requires a re-calibration session to maintain its performance. Reinforcement Learning (RL), on the other hand, has the advantage of adapting its parameters from a reward signal, but its discrete action selection makes it less suitable for generating continuous motor states in BC. In this paper, we propose a reinforcement learning-based Kalman filter. We retain the state transition model of the KF for continuous motor state prediction, while RL generates an action from the corresponding neural pattern, which is then used to correct the state prediction. The RL parameters are continuously adjusted by the reward signal during BC. In this way, continuous motor state prediction can be achieved even when the neural patterns have drifted over time. The proposed algorithm is tested on a simulated rat lever-pressing experiment in which the rat's neural patterns drift across days. Compared with a pure KF without re-calibration, our algorithm follows the neural pattern drift in an online fashion and maintains good performance.
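The abstract does not include pseudocode, so the following is only a minimal sketch of the idea it describes: a KF-style state-transition prediction combined with an RL-generated correction whose policy is updated from the task reward. The class name `RLKalmanSketch`, the softmax policy over a discretized action set, the additive correction on one state dimension, and the REINFORCE-style update rule are all illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

class RLKalmanSketch:
    """Hypothetical sketch: KF state-transition prediction plus an RL-generated correction."""

    def __init__(self, n_states, n_neurons, n_actions, lr=0.01):
        self.A = np.eye(n_states)                    # state-transition model (assumed fitted beforehand)
        self.W = np.zeros((n_actions, n_neurons))    # RL policy weights: neural features -> action scores
        self.actions = np.linspace(-1.0, 1.0, n_actions)  # candidate corrections (assumption)
        self.lr = lr
        self.x = np.zeros(n_states)                  # current continuous motor-state estimate

    def step(self, neural_features, blend=0.5):
        # 1) Predict the next motor state through the state-transition model (KF part).
        x_pred = self.A @ self.x
        # 2) Let the RL policy pick a discrete action from the current neural pattern (softmax policy).
        scores = self.W @ neural_features
        probs = np.exp(scores - scores.max())
        probs /= probs.sum()
        a = np.random.choice(len(probs), p=probs)
        # 3) Use the chosen action as a correction to the predicted state (here: first state dimension).
        self.x = x_pred.copy()
        self.x[0] = x_pred[0] + blend * self.actions[a]
        return self.x, a, probs

    def update_policy(self, neural_features, a, probs, reward):
        # REINFORCE-style update driven by the task reward; stands in for the paper's RL rule.
        grad = -probs[:, None] * neural_features[None, :]
        grad[a] += neural_features
        self.W += self.lr * reward * grad
```

A usage loop would call `step` on each neural observation, evaluate the task reward (e.g., whether the decoded lever press succeeded), and then call `update_policy`, so the policy can track neural pattern drift online while the state-transition model keeps the output continuous.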
Keywords
Animals; Brain; Brain-Computer Interfaces; Learning; Rats; Reinforcement, Psychology; Reward