Physics-Based Motion Control Through DRL's Reward Functions

SVR (2021)

Abstract
Producing natural, physically based motions of articulated characters is a challenging problem. The animator needs to tune high-dimensional parameters of a motion controller to obtain good visual quality, while still dealing with the basic functioning of the controller; those parameters generally have an unintuitive relationship with the resulting motion. Deep Reinforcement Learning (DRL) has recently been explored to solve this problem: with DRL, it is possible to set up a neural network with observation and action parameters and to control the animation through a reward function. Nevertheless, choosing good parameters and a good reward function is not a simple task. In this paper, we investigate how the animator can control the motion by manipulating simple reward functions. We propose a control structure with DRL in which the reward function can be adapted to the desired motion and to the morphology of the controlled character. Moreover, we introduce speed into the training process so that, after training the neural network, the character can adapt its motion to different speeds in real time. Through a series of tests, we assess animation and speed controls of characters with different morphologies.
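
To make the idea of a speed-aware reward concrete, below is a minimal sketch of the kind of reward shaping the abstract describes. The function name, the term weights, the error scales, and the exponentiated-squared-error form are all assumptions in the style of common DRL motion-imitation controllers, not the paper's actual formulation.

```python
import numpy as np

def motion_reward(sim_pose, ref_pose, com_speed, target_speed,
                  w_pose=0.7, w_speed=0.3,
                  pose_scale=2.0, speed_scale=0.5):
    """Combine a pose-tracking term with a speed-matching term.

    Each term is an exponentiated negative squared error, so it lies
    in (0, 1] and rises smoothly as the simulated character matches
    the reference pose and the commanded speed.
    """
    # Pose term: squared joint-angle deviation from the reference clip.
    pose_err = np.sum((np.asarray(sim_pose) - np.asarray(ref_pose)) ** 2)
    r_pose = np.exp(-pose_scale * pose_err)

    # Speed term: squared deviation of the character's forward
    # center-of-mass speed from the target speed given to the policy.
    speed_err = (com_speed - target_speed) ** 2
    r_speed = np.exp(-speed_scale * speed_err)

    return w_pose * r_pose + w_speed * r_speed

# Example: a pose close to the reference at roughly the target speed
# yields a reward near 1; large errors drive it toward 0.
r = motion_reward(sim_pose=[0.1, -0.2, 0.05],
                  ref_pose=[0.12, -0.18, 0.0],
                  com_speed=1.4, target_speed=1.5)
print(round(r, 3))
```

Exposing `target_speed` as an input to both the reward and the policy's observations is what would let a trained network generalize to different commanded speeds at runtime, as the abstract claims.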