Model-Free Reinforcement Learning and Bayesian Classification in System-Level Power Management

IEEE Trans. Computers (2016)

Abstract
To cope with uncertainties and variations that emanate from hardware and/or application characteristics, dynamic power management (DPM) frameworks must be able to learn about the system inputs and environmental variations, and adjust the power management policy on the fly. In this paper, an online adaptive DPM technique is presented based on the model-free reinforcement learning (RL) method, which requires no prior knowledge of the state transition probability function and the reward function. In particular, this paper employs the temporal difference (TD) learning method for semi-Markov decision process (SMDP) as the model-free RL technique since the TD method can accelerate convergence and alleviate the reliance on the Markovian property of the power-managed system. In addition, a novel workload predictor based on an online Bayesian classifier is presented to provide effective estimation of the workload characteristics for the RL algorithm. Several improvements are proposed to manage the size of the action space for the learning algorithm, enhance its convergence speed, and dynamically change the action set associated with each system state. In the proposed DPM framework, power-latency tradeoffs of the power-managed system can be precisely controlled based on a user-defined parameter. Extensive experiments on hard disk drives and wireless network cards show that the maximum power saving without sacrificing any latency is 18.6 percent compared to a reference expert-based approach. Alternatively, the maximum latency saving without any power dissipation increase is 73.0 percent compared to the existing best-of-breed DPM techniques.
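As an illustration of the two building blocks the abstract names, the following Python sketch pairs a model-free TD (SMDP Q-learning style) update, which discounts by the sojourn time and needs no transition-probability or reward model, with an online naive-Bayes workload predictor. All class names, the state/feature encodings, and the hyperparameters are illustrative assumptions, not the paper's exact formulation.

# Hypothetical sketch only; names and parameters are assumptions, not the paper's implementation.
import math
import random
from collections import defaultdict

class SMDPQLearner:
    def __init__(self, actions, alpha=0.1, beta=0.5, epsilon=0.1):
        self.actions = actions        # e.g. power modes: ["sleep", "idle", "active"]
        self.alpha = alpha            # learning rate
        self.beta = beta              # continuous-time discount rate for the SMDP
        self.epsilon = epsilon        # exploration probability
        self.q = defaultdict(float)   # Q(state, action) interpreted as expected cost-to-go

    def choose(self, state):
        # epsilon-greedy action selection, minimizing expected cost
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return min(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, cost, sojourn, next_state):
        # Model-free SMDP TD update: the future cost is discounted by
        # exp(-beta * sojourn), so no Markovian per-step model is assumed.
        best_next = min(self.q[(next_state, a)] for a in self.actions)
        target = cost + math.exp(-self.beta * sojourn) * best_next
        key = (state, action)
        self.q[key] += self.alpha * (target - self.q[key])

class OnlineBayesPredictor:
    # Online naive-Bayes classifier over discretized request/idle-period features,
    # labeling the upcoming workload as, e.g., "short-idle" vs. "long-idle".
    def __init__(self, classes):
        self.classes = classes
        self.class_counts = defaultdict(lambda: 1)    # Laplace-smoothed counts
        self.feature_counts = defaultdict(lambda: 1)

    def observe(self, features, label):
        # incremental (online) update from the most recent observation
        self.class_counts[label] += 1
        for f in features:
            self.feature_counts[(label, f)] += 1

    def predict(self, features):
        total = sum(self.class_counts[c] for c in self.classes)
        def log_posterior(c):
            lp = math.log(self.class_counts[c] / total)
            for f in features:
                lp += math.log(self.feature_counts[(c, f)] / self.class_counts[c])
            return lp
        return max(self.classes, key=log_posterior)

In such a setup, the predicted workload class would be folded into the RL state, and the per-decision cost could be formed as lambda * energy + (1 - lambda) * latency, which is one plausible way to realize the user-controlled power-latency tradeoff the abstract describes.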
Keywords
Prediction algorithms, Markov processes, Heuristic algorithms, Power demand, Bayes methods, Uncertainty, Classification algorithms