On the Complexity of Speed Scaling

Lecture Notes in Computer Science (2015)

Abstract
The most commonly studied energy management technique is speed scaling, which involves operating the processor in a slow, energy-efficient mode at non-critical times, and in a fast, energy-inefficient mode at critical times. The natural resulting optimization problems involve scheduling jobs on a speed-scalable processor and have conflicting dual objectives of minimizing energy usage and minimizing waiting times. One can formulate many different optimization problems depending on how one models the processor (e.g., whether allowed speeds are discrete or continuous, and the nature of the relationship between speed and power), the performance objective (e.g., whether jobs are of equal or unequal importance, and whether one is interested in minimizing waiting times of jobs or of work), and how one handles the dual objective (e.g., whether the two objectives are combined into a single objective, or whether one objective is transformed into a constraint). There are a handful of papers in the algorithmic literature that each give an efficient algorithm for a particular formulation. In contrast, the goal of this paper is to look at a reasonably full landscape of all the possible formulations. We give several general reductions which, in some sense, reduce the number of problems that are distinct in a complexity-theoretic sense. We show that some of the problems, for which there are efficient algorithms for a fixed-speed processor, turn out to be NP-hard. We give efficient algorithms for some of the other problems. Finally, we identify those problems that appear not to be resolvable by standard techniques or by the techniques that we develop in this paper for the other problems.
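To make the dual objective concrete, the following is a minimal sketch (not taken from the paper) of evaluating the combined "energy plus total flow time" objective for one common formulation: a single speed-scalable processor where running at speed s consumes power s^alpha. The job representation, schedule format, and the choice alpha = 3 are illustrative assumptions, not the paper's model.

```python
from dataclasses import dataclass

ALPHA = 3.0  # assumed power exponent; typical modeling choices lie around 2-3


@dataclass
class Job:
    release: float  # release time of the job
    work: float     # amount of work (processing volume)


def energy_plus_flow(jobs, speeds):
    """Process jobs in the given order, each at its assigned constant speed,
    and return energy used plus total flow time (completion minus release)."""
    t = 0.0
    energy = 0.0
    total_flow = 0.0
    for job, s in zip(jobs, speeds):
        start = max(t, job.release)        # cannot start before the release time
        duration = job.work / s            # time to finish the work at speed s
        energy += (s ** ALPHA) * duration  # power * time
        t = start + duration
        total_flow += t - job.release      # this job's waiting (flow) time
    return energy + total_flow


# Two unit-work jobs: running slower saves energy but increases flow time.
jobs = [Job(release=0.0, work=1.0), Job(release=0.0, work=1.0)]
print(energy_plus_flow(jobs, speeds=[1.0, 1.0]))  # 2 energy + 3 flow = 5.0
print(energy_plus_flow(jobs, speeds=[0.5, 0.5]))  # 0.5 energy + 6 flow = 6.5
```

The two calls illustrate the tension the abstract describes: lowering the speed reduces energy (since power grows superlinearly in speed) while lengthening waiting times, which is why the various formulations differ in how they trade one against the other.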