Reducing Power Consumption and Latency in Mobile Devices Using an Event Stream Model.

ACM Trans. Embedded Comput. Syst. (2016)

Abstract
Most consumer-based mobile devices use asynchronous events to awaken apps. Currently, event handling is implemented either in an application or in an application framework such as Java’s virtual machine (VM) or Microsoft’s .NET, and it uses a “polling loop” that periodically queries an event queue to determine whether an event has occurred. These loops must awaken the process, check for an event, and then put the process back to sleep many times per second. This constant arousal prevents the CPU from entering a deep sleep state, which increases power consumption. Additionally, the process cannot check for events while it sleeps, and this delay in handling events increases latency, the time that elapses between when an event occurs and when the application responds to it. We call this model of event handling a “pull” model because it must query hardware devices or software queues in order to “pull” events from them. Recent advances in input devices support direct, informative interrupts to the kernel when an event occurs. This allows us to develop a much more efficient event-handling model called the “Event Stream Model” (ESM). The ESM is a push model that allows a process to sleep as long as no event occurs but immediately awakens the process when one does. It eliminates the polling loop, thereby removing the latency-inducing sleep between polls and reducing unnecessary power consumption. To work properly, the ESM must be implemented in the kernel rather than in the application. In this article, we describe how we implemented the ESM in the Android operating system (OS). Our results show that with the Event Stream Model, power consumption is reduced by up to 23.8% in certain circumstances, and latency is reduced by an average of 13.6ms.
Keywords
Graphical user interfaces, kernel event stream model, mobile devices, power conservation, push model, reduced latency