Dynamic Power Management Policies for Embedded Systems

1         Introduction

Power consumption is a key issue in the design of embedded systems today, as it directly affects battery life. In recent years, battery technology has not kept pace with the advances in the hardware that drives these systems. Dynamic power management (DPM) [1], which refers to the selective shutdown of system components that are idle or underutilized, has proven to be a particularly effective technique for reducing power dissipation in such systems. However, employing such techniques also incurs a performance loss because of the overhead associated with shutdowns and wakeups. An effective DPM policy must therefore seek to maximize power savings while keeping performance degradation within acceptable limits. The design of such policies has been an active research topic, and several policies have been proposed in the past. In the following sections I briefly discuss and analyze a few of these policies.

2         Dynamic Power Management

2.1        Overview

Power management is a prediction problem: it seeks to forecast whether an idle period will be long enough to compensate for the overhead of the power state changes. The minimum length of an idle period required to save power is called the break-even time (Tbe) [11]. Tbe depends only on the characteristics of the individual device and is independent of the request pattern.

 

Consider a device whose total state transition delay is To (including both shutdown and wake-up delays) and whose transition energy is Eo. Suppose its power consumption in the working and sleeping states is Pw and Ps, respectively. On the left of Figure 1 the device is kept in the working state; on the right it is shut down. The break-even time is the idle period length that makes the energy consumption in the two cases equal.

Namely, Pw × Tbe = Eo + Ps × (Tbe − To), or Tbe = (Eo − Ps × To)/(Pw − Ps).

The break-even time has to be larger than the transition delay; therefore,

Tbe = max[(Eo − Ps × To)/(Pw − Ps), To].

 

Figure 1: Power over time when the device stays in the working state (left) versus when it is shut down and woken up again (right) during an idle period.

The primary goal of any DPM policy is to make the device sleep only when the idle period is at least Tbe; otherwise the device may end up consuming more energy than if it had simply stayed on.
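As a rough illustration, the Python sketch below applies the break-even formula above to hypothetical device parameters; the numbers are made up for illustration and do not correspond to any particular device.

    def break_even_time(Pw, Ps, Eo, To):
        # Break-even time per [11]: the shortest idle period for which shutting
        # down saves energy; it can never be shorter than the transition delay To.
        return max((Eo - Ps * To) / (Pw - Ps), To)

    # Hypothetical disk-like parameters (illustrative only):
    # 2.0 W working, 0.1 W sleeping, 6.0 J per transition, 1.5 s transition delay
    print(break_even_time(Pw=2.0, Ps=0.1, Eo=6.0, To=1.5))  # -> about 3.08 s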

2.2        Policies

2.2.1        Overview

Existing DPM policies can be broadly classified into three categories:

  1. Timeout Policies
  2. Predictive Policies
  3. Stochastic Policies

2.2.2        Timeout Policies

Timeout policies use a timeout value τ. They put the device to sleep once it has been idle for longer than τ. The underlying assumption is that if the device remains idle for τ, it will stay idle for at least another Tbe. The drawback of these policies is that they waste energy while waiting for the timeout to expire.

The timeout period may be fixed or adaptive. A simple fixed timeout policy [2] sets τ equal to the Tbe of the device it is managing; this policy is guaranteed to consume no more than twice the energy of an ideal offline policy. Adaptive timeout policies dynamically modify τ based on observed behavior. In [3], Douglis et al. adjust τ based on the ratio of performance delay to sleep time for the previous idle period: if the ratio is high, τ is increased; otherwise it is decreased. Floor and ceiling values for τ prevent the policy from becoming too aggressive or too conservative, respectively. The increase and decrease can be either arithmetic or geometric. A sketch of both variants follows below.
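In the Python sketch below, the initial τ = Tbe corresponds to the fixed policy of [2], and the update rule mimics the adaptive adjustment of [3]. The arithmetic step size, ratio threshold, and floor/ceiling values are illustrative assumptions, not the values used in [3].

    class AdaptiveTimeout:
        # Adaptive timeout in the spirit of [3]; all constants are illustrative.
        def __init__(self, t_be, floor, ceiling, step):
            self.tau = t_be          # starting at Tbe matches the fixed policy of [2]
            self.floor = floor       # lower bound on tau
            self.ceiling = ceiling   # upper bound on tau
            self.step = step         # arithmetic adjustment ([3] also allows geometric)

        def update(self, perf_delay, sleep_time, ratio_threshold=0.5):
            # Call once per completed idle period: grow tau when the previous
            # shutdown caused a high delay-to-sleep ratio, shrink it otherwise.
            ratio = perf_delay / sleep_time if sleep_time > 0 else float("inf")
            if ratio > ratio_threshold:
                self.tau = min(self.tau + self.step, self.ceiling)
            else:
                self.tau = max(self.tau - self.step, self.floor)

        def should_shut_down(self, idle_time):
            return idle_time >= self.tau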

2.2.3        Predictive Policies

Predictive policies estimate the length of the upcoming idle period. The decision to sleep can then be made immediately, depending on whether the prediction is greater or less than Tbe.

In [4], Hwang et al. propose an exponential average scheme. The policy predicts the upcoming idle period length by taking an exponential average of the predicted and actual lengths of the previous idle period. The recursive prediction formula is:

I(n+1) = a × i(n) + (1 − a) × I(n),

where I(n+1) is the new predicted idle period, i(n) is the actual length of the latest idle period, I(n) is the previous prediction, and a is a constant attenuation factor between 0 and 1. In this formula, I(n) provides the inertia and i(n) is the force that pushes the prediction toward the actual idle period. The scheme also proposes predictive wakeup to avoid performance delays.
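A minimal Python sketch of this predictor is shown below; the choice a = 0.5 and the helper names are illustrative, not taken from [4].

    def predict_next_idle(prev_prediction, prev_actual_idle, a=0.5):
        # Exponential average of [4]: I(n+1) = a * i(n) + (1 - a) * I(n).
        return a * prev_actual_idle + (1 - a) * prev_prediction

    def should_shut_down(prediction, t_be):
        # Shut down immediately at the start of an idle period if the
        # predicted idle length exceeds the break-even time.
        return prediction > t_be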

Chung et al. [5] propose a predictive policy that uses an adaptive learning tree to make predictions. The adaptive learning tree encodes sequences of idle periods into tree nodes. Leaf nodes store the prediction confidence level (PCL) associated with the respective sequence, and the PCL is updated using a finite state machine: for instance, if an idle period is predicted to be longer than Tbe and indeed turns out to be longer, the PCL is increased; otherwise it is decreased, following the specified FSM. This scheme is capable of managing multiple power states.
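The actual adaptive learning tree in [5] is considerably more involved, but the confidence update can be pictured as a small saturating counter, as in the hypothetical sketch below; the bounds and threshold are assumptions.

    class LeafConfidence:
        # Sketch of a prediction confidence level (PCL) update in the spirit of [5].
        # The real FSM and tree structure are more elaborate; values here are assumed.
        def __init__(self, pcl=0, pcl_max=3, threshold=2):
            self.pcl = pcl
            self.pcl_max = pcl_max
            self.threshold = threshold

        def predicts_long_idle(self):
            # Predict "longer than Tbe" only when confidence is high enough.
            return self.pcl >= self.threshold

        def update(self, prediction_was_correct):
            # Raise confidence on a correct prediction, lower it otherwise.
            if prediction_was_correct:
                self.pcl = min(self.pcl + 1, self.pcl_max)
            else:
                self.pcl = max(self.pcl - 1, 0)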

In [6], Lu proposes a policy that is a mix of adaptive timeout and predictive schemes. The policy clusters sequences of user requests into groups called sessions. It predicts the length of the current session from the predicted and actual lengths of previous sessions (similar to [4]). If no request arrives, instead of immediately issuing a shutdown command, the predicted session length is decreased by an adjustment factor; conversely, if a request arrives, it is increased by the same factor. A shutdown command is issued once the device has been idle long enough relative to the predicted session length.
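A condensed Python sketch of this session-based idea is given below; the attenuation and adjustment factors are illustrative assumptions rather than the values used in [6].

    def predict_session_length(prev_prediction, prev_actual, a=0.5):
        # Exponential average over session lengths, analogous to [4].
        return a * prev_actual + (1 - a) * prev_prediction

    def adjust_prediction(prediction, request_arrived, adjustment=0.1):
        # Rather than shutting down immediately, shrink the predicted session
        # length while the device stays idle and grow it when a request arrives.
        return prediction + adjustment if request_arrived else prediction - adjustment

    def should_shut_down(idle_time, prediction):
        # Issue the shutdown command once the device has been idle long enough
        # relative to the (adjusted) predicted session length.
        return idle_time >= prediction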

2.2.4        Stochastic Policies

Stochastic policies model request arrivals and device power state changes as stochastic processes. Minimizing power consumption and performance delay then becomes a stochastic optimization problem.

In [7], Paleologo et al. model the service requestor, the request queue, and the service provider (the device) as discrete-time Markov processes. A system state is defined as the concatenation of the states of the service requestor, the queue, and the service provider. A policy is a matrix that gives an action command for each system state; it may be deterministic or probabilistic. They perform policy optimization to maximize power savings with performance delay as a constraint, and the resulting policy is optimal for the given system model.
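The sketch below shows what such a probabilistic policy table might look like in Python; the system states, commands, and probabilities are made up for illustration and are not taken from [7].

    import random

    # Probabilistic policy table in the style of [7]: each system state
    # (requestor state, queue length, device state) maps to a probability
    # distribution over power commands. All entries are illustrative.
    POLICY = {
        ("idle", 0, "on"):    {"shut_down": 0.8, "keep_on": 0.2},
        ("idle", 0, "off"):   {"keep_off": 1.0},
        ("active", 1, "off"): {"wake_up": 1.0},
        ("active", 1, "on"):  {"keep_on": 1.0},
    }

    def choose_command(system_state):
        commands, probs = zip(*POLICY[system_state].items())
        return random.choices(commands, weights=probs)[0]

    print(choose_command(("idle", 0, "on")))  # "shut_down" with probability 0.8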

In [8], the service provider and the request queue are modeled as a continuous-time Markov decision process, and the inter-arrival time of requests is modeled with an exponential distribution. With this model, there is no need to re-evaluate the appropriate power state periodically; instead, the arrival and service of requests are the events that trigger state transition decisions.

Both [7] and [8] model request arrivals and the service provider using memoryless distributions, which is not accurate in real-world situations [10]. In [9], request arrivals are modeled with a Pareto distribution, while the transition times of the service provider between power states are modeled with a uniform distribution. In addition, the policy re-evaluates its decision during an idle period until either a request arrives or the device is shut down.

3         Conclusion

In this paper we looked at different approaches that have been proposed for DPM. Each approach has its own pros and cons. For instance, timeout policies are extremely simple to implement, but they carry the inherent cost of wasting energy while waiting for the timeout to expire. Predictive policies can solve this problem, but they rest on the fundamental assumption that service requests are highly correlated. Another problem with these two classes of policies is that they aim only at maximizing energy savings and do not take the performance loss of the system into consideration. Only [4] discusses predictive wakeup to address this problem, but that is a heuristic approach and cannot guarantee performance bounds. Stochastic policies provide robust bounds on power savings and performance delay, but they assume specific distributions for request arrivals, and their models lose accuracy if the request pattern changes.

DPM is an inherently online problem: it involves predicting or estimating real-world request patterns, which are highly unpredictable. This is what makes it an extremely challenging and exciting research problem.

4         References

[1]     L. Benini and G. De Micheli, Dynamic Power Management: Design Techniques and CAD Tools, Kluwer Academic Publishers, 1997.

[2]     A. Karlin, M. Manesse, L. McGeoch and S. Owicki, "Competitive Randomized Algorithms for Nonuniform Problems", Algorithmica, pp. 542-571, 1994.

[3]     F. Douglis, P. Krishnan, and B. Bershad. Adaptive Disk Spin-Down Policies for Mobile Computers. In Computing Systems, volume 8, pages 381–413, 1995.

[4]     C.-H. Hwang and A. C. Wu. A Predictive System Shutdown Method for Energy Saving of Event-Driven Computation. In International Conference on Computer-Aided Design, pages 28–32, 1997.

[5]     E.-Y. Chung, L. Benini, and G. D. Micheli. Dynamic power management using adaptive learning tree. In International Conference on Computer-Aided Design, pages 274–279, 1999.

[6]     Y.-H. Lu and G. D. Micheli. Adaptive Hard Disk Power Management on Personal Computers. In Great Lakes Symposium on VLSI, pages 50–53, 1999.

[7]     G. A. Paleologo, L. Benini, A. Bogliolo, and G. D. Micheli. Policy Optimization for Dynamic Power Management. In Design Automation Conference, pages 182–187, 1998.

[8]     Q. Qiu and M. Pedram. Dynamic Power Management Based on Continuous-Time Markov Decision Processes. In Design Automation Conference, pages 555–561, 1999.

[9]     T. Simunic, L. Benini, and G. D. Micheli. Event-Driven Power Management of Portable Systems. In International Symposium on System Synthesis, pages 18–23, 1999.

[10]  Y.-H. Lu, E.-Y. Chung, T. Simunic, L. Benini, and G. De Micheli. Quantitative Comparison of Power Management Algorithms. In Design Automation and Test in Europe, pages 20–26, 2000.

[11]  Y.-H. Lu and G. De Micheli. Comparing System-Level Power Management Policies. IEEE Design & Test of Computers, 18(2):10–19, March 2001.