Power consumption is a key issue in the design of embedded systems today, as it directly affects battery life. Battery technology has not kept pace with the advances in the hardware that drives these systems in recent years. Dynamic power management (DPM) [1], which refers to the selective shutdown of system components that are idle or underutilized, has proven to be a particularly effective technique for reducing power dissipation in such systems. However, employing such techniques also incurs a performance penalty because of the overhead associated with shutdowns and wakeups. An effective DPM policy must therefore seek to maximize power savings while keeping performance degradation within acceptable limits. The design of such policies has been an active research topic, and several policies have been proposed in the past. In the following sections I briefly discuss and analyze a few of these policies.
Power management is a prediction problem: it seeks to forecast whether an idle period will be long enough to compensate for the overhead of power-state changes. The minimum length of an idle period for which shutting down saves power is called the break-even time (Tbe) [11]. It depends on the individual device and is independent of requests.
Consider a device whose state-transition delay is To (including shutdown and wake-up delays) and whose transition energy is Eo. Suppose its power in the working and sleeping states is Pw and Ps, respectively. On the left of Figure 1, the device is kept in the working state; on the right, the device is shut down. The break-even time makes the energy consumption in both cases equal. Namely,

Pw × Tbe = Eo + Ps × (Tbe − To), or Tbe = (Eo − Ps × To)/(Pw − Ps).
The break-even time also has to be at least as large as the transition delay; therefore,

Tbe = max[(Eo − Ps × To)/(Pw − Ps), To].
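As a quick check, the break-even formula above can be coded directly. The device parameters in the usage example are hypothetical, chosen only to illustrate the two cases of the max:

```python
def break_even_time(Pw, Ps, Eo, To):
    """Break-even time Tbe = max[(Eo - Ps*To)/(Pw - Ps), To]:
    the minimum idle period for which shutting down saves energy,
    clamped so it is never shorter than the transition delay To."""
    return max((Eo - Ps * To) / (Pw - Ps), To)

# Hypothetical disk: Pw = 2.0 W, Ps = 0.1 W, Eo = 5.0 J, To = 2.0 s
tbe = break_even_time(2.0, 0.1, 5.0, 2.0)  # (5.0 - 0.2)/1.9, about 2.53 s
```

If the energy term were smaller (say Eo = 1.0 J), the formula would yield a value below To, and the max clamps the result to To.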
[Figure 1: power over time with the device kept in the working state (left) versus shut down (right)]
The primary goal of any DPM policy is to make the device sleep for at least Tbe; otherwise it might end up consuming more power than the always-on case.
Existing DPM policies can be broadly classified into three categories:
Timeout policies use a timeout value τ. They put the device to sleep if it has been idle for longer than τ. The underlying assumption is that if the device remains idle for τ, it will stay idle for at least another Tbe. The drawback of these policies is that they waste energy while waiting for the timeout to expire.
The timeout period may be fixed or adaptive. A simple fixed-timeout policy [2] sets τ equal to the Tbe of the device it is managing. This policy guarantees that it consumes no more than twice the energy of an ideal offline policy. Adaptive timeout policies dynamically modify τ based on certain parameters. In [3], Douglis et al. adjust τ based on the ratio of the performance delay to the sleep time for the previous idle period. If the ratio is high, τ is increased; otherwise it is decreased. Floor and ceiling values for τ prevent the policy from becoming too aggressive or too conservative, respectively. The increase/decrease can be either arithmetic or geometric.
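A minimal sketch of the geometric variant of such an adaptive timeout follows. The threshold, factor, floor, and ceiling values are illustrative assumptions, not the parameters used in [3]:

```python
def adjust_timeout(tau, delay, sleep_time, threshold=0.05,
                   factor=2.0, floor=1.0, ceiling=60.0):
    """Adapt the timeout tau from the previous idle period's outcome:
    if the ratio of performance delay to sleep time was high, grow tau
    (be less aggressive); otherwise shrink it. Geometric variant."""
    ratio = delay / sleep_time if sleep_time > 0 else float("inf")
    tau = tau * factor if ratio > threshold else tau / factor
    # Clamp to floor/ceiling so the policy never becomes too
    # aggressive or too conservative.
    return min(max(tau, floor), ceiling)
```

An arithmetic variant would add or subtract a fixed increment instead of multiplying or dividing by a factor.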
Predictive policies predict the length of the upcoming idle period. A decision on whether to sleep can thus be made immediately, depending on whether the prediction is greater or less than Tbe.
In [4], Hwang et al. propose an exponential-average scheme. The policy predicts the upcoming idle-period length by taking an exponential average of the predicted and actual lengths of the previous idle periods. The recursive prediction formula is:

I(n+1) = a × i(n) + (1 − a) × I(n),

where I(n+1) is the new predicted value, I(n) is the last predicted value, i(n) is the latest actual idle period, and a is a constant attenuation factor between 0 and 1. In this formula, I(n) provides the inertia and i(n) is the force that pushes the prediction toward the actual idle period.
The scheme also proposes predictive wakeup to avoid performance delays.
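The recurrence I(n+1) = a × i(n) + (1 − a) × I(n) can be sketched as a small stateful predictor; the attenuation factor and initial prediction in the example are illustrative:

```python
class ExponentialAveragePredictor:
    """Exponential-average idle-period predictor in the style of [4]."""

    def __init__(self, a=0.5, initial=0.0):
        self.a = a           # attenuation factor, 0 < a < 1
        self.pred = initial  # current prediction I(n)

    def observe(self, actual_idle):
        # Blend the latest actual idle period (the force) with the
        # old prediction (the inertia) to form I(n+1).
        self.pred = self.a * actual_idle + (1 - self.a) * self.pred
        return self.pred

p = ExponentialAveragePredictor(a=0.5, initial=4.0)
p.observe(8.0)  # prediction moves halfway toward the observed 8.0
```

If the returned prediction exceeds Tbe, the policy shuts the device down immediately at the start of the idle period.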
Chung et al. [5] propose a predictive policy that uses an adaptive learning tree to make predictions. The tree encodes sequences of idle periods into its nodes; leaf nodes store the prediction confidence level (PCL) associated with the respective sequence. The PCL is updated using a finite state machine: for instance, if an idle period is predicted to be longer than Tbe and it indeed is longer, the PCL is increased; otherwise it is decreased. This scheme is capable of managing multiple power states.
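The PCL update can be sketched as a saturating counter; the exact FSM in [5] may differ, so the state count and transitions below are illustrative assumptions:

```python
def update_pcl(pcl, predicted_long, actually_long, max_pcl=3):
    """Saturating-counter FSM for a leaf's prediction confidence
    level (PCL): raise confidence on a correct prediction, lower it
    on a wrong one, staying within [0, max_pcl]."""
    if predicted_long == actually_long:
        return min(pcl + 1, max_pcl)  # correct prediction
    return max(pcl - 1, 0)            # wrong prediction
```

A shutdown is issued only when the leaf matching the recent idle-period sequence holds a sufficiently high PCL.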
In [6], Lu proposes a policy that mixes the adaptive-timeout and predictive schemes. The policy clusters sequences of user requests into groups called sessions. It predicts the length of the current session from the predicted and actual lengths of previous sessions (similar to [4]). If no request arrives, instead of immediately issuing a shutdown command, the predicted session length is decreased by an adjustment factor; conversely, if a request arrives, it is increased by the same factor. A shutdown command is issued when the device has been idle long enough relative to the predicted session length.
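The session-length bookkeeping can be sketched as below; the adjustment and margin values are illustrative, and the helper names are not from [6]:

```python
def update_session_estimate(estimate, request_arrived, adjustment=0.5):
    """Decay the predicted session length while no request arrives;
    grow it by the same adjustment factor when one does."""
    return estimate + adjustment if request_arrived else estimate - adjustment

def should_shutdown(idle_time, estimate, margin=1.0):
    # Shut down once the device has been idle long enough relative
    # to the (adjusted) predicted session length.
    return idle_time >= estimate + margin
```

The decay means a quiet device drifts toward shutdown gradually rather than through a single hard timeout.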
Stochastic policies model request arrivals and device power-state changes as stochastic processes. Minimizing power consumption and performance delay then becomes a stochastic optimization problem.
In [7], Paleologo et al. model the service requester, request queue, and service provider (the device) as discrete-time Markov processes. They define system states as concatenations of the states of the service requester, the queue, and the service provider. The policy is a matrix that gives an action command for each system state; it may be deterministic or probabilistic. They optimize the policy to maximize power savings with performance delay as a constraint. The resulting policy is optimal for the given system model.
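Executing such a (possibly probabilistic) policy amounts to a table lookup followed by sampling. The states and commands below are illustrative placeholders, not the state encoding used in [7]:

```python
import random

def choose_action(policy, state, u=None):
    """Sample an action from a stochastic policy table: for each
    system state, the table holds a probability distribution over
    power commands. A deterministic policy is the special case where
    one command has probability 1."""
    u = random.random() if u is None else u  # u in [0, 1)
    cum = 0.0
    for action, p in policy[state].items():
        cum += p
        if u < cum:
            return action
    return action  # guard against floating-point rounding

# Illustrative system state: (requester, queue, provider)
policy = {("idle", "empty", "on"): {"sleep": 0.8, "stay_on": 0.2}}
```

Policy optimization then searches over these probability entries subject to the performance-delay constraint.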
In [8], the service provider and request queue are modeled as a continuous-time Markov decision process, and the inter-request arrival time is modeled with an exponential distribution. With this model, there is no need to evaluate the appropriate power state periodically; instead, the arrival and service of requests are the events that trigger state-transition decisions.
Both [7] and [8] model request arrivals and the service provider using memoryless distributions, which is not accurate in real-world situations [10]. In [9], request arrivals are modeled with a Pareto distribution, while the transition times of the service provider between power states are modeled with a uniform distribution. In addition, the policy re-evaluates its decisions during idle periods until either a request arrives or the device is shut down.
In this paper we looked at different approaches that have been proposed for DPM. Each approach has its own pros and cons. For instance, timeout policies are extremely simple to implement, but they inherently waste energy while waiting for the timeout to expire. Predictive policies solve this problem, but they rest on the fundamental assumption that service requests are highly correlated. Another problem with these two classes of policies is that they aim only at maximizing energy savings; they do not take the system's performance loss into consideration. Only [4] addresses this problem, through predictive wakeup, but that is a heuristic approach and cannot guarantee performance bounds. Stochastic policies offer robust bounds on power savings and performance delay, but they assume specific distributions for request arrivals, and the models lose accuracy if the request pattern changes.
DPM is an inherently online problem and involves predicting or estimating real-world request patterns, which are highly unpredictable. This is what makes it an extremely challenging and exciting research problem.
[3] F. Douglis, P. Krishnan, and B. Bershad. Adaptive Disk Spin-Down Policies for