Energy Efficient CPU Schedulers for Mobile/Multimedia Systems

http://isuxatflash.com/AutoDVS.pdf

(1) We omit data for the MAX policy due to space constraints; however, we use MAX to explain some of the anomalies. In general, the MAX policy always outperforms the other policies in terms of performance and always consumes the most energy.

AutoDVS enables significant energy savings for the first six benchmarks. While keeping the stall rate under 10% of total execution time, AutoDVS reduces energy consumption by 30-66% (49% on average). In general, energy consumption is proportional to the performance requirements of the benchmarks. For example, the savings are greatest for Gen-1 and Che-1, which both include game sessions at novice levels. For the first six benchmarks, IDEAL uses 176 MHz for Tet-1 only and 132 MHz for the rest. (Aydin, H., Melhem, R., Mosse, D., and Alvarez, P. 2001)
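
A minimal sketch of the kind of frequency choice described above, assuming a hypothetical pick_frequency helper and an illustrative set of discrete levels (only 132 MHz and 176 MHz appear in the text; the others are made up): the scheduler selects the lowest level whose predicted stall stays under the 10% bound.

    # Illustrative sketch only; not the AutoDVS implementation.
    FREQS_MHZ = [59, 88, 118, 132, 176, 206]   # hypothetical discrete levels

    def pick_frequency(predicted_cycles, interval_s, max_stall=0.10):
        """Return the lowest frequency (MHz) whose predicted stall time is
        at most max_stall of the scheduling interval."""
        for f in FREQS_MHZ:
            busy_s = predicted_cycles / (f * 1e6)           # time needed at speed f
            stall = max(0.0, busy_s - interval_s) / interval_s
            if stall <= max_stall:
                return f
        return FREQS_MHZ[-1]                                # fall back to MAX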

In general, IDEAL uses almost 10% more energy than AutoDVS to maintain the same stall rate; AutoDVS, however, predicts CPU demand accurately enough to reduce stall time, and on average achieves a 35% improvement over IDEAL. The Madplay playback length is 424 seconds. The opportunities for CPU scaling are reduced when we execute multiple programs concurrently. The question we are interested in is whether it is possible to extract any energy savings without hurting performance.
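
Accurate per-interval demand prediction is central to the comparison above. As a rough illustration only (the DemandPredictor class and its alpha parameter are hypothetical, not AutoDVS's actual predictor), an exponentially weighted moving average can forecast the next interval's demand from recent utilization:

    class DemandPredictor:
        """Hypothetical EWMA forecaster of busy cycles per interval."""
        def __init__(self, alpha=0.5):
            self.alpha = alpha        # weight on the most recent observation
            self.estimate = 0.0       # predicted busy cycles for the next interval

        def update(self, observed_cycles):
            # Fold the last interval's measured cycles into the running estimate.
            self.estimate = (self.alpha * observed_cycles
                             + (1.0 - self.alpha) * self.estimate)
            return self.estimate

The resulting estimate would then feed a frequency-selection step such as the earlier sketch.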

As a final experiment, we investigate the efficacy of extending AutoDVS to conserve additional energy on platforms that have very low voltage-switch latency. To investigate this, we have incorporated an existing, efficient implementation of the PACE algorithm, called Practical PACE (PPACE), into AutoDVS.

PACE is a technique that computes an energy-optimal speed schedule when continuous CPU scaling is possible. PACE computes CPU speed as a function of completed work and gradually increases the CPU frequency as the task nears its deadline. PPACE extends PACE to handle discrete CPU scaling levels and uses a polynomial-time approximation of PACE that is computationally efficient but does not always find the optimal solution. (Aydin, H., Melhem, R., Mosse, D., and Alvarez, P. 2001)
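
To make the shape of such a schedule concrete, here is a small sketch under the common assumption that power grows roughly with the cube of frequency, in which case a PACE-style schedule raises speed in proportion to (1 - F(w))^(-1/3), where F is the distribution of task work and w is the work completed so far. The function names, the frequency list, and the base_mhz scaling constant are illustrative; the naive rounding shown is not PPACE's approximation.

    import bisect

    FREQS_MHZ = [59, 88, 118, 132, 176, 206]   # hypothetical discrete levels

    def pace_speed(work_done, work_cdf, base_mhz):
        """Continuous PACE-style speed after completing work_done cycles.
        work_cdf(w) is the probability that the task needs at most w cycles;
        base_mhz is a scaling constant chosen off-line so that worst-case
        work still meets its deadline (treated as given here)."""
        survival = max(1e-6, 1.0 - work_cdf(work_done))    # guard against divide-by-zero
        return base_mhz * survival ** (-1.0 / 3.0)

    def quantize_up(freq_mhz):
        """Round a continuous speed up to the next available discrete level."""
        i = bisect.bisect_left(FREQS_MHZ, freq_mhz)
        return FREQS_MHZ[min(i, len(FREQS_MHZ) - 1)]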

We investigate the impact of integrating PPACE into AutoDVS. We employ simulation for these experiments (unlike in our previous experiments) since an actual, online implementation of PPACE is currently not feasible for three primary reasons. First, extant hand-held devices impose a very high switch latency. Second, the computational requirements of PPACE are high and consume significant resources on modern devices. Third, the computation of the cumulative distribution function requires off-line information.
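
The third point (the off-line distribution) is easy to illustrate. The sketch below, with a hypothetical trace and helper name, builds an empirical CDF of per-task CPU demand from previously recorded executions, which a PACE/PPACE-style schedule needs before it can be computed:

    import bisect

    def empirical_cdf(samples):
        """Return F(w) = fraction of recorded tasks that needed <= w cycles."""
        ordered = sorted(samples)
        n = len(ordered)
        def cdf(w):
            return bisect.bisect_right(ordered, w) / n
        return cdf

    # Hypothetical demands (cycles) recorded from an earlier, off-line run.
    trace = [4.1e6, 5.0e6, 5.2e6, 6.8e6, 7.5e6]
    F = empirical_cdf(trace)
    print(F(5.5e6))   # fraction of tasks that needed at most 5.5M cycles -> 0.6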

http://isuxatflash.com/HughesLG.pdf

(2) LG strategically chooses how much temporal slack to use for each interval based on the energy-performance tradeoffs for all intervals; thus, it "spreads" the slack across each frame. LL, in contrast, is not intended to use temporal slack; thus, GG+LL should not spread slack any more than GG. (Aydin, H., Melhem, R., Mosse, D., and Alvarez, P. 2001)
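
As a rough illustration of what "spreading" slack means (this greedy allocation is not the LG algorithm from the paper, and the interval data are hypothetical), one can think of choosing, across a frame's intervals, where a unit of extra time buys the most energy:

    def spread_slack(options, total_slack):
        """options: list of (interval, extra_time, energy_saved) tuples, one
        slowed-down configuration per interval.  Greedily spend the frame's
        slack where the energy saved per unit of extra time is largest."""
        ranked = sorted(options, key=lambda o: o[2] / o[1], reverse=True)
        chosen, remaining = [], total_slack
        for interval, extra_time, energy_saved in ranked:
            if extra_time <= remaining:
                chosen.append(interval)
                remaining -= extra_time
        return chosen, remaining

    picks, left = spread_slack(
        [("interval-1", 2.0, 5.0), ("interval-2", 1.0, 4.0), ("interval-3", 3.0, 3.0)],
        total_slack=4.0)
    print(picks, left)   # ['interval-2', 'interval-1'] with 1.0 unit of slack left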

We quantify this effect for LG and for GG+LL, for comparison. For one frame of each application, we use the architecture chosen by GG (as part of GG+LL) as a baseline; i.e., we run the chosen frame without adaptation using the architecture chosen by the GG part of ...