April 25, 2014

Happy power saving in embedded systems

In today’s fast-moving world, it is not wrong to say that we rely heavily on batteries for most of our work, yet the average device falls drastically short of our ever-increasing demands. Portable devices have become smaller and more powerful, and a single device now replaces the varied functions of several older ones. We expect embedded devices to offer more features and perform even better.

Our smartphones can also act as WiFi hotspots, video players, high-definition game consoles and more. All these applications draw on the battery’s energy. Though there has been appreciable advancement in embedded performance, not much has been done in terms of battery performance. This has made developers and engineers conscious of battery power consumption, and there is an increasing demand for software that manages the system’s energy usage. Energy consumption can be reduced by taking memory, peripherals and other system resources into account.

In this blog we will look at various methods to get the maximum energy from the battery in an embedded system.

Computational Efficiency

All computing machines carry out two essential functions – computation and communication.

All programs perform computations, like comparisons, analyses, calculations or manipulations. Computation is carried out by some processing unit on the values stored in machine registers. This process involves executing a set of instructions in the shortest possible time.

Communication refers to moving data from one place to another. The application cannot process any information without moving it: values stored in memory must be moved into core registers for processing, and after processing is done the result is written back to memory.

Computational activity should be carried out efficiently: executing fewer instructions consumes less energy.

Some points to consider while writing embedded applications:

  • Knowledge of the architecture version and the core implementation is important. The compiler and linker cannot apply most target-specific optimizations unless they know exactly which platform they are building for.
  • Choose data types sensibly to avoid unnecessary operations. On the ARM architecture, 32-bit data types are the most efficient; though 8-bit and 16-bit types occupy less storage, they are less efficient to process.
  • Algorithm selection - For a single operation there are usually multiple possible algorithms, and multiple possible implementations of each. In general, algorithms and implementations which favor computation over communication are the best choice. A simple example is swapping two variables: an implementation using a temporary variable accesses memory more often than one which swaps without a third variable, so the latter is more efficient in terms of energy consumption.
  • Loops should be well defined. Produce shorter and faster loops that need fewer registers. Use unsigned integer counters, count down, and test for equality with zero as the termination condition; the swap and loop points are both illustrated in the sketch after this list.
  • Accuracy in calculation - Fixed-point implementations are usually much more computationally efficient than floating-point ones, even when floating-point hardware is available.
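
As a minimal C sketch of the swap and loop points above (the function names are hypothetical, and the actual instruction count always depends on the compiler and target):

```c
#include <stdint.h>

/* Swap without a temporary: the XOR trick keeps both values in registers
 * and avoids a third memory slot. Caveat: if a and b point to the same
 * location the value is zeroed, and a good optimizing compiler may make
 * the difference vanish anyway, so measure before relying on it. */
static inline void swap_xor(uint32_t *a, uint32_t *b)
{
    *a ^= *b;
    *b ^= *a;
    *a ^= *b;
}

/* Count-down loop with an unsigned counter of the core's natural 32-bit
 * width: the termination test "n != 0" typically compiles to a single
 * compare-with-zero (or a flag-setting decrement), which is cheaper than
 * comparing against an upper bound on every iteration. */
static uint32_t sum_buffer(const uint32_t *buf, uint32_t n)
{
    uint32_t sum = 0;
    while (n != 0) {
        sum += buf[--n];
    }
    return sum;
}
```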

Memory Access

Tightly-coupled memory, or TCM, is the most efficient memory in terms of energy consumption. On ARM cores, TCM is connected to the core by a dedicated, optimized interface, but not all systems have TCM. Many systems do have some type of on-chip, fast, wide memory, often referred to as “scratchpad memory” or SPM.

Most embedded systems have a cache, and some have a hierarchy of Level 1 and Level 2 caches. A cache automatically holds a copy of recently used data and operates on groups, or lines, of a fixed size. An access to a single cacheable memory location may therefore trigger a burst of memory accesses to load nearby data into the cache. If that extra data is never used, the energy spent loading it is wasted. To avoid this, memory should be accessed in a cache-friendly way.

Cache-friendly access - Large data structures such as arrays should be accessed in a cache-friendly way. In C, accessing a two-dimensional array column by column is extremely cache-unfriendly; accessing it row by row produces sequential memory accesses and is therefore cache-friendly.
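
A minimal C sketch of this point: both functions below compute the same sum over a hypothetical ROWS x COLS array, but only the first walks memory in the order it is laid out.

```c
#include <stddef.h>

#define ROWS 256
#define COLS 256

/* Cache-friendly: C stores arrays in row-major order, so the inner loop
 * touches consecutive addresses and reuses every cache line it fetches. */
long sum_by_row(const int m[ROWS][COLS])
{
    long sum = 0;
    for (size_t i = 0; i < ROWS; i++)
        for (size_t j = 0; j < COLS; j++)
            sum += m[i][j];
    return sum;
}

/* Cache-unfriendly: walking down a column strides COLS * sizeof(int) bytes
 * per access, so almost every access pulls in a new cache line whose
 * neighbouring data may be evicted before it is ever used. */
long sum_by_column(const int m[ROWS][COLS])
{
    long sum = 0;
    for (size_t j = 0; j < COLS; j++)
        for (size_t i = 0; i < ROWS; i++)
            sum += m[i][j];
    return sum;
}
```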

System Efficiency - Say no to the “busy” loop

There are situations where the system is doing nothing, waiting for an event to happen. In such cases we should not spin round, burning power, checking frequently whether the event has occurred.

A lot of energy is wasted when the system sits idle and polls its peripherals. Prefer an interrupt mechanism over polling when dealing with system peripherals.
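
The sketch below contrasts the two approaches, assuming an ARM Cortex-M class core whose CMSIS device header (which provides the __WFI() intrinsic) is already included; uart_data_ready(), uart_read() and UARTx_IRQHandler() are hypothetical names standing in for the part’s real UART driver and vector name.

```c
#include <stdint.h>

extern int     uart_data_ready(void);   /* hypothetical driver call */
extern uint8_t uart_read(void);         /* hypothetical driver call */

static volatile uint8_t rx_byte;
static volatile int     rx_ready;

/* Busy-wait version: the core spins at full power doing no useful work. */
uint8_t read_byte_polling(void)
{
    while (!uart_data_ready())
        ;                        /* burns energy until data arrives */
    return uart_read();
}

/* Interrupt-driven version: the handler captures the data, and the core
 * sleeps between interrupts instead of polling. */
void UARTx_IRQHandler(void)
{
    rx_byte  = uart_read();
    rx_ready = 1;
}

uint8_t read_byte_interrupt(void)
{
    while (!rx_ready)
        __WFI();                 /* wait-for-interrupt: core stops clocking */
    rx_ready = 0;
    return rx_byte;
}
```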

Friendly Peripherals                                       

To increase system efficiency, the peripherals should be used to their maximum advantage. Many peripherals on the board can take some of the load off the CPU, saving CPU cycles and hence energy. To copy a large chunk of data, a DMA engine can be used; in the meantime the CPU can do something else in parallel, and if nothing else needs doing it can be put to sleep and woken up when the data transfer is done, as sketched below.
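
A minimal sketch of this idea; dma_start_copy(), dma_done() and enter_sleep() are hypothetical wrappers around whatever DMA controller and sleep mode the SoC actually provides.

```c
#include <stddef.h>
#include <stdint.h>

extern void dma_start_copy(void *dst, const void *src, size_t len);  /* hypothetical */
extern int  dma_done(void);                                          /* hypothetical */
extern void enter_sleep(void);   /* hypothetical wrapper, e.g. around a WFI instruction */

void copy_large_buffer(uint8_t *dst, const uint8_t *src, size_t len)
{
    dma_start_copy(dst, src, len);   /* hand the copy to the DMA engine */

    /* The CPU is now free to do other work in parallel; if there is
     * nothing else to do, sleep and let the DMA-complete interrupt
     * wake the core. */
    while (!dma_done())
        enter_sleep();
}
```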

Another important factor is the relative speed of the core and the peripherals. Even though the core runs fast, a peripheral’s response time is limited by its own speed. If a task is bounded by the speed of a peripheral, running the core at full speed wastes energy.

Take the example of flash memory. While programming flash, the system waits for a response from the memory device. If the core is not doing anything useful, we can reduce its clock speed so that it spends the same time waiting but uses less energy while waiting for the response from the flash memory, as sketched below.
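
A rough sketch of the idea, with entirely hypothetical, part-specific driver names (flash_program_page(), flash_busy(), clock_get_hz(), clock_set_hz()) and an illustrative low clock frequency:

```c
#include <stdint.h>

extern uint32_t clock_get_hz(void);                                   /* hypothetical */
extern void     clock_set_hz(uint32_t hz);                            /* hypothetical */
extern void     flash_program_page(uint32_t addr, const uint8_t *p);  /* hypothetical */
extern int      flash_busy(void);                                     /* hypothetical */

#define SLOW_HZ  1000000u   /* illustrative low clock while waiting */

void program_page_low_power(uint32_t addr, const uint8_t *data)
{
    uint32_t fast_hz = clock_get_hz();

    flash_program_page(addr, data);  /* start the program operation */
    clock_set_hz(SLOW_HZ);           /* the wait is bounded by the flash, not the core */

    while (flash_busy())
        ;                            /* or sleep here, if a flash-ready interrupt exists */

    clock_set_hz(fast_hz);           /* restore full speed for useful work */
}
```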

Turning off the peripherals and subsystems

When some subsystems are not in use, it is better to shut them down or lower their power state in order to save energy.

Subsystems which are not in use can be powered down by the application or the OS. Alternatively, a subsystem can be powered down after it has been idle for a certain period and powered up again when required.

There is a catch involved here. If shutting a subsystem down and powering it back up costs more energy than simply running it for the same period, it is better to keep it running. A sensible power-down scheme is therefore needed in order to actually save energy, as in the sketch below.
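
To make the break-even rule concrete, here is a minimal sketch with made-up numbers (the power and restart-energy figures are purely illustrative; real values come from the part’s datasheet). With these numbers the subsystem has to stay idle for more than 9000 / 12 = 750 ms before powering it down pays off.

```c
#include <stdint.h>

#define IDLE_POWER_MW        12   /* subsystem left running but idle (illustrative) */
#define OFF_POWER_MW          0   /* subsystem fully powered down    (illustrative) */
#define RESTART_ENERGY_UJ  9000   /* energy for one off/on cycle     (illustrative) */

/* Power down only if the energy saved while off exceeds the energy spent
 * on the off/on cycle:
 *   idle_ms * (IDLE_POWER_MW - OFF_POWER_MW) > RESTART_ENERGY_UJ
 * (mW * ms = uJ, so the units match.) */
static int worth_powering_down(uint32_t expected_idle_ms)
{
    return (uint64_t)expected_idle_ms * (IDLE_POWER_MW - OFF_POWER_MW)
           > RESTART_ENERGY_UJ;
}
```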

Tools – energyAware Profiler

The energyAware Profiler displays the energy consumption of a software application allowing developers to perform optimizations to reduce power consumption.

The energyAware Profiler uses the built-in PC sampling and IRQ event tracking in the EFM32™. The profiler displays the power information in the form of a graph.

Conclusion

Using the techniques above saves system power and improves operating efficiency. It is important to note that these tips need to be considered during the design cycle.

Selecting a low power microcontroller is an obvious first step, but there are a number of software and hardware tips that can be followed to ensure that the battery power is put to good use.

Even if the hardware is low-power, its capabilities cannot be leveraged without energy-efficient software.

Happy power savings! Visit this section to know more about HCL's embedded engineering services.

References

http://www.embedded.com/design/real-time-and-performance/4425442/Maximize-the-battery-life-of-your-embedded-platform

http://www.silabs.com/products/mcu/lowpower/pages/online-help-energyaware-profiler.aspx