
How to optimize energy efficiency in smart devices


Taking accurate measurements is the first step towards optimizing energy efficiency, while moving as much of your processing as possible into dedicated hardware modules can cut your power usage by a surprising amount.

In my last article, I looked at the confusing terminology around power consumption and how to approach lowering it. This time, I’ll look at optimizing energy efficiency in connected smart devices, a process that starts with taking accurate measurements.

Measuring energy consumption accurately requires something more capable than a simple ammeter, which will not produce correct numbers when the current flow varies constantly.


Power analyzers are expensive, but there are alternatives

A multimeter with a long integration time may do, provided the integration time spans multiple cycles of the waveform you are trying to measure. The best choice remains a power analyzer designed specifically for this kind of measurement. The major lab tool vendors typically offer instruments with high resolution and high accuracy, but they are quite costly and you can easily pay USD 10,000.

Many low-power MCU vendors have made lower-cost solutions available so that developers can keep one on their desk during the development phase. These devices will not match the accuracy of the lab instruments, but the error is likely to be in the single-digit percentage range, which is accurate enough to derive reasonable estimates of energy consumption and battery lifetime.
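
To show what you can do with those numbers once you have them, here is a minimal sketch that averages a trace of current samples and turns it into a rough battery-life figure. Everything in it is assumed for the sake of the example: the sample values and the 220 mAh coin-cell capacity are placeholders, not data from any particular profiler.

```c
#include <stddef.h>
#include <stdio.h>

/* Hypothetical current trace in mA, e.g. exported from a desk-top power
 * profiler at a fixed sample rate. The values are made up for illustration. */
static const double samples_ma[] = {
    0.003, 0.003, 5.2, 7.8, 5.1, 0.003, 0.003, 0.003
};
#define NUM_SAMPLES (sizeof samples_ma / sizeof samples_ma[0])

int main(void)
{
    double sum = 0.0;
    for (size_t i = 0; i < NUM_SAMPLES; i++)
        sum += samples_ma[i];

    double avg_ma      = sum / NUM_SAMPLES;    /* average current over the trace */
    double battery_mah = 220.0;                /* assumed coin-cell capacity     */
    double lifetime_h  = battery_mah / avg_ma; /* crude estimate: ignores self-
                                                  discharge and capacity derating */

    printf("Average current: %.3f mA\n", avg_ma);
    printf("Estimated lifetime: %.0f hours (about %.1f days)\n",
           lifetime_h, lifetime_h / 24.0);
    return 0;
}
```

The point is simply that a single-digit error in the average current translates directly into a single-digit error in the lifetime estimate, which is usually good enough during development.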

Dedicated hardware modules

Optimizing the energy used by the CPU is a crucial step towards improving the energy efficiency of any system. Modern CPUs are very flexible and can be programmed to do almost anything you want, but even the most efficient CPU architecture will not match a dedicated hardware module.

A common situation in embedded systems is the need to perform small operations spread out in time, with waiting in between. If you use the CPU to handle this, you have two basic choices: let the CPU run a wait loop between the actions, or drop into a low power mode and wake up only when there is something to execute.

As the CPU is typically one of the most power-hungry parts of the system, the first option burns a lot of energy doing little useful work. Reducing the clock speed while waiting would help, but it would also reduce efficiency when the work is actually done, as CPUs are typically most energy efficient when running at full speed. The second approach, regularly stopping and restarting the CPU, is usually better, but some energy and time is still lost every time the CPU has to be woken from a low power state.
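
Here is a minimal sketch of the two options on a Cortex-M class device, assuming a CMSIS environment that provides __WFI(); the flag and the do_small_action() helper are placeholders for the example, set and defined elsewhere.

```c
#include <stdbool.h>
#include "cmsis_compiler.h"          /* assumed CMSIS header providing __WFI() */

extern void do_small_action(void);   /* placeholder for the periodic work      */
static volatile bool work_pending;   /* set from a timer or peripheral IRQ     */

/* Option 1: busy-wait. The CPU stays at full clock, burning energy while
 * doing nothing useful between actions. */
void wait_loop_version(void)
{
    for (;;) {
        while (!work_pending) {
            /* spin */
        }
        work_pending = false;
        do_small_action();
    }
}

/* Option 2: sleep until an interrupt sets the flag. The CPU spends most of
 * its time in a low power state, at the cost of a small wake-up overhead. */
void sleep_version(void)
{
    for (;;) {
        while (!work_pending) {
            __WFI();                 /* wait for interrupt: enter low power */
        }
        work_pending = false;
        do_small_action();
    }
}
```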

Being able to do all the smaller control operations in dedicated hardware and only wake up the CPU when there is data processing to be performed can improve the power efficiency dramatically.

Find the right balance

The choice between dedicated hardware, which is less flexible but power efficient, and the CPU, which is more flexible but less power efficient, is an important balance to strike. In modern MCUs, hardware acceleration is increasingly being added to reduce power consumption.

The Nordic Semiconductor nRF family of chips uses DMA for all data transfers and has a hardware control system called PPI (Programmable Peripheral Interconnect) to send control signals between hardware modules. By combining these with the radio, for example, you can keep the CPU in a low power mode most of the time while running a Bluetooth link.
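
As a rough illustration of the idea, the bare-metal sketch below uses one PPI channel to route a timer compare event directly to a GPIOTE toggle task, so the pin keeps toggling while the CPU sleeps. It assumes an nRF52-class device with Nordic's MDK headers, no SoftDevice reserving the channel, and pin P0.13 as the output; treat it as a sketch rather than production code.

```c
#include "nrf.h"   /* Nordic MDK device header, assumed to be on the include path */

void start_autonomous_toggle(void)
{
    /* GPIOTE channel 0: make P0.13 a task-controlled output that toggles. */
    NRF_GPIOTE->CONFIG[0] =
        (GPIOTE_CONFIG_MODE_Task       << GPIOTE_CONFIG_MODE_Pos)     |
        (13                            << GPIOTE_CONFIG_PSEL_Pos)     |
        (GPIOTE_CONFIG_POLARITY_Toggle << GPIOTE_CONFIG_POLARITY_Pos);

    /* TIMER0: 1 MHz tick, compare event once per second, restart on compare. */
    NRF_TIMER0->BITMODE   = TIMER_BITMODE_BITMODE_32Bit;
    NRF_TIMER0->PRESCALER = 4;                               /* 16 MHz / 2^4    */
    NRF_TIMER0->CC[0]     = 1000000;                         /* 1,000,000 ticks */
    NRF_TIMER0->SHORTS    = TIMER_SHORTS_COMPARE0_CLEAR_Msk;

    /* PPI channel 0: connect the timer EVENT to the GPIOTE TASK in hardware.
     * Once enabled, this path needs no CPU involvement at all. */
    NRF_PPI->CH[0].EEP = (uint32_t)&NRF_TIMER0->EVENTS_COMPARE[0];
    NRF_PPI->CH[0].TEP = (uint32_t)&NRF_GPIOTE->TASKS_OUT[0];
    NRF_PPI->CHENSET   = PPI_CHENSET_CH0_Msk;

    NRF_TIMER0->TASKS_START = 1;   /* the CPU is now free to enter a sleep mode */
}
```

The same pattern applies to radio and ADC tasks: the more of the periodic control work you can express as event-to-task connections, the longer the CPU can stay asleep.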

Compare this hardware-assisted approach to a system without it, where the CPU usually has to run the whole time to control the radio, which again increases energy consumption. It also means that a system with a lower peak power consumption may well have a higher overall energy consumption, because the same job takes more time.
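
A quick back-of-the-envelope comparison makes the point; all numbers here are hypothetical, chosen only to illustrate the arithmetic, not taken from any datasheet.

```c
#include <stdio.h>

/* Hypothetical duty cycles, purely for illustration. */
int main(void)
{
    /* System A: hardware-assisted. 8 mA peak for 2 ms per 100 ms event,
     * then 0.005 mA sleep current for the remaining 98 ms. */
    double avg_a = (8.0 * 2.0 + 0.005 * 98.0) / 100.0;

    /* System B: CPU-driven. A steady 3 mA with no sleep at all. */
    double avg_b = 3.0;

    printf("System A average: %.3f mA (8 mA peak)\n", avg_a);
    printf("System B average: %.3f mA (3 mA peak)\n", avg_b);
    printf("B draws about %.0fx more charge per period\n", avg_b / avg_a);
    return 0;
}
```

Despite a peak current of less than half of system A's, system B ends up drawing more than an order of magnitude more charge over the same period.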

Hardware acceleration also allows you to optimize the CPU for efficiency, instead of compromising so that it can handle slow housekeeping as well as high speed operation. The CPU wakes up, does its calculations as fast and efficiently as possible, and then returns to a low power mode.

Benchmarking helps make sense of complexity

This increase in complexity is one of the reasons why new benchmarks are appearing. EEMBC, for instance, is an industry alliance focused on benchmarking that has been behind CoreMark for many years. CoreMark is used extensively to test CPUs, but it measures only performance, not energy.

More recently, EEMBC has announced benchmarks aimed at power consumption. The ULPMark core profile targets low power applications with a mixture of work cycles and low power modes in between. It is now being extended with the ULPMark peripheral profile, in which peripherals are used to simulate a more complete system, and an IoTMark benchmark is in progress that will add wireless connectivity to simulate end nodes.

> Read more: Benchmark reveals efficiency of Bluetooth low energy ‘end nodes’

Since simply reading the mA numbers in the datasheet no longer cuts it, these benchmarks enable an apples-to-apples comparison between low power devices. Useful as they are, their value grows the closer the benchmark setup is to your actual application.

A word of warning

As a final takeaway, optimizing digital devices for energy efficiency makes it harder to estimate energy consumption from the mA numbers in the datasheets. If someone tells you that you can, beware: either the device is not as well optimized as it should be, or you are not being told the full story.
