As computers, data centers and mobile devices become more powerful, their energy requirements generally increase as well. Possible solutions to the problem include power-saving sleep modes, devices that keep computers from drawing current when supposedly turned off, and water-cooled processors. EnerJ, a new system created at the University of Washington, takes a different approach: it supplies less power to regions of the chip running processes that don't require absolute precision. In lab simulations, it has already cut power consumption by up to 50 percent, and that figure could potentially reach as high as 90 percent.

With some processes, such as password encryption, close isn't good enough. The chip regions handling those functions, therefore, would still receive maximum power. Other processes, however, are already designed to tolerate small errors. These include things such as games, streaming audio and video, and real-time image recognition on mobile devices. They would be handled by a region receiving less voltage.


"Image recognition already needs to be tolerant of little problems, like a speck of dust on the screen," said Adrian Sampson, a UW doctoral student in computer science and engineering. "If we introduce a few more dots on the image because of errors, the algorithm should still work correctly, and we can save energy."

The EnerJ system itself incorporates two interlocking pieces of code, one handling the precise functions, and one allocated to those that still work with errors. A barrier within the system would keep the two parts separate, so the more laid-back code would never accidentally end up being used for functions where precision was crucial.
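In the published EnerJ design, that barrier is expressed through type qualifiers: data that may tolerate errors is marked approximate, and the checker refuses to let it flow into precision-critical code unless the programmer explicitly "endorses" the crossing. The sketch below illustrates the idea in plain Java; the `@Approx` annotation and `endorse` helper here are illustrative stand-ins, since the real EnerJ qualifiers are enforced by a static checker at compile time, not at runtime.

```java
import java.lang.annotation.*;

public class ApproxSketch {
    // Illustrative stand-in for an EnerJ-style approximate qualifier.
    // In EnerJ itself, a static checker enforces this; here it is
    // documentation only.
    @Retention(RetentionPolicy.SOURCE)
    @Target(ElementType.TYPE_USE)
    @interface Approx {}

    // Error-tolerant data: a pixel value that could live in low-voltage
    // ("approximate") storage.
    static @Approx int pixel = 200;

    // Precision-critical data: must never silently receive approximate input.
    static int checksum = 0;

    // Crossing the barrier requires a deliberate, visible endorsement.
    static int endorse(@Approx int value) {
        return value;
    }

    public static void main(String[] args) {
        // checksum = pixel;        // an EnerJ-style checker would reject this
        checksum = endorse(pixel);  // allowed: the crossing is explicit
        System.out.println(checksum);
    }
}
```

The point of the explicit `endorse` call is auditability: every place where possibly erroneous data enters precise code is visible in the source, so the "laid-back" half of the program can never leak into the precise half by accident.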

Conceivably, the same principles could be applied purely to software, as opposed to chips. In programs where exact figures aren't important, for instance, numbers could be rounded off. In other cases, fewer accuracy checks could be performed.
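As a software-only illustration (our own example, not taken from the UW work): an image-brightness average that inspects only every fourth pixel does roughly a quarter of the work, yet lands close to the exact answer, which is exactly the kind of error an approximation-tolerant program can absorb.

```java
import java.util.Locale;

public class SkippedSampling {
    // Exact mean brightness over all samples.
    static double exactMean(int[] samples) {
        long sum = 0;
        for (int s : samples) sum += s;
        return (double) sum / samples.length;
    }

    // Approximate mean: look at only every 4th sample, trading a small
    // error for roughly 4x less work.
    static double approxMean(int[] samples) {
        long sum = 0;
        int count = 0;
        for (int i = 0; i < samples.length; i += 4) {
            sum += samples[i];
            count++;
        }
        return (double) sum / count;
    }

    public static void main(String[] args) {
        // Synthetic "image": 1024 pixel values in 0..255.
        int[] pixels = new int[1024];
        for (int i = 0; i < pixels.length; i++) pixels[i] = (i * 37) % 256;
        System.out.printf(Locale.US, "exact=%.2f approx=%.2f%n",
                exactMean(pixels), approxMean(pixels));
    }
}
```

On this input the approximate result is off by barely one brightness level, invisible in a rendered image, while a password hash computed the same way would be useless, which is why EnerJ insists the programmer mark which data may be treated this way.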

So far, simulations have shown that machines running EnerJ-controlled hardware would see average power savings of 20 to 25 percent, climbing to almost 50 percent for one program. By also applying the system to software, the team estimates that a further 30 to 50 percent could be saved. Altogether, the UW team estimates savings of up to 90 percent.

"Our long-term goal would be ten times improvement in battery life," said Luis Ceze,an assistant professor of computer science and engineering, and lead author of the study. "I don't think it is totally out of the question to have an order of magnitude reduction if we continue squeezing unnecessary accuracy."

EnerJ (which takes its name from the Java programming language, of which it is an extension) will next be tested on actual hardware. There are plans to release its open-source code for general use in the coming months.

The system is reminiscent of Rice University's experiments with "chip pruning," in which non-essential portions of circuits were physically removed from device-specific chips, in order to save power and increase processing speed.