Researchers at Rice University are developing a new camera technology that constantly watches its surroundings but picks out only the information it is told to. Known as RedEye, it works by analyzing imagery in the analog domain, allowing for drastically lower battery usage than conventional setups.
The team set out with one goal in mind: to create camera technology capable of providing computers with continuous vision, allowing them to see constantly, just as we do. The researchers liken the potential of such technology to having a personal assistant with you at all times, remembering the people you've met, where you've been, and much more. If that vision were fully realized, it could revolutionize the booming wearable technology industry.
In order to really provide that personal assistant-type experience, devices need to be able to constantly see the world around them. The problem is that the energy consumption of today's cameras, such as the one you'll find in your smartphone, is far too high. In short, current camera tech drains batteries much too quickly, especially when processing video in real-time.
Research from 2012 showed that off-the-shelf image sensors needed to be some 100 times more energy-efficient for the Rice team's vision to be realized. Last year, the researchers published a paper showing that they were able to provide a tenfold improvement in power consumption through software optimization alone.
Since then, the researchers have been working towards another tenfold improvement, developing the technology on both a hardware and software level. Along the way, they identified a significant energy bottleneck: the conversion of images from analog to digital format.
Real-world signals are received by camera sensors in analog form and are then converted to digital format, which is less noisy and therefore easier to read. The team decided to experiment with analyzing the analog signal directly, skipping the energy-intensive conversion step. This required the researchers to demonstrate that they could accurately analyze analog footage despite its inherently noisy nature.
To do so, they took advantage of machine learning breakthroughs, as well as developments in circuit design and system architecture. For example, they used a machine learning technique called a "convolutional neural network," which is inspired by the structure of the visual cortex in animals.
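To give a sense of what a convolutional neural network does, here is a minimal sketch of its core operation: sliding a small filter over an image to produce a feature map. This is a generic, digital-domain illustration of the technique, not RedEye's analog implementation; the image, filter, and values are all invented for the example.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a small filter over an image, producing a feature map.
    Each output value measures how strongly the local patch
    matches the filter's pattern (for example, an edge)."""
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny image whose right half is bright, and a vertical-edge filter
image = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)
edge_kernel = np.array([
    [-1, 1],
    [-1, 1],
], dtype=float)

feature_map = conv2d(image, edge_kernel)
# The response is strongest where the dark and bright columns meet
```

A full network stacks many such filter layers, learning the filter values from data; RedEye's contribution is running this kind of computation on the noisy analog signal before any digital image exists.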
The fruit of their labor is the RedEye system, which is able to recognize objects such as phones, faces, and even different species of animals, without ever producing a digital image. Rather than converting the incoming data to a digital format, it picks out objects directly from the analog output of the vision sensor.
This means that only certain images – those determined to be useful – are actually converted, massively lowering the amount of energy required. This approach also has significant benefits for privacy, since the designers can define rules under which the system discards unwanted data before it is ever recorded.
"So, if there are times, places or specific objects a user doesn't want to record – and doesn't want the system to remember – we should design mechanisms to ensure that photos of those things are never created in the first place," said Rice graduate student Robert LiKamWa.
The researchers have published full details of their work online.
Source: Rice University