Automotive

EyeDAR tech could give self-driving cars expanded radar perception

Lead scientist Kun Woo Cho with a prototype EyeDAR device

Many cultures explore the idea of a figurative third eye that enhances perception. For autonomous vehicles, the concept is quite literal: radar, working alongside cameras and LiDAR. These systems are typically onboard the vehicle, actively gathering data on its surroundings as it moves. Now, researchers at Rice University, led by Kun Woo Cho, have developed an off-vehicle radar system, EyeDAR, which they say can significantly enhance vehicles' sensing accuracy by communicating critical traffic data to onboard systems.

Autonomous vehicles “see” their surroundings through a combination of three complementary systems: cameras, LiDAR, and radar. Cameras, the optical sensors we know and love, enable visual perception by identifying pedestrians, vehicles, and traffic control devices. Next is LiDAR (Light Detection and Ranging), an active sensing technology that emits laser pulses and measures the time of their return to generate a high-resolution 3D point cloud. LiDAR provides the accurate depth perception that fills key gaps left by radar and vision systems, but like cameras, it is susceptible to weather conditions.

Finally, we have radar. Much like a bat's echolocation, this radio-frequency technology works by emitting radio waves in predetermined directions. Objects in the radar's path reflect some of those waves back to the emitter. Onboard processors then analyze the characteristics of the returning waves to determine the distance, position, and other properties of the reflecting objects. Unlike its counterparts, radar is unaffected by lighting and weather conditions.
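The ranging part of that process is simple arithmetic. As a back-of-the-envelope sketch (illustrative only, not from the article): the wave travels to the target and back at the speed of light, so the range is half the round-trip path.

```python
# Illustrative sketch: how a radar derives target range from the
# round-trip time of a reflected radio wave.
C = 299_792_458  # speed of light, m/s

def radar_range(round_trip_time_s: float) -> float:
    """Range to target: the wave travels out and back, so halve the path."""
    return C * round_trip_time_s / 2

# A reflection arriving 1 microsecond after transmission implies a
# target roughly 150 m away.
print(radar_range(1e-6))  # ≈ 149.9 m
```

Direction and velocity come from richer signal properties (arrival angle across an antenna array, Doppler shift), but range really is this direct.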

Postdoctoral researcher Kun Woo Cho (right) works in the lab of Prof. Ashutosh Sabharwal (left)

However, one key limitation of radar lies in its very mode of operation. The technology relies on reflected waves to obtain data, but those reflections are not always efficient. In practice, objects in a radar's path often return only a fraction of the transmitted signal, with much of the energy scattered away instead. For autonomous vehicles, this means the onboard radar frequently receives incomplete information.

As a result, traffic elements such as road users around corners, pedestrians behind large objects, and other obscured hazards can easily go unnoticed. Even when a radar system detects the presence of something, the weak return signal can make precise identification difficult. For example, a stop sign might as well be a tall, slender man with a large red hat!

As autonomous vehicles, from freight trucks to delivery robots, become increasingly commonplace, safety demands are rising as well. Sensor limitations that were once accepted are now viewed as concerning vulnerabilities.

To address detection lapses in radar, the researchers are looking to extend sensing beyond in-vehicle systems to road infrastructure using EyeDAR. The device is a low-power millimeter-wave radar sensor that could enhance vehicle sensing accuracy by providing in-vehicle systems with critical data on surrounding traffic.

The positioning of each element within the lens enables it to determine the exact direction and intensity of incoming signals solely from its physical shape

Mounted on roadside infrastructure such as traffic lights, road signs, and billboards, EyeDAR enhances sensing by capturing the bulk of reflected waves that would otherwise scatter away, providing autonomous vehicles with a far more complete picture of surrounding traffic.

“It is like adding another set of eyes for automotive radar systems,” says Cho.

The device itself is an orange-sized sensor comprising two main components that function much like the lens and retina of the eye. The first is a Luneburg metamaterial lens, 3D-printed from resin. The lens focuses incoming signals from different directions onto a fixed focal point, bringing us to the second component – an antenna array lined up behind the lens. The array detects the spatial information of the focused signals and relays it back to the automotive radar.

Two characteristics make EyeDAR particularly fascinating: its compact design and its metamaterial-based signal processing. While traditional radar systems rely on large antenna arrays and complex computational algorithms to interpret data, EyeDAR's physical design performs the processing itself.
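To make the contrast concrete, here is an illustrative sketch (assumptions, not the article's method) of the kind of digital angle-of-arrival computation a conventional radar performs, which EyeDAR's lens replaces with a fixed physical structure. A uniform linear array with half-wavelength element spacing sees a phase ramp across its elements; an FFT over the elements turns that ramp into a peak at the arrival angle.

```python
# Illustrative sketch of conventional digital beamforming for
# angle-of-arrival estimation (the computation EyeDAR offloads to optics).
import numpy as np

N = 64                  # number of array elements
d_over_lambda = 0.5     # element spacing in wavelengths
true_angle_deg = 20.0   # simulated arrival direction

# Simulated snapshot: phase progression across the array for one plane wave.
n = np.arange(N)
phase = 2 * np.pi * d_over_lambda * n * np.sin(np.radians(true_angle_deg))
snapshot = np.exp(1j * phase)

# Digital beamforming: FFT across elements, pick the strongest beam.
spectrum = np.abs(np.fft.fft(snapshot, 1024))
k = np.argmax(spectrum)
freq = np.fft.fftfreq(1024)[k]                    # cycles per element
est_angle = np.degrees(np.arcsin(freq / d_over_lambda))
print(round(est_angle, 1))  # close to 20.0
```

Every such estimate costs an FFT (or a costlier super-resolution algorithm) per snapshot; EyeDAR's lens produces the equivalent spatial sorting passively, as the wave propagates through it.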

“Our lens consists of over 8,000 uniquely shaped, extremely small elements with a varying refractive index,” Cho says.

The positioning of each element within the lens enables it to determine the exact direction and intensity of incoming signals solely from its physical shape. This design is what makes it a metamaterial. Because these thousands of tiny structures are engineered to bend and focus waves as they pass through, the material itself acts as a hardwired analog processor. This essentially "pre-calculates" spatial data at the speed of light, eliminating the need for the heavy, power-hungry digital computing typically required to make sense of a chaotic traffic environment.
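The geometric principle behind this behavior can be sketched with the classic Luneburg lens, the design the EyeDAR lens is named after (an illustrative sketch of the textbook profile, not the device's actual element layout). The refractive index grades from √2 at the center to 1 at the surface, so that a plane wave from any direction is focused onto a single point on the opposite side of the lens – which is why an antenna element's position behind the lens encodes arrival direction.

```python
# Illustrative sketch: the ideal Luneburg lens index profile and its
# direction-to-focal-point mapping (textbook formula, not the EyeDAR design).
import math

def luneburg_index(r: float, radius: float = 1.0) -> float:
    """Refractive index at radial distance r from the lens center."""
    if not 0 <= r <= radius:
        raise ValueError("r must lie inside the lens")
    return math.sqrt(2 - (r / radius) ** 2)

def focal_point(theta_rad: float, radius: float = 1.0):
    """Surface point (x, y) where a plane wave from angle theta is focused:
    diametrically opposite the arrival direction."""
    return (-radius * math.cos(theta_rad), -radius * math.sin(theta_rad))

print(luneburg_index(0.0))  # sqrt(2) ≈ 1.414 at the center
print(luneburg_index(1.0))  # 1.0 at the surface
```

In EyeDAR, the thousands of tiny printed elements approximate this graded index in discrete steps, so the focusing happens at the speed of the wave itself.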

Testing revealed that EyeDAR can resolve target directions more than 200 times faster than conventional radar designs, a striking demonstration of analog processing's speed advantage over digital approaches.

Another unique characteristic of the device is that it does not generate new waves. Instead, it gathers and processes scattered waves from the target objects and reflects the clean signals back to the in-vehicle radar system.

Cho believes that EyeDAR's combination of compact, low-cost, non-complex architecture with ultrafast analog processing makes widespread deployment across roads and highways feasible.

Testing revealed that EyeDAR can resolve target directions more than 200 times faster than conventional radar designs

However, Emeka Moronu, a manufacturing industry expert, expresses doubts about its widespread adoption. “Even if the math is perfect, the manufacturing precision required is staggering," says Moronu. "We are asking a 3D printer to perfectly execute thousands of microscopic geometries that must remain flawless while baked in the sun or frozen in a storm. Scaling that level of metamaterial complexity from a controlled lab to a rugged, mass-produced roadside product is the ultimate hurdle.”

Regardless, EyeDAR has real potential. A network of these devices would allow cars to see far beyond the range of onboard radar systems. In addition, this technology can be adapted for drones, robotics, and surveillance.

Source: Rice University
