
Dual-radar tech could help self-driving cars see through fog

Neither LiDAR nor traditional radar systems are particularly good at imaging vehicles that are hidden on foggy roads

Self-driving cars typically use both LiDAR and radar to detect obstacles on the road ahead, yet neither system is adept at identifying vehicles through fog. Now, though, engineers have discovered that radar is good at the task if it's "doubled up."

LiDAR (Light Detection And Ranging) sensors gauge the shape and distance of an object by sending out pulses of laser light, then measuring how long it takes that light to reflect back off the object. Radar units send out radio waves, which are likewise reflected back by objects sitting in their path.
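The round-trip timing principle shared by both sensors comes down to one formula: distance = (speed × time) / 2. A minimal sketch (the function name is illustrative, not from the article or the research):

```python
# Both LiDAR and radar estimate distance from the round-trip time of an
# emitted signal: the echo travels out and back, so divide by two.
SPEED_OF_LIGHT = 299_792_458.0  # m/s; laser light and radio waves both travel at c

def range_from_round_trip(round_trip_seconds: float) -> float:
    """Distance to a reflecting object, given the echo's round-trip time."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# An echo returning after 200 nanoseconds puts the object roughly 30 m away.
print(round(range_from_round_trip(200e-9), 1))  # → 30.0
```

The same arithmetic applies to both sensor types; what differs is how badly fog attenuates the signal on the way out and back.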

Unfortunately, airborne obstructions such as fog, dust, rain or snow absorb the light used by LiDAR systems, making them unreliable in such conditions. And while radar isn't as adversely affected, it can only ever create a partial image of what it detects – this is because even under ideal conditions, only a small percentage of its emitted radio signals get reflected back to its sensor.

Led by Prof. Dinesh Bharadia, a team at the University of California San Diego addressed the latter problem by installing two radar units on the hood of a car, approximately one car-width (1.5 m/4.9 ft) apart from each other. Special algorithms combine the reflected signals that they receive to create one composite image, while also filtering out irrelevant background "noise." The setup has already been successfully tested under simulated foggy conditions.
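The article doesn't detail the team's "special algorithms," but one core idea — cross-checking the two radars' overlapping views to reject single-sensor noise — can be sketched in simplified form. Everything below (function names, the 2D point representation, the agreement tolerance) is an illustrative assumption, not the UCSD implementation:

```python
# Illustrative sketch only: transform each radar's 2D detections into a shared
# vehicle frame, then keep only points that both sensors report near the same
# location, suppressing spurious single-sensor returns.
RADAR_SPACING_M = 1.5  # the two units sit roughly one car-width apart on the hood

def to_vehicle_frame(points, sensor_offset_x):
    """Shift (x, y) detections from a sensor's frame to the car's center frame."""
    return [(x + sensor_offset_x, y) for x, y in points]

def fuse(left_points, right_points, tolerance=0.5):
    """Keep detections both radars agree on within `tolerance` meters."""
    left = to_vehicle_frame(left_points, -RADAR_SPACING_M / 2)
    right = to_vehicle_frame(right_points, RADAR_SPACING_M / 2)
    fused = []
    for lx, ly in left:
        for rx, ry in right:
            if abs(lx - rx) <= tolerance and abs(ly - ry) <= tolerance:
                # Average the two agreeing detections into one composite point.
                fused.append(((lx + rx) / 2, (ly + ry) / 2))
    return fused

# A vehicle ahead is seen by both radars; a noise blip appears in only one.
left = [(0.7, 12.0), (5.0, 3.0)]   # second point is a spurious return
right = [(-0.8, 12.1)]
print(fuse(left, right))  # the shared detection survives; the blip is dropped
```

A real system would work with dense radar point clouds, calibrated extrinsics, and probabilistic association rather than a fixed tolerance, but the cross-validation principle — two vantage points agreeing raises confidence, one alone does not — is the same.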

"By having two radars at different vantage points with an overlapping field of view, we create a region of high-resolution, with a high probability of detecting the objects that are present," says PhD student Kshitiz Bansal.

The researchers are now in talks with Toyota, which may combine the technology with optical cameras on its vehicles. More expensive LiDAR sensors could ultimately prove to be unnecessary.

Source: UC San Diego Jacobs School of Engineering

2 comments
Daishi
You can create a partially self-driving car with 1 camera and some basic software to stay between lanes, but things get a whole lot more complicated from there. Scenarios like this, where instead of using 1 radar system they use 2 from different vantage points and stitch them together in processing and software, will become the norm. Cars will bring in data from tons of sources and have to make sense of it all, which will lead to complex scenarios when they have conflicts between systems. I suspect full self-driving vehicles will be "almost ready" for a long, long time.
Don Duncan
Your suspicions are not warranted. You are underestimating Elon. This is his goal, to leapfrog the auto industry.