
128-laser LiDAR sensor significantly sharpens autonomous cars' vision

An optical image of the resolution achieved by the Velodyne LiDAR VLS-128 in operation
The Velodyne LiDAR VLS-128 is encased in a housing that hides the spinning components but enables the use of smaller sensors

In what promises to be a big step forward in 3D vision systems for autonomous vehicles, Velodyne has announced a new 128-channel LiDAR sensor that boasts the longest range and highest resolution on the market.

LiDAR sensors are used to provide real-time 3D mapping and object detection in many autonomous driving systems. Velodyne LiDAR developed and patented the world's first 3D real-time rotating LiDAR sensor for advanced automotive safety applications in 2005. Its first application was during the Defense Advanced Research Projects Agency (DARPA) Grand Challenge that year, in which autonomous vehicles competed to complete a course through a mock urban environment.

Since then, Velodyne's increasingly capable sensors have been installed in thousands of vehicles around the world, and currently provide core technology for several autonomous vehicle development programs.

Velodyne LiDAR VLS-128 sensor

The new flagship VLS-128 model is claimed to have 10 times the resolving power of the company's previous benchmark model, the HDL-64. As well as doubling the channel count, channel density has been tripled and the zoom resolution doubled, enabling it to detect objects more clearly and identify them more accurately. The resulting range is 300 meters (984 ft), and the high-resolution data gathered enables it to detect objects directly without additional sensor fusion, reducing computational complexity.
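
For a rough sense of what doubling the channel count means on the road, the short Python sketch below estimates the vertical gap between adjacent scan lines at a given range for a spinning LiDAR. The vertical field of view and the assumption of evenly spaced channels are illustrative guesses for the purpose of the comparison, not published specifications for the HDL-64 or VLS-128.

```python
# Back-of-the-envelope: how channel count translates into vertical point
# spacing at range for a spinning LiDAR. The field-of-view figure below is
# an illustrative assumption, not a published VLS-128 or HDL-64 spec.
import math

def vertical_spacing_m(channels: int, vertical_fov_deg: float, range_m: float) -> float:
    """Approximate vertical gap between adjacent laser returns at a given range,
    assuming channels are spread evenly across the vertical field of view."""
    step_deg = vertical_fov_deg / (channels - 1)
    return 2 * range_m * math.tan(math.radians(step_deg) / 2)

# Hypothetical comparison at 100 m with an assumed 27-degree vertical field of view
for name, channels in [("64-channel unit", 64), ("128-channel unit", 128)]:
    gap = vertical_spacing_m(channels, vertical_fov_deg=27.0, range_m=100.0)
    print(f"{name}: ~{gap:.2f} m between adjacent scan lines at 100 m")
```

Under those assumptions, halving the gap between scan lines at 100 m is what lets the sensor resolve smaller objects at distance without leaning on camera or radar fusion to fill in the blanks.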

Despite the increase in resolving power, the VLS-128 is around one third the weight of the HDL-64 and it features auto-alignment technology that will be progressively installed in Velodyne's other LiDAR offerings.

Velodyne claims that as well as its capabilities in low-speed urban environments, the VLS-128 will help autonomous vehicles to function at highway speeds, where it's "designed to solve for all corner cases needed for full autonomy."

10X the resolution: Comparison of the VLS-128 point cloud (top) to the HDL-64 point cloud (bottom)

"We think the biggest unsolved problem for autonomous driving at highway speeds is avoiding road debris," says company founder and CEO, David Hall. "That's tough, because you have to see way out ahead. The self-driving car needs to change lanes, if possible, and do so safely. On top of that, most road debris is shredded truck tire – all black material on a dark surface. Especially at night, that type of object recognition is challenging, even for the LiDAR sensors we've previously built. The autonomous car needs to see further out, with denser point clouds and higher laser repetitions."

Velodyne says it will begin shipping the VLS-128 by year's end.

Source: Velodyne

2 comments
ljaques
Great news, Velodyne. The resolution increase and the resulting decrease in computational complexity are outstanding, making it much more readable by humans now, too.
christopher
Note to self: don't wear horizontally striped shirts when crossing roads if you want driverless cars to see you...
We're going the wrong way - we should be creating alternatives to cars, not autonomous non-intelligent blobs of steel to share our roads.