
New algorithm to improve pedestrian recognition accuracy of driverless cars

The goal was real-time vision so that the computer would be capable of recognizing and categorizing objects, especially humans, in normal urban driving conditions

Researchers at the University of California, San Diego (UCSD) have developed a pedestrian detection system they claim performs in near real-time at higher accuracy than existing systems. The researchers believe that the algorithm and technology could be used in self-driving vehicles, robotics, and in image and video search systems.

The system was developed by electrical engineering professor Nuno Vasconcelos in the UCSD Jacobs School of Engineering. His team combined traditional computer vision models with deep learning to improve accuracy and speed.

The goal was real-time vision that would allow the system to recognize and categorize objects, especially humans, in normal urban driving conditions. This would allow a self-driving car, delivery robot, or low-flying drone to detect and avoid pedestrians and potential conflicts and congestion.

Most pedestrian detection systems divide an image into small sections (referred to as "windows") that are processed by a classification program to determine the presence of a human form. This can be challenging for engineers because humans come in various shapes and sizes, and distance changes the perspective and apparent size of objects. In a typical real-time application, this means processing millions of these windows at 5-30 frames per second.
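To give a sense of the scale involved, here is a minimal sketch (in Python, with assumed window size, stride, and scales, not the UCSD code) of how a sliding-window detector enumerates candidate windows in a single frame:

```python
# Illustrative sketch (not the UCSD implementation): enumerating sliding
# windows over one video frame to show why real-time detection is costly.
# The window size, stride, and image scales below are assumed values.

def sliding_windows(frame_w, frame_h, win_w=64, win_h=128, stride=8,
                    scales=(1.0, 0.75, 0.5)):
    """Yield (x, y, scale) for every candidate window in one frame."""
    for scale in scales:
        w, h = int(frame_w * scale), int(frame_h * scale)
        for y in range(0, h - win_h + 1, stride):
            for x in range(0, w - win_w + 1, stride):
                yield x, y, scale

# A single 1920x1080 frame already yields tens of thousands of windows;
# at 30 frames per second the detector faces over a million windows per second.
n = sum(1 for _ in sliding_windows(1920, 1080))
print(f"windows per frame: {n}")
```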

The cascade detection technique employed in the UCSD system performs the same basic function, but in stages rather than all at once. This allows the algorithm to quickly discard windows that have little likelihood of containing a human form and concentrate on those that might. Windows with relatively uniform shapes and colors (the sky, for example) are therefore ignored in favor of busier ones.

The second stage classifies and discards windows containing objects similar in shape or color variance to humans but that are not pedestrians (trees, shrubs, other vehicles). The final stages classify in finer and finer detail until only pedestrians are left and marked. Although these final calculations are processor-heavy, comparatively few of them are required, so they are completed quickly.
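A rough sketch of how such a staged cascade might be structured is shown below; the stage tests and thresholds are assumed stand-ins for illustration, not the published UCSD classifiers:

```python
# Illustrative sketch (assumed logic, not the published implementation):
# a detection cascade that rejects easy windows cheaply and spends the
# expensive classifier only on the few windows that survive every stage.
import numpy as np

def stage1_uniform(window):
    """Cheap test: discard windows with little intensity variance (e.g. sky)."""
    return window.std() > 12.0          # threshold is an assumed value

def stage2_shape(window):
    """Mid-cost test: rough edge profile, filters trees, vehicles, etc."""
    edges = np.abs(np.diff(window.mean(axis=2), axis=1))
    return edges.mean() > 4.0           # threshold is an assumed value

def stage3_full_classifier(window):
    """Expensive final test; a real system would run a trained model here."""
    return window.mean() > 90.0         # stand-in for a learned classifier

def cascade_detect(window):
    for stage in (stage1_uniform, stage2_shape, stage3_full_classifier):
        if not stage(window):
            return False                # rejected early, no further cost
    return True                         # survived every stage: mark as pedestrian

# Usage: run cascade_detect on each window produced by a sliding-window scan.
window = np.random.randint(0, 255, size=(128, 64, 3)).astype(float)
print(cascade_detect(window))
```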

Traditionally, cascade detection systems use simple classifiers, referred to as "weak learners." In the UCSD system, the later-stage classifiers learn as they go, becoming increasingly sophisticated, which speeds up detection overall. The classifiers grow more robust over time and differ from one stage to the next, which is a key difference between this new algorithm and current pedestrian detection systems.

The algorithm does this, says Vasconcelos, by learning which combinations of weak learners were able to detect pedestrians in one frame and putting more emphasis on those combinations in subsequent frames, quickening the detection process. The goal is to continually optimize the trade-off between detection accuracy and speed.
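As a rough illustration of this idea, the sketch below shows a boosted cascade in which each stage reuses the weak learners of earlier stages plus new ones, weighted by per-learner weights; the toy learners, weights, and thresholds are invented for the example and are not the authors' trained models:

```python
# Illustrative sketch (assumed form, not the authors' algorithm): a boosted
# ensemble in which each cascade stage keeps all earlier weak learners and
# adds more, so later stages are stronger; the learner weights (alphas)
# reflect how useful each weak learner has been so far.
import numpy as np

class BoostedStage:
    def __init__(self, weak_learners, alphas, threshold):
        self.weak_learners = weak_learners   # functions: window -> +1.0 / -1.0
        self.alphas = alphas                 # per-learner weights (assumed pre-trained)
        self.threshold = threshold

    def score(self, window):
        return sum(a * h(window) for a, h in zip(self.alphas, self.weak_learners))

    def accept(self, window):
        return self.score(window) >= self.threshold

def detect(window, stages):
    """Run the window through every stage; reject on the first failure."""
    return all(stage.accept(window) for stage in stages)

# Toy weak learners based on simple image statistics (for illustration only).
h1 = lambda w: 1.0 if w.std() > 12 else -1.0
h2 = lambda w: 1.0 if w.mean() > 90 else -1.0
h3 = lambda w: 1.0 if np.abs(np.diff(w.mean(axis=2), axis=1)).mean() > 4 else -1.0

# Later stages reuse earlier learners plus new ones, so they grow more complex.
stages = [
    BoostedStage([h1],         [1.0],           threshold=0.5),
    BoostedStage([h1, h2],     [1.0, 0.8],      threshold=1.0),
    BoostedStage([h1, h2, h3], [1.0, 0.8, 1.2], threshold=1.5),
]

window = np.random.randint(0, 255, size=(128, 64, 3)).astype(float)
print(detect(window, stages))
```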

For now, the algorithm works only in binary (yes/no) detection tasks, but the UCSD team hopes to extend its capabilities to detect multiple object types simultaneously.

Source: UCSD and ICCV

4 comments
Mel Tisdale
If I ever by some misfortune find myself riding in a driverless car, I won't give a damn whether what we are about to run into is a human being or not. It could easily be a fridge-freezer that has fallen off the back of a lorry, or some other hefty item, for all I would care. All that matters is that we miss hitting it. In those circumstances microseconds either way could make the difference between there being a collision or not, so the time taken to discard images that it decides are not human seems a dangerous practice.
The only decision of importance is: 'Are we in danger of hitting whatever it is? If so, avoid doing so as safely as conditions allow.'
For a semi-autonomous car, this should be easily dealt with. It would sound a warning from the direction of the problem item, so that the human aural system automatically drew attention to where the problem had presented itself, leaving the human driver to assess the danger and take whatever avoiding action was appropriate. It is also the reason why the steering of the vehicle must be left in the hands of a human being, so that they retain awareness of the situation facing the vehicle at all times and take such responsibility as necessary should the worst occur.
natosoco
You know. If it can recognize us to stop... it can recognize us... to attack. O: !
Really though, this is cool. Yay for safety!
Bob Flint
Presumably it can also read, understand, and react to a small child running, or one who has fallen down. Line of sight looks like it is still a big limitation: it still cannot see through objects, or spot humans hidden behind things such as the car ahead or a car parked beside the road. Previous articles mentioned that screen displays on the back of trucks could help.
Since many cars already have rear-view cameras, add another, perhaps thermal as well as visual, to the front, and broadcast on the rear of vehicles (instead of just a license plate) a small display for the human driver, plus information for the autonomous vehicle so it can also virtually see through one or even several cars in a line waiting at a light.
Stephen N Russell
For all driverless cars, Lisc & produce for.