MIT can now train self-driving cars in a photorealistic simulator
Simulators have been used for years to train human drivers, pilots and astronauts, and now they're being used to train self-driving cars too – and a new system developed by researchers at MIT could be the most promising yet.
The Virtual Image Synthesis and Transformation for Autonomy (VISTA) simulation system means cars don't have to venture out onto real streets straight away. Instead, they can cruise through the virtual world created for them, with an infinite number of steering possibilities to choose from.
This is particularly useful for edge cases: rare incidents like a near miss or getting forced off the road, where there isn't a whole lot of real-world data available for self-driving cars to train on. Inside VISTA, these events can be "experienced" safely.
When the driverless car controller is set off inside the simulation, it's given only a small dataset of real-world, human driving to work from. The controller has to work out for itself how to get from A to B safely, and is rewarded for traveling further and further.
When mistakes are made, the system uses what's known as reinforcement learning to teach the self-driving controller to make a better choice next time. Gradually, it can drive for longer and longer periods without crashing.
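The reward scheme described above can be illustrated with a short sketch. Everything here (the toy environment, the episode loop, the placeholder policy) is a hypothetical stand-in to show the idea of rewarding distance traveled and ending an episode on a crash; it is not VISTA's actual code.

```python
import random

def run_episode(policy, max_steps=1000):
    """Simulate one drive; return total reward (distance) before a crash.

    Toy dynamics: the policy returns a steering value in [-1, 1], and an
    extreme steering choice counts as a "crash", ending the episode.
    """
    position, reward = 0.0, 0.0
    for _ in range(max_steps):
        action = policy(position)
        if abs(action) > 0.9:
            break  # crashed: episode ends, no further reward
        position += 1.0
        reward += 1.0  # rewarded for traveling further
    return reward

def naive_policy(_state):
    # Placeholder untrained policy: random steering in [-1, 1].
    return random.uniform(-1.0, 1.0)

print(run_episode(naive_policy))
```

A reinforcement-learning algorithm would then adjust the policy so that, over many episodes, actions leading to longer crash-free drives become more likely.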
"It's tough to collect data in these edge cases that humans don’t experience on the road," says PhD student Alexander Amini, from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL). "In our simulation, however, control systems can experience those situations, learn for themselves to recover from them, and remain robust when deployed onto vehicles in the real world."
Simulation engines have been used to prep and train self-driving cars before, but there are usually discrepancies between the artificial, simulated world designed by artists and engineers for the simulator, and the real world outside.
In VISTA's case, the simulator is driven by data, so new elements can be synthesized from real data. A convolutional neural network – the sort of AI usually deployed to process images – is used to map out a 3D scene and create a photorealistic representation that the autonomous controller can then respond to.
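A convolutional neural network is built from layers of convolutions: small filters slid across an image to produce feature maps. The sketch below illustrates only that core operation on a tiny grayscale image; it is a generic illustration, not VISTA's actual synthesis network.

```python
def convolve2d(image, kernel):
    """Valid-mode 2D convolution (strictly, cross-correlation, as in most
    deep-learning libraries) of a grayscale image with a small kernel."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            row.append(sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# A 3x3 vertical-edge filter applied to a tiny image containing an edge.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[-1, 0, 1],
          [-1, 0, 1],
          [-1, 0, 1]]
print(convolve2d(image, kernel))  # → [[3, 3]]
```

Stacking many such learned filters, with nonlinearities between layers, is what lets a network like VISTA's recognize and reconstruct scene structure from camera data.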
Other moving objects in the scene, including cars and people, can also be mapped out by the neural networks powering VISTA. It's a departure from the traditional training models, which either follow human-defined rules or try to imitate what human drivers would do.
"We basically say, 'Here's an environment. You can do whatever you want. Just don't crash into vehicles, and stay inside the lanes,'" says Amini.
It seems to work too – a controller transplanted from 10,000 kilometers (6,214 miles) of VISTA training to an actual self-driving car was able to safely navigate streets it hadn't seen before, and recover from near-crash situations (like being halfway off the road). The next stage is to introduce complications, like bad weather or erratic behavior from other elements in a scene.
A paper outlining the system has been published in IEEE Robotics and Automation Letters, and will be presented at the upcoming International Conference on Robotics and Automation (ICRA).