Semiautonomous driving system takes over when drivers make mistakes
We all like to think we're in control ... never more so than when we're behind the wheel of a car, but there are occasions when errors in judgement can lead to a gentle bump, or something far worse. MIT researchers have developed a semiautonomous collision avoidance system where the human driver has full control of the vehicle until the system detects that the car is headed for a collision or is too close to an obstacle for safety. When such a hazard is detected, the system will take control of the vehicle, bring it back within a calculated safe zone, and then hand control back over to the driver.
The so-called intelligent co-pilot system is the work of Sterling Anderson (PhD student at MIT's Department of Mechanical Engineering) and Karl Iagnemma (principal research scientist at the Institute's Robotic Mobility Group). Instead of using path-based control, as in self-parking systems where the driver cedes control entirely so the vehicle can park itself, the system uses selective enforcement of constraints.
"This basis in constraints and corresponding fields of safe travel allows us to do something more than autonomous systems can do," Anderson told Gizmag. "Rather than simply control the vehicle autonomously (which, without a human in the loop is a much simpler proposition), our system is also capable of sharing control with the human driver. Additionally, our approach bases its control actions on threat - the perceived need for intervention - and allows us to tailor the mode and level of intervention to the performance and/or preference of the human driver."
Data gathered by onboard sensors, a front-facing camera and laser rangefinder is analyzed by a custom algorithm, which determines a safe zone where the human driver has full navigational control of the vehicle. Should the semiautonomous safety system detect that the actions of the driver are about to take the vehicle outside of that zone, perhaps heading straight for an obstacle or hazard, it takes over and steers the vehicle back to safety. Once within the zone again, control is handed back to the driver.
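The intervene-only-when-needed behavior described above can be sketched in a few lines. This is a simplified illustration, not the MIT controller (which uses model-predictive constraint enforcement); the function and variable names here are hypothetical, and the "corridor" is reduced to a pair of lateral bounds.

```python
# Minimal sketch of the shared-control loop: the driver's command passes
# through untouched unless the vehicle's predicted position would leave
# the safe corridor, in which case the system's corrective command is
# substituted until the vehicle is back inside the zone.

def within_safe_zone(position, corridor):
    """corridor is a (left, right) pair of lateral bounds;
    position is the vehicle's predicted lateral offset."""
    left, right = corridor
    return left <= position <= right

def control_step(driver_steer, predicted_position, corridor, safe_steer):
    """Return the steering command actually sent to the actuators."""
    if within_safe_zone(predicted_position, corridor):
        return driver_steer   # driver retains full control
    return safe_steer         # system takes over, steering back to safety
```

For example, with a corridor of (-1.0, 1.0) meters, a predicted offset of 0.3 m leaves the driver in charge, while a predicted offset of 1.5 m triggers the corrective command.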
Anderson and Iagnemma have put the system through more than 1,200 trials in Michigan since September 2011, in which test drivers sat in front of a computer monitor showing a forward-facing video feed streamed wirelessly from a heavily modified Kawasaki 4010 Mule out on an obstacle-laden test range. The utility vehicle was equipped with a Velodyne LIDAR, an inertial measurement unit, GPS, an onboard Linux PC for processing the sensor and positioning data, and steering/accelerator/braking actuators.
"Our Kalman filter combines the data provided by the GPS and IMU into a more accurate estimate of the vehicle's true position (gets us down to ~0.5 meters accuracy)," explained Anderson. "Note that because we use the laser to sense obstacles, the relative position of obstacles with respect to the vehicle is known with greater (~0.1 meter) precision. The controller identifies, evaluates, and selects one of the various path homotopies (or 'corridors') available in the environment, designs vehicle position constraints to bound it, combines those position constraints with known limits on the vehicle state and actuators (ie. steering limits, tire friction limits, etc.), and predicts an optimal escape trajectory. Basically, this trajectory tells us how close the vehicle will get to its limits if it is to remain within the safe corridor. We use this prediction to guide when, how, and how much the system intervenes."
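The GPS/IMU fusion Anderson describes can be illustrated with a toy one-dimensional Kalman filter: the IMU dead-reckons the position forward, and each GPS fix corrects the drift, weighted by relative uncertainty. The noise values below are illustrative assumptions, not the team's parameters, and a real implementation would track full vehicle state, not a single coordinate.

```python
# Toy 1D Kalman filter: an IMU-style motion update (predict) fused with
# GPS fixes (update). q and r are assumed noise variances for illustration.

def kalman_step(x, P, u, z, q=0.01, r=0.25):
    """x: position estimate, P: its variance, u: IMU-derived displacement
    since the last step, z: GPS position fix."""
    # Predict: dead-reckon forward using the IMU displacement
    x_pred = x + u
    P_pred = P + q
    # Update: correct with the GPS fix, weighted by the Kalman gain
    K = P_pred / (P_pred + r)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new
```

Repeated steps drive the estimate's variance well below that of the raw GPS fix, which is the sense in which fusing the two sensors "gets us down to" a tighter position estimate than either provides alone.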
Test drivers used a torque-enabled steering wheel and gas/brake pedals to navigate the vehicle over the obstacle course, occasionally receiving instructions from the researchers to head straight for an obstruction and let the system kick in and do its stuff. There were still a few collisions recorded, however.
"System failures that we've experienced to date reflect an experimental platform whose quirks we've identified and (believe we) know how to solve, but which we've largely relegated to later refinement," said Anderson. "In its current configuration and on a challenging obstacle course, the system reduces the occurrence of accidents by over 75 percent, while allowing the driver to decrease his/her course completion time by >30 percent. We believe we can reduce the collision rate to zero with the integration of a tactical-grade IMU (as opposed to the cheap one we're using currently). This will allow us to, for example, more accurately track and avoid obstacles that pass through the LIDAR's ~3 meter [9.8-foot] blind spot. Other changes to our obstacle detection approach (like simply lowering the LIDAR to reduce its blind spot) can also eliminate some of these failures."
A manual override of some sort might be a good idea, so that drivers can take back complete control in the event of a system failure. Interestingly, Anderson observed that test drivers who put complete faith in the system performed better than those who distrusted it. He also noted that drivers unaware the system is operating may attribute effective collision avoidance to their own good driving, which he acknowledged is not necessarily a good thing: novice drivers in particular could build false confidence in their own abilities, hindering the development of genuine skill.
Experts, too, may well find the system too controlling. Imagine a police officer unable to catch up with a fleeing suspect because the onboard system deems the pursuit unsafe. To make the system more adaptable, the researchers have included tweaks to cater for different levels of driving experience.
"As written, our algorithm allows for adaptation to various levels of driver preference or performance," said Anderson. "For those who prefer smoother, safer rides at the expense of some control freedom, the system is more active. Those who need or prefer more freedom can dial back the level of intervention, reducing it to a late-stage backup that does not kick in until the very last minute."
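One simple way to picture that dial is a threat-weighted blend of driver and system commands, where a driver-chosen setting scales how early the system's share of authority ramps up. This is a hypothetical sketch of the idea, not the researchers' algorithm; all names and the linear blend are assumptions.

```python
# Hypothetical threat-scaled shared control: the system's share of control
# authority grows with perceived threat, scaled by a driver-chosen
# aggressiveness setting (1.0 = active co-pilot; near 0 = the system stays
# out of the loop until threat is nearly maximal).

def blended_command(driver_cmd, system_cmd, threat, aggressiveness=1.0):
    """threat in [0, 1]: 0 means comfortably inside the safe corridor,
    1 means the optimal escape trajectory is at the vehicle's limits.
    Returns a weighted mix of the two commands."""
    k = min(1.0, max(0.0, threat * aggressiveness))
    return (1.0 - k) * driver_cmd + k * system_cmd
```

At zero threat the driver's command passes through unchanged; at maximum threat the system's command dominates regardless of the setting, matching the "late-stage backup" behavior Anderson describes.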
They're also looking at the possibility of using the camera, accelerometer and gyro in a dash-mounted smartphone to provide the necessary feedback to the system.
The research was supported by the United States Army Research Office and the Defense Advanced Research Projects Agency. The experimental platform was developed in collaboration with Quantum Signal LLC with assistance from James Walker, Steven Peters and Sisir Karumanchi.
A paper entitled "Constraint-Based Planning and Control for Safe, Semi-Autonomous Operation of Vehicles" was presented at the Intelligent Vehicles Symposium in Spain last month.