MIT system promotes AI from backup to co-pilot
MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) is developing an AI co-pilot for aircraft called Air Guardian that actively co-operates with the pilot, making the computer part of a team instead of an emergency backup.
Flying a modern plane may be exhilarating, but it can also be alarmingly difficult at times. Taking off, landing, flying in crowded airspaces, or dealing with a sudden malfunction can see the pilot facing an overwhelming influx of data from multiple displays and only a fraction of a second to process it all and make a decision.
One example of this was on January 15, 2009 when US Airways Flight 1549 struck a flock of birds while taking off from LaGuardia Airport in New York. Pilot Chesley "Sully" Sullenberger became a hero that day when he made the decision to ditch the Airbus A320 in the Hudson River, saving the lives of the 155 passengers and crew.
The irony of the incident is that, according to an AI expert who reviewed it and prefers to remain anonymous, Sullenberger didn't need to ditch the plane and could have reached an airfield. The problem was that he simply didn't have enough time to properly assess the situation and had to make the best call he could.
A study of the incident found that if the aircraft had been equipped with an AI system, the ditching could have been avoided, thanks to the AI's ability to handle the data overload.
Such AI flight systems have garnered a lot of attention in recent years because of their safety potential, as well as the possibility of replacing human crews on routine cargo flights. However, the usual approach is to treat the AI as something like an emergency warning system. Essentially, its job is to sit in its box monitoring the flight data and kick in if something strays outside the designated safety parameters.
According to MIT, Air Guardian takes a different approach by monitoring not just the aircraft, but also the pilot, so it acts more like a co-pilot than an emergency brake. It does this by tracking the pilot's eye movements and building up "saliency maps," which is a ten-dollar term for charting where the pilot is looking and how much attention each area is getting.
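The idea of a gaze-based saliency map can be sketched in a few lines: treat each eye fixation as a Gaussian "bump" of attention on a grid over the cockpit displays, weighted by how long the gaze dwelt there. This is a minimal illustration of the general concept, not MIT's implementation; the function name, grid size, and parameters are all invented for the example.

```python
import numpy as np

def saliency_map(fixations, shape=(48, 64), sigma=3.0):
    """Build a toy gaze-based saliency map.

    Each fixation is (row, col, dwell_time): it deposits a Gaussian bump
    of attention, weighted by how long the gaze lingered there.
    Names and parameters are illustrative, not from the Air Guardian paper.
    """
    rows, cols = np.mgrid[0:shape[0], 0:shape[1]]
    heat = np.zeros(shape)
    for r, c, dwell in fixations:
        heat += dwell * np.exp(-((rows - r) ** 2 + (cols - c) ** 2)
                               / (2 * sigma ** 2))
    if heat.max() > 0:
        heat /= heat.max()  # normalize so the peak of attention is 1.0
    return heat

# Example: a pilot dwelling on the airspeed tape (upper left of the grid)
# with only a brief glance at another instrument (lower right).
gaze = [(10, 12, 0.8), (11, 13, 0.9), (30, 50, 0.1)]
attention = saliency_map(gaze)
```

A system comparing such a map against regions it considers important at the current phase of flight could flag, for instance, that the glidepath indicator has received almost no attention during an approach.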
This sounds very simple, but it relies on some extremely sophisticated algorithms and on what are called "liquid neural networks," which are very flexible networks that can keep adapting even after they've been trained. They also clear some mathematical bottlenecks, allowing the AI to build a model of what is happening from second to second and learn to cooperate with the pilot.
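What makes these networks "liquid" is that each neuron's state evolves as a differential equation whose effective time constant is gated by the input, so the dynamics reshape themselves as new data streams in. The sketch below is a simplified, assumed form of a liquid time-constant update with random stand-in weights, integrated with a basic Euler step; it illustrates the flavor of the math, not the actual Air Guardian model.

```python
import numpy as np

def ltc_step(x, inp, params, dt=0.01):
    """One Euler step of a simplified liquid time-constant (LTC) layer:

        dx/dt = -x / tau + f(x, inp) * (A - x)

    The nonlinearity f gates both the effective time constant and the
    pull toward the target state A, which is what lets the dynamics
    adapt to the incoming data stream. Weights here are random
    stand-ins, not trained values.
    """
    W, U, b, tau, A = params
    f = 1.0 / (1.0 + np.exp(-(W @ x + U @ inp + b)))  # sigmoid gate in (0, 1)
    dx = -x / tau + f * (A - x)
    return x + dt * dx

rng = np.random.default_rng(0)
n, m = 8, 3  # 8 neurons, 3 input channels
params = (rng.normal(size=(n, n)) * 0.1,  # recurrent weights W
          rng.normal(size=(n, m)) * 0.1,  # input weights U
          np.zeros(n),                    # biases b
          np.full(n, 1.0),                # base time constants tau
          np.ones(n))                     # target states A
x = np.zeros(n)
for t in range(200):  # drive the layer with a slow sine input
    x = ltc_step(x, np.sin(0.1 * t) * np.ones(m), params)
```

One appealing property of this form is that the state stays bounded by construction: the gate is always between 0 and 1, so the neuron is always being pulled back toward a range set by tau and A, which keeps the dynamics stable however the input behaves.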
What this boils down to is that the pilot is the one who flies the plane, so their expertise and experience are put to best use. Meanwhile, Air Guardian monitors the pilot's attention. If they are not paying attention to something important, or are paying too much attention to something else, the AI steps in to avoid potential risks.
The results of this were seen in recent field tests where the pilot and Air Guardian made decisions based on identical images. By teaming human and machine, the risk level of the test flight was reduced and the success rate of navigating between target points increased.
"This system represents the innovative approach of human-centric AI-enabled aviation," said Ramin Hasani, MIT CSAIL research affiliate and inventor of liquid neural networks. "Our use of liquid neural networks provides a dynamic, adaptive approach, ensuring that the AI doesn't merely replace human judgment but complements it, leading to enhanced safety and collaboration in the skies."
A paper describing the research has been published.