
Human fighter pilots set to take on AI drones in aerial dogfights in 2021

Next year may see man vs machine in aerial dogfights for the first time

The first aerial showdown between a human fighter pilot and an autonomous aircraft is slated for July 2021, according to a fascinating interview with the outgoing Director of the US Department of Defense's Joint Artificial Intelligence Center (JAIC).

Speaking to the Mitchell Institute for Aerospace Studies as part of its Aerospace Nation interview series, Lt. General John Shanahan spoke passionately and at length about his work building the JAIC from the ground up, the challenges facing the US armed forces at the dawn of the AI era, the difficulty of bringing next-gen technologies through the notoriously conservative bureaucracies of the military, and the ethics at the heart of any weaponized artificial intelligence program.

As an aside to the nearly hour-long interview, Shanahan mentioned an email conversation he'd been having with Dr. Steven Rogers, Senior Scientist for Automatic Target Recognition and Sensor Fusion at the Air Force Research Laboratory (AFRL), Wright-Patterson AFB, Ohio.

"Cap Rogers and I exchanged emails just this weekend," said Shanahan, "on the work he's doing trying to field, in July of next year, an autonomous system to go up against a human, manned system in some sort of air-to-air. Bold, bold idea."

Shanahan didn't confirm what kind of aircraft would be involved in the challenge: it might entail fitting out an older fighter jet with a tactical autopilot system, or using something like the Kratos XQ-58A Valkyrie combat drone, which is already flying and is intended to serve as an autonomous escort alongside manned F-22 or F-35 fighters.

The unmanned Kratos XQ-58A Valkyrie

Comparing the initiative to early chess matches between world champion Garry Kasparov and IBM's Deep Blue supercomputer, Shanahan said he didn't expect the autonomous system to chalk up its first victory.

"Cap's probably going to have a hard time getting to that flight next year where the machine beats the human," he said. "But go back to DARPA grand challenge. Who finished that first DARPA grand challenge? Nobody. Nobody came close. It might've been about a mile down the road. But how much has played out since then? This is less about beating a human in 2021 – if he does it, great, that'll be a record all by itself – but it's about learning about what it takes to build a different kind of system that's not the kind of thing we're used to building in the past."

"The future of warfare is algorithm against algorithm," Shanahan said. In some sense, it always has been, with humans, human strategies and human organizational systems being the best algorithms we've had available. But looking at how quickly AI has mastered incredibly complex systems like chess, go and others, it seems inevitable that our meatware will quickly become outdated.

The full interview is well worth a watch, but we've transcribed a few other comments from Lt. Gen. Shanahan that we felt were of note.

On AI and the Department of Defense

"It is my conviction, and my deep passion that AI will transform the character of warfare and the department of defense in the next 20 years. There is no part of the department that will not be impacted by this, from the back office to the battlefield, from undersea to cyberspace and outer space, and all points in between. Everything could be made better through the application of AI."

On the Joint Artificial Intelligence Center

"As recently as June 2018, the JAIC boiled down to four volunteers with no money. Today, I'm proud to say, we've grown to 185 people, with a US$1.3-billion budget. We've grown so fast that we've exceeded our current spaces and we've moved into a new facility. All of that's happened in 18 months. For the Department of Defense, that's about as fast of a growth curve as you could possibly imagine."

On whether AI is ready for military service yet

"We used to have these discussions like 'hey, this technology's still pretty brittle, it's a bit fragile. Shouldn't we just wait a little while until the technology is better?' No. The absolute worst answer is to stop and wait for technology to catch up. You've gotta learn how to do it."

On what an AI-enabled military future might look like

"In general, smaller, cheaper, more disposable, swarming autonomous systems. And with autonomy comes AI-enabled autonomy. There's a tendency to conflate autonomy with AI-enabled autonomy, but they're two very different things. There's a lot of autonomous systems in the DOD today. There are very few, I'd say really no significant AI-enabled systems ... so, you might have a manned airplane that's quarterbacking a lot of autonomous, swarming systems. I think the only failure we'll have is a failure of imagination. Anything's on the table."

On the ethics involved in AI-enabled warfare

"We have a grounding in the principles of ethics, that we're not going to go out there and just use these things without the standard foundational elements of the Law of Armed Conflict, the International Humanitarian Law … we take that into account from the very beginning. But we have to address it. In fact, we're now starting to sense that we're going to be far enough along in our Joint Warfighting Mission that we're going to have to sit down and do some test cases to work through what's acceptable in the field."

"We get accused in this department of going after killer robots. No commander would want robots with self agency just indiscriminately out on the battlefield making life and death decisions. That would not happen. You would have rules of engagement, all these other things that we do for a living. We'll take those into account."

"But it is developing fast enough that we have to look at the ethical use of artificial intelligence. We're not just going to be leading the government, we're going to lead the world in these things. Because what we don't want to happen is to have China take over this conversation, saying the right things but doing something entirely different. And we know that would be the case."

"We need to put some big bets down. And they are big bets. They're not risk-free bets. But when we look at what China and Russia are doing, especially China, where they're investing, I almost say we can't afford to do it any other way. We've gotta build toward that AI-enabled force of the future, or I think we have an unacceptably high risk of losing. And we're not used to doing that."

On the future role of humans, and how the look of these systems will change

"A lot of people have pondered over the last couple of decades, where do we really need a human in these systems? Are we trying to build the next manned fighter, as opposed to building the system with the best possible capability for the environment it'll face in, say, Indo-Pacific?"

"If you look at the MQ9 ground control station, are you really trying to make that look like an F-16 cockpit, or just the most functional use of a keyboard and a couple of other things, because that's what the world has evolved to? What we have to do is take account of a different mindset, of people who've grown up differently than a lot of people like me, who have three and a half decades of this behind me and all the old bad habits and patterns we grew up with."

"Maybe we shouldn't be thinking about a 65-foot (20-m) wingspan. Maybe it is a small, autonomous swarming capability. But then I've gotta solve for battery, I've gotta solve for size, weight and power problems, which are going to be a short-term challenge."

On how the JAIC is looking to the business world for ideas

"We should also be taking the best of the lessons coming out of the commercial car industry. And I like talking about this, because it's a cautionary tale. Ten companies, I think about 13 billion dollars or so over the last decade, and there is no level 4 autonomous car available on the road today. A cautionary tale. On the other hand, that's a decade worth of experience we should be pulling into the military. I think it's less about autonomy, and more about all the lessons they're learning by building those capabilities out."

Source: Mitchell Institute for Aerospace Studies via Air Force Magazine

4 comments
Brian M
It's not going to be about the AI, that will come; it's about the fact that a dedicated AI fighter design does not have to worry about the limitations of the human body: G-forces, survivability, concentration, fear etc., so it's going to outperform a manned aircraft based on that alone - hardly a contest.

The downside is you don't have a human in the local part of the command chain. What happens if communication links are lost? In a manned aircraft, 'human' decisions can still be made on new information: for example, civilians in the attack area, changing levels of acceptable collateral damage, enemy bluffing etc. Can and should those decisions be automated?
Username
Glad to read Cap America is involved!
WB
I think we are half a year away from a level 4 autonomous car... Tesla... look at their recent stock price to get a sense of what's coming - also best-selling cars ever. If you buy a new car, look at a Tesla if you don't want to waste your money. It's that simple... the stock price tells the entire story.
akarp
The U.S. is finally talking about police reform and discussing how to use resources to create and build 'safe communities' other than arrests and prison.
Maybe we can do the same with the military budget. What else can we do with this technology other than make 'boom booms'?