OpenAI, Meta, and Anthropic partner with US military and its allies

Anduril's Bolt-M quadcopter: one of the company's many autonomous, AI-powered lethal weapons systems

Three of America's leading AI companies have now signed up to share their technology with the US defense forces and military contractors, even after initially insisting they wouldn't – and the age of autonomous warfare now seems close at hand.

On Dec 4th, 2024, in a reversal of its longstanding policy against military applications, OpenAI announced a partnership with Anduril Industries, a defense technology company that specializes in autonomous munitions and reconnaissance drones, rocket motors, and various unmanned aerial vehicles (UAVs) and uncrewed underwater vehicles (UUVs) ... and whose Lattice swarm management software platform can control them all at once.

"Lattice accelerates complex kill chains by orchestrating machine-to-machine tasks at scales and speeds beyond human capacity," is the tagline for the Lattice Command and Control software, posted on Anduril's website.

Anduril has quite an impressive portfolio of machines capable of raining hellfire with the nudge of a joystick and a push of a button from any bunker in any part of the world at any time. Most have intimidating names, like Fury, "... a high-performance, multi-mission group 5 autonomous air vehicle (AAV) ..."

"Group 5" refers to a categorization system used by the US Department of Defense (DoD) to classify drones based on their size, weight, and capabilities. Group 5 is the cream of the crop. Examples of Group 5 drones include the MQ-9 Reaper and RQ-4 Global Hawk.

Anduril also designed and manufactures the Bolt-M, a 12-lb (5.4-kg) drone that looks very much like a hobbyist DJI drone you'd see flying over football games or scenic tourist areas. Except the "M" in Bolt-M stands for "munitions," and it carries a 3-lb (1.4-kg) anti-personnel/anti-materiel payload capable of serious destruction.

The Bolt-M can fly, with pin-point accuracy, to its target before exploding with an array of different munition types

On May 16, 2023, during a Senate Judiciary oversight hearing, Sam Altman warned of the dangers of AI technology, saying "If this technology goes wrong, it can go quite wrong," and could cause "significant harm to the world." Altman also went on to say "GPT-4 is more likely to respond helpfully and truthfully, and refuse harmful requests, than any other widely deployed model of similar capability."

Granted, much has changed over the last 18 months. For example, in January of 2024, OpenAI updated its usage policies, removing the explicit prohibition against using AI models for "military and warfare" applications, opening the door for last week's announcement.

While revenue for OpenAI is projected at US$3.7 billion in 2024, operational costs are substantial and the company is expected to lose roughly $5 billion this year. AI model training is a significant portion of that cost. Government and defense-related contracts could certainly look like an enticing way to offset these losses.

Just hours after last week's announcement of the OpenAI-Anduril partnership, OpenAI employees raised concerns not only over the ethical implications of applying artificial intelligence to military operations, but also over how the partnership could damage the company's reputation.

OpenAI assures the public that the technology is strictly used for defensive purposes, and not for offensive tactics. However, one employee of the multi-billion-dollar company was reportedly quick to point out that Skynet – the fictional AI software in the original Terminator movies that led to humanity's near-destruction – was also initially deployed as an air-defense tool for North America.

Why the partnership between a defense company and the maker of ChatGPT?

"The first drone war."

When Russia again invaded Ukraine in February of 2022, Russia held nearly every military advantage going in: a larger combat-ready force, more ammunition, armored vehicles, and a capable air force. By the numbers, Russia should have made short work of its objectives in Ukraine.

But Ukraine fought back. And small consumer-grade UAVs – jury-rigged DJI Mavic and similar drones – became one of the most effective weapons in Ukraine's arsenal, with the country even going so far as to crowd-fund additional drones and parts.

It's estimated that one in every three strikes against Russian tanks and vehicles was carried out by a hobby drone dropping munitions from above or a hobby kamikaze drone loaded with an explosive.

"Army of Drones" posted this graphic to its Facebook page showing the amount of personnel, vehicles, and infrastructure taken out by Ukrainian-built drones, illustrating how warfare has evolved
"Army of Drones" posted this graphic to its Facebook page showing the amount of personnel, vehicles, and infrastructure taken out by Ukrainian-built drones, illustrating how warfare has evolved

The concept had been field-tested during Russia's first invasion of Ukraine in 2014, but by late 2023, nearly all Ukrainian combat brigades had at least one drone unit to carry out missions. A sub-$1,000 drone could incapacitate multiple Russian T-80 tanks, worth upwards of $2 million each, with relative ease and little danger to an operator who could be miles away.

This leap to inexpensive, highly available, extremely precise and absolutely devastating use of technology has changed the face of modern warfare. And how to defend against it.

The "problem"

The US military is currently unmatched in terms of resources, global reach, nuclear arsenal, and advanced weaponry. China has the most manpower, with the largest standing army in the world.

According to Anduril, the AI race between the two countries may be the deciding factor when it comes to which superpower ultimately dominates the 21st-century battlefield and defines the future of global security for decades to come.

An OpenAI-Anduril partnership was thus born, though it ran against OpenAI's original usage policies.

Workers at Anduril Industries on 9/11/22

The "solution"

The OpenAI and Anduril collaboration focuses on countering aerial threats, such as drones and manned aircraft. Implementing AI can improve real-time threat detection, assessment, and response. Training AI on Anduril's data library will reduce operator burden while increasing situational awareness to protect the United States and its allies.

"Our partnership with OpenAI will allow us to utilize their world-class expertise in artificial intelligence to address urgent Air Defense capability gaps across the world," said Brian Schimpf, co-founder & CEO of Anduril Industries, in a press release. "Together, we are committed to developing responsible solutions that enable military and intelligence operators to make faster, more accurate decisions in high-pressure situations."

Sam Altman, CEO of OpenAI added, "Our partnership with Anduril will help ensure OpenAI technology protects U.S. military personnel and will help the national security community understand and responsibly use this technology to keep our citizens safe and free."

Certainly, autonomous drone swarm technology has come a long way in the last few years, as evidenced by a spectacular 10,000-drone aerial display over Shenzhen.

OpenAI isn't the only one

This news comes on the heels of Meta opening up its AI language model, Llama, to US government agencies and defense contractors such as Lockheed Martin. Other defense-oriented contractors, like Booz Allen Hamilton, Palantir, and even Anduril, are on the list of companies Meta will be providing its AI technology to – an about-face from Meta's previous policy: although Llama is an open source project, it had previously been off-limits for military use.

"As an American company, and one that owes its success in no small part to the entrepreneurial spirit and democratic values the United States upholds, Meta wants to play its part to support the safety, security and economic prosperity of America – and of its closest allies too," writes Nick Clegg, President of Global Affairs for Meta.

Days after Meta's announcement, Anthropic, the maker of Claude AI, also announced its partnership with Palantir Technologies, which has deep ties to defense, intelligence, and law enforcement agencies.

Palantir has been criticized in the past for its secretive nature, its surveillance programs, and its work with Immigration and Customs Enforcement (ICE). It has also received accolades for helping track the spread of the virus during the COVID pandemic and for aiding FBI and CIA counterterrorism operations.

OpenAI, Meta, and Anthropic had all initially disallowed the usage of their technology for military purposes, only to shift course.

The list goes on

Microsoft, Google, Oracle, and even Amazon are household names: powerful companies involved in nearly every facet of day-to-day life, no matter what you do or where you live. Each also has a hand in the Department of Defense (DoD).

The Joint Warfighting Cloud Capability (JWCC), which replaced the Microsoft-awarded Joint Enterprise Defense Infrastructure (JEDI) contract in December of 2022, has a $9-billion budget earmarked for the four companies mentioned above.

What is JWCC? It's a cloud-based venture that covers all DoD missions, from high-level decision-making to tactical operations, and contains secret and top-secret classified data. It also provides interoperability for artificial intelligence and machine learning (ML) across all DoD systems.

Each company has something unique to offer the Joint Warfighting program, outside of its cloud services. Google's AI can accelerate "time to decision by up to 30 times." Amazon touts "advanced data analytics" and "advisory and assistance services." Microsoft highlighted the 165,000 miles (265,550 km) of undersea and fiber optic cables it owns. Oracle offers "predictive analysis," monitoring, and automation.

"This partnership is both promising and thought-provoking." - chatGPT

An LLM's perspective:

I copied Anduril's press release over to ChatGPT and asked "Tell me how you feel about this?":

"This partnership is both promising and thought-provoking."

It also went on to say "AI is great at making quick decisions, but it isn’t immune to errors. A misclassification or data bias could lead to catastrophic decisions – imagine a friendly drone mistaken for a threat and shot down."

I was quick to note that the only catastrophic consequence ChatGPT mentioned was a friendly drone lost to mistaken identity – it never mentioned the potential for accidental loss of human or other biological life.

The race to integrate AI into military policy, strategy, and execution is well underway and there's little the common folk can do to change it. Partnerships between the military and private tech companies are nothing new, though the introduction of artificial intelligence is.

The question isn't so much whether AI will define modern and future warfare, but whether we're ready for the consequences.

So long as Gemini – with its recent draconian "please die" outburst at a student asking for a little homework help – doesn't get anywhere near access to missiles ...

Source: Anduril Industries
