Pentagon's AI initiatives accelerate hard decisions on lethal autonomous weapons.

The Longshot, an air-launched unmanned aircraft that General Atomics is developing with the Defense Advanced Research Projects Agency for use in tandem with piloted Air Force jets, is displayed at the Air & Space Forces Association Air, Space & Cyber Conference, Wednesday, Sept. 13, 2023 in Oxon Hill, Md. Pentagon planners envision using such drones in "human-machine teaming" to overwhelm an adversary. But to be fielded, developers will need to prove the AI tech is reliable and trustworthy enough. Credit: AP Photo/Alex Brandon

Artificial intelligence employed by the U.S. military has piloted pint-sized surveillance drones in special operations forces' missions and helped Ukraine in its war against Russia. It tracks soldiers' fitness, predicts when Air Force planes need maintenance and helps keep tabs on rivals in space.

Now, the Pentagon is intent on fielding multiple thousands of relatively inexpensive, expendable AI-enabled autonomous vehicles by 2026 to keep pace with China. The ambitious initiative—dubbed Replicator—seeks to "galvanize progress in the too-slow shift of U.S. military innovation to leverage platforms that are small, smart, cheap, and many," Deputy Secretary of Defense Kathleen Hicks said in August.

While its funding is uncertain and details vague, Replicator is expected to accelerate hard decisions on what AI tech is mature and trustworthy enough to deploy—including on weaponized systems.

There is little dispute among scientists, industry experts and Pentagon officials that the U.S. will within the next few years have fully autonomous lethal weapons. And though officials insist humans will always be in control, experts say advances in data-processing speed and machine-to-machine communications will inevitably relegate people to supervisory roles.

That's especially true if, as expected, lethal weapons are deployed en masse in drone swarms. Many countries are working on them, and none of China, Russia, Iran, India or Pakistan has signed a U.S.-initiated pledge to use military AI responsibly.

It's unclear whether the Pentagon is currently formally assessing any fully autonomous lethal weapons system for deployment, as a 2012 directive requires. A Pentagon spokeswoman would not say.

Paradigm shifts

Replicator highlights immense technological and personnel challenges for Pentagon procurement and development as the AI revolution promises to transform how wars are fought.

"The Department of Defense is struggling to adopt the AI developments from the last machine-learning breakthrough," said Gregory Allen, a former top Pentagon AI official now at the Center for Strategic and International Studies think tank.

The Pentagon's portfolio boasts more than 800 AI-related unclassified projects, many still in testing. Typically, machine learning and neural networks are helping humans gain insights and create efficiencies.

"The AI that we've got in the Department of Defense right now is heavily leveraged and augments people," said Missy Cummings, director of George Mason University's robotics center and a former Navy fighter pilot." "There's no AI running around on its own. People are using it to try to understand the fog of war better."

A Blackhawk helicopter circles Peterson Air Force Base before a visit by then Vice President Mike Pence, on April 18, 2020, in Colorado Springs, Colo. AI's predictive capabilities are helping the Air Force keep its fleet aloft, anticipating when key aircraft need maintenance. Credit: AP Photo/David Zalubowski, File

Space, war's new frontier

One domain where AI-assisted tools are tracking potential threats is space, the latest frontier in military competition.

China envisions using AI, including on satellites, to "make decisions on who is and isn't an adversary," U.S. Space Force chief technology and innovation officer Lisa Costa told an online conference this month.

The U.S. aims to keep pace.

An operational prototype called Machina, used by Space Force, autonomously keeps tabs on more than 40,000 objects in space, orchestrating thousands of data collections nightly with a global telescope network.

Machina's algorithms marshal telescope sensors. Computer vision and large language models tell them what objects to track. And AI choreographs the observations, drawing instantly on astrodynamics and physics datasets, Col. Wallace 'Rhet' Turnbull of Space Systems Command told a conference in August.
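
For illustration only, the scheduling problem such a system must solve can be sketched in a few lines of Python: rank cataloged objects by priority and how long they have gone unobserved, then parcel them out to a telescope network each night. The object IDs, priority scores and greedy ranking rule below are invented for this example; they are not drawn from Machina.

```python
# Hypothetical sketch of autonomous sensor tasking, loosely inspired by the
# description of Machina above. Object IDs, priorities and the scoring rule
# are invented for illustration; this is not Space Force code.
from dataclasses import dataclass

@dataclass
class SpaceObject:
    norad_id: int
    priority: float              # mission-assigned importance, 0..1
    hours_since_observed: float

@dataclass
class Telescope:
    site: str
    slots_tonight: int           # observations the site can make tonight

def plan_night(objects: list[SpaceObject], scopes: list[Telescope]) -> dict[str, list[int]]:
    """Greedily assign the stalest, highest-priority objects to telescope slots."""
    ranked = sorted(objects, key=lambda o: o.priority * o.hours_since_observed, reverse=True)
    plan: dict[str, list[int]] = {s.site: [] for s in scopes}
    i = 0
    for scope in scopes:
        for _ in range(scope.slots_tonight):
            if i >= len(ranked):
                return plan
            plan[scope.site].append(ranked[i].norad_id)
            i += 1
    return plan

catalog = [SpaceObject(25544, 0.9, 3.0), SpaceObject(43013, 0.4, 30.0), SpaceObject(48274, 0.7, 12.0)]
network = [Telescope("Socorro", 2), Telescope("Maui", 1)]
print(plan_night(catalog, network))  # stale, high-priority objects get observed first
```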

Another AI project at Space Force analyzes radar data to detect imminent adversary missile launches, he said.

Maintaining planes and soldiers

Elsewhere, AI's predictive powers help the Air Force keep its fleet aloft, anticipating the maintenance needs of more than 2,600 aircraft including B-1 bombers and Blackhawk helicopters.

Machine-learning models identify possible failures dozens of hours before they happen, said Tom Siebel, CEO of Silicon Valley-based C3 AI, which has the contract. C3's tech also models the trajectories of missiles for the U.S. Missile Defense Agency and identifies insider threats in the federal workforce for the Defense Counterintelligence and Security Agency.
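
As a sketch of the general technique rather than of C3 AI's proprietary system, predictive maintenance amounts to training a classifier on historical telemetry and flagging aircraft whose predicted failure probability crosses a review threshold. The features, labels and threshold below are synthetic assumptions.

```python
# Illustrative predictive-maintenance sketch using scikit-learn. The telemetry
# features, failure labels and 0.5 review threshold are synthetic assumptions,
# not details of any fielded Air Force system.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
# Synthetic per-flight telemetry: vibration level, oil temperature, engine cycles.
X = rng.normal(size=(n, 3))
# Synthetic label: 1 if a component failed within the next 48 flight hours.
y = ((0.8 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n)) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Flag aircraft whose predicted failure probability exceeds the review threshold.
risk = model.predict_proba(X_test)[:, 1]
print(f"aircraft flagged for inspection: {(risk > 0.5).sum()} of {len(risk)}")
```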

Among health-related efforts is a pilot project tracking the fitness of the Army's entire Third Infantry Division—more than 13,000 soldiers. Predictive modeling and AI help reduce injuries and increase performance, said Maj. Matt Visser.

Brandon Tseng, president and co-founder of Shield AI, is shown Monday, Sept. 11, 2023, at the Air, Space, Cyber Conference at National Harbor, Maryland. Shield AI and competitor Anduril are each backed by hundreds of millions in venture capital funding. Both have a software-first approach in their military autonomy product development and have obtained uncrewed drones in acquisitions or partnered with aircraft makers. Credit: AP Photo/Frank Bajak

Aiding Ukraine

In Ukraine, AI provided by the Pentagon and its NATO allies helps thwart Russian aggression.

NATO allies share intelligence from data gathered by satellites, drones and humans, some aggregated with software from U.S. contractor Palantir. Some data comes from Maven, the Pentagon's pathfinding AI project now mostly managed by the National Geospatial-Intelligence Agency, say officials including retired Air Force Gen. Jack Shanahan, the inaugural Pentagon AI director.

Maven began in 2017 as an effort to process video from drones in the Middle East—spurred by U.S. Special Operations forces fighting ISIS and al-Qaeda—and now aggregates and analyzes a wide array of sensor- and human-derived data.

AI has also helped the U.S.-created Security Assistance Group-Ukraine organize logistics for military assistance from a coalition of 40 countries, Pentagon officials say.

All-Domain Command and Control

To survive on the battlefield these days, military units must be small, mostly invisible and fast-moving, because exponentially growing networks of sensors let anyone "see anywhere on the globe at any moment," then-Joint Chiefs chairman Gen. Mark Milley observed in a June speech. "And what you can see, you can shoot."

To more quickly connect combatants, the Pentagon has prioritized the development of intertwined battle networks—called Joint All-Domain Command and Control—to automate the processing of optical, infrared, radar and other data across the armed services. But the challenge is huge and fraught with bureaucracy.
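
One small piece of that automation can be illustrated in code: deciding when a radar return, an infrared detection and an optical sighting are reports of the same object. The toy sketch below uses an invented report format and a crude distance gate; real multi-sensor fusion is far more sophisticated.

```python
# Toy sketch of cross-sensor track correlation, one ingredient of the battle
# networks described above. The report format and 0.05-degree gate are invented
# for illustration and bear no relation to actual JADC2 systems.
from itertools import combinations

# (sensor, report_id, latitude, longitude) reports from three sensor types.
reports = [
    ("radar",    "r1", 36.10, -115.20),
    ("infrared", "i1", 36.11, -115.19),
    ("optical",  "o1", 41.50, -112.00),
]

def same_object(a, b, gate_deg=0.05):
    """Crude nearest-neighbor gate: treat sufficiently close reports as one object."""
    return abs(a[2] - b[2]) < gate_deg and abs(a[3] - b[3]) < gate_deg

# Merge correlated reports into fused tracks with a simple union-find.
parent = {r[1]: r[1] for r in reports}

def find(x):
    while parent[x] != x:
        x = parent[x]
    return x

for a, b in combinations(reports, 2):
    if same_object(a, b):
        parent[find(a[1])] = find(b[1])

tracks = {}
for r in reports:
    tracks.setdefault(find(r[1]), []).append(r)
print(tracks)  # two fused tracks: radar+infrared on one object, optical on another
```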

Christian Brose, a former Senate Armed Services Committee staff director now at the defense tech firm Anduril, is among military reform advocates who nevertheless believe they "may be winning here to a certain extent."

"The argument may be less about whether this is the right thing to do, and increasingly more about how do we actually do it—and on the rapid timelines required," he said. Brose's 2020 book, "The Kill Chain," argues for urgent retooling to match China in the race to develop smarter and cheaper networked weapons systems.

To that end, the U.S. military is hard at work on "human-machine teaming." Dozens of uncrewed air and sea vehicles currently keep tabs on Iranian activity. U.S. Marines and Special Forces also use Anduril's autonomous Ghost mini-copter, sensor towers and counter-drone tech to protect American forces.

Industry advances in computer vision have been essential. Shield AI's software lets drones operate without GPS, communications or even remote pilots. That capability is the key to its Nova, a quadcopter that U.S. special operations units have used in conflict areas to scout buildings.
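
The underlying idea, sketched here as generic visual odometry rather than Shield AI's actual software, is to match features between consecutive camera frames and recover the aircraft's motion from geometry alone, with no GPS. The function below assumes OpenCV, two grayscale frames and a known camera intrinsics matrix K.

```python
# Generic GPS-denied visual-odometry sketch with OpenCV; an assumption-laden
# illustration of the technique, not Shield AI's Nova software.
import cv2
import numpy as np

def relative_motion(frame1, frame2, K):
    """Estimate rotation R and translation direction t between two grayscale frames."""
    # Detect and describe features in both frames.
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(frame1, None)
    kp2, des2 = orb.detectAndCompute(frame2, None)

    # Match ORB descriptors and keep the strongest correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Recover camera motion from the essential matrix (translation is up to scale).
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t
```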

On the horizon: The Air Force's "loyal wingman" program intends to pair piloted aircraft with autonomous ones. An F-16 pilot might, for instance, send out drones to scout, draw enemy fire or attack targets. Air Force leaders are aiming for a debut later this decade.

Troops wait for President Barack Obama to arrive at the Third Infantry Division Headquarters, on April 27, 2012, at Fort Stewart, Ga. Among health-related efforts is a pilot project involving the entire Third Infantry Division — more than 13,000 soldiers. Tracking soldiers' physical fitness, predictive modeling and AI are used to reduce injuries and increase performance. Credit: AP Photo/Carolyn Kaster, File

The race to full autonomy

The "loyal wingman" timeline doesn't quite mesh with Replicator's, which many consider overly ambitious. The Pentagon's vagueness on Replicator, meantime, may partly intend to keep rivals guessing, though planners may also still be feeling their way on feature and mission goals, said Paul Scharre, a military AI expert and author of "Four Battlegrounds."

Anduril and Shield AI, each backed by hundreds of millions in venture capital funding, are among companies vying for contracts.

Nathan Michael, chief technology officer at Shield AI, estimates the company will have an autonomous swarm of at least three uncrewed aircraft ready in a year, using its V-BAT aerial drone. The U.S. military currently uses the V-BAT—without an AI mind—on Navy ships, on counter-drug missions and in support of Marine Expeditionary Units, the company says.

It will take some time before larger swarms can be reliably fielded, Michael said. "Everything is crawl, walk, run—unless you're setting yourself up for failure."

The only weapons systems that Shanahan, the inaugural Pentagon AI chief, currently trusts to operate autonomously are wholly defensive, like Phalanx anti-missile systems on ships. He worries less about autonomous weapons making decisions on their own than about systems that don't work as advertised or kill noncombatants or friendly forces.

The department's current chief digital and AI officer, Craig Martell, is determined not to let that happen.

"Regardless of the autonomy of the system, there will always be a responsible agent that understands the limitations of the system, has trained well with the system, has justified confidence of when and where it's deployable—and will always take the responsibility," said Martell, who previously headed machine-learning at LinkedIn and Lyft. "That will never not be the case."

As to when AI will be reliable enough for lethal autonomy, Martell said it makes no sense to generalize. For example, Martell trusts his car's adaptive cruise control but not the tech that's supposed to keep it from changing lanes. "As the responsible agent, I would not deploy that except in very constrained situations," he said. "Now extrapolate that to the military."

Martell's office is evaluating potential generative AI use cases—it has a special task force for that—but focuses more on testing and evaluating AI in development.

One urgent challenge, says Jane Pinelis, chief AI engineer at Johns Hopkins University's Applied Physics Lab and former chief of AI assurance in Martell's office, is recruiting and retaining the talent needed to test AI tech. The Pentagon can't compete on salaries. Computer science Ph.D.s with AI-related skills can earn more than the military's top-ranking generals and admirals.

Testing and evaluation standards are also immature, a recent National Academy of Sciences report on Air Force AI highlighted.

Might that mean the U.S. will one day field, under duress, autonomous weapons that don't fully pass muster?

"We are still operating under the assumption that we have time to do this as rigorously and as diligently as possible," said Pinelis. "I think if we're less than ready and it's time to take action, somebody is going to be forced to make a decision."

© 2023 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.
