
Countries including the United States and China called Thursday for urgent action to regulate the development and growing use of artificial intelligence in warfare, warning that the technology "could have unintended consequences".

A two-day meeting in The Hague involving more than 60 countries took the first steps towards establishing international guidelines on the use of AI on the battlefield, aimed at an eventual agreement similar to those governing chemical and nuclear weapons.

"AI offers great opportunities and has extraordinary potential as an enabling technology, enabling us among other benefits to make powerful use of previously unimaginable quantities of data and improving ," the countries said in a joint call to action after the meeting.

But they warned: "There are concerns worldwide around the use of AI in the military domain and about the potential unreliability of AI systems, the issue of human involvement, the lack of clarity with regards to liability and potential unintended consequences."

The roughly 2,000 delegates, drawn from governments and other sectors, also agreed to launch a global commission to bring clarity to the use of AI in warfare and set down guidelines.

Militarily, AI is already used for reconnaissance, surveillance and analysis, and could eventually be used to select targets autonomously, for example by "swarms" of drones sent into enemy territory.

China was invited to the conference as a key player in tech and AI, Dutch officials said, but Russia was not because of its invasion of Ukraine almost a year ago.

"We've clearly established the urgent nature of this subject. We now need to take further steps," Dutch Foreign Minister Wopke Hoekstra said at the conference's end.

Although experts say a treaty regulating the use of AI in war may still be a long way off, attendees agreed that guidelines urgently needed to be established.

"In the end it's always the human who needs to make the decision" on the battlefield, General Joerg Vollmer, a former senior NATO commander, told delegates.

"Whatever we're talking about, AI can be helpful, can be supportive, but never let the human out of the responsibility they have to bear—never, ever hand it over to AI," Vollmer said in a panel discussion.