September 27, 2022


How to stop cities from being turned into AI jungles

In the City of London, security cameras can even be found in cemeteries. In 2021, the mayor’s office launched an effort to establish guidelines for research around emerging technology. Credit: Acabashi/Wikimedia, CC BY

As artificial intelligence grows more ubiquitous, both its potential and the challenges it presents are coming increasingly into focus. How we balance the risks and opportunities is shaping up as one of the defining questions of our era. In much the same way that cities have emerged as hubs of innovation in culture, politics, and commerce, so too are they defining the frontiers of AI governance.

Some examples of how cities have been taking the lead include the Cities Coalition for Digital Rights, the Montreal Declaration for Responsible AI, and the Open Dialogue on AI Ethics. Others can be found in San Francisco's ban of facial-recognition technology, and New York City's push for regulating the sale of automated hiring systems and creation of an algorithms management and policy officer. Urban institutes, universities and other educational centers have also been forging ahead with a range of AI ethics initiatives.

These efforts point to an emerging paradigm that has been referred to as AI Localism. It's a part of a larger phenomenon often called New Localism, which involves cities taking the lead in regulation and policymaking to develop context-specific approaches to a variety of problems and challenges. We have also seen an increased uptake of city-centric approaches within international law frameworks.

In so doing, municipal authorities are filling gaps left by insufficient state, national, or global governance frameworks for AI and other complex issues. Recent years, for example, have seen the emergence of "broadband localism," in which local governments address the digital divide, and "privacy localism," a response to the challenges posed by the increased use of data for law enforcement and recruitment.

AI Localism encompasses a wide variety of issues, stakeholders, and contexts. In addition to bans on AI-powered facial recognition, local governments and institutions are looking at procurement rules governing AI use by public entities, public registries of AI systems, and public education programs on AI. But even as initiatives and case studies multiply, we still lack a systematic method to assess their effectiveness—or even the very need for them. This limits policymakers' ability to develop appropriate regulation and, more generally, stunts the growth of the field.

Building an AI Localism framework

Below are ten principles to help systematize our approach to AI Localism. Considered together, they add up to an incipient framework for implementing and assessing initiatives around the world:

AI Localism is an emergent area, and both its practice and research remain in flux. The technology itself continues to change rapidly, presenting a moving target for governance and regulation. That very state of flux highlights the need for the type of framework outlined above. Rather than playing catch-up, responding reactively to successive waves of technological innovation, policymakers can respond more consistently, and more responsibly, from a principled bedrock that takes into account the often competing needs of various stakeholders.

Provided by The Conversation
