Research news on AI governance

AI governance addresses the design, implementation, and evaluation of the legal, institutional, and procedural mechanisms that guide how artificial intelligence systems are developed and deployed. It spans national and international regulatory frameworks, management-based regulation, and dynamic governance models aimed at mitigating societal and existential risks. The field draws together AI ethics, safety standards, red-teaming, and accountability tools, and relies on multistakeholder participation to set global norms, establish red lines, and align AI infrastructure and literacy initiatives with evidence-based public policy.

Machine learning & AI

US to assess new AI models before their release

The US government announced on Tuesday, in a policy shift, that it will have access to tech giants' new AI models so it can evaluate them before they are released.

Machine learning & AI

New report looks at how AI is impacting software development

Generative AI tools are rapidly transforming how software is built—and raising new risks in the process, according to a new TechBrief from the Association for Computing Machinery's Technology Policy Council (TPC) on the rise ...

Business

EU tells Google to open Android to AI rivals

The EU on Monday laid out measures it wants Google to take to open its operating system to rival AI services, a move the US tech giant criticized.
