Page 4: Research news on AI governance

AI governance addresses the design, implementation, and evaluation of the legal, institutional, and procedural mechanisms that guide how artificial intelligence systems are developed and deployed. It encompasses national and international regulatory frameworks, management-based regulation, and dynamic governance models aimed at mitigating societal and existential risks. The domain integrates AI ethics, safety standards, red-teaming, and accountability tools, and involves multistakeholder participation in setting global norms, establishing red lines, and aligning AI infrastructure and literacy initiatives with evidence-based public policy.

Security

Why the future of AI depends on trust, safety, and system quality

When Daniel Graham, an associate professor in the University of Virginia School of Data Science, talks about the future of intelligent systems, he does not begin with the usual vocabulary of cybersecurity or threat mitigation. ...

Security

How do we make sure AI is fair, safe, and secure?

AI is ubiquitous now—from interpreting medical results to driving cars, not to mention answering every question under the sun as we search for information online. But how do we know it's safe to use, and that it's not ...
