Page 28: Research news on AI alignment

AI alignment examines how artificial systems acquire, represent, and act on goals, values, and social norms, and why their behavior often diverges from human expectations. Work in this area studies systematic failures such as bias, sycophancy, hallucinations, deceptive or selfish reasoning, and cultural or linguistic inequities, as well as limitations in commonsense, emotion, and social understanding. It also develops methods for preference learning, norm-following, interpretability, and reliability guarantees to better align AI behavior with human values and societal constraints.

Machine learning & AI

Navigating trust in an age of increasing AI influence

In 2025, it can seem as though the future that generations of AI advocates promised has finally arrived. We see the benefits of artificial intelligence on a daily basis—we use it to help us navigate traffic, to identify new ...

Engineering

Study reveals barriers to AI integration in manufacturing design

The integration of artificial intelligence (AI) into manufacturing processes has huge potential for improving productivity, efficiency, and safety. Machine learning models are already used to monitor equipment health and ...

Machine learning & AI

Generative AI rivals racing to the future

Since ChatGPT burst onto the scene in late 2022, generative artificial intelligence (GenAI) models have been vying for the lead—with the US and China as hotbeds for the technology.

Page 28 of 29