Page 25: Research news on AI alignment

AI alignment examines how artificial systems acquire, represent, and act on goals, values, and social norms, and why their behavior often diverges from human expectations. Work in this area studies systematic failures such as bias, sycophancy, hallucinations, deceptive or selfish reasoning, and cultural or linguistic inequities, as well as limitations in commonsense, emotion, and social understanding. It also develops methods for preference learning, norm-following, interpretability, and reliability guarantees to better align AI behavior with human values and societal constraints.

Machine learning & AI

Tech on the treetops: How AI can protect forests

Artificial Intelligence (AI) is the newest tool in the arsenal to prevent the degradation and depletion of forests, with new research revealing how the technology can help protect the ecosystem.

Machine learning & AI

Research shows humans are still better than AI at reading the room

Humans, it turns out, are better than current AI models at describing and interpreting social interactions in a moving scene—a skill necessary for self-driving cars, assistive robots, and other technologies that rely on ...

Machine learning & AI

Study cracks the code behind why AI behaves as it does

AI models like ChatGPT have amazed the world with their ability to write poetry, solve equations and even pass medical exams. But they can also churn out harmful content or promote disinformation.

Machine learning & AI

We need to stop pretending AI is intelligent. Here's how

We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

Page 25 of 28