Page 2: Research news on AI alignment

AI alignment examines how artificial systems acquire, represent, and act on goals, values, and social norms, and why their behavior often diverges from human expectations. Work in this area studies systematic failures such as bias, sycophancy, hallucinations, deceptive or selfish reasoning, and cultural or linguistic inequities, as well as limitations in commonsense, emotion, and social understanding. It also develops methods for preference learning, norm-following, interpretability, and reliability guarantees to better align AI behavior with human values and societal constraints.

Consumer & Gadgets

Feeling 'AI anxiety'? Here are the risks people fear most

A patient said to me the other day, half-smiling but clearly unsettled: "I think I've got anxiety about AI." They weren't having a panic attack or describing clinical anxiety. What they were expressing was a persistent sense ...

Computer Sciences

From flattery to debate: Training AI to mirror human reasoning

Generative artificial intelligence systems often work in agreement, complimenting the user in their responses. But human interactions aren't typically built on flattery. To help strengthen these conversations, researchers in ...

Business

AI could rebalance power between people and the services they use

Artificial intelligence could help people who feel overwhelmed, excluded or disadvantaged when dealing with everyday tasks like paying energy bills or booking health care appointments, according to a new study involving researchers ...

Machine learning & AI

Why comparisons between AI and human intelligence miss the point

Claims that artificial intelligence (AI) is on the verge of surpassing human intelligence have become commonplace. According to some commentators, rapid advances in large language models signal an imminent tipping point—often ...
