Page 19: Research news on AI alignment

AI alignment examines how artificial systems acquire, represent, and act on goals, values, and social norms, and why their behavior often diverges from human expectations. Work in this area studies systematic failures such as bias, sycophancy, hallucinations, deceptive or selfish reasoning, and cultural or linguistic inequities, as well as limitations in commonsense, emotion, and social understanding. It also develops methods for preference learning, norm-following, interpretability, and reliability guarantees to better align AI behavior with human values and societal constraints.

Consumer & Gadgets

Why human empathy still matters in the age of AI

A new international study finds that people place greater emotional value on empathy they believe comes from humans—even when the exact same response is generated by artificial intelligence.

Machine learning & AI

Q&A: When talking about AI, definitions matter

Artificial intelligence is everywhere lately—on the news, in podcasts and around every water cooler. A new, buzzy term, artificial general intelligence (AGI), is dominating conversations and raising more questions than ...

Machine learning & AI

New method can teach AI to admit uncertainty

In high-stakes situations like health care—or weeknight "Jeopardy!"—it can be safer to say "I don't know" than to answer incorrectly. Doctors, game show contestants, and standardized test-takers understand this, but most ...
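The "I don't know" behavior described above is often framed as selective prediction: a model abstains when its confidence falls below a cutoff, trading answer coverage for accuracy. The following is a minimal illustrative sketch of that idea, not the method from the study; the function name and threshold value are assumptions for demonstration.

```python
# Hypothetical sketch of selective prediction: a classifier abstains
# ("I don't know") when its top-class probability is below a threshold,
# trading coverage for higher accuracy on the answers it does give.

def predict_or_abstain(probs, threshold=0.8):
    """Return the argmax class index, or None to abstain when confidence is low.

    probs: list of class probabilities summing to ~1.
    threshold: illustrative confidence cutoff (an assumption, not from the study).
    """
    best = max(range(len(probs)), key=lambda i: probs[i])
    return best if probs[best] >= threshold else None

# Confident prediction vs. abstention:
print(predict_or_abstain([0.05, 0.90, 0.05]))  # 1
print(predict_or_abstain([0.40, 0.35, 0.25]))  # None
```

Raising the threshold makes the model abstain more often but answer more reliably, which is the trade-off high-stakes settings like health care care about.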

Page 19 of 28