Page 18: Research news on AI alignment

AI alignment examines how artificial systems acquire, represent, and act on goals, values, and social norms, and why their behavior often diverges from human expectations. Work in this area studies systematic failures such as bias, sycophancy, hallucinations, deceptive or selfish reasoning, and cultural or linguistic inequities, as well as limitations in commonsense, emotion, and social understanding. It also develops methods for preference learning, norm-following, interpretability, and reliability guarantees to better align AI behavior with human values and societal constraints.

Security

RisingAttacK: New technique can make AI 'see' whatever you want

Researchers have demonstrated a new way of attacking artificial intelligence computer vision systems, allowing them to control what the AI "sees." The research shows that the new technique, called RisingAttacK, is effective ...

Consumer & Gadgets

Why human empathy still matters in the age of AI

A new international study finds that people place greater emotional value on empathy they believe comes from humans—even when the exact same response is generated by artificial intelligence.

Page 18 of 28