Page 21: Research news on AI alignment

AI alignment examines how artificial systems acquire, represent, and act on goals, values, and social norms, and why their behavior often diverges from human expectations. Work in this area studies systematic failures such as bias, sycophancy, hallucinations, deceptive or selfish reasoning, and cultural or linguistic inequities, as well as limitations in commonsense, emotion, and social understanding. It also develops methods for preference learning, norm-following, interpretability, and reliability guarantees to better align AI behavior with human values and societal constraints.

Computer Sciences

Why AI can't understand a flower the way humans do

Even with all its training and computing power, an artificial intelligence (AI) tool like ChatGPT can't represent the concept of a flower the way a human does, according to a new study.

Machine learning & AI

Top scientist wants to prevent AI from going rogue

Concerned about the rapid spread of generative AI, a pioneer researcher is developing software to keep tabs on a technology that is increasingly taking over human tasks.
