Page 5: Research news on AI alignment

AI alignment examines how artificial systems acquire, represent, and act on goals, values, and social norms, and why their behavior often diverges from human expectations. Work in this area studies systematic failures such as bias, sycophancy, hallucinations, deceptive or selfish reasoning, and cultural or linguistic inequities, as well as limitations in commonsense, emotional, and social understanding. It also develops methods for preference learning, norm-following, interpretability, and reliability guarantees to better align AI behavior with human values and societal constraints.

Computer Sciences

Generative AIs fail at the game of visual 'telephone'

Generative AIs may not be as creative as we assume. Publishing in the journal Patterns, researchers show that when image-generating and image-describing AIs pass the same descriptive scene back and forth, they quickly veer ...

Computer Sciences

New system efficiently explains AI judgments in real time

A research team led by Professor Jaesik Choi of KAIST's Kim Jaechul Graduate School of AI, in collaboration with KakaoBank Corp, has developed an accelerated explanation technology that can explain the basis of an artificial ...

Consumer & Gadgets

Can AI be a good creative partner?

What generative AI typically does best—recognize patterns and predict the next step in a sequence—can seem fundamentally at odds with the intangibility of human creativity and imagination. However, Cambridge researchers ...

Computer Sciences

AI can pick up cultural values by mimicking how kids learn

Artificial intelligence systems absorb values from their training data. The trouble is that values differ across cultures. So an AI system trained on data from the entire internet won't work equally well for people from different ...

Page 5 of 28