Research news on AI alignment

AI alignment examines how artificial systems acquire, represent, and act on goals, values, and social norms, and why their behavior often diverges from human expectations. Work in this area studies systematic failures such as bias, sycophancy, hallucinations, deceptive or selfish reasoning, and cultural or linguistic inequities, as well as limitations in commonsense, emotion, and social understanding. It also develops methods for preference learning, norm-following, interpretability, and reliability guarantees to better align AI behavior with human values and societal constraints.

Security

Can people distinguish between AI-generated and human speech?

In a collaboration between Tianjin University and the Chinese University of Hong Kong, researchers led by Xiangbin Teng used behavioral and brain activity measures to explore whether people can distinguish between AI-generated ...

Security

AI education could be crucial in tackling rising voice scams

A new study from Abertay University reveals that the most effective way to protect people from AI voice scams is not through traditional warning messages, but by educating them about how advanced and authentic AI voices have ...
