Page 20: Research news on AI alignment

AI alignment examines how artificial systems acquire, represent, and act on goals, values, and social norms, and why their behavior often diverges from human expectations. Work in this area studies systematic failures such as bias, sycophancy, hallucinations, deceptive or selfish reasoning, and cultural or linguistic inequities, as well as limitations in commonsense, emotion, and social understanding. It also develops methods for preference learning, norm-following, interpretability, and reliability guarantees to better align AI behavior with human values and societal constraints.

Machine learning & AI

Researchers are teaching AI to see more like humans

At Brown University, a new project suggests that teaching artificial intelligence to perceive things more like people do may begin with something as simple as a game. The project invites participants to play ...

Consumer & Gadgets

AI-generated podcasts open new doors to make science accessible

The first study to use artificial intelligence (AI) to generate podcasts about research published in scientific papers found the results so convincing that half of the papers' authors thought the podcasters ...

Machine learning & AI

Can AI help you identify a scam? An expert explains

Imagine that you've received an email asking you to transfer money to a bank account. Some of the details look right, but how can you be sure the message is legitimate?
