Page 3: Research news on AI alignment

AI alignment examines how artificial systems acquire, represent, and act on goals, values, and social norms, and why their behavior often diverges from human expectations. Work in this area studies systematic failures such as bias, sycophancy, hallucinations, deceptive or selfish reasoning, and cultural or linguistic inequities, as well as limitations in common-sense reasoning, emotional understanding, and social cognition. It also develops methods for preference learning, norm-following, interpretability, and reliability guarantees that bring AI behavior closer into line with human values and societal constraints.

Machine learning & AI

OpenClaw's AI agent does everything, even social media

Meet OpenClaw: the AI assistant that promised to be your dream intern, terrified cybersecurity experts, and now thrives on chatbot-only social media—all in just a few weeks.

Computer Sciences

New method helps AI reason like humans without extra training data

A study led by UC Riverside researchers offers a practical fix to one of artificial intelligence's toughest challenges by enabling AI systems to reason more like humans—without requiring new training data beyond test questions.

Computer Sciences

Creative talent: Has AI knocked humans out?

Are generative artificial intelligence systems such as ChatGPT truly creative? A research team led by Professor Karim Jerbi from the Department of Psychology at the Université de Montréal, and including AI pioneer Yoshua Bengio ...

Computer Sciences

Using AI to understand how emotions are formed

Emotions are a fundamental part of human psychology—a complex process that has long distinguished us from machines. Even advanced artificial intelligence (AI) lacks the capacity to feel. However, researchers are now exploring ...

page 3 of 28