Page 12: Research news on AI alignment

AI alignment examines how artificial systems acquire, represent, and act on goals, values, and social norms, and why their behavior often diverges from human expectations. Work in this area studies systematic failures such as bias, sycophancy, hallucinations, deceptive or selfish reasoning, and cultural or linguistic inequities, as well as limitations in commonsense, emotion, and social understanding. It also develops methods for preference learning, norm-following, interpretability, and reliability guarantees to better align AI behavior with human values and societal constraints.

Machine learning & AI

AI can write your college essay, but it won't sound like you

Students who plan to use ChatGPT to write their college admissions essays should think twice: Artificial intelligence tools write highly generic personal narratives, even when prompted to write from the perspective of someone ...

Robotics

Creating robots that adapt to your emotion

Robots might be getting smarter, but to truly support people in daily life, they also need to become more empathetic. That means recognizing and responding to human emotions in real time.

Page 12 of 28