Page 17: Research news on AI alignment

AI alignment examines how artificial systems acquire, represent, and act on goals, values, and social norms, and why their behavior often diverges from human expectations. Work in this area studies systematic failures such as bias, sycophancy, hallucinations, deceptive or selfish reasoning, and cultural or linguistic inequities, as well as limitations in commonsense, emotion, and social understanding. It also develops methods for preference learning, norm-following, interpretability, and reliability guarantees to better align AI behavior with human values and societal constraints.

Machine learning & AI

Democratizing AI-powered sentiment analysis

Artificial intelligence is accelerating at breakneck speed, with ever-larger models dominating the scene—more parameters, more data, more power. But here is the real question: Do we really need bigger to be better? We challenged ...

Machine learning & AI

Does AI understand?

Imagine an ant crawling in sand, tracing a path that happens to look like Winston Churchill. Would you say the ant created an image of the former British prime minister? According to the late Harvard philosopher Hilary Putnam, ...

Computer Sciences

Can ChatGPT actually 'see' red? New study results are nuanced

ChatGPT works by analyzing vast amounts of text, identifying patterns and synthesizing them to generate responses to users' prompts. Color metaphors like "feeling blue" and "seeing red" are commonplace throughout the English ...
