Page 22: Research news on AI alignment

AI alignment examines how artificial systems acquire, represent, and act on goals, values, and social norms, and why their behavior often diverges from human expectations. Work in this area studies systematic failures such as bias, sycophancy, hallucinations, deceptive or selfish reasoning, and cultural or linguistic inequities, as well as limitations in commonsense, emotion, and social understanding. It also develops methods for preference learning, norm-following, interpretability, and reliability guarantees to better align AI behavior with human values and societal constraints.

Computer Sciences

Beyond translation: Multilingual benchmark makes AI multicultural

Imagine asking a conversational bot like Claude or ChatGPT a legal question in Greek about local traffic regulations. Within seconds, it replies in fluent Greek with an answer based on UK law. The model understood the language, ...

Machine learning & AI

How trustworthy is AI?

Artificial intelligence is everywhere—writing emails, recommending movies and even driving cars—but what about the AI you don't see? Who (or what) is behind the scenes developing the algorithms that go unnoticed? And ...

Computer Sciences

AI approach developed with human decision-makers in mind

As artificial intelligence takes off, how do we efficiently integrate it into our lives and our work? Bridging the gap between promise and practice, Jann Spiess, an associate professor of operations, information, and technology ...

Computer Sciences

AI learns languages similarly to humans, study shows

An AI system that learns language autonomously develops a language structured in the same way as human language. And just as we humans learn from previous generations, AI models get better when they take advantage of the ...

Computer Sciences

Team teaches AI models to spot misleading scientific reporting

Artificial intelligence isn't always a reliable source of information: large language models (LLMs) like Llama and ChatGPT can be prone to "hallucinating" and inventing bogus facts. But what if AI could be used to detect ...

Machine learning & AI

Q&A: Multimodality as the next big leap for AI

As the head of the Natural Language Processing Laboratory at EPFL, Antoine Bosselut keeps a close eye on the development of generative artificial intelligence tools such as ChatGPT. He looks back at their evolution over the ...

Page 22 of 28