Page 16: Research news on AI alignment

AI alignment examines how artificial systems acquire, represent, and act on goals, values, and social norms, and why their behavior often diverges from human expectations. Work in this area studies systematic failures such as bias, sycophancy, hallucinations, deceptive or selfish reasoning, and cultural or linguistic inequities, as well as limitations in commonsense, emotion, and social understanding. It also develops methods for preference learning, norm-following, interpretability, and reliability guarantees to better align AI behavior with human values and societal constraints.

Machine learning & AI

Researchers optimize AI systems for science

Using services like ChatGPT or Microsoft Copilot can sometimes seem like magic—to the point that it can be easy to forget about the advanced science running behind the scenes of any artificial intelligence (AI) system. Like ...

Business

Palantir, the AI giant that preaches US dominance

Palantir, an American data analysis and artificial intelligence company, has emerged as Silicon Valley's latest tech darling—one that makes no secret of its macho, America-first ethos now ascendant in Trump-era tech culture.

Page 16 of 28