Page 6: Research news on AI alignment

AI alignment examines how artificial systems acquire, represent, and act on goals, values, and social norms, and why their behavior often diverges from human expectations. Work in this area studies systematic failures such as bias, sycophancy, hallucinations, deceptive or selfish reasoning, and cultural or linguistic inequities, as well as limitations in commonsense, emotion, and social understanding. It also develops methods for preference learning, norm-following, interpretability, and reliability guarantees to better align AI behavior with human values and societal constraints.

Consumer & Gadgets

Fairness in AI: Study shows central role of human decision-making

AI-supported recommender systems should provide users with the best possible suggestions for their queries. These systems often have to serve different target groups and must also take into account other stakeholders who influence ...

Consumer & Gadgets

Using food to uncover AI's cultural blind spots

CISPA researcher Tejumade Àfọ̀njá has co-authored a new international study that uses food as a starting point to reveal significant cultural blind spots in today's AI systems. The study also introduces a new participatory ...

Hi Tech & Innovation

Biological intelligence as the basis for new AI systems

In a new research project led by the Central Institute of Mental Health (CIMH) in Mannheim, scientists are investigating how insights into learning processes in animal brains can be used to make artificial intelligence (AI) ...

Page 6 of 28