Page 8: Research news on Generative AI ethics

Generative AI ethics examines how text-, image-, and audio-generating systems reshape cognition, creativity, work practices, and public decision-making, and the normative and regulatory questions these changes raise. The field investigates trust and distrust in algorithmic guidance; human–AI collaboration in creative and professional domains; risks such as misinformation, bias, rights violations, and safety failures; and the erosion or transformation of expertise. It integrates humanities and socio-technical perspectives to guide the responsible deployment, governance, and human-centric design of generative AI systems.

Consumer & Gadgets

Using food to uncover AI's cultural blind spots

CISPA researcher Tejumade Àfọ̀njá has co-authored a new international study that uses food as a starting point to reveal significant cultural blind spots in today's AI systems. The study also introduces a new participatory ...

Machine learning & AI

How Greek myths and Hollywood hits can help us understand AI today

Nina Beguš remembers being at an event 10 years ago where a group of engineers showcased new robots that could recognize human emotions and offer basic compliments. It was years before AI chatbots would become fixtures of ...

Hi Tech & Innovation

Biological intelligence as the basis for new AI systems

In a new research project led by the Central Institute of Mental Health (CIMH) in Mannheim, scientists are investigating how insights into learning processes in animal brains can be used to make artificial intelligence (AI) ...

Computer Sciences

Six criteria for the reliability of AI

Language models based on artificial intelligence (AI) can answer almost any question, but not always correctly. It would help users to know how reliable an AI system is. A team at Ruhr University Bochum and TU Dortmund ...

Page 8 of 33