Page 27: Research news on Generative AI ethics

Generative AI ethics examines how text-, image-, and audio-generating systems reshape cognition, creativity, work practices, and public decision-making, and how these changes raise normative and regulatory questions. The field investigates trust and distrust in algorithmic guidance, human–AI collaboration in creative and professional domains, risks such as misinformation, bias, rights violations, and safety failures, and the erosion or transformation of expertise. It integrates humanities and socio-technical perspectives to guide responsible deployment, governance, and human-centric design of generative AI systems.

Machine learning & AI

Anthropic's Claude AI gets smarter—and mischievous

Anthropic launched its latest Claude generative artificial intelligence (GenAI) models on Thursday, claiming to set new standards for reasoning but also building in safeguards against rogue behavior.

Machine learning & AI

Neurosymbolic AI could be leaner and smarter than today's LLMs

Could AI that thinks more like a human be more sustainable than today's LLMs? The AI industry is dominated by large companies with deep pockets and a gargantuan appetite for energy to power their models' mammoth computing ...
