Page 23: Research news on Generative AI ethics

Generative AI ethics examines how text-, image-, and audio-generating systems reshape cognition, creativity, work practices, and public decision-making, and how these changes raise normative and regulatory questions. The field investigates trust and distrust in algorithmic guidance; human–AI collaboration in creative and professional domains; risks such as misinformation, bias, rights violations, and safety failures; and the erosion or transformation of expertise. It integrates humanities and socio-technical perspectives to guide responsible deployment, governance, and human-centric design of generative AI systems.

Business

Meta's AI talent war raises questions about strategy

Mark Zuckerberg and Meta are spending billions to recruit top artificial intelligence talent, triggering debates about whether the aggressive hiring spree will pay off in the competitive generative AI race.

Consumer & Gadgets

Why human empathy still matters in the age of AI

A new international study finds that people place greater emotional value on empathy they believe comes from humans—even when the exact same response is generated by artificial intelligence.

Machine learning & AI

Q&A: When talking about AI, definitions matter

Artificial intelligence is everywhere lately—on the news, in podcasts and around every water cooler. A new, buzzy term, artificial general intelligence (AGI), is dominating conversations and raising more questions than ...

Machine learning & AI

New method can teach AI to admit uncertainty

In high-stakes situations like health care—or weeknight "Jeopardy!"—it can be safer to say "I don't know" than to answer incorrectly. Doctors, game show contestants, and standardized test-takers understand this, but most ...