Page 30: Research news on Generative AI ethics

Generative AI ethics examines how text-, image-, and audio-generating systems reshape cognition, creativity, work practices, and public decision-making, and how these changes raise normative and regulatory questions. The field investigates trust and distrust in algorithmic guidance, human–AI collaboration in creative and professional domains, risks such as misinformation, bias, rights violations, and safety failures, and the erosion or transformation of expertise. It integrates humanities and socio-technical perspectives to guide responsible deployment, governance, and human-centric design of generative AI systems.

Computer Sciences

Developing AI with, for and by everyone can help maximize its benefits

Humans' ability to learn from one another across cultures and generations drives our success as a species as much as our individual intelligence. This collective cultural brain has led to innovations and developed bodies ...

Software

AI threats in software development revealed in new study

UTSA researchers recently completed one of the most comprehensive studies to date on the risks of using AI models to develop software. In a new paper, they demonstrate how a specific type of error could pose a serious threat ...

Internet

Dataset reveals how Reddit communities are adapting to AI

Researchers at Cornell Tech have released a dataset extracted from more than 300,000 public Reddit communities, and a report detailing how Reddit communities are changing their policies to address a surge in AI-generated ...

Machine learning & AI

Opinion: We must balance the risks and benefits of AI

The potential of AI to transform people's lives in areas ranging from health care to better customer service is enormous. But as the technology advances, we must adopt policies to make sure the risks don't overwhelm and stifle ...
