Computer Sciences

New mitigation framework reduces bias in classification outcomes

We use computers to help us make (hopefully) unbiased decisions. The problem is that machine-learning algorithms do not always make fair classifications if human bias is embedded in the data used to train them—which is ...

Machine learning & AI

Building fairness into AI is crucial, and hard to get right

Artificial intelligence's capacity to process and analyze vast amounts of data has revolutionized decision-making processes, making operations in health care, finance, criminal justice and other sectors of society more efficient ...

Machine learning & AI

Researchers surprised by gender stereotypes in ChatGPT

A DTU student has analyzed ChatGPT and revealed that the online service is extremely stereotypical when it comes to gender roles. The analysis is the first step toward providing AI developers with a tool for testing against ...

Business

US Supreme Court hears challenges to social media laws

In a case that could determine the future of social media, the US Supreme Court heard arguments on Monday about whether a pair of state laws limiting content moderation are constitutional.

Robotics

Why are so many robots white?

Problems of racial and gender bias in artificial intelligence algorithms and the data used to train large language models like ChatGPT have drawn the attention of researchers and generated headlines. But these problems also ...

Computer Sciences

Scientists tackle AI bias with polite prodding

The troubling presence of racial bias in AI output may be easier to contain than many thought. Scientists at AI research company Anthropic say a little politeness may just do the trick, at least in some instances.
