Page 6: Research news on trustworthy machine learning

Trustworthy machine learning addresses methods for training and deploying models that are secure, privacy-preserving, and robust to manipulation. Work in this area develops federated and decentralized learning schemes, cryptographic frameworks such as homomorphic encryption, and privacy-preserving compression to protect data and models. It also studies adversarial example generation and defenses, certified unlearning, mitigation of bias and spurious correlations, and the use of synthetic and filtered data. Applications span fraud and cyberattack detection, fake news and deception detection, and secure automation systems.

Security

Fairness tool catches AI bias early

Machine learning software helps agencies make important decisions, such as who gets a bank loan or what areas police should patrol. But if these systems have biases, even small ones, they can cause real harm. A specific group ...
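Bias of this kind is often quantified by comparing a model's decision rates across groups. As a rough illustration only (this is a generic demographic-parity check, not the tool described in the article; the function name and toy data are hypothetical):

```python
def demographic_parity_gap(decisions):
    """Absolute difference in approval rates between two groups.

    `decisions` is a list of (group, approved) pairs, where approved is 0 or 1.
    """
    rates = {}
    for group, approved in decisions:
        rates.setdefault(group, []).append(approved)
    a, b = (sum(v) / len(v) for v in rates.values())
    return abs(a - b)

# Toy loan decisions: group A is approved far more often than group B.
loans = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
         ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
gap = demographic_parity_gap(loans)  # 0.75 - 0.25 = 0.5
```

A gap of zero would mean both groups are approved at the same rate; a large gap is one early warning sign a fairness audit can surface.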

Software

A new way to test how well AI systems classify text

Is this movie review a rave or a pan? Is this news story about business or technology? Is this online chatbot conversation veering off into giving financial advice? Is this online medical information site giving out misinformation?
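Answering questions like these systematically means scoring a classifier's predictions against labeled examples. A minimal sketch of per-label precision and recall (the metric choice and toy labels here are illustrative assumptions, not the benchmark from the article):

```python
from collections import Counter

def per_label_scores(true_labels, predicted):
    """Precision and recall per label from paired lists of labels."""
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(true_labels, predicted):
        if t == p:
            tp[t] += 1          # correct prediction for label t
        else:
            fp[p] += 1          # predicted p, but it was wrong
            fn[t] += 1          # missed the true label t
    labels = set(true_labels) | set(predicted)
    return {
        label: {
            "precision": tp[label] / (tp[label] + fp[label]) if tp[label] + fp[label] else 0.0,
            "recall": tp[label] / (tp[label] + fn[label]) if tp[label] + fn[label] else 0.0,
        }
        for label in labels
    }

truth = ["business", "tech", "tech", "business"]
preds = ["business", "tech", "business", "business"]
scores = per_label_scores(truth, preds)
```

Per-label scores matter because a classifier can look accurate overall while systematically failing on one category, such as the misinformation case above.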

Security

One tiny flip can open a dangerous back door in AI

A self-driving car is cruising along, its sensors and cameras telling it when to brake, change lanes, and make turns. The vehicle approaches a stop sign at high speed, but instead of stopping, ...

Security

How poisoned data can trick AI, and how to stop it

Imagine a busy train station. Cameras monitor everything, from how clean the platforms are to whether a docking bay is empty or occupied. These cameras feed into an AI system that helps manage station operations and sends ...
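One simple form of data poisoning is label flipping: an attacker corrupts the labels of a few training examples near the decision boundary so the trained model misclassifies inputs it would otherwise handle correctly. A toy sketch with a nearest-centroid classifier (the data and classifier are illustrative assumptions, not the system from the article):

```python
# 1-D training data: class 0 clusters near 0.0, class 1 near 1.0.
clean = [(0.0, 0), (0.1, 0), (0.2, 0), (0.8, 1), (0.9, 1), (1.0, 1)]

def centroids(data):
    """Mean feature value per class."""
    sums, counts = {0: 0.0, 1: 0.0}, {0: 0, 1: 0}
    for x, y in data:
        sums[y] += x
        counts[y] += 1
    return {c: sums[c] / counts[c] for c in sums}

def classify(x, cents):
    """Assign x to the class with the nearest centroid."""
    return min(cents, key=lambda c: abs(x - cents[c]))

# The attacker flips the label of a single boundary point.
poisoned = [(x, 1 - y) if x == 0.2 else (x, y) for x, y in clean]

clean_pred = classify(0.45, centroids(clean))        # class 0
poisoned_pred = classify(0.45, centroids(poisoned))  # class 1
```

One flipped label drags both centroids toward each other, shifting the decision boundary enough to misclassify the same input, which is why poisoning defenses focus on detecting and filtering suspicious training points.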

Page 6 of 14