Research news on Trustworthy machine learning

Trustworthy machine learning covers methods for training and deploying models that are secure, privacy-preserving, and robust to manipulation. Work in this area develops federated and decentralized learning schemes, cryptographic frameworks such as homomorphic encryption, and privacy-preserving compression to protect both data and models. It also studies adversarial example generation and defenses, certified unlearning, mitigation of bias and spurious correlations, and the use of synthetic and filtered data. Applications span fraud and cyberattack detection, fake news and deception detection, and secure automation systems.

Security

Photon framework scales AI vulnerability discovery

Oak Ridge National Laboratory's Center for Artificial Intelligence Security Research (CAISER) is shining a light on AI vulnerabilities. While AI models offer tremendous economic, humanitarian and national security potential, ...

Consumer & Gadgets

New deep learning framework solves the cold-start problem

Recommender systems suggest potentially relevant content by evaluating user preferences and are essential in reducing information overload. However, when users join a new online platform, recommender systems often struggle ...

Security

Can people distinguish between AI-generated and human speech?

In a collaboration between Tianjin University and the Chinese University of Hong Kong, researchers led by Xiangbin Teng used behavioral and brain activity measures to explore whether people can discern between AI-generated ...
