Page 12: Research news on Trustworthy machine learning

Trustworthy machine learning covers methods for training and deploying models that are secure, privacy-preserving, and robust to manipulation. Work in this area develops federated and decentralized learning schemes, cryptographic frameworks such as homomorphic encryption, and privacy-preserving compression to protect data and models. It also studies adversarial example generation and defenses, certified unlearning, mitigation of bias and spurious correlations, and the use of synthetic and filtered data. Applications span fraud and cyberattack detection, fake news and deception detection, and secure automation systems.

Computer Sciences

Novel technique overcomes spurious correlations problem in AI

AI models often rely on "spurious correlations," making decisions based on unimportant and potentially misleading information. Researchers have now discovered these learned spurious correlations can be traced to a very small ...
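The teaser does not describe the researchers' technique, but the underlying failure mode is easy to demonstrate. Below is a minimal, illustrative sketch (the dataset, feature scales, and training setup are invented for this example, not taken from the article): a tiny logistic-regression model trained on two features, where a spurious feature tracks the label perfectly during training but has a larger magnitude, so gradient descent leans on it. When the spurious cue flips at test time, accuracy collapses.

```python
import math

# Illustrative only: a toy model that latches onto a spurious feature.
# All data and scales here are invented for the sketch.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Feature 0 is the "true" signal; feature 1 is spurious: it predicts the
# label perfectly in training and has a larger magnitude, so it dominates
# the gradient updates.
train = [((1.0, 3.0), 1), ((-1.0, -3.0), -1)]
# At test time the spurious feature flips, exposing the shortcut.
test = [((1.0, -3.0), 1), ((-1.0, 3.0), -1)]

w = [0.0, 0.0]
lr = 0.1
for _ in range(200):
    for (x, y) in train:
        margin = y * (w[0] * x[0] + w[1] * x[1])
        g = sigmoid(-margin)          # logistic-loss gradient factor
        w[0] += lr * g * y * x[0]
        w[1] += lr * g * y * x[1]

def accuracy(data):
    correct = sum(1 for (x, y) in data
                  if (1 if w[0] * x[0] + w[1] * x[1] > 0 else -1) == y)
    return correct / len(data)

print("weights:", w)                  # |w[1]| > |w[0]|: the shortcut won
print("train acc:", accuracy(train))  # 1.0
print("test acc:", accuracy(test))    # 0.0 once the spurious cue flips
```

The model ends up weighting the spurious feature three times as heavily as the real one, which is exactly the kind of learned shortcut the research above aims to trace and remove.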

Computer Sciences

New method efficiently safeguards sensitive AI training data

Data privacy comes with a cost. There are security techniques that protect sensitive user data, like customer addresses, from attackers who may attempt to extract them from AI models—but they often make those models less ...
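The teaser does not specify the paper's method, but the privacy/utility trade-off it alludes to is commonly realized with DP-SGD-style gradient sanitization: clip each per-example gradient to a fixed norm, average, and add calibrated Gaussian noise. The following is a minimal sketch of that standard mechanism (function name and parameter values are this example's, not the article's):

```python
import math
import random

# Illustrative only: DP-SGD-style gradient sanitization. Clip each
# per-example gradient to L2 norm <= clip_norm, average, then add
# Gaussian noise scaled by noise_mult. Not the article's method.

def clip_and_noise(grads, clip_norm=1.0, noise_mult=1.0, rng=None):
    """Return a noisy average of norm-clipped per-example gradients."""
    rng = rng or random.Random(0)
    clipped = []
    for g in grads:
        norm = math.sqrt(sum(v * v for v in g))
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        clipped.append([v * scale for v in g])
    dim = len(grads[0])
    avg = [sum(g[i] for g in clipped) / len(clipped) for i in range(dim)]
    sigma = noise_mult * clip_norm / len(clipped)
    return [a + rng.gauss(0.0, sigma) for a in avg]

per_example_grads = [[3.0, 4.0], [0.1, -0.2], [-6.0, 8.0]]
private_grad = clip_and_noise(per_example_grads, clip_norm=1.0,
                              noise_mult=1.0, rng=random.Random(42))
print(private_grad)  # noisy average of clipped gradients
```

With noise_mult=0 the function reduces to plain clipping; raising it strengthens the privacy guarantee but makes updates noisier, which is the accuracy cost the article describes methods trying to reduce.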

Computer Sciences

How to build trustworthy AI without trusted data

Today, almost everybody has heard of AI, and millions around the world already use it or are exposed to it—from ChatGPT writing our emails to helping with medical diagnosis.

Page 12 of 14