Page 14: Research news on Trustworthy machine learning

Trustworthy machine learning addresses methods for training and deploying models that are secure, privacy-preserving, and robust to manipulation. Work in this area develops federated and decentralized learning schemes, cryptographic and homomorphic encryption frameworks, and privacy-preserving compression to protect data and models. It also studies adversarial example generation and defenses, certified unlearning, bias and spurious correlation mitigation, and the use of synthetic and filtered data. Applications span fraud and cyberattack detection, fake news and deception detection, and secure automation systems.

Machine learning & AI

New technique overcomes spurious correlations problem in AI

AI models often rely on "spurious correlations," making decisions based on unimportant and potentially misleading information. Researchers have now discovered that these learned spurious correlations can be traced to a very small ...
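To illustrate what a spurious correlation looks like in practice (this is a generic sketch on synthetic data, not the method from the article above): a linear classifier is trained on two features, a weakly predictive "core" feature and a strongly predictive "spurious" feature whose correlation with the label reverses at test time. The model latches onto the spurious cue and its accuracy collapses on the shifted test set. All names and parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, spurious_flipped):
    """Synthetic binary task: a noisy core feature plus a near-perfect
    spurious feature whose label correlation can be reversed."""
    y = rng.integers(0, 2, n)
    core = y + rng.normal(0, 1.0, n)             # weakly predictive, stable
    s = (1 - y) if spurious_flipped else y       # flip the shortcut at test time
    spurious = 2.0 * s + rng.normal(0, 0.1, n)   # strongly predictive in training
    return np.column_stack([core, spurious]), y

def train_logreg(X, y, lr=0.1, steps=500):
    """Plain gradient-descent logistic regression."""
    w = np.zeros(X.shape[1]); b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

def accuracy(w, b, X, y):
    return float(((X @ w + b > 0).astype(int) == y).mean())

Xtr, ytr = make_data(500, spurious_flipped=False)
Xte, yte = make_data(500, spurious_flipped=True)  # shortcut reversed
w, b = train_logreg(Xtr, ytr)
train_acc = accuracy(w, b, Xtr, ytr)  # near-perfect: the shortcut works here
test_acc = accuracy(w, b, Xte, yte)   # collapses: the shortcut now misleads
```

The large learned weight on the spurious feature is exactly the kind of concentrated, traceable dependence that mitigation methods try to identify and remove.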

Energy & Green Tech

New method significantly reduces AI energy consumption

AI applications such as large language models (LLMs) have become an integral part of our everyday lives. The required computing, storage and transmission capacities are provided by data centers that consume vast amounts of ...

Security

New AI defense method shields models from adversarial attacks

Neural networks, a type of artificial intelligence modeled on the connectivity of the human brain, are driving critical breakthroughs across a wide range of scientific domains. But these models face significant threats from ...
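For readers unfamiliar with what an adversarial attack does, here is a minimal sketch of a gradient-sign (FGSM-style) perturbation against a fixed linear model; the defense method from the article above is not shown, and the weights and inputs are arbitrary illustrative values.

```python
import numpy as np

# A fixed linear "model": score = w . x + b, positive class if score > 0.
w = np.array([1.0, -2.0])
b = 0.0

def logit(x):
    return float(x @ w + b)

x = np.array([0.5, -0.5])       # clean input, confidently positive
eps = 1.0                       # perturbation budget (L-infinity)

# For a linear model the gradient of the score w.r.t. the input is just w,
# so the FGSM step pushes each coordinate against the predicted class.
x_adv = x - eps * np.sign(w)

clean_pred = logit(x) > 0       # positive on the clean input
adv_pred = logit(x_adv) > 0     # flips to negative after the perturbation
```

A small, structured input change flips the decision; defenses aim to make model outputs stable under exactly this kind of bounded perturbation.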

Page 14 of 14