Page 3: Research news on Trustworthy machine learning

Trustworthy machine learning covers methods for training and deploying models that are secure, privacy-preserving, and robust to manipulation. Work in this area develops federated and decentralized learning schemes, cryptographic frameworks such as homomorphic encryption, and privacy-preserving compression to protect data and models. It also studies adversarial example generation and defenses, certified unlearning, mitigation of bias and spurious correlations, and the use of synthetic and filtered data. Applications span fraud and cyberattack detection, fake news and deception detection, and secure automation systems.

Computer Sciences

New AI technique sounding out audio deepfakes

Researchers from Australia's national science agency CSIRO, Federation University Australia and RMIT University have developed a method to improve the detection of audio deepfakes.

Computer Sciences

Human-centric photo dataset aims to help spot AI biases responsibly

A database of more than 10,000 human images to evaluate biases in artificial intelligence (AI) models for human-centric computer vision is presented in Nature this week. The Fair Human-Centric Image Benchmark (FHIBE), developed ...

Machine learning & AI

AI teaches itself and outperforms human-designed algorithms

Like humans, artificial intelligence learns by trial and error, but traditionally, it requires humans to set the ball rolling by designing the algorithms and rules that govern the learning process. However, as AI technology ...

Computer Sciences

A new 'blueprint' for advancing practical, trustworthy AI

A new "blueprint" for building AI that highlights how the technology can learn from different kinds of data—beyond vision and language—to make it more deployable in the real world has been developed by researchers at ...
