Page 2: Research news on Trustworthy machine learning

Trustworthy machine learning addresses methods for training and deploying models that are secure, privacy-preserving, and robust to manipulation. Work in this area develops federated and decentralized learning schemes, cryptographic and homomorphic encryption frameworks, and privacy-preserving compression to protect data and models. It also studies adversarial example generation and defenses, certified unlearning, bias and spurious correlation mitigation, and the use of synthetic and filtered data. Applications span fraud and cyberattack detection, fake news and deception detection, and secure automation systems.

Software

No-code machine learning development tools

Since 2021, Korean researchers have been offering a simple software development framework to users with relatively limited AI expertise in industrial fields such as manufacturing, medicine, and shipbuilding, providing them with ...

Software

New software could reduce dependency on big data centers for AI

EPFL researchers have developed new software—now spun off into a start-up—that eliminates the need for data to be sent to third-party cloud services when AI is used to complete a task. This could challenge the business ...

Computer Sciences

New system efficiently explains AI judgments in real-time

A research team led by Professor Jaesik Choi of KAIST's Kim Jaechul Graduate School of AI, in collaboration with KakaoBank Corp, has developed an accelerated explanation technology that can explain the basis of an artificial ...

Computer Sciences

'Periodic table' for AI methods aims to drive innovation

Artificial intelligence is increasingly used to integrate and analyze multiple types of data formats, such as text, images, audio and video. One challenge slowing advances in multimodal AI, however, is the process of choosing ...

Page 2 of 14