Page 12: Research news on large language models

Large language models are high-capacity neural sequence models trained on massive text and multimodal corpora to perform language understanding, generation, and reasoning. Current work examines their internal representations, the analogies between their cognitive and social behavior and that of humans, and their limitations in mathematical, causal, and strategic reasoning. Research also addresses alignment with human values and brain activity, safety and security vulnerabilities, privacy and de-anonymization risks, cross-lingual and sociocultural biases, scaling and efficiency laws, and frameworks for tool use, multi-agent interaction, and domain-specific deployment.

Computer Sciences

Platform can make machine learning more transparent and accessible

What began as a Ph.D. project has grown into a website with 120,000 unique visitors each year. With the platform OpenML, researcher Jan van Rijn is contributing to open science, aiming to make machine learning more transparent, ...
