Page 18: Research news on Large language models

Large language models are high-capacity neural sequence models trained on massive text and multimodal corpora to perform language understanding, generation, and reasoning. Current work examines their internal representations, cognitive and social behavior analogies to humans, and limitations in mathematical, causal, and strategic reasoning. Research also addresses alignment with human values and brain activity, safety and security vulnerabilities, privacy and de-anonymization risks, cross-lingual and sociocultural biases, scaling and efficiency laws, and frameworks for tool use, multi-agent interaction, and domain-specific deployment.

Machine learning & AI

Democratizing AI-powered sentiment analysis

Artificial intelligence is accelerating at breakneck speed, with larger models dominating the scene—more parameters, more data, more power. But here is the real question: Do we really need bigger to be better? We challenged ...

Machine learning & AI

New research reveals AI has a confidence problem

Large language models (LLMs) sometimes lose confidence when answering questions and abandon correct answers, according to a new study by researchers at Google DeepMind and University College London.

Page 18 of 24