Page 5: Research news on Large language models

Large language models are high-capacity neural sequence models trained on massive text and multimodal corpora to perform language understanding, generation, and reasoning. Current work examines their internal representations, cognitive and social behavior analogies to humans, and limitations in mathematical, causal, and strategic reasoning. Research also addresses alignment with human values and brain activity, safety and security vulnerabilities, privacy and de-anonymization risks, cross-lingual and sociocultural biases, scaling and efficiency laws, and frameworks for tool use, multi-agent interaction, and domain-specific deployment.

Computer Sciences

Shrinking AI memory boosts accuracy, study finds

Researchers have developed a new way to compress the memory used by AI models that can increase their accuracy on complex tasks or save significant amounts of energy.

Machine learning & AI

New study reveals that AI cannot fully write like a human

A world-first study shows that AI-generated writing continues to display distinct stylistic patterns that set it apart from human prose. Led by researchers at University College Cork (UCC), the research explores whether ...

Page 5 of 19