Page 15: Research news on Large language models

Large language models are high-capacity neural sequence models trained on massive text and multimodal corpora to perform language understanding, generation, and reasoning. Current work examines their internal representations, their behavioral analogies to human cognition and social interaction, and their limitations in mathematical, causal, and strategic reasoning. Research also addresses alignment with human values and brain activity, safety and security vulnerabilities, privacy and de-anonymization risks, cross-lingual and sociocultural biases, scaling and efficiency laws, and frameworks for tool use, multi-agent interaction, and domain-specific deployment.

Computer Sciences

Beyond translation: Multilingual benchmark makes AI multicultural

Imagine asking a conversational bot like Claude or ChatGPT a legal question in Greek about local traffic regulations. Within seconds, it replies in fluent Greek with an answer based on UK law. The model understood the language, ...

Computer Sciences

AI learns languages similarly to humans, study shows

An AI system that learns language autonomously develops a language structured in the same way as human language. And just as humans learn from previous generations, AI models improve when they take advantage of the ...

Computer Sciences

Team teaches AI models to spot misleading scientific reporting

Artificial intelligence isn't always a reliable source of information: large language models (LLMs) like Llama and ChatGPT can be prone to "hallucinating" and inventing bogus facts. But what if AI could be used to detect ...
