Page 19: Research news on Large language models

Large language models are high-capacity neural sequence models trained on massive text and multimodal corpora to perform language understanding, generation, and reasoning. Current work examines their internal representations, cognitive and social behavior analogies to humans, and limitations in mathematical, causal, and strategic reasoning. Research also addresses alignment with human values and brain activity, safety and security vulnerabilities, privacy and de-anonymization risks, cross-lingual and sociocultural biases, scaling and efficiency laws, and frameworks for tool use, multi-agent interaction, and domain-specific deployment.

Computer Sciences

Can ChatGPT actually 'see' red? New study results are nuanced

ChatGPT works by analyzing vast amounts of text, identifying patterns and synthesizing them to generate responses to users' prompts. Color metaphors like "feeling blue" and "seeing red" are commonplace throughout the English ...

Computer Sciences

From position to meaning: How AI learns to read

The language capabilities of today's artificial intelligence systems are astonishing. We can now engage in natural conversations with systems like ChatGPT, Gemini, and many others, with a fluency nearly comparable to that ...

Page 19 of 24