Page 15: Research news on Large language models

Large language models are high-capacity neural sequence models trained on massive text and multimodal corpora to perform language understanding, generation, and reasoning. Current work examines their internal representations, cognitive and social behavior analogies to humans, and limitations in mathematical, causal, and strategic reasoning. Research also addresses alignment with human values and brain activity, safety and security vulnerabilities, privacy and de-anonymization risks, cross-lingual and sociocultural biases, scaling and efficiency laws, and frameworks for tool use, multi-agent interaction, and domain-specific deployment.

Machine learning & AI

Humans and LLMs represent sentences similarly, study finds

For decades, psychologists and behavioral scientists have been trying to understand how people mentally represent, encode and process letters, words and sentences. The introduction of large language models (LLMs) such as ChatGPT, ...

Consumer & Gadgets

AI models often fail to identify ableism across cultures

The artificial intelligence models underlying popular chatbots and content moderation systems struggle to identify offensive, ableist social media posts in English—and perform even worse in Hindi, new Cornell research finds.

Machine learning & AI

Multimodal AI learns to weigh text and images more evenly

Just as human eyes tend to focus on pictures before reading accompanying text, multimodal artificial intelligence (AI)—which processes multiple types of sensory data at once—also tends to depend more heavily on certain types ...

Consumer & Gadgets

Q&A: Can AI persuade you to go vegan—or harm yourself?

Large language models are more persuasive than humans, according to recent UBC research published in the Proceedings of the Third Workshop on Social Influence in Conversations (SICon 2025).

Machine learning & AI

Dialogue systems learn new words with fewer questions

Researchers at the University of Osaka have developed a mechanism that allows spoken dialogue systems to learn new words through conversation without overwhelming users with repetitive questions. By optimizing when to ask a ...

Page 15 of 24