Page 3: Research news on Large language models

Large language models are high-capacity neural sequence models trained on massive text and multimodal corpora to perform language understanding, generation, and reasoning. Current work examines their internal representations, cognitive and social behavior analogies to humans, and limitations in mathematical, causal, and strategic reasoning. Research also addresses alignment with human values and brain activity, safety and security vulnerabilities, privacy and de-anonymization risks, cross-lingual and sociocultural biases, scaling and efficiency laws, and frameworks for tool use, multi-agent interaction, and domain-specific deployment.

Computer Sciences

New method helps AI reason like humans without extra training data

A study led by UC Riverside researchers offers a practical fix to one of artificial intelligence's toughest challenges by enabling AI systems to reason more like humans—without requiring new training data beyond test questions.

Computer Sciences

Creative talent: Has AI knocked humans out?

Are generative artificial intelligence systems such as ChatGPT truly creative? A research team led by Professor Karim Jerbi from the Department of Psychology at the Université de Montréal, and including AI pioneer Yoshua ...

Computer Sciences

Using AI to understand how emotions are formed

Emotions are a fundamental part of human psychology—a complex process that has long distinguished us from machines. Even advanced artificial intelligence (AI) lacks the capacity to feel. However, researchers are now exploring ...

Page 3 of 19