Research news on Large language models

Large language models are high-capacity neural sequence models trained on massive text and multimodal corpora to perform language understanding, generation, and reasoning. Current work examines their internal representations, cognitive and social behavior analogies to humans, and limitations in mathematical, causal, and strategic reasoning. Research also addresses alignment with human values and brain activity, safety and security vulnerabilities, privacy and de-anonymization risks, cross-lingual and sociocultural biases, scaling and efficiency laws, and frameworks for tool use, multi-agent interaction, and domain-specific deployment.

Machine learning & AI

Exploring AI's growing role in scientific peer review

James Zou is a computer scientist at Stanford University who has been exploring how large language models (LLMs) can assist scientific peer review—and more broadly, how AI agents might accelerate research. It is a provocative ...

Computer Sciences

Can AI understand literature? Researchers put it to the test

Even with all the recent advances in the ability of large language models (like ChatGPT) to help us think, research, summarize, and learn from complex and technical texts, how do they fare in understanding storytelling and literature? ...

Consumer & Gadgets

AI overly affirms users asking for personal advice, study finds

In a new study published in Science, Stanford computer scientists showed that large language models are overly agreeable, or sycophantic, when users solicit advice on interpersonal dilemmas. Even when ...
