Page 4: Research news on Large language models

Large language models are high-capacity neural sequence models trained on massive text and multimodal corpora to perform language understanding, generation, and reasoning. Current work examines their internal representations, cognitive and social behavior analogies to humans, and limitations in mathematical, causal, and strategic reasoning. Research also addresses alignment with human values and brain activity, safety and security vulnerabilities, privacy and de-anonymization risks, cross-lingual and sociocultural biases, scaling and efficiency laws, and frameworks for tool use, multi-agent interaction, and domain-specific deployment.

Consumer & Gadgets

LLMs and creativity: AI responses show less variety than human ones

Can using a large language model (LLM) make a person more creative? Prior work has shown that using LLMs can make creative outputs more homogeneous, but this homogenization could stem from the specific LLM used or from widespread ...

Software

Top AI coding tools make mistakes one time in four, study shows

New research from the University of Waterloo shows that artificial intelligence (AI) still struggles with some basic software development tasks, raising questions about how reliably AI systems can assist developers. As Large ...

Computer Sciences

What flocking birds can teach AI about reducing noise

Among the primary concerns surrounding artificial intelligence is its tendency to yield erroneous information when summarizing long documents. These "hallucinations" are problematic not only because they convey falsehoods, ...

Page 4 of 24