Page 9: Research news on Large language models

Large language models are high-capacity neural sequence models trained on massive text and multimodal corpora to perform language understanding, generation, and reasoning. Current work examines their internal representations, cognitive and social behavior analogies to humans, and limitations in mathematical, causal, and strategic reasoning. Research also addresses alignment with human values and brain activity, safety and security vulnerabilities, privacy and de-anonymization risks, cross-lingual and sociocultural biases, scaling and efficiency laws, and frameworks for tool use, multi-agent interaction, and domain-specific deployment.

Computer Sciences

AI evaluates texts without bias—until the source is revealed

Large language models (LLMs) are increasingly used not only to generate content but also to evaluate it. They are asked to grade essays, moderate social media content, summarize reports, screen job applications, and much more.

Computer Sciences

AI tech can compress LLM chatbot conversation memory by 3–4 times

Seoul National University College of Engineering announced that a research team led by Professor Hyun Oh Song from the Department of Computer Science and Engineering has developed a new AI technology called KVzip that intelligently ...

Computer Sciences

Computer model mimics human audiovisual perception

A new computer model developed at the University of Liverpool can combine sight and sound in a way that closely resembles how humans do it. This model is inspired by biology and could be useful for artificial intelligence ...

Machine learning & AI

Humans and LLMs represent sentences similarly, study finds

For decades, psychologists and behavioral scientists have been trying to understand how people mentally represent, encode and process letters, words and sentences. The introduction of large language models (LLMs) such as ChatGPT, ...

Page 9 of 19