Large language models pose a risk to science with false answers, says study
Large Language Models (LLMs) pose a direct threat to science because of so-called "hallucinations" (untruthful responses), and should be restricted to protect scientific truth, says a new paper from leading Artificial Intelligence ...
Nov 20, 2023