Confabulation

Scientists Develop Method to Spot 'Hallucinating' Large Language Models in AI Research
A new study published in Nature demonstrates a novel method to detect when a large language model (LLM) is likely to "hallucinate".
2024-07-05
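
The article doesn't describe the mechanism, but the framing ("confabulation", Nature, mid-2024) points to Farquhar et al.'s semantic-entropy approach (Nature, 2024): sample several answers to the same question, cluster answers that mean the same thing, and compute entropy over the cluster distribution; high entropy flags a likely confabulation. Below is a minimal Python sketch of that idea, assuming the answers have already been sampled and substituting naive string normalization for the paper's entailment-based equivalence check.

```python
import math
from collections import Counter

def semantic_entropy(answers: list[str]) -> float:
    """Entropy over clusters of semantically equivalent answers.

    Equivalence is approximated here by case/whitespace-insensitive
    string matching -- a crude stand-in for the bidirectional
    entailment clustering used in the actual study.
    """
    clusters = Counter(" ".join(a.lower().split()) for a in answers)
    total = sum(clusters.values())
    return -sum((n / total) * math.log(n / total) for n in clusters.values())

# Sample the same prompt several times (temperature > 0), then score:
samples = ["Paris", "paris", "Paris, France", "Lyon"]
print(f"semantic entropy: {semantic_entropy(samples):.3f}")
# Many disagreeing clusters -> high entropy -> likely confabulation.
```

The key design choice is measuring uncertainty over meanings rather than exact strings, so paraphrases of the same answer ("Paris" vs. "paris") don't inflate the entropy the way they would with plain token-level disagreement.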