Scientists Develop Method to Spot 'Hallucinating' Large Language Models in AI Research

Written by ArticleGPT
Reviewed and fact-checked by the HIX.AI Team
2 min read · Jul 05, 2024

In a Nutshell

Researchers at the University of Oxford have made significant progress in ensuring that information generated by artificial intelligence (AI) is robust and reliable.

In a groundbreaking development, researchers from the University of Oxford have made crucial progress toward identifying, and ultimately preventing, hallucination in the large language models (LLMs) used in artificial intelligence (AI) research.

The researchers have devised a novel method to detect when LLMs are likely to "hallucinate" or invent plausible-sounding but imaginary facts.

Often the underlying problem is that an LLM has no way of knowing the correct answer yet feels obligated to offer one, leading it to fabricate information, a behavior known as “confabulation”.

The results of this research could prove especially valuable in fields such as legal and medical question answering, where inaccuracies can lead to severe consequences.

Methodology Behind Detecting LLM Confabulations

The methodology developed by the research team is grounded in statistics and estimates uncertainty at the level of meaning rather than of individual word sequences.

The method uses semantic entropy, a measure of how much the meanings of several sampled answers vary, to quantify the uncertainty in LLM responses.
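
In rough terms (a simplified rendering rather than the paper's exact notation), if the sampled answers to an input x are grouped into clusters C that share the same meaning, each with pooled probability p(C | x), the semantic entropy is the Shannon entropy over those meaning clusters:

$$\mathrm{SE}(x) = -\sum_{C} p(C \mid x)\,\log p(C \mid x)$$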

By translating the probabilities produced by LLMs into probabilities over meanings, the researchers were able to identify instances where LLMs were uncertain about the actual meaning of their answers, not just the phrasing.
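
To make the idea concrete, here is a minimal sketch of that recipe, not the Oxford team's code: sample several answers together with their probabilities, group answers that share a meaning, pool each group's probability, and take the entropy of the pooled distribution. The same_meaning check below is a deliberately crude stand-in (a bag-of-words match) for the entailment-based equivalence test described in the research, and the example answers and probabilities are invented for illustration.

```python
import math


def _tokens(s: str) -> list[str]:
    # Lower-case, strip trailing punctuation, and sort the words.
    return sorted(s.lower().strip(" .?!").split())


def same_meaning(a: str, b: str) -> bool:
    """Toy stand-in for a meaning-equivalence test: same bag of lower-cased words."""
    return _tokens(a) == _tokens(b)


def semantic_entropy(samples: list[tuple[str, float]]) -> float:
    """Group sampled answers by meaning, pool their probabilities,
    and return the Shannon entropy over the resulting meaning clusters."""
    clusters: list[list[int]] = []  # each cluster holds indices into `samples`
    for i, (answer, _) in enumerate(samples):
        for cluster in clusters:
            if same_meaning(answer, samples[cluster[0]][0]):
                cluster.append(i)
                break
        else:
            clusters.append([i])

    # Pool probability mass within each meaning cluster (renormalised over the samples).
    total = sum(p for _, p in samples)
    cluster_probs = [sum(samples[i][1] for i in c) / total for c in clusters]

    # High entropy means the answers disagree in meaning, not just wording.
    return -sum(p * math.log(p) for p in cluster_probs if p > 0)


if __name__ == "__main__":
    # Hypothetical (answer, probability) pairs; in practice the probabilities
    # would come from the model's own sequence likelihoods.
    consistent = [
        ("Paris is the capital of France.", 0.45),
        ("The capital of France is Paris.", 0.35),  # same meaning, different wording
        ("Lyon is the capital of France.", 0.20),
    ]
    scattered = [
        ("Lyon is the capital of France.", 0.25),
        ("Marseille is the capital of France.", 0.25),
        ("Nice is the capital of France.", 0.25),
        ("Lille is the capital of France.", 0.25),
    ]
    print(f"mostly consistent answers: {semantic_entropy(consistent):.2f}")
    print(f"scattered answers:         {semantic_entropy(scattered):.2f}")
```

A low score indicates the model keeps returning the same fact in different words; a high score indicates the sampled answers disagree about the fact itself, the signature of a likely confabulation.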

In their experiments, the new method consistently outperformed previous approaches at detecting confabulations.

The research team tested the method across six LLMs, including well-known models such as GPT-4 and LLaMA 2, using diverse datasets ranging from Google searches to technical biomedical questions and mathematical word problems. The method even successfully identified specific false claims in short biographies generated by ChatGPT.

One major advantage of the technique is that, unlike previous approaches that required task-specific data, it works across diverse datasets and tasks without prior knowledge. This robust generalization to new tasks makes it valuable for ensuring accuracy and reliability in a wide range of applications.

While the detection method addresses a specific reliability problem, confabulation, further challenges lie ahead. Errors that LLMs make consistently remain an area that requires further attention.

The most detrimental failures of AI occur when a system confidently and systematically produces incorrect results, and the researchers acknowledge that much work remains to be done in this regard.

Based on 3 search sources

Research into 'hallucinating' generative models advances reliability of artificial intelligence

Researchers from the University of Oxford have made a significant advance toward ensuring that information produced by generative artificial intelligence (AI) is robust and reliable.

Scientists might have found a way to overcome ‘hallucinations’ that plague AI systems like ChatGPT

Scientists may have created a way to help overcome one of the biggest problems with popular artificial intelligence systems.

Researchers describe how to tell if ChatGPT is confabulating

It's one of the world's worst-kept secrets that large language models give blatantly false answers to queries and do so with a confidence that's indistinguishable from when they get things right. There are a number of reasons for this.
