Researchers in China developed a hallucination correction engine for AI models

A team of scientists from the University of Science and Technology of China and Tencent’s YouTu Lab has developed a tool to combat “hallucination” by artificial intelligence (AI) models.

Hallucination is the tendency of an AI model to generate outputs with a high level of confidence that are not grounded in the information present in its training data. The problem permeates large language model (LLM) research, and its effects can be seen in models such as OpenAI’s ChatGPT and Anthropic’s Claude.

The USTC/Tencent team developed a tool called “Woodpecker” that it claims can correct hallucinations in multi-modal large language models (MLLMs).

This subset of AI…
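The article does not explain how Woodpecker actually works, so the sketch below is purely illustrative: a minimal post-hoc correction loop that extracts claims from a model’s answer and keeps only those supported by external evidence (here, a set of objects assumed to come from an object detector). Every function name and the verifier logic are hypothetical placeholders, not Woodpecker’s actual method.

```python
# Hypothetical sketch of a post-hoc hallucination-correction loop for a
# multimodal model's output. Nothing here comes from the Woodpecker paper;
# all names and the verifier logic are illustrative placeholders.

from dataclasses import dataclass


@dataclass
class Claim:
    text: str        # a single checkable statement extracted from the output
    supported: bool  # whether the verifier found evidence for it


def extract_claims(answer: str) -> list[str]:
    # Placeholder: treat each sentence as one checkable claim.
    return [s.strip() for s in answer.split(".") if s.strip()]


def verify_claim(claim: str, evidence: set[str]) -> bool:
    # Placeholder verifier: a claim counts as "supported" if it mentions
    # at least one object actually detected in the image.
    return any(obj in claim.lower() for obj in evidence)


def correct(answer: str, evidence: set[str]) -> str:
    # Keep supported claims, drop unsupported (hallucinated) ones.
    claims = [Claim(c, verify_claim(c, evidence)) for c in extract_claims(answer)]
    kept = [c.text for c in claims if c.supported]
    return ". ".join(kept) + "." if kept else "(no supported claims)"


if __name__ == "__main__":
    detected = {"dog", "frisbee"}  # e.g., objects found by an object detector
    raw = "A dog is catching a frisbee. A man wearing a red hat throws it"
    print(correct(raw, detected))  # -> "A dog is catching a frisbee."
```

In this toy run, the second sentence mentions nothing the detector saw, so it is dropped as unsupported; a real system would need far more robust claim extraction and verification.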
