Researchers hope to eliminate AI hallucinations that arise when a model's answers can carry more than one meaning.


The AI boom has made it possible for the average user to turn to chatbots like ChatGPT for information on prompts spanning both breadth and depth. However, these models are still prone to hallucination, confidently returning incorrect and sometimes dangerous answers. While some hallucinations stem from faulty training data, over-generalization, or other side effects of data collection, the Oxford researchers have approached the problem from a different angle. In Nature, they published details of a newly developed method for detecting confabulations: cases where the model produces arbitrary and incorrect answers.

LLMs answer questions by finding patterns in their training data. This doesn't always work, because a model can find a pattern where none exists, much as a human can spot animal shapes in clouds. The difference is that a human knows those are just cloud formations, not an actual giant elephant floating in the sky. An LLM, on the other hand, may treat its imagined pattern as gospel truth, confidently describing technology that doesn't exist yet and other nonsense.

Semantic entropy is the key.
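
As a rough illustration of the idea (a minimal sketch, not the researchers' exact implementation), the snippet below samples several answers to the same question, groups answers that share a meaning, and measures the entropy over those meaning clusters. High entropy over meanings suggests the model is guessing. The `same_meaning` check and the toy data here are placeholders; in practice an entailment model would judge whether two answers mean the same thing.

```python
import math

def semantic_entropy(answers, same_meaning):
    """Estimate semantic entropy over answers sampled for one prompt.

    answers: list of answer strings sampled from the model.
    same_meaning: callable(a, b) -> bool, True if the two answers are
                  semantically equivalent (e.g. judged by an entailment model).
    """
    # Greedily cluster answers that share the same meaning.
    clusters = []
    for ans in answers:
        for cluster in clusters:
            if same_meaning(cluster[0], ans):
                cluster.append(ans)
                break
        else:
            clusters.append([ans])

    # Probability of each meaning = fraction of samples in its cluster.
    total = len(answers)
    probs = [len(c) / total for c in clusters]

    # Shannon entropy over meanings: high entropy hints at a likely
    # confabulation, low entropy at a stable, consistent answer.
    return -sum(p * math.log(p) for p in probs)


# Toy usage with a crude equivalence check (lowercased match, trailing dot ignored).
answers = ["Paris", "paris", "Paris.", "Lyon"]
naive_same = lambda a, b: a.strip(".").lower() == b.strip(".").lower()
print(semantic_entropy(answers, naive_same))  # low-ish entropy: mostly one meaning
```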

