
New Research from Oxford University Highlights the Dangers of AI-Induced Hallucinations in Science

By Editor

Nov 20, 2023

New Research Warns That the Language Models Behind Chatbots Can Hallucinate False Content

Chatbots are becoming increasingly popular in various industries, including science and education. However, researchers at the Oxford Internet Institute are warning about the potential dangers of using Large Language Models (LLMs) to generate information. These models can hallucinate false content and present it as accurate, posing a direct threat to scientific truth.

The paper published in Nature Human Behaviour highlights that LLMs are designed to produce helpful and convincing responses without any guarantees regarding their accuracy or alignment with fact. While LLMs are often treated as knowledge sources and used to generate information in response to questions or prompts, the data they are trained on may not always be factually correct.

One reason for this is that LLMs are often trained on online sources, which can contain false statements, opinions, and inaccurate information. Users tend to trust LLMs as a human-like information source because they are designed as helpful, human-sounding agents. This can lead users to believe that responses are accurate even when they have no basis in fact or present a biased or partial version of the truth.

Researchers urge the scientific community to use LLMs responsibly by treating them as “zero-shot translators.” This means users should supply the model with the appropriate data and ask it to transform that data into a conclusion or code, rather than relying on the model itself as a source of knowledge. This approach makes it easier to verify that the output is factually correct and consistent with the provided input.
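
To make the idea concrete, here is a minimal sketch of the “zero-shot translator” pattern, assuming a hypothetical call_llm wrapper around whatever chat or completion API a project uses. The prompt supplies all of the data and the model is asked only to transform it, so every claim in the output can be checked against the input.

```python
# A minimal sketch of the "zero-shot translator" pattern described above.
# `call_llm` is a hypothetical stand-in for whatever LLM API a project uses;
# the key point is that the prompt supplies the data and the model is asked
# only to transform it, never to act as a source of knowledge.

def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around an LLM chat/completion endpoint."""
    raise NotImplementedError("Plug in your own LLM client here.")

def summarise_measurements(measurements: list[dict]) -> str:
    """Ask the model to rephrase caller-supplied data, not to recall facts."""
    data_block = "\n".join(
        f"- sample={m['sample']}, value={m['value']} {m['unit']}"
        for m in measurements
    )
    prompt = (
        "Using ONLY the measurements listed below, write a one-paragraph "
        "summary of the observed trend. Do not add any facts that are not "
        "in the list.\n\n" + data_block
    )
    return call_llm(prompt)

# Every figure in the returned summary can be verified against the input list,
# because the model was never asked to contribute knowledge of its own.
example_data = [
    {"sample": "A", "value": 3.1, "unit": "mM"},
    {"sample": "B", "value": 4.7, "unit": "mM"},
]
```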

While LLMs will undoubtedly assist with scientific workflows, it is crucial for researchers to maintain clear expectations of how they can contribute while also being aware of their limitations and potential risks.
