Scientists might have found a way to overcome ‘hallucinations’ that plague AI systems like ChatGPT
Scientists may have created a way to help overcome one of the biggest problems with popular artificial intelligence systems.
A new method might allow those systems to detect when they are “hallucinating”, or making up facts. Such hallucinations are currently a major danger when relying on large language models, or LLMs.
LLMs, such as those that underpin ChatGPT and similar tools, are built to produce language rather than facts. That means they can often produce “hallucinations”, where they make claims that are confidently stated and appear legitimate but actually have no relationship with the truth.
Fixing that problem has proven difficult, in part because new systems produce such plausible-looking text. But solving it is also central to any hope of using the technology in a broad range of applications, since people need to be able to trust that any text produced by the systems is truthful and reliable.
The new method allows scientists to find what they call “confabulations”, when LLMs produce inaccurate and arbitrary text. They often do so when they do not have the knowledge to answer a question.
The method uses a second LLM to check the work of the original one, and a third to evaluate that check. A researcher not involved in the work described it as “fighting fire with fire”, suggesting that LLMs could be a key part of controlling themselves.
The method focuses not on the words themselves but on their meanings. The researchers fed the outputs of the system being checked into another LLM, which worked out whether the statements implied one another, essentially looking for paraphrases.
Those paraphrases could then be used to estimate how likely the original system’s output was to be reliable. The research showed that a third LLM evaluating that work produced roughly the same results as a human evaluator.
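The idea described above, grouping sampled answers that paraphrase each other and measuring how spread out the resulting meaning clusters are, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `toy_entails` check is a hypothetical stand-in for the LLM-based entailment judge the researchers used, and all names here are invented for the example.

```python
from math import log

def cluster_by_entailment(answers, entails):
    """Group sampled answers into meaning clusters: two answers share a
    cluster when each entails the other, i.e. they are paraphrases."""
    clusters = []
    for a in answers:
        for c in clusters:
            rep = c[0]  # compare against the cluster's first member
            if entails(a, rep) and entails(rep, a):
                c.append(a)
                break
        else:
            clusters.append([a])
    return clusters

def semantic_entropy(answers, entails):
    """Entropy over meaning clusters: low when most samples agree on one
    meaning, high when the model's answers scatter across many meanings."""
    clusters = cluster_by_entailment(answers, entails)
    n = len(answers)
    return -sum((len(c) / n) * log(len(c) / n) for c in clusters)

# Toy entailment check (assumption): treats case-insensitive string
# equality as mutual entailment. A real system would query an LLM or
# an NLI model here.
def toy_entails(a, b):
    return a.lower() == b.lower()

samples = ["Paris", "paris", "Paris", "Lyon", "Rome"]
print(semantic_entropy(samples, toy_entails))
```

A high value flags a likely confabulation: the model gives semantically different answers each time it is asked, suggesting it does not actually know the answer.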
The system could be valuable in making LLMs more reliable, and therefore usable across a broader set of tasks and in more important settings. But it could also bring other dangers, scientists warned.
As we look further into using LLMs for this purpose, “researchers will need to grapple with the issue of whether this approach is truly controlling the output of LLMs, or inadvertently fuelling the fire by layering multiple systems that are prone to hallucinations and unpredictable errors,” wrote Karin Verspoor, from the University of Melbourne, in an accompanying article.
The work is described in a new paper, ‘Detecting hallucinations in large language models using semantic entropy’, published in Nature.