Google’s AI Overviews, designed to provide quick answers to search queries, are reported to produce “hallucinations” of false information and to undercut publishers by drawing users away from traditional links.
The Big Tech giant – which landed in hot water last year after releasing a “woke” image tool that generated pictures of female popes and Vikings – has drawn criticism for offering false and sometimes dangerous advice in its AI Overviews, according to The Times of London.
In one case, the AI summaries advised adding glue to pizza sauce to help the cheese stick better, the outlet reported.
In another, it presented a made-up phrase – “you can’t lick a badger twice” – as a legitimate idiom.
The hallucinations, as computer scientists call them, are compounded by the tool’s tendency to reduce the visibility of reputable sources.
Instead of sending users directly to websites, it summarizes information from the search results and presents its own AI-generated answer alongside a handful of links.
Laurence O’Toole, founder of the analytics firm Authoritas, studied the tool’s impact and found that click-through rates to publishers’ websites fall by 40%-60% when AI Overviews are shown.
“While these were generally for questions that people don’t usually ask, it highlighted some specific areas that we needed to improve,” Liz Reid, Google’s head of Search, told The Times in response to the pizza glue incident.
The Post has reached out to Google for comment.
AI Overviews were introduced last summer and are powered by Google’s Gemini language model, a system similar to OpenAI’s ChatGPT.
Despite the public concerns, Google CEO Sundar Pichai has defended the tool in an interview with The Verge, saying it helps users discover a wider range of information sources.
“Over the past year, it’s clear to us that the breadth of areas we are sending people to is growing … we are definitely sending traffic to a wider range of sources and publishers,” he said.
Google also appears to downplay its own hallucination rate.
When a reporter asked Google how often its AI gets things wrong, the tool claimed a hallucination rate of between 0.7% and 1.3%.
However, data from the AI monitoring platform Hugging Face showed that the current rate for the latest Gemini model is 1.8%.
Google’s models also appear to offer pre-programmed defenses of their own behavior.
Asked whether it “steals” artists’ work, the tool said it does not “steal art in the traditional sense.”
When asked whether people should be afraid of it, the tool walked through some common concerns before concluding that “fear might be overblown.”
Some experts worry that as generative AI systems become more complex, they are also becoming more prone to errors – and that even their creators cannot fully explain why.
Concerns about hallucinations go beyond Google.
OpenAI recently admitted that its newest models, known as o3 and o4-mini, hallucinate more often than previous versions.
Internal testing showed that o3 made up information in 33% of cases, while o4-mini did so 48% of the time, particularly when answering questions about real people.