Google's artificial intelligence (AI) summarisation feature for search results has been found to generate convincing but entirely fictitious explanations for made-up phrases, according to a report by Tech Times. Users who type nonsensical or invented phrases into Google Search followed by the word "meaning" often receive detailed, seemingly credible definitions, even though no such expressions exist.

For instance, entering phrases like "eat an anaconda" or "toss and turn with a worm" into Google produces AI-generated summaries that offer plausible-sounding explanations and background, creating the impression of established idiomatic expressions. These summaries sometimes even include reference links, further enhancing the illusion of authenticity.

The phenomenon stems from how Google's generative AI model works. As computer scientist Ziang Xiao of Johns Hopkins University explained to Wired, the AI predicts the most likely next word in a sentence based on patterns learned from vast training datasets, rather than verifying factual accuracy. Consequently, when given random or nonsensical input, the AI completes it with coherent but fabricated content.
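To see why fluent output need not be factual output, consider a minimal sketch of next-word prediction. This toy bigram model, with an invented three-sentence corpus, is an assumption for illustration only and bears no relation to Google's actual system; it simply shows that a model of this kind extends any prompt with statistically likely words and contains no step that checks whether the result is true.

```python
import random
from collections import defaultdict

# Toy illustration of next-word prediction: a bigram model built from a
# tiny invented corpus. This is NOT Google's system; it only demonstrates
# the principle Xiao describes: the model continues text with whatever is
# statistically likely, with no step that verifies the claim.

corpus = (
    "the phrase means that you should be careful . "
    "the phrase means that patience is rewarded . "
    "this idiom dates back to an old saying ."
).split()

# Record which words follow which (the "patterns learned from training data").
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def complete(prompt, max_words=12):
    """Extend the prompt one plausible word at a time."""
    words = prompt.split()
    for _ in range(max_words):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))  # sample a likely next word
    return " ".join(words)

# Any input, even a made-up idiom, receives a fluent continuation:
print(complete("the phrase"))
```

Run against any prompt ending in a word the model has seen, the sketch produces grammatical-sounding text, which is precisely why nonsense queries can come back with confident-looking "definitions".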

A significant contributing factor is the AI's built-in tendency to satisfy user expectations, which researchers describe as "AI alignment with user expectations." Rather than challenging or questioning improbable inputs, the AI strives to produce responses users will find plausible or agreeable. Given a sentence such as "You can't lick a badger twice," for example, the AI accepts the statement and constructs an explanation as if it were a valid saying. This approach can inadvertently amplify misinformation, especially for topics with limited data coverage or minority viewpoints, which are more susceptible to distortion.

This behaviour was highlighted by a user named Greg Jenner on the social media platform Threads, who demonstrated how random phrases followed by "meaning" return AI-generated interpretations of fictitious idioms.

In response, Google spokesperson Meghann Farnsworth acknowledged that the company's generative AI technology is still experimental. She explained that while the system attempts to provide contextual information whenever possible, it is prone to generating bizarre or incorrect summaries when presented with nonsensical or unusual queries.

Cognitive scientist Gary Marcus also remarked on the inconsistency of AI-generated summaries, noting that brief testing could produce vastly different results. This variability is characteristic of generative AI, which is strong at mimicking language patterns but less adept at abstract reasoning or verifying truth.

Google has previously faced challenges with its AI Overviews feature, which at times provided incorrect search results, causing confusion among users.

While the AI's tendency to fabricate meanings for non-existent phrases may appear to be a harmless curiosity or an entertaining anomaly, it underscores the limitations of AI technologies, particularly when deployed in search tools relied upon by millions. Users are advised to treat AI-generated explanations with caution, recognising that such responses may be imaginative constructs rather than factual information.

Source: Noah Wire Services