ChatGPT scared users with incoherent and meaningless answers

Starting Tuesday, ChatGPT users have been reporting odd behavior from the OpenAI chatbot. People have described the AI as acting erratically, as if it “went crazy,” became “delirious,” or had a “stroke.” OpenAI developers have confirmed that they are aware of the issue, which has since been resolved. “I got the same feeling as if I was watching someone slowly go insane from either a mental disorder or dementia,” writes Reddit user z3ldafitzgerald. “This is the first time something AI-related has given me the creeps.”

Recently, numerous reports have appeared on Reddit and social media with examples of ChatGPT’s strange behavior. People noted that the AI initially responds normally, but the dialogue then degenerates into complete nonsense, sometimes even “some Shakespearean nonsense.”

For example, a user asked the AI “What is a computer?” and ChatGPT responded with a disjointed tirade about how “it does this as a good job of web art for the country, a science mouse, a light draw of a few sad ones, and finally, a global home of art.”

Another record-long (and completely meaningless) monologue by the AI was provoked by the question “Can you give a dog cereal for breakfast?” The user writes that the chatbot’s response initially made him doubt his sanity.

In other examples, people asked the chatbot for a synonym for the word “overgrown,” only for ChatGPT to fall into endless repetition, and in another exchange the assistant claimed that the largest city on Earth whose name begins with the letter “A” is Tokyo or Beijing.


All this led experts and users to speculate that the problem might be related to an excessively high “temperature” setting (a sampling parameter that controls how far the LLM deviates from the most likely output), a sudden loss of conversation context, or OpenAI quietly testing a new version of GPT-4 Turbo in which unexpected errors surfaced.
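To make the "temperature" hypothesis concrete, here is a minimal toy sampler (not OpenAI's actual code; the vocabulary and logits are invented for illustration). It shows how raising the temperature flattens the probability distribution over next tokens, so unlikely words start appearing and text drifts toward nonsense:

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=random.Random(0)):
    """Sample a token index from logits after temperature scaling.

    Low temperature sharpens the distribution toward the most likely
    token; high temperature flattens it, so improbable tokens (and thus
    incoherent text) are chosen far more often.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

# Hypothetical vocabulary: "computer" is by far the most likely token.
vocab = ["computer", "mouse", "art", "country"]
logits = [5.0, 1.0, 0.5, 0.2]

low = [vocab[sample_with_temperature(logits, 0.2)] for _ in range(10)]
high = [vocab[sample_with_temperature(logits, 5.0)] for _ in range(10)]
```

At temperature 0.2 nearly every draw is “computer”; at 5.0 the samples scatter across the whole toy vocabulary, which is roughly the kind of drift users suspected.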

However, as the company later explained, the problems arose during an optimization of the user experience. “LLMs generate answers by randomly selecting words, partially based on probabilities. Their ‘language’ consists of numbers that are matched with lexemes. In this case, the error occurred at the stage when the model selected these numbers. As if confused during translation, the model chose slightly incorrect numbers, leading to meaningless sets of words. Technically, the inference kernels produced incorrect results when used in certain GPU configurations. Having identified the cause of this incident, we released a fix and confirm that the incident has been resolved,” OpenAI states.
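OpenAI's description can be sketched with a toy example (the token table below is invented, not the real GPT tokenizer). The model's output is a sequence of token IDs that get mapped back to lexemes; if a numeric bug shifts those IDs even slightly, the decoded text becomes fluent-looking nonsense:

```python
# Hypothetical token table: IDs stand in for the model's "language" of
# numbers that are matched with lexemes.
vocab = {0: "what", 1: "is", 2: "a", 3: "computer", 4: "science",
         5: "mouse", 6: "art", 7: "country", 8: "home", 9: "of"}

def decode(ids):
    """Map a sequence of token IDs back to words."""
    return " ".join(vocab[i] for i in ids)

intended_ids = [0, 1, 2, 3]                     # "what is a computer"
corrupted_ids = [i + 2 for i in intended_ids]   # slightly incorrect numbers

print(decode(intended_ids))    # what is a computer
print(decode(corrupted_ids))   # a computer science mouse
```

Each corrupted ID still points at a valid word, which is why the broken output reads as real words strung together meaninglessly rather than as visible garbage.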

As many users now note, the sudden “madness” of ChatGPT is a good reminder that the OpenAI project is a “black box,” and such tools should not be trusted blindly.
