ChatGPT And Google’s Bard Provide Misinformation

Microsoft launched its new ChatGPT-powered Bing a few weeks ago, and Google has now followed suit with its own AI bot, Bard. As the two search engines duel for users, the amount of misinformation, and in some cases outright nonsense, is growing.

In early February, Microsoft presented the new version of its Bing search engine, which includes a dedicated area where you can chat with OpenAI's ChatGPT technology and its successor, GPT-4. In concrete terms, this means you can converse with a chatbot that provides advanced and, above all, human-sounding answers. That triggered a "code red" at Google, and since yesterday the Californian competitor officially has its own chat AI called Bard.

Misunderstandings between bots

The two chatbots now "know" about each other, but that does not bode well. As The Verge reports, the two AIs disparage each other and spread falsehoods about themselves. There is certainly no intent behind this, but the episode illustrates just how much potential for misinformation Bard and the ChatGPT-powered Bing carry.

Microsoft's Bing, for example, recently answered "yes" when asked whether Google's Bard had been discontinued. The chain of evidence behind that answer shows how information gets distorted, much like in a game of telephone: in its (false) reply, Bing cites an article discussing a tweet about an answer from Bard, which in turn traces back to a joke comment on Hacker News predicting that exactly this could happen.

Essentially, this shows that an AI can sound human without necessarily grasping the nuances of human communication. Microsoft was quick to fix the error, but according to The Verge, it is a prime example of how quickly such artificial "intelligence" can produce a hoax.

The Verge, which has dubbed the situation a "shit show," writes: "We're dealing with a first sign that we're stumbling into a giant game of AI misinformation, where chatbots are unable to reliably evaluate news sources, misread stories about themselves, and misreport their own abilities. In this case, it all started with a single joking comment on Hacker News."

The author notes that all of this happened unintentionally, but one can only imagine what could happen if someone deliberately set out to manipulate such systems: "It's a ridiculous situation, but one with potentially serious consequences."