AI debacle: Apple spreads fake news
The organization Reporters Without Borders has called on Apple to remove its newly launched AI news summarization feature, because it simply does not work correctly and generates false statements.
Hallucinations in the electronic brain
In one example of an AI error, a push notification incorrectly reported that Luigi Mangione, the suspect in the killing of the CEO of the US company UnitedHealthcare, had shot himself. The BBC, whose original report was the source of the summary, then contacted Apple to flag the problem and request a correction.
Vincent Berthier, head of technology and journalism at Reporters Without Borders, called on Apple in a statement to “act responsibly and remove this feature.” Berthier criticized the fact that the AI works “on a probabilistic basis” and therefore cannot reliably deliver facts. False information spread by AI in the name of media outlets, he argued, threatens both the credibility of the affected outlets and the public’s right to reliable information.
The organization expressed broader concern about the risks posed by the use of AI in the media. The incident, it said, makes clear that the technology is still “too immature” to reliably deliver information to the general public.
Under someone else’s logo
Apple’s generative AI feature was announced in the US in June and is intended to condense news reports into compact formats such as paragraphs or bullet points. Since its public launch in October, the feature has made repeated mistakes. Another example: the AI incorrectly summarized a New York Times article, claiming that Israeli Prime Minister Benjamin Netanyahu had been arrested.
In fact, the International Criminal Court had issued an arrest warrant for Netanyahu; he had not been arrested. Such incorrect summaries not only risk spreading disinformation, they can also damage the credibility of news organizations: the summaries ultimately appear under the logo of the respective media outlet, which has no influence over their content.