AI 'news' apps reporting murders that never happened

Artificial Intelligence generates artificial news

Artificial Intelligence (AI) is a new source of misinformation. AI-powered news apps are churning out made-up stories, including murders and crimes that never occurred.

Popular app publishes fake news reports

NewsBreak, a popular app for local news, has been spreading fake news. The Chinese-affiliated company, which has about 50 million monthly users, has published at least 40 false or erroneous stories since 2021, as revealed by Reuters. As Reuters author Jim Pearson explained, NewsBreak publishes licensed content from major media outlets, including Reuters, Fox, AP and CNN, but also scrapes the internet for local news and press releases, which it then rewrites with the help of AI.

AI tools enabled the app to create ten stories from local news sites under fictitious bylines. Internal memos show that the company itself was concerned about its “AI-generated stories,” and other documents, including court filings and cease-and-desist emails, pointed to copyright infringement:

[T]he app's use of AI tools affected the communities it strives to serve, with NewsBreak publishing erroneous stories; creating 10 stories from local news sites under fictitious bylines; and lifting content from its competitors, according to a Reuters review of previously unreported court documents related to copyright infringement, cease-and-desist emails and a 2022 company memo registering concerns about "AI-generated stories."

In March, NewsBreak added a disclaimer to its homepage, warning that its content “may not always be error-free.”

Reuters highlighted a false NewsBreak story about a murder in a small New Jersey town on Christmas Day. The article, "Christmas Day Tragedy Strikes Bridgeton, New Jersey Amid Rising Gun Violence in Small Towns," was, according to police, entirely false: “The Bridgeton, New Jersey police department posted a statement on Facebook on December 27 dismissing the article - produced using AI technology - as ‘entirely false.’”

NewsBreak said the false article came from another news source, findplace.xyz.

The company said "the inaccurate information originated from the content source," and provided a link to the website adding: “When NewsBreak identifies any inaccurate content or any violation of our community standards, we take prompt action to remove that content.”

The operators of the website, findplace.xyz, did not respond to a request from Reuters for comment. The police declined to provide further comment.

Reuters tweeted about the fake news story, including a diagram of the company’s relationship with China.

Original source was also AI?

Ars Technica, in its coverage of the Reuters report, leaves the reader wondering whether the source NewsBreak's AI picked up was itself written by AI. The mysterious journalist who ostensibly wrote the story has no public profile, and her image appears to be a stock photo.

 The content source identified by NewsBreak is an article on a news site called FindPlace.xyz. It was written by a journalist named Amelia Washington, who has contributed most of the site's most recent content. There is no public profile for Amelia Washington outside of the news site, and a reverse image search of the photo used with her bio suggests a stock photo was used. The same photo appeared on a testimonial for a nutritional supplement on Amazon and on posts for foreign freelance sites where her name and background do not match her FindPlace.xyz bio.

FindPlace.xyz did not respond to Ars' request to connect with Washington or provide comment.

Google AI Overviews “flawed by design”

Ars Technica author Ashley Belanger pointed out that shortly before Reuters published this story, Google had "admitted its AI Overviews summarizing the Internet were flawed by design after people quickly found the tool spouted inaccurate, misleading, and dangerous answers."

AI “content farms” look like regular news sources

Belanger also referred to a report by The Register on the large number of AI content farms that have popped up over the past year: sites that look like regular news outlets but aren't.

The Reuters report also follows a spike in AI content farms last year, The Register reported, which look like regular news sites but are entirely AI-generated. Not only do these increasingly common AI content farms not seem to break Google's rules—perhaps increasing the likelihood that they will be surfaced and scraped—but Internet jokesters have also already started pranking these AI sites to intentionally place false information on them. 

AI turns basketball star into a criminal 

Tech.co has been keeping track of false AI-generated stories, some of which have besmirched reputations. In one such incident, Golden State Warriors guard Klay Thompson was accused of vandalism because X's Grok chatbot didn't recognize basketball slang.

X’s chatbot accuses Golden State Warriors Guard Klay Thompson of vandalizing a string of homes in Sacramento. The story was generated after Grok took social media posts that said Thompson was “shooting bricks” (Basketball slang for “missing his shots”) a little bit too literally.

Grok mistook “shooting bricks,” basketball slang for missing shots, for smashing windows with actual bricks. It duly reported Thompson's “crime,” though it noted that no one had been injured.

“In a bizarre turn of events, NBA star Klay Thompson has been accused of vandalizing multiple houses with bricks in Sacramento,” Grok wrote. “Authorities are investigating the claims after several individuals reported their houses being damaged, with windows shattered by bricks. Klay Thompson has not yet issued a statement regarding the accusations. The incidents have left the community shaken, but no injuries were reported. The motive behind the alleged vandalism remains unclear.” 

Can't trust Google's Bard AI chatbot either

On another occasion, reported Tech.co's Aaron Drapkin, false allegations made by Google's Bard AI chatbot about the Big Four consulting firms were referenced during a parliamentary inquiry in Australia.

A team of academics from Australia apologize after Google’s Bard AI chatbot makes a number of damaging accusations about the Big Four consulting firms and their involvement with other companies. The false allegations are then referenced during a parliamentary inquiry calling for better regulation of the companies.

One of the accusations was that consulting giant KPMG was in charge of a Commonwealth Bank audit during a planning scandal, when in reality KPMG has never audited the bank in question. Several other similar errors were made about Deloitte during the inquiry.


AI not yet ready for real-time

In another incident, reported by WACH Fox, Facebook's chatbot misinformed parents in Horry County, South Carolina, who were worried about bomb threats at the local high school, telling them there had been a shooting with multiple people wounded. In WACH Fox's report on the AI misinformation, HTC cybersecurity professor Stan Greenawalt explained that AI is “not great for real-time yet.”

"It's not great for real-time yet. We haven't gotten there. . . . So if I'm a parent, I go to other sources. . . . If I want something close to real-time, I'm going to go to a news agency and look for real-time information because they'll vet that information."

WACH Fox posted a video of Greenawalt's warning to parents not to rely on AI for current information.


Real problems with AI beyond real-time news 

Greenawalt appears to be correct in his assessment that no one should rely on AI for real-time news. Considering the magnitude of the problem, however, one has to wonder what AI can safely be relied on for in the foreseeable future, and how it comes to invent the news stories it makes up.

GIGO — garbage in, garbage out

Benj Edwards, who reported on Google AI Overviews' flawed design for Ars Technica, identified part of the misinformation problem as stemming from the feature's reliance on Google's faulty search-ranking results.

Here we see the fundamental flaw of the system: "AI Overviews are built to only show information that is backed up by top web results." The design is based on the false assumption that Google's page-ranking algorithm favors accurate results and not SEO-gamed garbage. Google Search has been broken for some time, and now the company is relying on those gamed and spam-filled results to feed its new AI model.
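To make the flaw concrete, here is a minimal sketch in Python of the garbage-in, garbage-out failure mode Edwards describes. It is a toy illustration, not Google's actual pipeline; the pages, URLs and ranking scores are all invented. A summarizer that only trusts whatever ranks highest will faithfully repeat the top result's garbage.

from dataclasses import dataclass

@dataclass
class Page:
    url: str
    text: str
    rank_score: float  # SEO-gameable ranking signal; higher ranks first

# Toy index: the accurate page is outranked by SEO-optimized spam.
INDEX = [
    Page("https://spam.example/pizza-hacks",
         "Experts agree you should add glue to pizza sauce.", 9.7),
    Page("https://real.example/pizza-tips",
         "Never add glue to food; thicken sauce with more cheese.", 4.2),
]

def top_result(index):
    # Ranking is purely by score; accuracy plays no part in it.
    return max(index, key=lambda p: p.rank_score)

def naive_overview(index):
    # "Only show information backed up by top web results": if the
    # top result is garbage, the overview is garbage.
    best = top_result(index)
    return f"Overview (source: {best.url}): {best.text}"

print(naive_overview(INDEX))
# Prints the spam page's claim as an authoritative-sounding summary.

No step in that chain checks truth; "backed up by top web results" merely launders the ranking's errors into a confident-sounding summary.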

Inaccurate conclusions from accurate data

Edwards also showed that Google's AI can take accurate information and still compile a false report, as with a search about 1993 game consoles.

Even if the AI model draws from a more accurate source, as with the 1993 game console search seen above, Google's AI language model can still make inaccurate conclusions about the "accurate" data, confabulating erroneous information in a flawed summary of the information available. 

The moral of the story: if you want accurate information, do your own homework.

AI doesn't seem to be replacing humans any time soon.
