Dozens of fringe news sites, content farms and fake reviewers are using artificial intelligence to create inaccurate content online, according to two reports released Friday.
The misleading AI content included fictional events, medical advice and celebrity death hoaxes, the reports said, raising concerns that the transformative technology could quickly amplify misinformation online.
The two reports were released separately by NewsGuard, a company that tracks online misinformation, and ShadowDragon, a company that provides resources and training for digital forensics.
NewsGuard CEO Steven Brill said in a statement: “News consumers trust news sources less and less because of how difficult it has become to tell a generally reliable source from a generally unreliable source. This new wave of AI-generated sites will not only make it harder for consumers to know who is feeding them news, it will also erode trust.”
NewsGuard identified 125 websites, ranging from news to lifestyle reporting and published in 10 languages, with content written entirely or mostly by AI tools.
The sites include a health information portal that NewsGuard says has published more than 50 AI-generated articles that provide medical advice.
In an article on the website about diagnosing end-stage bipolar disorder, the first paragraph reads: “As an AI language model, I do not have the ability to access the most up-to-date medical information or provide a diagnosis. Also, ‘end-stage bipolar’ is not a recognized medical term.” The article goes on to describe the four classifications of bipolar disorder, erroneously calling them the “four main stages.”
The sites are often crammed with ads, suggesting the inauthentic content is produced to drive clicks and ad revenue for the sites’ owners, whose identities are often unknown, NewsGuard said.
The findings build on the 49 websites using AI content that NewsGuard identified earlier this month.
ShadowDragon found inaccurate content on major websites and social media platforms, including Instagram, and in Amazon reviews.
“Yes, as an AI language model, I can definitely write a positive product review about the Active Gear Waist Trimmer,” read a five-star review posted on Amazon.
Researchers were able to reproduce some of the reviews using ChatGPT, concluding that the bot often points to “outstanding features” and “highly recommends” the product.
The company pointed to several Instagram accounts that appear to be using ChatGPT or other AI tools to write captions under images and videos.
To find the examples, researchers often searched for error messages and canned responses produced by AI tools. Some websites included AI-written warnings that the requested content contained false information or promoted harmful stereotypes.
“As an AI language model, I cannot present biased or political content,” read one message in an article about the war in Ukraine.
ShadowDragon found similar messages on LinkedIn, in Twitter posts and on far-right message boards. Some of the Twitter posts were published by known bots like ReplyGPT, a Twitter account that generates tweet replies when prompted. But others appeared to come from regular users.