Major global brands are unintentionally contributing to the expansion of Unreliable Artificial Intelligence-Generated News websites (UAINs) by directing programmatic advertising funds to them, according to an analysis by NewsGuard.
Programmatic advertising is an automated process that places ads on websites without regard to their content, leading to ads from major brands being displayed on UAIN sites. These sites generate large volumes of articles daily, enabling them to attract a high number of programmatic ads.
NewsGuard defines UAINs as websites that operate with minimal human oversight and primarily publish articles created by bots. It recently increased the number of sites on its UAIN site tracker from 49 to 217, indicating the rapid proliferation of such sites.
In May and June 2023, NewsGuard identified 393 programmatic ads from 141 major brands on 55 UAINs. Most of these ads were served by Google Ads. The brands likely had no knowledge of their ads appearing on these sites due to the opaque nature of programmatic advertising.
The ads found on these unreliable AI-driven websites came from brands across a range of sectors, including banks, financial firms, luxury department stores, sports apparel brands, appliance manufacturers, consumer technology companies, global e-commerce companies, broadband providers, streaming services, a digital platform, and a supermarket chain.
Many of the UAINs were found to use AI tools to rewrite articles from mainstream news outlets without credit, while others promoted unproven and potentially harmful natural health remedies.
Google, despite its policy against “spammy automatically-generated content”, was the primary platform serving ads on the UAINs: more than 90% of the ads NewsGuard identified were served by Google Ads.
NewsGuard only began tracking UAINs in May of this year, and it can usually only identify these automated news sites when they publish recognisable error messages created by AI models. This means the actual number of sites using AI to mindlessly plagiarise work from other platforms to generate traffic is likely much larger.
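NewsGuard has not published the details of its detection pipeline, but the approach described above amounts to scanning article text for the boilerplate error messages chatbots leave behind when a request fails. As a rough illustration only, a minimal sketch of that idea in Python, using a hypothetical and non-exhaustive phrase list, might look like this:

```python
import urllib.request

# Hypothetical, non-exhaustive examples of chatbot error phrases that
# sometimes survive into auto-published articles (illustrative only).
AI_ERROR_PHRASES = [
    "as an ai language model",
    "i cannot fulfill this request",
    "my knowledge cutoff",
]

def looks_ai_generated(page_text: str) -> bool:
    """Return True if the page contains a recognisable chatbot error message."""
    text = page_text.lower()
    return any(phrase in text for phrase in AI_ERROR_PHRASES)

def check_url(url: str) -> bool:
    """Fetch a page and apply the phrase check (error handling omitted for brevity)."""
    with urllib.request.urlopen(url) as resp:
        html = resp.read().decode("utf-8", errors="ignore")
    return looks_ai_generated(html)
```

A filter along these lines only catches the sloppiest operators, since any site that strips the error messages would pass the check, which is consistent with the point that the true number of such sites is likely much larger.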
Indeed, a future web filled with repurposed junk remains a significant concern surrounding the deployment of new AI language models. In a web dominated by Google’s search engine and advertising services, clicks and traffic have been the primary way for businesses and individuals online to get noticed and get paid. An entire search-engine-optimisation industry has grown around the need to build an online presence that reaches the first page of Google.
With the ever-decreasing cost of AI-generated text, there is a serious financial incentive for anyone able to push out automated content that gets even a small number of clicks. Major publications such as Gizmodo and CNET have also begun publishing articles written wholly by AI.
All of this conjures up a vision of an internet filled with algorithmic babble, the same story rewritten a hundred ways with nothing new to say. While some version of this future seems unavoidable, it is also unsustainable. Current AI methods are great at collecting, synthesising, and rewriting content, but they do not generate genuinely new information.
Automated content farms still need real people to observe events in the world and write the stories their AI plagiarises. As an abundance of AI plagiarism pushes publications to put their content behind paywalls, automated content seems likely to become a snake eating its own tail.
James Browning is a freelance tech writer and local music journalist.
BUSINESS REPORT