Robert Farago

AI Model Collapse Explained

The Truth About AI

To create the large language models upon which AI chatbots depend, OpenAI and the rest of the AI industrial complex scraped the entire world wide web. It was, and is, the greatest copyright theft in the history of the world. And now AI chatbots are taking their victims’ jobs.

As we’ve reported, the avaricious mainstream media is replacing human-generated text with AI-generated content. As AI reduces the number of two-legged reporters, writers, editors and commentators, it’s reducing both the amount and the quality of the content it can plunder going forward.

If this feedback loop continues, at some point AI chatbots will fall into what’s called “model collapse.” They’ll be basing their replies on… their replies.
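For the nerds, the mechanism can be sketched in a few lines. This is an illustrative toy, not anyone’s actual training pipeline: each “generation” below re-trains by resampling from the previous generation’s output, a stand-in for chatbots ingesting chatbot text. Finite sampling drops rare words, and once gone they never come back, so lexical diversity only ratchets down.

```python
import random

def train_on_own_output(corpus, generations, sample_size, seed=0):
    """Toy model of collapse: each generation, 'train' by resampling
    (with replacement) from the previous generation's output.
    Returns the count of distinct words after each generation."""
    rng = random.Random(seed)
    diversity = [len(set(corpus))]
    for _ in range(generations):
        # The new corpus contains only words from the old one,
        # so the distinct-word count can never increase.
        corpus = [rng.choice(corpus) for _ in range(sample_size)]
        diversity.append(len(set(corpus)))
    return diversity

corpus = [f"word{i}" for i in range(1000)]  # 1,000 distinct "words"
history = train_on_own_output(corpus, generations=20, sample_size=1000)
print(history[0], "->", history[-1])
```

Run it and the vocabulary collapses from 1,000 distinct words to a fraction of that within twenty generations: the statistical skeleton of “replies based on replies.”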

Although some are looking forward to that day, not everyone is at risk.

Big data companies that aren’t in the business of providing content for the gen pop – banks, brokers, hospitals, law firms, etc. – are addressing the issue of AI model collapse.

To limit legal liability (for plagiarism, racism, stupid mistakes, etc.) and security breaches, they’re implementing “walled garden” AI systems: apps that draw on internally generated or approved material, and nothing else.

This kinda sorta tackles the problem of AI hallucinations: “misinformation” spit out by large language models. Walled-garden AI still needs fact checkers, but fewer of them, less vexed. For now.

Equally important, a walled garden AI system won’t send data back to the AI industry, which logs every query and response, and all the other “private” information it can grab. In-house AI is also less susceptible to hackers and ransomware attacks.

Moving on, what impact would an AI model collapse have on hundreds of millions of independent ad and sales-supported websites and blogs? Severely degraded AI content would help them, right? If AI acts like it’s on LSD, info seekers will go back to surfing for reliable sites. Yeah. No.

First, accuracy concerns, like privacy concerns, are vastly overrated. When AI got Star Wars chronology wrong on Gizmodo, the number of people who stopped using AI as a search engine was, uh, none. And now a word from our sponsor:

“A lot of people think of ChatGPT as a search engine, but it’s not,” an OpenAI spokeswoman told the Wall Street Journal. Fuck yes it is.

What’s the best used pickup? Ask AI. What’s a good recipe for fettuccine Alfredo? Ask AI. Why do Rhode Islanders prattle on about the sinking of the Gaspee? Ask AI.

Unless you’re asking AI to do your taxes, and maybe even then, who cares if it gets it wrong? Who notices?

Second, by the time an AI model collapse stimulates readers to return to “real” writers – in theory – it’ll be too late for the smalls. Traffic gone. Websites gone.

Websites depend on Google search. AI chatbots are giving Google search the bum’s rush. Google’s Search Generative Experience might slow this extinction-level event, a bit. But probably not.

How will smaller text-based websites generate life-sustaining traffic, of which there’s less and less, thanks to an increasingly illiterate audience spending its time flipping through AI-selected videos? Hell if I know. Anyway, good news! AI model collapse is not a done deal.

As mentioned at the top of this post, AI chatbots are based on content scraped from hundreds of millions of websites. For which OpenAI, Alphabet, Microsoft and the rest paid precisely nothing. It’s all in the public domain! So FOAD.

A class action lawsuit – the first of many – disagrees. Despite Google’s high-priced lawyers (who no doubt saw this coming), I reckon authors/websites will score a significant payout. The lawyers, mostly. Eventually.

How the billion dollar bonanza will be distributed is anyone’s guess. There’ll be enough money sloshing around to… wait for it… pay writers. Hell, the AI industry might hire its own writers to generate content and/or maintain the guardrails.

Nah. If Big AI does pay for non-AI content to power their AI content, odds are they’ll shovel cash at “trusted sites.” News and opinion orgs that give gravitas to the bots while adhering to their masters’ political bias. Like… The New York Times. The Washington Post. NBC.

Big News would love an AI model collapse! Fewer competitors that are harder to find, with crappier content? What’s not to love?

Whether or not AI model collapse is a thing, independent text-based truth tellers are still looking at a less literate, less editorially diverse, less brave new world.

A word of advice to my fellow soldiers: you can’t stop the signal. By the same token, don’t expect the signal to stop itself.


