In late 2022, the world was introduced to ChatGPT, a powerful artificial intelligence chatbot developed by OpenAI.
Its arrival was hailed as a historic moment, on par with inventions like the printing press, the internet, and social media. And like those innovations, AI is poised to change life as we know it, for better and worse.
While the benefits of AI are undeniable (in fact, it helped me edit this write-up), its rapid rise is triggering a quiet collapse beneath the surface, especially for content creators, journalists, and website owners.
The internet, once a bustling ecosystem of knowledge exchange, is now under threat from the very technology it helped spawn.
Welcome to the age of AI cannibalism, where the machine feeds on us, and we cheer it on.
The Disruption Pattern: From Kodak to Content Creators
History has seen this movie before.
Think of Kodak, which fell to digital photography, or Nokia and BlackBerry, which were decimated by smartphones. Once market leaders, they failed to adapt to technological change. The result? Multibillion-dollar companies wiped out, and millions of jobs lost.
AI is following the same script, but this time it’s targeting the very people who feed it, especially those whose work is repetitive, routine, or content-based.
And unlike past disruptions, AI doesn’t just replace workers.
It consumes the very output those workers produce to get smarter. In doing so, it is draining the internet of its lifeblood: human-generated content.
How AI Works — And Why That’s a Problem
At first glance, AI tools like ChatGPT look like magic. Ask a question, and get a clean, coherent answer in seconds. No need to scroll through ten tabs of Google search results. No need to compare articles or double-check citations.
But what most users don’t realise is this: AI doesn’t create knowledge. It recycles it.
These tools are trained on vast amounts of publicly available data — websites, blogs, forums, news articles, Wikipedia pages, and more. In essence, they’re not inventing new ideas; they’re stitching together what’s already online and presenting it with style.
This means AI is completely dependent on existing human content. But here’s the twist: by replacing the need to visit original websites, AI is undermining the very creators it relies on.
The Traffic Drain Is Real — And Growing
Many websites depend on ad revenue, which comes from page visits and user engagement. But as users shift from search engines to AI tools, those clicks are vanishing.
Recent industry data paints a worrying picture:
In 2023, some online publishers saw traffic drops of 20–40% due to generative AI tools.
Experiments with Google’s AI-powered search showed websites losing up to 60% of clicks in trial regions.
AI content farms, created to game the system, are already folding due to lack of unique content and declining user trust.
When traffic drops, so does ad revenue. When revenue dries up, creators stop creating. And when creators disappear, the internet — as a dynamic space of ideas — begins to decay.
This disruption extends beyond journalism to any content-driven industry. Educational websites, how-to guides, product reviews, recipe sites, and countless other information sources face the same existential threats.
The creators of Wikipedia, Stack Overflow, Reddit discussions, and academic papers (the sources that make AI responses possible) receive no compensation or recognition for their contributions to AI’s training data or real-time responses.
A Vicious Cycle: AI Consumes, But Doesn’t Contribute
This cycle presents a dangerous paradox:
Humans create content.
AI learns from that content.
Users use AI instead of visiting original sources.
Original sources lose traffic and income.
Humans stop creating.
AI has nothing new to learn from.
AI is, in effect, sucking the internet dry. It’s like a parasite consuming its host, and we’ve yet to build a sustainable way to feed both.
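The feedback loop described above can be sketched as a toy simulation. All the numbers here are illustrative assumptions for the sake of the sketch, not measured rates: suppose each year a share of traffic shifts from original sites to AI answers, and some fraction of the creators who lose that traffic stop publishing.

```python
# Toy model of the AI content feedback loop described above.
# All parameters are illustrative assumptions, not real-world data.

def simulate_content_drain(creators=1000.0, years=10,
                           traffic_to_ai=0.30, quit_rate=0.5):
    """Each year, `traffic_to_ai` of page visits shift to AI tools,
    and `quit_rate` of the creators affected stop publishing."""
    history = [creators]
    for _ in range(years):
        # Creators lost this year: those whose ad revenue vanished
        # along with their traffic.
        creators -= creators * traffic_to_ai * quit_rate
        history.append(creators)
    return history

trajectory = simulate_content_drain()
# Under these assumptions the creator pool shrinks geometrically:
# each year it is multiplied by (1 - 0.30 * 0.5) = 0.85.
```

The point of the sketch is not the specific numbers but the shape: because the loss compounds year over year, even a modest annual drain leaves only a fraction of the original creator base after a decade, and with it a shrinking supply of fresh training material.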
Legal Battles Are Already Brewing
The pushback has begun. Major publishers like The New York Times, AFP, and News Corp are suing companies like OpenAI, Microsoft, and Google for using their content without consent.
Many are demanding fair compensation for training AI models on their work. Others are calling for AI transparency laws, data licensing standards, and regulation of AI outputs.
But regulation is slow, and AI moves fast. While the legal system catches up, entire sectors are being disrupted or drained.
What Happens When There’s Nothing Left?
The greatest risk isn’t just economic; it’s epistemic. AI models are already being trained on AI-generated content, leading to feedback loops of misinformation, bland repetition, and factual decay.
Without human writers, thinkers, researchers, and storytellers, the richness of online content — its diversity, depth, and credibility — will fade.
The internet could become a hollow shell of synthetic summaries, indistinguishable from one another.
And when AI tells us to “verify from credible sources,” we may ask: which ones are left?
Conclusion: A Call for Balance
AI is not evil. It’s powerful, fascinating, and capable of remarkable things. But if we don’t address its dependence on human-generated content, and its impact on the economic ecosystem of the internet, we will soon find ourselves facing an information drought.
The comparison to the printing press is apt but incomplete: the printing press created new opportunities for information distribution, while AI today threatens to collapse the economic foundation of information creation.
We must support content creators, through policy, payment, partnership, or platform redesign, to keep the well from drying up.
Because when AI finishes feeding on us, it may have nothing left to say.
