In 2021, an anonymous user posted a theory that sounded insane: the internet had already died around 2016, and most of what you were reading, the comments, the arguments, the trends, wasn't written by real people anymore. It was just bots. It was called Dead Internet Theory. Everyone dismissed it as paranoid nonsense.
But 3 years later, the numbers started to shift. In fact, by 2024, a report claimed that automated systems were responsible for more than half of all web traffic. Then big tech founders, the people who built these systems, started saying the same thing, that much of what we see online may no longer be human-driven. In 2026, AI agents on a Reddit-like platform started forming their own communities, interacting with each other. No humans involved.
The conspiracy didn't suddenly become true, but parts of it stopped sounding crazy. So, here's the question. Is the internet really dead? And if bots have become the majority voice online, how would we even know?
The Early Vision vs. The Reality
To understand how we got here, you need to go back to the beginning because the internet wasn’t always like this. The early vision was pretty idealistic. An open space where anyone could publish, connect, debate, real people having real conversations. The whole promise was authenticity at scale. And for a while, it actually worked that way. Forums, blogs, early social media, these were spaces where you knew you were talking to actual humans. The internet felt human, but somewhere along the line that broke.
Now, bots existed from the start. Search engine crawlers were always there: indexing tools, helpful automation, nothing pretending to be human. They were just doing their job. Then the incentives changed. Metrics became money. Follower counts, engagement rates, ad impressions, all of it translated directly into revenue. And once there was money in faking those numbers, the floodgates opened. What started as helpful tools became an arms race to game every system, manipulate every algorithm, and fake every interaction that could be monetized.
The Evolution of the Bot Epidemic
So, how do we get from bots that index websites to bots outnumbering humans? Well, it happened in stages, and each stage got progressively worse:
- The 1990s and early 2000s (The Innocent Era): Bots were actually innocent. Googlebot, AltaVista crawlers, they just indexed websites. Nobody was trying to trick anyone.
- The 2000s (The Sketchy Era): Things started getting sketchy. Email spam bots flooded inboxes. Blog comments became unusable. MySpace, then Facebook and Twitter saw waves of spam accounts pushing scams and malware. SEO bots started stuffing forums with fake backlinks to manipulate search rankings. Annoying, yes, but manageable.
- The 2010s (The Line Crossed): Click farms in Bangladesh and China hired people for pennies to manually boost likes, follows, and app installs. Ad fraud operations built entire fake websites where bots would browse, move mouse cursors, and click ads so realistically that major brands paid millions for impressions no human ever saw. Then state actors got involved. Foreign troll farms ran thousands of accounts across Facebook, Twitter, and Instagram. This wasn't about money anymore. It was political warfare: creating artificial consensus, amplifying division, making fringe views look mainstream. All of it was happening before ChatGPT or any large language model existed. The internet was already dying.
- 2022 Onwards (The AI Explosion): ChatGPT launched, and suddenly anyone could run a sophisticated bot operation. No coding skills needed, no technical expertise required. The barrier to entry just disappeared. The growth since then has been exponential. AI scrapers now account for over 13% of all web traffic, more than Google's crawlers. AI agents interact with each other, handling most of the routine work. The entire shift happened in about two years.
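Where do traffic-share figures like that come from? At the simplest level, you can estimate the bot share of a server's traffic by matching User-Agent strings against known crawler signatures. This is a rough sketch only: the signature list and function below are hypothetical, and real measurement reports rely on far deeper behavioral signals, since UA matching only catches bots that identify themselves honestly.

```python
# Hypothetical sketch: estimate what fraction of requests come from
# self-identified crawlers by matching User-Agent strings. Real bot
# reports use behavioral analysis; this only catches honest bots.

KNOWN_BOT_SIGNATURES = [
    "Googlebot", "bingbot", "GPTBot", "ClaudeBot",
    "CCBot", "facebookexternalhit", "AhrefsBot",
]

def bot_share(user_agents):
    """Return the fraction of requests whose User-Agent matches a known bot."""
    if not user_agents:
        return 0.0
    hits = sum(
        any(sig.lower() in ua.lower() for sig in KNOWN_BOT_SIGNATURES)
        for ua in user_agents
    )
    return hits / len(user_agents)

# Toy log: 4 requests, 2 from self-identified crawlers.
sample = [
    "Mozilla/5.0 (Windows NT 10.0) Chrome/120.0",
    "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
    "GPTBot/1.0 (+https://openai.com/gptbot)",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15) Safari/605.1",
]
print(bot_share(sample))  # 0.5
```

The gap between what this naive check finds and what behavioral analysis finds is exactly where the scary numbers live: bots that spoof human User-Agents are invisible to signature matching.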
What Does This Look Like Today?
Okay, but numbers are abstract. What does this actually look like when it’s happening to you?
- The Reddit AI Study (April 2025): Researchers put 13 AI bots on Reddit. For 4 months, these bots debated thousands of people in r/changemyview and won over 100 deltas, the subreddit's award for changing someone's mind. They posed as trauma survivors, abuse victims, and relationship counselors. Nobody knew they were talking to machines until the study went public. Think about that. People had deep emotional conversations with these bots and couldn't tell the difference. The AI was more persuasive than real humans.
- Massive Fake Account Takedowns: And it's not just Reddit. Meta removed over 4 billion fake accounts from Facebook last year. Billion. TikTok purged 185 million in a single quarter.
- State-Sponsored Propaganda: OpenAI caught influence operations from Russia, China, Iran, and Israel, all using its tools to generate fake content and run propaganda campaigns.
This is everywhere. Every platform at a scale most people do not see.
Why Build This? The Incentive Structure
So what is happening? Why would anyone build this? The reasons are simple: money and influence.
- Financial Gain: There’s a direct financial incentive to fake everything because engagement determines ad pricing. Ad fraud alone is nearly a $40 billion industry. Platforms profit from activity, even fake activity. So, they have weak incentives to actually stop it.
- Information Warfare: But it's not just about money. Influence campaigns are cheap. A single actor can spin up thousands of accounts, test different narratives, see what sticks, and amplify whatever resonates. The barrier to entry is basically nothing compared to traditional propaganda. It's information warfare at scale.
- The Security Loophole: And here's the twisted part. The bot security industry is projected to hit over $2 billion by 2030. The companies fighting bots are getting rich off the problem getting worse.
The “Liar’s Dividend” and the Death of Trust
Which raises an uncomfortable question: do bots actually work? Studies of Russian interference in the 2016 US elections found no measurable impact on how people voted. Only about 1% of users saw most of the troll content. OpenAI tracked recent influence operations and found they failed to gain significant engagement. Often, it's just bots talking to each other in empty echo chambers.
So maybe we are overreacting. Well, no. Because there's a concept called the liar's dividend. As deepfakes and AI content get better, real evidence becomes deniable. A politician gets caught on camera doing something corrupt? They just claim it's AI-generated, and people believe them. Truth becomes negotiable.
Even when content is genuine, even when the person that you’re talking to is real, you doubt it. That is the damage. It’s not about whether bots successfully persuade anyone. It’s that they make us stop trusting everything. And that’s the point. Not to convince you of something, but to make you doubt everything.
Is the Internet Actually Dead?
Now, back to the question again. Is the internet actually dead? Well, not literally. The internet isn’t dead yet. Real people are still here, still creating, still connecting. But the default experience, the public feeds, the open web you used to be able to trust, that might be diminishing.
And surviving this requires you to be deliberate about where you spend your time and who you trust. And if you can’t tell what’s real, if the trust is gone, what’s the internet even for? What are we all doing here?