Humans are no longer the dominant force on the internet.

Automated bots make up more than half of global internet traffic.

Your instinct is that the internet is human beings visiting websites and sending email. That’s outdated. Bot activity surpassed human-generated activity for the first time in 2024. AI models, automated crawlers, good bots, and especially bad bots have begun to take over, doing more online than people do.

Our global communications network is used for many things besides browsing websites, so let’s make the numbers even more alarming. If you narrow the focus and only measure visits to websites, some reports indicate that bots account for 80% of all web visits.

Another way to say that: only one in five website visitors is a real human.

Bots have been used online for decades. The surge in the last few years is related to AI, but not as directly as you might think.

Trigger warning: this is where the story turns dark and the rabbit hole becomes an abyss.

Two-thirds of all bot activity comes from “bad bots” designed for malicious purposes like fraud, theft, and disruption. Bad bots have been running around for many years but AI is responsible for their rapid increase during the last three years: Very Bad People are leveraging AI to embiggen their Very Bad Things.

Here’s the way the math works when you put a couple of those stats together: if bots generate more than half of all web traffic, and two-thirds of bot activity is malicious, then bad bots run by criminals were responsible for roughly 37% of all global web traffic in 2024.

Bad bots are distorting key business metrics and inflating tech company valuations, as well as causing substantial financial losses, reputational damage, and operational challenges across virtually all industries.

In the last article I talked about the collapse of the advertising business model for the internet because AI search responses are causing legitimate website visits to drop.

This article is the other part of that story. And this one doesn’t have a happy ending. It’s pretty much all doom and gloom.

What are good bots?

Bots are computer programs that do things automatically on the internet.

Here are some examples.

Site bots: When you go to Google Flights and put in your travel dates, Google sends automated bots to gather information from each of the airlines and display the results for you. Many online services work by using bots to aggregate information from other sites.

Indexing the web: For thirty years, Google has been running web crawlers – bots that visit every website, indexing videos, images, text, and links. That’s how Google indexes the internet for searches. Interesting fact: traffic from Google’s web crawlers has been declining steadily since AI arrived.

AI training: AI-related traffic to websites is growing sharply. The huge technology companies have relentless armies of bots scraping websites to train AI models. OpenAI runs bots to gather training data for ChatGPT; they singlehandedly accounted for 13% of all web traffic in 2024.

Chatbots: ChatGPT and the offerings of many other AI companies are themselves bots, built to carry on conversations with you.

More good bots: There are many other automated programs online. They’re interacting on social media; checking the links a website receives from other sites, a crucial part of how search engines work; monitoring websites to alert owners if sites are under attack by hackers or go offline; and much more.

Those are the good bots. They’re our friends. They account for about a third of the bots running around online.
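To make “automated program” concrete: a well-behaved bot like a search crawler is supposed to check a site’s robots.txt file before fetching pages, to see what the site owner allows. Here’s a minimal, illustrative Python sketch of that check, using only the standard library (the bot name and rules are made up):

```python
from urllib import robotparser

def allowed_to_crawl(robots_txt: str, user_agent: str, url: str) -> bool:
    """Return True if the site's robots.txt rules permit user_agent to fetch url."""
    rules = robotparser.RobotFileParser()
    rules.parse(robots_txt.splitlines())
    return rules.can_fetch(user_agent, url)

# A hypothetical site that blocks everyone from its /private/ area.
robots_txt = """User-agent: *
Disallow: /private/
"""

print(allowed_to_crawl(robots_txt, "ExampleBot", "https://example.com/index.html"))   # allowed
print(allowed_to_crawl(robots_txt, "ExampleBot", "https://example.com/private/x"))    # blocked
```

Good bots respect these rules; nothing technically forces a bot to, which is part of the problem described below.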

What are bad bots?

The bad guys are running bots designed for fraud, theft, and disruption. The scale of badness is mind-boggling.

You probably heard the term “bots” used to describe the automated Russian accounts that posted propaganda to support Trump and sow chaos on Facebook and Twitter before the 2016 election. And sure, that’s part of it, but only a minuscule part of the criminal activity enabled by bad bots. They are behind a seemingly endless variety of online criminal behavior. Much of it predates AI, but AI has accelerated its growth.

AI is giving cybercriminals a superpowered boost in developing more sophisticated bots. Now bad bots can evade detection by mimicking human behavior: simulating mouse movements and click patterns, bypassing CAPTCHAs, routing traffic through hacked residential equipment to appear as legitimate users, emulating mobile browser traffic, and more.
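To see why evasion is so easy, consider the most naive possible server-side check, sketched below in Python: flag a visitor whose browser identification string (“User-Agent”) mentions a bot. Honest bots announce themselves; a bad bot simply sends a normal browser string and sails through. This is a hypothetical illustration, not how real defenses work – real services layer on behavioral analysis, IP reputation, challenge pages, and more, which is exactly what AI-assisted bots are now learning to beat.

```python
# Naive bot detection: flag requests whose User-Agent mentions a known bot marker.
# Illustrative only -- trivially defeated by any bot that spoofs a browser string.
KNOWN_BOT_MARKERS = ("bot", "crawler", "spider", "scraper")

def looks_like_bot(user_agent: str) -> bool:
    """Return True if the User-Agent string contains an obvious bot marker."""
    ua = user_agent.lower()
    return any(marker in ua for marker in KNOWN_BOT_MARKERS)

print(looks_like_bot("Googlebot/2.1"))                    # True: honest bots identify themselves
print(looks_like_bot("Mozilla/5.0 (Windows NT 10.0)"))    # False: a bad bot just claims to be a browser
```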

I know your eyes will glaze over, but just look at this list. And these are only the most obvious examples!

  • Financial Fraud and Theft: These bots aim to directly steal money, compromise financial systems, or drain budgets.

    • Ad fraud: Falsifying ad interactions through automated clicks or impressions, causing companies to pay for non-human traffic.

    • Card cracking: Using brute-force attacks to test stolen payment card details for unauthorized purchases or to identify active cards.

    • Cashing out: Converting stolen assets like gift cards or cryptocurrency into money at scale.

    • Scalping/Inventory hoarding: Buying large quantities of limited-inventory goods (like concert tickets or products) for resale at inflated prices. In 2016, after scalping bots gobbled up concert tickets, Taylor Swift’s among them, Congress passed and President Obama signed the BOTS Act (“Better Online Ticket Sales Act”).

    • Loan or credit card fraud: Using fake identities generated by bots to apply for loans or credit cards with no intent to repay.

    • Fraudulent transactions: Performing high volumes of small, unauthorized transactions across multiple accounts.

  • Data and Content Misuse: This involves illegally obtaining, copying, or exploiting valuable information and intellectual property.

    • Web scraping: Extracting data from websites or web applications, often for malicious purposes like competitive data mining, price manipulation, or harvesting personal and financial data.

    • Stealing data: Using bots to hack into business or government networks and extract sensitive data, personal details, or transaction records, potentially leading to identity theft or data sales on the dark web.

    • Misuse of content and virtual currencies: In online gaming, bots can manipulate in-game economies by seizing and selling items at premium rates or accumulating virtual currency through automated gameplay.

  • Account Compromise: Gaining unauthorized access to user accounts.

    • Account takeover: Bots using stolen login data to access user accounts, leading to theft of funds or sensitive information.

    • Credential stuffing: Cybercriminals use bots to test large lists of stolen username/password pairs against other accounts to find matches.

    • Credential cracking: Using brute-force attacks to try combinations of usernames and passwords to gain account access.

    • Fake account creation: Automated scripts are used to create fake user accounts that can flood platforms with spam, malicious links, or facilitate post-holiday schemes like return and refund fraud.

  • Website and Service Interference: Disrupting online operations, distorting metrics, or overwhelming systems.

    • Denial of Service (DoS): Overwhelming a website or service with traffic to make it inaccessible to legitimate users.

    • Denial of inventory: Sabotaging sites by filling shopping carts or making reservations without completing purchases, leading to artificial stock-outs.

    • Skewing: Interacting with a site to distort traffic data and mislead results. Bots can inflate metrics like pageviews, clicks, impressions, user sessions, conversion rates, and average session duration.

    • Spam: Emailing or posting questionable information, malware, pop-ups, pictures, videos, or ads.

Almost forty percent of all web traffic is created by bad bots.

The mind, it boggles.

What damage is caused by bad bots?

This is dystopian enough already. Are you sure you want to know?

Don’t say I didn’t warn you.

Massive financial losses from ad fraud and ad budgets drained by non-human clicks. Up to a third of worldwide ad spending is estimated to have been driven by bad bots rather than human clicks – hundreds of billions of dollars in 2024 alone. Add the cost of reputational harm, the skewed business decisions, the infrastructure costs, and the straight-out theft, and the true cost is astronomical.

The possibility that the AI boom is a bubble that might burst. The business metrics distorted by bots are an important part of the way that tech companies are valued. A few weeks ago a Fortune magazine article summed it up this way: “The AI boom is now bigger than the ’90s dotcom bubble—and it’s built on the backs of bots, maybe more than real users. . . . If the AI trade is a bubble, it’s a bigger bubble than the one that popped in the days of the ‘dotcom crash,’ which led to a nasty recession.” The same bot-inflated metrics are used to pump up the valuations of startup companies.

Lost sales, reputational harm, legal risks and fines for failing to defend against bot attacks, industry-specific damage, increased costs of hosting and security, and maybe fire, famine, pestilence, plague, and locusts, because probably, why not?

In other words, behind the scenes our internet looks like a portent of the looming collapse of civilization, like everything else in 2025.

Personally, I think I’ll have a lie-down, then write an article or two about book collecting, which is peaceful and doesn’t fill me with existential dread.
