There used to be a pause in the rhythm of digital security. A software bug was found; the vendor created and tested and released a patch; and businesses had a comfortable few weeks to install it before hackers figured out how to use the flaw.

In between there was time. Time for software companies to evaluate bug reports. Time for IT departments to test updates. Time for companies to schedule maintenance windows. Time, even, for procrastination.

That world ended in 2026. The gap is disappearing.

Security researchers are calling it “the Great Compression.” The timeline between discovering a flaw and exploiting it has collapsed.

And AI is the force doing the compressing.

Trigger warning

I’m not making this up, and I’m not jumping out of the bushes to yell BOO and scare you for fun. If you’ve seen any coverage of Anthropic’s superpowered AI model Claude Mythos, this is the explanation of why it has shaken the security world.

This is bad news about a serious risk, just like everything else. In the next article I’ll tell you some of the terrible things that might happen to disrupt our world now that hackers and criminals are using AI tools.

“Abyss gaze” is a useful term for the depression that settles in when you pick a trend, study it closely, and discover that there is no hope. It happens regardless of what aspect of the future you study - climate change, drone warfare, or genetic engineering, say. Look at the details of any of them and you find we’re in far worse shape than you realized, and the experts are terrified.

So it goes with cybersecurity. This will have you gazing into the abyss.

Proceed with caution.

Bug fixes at human speed

Commercial software used to be written by people, special human beings who understood complex and difficult languages. It was (and is) common for programs to have tiny flaws - a single misplaced instruction, a forgotten safety check, a digital loose floorboard hidden in millions of lines of code.

The process of dealing with those flaws was a well-defined dance until things began to accelerate a few years ago.

This is the way it used to happen in slower times.

Months or years after a program was released, a security researcher, criminal hacker, or curious hobbyist would stumble across strange behavior. Maybe the program crashed unexpectedly. Maybe it accepted data it shouldn’t. Maybe someone noticed they could trick the program into doing something odd, like opening a locked door with the wrong key.

A responsible researcher might spend days or weeks investigating how the bug worked before reporting it privately to the software company. The vendor then had to do its own analysis: reproduce the bug, understand the underlying cause, make sure the fix didn’t break something else, test the repair across many systems, package the patch, and distribute it to customers.

CVE (Common Vulnerabilities and Exposures) is a global system for tracking software flaws. Software companies apply for a unique CVE ID for each bug, and the details are published when the patch is prepared, to alert IT departments and users that it’s time to update their systems.
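For the curious, a CVE ID has a simple fixed shape - CVE, a four-digit year, then a sequence number of at least four digits - which makes entries easy to spot in advisories and logs. Here is a minimal sketch of a checker (the regex is mine, based on the published ID format):

```python
import re

# A CVE ID looks like CVE-2021-44228: the literal "CVE",
# a 4-digit year, then a sequence number of 4 or more digits.
CVE_ID = re.compile(r"^CVE-\d{4}-\d{4,}$")

def is_cve_id(text: str) -> bool:
    """Return True if text matches the CVE ID format."""
    return bool(CVE_ID.match(text))

print(is_cve_id("CVE-2021-44228"))  # True - the infamous Log4Shell flaw
print(is_cve_id("CVE-21-1"))        # False - wrong shape
```

The sequence number grew past four digits years ago as the volume of reported flaws exploded, which is why the pattern allows longer numbers.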

The appearance of a bug in a CVE alert would set off a frenzy in the hacking community as the bad guys raced to reverse-engineer the flaw, find a way to exploit it, and use it as a weapon against computers that were not patched promptly. Turning a vulnerability into a working attack required skill and time. It was a craft. Attackers often needed days or weeks to build something usable. Many hackers lacked the expertise, others moved too slowly, but either way it took time before a flaw became a usable weapon.

It was an imperfect, awkward race: the vendor rushing to distribute patches, customers slowly installing them, and attackers slowly weaponizing the flaw. It was messy and stressful, but that friction and those delays have been an important part of our defense against the bad guys for decades.

AI is beginning to dissolve that friction like acid on ice.

Accelerating to hyperspeed

Researchers track something called “time to exploit” (aka TTE), the delay between public disclosure of a vulnerability and the first real-world attacks using it. For some time, the numbers have been steadily shrinking. By 2024, the average time to exploit a newly discovered software bug had plummeted from 32 days to just five days.
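The metric itself is nothing more than date arithmetic, which makes the trend easy to picture. A back-of-the-envelope sketch (the dates below are invented purely for illustration):

```python
from datetime import date

def time_to_exploit(disclosed: date, first_attack: date) -> int:
    """Time to exploit (TTE): days between public disclosure of a
    vulnerability and the first observed in-the-wild attack using it.
    A negative value means attacks began before disclosure."""
    return (first_attack - disclosed).days

# Hypothetical dates for illustration only.
print(time_to_exploit(date(2024, 3, 1), date(2024, 3, 6)))   # 5 - the 2024 average
print(time_to_exploit(date(2024, 3, 1), date(2024, 2, 20)))  # -10 - exploited before disclosure
```

A TTE of five days means defenders have less than a week to patch; a negative TTE, which comes up later in this article, means the race was lost before it officially started.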

Then AI tools were introduced and that pace began to look leisurely.

Because the introduction of AI to write and evaluate code isn’t just faster. It’s different in kind.

Imagine handing a dense, technical security report to a human expert. They would read it, deliberate, and slowly begin experimenting.

Now imagine feeding that same report into an AI system trained on millions of lines of code and every publicly known exploit technique. It doesn't just read; it absorbs. It doesn't test one hypothesis at a time. It tests thousands simultaneously. And it never stops.

This is where the Great Compression becomes terrifyingly dramatic. AI systems discover and weaponize software defects at speeds that exceed human limits by orders of magnitude. The timeline between “the bug exists” and “the world is under attack” is collapsing toward zero. Working exploits can appear within seconds of a flaw being disclosed.

An increasing number of bugs are discovered only because bad guys have weaponized them before the developer knows they exist. They’re referred to as “zero-day” bugs because the vendor has had zero days to prepare a fix before attacks are launched. Zero-day flaws used to be scarce diamonds for criminals, hard to find and very valuable. Now AI tools are finding them routinely, even in hardened programs that have been vetted for decades.

It’s worse than that! The hacker market is shifting to “negative TTE,” where criminal exploits appear before the software is even released. The bad guys are using AI to monitor and weaponize flaws in GitHub and other software repositories where developers store work in progress.

AI also allows less skilled hackers to become involved.

Hacking used to be artisanal. Skilled individuals or small teams would build attacks carefully, often tailoring them to specific targets.

AI turns that into something closer to mass production.

Picture a factory line. On one end, a newly discovered vulnerability goes in. On the other end, a range of possible exploits comes out, tested, refined, and ready to use. Variations are generated automatically. Failures are discarded without hesitation.

This doesn’t eliminate expertise. It amplifies it. A small group of knowledgeable attackers can now operate at a scale that would have required an army a decade ago.

And less experienced attackers can simply use the tools.

The arrival of Claude Mythos

All that badness is going on right now, today, with the rapidly improving AI models that are revolutionizing the world of computer programming. Security researchers have already been losing sleep.

But then on April 7, 2026, Anthropic announced that it had developed a staggeringly powerful new AI model for computer security tasks. The announcement said:

“During our testing, we found that Mythos Preview is capable of identifying and then exploiting zero-day vulnerabilities in every major operating system and every major web browser when directed by a user to do so. The vulnerabilities it finds are often subtle or difficult to detect. Many of them are ten or twenty years old.”

Anthropic has created a cyber-savant capable of finding software bugs more effectively than anything that came before it.

Capable of fixing the bugs.

And weaponizing them.

I’ll tell you that story in the next article.
