We’ve been here before in cyber security, even though this feels different. There have been big moments that promised to change how we “do” cyber security: SQL Slammer, Mimikatz, the APT1 report, wide-scale ransomware, SolarWinds. And now: AI-accelerated vulnerability discovery and exploitation.
Zero-day vulnerabilities are not new. Supply chain attacks are not new. Attackers who are faster, better resourced, and more patient than defenders are not new. What is new, and what does change the calculus, is the velocity, and the accessibility of that velocity to a much wider pool of adversaries.
Models like Mythos and its successors aren’t going to invent a new category of attack. They’re going to take every tedious, time-consuming part of offensive security (fuzzing, variant analysis, chaining primitives, identifying misconfiguration patterns across large target sets) and compress it. The vulnerability was always there; the time to find it is what shrinks.
It’s like using the warp pipes in Super Mario: the other levels still exist, there was just a way to bypass them and reach your end goal quicker.
That is the shift worth paying attention to. And the response to it is probably not what you expect (or want to admit).
The Boring Truth About What Actually Works
When defenders think about surviving in a world of AI-accelerated exploitation, the temptation is to reach for AI-assisted defense: automated triage, ML-based anomaly detection, intelligent patching pipelines. Those tools will matter. But they’re not where most organizations are losing.
Most organizations will lose because they haven’t done the boring things well.
Egress filtering. Credential management. Network segmentation. Patch discipline. These aren’t glamorous. They don’t win budget battles or generate vendor excitement. But they are the controls that consistently appear in post-incident reviews as the things that weren’t in place and would have mattered. In a world where exploitation speed increases by an order of magnitude, the boring things become load-bearing walls.
How Attackers Will Actually Use These Models
To understand what to prioritize, it helps to think about what an attacker gains from a Mythos-class tool and where it changes their workflow.
Reconnaissance with Laser Precision
AI models can reason across disparate data sources (public code repositories, job postings, certificate transparency logs, historical breach data) and build a precise picture of a target’s technology stack before a single packet hits their network. The attacker shows up already knowing what you’re running and roughly how it’s configured.
Exploit Development Is No Longer a Bottleneck
The gap between “a library has a bug” and “here is a working exploit for it” used to be measured in weeks, months, or years of skilled human effort. That gap collapses. Variant analysis (finding similar bugs in adjacent codebases) becomes trivial. A single disclosed CVE becomes a template for ten others.
Supply Chain Targeting Gets Smarter
The SolarWinds and XZ Utils attacks required significant human intelligence and patience to execute. An AI-assisted attacker can analyze the dependency graphs of high-value targets at scale, identify the thinnest links (the understaffed open-source maintainer, the rarely audited vendor library, the CI/CD integration nobody reviews), and prioritize accordingly. The attack surface hasn’t changed. The attacker’s ability to find the weakest point in it has.
Credential Abuse Becomes Automated and Adaptive
Given a set of harvested credentials, an AI-assisted attacker can quickly reason about what environments they’re likely valid in, which services to try them against, and what access they’re likely to yield before touching a single target.
The Fundamentals Are Load-Bearing Now More Than Ever
Here’s what most of these attack chains have in common: they depend on the defender failing at something basic. Not exotic. Basic.
Egress Filtering on Servers
This one is painfully under-implemented for how effective it is. Most servers have no legitimate reason to make outbound connections to arbitrary Internet destinations. Mike Pinch made this point when SolarWinds happened (https://sra.io/blog/solarwinds-breach-how-do-we-stop-this-from-happening-again/) six years ago, and it’s still relevant. A web application server should talk to its database, its cache, maybe a few internal services, and that’s it. It should not be able to reach a random C2 server.
When a 0-day lands and an attacker gets code execution, the first thing they need to do is call home, pull a second-stage payload, establish a C2 channel, or exfiltrate something. If your egress filtering is tight, you’ve just turned a critical RCE into a dead end. The exploit worked. The attack didn’t.
Implement outbound firewall rules on servers. Enforce them at the network layer, not just the host. Default-deny outbound on your server VLANs and allowlist explicitly. This is not hard. It’s boring, and most organizations just haven’t done it.
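The default-deny-plus-allowlist pattern can be sketched as a simple policy check. This is a minimal illustration of the logic, not a firewall implementation; the destinations in the allowlist are hypothetical examples, not a recommendation for any specific environment.

```python
# Sketch of default-deny egress for a server VLAN: only explicitly
# allowlisted (destination, port) pairs are permitted outbound.
# These entries are illustrative assumptions.
ALLOWED_EGRESS = {
    ("10.0.2.10", 5432),  # application database
    ("10.0.2.11", 6379),  # cache
    ("10.0.3.5", 443),    # internal API gateway
}

def egress_permitted(dst_ip: str, dst_port: int) -> bool:
    """Default-deny: anything not on the allowlist is blocked."""
    return (dst_ip, dst_port) in ALLOWED_EGRESS
```

Under this policy, a compromised web server's attempt to reach an arbitrary Internet C2 host simply has no matching rule, which is exactly the "critical RCE becomes a dead end" outcome described above.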
Proper Network Segmentation
Flat networks are where breaches go to become catastrophes. When devices can reach every (or most) other devices, a 0-day on a perimeter-facing system becomes a skeleton key to the entire environment.
Real segmentation means your finance systems aren’t reachable from your marketing laptops. Your domain controllers aren’t directly accessible from your DMZ. Your OT and IT networks are genuinely separated. Your developer environments don’t have a clear path to production infrastructure.
In the AI-accelerated threat model, an attacker who lands on a compromised endpoint will immediately begin reasoning about what that foothold can reach. Segmentation doesn’t stop the initial compromise. It stops the attacker from turning that compromise into anything meaningful before alerts fire and you can auto-contain.
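One way to make segmentation intent explicit and testable is a zone reachability matrix: which network zones may talk to which. The zone names and rules below are illustrative assumptions, not a reference design.

```python
# Sketch of a segmentation policy as a zone reachability matrix.
# Zone names and allowed paths are hypothetical examples.
REACHABLE = {
    "marketing-endpoints": {"web-proxy"},
    "dmz":                 {"app-tier"},
    "app-tier":            {"db-tier"},
    "finance":             {"db-tier"},
    "dev":                 set(),  # no path from dev to production
}

def can_reach(src_zone: str, dst_zone: str) -> bool:
    """Default-deny between zones unless explicitly allowed."""
    return dst_zone in REACHABLE.get(src_zone, set())
```

Expressing the policy this way lets you assert the properties the text describes (marketing laptops cannot reach finance, dev cannot reach production) and regression-test them as the network changes.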
Enterprise Credential Management
Credential abuse is the backbone of nearly every significant breach. Not because passwords are inherently weak, but because organizations let credentials sprawl: shared service accounts, hardcoded passwords in config files (yep, those vibe-coded apps count), developers with production access they got two roles ago, stale API keys committed to internal repositories. Build environments and pipelines are target-rich for attackers because of the ways credentials are stored and passed.
An AI-assisted attacker is very good at credential correlation and at systematically enumerating service accounts that are likely to have weak or default passwords.
The response is not sexy: a PAM solution with session recording, just-in-time access, and credential vaulting. Password rotation that’s actually enforced. Service accounts that are inventoried, scoped, and reviewed. LAPS or equivalent for local admin credentials. Strong MFA, hard-enforced on everything that matters, not bolted on as an afterthought.
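Finding the hardcoded credentials mentioned above is partly mechanical. Here is a minimal sketch of a config-file secret scan; the two regex patterns are illustrative only, and real secret scanners (the kind you would wire into CI) use far larger rule sets plus entropy checks.

```python
import re

# Minimal sketch of a hardcoded-credential scan. Patterns are
# illustrative assumptions, not a complete rule set.
SECRET_PATTERNS = [
    re.compile(r"(?i)\b(password|passwd|pwd)\s*[=:]\s*\S+"),
    re.compile(r"(?i)\b(api[_-]?key|secret[_-]?key)\s*[=:]\s*\S+"),
]

def find_hardcoded_secrets(text: str) -> list[str]:
    """Return lines in a config file that look like embedded secrets."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), 1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append(f"line {lineno}: {line.strip()}")
    return hits
```

Running a check like this over repositories and build pipelines is one concrete way to start the credential inventory the paragraph above calls for.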
Patch Velocity as a Security Control
The conventional wisdom is that you can’t patch fast enough to beat zero-day exploits. That’s still true in a narrow sense. But in practice, the majority of exploitation attempts target known vulnerabilities in unpatched systems. The 0-day window is shrinking, but the unpatched-known-CVE window is still often measured in months.
Invest in the infrastructure that makes patching fast: automated regression pipelines, clear tiered SLAs based on asset criticality and exposure, and the organizational muscle memory to execute emergency patching without it being a fire drill every time. When a 0-day does drop, the organizations that patch fastest survive it.
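A tiered SLA only works if it is unambiguous. A sketch of the idea: deadlines keyed on asset criticality and exposure, computed mechanically rather than negotiated per incident. The day counts here are illustrative assumptions, not a benchmark.

```python
from datetime import date, timedelta

# Sketch of tiered patch SLAs keyed on (criticality, exposure).
# The day counts are hypothetical examples.
SLA_DAYS = {
    ("critical", "internet-facing"): 2,
    ("critical", "internal"):        7,
    ("standard", "internet-facing"): 14,
    ("standard", "internal"):        30,
}

def patch_deadline(disclosed: date, criticality: str, exposure: str) -> date:
    """Deadline for remediation, derived from the asset's tier."""
    return disclosed + timedelta(days=SLA_DAYS[(criticality, exposure)])
```

The value is less in the arithmetic than in the fact that the tiers are written down: when a 0-day drops, nobody debates the deadline.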
Manage Your Attack Surface, It’s Yours After All
This might be the most painfully obvious one. If it’s on the Internet and you own it, a third party runs it on your behalf, or it contains your data, it is your problem. Attackers laugh (literally) at your pen test scope exclusions. Perform continuous attack surface management, maintain that inventory, and eliminate anything that does not need to be publicly facing. Use IP restrictions where appropriate and MFA everything that needs to be out there.
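At its core, continuous attack surface management reduces to one recurring comparison: what discovery finds on the Internet versus what your inventory says should be there. A minimal sketch, with hypothetical hostnames:

```python
# Sketch: the core ASM loop is a set difference between discovered
# external assets and the owned inventory. Hostnames are illustrative.
def unknown_assets(discovered: set[str], inventory: set[str]) -> set[str]:
    """Internet-exposed assets that nobody claims to own."""
    return discovered - inventory
```

Anything this returns is either shadow IT to be claimed and secured, or something to be taken offline; either way, it shrinks the scope exclusions attackers are laughing at.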
Detection and Containment for a World Without Signatures
You will not have signatures for 0-days. By definition, you won’t. But you can detect and contain what attackers do after they’re in, because post-exploitation tradecraft is remarkably consistent regardless of how the initial access was obtained.
A web server spawning a command shell is suspicious whether it was exploited via a known CVE or a 0-day. LSASS access from an unexpected process is suspicious regardless of how that process got there. Outbound connections to newly-registered domains from server infrastructure are suspicious no matter what put them there.
This is why TTP-based detection aligned to frameworks like MITRE ATT&CK and tested thoroughly with purple teams remains durable even in a world of accelerating vulnerability discovery. The 0-day gets the attacker in. What they do next is bound by physics, operating systems, and the constraints of the target environment. Detect the what, not the how.
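The "detect the what, not the how" idea can be sketched as a parent-child process check, the classic web-server-spawns-a-shell detection. The process names below are illustrative; a real rule set maps to ATT&CK techniques (e.g. T1059) and is tuned per environment.

```python
# Sketch of a behavior-based detection: a web server process spawning
# a command shell is suspicious regardless of which exploit delivered
# the attacker. Process names are illustrative assumptions.
WEB_SERVERS = {"httpd", "nginx", "w3wp.exe", "tomcat"}
SHELLS = {"sh", "bash", "cmd.exe", "powershell.exe"}

def suspicious_child(parent: str, child: str) -> bool:
    """Flag shell processes whose parent is a web server."""
    return parent in WEB_SERVERS and child in SHELLS
```

Note that nothing in the rule references a CVE: the same detection fires whether the foothold came from a known bug or a 0-day.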
Invest in your detection coverage for post-exploitation behaviors. Tune out the noise so your analysts can see clearly. Build playbooks around the actions that matter, not the specific exploits that trigger them. This is where good AI-assisted detection engineering works (yes, defenders can use AI too): not chasing every new threat actor report, but maintaining high-fidelity visibility into what actually happens in your environment. Need to know where to start? https://vectr.io/test-plans/
The Supply Chain Problem Is an Inventory Problem
Supply chain attacks succeed for one primary reason: defenders don’t know what they’re running. An attacker who poisons a library that 40 of your applications import silently succeeds not because the attack was sophisticated, but because nobody had mapped that dependency.
Software composition analysis needs to be a first-class capability, integrated into your CI/CD pipeline so that when a 0-day drops in a transitive dependency, you know within minutes which applications are affected and can triage accordingly.
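The question an SBOM-backed inventory has to answer in minutes is simple: given a vulnerable package, which applications import it? A sketch of that query, where the application names and the package "liblog4x" are hypothetical examples:

```python
# Sketch of the "which apps are affected?" query against a dependency
# inventory. App names and packages are illustrative assumptions.
DEPENDENCIES = {
    "billing-api":     {"requests", "liblog4x", "openssl"},
    "customer-portal": {"liblog4x", "jquery"},
    "batch-jobs":      {"openssl"},
}

def affected_apps(vulnerable_package: str) -> set[str]:
    """Applications whose dependency set includes the vulnerable package."""
    return {app for app, deps in DEPENDENCIES.items()
            if vulnerable_package in deps}
```

In practice the inventory comes from SBOMs generated in CI, and it must include transitive dependencies, which is exactly where attacks like XZ Utils hide.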
Extend the same logic to your vendors: know which SaaS providers and managed services have privileged access to your environment, what they can reach, and what your detection coverage looks like if they’re compromised. SolarWinds was a shock partly because many organizations couldn’t answer the question “what would a compromised SolarWinds agent actually have access to in our environment?”
Closing Thought: The threat isn’t new. The speed is.
And when speed increases dramatically, the advantage goes to the defender who has already done the work, who doesn’t need to make decisions under pressure because the segmentation is already in place, the credentials are already managed, the egress rules are already enforced, and the detection already fires and contains on the right behaviors.
The organizations that will fare worst in the AI-accelerated exploitation era are not the ones who failed to buy the newest tool. They’re the ones who put off the boring work and are now trying to do it at a sprint while the clock is running.
The organizations that will fare best are probably the ones who never found the boring work boring in the first place, who understood that operational discipline is a competitive advantage, and who built security programs on a foundation that doesn’t depend on knowing what the next exploit looks like.
Chris Salerno
Chris leads SRA’s 24x7 CyberSOC services. His background is in cybersecurity strategy based on the NIST CSF, red and purple teams, improving network defenses, technical penetration testing, and web application security.
Prior to shifting his focus to defense and secops, he led hundreds of penetration tests and security assessments and brings that deep expertise to the blue team.
Chris has been a distinguished speaker at BlackHat Arsenal, RSA, B-Sides and SecureWorld.
Prior to Security Risk Advisors, Chris was the lead penetration tester for a Big4 security practice.