Imagine you wake up tomorrow to some genuinely thrilling news: you've been approved to hire 1,000 new expert-level teammates. Engineers, marketers, ops specialists, data analysts, product managers — good at their jobs, available around the clock, never burned out, never distracted.
It's every business leader's dream. That product line you've wanted to launch for two years but never had the engineering capacity for? Now you do. That new market you've been eyeing but couldn't staff properly? It's within reach. The backlog of strategic initiatives that kept getting pushed because everyone was heads-down on the urgent stuff? You can start working through it.
For the first time, the limit on what your organization can pursue isn't headcount or budget. It's your own imagination. Sounds incredible, right?
There's a big catch, though. All these new digital coworkers… You can't check their references. You can't run a background check. You have to give them access to all of your systems on day one. And here's the part that should really give you pause: they follow instructions literally, they don't know right from wrong, and they face zero consequences if something goes wrong.
Still excited?
That thought experiment isn't hypothetical. It's where most enterprises are right now with AI agents. And it's the dilemma I'll be exploring later today in my keynote at RSA.
From Answering to Acting
Not long ago, AI meant chatbots — tools that helped you write an email, summarize a document, answer a question. Useful, impressive even, but fundamentally passive. If a chatbot gave you a bad answer, you'd shrug and move on.
We're now in a different era entirely. AI agents don't just answer. They act. They plan multi-step tasks, call external tools, make decisions, and execute workflows autonomously. They can send emails on your behalf, modify data, run database commands, place orders, change firewall rules.
The shift from information to action changes everything about how we need to think about risk.
Here's a useful way to think about it: with a chatbot, the worst case is a wrong answer. With an agent, the worst case is a wrong action, and some actions can't be undone.
There are already thousands of examples of where this shift has gone wrong. My "favorite" was a situation where an investor ran an AI coding agent during a code freeze. The instruction was explicit: "don't change anything without permission." The agent ran database commands anyway, deleted a live production database, tried to cover its tracks by creating fake data, and then, when the damage became clear, apologized.
Well, an apology is not a guardrail.
The Gap Between Pilots and Production
Here's a number that tells the whole story. In a recent Cisco survey of major enterprises, 85% reported having AI agent pilots underway. Only 5% had moved those agents into production.
That 80-point gap isn't skepticism about AI's potential. It's a rational response to a real security problem. Organizations can see what agents can do. They're not yet sure they can trust them to do it safely.
Closing that gap is what we're focused on at Cisco. And at RSA this week, we're laying out our approach across three areas: protecting agents from the world, protecting the world from agents, and detecting and responding to problems at the speed agents operate.
Protecting agents from the world means ensuring agents can't be manipulated by bad actors.
This is far more subtle than it sounds. Traditional security scanning tools were built to test static software. They can't simulate what it looks like when an adversary tries to trick an AI mid-task into ignoring its instructions. Prompt injection (hiding malicious commands inside content that an agent reads) is already a real attack vector, and it's getting more sophisticated.
Our Cisco Talos 2025 Year in Review report (released today) shows how AI is already being used to build new exploit kits, with the React2Shell vulnerability going from public disclosure to the most actively exploited flaw of 2025 in a matter of days. The speed of weaponization is accelerating, and we can't assume there will be time to react after a vulnerability is disclosed.
To help organizations test their agents before they go anywhere near production, we're launching AI Defense Explorer Edition, a self-service red teaming tool that lets developers and security teams run adversarial attacks against their own agents and find vulnerabilities first.
We're also releasing an Agent Runtime SDK that embeds policy enforcement directly into agent workflows at build time, and an LLM Security Leaderboard that gives organizations a clear, objective way to evaluate how different AI models hold up against adversarial attacks, going well beyond the performance benchmarks that dominate most AI comparisons today.
Last year at RSAC, we made history with the first open source foundation AI security model. Since then, we've continued building in the open, releasing a collection of tools designed to answer the security questions developers face every day:
- Skills Scanner — What skills is this agent running, and are they safe?
- MCP Scanner — Are my MCP servers exposing malicious actions?
- AI BoM — What's inside my AI system — models, memory, dependencies?
- CodeGuard — Is the AI-generated code I'm shipping introducing vulnerabilities?
- Model Provenance — Where did this model originate, and has it been modified?
This year we're open sourcing DefenseClaw — a secure agent framework that brings all of these tools together and uses hooks in Nvidia's OpenShell. With DefenseClaw, developers can deploy secure agents faster than ever:
- Every skill is scanned and sandboxed
- Every MCP server is checked for malicious actions
- Every AI asset — models, memory, skills — is automatically inventoried
The result is zero manual security steps and zero separate tool installs. Security is a team sport, and no one knows that better than Cisco.
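The pattern the list above describes — nothing reaches the agent until it passes a scan, and everything that passes is inventoried — can be sketched as a simple gate. All names here are invented for illustration; this is not the DefenseClaw API.

```python
from dataclasses import dataclass

@dataclass
class Skill:
    name: str
    code: str

def scan_skill(skill: Skill) -> bool:
    # Stand-in scanner: reject skills containing obviously
    # dangerous calls. A real scanner would do far more.
    banned = ("os.system", "subprocess", "DROP TABLE")
    return not any(b in skill.code for b in banned)

inventory: list[str] = []  # the "AI BoM" step: auto-inventory

def register(skill: Skill) -> bool:
    if not scan_skill(skill):
        return False              # rejected: never reaches the agent
    inventory.append(skill.name)  # inventoried automatically
    return True

register(Skill("summarize", "return text[:200]"))           # accepted
register(Skill("cleanup", "os.system('rm -rf /data')"))     # rejected
```

The design point is that the scan is a mandatory step inside registration, not a separate tool a developer has to remember to run.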
Protecting the world from agents is an identity and access problem.
Today, most enterprises don't have a clear picture of which agents are running in their environment, what they have access to, or who's accountable if something goes wrong. That's a serious governance gap, and it's not remotely theoretical.
Turning to the Talos 2025 Year in Review again, the research shows that attackers are focused on the systems that verify identity and broker access: login flows, access gateways, and management platforms that sit at the center of how organizations grant trust. Nearly a third of all multi-factor authentication spray attacks targeted identity and access management systems specifically, a six percent jump from the year before.
Adversaries go where they can do the most damage with the least effort, and right now, identity is that place.
The good news is that we have a blueprint for this challenge. Think about how you'd onboard a new employee. You verify who they are, define their role, give them access only to what they need for their job, and hold them accountable to a manager. Agents need the same treatment. Every agent should have a verified identity, a defined scope of permissions, and a human owner who is accountable for its behavior.
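That blueprint — verified identity, least-privilege scope, time-bound credentials, a named human owner — can be sketched in a few lines. This is a hypothetical illustration of the pattern, not Duo IAM or any Cisco API; all field and scope names are invented.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str
    owner: str                # the accountable human
    scopes: frozenset         # least-privilege permissions
    expires_at: datetime      # credentials are time-bound

    def allowed(self, scope: str, now=None) -> bool:
        # A request succeeds only if the credential is still valid
        # AND the scope was explicitly granted.
        now = now or datetime.now(timezone.utc)
        return now < self.expires_at and scope in self.scopes

agent = AgentIdentity(
    agent_id="invoice-bot-7",
    owner="alice@example.com",
    scopes=frozenset({"invoices:read", "invoices:create"}),
    expires_at=datetime.now(timezone.utc) + timedelta(hours=1),
)
```

Note what the check makes impossible by construction: an agent with no listed owner can't exist, an ungranted action fails closed, and a stale credential fails even for a granted scope.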
This week, Cisco is extending Zero Trust to the agentic workforce through new capabilities in Duo IAM and Secure Access, so that every agent gets time-bound, task-specific permissions and security teams get real-time visibility into every agent and tool running in their environment, including the ones nobody formally sanctioned.
Finally, we have to detect and respond to security threats and incidents at machine speed.
Agents operate faster than any human can monitor. When an attack unfolds through automated agentic activity, the window between "something is wrong" and "the damage is done" can be seconds. That math doesn't work if your security operations center is still running at human tempo. Adversaries are already using agentic AI to scale their own operations by automating reconnaissance, building exploit kits, and expanding what one person or group can accomplish in a single campaign. Defenders need the same leverage.
We're helping evolve the Security Operations Center (SOC) from reactive to proactive with new capabilities in Splunk, including Exposure Analytics for continuous real-time risk scoring, Detection Studio for streamlining how detections are built and deployed, and Federated Search, which lets analysts investigate across distributed data environments without first pulling everything into a central location — a significant advantage as agentic activity generates exponentially more data.
We're also deploying specialized AI agents across the SOC itself for detection, triage, and response. Not to replace analysts, but to handle the repetitive investigative work so that humans can focus on the decisions that require experience and judgment.
Security is the Accelerator
Here's what I find genuinely exciting about this moment. For most of the history of technology, security has played an important but conservative role: identifying what could go wrong, slowing rollouts, and adding friction in the name of risk mitigation.
With agentic AI, the dynamic flips. Security isn't the reason to slow down. It's the reason you can move fast. The 80-point gap between organizations piloting agents and those running them in production isn't a technology gap. It's a trust deficit that we can only close if we reimagine security for the agentic workforce.
We've been here before. We made the internet trustworthy for commerce. We figured out cloud and mobile. The tools and mental models took time to develop, but they got there. The agentic era is the next frontier, and the organizations that get security right will be the ones that unlock the true potential of AI.
Let's get to it.