Artificial intelligence is entering a new phase. The conversation is shifting from model innovation to operational reality. Organizations are discovering that building AI models is often the easiest part of the journey. Running those models reliably, securely, and at scale across enterprise environments is where the complexity emerges.
Many AI initiatives slow down not because teams lack GPUs, data, or talent, but because there is no unified operating pattern that safely connects all of those elements into production. AI systems are not single applications. They are distributed ecosystems of data pipelines, inference services, orchestration layers, and, increasingly, autonomous agents interacting with enterprise systems in real time.
Cisco Secure AI Factory with NVIDIA is built around a simple but transformative idea: AI must be treated as an end-to-end system. Performance, data readiness, cloud-native operations, and security cannot be designed separately. They must be engineered together from the beginning.
At VAST Forward 2026, we are demonstrating how that principle translates into a working secure AI data platform. This is not a future concept or hypothetical architecture. It is a real, deployable reference implementation built using NVIDIA accelerated computing infrastructure and software, VAST data services, Cisco infrastructure, the Isovalent Enterprise Platform based on Cilium and Tetragon, and Cisco AI Defense. It reflects a repeatable way to operationalize AI today while continuing to evolve toward deeper integration over time.
The new reality of enterprise AI
The rise of retrieval-augmented generation (RAG) and agent-driven applications is fundamentally reshaping how organizations interact with their data. AI systems are no longer isolated workloads. They continuously retrieve information, exchange context between services, and execute automated actions across enterprise environments.
This transformation introduces a new kind of operational challenge. The attack surface expands dramatically as AI workloads generate constant east-west traffic within Kubernetes clusters. Runtime behavior becomes more dynamic as containers load libraries, execute helper processes, and interact with external services. At the same time, models and agents introduce risks that traditional security tools were never designed to handle, including prompt injection, sensitive data leakage, and uncontrolled tool execution.
Enterprise leaders are not asking whether these risks exist. They are asking whether AI can be trusted to deliver measurable outcomes without exposing the organization to unacceptable operational or regulatory exposure. The answer lies in designing AI platforms where security is inseparable from performance and scalability.
Building the platform from the data outward
Every effective AI system begins with data that is accessible, consistent, and immediately usable. The VAST Data Platform and VAST InsightEngine turn enterprise data into an active participant in AI workflows rather than a passive storage layer. By automating ingestion, indexing, and retrieval pipelines, the platform enables enterprise data to become reliable context for AI systems without the fragile, complicated data engineering pipelines that often slow innovation.
Running this data intelligence layer on Cisco UCS and NVIDIA accelerated computing, software, and networking allows the platform to move beyond experimental deployments. It creates a repeatable building block that organizations can deploy across environments with consistent performance and lifecycle management. Production AI requires this level of operational discipline. Without it, scaling AI becomes unpredictable and difficult to govern.
Where security must live in modern AI platforms
The most significant shift in AI security is its location. Security can no longer focus solely on protecting the network perimeter or scanning container images before deployment. In AI data platforms, the majority of risk now lives inside Kubernetes clusters and within AI application interactions themselves.
The first critical challenge is controlling east-west traffic. AI microservices communicate constantly as retrieval pipelines, embedding services, and inference engines exchange data. Without strong segmentation, unintended service reachability can emerge as clusters scale, allowing lateral movement across workloads.
The Isovalent Enterprise Platform based on Cilium addresses this challenge by enforcing identity-based network policies directly within Kubernetes. Instead of relying on fragile, IP-based rules, policies follow workload identity as services scale, migrate, or restart. This ensures that only authorized services communicate with one another while maintaining high performance through eBPF-accelerated networking. The result is consistent enforcement of least-privileged communication across the cluster.
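As an illustrative sketch, an identity-based rule of this kind can be expressed as a CiliumNetworkPolicy. The namespace, labels, and port below are hypothetical, not taken from this deployment; the policy allows only the retrieval pipeline to reach the inference service:

```yaml
# Hypothetical example: only pods labeled app=retrieval-pipeline may reach
# the inference service on its serving port; all other ingress is dropped.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-retrieval-to-inference
  namespace: ai-platform        # assumed namespace
spec:
  endpointSelector:
    matchLabels:
      app: inference-engine     # assumed workload label
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: retrieval-pipeline
      toPorts:
        - ports:
            - port: "8000"      # assumed serving port
              protocol: TCP
```

Because the selectors match workload identity (pod labels) rather than IP addresses, the rule keeps applying as pods restart, scale, or move between nodes.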
However, network segmentation alone cannot detect unexpected behavior inside containers. AI workloads frequently execute processes, access sensitive files, and dynamically load tools and libraries. Even when network communication is restricted, compromised workloads can still behave unpredictably at runtime.
Isovalent Enterprise Runtime Security, built on Tetragon, addresses this second layer of risk. By providing kernel-level observability of process execution and file activity, it allows operators to understand what workloads are doing inside containers. Suspicious behavior can be identified early, helping organizations investigate and respond before issues escalate.
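As a rough sketch of what this kernel-level visibility looks like, Tetragon accepts TracingPolicy resources that attach probes to kernel functions. The hook and path below follow the pattern shown in Tetragon's documentation and are illustrative only, not part of this solution's configuration:

```yaml
# Hypothetical example: emit an observability event whenever a process
# touches a file under an assumed sensitive path, via a kprobe on an
# LSM hook rather than a syscall.
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: watch-sensitive-files
spec:
  kprobes:
    - call: "security_file_permission"   # kernel LSM hook
      syscall: false
      args:
        - index: 0
          type: "file"                   # the struct file being accessed
      selectors:
        - matchArgs:
            - index: 0
              operator: "Prefix"
              values:
                - "/etc/secrets/"        # assumed sensitive path
```

Matching events surface in Tetragon's event stream with the process and container context, which is what lets operators spot a workload reading files it has no business touching.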
Together, these capabilities create a meaningful and enforceable Kubernetes security posture. They control how services communicate and provide visibility into how workloads behave during execution.
Extending security to the AI layer itself
The fastest-growing risk surface in AI environments sits at the model boundary. Models and agents operate in dynamic environments where user prompts, enterprise data, and external tools intersect. Traditional security tools were not built to detect manipulation of AI interactions or unsafe agent behavior.
Cisco AI Defense brings security directly into the AI application layer. It helps organizations analyze model components for vulnerabilities, apply runtime guardrails to prompts and responses, and monitor how models interact with tools and data sources. This provides visibility into how AI systems behave and helps reduce the risk of enterprise data or automated agent actions creating unintended exposure.
With this layer in place, security spans the full lifecycle of AI workloads, from infrastructure and data to Kubernetes operations and AI application behavior.
Demonstrating the secure AI data platform in action
At VAST Forward 2026, we are showing this architecture running as a complete, functional solution. Enterprise data is transformed into AI-ready context by the VAST pipeline. The platform runs on Cisco infrastructure aligned to Cisco Secure AI Factory with NVIDIA design principles. Kubernetes east-west traffic is segmented using the Isovalent Enterprise Platform based on Cilium, while runtime behavior is monitored using Isovalent Enterprise Runtime Security built on Tetragon. The AI interaction layer is protected using Cisco AI Defense.
This is not a theoretical blueprint. It is a live, deployable reference architecture that customers can implement today while continuing to evolve toward deeper integration and automation.
The shift toward secure AI outcomes
The most important lesson emerging from enterprise AI adoption is that security cannot be measured by the number of controls deployed. It must be measured by the ability to operate AI safely and confidently at scale.
A secure AI data platform enables organizations to deliver this outcome by ensuring:
- AI pipelines remain isolated across teams and workloads
- East-west traffic within Kubernetes is controlled and observable
- Runtime behavior inside containers is monitored and understood
- Models and agent interactions are protected from emerging AI-specific threats
When these elements are designed together, organizations gain the confidence to scale AI initiatives across departments, applications, and business units.
The future of responsible AI operations
Cisco Secure AI Factory with NVIDIA represents a blueprint for how enterprise AI will be built moving forward. It brings performance, data intelligence, cloud-native operations, and AI-native security together in a unified operational pattern.
Organizations no longer need to choose between speed and safety. They can deploy AI systems that are both innovative and trustworthy, allowing them to move from experimental projects to production AI services that deliver real business impact.
If you are attending VAST Forward 2026, we invite you to experience this solution firsthand and explore what it means to build AI systems designed for production from day one.