The following is an excerpt from our new white paper, 5 Steps to Close the AI Security Gap in Your Cloud Security Strategy. Get your full copy here.
When it comes to AI, most security teams are still figuring out where to start. New LLMs and other AI infrastructure have popped up across cloud and hybrid IT environments in recent years. More recently, AI agents and semi-autonomous workflows have added yet another layer of complexity and unpredictability. With change coming fast and furious, there's a growing disconnect between current security practices and what's needed to secure these new technologies.
You're not alone if you're asking questions like:
- Which specific AI assets or models really require my attention?
- Do my existing cloud security tools (CSPM, DSPM, CNAPP) cover these new risks?
- Which policies need updating?
- Where should I even begin?
The answers are not always straightforward. AI isn't the first technology shift to which security teams have had to adapt quickly. But the current generation of AI tooling has unique characteristics that pose distinct challenges for traditional security frameworks, processes, and tools.
When Old Problems Meet New Risks
The AI security gap manifests in four main ways. It’s important to note that not all these challenges are new – skill gaps with emerging cloud technologies have been a problem for some time, as have issues related to managing sprawling cloud data estates. The rush to AI, however, tends to amplify these challenges dramatically.
Here are the four primary ways that AI can create difficulties for current security paradigms.
1. Old Problems that Return with a Vengeance
In certain cases, AI will not create a new category of problems as much as it will amplify existing challenges. Take the example of data access controls. If an LLM is trained on your cloud-hosted customer database, it can memorize sensitive information and reproduce it during inference months later – long after initial access has been removed. Similarly, AI agents can create an explosion of nonhuman identities (beyond what's already happening today), further complicating data access governance.
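To make this concrete, here is a minimal sketch of the kind of audit that surfaces the problem. It assumes a simple, hypothetical asset inventory (the Identity records and TRAINING_SOURCES set are illustrative, not tied to any particular cloud provider's API) and flags nonhuman identities that still hold standing access to data stores that were used for model training:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Identity:
    name: str
    is_human: bool
    last_used: datetime
    datasets: set[str]  # data stores this identity can read

# Hypothetical inventory of data stores already used as LLM training sources
TRAINING_SOURCES = {"customers-db", "support-tickets"}

def stale_training_access(identities, max_idle_days=30):
    """Flag nonhuman identities with standing access to training data
    that they haven't exercised recently -- a common leftover after a
    one-off training run."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_idle_days)
    for ident in identities:
        overlap = ident.datasets & TRAINING_SOURCES
        if not ident.is_human and overlap and ident.last_used < cutoff:
            yield ident.name, sorted(overlap)

identities = [
    Identity("training-pipeline-sa", False,
             datetime(2024, 1, 5, tzinfo=timezone.utc), {"customers-db"}),
    Identity("analyst@example.com", True,
             datetime.now(timezone.utc), {"support-tickets"}),
]

for name, datasets in stale_training_access(identities):
    print(f"Review access: {name} still holds {datasets}")
```

The specific check matters less than the shift it represents: access reviews now have to treat training data sources and nonhuman identities as first-class concerns.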
2. Need for New Categories of Control for AI Models
Your existing security monitoring wasn't designed for AI models, which require entirely different types of oversight. These may include tracking model provenance and lineage to prevent supply chain attacks, monitoring training data for bias or poisoning attempts, implementing guardrails against prompt injection, detecting unauthorized model training, and continuously evaluating model outputs for safety violations or data leakage.
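As one example of what a new control can look like in practice, below is a minimal provenance check that assumes a simple in-house manifest format (the manifest.json path and layout are hypothetical, not a specific registry's API). It verifies that model artifacts match the hashes recorded at training time before they are allowed to ship:

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a model artifact in chunks so large files don't load into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifacts(manifest_path: Path) -> list[str]:
    """Compare local model files against the hashes recorded at training time.
    Returns a list of artifacts that are missing or have drifted."""
    manifest = json.loads(manifest_path.read_text())  # {"model.safetensors": "<sha256>", ...}
    problems = []
    for filename, expected in manifest.items():
        artifact = manifest_path.parent / filename
        if not artifact.exists():
            problems.append(f"missing: {filename}")
        elif sha256_of(artifact) != expected:
            problems.append(f"hash mismatch: {filename}")
    return problems

if __name__ == "__main__":
    # Hypothetical path to a model's manifest, recorded by the training pipeline
    issues = verify_artifacts(Path("models/quarterly-classifier/manifest.json"))
    for issue in issues:
        print("BLOCK DEPLOYMENT:", issue)
```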
3. Skill and Tool Gaps
Security teams are tasked with the unenviable responsibility of building AI security expertise while simultaneously securing the AI implementations already in use, which is not trivial even for the most senior and technical professionals. The abundance of disconnected point solutions for individual AI risks, meanwhile, isn't helping. For example, when one tool monitors model access, another checks prompt security, and a third handles data lineage, critical relationships can go undetected.
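A rough sketch of why this matters: if each tool's findings can be exported into a common shape (the finding records below are hypothetical), even basic correlation by asset surfaces relationships that no single console shows on its own:

```python
from collections import defaultdict

# Hypothetical normalized findings exported from three separate point tools
findings = [
    {"tool": "model-access-monitor", "asset": "llm-support-bot", "risk": "over-privileged service account"},
    {"tool": "prompt-security",      "asset": "llm-support-bot", "risk": "prompt injection attempt detected"},
    {"tool": "data-lineage",         "asset": "llm-support-bot", "risk": "trained on unredacted PII"},
    {"tool": "data-lineage",         "asset": "llm-internal-qa", "risk": "trained on public docs only"},
]

def correlate(findings, min_tools=2):
    """Group findings by asset and surface assets flagged by multiple tools --
    the relationships that go unnoticed when each console is reviewed separately."""
    by_asset = defaultdict(list)
    for finding in findings:
        by_asset[finding["asset"]].append(finding)
    for asset, items in by_asset.items():
        if len({f["tool"] for f in items}) >= min_tools:
            yield asset, [f"{f['tool']}: {f['risk']}" for f in items]

for asset, risks in correlate(findings):
    print(asset)
    for risk in risks:
        print("  -", risk)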
4. Emerging Compliance Mandates
New compliance frameworks and industry standards are creating requirements that your existing compliance program wasn't designed to address. For example, NIST AI 600-1 requires specific documentation of training data sources, while the OWASP Top 10 for LLM Applications highlights AI-specific vulnerabilities like training data poisoning and prompt injection that don't map cleanly to traditional vulnerability management categories.
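What "documentation of training data sources" looks like in practice is still up to each organization. The sketch below is one hypothetical record structure (not an official NIST or OWASP schema) that captures the source, PII status, and retention policy for each dataset a model was trained on, so the information is ready when auditors ask for it:

```python
from dataclasses import dataclass, field, asdict
from datetime import date
import json

@dataclass
class TrainingDataSource:
    name: str
    location: str            # e.g. a bucket or dataset URI
    contains_pii: bool
    collected_on: date
    retention_policy: str

@dataclass
class ModelRecord:
    model_name: str
    version: str
    sources: list[TrainingDataSource] = field(default_factory=list)

# Hypothetical record for one internal model
record = ModelRecord(
    model_name="support-summarizer",
    version="2024.06",
    sources=[
        TrainingDataSource(
            name="support-tickets-2023",
            location="s3://example-bucket/support-tickets/2023/",
            contains_pii=True,
            collected_on=date(2024, 1, 15),
            retention_policy="delete raw exports after 12 months",
        )
    ],
)

# Emit the record as JSON so it can be attached to audit evidence
print(json.dumps(asdict(record), default=str, indent=2))
```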
When Models Become Agents, Everything Gets More Complex
Every problem described above is compounded when LLMs move from stand-alone components into agentic workflows. The term "AI agent" typically describes an AI-powered system that combines one or more LLMs with API integrations to perform actions, make decisions, and use tools. These agents can call cloud APIs, access cloud databases, and execute multi-step tasks with minimal human intervention.
Consider a finance department that deploys an agent to review quarterly reports, generate financial analyses, and email its findings to relevant stakeholders. This creates a nonhuman identity whose security overhead is comparable to a human actor's, since agents often need broad permissions across diverse internal and external systems. The agent's non-deterministic output and behavior also call for a different security approach than traditional IT systems, and LLM-focused attacks such as prompt injection carry greater risk because a manipulated agent can affect the downstream systems it touches.
Agents add another layer of opacity and non-determinism to what's actually going on, since you're dealing with dozens or hundreds of "black-box" decisions at once. Add to that the broad set of permissions they require, and you find yourself with an exponentially larger pool of attack paths to consider.
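One practical response is to continuously compare what an agent identity has been granted against what its documented tasks actually require. The sketch below uses hypothetical permission names rather than any specific cloud provider's IAM model:

```python
# Permissions each documented agent task actually needs (hypothetical names)
AGENT_TASKS = {
    "review_quarterly_reports": {"finance-db:read"},
    "generate_analysis":        {"finance-db:read", "reports-bucket:write"},
    "email_stakeholders":       {"mail:send"},
}

# Permissions currently granted to the agent's identity
GRANTED = {"finance-db:read", "finance-db:write", "reports-bucket:write",
           "mail:send", "hr-db:read"}

def excess_permissions(granted: set[str], tasks: dict[str, set[str]]) -> set[str]:
    """Return permissions granted to the agent that none of its tasks require --
    each one is a potential attack path if the agent is manipulated via prompt injection."""
    required = set().union(*tasks.values())
    return granted - required

print("Unused permissions to revoke:", sorted(excess_permissions(GRANTED, AGENT_TASKS)))
```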
Solutions Require an Integrated Approach
AI security should never exist in isolation. When it's treated as a problem separate from your overall cloud security strategy, you lose important context. For example, a publicly accessible VM might not seem like a major issue, but it becomes one if the machine is running an open-source model trained on your codebase.
The most effective security programs layer business and application context on top of technical telemetry. This means understanding not only that an LLM exists in your environment, but also how it's being used across your business. Is it powering a customer-facing tool? Receiving data from the web? Automating employee workflows? Each use case carries different risk profiles and requires different security controls.
Rather than chase isolated alerts, teams need to see the connections and understand how combinations of model, data, and access risks consolidate into exploitable attack paths. Attack path analysis that accounts for the entire application context helps teams prioritize the most pertinent issues rather than address thousands of disconnected findings.
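At its core, attack path analysis is a graph problem. The sketch below uses a hypothetical, hand-built relationship graph (real implementations derive it from cloud inventory) and walks from internet exposure to sensitive training data, surfacing complete paths rather than isolated findings:

```python
from collections import deque

# Hypothetical relationship graph: edges connect exposure, compute,
# models, and the data those models were trained on.
edges = {
    "internet":          ["vm-inference-01"],
    "vm-inference-01":   ["model-codegen"],            # publicly reachable VM serving a model
    "model-codegen":     ["repo-proprietary-code"],    # model trained on this data
    "vm-batch-02":       ["model-internal-qa"],
    "model-internal-qa": ["public-docs"],
}

SENSITIVE = {"repo-proprietary-code"}

def attack_paths(start="internet"):
    """Breadth-first search from internet exposure to sensitive data,
    yielding each full path so it can be prioritized as a unit."""
    queue = deque([[start]])
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node in SENSITIVE:
            yield path
            continue
        for nxt in edges.get(node, []):
            if nxt not in path:  # avoid cycles
                queue.append(path + [nxt])

for path in attack_paths():
    print(" -> ".join(path))
```

Scoring each complete path, instead of each node in isolation, is what lets a team fix the one exposure that actually leads somewhere.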
The Path Forward
So is AI a new challenge for cloud security? The answer is both yes and no. It amplifies familiar problems in unexpected ways, while introducing genuinely new attack vectors that traditional frameworks weren't designed to handle. The key is recognizing that AI creates a hybrid threat landscape within cloud environments that requires an integrated approach – one that builds on existing cloud security foundations while addressing AI-specific risks.
To learn more about how to close the AI security gap, read our full white paper.