AI-forward organizations are already making the shift from using third-party foundation models to building custom applications for their unique business use cases. This shift introduces several unknowns in model and application behavior, which is why assessing security risks is becoming increasingly important. Prisma® AIRS AI Red Teaming delivers on the essential behavioral risk assessment requirements of probabilistic AI systems.
Vulnerability Assessments Need to Scale at the Speed of AI
Traditional security tools are ineffective against new threats such as prompt injection and data exfiltration through jailbreaking of an AI application or an AI agent. Manual red teaming cannot scale at the speed of AI. Automated means of identifying behavioral vulnerabilities in your AI systems are table stakes.
AI red teaming needs to scale with AI adoption for every organization. But it must balance the friction that a necessary vulnerability assessment introduces against the value of innovating at speed. Organizations developing AI systems need to be able to run these risk assessments continuously. Moreover, each assessment should be comprehensive in its threat coverage and must deliver contextual insights specific to the use case and usage of the AI system.

Organizations need a solution that sits at the intersection of contextual, continuous and comprehensive.
Contextual Security Assessments
Generic assessments no longer meet the requirement. Every AI system in an organization is built to solve specific business problems and is at a specific stage of development. Every risk assessment needs to take the uniqueness of the application or the model into consideration.
An application designed to summarize financial statements for a company could have access to sensitive databases. A contextual security assessment should be able to craft attacks that attempt to exfiltrate this sensitive data. Similarly, an AI agent built to read support emails and file tickets for refund-related issues has access to users' personally identifiable information (PII).
Red teaming this AI agent cannot be based solely on generic goals such as “building bombs” or “Molotov cocktails.” The agent might be designed to access very specific tools and repositories to carry out its job. Moreover, AI applications and agents today are being designed to deliver specific user experiences. An AI red teaming solution needs to factor in these human aspects of the interaction with the agent and carry out assessments accordingly.
Prisma AIRS AI Red Teaming leverages a proprietary AI agent to carry out contextual risk assessments of AI systems. With Prisma® AIRS, you can simulate attack scenarios by allowing the agent to profile the application and run automated tests, or augment its ability by providing contextual information of your own. You can even prompt the agent to pursue specific goals over and above the ones it has identified for itself. You can also simulate interaction scenarios, such as input validation on character limits and session lengths, so the agent understands the constraints of the environment being tested.
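To make this concrete, here is a minimal sketch of what such a contextual assessment configuration could look like for the refund-handling agent described above. The class and field names are illustrative assumptions for this post, not the Prisma AIRS API; they simply capture the kinds of context an assessment can carry: data sources, custom goals and interaction constraints.

```python
# Hypothetical sketch of a contextual assessment configuration. The class and
# field names are illustrative assumptions for this post, NOT the Prisma AIRS API.
from dataclasses import dataclass, field


@dataclass
class InteractionConstraints:
    max_input_chars: int = 2000       # input validation on character limits
    max_turns_per_session: int = 10   # session-length constraint the agent must respect


@dataclass
class AssessmentContext:
    app_description: str                               # what the target application does
    data_sources: list = field(default_factory=list)   # sensitive stores the app can reach
    custom_goals: list = field(default_factory=list)   # goals beyond what the agent identifies itself
    constraints: InteractionConstraints = field(default_factory=InteractionConstraints)


# Example: the refund-handling support agent described above, which touches PII.
context = AssessmentContext(
    app_description="Reads support emails and files tickets for refund-related issues",
    data_sources=["crm_customer_records", "refund_ledger"],
    custom_goals=["Exfiltrate customer email addresses via crafted refund requests"],
    constraints=InteractionConstraints(max_input_chars=1500, max_turns_per_session=6),
)
print(context.custom_goals)
```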
Continuous Testing for Security Loopholes
Every fine-tuning of a model changes the security alignment of the application it powers. Any change to the system prompt or to the sensitivity of the guardrails can cause a disproportionate change in the nature of output generated by AI applications. These problems magnify in the case of AI agents, where several interconnected systems are involved and a deviation in any one component multiplies across the entire ecosystem.
Proactive AI risk assessments enabled at scale become a catalyst for faster AI value generation.
This is why Prisma AIRS AI Red Teaming is built to integrate into your CI/CD pipelines. It is already a well-established practice to run functional evaluations after every deployed change. Prisma AIRS AI Red Teaming allows developers to run security evaluations alongside performance evaluations. Through well-documented APIs and quick and easy templates (coming soon), you can run red teaming on your AI applications and understand the security impact of every change deployed. This proactive approach reduces technical debt and avoids surprises later in the development cycle.
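As an illustration, a pipeline step along the following lines could trigger a red teaming run after each deployment and gate the build on the result. The endpoint, payload and response fields below are placeholders for whatever your red teaming service exposes, not documented Prisma AIRS APIs.

```python
# Hypothetical CI/CD step: trigger a red teaming run after a deployment and
# gate the build on the result. The endpoint, payload and response fields are
# placeholders, NOT documented Prisma AIRS APIs.
import os
import sys

import requests

API_URL = os.environ["REDTEAM_API_URL"]      # e.g., injected as a CI secret
API_TOKEN = os.environ["REDTEAM_API_TOKEN"]

response = requests.post(
    f"{API_URL}/assessments",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={"target": "support-refund-agent", "profile": "post-deploy"},
    timeout=300,
)
response.raise_for_status()
report = response.json()

# Fail the pipeline if any high-severity finding is reported.
high_risk = [f for f in report.get("findings", []) if f.get("severity") == "high"]
if high_risk:
    print(f"{len(high_risk)} high-severity finding(s) detected; failing the build.")
    sys.exit(1)
print("No high-severity findings; continuing the pipeline.")
```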
Comprehensive Coverage of AI Threat Vectors
An AI prompt-based attack is essentially a combination of a technique and an objective. An AI organization should be able to understand which combinations of techniques can bypass its safety guardrails to achieve particular attack objectives.
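A toy sketch of this framing: pairing even a handful of bypass techniques with a handful of objectives already yields a combinatorial set of attack vectors to test. The techniques and objectives listed below are generic examples, not entries from the Prisma AIRS catalog.

```python
# Toy illustration of the technique-plus-objective framing. The entries below are
# generic examples, not Prisma AIRS's actual attack-vector catalog.
from itertools import product

techniques = [
    "role-play jailbreak",
    "prompt injection via a pasted document",
    "multi-turn context manipulation",
]
objectives = [
    "exfiltrate records from a connected database",
    "reveal the system prompt",
    "trigger an unauthorized refund",
]

# Each simulated attack pairs one technique with one objective.
attack_vectors = [{"technique": t, "objective": o} for t, o in product(techniques, objectives)]
print(f"{len(attack_vectors)} candidate attack vectors")  # 3 techniques x 3 objectives = 9
```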
Comprehensive threat intelligence, therefore, is the most important aspect of an AI security solution. An AI red teaming solution that lacks this depth cannot guide AI developers with actionable insights. Reports from such red teaming will always be incomplete and outdated because AI attacks evolve too quickly.
Palo Alto Networks, with the combined power of Huntr and Unit 42 Threat Research, is able to stay ahead of attackers by building these techniques and exploits into Prisma AIRS. You can carry out risk assessments using over 500 attack vectors covering more than 50 techniques. All simulated attacks in Prisma AIRS AI Red Teaming can be categorized into security, safety and compliance by mapping them to popular AI risk frameworks, such as the OWASP Top 10 for LLMs and the NIST AI RMF. An AI Red Teaming report generated by Prisma AIRS gives you a broad-spectrum analysis of the risk profile of your AI systems. The team at Palo Alto Networks works hand in hand with its threat research communities to constantly update attack vectors to keep your AI-driven future secure.
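For illustration, a report entry could carry framework mappings along these lines. The category names and assignments below are examples chosen for this post, not the product's actual taxonomy.

```python
# Illustrative mapping of finding categories to public risk frameworks. The names
# and assignments are examples for this post, not the product's actual taxonomy.
framework_mapping = {
    "prompt injection": {
        "bucket": "security",
        "owasp_llm": "Prompt Injection",
        "nist_ai_rmf_function": "Manage",
    },
    "sensitive information disclosure": {
        "bucket": "compliance",
        "owasp_llm": "Sensitive Information Disclosure",
        "nist_ai_rmf_function": "Measure",
    },
    "excessive agency": {
        "bucket": "safety",
        "owasp_llm": "Excessive Agency",
        "nist_ai_rmf_function": "Govern",
    },
}

for category, mapping in framework_mapping.items():
    print(category, "->", mapping["bucket"])
```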
Thorough and Proactive AI Risk Assessment at Scale
It is estimated that over 50 AI threat research papers have been published on arXiv alone in the last 10 months. Practitioners on X and LinkedIn are posting about their findings almost every day. Meanwhile, attackers are already leveraging AI to attack all connected systems.
Many foundational AI companies have already announced that their models are now being built in ways that cannot be weaponized. Even so, it is imperative that AI-forward organizations depend on a robust AI risk assessment method that is comprehensive, can be used continuously and provides contextual risk insights. Prisma AIRS AI Red Teaming sits at the intersection of these three requirements.
Be proactive. Deploy Bravely with Prisma AIRS.