* [Blog](https://www2.paloaltonetworks.com/blog) * [Network Security](https://www2.paloaltonetworks.com/blog/network-security/) * [AI Application Security](https://www2.paloaltonetworks.com/blog/network-security/category/ai-application-security/) * Building Secure AI by Des... # Building Secure AI by Design: A Defense-in-Depth Approach [](https://www.facebook.com/sharer/sharer.php?u=https%3A%2F%2Fwww2.paloaltonetworks.com%2Fblog%2Fnetwork-security%2Fbuilding-secure-ai-by-design-a-defense-in-depth-approach%2F) [](https://twitter.com/share?text=Building+Secure+AI+by+Design%3A+A+Defense-in-Depth+Approach&url=https%3A%2F%2Fwww2.paloaltonetworks.com%2Fblog%2Fnetwork-security%2Fbuilding-secure-ai-by-design-a-defense-in-depth-approach%2F) [](https://www.linkedin.com/shareArticle?mini=true&url=https%3A%2F%2Fwww2.paloaltonetworks.com%2Fblog%2Fnetwork-security%2Fbuilding-secure-ai-by-design-a-defense-in-depth-approach%2F&title=Building+Secure+AI+by+Design%3A+A+Defense-in-Depth+Approach&summary=&source=) [](https://www.paloaltonetworks.com//www.reddit.com/submit?url=https://www2.paloaltonetworks.com/blog/network-security/building-secure-ai-by-design-a-defense-in-depth-approach/&ts=markdown) \[\](mailto:?subject=Building Secure AI by Design: A Defense-in-Depth Approach) Link copied By [Hitendar Sethi](https://www.paloaltonetworks.com/blog/author/hitendar-sethi/?ts=markdown "Posts by Hitendar Sethi") and [Kay Clark](https://www.paloaltonetworks.com/blog/author/kayclark/?ts=markdown "Posts by Kay Clark") Nov 25, 2025 8 minutes [AI Application Security](https://www.paloaltonetworks.com/blog/network-security/category/ai-application-security/?ts=markdown) [AI Security](https://www.paloaltonetworks.com/blog/category/ai-security/?ts=markdown) [defense in depth](https://www.paloaltonetworks.com/blog/tag/defense-in-depth/?ts=markdown) [Secure AI](https://www.paloaltonetworks.com/blog/tag/secure-ai/?ts=markdown) [Secure AI by Design Framework](https://www.paloaltonetworks.com/blog/tag/secure-ai-by-design-framework/?ts=markdown) *This is the second installment in a four-part series on implementing Secure by Design principles in AI system development.* In our [previous article](https://www.paloaltonetworks.com/blog/network-security/the-evolution-of-ai-security-why-secure-ai-by-design-matters/), we explored the evolving AI security landscape and introduced CISA's Secure by Design framework. Now, we'll dive deeper into how organizations can implement these principles through a comprehensive security strategy that spans the entire AI development lifecycle. # Fundamental Security Requirements for AI Implementing secure design principles for [GenAI systems](https://www.paloaltonetworks.com/cyberpedia/what-is-generative-ai-security) requires a focused approach to security fundamentals. The CIA triad --- confidentiality, integrity and availability --- forms the cornerstone of this framework when adapted to AI contexts. 1. Confidentiality in GenAI --------------------------- GenAI systems demand robust access controls and encryption for both training data and model parameters. This prevents unauthorized exposure of sensitive information that might be embedded within models or extracted through sophisticated prompting techniques. 2. Integrity for AI Systems --------------------------- Integrity requires mechanisms to verify that AI outputs remain accurate and unaltered. This includes defense against adversarial attacks that could subtly manipulate model responses, as well as enabling traceability between inputs and outputs. 3. Availability Considerations ------------------------------ Availability focuses on maintaining consistent AI system performance while preventing denial-of-service (DoS) through resource exhaustion or [prompt injection attacks](https://www.paloaltonetworks.com/cyberpedia/what-is-a-prompt-injection-attack). # AI Model and Data Governance Effective model and data governance complements these principles through: * Comprehensive inventories of models and datasets * Clear documentation of data provenance and model limitations * Regular security assessments focusing on AI-specific vulnerabilities * Robust change management protocols * Continuous monitoring for model drift or anomalous behavior Machine learning security operations (MLSecOps) creates a layered architecture where security is woven into every phase of the AI lifecycle. Organizations that implement security at only one stage leave critical vulnerabilities elsewhere in the pipeline --- like securing your front door with four deadbolts while leaving the windows unlocked. To be truly effective, security for AI must span from the very initial scoping phase all the way through continuous monitoring in deployment. # Secure by Design in the MLSecOps Lifecycle Building secure-by-design AI systems requires a defense-in-depth (DiD) approach, integrating security controls at every phase of the MLSecOps lifecycle, as seen in the image below. ![](https://www.paloaltonetworks.com/blog/wp-content/uploads/2025/11/word-image-349123-1.png) As agentic AI systems --- those capable of autonomous decision-making --- become more prevalent, the convergence of MLSecOps with DevSecOps practices is crucial to manage the expanded attack surface and leverage consistent security policies across both AI-specific and traditional software risks. This integration enables comprehensive monitoring, policy enforcement and incident response capabilities, which are essential for mitigating the unique vulnerabilities associated with agentic AI. Let's explore how security tasks within the MLSecOps lifecycle map to the OWASP^®^ Top 10 for LLMs and GenAI, MITRE ATLAS™ and NIST AI Risk Management Framework (AI-RMF) bodies of work, providing practical guidance on how to build AI systems that are Secure by Design. 1. Scope -------- The Scope phase aligns with NIST AI-RMF's "Map" function, focusing on identifying attack surfaces and defining security requirements early. This phase includes threat modeling tailored to AI systems, which helps anticipate potential risks like prompt injection (OWASP LLM01) and supply chain vulnerabilities (OWASP LLM03). Threat models should consider the entire AI pipeline, from data ingestion to deployment, highlighting risks such as [data poisoning](https://www.paloaltonetworks.com/cyberpedia/what-is-data-poisoning) and [model inversion attacks](https://www.paloaltonetworks.com/cyberpedia/what-are-adversarial-attacks-on-AI-Machine-Learning). Key ML techniques from MITRE ATLAS that can be threat modeled during this phase include: * **ML Supply Chain Compromise:** identifying vulnerabilities in pretrained models, datasets and dependencies * **Model Reconnaissance:** probing to understand model boundaries and behaviors * **Exfiltration via ML Inference:** extracting sensitive information through model outputs Security requirements must specify controls for confidentiality, integrity and availability, ensuring that AI systems can protect sensitive information embedded in training data (OWASP LLM02) and maintain accurate, unaltered outputs. Additionally, policy considerations must address regulatory compliance and ethical use of AI, aligning with NIST's "Governance" function. 2. Data Preparation ------------------- The Data Preparation phase focuses on maintaining data integrity and privacy, aligning with the NIST AI-RMF "Measure" function. Key controls include data validation and labeling, which help prevent data and model poisoning (OWASP LLM04). Validating that data sources are vetted and trustworthy is crucial for mitigating the risks of misinformation and adversarial inputs that could corrupt model training. Privacy considerations must include techniques like differential privacy and encryption to protect sensitive information during both the training and inference phases. Threat modeling should be refined to include specific risks in the data supply chain, such as unauthorized access and tampering. Aligning these practices helps build robust defenses against techniques described in the MITRE ATLAS matrix, such as: * Data Manipulation * Data Poisoning * Exfiltration of Sensitive Information 3. Model Training ----------------- The Model Training phase is where secure coding practices and rigorous testing come into play. Aligning with NIST's "Manage" function, this phase must incorporate model risk assessments to identify vulnerabilities like improper output handling (OWASP LLM05) and vector and embedding weaknesses (OWASP LLM08). AI pipeline security, including dependency management and validation of pretrained models, is critical for preventing supply chain risks. Appropriate controls and defenses around techniques in the ATLAS matrix that map to this phase include: * AI Model Inference API Access * ML Model Access * ML Supply Chain Compromise * Poison Training Data Key practices include secure coding standards tailored for AI, such as input validation and output sanitization, to defend against excessive agency risks (OWASP LLM06). Integrating security testing, including adversarial robustness assessments and model scanning for known vulnerabilities, help confirm that models do not produce unsafe or biased outputs. Moreover, evaluating the trade-offs between performance and security during this phase helps in balancing risk and efficiency. 4. Testing ---------- In the Testing phase, aligning with the NIST AI-RMF "Measure" function, security testing must be comprehensive and continuous. This includes adversarial testing ([red teaming/penetration testing](https://protectai.com/recon)) to uncover vulnerabilities such as system prompt leakage (OWASP LLM07) and unbounded consumption (OWASP LLM10). Security testing methodologies should also validate compliance with regulatory standards and internal policies, ensuring that AI systems are robust against both technical and operational threats. Protection against ATLAS ML techniques that should be tested for in this phase include: * Model Evasion * Prompt Extraction * Prompt Injection * Inference Manipulation Testing agentic AI systems presents unique challenges, as the behavior of AI agents can differ significantly when deployed as part of a larger ecosystem. Comprehensive testing must cover not only individual model components but also the interactions between them, identifying risks that emerge only in full-system operations. This phase should also incorporate behavioral analysis to detect anomalies and verify that AI agents act within predefined policy boundaries. 5. Deployment and Monitoring ---------------------------- The Deployment and Monitoring phases align with the NIST "Govern" function, emphasizing secure deployment patterns and continuous oversight. Security controls must include model signing to verify model authenticity and prevent unauthorized modifications. Additionally, supply chain vulnerability management should address risks associated with [third-party libraries and pretrained models](https://protectai.com/guardian) integrated into the AI pipeline. Continuous monitoring is critical for detecting emerging threats, such as misinformation (OWASP LLM09), and for keeping AI systems in compliance with evolving regulations. Monitoring should include anomaly detection mechanisms to identify deviations in model behavior, potentially indicating adversarial attacks or data drift. Policy enforcement must extend to incident response, ensuring that security breaches are promptly detected, contained and addressed. MITRE ATLAS techniques to monitor for include: * Denial of Service * Model Tampering * Exfiltration via both Inference APIs and Cyber Means * Jailbreaks For agentic AI systems, monitoring requirements are more stringent due to the autonomous nature of decision-making processes. Effective monitoring must track not only model outputs but also the decision pathways and external interactions, providing comprehensive oversight of AI behaviors. It is also critical to watch for lateral movement and attempts to access systems beyond authorized boundaries, particularly in agentic AI systems that interact with multiple environments. Lateral movement can occur when an AI agent leverages initial access to one system to navigate to adjacent systems, potentially expanding its reach beyond intended operational constraints. Such movement might manifest as an agent using credentials or permissions granted for one task to access unrelated databases, APIs or computing resources. In agentic AI, this lateral movement presents unique challenges, as the agent's autonomy means it may discover and exploit pathways that weren't anticipated during system design. Monitoring within these systems will need to focus on mapping the complete operational graph of agent activities, tracking not just what resources are accessed but the sequential relationship between access events. # Holistic Security for Complex \& Autonomous AI Systems As AI systems grow more complex and autonomous, securing them requires a holistic approach that extends beyond individual safeguards. Integrating security throughout the MLSecOps lifecycle is essential to address vulnerabilities at every phase from initial scoping to continuous monitoring. By aligning security tasks with established frameworks like the 2025 OWASP Top 10 for LLMs and GenAI, MITRE ATLAS and NIST AI-RMF, organizations can build AI solutions that are not only secure by design but also resilient to evolving threats. In the next part of our series, we'll focus on agentic AI systems and the unique security challenges they present, exploring how MLSecOps and DevSecOps must converge to create truly secure autonomous AI. Ready to dive deeper? Get the full whitepaper, "[Securing AI's Front Lines: A Framework for Building Trustworthy, Defensible AI Systems](https://www.paloaltonetworks.com/resources/whitepapers/securing-ai-front-lines)." ### Also in this series Part 1 | [The Evolution of AI Security: Why Secure AI by Design Matters](https://www.paloaltonetworks.com/blog/network-security/the-evolution-of-ai-security-why-secure-ai-by-design-matters/) *** ** * ** *** ## Related Blogs ### [AI Application Security](https://www.paloaltonetworks.com/blog/network-security/category/ai-application-security/?ts=markdown), [AI Security](https://www.paloaltonetworks.com/blog/category/ai-security/?ts=markdown) [#### Tools and Technologies for Secure by Design AI Systems](https://www2.paloaltonetworks.com/blog/network-security/tools-and-technologies-for-secure-by-design-ai-systems/) ### [AI Application Security](https://www.paloaltonetworks.com/blog/network-security/category/ai-application-security/?ts=markdown), [AI Security](https://www.paloaltonetworks.com/blog/category/ai-security/?ts=markdown) [#### Securing Agentic AI: Where MLSecOps Meets DevSecOps](https://www2.paloaltonetworks.com/blog/network-security/securing-agentic-ai-where-mlsecops-meets-devsecops/) ### [AI Security](https://www.paloaltonetworks.com/blog/category/ai-security/?ts=markdown) [#### The Evolution of AI Security: Why Secure AI by Design Matters](https://www2.paloaltonetworks.com/blog/network-security/the-evolution-of-ai-security-why-secure-ai-by-design-matters/) ### [AI and Cybersecurity](https://www.paloaltonetworks.com/blog/security-operations/category/ai-and-cybersecurity/?ts=markdown), [AI Application Security](https://www.paloaltonetworks.com/blog/network-security/category/ai-application-security/?ts=markdown), [AI Governance](https://www.paloaltonetworks.com/blog/category/ai-governance/?ts=markdown), [AI Security](https://www.paloaltonetworks.com/blog/category/ai-security/?ts=markdown) [#### OpenClaw (formerly Moltbot, Clawdbot) May Signal the Next AI Security Crisis](https://www2.paloaltonetworks.com/blog/network-security/why-moltbot-may-signal-ai-crisis/) ### [AI Application Security](https://www.paloaltonetworks.com/blog/network-security/category/ai-application-security/?ts=markdown), [AI Security](https://www.paloaltonetworks.com/blog/category/ai-security/?ts=markdown) [#### Can Your AI Be Manipulated Into Generating Malware?](https://www2.paloaltonetworks.com/blog/network-security/can-your-ai-be-manipulated-into-generating-malware/) ### [AI Security](https://www.paloaltonetworks.com/blog/category/ai-security/?ts=markdown), [CIO/CISO](https://www.paloaltonetworks.com/blog/category/ciociso/?ts=markdown), [Points of View](https://www.paloaltonetworks.com/blog/category/points-of-view/?ts=markdown) [#### A CIO's First Principles Reference Guide for Securing AI by Design](https://www2.paloaltonetworks.com/blog/2025/11/cios-first-principles-reference-guide-securing-ai-design/) ### Subscribe to Network Security Blogs! Sign up to receive must-read articles, Playbooks of the Week, new feature announcements, and more. ![spinner](https://www2.paloaltonetworks.com/blog/wp-content/themes/panwblog2023/dist/images/ajax-loader.gif) Sign up Please enter a valid email. By submitting this form, you agree to our [Terms of Use](https://www.paloaltonetworks.com/legal-notices/terms-of-use?ts=markdown) and acknowledge our [Privacy Statement](https://www.paloaltonetworks.com/legal-notices/privacy?ts=markdown). Please look for a confirmation email from us. If you don't receive it in the next 10 minutes, please check your spam folder. This site is protected by reCAPTCHA and the Google [Privacy Policy](https://policies.google.com/privacy) and [Terms of Service](https://policies.google.com/terms) apply. {#footer} {#footer} ## Products and Services * [AI-Powered Network Security Platform](https://www.paloaltonetworks.com/network-security?ts=markdown) * [Secure AI by Design](https://www.paloaltonetworks.com/precision-ai-security/secure-ai-by-design?ts=markdown) * [Prisma AIRS](https://www.paloaltonetworks.com/prisma/prisma-ai-runtime-security?ts=markdown) * [AI Access Security](https://www.paloaltonetworks.com/sase/ai-access-security?ts=markdown) * [Cloud Delivered Security Services](https://www.paloaltonetworks.com/network-security/security-subscriptions?ts=markdown) * [Advanced Threat Prevention](https://www.paloaltonetworks.com/network-security/advanced-threat-prevention?ts=markdown) * [Advanced URL Filtering](https://www.paloaltonetworks.com/network-security/advanced-url-filtering?ts=markdown) * [Advanced WildFire](https://www.paloaltonetworks.com/network-security/advanced-wildfire?ts=markdown) * [Advanced DNS Security](https://www.paloaltonetworks.com/network-security/advanced-dns-security?ts=markdown) * [Enterprise Data Loss Prevention](https://www.paloaltonetworks.com/sase/enterprise-data-loss-prevention?ts=markdown) * [Enterprise IoT Security](https://www.paloaltonetworks.com/network-security/enterprise-device-security?ts=markdown) * [Medical IoT Security](https://www.paloaltonetworks.com/network-security/medical-device-security?ts=markdown) * [Industrial OT Security](https://www.paloaltonetworks.com/network-security/medical-device-security?ts=markdown) * [SaaS Security](https://www.paloaltonetworks.com/sase/saas-security?ts=markdown) * [Next-Generation Firewalls](https://www.paloaltonetworks.com/network-security/next-generation-firewall?ts=markdown) * [Hardware Firewalls](https://www.paloaltonetworks.com/network-security/hardware-firewall-innovations?ts=markdown) * [Software Firewalls](https://www.paloaltonetworks.com/network-security/software-firewalls?ts=markdown) * [Strata Cloud Manager](https://www.paloaltonetworks.com/network-security/strata-cloud-manager?ts=markdown) * [SD-WAN for NGFW](https://www.paloaltonetworks.com/network-security/sd-wan-subscription?ts=markdown) * [PAN-OS](https://www.paloaltonetworks.com/network-security/pan-os?ts=markdown) * [Panorama](https://www.paloaltonetworks.com/network-security/panorama?ts=markdown) * [Secure Access Service Edge](https://www.paloaltonetworks.com/sase?ts=markdown) * [Prisma SASE](https://www.paloaltonetworks.com/sase?ts=markdown) * [Application Acceleration](https://www.paloaltonetworks.com/sase/app-acceleration?ts=markdown) * [Autonomous Digital Experience Management](https://www.paloaltonetworks.com/sase/adem?ts=markdown) * [Enterprise DLP](https://www.paloaltonetworks.com/sase/enterprise-data-loss-prevention?ts=markdown) * [Prisma Access](https://www.paloaltonetworks.com/sase/access?ts=markdown) * [Prisma Browser](https://www.paloaltonetworks.com/sase/prisma-browser?ts=markdown) * [Prisma SD-WAN](https://www.paloaltonetworks.com/sase/sd-wan?ts=markdown) * [Remote Browser Isolation](https://www.paloaltonetworks.com/sase/remote-browser-isolation?ts=markdown) * [SaaS Security](https://www.paloaltonetworks.com/sase/saas-security?ts=markdown) * [AI-Driven Security Operations Platform](https://www.paloaltonetworks.com/cortex?ts=markdown) * [Cloud Security](https://www.paloaltonetworks.com/cortex/cloud?ts=markdown) * [Cortex Cloud](https://www.paloaltonetworks.com/cortex/cloud?ts=markdown) * [Application Security](https://www.paloaltonetworks.com/cortex/cloud/application-security?ts=markdown) * [Cloud Posture Security](https://www.paloaltonetworks.com/cortex/cloud/cloud-posture-security?ts=markdown) * [Cloud Runtime Security](https://www.paloaltonetworks.com/cortex/cloud/runtime-security?ts=markdown) * [Prisma Cloud](https://www.paloaltonetworks.com/prisma/cloud?ts=markdown) * [AI-Driven SOC](https://www.paloaltonetworks.com/cortex?ts=markdown) * [Cortex XSIAM](https://www.paloaltonetworks.com/cortex/cortex-xsiam?ts=markdown) * [Cortex XDR](https://www.paloaltonetworks.com/cortex/cortex-xdr?ts=markdown) * [Cortex XSOAR](https://www.paloaltonetworks.com/cortex/cortex-xsoar?ts=markdown) * [Cortex Xpanse](https://www.paloaltonetworks.com/cortex/cortex-xpanse?ts=markdown) * [Unit 42 Managed Detection \& Response](https://www.paloaltonetworks.com/cortex/managed-detection-and-response?ts=markdown) * [Managed XSIAM](https://www.paloaltonetworks.com/cortex/managed-xsiam?ts=markdown) * [Threat Intel and Incident Response Services](https://www.paloaltonetworks.com/unit42?ts=markdown) * [Proactive Assessments](https://www.paloaltonetworks.com/unit42/assess?ts=markdown) * [Incident Response](https://www.paloaltonetworks.com/unit42/respond?ts=markdown) * [Transform Your Security Strategy](https://www.paloaltonetworks.com/unit42/transform?ts=markdown) * [Discover Threat Intelligence](https://www.paloaltonetworks.com/unit42/threat-intelligence-partners?ts=markdown) ## Company * [About Us](https://www.paloaltonetworks.com/about-us?ts=markdown) * [Careers](https://jobs.paloaltonetworks.com/en/) * [Contact Us](https://www.paloaltonetworks.com/company/contact-sales?ts=markdown) * [Corporate Responsibility](https://www.paloaltonetworks.com/about-us/corporate-responsibility?ts=markdown) * [Customers](https://www.paloaltonetworks.com/customers?ts=markdown) * [Investor Relations](https://investors.paloaltonetworks.com/) * [Location](https://www.paloaltonetworks.com/about-us/locations?ts=markdown) * [Newsroom](https://www.paloaltonetworks.com/company/newsroom?ts=markdown) ## Popular Links * [Blog](https://www.paloaltonetworks.com/blog/?ts=markdown) * [Communities](https://www.paloaltonetworks.com/communities?ts=markdown) * [Content Library](https://www.paloaltonetworks.com/resources?ts=markdown) * [Cyberpedia](https://www.paloaltonetworks.com/cyberpedia?ts=markdown) * [Event Center](https://events.paloaltonetworks.com/) * [Manage Email Preferences](https://start.paloaltonetworks.com/preference-center) * [Products A-Z](https://www.paloaltonetworks.com/products/products-a-z?ts=markdown) * [Product Certifications](https://www.paloaltonetworks.com/legal-notices/trust-center/compliance?ts=markdown) * [Report a Vulnerability](https://www.paloaltonetworks.com/security-disclosure?ts=markdown) * [Sitemap](https://www.paloaltonetworks.com/sitemap?ts=markdown) * [Tech Docs](https://docs.paloaltonetworks.com/) * [Unit 42](https://unit42.paloaltonetworks.com/) * [Do Not Sell or Share My Personal Information](https://panwedd.exterro.net/portal/dsar.htm?target=panwedd) ![PAN logo](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/pan-logo-dark.svg) * [Privacy](https://www.paloaltonetworks.com/legal-notices/privacy?ts=markdown) * [Trust Center](https://www.paloaltonetworks.com/legal-notices/trust-center?ts=markdown) * [Terms of Use](https://www.paloaltonetworks.com/legal-notices/terms-of-use?ts=markdown) * [Documents](https://www.paloaltonetworks.com/legal?ts=markdown) Copyright © 2026 Palo Alto Networks. All Rights Reserved * [![Youtube](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/social/youtube-black.svg)](https://www.youtube.com/user/paloaltonetworks) * [![Podcast](https://www.paloaltonetworks.com/content/dam/pan/en_US/images/icons/podcast.svg)](https://www.paloaltonetworks.com/podcasts/threat-vector?ts=markdown) * [![Facebook](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/social/facebook-black.svg)](https://www.facebook.com/PaloAltoNetworks/) * [![LinkedIn](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/social/linkedin-black.svg)](https://www.linkedin.com/company/palo-alto-networks) * [![Twitter](https://www.paloaltonetworks.com/etc/clientlibs/clean/imgs/social/twitter-x-black.svg)](https://twitter.com/PaloAltoNtwks) * EN Select your language