# DeepSeek's Rise Shows AI Security Remains a Moving Target

By [Anand Oswal](https://www.paloaltonetworks.com/blog/author/anand-oswal/?ts=markdown "Posts by Anand Oswal") | Jan 30, 2025 | 6 minutes

Categories: [AI Security](https://www.paloaltonetworks.com/blog/category/ai-security/?ts=markdown), [Company \& Culture](https://www.paloaltonetworks.com/blog/category/company-culture/?ts=markdown), [Unit 42](https://unit42-dev2.paloaltonetworks.com)
Tags: [AI Security](https://www.paloaltonetworks.com/blog/tag/ai-security-2/?ts=markdown), [DeepSeek](https://www.paloaltonetworks.com/blog/tag/deepseek/?ts=markdown)

If you've been following tech news in the last few days, you've heard of [DeepSeek](https://news.sky.com/story/what-is-deepseek-the-low-cost-chinese-ai-firm-that-has-turned-the-tech-world-upside-down-13298039).
This large language model (LLM) threatens to disrupt the current AI market leaders and fundamentally change the economics of AI-powered applications. Released by a 200-person Chinese startup, the model appears as capable as state-of-the-art tools offered by OpenAI and Google, with the benefit of being significantly faster and less expensive to run. What's more, DeepSeek has been released as open source and is lightweight enough to run on commodity hardware -- any developer can start tinkering with it without having to access costly GPUs. DeepSeek has been heralded as a ["Sputnik moment" for AI](https://www.npr.org/2025/01/28/g-s1-45061/deepseek-did-a-little-known-chinese-startup-cause-a-sputnik-moment-for-ai) and has sent shockwaves through financial markets.

Palo Alto Networks Unit 42 also uncovered concerning [vulnerabilities in DeepSeek](https://thecyberwire.com/podcasts/threat-vector/901/notes), revealing that it can easily be jailbroken to produce nefarious content with little to no specialized knowledge or expertise. Unit 42 researchers recently uncovered two novel and effective jailbreaking techniques, [Deceptive Delight](https://unit42.paloaltonetworks.com/jailbreak-llms-through-camouflage-distraction/) and [Bad Likert Judge](https://unit42.paloaltonetworks.com/multi-turn-technique-jailbreaks-llms/). Given their success against other LLMs, Unit 42 tested these two [jailbreaks, along with a third multistage jailbreaking](https://unit42.paloaltonetworks.com/jailbreaking-deepseek-three-techniques/) technique called [Crescendo](https://crescendo-the-multiturn-jailbreak.github.io/), against DeepSeek models, finding high success rates in bypassing safeguards. These techniques elicited explicit guidance for malicious activities, including keylogger creation, data exfiltration and even instructions for building incendiary devices, demonstrating tangible security risks.

## What Does All This Mean for Security Leaders Like You?
Every organization will have its own policies about new AI models. Some will ban them completely; others will allow limited, experimental and heavily guardrailed use. Still others will rush to deploy them in production, looking to eke out that extra bit of performance and cost optimization. But beyond your organization's need to make a decision about this specific model, DeepSeek's rise offers several lessons about AI security in 2025.

AI's pace of change, and the surrounding sense of urgency, can't be compared to that of other technologies. How can you plan ahead when a somewhat obscure model (and the more than 500 derivatives already available on Hugging Face) becomes the number-one priority seemingly out of nowhere? The short answer: you can't. AI security remains a moving target and is going to stay that way for a while.

And things are changing quickly. DeepSeek isn't going to be the last model that catches the world by surprise. It will take time before AI technologies are fully understood and clear leaders emerge. Until then, you have no choice but to expect the unexpected.

Organizations can switch LLMs at little to no cost, which allows development teams to move quickly. Replacing software that relies on models from OpenAI, Google or Anthropic with DeepSeek (or whatever model comes out tomorrow) usually requires updating just a few lines of code. The temptation for product builders to test the new model to see if it can solve a cost issue or latency bottleneck, or outperform on a specific task, is huge. And if the model turns out to be the missing piece that helps bring a potentially game-changing product to market, you don't want to be the one who stands in the way.

## Secure AI by Design

While it can be challenging to guarantee complete protection against all adversarial techniques for a specific LLM, organizations can implement security measures that help monitor when and how employees are using LLMs. This becomes crucial when employees are using unauthorized third-party LLMs.
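To see why switching LLMs is so cheap, consider a minimal, hypothetical sketch. It assumes the providers involved expose OpenAI-compatible chat APIs (DeepSeek documents such an endpoint); the endpoint URLs and model names below are illustrative, and a real migration would of course also involve prompt tuning, evaluation and security review.

```python
# Hypothetical provider registry illustrating how little application code an
# LLM swap can touch when providers expose OpenAI-compatible endpoints.
# URLs and model names are illustrative examples, not recommendations.
PROVIDERS = {
    "openai":   {"base_url": "https://api.openai.com/v1", "model": "gpt-4o"},
    "deepseek": {"base_url": "https://api.deepseek.com",  "model": "deepseek-chat"},
}

def client_settings(provider: str) -> dict:
    """Return the settings an OpenAI-style client would be constructed with."""
    if provider not in PROVIDERS:
        raise ValueError(f"unknown provider: {provider}")
    return PROVIDERS[provider]

# Moving the whole application from one vendor to another is, in this
# sketch, a one-word configuration change:
settings = client_settings("deepseek")
```

This near-zero switching cost is exactly why per-vendor controls tend to lag reality: development teams can (and do) change models faster than approval processes can follow.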
Enabling safe AI adoption is the goal we had in mind when we [rolled out](https://www.paloaltonetworks.com/company/press/2024/palo-alto-networks-launches-new-security-solutions-infused-with-precision-ai-to-defend-against-advanced-threats-and-safeguard-ai-adoption) our [Secure AI by Design](https://www.paloaltonetworks.com/precision-ai-security/secure-ai-by-design) portfolio last year: empowering organizations to let their employees safely adopt AI tools and deploy enterprise AI applications.

To do this, we developed [AI Access Security](https://www.paloaltonetworks.com/network-security/ai-access-security) to enable our customers' employees to use third-party AI tools securely:

* Gain real-time visibility into which GenAI applications are being used across an organization, and by whom.
* Streamline access controls with the ability to block unsanctioned apps, apply InfoSec policies and protect against threats.
* Protect sensitive data from unauthorized access and leakage to risky GenAI applications and AI training models.

We built AI Runtime Security and AI-SPM to enable organizations to secure their own AI-powered applications:

* Understand what AI assets are in the environment and where they are located, which models and data sources they are connected to, and who has access to this ecosystem.
* Protect apps and models against supply chain, configuration and runtime risks.
* Secure the data within applications and models from leaks, whether through intentional actions or inadvertent exposures.

## Great Technology Meets Good Governance

In our whitepaper [Establishing AI Governance for AI-Powered Applications](https://www.paloaltonetworks.com/resources/whitepapers/ai-governance), we suggest that organizations adopt frameworks for **visibility and control** over AI usage -- from model training to application deployment.

#### Four Steps to Manage Risks Related to New LLMs

**1. Create centralized visibility into AI model usage in the organization --** Setting controls at the model provider level will quickly become a game of whack-a-mole and is impossible with open-source models. Instead, establish cross-cloud and cross-organizational visibility into the existing model inventory, with systems in place to detect when new models are deployed. Sign up for a [free demo of AI Runtime Security](https://start.paloaltonetworks.com/ai-runtime-security-demo.html) and learn how you can perform a no-cost, risk-free AI discovery.

**2. Maintain clear policies regarding sanctioned and unsanctioned models --** While every organization will have a different tolerance for risk when it comes to new technologies, it's best not to make these decisions on a completely ad hoc basis. Having a well-established process for vetting, evaluating and approving new models will prove useful in these situations.

**3. Decide on relevant guardrails for models in production --** Again, there's a question of risk tolerance here, but the decision should be made in advance. Guardrails can be applied to model input and/or model output. AI Runtime Security allows you to inspect and block AI-specific threats, such as prompt injections, malicious URLs and insecure data outputs.

**4. Reduce opportunities for data exfiltration --** Once models have access to sensitive data (for training or inference purposes), you're playing a much higher-stakes game. Adopting a unified security perspective that gives you a single view into data, potential attack paths, code and applications will help you understand where sensitive data is at risk. AI Runtime Security can detect thousands of predefined data patterns to secure against data exfiltration through intentional actions or inadvertent exposures.
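Steps 3 and 4 can be illustrated with a toy guardrail sketch. This is not how AI Runtime Security works internally (its detections are proprietary and far broader than two regexes); it is a minimal example, under the assumption of simple pattern matching, of an output-side guardrail that blocks responses containing sensitive data.

```python
import re

# Toy sensitive-data patterns. A production system would use a far larger,
# curated library of patterns (the post mentions thousands) plus ML detectors.
PATTERNS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
}

def scan(text: str) -> list[str]:
    """Return the names of all sensitive-data patterns found in the text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

def guard_model_output(output: str) -> str:
    """Output-side guardrail: block a model response that leaks sensitive data."""
    findings = scan(output)
    if findings:
        return "[response blocked: matched patterns " + ", ".join(findings) + "]"
    return output
```

The same `scan` function could run on model input as well, for example to stop employees from pasting sensitive records into an unsanctioned chatbot -- which is why deciding input versus output placement of guardrails in advance (step 3) matters.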
![Visibility on models, data, use cases, access, compliance.](https://www.paloaltonetworks.com/blog/wp-content/uploads/2025/01/word-image-333700-1.png)

## A Wake-Up Call

The public launch of DeepSeek has been a wake-up call for financial markets and for industries across the board looking to adopt AI. If it has been a wake-up call for your organization, let us help you navigate the road ahead.

Want to know what AI applications are in your cloud environments and how you can secure them? Sign up for a [free demo of AI Runtime Security](https://start.paloaltonetworks.com/ai-runtime-security-demo.html).