{"id":333700,"date":"2025-01-30T17:09:23","date_gmt":"2025-01-31T01:09:23","guid":{"rendered":"https:\/\/www.paloaltonetworks.com\/blog\/?p=333700"},"modified":"2025-02-04T13:00:40","modified_gmt":"2025-02-04T21:00:40","slug":"deepseek-rise-shows-ai-security-remains-moving-target","status":"publish","type":"post","link":"https:\/\/www2.paloaltonetworks.com\/blog\/2025\/01\/deepseek-rise-shows-ai-security-remains-moving-target\/","title":{"rendered":"DeepSeek\u2019s Rise Shows AI Security Remains a Moving Target"},"content":{"rendered":"<p>If you\u2019ve been following tech news in the last few days, you\u2019ve heard of <a href=\"https:\/\/news.sky.com\/story\/what-is-deepseek-the-low-cost-chinese-ai-firm-that-has-turned-the-tech-world-upside-down-13298039\" rel=\"nofollow,noopener\" >DeepSeek<\/a>. This large language model (LLM) is threatening to disrupt current AI market leaders and fundamentally change the economics of AI-powered applications.<\/p>\n<p>Released by a 200-person Chinese startup, the model appears as capable as state-of-the-art tools offered by OpenAI and Google with the benefit of being significantly faster and less expensive to run. What\u2019s more, DeepSeek has been released to open source and is lightweight enough to run on commodity hardware \u2013 any developer can start tinkering with it without having to access costly GPUs. DeepSeek has been heralded as a \u201c<a href=\"https:\/\/www.npr.org\/2025\/01\/28\/g-s1-45061\/deepseek-did-a-little-known-chinese-startup-cause-a-sputnik-moment-for-ai\" rel=\"nofollow,noopener\" >Sputnik moment\u201d for AI<\/a> and has sent shockwaves through financial markets.<\/p>\n<p>Palo Alto Networks Unit 42 also uncovered concerning <a href=\"https:\/\/thecyberwire.com\/podcasts\/threat-vector\/901\/notes\" rel=\"nofollow,noopener\" >vulnerabilities in DeepSeek<\/a>, revealing that it can be easily jailbroken to produce nefarious content with little to no specialized knowledge or expertise. 
Unit 42 researchers recently uncovered two novel and effective jailbreaking techniques, <a href=\"https:\/\/unit42.paloaltonetworks.com\/jailbreak-llms-through-camouflage-distraction\/\">Deceptive Delight<\/a> and <a href=\"https:\/\/unit42.paloaltonetworks.com\/multi-turn-technique-jailbreaks-llms\/\">Bad Likert Judge<\/a>. Given their success against other LLMs, Unit 42 tested these two <a href=\"https:\/\/unit42.paloaltonetworks.com\/jailbreaking-deepseek-three-techniques\/\">jailbreaks and another multistage jailbreaking<\/a> technique, called <a href=\"https:\/\/crescendo-the-multiturn-jailbreak.github.io\/\" rel=\"nofollow,noopener\" >Crescendo<\/a>, against DeepSeek models, finding high success rates in bypassing safeguards. These techniques elicited explicit guidance on malicious activities, including keylogger creation, data exfiltration and even instructions for incendiary devices, demonstrating tangible security risks.<\/p>\n<h2><a id=\"post-333700-_fk86n8sxm1b0\"><\/a>What Does All This Mean for Security Leaders Like You?<\/h2>\n<p>Every organization will have its own policies about new AI models. Some will ban them completely; others will allow limited, experimental and heavily guardrailed use. Still others will rush to deploy them in production, looking to eke out that extra bit of performance and cost optimization.<\/p>\n<p>But beyond your organization\u2019s need to decide on a specific new model, DeepSeek\u2019s rise offers several lessons about AI security in 2025. AI\u2019s pace of change, and the surrounding sense of urgency, can\u2019t be compared to other technologies. How can you plan ahead when a somewhat obscure model (and the more than 500 derivatives already available on Hugging Face) becomes the number-one priority seemingly out of nowhere?<\/p>\n<p>The short answer: you can\u2019t. AI security remains a moving target and is going to stay that way for a while. And things are changing quickly. 
DeepSeek isn\u2019t going to be the last model that catches the world by surprise. It will take time before AI technologies are fully understood and clear leaders emerge. Until then, you have no choice but to expect the unexpected.<\/p>\n<p>Organizations can switch LLMs at little to no cost, which allows development teams to move quickly. Replacing software that relies on OpenAI, Google or Anthropic\u2019s models with DeepSeek (or whatever model comes out tomorrow) usually requires updating just a few lines of code. The temptation for product builders is huge: test the new model and see whether it can solve a cost issue, remove a latency bottleneck or outperform on a specific task. And if the model turns out to be the missing piece that helps bring a potentially game-changing product to market, you don\u2019t want to be the one who stands in the way.<\/p>\n<h2><a id=\"post-333700-_ujrbp617fmeu\"><\/a>Secure AI by Design<\/h2>\n<p>While it can be challenging to guarantee complete protection against all adversarial techniques for a specific LLM, organizations can implement security measures that help monitor when and how employees are using LLMs. This becomes crucial when employees are using unauthorized third-party LLMs. That\u2019s the goal we had in mind when we <a href=\"https:\/\/www.paloaltonetworks.com\/company\/press\/2024\/palo-alto-networks-launches-new-security-solutions-infused-with-precision-ai-to-defend-against-advanced-threats-and-safeguard-ai-adoption\">rolled out<\/a> our <a href=\"https:\/\/www.paloaltonetworks.com\/precision-ai-security\/secure-ai-by-design\">Secure AI by Design<\/a> portfolio last year. 
It was all about empowering organizations to let their employees safely adopt AI tools and deploy enterprise AI applications.<\/p>\n<p>To do this, we developed <a href=\"https:\/\/www.paloaltonetworks.com\/network-security\/ai-access-security\">AI Access Security<\/a> to help our customers enable secure employee use of third-party AI tools:<\/p>\n<ul>\n<li>Gain real-time visibility into what GenAI applications are being used across an organization and by whom.<\/li>\n<li>Streamline access controls with the ability to block unsanctioned apps, apply InfoSec policies, and protect against threats.<\/li>\n<li>Protect sensitive data from unauthorized access and leakage to risky GenAI applications and AI training models.<\/li>\n<\/ul>\n<p>We built AI Runtime Security and AI-SPM to enable organizations to secure their own AI-powered applications:<\/p>\n<ul>\n<li>Understand what AI assets are in the environment and where they are located, as well as which models and data sources they are connected to, and who has access to this ecosystem.<\/li>\n<li>Protect all AI apps and models against supply chain, configuration and runtime risks.<\/li>\n<li>Secure the data within applications and models from leaks, whether through intentional actions or inadvertent exposures.<\/li>\n<\/ul>\n<h2><a id=\"post-333700-_y0k7xau23nys\"><\/a>Great Technology Meets Good Governance<\/h2>\n<p>In our whitepaper <a href=\"https:\/\/www.paloaltonetworks.com\/resources\/whitepapers\/ai-governance\">Establishing AI Governance for AI-Powered Applications<\/a>, we suggest that organizations adopt frameworks for <strong>visibility and control<\/strong> over AI usage \u2013 from model training to application deployment.<\/p>\n<h4><a id=\"post-333700-_85jg6h75f2rj\"><\/a>Four Steps to Manage Risks Related to New LLMs<\/h4>\n<p><strong>1. 
Create centralized visibility into AI model usage in the organization \u2013<\/strong> Setting controls at the model provider level will quickly become a game of whack-a-mole and is impossible to do with open-source models. Instead, look to establish cross-cloud and cross-organizational visibility into the existing model inventory, with systems in place to monitor when new models are deployed. Sign up for a <a href=\"https:\/\/start.paloaltonetworks.com\/ai-runtime-security-demo.html\">free demo of AI Runtime Security<\/a> and learn how you can perform a no-cost, risk-free AI discovery.<\/p>\n<p><strong>2. Maintain clear policies regarding sanctioned and unsanctioned models<\/strong> <strong>\u2013<\/strong> While every organization will have a different tolerance for risk when it comes to new technologies, it\u2019s best not to make these decisions on a completely ad hoc basis. Having a well-established process for vetting, evaluating and approving new models will prove useful in these situations.<\/p>\n<p><strong>3. Decide on relevant guardrails for models in production<\/strong> <strong>\u2013<\/strong> Again, there\u2019s a question of risk tolerance here, but the decision should be made in advance. Guardrails can be applied on model input and\/or model output. AI Runtime Security allows you to inspect and block AI-specific threats, such as prompt injections, malicious URLs and insecure data outputs.<\/p>\n<p><strong>4. Reduce opportunities for data exfiltration \u2013<\/strong> Once models have access to sensitive data (for training or inference purposes), you\u2019re playing a much higher stakes game. Adopting a unified security perspective that gives you a single view into data, potential attack paths, code and applications will help you understand where sensitive data is at risk. 
AI Runtime Security can detect thousands of predefined data patterns to secure against data exfiltration through intentional actions or inadvertent exposures.<\/p>\n<p><div style=\"max-width:100%\" data-width=\"762\"><span class=\"ar-custom\" style=\"padding-bottom:102.62%;\"><img loading=\"lazy\" decoding=\"async\"  class=\"aligncenter wp-image-333701 lozad\"  data-src=\"https:\/\/www.paloaltonetworks.com\/blog\/wp-content\/uploads\/2025\/01\/word-image-333700-1.png\" alt=\"Visibility on models, data, use cases, access, compliance.\" width=\"762\" height=\"782\" srcset=\"https:\/\/www2.paloaltonetworks.com\/blog\/wp-content\/uploads\/2025\/01\/word-image-333700-1.png 762w, https:\/\/www2.paloaltonetworks.com\/blog\/wp-content\/uploads\/2025\/01\/word-image-333700-1-230x236.png 230w, https:\/\/www2.paloaltonetworks.com\/blog\/wp-content\/uploads\/2025\/01\/word-image-333700-1-500x513.png 500w, https:\/\/www2.paloaltonetworks.com\/blog\/wp-content\/uploads\/2025\/01\/word-image-333700-1-292x300.png 292w, https:\/\/www2.paloaltonetworks.com\/blog\/wp-content\/uploads\/2025\/01\/word-image-333700-1-39x40.png 39w\" sizes=\"auto, (max-width: 762px) 100vw, 762px\" \/><\/span><\/div><\/p>\n<h2><a id=\"post-333700-_apqb3qxlbmx\"><\/a>A Wake-Up Call<\/h2>\n<p>The public launch of DeepSeek has been a wake-up call for financial markets and industries across the board looking to adopt AI. If this has been a wake-up call for your organization, let us help you navigate the road ahead.<\/p>\n<p>Want to know what AI applications are in your cloud environments and how you can secure them? 
Sign up for a <a href=\"https:\/\/start.paloaltonetworks.com\/ai-runtime-security-demo.html\">free demo of AI Runtime Security<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>DeepSeek is an LLM threatening to disrupt current AI market leaders and fundamentally change the economics of AI-powered applications.<\/p>\n","protected":false},"author":723,"featured_media":333727,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[9943,6719,483],"tags":[9992,10074],"coauthors":[7076],"class_list":["post-333700","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-security","category-company-culture","category-unit42","tag-ai-security-2","tag-deepseek"],"jetpack_featured_media_url":"https:\/\/www2.paloaltonetworks.com\/blog\/wp-content\/uploads\/2025\/01\/AdobeStock_640765504-1-1.jpeg","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/www2.paloaltonetworks.com\/blog\/wp-json\/wp\/v2\/posts\/333700","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www2.paloaltonetworks.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www2.paloaltonetworks.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www2.paloaltonetworks.com\/blog\/wp-json\/wp\/v2\/users\/723"}],"replies":[{"embeddable":true,"href":"https:\/\/www2.paloaltonetworks.com\/blog\/wp-json\/wp\/v2\/comments?post=333700"}],"version-history":[{"count":3,"href":"https:\/\/www2.paloaltonetworks.com\/blog\/wp-json\/wp\/v2\/posts\/333700\/revisions"}],"predecessor-version":[{"id":333742,"href":"https:\/\/www2.paloaltonetworks.com\/blog\/wp-json\/wp\/v2\/posts\/333700\/revisions\/333742"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www2.paloaltonetworks.com\/blog\/wp-json\/wp\/v2\/media\/333727"}],"wp:attachment":[{"href":"https:\/\/www2.paloaltonet
works.com\/blog\/wp-json\/wp\/v2\/media?parent=333700"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www2.paloaltonetworks.com\/blog\/wp-json\/wp\/v2\/categories?post=333700"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www2.paloaltonetworks.com\/blog\/wp-json\/wp\/v2\/tags?post=333700"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www2.paloaltonetworks.com\/blog\/wp-json\/wp\/v2\/coauthors?post=333700"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}