{"id":298905,"date":"2023-07-20T08:00:58","date_gmt":"2023-07-20T15:00:58","guid":{"rendered":"https:\/\/www.paloaltonetworks.com\/blog\/?p=298905"},"modified":"2023-07-20T13:17:33","modified_gmt":"2023-07-20T20:17:33","slug":"llm-in-the-cloud","status":"publish","type":"post","link":"https:\/\/www2.paloaltonetworks.com\/blog\/2023\/07\/llm-in-the-cloud\/","title":{"rendered":"LLM in the Cloud \u2014 Advantages and Risks"},"content":{"rendered":"<h2><a id=\"post-298905-_n1qkdqathrt0\"><\/a>LLM and Cloud Security<\/h2>\n<p>Let\u2019s explore the relationship between LLMs and cloud security, discussing how these advanced models can be dangerous, as well as how they can be leveraged to improve the overall security posture of cloud-based systems. Simply put, a large language model (LLM) is an artificial intelligence program designed to understand and generate human language. It is trained on vast amounts of text data from the internet, learning grammar, facts and reasoning abilities. With this knowledge, an LLM can answer questions, generate text and even hold a conversation with users. Examples of LLMs include OpenAI's ChatGPT, Google\u2019s Bard and Microsoft's new Bing search engine.<\/p>\n<p>As cloud computing continues to dominate the technology landscape, it has become more important than ever to ensure robust security for the services and data residing in the cloud. The development of large language models (LLMs) has shown great promise in enhancing cloud security.<\/p>\n<h2><a id=\"post-298905-_m8ibnjcsfhcn\"><\/a>Risks of LLM<\/h2>\n<p>As revolutionary as LLM technology may be, it is still in its infancy, and there are known issues and limitations that AI researchers have yet to overcome. These issues may be showstoppers for some applications. And, like any tool accessible to the public, LLMs can be used for benign as well as malign purposes. 
While generative AI can produce helpful and accurate content for society, it can also create misinformation that misleads consumers.<\/p>\n<h2><a id=\"post-298905-_g5v8ofm1ujsy\"><\/a>Risky Characteristics<\/h2>\n<h4><a id=\"post-298905-_samjprfhlghs\"><\/a>Hallucination<\/h4>\n<p>An LLM may generate output that is not grounded in the input context or the model\u2019s knowledge. This means the language model generates text that is not logically consistent with the input, or that is semantically incorrect but still sounds plausible to a human reader.<\/p>\n<h4><a id=\"post-298905-_sogv7of8ui4p\"><\/a>Bias<\/h4>\n<p>Most LLM applications rely on pretrained models because creating a model from scratch is too expensive for most organizations. However, there is no perfectly balanced training data, and thus every model will always be biased in certain aspects. For example, the training data may contain more English texts than Chinese texts or more knowledge about liberalism than conservatism. When humans rely on the recommendations from these models, their biases can result in unfair or discriminatory decisions.<\/p>\n<h4><a id=\"post-298905-_hu7ya565nghy\"><\/a>Consistency<\/h4>\n<p>An LLM may not always generate the same outputs given the same inputs. Under the hood, LLMs are probabilistic models that continue to predict the next word based on certain probability distributions.<\/p>\n<h4><a id=\"post-298905-_flgfci9i9pcr\"><\/a>Filter Bypass<\/h4>\n<p>LLM tools are typically built with security filters to prevent the models from generating unwanted content, such as adult, violent or proprietary content. Such filters, however, can sometimes be bypassed by manipulating the inputs (e.g., prompt injection). 
Researchers have demonstrated various <a href=\"https:\/\/www.jailbreakchat.com\/\" rel=\"nofollow,noopener\" >techniques<\/a> to successfully instruct ChatGPT to generate offensive texts or make ungrounded predictions.<\/p>\n<h4><a id=\"post-298905-_nt1yjyem9svv\"><\/a>Data Privacy<\/h4>\n<p>By design, an LLM can only take unencrypted inputs and generate unencrypted outputs. When a proprietary LLM is offered as a service, as OpenAI\u2019s models are, the service provider accumulates a large amount of sensitive or classified information. The outcome of a data breach incident can be catastrophic, as seen in the recent <a href=\"https:\/\/www.securityweek.com\/openai-patches-account-takeover-vulnerabilities-in-chatgpt\/\" rel=\"nofollow,noopener\" >account takeover<\/a> and <a href=\"https:\/\/www.bleepingcomputer.com\/news\/security\/openai-chatgpt-payment-data-leak-caused-by-open-source-bug\/?mc_cid=0abe1de3f3&amp;mc_eid=a48de14a58\" rel=\"nofollow,noopener\" >leaked queries<\/a> incidents.<\/p>\n<h2><a id=\"post-298905-_ekidkjr7kay1\"><\/a>Malicious Usages<\/h2>\n<h4><a id=\"post-298905-_28wve129oyl9\"><\/a>Misinformation and Disinformation<\/h4>\n<p>With their advanced language generation capabilities, LLMs can create convincing but false content. This contributes to the spread of fake news, conspiracy theories or malicious narratives.<\/p>\n<h4><a id=\"post-298905-_tazksguscle6\"><\/a>Social Engineering Attacks<\/h4>\n<p>Malicious actors can weaponize LLMs to create sophisticated social engineering attacks, such as spear phishing emails and deepfake content.<\/p>\n<h4><a id=\"post-298905-_tewypz9a9ali\"><\/a>Intellectual Property Infringement<\/h4>\n<p>LLMs can be used to generate content that closely resembles copyrighted or proprietary material. 
This poses a risk to organizations that rely on intellectual property to maintain a competitive advantage.<\/p>\n<h4><a id=\"post-298905-_oo37x7w2unz7\"><\/a>Offensive Tools Creation<\/h4>\n<p>Generative AI has been used for auditing source code and writing new code. Researchers have demonstrated that it can also write malicious code like <a href=\"https:\/\/www.malwarebytes.com\/blog\/news\/2023\/03\/chatgpt-happy-to-write-ransomware-just-really-bad-at-it\" rel=\"nofollow,noopener\" >ransomware<\/a>. There are also <a href=\"https:\/\/arstechnica.com\/information-technology\/2023\/01\/chatgpt-is-enabling-script-kiddies-to-write-functional-malware\/\" rel=\"nofollow,noopener\" >reports<\/a> showing that cybercriminals use ChatGPT to create offensive scripts.<\/p>\n<h2><a id=\"post-298905-_eol2a9gsr0nx\"><\/a>LLM Use Cases in Cloud Security<\/h2>\n<p>If used correctly, however, LLMs can also be leveraged to improve cloud security.<\/p>\n<h4><a id=\"post-298905-_rw206qgfmloh\"><\/a>Automating Threat Detection and Response<\/h4>\n<p>One of the most significant benefits of LLMs in the context of cloud security is their ability to streamline threat detection and response processes. By incorporating natural language understanding and machine learning, LLMs can identify potential threats hidden in large volumes of data and user behavior patterns. By continuously learning from new data, LLMs can adapt to emerging threats and provide real-time threat information, enabling organizations to respond quickly and efficiently to security incidents.<\/p>\n<h4><a id=\"post-298905-_i0k6lnigzv5y\"><\/a>Enhancing Security Compliance<\/h4>\n<p>As regulatory frameworks continue to evolve, organizations face the challenge of <em>maintaining compliance<\/em> with various security standards and requirements. LLMs can be used to analyze and interpret regulatory texts, allowing organizations to more easily understand and implement the necessary security controls. 
By automating compliance management, LLMs can significantly reduce the burden on security teams and enable them to focus on other critical tasks.<\/p>\n<p>This is extremely relevant to compliance-heavy products, such as Prisma Cloud, and even more relevant when the customer managing the product is trying to comply with certain regulations.<\/p>\n<h4><a id=\"post-298905-_no80vqf9unv5\"><\/a>Social Engineering Attack Prevention<\/h4>\n<p>Social engineering attacks, such as phishing and pretexting, are among the most prevalent threats to cloud security. By utilizing LLMs to analyze communication patterns and identify potential threats, organizations can proactively detect and block social engineering attacks. With advanced language understanding capabilities, LLMs can discern the subtle differences between legitimate and malicious communications, providing an additional layer of protection for cloud-based systems.<\/p>\n<h4><a id=\"post-298905-_ibqemf9zt9gz\"><\/a>Improving Incident Response Communication<\/h4>\n<p>Effective communication is a critical aspect of incident response in cloud security. LLMs can be used to generate accurate and timely reports, making it easier for security teams to understand the nature of incidents and coordinate their response efforts. Additionally, LLMs can be employed to create clear and concise communications with stakeholders, helping organizations manage the reputational risks associated with security breaches.<\/p>\n<h2><a id=\"post-298905-_yahxlvxozd9o\"><\/a>Prisma Cloud and AI<\/h2>\n<p>LLM, AI and ML aren\u2019t strangers to Prisma Cloud. We are currently leveraging those technologies to improve our customers\u2019 cloud security in several ways. For example, Prisma Cloud provides a rich set of machine-learning-based UEBA anomaly policies to help customers <a href=\"https:\/\/www.paloaltonetworks.com\/blog\/prisma-cloud\/threat-detection-using-tor-networks\/\">identify attacks launched against their cloud environments<\/a>. 
The <a href=\"https:\/\/docs.paloaltonetworks.com\/prisma\/prisma-cloud\/prisma-cloud-admin\/prisma-cloud-policies\/anomaly-policies\">policies continuously inspect<\/a> the event logs generated from the activity of subjects in each environment and look for any malicious activity.<\/p>\n<figure id=\"attachment_298936\" aria-describedby=\"caption-attachment-298936\" style=\"width: 1288px\" class=\"wp-caption alignnone\"><div style=\"max-width:100%\" data-width=\"1288\"><span class=\"ar-custom\" style=\"padding-bottom:68.94%;\"><img loading=\"lazy\" decoding=\"async\"  class=\"wp-image-298936 lozad\"  data-src=\"https:\/\/www.paloaltonetworks.com\/blog\/wp-content\/uploads\/2023\/07\/word-image-298905-1.png\" alt=\"List of Prisma Cloud anomalies by policy name, policy type and severity.\" width=\"1288\" height=\"888\" \/><\/span><\/div><figcaption id=\"caption-attachment-298936\" class=\"wp-caption-text\">Some Prisma Cloud Anomalies<\/figcaption><\/figure>\n<p>Prisma Cloud is committed to being at the forefront of technological advancements, enabling us to anticipate and proactively address emerging threats and risks in the era of generative AI. We persistently leverage the power of AI to streamline security operations, identify novel threats and efficiently close security gaps. Recognizing the limitations and risks of generative AI, we will proceed with utmost caution and prioritize our customers' security and privacy.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The development of large language models (LLMs) has shown great promise in enhancing cloud security. 
<\/p>\n","protected":false},"author":677,"featured_media":298922,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[6719,6724],"tags":[6613,109,9491,9294],"coauthors":[8326,7010],"class_list":["post-298905","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-company-culture","category-points-of-view","tag-ai","tag-cloud","tag-llm","tag-ml-2","cloud_sec_category-cloud-security","cloud_sec_category-code-security"],"jetpack_featured_media_url":"https:\/\/www2.paloaltonetworks.com\/blog\/wp-content\/uploads\/2023\/07\/NetSec-Adhoc-Updated-Blog-Image-Resize-508484039-1.png","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/www2.paloaltonetworks.com\/blog\/wp-json\/wp\/v2\/posts\/298905","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www2.paloaltonetworks.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www2.paloaltonetworks.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www2.paloaltonetworks.com\/blog\/wp-json\/wp\/v2\/users\/677"}],"replies":[{"embeddable":true,"href":"https:\/\/www2.paloaltonetworks.com\/blog\/wp-json\/wp\/v2\/comments?post=298905"}],"version-history":[{"count":5,"href":"https:\/\/www2.paloaltonetworks.com\/blog\/wp-json\/wp\/v2\/posts\/298905\/revisions"}],"predecessor-version":[{"id":299028,"href":"https:\/\/www2.paloaltonetworks.com\/blog\/wp-json\/wp\/v2\/posts\/298905\/revisions\/299028"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www2.paloaltonetworks.com\/blog\/wp-json\/wp\/v2\/media\/298922"}],"wp:attachment":[{"href":"https:\/\/www2.paloaltonetworks.com\/blog\/wp-json\/wp\/v2\/media?parent=298905"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www2.paloaltonetworks.com\/blog\/wp-json\/wp\/v2\/categories?post=298905"},{"taxonomy":"post_tag
","embeddable":true,"href":"https:\/\/www2.paloaltonetworks.com\/blog\/wp-json\/wp\/v2\/tags?post=298905"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www2.paloaltonetworks.com\/blog\/wp-json\/wp\/v2\/coauthors?post=298905"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}