{"id":11528,"date":"2015-12-29T03:00:53","date_gmt":"2015-12-29T11:00:53","guid":{"rendered":"https:\/\/www.paloaltonetworks.com\/blog\/?p=11528"},"modified":"2016-02-11T11:44:56","modified_gmt":"2016-02-11T19:44:56","slug":"is-the-end-the-beginning-where-are-we-in-the-endpoint-journey","status":"publish","type":"post","link":"https:\/\/www2.paloaltonetworks.com\/blog\/2015\/12\/is-the-end-the-beginning-where-are-we-in-the-endpoint-journey\/","title":{"rendered":"Is the End the Beginning: Where Are We in The Endpoint Journey?"},"content":{"rendered":"<p>Over the last 18 months, there has been much discussion of the \u201cnew endpoint\u201d; and, whilst no one wants to be the first to, I suspect 2016 will be the year many start to make some level of endpoint changes. The question is: which of the many new concepts will become the new endpoint standard, and will it really be new or just a twist on existing techniques and concepts?<\/p>\n<p>Today the threats are complex, often made of multiple facets, and easily tuned so each instance looks unique. It\u2019s not surprising, then, that people talk about the death of antivirus. Yet the reality is that most antivirus relies on a multitude of techniques to discover attacks, in additional to the founding method.<!--more--><\/p>\n<p>In 1991 I started working for Dr. Solomon\u2019s antivirus, which aimed to detect and block attacks, based on the concept of getting a sample and writing a pattern matching rule to block further instances. I remember the founder saying that you can create as many variants as you like but, fundamentally, we have solved the problem; subsequently he sold the company, seeing no further future.<\/p>\n<p>As I walked around RSA 2015, there was a plethora of new endpoint solutions available introducing ever-more-creative techniques, be they various iterations of sandboxing techniques, mathematical analysis, statistical anomaly detection, or new forms of behavioral detection. 
The question today is: which techniques are effective; and is there a clear winner, or is it a blend of the right concepts?<\/p>\n<p>Here are my top tips to consider as you assess what is the right solution for you:<\/p>\n<p><strong>1. Behavior vs. pattern \u2013 <\/strong>Traditional pattern approaches have high accuracy ratios but, with increasingly unique attacks, can be too slow to keep up. Many new solutions look for behaviors at some level. It could be the exploit techniques used, the changes the attack makes to the system, or the communications techniques used. \u00a0The challenge is finding behaviors that are common to the attack and rarely, if ever, seen in other circumstances. What is the acceptable ratio for your business between detection and false alerts? How much data will the solution generate, and how long would it take you to analyze it?<\/p>\n<p><strong>2. Where &amp; how is the analysis done? \u2013 <\/strong>In behavioral analysis, there is a balance between real-time and offline processing. Simple behavioral matches take limited resources; but complex statistical analysis (e.g., looking for behavioral anomalies) takes more computational resources, so it will likely have more impact on the system or be throttled, which may mean it is only near real-time. Sandboxing, typically, is also near real-time, but this will depend on the complexity of the environment it needs to emulate and the resources available to dynamically instantiate the virtual session, gather the indicators of compromise, and convert them into a blocking control. You will need to decide if you want this load on the client or outsourced to a central, dedicated system, be that in the cloud or on-premises. One advantage of a dedicated sandboxing system is that, unlike normal endpoints, you don\u2019t have the same concerns about results being tainted if the end system is compromised.<\/p>\n<p><strong>3. 
What changes and what stays the same (attack attributes) \u2013 <\/strong>Each time an attack is launched, there is typically a blend of constants and parts that are altered to avoid detection. For example, the most commonly changed elements are the actual attack binary (in an aim to avoid pattern matches) and, where used, the structure of the email it\u2019s delivered in. What changes less frequently are the exploits used, the communications with the attacker, and the underlying infrastructure behind these. If you are going to look for behavior, it makes sense to look for what remains constant rather than those attributes that are typically dynamic.<\/p>\n<p><strong>4. Attack lifecycle \u2013 isolated component or big picture<\/strong> \u2013 When antivirus started, the attack was just a single binary file. Today\u2019s attacks can include a multitude of components, which together make up the lifecycle of the attack. All too often we look for specific attributes in isolation, which leads to high false positives (imagine trying to uniquely identify someone using only the fact that the person has brown hair). Much like a photofit, the more we can join the different aspects together, the more accurate the result typically is. As such, when looking for your next endpoint, you need to consider not only whether it looks for different aspects of the lifecycle but, critically, whether it examines these in a correlated, single-pass, automated process; otherwise, all you create is a huge volume of false positives.<\/p>\n<p><strong>5. Usability \u2013 automated or heavy human input (setup and ongoing)<\/strong> \u2013 This is the often-overlooked but most critical aspect. The skills shortage is the biggest limiting factor putting us behind the attacker. Attacks are typically limited only by CPU and network speed; yet, if our security requires human input, we will always be too slow. 
When considering any new endpoint, there are three aspects to weigh.<\/p>\n<ul>\n<li>How many man-hours of effort are required to deploy the solution and tune it to a usable state?<\/li>\n<li>What are the typical man-hours required to get actionable events from the solution?<\/li>\n<li>How much human effort is required to take the action and apply it across your business to all the solutions where it should be applied?<\/li>\n<\/ul>\n<p>Lab tests all too often fail to identify this; and, whilst the product may do \u201cas stated on the tin,\u201d in practical terms \u2013 when deployed en masse \u2013 it can become unusable.<\/p>\n<p><strong>6. Stand-alone versus platform <\/strong>\u2013 The more we automate, the better we keep pace with the attacker. Any new endpoint needs to natively integrate with your other security solutions. As one component discovers a threat, as many of your existing solutions as possible should be able to dynamically benefit from this, and vice versa. Intelligence is the glue linking consolidated actions; but it must be timely, machine-readable, and actionable with confidence. Otherwise, we reinsert human processes that don\u2019t scale. How many solutions will be able to interoperate with your new endpoint as a common, automated platform?<\/p>\n<p><strong>7. Measure of success<\/strong> \u2013 Before you consider any change, you need to recognize there is a gap in your current capabilities. How do you qualify this? You could look at the number of detections you missed, but that depends on you having found everything your existing controls missed. For me, one of the truest measures is time to detect. Today we too often find attacks too slowly. Time to detect is a measure we can both test and monitor on an ongoing basis. It also allows us to qualify what resources are required to meet such an ongoing objective. Ultimately this will dictate when the right time is for you to evolve your endpoint strategy.<\/p>\n<p><strong>8. 
Replacement or complement?<\/strong> \u2013 It\u2019s easy to say antivirus is no longer viable, but the reality is that, typically, we don\u2019t use just antivirus. Typically, we leverage consolidated endpoint security suites that can include, for example, DLP, firewall, and encryption, with antivirus at the core, making it harder to remove just the antivirus component. Likewise, there are millions of known threats still in the wild today that antivirus does block every day, either via known patterns or the behavioral capabilities most include. As much as we want to reduce the load security adds to the endpoint, the decision we each must make is whether new endpoints are a replacement or a complement. I suspect, for many, the goal is to replace, but this will be a phased transition where, initially, both may run in parallel whilst confidence in the new solution is built, with the long-term goal being to remove capabilities that don\u2019t meet your measures of success.<\/p>\n<p>Today there are pressures both from our own endpoint security deficiencies and from impending new legislation in the EU pointing towards state-of-the-art capabilities. <a href=\"https:\/\/en.wikipedia.org\/wiki\/State_of_the_art\" rel=\"nofollow,noopener\"  target=\"_blank\">Wikipedia defines this as<\/a>\u00a0\"<em>the highest level of general development, as of a device, technique, or scientific field achieved at a particular time.\"<\/em><\/p>\n<p>Antivirus has been around for close to three decades, in which time there has been a lot of innovation in new techniques to detect and block attacks, some of which started as add-ons to antivirus. As the attack has evolved, many of these concepts have spun out to become solutions in their own right. Each organization must consider just how much regard it needs to give to leveraging state-of-the-art endpoint capabilities. The scope of endpoint solutions available has never been broader. 
The decision each business must make in 2016 is: which endpoint techniques provide the capabilities to protect against today\u2019s and future attacks?<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Over the last 18 months, there has been much discussion of the \u201cnew endpoint\u201d; and, whilst no one wants to be the first to move, I suspect 2016 will be the year many &hellip;<\/p>\n","protected":false},"author":150,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[1766,598],"tags":[1463,1655,509,1815],"coauthors":[1466],"class_list":["post-11528","post","type-post","status-publish","format-standard","hentry","category-cso-perspective","category-endpoint-2","tag-antivirus","tag-dlp","tag-encryption","tag-firewall"],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/www2.paloaltonetworks.com\/blog\/wp-json\/wp\/v2\/posts\/11528","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www2.paloaltonetworks.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www2.paloaltonetworks.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www2.paloaltonetworks.com\/blog\/wp-json\/wp\/v2\/users\/150"}],"replies":[{"embeddable":true,"href":"https:\/\/www2.paloaltonetworks.com\/blog\/wp-json\/wp\/v2\/comments?post=11528"}],"version-history":[{"count":1,"href":"https:\/\/www2.paloaltonetworks.com\/blog\/wp-json\/wp\/v2\/posts\/11528\/revisions"}],"predecessor-version":[{"id":11529,"href":"https:\/\/www2.paloaltonetworks.com\/blog\/wp-json\/wp\/v2\/posts\/11528\/revisions\/11529"}],"wp:attachment":[{"href":"https:\/\/www2.paloaltonetworks.com\/blog\/wp-json\/wp\/v2\/media?parent=11528"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www2.paloaltonetworks.com\/blog\/wp-json\/wp\/v
2\/categories?post=11528"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www2.paloaltonetworks.com\/blog\/wp-json\/wp\/v2\/tags?post=11528"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www2.paloaltonetworks.com\/blog\/wp-json\/wp\/v2\/coauthors?post=11528"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}