We recently commissioned Forrester Consulting to survey IT security professionals about their desired end state for correlating security intelligence from the network and the endpoint. Bringing together these two disparate threat vectors allows organizations to:
◈ Increase detection and prevention capabilities
◈ Reduce manpower and resources needed for containment (and therefore costs)
◈ Exponentially decrease remediation time
In short, these are perceived benefits; they are not really happening today. Surprisingly, most organizations nonetheless reported high confidence in their current threat detection and remediation systems.
But do they really have the problem covered?
Turns out – no. Perception and reality differ here. Many respondents claim to have integrated systems, but in practice, making decisions that span endpoint and network security requires considerable time and effort from their teams, if the data can be used at all. This shouldn't come as much of a shock, since we also asked which security technologies they had implemented and which they were planning to implement. While there is no clear standout among planned investments, what is clear is that, of the 21 solutions we inquired about, respondents are spreading their capital expenses all over the place. This is why most organizations are still doing the correlation work manually.
Too many tools, little integration, no automation
With so many different security solutions in place, it's no wonder so much time is spent on manual analysis and investigation of security incidents. Earlier this summer I spoke with many security professionals at the Gartner Security Summit and at Cisco Live who talked about how siloed their products were. The data produced by one tool couldn't even be consumed by another, and what little information they could correlate took forever to assemble. One conversation that stands out was with an incident responder from a large power company, who described spending more than six months investigating a single incident because they couldn't trace the path of infection back or identify how it was propagating through their network. This is not an uncommon story. Over the last decade, so many tools have been deployed that the sheer number now makes the job harder, not easier. If only these teams had a security architecture where the tools talked to each other and correlated data automatically.
Automating data analysis for improved detection is a reality
The term "architecture" has been used so much that it may be one of the few terms requiring more definition than "cloud". Simply put, we view an architecture as something that works together. Not a bunch of APIs cobbled together to push data somewhere (until an API changes and it all breaks), followed by manual analysis, but a set of technologies, and specifically security tools, that all work together – automatically – to reduce the manual effort. This means having your endpoint detection and response (EDR) solution correlate files seen by your firewall or intrusion detection system with those analyzed by your sandbox, then connect that with telemetry from the web proxy to identify associated traffic, command-and-control (C2) infrastructure, and the additional tools attackers are using – all without you having to do anything.
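To make the idea concrete, here is a minimal sketch in Python of the kind of join such an architecture performs automatically. Everything in it is hypothetical (the field names, the sample records, the tool exports); it only illustrates the correlation logic: a sandbox verdict is matched against firewall file events to find affected hosts, and proxy telemetry is then checked for connections from those hosts to known C2 domains.

```python
# Minimal sketch of cross-tool correlation. All field names and sample
# records are hypothetical, for illustration only.

# Hypothetical telemetry exports from three separate tools.
firewall_events = [
    {"sha256": "abc123", "src_host": "ws-042", "action": "allowed"},
]
sandbox_verdicts = [
    {"sha256": "abc123", "verdict": "malicious", "family": "dropper"},
]
proxy_logs = [
    {"host": "ws-042", "dest_domain": "evil.example", "bytes_out": 48213},
]
known_c2_domains = {"evil.example"}

def correlate():
    """Join the three feeds on file hash and host to surface incidents."""
    # Files the sandbox convicted.
    malicious_hashes = {
        v["sha256"] for v in sandbox_verdicts if v["verdict"] == "malicious"
    }
    # Hosts the firewall saw handling one of those files.
    suspect_hosts = {
        e["src_host"] for e in firewall_events if e["sha256"] in malicious_hashes
    }
    # Proxy records showing those hosts talking to known C2 domains.
    return [
        log for log in proxy_logs
        if log["host"] in suspect_hosts and log["dest_domain"] in known_c2_domains
    ]

if __name__ == "__main__":
    for incident in correlate():
        print(f"Possible C2 activity: {incident['host']} -> {incident['dest_domain']}")
```

In a real architecture this correlation runs continuously against live telemetry rather than static exports, but the point is the same: a shared indicator lets one tool's verdict enrich another tool's data without an analyst stitching logs together by hand.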
It may sound far-fetched, but this is exactly what we call Advanced Malware Protection, or AMP Everywhere. When you put the same eyes everywhere, you see everything, and more visibility means a better ability to prevent advanced attacks.
For a good technical overview of how AMP works, check out this chalk talk.