Using ATT&CK Evaluations
Who We Are
MITRE Engenuity evaluates cybersecurity products using an open methodology based on the ATT&CK® knowledge base. Our goal is to help organizations improve their defenses against known adversary behaviors.
These evaluations are not a competitive analysis. We show the detections we observed without declaring a “winner.” Because there is no single way to analyze, rank, or rate the solutions, we instead show how each vendor approaches threat defense within the context of ATT&CK.
Our evaluation methodologies are publicly available, and the results are publicly released. We continue to evolve and extend our methodologies and content to ensure a fair, transparent, and useful evaluation process.
Interpreting Our Results
Making sense of ATT&CK Evaluation results can be challenging. Each participant has their own take on the results, and every user has their own criteria for success. Fortunately, to accompany the raw results, we also have tools and resources that can help you use and understand the data.
This guide helps you understand how to use the evaluation results to assess security products and select an endpoint threat detection tool. In this guide, we address using ATT&CK Evaluations to inform two key questions:
- Does this tool detect known threats to your organization (i.e., ATT&CK technique coverage)?
- How does the tool present the data to your analysts (i.e., Graphical User Interface)?
This post helps you understand the nuance required to use ATT&CK Evaluation data well. It explores why so many participants declare themselves the winner, as well as common pitfalls in analyzing the information, including assuming that ATT&CK Evaluations can answer every question and overgeneralizing the results.
Key Things to Know Before You Begin
ATT&CK Evaluations are a starting point. We use an open-book, minimally sized environment to understand the baseline capabilities of solutions. You should also consider how each solution would operate in the context of your own organization, including how many false positives it might generate.
There are no winners. The goal of ATT&CK Evaluations is to show the different capabilities of each vendor.
Counting has limitations. We don’t think any single way to count is right for everyone. As we’ve described, you should identify your own needs and score the results against them.
Not all techniques are created equal. A technique detection for Credential Dumping may not have the same value as one for Process Discovery, due to the severity of the action. The category gives you a general idea, but you should dive into the details to understand the technique and the detection.
Not all procedures are created equal. Process Discovery (T1057) via Command-Line Interface (T1059) can be detected with most process monitoring. Process Discovery via API (T1106) would need API monitoring. A vendor could have a detection for one but not the other.
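To make that distinction concrete, here is a minimal sketch of the two procedures on Linux, where reading `/proc` stands in for the native-API route (on Windows this would be calls like `CreateToolhelp32Snapshot`). The tool names and the `/proc` layout are assumptions about the host, not part of the evaluation methodology:

```python
import os
import subprocess

# Procedure 1: Process Discovery via a command-line tool (T1059 flavor).
# Running `ps` spawns a child process, so ordinary process-creation
# monitoring will observe this procedure.
out = subprocess.run(
    ["ps", "-e", "-o", "pid="], capture_output=True, text=True
).stdout
cli_pids = {int(tok) for tok in out.split()}

# Procedure 2: Process Discovery via OS interfaces (T1106 flavor).
# Enumerating /proc spawns no child process, so process monitoring
# alone would miss it; API or file-access telemetry is needed instead.
api_pids = {int(name) for name in os.listdir("/proc") if name.isdigit()}

# Both procedures recover largely the same information, but they leave
# very different telemetry behind -- which is why a vendor can detect
# one and not the other.
print(len(cli_pids & api_pids) > 0)
```

The point of the sketch is not the enumeration itself but the telemetry gap: identical adversary goals, different data sources required to see them.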
- Making Sense of ATT&CK Evaluation Data
- Dissecting a Detection: An Analysis of ATT&CK Evaluations Data (Sources) Part 1 of 2
- Actionable Detections: An Analysis of ATT&CK Evaluations Data Part 2 of 2
- Part 1: Would a Detection by Any Other Name Detect as Well?
- Part 2: Would a Detection by Any Other Name Detect as Well?