Getting Started

Using ATT&CK Evaluations

Who We Are

MITRE Engenuity evaluates cybersecurity products using an open methodology based on the ATT&CK® knowledge base. Our goal is to improve organizations' defenses against known adversary behaviors by:

  • Empowering end-users with objective insights into how to use participating security products
  • Providing transparency around the true capabilities of participating security products
  • Driving vendors to enhance their capabilities

These evaluations are not a competitive analysis. We show the detections we observed without naming a “winner.” Because there is no single way to analyze, rank, or rate the solutions, we instead show how each vendor approaches threat defense within the context of ATT&CK.

MITRE Engenuity’s evaluation methodology is publicly available, and the results are publicly released. MITRE will continue to evolve the ATT&CK Evaluation methodology and content to ensure a fair, transparent, and useful evaluation process.



User Guide

This guide helps you understand how to use the evaluation results to assess security products and select an endpoint threat detection tool. In this guide, we address using ATT&CK Evaluations to inform two key questions:

  • Does this tool detect known threats to your organization (i.e., ATT&CK technique coverage)?
  • How does the tool present the data to your analysts (i.e., Graphical User Interface)?




Interpreting Our Results

The ATT&CK Evaluation methodology and results can assist organizations as they make critical decisions about which vendor capabilities best suit their needs. These evaluation results describe how product users could address specific ATT&CK Technique implementations under perfect circumstances with knowledge of what the adversary did and without environmental noise. Our evaluations look at each vendor’s capabilities within their own context, while doing so in a way that is consistent across all vendors who participate.

By looking at the capabilities of these products and weighing their unique constraints, organizations may be able to “down-select” products that appear to best meet their requirements. Because our evaluations are narrowly focused on the technical ability to defend against specific adversaries, decision makers should also consider factors we do not account for when determining which tool best fits their needs. For additional guidance on how to leverage our results to compare vendor performance, please see our Using Results to Evaluate Endpoint Detection Products guide.

As many use our results to develop their own scores, rankings, or ratings, we encourage you to consider how those analyses relate to your unique requirements. We have released a number of tools on this site to help you with your analysis. We also encourage you to use our methodologies to test the solutions in your own environment. This allows false positives, environmental noise, and system impact to be considered in a way that is tailored to your organization, and gives your analysts an opportunity to get hands-on with the user interfaces.




Key Things to Know Before You Begin

ATT&CK Evaluations are a starting point. We use an open-book, minimally sized environment to understand the baseline capabilities of solutions. Consider how these solutions would operate day-to-day in the context of your organization, including the false positives they generate.

There are no scores or winners. The goal of ATT&CK Evaluations is to show the different capabilities of each vendor.

Counting has limitations. We don’t think any single way of counting detections is right for everyone. As described above, you should consider your own needs and then build any scoring on those.
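To make this concrete, here is a minimal sketch of turning evaluation results into an organization-specific score. The detection categories, technique names, and all weight values below are illustrative assumptions, not MITRE guidance; substitute the categories from the actual results and weights that reflect your own priorities.

```python
# Hypothetical, simplified view of one vendor's per-technique results.
# The detection-category labels here are assumptions for illustration.
results = {
    "Credential Dumping": "Technique",   # richest detection observed
    "Process Discovery": "Telemetry",    # raw data only
    "Remote File Copy": "None",          # no detection observed
}

# How much each detection category is worth to *your* organization
# (illustrative numbers -- a richer detection scores higher).
category_value = {"Technique": 1.0, "Telemetry": 0.5, "None": 0.0}

# How much each technique matters to *your* threat model
# (illustrative: credential theft weighted as higher severity).
technique_weight = {
    "Credential Dumping": 3.0,
    "Process Discovery": 1.0,
    "Remote File Copy": 2.0,
}

# Weighted score: detection quality scaled by technique importance.
score = sum(category_value[cat] * technique_weight[tech]
            for tech, cat in results.items())
print(score)  # 1.0*3.0 + 0.5*1.0 + 0.0*2.0 = 3.5
```

Two organizations running this sketch with different weights over the same results would, by design, rank the same vendor differently; that is the point of tailoring the counting to your needs.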

Not all techniques are created equal. A technique detection for Credential Dumping may not have the same value as one for Process Discovery, due to the severity of the action. The category gives you a general idea, but you should dive into the details to understand the technique and detection.

Not all procedures are created equal. Process Discovery (T1057) via Command-Line Interface (T1059) can be detected with most process monitoring. Process Discovery via API (T1106) would need API monitoring. A vendor could have a detection for one but not the other.
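The distinction between the two procedures can be sketched on a Linux host (an assumption for portability; the ATT&CK examples above are Windows-oriented). Enumerating processes by spawning a command-line tool creates a new child process that ordinary process monitoring will see, while enumerating them through direct OS interfaces spawns nothing and would need API/syscall-level monitoring instead:

```python
import os
import subprocess

# Procedure 1: Process Discovery via a command-line tool.
# Spawning "ps" creates a new process, so basic process monitoring
# (new-process events) observes this procedure.
cli_output = subprocess.run(["ps", "-e"], capture_output=True,
                            text=True, check=True).stdout
cli_pids = {line.split()[0] for line in cli_output.splitlines()[1:]}

# Procedure 2: Process Discovery via direct OS interfaces.
# Reading /proc spawns no child process; detecting this procedure
# requires monitoring syscalls/API activity, not process creation.
api_pids = {entry for entry in os.listdir("/proc") if entry.isdigit()}

# Both procedures recover largely the same process list, but they
# leave very different traces for a detection tool to observe.
print(len(cli_pids & api_pids), "processes seen by both procedures")
```

A sensor that only watches process creation catches the first procedure and misses the second entirely, which is why a vendor can have a detection for one procedure of a technique but not another.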