Enterprise Evaluation 2018
- Call For Participation
APT3 is a China-based threat group that researchers have attributed to China's Ministry of State Security. This group is responsible for the campaigns known as Operation Clandestine Fox, Operation Clandestine Wolf, and Operation Double Tap. As of June 2015, the group appears to have shifted from targeting primarily US victims to primarily political organizations in Hong Kong.
APT3 relies on harvesting credentials, issuing on-keyboard commands (versus Windows API calls), and using programs already trusted by the operating system ("living off the land"). The group is not known to use elaborate scripting techniques, leverage exploits after initial access, or employ anti-EDR capabilities such as rootkits or bootkits.
The Operations Flow chains techniques together into a logical order commonly seen across APT3/Gothic Panda operations. For APT3, we break the operations into two scenarios, each leveraging a different tool set to explore variation in execution and detectability. Cobalt Strike was used to execute the first scenario, and PowerShell Empire the second.
In both scenarios, access is first established on the target victim. Each scenario then proceeds through local/remote discovery, privilege escalation, and harvesting of available credentials, followed by lateral movement within the breached network and, finally, collection and exfiltration of sensitive data. Both scenarios also trigger previously established persistence mechanisms after a simulated time lapse.
For details on the APT3 emulation, please refer to the Operational Flow.
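As a simplification, the scenario flow described above can be sketched as an ordered list of ATT&CK tactic-level phases. This list is illustrative only; the actual operations flow interleaves and repeats these phases:

```python
# Illustrative sketch of the scenario flow described above, expressed
# as an ordered list of tactic-level phases. Not part of any real
# evaluation tooling; the real flow interleaves these steps.
OPERATIONS_FLOW = [
    "Access established (pre-positioned)",
    "Discovery (local/remote)",
    "Privilege Escalation",
    "Credential Access",
    "Lateral Movement",
    "Collection",
    "Exfiltration",
    "Persistence (executed after a simulated time lapse)",
]

for step, phase in enumerate(OPERATIONS_FLOW, 1):
    print(f"{step}. {phase}")
```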
For the APT3 evaluation, we tested 56 Enterprise ATT&CK techniques across 10 ATT&CK tactics. The Initial Access tactic was considered out of scope for the APT3 evaluation. The in-scope techniques for the APT3 evaluation are displayed below.
We divided the tested techniques into “Primary” techniques and “Enabling” techniques. Execution of many of the techniques required Command-Line Interface, Execution through API, and PowerShell. We considered these to be “Enabling” techniques for the evaluation, and we generally did not capture detections directly associated with their execution (except in cases where one of those techniques was executing the behavior under test, such as “RunAs”). Instead, we focused on the Primary technique that was performed, rather than the mechanism of execution (which was considered the Enabling technique). For example, if Process Discovery was performed via Command-Line Interface, we captured detections for Process Discovery but not Command-Line Interface.
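The Primary-versus-Enabling distinction can be sketched as a simple lookup. The set of Enabling techniques comes from the paragraph above; the helper function and its name are hypothetical, not part of any real evaluation tooling:

```python
# Hypothetical sketch of the Primary vs. Enabling technique rule.
# The three Enabling techniques are those named in the text above.
ENABLING_TECHNIQUES = {
    "Command-Line Interface",
    "Execution through API",
    "PowerShell",
}

def detections_to_capture(primary, mechanism):
    """Return the techniques for which detections are recorded.

    Detections are captured for the Primary technique; the Enabling
    technique (the mechanism of execution) is generally not captured.
    """
    captured = [primary]
    if mechanism not in ENABLING_TECHNIQUES:
        captured.append(mechanism)
    return captured

# Example from the text: Process Discovery performed via Command-Line
# Interface is captured as Process Discovery only.
print(detections_to_capture("Process Discovery", "Command-Line Interface"))
```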
You can view the in-scope techniques for the APT3 evaluation in the ATT&CK Navigator by checking out the layer file we made available here. A preview is shown below! The techniques in scope for the APT3 evaluation are highlighted in green.
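For reference, a Navigator layer is plain JSON. The following is only a format sketch, not the actual published layer file; the layer name and the single technique entry are illustrative:

```python
import json

# Minimal, illustrative ATT&CK Navigator layer that highlights one
# technique in green. The real APT3 layer linked above contains the
# full in-scope technique list.
layer = {
    "name": "APT3 evaluation (illustrative)",
    "domain": "mitre-enterprise",
    "techniques": [
        {"techniqueID": "T1057", "color": "#00ff00"},  # Process Discovery
    ],
}

print(json.dumps(layer, indent=2))
```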
Figure 1: APT3 Evaluation Environment
The APT3 evaluation used a Windows domain hosted in Microsoft Azure, with one domain controller, one file server, and three clients. All VMs were "Standard B4MS" instances with four vCPUs and 16 GB of memory. The servers ran Windows Server with SKU "2016-Datacenter", and the clients ran Windows 10 1803 with SKU "rs4-pro".
In addition to disabling Windows Defender, we made a number of modifications to the standard images to execute our evaluation. Full details can be found here.
Vendors use their own terminology and approaches to detect potential adversary behavior. They provide this information to us in their own way, and it is then our responsibility to abstract the data into detection categories so we can talk about the products in similar ways.
These categories are divided into two types: “Main” and “Modifier.” Each detection receives one main category designation, which relates to the amount of context provided to the user. A detection may optionally receive one or more modifier category designations that help describe the detection in more detail. For the APT3 evaluation, there are six main detection categories representing the amount of context provided to the analyst, as well as the logic used to generate the detection.
You can learn more about how we categorize detections here.