Offensive security moves at a breakneck pace, and keeping up with new tools and TTPs (tactics, techniques, and procedures) can be an undertaking of its own for a security team. Creating, testing, and managing detection rules for those changes is an even greater challenge, especially at scale. To address the challenge of quickly testing detection rules, we developed the tool Dredd. Dredd automates both the analysis of Sigma rules against Mordor datasets (collections of logs of attacker procedures) and the analysis of IDS rules against PCAPs. In this blog, we will discuss Dredd and its potential uses.
SIEM Rule Analysis
Much like its namesake – Judge Dredd – Dredd is judge, jury, and executioner: it handles log ingestion and normalization, detection rule conversion, and detection rule evaluation. For the detection rule side of Dredd, Sigma was chosen. Sigma is a SIEM-agnostic rule format designed to be interoperable. The Sigma project provides a utility (sigmac) and library (sigmatools) to translate the format to other platforms like Splunk, QRadar, and Elastic.
Converting a Sigma rule (left) to a Splunk query (right) using sigmac
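To illustrate the kind of translation sigmac performs, here is a toy sketch in Python. It is not Dredd's or sigmac's actual code, and the example rule fields are made up; the real sigmac handles field mappings, logsource routing, and far more complex conditions.

```python
# Toy illustration of translating a Sigma-style detection into a
# Splunk-like search string. Not sigmac's real implementation.

def to_splunk(detection: dict) -> str:
    """Translate a flat 'selection' map into a Splunk-style search string."""
    terms = []
    for field, value in detection["selection"].items():
        # Sigma expresses value modifiers as 'field|modifier'
        if "|" in field:
            name, modifier = field.split("|", 1)
            if modifier == "endswith":
                terms.append(f'{name}="*{value}"')
            elif modifier == "contains":
                terms.append(f'{name}="*{value}*"')
        else:
            terms.append(f'{field}="{value}"')
    return " ".join(terms)

# Hypothetical rule looking for scheduled task creation
rule = {
    "selection": {
        "Image|endswith": "\\schtasks.exe",
        "CommandLine|contains": "/create",
    },
    "condition": "selection",
}

print(to_splunk(rule))
```

The same detection dictionary could just as easily be rendered into Elasticsearch or QRadar syntax, which is the interoperability Sigma is designed for.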
The other major component of Dredd is Mordor. The Mordor project aims to encapsulate attacker procedures as logs and distribute them openly to ease detection content development. This bypasses both the need for a testing environment with logging infrastructure and the need to accurately simulate the attacker procedure of interest.
Mordor entry for DCSync via Covenant C2
Under the hood, Dredd creates an Elasticsearch Docker container, ingests the Mordor logs into the container, translates the Sigma rules to Elasticsearch DSL queries, then runs those queries against the container and reports the results. Below is a video demonstrating the analysis process.
Using Dredd to evaluate merged logs
(Note: all videos are sped-up)
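The Elasticsearch DSL queries produced from Sigma rules are typically query_string queries over Winlogbeat-style field names. The snippet below shows roughly what such a query body looks like; the field name and pattern are illustrative, not taken from a specific Dredd run, and the exact output depends on the rule and backend configuration.

```python
import json

# Roughly the shape of an Elasticsearch DSL query generated from a Sigma
# rule targeting Sysmon process-creation events. Illustrative only.
query = {
    "query": {
        "query_string": {
            "query": 'winlog.event_data.Image:"*\\\\schtasks.exe"',
        }
    }
}

# Dredd runs queries like this against the Elasticsearch container
# and reports which rules returned hits.
print(json.dumps(query, indent=2))
```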
A Sigma rule that uses Sysmon logs to look for “schtasks.exe” gets a hit against the two Mordor log sets. In this example, the two Mordor logs are merged and the rules are run against the merged logs. However, this same procedure can be performed without merging the logs:
Using Dredd to evaluate unmerged logs
Here you can see that the “schtasks.exe” rule had a hit against the scheduled task log set but not the DCSync log set.
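The merged-versus-unmerged behavior can be sketched as follows. This is a simplified stand-in for the Elasticsearch evaluation, with made-up log records and a rule reduced to a plain predicate:

```python
# Simplified stand-in for Dredd's per-set vs merged evaluation.
# A "rule" here is just a predicate over a log record.

def schtasks_rule(record: dict) -> bool:
    return record.get("Image", "").endswith("\\schtasks.exe")

# Made-up log sets standing in for the two Mordor datasets
scheduled_task_logs = [{"Image": "C:\\Windows\\System32\\schtasks.exe"}]
dcsync_logs = [{"Image": "C:\\Windows\\System32\\rundll32.exe"}]

def evaluate(rule, log_sets: dict) -> dict:
    """Count rule hits per named log set."""
    return {name: sum(rule(r) for r in logs) for name, logs in log_sets.items()}

# Unmerged: per-set results show which procedure triggered the rule.
per_set = evaluate(schtasks_rule, {
    "scheduled_task": scheduled_task_logs,
    "dcsync": dcsync_logs,
})
# Merged: a combined set only says whether the rule fired at all.
merged = evaluate(schtasks_rule, {
    "merged": scheduled_task_logs + dcsync_logs,
})
print(per_set)  # {'scheduled_task': 1, 'dcsync': 0}
print(merged)   # {'merged': 1}
```

Keeping the sets separate tells you *which* attacker procedure a rule covers; merging only tells you that it fired somewhere.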
Other Data Sources
Dredd currently only supports Mordor datasets for Sigma analysis. The Mordor project lays out the different environments where its logs were captured so that analysts can look up which hosts map to which purpose (ex: IP 172.18.39.8 is the attacker C2 in the Shire environment). Dredd, however, does not enforce conformity to those Mordor environments or require additional metadata (like the attacker view). In practice, Dredd will support any JSON logs produced by Winlogbeat or logs matching that schema. This means that support for projects like EVTX-ATTACK-SAMPLES, a project similar to Mordor but using evtx files (Windows Event Log files), can be added fairly easily.
Converting an evtx export to JSON using PowerShell
Archiving the JSON export using 7-Zip
Evaluating the export against a Sigma rule using Dredd
The MMC lateral movement rule worked against the export
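Since Dredd accepts any JSON logs matching the Winlogbeat schema, adapting another source mostly means reshaping its events into that nested layout. A hedged sketch of the idea, where the input event and the subset of fields are illustrative (real Winlogbeat output carries many more fields):

```python
import json

# Sketch of reshaping a generic exported event into a Winlogbeat-style
# nested document that Dredd can ingest. Input fields are illustrative.
def to_winlogbeat(event: dict) -> dict:
    return {
        "winlog": {
            "channel": event["Channel"],
            "event_id": event["EventID"],
            "event_data": event.get("EventData", {}),
        }
    }

exported = {
    "Channel": "Microsoft-Windows-Sysmon/Operational",
    "EventID": 1,
    "EventData": {"Image": "C:\\Windows\\System32\\mmc.exe"},
}

# Winlogbeat emits newline-delimited JSON, one event per line.
print(json.dumps(to_winlogbeat(exported)))
```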
IDS Rule Analysis
Dredd also supports the evaluation of Snort/Suricata IDS rules against PCAPs using Suricata. Unlike the Elasticsearch workflow, however, Dredd can simply mount the rules and PCAPs into a container as-is and use Suricata to perform offline analysis. There are several good sources for attacker PCAPs, including the Mordor project, which has PCAPs for MITRE's APT29 emulation plan; PCAP-ATTACK, by the same author as EVTX-ATTACK-SAMPLES; and PacketTotal, a VirusTotal-like website for packet captures.
When analyzing these captures, Dredd merges all rules together, then evaluates them against each PCAP individually. There is also an option to evaluate the rules against all PCAPs at once.
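Under the hood, offline analysis amounts to running Suricata in pcap-read mode. The sketch below builds the sort of command involved; the paths are placeholders, and Dredd's actual invocation and flags may differ.

```python
import subprocess

def suricata_cmd(pcap: str, rules: str, log_dir: str) -> list:
    """Build an offline Suricata analysis command.

    -r reads packets from a pcap file, -S loads exclusively the given
    rule file, and -l sets the output directory for eve.json/fast.log.
    """
    return ["suricata", "-r", pcap, "-S", rules, "-l", log_dir]

cmd = suricata_cmd("merged.pcap", "merged.rules", "logs/")
print(" ".join(cmd))
# Where Suricata is installed, this could be run directly:
# subprocess.run(cmd, check=True)
```

The resulting alert output (e.g. eve.json) can then be parsed to report which rules fired against which capture.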
Using Dredd to evaluate unmerged PCAPs
The most obvious use case for Dredd is as a tool to help detection content developers analyze new and existing detections against attacker activities. When a new tool or procedure is released, an analyst can download a log export of it, load it into Dredd with their existing detection rules, and determine whether they have coverage for it. If they lack coverage, they can build a new detection and use Dredd to validate that the rule works as expected. In the same scenario, Dredd can be used to test a rule's propensity for false positives by evaluating it against benign sets of logs.
Dredd can also help in situations where an analyst may not have access to a detection platform (SIEM/IDS) or the platform is more tightly controlled. Analysts on the go or working off hours may not have the ability to run arbitrary queries against the platform. In other situations, query access to the platform could be restricted for reasons such as limiting performance degradation of scheduled queries and licensing. Dredd can help in either case by acting as a proxy for the platform when evaluating detection content.
If the detection team(s) version control their detection logic, Dredd can be used as part of a continuous integration/continuous deployment (CI/CD) pipeline to automate rule validation. Once new rules or changes to existing rules are committed to the source control management system (e.g. GitHub, Bitbucket), an automated process can be initiated to evaluate the rules against various logs. To that end, Dredd supports returning non-zero exit codes (in addition to the normal structured results) when rules do not have any results, thereby causing a build failure.
Changing the exit code behavior
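In a pipeline, that exit-code behavior is what gates the build. A minimal sketch of the pattern, where the result structure and rule names are made up rather than Dredd's actual output format:

```python
import sys

# Minimal sketch of CI gating on rule evaluation results.
# The results dict (rule name -> hit count) is illustrative.
def gate(results: dict) -> int:
    """Return a non-zero exit code if any rule produced no hits."""
    missed = [rule for rule, hits in results.items() if hits == 0]
    for rule in missed:
        print(f"FAIL: no hits for {rule}", file=sys.stderr)
    return 1 if missed else 0

# A real pipeline step would end with: sys.exit(gate(results))
print(gate({"schtasks_creation": 3, "dcsync_replication": 0}))
```

CI systems treat a non-zero exit code as a failed step, so a rule change that silently stops matching its test logs breaks the build instead of shipping.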
This type of continuous testing is more commonly found in traditional software development practices but is just as applicable – and useful – in testing security detection content.
Red team operators can also make use of Dredd. The operator first generates the log data by practicing with their different tools and TTPs within a testing environment set to log the appropriate sources. Then the operator evaluates those logs either against their organization’s existing detection rules or against publicly available detection rules. Using the results, they can identify areas of their methodology that are likely to be detected and either avoid them or innovate on them. The red team operator will also get a better understanding of what characteristics are common in detection logic so they can avoid those where possible when developing their methodologies.
Dredd can be found on the Security Risk Advisors GitHub page here: https://github.com/SecurityRiskAdvisors/dredd. If you have any questions about this article, please feel free to reach out to me on Twitter @2xxeformyshirt.
Featured on Cyber Kumite