Malware analysis offers many benefits to organizations and their defenders; however, most organizations have no defined process for performing it. This post walks through the questions that malware analysis can answer and outlines an approach for getting started.
Malware Analysis Overview
According to the 2020 Verizon Data Breach Investigations Report (DBIR), phishing attacks involving malware are one of the top two threats organizations face. Malware, in a general sense, can be defined as code that is used to perform malicious actions. For organizations and defenders, performing malware analysis, even at a cursory level, can help to answer questions and enhance defensive capabilities. Performing malware analysis on a regular basis allows an organization to:
- Assess current threats to the organization
- Determine the potential scope of an incident
- Determine threat-specific remediation tasks
- Improve the ability of teams to handle incidents
- Improve system- and network-based defensive security
  - new alerts
  - new blocks
  - new monitoring rules
  - and more
- Develop new and/or updated threat hunting campaigns
- Enhance purple team engagements
  - new attack emulations
  - new attack scenarios
While the pros of performing malware analysis often outweigh the cons, many organizations still struggle to understand suspicious artifacts that are identified during the incident response process. For many organizations, the analysis of a malicious artifact may look like the following:
An analyst receives an alert involving a suspicious file. The file is uploaded to VirusTotal for analysis. If fewer than five engines flag the file as malicious, the analyst closes the alert as a false positive. If five or more engines flag the file as a generic trojan, the analyst makes a note, removes the file, closes the alert ticket, and moves on to the next alert.
Sandboxes, especially public cloud sandboxes, are frequently used by many organizations and defenders, and often serve as the sole verdict on whether an artifact is benign or malicious. While this approach may be enough to get by for daily triage, it does not provide insight into the capabilities of a particular artifact or of a threat actor targeting the organization. We can do better.
Threat actors are constantly adapting, evolving, and looking for new opportunities to circumvent defensive and detection mechanisms. If we are not keeping up with the current threat landscape or keeping our fingers on the pulse of new threats, we are missing an opportunity to build our threat intelligence and take a strategic approach to defensive security. One of the most common challenges for defenders is knowing where and how to get started.
Getting Started with Malware Analysis
Developing processes and skills over time is a fantastic way to introduce malware analysis as a new capability. When beginning your journey, consider starting with phishing attacks. They are a persistent threat to all organizations and employ a wide variety of techniques that are often combined to try to gain access to an organization. Being proactive and analyzing a few samples every week is a good place to start. Some questions to answer when analyzing phishing attacks include:
- What indicators of compromise can be identified in the email?
- Are there links in the email? If so, what domains, URLs, and IP addresses are used?
- Do email attachments contain malicious code or objects?
  - See the resources below for tools that can be used to perform this analysis
- What is the goal of the phish?
  - Is the goal of the phish to steal credentials?
  - Is the goal of the phish to have the user download something?
    - Is the download link live?
      - Safely download the file to a VM
      - Get a hash of the file, e.g., sha256
      - Submit the hash to a sandbox
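The triage steps above can be sketched in a few lines of Python. The function below parses a saved phishing email (.eml), pulls URLs out of the text parts, and computes a sha256 hash of each attachment without ever opening or executing it; the function name and sample values are illustrative, not part of any standard tooling.

```python
import email
import hashlib
import re
from email import policy

URL_RE = re.compile(r"https?://[^\s\"'<>]+")

def triage_eml(path):
    """Extract basic IOCs from a saved phishing email: the sender
    address, URLs found in text parts, and sha256 hashes of
    attachments (the attachments are hashed, never opened)."""
    with open(path, "rb") as f:
        msg = email.message_from_binary_file(f, policy=policy.default)
    iocs = {"from": str(msg["From"]), "urls": set(), "attachments": {}}
    for part in msg.walk():
        filename = part.get_filename()
        if filename:  # treat any named part as an attachment
            payload = part.get_payload(decode=True) or b""
            iocs["attachments"][filename] = hashlib.sha256(payload).hexdigest()
        elif part.get_content_type() in ("text/plain", "text/html"):
            iocs["urls"].update(URL_RE.findall(part.get_content()))
    return iocs
```

The resulting hashes can then be submitted to a sandbox for a verdict, and the URLs and sender recorded as IOCs.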
Answering these questions will help build a profile of the attack and an understanding of the potential scope if the message was successfully delivered. This information can then be used to start building a profile of the threat actor. It should be saved and can serve several different purposes:
- Gain insights into existing threats to the organization
- IOCs can be searched for throughout the environment
- Improve system and network security
- Used to develop threat hunts
- Correlate data with future attacks
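As a toy illustration of sweeping saved IOCs through the environment, the function below checks log lines against a list of hashes, domains, and IPs. A real environment would do this with SIEM queries; this minimal sketch just shows the idea, and the IOC values are made up.

```python
def hunt_iocs(log_lines, iocs):
    """Sweep log lines for known IOCs (hashes, domains, IPs).
    Returns (line_number, matched_ioc, line) tuples, matching
    case-insensitively since log casing varies by source."""
    hits = []
    for n, line in enumerate(log_lines, start=1):
        for ioc in iocs:
            if ioc.lower() in line.lower():
                hits.append((n, ioc, line))
    return hits
```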
Operational security is crucial when performing malware analysis. Threat actors can monitor public sandboxes, sites used to distribute malware, and similar infrastructure. Doing so lets them know when an analyst is researching the threat, which allows them to quickly pivot, change tactics, and continue operations. This is the typical cat-and-mouse game defenders and attackers play. The following precautions can help hide the activity and identity of analysts.
- Gather information about the host system and network the malware was targeting. This may become important if the malware is using environmental keying to restrict execution
- Do not upload files to public sandboxes
- Use private sandboxes when possible
- Use a private VPN service when interacting with suspicious websites (not your org's VPN)
- Perform analysis inside virtual machines with networking set to host-only
- A spare “bare metal” machine (with networking disabled or isolated on an air-gapped network) for running malware can be useful if you identify or suspect that the malware has anti-virtualization protections.
Tor can also be used during investigations. However, it is important to note that threat actors can and do monitor network traffic. Threat actors may also implement defensive security measures that alert them when their infrastructure is being accessed by someone performing research. This applies to VPN use as well; however, detecting Tor use is very easy to do.
Priorities and goals become more important to set as the malware analysis program matures. Begin with simple malware analysis techniques and work towards more complex techniques over time. Analysis techniques are frequently intertwined and repeated during an investigation.
- Automated analysis
  - Private sandboxes
  - Public sandboxes
- Static analysis
  - Identify embedded strings
  - Identify embedded objects
  - Identify file metadata and structure
- Dynamic analysis
  - Interactive behavior
    - Run the malware in an isolated lab
    - Run the malware in an interactive debugger
- Manual code reverse engineering
  - Analysis of disassembled code
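Identifying embedded strings, the first static-analysis step above, can be done with the Unix `strings` utility or a few lines of Python. The sketch below extracts printable ASCII runs from a binary and flags ones that often matter in triage; the marker list is illustrative only, not a complete detection rule.

```python
import re

def extract_strings(path, min_len=6):
    """Pull printable ASCII runs of at least min_len bytes out of a
    binary file, similar to the Unix `strings` utility."""
    with open(path, "rb") as f:
        data = f.read()
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.decode("ascii") for m in re.findall(pattern, data)]

def flag_interesting(strings):
    """Flag strings that often matter during triage: URLs and a few
    Windows API names associated with injection (illustrative list)."""
    markers = ("http://", "https://", "CreateRemoteThread", "VirtualAlloc")
    return [s for s in strings if any(m in s for m in markers)]
```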
Malware Analysis Goals
Setting goals before an investigation will keep the team focused and will drive the analysis process. Some goals to help get you started are:
- Determine if the artifact is malicious
- Determine the family/type of malware
- Identify indicators of compromise (IOCs): file hashes, domain names, IP addresses, URLs, etc.
- Identify actions and behaviors: tactics, techniques, and procedures (TTPs)
- Track all findings and identify trends over time
Malware analysis is not an elusive process that organizations should ignore. Quite the opposite is true. Modest beginnings can lead to remarkable things. Malware analysis adds strategic, tactical, and operational value to defensive security operations.
Resources
- oletools – https://github.com/decalage2/oletools
- ViperMonkey – https://github.com/decalage2/ViperMonkey
- Didier Stevens Suite – https://github.com/DidierStevens/DidierStevensSuite
- REMnux – https://remnux.org/
International political relationships sometimes have the potential to create an elevated risk of cyber-attacks. In light of recent events, there are escalated concerns about attacks from Iranian-based APT groups, but regardless of whether these APT groups make an impact, there will continue to be threats from new and existing APT groups from around the world.
A well-designed threat intelligence program can help an organization understand the most likely and impactful threats it faces and drive program improvements. Many of our clients that utilize the VECTR™ platform do so not just as a purple team management tool, but also as a way to operationalize threat intelligence against their defense success metrics. Organizations that record their results within the VECTR™ platform have a quick and simple method to understand where any gaps exist that need to be addressed.
In situations involving known threat actors, threat intel programs typically identify specific threat actor groups to consider. MITRE has an excellent repository of this information available as a starting point. A sample of relevant threat groups are listed below, with mappings to their MITRE group profiles:
- APT33 – https://attack.mitre.org/groups/G0064/
- OilRig / APT34 – https://attack.mitre.org/groups/G0049/
- APT39 – https://attack.mitre.org/groups/G0087/
- Charming Kitten – https://attack.mitre.org/groups/G0058/
- CopyKittens – https://attack.mitre.org/groups/G0052/
- APT28 – https://attack.mitre.org/groups/G0007/
MITRE continues to provide regular updates on the ATT&CK website for new and updated threat group activity, techniques, tools, and malware attributed to various threat groups. In addition to the wealth of information on the wiki, MITRE’s Cyber Threat Intelligence (CTI) from ATT&CK is available on GitHub (https://github.com/mitre/cti) in STIX 2.0 bundles that can be directly consumed by platforms like VECTR™. Because we have our purple team datasets active in VECTR™, we can generate reports that show how our existing purple team testing coverage stacks up against the techniques and procedures observed from selected threat actor groups, or collectively across the entire ATT&CK framework. The following screenshots show how to do this with a sample dataset.
Sample dataset showing a MITRE ATT&CK™ Heatmap Report filtered on one or more APTs
When filtering is complete, we can see a filtered list of the attack techniques used by APT33, color-coded by the most recent assessment status of each purple team test case mapped to the associated technique IDs. From this dataset, we can see that the boxes in red/orange/yellow are areas that the selected threat actors tend to exploit and where the organization’s defenses need the most improvement. The grey boxes are techniques that haven’t yet been covered in purple team testing – so you know what to tackle next.
Given that an organization can instantly understand their expected defense success against these threat actors, this is a powerful tool to help prioritize mitigations and use cases for new alerts. It becomes both a tactical action plan for security operations teams and a strategic communication vehicle for leadership to convey your understanding of threats and demonstrate the vigilance of the information security program.
If you’re not already performing purple teams with VECTR™, but want to start modeling out these threat actor tactics, it’s not difficult to get started. VECTR™ includes the ability to drag and drop STIX 2.0 data from the MITRE ATT&CK™ framework and use this CTI to plan your own assessments and threat emulations. In the next major release of VECTR we will open a public TAXII server to enable community-driven sharing and enrichment of CTI data, including new assessment plans for threat analysts and red teamers, and detection rules & analytics for defenders.
1. In Administration, import from Enterprise ATT&CK (Full)
2. Select the specific APT groups you’d like to bring in
3. Filter test cases if desired
4. Create a dedicated campaign based on these threat actor groups’ techniques
You can use VECTR™ entirely for free: download the latest version on GitHub (https://github.com/SecurityRiskAdvisors/VECTR) and join the VECTR™ community mailing list at vectr.io to stay up to date with new releases and upcoming features.
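For readers who want to explore MITRE’s STIX 2.0 CTI bundles mentioned above outside of any platform, the sketch below walks a bundle (plain JSON, such as MITRE’s enterprise-attack.json) and lists the attack-pattern names tied to a named intrusion set through “uses” relationships. The object types and field names follow the STIX 2.0 data model; the function name is our own.

```python
import json

def techniques_for_group(bundle_path, group_name):
    """List ATT&CK technique (attack-pattern) names linked to the
    named intrusion-set via 'uses' relationships in a STIX 2.0
    bundle from MITRE's CTI repository."""
    with open(bundle_path) as f:
        objects = json.load(f)["objects"]
    by_id = {o["id"]: o for o in objects}
    # An intrusion-set may appear under several IDs across bundles.
    group_ids = {o["id"] for o in objects
                 if o.get("type") == "intrusion-set"
                 and o.get("name") == group_name}
    names = set()
    for o in objects:
        if (o.get("type") == "relationship"
                and o.get("relationship_type") == "uses"
                and o.get("source_ref") in group_ids):
            target = by_id.get(o.get("target_ref"), {})
            if target.get("type") == "attack-pattern":
                names.add(target["name"])
    return sorted(names)
```

Running this against the enterprise bundle with a group name like “APT33” yields the technique list that the heatmap reports above are built from.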
*Updated October 2, 2019
Red and Purple Teaming serve distinct purposes, and we think NIST CSF backs us up on that. We outline why we believe in starting with Purple Teams to validate Protect and Detect before using Red Teams to validate Respond. I’ve heard the question, “Do Purple Teams help to test the incident response process?” over and over again.
At an FS-ISAC Summit presentation this spring (2019), an insightful attendee asked the articulate presenter (I was neither person) that very question.
My answer: No.
Purpose of Purple and Red
Purple Teams seek to understand and fill gaps in Protect and Detect controls. Purple is the readiness test for technical configurations, controls and alerts. Purple’s approach is comprehensive and layered, just like the spirit of NIST CSF’s Protect and Detect which together comprise more than half of the Framework’s total controls. There is a clear message that there is more to cover in these two CSF Functions.
If Protect and Detect are appropriately evaluated, Respond does not need to be comprehensive, and that is why the Red Team function is the best way to test it. The Red Team takes the stealthiest path possible to simulate compromise. It purposefully avoids the obvious walls and alarms and answers the real-world question “will the organization Respond?” Purple, however, wants to provoke as many alarms as possible, then refine them to be more meaningful, more complete, and more ready to be surprise-tested by Red.
Still, why can’t Purple verify the Respond function? If, during a Purple Team, an attack operation triggers a low-severity alert in the SIEM, and the participating incident responders say “yeah, we’d react to that,” who’s to say? An alert fired at low severity (fact). The participating responders are speculating about what their response would be (not a fact). If Red is the test of Respond, we live in the land of facts – the alert will be actioned or not in the course of continuous monitoring.
Sequence of Purple and Red
NIST CSF starts with Identify. It’s a big mixed bucket of governance, risk and asset management needed to plan and maintain the scope of the program. Got it. Then comes Protect, then Detect, then Respond. There is a logical sequence to NIST CSF, with Purple playing an essential role in the verification of Protect and Detect, and Red serving as the vital real-time audit of the Respond function. There is less value in testing Respond before you’re reasonably ready. Many organizations are still doing this in the wrong order. Protect and Detect intentionally come first in NIST CSF, and should be verified first by Purple Teams (not a checklist audit). If Protect and Detect controls are low maturity, you are sure to follow with low maturity Respond.
Chances are, if you’ve been affected by cybercrime in the past year, you’ve been the victim of a banking trojan. Proofpoint’s latest quarterly threat report notes that over half of all successful email-based attacks were propagated by banking trojans (meanwhile ransomware, once one of the greatest threats to enterprises, came in at a mere 0.1% of total attacks).
This is no coincidence. Unlike the obtrusiveness of most ransomware attacks, where the attacker makes money by getting the victim to pay for the return of their files, a banking trojan is much more pernicious: infected hosts contribute to identity theft by quietly siphoning off sensitive information and login credentials, all the while using the host’s computing power to mine cryptocurrencies and send out spam emails in the background.
A look at Emotet, one of the most prominent banking trojans of the past 18 months, gives insight into the advanced and destructive attacks that this class of malware can mount against an organization.
Emotet, also known as Geodo, has been around for almost five years and was initially self-distributed through attempts at brute-forcing user accounts. Attackers would try easily guessable passwords and passwords from compromised sites that were sold or published on the dark web. Instituting password requirements, password rotation, and password lockouts was enough to thwart most early Emotet iterations. Recently, however, it has gained and maintained relevance by switching to phishing campaigns that use enticing emails and malicious payloads that resist detection and analysis.
An attack usually starts with a victim receiving an email from either a spoofed sender address or a compromised legitimate account. The email and its link/attachment are usually themed as something the user would want to click on due to their urgent (invoices, shipping notifications) or contextual (tax season, holiday season) nature. Recent iterations of these malspam emails carry a malicious link or macro-enabled Word document that, when clicked, runs a PowerShell script to either download or run an already-downloaded malicious payload.
Emotet is largely resistant to signature-based detection because it is polymorphic, meaning it will change its code in slight but meaningful ways every time it is downloaded. Attackers will routinely change the IP addresses and domains that the links and attachments will reach out to, further evading detection solutions. It can also frustrate analysts looking to study the malware because if it senses that it’s in a virtual machine, it won’t download or execute its payload like it would in a normal environment.
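A toy demonstration of why hash- or signature-based matching fails against polymorphism: flipping even a single byte of a payload, as a polymorphic engine does on each download, yields a completely different sha256, so a blocklist built from yesterday’s hash misses today’s sample. The byte strings below are harmless stand-ins, not real malware.

```python
import hashlib

sample_v1 = b"\x4d\x5a" + b"payload" * 100   # stand-in for one download
sample_v2 = bytearray(sample_v1)
sample_v2[50] ^= 0xFF                        # one-byte polymorphic mutation

h1 = hashlib.sha256(sample_v1).hexdigest()
h2 = hashlib.sha256(bytes(sample_v2)).hexdigest()
print(h1 == h2)  # False: the previously known hash no longer matches
```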
Emotet is also modular in nature, meaning attackers are able to customize the payload and tailor their malware campaign to fit their particular goals. While it primarily delivers trojans that scrape credentials and mine Monero (a cryptocurrency that obscures the source, amount, and destination of its transactions), it’s able to release a host of other attacks into an organization’s network, including ransomware. Once a system is compromised, most variants will look to establish persistence on the machine they currently occupy and spread to more machines by using captured credentials and by sending out more malicious emails via the victim’s email accounts.
An organization affected by a banking trojan like Emotet could have its sensitive or proprietary information stolen or altered and could see a disruption to its productivity, files, and reputation. In some cases, remediation of an incident caused by Emotet costs upwards of $1 million (according to https://www.us-cert.gov/ncas/alerts/TA18-201A and https://www.infosecurity-magazine.com/news/allentown-struggles-with-1-million/).
Organizations can take measures to significantly reduce the chance of a successful Emotet phishing campaign. Here are some proactive steps that SRA recommends:
- Purple Teams: Test yourself and inspect what you expect. By conducting a purple team campaign focused on the TTPs that Emotet campaigns use, you’ll be able to create a defensive playbook to implement.
- Email Defense: Help users discern a malicious email from a legitimate one by enabling DMARC and rules to mark external emails, which lets your users know when an email is masquerading as an internal one. Security Risk Advisors encourages organizations to utilize an email defense platform that monitors and quarantines malicious emails at the gateway before a user ever sees them.
- Limit Macro Functionality: If a user clicks on an attachment and opens it, Emotet will attempt to run macros or PowerShell scripts to download payloads and establish persistence. By disabling auto-enabled macros and disabling PowerShell for users who don’t need it, you limit an attacker’s ability to compromise a user’s endpoint via a malicious attachment.
- LSA Protection and Credential Guard: An attacker is going to want to escalate their privileges by gaining new credentials, oftentimes succeeding by dumping cleartext or hashed credentials stored in memory or a suspended VM. On Windows machines, you can prevent some of these credential attacks by enabling LSA Protection as well as Credential Guard within Windows 10. Both of these protections look to isolate credential processes that attackers love to exploit. While not bulletproof, both help to mitigate common credential dumping techniques.
- Monitor, Alert and Hunt: Emotet will attempt to maintain persistence by creating other services and scheduled tasks that a user would never notice. Utilize built-in Windows functionality or a third-party application to monitor for scheduled task or service creation. Conduct threat hunting exercises on the network looking for previously compromised systems.
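For the monitoring bullet above, Windows records scheduled-task creation as Security Event ID 4698 (when the relevant object-access auditing is enabled). The sketch below filters an exported JSON-lines log for that event; the field names assume a simple export format and will differ per SIEM or log shipper.

```python
import json

SCHEDULED_TASK_CREATED = 4698  # Windows Security event: "A scheduled task was created"

def scheduled_task_alerts(jsonl_events):
    """Return (task_name, user) pairs for scheduled-task-creation
    events in an exported JSON-lines Windows Security log.
    Field names (EventID, TaskName, SubjectUserName) are assumptions
    about the export format."""
    alerts = []
    for raw in jsonl_events:
        evt = json.loads(raw)
        if evt.get("EventID") == SCHEDULED_TASK_CREATED:
            alerts.append((evt.get("TaskName"), evt.get("SubjectUserName")))
    return alerts
```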
- Quarantine and Investigate: Do your best to quarantine infected hosts from the rest of your network. Perform an investigation into how the payload propagated and determine what variant it is. Depending on the variant, removing the malicious files and restarting could be enough to clean the endpoint; other variants will require more serious changes, like backup restoration and the removal of persistent registry keys, startup items, or services. Security Risk Advisors recommends an EDR solution that will quarantine devices immediately upon detection of a malware infection.
- Determine Blast Radius: Using your initial investigation, identify other hosts with similar activity and perform recovery upon those endpoints as well. Determining the vector of attack and the methods through which the attack propagated could reveal vulnerabilities in your architecture.
- Reset Credentials: After devices have been successfully segmented and reimaged, make sure to reset account passwords for the hosts and applications that were compromised. Don’t authenticate to infected systems with domain or shared local administrator credentials, as you could allow an attacker to gain further footholds in your network.
- Implement Additional Monitoring: Continue to closely monitor those endpoints and your network as a whole for indicators of compromise gathered from your investigation and quarantining. Block the suspicious IP addresses, domains, hashes, macros, and file paths that were found to execute during the attack. Seeing these still in your environment is an indicator that a new infection has occurred or the old infection persists.
Emotet isn’t going away, but that doesn’t mean you have to fear it. By practicing common sense principles regarding email and web use, a phishing campaign can be stopped before a banking trojan reaches your network.
Back in December 2018, MITRE released the first round of its evaluations of EDR tools, including Carbon Black, CounterTack, CrowdStrike, Endgame, RSA, SentinelOne, and Windows Defender. Specifically, MITRE emulated the APT3 threat group (https://attack.mitre.org/groups/G0022/) against the products and rated how well they performed.
Recently MITRE published the first phase of its “Rolling Admissions” program, which added vendors FireEye and Cybereason. Last time around (http://securityriskadvisors.com/blog/a-closer-look-at-mitre-attck-evaluation-data/), SRA scraped all the test result data from the MITRE results, and published it in a more head-to-head view, so that you could see how each vendor did against one another.
We recently updated our dataset (stored here: https://github.com/SecurityRiskAdvisors/mitreevalsdb) and have re-run some of our favorite queries to see how the new additions fared against the first wave of competitors. What did we find? Excellent performance from FireEye, and mid-pack performance from Cybereason. In any case, this is a high-level summary, and detailed results should be examined if you’re seriously considering any of these products. We tend to give the most credit to the orgs that went into the first round of this test blindly, and it seems that the ‘rolling admissions’ participants have a leg up in that they are taking an open-book test now. That being said, CrowdStrike continued its dominance in this test, even while being from the first wave of participants. Details below:
Query: select vendor, count(vendor) as total_detections from edr WHERE General = 'yes' or Specific = 'yes' group by vendor ORDER BY total_detections DESC;
If you want to recreate these results yourself, visit our GitHub page here https://github.com/SecurityRiskAdvisors/mitreevalsdb to download mitreevals.db, then load that sqlite database into a DBMS, such as the web-based system here: http://inloop.github.io/sqlite-viewer/
For more information, view the data yourself here! https://attackevals.mitre.org/evaluations.html