Understanding and Preparing for the Shift to XDR

The CyberSOC model is changing, driven by cloud adoption and improvements in detection technologies in tools like Endpoint Detection and Response (EDR). Extended Detection and Response (XDR) is the realization of these changes, putting less pressure on the SIEM to correlate complex security alerts and letting it serve instead as a single pane of glass for ticketing, alerting, and automation & response orchestration. XDR is a real opportunity to lower platform costs and improve detection, but it requires committing to a few principles that go against the established way of thinking.

 

Product-Native Detections Have Gotten Dramatically Better

Takeaway: Focus on building detections as close to the threat as possible – SIEM should be a last line of defense.

Traditionally, the SIEM was one of the only tools that could correlate and analyze raw logs and surface “Alerts” that needed to be addressed. This was largely a reflection of other defensive tools being single-purpose and generally poor at identifying issues themselves. The approach that made sense was to ship everything to the SIEM and build complex correlation rules to sort the signal from the noise. Today’s landscape has changed – consider EDR. Modern EDR is essentially SIEM on the endpoint. The same capabilities exist to write detection rules on the endpoint as existed in the SIEM, but without the need to ship every bit of telemetry into the SIEM. In addition, vendors have gotten markedly better at building and maintaining out-of-the-box rules and alerts to get you started.

During our purple team engagements we have seen a consistent, sizable decrease in detections attributed to the SIEM, while tools like EDR and NGFW have become more effective at both detection and prevention. There are exceptions. One of the few common, essential detections that still relies on the SIEM is Kerberoasting, since on-prem Active Directory offers little native coverage for it. As you move Active Directory fully to the cloud, even those detections will be handled by “edge” tools like Defender ATP.
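
To make the Kerberoasting example concrete, the classic SIEM logic keys on Windows Security Event ID 4769 with a weak (RC4) ticket encryption type. Here is a minimal, hypothetical Python sketch of that detection; the field names follow the standard Windows event schema, but the event source and threshold are assumptions you would tune to your environment:

```python
# Minimal sketch of a SIEM-style Kerberoasting detection: flag accounts
# requesting many RC4-encrypted service tickets (Windows Event ID 4769).
from collections import Counter

RC4_ETYPE = "0x17"  # RC4-HMAC; healthy environments mostly issue AES tickets
THRESHOLD = 10      # requests per account per window; tune to your data

def detect_kerberoasting(events):
    """events: iterable of dicts parsed from Windows Security log entries."""
    counts = Counter()
    for e in events:
        if e.get("EventID") != 4769:
            continue
        svc = e.get("ServiceName", "")
        # Computer accounts and krbtgt are noise; roasting targets user SPNs
        if svc.endswith("$") or svc.lower() == "krbtgt":
            continue
        if e.get("TicketEncryptionType") == RC4_ETYPE:
            counts[e.get("TargetUserName")] += 1
    return [user for user, n in counts.items() if n >= THRESHOLD]
```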

 

Your SIEM Really Doesn’t Matter

Takeaway: Having a deliberate process to consistently measure and improve your detection capabilities is far more valuable than having any specific SIEM tool on the market.

Purple teaming has allowed us to test and map how well our clients engineer their detections and alerts. We score all the results quantitatively and trend improvements over time. One clear takeaway is that the brand of SIEM you buy has no measurable correlation to purple team scores. Process, tuning, and testing are what matter. There are differences in products that can make them a better or worse fit for your environment, or make maintenance and quality of life for your SOC a bit easier, but buying a different SIEM in the hope of improving your capabilities is no longer a viable path.

 

Cloud SIEM is Incompatible with the Old-Fashioned “Log Everything” Approach

Takeaway: Every log you send to your SIEM should be directly attributed to one or more key detection capabilities you are attempting to achieve. MITRE ATT&CK is the Blue Team’s compass, but just because it maps to MITRE doesn’t mean it MATTERS. Evaluate the cost/benefit of each detection.

Nearly all the organizations we talk to have implemented or will soon implement their first generation of cloud-hosted SIEM. An intuitive goal is to maintain parity with on-premises tooling, so carrying over every log and alert to the cloud makes sense, right? Doing this without analysis leads to untenable SIEM costs. Cloud SIEM can be expensive, and not all log data is created equal – some data is key for alerting, while other data is critical to supporting an investigation. The former should go into your SIEM; the latter belongs somewhere else, like a data lake. Consider firewall ‘allow’ events. Most of us have sent these to the SIEM for years, and a large organization could be paying six figures (or more) just to index them. What is the value – a possible threat intel hit that your NGFW already missed? Why not put them in a data lake, reduce that cost by 90 percent, and use a SOAR automation to search them once a day?
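
To illustrate that last point, a scheduled SOAR job could sweep the data lake once a day for threat-intel matches instead of paying to index every allow event in the SIEM. A minimal sketch, where `query_data_lake`, `create_ticket`, and the event fields are hypothetical stand-ins for your data lake and ticketing APIs:

```python
# Hypothetical daily sweep: match firewall 'allow' events stored in a data
# lake against threat-intel indicators, raising a ticket on any hit.
def daily_intel_sweep(query_data_lake, intel_indicators, create_ticket):
    """query_data_lake and create_ticket stand in for your data lake and
    SOAR/ticketing APIs; intel_indicators is a set of known-bad IPs/domains."""
    events = query_data_lake(source="firewall", action="allow", window="now-24h")
    hits = [e for e in events
            if e.get("dest_ip") in intel_indicators
            or e.get("dest_domain") in intel_indicators]
    for hit in hits:
        create_ticket(
            title=f"Threat intel match on archived allow event: {hit.get('dest_ip')}",
            severity="medium",
            evidence=hit)
    return len(hits)
```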

 

Intelligent Data Pipeline and Data Lake is a Necessity

Takeaway: Put the work into your log data pipeline to remove all waste prior to storing that data in the most appropriate location based on its attributes.

Managing your data pipeline intelligently can have a massive impact on controlling your spend. With SRA’s XDR service, we pre-process every log and attempt to eliminate excess waste. When your primary cost driver is GB/day, consider the following example showing the before/after size of Windows AD logs.

                  Full Event Length   Number of Fields   Number of Events
Raw               3.75KB              75                 1
Optimized         1.18KB              30                 1
Realized Benefit  ↓68.48%             ↓60.00%            0.00%

 

Our average inbound event had 75 fields and a size of 3.75KB. After we removed redundant or unnecessary fields, we were left with an average log of 30 fields and a size of 1.18KB. That is a 68.48% reduction in GB/day for this log source.

While eliminating the fat in each log is great, applying the same value analysis to where you send each log is equally important. Once each log is optimized, decide whether it drives a key detection (send it to the SIEM) or exists to support investigating an alert (send it to the data lake). An intelligent data pipeline can make these routing decisions on the fly for each log and further reduce your costs.
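
As a sketch of what such a pipeline stage might look like (the field allow-list and source-to-destination mapping here are illustrative assumptions, not a prescription):

```python
# Hypothetical pipeline stage: trim waste fields from each log, then route it
# to the SIEM (it drives a detection) or the data lake (it supports
# investigation). Field and source lists are per-log-source decisions.
KEEP_FIELDS = {"timestamp", "host", "user", "event_id", "process", "dest_ip"}
DETECTION_SOURCES = {"edr", "windows_security", "auth"}  # assumption

def process_log(record, send_to_siem, send_to_lake):
    trimmed = {k: v for k, v in record.items() if k in KEEP_FIELDS}
    if record.get("source") in DETECTION_SOURCES:
        send_to_siem(trimmed)   # key detection telemetry
    else:
        send_to_lake(trimmed)   # investigation support at far cheaper storage
```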

 

EDR, SIEM & SOAR Align

Takeaway: Security automation is the future but should be approached with caution and carefully aligned with your validated detection capabilities. Not all SOAR needs to run without human involvement; initially, depend on your analysts to push the “automate” button until you’re 100% comfortable with your configuration.

We can’t talk about XDR without SOAR. The future of XDR is coupled with tightly integrated SOAR technologies and baked-in integrations for key threat-neutralization technologies, like EDR, user directories, and networking tools. XDR concepts recognize that what really matters is not how fast you can detect a threat, but how fast you can neutralize it. Overly simplified “if this, then that” SOAR automation methodologies aren’t effective in real-world scenarios. An integrated solution allows for efficient development of security automations, better ways to selectively trigger automation, and semi-automated “guided” responses to incidents. One of the best approaches we’ve seen to actualizing value in XDR automation is to:

  • Conduct a purple team to identify which current detection events are optimized (very low false positive rates) and can be trusted with an automated response.
  • Map the detection event to the automated response, but insert steps so the automation portions are initiated by a human. This will let you gain confidence before you turn it fully over to automation (a minimal sketch of such a gate follows).
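
A minimal sketch of that human-in-the-loop gate, where `isolate_host` and `request_approval` are hypothetical stand-ins for your EDR action and your SOAR’s manual-approval step:

```python
# Hypothetical guided response: the playbook proposes the containment action,
# but an analyst pushes the "automate" button until confidence is established.
def guided_response(alert, isolate_host, request_approval, fully_automated=False):
    """Flip fully_automated only after the detection is purple-team validated."""
    host = alert["host"]
    prompt = f"Isolate {host} for detection '{alert['rule']}'?"
    if fully_automated or request_approval(prompt):
        isolate_host(host)
        return "isolated"
    return "skipped by analyst"
```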

 

XDR is a buzzword, but when viewed in a technology-agnostic fashion it is based on good foundations. Where organizations are most likely to fail is in trying to “do it all” – applying legacy SIEM management philosophies to modern XDR platforms. Follow our key takeaways as your program design philosophies and you will likely improve your capabilities AND reduce your costs.

 

Ransomware Resilience – Interview with Tim Wainwright Presented by TAG Cyber

Tim Wainwright, CEO of SRA, chats with TAG Cyber CEO Ed Amoroso about SRA’s Ransomware Resilience Assessment services. Tim also shares his valuable insights into the state of modern cyber threats to the enterprise.

Ransomware Resilience Assessment

SRA reviews your ransomware incident response plan and conducts interviews to build an understanding of your processes. We then perform a technical review of prevent, detect, respond, and restore capabilities as they are designed to manage ransomware risk.

Learn more about this service here!

Get Started!

Let us know if you would like us to provide ransomware resilience services for you by completing the contact form.

Malware Analysis: A General Approach

TL;DR

Malware analysis has many benefits for organizations and their defenders; however, most organizations do not have processes defined for performing it. This post walks through the questions that malware analysis can answer and defines an approach for getting started.

 

Malware Analysis Overview

According to the 2020 Verizon Data Breach Investigations Report (DBIR), phishing attacks involving malware are one of the top two threats organizations face [1]. Malware, in a general sense, can be defined as code that is used to perform malicious actions. For organizations and defenders, performing malware analysis, even at a cursory level, can help answer questions and enhance defensive capabilities. Performing malware analysis on a regular basis allows an organization to:

  • Assess current threats to the organization
  • Determine the potential scope of an incident
  • Determine threat-specific remediation tasks
  • Improve the ability of teams to handle incidents
  • Improve system and network based defensive security
    • new alerts
    • new blocks
    • new monitoring rules
    • and more
  • Develop new and/or updated threat hunting campaigns
  • Enhance purple team engagements
    • new attack emulations
    • new attack scenarios

While the pros of performing malware analysis often outweigh the cons, many organizations still struggle to understand suspicious artifacts identified during the incident response process. For many organizations, the analysis of a malicious artifact looks like the following:

An analyst receives an alert involving a suspicious file. The file is uploaded to VirusTotal for analysis. If fewer than five engines flag the file as malicious, the alert is closed as a false positive. If five or more engines flag the file as a generic trojan, the analyst makes a note, removes the file, closes the alert ticket, and moves on to the next alert.

Sandboxes, especially public cloud sandboxes, are frequently used by many organizations and defenders and are treated as the authority on whether an artifact is benign or malicious. While this approach may be enough to get by for daily triage, it provides no insight into the capabilities of a particular artifact or the threat actor targeting the organization. We can do better.

Threat actors are constantly adapting, evolving, and looking for new opportunities to circumvent defensive and detection mechanisms. If we are not keeping up with the current threat landscape or keeping our fingers on the pulse of new threats, we are missing an opportunity to build our threat intelligence and take a strategic approach to defensive security. One of the most common challenges for defenders is knowing where and how to get started.

 

Getting Started with Malware Analysis

Developing processes and skills over time is a fantastic way to introduce malware analysis as a new capability. When beginning your journey, consider starting with phishing attacks. They are a persistent threat to all organizations and use a wide variety of techniques, often combined, to try to gain access to an organization. Being proactive and analyzing a few samples every week is a promising place to start. Some questions to answer when analyzing phishing attacks include:

  • What indicators of compromise can be identified in the email?
    • Are there links in the email? If so, what domains, URLs, IP addresses are used?
  • Do email attachments contain malicious code or objects?
    • See resources below for tools that can be used to perform this analysis
  • What is the goal of the phish?
    • Is the goal of the phish to steal credentials?
    • Is the goal of the phish to have the user download something?
      • Is the download link live?
        • Safely download the file to a VM
        • Get a hash of the file, e.g., sha256
        • Submit the hash to a sandbox (a minimal sketch of these last two steps follows this list)
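
As an example of those last two steps, the sketch below hashes a downloaded sample and queries VirusTotal for an existing report by hash only; submitting just the hash (rather than uploading the file) also supports the operational security guidance in the next section. The v3 file-report endpoint is VirusTotal’s documented API; the API key and how you act on the verdict are assumptions left to you:

```python
# Sketch of the hash-and-lookup steps. Querying by hash avoids uploading the
# file, which preserves operational security (see the next section).
import hashlib
import requests

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def lookup_hash(file_hash, api_key):
    """Fetch an existing VirusTotal report for a hash (v3 file endpoint)."""
    r = requests.get(
        f"https://www.virustotal.com/api/v3/files/{file_hash}",
        headers={"x-apikey": api_key})
    if r.status_code == 404:
        return None  # unknown hash; deeper analysis may be warranted
    r.raise_for_status()
    return r.json()["data"]["attributes"]["last_analysis_stats"]

# Example (hypothetical key): lookup_hash(sha256_of("sample.bin"), "YOUR-KEY")
```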

Answering these questions will help build a profile of the attack and understand the potential scope if the message was successfully delivered. This information can then be used to start building a profile of the threat actor. This information should be saved and can serve several different purposes:

  • Gain insights into existing threats to the organization
  • IOCs can be searched for through the environment
  • Improve system and network security
  • Used to develop threat hunts
  • Correlate data with future attacks

 

Operational Security

Operational security is crucial when performing malware analysis. Threat actors can monitor public sandboxes, sites used to distribute malware, etc. Doing so lets them know when an analyst is researching the threat, which allows them to quickly pivot, change tactics, and continue operations. This is the typical cat-and-mouse game defenders and attackers play. The following precautions can help hide the activity and identity of analysts.

  • Gather information about the host system and network the malware was targeting. This may become important if the malware is using environmental keying to restrict execution
  • Do not upload files to public sandboxes
  • Use private sandboxes when possible
  • Use a private VPN service when interacting with suspicious websites (not your org’s VPN)
  • Perform analysis inside virtual machines with networking set to host-only
  • A spare “bare metal” machine (disable networking or isolate on an air gapped network) for running malware can be useful if you identify or suspect the malware to have anti-virtualization protections.

Tor can also be used during investigations. However, it is important to note that threat actors can and do monitor network traffic. Threat actors may also implement defensive security measures enabling them to be alerted when their infrastructure is being accessed by someone performing research. This applies to VPN use as well; detecting Tor use, however, is very easy to do.

 

Prioritize Analysis

Priorities and goals become more important to set as the malware analysis program matures. Begin with simple malware analysis techniques and work towards more complex techniques over time. Analysis techniques are frequently intertwined and repeated during an investigation.

  • Automated analysis
    • Private sandboxes
    • Public sandboxes
  • Static analysis
    • Identify embedded strings (see the sketch after this list)
    • Identify embedded objects
    • Identify file metadata and structure
  • Dynamic analysis
    • Interactive behavior
    • Run the malware in an isolated lab
    • Run the malware in an interactive debugger
  • Manual code reverse engineering
    • Analysis of disassembled code
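
As a starting point for the embedded-strings bullet above, extraction requires nothing more than the standard library. A minimal sketch, similar to the classic `strings` utility:

```python
# Minimal static-analysis starter: pull printable ASCII strings out of a
# sample, similar to the classic `strings` utility.
import re
import sys

def extract_strings(path, min_len=6):
    with open(path, "rb") as f:
        data = f.read()
    pattern = rb"[\x20-\x7e]{%d,}" % min_len  # runs of printable ASCII
    return [s.decode("ascii") for s in re.findall(pattern, data)]

if __name__ == "__main__":
    # Usage: python extract_strings.py sample.bin
    for s in extract_strings(sys.argv[1]):
        print(s)  # URLs, IPs, registry keys, and mutex names often stand out
```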

 

Malware Analysis Goals

Setting goals before an investigation will keep the team focused and will drive the analysis process. Some goals to help get you started are:

  • Determine if the artifact is malicious
  • Determine the family/type of malware
  • Identify indicators of compromise (IOCs): file hashes, domain names, IP addresses, URLs, etc.
  • Identify actions and behaviors: tactics, techniques, and procedures (TTPs)
  • Track all findings and identify trends over time

 

Summary

Malware analysis is not an elusive process that organizations should ignore. Quite the opposite is true. Modest beginnings can lead to remarkable things. Malware analysis adds strategic, tactical, and operational value to defensive security operations.

 

Resources

SolarWinds Breach: How do we stop this from happening again?

The SolarWinds breach is perhaps one of the worst, if not the worst, public hacking events in history. Much has been written on what happened, and I’m not going to regurgitate those details. There is inestimable complexity ahead for CISOs as they try to identify the extent of their compromise and then gain confidence that the threat actors are out of their environment.

Even after the dust settles from the SolarWinds breach, the fact remains that there is nothing preventing a similar attack in the future. Supplier/third-party risk assessment questionnaires and vuln scanning didn’t help here (we’ve talked about this in our Cyber Kumite series) and they won’t help next time either, but you can be sure that over the coming months you will see hundreds of emails from vendors telling you how their product could have stopped these attacks. From next-gen-next-gen EDR to Zero Trust widgets, they all will claim to solve these problems, but they won’t. So, how do we as cyber defenders do something about this?

 

Defending against future attacks like SolarWinds

There is a straightforward solution that has been available for many years, but very few put in the time and effort necessary to do it properly. The answer is to explicitly allow only approved data center connections out to the Internet – an “allow list”. Whether you are worried about Russian hackers compromising your vendor products or a lazy system admin browsing sketchy sites from a server they are doing maintenance on, this solution goes a long way toward undoing bad decisions and containing uncontrollable trojans. Your data center assets are your most critical resources; allowing them access to most of the Internet is head-scratching. Servers are designed to be single-purpose devices, and the scope of Internet access a server needs should be minimal, quick to map, and easy for administrators to review. This type of configuration prevents any traffic (including C2) to domains that aren’t explicitly allowed. There are many ways to do this, varying from cheap to expensive. A few solutions:

  • DNS Security – establish system-based DNS security policies that will prevent all domains from resolving unless explicitly allowed. Tools like Cisco Umbrella, Infoblox DNS Firewall, and many others can get the job done.  The downside is that many of these solutions will do nothing about direct IP connections going outbound.  There will likely be a cost to address that.
  • Border Firewall – using your border firewall to limit the destinations allowed from any of your data center network segments. You’ll likely need several profiles to address different types of computing needs, such as VDI etc.  Chances are you own the tools to do this today.
  • Forward Proxy – the true “free” method: set up a forward proxy gateway, such as Apache or NGINX, to create a choke point for outbound network traffic. An allow list can be created and managed to ensure that you’re only granting access to sites you trust.  This should be entirely free and effective.

One of the keys to success is to develop processes so you can maintain these allow lists efficiently and they don’t slow down your day-to-day IT operations.  The up-front effort is to safely model the network flows of new systems that need public Internet access prior to deploying them.  Also, plan for your system updates and patching needs.  Solutions like SCCM for Microsoft systems and Red Hat Satellite for Linux have you covered and keep your systems from talking directly to the Internet.
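
To make the up-front modeling concrete, a small script can summarize a new system’s observed outbound destinations from exported flow or proxy logs and draft the initial allow list for review. A sketch, assuming a hypothetical CSV export with `src`, `dest_host`, and `port` columns:

```python
# Hypothetical helper: summarize the outbound destinations observed for a new
# system so administrators can review and seed its explicit allow list.
import csv
from collections import Counter

def draft_allow_list(flow_csv_path, server_ip):
    dests = Counter()
    with open(flow_csv_path, newline="") as f:
        for row in csv.DictReader(f):  # expects src, dest_host, port columns
            if row["src"] == server_ip:
                dests[(row["dest_host"], row["port"])] += 1
    # High-count, well-known destinations (patch repos, update CDNs) get
    # reviewed and allowed; everything else stays denied by default.
    for (host, port), count in dests.most_common():
        print(f"{host}:{port}\t{count} connections")

# Example: draft_allow_list("flows.csv", "10.0.5.20")
```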

 

Do this before you buy a new tool

The impact of the SolarWinds hack is wide-ranging and will create an enormous amount of cleanup for many organizations.  One of the best defense-in-depth solutions to this problem has been around for decades.  Take a hard look at this approach before committing to new products and technologies.  “Never let a good crisis go to waste” is a common mantra in the security world, so this time around, maybe we should all be looking for an extra network security resource to run this process rather than a tool that will let you down.

 

User Data Leaks via GIFs in Messaging Apps

An investigation into how Teams, Discord, and Signal handle Giphy integrations

When everyone is working from home, a well-timed GIF sent to coworkers can lighten the mood.  So, when I arrived at SRA and found that they had disabled Teams’ Giphy search feature, I was disappointed. I asked a colleague why it was disabled and was told “we thought it could be a privacy issue.” Determined to prove that GIFs were harmless, I started examining the Teams/Giphy interaction. My findings led me to investigate how two other popular applications, Discord and Signal, handle user privacy with Giphy searches. Unfortunately, I found that it’s easy to leak user information when searching for the perfect GIF.

 

 

Summary

Here is a brief overview of the platforms investigated:

  • Teams – owned by Microsoft; per-user licensing with Microsoft 365; targets primarily business and education customers
  • Discord – owned by Discord Inc.; free with premium options; targets “anyone who could use a place to talk with their friends and communities”
  • Signal – owned by the Signal Technology Foundation (non-profit); free; targets people who like privacy

For actions performed on each platform, the following information is disclosed to Giphy:

  • Search results – Teams: search term + ??; Discord: search term + ??; Signal: search term
  • Search previews – Teams: IP address, tracking token; Discord: IP address, Discord channel, tracking token; Signal: tracking token
  • Loading GIF messages – Teams: IP address, tracking token; Discord: none; Signal: tracking token

 

How Giphy Tracks Users

Here is an example URL returned from the Giphy Search API:

https://media0.giphy.com/media/Ju7l5y9osyymQ/giphy.gif?cid=de9bf95evmdivzh16orm7svyp9ticugu4abuyc3ty2df5y9i&rid=giphy.GIF

We’re going to focus on the “cid” parameter, which appears to be an analytics token. It is unmentioned in Giphy’s API documentation. Here is what I have found out about the cid parameter:

  • When you make a search request, every GIF returned will have the same cid
  • The first 8 characters (“app id”) are consistent across every search made with the same API token. For example:
    • Teams searches start with “de9bf95e”
    • Discord searches start with “73b8f7b1”
    • Signal (on Android) searches start with “c95d8580”
  • The remaining 40 characters (“search id”) vary based on the following factors
    • Search string
    • Number of results requested
    • Results offset
    • Results rating
    • Geographic region (not simply IP address)
    • Time (duration unknown)

For example, take a search for “cats”.  Every image returned in the results will have the same search id.  If the same API key is used, subsequent searches from the same host will return the same search id, although this does not occur indefinitely; the search id changes occasionally with time. A friend down the street in the same region but with a different IP would also get the same search id if they used the same API key. However, use of a proxy to simulate requests from other countries confirmed that the search id does change with enough distance.

Since the cid parameter is returned in search results, it tends to be “sticky”. Unless it’s explicitly stripped out by someone, it will be included wherever you send the GIF URL and will persist in the message history.
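
For platforms (or users) that want to strip the tracking parameters before a GIF URL is sent or stored, the fix is straightforward. A minimal sketch using only the Python standard library; the parameter names to drop are based on the observations above:

```python
# Strip Giphy's tracking parameters ("cid" analytics token and "rid")
# from a GIF URL before sharing or storing it.
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

TRACKING_PARAMS = {"cid", "rid"}

def strip_tracking(url):
    parts = urlsplit(url)
    clean_query = urlencode(
        [(k, v) for k, v in parse_qsl(parts.query)
         if k not in TRACKING_PARAMS])
    return urlunsplit(parts._replace(query=clean_query))

url = ("https://media0.giphy.com/media/Ju7l5y9osyymQ/giphy.gif"
       "?cid=de9bf95evmdivzh16orm7svyp9ticugu4abuyc3ty2df5y9i&rid=giphy.GIF")
print(strip_tracking(url))
# -> https://media0.giphy.com/media/Ju7l5y9osyymQ/giphy.gif
```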

 

Signal

Signal has documented its quest for user privacy with GIF searches in two blog posts: here and here. According to their blog posts and a review of their apps’ code on GitHub, here is how Signal apps interact with Giphy:

 

 

In the flow above, the user never communicates directly with Giphy’s servers; all communication is tunneled through Signal’s servers first. The URLs all still have the cid tracking parameter, but since no other data such as the user’s IP is ever revealed, the parameter has no effect on user privacy. Additionally, as mentioned in their blog posts, Signal ensures that the Signal servers can’t see any traffic passing through, giving you near-complete privacy.

 

How can Signal improve user privacy?

Signal has put a lot of thought into how to serve GIFs privately and it shows. There are no obvious privacy improvements they could make to the user flow. If Signal wanted to go above and beyond, they could make fake search and download requests from their servers to hide overall trends in user activity hours, but that has marginal benefits for a lot of extra bandwidth.

 

Discord

On Discord, the main built-in GIF picker widget uses Tenor, not Giphy. However, there is a built-in /giphy command that allows users to search and send GIFs using Giphy, which we will investigate. Here is a diagram of the requests made during a Giphy search on Discord as mapped using Burp:

Note that some of these requests can be WebSocket messages under normal circumstances, but I represent them all as synchronous requests for simplicity.

 

 

As you can see, the only time the user connects directly to the Giphy servers is in step 6, when previews of all the GIFs are fetched during a user’s search. This is a screenshot of that preview request:

 

 

By making a request directly to Giphy, your IP address is exposed. The request also includes two other pieces of trackable information: the cid parameter, and a referer header. As discussed above, the cid parameter is based on your original search request, so including this parameter could allow Giphy to match your IP with your search requests even though the original search request went through Discord’s servers first. However, the larger data leak is the referer header. It includes the exact channel URL that you are currently talking in, whether it’s a public server or a DM. If you and your friends like to send GIFs to each other in servers and DMs, then Giphy could use this information to build a map of who you talk to, who is in which channels, etc.

Besides the preview requests, everything else in the flow respects your privacy. The search requests and the retrieval of full-size GIFs in messages are proxied through Discord’s servers. When a user sends a message with an embedded GIF, Discord appears to use the Giphy API to retrieve the full-resolution URLs. This is independent of any search request so the cid tracking parameter is rendered useless.

Additionally, when Discord reaches out to 3rd party servers to retrieve media, the request is very clean and does not include any tracking information:

 

How can Discord improve user privacy?

  • Strip the cid parameter from preview URLs before returning search results
  • Do not send referer headers when fetching GIF previews (or on any external request really)
  • Proxy GIF preview requests through the Discord servers like other external media

 

Teams

Teams includes an on-by-default Giphy search feature that can be disabled at an organizational level. Here is a diagram of the requests made during a Giphy search on Teams as mapped using Burp:

 

 

The user connects directly to Giphy’s servers in steps 6, 8, and 14. This, combined with the cid tracking token, could lead to a large amount of data being leaked. Say, for example, you have a private message with your friends at work. You search for a GIF that relates to a specific inside joke you share and then send it to the group. Now, thanks to the cid token, when each of those people downloads your attached GIF, Giphy can identify which IP addresses belong to the people you’re talking to. Facebook (Giphy’s parent company) could then use this information along with its data collection from other sites to build a profile of who you talk to at work, what you talk about, how you feel, etc.

Teams also allows custom search extensions. I was curious if I could make one to search Giphy more privately, so I built a prototype. Here is a diagram of requests made by a normal 3rd party extension in Teams:

 

 

With Teams 3rd-party extensions, the user never directly interacts with media servers because in step 4, Microsoft turns all returned URLs to proxied URLs. This means that Microsoft treats their 1st party feature differently from normal 3rd party extensions. Even more surprising is that Giphy URLs are explicitly exempted from proxying. For my 3rd party Giphy extension clone, when I return raw Giphy URLs, there is no proxying applied. But when I return my custom domain, the URLs are proxied.

Here is when my custom extension returns raw Giphy URLs. There is no proxying.

 

 

And here is when it returns my own domain instead of Giphy. The URLs have been changed to proxied URLs.

 

 

It appears that Giphy has been given a free pass by Microsoft to bypass their image proxying altogether whether it’s from their built-in features or a third-party extension.

 

How can Teams improve user privacy?

  • Strip the cid parameter from returned Giphy URLs
  • Treat Giphy like a normal search extension and proxy their images

 

Conclusions

Everyone likes GIFs. Everyone wants privacy. But combining the two is no easy feat. Unintentional leaks of user data will occur unless you design applications with user privacy in mind from the beginning.

Discord and Teams both opaquely proxy the initial search request to Giphy. There is no way for us to know what additional information they are sending to Giphy when that request is forwarded.  The Giphy API includes an optional “random_id” parameter that can be sent with search requests to personalize search results per user. I hope that Discord and Teams are not using these parameters and aren’t sending any additional information in headers.

Discord and Teams also have web clients. These can leak even more information because of cookies that Giphy may have set in your browser.

Signal, living up to its privacy-focused mission, has implemented a GIF-searching solution that preserves user privacy without sacrificing usability.

Discord proxies almost all requests, but the GIF previews still leak quite a lot of information that can be used to paint a picture of user habits and who they’re talking to.

Teams explicitly bypasses their proxies for Giphy media requests, despite already having a privacy-preserving flow for normal 3rd party integrations.

So, be careful where you search for cat GIFs. You never know who could be watching.

 

Getting Specific with Ransomware Preparedness

Most industry ransomware guidance is focused on SMB protections for commodity malware that exploits low-hanging fruit via worming and trashing share drives and document folders. “Have good backups” is still good advice, but there is much more we can do and with more specificity.

Major industry ransomware attacks resulting in catastrophic operational impacts are often due to privileged accounts being compromised and abused to issue broad network instructions to deploy encryption tools throughout the environment. Many technologies include controls to limit privileged accounts, but those privileges are frequently highly consolidated, so the organization is still at risk. Attackers are increasingly destroying systems and/or exfiltrating sensitive data to further extort payments.

The CL0P group is known for several ransomware attacks in the past year, most notably their attack against ExecuPharm, where they stole and destroyed data. The attack against Cognizant is a prominent example of how crippling a ransomware attack can be on an organization. This incident resulted in significant financial impacts, with estimated losses between $50M and $70M, along with difficult-to-quantify reputation damage.

Organizations need more specific action to help prevent these types of losses. This post aims to outline specific technology and process controls to improve our detection, prevention, and preparedness for ransomware attacks.

 

Backup and Storage Teams

  1. Use vendor-provided hardening guides for backup systems.
    1. An example of a guide can be found here, https://www.netapp.com/us/media/tr-4572.pdf.
  2. Guides include access controls for the backup system.
    1. Some platforms can generate SIEM alerts when backup routines have changed or are deleted.
  3. Settings can affect MFA capabilities.
    1. Some backup systems can also generate MFA challenges within the application when key settings are changed.
  4. Use snapshots and immutable snap lock solutions.
    1. In production servers, set snapshot volumes to a high frequency to capture changes on a frequent basis. Publishing snapshot directories for your organization ahead of time can enable self-service rollbacks at scale when safely using snap lock settings. Educate your organization ahead of time on how to use snapshots and restoration procedures.
  5. Domain-joined backup systems can fall in a ransomware attack.
    1. Consider having backup systems and their storage unit controllers not joined to the domain to insulate them.
  6. Use offline backups as a secondary backup mechanism.
    1. Since online backups can become infected, creating offline backups provides an additional layer by effectively segmenting backups of critical systems and data to avoid corruption during a ransomware event.
  7. Data management tools can aid resilience against ransomware.
    1. Tools such as Varonis can be effective in assisting with resilience against ransomware attacks. Their ransomware modules detect changes and keep file versioning in place for rollback.
  8. Exercise recovery procedures and have a plan.
    1. Maintain a list of the priority of systems and test restore speed from backup. Have a plan to rebuild workstations at scale and increase IT Operations support.
  9. Inform staff how to safely restore their files at scale.
    1. Socialize self-service features such as viewable snapshots or Microsoft OneDrive restores. Prepare your helpdesk and support teams to assist during a cybersecurity event.

 

Applications Management and Change Management

MFA…enough said, right? MFA must be used consistently and more effectively. Do not forget to disable basic authentication, which can be used to bypass MFA. Consider implementing conditional access to further restrict access in O365.

  1. Privileged access management is a must-have.
    • Passwords need to be one-time, have a short lifespan, and domain admins should never hold passwords on their computers or their clipboards.
    • Require a secrets manager such as HashiCorp Vault and fully segment use of privileged accounts within applications and containers. Never hard-code secrets into source code or share them in a non-secure manner. Do not use or share accounts across different tasks (i.e. TSGOps). Create new accounts for each task with least-privilege roles.
    • Require MFA prior to accessing Domain Admin, DBO, or Enterprise Admin accounts. Enforce it vigorously on all cloud CSP privileged accounts and restrict access to accountable platform teams rather than developers. Beware of DA-like privileges and use account discovery tools to look for non-DA accounts that may have widespread local admin rights. Privileged access management tools have discovery capabilities to help identify privileged Windows credentials.
    • Microsoft’s segmentation of administrative rights with the PAW model can be effective against ransomware by using privileged access workstations for administrators and special accounts. These PAW kiosk workstations should not be allowed to use the Internet, receive email, or be pingable from any other devices on the network. Your PAM tool should not be accessible except from this PAW network (the clean source principle), and likewise, your servers and systems management interfaces should not be accessible except from your PAM platform.
    • Use advanced PAM features to examine and manage application context privileges (running as admin) as well as system to system accounts and interfaces.
    • In addition to tightly managing DA privileges, vault and rotate local admin credentials including built-in admin accounts. This can be accomplished through CyberArk. Microsoft also provides a free solution (LAPS).
  2. Review UNIX reliance on NFS and consider how quickly immutable snapshots can restore service.
    1. Maintaining backups is a fundamental strategy intended to support operational continuity. However, without the proper considerations a ransomware event can render your backups useless.
  3. Review your change management strategy.
    1. Change management based on the clean source strategy and a Red Forest/ESAE-style architecture with well-thought-out trust levels makes ransomware compromise significantly more challenging for an attacker while increasing your operational resilience. The ESAE Active Directory architecture introduces segmentation to isolate privileged credentials. Additional details around this architecture and an enhanced change management strategy can be found here, https://docs.microsoft.com/en-us/windows-server/identity/securing-privileged-access/securing-privileged-access-reference-material.

 

Finance, Legal, and Compliance Teams

  1. Include ransomware as a scenario in your next incident response tabletop exercise.
    1. We recommend that you have your cyber insurance, retained forensics, and retained legal counsel vendors defined and rehearsed with a tabletop. Make sure your scenario also includes corporate communications to test your communications plan.
  2. Cybersecurity and senior management, along with key control functions (legal, finance, technology, security), must establish a defined payment procedure ahead of time.
    1. This process should prepare for financial payment approvals and decisions on how to involve law enforcement. Reinforcing payment processes through a 3rd party vendor capable of making the payments, if determined appropriate, can provide key experience to negotiate or de-risk the situation. Verify their crypto wallets with your finance and AML/KYC process.

 

SOC Team

  1. Create and tune alerts in your SIEM to monitor the following:
    • Privileged account use
    • File creation with ransomware extensions, including but not limited to, .crypto, .kratos, .ecc, .exx, .encrypted, .locked, .locky, .wcry, or an extension with random characters
    • Detection of a single process writing to many files
    • Use of PsExec, Mimikatz, Cobalt Strike, and Bloodhound
    • Use of encoded PowerShell commands
    • Ransomware IOCs from threat intel sources
  2. Perform attack simulations to model attacker TTPs.
    1. Regularly perform attack simulations, including Purple Teams, to model attacker TTPs and evaluate the effectiveness of your controls in detecting activities that could lead to a compromise of admin credentials – Pass the Hash, Kerberoasting, retrieving the Active Directory database, etc.
  3. Data Access Monitoring tools can help detect ransomware attacks
    1. When correctly configured, these tools can drive an aggressive response process for anomalous file browsing, such as when attackers try to exfiltrate data to create greater leverage. Detection can be based on file classification for targeted monitoring of sensitive data, or applied more broadly.
  4. Monitor for less common IOCs and terminate processes.
    1. Monitor for common IOCs associated with known ransomware campaigns but also for specific IOCs such as ransom notes that might include Bitcoin addresses, TOR URLs, etc. Use EDR automation to terminate processes that try to create these items.
  5. Turn on prevention mode in EDR.
    1. An effective EDR toolset is invaluable when it comes to protecting against ransomware attacks. Work with IT Ops to identify needs for encoded PowerShell within the environment and develop an allow list on an exception basis.
  6. Automate PsKill to neutralize PsExec threats.
    1. PsExec is the most commonly used mechanism for distributing ransomware across a network. Automate the opposite command, PsKill, as a scripted way to neutralize PsExec threats (a minimal sketch follows this list). Many response teams have not learned to do this yet.
      • Consider disabling remote execution through PsExec where possible and only allow it explicitly. Regardless, make sure your EDR is configured to detect PsExec usage.
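
As a sketch of that automation, the responder below shells out to Sysinternals PsKill to terminate the PSEXESVC service process on a host flagged by an alert. The alert shape is hypothetical, and the sketch assumes pskill.exe is on the PATH and the responder account has admin rights on the target:

```python
# Hypothetical responder: when an alert shows PsExec activity on a host,
# terminate the PSEXESVC service process with Sysinternals PsKill.
import subprocess

def neutralize_psexec(alert):
    host = alert["host"]  # hypothetical alert shape from your SIEM/EDR
    # -accepteula suppresses the interactive EULA prompt on first run
    result = subprocess.run(
        ["pskill.exe", "-accepteula", rf"\\{host}", "psexesvc"],
        capture_output=True, text=True, timeout=60)
    return result.returncode == 0
```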