Project SHADOWSTAR: A Data Driven Approach to Network Block Enumeration (Part 2)

TL;DR

In Part 1, we discussed network blocks, Internet registries, and standard network block enumeration methods for penetration tests and red teams. In this post, we’ll walk through the release of the SHADOWSTAR tool and explore how it can help teams perform network block enumeration significantly faster and more thoroughly.

All the source code and documentation for SHADOWSTAR can be found on our GitHub: https://github.com/SecurityRiskAdvisors/SHADOWSTAR.[1]

 

Introduction to SHADOWSTAR

Recall that in Part 1, we discussed Regional Internet Registry (RIR) data and how it can be used to help identify target network blocks. We also discussed a lesser-known data source called Internet Routing Registry (IRR) data and why it is useful as a complementary data source to RIR data. At the time of this blog post, there are several tools for querying RIR data; however, we were unaware of any that query IRR data. Hence SHADOWSTAR was born.

The objectives of the SHADOWSTAR tool are as follows:

  1. Provide the best possible interface for discovering network blocks belonging to an organization.
  2. Provide a secure and simplified interface that can be shared with a team of operators.
  3. Automatically perform database updates and abstract away the complications from ingesting RIR/IRR data simultaneously.

 

 

SHADOWSTAR Architecture

Below is a simple block diagram which shows the key components of the SHADOWSTAR architecture. There is a web application hosted in S3 that is connected to a REST API which calls the Athena service and returns results based on keyword searches.

For auto-updating there is an AWS Fargate task which is invoked at a regular interval by a CloudWatch rule. This task periodically updates the data from the various RIR/IRR sources enumerated in Part 1.

You’ll need to wait about an hour for the collection and parsing to take place before the system will have any data that can be queried. Once that’s done, you’re ready to go!

The first thing you’ll notice about SHADOWSTAR relative to other enumeration methods is that it is extremely fast. This is due to its offline, data-driven approach, which uses the Athena service and a custom data parser to project the disparate data into a simple unified data model:

SHADOWSTAR produced 36,390 network blocks in less than 10 seconds
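For the curious, a keyword search against that unified model boils down to a single Athena query. Below is a rough sketch using boto3 rather than SHADOWSTAR’s actual code; the database, table, and column names are hypothetical placeholders:

import boto3

athena = boto3.client("athena")
response = athena.start_query_execution(
    QueryString="SELECT cidr, source, descr FROM blocks WHERE lower(descr) LIKE '%spotify%'",
    QueryExecutionContext={"Database": "shadowstar"},                     # hypothetical database name
    ResultConfiguration={"OutputLocation": "s3://my-results-bucket/athena/"},
)
# Poll get_query_execution() with this ID, then page through get_query_results().
print(response["QueryExecutionId"])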

 

CIDR Reduction and the Overlapping Network Blocks Problem

One curious feature of SHADOWSTAR is the checkbox that says “CIDR Reduce Upon Save”; this deserves some explanation.

Notice that when we export our results from above using the save button, the number of effective blocks is not 36,390. There are only 5,338! So where did our blocks go!?

SHADOWSTAR data after it has been reduced

The quick answer is that SHADOWSTAR is intelligent enough to know when blocks overlap, and it performs an algorithm called CIDR Reduction. We essentially optimize the result set down to the minimal spanning set of CIDR blocks that encompass the same logical span as the original set. The following diagram may be helpful to explain what’s happening:

Example of overlapping network blocks

Given a setup like the one above, SHADOWSTAR would select only the “3.128.0.0/15” network block since it encompasses all the others and thus it is pointless to keep the child blocks.

This inference is almost always valid because these overlapping blocks were collected from the same keyword search, meaning it is extremely likely that you will only need the largest one. However, if you prefer to keep the entire raw result set, you can simply uncheck the box.

Note: because we consume IRR data in conjunction with RIR data, overlaps are a very common problem. A non-reduced set of CIDR blocks is also undesirable, since it leads to confusion and often to duplicated work.
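We won’t reproduce SHADOWSTAR’s implementation here, but conceptually the reduction is equivalent to collapsing overlapping (and adjacent) networks, which Python’s standard library can demonstrate in a few lines:

import ipaddress

# collapse_addresses() requires that all networks be the same IP version.
blocks = ["3.128.0.0/15", "3.128.0.0/17", "3.128.128.0/17", "3.129.0.0/16"]
reduced = ipaddress.collapse_addresses(ipaddress.ip_network(b) for b in blocks)
print([str(net) for net in reduced])   # ['3.128.0.0/15']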

 

The Power of IRR Data

We spoke in Part 1 about our relative success with using IRR data, and we think it’s worth highlighting an example. Consider a company like Spotify, Inc. Let’s observe what happens if we select only IRR data sources for our keyword search:

Example of searching for Spotify blocks only in IRR data sources

Notice that there are 2 results (identical in this case), and they both came from the “LEVEL3” (CenturyLink) source. Recall that these objects are not actual network blocks, they are routes; however, as explained in Part 1, it is often possible to blur the line between routes and network blocks and treat them as one and the same.

Specifically, in this example, the results become highly interesting when examined within the RIR responsible for managing them (RIPE-NCC). You see this:

Network block WHOIS information from RIPE-NCC.

In the image above, we can see that there is no mention of Spotify anywhere! Our current hypothesis is that this block has been sub-allocated to Spotify by what appears to be an ISP based out of Sweden[2], and that the IRR route refers to this sub-allocation.

OK, cool. But how do we know that this is related to Spotify? As mentioned in Part 1, validating IRR data is critical because the information is not authoritative. We can try several methods to gather evidence to support or disprove our hypothesis.

A personal favorite of ours is to use tools like Shodan[3], Censys[4], and ZoomEye[5] to check if there are live hosts within that range that indicate a positive association to the organization in question. In this case, there is one interesting host within this range (or at least there was one in 2018):

The host “pub0-2.itvpn00.ash.spotify.net” appears to have been hosting an SSL VPN back in 2018. This lends evidence to our hypothesis that this IRR route object can be considered a CIDR block for Spotify. Of course, your mileage may vary here, but we’ve had great success in identifying unique systems and services from this IRR data. As always, confirm with clients before assuming a particular block belongs to them.
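To illustrate that validation step, here is a hedged sketch using the official shodan Python library; the API key and CIDR below are placeholders rather than the actual block from this example:

import shodan

api = shodan.Shodan("SHODAN_API_KEY")                   # placeholder API key
results = api.search("net:203.0.113.0/24 port:443")     # placeholder candidate block
for match in results["matches"]:
    # Hostnames like *.spotify.net inside the range would support the hypothesis.
    print(match["ip_str"], match.get("port"), match.get("hostnames"))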

 

Conclusions

In our first post we covered a lot of content related to network block enumeration. In this post we covered how we adapted our process at SRA to improve our visibility into RIRs we have historically not been able to operationalize, showed the practical benefits of including IRR data alongside RIR data, and released our tool that can help you level up your recon game.

We hope this post was informative and that you have been convinced to give SHADOWSTAR a try – we think that if you try it, you’ll never want to do network block enumeration any other way. Cheers!

 

References

  1. https://github.com/SecurityRiskAdvisors/SHADOWSTAR
  2. https://ip-osteraker.se/
  3. https://www.shodan.io
  4. https://censys.io/
  5. https://www.zoomeye.org

 

Project SHADOWSTAR: A Data Driven Approach to Network Block Enumeration (Part 1)

Reconnaissance (recon) is a critical yet often underserved area in information security. For most, recon simply doesn’t have the same allure as its cousins: enumeration, exploitation, escalation, and so on, and thus it often doesn’t get the attention it deserves.

 

TL;DR

In this post, we put recon at center stage and discuss:

  • Network block enumeration and why it matters.
  • The network protocols, Internet history, and enumeration methodologies that are commonly employed.
  • Some of the pitfalls we’ve encountered with these existing protocols and techniques.

In the follow-up to this post (coming soon), we’ll show you how we’ve leveled up our recon game at SRA by automating this process and taking advantage of these techniques. We’ll also walk through the release of SHADOWSTAR so you can do the same.

 

Introduction to Network Blocks

Network blocks are a fundamental Internet resource that many organizations own. Every network block corresponds to an IPv4 or IPv6 range that is assigned to a particular entity. They can range in size from very large blocks of tens of thousands of IPs, such as 108.177.0.0/16, down to a single IP, such as 108.177.16.19/32. An entity can own multiple blocks, and sometimes a single block can be owned by multiple entities simultaneously through a process called sub-allocation. We’ll touch on sub-allocation shortly.
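To put those sizes in perspective, Python’s ipaddress module can count the addresses in each of the example blocks above:

import ipaddress

print(ipaddress.ip_network("108.177.0.0/16").num_addresses)    # 65536
print(ipaddress.ip_network("108.177.16.19/32").num_addresses)  # 1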

With all the network blocks and address space in the world, it would be difficult to have a single organization coordinate all allocations globally. This is where registries come into play.

Registries exist to manage, organize, and allocate the network blocks of the world, and are federated into a hierarchy-like system. Registries operate at different levels of scope; most people are familiar with the regional variety, such as ARIN and RIPE-NCC. Regional Internet Registries (RIRs) like these manage entire geographic regions themselves; however, some RIRs like APNIC and LACNIC further federate management to National Internet Registries (NIRs), which in turn can federate even further to Local Internet Registries (LIRs) and Internet Service Providers (ISPs).

In addition to the registries, different international organizations exist to help coordinate the allocation and assignment of these resources. The major players here are IANA (Internet Assigned Numbers Authority) and the NRO (Number Resource Organization). Their main job is to work with the registries to coordinate allocations, avoid conflicts, and provide public statistics that show allocation trends over time.

Sub-allocation occurs when an entity that owns a network block decides to partition the block into several pieces and give those pieces out to other entities for them to operate (possibly autonomously). It is important to note that with sub-allocation, we are NOT talking about entities like ARIN and RIPE-NCC but entities like ISPs, which own large network blocks allocated to them by a registry like ARIN; it is ISPs and the like who perform the sub-allocation to their customers or partners.

One final point to mention about sub-allocation is that sub-allocating entities are not strictly required to report their sub-allocations back to the NIRs/RIRs; the implication is that Internet registries may not (and in practice often don’t) have a complete record of all of the network blocks that belong to an organization. Note: This is an interesting point and will motivate some discussions about using BGP and IRR data later.

Network block enumeration refers to the process of identifying network blocks that have been allocated or assigned to a particular entity. Network block enumeration plays a critical role in the reconnaissance phase of a penetration test and helps to provide visibility into IP space which may host on-premises infrastructure or other in-scope systems for testing. Typically network block enumeration is achieved via keyword searching using a variety of different methodologies.

For more information about registries, sub-allocation, and other details relating to Internet number management, you should refer to RFC 7020 [1].

If there is one thing to take away from this section, it’s that when you sit down to perform network block enumeration, there are many different entities to consider depending on your client and scope. In the next section we will cover the prevailing methodologies that exist for network block enumeration.

 

Network Block Enumeration – Typical Discovery Methods

The way we used to perform network block enumeration at SRA for penetration tests and red teams was by doing keyword searches on the RIRs. We would go through the WHOIS web services exposed on their respective websites and collect any network blocks that matched our queries.

Instead of doing it this way, you could also collect this information from the RIRs using two different lookup protocols: WHOIS and RDAP. These protocols allow you to query for a resource like an IP address or domain name and get back registration information, that is, who owns it. Let’s explore these a bit.

WHOIS is still a very widely used protocol: according to the ARIN 42 talk titled “Directory Service Defense”[2], 90% of ARIN’s requests still come from WHOIS over port 43. Recall that WHOIS only allows you to perform lookups and that’s it. Moreover, WHOIS lookups themselves are not very useful since you cannot do any kind of keyword-based searching.

With that said, every RIR seems to have developed some varying degree of non-standard extensions to WHOIS to make more robust queries possible. ARIN and RIPE-NCC have each developed their own (incompatible) WHOIS RESTful Web Service (WHOIS-RWS), a web service that wraps the WHOIS protocol and makes robust enumeration significantly easier. But what about the other RIRs: LACNIC, AFRINIC, and APNIC? Historically, we have always had difficulty operationalizing these RIRs, and to explain why, it helps to discuss the different lookup interfaces that exist for RIRs.

 

WHOIS vs WHOIS-RWS vs RDAP

There’s a good chance you may not have heard of RDAP before, so we will explore some fundamental differences between RDAP and WHOIS before proceeding. RDAP is a relatively new protocol defined in 2015 in RFC 7480 [3]. The protocol’s primary reasons for existing are standardization and internationalization.

The WHOIS protocol has suffered from its own success. It has become one of the most widely used protocols since its definition in 1985, yet the protocol itself has no mechanisms for dealing with common internationalization concerns such as textual encodings other than ASCII. This, combined with the fact that the WHOIS protocol definition is very minimal, has led to inconsistent implementations between the RIRs. RDAP was meant to try and correct these and other failings of WHOIS.

Note: RDAP is not the same as WHOIS-RWS. WHOIS’s simplicity and ubiquity gave rise to powerful RESTful web services (RWS) that are provided by registries like ARIN [4] and RIPE-NCC [5].

This is why we had difficulty operationalizing APNIC, LACNIC, and AFRINIC: they do not have the same kind of WHOIS-RWS that ARIN/RIPE do; instead, they merely expose a web interface to perform direct WHOIS lookups. Recall there is no concept of “search” or “organization” in regular WHOIS, just object lookups. RIRs implement their own custom extensions for providing those abstractions at their own discretion, and APNIC, LACNIC, and AFRINIC simply don’t expose the kind of interface we want.

Back to RDAP. You can think of RDAP as “WHOIS over HTTP”: a REST API that returns registrant information as structured data in JSON format. Here’s an example from ARIN’s RDAP server:

https://rdap.arin.net/registry/ip/8.8.8.8

This lookup would be analogous to doing a WHOIS lookup on 8.8.8.8. RDAP also supports a standard search interface which allows you to perform keyword-based searching, unlike WHOIS, which natively supports only direct lookups.
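If you want to try this programmatically, a minimal lookup with the requests library looks like the sketch below; the fields printed (handle, startAddress, endAddress, name) are standard members of an RDAP IP network response as defined in RFC 7483:

import requests

response = requests.get("https://rdap.arin.net/registry/ip/8.8.8.8", timeout=10)
network = response.json()
print(network["handle"], network["startAddress"], network["endAddress"], network.get("name"))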

 

RIR Data Dumps

Most people who do network block enumeration use one or more of the lookup methods described above. However, there is another way which is not as widely popularized.

Something we didn’t initially realize is that many RIRs publish daily snapshots of their databases and make them available for download. These exports have personally identifiable information (PII) redacted but contain useful information for doing network block enumeration. The dumps that we are aware of as of this writing are:

https://ftp.ripe.net/ripe/dbase/
https://ftp.lacnic.net/lacnic/dbase/
https://ftp.afrinic.net/pub/dbase/
https://ftp.apnic.net/pub/apnic/whois/

Three things are worth mentioning:

  1. These dumps are in a format called RPSL (Routing Policy Specification Language), which is defined in RFC 2622 [6]. A rough parsing sketch follows after this list.
  2. LACNIC’s data is heavily redacted. Their database dump is produced to support a GeoIP initiative they have [7]. They release details on every allocated IPv4/IPv6 block with almost every field redacted except for the geographic location of the registrant of each network block.
  3. ARIN does not publish a public database dump of WHOIS registrant information. They publish a public dataset as part of the Internet Routing Registry (IRR) program. This dataset is not the same as the WHOIS database but there is some overlap.
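To give a concrete feel for RPSL, here is the parsing sketch mentioned above. Real dumps are gzip-compressed, use continuation lines, and vary slightly between registries, so treat this as a starting point only; the file name and keyword are placeholders:

def parse_rpsl(path):
    """Yield RPSL objects as dicts mapping attribute -> list of values."""
    obj = {}
    with open(path, encoding="latin-1", errors="replace") as fh:
        for raw in fh:
            line = raw.rstrip("\n")
            if not line.strip():                      # a blank line terminates an object
                if obj:
                    yield obj
                obj = {}
            elif not line.startswith(("%", "#")) and ":" in line:
                key, _, value = line.partition(":")
                obj.setdefault(key.strip(), []).append(value.strip())
    if obj:
        yield obj

for record in parse_rpsl("ripe.db.inetnum"):          # placeholder dump file
    descr = " ".join(record.get("descr", [])).lower()
    if "example-keyword" in descr:
        print(record.get("inetnum") or record.get("route"), record.get("source"))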

ARIN [8] and LACNIC [9] have a formal process for requesting bulk data access. At the time of writing, LACNIC appears to not be taking requests, but that may change in the future.

Assuming you can acquire one or both datasets to use ethically, you will have very good theoretical visibility into the global picture of network block allocation.

 

RIR vs IRR Data for Enumeration

As mentioned above, ARIN does not publish WHOIS registrant information, but they do publish a dataset of Internet Routing Registry (IRR) data. Essentially, IRR data dumps are compilations of CIDR prefixes which are supposed to correspond to actual routes advertised by ASNs. IRR data is commonly used for network engineering related to Internet routing.

The IRR data is not an authoritative source of routing prefixes advertised by ASNs nor does it have to correspond to real routes; it is simply an auxiliary data source offered voluntarily by a loose federation of entities which make up the IRR providers. Here’s a short list of the key players:

  1. ARIN
  2. RIPE
  3. APNIC
  4. AfriNIC
  5. LACNIC
  6. LEVEL3 (now CenturyLink)
  7. NTTCOM
  8. RADB

You’ll notice that there is significant overlap between the RIR data sources and the IRR data sources; indeed, every RIR is also an IRR data source. However, there are now some other players too, like NTT and CenturyLink.

The amount of data within IRR dumps is generally much less than in the RIR dumps, but more importantly, the data from IRR dumps must be used with caution. IRR data is known to be less accurate in general than RIR data because it is often not as actively maintained. This means that when you receive results back from an IRR data source, you should more thoroughly analyze the reported prefix to determine whether it is still valid.

If we have an issue with validating the authenticity of IRR data, it is natural to wonder why we would bother with IRR data at all; why not just use RIR data? Recall network block sub-allocation: this is where IRR data comes into play.

Routes are not explicitly network blocks per se, but they can often be treated as such, especially when they come from ISPs and correspond to real network blocks that have simply been sub-allocated.

Notice that IRR players like LEVEL3 and NTT are ISPs; they provide Internet services to customers in addition to supporting global routing. We have found many routes listed within IRR data that tied back to our clients; assuming we can perform some validation, we treat those routes as CIDR blocks which are sub-allocated to our clients.

In practice, we have had tremendous success using IRR data. Here’s a sample of some of the things we’ve found that we otherwise wouldn’t have:

  • Exposed network infrastructure: Cisco routers/switches
  • Exchange and OWA servers
  • Single-factor VPN portals
  • Miscellaneous web application administration consoles

In summary, IRR data is plentiful, exposes intra-ASN routes as well as inter-ASN routes, and is available for bulk download; thus, we chose to use IRR data in the SHADOWSTAR tool as a primary data source.

If you’d like to learn more about IRR, you can read about it on their website [10]. You may also like to check out the RADB’s website (a popular IRR provider) [11].

So there we have it, a solid breakdown of the various components that go into network block enumeration. In our next post, we’ll go over the release of SHADOWSTAR and how to get set up with it. We’ll also highlight some of the areas that it can help level up your recon game.

 

References

  1. https://tools.ietf.org/html/rfc7020
  2. https://www.youtube.com/watch?v=JLBS7UOr_YI
  3. https://tools.ietf.org/html/rfc7480
  4. https://whois.arin.net/
  5. https://apps.db.ripe.net/db-web-ui/fulltextsearch
  6. https://tools.ietf.org/html/rfc2622
  7. https://www.lacnic.net/3106/2/lacnic/ip-geolocation
  8. https://www.arin.net/reference/research/bulkwhois/
  9. https://www.lacnic.net/2472/2/lacnic/request-bulk-whois-access
  10. http://irr.net/
  11. https://www.radb.net/

User Data Leaks via GIFs in Messaging Apps

An investigation into how Teams, Discord, and Signal handle Giphy integrations

When everyone is working from home, a well-timed GIF sent to coworkers can lighten the mood.  So, when I arrived at SRA and found that they had disabled Teams’ Giphy search feature, I was disappointed. I asked a colleague why it was disabled and was told “we thought it could be a privacy issue.” Determined to prove that GIFs were harmless, I started examining the Teams/Giphy interaction. My findings led me to investigate how two other popular applications, Discord and Signal, handle user privacy with Giphy searches. Unfortunately, I found that it’s easy to leak user information when searching for the perfect GIF.

 

 

Summary

Here is a brief overview of the platforms investigated:

|                  | Teams | Discord | Signal |
|------------------|-------|---------|--------|
| Owned By         | Microsoft | Discord Inc. | Signal Technology Foundation (non-profit) |
| Pricing          | Per-user licensing with Microsoft 365 | Free with premium options | Free |
| Target Audiences | Primarily business and education customers | "Anyone who could use a place to talk with their friends and communities" | People who like privacy |

For actions performed on each platform, the following information is disclosed to Giphy:

|                      | Teams | Discord | Signal |
|----------------------|-------|---------|--------|
| Search Results       | Search term + ?? | Search term + ?? | Search term |
| Search Previews      | IP address, tracking token | IP address, Discord channel, tracking token | Tracking token |
| Loading GIF Messages | IP address, tracking token | None | Tracking token |

 

How Giphy Tracks Users

Here is an example URL returned from the Giphy Search API:

https://media0.giphy.com/media/Ju7l5y9osyymQ/giphy.gif?cid=de9bf95evmdivzh16orm7svyp9ticugu4abuyc3ty2df5y9i&rid=giphy.GIF

We’re going to focus on the “cid” parameter, which appears to be an analytics token. It is not mentioned in Giphy’s API documentation. Here is what I have found out about the cid parameter:

  • When you make a search request, every GIF returned will have the same cid
  • The first 8 characters (“app id”) are consistent across every search made with the same API token. For example:
    • Teams searches start with “de9bf95e”
    • Discord searches start with “73b8f7b1”
    • Signal (on Android) searches start with “c95d8580”
    • The remaining 40 characters (“search id”) vary based on the following factors:
    • Search string
    • Number of results requested
    • Results offset
    • Results rating
    • Geographic region (not simply IP address)
    • Time (duration unknown)

For example, take a search for “cats”. Every image returned in the results will have the same search id. If the same API key is used, subsequent searches from the same host will return the same search id, although this does not occur indefinitely; the search id will change occasionally with time. A friend down the street in the same region but with a different IP would also get the same search id if they used the same API key. However, use of a proxy to simulate requests from other countries confirmed that the search id does change with enough distance.

Since the cid parameter is returned in search results, it tends to be “sticky”. Unless it’s explicitly stripped out by someone, it will be included wherever you send the GIF URL and will persist in the message history.
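For reference, here is a small sketch showing how the cid from the example URL above breaks down, and how a client could strip it before a URL is shared; the 8/40 split simply mirrors the observations listed above:

from urllib.parse import urlsplit, urlunsplit, parse_qs, urlencode

url = ("https://media0.giphy.com/media/Ju7l5y9osyymQ/giphy.gif"
       "?cid=de9bf95evmdivzh16orm7svyp9ticugu4abuyc3ty2df5y9i&rid=giphy.GIF")

parts = urlsplit(url)
params = parse_qs(parts.query)
cid = params.pop("cid", [""])[0]

print("app id:   ", cid[:8])     # de9bf95e -> the Teams API token in this example
print("search id:", cid[8:])

# Rebuild the URL without the tracking parameter before sending it anywhere.
print(urlunsplit(parts._replace(query=urlencode(params, doseq=True))))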

 

Signal

Signal has documented its quest for user privacy with GIF searches in two blog posts: here and here. According to their blog posts and a review of their apps’ code on GitHub, here is how Signal apps interact with Giphy:

 

 

In the flow above, the user never communicates directly with Giphy’s servers; all communication is tunneled through Signal’s servers first. The URLs all still have the cid tracking parameter, but since no other data such as the user’s IP is ever revealed, the parameter has no effect on user privacy. Additionally, as mentioned in their blog posts, Signal ensures that the Signal servers can’t see any traffic passing through, giving you near-complete privacy.

 

How can Signal improve user privacy?

Signal has put a lot of thought into how to serve GIFs privately and it shows. There are no obvious privacy improvements they could make to the user flow. If Signal wanted to go above and beyond, they could make fake search and download requests from their servers to hide overall trends in user activity hours, but that has marginal benefits for a lot of extra bandwidth.

 

Discord

On Discord, the main built-in GIF picker widget uses Tenor, not Giphy. However, there is a built-in /giphy command that allows users to search and send GIFs using Giphy, which is what we will investigate. Here is a diagram of the requests made during a Giphy search on Discord as mapped using Burp:

Note that some of these requests can be WebSocket messages under normal circumstances, but I represent them all as synchronous requests for simplicity.

 

 

As you can see, the only time the user connects directly to the Giphy servers is step 6 when previews of all the GIFs are fetched during a user’s search. This is a screenshot of that preview request:

 

 

By making a request directly to Giphy, you expose your IP address. The request also includes two other pieces of trackable information: the cid parameter and a referer header. As discussed above, the cid parameter is based on your original search request, so including this parameter could allow Giphy to match your IP with your search requests even though the original search request went through Discord’s servers first. However, the larger data leak is the referer header. It includes the exact channel URL that you are currently talking in, whether it’s a public server or a DM. If you and your friends like to send GIFs to each other in servers and DMs, then Giphy could use this information to build a map of who you talk to, who is in which channels, etc.

Besides the preview requests, everything else in the flow respects your privacy. The search requests and the retrieval of full-size GIFs in messages are proxied through Discord’s servers. When a user sends a message with an embedded GIF, Discord appears to use the Giphy API to retrieve the full-resolution URLs. This is independent of any search request so the cid tracking parameter is rendered useless.

Additionally, when Discord reaches out to 3rd party servers to retrieve media, the request is very clean and does not include any tracking information:

 

How can Discord improve user privacy?

  • Strip the cid parameter from preview URLs before returning search results
  • Do not send referer headers when fetching GIF previews (or on any external request really)
  • Proxy GIF preview requests through the Discord servers like other external media

 

Teams

Teams includes an on-by-default Giphy search feature that can be disabled at an organizational level. Here is a diagram of the requests made during a Giphy search on Teams as mapped using Burp:

 

 

The user connects directly to Giphy’s servers in steps 6, 8, and 14. This, combined with the cid tracking token, could lead to a large amount of data being leaked. Say, for example, you have a private message with your friends at work. You search for a GIF that relates to a specific inside joke you share and then you send it to the group. Now, thanks to the cid token, when each of those people downloads your attached GIF, Giphy can identify which IP addresses belong to the people you’re talking to. Facebook could then use this information along with their data collection from other sites to build a profile of who you talk to at work, what you talk about, how you feel, etc.

Teams also allows custom search extensions. I was curious if I could make one to search Giphy more privately, so I built a prototype. Here is a diagram of requests made by a normal 3rd party extension in Teams:

 

 

With Teams 3rd-party extensions, the user never directly interacts with media servers because in step 4, Microsoft rewrites all returned URLs into proxied URLs. This means that Microsoft treats their 1st party feature differently from normal 3rd party extensions. Even more surprising is that Giphy URLs are explicitly exempted from proxying. For my 3rd party Giphy extension clone, when I return raw Giphy URLs, there is no proxying applied. But when I return my custom domain, the URLs are proxied.

Here is when my custom extension returns raw Giphy URLs. There is no proxying.

 

 

And here is when it returns my own domain instead of Giphy. The URLs have been changed to proxied URLs.

 

 

It appears that Giphy has been given a free pass by Microsoft to bypass their image proxying altogether, whether it’s from their built-in features or a third-party extension.

 

How can Teams improve user privacy?

  • Strip the cid parameter from returned Giphy URLs
  • Treat Giphy like a normal search extension and proxy their images

 

Conclusions

Everyone likes GIFs. Everyone wants privacy. But combining the two is no easy feat. Unintentional leaks of user data will occur unless you design applications with user privacy in mind from the beginning.

Discord and Teams both opaquely proxy the initial search request to Giphy. There is no way for us to know what additional information they are sending to Giphy when that request is forwarded. The Giphy API includes an optional “random_id” parameter that can be sent with search requests to personalize search results per user. I hope that Discord and Teams are not using this parameter and aren’t sending any additional information in headers.

Discord and Teams also have web clients. These can leak even more information because of cookies that Giphy may have set in your browser.

Signal, living up to its privacy-focused mission, has implemented a GIF searching solution that preserves user privacy without sacrificing usability.

Discord proxies almost all requests, but the GIF previews still leak quite a lot of information that can be used to paint a picture of user habits and who they’re talking to.

Teams explicitly bypasses their proxies for Giphy media requests, despite already having a privacy-preserving flow for normal 3rd party integrations.

So, be careful where you search for cat GIFs. You never know who could be watching.

 

MSSpray: Wait, how many endpoints DON’T have MFA??

A Little Backstory

As more companies move their infrastructure into the cloud, attackers are adapting their techniques to target these resources. One of the bigger changes is the shift to using Azure Active Directory (Azure AD) rather than an on-site solution. We’ll focus here on password spray attacks. Normal techniques, such as an automated spray against Office 365, can still prove successful, but it is becoming more common to see these thwarted by more secure configurations, defensive toolsets, and increased awareness by the blue team. It then becomes clear that attackers need a new approach to password spraying.

 

The Blind Spot

Microsoft’s definition of the Azure AD Graph API is as follows: “The Azure Active Directory Graph API provides programmatic access to Azure AD through REST API endpoints.” [1] Why should we focus here?

  • No Browser Needed: It takes the browser out of the equation for performing authentication to Azure AD.
  • Relatively Obscure: Graph API is a commonly overlooked endpoint for MFA implementations.
  • Blue Team Blind Spot: We have observed far fewer detections and alerts around this endpoint through purple teams, making it a good target for a stealthy password spray.

So properly protecting the Graph API should keep Azure AD safe, right?

Not so much. Microsoft has provided several other resources for programmatic access into an Azure AD environment, and many of them are just as overlooked as the Graph API, if not more so. A few examples of these resources include Office 365 Exchange Online, OneNote, and the Azure Key Vault.[2][3] We will expand on these later on, but for now let’s take a look at how an attacker can abuse these endpoints.

  1. Credential Validation: These endpoints can validate whether stolen or sprayed credentials are valid in the environment
  2. Access Token: Once authenticated, an access token is obtained that can be used to gain additional access into an Azure AD environment. For example, with an access token obtained from the Graph API, you can enumerate Azure AD objects through the use of tools such as ROADrecon.[4] Please note that access tokens have a specific scope and may not work against other resources. However, some endpoints are more broadly scoped than others.
  3. Inconsistent MFA: Often, endpoints have conditional access policies applied that force users to have MFA, while a subset of other endpoints are left untouched by those policies. A common configuration that we see is when MFA is properly configured to access sites like SharePoint and Outlook, but the Graph API is left untouched by the conditional access policies.

As Azure AD presents new targets for attackers, blue teamers must also have a way to keep track of these endpoints and determine what may have slipped through the cracks of an otherwise well-covered MFA deployment.

 

Introducing MSSpray

The tool can be found in the Security Risk Advisors GitHub: https://github.com/SecurityRiskAdvisors/msspray.git

Taking the view from both sides, MSSpray is a tool to aid in performing targeted password sprays as well as endpoint validation to highlight the defensive gaps an organization may have in their Azure AD tenancy. MSSpray is written in Python 3 and relies on the ADAL library written by Microsoft for use with Azure. It allows you to authenticate against a target’s Azure AD by using the ADAL library’s “acquire_token_with_username_password” function. Relying on this function, we created two modules, Spray and Validate.
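To make that core call concrete, here is a minimal sketch of the ADAL function MSSpray builds on. The client_id below is the well-known public Azure AD PowerShell application ID commonly used for this flow; MSSpray may use a different one, so treat the specifics as assumptions:

import adal

context = adal.AuthenticationContext("https://login.microsoftonline.com/common")
try:
    token = context.acquire_token_with_username_password(
        resource="https://graph.windows.net",                 # Azure AD Graph API
        username="bill.smith@contoso.com",
        password="ReallyBadPass!",
        client_id="1b730954-1685-4b74-9bfd-dac224a7b894",     # Azure AD PowerShell (public client)
    )
    print("Valid credentials; token issued for", token.get("resource"))
except adal.AdalError as err:
    # The AADSTS code in the error text distinguishes bad passwords from
    # conditional results such as "MFA required" or "password expired".
    print("Login failed:", err)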

Running the tool with no arguments will present you with a help menu and list out the available endpoints for authentication.

 

Spray Module

MSSpray’s Spray module allows the user to perform a targeted password spray, taking an input list of email addresses (one per line), a spray password, and the endpoint to authenticate against. Additionally, you can specify stop as the last argument to tell it to stop the password spray once the first successful login is obtained. (NOTE: This will not trigger on “conditional” logins such as MFA required, locked or disabled accounts, etc.)

Selecting an endpoint from the list above, an attacker can then perform a spray against that endpoint using the following command:

python3 msspray.py spray <user_list> <password> <endpoint selection> <stop/blank>

A successful spray can be seen below, highlighting a true login and noting any success that has a condition (MFA Required and Password Expired in the example).

One final aspect of the spray module is that it will halt upon receiving five 50053 error codes in a row, which correspond to Azure’s locked account error. This is due to behavior we have seen while performing password sprays that were determined to be blocked by the target. MSSpray will prompt the user either to continue, if you suspect that a block was NOT the case, or to quit and dump all previous attempts to a file.
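The mapping between AADSTS error codes and spray outcomes can be sketched roughly as follows; the codes are drawn from Microsoft’s public error-code documentation and may not match MSSpray’s internal handling exactly:

# Rough mapping of AADSTS error codes to password-spray outcomes (assumption:
# taken from Microsoft documentation, not from MSSpray's source).
CONDITIONS = {
    "AADSTS50053": "account locked / sign-ins blocked",
    "AADSTS50055": "valid password, but expired",
    "AADSTS50057": "valid credentials, account disabled",
    "AADSTS50076": "valid credentials, MFA required",
    "AADSTS50126": "invalid username or password",
}

def classify(adal_error_text: str) -> str:
    """Bucket an ADAL error string by the AADSTS code it contains."""
    for code, meaning in CONDITIONS.items():
        if code in adal_error_text:
            return meaning
    return "unrecognized error"

print(classify("AADSTS50076: Due to a configuration change made by your administrator ..."))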

 

Validate Module and a Fix

MSSpray’s Validate module allows the blue team to enumerate which endpoints in their Azure AD environment are configured with MFA. The approach here is to use a valid account and attempt to log in to each resource endpoint on the list. MSSpray will then read the error codes and interpret them as either a successful login or MFA required. This can be greatly beneficial in determining defensive gaps in authentication, pointing out endpoints that may not currently be on the blue team’s radar.

The Validate module can be run by supplying the valid account name and password.

python3 msspray.py validate bill.smith@microsoft.com ReallyBadPass!

Seen below are three different runs of the validate function against a test domain. In the first, we can see that many of the endpoints come back with “Successful login”, suggesting that MFA controls are not present on those resources. In that run, we applied only the MFA conditional access policy that we most commonly see in the real world: one that covers only Office 365 and a few related applications such as Exchange Online and SharePoint Online.

 

We then changed our conditional access policy in Azure to apply to ALL cloud applications. This time around, we can see that our changes to the Azure policies were successful in enforcing MFA for all applications for the user. The only caveat is that there are some discrepancies in what gets properly protected.

We also tested the legacy control of the MFA Enforcement Policy in Azure. This approach is successful in applying MFA to all of the resource endpoints, but Microsoft will soon be deprecating this functionality in favor of conditional access policies.

MSSpray generates logs from every spray or validation attempt. In the spray logs, you will see the full tokens obtained from successful login attempts, which can be used for further access into the Azure AD environment.

 

Microsoft Documentation

Guidance for configuring Azure conditional access policies regarding MFA can be found here[5] and here[6].

 

Future Development

MSSpray will be supported and updated continuously. Please submit any bugs or feature requests to the GitHub repo directly. If you have any questions about the tool, please feel free to reach out to me on Twitter @__TexasRanger[7].

 

References

[1] https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-graph-api

[2] https://www.shawntabrizi.com/aad/common-microsoft-resources-azure-active-directory/

[3] https://github.com/Gerenios/AADInternals/blob/master/AccessToken_utils.ps1

[4] https://github.com/dirkjanm/ROADtools

[5] https://docs.microsoft.com/en-us/azure/active-directory/authentication/concept-mfa-howitworks

[6] https://docs.microsoft.com/en-us/azure/active-directory/authentication/tutorial-enable-azure-mfa

[7] https://twitter.com/__TexasRanger

Automated Detection Rule Analysis with Dredd

Offensive security moves at a breakneck pace, and keeping up with new tools and TTPs (tactics, techniques, and procedures) can be an undertaking of its own for a security team. Creating, testing, and managing the detection rules for those changes is an even greater challenge, especially at scale. To address this challenge of quickly testing detection rules, we developed the tool Dredd. Dredd automates both the analysis of Sigma[1] rules against Mordor[2] datasets (collections of logs of attacker procedures) and the analysis of IDS rules against PCAPs. In this blog, we will discuss Dredd and its potential uses.

SIEM Rule Analysis

Much like its namesake – Judge Dredd – Dredd is judge, jury, and executioner. Dredd handles the log ingestion and normalization, detection rule conversion, and detection rule evaluation. For the detection rule side of Dredd, Sigma was chosen. Sigma is a SIEM agnostic rule format that is designed to be interoperable. The Sigma project provides a utility (sigmac) and library (sigmatools)[3] to translate their format to other platforms like Splunk, QRadar, and Elastic.

Converting a Sigma rule[4] (left) to a Splunk query (right) using sigmac

The other major component of Dredd is Mordor. The Mordor project aims to encapsulate attacker procedures as logs and distribute them openly to allow for easier detection content development. This bypasses both the need to have a testing environment with logging infrastructure and the need to accurately simulate the attacker procedure of interest.

Mordor entry for DCSync via Covenant C2[5]

Under the hood, Dredd creates an Elasticsearch Docker container, ingests the Mordor logs into the container, translates the Sigma rules to Elasticsearch DSL[6] queries, then runs those queries against the container and reports the results. Below is a video demonstrating the analysis process.
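Condensed down, that flow looks roughly like the sketch below, which assumes the Elasticsearch 8.x Python client, a container already listening on localhost:9200, and a hypothetical Mordor export file name; the query is a simple free-text match rather than an actual converted Sigma rule:

import json
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Ingest Winlogbeat-style JSON logs (one JSON object per line, as in Mordor exports).
with open("empire_shell_schtasks.json") as fh:          # hypothetical export name
    for line in fh:
        es.index(index="mordor", document=json.loads(line))
es.indices.refresh(index="mordor")

# Run the translated rule and report the hit count.
resp = es.search(index="mordor", query={"query_string": {"query": '"schtasks.exe"'}})
print(resp["hits"]["total"]["value"], "matching events")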


Using Dredd to evaluate merged logs

(Note: all videos are sped-up)

A Sigma rule that uses Sysmon logs to look for “schtasks.exe” gets a hit against the two Mordor log sets. In this example, the two Mordor logs are merged and the rules are run against the merged logs. However, this same procedure can be performed without merging the logs:


Using Dredd to evaluate unmerged logs

Here you can see that the “schtasks.exe” rule had a hit against the scheduled task log set but not the DCSync log set.

Other Data Sources

Dredd currently only supports Mordor datasets for Sigma analysis. The Mordor project lays out the different environments where the logs were captured so that analysts can look up which hosts map to which purpose (ex: IP 172.18.39.8 is the attacker C2 in the Shire[7] environment). Dredd, however, does not enforce conformity to those Mordor environments or require additional metadata (like the attacker view). In practice, Dredd will support any JSON logs produced by Winlogbeat or logs matching that schema[8]. This means that support for projects like EVTX-ATTACK-SAMPLES[9], a project similar to Mordor but using evtx files (Windows Event Log files), can be added fairly easily.

Here[10] is a PowerShell script to convert evtx files to Winlogbeat JSON files using Winlogbeat. Once converted and archived as tar.gz files, they can be used alongside Mordor log sets.

Converting an evtx export[11] to JSON using PowerShell

Archiving the JSON export using 7-Zip

Evaluating the export against a Sigma rule[12] using Dredd
The MMC lateral movement rule worked against the export

IDS Rule Analysis

Dredd also supports the evaluation of Snort/Suricata IDS rules against PCAPs using Suricata. Unlike with Elasticsearch, however, Dredd can simply mount the rules and PCAPs into a container as-is and use Suricata to perform offline analysis. There are several good sources for attacker PCAPs, including the Mordor project, which has PCAPs for the APT29 emulation plan by MITRE[13]; PCAP-ATTACK[14], which is by the same author as EVTX-ATTACK-SAMPLES; and PacketTotal[15], a VirusTotal-like website for packet captures.

When analyzing these captures, Dredd will merge all rules together and then evaluate them against each PCAP individually. However, there is also the option to evaluate the rules against all PCAPs at once.
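Under the hood, this boils down to Suricata’s offline mode. The sketch below assumes a locally installed suricata binary (Dredd itself uses a container) and uses placeholder file names:

import os
import subprocess

os.makedirs("suricata-logs", exist_ok=True)
subprocess.run(
    [
        "suricata",
        "-r", "apt29_day1.pcap",      # PCAP to replay in offline mode
        "-S", "merged.rules",         # load only this rule file
        "-l", "suricata-logs",        # output directory
    ],
    check=True,
)
# With the default configuration, alerts land in suricata-logs/fast.log and eve.json.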


Using Dredd to evaluate unmerged PCAPs

Applications

The most obvious use case for Dredd is as a tool to help detection content developers analyze new and existing detections against attacker activities. When a new tool or procedure is released, an analyst can download a log export of it, load it into Dredd with their existing detection rules, then determine if they have coverage. If they lack coverage, they can build a new detection and use Dredd to validate that the rule works as expected. In this same scenario, Dredd can be used to test the rule’s propensity for false positives by evaluating it against benign sets of logs.

Dredd can also help in situations where an analyst may not have access to a detection platform (SIEM/IDS) or where the platform is more tightly controlled. Analysts on the go or working off hours may not have the ability to run arbitrary queries against the platform. In other situations, query access to the platform could be restricted for reasons such as limiting performance degradation of scheduled queries and licensing. Dredd can help in either case by acting as a proxy for the platform when evaluating detection content.

If the detection team(s) version-controls their detection logic, Dredd can be used as part of a continuous integration/continuous deployment (CI/CD) pipeline to automate rule validation. Once new rules or changes to existing rules are committed to the source control management system (e.g. GitHub, Bitbucket), an automated process can be initiated to evaluate the rules against various logs. To that end, Dredd supports returning non-zero exit codes (in addition to the normal structured results) when rules do not have any results, thereby causing a build failure.


Changing the exit code behavior

This type of continuous testing is more commonly found in traditional software development practices but is just as applicable – and useful – in testing security detection content.

Red team operators can also make use of Dredd. The operator first generates the log data by practicing with their different tools and TTPs within a testing environment set to log the appropriate sources. Then the operator evaluates those logs either against their organization’s existing detection rules or against publicly available detection rules. Using the results, they can identify areas of their methodology that are likely to be detected and either avoid them or innovate on them. The red team operator will also get a better understanding of what characteristics are common in detection logic so they can avoid those where possible when developing their methodologies.

Dredd

Dredd can be found on the Security Risk Advisors GitHub page here: https://github.com/SecurityRiskAdvisors/dredd. If you have any questions about this article, please feel free to reach out to me on Twitter @2xxeformyshirt.

Featured on Cyber Kumite

Dredd is featured on Security Risk Advisors’ weekly YouTube and podcast series, Cyber Kumite. Watch the episode below:

 

References

[1] https://github.com/Neo23x0/sigma

[2] https://mordordatasets.com/introduction.html

[3] https://github.com/Neo23x0/sigma/tree/master/tools

[4] https://github.com/Neo23x0/sigma/blob/master/rules/windows/builtin/win_user_creation.yml

[5] https://mordordatasets.com/notebooks/small/windows/06_credential_access/SD-191027064128.html

[6] https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl.html

[7] https://mordordatasets.com/mordor_shire.html

[8] https://www.elastic.co/guide/en/beats/winlogbeat/current/exported-fields.html

[9] https://github.com/sbousseaden/EVTX-ATTACK-SAMPLES/

[10] https://gist.github.com/2XXE-SRA/548f856f2161341a6c405944db91d645

[11] https://github.com/sbousseaden/EVTX-ATTACK-SAMPLES/blob/master/Lateral%20Movement/LM_impacket_docmexec_mmc_sysmon_01.evtx

[12] https://github.com/Neo23x0/sigma/blob/master/rules/windows/builtin/win_mmc20_lateral_movement.yml

[13] https://attackevals.mitre.org/APT29/

[14] https://github.com/sbousseaden/PCAP-ATTACK

[15] https://packettotal.com/