As physical offices have emptied and VPNs and remote access solutions max out, many IT security departments face a sudden new challenge: how do you identify an attacker amid all this new noise at the border? Finding an attacker entering the network was hard enough before we began the world’s largest work-from-home experiment; with the addition of all these new connections, the haystack got significantly larger while the needle stayed the same size. For years security experts have been saying that the perimeter is dead, and the sudden, unexpected switch to remote work for much of the world is driving that point home now more than ever. Given the massive increase in data volume, optimized queries against big data sets are the key to finding these attackers with surgical precision. 

Correlation rules in a Security Information and Event Management (SIEM) system can be extremely useful in finding attackers, but on their own they have a hard time finding abnormal user behavior. User Behavior Analytics (UBA) and User and Entity Behavior Analytics (UEBA) software was designed to find abnormal behavior, such as an account logging in from a new IP at an out-of-the-ordinary time, but these solutions can be costly in terms of both initial investment and person-hours. So, if UBA/UEBA is the solution, what do you do if you don’t have the budget to bring in a solution now, or you need one immediately? 

Two of the most popular SIEMs, IBM’s QRadar and Splunk, can be engineered to look for attackers leveraging your VPN and/or remote access solutions right under your nose, without the need for UBA/UEBA. This will not completely replace a UBA/UEBA solution but may help with a few use cases in this new remote access landscape. 

 

Disclaimer 

There are inherent difficulties when creating rules to monitor for suspicious user logins. The rule suggestions in the next section are not magic bullets, and in some instances may be overly burdensome to implement properly. A sampling of problems that arise when deploying these types of rules include: 

  • The information logged by a system, and the accuracy of that data 
    • For example:  Source IP is not always accurately reported in some logs 
  • The number of log sources to be monitored for suspicious activity 
    • The more diverse the log sources the more complex the rules need to be 
  • The size and locations of a business
    • A small bank with a single branch will not have employees logging in from different countries to work, whereas a large global bank with branches around the world will have a harder time pinpointing abnormal logins based solely on country of origin 
  • The typical working habits of an organization
    • An in-office worker suddenly logging in from another US state is much more suspicious than remote workers staffed throughout the country, who can and will log in from anywhere. Thus, depending on your workforce’s normal login habits, deploying location-based rules may be a challenge.  

Before deploying, or committing to deploy, any rules associated with the geographic locations of user logins, complete a thorough analysis to confirm the rules will work in your environment. The factors above can cause a high number of inaccurate results, which can be extraordinarily taxing on Security Operations Centers (SOCs) or security teams and lead to alert fatigue.  

 

The Rules 

There are several different rules that can help filter through large data sets to find attackers accessing your systems side by side with your work-from-home workforce: 

Rule: Impossible Travel
Use of IP location to determine the speed at which a user would have to travel in order to sign in from two different IPs (note: this looks at both concurrent and consecutive logins)
Complicating factors:
  • Jumping from a home ISP to a mobile carrier ISP can cause a large location swing. This is especially troublesome if a user is using their phone for one border application and their computer on their home network for another.

  • The use of non-company owned VPN services can cause IP location to change drastically.
Fidelity/effort rating: High fidelity/High effort
Rule: Unexpected Region
Use of IP location to determine if a user is logging in from an unexpected geographic region
Complicating factors:
  • The use of non-company owned VPN services can cause IP location to be from anywhere in the world.

  • Larger organizations with a global workforce would need to expect logins from virtually all regions.
Fidelity/effort rating: Medium fidelity/Medium effort
Rule: Concurrent Logins
Multiple concurrent VPN/remote access logins from different locations using the same user account
Complicating factors:
  • Use of multiple devices for one user could cause this alert to be a false positive
Fidelity/effort rating: High fidelity/Low effort
Rule: VIP Home City Activity
Identify VIP logins from outside their home geographic location. With the current limits on travel, access from outside those geographic regions is cause for concern.
Complicating factors:
  • Not all travel is banned, so a traveling executive could cause this alert to be a false positive.

  • The use of non-company owned VPN services by a VIP would cause this alert to be a false positive.
Fidelity/effort rating: Medium fidelity/Low effort
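To make the Concurrent Logins rule concrete, here is a minimal Python sketch of the underlying logic; the session format and field names are illustrative stand-ins, not tied to any particular SIEM. It flags accounts whose sessions overlap in time but originate from different source IPs.

```python
from collections import defaultdict

def find_concurrent_logins(sessions):
    """Flag users with overlapping VPN sessions from different source IPs.

    sessions: list of dicts with 'user', 'src_ip', 'start', 'end'
    (epoch seconds). Returns {user: set of conflicting IP pairs}.
    """
    by_user = defaultdict(list)
    for s in sessions:
        by_user[s["user"]].append(s)

    alerts = defaultdict(set)
    for user, sess in by_user.items():
        sess.sort(key=lambda s: s["start"])
        for i, a in enumerate(sess):
            for b in sess[i + 1:]:
                if b["start"] >= a["end"]:
                    break  # sorted by start time; no later session overlaps a
                if a["src_ip"] != b["src_ip"]:
                    alerts[user].add((a["src_ip"], b["src_ip"]))
    return dict(alerts)

# Illustrative data: alice has two overlapping sessions from different IPs.
sessions = [
    {"user": "alice", "src_ip": "203.0.113.5", "start": 0, "end": 3600},
    {"user": "alice", "src_ip": "198.51.100.7", "start": 1800, "end": 5400},
    {"user": "bob", "src_ip": "203.0.113.9", "start": 0, "end": 3600},
]
print(find_concurrent_logins(sessions))
```

As the complicating factor above notes, a user legitimately connected from two devices will trip this same logic, so an allowlist or device correlation step is usually needed before alerting.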

Many of the rules above will not be present by default in a SIEM or UBA/UEBA deployment, so an analyst will need to create them. These rules are meant to assist in identifying less sophisticated attackers; a more advanced attacker, such as an APT group, may leverage a VPN or other tactics so that the activity appears normal. 

 

Rule Implementation Examples 

QRadar 

Unlike QRadar’s UBA app, which unfortunately is not domain aware (as of this writing), the Impossible Travel rule can be implemented in QRadar without the need for UBA and can even be modified for domain-aware deployments. This rule requires the following:

  • A reference table with a username key, source IP key and date key. 
  • A rule used to populate the reference table when a new connection is established on the VPN or remote access system. 
  • A rule to compare the reference table (the previous source IP) and the newest connection established on the VPN or remote access system.
Reference Table Creation 
Requirements 
  • Access to the QRadar API 
Steps 
  1. Using the QRadar API, go to “/reference_data/tables” and select POST to create a reference table. 
  2. Fill out the information similar to the screenshot below: 
    • For key_name_types use the following JSON:
      [{"element_type": "ALNIC", "key_name": "usernameKey"}, {"element_type": "IP", "key_name": "ipKey"}, {"element_type": "DATE", "key_name": "dateKey"}]

  3. Finally, click the Try It Out button to create the table, returning a response similar to the one below:
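The same table can also be created programmatically rather than through the interactive API docs page. The sketch below only builds the request URL and headers for POST /api/reference_data/tables; the console hostname and SEC token are placeholders you would replace, and you would still need to actually send the request (for example with urllib.request) against your own console.

```python
import json
from urllib.parse import urlencode

# Placeholder values; substitute your own console URL and API token.
CONSOLE = "https://qradar.example.com"
API_TOKEN = "YOUR-SEC-TOKEN"

def build_create_table_request(table_name):
    """Build the URL and headers for POST /api/reference_data/tables.

    key_name_types mirrors the JSON from step 2 above: a username key
    (case-insensitive alphanumeric), a source IP key, and a date key.
    """
    key_name_types = [
        {"element_type": "ALNIC", "key_name": "usernameKey"},
        {"element_type": "IP", "key_name": "ipKey"},
        {"element_type": "DATE", "key_name": "dateKey"},
    ]
    params = urlencode({
        "name": table_name,
        "element_type": "ALNIC",  # default type for values stored in the table
        "key_name_types": json.dumps(key_name_types),
    })
    url = f"{CONSOLE}/api/reference_data/tables?{params}"
    headers = {"SEC": API_TOKEN, "Accept": "application/json"}
    return url, headers

url, headers = build_create_table_request("user-ip-date")
print(url)
```

QRadar authenticates API requests via the SEC header shown here; the table name must match whatever name your AQL rule later references.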


Rule to Populate the Reference Table

Requirements:
  • The log source (preferably either one log source or a group of log sources, all from the same type of appliance).
  • The QID (QRadar event type mapping) that denotes a successful login to the VPN or remote login system.
  • A username and source IP need to be mapped for the QID
Steps
  1. Create a new rule using the Rule Creation Wizard.
  2. Select Event rule as the type and click Next.
  3. Add the following filters:
    • And when the event(s) were detected by one or more of these log sources
    • And when the event QID is one of the following QIDs
    • And NOT when the source IP is one of the following IPs
    • And when the event matches search filter
  4. Fill in the log sources you will be monitoring for VPN or remote login services.
  5. Fill in the QID with the successful login for the VPN or remote login services.
  6. Fill in the data for the last two filters as seen below: 
  7. Enter in an appropriate rule name and description, then click Next.
  8. Check the box next to Add to Reference Data to open up a new section.
    • Select Add to a Reference Table in this new section.
    • Add in all the key mappings as shown below. 
  9. Ensure “Enable this rule” is checked and click Finish.

 

Rule Creation: Detect Impossible Travel

Requirements:
  • Same QID used for the rule to populate the reference table
Steps
  1. Create a new rule using the Rule Creation Wizard.
  2. Choose Events as the source for the rule and click Next.
  3. Add the following rule filters:
    • And when the event(s) were detected by one or more of these log sources
    • And when the event QID is one of the following QIDs
    • And NOT when the source IP is one of the following IPs
    • And when the event matches search filter
    • And when the event matches this AQL query
  4. Enter in the same log sources, QID, IPs and search filter as used in the previous rule.
  5. The biggest change to this rule is the introduction of an AQL query. The AQL query is shown below and must be modified if you named the reference table something other than “user-ip-date” or the keys something other than “dateKey” and “ipKey”. Finally, to modify the speed in kilometers per hour that is deemed impossible, change “> 100.0” to whatever value you choose.
    (GEO::DISTANCE(sourceip, REFERENCETABLE('user-ip-date','ipKey',username))/DOUBLE(DOUBLE(starttime-REFERENCETABLE('user-ip-date','dateKey',username))/3600000)) > 100.0 and GEO::DISTANCE(sourceip, REFERENCETABLE('user-ip-date','ipKey',username)) > 0
  6. Enter an appropriate name and description and click Next.
  7. For Rule Action, select “Ensure the detected event is part of an offense”, indexing on the Username field. (It may be advisable to first dispatch a new event (step 8) and test the rule to see how many hits you get before creating offenses.)
  8. If desired, select Dispatch New Event, filling in all the appropriate fields.
  9. Ensure the checkbox for enabling the rule is selected and click Finish.
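For clarity, the arithmetic inside the AQL query can be expressed as a short Python sketch (illustrative only; the function and parameter names are not QRadar's). QRadar's starttime and the stored dateKey are epoch milliseconds, which is why the AQL divides the elapsed time by 3,600,000 to convert it to hours.

```python
def exceeds_speed_threshold(distance_km, prev_time_ms, curr_time_ms,
                            max_kmh=100.0):
    """Mirror the AQL check: distance divided by elapsed hours > max_kmh.

    Timestamps are epoch milliseconds, so dividing the delta by
    3,600,000 (milliseconds per hour) yields elapsed hours.
    """
    hours = (curr_time_ms - prev_time_ms) / 3600000
    if hours <= 0 or distance_km <= 0:
        return False  # same instant or same location: nothing to flag
    return distance_km / hours > max_kmh

# A login 500 km away only 30 minutes after the previous one (1,000 km/h):
print(exceeds_speed_threshold(500, 0, 1800000))   # True
# The same 500 km over 10 hours (50 km/h) is unremarkable:
print(exceeds_speed_threshold(500, 0, 36000000))  # False
```

The zero-distance guard mirrors the AQL's `GEO::DISTANCE(...) > 0` clause, which prevents repeated logins from the same IP from firing the rule.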

Congratulations, you can now detect Impossible Travel!

 

Splunk

There are fewer steps in the creation of the Splunk alert; however, the query is much longer. A high-level walkthrough of the necessary steps is below, followed by the full query.

High Level Walkthrough
  1. Grab all events for the event source being targeted. Make sure to narrow the scope as much as possible by grabbing only a specific event type (seen in the grey box below), such as successful logins.
    index=vpn eventtype=session_start
  2. From these results, eliminate all users (and their associated events) that have only one source IP across the entire dataset.
    | eventstats dc(src_ip) as src_ip_count by user
    | search src_ip_count > 1
  3. Run streamstats to add in the previous time and source IP used by the user.
    | streamstats current=f last(src_ip) as last_src_ip last(_time) as last_time by user
    | where src_ip != last_src_ip
  4. Now that there are two different source IPs with respective timestamps, the IP addresses can be geolocated, and any location that doesn’t have a city, region or country can be discarded.
    | iplocation src_ip
    | search Country!="" (City!="" OR Region!="" )
    | sort 0 _time
    | eval City=if(City=="","Unknown",City), Region=if(Region=="","Unknown",Region), Country=if(Country=="","Unknown",Country)
  5. Once the final list of IP addresses is compiled, each pair of IP and timestamps can be sent through the haversine formula.
    | streamstats current=f last(_time) AS last_time last(src_ip) AS last_src_ip last(lat) AS last_lat last(lon) AS last_lon last(City) AS last_city last(Country) AS last_country last(Region) AS last_region BY user
    | where src_ip!=last_src_ip
    | `haversine_distance(lat,lon,last_lat,last_lon)`
  6. The formula will return the distance between the two IP addresses, which can be used to calculate speed.
    | eval elapsed_time=_time-last_time, speed=distance/(elapsed_time/60/60)
  7. This speed can be used to search for the impossible traveler. Adjust the number to accommodate different forms of travel. For instance, if the workforce rarely uses air travel, then 75 mph may be appropriate, but if air travel is frequent, then 600 mph may be more appropriate (the average cruising speed of a passenger plane is around 575 mph).
    | search speed>600
  8. The result set is now down to the impossible travelers. Some formatting for better display of results and addition of incident review metadata can be added.
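The core of steps 2 through 5, keeping only users with more than one source IP and pairing each login with the previous one (what streamstats current=f accomplishes), can be sketched in Python. The event dictionaries below borrow Splunk's field names for readability, but this is an illustrative sketch, not Splunk code.

```python
from collections import defaultdict

def pair_consecutive_logins(events):
    """Mimic `streamstats current=f last(...) by user`: pair each login
    with the user's previous login, dropping pairs from the same IP.

    events: list of dicts with 'user', 'src_ip', '_time' (epoch seconds).
    Returns a list of (previous, current) event pairs.
    """
    by_user = defaultdict(list)
    for e in sorted(events, key=lambda e: e["_time"]):
        by_user[e["user"]].append(e)

    pairs = []
    for user, evts in by_user.items():
        if len({e["src_ip"] for e in evts}) < 2:
            continue  # like `eventstats dc(src_ip) ... | search src_ip_count > 1`
        for prev, curr in zip(evts, evts[1:]):
            if curr["src_ip"] != prev["src_ip"]:
                pairs.append((prev, curr))
    return pairs

# Illustrative data: alice changes IPs once; bob never does.
events = [
    {"user": "alice", "src_ip": "203.0.113.5", "_time": 0},
    {"user": "alice", "src_ip": "198.51.100.7", "_time": 1800},
    {"user": "alice", "src_ip": "198.51.100.7", "_time": 3600},
    {"user": "bob", "src_ip": "203.0.113.9", "_time": 0},
]
pairs = pair_consecutive_logins(events)
```

Each resulting pair would then be geolocated and fed through the haversine distance and speed calculation described in steps 5 and 6.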
Haversine/Great Circle formula

The Haversine formula calculates the great-circle distance between two latitude/longitude points. The eval statements below will work in Splunk and are referenced as "haversine_distance" in Step 5. Note the Earth radius of 3959, which yields the distance in miles and therefore the speed in mph.

| eval time1=1584988575, time2=1584992175
| eval lat1=39.9526, lon1=-75.1652, lat2=47.6062, lon2=-122.3321
| eval rlat1 = pi()*lat1/180, rlat2=pi()*lat2/180, rlat = pi()*(lat2-lat1)/180, rlon= pi()*(lon2-lon1)/180
| eval a = sin(rlat/2) * sin(rlat/2) + cos(rlat1) * cos(rlat2) * sin(rlon/2) * sin(rlon/2)
| eval c = 2 * atan2(sqrt(a), sqrt(1-a))
| eval distance = 3959 * c
| eval time_delta=time2-time1
| eval speed_mph=distance/(time_delta/60/60)
| where speed_mph > 600
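As a cross-check outside Splunk, this Python sketch implements the same haversine math with the Philadelphia and Seattle coordinates from the example above. With 3959 as the Earth's mean radius in miles and the one-hour gap between the two example timestamps (1584992175 - 1584988575 = 3600 seconds), the implied speed lands well above a 600 mph threshold.

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two lat/lon points."""
    rlat1, rlat2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(rlat1) * math.cos(rlat2) * math.sin(dlon / 2) ** 2)
    c = 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))
    return 3959 * c  # mean Earth radius in miles

# Philadelphia -> Seattle, logins one hour apart.
distance = haversine_miles(39.9526, -75.1652, 47.6062, -122.3321)
speed_mph = distance / (3600 / 60 / 60)  # elapsed seconds converted to hours
print(round(distance), round(speed_mph))
```

No one drives roughly 2,400 miles in an hour, so this pair of logins would be flagged as an impossible traveler.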
Full Splunk Query  
| tstats `summariesonly` count FROM datamodel=Network_Sessions.All_Sessions WHERE sourcetype=cisco:asa All_Sessions.tag=start All_Sessions.tag=vpn BY All_Sessions.src_ip All_Sessions.user _time span=1s
| `drop_dm_object_name("All_Sessions")`
`comment("FILTER OUT EVENTS WITH ONLY ONE SOURCE IP, AS THERE WILL BE NO DISTANCE OR TIME TO COMPUTE")`
| eventstats dc(src_ip) AS src_ip_count BY user
| search src_ip_count>1
`comment("ADD GEOLOCATION INFORMATION FOR THE SOURCE IP ADDRESS. FILTER OUT ENTRIES WHICH DON'T AT LEAST INCLUDE COUNTRY AND EITHER REGION OR CITY.")`
| iplocation src_ip
| search Country!="" (City!="" OR Region!="" )
`comment("SORT THE ENTRIES BY TIME, RUN STREAMSTATS TO APPEND TO EACH EVENT THE IP ADDRESS AND GEOGRAPHIC LOCATION OF THE PREVIOUS EVENT, AND REMOVE ANY EVENTS WHERE THE LOCATION IS THE SAME")`
| sort 0 _time
| eval City=if(City=="","Unknown",City), Region=if(Region=="","Unknown",Region), Country=if(Country=="","Unknown",Country)
| streamstats current=f last(_time) AS last_time last(src_ip) AS last_src_ip last(lat) AS last_lat last(lon) AS last_lon last(City) AS last_city last(Country) AS last_country last(Region) AS last_region BY user
| where src_ip!=last_src_ip
`comment("CALCULATE THE DISTANCE BETWEEN THE CURRENT AND PREVIOUS LOCATION, THE ELAPSED TIME BETWEEN LOCATIONS, AND THE SPEED NECESSARY TO TRAVEL THE DISTANCE IN THE CALCULATED TIME")`
| `haversine_distance(lat,lon,last_lat,last_lon)`
| eval elapsed_time=_time-last_time, speed=distance/(elapsed_time/60/60)
| search speed>600 NOT (Region=Pennsylvania AND last_region=Pennsylvania) NOT (Region=California AND last_region=Pennsylvania) NOT (Region=California AND last_region=California)
| sort 0 -speed
| dedup user
`comment("NORMALIZE THE USERNAME FIELD, PREPARE NUMERICAL FIELDS FOR PRESENTATION, AND RENAME FIELDS TO MATCH THEIR INCIDENT RESPONSE LABELS")`
| eval user=lower(user), last_time=strftime(last_time,"%Y %b %e %I:%M:%S %p"), distance=tostring(round(distance,0),"commas")." Miles", speed=tostring(round(speed,0),"commas")." MPH", elapsed_time=tostring(elapsed_time,"duration")
| rex field=user "(?<user>ca|ga|pa|mx|os|pr|az|de|fs(?:\d{5}|[a-z]{3,4}\d))$"
| rename last_city AS src_city, last_region AS src_region, last_country AS src_country, last_lat AS src_lat, last_lon AS src_long, City AS dest_city, Region AS dest_region, Country AS dest_country, lat AS dest_lat, lon AS dest_long, elapsed_time AS duration, last_time AS start_time, _time AS end_time, src_ip AS dest_ip
| rename last_src_ip AS src_ip
`comment("ADD INCIDENT REVIEW METADATA")`
| eval review_env="production"
| eval mitre_technique_id="T1133,T1078"
| makemv delim="," mitre_technique_id

 

Conclusion

As employees work from home in greater numbers, the volume of available data on remote connections to the enterprise spikes. Identifying evidence of malicious actors in this new mountain of data is a daunting but vital task for security departments. Fortunately, most SIEM tools can surface unusual account activity when queried with well-crafted, specific searches.

A properly tuned, tightly focused query can help you wade through the massive data sets within a SIEM to find attackers looking to take advantage of the new remote access landscape. Utilizing location-based rules in your SIEM is one alternative to rushing an investment in UBA/UEBA tools. While it may make sense to have UBA/UEBA on your roadmap, the goal of the rules above is to help you improve your security posture quickly so you can get back to focusing on your family, employees, and business during these strange times.

Stay safe.

Greg Stachura
Senior Consultant, GCFA, CISSP and Security+

Greg focuses on Incident Response and the Cyber Security Operations Center. Greg has experience managing SIEM, as well as orchestration and automations platforms. He also has extensive background in Incident Response playbook development, forensics and log analysis. Prior to joining Security Risk Advisors, Greg worked extensively in the financial, healthcare and education sectors.

Tyler Frederick
Manager

Tyler oversees security engineering and advanced response services for the 24x7x365 CyberSOC, including forensics, incident response, threat hunting and threat intelligence, purple teams, and platform engineering. He has extensive experience developing advanced SIEM and EDR correlation logic, conducting purple team assessments, leading incident response activities, and automating security operations.

Tyler is a graduate of Penn State University, holding a Master's degree in information sciences and technology (IST), as well as degrees in cybersecurity, computer science, and information systems.

Prior to joining SRA, Tyler worked as an IT manager and system administrator and brings with him an understanding of the challenges involved with implementing and managing security controls in enterprise networks.