
The Essential User Access Review Template [Checklist Download]


Imagine having the power to scrutinize user permissions with the finesse of a master locksmith, uncovering hidden backdoors and granting access only to the deserving. Sounds great, right? To get there, though, you first need a User Access Review (UAR).

As cloud adoption continues to surge, User Access Reviews are becoming an essential part of any access management audit process. This necessity is underscored by the fact that 33% of breaches have human error at their root, but it’s not always the user’s fault. Some employees are over-privileged without even realizing it, and it’s easy for inactive accounts to fly under the radar without regular auditing and UARs.

It’s no longer just about who is on your network; a UAR tackles the chaos by ensuring everyone has the right key to do their job – no more, no less. Beyond being a best practice, User Access Reviews are often mandated under regulatory frameworks.

Let’s decode the DNA of this essential template, discovering what a UAR is, why you need it, and how to do it.

What is a User Access Review?

A User Access Review (UAR) is a security and compliance process that ensures that only authorized individuals can access specific systems and data within an organization. Conducted periodically (e.g., monthly or quarterly) or during role changes, a User Access Review is an essential part of your cloud security toolkit, helping you create an inventory of user accounts and their privileges and verify their appropriateness based on job roles. 

Managers or system owners often participate in the review to confirm the necessity of these privileges. The process identifies and rectifies inactive, duplicate, or overly privileged accounts, reducing the risk of unauthorized access and leaked secrets. UARs are crucial for meeting frameworks and regulations like NIST and GDPR and maintaining a secure environment.
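The core of the review can be sketched in a few lines of code. The sketch below is a minimal illustration, not a product implementation: the `Account` fields, role names, and the role-to-permission baseline are all hypothetical, and a real review would pull this data from your IdP or cloud provider.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Account:
    user: str
    role: str
    permissions: set
    last_login: date

# Hypothetical role-to-permission baseline; in practice this comes from
# your access management policy.
ROLE_BASELINE = {
    "engineer": {"repo:read", "repo:write"},
    "analyst": {"dashboard:read"},
}

def review(accounts, today, inactive_days=90):
    """Flag inactive accounts and permissions beyond the role baseline."""
    findings = []
    for acct in accounts:
        # Inactive: no login within the review window.
        if (today - acct.last_login).days > inactive_days:
            findings.append((acct.user, "inactive"))
        # Over-privileged: permissions the role does not justify.
        excess = acct.permissions - ROLE_BASELINE.get(acct.role, set())
        if excess:
            findings.append((acct.user, f"over-privileged: {sorted(excess)}"))
    return findings
```

Each finding would then go to the account’s manager or system owner for confirmation before any privilege is actually revoked.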

Why Do You Need to Do a User Access Review?

Imagine an intern with more access rights than your CEO – it’s not a crazy or far-fetched idea. Organizations often grant access rights but neglect the importance of revocation. This leads to something called privilege creep, where permissions accumulate as employees transition roles, support other teams, or simply navigate their tasks. 

Unfortunately, the accumulation of access rights is a ticking time bomb, as excessive privileges expose your organization to the cycle of compromised identities, account takeover, misuse of privileges, and other threats. Regularly auditing who has access to certain resources allows organizations to better defend against internal and external threats – after all, it only takes one disgruntled employee to trigger a significant data leak. 

A User Access Review offers a way to maintain accountability, visibility, and data integrity across your organization, reducing cloud identity risk. While having exactly the permissions they need helps streamline employees’ workflows, visibility into active, inactive, and redundant accounts is particularly valuable in forensic investigations following data breaches or during employee transitions.

Download the Free User Access Review Checklist

Which Standards Require a User Access Review?

Access reviews aren’t just a choice; they are a mandate dictated by various IT frameworks:

  • ISO 27001: Achieving ISO 27001 certification requires organizations to demonstrate a commitment to systematically managing and protecting sensitive information and data. 
  • GDPR: Europe’s data protection regulation emphasizes limiting access to personal data to individuals with a legitimate interest. This necessitates audits of who can access personal data, reinforcing compliance.
  • NIST: The NIST Cybersecurity Framework is a voluntary guideline for cybersecurity best practices, and its special publications, like 800-53 and 800-171, stress auditing accounts for compliance.
  • PCI DSS: The Payment Card Industry Data Security Standard ensures that all organizations that accept, process, store, or transmit cardholder information meet strict access control and cybersecurity compliance requirements.

The Essential User Access Review Template

From creating an access policy and involving stakeholders to embracing the principle of least privilege, here are the essential steps you can take to complete a User Access Review.

Regularly Update Your Access Management Policy

Continually review and update your access management policy to reflect organizational changes, new technologies, and compliance requirements. Establish a schedule for these reviews, such as quarterly or biannually, to ensure the policy remains current and effective. Consult departments like IT, HR, and legal during each policy update to ensure it is comprehensive and aligns with all organizational needs.

Review the User Access Audit Procedure

Keep your processes agile by continually assessing how you conduct User Access Reviews. First, revisit your audit procedures to ensure they align with current best practices and regulatory requirements. Second, define what data you’ll collect, how you’ll analyze it, and which metrics will indicate success or issues. Finally, use audit software or tools that provide detailed logs and real-time monitoring capabilities to streamline the audit procedure.
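To make the “metrics” point concrete, a review cycle can be summarized with a handful of numbers. The metric names and inputs below are assumptions for illustration, not a standard:

```python
def audit_metrics(accounts_total, accounts_reviewed, revocations, open_exceptions):
    """Summarize one review cycle with simple health indicators."""
    return {
        # Fraction of accounts actually reviewed; the target is typically 1.0 per cycle.
        "coverage": round(accounts_reviewed / accounts_total, 2),
        # How often a review led to a revocation; a sudden spike can signal privilege creep.
        "revocation_rate": round(revocations / accounts_reviewed, 2),
        # Findings confirmed but not yet remediated.
        "open_exceptions": open_exceptions,
    }
```

Tracking these figures cycle over cycle gives you a baseline, so a drop in coverage or a jump in revocations stands out immediately.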

Implement Role-based Access Control

Use Role-based Access Control (RBAC) to assign permissions based on roles within the organization. This makes managing and reviewing access rights easier, as employees changing roles can simply be switched from one predefined role to another, aligning access with job responsibilities. Periodically re-evaluate the roles and associated permissions to ensure they remain aligned with changing job responsibilities and organizational structures.
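A minimal sketch of the RBAC idea, using hypothetical role and permission names: permissions attach to roles rather than to individual users, so a role change is a single reassignment.

```python
# Permissions live on roles, never directly on users.
ROLES = {
    "support": {"ticket:read", "ticket:write"},
    "billing": {"invoice:read", "invoice:write"},
}

user_roles = {"dana": "support"}

def permissions_for(user):
    """Resolve a user's effective permissions through their role."""
    return ROLES.get(user_roles.get(user), set())

def change_role(user, new_role):
    """Moving a user between predefined roles re-aligns access in one step."""
    if new_role not in ROLES:
        raise ValueError(f"undefined role: {new_role}")
    user_roles[user] = new_role
```

Because access is derived from the role at lookup time, reviewing the handful of role definitions is far cheaper than reviewing every user’s individual grants.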

Involve Regular Employees and Management

While it’s your job as DevOps, CISO, SecOps, or IAM engineer to prioritize access control, it’s also everybody’s concern – yep, right down to the interns and temp staff. Be sure to include both regular employees and management in the review process to get a 360-degree view of access needs and usage. Management can confirm which access levels are appropriate for specific job roles, while employees can identify potentially unnecessary or missing access privileges. Structured interviews or surveys can help gather insights about access needs and potential security risks.

Document Each Step of the Process

Thorough documentation is your ally in understanding challenges and optimizing the review process. Maintaining comprehensive documentation of the User Access Review is critical for audit trails and future reviews. As a bare minimum, you should record who was involved in each step, what changes were made, and why, as well as any anomalies or issues that arose and how they were addressed. Securely store the documentation in a centralized repository that is only accessible to authorized personnel (of course!) to maintain confidentiality and integrity.
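One lightweight way to satisfy the “who, what, why” requirement is an append-only log of review steps. The field names below are illustrative, not a prescribed schema:

```python
from datetime import datetime, timezone

def record_review_step(log, reviewer, subject, change, reason):
    """Append one audit-trail entry covering who changed what, and why."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "reviewer": reviewer,   # who was involved
        "subject": subject,     # which account or role was reviewed
        "change": change,       # what was modified (or "no change")
        "reason": reason,       # why, including any anomalies found
    }
    log.append(entry)
    return entry
```

In practice the log would be serialized to a centralized, access-restricted repository rather than kept in memory.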

Educate Your Personnel

You don’t know what you don’t know, right? All employees should be aware of the importance of proper access management for security and compliance. Provide training on requesting access, reporting issues, and understanding the impact of access controls on data security. Implement regular refresher courses and updates to keep the workforce on top of any changes in policy or emerging security threats, and pair the training with other cybersecurity know-how sessions like phishing simulations.

Choose the Right Access Management Platform

You can choose an access management platform to automate privilege management and help meet compliance goals. The right platform will facilitate reviews, manage role-based access controls, and offer features like automated alerts for suspicious activity or non-compliance. Most companies are already jumping on board – this year, 65% of large enterprises will use IAM software to enhance security measures and make compliance easier. For example, some platforms (like Rezonate) help you see IAM problems and solutions by discovering, profiling, and protecting human and machine identities, automatically and proactively enforcing real-world least-privilege access.

Get a Complete Picture of Your Access Control Compliance 

User Access Reviews have emerged as a critical weapon against unauthorized access and potential breaches, and the secret to success lies in the regularity and longevity of your IAM strategy. Thankfully, protecting identities and meeting regulatory targets doesn’t mean adding more tasks to your to-do list – simply automate it.

Rezonate simplifies compliance tasks by enabling Admins to easily confirm that each user has the correct access rights for their job, providing much-needed visibility over access journeys and the IAM map for confident real-time detection, response, and security. 

Rezonate easily categorizes and highlights dormant identities across the identity fabric – from workforce identities that are no longer active to machine identities such as roles and access keys.

In addition, Rezonate enables a simple flow to review the access of specific subsets or groups of identities based on specific attributes, such as:

  • Identities that are members of the marketing team and can access cloud providers such as Azure or AWS
  • Identities that have administrative privileges and can access SaaS applications such as Salesforce
  • Identities that have not logged in for more than 30 days and can access a specific service on a cloud provider, such as RDS in AWS
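Conceptually, attribute-based filters like these are just predicates over identity records. The sketch below is a generic illustration with made-up fields, not Rezonate’s API:

```python
# Hypothetical identity records; a platform would surface these automatically.
identities = [
    {"name": "eve", "team": "marketing", "apps": {"AWS"}, "admin": False, "days_since_login": 4},
    {"name": "svc-report", "team": "data", "apps": {"AWS", "Salesforce"}, "admin": True, "days_since_login": 45},
]

def select(identities, **criteria):
    """Return identities matching every attribute predicate."""
    return [i for i in identities
            if all(pred(i) for pred in criteria.values())]

# Marketing team members with cloud provider access:
marketing_cloud = select(identities,
                         team=lambda i: i["team"] == "marketing",
                         cloud=lambda i: i["apps"] & {"AWS", "Azure"})

# Administrators who have not logged in for more than 30 days:
dormant_admins = select(identities,
                        admin=lambda i: i["admin"],
                        dormant=lambda i: i["days_since_login"] > 30)
```

Each resulting subset becomes a focused review task, which is far more tractable than auditing the entire identity inventory at once.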

Rezonate’s Identity-Centric Access Review

All of this happens automatically as part of Rezonate’s identity discovery and effective-privileges modules, which enable Access Reviews at the click of a button. See Rezonate in action today.

Loading

Continue Reading

More Articles
Threat Hunting for Identity Threats in Snowflake

Frosty Trails: Threat-Hunting for Identity Threats in Snowflake

The Snowflake platform has revolutionized how organizations store, process, and analyze large volumes of data. It offers a fully-managed data warehouse-as-a-service solution, providing a scalable and flexible architecture allowing seamless data integration from multiple sources. As Snowflake rises in popularity as one of the top ten cloud vendors in the world, its attractiveness to organizations also draws the attention of malicious attackers. The platform's widespread adoption and extensive use in storing valuable data make it a lucrative target for cyber threats. As a result, you must implement security measures, stay vigilant against emerging threats, and continuously update your defense mechanisms to safeguard Snowflake data sharing and infrastructure from potential attackers.This post will help you grasp how to use Snowflake's built-in logging features for your security operation routine. We will explore the relevant data Snowflake exposes for hunting and describe ten threat scenarios and how to detect them. We will also share a script to execute them and perform quick threat-hunting operations in your environment to stay secure and audit-ready. What is Snowflake? Snowflake is a cloud-based data platform that provides a fully managed and scalable solution for storing, processing, and analyzing large volumes of data. It is designed to work on top of popular cloud providers like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). Snowflake is built with a unique architecture that separates storage and computing, allowing identities to scale resources independently based on their needs.  This architecture, combined with its ability to process semi-structured and structured data, enables Snowflake to deliver high performance and cost-efficiency for data workloads of any size. As we'll see in the next section, a Snowflake account can hold multiple databases without taking care of the infrastructure. 
Key Snowflake Logging Features Each Snowflake account has a default database called “SNOWFLAKE”. It is a shared read-only database that holds metadata and historical usage data related to the objects within your organization and accounts.   The SNOWFLAKE database has a built-in schema called "ACCOUNT_USAGE", which is a system-defined schema containing a set of views providing access to comprehensive and granular usage information for the Snowflake account. It is a valuable tool for monitoring and understanding how resources are utilized within your Snowflake environment.  The schema includes views that cover  user activity  query history  warehouse usage  login history  data transfer details, and more.   There are more logging mechanisms in Snowflake, such as INFORMATION_SCHEMA and READER_ACCOUNT_USAGE. During this post, we will rely on the following ACCOUNT_USAGE views: Exploring ACCOUNT_USAGE By default, each Snowflake account has a database called SNOWFLAKE that is accessible to the ACCOUNTADMIN role. You can grant additional roles and have access through the following command: GRANT imported privileges on database snowflake to role rezonate_integration; You can explore the available views by logging in as an ACCOUNTADMIN to your Snowflake account and performing the following steps: From the left pane, choose Data and then Databases. Select the SNOWFLAKE database and expand it. 3. Expand ACCOUNT_USAGE and select any of the views within it. Each available view in the schema has its own column structure. You can see the available columns for each view by clicking on the view name, choosing the Columns tab, and selecting “Explore available columns”. Comprehensive documentation per view is available, including the retention period and logging latency. Snowflake Data Governance - How to Access Snowflake Audit Logs The Snowflake logs are accessible through a few methods, as you’ll see below. 1. 
Snowflake Console  The most straightforward method of accessing the logs is logging in to a Snowflake account with a user that has read permissions to the  ACCOUNT_USAGE schema. Then, choose "Worksheets" from the left pane and ensure the worksheet is querying the correct data source. You should see something like the query browser below. 2. SnowSQL SnowSQL is a command-line tool provided by Snowflake designed to interact with Snowflake's data warehouse and execute SQL queries, manage data, and perform various administrative tasks. It acts as the official command-line client for Snowflake, allowing users to connect to their Snowflake accounts and work with data using SQL commands. Information about installing, configuring, and using it is available in Snowflake’s documentation. 3. Exporting to external storage Snowflake facilitates data export to contemporary storage services like AWS S3 through  "Stage" functionality. The data exported from Snowflake can be saved in various file formats. You can find detailed information about Stages on Snowflake's official documentation page.  Once the Stage setup is complete and data is exported, you have the flexibility to utilize your preferred analysis tool to centralize the data in a location of your choice. 4. Snowflake SDKs As well as the structured methods mentioned earlier, Snowflake supports native REST API accessible through different SDKs. It can be used by any script or tool for purposes like exporting data. An example is the Rezonate Threat-Hunting Tool, which takes advantage of Snowflake Python SDK to execute threat-hunting queries. We’ll find out more later in the blog.  Besides the Python SDK, the Snowflake team has developed drivers for many popular languages, including .NET, NodeJS, and Go. The full list is available here. 10 Snowflake Threat-Hunting Techniques to Implement Now  Now we’ve learned about Snowflake’s structure, basic permissions, and integrations, we can start threat-hunting. 
In this section, we will guide you through some critical threat-hunting scenarios to look out for and explain each. We will also mark the relevant Snowflake views, align them to the specific MITRE ATT&CK technique, and include our own query in Snowflake query syntax. Remember, you can copy and paste them directly to your worksheet. It is important to highlight that some hunting queries may have false positives, depending on the environment and may need adjustments to reduce noisy results. Scenario 1 - Brute Force on a Snowflake User A brute force attack on a Snowflake user happens when an attacker uses trial-and-error to repeatedly submit different combinations of usernames and passwords and eventually gain unauthorized access. To hunt for this type of attack, you can search for an attacker that performed more than X failed login attempts on at least Y target users, failing or ending up with a successful login. In failure cases, the activity may result in a user's lockout. Relevant Snowflake View SNOWFLAKE.ACCOUNT_USAGE.LOGIN_HISTORY Query -- Get users who failed to login from the same IP address at least 5 times select CLIENT_IP, USER_NAME, REPORTED_CLIENT_TYPE, count(*) as FAILED_ATTEMPTS, min(EVENT_TIMESTAMP) as FIRST_EVENT, max(EVENT_TIMESTAMP) as LAST_EVENT from SNOWFLAKE.ACCOUNT_USAGE.LOGIN_HISTORY where IS_SUCCESS = 'NO' and ERROR_MESSAGE in ('INCORRECT_USERNAME_PASSWORD', 'USER_LOCKED_TEMP') and FIRST_AUTHENTICATION_FACTOR='PASSWORD' and       EVENT_TIMESTAMP >= DATEADD(HOUR, -24, CURRENT_TIMESTAMP()); group by 1,2,3 having FAILED_ATTEMPTS >= 5 order by 4 desc; -- For Each result, check if the source IP address managed to login to the target user AFTER the "lastEvent" time MITRE Technique Credential Access | Brute Force | ATT&CK T1110  Scenario 2 - Password Spray on a Snowflake Account A brute force attack on a Snowflake account involves an attacker repeatedly submitting different combinations of usernames and passwords to eventually manage to log in and 
gain unauthorized access. To hunt for any occurrence of this scenario, you can search for an attacker that performed more than 1 failed login attempt on at least Y unique target users, from the same IP address. Relevant Snowflake View SNOWFLAKE.ACCOUNT_USAGE.LOGIN_HISTORY Query -- Get users who failed to login from the same IP address at least 5 times select CLIENT_IP, REPORTED_CLIENT_TYPE, count(distinct USER_NAME) as UNIQUE_USER_COUNT, min(EVENT_TIMESTAMP) as FIRST_EVENT, max(EVENT_TIMESTAMP) as LAST_EVENT from SNOWFLAKE.ACCOUNT_USAGE.LOGIN_HISTORY where IS_SUCCESS = 'NO' and ERROR_MESSAGE in ('INCORRECT_USERNAME_PASSWORD', 'USER_LOCKED_TEMP') and FIRST_AUTHENTICATION_FACTOR='PASSWORD' and       EVENT_TIMESTAMP >= DATEADD(HOUR, -24, CURRENT_TIMESTAMP()); group by 1,2 having UNIQUE_USER_COUNT >= 1 order by 3 desc; -- For Each result, check if the source IP address managed to login to the target user AFTER the "lastEvent" time MITRE Technique Credential Access | Brute Force | ATT&CK T1110 Scenario 3 - Unauthorized Login Attempt to a Disabled/Inactive User  In some cases, Snowflake user accounts might have been disabled due to security concerns or maybe even as part of employee off-boarding. Monitoring login attempts to disabled users can help you detect unauthorized activities. Relevant Snowflake View SNOWFLAKE.ACCOUNT_USAGE.LOGIN_HISTORY Query -- Search for login attempts to disabled users select * from SNOWFLAKE.ACCOUNT_USAGE.LOGIN_HISTORY where IS_SUCCESS = 'NO' and  ERROR_MESSAGE  = 'USER_ACCESS_DISABLED' MITRE Technique Credential Access | Brute Force | ATT&CK T1110  Scenario 4 - Login Attempt Blocked by Network Policy Snowflake network policies are a set of rules that govern network communication and access control within the Snowflake data platform. 
A network policy can deny a connection based on the client’s characteristics, such as IP address, to enforce organization policy and reduce the chances of a compromised account in case of leaked credentials.By searching for these failed logins we can identify violations of the organizational policies that may suggest compromised credentials. Relevant Snowflake View SNOWFLAKE.ACCOUNT_USAGE.LOGIN_HISTORY Query -- Search for network policies blocked IP addresses select * from SNOWFLAKE.ACCOUNT_USAGE.LOGIN_HISTORY where IS_SUCCESS = 'NO' and  ERROR_MESSAGE  = 'INCOMING_IP_BLOCKED' and EVENT_TIMESTAMP >= DATEADD(HOUR, -24, CURRENT_TIMESTAMP()); Scenario 5 - Exfiltration Through Snowflake Data Sharing Snowflake administrators can share data stored in their accounts with other Snowflake accounts. An attacker might use shares to exfiltrate data from Snowflake resources stored on compromised accounts to external locations. Any unauthorized event of this nature is a big red flag. Relevant Snowflake View SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY Query -- Search for new data shares select *  from SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY  where REGEXP_LIKE(QUERY_TEXT, 'create\\s+share\\s.*','i') or REGEXP_LIKE(QUERY_TEXT, '\\s+to\\s+share\\s.*','i') and START_TIME>= DATEADD(HOUR, -24, CURRENT_TIMESTAMP()); MITRE Technique Exfiltration | Transfer Data to Cloud Account| ATT&CK T1537  Scenario 6 - Exfiltration Through Snowflake Stage A Snowflake stage is an external storage location that serves as an intermediary for loading or unloading data into or from Snowflake, providing seamless integration with various cloud-based storage services. For example, an AWS S3 bucket can serve as a stage. You can use the following queries to search potential data exfiltration using this feature.  
Relevant Snowflake View SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY SNOWFLAKE.ACCOUNT_USAGE.STAGES Query -- Search for stage-related statements select * from SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY where QUERY_TEXT ilike '%COPY INTO%' and QUERY_TEXT ilike '%@%'; -- The following query will show the stages that were created in the last 24 hours select * from SNOWFLAKE.ACCOUNT_USAGE.STAGES where CREATED>= DATEADD(HOUR, -24, CURRENT_TIMESTAMP()); MITRE Technique Exfiltration | Transfer Data to Cloud Account| ATT&CK T1537  Scenario 7 - Persistency Through Snowflake Procedures & Tasks Procedures and tasks are Snowflake features that automate and manage workflows. Snowflake Procedures: Procedures in Snowflake are user-defined scripts written in SQL or JavaScript that allow you to encapsulate a series of SQL or JavaScript statements as a reusable unit. Snowflake Tasks: Tasks are scheduled operations that automate repetitive tasks or workflows. They are defined using SQL or JavaScript and can include SQL queries, DML statements, or calls to procedures. Tasks are scheduled to run at specific intervals, such as hourly, daily, or weekly, making them ideal for automating data pipelines and regular data processing. An attacker might utilize procedures and tasks to maintain persistently in the organization or exfiltrate data over time. 
Relevant Snowflake View SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY Query -- Search for new tasks select * from SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY  where REGEXP_LIKE(QUERY_TEXT, '.*CREATE\\s+(OR\\s+REPLACE\\s+)?TASK.*', 'i') and START_TIME >= DATEADD(HOUR, -24, CURRENT_TIMESTAMP()); -- Search for new procedures select * from SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY  where REGEXP_LIKE(QUERY_TEXT, '.*CREATE\\s+(OR\\s+REPLACE\\s+)?PROCEDURE.*', 'i') and START_TIME >= DATEADD(HOUR, -24, CURRENT_TIMESTAMP()); MITRE Technique Execution | Scheduled Task/Job | ATT&CK T1053  Execution  | Automated Exfiltration | ATT&CK T1020  Scenario 8 - Defense Evasion Through Unset Masking Policy A Snowflake masking policy is a security mechanism that protects sensitive data within a database. It allows you to define rules for obscuring or redacting specific data elements, such as Social Security Numbers (SSNs) or credit card numbers, to limit their visibility to unauthorized users. Attackers might bypass masking policies by unsetting them, given the right permission, and then exfiltrating sensitive information. Relevant Snowflake View SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY Query -- Search for unsetting of a masking policy select * from SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY where QUERY_TEXT ilike '%UNSET MASKING POLICY%' and START_TIME >= DATEADD(HOUR, -24, CURRENT_TIMESTAMP()); MITRE Technique Data Manipulation| Stored Data Manipulation | ATT&CK T1565  Scenario 9 - Data Exfiltration: Spikes in User Queries Volume If an attacker manages to infiltrate a Snowflake account, they may attempt to extract data from the databases hosted in the compromised account. To detect this type of activity, you can identify users who exhibit significantly higher data querying rates than their typical usage patterns. The subsequent query lets us pinpoint users who have executed queries resulting in larger data volumes than their average daily activity over the previous week.  
Triage tip: The suspicion level increases as the difference between the calculated standard deviation of “total_bytes_written” and the sum of “stddev_daily_bytes” and “avg_daily_bytes” grows larger. Relevant Snowflake View SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY Query -- Spikes in user queries WITH user_daily_bytes AS (   SELECT     USER_NAME AS user_name,     DATE_TRUNC('DAY', END_TIME) AS query_date,     SUM(BYTES_WRITTEN_TO_RESULT) AS total_bytes_written   FROM ACCOUNT_USAGE.QUERY_HISTORY   WHERE END_TIME >= CURRENT_TIMESTAMP() - INTERVAL '7 DAY'   GROUP BY user_name, query_date ), user_daily_average AS (   SELECT     user_name,     AVG(total_bytes_written) AS avg_bytes_written,     STDDEV_SAMP(total_bytes_written) AS stddev_bytes_written   FROM user_daily_bytes   GROUP BY user_name ) SELECT   u.user_name,   ROUND(u.total_bytes_written, 2) AS today_bytes_written,   ROUND(a.avg_bytes_written, 2) AS avg_daily_bytes,   ROUND(a.stddev_bytes_written, 2) AS stddev_daily_bytes FROM user_daily_bytes u JOIN user_daily_average a    ON u.user_name = a.user_name WHERE query_date = CURRENT_DATE()   AND u.total_bytes_written > a.avg_bytes_written   AND u.total_bytes_written > stddev_daily_bytes + avg_daily_bytes ORDER BY u.user_name; MITRE Technique Exfiltration | ATT&CK TA0010 Scenario 10 - Anomaly in Client Application For User If a user’s credentials are compromised or there is an insider threat, the attacker may attempt to use enumeration tools or client apps to perform massive data exfiltration. It’s likely that these tools haven't been used by the legitimate user in the past. For this case, detecting any new client app used by the user could be a red flag that is worth investigating. 
Relevant Snowflake View SNOWFLAKE.ACCOUNT_USAGE.SESSIONS SNOWFLAKE.ACCOUNT_USAGE.LOGIN_HISTORY Query -- User uses a new client application WITH user_previous_applications AS (   SELECT     USER_NAME AS user_name,     ARRAY_AGG(DISTINCT CLIENT_APPLICATION_ID) AS previous_applications   FROM ACCOUNT_USAGE.SESSIONS   WHERE DATE_TRUNC('DAY', CREATED_ON) < CURRENT_DATE()   GROUP BY user_name ), latest_login_ips  AS (   SELECT     USER_NAME,     EVENT_ID,     CLIENT_IP   FROM ACCOUNT_USAGE.LOGIN_HISTORY )  SELECT   s.USER_NAME AS user_name,   ARRAY_AGG(DISTINCT s.SESSION_ID),   ARRAY_AGG(DISTINCT s.CLIENT_APPLICATION_ID) AS new_application_id,   lh.CLIENT_IP as ip_address FROM ACCOUNT_USAGE.SESSIONS s JOIN user_previous_applications u   ON s.USER_NAME = u.user_name JOIN latest_login_ips lli   ON s.USER_NAME = lli.USER_NAME JOIN ACCOUNT_USAGE.LOGIN_HISTORY lh   ON s.LOGIN_EVENT_ID = lli.EVENT_ID WHERE DATE_TRUNC('DAY', s.CREATED_ON) = CURRENT_DATE()   AND NOT ARRAY_CONTAINS(s.CLIENT_APPLICATION_ID::variant, u.previous_applications) group by s.USER_NAME,lh.CLIENT_IP; MITRE Technique Credential Access |  ATT&CK TA0006 4 Additional Queries to Identify Snowflake Threats On top of the scenarios mentioned above, there are more relevant queries you can use to hunt for threats in a Snowflake environment. However, the results of these queries are harder to rely on since they require a deeper context of the regular activities in the organization to differentiate the legitimate operations from those that may be part of a threat. For example, imagine that MFA has been disabled for an administrator. This activity could be either part of a malicious operation or just an operational benign activity. To answer this question, you would need additional context: Who disabled the MFA device? Is it part of any task associated with an active project or duty? And If not,was it really the user, or is it a persistent action caused by an attacker? 
Query 1 - New Administrative Role Assignment  In the post-exploitation phase of a Snowflake attack, the attacker might create a new administrative user as a persistence mechanism. Relevant Snowflake View SNOWFLAKE.ACCOUNT_USAGE.GRANTS_TO_USERS Query -- Search for new admin role assignments select * from SNOWFLAKE.ACCOUNT_USAGE.GRANTS_TO_USERS where ROLE in ('ORGADMIN', 'ACCOUNTADMIN')  and CREATED_ON>= DATEADD(HOUR, -24, CURRENT_TIMESTAMP()); MITRE Technique https://attack.mitre.org/techniques/T1136/ Query 2 - New Permissions Assigned to a Role In the post-exploitation phase of a Snowflake attack, an attacker may add permissions to a non-privileged role in an effort to achieve persistence using a low-privileged user. Relevant Snowflake View SNOWFLAKE.ACCOUNT_USAGE.GRANTS_TO_ROLES Query -- Search for new admin role assignments select * from SNOWFLAKE.ACCOUNT_USAGE.GRANTS_TO_ROLES where ROLE NOT in ('ORGADMIN', 'ACCOUNTADMIN')  and CREATED_ON>= DATEADD(HOUR, -24, CURRENT_TIMESTAMP()); MITRE Technique https://attack.mitre.org/techniques/T1136/ https://attack.mitre.org/tactics/TA0004/ Query 3 - Changes to Users Security Settings An attacker might change user security settings like passwords, MFA settings, or other authentication methods to ensure persistence, or as a post-exploitation step.  Relevant Snowflake View SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY Query -- Search for user security settings changes select * from SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY where QUERY_TEXT ilike '%ALTER%USER%'        and QUERY_TYPE = 'ALTER_USER'        and REGEXP_LIKE(QUERY_TEXT, '.*(PASSWORD|ROLE|DISABLED|EXPIRY|UNLOCK|MFA|RSA|POLICY).*', 'i')       and START_TIME>= DATEADD(HOUR, -24, CURRENT_TIMESTAMP()); MITRE Technique https://attack.mitre.org/techniques/T1098/ Query 4 - Changes to Network Policies An attacker might alter network policies to allow traffic from a specific IP address.    
Relevant Snowflake View SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY Query -- Search for network policies settings changes select * from SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY where (QUERY_TYPE in ('CREATE_NETWORK_POLICY', 'ALTER_NETWORK_POLICY', 'DROP_NETWORK_POLICY') or        QUERY_TEXT ilike any ('% set network_policy%', '% unset network_policy%') )       and START_TIME>= DATEADD(HOUR, -24, CURRENT_TIMESTAMP()); MITRE Technique https://attack.mitre.org/techniques/T1562/004/ Choose a Reliable, Fast, and Simple Snowflake Threat-Hunting Tool In our pursuit to empower organizations with proactive cybersecurity measures, Rezonate is excited to introduce our open-source tool designed specifically for threat-hunting in Snowflake. This tool leverages Snowflake SDK to run the different threat models mentioned in this post and allows you to easily start your own hunting journey in your own environment. Feel free to suggest expansions and improvements – we’d love to hear from you 🙂 You can find a link to our repository here.
Read More

Okta Logs Decoded: Unveiling Identity Threats Through Threat Hunting

In the ever-evolving world of cybersecurity, staying steps ahead of potential threats is paramount. With identity becoming a key for an organization's security program, we increasingly rely on Identity providers (IdP) like Okta for identity and access management, and for federating access to cloud services, systems, and critical SaaS applications. Therefore, the logs produced by these systems become a critical source of information that can help you detect and eliminate threats before they wreak havoc. This blog post is your compass across a wide range of available Okta logs. Whether you’re a seasoned security professional or just getting started in the field, this step-by-step guide will empower you to turn raw data into actionable insights. We’ll explore: Each Okta audit log, learning how to analyze and extract critical information from How to uncover hidden threats, analyze their patterns, and respond effectively. From detection of brute force and MFA fatigue attempts to impossible traveler and privilege escalation techniques A set of free tools the Rezonate team has provided you to collect, analyze, hunt, and detect identity threats faster and easier. Understanding Okta Audit Logs Okta's System Log API records various system events related to an organization, providing an audit trail that can be used to understand platform activity and diagnose problems. The System Log API gives near real-time, read-only access, capturing a wide range of data types and the exact structure of each change. That being said, some data points are agnostic and appear in each log record. 
Here is the log structure scheme, as defined in the Okta documentation: Event Structure Schema (Okta docs). Every property in this log could be useful in certain use cases, but the following are the properties to focus on in most investigations and hunting scenarios:

UUID – Unique identifier for the event.
Published – Event time.
Event Type – Describes the type of the event, from a list of ~850 event types.
Actor – Describes the entity (user, app, client, etc.) that performs the action. It includes details like ID, type of actor, alternative ID (the user's email address), and display name.
Client – Describes the client that issues the request that triggers an event. It provides contextual information about the user, such as HTTP user agent, geographical context, IP address, device type, and network zone.
Target – Describes the entity that an actor acts on, such as an app user or a sign-in token. It also includes details like ID, type, alternative ID, and display name. Note that some events carry more than one Target object; in these cases, it is best to find the relevant target based on its type (AppInstance, AppUser, etc.).
Authentication Context – Provides context about the credentials provider and authentication type of the connection. Includes externalSessionId, the session ID of the operating user.
Security Context – Includes context regarding the IP address of the client. Useful data points within this object are isp (the Internet provider the request was sent from) and asOrg (the organization associated with the ASN).
Debug Context – Includes detailed, per-event context with additional information such as the device hash (dtHash) or ThreatSuspected.
Outcome – Includes the result of the event (such as a login request) in the Result field and the reason for that result in the Reason field.

Accessing Okta Audit Logs

Okta keeps the data accessible for customers with a retention of 90 days, and during this period there are primarily two ways to access it:

1. Okta Admin Console

Via the Okta Admin Console, administrative roles enabled for management of policies, users, groups, and "Audit Log" reports can use the interface by clicking "System Log" in the reporting menu. Alternatively, the web interface can be accessed directly through the following URL (replace OKTA_DOMAIN with your unique Okta domain name): https://{OKTA_DOMAIN}-admin.okta.com/report/system_log_2

Through the web interface, we can apply different filters on the event time or any of its properties and see the results, and several statistics, directly from the console. For example, in the query below we can see all of the activity against one specific application in the tenant – in this case, AWS Client VPN. You can also see the different actors that performed the operations involving this target application. Query results - Okta Admin Console

On top of the basic search panel, it is also possible to add combinations of filters for more specific criteria by clicking "Advanced Filters". In the example below, we are looking for all events performed by a specific user, against a specific application, which resulted in ALLOW. Advanced filters, Okta Query Log

After applying filters, it is possible to either examine the details of each event or export the filtered logs to a CSV file (by clicking the Download CSV button). Note that this feature is limited to 200,000 results, so for bigger exports the Okta System Log API is preferred.

2. Okta System Log API

The Okta System Log API is the programmatic counterpart of the System Log UI, and it offers the ability to execute more advanced queries and filters against the Okta logs. Operating through this interface requires either an OAuth integration with the okta.logs.read scope or a read-only API key.
Here is an example of an API call, selecting all events of a specific type (user.session.end):

GET /api/v1/logs?filter=eventType eq "user.session.end" HTTP/1.1
Host: {OKTA_DOMAIN}
Accept: application/json
Content-Type: application/json
Authorization: SSWS {{apikey}}

The most common use case for operating through the API interface is exporting batch data in real time to another system, such as streaming logs into a SIEM or any other security product, to monitor and conduct introspection and audit. One of the biggest advantages of exporting the data with the System Log API is the ability to correlate the collected logs with other data sources, adding critical context and completeness of data that makes advanced investigation a lot easier. Rezonate real-time correlated information for users' activities

3. Exporting the Logs

As mentioned, there is more than one way to get your hands on the relevant Okta logs and perform your threat-hunting actions on them. Exporting through the Admin Console is easy yet size-limited, while exporting the data through the API can be trickier for beginners. To get around this limit, we have created a basic tool that allows you to export Okta logs into a file, based on a time frame. It can be downloaded directly from the Rezonate GitHub repository.

Let the Hunt Begin

After we have exported the logs through one of the methods above, we can get to work and start analyzing the data to identify potential risks and threats. For the hunting process, you can use any data analytics solution or database, based on your preferences, as long as it supports filtering and grouping of data. In this section, we will guide you through some of the most relevant threat scenarios to look out for – explaining each one, marking the relevant Okta events, aligning it to the specific MITRE ATT&CK technique, and including our own query in PostgreSQL syntax.
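If you don't have a SIEM handy, Python's standard library is enough to get exported logs into something queryable. A minimal sketch using the stdlib sqlite3 module (SQLite's SQL is close enough to PostgreSQL for prototyping the grouping-style hunts in this post); the table layout and sample events are illustrative assumptions:

```python
import sqlite3

# Load flattened Okta events into an in-memory table so that
# grouping/filtering hunts can be prototyped locally.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE okta_logs (
        eventType TEXT, actionResult TEXT,
        actorAlternateId TEXT, clientIpAddress TEXT
    )
""")
# Illustrative sample: five failed logins for one user from one IP,
# plus one unrelated successful login.
events = [
    ("user.session.start", "FAILURE", "jane@example.com", "203.0.113.10"),
] * 5 + [
    ("user.session.start", "SUCCESS", "john@example.com", "198.51.100.7"),
]
conn.executemany("INSERT INTO okta_logs VALUES (?, ?, ?, ?)", events)

# Group failed logins by (IP, user) and keep pairs with >= 5 failures.
rows = conn.execute("""
    SELECT clientIpAddress, actorAlternateId, COUNT(*) AS failures
    FROM okta_logs
    WHERE eventType = 'user.session.start' AND actionResult = 'FAILURE'
    GROUP BY clientIpAddress, actorAlternateId
    HAVING COUNT(*) >= 5
""").fetchall()
print(rows)  # [('203.0.113.10', 'jane@example.com', 5)]
```

The same table can then serve every scenario below by swapping the WHERE and GROUP BY clauses.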
Important to highlight: some of the hunting queries may produce false positives, depending on the environment, and may need adjustments to reduce noisy results.

Scenario 1 - Brute Force on an Okta User

A brute force attack on an Okta user involves an attacker repeatedly trying different passwords in an attempt to eventually guess correctly and gain unauthorized access. To hunt for any occurrence of this scenario, you can search for an actor that performed more than X failed login attempts on at least Y target users, failing or ending up with a successful login. In cases of failure, the activity may result in a user lockout, or in Okta blocking the client IP. The same logic can be applied to two different types of events:

user.session.start - Search for a traditional brute force attack.
user.authentication.auth_via_richclient - Search for a brute force attack that uses legacy authentication protocols. Legacy authentication does not support MFA and is thus used to guess passwords at a large scale.

Relevant Okta Events: user.session.start, user.authentication.auth_via_richclient

Query:
-- Get users who failed to login from the same IP address at least 5 times
select count(id), "clientIpAddress", "actorAlternateId", min(time) as "firstEvent", max(time) as "lastEvent"
from okta_logs
where "eventType" = 'user.session.start'
  and "actionResult" = 'FAILURE'
  and "resultReason" in ('INVALID_CREDENTIALS', 'LOCKED_OUT')
  and "time" > now() - interval '1 day'
group by "clientIpAddress", "actorAlternateId"
having count(id) >= 5
order by count desc
-- For each result, check if the source IP address managed to login to the target user AFTER the "lastEvent" time

MITRE Technique: Credential Access | Brute Force | ATT&CK T1110

It is also worth mentioning that, based on the tenant's behavioral detection configuration, Okta can enrich each sign-in attempt with additional fields that add more context, such as:

Threat Suspected
New Device
New IP Address
New Geo-Location (Country/City/State)

Including these enrichments in the query can help reduce false positives and focus on the more relevant events.

Scenario 2 - MFA Push Notification Fatigue

Okta MFA push notification fatigue refers to user exhaustion or annoyance resulting from frequent multi-factor authentication (MFA) push notifications sent by Okta for verification purposes. In this scenario, we assume that an adversary has already compromised user credentials and starts flooding the legitimate user with push notifications, in the hope that the user will approve one of them by mistake. To hunt for this threat scenario, you can search for more than X MFA push notifications, within a short period of time, originating from the same IP address. A successful MFA fatigue attack will also generate a user.authentication.auth_via_mfa event; this event is logged after the targeted user was tricked into allowing suspicious access.

Relevant Okta Events: system.push.send_factor_verify_push, user.authentication.auth_via_mfa, user.mfa.okta_verify.deny_push

Query:
-- Generic
select count(id), "clientIpAddress", "actorAlternateId", min(time) as "firstEvent", max(time) as "lastEvent"
from audit_log_okta_idp_entity
where "eventType" = 'system.push.send_factor_verify_push'
  and "time" > now() - interval 'X hour'
group by "clientIpAddress", "actorAlternateId"
having count(id) >= 5 -- configurable number of MFA attempts
order by count desc

-- Find FAILED MFA fatigue attempts that were denied by the user
select count(id), "clientIpAddress", "actorAlternateId", min(time) as "firstEvent", max(time) as "lastEvent"
from audit_log_okta_idp_entity
where "eventType" = 'user.mfa.okta_verify.deny_push'
  and "time" > now() - interval '24 days'
group by "clientIpAddress", "actorAlternateId"
having count(id) >= 5
order by count desc

MITRE Technique: Credential Access | Multi-Factor Authentication Request Generation | ATT&CK T1621

Scenario 3 - Okta ThreatInsight Detection

Okta ThreatInsight is a security module that aggregates sign-in activity metadata across the Okta customer base
to analyze and detect potentially malicious IP addresses and prevent credential-based attacks. It is also a great starting point for finding an initial indication of targeted attacks against specific identities in the organization's directory.

Relevant Okta Events: security.threat.detected

Query:
select min(time) as "first_event", max(time) as "last_event", "actorName", "actorType", "actorAlternateId", "eventType", "threatDetections"
from audit_log_okta_idp_entity aloie
where "eventType" = 'security.threat.detected'
group by "actorName", "actorType", "actorAlternateId", "eventType", "threatDetections"

MITRE Technique: Credential Access | Brute Force | ATT&CK T1110

Scenario 4 - Okta Session Hijacking

A session hijacking attack refers to a situation in which an attacker was able to get their hands on the browser cookies of an authenticated Okta user. This risk mostly involves targeted attacks, whether a malware infection on the user's endpoint or a man-in-the-middle (MITM) attack that hijacks the user's traffic (read more in this Okta article). Okta's dtHash serves as a useful tool for identifying stolen Okta sessions. The dtHash, also known as the "de-identified token hash," is a cryptographic hash utilized to safeguard user identifiers within Okta sessions. Its purpose is to mitigate the risk of sensitive user information being compromised in the event of a data breach or unauthorized access to Okta's portal or applications. For our hunting, we will search for a stolen Okta user session that is being utilized in a different geographical location. We will detect it by searching for a dtHash that has been used from multiple geo-locations.

Important note - to enhance the effectiveness of detection, the session length limit plays a vital role. Okta recommends customers set a session length limit of 2 hours. It is worth noting that increasing the length limit raises the possibility of encountering false positives in the detection process.
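The dtHash approach described above is easy to prototype in plain Python before wiring it into a database. A sketch over invented sample events; the field names mirror the flattened log columns used in this post:

```python
from collections import defaultdict

# Invented sample events for illustration only.
events = [
    {"actorAlternateId": "jane@example.com", "dtHash": "abc123", "clientCountry": "US"},
    {"actorAlternateId": "jane@example.com", "dtHash": "abc123", "clientCountry": "DE"},
    {"actorAlternateId": "john@example.com", "dtHash": "def456", "clientCountry": "US"},
    {"actorAlternateId": "john@example.com", "dtHash": None, "clientCountry": "FR"},
]

# Collect the set of countries seen per (user, dtHash) pair.
countries_per_session = defaultdict(set)
for e in events:
    if e["dtHash"]:  # skip events without a device token hash
        key = (e["actorAlternateId"], e["dtHash"])
        countries_per_session[key].add(e["clientCountry"])

# A dtHash seen from more than one country suggests a hijacked session.
suspicious = [key for key, cc in countries_per_session.items() if len(cc) > 1]
print(suspicious)  # [('jane@example.com', 'abc123')]
```

This is the in-memory equivalent of grouping by actor and dtHash and keeping groups with more than one distinct country.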
Relevant Okta Events: every Okta event

Query:
select count(distinct "clientCountry"), "actorAlternateId", "dtHash"
from audit_log_okta_idp_entity
where "dtHash" is not null
group by "actorAlternateId", "dtHash"
having count(distinct "clientCountry") > 1

MITRE Technique: Collection | Browser Session Hijacking | ATT&CK T1185

Scenario 5 - Okta Privilege Escalation via Impersonation

In this threat scenario, an Okta application administrator could impersonate another user by modifying an existing application assignment - specifically, by editing the 'User Name' field used by Okta to identify users in the destination application. This manipulation allows the administrator to authenticate as a different user in any federated application, presenting a risk of privilege escalation, especially in critical SaaS applications like AWS IAM Identity Center.

Relevant Okta Events: application.user_membership.change_username

Query:
select * from audit_log_okta_idp_entity aloie
where "eventType" = 'application.user_membership.change_username'
-- For each result, check the target application. If the target application is relevant for this detection, the target AppUser is the field to validate: an impersonation configuration was set if there is a mismatch between the targetName and the targetAlternateId.

MITRE Technique: Persistence | Account Manipulation | ATT&CK T1098

Scenario 6 - Phishing Attempt (Blocked by FastPass)

FastPass is Okta's passwordless solution, designed to minimize friction for the end user during the login process while protecting against real-time phishing attacks. By adding additional layers of context to the login process (such as managed-device information), Okta can identify potentially suspicious authentication flows and automatically block them, generating an indicative entry in the audit log. We can use this event result to identify potentially compromised credentials of Okta identities.
Relevant Okta Events: user.authentication.auth_via_mfa

Query:
select * from audit_log_okta_idp_entity aloie
where "eventType" = 'user.authentication.auth_via_mfa'
  and "actionResult" = 'FAILURE'
  and "resultReason" = 'FastPass declined phishing attempt'

MITRE Technique: Initial Access | Phishing | ATT&CK T1566

Scenario 7 - Okta Impossible Traveler

Within the realm of threat hunting, the concept of the "impossible traveler" denotes a detection method employed to uncover compromised identities. Specifically, it involves identifying instances where an identity records successful login events from two distinct geographical locations within a brief time span, which may suggest a compromise. To identify potentially compromised identities, search for users who have had successful sign-in events from different geographical locations within a short timeframe. It is recommended to exclude VPN and proxy addresses from the analysis to focus on genuine geographic variations and avoid false positives. If pre-configured properly, you can also use Okta's velocity field within the triage process to elevate the suspicion level of a particular sign-in location over others.

Relevant Okta Events: user.session.start

Query:
select count(distinct "clientCountry"), "actorAlternateId"
from audit_log_okta_idp_entity aloie
where "eventType" = 'user.session.start'
  and "time" > now() - interval '1 day'
group by "actorAlternateId"
having count(distinct "clientCountry") > 1

MITRE Technique: Initial Access | Valid Accounts | ATT&CK T1078

Scenario 8 - Cleartext Credentials Transfer Using SCIM

The SCIM (System for Cross-domain Identity Management) protocol is a standardized method for managing user identities and provisioning them across different systems and applications. It simplifies user management by providing a common framework for creating, updating, and deleting user accounts, as well as managing user attributes and group memberships, across various platforms.
One of Okta's features allows setting up a sync workflow that pushes any password change to a target SCIM application. Configuring this requires admin privileges in the Okta Console, so it is most likely a legitimate operation; yet, on rare occasions, it could be part of a hostile password-stealing attack by an insider. To detect this, we can search for the credentials export activity and check that all of the target applications are legitimate and intended.

Relevant Okta Events: app.user_management.push_okta_password_update

Query:
select * from audit_log_okta_idp_entity
where "eventType" = 'app.user_management.push_okta_password_update'

MITRE Technique: Credential Access | Exploitation for Credential Access | ATT&CK T1212

Scenario 9 - Application Access Brute Force

When an attacker gains access to a compromised Okta user, they may attempt to use Okta's portal to connect to various trusted applications. However, the attacker's attempts to access multiple apps can be denied by authentication policy requirements that have not been satisfied, such as the absence of MFA. An attacker may try to access different applications one by one until finding those that allow them to operate without additional factors or conditions. To identify this behavior, we will search for a user who has had multiple failed access attempts to different applications within a short time frame. This should raise a red flag and warrant a follow-up investigation of the user's activity.
Relevant Okta Events: application.policy.sign_on.deny_access

Query:
select count(targets."targetId"), logs."clientIpAddress", logs."actorAlternateId"
from audit_log_okta_idp_entity logs, audit_log_target_okta_idp_entity targets
where "eventType" = 'application.policy.sign_on.deny_access'
  and targets."auditLogId" = logs."id"
  and targets."targetType" = 'AppInstance'
  and "time" > now() - interval '1 month'
group by "clientIpAddress", "actorAlternateId"
having count(targets."targetId") >= 5
order by count desc

MITRE Technique: Credential Access | Exploitation for Credential Access | ATT&CK T1212

On top of the scenarios mentioned above, there are more interesting events that can be used to hunt for threats in an Okta environment. These events are harder to rely on, since they require a deeper context of the regular activities in the organization to differentiate legitimate operations from those that may be part of an attack. For example, an API token created by an administrative user could be malicious or legitimate, and requires triage for a verdict: Why did the user create this API key? Is it part of any task associated with an active project? If not, was it really the user, or is it a persistent action by a hostile actor?
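A simple way to surface these harder-to-judge events for manual triage is to keep a watchlist of sensitive event types and count occurrences per actor. A sketch in Python; the watchlist and sample events here are illustrative, not exhaustive:

```python
from collections import Counter

# Illustrative watchlist of event types that warrant manual triage.
SENSITIVE_EVENTS = {
    "system.api_token.create",
    "user.account.privilege.grant",
    "user.session.impersonation.initiate",
}

# Invented sample events for illustration only.
events = [
    {"eventType": "system.api_token.create", "actorAlternateId": "admin@example.com"},
    {"eventType": "user.session.start", "actorAlternateId": "admin@example.com"},
    {"eventType": "system.api_token.create", "actorAlternateId": "admin@example.com"},
]

# Count sensitive events per (actor, eventType) to build a triage queue.
triage = Counter(
    (e["actorAlternateId"], e["eventType"])
    for e in events
    if e["eventType"] in SENSITIVE_EVENTS
)
for (actor, event_type), n in triage.most_common():
    print(f"{actor} {event_type} x{n}")
```

Each entry in the resulting queue is then answered with the triage questions above: who, why, and whether the action fits an active project.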
Okta Event Type – Definition – MITRE ATT&CK
user.session.access_admin_app – Okta admin app accessed – T1078
system.api_token.create – Administrative API token created – T1098.001
user.account.privilege.grant, group.privilege.grant – Administrative privileges assignment – N/A
user.mfa.factor.* – MFA changes – T1556
system.idp.lifecycle.create, system.agent.ad.create – Addition of external IdP – T1556
policy.rule.*, policy.lifecycle.*, application.policy.* – Authentication policy changes – T1556
network_zone.rule.disabled, zone.* – Changes to network zones – T1556
user.account.report_suspicious_activity_by_enduser – Suspicious activity reported – N/A
user.mfa.attempt_bypass – Attempt to bypass MFA – N/A
security.request.blocked – Access from a known-bad IP was blocked – N/A
user.session.impersonation.initiate – Okta impersonation session started – N/A

To see how Rezonate can help detect risks and threats across your Okta infrastructure, contact us for more information or request a free demo. Like this article? Follow us on LinkedIn.
Read More
Breaking The Vicious Cycle of Compromised Identities

As we at Rezonate analyze the 2023 Verizon Data Breach Investigations Report, an unmistakable deja vu moment grips us: a staggering 74% of all breaches still exploit the human factor – be it through errors, misuse of privileges, stolen credentials, or social engineering. This recurring theme serves as a clear call for businesses to switch gears and move away from static security approaches toward a more dynamic, identity-centric model.

An Unyielding Threat Landscape

Year after year, our IT landscape and attack surface continue to expand. Cloud adoption has soared, hybrid work has become the norm, and our infrastructure continues to evolve. Yet the threat statistics remain frustratingly consistent. This consistency points to a key issue: our security measures aren't keeping up. Traditional security approaches, designed for a static operational model and distributed across tools and teams, only increase complexity and do not meet the demands of an ever-changing, dynamic infrastructure. In turn, this provides ample opportunities for attackers. The prevalence of shadow access, an increased attack surface, and greater reliance on third parties all present identity access risks, making it harder to see, understand, and secure the enterprise's critical data and systems.

How Are Attackers Winning?

Attackers are using simple yet effective methods to gain access to valuable data without the need for complex malware attacks: a variety of account takeover tactics, bypassing stronger controls such as MFA, compromising identities, access, credentials, and keys, brute-forcing email accounts, and easily expanding laterally as access is permitted between SaaS applications and cloud infrastructure. Stolen credentials continue to be the top access method for attackers, accounting for 44.7% of breaches (up from ~41% in 2022). Threat actors will continue to mine where there's gold: identity attacks across email, SaaS and IaaS, and directly across identity providers.
Where We Fall Short

Security teams are challenged by their lack of visibility and understanding of the entire access journey, across both human and machine identities, from the moment access is federated to every change to data and resources. We are also seeing gaps in real-time detection and response, whether it be limiting user privileges or accurately identifying compromised identities. These shortcomings are largely due to our reliance on threat detection and cloud security posture management technologies that fail to deliver the immediate, accurate response required to successfully contain and stop identity-based threats.

What Should You Do Differently?

We're observing that businesses adopting an identity-centric approach:

- Gain a comprehensive understanding of their identity and access risks, further breaking down data silos
- Are able to better prioritize their most critical risks and remediation strategies
- Can more rapidly adapt access and privileges in response to every infrastructure change
- Automatically mitigate posture risks before damage is inflicted
- Confidently respond to and stop active attacks

Identities and access, across your cloud, SaaS, and IAM infrastructure, are constantly changing. Your security measures must evolve in tandem. The identity-centric operating model enables businesses to proactively harden potential attack paths and to detect and stop identity threats in real time. Breaking the cycle in Verizon DBIR 2024

Now is the time to make a change. Let's drop our old set-and-forget habits and recognize that security needs to be as dynamic and adaptive as the infrastructure it protects. For more information about how Rezonate can help you build or further mature your identity security, contact us and speak with an identity security professional today. This post was written by Roy Akerman, CEO and Co-Founder at Rezonate, and former head of the Israeli Cyber Defense Operations.
Read More
See Rezonate in Action

Eliminate Attacker’s Opportunity To Breach Your Cloud today

Organizations worldwide use Rezonate to protect their most precious assets. Contact us now, and join them.