
What is Azure Identity Protection and How to Get the Most out of It

Azure Identity Protection is the enigmatic sentinel of the Microsoft realm. Understanding its inner workings is essential for any information security officer and holds the keys to an effective user risk policy.

Unsurprisingly, cyber attackers are sharp: they have found many ways to infiltrate and compromise digital applications. Sometimes, users make these malicious conquests even easier by using weak passwords, leaving accounts over-privileged, and failing to recognize social engineering attacks.

So far, in 2023, 60% of medium-sized companies have reported theft of credentials. As a result, tools like Microsoft’s Azure Identity Protection have become a staple in protecting against compromised identities, account takeover, and misuse of privileges. With a strong identity protection foundation, organizations can adopt concepts like zero trust and confidently progress toward a dynamic, identity-centric model that keeps credentials safe.

In this article, we will discuss what Azure Identity Protection is, its features, and its benefits. Then, we’ll review eight best practices to help you get the most out of Azure Identity Protection.

What is Azure Identity Protection?

Azure Identity Protection is a security service that provides a consolidated view of risky user activities and potential vulnerabilities affecting your identities. It uses adaptive machine learning algorithms to detect anomalies and suspicious incidents such as leaked credentials, sign-ins from unfamiliar locations, infected devices, and impossible travel. 

The service generates reports and alerts that enable administrators to investigate and respond to possible vulnerabilities. It also provides remediation capabilities like password resets and multi-factor authentication (MFA) enforcement to resolve issues.

What are the Main Features of Azure Identity Protection?

Here are five key features of Azure Identity Protection:

  • Risky sign-in detection – Analyzes sign-in patterns and flags anomalous ones, such as sign-ins from unfamiliar locations, infected devices, or anonymous IP addresses.
  • Risky user detection – Identifies potentially compromised user accounts based on indicators like leaked credentials, suspicious inbox activities, and impossible travel.
  • Risk-based conditional access policies – Automatically blocks risky sign-ins or challenges users with MFA or a password change.
  • Reporting and investigation – Provides reports on risky users and sign-ins with detailed drill-downs for security teams to investigate.
  • API access – Allows exporting risk detections to SIEMs for further correlation and automated workflows.
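The API access feature above can be sketched as a Microsoft Graph query. This is a minimal sketch that only builds the request URL, assuming an access token is acquired separately; the `riskDetections` endpoint is part of Graph v1.0, while the specific filter values (risk level, lookback window) are illustrative.

```python
from datetime import datetime, timedelta, timezone
from urllib.parse import urlencode

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def risk_detections_url(level: str = "medium", lookback_hours: int = 24) -> str:
    """Build a Graph URL listing risk detections at a given level within a window."""
    since = datetime.now(timezone.utc) - timedelta(hours=lookback_hours)
    # $filter uses the riskDetection resource's queryable properties.
    flt = (f"riskLevel eq '{level}' and "
           f"detectedDateTime ge {since.strftime('%Y-%m-%dT%H:%M:%SZ')}")
    return f"{GRAPH_BASE}/identityProtection/riskDetections?{urlencode({'$filter': flt})}"
```

The resulting URL can be fetched with any HTTP client carrying an `Authorization: Bearer …` header, and the response forwarded to a SIEM for correlation.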

How Can Azure Identity Protection Improve Security?

Overall, Azure Identity Protection has numerous benefits for organizations, including:

  • Detects potential vulnerabilities affecting identities.
  • Investigates risky incidents and takes appropriate action.  
  • Remediates issues and enforces multi-factor authentication.
  • Identifies patterns using machine learning to enhance protection.
  • Reduces the account takeover risk.
  • Promptly identifies, investigates, and responds to identity threats.
  • Helps you view and understand IAM policies.

How to Get the Most out of Azure Identity Protection 

1. Engage Stakeholders Early 

Securing buy-in and setting clear expectations upfront with internal stakeholders (IT security teams, infrastructure and operations teams, application owners, and business leaders) ensures smooth adoption and optimal use of Identity Protection, which maximizes ROI.


  • Identify all stakeholders impacted by Identity Protection, such as security admins, IT teams managing authentication systems, application owners, helpdesk, etc.
  • Conduct workshops to demonstrate Identity Protection capabilities like risk modeling, automated response, and reporting.
  • Define their roles and responsibilities in deploying, supporting, and getting value from the solution.
  • Get their input on policies, alerts, and integrations with other tools like SIEM based on their needs.
  • Keep them updated on timelines, changes, and requirements from their side.
  • Set up an onboarding plan for users impacted by MFA registration and risk policies.
  • Create a support/escalation process for issues faced by users.
  • Establish a feedback process for continuous improvement after rollout.

2. Configure Risk Policies

Azure Identity Protection allows the creation of Conditional Access policies to automatically respond to user and sign-in risks detected through its AI-driven risk modeling. Properly tuned risk policies act as automated sentinels, promptly responding to abnormal activity before incidents occur.


  • Enable the user risk policy to enforce actions like MFA or password change for risky users.
  • Enable the sign-in risk policy to trigger MFA prompts or block access for risky sign-in attempts.
  • Set appropriate thresholds for user/sign-in risks based on your security posture. Aggressive thresholds lead to more false positives.
  • Scope policies to all users or specific critical groups like administrators, based on coverage needed.
  • Exclude emergency access accounts from the policies to prevent accidental lockout.
  • Use report-only mode to evaluate policy impact before full enforcement.
  • Review automated remediations in usage reports to correlate risk detections with user/sign-in actions taken.
  • Adjust policies periodically based on analysis of risk patterns in your environment.
  • Export risk detections to your SIEM for further correlation and monitoring coverage.
  • Ensure sufficient testing to balance security with user experience.
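The policy settings above can be expressed as a Conditional Access payload for Microsoft Graph (`POST /identity/conditionalAccess/policies`). A minimal sketch: the field names follow the `conditionalAccessPolicy` resource, while the display name and the excluded account IDs are placeholders for your own values.

```python
def sign_in_risk_policy(excluded_user_ids, report_only=True):
    """Sketch of a sign-in risk Conditional Access policy payload."""
    return {
        "displayName": "Require MFA for medium+ sign-in risk",
        # Report-only mode lets you evaluate impact before full enforcement.
        "state": "enabledForReportingButNotEnforced" if report_only else "enabled",
        "conditions": {
            "signInRiskLevels": ["medium", "high"],
            "applications": {"includeApplications": ["All"]},
            "users": {
                "includeUsers": ["All"],
                # Keep emergency access (break-glass) accounts out of scope.
                "excludeUsers": list(excluded_user_ids),
            },
        },
        "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
    }
```

Flipping `report_only` to `False` is the enforcement step, taken only after reviewing the report-only results.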

3. Require MFA Registration

The Azure Identity Protection policy for MFA registration prompts users to enroll for Azure AD Multi-Factor Authentication (MFA) during sign-in, strengthening account security beyond just passwords. 


  • Enable policies for cloud services, applications, and systems, and enforce registration for all users. Gradual rollout is also an option to minimize disruption.
  • Educate users on MFA and how it safeguards their accounts. Provide clear instructions on enrolling their devices for MFA.
  • For mobile devices, ensure users have downloaded and activated the Microsoft Authenticator app.
  • For desktops/laptops, guide users to enable phone-based MFA or FIDO2 security keys as the second factor.
  • Encourage users to register multiple verification methods as backups.
  • Prioritize MFA registration for privileged accounts like administrators to protect critical access.
  • Consider excluding break-glass accounts from MFA to prevent a lockout.
  • Use report-only mode initially to gauge impact before enforcing registration.
  • Evaluate if existing MFA solutions need to be phased out after the Azure AD MFA rollout.
  • Monitor registration status and follow up with users who don’t complete registration after prompts.
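Monitoring registration status can be scripted against a report such as Graph's `/reports/authenticationMethods/userRegistrationDetails`. A sketch that assumes the report rows have already been fetched as JSON and surfaces unregistered users with privileged accounts first:

```python
def follow_up_list(records):
    """Users still missing MFA registration, privileged accounts first."""
    # isMfaRegistered / isAdmin mirror the report's field names.
    pending = [r for r in records if not r.get("isMfaRegistered")]
    pending.sort(key=lambda r: not r.get("isAdmin", False))
    return [r["userPrincipalName"] for r in pending]
```

The returned list is the follow-up queue: chase the admin accounts at the head of it first.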

4. Exclude Emergency Accounts 

When configuring Azure Identity Protection risk policies for user and sign-in risks, excluding emergency access or break-glass administrator accounts from the scope is crucial. The rationale is to maintain admin access to Azure AD in worst-case scenarios like mass user lockouts due to policy misconfiguration or synchronization errors. 


  • Create at least two emergency access accounts in your Azure AD tenant and ensure they have global administrator privileges.
  • Explicitly exclude these accounts from the Conditional Access policies enforcing MFA, password reset, etc., for risky users/sign-ins.
  • Have a documented process for keeping these accounts secure, including:
    • Ensure the credentials are known only to designated people like CISOs, security leads, etc.
    • Rotate the credentials periodically, ideally every 30-90 days.
    • Monitor the accounts for any anomalous sign-in or usage patterns.
    • Outline the verification process before use if accounts are accidentally locked out.
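One way to verify the exclusions hold over time is to audit exported policies. A sketch, assuming the policies have already been fetched as JSON (e.g. from `GET /identity/conditionalAccess/policies`) and that `break_glass_ids` holds your emergency account object IDs:

```python
def missing_exclusions(policies, break_glass_ids):
    """Return (policy name, account id) pairs where a risk-based
    policy does not exclude a break-glass account."""
    gaps = []
    for p in policies:
        cond = p.get("conditions", {})
        # Only policies scoped by user or sign-in risk are relevant here.
        if not (cond.get("userRiskLevels") or cond.get("signInRiskLevels")):
            continue
        excluded = set(cond.get("users", {}).get("excludeUsers", []))
        for acct in break_glass_ids:
            if acct not in excluded:
                gaps.append((p.get("displayName", "?"), acct))
    return gaps
```

An empty result means every risk policy excludes every break-glass account; anything else is a lockout risk to fix before it bites.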

5. Add Trusted Locations

Configuring named locations for office networks and VPN ranges is an essential optimization for Azure Identity Protection’s risk modeling. By declaring internal networks as trusted/known locations, sign-ins originating from them will have lower risk scores in Identity Protection. This minimizes false positives and unnecessary challenges for users accessing resources on the internal corporate network.


  • Create named locations in Azure AD Conditional Access representing office IP ranges and VPNs.
  • Mark on-premises office networks as ‘trusted locations’ once configured.
  • Configure VPN IP ranges as named locations marked as ‘known.’
  • Update location definitions if office networks or VPNs change.
  • Verify location mappings by checking sign-in logs to see if IP addresses are correctly matching.
  • Exclude unnamed/unknown locations from the trusted or known designations.
  • Review sign-in logs periodically to validate mappings.
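Creating a named location can be sketched as a Graph `ipNamedLocation` payload (`POST /identity/conditionalAccess/namedLocations`). The name and CIDR ranges below are placeholders for your own office networks:

```python
def trusted_office_location(name, cidrs):
    """Sketch of an ipNamedLocation payload marked as trusted."""
    return {
        "@odata.type": "#microsoft.graph.ipNamedLocation",
        "displayName": name,
        "isTrusted": True,  # lowers risk scores for sign-ins from these ranges
        "ipRanges": [
            {"@odata.type": "#microsoft.graph.iPv4CidrRange", "cidrAddress": c}
            for c in cidrs
        ],
    }
```

The same shape, with `isTrusted` left false, covers "known" (but not trusted) ranges such as VPN egress points.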

6. Enable Security Monitoring

Ongoing monitoring and alerting are critical for managing risks Azure Identity Protection detects. Robust monitoring is vital to realizing the value of Identity Protection. You can make risk visibility and response coordination a 24×7 capability through SIEM integration and smart workflows.


  • Configure email notifications for new risky users/sign-ins and weekly digests. Alert appropriate security team members.
  • Review Identity Protection reports like risky users, sign-ins, and risk detections daily.
  • Create Azure Monitor dashboards to view risk trends, policy actions taken, and track remediation status.
  • Use the Identity Protection workbook template to gain insights through interactive reports.
  • Export Identity Protection events via the Graph Security API to your SIEM solution.
  • Correlate risk detections with other identity-related security events in your SIEM for enhanced monitoring.
  • Configure playbooks in solutions like Azure Sentinel to trigger response workflows based on Identity Protection alerts.
  • Document investigation and remediation processes for security operations.
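Exporting a paged Graph collection such as `/identityProtection/riskyUsers` to a SIEM means following `@odata.nextLink` until it disappears. A sketch, with `fetch` standing in for an authenticated HTTP GET that returns parsed JSON:

```python
def drain_collection(url, fetch):
    """Collect every item from a paged Graph response."""
    items = []
    while url:
        page = fetch(url)
        items.extend(page.get("value", []))
        # Graph signals further pages via @odata.nextLink; absent on the last page.
        url = page.get("@odata.nextLink")
    return items
```

The flattened list can then be shipped to the SIEM in bulk, keeping the export resilient to result-set paging.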

7. Train End Users

An educated user base is less prone to lapses that can jeopardize account security. Continuous training and motivation help sustain user cooperation, which is crucial for successful Identity Protection usage.


  • Inform users about new Identity Protection policies like MFA registration and risk-based sign-in challenges. Explain the needs and benefits.
  • Provide clear instructions on enrolling for MFA during sign-in prompts. 
  • Train users on proper password hygiene, like using strong passwords, password managers, and not reusing passwords across sites.
  • Set up self-service password reset (SSPR) for accounts. Raise awareness about phishing attacks and how to identify suspicious emails or links.
  • Inform users about reporting suspicious activity, like unfamiliar sign-in locations.
  • Reward security-conscious behavior by employees to drive engagement.
  • Track training completion rates and measure security awareness over time via red teaming.

8. Leverage Rezonate’s Identity Threat Protection 

While Azure Identity Protection provides fundamental risk detection capabilities, solutions like Rezonate can augment protection for cloud identities and access. Rezonate is an identity threat protection platform that integrates natively with Azure AD and continuously analyzes identity and access patterns to uncover risks. Rezonate also protects against and monitors potential compromises of the Azure AD admin console and infrastructure, such as those experienced in July this year.

Key capabilities to help you maximize Identity Protection:

  • Discovers all human and service identities across Azure AD, Okta, Google Workspace, etc., and provides a unified view.
  • Continuously monitors privileged access and entitlements to detect anomalies and generate actionable alerts.
  • Correlates cross-cloud identity management, events, and user behaviors to identify sophisticated attack patterns.
  • Recommends one-click remediation actions for resolving policy violations or mitigating risks.
  • Automates response through integration with existing workflows like Azure playbooks.
  • Provides easy-to-understand visualizations of identity relationships and access paths to resources.

Identify Vulnerabilities Before Attackers Do

Azure Identity Protection offers adaptive and intelligent capabilities to identify vulnerabilities, remediate risks, and enforce access controls. You can build a comprehensive identity-centric security strategy by leveraging features like risky sign-in policies and integrating Identity Protection with solutions like Rezonate. Rezonate provides unified visibility, detection, and response powered by industry-leading identity graphs. See Rezonate in action today.

Continue Reading

More Articles
GitHub Account Takeover of AWS and GCP Accounts

From GitHub to Account Takeover: Misconfigured Actions Place GCP & AWS Accounts at Risk

In April 2023, Rezonate research team explored prevalent misconfigurations of GitHub integration with cloud native vendors. GitHub OIDC-based trust relations have been found with the critical misconfigurations that leave connected AWS/GCP accounts vulnerable to potential takeover attacks. Although this issue was discovered and reported in the past, we have found that dozens of GitHub Public Repositories, and potentially many private ones, have demonstrated this critical issue, leaving dozens of companies vulnerable. The team notified recognizable affected organizations, but there may be more private repositories and organizations at risk. In this article, we will introduce the misconfiguration research process,  explain the OIDC implementation process that GitHub uses to authenticate to the cloud, present the misconfigurations that we have identified across various organizations, provide a step-by-step guide for discovering and fixing the problem, and propose how to avoid the issue completely. Key Points Rezonate research team has explored an attack vector that exposed AWS & GCP accounts to unauthorized access through misconfigured OpenID (OIDC) GitHub Actions role. Based on our scans, these known misconfigurations exist in dozens of the GitHub public repositories that use Github OIDC provider to AWS or GCP. During the last few weeks, the Rezonate team has reached out to a few dozens of vulnerable organizations and personal accounts, providing them with information in order to remediate this misconfiguration. To check whether your GCP or AWS accounts have been affected by this misconfiguration, please refer to the “Remediation Guidelines” section. Background Cloud and SaaS technologies have made identity and access controls the connective tissue between operational tools and cloud-native environments. 
To ensure business operations flow as seamlessly and as fast as possible, IAM and Devops teams are required to share administrative access and create trust relations between 3rd-party tools and their core cloud environments. The key challenge is to limit privileges and conditions on these cases, and to protect and monitor this highly-privileged administrative access. One of the most common examples of trust relations can be found in a software CI/CD pipeline, where strong privileges and access to an organization’s Cloud infrastructure are provided. We’ve seen dozens of cases where this access path was exploited due to misconfigurations, lost credentials and access keys, and compromised supply chains such as the CircleCI incident, the NPM incident, and many others. Figure 1: Attacker scans for misconfigured trust relations in GitHub, and uses them to impersonate a CI/CD app, gaining full access to AWS or GCP accounts. Due to the highly-sensitive nature of these access paths, CI/CD vendors have added support for the OpenID Connect (OIDC) protocol to supply short-term credentials, which are considered more secure. The trend toward OIDC support is reflected through the announcements of GitHub, GitLab, and CircleCI within the last 2 years. OIDC is a modern, secure authentication method for creating trust relationships, but when not configured correctly, the method can become the root cause of a hidden, critical access risk. Even though these misconfigurations were reported in the past, in books, as well as in security blogs, the Rezonate research team has identified many organizations that remain vulnerable.  GitHub OpenID Provider Integration The GitHub OIDC deployment guide highlights many advantages to using OIDC over access keys. OIDC allows you to adopt good security practices, such as: No cloud secrets: Eliminating the need to store cloud credentials as long-lived secrets. 
Instead, workflows will request and use a short-lived access token from the cloud provider through OIDC. Authentication and Authorization management: Having granular control over how workflows can use credentials, using the cloud provider's authentication and authorization tools to control access to cloud resources. Rotating credentials: With OIDC, the cloud provider issues a short-lived access token that is only valid for a single job, and then automatically expires. The following diagram provides an overview of how GitHub's OIDC provider integrates with GitHub Actions and the cloud provider: Figure 2: Overview of GitHub OIDC provider integration In general, integration involves the following steps: Start by creating an Identity Provider in the target cloud environment. GitHub Actions will generate an OIDC temporary token containing execution context (such as the repository, organization, or branch). GitHub will send a request to the cloud provider, containing the GitHub OIDC token. The cloud provider will validate the token and the context, and exchange it with short-lived credentials that allow access to the cloud environment. GitHub OIDC Integration with GCP & AWS Integrating GitHub Actions through OIDC includes setting up trust between the cloud infrastructure and GitHub, which workflows can then use to access roles and service accounts. Configure Trust The process of configuring trust between the cloud provider and GitHub is similar between different cloud providers, and includes two main steps: Create an OIDC Identity Provider that points to: https://token.actions.githubusercontent.com. Create a Role or Service account and establish trust with the recently created Identity Provider. Let's examine the setup for GCP and AWS. Configure Trust with GCP Based on Google Cloud documentation, the following process creates a Workload Identity Federation, which provides the issuer ID and basic attribute mapping. 
Figure 3: Creating a Workload Identity Federation Optionally, after configuring the attribute mapping according to the Google Cloud documentation, add Attribute Conditions such as a GitHub organization, repository, or branch. Figure 5: Adding Attribute Conditions Now that we have the Identity Provider configured, we can bind it to service accounts and allow GitHub to use them as part of its workflow. In the picture below we can see the github-actions-integration service account connected to the Identity Provider recently created.  Figure 5: Bind service accounts to Identity Provider Configure Trust with AWS Similar to GCP, AWS starts with creating a new Identity Provider (OpenID Connect) with the GitHub Provider URL. Figure 6: Create an Identity Provider in AWS Next, we can create an IAM Role and reference the recently created Identity Provider, allowing users federated by the Identity Provider to assume roles in the account. Figure 7: Create an IAM Role Next, we name the role, add a description, and modify the access conditions of the role. By default the trust policy condition only includes audience filters. Figure 8: Modify access conditions using filters As mentioned in the GitHub integration documentation, it's highly recommended that you limit access to specific repositories by adding filters against the token subject. Note that by default the AWS process does not enforce adding additional conditions. To add conditions, click the “Edit Policy” button, which reveals the following warning: Figure 9: Missing additional conditions warning in AWS Configure GitHub Workflow Now that everything is configured from the cloud infrastructure side, the next step is to use the relevant GitHub actions as part of the workflow. First, add permissions to the workflow, allowing it to read an OIDC token. 
Add the following permissions to the beginning of the workflow YAML file: Figure 10: Add permissions to GitHub workflow Next, add the matching action to the authentication process, based on the cloud provider:For AWS, use the configure-aws-credentials: Figure 11: Add matching action to authentication process - AWS For GCP, use the google-github-action/auth: Figure 12: Add matching action to authentication process - GCP After performing the steps above, the integration process is completed and the workflow can perform operations in the cloud environment. Potential Misconfiguration Although the integration between the cloud and GitHub is relatively simple, potential misconfigurations could expose access to unauthorized parties. GitHub articles refer to Conditional access as part of the integration process, however, the cloud infrastructure is not flagging or alerting when conditional access is not being used. The lack of notification from the cloud provider during the setup increases the chances that an organization will have misconfigured trust settings. Those misconfigurations may result in roles and service accounts that can be abused by threat actors/attackers, leading to unauthorized access. As a result of our research, we have discovered two specific types of misconfigurations that can be exploited by attackers to gain unauthorized access to a trusted cloud account. Misconfiguration - Lack of Subject Condition This misconfiguration occurs when the user who integrated the role did not add conditional limitations to cloud access. This misconfiguration allows any GitHub organization to access the cloud account. Misconfiguration - Poorly-Defined Condition Pattern This misconfiguration occurs when the conditions defined limit the subject, but are not restrictive enough and thus can be bypassed. 
For example, the subject condition may include a wildcard to allow all the repositories in the organization to use the same role or service account as part of the GitHub Actions workflow. If the wildcard is mistakenly placed in the organization name (and not in the repository name), the condition allows unauthorized access from any GitHub organization with a name that fits into the pattern. Identifying Vulnerable Organizations To Identify vulnerable organizations and understand how prevalent misconfigurations are, we used GitHub search, looking for Actions workflows that use OIDC with AWS or GCP. Using GitHub Code Search, we executed the following queries: Figure 13: GitHub Code Query - AWS OIDC Workflows Figure 14: GitHub Code Query - GCP OIDC Workflows We split the query into subqueries to avoid reaching GitHub’s result limit, and eventually produced a list of roles and accounts that were potentially vulnerable. Having the list, the next step was to identify what was misconfigured. GitHub actions query returned thousands of results, and checking them through GitHub actions would be time-consuming. As a workaround, the team developed a different approach to identify vulnerable organizations using self-hosted runners. Self-Hosted Runners Github Self-Hosted Runners is a feature that allows organizations to execute parts of the GitHub Actions workflows in their own environment. The team discovered that by controlling the machine that authenticates against the cloud, we can extract the OIDC token, and perform batch tests on AWS roles and GCP service accounts outside of GitHub. The batch tests included the following steps: Setup a self-hosted runner:Download and install the GitHub self-hosted runner: Figure 15: Our self-hosted runner Configure the runner to proxy all of the traffic through our local proxy:Use the http-proxy parameter within the GitHub action. Figure 16: The GitHub workflow configured to trigger token generation. 
Intercept a workflow request and extract the Web Identity Token:After everything was ready, we triggered the workflow by performing a commit to the repository. The commit started a GitHub workflow which performed a network request to assume the target role with the Web Identity token. Then, we extracted the token from the request sent by the runner to AWS. Figure 17: Extract token from runner request - AWS Decoding the JWT Token reveals our sub, along with other identifiers that are being sent and logged by the cloud provider as part of the authentication process. Figure 18: Decoding JWT token Copy the token to our multithreaded script, which will check the potentially vulnerable roles. Using a pre-developed script and leveraging Boto3 and Gcloud, we scanned a list of potentially vulnerable roles while having the JWT at hand. Results After reducing duplicate roles and service accounts, we scanned approximately 1500 roles and service accounts across GCP and AWS. As for the first misconfiguration, lack of subject condition, we determined that dozens of the roles were vulnerable, allowing anyone to use them to access the accounts. The second misconfiguration, poorly-defined condition, was harder to test and required setting up a dedicated GitHub organization per target. Thus, we performed a random test against 20 organizations, finding one of them to be vulnerable to this attack. The vulnerable accounts included various organizations, including private companies, non-profit organizations, software development studios, and many personal accounts. During the last few weeks we have reached out to the relevant organizations and people, sending them information regarding the vulnerable roles and sharing remediation guidance. Remediation Guidelines Could My Environment Be Affected?  Based on our research, this type of misconfiguration is a risk for AWS and GCP users who use GitHub Actions with modern authentication (OIDC). 
Identify Potential Misconfigurations: Lack of Subject Conditions To identify this misconfiguration In AWS, check for roles that have no subject limitations in its trust policy. Figure 19: Misconfigured Role with no Subject - Example - AWS To identify this misconfiguration in GCP, check for service accounts connected to the Identity Provider that has access to the Entire Pool. Figure 20: Misconfigured Role with no Subject - Example - GCP Identify Potential Misconfigurations: Poorly-Defined Conditions To identify this misconfiguration in AWS, locate the StringLike conditions for roles that are connected to the Identity Provider, and check for wildcards in the organization name. Figure 21: Misconfigured Role with Poorly-Defined Condition Pattern - AWS For example, in this case, we have a role that was originally intended to be used by different repositories within the Rezonate organization. Since the StringLike conditions included a wildcard in the organization name,  every organization that starts with “rezonate” can assume this role. This means that an attacker can create a new GitHub organization with the name “rezonateX” and get access to the role. This misconfiguration option does not apply to GCP, as GCP does not support wildcards. Identity Risks in the Rezonate UI Rezonate customers should look for the security risk “GitHub Actions Role vulnerable to hijacking” in the Rezonate UI and follow the guided remediation steps attached to it. Use a Script to Perform a Quick Scan For the general public, we have released a script to our GitHub repository (link here), which performs a quick scan against the AWS account or GCP project and reveals possible vulnerable roles and service accounts.
Read More
Threat Hunting for Identity Threats in Snowflake

Frosty Trails: Threat-Hunting for Identity Threats in Snowflake

The Snowflake platform has revolutionized how organizations store, process, and analyze large volumes of data. It offers a fully-managed data warehouse-as-a-service solution, providing a scalable and flexible architecture allowing seamless data integration from multiple sources. As Snowflake rises in popularity as one of the top ten cloud vendors in the world, its attractiveness to organizations also draws the attention of malicious attackers. The platform's widespread adoption and extensive use in storing valuable data make it a lucrative target for cyber threats. As a result, you must implement security measures, stay vigilant against emerging threats, and continuously update your defense mechanisms to safeguard Snowflake data sharing and infrastructure from potential attackers.This post will help you grasp how to use Snowflake's built-in logging features for your security operation routine. We will explore the relevant data Snowflake exposes for hunting and describe ten threat scenarios and how to detect them. We will also share a script to execute them and perform quick threat-hunting operations in your environment to stay secure and audit-ready. What is Snowflake? Snowflake is a cloud-based data platform that provides a fully managed and scalable solution for storing, processing, and analyzing large volumes of data. It is designed to work on top of popular cloud providers like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). Snowflake is built with a unique architecture that separates storage and computing, allowing identities to scale resources independently based on their needs.  This architecture, combined with its ability to process semi-structured and structured data, enables Snowflake to deliver high performance and cost-efficiency for data workloads of any size. As we'll see in the next section, a Snowflake account can hold multiple databases without taking care of the infrastructure. 
Key Snowflake Logging Features

Each Snowflake account has a default database called SNOWFLAKE. It is a shared read-only database that holds metadata and historical usage data for the objects within your organization and accounts.

The SNOWFLAKE database has a built-in schema called ACCOUNT_USAGE, a system-defined schema containing a set of views that provide access to comprehensive, granular usage information for the Snowflake account. It is a valuable tool for monitoring and understanding how resources are utilized within your Snowflake environment. The schema includes views that cover user activity, query history, warehouse usage, login history, data transfer details, and more.

There are additional logging mechanisms in Snowflake, such as INFORMATION_SCHEMA and READER_ACCOUNT_USAGE. Throughout this post, we will rely on the ACCOUNT_USAGE views.

Exploring ACCOUNT_USAGE

By default, the SNOWFLAKE database is accessible to the ACCOUNTADMIN role. You can grant access to additional roles with the following command:

GRANT IMPORTED PRIVILEGES ON DATABASE snowflake TO ROLE rezonate_integration;

You can explore the available views by logging in to your Snowflake account as an ACCOUNTADMIN and performing the following steps:

1. From the left pane, choose Data and then Databases.
2. Select the SNOWFLAKE database and expand it.
3. Expand ACCOUNT_USAGE and select any of the views within it.

Each view in the schema has its own column structure. You can see the available columns for each view by clicking the view name, choosing the Columns tab, and selecting "Explore available columns". Comprehensive documentation is available per view, including its retention period and logging latency.

Snowflake Data Governance - How to Access Snowflake Audit Logs

The Snowflake logs are accessible through a few methods, as you'll see below.

1. Snowflake Console

The most straightforward method is logging in to a Snowflake account with a user that has read permissions on the ACCOUNT_USAGE schema. Then, choose "Worksheets" from the left pane and ensure the worksheet is querying the correct data source. You should see something like the query browser below.

2. SnowSQL

SnowSQL is a command-line tool provided by Snowflake for interacting with Snowflake's data warehouse: executing SQL queries, managing data, and performing various administrative tasks. It acts as the official command-line client for Snowflake, allowing users to connect to their accounts and work with data using SQL commands. Information about installing, configuring, and using it is available in Snowflake's documentation.

3. Exporting to external storage

Snowflake facilitates data export to cloud storage services like AWS S3 through its "Stage" functionality. The data exported from Snowflake can be saved in various file formats. You can find detailed information about stages on Snowflake's official documentation page. Once the stage setup is complete and data is exported, you can use your preferred analysis tool to centralize the data in a location of your choice.

4. Snowflake SDKs

Beyond the structured methods mentioned above, Snowflake exposes a native REST API accessible through different SDKs, which any script or tool can use for purposes like exporting data. One example is the Rezonate Threat-Hunting Tool, which takes advantage of the Snowflake Python SDK to execute threat-hunting queries; we'll find out more later in the blog. Besides the Python SDK, the Snowflake team has developed drivers for many popular languages, including .NET, NodeJS, and Go. The full list is available here.

10 Snowflake Threat-Hunting Techniques to Implement Now

Now that we've learned about Snowflake's structure, basic permissions, and integrations, we can start threat-hunting.
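To make the SDK route concrete, here is a minimal sketch (not the Rezonate tool itself) of pulling ACCOUNT_USAGE rows with the Snowflake Python connector. The helper name, the credential placeholders, and the default time column are assumptions for illustration; the view and column names come from the scenarios below.

```python
# Hypothetical helper: builds a time-bounded SELECT against an ACCOUNT_USAGE view.
def account_usage_query(view: str, ts_column: str = "EVENT_TIMESTAMP", hours: int = 24) -> str:
    # LOGIN_HISTORY uses EVENT_TIMESTAMP; QUERY_HISTORY uses START_TIME.
    return (
        f"SELECT * FROM SNOWFLAKE.ACCOUNT_USAGE.{view} "
        f"WHERE {ts_column} >= DATEADD(HOUR, -{hours}, CURRENT_TIMESTAMP());"
    )

def run_hunt(view: str, ts_column: str = "EVENT_TIMESTAMP", hours: int = 24):
    # Imported lazily so the query builder is usable without the driver installed.
    import snowflake.connector  # pip install snowflake-connector-python

    conn = snowflake.connector.connect(
        account="<account_identifier>",  # placeholders: fill in for your environment
        user="<user>",
        password="<password>",
        role="ACCOUNTADMIN",
    )
    try:
        return conn.cursor().execute(account_usage_query(view, ts_column, hours)).fetchall()
    finally:
        conn.close()

if __name__ == "__main__":
    print(account_usage_query("LOGIN_HISTORY"))
```

The same pattern works for any of the views used in the scenarios below; only the view name and its timestamp column change.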
In this section, we will guide you through some critical threat-hunting scenarios to look out for and explain each one. We will also note the relevant Snowflake views, map each scenario to the specific MITRE ATT&CK technique, and include our own query in Snowflake query syntax. Remember, you can copy and paste them directly into your worksheet. It is important to highlight that some hunting queries may produce false positives, depending on the environment, and may need adjustments to reduce noisy results.

Scenario 1 - Brute Force on a Snowflake User

A brute force attack on a Snowflake user happens when an attacker uses trial and error to repeatedly submit different combinations of usernames and passwords and eventually gain unauthorized access. To hunt for this type of attack, you can search for an attacker who performed more than X failed login attempts on at least Y target users, ending with either a failed or a successful login. In failure cases, the activity may result in a user lockout.

Relevant Snowflake View
SNOWFLAKE.ACCOUNT_USAGE.LOGIN_HISTORY

Query

-- Get users who failed to login from the same IP address at least 5 times
select CLIENT_IP, USER_NAME, REPORTED_CLIENT_TYPE,
       count(*) as FAILED_ATTEMPTS,
       min(EVENT_TIMESTAMP) as FIRST_EVENT,
       max(EVENT_TIMESTAMP) as LAST_EVENT
from SNOWFLAKE.ACCOUNT_USAGE.LOGIN_HISTORY
where IS_SUCCESS = 'NO'
  and ERROR_MESSAGE in ('INCORRECT_USERNAME_PASSWORD', 'USER_LOCKED_TEMP')
  and FIRST_AUTHENTICATION_FACTOR = 'PASSWORD'
  and EVENT_TIMESTAMP >= DATEADD(HOUR, -24, CURRENT_TIMESTAMP())
group by 1,2,3
having FAILED_ATTEMPTS >= 5
order by 4 desc;
-- For each result, check if the source IP address managed to log in to the target user AFTER the LAST_EVENT time

MITRE Technique
Credential Access | Brute Force | ATT&CK T1110

Scenario 2 - Password Spray on a Snowflake Account

A password spray attack on a Snowflake account involves an attacker trying a small set of passwords against many different usernames to eventually manage to log in and gain unauthorized access. To hunt for any occurrence of this scenario, you can search for an attacker who performed failed login attempts on at least Y unique target users from the same IP address.

Relevant Snowflake View
SNOWFLAKE.ACCOUNT_USAGE.LOGIN_HISTORY

Query

-- Get IP addresses that failed to login to multiple unique users
select CLIENT_IP, REPORTED_CLIENT_TYPE,
       count(distinct USER_NAME) as UNIQUE_USER_COUNT,
       min(EVENT_TIMESTAMP) as FIRST_EVENT,
       max(EVENT_TIMESTAMP) as LAST_EVENT
from SNOWFLAKE.ACCOUNT_USAGE.LOGIN_HISTORY
where IS_SUCCESS = 'NO'
  and ERROR_MESSAGE in ('INCORRECT_USERNAME_PASSWORD', 'USER_LOCKED_TEMP')
  and FIRST_AUTHENTICATION_FACTOR = 'PASSWORD'
  and EVENT_TIMESTAMP >= DATEADD(HOUR, -24, CURRENT_TIMESTAMP())
group by 1,2
having UNIQUE_USER_COUNT >= 5
order by 3 desc;
-- For each result, check if the source IP address managed to log in to any of the target users AFTER the LAST_EVENT time

MITRE Technique
Credential Access | Brute Force | ATT&CK T1110

Scenario 3 - Unauthorized Login Attempt to a Disabled/Inactive User

In some cases, Snowflake user accounts might have been disabled due to security concerns, or as part of employee off-boarding. Monitoring login attempts to disabled users can help you detect unauthorized activities.

Relevant Snowflake View
SNOWFLAKE.ACCOUNT_USAGE.LOGIN_HISTORY

Query

-- Search for login attempts to disabled users
select *
from SNOWFLAKE.ACCOUNT_USAGE.LOGIN_HISTORY
where IS_SUCCESS = 'NO'
  and ERROR_MESSAGE = 'USER_ACCESS_DISABLED';

MITRE Technique
Credential Access | Brute Force | ATT&CK T1110

Scenario 4 - Login Attempt Blocked by Network Policy

Snowflake network policies are a set of rules that govern network communication and access control within the Snowflake data platform.
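Before moving on, note that the aggregation logic behind Scenarios 1 and 2 is easy to prototype offline, for example over LOGIN_HISTORY rows exported to external storage. The record shape below is an assumption for this sketch:

```python
from collections import defaultdict

# Sample failed password logins, shaped loosely like LOGIN_HISTORY rows
# (an assumption for illustration).
events = [
    {"client_ip": "198.51.100.7", "user": f"user{i}", "success": False}
    for i in range(6)
] + [{"client_ip": "203.0.113.9", "user": "alice", "success": False}] * 5

def spray_candidates(events, min_unique_users=5):
    """IPs that failed logins against many distinct users (password spray)."""
    users_per_ip = defaultdict(set)
    for e in events:
        if not e["success"]:
            users_per_ip[e["client_ip"]].add(e["user"])
    return {ip for ip, users in users_per_ip.items() if len(users) >= min_unique_users}

def brute_force_candidates(events, min_failures=5):
    """(IP, user) pairs with many failed attempts (brute force)."""
    fails = defaultdict(int)
    for e in events:
        if not e["success"]:
            fails[(e["client_ip"], e["user"])] += 1
    return {pair for pair, n in fails.items() if n >= min_failures}

print(spray_candidates(events))        # -> {'198.51.100.7'}
print(brute_force_candidates(events))  # -> {('203.0.113.9', 'alice')}
```

The thresholds mirror the HAVING clauses in the SQL versions and should be tuned to your environment's noise level.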
A network policy can deny a connection based on the client's characteristics, such as IP address, to enforce organizational policy and reduce the impact of leaked credentials. By searching for these failed logins, we can identify violations of organizational policy that may suggest compromised credentials.

Relevant Snowflake View
SNOWFLAKE.ACCOUNT_USAGE.LOGIN_HISTORY

Query

-- Search for IP addresses blocked by network policies
select *
from SNOWFLAKE.ACCOUNT_USAGE.LOGIN_HISTORY
where IS_SUCCESS = 'NO'
  and ERROR_MESSAGE = 'INCOMING_IP_BLOCKED'
  and EVENT_TIMESTAMP >= DATEADD(HOUR, -24, CURRENT_TIMESTAMP());

Scenario 5 - Exfiltration Through Snowflake Data Sharing

Snowflake administrators can share data stored in their accounts with other Snowflake accounts. An attacker might use shares to exfiltrate data from a compromised account to an external location. Any unauthorized event of this nature is a big red flag.

Relevant Snowflake View
SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY

Query

-- Search for new data shares
select *
from SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
where (REGEXP_LIKE(QUERY_TEXT, 'create\\s+share\\s.*', 'i')
       or REGEXP_LIKE(QUERY_TEXT, '\\s+to\\s+share\\s.*', 'i'))
  and START_TIME >= DATEADD(HOUR, -24, CURRENT_TIMESTAMP());

MITRE Technique
Exfiltration | Transfer Data to Cloud Account | ATT&CK T1537

Scenario 6 - Exfiltration Through Snowflake Stage

A Snowflake stage is an external storage location that serves as an intermediary for loading data into, or unloading data from, Snowflake, providing seamless integration with various cloud-based storage services. For example, an AWS S3 bucket can serve as a stage. You can use the following queries to search for potential data exfiltration through this feature.
Relevant Snowflake Views
SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
SNOWFLAKE.ACCOUNT_USAGE.STAGES

Query

-- Search for stage-related statements
select *
from SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
where QUERY_TEXT ilike '%COPY INTO%'
  and QUERY_TEXT ilike '%@%';

-- Show the stages that were created in the last 24 hours
select *
from SNOWFLAKE.ACCOUNT_USAGE.STAGES
where CREATED >= DATEADD(HOUR, -24, CURRENT_TIMESTAMP());

MITRE Technique
Exfiltration | Transfer Data to Cloud Account | ATT&CK T1537

Scenario 7 - Persistence Through Snowflake Procedures & Tasks

Procedures and tasks are Snowflake features that automate and manage workflows.

Snowflake procedures: user-defined scripts, written in SQL or JavaScript, that encapsulate a series of statements as a reusable unit.

Snowflake tasks: scheduled operations that automate repetitive workflows. They are defined using SQL or JavaScript and can include SQL queries, DML statements, or calls to procedures. Tasks run at specific intervals, such as hourly, daily, or weekly, making them ideal for automating data pipelines and regular data processing.

An attacker might use procedures and tasks to maintain persistence in the organization or exfiltrate data over time.
Relevant Snowflake View
SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY

Query

-- Search for new tasks
select *
from SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
where REGEXP_LIKE(QUERY_TEXT, '.*CREATE\\s+(OR\\s+REPLACE\\s+)?TASK.*', 'i')
  and START_TIME >= DATEADD(HOUR, -24, CURRENT_TIMESTAMP());

-- Search for new procedures
select *
from SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
where REGEXP_LIKE(QUERY_TEXT, '.*CREATE\\s+(OR\\s+REPLACE\\s+)?PROCEDURE.*', 'i')
  and START_TIME >= DATEADD(HOUR, -24, CURRENT_TIMESTAMP());

MITRE Techniques
Execution | Scheduled Task/Job | ATT&CK T1053
Exfiltration | Automated Exfiltration | ATT&CK T1020

Scenario 8 - Defense Evasion Through Unset Masking Policy

A Snowflake masking policy is a security mechanism that protects sensitive data within a database. It allows you to define rules for obscuring or redacting specific data elements, such as Social Security Numbers (SSNs) or credit card numbers, to limit their visibility to unauthorized users. Given the right permissions, attackers might bypass masking policies by unsetting them and then exfiltrating the sensitive information.

Relevant Snowflake View
SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY

Query

-- Search for unsetting of a masking policy
select *
from SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
where QUERY_TEXT ilike '%UNSET MASKING POLICY%'
  and START_TIME >= DATEADD(HOUR, -24, CURRENT_TIMESTAMP());

MITRE Technique
Data Manipulation | Stored Data Manipulation | ATT&CK T1565

Scenario 9 - Data Exfiltration: Spikes in User Query Volume

If an attacker manages to infiltrate a Snowflake account, they may attempt to extract data from the databases hosted in the compromised account. To detect this type of activity, you can identify users who exhibit significantly higher data querying rates than their typical usage patterns. The following query pinpoints users who have executed queries producing larger data volumes than their average daily activity over the previous week.
Triage tip: the suspicion level increases as the gap between the user's total_bytes_written for the day and the sum of stddev_daily_bytes and avg_daily_bytes grows larger.

Relevant Snowflake View
SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY

Query

-- Spikes in user query volume
WITH user_daily_bytes AS (
  SELECT
    USER_NAME AS user_name,
    DATE_TRUNC('DAY', END_TIME) AS query_date,
    SUM(BYTES_WRITTEN_TO_RESULT) AS total_bytes_written
  FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
  WHERE END_TIME >= CURRENT_TIMESTAMP() - INTERVAL '7 DAY'
  GROUP BY user_name, query_date
),
user_daily_average AS (
  SELECT
    user_name,
    AVG(total_bytes_written) AS avg_bytes_written,
    STDDEV_SAMP(total_bytes_written) AS stddev_bytes_written
  FROM user_daily_bytes
  GROUP BY user_name
)
SELECT
  u.user_name,
  ROUND(u.total_bytes_written, 2) AS today_bytes_written,
  ROUND(a.avg_bytes_written, 2) AS avg_daily_bytes,
  ROUND(a.stddev_bytes_written, 2) AS stddev_daily_bytes
FROM user_daily_bytes u
JOIN user_daily_average a
  ON u.user_name = a.user_name
WHERE u.query_date = CURRENT_DATE()
  AND u.total_bytes_written > a.avg_bytes_written
  AND u.total_bytes_written > a.stddev_bytes_written + a.avg_bytes_written
ORDER BY u.user_name;

MITRE Technique
Exfiltration | ATT&CK TA0010

Scenario 10 - Anomaly in Client Application for User

If a user's credentials are compromised, or there is an insider threat, the attacker may use enumeration tools or client applications to perform massive data exfiltration - tools the legitimate user has likely never used before. In this case, detecting any new client application used by the user could be a red flag worth investigating.
Relevant Snowflake Views
SNOWFLAKE.ACCOUNT_USAGE.SESSIONS
SNOWFLAKE.ACCOUNT_USAGE.LOGIN_HISTORY

Query

-- User uses a new client application
WITH user_previous_applications AS (
  SELECT
    USER_NAME AS user_name,
    ARRAY_AGG(DISTINCT CLIENT_APPLICATION_ID) AS previous_applications
  FROM SNOWFLAKE.ACCOUNT_USAGE.SESSIONS
  WHERE DATE_TRUNC('DAY', CREATED_ON) < CURRENT_DATE()
  GROUP BY user_name
)
SELECT
  s.USER_NAME AS user_name,
  ARRAY_AGG(DISTINCT s.SESSION_ID) AS session_ids,
  ARRAY_AGG(DISTINCT s.CLIENT_APPLICATION_ID) AS new_application_id,
  lh.CLIENT_IP AS ip_address
FROM SNOWFLAKE.ACCOUNT_USAGE.SESSIONS s
JOIN user_previous_applications u
  ON s.USER_NAME = u.user_name
JOIN SNOWFLAKE.ACCOUNT_USAGE.LOGIN_HISTORY lh
  ON s.LOGIN_EVENT_ID = lh.EVENT_ID
WHERE DATE_TRUNC('DAY', s.CREATED_ON) = CURRENT_DATE()
  AND NOT ARRAY_CONTAINS(s.CLIENT_APPLICATION_ID::variant, u.previous_applications)
GROUP BY s.USER_NAME, lh.CLIENT_IP;

MITRE Technique
Credential Access | ATT&CK TA0006

4 Additional Queries to Identify Snowflake Threats

On top of the scenarios above, there are more queries you can use to hunt for threats in a Snowflake environment. However, the results of these queries are harder to rely on, since they require deeper context about the organization's routine activity to differentiate legitimate operations from those that may be part of a threat.

For example, imagine that MFA has been disabled for an administrator. This could be either part of a malicious operation or a benign operational change. To answer that question, you would need additional context: who disabled the MFA device? Is it part of a task associated with an active project or duty? If not, was it really the user, or is it a persistence action performed by an attacker?
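The core of the Scenario 10 query above, comparing today's client applications against each user's historical set, reduces to a simple set difference. Here is a sketch over illustrative session records (the application names are made up):

```python
# Illustrative session records: (user, client_application_id) pairs.
previous_sessions = [("alice", "PythonConnector"), ("alice", "SnowSQL"), ("bob", "JDBC")]
todays_sessions = [("alice", "SnowSQL"), ("alice", "RogueExfilTool"), ("bob", "JDBC")]

def new_client_apps(previous, today):
    """Return {user: apps seen today but never before} - potential red flags."""
    history = {}
    for user, app in previous:
        history.setdefault(user, set()).add(app)
    anomalies = {}
    for user, app in today:
        if app not in history.get(user, set()):
            anomalies.setdefault(user, set()).add(app)
    return anomalies

print(new_client_apps(previous_sessions, todays_sessions))
# -> {'alice': {'RogueExfilTool'}}
```

As with the SQL version, a longer history window lowers false positives from legitimate tool changes.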
Query 1 - New Administrative Role Assignment

In the post-exploitation phase of a Snowflake attack, the attacker might grant a user an administrative role as a persistence mechanism.

Relevant Snowflake View
SNOWFLAKE.ACCOUNT_USAGE.GRANTS_TO_USERS

Query

-- Search for new admin role assignments
select *
from SNOWFLAKE.ACCOUNT_USAGE.GRANTS_TO_USERS
where ROLE in ('ORGADMIN', 'ACCOUNTADMIN')
  and CREATED_ON >= DATEADD(HOUR, -24, CURRENT_TIMESTAMP());

MITRE Technique
Persistence | Create Account | ATT&CK T1136

Query 2 - New Permissions Assigned to a Role

In the post-exploitation phase of a Snowflake attack, an attacker may add permissions to a non-privileged role in an effort to achieve persistence through a low-privileged user.

Relevant Snowflake View
SNOWFLAKE.ACCOUNT_USAGE.GRANTS_TO_ROLES

Query

-- Search for new permissions granted to non-admin roles
select *
from SNOWFLAKE.ACCOUNT_USAGE.GRANTS_TO_ROLES
where ROLE not in ('ORGADMIN', 'ACCOUNTADMIN')
  and CREATED_ON >= DATEADD(HOUR, -24, CURRENT_TIMESTAMP());

MITRE Techniques
Persistence | Create Account | ATT&CK T1136
Privilege Escalation | ATT&CK TA0004

Query 3 - Changes to User Security Settings

An attacker might change user security settings like passwords, MFA settings, or other authentication methods to ensure persistence, or as a post-exploitation step.

Relevant Snowflake View
SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY

Query

-- Search for user security settings changes
select *
from SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
where QUERY_TEXT ilike '%ALTER%USER%'
  and QUERY_TYPE = 'ALTER_USER'
  and REGEXP_LIKE(QUERY_TEXT, '.*(PASSWORD|ROLE|DISABLED|EXPIRY|UNLOCK|MFA|RSA|POLICY).*', 'i')
  and START_TIME >= DATEADD(HOUR, -24, CURRENT_TIMESTAMP());

MITRE Technique
Persistence | Account Manipulation | ATT&CK T1098

Query 4 - Changes to Network Policies

An attacker might alter network policies to allow traffic from a specific IP address.
Relevant Snowflake View
SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY

Query

-- Search for network policy settings changes
select *
from SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
where (QUERY_TYPE in ('CREATE_NETWORK_POLICY', 'ALTER_NETWORK_POLICY', 'DROP_NETWORK_POLICY')
       or QUERY_TEXT ilike any ('% set network_policy%', '% unset network_policy%'))
  and START_TIME >= DATEADD(HOUR, -24, CURRENT_TIMESTAMP());

MITRE Technique
Defense Evasion | Disable or Modify Firewall | ATT&CK T1562.004

Choose a Reliable, Fast, and Simple Snowflake Threat-Hunting Tool

In our pursuit to empower organizations with proactive cybersecurity measures, Rezonate is excited to introduce our open-source tool designed specifically for threat-hunting in Snowflake. The tool leverages the Snowflake SDK to run the different threat models mentioned in this post and lets you easily start your own hunting journey in your environment. Feel free to suggest expansions and improvements - we'd love to hear from you 🙂 You can find a link to our repository here.
CircleCI Breach: Detect and Mitigate to Assure Readiness

On January 4, 2023, CircleCI, a continuous integration and delivery (CI/CD) service, reported a data breach. The company urged its customers to take immediate action while a complete investigation was ongoing. The first critical actions recommended by CircleCI were to 'rotate any and all secrets stored in CircleCI' and 'review related activities for any unauthorized access starting from December 21, 2022 to January 4, 2023'.

Why is it such a big deal?

Malicious use of access keys in conjunction with privileged access can have a significant impact on an organization's source code, deployment targets, and sensitive data across its infrastructure. CI/CD pipeline operation requires exactly that - high-privileged access, in most cases administrative, and direct access to source code repositories - and as such, it is considered a critical component of the software development life cycle (SDLC).

Start investigating for malicious activity in your cloud environment

Data breaches are unfortunately common and should no longer be a surprise. Every third-party service or application has the potential to act as a supply chain vector for an attacker. When that occurs, excessive access that was previously benign can become a critical exposure, allowing the threat actor to exploit the system freely.
Here are the immediate next steps security and DevOps teams should take to eliminate any possible supply chain risk - those recommended by CircleCI and beyond:

Discover possible entry points - The critical first step involves mapping, linking, and reviewing the access of all secrets given to the compromised third-party service to fully understand all initial access attempts and possible lateral movement across all supply chain vectors. Specific to the CircleCI data breach, Rezonate observed that multiple accounts had AWS programmatic access keys with administrative privileges in the CircleCI configuration, allowing the creation and modification of any resource within the account.

Threat containment (& traps) - Once you identify any and all keys, the first option is to deactivate or delete them and create new ones (avoid rotating an unused key). However, while you prevent any future use of these keys, you also limit any potential traces of benign or malicious activity. Why? In the case of AWS, CloudTrail has limited authentication logging for invalid or disabled keys. A second, more preferred option is to remove all privileges from the users while keeping the keys and users active. This enables further monitoring of activity using 'canary keys', where every access attempt triggers an alert and extracts threat intelligence artifacts (IOCs such as IP addresses).

Activity review & behavioral profiling - Once you capture all suspected keys, you can begin analyzing their activity within the reported date range. In our case, we used AWS CloudTrail as the main data source and investigated the access behavioral patterns. The goal is to create a 'clean' baseline of activities that occurred prior to the breach.
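A baseline-and-compare step like this can be prototyped in a few lines of Python. The record shape below is an illustrative stand-in for CloudTrail events (which in practice you would pull via Athena or the CloudTrail API):

```python
# Illustrative CloudTrail-like events: API call name plus region.
baseline_events = [
    {"event": "GetObject", "region": "us-east-1"},
    {"event": "PutObject", "region": "us-east-1"},
]
post_breach_events = [
    {"event": "GetObject", "region": "us-east-1"},
    {"event": "CreateUser", "region": "eu-west-3"},  # never seen in the baseline
]

def profile(events):
    """Reduce events to the set of (API call, region) pairs observed."""
    return {(e["event"], e["region"]) for e in events}

def deviations(baseline, suspect):
    """Activity after the breach window that never appeared in the clean baseline."""
    return profile(suspect) - profile(baseline)

print(deviations(baseline_events, post_breach_events))
# -> {('CreateUser', 'eu-west-3')}
```

Real profiles would include more dimensions (source IP, user agent, time of day), but the mechanism, diffing post-breach activity against a clean baseline, is the same.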
To help define a profile, understand the scope, and identify any potential areas of concern, consider questions such as: which identities and resources did each key access, from where, and at what times? Once we have a good understanding of normal operation, we can apply the same approach to inspect activities from the date of the breach until the present. In this case, the context of workflows, resources, and overall architecture is paramount, so it is critical to collaborate with the dev/infra team to quickly assess, validate, and prioritize findings.

Activity review & threat models - Based on the results of the previous steps, further questions may indicate potentially malicious exploitation, such as attempts to elevate privileges, gain persistence, or exfiltrate data. To help pinpoint the most relevant findings, consider the following:

Activities performed outside of your regular regions - alert on anomalies from regular access patterns in an attempt to locate compromised resources.
Identity-creation activities (ATT&CK TA0003) - activities such as CreateUser and CreateAccessKey attempting to gain persistence.
Resource-creation activities - discover attempts to perform resource exhaustion, for example for crypto mining.
Activities performed outside of the regular CircleCI IP ranges - identify any access attempts from external IPs that may relate to known-bad intel.
Errors - detect 'pushing the limits' attempts to exploit user privileges that result in errors (e.g. AccessDenied).
Spikes in enumeration activities (ATT&CK T1580) - detect increased recon and mapping actions (e.g. user and role listing).
Defense evasion techniques (ATT&CK TA0005) - detect tampering attempts that limit defense controls (e.g. DeleteTrail or modifying GuardDuty settings).
Secret access attempts - detect brute-force actions against mapped secrets to elevate the account foothold.

It's important to consider all suggested actions as part of the overall context, as some may be legitimate while others may be malicious. By correlating them all together, you can reduce noise and false positives.

How Rezonate can help

It's important to note that while this guidance specifically addresses key actions related to the CircleCI data breach, it can also serve as best practice for addressing any breach. Rezonate automates the actions described above to streamline the compromise assessment process and reduce the time and effort required for manual analysis. Rezonate simplifies discovery, detection, and investigation of the compromise. Work with a system that can automatically correlate and summarize all activities of all identities to save critical time. Working directly with CloudTrail can be challenging: it lacks aggregation, data correlation, and privilege tagging, eventually slowing you down.

We have been collaborating with our clients and partners to utilize the Rezonate platform to thoroughly investigate the security incident and assess its potential impact. If you require assistance, please do not hesitate to contact us. Providing support to our clients and the community is a key purpose of Rezonate's founding.