
Cloud IAM Best Practices Across AWS, Azure, and GCP: A Complete Step-by-Step Guide (Updated 2026)

Updated for 2026 - The definitive guide to cloud IAM across AWS, Azure, and GCP. Step-by-step implementation of least privilege, policy design, service accounts, federation, audit logging, and common misconfigurations to avoid.

Suraj Tiwari · March 19, 2026 · 29 min read

Last updated: March 19, 2026 - This guide has been completely revised and expanded with step-by-step implementation instructions, more code examples, comparison tables, emergency runbooks, and multi-cloud governance frameworks reflecting the latest features and best practices across all three major cloud providers.

Cloud identity and access management is where security either succeeds or fails. Misconfigured IAM policies are consistently the root cause of the largest cloud breaches - an overly permissive S3 bucket policy, a service account with owner privileges, or a forgotten access key in a public repository. Gartner famously predicted that through 2025, at least 99% of cloud security failures would be the customer's fault - and IAM misconfiguration leads that list.

The challenge is compounded by the fact that AWS, Azure, and GCP each implement IAM differently, with distinct concepts, terminology, and best practices. What AWS calls a "policy," Azure calls a "role assignment," and GCP calls a "binding." The underlying principles are the same, but the implementation details matter enormously.

This guide provides a unified, actionable framework for cloud IAM. Every section includes step-by-step instructions, real configuration examples, and the specific commands you need. Whether you operate in a single cloud or a multi-cloud environment, these patterns will help you reduce your attack surface without slowing down your engineering teams.

IAM Concepts Mapped Across Clouds

Before diving into implementation, it helps to understand how core IAM concepts translate across the three major providers. This table serves as a quick reference throughout the guide.

| Concept | AWS | Azure | GCP |
| --- | --- | --- | --- |
| Identity provider | IAM Users, IAM Identity Center | Entra ID (Azure AD) | Cloud Identity / Workspace |
| Human identity | IAM User / Identity Center User | Entra ID User | Google Account / Cloud Identity User |
| Machine identity | IAM Role (for services) | Managed Identity | Service Account |
| Permission grouping | IAM Policy (JSON document) | Role Definition (JSON) | IAM Role (predefined or custom) |
| Permission assignment | Policy Attachment | Role Assignment | IAM Binding |
| Organization guardrails | Service Control Policies (SCPs) | Azure Policy / Management Groups | Organization Policies |
| Temporary credentials | STS AssumeRole | Managed Identity Token | Workload Identity Federation |
| Privilege escalation prevention | Permission Boundaries | PIM + Conditional Access | IAM Conditions + Org Policies |
| Audit trail | CloudTrail | Activity Log + Entra Audit Logs | Cloud Audit Logs |
| Access analysis | IAM Access Analyzer | Entra Access Reviews | IAM Recommender |

The Principle of Least Privilege: From Theory to Implementation

Least privilege sounds simple: give every identity only the permissions it needs to perform its job and nothing more. In practice, it is one of the hardest security principles to implement consistently because the path of least resistance is to grant broad permissions and move on. Here is how to make it real on each platform.

Step 1: Audit Your Current Permissions

Before tightening anything, you need to know what you are working with.

AWS - Generate a credential report and find unused permissions:

# Generate and download the credential report
aws iam generate-credential-report
aws iam get-credential-report --output text --query 'Content' | base64 -d > credential-report.csv
 
# List all IAM users and their attached policies
aws iam list-users --query 'Users[*].UserName' --output table
 
# For each user, check their last activity
aws iam generate-service-last-accessed-details \
  --arn arn:aws:iam::123456789012:user/example-user
 
# Use Access Analyzer to find unused access
aws accessanalyzer create-analyzer \
  --analyzer-name my-analyzer \
  --type ACCOUNT

Azure - Review role assignments and sign-in activity:

# List all role assignments in a subscription
az role assignment list --all --output table
 
# List role assignments for a specific user
az role assignment list --assignee user@company.com --output table
 
# Check sign-in logs for inactive users (requires Entra ID P1/P2)
az rest --method GET \
  --uri "https://graph.microsoft.com/v1.0/auditLogs/signIns?\$filter=createdDateTime ge 2025-01-01"

GCP - Audit IAM bindings and recommendations:

# List all IAM bindings at the project level
gcloud projects get-iam-policy PROJECT_ID --format=json
 
# List all IAM bindings at the organization level
gcloud organizations get-iam-policy ORG_ID --format=json
 
# Check IAM recommender for overprivileged accounts
gcloud recommender recommendations list \
  --project=PROJECT_ID \
  --recommender=google.iam.policy.Recommender \
  --location=global

Step 2: Implement Least Privilege Policies

Once you understand your current state, start tightening permissions systematically.

Best practice: Use a "grant-then-scope" approach. Start with broader managed/predefined roles for development environments, then use access logs to generate scoped-down policies for production.
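The scoping step of grant-then-scope can be mechanized. A minimal sketch of the idea - the action sets here are illustrative, not real CloudTrail or policy output - that splits granted actions (wildcards allowed) into those actually seen in audit logs and candidates for removal:

```python
from fnmatch import fnmatchcase

# Sketch of the scope-down step: actions granted by a policy (wildcards
# allowed) are compared against actions actually observed in audit logs.
# The action sets below are illustrative, not real CloudTrail output.

def scope_down(granted: set[str], observed: set[str]) -> tuple[set[str], set[str]]:
    """Split granted actions into (still used, removal candidates)."""
    used = {g for g in granted if any(fnmatchcase(o, g) for o in observed)}
    return used, granted - used

granted = {"s3:GetObject", "s3:Put*", "s3:DeleteObject", "dynamodb:*"}
observed = {"s3:GetObject", "s3:PutObject", "dynamodb:GetItem"}

used, drop = scope_down(granted, observed)
print(sorted(drop))  # ['s3:DeleteObject'] - candidate to remove
```

In practice Access Analyzer policy generation (AWS) or IAM Recommender (GCP) does this diffing for you; the sketch just shows what those tools compute.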


AWS IAM: Policies, Boundaries, and SCPs

Understanding AWS Policy Evaluation

AWS evaluates policies in a specific order. Understanding this chain is critical for designing effective access controls:

  1. Organization SCPs - Hard ceiling on what any account in the org can do
  2. Resource-based policies - Attached to the resource (e.g., S3 bucket policy)
  3. Permission boundaries - Maximum permissions an entity can receive
  4. Identity-based policies - Attached to the user, group, or role
  5. Session policies - Further limit permissions for an assumed-role session

An action is only allowed if it passes all applicable evaluations and no explicit deny exists anywhere in the chain.
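The evaluation chain can be sketched as a small function: an explicit Deny in any layer is final, and otherwise every applicable layer must produce a matching Allow. This is a deliberate simplification (real AWS evaluation has more nuance - for example, resource-based policies can grant access on their own for same-account principals):

```python
# Simplified sketch of AWS policy evaluation: an explicit Deny in any
# layer is final; otherwise every applicable layer must contain a
# matching Allow. Real evaluation has more nuance (e.g. resource-based
# policies can grant access on their own for same-account principals).

def is_allowed(action: str, layers: dict[str, list[tuple[str, str]]]) -> bool:
    # layers maps layer name -> list of (effect, action) statements
    for stmts in layers.values():
        if ("Deny", action) in stmts:
            return False  # explicit deny always wins
    return all(("Allow", action) in stmts for stmts in layers.values())

layers = {
    "scp":      [("Allow", "s3:GetObject"), ("Allow", "s3:DeleteObject")],
    "boundary": [("Allow", "s3:GetObject"), ("Allow", "s3:DeleteObject")],
    "identity": [("Allow", "s3:GetObject"), ("Deny", "s3:DeleteObject")],
}
print(is_allowed("s3:GetObject", layers))     # True
print(is_allowed("s3:DeleteObject", layers))  # False
```

Note how the SCP and boundary both allow `s3:DeleteObject`, yet the identity-policy Deny still blocks it - deny statements are the one control that cannot be overridden anywhere else.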

Step-by-Step: Creating a Least Privilege Policy

Step 1: Start with a managed policy and observe usage.

# Attach a managed policy to get started
aws iam attach-user-policy \
  --user-name app-developer \
  --policy-arn arn:aws:iam::aws:policy/ReadOnlyAccess

Step 2: After 30-90 days, use Access Analyzer to generate a scoped policy.

# Generate a policy based on actual CloudTrail activity
aws accessanalyzer start-policy-generation \
  --policy-generation-details '{
    "principalArn": "arn:aws:iam::123456789012:user/app-developer",
    "cloudTrailDetails": {
      "trailArn": "arn:aws:cloudtrail:us-east-1:123456789012:trail/management-trail",
      "startTime": "2025-01-01T00:00:00Z",
      "endTime": "2025-03-31T23:59:59Z"
    }
  }'

Step 3: Create and attach the scoped custom policy.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowS3AppBucketAccess",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::my-app-bucket",
        "arn:aws:s3:::my-app-bucket/uploads/*"
      ],
      "Condition": {
        "StringEquals": {
          "aws:PrincipalTag/Team": "engineering",
          "s3:x-amz-server-side-encryption": "aws:kms"
        }
      }
    },
    {
      "Sid": "AllowDynamoDBAppTable",
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:Query",
        "dynamodb:UpdateItem"
      ],
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/app-data",
      "Condition": {
        "ForAllValues:StringEquals": {
          "dynamodb:LeadingKeys": ["${aws:PrincipalTag/TenantId}"]
        }
      }
    }
  ]
}

Step 4: Set a permission boundary to prevent privilege escalation.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCommonServices",
      "Effect": "Allow",
      "Action": [
        "s3:*",
        "dynamodb:*",
        "sqs:*",
        "sns:*",
        "logs:*",
        "cloudwatch:*",
        "xray:*"
      ],
      "Resource": "*"
    },
    {
      "Sid": "DenyIAMChangesWithoutBoundary",
      "Effect": "Deny",
      "Action": [
        "iam:CreateRole",
        "iam:AttachRolePolicy",
        "iam:PutRolePolicy"
      ],
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "iam:PermissionsBoundary": "arn:aws:iam::123456789012:policy/developer-boundary"
        }
      }
    },
    {
      "Sid": "DenySensitiveServices",
      "Effect": "Deny",
      "Action": [
        "organizations:*",
        "account:*",
        "iam:CreateUser",
        "iam:DeleteUser"
      ],
      "Resource": "*"
    }
  ]
}
# Apply the permission boundary
aws iam put-user-permissions-boundary \
  --user-name app-developer \
  --permissions-boundary arn:aws:iam::123456789012:policy/developer-boundary

Step-by-Step: Implementing Service Control Policies

SCPs are the most powerful guardrails in AWS Organizations. They apply to every principal in the target accounts.

Step 1: Create an SCP to enforce security baselines.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyRootAccountUsage",
      "Effect": "Deny",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "StringLike": {
          "aws:PrincipalArn": "arn:aws:iam::*:root"
        }
      }
    },
    {
      "Sid": "RequireIMDSv2",
      "Effect": "Deny",
      "Action": "ec2:RunInstances",
      "Resource": "arn:aws:ec2:*:*:instance/*",
      "Condition": {
        "StringNotEquals": {
          "ec2:MetadataHttpTokens": "required"
        }
      }
    },
    {
      "Sid": "DenyNonApprovedRegions",
      "Effect": "Deny",
      "NotAction": [
        "iam:*",
        "organizations:*",
        "sts:*",
        "support:*",
        "budgets:*"
      ],
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "aws:RequestedRegion": [
            "us-east-1",
            "us-west-2",
            "eu-west-1"
          ]
        }
      }
    },
    {
      "Sid": "RequireS3Encryption",
      "Effect": "Deny",
      "Action": "s3:PutObject",
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": ["aws:kms", "AES256"]
        },
        "Null": {
          "s3:x-amz-server-side-encryption": "false"
        }
      }
    }
  ]
}

Step 2: Attach the SCP to the appropriate OUs.

# Attach to a specific organizational unit
aws organizations attach-policy \
  --policy-id p-1234567890 \
  --target-id ou-abc123-workloads
 
# Verify the SCP is active
aws organizations list-policies-for-target \
  --target-id ou-abc123-workloads \
  --filter SERVICE_CONTROL_POLICY

Key AWS IAM best practices checklist:

  • Enable AWS IAM Access Analyzer in every account and region
  • Require MFA for all human users via IAM policies or Identity Center settings
  • Use IAM Identity Center (SSO) instead of IAM users for human access
  • Tag all IAM roles and users with Team, Environment, and Application tags
  • Set up CloudTrail with log file validation in all regions
  • Review the IAM credential report monthly
  • Use permission boundaries on all roles created by developers
  • Never use the root account - lock it with hardware MFA and no access keys

Azure IAM: RBAC, Conditional Access, and PIM

Understanding Azure RBAC Hierarchy

Azure RBAC permissions flow downward through the management hierarchy:

Tenant Root Group
  └── Management Group (e.g., "Production")
        └── Subscription (e.g., "Prod-East")
              └── Resource Group (e.g., "rg-app-backend")
                    └── Resource (e.g., Storage Account)

A role assignment at a higher scope inherits to all child scopes. This makes management groups extremely powerful - and potentially dangerous if misconfigured.
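Because Azure scopes are just `/`-delimited paths, the inheritance rule reduces to a prefix check. A sketch (scope strings simplified; Azure treats them case-insensitively, which the sketch assumes):

```python
# Sketch: Azure RBAC scope inheritance. A role assignment applies to a
# resource if the resource's scope equals the assignment scope or sits
# beneath it in the hierarchy. Scope strings are '/'-delimited paths
# and compared case-insensitively (an assumption mirroring Azure).

def assignment_applies(assignment_scope: str, resource_scope: str) -> bool:
    a = assignment_scope.rstrip("/").lower()
    r = resource_scope.rstrip("/").lower()
    return r == a or r.startswith(a + "/")

sub = "/subscriptions/SUB_ID"
rg = sub + "/resourceGroups/rg-app-backend"
res = rg + "/providers/Microsoft.Storage/storageAccounts/myappdata"

print(assignment_applies(sub, res))  # True  - inherited from subscription
print(assignment_applies(rg, sub))   # False - inheritance only flows down
```

This is why a stray Owner assignment at a management group is so dangerous: the prefix check succeeds for every subscription, resource group, and resource beneath it.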

Step-by-Step: Implementing Azure RBAC

Step 1: Map your teams to built-in roles.

Azure provides over 400 built-in roles. Before creating custom roles, check the existing ones.

# List all built-in roles
az role definition list --custom-role-only false --output table
 
# Find roles related to a specific service
az role definition list --query "[?contains(roleName, 'Storage')]" --output table
 
# View the exact permissions of a role
az role definition list --name "Storage Blob Data Contributor" --output json

Step 2: Assign roles at the narrowest possible scope.

# Assign at the resource group level (preferred)
az role assignment create \
  --assignee user@company.com \
  --role "Storage Blob Data Contributor" \
  --scope "/subscriptions/SUB_ID/resourceGroups/rg-app-backend"
 
# Assign at the resource level (most restrictive)
az role assignment create \
  --assignee user@company.com \
  --role "Storage Blob Data Reader" \
  --scope "/subscriptions/SUB_ID/resourceGroups/rg-app-backend/providers/Microsoft.Storage/storageAccounts/myappdata"

Step 3: Create custom roles when built-in ones are too broad.

{
  "Name": "App Deployment Operator",
  "Description": "Can deploy and manage app services but not modify networking or IAM",
  "Actions": [
    "Microsoft.Web/sites/*",
    "Microsoft.Web/serverFarms/read",
    "Microsoft.Insights/components/*",
    "Microsoft.Resources/deployments/*"
  ],
  "NotActions": [
    "Microsoft.Web/sites/config/list/action",
    "Microsoft.Web/sites/publishxml/action",
    "Microsoft.Authorization/*"
  ],
  "DataActions": [],
  "NotDataActions": [],
  "AssignableScopes": [
    "/subscriptions/SUB_ID/resourceGroups/rg-app-production"
  ]
}
# Create the custom role
az role definition create --role-definition @app-deployment-operator.json

Step-by-Step: Configuring Conditional Access

Conditional Access policies add context-aware gates to access decisions. They evaluate signals like user location, device compliance, risk level, and application sensitivity.
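Conceptually, each policy is a predicate over sign-in signals plus a grant control. A toy sketch of that decision shape - the policy and signal dictionaries are simplified illustrations, not the real Microsoft Graph schema:

```python
# Sketch: the shape of a Conditional Access decision. The policy and
# sign-in dictionaries are simplified illustrations, not the real
# Microsoft Graph conditionalAccessPolicy schema.

def evaluate(policy: dict, signin: dict) -> str:
    c = policy["conditions"]
    applies = (
        (c["users"] == ["All"] or signin["user"] in c["users"])
        and (c["apps"] == ["All"] or signin["app"] in c["apps"])
        and (not c.get("riskLevels") or signin["risk"] in c["riskLevels"])
    )
    if not applies:
        return "not applicable"
    if "block" in policy["grant"]:
        return "block"
    return "grant if " + " and ".join(sorted(policy["grant"]))

require_mfa = {"conditions": {"users": ["All"], "apps": ["All"]}, "grant": ["mfa"]}
block_risky = {
    "conditions": {"users": ["All"], "apps": ["All"], "riskLevels": ["high"]},
    "grant": ["block"],
}

signin = {"user": "alice@company.com", "app": "finance-app", "risk": "none"}
print(evaluate(require_mfa, signin))  # grant if mfa
print(evaluate(block_risky, signin))  # not applicable
```

The real engine evaluates every enabled policy for every sign-in and combines the results, which is why excluding break-glass accounts from broad policies matters so much.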

Step 1: Create a baseline policy requiring MFA for all users.

# Using Microsoft Graph API
az rest --method POST \
  --uri "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies" \
  --body '{
    "displayName": "Require MFA for all users",
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
      "users": {
        "includeUsers": ["All"],
        "excludeUsers": ["BREAK_GLASS_USER_ID"]
      },
      "applications": {
        "includeApplications": ["All"]
      }
    },
    "grantControls": {
      "operator": "OR",
      "builtInControls": ["mfa"]
    }
  }'

Step 2: Block access from high-risk locations.

az rest --method POST \
  --uri "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies" \
  --body '{
    "displayName": "Block high risk sign-ins",
    "state": "enabled",
    "conditions": {
      "users": {
        "includeUsers": ["All"]
      },
      "applications": {
        "includeApplications": ["All"]
      },
      "signInRiskLevels": ["high"],
      "userRiskLevels": ["high"]
    },
    "grantControls": {
      "operator": "OR",
      "builtInControls": ["block"]
    }
  }'

Step 3: Require compliant devices for sensitive applications.

az rest --method POST \
  --uri "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies" \
  --body '{
    "displayName": "Require compliant device for finance apps",
    "state": "enabled",
    "conditions": {
      "users": {
        "includeGroups": ["FINANCE_TEAM_GROUP_ID"]
      },
      "applications": {
        "includeApplications": ["FINANCE_APP_ID"]
      }
    },
    "grantControls": {
      "operator": "AND",
      "builtInControls": ["mfa", "compliantDevice"]
    }
  }'

Step-by-Step: Setting Up Privileged Identity Management (PIM)

PIM provides just-in-time, time-bound role activation with approval workflows. No one should hold permanent Owner or Contributor access.

Step 1: Enable PIM for a subscription.

# Create an eligible (PIM) role assignment for the subscription
az rest --method POST \
  --uri "https://management.azure.com/subscriptions/SUB_ID/providers/Microsoft.Authorization/roleEligibilityScheduleRequests?api-version=2022-04-01-preview" \
  --body '{
    "properties": {
      "principalId": "USER_OBJECT_ID",
      "roleDefinitionId": "/subscriptions/SUB_ID/providers/Microsoft.Authorization/roleDefinitions/OWNER_ROLE_DEF_ID",
      "requestType": "AdminAssign",
      "scheduleInfo": {
        "startDateTime": "2026-03-18T00:00:00Z",
        "expiration": {
          "type": "AfterDuration",
          "duration": "P365D"
        }
      },
      "justification": "Eligible assignment for production owner access"
    }
  }'

Step 2: Configure activation settings.

  • Maximum activation duration: 4 hours (for Owner roles)
  • Require approval from at least one designated approver
  • Require MFA on activation
  • Require justification and ticket number
  • Send notification to security team on every activation

Step 3: Activate a role when needed (user experience).

# Request role activation (via Azure Portal or API)
az rest --method POST \
  --uri "https://management.azure.com/subscriptions/SUB_ID/providers/Microsoft.Authorization/roleAssignmentScheduleRequests?api-version=2022-04-01-preview" \
  --body '{
    "properties": {
      "principalId": "USER_OBJECT_ID",
      "roleDefinitionId": "/subscriptions/SUB_ID/providers/Microsoft.Authorization/roleDefinitions/OWNER_ROLE_DEF_ID",
      "requestType": "SelfActivate",
      "linkedRoleEligibilityScheduleId": "ELIGIBILITY_SCHEDULE_ID",
      "justification": "Deploying critical hotfix - ticket INC-4521",
      "scheduleInfo": {
        "startDateTime": "2026-03-18T10:00:00Z",
        "expiration": {
          "type": "AfterDuration",
          "duration": "PT2H"
        }
      }
    }
  }'

Key Azure IAM best practices checklist:

  • Use Entra ID as the single identity provider - avoid local accounts
  • Enable Security Defaults or Conditional Access (not both) for MFA
  • Configure PIM for all privileged roles (Owner, Contributor, User Access Administrator)
  • Create at least two break-glass accounts with hardware MFA, excluded from Conditional Access
  • Use Managed Identities for all Azure workloads - eliminate stored credentials
  • Enable Entra ID sign-in and audit logs, ship to a SIEM
  • Review access with Entra Access Reviews quarterly
  • Lock down management group hierarchy - restrict who can create subscriptions

GCP IAM: Resource Hierarchy, Org Policies, and Workload Identity

Understanding GCP IAM Model

GCP IAM binds members (principals) to roles at each level of the resource hierarchy. Bindings inherit down the hierarchy, so a role granted at the folder level applies to all projects within that folder.

Organization (company.com)
  └── Folder ("Production")
        └── Project ("prod-app-backend")
              └── Resource (Cloud Storage bucket, BigQuery dataset, etc.)

Step-by-Step: Implementing GCP IAM

Step 1: Replace primitive roles with predefined roles.

GCP's primitive roles (Owner, Editor, Viewer), now called basic roles, are extremely broad. An Editor can modify almost any resource in a project.

# List current IAM bindings with primitive roles
gcloud projects get-iam-policy PROJECT_ID \
  --flatten="bindings[].members" \
  --filter="bindings.role:roles/editor OR bindings.role:roles/owner" \
  --format="table(bindings.role, bindings.members)"
 
# Remove overly broad Editor role
gcloud projects remove-iam-policy-binding PROJECT_ID \
  --member="user:developer@company.com" \
  --role="roles/editor"
 
# Replace with scoped predefined roles
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="user:developer@company.com" \
  --role="roles/storage.objectAdmin" \
  --condition='expression=resource.name.startsWith("projects/_/buckets/app-uploads"),title=app-uploads-only'
 
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="user:developer@company.com" \
  --role="roles/cloudsql.editor"

Step 2: Use IAM Conditions for fine-grained access.

IAM Conditions let you restrict when and where a binding applies.

# Allow access only during business hours
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="user:contractor@external.com" \
  --role="roles/compute.instanceAdmin.v1" \
  --condition='expression=request.time.getHours("America/New_York") >= 9 && request.time.getHours("America/New_York") <= 17,title=business-hours-only,description=Access limited to 9am-5pm ET'
 
# Allow access only to resources with specific labels
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="group:dev-team@company.com" \
  --role="roles/compute.instanceAdmin.v1" \
  --condition='expression=resource.matchTag("env", "development"),title=dev-only'

Step 3: Create custom roles when predefined roles are too broad.

# Create a custom role for application deployment
gcloud iam roles create appDeployer \
  --project=PROJECT_ID \
  --title="Application Deployer" \
  --description="Can deploy to Cloud Run and manage related resources" \
  --permissions="\
run.services.create,\
run.services.update,\
run.services.get,\
run.services.list,\
run.revisions.list,\
run.revisions.get,\
artifactregistry.repositories.downloadArtifacts,\
artifactregistry.repositories.uploadArtifacts,\
logging.logEntries.list,\
monitoring.timeSeries.list"

Step-by-Step: Implementing Organization Policies

Organization Policies are GCP's equivalent of AWS SCPs - they set guardrails across the organization.

Step 1: Disable service account key creation.

Service account keys are the most common credential leak vector in GCP.

# policy.yaml - Disable service account key creation
constraint: constraints/iam.disableServiceAccountKeyCreation
booleanPolicy:
  enforced: true
 
# Apply the org policy from the file
gcloud resource-manager org-policies set-policy \
  --organization=ORG_ID \
  policy.yaml

Step 2: Restrict external sharing.

# Restrict domain sharing to only your organization
constraint: constraints/iam.allowedPolicyMemberDomains
listPolicy:
  allowedValues:
    - "C0xxxxxxx"  # Your Cloud Identity customer ID

Step 3: Enforce uniform bucket-level access.

# Prevent ACL-based access on Cloud Storage
constraint: constraints/storage.uniformBucketLevelAccess
booleanPolicy:
  enforced: true

Step 4: Restrict VM external IP addresses.

# Prevent VMs from getting external IPs (force traffic through NAT/proxy)
constraint: constraints/compute.vmExternalIpAccess
listPolicy:
  allValues: DENY

Step-by-Step: Setting Up Workload Identity Federation

Workload Identity Federation eliminates the need for service account key files by letting external workloads exchange their native identity tokens for GCP access tokens.
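Under the hood, the external workload posts its OIDC token to the GCP Security Token Service, which returns a federated access token (optionally traded for a service-account token via impersonation). This sketch builds the exchange request body only - no network call is made - with field names following the public sts.googleapis.com v1/token REST API and placeholders matching the steps that follow:

```python
# Sketch of the token exchange behind Workload Identity Federation: the
# external OIDC token is POSTed to https://sts.googleapis.com/v1/token
# and a federated GCP access token comes back. This only constructs the
# request body; pool/provider/project values are placeholders.

def sts_exchange_body(oidc_token: str, project_num: str, pool: str, provider: str) -> dict:
    audience = (
        f"//iam.googleapis.com/projects/{project_num}/locations/global/"
        f"workloadIdentityPools/{pool}/providers/{provider}"
    )
    return {
        "grantType": "urn:ietf:params:oauth:grant-type:token-exchange",
        "audience": audience,
        "subjectToken": oidc_token,
        "subjectTokenType": "urn:ietf:params:oauth:token-type:jwt",
        "requestedTokenType": "urn:ietf:params:oauth:token-type:access_token",
        "scope": "https://www.googleapis.com/auth/cloud-platform",
    }

body = sts_exchange_body("eyJhbGci...", "123456", "github-actions", "github")
print(body["audience"])
```

In CI/CD you never write this call yourself - the `google-github-actions/auth` action shown later performs it - but knowing the shape helps when debugging attribute-condition failures.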

Step 1: Create a Workload Identity Pool.

gcloud iam workload-identity-pools create "github-actions" \
  --project="PROJECT_ID" \
  --location="global" \
  --display-name="GitHub Actions Pool" \
  --description="Pool for GitHub Actions CI/CD"

Step 2: Create a provider within the pool.

# For GitHub Actions
gcloud iam workload-identity-pools providers create-oidc "github" \
  --project="PROJECT_ID" \
  --location="global" \
  --workload-identity-pool="github-actions" \
  --display-name="GitHub" \
  --attribute-mapping="\
google.subject=assertion.sub,\
attribute.actor=assertion.actor,\
attribute.repository=assertion.repository,\
attribute.repository_owner=assertion.repository_owner" \
  --attribute-condition="assertion.repository_owner == 'your-org'" \
  --issuer-uri="https://token.actions.githubusercontent.com"

Step 3: Grant the external identity access to a service account.

# Allow GitHub Actions from a specific repo to impersonate a service account
gcloud iam service-accounts add-iam-policy-binding \
  "deploy-sa@PROJECT_ID.iam.gserviceaccount.com" \
  --project="PROJECT_ID" \
  --role="roles/iam.workloadIdentityUser" \
  --member="principalSet://iam.googleapis.com/projects/PROJECT_NUM/locations/global/workloadIdentityPools/github-actions/attribute.repository/your-org/your-repo"

Step 4: Use it in your GitHub Actions workflow.

# .github/workflows/deploy.yml
jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      id-token: write  # Required for OIDC
    steps:
      - uses: actions/checkout@v4
 
      - id: auth
        uses: google-github-actions/auth@v2
        with:
          workload_identity_provider: "projects/PROJECT_NUM/locations/global/workloadIdentityPools/github-actions/providers/github"
          service_account: "deploy-sa@PROJECT_ID.iam.gserviceaccount.com"
 
      - name: Deploy to Cloud Run
        uses: google-github-actions/deploy-cloudrun@v2
        with:
          service: my-app
          region: us-central1
          image: us-docker.pkg.dev/PROJECT_ID/app/my-app:${{ github.sha }}

Key GCP IAM best practices checklist:

  • Eliminate all primitive roles (Owner, Editor, Viewer) from production projects
  • Disable service account key creation via Organization Policy
  • Use Workload Identity for GKE pods and Workload Identity Federation for external workloads
  • Enable IAM Recommender and review suggestions monthly
  • Use IAM Conditions to restrict access by time, resource attributes, or IP
  • Enforce domain-restricted sharing via Organization Policy
  • Enable VPC Service Controls for sensitive data projects
  • Export all audit logs to BigQuery for long-term analysis and querying

Cross-Account and Cross-Project Access

In multi-account (AWS), multi-subscription (Azure), or multi-project (GCP) architectures, workloads frequently need to access resources in other environments. The key principle is to use federated, temporary access rather than shared credentials.

AWS: Cross-Account Role Assumption

Step 1: Create a role in the target account.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111111111111:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId": "unique-external-id-12345",
          "aws:PrincipalTag/Environment": "production"
        }
      }
    }
  ]
}

Step 2: Assume the role from the source account.

# Assume the cross-account role
aws sts assume-role \
  --role-arn arn:aws:iam::222222222222:role/cross-account-reader \
  --role-session-name "pipeline-deploy-$(date +%s)" \
  --external-id unique-external-id-12345 \
  --duration-seconds 3600
 
# Use the temporary credentials
export AWS_ACCESS_KEY_ID=ASIA...
export AWS_SESSION_TOKEN=FwoG...
export AWS_SECRET_ACCESS_KEY=...

Best practice: Always set an ExternalId on cross-account roles to prevent the confused deputy problem. Use aws:PrincipalTag conditions to restrict which identities can assume the role.

Azure: Cross-Subscription Access

# Assign a role at the management group level for cross-subscription access
az role assignment create \
  --assignee "APP_OBJECT_ID" \
  --role "Reader" \
  --scope "/providers/Microsoft.Management/managementGroups/production-mg"
 
# Or use Azure Lighthouse for managed service provider scenarios
az managedservices definition create \
  --name "Cross-tenant monitoring" \
  --description "Read-only access for centralized monitoring" \
  --tenant-id "MSP_TENANT_ID" \
  --authorizations "principalId=MSP_GROUP_ID;roleDefinitionId=READER_ROLE_ID"

GCP: Cross-Project Access

# Grant a service account from Project A access to resources in Project B
gcloud projects add-iam-policy-binding PROJECT_B_ID \
  --member="serviceAccount:app-sa@PROJECT_A_ID.iam.gserviceaccount.com" \
  --role="roles/storage.objectViewer" \
  --condition='expression=resource.name.startsWith("projects/_/buckets/shared-data"),title=shared-bucket-only'

Service Accounts and Machine Identities: A Deep Dive

Service accounts and machine identities are frequently the weakest link in cloud IAM. They tend to accumulate excessive permissions over time, rarely have their credentials rotated, and often lack the monitoring applied to human accounts.

Step 1: Build a Complete Inventory

You cannot secure what you do not know about.

AWS:

# List all IAM roles (including service-linked roles)
aws iam list-roles --query 'Roles[*].[RoleName,CreateDate,Arn]' --output table
 
# Find roles not used in the last 90 days
aws iam generate-service-last-accessed-details --arn ROLE_ARN
aws iam get-service-last-accessed-details --job-id JOB_ID
 
# Find access keys older than 90 days
aws iam generate-credential-report
# In the credential report CSV, column 9 is access_key_1_active and
# column 10 is access_key_1_last_rotated
aws iam get-credential-report --output text --query 'Content' | base64 -d | \
  awk -F, 'NR>1 && $9=="true" && $10!="N/A" {print $1","$10}' | \
  while IFS=, read -r user key_date; do
    key_epoch=$(date -j -f "%Y-%m-%dT%H:%M:%S+00:00" "$key_date" +%s 2>/dev/null || date -d "$key_date" +%s)
    age=$(( ($(date +%s) - key_epoch) / 86400 ))
    [ "$age" -gt 90 ] && echo "WARNING: $user has access key 1 that is $age days old"
  done
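If the shell date-arithmetic portability tricks are a concern, the same check reads more cleanly in Python. A sketch against the documented credential report CSV layout (the sample row here is fabricated):

```python
import csv
import io
from datetime import datetime, timezone

# Portable alternative to the shell pipeline above: parse the decoded
# credential report CSV. Column names follow the documented credential
# report format; the sample row is fabricated.

def stale_access_keys(report_csv: str, max_age_days: int = 90) -> list[str]:
    """Users whose access key 1 is active and older than max_age_days."""
    now = datetime.now(timezone.utc)
    flagged = []
    for row in csv.DictReader(io.StringIO(report_csv)):
        if row.get("access_key_1_active") != "true":
            continue
        rotated = row.get("access_key_1_last_rotated", "N/A")
        if rotated in ("N/A", "not_supported", ""):
            continue
        age_days = (now - datetime.fromisoformat(rotated)).days
        if age_days > max_age_days:
            flagged.append(row["user"])
    return flagged

sample = (
    "user,arn,user_creation_time,password_enabled,password_last_used,"
    "password_last_changed,password_next_rotation,mfa_active,"
    "access_key_1_active,access_key_1_last_rotated\n"
    "alice,arn:aws:iam::111111111111:user/alice,2020-01-01T00:00:00+00:00,"
    "true,N/A,N/A,N/A,true,true,2020-06-01T00:00:00+00:00\n"
)
print(stale_access_keys(sample))  # ['alice']
```

Feed it the decoded output of `aws iam get-credential-report` and schedule it alongside your monthly credential review.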

Azure:

# List all service principals (app registrations)
az ad sp list --all --query '[].{Name:displayName, AppId:appId, Created:createdDateTime}' --output table
 
# Find service principals with password credentials
az ad sp list --all --query '[?passwordCredentials].{Name:displayName, AppId:appId}' --output table
 
# Check for expiring credentials
az ad app list --all --query '[].{Name:displayName, AppId:appId, Creds:passwordCredentials[].endDateTime}' --output table

GCP:

# List all service accounts in a project
gcloud iam service-accounts list --project=PROJECT_ID
 
# Find service accounts with keys
for sa in $(gcloud iam service-accounts list --project=PROJECT_ID --format='value(email)'); do
  keys=$(gcloud iam service-accounts keys list --iam-account=$sa --managed-by=user --format='value(name)' 2>/dev/null)
  [ -n "$keys" ] && echo "WARNING: $sa has user-managed keys"
done
 
# Check service account usage
gcloud policy-intelligence query-activity \
  --activity-type=serviceAccountLastAuthentication \
  --project=PROJECT_ID

Step 2: Eliminate Long-Lived Credentials

| Cloud | Instead of... | Use... |
| --- | --- | --- |
| AWS | Access keys on EC2 | IAM Instance Profile with role |
| AWS | Access keys in Lambda | Execution role |
| AWS | Access keys in ECS | Task role |
| Azure | Client secrets for apps | System-assigned Managed Identity |
| Azure | Connection strings with passwords | User-assigned Managed Identity + RBAC |
| GCP | Service account key files | Attached service account on GCE/GKE |
| GCP | Service account keys in CI/CD | Workload Identity Federation |
| All | Static API keys | Short-lived tokens via STS/OIDC |

Step 3: Implement Monitoring and Alerting

AWS CloudWatch metric filter for unusual service account activity (input shape for aws logs put-metric-filter):

{
  "filterName": "ServiceAccountAccessDenied",
  "filterPattern": "{ ($.errorCode = \"AccessDenied\") && ($.userIdentity.type = \"AssumedRole\") }",
  "metricTransformations": [{
    "metricName": "ServiceAccountAccessDenied",
    "metricNamespace": "IAMSecurityMetrics",
    "metricValue": "1"
  }]
}

GCP log-based alert for service account key creation:

gcloud logging metrics create sa-key-created \
  --project=PROJECT_ID \
  --description="Alert when someone creates a service account key" \
  --log-filter='protoPayload.methodName="google.iam.admin.v1.CreateServiceAccountKey"'
 
# Create an alert policy on this metric
gcloud alpha monitoring policies create \
  --notification-channels=CHANNEL_ID \
  --display-name="Service Account Key Created" \
  --condition-display-name="SA key creation detected" \
  --condition-filter='metric.type="logging.googleapis.com/user/sa-key-created"' \
  --condition-threshold-value=0 \
  --condition-threshold-comparison=COMPARISON_GT

Temporary Credentials and Short-Lived Tokens

Long-lived credentials - API keys, service account JSON files, static passwords - are the primary target for credential theft attacks. Every cloud provider offers mechanisms to eliminate them entirely.

AWS STS Deep Dive

# AssumeRole - most common pattern for cross-account or elevated access
aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/deploy-role \
  --role-session-name "deploy-$(git rev-parse --short HEAD)" \
  --duration-seconds 900 \
  --tags Key=Pipeline,Value=main Key=Commit,Value=$(git rev-parse HEAD)
 
# AssumeRoleWithWebIdentity - for OIDC-based federation (GitHub Actions, GitLab, etc.)
aws sts assume-role-with-web-identity \
  --role-arn arn:aws:iam::123456789012:role/github-actions-role \
  --role-session-name "gh-deploy" \
  --web-identity-token file://token.jwt \
  --duration-seconds 3600
 
# GetSessionToken - for MFA-enabled CLI access
aws sts get-session-token \
  --serial-number arn:aws:iam::123456789012:mfa/user \
  --token-code 123456 \
  --duration-seconds 43200
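STS enforces bounds on the `--duration-seconds` values used above: AssumeRole accepts 900 seconds up to the role's configured maximum session duration (3600s by default, raisable to 43200s). A small validator, useful in deployment tooling to fail fast before the API call (a sketch; the helper name is hypothetical):

```python
def validate_assume_role_duration(seconds, role_max=3600):
    """Reject AssumeRole durations STS would refuse: below 900s or above
    the role's maximum session duration (default 3600s, up to 43200s)."""
    if not 900 <= role_max <= 43200:
        raise ValueError("role max session duration must be 900-43200s")
    if not 900 <= seconds <= role_max:
        raise ValueError(f"duration must be 900-{role_max}s, got {seconds}")
    return seconds
```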

Azure Token Acquisition

# Get token via Managed Identity (from an Azure VM)
curl -s "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/" \
  -H "Metadata: true" | jq -r '.access_token'
 
# Get token via Azure CLI (for development)
az account get-access-token --resource https://management.azure.com/ --query accessToken -o tsv
 
# Get token for a specific scope (Microsoft Graph)
az account get-access-token --resource https://graph.microsoft.com/ --query accessToken -o tsv

GCP Token Acquisition

# Get token from metadata server (on GCE, GKE, Cloud Run)
curl -s "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token" \
  -H "Metadata-Flavor: Google" | jq -r '.access_token'
 
# Impersonate a service account (for short-lived access)
gcloud auth print-access-token --impersonate-service-account=deploy-sa@PROJECT.iam.gserviceaccount.com
 
# Generate a short-lived access token via API
gcloud iam service-accounts generate-access-token \
  deploy-sa@PROJECT.iam.gserviceaccount.com \
  --lifetime=3600s
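Whichever provider issues the token, callers should cache it and refresh shortly before expiry rather than fetching on every request. A minimal, provider-agnostic sketch (the class is illustrative; `fetch` stands in for any of the metadata-server calls above):

```python
import time


class TokenCache:
    """Cache a short-lived access token, refreshing it before expiry.

    `fetch` is any zero-argument callable returning
    (token, expires_in_seconds) - e.g. a wrapper around one of the
    metadata-server requests shown above.
    """

    def __init__(self, fetch, refresh_margin=300.0):
        self._fetch = fetch
        self._margin = refresh_margin  # refresh this many seconds early
        self._token = None
        self._expires_at = 0.0

    def get(self, now=None):
        """Return a valid token, fetching a new one when the cached token
        is missing or within the refresh margin of expiring."""
        now = time.time() if now is None else now
        if self._token is None or now >= self._expires_at - self._margin:
            self._token, expires_in = self._fetch()
            self._expires_at = now + expires_in
        return self._token
```

Passing `now` explicitly makes the refresh logic deterministic to test; production callers simply call `get()`.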

The Zero Standing Privileges Goal

The target state: zero long-lived credentials in your cloud environments. Every access key, service account key file, and static secret represents a credential that can be stolen and used indefinitely. Use this checklist to drive toward that goal:

  1. Inventory all long-lived credentials across all three clouds
  2. Classify each credential - can it be replaced with a temporary alternative?
  3. Migrate workloads to use instance profiles, managed identities, or attached service accounts
  4. Federate CI/CD pipelines with OIDC (GitHub Actions, GitLab CI, Jenkins with OIDC plugin)
  5. Set up monitoring for any new long-lived credential creation
  6. Enforce via policy - use SCPs/Org Policies to block key creation
  7. Validate quarterly - re-run the inventory and ensure no regression
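Step 1 of the checklist - the inventory - reduces to flagging credentials past an age threshold. A sketch of that core check, applicable to AWS access keys, Azure client secrets, or GCP service account keys alike (the function name is hypothetical):

```python
from datetime import datetime, timedelta, timezone


def flag_long_lived(keys, max_age_days=90, now=None):
    """Given (key_id, created_at) pairs, return the IDs of keys older
    than max_age_days - the candidates for migration to temporary
    credentials."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    return [key_id for key_id, created in keys if created < cutoff]
```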

Audit, Compliance, and Continuous Monitoring

IAM policies drift over time. Permissions are granted for urgent projects and never revoked. New services are adopted without proper IAM review. Continuous monitoring catches these issues before they become breaches.

Step 1: Enable Comprehensive Audit Logging

AWS CloudTrail:

# Create an organization trail that logs all accounts
aws cloudtrail create-trail \
  --name org-trail \
  --s3-bucket-name security-audit-logs \
  --is-organization-trail \
  --is-multi-region-trail \
  --enable-log-file-validation \
  --kms-key-id arn:aws:kms:us-east-1:123456789012:key/audit-key-id
 
# Enable the trail
aws cloudtrail start-logging --name org-trail
 
# Enable data event logging for S3 and Lambda
aws cloudtrail put-event-selectors \
  --trail-name org-trail \
  --advanced-event-selectors '[
    {
      "Name": "S3DataEvents",
      "FieldSelectors": [
        {"Field": "eventCategory", "Equals": ["Data"]},
        {"Field": "resources.type", "Equals": ["AWS::S3::Object"]}
      ]
    },
    {
      "Name": "LambdaDataEvents",
      "FieldSelectors": [
        {"Field": "eventCategory", "Equals": ["Data"]},
        {"Field": "resources.type", "Equals": ["AWS::Lambda::Function"]}
      ]
    }
  ]'

Azure Activity and Entra Logs:

# Create a diagnostic setting to export activity logs
az monitor diagnostic-settings create \
  --name "security-audit" \
  --resource "/subscriptions/SUB_ID" \
  --logs '[{"category": "Administrative", "enabled": true, "retentionPolicy": {"enabled": true, "days": 365}},
           {"category": "Security", "enabled": true, "retentionPolicy": {"enabled": true, "days": 365}},
           {"category": "Policy", "enabled": true, "retentionPolicy": {"enabled": true, "days": 365}}]' \
  --workspace "/subscriptions/SUB_ID/resourceGroups/rg-security/providers/Microsoft.OperationalInsights/workspaces/security-workspace"
 
# Export Entra ID logs to the same workspace
az monitor diagnostic-settings create \
  --name "entra-audit" \
  --resource "/providers/Microsoft.aadiam" \
  --logs '[{"category": "SignInLogs", "enabled": true},
           {"category": "AuditLogs", "enabled": true},
           {"category": "NonInteractiveUserSignInLogs", "enabled": true},
           {"category": "ServicePrincipalSignInLogs", "enabled": true},
           {"category": "ManagedIdentitySignInLogs", "enabled": true}]' \
  --workspace "/subscriptions/SUB_ID/resourceGroups/rg-security/providers/Microsoft.OperationalInsights/workspaces/security-workspace"

GCP Cloud Audit Logs:

# policy.yaml - enable Data Access audit logs for all services
auditConfigs:
  - service: allServices
    auditLogConfigs:
      - logType: ADMIN_READ
      - logType: DATA_READ
      - logType: DATA_WRITE

# Apply the audit config. Note: set-iam-policy replaces the full policy,
# so fetch the current one first (gcloud projects get-iam-policy) and add
# the auditConfigs block to it before applying.
gcloud projects set-iam-policy PROJECT_ID policy.yaml

# Export logs to BigQuery for long-term analysis
gcloud logging sinks create audit-to-bq \
  bigquery.googleapis.com/projects/PROJECT_ID/datasets/audit_logs \
  --log-filter='logName:"cloudaudit.googleapis.com"' \
  --organization=ORG_ID \
  --include-children

Step 2: Set Up Automated Access Reviews

AWS - Use IAM Access Analyzer:

# Enable Access Analyzer with external access findings
aws accessanalyzer create-analyzer \
  --analyzer-name external-access \
  --type ORGANIZATION
 
# Enable unused access analyzer (identifies unused permissions)
aws accessanalyzer create-analyzer \
  --analyzer-name unused-access \
  --type ORGANIZATION_UNUSED_ACCESS \
  --configuration '{"unusedAccess": {"unusedAccessAge": 90}}'
 
# List all findings
aws accessanalyzer list-findings \
  --analyzer-arn arn:aws:access-analyzer:us-east-1:123456789012:analyzer/unused-access \
  --filter '{"status": {"eq": ["ACTIVE"]}}'

Azure - Use Entra Access Reviews:

# Create a recurring access review for privileged roles
az rest --method POST \
  --uri "https://graph.microsoft.com/v1.0/identityGovernance/accessReviews/definitions" \
  --body '{
    "displayName": "Quarterly Owner Role Review",
    "scope": {
      "query": "/roleManagement/directory/roleAssignments?$filter=roleDefinitionId eq '\''OWNER_ROLE_ID'\''",
      "queryType": "MicrosoftGraph"
    },
    "reviewers": [
      {"query": "/users/SECURITY_LEAD_ID", "queryType": "MicrosoftGraph"}
    ],
    "settings": {
      "mailNotificationsEnabled": true,
      "reminderNotificationsEnabled": true,
      "defaultDecisionEnabled": true,
      "defaultDecision": "Deny",
      "autoApplyDecisionsEnabled": true,
      "recurrence": {
        "pattern": {"type": "absoluteMonthly", "interval": 3},
        "range": {"type": "noEnd"}
      }
    }
  }'

GCP - Use IAM Recommender and Policy Analyzer:

# Get IAM recommendations
gcloud recommender recommendations list \
  --project=PROJECT_ID \
  --recommender=google.iam.policy.Recommender \
  --location=global \
  --format="table(name, description, primaryImpact.category)"
 
# Analyze who has access to what
gcloud asset analyze-iam-policy \
  --organization=ORG_ID \
  --identity="user:developer@company.com" \
  --full-resource-name="//storage.googleapis.com/projects/_/buckets/sensitive-data"

Step 3: Build an IAM Monitoring Dashboard

Every organization should track these IAM security metrics:

| Metric | Alert Threshold | Source |
| --- | --- | --- |
| New admin/owner role assignments | Any occurrence | CloudTrail / Activity Log / Audit Log |
| Long-lived credential creation | Any occurrence | CloudTrail / Entra Audit / Audit Log |
| Access denied events (single identity) | > 10 in 5 minutes | CloudTrail / Activity Log / Audit Log |
| Root/global admin sign-in | Any occurrence | CloudTrail / Entra Sign-in / Audit Log |
| Cross-account role assumption from unknown accounts | Any occurrence | CloudTrail |
| Conditional Access policy changes | Any occurrence | Entra Audit Log |
| Organization policy changes | Any occurrence | GCP Audit Log |
| Service account key downloads | Any occurrence | GCP Audit Log |
| Unused permissions not remediated | > 30 days after recommendation | Access Analyzer / Recommender |
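Rate-based thresholds like "> 10 access-denied events in 5 minutes" are a sliding-window count. A minimal evaluator for such a rule, useful when building the dashboard's alerting layer (a sketch; the class name is hypothetical):

```python
from collections import deque


class ThresholdAlert:
    """Fire when more than `threshold` events land within `window`
    seconds - e.g. the '> 10 access-denied in 5 minutes' rule above."""

    def __init__(self, threshold=10, window=300.0):
        self.threshold = threshold
        self.window = window
        self._times = deque()  # timestamps of events still in the window

    def record(self, timestamp):
        """Record one event (timestamps in seconds, non-decreasing);
        return True if the alert should fire."""
        self._times.append(timestamp)
        while self._times and self._times[0] <= timestamp - self.window:
            self._times.popleft()  # expire events outside the window
        return len(self._times) > self.threshold
```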

Common Misconfigurations to Avoid

These misconfigurations appear repeatedly in cloud security assessments. Each one includes the specific check you can run to detect it.

1. Wildcard Actions and Resources

Policies with "Action": "*" or "Resource": "*" grant far more access than intended.

# AWS: Find customer-managed policies whose default version allows Action: *
for arn in $(aws iam list-policies --scope Local --query 'Policies[*].Arn' --output text); do
  version=$(aws iam get-policy --policy-arn "$arn" --query 'Policy.DefaultVersionId' --output text)
  aws iam get-policy-version --policy-arn "$arn" --version-id "$version" \
    --query 'PolicyVersion.Document' --output json | \
    grep -q '"Action": "\*"' && echo "WARNING: $arn allows Action: *"
done
 
# GCP: Find bindings with primitive roles
gcloud asset search-all-iam-policies \
  --scope=organizations/ORG_ID \
  --query="policy:roles/editor OR policy:roles/owner" \
  --format="table(resource, policy.bindings.role, policy.bindings.members)"
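For more robust detection than a grep (wildcards can hide in action lists or single-statement policies), the check can be done on the parsed policy document. A sketch that flags Allow statements with a wildcard action or resource (the function name is hypothetical):

```python
def find_wildcards(policy):
    """Return the Sids (or positional labels) of Allow statements in an
    IAM policy document that use a wildcard action or resource."""
    findings = []
    statements = policy.get("Statement", [])
    if isinstance(statements, dict):  # single-statement shorthand
        statements = [statements]
    for i, stmt in enumerate(statements):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        # Both fields may be a bare string or a list; normalize to lists.
        actions = [actions] if isinstance(actions, str) else actions
        resources = [resources] if isinstance(resources, str) else resources
        if "*" in actions or "*" in resources:
            findings.append(stmt.get("Sid", f"statement[{i}]"))
    return findings
```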

2. Public Access on Storage

S3 bucket policies, Azure Blob containers, and GCS buckets with public access are a top breach vector.

# AWS: Check for public S3 buckets
aws s3api list-buckets --query 'Buckets[*].Name' --output text | \
  xargs -I{} sh -c 'echo "Checking {}..." && aws s3api get-public-access-block --bucket {} 2>/dev/null || echo "WARNING: {} has no public access block"'
 
# Azure: Check for public blob containers
az storage account list --query '[].name' -o tsv | \
  xargs -I{} az storage container list --account-name {} --query '[?properties.publicAccess!=`none`].{Name:name, Access:properties.publicAccess}' -o table
 
# GCP: Check for publicly accessible buckets
gsutil iam get gs://BUCKET_NAME | grep -i "allUsers\|allAuthenticatedUsers"
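The GCP grep above can also be done structurally on the bucket's IAM policy (as returned by `gsutil iam get` in JSON form), which reports which role each public principal holds rather than just a match. A sketch (the function name is hypothetical):

```python
# The two principals that make a GCS binding public.
PUBLIC_MEMBERS = {"allUsers", "allAuthenticatedUsers"}


def public_bindings(policy):
    """Return (role, member) pairs in a bucket IAM policy that grant
    access to a public principal."""
    return [
        (binding.get("role"), member)
        for binding in policy.get("bindings", [])
        for member in binding.get("members", [])
        if member in PUBLIC_MEMBERS
    ]
```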

3. Overprivileged CI/CD Pipelines

Build pipelines often run with admin-level permissions. Scope them to only the resources they deploy.

Before (overprivileged):

{
  "Effect": "Allow",
  "Action": "*",
  "Resource": "*"
}

After (scoped):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowECRPush",
      "Effect": "Allow",
      "Action": [
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:PutImage",
        "ecr:InitiateLayerUpload",
        "ecr:UploadLayerPart",
        "ecr:CompleteLayerUpload",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetAuthorizationToken"
      ],
      "Resource": "arn:aws:ecr:us-east-1:123456789012:repository/my-app"
    },
    {
      "Sid": "AllowECSDeployment",
      "Effect": "Allow",
      "Action": [
        "ecs:UpdateService",
        "ecs:DescribeServices",
        "ecs:DescribeTaskDefinition",
        "ecs:RegisterTaskDefinition"
      ],
      "Resource": [
        "arn:aws:ecs:us-east-1:123456789012:service/prod-cluster/my-app",
        "arn:aws:ecs:us-east-1:123456789012:task-definition/my-app:*"
      ]
    },
    {
      "Sid": "AllowPassRole",
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "arn:aws:iam::123456789012:role/my-app-task-role",
      "Condition": {
        "StringEquals": {
          "iam:PassedToService": "ecs-tasks.amazonaws.com"
        }
      }
    }
  ]
}

4. Missing MFA on Root and Global Admin Accounts

# AWS: Check if root account has MFA
aws iam get-account-summary --query 'SummaryMap.AccountMFAEnabled'
 
# AWS: Check MFA for all users
aws iam generate-credential-report
aws iam get-credential-report --output text --query 'Content' | base64 -d | \
  awk -F, 'NR>1 && $4=="true" && $8=="false" {print "WARNING: "$1" has console access but no MFA"}'
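The awk one-liner relies on column positions; parsing the decoded report by its named CSV columns is less brittle. A sketch using the credential report's `password_enabled` and `mfa_active` fields (the helper name is hypothetical):

```python
import csv
import io


def users_without_mfa(report_csv):
    """Parse a decoded IAM credential report and return users who can
    sign in to the console but have no MFA device - the same check as
    the awk one-liner above, by column name instead of position."""
    reader = csv.DictReader(io.StringIO(report_csv))
    return [
        row["user"]
        for row in reader
        if row["password_enabled"] == "true" and row["mfa_active"] == "false"
    ]
```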

5. Unused IAM Users and Roles

Dormant accounts are attractive targets. Automate deprovisioning.

# AWS: Find console users whose last sign-in was 90+ days ago
# (GNU date shown; on macOS/BSD use: date -u -v-90d +%Y-%m-%d)
aws iam generate-credential-report
aws iam get-credential-report --output text --query 'Content' | base64 -d | \
  awk -F, -v cutoff="$(date -u -d '90 days ago' +%Y-%m-%d)" \
    'NR>1 && $4=="true" && $5!="N/A" && $5!="no_information" && substr($5,1,10) < cutoff \
       {print "WARNING: "$1" last signed in "$5}'
 
# GCP: Find unused service accounts
gcloud recommender recommendations list \
  --project=PROJECT_ID \
  --recommender=google.iam.policy.Recommender \
  --location=global \
  --filter="recommenderSubtype=REMOVE_ROLE"

6. Missing Deny Policies for Sensitive Operations

Always add explicit deny policies for operations that should never happen in production.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyDangerousActions",
      "Effect": "Deny",
      "Action": [
        "ec2:CreateDefaultVpc",
        "ec2:ModifyInstanceAttribute",
        "iam:CreateAccessKey",
        "iam:DeactivateMFADevice",
        "s3:PutBucketPolicy",
        "kms:DisableKey",
        "kms:ScheduleKeyDeletion",
        "rds:ModifyDBInstance",
        "rds:DeleteDBInstance"
      ],
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "aws:PrincipalTag/SecurityClearance": "admin"
        }
      }
    }
  ]
}
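How the deny above evaluates can be stated as a rule: the listed actions are denied unless the calling principal carries the `SecurityClearance=admin` tag (the `StringNotEquals` condition). A sketch of that evaluation, useful for testing the policy's intent before deploying it (the function is illustrative, not AWS's evaluation engine):

```python
def is_denied(action, principal_tags):
    """Approximate the deny statement above: the listed actions are
    denied unless the principal has SecurityClearance=admin."""
    denied_actions = {
        "ec2:CreateDefaultVpc", "ec2:ModifyInstanceAttribute",
        "iam:CreateAccessKey", "iam:DeactivateMFADevice",
        "s3:PutBucketPolicy", "kms:DisableKey", "kms:ScheduleKeyDeletion",
        "rds:ModifyDBInstance", "rds:DeleteDBInstance",
    }
    if action not in denied_actions:
        return False
    # StringNotEquals: the deny applies when the tag is absent or differs.
    return principal_tags.get("SecurityClearance") != "admin"
```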

Multi-Cloud IAM Governance Framework

If you operate across multiple clouds, you need a unified governance approach. Tooling alone will not solve this - you need processes.

Centralized Identity

  • Single IdP for humans: Use one identity provider (Entra ID, Okta, Google Workspace) and federate to all three clouds. No local cloud accounts for human users.
  • Consistent naming conventions: Service accounts, roles, and policies should follow a naming scheme that identifies the team, application, and environment (e.g., svc-payments-api-prod).
  • Unified group management: Map teams to groups in your IdP and assign cloud permissions to groups, not individuals.

Unified Policy Standards

Define your IAM standards once and implement them per-cloud:

| Standard | AWS Implementation | Azure Implementation | GCP Implementation |
| --- | --- | --- | --- |
| No long-lived credentials | SCP denying iam:CreateAccessKey | Conditional Access blocking password-only auth | Org Policy iam.disableServiceAccountKeyCreation |
| MFA required for humans | IAM policy requiring aws:MultiFactorAuthPresent | Conditional Access requiring MFA | Google Workspace 2SV enforcement |
| No public storage | SCP denying s3:PutBucketPolicy with public principal | Azure Policy denying public blob access | Org Policy storage.publicAccessPrevention |
| Privileged access is time-bound | IAM Identity Center with session duration | PIM with activation expiry | Privileged Access Manager with time-bound grants |
| All actions are logged | Organization CloudTrail | Diagnostic settings at management group level | Organization audit log sink |

Quarterly Review Process

  1. Week 1: Run automated scans across all clouds (Access Analyzer, Entra Access Reviews, IAM Recommender)
  2. Week 2: Review findings with each team - classify as accept, remediate, or investigate
  3. Week 3: Implement remediations and update policies
  4. Week 4: Verify changes and update documentation

Quick Reference: IAM Emergency Runbook

When you suspect a credential has been compromised, time is critical. Here are the immediate actions for each cloud.

AWS Credential Compromise

# 1. Deactivate the compromised access key immediately
aws iam update-access-key --user-name COMPROMISED_USER --access-key-id AKIAXXXXXXXX --status Inactive
 
# 2. Revoke all active sessions for an IAM role
aws iam put-role-policy \
  --role-name COMPROMISED_ROLE \
  --policy-name RevokeOlderSessions \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Deny",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "DateLessThan": {"aws:TokenIssueTime": "'"$(date -u +%Y-%m-%dT%H:%M:%SZ)"'"}
      }
    }]
  }'
 
# 3. Check CloudTrail for what the compromised credential accessed
# (BSD/macOS date syntax; on Linux use: date -u --date='24 hours ago' +%Y-%m-%dT%H:%M:%SZ)
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=AccessKeyId,AttributeValue=AKIAXXXXXXXX \
  --start-time $(date -u -v-24H +%Y-%m-%dT%H:%M:%SZ) \
  --max-results 50
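Step 2's inline policy is worth generating programmatically during an incident, so the `aws:TokenIssueTime` cutoff is stamped correctly and the JSON is well-formed under pressure. A sketch (the helper name is hypothetical):

```python
import json
from datetime import datetime, timezone


def revoke_sessions_policy(issued_before=None):
    """Build the 'revoke older sessions' inline policy from step 2:
    deny everything to sessions issued before the given UTC time
    (defaults to now), invalidating tokens already stolen."""
    cutoff = issued_before or datetime.now(timezone.utc)
    ts = cutoff.strftime("%Y-%m-%dT%H:%M:%SZ")
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {"DateLessThan": {"aws:TokenIssueTime": ts}},
        }],
    })
```

The output can be passed directly as `--policy-document` to `aws iam put-role-policy`.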

Azure Credential Compromise

# 1. Revoke all refresh tokens for the compromised user
az ad user update --id COMPROMISED_USER_ID --force-change-password-next-sign-in true
az rest --method POST --uri "https://graph.microsoft.com/v1.0/users/COMPROMISED_USER_ID/revokeSignInSessions"
 
# 2. Disable the service principal if it's a compromised app
az ad sp update --id APP_ID --account-enabled false
 
# 3. Check activity logs
# (BSD/macOS date syntax; on Linux use: date -u --date='24 hours ago' +%Y-%m-%dT%H:%M:%SZ)
az monitor activity-log list \
  --caller COMPROMISED_USER_UPN \
  --start-time $(date -u -v-24H +%Y-%m-%dT%H:%M:%SZ) \
  --output table

GCP Credential Compromise

# 1. Disable the compromised service account
gcloud iam service-accounts disable SA_EMAIL
 
# 2. Delete all keys for the service account
for key in $(gcloud iam service-accounts keys list --iam-account=SA_EMAIL --managed-by=user --format='value(name)'); do
  gcloud iam service-accounts keys delete $key --iam-account=SA_EMAIL --quiet
done
 
# 3. Check audit logs for the compromised identity
gcloud logging read \
  'protoPayload.authenticationInfo.principalEmail="SA_EMAIL"' \
  --project=PROJECT_ID \
  --freshness=24h \
  --limit=50
