
Scopes and Segmentation Strategy

In the article on Inventory, we saw that Scopes are saved filters that logically group resources. But a Scope is not just an organizational convenience: it is one of the most critical security components in the entire Apono configuration. It is the Scope that defines the attack surface of each Access Flow.

The Problem: Flows Without Scope

Imagine the simplest configuration scenario: you create an Access Flow pointing directly to the AWS integration, without any Scope. The flow allows backend team engineers to request access to AWS resources with manager approval.

What happens in practice?

When an engineer opens the Apono portal to request access, they see all resources from the AWS integration: all EC2 instances, all RDS databases, all S3 buckets, all Organization accounts. The flow doesn't filter anything β€” it just requires an approval.

The problem is not only that the engineer can request access to critical resources, but also that they see that these resources exist. Visibility itself is a risk. An engineer who shouldn't know about a PII database in a compliance account now knows it exists, what its name is, and that they can request access.

And the only barrier between them and that access is the manager's approval.

The Reality of Approvals

In theory, the manager should evaluate each request carefully: verify whether the engineer truly needs that access, whether the resource is correct, whether the permission level is appropriate, and whether the duration is reasonable. In more critical scenarios, the ideal would be to also consult the compliance or GRC (Governance, Risk, and Compliance) team to ensure the access complies with internal and regulatory policies.

In practice, most approvers simply approve. The reasons are predictable:

  • The engineer says they need it to resolve an incident and there's pressure to unblock quickly.
  • The approver is the team manager and trusts the team.
  • The request arrives on Slack and approving is a single click.
  • The approver lacks the technical context to assess whether that level of access is truly necessary.
  • Involving compliance or GRC in every request would make the process too slow for daily operations.

This doesn't eliminate the approvers' responsibility; they remain an important part of the process. But relying exclusively on human approval under pressure is a fragile control. Well-configured Scopes and Access Flows reduce the margin of error and ensure that, even when the approver approves quickly, the granted access is already within safe limits.

Approval should be the last filter, not the only one. Approvers are responsible for the decision, but Scopes and Access Flows exist so that decision happens within a secure perimeter.

How Scopes Solve the Problem

A Scope limits what an Access Flow can grant. It acts as a filter on the inventory that defines which resources are within reach of that flow. Resources outside the Scope simply don't appear to the requester: they can't request access to what they can't see.

With well-defined Scopes:

  • The backend engineer who needs to access development EC2 instances only sees development instances.
  • The DBA who needs to access production databases only sees production databases and goes through a different approval flow.
  • Compliance, security, or critical infrastructure resources don't appear in any engineering flow.

Even if the approver approves without thinking, the damage is limited to what the Scope allows. The blast radius of a careless approval is contained by the scope.
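The filtering mechanics can be sketched in a few lines of Python. This is a hypothetical model, not Apono's implementation: the resource shapes, tag names, and the `matches_scope` function are illustrative assumptions.

```python
# Hypothetical sketch: a Scope as a tag filter over an inventory.
# Resource shapes and filter semantics are illustrative, not Apono's API.

def matches_scope(resource: dict, scope_filter: dict) -> bool:
    """A resource is in scope only if it carries every required tag value."""
    return all(resource.get("tags", {}).get(k) == v for k, v in scope_filter.items())

inventory = [
    {"id": "i-dev-01",  "tags": {"Environment": "development", "ResourceType": "EC2"}},
    {"id": "i-prod-01", "tags": {"Environment": "production",  "ResourceType": "EC2"}},
    {"id": "db-pii-01", "tags": {"Environment": "production",  "DataClassification": "pii"}},
]

dev_scope = {"Environment": "development"}

# What the requester sees is only what passes the filter:
# the production instance and the PII database never appear.
visible = [r["id"] for r in inventory if matches_scope(r, dev_scope)]
print(visible)  # ['i-dev-01']
```

The key property is that filtering happens before the request, not after: out-of-scope resources are invisible, so a careless approval can never reach them.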

It's worth remembering that a single user (or group of users) can be associated with more than one Access Flow. In that case, the available accesses add up: they can request everything each flow allows. The difference between flows can lie in the approvers, the visible resources, or the permission level granted by each Scope.

Segmentation Strategy

How you define your Scopes determines the security level of the entire Apono operation. A good segmentation strategy follows two axes: environment and criticality.

Axis 1: By Environment or AWS Account

The most basic and essential segmentation. Production and development resources should never be in the same Scope.

In practice, many companies use multiple AWS accounts to separate environments: one account for development, another for staging, and another for production. This is a recommendation from AWS itself (AWS Organizations with a multi-account strategy). When the organization follows this model, the Scope can be defined directly by the AWS account instead of relying on environment tags:

| Scope | Filter | Associated Flow |
| --- | --- | --- |
| EC2 Development | Account 111111111111 (dev) + ResourceType=EC2 | Auto-approval, 8 hours |
| EC2 Staging | Account 222222222222 (staging) + ResourceType=EC2 | Manager approval, 4 hours |
| EC2 Production | Account 333333333333 (prod) + ResourceType=EC2 | Manager + security approval, 2 hours |

If the company uses a single AWS account for all environments, segmentation depends on tags like Environment=production. But in organizations with separate accounts, the account itself is the most reliable filter: there's no risk of someone forgetting to apply a tag and the resource ending up in the wrong Scope.

Companies using AWS Organizations with multiple accounts have a natural advantage: the environment separation is already in the account structure. Even with a single integration on the main account, Scopes can filter resources by account, making segmentation more robust and less dependent on tags.

With this segmentation, the engineer who needs to troubleshoot in dev gets quick access without waiting for approval. But for production, the flow requires more rigor. And the Scope ensures that, even with auto-approval in dev, they never access production through that flow.
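Account-based scoping can be sketched the same way. The account IDs, field names, and `in_scope` helper below are illustrative assumptions, not Apono's data model; the point is that the account is set at creation time and, unlike a tag, cannot be forgotten.

```python
# Hypothetical sketch: scoping by AWS account instead of tags.
# Account IDs and resource shapes are illustrative.

DEV_ACCOUNT, PROD_ACCOUNT = "111111111111", "333333333333"

resources = [
    {"id": "i-0aaa", "account": DEV_ACCOUNT,  "type": "EC2"},
    {"id": "i-0bbb", "account": PROD_ACCOUNT, "type": "EC2"},
    {"id": "db-01",  "account": PROD_ACCOUNT, "type": "RDS"},
]

def in_scope(resource: dict, accounts: set, resource_type: str) -> bool:
    # The account is an intrinsic property of the resource, so this filter
    # can't be broken by a missing or misspelled tag.
    return resource["account"] in accounts and resource["type"] == resource_type

ec2_dev = [r["id"] for r in resources if in_scope(r, {DEV_ACCOUNT}, "EC2")]
ec2_prod = [r["id"] for r in resources if in_scope(r, {PROD_ACCOUNT}, "EC2")]
print(ec2_dev, ec2_prod)  # ['i-0aaa'] ['i-0bbb']
```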

Axis 2: By Criticality

Within the same environment, not all resources have the same risk level. A database with customer data (PII) is more sensitive than a Redis cache. An S3 bucket with database backups is more critical than a bucket with static website assets:

| Scope | Filter | Associated Flow |
| --- | --- | --- |
| RDS Production (PII) | Environment=production + DataClassification=pii | DBA + compliance approval, 1 hour, SELECT only |
| RDS Production (Internal) | Environment=production + DataClassification=internal | DBA approval, 2 hours |
| RDS Dev | Environment=development + ResourceType=RDS | Auto-approval, 4 hours |
| S3 Production (Sensitive) | Environment=production + DataClassification=sensitive | Manager + security approval, 1 hour, ReadOnly only |
| S3 Production (Public) | Environment=production + DataClassification=public | Manager approval, 4 hours |
| S3 Dev | Environment=development + ResourceType=S3 | Auto-approval, 8 hours |

Notice that the same S3 buckets in the same production environment have completely different flows depending on the DataClassification tag. A bucket with database backups, logs containing personal data, or financial report exports requires much stricter control than a bucket serving images for a public website.
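The mapping from classification to flow strictness can be made explicit as data. This is a hypothetical sketch mirroring the RDS rows above; the dictionary shape and the `flow_for` helper are assumptions, as is the fail-closed default for untagged resources.

```python
# Hypothetical sketch: choosing flow strictness from the DataClassification tag,
# mirroring the RDS rows in the table above. Names and shapes are illustrative.

FLOW_BY_CLASSIFICATION = {
    "pii":      {"approvers": ["DBA", "Compliance"], "duration_h": 1},
    "internal": {"approvers": ["DBA"],               "duration_h": 2},
}

def flow_for(tags: dict) -> dict:
    classification = tags.get("DataClassification")
    # Fail closed: a missing or unknown classification gets the strictest flow.
    return FLOW_BY_CLASSIFICATION.get(classification, FLOW_BY_CLASSIFICATION["pii"])

print(flow_for({"DataClassification": "internal"})["duration_h"])  # 2
print(flow_for({})["approvers"])  # ['DBA', 'Compliance']
```

Failing closed for untagged resources is a design choice worth considering: the tables in this article assume tags are present, but an untagged production database should default to the strictest treatment, not the loosest.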

Combining the Axes

In practice, the two axes combine into a matrix of Scopes that covers the entire organization: each environment (development, staging, production) crossed with each criticality level (public, internal, sensitive, PII).

The more critical the environment and the data classification, the more restrictive the flow should be: more approvers, shorter duration, more granular permissions.

Practical Example

An organization with 3 AWS accounts (dev, staging, production) and 2 teams (backend, data) could configure:

Scopes:

  • ec2-dev: EC2 in development accounts
  • ec2-staging: EC2 in staging accounts
  • ec2-prod: EC2 in production accounts
  • rds-dev: RDS in development accounts
  • rds-prod-internal: Production RDS with DataClassification=internal
  • rds-prod-pii: Production RDS with DataClassification=pii
  • s3-dev: S3 in development accounts
  • s3-prod-public: Production S3 with DataClassification=public
  • s3-prod-sensitive: Production S3 with DataClassification=sensitive

Access Flows:

| Flow | Who Can Request | Scope | Approval | Duration |
| --- | --- | --- | --- | --- |
| Dev EC2 | Backend + Data | ec2-dev | Auto-approval | 8h |
| Staging EC2 | Backend + Data | ec2-staging | Manager | 4h |
| Prod EC2 | Backend | ec2-prod | Manager + SRE | 2h |
| Dev RDS | Data | rds-dev | Auto-approval | 4h |
| Prod RDS (internal) | Data | rds-prod-internal | Manager + DBA | 2h |
| Prod RDS (PII) | Data | rds-prod-pii | Manager + DBA + Compliance | 1h, SELECT only |
| Dev S3 | Backend + Data | s3-dev | Auto-approval | 8h |
| Prod S3 (public) | Backend | s3-prod-public | Manager | 4h |
| Prod S3 (sensitive) | Data | s3-prod-sensitive | Manager + Security | 1h, ReadOnly only |

The backend engineer never sees RDS databases in any flow. The data engineer never sees production EC2 instances. And S3 buckets with sensitive data are only accessible to the data team with security approval. Even within the flows each person can access, the Scope limits the visible resources.
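Because accesses across flows add up, it's useful to compute the union of everything a team can reach. The sketch below encodes the flow table above as plain data; the structure and the `reachable_scopes` helper are illustrative assumptions, not an Apono API.

```python
# Hypothetical sketch: the Access Flow table above as data, used to answer
# "which Scopes can this team reach across all of its flows?"

flows = [
    {"name": "Dev EC2",             "teams": {"backend", "data"}, "scope": "ec2-dev"},
    {"name": "Staging EC2",         "teams": {"backend", "data"}, "scope": "ec2-staging"},
    {"name": "Prod EC2",            "teams": {"backend"},         "scope": "ec2-prod"},
    {"name": "Dev RDS",             "teams": {"data"},            "scope": "rds-dev"},
    {"name": "Prod RDS (internal)", "teams": {"data"},            "scope": "rds-prod-internal"},
    {"name": "Prod RDS (PII)",      "teams": {"data"},            "scope": "rds-prod-pii"},
    {"name": "Dev S3",              "teams": {"backend", "data"}, "scope": "s3-dev"},
    {"name": "Prod S3 (public)",    "teams": {"backend"},         "scope": "s3-prod-public"},
    {"name": "Prod S3 (sensitive)", "teams": {"data"},            "scope": "s3-prod-sensitive"},
]

def reachable_scopes(team: str) -> set:
    """Accesses add up: the union of every Scope the team can request."""
    return {f["scope"] for f in flows if team in f["teams"]}

print(sorted(reachable_scopes("backend")))
# ['ec2-dev', 'ec2-prod', 'ec2-staging', 's3-dev', 's3-prod-public']
```

Note that no RDS scope appears in the backend result: the databases are not merely unapprovable for that team, they are unreachable.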

The Importance of AWS Tags

Not all segmentation depends exclusively on tags. If the company already has separate AWS accounts by environment, the account structure itself serves as a filter. If resources follow a clear naming convention (e.g., prod-api-rds-01, dev-worker-ec2-03), the resource name already helps differentiate them. But in practice, few organizations have such a well-organized architecture. Tags exist precisely to compensate for this lack of structure. Even in well-architected environments, they add an extra layer of classification beyond what accounts and names can express, such as data sensitivity level (DataClassification=pii) or responsible team (Team=data).

AWS Tag Editor: Your Ally

AWS offers the Tag Editor in the console (under Resource Groups & Tag Editor), which allows you to:

  • View all resources in a region and their tags on a single screen
  • Bulk edit tags: select multiple resources and apply the same tag at once
  • Identify untagged resources: filter for resources missing a specific tag
  • Audit consistency: verify that all production RDS instances actually have Environment=production

In practice, the process is:

  1. Access Tag Editor in the AWS console
  2. Filter by resource type (e.g., RDS, S3, EC2)
  3. Identify resources missing required tags
  4. Select the resources and add missing tags in bulk
  5. Repeat periodically: new resources created without tags silently break Apono Scopes
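Step 3 is the one most worth automating. In the console it's Tag Editor's missing-tag filter; the sketch below does the same over an exported resource list in plain Python. The required-tag set, ARNs, and resource shapes are illustrative assumptions.

```python
# Hypothetical sketch: finding resources that are missing a required tag,
# the same check Tag Editor's missing-tag filter performs in the console.

REQUIRED_TAGS = {"Environment", "DataClassification"}

resources = [
    {"arn": "arn:aws:rds:us-east-1:123456789012:db:customers",
     "tags": {"Environment": "production", "DataClassification": "pii"}},
    {"arn": "arn:aws:s3:::reports-bucket",
     "tags": {"Environment": "production"}},  # DataClassification forgotten
]

def missing_tags(resource: dict) -> set:
    """Which required tags this resource lacks (empty set means fully tagged)."""
    return REQUIRED_TAGS - resource["tags"].keys()

untagged = {r["arn"]: missing_tags(r) for r in resources if missing_tags(r)}
print(untagged)  # {'arn:aws:s3:::reports-bucket': {'DataClassification'}}
```

Run on a schedule, a report like this catches the silently untagged resources of step 5 before they break a Scope.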

Tags and Infrastructure as Code

Ideally, tags should be defined when the resource is created via Terraform, CloudFormation, or another IaC tool. This ensures every resource is born with the correct tags:

```hcl
# Terraform example
resource "aws_db_instance" "customers" {
  # ... RDS configuration

  tags = {
    Environment        = "production"
    DataClassification = "pii"
    Team               = "data"
    Service            = "customers-db"
  }
}

resource "aws_s3_bucket" "reports" {
  # ... bucket configuration

  tags = {
    Environment        = "production"
    DataClassification = "sensitive"
    Team               = "data"
    Service            = "financial-reports"
  }
}
```

Without this pattern, you depend on someone remembering to add tags manually, and experience shows this doesn't work consistently. A resource without tags doesn't disappear from Apono; it remains in the integration's inventory. The risk is that it escapes Scope filters, potentially appearing in generic flows where it shouldn't be, or not being captured by any specific Scope that depends on tags for segmentation.

Common Mistakes

1. Single Scope for the entire integration

A Scope that points to "all resources in the AWS integration" is the same as having no Scope. All the security Apono offers depends on proper segmentation.

2. Scopes based only on resource type

Creating a Scope for "all EC2" or "all RDS" without differentiating by environment or criticality puts production and development in the same bucket. The approver gets used to approving everything because most requests are for dev, and when a production request comes in, they approve on autopilot.

3. Not updating Scopes when infrastructure changes

Tag-based Scopes are dynamic and update automatically. But if the tagging strategy changes (e.g., Environment becomes env), the Scopes silently become empty. Periodically monitor whether Scopes are capturing the expected resources.
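This check is also easy to automate. The sketch below flags Scopes whose filters no longer match anything, the symptom of a tag rename like Environment becoming env; the inventory shape and scope definitions are illustrative assumptions.

```python
# Hypothetical sketch: a periodic check that warns when a Scope's filter
# matches zero resources, e.g. after a tag key was renamed.

inventory = [
    {"id": "db-01", "tags": {"env": "production"}},  # tag strategy changed...
]

scopes = {
    "rds-prod": {"Environment": "production"},  # ...but the Scope still uses the old key
}

def scope_size(scope_filter: dict) -> int:
    """How many inventory resources currently match this filter."""
    return sum(
        all(r["tags"].get(k) == v for k, v in scope_filter.items())
        for r in inventory
    )

empty = [name for name, f in scopes.items() if scope_size(f) == 0]
print(empty)  # ['rds-prod']
```

An alert on any empty Scope turns a silent failure into a visible one.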

4. Delegating security solely to the approver

If your security model depends on the manager denying improper requests, you've already lost. Scopes exist to ensure that improper requests are not possible, regardless of who approves.

Scope configuration is the job of Apono administrators, not approvers. Whoever defines the Scopes defines the attack surface of the entire organization. Invest time in this configuration: it's the most impactful security decision you'll make in Apono.