aws-resource-counter

Command-line utility for counting the resources in use in an AWS organization.

The AWS resource counter utility known as "aws-resource-counter" inspects a cloud deployment on Amazon Web Services to assess the number of distinct compute and storage resources. The result is a CSV file that describes the counts of each.

This repository started out as cloud-resource-counter. See the archived repository to view its entire history.

Prerequisites

Authentication

This command line tool requires access to a valid AWS Account, and uses the same credential mechanisms as the AWS CLI. There are several ways to provide credentials.

Environment variables

If you do not specify a profile when running the tool, or the profile you specify does not contain credentials, the tool falls back to the following environment variables. This enables tools such as HashiCorp Vault to supply credentials seamlessly.

  • AWS_ACCESS_KEY_ID
  • AWS_SECRET_ACCESS_KEY
  • AWS_SESSION_TOKEN

These can be long-lived credentials, or can be short-term credentials obtained from the Command line or programmatic access section of the AWS SSO account selection page. For a single run of the tool using an SSO account, this is the simplest method. Do not pass --sso to the tool if you use this method, even if the environment variables are short-term credentials created by an SSO account.
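
For example, you might supply short-term credentials copied from the SSO page like this (the values shown are placeholders, not real keys):

$ export AWS_ACCESS_KEY_ID="AKIA..."
$ export AWS_SECRET_ACCESS_KEY="..."
$ export AWS_SESSION_TOKEN="..."
$ aws-resource-counter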

Credential profiles

If you have ever run the AWS CLI, you will already have at least one profile configured. This tool uses the same mechanism of retrieving and using stored credentials. You may store several sets of credentials, each being denoted by its own "profile name".

To create a new profile, run:

$ aws configure --profile some-profile-name
AWS Access Key ID [None]: ...

where some-profile-name is the name you would like to give this set of credentials. You will be prompted for several values (AWS Access Key ID, AWS Secret Access Key, Default region name, Default output format).

For help on storing AWS credentials, see Configuration Basics.

On-demand SSO access

To use an SSO-enabled account without copying short-term credentials into an environment variable, see Configure the AWS CLI to use AWS IAM Identity Center. Pass --sso to the aws-resource-counter tool when you run it.
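
A typical session might look like the following sketch; aws configure sso and aws sso login are standard AWS CLI v2 commands, and the profile name is illustrative:

$ aws configure sso --profile my-sso-profile
$ aws sso login --profile my-sso-profile
$ aws-resource-counter --sso --profile my-sso-profile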

Using aws-resource-counter

The following command line arguments are supported:

Argument           Meaning
--help             Information on the command line options.
--output-file OF   Write the results in Comma Separated Values format to file OF. Defaults to 'resources.csv'.
--no-output        Do not save the results to any file. Defaults to false (save to a file).
--profile PN       Use the credentials associated with the shared profile named PN. If omitted, the default profile is used (often called "default").
--region RN        Collect resource counts for a single AWS region RN. If omitted, all regions are examined.
--sso              Use SSO for authentication. Defaults to false.
--trace-file TF    Write a trace of all AWS calls to file TF.
--version          Display version information and then exit.
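
For example, to count resources in a single region using a named profile and a custom output file (the profile and file names here are illustrative):

$ aws-resource-counter --profile staging --region us-east-1 --output-file staging.csv
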
Repeated Usage

We designed the tool to be as easy as possible to run. If you run it without any arguments, the tool uses the following defaults:

  • We will examine ALL REGIONS to give you a comprehensive view of your AWS resources.
  • We will use the credentials associated with your DEFAULT PROFILE (honoring the AWS_PROFILE environment variable).
  • We will SAVE THE RESULTS to a file called resources.csv.

If you have multiple accounts associated with your AWS Organization, you can invoke the tool repeatedly, once for each profile:

  • Simply invoke the tool again with --profile other-profile, where "other-profile" is the name of your other profile.

The results of your prior runs are preserved: we automatically append to the output file rather than overwriting it.

If you do not wish to save the results of a run to any file, use the --no-output flag on the command line.
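
A minimal sketch of a loop over several profiles (the profile names are placeholders):

$ for p in prod staging dev; do aws-resource-counter --profile "$p"; done

Because each run appends to resources.csv, the file ends up with one row per run.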

Sample Run, CSV File

Here is what it looks like when you run the tool:

$ aws-resource-counter
Cloud Resource Counter (v0.7.0) running with:
 o AWS Profile: default
 o AWS Region:  (All regions supported by this account)
 o Output file: resources.csv

Activity
 * Retrieving Account ID...OK (240520192079)
 * Retrieving EC2 counts...................OK (5)
 * Retrieving EC2 K8 related VMs Sub-instance counts...................OK (1)
 * Retrieving Spot instance counts...................OK (4)
 * Retrieving EBS volume counts...................OK (9)
 * Retrieving Unique container counts...................OK (3)
 * Retrieving Lambda function counts...................OK (12)
 * Retrieving RDS instance counts...................OK (7)
 * Retrieving Lightsail instance counts................OK (0)
 * Retrieving S3 bucket counts...OK (13)
 * Retrieving EKS Node counts....................OK (2)
 * Writing to file...OK

Success.

As you can see above, no command line arguments were necessary: it used my default profile ("default"), selected all regions, and saved results to a file called "resources.csv".

Here is what the CSV file looks like. It is important to mention that this tool was run TWICE to collect the results of two different accounts/profiles.

Account ID,Timestamp,Region,# of EC2 Instances,# of EC2 K8 related VMs Sub-instances,# of Spot Instances,# of EBS Volumes,# of Unique Containers,# of Lambda Functions,# of RDS Instances,# of Lightsail Instances,# of S3 Buckets,# of EKS Nodes
896149672290,2020-10-20T16:29:39-04:00,ALL_REGIONS,2,3,7,3,2,3,2,2,2
240520192079,2020-10-21T16:24:06-04:00,ALL_REGIONS,5,4,9,3,12,7,0,13,0

Here are some notes on specific columns:

Column Name   Column Notes
Account ID    This is the account number associated with the profile that you used.
Timestamp     This indicates when you collected the resource count.
Region        This indicates what single region (e.g., us-east-1) was inspected. If you did not specify a region, ALL_REGIONS is shown.

The rest of the columns refer to specific counts of a type of resource.

Installing

You can build this from source or use the precompiled binaries (see the Releases page). We provide binaries for Linux (x86_64 and i386) and MacOS. There is no installation process, as this is simply a command line tool.

To unzip and untar from the command line on MacOS, use this command:

$ tar -Zxvf aws-resource-counter_<<RELEASE_VERSION>>_<<PLATFORM>>_<<ARCH>>.tar.gz
x LICENSE
x README.md
x aws-resource-counter

The command on Linux is similar:

$ tar -zxvf aws-resource-counter_<<RELEASE_VERSION>>_<<PLATFORM>>_<<ARCH>>.tar.gz
x LICENSE
x README.md
x aws-resource-counter

The result is a binary called aws-resource-counter in the current directory.
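
You can confirm the binary runs by printing its version (the exact output varies by release):

$ ./aws-resource-counter --version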

These binaries can run on Linux OSes (32- and 64-bit versions) and MacOS (10.12 Sierra or later).

NOTE: On MacOS, this tool must be run from inside the Terminal application; it cannot be launched directly from the Finder.

MacOS Download

If you are using MacOS Catalina, there is a stricter process for running binaries produced by third party developers. You must allow "App Store and identified developers" for the binary to run. Here are the detailed steps:

  1. From the Apple menu, click "System Preferences".
  2. Select "Security & Privacy".
  3. If the settings are locked, unlock them. This requires you to enter your password.
  4. In the "Allow apps downloaded from:" section, choose "App Store and identified developers".
  5. You can lock your settings again if you like.

Building from Source

aws-resource-counter is written in Go. You can build and run it directly from source. We have built and tested this with the following Go versions:

  • v1.21

To run from source, use the following command line:

# Assumes that you are inside the aws-resource-counter folder
$ go run . --help

To run the unit tests, use the following command line:

# Assumes that you are inside the aws-resource-counter folder
$ go test
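
To build a standalone binary with the standard Go toolchain (the output name is our choice, not mandated by the project):

# Assumes that you are inside the aws-resource-counter folder
$ go build -o aws-resource-counter .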

Minimal IAM Policy

To use this utility, the following minimal IAM policy can be attached to a bare user account:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "cloudresourcecounterpermissions",
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeInstances",
                "ec2:DescribeRegions",
                "ec2:DescribeVolumes",
                "ecs:DescribeTaskDefinition",
                "ecs:ListTaskDefinitions",
                "lambda:ListFunctions",
                "lightsail:GetInstances",
                "lightsail:GetRegions",
                "rds:DescribeDBInstances",
                "s3:ListAllMyBuckets",
                "eks:DescribeNodegroup",
                "eks:ListNodegroups",
                "eks:ListClusters"
            ],
            "Resource": "*"
        }
    ]
}
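
One way to attach this policy with the AWS CLI (the user name is illustrative, and we assume the JSON above has been saved as policy.json):

$ aws iam put-user-policy --user-name resource-counter \
   --policy-name cloudresourcecounterpermissions \
   --policy-document file://policy.json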

Resources Counted

The aws-resource-counter examines the following resources:

  1. Account ID. We use the Security Token Service to collect the account ID associated with the caller.

    • This is stored in the generated CSV file under the "Account ID" column.
  2. EC2. We count the number of EC2 running instances (both "normal" and Spot instances) across all regions.

    • For EC2 instances, we only count those without an Instance Lifecycle tag (which is either spot or scheduled).

    • For EC2 K8 related VMs sub-instances, we only count those with a tag of aws:eks:cluster-name.

    • For Spot instances, we only count those with an Instance Lifecycle tag of spot.

    • This is stored in the generated CSV file under the "# of EC2 Instances", "# of EC2 K8 related VMs Sub-instances", and "# of Spot Instances" columns.

  3. EBS Volumes. We count the number of "attached" EBS volumes across all regions.

    • We only count those EBS volumes that are "attached" to an EC2 instance.

    • This is stored in the generated CSV file under the "# of EBS Volumes" column.

  4. Unique ECS Containers. We count the number of "unique" ECS containers across all regions.

    • We look at all task definitions and collect all of the Image name fields inside the Container Definitions.
    • We then simply count the number of unique Image names across all regions. This is the only resource counted this way.
    • We do not check that there is more than 1 running task. If the task definition exists, we count it.
    • This is stored in the generated CSV file under the "# of Unique Containers" column.
  5. Lambda Functions. We count the number of all Lambda functions across all regions.

    • We do not qualify the type of Lambda function.
    • This is stored in the generated CSV file under the "# of Lambda Functions" column.
  6. RDS Instances. We count the number of RDS instances across all regions.

    • We only count those instances whose state is "available".
    • This is stored in the generated CSV file under the "# of RDS Instances" column.
  7. Lightsail Instances. We count the number of Lightsail instances across all regions.

    • We do not qualify the type of Lightsail instance.
    • This is stored in the generated CSV file under the "# of Lightsail Instances" column.
  8. S3 Buckets. We count the number of S3 buckets across all regions.

    • We do not qualify the type of S3 bucket.
    • NOTE: We cannot currently count S3 buckets on a per-region basis (due to limitations with the AWS SDK).
    • This is stored in the generated CSV file under the "# of S3 Buckets" column.
  9. EKS Nodes. We count the number of nodes across all clusters in all regions.

    • We do not qualify the type of EKS node.
    • This is stored in the generated CSV file under the "# of EKS Nodes" column.

Alternative Means of Resource Counting

If you do not wish to use the aws-resource-counter utility, you can use the AWS CLI to collect these same counts. For some of these counts, it will be easy to do. For others, the command line is a bit more complex.

If you do not have the AWS CLI (version 2) installed, see Installing the AWS CLI on the AWS website.

For the purposes of explaining these scripts, we are using a Bash command line on a Unix operating system. If you use another command line processor (or OS), please adapt the script appropriately.

Setup

To make these scripts easier to read, we will put your profile in a shell variable.

$ aws_p='--profile my-profile'

In the example above, we have specified "my-profile" as the name of the AWS profile we want to use. Replace this with the name of your profile.

Execute the line above in your shell for the remaining scripts to work correctly.

Account ID

To collect the account ID, use the AWS CLI sts command, as in:

$ aws sts get-caller-identity $aws_p --output text --query Account
123456789012
EC2 and Spot Instances
Regions

To collect the total number of EC2 instances across all regions, we will need to run two AWS CLI commands. First, let's get the list of accessible regions where your EC2 instances are located:

$ aws ec2 describe-regions $aws_p \
   --filters Name=opt-in-status,Values=opt-in-not-required,opted-in \
   --region us-east-1 --output text --query 'Regions[].RegionName'
eu-north-1    ap-south-1    eu-west-3 ...

Notes on the command:

  1. It filters the list of regions to just those that are either "opted in" or where "opt in" is not required.
  2. We send this command to the US-EAST-1 region. (This is required since your profile may not specify a region.)
  3. We output the query as TEXT.
  4. We extract the RegionName field from the structure.

We will be using the results of this command to "iterate" over all regions. To make our scripts easier to read, we are going to store the results of this command to a shell variable.

$ ec2_r=$(aws ec2 describe-regions $aws_p \
   --filters Name=opt-in-status,Values=opt-in-not-required,opted-in \
   --region us-east-1 --output text --query 'Regions[].RegionName' )

You can show the list of regions for your account by using the echo command:

$ echo $ec2_r
eu-north-1 ap-south-1 eu-west-3 ...
Normal Instances

Here is the command to count the number of normal EC2 instances (those that are neither Spot nor Scheduled instances) for a given region:

$ aws ec2 describe-instances $aws_p --no-paginate --region us-east-1 \
      --filters Name=instance-state-name,Values=running \
      --query 'length(Reservations[].Instances[?!not_null(InstanceLifecycle)].InstanceId[])'
4

The number 4 above means that there were 4 running EC2 instances found. (Your results may vary.)

The filters clause restricts the list to just those instances that are running.

By default, the EC2 describe-instances command returns "normal" EC2 instances as well as "spot" instances. As such, the query argument does the following:

  1. Find all Instances (in all Reservations) and qualify each:
    • InstanceLifecycle attribute is not (!) non-null (not_null).
      • This is effectively saying, where InstanceLifecycle is null.
      • The language specification (JMESPath) does not have a null() function.
    • Get the InstanceId for each of these matching Instances and form into a flattened array.
  2. Get the length of that array.

We will need to run this command over all regions. Here is what it looks like:

$ for reg in $ec2_r; do \
      aws ec2 describe-instances $aws_p --no-paginate --region $reg \
         --filters Name=instance-state-name,Values=running \
         --query 'length(Reservations[].Instances[?!not_null(InstanceLifecycle)].InstanceId[])' ; \
  done | paste -s -d+ - | bc
 23

(This command may take 1-2 minutes, so be patient.)

The first line loops over all regions (using the variable reg to hold the current value).

The second and third lines are our call to describe-instances (as shown above).

In the fourth line, we paste all of the values into a long addition and use bc to sum the values.
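
To see what the paste/bc idiom does in isolation, feed it a few numbers:

$ printf '4\n5\n1\n' | paste -s -d+ - | bc
10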

Here is the command to count the number of EC2 K8 related VM sub-instances for a given region:

$ aws ec2 describe-instances $aws_p --no-paginate --region us-east-1 \
      --filters Name=instance-state-name,Values=running Name=tag-key,Values='aws:eks:cluster-name' \
      --query 'length(Reservations[].Instances[?!not_null(InstanceLifecycle)].InstanceId[])'
1

This command is similar to the normal EC2 query, but adds a filter for EC2 instances that carry the tag key aws:eks:cluster-name. Note that both filters are passed in a single --filters option; if --filters is repeated, the AWS CLI keeps only the last occurrence.

We will need to run this command over all regions. Here is what it looks like:

$ for reg in $ec2_r; do \
   aws ec2 describe-instances $aws_p --no-paginate --region $reg \
      --filters Name=instance-state-name,Values=running Name=tag-key,Values='aws:eks:cluster-name' \
      --query 'length(Reservations[].Instances[?!not_null(InstanceLifecycle)].InstanceId[])' ; \
done | paste -s -d+ - | bc
5
Spot Instances

Here is the command to count the number of Spot instances for a given region:

$ aws ec2 describe-instances $aws_p --no-paginate --region us-east-1 \
      --filters Name=instance-state-name,Values=running \
      --query 'length(Reservations[].Instances[?InstanceLifecycle==`spot`].InstanceId[])'
1

This command is similar to the normal EC2 query, but now explicitly checks for EC2 instances whose InstanceLifecycle is spot.

We will need to run this command over all regions. Here is what it looks like:

$ for reg in $ec2_r; do \
   aws ec2 describe-instances $aws_p --no-paginate --region $reg \
      --filters Name=instance-state-name,Values=running \
      --query 'length(Reservations[].Instances[?InstanceLifecycle==`spot`].InstanceId[])' ; \
done | paste -s -d+ - | bc
5
EBS Volumes

Here is the command to count all EBS Volumes in a given region:

$ aws ec2 describe-volumes $aws_p --no-paginate --region us-east-1 \
   --query 'length(Volumes[].Attachments[?not_null(InstanceId)].InstanceId[])'
3

It restricts the count to just those EBS volumes that are attached to an EC2 instance. Unattached EBS volumes are not counted.

To run this over all regions, use this command:

$ for reg in $ec2_r; do \
   aws ec2 describe-volumes $aws_p --no-paginate --region $reg \
      --query 'length(Volumes[].Attachments[?not_null(InstanceId)].InstanceId[])' ; \
done | paste -s -d+ - | bc
11
Unique ECS Containers

To compute the number of unique ECS container images, we must invoke two AWS CLI commands: list-task-definitions and describe-task-definition. The first command gives us a list of "Task Definition ARNs". Then for each task definition ARN, we can get a description of that task. Let's look at each part.

List Task Definitions

Here is how we get a list of task definitions in a single region:

$ aws ecs list-task-definitions $aws_p --no-paginate --region us-east-1 \
   --output text --query taskDefinitionArns[]
arn:aws:ecs:us-east-1:123456789012:task-definition/some-task-family:1
arn:aws:ecs:us-east-1:123456789012:task-definition/another-task-family:1

You can collect all task definitions for all regions using:

$ for reg in $ec2_r; do \
   aws ecs list-task-definitions $aws_p --no-paginate --region $reg \
      --output text --query taskDefinitionArns[]; done
arn:aws:ecs:us-east-1:123456789012:task-definition/some-task-family:1
arn:aws:ecs:us-east-1:123456789012:task-definition/another-task-family:1
arn:aws:ecs:us-east-2:123456789012:task-definition/my-family:1
arn:aws:ecs:us-west-1:123456789012:task-definition/last-family:3

This list cannot give us a unique count of container images, but we can use this to get the definition for each.

Describe Task Definition

Once we have a task definition ARN, we can ask for its definition. Here's how we do that for a single task in a single region:

$ aws ecs describe-task-definition $aws_p --region us-east-1 \
   --task-definition arn:aws:ecs:us-east-1:123456789012:task-definition/some-task-family:1 \
   --output text --query taskDefinition.containerDefinitions[].image
tomcat
https://github.com/docker-library/mongo:4.0

To combine this with a list of all task definition ARNs and regions, use the following command:

$ for reg in $ec2_r; do \
   for td in $(aws ecs list-task-definitions $aws_p --no-paginate --region $reg \
      --output text --query taskDefinitionArns[]); do \
      aws ecs describe-task-definition $aws_p --region $reg \
         --task-definition $td --output text \
         --query taskDefinition.containerDefinitions[].image; \
   done; done | sort | uniq | wc -l
11
Lambda Functions

To count the lambda functions in a given region, use the AWS CLI lambda command, as in:

$ aws lambda list-functions $aws_p --no-paginate --region us-east-1 \
   --query 'length(Functions)'
4

To count lambda functions across all regions, use this command:

$ for reg in $ec2_r; do \
   aws lambda list-functions $aws_p --no-paginate --region $reg \
      --query 'length(Functions)' ; \
done | paste -s -d+ - | bc
7
RDS Instances

To count the RDS instances in a given region, we use the AWS CLI rds command, as in:

$ aws rds describe-db-instances $aws_p --no-paginate --region us-east-1 \
   --query 'length(DBInstances[?DBInstanceStatus==`available`])'
1

You can see that we are only counting those DB Instances whose status is "available".

To count all RDS instances across all regions, use:

$ for reg in $ec2_r; do \
   aws rds describe-db-instances $aws_p --no-paginate --region $reg \
      --query 'length(DBInstances[?DBInstanceStatus==`available`])' ; \
done | paste -s -d+ - | bc
5
Lightsail Instances

Lightsail is available in a different set of regions than EC2; as such, we need a separate command to collect all of the Lightsail regions:

$ aws lightsail get-regions $aws_p --region us-east-1 --output text \
   --query "regions[].name"
us-east-1    us-east-2    us-west-2 ...

As you can see, the region names are returned in the form that the --region argument expects.

Here's how we get the number of running Lightsail instances in a given region:

$ aws lightsail get-instances $aws_p --region us-east-1 \
   --query 'length(instances[?state.name==`running`])'
2

Here is how we put the two calls together to count all running instances across all regions:

$ for reg in $(aws lightsail get-regions $aws_p --region us-east-1 --output text \
   --query 'regions[].name'); do \
   aws lightsail get-instances $aws_p --region $reg \
      --query 'length(instances[?state.name==`running`])';
done | paste -s -d+ - | bc
3
S3 Buckets

This count is probably the easiest. To count all S3 buckets in all regions, you need only one command:

$ aws s3api list-buckets $aws_p --query 'length(Buckets)'
10

Note that it is not possible through the AWS CLI to get S3 buckets on a per-region basis.
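
If you need the region of an individual bucket, you can query it one bucket at a time (the bucket name is a placeholder; buckets in us-east-1 print None):

$ aws s3api get-bucket-location $aws_p --bucket my-bucket \
   --output text --query LocationConstraint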

EKS Nodes

To count the EKS nodes in a given region, we use the AWS CLI eks commands, as in:

$ region=us-east-2
$ clusters=$(aws eks list-clusters $aws_p --no-paginate --region $region --output text --query='clusters')
$ for cluster in $clusters; do \
   for node_pool in $(aws eks list-nodegroups $aws_p --no-paginate --region $region --cluster-name $cluster --query=nodegroups --output text); do \
      aws eks describe-nodegroup $aws_p --no-paginate --region $region --cluster-name $cluster --nodegroup-name $node_pool --query="nodegroup.scalingConfig.desiredSize"; \
   done; done | paste -s -d+ - | bc
1

To count all EKS nodes across all regions, use:

$ for reg in $ec2_r; do \
   for cluster in $(aws eks list-clusters $aws_p --no-paginate --region $reg --output text --query='clusters'); do \
      for node_pool in $(aws eks list-nodegroups $aws_p --no-paginate --cluster-name $cluster  --region $reg --query=nodegroups --output text); do \
         aws eks describe-nodegroup $aws_p --no-paginate --region $reg --cluster-name $cluster --nodegroup-name $node_pool --query="nodegroup.scalingConfig.desiredSize"; \
      done; done; done | paste -s -d+ - | bc
5
