AWS Commands

Account

Account number:

aws sts get-caller-identity \
    --query 'Account' \
    --output text

IP Address Ranges

AWS documentation is here.

IpRanges=/tmp/ip-ranges.json
curl -s https://ip-ranges.amazonaws.com/ip-ranges.json > "$IpRanges"

cat "$IpRanges" \
    | jq -r '.prefixes[]
          | select(.region == "us-east-2")
          | .ip_prefix'

Get Region of IP

(Would be cool).

Check IP address against each CIDR range. Would be cool to do this using bit comparisons.

See: https://unix.stackexchange.com/questions/274330/check-ip-is-in-range-of-whitelist-array
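
A minimal sketch of the bit-comparison approach in bash, assuming IPv4 only and that $IpRanges points at a downloaded copy of ip-ranges.json; the test address is just an example:

# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
    local IFS=.
    read -r a b c d <<< "$1"
    echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

ip=52.95.110.1                 # example address
ip_int=$(ip_to_int "$ip")

jq -r '.prefixes[] | "\(.ip_prefix) \(.region)"' "$IpRanges" \
    | while read -r cidr region; do
          base=${cidr%/*}
          bits=${cidr#*/}
          # The address is in the range if it equals the network base
          # under the prefix mask (the bit-comparison trick).
          mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
          if (( (ip_int & mask) == ($(ip_to_int "$base") & mask) )); then
              echo "$ip is in $cidr ($region)"
          fi
      done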

Regions

All regions in partition

aws ec2 describe-regions \
    | jq -r ".Regions[].RegionName"

Number of regions:

AllRegions=$(aws ec2 describe-regions \
                 | jq -r ".Regions[].RegionName")
echo $AllRegions | wc -w   # Using wc

regions=($AllRegions)      # Or use an array, and
echo ${#regions[*]}        # print array length

All regions in all partitions

Should be in the botocore data files. Also, we can pull them from the IpRanges document. You have to decide whether "GLOBAL" is a "region" in your situation.

cat "$IpRanges" \
    | jq -r '.prefixes[].region' \
    | sort | uniq
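
The botocore route, as a sketch: boto3 exposes the bundled endpoint data through get_available_partitions/get_available_regions (assumes python3 and boto3 are installed; ec2 is used as a representative service):

python3 -c '
import boto3
s = boto3.session.Session()
for partition in s.get_available_partitions():
    for region in s.get_available_regions("ec2", partition_name=partition):
        print(partition, region)
'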

Run a command in all regions

Linear

Get the number of VPCs in each region the slow and simple way.

for region in $AllRegions; do
    vpcs=($(aws --region $region ec2 describe-vpcs | jq '.Vpcs[].VpcId'))
    echo "$region: ${#vpcs[*]}"
done

Parallel

Get the number of VPCs in each region the 🚀 and 😎 (and 🤮) way by starting a subprocess for each region in parallel. Note that ${#regions[@]} is bash syntax for the length of the $regions array.

script=$(cat <<-"EOF"
vpcs=($(aws --region {} ec2 describe-vpcs | jq '.Vpcs[].VpcId'))
echo "{}: ${#vpcs[@]}"
EOF
)

regions=($AllRegions)
printf "%s\n" "${regions[@]}" \
    | xargs -n 1 \
            -P ${#regions[*]} \
            -I {} \
            bash -c "$script"

ACM

list-certificates

aws acm list-certificates \
    --includes "keyUsage=ANY" \
    | jq '.CertificateSummaryList[]'

describe-certificate

aws acm describe-certificate \
    --certificate-arn $cert_arn

get-certificate

aws acm get-certificate \
    --certificate-arn $cert_arn

Get certificate PEM File

aws acm get-certificate \
    --certificate-arn $cert_arn \
    | jq -r '.Certificate' > cert.pem

API Gateway V2

get-apis

aws apigatewayv2 get-apis \
    | jq '.Items[]'

CloudFormation

For create/deploy/update, see my cloudformation/README.org.

View Stack Log

aws cloudformation list-stacks \
    | jq -M '.StackSummaries[0].StackId' \
    | xargs aws cloudformation describe-stack-events --stack-name

List Stacks

List only the created stacks

aws cloudformation list-stacks \
    --stack-status-filter CREATE_IN_PROGRESS CREATE_COMPLETE

Stack Outputs

aws cloudformation describe-stacks \
    --stack-name $StackName \
    | jq '.Stacks[].Outputs[]'

Get the value of a single output:

aws cloudformation describe-stacks \
    --stack-name $StackName \
    --query "Stacks[0].Outputs[?OutputKey=='$OutputKey'].OutputValue" \
    --output text

Get Stack Parameters

Get all parameters used to create a particular stack:

aws cloudformation describe-stacks \
    --stack-name $StackName \
    | jq '.Stacks[].Parameters[]'

Get the value of a single parameter:

aws cloudformation describe-stacks \
    --stack-name $StackName \
    --query "Stacks[0].Parameters[?ParameterKey=='$Parameter'].ParameterValue" \
    --output text

Exports

Get value of the exported field with name $ExportName:

aws cloudformation list-exports \
    --query "Exports[?Name=='$ExportName'].Value" \
    --output text

List exports that start with a string:

aws cloudformation list-exports \
    --query "Exports[?starts_with(Name, 'foo-')].[Name, Value]" \
    --output text

Spec Files

URLs are documented here.

CloudWatch

Delete All Alarms

alarms=$(aws cloudwatch describe-alarms \
             | jq -r '.MetricAlarms[].AlarmName')

for alarm in $alarms; do
    aws cloudwatch delete-alarms \
        --alarm-names $alarm;
done

Create Alarm Targeting ASG policy

aws cloudwatch put-metric-alarm \
    --alarm-name AddCapacity \
    --metric-name CPUUtilization \
    --namespace AWS/EC2 \
    --statistic Average \
    --period 120 \
    --threshold 80 \
    --comparison-operator GreaterThanOrEqualToThreshold \
    --dimensions "Name=AutoScalingGroupName,Value=my-asg" \
    --evaluation-periods 2 \
    --alarm-actions $ScalingPolicyArn

CodeDeploy

list-deployments

aws deploy list-deployments \
    --application-name $app

list-tags-for-resource

aws deploy list-tags-for-resource \
    --resource-arn ""

DynamoDB

The paper: Dynamo: Amazon’s Highly Available Key-value Store. This describes a distributed database with a leaderless replication model.

From Designing Data-Intensive Applications by Martin Kleppmann:

Dynamo is not available to users outside of Amazon. Confusingly, AWS offers a hosted database product called DynamoDB, which uses a completely different architecture: it is based on single-leader replication.

List Tables

aws dynamodb list-tables | jq '.TableNames[]'

Delete Tables Like a Maniac

AWS allows you to have 10 delete-table operations running at a time. We can create a pool of 10 processes and continuously pump delete-table commands into it using xargs.

script=$(cat <<"EOF"
aws dynamodb delete-table --table-name {}
aws dynamodb wait table-not-exists --table-name {}
EOF
      )
echo ${tables[*]} \
    | xargs -n 1 -P 10 -I {} sh -c "$script"

TODO: When I originally wrote this, I remember needing to add a tr " " "\n" or something for portability with Linux. Test this out again.

TODO: script used to be one command (using &&); test again after splitting it into two commands.
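
A variant that sidesteps the first TODO (a sketch): emit one table name per line with printf, as in the parallel-regions example above, instead of relying on echo/tr behavior.

printf '%s\n' "${tables[@]}" \
    | xargs -n 1 -P 10 -I {} sh -c "$script"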

EC2 AMIs

List AMIs

All of this account's AMIs:

aws ec2 describe-images --owners $Account

All of this account's AMIs with a particular Name tag:

aws ec2 describe-images \
    --owners $account \
    --filters "Name=tag:Name,Values=My AMI"

Get Latest Image

aws ec2 describe-images \
    --owners $Account \
    --filters "Name=state,Values=available" \
    | jq -r '.Images
             | sort_by(.CreationDate)
             | last | .ImageId'

Delete AMIs and EBS Snapshots

List AMIs to delete and set to variable AmisToDelete. Then:

jq -c '.Images
       | sort_by(.CreationDate)
       | .[]
       | {name: .Name, snap: .BlockDeviceMappings[]
       | select(.Ebs != null)
       | .Ebs.SnapshotId}' < $AmisToDelete > images.txt

And then… what did I do?

while read -r image; do
    snap=$(echo "$image" | jq -r '.snap')
    # deregister the image (aws ec2 deregister-image)
    # delete the snapshot: aws ec2 delete-snapshot --snapshot-id "$snap"
done < images.txt

List Accounts That Can Access AMI

aws ec2 describe-image-attribute \
    --image-id $ImageID \
    --attribute launchPermission

EC2 AZs

All AZs in all regions:

for region in $AllRegions; do
    azs=$(aws --region $region \
              ec2 describe-availability-zones \
              | jq -c '[.AvailabilityZones[].ZoneName]')
    printf "$region: $azs\n"
done

The Actual AZ

us-east-1b does not mean the same thing in every AWS account. If everyone creates infra in us-east-1b, that infra will actually be in several different AZs, depending on which AZ us-east-1b maps to for each account. To determine the real AZ, you need to look at the Zone ID.

Print the Zone Name and Zone ID for each AZ in a region:

aws ec2 describe-availability-zones \
    | jq -c '.AvailabilityZones[]
             | {"Name": .ZoneName, "Id": .ZoneId}'

EC2 KeyPairs

Create

KeyPair=$(aws ec2 create-key-pair --key-name $KeyName)

Copy private key to ~/.ssh

KeyName=$(echo $KeyPair | jq -r '.KeyName')
KeyFile=~/.ssh/$dir/$KeyName
echo $KeyPair | jq -r '.KeyMaterial' > $KeyFile
chmod 0600 $KeyFile
echo $KeyFile

Describe

aws ec2 describe-key-pairs | jq '.KeyPairs[]'

Delete

aws ec2 delete-key-pair --key-name $KeyName

EC2 Instances

Get Running Instances

aws ec2 describe-instances \
    --query "Reservations[].Instances[?State.name=="running"]"

Get All Instance Tags

aws ec2 describe-instances \
    | jq -c '.Reservations[] | .Instances[] | .Tags[]'

Get Instances with Tag Key

With just a specific Tag key, disregarding value:

aws ec2 describe-instances \
    --filter "Name=tag-key,Values=k8s.io/role/node" \
    | jq '.Reservations[].Instances[]'

Get Instances with Tag Key/Value

With a specific tag key and value:

aws ec2 describe-instances \
    --filter "Name=tag:$TagName,Values=$TagValue" \
    | jq -r '.Reservations[].Instances[].Tags[]
             | select(.Key == "Name")
             | .Value'


EC2 Transit Gateway

List Routes in a TG Route Table

aws --profile shared-services --region us-east-1 \
    ec2 search-transit-gateway-routes \
    --transit-gateway-route-table-id $TGWRouteTableId \
    --filters "Name=type,Values=propagated" \
    | jq -r '.Routes[].DestinationCidrBlock'

EC2 VPCs

VPCs

aws ec2 describe-vpcs | jq '.Vpcs[]'

The names of VPCs that have a Name tag:

aws ec2 describe-vpcs \
    --filters Name=tag-key,Values=Name \
    | jq -r '.Vpcs[].Tags[]
             | select(.Key == "Name")
             | .Value'

Subnets

aws ec2 describe-subnets  | jq '.Subnets'

SecurityGroups

aws ec2 describe-security-groups \
    | jq '.SecurityGroups'

Private ALB IPs

aws ec2 describe-network-interfaces \
    | jq -r '.NetworkInterfaces[]
          | select(.Attachment.InstanceOwnerId == "amazon-elb")
          | .PrivateIpAddress'

Managed Prefix Lists

aws ec2 describe-managed-prefix-lists \
    | jq '.PrefixLists[]'


aws ec2 get-managed-prefix-list-entries \
    --prefix-list-id $PrefixListID \
    | jq '.Entries[]'

EC2 Volumes

List Volumes

aws ec2 describe-volumes \
    --filters Name=tag-key,Values=$Tag \
    | jq -r '.Volumes[].VolumeId'

List Unattached Volumes

aws ec2 describe-volumes \
    --filters Name=status,Values=available \
    | jq -r '.Volumes[].VolumeId'

Delete Volumes

for volume in $Volumes; do
    aws ec2 delete-volume --volume-id $volume
done

ECR

Log in

TODO: $account doesn't work (newline)

ecr=$account.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
pw=$(aws ecr get-login-password)
docker login \
       --username AWS \
       --password "$pw" \
       "https://$ecr"

Create Repository

aws ecr create-repository \
    --repository-name $repoName \
    --tags "Key=Foo,Value=True"

Add Image

Tag image for ECR:

ecr=$account.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
docker tag \
       $image:$tag \
       $ecr/$name:$tag

Push it:

ecr=$account.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
docker push \
       $ecr/$name:$tag

describe-registry

aws ecr describe-registry

describe-repositories

aws ecr describe-repositories \
    --registry-id "<id>"

list-tags-for-resource

The ARN is like arn:aws:ecr:us-east-1:<account-num>:repository/<repo-name>.

aws ecr list-tags-for-resource \
    --resource-arn "$arn"

ECS

Fix a CloudFormation deployment where an ECS service fails to stabilize. Taken from this AWS blog post. Just updates the number of ECS service tasks to 0.

aws ecs update-service \
    --cluster $cluster \
    --service $service \
    --desired-count 0

list-clusters

aws ecs list-clusters \
    | jq '.clusterArns[]'

describe-clusters

aws ecs describe-clusters \
    --clusters $cluster \
    | jq '.clusters[]'

describe-services

aws ecs describe-services \
    --cluster $cluster \
    --services $service \
    | jq '.services[]'

describe-task-definition

aws ecs describe-task-definition \
    --task-definition $task \
    | jq '.taskDefinition.containerDefinitions[]'

describe-task-sets

aws ecs describe-services \
    --cluster $cluster \
    --services $service \
    | jq -r '.services[].taskSets[].id'
aws ecs describe-task-sets \
    --cluster $cluster \
    --service $service \
    --task-sets $task_set \
    | jq '.taskSets[]'

EKS

List Clusters

aws eks list-clusters | jq '.clusters[]'

Get Kube Config

KUBECONFIG=~/.kube/ironnet_clusters/$CLUSTER_NAME
aws eks update-kubeconfig \
    --name $CLUSTER_NAME \
    --kubeconfig $KUBECONFIG

Elasticache

List CacheClusters

aws elasticache describe-cache-clusters \
    --show-cache-node-info \
    | jq '.CacheClusters'

List ReplicationGroups

aws elasticache describe-replication-groups \
    | jq '.ReplicationGroups'

ElasticBeanstalk

Config Options for Namespace

aws elasticbeanstalk describe-configuration-options \
    | jq '.Options[]
          | if .Namespace == "aws:autoscaling:launchconfiguration"
            then .
            else null
            end'

ElasticLoadBalancingV2

Describe ALBs

aws elbv2 describe-load-balancers \
    | jq '.LoadBalancers[] | select(.Type == "application")'

Get DNS Name

aws elbv2 describe-load-balancers \
    | jq -r --arg DEPLOYMENT_NAME "$DEPLOYMENT_NAME" \
         '.LoadBalancers[]
          | select(.Type == "application")
          | select(.LoadBalancerName
          | startswith($DEPLOYMENT_NAME))
          | .DNSName'

GlobalAccelerator

GlobalAccelerator is only in us-west-2, so set the region explicitly.

List Accelerators

aws --region us-west-2 \
    globalaccelerator list-accelerators \
    | jq '.Accelerators[]'

List Listeners

aws --region us-west-2 \
    globalaccelerator list-listeners \
    --accelerator-arn $AcceleratorArn

IPs

aws --region us-west-2 \
    globalaccelerator list-accelerators \
    | jq -r '.Accelerators[].IpSets[].IpAddresses[]'

IAM

Create Policy

aws iam create-policy \
    --policy-name $PolicyName \
    --policy-document '{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [
      "secretsmanager:GetSecretValue",
      "secretsmanager:DescribeSecret"
    ],
    "Resource": [
      "arn:*:secretsmanager:*:*:secret:MySecret"
    ]
  }]
}'

Describe Policy

aws iam get-policy --policy-arn $arn

Assume Role

aws sts assume-role \
    --role-session-name foo \
    --role-arn $RoleArn

List Instance Profiles

aws iam list-instance-profiles \
    | jq '.InstanceProfiles[].InstanceProfileName'

Delete Instance Profile

aws iam delete-instance-profile \
    --instance-profile-name $Name

IMDS
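
A sketch of querying instance metadata with IMDSv2 (run from an EC2 instance; the token TTL value is arbitrary):

TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
             -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
     http://169.254.169.254/latest/meta-data/instance-id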

Kinesis

Delete Stream

aws kinesis delete-stream --stream-name $stream

Kinesis Firehose

Describe Stream

aws firehose describe-delivery-stream \
    --delivery-stream-name $StreamName \
    | jq '.DeliveryStreamDescription'

Update Stream S3 Destination Bucket

aws firehose update-destination \
    --delivery-stream-name $StreamName \
    --current-delivery-stream-version-id 1 \
    --destination-id destinationId-000000000001 \
    --extended-s3-destination-update \
    BucketARN=arn:aws:s3:::$BucketName

THEN:

  • Update the Stream's IAM policy to allow it to write to the new location
  • Update the S3 Bucket Policy to allow the Stream's IAM Role to write to it (see the sketch below)
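
A sketch of the bucket-policy step using put-bucket-policy; $FirehoseRoleArn is a placeholder for the delivery stream's role ARN, and the action list is an assumption based on the standard Firehose S3-destination permissions, so verify it against the Firehose documentation:

aws s3api put-bucket-policy \
    --bucket "$BucketName" \
    --policy '{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": "'"$FirehoseRoleArn"'"},
    "Action": [
      "s3:AbortMultipartUpload",
      "s3:GetBucketLocation",
      "s3:ListBucket",
      "s3:ListBucketMultipartUploads",
      "s3:PutObject"
    ],
    "Resource": [
      "arn:aws:s3:::'"$BucketName"'",
      "arn:aws:s3:::'"$BucketName"'/*"
    ]
  }]
}'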

Update Stream S3 Destination Prefix

aws firehose update-destination \
    --delivery-stream-name $StreamName \
    --current-delivery-stream-version-id 1 \
    --destination-id destinationId-000000000001 \
    --extended-s3-destination-update Prefix=$Prefix

Also may need to update the IAM Policy and the S3 Bucket policy, as above.

Lambda

Invoke function

aws lambda invoke \
    --cli-binary-format raw-in-base64-out \
    --function-name cf-Hello-World \
    /tmp/cats.json

Invoke from a bastion host:

/usr/local/bin/aws lambda invoke \
                   --cli-binary-format raw-in-base64-out \
                   --function-name cf-Hello-World \
                   /tmp/out.json
cat /tmp/out.json | jq -r '.body' | jq

With payload file:

aws lambda invoke \
    --cli-binary-format raw-in-base64-out \
    --function-name hello-world \
    --payload file://tests/resources/event_alb.json \
    /tmp/out.json

Create Function with Zip File

aws lambda create-function \
    --function-name artifacttool \
    --runtime python2.7 \
    --role $RoleArn \
    --handler lambda_function.lambda_handler \
    --zip-file fileb://$ZipFileName

Update Function code with Zip File

aws lambda update-function-code \
    --function-name artifacttool \
    --zip-file fileb://$ZipFileName

Container Image: Run Interactively

docker run -it --rm \
       --entrypoint bash \
       public.ecr.aws/lambda/python:3.8

Organizations

aws organizations list-roots \
    | jq -r '.Roots[].Id'

Polly

aws polly synthesize-speech \
    --engine standard \
    --output-format mp3 \
    --voice-id Amy \
    --text "All these cats wear hats." \
    /tmp/speech_standard.mp3 \
    && afplay /tmp/speech_standard.mp3

Resource Access Manager (RAM)

This needs to be run against the AWS account that actually owns the shared resources. E.g. in a LandingZone environment, probably the shared-services account.

aws ram list-resources --resource-owner SELF

RDS

describe-db-instances

aws rds describe-db-instances \
    --db-instance-identifier "$db_name" \
    | jq '.DBInstances[]'

modify-db-instance

Run this command to apply pending modifications immediately:

aws rds modify-db-instance \
    --db-instance-identifier "$db_name" \
    --apply-immediately

describe-db-engine-versions

aws rds describe-db-engine-versions \
    --engine "postgres" \
    --filters 'Name=status,Values=deprecated'

pending maintenance actions for a DB

aws rds describe-pending-maintenance-actions \
    | jq --arg arn "$DB_ARN" \
         '.PendingMaintenanceActions[]
          | select(.ResourceIdentifier == $arn)'

create-db-snapshot

aws rds create-db-snapshot \
    --db-snapshot-identifier "$snap_id" \
    --db-instance-identifier work-number-dev

Check whether a snapshot exists yet:

aws rds describe-db-snapshots \
    --db-snapshot-identifier "$snap_id" \
    --db-instance-identifier "$db_name" \
    | jq '.DBSnapshots[].OriginalSnapshotCreateTime'

create-parameter-group

aws rds create-db-parameter-group \
    --db-parameter-group-name "cclark-test" \
    --db-parameter-group-family "postgres11" \
    --description "cclark test group"
aws rds modify-db-parameter-group \
    --db-parameter-group-name "cclark-test" \
    --parameters "ParameterName='max_slot_wal_keep_size',ParameterValue=4000,ApplyMethod=immediate"

describe-parameter-group

aws rds describe-db-parameters \
    --db-parameter-group-name "cclark-test" \
    | jq '.Parameters[]
          | select(.ParameterName | contains("wal_"))'

Route53

Get HostedZoneId from domain name

aws route53 list-hosted-zones \
    | jq -r --arg name $hostedZoneName \
         '.HostedZones[]
          | select(.Name == $name + ".")
          | select(.Config.PrivateZone == false)
          | .Id'

List Records for a HostedZone

aws route53 list-resource-record-sets \
    --hosted-zone-id $hostedZoneId \
    | jq '.ResourceRecordSets[]'

SageMaker

list-models

aws sagemaker list-models | jq '.Models[]'

SecretsManager

List Secrets

aws secretsmanager list-secrets \
    | jq -r '.SecretList[].Name'

Filter:

aws secretsmanager list-secrets \
    --filters 'Key=name,Values=foo/bar' \
    | jq -r '.SecretList[].Name'

Get Secret

aws secretsmanager get-secret-value \
    --secret-id $SecretName \
    | jq -r '.SecretString'

Create Secret

aws secretsmanager create-secret \
    --name $SecretName \
    --secret-string "$SecretValue"

ServiceCatalog

Get Product IDs

aws --profile main servicecatalog search-products-as-admin

Launch Product

Note that the user, even an Admin user, must be added to the Portfolio's users/groups. You can do this with aws servicecatalog associate-principal-with-portfolio, as sketched below.
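
A minimal sketch of both steps; the portfolio ID, principal ARN, product ID, and provisioning artifact ID are placeholders:

aws servicecatalog associate-principal-with-portfolio \
    --portfolio-id $PortfolioId \
    --principal-arn $PrincipalArn \
    --principal-type IAM

aws servicecatalog provision-product \
    --product-id $ProductId \
    --provisioning-artifact-id $ArtifactId \
    --provisioned-product-name my-product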

Service Quotas

Get all service quotas

aws service-quotas list-service-quotas \
    --service-code rds \
    | jq -c '.Quotas[] | {Value, QuotaName}'

Get one quota

aws service-quotas list-service-quotas \
    --service-code rds \
    | jq '.Quotas[]
          | select(.QuotaName == "Parameter groups")'

Get default quota

This is different from the above: get-aws-default-service-quota returns the AWS default value for the quota, while list-service-quotas returns the currently applied value, which may have been raised by a quota increase.

aws service-quotas get-aws-default-service-quota \
    --service-code firehose \
    --quota-code "L-14BB0BE7" \
    | jq '.Quota'

S3

Canonical ID

aws s3api get-bucket-acl --bucket $BucketName

list-buckets

List buckets that start with the name $prefix.

aws s3api list-buckets \
    | jq -r --arg prefix $prefix \
         '.Buckets[]
          | select(.Name | startswith($prefix)).Name'

Create Bucket

aws s3api create-bucket \
    --bucket $BUCKET_NAME \
    --create-bucket-configuration LocationConstraint=$AWS_DEFAULT_REGION

List Objects

All objects

aws s3api list-objects-v2 \
    --bucket $BucketName \
    | jq '.Contents[]'

10 most recent objects. AWS has no way of doing this filtering server-side, so you need to request all objects and then sort them.

aws s3api list-objects-v2 \
    --bucket $BucketName \
    | jq -r '.Contents
             | sort_by(.LastModified)
             | reverse | .[:10][] | .Key'

Get object

aws s3api get-object \
    --bucket $BucketName \
    --key $Key ./foo

Get multiple objects

mkdir -p ~/Downloads/foo
aws s3 cp \
    --recursive \
    s3://$BucketName/ \
    ~/Downloads/foo/

Get Bucket Policy

aws s3api get-bucket-policy --bucket $BucketName \
    | jq -r '.Policy' | jq

Clear and delete buckets

To clear a bucket, I have functions for this in dotfiles/.functions.fish.
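
For reference, a minimal sketch of the unversioned case; a versioned bucket also needs its object versions and delete markers removed before delete-bucket will succeed:

aws s3 rm --recursive "s3://$BucketName"
aws s3api delete-bucket --bucket "$BucketName"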

SNS

SNS topics

aws sns list-topics \
    | jq '.Topics[] | .TopicArn'

SQS

List Queues

aws sqs list-queues \
    | jq '.QueueUrls[]'

Read Message

aws sqs receive-message \
    --queue-url "$q_url" \
    --attribute-names All \
    --message-attribute-names All \
    --max-number-of-messages 2

Send Message

aws sqs send-message \
    --queue-url "$q_url" \
    --message-body '{"Message": "{\"foo\":\"bar\"}"}'

SSM

aws ssm get-parameter \
    --with-decryption \
    --name $name

Identity Store (SSO)

AWS SSO was renamed to AWS IAM Identity Center; its underlying user/group API is the Identity Store.

Register client

This command does not require AWS credentials. You only need a region.

aws --region us-east-1 sso-oidc register-client \
    --client-name foo \
    --client-type public

Returns an object like:

{
    "clientId": "abc1234",
    "clientSecret": "a JSON Web Token",
    "clientIdIssuedAt": 1659099242,
    "clientSecretExpiresAt": 1666875242
}

Token

aws sso login
token_file=~/.aws/sso/cache/$token_name.json
cat $token_file | jq -r '.accessToken'

List my accounts

aws --region us-east-1 sso list-accounts \
    --access-token $token
aws --region us-east-1 sso list-account-roles \
    --access-token $token \
    --account-id $account

Get credentials

The jq can be improved. See: https://github.com/aws/aws-cli/issues/5261

creds=$(aws --region us-east-1 sso get-role-credentials \
            --role-name $role \
            --account-id $account \
            --access-token $token)

access_key=$(echo $creds | jq -r '.roleCredentials.accessKeyId')
secret_key=$(echo $creds | jq -r '.roleCredentials.secretAccessKey')
session_token=$(echo $creds | jq -r '.roleCredentials.sessionToken')

echo "AWS_ACCESS_KEY_ID=$access_key"
echo "AWS_SECRET_ACCESS_KEY=$secret_key"
echo "AWS_SESSION_TOKEN=$session_token"
echo "AWS_REGION=us-east-1"
echo "AWS_DEFAULT_REGION=us-east-1"