s3deploy

v2.11.0, published May 6, 2023, MIT license


A simple tool to deploy static websites to Amazon S3 and CloudFront with Gzip and custom headers support (e.g. "Cache-Control"). It uses ETag hashes to check if a file has changed, which makes it optimal in combination with static site generators like Hugo.

Install

Pre-built binaries can be found on the GitHub releases page.

s3deploy is a Go application, so you can also install the latest version with:

 go install github.com/bep/s3deploy/v2@latest

To install on macOS using Homebrew:

brew install bep/tap/s3deploy

Note: the brew tap above currently stops at v2.8.1; see this issue for more info.

s3deploy works well with continuous integration tools such as CircleCI. See this for a tutorial that uses s3deploy with CircleCI.
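As an illustration, a hypothetical CircleCI job might build the site and then deploy it with s3deploy. Everything below (image, bucket, region, the Hugo build step) is an assumption, not taken from the tutorial linked above:

```yml
# Hypothetical CircleCI config sketch; installing Hugo itself is omitted.
version: 2.1
jobs:
  deploy:
    docker:
      - image: cimg/go:1.20
    steps:
      - checkout
      - run: hugo --minify
      - run: go install github.com/bep/s3deploy/v2@latest
      - run: s3deploy -source public/ -bucket example.com -region eu-west-1
```

Credentials would come from AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY set in the CI project's environment rather than from flags.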

Configuration

Flags

The list of flags from running s3deploy -h:

-V print version and exit
-acl string
    provide an ACL for uploaded objects. to make objects public, set to 'public-read'. all possible values are listed here: https://docs.aws.amazon.com/AmazonS3/latest/userguide/acl-overview.html#canned-acl (default "private")
-bucket string
    destination bucket name on AWS
-config string
    optional config file (default ".s3deploy.yml")
-distribution-id value
    optional CDN distribution ID for cache invalidation, repeat flag for multiple distributions
-endpoint-url url
	optional AWS endpoint URL override
-force
    upload even if the etags match
-h	help
-ignore string
    regexp pattern for ignoring files
-key string
    access key ID for AWS
-max-delete int
    maximum number of files to delete per deploy (default 256)
-path string
    optional bucket sub path
-public-access
    DEPRECATED: please set -acl='public-read'
-quiet
    enable silent mode
-region string
    name of AWS region
-secret string
    secret access key for AWS
-source string
    path of files to upload (default ".")
-try
    trial run, no remote updates
-v	enable verbose logging
-workers int
    number of workers to upload files (default -1)

Each flag can be set in one of the following ways (in order of precedence):

  1. As a flag, e.g. s3deploy -path public/
  2. As an OS environment variable prefixed with S3DEPLOY_, e.g. S3DEPLOY_PATH="public/".
  3. As a key/value in .s3deploy.yml, e.g. path: "public/"
  4. For key and secret resolution, the OS environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY (and AWS_SESSION_TOKEN) will also be checked. This way you don't need to do anything special to make it work with AWS Vault and similar tools.
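A small sketch of the precedence above; the bucket and path values are made up, and the s3deploy invocation itself is left commented out:

```shell
# Environment form: S3DEPLOY_ plus the upper-cased flag name.
export S3DEPLOY_BUCKET="example.com"
export S3DEPLOY_PATH="public/"

# A flag given on the command line wins over the environment variable:
#   s3deploy -path other/    # would deploy other/, not public/

echo "$S3DEPLOY_PATH"
```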

Environment-variable expressions of the form ${VAR} in .s3deploy.yml are expanded before the file is parsed:

path: "${MYVARS_PATH}"
max-delete: "${MYVARS_MAX_DELETE@U}"

Note the special @U (Unquote) syntax for the int field.

Routes

The .s3deploy.yml configuration file can also contain one or more routes. A route matches files against a regexp. Each route can apply:

header : Header values, the most notable is probably Cache-Control. Note that the list of system-defined metadata that S3 currently supports and returns as HTTP headers when hosting a static site is very short. If you have more advanced requirements (e.g. security headers), see this comment.

gzip : Set to true to gzip the content when stored in S3. This will also set the correct Content-Encoding when fetching the object from S3.

Example:

routes:
    - route: "^.+\\.(js|css|svg|ttf)$"
      #  cache static assets for 1 year.
      headers:
         Cache-Control: "max-age=31536000, no-transform, public"
      gzip: true
    - route: "^.+\\.(png|jpg)$"
      headers:
         Cache-Control: "max-age=31536000, no-transform, public"
      gzip: false
    - route: "^.+\\.(html|xml|json)$"
      gzip: true

Global AWS Configuration

See https://docs.aws.amazon.com/sdk-for-go/api/aws/session/#hdr-Sessions_from_Shared_Config

The AWS SDK will fall back to credentials from ~/.aws/credentials.

If you set the AWS_SDK_LOAD_CONFIG environment variable, it will also load shared config from ~/.aws/config, where you can set e.g. the global region to use when one is not provided.
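A minimal ~/.aws/config sketch (the region is just an example); with AWS_SDK_LOAD_CONFIG set, the SDK falls back to this region when -region is not given:

```ini
[default]
region = eu-west-1
```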

Example IAM Policy

{
   "Version": "2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "s3:ListBucket",
            "s3:GetBucketLocation"
         ],
         "Resource":"arn:aws:s3:::<bucketname>"
      },
      {
         "Effect":"Allow",
         "Action":[
            "s3:PutObject",
            "s3:PutObjectAcl",
            "s3:DeleteObject"
         ],
         "Resource":"arn:aws:s3:::<bucketname>/*"
      }
   ]
}

Replace <bucketname> with your own bucket name.

CloudFront CDN Cache Invalidation

If you have configured CloudFront CDN in front of your S3 bucket, you can supply the distribution-id as a flag. s3deploy will then invalidate the CDN cache for the updated files after the deployment to S3. Note that the AWS user must have the needed access rights.

Note that CloudFront allows 1,000 paths per month at no charge, so s3deploy tries to be smart about the invalidation strategy; we try to reduce the number of paths to 8. If that isn't possible, we will fall back to a full invalidation, e.g. "/*".

Example IAM Policy With CloudFront Config
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation"
            ],
            "Resource": "arn:aws:s3:::<bucketname>"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:DeleteObject",
                "s3:PutObjectAcl"
            ],
            "Resource": "arn:aws:s3:::<bucketname>/*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "cloudfront:GetDistribution",
                "cloudfront:CreateInvalidation"
            ],
            "Resource": "*"
        }
    ]
}

Background Information

If you're looking at s3deploy, you've probably already seen the aws s3 sync command. Its sync strategy is not optimised for static sites: it compares the timestamp and size of your files to decide whether to upload them.

Because static-site generators can recreate every file (even when its content is identical), the timestamps are updated, so aws s3 sync needlessly uploads every single file. s3deploy, on the other hand, compares ETag hashes to detect actual content changes, and uses those instead.
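To see why this works: for a plain (non-multipart, non-KMS-encrypted) upload, the ETag S3 reports is simply the hex MD5 of the object bytes, so a rebuilt but byte-identical file keeps the same ETag no matter what its timestamp says. A minimal sketch (the filenames are made up):

```shell
# Two "builds" of the same page, written at different times:
printf '<html>same content</html>' > a.html
printf '<html>same content</html>' > b.html

# Identical bytes produce identical MD5 hashes, and hence identical
# S3 ETags, so a deploy tool comparing ETags skips the upload.
md5sum a.html b.html
```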

Alternatives

  • go3up by Alexandru Ungur
  • s3up by Nathan Youngman (the starting-point of this project)

Stargazers over time

