CI/CD Workflow for AWS ECS via Terragrunt and GitHub Actions
Adopt Terraform to auto-provision infrastructure, and GitHub Flow to continuously test and deploy code
This project leverages Terragrunt, Terraform, and GitHub Actions to deploy a basic web app (dockerized JS frontend and dockerized Python API) to AWS ECS.
Initial Setup
Terragrunt is a thin wrapper for Terraform that provides extra tools for working with multiple Terraform modules, remote state, and locking. It also provides a powerful and flexible way to hierarchically provide configuration to Terraform, without duplicating code across environments, AWS regions, and AWS accounts – keeping your Terraform config DRY.
The following hierarchy is proposed (aligned with directory structure):
- `terragrunt.hcl` with configuration for remote_state and AWS provider
- `common.terragrunt.hcl` defining common, project-specific variables
- `account.terragrunt.hcl` for each account
- `region.terragrunt.hcl` for each region within an account
- `env.terragrunt.hcl` for each environment within a region
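To sketch how such a root configuration might look, here is an illustrative `terragrunt.hcl` (bucket name, lock table, and region are assumptions, not the companion repository's actual settings):

```hcl
# Root terragrunt.hcl — illustrative sketch, not the companion repo's exact config.
# Remote state is keyed by the relative path of the environment subfolder,
# so each account/region/environment combination gets its own state file.
remote_state {
  backend = "s3"
  config = {
    bucket         = "my-app-terraform-state"   # assumed bucket name
    key            = "${path_relative_to_include()}/terraform.tfstate"
    region         = "eu-west-1"                # assumed state region
    encrypt        = true
    dynamodb_table = "my-app-terraform-locks"   # assumed lock table
  }
}

# Generate an AWS provider block for whichever account/region is being deployed
generate "provider" {
  path      = "provider.tf"
  if_exists = "overwrite"
  contents  = <<EOF
provider "aws" {
  region = var.aws_region
}
EOF
}
```

Because the state key is derived from the folder a `terragrunt` command runs in, adding a new environment folder automatically yields an isolated state file.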
This allows flexible configuration: just add additional folders and adjust the configuration files, for instance configuring…
- Accounts `main` and `secondary`
- Regions `eu-west-1` and `us-east-1` in `main` vs. `us-east-1` in `secondary`
- Environments `prod` in `main` regions vs. `stage` and `dev` in `secondary` regions
Workflow via GitHub Flow
This project leverages GitHub Flow to gradually merge changes from experimental branches (deployed to experimental environments) towards more mature branches and environments.
The companion repository contains functionality to deploy code to AWS ECS simply by following GitHub Flow principles. All integration and deployment steps are managed by GitHub Actions workflows, including unit testing, building and pushing Docker images, and releasing new images to the correct ECS cluster via Terraform and Terragrunt. Create a branch, push, open a pull request, and, after the checks pass, merge all changes - these are the only steps needed to deploy new features with this approach.
Assuming a running staging and production environment, here’s how to deploy changes made for a recent feature “foo” to staging and production environments:
Step 1 → Deployment to staging environment `stage` via branch `dev`
- Create a new branch `feature/foo` and check it out: `git checkout -b feature/foo`
- Push to remote and set up tracking of remote branch `feature/foo` from `origin`: `git push --set-upstream origin feature/foo`
- Open a pull request from branch `feature/foo` to branch `dev` to plan the deployment to the `stage` environment
- Wait for checks to complete: workflow `terragrunt` will post the `terraform plan` as a comment on the pull request
- Additional checks may include unit tests: see `pytest-api.yml`
- After verifying the `terraform plan`, merge the pull request into branch `dev`
- Workflow `terragrunt` will run again and `apply` the deployment for the `stage` environment
- Code (and infrastructure) changes from branch `feature/foo` are now released to the `stage` environment
Step 2 → Deployment to production environment `prod` via branch `main`
- After verifying the deployment to the `stage` environment (e.g., via e2e testing), open a pull request from branch `dev` to branch `main` to plan the deployment to the `prod` environment
- Wait for checks to complete: workflow `terragrunt` will post the `terraform plan` as a comment on the pull request
- After verifying the `terraform plan`, merge the pull request into branch `main`
- Workflow `terragrunt` will run again and `apply` the deployment for the `prod` environment
- Code (and infrastructure) changes to branch `dev` originating from branch `feature/foo` are now released to the `prod` environment
Configure Infrastructure and Deployment Targets
The hierarchical configuration via Terragrunt is enabled by a main configuration file into which all other, more granular configuration files are imported. In `terragrunt.hcl`, both remote state and the AWS provider are defined according to values in the more specific configuration files.
Both remote state and provider are dynamically defined for each deployment target (e.g., `prod` vs. `stage` environment) and AWS account ID (e.g., `main` vs. `secondary` account). This means `prod` and `stage` environments (which may even reside in two separate AWS accounts) adopt separate remote state backend configurations, depending on which environment subfolder `terragrunt` commands are executed from. Review this file and the nested Terragrunt configuration files in the companion repository for the detailed implementation.
Common variables, such as the app name, base domain name and hosted zone name, which apply to the project are configured in a separate file:
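As an illustrative sketch (variable names and values are assumptions, not the repository's actual ones), such a `common.terragrunt.hcl` could expose project-wide inputs:

```hcl
# common.terragrunt.hcl — illustrative project-wide variables (assumed names)
inputs = {
  app_name         = "my-app"
  base_domain_name = "example.com"
  hosted_zone_name = "example.com."
}
```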
One or any number of AWS accounts may be configured, each related to any number of regions and environments to be deployed via this AWS account:
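For illustration (account name and ID are placeholders), an `account.terragrunt.hcl` might contain:

```hcl
# account.terragrunt.hcl — illustrative per-account configuration (assumed values)
inputs = {
  account_name   = "main"
  aws_account_id = "111111111111"  # placeholder account ID
}
```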
Similarly, the region configuration is provided in the nested level:
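An illustrative `region.terragrunt.hcl` (region value assumed) could be as small as:

```hcl
# region.terragrunt.hcl — illustrative per-region configuration (assumed value)
inputs = {
  aws_region = "eu-west-1"
}
```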
Environment configuration regarding both infrastructure and containers is provided at the most nested level. To illustrate a common use case: the `stage` environment may override variables previously defined higher in the hierarchy, in `tf/common.terragrunt.hcl`, for instance adding a prefix `stage.*` to `app_domain_name`.
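A minimal sketch of such an environment-level override (domain and variable names assumed) might look like:

```hcl
# env.terragrunt.hcl for the stage environment — illustrative sketch
include {
  path = find_in_parent_folders()
}

inputs = {
  environment     = "stage"
  # Override the common app_domain_name with a stage-specific prefix (assumed variable name)
  app_domain_name = "stage.example.com"
}
```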
Configure Container Environment and Secrets
Environment variables for the respective deployment target (e.g., for the `stage` environment) are provided alongside the Terragrunt configuration in JSON files, following the naming `.service.environment.json`, specifying both keys and values. These files are committed to source control, since they do not contain any sensitive data. Adding a description key-value pair documents intent for developers and helps keep variable assignment consistent.
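An illustrative environment file (variable names, values, and the exact schema are assumptions; the repository's schema may differ) could look like:

```json
[
  {
    "name": "LOG_LEVEL",
    "value": "info",
    "description": "Log verbosity for the API service"
  },
  {
    "name": "API_PORT",
    "value": "8080",
    "description": "Port the API container listens on"
  }
]
```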
Similarly, JSON files following the naming `.service.secrets.json` provide keys (not values) for all container secrets, which are injected from AWS Systems Manager Parameter Store into the service's ECS tasks (containers) as environment variables. Adding a description key-value pair documents intent for developers and helps keep variable assignment consistent.
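A corresponding secrets file, keys only, might look like this (names and schema assumed for illustration):

```json
[
  {
    "name": "DATABASE_PASSWORD",
    "description": "Injected from SSM Parameter Store at task start"
  }
]
```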
No secrets are present in code or source control: secrets such as database passwords or secret keys are generated as Terraform resources and stored in Systems Manager Parameter Store. By design of Terraform state, they are additionally stored in the remote state backend. While the S3 backend supports encryption at rest, remote state should be treated as sensitive data.
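As an illustrative sketch of this pattern (resource names and the parameter path are assumptions), a secret can be generated and stored like this:

```hcl
# Illustrative sketch: generate a password and store it in SSM Parameter Store.
# The secret value never appears in code — only in SSM and in remote state.
resource "random_password" "db" {
  length  = 32
  special = true
}

resource "aws_ssm_parameter" "db_password" {
  name  = "/my-app/stage/DATABASE_PASSWORD"  # assumed parameter path
  type  = "SecureString"
  value = random_password.db.result
}
```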
Integration via GitHub Actions – Pytest
Easily add continuous integration workflows for unit, integration, and e2e tests by adding a GitHub Action workflow. The companion repository includes an example for testing the API service via Pytest.
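A minimal workflow along these lines (file paths and action versions are assumptions, not the repository's exact workflow) could be:

```yaml
# .github/workflows/pytest-api.yml — illustrative sketch
name: pytest-api
on:
  pull_request:
    paths:
      - "api/**"   # assumed location of the API service
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r api/requirements.txt
      - run: pytest api/tests
```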
Deployment via GitHub Actions – Terragrunt
To support branch-aware continuous deployment of code to the respective environment, a GitHub Actions workflow is provided.
The workflow is based on Terraform's guide for automating Terraform with GitHub Actions and adds support for nested configuration via Terragrunt. With this workflow, pull requests trigger `terraform plan`, and merging these pull requests triggers `terraform apply` to deploy to the correct environment: branch `dev` deploys to environment `stage` and branch `main` deploys to environment `prod` – all following the hierarchical configuration defined in the `*.terragrunt.hcl` files. Review this file in the companion repository for the detailed implementation.
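The branch-to-environment mapping described above can be sketched as a workflow step (illustrative only; directory layout and step names are assumptions):

```yaml
# Illustrative sketch of branch-aware environment selection in the terragrunt workflow
jobs:
  terragrunt:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Select environment from branch
        run: |
          case "${GITHUB_REF_NAME}" in
            main) echo "ENVIRONMENT=prod"  >> "$GITHUB_ENV" ;;
            dev)  echo "ENVIRONMENT=stage" >> "$GITHUB_ENV" ;;
          esac
      - name: Terragrunt plan/apply
        run: |
          cd "tf/main/eu-west-1/${ENVIRONMENT}"   # assumed directory layout
          terragrunt plan   # plan on pull request; apply runs on merge
```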
Conclusion
This article describes a flexible CI/CD workflow for AWS ECS-based projects. Using GitHub Flow, changes to the infrastructure and/or codebase are deployed to the intended deployment targets.
Thanks for reading. I’m curious to hear your thoughts on this topic – don’t hesitate to reach out to me on Twitter or start a discussion in the companion repository on GitHub!