CI/CD with CodeBuild and CodePipeline
At DEPT we use a variety of hosted continuous integration / continuous delivery (CI/CD) platforms to help our clients deliver great products and experiences to their customers.
Recently, we've used a few of AWS's services to create full integration and delivery solutions. In this post, we'll look at how we've used AWS CodeBuild and CodePipeline to create a cost-effective, performant and code-driven end-to-end CI/CD solution.
What are CodeBuild and CodePipeline?
AWS has a number of services they've wrapped up under the label of "Developer Tools"; these include CodeCommit, CodeBuild, CodeDeploy and CodePipeline.
For this post, we're going to focus on CodeBuild and CodePipeline. So what are they?
As defined by AWS:
AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy. With CodeBuild, you don’t need to provision, manage, and scale your own build servers. CodeBuild scales continuously and processes multiple builds concurrently, so your builds are not left waiting in a queue.
AWS CodePipeline is a continuous integration and continuous delivery service for fast and reliable application and infrastructure updates. CodePipeline builds, tests, and deploys your code every time there is a code change, based on the release process models you define.
So CodeBuild jobs are the individual units of work that provide an open and flexible execution environment and CodePipeline is the orchestration layer to connect and execute CodeBuild jobs in a specific order.
Before we dig into a full-blown example of these tools working together, let's talk a little bit about what makes a CodeBuild job tick: buildspecs, images and runtimes, and artifact management and caching.
Configuration with buildspecs
Like other CI/CD-as-a-Service providers, AWS CodeBuild jobs are configured using a buildspec YAML file that lives with the application code.
Here's an example buildspec that executes tests for a react app:
version: 0.2

phases:
  install:
    runtime-versions:
      nodejs: 10
    commands:
      - echo Installing app dependencies...
      - yarn install
  build:
    commands:
      - echo Building static site...
      - yarn build
      - echo Run tests...
      - CI=true yarn test

artifacts:
  files:
    - './build/**/*'
    - './cicd/**/*'

cache:
  paths:
    - './node_modules/**/*'
Buildspecs support many phases (above, we're using install and build) and other options, which you can read more about in the AWS buildspec reference.
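For instance, two phases we aren't using above, pre_build and post_build, are handy for work that should happen before or after the build itself. Here's a minimal sketch (the commands are illustrative, not from our react app):

version: 0.2

phases:
  pre_build:
    commands:
      # Anything that should happen before the build, e.g. linting
      - echo Linting...
      - yarn lint
  build:
    commands:
      - echo Building...
      - yarn build
  post_build:
    commands:
      # Anything that should happen after the build, e.g. reporting
      - echo Build completed on `date`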
Execution with Images and Runtimes
Under the hood, CodeBuild jobs are containerized execution environments. AWS provides managed Docker images (for Linux and Windows) to execute CodeBuild jobs, but you are free to use any Docker image you'd like, hosted on Docker Hub or in any private registry of your choosing.
We've stuck with using AWS's official CodeBuild images as they are actively maintained, known to work with CodeBuild and are supported by AWS.
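For reference, the image is set in the Environment block of the CodeBuild project. A minimal CloudFormation sketch (the compute type and image tag are examples, not prescriptions):

Environment:
  Type: LINUX_CONTAINER
  ComputeType: BUILD_GENERAL1_SMALL
  Image: aws/codebuild/standard:2.0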
Conveniently, the official AWS CodeBuild images also come prepackaged with a variety of runtime environments for android, docker, dotnet, golang, nodejs, java, php, python and ruby.
You simply specify the runtime(s) you'd like in the buildspec install phase and, well, you're off and running.
From our react app example buildspec, we specify the nodejs runtime:
phases:
  install:
    runtime-versions:
      nodejs: 10
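If a job needs more than one runtime, you can list several. A quick sketch, assuming you're on a standard image that ships both runtimes:

phases:
  install:
    runtime-versions:
      nodejs: 10
      docker: 18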
Artifacts and Caching
CodeBuild jobs allow for caching and build artifact management in S3.
From our react app example buildspec:
artifacts:
  files:
    - './build/**/*'
    - './cicd/**/*'
cache:
  paths:
    - './node_modules/**/*'
Here we're saying we'd like the files in both the build and cicd directories packaged up and stored in S3 as a zip file (the zip packaging itself is set on the CodeBuild project). We'd also like the node_modules directory cached in S3.
As we'll see below, when many CodeBuild jobs are strung together in a CodePipeline, an output artifact from one job can be used as input for another job. Compile once, reuse later.
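For completeness, the cache location itself is configured on the CodeBuild project rather than in the buildspec. A minimal CloudFormation sketch (the bucket name is a placeholder):

Cache:
  Type: S3
  Location: my-artifact-bucket/codebuild-cache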
Pull Requests and Pipelines
While CodeBuild and CodePipeline provide a ton of flexibility for creating robust delivery patterns and solutions, we're going to focus on two of the most common uses of any CI/CD platform: handling pull requests and providing continuous delivery of a specific branch like master.
Pull Requests
We're going to use a single CodeBuild job to test all Pull Requests.
We'll use CloudFormation to create the CodeBuild job in AWS and use a GitHub webhook trigger and event filter to ensure this job only runs when a pull request is created, updated or reopened. CodeBuild runs builds concurrently (subject to a generous, adjustable account-level concurrency quota), so you're rarely left waiting for results due to job queueing.
When the CodeBuild job is triggered, the PR is cloned and whatever instructions are written in the buildspec file are executed using the runtime environment defined.
See the CloudFormation template for our example pull request CodeBuild job here. (It's too big to embed in this post.)
From the CloudFormation template above, we're using an EVENT filter in a FilterGroup on the webhook trigger to limit which events should trigger this job:
Triggers:
  Webhook: True
  FilterGroups:
    -
      - Type: EVENT
        Pattern: PULL_REQUEST_CREATED, PULL_REQUEST_UPDATED, PULL_REQUEST_REOPENED
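Filter groups can be narrowed further. For example, if we only wanted to build pull requests that target master, we could add a BASE_REF filter to the same group (a sketch, not part of our template):

Triggers:
  Webhook: True
  FilterGroups:
    -
      - Type: EVENT
        Pattern: PULL_REQUEST_CREATED, PULL_REQUEST_UPDATED, PULL_REQUEST_REOPENED
      - Type: BASE_REF
        Pattern: ^refs/heads/master$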
What about build success/failure notifications and logs?
Great question! From the template above, we're creating a CloudWatch Events rule to capture the SUCCEEDED, FAILED and STOPPED build statuses and post the results to an SNS topic (which can be customized to send email, post to Slack, etc.). The notification includes the project name, build status and a link to the job's output logs in CloudWatch Logs:
### Notifications
CodebuildStateFailureEventRule:
  Type: "AWS::Events::Rule"
  Properties:
    Description: "Rule for sending failure notifications to SNS topic"
    EventPattern:
      source:
        - aws.codebuild
      detail-type:
        - CodeBuild Build State Change
      detail:
        project-name:
          - !Ref CodeBuildProject
        build-status:
          - SUCCEEDED
          - FAILED
          - STOPPED
    State: "ENABLED"
    Targets:
      - Arn: !Sub "arn:aws:sns:${AWS::Region}:${AWS::AccountId}:CodebuildNotifications"
        Id: "CodeBuildNotifications"
        InputTransformer:
          InputTemplate: '"Pull Request has <build-status> for <project-name>. Logs are here: <deep-link>"'
          InputPathsMap:
            project-name: "$.detail.project-name"
            build-status: "$.detail.build-status"
            deep-link: "$.detail.additional-information.logs.deep-link"
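The SNS topic the rule targets is just another resource. Here's a hedged sketch of the topic with an email subscription attached (the address is a placeholder, and in practice you'd also attach a topic policy allowing events.amazonaws.com to publish to it):

CodebuildNotificationsTopic:
  Type: "AWS::SNS::Topic"
  Properties:
    TopicName: "CodebuildNotifications"
    Subscription:
      - Protocol: email
        Endpoint: "dev-team@example.com"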
Surprisingly, the AWS Console provides a great view of CodeBuild jobs; you can watch a build progress through its phases and tail its log output.
Pipelines
Now that we have our pull requests taken care of, we're going to use CodePipeline to string together a few simple CodeBuild jobs and create a continuous integration and delivery pipeline.
Again, let's use CloudFormation to create the CodePipeline, CodeBuild jobs and all required AWS resources. We'll target the master branch of a GitHub repo. The pipeline will only fire when merges to master occur.
See the CloudFormation template for our example CodePipeline here. (It's too big to embed in this post.)
Once provisioned with CloudFormation, the pipeline will appear in the AWS console, with each stage and its actions laid out visually.
CodePipelines are defined by a series of stages. From the example above, we've got three stages: Source, Test and Deploy.
Within each stage you can run actions (serially or in parallel). Actions have providers; for the most part we're using the CodeBuild action provider, but there are many to choose from to suit your specific needs.
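To make that concrete, here's a trimmed sketch of how the Source and Test stages might be declared in the pipeline's CloudFormation (the repo details and resource names are placeholders):

Stages:
  - Name: Source
    Actions:
      - Name: GitHubSource
        ActionTypeId:
          Category: Source
          Owner: ThirdParty
          Provider: GitHub
          Version: "1"
        OutputArtifacts:
          - Name: SourceOutput
        Configuration:
          Owner: our-org
          Repo: our-repo
          Branch: master
          OAuthToken: !Ref GitHubToken
  - Name: Test
    Actions:
      - Name: RunTests
        ActionTypeId:
          Category: Test
          Owner: AWS
          Provider: CodeBuild
          Version: "1"
        InputArtifacts:
          - Name: SourceOutput
        Configuration:
          ProjectName: !Ref TestCodeBuildProject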
When merges to the master branch occur, the Source stage is triggered. This clones the repo, zips up the codebase, posts it to the artifact S3 bucket and triggers the Test stage.
In the Test stage, our CodeBuild test job is triggered; it pulls and decompresses the source archive from S3, sets up the execution environment and runs the tests. If the tests succeed, the Deploy stage is triggered.
In the Deploy stage, our CodeBuild deploy job is triggered; it pulls and decompresses the same tested source archive from S3 and deploys the app.
If the deployment succeeds, we trigger the TagRepo action, which applies a deploy tag to the GitHub repo at the commit that was cloned in the Source stage.
For a deeper dive, see the CodePipeline concepts.
Costs
In addition to the features and flexibility of CodeBuild and CodePipeline, the pricing is exceedingly cheap.
From AWS, here's how CodePipeline pricing works:
With AWS CodePipeline, there are no upfront fees or commitments. You pay only for what you use. AWS CodePipeline costs $1.00 per active pipeline* per month. To encourage experimentation, pipelines are free for the first 30 days after creation.
*An active pipeline is a pipeline that has existed for more than 30 days and has at least one code change that runs through it during the month. There is no charge for pipelines that have no new code changes running through them during the month. An active pipeline is not prorated for partial months.
AWS CodeBuild uses simple pay-as-you-go pricing. There are no upfront costs or minimum fees. You pay only for the resources you use. You are charged for compute resources based on the duration it takes for your build to execute. The per-minute rate depends on the selected compute type.
To put that in perspective, if you executed 100 builds in one month using the build.general1.small CodeBuild compute type and each build ran for 5 minutes, your monthly charges would be $2.00! (Add another $1 for each active CodePipeline.)
That's insanely cheap!
Keep in mind that, unlike many CI/CD-as-a-service providers, you're not paying for a fixed number of concurrent build slots; concurrency scales with your account limits. You'd be hard-pressed to find a more cost-effective, flexible and feature-rich CI/CD solution.
Final Thoughts
While there are many, many options for building robust CI/CD solutions, we've found AWS CodeBuild and CodePipeline to be rich in features and functionality, and exceedingly cost-effective.
If you want complete control of your continuous integration and delivery environment, you run apps and infrastructure in AWS, and you'd like to limit the number of third-party providers accessing your data, these services check all the boxes, and for pennies on the dollar.
If building modern integration and delivery systems is something you are interested in, we'd love to speak with you. We're always looking for talented and impassioned software engineers to work with at DEPT.