Migrating from CodeShip to GitHub Actions

Nothing motivates a migration quite like a sunset notice. A few weeks after we moved our client's CI/CD from CodeShip to GitHub Actions, CodeShip's sunset notice appeared in the console.

Turns out we migrated just in time.

Why We Moved

Moving away from CodeShip was a matter of improving developer experience and providing a more consistent release process for our client. CodeShip Basic offered simplicity, but GitHub Actions has better documentation and a much larger ecosystem, and CodeShip itself had grown stale, lacking modern CI/CD features. Our specific pain points:

  1. Inconsistent failures due to resource constraints. Limited resources in the CI environment made for a sluggish headless browser, no ability to run tests in parallel, and unpredictable build times.
  2. Poor debugging experience. CodeShip's SSH debugging feature sounded useful until we discovered the SSH session wasn't the container that ran our build. Build artifacts, test screenshots, and application logs weren't there for inspection. You're debugging a clean environment that may or may not reproduce your issue.
  3. IaC locked behind Pro tier. CodeShip Basic's configuration lives entirely in the UI. Version controlling your build triggers, environment variables, or deployment steps required a Pro license. That meant no audit trail for configuration changes, no ability to review CI/CD changes in pull requests, and tribal knowledge about build settings that disappeared when team members left.
  4. Limited cache control. CodeShip's caching is remarkably primitive: you dump files into $HOME/cache, and that's it. No cache keys, no expiration control, no branch isolation. The only way to invalidate the cache is through the UI.
  5. Limited conditional logic. GitHub provides rich conditional expressions that can use changed paths, commit messages, PR labels, and other context; CodeShip offers only a few options for filtering by branch or PR event (see the sketch below).
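
To illustrate the difference, here is the kind of conditional logic GitHub Actions supports. This is a minimal sketch, not our actual workflow; the paths and the label name are illustrative.

on:
  push:
    branches: [master]
    paths:
      - "app/**"
      - "Gemfile.lock"
  pull_request:

jobs:
  e2e:
    # Only run the expensive suite for labeled pull requests
    if: github.event_name == 'pull_request' && contains(github.event.pull_request.labels.*.name, 'run-e2e')
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: echo "run the end-to-end suite here"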

The Writing Was On the Wall

A few weeks after our migration, CodeShip's sunset notice appeared in the console. While we didn't know the exact timeline, CodeShip's stagnation was evident: outdated documentation, missing modern CI/CD features, and a developer experience that hadn't evolved in years. The sunset notice validated what we already knew—it was time to move on.

Starting Point

CodeShip Basic operated on a straightforward premise: write shell commands in a text input, and they run on build triggers. Since configuration lived in the UI, we tracked our actual build logic in versioned shell scripts.

The environment came pre-configured with common tooling: nvm for Node.js, rvm for Ruby, JDK version switchers, and a running PostgreSQL instance on the default port. This "batteries included" approach reduced initial setup but limited flexibility.

Build steps and deployment pipelines were both configured through the UI.

Our typical CodeShip workflow looked like this:

  1. Setup commands - Set language versions, install dependencies
  2. Test commands - Run test suites (sequentially, due to resource constraints)
  3. Deployment - Push artifacts to AWS Elastic Beanstalk via UI-configured deployment step
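
In practice, the versioned shell scripts we pasted into that text input looked roughly like the sketch below. The file name and exact commands are illustrative, not our real scripts.

#!/bin/bash
# ci.sh - simplified sketch of a CodeShip Basic build script
set -euo pipefail

# Setup: pin language versions and install dependencies
rvm use 3.2.2
gem install bundler
bundle install

# Test: run the suites sequentially (no headroom for parallel workers)
bundle exec rspec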

This simplicity was CodeShip's strength and its limitation. When our needs outgrew what the UI could express, we hit a wall.

Mapping CodeShip → GitHub Actions

Migrating our shell files to GitHub Actions' workflow syntax was a significant task. In theory, we could have copied our scripts into a single multiline step, but we would have missed out on core GitHub Actions features. Instead of a simple shell script, GitHub Actions uses a YAML configuration file: nearly everything about your workflow is declared in this file, from test commands to deployment commands, even container image configuration. Here are the major pieces we migrated:
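
For orientation, a workflow file has roughly this shape. This is a minimal sketch; the file name, job names, and commands are illustrative rather than our actual configuration.

# .github/workflows/ci.yml
name: CI

on:
  push:
    branches: [master]
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run tests
        run: echo "setup and test commands go here"

  deploy:
    needs: test
    if: github.ref == 'refs/heads/master'
    runs-on: ubuntu-latest
    steps:
      - name: Deploy
        run: echo "deployment commands go here"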

Dependencies

In CodeShip, we set the Ruby version with the included rvm version manager and install gems with Bundler:

rvm use 3.2.2
gem install bundler
bundle package --all

In GitHub Actions, we use the official ruby/setup-ruby@v1 action:

- uses: ruby/setup-ruby@v1
  with:
    ruby-version: 3.2.2
    bundler-cache: true
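
Setting bundler-cache: true tells the action to run bundle install and cache the installed gems between runs, which replaced both our manual Bundler step and CodeShip's $HOME/cache directory.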

A similar mapping exists for Node.js and its dependencies.
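
For example, something along these lines replaces nvm plus a manual install. This is a sketch that assumes npm as the package manager; the Node version is illustrative.

- uses: actions/setup-node@v4
  with:
    node-version: 20
    cache: npm
- run: npm ci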

PostgreSQL

In CodeShip, a PostgreSQL server is available on the default port. In GitHub Actions, we set up a service with a postgres image:

services:
  postgres:
    image: postgres:14
    ports:
      # Maps tcp port 5432 on service container to the host
      - 5432:5432
    env:
      POSTGRES_HOST_AUTH_METHOD: trust
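
With the port mapped, the database is reachable from the job at localhost:5432, so the test steps only need the usual connection settings. The variable names below are illustrative; your application may read its own configuration.

env:
  PGHOST: localhost
  PGPORT: "5432"
  PGUSER: postgres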

Elasticsearch

Elasticsearch required special handling due to our parallel test setup. Our test suite spawns multiple Elasticsearch processes—one per parallel worker—which means we need the Elasticsearch binary available on the PATH rather than a single service container.

We download the Elasticsearch tarball and cache it between builds to avoid repeated downloads:

- name: Set Path
  run: |
    ES_HOME="$HOME/.cache/elasticsearch-${ES_VERSION}"
    echo "ES_HOME=$ES_HOME" >> "$GITHUB_ENV"
    echo "$ES_HOME/bin" >> "$GITHUB_PATH"
    
- name: Cache Elasticsearch
  uses: actions/cache@v4
  with:
    path: ${{ env.ES_HOME }}
    key: ${{ runner.os }}-es-${{ env.ES_VERSION }}
    
- name: Install Elasticsearch
  run: |
    if [ -e "$ES_HOME/bin/elasticsearch" ]; then
      echo "Elasticsearch found in cache"
    else
      echo "Elasticsearch not found in cache"
      mkdir -p "$ES_HOME"
      curl -sSLO https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-${ES_VERSION}-linux-x86_64.tar.gz
      curl -sSLO https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-${ES_VERSION}-linux-x86_64.tar.gz.sha512
      shasum -a 512 -c elasticsearch-${ES_VERSION}-linux-x86_64.tar.gz.sha512
      tar -xzf elasticsearch-${ES_VERSION}-linux-x86_64.tar.gz
      mv elasticsearch-${ES_VERSION}/* "$ES_HOME/"
    fi
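
These steps rely on ES_VERSION being set as a workflow- or job-level environment variable, for example:

env:
  ES_VERSION: 8.14.1  # example value, not necessarily the version we run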

Deployment

CodeShip provided a UI for configuring deployment, whereas GitHub Actions expects this to be declared in the workflow. Our deployment involves uploading a zip file to AWS S3 and triggering a deployment in Elastic Beanstalk. Thankfully, a community action exists for exactly this case.

- name: Deploy to Elastic Beanstalk
  uses: einaregilsson/beanstalk-deploy@v21      
  env:
    EB_ENV_NAME: ${{ github.ref == 'refs/heads/master' && vars.EB_ENV_NAME_PRODUCTION || vars.EB_ENV_NAME_STAGING }}
    EB_APP_NAME: ${{ vars.EB_APP_NAME }}
  with:
    aws_access_key: ${{ secrets.AWS_ACCESS_KEY_ID }}
    aws_secret_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    application_name: ${{ env.EB_APP_NAME }}
    environment_name: ${{ env.EB_ENV_NAME }}
    region: us-east-1
    version_label: "github-deployment-${{ github.sha }}"
    version_description: ${{ env.VERSION_DESC }}
    deployment_package: ${{ env.ZIP_FILE }}
    existing_bucket_name: elasticbeanstalk-us-east-1-xxxxxxxxxx
    use_existing_version_if_available: true
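
Note the environment_name: a conditional expression sends pushes to master to the production environment and everything else to staging, so a single workflow covers both deployments.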

Debugging

CodeShip supported little in the realm of debugging builds. There was an option to SSH into a container, but it wasn't the instance the tests ran on, so build assets, logs, and screenshots weren't available in the SSH session. In GitHub Actions, a community member offers an action called mxschmitt/action-tmate. It creates a tmate session that you can SSH into, and you can place it at any step in a job to inspect and debug. This proved extremely useful during the migration.

- name: Step that requires inspection
  run: ...

- name: Setup tmate session
  uses: mxschmitt/action-tmate@v3

# The workflow continues after the session ends.
- name: Next Step
  run: ...
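
In practice you will usually want the session only when something fails, and restricted to the user who triggered the run. A sketch of that, using the action's documented options:

- name: Setup tmate session on failure
  if: ${{ failure() }}
  uses: mxschmitt/action-tmate@v3
  with:
    limit-access-to-actor: true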

Workflow Highlights

Some things to note about the new workflow:

  1. More Infrastructure as Code. This centralizes our process in the codebase and lets us track changes with version control. Things that moved from the UI console into IaC include build triggers, environment variables, container images, and artifact deployments.
  2. Different runners for different jobs. For tests, we use a Linux runner with 8 cores so the suite can run in parallel. For everything else, like building assets, deploying artifacts, or invalidating caches, we use the default runner: GitHub offers 3,000 minutes/month of free Actions usage on the default runners for private repositories, and we want to take advantage of that (see the runner sketch after this list).
  3. Avoid compiling twice. We compile assets in staging/production mode at the beginning of the workflow and reuse them for testing. This saves time and makes our testing environment more similar to production. Later jobs access the build artifact using GitHub's official actions/upload-artifact and actions/download-artifact:
# At the end of the build job
- name: Create deploy artifact
  run: zip -r "$ZIP_FILE" . -x "*.git*" "log/*" "tmp/*" "node_modules/*" "vendor/bundle/*"
- name: Upload artifact
  uses: actions/upload-artifact@v4
  with:
    name: ${{ env.ZIP_FILE }}
    path: ${{ env.ZIP_FILE }}
    
# ....

# At the beginning of both test and deploy jobs
- name: Download and unpack artifact to workspace
  uses: actions/download-artifact@v4
  with:
    name: ${{ env.ZIP_FILE }}
    path: .
  4. Concurrency control. This cancels any in-progress build when a new run is triggered on the same branch:
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true
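
For item 2, picking a runner is a single line per job. The label for the larger runner is whatever you named it when provisioning it, so the one below is illustrative.

jobs:
  test:
    # 8-core larger runner for the parallelized test suite
    runs-on: ubuntu-latest-8-cores
    steps:
      - run: echo "parallel test suite runs here"

  deploy:
    needs: test
    # Default hosted runner for lighter jobs, covered by the included minutes
    runs-on: ubuntu-latest
    steps:
      - run: echo "build, deploy, and cache invalidation steps run here"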

Outcome

This migration was about more than just replacing one tool with another. It was about rethinking our delivery pipeline to achieve faster feedback, reproducible builds, and first-class debugging.

The results validated the effort:

  1. Faster builds. On CodeShip, our full pipeline duration ranged from 24 to 48 minutes. Now, a full pipeline takes about 12-14 minutes. This is mainly due to increased parallelization with larger runners.
  2. Smaller bill. A similarly sized instance on CodeShip Pro would have cost $299/month at the time. With GitHub Actions we pay per minute, and our bill last month was around $47.
  3. Zero resource-related failures since the migration. No more flaky builds!
  4. Actual debugging of failing builds, instead of guessing at causes in an environment we couldn't reproduce.

And we’re just getting started! There is still more to explore in GitHub Actions like matrix builds, reusable workflows and automated dependency updates, but for now, we're thrilled with the results.