Rails on Kubernetes

Build a CI/CD pipeline from GitHub Actions to Kubernetes for a Ruby on Rails app

Sep 1, 2021

TL;DR

Why Kubernetes, and why Docker?

It’s essential to learn how to containerize apps (Docker) and deploy them using Kubernetes. Going from a few scheduled deploys a week to multiple ad hoc deploys a day feels revolutionary.

At my work, we deployed more in the first 3 days using Kubernetes than in the previous 3 months using Terraform blue/green deployments.

Docker principles

Follow these guidelines and check out the demo app repo to see how it works.

Automate everything for local development

This goes along with my post, Auto-format and lint everything.

See the tasks setup in the demo app repo.

Kubernetes prerequisites

Getting Kubernetes running on a hosted cloud provider like AWS (via EKS) is thankfully getting easier over time.

Still, there’s a lot to set up. A production AWS EKS cluster will have something like this:

Install Oh My Zsh and add the kubectl plugin to benefit from all the shortcuts below, like k, kgp, etc.

Here’s a gist with the list of K8s Oh My Zsh aliases
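For reference, here are a few of the plugin’s aliases used throughout this post (the gist has the full list):

alias k='kubectl'
alias kgp='kubectl get pods'
alias kdp='kubectl describe pods'
alias kl='kubectl logs'
alias keti='kubectl exec -ti'
alias krh='kubectl rollout history'
alias kru='kubectl rollout undo'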

Kubernetes configuration secrets

Store your production environment variables in your password manager, as something like “demoapp k8s .env.production.local”. Copy and paste the contents into a .env.production.local file.

Next, create a secret using that new file (do not commit it!):

# Create namespace
kubectl create namespace demoapp

# Add secret named "demoapp-secrets" as a generic type of secret from file
# with many entries.
# https://kubernetes.io/docs/concepts/configuration/secret/
kubectl -n demoapp create secret generic demoapp-secrets --from-env-file='.env.production.local'

# Show this secret
kubectl -n demoapp describe secret demoapp-secrets

# Clean up prod secrets from local machine
rm .env.production.local
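Once the secret exists, the app can load every entry as an environment variable. A minimal sketch of the relevant Deployment snippet, reusing the names above:

spec:
  containers:
    - name: demoapp-container
      image: ghcr.io/briansigafoos/docker-rails-webpacker-app
      envFrom:
        - secretRef:
            name: demoapp-secrets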

Editing secrets

# List secrets in the demoapp namespace
kubectl -n demoapp get secret
# Describe all secrets
kubectl -n demoapp describe secret
# Describe this secret
kubectl -n demoapp describe secret demoapp-secrets

# Delete this secret
kubectl -n demoapp delete secret demoapp-secrets

# Edit this secret to change/update values
kubectl -n demoapp edit secret demoapp-secrets

Kubernetes configuration using Kustomize

Try the commands in the next section from the demo app root and see the kustomize example with base and overlays.
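For illustration, an overlay’s kustomization.yaml might look roughly like this (paths and image name follow the demo app; the newTag placeholder is whatever SHA you deploy):

# kubernetes/overlays/canary/kustomization.yaml (sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: demoapp
resources:
  - ../../base
images:
  - name: ghcr.io/briansigafoos/docker-rails-webpacker-app
    newTag: <GIT_SHA>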

Kubernetes deployments manually

You’ll only do these steps if you’re making changes to the /kubernetes/base or overlays files. Once everything is set up, GitHub Actions will deploy app code changes for you, as covered in the next section.

# 1. Set env
K8S_ENV=canary
# K8S_ENV=prod (!! update SHA in step 2!!)

# 2. Set SHA
# Update newTag SHA in kubernetes/overlays/$K8S_ENV/kustomization.yaml

# 3. Check diff
k diff -k kubernetes/overlays/$K8S_ENV

# 4. Apply and watch
k apply -k kubernetes/overlays/$K8S_ENV --validate; kgpwide -w

# 5. Visit URL's to validate
# https://canary-deploy.<YOUR-APP-DOMAIN.com>

# 6. Check pod distribution and utilization on nodes
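# (requires the kube-capacity krew plugin: kubectl krew install resource-capacity)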
k resource-capacity --pods --util

# 7. Debug a running pod. Get pods: kgp; POD=...
keti $POD -- /bin/bash

# 8. Rollout history and rollback
# kubectl rollout history deployment/...
krh deployment/demo-app-canary
# kubectl rollout undo deployment/...
kru deployment/demo-app-canary

Kubernetes debugging

This is essential when you’re just getting started or making any K8s config changes.

# Be sure to read:
# https://kubernetes.io/docs/tasks/debug-application-cluster/debug-application/
# https://kubernetes.io/docs/tasks/debug-application-cluster/debug-running-pod/

# Get pod name
# kubectl get pod
kgp

# Set env vars
POD=$(kgp -o=jsonpath='{.items[0].metadata.name}')
CONTAINER=demoapp-container

# Describe the pod to target. Shows Events on that pod
# kubectl describe pod $POD
kdp $POD

# View logs
# kubectl logs
# kl $POD $CONTAINER
kl $POD
# If failed
# kl $POD $CONTAINER --previous
kl $POD --previous

# Get interactive shell into the pod for debugging
# kubectl exec --stdin --tty $POD -- /bin/bash
keti $POD -- /bin/bash

# Create temporary debug pod copied from running pod
# k debug $POD -it --image=ubuntu --share-processes --copy-to=app-debug

# Debug a container that fails to start
# https://kubernetes.io/docs/tasks/debug-application-cluster/debug-running-pod/#copying-a-pod-while-changing-its-command
k debug $POD -it --copy-to=app-debug --container=$CONTAINER -- sh

k attach app-debug -c $CONTAINER -it

# Try running one-off commands on the container, e.g. list files
# kubectl exec ${POD} -c ${CONTAINER} -- ${CMD} ${ARG1} ${ARG2} ... ${ARGN}
k exec $POD -c $CONTAINER -- ls

# Clean up the debug pod
k delete pod app-debug

Using GitHub Actions to deploy

This is pretty simple overall. The “deploy” action sets the “image” (SHA) to be deployed via kustomize edit set image "${{inputs.image}}", then runs kubectl apply -k kubernetes/overlays/${{ inputs.k8s_env }} so that kustomize applies the change.

Kubernetes itself then takes care of the rest.
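A minimal sketch of such a workflow, assuming an EKS cluster and AWS credentials stored as repository secrets (all names here are illustrative, not the demo app’s exact workflow):

# .github/workflows/deploy.yml (sketch)
name: Deploy
on:
  workflow_dispatch:
    inputs:
      image:
        description: Image to deploy (registry/name:sha)
        required: true
      k8s_env:
        description: Target overlay (canary or prod)
        required: true
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # Authenticate to AWS, then point kubectl at the cluster
      - uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      - run: aws eks update-kubeconfig --name demoapp-cluster
      # Pin the image SHA in the overlay, then apply it
      - run: |
          cd kubernetes/overlays/${{ github.event.inputs.k8s_env }}
          kustomize edit set image "${{ github.event.inputs.image }}"
          kubectl apply -k .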

Sample deployment script

Add a simple script /scripts/deploy_prod.sh or /scripts/deploy_canary.sh to deploy the latest code.

#!/bin/bash
set -e

# Helper function to ask to confirm with y/n
confirm() {
    local PROMPT=$1
    [[ -z $PROMPT ]] && PROMPT="OK to continue?"
    local REPLY=
    while [[ ! $REPLY =~ ^[YyNn]$ ]]; do
        echo -n "$PROMPT (y/n) "
        read -r
    done
    # The result of this comparison is the return value of the function
    [[ $REPLY =~ ^[Yy]$ ]]
}

# Stash work, checkout main, and pull latest
echo "Stashing work, checking out main/master, and pulling latest"
git stash push
git checkout main
git pull --rebase --autostash

CURRENT_SHA=$(git rev-parse --short HEAD)
if ! confirm "Deploy latest SHA ($CURRENT_SHA) to prod?"; then
  echo -n "Enter prod deploy SHA (e.g. $CURRENT_SHA) > "
  read -r DEPLOY_SHA
else
  DEPLOY_SHA=$CURRENT_SHA
fi

TAG_NAME=deploy/prod/$DEPLOY_SHA

if ! confirm "Confirm to deploy $DEPLOY_SHA to prod ($TAG_NAME)?"; then
  echo "Not deploying ❌"
  exit 1
else
  echo "Deploying... 🚀"
  git tag $TAG_NAME $DEPLOY_SHA
  git push origin --tags
fi

# Back to previous branch
echo "Checking out previous branch"
git checkout -
echo "If needed, run: git stash pop (gstp)"

Rails in Kubernetes

Database migrations

Always check and use recommendations from strong_migrations for removing a column, and more.

Because Kubernetes deployments roll out pods (your containerized app) one by one, removing the old version as they go, there is often a window where two versions of your code are running at once.

The best way I’ve seen (so far) to handle this is to have the database migration be triggered manually, post deploy. This is also known as: Deploy new code, then migrate.
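For example, once the new code is live, the migration can be run from inside a running pod (pod lookup as in the debugging section above):

# Run migrations manually, post deploy
POD=$(kubectl -n demoapp get pod -o=jsonpath='{.items[0].metadata.name}')
kubectl -n demoapp exec -it $POD -- bin/rails db:migrate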

Because ActiveRecord caches the database schema, you’ll need to ignore columns and do a few more steps of code change + deploy.

Add column flow:

  1. Write migration to add column AND tell ActiveRecord to ignore column: self.ignored_columns = ['some_column']
  2. Deploy code
  3. Run migration manually
  4. Remove ignored_columns code
  5. Deploy

Remove column flow (see the Ruby sketch after this list):

  1. Tell ActiveRecord to ignore column: self.ignored_columns = ['some_column']
  2. Deploy code
  3. Write migration to remove column
  4. Deploy and run migration manually
  5. Remove ignored_columns code
  6. Deploy
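A sketch of steps 1 and 3 in Ruby, using a hypothetical users.legacy_name column (safety_assured is from strong_migrations):

# Step 1: tell ActiveRecord to ignore the column before removing it
class User < ApplicationRecord
  self.ignored_columns = ['legacy_name']
end

# Step 3: the removal migration
class RemoveLegacyNameFromUsers < ActiveRecord::Migration[6.1]
  def change
    safety_assured { remove_column :users, :legacy_name, :string }
  end
end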

Puma app server

We switched from Passenger to Puma in production after exploring a few options, and it has worked perfectly.

Sidekiq workers

Liveness probes are a great K8s feature that automatically replaces any dead (non-responsive) containers with new ones. To get this working for a Sidekiq worker container, simply use the excellent sidekiq_alive gem.
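A sketch of that probe on the worker container, assuming sidekiq_alive’s default port (7433):

livenessProbe:
  httpGet:
    path: /
    port: 7433
  initialDelaySeconds: 30
  periodSeconds: 30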

More code examples in the demo app

Kubernetes security best practices

These were published as NSA/CISA Kubernetes Hardening Guidance

A summary of the key recommendations from each section:

The demo app does the checked ones above, and everyone really should; they’re straightforward to implement. See the Dockerfile for the “nonroot” USER setup and USER_ID.
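A sketch of that Dockerfile setup (the user name is an assumption; the ID just needs to match the Deployment’s securityContext below):

# Create an unprivileged user and run as it
ARG USER_ID=1001
RUN groupadd --gid $USER_ID nonroot && \
    useradd --uid $USER_ID --gid nonroot --create-home nonroot
USER nonroot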

The other parts are as simple as passing a few options in the K8s Deployment spec:

spec:
  securityContext:
    # Must match Dockerfile's USER_ID for User and Group
    runAsUser: 1001
    runAsGroup: 1001
    # Set ownership of mounted volumes to the user running the container
    fsGroup: 1001

  # https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#container-v1-core
  containers:
    - name: demoapp-container
      image: ghcr.io/briansigafoos/docker-rails-webpacker-app
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        runAsNonRoot: true

More Reading

Read more posts like this in the Software Engineering Toolbox collection.