Deploying a Django app to DigitalOcean using GitHub Actions, Terraform and Ansible - Part 2

This series will guide you through deploying a Django application to DigitalOcean using a trio of complementary tools:
Terraform for infrastructure provisioning, Ansible for configuration management, and GitHub Actions for automation orchestration.

  • DigitalOcean is a cloud hosting provider that offers self-serve infrastructure for hosting web applications. This includes virtual machines called Droplets, managed relational databases, and load balancers.

  • Terraform allows you to define your desired infrastructure resources (servers, databases, etc.) in code, and provision those resources on an IaaS provider such as DigitalOcean.

  • Ansible is a configuration management tool that automates software setup, configuration, and deployments.

  • GitHub Actions enables you to modify your infrastructure and deploy updates to your application automatically on code pushes.

Part 1: Setting Up the Django Project

This part explains how to create a starter Django project, configure settings from environment variables and host the code on GitHub.

Part 2: Provisioning Infrastructure on DigitalOcean using Terraform and GitHub Actions

This part guides you through provisioning the required infrastructure on DigitalOcean.

Part 3: Ansible for Application Deployment Automation

This part explains how to use Ansible to automate the deployment process.

Prerequisites

To complete this part you need:

  • The Django project from Part 1, pushed to a GitHub repository.
  • A DigitalOcean account.
  • Terraform installed on your local machine.
  • The AWS CLI installed on your local machine (used here to manage DigitalOcean Spaces).
  • An SSH client, including ssh-keygen, installed on your local machine.

Step 1 - Generating a DigitalOcean access token

The DigitalOcean Terraform provider uses a DigitalOcean access token to authenticate with DigitalOcean.

To create a token, navigate to the API page in the DigitalOcean console then select "Generate New Token."

Enter a token name, select both the "Read" and "Write" scopes, and generate your token.

Copy the generated token to your clipboard. If you navigate away from the page the token will not be displayed again.

In your terminal, create an environment variable with your new personal access token.
Replace the youraccesstoken placeholder text with the token pasted from your clipboard.

 export DIGITALOCEAN_ACCESS_TOKEN=youraccesstoken
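
Optionally, you can verify that the token works by querying the DigitalOcean API account endpoint with curl; a valid token returns a JSON description of your account:

curl -s -H "Authorization: Bearer $DIGITALOCEAN_ACCESS_TOKEN" https://api.digitalocean.com/v2/account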

Step 2 - Generating a DigitalOcean Spaces Key

Terraform uses a state file to record the current state of your provisioned resources. When you make changes to your code, Terraform compares the desired state to the existing state and performs the create, update, or delete actions required to reach the desired state.

By default, Terraform stores state in a file on your local computer named terraform.tfstate. We use GitHub Actions to orchestrate infrastructure changes, so the state file needs to be stored in a location accessible to Terraform running on a GitHub runner. We will therefore use the DigitalOcean Spaces state backend instead of the local file default.

To create a DigitalOcean Spaces key, navigate to the API page in the DigitalOcean console, select the "Spaces Keys" tab, then select "Generate New Key."

Enter a name for your key, generate the key, then copy the access key to your clipboard.

In your terminal, create an environment variable with the access key.
Replace the youraccesskey placeholder text with the access key pasted from your clipboard.

 export AWS_ACCESS_KEY_ID=youraccesskey

Copy the secret key to your clipboard. If you navigate away from the page the secret key will not be displayed again.
In your terminal, create an environment variable with the secret key.
Replace the yoursecretkey placeholder text with the secret key pasted from your clipboard.

 export AWS_SECRET_ACCESS_KEY=yoursecretkey

Step 3 - Creating a DigitalOcean Spaces bucket

Create a bucket to hold the Terraform state by running the following command. The bucket name must be unique among all DigitalOcean Spaces users in all regions. If you enter a name that is already in use, the command will fail with an error that says the Space already exists.

aws s3api create-bucket --bucket your-terraform-state-bucket --endpoint-url https://nyc3.digitaloceanspaces.com

List the available buckets by running this command and confirm that the space was created successfully.

 aws s3api list-buckets --endpoint-url https://nyc3.digitaloceanspaces.com
{
  "Buckets": [
    {
      "Name": "your-terraform-state-bucket",
      "CreationDate": "2024-01-19T11:48:45.671000+00:00"
    },
    {
      "Name": "djondo-terraform-state",
      "CreationDate": "2024-01-17T11:05:20.153000+00:00"
    }
  ],
  "Owner": {
    "DisplayName": "15605925",
    "ID": "15605925"
  }
}

Step 4 - Configuring Terraform to store state in a DigitalOcean Spaces bucket

Navigate to the top-level django-on-digitalocean folder you created in Part 1, or clone the starter project from this repository

Create an infra subfolder which will contain the Terraform code for provisioning infrastructure on DigitalOcean.

The folder structure should look like this.

├── infra
└── src
    ├── myproject

Navigate to the infra folder

cd infra

Create a backend.tf file and add the following code, which defines the Terraform backend configuration

terraform {
  backend "s3" {
    endpoints                   = { s3 = "https://nyc3.digitaloceanspaces.com" }
    key                         = "terraform.tfstate"
    bucket                      = "your-terraform-state-bucket"
    region                      = "us-east-1"
    skip_requesting_account_id  = true
    skip_credentials_validation = true
    skip_metadata_api_check     = true
    skip_s3_checksum            = true
  }
}

Create a versions.tf file that specifies the required Terraform and provider versions

terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 2.0"
    }
  }
  required_version = "~> 1.6.3"
}

Create a main.tf file where you will define the desired infrastructure resources and add this code to it.

provider "digitalocean" {}

Initialize the Terraform backend by running this command

terraform init

Terraform will display output similar to this if the backend is initialized successfully

Initializing the backend...

Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.

Initializing provider plugins...
- Finding digitalocean/digitalocean versions matching "~> 2.0"...
- Installing digitalocean/digitalocean v2.34.1...
- Installed digitalocean/digitalocean v2.34.1 (signed by a HashiCorp partner, key ID F82037E524B9C0E8)

Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

The Terraform workflow consists of this loop (a command sketch follows the list):

  1. Define your resources by writing Terraform code.
  2. Run the terraform plan command to review the changes Terraform will make to achieve the desired state.
  3. Run the terraform apply command to create, update, or delete resources to achieve the desired state.
  4. Run terraform show to inspect the newly created resources.
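
For example, a minimal pass through this loop from the infra folder, saving the plan to a file so that apply executes exactly what was reviewed (the tfplan file name is just a convention), looks like this:

terraform plan -out=tfplan   # review the proposed changes
terraform apply tfplan       # apply exactly the reviewed plan
terraform show               # inspect the resulting state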

Step 5 - Manually creating a resource

A resource block defines a component of your infrastructure.

Let's define a Virtual Private Cloud (VPC) resource by appending this code to the main.tf file

resource "digitalocean_vpc" "this" {
  name     = "django-project-network"
  region   = "nyc3"
  ip_range = "192.168.11.0/24"
}

Run terraform plan to review the changes Terraform will make

terraform plan

Inspect the planned actions

  + create

Terraform will perform the following actions:

  # digitalocean_vpc.this will be created
  + resource "digitalocean_vpc" "this" {
      + created_at = (known after apply)
      + default    = (known after apply)
      + id         = (known after apply)
      + ip_range   = "192.168.11.0/24"
      + name       = "django-project-network"
      + region     = "nyc3"
      + urn        = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.

───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now.

The terraform plan output shows that Terraform will create one resource

....
Plan: 1 to add, 0 to change, 0 to destroy.
...

Run terraform apply to create the VPC

terraform apply

Type yes and press Enter when prompted

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # digitalocean_vpc.this will be created
  + resource "digitalocean_vpc" "this" {
      + created_at = (known after apply)
      + default    = (known after apply)
      + id         = (known after apply)
      + ip_range   = "192.168.11.0/24"
      + name       = "django-project-network"
      + region     = "nyc3"
      + urn        = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

digitalocean_vpc.this: Creating...
digitalocean_vpc.this: Creation complete after 2s [id=f50cbc83-97eb-4084-b66d-06f065a89af4]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Run terraform show to inspect the newly created resources

terraform show
# digitalocean_vpc.this:
resource "digitalocean_vpc" "this" {
    created_at = "2024-01-19 12:25:52 +0000 UTC"
    default    = false
    id         = "f50cbc83-97eb-4084-b66d-06f065a89af4"
    ip_range   = "192.168.11.0/24"
    name       = "django-project-network"
    region     = "nyc3"
    urn        = "do:vpc:f50cbc83-97eb-4084-b66d-06f065a89af4"
}

Commit the current changes and push them to your repository

Append the Terraform gitignore configuration to your .gitignore file
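
If you do not have the Terraform gitignore template to hand, the entries below are a minimal sketch covering the common cases (the full template ignores a few more files):

# Local .terraform directories and provider binaries
.terraform/

# State files, which may contain sensitive data
*.tfstate
*.tfstate.*

# Crash logs
crash.log

# Variable files, which may contain secrets
*.tfvars
*.tfvars.json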

Then stage your changes

git add -A

Then review the staged files

git status

Commit and push your changes to Github

git commit -m "add Terraform initial configuration"
git push -u origin main

Step 6 - Creating GitHub Secrets

On your local machine you have used the following environment variables:
DIGITALOCEAN_ACCESS_TOKEN, AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
We need to make these values available to Terraform when it runs in a GitHub Actions workflow.
We can create GitHub secrets and pass the values as environment variables to jobs.
Navigate to the Secrets page for your repository in the GitHub console, then select "New repository secret"

Enter DIGITALOCEAN_ACCESS_TOKEN as the name and the DigitalOcean access token value as the secret, then select "Add secret"

You can retrieve the value by printing the environment variable you set earlier

echo $DIGITALOCEAN_ACCESS_TOKEN

Create another secret named AWS_ACCESS_KEY_ID with the DigitalOcean Spaces access key as the value.
You can retrieve the value you set earlier by running this command

echo $AWS_ACCESS_KEY_ID

Create another secret named AWS_SECRET_ACCESS_KEY with the DigitalOcean Spaces secret key as the value.
You can retrieve the value you set earlier by running this command.

echo $AWS_SECRET_ACCESS_KEY
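
If you prefer the command line, the GitHub CLI can create the same secrets from the environment variables you already exported; a sketch, assuming gh is installed and authenticated for this repository:

gh secret set DIGITALOCEAN_ACCESS_TOKEN --body "$DIGITALOCEAN_ACCESS_TOKEN"
gh secret set AWS_ACCESS_KEY_ID --body "$AWS_ACCESS_KEY_ID"
gh secret set AWS_SECRET_ACCESS_KEY --body "$AWS_SECRET_ACCESS_KEY"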

Step 7 - Creating a GitHub Actions workflow

In the top level django-on-digitalocean folder create a .github/workflows subfolder

mkdir -p .github/workflows

Add a main.yml file within the .github/workflows folder and add the following YAML code to it

name: CI

on:
  push:
    branches: ['main']
  pull_request:
    branches: ['main']

  workflow_dispatch:

env:
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  DIGITALOCEAN_ACCESS_TOKEN: ${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }}

jobs:
  files_changed:
    name: Detect Files Changed
    runs-on: ubuntu-22.04
    outputs:
      src: ${{ steps.changes.outputs.src }}
      infra: ${{ steps.changes.outputs.infra }}
      deploy: ${{ steps.changes.outputs.deploy }}
    steps:
      - uses: actions/checkout@v4
      - uses: dorny/paths-filter@v2
        id: changes
        with:
          filters: |
            src:
              - 'src/**'
            infra:
              - 'infra/**'
            deploy:
              - 'deploy/**'

  lint:
    if: needs.files_changed.outputs.infra == 'true'
    needs: files_changed
    name: Lint Terraform Code
    runs-on: ubuntu-22.04
    steps:
      - name: Check out code
        uses: actions/checkout@v4

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2
        with:
          terraform_version: 1.6.3

      - name: Run terraform fmt check
        working-directory: ./infra
        run: terraform fmt -check -diff -recursive

  tf_plan_apply:
    name: Provision Infrastructure
    runs-on: ubuntu-22.04
    needs: lint
    steps:
      - name: Checkout Repo
        uses: actions/checkout@v4

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2
        with:
          terraform_version: 1.6.3

      - name: Initialize Terraform
        working-directory: ./infra
        run: terraform init -input=false

      - name: Validate Terraform
        working-directory: ./infra
        id: validate
        run: terraform validate -no-color

      - name: Plan Terraform
        id: plan
        continue-on-error: false
        working-directory: ./infra
        run: |
          terraform plan -input=false -no-color -out=tfplan \
          && terraform show -no-color tfplan

      - name: Apply Terraform
        if: github.ref == 'refs/heads/main' && github.event_name == 'push' && steps.plan.outcome == 'success'
        id: apply
        continue-on-error: false
        working-directory: ./infra
        run: |
          terraform apply \
            -input=false \
            -no-color \
            tfplan
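
Commit the workflow file and push it to GitHub so that later pushes trigger the pipeline, for example:

git add .github/workflows/main.yml
git commit -m "add GitHub Actions workflow"
git push -u origin main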

Step 8 - Creating an SSH key

In the next step we will provision 2 Droplet resources. Before that, create an SSH key that will enable login to the Droplets.

ssh-keygen -t rsa -f ~/.ssh/id_digitalocean

When prompted, press enter to create the SSH key without a passphrase.

Navigate to your DigitalOcean settings page, select the "Security" tab and select "Add SSH Key"

Copy and paste the contents of the ~/.ssh/id_digitalocean.pub file into the Public key field.

Enter a value for the Key Name field, such as cicd, then select "Add SSH Key"
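
Alternatively, the key can be managed by Terraform itself rather than added through the console. The block below is only a hedged sketch using the provider's digitalocean_ssh_key resource; note that Step 9 assumes the key was added in the console and looks it up with a data source, so use one approach or the other.

resource "digitalocean_ssh_key" "cicd" {
  name = "cicd"
  # Paste the contents of ~/.ssh/id_digitalocean.pub here; reading the file
  # with file() would not work on the GitHub Actions runner, where the key
  # file does not exist.
  public_key = "ssh-rsa AAAA... your-public-key"
}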

Step 9 - Provisioning resources using the GitHub Actions workflow

Append this code to your main.tf file to create 2 droplets and associate the SSH key with them.

data "digitalocean_ssh_key" "cicd" {
  name = "cicd"
}

resource "digitalocean_droplet" "django" {
  count      = 2
  image      = "ubuntu-22-04-x64"
  monitoring = true
  name       = "django-${count.index + 1}"
  region     = "nyc3"
  vpc_uuid   = digitalocean_vpc.this.id
  size       = "s-1vcpu-1gb"
  ssh_keys   = [data.digitalocean_ssh_key.cicd.id]
  tags       = ["django"]
}

Commit your changes and push to Github

git add -A
git commit -m "provision droplets"
git push -u origin main

Navigate to the GitHub Actions workflow run log and confirm that the provisioning step succeeded and created 2 droplets

Navigate to the DigitalOcean Droplets page and confirm that 2 droplets were created
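
Optionally, confirm that you can log in to one of the Droplets with the key generated in Step 8. Substitute a Droplet's public IP address from the DigitalOcean console for the your_droplet_ip placeholder:

ssh -i ~/.ssh/id_digitalocean root@your_droplet_ip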

Step 10 - Adding a Load Balancer

Append the following code to your main.tf file. This will create a Load Balancer that distributes incoming HTTP traffic between the 2 Droplets.

resource "digitalocean_loadbalancer" "this" {
  name     = "django-lb"
  region   = "nyc3"
  vpc_uuid = digitalocean_vpc.this.id

  droplet_ids = [
    for droplet in digitalocean_droplet.django :
    droplet.id
  ]

  forwarding_rule {
    entry_port     = 80
    entry_protocol = "http"

    target_port     = 80
    target_protocol = "http"

  }

  healthcheck {
    port     = 80
    protocol = "http"
    path     = "/healthz"
  }
}

# create a firewall that only accepts port 80 traffic from the load balancer
resource "digitalocean_firewall" "this" {
  name = "django-firewall"

  droplet_ids = [
    for droplet in digitalocean_droplet.django :
    droplet.id
  ]

  inbound_rule {
    protocol         = "tcp"
    port_range       = "22"
    source_addresses = ["0.0.0.0/0"]
  }
  inbound_rule {
    protocol                  = "tcp"
    port_range                = "80"
    source_load_balancer_uids = [digitalocean_loadbalancer.this.id]
  }

  outbound_rule {
    protocol              = "tcp"
    port_range            = "all"
    destination_addresses = ["0.0.0.0/0"]
  }
  outbound_rule {
    protocol              = "udp"
    port_range            = "all"
    destination_addresses = ["0.0.0.0/0"]
  }

}

Commit your changes and push to Github

git add -A
git commit -m "provision droplets"
git push -u origin main

Navigate to the GitHub Actions workflow run log and confirm that the provisioning step succeeded and created a load balancer and a firewall

Navigate to the DigitalOcean Networking page and confirm that a load balancer was created
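
Optionally, you can expose the load balancer's public IP address as a Terraform output so it is easy to look up from the CLI. A small sketch, for example in an outputs.tf file (Step 11 creates one for the database URL), using the ip attribute exported by the digitalocean_loadbalancer resource:

output "load_balancer_ip" {
  description = "the public IP address of the load balancer"
  value       = digitalocean_loadbalancer.this.ip
}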

Step 11 - Creating a PostgreSQL Cluster, Database and User

Create a database.tf file and add the following code to it

resource "digitalocean_database_cluster" "this" {
  name                 = "db-cluster"
  engine               = "pg"
  version              = "14"
  size                 = "db-s-1vcpu-1gb"
  region               = "nyc3"
  private_network_uuid = digitalocean_vpc.this.id
  node_count           = 1
}

# create a firewall that only accepts traffic from the droplets to the cluster
resource "digitalocean_database_firewall" "this" {
  cluster_id = digitalocean_database_cluster.this.id

  dynamic "rule" {
    for_each = digitalocean_droplet.django
    content {
      type  = "droplet"
      value = rule.value.id
    }

  }
}

resource "digitalocean_database_db" "django" {
  cluster_id = digitalocean_database_cluster.this.id
  name       = "django"
}

resource "digitalocean_database_user" "django" {
  cluster_id = digitalocean_database_cluster.this.id
  name       = "django"
}

Create an outputs.tf file and add this code to it

output "database_url" {
  description = "the database url"
  value       = "postgres://${digitalocean_database_user.django.name}:${digitalocean_database_user.django.password}@${digitalocean_database_cluster.this.private_host}:${digitalocean_database_cluster.this.port}/${digitalocean_database_db.django.name}"
  sensitive   = true
}

Commit your changes and push to Github

git add -A
git commit -m "add PostgreSQL database"
git push -u origin main

Navigate to the GitHub Actions workflow run log and confirm that the provisioning succeeded and created a cluster, database and user.

Navigate to the DigitalOcean Databases page and confirm that a database cluster was created

Step 12 - Creating GitHub Database Secrets

Create a GitHub repository secret named DATABASE_URL.
You can retrieve the secret value by running this command locally from within the infra folder.

terraform output database_url
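
If you use the GitHub CLI, you can also create the secret in one step by passing the output value directly; a sketch, assuming gh is authenticated for this repository:

gh secret set DATABASE_URL --body "$(terraform output -raw database_url)"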

Part 3 will explain how to deploy the app to the DigitalOcean Droplets using Ansible and GitHub Actions