At InnoPeak, we recently launched Finly’s new landing page, built with Next.js, PayloadCMS, and HeroUI. To ensure a smooth deployment, we leveraged Drone CI, Terraform, and Kubernetes, along with PostgreSQL and DigitalOcean Spaces for storage. In this post, we’ll walk through how we set up our infrastructure and automated the deployment.
We built Finly's landing page using Next.js, PayloadCMS, and HeroUI, as it’s one of the fastest and most popular stacks for creating performant, modern landing pages. With PayloadCMS, we can provide a user-friendly content management experience for editors while maintaining a highly developer-friendly stack.
When it came time to deploy the website, instead of opting for Vercel's hosting plans—the go-to choice in the Next.js world—we decided to deploy it on our own DigitalOcean-based Kubernetes infrastructure, where we also host Finly.
To make the deployment process as automated and seamless as possible, we used Drone CI to build the Next.js Docker image, run migrations, and deploy the container using Terraform. In this post, we’ll walk through the pipeline files, Dockerfile, and Terraform module we built to achieve this.
Next.js provides solid examples of how to build a Dockerfile that excludes development dependencies from the final production image while also generating a standalone build, which lets us run the Next.js server with Node.js inside a Docker container. For that we use multi-stage builds, which instruct Docker to keep only the final stage in the image and use the earlier stages for build steps like downloading dependencies or building the app itself. Head over to their Docker example repository and copy the Dockerfile.
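For orientation, the overall stage layout of that example looks roughly like this (an abridged sketch; the exact base image, package manager commands, and user setup come from the Next.js example repository and may differ in your copy):

# Shared base image for every stage
FROM node:22-alpine AS base

# Stage 1: install dependencies only
FROM base AS deps
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile

# Stage 2: build the app (the stage we extend below)
FROM base AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN yarn build

# Stage 3: minimal runtime image containing only the standalone output
FROM base AS runner
WORKDIR /app
ENV NODE_ENV=production
COPY --from=builder /app/public ./public
COPY --from=builder /app/.next/standalone ./
COPY --from=builder /app/.next/static ./.next/static
EXPOSE 3000
CMD ["node", "server.js"]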
Based on that, we add some specific commands required to generate PayloadCMS's importmap (a file it uses to locate components and other source files required by the admin panel), as well as to generate types and run migrations to update our database schema. These steps are added in the builder stage, along with environment variables that let the Payload CLI connect to PostgreSQL.
FROM base AS builder

# Payload secret required by the Payload CLI during the build
ARG PAYLOAD_SECRET
ENV PAYLOAD_SECRET=${PAYLOAD_SECRET}

# PostgreSQL connection details, interpolated into the connection string below
ARG POSTGRES_USER
ARG POSTGRES_PASSWORD
ARG POSTGRES_DB
ARG POSTGRES_HOST
ARG POSTGRES_PORT
ENV DATABASE_URI="postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}?sslmode=require"

# DigitalOcean Spaces (S3-compatible) configuration
ARG S3_BUCKET
ENV S3_BUCKET=${S3_BUCKET}
ARG S3_ACCESS_KEY
ENV S3_ACCESS_KEY=${S3_ACCESS_KEY}
ARG S3_SECRET_ACCESS_KEY
ENV S3_SECRET_ACCESS_KEY=${S3_SECRET_ACCESS_KEY}
ARG S3_REGION
ENV S3_REGION=${S3_REGION}
ARG S3_ENDPOINT
ENV S3_ENDPOINT=${S3_ENDPOINT}

WORKDIR /app
# Reuse the dependencies installed in the deps stage
COPY --from=deps /app/node_modules ./node_modules

COPY . .

# Generate Payload's importmap and TypeScript types
RUN yarn generate:importmap
RUN yarn generate:types

# Apply database migrations against the configured PostgreSQL instance
RUN yarn payload migrate

RUN yarn build
As you can see, we generate the importmap and types and then immediately run the migrations. This is why we configure the environment variables, so Payload's CLI can connect to PostgreSQL. While the S3 configuration isn't strictly necessary at build time, it ensures consistency with our Kubernetes (K8s) deployment.
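For reference, those yarn scripts simply wrap Payload's CLI; a sketch of the package.json entries we assume here (the script names match the Dockerfile, and the commands are provided by Payload 3's CLI):

{
  "scripts": {
    "build": "next build",
    "generate:importmap": "payload generate:importmap",
    "generate:types": "payload generate:types",
    "payload": "payload"
  }
}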
With ARG we define build arguments and set them as environment variables using the ENV keyword. ENV also supports interpolation, which is how we build the PostgreSQL connection string. These build arguments will later be defined in the Drone pipeline when building the image with the Docker plugin.
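For completeness, outside of Drone the same arguments could be supplied by hand with docker build; a quick sketch with placeholder values (S3 arguments omitted for brevity, and note that the database must already be reachable since migrations run during the build):

# Build the image locally with placeholder build arguments
docker build \
  --build-arg PAYLOAD_SECRET=dev-secret \
  --build-arg POSTGRES_USER=finly \
  --build-arg POSTGRES_PASSWORD=changeme \
  --build-arg POSTGRES_DB=finly_landing_page \
  --build-arg POSTGRES_HOST=db.example.internal \
  --build-arg POSTGRES_PORT=5432 \
  -t finly-landing-page:local .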
Before we can even build the Docker image, we need to provision key infrastructure components using Terraform. This includes setting up the PostgreSQL database and DigitalOcean Spaces bucket, which are essential for running migrations and configuring the PayloadCMS backend. Once these resources are in place, we can proceed with building the image using the Drone Docker plugin, which we’ll cover in the next section.
To achieve this, we define the following resources in our Terraform module:
1provider "digitalocean" {
2 token = var.do_token
3
4 spaces_access_id = var.do_spaces_access_id
5 spaces_secret_key = var.do_spaces_secret_key
6}
7
8data "digitalocean_database_cluster" "dikurium" {
9 name = "<name>"
10}
11
12provider "postgresql" {
13 host = data.digitalocean_database_cluster.dikurium.host
14 port = data.digitalocean_database_cluster.dikurium.port
15 database = data.digitalocean_database_cluster.dikurium.database
16 username = data.digitalocean_database_cluster.dikurium.user
17 password = data.digitalocean_database_cluster.dikurium.password
18 sslmode = "require"
19 connect_timeout = 15
20 superuser = false
21}
22
23resource "postgresql_role" "finly_landing_page" {
24 name = var.postgres_user
25 login = true
26 password = var.postgres_password
27}
28
29resource "postgresql_database" "finly_landing_page" {
30 name = var.postgres_database
31 owner = postgresql_role.finly_landing_page.name
32}
33
34resource "digitalocean_spaces_bucket" "finly_landing_page" {
35 name = "finly-landing-page"
36 region = "fra1"
37}
38
39resource "digitalocean_spaces_bucket_cors_configuration" "finly_landing_page" {
40 bucket = digitalocean_spaces_bucket.finly_landing_page.id
41 region = digitalocean_spaces_bucket.finly_landing_page.region
42
43 cors_rule {
44 allowed_headers = ["*"]
45 allowed_methods = ["GET", "PUT", "DELETE", "HEAD", "POST"]
46 allowed_origins = ["https://${local.domain}", "https://www.${local.domain}"]
47 max_age_seconds = 3000
48 }
49}
We use the cyrilgdn/postgresql Terraform provider to connect to our cluster and provision the database, as it allows us to create new PostgreSQL roles and directly specify the database owner, something DigitalOcean's provider does not support. If you're using AWS, Azure, or another cloud, you may be able to use their native Terraform providers instead.
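For completeness, both providers (plus the Kubernetes and random providers used further below) need to be declared in the module; a minimal required_providers block might look like this (version constraints are illustrative):

terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 2.0"
    }
    postgresql = {
      source  = "cyrilgdn/postgresql"
      version = "~> 1.21"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.0"
    }
    random = {
      source  = "hashicorp/random"
      version = "~> 3.0"
    }
  }
}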
For the Spaces bucket, we also configure CORS so that the frontend can access and display files properly on our landing page.
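On the application side, these S3 variables are what Payload's storage adapter reads; a sketch of how the wiring might look in payload.config.ts, assuming the @payloadcms/storage-s3 and @payloadcms/db-postgres packages and an upload collection named media (all of which are assumptions, not shown elsewhere in this post):

import { buildConfig } from 'payload'
import { postgresAdapter } from '@payloadcms/db-postgres'
import { lexicalEditor } from '@payloadcms/richtext-lexical'
import { s3Storage } from '@payloadcms/storage-s3'

export default buildConfig({
  secret: process.env.PAYLOAD_SECRET || '',
  editor: lexicalEditor(),
  // Collections (including the assumed `media` upload collection) omitted for brevity.
  collections: [],
  // The same DATABASE_URI that the Dockerfile and the ConfigMap provide.
  db: postgresAdapter({
    pool: {
      connectionString: process.env.DATABASE_URI,
    },
  }),
  plugins: [
    s3Storage({
      // Route uploads from the assumed `media` collection to the Spaces bucket.
      collections: {
        media: true,
      },
      bucket: process.env.S3_BUCKET || '',
      config: {
        credentials: {
          accessKeyId: process.env.S3_ACCESS_KEY || '',
          secretAccessKey: process.env.S3_SECRET_ACCESS_KEY || '',
        },
        region: process.env.S3_REGION,
        endpoint: process.env.S3_ENDPOINT,
      },
    }),
  ],
})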
Once the Terraform module is ready, we push it to our Git provider so that Drone can trigger the deployment (you’ll find the Drone pipeline configuration in the last section) before moving on to the next step.
Once these resources are provisioned, we add the Kubernetes deployment, service, and ingress configuration to deploy the Next.js app and expose port 3000 to the configured domain.
1resource "kubernetes_namespace" "finly_landing_page" {
2 metadata {
3 name = "finly-landing-page"
4 }
5}
6
7resource "kubernetes_deployment" "finly_landing_page" {
8 metadata {
9 name = "finly-landing-page"
10 namespace = kubernetes_namespace.finly_landing_page.metadata.0.name
11 }
12 spec {
13 replicas = 1
14 selector {
15 match_labels = local.match_labels
16 }
17 template {
18 metadata {
19 labels = local.labels
20 annotations = {
21 "dikurium.ch/last-updated" = timestamp()
22 }
23 }
24 spec {
25 image_pull_secrets {
26 name = kubernetes_secret.registry_auth.metadata.0.name
27 }
28 container {
29 image = "${var.registry}/${var.image_repository}:${var.image_tag}"
30 name = "finly-landing-page"
31 image_pull_policy = var.image_pull_policy
32 port {
33 container_port = 3000
34 name = "http"
35 }
36 env_from {
37 config_map_ref {
38 name = kubernetes_config_map.finly_landing_page.metadata.0.name
39 }
40 }
41 }
42 }
43 }
44 }
45}
46
47resource "kubernetes_service" "finly_landing_page" {
48 metadata {
49 name = "finly-landing-page"
50 namespace = kubernetes_namespace.finly_landing_page.metadata.0.name
51 }
52 spec {
53 selector = local.match_labels
54 type = "ClusterIP"
55 port {
56 port = 80
57 target_port = "http"
58 name = "http"
59 }
60 }
61}
62
63resource "kubernetes_ingress_v1" "finly_landing_page" {
64 metadata {
65 name = "finly-landing-page"
66 namespace = kubernetes_namespace.finly_landing_page.metadata.0.name
67 annotations = {
68 "cert-manager.io/cluster-issuer" = var.cluster_issuer_name
69 "nginx.ingress.kubernetes.io/proxy-body-size" = "50m"
70 }
71 }
72 spec {
73 ingress_class_name = "nginx"
74 rule {
75 host = local.domain
76 http {
77 path {
78 backend {
79 service {
80 name = kubernetes_service.finly_landing_page.metadata.0.name
81 port {
82 name = "http"
83 }
84 }
85 }
86 path = "/"
87 path_type = "Prefix"
88 }
89 }
90 }
91
92 rule {
93 host = "www.${local.domain}"
94 http {
95 path {
96 backend {
97 service {
98 name = kubernetes_service.finly_landing_page.metadata.0.name
99 port {
100 name = "http"
101 }
102 }
103 }
104 path = "/"
105 path_type = "Prefix"
106 }
107 }
108 }
109
110 tls {
111 secret_name = "finly-landing-page-tls"
112 hosts = [local.domain, "www.${local.domain}"]
113 }
114 }
115
116 depends_on = [kubernetes_namespace.finly_landing_page]
117}
118
119resource "random_id" "payload_secret" {
120 byte_length = 32
121}
122
123resource "kubernetes_config_map" "finly_landing_page" {
124 metadata {
125 name = "finly-landing-page"
126 namespace = kubernetes_namespace.finly_landing_page.metadata.0.name
127 }
128 data = {
129 PAYLOAD_SECRET = random_id.payload_secret.hex
130 DATABASE_URI = "postgresql://${postgresql_role.finly_landing_page.name}:${postgresql_role.finly_landing_page.password}@${data.digitalocean_database_cluster.dikurium.host}:${data.digitalocean_database_cluster.dikurium.port}/${postgresql_database.finly_landing_page.name}?sslmode=require"
131 S3_ACCESS_KEY = var.do_spaces_access_id
132 S3_SECRET_ACCESS_KEY = var.do_spaces_secret_key
133 S3_REGION = digitalocean_spaces_bucket.finly_landing_page.region
134 S3_BUCKET = digitalocean_spaces_bucket.finly_landing_page.name
135 S3_ENDPOINT = "https://${digitalocean_spaces_bucket.finly_landing_page.region}.digitaloceanspaces.com"
136 }
137}
138
139resource "kubernetes_secret" "registry_auth" {
140 metadata {
141 name = "registry-auth-secret"
142 namespace = kubernetes_namespace.finly_landing_page.metadata.0.name
143 }
144
145 data = {
146 ".dockerconfigjson" = jsonencode({
147 "auths" = {
148 "${var.registry}" = {
149 "auth" = base64encode("${var.registry_username}:${var.registry_password}")
150 }
151 },
152 "credsStore" = "",
153 "credHelpers" = {}
154 })
155 }
156
157 type = "kubernetes.io/dockerconfigjson"
158}
Most of the configuration above is a standard Kubernetes deployment: our image, a service, and an ingress for networking. One interesting detail, however, is the "dikurium.ch/last-updated" = timestamp() annotation, which lets us reapply the configuration multiple times. It ensures that if a new image is published under the latest tag, the pods are automatically recreated, forcing a fresh pull of the updated image.
Finally, we configure a registry-auth-secret to authenticate against our private registry and pull the image.
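The module also references a handful of input variables and locals that we haven't shown; for reference, a sketch of how we declare them (the names are taken from the resources above, while the domain, label values, and defaults are placeholders):

locals {
  # Placeholder; the real value is the landing page's domain
  domain = "landing.example.com"

  # Example label set; anything consistent between selector and pod template works
  match_labels = {
    "app.kubernetes.io/name" = "finly-landing-page"
  }
  labels = local.match_labels
}

variable "do_token" {
  type      = string
  sensitive = true
}

variable "do_spaces_access_id" {
  type      = string
  sensitive = true
}

variable "do_spaces_secret_key" {
  type      = string
  sensitive = true
}

variable "postgres_user" {
  type = string
}

variable "postgres_password" {
  type      = string
  sensitive = true
}

variable "postgres_database" {
  type = string
}

variable "registry" {
  type = string
}

variable "registry_username" {
  type = string
}

variable "registry_password" {
  type      = string
  sensitive = true
}

variable "image_repository" {
  type = string
}

variable "image_tag" {
  type    = string
  default = "latest"
}

variable "image_pull_policy" {
  type    = string
  default = "Always"
}

variable "cluster_issuer_name" {
  type = string
}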
To automate the Docker image build and provision the required infrastructure, we use Drone CI to run the build and plan the Terraform deployment on every push. When a build is promoted to production, Drone also applies the Terraform changes.
To plan the deployment, we use the hashicorp/terraform image and run the terraform plan command.
kind: pipeline
name: run terraform plan on main
type: kubernetes
steps:
  - image: hashicorp/terraform:latest
    name: terraform plan
    commands:
      - cd deploy
      - terraform init
      - terraform workspace select -or-create default
      - terraform plan
    environment:
      TF_VAR_do_token:
        from_secret: digitalocean_token
trigger:
  branch:
    - main
  event:
    - push
Since our Terraform module requires variables to configure providers, set the image registry and credentials, and so on, we use environment variables prefixed with TF_VAR_ to pass those values dynamically. For secrets, Drone's built-in secret management ensures they are provided securely.
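In practice, each module variable gets a matching TF_VAR_ entry in the step's environment; a sketch of how the remaining values could be mapped (the secret names on the right are placeholders, apart from digitalocean_token, gitea_username, and gitea_password, which we already use in our pipelines):

    environment:
      TF_VAR_do_token:
        from_secret: digitalocean_token
      TF_VAR_do_spaces_access_id:
        from_secret: do_spaces_access_id
      TF_VAR_do_spaces_secret_key:
        from_secret: do_spaces_secret_key
      TF_VAR_postgres_user:
        from_secret: finly_postgres_user
      TF_VAR_postgres_password:
        from_secret: finly_postgres_password
      TF_VAR_registry_username:
        from_secret: gitea_username
      TF_VAR_registry_password:
        from_secret: gitea_password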
Once the plan is generated, we execute it only when a developer promotes the build to production, which can be done from the Drone GUI after a pipeline has completed successfully.
kind: pipeline
name: run terraform apply on main
type: kubernetes
steps:
  - name: terraform apply
    image: hashicorp/terraform:latest
    commands:
      - cd deploy
      - terraform init
      - terraform workspace select -or-create default
      - terraform apply -auto-approve
    environment:
      TF_VAR_do_token:
        from_secret: digitalocean_token
trigger:
  event:
    - promote
  target:
    - production
That's all it takes to apply the Terraform configuration on our infrastructure using a manual confirmation step so we can ensure the planned resource configuration is correct.
With the initial resources (such as the PostgreSQL database) now provisioned, we move on to adding the Drone pipeline that builds our Docker image. For this, we use the Drone Docker plugin, which lets us configure auto_tag: true, ensuring the latest tag is set on main and version tags are used when tagging commits. The build arguments are passed as environment variables, which we referenced earlier with the ARG keyword in our Dockerfile:
kind: pipeline
name: build main docker image
type: kubernetes
steps:
  - image: plugins/docker
    name: build image
    settings:
      auto_tag: true
      dockerfile: Dockerfile
      registry: ...
      repo: ...
      build_args_from_env:
        - PAYLOAD_SECRET
      username:
        from_secret: gitea_username
      password:
        from_secret: gitea_password
    environment:
      PAYLOAD_SECRET:
        from_secret: payload_secret
trigger:
  branch:
    - main
  event:
    - push
With the image built and stored in our registry, we add the deployment configuration for our app in the Terraform module, ensuring that the app is deployed alongside the provisioned database and S3 storage.
By combining Next.js, PayloadCMS, and HeroUI, we built a modern and performant landing page. Instead of relying on Vercel’s hosting, we deployed the site to our Kubernetes infrastructure on DigitalOcean, fully automating the process with Drone CI and Terraform.
This setup allows us to deploy new changes seamlessly, ensuring that infrastructure updates, database migrations, and Docker image builds all happen in a structured and automated manner.
Going forward, we can extend this pipeline to support staging environments, add monitoring, or even integrate zero-downtime deployments. With this foundation in place, deploying Finly’s landing page is now as simple as pushing a new commit. 🚀