03/14/2025 1:05 PM

Cutting Docker Build Times in Half: Optimizing Frontend Builds with Drone and Stage Caching

3 minutes

Docker
Drone

See how we improved our frontend Docker build times by 50% using the Drone Docker plugin and smart caching of dependencies and intermediate stages, making our CI/CD pipeline faster and more efficient.

Ravi

CTO

At InnoPeak we use DroneCI to build our applications and Docker images so that we can deploy them to our DigitalOcean Kubernetes cluster. As Finly's codebase has grown considerably over the past year, we've noticed slowdowns that hurt developer productivity: waiting for the pipelines to complete - running GraphQL codegen, the linter, typechecks and the build - could take up to ~16 minutes, adding an extra wait before anyone could move on to the next feature.

With some fine-tuning using the Drone Docker plugin's cache_from option, and by building specific intermediate stages as their own images, we've managed to cut Docker build times roughly in half while also reducing resource usage during the build.

Looking at the Original State - Underused Multi-Stage Docker Builds

Since Finly's frontend is a Next.js application, we use a Dockerfile based on node:23-alpine to build and run the application. It's split into multiple stages, which lets us - so far only locally - cache steps that don't need to be re-run, such as installing dependencies or running GraphQL codegen when package.json and yarn.lock haven't changed.

Dockerfile
FROM node:23-alpine AS base

# Install dependencies only when needed
FROM base AS deps
RUN apk add libc6-compat

# Install G++ and Python 3 for Sharp
RUN apk add g++ make py3-pip

WORKDIR /app

# Install dependencies based on the preferred package manager
COPY package.json yarn.lock ./
RUN yarn --frozen-lockfile

# Rebuild the source code only when needed
FROM base AS builder

WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .

ENV NODE_OPTIONS=--max_old_space_size=6144

# The codegen command is passed in as a build argument
ARG CODEGEN_SCRIPT
RUN ${CODEGEN_SCRIPT}
RUN yarn build

# Production image, copy all the files and run next
FROM base AS runner
WORKDIR /app

ENV NODE_ENV=production

RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs

COPY --from=builder /app/public ./public

# Set the correct permission for prerender cache
RUN mkdir .next
RUN chown nextjs:nodejs .next

# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static

USER nextjs

EXPOSE 3000

ENV PORT=3000
# Bind to all interfaces
ENV HOSTNAME="0.0.0.0"

CMD ["node", "server.js"]

As you can see, there are four stages. The most interesting one for this post is deps, which only installs packages when package.json or yarn.lock has changed: Docker checks its layer cache for each instruction, and if the copied files are identical to the previous build, it skips the RUN commands entirely and reuses the cached layers.
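You can see this behaviour locally by building just the deps stage on its own (a quick sketch - the finly-deps-local tag is just an example name):

# Build only up to the deps stage of the multi-stage Dockerfile
docker build --target deps -t finly-deps-local .

# Run the same command again without touching package.json or yarn.lock:
# it finishes almost instantly, with every step reported as "Using cache".
docker build --target deps -t finly-deps-local .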

Our .drone.yml uses the Docker plugin to run a dry-run build of our Dockerfile whenever a pull request is opened:

.drone.yml
kind: pipeline
type: kubernetes
name: build dev docker image dry run
steps:
- image: plugins/docker
  name: build image
  settings:
    dockerfile: Dockerfile
    dry_run: true
    registry: "<registry>"
    repo: "<registry>/finly"
trigger:
  branch:
  - develop
  event:
  - pull_request

Using cache_from to Cache Individual Stages

Drone's Docker plugin supports a cache_from setting, which tells Docker to reuse layers that already exist in the referenced images. However, if we just set cache_from to Finly's image, it wouldn't do much: only the final runner stage gets pushed to our registry, and that image contains nothing but the built Next.js output. None of the dependency-installation layers exist in it, so Docker has nothing to reuse.
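Under the hood, the plugin roughly translates this setting into a pull followed by a docker build with the --cache-from flag - something along these lines (a simplified sketch, not the plugin's exact invocation):

# Pull the cache image first so its layers are available locally;
# ignore failures on the very first run when it doesn't exist yet
docker pull <registry>/finly-deps:latest || true

# Tell Docker it may reuse layers from the pulled image
docker build --cache-from <registry>/finly-deps:latest -t <registry>/finly .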

So our solution was to push the intermediate deps stage to its own repository, finly-deps, by adding a second pipeline that sets target: deps so the build stops at that stage:

.drone.yml
type: kubernetes
kind: pipeline
name: build deps
steps:
- image: plugins/docker
  name: build image
  settings:
    cache_from: "<registry>/finly-deps"
    dockerfile: Dockerfile
    registry: "<registry>"
    repo: "<registry>/finly-deps"
    tags:
    - latest
    target: deps
trigger:
  branch:
  - main
  - staging
  - testing
  - develop
  event:
  - push
  - pull_request

We also set cache_from on this pipeline itself, so it stays fast as long as the dependencies don't change: Docker pulls the previously pushed image and skips every step via the cache.
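If you wanted to reproduce what this pipeline does by hand, it would look roughly like this (a sketch of the equivalent docker CLI calls, not the plugin's literal output):

# Reuse the previous deps image as the layer cache
docker pull <registry>/finly-deps:latest || true

# Build only the deps stage and tag it for its own repository
docker build --target deps --cache-from <registry>/finly-deps:latest \
  -t <registry>/finly-deps:latest .

# Publish it so later pipelines (and the next deps build) can reuse it
docker push <registry>/finly-deps:latest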

To make use of this finly-deps image, all we have to do is set cache_from in our build pipeline:

.drone.yml
kind: pipeline
type: kubernetes
name: build dev docker image dry run
steps:
- image: plugins/docker
  name: build image
  settings:
    cache_from: "<registry>/finly-deps"
    dockerfile: Dockerfile
    dry_run: true
    registry: "<registry>"
    repo: "<registry>/finly"
trigger:
  branch:
  - develop
  event:
  - pull_request

That's all! We can verify it works by checking that the pipeline starts by pulling the finly-deps image:

+ /usr/local/bin/docker pull <registry>/finly-deps
Using default tag: latest
latest: Pulling from <registry>/finly-deps

During the build itself, Docker also shows us when it's using the cache for the deps stage:

Status: Downloaded newer image for node:23-alpine
 ---> 4e2b8ab84aec
Step 2/40 : FROM base AS deps
 ---> 4e2b8ab84aec
Step 3/40 : RUN apk add libc6-compat
 ---> Using cache
 ---> 04e418978e16
Step 4/40 : RUN apk add g++ make py3-pip
 ---> Using cache
 ---> 8d0f0eece443
Step 5/40 : WORKDIR /app
 ---> Using cache
 ---> ceb2d59b3ea8
Step 6/40 : COPY package.json yarn.lock* package-lock.json* pnpm-lock.yaml* ./
 ---> Using cache
 ---> e9e4bfdf0ca3
Step 7/40 : RUN   if [ -f yarn.lock ]; then yarn --frozen-lockfile;   elif [ -f package-lock.json ]; then npm ci;   elif [ -f pnpm-lock.yaml ]; then yarn global add pnpm && pnpm i --frozen-lockfile;   else echo "Lockfile not found." && exit 1;   fi
 ---> Using cache
 ---> eb19256c592e

And that's it! Thanks to multi-stage builds and the Drone Docker plugin's ability to push specific targets to a registry, we can cache the stages that change infrequently but take a long time to run. This speeds up the pipelines, gives developers a faster feedback loop to fix lint, type and build issues, and cuts down on resource consumption.
