In this task, we’ll create a Docker container for the Posts API. Containerization allows us to package our application and its dependencies into a standardized unit for software development and deployment.
Docker is an open-source platform that enables developers to build, deploy, and run applications inside containers. A container is a lightweight, standalone, executable package that includes everything needed to run an application, including the code, runtime, system tools, and libraries.
Using Docker offers several benefits:
To get started with Docker, you’ll need to install Docker Desktop on your machine. Follow the official installation instructions for your operating system: https://docs.docker.com/get-docker/.
When working with Docker, there are a few terms you should be familiar with:
Currently, the API server always listens on port 3000. Let's make this dynamically configurable so that the application automatically listens on the port specified by the `PORT` environment variable.

To do so, open the `api/src/server.ts` file. Inside, change the port configuration to read the `PORT` environment variable, leaving 3000 as the fallback value:
import { serve } from "@hono/node-server";
import app from "./app";
const port = Number(process.env.PORT) || 3000; // 👈 Look here
console.log(`Server is running on port ${port}`);
serve({
fetch: app.fetch,
port,
});
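The same default-with-override idea can be checked from the shell: the parameter expansion `${PORT:-3000}` mirrors the spirit of `process.env.PORT || 3000`. A minimal standalone sketch (not project code):

```shell
# Use $PORT if set and non-empty, otherwise fall back to 3000
unset PORT
echo "port ${PORT:-3000}"   # → port 3000

PORT=4000
echo "port ${PORT:-3000}"   # → port 4000
```

Note the shapes differ slightly: the JavaScript version also falls back when `PORT` is not a valid number, while the shell expansion only falls back when the variable is unset or empty.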
Likewise, we will update the database filename to use the `DB_FILE` environment variable. Open the `api/src/db/index.ts` file and change the database connection configuration to read the `DB_FILE` environment variable, leaving the default value as `sqlite.db`:
import { drizzle } from "drizzle-orm/better-sqlite3";
import Database from "better-sqlite3";
import * as schema from "./schema";
// Get the database file from the environment or use the default
const DATABASE_FILE = process.env.DB_FILE || "sqlite.db"; // 👈 Look here
// Initialize the SQLite database and export the connection
export const connection = new Database(DATABASE_FILE); // 👈 Look here
// Create the database and export it
export const db = drizzle(connection, { schema });
Currently, we start the API server using `pnpm run dev`, which runs the server in watch mode. This is helpful during development but not suitable for production. Let's define a new script in the `package.json` file to start the server in production mode:
{
"scripts": {
"start": "tsx src/server.ts",
"dev": "tsx watch src/server.ts",
// Other scripts...
},
}
Here, we’ve used the included `tsx` command to run the `src/server.ts` file (similar to the `dev` script, but without the `watch`). This is enough to start the application. Alternatively, we could compile the TypeScript code to JavaScript using `tsc` and then run the compiled JavaScript file.
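For that compile-then-run alternative, a hypothetical sketch (it assumes a `tsconfig.json` that emits JavaScript to `dist/`; the exact output path depends on your config):

```shell
# Compile TypeScript to JavaScript, then run the output with plain Node
pnpm exec tsc
node dist/server.js
```

The advantage is that the production container no longer needs `tsx` at runtime; the trade-off is an extra build step.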
## `Dockerfile` for the API

Create a Dockerfile in the `api` directory. This file will contain instructions for building our Docker image.

1. Navigate to the `api` directory in your terminal.
2. Create a new file named `Dockerfile` (no file extension).
3. Add the following content to the `Dockerfile`:

# Use an official Node.js runtime as the base image
FROM node:lts-alpine
# Set the working directory in the container
WORKDIR /app
# Install curl for healthcheck
RUN apk add --no-cache curl
# Create the data directory to setup volumes for the database
RUN mkdir -p /app/data
# Copy package.json and pnpm-lock.yaml
COPY package.json pnpm-lock.yaml ./
# Install pnpm
RUN npm install -g pnpm
# Install dependencies
RUN pnpm install
# Copy the rest of the application code
COPY . .
# Build the database
ARG DB_FILE=/app/data/sqlite.db
ENV DB_FILE=${DB_FILE}
RUN pnpm db:generate
RUN pnpm db:migrate
# We don't seed the database in production!
# Expose the port the app runs on
ARG PORT=3000
ENV PORT=${PORT}
EXPOSE ${PORT}
# Command to run the application
CMD ["pnpm", "run", "start"]
This Dockerfile does the following:

- Sets the working directory to `/app` in the container where the application code will be copied. You can name this directory anything you like.
- Installs `curl` for health checks. This is useful for monitoring the health of the containerized application. We’ll use this later.
- Copies the `package.json` and `pnpm-lock.yaml` files and installs dependencies. This step is done separately from copying the rest of the application code to leverage Docker’s cache during builds (more on this later). The `./` indicates the current working directory inside the container (`/app`).
- Copies the rest of the application code, excluding anything listed in the `.dockerignore` file (if present). We will define a `.dockerignore` file in the next section.
- Builds the database using the `pnpm db:generate` and `pnpm db:migrate` commands. Notice that we use the environment variable `DB_FILE` to configure the database file. I have set it to `/app/data/sqlite.db`.
- Like the `DB_FILE` environment variable, we use the `PORT` argument to configure the port number. This allows us to specify the port when running the container.
- Finally, starts the application using `pnpm start`.

Docker images are built in layers, each layer representing a specific set of changes to the base image. The `FROM` instruction in the Dockerfile specifies the base image from which the new image will be built. In this case, we are using an official Node.js image as the base image.
There are different versions and variants of the Node.js image available on Docker Hub, each optimized for different use cases. The `lts-alpine` tag refers to the “long term support” version of Node.js with Alpine Linux as the base operating system. Alpine Linux is a lightweight Linux distribution that provides a smaller image size and faster build times compared to other distributions.
The `COPY package.json pnpm-lock.yaml ./` instruction is a common pattern used in Dockerfiles to optimize the build process by leveraging Docker’s cache mechanism. When building a Docker image, each instruction in the Dockerfile creates a new layer in the image. Docker caches the layers to speed up subsequent builds by reusing previously built layers when the source code has not changed.
By copying the `package.json` and `pnpm-lock.yaml` files separately from the rest of the application code, we ensure that the dependencies are reinstalled only when these files change. If the application code changes but the dependencies remain the same, Docker will reuse the cached layer with the installed dependencies, saving time during the build process.

This pattern is particularly useful when working with Node.js applications, as the dependencies are typically defined in the `package.json` file and are less likely to change frequently compared to the application code. By copying the dependency files separately, we can take advantage of Docker’s cache and avoid reinstalling dependencies unnecessarily.
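You can see the cache at work by rebuilding after changing only application code; the dependency layers should be reused. Illustrative commands (run from the `api` directory, output abbreviated):

```shell
# Change only application code, then rebuild
touch src/app.ts
docker build -t posts-api .
# In the build output, the "COPY package.json pnpm-lock.yaml ./" and
# "RUN pnpm install" steps should be marked CACHED, while "COPY . ."
# and everything after it re-run.
```

If you instead touch `package.json`, the cache is invalidated from the `COPY package.json pnpm-lock.yaml ./` layer onward and dependencies are reinstalled.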
## `CMD` vs. `RUN` Instructions

The `CMD` instruction specifies the command to run when the container starts, while the `RUN` instruction executes a command during the build process to set up the environment or install dependencies. The `RUN` instruction is used to install dependencies, set up the application, or perform other tasks that are necessary for the container to run but are not part of the runtime behavior.

In contrast, the `CMD` instruction defines the default command to run when the container starts. In shell form, this command is executed in the container’s default shell (usually `/bin/sh -c`); in the exec (JSON array) form we use here, it runs directly without a shell. Either way, it can be overridden when running the container to provide different behavior. The `CMD` instruction is typically used to start the application or process that the container is designed to run.
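Because `CMD` only sets a default, it can be swapped out at `docker run` time without rebuilding the image. For example (any command available inside the image works; this assumes the `posts-api` image built later in this task):

```shell
# Override the default CMD ("pnpm run start") for a one-off command
docker run --rm posts-api node --version
```

The `--rm` flag removes the container again once the command exits, which is handy for throwaway runs like this.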
## The `.dockerignore` file

Similar to the `.gitignore` file, the `.dockerignore` file specifies which files and directories should be excluded when building a Docker image. This helps reduce the size of the image and speeds up the build process by avoiding unnecessary files.

Add the following content to the `.dockerignore` file in the `api` directory:
# Ignore environment settings
.env
# Ignore node modules
node_modules/
# Ignore logs
logs
*.log
npm-debug.log*
# Ignore version control directories
.git
.gitignore
# Ignore build output directories
/dist
/out
# Ignore temporary files
*.tmp
*.temp
# Ignore configuration files that are not needed
.dockerignore
Dockerfile
*.md
LICENSE
# Local database files
*.db
In the `.dockerignore` file, we specify the files and directories that should be ignored during the Docker build process. This includes environment settings (`.env`), node modules (`node_modules/`), logs, version control directories (`.git`), build output directories (`dist`, `out`), temporary files, and configuration files that are not needed in the Docker image. We also exclude local database files (`*.db`) that are not required for the Docker image. This ensures your local development database is not included in the production image.
Now that we have our Dockerfile, let’s build and test our Docker image.
Open a terminal window and navigate to the `api` directory.
cd api
Build the Docker image:
docker build -t posts-api .
This command builds a Docker image named `posts-api` based on the instructions in the `Dockerfile` located in the current directory (`.`). The `-t` flag is used to tag the image with a name (`posts-api` in this case) to identify it later.
If you get an error message about “Cannot connect to the Docker daemon”, make sure Docker Desktop is running on your machine. Here is an example output of the build process:
[+] Building 10.6s (16/16) FINISHED docker:desktop-linux
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 829B 0.0s
=> [internal] load metadata for docker.io/library/node:lts-alpine 0.4s
=> [auth] library/node:pull token for registry-1.docker.io 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 410B 0.0s
=> [ 1/10] FROM docker.io/library/node:lts-alpine@sha256:eb8101caae9ac02229bd64c024919fe3d4504ff7f329da79ca60a04db08ce 0.0s
=> => resolve docker.io/library/node:lts-alpine@sha256:eb8101caae9ac02229bd64c024919fe3d4504ff7f329da79ca60a04db08cef5 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 1.20kB 0.0s
=> CACHED [ 2/10] WORKDIR /app 0.0s
=> CACHED [ 3/10] RUN apk add --no-cache curl 0.0s
=> [ 4/10] RUN mkdir -p /app/data 0.1s
=> [ 5/10] COPY package.json pnpm-lock.yaml ./ 0.0s
=> [ 6/10] RUN npm install -g pnpm 1.6s
=> [ 7/10] RUN pnpm install 2.2s
=> [ 8/10] COPY . . 0.1s
=> [ 9/10] RUN pnpm db:generate 0.6s
=> [10/10] RUN pnpm db:migrate 0.7s
=> exporting to image 4.7s
=> => exporting layers 3.1s
=> => exporting manifest sha256:cc1f5d4f170ba03ec091d6fdd43aabbe64eac06d09da94fbd3465613b6b3dd18 0.0s
=> => exporting config sha256:2e896b47278a62f30e7f5432b9ccd3c540bd403fbcd233c72724bb332c7425a6 0.0s
=> => exporting attestation manifest sha256:d73f0f272749f486151b77b48b9c24d781e2561606e048cab26438e7642263ef 0.0s
=> => exporting manifest list sha256:25c35fe4862bc13c294a2702a39b8528160e16cbdbd40c3b7393af39b81acef3 0.0s
=> => naming to docker.io/library/posts-api:latest 0.0s
=> => unpacking to docker.io/library/posts-api:latest 1.6s
View build details: docker-desktop://dashboard/build/desktop-linux/desktop-linux/aea8td6vu1wyv2dbrb94a25wf
What's next:
View a summary of image vulnerabilities and recommendations → docker scout quickview
By the way, you can override the environment variables `PORT` and `DB_FILE` during the build process by passing them as build arguments:
docker build -t posts-api --build-arg PORT=4000 --build-arg DB_FILE=posts.db .
We will use the values specified in the Dockerfile for now.
Run the Docker container:
docker run -p 3000:3000 --name posts-api-container posts-api
This command runs a Docker container based on the `posts-api` image and maps port 3000 on the host machine to port 3000 inside the container. The `--name` flag assigns a name (`posts-api-container`) to the container for easy reference.

The `-p 3000:3000` flag might seem strange at first, but it’s a common pattern in Docker to map ports from the host machine to the container. The syntax is `-p hostPort:containerPort`, where `hostPort` is the port on the host machine and `containerPort` is the port inside the container.
We know the API server listens on port 3000, so we map port 3000 on the host machine to port 3000 inside the container. This allows us to access the API server running inside the container from our host machine.
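With the container running, you can quickly check the mapping from the host before switching to Postman, for example with `curl` (the root route should answer as it did in earlier tasks):

```shell
# Request the root endpoint through the mapped host port
curl http://localhost:3000/
```

If this returns the greeting, the port mapping is working and the rest of the endpoints can be tested the same way.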
Test the API: Open Postman and test the API endpoints as you did before. You should be able to interact with the API running inside the Docker container.

- Request `http://localhost:3000/` to get the “Hello Hono!” message.
- Request `http://localhost:3000/sign-up` to sign up a new user. This should also sign in the user.
- Request `http://localhost:3000/sign-out` to sign out the user.
- Request `http://localhost:3000/sign-in` to sign in the user again.
- Exercise the `/posts` endpoint.
- Exercise the `/posts/:postId/comments` endpoint.

If everything is set up correctly, you should be able to interact with the API as you did before.
Stop and remove the container: Open the terminal and run the following commands to stop and remove the container.
docker stop posts-api-container
docker rm posts-api-container
Congratulations! You’ve successfully dockerized the Posts API. In the next task, we’ll dockerize the web application.
In this task, you learned how to create a Docker container for the Posts API. Containerization allows you to package your application and its dependencies into a standardized unit for software development and deployment. By following the steps outlined in this task, you’ve gained hands-on experience with Docker and learned how to build, run, and test a Docker container for the API server.