In this task, we’ll create a Docker container for the Posts API. Containerization allows us to package our application and its dependencies into a standardized unit for software development and deployment.
Docker is an open-source platform that enables developers to build, deploy, and run applications inside containers. A container is a lightweight, standalone, executable package that includes everything needed to run an application, including the code, runtime, system tools, and libraries.
Using Docker offers several benefits:

- **Consistency**: the application runs the same way in every environment, eliminating "works on my machine" problems.
- **Isolation**: each container runs independently, so one application's dependencies don't conflict with another's.
- **Portability**: a container image runs anywhere Docker is available, from a laptop to a cloud server.
- **Reproducibility**: the image captures the exact runtime, dependencies, and configuration needed to run the application.
To get started with Docker, you’ll need to install Docker Desktop on your machine. Follow the official installation instructions for your operating system: https://docs.docker.com/get-docker/.
When working with Docker, there are a few terms you should be familiar with:

- **Image**: a read-only template containing the application code, runtime, and dependencies, used to create containers.
- **Container**: a running instance of an image.
- **Dockerfile**: a text file with instructions for building an image.
- **Registry**: a service, such as Docker Hub, for storing and distributing images.
Currently, the API server always listens on port 3000. Let's make this dynamically configurable so that the application automatically listens on the port specified by the `PORT` environment variable.

To do so, open the `api/src/server.ts` file. Inside, change the port configuration to look at the `PORT` environment variable, leaving 3000 as the fallback value:
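A minimal sketch of the change (the surrounding Hono server setup is assumed and omitted; names like `app` are placeholders):

```typescript
// Hypothetical excerpt of api/src/server.ts: derive the listen port from
// the PORT environment variable, keeping 3000 as the fallback.
// Number(undefined) is NaN, which is falsy, so || also covers the unset case.
const port = Number(process.env.PORT) || 3000;

console.log(`API server will listen on port ${port}`);

// The port is then passed to the server, e.g. with @hono/node-server:
// serve({ fetch: app.fetch, port });
```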
Likewise, we will update the database filename to use the `DB_FILE` environment variable. Open the `api/src/db/index.ts` file and change the database connection configuration to look at the `DB_FILE` environment variable, leaving the default value as `sqlite.db`:
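A sketch of this change as well (the actual driver setup in your `db/index.ts` may look different):

```typescript
// Hypothetical excerpt of api/src/db/index.ts: read the database file path
// from the DB_FILE environment variable, defaulting to sqlite.db.
const dbFile = process.env.DB_FILE ?? "sqlite.db";

console.log(`Using database file: ${dbFile}`);

// The path is then handed to the SQLite driver, e.g.:
// const sqlite = new Database(dbFile);
```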
Currently, we start the API server using `pnpm run dev`, which runs the server in watch mode. This is helpful during development but not suitable for production. Let's define a new script in the `package.json` file to start the server in production mode:
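A sketch of the relevant `scripts` entries; the exact `dev` script in your project may differ:

```json
{
  "scripts": {
    "dev": "tsx watch src/server.ts",
    "start": "tsx src/server.ts"
  }
}
```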
Here, we've used the included `tsx` command to run the `src/server.ts` file (similar to the `dev` script, but without the `watch`). This is enough to start the application. Alternatively, we could compile the TypeScript code to JavaScript using `tsc` and then run the compiled JavaScript file.
### `Dockerfile` for the API

Create a `Dockerfile` in the `api` directory. This file will contain instructions for building our Docker image.
1. Navigate to the `api` directory in your terminal.
2. Create a new file named `Dockerfile` (no file extension).
3. Add the following content to the `Dockerfile`:

This Dockerfile does the following:
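Here is a sketch of such a Dockerfile, consistent with the points that follow. It assumes pnpm is enabled via corepack and that `PORT` and `DB_FILE` are exposed as build arguments; your exact file may differ:

```dockerfile
# Base image: Node.js LTS on Alpine Linux
FROM node:lts-alpine

# Working directory inside the container
WORKDIR /app

# curl is used later for container health checks
RUN apk add --no-cache curl

# Assumption: pnpm is provisioned through corepack
RUN corepack enable

# Install dependencies first to leverage Docker's layer cache
COPY package.json pnpm-lock.yaml ./
RUN pnpm install --frozen-lockfile

# Copy the rest of the application code (filtered by .dockerignore)
COPY . .

# Database file location, overridable at build time
ARG DB_FILE=/app/data/sqlite.db
ENV DB_FILE=$DB_FILE
RUN mkdir -p /app/data && pnpm db:generate && pnpm db:migrate

# Port the server listens on, overridable at build time
ARG PORT=3000
ENV PORT=$PORT
EXPOSE $PORT

# Default command when the container starts
CMD ["pnpm", "start"]
```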
- Sets the working directory to `/app` in the container, where the application code will be copied. You can name this directory anything you like.
- Installs `curl` for health checks. This is useful for monitoring the health of the containerized application. We'll use this later.
- Copies the `package.json` and `pnpm-lock.yaml` files and installs dependencies. This step is done separately from copying the rest of the application code to leverage Docker's cache during builds (more on this later). The `./` indicates the current working directory inside the container (`/app`).
- Copies the rest of the application code, excluding anything listed in the `.dockerignore` file (if present). We will define a `.dockerignore` file in the next section.
- Generates and applies the database schema with the `pnpm db:generate` and `pnpm db:migrate` commands. Notice that we use the environment variable `DB_FILE` to configure the database file. I have set it to `/app/data/sqlite.db`.
- Similar to the `DB_FILE` environment variable, we use the `PORT` argument to configure the port number. This allows us to specify the port when running the container.
- Starts the API server with `pnpm start`.

Docker images are built in layers, each layer representing a specific set of changes to the base image. The `FROM` instruction in the Dockerfile specifies the base image from which the new image will be built. In this case, we are using an official Node.js image as the base image.
There are different versions and variants of the Node.js image available on Docker Hub, each optimized for different use cases. The `lts-alpine` tag refers to the "long term support" version of Node.js with Alpine Linux as the base operating system. Alpine Linux is a lightweight Linux distribution that produces smaller images and faster builds compared to other distributions.
The `COPY package.json pnpm-lock.yaml ./` instruction is a common pattern used in Dockerfiles to optimize the build process by leveraging Docker's cache mechanism. When building a Docker image, each instruction in the Dockerfile creates a new layer in the image. Docker caches the layers to speed up subsequent builds by reusing previously built layers when the source code has not changed.

By copying the `package.json` and `pnpm-lock.yaml` files separately from the rest of the application code, we ensure that the dependencies are reinstalled only when these files change. If the application code changes but the dependencies remain the same, Docker reuses the cached layer with the installed dependencies, saving time during the build process.

This pattern is particularly useful for Node.js applications: the dependencies are defined in the `package.json` file and change far less frequently than the application code. By copying the dependency files separately, we take advantage of Docker's cache and avoid reinstalling dependencies unnecessarily.
### `CMD` vs. `RUN` Instructions

The `CMD` instruction specifies the command to run when the container starts, while the `RUN` instruction executes a command during the build process to set up the environment or install dependencies. The `RUN` instruction is used to install dependencies, set up the application, or perform other build-time tasks that are necessary for the container to run but are not part of its runtime behavior.

In contrast, the `CMD` instruction defines the default command to run when the container starts. In its shell form, this command is executed in the container's default shell (usually `/bin/sh -c`), and it can be overridden when running the container to provide different behavior. The `CMD` instruction is typically used to start the application or process that the container is designed to run.
### `.dockerignore` file

Similar to the `.gitignore` file, the `.dockerignore` file specifies which files and directories should be excluded when building a Docker image. This helps reduce the size of the image and speeds up the build process by avoiding unnecessary files.

Add the following content to the `.dockerignore` file in the `api` directory:
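A plausible version matching the description below; adjust the patterns to your project:

```
.env
node_modules/
*.log
.git/
dist/
out/
*.tmp
*.db
```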
In the `.dockerignore` file, we specify the files and directories that should be ignored during the Docker build process. This includes environment settings (`.env`), node modules (`node_modules/`), logs, version control directories (`.git`), build output directories (`dist`, `out`), temporary files, and configuration files that are not needed in the Docker image. We also exclude local database files (`.db`) that are not required for the Docker image. This ensures your local development database is not included in the production image.
Now that we have our Dockerfile, let’s build and test our Docker image.
Open a terminal window and navigate to the `api` directory.
Build the Docker image:
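The build command, matching the explanation that follows:

```shell
docker build -t posts-api .
```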
This command builds a Docker image named `posts-api` based on the instructions in the `Dockerfile` located in the current directory (`.`). The `-t` flag is used to tag the image with a name (`posts-api` in this case) to identify it later.
If you get an error message about "Cannot connect to the Docker daemon", make sure Docker Desktop is running on your machine. Here is an example output of the build process:
By the way, you can override the environment variables `PORT` and `DB_FILE` during the build process by passing them as build arguments:
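For example (this assumes the Dockerfile declares `ARG PORT` and `ARG DB_FILE`; the value `8080` here is just an arbitrary illustration):

```shell
docker build -t posts-api \
  --build-arg PORT=8080 \
  --build-arg DB_FILE=/app/data/sqlite.db .
```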
We will use the values specified in the Dockerfile for now.
Run the Docker container:
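The run command, matching the explanation that follows:

```shell
docker run -p 3000:3000 --name posts-api-container posts-api
```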
This command runs a Docker container based on the `posts-api` image and maps port 3000 on the host machine to port 3000 inside the container. The `--name` flag assigns a name (`posts-api-container`) to the container for easy reference.
The `-p 3000:3000` might seem strange at first, but it's a common pattern in Docker to map ports from the host machine to the container. The syntax is `-p hostPort:containerPort`, where `hostPort` is the port on the host machine and `containerPort` is the port inside the container.
We know the API server listens on port 3000, so we map port 3000 on the host machine to port 3000 inside the container. This allows us to access the API server running inside the container from our host machine.
Test the API: Open Postman and test the API endpoints as you did before. You should be able to interact with the API running inside the Docker container.

- Send a request to `http://localhost:3000/` to get the "Hello Hono!" message.
- Send a request to `http://localhost:3000/sign-up` to sign up a new user. This should also sign in the user.
- Send a request to `http://localhost:3000/sign-out` to sign out the user.
- Send a request to `http://localhost:3000/sign-in` to sign in the user again.
- Create a new post using the `/posts` endpoint.
- Add a comment to the post using the `/posts/:postId/comments` endpoint.

If everything is set up correctly, you should be able to interact with the API as you did before.
Stop and remove the container: Open the terminal and run the following commands to stop and remove the container.
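Using the container name we assigned earlier:

```shell
docker stop posts-api-container
docker rm posts-api-container
```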
Congratulations! You’ve successfully dockerized the Posts API. In the next task, we’ll dockerize the web application.
In this task, you learned how to create a Docker container for the Posts API. Containerization allows you to package your application and its dependencies into a standardized unit for software development and deployment. By following the steps outlined in this task, you’ve gained hands-on experience with Docker and learned how to build, run, and test a Docker container for the API server.