How does Docker build work?
The Docker build process creates a Docker image based on instructions in a Dockerfile. This file contains build instructions like copying files, installing dependencies, running commands, and more.
When you run docker build, the Docker daemon reads the Dockerfile and executes each instruction in order, generating a layered image. Each instruction creates a lightweight, read-only filesystem snapshot known as a layer. Layers that haven't changed since a previous build are cached and reused, avoiding the need to rebuild them and significantly speeding up subsequent builds.
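For example, running the same build twice shows the cache at work (the image name myapp:latest below is just a placeholder):
# First build: every instruction runs and produces a new layer
docker build -t myapp:latest .
# Second build with nothing changed: Docker reuses the cached layers and finishes almost instantly
docker build -t myapp:latest .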
Once all instructions are executed, Docker creates the final image containing the application and its dependencies, ready to be used to run containers.
How to build a Docker image from Dockerfile: Step-by-step
To build a Docker image from a Dockerfile, let's see the docker build command in action. In the steps below, you will build a simple Docker image: a web server that serves a single web page. You'll build, run, and test it on your local computer.
Step 1 - Create a working directory
Create a directory or folder to use for this demonstration ("docker-demo" is used here) and navigate to that directory by running the following commands in your terminal:
mkdir docker-demo
cd docker-demo
Create a file called "index.html" in the directory and add the following content to it:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Simple App</title>
</head>
<body>
<h1>Hello World</h1>
<p>This is running in a docker container</p>
</body>
</html>
This file is the web page that the server will deliver.
Step 2 - Select a base image
Next, select a suitable base image from Docker Hub or a local repository. The base image forms the foundation of your custom image and contains the operating system and essential dependencies; almost every Docker image is based on another image. For this demonstration, you'll use nginx:stable-alpine3.17-slim as the base image.
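If you'd like to fetch the base image ahead of time, you can pull it explicitly; docker build will otherwise pull it automatically during the build:
docker pull nginx:stable-alpine3.17-slim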
Step 3 - Create a Dockerfile
Now create a file named "Dockerfile". This file will define the build instructions for your image. By default, when you run the docker build command, Docker looks for a file named "Dockerfile" in the build context.
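As a side note, if you keep the build instructions in a file with a different name, the -f flag lets you point docker build at it (the filename custom.Dockerfile below is only an illustration):
docker build -f custom.Dockerfile .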
Step 4 - Add build instructions to the Dockerfile
Open the Dockerfile in a text editor and add the following lines:
FROM nginx:stable-alpine3.17-slim
COPY index.html /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
The instructions above will create a Docker image that serves the provided "index.html" file through the Nginx web server when a container is launched from that image.
The FROM instruction initializes a new build stage and sets the base image for subsequent instructions. The COPY instruction copies files or directories (usually the source code) from the build context into the image; here it copies "index.html" into "/usr/share/nginx/html", the default location from which Nginx serves static files. The EXPOSE instruction documents that the container listens on port 80. The main purpose of the CMD instruction is to provide defaults for executing containers. The instructions in a Dockerfile differ based on the kind of image you're trying to build.
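Because CMD only provides defaults, you can override it when starting a container. As a quick illustration using the image you'll build in the next step, passing a command after the image name replaces the default CMD; here it simply prints the Nginx version and exits:
docker run --rm sampleapp:v1 nginx -v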
Step 5 - Build the image using the Docker build command
Before building the Docker image, confirm that you have Docker installed by running the docker --version command.
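For example, this prints the installed Docker version and confirms the CLI is available:
docker --version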
To build your image, make sure you're in the directory containing your Dockerfile and run the following command in the terminal:
docker build -t sampleapp:v1 .
This command initiates the Docker build process to create a Docker image based on the instructions specified in the Dockerfile located in the current directory (.).
The -t flag specifies a tag for the image, allowing you to assign a name and version to it. In this case, the image will be tagged as "sampleapp" with the version "v1", providing a descriptive identifier that makes the image easier to reference and manage.
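You can also pass -t more than once to give the same image several tags in a single build, for example a version tag plus a "latest" tag (the extra tag below is just an illustration):
docker build -t sampleapp:v1 -t sampleapp:latest .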
You should see the build process start, followed by output indicating that the build has completed.
Step 6 - Verify the built Docker image
After a successful build, verify the image by running the docker images command to list all the images available on your Docker host. You should see your newly created image listed with its assigned tag and other relevant details, ready to be used for running containers or pushed to a container registry for distribution.
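For example, you can list every image, or narrow the list to the "sampleapp" repository:
docker images
docker images sampleapp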
Step 7 - Run the Docker image
Next, run the Docker image as a container using:
docker run -p 8080:80 sampleapp:v1
This command tells Docker to start a container from the sampleapp:v1 image. The -p flag specifies the port mapping, which maps a port on the host machine to a port inside the container. Here, you are mapping port 8080 of the host machine to port 80 of the container. You can change the host port to whatever you prefer; just make sure you specify the image name and version you used when building the image.
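For instance, if port 8080 is already in use on your machine, you could map a different host port instead; only the number on the left of the colon changes:
docker run -p 3000:80 sampleapp:v1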
Step 8 - Access the application
With the container running, you can access the application. Open a web browser, navigate to http://localhost:8080, and you should see the sample web page displayed.
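If you prefer the terminal, you can also check the page with curl (assuming curl is installed):
curl http://localhost:8080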
Tagging a Docker image
When you have many images, it becomes difficult to tell which image is which. Docker lets you tag your images with friendly names of your choosing; this is known as tagging. You can apply a tag at build time with the -t flag, as you did above, or later with the docker tag command. For example, to build an image and tag it under your Docker Hub username in one step (this uses a separate example app, example-node-app), run:
$ docker build . -t yourusername/example-node-app
If you run the command above, you should have your image tagged already. Running docker images again will show your image with the name you’ve chosen.
$ docker images
The output of the above command should be similar to this:
REPOSITORY TAG IMAGE ID CREATED SIZE
yourusername/example-node-app latest be083a8e3159 7 minutes ago 83.2MB
Running or Testing a Docker image
You run a Docker image by using the docker run command. The command is as follows:
$ docker run -p 80:3000 yourusername/example-node-app
The command is pretty simple. The -p flag maps a port on the host machine (80) to the port the app is listening on inside the container (3000). Now you can access your app from your browser at localhost.
To run the container in detached mode, supply the -d flag:
$ docker run -d -p 80:3000 yourusername/example-node-app
A big congrats to you! You just packaged an application that can run anywhere Docker is installed.
Pushing a Docker image to the Docker repository
The Docker image you built still resides on your local machine. This means you can’t run it on any other machine outside your own—not even in production! To make the Docker image available for use elsewhere, you need to push it to a Docker registry.
A Docker registry is where Docker images live. One of the most popular Docker registries is Docker Hub. You'll need an account to push Docker images to Docker Hub, and you can create one [here](https://hub.docker.com).
With your [Docker Hub](https://hub.docker.com) credentials ready, you only need to log in with your username and password:
$ docker login
Enter your Docker Hub username and access token (or password) to authenticate.
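If you prefer a non-interactive login (handy in scripts or CI), you can pipe an access token to the --password-stdin flag; the token.txt file below is just a placeholder for wherever you store your token:
$ cat token.txt | docker login --username yourusername --password-stdin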
Retag the image with a version number:
$ docker tag yourusername/example-node-app yourdockerhubusername/example-node-app:v1
Then push with the following:
$ docker push yourdockerhubusername/example-node-app:v1
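Once the push completes, any machine with Docker installed can pull and run the image straight from Docker Hub, for example:
$ docker pull yourdockerhubusername/example-node-app:v1
$ docker run -d -p 80:3000 yourdockerhubusername/example-node-app:v1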
If you’re as excited as I am, you’ll probably want to poke your nose into what’s happening in this container and do other cool stuff with the Docker CLI.
You can list Docker containers:
$ docker ps
And you can inspect a container:
$ docker inspect <container-id>
You can view Docker logs in a Docker container:
$ docker logs <container-id>
And you can stop a running container:
$ docker stop <container-id>
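As a small aside, docker logs also accepts a -f flag to stream logs in real time, and a stopped container can be removed with docker rm; the <container-id> placeholder stands for the ID or name shown by docker ps:
$ docker logs -f <container-id>
$ docker rm <container-id>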
Logging and monitoring are as important as the app itself. You shouldn’t put an app in production without proper logging and monitoring in place, no matter the reason. Retrace provides first-class support for Docker containers, and this guide can help you set up a Retrace agent.
Conclusion
The whole concept of containerization is about taking away the pain of building, shipping, and running applications. In this post, we’ve learned how to write a Dockerfile as well as build, tag, and publish Docker images. Now it’s time to build on this knowledge and learn how to automate the entire process using continuous integration and delivery. Here are a few good posts about setting up continuous integration and delivery pipelines to get you started: