Wei's Dev Journal

Containerise with Docker Fundamental Part 01

Cover image: by Wei Chu


Step 00 | Prerequisites

  1. Installing npm

    bash
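A plausible install command, assuming a Debian/Ubuntu machine (npm ships together with Node.js):

```bash
# Install Node.js and npm from the distribution's package manager
sudo apt update
sudo apt install nodejs npm
```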

  2. Installing the node.js module

    bash
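Presumably this refers to installing the Express module used by the test app in Step 01; a hedged sketch:

```bash
# Assumes the module in question is express
npm install express
```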


Step 01 | Creating an express app with node.js for testing the container function

  1. Use npm init to create a new package.json file for the Node.js project.

    bash
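The command is likely plain npm init (answering the interactive prompts), or with the -y flag to accept all defaults:

```bash
npm init -y
```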

  2. We will see that a package.json file has now been created in the directory.

    package.json
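A typical generated package.json might look like this; the field values depend on the answers given to npm init:

```json
{
  "name": "node-app",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC"
}
```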

  3. Create the express app file index.js for testing.

    index.js
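A minimal Express app sketch; the route and message are placeholders, not the author's actual file:

```javascript
// index.js - minimal Express app (sketch)
const express = require('express');
const app = express();

app.get('/', (req, res) => {
  res.send('<h1>Hello from the container!</h1>');
});

// PORT becomes relevant in Step 10; default to 3000 for now
const port = process.env.PORT || 3000;
app.listen(port, () => console.log(`listening on port ${port}`));
```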

Step 02 | Setting up the Docker container

  1. Install Docker on the local machine.
  2. Go to hub.docker.com and search for “node”.
  3. As the default node image on the hub won't have every dependency our app needs, we are going to write our own customised image, based on the image shown on the hub.
  4. Create a Dockerfile. Docker caches the result of each build instruction (including COPY . ./), which speeds things up the next time the image is built.

    Dockerfile
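A sketch of the kind of Dockerfile described here; the node base-image tag is an assumption:

```Dockerfile
FROM node:15
WORKDIR /app
# Copy package.json first so the npm install layer can be cached
COPY package.json .
RUN npm install
# Then copy the rest of the source
COPY . ./
EXPOSE 3000
CMD ["node", "index.js"]
```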

  5. Run the command to build the docker image

    bash
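Likely something like:

```bash
# Build an image from the Dockerfile in the current directory
sudo docker build .
```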

  6. Check the existing Docker images to verify that the new image was created successfully.

    bash
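Presumably:

```bash
# List local images (an untagged image shows <none> as its name)
sudo docker image ls
```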

  7. (Optional) You can also remove the image by specifying its image ID.

    bash
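Something like the following, where the placeholder must be replaced with the ID from the image listing:

```bash
sudo docker image rm <IMAGE_ID>
```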

  8. (Optional) You can rebuild the image, this time giving it a name by adding the -t flag to the command.

    bash
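Given the image name used in the later steps, this was presumably:

```bash
# -t tags the image with a readable name
sudo docker build -t node-app-image .
```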

Now go ahead and run it.

    bash
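Likely a plain run, before any port publishing:

```bash
# -d runs detached; --name gives the container a readable name
sudo docker run -d --name node-app node-app-image
```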

Double check.

    bash
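Probably a container listing:

```bash
sudo docker ps
```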

Go to the browser and check: the website should not be working just yet. The next major step covers the port and how the network can talk to the container.

Step 03 | Container and Network Traffic Management

bash
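Given the redeploy command used in Step 05, the command here was presumably publishing the container port so the host network can reach the app:

```bash
# -p <host_port>:<container_port> publishes the port to the host
sudo docker run -p 3000:3000 -d --name node-app node-app-image
```

If the node-app name is already taken by an earlier container, remove that container first with sudo docker rm -f node-app.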


Step 04 | Starting an exited container

  1. The following command only shows the containers that are currently running.

    bash
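Presumably:

```bash
sudo docker ps
```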

  2. The following command shows all the containers, including those that are not active.

    bash
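Presumably:

```bash
# -a includes stopped (exited) containers
sudo docker ps -a
```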

  3. Run the following command to start the exited container again.

    bash
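Assuming the container name from the earlier steps:

```bash
sudo docker start node-app
```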


    bash
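Followed, presumably, by a check that it is running again:

```bash
sudo docker ps
```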


    💡 If you want to start the container and attach to it, you can use:

    bash
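Likely:

```bash
# -a attaches the terminal to the container's output
sudo docker start -a node-app
```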

    💡 If you need to run a new container from an image, you can use the docker run command:

    bash
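Likely the same run command as before:

```bash
sudo docker run -p 3000:3000 -d --name node-app node-app-image
```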


Step 05 | .dockerignore to enhance security

Run the following commands, and we will find that the Dockerfile is also included in the deployed container, which is not the safest practice:

  1. Run the following command to enter the shell of the container node.

    bash
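Matching the command spelled out later in this step:

```bash
# -i keeps STDIN open, -t allocates a TTY; bash is the shell to run
sudo docker exec -it node-app bash
```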

  2. Run the command ls in the instance shell to check the files that have been synced from the working directory to the container node. We will find that the Dockerfile and the node.js modules have all been synced; those files don't need to exist there and they potentially pose a security risk.
  3. This is when the .dockerignore file comes in handy.

    .dockerignore
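A .dockerignore listing the items called out above (the Dockerfile and the node modules); .git is an extra, commonly ignored entry:

```
node_modules
Dockerfile
.dockerignore
.git
```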

  4. After creating the .dockerignore file and specifying the directory items to be ignored, rebuild the image and then redeploy the container.
    1. Stop the container by running the command sudo docker stop node-app
    2. Remove the container by running the command sudo docker rm node-app
    3. Run the command sudo docker image ls -a to get the image ID, then run sudo docker image rm cec14ac641d6 to remove the image.
    4. Run the command sudo docker build -t node-app-image . to rebuild the image.
    5. Run the command sudo docker run -p 3000:3000 -d --name node-app node-app-image to redeploy the container.
    6. Run the command sudo docker exec -it node-app bash to enter the shell of the container, and we will find that the items listed in the .dockerignore file are now ignored when the container is deployed.

      💡 We might still find node_modules in the container's directory even though we asked for the modules to be ignored. That is because the Dockerfile tells npm to install the dependencies inside the container.


Step 06 | Docker Bind Mount for Syncing Directory Files to the Container Node

  1. Edit the frontend content in the express app definition file.
  2. We will find that the content in the browser still looks the same and is not updated. That means the frontend app file in the work directory is not synced to the frontend app file in the container node.
  3. The question is: how do we avoid constantly rebuilding the container image whenever something changes? This is when “Bind Mount” comes in handy.

    💡 What is Bind Mount

    A bind mount in Docker is a way to mount a directory or file from the host machine into a container. This allows the container to access and modify files on the host system. Bind mounts are useful for development, as changes made on the host are immediately reflected in the container.

    The following is the command syntax to set up the bind mount mechanism between the working directory and the container node.

    bash
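The generic syntax, with angle-bracket placeholders for the two paths:

```bash
sudo docker run -v <host_directory>:<container_directory> -p 3000:3000 -d --name node-app node-app-image
```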

In my case, it would be:

    bash
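A sketch with a hypothetical absolute path standing in for the author's project directory:

```bash
sudo docker run -v /home/wei/node-app:/app -p 3000:3000 -d --name node-app node-app-image
```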

To avoid the command becoming too long, it is better to build the path dynamically, as Docker normally accepts only a full (absolute) path and does not like relative paths.

    bash
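Using the shell to expand the current directory; this matches the command used again in Step 07:

```bash
# $(pwd) expands to the absolute path of the current working directory
sudo docker run -v $(pwd):/app -p 3000:3000 -d --name node-app node-app-image
```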

Stop and remove the existing container node and redeploy a container with the command syntax above. The new container should now be in sync with the host work directory. Remember to restart the container's node process when the frontend content is updated.


Step 07 | Automatically Restarting the Node Process with Nodemon

When used together with bind mount, the node process must be restarted once frontend content has been updated and synced to the container node. To avoid running commands manually to achieve this, Nodemon will be set up together with Bind Mount.

💡 What is Nodemon?

Nodemon is a utility that helps develop Node.js applications by automatically restarting the application when file changes in the directory are detected. It is particularly useful during development to avoid manually stopping and restarting the server every time you make a change.

Key Features:

  • Automatically restarts the Node.js application when file changes are detected.
  • Monitors all files in the directory by default, but can be configured to watch specific files or directories.
  • Can be used as a replacement for the "node" command.

  1. We need to install the nodemon package with npm. Run the command sudo npm install nodemon --save-dev. The --save-dev flag indicates that the package should be added to the devDependencies section of your package.json file. Development dependencies are only needed during development, not in production.
  2. We will find that a devDependencies section has been created in the package.json file.
  3. We need to add a few scripts to the package.json file to make nodemon work properly.
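The scripts are likely along these lines; the script names and entry file are assumptions:

```json
"scripts": {
  "start": "node index.js",
  "dev": "nodemon index.js"
}
```

Some container setups need nodemon's legacy polling watch (nodemon -L index.js) for file-change detection to work over a bind mount.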
  4. Update the Dockerfile.
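The update is presumably switching the container's start command over to the nodemon script:

```Dockerfile
CMD ["npm", "run", "dev"]
```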
  5. Repeat the steps from the previous sections to rebuild the image (stop & remove the running node >> remove the image by ID >> rebuild the image).
  6. As package.json has been updated, the Bind Mount will need to be set up again when deploying the container node. The command: sudo docker run -v $(pwd):/app -p 3000:3000 -d --name node-app node-app-image
  7. Now the frontend content should stay in sync between the work directory and the container node without manually restarting the node processes.


Step 08 | Filtering Cached Items

Context:

  1. Delete the node_modules package hosted in the work directory, as those node.js packages are no longer needed after we containerise the app's dependencies.
  2. We will then find that the web app crashes if we deploy the container node again in the future. This is due to the bind mount specified when deploying the container: even though npm installs the packages into the new container node during the image build, the work directory structure (now without node_modules) still overrides the corresponding directory in the container node.

Fix: Exclude a specific part of the volume from being overwritten by using the following command:

bash
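Consistent with the -v /app/node_modules explanation that follows:

```bash
sudo docker run -v $(pwd):/app -v /app/node_modules -p 3000:3000 -d --name node-app node-app-image
```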

-v /app/node_modules creates an anonymous volume for the /app/node_modules directory to avoid overwriting it with the host's node_modules.

💡 How COPY . ./ will impact the caching of the containerisation result

In Docker, each instruction in the Dockerfile creates a layer in the image. Docker uses a caching mechanism to speed up the build process by reusing layers that have not changed. The COPY . ./ instruction can significantly impact caching.

Caching Behavior:

  • Layer Caching: Docker caches each layer created by an instruction. If the contents being copied by COPY . ./ have not changed since the last build, Docker will reuse the cached layer.
  • Invalidating Cache: If any file in the source directory changes, the cache for the COPY . ./ instruction is invalidated, and Docker will re-execute this instruction and all subsequent instructions.

Best Practices:

  • Order Matters: Place instructions that change less frequently (e.g., COPY package.json . and RUN npm install) before instructions that change more frequently (e.g., COPY . ./). This helps maximize cache usage.
  • Selective Copying: Copy only necessary files to avoid invalidating the cache unnecessarily.


Step 09 | Restricting the container node's directory management privileges to avoid corrupting the main work directory

Context:

  1. After the container node has been deployed, you will find that it is possible to create, update, or delete files from the container node side because of the bind mount synchronisation.
  2. For example, if we enter the shell of the container node and use the command touch test.txt to create a new file, we will find the new file is immediately synced to the root work directory. This puts the root work directory at risk of being edited or corrupted.

Fix: Redeploy the container, mounting the volume with read-only privileges, with the following command:

bash
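The read-only flag is appended to the bind mount as :ro; a sketch that keeps the anonymous node_modules volume from Step 08:

```bash
sudo docker run -v $(pwd):/app:ro -v /app/node_modules -p 3000:3000 -d --name node-app node-app-image
```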


Step 10 | Environment variables

  1. Setting a default ENV value
    For example: set the port number to 3000 in the Dockerfile, then reference the ENV variable PORT in the EXPOSE instruction for documentation purposes.
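The Dockerfile lines would look roughly like this:

```Dockerfile
ENV PORT=3000
# EXPOSE is documentation only; it does not publish the port
EXPOSE $PORT
```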
  2. Now kill the existing container and rebuild the image. After the image is rebuilt, redeploy the container with the environment variable referenced in the command line.
    The command:

    bash
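A plausible command; the value 4000 and the kept :ro mount are assumptions:

```bash
sudo docker run -v $(pwd):/app:ro -v /app/node_modules --env PORT=4000 -p 3000:4000 -d --name node-app node-app-image
```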

  3. To check whether the environment variables are passed into the container, enter the container shell and run the command printenv:
  4. To set multiple environment variables with their own values, we need to set up a .env file and reference it in the container deployment command.
    1. Create an .env file
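A hypothetical .env file; variable names other than PORT are placeholders:

```
PORT=4000
NODE_ENV=development
```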
    2. Delete the existing container and redeploy it with the following command, with the .env file specified, to reference the desired environment variable values.

      bash
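Likely the same run command with --env-file replacing the individual --env flags:

```bash
sudo docker run -v $(pwd):/app:ro -v /app/node_modules --env-file ./.env -p 3000:4000 -d --name node-app node-app-image
```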

    3. Run the command printenv again in the container's bash shell to check that the environment variables are passed into the container.


Step 11 | Cleaning up redundant volume build-up

When a container is removed, some of the volumes attached to it are not removed with it. In our case, we specify the volume /app/node_modules when deploying a container, which is an anonymous volume that won't be deleted when the container is removed.

  1. Check the existing redundant volumes
    Use the command docker volume ls to see which volumes exist on the system.
  2. There are 2 methods to handle them:
    1. Use the command docker volume prune to trim the redundant volumes.
    2. Use the -fv flags when removing a container, which deletes the attached anonymous volumes together with the container and prevents volumes from building up.
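For example, assuming the container from the earlier steps is named node-app:

```bash
# -f force-removes a running container; -v also removes its anonymous volumes
sudo docker rm -fv node-app
```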