What is Docker? A Deep Dive into Docker Container


Containers are becoming increasingly popular, and their rise has fueled the emergence of containerization tools, most notably Docker and Kubernetes.

While the former is a free, open-source tool (as well as a suite of PaaS products) for building and running containers, the latter is an open-source container orchestration system that automates the deployment, scaling, and management of containerized applications.

We will be focusing on Docker in this write-up. Containers can be understood as standardized executable components that pack application source code with all the required OS libraries and dependencies needed to run the application in any environment.

Before delving deeper into Docker, let’s get our heads wrapped around the concept of containerization first.

What is Containerization?

In computer science, a container is defined as a package that bundles an application's code together with all of its dependencies.

This imparts several benefits, such as speedy execution and enhanced reliability. The biggest benefits, however, come in the form of platform independence and improved portability.

A container can run on almost any platform without much concern for the underlying hardware configuration. This is a giant leap toward resolving age-old platform-dependency issues.

Container images are executable, lightweight, and standalone packages. They include everything needed to run the application they contain: the code, runtime environment, settings, system tools, and libraries. They become containers at runtime.

Docker container images, for example, become containers as soon as they start running on the Docker Engine. Containerized applications run the same irrespective of the infrastructure, because containers isolate applications from their environment and thereby ensure uniform execution.
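As a minimal sketch of this image-to-container transition (assuming Docker is installed and the daemon is running), the standard hello-world image can be pulled and run:

```shell
# Download the read-only image from Docker Hub
docker pull hello-world

# Running the image creates a live container instance on the Docker Engine
docker run hello-world

# The image is still listed, with the exited container instance alongside it
docker images hello-world
docker ps -a --filter ancestor=hello-world
```

The same two commands behave identically on a laptop, a server, or a cloud VM, which is the uniform-execution property described above.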

Machine virtualization allows several VMs to share the resources of a single hardware server. Similarly, OS-level virtualization allows several application components to share a single instance of the OS kernel; this is how containers work.

Containers vs Virtual Machines

Like VMs, containers are cost-effective to scale, run in isolation, and are easy to dispose of. Abstracting at the OS level rather than the hardware level, however, adds further benefits:

  • Better Developer Productivity: Containers are quicker and easier to provision, deploy, and restart than VMs. This makes them a better fit for CI (continuous integration) and CD (continuous delivery) pipelines, as well as for Agile and DevOps workflows.
  • More Lightweight: Containers include only the application processes and their dependencies, whereas each VM carries an additional full OS instance. For the same application process, a container is therefore lighter than its virtual machine counterpart.
  • Superior Resource Utilization: Like VMs, containers allow multiple copies of an application process to run on the same hardware, but they do so more resource-efficiently.

What is Docker? - The Gold Standard of Containerization

Docker is a containerization platform from Docker, Inc. By leveraging OS-level virtualization, it allows developers to package application programs into containers. Docker facilitates building, deploying, running, updating, and managing containers.

The containerization tool features straightforward commands and effort-reducing automation that save a great deal of time while working with containers. Although Docker is free to use and open source, it also has commercial offerings.

Although containerization has been available in some form since the '70s, it is only recently that it has become so widely popular, and a good amount of the credit for popularizing it goes to Docker.

Linux Containers, or LXC, built on Linux kernel features such as cgroups and namespaces, arrived in 2008 and enabled multiple isolated Linux environments on a single kernel instance. Although early versions of Docker used LXC, Docker went on to develop its own distinctive containerization technology and moved beyond LXC.

Therefore, anyone looking to learn containerization would do well to start with Docker. Its popularity can be gauged by the fact that, today, "Docker" and "containers" are often used interchangeably.

Why Docker?

There are several convincing reasons to use Docker when there is a need for containers. The most important ones are:

  • Auto Container Creation: Based on the application source code, Docker can automatically build a suitable container.
  • Creator of the Industry Standard: Docker is responsible for creating and setting the industry standard for containers.
  • Granular Updates: A Docker container typically runs a single process. This allows the dependent application to keep running even while one of its parts, i.e., one process, is receiving an update or undergoing repairs.
  • Huge Repository of Docker Images: Docker users are free to access an open registry storing tens of thousands of user-contributed container images.
  • Lightweight: Docker containers share the host machine's OS kernel and therefore don't require an OS image for every application, as VMs do. This results in higher server efficiency and lower server and licensing expenditure.
  • Strong Isolation: Applications inside containers are well protected; Docker offers robust default isolation capabilities.
  • Portable: Docker containers can be used anywhere. They can run in any cloud or desktop environment without modification.
  • Superior Versioning: Docker is capable of:
      • Rolling back to previous versions of container images.
      • Tracing how a version of a container image was built and who built it.
      • Tracking versions of container images.
      • Uploading only the deltas between an existing version and a newer version of an image.
  • Templates for New Containers: Docker allows using existing container images as base images, i.e., templates for building new ones.
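The versioning points above map directly onto everyday Docker commands; a brief sketch (the image name myapp is hypothetical):

```shell
# Tag the current build so it can be rolled back to later
docker tag myapp:latest myapp:1.0

# Trace how the image was built, layer by layer
docker image history myapp:1.0

# Push only the layers (deltas) the registry doesn't already have
docker push myapp:1.0
```

Because layers are content-addressed, a push or pull transfers only the layers the other side is missing, which is what makes delta uploads possible.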

Docker Glossary

Docker involves a lot of technical terms and concepts, and to fully appreciate Docker, one needs to understand them. Following is a list of Docker terms with brief explanations:

1. Docker Container

A Docker container is a running instance of a Docker container image. Unlike container images, Docker containers are transient and live, and they feature executable content.

A container's conditions and settings are managed by its administrator(s); this is what makes user interaction with the container possible.

2. Docker Container Image

A Docker container image packs executable application source code along with all the dependencies, such as the tools and libraries, that are mandatory for running the "contained" application. It is a read-only entity.

As soon as a Docker image starts running, it becomes a container instance. One or many instances of the same image can run at the same time.

Although it is possible to build a Docker image from scratch, it is far easier to create one from a base image. Multiple Docker images can even be created from one single base image; in that case, all the resulting images share several layers of their stack.

This ease of development is what motivates many developers to pull base images from common, open repositories such as Docker Hub. Such repositories are termed Docker registries.

Usually, a Docker image consists of several layers. Each layer corresponds to a point at which the developer made changes; the top layer therefore corresponds to the latest changes made to the image.

This is a useful feature for rolling back to previous versions of an image in scenarios involving failure, or when things don't go as planned. Layers can also be reused in other projects, killing two birds with one stone.

When a Docker image runs, i.e., when a container is formed, the topmost layer created is dubbed the container layer. All changes made to the container are saved in this writable container layer and exist only for the lifetime of the container, i.e., while it is running.

Because the read-only layers beneath the container layer are shared, running multiple container instances from a single base image offers enhanced overall efficiency.
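The layered structure can be inspected directly; a quick sketch using the public nginx image (assuming Docker is available):

```shell
# List the layers of the image, newest (topmost) first
docker image history nginx

# Show the digests of the read-only layers that containers will share
docker image inspect nginx --format '{{.RootFS.Layers}}'
```

Every container started from this image adds only its own thin writable layer on top of these shared read-only ones.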

3. Dockerfile

A Dockerfile is a plain text file that contains the instructions for building a Docker container image. Every Docker image is built from a Dockerfile, which automates the entire process of creating the image.

The Docker Engine is responsible for assembling a Docker container image. The corresponding Dockerfile lists all the commands, in their particular sequence, that guide the Docker Engine in putting the image together.
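A minimal sketch of a Dockerfile for a hypothetical Python web app (the file names, base image tag, and app.py entry point are illustrative):

```dockerfile
# Start from a slim Python base image
FROM python:3.11-slim

# Set the working directory inside the image
WORKDIR /app

# Install dependencies first so this layer is cached across code changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application source code
COPY . .

# Command to run when a container starts from this image
CMD ["python", "app.py"]
```

Running `docker build -t myapp .` in the same directory hands this file to the Docker Engine, which executes the instructions top to bottom, producing one image layer per instruction.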

4. Docker Hub

Self-titled as the biggest community and library of container images, Docker Hub is the principal repository of Docker images. Submissions to it usually come from:

  • Commercial software vendors
  • Open-source project teams
  • Individual developers

The public repository also contains container images certified through the Docker Trusted Registry. As it is a public Docker repository, users are free to pull or push images at any time.
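Interacting with Docker Hub takes only a few commands; a sketch (the yourname/myapp repository name is hypothetical, and pushing requires a Docker Hub account):

```shell
# Pull an official image from Docker Hub
docker pull nginx

# Log in, tag a local image under your own namespace, and push it
docker login
docker tag myapp:latest yourname/myapp:latest
docker push yourname/myapp:latest
```

Once pushed, the image can be pulled by name from any machine running Docker.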

Docker Components

The Docker platform has three main components:

1. Docker Objects

Docker objects are used to assemble an application in Docker. There are three main classes of Docker objects:

  • Docker containers: These are the standardized and isolated environments that run application programs. They are managed via the Docker API or CLI.
  • Docker images: Read-only templates for building Docker containers. Application programs are shipped and stored using the Docker images.
  • Docker services: These allow Docker containers to be scaled across several Docker daemons. A group of collaborating Docker daemons that communicate via the Docker API is termed a swarm.

2. Docker Registries

The repositories for Docker images are called Docker registries. These can be either public (accessible by everyone) or private (accessible only by those who are authorized). Docker Hub, the default registry, is the leading public Docker registry.

Docker clients connect to Docker registries to either pull (download) or push (upload) images. Registries can also emit event-based notifications.

3. The Docker Daemon

Dubbed dockerd, the Docker daemon is a persistent background process. It is responsible for managing Docker containers and handling container objects, and it listens for requests sent over the Docker Engine API.

The gap between the user and the Docker daemon is bridged by the Docker client program, named docker, which provides the command-line interface.
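The client/daemon split is visible from the CLI itself; a quick sketch (assuming Docker is installed and running locally):

```shell
# Reports two sections: Client (the docker CLI) and Server (the dockerd daemon)
docker version

# The client can also talk to a daemon at an explicit address via -H
docker -H unix:///var/run/docker.sock info
```

Every docker command is really an API request that the client sends to dockerd, which does the actual work.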

Docker Compose

Docker Compose is a tool for defining and running multi-container applications. It uses YAML files that specify the services making up an application, and it can deploy and run all of the containers with a single command.

The docker-compose.yml file defines the services for an application and includes their configuration options. Docker Compose is suitable for managing the architecture of an application built from processes residing in several containers, all of which run on one host. It allows:

  • Defining persistent storage volumes,
  • Documenting and configuring service dependencies, and
  • Specifying base nodes.
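A minimal sketch of a docker-compose.yml for a hypothetical two-service app (the service names, image tags, and ports are illustrative):

```yaml
services:
  web:
    build: .            # build the web service image from the local Dockerfile
    ports:
      - "8000:8000"     # host:container port mapping
    depends_on:
      - db              # documented service dependency
  db:
    image: postgres:16
    volumes:
      - db-data:/var/lib/postgresql/data   # persistent storage volume

volumes:
  db-data:
```

This single file captures the services, their dependency order, and the persistent volume described above.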

The docker-compose CLI enables users to run commands on several containers simultaneously. These commands typically involve:

  • Building images
  • Scaling containers
  • Running containers
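The three tasks listed above each correspond to a single docker-compose command; a sketch (run from the directory containing the docker-compose.yml, and the service name web is hypothetical):

```shell
# Build the images for all services defined in the file
docker-compose build

# Start every service with one command, detached
docker-compose up -d

# Scale one service to three container instances
docker-compose up -d --scale web=3
```

Each command fans out across all the services in the file, rather than operating on one container at a time.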

Commands for manipulating individual images, along with single-container interactive options, fall outside Docker Compose's scope, because such commands work on only one container at a time.

Deployment and Orchestration

The Docker Engine alone suffices for managing an application when only a few containers are running; no additional tools are required. When a deployment grows to hundreds of services and thousands of containers, however, additional tooling is needed.

In such scenarios, an orchestration tool is required. Two of the most popular tools for this purpose are Docker's very own Docker Swarm and Kubernetes, the leading tool for managing multi-container environments.

Docker Swarm - The Native Container Orchestration Tool for Docker, from Docker

Offering native clustering functionality for Docker containers, Docker Swarm combines several Docker Engines into one virtual Docker Engine. Swarm mode is integrated into the Docker Engine itself. The docker swarm CLI supports a range of tasks, including:

  • Creating discovery tokens,
  • Running Swarm containers, and
  • Listing various nodes in the cluster.
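Setting up a small swarm takes only a few commands; a sketch (assuming Docker is installed on each node, and the service name web is hypothetical):

```shell
# Turn the current Docker Engine into a swarm manager
docker swarm init

# Print the join token that worker nodes use to enter the cluster
docker swarm join-token worker

# Run a replicated service across the swarm, then list the nodes
docker service create --name web --replicas 3 nginx
docker node ls
```

The manager schedules the three replicas across whichever nodes have joined the cluster.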

For managing swarms, i.e., sets of collaborating Docker daemons, Docker leverages the Raft consensus algorithm: an update is performed only when a majority of the swarm's manager nodes agree to it.

Kubernetes is an open-source container orchestration system that automates the management, scaling, and routing of containers. Although originally developed by Google, its further development now rests in the hands of the CNCF (Cloud Native Computing Foundation).

Compared with Docker Swarm, Kubernetes is generally the stronger choice for managing large multi-container environments built on Docker containerization, and developers often use Kubernetes and Docker in tandem. Docker Desktop even ships with its own Kubernetes distribution.

In the Docker-Kubernetes system, Docker serves as the tool for packing and shipping an application, while Kubernetes is the tool for deploying and scaling the same.
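A minimal sketch of how the two fit together: a Docker-built image (the yourname/myapp:1.0 name is hypothetical) deployed and scaled by a Kubernetes Deployment manifest:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3                 # Kubernetes scales the app to three pods
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: yourname/myapp:1.0   # the image Docker packed and shipped
          ports:
            - containerPort: 8000
```

Docker produces and publishes the image; applying this manifest with kubectl tells Kubernetes to run and scale it.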

Conclusion

That sums up this write-up on Docker. In essence, it is the most popular containerization tool, leveraging OS-level virtualization. Containerization is becoming one of the hottest IT technologies, and as it continues to grow, so does the need for Docker.

What do you think about containerization technology? Is it worth your time? Let us know via the comments section below. Think you know Docker? Assess your knowledge with these top Docker interview questions.


Akhil Bhadwal


A Computer Science graduate interested in mixing up imagination and knowledge into enticing words. Been in the big bad world of content writing since 2014. In his free time, Akhil likes to play cards, do guitar jam, and write weird fiction.
