Introduction to Docker and Container-based development

Source : https://dev.to/

In this article, let's talk about some fundamentals of containers (why we need containers and what we can do with them).

Before we go directly into Docker and Containers, let's get to know about the history of cloud application architectures.

In the early days, applications were deployed on physical servers (much like today, except these were dedicated physical hardware boxes).

Assume we have 3 different servers (a web server, an application server and a database server). In the past, we had 3 physical hardware boxes, one for each server.

Having a dedicated server for each is very troublesome😫.

  1. Cost
  • You need to maintain them (if you have multiple servers, you need multiple people to maintain them).
  • You also need a place, space and a separate network for these servers.
  • Each server should have an operating system (OS), and you should maintain that OS as well.

2. Waste of resources (because your app server or web server may not use 100% of the processing power and memory of that hardware/server).

In this generation, a hypervisor is installed on a single higher-performance server (higher CPU and more memory). Then multiple virtual machines (VMs) are created on top of this hypervisor. (This is the hypervisor generation, e.g. VMware.)

According to the diagram, you can see there's a separate VM for each of the web server, app server and DB server. VM1 uses 20% of the processing power, VM2 uses 10% and VM3 uses 20%; together these three VMs use 50% of the processing power. So there is another 50% of processing power remaining, which can be used for anything else, such as creating a new VM for a mail server, a proxy or a load balancer.

We install operating systems on top of these VMs. As you can see in the diagram, Linux and Windows operating systems are installed in these VMs. Then we install our apps on those operating systems. Now you can call this a virtualized environment.

But there are some major drawbacks with this architecture😕.

  1. Cost of Operating Systems.

As you can see in the figure, there are three operating systems running. Now that could be expensive💰. We have to license these operating systems (OK, not Linux, but assume you had to use Red Hat Enterprise or Windows). So we have not only the OS licensing cost, but we also have to patch, maintain and update these OSes. As you can see, there is a lot of management work with these servers.

Note: patches are small and often unnoticeable. They fix minor bugs and security vulnerabilities or add smaller features. Updates introduce much larger changes.

2. The long and time-consuming process of creating a new web server.

Assume you need another web server. To have one, we have to create another VM. Starting a VM is not something you can do in just a few seconds; there is a boot-up process, and there is still no OS in the world that boots in a second.

On that VM, we have to install another OS and then install the web server. Not only is it a lengthy process, it's also a time-consuming one ⌛ (and by time-consuming I don't mean just a few hours, but overnight work).

There could be several partial solutions.

Ex: You can have a VM image, deploy that image and build on top of it, but there is still much configuration to be done.

But these solutions weren't the best answers to these problems.

So what's the perfect solution to all these problems? 🤔

The solution is none other than Containers...!

Source: https://icon-library.com/icon/docker-container-icon-26.html

Once more let's go through the problems we have.

  1. We have to maintain multiple operating systems.
  2. The time-consuming process of creating a new Web server.

Now how do containers solve the above problems?

Container vs VM (Source: https://dzone.com/articles/how-to-build-docker-images-for-windows-desktop-ap )

Look at the architectural diagram above. With containers we have only one OS installed, so our licensing, patching and updating problems are largely solved. Then we run the Docker Engine on top of the OS, and our apps run as containers on top of the Docker Engine.
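To make this concrete, here is a minimal sketch of the three servers from earlier, each running as a container that shares the single host OS. This assumes Docker is installed; the images (`nginx`, `tomcat`, `mysql`) are just well-known examples from Docker Hub.

```shell
# Three "servers" as containers, all sharing the one host OS kernel
docker run -d --name web nginx                             # web server
docker run -d --name app tomcat                            # application server
docker run -d --name db -e MYSQL_ROOT_PASSWORD=secret mysql  # database server

# One host OS, three isolated applications -- no extra operating systems
docker ps
```

Compare this with the VM generation: here there is nothing to license, patch or update except the single host OS.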

Note: Container / Docker != Hypervisor.

There was a cloud platform (PaaS, Platform as a Service) company called dotCloud. They provided hosting for web apps and databases, and Docker was one of their internal projects. After Docker was made public in the spring of 2013, they realized Docker was growing rapidly and becoming famous, so dotCloud changed their name to Docker Inc. While they provided their PaaS services under the name dotCloud, they also maintained Docker and the Docker ecosystem. However, Docker is an open-source project, written in the Go language and released under the Apache license (Docker is not owned by Docker Inc.).

Note: VMs run on hypervisors; Docker runs containers. They are totally different things.

Docker is a system designed for running individual applications inside containers. It's very different from a VM, because a VM hosts an entire operating system. The figure below will give you a better idea.

With Docker Containers (Source: https://dzone.com/articles/how-to-build-docker-images-for-windows-desktop-app)

When we use Dockerized environments/containers, we don't need separate operating systems or virtual machines. With that, we save resources, which we can use to create more containers.

Not only that, we no longer need multiple operating systems, so we no longer have to worry about maintaining, licensing, patching and updating them 😃.

If you didn't notice, we are already working on top of a host operating system, which means the OS has already started. Since the time to start an OS no longer matters, you can start your application in a flash.
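You can see this startup speed for yourself. A rough sketch, assuming Docker is installed: starting a container is close to starting a process, because no operating system has to boot.

```shell
# Time how long it takes to start a container, run a command and remove it.
# No OS boots here -- only a process starts on the already-running host OS.
time docker run --rm alpine echo "started"
```

Contrast this with booting a fresh VM, which takes minutes at best.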

The Docker Engine and the Docker project are not the same. The Docker Engine is the core; orchestration, security, registry and other services are built on top of (and around) this core.

Docker Engine (Source: https://www.docker.com/products/container-runtime)

Along with these technologies, there are two more important things you should know about: orchestration and registries.

A registry is a place where you can store your Docker images. Assume you have a Docker image (e.g. a MongoDB or MySQL image) and you make some customizations. You can then push these images to a registry.

At present, Docker Hub is the largest registry/repository, with more than 100,000 container images. You can download the image you need, customize it and push it back. If your environment has multiple hosts and they need to use these customized images, you can point those hosts at the required registry and download your customized version.
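The pull, customize and push cycle described above looks roughly like this. This is a sketch assuming a Docker Hub account; `myuser` and the `mysql-custom` name are placeholders.

```shell
docker pull mysql:8.0                          # download the official image
# ...customize it (e.g. build your own image FROM mysql:8.0)...

docker tag mysql:8.0 myuser/mysql-custom:1.0   # name it under your account
docker login                                   # authenticate to Docker Hub
docker push myuser/mysql-custom:1.0            # publish to the registry

# Any other host can now fetch the customized version
docker pull myuser/mysql-custom:1.0
```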

You can create repositories. They can be private (only you can access) or public (anyone can pull, but only you or people with permission can push to the repository). Let's not go into too much detail in this article; we'll talk more about this in the future.

Source: https://docs.microsoft.com/en-us/dotnet/architecture/microservices/container-docker-introduction/docker-containers-images-registries

If you are implementing a solution with Docker images for a client at your workplace, management might object to having the company's Docker images in a public repository. In that case, you can install a Docker registry in your local environment, or get a license from Docker so that you can have your own registry backed by Docker Inc.
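Running your own registry is itself done with a container: Docker publishes an official `registry` image for exactly this. A minimal local sketch (port and image names are just examples):

```shell
# Start a private registry on localhost:5000
docker run -d -p 5000:5000 --name my-registry registry:2

# Tag an image for the private registry and push it there instead of Docker Hub
docker tag mysql:8.0 localhost:5000/mysql-custom:1.0
docker push localhost:5000/mysql-custom:1.0
```

Other hosts in the company network can then pull from this registry without anything leaving your environment.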

Orchestration is the process that takes all these containers/instances and coordinates them toward a common goal.

Assume you have an application with multiple containers for different services (auth, login, HTTP, etc.). When you make a deployment, you don't define which container goes to which host. So assume you have 3 hosts: when you push these containers, the orchestration process decides where each container should go, what its dependencies are, what should start first, and so on.
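As a small illustration, Docker's built-in orchestrator (Swarm mode) lets you declare how many replicas of a service you want, and it decides which hosts the containers land on. The service and image names here are hypothetical.

```shell
docker swarm init                 # turn this host into a Swarm manager

# Ask for 3 replicas of an auth service; the orchestrator -- not you --
# decides which hosts in the swarm run the containers
docker service create --name auth --replicas 3 myuser/auth-service

docker service ls                 # compare desired vs running replicas
```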

This may not be the most accurate or the complete answer. But I want you to have an idea of what orchestration is until we discuss it fully in a future article.

Kubernetes is one of the platforms we can use for orchestration. (Kubernetes was initiated by Google, but it is now open source.)

(Source: https://devopedia.org/container-orchestration)

Before we wrap up, let's clear up a couple of common misconceptions about Docker.

  1. Docker is not persistent

Some believe that when you shut down a Docker container, everything gets lost. That's not true: stopping and restarting a container preserves its filesystem (much like a VM); data is only lost when the container itself is removed. Still, if it's a database container, it's always advisable to store your data in an external environment as well.
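In practice, "store your data in an external environment" usually means a Docker volume, which survives even when the container is removed. A minimal sketch (container and volume names are just examples):

```shell
# Create a named volume and mount it into a MySQL container
docker volume create db-data
docker run -d --name db -e MYSQL_ROOT_PASSWORD=secret \
  -v db-data:/var/lib/mysql mysql

# Even after the container is force-removed, the volume's data remains,
# and a new container can pick it up
docker rm -f db
docker run -d --name db2 -e MYSQL_ROOT_PASSWORD=secret \
  -v db-data:/var/lib/mysql mysql
```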

2. Docker is only for new applications; it's not for legacy applications.

Well, this is half true, half false. When the 2nd generation (hypervisors) came, a legacy application could easily be pushed into a VM as a whole. But Docker is not like that: a microservice-style architecture is the best fit for container-based applications.

To have all the benefits of containerization (e.g. self-healing), your application should also fit this architecture.

For example, let's see how self-healing works.

Assume you have an HTTP component (e.g. an HTTP transport) and you run it in a single container. Your application is running, but suddenly this HTTP container gets a high-volume load, or it hits an issue and can't serve requests. We can spin up a new container from this HTTP module and redirect the traffic there.
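On a single host, the simplest form of this self-healing is Docker's restart policy; scaling out under load is then one command with an orchestrator. A sketch, where `myuser/http-transport` is a hypothetical image and the service is assumed to be running under Swarm mode:

```shell
# Docker restarts the container automatically if its process crashes
docker run -d --name http-transport --restart=on-failure myuser/http-transport

# Under an orchestrator (Swarm mode), handling a traffic spike is declarative:
# state the desired replica count and let it place the containers
docker service scale http-transport=5
```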

So when people say legacy applications don't support Docker, it's not entirely true. Docker supports legacy applications, but they don't fit this architecture very well.

So if you are trying to migrate a legacy application, it's better to consider matching it to this architecture by rewriting or redesigning the current architecture.

You might be able to fit your legacy application into containers, but you will not get all the benefits of containerization.

We've talked a lot and this is already getting lengthy, but let's cover one more thing before we conclude.

Source: https://www.docker.com/

When Docker Inc. made their project public, some other tech companies out there started to use it as well. But they soon realized it didn't match their requirements and also had some architectural issues. So they decided to come up with something similar to Docker, called Rocket (rkt).

Now two companies were taking different paths toward the same goal. So the two parties came to a mutual agreement and formed the OCI (Open Container Initiative) to steer container development. Therefore, all containers should now comply with the standards of the OCI.

Associate Software Engineer at Virtusa