Docker Architecture

Docker is an open source project created to automate the deployment of applications inside standard Linux filesystems called containers. Docker is a containerized runtime environment that is well suited for deploying microservices. Docker guarantees that your application always runs the same way, even when it runs on different machines. Docker containers are run by the Docker daemon, which acts as middleware on top of the OS.

What is Docker?

[Figure: Docker architecture]

The Docker daemon is much lighter than a full guest OS, which makes this architecture lighter and simpler than virtual machines.

[Figure: VM vs. Docker]

Virtual machine architecture is usually heavier, and more layers are involved in the runtime environment. With a VM there are two OS layers: (1) the host OS, with a hypervisor running as middleware on top of it, and (2) a guest OS running on top of the hypervisor, inside which the application runs.

In Docker there is only one OS involved, and the Docker Engine, acting as middleware, runs all the containers as separate processes in that OS. Each container process acts like a separate, complete OS. The container is not an OS, however; it is just a raw file structure isolated inside an OS process.
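As a quick illustration (the nginx image and the container name "demo" are just examples), a container really does show up as an ordinary process on the host:

    # Start a container in the background
    docker run -d --name demo nginx

    # Its main process is visible like any other process on the host
    docker top demo
    ps aux | grep nginx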

Docker Architecture

The picture below shows that we may run multiple applications in different containers, each of which simulates a separate operating system with a separate filesystem. Each container may hold an application and its dependent libraries without any conflict with another container. For example, we may have different versions of Node.js installed at /usr/bin/node in two containers, each running a separate app. In another scenario, we may have Redis listening on the same port in separate containers, each with its own virtual IP address. This virtualization lets developers use a homogeneous infrastructure for their applications without conflicts over shared resources.

[Figure]
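A rough sketch of the Redis scenario above (image tags and container names are placeholders):

    # Both containers listen on port 6379 inside their own isolated filesystems
    docker run -d --name redis-a redis:3.0
    docker run -d --name redis-b redis:3.2

    # Each container gets its own virtual IP address on the Docker bridge network
    docker inspect -f '{{ .NetworkSettings.IPAddress }}' redis-a
    docker inspect -f '{{ .NetworkSettings.IPAddress }}' redis-b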

Docker is a popular container solution that was originally created to share a single Linux operating system among different containers. Docker has recently announced Windows support through the Windows Container Service. A Docker container wraps an application in a complete filesystem that includes everything it needs to run: code, runtime, system tools, system libraries, or anything else you can install on a server. This guarantees that it will always run the same way, regardless of the environment it is running in.

[Figure]
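As a minimal sketch of such a wrapped filesystem, assuming a Node.js app with a package.json and a server.js in the current directory (all names here are placeholders), a Dockerfile might look like this:

    FROM node:6
    WORKDIR /usr/src/app
    COPY package.json .
    RUN npm install
    COPY . .
    EXPOSE 3000
    CMD ["node", "server.js"]

Building this file with docker build -t myapp . produces an image that carries the code, the Node.js runtime, and every installed library along with it.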

Docker 1.12 includes several new components in its core engine, such as clustering and overlay networking. Docker Swarm, the Docker cluster-management solution, is now built into the Docker Engine. It includes security components such as TLS and a built-in CA, as well as networking components such as load balancing. The Docker Engine now ships with a built-in L4 (IPVS-based) load balancer, which can eliminate the need for an external L7 load balancer to distribute incoming traffic among the different instances of an app running in separate containers.

This means that once you create an overlay network in your cluster, containers can communicate with each other over that network, with an IP load balancer using the virtual IP addresses assigned to each service, regardless of the physical node on which a container is actually running. This allows all of the containers to communicate transparently in a virtual network, with their requests load-balanced across multiple instances of containers of the same type.
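A minimal sketch, assuming a swarm has already been initialized (see the Swarm section below) and an image called myorg/myapp:1.0 exists; both names are placeholders:

    # Create an overlay network that spans the whole cluster
    docker network create --driver overlay appnet

    # Run three replicas of the app on that network; requests to the published
    # port or to the service name "api" are load-balanced across the replicas
    docker service create --name api --network appnet --replicas 3 -p 8080:3000 myorg/myapp:1.0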


Docker Deployment Cycle:

Once a developer is done creating an application, he or she needs to build a Docker image for it. A Docker image is built from the application's executable files installed onto a raw Linux filesystem (e.g., Ubuntu or CentOS), together with all the dependencies, tools, and libraries required to run the application. Once the image is ready, it needs to be pushed to a registry that hosts images.
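For example, assuming the Dockerfile shown earlier and a Docker Hub account named myorg (a placeholder):

    # Build the image, tag it for the registry, and push it
    docker build -t myapp .
    docker tag myapp myorg/myapp:1.0
    docker login
    docker push myorg/myapp:1.0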

Next, you need a Docker host to run the image. After preparing the host, you need to install the Docker runtime and management tools on it.
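One way to do this is with Docker Machine; the driver and host name below are only examples:

    # Provision a VirtualBox VM with Docker Engine pre-installed
    docker-machine create --driver virtualbox docker-host-1

    # Point the local Docker CLI at the new host
    eval $(docker-machine env docker-host-1)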

[Figure]

The next step is telling the Docker runtime to download the image and create a container from it. A container is a process instance running in Docker's standard environment. In programming terms, a Docker image is comparable to the notion of a class, and a Docker container is comparable to an object created from that class. The container is the actual running process in the OS. The good thing about it is that no matter what OS version you are using or what tools and services you are running at the OS level, the Docker container has no conflict with them. The beauty of it is that you can run multiple containers on the same host, each of which acts as a fresh OS filesystem.
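A sketch of this step, reusing the placeholder image name from above:

    # Download the image and start a container (an "instance" of the image)
    docker pull myorg/myapp:1.0
    docker run -d --name myapp -p 8080:3000 myorg/myapp:1.0

    # A second container can be started from the same image without conflict
    docker run -d --name myapp-2 -p 8081:3000 myorg/myapp:1.0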

Docker Swarm:

[Figure: Docker Swarm]

Docker Swarm is a native clustering and orchestration framework built for Docker. Docker Swarm turns a pool of Docker hosts into a single virtual Docker host, hiding the individual Docker daemons behind one endpoint. Swarm makes Docker deployment more transparent, and the user can scale the number of container instances up and down. The Docker CLI communicates with Docker Swarm in order to create, deploy, and scale an application in a swarm cluster. Docker Swarm benefits from an overlay network with a transparent load balancer embedded in the virtual network. Docker Swarm also follows a manager/worker model and supports high availability for both manager and worker nodes. Swarm supports discovery services backed by key/value stores such as ZooKeeper and etcd.
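A minimal sketch of setting up a swarm with Docker 1.12 (the address and token below are placeholders printed by the init command, and the image name is the same placeholder used earlier):

    # On the manager node
    docker swarm init --advertise-addr <MANAGER-IP>

    # On each worker node, using the token printed by "swarm init"
    docker swarm join --token <WORKER-TOKEN> <MANAGER-IP>:2377

    # Back on the manager: deploy a service and scale it out
    docker service create --name web --replicas 2 -p 80:3000 myorg/myapp:1.0
    docker service scale web=5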

I have another post here about how to create a Docker swarm and run your app in a container in a swarm environment.

Docker Swarm Architecture

As mentioned in the previous section, Docker Swarm is the native cluster-management component for Docker. Docker Swarm makes the whole cluster, including all of its workers, appear transparently as a single Docker host. A Docker swarm consists of multiple nodes of two types: (1) managers and (2) workers. A Docker Swarm manager has the following components (a short example of observing their effects follows the list):

  • API: accepts commands from the client and creates the corresponding service objects.
  • Orchestrator: reconciles the desired state of services, for example by creating replacement tasks when containers fail.
  • Scheduler: assigns the containers (tasks) of a service to specific nodes, for example when a new service is created or when an existing container stops sending heartbeats.
  • Dispatcher: coordinates the running of each container on its assigned node and tracks which containers run on which host.
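These components are internal to the manager, but their effects can be observed from a manager node; the service name below refers to the placeholder "web" service from the previous section:

    # List the nodes in the swarm and their roles
    docker node ls

    # See which node the scheduler placed each container (task) of a service on
    docker service ps web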

[Figure]

Each Docker worker node is a host on which an operating system and a Docker Engine are installed. The Docker Engine is the layer that sits on top of the OS and virtualizes the operating system for the containers. Docker provides other tools as well: Docker Machine is a tool that lets you install Docker Engine on virtual hosts, and Docker Compose is a tool that lets you combine multiple containers so that a single command creates and starts them all.
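As a small, hypothetical example, a docker-compose.yml that combines an app container with a Redis container might look like this (the image names are placeholders):

    version: "2"
    services:
      web:
        image: myorg/myapp:1.0
        ports:
          - "8080:3000"
        depends_on:
          - redis
      redis:
        image: redis:3.2

Running docker-compose up -d then creates and starts both containers with a single command.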


To get started: https://github.com/bhajian/express-togo