Google Kubernetes

Microservice architecture is a style of decomposing a monolithic application into services using service orientation. These services may use various technologies, programming languages, and data-storage technologies. Some of these technologies may conflict with each other over resources such as memory, ports, CPU, IO, namespaces, or environment variables. For instance, one piece of our application may use Java 6 while another piece uses Java 8. This made deployment to production a difficult task for DevOps teams.

Virtualization is a way of avoiding this problem by sharing resources among isolated environments. Virtualization may happen at the hardware layer, where two layers of OS appear at the cost of considerable disk space and resources, or by sharing the same OS among different containers, which is a much more lightweight level of isolation. Docker is the most popular containerization technology, though it needs other tools for managing its containers. This post helps you understand more about how Docker virtualizes the OS for an application.

Kubernetes is open source middleware for managing containerized applications, automating deployment, and scaling containers. Kubernetes was created by Google and is considered a heterogeneous container orchestration system that works on both Linux and Windows.


Kubernetes Architecture


  • API Server: exposes the Kubernetes REST API, which is used both by application developers and by the other Kubernetes components to manage the whole lifecycle of applications.
  • Scheduler: schedules pods and their containers across all of the nodes.
  • etcd: a configuration database/directory used by the API server to store cluster state.
  • Controller Manager: manages and coordinates the controllers of the whole ecosystem.
  • podmaster: in high-availability setups, elects which master's scheduler and controller manager are active.
  • kubectl: a CLI tool that communicates with the master components.
  • kubelet: a node agent running on each node host to manage all of the pods and containers and communicate with the master. Kubelet is monitored by systemd (on CentOS) or monit (on Debian).
  • monit/systemd: the OS-level process supervisor that monitors kubelet.
  • Supervisord: makes sure that both docker and kubelet are running all the time.
  • kube-proxy: redirects all requests to an appropriate pod; kube-proxy is basically a simple round-robin load balancer.
  • docker daemon: Kubernetes uses Docker as its container runtime on Linux.
  • fluentd: collects the logs and ships them to the log manager.
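Once a cluster is up, several of these components can be inspected directly through kubectl; a quick sketch (the exact output depends on your cluster):

```shell
# Show the health of the scheduler, controller manager, and etcd
kubectl get componentstatuses

# Show the addresses of the master and cluster add-ons such as kube-dns
kubectl cluster-info
```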

Definitions

There are concepts in the Kubernetes infrastructure which are defined below.

Node

A node is a host with a container service (in this case a Docker host), with a kubelet (node agent) and a proxy service running on it.

Pod

A pod (as in a pod of whales or pea pod) is a group of one or more containers (such as Docker containers), the shared storage for those containers, and options about how to run the containers. Pods are always co-located and co-scheduled, and run in a shared context. A pod models an application-specific “logical host” – it contains one or more application containers which are relatively tightly coupled — in a pre-container world, they would have executed on the same physical or virtual machine.

For more info about pods please visit: http://kubernetes.io/docs/user-guide/pods/

Service

A Kubernetes Service is an abstraction which defines a logical set of Pods and a policy by which to access them, sometimes called a micro-service. The set of Pods targeted by a Service is (usually) determined by a Label Selector. As an example, consider an image-processing backend which is running with 3 replicas. Those replicas are fungible: frontends do not care which backend they use. While the actual Pods that compose the backend set may change, the frontend clients should not need to be aware of that or keep track of the list of backends themselves. The Service abstraction enables this decoupling.

Please visit http://kubernetes.io/docs/user-guide/services/ for more info.

Label

Labels are key/value pairs attached to objects, such as pods, in order to identify them. Each key has to be unique for a given object.
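For example, labels such as the following can be attached to a pod's metadata (the names here are illustrative, matching the demo application used later in this post):

```yaml
metadata:
  name: microservice1
  labels:
    name: microservice1   # identifies this particular set of pods
    app: demo             # groups all pods belonging to the demo application
```

A label selector then picks out matching objects; for instance, kubectl get pods -l app=demo lists every pod carrying the app: demo label.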

Replication Controller

Kubernetes Pods are mortal. They are born and they die, and they are not resurrected. ReplicationControllers in particular create and destroy Pods dynamically. A replication controller ensures that a specified number of pod “replicas” are running at any one time. In other words, a replication controller makes sure that a pod or homogeneous set of pods is always up and available. If there are too many pods, it will kill some. If there are too few, the replication controller will start more.

Unlike manually created pods, the pods maintained by a replication controller are automatically replaced if they fail, get deleted, or are terminated. For example, your pods get re-created on a node after disruptive maintenance such as a kernel upgrade. For this reason, we recommend that you use a replication controller even if your application requires only a single pod. You can think of a replication controller as something similar to a process supervisor, but rather than individual processes on a single node, the replication controller supervises multiple pods across multiple nodes.

visit: http://kubernetes.io/docs/user-guide/replication-controller/

Kubernetes installation on a Linux machine (CentOS / Ubuntu) using kubeadm (no-HA)

In the installation process you will need to install the following components on all of your nodes, including the master:

  • docker
  • kubelet
  • kubectl
  • kubeadm

1- In order to do so, you will need to execute the following instructions:

  • For Ubuntu
# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
# cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
# apt-get update
# # Install docker if you don't have it already.
# apt-get install -y docker.io
# apt-get install -y kubelet kubeadm kubectl kubernetes-cni
  • For CentOS:
# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
# setenforce 0
# yum install -y docker kubelet kubeadm kubectl kubernetes-cni
# systemctl enable docker && systemctl start docker
# systemctl enable kubelet && systemctl start kubelet

2- Now you need to initialize your master with the following command on the master node:

$ sudo kubeadm init

Note: this will autodetect the network interface to advertise the master on as the interface with the default gateway. If you want to use a different interface, specify the --api-advertise-addresses=<address> argument to kubeadm init.
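For instance, to pin the advertised address to a specific interface explicitly (the IP below is only an example):

```shell
# Advertise the API server on a chosen interface address instead of
# the autodetected default-gateway interface
sudo kubeadm init --api-advertise-addresses=192.168.1.10
```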

The output of the command is similar to:

<master/tokens> generated token: "f0c861.753c505740ecde4c"
<master/pki> created keys and certificates in "/etc/kubernetes/pki"
<util/kubeconfig> created "/etc/kubernetes/kubelet.conf"
<util/kubeconfig> created "/etc/kubernetes/admin.conf"
<master/apiclient> created API client configuration
<master/apiclient> created API client, waiting for the control plane to become ready
<master/apiclient> all control plane components are healthy after 61.346626 seconds
<master/apiclient> waiting for at least one node to register and become ready
<master/apiclient> first node is ready after 4.506807 seconds
<master/discovery> created essential addon: kube-discovery
<master/addons> created essential addon: kube-proxy
<master/addons> created essential addon: kube-dns

Kubernetes master initialised successfully!

You can connect any number of nodes by running:

kubeadm join --token <token> <master-ip>

The init output gives you this command, with a generated token, to execute on the other nodes to join the cluster.

  • By default Kubernetes will not run pods on the master unless you tell it to do so with the following command: $ kubectl taint nodes --all dedicated-

3- Installing a pod network (overlay network):

In order for pods to reach each other you will need to install an overlay network (virtual network). It is necessary to do this before you try to deploy any applications to your cluster, and before kube-dns will start up. Note also that kubeadm only supports CNI-based networks, and therefore kubenet-based networks will not work. You can install a pod network add-on with the following command:

$ kubectl apply -f <add-on.yaml>

You can install either:

  • Weave Net, installed by
sudo curl -L git.io/weave -o /usr/local/bin/weave
sudo chmod a+x /usr/local/bin/weave
  • Calico, installed by

$ kubectl apply -f calico.yaml

4- Joining your nodes: Now it is time to execute the following command on your worker nodes:

kubeadm join --token <token> <master-ip>

  • $ kubectl get nodes shows you all the nodes in your cluster.

The following command $ kubectl get pods --all-namespaces returns all of the running pods on your master, including the Kubernetes components. One of these components is the DNS pod, which has to be in Running status after installing the overlay network. If you see all of the following components in a Running state, then you have successfully installed Kubernetes.

NAMESPACE     NAME                                READY     STATUS    RESTARTS   AGE
default       microservice1-2its5                 1/1       Running   0          22h
default       microservice1-9dw5f                 1/1       Running   0          22h
default       microservice1-e63uk                 1/1       Running   0          22h
default       microservice1-iosqc                 1/1       Running   0          22h
default       microservice1-xlf1d                 1/1       Running   0          22h
kube-system   etcd-kub-apim1                      1/1       Running   0          3d
kube-system   kube-apiserver-kub-apim1            1/1       Running   0          3d
kube-system   kube-controller-manager-kub-apim1   1/1       Running   0          3d
kube-system   kube-discovery-982812725-gp5k9      1/1       Running   0          3d
kube-system   kube-dns-2247936740-0zn8m           3/3       Running   0          3d
kube-system   kube-proxy-amd64-4om7c              1/1       Running   0          6d
kube-system   kube-proxy-amd64-e54k9              1/1       Running   0          6d
kube-system   kube-scheduler-kub-apim1            1/1       Running   0          3d
kube-system   weave-net-qluvr                     2/2       Running   0          3d
kube-system   weave-net-qp1oa                     2/2       Running   0          3d

For more info about kubernetes installation please visit: http://kubernetes.io/docs/getting-started-guides/kubeadm/

Run your app in Kubernetes

Every application consists of different components: front-end, back-end, database, middleware, services, etc. In order to run these components we need to containerize them and build proper Docker images. Next, we have to create appropriate pods, each of which consists of logically related containers.


Then we need to create a service, which is a logical abstraction over each set of pods. A service contains labels, definitions, IPs, ports, etc., which glue pods to one another. A service is what makes different components available to each other at the network layer.
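For example, once a service named microservice1 exists, another pod in the cluster can reach it through its cluster DNS name instead of any individual pod IP; a sketch, assuming the default namespace and the port 3000 used later in this post:

```shell
# From inside any pod in the cluster: call the service by name.
# kube-dns resolves the service name to its stable cluster IP,
# and kube-proxy load-balances the request across the matching pods.
curl http://microservice1.default.svc.cluster.local:3000/
```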

The other entity in the Kubernetes infrastructure is the replication controller, through which pods are scaled to a certain number. The replication controller controls the scalability of each service and makes sure that a certain number of pods is always running in Kubernetes.

1- Create a docker image

First, set up a Docker registry using the following instructions: http://www.imotif.net/index.php/2016/10/19/setup-docker-swarm/ and copy the registry certificate onto all of the nodes.

Then build your Docker image using the instructions at http://www.imotif.net/index.php/2016/10/19/setup-docker-swarm/ and push the built image into the registry. These steps containerize your application and make it available to all of the nodes in your cluster.

2- Create a pod

It is now time to create a pod definition with the following instructions: Create a file and name it microservice1-pod.yaml

apiVersion: "v1"
kind: Pod
metadata:
  name: microservice1
  labels:
    name: microservice1
    app: demo
spec:
  containers:
    - name: microservice1
      image: swarm-apim3.rtp.raleigh.ibm.com:5000/microservice1
      ports:
        - containerPort: 3000
          name: http
          protocol: TCP

Now execute the following command to create the pod in your Kubernetes cluster.

$ kubectl create -f microservice1-pod.yaml

3- Create a service

Then you will need to create a service to wrap your pods in an abstract concept and make them available through certain IP addresses and ports. The selector section in the service definition is where you specify the pod labels the service targets, which makes those pods available in the overlay network under the service's name. Create a file and name it microservice1-svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: microservice1
  labels:
    name: microservice1
    app: demo
spec:
  selector:
    name: microservice1
  type: NodePort
  ports:
    - port: 3000
      targetPort: 3000
      protocol: TCP
  externalIPs:
    - 9.42.103.48
    - 9.42.103.113

Now execute the following command: $ kubectl create -f microservice1-svc.yaml
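Since the service is of type NodePort, Kubernetes also allocates a port on every node that forwards to the selected pods. A sketch of how you might verify and reach the service (the node IP is one of the externalIPs above; the node port shown is an example, use the one reported for your service):

```shell
# Show the cluster IP, node port, and the pod endpoints the selector matched
kubectl describe service microservice1

# Reach the service from outside the cluster through any node's IP,
# replacing 30501 with the NodePort reported by the command above
curl http://9.42.103.48:30501/
```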

4- Create a replication controller

You can create a pod manually or through a replication controller. However, if you create a pod manually and that pod gets terminated, Kubernetes will not recreate it. A replication controller makes sure a certain number of instances of a pod is always running in Kubernetes. In order to create a replication controller, create a file named microservice1-rc.yaml with the following content:

apiVersion: v1
kind: ReplicationController
metadata:
  name: microservice1
  labels:
    name: microservice1
    app: demo
spec:
  replicas: 5
  template:
    metadata:
      labels:
        name: microservice1
    spec:
      containers:
        - name: microservice1
          image: swarm-apim3.rtp.raleigh.ibm.com:5000/microservice1
          ports:
            - containerPort: 3000
              name: http
              protocol: TCP

Now execute the following command: kubectl create -f microservice1-rc.yaml

This creates 5 instances of your microservice, each running in a container inside a pod. You can list the running pods, services, and replication controllers with the following commands:

  • $ kubectl get|update|delete pod <podId>
  • $ kubectl get|update|delete service <serviceName>
  • $ kubectl get|update|delete rc <rcName>
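The replica count can also be changed on a live replication controller without editing the YAML file; a sketch:

```shell
# Scale the replication controller from 5 to 10 pods; the controller
# creates the extra pods (or kills surplus ones) to match the new count
kubectl scale rc microservice1 --replicas=10

# Watch the new pods come up, selected by their label
kubectl get pods -l name=microservice1
```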