Getting started with K8s

Harsh Mighlani
Oct 16, 2020

Step 1: Kubectl and local set-up

Kubernetes is an open-source platform for managing containerized apps in production. Kubernetes makes it easier for you to automatically scale your app, reduce downtime, and increase security. No more writing scripts to check, restart, and change the number of Docker containers. Instead, you tell K8s your desired number of containers and it does the work for you. K8s can even automatically scale your containers based on resources used.

Source: https://miro.medium.com/max/1749/1*FL7mI4mEEzPvg4k2-7wk9w.png

In Kubernetes, pods are the basic units that get deployed in the cluster. A Kubernetes deployment is an abstraction layer over the pods. The main purpose of the Deployment object is to keep the resources declared in the deployment configuration in their desired state. A deployment configuration can be in YAML or JSON format.
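
As a minimal sketch (the names and image below are illustrative, not taken from the article's screenshots), a deployment configuration in YAML looks like this:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-deployment    # illustrative name
    spec:
      replicas: 2               # desired state: two pods
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx:alpine # same image used later in this article
            ports:
            - containerPort: 80

Kubernetes continuously compares this desired state against the actual state of the cluster and reconciles any drift.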

Why Kubernetes?

  • Horizontal infrastructure scaling: New servers can be added or removed easily.
  • Auto-scaling: Automatically change the number of running containers based on CPU utilization or other application-provided metrics (see the command sketch after this list).
  • Manual scaling: Manually scale the number of running containers through a command or the interface (also sketched after this list).
  • Replication controller: The replication controller makes sure your cluster has an equal amount of pods running. If there are too many pods, the replication controller terminates the extra pods. If there are too few, it starts more pods.
  • Health checks and self-healing: Kubernetes can check the health of nodes and containers ensuring your application doesn’t run into any failures. Kubernetes also offers self-healing and auto-replacement so you don’t need to worry about if a container or pod fails.
  • Traffic routing and load balancing: Traffic routing sends requests to the appropriate containers. Kubernetes also comes with built-in load balancers so you can balance resources in order to respond to outages or periods of high traffic.
  • Automated rollouts and rollbacks: Kubernetes handles rollouts for new versions or updates without downtime while monitoring the containers’ health. In case the rollout doesn’t go well, it automatically rolls back.
  • Canary Deployments: Canary deployments enable you to test the new deployment in production in parallel with the previous version.
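
As a quick sketch of manual scaling and auto-scaling from the command line (the deployment name web is hypothetical, not from the article):

    # Manually pin a deployment at 5 replicas
    kubectl scale deployment web --replicas=5

    # Let Kubernetes scale between 2 and 10 replicas, targeting 80% CPU
    kubectl autoscale deployment web --cpu-percent=80 --min=2 --max=10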

The tutorial below assumes Docker Desktop is installed on a Windows machine. You can then enable Kubernetes in Docker Desktop as described here:

Installation: https://docs.docker.com/docker-for-windows/kubernetes/

kubectl: The Kubernetes command-line tool, kubectl, allows you to run commands against Kubernetes clusters. You can use kubectl to deploy applications, inspect and manage cluster resources, and view logs.
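
To verify that kubectl can reach the local cluster, two standard checks are:

    # Confirm the cluster is reachable
    kubectl cluster-info

    # List the nodes in the cluster (Docker Desktop runs a single node)
    kubectl get nodes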

While the command line is a good place to start, files take over, since we need to maintain container definitions in repositories. For example, below is a very simple Nginx container wrapped in a YAML file:
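
A minimal manifest along these lines (the pod name nginx and the file name nginx.yml are assumptions, chosen to match the commands that follow):

    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx
      labels:
        app: nginx          # label used by the service examples below
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80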

To create and inspect the above pod, run:

  • Create the pod: kubectl create -f nginx.yml --save-config
  • Describe it: kubectl describe pod [pod-name]
  • Get the live configuration/versions: kubectl get pod [pod-name] -o yaml

The pod above won't be reachable from a browser right away; to expose it, you need a service, as shown below:

NodePort exposes a service externally to the cluster by means of the target nodes' IP addresses and the NodePort. If the nodePort field is not specified, Kubernetes allocates a port from the node port range automatically.
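
As a sketch, such a service can be created straight from the command line (the port values here are assumptions, matching the examples below):

    # Expose the nginx pod through a NodePort service;
    # Kubernetes allocates the node port automatically
    kubectl expose pod nginx --type=NodePort --port=8080 --target-port=80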

Alternatively:
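
A service manifest consistent with the port mapping described below (the service name nginx-svc is an assumption):

    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-svc
    spec:
      type: NodePort
      selector:
        app: nginx
      ports:
      - port: 8080        # cluster-internal service port
        targetPort: 80    # container port
        nodePort: 30036   # port exposed on each node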

The above port mapping exposes the container on port 8080 inside the cluster and on nodePort 30036 externally.

Breaking down the YAML file, each config file has three components:

  • Metadata
  • Specification (spec)
  • Status (automatically created and added by K8s): a comparison of the desired v/s actual state

NodePort is a configuration setting you declare in a service’s YAML. Set the service spec’s type to NodePort. Then, Kubernetes will allocate a specific port on each Node to that service, and any request to your cluster on that port gets forwarded to the service.

In order to access the service, you need either clusterIP:8080 (from inside the cluster) or nodeIP:30036 (from outside).
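
To check those values and hit the service (assuming the nginx-svc name from the sketch above; with Docker Desktop the node is localhost):

    # Shows the cluster IP and the 8080:30036/TCP port mapping
    kubectl get service nginx-svc

    # From outside the cluster
    curl http://localhost:30036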

This is cool and easy, it’s just not super robust. You don’t know what port your service is going to be allocated, and the port might get re-allocated at some point.

You can set a service to be of type LoadBalancer the same way you’d set NodePort— specify the type property in the service’s YAML. There needs to be some external load balancer functionality in the cluster, typically implemented by a cloud provider.

Every time you want to expose a service to the outside world, you have to create a new LoadBalancer and get an IP address.
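
A minimal sketch of a LoadBalancer service (the name is illustrative; without load balancer support, e.g. on a plain local setup, the external IP will stay pending):

    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-lb
    spec:
      type: LoadBalancer   # needs external load balancer support
      selector:
        app: nginx
      ports:
      - port: 80
        targetPort: 80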

If we create a pod from a file as above, we can get its metadata with kubectl describe pod [pod-name].

However, if it is created with a command like:

kubectl run mypod1 --image=nginx:alpine

you can get all the metadata with kubectl describe deployment [deployment-name].

Rolling update strategy — This strategy aims to prevent application downtime by keeping at least some instances up-and-running at any point in time while performing the updates.
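
As a sketch, the strategy is declared in the deployment spec; the values below are illustrative, not the cluster defaults:

    spec:
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxSurge: 1         # at most one extra pod created during the update
          maxUnavailable: 1   # at most one pod taken down at a time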

Running Kubernetes Dashboard:

  1. Run: kubectl describe secret -n kube-system
  2. Pick the token string just below Type: kubernetes.io/service-account-token
  3. Copy the token.
  4. Run kubectl proxy; it will serve on 127.0.0.1:8001.
  5. Go to http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login
  6. Paste the token and voilà!
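
If the dashboard is not already deployed in the cluster, it needs to be installed first; a commonly used manifest around the time of writing (the version tag here is an assumption and may differ) was:

    kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml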

Commands used (the -w flag watches the resources and streams changes live):

  • kubectl get pods -w
  • kubectl get replicasets -w
  • kubectl get deployments -w
  • kubectl get events -w


Harsh Mighlani

AWS Certified Solutions Architect | 12+ years of experience | Loves serverless & containerization use cases.