Up and Running: A Kubernetes Cluster on AWS EKS in 10 Minutes

In this post, I will show you how to get a Kubernetes cluster up and running on AWS EKS in 10 minutes with a CLI tool called eksctl.

eksctl is a CLI tool written in Go by Weaveworks, built on Amazon’s official CloudFormation templates.

Install eksctl

In our example, we will use wget, so make sure it is installed before you proceed.

On macOS you can do:
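A sketch of the install, following the instructions on eksctl’s GitHub releases page (the release URL and Homebrew tap may change between versions, so check there for the current commands):

```shell
# macOS: install eksctl via the Weaveworks Homebrew tap
brew tap weaveworks/tap
brew install weaveworks/tap/eksctl

# Alternative for Linux/macOS: download a release binary with wget
wget -qO- "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" \
  | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin

# Verify the installation
eksctl version
```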

Configure AWS API credentials

Install pip:
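A sketch using the official get-pip.py bootstrap script from pypa.io:

```shell
# Download and run the official pip bootstrap script
wget https://bootstrap.pypa.io/get-pip.py
python get-pip.py --user

# Verify pip is available
pip --version
```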

You can now install the awscli package using pip:
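For example:

```shell
# Install (or upgrade) the AWS CLI into the user site-packages
pip install awscli --upgrade --user

# Verify
aws --version
```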

Let’s configure the AWS access key/secret:
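Run the interactive configuration; the values below are placeholders for your own credentials and preferred region:

```shell
aws configure
# AWS Access Key ID [None]: <your-access-key-id>
# AWS Secret Access Key [None]: <your-secret-access-key>
# Default region name [None]: us-west-2
# Default output format [None]: json
```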

Create Amazon EKS Cluster with eksctl

When all settings have been saved, you can now create a new cluster on EKS:
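In its simplest form (defaults such as node count, instance type, and cluster name vary by eksctl version, so treat this as a sketch):

```shell
# Create an EKS cluster with eksctl's defaults
# (auto-generated name, a small managed node group, region from your AWS config)
eksctl create cluster
```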

Options that can be used include:
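The authoritative list comes from the tool itself; the flags below are commonly used ones and may differ slightly between eksctl versions:

```shell
eksctl create cluster --help   # full, authoritative list of options

# Commonly used flags:
#   --name         cluster name
#   --region       AWS region to deploy into
#   --version      Kubernetes version
#   --nodes        number of worker nodes
#   --node-type    EC2 instance type for the workers
#   --ssh-access   allow SSH access to the worker nodes
```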

Let’s begin creating our first Kubernetes cluster with eksctl:
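A sketch of a full create command; the cluster name, region, node count, and instance type are all illustrative choices:

```shell
eksctl create cluster \
  --name my-first-cluster \
  --region us-west-2 \
  --nodes 3 \
  --node-type t3.medium
```

This takes several minutes, since eksctl drives CloudFormation to provision the control plane and the worker node group.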

Once you have created a cluster, the cluster credentials will be added to ~/.kube/config.
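You can verify the credentials work with kubectl (the cluster name below is illustrative):

```shell
# List the worker nodes and cluster services
kubectl get nodes
kubectl get svc

# If needed, regenerate the kubeconfig entry for a cluster
eksctl utils write-kubeconfig --cluster=my-first-cluster
```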

To enable an Auto Scaling group for the worker node group, use these options:
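For example (bounds and cluster name are illustrative):

```shell
eksctl create cluster \
  --name my-first-cluster \
  --nodes 3 \
  --nodes-min 1 \
  --nodes-max 5 \
  --asg-access
```

`--nodes-min`/`--nodes-max` set the Auto Scaling group bounds, and `--asg-access` grants the IAM permissions the cluster autoscaler needs.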

To get details about the deployed cluster or delete cluster:
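A sketch, again with an illustrative cluster name and region:

```shell
# List all eksctl-managed clusters
eksctl get cluster

# Details of one cluster
eksctl get cluster --name my-first-cluster --region us-west-2

# Tear the cluster down (deletes the CloudFormation stacks)
eksctl delete cluster --name my-first-cluster
```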

Deploying Kubernetes on the AWS EKS service with eksctl is easy; you do not need to struggle with the AWS console UI. All cluster configuration is saved on the deployment machine, so you can quickly make changes and update your cluster.

Kubernetes Objects Tutorial – The Easy Way

Kubernetes Objects Tutorial – let’s understand it the easy way. In this blog post, I will help you understand Kubernetes objects and how to express them in a .yaml file.

Kubernetes Objects

Kubernetes Objects are persistent entities in the Kubernetes system. Kubernetes uses these entities to represent the state of your cluster. Specifically, they can describe:

  • What containerized applications are running (and on which nodes)
  • The resources available to those applications
  • The policies around how those applications behave, such as restart policies, upgrades, and fault-tolerance

Once a Kubernetes object is created, the Kubernetes system constantly works to ensure that the object exists and matches the state you declared.

Kubernetes Objects Spec and Status

Every Kubernetes object includes two nested fields that describe the object’s configuration:

  • Object spec: describes the desired characteristics of the object
  • Object status: describes the actual state of the object

For example, a Kubernetes Deployment is an object that can represent an application running on your cluster.

  • Object spec: 3 replicas of the application
  • Object status: 3 replicas are running, or only 2 replicas are running, etc.

Kubernetes Objects Example

Here’s an example .yaml file that shows the required fields and objects spec for a Kubernetes Deployment:

Download: nginx-deployment.yaml
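The downloadable file is not reproduced here; the sketch below follows the classic nginx Deployment example from the Kubernetes documentation (the image tag and replica count are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
```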

 

The output is similar to this:
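Assuming the manifest is saved as nginx-deployment.yaml, you create the object and inspect it like this (exact column widths and the AGE value vary by kubectl version and timing):

```shell
kubectl apply -f nginx-deployment.yaml
# deployment.apps/nginx-deployment created

kubectl get deployments
# NAME               READY   UP-TO-DATE   AVAILABLE   AGE
# nginx-deployment   3/3     3            3           18s
```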

Required Fields To Create Kubernetes Objects

  • apiVersion – Which version of the Kubernetes API you’re using to create this object
  • kind – What kind of object you want to create
  • metadata – Data that helps uniquely identify the object, including a name string, UID, and optional namespace

You’ll also need to provide the object spec field. The precise format of the object spec is different for every Kubernetes object, and contains nested fields specific to that object.

Kubernetes – Choose your own deployment strategies

In Kubernetes, there are many ways to deploy an application to production. Which one is best depends on your needs, so choose the strategy that matches your requirements for reliability.

Some of the possible strategies to adopt:

  • recreate: terminate the old version, then release the new one
  • ramped: release the new version in a rolling-update fashion, one instance after another
  • blue/green: release the new version alongside the old version, then switch traffic
  • canary: release the new version to a subset of users, then proceed to a full rollout
  • a/b testing: release two versions concurrently, routing based on HTTP headers, cookies, weights, etc. This technique requires more setup on the infrastructure side, with Istio, Traefik, or a custom nginx/haproxy.

Recreate – best for development environment

A deployment defined with a strategy of type Recreate will terminate all the running instances then recreate them with the newer version.
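In a Deployment manifest this is a one-line strategy setting; the fragment below is a sketch of the relevant part of the spec:

```yaml
spec:
  replicas: 3
  strategy:
    type: Recreate   # kill all old pods before starting new ones (causes downtime)
```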

 

Ramped – slow rollout

How it works: a second ReplicaSet is created with the new version of the application; then the number of replicas of the old version is decreased and the number for the new version is increased, until the target replica count is reached.
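The rollout pace is controlled by two fields in the Deployment spec; the values below are illustrative:

```yaml
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most one extra pod may be created during the update
      maxUnavailable: 0  # no pod may become unavailable during the update
```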

 

Blue/Green – best to avoid API versioning issues

A blue/green deployment differs from a ramped deployment because the “green” version of the application is deployed alongside the “blue” version. After testing that the new version meets the requirements, we update the Kubernetes Service object that plays the role of the load balancer to send traffic to the new version by replacing the version label in the selector field.
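The traffic switch is just a label change on the Service selector; in the hypothetical sketch below, changing `version` from v1.0.0 to v2.0.0 moves all traffic to the green Deployment:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
    version: v2.0.0   # was v1.0.0; updating this label cuts traffic over
  ports:
  - port: 80
    targetPort: 8080
```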

 

Canary – let the consumer do the testing

A canary deployment can be done using two Deployments with common pod labels. One replica of the new version is released alongside the old version. Then after some time and if no error is detected, scale up the number of replicas of the new version and delete the old deployment.
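With plain Deployments, the traffic split is simply the replica ratio, since the Service load-balances across all pods matching its selector. A sketch with hypothetical deployment names:

```shell
# Old version serves most traffic
kubectl scale deployment my-app-v1 --replicas=9
# New version gets roughly 10% of traffic (both Deployments share the
# pod label app=my-app, which the Service selects on)
kubectl scale deployment my-app-v2 --replicas=1

# If no errors are detected, ramp up the canary and retire the old version
kubectl scale deployment my-app-v2 --replicas=10
kubectl delete deployment my-app-v1
```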

References

  • https://github.com/ContainerSolutions/k8s-deployment-strategies
  • https://www.cncf.io/wp-content/uploads/2018/03/CNCF-Presentation-Template-K8s-Deployment.pdf

How Kubernetes Works


#Kubernetes Overview:

  • Kubernetes is a container management platform
  • Created by Google
  • Written in Go (Golang)
  • Also known as K8s

 

#Master Node

The master is the control plane, the brain of a k8s cluster. A master comprises a few components:

  • api-server – Exposes a REST API to talk to the k8s cluster; consumes JSON; only the api-server talks to the cluster store.
  • Cluster store (KV) – Cluster state and config management.
  • Scheduler – Watches the api-server for new pods and assigns them to nodes.
  • Controller – A daemon that watches the state of the cluster to maintain the desired state. Examples are the replication controller, namespace controller, etc. It also performs garbage collection of pods, nodes, events, etc.

#Node

  • Kubelet – The k8s agent that registers the node with the cluster, watches the api-server, instantiates pods, and reports back to the api-server. If a pod fails, it reports to the master, and the master decides what to do. Exposes port 10255 on the node.
  • Container Engine – Handles container management: pulling images, starting/stopping containers. Usually Docker is used as the container runtime.
  • kube-proxy – Responsible for networking. Provides a unique IP per pod (all containers in a pod share the same IP) and load-balances across all pods in a service.

#Pods

  • An environment to run containers
  • It has a network stack, kernel namespaces, and one or more containers running
  • A container always runs inside a pod
  • A pod can have multiple containers
  • It is the unit of scaling in k8s

#Services

Pods come and go, each with a different IP. To distribute load and act as a single point of interaction for all pods of an application, a service plays that role.

  • Has a single IP and DNS name
  • Created with a manifest JSON/YAML file
  • All new pods get added/registered to the service
  • Which pods belong to which service is decided by labels
  • Services and pods carry labels, on the basis of which a service identifies its pods
  • Only sends traffic to healthy pods
  • A service can point to things outside the cluster
  • Uses TCP by default (UDP is also supported)

#Deployments

A Deployment is a k8s object whose task is to keep identical pods running and to upgrade them in a controlled way.

  • Deployed using a YAML/JSON manifest
  • Deployed via the api-server
  • Provides rolling updates of pods
  • Provides rollbacks

#Detailed Architecture

#Overall Flow

  • kubectl writes to the API Server
  • API Server validates the request and persists it to Cluster store(etcd)
  • Cluster store (etcd) notifies back the API Server
  • API Server invokes the Scheduler
  • Scheduler decides which node to run the pod on and returns that to the API Server
  • API Server persists it to etcd
  • etcd notifies back the API Server.
  • API Server invokes the Kubelet in the corresponding node
  • Kubelet talks to the Docker daemon using the API over the Docker socket to create the container
  • Kubelet updates the pod status to the API Server
  • API Server persists the new state in etcd
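You can observe part of this flow yourself: create a pod and watch the scheduling and kubelet steps appear as events (the pod and image names below are illustrative):

```shell
# Create a single pod via the API Server
kubectl run nginx --image=nginx --restart=Never

# The Scheduler's assignment and the kubelet's actions show up as events
kubectl get events --sort-by=.metadata.creationTimestamp

# Per-pod view: shows Scheduled, Pulling, Created, Started
kubectl describe pod nginx
```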

The 2018 DevOps RoadMap - Your Guide to Become a DevOps Engineer


DevOps is really hot at the moment, and senior developers and sysadmins are working hard to become DevOps engineers.

I truly understand the benefit of DevOps, which is directly linked to improving software development (Dev) and deployment (Ops), and I can say that it’s not an easy job.

Many of my friends and colleagues have asked me how to become a DevOps engineer: which tools should I learn? What about Docker and Kubernetes? Is infrastructure automation part of DevOps? Should I learn Chef, Puppet, or Ansible …

I was casually surfing the internet and came across this excellent GitHub page by kamranahmedse, which shows a couple of useful roadmaps to become a front-end developer, a back-end developer, a full-stack web developer and, last but not least, a DevOps engineer.

The 2018 DevOps RoadMap for Developers

Here is the 2018 DevOps RoadMap I am talking about:


Now, let’s go through the roadmap step by step and find out how we can learn the essential skills required to become a DevOps guru in 2018:

1. Learn a Programming Language

Obviously, I assume you already know at least one of the main programming languages, i.e. Java, Python, NodeJS, or Go.

best course to learn Java

 

best course to learn Python

2. Understand different OS concepts 

This is where the Ops part comes in. Earlier, it was solely support and sysadmin people who were responsible for knowing about the OS and hardware, but with DevOps, developers now also need to know them. You at least need to know about process management, threads and concurrency, sockets, I/O management, virtualization, memory storage, and file systems, as suggested in the roadmap.

3. Learn to Live in terminal

For a DevOps guy, it’s important to have a good command of the command line, particularly when working in Linux. Knowing a Linux shell like Bash or Ksh, tools like find, grep, awk, sed, and lsof, and networking commands like nslookup and netstat is mandatory.

If you feel you need to refresh these commands and tools then you should join the Linux Command Line Interface (CLI) Fundamentals course on Pluralsight.

best course to master Linux commands

Btw, if you need more choices and want to master shell scripting, you can also take a look at my list of the best courses to learn shell scripting.

4. Networking and Security

Gone are the days of isolation; in today’s world, everything is connected to everything, which makes networking and security very important. To become a good DevOps engineer, you must know about basic networking and security concepts like DNS, the OSI model, HTTP, HTTPS, FTP, SSL, and TLS. To refresh these concepts, you can take a look at this course on Pluralsight.

5. Know what to set up and how to set it up

As a DevOps champion, you should know what is set up on your machine and how to set it up, because only then can you think about automating it. In general, a DevOps engineer should know how to set up a web server like IIS, Apache, or Tomcat. He should also know about caching servers, load balancers, reverse proxies, firewalls, etc.

6. Learn Infrastructure as code 

This is probably the most important thing for a DevOps engineer, and it is a very vast area as well. As a DevOps engineer, you should know about containers like Docker and Kubernetes, configuration management tools like Ansible, Chef, Salt, and Puppet, and infrastructure provisioning tools like Terraform and CloudFormation. Here are some of my recommended courses to learn these tools.

best course to learn Docker
best course to learn Kubernetes

7. Learn some Continuous Integration and Delivery (CI/CD) tools

This is another very important skill for DevOps gurus and champions: setting up a pipeline for continuous integration and delivery. There are a lot of tools in the CI/CD area, e.g. Jenkins, TeamCity, and Drone.

But I strongly recommend learning at least Jenkins, as it’s the most widely used and probably the most mature CI/CD tool on the market. If you don’t know Jenkins, then this course is the best one to start with.

best course to learn Jenkins for DevOps

8. Learn to monitor software and infrastructure

Apart from setup and deployment, monitoring is another important aspect of DevOps and that’s why it’s important for a DevOps engineer to learn about Infrastructure and application monitoring.

There are a lot of tools in this space, e.g. Nagios, Icinga, Datadog, Zabbix, Monit, AppDynamics, New Relic, etc. You can choose some of them depending upon which ones are used in your company, like AppDynamics and Nagios.

9. Learn about Cloud Providers

Cloud is the next big thing and sooner or later you have to move your application to the cloud, hence it’s important for a DevOps engineer to at least know about some of the popular Cloud Providers and their basics.

While AWS is clearly the leader in the cloud, it’s not alone; Google Cloud and Azure are slowly catching up, and then there are other players like Heroku, Cloud Foundry, and DigitalOcean.

Thanks for reading this article so far … Good luck on your DevOps journey! It’s certainly not going to be easy, but by following this roadmap and guide, you are one step closer to becoming a DevOps engineer.