Flaskex – Simple flask for quick prototypes

Open source project written in Python.

With the features included, we do not have to reinvent the wheel:
  • Encrypted user authorization
  • Database initialization
  • New user signup
  • User login/logout
  • User settings
  • Modern user interface
  • Bulma framework
  • Limited custom css/js
  • Easily customizable

How to get it up and running in minutes
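
The quick-start steps are not shown above; a minimal sketch assuming the standard Flask workflow (the repository URL placeholder, requirements.txt and the app.py entry point are assumptions, not taken from the project docs):

$ git clone <flaskex-repo-url> && cd flaskex   # repository URL not given above
$ pip install -r requirements.txt              # install Flask and the other dependencies
$ python app.py                                # start the development server locally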

Guide to remove trailing slash from WordPress URL

Sometimes a very simple problem, like removing a trailing slash from a WordPress URL, seems frustrating if you don’t know the solution to it.

To solve this trailing slash problem, log in to your website admin panel, go to the Settings tab, and click on Permalinks under Settings. This opens the Permalink Settings page.

Now, check the setting you have chosen for the website URL structure and make sure that you are not leaving a trailing slash at the end of the structure.

It just works like a charm!

Elasticsearch Cluster – Understanding How It Works

Questions that will be answered

  •  How does a node in the cluster talk to other nodes?
  •  What happens when a node joins or leaves the cluster?
  •  What happens when a node stops or encounters a problem?
  •  What is the role of master/client/data nodes in the cluster?
  •  What is the memory requirement for each node?
  •  How does ES organize data?

What is a cluster of nodes?

  • Start an ES instance => a cluster with a single node.
  • Start another ES instance with the same cluster.name => a cluster of 2 nodes.
  • How nodes talk to each other: over TCP
  • How nodes talk to the outside world: JSON over HTTP (see the curl sketch below)
  • Each node can play one or more roles in the cluster.
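
A quick way to see the cluster forming is the cluster health API; a minimal sketch, assuming a node listening on the default port 9200 on localhost:

$ curl -XGET 'http://localhost:9200/_cluster/health?pretty'   # reports cluster_name, status and number_of_nodes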

What is the role of master/client/data?

Master

  • Create/Delete indices
  • Add/Remove nodes from cluster
  • Broadcast changes to other nodes
  • Only 1 master node at a time.

Data

  • Holding data in the shards => CRUD, search, aggregations on data

Client

  • Routing requests to master/data => smart router

Adding a node to the cluster

  • It will ping all nodes => find the master node => request to join => accepted & joined.
  • If the joined node is a data node => the master will re-allocate data to this node.

Removing a node from the cluster

  • The master node will remove this node from the cluster and broadcast the changes to all nodes.
  • If the removed node is a data node => the master will re-allocate data.
  • If the removed node is the master => one of the other master-eligible nodes will be elected master (fault detection).

How does Elasticsearch organize data?

  • Elasticsearch as MySQL
    • Index <=> Database
    • Type <=> Table
    • Document <=> Row
    • Field <=> Column
  • An index is one or more shards distributed on multiple nodes
  • The number of primary shards can NOT be changed after the index is created (see the sketch below)
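
For illustration, the number of primary shards is fixed in the index settings at creation time; a minimal sketch, assuming a local node on port 9200 and a hypothetical index named my_index:

$ curl -XPUT 'http://localhost:9200/my_index' -H 'Content-Type: application/json' -d '
{
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 1
  }
}'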

How to Check System Last Reboot in Linux

In Linux/Unix, sometimes you have to check the last system reboot to see what really happened.

Check Last Reboot

Most Linux/Unix systems provide the last command, which gives the history of recent logins and system reboots. Run last reboot from the terminal, as sketched below, and you will get the details of recent reboots.
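
The command itself (standard on most Linux systems; the sample output is not reproduced here):

$ last reboot   # prints one line per reboot, newest first, read from /var/log/wtmp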

In this example, the output showed that the system was last rebooted on Feb 11 at 12:00 PM.

Check System Uptime

Additionally, you can use the uptime command to find how long the system has been running since it was last booted. Just open a terminal, type uptime and hit enter, as sketched below.
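
For reference (sample output not reproduced here):

$ uptime   # prints the current time, how long the system has been up, the number of users and the load averages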

In this example, the system had been running for 1 day, 21 hours and 57 minutes.

How Kubernetes Works


#Kubernetes Overview:

  • Kubernetes is a container management Platform
  • Created by Google
  • Written in Go/GoLang
  • Also known as K8s

 

#Master Node

The master is the control plane, or the brain, of the k8s cluster. A master comprises a few components:

  • api-server – Exposes the REST API used to talk to the k8s cluster, consumes JSON; only the api-server talks to the cluster store.
  • Cluster Store (KV) – Cluster state and config management.
  • Scheduler – Watches the api-server for new pods and assigns a node for them to run on.
  • Controller – A daemon that watches the state of the cluster to maintain the desired state. Examples are the replication controller, namespace controller, etc. It also performs garbage collection of pods, nodes, events, etc.

#Node

  • Kubelet – The k8s agent that registers the node with the cluster, watches the api-server, instantiates pods and reports back to the api-server. If a pod fails, it reports to the master, and the master decides what to do. Exposes port 10255 on the node.
  • Container Engine – Handles container management: pulling images, starting/stopping containers. Usually Docker is used as the container runtime.
  • kube-proxy – Responsible for networking. Provides a unique IP per pod (all containers in a pod share the same IP) and load balances across all pods in a service.

#Pods

  • An environment to run containers
  • It has a network stack, kernel namespaces and one or more containers running
  • A container always runs inside a pod
  • A pod can have multiple containers
  • It is the unit of scaling in k8s

#Services

Pods come and go with different IPs. To distribute load and act as a single point of interaction for all pods of an application, a service plays that role.

  • Has a single IP and DNS name
  • Created with a manifest (JSON/YAML) file, as sketched below
  • All new pods get added/registered to the service
  • Which pods belong to which service is decided by labels
  • Services and pods have labels, on the basis of which a service identifies its pods
  • Only sends traffic to healthy pods
  • A service can also point to things outside the cluster
  • Uses TCP by default (UDP is also supported)
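
A minimal sketch of a service manifest and how to apply it (the name my-app-svc, the label app: my-app and the ports are hypothetical):

$ cat > my-service.yml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc
spec:
  selector:
    app: my-app          # traffic goes only to pods carrying this label
  ports:
    - protocol: TCP      # TCP is the default
      port: 80           # the service's own port
      targetPort: 8080   # the port the pods listen on
EOF
$ kubectl apply -f my-service.yml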

#Deployments

It is a k8s object whose task is to keep a set of identical pods running and to upgrade them in a controlled way.

  • Deployed using a YAML/JSON manifest
  • Deployed via the api-server
  • Provides rolling updates of pods
  • Provides rollbacks (see the kubectl sketch below)
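
A short sketch of a controlled update and rollback with kubectl (the deployment name my-app and the image tags are hypothetical):

$ kubectl create deployment my-app --image=nginx:1.24    # create the deployment and its pods
$ kubectl set image deployment/my-app nginx=nginx:1.25   # rolling update to a new image
$ kubectl rollout status deployment/my-app               # watch the update progress
$ kubectl rollout undo deployment/my-app                 # roll back to the previous revision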

#Detailed Architecture

#Overall Flow

  • kubectl writes to the API Server
  • API Server validates the request and persists it to the Cluster store (etcd)
  • Cluster store (etcd) notifies back the API Server
  • API Server invokes the Scheduler
  • Scheduler decides which node to run the pod on and returns that to the API Server
  • API Server persists it to etcd
  • etcd notifies back the API Server
  • API Server invokes the Kubelet on the corresponding node
  • Kubelet talks to the Docker daemon using the API over the Docker socket to create the container
  • Kubelet updates the pod status to the API Server
  • API Server persists the new state in etcd
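
You can watch this flow from the outside with two commands; a minimal sketch (the name web and the nginx image are arbitrary choices):

$ kubectl run web --image=nginx   # asks the API Server to create a pod running nginx
$ kubectl get pods -o wide        # once scheduled, the NODE column shows where the Kubelet started it
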
The 2018 DevOps RoadMap - Your Guide to become DevOps Engineer


DevOps is really hot at the moment; senior developers and sysadmins are working hard to become DevOps engineers.

I truly understand the benefit of DevOps, which is directly linked to improving software development (Dev) and deployment (Ops), and I can say that it’s not an easy job.

Many friends/colleagues have asked me how to become a DevOps engineer: which tools should I learn? What about Docker and Kubernetes? Is infrastructure automation part of DevOps? Should I learn Chef, Puppet, or Ansible …

I was casually surfing the internet and came across this excellent GitHub page by Kamranahmedse, which shows a couple of useful roadmaps to become a front-end developer, a back-end developer, a full-stack web developer and, last but not least, a DevOps engineer.

The 2018 DevOps RoadMap for Developers

Here is the 2018 DevOps RoadMap I am talking about:

The 2018 DevOps RoadMap - Your Guide to become DevOps Engineer

Now, let’s go through the RoadMap step by step and find out how we can learn the essential skills required to become a DevOps guru in 2018:

1. Learn a Programming Language

Obviously, I assume you guys already know at least one of the main programming languages, i.e. Java, Python, NodeJS, or Go.

best course to learn Java

 

best course to learn Python

2. Understand different OS concepts 

This is where the Ops part comes in. Earlier, it was solely the support guys and sysadmin people who were responsible for knowing about the OS and hardware, but with DevOps, developers now also need to know them. You at least need to know about process management, threads and concurrency, sockets, I/O management, virtualization, memory storage and file systems, as suggested in the roadmap.

3. Learn to Live in terminal

For a DevOps guy, it’s important to have a good command of the command line, particularly when working in Linux. Knowing a Linux shell like Bash or Ksh and tools like find, grep, awk, sed, lsof, and networking commands like nslookup and netstat is mandatory; a few quick examples are sketched below.
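
A few illustrative one-liners with these tools (the port, log path and hostname are hypothetical):

$ lsof -i :8080                              # which process is listening on port 8080
$ grep -ri "error" /var/log/myapp/ | wc -l   # count error lines across an app's logs
$ find /var/log -name "*.log" -mtime -1      # log files modified in the last day
$ nslookup example.com                       # resolve a hostname
$ netstat -tulpn                             # listening TCP/UDP sockets and the processes that own them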

If you feel you need to refresh these commands and tools then you should join the Linux Command Line Interface (CLI) Fundamentals course on Pluralsight.

best course to master Linux commands

Btw, If you need more choices and want to become master on shell scripting, you can also take a look at my list of best courses to learn shell scripting.

4. Networking and Security

Gone are the days of isolation; in today’s world, everything is connected to everything, which makes networking and security very important. In order to become a good DevOps engineer, you must know about basic networking and security concepts like DNS, the OSI model, HTTP, HTTPS, FTP, SSL, TLS, etc. To refresh these concepts, you can take a look at this course on Pluralsight.

5. What is and how to set up

As a DevOps champion, you should know what is set up on your machine and how to set it up, because only then can you think about automating it. In general, a DevOps engineer should know how to set up a web server like IIS, Apache, or Tomcat. He should also know about caching servers, load balancers, reverse proxies, firewalls, etc.

6. Learn Infrastructure as code 

This is probably the most important thing for a DevOps engineer, and it is a very vast area as well. As a DevOps engineer, you should know about containers like Docker and Kubernetes, configuration management tools like Ansible, Chef, Salt, and Puppet, and infrastructure provisioning tools like Terraform and CloudFormation. Here are some of my recommended courses to learn these tools.

best course to learn Docker
best course to learn Kubernetes

7. Learn some Continuous Integration and Delivery (CI/CD) tools

This is another very important thing for DevOps gurus and champions, i.e. setting up a pipeline for continuous integration and delivery. There are a lot of tools in the CI/CD area, e.g. Jenkins, TeamCity, Drone, etc.

But I strongly recommend learning at least Jenkins, as it’s the most widely used and probably the most mature CI/CD tool in the market. If you don’t know Jenkins, then this course is the best one to start with.

best course to learn Jenkins for DevOps

8. Learn to monitor software and infrastructure

Apart from setup and deployment, monitoring is another important aspect of DevOps and that’s why it’s important for a DevOps engineer to learn about Infrastructure and application monitoring.

There are a lot of tools in this space, e.g. Nagios, Icinga, Datadog, Zabbix, Monit, AppDynamics, New Relic, etc. You can choose some of them depending upon which ones are used in your company, like AppDynamics and Nagios.

9. Learn about Cloud Providers

Cloud is the next big thing, and sooner or later you will have to move your applications to the cloud; hence it’s important for a DevOps engineer to at least know about some of the popular cloud providers and their basics.

While AWS is clearly the leader in the cloud, it’s not alone; Google Cloud and Azure are slowly catching up, and then we have some other players like Heroku, Cloud Foundry, and Digital Ocean.

Thanks for reading this article so far … Good luck on your DevOps journey! It’s certainly not going to be easy, but by following this roadmap and guide, you are one step closer to becoming a DevOps engineer.

Bash script to count the frequency of each word in a text file
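
The script itself is not reproduced above; a minimal reconstruction that matches the explanation below, assuming the input file is called words.txt:

$ cat words.txt \
    | tr ' ' '\n' \
    | sed '/^$/d' \
    | tr '[:upper:]' '[:lower:]' \
    | sort \
    | uniq -c \
    | sort -nr \
    | awk '{print $2, $1}'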

Explanation:

  • cat: open the file content
  • tr: replace every space with a newline (so each word is on its own line)
  • sed: delete blank lines
  • tr: convert everything to lowercase
  • sort: sort alphabetically
  • uniq: count occurrences of each word
  • sort: sort in reverse numeric order
  • awk: print in the format we want

Ansible Ad-hoc Commands

Ansible ad-hoc commands are for quick one-off tasks that you do not want to save for later. In the commands below, the -a flag passes the arguments to the module being run. Example invocations for each of the tasks listed here are sketched after the list.
To see the list of nodes from the master using ad-hoc commands.

To create a demo file in node2 from the server using ad-hoc commands.

To install Java in node2 from the server using ad-hoc commands.

To start/manage services by using ad-hoc commands

To copy a file from ansible server to nodes by using ad-hoc commands.

To create a file by using ad-hoc commands

To remove a file by using ad-hoc commands

To create a directory by using ad-hoc commands
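
Minimal sketches of these ad-hoc commands (the host pattern node2, package names and file paths are hypothetical; -m selects the module, -b escalates privileges where needed):

$ ansible all --list-hosts                                            # list the nodes in the inventory
$ ansible node2 -m file -a "path=/tmp/demo.txt state=touch"           # create a demo file on node2
$ ansible node2 -b -m yum -a "name=java-1.8.0-openjdk state=present"  # install Java on node2
$ ansible node2 -b -m service -a "name=httpd state=started"           # start/manage a service
$ ansible all -m copy -a "src=/tmp/app.conf dest=/tmp/app.conf"       # copy a file from the Ansible server to the nodes
$ ansible all -m file -a "path=/tmp/newfile state=touch"              # create a file
$ ansible all -m file -a "path=/tmp/newfile state=absent"             # remove a file
$ ansible all -m file -a "path=/tmp/newdir state=directory"           # create a directory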

Ref:
https://devopssource.blogspot.com/2018/11/ansible-ad-hoc-commands.html

Beginning Ansible in 5 Minutes

So let me show you how easy it is to get started with Ansible.

Install

Let’s assume you’ll use pip to get this done:

$ sudo easy_install pip
$ sudo pip install ansible

Make sure it installed by running ansible --version.

Concepts

You’ll often hear that Ansible is agent-less and uses a push approach (as opposed to pull).

In a nutshell, Chef or Puppet work by installing an agent on the hosts they manage. This agent pulls changes from a master host, using their own channel (usually not SSH).

Push VS Pull

Ansible on the other hand is simply using SSH to push changes from wherever it runs (a server or your own laptop).

Conceptually, it’s as if instead of connecting to your machines with SSH and running commands manually, you could script the whole thing and run it automatically.

Ansible VS SSH

We’ll get familiar below with more of Ansible’s concepts: Inventories, Playbooks, Roles and Tasks.

Adding a host with an inventory

The first thing for us to do once Ansible is installed is to specify which hosts we want to manage.

  1. Add a new machine; Fedora, Ubuntu or CentOS will do.
  2. Create a folder where you’ll keep the Ansible-related code for this example. In this folder, add a file named hosts with the content sketched below.
  3. That’s it. Let’s just make sure this works by running the ping command sketched below; each host should answer with SUCCESS.
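
A minimal sketch of the inventory file and the test command (the group name, IP address and SSH user are hypothetical):

$ cat > hosts <<'EOF'
# replace with your machine's IP address and SSH user
[web]
192.168.1.50 ansible_user=centos
EOF
$ ansible all -i hosts -m ping   # each host should reply SUCCESS with "ping": "pong"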

This file is called an inventory; it lists the hosts that you will be managing with Ansible.

Installing NGINX with roles

Go to the folder where you created your inventory, create a roles/ subfolder and then run the command sketched below.
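
The exact command is not shown above; a minimal sketch, assuming the geerlingguy.nginx role from Ansible Galaxy:

$ ansible-galaxy install --roles-path roles/ geerlingguy.nginx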

It will install the NGINX role in the roles subfolder, making it available to Ansible when run from this folder.

Now, we’re gonna create our first playbook: the playbook is a key concept in Ansible. It defines what needs to be configured and executed on your hosts.

Add a file named deploy.yml in the same folder as your inventory with the following content:
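
The playbook content is not reproduced above; a minimal sketch, assuming the web group from the inventory and the geerlingguy.nginx role installed earlier:

$ cat > deploy.yml <<'EOF'
- hosts: web
  become: yes            # installing NGINX needs root
  roles:
    - geerlingguy.nginx
EOF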

We’re now ready to apply this to our host. Just run the following:
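
Assuming the hosts inventory and deploy.yml created above:

$ ansible-playbook -i hosts deploy.yml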

Ansible will print logs while running your playbook and should finally report that all tasks ran successfully (“ok”).

Now, if you point your browser at your URL, you’ll get a 404 error from NGINX since we haven’t deployed our site yet. This however means that NGINX is indeed up and running (great success!).

Ansible loop


Ansible loops provide a lot of methods to repeat certain tasks until a condition is met.

A basic example, which can be used to install several Linux packages, can be written like the sketch below.
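
The original task is not reproduced above; a minimal sketch, assuming a yum-based host and three hypothetical packages:

$ cat > install_packages.yml <<'EOF'
- hosts: all
  become: yes
  tasks:
    - name: Install packages
      yum:
        name: "{{ item }}"
        state: present
      with_items:
        - httpd
        - mariadb-server
        - php
EOF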

 

In the above task, instead of writing 3 separate tasks, we have consolidated them into a single task.

In each iteration, the current value from the with_items block is inserted in place of {{ item }}.

Ansible loop with Index

In some scenarios, knowing the index value might come in handy. You can use with_indexed_items for this. The loop index will be available as item.0 and the value as item.1. The index value starts at zero, as usual.

You can also do arithmetic on the index value, like addition or subtraction, as in the sketch below.
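
A minimal sketch (the items are hypothetical; item.0 + 1 shows arithmetic on the index):

$ cat > indexed_loop.yml <<'EOF'
- hosts: all
  tasks:
    - name: Print each item with its position
      debug:
        msg: "{{ item.0 + 1 }}. {{ item.1 }}"   # item.0 is the index, item.1 the value
      with_indexed_items:
        - httpd
        - mariadb-server
        - php
EOF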

Ansible loop with conditional

You can also use the “when” conditional statement along with the loop structure. Thus you can control the looping based on a variable or system facts.

The following sketch runs the task only when the loop value is the same as the “loop_1” variable. Note that “item” is not enclosed in double curly brackets inside the when condition.
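
A minimal sketch (the loop_1 variable and its value are hypothetical):

$ cat > loop_when.yml <<'EOF'
- hosts: all
  vars:
    loop_1: two
  tasks:
    - name: Print only the matching item
      debug:
        msg: "{{ item }}"
      with_items:
        - one
        - two
        - three
      when: item == loop_1   # a raw Jinja expression, so no double curly brackets around item
EOF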

Looping through Dictionaries

You can loop through an Ansible dictionary variable using the with_dict parameter. In the following sketch, I have declared a variable ‘Fruits’ with 3 key-value pairs and use with_dict to loop through all the values.
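
A minimal sketch (the key-value pairs in Fruits are hypothetical):

$ cat > dict_loop.yml <<'EOF'
- hosts: all
  vars:
    Fruits:
      apple: red
      banana: yellow
      grape: purple
  tasks:
    - name: Print each fruit and its colour
      debug:
        msg: "{{ item.key }} is {{ item.value }}"
      with_dict: "{{ Fruits }}"
EOF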