Thursday, 13 May 2021

Ansible - Part 2

 Ansible Folder structure

Once you install Ansible, you will find the below three items under /etc/ansible:

  • ansible.cfg : the main configuration file for Ansible; you can add all the settings for your execution here.
  • hosts : this file holds host/server names, IP addresses or DNS names (the default inventory).
  • roles : this folder allows you to create a folder for each server role (web/app/db etc.).
If you want to check the ping module (used for connectivity) against multiple servers, use the below command:

ansible ip1:ip2 -m ping

There will be scenarios where you have 100 servers, say 20 DB and 20 app servers, and you need to check connectivity for all of them.

Typing 20 or 30 server IP addresses is a tedious task, so instead we group the servers by type in the hosts (inventory) file:

[web-servers]
ip1
ip2
ip3
[app-servers]
ip1
ip2

So, we can easily check the connectivity just by giving the group name with the ping module as below:

ansible app-servers -m ping

or

if you want to check both app-servers and web-servers connectivity, you can use the below:

ansible app-servers:web-servers -m ping

As of now we have used the default hosts file that comes with the Ansible installation.

What if multiple developers are working with multiple servers and each would like to add their servers under a name of their own choosing?

You can create an inventory file named like sample.ini (or any name with the .ini extension), add your server names to it under group names, and check the connectivity as below:

ansible -i sample.ini app -m ping

here sample.ini is the inventory file name
and app is the group name

Since we may have many inventory files, it gets a bit confusing to pass an inventory file to the command every time.

So, to avoid this we can set the inventory file name in the ansible.cfg file, as below.
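A minimal sketch of the setting (assuming the inventory file sits at /etc/ansible/sample.ini; adjust the path to your own .ini file):

[defaults]
inventory = /etc/ansible/sample.ini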




Now let's look at the ansible.cfg file.



We can create our own ansible.cfg file in a different folder and point Ansible at that path. Then, when we execute ansible, it will pick up our customized cfg file and act accordingly.



Then export the path to the custom file, so that Ansible won't take the default ansible.cfg settings for your execution; it will take the customized ansible.cfg details instead.
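A sketch of the export (the path is illustrative):

export ANSIBLE_CONFIG=/home/ansadmin/custom/ansible.cfg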

Since we haven't written anything into our custom config file yet, when we test as below it won't perform any action; instead it will give an error.

So, you can create inventory file as we defined above and add Ip address to check the connectivity.


You will get success message .

Inventory:
Static: IP addresses/DNS names/host names that are fixed and never change.
Dynamic: entries that change regularly or very often.

There are scenarios where you have n number of servers whose IP addresses change dynamically as you start and stop the instances; in that case we need to fetch the server IPs automatically through the inventory. That is called a dynamic inventory.

Usually we use a scripting language (Python/Golang/PHP etc.) to achieve this.

Below is a simple Python script.
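Since the original script isn't reproduced here, below is a minimal sketch of the dynamic inventory contract: Ansible runs the script with --list and expects inventory JSON on stdout (the group name and IPs are illustrative):

#!/usr/bin/env python3
# Minimal dynamic inventory: print inventory JSON when called with --list.
import json
import sys

def get_inventory():
    # In a real script these IPs would come from a cloud provider API.
    return {
        "app-servers": {"hosts": ["10.0.1.10", "10.0.1.11"]},
        "_meta": {"hostvars": {}},
    }

if len(sys.argv) > 1 and sys.argv[1] == "--list":
    print(json.dumps(get_inventory()))
else:
    print(json.dumps({}))

Make it executable (chmod +x inventory.py) and use it like a normal inventory: ansible -i inventory.py app-servers -m ping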





When you connect to new servers now, we need to set host key checking to false so that Ansible won't pause to confirm each new host key; this we need to update in the default ansible.cfg file.
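The relevant setting looks like this:

[defaults]
host_key_checking = False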




So, if you set host key checking to false, it won't ask yes/no for confirmation.


Now you can execute commands and install software by using the yum module.
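For example, an ad-hoc install sketch (the package and group names are illustrative):

ansible app-servers -b -m yum -a "name=httpd state=present"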




Wednesday, 12 May 2021

Ansible installation


1. Creating the environment

a. Create 3 servers: 1 as the Ansible server (master) and 2 as client machines.

b. Log in with root access and create the ansadmin user on all the servers as below.
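A sketch of the commands (run as root on each server):

useradd ansadmin
passwd ansadmin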


c. Provide root privileges to the ansadmin user on all servers.

For this, open the sudoers file on the first server with visudo and add the line below.
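The line typically added in setups like this (assumption: passwordless sudo is acceptable for your lab):

ansadmin ALL=(ALL) NOPASSWD: ALL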

Add the same line on all the servers, so that the ansadmin user has root privileges everywhere.

d. Make sure that PasswordAuthentication is set to yes on all servers in the /etc/ssh/sshd_config file.

So, repeat this step on all the servers:

log in as root

vi /etc/ssh/sshd_config
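Find and set the following line (uncomment it if needed):

PasswordAuthentication yes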

With this enabled, if you know the other server's password, you are allowed to connect with that password.


Then you need to execute the below command to restart the ssh service:
systemctl restart sshd 

So, this is the process for logging in to the client servers with a password.

Now connect from one server to another using the ansadmin password as below.
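A sketch of the login (the address is illustrative):

ssh ansadmin@<client-server-ip>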
 



Till now we have accessed other servers from one server with a password; now let's use password-less authentication.

Password-less authentication:
1. Generate ssh keys using ssh-keygen as the ansadmin user.


2. Copy the ssh public key using ssh-copy-id <hostname> from the /home/ansadmin/.ssh location.
Here hostname is the client server (to which you would like to copy the ssh key).
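A sketch of both steps, run as ansadmin (the address is illustrative):

ssh-keygen            # accept the default file locations
ssh-copy-id ansadmin@<client-server-ip>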


Execute the command below to connect to the other server:
ssh ansadmin@<client-server-address>
Now when you try to connect to the other server like this, it won't prompt you for a password.


Now let's see the installation of Ansible on a RHEL Linux server.

This process needs to be done only on server(master)
sudo yum update

sudo yum install ansible -y 

Once it is successful, you can check the version:
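For example:

ansible --version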



Configuration


You have to add the client servers' IP addresses/DNS names to the inventory.

You can edit the inventory with root permission.
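A sketch of the edit (the group name matches the playbooks used later in these notes; the IPs are illustrative):

sudo vi /etc/ansible/hosts

[ansible-servers]
<client-1-ip>
<client-2-ip>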




Now you can plug and play with Ansible playbooks and run/execute commands from the server on the clients.


Thursday, 6 May 2021

Yaml for Ansible - Part 1

 YAML:

originally stood for Yet Another Markup Language; it is now a backronym for YAML Ain't Markup Language.

  • is a data serialization language designed to be readable and writable by humans
  • it is commonly used for configuration files, but it is also used in many apps where data is stored or transmitted over the network.
  • We can use YAML to communicate with other languages too, including the OS and services/apps running on the OS.
  • YAML is case sensitive
  • YAML does not allow tabs for indentation; you have to use spaces instead
Keys:
A key is used to store a value, and that value can change depending on conditions, just like a variable in any other language.
For ex: x: 10

Sequence data collection:

This is the same as an array in other languages: a list of values of the same kind. We can represent it as below.

sequence representation

name:
- value1
- value2

Here name is the array name; value1 and value2 are the items of the array.
You can also represent it as below:

name: [value1,value2]

Map Data collection

It is a collection of key-value pairs, like a Map in Java or a dictionary in Python.

Map representation

family:
 krishh: 20
 ram: 10

You can also nest sequence data inside maps as below:

family:
  krishh: 20
  sachin:
    - fullname: Ramesh Sachin Tendulkar
    - age: 50
    - score: 2000
  rohit: 40
  Dhawan: 46

sequences are ordered collections and Maps are unordered collections

Ansible Play book

A playbook is used to send commands to remote servers / execute commands on remote servers in a scripted way using Ansible.

To understand playbooks we need to know the below.
  • Task
    • Install tomcat in remote server
      • yum install tomcat -y
  • Play
    • collection of tasks is called Play 
      • Install java
      • Install tomcat
      • create file in home directory
  • playbook
    • collection of Plays
      • it is a combination of Plays and Tasks.
      • Here we can write plays for one server or multiple servers.
  • Playbooks
    • More than one playbook means more than one yml file written for your requirement.

How to write Ansible playbook

  1. Start with --- (three dashes) at the top of the yml file; it indicates that you are writing a YAML file.
  2. Target section (hosts, users etc.)
  3. Variables section (optional)
  4. Task list
    1. list all the modules to run, in order
These are the steps we write in a yaml/yml file, and together they make up one play.

Sample.yml
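Since the sample file isn't reproduced here, a minimal sketch of what it could look like (the group name is the one defined in the inventory earlier):

---
- hosts: ansible-servers
  tasks:
    - name: check connectivity
      ping: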


A YAML playbook file contains a sequence of plays; each item of the sequence is one play.

If you are writing a playbook for Ansible, hosts is the required key in each play, where you provide the server group name.


How to write a YAML script


 
first.yml : write this yml file in ansible server
---
- hosts: ansible-servers
  vars:
    mycontent: "this file is created using var content"
  tasks:
    - copy:
        dest: /tmp/var_file.txt
        content: "{{ mycontent }}"


execute this command in ansible server:
ansible-playbook first.yml

Now open the ansible client host and check for the file at /tmp/var_file.txt.

It will be created as below

Or you can execute the below command to check without manually opening the file on the client:
ansible ansible-servers -m command -a "cat /tmp/var_file.txt"

There you can see the content of the file.


Reading vars from command line using vars_prompt

Write yml file as below to test this.

read.yml
---
- name: this is play1
  hosts: localhost
  vars_prompt:
    - name: var1
      prompt: enter any value ?

and execute yml file : ansible-playbook read.yml

Note: the value you enter will be assigned to the variable named in the yml file.

Print variable value using command line

For this, use the debug module in the playbook; the debug module has an argument called msg.

read.yml
---
- name: this is play1
  hosts: localhost
  vars_prompt:
    - name: var1
      prompt: enter any value ?
  tasks:
    - name: "this is to print the variable value"
      debug:
        msg: "this is the value you entered: {{ var1 }}"


output as below


Suppose you don't type any value at the prompt: the variable takes the default configured for it (here, "Hello world" was set as the default).

Using loops

Let's see an example without loops: to create 3 similar directories we need a separate command task for each in the yml.

---
- hosts: localhost
  tasks:
    - name: this will create dir1
      command: mkdir /tmp/dir1
    - name: this will create dir2
      command: mkdir /tmp/dir2
    - name: this will create dir3
      command: mkdir /tmp/dir3

Once you execute the above yml file, it will create 3 dirs under the /tmp folder.

Instead of repeating the same command multiple times, we can use the loops concept to simplify this.

---
- hosts: ansible-servers
  tasks:
    - name: this is to create dirs
      command: mkdir /tmp/{{ item }}
      with_items:
        - new_dir1
        - new_dir2
        - new_dir3

Output in /tmp on the ansible client server after executing the above playbook:


Now let's see how we can pass variables while running the playbook and use those variable values inside your playbook.

ansible-playbook read.yml -e "var1=hi var2=bye"

---
- name: this is a play
  hosts: localhost
  tasks:
    - debug:
        msg: "the value of var1={{ var1 }} and var2={{ var2 }}"

When you execute the above script, it will use the values you passed on the command line.

Conditional statements : 




Use the when keyword to give a condition under which a task executes.
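Since the example file isn't reproduced here, a minimal sketch matching the description below:

---
- hosts: localhost
  tasks:
    - name: create the file only when the condition is true
      command: touch /tmp/when_file.txt
      when: true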


When the above file is executed, when_file.txt will be created; if you make it when: false, it won't be created.

Task keywords: delegate_to and register

register:
register stores the response of a task in a variable after the task completes; add it at the end of any task whose output you want to keep.

It is very helpful in conditional statements.


register.yml

---
- name: This is a play
  hosts: ansible-servers
  tasks:
    - command: touch /tmp/register_module.txt
      register: response_out
    - debug:
        msg: "{{ response_out }}"

output :

delegate_to:

This keyword helps us execute a task only on the server/host given as the value of delegate_to, instead of on the play's hosts.

for ex:

---
- name: This is a play
  hosts: ansible-servers
  tasks:
    - command: touch /tmp/register_module.txt
      delegate_to: localhost

Here, even though you pass hosts: ansible-servers, the file won't be created there; instead it will be created on localhost (or whichever server you give as the delegate_to value).

Ansible Playbook to install vim and wget on client machines.

---
- hosts: ansible-servers
  become: yes
  tasks:
    - yum:
        name: wget
        state: present
    - yum:
        name: vim
        state: present


Ansible automation is designed to fulfil the below requirements
  • provisioning
  • configuration
  • application deployment
  • orchestration

Friday, 30 April 2021

Docker Notes

 ARG:

if you want to pass in or change a value at image-build time, you can use this instruction

An ARG value is available only while the image is being built; its lifetime ends once the build completes.

ENV:

this sets an environment variable that is available in the running container.

Its lifetime spans both building the image and running the container.

The value can be set or changed when the container is created.

For ex: if you want to select a database when the app is running, you can choose that DB by using an ENV variable.

ADD:

this copies files into the image like COPY, but it can also download a file from a URL (from the internet) and auto-extract local tar archives.

COPY:

If you want to copy a local file into the image during the build, you can use this instruction.

FROM:

it is the base and primary instruction for building a docker image; it comes at the start of the Dockerfile and names the base image.

CMD:

this sets the default command that runs when the container starts; it can be overridden on the docker run command line.

RUN:

this executes commands while the image is being built (for example, installing packages); each RUN instruction adds a layer to the image.
Expose:

this documents the port(s) the container listens on, so the container can be exposed on a specific port.

Healthcheck:

you can use this instruction to check the health of the running container, with options such as --interval, --timeout and --retries.

ONBUILD:

this registers an instruction that runs later, when another image is built using this one as its base image.

.dockerignore:

If you want to exclude files or sensitive information from the build, you can list them in this file and they will be ignored while building the image.
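A minimal Dockerfile sketch pulling several of these instructions together (the image name, paths and health endpoint are illustrative, and curl must exist in the base image for the HEALTHCHECK to work):

FROM openjdk:11
ARG APP_VERSION=1.0
ENV DB_HOST=localhost
COPY target/app.jar /opt/app.jar
EXPOSE 8080
HEALTHCHECK --interval=30s --timeout=5s CMD curl -f http://localhost:8080/ || exit 1
CMD ["java", "-jar", "/opt/app.jar"]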

Sample webapp to push docker image to Docker hub
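Since the walkthrough screenshots aren't reproduced here, a sketch of the usual build-and-push flow (the Docker Hub username and tag are illustrative):

docker build -t <dockerhub-user>/sample-webapp:1.0 .
docker login
docker push <dockerhub-user>/sample-webapp:1.0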





Wednesday, 28 April 2021

Kubernetes Tutorial

 Why Kubernetes

  • Self-healing containers
  • Large-scale application deployments
  • Rolling updates and rollback features
Architecture of Kubernetes

Cluster: different machines combined together to make K8s work; each machine here is called a NODE.

In simple terms, a CLUSTER is a combination of nodes. The responsibility of the master node is to maintain the cluster.

The master node runs on Linux; it can't be a Windows machine.

Node: where actual work is done.

Kubernetes master and Node components


kube-apiserver:

  • Responsible for all the communication with and within the cluster

etcd:

  • Whatever is happening in the cluster is stored here: which node is running, how many are running, the state of the nodes, etc.
  • All the details of the Kubernetes cluster are stored here.
  • It is also called the cluster store.
  • It is a distributed key-value store.
  • It is designed to work across multiple machines.

Scheduler:

  • When a new task is created, the scheduler assigns it to a healthy node.
  • Whenever you want to create something new, this component comes into the picture.
  • It places the workload according to your requirements, such as which nodes to use and the communication between them.

Controller-manager

It is responsible for maintaining the desired state mentioned in the manifest.

It looks like a single component, but it bundles several controllers:

Node controller: for noticing and responding when a node goes down.

Replication controller: for maintaining the correct number of pods for every replication controller object.

Endpoints controller: populates the Endpoints objects, i.e. it joins services to the pods that back them on the desired endpoint (port).

Node components:



nodes can be either windows or Linux machines.

Kubelet:

This is an agent which runs on each node in the cluster.

It listens for commands from the master node and acts accordingly.

Container runtime:

This means the technology you choose for running containers; in our case, DOCKER.

There are other technologies for this as well, such as rkt.

kube-proxy:

All the networking is maintained by kube-proxy; it is the networking component on each node.

It handles things like IP addresses, DNS names and DNS resolution.

Kubernetes objects

Kubernetes objects are persistent entities in the cluster; each object has a specification (desired state) and a status (observed state).

Basic workflow:

Write a specification in yaml format and save in a file.

To create/modify 

------

kubectl  apply -f <path of yaml>

-------

* To delete

-----

kubectl delete -f  <path of yaml>

* To get status

-----

kubectl get <object-kind>


Pod:

Atomic unit in K8s

Pod contains Container(s)

Each pod gets an IP address, assigned by the cluster networking layer.

When we have more than one container in a pod, everything inside the pod is reachable via the pod's IP address; for inter-container communication within the pod, the containers use localhost.

In K8s, scaling means increasing the number of pods, not the container(s) inside a pod.


When you have a multi-container pod:

  • one container could be the main container
  • and another container would be the sidecar container.
Whenever we want to write a pod specification in YAML, the following 4 fields should be there (a full example follows the block below):
-----
apiVersion:
kind:
metadata:
spec:
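A minimal complete example along those lines (the names and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: web-pod
  labels:
    app: web
spec:
  containers:
    - name: web
      image: nginx:1.21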

kubectl get pods -w
Using this we can watch the status of the pods (note: the watch flag -w belongs to kubectl get, not kubectl apply).

kubectl get pods -o wide
Using this we can check which node each pod is running on.

To get the IP address or full info of Pod

---
kubectl get pods -o wide
---

Fields of yml file

Namespace: a logical cluster. Generally we save data as objects in some location; that location is logically partitioned so it can be used for our own purposes, hence we call it a logical cluster.
It is used to create virtual environments.
In K8s we can create virtual clusters using namespaces.
Pods belong to a namespace.
---
kubectl get namespace
-----

api-resources: lists all the different resources available in K8s.

---

kubectl api-resources

---

Workloads:

Init containers:

These execute before the main containers and are expected to finish before the main containers start.

For ex: if you want to execute certain things before starting the main container; an init container has a fixed lifetime.

Controllers:

These control the workload objects.

Replication controller:

With an RC, we set the desired number of pods as the desired state on the master node, and the RC creates that number of pods, as below.
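A sketch of such a spec (the names and image are illustrative):

apiVersion: v1
kind: ReplicationController
metadata:
  name: web-rc
spec:
  replicas: 3
  selector:
    app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.21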


Every Pod will get unique IP address.

Daemon set: creates a pod on every node in your cluster; it is used mostly for node agents (logging, monitoring and the like).

Replica set:

almost the same as a replication controller, but a replica set supports richer (set-based) label selectors; replica sets are also what deployments manage under the hood.

Stateful sets:

  • They store data
  • stable and unique network identifiers
  • stable and persistent storage
  • ordered, graceful deployments and scaling
  • ordered and automated rolling updates.

Stateless workloads:

These don't store any data; if they need to store data, they rely on an external system.

Like other K8S objects, controllers also have 

specification : is more about controlling the objects/workloads

  • can define the workload
  • how to manage the workload

status

Service:

Consider the scenario where we have an app server and a DB server.

The app server communicates with the DB server to store or retrieve data, as below.

Now, due to application load, we have scaled the app pods to 3.

Now let's assume the node on which the mysql pod is running has failed; in that case we get a new mysql pod, which will have a new IP address, as below.

In this case the application stops working, as the DB is no longer reachable.

To resolve this, K8s uses a "Service", which handles the communication with the DB pod,

as below.

Now, when the pod has failed and a new pod is created, the DB service talks to the newly created DB pod, as below.

When a service is created, the specification asks us to provide label information.

The K8s service forwards traffic to the pods whose labels match those given in the service spec.

A Service is a layer-4 load balancer (port, protocol, IP address).

In K8s, layer 7 is supported by Ingress, which speaks HTTP.

The service looks up (via the cluster state in etcd) the pods that carry the matching app labels.

Changing labels does not require any downtime.

Labels for a pod are written in the metadata section of the yml file.

A service is not a pod; it is a rule book implemented by K8s networking, i.e. kube-proxy.

A service is a logical component, not a concrete one; hence a service never fails.

Whenever you create a pod, add labels to it.
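A minimal sketch of a service spec for the DB example above (the names, label and port are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: db-service
spec:
  selector:
    app: mysql
  ports:
    - port: 3306
      targetPort: 3306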

K8S deployments

In a K8s deployment specification we need to describe

  • How many replications of Pod
  • Pod spec
  • Strategy

Deployments internally use replica sets(these are not persistent)

In this scenario we are deploying 3 pods of your application (Docker image and service), as below.
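A sketch of such a deployment spec (the names and image tag are illustrative; the tag matches the upgrade story below):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: myapp:1.4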

Now a new version of your app with tag 1.5 is available.

Then we change the image tag in the spec and roll out the deployment as below.
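A sketch of the rollout, using the names from the deployment sketch above:

kubectl set image deployment/web-deployment web=myapp:1.5
kubectl rollout status deployment/web-deployment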



Always remember that pods can carry more than one label, so that if one of the labels matches, our operations on the pod still work.

Now, for some reason, the new version is defective and we have to roll back,

like below:
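A sketch of the rollback, again using the illustrative names from above:

kubectl rollout undo deployment/web-deployment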

Storage in K8s:

  • Volume
  • Persistent volumes
  • Storage class
  • Persistent volume claims

Areas of concerns:

  • Docker containers are ephemeral (once a container is removed, its data is lost); to overcome this we have docker volumes.
  • Pods are also ephemeral: once a pod is deleted, its data is deleted too.
  • K8s works on many platforms.

K8S storage:

  • A volume's lifetime equals the lifetime of its pod.
    • so it is applicable only to that pod
  • A persistent volume's lifetime equals the lifetime of the cluster.
    • it is applicable across the cluster (all the pods in the cluster)
Persistent volume workflow
  1. Create a persistent volume (PV) with requestable attributes.
  2. While creating a pod that needs a PV, a claim/request is sent to K8s; this is the persistent volume claim (PVC).
  3. K8s searches all the available PVs for the attributes requested in the PVC.
  4. If it finds a matching PV, the pod is created with the PV attached and mounted into the docker container.
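A minimal sketch of a PVC (the name and size are illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi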




Persistent volumes and storage classes
  • PVs can be created statically or dynamically
  • For dynamic provisioning, storage classes help us
Networking in K8S
  • Container networking
  • Pod  to pod networking
  • Pod to service networking
  • Ingress
  • Service discovery
More details you can refer to here.

Refer here for an app using a K8S deployment.

Thursday, 4 February 2021

Elastic Kubernetes Service(EKS) in AWS

To provision an EKS cluster, we need the below prerequisites.

1. IAM user/role: the user must have permission to create and manage the EKS cluster.

Create role--> select EKS -->


Select EKS as below



Click on "Next: Permissions" --> select the policy (EKS Service policy) --> click Tags --> Review.


Select "Create Role".

Now create a CloudFormation stack, which will provision the below:
1. VPC
2. Security groups
3. Subnets

For this, go to Services --> CloudFormation --> Create stack.

In Amazon S3 URL, use the template URL provided in the link below.

https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html#eks-create-cluster

Since we are using the ap-south-1 region here, replace it with whatever region you would like to use.






Click Next, accept all the default settings, click Next again, and then Create stack.

Here we can see all the resources provisioned by this stack, as below.


Now let's create the EKS cluster; search for EKS in AWS services.



Accept all the default networking settings as below.



Accept all the default settings, review the cluster values, then click on "Create".
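If you prefer the CLI to the console, an equivalent sketch (the cluster name, role ARN, subnet and security-group IDs are illustrative placeholders for the resources created above):

aws eks create-cluster \
  --name my-eks-cluster \
  --role-arn arn:aws:iam::<account-id>:role/<eks-cluster-role> \
  --resources-vpc-config subnetIds=<subnet-1>,<subnet-2>,securityGroupIds=<sg-id>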