Monday, 21 December 2020

Boto3 examples (SGs, DynamoDB, EC2 and CloudWatch)

Create a new DynamoDB table using Boto3

import boto3

# Get the service resource.
dynamodb = boto3.resource('dynamodb')

# Create the DynamoDB table.
table = dynamodb.create_table(
    TableName='users',
    KeySchema=[
        {
            'AttributeName': 'username',
            'KeyType': 'HASH'
        },
        {
            'AttributeName': 'last_name',
            'KeyType': 'RANGE'
        }
    ],
    AttributeDefinitions=[
        {
            'AttributeName': 'username',
            'AttributeType': 'S'
        },
        {
            'AttributeName': 'last_name',
            'AttributeType': 'S'
        },
    ],
    ProvisionedThroughput={
        'ReadCapacityUnits': 5,
        'WriteCapacityUnits': 5
    }
)

# Wait until the table exists.
table.meta.client.get_waiter('table_exists').wait(TableName='users')

# Print out some data about the table.
print(table.item_count)
This method returns a DynamoDB.Table resource.

Once you have a DynamoDB.Table resource you can add new items to the table using DynamoDB.Table.put_item():

table.put_item(
   Item={
        'username': 'janedoe',
        'first_name': 'Jane',
        'last_name': 'Doe',
        'age': 25,
        'account_type': 'standard_user',
    }
)
You can then retrieve the object using DynamoDB.Table.get_item():

response = table.get_item(
    Key={
        'username': 'janedoe',
        'last_name': 'Doe'
    }
)
item = response['Item']
print(item)
You can then update attributes of the item in the table:

table.update_item(
    Key={
        'username': 'janedoe',
        'last_name': 'Doe'
    },
    UpdateExpression='SET age = :val1',
    ExpressionAttributeValues={
        ':val1': 26
    }
)

You can also delete the item using DynamoDB.Table.delete_item():

table.delete_item(
    Key={
        'username': 'janedoe',
        'last_name': 'Doe'
    }
)
With the table full of items, you can then query or scan the items in the table using the DynamoDB.Table.query() or DynamoDB.Table.scan() methods respectively. To add conditions to scanning and querying the table, you will need to import the boto3.dynamodb.conditions.Key and boto3.dynamodb.conditions.Attr classes. The boto3.dynamodb.conditions.Key should be used when the condition is related to the key of the item. The boto3.dynamodb.conditions.Attr should be used when the condition is related to an attribute of the item:

from boto3.dynamodb.conditions import Key, Attr
This queries for all of the users whose username key equals johndoe:

response = table.query(
    KeyConditionExpression=Key('username').eq('johndoe')
)
items = response['Items']
print(items)
Similarly you can scan the table based on attributes of the items. For example, this scans for all the users whose age is less than 27:

response = table.scan(
    FilterExpression=Attr('age').lt(27)
)
items = response['Items']
print(items)
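
A single query() or scan() call returns at most 1 MB of data; when more results remain, the response includes a LastEvaluatedKey. Here is a minimal sketch of paging through a full scan of the same users table, assuming the table resource and imports above:

items = []
scan_kwargs = {'FilterExpression': Attr('age').lt(27)}
while True:
    response = table.scan(**scan_kwargs)
    items.extend(response['Items'])
    last_key = response.get('LastEvaluatedKey')
    if not last_key:
        break
    # Continue the scan from where the previous page stopped.
    scan_kwargs['ExclusiveStartKey'] = last_key
print(len(items))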
Describing instances
import boto3
ec2 = boto3.client('ec2')
response = ec2.describe_instances()
print(response)
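
The describe_instances response is nested: each reservation contains a list of instances. A small sketch to pull the instance IDs and their states out of the response above:

for reservation in response['Reservations']:
    for instance in reservation['Instances']:
        # Print each instance ID with its current state (running, stopped, ...).
        print(instance['InstanceId'], instance['State']['Name'])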

Monitor and unmonitor instances
import sys
import boto3


ec2 = boto3.client('ec2')
if sys.argv[1] == 'ON':
    response = ec2.monitor_instances(InstanceIds=['INSTANCE_ID'])
else:
    response = ec2.unmonitor_instances(InstanceIds=['INSTANCE_ID'])
print(response)

Start and stop instances

import sys
import boto3
from botocore.exceptions import ClientError

instance_id = sys.argv[2]
action = sys.argv[1].upper()

ec2 = boto3.client('ec2')


if action == 'ON':
    # Do a dryrun first to verify permissions
    try:
        ec2.start_instances(InstanceIds=[instance_id], DryRun=True)
    except ClientError as e:
        if 'DryRunOperation' not in str(e):
            raise

    # Dry run succeeded, run start_instances without dryrun
    try:
        response = ec2.start_instances(InstanceIds=[instance_id], DryRun=False)
        print(response)
    except ClientError as e:
        print(e)
else:
    # Do a dryrun first to verify permissions
    try:
        ec2.stop_instances(InstanceIds=[instance_id], DryRun=True)
    except ClientError as e:
        if 'DryRunOperation' not in str(e):
            raise

    # Dry run succeeded, call stop_instances without dryrun
    try:
        response = ec2.stop_instances(InstanceIds=[instance_id], DryRun=False)
        print(response)
    except ClientError as e:
        print(e)
		
Reboot instances
import boto3
from botocore.exceptions import ClientError


ec2 = boto3.client('ec2')

try:
    ec2.reboot_instances(InstanceIds=['INSTANCE_ID'], DryRun=True)
except ClientError as e:
    if 'DryRunOperation' not in str(e):
        print("You don't have permission to reboot instances.")
        raise

try:
    response = ec2.reboot_instances(InstanceIds=['INSTANCE_ID'], DryRun=False)
    print('Success', response)
except ClientError as e:
    print('Error', e)
Describe Regions and Availability Zones

import boto3

ec2 = boto3.client('ec2')

# Retrieves all regions/endpoints that work with EC2
response = ec2.describe_regions()
print('Regions:', response['Regions'])

# Retrieves availability zones only for region of the ec2 object
response = ec2.describe_availability_zones()
print('Availability Zones:', response['AvailabilityZones'])
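
If you only need the names, a short sketch that prints just the region and zone names from the two calls above:

regions = ec2.describe_regions()['Regions']
print([r['RegionName'] for r in regions])

zones = ec2.describe_availability_zones()['AvailabilityZones']
print([z['ZoneName'] for z in zones])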

Create a security group and rules
import boto3
from botocore.exceptions import ClientError

ec2 = boto3.client('ec2')

response = ec2.describe_vpcs()
vpc_id = response.get('Vpcs', [{}])[0].get('VpcId', '')

try:
    response = ec2.create_security_group(GroupName='SECURITY_GROUP_NAME',
                                         Description='DESCRIPTION',
                                         VpcId=vpc_id)
    security_group_id = response['GroupId']
    print('Security Group Created %s in vpc %s.' % (security_group_id, vpc_id))

    data = ec2.authorize_security_group_ingress(
        GroupId=security_group_id,
        IpPermissions=[
            {'IpProtocol': 'tcp',
             'FromPort': 80,
             'ToPort': 80,
             'IpRanges': [{'CidrIp': '0.0.0.0/0'}]},
            {'IpProtocol': 'tcp',
             'FromPort': 22,
             'ToPort': 22,
             'IpRanges': [{'CidrIp': '0.0.0.0/0'}]}
        ])
    print('Ingress Successfully Set %s' % data)
except ClientError as e:
    print(e)
	
Delete a security group
import boto3
from botocore.exceptions import ClientError

# Create EC2 client
ec2 = boto3.client('ec2')

# Delete security group
try:
    response = ec2.delete_security_group(GroupId='SECURITY_GROUP_ID')
    print('Security Group Deleted')
except ClientError as e:
    print(e)

Wednesday, 16 December 2020

IAM role, ALB and EC2 Terraform creation

ALB Terraform:

HTTP and HTTPS listeners are defined with default actions.

ALB listener rules always use actions such as "forward", "redirect" and "fixed-response".

Below is the code for the ALB.

module "alb" {
  source  = "terraform-aws-modules/alb/aws"
  version = "~> 5.0"

  name = "my-alb"

  load_balancer_type = "application"

  vpc_id             = "vpc-abcde012"
  subnets            = ["subnet-abcde012", "subnet-bcde012a"]
  security_groups    = ["sg-edcd9784", "sg-edcd9785"]

  access_logs = {
    bucket = "my-alb-logs"
  }

  target_groups = [
    {
      name_prefix      = "pref-"
      backend_protocol = "HTTP"
      backend_port     = 80
      target_type      = "instance"
    }
  ]

  https_listeners = [
    {
      port               = 443
      protocol           = "HTTPS"
      certificate_arn    = "arn:aws:iam::123456789012:server-certificate/test_cert-123456789012"
      target_group_index = 0
    }
  ]

  http_tcp_listeners = [
    {
      port               = 80
      protocol           = "HTTP"
      target_group_index = 0
    }
  ]

  tags = {
    Environment = "Test"
  }
}

IAM User:

resource "aws_iam_user" "lb" {
  name = "loadbalancer"
  path = "/system/"

  tags = {
    tag-key = "tag-value"
  }
}

resource "aws_iam_access_key" "lb" {
  user = aws_iam_user.lb.name
}

resource "aws_iam_user_policy" "lb_ro" {
  name = "test"
  user = aws_iam_user.lb.name

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ec2:Describe*"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
EOF
}

Monday, 26 October 2020

EC2 state monitoring with CloudWatch

CloudWatch:

We usually monitor and collect metrics for EC2 resources using CloudWatch.

Here we create a CloudWatch Events rule to monitor the state of an EC2 instance.

We use the below pieces to get a mail notification.

1. SNS topic

2. CloudWatch Events rule

3. EC2 status change

First, let's create an SNS topic for the mail notification.

Go to Services and create an SNS topic as below; here I created a topic named ec2monitor.


Once done, we have to subscribe to the topic with the intended email address, since we selected Email as the protocol.




Once that is done, you will get a notification mail to confirm the subscription; until you confirm it, you won't get any mail from AWS for that topic.

Once the subscription is confirmed, you will see it marked as "Confirmed", as in the pic below.



Now you can configure a CloudWatch Events rule as below to monitor the EC2 status.

Go to Services --> CloudWatch.


Click on "Configure details", then enter the event name and description.

Once the event is configured, change the state of the EC2 instance from one state to another.

You will then get a mail notification like below.

This is how we can monitor the status of EC2 instances using CloudWatch Events.

In the same way, we can monitor any number of events for AWS resources using CloudWatch.
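
If you prefer to script this instead of clicking through the console, here is a minimal Boto3 sketch of the same setup; the topic name, email address and rule name are placeholders, and the SNS topic's access policy must also allow events.amazonaws.com to publish:

import json
import boto3

sns = boto3.client('sns')
events = boto3.client('events')

# Create the SNS topic and subscribe an email address (the subscription must be confirmed by mail).
topic_arn = sns.create_topic(Name='ec2monitor')['TopicArn']
sns.subscribe(TopicArn=topic_arn, Protocol='email', Endpoint='you@example.com')

# Rule that fires on any EC2 instance state-change notification.
events.put_rule(
    Name='ec2-state-change',
    EventPattern=json.dumps({
        'source': ['aws.ec2'],
        'detail-type': ['EC2 Instance State-change Notification']
    }),
    State='ENABLED'
)

# Send matching events to the SNS topic.
events.put_targets(
    Rule='ec2-state-change',
    Targets=[{'Id': 'sns-target', 'Arn': topic_arn}]
)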
 

Thank you for reading 👍👍👍

Thursday, 8 October 2020

Boto3: Part 1

Uploading data to DynamoDB using Boto3.

Create Table.

Search for DynamoDB service in AWS console and click on "Create table"

Enter the below details and click "Create"

Table is created.


Enter the data manually by clicking on "Create item"


and click on "Save"


Now update the table using Boto3 

Code: upload_db.py

import boto3

db=boto3.resource('dynamodb')
table=db.Table('employees')
table.put_item(
    Item={
        'emp_id': "2",
        'Name': "Charvik",
        'Age': "4"
    }
)

Run the code and check the console for a successful update of the table.
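
If you need to load many items at once, the Table resource also provides a batch writer that buffers puts and sends them in batches. A minimal sketch, assuming the same employees table (the items here are made up):

with table.batch_writer() as batch:
    for i in range(3, 6):
        batch.put_item(
            Item={
                'emp_id': str(i),
                'Name': 'Employee-' + str(i),
                'Age': "30"
            }
        )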

Get the data from table

# Get the data from table.
response=table.get_item(
    Key={
        "emp_id":"1"
    }
)
print(response["Item"])


When you run the above code, you can see the output.

Refer for code in my github account: https://github.com/krishhmomdad/Boto

Thank you for reading 👍👍👍👍👍👍👍

Wednesday, 30 September 2020

AWS CodePipeline with S3 and GitHub connection

AWS CodePipeline

is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates.
It automates the build, test and deploy phases of your release process every time there is a code change, based on the release model you define.
This lets you rapidly and reliably deliver features and updates of your app.
It integrates easily with third-party services such as GitHub, or with your own custom plugins.

First, host a static website on AWS S3.

Create a bucket (in this example, mywebsite3009).
Select the bucket and click on the "Properties" tab.

Click on "Static website hosting".



Make a note of the endpoint and click "Save".
Edit the Block Public Access settings and allow public access, so that the bucket can be reached from outside.



Click "Save"

Click "confirm".

Add a bucket policy so that the content in the bucket is publicly readable.
Bucket policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::mywebsite3009/*"
            ]
        }
    ]
}

Click "save"

You can ignore the warning message if you intend to make the bucket public.

Create "entry.html" with the following content and upload it to the bucket you created.

entry.html:
<html>
<head>
    <title>My static Website Home Page</title>
</head>
<body>
  <h1>Welcome to my sample website</h1>
  <p>I am hosted on Amazon S3!</p>
</body>
</html>

Upload to the S3 bucket

Now you can copy the endpoint and paste it in the browser.

In my case it looks like the figure below.




We have now successfully hosted a static website on S3.
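
If you would rather script these steps than click through the console, here is a minimal Boto3 sketch of the same setup, using the bucket name and index document from this example:

import json
import boto3

s3 = boto3.client('s3')
bucket = 'mywebsite3009'

# Relax Block Public Access so the public-read bucket policy can take effect.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        'BlockPublicAcls': False,
        'IgnorePublicAcls': False,
        'BlockPublicPolicy': False,
        'RestrictPublicBuckets': False
    }
)

# Allow public reads of the objects.
policy = {
    'Version': '2012-10-17',
    'Statement': [{
        'Sid': 'PublicReadGetObject',
        'Effect': 'Allow',
        'Principal': '*',
        'Action': ['s3:GetObject'],
        'Resource': ['arn:aws:s3:::%s/*' % bucket]
    }]
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))

# Enable static website hosting with entry.html as the index document.
s3.put_bucket_website(
    Bucket=bucket,
    WebsiteConfiguration={'IndexDocument': {'Suffix': 'entry.html'}}
)

# Upload the page itself.
s3.upload_file('entry.html', bucket, 'entry.html',
               ExtraArgs={'ContentType': 'text/html'})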

Now we can select "CodePipeline" under AWS services.

 


Enter the pipeline name and accept the default values as below and select "Next".

Select "Github" as a source and click on "connect github"

Give your Github account name and select the repository name.

click on "install" then select "connect".


Then select "Next"

For now you can skip the build stage.

Under "Deploy", select the values below.



Select "Next", review the details, and select "Create pipeline".

Now we have successfully created the pipeline.
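
You can also check the pipeline's status from Boto3 rather than the console; a small sketch, where the pipeline name is a placeholder for the one you created:

import boto3

codepipeline = boto3.client('codepipeline')

# Print the latest execution status of each stage in the pipeline.
state = codepipeline.get_pipeline_state(name='my-pipeline')
for stage in state['stageStates']:
    latest = stage.get('latestExecution', {})
    print(stage['stageName'], latest.get('status'))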



Change something in "my-app" source code and check the git status.



and push the changes to GitHub.


Another way to use this: keep the same website files in GitHub that you uploaded to S3, so that when you change your website code and push it, the change is automatically reflected.

Since my Java project is connected to S3 via the pipeline, when you push a change to an updated Java file to GitHub, the updated file shows up in S3.



Now you can open Sample.java; it shows the update we made, as below.

Sample.java:
public class Sample {

    public static void main(String[] args) {
        System.out.println("This is my app which is created in the pipeline");
    }

}

This is how CodePipeline works with a static website, a Java project and GitHub changes.





Thank you for reading.

Wednesday, 23 September 2020

Terraform Part-2

In this section, let's discuss how we can build reusable modules.

In this example, let's create an EC2 instance, a VPC and subnets dynamically using separate modules.

Folder structure will be like below


First, let's create the VPC and subnets.

For this, create the files below.

network.tf: for the VPC and subnets

resource "aws_vpc" "main" {

  cidr_block       = var.vpc_cidr

  instance_tenancy = var.tenancy

    tags = {

    Name = "main"

  }

}


resource "aws_subnet" "main" {

  vpc_id     = var.vpc_id

  cidr_block = var.subnet_cidr

  

  tags = {

    Name = "main"

  }

}


output "vpc_id"{

    value=aws_vpc.main.id

   }

output "subnet_id"{

    value=aws_subnet.main.id

   }

vars.tf: to declare the variables that will be used in creating the VPC and subnets.

vars.tf:

variable "vpc_cidr" {

    default="10.0.0.0/16"

}

varible "tenancy" {

    default=dedicated

}

variable "vpc_id"{

 

}

variable "subnet_cidr"{

    default="10.0.1.0/24" ---> if the value is mandatory initialize the same else assign empty.

}

Now let's create the EC2 instance in the same way.

Create a file instances.tf in the ec2 folder; in this case we pass in the subnet where the instance should be created.

instances.tf:

resource "aws_instance" "web" {

  count=var.ec2_count

  ami           = var.ec2_ami

  instance_type = "var.instatype

  subnet_id=var.subnet_id

  tags = {

    Name = "myinstance"

  }

}

vars.tf

variable "ec2_ami" {
}

variable "instatype" {
  default = "t2.micro"
}

variable "subnet_id" {
}

variable "ec2_count" {
  default = "1"
}

Let's use these modules to create resources in the dev and prod environments.

Create a main.tf file in the dev folder; source points to where the module code lives, and the CIDR, VPC and subnet values are passed in to it.

In the case of source, you can pull the module from anywhere, such as Git/Bitbucket, or use a relative path.

We need the vpc_id, which is created dynamically at runtime. To get this value out of modules/vpc, we declare it as an output (refer to network.tf).

Dev:

main.tf:

provider "aws" {
  region = "ap-south-1"
}

module "my_vpc" {
  source      = "../modules/vpc"
  vpc_cidr    = "192.168.0.0/16"
  tenancy     = "default"
  vpc_id      = "${module.my_vpc.vpc_id}"
  subnet_cidr = "192.168.1.0/24"
}

module "my_ec2" {
  source    = "../modules/ec2"
  ec2_count = 1
  ec2_ami   = "ami-09052aa9bc337c78d"
  instatype = "t2.micro"
  subnet_id = "${module.my_vpc.subnet_id}"
}

Now apply Terraform and check the output.

VPC:

Subnets:

EC2:




The same main.tf layout can be used for prod as well; just update the VPC, subnet and EC2 values and try it.

main.tf:

provider "aws" {
  region = "ap-south-1"
}

module "my_vpc" {
  source      = "../modules/vpc"
  vpc_cidr    = "10.0.0.0/16"
  tenancy     = "default"
  vpc_id      = "${module.my_vpc.vpc_id}"
  subnet_cidr = "10.0.1.0/24"
}

module "my_ec2" {
  source    = "../modules/ec2"
  ec2_count = 1
  ec2_ami   = "ami-09a7bbd08886aafdf"
  instatype = "t2.micro"
  subnet_id = "${module.my_vpc.subnet_id}"
}

This is how we can create modules and provision the AWS resources using Terraform.

Thank you for reading 

Sunday, 20 September 2020

Terraform - Part 1

Definition:

Terraform automates the provisioning of resources in the cloud.

To follow this topic, we need the below tools and software ready on our system:

Terraform:https://www.terraform.io/downloads.html

Visual studio code: https://code.visualstudio.com/download

AWS CLI: https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-windows.html

Terraform needs programmatic access to your AWS account.

Let's create an IAM user to provide that access to Terraform.

Click "Next permissions"

Once User created ,Make a note of secret key and access key IDs to configure the access from AWS CLI to the console.

Open command prompt and type AWS to check your CLI access.

type aws configure command


Next, download and configure the Terraform tool.


To verify that Terraform installed successfully, check the version as below.


Now let's use an editor; Visual Studio Code in my case, or whatever editor you find convenient.

Create a directory with a simply named file, in my case sample.tf. The file must have the .tf extension as a naming convention.

Folder structure will be like below


sample.tf

provider "aws" {
  region = "us-west-2"
}

resource "aws_vpc" "main" {
  cidr_block       = "10.0.0.0/16"
  instance_tenancy = "default"

  tags = {
    Name = "main"
  }
}

resource "aws_subnet" "subnet1" {
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.0.1.0/24"

  tags = {
    Name = "Subnet1"
  }
}

Once you have entered the VPC and subnet code, you need to initialise Terraform with the terraform init command, as below.

Before running it to create AWS resources, let's see what we already have in the us-west-2 region.

We have the default VPC and subnets, as below.

VPC:

subnet:


Now execute "terraform apply" command in terminal and give "yes" as you want to approve to create the resources what you have asked.

See in above screen , we can see two resources are created ,lets open our console and check.

Check the resources in "us-west-2" region 

VPC:

subnet



With this we can easily provision AWS resources with Terraform code.
Once it has successfully created the resources, Terraform writes a state file (terraform.tfstate) that records the resources it manages in AWS.
In our case it will look like below.

Now let's look deeper at the resource fields and how we can separate out the variables and initialize them.
See vars.tf for how we separate them.
Execute again using terraform apply.
This is how we can initialize the variables and execute them.
Now let's create multiple subnets using loops in Terraform.

Since we are in the us-west-2 region, let's check how many AZs are currently available.
Let's define the CIDR blocks dynamically and create a subnet for each CIDR.
Now vars.tf looks like below.
variable "region"{
    default="us-west-2"
}
variable "vpc_cidr"{
    default="10.0.0.0/16"
}
variable "subnet_cidr"{
    type=list(string)
    default=["10.0.5.0/24","10.0.2.0/24","10.0.3.0/24","10.0.4.0/24"]
}
variable "azs"{
    type=list(string)
    default=["us-west-2a","us-west-2b","us-west-2c","us-west-2d"]
}

sample.tf looks like below:
provider "aws"{
  region=var.region
}
resource "aws_vpc" "main" {
  cidr_block       = var.vpc_cidr
  instance_tenancy = "default"
  
  tags = {
    Name = "main"
  }
}
resource "aws_subnet" "subnets" {
  count=length(var.azs)
  vpc_id     = aws_vpc.main.id
  cidr_block = element(var.subnet_cidr,count.index)
  
  tags = {
    Name = "Subnet1"
  }
}

Apply Terraform and see the resources created, as below.



In the pic above, all the subnets have the same name. Let's change the subnet name using count.index.

Just change it to Name = "Subnet-${count.index+1}" and apply Terraform.


As of now we have hardcoded the region and AZs, but we can also get those dynamically using data sources.

Just change the code in vars.tf as below
#variable "azs"{
 #   type=list(string)
  #  default=["us-west-2a","us-west-2b","us-west-2c","us-west-2d"]
#}
# Declare the data source
data "aws_availability_zones" "azs" {
  state = "available"
}

update the sample.tf as below
resource "aws_subnet" "subnets" {
  count=length(data.aws_availability_zones.azs.names)
  vpc_id     = aws_vpc.main.id
  cidr_block = element(var.subnet_cidr,count.index)
  
  tags = {
    Name = "Subnet-${count.index+1}"
  }
}

Apply Terraform; here we see the same output as above, although we are now getting the AZ data dynamically.

So far the subnets are created in a single AZ, as below.

Let's create the subnets in different AZs instead.

sample.tf (inside the aws_subnet resource):
  count             = length(data.aws_availability_zones.azs.names)
  availability_zone = element(data.aws_availability_zones.azs.names, count.index)


Let's apply Terraform.
If you observe the pic above, it created 3 new subnets and deleted 3, because Terraform reused what it could and recreated the rest in the new AZs, as below.

Let's look at other functions in Terraform.

Map:
First, let's see how we can create an EC2 instance without a map.
vars.tf:
variable "region"{
    default="ap-south-1"
}
variable "ec2_ami"{
    default="ami-76d6f519"
}
provider.tf:
provider "aws"{
    region=var.region
}
ec2-instance.tf:
resource "aws_instance" "web" {
  ami           = var.ec2_ami
  instance_type = "t2.micro"

  tags = {
    Name = "HelloWorld"
  }
}

Apply Terraform and check that the EC2 instance is created.

In the code above we hardcoded the region and the AMI, but when we change the region, the AMI ID has to change too.
So, to pick the AMI ID based on the region, we use a map.

Define the AMIs in a map and look up the one for the current region using lookup(), as below.

vars.tf:
variable "region"{
    default="ap-south-1"
}
variable "ec2_ami"{
    type=map
    default={
        ap-south-1="ami-76d6f519"
        us-west-2="ami-e251209a"
    }
    
}
ec2-instance.tf:
resource "aws_instance" "web" {
  ami           = lookup(var.ec2_ami, var.region)
  instance_type = "t2.micro"

  tags = {
    Name = "HelloWorld"
  }
}

See below: no instance is created in the us-west-2 region,
but the EC2 instance is created in ap-south-1.


Thank you for reading 👍👍👍👍👍