Thursday, May 7, 2020

Terms

Fault tolerance: The built-in redundancy of an application's components; the goal is to avoid single points of failure.

Recoverability: The processes, policies, and procedures for restoring service after a catastrophic event. The goal is to restore service quickly without losing data.

Scalability: The ability of an application to accommodate growth without a design change; a measure of how quickly the infrastructure can respond to increased capacity needs so that the application stays available and performs within required standards.

Elasticity: The ability to scale capacity out and in on demand; scaling can be time-based, volume-based, or predictive.

Recovery Point Objective (RPO): The acceptable amount of data loss, measured in time (e.g., an RPO of one hour means losing at most the last hour of data).

Recovery Time Objective (RTO): The time it takes after a disruption to restore a business process to its service level, as defined by the operational level agreement (OLA)



Monday, July 8, 2019

S3 commands

Copy files from a local Mac to S3 recursively, excluding multiple patterns

aws s3 cp . s3://db-terraform/ --recursive --exclude "*plan*" --exclude "*.tfstate*" --exclude .DS_Store


Thursday, January 3, 2019

Terraform commands

terraform init
terraform plan -var-file=myvalues.tfvars -out plan1
terraform apply plan1

Create 2 files: one for variable declarations (myvariables.tf) and another for the values (myvalues.tfvars), as sketched below.
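
A minimal sketch of the two files (the variable names here are hypothetical):

myvariables.tf
-------
variable "region" {
  default = "us-east-1"
}
variable "instance_type" {}
-------

myvalues.tfvars
-------
region        = "us-east-1"
instance_type = "t2.medium"
-------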

Monday, December 31, 2018

Using AWS ECR with Docker

1) Set up Docker on a Mac

Download Docker for Mac and install it
https://docs.docker.com/docker-for-mac/install/
Skip the login
A whale icon appears in the menu bar at the top right of the Mac

From the terminal, execute docker ps or docker --version command to confirm installation.

2) Download a sample docker image
I used rabbitmq from DockerHub
https://hub.docker.com/_/rabbitmq/

From the terminal, execute docker pull rabbitmq

3) Create AWS ECR repository
Login to AWS > ECR
Create a new repository myrabbitmq
Once created, View Push Commands for instructions.

4) Upload a new image into ECR
From the terminal, execute these commands
docker image ls (List the Docker images)
$(aws ecr get-login --no-include-email --region us-east-1)
A "Login Succeeded" message will be displayed

docker tag rabbitmq:latest {your-aws-id}.dkr.ecr.us-east-1.amazonaws.com/myrabbitmq:latest
docker image ls (List the Docker images)
docker push {your-aws-id}.dkr.ecr.us-east-1.amazonaws.com/myrabbitmq:latest

Refresh ECR repository to view the uploaded image
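
Note: aws ecr get-login was removed in AWS CLI v2. If you are on v2, the equivalent login is:

aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin {your-aws-id}.dkr.ecr.us-east-1.amazonaws.com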



Wednesday, October 31, 2018

AWS CLI query to list instance types and names, sorted


aws ec2 describe-instances --filters "Name=instance-state-name,Values=running" "Name=tag:mytag,Values=myvalue" --query "Reservations[*].Instances[*].{name: Tags[?Key=='Name'] | [0].Value, InstanceType: InstanceType}" --output text --color off | sort -n -k 2



Thursday, October 25, 2018

AWS CLI query for EC2 instances summary (multiple filters)


aws ec2 describe-instances --output text --filters "Name=instance-state-name,Values=running" "Name=tag:mytagname,Values=mytagvalue"  --query 'Reservations[*].Instances[*].[InstanceType]' | sort | uniq -c
  18 c4.2xlarge
  10 c4.xlarge
   7 m4.10xlarge
   6 m4.16xlarge
   8 m4.2xlarge
   3 m4.4xlarge
  22 m4.large
  21 m4.xlarge
  26 r4.2xlarge
   3 r4.large
  10 r4.xlarge
   6 r5.2xlarge
  10 r5.4xlarge
   3 t2.medium

Friday, June 23, 2017

Confluent Kafka Installation

Setup EC2 instance

Spin up an EC2 instance (m4.large or m4.xlarge) and run the following commands

sudo lsblk
sudo file -s /dev/xvdb
sudo mkfs -t ext4 /dev/xvdb
sudo mkdir -p /apps/kafka
sudo mount /dev/xvdb /apps/kafka
sudo useradd kafka
sudo chown -R kafka:kafka /apps/kafka
sudo vi /etc/fstab
/dev/xvdb  /apps/kafka ext4    defaults,nofail        0       2

Setup security group and add to EC2 instances

Create security group 'kafka' with the ports
22
2181 (zookeeper client port) 2888, 3888 (zookeeper internal ports)
8081 - 8083 (8081 - schema registry, 8082 - rest proxy)
9021 (control center rest listeners)
9092 (kafka broker)
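
A hedged CLI sketch for creating this group (the VPC ID, group ID, and CIDR are placeholders):

aws ec2 create-security-group --group-name kafka --description "kafka cluster" --vpc-id vpc-xxxx
aws ec2 authorize-security-group-ingress --group-id sg-xxxx --protocol tcp --port 9092 --cidr 10.0.0.0/16
(repeat authorize-security-group-ingress for ports 22, 2181, 2888, 3888, 8081-8083, and 9021)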

Install JDK

Download jdk-8u131-linux-x64.tar.gz and copy to /apps/kafka
tar xvzf  jdk-8u131-linux-x64.tar.gz

Install Kafka platform

Download confluent-3.2.1-2.11.tar.gz and copy to /apps/kafka
tar xvzf confluent-3.2.1-2.11.tar.gz

Update ~/.bash_profile

PATH=$PATH:$HOME/.local/bin:$HOME/bin
FS_HOME=/apps/kafka
JAVA_HOME=$FS_HOME/jdk1.8.0_131
JAVA_OPTS="$JAVA_OPTS -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Addresses"
CONFLUENT_HOME=$FS_HOME/confluent-3.2.1
SCRIPTS=$FS_HOME/scripts
LOGS=$CONFLUENT_HOME/logs

PATH=$JAVA_HOME/bin:$CONFLUENT_HOME/bin:$PATH
export PATH JAVA_HOME JAVA_OPTS CONFLUENT_HOME SCRIPTS LOGS FS_HOME

Note: create the scripts ($FS_HOME/scripts) and logs ($CONFLUENT_HOME/logs) directories

Setup Zookeeper Ensemble


mkdir -p /apps/kafka/zookeeper-data
mkdir -p /apps/kafka/zookeeper-data/dataLog

Update /apps/kafka/confluent-3.2.1/etc/kafka/zookeeper.properties

dataDir=/apps/kafka/zookeeper-data
dataLogDir=/apps/kafka/zookeeper-data/dataLog
clientPort=2181
initLimit=5
syncLimit=2
maxClientCnxns=0
tickTime=2000
server.1={server1-ip}:2888:3888
server.2={server2-ip}:2888:3888
server.3={server3-ip}:2888:3888

Create a file named myid


Create a myid file in the dataDir (/apps/kafka/zookeeper-data) on each server:
echo 1 > /apps/kafka/zookeeper-data/myid (1st zk server)
echo 2 > /apps/kafka/zookeeper-data/myid (2nd zk server)
echo 3 > /apps/kafka/zookeeper-data/myid (3rd zk server)

Start zookeeper
Zookeeper requires Java

Create a script start_zookeeper.sh
-------
#!/bin/sh
source ~/.bash_profile
cd $CONFLUENT_HOME/bin
./zookeeper-server-start -daemon ../etc/kafka/zookeeper.properties
-------
Run the start script.
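
To verify each node is up (assumes nc is installed; these are ZooKeeper's four-letter-word commands):

echo ruok | nc localhost 2181 (should return imok)
echo stat | nc localhost 2181 (shows whether the node is leader or follower)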

Start Kafka

Create directory
mkdir -p /apps/kafka/kafka-logs

Create a file brokers_zks in $SCRIPTS
-------
STAGE_KAFKA_BROKERS={server1-ip}:9092,{server2-ip}:9092,{server3-ip}:9092
STAGE_KAFKA_ZKS={server1-ip}:2181,{server2-ip}:2181,{server3-ip}:2181
-------

Create a script start_kafka_broker.sh 
-------
#!/bin/sh
source ~/.bash_profile
source $SCRIPTS/brokers_zks
cd $CONFLUENT_HOME/bin

CONTROL_CENTER_OPTS="--override metric.reporters=io.confluent.metrics.reporter.ConfluentMetricsReporter --override confluent.metrics.reporter.bootstrap.servers=$STAGE_KAFKA_BROKERS --override confluent.metrics.reporter.zookeeper.connect=$STAGE_KAFKA_ZKS --override confluent.metrics.reporter.topic.replicas=1"

./kafka-server-start -daemon ../etc/kafka/server.properties --override broker.id=2 --override log.dirs=/apps/kafka/kafka-logs  --override zookeeper.connect=$STAGE_KAFKA_ZKS $CONTROL_CENTER_OPTS
-------
Note: you can either use --override parameters or update the properties file directly. broker.id must be unique on each server (1, 2, 3).
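
A quick smoke test once all brokers are up (assumes ~/.bash_profile has been sourced; the topic name is arbitrary):

source $SCRIPTS/brokers_zks
kafka-topics --zookeeper $STAGE_KAFKA_ZKS --create --topic smoke-test --partitions 3 --replication-factor 3
kafka-topics --zookeeper $STAGE_KAFKA_ZKS --list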

Start Schema Registry

Update /etc/schema-registry/schema-registry.properties
-------
listeners=http://0.0.0.0:8081
kafkastore.connection.url={server1-ip}:2181,{server2-ip}:2181,{server3-ip}:2181
kafkastore.topic=_schemas
debug=false
-------

Create a script start_schema_registry.sh 
-------
#!/bin/sh
source ~/.bash_profile
cd $CONFLUENT_HOME/bin
./schema-registry-start -daemon ../etc/schema-registry/schema-registry.properties
-------
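
A quick check that the registry is up (returns an empty list, [], on a fresh install):

curl http://localhost:8081/subjects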

Start Rest Proxy

Update /etc/kafka-rest/kafka-rest.properties
-------
id=kafka-rest-test-server1
schema.registry.url=http://0.0.0.0:8081
zookeeper.connect={server1-ip}:2181,{server2-ip}:2181,{server3-ip}:2181
-------
Create a script start_rest_proxy.sh 
-------
#!/bin/sh
source ~/.bash_profile
cd $CONFLUENT_HOME/bin
./kafka-rest-start ../etc/kafka-rest/kafka-rest.properties > ../logs/nohup-rest-proxy 2>&1 </dev/null &
-------
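
A quick check that the proxy is up:

curl http://localhost:8082/topics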

Start Control Center
Create directory /apps/kafka/control-center-data
Update /etc/confluent-control-center/control-center.properties
-------
zookeeper.connect={server1-ip}:2181,{server2-ip}:2181,{server3-ip}:2181
bootstrap.servers={server1-ip}:9092,{server2-ip}:9092,{server3-ip}:9092
confluent.controlcenter.id=1
confluent.controlcenter.data.dir=/apps/kafka/control-center-data
#confluent.controlcenter.connect.cluster=connect1:8083,connect2:8083,connect3:8083
#confluent.controlcenter.license=/path/to/license/file
-------

Create a script start_control_center.sh 
-------
#!/bin/sh
source ~/.bash_profile
cd $CONFLUENT_HOME/bin
./control-center-start ../etc/confluent-control-center/control-center.properties > ../logs/nohup-control-center 2>&1 </dev/null &
-------

Delete logs

Create a cleanup script (use with care; this removes ZooKeeper transaction logs and Kafka log data):
-------
#!/bin/sh
source ~/.bash_profile
rm -Rf /apps/kafka/zookeeper-data/dataLog/*
rm -Rf /apps/kafka/confluent-3.2.1/logs/*
rm -Rf /apps/kafka/kafka-logs/*
-------

Friday, November 4, 2016

SSH to EC2 as a non-root user directly


  1. Create a key pair in the EC2 console (Example: mykp.pem)
  2. Download mykp.pem 
  3. Run this command from a linux shell to get the public key for the above .pem
    • ssh-keygen -y -f /directorypath/mykp.pem
    • The output will be an ssh-rsa line with a long key
  4. Add this key to the target server (see the sketch below)
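
A sketch of step 4, assuming the target user on the server is appuser:

sudo su - appuser
mkdir -p ~/.ssh && chmod 700 ~/.ssh
echo "ssh-rsa AAAA...long-key..." >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

Then connect directly: ssh -i /directorypath/mykp.pem appuser@{ec2-host}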

Default ports used by tools

Solr: 8983
Zookeeper: 2888, 3888, 2181

AWS ElastiCache (Redis): 6379

Apache Tomcat: 8080
Jenkins: 8080
Nexus: 8080
Sonarqube: 8080

If running Jenkins and Nexus on same instance, Jenkins on 8080 and Nexus on 8081

Hygieia: 3000
Pa11y: 4000

Splunk: 8000, 8089


Building an Apache ZooKeeper Ensemble

Create a default RHEL EC2 instance

sudo lsblk
sudo file -s /dev/xvdb
sudo mkfs -t ext4 /dev/xvdb
sudo mkdir -p /apps/zookeeper
sudo mount /dev/xvdb /apps/zookeeper
sudo useradd zookeeper
sudo chown -R zookeeper:zookeeper /apps/zookeeper
sudo vi /etc/fstab
/dev/xvdb  /apps/zookeeper  ext4    defaults,nofail        0       2
sudo su - zookeeper

Download jdk1.8.0_102
Download zookeeper-3.4.9
Extract both into /apps/zookeeper

Set up ~/.bash_profile

FS_ROOT=/apps/zookeeper
JAVA_HOME=$FS_ROOT/jdk1.8.0_102
ZOOKEEPER_HOME=$FS_ROOT/zookeeper-3.4.9
SCRIPTS=$FS_ROOT/scripts
PATH=$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin:$PATH
export PATH JAVA_HOME ZOOKEEPER_HOME SCRIPTS

mkdir /apps/zookeeper/data
mkdir /apps/zookeeper/dataLog
mkdir /apps/zookeeper/logs

Create a file zoo.cfg with 
tickTime=2000
dataDir=/apps/zookeeper/data
dataLogDir=/apps/zookeeper/dataLog
clientPort=2181
initLimit=5
syncLimit=2
server.1=zoo1:2888:3888
server.2=zoo2:2888:3888
server.3=zoo3:2888:3888

Create a file myid in /apps/zookeeper/data directory with '1' as content

Update the AWS EC2 security group with ports 22, 2181, 2888, 3888

Create an AMI and spin up 2 more instances
Update the myid file to 2 on server2 and 3 on server3

Update the zoo.cfg server IPs on all servers

Create a start script start_zookeeper.sh

source ~/.bash_profile
cd $ZOOKEEPER_HOME/bin
./zkServer.sh start

or
nohup java -cp zookeeper-3.4.9.jar:lib/slf4j-api-1.6.1.jar:lib/slf4j-log4j12-1.6.1.jar:lib/log4j-1.2.16.jar:conf org.apache.zookeeper.server.quorum.QuorumPeerMain $SCRIPTS/zoo.cfg  > $FS_ROOT/logs/zookeeper.log 2>&1 &

Some known issues
If port 3888 is not listening, check that the myid file in the data directory has the correct id
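
Useful diagnostics (assumes nc and netstat are available; note that 2888 is open only on the current leader):

cat /apps/zookeeper/data/myid (should be 1, 2, or 3)
echo stat | nc localhost 2181 (reports leader/follower mode)
sudo netstat -tlnp | grep -E '2181|2888|3888'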



Wednesday, October 5, 2016

CloudTrail and Splunk


1) Navigate to CloudTrail > Add new trail
    Trail: my-cloudtrail
    Apply trail to all regions: Yes
    Create new S3 bucket: Yes
    S3 bucket: my-cloudtrail
    Advanced
      Send SNS notification for every log file delivery: Yes
      Create a new SNS topic: my-sns-topic-cloudtrail
    Create
   
2) Create SQS Queue
      Services > SQS > Create New Queue > Create
        Queue Name: my-sqs-cloudtrail

3) Subscribe SQS Queue to SNS Topic
      Services > SQS > my-sqs-cloudtrail > Queue Actions > Subscribe Queue to SNS Topic
      > Choose a Topic >  my-sns-topic-cloudtrail > Subscribe

4) Setup AWS permissions (a minimal sketch below)
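
A hedged sketch of the permissions the Splunk AWS input typically needs here (the exact action list is an assumption; check the Splunk Add-on for AWS docs):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sqs:GetQueueAttributes",
        "sqs:ReceiveMessage",
        "sqs:DeleteMessage",
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": "*"
    }
  ]
}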

5) Setup Data Inputs (in Splunk)
       Settings > Data Inputs > CloudTrail > point the input at the SQS queue created above


 



Monday, April 11, 2016

Install node, npm and gulp on EC2


wget https://nodejs.org/dist/v5.10.1/node-v5.10.1-linux-x64.tar.xz
tar -xvf node-v5.10.1-linux-x64.tar.xz

Node and npm are now installed.
Add the node bin directory to the PATH in .bash_profile and/or .bashrc.
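
For example (the path assumes the tarball was extracted in the home directory):

export PATH=$PATH:$HOME/node-v5.10.1-linux-x64/bin

Verify with node -v and npm -v.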

npm install -g gulp
npm install gulp --save-dev

Gulp is installed

Jenkins on AWS EC2 (Master/Slave setup using EC2 plugin)

1) Create user ciadmin
useradd ciadmin
passwd ciadmin

2) Create a file system /apps/jenkins
chown -R ciadmin:ciadmin /apps/jenkins

3) Install JDK 1.8, Apache Tomcat 9

4) Download and Copy Jenkins.war to webapps directory

5) Update .bash_profile
PATH=$PATH:$HOME/.local/bin:$HOME/bin
JAVA_HOME=/apps/jenkins/jdk1.8.0_77
JAVA_OPTS="$JAVA_OPTS -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Addresses"
PATH=$JAVA_HOME/bin:$PATH
export PATH JAVA_HOME JAVA_OPTS

6) Update /etc/hosts for the tomcat to start properly (example below)
172.x.y.x ip-172-x-y-z ip-172-x-y-z.us-west-2.compute.internal


7) Hit Jenkins URL: http://ip:8080/jenkins

8) Install Amazon EC2 plugin
Manage Jenkins -> Manage Plugins -> Available > Cluster Management and Distributed Build > Amazon EC2 plugin > Install

9) Create 2 AWS IAM roles

9.1) jenkins-master-role
Attach the custom policy

{
    "Version": "xxxxx",
    "Statement": [
        {
            "Sid": "Stmtxxx",
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeSpotInstanceRequests",
                "ec2:CancelSpotInstanceRequests",
                "ec2:GetConsoleOutput",
                "ec2:RequestSpotInstances",
                "ec2:RunInstances",
                "ec2:StartInstances",
                "ec2:StopInstances",
                "ec2:TerminateInstances",
                "ec2:CreateTags",
                "ec2:DeleteTags",
                "ec2:DescribeInstances",
                "ec2:DescribeKeyPairs",
                "ec2:DescribeRegions",
                "ec2:DescribeImages",
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeSubnets"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}
Verify trust relationship

{
  "Version": "xxxxxx",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}


9.2) jenkins-slave-role

Verify trust relationship

{
  "Version": "xxxxxx",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}



10) Setup EC2 Slave Config
Manage Jenkins > Configure System > Cloud section > Amazon Ec2

Name: aws-dev-vpc
Access Key: 
Secret Key: 
Use EC2 instance profile (Checked)
Region: us-west-2
EC2 Key Pair's Private Key: Paste your private key


Test your AWS connection!!

AMIs
Description: celebrity-jenkins-slave
AMI ID: ami-xxxx
Instance Type: M4Large
EBS Optimized: Checked
Availability Zone: us-west-2a

Security group names: my-security-group
Remote FS root: /mydir/subdir
Remote user: ciadmin
AMI Type: unix
Idle termination time: 60 

Note: the slave is terminated automatically after the idle termination time if Jenkins has kicked off no activity

Stop/Disconnect on Idle Timeout: Checked

Note: check the above if you want an idle slave stopped rather than terminated.

Subnet ID for VPC: subnet-xxxx

Tags:
Name: project Value: myproject
Name: Name Value: myjenkins-slave
Instance Cap: 3
IAM Instance Profile: arn:aws:iam::MyAccountNo:instance-profile/jenkins-slave-role
Connect by SSH Process: Checked
Save

11) Start a Slave
Manage Jenkins > Manage Nodes > provision via aws-dev-vpc
Check the logs

Mounting EBS volume on EC2

[ec2-user@ip ~]$ lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0  10G  0 disk
├─xvda1 202:1    0   1M  0 part
└─xvda2 202:2    0  10G  0 part /
xvdb    202:16   0  50G  0 disk

[ec2-user@ip ~]$ sudo file -s /dev/xvdb
[ec2-user@ip ~]$ sudo mkfs -t ext4 /dev/xvdb
[ec2-user@ip ~]$ sudo mkdir -p /mydir/subdir
[ec2-user@ip ~]$ sudo mount /dev/xvdb /mydir/subdir

Add file system to fstab to remount on system reboot
[ec2-user@ip ~]$ sudo cp /etc/fstab /etc/fstab.orig

Add the line to /etc/fstab
/dev/xvdb  /mydir/subdir  ext4    defaults,nofail        0       2

[ec2-user@ip ~]$ sudo mount -a
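
Verify the mount:

df -h /mydir/subdir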

CodeCommit vs BitBucket

CodeCommit does not have a pull request feature (as of this writing). Hence, using Bitbucket.

Monday, March 28, 2016

AWS CodeDeploy

Using CodeDeploy to publish files from S3 to an EC2 server


Preparing a POC application to use with Codedeploy
  • Create index.html with some sample content
  • Create appspec.yml with the following content
----------------------------------------------------------
version: 0.0
os: linux 
files:
 - source: /index.html
   destination: /home/ec2-user/myapp/
----------------------------------------------------------
  • Note: all other unused config has been stripped from the original appspec.yml. It is important to remove all unused syntax; otherwise, the application will not deploy properly with CodeDeploy.
  • Zip just the files index.html and appspec.yml, and name the zip file myapp-poc.zip (see the one-liner below)
  • Note: when the zip is opened, appspec.yml should be at the root level without any subdirectories
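
The one-liner referenced above (keeps both files at the zip root):

zip myapp-poc.zip index.html appspec.yml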
Create S3 bucket and upload deploy artifacts
  • Create S3 bucket myapp ( Left the default Grantee myaccount with all permissions)
  • Upload myapp-poc.zip to S3 myapp S3 bucket ( Left the default Grantee myaccount with all permissions)

Create IAM policy and roles
  • Create a policy myapp-codedeploy-ec2-policy
  • Contents of myapp-codedeploy-ec2-policy:
----------------------------------------------------------
{
    "Version": "xxxx",
    "Statement": [
        {
            "Sid": "Stmtxxx",
            "Effect": "Allow",
            "Action": [
                "s3:Get*",
                "s3:List*"
            ],
            "Resource": [
                "arn:aws:s3:::myapp/*",
                "arn:aws:s3:::aws-codedeploy-us-east-1/*",
                "arn:aws:s3:::aws-codedeploy-us-west-2/*",
                "arn:aws:s3:::aws-codedeploy-us-west-1/*"
            ]
        }
    ]
}

----------------------------------------------------------
  • Create a role myapp-codedeploy-service-role 
  • Choose AWSCodeDeploy from AWS Service Roles
  • Attach the policy AWSCodeDeployRole (AWS managed) to myapp-codedeploy-service-role
  • Edit trust relationship of myapp-codedeploy-service-role to read as 
----------------------------------------------------------
{
  "Version": "xxxx",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "codedeploy.us-west-2.amazonaws.com",
          "codedeploy.us-west-1.amazonaws.com",
          "codedeploy.us-east-1.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
----------------------------------------------------------
  • Create a role myapp-codedeploy-ec2-role
  • Choose Amazon EC2 from AWS Service Roles
  • Attach the policy myapp-codedeploy-ec2-policy (created above) to myapp-codedeploy-ec2-role
  • Edit trust relationship of myapp-codedeploy-ec2-role to read as 
----------------------------------------------------------
{
  "Version": "xxxx",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
----------------------------------------------------------

Create an EC2 instance
  • Spin up an EC2 instance with the desired instance type.
  • In the "Configure Instance Details" section, choose the proper VPC and subnet, and choose myapp-codedeploy-ec2-role as the IAM role.
  • Note: the CodeDeploy wizard creates the EC2 instance in the default VPC and fails if no default VPC exists. That is the reason for spinning up the EC2 instance manually.
  • Create tags on the EC2 instance. Ex: Tag name: project Tag value: myapp
  • Note: tags are used by CodeDeploy to discover instances.
Deploy the CodeDeploy agent on the EC2 instance
    • Log in to the EC2 instance
    • Run the following commands
      • sudo yum update
      • sudo yum install ruby
      • sudo yum install wget
      • cd /home/ec2-user
      • wget https://bucket-name.s3.amazonaws.com/latest/install (the bucket is region-specific, e.g., aws-codedeploy-us-west-2)
      • chmod +x ./install
      • sudo ./install auto
      • sudo service codedeploy-agent status
    Create Codedeploy config
    • Create new application 
      • Application name: myapp
      • Deployment group name: myapp-deploy-stage
      • Tags: Amazon EC2: Key: project Value: myapp (The no. of instances discovered will be displayed)
      • Choose Service Role myapp-codedeploy-service-role
      • Leave rest of them as default
      • Create application
    • In the deployment group, select the deployment myapp-deploy-stage
      • Actions: Deploy new revision
      • Create New Deployment
      • Application: myapp
      • Deployment Group: myapp-deploy-stage
      • Revision Type: My application is stored in Amazon S3
      • Revision Location: 
    • Go to the S3 console, select myapp-poc.zip, and copy the complete https url and the ETag
      • Form the revision URL as in the following example
      • https://s3-us-west-2.amazonaws.com/myapp/myapp-poc.zip?etag=962c02cb729b2f36745acbf4102129e1
    • Paste the above URL with ETag in the Revision Location field
    • Deploy
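
The same deployment can also be kicked off from the CLI (a sketch; the eTag value comes from the S3 console as above):

aws deploy create-deployment --application-name myapp --deployment-group-name myapp-deploy-stage --s3-location bucket=myapp,key=myapp-poc.zip,bundleType=zip,eTag=962c02cb729b2f36745acbf4102129e1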

Tuesday, March 22, 2016

Enable forensic log in Apache within AWS Beanstalk

1) Change (or uncomment) the line in /etc/httpd/conf/httpd.conf
#LoadModule log_forensic_module modules/mod_log_forensic.so
to
LoadModule log_forensic_module modules/mod_log_forensic.so

2) Update /etc/httpd/conf.d/elasticbeanstalk.conf to include the line
ForensicLog /var/log/httpd/forensic_log

3) Update /etc/httpd/conf.d/elasticbeanstalk.conf to add %{forensic-id}n at the end
LogFormat "%h (%{X-Forwarded-For}i) %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %{forensic-id}n"
This step is optional and helps correlate forensic ids with the access log.

4) Stop Apache, check that no httpd process remains using ps -ef | grep http, and start Apache
/usr/sbin/apachectl stop
ps -ef | grep http (check for no httpd process)
/usr/sbin/apachectl start
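
To confirm the module loaded and entries are being written (paths as configured above):

/usr/sbin/apachectl -M | grep forensic
tail -f /var/log/httpd/forensic_log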

Wednesday, March 9, 2016

CodeCommit

1) Create a userid and grant the AWSCodeCommitFullAccess policy for the POC purpose
2) Create a repository named jrepo in AWS CodeCommit
3) Copy the https url (example: https://git-codecommit.us-east-1.amazonaws.com/v1/repos/jrepo)
4) Setup AWS CLI
5) Install AWS Tools from https://aws.amazon.com/powershell/


6) Run the Windows command utility as administrator
7) cd %PROGRAMFILES(X86)%\AWS Tools\CodeCommit
8) git-credential-AWSSV4.exe -p jcodecommit
    Note: jcodecommit is a profile name stored in the AWS config or credentials file (under the user's home .aws dir)
    Choose Yes to generate the signature
9) Run git config --global --edit and you should see a similar block
[credential]
helper = !'C:\\Users\\j\\AppData\\Roaming\\GitCredStore\\git-credential-AWSSV4.exe' --profile=jcodecommit
UseHttpPath = true

10) Create a local directory named codecommitrepos
11) cd c:\codecommitrepos
12) git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/jrepo local-jrepo
13) git config --local user.name "developer1"
14) git config --local user.email developer1@email.com

15) cd c:\codecommitrepos\local-jrepo
16) Create files index.html and index2.html
17) git add index.html
18) git commit -m "Added index.html"
19) git add index2.html
20) git commit -m "Added index2.html"

21) git push -u origin master

Voila! The files are now pushed to AWS CodeCommit.


Thursday, July 2, 2015

ELB CLI Commands

1. Create an internal load balancer
aws elb create-load-balancer --load-balancer-name myelbname --listeners Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=9080 --subnets mysubnet1-in-useast1a mysubnet2-in-useast1c --scheme internal --tags Key=environment,Value=stage Key=project,Value=myproject Key=app,Value=myapp Key=product,Value=elb

2. Enable application-generated cookie stickiness
aws elb create-app-cookie-stickiness-policy --load-balancer-name myelbname --policy-name myapp-cookie-policy --cookie-name JSESSIONIDX
aws elb set-load-balancer-policies-of-listener --load-balancer-name myelbname --load-balancer-port 80 --policy-names myapp-cookie-policy

3. Update internal load balancer attributes (combined: CrossZoneLoadBalancing, AccessLog, ConnectionSettings, ConnectionDraining)
aws elb modify-load-balancer-attributes --load-balancer-name myelbname --load-balancer-attributes "{\"CrossZoneLoadBalancing\":{\"Enabled\":true},\"AccessLog\":{\"Enabled\":true,\"S3BucketName\":\"myelblogs\",\"S3BucketPrefix\":\"myapp\",\"EmitInterval\":5},\"ConnectionSettings\":{\"IdleTimeout\":60},\"ConnectionDraining\":{\"Enabled\":true,\"Timeout\":300}}"

# Individual commands
# Update load balancer attributes - CrossZoneLoadBalancing
aws elb modify-load-balancer-attributes --load-balancer-name myelbname --load-balancer-attributes "{\"CrossZoneLoadBalancing\":{\"Enabled\":true}}"

# Update internal load balancer attributes - AccessLog
aws elb modify-load-balancer-attributes --load-balancer-name myelbname --load-balancer-attributes "{\"AccessLog\":{\"Enabled\":true,\"S3BucketName\":\"myelblogs\",\"S3BucketPrefix\":\"myapp\",\"EmitInterval\":5}}"

# Update internal load balancer attributes - ConnectionSettings
aws elb modify-load-balancer-attributes --load-balancer-name myelbname --load-balancer-attributes "{\"ConnectionSettings\":{\"IdleTimeout\":60}}"

# Update internal load balancer attributes - ConnectionDraining
aws elb modify-load-balancer-attributes --load-balancer-name myelbname --load-balancer-attributes "{\"ConnectionDraining\":{\"Enabled\":true,\"Timeout\":300}}"
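
To verify the attribute changes took effect:

aws elb describe-load-balancer-attributes --load-balancer-name myelbname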