Friday, June 26, 2020

AWS Hands-On: Break a Monolith Application into Microservices (Module 4)

Module 4: Deploy Microservices
Approach.
1. Switch the Traffic: This is the starting configuration. The monolithic node.js app running in a container on Amazon ECS.
2. Start Microservices: Using the three container images you built and pushed to Amazon ECR in the previous module, you will start up three microservices on your existing Amazon ECS cluster.
3. Configure the Target Groups: Like in Module 2, you will add a target group for each service and update the ALB Rules to connect the new microservices.
4. Shut Down the Monolith: By changing one rule in the ALB, you will start routing traffic to the running microservices. After traffic reroute has been verified, shut down the monolith.

Steps.
Creation of Task Definitions for each service. The Configure via JSON feature of ECS was used to create the task definitions for the three services.
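For reference, here is a minimal sketch of what one of those task definitions could look like when pasted into the Configure via JSON box (the names, image URI, port, and memory below are placeholders, not the exact values from the tutorial):

  {
    "family": "posts",
    "containerDefinitions": [
      {
        "name": "posts",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/posts:latest",
        "memoryReservation": 256,
        "essential": true,
        "portMappings": [ { "containerPort": 3000 } ]
      }
    ]
  }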

Creation of Target Groups. This time, the AWS CLI was used to create the three corresponding target groups.
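A hedged example of that call, repeated for each of users, threads, and posts (the VPC ID here is a placeholder):

  aws elbv2 create-target-group \
    --name users \
    --protocol HTTP \
    --port 80 \
    --vpc-id vpc-0123456789abcdef0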

Configuring the Listener rules 
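For reference, one path-based listener rule can be created from the CLI roughly like this (the listener ARN, target group ARN, path pattern, and priority below are all placeholders):

  aws elbv2 create-rule \
    --listener-arn <listener-arn> \
    --priority 10 \
    --conditions Field=path-pattern,Values='/api/users*' \
    --actions Type=forward,TargetGroupArn=<users-target-group-arn>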

Deploying the microservices
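Creating one of the three ECS services from the CLI would look roughly like this (the cluster name, container port, and target group ARN are placeholders):

  aws ecs create-service \
    --cluster BreakTheMonolith-Demo \
    --service-name users \
    --task-definition users \
    --desired-count 1 \
    --load-balancers targetGroupArn=<users-target-group-arn>,containerName=users,containerPort=3000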

Traffic switching
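The switch itself is just repointing the ALB listener rules from the monolith's target group to the microservice target groups; once traffic flows to the new services, the monolith service can be scaled down. A sketch of the scale-down step (the cluster and service names are placeholders):

  aws ecs update-service \
    --cluster BreakTheMonolith-Demo \
    --service monolith \
    --desired-count 0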

Validating that services are working
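Validation amounts to hitting the ALB's DNS name on each service path and checking that each microservice responds (the DNS name and paths below are placeholders):

  curl http://<alb-dns-name>/api/users
  curl http://<alb-dns-name>/api/threads
  curl http://<alb-dns-name>/api/posts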





Wednesday, June 24, 2020

AWS Hands-On: Break a Monolith Application into Microservices (Module 3)

Module 3: Break the Monolith
The purpose of Module 3 is to demonstrate how to break a monolith service into individual services that will run in separate containers. The image creation and deployment steps are the same as in the previous modules. Since there are three services this time, we will explore faster ways to do it, which makes this the shortest module of all.

Step 1. Provision the ECR repositories.
We need to prepare container image repositories for the users, posts, and threads services.
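Creating them from the CLI is one call per service, along these lines (the region is a placeholder):

  for svc in users posts threads; do
    aws ecr create-repository --repository-name $svc --region us-east-1
  done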


Step 2. Authenticate Docker with AWS (only if needed).
Step 3. Build and Push Images for each Service

Run 'docker build' for each of the three services, then tag and push the resulting images to their corresponding ECR repositories.
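Per service, the commands boil down to something like the following, run from each service's directory (the account ID, region, and tag are placeholders):

  docker build -t users .
  docker tag users:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/users:latest
  docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/users:latest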








AWS Hands-On: Break a Monolith Application into Microservices (Module 2)

Module 2: Deploy the Monolith
After containerizing the monolith application, we now deploy it to AWS. It will run in a cluster with two replicas behind an Application Load Balancer for a minimum of high availability.

Key services used in this hands-on: Amazon Elastic Container Service, Amazon Elastic Container Registry, AWS CloudFormation, and Elastic Load Balancing. Most of the procedures were done in the AWS console, although the AWS CloudFormation part can also be done through the AWS CLI.

Step 1. Launch an ECS Cluster using AWS CloudFormation.
This is where we run the infrastructure code in the project that we cloned from Module 1.
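The launch is a single CloudFormation call against the template in the cloned repo; the stack name and template path below are assumptions, so adjust them to match the project:

  aws cloudformation deploy \
    --template-file infrastructure/ecs.yml \
    --region us-east-1 \
    --stack-name BreakTheMonolith-Demo \
    --capabilities CAPABILITY_NAMED_IAM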

The resulting infrastructure will have the following resources:
  • A VPC with 2 Public subnets (1 for each container), an Internet Gateway, and Route Tables for public access
  • An ECS Cluster with a defined Security Group
  • An Application Load Balancer
  • An IAM Role for the ECS service
Step 2. Verification steps to check that the cluster is running.
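Besides the console, a quick CLI check works too (the cluster name is a placeholder):

  aws ecs describe-clusters --clusters BreakTheMonolith-Demo
  aws ecs list-container-instances --cluster BreakTheMonolith-Demo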

Step 3. Writing the Task Definition.
Task definitions specify how Amazon ECS deploys the application containers across the cluster. This is where the application image repository created in Module 1 is referenced.

Step 4. Configure the Application Load Balancer: Target Group
The Application Load Balancer will route network traffic to the container instances through a Target Group that refers to the VPC that was created in Step 1.

Step 5. Configure the Application Load Balancer: Listener
The ALB Listener checks for incoming connection requests to the ALB.

Step 6: Deploy the Monolith as a Service

Step 7: Test the Monolith

Tuesday, June 23, 2020

AWS Hands-On: Break a Monolith Application into Microservices (Module 1)

Module 1: Containerize the Monolith
In a real-world application, the assumption is that we've already containerized our 'monolith' application and that a corresponding Dockerfile has already been created for it. Module 1 is a demonstration of how we can store a Docker image of that monolith application in Amazon Elastic Container Registry (ECR). This is just a summary of what goes on; the steps and definitions are well documented in the AWS hands-on project link above.

Prerequisites:
1. AWS account, Git client, a text editor. These are pretty straightforward already.
2. Docker. I am currently on Fedora 32, but there is no official Docker-ce release for this version yet as of writing, so I used this guide instead. Extra step: I added my user to the 'docker' group so that I don't need to use 'sudo' for every docker command that I run.
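For reference, the usual way to do that:

  sudo usermod -aG docker $USER
  # log out and back in (or run 'newgrp docker') for the group change to take effect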
After installing Docker, I proceeded with installing AWS CLI. I chose to install version 2 as recommended.
There will be a Git project that needs to be downloaded/cloned. It will contain infrastructure scripts and the Dockerfile of the test monolith application.
Next is the provisioning of a container repository in ECR. The resulting docker image of the monolith app will be stored here later.
Finally, the building and pushing of the Dockerized monolith app. 
Initially, I encountered an error while attempting to authenticate Docker to ECR, so I proceeded with building and tagging the image first.
For the ECR Docker login part, there is a new process described here. After successfully hooking up Docker with ECR, I was then able to push the image.
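That newer flow (AWS CLI v2) pipes a token from get-login-password into docker login; the account ID and region below are placeholders:

  aws ecr get-login-password --region us-east-1 \
    | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com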

Sunday, June 7, 2020

AWS Whitepaper Series: Practicing CI and CD on AWS

Title: Practicing Continuous Integration and Continuous Delivery on AWS: Accelerating Software Delivery with DevOps - June 2017

    Emphasis on the Summary of Best Practices

    • Treat your infrastructure as Code
      • Use version control for your infrastructure code.
      • Make use of bug tracking/ticketing systems.
      • Have peers review changes before applying them.
      • Establish infrastructure code patterns/designs.
      • Test infrastructure changes like code changes.
    • Put developers into integrated teams of no more than 12 self-sustaining members.
    • Have all developers commit code to the main trunk frequently, with no long-running feature branches.
    • Consistently adopt a build system such as Maven or Gradle across your organization and standardize builds.
    • Have developers build unit tests toward 100% coverage of the code base.
    • Ensure that unit tests are 70% of the overall testing in duration, number, and scope.
    • Ensure that unit tests are up-to-date and not neglected. Unit test failures should be fixed, not bypassed.
    • Treat your continuous delivery configuration as code.
    • Establish role-based security controls (that is, who can do what and when).
      • Monitor/track every resource possible.
      • Alert on services, availability, and response times.
      • Capture, learn, and improve.
      • Share access with everyone on the team.
      • Plan metrics and monitoring into the lifecycle.
    • Keep and track standard metrics
      • Number of builds.
      • Number of deployments.
      • Average time for changes to reach production.
      • Average time from first pipeline stage to each stage.
      • Number of changes reaching production.
      • Average build time.
    • Use multiple distinct pipelines for each branch and team.
    Don’ts
    • Have long-running branches with large complicated merges.
    • Have manual tests.
    • Have manual approval processes, gates, code reviews, and security reviews.