Production Ready Deployment of AWS Resources

Deploying server infrastructure on any cloud with minimal repeated resources is one of the most important tasks for a DevOps engineer.

For deploying production servers on AWS, below is an infrastructure design well suited to low cost and smooth, reliable performance.

The design was made with security foremost in mind: no security group or resource is exposed to the world apart from TCP port 443 (HTTPS) and UDP port 1194 (OpenVPN).

Resources used on AWS, with descriptions:

Two reserved M4 instances in each region (Oregon and Mumbai).

The instances are in an Auto Scaling group, which scales out automatically once CPU or memory usage crosses the configured threshold.
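A minimal sketch of such an Auto Scaling setup with the AWS CLI (all names and IDs here, such as `app-asg` and the subnet IDs, are assumptions, and running it requires AWS credentials). CPU can use a predefined target-tracking metric; memory would need a custom metric published by the CloudWatch agent:

```shell
# Create the Auto Scaling group from an assumed launch template "app-lt".
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name app-asg \
  --launch-template LaunchTemplateName=app-lt \
  --min-size 2 --max-size 4 \
  --vpc-zone-identifier "subnet-aaaa1111,subnet-bbbb2222"

# Target-tracking policy: keep average CPU around 70%, scaling out above it.
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name app-asg \
  --policy-name cpu-target-70 \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{
    "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
    "TargetValue": 70.0
  }'
```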

Security Groups

Each service has its own security group, named after the service (for EC2 the SG is named “app-internal”, for the internal load balancer it is “app-internal-load-balancer”, and so on).

Each SG opens only the ports needed to reach specific AWS resources such as RDS, ElastiCache and Elasticsearch. None of these ports is open publicly; they are accessible only from the local subnets within the VPCs.
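As a sketch of this pattern (the group names, IDs and CIDR range are assumptions), an SG can be created per service and its ingress limited to the VPC's private range rather than `0.0.0.0/0`:

```shell
# Create a per-service security group inside the VPC.
aws ec2 create-security-group \
  --group-name app-internal \
  --description "Internal app tier" \
  --vpc-id vpc-0abc1234

# Allow Redis (6379) only from the VPC's private CIDR, never publicly.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0def5678 \
  --protocol tcp --port 6379 \
  --cidr 10.0.0.0/16
```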

Internal Classic Load Balancer

A Classic Load Balancer was used internally to route requests from the main ALB to the EC2 servers.


I have two VPCs in the Oregon region:

   a) VPC (A) for the EC2 instances, ElastiCache and Elasticsearch.

   b) a VPN VPC for the VPN EC2 server (t2.medium).

There is also one VPC in the Mumbai region:

     a) VPC (B) for the EC2 instances.

Application Load Balancer

The ALB is internet-facing, with a security group in which only TCP port 443 is open.
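A sketch of the ALB with a single HTTPS listener (the names, subnet IDs and ARN placeholders are assumptions; an ACM certificate must already exist):

```shell
# Internet-facing ALB in two public subnets, guarded by the 443-only SG.
aws elbv2 create-load-balancer \
  --name app-alb \
  --scheme internet-facing \
  --subnets subnet-aaaa1111 subnet-bbbb2222 \
  --security-groups sg-0alb1234

# Single HTTPS listener on 443 forwarding to the app target group.
aws elbv2 create-listener \
  --load-balancer-arn <alb-arn> \
  --protocol HTTPS --port 443 \
  --certificates CertificateArn=<acm-cert-arn> \
  --default-actions Type=forward,TargetGroupArn=<target-group-arn>
```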

ElastiCache

A Redis server used for caching lies in Oregon, within the app-internal SG and VPC (A).

Elasticsearch

Elasticsearch also lies in Oregon, within the app-internal SG and VPC (A).


A CloudWatch dashboard was created to monitor all the running AWS services (EC2, RDS, ALB and so on).


Alarms are active to notify SNS subscribers whenever an incident occurs.
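A sketch of wiring a CloudWatch alarm to an SNS topic (the topic name, email address and thresholds are assumptions):

```shell
# Create the alert topic and subscribe an operator email to it.
TOPIC_ARN=$(aws sns create-topic --name ops-alerts --query TopicArn --output text)
aws sns subscribe --topic-arn "$TOPIC_ARN" \
  --protocol email --notification-endpoint ops@example.com

# Alarm: average CPU of the Auto Scaling group above 80% for two 5-minute periods.
aws cloudwatch put-metric-alarm \
  --alarm-name ec2-cpu-high \
  --namespace AWS/EC2 --metric-name CPUUtilization \
  --dimensions Name=AutoScalingGroupName,Value=app-asg \
  --statistic Average --period 300 \
  --evaluation-periods 2 --threshold 80 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions "$TOPIC_ARN"
```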


The SNS service also notifies third-party apps for incident monitoring and maintenance-status updates.


Rather than allowing direct SSH access to any server or resource from outside (publicly), I closed the SSH port on every resource for public access; it is open only to the internal subnets and SGs. Access instead goes through an SSL/TLS-encrypted VPN tunnel, which uses its own port to reach the VPN server. Once connected to the VPN, you can reach all of the resources.

The only tweak is that there may be a tiny (perhaps 0.1%) chance of breaking the TLS security to take over the VPN server, which is next to impossible. Even if an attacker somehow succeeded, reaching the resources from the VPN server still requires the private PEM file along with individual usernames, and in the design above all EC2 instances are hardened per CIS guidelines (server hardening). So I assume it is very difficult to invade.
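This SSH lockdown can be sketched with two SG rules (the group ID and the VPN client CIDR are assumptions): revoke any public rule and admit port 22 only from the VPN subnet.

```shell
# Remove a public SSH rule if one exists (idempotent cleanup).
aws ec2 revoke-security-group-ingress \
  --group-id sg-0app1234 \
  --protocol tcp --port 22 \
  --cidr 0.0.0.0/0

# Allow SSH only from the assumed VPN client subnet.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0app1234 \
  --protocol tcp --port 22 \
  --cidr 10.8.0.0/24
```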


RDS data are also encrypted, as encryption was enabled on each individual database. The databases have no public access and are reachable only through EC2 for CRUD operations.
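A sketch of such an encrypted, non-public RDS instance (the identifier, engine, sizes and SG ID are assumptions; the password is a placeholder, not a value to inline in real scripts):

```shell
# Encrypted at rest (--storage-encrypted) and unreachable from the internet
# (--no-publicly-accessible); ingress is governed by the DB security group.
aws rds create-db-instance \
  --db-instance-identifier app-db \
  --db-instance-class db.m4.large \
  --engine mysql \
  --allocated-storage 100 \
  --master-username admin \
  --master-user-password '<secret>' \
  --storage-encrypted \
  --no-publicly-accessible \
  --vpc-security-group-ids sg-0db1234
```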

VPC Peering

I have used VPC peering here so that ElastiCache and Elasticsearch are accessible across regions on AWS.

Apart from that, VPC peering was also set up between the VPN VPC and the two other VPCs (Oregon and Mumbai) to make the services accessible to the VPN server.
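A sketch of a cross-region peering from VPC (A) in Oregon (us-west-2) to VPC (B) in Mumbai (ap-south-1); the VPC, route table and peering IDs and the CIDR are assumptions. Note that routes must be added on both sides, and the peered SG rules must admit the remote CIDR:

```shell
# Request the peering from the Oregon side.
aws ec2 create-vpc-peering-connection \
  --region us-west-2 \
  --vpc-id vpc-0aaa1111 \
  --peer-vpc-id vpc-0bbb2222 \
  --peer-region ap-south-1

# Accept it from the Mumbai side.
aws ec2 accept-vpc-peering-connection \
  --region ap-south-1 \
  --vpc-peering-connection-id pcx-0123abcd

# Route traffic for the Mumbai CIDR through the peering (repeat mirrored in Mumbai).
aws ec2 create-route \
  --region us-west-2 \
  --route-table-id rtb-0aaa1111 \
  --destination-cidr-block 10.1.0.0/16 \
  --vpc-peering-connection-id pcx-0123abcd
```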


WAF is also enabled and attached to the Application Load Balancer (ALB) to protect against cross-site scripting, database-query (SQL injection) attacks and region-specific attacks.
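As a sketch with today's WAFv2 API (the original design may have used WAF Classic; the ACL name and ARNs are assumptions), a regional web ACL carrying AWS's managed SQL-injection rule set can be created and associated with the ALB:

```shell
# Regional web ACL with the AWS-managed SQLi rule group.
aws wafv2 create-web-acl \
  --name app-web-acl --scope REGIONAL --region us-west-2 \
  --default-action Allow={} \
  --visibility-config SampledRequestsEnabled=true,CloudWatchMetricsEnabled=true,MetricName=appWebAcl \
  --rules '[{
    "Name": "sqli",
    "Priority": 0,
    "Statement": {"ManagedRuleGroupStatement": {
      "VendorName": "AWS", "Name": "AWSManagedRulesSQLiRuleSet"}},
    "OverrideAction": {"None": {}},
    "VisibilityConfig": {"SampledRequestsEnabled": true,
      "CloudWatchMetricsEnabled": true, "MetricName": "sqli"}
  }]'

# Attach the web ACL to the ALB.
aws wafv2 associate-web-acl \
  --web-acl-arn <web-acl-arn> \
  --resource-arn <alb-arn> \
  --region us-west-2
```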

GuardDuty

GuardDuty is also enabled in each region to continuously monitor the resources there.

Health Check (Route 53)

Route 53 health checks with alarms continuously monitor our domain on ports 443 and 80.

The infrastructure design above held up well in every circumstance, whether judged on performance, health, security, alerting or resource optimisation.
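A sketch of such a health check (the domain and thresholds are assumptions); its status metric can then drive a CloudWatch alarm like the ones above:

```shell
# HTTPS health check: probe the domain on 443 every 30 seconds,
# marking it unhealthy after 3 consecutive failures.
aws route53 create-health-check \
  --caller-reference "app-hc-$(date +%s)" \
  --health-check-config '{
    "Type": "HTTPS",
    "FullyQualifiedDomainName": "example.com",
    "Port": 443,
    "ResourcePath": "/",
    "RequestInterval": 30,
    "FailureThreshold": 3
  }'
```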
