Optimizing Solutions on AWS

The availability of a system is typically expressed as a percentage of uptime in a given year or as a number of nines. Below is a list of availability percentages, the corresponding downtime per year, and the notation in nines.

| Availability (%) | Downtime (per year) |
| --- | --- |
| 90% (“one nine”) | 36.53 days |
| 99% (“two nines”) | 3.65 days |
| 99.9% (“three nines”) | 8.77 hours |
| 99.95% (“three and a half nines”) | 4.38 hours |
| 99.99% (“four nines”) | 52.60 minutes |
| 99.995% (“four and a half nines”) | 26.30 minutes |
| 99.999% (“five nines”) | 5.26 minutes |

Availability in nines notation
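These downtime figures follow directly from the percentages. As a quick sanity check, here is a minimal Python sketch that reproduces the table:

```python
# Convert an availability percentage into the allowed downtime per year.
MINUTES_PER_YEAR = 365.25 * 24 * 60  # 525,960 minutes, accounting for leap years

for availability in (90, 99, 99.9, 99.95, 99.99, 99.995, 99.999):
    downtime_minutes = (1 - availability / 100) * MINUTES_PER_YEAR
    print(f"{availability}% -> {downtime_minutes:,.2f} minutes of downtime per year")
```

For example, five nines allows (1 − 0.99999) × 525,960 ≈ 5.26 minutes of downtime per year.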

To increase availability, you need redundancy. This typically means more infrastructure: more data centers, more servers, more databases, and more replication of data. You can imagine that adding more of this infrastructure means a higher cost. Customers want the application to always be available, but you need to draw a line where adding redundancy is no longer viable in terms of revenue.

Challenges of High Availability

Create a Process for Replication: The first challenge is that you need to create a process to replicate the configuration files, software patches, and application itself across instances. The best method is to automate where you can.

Address Customer Redirection: The second challenge is how to let the clients, the computers sending requests to your server, know about the different servers. One option is to use a load balancer, which takes care of health checks and distributing the load across the servers.

Types of High Availability

  • Active-Passive: With an active-passive system, only one of the two instances is available at a time. One advantage of this method is that for stateful applications where data about the client’s session is stored on the server, there won’t be any issues as the customers are always sent to the same server where their session is stored.
  • Active-Active: Scalability is where an active-passive system falls short and where an active-active system shines. By having both servers available, the second server can take some of the load for the application, allowing the entire system to handle more load. However, if the application is stateful, there would be an issue if the customer’s session isn’t available on both servers. Stateless applications work better with active-active systems.

AWS Elastic Load Balancing (ELB)

For load balancing, AWS provides a managed service called Elastic Load Balancing (ELB). The ELB service provides a major advantage over rolling your own load balancing solution: you don’t need to manage or operate it. It can distribute incoming application traffic across EC2 instances as well as containers, IP addresses, and AWS Lambda functions.

Features of ELB

  • The fact that ELB can load balance to IP addresses means that it can work in a hybrid mode as well, where it also load balances to on-premises servers.
  • ELB is highly available. The only thing you need to ensure is that the load balancer is deployed across multiple Availability Zones.
  • In terms of scalability, ELB automatically scales to meet the demand of the incoming traffic. It handles the incoming traffic and sends it to your backend application.

Health Checks

Taking the time to define an appropriate health check is critical. Verifying that an application’s port is open doesn’t prove the application is working, and simply calling the application’s home page isn’t necessarily the right approach either.

After determining the availability of a new EC2 instance, the load balancer starts sending traffic to it. If ELB determines that an EC2 instance is no longer working, it stops sending traffic to it and lets EC2 Auto Scaling know. EC2 Auto Scaling’s responsibility is to remove it from the group and replace it with a new EC2 instance. Traffic is only sent to the new instance once it passes the health check.

One way to do that would be to create a callable monitoring function, like “/monitor”, that will make a call to the database to ensure it can connect and get data, and make a call to S3. Then, you point the health check on the load balancer to the “/monitor” page.
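As an illustration, here is a minimal sketch of such a “/monitor” endpoint, assuming a Flask application; the check_database helper and the bucket name are hypothetical placeholders for your own database and S3 checks:

```python
import boto3
from flask import Flask

app = Flask(__name__)
s3 = boto3.client("s3")

def check_database():
    """Hypothetical helper: connect to the database and run a trivial query."""
    ...  # e.g. run SELECT 1 against your application's database

@app.route("/monitor")
def monitor():
    try:
        check_database()                        # can we reach the database?
        s3.head_bucket(Bucket="my-app-bucket")  # placeholder bucket name
    except Exception:
        # A non-2xx response makes the load balancer mark this instance unhealthy.
        return "unhealthy", 503
    return "ok", 200
```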

In the case of a scale down action that EC2 Auto Scaling needs to take due to a scaling policy, it lets ELB know that EC2 instances will be terminated. ELB can prevent EC2 Auto Scaling from terminating the EC2 instance until all connections to that instance end, while preventing any new connections. That feature is called connection draining.
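In the ELB API, connection draining is controlled by the deregistration delay attribute on a target group. A minimal boto3 sketch, with a placeholder target group ARN:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Give in-flight requests up to 120 seconds to complete before a
# deregistering instance is fully removed from the target group.
elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web-tg/abc123",  # placeholder
    Attributes=[{"Key": "deregistration_delay.timeout_seconds", "Value": "120"}],
)
```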

ELB Components

The ELB service is made up of three main components.

  • Listeners: The client connects to the listener. This is often referred to as client-side. To define a listener, a port must be provided as well as the protocol, depending on the load balancer type. There can be many listeners for a single load balancer.
  • Target groups: The backend servers, or server side, are defined in one or more target groups. This is where you define the type of backend you want to direct traffic to, such as EC2 instances, AWS Lambda functions, or IP addresses. A health check also needs to be defined for each target group.
  • Rules: To associate a target group with a listener, a rule must be used. Rules are made up of conditions, such as the source IP address of the client, and actions that decide which target group to send the traffic to (see the sketch after this list).
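To make the three components concrete, here is a minimal boto3 sketch that creates a target group, a load balancer with a listener, and a default forwarding rule; the VPC, subnet, and security group IDs are placeholders:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Target group: the server side, including its health check.
tg = elbv2.create_target_group(
    Name="web-tg",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",  # placeholder VPC ID
    HealthCheckPath="/monitor",
)

# Load balancer spanning two Availability Zones (placeholder subnet IDs).
lb = elbv2.create_load_balancer(
    Name="web-alb",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],
    SecurityGroups=["sg-0123456789abcdef0"],  # placeholder security group
)

# Listener: the client side, with a default rule forwarding to the target group.
elbv2.create_listener(
    LoadBalancerArn=lb["LoadBalancers"][0]["LoadBalancerArn"],
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": tg["TargetGroups"][0]["TargetGroupArn"],
    }],
)
```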

ELB Types

There are two types of ELB: the Application Load Balancer (ALB) and the Network Load Balancer (NLB).

Selecting between the ELB types is done by determining which features are required for your application. Below is a list of the major features.

| Feature | Application Load Balancer | Network Load Balancer |
| --- | --- | --- |
| Protocols | HTTP, HTTPS | TCP, UDP, TLS |
| Connection draining (deregistration delay) | ✔ | |
| IP addresses as targets | ✔ | ✔ |
| Static IP and Elastic IP address | | ✔ |
| Preserve source IP address | | ✔ |
| Routing based on source IP address, path, host, HTTP headers, HTTP method, and query string | ✔ | |
| Redirects | ✔ | |
| Fixed response | ✔ | |
| User authentication | ✔ | |

Types of AWS Load Balancers

EC2 Auto Scaling

Availability and reachability are improved by adding one more server. However, the entire system can again become unavailable if there is a capacity issue. Let’s look at that load issue with both types of systems we discussed: active-passive and active-active.

Vertical Scaling

This means increasing the size of the server. With EC2 instances, you either select a larger instance size or a different instance type. This can only be done while the instance is in a stopped state. In this scenario, the following steps occur:

  1. Stop the passive instance. This doesn’t impact the application since it’s not taking any traffic.
  2. Change the instance size or type, then start the instance again.
  3. Shift the traffic to the passive instance, turning it active.
  4. The last step is to stop, resize, and start the previously active instance, as both instances should match.

When the number of requests decreases, the same operation needs to be performed in the opposite direction. Even though there aren’t that many steps involved, it’s actually a lot of manual work. Another disadvantage is that a server can only scale vertically up to a certain limit.
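As an illustration, steps 1 and 2 for the passive instance might look like this with boto3; the instance ID and target instance type are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"  # placeholder: the passive instance

# 1. Stop the passive instance (no traffic impact) and wait until it is stopped.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# 2. Change the instance type, then start the instance again.
ec2.modify_instance_attribute(InstanceId=instance_id,
                              InstanceType={"Value": "m5.xlarge"})  # placeholder type
ec2.start_instances(InstanceIds=[instance_id])
# Steps 3 and 4 (shifting traffic and resizing the other instance) work the same way.
```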

Horizontal Scaling

When there are too many requests, this system can be scaled horizontally by adding more servers. This requires the application to be stateless, not storing any client session on the server.

The Amazon EC2 Auto Scaling service can take care of that task by automatically creating and removing EC2 instances based on metrics from Amazon CloudWatch. You can see that there are many more advantages to using an active-active system than an active-passive one. Modifying your application to become stateless enables scalability.

Integrate ELB with EC2 Auto Scaling

The ELB service integrates seamlessly with EC2 Auto Scaling. As soon as a new EC2 instance is added to or removed from the EC2 Auto Scaling group, ELB is notified. However, before it can send traffic to a new EC2 instance, it needs to validate that the application running on that EC2 instance is available.

This validation is done via the health checks feature of ELB. Monitoring is an important part of load balancing, as the load balancer should route traffic only to healthy EC2 instances. That’s why ELB supports two types of health checks.

  • Establishing a connection to a backend EC2 instance using TCP, and marking the instance as available if that connection is successful.
  • Making an HTTP or HTTPS request to a webpage that you specify, and validating that an HTTP response code is returned.
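Both kinds of checks are configured on the target group. A minimal boto3 sketch that points an HTTP health check at the “/monitor” page from earlier; the ARN and threshold values are illustrative:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Tighten the HTTP health check on an existing target group.
elbv2.modify_target_group(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web-tg/abc123",  # placeholder
    HealthCheckProtocol="HTTP",
    HealthCheckPath="/monitor",       # the deep health check page from earlier
    HealthCheckIntervalSeconds=30,
    HealthyThresholdCount=3,          # consecutive successes before "healthy"
    UnhealthyThresholdCount=2,        # consecutive failures before "unhealthy"
    Matcher={"HttpCode": "200"},      # the HTTP response code to expect
)
```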

Configure EC2 Auto Scaling

The EC2 Auto Scaling service works to add or remove capacity to keep a steady and predictable performance at the lowest possible cost. By adjusting the capacity to exactly what your application uses, you only pay for what your application needs.

There are three main components to EC2 Auto Scaling.

  • Launch template or configuration: What resource should be automatically scaled?
  • EC2 Auto Scaling Group: Where should the resources be deployed?
  • Scaling policies: When should the resources be added or removed?

Launch Template

There are multiple parameters required to create EC2 instances: Amazon Machine Image (AMI) ID, instance type, security group, additional Amazon Elastic Block Store (EBS) volumes, and more. All this information is also required by EC2 Auto Scaling to create the EC2 instance on your behalf when there is a need to scale. This information is stored in a launch template. A launch template also supports versioning, which allows you to quickly roll back if there is an issue, or to specify a default version of the template.
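A minimal boto3 sketch of creating such a launch template; the AMI ID, key pair, and security group are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

ec2.create_launch_template(
    LaunchTemplateName="web-app-template",
    VersionDescription="initial version",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",            # placeholder AMI ID
        "InstanceType": "t3.micro",
        "KeyName": "my-key-pair",                      # placeholder key pair
        "SecurityGroupIds": ["sg-0123456789abcdef0"],  # placeholder security group
    },
)
```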

You can create a launch template one of three ways.

  • The fastest way to create a template is to use an existing EC2 instance, since all the settings are already defined (see the sketch after this list).
  • Another option is to create one from an already existing template or a previous version of a launch template.
  • The last option is to create a template from scratch. The following options will need to be defined: AMI ID, instance type, key pair, security group, storage, and resource tags.
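For the first option, the EC2 API can read an existing instance’s settings and reuse them as template data. A sketch, with a placeholder instance ID:

```python
import boto3

ec2 = boto3.client("ec2")

# Read the settings of a running instance...
data = ec2.get_launch_template_data(InstanceId="i-0123456789abcdef0")  # placeholder

# ...and reuse them as the data for a new launch template.
ec2.create_launch_template(
    LaunchTemplateName="from-existing-instance",
    LaunchTemplateData=data["LaunchTemplateData"],
)
```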

Scaling Groups

An EC2 Auto Scaling Group (ASG) enables you to define where EC2 Auto Scaling deploys your resources. This is where you specify the Amazon Virtual Private Cloud (VPC) and subnets the EC2 instance should be launched in.

EC2 Auto Scaling takes care of creating the EC2 instances across the subnets, so it’s important to select at least two subnets that are across different Availability Zones.

To specify how many instances EC2 Auto Scaling should launch, there are three capacity settings to configure for the group size.

  • Minimum: The minimum number of instances running in your ASG, even if the threshold for lowering the number of instances is reached.
  • Maximum: The maximum number of instances running in your ASG, even if the threshold for adding new instances is reached.
  • Desired capacity: The number of instances that should be in your ASG. This number must be between the minimum and maximum, inclusive. EC2 Auto Scaling automatically adds or removes instances to match the desired capacity.

Setting different values for minimum, maximum, and desired capacity lets the group adjust its capacity dynamically. However, if you prefer to use EC2 Auto Scaling for fleet management, you can configure all three settings to the same number, for example, four. EC2 Auto Scaling will then ensure that if an EC2 instance becomes unhealthy, it is replaced, so that four EC2 instances are always available. This ensures high availability for your applications.
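Putting this together, a minimal boto3 sketch of an Auto Scaling group that launches from the earlier template, deploys across two subnets, and registers instances with the load balancer’s target group; the names, subnet IDs, and ARN are placeholders:

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchTemplate={"LaunchTemplateName": "web-app-template",  # placeholder template
                    "Version": "$Default"},
    MinSize=2,
    MaxSize=6,
    DesiredCapacity=2,
    # Two subnets in different Availability Zones (placeholder IDs).
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",
    # Register new instances with the load balancer's target group and
    # use the ELB health check to decide when to replace instances.
    TargetGroupARNs=["arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web-tg/abc123"],
    HealthCheckType="ELB",
    HealthCheckGracePeriod=300,  # seconds to let an instance boot before checking
)
```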

Scaling Policies

There are three types of scaling policies: simple, step, and target tracking scaling.

  • A simple scaling policy allows you to use a CloudWatch alarm and specify what to do when it is triggered. This can be a number of EC2 instances to add or remove, or a specific number to set the desired capacity to. You can specify a percentage of the group instead of a fixed number of EC2 instances, which makes the group grow or shrink more quickly.
    Once this scaling policy is triggered, it waits a cooldown period before taking any other action. This is important as it takes time for the EC2 instances to start and the CloudWatch alarm may still be triggered while the EC2 instance is booting.
  • Step scaling policies respond to additional alarms even while a scaling activity or health check replacement is in progress. For example, you might add two more instances when CPU utilization reaches 85%, and four more when it reaches 95%.
  • A target tracking scaling policy is the one to use if your application scales based on average CPU utilization, average network utilization (in or out), or request count. All you need to provide is the target value to track, and the policy automatically creates the required CloudWatch alarms (see the sketch after this list).
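As a sketch, a target tracking policy that keeps the group’s average CPU utilization around 50% could be created like this; the group and policy names are placeholders:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Keep the group's average CPU utilization near 50%; the required
# CloudWatch alarms are created and managed automatically.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",  # placeholder group name from earlier
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```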
