Using StorReduce over AWS Direct Connect in a VPC with no internet access


This is a guide to setting up a performant, load-balancing, autoscaling deployment of NAT instances on AWS. This configuration is specifically useful for environments that utilize AWS Direct Connect and require communication with Amazon S3 via a private S3 VPC gateway endpoint. Internet connectivity is required only for the instances to communicate with Amazon CloudWatch; all other traffic is routed privately.

Using CloudFormation, following this guide results in an easy deployment of a NAT autoscaling group spread across subnets (and potentially multiple AZs), fronted by a Network Load Balancer.


Prerequisites

  • An EC2 Key Pair to associate with the NAT instances
  • A VPC with subnet(s) that have access to a private S3 endpoint
  • A security group to associate with the NAT instances - it should at minimum allow inbound TCP on ports 80 and 443 from BOTH the site utilizing the NAT instances AND the subnet(s) that the NATs will deploy in.
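To make the security-group prerequisite concrete, the sketch below builds the EC2 IpPermissions entries for inbound TCP 80 and 443 from both source networks. This is an illustrative helper, not part of the CloudFormation template; the function name, CIDRs, and group ID are placeholders.

```python
# Hypothetical helper (not part of the template): build EC2 IpPermissions
# entries allowing inbound TCP 80 and 443 from each given CIDR -
# one entry per port, one IP range per source network.
def nat_ingress_rules(source_cidrs):
    return [
        {
            "IpProtocol": "tcp",
            "FromPort": port,
            "ToPort": port,
            "IpRanges": [{"CidrIp": cidr} for cidr in source_cidrs],
        }
        for port in (80, 443)
    ]

# The result can be passed to EC2, e.g. with boto3 (group ID is a placeholder):
#   ec2.authorize_security_group_ingress(
#       GroupId="sg-XXXXXXXX",
#       IpPermissions=nat_ingress_rules(["10.0.0.0/16", "10.1.0.0/24"]))
```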

Component Summary

This configuration can be summarized by the following components:

  • A network load balancer with static IP addresses spread across {n} availability zones, each with access to an internal S3 endpoint
    • The network load balancer creates two target groups that allow TCP traffic on ports 80 and 443
    • The autoscaling group of NAT instances is registered in both target groups
  • An autoscaling group of NAT instances deployed in private subnets across {n} availability zones (same subnets as load balancer)
    • The autoscaling group must have internet access to allow data throughput metrics to be monitored via CloudWatch and enable scaling activity
    • Health is monitored via ELB health checks on ports 80 and 443
    • Uses the official NAT AMI running Amazon Linux (released on Sep 30, 2017):
      "AWSAMIRegion": {
        "ap-northeast-1": { "NATAMI": "ami-17944271" },
        "ap-northeast-2": { "NATAMI": "ami-61e03a0f" },
        "ap-south-1": { "NATAMI": "ami-6dc38202" },
        "ap-southeast-1": { "NATAMI": "ami-0597ea66" },
        "ap-southeast-2": { "NATAMI": "ami-2c37d74e" },
        "ca-central-1": { "NATAMI": "ami-f055ec94" },
        "eu-central-1": { "NATAMI": "ami-3cec5e53" },
        "eu-west-1": { "NATAMI": "ami-38d20741" },
        "eu-west-2": { "NATAMI": "ami-e07d6f84" },
        "sa-east-1": { "NATAMI": "ami-6a354a06" },
        "us-east-1": { "NATAMI": "ami-b419e7ce" },
        "us-east-2": { "NATAMI": "ami-8c002de9" },
        "us-west-1": { "NATAMI": "ami-36ebdb56" },
        "us-west-2": { "NATAMI": "ami-d08b70a8" }
      }
  • An EC2 role and instance profile permitting PUT operations to CloudWatch and S3 GET object access to the bucket storing the required initialization scripts (pulled at deployment time). The profile is associated with the NAT instances, and the role is also used at launch time to pull the relevant init scripts.
  • CloudWatch scaling alarms to control the autoscaling instances.
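To make the role's permissions concrete, here is a minimal sketch of the kind of IAM policy described above. The bucket name is a placeholder, and the exact statements in the template may differ.

```python
import json

# Sketch of a minimal policy matching the EC2 role described above:
# CloudWatch metric writes plus read access to the init-script bucket.
# "your-init-script-bucket" is a placeholder, not a real bucket name.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["cloudwatch:PutMetricData"],
            "Resource": "*",
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::your-init-script-bucket/*",
        },
    ],
}
print(json.dumps(policy, indent=2))
```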

Deployment Steps

  1. Ensure you have your prerequisites in place - your subnets, VPC endpoints, NAT gateways and EC2 Key Pair must already be created.
  2. (Optional) Copy the contents of bucket storreduce-production-cf-templates into your own private bucket and make any edits you require.
  3. Launch the CloudFormation Create Stack wizard.
  4. Choose the setting “Specify an Amazon S3 template URL” and enter the template URL, substituting storreduce-production-cf-templates with your deployment bucket if you performed step 2.
  5. Click “Next”
  6. Specify details for the template. Recommendations and details for parameters are given below:

    1. StackName - Choose a name 8 characters or fewer, as this field is used in several places to prepend Name tags, and longer names can cause character overflow errors.
    2. AutoscalingSecurityGroup - Security group for the NAT instances and load balancer. Recommended: inbound TCP 80 and 443 open to the StorReduce instances.
    3. CloudWatchScalingWindow - The measurement window over which CloudWatch evaluates the scaling metrics.
    4. DesiredCapacity - The desired (initial) number of NAT instances in the autoscaling group.
    5. KeyPairName - Your EC2 key pair to associate with the NAT instances.
    6. LBName - A name for the network load balancer.
    7. MaxSize - The maximum number of NAT instances in the autoscaling group.
    8. MinSize - The minimum number of NAT instances in the autoscaling group.
    9. NATInstanceNames - The Name tag applied to the NAT instances.
    10. NATInstanceType - Recommended to stay with m4.large.
    11. ScaleInAverageThreshold - Keep the default unless the instance type is not m4.large and/or CloudWatchScalingWindow has been changed.
    12. ScaleInConsecutivePeriod - Keep the default unless the instance type is not m4.large.
    13. ScaleOutAverageThreshold - Keep the default unless the instance type is not m4.large.
    14. ScaleOutConsecutivePeriod - Keep the default unless the instance type is not m4.large.
    15. ScalingActivityCoolOff - The cool-off period between scaling activities.
    16. SubnetsWithS3Endpoint - Choose subnets that have VPC endpoints to S3; specify multiple to gain resilience across multiple AZs.
    17. VPCID - The VPC to deploy in - this must match the subnets specified.
  7. Click “Next”

  8. Click “Next”

  9. Check the box “I acknowledge that AWS CloudFormation might create IAM resources.” and click “Create”

  10. Once deployment is complete, derive the IP address(es) of the load balancer: find its DNS name in the EC2 Console (EC2 → Load Balancers → select the load balancer → field “DNS Name”). Then, from an instance with network connectivity to the same subnet(s) as the load balancer, perform an nslookup on that DNS name.

  11. Configure DNS/hostfiles for the S3 endpoint (s3.amazonaws.com or s3-{your-region-name}.amazonaws.com, depending on your region) to point to the load balancer IP address(es).
    If you are targeting S3 in the us-east-1 region (aka “S3 Standard Region”) then use s3.amazonaws.com; if you are targeting S3 in another region then use s3-{your-region-name}.amazonaws.com.

    1. Add an entry to your hosts file mapping the load balancer IP address(es) to the S3 endpoint name.
    2. Alternatively, you can add a CNAME record in your DNS pointing the S3 endpoint name at the load balancer DNS name. Note this cannot be done using Route53.
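The endpoint-naming rule in step 11 can be sketched as a small helper. The function name and example IP are illustrative only; the hostname scheme reflects the regional S3 naming described above.

```python
# Illustrative helper: pick the S3 endpoint hostname to alias for a
# region, per step 11 (us-east-1 uses the global name; other regions
# use a region-suffixed name).
def s3_endpoint(region):
    if region == "us-east-1":
        return "s3.amazonaws.com"
    return "s3-{}.amazonaws.com".format(region)

# Example hosts-file line (the IP is a placeholder for an address
# resolved from the load balancer's DNS name):
lb_ip = "10.0.5.20"
print("{} {}".format(lb_ip, s3_endpoint("us-west-2")))
# -> 10.0.5.20 s3-us-west-2.amazonaws.com
```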


In this document we have explained how to configure the NAT proxy farm for an environment utilizing VPCs and AWS Direct Connect. If you have any questions, please contact StorReduce Support.