Mohamed Aït El Kamel

AWS — From 1 to 1000s of VPCs, easy peasy!

When it comes to expanding your AWS network, a few mistakes can lead to security vulnerabilities or routing issues. Below you will find best practices and example network architectures.

The golden rules

Rule #1: 1 VPC = 1 CIDR, and maintain a list of them. It can be a basic Excel file. The goal is to avoid overlapping IP ranges, so that you can connect all your VPCs in the future without routing conflicts that would send packets to the wrong destination. You can pick from the 3 private IP ranges (a quick overlap check is sketched after the list):

• Class A: 10.0.0.0/8 (10.0.0.0–10.255.255.255)

• Class B: 172.16.0.0/12 (172.16.0.0–172.31.255.255)

• Class C: 192.168.0.0/16 (192.168.0.0–192.168.255.255)
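
As a minimal sketch (assuming your inventory is just a dictionary of VPC names to CIDRs, e.g. exported from that Excel file), you can verify that no two ranges overlap with Python's standard ipaddress module:

```python
from itertools import combinations
from ipaddress import ip_network

# Hypothetical CIDR inventory: VPC name -> CIDR (e.g. exported from your Excel file)
cidr_inventory = {
    "prod-vpc": "10.0.0.0/16",
    "staging-vpc": "10.1.0.0/16",
    "shared-services-vpc": "192.168.217.0/24",
}

def find_overlaps(inventory):
    """Return every pair of VPCs whose CIDRs overlap."""
    networks = {name: ip_network(cidr) for name, cidr in inventory.items()}
    return [
        (a, b)
        for (a, net_a), (b, net_b) in combinations(networks.items(), 2)
        if net_a.overlaps(net_b)
    ]

if __name__ == "__main__":
    for vpc_a, vpc_b in find_overlaps(cidr_inventory):
        print(f"Overlap detected: {vpc_a} and {vpc_b}")
```

Running this before allocating a new CIDR is a cheap way to keep Rule #1 honest as the list grows.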

Rule #2: leverage Availability Zones (AZs), 2 for high availability or 3 for very high availability. As a reminder, regions are split into multiple AZs. An AZ is composed of one or more data centers. Each AZ has a different combination of utility providers: energy, water, internet… If one provider suffers an outage, only one AZ is affected. So spreading your workload across multiple AZs will improve your tolerance and resilience to an AZ failure.

To go further in AZ failure management: when you create a new account, you can open a support ticket to ask AWS for the same AZ name / AZ ID mapping. Example: eu-west-1a > euw1-az1. This way, if an AZ fails, it will be the same one across all your accounts.
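
To see which AZ ID hides behind each AZ name in a given account, here is a minimal sketch with boto3 (assuming credentials for that account are already configured):

```python
import boto3

# Assumes AWS credentials for the target account are already configured
ec2 = boto3.client("ec2", region_name="eu-west-1")

# Each account gets its own AZ name -> AZ ID mapping;
# compare the output across accounts to spot mismatches.
response = ec2.describe_availability_zones()
for az in response["AvailabilityZones"]:
    print(f'{az["ZoneName"]} -> {az["ZoneId"]}')
```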

Rule #3: industrialization. Leverage Infrastructure as Code (IaC) to avoid manual configuration mistakes and to ensure consistency. I recommend scripting the deployments, but not fully automating them, because the network is a high-risk component of your infrastructure.
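
One way to read this rule: keep the change itself scripted, but keep a human in the loop before it is applied. A minimal sketch of that pattern (plan_network_change and apply_network_change are hypothetical placeholders for your own IaC calls):

```python
def plan_network_change():
    # Hypothetical placeholder: compute and describe the change
    # (e.g. render your IaC templates and diff them against the live state).
    return "Add subnet 192.168.217.240/28 to shared-services-vpc"

def apply_network_change():
    # Hypothetical placeholder: run the scripted deployment
    # (e.g. call your IaC tool or the AWS APIs).
    print("Change applied.")

if __name__ == "__main__":
    print("Planned change:", plan_network_change())
    # Scripted, but not automated: a human confirms before anything is touched.
    if input("Apply this change? [y/N] ").strip().lower() == "y":
        apply_network_change()
    else:
        print("Aborted, nothing was changed.")
```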

Reference VPC architecture

This VPC spreads across 3 AZs and is split into 3 layers:

• Public: for NAT gateways, load balancers, publicly exposed EC2 instances

• Private with internet access through NAT gateways: for EC2 instances

• Private without internet access: for databases, EFS, network resources…

It has an S3 gateway endpoint to avoid NAT gateway data processing costs if you have a lot of traffic between EC2 instances and S3 buckets.
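
A minimal sketch of adding that S3 gateway endpoint with boto3 (the VPC ID, route table IDs and region are assumptions to replace with your own values):

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # assumed region

# Assumed IDs: your VPC and the route tables of the subnets that talk to S3
vpc_id = "vpc-0123456789abcdef0"
route_table_ids = ["rtb-0aaa1111bbbb2222c", "rtb-0ddd3333eeee4444f"]

# A gateway endpoint adds prefix-list routes to the given route tables,
# so S3 traffic bypasses the NAT gateways (and their data processing cost).
endpoint = ec2.create_vpc_endpoint(
    VpcId=vpc_id,
    VpcEndpointType="Gateway",
    ServiceName="com.amazonaws.eu-west-1.s3",
    RouteTableIds=route_table_ids,
)
print(endpoint["VpcEndpoint"]["VpcEndpointId"])
```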

Example CIDRs (a scripted deployment of this plan is sketched after the list):

• VPC: 192.168.217.0/24

• Public subnets:

  • 192.168.217.0/28

  • 192.168.217.16/28

  • 192.168.217.32/28

• Private subnets layer 1:

  • 192.168.217.64/27

  • 192.168.217.96/27

  • 192.168.217.128/27

• Private subnets layer 2:

  • 192.168.217.160/27

  • 192.168.217.192/27

  • 192.168.217.224/27
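
Here is a minimal sketch of scripting that plan with boto3 (per Rule #3); the region and exact AZ names are assumptions, and real code would also create the internet gateway, NAT gateways and route tables:

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # assumed region
azs = ["eu-west-1a", "eu-west-1b", "eu-west-1c"]     # assumed AZ names

# The CIDR plan from the list above: layer -> one subnet per AZ
subnet_plan = {
    "public":    ["192.168.217.0/28",   "192.168.217.16/28",  "192.168.217.32/28"],
    "private-1": ["192.168.217.64/27",  "192.168.217.96/27",  "192.168.217.128/27"],
    "private-2": ["192.168.217.160/27", "192.168.217.192/27", "192.168.217.224/27"],
}

vpc = ec2.create_vpc(
    CidrBlock="192.168.217.0/24",
    TagSpecifications=[{
        "ResourceType": "vpc",
        "Tags": [{"Key": "Name", "Value": "reference-vpc"}],
    }],
)
vpc_id = vpc["Vpc"]["VpcId"]
ec2.get_waiter("vpc_available").wait(VpcIds=[vpc_id])

for layer, cidrs in subnet_plan.items():
    for az, cidr in zip(azs, cidrs):
        subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock=cidr, AvailabilityZone=az)
        print(layer, az, subnet["Subnet"]["SubnetId"])
```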

Network evolution!

Your first project

1 project with multiple environments

There is 1 VPC per environment (ideally in different accounts to reduce the blast radius). They are isolated from each other.

1 project with multiple environments + shared services

Each environment VPC is connected to a shared services VPC with a VPC peering connection.
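
A minimal sketch of that peering with boto3, assuming both VPCs live in the same account and region (IDs and CIDRs are placeholders); the route table entries on both sides are what actually make the traffic flow:

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # assumed region

# Assumed IDs/CIDRs: an environment VPC and the shared services VPC
env_vpc_id, env_cidr = "vpc-0aaa1111bbbb2222c", "192.168.217.0/24"
svc_vpc_id, svc_cidr = "vpc-0ddd3333eeee4444f", "192.168.218.0/24"
env_route_table_id = "rtb-0111aaaa2222bbbb3"
svc_route_table_id = "rtb-0333cccc4444dddd5"

# Request and accept the peering (same account, so we can accept it ourselves).
peering = ec2.create_vpc_peering_connection(VpcId=env_vpc_id, PeerVpcId=svc_vpc_id)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]
ec2.get_waiter("vpc_peering_connection_exists").wait(VpcPeeringConnectionIds=[pcx_id])
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Routes in both directions: peering is not transitive, each pair needs its own.
ec2.create_route(RouteTableId=env_route_table_id,
                 DestinationCidrBlock=svc_cidr,
                 VpcPeeringConnectionId=pcx_id)
ec2.create_route(RouteTableId=svc_route_table_id,
                 DestinationCidrBlock=env_cidr,
                 VpcPeeringConnectionId=pcx_id)
```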

Now let’s scale!

Multiple projects + shared infrastructure services

The project VPCs are connected to the infrastructure services VPC with a transit gateway.

The routing strategy of the transit gateway will depend on the traffic flows and your security requirements.
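
A minimal sketch of creating such a transit gateway and attaching the VPCs with boto3 (IDs and subnet choices are placeholders); default association/propagation is disabled so the routing strategy stays explicit, as in the patterns below:

```python
import time

import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # assumed region

# Disable the default route table behaviour so every association and
# propagation is an explicit, reviewable decision.
tgw = ec2.create_transit_gateway(
    Description="shared-infrastructure-tgw",
    Options={
        "DefaultRouteTableAssociation": "disable",
        "DefaultRouteTablePropagation": "disable",
    },
)
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Wait until the transit gateway is available before attaching VPCs.
while ec2.describe_transit_gateways(TransitGatewayIds=[tgw_id])[
        "TransitGateways"][0]["State"] != "available":
    time.sleep(15)

# Assumed mapping: VPC ID -> one subnet per AZ dedicated to the TGW attachment
vpc_attachment_subnets = {
    "vpc-0aaa1111bbbb2222c": ["subnet-0a01", "subnet-0a02", "subnet-0a03"],
    "vpc-0ddd3333eeee4444f": ["subnet-0b01", "subnet-0b02", "subnet-0b03"],
}

for vpc_id, subnet_ids in vpc_attachment_subnets.items():
    attachment = ec2.create_transit_gateway_vpc_attachment(
        TransitGatewayId=tgw_id,
        VpcId=vpc_id,
        SubnetIds=subnet_ids,
    )
    print(vpc_id,
          attachment["TransitGatewayVpcAttachment"]["TransitGatewayAttachmentId"])
```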

The star

Every VPC is isolated. They can access the services VPC only, and the services VPC can access all VPCs.

Here are the related route table associations and propagations:

Purple arrow: propagation/route

Green arrow: association
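
A minimal sketch of that star with boto3, assuming the transit gateway and VPC attachments from the previous sketch already exist (attachment IDs are placeholders): one route table for the spokes and one for the services VPC, with associations and propagations crossed between them.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # assumed region

tgw_id = "tgw-0123456789abcdef0"                     # assumed transit gateway
services_attachment = "tgw-attach-0c0000000000000c1"  # services VPC attachment
spoke_attachments = [                                 # isolated project VPCs
    "tgw-attach-0a0000000000000a1",
    "tgw-attach-0b0000000000000b1",
]

spokes_rt = ec2.create_transit_gateway_route_table(TransitGatewayId=tgw_id)[
    "TransitGatewayRouteTable"]["TransitGatewayRouteTableId"]
services_rt = ec2.create_transit_gateway_route_table(TransitGatewayId=tgw_id)[
    "TransitGatewayRouteTable"]["TransitGatewayRouteTableId"]

# Services VPC: associated with its own table, which learns every spoke route.
ec2.associate_transit_gateway_route_table(
    TransitGatewayRouteTableId=services_rt,
    TransitGatewayAttachmentId=services_attachment)
for attachment in spoke_attachments:
    ec2.enable_transit_gateway_route_table_propagation(
        TransitGatewayRouteTableId=services_rt,
        TransitGatewayAttachmentId=attachment)

# Spokes: associated with the spokes table, which only learns the services VPC,
# so spokes cannot reach each other.
ec2.enable_transit_gateway_route_table_propagation(
    TransitGatewayRouteTableId=spokes_rt,
    TransitGatewayAttachmentId=services_attachment)
for attachment in spoke_attachments:
    ec2.associate_transit_gateway_route_table(
        TransitGatewayRouteTableId=spokes_rt,
        TransitGatewayAttachmentId=attachment)
```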

Environment segregation

All VPCs in the same environment can communicate with each other.

Related route table configuration:
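
As a sketch of the same idea (assuming the transit gateway and attachments already exist; attachment IDs are placeholders): one transit gateway route table per environment, with every attachment of that environment both associated and propagated, so routes never leak across environments.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # assumed region
tgw_id = "tgw-0123456789abcdef0"                     # assumed transit gateway

# Assumed mapping: environment -> its VPC attachments on the transit gateway
environments = {
    "dev":  ["tgw-attach-0d0000000000000d1", "tgw-attach-0d0000000000000d2"],
    "prod": ["tgw-attach-0e0000000000000e1", "tgw-attach-0e0000000000000e2"],
}

for env, attachments in environments.items():
    rt_id = ec2.create_transit_gateway_route_table(TransitGatewayId=tgw_id)[
        "TransitGatewayRouteTable"]["TransitGatewayRouteTableId"]
    for attachment in attachments:
        # Association: the attachment uses this table for its outbound lookups.
        ec2.associate_transit_gateway_route_table(
            TransitGatewayRouteTableId=rt_id,
            TransitGatewayAttachmentId=attachment)
        # Propagation: the attachment's CIDR is advertised into this table only,
        # so dev routes never appear in the prod table and vice versa.
        ec2.enable_transit_gateway_route_table_propagation(
            TransitGatewayRouteTableId=rt_id,
            TransitGatewayAttachmentId=attachment)
    print(env, rt_id)
```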

Environment segregation + project shared services

You can have both a transit gateway and VPC peering. You can also have only a transit gateway if required by governance rules (examples: traffic inspection, one centralized routing configuration…).

We would go from this:

to this:

Related route table configuration:

Multiple projects + shared infrastructure services + on-prem connection

VPN
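
A minimal sketch of the VPN option with boto3: a site-to-site VPN attached directly to the transit gateway (the on-prem public IP, ASN and transit gateway ID are placeholders; Direct Connect involves a physical circuit and is not shown).

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # assumed region
tgw_id = "tgw-0123456789abcdef0"                     # assumed transit gateway

# The customer gateway describes your on-prem router (placeholder IP/ASN).
cgw = ec2.create_customer_gateway(
    BgpAsn=65000,
    PublicIp="203.0.113.10",
    Type="ipsec.1",
)
cgw_id = cgw["CustomerGateway"]["CustomerGatewayId"]

# A site-to-site VPN terminated on the transit gateway, so every attached VPC
# can reach on-prem through the TGW route tables.
vpn = ec2.create_vpn_connection(
    CustomerGatewayId=cgw_id,
    Type="ipsec.1",
    TransitGatewayId=tgw_id,
)
print(vpn["VpnConnection"]["VpnConnectionId"])
```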

Direct Connect

Multiple Direct Connect connections

Multiple projects + shared infrastructure services + on-prem connection + multiple regions

Regions are connected by transit gateway peering.
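
A minimal sketch of peering two transit gateways across regions with boto3 (IDs, regions and the account ID are placeholders); note that cross-region TGW peering only supports static routes, so each side needs an explicit route towards the other region:

```python
import boto3

account_id = "123456789012"                            # assumed account
ec2_eu = boto3.client("ec2", region_name="eu-west-1")  # assumed regions
ec2_us = boto3.client("ec2", region_name="us-east-1")

tgw_eu = "tgw-0aaa000000000000a"   # assumed transit gateway in eu-west-1
tgw_us = "tgw-0bbb000000000000b"   # assumed transit gateway in us-east-1

# Request the peering from one region...
peering = ec2_eu.create_transit_gateway_peering_attachment(
    TransitGatewayId=tgw_eu,
    PeerTransitGatewayId=tgw_us,
    PeerAccountId=account_id,
    PeerRegion="us-east-1",
)
attachment_id = peering["TransitGatewayPeeringAttachment"]["TransitGatewayAttachmentId"]

# ...and accept it from the other (once it reaches the pendingAcceptance state).
ec2_us.accept_transit_gateway_peering_attachment(
    TransitGatewayAttachmentId=attachment_id)

# Static routes only: each TGW route table needs an explicit route to the other
# region's CIDRs, pointing at the peering attachment (placeholder table/range).
ec2_eu.create_transit_gateway_route(
    DestinationCidrBlock="10.64.0.0/12",
    TransitGatewayRouteTableId="tgw-rtb-0ccc000000000000c",
    TransitGatewayAttachmentId=attachment_id,
)
```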

Conclusion

From these architecture examples, you could implement features like centralized VPC endpoints, traffic inspection, centralized internet access…

Some say networking is hard. But it is all logic and predefined rules. By following the golden rules, and with the right tools like inmycloud.io, it will be easy peasy. 😉
