Bandwidth Problem
The last thing I did to the production system before the weekend was move Lucidchart's web servers into the VPC. These web servers are located in two of the four AZs in us-east. Remember from my VPC setup that each AZ has a single NAT instance of size t1.micro. This means that all traffic between the web servers and the MySQL database, MongoDB database, cache servers, and internal services was being funneled through two t1.micro instances.
Moving the web servers into the VPC didn't immediately trigger any errors. The bandwidth was sustainable until a couple of rather obnoxious queries were made to the MySQL server, queries that returned mountains of data. To make matters worse, our JavaScript client, after timing out, retried the same query. As soon as the traffic exceeded the t1.micro's capacity, we had issues.
Bandwidth Solution
It only took one more bout of downtime to force an upgrade from the t1.micro to an m1.large; the extra bandwidth was enough to handle the traffic without further incident. Keep in mind that this all happened over a weekend, so I was reluctant to finish the migration in its entirety, which also would have solved the problem. The NAT instances are EBS-backed, so upgrading was fairly simple.
If you followed my VPC setup and are running into this issue, here are the steps for upgrading your t1.micro NAT to an m1.large. It does require downtime in one availability zone at a time. A scripted sketch of these steps follows the list.
- Start a new instance, size m1.large, in the same AZ as the t1.micro NAT.
- Stop the m1.large NAT instance.
- Remove all instances, in the same AZ, from ELBs or other load balancers.
- Stop the t1.micro NAT.
- Detach and delete the root EBS volume from the m1.large NAT.
- Detach the root EBS volume from the t1.micro NAT, and attach it to the m1.large NAT as its root device.
- Terminate the t1.micro NAT.
- Disable the Source/Dest check on the m1.large NAT.
- Reassign the elastic IP from the t1.micro NAT to the m1.large NAT.
- Start the m1.large NAT.
- Update the affected subnet's route table so its default route (0.0.0.0/0) points to the m1.large NAT.
- Add all instances, in the same AZ, to ELBs or other load balancers.
- Repeat all steps for other AZs.
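For reference, here's roughly how the per-AZ swap might look scripted with boto3. Every ID, the ELB name, and the /dev/sda1 device path are placeholders, and your environment will differ; treat this as a sketch of the steps above, not a drop-in script.

```python
# Sketch of the NAT swap for one AZ. All IDs/names below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
elb = boto3.client("elb", region_name="us-east-1")

OLD_NAT = "i-0000000000000000a"   # the t1.micro NAT (placeholder)
NEW_NAT = "i-0000000000000000b"   # m1.large already launched in the same AZ (placeholder)
ROUTE_TABLE = "rtb-00000000"      # route table of the affected private subnet (placeholder)
EIP_ALLOCATION = "eipalloc-00000000"              # the NAT's elastic IP (placeholder)
LOAD_BALANCER = "my-elb"                          # placeholder ELB name
AZ_INSTANCES = [{"InstanceId": "i-0000000000000000c"}]  # web servers in this AZ

def root_volume(instance_id):
    """Return the volume ID of the instance's root EBS volume."""
    inst = ec2.describe_instances(InstanceIds=[instance_id])["Reservations"][0]["Instances"][0]
    for mapping in inst["BlockDeviceMappings"]:
        if mapping["DeviceName"] == inst["RootDeviceName"]:
            return mapping["Ebs"]["VolumeId"]

# Stop the new NAT and pull this AZ's traffic out of the load balancer.
ec2.stop_instances(InstanceIds=[NEW_NAT])
elb.deregister_instances_from_load_balancer(
    LoadBalancerName=LOAD_BALANCER, Instances=AZ_INSTANCES)
ec2.stop_instances(InstanceIds=[OLD_NAT])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[OLD_NAT, NEW_NAT])

# Swap root volumes: discard the m1.large's, reuse the t1.micro's.
new_root = root_volume(NEW_NAT)
old_root = root_volume(OLD_NAT)
ec2.detach_volume(VolumeId=new_root)
ec2.detach_volume(VolumeId=old_root)
ec2.get_waiter("volume_available").wait(VolumeIds=[new_root, old_root])
ec2.delete_volume(VolumeId=new_root)
ec2.attach_volume(VolumeId=old_root, InstanceId=NEW_NAT, Device="/dev/sda1")  # root device name assumed

# Retire the old NAT and make the new one act as a NAT.
ec2.terminate_instances(InstanceIds=[OLD_NAT])
ec2.modify_instance_attribute(InstanceId=NEW_NAT, SourceDestCheck={"Value": False})
ec2.associate_address(AllocationId=EIP_ALLOCATION, InstanceId=NEW_NAT)
ec2.start_instances(InstanceIds=[NEW_NAT])
ec2.get_waiter("instance_running").wait(InstanceIds=[NEW_NAT])

# Point the private subnet's default route at the new NAT, then restore traffic.
ec2.replace_route(RouteTableId=ROUTE_TABLE,
                  DestinationCidrBlock="0.0.0.0/0",
                  InstanceId=NEW_NAT)
elb.register_instances_with_load_balancer(
    LoadBalancerName=LOAD_BALANCER, Instances=AZ_INSTANCES)
```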
Mitigation
While this problem was unfortunate and unexpected, it was certainly preventable. In hindsight, I can think of three separate ways that this issue could have been mitigated.
- Do the entire transition in a single X-hour window. This method does not guarantee a smooth migration, as anything can go wrong in those X hours. Also, it may not be feasible for every scenario.
- Begin with a larger NAT instance. Whether it's an m1.large or a hi1.4xlarge, estimate the maximum bandwidth you will need, double it, and use the instance size that accommodates it. The beefy NAT is only temporary; after the migration to VPC is over, it can be replaced with a smaller instance.
- Perform two separate migrations. The first migration would be to a public subnet, where each instance gets its own elastic IP address (a sketch of that first phase follows this list). The second migration would move only those instances that belong in a private subnet. Not only is this tedious, it's twice as error-prone and complicates auto-scaled instances.
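For the two-migration route, the first phase is mostly bookkeeping: allocate and associate one elastic IP per instance moving into the public subnet. A minimal boto3 sketch, with placeholder instance IDs:

```python
# Give each instance in the public subnet its own elastic IP.
# Instance IDs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

for instance_id in ["i-0000000000000000a", "i-0000000000000000b"]:
    allocation = ec2.allocate_address(Domain="vpc")   # new VPC elastic IP
    ec2.associate_address(AllocationId=allocation["AllocationId"],
                          InstanceId=instance_id)
```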
Of the techniques listed above, I would choose the second: it sidesteps the bandwidth bottleneck entirely while leaving the least room for error. Whatever your flavor of mitigation, at least be aware of the bandwidth bottleneck that lurks in a migration to VPC.