A large-scale outage affecting Amazon Web Services' Elastic Compute Cloud (EC2) over the Easter break has highlighted one of the many risks associated with running thousands of applications from large clusters of virtualised servers.
The outage - which began Thursday last week and was still affecting a number of Amazon customers as late as Tuesday - took out popular web services including Foursquare, FormSpring, Heroku, HootSuite, Quora and Reddit.
It also impacted many IT service providers that use Amazon as part of the solutions they deliver to end users, such as cloud management software vendor RightScale.
The outage began Thursday at 6pm (Sydney time), when customers hosting applications on Amazon's cloud compute service (EC2) in the US-EAST-1 region (in Virginia) began experiencing connectivity problems, increased latency and elevated error rates.
On any normal day Amazon's EBS (Elastic Block Store) - a giant storage area network - dynamically distributes small volumes of storage capacity to thousands of physical servers hosting Amazon EC2 virtual server instances, and to applications using Amazon's Relational Database Service (RDS).
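For context, EBS volumes are network-attached block devices, created and connected to instances through API calls. The sketch below shows that create-and-attach flow using the present-day boto3 SDK (which post-dates this outage); the region, Availability Zone and instance ID are placeholder assumptions, not values from the article.

```python
# Minimal sketch of the EBS create-and-attach flow, using the modern
# boto3 SDK (which post-dates this outage). Region, Availability Zone
# and instance ID are placeholder assumptions.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# An EBS volume is created in a specific Availability Zone and lives
# independently of any single physical server.
volume = ec2.create_volume(AvailabilityZone="us-east-1a", Size=8)  # 8 GiB
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

# Attaching it exposes a block device to the instance; reads and writes
# then travel over Amazon's internal network, which is why a network
# event can cut running servers off from their storage.
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",  # hypothetical instance ID
    Device="/dev/sdf",
)
```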
According to Amazon's status updates, a mysterious network event caused the software monitoring the EC2 network to calculate, incorrectly, that there was insufficient redundancy available to meet the needs of these server and database instances.
The software automatically attempted to move resources around the network to adjust – in effect a mass re-mirroring of storage that flooded the network with traffic.
In the chaos that ensued, Amazon ran out of spare capacity in the affected Availability Zone of its US-EAST-1 region – with services failing faster than Amazon engineers could re-provision them.
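The failure mode described here is a positive feedback loop: re-mirroring traffic congests the network, congestion disconnects more replicas, and each newly disconnected volume triggers yet more re-mirroring. A toy model of that loop, in Python - every number is invented for illustration, and this is in no way Amazon's code or data:

```python
# Toy model of the re-mirroring feedback loop. All numbers are invented.
NETWORK_CAPACITY = 1_000   # arbitrary traffic units the network can carry
REMIRROR_COST = 5          # traffic generated per volume being re-mirrored

disconnected = 250         # volumes that initially lost sight of a replica
for step in range(5):
    traffic = disconnected * REMIRROR_COST
    print(f"step {step}: {disconnected} volumes re-mirroring, traffic {traffic}")
    if traffic <= NETWORK_CAPACITY:
        break  # the network absorbed the re-mirroring; recovery completes
    # Congestion makes still more replicas unreachable, so even more
    # volumes start re-mirroring: the self-inflicted denial of service.
    disconnected *= 2
```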
Six hours after the unexplained network event, Amazon's technicians reported to customers that EBS-backed instances in the US-EAST-1 region were "failing at a high rate."
"Effectively, [Amazon's high availability software] launched a Denial-of-Service attack on their own infrastructure," explained Matt Moor, technical architect at a Sydney-based Amazon customer Bulletproof Networks, which fortunately relies primarily on providing services from its own infrastructure in Australia.
The outage initially affected multiple Amazon 'availability zones' (data centres), but by Friday morning the company reported that most server instances across its compute cloud were again operational - with the exception of applications hosted in the affected zone in US-EAST-1.
Customers with applications hosted in this zone suffered outages well into the weekend as Amazon engineers struggled to bring the large volume of services back online.
"The work we're doing to enable customers to be able to launch EBS backed instances and create, delete, attach and detach EBS volumes in the affected Availability Zone is taking considerably more time than we anticipated," Amazon's engineers reported on the company's status page.
Instances had to be re-provisioned slowly, the company reported, "in order to moderate the load on the control plane and prevent it from becoming overloaded and affecting other functions."
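Moderating load in this way is the same principle that well-behaved API clients apply from their side: when a control plane starts rejecting requests, back off rather than retry immediately. A generic sketch of capped exponential backoff with jitter, in Python - not Amazon's internal code, just the standard pattern:

```python
# Generic illustration of capped exponential backoff with jitter - the
# standard client-side way to avoid piling load onto an already
# overloaded control plane. Not Amazon's internal code.
import random
import time

def call_with_backoff(request, max_attempts=8, base_delay=1.0, cap=60.0):
    """Retry `request` (a zero-argument callable) until it succeeds,
    sleeping a random interval up to min(cap, base_delay * 2**attempt)."""
    for attempt in range(max_attempts):
        try:
            return request()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # give up after the final attempt
            time.sleep(random.uniform(0, min(cap, base_delay * 2 ** attempt)))
```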
By Monday, Amazon reported that most instances were operational, advising those customers still experiencing issues to "stop and restart your instance in order to restore connectivity."
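Amazon's suggested remedy is a full stop/start cycle, which is distinct from a reboot: for an EBS-backed instance it re-provisions the guest, typically onto different underlying hardware, and re-establishes its volume attachments. A minimal sketch of that cycle with the modern boto3 SDK (the instance ID is a placeholder):

```python
# Sketch of the remedy Amazon suggested: a full stop/start cycle (not a
# reboot), which re-provisions an EBS-backed instance and re-establishes
# its EBS connectivity. The instance ID is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
instance_id = "i-0123456789abcdef0"  # hypothetical

ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

ec2.start_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
```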