Amazon Web Services’ Sydney-based AP-SOUTHEAST-2 Region yesterday experienced an auto-scaling issue.
The problem struck just before noon Sydney time yesterday, November 26th, and was resolved at 1:22PM.
AWS described the issue as “increased error rates and latencies for the EC2 and EBS APIs”. The cloud colossus's status RSS feed added that “Existing instances are not affected by this issue.”
That was both good news and bad: good because existing workloads in the region weren’t impacted, bad because new instances were slow to launch, leaving AWS’ famous elasticity a little less stretchy for a time.
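For teams that want their deployment scripts to ride out this kind of control-plane wobble, a minimal sketch along these lines shows one way to tolerate elevated EC2 API error rates: widen the SDK’s retry budget and back off on throttling or server-side errors. It assumes Python with boto3; the AMI ID and the launch_with_backoff helper are placeholders for illustration, not anything AWS prescribes.

import time

import boto3
from botocore.config import Config
from botocore.exceptions import ClientError

# Widen the SDK's own retry budget for a flaky control plane.
ec2 = boto3.client(
    "ec2",
    region_name="ap-southeast-2",
    config=Config(retries={"max_attempts": 10, "mode": "adaptive"}),
)

def launch_with_backoff(ami_id: str, max_tries: int = 5) -> str:
    """Call RunInstances, backing off on throttling or server errors."""
    for attempt in range(max_tries):
        started = time.monotonic()
        try:
            resp = ec2.run_instances(
                ImageId=ami_id,          # placeholder AMI ID
                InstanceType="t3.micro",
                MinCount=1,
                MaxCount=1,
            )
            print(f"Launched in {time.monotonic() - started:.1f}s")
            return resp["Instances"][0]["InstanceId"]
        except ClientError as err:
            code = err.response["Error"]["Code"]
            # Codes EC2 uses for throttling and transient server faults.
            if code in ("RequestLimitExceeded", "InternalError", "Unavailable"):
                time.sleep(2 ** attempt)  # exponential backoff
                continue
            raise
    raise RuntimeError("EC2 launches kept failing; the region may be degraded")

In practice botocore’s adaptive retry mode absorbs most transient throttles on its own; the outer loop is belt-and-braces for the server-side errors a degraded control plane tends to return.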
CRN can detect no signs that AWS customers or EC2-powered services were unduly inconvenienced by the issue.
Service is operating normally: [RESOLVED] Increased Launch Times Between 4:52 PM and 6:22 PM PST we experienced increased launch latencies for Auto Scaling in the AP-SOUTHEAST-2 Region. The issue has been resolved and the service is operating normally. https://t.co/bsGDpyYOZu
— Is AWS Down? (@isAWSdownbot) November 26, 2019
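A customer wanting to confirm whether their own fleet was caught up in the slowdown could check their Auto Scaling activity history. The sketch below (again Python with boto3; “web-asg” is a hypothetical group name and the five-minute threshold is an arbitrary choice) flags launch activities that ran long or failed outright.

from datetime import timedelta

import boto3

asg = boto3.client("autoscaling", region_name="ap-southeast-2")

resp = asg.describe_scaling_activities(
    AutoScalingGroupName="web-asg",  # hypothetical group name
    MaxRecords=20,
)
for activity in resp["Activities"]:
    start = activity["StartTime"]
    end = activity.get("EndTime")  # absent while still in progress
    if end and end - start > timedelta(minutes=5):
        print(f"Slow activity {activity['ActivityId']}: took {end - start}")
    elif activity["StatusCode"] == "Failed":
        print(f"Failed activity: {activity.get('StatusMessage', 'no detail')}")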
The issue is the second to hit AWS Australia this year: in April 2019 AP-SOUTHEAST-2 wobbled after a networking problem.
That incident was AWS Sydney’s first reported issue for 1,059 days.
Yesterday’s incident came 211 days after the previous wobble and, like its predecessor, was resolved in about 90 minutes.
Microsoft has fared worse in recent times: over the last week Australian users have copped three Azure issues, some of which caused hours-long outages to email and other services.