COMMENT | Like any US IT vendor event, AWS re:Invent 2016 was sure to have a showstopper or two. Few expected it to have 24 wheels, six axles and a Kenworth badge.
At the end of his keynote, AWS chief executive Andy Jassy introduced to the stage the Snowmobile, a 45-foot containerised data centre mounted on a semi-trailer; it’s a major piece of artillery in Amazon’s assault on the enterprise.
The Snowmobile can hold up to 100 petabytes of data and uses a high-speed network switch to turn it into the data-centre equivalent of an enormous USB stick.
Companies with exabytes of data (an exabyte is 1 million terabytes or 1000 petabytes) can rent a Snowmobile for several months to literally ship data from their data centres to Amazon’s.
This is a turning point in the story of cloud vs incumbent vendors. Amazon has been nibbling at the edges of enterprise for years but it has rarely managed to swallow a full‑sized data centre.
Even if CIOs decide that AWS is the right platform for spinning up and managing new applications, many cannot close down their internal data centres until they work out what to do with historical data.
And without replacing the whole data centre, AWS can only be a complementary partner. The Snowmobile clearly reveals AWS’ goal: to displace giants such as Oracle and SAP as the cornerstone vendor in the enterprise.
Consider the ways AWS has chipped away at three major objections to switching.
The first objection was infrastructure cost. In some cases buying hardware outright is cheaper than renting servers at a few dollars a day, despite all the advantages of cloud provisioning.
AWS’ Glacier launched in 2012 with the headline that you could store a gigabyte “for less than a penny” a month. It directly targeted tape libraries and data archives, with no set-up fee. That price has kept falling; it’s now down to 0.4 cents per GB per month, and AWS will try to lower it further.
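To put that rate in context, here is a rough back-of-the-envelope sketch, assuming the 0.4 cents per GB per month figure, decimal units and no retrieval or request charges:

```python
# Rough archival storage cost sketch at Glacier-style pricing.
# Assumes 0.4 cents per GB per month (the late-2016 figure) and
# decimal units; retrieval and request fees are ignored.
PRICE_PER_GB_MONTH = 0.004        # USD
GB_PER_PB = 1_000_000             # decimal petabyte

monthly_cost_1pb = PRICE_PER_GB_MONTH * GB_PER_PB
print(f"1 PB archived: ${monthly_cost_1pb:,.0f}/month "
      f"(~${monthly_cost_1pb * 12:,.0f}/year)")
# -> 1 PB archived: $4,000/month (~$48,000/year)
```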
Management, backup, business continuity and disaster recovery become AWS’ problem. That (ideally) reduces risk for the enterprise compared with running its own tape library.
Bandwidth is the second barrier. When you have a data centre full of data, sending it over the wire is not realistic: pushing an exabyte down a 10Gbps link at full line rate would take roughly 25 years, not to mention the cost of transfer.
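The arithmetic behind that figure is straightforward; a sketch assuming a sustained 10Gb/s with no protocol overhead and decimal units:

```python
# Time to move one exabyte over a sustained 10 Gb/s link.
# Assumes decimal units and 100% link utilisation, no overhead.
EXABYTE_BITS = 1e18 * 8           # 1 EB expressed in bits
LINK_BPS = 10e9                   # 10 Gb/s

seconds = EXABYTE_BITS / LINK_BPS
years = seconds / (365.25 * 24 * 3600)
print(f"{years:.1f} years")       # -> 25.3 years
```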
Amazon’s first answer was the 80TB Snowball, a ruggedised storage appliance that physically transports data from a customer’s data centre to AWS. That was never going to cut it with large enterprises, hence the evolution to the Snowmobile.
Enterprises can now dump old data retained for compliance or reporting straight into Glacier at a rate of 100PB a fortnight, including the time to load, transport (drive!) and unload the data.
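Treating that end-to-end figure as an effective transfer rate makes the contrast with the network route plain; again a rough sketch, assuming decimal units:

```python
# Effective throughput of moving 100 PB in a fortnight, door to door,
# compared with a sustained 10 Gb/s network link. Decimal units assumed.
PAYLOAD_BITS = 100e15 * 8                 # 100 PB in bits
FORTNIGHT_SECONDS = 14 * 24 * 3600

effective_gbps = PAYLOAD_BITS / FORTNIGHT_SECONDS / 1e9
print(f"~{effective_gbps:.0f} Gb/s effective "
      f"(~{effective_gbps / 10:.0f}x a 10 Gb/s link)")
# -> ~661 Gb/s effective (~66x a 10 Gb/s link)
```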
Amazon needs this exabyte-scale transport to tackle the third challenge: a database that can go after the 600-pound gorilla of enterprise IT, Oracle. The big O was firmly in Amazon’s sights with the announcement of PostgreSQL compatibility for Aurora, Amazon’s database engine. Aurora was previously compatible with MySQL only, whereas PostgreSQL is a lot closer to Oracle.
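In practice, that compatibility means applications written against a standard PostgreSQL driver can simply be pointed at an Aurora cluster endpoint. A minimal sketch of the idea; the endpoint, database name and credentials below are hypothetical placeholders:

```python
# Minimal sketch: a standard PostgreSQL client talking to an Aurora
# PostgreSQL-compatible endpoint. Host, database and credentials are
# hypothetical placeholders, not real values.
import psycopg2

conn = psycopg2.connect(
    host="mycluster.cluster-abc123.us-east-1.rds.amazonaws.com",  # hypothetical Aurora endpoint
    port=5432,                 # standard PostgreSQL port
    dbname="reporting",
    user="app_user",
    password="example-password",
)

with conn, conn.cursor() as cur:
    cur.execute("SELECT version();")   # returns a PostgreSQL version string
    print(cur.fetchone()[0])
```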
There’s a major incentive too, which is psychological as much as anything else. AWS has earned a reputation for constantly slashing the price of its services. Contrast that with Oracle, which is notorious for finding new ways to charge, and charge well.
Now that those three major barriers have disappeared, enterprise CIOs on tightening budgets will add AWS to their dance card.
Oracle is ramping up its cloud credentials, most recently with the purchase of NetSuite, a very strong performer in cloud ERP. However, it will take more than better products to catch Amazon.
AWS’ commitment to being the lowest-cost provider is a classic disruptive tactic that can only be countered by cannibalising your own position. And for a company like Oracle, with highly lucrative contracts in every corner of the globe, that will be a Herculean feat.
Sholto Macpherson is a journalist and commentator who covers emerging technology in cloud computing.