The initial hype suggested public cloud migrations would be one-way traffic. Data and applications would march into data centres run by hyperscale firms like Microsoft, Google and Amazon, while carriers and smaller service providers would scoop up the rest. These workloads would never return.
But ask around and it's not hard to find examples of companies that have taken bold steps toward the cloud, only to backtrack when costs spiralled, connections stalled, providers crashed or for myriad other reasons.
CRN quizzed dozens of customers and IT suppliers for their stories about reversing out of public and private clouds. The examples came thick and fast (though most asked us not to publish the name of the red-faced customer organisation, for obvious reasons).
Bill shock was by far the biggest reason for the change in strategy. Other issues included internet problems, underwhelming performance, project failures, a lack of full functionality in the cloud, as well as regulation and data sovereignty concerns.
This disenchantment with public cloud – or at least with the concept of public cloud as a one-size-fits-all solution for IT – explains why the world's biggest hyperscale providers are shifting to a hybrid message. Only this week, Microsoft announced general availability of Azure Stack, a best-of-both-worlds product for customers who want to pick and choose where workloads live. AWS has a partnership with VMware, and even Google announced a tie-up with Nutanix, dipping its toe into hybrid.
Bill shock
The Commonwealth Bank revealed in June how it moved to a private cloud approach using bare-metal infrastructure based on OpenStack, after mounting public cloud costs and disillusionment with conventional virtual machines. The bank had been an early, prominent and bullish user of AWS.
Quinton Anderson, CBA's head of engineering and platform products, said there had been a growing realisation that the cost benefits of public cloud services begin to dissipate as scale increases. “Once you get past 1000 servers and you’re running at a high utilisation rate, the economics quickly flip on you and they don’t make sense."
Steve Martin, head of channels at NextDC, has seen the ebbs and flows of public and private cloud from his position inside one of Australia's largest co-location firms. While NextDC has never publicly revealed the names of its hyperscale clients, it is known to host major providers, including Microsoft, along with many Australian managed service providers and hosting companies.
"There are numerous use cases on how cloud is helping organisations to move faster, rapidly deploy new systems and drive unheralded innovation," Martin told CRN.
"However, not all workloads are ready for the cloud. In the early days, a number of businesses jumped headlong into cloud only to be hit with a bit of bill shock causing a number to re-evaluate their cloud position."
He explains that some of NextDC's partners "suggest workloads in use for less than a third of a three-year period are perfect for public cloud, while higher-use workloads could be better suited to in-house infrastructure or private cloud".
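As a very rough sketch of that break-even logic – using entirely hypothetical prices, not real AWS or Azure rates – the economics Martin's partners describe look something like this:

```python
# Back-of-the-envelope comparison of public cloud vs in-house cost as
# utilisation rises. All figures are hypothetical placeholders chosen
# so the crossover lands near one-third utilisation.

HOURS_PER_YEAR = 8760

def cloud_cost(on_demand_rate, utilisation, years=3):
    """Pay only for the hours the workload actually runs."""
    return on_demand_rate * HOURS_PER_YEAR * utilisation * years

def in_house_cost(hardware_capex, yearly_opex, years=3):
    """Fixed cost regardless of how busy the hardware is."""
    return hardware_capex + yearly_opex * years

# Example: a $0.50/hr instance vs a $3,000 server with $460/yr running costs
for util in (0.10, 0.33, 0.75, 1.00):
    cloud = cloud_cost(0.50, util)
    owned = in_house_cost(3000, 460)
    print(f"utilisation {util:>4.0%}: cloud ${cloud:>8,.0f} vs owned ${owned:>8,.0f}")
```

Below the crossover, paying by the hour wins; above it, the fixed cost of owned kit is amortised over far more useful hours.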
The unexpected nature of these costs – hence the term 'bill shock' – makes for a steep learning curve. Bauer Media, for instance, told CRN's sister title, iTnews, last year how it had been caught out by developers leaving servers running in AWS. The publisher solved the problem by turning to Australian cost optimisation start-up GorillaStack, which automates the process of switching off servers.
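The saving from that kind of automation comes down to simple arithmetic; the fleet size and hourly rate below are made-up figures for illustration only:

```python
# Illustrative sketch of the saving from switching off non-production
# servers outside business hours, the kind of automation GorillaStack
# provides. Rates and fleet size are hypothetical.

def running_hours_per_week(start_hour, end_hour, weekdays_only=True):
    """Hours a server runs per week under a simple on/off schedule."""
    daily = end_hour - start_hour
    return daily * (5 if weekdays_only else 7)

def weekly_saving(n_servers, hourly_rate, start_hour=8, end_hour=18):
    always_on = 24 * 7                                        # 168 hours/week
    scheduled = running_hours_per_week(start_hour, end_hour)  # e.g. 50 hours
    return n_servers * hourly_rate * (always_on - scheduled)

# 20 dev/test servers at a hypothetical $0.25/hr, running 8am-6pm weekdays:
print(f"${weekly_saving(20, 0.25):,.2f} saved per week")
```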
One of the highest-profile Australian software firms to move to the public cloud was TechnologyOne, a Brisbane-based company listed on the ASX that primarily serves local government.
The company was named AWS technology partner of the year in 2016, having taken everything to AWS after a three-day outage during the Brisbane floods in 2011.
However, in 2016 TechOne invested in its own NetApp storage after struggling to control costs due to a lack of flexibility.
NetApp deduplication and FlexClone technology led to an 85 percent reduction in production data – which was a “seven-figure saving", according to Iain Rouse, R&D group director, cloud at TechnologyOne. The four-node MetroCluster, which has 20TB of flash, 200TB for file storage and sits inside the Equinix SY3 data centre in Sydney, connects to AWS using Cisco Nexus switches.
Chris Nixon, distribution partner manager for NetApp Australia and New Zealand, said: "We are helping our customers get to the cloud with data management that seamlessly connects different clouds, whether they are private, public or hybrid environments."
Held for ransom by retrieval costs
Retrieval of data can catch users out. AWS' Glacier cold-storage product carries infinitesimal costs, currently just $0.005 per GB per month, but costs $0.036 per GB for the most expensive 'expedited' retrievals – roughly 7x the price.
Stephen Knights, managing director of Sydney-based IT firm Commulynx, pointed to an example of "a midmarket organisation that works in the finance industry with hosted email archive and needed the data back. The vendor in question charged them US$30,000 to get their own data back."
Kevin Allan, managing director of Perth-based Probax, has seen customers "get hit with bills in the tens of thousands specifically when restoring data because both AWS and Azure charge for outbound data transfer, which a lot of businesses overlook when scoping their public cloud needs".
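The numbers bear this out. Using the Glacier rates quoted earlier, plus an assumed outbound transfer rate (egress pricing varies by provider, region and volume tier), a modest archive restore quickly reaches five figures:

```python
# Working through the retrieval arithmetic. The Glacier storage and
# expedited retrieval rates are the ones quoted in the article; the
# outbound data transfer rate is an assumed figure for illustration.

GLACIER_STORAGE_PER_GB_MONTH = 0.005
EXPEDITED_RETRIEVAL_PER_GB = 0.036
EGRESS_PER_GB = 0.09  # assumed; varies by provider, region and tier

# A single expedited retrieval costs roughly 7x a month of storage:
print(EXPEDITED_RETRIEVAL_PER_GB / GLACIER_STORAGE_PER_GB_MONTH)

def restore_bill(gigabytes):
    """Cost to retrieve archived data AND move it out of the cloud."""
    return gigabytes * (EXPEDITED_RETRIEVAL_PER_GB + EGRESS_PER_GB)

# Restoring a 200 TB backup set:
print(f"${restore_bill(200 * 1024):,.2f}")
```

Storing cheaply and retrieving expensively is the model working as designed – the shock comes from scoping only the first half.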
Safi Obeidullah, director of sales engineering at Citrix ANZ, said one customer received bill shock because "their approach was to simply move their current VMs into the cloud 'as is' and underestimated the costs because everything looks cheap at a few cents an hour".
"While I can’t go into the details about that specific organisation, we are seeing other examples of bill shock where organisations are not accurately estimating or understanding the costs involved in running services in a public cloud."
Obeidullah said these customers are not necessarily backing out of the public cloud, but reviewing their approach.
"Simply moving workloads from on-premise to public cloud 'as is' will not necessarily be cheaper on a per-workload cost basis. However in some cases, the drivers for moving to cloud aren’t just about the workload and instead about a broader organisational desire to, for example, shift away from maintaining their own data centres, which brings with it additional costs/savings. I do believe that a hybrid approach is what will eventuate for most organisations."
It's horses for courses, added David Malcolm, executive director of Network Professional Services based in Cremorne, Victoria. Software-as-a-service tends to be cost-effective and customers will stick with SaaS, but infrastructure? Not necessarily.
"Depending on scale and requirements, putting internal infrastructure in the cloud may not be as cost effective as running it in house. Although it may be suitable where an organisation prefers to outsource these services rather than retain in-house expertise.
"Some of our customers have experience where they've moved assets to the cloud, found it uneconomical, and moved back to their own infrastructure, although their SaaS apps tend to remain cloud-based, and they adopt a hybrid model."
Malcolm points to SaaS identity management tools such as Okta and endpoint protection such as CrowdStrike as being better off in the cloud, while general business systems, industry-specific databases, Active Directory, and file and print sharing are often best run internally.
Network constraints
Australia's tyranny of distance is partly responsible for our much-maligned internet speeds (the country's politicians also carry their share of the blame). Akamai's Q1 State of the Internet report ranked Australia 50th in the world for internet connection speeds – our 11.1 Mbps on average lags behind the likes of Thailand, Lithuania, Latvia and Kenya.
The cloud demands fast, stable internet connections, and this has proved the downfall of plenty of cloud migrations.
Nowhere is this more apparent than voice and video, where latency issues have been known to curtail the shift to unified communication-as-a-service.
Ashlee Ball, sales director of Fast Track Communications, said UCaaS is "extremely sensitive to network congestion and speeds".
"Unless a client has a direct MPLS connection into the data centre then the voice is delivered over the public internet. This makes the cloud service or app unreliable if the internet connection isn’t seriously spec’d for purpose, which some customers just seem to ignore."
Fast Track was called in to rescue a multinational recruitment organisation when "their former supplier didn’t address this and ultimately we remediated their solution, which was a cloud rip and replace. Not an insignificant undertaking – especially as the configurations were held in a cloud service managed by others, and we had to start from scratch," Ball said.
The client moved to Mitel as its replacement, though Ball would not name the previous vendor.
Connectivity is doubly problematic in regional and rural Australia. Shane King, owner of Ask Itee in Muswellbrook, said a law firm's attempt to move to hosted Exchange was derailed by telecommunications.
"Comms was not up to the task. We are in a regional area where ADSL2+ is the best you’ll get," said King. "Contention and reliability of the ADSL, not an issue for browsing and email, suddenly floated to the top as a critical component. The telco tried a number of things but in the end conceded it wasn’t going to get any better unless we went to a more expensive digital service, which blew any costs savings out the window."
After 12 months, the law firm had had enough. It paid out the hosting contract and returned to on-premises infrastructure.
They're not alone in being left once bitten, twice shy by a poor cloud experience. Peter Kantarelis, chief executive of Lookup.com in Sydney, said one of his customers "wanted us to put everything into their own data centres for now" after the "previous IT company did a terrible job of migrating to cloud".
"Turns it was their private cloud with poor performance. Customer was tied to three-year legal contract that only favoured the IT company."
Kantarelis added: "With Aussie internet being so far behind the rest of the world, we have found best practice is to have hybrid systems where some workloads are synced with cloud. This removes lag and improves speed at the ground level, while maintaining redundancy. You get best of both worlds."
Application performs like a dog
One of the prime attractions of the cloud has long been the promise of massive computing grunt at a fraction of the cost of building your own high-performance system. But while the public cloud boasts a near-limitless supply of VMs, performance problems can plague applications when they are improperly migrated or poorly architected.
A client's Microsoft SQL database that required low latency and reasonable I/O ran "like a dog" when it moved to an overseas cloud provider, said Les Dunn, general manager of sales at Brisbane-based Aliva.
"The customer had persisted with intermittent poor application performance for over six months, because the previous provider kept on making promises that things would get better. The IT team, particularly the service desk, received consistent complaints regarding woeful app performance.
"The provider proposed WAN acceleration to enhance performance, but would not deliver a free proof-of-concept. So the customer saw this as a possible fix but a wasted investment if the acceleration didn’t significantly reduce the underlying latency issue."
Ultimately the application was migrated back on-premises; thankfully the client had hung onto its infrastructure as a contingency.
Paul Cowle, chief executive of North Sydney-based Fusion Professionals, draws a distinction between legacy applications that are migrated to the cloud versus cloud-native apps.
"We've found that net new applications – typically web apps and websites – constructed from the ground up with autoscale, autoheal and continuous delivery in mind generally work well. It is the legacy, off-the-shelf applications that don't behave so well.
"They are not designed around ephemeral storage or dynamic scaling. In fact, they can be more complex to manage in this type of environment as compared to, say, VMWare, where the workload can by dynamically moved with zero downtime to achieve certain operational outcomes," said Cowle.
He is particularly cautious about data analytics and BI apps, and has seen clients exit the cloud for highly memory-intensive apps due to the high costs involved. "The problem is they require so much memory that you end up on the largest of the available instance types and these are extremely expensive, much more so than having your own on-prem servers that you pay for once and can use for years.
"They also don't handle autoscale or autoheal well due to the fact that they need to write back to disk, so you don't get some of the perceived benefits," Cowle added.
The wrong architecture
The IT manager of a Perth building materials company that has been in business for more than four decades, who asked not to be named, told CRN that the company had tried hosting its ordering site externally in a bid for greater reliability, but brought it back in-house after constant VPN dropouts and disconnects.
"At the time the most pressing challenge was a reliable power source. The industrial area we’re located in had over-voltages and under-voltages. Sometimes one of the three phases would drop, and once or twice a year we’d lose power completely. Big storms were always dreaded. UPSs were in place but only lasted so long.
"Hosting the ordering site externally would leverage the provider’s power infrastructure, along with the usual associated benefits. However the strategy I devised at the time did not fully embrace the cloud as it was back in 2011. We more or less took a workload off an internal server and put it on an external one. To facilitate database communications we used commercial firewall and VPN software. In effect we tried to create a hybrid cloud, and when it worked it was great. The website was, and still is, highly dependent on the database.
"However the VPN connection kept dropping. The software would try reconnecting and succeeded most of the time, but other times it just couldn’t. As such, while the ordering system had a higher availability the reliability of the overall system had not improved, we’d simply swapped one problem for another".
The IT manager's takeaway was not to avoid the cloud, but rather that companies must take a holistic approach.
"Looking back at it, I think I was looking for a quick fix solution to the power problem. A long-term solution was developed when we reassessed the whole server infrastructure, and replaced everything with an IBM and VMware environment.
"Moving one workload into the cloud was a half measure, or less. Had we fully committed to the cloud concept, internal systems would have been re-architected to be interoperable with external ones. More workloads could have been moved. In all, it didn’t represent the maturity of cloud and hybrid solutions at the time, but rather the maturity of our understanding of them."
Warren Simondson, managing director of Brisbane-based reseller Ctrl-Alt-Del IT Consultancy, said he could think of plenty of examples where "clients have been very embarrassed by their move to the cloud, and often it has been a financial and logistical nightmare that has backfired on both the company and its bottom line".
In one particularly disastrous example (covered in greater depth here), the client nearly ground to a halt after moving its production environment into the public cloud. In many cases, Simondson blames reseller and vendor sales execs – "experts at spin [who] could easily cut and run after the sale, leaving the customer red faced, and looking for someone who could fix their situation, primarily with little to no budget after their cloud investment was exhausted".
Simondson cautions: "The cloud should never be designed for all-in investment. As any viable business owner knows, it is the merging of known working systems that operate together in synergy, giving a known element of success and reliability. Good IT professionals call this the hybrid cloud".
Do you have your own story of a company reversing out of cloud or redefining their public cloud strategy? Leave your comments below.