Virtualisation has seen such dizzying change and rapid growth that it makes sense to take a step back and look at its history. If you want a hint of where virtualisation may be heading, you need to look closely at its evolution.
That evolution began with the desire to use VMware capabilities such as vMotion, high availability and fault tolerance, which drove government and corporate enterprises to rapidly purchase storage area networks (SANs).
This movement toward shared storage was a boon for the reseller channel – especially for those organisations specialising in virtualisation. Value-added resellers (VARs) enjoyed the high revenues, margin dollars and services associated with selling SANs to facilitate virtualised data centres.
But it was a boon with a sell-by date. Around the same time that VMware was getting started in the late 1990s, Google debuted. Yahoo was the incumbent search market leader and, like AltaVista, eBay and the other large internet firms of the period, used proprietary storage arrays for the bulk of its business.
But Google anticipated billions of users searching trillions of objects. It knew the shared storage model simply could not scale to handle that volume, to say nothing of the expense and complexity it entailed.
Google recognised that a SAN utilises the same basic Intel components as a server. Rather than placing storage into a proprietary and expensive SAN or NAS, the company aggregated the local drives of custom-built simple servers.
The company hired a handful of scientists from prestigious universities to build the Google File System (GFS) to achieve massive parallel computing with exceptional fault tolerance. Google also invented MapReduce and NoSQL technologies to enable linear scalability without performance degradation.
This model eliminated network traffic between the compute and storage tiers and was much simpler to manage.
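To make the map/reduce idea concrete, here is a minimal word-count sketch in Python: each map task processes the data chunk a node already holds on its local drives, and only the small partial results are merged in the reduce step. The function names and the three-chunk dataset are illustrative assumptions, not Google's actual GFS or MapReduce code.

from collections import Counter
from concurrent.futures import ProcessPoolExecutor
from functools import reduce

# Chunks as they might sit on the local drives of three commodity servers
# (illustrative data only).
local_chunks = [
    "the quick brown fox",
    "jumps over the lazy dog",
    "the dog barks",
]

def map_chunk(chunk: str) -> Counter:
    """Map step: count words in one locally stored chunk."""
    return Counter(chunk.split())

def reduce_counts(a: Counter, b: Counter) -> Counter:
    """Reduce step: merge partial counts from the map tasks."""
    return a + b

if __name__ == "__main__":
    # Map tasks run in parallel, one per chunk; adding chunks (and nodes)
    # scales the work out instead of funnelling all data through shared storage.
    with ProcessPoolExecutor() as pool:
        partial_counts = list(pool.map(map_chunk, local_chunks))
    total = reduce(reduce_counts, partial_counts, Counter())
    print(total.most_common(3))

The point of the sketch is the division of labour: the heavy lifting happens where the data lives, and only the compact intermediate results cross the network.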
Google’s converged infrastructure gave it far lower storage and administrative costs than its competitors. Its SAN-less scale-out architecture helped it quickly become the dominant search engine.
Recognising that a virtualised data centre required a very different type of compute platform than traditional servers, Cisco funded VMware co-founder Ed Bugnion in a separate venture that Cisco later acquired. Bugnion and a team of scientists spent five years building an entirely new compute platform called UCS, which debuted in March 2009.
A key UCS innovation was a GUI enabling the different data centre teams to collaborate more effectively. The storage team could see, for example, that the server team was about to over-provision storage and take a LUN offline.
As virtualisation proliferated, UCS quickly became one of the best-selling blade systems worldwide, despite Cisco’s complete lack of experience in servers. Ironically, although virtualisation led to the proliferation of SANs, they were never built with virtualisation in mind.
SANs were meant for a one-to-one relationship between LUN and physical server rather than for many different workloads on a single LUN.
As organisations virtualised more workloads, the drawbacks of shared storage became more pronounced. EMC, Cisco, VMware and Intel came up with a brilliant marketing concept to address these challenges, forming a joint venture called VCE. VCE provides a single product consisting of Cisco UCS, EMC storage and VMware software.
The idea is to mitigate the rack-and-stack and configuration challenges of separate compute and storage tiers by pre-packaging the components. VCE touts much faster implementation of a virtual infrastructure hosting environment, as well as a single support number for storage, compute and virtualisation issues.
This concept of converged infrastructure was readily embraced by organisations around the globe, and VCE is on track to do a billion dollars in revenue this year. But success breeds competition, and NetApp quickly came on the scene. Hitachi later followed with its Unified Compute Platform (UCP), which also uses UCS, Hitachi storage and vSphere.
While the reseller channel enjoys the increased revenue opportunities that converged storage solutions enable, many resellers have also been disappointed by the promised outcomes.
Despite the “converged infrastructure” label, these solutions still rely on separate tiers of compute and storage that happen to be conveniently packaged together in a chassis, sometimes with additional software.
The storage tier still requires a storage administrator and separate management, and network traffic still flows between the tiers. And while implementation is generally faster than purchasing compute and storage separately, it often comes at a higher cost and can entail other issues, such as long delays in getting necessary patches approved.
New players such as Nutanix (my company), SimpliVity and Pivot3 have introduced converged infrastructure in line with the Google model: truly consolidating the compute and storage tiers, but leveraging the hypervisor to virtualise the storage controllers rather than relying on custom-built servers.
This approach is less expensive to purchase, far faster to implement and eliminates the requirement for storage administration.
Steve Kaplan is the VP of channel and strategic sales at Nutanix