Get ready for virtualisation
By Staff Writers on Oct 29, 2007
During the ’80s and ’90s, if an IT manager needed to deploy a new application, the accepted, even necessary, approach was to go out and buy a new server to run it on. Like a quarter-acre house block, a new server gave the kids plenty of RAM to play in, and the isolation from the neighbours gave a sense of privacy and security. Like property developers creating new suburbs, resellers became profitable selling high-volume, low-cost industry standard Intel x86 servers, in a supply chain that became more about logistics and efficient deployment than about building efficient housing.
The result was server sprawl. And to a large extent this approach to computing continues to this day.
But as in the real estate industry, available land comes at a cost. The luxury of the big house on the big block is a relatively inefficient way to provide people with a roof over their heads. In the same way, most servers are under-utilised, running at around 10 to 15 per cent of their capacity. They are deployed more to isolate individual processes from interference than to provide sufficient computing power to do their work.
Like high-rise apartments, virtualisation offers a way to stack more people into a smaller space and make better use of the ground it occupies. High-density living reduces the cost of amenities through the efficient use of shared resources, and, like the swimming pool and gym in the basement, residents get access to better services and features than they could afford out in the ’burbs. But the shift to virtualisation is likely to happen much faster than the move to high-density residential living.
Virtualisation technology is not so much new as recently gaining momentum. What is new are the emerging business needs that make virtualisation a good solution, and a few product developments that will actually make it unavoidable. The changes are going to please some vendors and disadvantage others; how resellers fare will depend on how well they have migrated their business from product margin to services.
Virtualisation is white hot. In August, EMC sold off just 10 per cent of VMware for a healthy US$957 million. The stock rose 76 per cent on its first day of trading, pushing the company’s valuation to nearly US$20 billion. Networking giant Cisco Systems invested US$150 million for a 1.6 per cent stake in the business and Intel bought 2.5 per cent for US$218.5 million. It was a banner day for EMC, which had bought the company for around US$600 million just three years before.
The exciting thing about virtualisation is that it addresses so many of the issues IT departments face today. By increasing utilisation and improving management it promotes server consolidation, resulting in significant data centre savings compounded by savings in airconditioning, rack space, staffing levels and compliance costs. Rapid application deployment, dynamic load balancing and streamlined disaster recovery also top the list of benefits.
The total number of virtual machines deployed worldwide is expected to increase from 540,000 at the end of 2006 to more than four million by 2009, according to Gartner vice president Thomas Bittman, but this is still only a fraction of the potential market. “Virtualisation on the client is perhaps two years behind, but it is going to be much bigger. On the PC, it is about isolation and creating a managed environment that the user can’t touch. This will help change the paradigm of desktop computer management in organisations.”
VMware Australia’s managing director, Paul Harapin, confirms that “virtualisation technology adoption in Australia has followed global trends, with many organisations, both large and small, adopting the technology. One of the major reasons VMware is so ‘hot’ is that its products provide enterprise-class virtual machines that increase server and other resource utilisation, improve performance, increase security and minimise system downtime, reducing the cost and complexity of delivering enterprise services.”
VMware claims that already 60 to 70 per cent of its customers come from the SMB market (in US terms).
Australia’s idea of small business may not benefit quite so much as our mid-market customers stand to; some analysts put the sweet spot for virtualisation at about 15 servers. This is likely to change, though, as we are on the eve of a significant evolution in the technology, with server builders planning, in conjunction with new chip technology from AMD and Intel, to deliver virtualisation-ready servers by the end of this year.
Increasingly, explains Paul Kangro, APAC solutions manager for Novell, virtualisation is not going to be optional as it will be the only way to make sense of the computing power being built into industry standard servers. Kangro points to the trend toward massively multi-core CPUs being developed by Intel and AMD. Just as in the old mainframe days where virtualisation had its roots, “if you have an expensive bit of kit in the server room, you are going to want to make the most of it,” he said.
As 80- and even 256-core chips become the norm, the concept of using such a server to run today’s applications, which typically under-utilise even today’s standard servers, will force a change in approach, he argued. “Intel has set out to alter the economics of computing,” he said. “In the commoditised sector of the market, they are turning the server into a mini-mainframe, but you will only get this sort of performance at a cost, so you will have to make sure it is fully utilised.”
Coupled with this revolution in on-chip computing power, Intel and AMD have taken other steps to ease the way for virtualisation on industry standard servers. Their hardware extensions, codenamed Vanderpool (Intel) and Pacifica (AMD), rework the CPU’s privilege levels so that a hypervisor can run virtual machines natively, without the software workarounds previously required to virtualise a standard x86 CPU. Depending on which model of virtualisation you adopt, this either increases performance by eliminating the binary translation process, or makes it possible to run standard, unaltered operating systems in a virtual environment.
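Whether a given box exposes these extensions shows up as a CPU feature flag: vmx for Intel’s technology, svm for AMD’s. As a minimal illustrative sketch (Linux-only, reading /proc/cpuinfo; not part of any vendor’s tooling), a quick check in Python might look like this:

```python
# Minimal sketch (Linux-only): check /proc/cpuinfo for the CPU feature flags
# that advertise hardware-assisted virtualisation. "vmx" is Intel VT
# (Vanderpool); "svm" is AMD-V (Pacifica).

def hw_virt_support():
    flags = set()
    try:
        with open("/proc/cpuinfo") as cpuinfo:
            for line in cpuinfo:
                if line.startswith("flags"):
                    flags.update(line.split(":", 1)[1].split())
    except IOError:
        return None  # no /proc/cpuinfo: not Linux, so this check can't run
    if "vmx" in flags:
        return "Intel VT (vmx)"
    if "svm" in flags:
        return "AMD-V (svm)"
    return None

if __name__ == "__main__":
    print(hw_virt_support() or "No hardware virtualisation extensions found")
```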
In the next phase toward virtualisation as a standard, two virtualisation industry leaders have announced that embedded hypervisors will become available before the end of the year. “VMware’s VI3i announcement in September launched embedded hypervisors into the servers from Dell, HP, IBM and NEC, making it easier for customers to implement and extend virtualisation, delivering a more ubiquitous virtualised computing environment,” explained Harapin.
ESX Server 3i and XenExpress OEM Edition from Citrix-owned XenSource are possibly the most significant developments of all, as they mean that before long every server that ships will be virtualisation-ready right out of the box. These next-generation thin hypervisors will be embedded on the motherboard or stored as firmware in flash memory, making for rapid deployment of virtualised environments that boot directly into a fully functioning hypervisor.
“By leveraging existing technology, VMware enables the roll out of new applications with less risk and lower platform costs and in much shorter time frames. In addition, organisations are able to leverage their virtualisation right across their IT infrastructure, from the desktop to the data centre, by virtualising their computing, storage and networking systems. And virtualisation makes key IT strategies such as disaster recovery affordable to many organisations which would not have pursued them in the past due to cost constraints,” said Harapin.
But this is where some manufacturers may not fare so well from the new consolidation push. Fewer servers mean fewer racks, less airconditioning, fewer fans and even fewer motherboards, and if virtualisation comes right out of the box, system builders are unlikely to bundle a server operating system with it. It makes far more sense for the decision of which operating system to run to be made independently, rather than as a standard, non-optional extra that comes OEMed with the hardware. This could lead more companies to select Linux or Solaris as an option.
Extend this thinking to the desktop and Microsoft could lose out big time. Some analysts claim Microsoft has been caught napping again with virtualisation. Unlike the quick catch-up it managed to pull off with Internet Explorer, the release of Microsoft’s first hypervisor is not slated for delivery until, at best, mid-way through next year, giving VMware and XenSource a long head start.
Microsoft’s Windows Server virtualisation solution, Viridian, is scheduled for release in its first beta form six months after the launch of Windows Server 2008, which itself is not expected until sometime in the first quarter of next year. Analysts warn that to have any hope of catching VMware by that stage, Microsoft will have to get Viridian right the first time. But the company has already started pulling features out of the initial launch roadmap in order to achieve that distant launch date, giving alternative players, whether open source or not, a great deal of time to build momentum in the mid-market.
In the interim, Microsoft does provide virtualisation software which allows virtual machines to run as guests on top of its existing desktop and server operating systems, but the performance and popularity of bare metal hypervisors seems to be making this offering somewhat redundant.
If organisations can save on hardware and other infrastructure costs, and deploy features such as workload balancing, high availability and disaster recovery more cost-efficiently, there has to be a downside. Right? Wrong. “Where’s the bad part?” asked Novell’s Kangro. “There is no bad part.” At least, there are no downsides significant enough to put most users off. You have the choice of using proprietary solutions or open source software to control your costs, and you can even use existing hardware to achieve instant benefits and as a migration path to future deployments on new hardware.
The key advantage of embedded hypervisor technology is that it does not rely on an OS, removing the last layer of abstraction and making things such as diskless virtual server platforms a commodity roll-out. Team that with Gigabit Ethernet and iSCSI and you have a pretty cost-effective virtualised environment.
The one piece of the puzzle perhaps not completely filled in as yet is management of these new infrastructures, but that gap is closing fast. VMware’s recently launched VI 3.5 “delivers new levels of virtualisation automation and always-on technology. Features include automated patching across the virtual environment, greater virtual lifecycle management with Dunes technology, and greater levels of disaster recovery automation,” said Harapin.
Even in the Xen open source world there are management tools to suit. Kangro described how, with Novell’s ZENworks Orchestrator, you can automate and schedule jobs and loads, so that if a task such as the close of the accounting books happens at 5.15pm every day, you can automatically bring up a virtual machine to run the job and then put it back to sleep once the task is completed. Used in another way, Orchestrator would allow you to pre-emptively migrate a virtual machine running the company website from a small server to a larger-scale machine because you know it always gets a traffic spike at lunchtime, explained Kangro.
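Orchestrator itself is a commercial product, but the underlying pattern (boot a virtual machine for a scheduled job, then put it back to sleep) can be sketched with the open-source libvirt Python bindings. The connection URI, domain name and timing below are hypothetical placeholders; this illustrates the idea rather than Novell’s implementation:

```python
# Minimal sketch of the "wake a VM for a scheduled job" pattern using the
# open-source libvirt Python bindings. The domain name, URI and batch
# window are hypothetical, and this is not Novell's implementation.
import time
import libvirt

VM_NAME = "accounting-batch"  # a domain already defined on the host

def run_batch_window(run_minutes=30):
    conn = libvirt.open("xen:///")      # connect to the local Xen host
    try:
        dom = conn.lookupByName(VM_NAME)
        if not dom.isActive():
            dom.create()                # boot the VM for the 5.15pm job
        time.sleep(run_minutes * 60)    # let the batch job run
        dom.shutdown()                  # gracefully power the guest down
    finally:
        conn.close()

if __name__ == "__main__":
    run_batch_window()
```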
Having the tools in place to manage these virtualised environments as a single entity is the next big challenge for the industry, said Laurie Wong, business manager, software products, for Sun Microsystems.
Establishing your applications in virtual machines is a relatively easy task compared to maximising your ROI in an environment where heterogeneous operating systems and applications are operating in separate containers.
Wong said scale is one of the issues determining the management challenge, pointing to massively virtualised environments such as MySpace, which is buying servers at the rate of one an hour, or eBay, with 130,000 servers currently virtualised. “It gives them better utilisation and efficiency and management of the resource, but you have to have the tools to make the most out of these environments,” said Wong.
Sun has announced its entry into the virtualisation space with plans to launch an open source virtualisation management tool this December, which will be able to run on any operating system. Sun’s virtualisation platform, a combination of a hypervisor (xVM) and a management platform (xVM Ops Center), will provide what company officials have termed a “turnkey” virtualisation environment. The xVM platform marries Xen’s open source hypervisor with a cut-down version of Solaris, while the Ops Center will handle things such as asset discovery and inventory, checking and provisioning firmware, managing hypervisors, provisioning applications, automating software updates, and compliance reporting.
The initial migration from physical to virtualised environments is also getting easier. Where such implementations have required advanced skills in the past, VMware’s recent promise of Guided Consolidation goes a long way towards reducing the challenge of virtualising existing environments. The step-by-step wizard identifies physical servers for consolidation, converts them to virtual machines and intelligently places them onto the most suitable VMware ESX Server or VMware Server host.
VMware’s recent announcement also included Update Manager, which allows administrators to track patch levels and apply current security patches and bug fixes across their environment via an automated update and remediation process that allows immediate roll-back if the update doesn’t go smoothly. The system works even with virtual machines that are powered off or suspended.
Other new features in the recent announcement include Storage VMotion. Where VMotion allows you to move active virtual machines from one physical server to another with no impact on end-users, Storage VMotion does the same for virtual machine disks, moving them from one data storage system to another. Also, although it is billed as an “experimental feature”, VMware Distributed Power Management reduces power consumption by automatically shutting down servers not currently required to meet service levels. If server load increases, the system automatically powers on servers to meet the demand.
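Stripped to its essentials, this kind of power management is threshold logic: consolidate workloads and switch hosts off when aggregate load is low, and wake them again when it climbs. The thresholds and data structures below are invented for illustration and do not represent VMware’s actual algorithm:

```python
# Illustrative sketch only: the threshold logic behind a feature like
# Distributed Power Management, not VMware's actual algorithm. Hosts
# and load figures are hypothetical; toggling "on" stands in for real
# VM migration and power operations.

POWER_OFF_BELOW = 0.30   # consolidate when average load drops under 30%
POWER_ON_ABOVE = 0.80    # wake a standby host when load exceeds 80%

def rebalance(hosts):
    """hosts: list of dicts like {"name": "esx1", "on": True, "load": 0.55}"""
    active = [h for h in hosts if h["on"]]
    if not active:
        return
    avg_load = sum(h["load"] for h in active) / len(active)
    if avg_load < POWER_OFF_BELOW and len(active) > 1:
        # migrate VMs off the least-loaded host, then power it down
        victim = min(active, key=lambda h: h["load"])
        victim["on"] = False
    elif avg_load > POWER_ON_ABOVE:
        # bring a standby host back online to meet demand
        standby = next((h for h in hosts if not h["on"]), None)
        if standby:
            standby["on"] = True
```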
In the desktop space, while some large and mid-sized organisations have made enthusiastic migrations to thin client, virtualised client environments are likely to gain traction quickly, as Gartner’s Bittman points out above. There are significant differences in approach between the two, even though they may appear the same on the surface.
In the traditional Citrix Presentation Server environment, server resources are essentially time-sliced, with each user getting a share of the available resources. To make these systems work, applications basically have to be ported to this model, because the operating system is behaving like a multi-user OS. That is different from a virtual machine environment, in which each VM runs a standard operating system, with the hypervisor taking care of all the resource allocation and sharing.
This makes the virtual machine approach to thin client significantly less complicated and easier to deploy. In fact, at a Citrix/Wyse Technology thin client roadshow recently touring the country, the two demonstrated how, using a VMware converter, it is possible to virtualise a user’s PC in under 10 minutes. The benefits are there for the taking. Ward Nash, regional sales manager for Wyse Technology, told CRN that while last year’s VMware conference in Sydney was all about the server, this year it was all about virtualising the desktop.
Once a desktop is virtualised, you can use the same sorts of management tools described above to maintain availability, deploy patches and application updates, do load balancing and provide a greater level of security and policy control over the desktops. A virtual desktop can use either thin client hardware on the desktop or a standard PC as an interim measure: both use the Remote Desktop Protocol (RDP) to transfer video and keystrokes to and from the server, as with traditional thin client approaches. It’s at the server end that things change.
Nash argued that using a PC in a virtualised environment detracts from many of the advantages of thin client computing, but acknowledged that it is only through recent advances in Wyse’s RDP extensions and thin client hardware that some of the disadvantages of not having a fully fledged personal computer on each desk have been resolved.
Wyse has added extensions to RDP that support features such as the dual monitor setups so important in environments such as finance, explained Nash. Multimedia redirection is another, providing better support for video on the desktop. Where a thin client system is typically optimised only for screen refreshes, video requires a more streaming-oriented approach, so the latest Wyse technology uses a virtual channel across RDP and a local codec: video is transferred in compressed form and decoded at the desk.
USB virtualisation was also added to support devices such as cameras, iPods and BlackBerries, and next year the company will start doing VoIP via a softphone on its client hardware, eliminating the need for separate IP phones, which can cost nearly as much as the $450 thin client hardware.
At Sun Microsystems, another long-time proponent of thin client, Wong acknowledged that the idea’s time has perhaps finally arrived. “We had a view and everybody is now starting to subscribe to that view, because a number of things have changed in the world. Look at all the forces that have come together: the escalating cost of the desktop; the greater need for security and policy enforcement; the rise of new development tools and Web 2.0, where the browser is the interface to the transaction world. In this environment,” he said, “you only need a browser, you don’t need a fat client.”