Everybody get virtual
While virtualising business applications may not seem a worthy goal in itself, a business owner is hardly likely to ignore the benefits if they are explained in simple business terms.
Concepts such as business continuity, disaster recovery and automated provisioning are understood by CIOs but are confusing jargon to small businesses, even though they share similar needs.
Framing these concepts as business solutions puts them in terms an SMB can understand. Would you like to reduce the chance of losing data through failing hard drives? Would you like faster, easier, multiple backups - or snapshots of your information throughout the day?
Would you like to reduce the effect of server crashes on critical applications such as email or CRM?
These propositions make sense to any business.
The take-up of virtualisation in Australia is roughly double that of the rest of the world, according to Gartner.
Peter Hedges, IBM's business development manager for System x servers, has seen anecdotal evidence of this trend in IBM's sales. As the man responsible for value solutions within IBM Australia and New Zealand, Hedges has contact with all IBM's sales reps.
"More and more we get these sorts of questions and solutions" about virtualisation, says Hedges.
"You can always tell [when a customer is rolling out virtualisation] because whenever there's a question about how a vendor licences their software that's a clear indication of what the customer wants to do."
Feature sets on entry-level servers have come to include enterprise-level capabilities in response to the virtualisation trend, says Hedges. This reflects the greater risk in losing a server. When a conventional server crashes, a company might lose a single application.
If a virtualised server goes offline, it could take out 10 or 20 applications with it.
"With virtualisation the impact of something going wrong is multiplied. Risk is the product not only of the probability of something going wrong but the impact of something going wrong," says Hedges. IBM and other vendors are designing servers to minimise that risk.
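Hedges' point can be sketched as a quick calculation. The failure probabilities and VM counts below are illustrative assumptions, not figures from the article - the article's claim is only that consolidation multiplies the impact term.

```python
# A minimal sketch of risk as probability times impact:
# the same failure rate hurts far more on a consolidated host.

def risk(failure_probability, apps_affected):
    """Expected application outages per year for one server."""
    return failure_probability * apps_affected

# One conventional server running a single application (assumed 5% annual failure rate).
conventional = risk(failure_probability=0.05, apps_affected=1)

# One virtualised server consolidating 20 applications at the same failure rate.
virtualised = risk(failure_probability=0.05, apps_affected=20)

print(conventional)  # 0.05
print(virtualised)   # 1.0 - twenty times the impact for the same failure rate
```
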
Newer servers have more features, higher-specced configurations and smarter management software - and higher margins.
Virtualisation requires more stable servers to handle the intensive workloads of running multiple virtual machines. Analysts quote the average usage of a traditional server processor at 18 percent; a virtualised server is closer to 80 percent.
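Those utilisation figures imply a rough consolidation ratio - how many lightly loaded physical servers' workloads fit on one virtualised host. This back-of-the-envelope sketch ignores memory and I/O limits and uses only the two percentages quoted above.

```python
# Consolidation ratio implied by the analysts' utilisation figures:
# traditional servers at 18% CPU, a virtualised host driven to 80%.

traditional_util = 0.18
virtualised_util = 0.80

consolidation_ratio = virtualised_util / traditional_util
print(round(consolidation_ratio, 1))  # 4.4 - roughly four old servers per host
```
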
Just as important is memory. The amount of memory plays a large role in determining how many virtual machines can be run on a server. It is important to know how many virtual machines your customers intend to run - and businesses tend to run more virtual machines than the physical servers they replaced, because VMs are so much easier and cheaper to deploy.
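A simple sizing sketch shows why memory capacity matters so much here. The per-VM allocation and hypervisor overhead below are assumed figures for illustration, not numbers from the article.

```python
# Back-of-the-envelope VM count for a host, assuming a flat memory
# allocation per VM and a fixed hypervisor overhead (both assumptions).

def max_vms(server_memory_gb, per_vm_gb=2, hypervisor_overhead_gb=4):
    """Whole VMs that fit in the memory left after hypervisor overhead."""
    usable = server_memory_gb - hypervisor_overhead_gb
    return max(usable // per_vm_gb, 0)

# A 32GB entry-level server versus a 16GB competitor, at 2GB per VM.
print(max_vms(32))  # 14
print(max_vms(16))  # 6
```

Doubling the memory ceiling more than doubles the usable VM count here, because the hypervisor overhead is paid once per host.
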
Hedges points out IBM's entry-level X3200 supports up to 32GB of memory, more than double some competitors' equivalent products. "To run the same number of virtual machines [on competitors' servers] you need twice the number of hardware," he says.
Most server crashes are never diagnosed, but statistical data points to memory errors as a likely culprit. In the 1980s, computer systems had a couple of megabytes of memory and used parity protection, which could detect if a single bit went wrong and stop the machine.
Now an entry-level server can hold 32GB and support several virtualised workloads. You really need to provide the highest level of memory protection, says Hedges. Memory limits increase with the number of sockets, and servers with more sockets are sold on average with higher amounts of memory.
This is reflected in the average spend per server, which rises sharply with socket count, largely because of the extra memory. The average single-socket server costs US$1,600; a dual-socket is more than double at US$4,500; and a four-socket more than triple that again at US$14,900, according to Gartner.
Crash me not
While virtualisation software vendors have developed methods for improving uptime, these should not be relied upon, says IBM's Hedges.
One high-availability feature on IBM's entry-level X3200 server is its use of RDIMM memory, which offers better error correction and therefore greater reliability than standard UDIMM memory.
"It means we have less opportunity for a data error to corrupt the system or the application in a virtualised world," says Hedges.
"Knowing the difference between memory based on RDIMM versus UDIMM can give an edge for a reseller, given that this is such a crowded market."
While slotting in the best memory for the job can lower risk, it comes at a cost, and prices vary with the size and type of memory. Cisco's memory extension lets an SMB combine low-cost DIMMs so they appear as a single larger unit. For example, four 2GB DIMMs can present as one 8GB DIMM, which would otherwise cost 20-30 percent more.
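The saving in that example is simple arithmetic. The DIMM prices below are hypothetical, chosen only to illustrate the 20-30 percent premium the article mentions.

```python
# Rough cost comparison: four low-cost 2GB DIMMs presented as one
# logical 8GB DIMM, versus buying a native 8GB DIMM at a premium.

price_2gb_dimm = 40.0            # assumed street price per 2GB DIMM
four_small = 4 * price_2gb_dimm  # cost of the combined 8GB of memory
one_8gb = four_small * 1.25      # a native 8GB DIMM at an assumed 25% premium

print(four_small)  # 160.0
print(one_8gb)     # 200.0 - the extension approach saves the premium
```
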
Another availability feature is predictive failure analysis. The server constantly monitors the health of major subsystems such as fans, processors and memory. IBM claims to monitor twice the number of subsystems as other vendors, which helps a server predict outages before they occur.
The "chip kill" feature isolates faulty chips in memory, so if a memory stick loses a whole chip the server continues operating without data loss or a crash.
IBM released a product towards the end of last year called VM Control, which provides a common management interface across PowerVM, Hyper-V and VMware. VM Control gives the customer the ability to create and manage virtual machines, says Andre Liem, who looks after IBM's RISC-based Power platform.
"Once a customer has confidence with virtualisation, the next step is to automate production of images. The third step is to optimise and build resource pools of processor and memory and storage. So all of a sudden you have a customer that is able to create their own mini-cloud," says Liem.
"At the smaller end there can be very, very little onboard technical skill within a client," says Hedges. "Virtualisation is a great way for [a] reseller to build up the services alongside the hardware."
Servers that host virtual machines need to be much more capable, and this is reflected in the sales figures, says Liem. While year-on-year unit numbers are tending to flatten off, server revenues continue to improve, which indicates each system is being sold in a richer configuration, he says.
Bigger, smarter servers can handle more complex applications - such as desktop virtualisation. This technology is becoming more attractive with the increase in hardware power, says Hedges.
"I think that 2010 is going to be a very interesting year" for desktop virtualisation, he says.
Working the cross-sell
Virtualising servers can lead to lucrative opportunities in other areas of the business, as the technology places higher demands on storage and networking devices. It also means resellers must change the sales pitch from a standard server upgrade.
Storage, networking ports and processing power all need enough headroom to avoid becoming a bottleneck. However, it is not enough to just upgrade capacity - customers want these devices to be smarter too.
"As customers start to deploy virtualisation more they see they have to virtualise networks, storage and IO. And in that sense the channels and the go-to-market and the direct sales have to change from selling products on price and performance to selling solutions," says Gartner's Rasit.
McBride says SMBs are choosing virtualisation for scalability, protection and agility. They also expect the same from their data in the backend, which leads to upgrades in storage too.
"You want a storage platform that's going to give you the same level of scalability and agility as you get on the front end with your server virtualisation," says McBride.
"You get failover, high availability, the ability to do planned move of servers during business hours. To get a lot of those features in virtualisation you do need networked storage in the backend."
Next: Buying patterns in small business