Riding the VMwave

By Ian Grayson

It started as a clever way to unhitch software applications from underlying hardware, but now virtualisation technology is poised to re-write the rules of how entire corporate data centres are designed and run.

Touted by supporters as the ‘next wave’ in the evolution of corporate computing, virtualisation is quickly becoming the biggest thing in town.

If predictions are correct, it will completely change the way organisations think about their IT resources.

Inside many companies, virtualisation technology is already being used to overcome one of the big challenges created by the last computing wave: server number blowouts.

Following the widespread adoption of client-server computing architectures in the 1980s and ’90s, IT managers have been using virtualisation to rein in the number of boxes they have to manage.

By running applications within virtual machines, it’s possible to host more than one on a single physical server.

As well as reducing the number of servers required, this allows hardware utilisation levels to be greatly improved and power costs reduced.
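
As a rough, back-of-envelope illustration of how that works (the fleet size, utilisation levels and power figures below are assumptions for the example, not numbers from the article), the consolidation arithmetic looks something like this:

import math

def consolidation_estimate(physical_servers, avg_utilisation,
                           target_utilisation, watts_per_server):
    """Estimate hosts needed after virtualising a fleet of
    under-utilised one-application-per-server machines."""
    # Total CPU demand, expressed in "fully busy server" units.
    total_demand = physical_servers * avg_utilisation
    # Hosts required if each virtualised host runs at the target level.
    hosts_needed = max(1, math.ceil(total_demand / target_utilisation))
    power_saved = (physical_servers - hosts_needed) * watts_per_server
    return hosts_needed, power_saved

hosts, watts_saved = consolidation_estimate(
    physical_servers=40,      # assumed fleet size
    avg_utilisation=0.10,     # assumed typical one-app-per-server load
    target_utilisation=0.60,  # assumed headroom kept on each virtualised host
    watts_per_server=400)     # assumed average draw per box

print(f"Hosts after consolidation: {hosts}")
print(f"Estimated power saved: {watts_saved} W")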

The technology is also being applied to storage resources where virtual pools of storage can be dynamically allocated to applications as they require it.

Storage hardware from different vendors, and even different locations, can be combined and managed as if it were a single, large array.

A third area of focus has been client-side virtualisation where user desktops are transferred to run on centralised servers.

This removes the complexity and expense of managing large numbers of client machines while ensuring users have access to their critical applications and data.

The future of the data centre

But now virtualisation technology is about to underpin yet another shift in the corporate IT landscape.

Just as it has changed the way IT managers think about servers and storage, now it’s going to do the same thing to entire corporate data centres.

Speaking at his company’s recent annual global customer conference, VMware chief executive Paul Maritz outlined his company’s vision for how virtualisation technology will make this happen. As the leading software vendor in the field with some 120,000 customers globally, VMware is well-positioned to know.

According to Maritz, data centre managers have been struggling with two seemingly conflicting forces.

On one hand there is a need to centralise IT resources, while on the other there is a need to provide rich end-user experiences on a multitude of different devices.

“What we are trying to do is look forward to what I see as a fundamental change that is rolling through the computer industry,” he told the 14,000-strong audience at the Las Vegas event. “It is all about balancing and synthesising a better experience out of these (centralising and de-centralising) forces.”

Maritz said that, rather than focusing on both data centre and client-side hardware in the traditional way, companies will increasingly be able to approach infrastructure from a service-delivery standpoint.

“We are moving fundamentally away from a device-centric world to one that is application, information and people centric,” he said.

“The challenge now is how do we take infrastructure and treat it as a common substrate on which we can build experiences that allow services to be provisioned for users in a much more flexible way.”

Backing strategy with products

While such a vision may seem ethereal to many IT managers and CIOs, VMware is backing it with a strategy and product set that will allow such a ‘common substrate’ to be built.

It’s also pulling together an industry ecosystem of technology and channel partners to help make it happen.

The company is grouping these initiatives together under the banner of the Virtual Data Centre Operating System (VDC-OS).

In essence, a VDC-OS allows an organisation to treat its entire IT infrastructure as an internal computing ‘cloud’, providing services to internal clients in the same way as an external hosting provider.

Once in place, this cloud, which comprises an enterprise’s entire processing, storage and networking resources, can be dynamically allocated to applications and users.

Maritz said the approach will allow companies to be much more responsive to changing commercial demands. Where traditionally certain areas have needed to be over-provisioned with IT resources to cope with occasional peaks in demand, now those resources can be allocated for short periods before being used elsewhere.
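
As a purely illustrative sketch of why pooling pays off (the per-application demand figures below are invented, and the assumption that peaks rarely coincide is a simplification), compare sizing every application for its own peak against sizing one shared pool:

# (steady demand, occasional peak demand) per application, in CPU cores.
apps = [(4, 16), (6, 20), (2, 12), (8, 24)]

# Traditional approach: every application gets hardware sized for its own peak.
dedicated_cores = sum(peak for _, peak in apps)

# Pooled approach: size the shared pool for aggregate steady demand plus
# the single largest burst, on the assumption that peaks do not coincide.
steady_cores = sum(base for base, _ in apps)
largest_burst = max(peak - base for base, peak in apps)
pooled_cores = steady_cores + largest_burst

print(f"Per-application over-provisioning: {dedicated_cores} cores")
print(f"Shared, dynamically allocated pool: {pooled_cores} cores")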

For some industry observers, this pooled approach to IT harks back to the days when mainframes provided the vast bulk of computing resources within most organisations.

In many ways, it’s a case of back to the future.

Kevin McIsaac, enterprise IT advisor at analyst firm IBRS, has dubbed the trend “mainframe 2.0” and said it is a logical progression from the more traditional one-application-per-server approach that is taken by most companies.

“Recent advances in virtualisation of commodity infrastructure, largely driven by VMware, now enable this mainframe architecture to be reinvented using commodity servers and commodity storage,” said McIsaac. “Rather than buying a monolithic (proprietary) server, the ‘mainframe 2.0’ is built from racks of low-cost x64 servers and modular FC/iSCSI storage arrays. The glue that binds these together into a mainframe architecture is virtualisation, which acts as the new data centre operating system unifying and managing all these resources.”

According to Maritz, the move to an internal computing cloud involves the establishment of a new layer of software that can tie together multiple racks of infrastructure.

Such a layer goes well beyond what is delivered by traditional operating systems.

“We can increasingly think of (a company’s) IT infrastructure as a single, giant computer,” he explained. “This will be a platform that goes way beyond VMware and will encompass everyone working in this area.”

Maritz believes the existing concept of an operating system will evolve into something altogether new and more sophisticated.

“We will see the traditional operating system deconstructed and made more customised and relevant to particular applications,” he said. “We don’t know how applications will be written in five years’ time, but on the whole people are no longer writing traditional Windows applications.”

He points to frameworks such as Ruby on Rails, where the traditional operating system has all but disappeared. He believes the same thing will happen in corporate computing, both at the server level and across the data centre.

Channel opportunities

For those in the channel, the increasing pace of virtualisation developments and the way in which they seem likely to reshape corporate data centres offers considerable opportunities.

Those partners who understand the long-term ramifications are likely to be in a strong position to profit from the changes.

VMware’s Australian managing director, Paul Harapin, said his company has around 800 channel partners in Australia and New Zealand and he can see this number growing slightly over time.

He said VMware customers don’t just buy virtualisation software as a stand-alone product. Rather, it is sold as a package along with servers, storage, networking and often consulting services.

“We look to our partners to provide a total package to customers,” he explained. “As the technology becomes more complex and sophisticated customers will be requiring more support and guidance. Our partners are well-positioned to provide this.”

The future is in the clouds

Once companies have evolved their existing IT infrastructures into internal computing clouds, VMware believes the next logical step will be to link them with clouds operated by third-party hosting providers.

If architected correctly, the result will be seamless links between internal and external resources.

VMware’s Maritz said such an approach can deliver big benefits to organisations by giving them access to “on-demand” computing resources.

“We need more freedom about where you pull compute resources from,” he said.

“It’s not about doing it all internally or all externally.”

Under this vision a company would be able, for example, to call on extra processing or storage resources when they are needed to meet periods of peak demand.

Applications running in the internal computing cloud could be shifted automatically to externally hosted servers better able to deal with changing workloads.
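
In very simplified terms, such a ‘burst’ policy might look like the sketch below. The thresholds, the VM records and the two migration callbacks are hypothetical; in practice this logic would sit inside the virtualisation platform’s own management tooling rather than in customer-written code.

BURST_THRESHOLD = 0.85   # assumed: push work out above 85% utilisation
RETURN_THRESHOLD = 0.60  # assumed: pull work back below 60% utilisation

def rebalance(cluster_utilisation, vms, migrate_to_external, migrate_back):
    """Shift the least critical VMs to an external cloud during peaks,
    and bring externally hosted VMs back once load subsides."""
    if cluster_utilisation > BURST_THRESHOLD:
        # Candidates are internally hosted VMs; a higher priority number
        # means more critical here, so sort ascending to move the least
        # critical workload first.
        candidates = sorted((vm for vm in vms if not vm["external"]),
                            key=lambda vm: vm["priority"])
        if candidates:
            migrate_to_external(candidates[0])
    elif cluster_utilisation < RETURN_THRESHOLD:
        external = [vm for vm in vms if vm["external"]]
        if external:
            migrate_back(external[0])

# Example use with dummy data and print-only migration callbacks.
fleet = [{"name": "payroll", "priority": 9, "external": False},
         {"name": "batch-reporting", "priority": 2, "external": False}]
rebalance(0.90, fleet,
          migrate_to_external=lambda vm: print("burst out:", vm["name"]),
          migrate_back=lambda vm: print("bring back:", vm["name"]))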

Such external services could be provided by telecommunications companies and hosting providers, and many are already formulating business plans and commercial offerings in this space.

Maritz said that, for such a system to work, external clouds need to be interoperable and therefore adhere to a set of standards and technical guidelines.

With this in mind, VMware has established the vCloud initiative and has already signed up more than 100 participating companies.

Early members include large-scale telco companies such as Verizon and British Telecom.

“The aim is to be able to outsource application loads, in total or in part, and then be able to change your mind over time,” he said.

As well as peak-demand processing and storage resources, external cloud operators are likely to offer other services such as back-up and disaster recovery.

Rather than building and managing an off-site DR facility of its own, a company could simply rent space inside an external cloud.

If required, applications running on virtual machines inside the company could then be moved to physical servers at the hosting provider.

In Australia, such an approach spells big opportunities for hosting providers.

VMware’s Harapin said he expects the market to grow quickly as customer companies come to understand the benefits that utilising external resources on demand can provide.

“It’s an exciting time,” he said.

“Virtualisation has already resulted in big changes in the corporate computing space and this takes it even further.”