Who isn’t tired of the word ‘cloud’? Not only is the noun repeated ad nauseam, but the concept itself is widely derided.
“A lot of sceptics of cloud say it is ‘glorified hosting’,” says Andrew Sjoquist, founder and chief executive of ASE IT.
But Sjoquist’s Sydney-based company is just one local provider seeking to get an early advantage with an emerging technology that offers something truly unique, something traditional hosting could never offer: containers.
While virtual machines, running atop hypervisors, have ruled computing for the past decade, virtualisation at the operating system level – containers – has hit the headlines more recently. Advocates say containers promise the next level of efficiency, density and portability.
Containers – also known as jails – are self-contained slices of the operating system, each dedicated to a particular application. This isolation allows many applications to run on the same machine without clashing with each other.
Operating system-level virtualisation was born as the ‘chroot’ command in Unix, although modern container technologies from third-party vendors are now packed with so many bells and whistles that they’re almost unrecognisable as descendants of that original Unix functionality.
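The lineage is easy to see in code. Below is a minimal sketch of chroot-style isolation using Python’s standard library – an illustration of the ancestral idea, not any vendor’s product. The jail directory path is hypothetical, it must already contain the binaries and libraries the command needs, and the chroot call itself requires root privileges:

```python
import os

def run_jailed(new_root, command):
    """Run `command` with its filesystem root confined to `new_root`.

    This is the ancestor of modern containers: after os.chroot(), the
    child process can only see files under `new_root`. Requires root,
    and `new_root` (e.g. a hypothetical /srv/jail) must already hold
    the binaries and libraries the command needs.
    """
    pid = os.fork()
    if pid == 0:                      # child process
        os.chroot(new_root)           # confine the filesystem view
        os.chdir("/")                 # '/' now means new_root
        os.execvp(command[0], command)
    _, status = os.waitpid(pid, 0)    # parent waits for the jailed child
    return os.waitstatus_to_exitcode(status)
```

Modern container runtimes layer kernel namespaces, resource controls and image formats on top of this basic filesystem trick, which is why they look so different from plain chroot.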
Thanks to this history, modern containerisation is strongly associated with delivering web applications through Linux, although other operating systems are trying to catch up (see "Containerisation on Windows" below).
By provisioning only the OS resources the application needs, and leaving out those it doesn’t, containers carry far less overhead than whole virtual machines running on the same hardware. More things can run on the same underlying kit.
This efficiency appealed to ASE IT. In December, the company launched a containerisation offering in conjunction with vendor Volt Grid.
“We’ve always been focused on emerging trends and technologies, particularly around efficiencies,” says Sjoquist.
“The efficiency gain we’ve had from virtualisation of hardware has been great. But now we hear from customers that even their VMs are getting under-utilised. There is a lot of VM sprawl that happens out there. It’s so easy to get a VM up and running these days.”
With containers, you don’t need to spend precious resources getting an entire operating system up and running “from the ground up”, says Sjoquist. “It’s all about getting better bang for their buck on their cloud infrastructure.”
The ability to provide customers more for less should grab the attention of any IT solution provider looking for that business edge.
VMs vs containerisation
There is a certain irony that the ease of provisioning and using virtual machines has contributed to their own potential demise. If it weren’t for the sprawl, the extra density that containerisation provides might well be less tempting in the real world. “It’s collapsed under its own success,” jokes Sjoquist.
Unsurprisingly, VMware disagrees. The company, which has dominated the hypervisor industry, says you still need machines somewhere, and VMs are still the best way to host applications – even when they’re in containers.
“The virtual machine has been, and continues to be, the best place for an operating system to live. Where you see containerisation come into play is on the top of that,” says Aaron Steppat, senior product marketing manager, software-defined data centre, at VMware Australia & New Zealand.
“Because the container doesn’t actually hold an operating system. It contains the necessary libraries and code – the application stack itself – and it relies upon the libraries that are provided through a Linux operating system.”
Steppat says it’s not a case of choosing between virtual machines and containerisation. “It’s not an ‘either/or’; it’s an ‘and’.”
Sjoquist says that with the advent of OS-level virtualisation, hypervisors can now be seen as clunky: “If you have this heavy, fat hypervisor strategy that doesn’t allow you to be agile, then you may well consider going back to bare metal.”
But he warns that one needs economies of scale before doing anything too dramatic. “Until you get to a significant scale, there’s still a lot to be said about having a hypervisor layer in there.”
As a major commercial player in the Linux world, open source vendor Red Hat is watching the containerisation trend with glee. Red Hat Australia’s platform business unit senior manager, Colin McCabe, tells CRN that once an organisation decides to switch to containers, it will consider the hypervisor layer redundant – but any transition will be gradual.
“The organisation will start asking, ‘Why would we spend money on yet another layer? Why would you be running containers on an OS that lives on a hypervisor?’ There will be some that retain virtual machines, but the majority will say, ‘Let’s dump everything that we’ve already got [into containers].’ But it will be a stepped approach – some apps will end up over here and some apps will end up over there.”
Docker ain’t a container
The need for portability and agility also favours containerisation. Without the overhead of an entire operating system, containers can be moved and copied to whatever hosts the administrator desires. Once a container sits on the right host, start-up takes only a few seconds: the application is running almost instantly, with no server boot to wait for.
Such agility is further enhanced with container management tools like Docker. The open source software is the hottest name in technology at the moment, with a slew of industry giants – including Amazon Web Services and Microsoft – coming out in support, as well as many of the specialist containerisation vendors.
However, some mistakenly believe that Docker itself provides the containers.
“Docker automates the deployment of applications into containers,” says Neil Morarji, ANZ general manager for containerisation vendor Parallels. “It manages the taking of applications from one environment to another, through the development cycle, and takes care of multiple people working on the applications.”
Parallels is the manufacturer of containerisation technology Virtuozzo. Its Asia-Pacific sales engineer, Alexei Anisimov, explains that Docker is an excellent complement to OS-level virtualisation. “Docker really makes it easier to create applications that are specifically designed for container technology. It makes it easy to deploy and run them. That’s why they’re an ideal partner for us.”
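In practice, the division of labour Anisimov describes shows up in the Dockerfile, Docker’s recipe format for packaging an application into a container image. A minimal, hypothetical example for a Python web app (the file names and base image tag here are illustrative, not drawn from any vendor quoted in this article):

```dockerfile
# Start from a slim base image rather than a full OS install
FROM python:3-slim

# Copy in only the application and its declared dependencies
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY app.py .

# The container runs exactly one application process
CMD ["python", "app.py"]
```

Building this image and running it on another host is a matter of shipping the image – not migrating a whole virtual machine – which is where the portability gains come from.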
Leaky containers make a mess
Although containers are meant to be independent and isolated from each other and the host system, Red Hat’s McCabe says security is not yet foolproof enough for mass adoption in the business world.
“This is likely because, as a new technology, the full scope of potential security issues around Linux containers is still being uncovered. Containers tend to operate under a traditional security model and have core features in place that will provide a certain level of protection, but these don’t always provide the complete isolation of applications – simply put, ‘containers don’t always contain’,” says McCabe.
“This means that improperly implemented or even malicious containers can cause significant damage, just like any other poorly-coded or malware-harbouring application. Another level of separation is required to fully secure containers and their environment.”
McCabe puts forward three developments that he hopes will enhance security and take containerisation to the mainstream. First is knowing what’s inside a container: “When implementing containers, establishing trust is critical… Companies need to be sure that their containers’ contents will not introduce malicious or vulnerable code into production environments and that affected containers are identified quickly and replaced to maintain high security levels.”
The second initiative is to implement management tools. “Companies must have management tools in place to track containers across all platforms and quickly respond to threats and patching or replacement issues. Containerised applications that can be replaced with minimum effort at a large scale contribute to this secure framework.”
The third plan of attack is to use reliable “advisers”.
“IT organisations need to verify a container’s source, track the container when it is being deployed across different platforms and make sure the container receives the support and updates required throughout its lifecycle,” says McCabe. “Reliable advisers will be able to provide this ‘chain of trust’, from the container creation, throughout delivery, until the end of the lifecycle. These advisers can provide both the technology and the ecosystem that supports containerisation and that makes containers enterprise-consumable.”
A natural evolution
Despite the security issues to be ironed out, McCabe says that containers are “a natural evolution”.
“It’s about time people who are spending hundreds of thousands, if not millions, of dollars on a hypervisor have a good think about putting that money towards developing better business outcomes rather than keeping the lights on.”
ASE’s Sjoquist agrees: “The scalability that containerisation gives customers is something that [traditional] hosting can’t provide.”
Perhaps with containers, we can all combat cloud fatigue.
Containerisation on Windows
Linux dominates the world of operating system-level virtualisation. The concept of containers originates from Unix, and web applications, ideal for containerisation, are more likely to be run in Unix-like environments.
Despite this uphill battle against technical history, Microsoft has made some noise in recent times to kick-start containerisation in Windows.
Parallels’ Virtuozzo containers have supported Windows for “some time”, says Asia-Pacific sales engineer Alexei Anisimov, while conceding that “in the very beginning, [the Windows version] was almost like an experiment”.
“But it has now matured into a product that people use in production. It works with the same principle – one Windows server that can be sliced into multiple containers,” Anisimov told CRN. “I have to admit that it’s much less common. Linux containerisation is much more mature… Windows is still some way behind.”
In February 2015, US vendor DH2i launched its DxEnterprise product, spruiking it as providing both Windows containerisation and the management capabilities equivalent to Docker.
“With DxEnterprise, customers can containerise and make any new or existing Windows Server app service, file share, or Microsoft SQL Server instance portable and highly available,” says DH2i co-founder and CTO OJ Ngo.
“It eliminates OS sprawl and reduces OS cost by 8-15 times, and provides near-zero application downtime as well as protection from OS, application and infrastructure faults.”