Data on the move looks to the clouds
By Staff Writers on Feb 7, 2008 11:23AM

Users need to go everywhere – customer sites, partner meetings, remote offices. The applications they need, however, are often locked up in distant, consolidated data centres or outsourced entirely. All of this movement conspires to break up the traditional “hub and spoke” network model of yesteryear.
At the same time, application networking is going increasingly peer-to-peer, with VoIP, SOA and Unified Communications requiring low-latency, high-bandwidth connections between any two network endpoints. Accommodating this with traditional point-to-point links would require the number of interconnects to grow quadratically – a full mesh of n sites needs n(n-1)/2 dedicated links – which is impossible to provision, as we know. For relief, many enterprises have turned to “cloud”-shaped networks. It’s a transition, like most in IT, with some interesting side effects.
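A quick Python sketch, purely illustrative arithmetic, makes the scaling gap concrete:

# Illustrative only: dedicated links needed as sites are added.
def full_mesh_links(n: int) -> int:
    """Every site connects directly to every other site."""
    return n * (n - 1) // 2

def hub_and_spoke_links(n: int) -> int:
    """Every remote site homes to a single hub."""
    return n - 1

for sites in (5, 20, 100):
    print(f"{sites:>3} sites: full mesh {full_mesh_links(sites):>4} links, "
          f"hub and spoke {hub_and_spoke_links(sites):>2}")

At 100 sites the mesh wants 4,950 circuits; the hub needs 99. Neither the budget nor the carrier paperwork survives the former.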
There are basically three types of cloud: Multi-Protocol Label Switching (MPLS), direct Internet access coupled with multi-point VPNs, and a hybrid of the two. Each has its distinct traits, but they share many commonalities. For one thing, they all get traffic between any two cloud-connected endpoints in a generally efficient way, without the need to provision a specific, dedicated line. At first glance, this represents a saving in operational expense. But other costly consequences are likely – namely increased, and highly variable, latency, because the best available route can change from moment to moment.
A changeable route means changing network conditions. In many MPLS networks, the route may be completely hidden from you: the packet leaves your router (or switch) and appears “magically” on the other side some time later. In exchange for these subtle inconveniences, the service provider enjoys economies of scale and passes some of the savings along, both in price and in a dramatic reduction of the complexity you have to manage.
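If you want to see how changeable the path really is, measure it from the edge. A minimal sketch in Python – the endpoint is a placeholder for one of your own sites, and TCP connect time is used as a rough stand-in for round-trip time:

# Sample TCP connect times to an endpoint and report the spread.
import socket
import statistics
import time

def sample_rtts(host: str, port: int = 443, samples: int = 10) -> list[float]:
    rtts = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass  # the three-way handshake completing approximates one RTT
        rtts.append((time.perf_counter() - start) * 1000.0)  # milliseconds
        time.sleep(0.5)
    return rtts

rtts = sample_rtts("example.com")  # placeholder: point this at your far end
print(f"mean {statistics.mean(rtts):.1f} ms, "
      f"jitter (stdev) {statistics.stdev(rtts):.1f} ms")

A standard deviation that is large relative to the mean is a sign the cloud is shuffling your route, or its load, underneath you.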
Although the tradeoff is acceptable to most organisations, they can’t always ignore the little inconveniences. Take, for example, the more insidious one: cloud network boundaries. Here, MPLS and direct-to-the-Internet clouds stand in stark contrast. On the one hand, going direct to the Internet makes for an obvious, but potentially very scary, boundary.
What if you want to connect directly to the Internet? The price is definitely right, and getting all of that Internet backhaul off your WAN is very appealing. It’s also elegant: the traffic is going there anyway, so why not let the Internet carriers pay to move it around? Unfortunately, the security and routing issues are as serious as they are daunting. First, you’ll have to do all the routing yourself, including site-to-site VPNs for privacy.
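“All the routing” means your edge, not a carrier, decides which destinations ride the VPN tunnels and which go straight out the local pipe. A minimal sketch of that decision in Python – the prefixes are placeholders, not real assignments:

# Classify destinations: corporate prefixes ride the VPN, the rest go direct.
import ipaddress

CORPORATE_PREFIXES = [
    ipaddress.ip_network("10.0.0.0/8"),      # placeholder: internal sites
    ipaddress.ip_network("192.168.0.0/16"),  # placeholder: branch LANs
]

def next_hop(dest: str) -> str:
    addr = ipaddress.ip_address(dest)
    if any(addr in net for net in CORPORATE_PREFIXES):
        return "vpn-tunnel"        # private traffic stays encrypted site-to-site
    return "internet-gateway"      # everything else is the carrier's problem

print(next_hop("10.4.2.17"))       # vpn-tunnel
print(next_hop("203.0.113.9"))     # internet-gateway (a documentation address)

Every site needs this policy, and every site pair needs a tunnel – which is where the management burden creeps back in.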
Still, high-quality Internet connectivity is available just about everywhere at bargain prices, and security and networking infrastructure is getting cheaper and easier to manage. Combined, these trends can justify shifting the costs of backhauled Internet traffic to the carriers.
What about the other cloud, MPLS? It’s more expensive than the do-it-yourself Internet cloud, but cheaper than the costly point-to-point alternative. And like the point-to-point, hub-and-spoke networks of yore, MPLS tries its best to look like a natural extension of your network. Most providers will even manage the router for you and drop off direct Ethernet at your edge.
Convenient, but the lack of a clear boundary shouldn’t lull you into a false sense of security. This is a semi-public network, where your traffic commingles with other customers’ and is subject to inspection. Moreover, MPLS clouds are generally black: all routing information through the cloud is completely hidden. Without a clear boundary, you also need to be careful about how you let traffic out onto the MPLS network.
Carriers are happy to handle your accidental overflow, for a price. You can’t burst forever, at least not for free, so you need to consider how to keep cloud-bound traffic contained (a sketch of the standard mechanism follows below). Regardless of the type of cloud you choose, one last issue remains – increased latency. There are options for addressing it, and they generally fall into two categories: fixing the application traffic itself, with caching, compression and protocol optimisations; and prioritising applications so the packets that matter aren’t left waiting.
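On containment: the classic tool is a token bucket, which lets short bursts through but holds sustained output to the rate you’ve contracted for. A minimal sketch, with illustrative numbers:

# Token bucket: permit short bursts, but cap the sustained rate.
import time

class TokenBucket:
    def __init__(self, rate_bytes_per_sec: float, burst_bytes: float):
        self.rate = rate_bytes_per_sec   # committed fill rate
        self.capacity = burst_bytes      # maximum burst allowance
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes: int) -> bool:
        now = time.monotonic()
        # Refill for elapsed time, capped at the burst allowance.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True    # within contract: send now
        return False       # over contract: queue or drop, don't pay burst fees

bucket = TokenBucket(rate_bytes_per_sec=1_250_000, burst_bytes=64_000)  # ~10Mbit/s
print(bucket.allow(1500))  # True: a full-size packet fits within the burst

The same shaping logic lives in most edge routers; the point is that applying it is your configuration choice, not the carrier’s.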
The ultimate goal is to avoid sending traffic at all. New forms of caching and inline compression can dramatically reduce the bandwidth needed to service applications. Bandwidth, network latency and application performance aren’t directly related, but if you can avoid transmitting data in the first place, you save the user’s time along with time on the wire. Some application protocols – file services, email and even web applications – can be intercepted and re-worked. These protocol optimisations, combined with caching and compression, can provide startling improvements.
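A toy illustration of the idea, assuming nothing about any particular product: chunk the data, send each unique chunk only once, and compress whatever still has to cross the wire.

# Toy WAN de-duplication: unique chunks go once (compressed); repeats go as
# 32-byte references the far side can resolve from its cache.
import hashlib
import zlib

seen: set[bytes] = set()   # chunk digests the far side already holds

def bytes_on_wire(data: bytes, chunk_size: int = 4096) -> int:
    sent = 0
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).digest()
        if digest in seen:
            sent += len(digest)                # reference only: 32 bytes
        else:
            seen.add(digest)
            sent += len(zlib.compress(chunk))  # first sight: compressed payload
    return sent

doc = b"quarterly report boilerplate " * 2000  # a highly repetitive payload
print(f"first transfer: {bytes_on_wire(doc)} bytes")
print(f"repeat transfer: {bytes_on_wire(doc)} bytes")  # references only

On the repeat transfer nothing but 32-byte references moves – which is the whole trick behind the WAN optimisation appliances of the day.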
While some latency is simply unavoidable, you can do something about packets sitting in queues waiting for bandwidth. And even with this minor turbulence, the convenience of cloud networks is too much to resist. Yes, a little extra latency must be managed and some additional common-sense security is required. But it’s nothing a network champion can’t overcome – if you can spare the time from filling out all those point-to-point cancellation forms.