It literally took an earth-shattering event for the University of Auckland to implement a meshed network fabric, but now that it has one it can help researchers predict when and where the next major rumbles are likely to hit the Shaky Isles.
James Harper, the university’s associate director of operations, says it was already considering replacing a traditional, three-tiered “tree” network, but the earthquakes that devastated the country last year hastened the decision to spread its data centre eggs across more than one basket.
Although the university doesn’t officially acknowledge the quakes as a trigger for installing Juniper’s QFabric, he says they played on people’s minds.
“The catalyst for QFabric is we recently completed construction of a new data centre at Tamaki, a satellite campus 10 kilometres away,” Harper explains.
“We had two data centres about 300 metres from each other: that posed significant risks for us. It was something we’ve been planning for a long time but [the quakes] certainly reminded senior management of the importance of business continuity.”
The fabric has enabled Auckland University to play service provider to other institutions and e-science researchers, including helping New Zealand’s eScience Infrastructure researchers with their earthquake prediction simulations.
“It’s not just a straight replacement [of the network]; we’re seeing a lot of investment into high-performance computing,” Harper says.
Auckland University is playing host to sector-wide initiatives throughout New Zealand as well as supporting a number of Crown Research Institutes (CRIs), separately run government research groups with specific scientific goals.
“We were looking to put in a new structure in the data centre to support high-performance, highly scalable services we expect to see over the next 10 years,” Harper says.
That saw top-of-rack switching replaced with a simpler 10Gb Ethernet QFabric solution at the Auckland and Tamaki campuses, with a fibre backbone for processing, failover and disaster recovery. Harper says it has “deterministic latency”: researchers know how long a data packet takes and how many hops it traverses, which is essential for intensive computation.
Although there are risks associated with using a Layer-2 technology this way, Harper notes that technology Juniper developed for it allows the university to run two fabrics as one.
“We wanted to deploy something in one data centre and know it would work the same way. We have dark fibre between the two so we had no shortage of bandwidth.”
He and his team have kept resources connected because, as workloads migrate between data centres on stretched virtual local area networks, they keep their internet protocol addresses.
“Previously that meant you had traffic going from one data centre to the other [unnecessarily]. Now traffic stays local and traverses the dark fibre if it needs to get to a server at the other data centre.”
And because the institution knows the network’s upper latency bound, network-aware applications, such as those used by researchers, become simpler to build and tune.
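To see why that bound matters, here is a minimal sketch, in Python and with purely illustrative figures rather than anything measured on the university’s network, of how an application can derive a hard timeout from a known worst-case round trip instead of guessing:

```python
# Minimal sketch: deriving a timeout from a deterministic latency budget.
# All figures are illustrative assumptions, not measurements of QFabric.

PER_HOP_US = 5             # assumed worst-case switching latency per hop (microseconds)
MAX_HOPS = 2               # assumed hop count across a single-tier fabric
FIBRE_KM = 10              # dark fibre between the Auckland and Tamaki sites
PROPAGATION_US_PER_KM = 5  # roughly 5 us per km for light in fibre

def worst_case_rtt_us(remote_site: bool) -> float:
    """Worst-case round-trip time given a fixed per-hop budget."""
    one_way = MAX_HOPS * PER_HOP_US
    if remote_site:
        one_way += FIBRE_KM * PROPAGATION_US_PER_KM
    return 2 * one_way

# The application sets a hard deadline with a safety margin, no guesswork needed.
timeout_us = worst_case_rtt_us(remote_site=True) * 1.5
print(f"worst-case RTT: {worst_case_rtt_us(remote_site=True):.0f} us, timeout: {timeout_us:.0f} us")
```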
Harper estimates the fabric cut power costs by up to 95 percent and, “from my point of view, the cost per port and scalability was compelling”. And given that the university was an existing Juniper user with a number of switching units installed, its IT team was already familiar with the Junos operating system.
Sky high
One of Auckland University’s more notable service provider customers is the Auckland University of Technology, which uses Harper’s data centres to analyse results from the Square Kilometre Array (SKA), the radio telescope peering into space from sites across South Africa, Australia and New Zealand. The project is creating an explosion of data up and down the country that will see the university move to 40Gb Ethernet, Harper says.
Auckland University runs a parallel Fibre Channel storage network knitted together with Cisco switches, but there are no plans to collapse it into the internet protocol-based Juniper network, he says.
“Fibre channel gives you resilience, load balancing, the way you just plug it in and it sorts itself out and seeing those features pop in on a QFabric solution is [excellent]. It’s taken far too long for that to happen. QFabric would support Fibre Channel if we went in that direction.”
It took Harper’s team and Juniper engineers two weeks to install the fabric. Harper estimates the data centre now has cable runs about 5 percent of the length it used previously, down to about 1.6 kilometres.
Back on the other side of the ditch, Fox Sports has deployed Juniper, aided by Sydney reseller ICT Networks. The broadcaster generates 30 terabytes of data a weekend and a petabyte every six months, and with a move to a new data centre at Gore Hill on the cards next year, it needed a network that could handle its explosive data growth.
Fox Sports estimates the Junos-based QFabric solution is about two years ahead of anything from Cisco, its previous network switch provider, chief technology officer Michael Tomkins told The Australian.
“This is a distributed architecture, so I can have parts of it in different racks and I can allocate the switch and the high bandwidth where it’s required,” Tomkins said. “It gives me redundancy, flexibility and [Juniper’s Junos network operating system] . . . management of the platform is much simpler. The throughput is better than I can get anywhere else.”
Juniper Networks A/NZ vice-president Mark Iles says the single-layer, single view of switching is attractive to end customers. Other architectures force system administrators to corral hundreds or thousands of switches, he says.
“We do away with the three hops where you investigate the data at the access, aggregation and core layers [to work out] where it needs to go; that makes for a high-density environment that’s very complex to manage,” Iles says. “QFabric connects any device to any other device and we can do that for upwards of 6000 10Gb Ethernet ports.”
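The hop-count arithmetic behind that claim is simple to model. The toy topologies below are assumptions for illustration only, not Juniper’s or any customer’s actual design:

```python
# Compare switch hops in a three-tier tree with a collapsed single-tier fabric.
from collections import deque

def hops(graph: dict, src: str, dst: str) -> int:
    """Breadth-first search for the number of links between two attached devices."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nbr in graph[node]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, dist + 1))
    raise ValueError("no path")

# Classic tree: servers A and B sit under different access switches.
three_tier = {
    "A": ["acc1"], "B": ["acc2"],
    "acc1": ["A", "agg1"], "acc2": ["B", "agg2"],
    "agg1": ["acc1", "core"], "agg2": ["acc2", "core"],
    "core": ["agg1", "agg2"],
}
# Fabric: every edge port hangs off what behaves like one logical switch.
fabric = {"A": ["fabric"], "B": ["fabric"], "fabric": ["A", "B"]}

print("three-tier links:", hops(three_tier, "A", "B"))  # 6, via access, aggregation and core
print("fabric links:    ", hops(fabric, "A", "B"))      # 2, via the single logical switch
```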
Juniper is implemented in stages alongside a customer’s infrastructure, Iles explains further: “Use the equipment you have now and start to migrate as you grow, so it’s not a rip and replace.”
Iles emphasises that Juniper relies on the channel for its income and has accordingly invested in partner training, inviting resellers to bring customers into its data centre to see fabric at work. It also helps resellers model customer needs to pick the best time to move to fabric, which is helpful if they are nervous about write-downs.
Enterprises’ new clothes
However, Gartner analyst Bjarne Munch cautions against over-hyping the network fabric space, which he says “is very much an emerging area … with few deployments”.
“It will be quite a number of years before you see any fully-fledged solutions,” he says. “Products are still evolving and we’re still talking to clients about benefits.”
And with regard to Juniper’s implementations, he feels these may be the exceptions that prove the rule at the moment.
“Juniper sits into the scheme of things by lifting the discussion to a higher level and looking at the entire data centre network as a switch. Brocade is doing that as well but on a completely different scale; Juniper is at a higher scale, but you could argue not for very large networks,” Munch says.
“It lifts the discussion to the level where you could speak of the data centre network as a fabric, which is why you talk about it as a one-switch type of architecture.
“QFabric lifts it up to that layer where you have a control architecture, with policies and router switching, spanning the entire data centre network.”
“They have the right architecture but it’s still too early to say it’s completely proven.”
Nevertheless Gartner estimates there are nearly 50,000 data centres in Australia likely to spend about $2 billion this year and next, with some of the money to go on fabrics. It predicts that about 4.4 percent of servers sold within the next 3 years are destined for fabrics, on par with the historical growth of blade servers.
But Munch cautions that fabric’s promise of connecting anything to everything is held back by proprietary solutions, conservative data centre owners and under-baked technology.
“But there’s a need to be more dynamic in the data centre and virtualisation is pushing these discussions. Those that would benefit most are [the] most conservative enterprises with very large data centres where they want to manage those resources as efficiently as they can.”
Munch advises resellers to probe how customers handle, or would like to handle, virtual servers: for instance, whether they move them depending on power or other tariffs, to follow the sun, to put applications closer to users or to consolidate resources. That work could be automated and spread over a fabric.
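A minimal sketch of that kind of automation, with hypothetical site names, tariffs and latencies standing in for real data, might look like this:

```python
# Toy placement policy: latency-sensitive workloads stay near users,
# batch workloads chase the cheapest power. All values are hypothetical.

SITES = {
    "sydney":   {"power_tariff": 0.28, "latency_ms_to_users": 5},
    "auckland": {"power_tariff": 0.21, "latency_ms_to_users": 25},
    "perth":    {"power_tariff": 0.24, "latency_ms_to_users": 45},
}

def place_workload(latency_sensitive: bool) -> str:
    """Pick a data centre for a virtual server according to a simple policy."""
    if latency_sensitive:
        return min(SITES, key=lambda s: SITES[s]["latency_ms_to_users"])
    return min(SITES, key=lambda s: SITES[s]["power_tariff"])

print(place_workload(latency_sensitive=True))   # near the users: sydney
print(place_workload(latency_sensitive=False))  # cheapest power: auckland
```

The policy itself is trivial; the point is that moving the virtual machine is only painless if the network underneath can carry its address and traffic wherever the policy sends it.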
But he says the inherent problems with stretching Layer-2 switching across networks aren’t licked, despite what vendors say.
“There’s serious constraints when you talk about how to network data centres on Layer-2 and we haven’t seen too many walk down that path.
“We have a lot of hype with moving virtual machines between data centres and cloud bursting, but the reality is different,” he says.
Secure fabric
Juniper’s senior manager of data centres, Brian Hutson, sees another overlooked advantage of fabric: security. A certified information security practitioner, he says users bringing their devices into the network means that security is no longer a razor-wire ring-fence around the data centre but must permeate its warp and weft.
“The biggest thing I see is an advantage in visibility because I’m looking at one switch and I’m seeing a lot of the activities in the data centre switching network as opposed to the old, multi-tiered way of looking at firewalls,” he says.
“We can’t have those firewalls we used to have that blocked off the internet or separated application from database or presentation layers; that’s not going to work with physical firewalls protecting virtual servers. You now need to look at how you protect your virtual environment.”
Hutson says resellers must understand their customers’ data flows before upgrading their networks.
“The customer or integrator has to really understand what is going on in terms of traffic and the implications of moving switches around or consolidating a network from three layers to one layer,” he stresses.
Customers may not understand what their network is doing, and if they rush into deploying new switching those issues can mount alarmingly, so it’s important, he says, to get operations technicians involved at the outset.
Hutson helps resellers and their customers test real data, perhaps captured with a media device in the customer’s data centre and then played back using their apps and devices in Juniper’s lab. So before customers even start their rollout, they should set up a lab in their own physical environment to allay fears and make sure everything is covered.
Fibre Channel
Although seemingly ahead of many of its competitors, Juniper is fresh to fabrics, its QFabric offering having launched barely a year ago.
But for those with big storage networks, storage networking specialist Brocade has provided Fibre Channel switching for some time. It now offers solutions for the Ethernet needs of the rest of the data centre.
Cloud provider 6YS is one of the first in Australia to adopt Brocade’s fabric and is hoping promised developments to knit together its five data centres around the country will soon bear fruit. The company’s chief technology officer Martin von Stein says the shifting demands of customers make a reactive approach to data centre management challenging.
“[6YS’] primary environment was already relatively flat; we were running a core-to-edge architecture but the main issue we had was bandwidth management,” von Stein says.
“We had great difficulties in ensuring we had the right amount of bandwidth in the right rack at the right time so we were constantly shuffling uplinks.”
Von Stein was attracted to Brocade’s guaranteed traffic scheduling, which stops blocks of data destined for disk I/O getting lost in transit, a problem that frequently leads to crashes.
He says 6YS had the option to rip and replace but decided initially to overlay the new fabric; it’s now backfilling the old topology with the new. But it was almost an accident:
“We came into the Brocade fold mid-last year and we weren’t looking for fabric; our specification called for reliable 10Gb switches. Brocade had a good price and features, and in the first meeting they mentioned Ethernet fabric. It was almost an unknown to us; we went and did research and they gave us demo units to look at.”
A big advantage for 6YS is how redundant switches are bonded.
“You have two network interface cards but the switchgear fools the server into thinking it’s one 20Gb trunk; if a switch fails you are just reduced to 10Gb. It’s like gravy. How it load balances across the NICs is not 100 percent perfect but it’s close, and you can’t do that with Fibre Channel.”
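A toy model of that behaviour, with assumed link speeds and not Brocade’s actual implementation, shows how a bonded pair degrades gracefully while each flow stays pinned to one member so its packets arrive in order:

```python
# Toy bonded trunk: two 10Gb members presented as one logical 20Gb link.
from zlib import crc32

class BondedTrunk:
    def __init__(self, member_gbps=(10, 10)):
        self.members = list(member_gbps)        # per-NIC capacity in Gb/s
        self.up = [True] * len(self.members)    # link state per member

    @property
    def capacity_gbps(self) -> int:
        """Aggregate bandwidth across the members that are still up."""
        return sum(cap for cap, ok in zip(self.members, self.up) if ok)

    def member_for_flow(self, flow_id: str) -> int:
        """Hash a flow onto one live member so its packets stay in order."""
        live = [i for i, ok in enumerate(self.up) if ok]
        return live[crc32(flow_id.encode()) % len(live)]

    def fail(self, index: int) -> None:
        self.up[index] = False

bond = BondedTrunk()
print(bond.capacity_gbps)                   # 20 while both members are up
print(bond.member_for_flow("10.0.0.5:443"))
bond.fail(0)
print(bond.capacity_gbps)                   # drops to 10; traffic keeps flowing
```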
It has reduced von Stein’s sleepless nights and made network configuration trivial, especially when dealing with dynamic customer demands. And where change notices were once fraught, now they are a formality, with preparation dropping from hours to about half an hour.
And managing the new world of virtualisation has become less of a headache too.
“Virtualisation is becoming more used in all manner of business,” he says. “We’re now seeing larger clusters spanning large racks and areas of the data centre. You have islands that have an unknown interconnectivity requirement so how do you budget bandwidth when you don’t know which virtual machines are asking for connections?”
Fabric eases the reactive “pain” of capacity management: “We can’t get away from being reactive because customers don’t tell us what they’re doing, [but] being reactive in a fabric environment is a piece of cake.
“If capacity is approaching [its limit], then all we do is run another link, activate it and set it as fabric; we literally just run another cable between two [devices], the switches find themselves and add that bandwidth.”
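In code, that reactive style of capacity management reduces to a threshold check; the link speeds and threshold below are assumptions for illustration only:

```python
# Flag when the inter-switch trunk needs another fabric link cabled in.

def needs_new_link(current_gbps: float, link_count: int,
                   link_gbps: int = 10, threshold: float = 0.7) -> bool:
    """True when trunk utilisation exceeds the chosen threshold."""
    return current_gbps / (link_count * link_gbps) > threshold

links = 2                                      # two 10Gb fabric links today
if needs_new_link(current_gbps=15.5, link_count=links):
    links += 1                                 # run another cable; the switches merge it in
print(f"trunk is now {links * 10}Gb across {links} links")
```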
And von Stein points to virtual desktops as another factor straining the traditional client-server strategy.
“Most organisations are online 24/7 so with the need to move applications and introduce new technologies into the data centre we don’t have the luxury of downtime.”
The macro trends driving the change revolve around virtualisation on the server and desktop, and how end-users consume services through the cloud.
Easy access
“Everyone has a smartphone or iPad or remote access to applications in the data centre,” says Brocade A/NZ regional director Graham Schultz.
“[It’s] driving more users to centralised data [and] putting more stress on applications and infrastructure, driving the requirement for efficiencies in the data centre.”
According to surveys in the past year, Australia has the most virtualised data centres in the world, yet our businesses are decidedly diminutive by international SMB standards, Schultz says.
“It’s not about how many employees you have but the type of business you’re running, such as cloud provision, that makes fabric worthwhile.”
And while fabric makes its way into the data centre, vendors such as HP see it moving down the value chain over the next three to five years into campus and branch offices. That’s according to HP’s solutions architecture manager, Wojtek Malewski.
“The sprawl of devices on campuses is impacting services and applications you are providing in the data centre so it’s important that fabric awareness and control not just be focused on the data centre itself,” he says.
And like vendors such as Cisco, HP hitches itself to the wagons of virtualisation vendors, most notably VMware.
“When talking about thousands of virtual servers, you’re talking about hundreds of thousands of lines of code,” Malewski notes.
“We see the death of the [command line interface]. A lot of vendors are stuck on [it] but to scale and move to cloud you have to abstract those requirements.”
He says many still do that heavy lifting by hand but what is needed is a single step to define [the] network and install services so sysadmins can rapidly provision that with a click.
“Don’t worry about policies or application programming interfaces; the network is provisioned on the fly as those services are looked at.”
Most see the move from 10Gb Ethernet to 40Gb Ethernet fabric in the data centre over the next three to five years as a given, with a similar-sized jump in Fibre Channel fabric speeds over the same period to keep up with spiralling demand.
Holistic approach
And while network behemoth Cisco has a big investment in the traditional three-tier architecture, it’s adapting to a holistic approach to the data centre, says its Australian chief technology officer Kevin Bloch. FabricPath, which consolidates switches so they are easier to manage, is its foot in the door to this future.
“Say you have six switches in the data centre, FabricPath makes it look like they are one; it’s a huge simplification and prevents human error,” Bloch says. “And if you have another data centre, we can make them look like two switches.”
The question then is how to bridge the two data centres.
“The solution to that is overlay transport virtualisation, which bridges those two fabrics over any network, even Layer-3,” Bloch proclaims.
“Where we’re going is to make those six switches look like one fabric. Now we’re going over the wide area, which could be dark fibre, and still simplifying in a Layer-2 environment, so you’re enjoying the benefits of Layer-2 with the flexibility of Layer-3.”
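The general technique behind bridging Layer-2 between sites over a routed network is encapsulation: wrap the Ethernet frame in an IP packet at one data centre edge and unwrap it at the other. The Python sketch below illustrates that idea with an invented tunnel header and an arbitrary UDP port; it is not Cisco’s OTV wire format:

```python
# Conceptual sketch of carrying a Layer-2 frame across a Layer-3 network.
import socket
import struct

TUNNEL_PORT = 9999  # arbitrary port chosen for this sketch

def encapsulate(ethernet_frame: bytes, tunnel_id: int) -> bytes:
    """Prefix the raw frame with a tiny made-up tunnel header."""
    return struct.pack("!I", tunnel_id) + ethernet_frame

def send_over_l3(frame: bytes, remote_edge_ip: str) -> None:
    """Ship the encapsulated frame to the far data centre's edge device over UDP."""
    payload = encapsulate(frame, tunnel_id=42)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (remote_edge_ip, TUNNEL_PORT))

# Example: a broadcast frame that needs to reach hosts in the other data centre.
fake_frame = bytes.fromhex("ffffffffffff") + b"\x00" * 54
send_over_l3(fake_frame, remote_edge_ip="192.0.2.10")  # documentation-range address
```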
Nexus integration
Cisco’s Nexus switches are the first to benefit as Cisco parlays its market position to integrate with virtualisation leaders VMware and Microsoft, offering management benefits in the hypervisor. For instance, the Nexus 1000V demonstrated at Cisco’s conference in June supports Windows Server 2012.
Cisco hopes the conservativeness of data centre owners and its immense weight in the market give it an edge even against aggressive competition. Bloch pushes Cisco’s total value proposition and reduced risk, the support it offers to partners and the surety he says is infused in its brand.
And while he acknowledges that resellers not on Cisco’s top-tier do struggle against those that are, that’s an opportunity to differentiate with skills such as data centre automation or service catalogues.
Intel has also tooled around with fabrics for a few years but recently upped its game, buying QLogic’s InfiniBand business and the interconnect assets of legendary supercomputer maker Cray.
Intel Australia enterprise technical specialist Peter Kerney sees Ethernet gathering the bulk of new business because that is where many technicians’ skills lie. But for high-performance architectures such as those used in big business, research and academia, InfiniBand holds promise, he says.
“A lot of customers are looking at converged fabric storage over the Ethernet infrastructure to reduce switching gear and connections, which also collapses storage and network teams at customers and ultimately resellers as well,” Kerney says.
* Nathan Cochrane attended Cisco Live! as a guest of the vendor.