While hyperscalers spend eye-watering amounts on AI data centres, others are attempting to make it less daunting for everyone else to buy their own AI infrastructure.
They include Dicker Data and Cisco, which announced this week that in early 2026 “select” Australian partners will get access to Cisco AI PODs – pre-validated, modular AI infrastructure stacks.
The pitch is that AI PODs simplify the procurement of enterprise AI infrastructure and the building, training, testing and running of AI applications – while offering more control, cost predictability and security than public cloud might in some scenarios.
The AI PODs combine high performance computing, storage, networking, data fabric, security and software, and can be configured for building, training, testing and inferencing.
The intention is that partners use them to trial use cases and build solutions.
“Rather than starting from scratch, partners can trial battle-tested use cases and build confidence in AI deployment,” said Cisco managing director of partner and routes to market, Rodney Hamill.
For Dicker Data, the announcement builds on its AI Accelerate program by giving partners "a road to start building bespoke AI solutions," said Ben Johnson, Dicker Data general manager – marketing & strategy, ANZ, at the Cisco Live event in Melbourne this week. This is "something quite different and new in the Australian and New Zealand markets," he claimed.
Components
Cisco AI PODs are “pre-loaded with industry-specific use cases” designed for healthcare and aged care, mining, earth moving, oil and gas, industrial and manufacturing, retail and convenience stores, and food and hospitality.
The offering includes NVIDIA Enterprise Reference Architectures for a range of workloads, including industrial and “perception AI”, HPC, data analytics, visual computing, generative and agentic AI.
Under the hood is “full-stack AI software” including Red Hat OpenShift, Cisco Intersight, Ansible, Terraform, and NVIDIA AI Enterprise – the latter is a suite of software tools, libraries and frameworks for speeding up and simplifying AI application development.
The hardware combines Cisco UCS servers, NVIDIA GPUs, Cisco Nexus switches, and extendable storage such as VAST, Pure Storage and NetApp. Cisco’s marketing pitch is that VAST makes massive, unstructured data “instantly” usable by AI, unlike “GPU-focused AI stacks that lack integrated data intelligence.”
Cisco Intersight and Nexus Dashboard provide management, visibility and automation.
Whose infrastructure?
For organisations that have already deployed AI workloads in a public cloud, why would they move them on-premises, and will they?
“We are seeing quite a bit of repatriation to the on prem environment," said Cassie Roach, Cisco global vice president of cloud and AI infrastructure partner sales. "It's driven by the data sensitivity,” she said, also noting costs.
“I think it's easy to go spin up something quickly for a proof of concept, but when you actually have to run that in production, and you have to look at those cost barriers and everything else, and think about taking it to the edge.”
Johnson added: “I think when we look at the POC stage - and even when it comes to the production stage around training LLMs for particular models where partners are deploying them - the costs don't seem to stack up in the hyperscaler end of town when it comes to that training side of things.”
“That's where we're seeing more and more demand now to bring that on-prem, and to have solutions like the Cisco AI POD to be able to do that stage of it and also to run it into the future as well.”
According to Cisco’s marketing, global customers in healthcare, finance and public research are using Cisco AI POD architectures in their production environments to run secure GenAI inference next to governed data, fine-tune domain models without moving sensitive intellectual property, and burst workloads across AI PODs and facilities as projects scale.
Cisco is working on AI sizing tools. “We've got TCO and ROI-based tools as well,” Roach said.
Disconnect
Partners will need all the help they can get with AI infrastructure, if the experience of Tal Nathan, VP, application solutions, data & analytics & AI, NTT Data, is any indication.
Businesses’ enthusiasm for AI is stronger than their appreciation of the infrastructure required, according to Nathan.
“They have no context as to what will be the cost required to drive that transformation and the maturity of the organisation's ability to drive it,” he said at Cisco Live in Melbourne.
Talking about “hyper automation” in healthcare, Nathan asked: “What happens when the network goes down and you’ve digitised and automated your processes end to end? And they're not considering that with the power of artificial intelligence, the need for low latency, scalable, redundant infrastructure becomes that much more important.”
Then there are the energy costs. “The reality is energy demand as a result of AI use is going to explode,” said Mary de Wysocki, Cisco SVP & chief sustainability officer, in a session about sustainable AI infrastructure in Melbourne.
On the topic of sustainability, Nathan saw a need to focus on the "science of the possible": giving customers clear numbers not just about value, but also about implementation cost and impact, both environmental and financial.
William Maher travelled to Cisco Live Melbourne as a guest of Cisco.