Activeport, an Australian developer of software for telecommunications providers and data centre operators, has rolled out an AI model routing platform that positions its software to deliver public and private cloud access for running AI-driven applications.
The new Activeport AI Platform combines unified routing intelligence, an enterprise-grade, broadly compatible API and secure, locally hosted GPU execution on private infrastructure, delivering a low-latency private cloud inference solution that can be integrated with public platforms such as AWS Bedrock, Google Vertex, Cerebras and Groq.
The launch marks Activeport’s strategic expansion from GPU orchestration for cloud gaming into GPU orchestration for AI inference, a move intended to position the company as the preferred software layer for private AI clouds across the telecommunications, government and enterprise sectors.
Built on Activeport’s telco-grade orchestration platform, the service lets customers access models via standard APIs while maintaining full data sovereignty, compliance and performance control.
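Activeport has not published technical documentation for the platform, but routing layers of this kind commonly expose an OpenAI-compatible endpoint so that existing client libraries work unchanged. The sketch below is purely illustrative of that pattern; the base URL, model alias and credential are hypothetical and not drawn from Activeport materials.

```python
# Illustrative only: assumes an OpenAI-compatible chat endpoint fronting the
# routing layer, which would resolve a model alias to either a locally hosted
# GPU backend or a public platform (e.g. Bedrock, Vertex) behind the scenes.
from openai import OpenAI

client = OpenAI(
    base_url="https://inference.example-tenant.net/v1",  # hypothetical tenant endpoint
    api_key="YOUR_TENANT_KEY",                           # placeholder credential
)

response = client.chat.completions.create(
    model="private-llm-70b",  # hypothetical alias resolved by the router
    messages=[{"role": "user", "content": "Summarise this network incident report."}],
)
print(response.choices[0].message.content)
```

In this pattern, the routing decision (private GPU cluster versus public cloud model) stays server-side, so client code does not change when workloads move between environments.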
Peter Christie, chairman and CEO of Activeport Group Ltd, said many of the company's telco customers are making significant investments in GPUs to operate sovereign AI models.
"Activeport already delivers a zero-touch deployment and orchestration solution for GPU’s for cloud gaming so extending this to AI inference is a natural progression," he said.
"We’re particularly excited about opportunities in hosting Diffusion models that can leverage our expertise in streaming optimisation to boost performance."
Already in 2025, Activeport has received orders for more than 75 new Network as a Service (NaaS) ports worth in excess of $1.2 million; launched a dedicated fibre service offering variable bandwidth for private cloud connectivity; received "firm commitments" to raise $6.68 million to accelerate business development, sales and marketing, and product development; and integrated NBN circuit provisioning into its Global Edge NaaS platform.