Dell Technologies has unveiled a comprehensive suite of infrastructure technology designed to help organisations deploy artificial intelligence workloads more efficiently and cost-effectively across edge, data centre, and cloud environments.
The announcements are part of Dell's expanded AI Factory approach and include the industry's first enterprise-grade discrete neural processing unit (NPU) in a mobile form factor.
It's not just computing gear: Dell also claims its cooling technology can cut the energy cost of running AI by up to 60 per cent.
The Dell AI Factory, built in partnership with NVIDIA, is a secure AI offering comprising a portfolio of products, solutions, and services tailored for AI workloads, from desktop to data centre to cloud.
The company claims its AI Factory approach can be up to 62 per cent more cost-effective for inferencing large language models on-premises compared to public cloud alternatives.
At the centre of Dell's mobile AI push is the Dell Pro Max Plus laptop featuring Qualcomm's AI 100 PC Inference Card, which Dell positions as the world's first mobile workstation with an enterprise-grade discrete NPU.
The card provides 32 AI cores and 64 GB of memory, enabling fast, secure on-device inferencing of large AI models that would typically require cloud processing, including current models of up to 109 billion parameters.
Dell has also introduced the PowerCool Enclosed Rear Door Heat Exchanger (eRDHX), an industry-first cooling solution that captures 100 per cent of IT-generated heat through a self-contained airflow system.
The cooling system operates with water temperatures between 32 and 36 degrees Celsius, eliminating reliance on expensive chillers whilst enabling organisations to deploy up to 16 per cent more racks of dense compute without increasing power consumption.
The cooling system supports up to 80 kilowatts per rack for dense AI and high-performance computing deployments.
In server hardware, Dell's PowerEdge XE9785 and XE9785L servers will support AMD Instinct MI350 series GPUs with 288 GB of HBM3e memory per GPU, delivering up to 35 times greater inferencing performance; the servers are available in both liquid-cooled and air-cooled configurations.
The company has expanded its AI partner ecosystem with new collaborations including on-premises deployments of Cohere North, Google Gemini integration, and solutions built with Meta's Llama Stack distribution and Llama 4 models.
Dell says its AI Factory now counts more than 3,000 customers across a range of industries.