HPE teams with Nvidia on AI-native portfolio


HPE unveiled new additions to its AI-native portfolio at Nvidia GTC on Tuesday US time, including two co-engineered generative AI (GenAI) solutions.

The portfolio aims to help businesses operationalise GenAI, deep learning and machine learning (ML) applications.

HPE launched its supercomputing solution for GenAI, which is designed to help organisations develop and train large AI models, as well as accelerate deep learning projects, including large language models (LLMs), recommender systems and vector databases.

Supporting up to 168 Nvidia GH200 Grace Hopper Superchips, the solution helps streamline the model development process with an AI/ML software stack.

Delivered with services for installation and set-up, the solution is designed for AI research centres and large enterprises to realise improved time-to-value and speed up training by two to three times, HPE said. 

Enterprise computing solution for GenAI tuning and inference

Also launched was HPE's enterprise computing solution for GenAI, which is available to customers directly or through HPE GreenLake with a pay-per-use model.

Co-engineered with Nvidia, the pre-configured fine-tuning and inference solution is designed to "reduce ramp-up time and costs by offering the right compute, storage, software, networking and consulting services that organisations need to produce GenAI applications," HPE said.

"The AI-native full-stack solution gives businesses the speed, scale and control necessary to tailor foundational models using private data and deploy GenAI applications within a hybrid cloud model."

The solution features a high-performance AI compute cluster and software from HPE and Nvidia, and is designed for lightweight fine-tuning of models, retrieval-augmented generation (RAG) and scale-out inference.

"The fine-tuning time for a 70 billion parameter Llama 2 model running this solution decreases linearly with node count, taking six minutes on a 16-node system," HPE said.
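Taken at face value, the linear-scaling claim lets a reader estimate fine-tuning times at other cluster sizes. A minimal sketch of that arithmetic follows; the 16-node, six-minute reference point is HPE's published figure, while the linear model and any extrapolated values are the article's stated assumption, not measurements:

```python
def fine_tune_minutes(nodes: int, ref_nodes: int = 16, ref_minutes: float = 6.0) -> float:
    """Estimate fine-tuning time, assuming it scales linearly (inversely) with node count.

    The 16-node / 6-minute reference point is HPE's figure for a 70 billion
    parameter Llama 2 fine-tune; any other value is an extrapolation.
    """
    return ref_minutes * ref_nodes / nodes

print(fine_tune_minutes(16))  # 6.0
print(fine_tune_minutes(8))   # 12.0 -- halving the nodes doubles the time
```

Under this model, doubling the node count halves the wall-clock time, which is the strongest (and rarely fully achieved) scaling behaviour a distributed training system can claim.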

Using HPE ProLiant DL380a Gen11 servers, the solution is pre-configured with Nvidia GPUs, the Nvidia Spectrum-X Ethernet networking platform and Nvidia BlueField-3 DPUs.

It is enhanced by HPE's ML platform and analytics software, Nvidia AI Enterprise 5.0 software with the new Nvidia NIM microservices for optimised inference of GenAI models, as well as Nvidia NeMo Retriever and other data science and AI libraries. 

HPE Machine Learning Inference Software

The company announced that HPE customers can now preview its HPE Machine Learning Inference Software, which "will allow enterprises to rapidly and securely deploy ML models at scale."

The new offering will integrate with Nvidia NIM to deliver Nvidia-optimised foundation models using pre-built containers.

HPE has also launched a reference architecture for enterprise RAG that aims to help enterprises that need to build and deploy GenAI applications featuring private data.

Based on the Nvidia NeMo Retriever microservice architecture, the offering consists of a data foundation from HPE Ezmeral Data Fabric Software and HPE GreenLake for File Storage. 

To aid in data preparation, AI training and inferencing, the solution merges open-source tools and solutions from HPE Ezmeral Unified Analytics Software and HPE's AI software.

This includes HPE Machine Learning Data Management Software, HPE Machine Learning Development Environment Software, and the new HPE Machine Learning Inference Software.
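The RAG pattern the reference architecture implements can be sketched in a few lines. The toy example below is illustrative only: it swaps in a bag-of-words similarity where the real architecture uses Nvidia NeMo Retriever's neural embeddings, and the document set and function names are invented for the example, not HPE or Nvidia APIs:

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: a bag-of-words vector. Real RAG stacks use neural
    encoders (e.g. NeMo Retriever); this stand-in only shows the pattern."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank the private documents by similarity to the query and keep the top k."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Prepend retrieved context so the model answers from private data,
    not just from its training weights."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Invented example documents standing in for an enterprise's private data.
docs = [
    "Quarterly revenue grew 12 percent year over year.",
    "The cafeteria menu changes every Monday.",
    "Revenue growth was driven by enterprise GenAI demand.",
]
print(build_prompt("What drove revenue growth?", docs))
```

In the reference architecture, the retrieval step is backed by data held in HPE Ezmeral Data Fabric Software and HPE GreenLake for File Storage rather than an in-memory list.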

HPE's AI software is available on both HPE's supercomputing and enterprise computing solutions for GenAI "to provide a consistent environment for customers to manage their GenAI workloads," the company said.

Future solutions built on Nvidia Blackwell platform

HPE said it will develop future products based on the newly announced Nvidia Blackwell platform, with additional details and availability for forthcoming HPE products featuring the Nvidia GB200 Grace Blackwell Superchip, the HGX B200 and the HGX B100 to be announced.

"To deliver on the promise of GenAI and effectively address the full AI lifecycle, solutions must be hybrid by design," said Antonio Neri, president and CEO at HPE.

"From training and tuning models on-premises, in a colocation facility or the public cloud, to inferencing at the edge, AI is a hybrid cloud workload.

"HPE and Nvidia have a long history of collaborative innovation, and we will continue to deliver co-designed AI software and hardware solutions that help our customers accelerate the development and deployment of GenAI from concept into production."

Nvidia's CEO Jensen Huang said the company's "growing collaboration with HPE will enable enterprises to deliver unprecedented productivity by leveraging their data to develop and deploy new AI applications to transform their businesses."

Copyright © nextmedia Pty Ltd. All rights reserved.
