June 22, 2023
At a Glance
- HPE unveils GreenLake for Large Language Models, offering customers the chance to train AI models on supercomputers.
- GreenLake for LLMs is powered by renewable energy and offers pre-trained models, including Luminous.
Announced at the company’s Discover event in Las Vegas, HPE GreenLake will now offer supercomputing access to companies wanting to train and develop models. HPE GreenLake for Large Language Models will allow enterprise users to privately train, tune and deploy large-scale AI.
Among the models available to GreenLake for LLMs users is Luminous, a pre-trained, multilingual model from German AI startup Aleph Alpha. Customers can leverage their own data to train and fine-tune a customized model for use cases requiring text and image processing and analysis.
GreenLake for LLMs will run on HPE Cray XD supercomputers, and users will also be able to leverage the HPE Cray Programming Environment, a software suite for optimizing HPC and AI applications. The supercomputing platform also supports the HPE Machine Learning Development Environment for rapidly training large-scale models and HPE Machine Learning Data Management Software for integrating and auditing data.
GreenLake for LLMs is powered by nearly 100% renewable energy, according to HPE. The new solution will initially run on supercomputers hosted at QScale's colocation facility in Quebec, which draws 99.5% of its power from renewable sources.
“We have reached a generational market shift in AI that will be as transformational as the web, mobile, and cloud,” said Antonio Neri, president and CEO of HPE.
AI was the focus of CEO Neri’s keynote at Discover 2023. Image: HPE
HPE is accepting orders now for HPE GreenLake for LLMs and expects additional availability by the end of calendar year 2023, starting in North America, with availability in Europe expected to follow early next year.
GreenLake for LLMs isn't the only solution HPE wants to offer in the AI cloud space: the company says it plans to launch “a series of industry and domain-specific AI applications” in the future, including applications to support climate modeling, health care and life sciences, financial services, manufacturing, and transportation.
Among its other AI announcements at Discover 2023, HPE also unveiled expansions to its AI inferencing compute line, the ProLiant Gen11.
The new ProLiant DL380a and DL320 Gen11 servers are optimized for AI workloads and use Nvidia H100, L4 Tensor Core, and L40 GPUs.
According to HPE, they offer five times the AI inferencing performance of prior-generation hardware.