
Dell unveils Omnia: Open source software to automate AI and HPC workloads

by Ben Wodecki
Also announced: VMware expansion for Dell’s HPC on demand service and Nvidia GPU availability for PowerEdge servers

Dell has launched Omnia, an open source software stack designed to simplify compute-intensive workload deployments.

Developed in collaboration with Intel and Arizona State University (ASU), Omnia automates the provisioning and management of workloads.

It offers a set of Ansible playbooks that speed up the deployment of converged workloads with Kubernetes and Slurm, along with library frameworks, services, and applications.

Ansible, a Red Hat product, aids app deployment and configuration management; Slurm is a job scheduler for Linux used in several of the world’s leading supercomputers; Kubernetes is orchestration software for the containers that host the components of modern applications.

“Omnia automatically imprints a software solution onto each server based on the use case — for example, HPC simulations, neural networks for AI, or in memory graphics processing for data analytics — to reduce deployment time from weeks to minutes,” Dell said in a press release.
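For readers unfamiliar with Ansible-driven provisioning, the sketch below shows roughly how a playbook run against a cluster inventory is kicked off from Python. The playbook and inventory file names are illustrative placeholders rather than confirmed Omnia file names, and Ansible must be installed for the command to resolve; consult the project’s documentation for the actual steps.

```python
# Minimal sketch of driving an Ansible playbook run from Python.
# NOTE: "omnia.yml" and "inventory.ini" are illustrative placeholders,
# not confirmed Omnia file names; check the Omnia docs for real paths.
import subprocess


def run_playbook(playbook: str, inventory: str) -> int:
    """Invoke ansible-playbook against an inventory of cluster nodes."""
    result = subprocess.run(
        ["ansible-playbook", playbook, "-i", inventory],
        check=False,  # inspect the return code ourselves
    )
    return result.returncode


if __name__ == "__main__":
    rc = run_playbook("omnia.yml", "inventory.ini")
    print("Provisioning run finished with exit code", rc)
```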

Convergence

“As AI with HPC and data analytics converge, storage and networking configurations have remained in siloes, making it challenging for IT teams to provide required resources for shifting demands,” Peter Manca, Dell’s senior vice president of integrated solutions, said.

“With Dell’s Omnia open source software, teams can dramatically simplify the management of advanced computing workloads, helping them speed research and innovation.”

Arizona State University’s research computing facilities worked alongside Dell to develop the Omnia software.

“It’s been a rewarding effort working on code that will simplify the deployment and management of these complex mixed workloads, at ASU and for the entire advanced computing industry,” Douglas Jennewein, ASU’s senior director of research computing, said.

The Omnia toolkit is available on GitHub under the Apache 2.0 license, which allows users to distribute and modify the code.

VMware expansion for HPC on demand

In other news, Dell announced an expansion of its HPC on demand services and its Dell EMC PowerEdge server line to support VMware virtualization environments.

Dell’s HPC on demand service offers cloud-based access to its PowerEdge R systems on a pay-as-you-go basis. VMware support will allow Dell customers to adopt a hybrid cloud operating model for resource-intensive HPC workloads, the company said.

Mercury Marine, the marine engine division of Brunswick Corp, said it has already used Dell's HPC on demand infrastructure for the computer-aided hydrodynamic simulations behind its new propulsion systems. The setup cut simulation times from around two days to just two hours, the company said.

Yet another Nvidia announcement

Dell also announced that Nvidia A30 and A10 Tensor Core GPUs are now available as options for its Dell EMC PowerEdge R750, R750xa, and R7525 servers.

The option to choose Nvidia’s A10 GPUs will give Dell customers the ability to support mixed AI and graphics workloads on common infrastructure, the company said, adding that this could be “ideal for deep learning inference and computer-aided design.”
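As a rough illustration of the deep learning inference half of that claim, the snippet below (assuming PyTorch with CUDA support is installed; the model and input batch are placeholders, not anything Dell-specific) selects the GPU if one is visible and runs a single forward pass.

```python
# Minimal sketch of running inference on an Nvidia GPU with PyTorch.
# The model and input here are stand-ins, not anything Dell- or
# Omnia-specific; PyTorch must be installed with CUDA support.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("Running on:", torch.cuda.get_device_name(0) if device.type == "cuda" else "CPU")

model = torch.nn.Linear(1024, 10).to(device).eval()  # stand-in for a real network
batch = torch.randn(32, 1024, device=device)         # stand-in input batch

with torch.no_grad():  # inference mode: no gradients tracked
    predictions = model(batch)

print("Output shape:", tuple(predictions.shape))
```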
