July 2, 2021
Also announced: VMware expansion for Dell’s HPC on demand service and Nvidia GPU availability for PowerEdge servers
Dell has launched Omnia, an open source software stack designed to simplify compute-intensive workload deployments.
Developed in collaboration with Intel and Arizona State University (ASU), Omnia automates the provisioning and management of workloads.
It offers a set of Ansible playbooks that speed up the deployment of converged workloads with Kubernetes and Slurm, along with library frameworks, services, and applications.
Developed by Red Hat, Ansible aids application deployment and configuration management; Slurm is a job scheduler for Linux used in several leading supercomputers; Kubernetes is orchestration software for the containers that host the components of modern applications.
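To illustrate the kind of automation involved (this is a hypothetical sketch, not code from the Omnia repository), an Ansible playbook that sets up the Slurm controller on a group of management hosts might look like the following; the host group and package name are assumptions for the example:

```yaml
# Hypothetical sketch of an Ansible playbook — not taken from Omnia.
# Installs and starts the Slurm controller daemon on a "manager" host group.
- hosts: manager
  become: true
  tasks:
    - name: Install the Slurm controller package
      ansible.builtin.package:
        name: slurmctld        # package name varies by Linux distribution
        state: present

    - name: Ensure the Slurm controller service is running and enabled
      ansible.builtin.service:
        name: slurmctld
        state: started
        enabled: true
```

Omnia's playbooks work at this level, chaining many such tasks to provision Kubernetes or Slurm clusters across a fleet of servers.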
“Omnia automatically imprints a software solution onto each server based on the use case — for example, HPC simulations, neural networks for AI, or in-memory graphics processing for data analytics — to reduce deployment time from weeks to minutes,” Dell said in a press release.
“As AI with HPC and data analytics converge, storage and networking configurations have remained in siloes, making it challenging for IT teams to provide required resources for shifting demands,” Peter Manca, Dell’s senior vice president of integrated solutions, said.
“With Dell’s Omnia open source software, teams can dramatically simplify the management of advanced computing workloads, helping them speed research and innovation.”
Arizona State University’s research computing facilities worked alongside Dell to develop the Omnia software.
“It’s been a rewarding effort working on code that will simplify the deployment and management of these complex mixed workloads, at ASU and for the entire advanced computing industry,” Douglas Jennewein, ASU’s senior director of research computing, said.
The Omnia toolkit is available on GitHub under an Apache 2.0 license, which allows users to distribute and modify the code.
In other news, Dell announced an expansion of its HPC on demand services and EMC PowerEdge server line to support VMware virtualization environments.
Dell’s HPC on demand service offers cloud-based access to its PowerEdge R systems on a pay-as-you-go basis. VMware support will allow Dell customers to adopt a hybrid cloud operating model for resource-intensive HPC workloads, the company said.
Mercury Marine, the marine engine division of Brunswick Corp, said it has already used Dell's HPC on demand infrastructure for the computer-aided hydrodynamic simulations behind its new propulsion systems. The company said the setup cut its simulation times from around two days to just two hours.
Dell also announced that Nvidia A30 and A10 Tensor Core GPUs are now available as options for its EMC PowerEdge R750, R750xa, and R7525 servers.
The option to choose Nvidia’s A10 GPUs will give Dell customers the ability to support mixed AI and graphics workloads on common infrastructure, the company said, adding that this could be “ideal for deep learning inference and computer-aided design.”
Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.