In this video, Ian Ferriera and John Curran from Core Scientific discuss the differences between traditional enterprise data centers and data centers built specifically for AI workloads.
The primary distinction is higher power density, which enables data centers to house equipment like the DGX-2, Nvidia’s machine learning supercomputer-in-a-box.
“If you look at a traditional enterprise server, you are drawing down one to two kilowatts; compare that with a DGX-2, which is drawing down 10 kilowatts,” Ferriera told AI Business.
“Our facilities typically have a lot more power – our newest facilities have 130 megawatts, compared to an average data center, which is around five to 25 megawatts. That’s a pretty substantial difference.”
The conversation took place at the AI Summit in San Francisco in September.