To take on Nvidia’s CUDA
July 30, 2021
Artificial intelligence startup OpenAI has released Triton, a new programming language to help developers write efficient GPU code for machine learning models.
The company claims that the language is easier to use than Nvidia's CUDA, a dominant framework that is tied to the GPU company's hardware.
"Triton has made it possible for OpenAI researchers with no GPU experience to write screaming-fast GPU code," OpenAI's CTO Greg Brockman said.
"Makes it not-so-scary to try out ideas that are outside the bounds of what PyTorch provides natively."
OpenAI says that its open-source Python-like programming language will allow researchers with no CUDA experience to write highly efficient GPU code, often on par with what an expert could manage.
CUDA programming requires understanding the three core components of a GPU - DRAM, SRAM, and Arithmetic Logic Units (ALUs) - and ensuring that they work optimally together.
Memory transfers from DRAM must be coalesced into large transactions, data has to be manually stashed in SRAM, and computations have to be partitioned and scheduled carefully to make effective use of the ALUs.
This is a lot of work, OpenAI argues, and something most programmers struggle to do effectively.
"The purpose of Triton is to fully automate these optimizations, so that developers can better focus on the high-level logic of their parallel code," OpenAI scientist Philippe Tillet said in a blog post.
This, Tillet claimed, would allow for modifications and efficiencies that "would be out-of-reach for developers without exceptional GPU programming expertise."
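To give a sense of what this looks like in practice, here is a minimal sketch of a Triton kernel for adding two vectors, modeled on the style of the project's public tutorials; the kernel name, wrapper function, and block size of 1,024 are illustrative choices, not part of OpenAI's announcement.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance processes one contiguous block of elements.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard against out-of-bounds accesses
    # Triton's compiler takes care of coalescing these loads into large
    # DRAM transactions and staging data on-chip where appropriate.
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Launch enough program instances to cover the whole vector.
    out = torch.empty_like(x)
    n = x.numel()
    grid = (triton.cdiv(n, 1024),)
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out
```

Note that the kernel is written in terms of blocks of values rather than individual GPU threads; the chores described above, coalescing DRAM transfers, stashing data in SRAM, and scheduling work onto the ALUs, are left to Triton's compiler.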
Like CUDA, however, Triton is not available on CPUs or AMD GPUs. "But we welcome community contributions aimed at addressing this limitation," Tillet said.
Tillet began working on Triton as a graduate student at Harvard University, and continued the project after joining OpenAI.
Other staff members at the company also helped, as did Da Yan of the Hong Kong University of Science and Technology, the team working on Microsoft's DeepSpeed optimization library, and Anthropic.
The latter was founded by eleven former OpenAI employees, who left the Microsoft-backed company earlier this year.