
OpenAI launches Triton, a programming language for building neural networks

To take on Nvidia's CUDA

Artificial intelligence startup OpenAI has released a new programming language to help developers build machine learning models.

The company claims that the language is easier to use than Nvidia's CUDA, a dominant framework that is tied to the GPU company's hardware.

Offloading optimization to AI

"Triton has made it possible for OpenAI researchers with no GPU experience to write screaming-fast GPU code," OpenAI's CTO Greg Brockman said.

"Makes it not-so-scary to try out ideas that are outside the bounds of what PyTorch provides natively."

OpenAI says that its open-source Python-like programming language will allow researchers with no CUDA experience to write highly efficient GPU code, often on par with what an expert could manage.

CUDA programming requires understanding the three core components of a GPU - DRAM, SRAM, and Arithmetic Logic Units (ALUs) - and ensuring that they work optimally together.

Memory transfers from DRAM must be coalesced into large transactions, data must be manually stashed in SRAM before being reused, and computations must be carefully partitioned and scheduled to make effective use of the ALUs.

This is a lot of work, OpenAI argues, and work that most programmers struggle to do effectively.
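
For a sense of what the abstraction looks like in practice, here is a minimal vector-addition kernel in Triton's Python-like syntax, modelled on OpenAI's published tutorial examples (the names and the block size of 1,024 are illustrative). Notably, none of the coalescing, SRAM staging, or scheduling described above appears explicitly; the compiler handles it.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one contiguous block of elements.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard against reading past the end
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = out.numel()
    # Launch one kernel instance per block of 1,024 elements.
    grid = lambda meta: (triton.cdiv(n, meta["BLOCK_SIZE"]),)
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out
```

Calling add on two CUDA tensors, for instance torch.rand(98432, device="cuda"), launches the compiled kernel; Triton decides how the loads and stores are coalesced and scheduled.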

"The purpose of Triton is to fully automate these optimizations, so that developers can better focus on the high-level logic of their parallel code," OpenAI scientist Philippe Tillet said in a blog post.

This, Tillet claimed, would allow for modifications and efficiencies that "would be out-of-reach for developers without exceptional GPU programming expertise."
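
One concrete example of such an efficiency is kernel fusion: computing an operation like softmax in a single pass over GPU memory, rather than the several separate kernel launches a framework would typically issue for the unfused computation. The sketch below follows the style of Triton's fused-softmax tutorial; it assumes contiguous rows and that each row fits in one block (Triton requires BLOCK_SIZE to be a power of two).

```python
import torch
import triton
import triton.language as tl

@triton.jit
def softmax_kernel(out_ptr, in_ptr, row_stride, n_cols, BLOCK_SIZE: tl.constexpr):
    # One program instance per row; the whole row stays on-chip between steps.
    row = tl.program_id(axis=0)
    offsets = tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_cols
    x = tl.load(in_ptr + row * row_stride + offsets, mask=mask, other=-float("inf"))
    x = x - tl.max(x, axis=0)  # subtract the row max for numerical stability
    numerator = tl.exp(x)
    denominator = tl.sum(numerator, axis=0)
    tl.store(out_ptr + row * row_stride + offsets, numerator / denominator, mask=mask)

def softmax(x: torch.Tensor) -> torch.Tensor:
    n_rows, n_cols = x.shape
    out = torch.empty_like(x)  # same strides as x, so one row_stride serves both
    # Round the row length up to the next power of two, as tl.arange requires.
    softmax_kernel[(n_rows,)](out, x, x.stride(0), n_cols,
                              BLOCK_SIZE=triton.next_power_of_2(n_cols))
    return out
```

Reading and writing each row exactly once is the kind of memory-bandwidth saving the blog post highlights, and the kind of fusion that is tedious to write by hand in CUDA.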

Like CUDA, however, Triton does not run on CPUs or AMD GPUs. "But we welcome community contributions aimed at addressing this limitation," Tillet said.

Tillet began working on Triton as a graduate student at Harvard University and continued the project after joining OpenAI.

Other staff members at the company also helped, as did Da Yan of the Hong Kong University of Science and Technology, the team working on Microsoft's DeepSpeed optimization library, and Anthropic.

The latter was formed by eleven OpenAI employees who broke away from the Microsoft-backed firm earlier this year.
