Switch between Nvidia and AMD chips with AITemplate
AI researchers from Facebook parent Meta are open-sourcing AI tools designed to make it easier to switch between chips.
Based on PyTorch, the AITemplate engine lets developers switch seamlessly between Nvidia and AMD hardware when deploying models.
“With AIT, it is now possible to run performant inference on hardware from both GPU providers,” according to Meta.
The Facebook parent said that AITemplate can boost performance levels – citing up to a twelvefold increase on Nvidia GPUs such as the A100 and up to a fourfold increase on AMD GPUs such as the MI250 chip.
Meta explains how the system works: “The tool has two layers — a front-end layer, where it performs various graph transformations to optimize the graph, and a back-end layer, where it generates C++ kernel templates for the GPU target.”
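The two-layer design Meta describes can be illustrated with a toy sketch. This is hypothetical code, not AITemplate's actual API: it models the front-end layer as a pass that fuses adjacent ops in a tiny graph, and the back-end layer as a generator that emits a C++ kernel template string for the chosen GPU target. All function and op names here are invented for illustration.

```python
# Toy sketch of AITemplate's two-layer design (hypothetical, not the real API).
# Front end: graph transformations that optimize the graph.
# Back end: C++ kernel template generation for the GPU target.

# A "graph" here is just a list of op names applied in sequence.
graph = ["matmul", "add_bias", "relu"]

def optimize(graph):
    """Front-end layer: fuse a matmul + add_bias + relu chain into one kernel."""
    fused, i = [], 0
    while i < len(graph):
        if graph[i:i + 3] == ["matmul", "add_bias", "relu"]:
            fused.append("gemm_bias_relu")  # single fused kernel
            i += 3
        else:
            fused.append(graph[i])
            i += 1
    return fused

def codegen(graph, target):
    """Back-end layer: emit a C++ kernel template per op for the target GPU."""
    # AITemplate builds on CUTLASS for Nvidia and Composable Kernel for AMD;
    # the header names below are placeholders.
    header = {"nvidia": "cutlass", "amd": "composable_kernel"}[target]
    lines = [f'#include "{header}.h"']
    for op in graph:
        lines.append(f"template <typename T> void {op}_kernel(const T* in, T* out);")
    return "\n".join(lines)

optimized = optimize(graph)
print(optimized)                      # ['gemm_bias_relu']
print(codegen(optimized, "nvidia"))   # C++ template targeting the Nvidia back end
```

The same optimized graph can be lowered for either vendor by changing only the `target` argument, which is the essence of the unified back-end claim.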
In addition, AIT maintains minimal dependencies on external libraries – meaning the generated runtime library for inference is self-contained, thereby increasing efficiency.
“The unified GPU back-end support gives deep learning developers more hardware vendor choices with minimal migration costs,” Meta claims.
The system is also easy to deploy: because it compiles into a self-contained binary with no external dependencies, AITemplate can run in any environment with compatible hardware. “This simplifies the deployment process and allows practitioners to deploy PyTorch pretrained models easily.”
And because AITemplate increases GPU efficiency, Meta claims its new open source offering is better for the environment. “Studies show that GPU usage can be tied to carbon emissions. AITemplate reduces GPU execution time, which will also reduce emissions.”
AITemplate can be accessed via GitHub.