September 21, 2022
NeMo LLM and BioNeMo to launch in beta
Nvidia is launching tools for customizing AI applications based on large language models that make training processes dramatically faster.
To help developers work with such systems, Nvidia plans to launch new offerings to cover language models ranging in size from three billion parameters, all the way to its own Megatron 530B, which is among the largest monolithic transformer-based language models.
The NeMo Large Language Model Service lets developers tailor several pretrained foundation models via prompt learning on Nvidia-managed infrastructure.
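To illustrate the idea behind prompt learning (this is a conceptual sketch, not the NeMo service API): the pretrained model's weights stay frozen, and only a small set of "soft prompt" parameters prepended to the input is trained for the new task. The toy model, weights and learning rate below are all hypothetical stand-ins for a real LLM.

```python
import numpy as np

# Frozen "foundation model": a fixed linear map from a (prompt + input)
# vector to a scalar score. In a real LLM this would be billions of
# frozen transformer weights.
W = np.array([0.4, -0.2, 0.7, 0.3, 0.1, -0.5])  # frozen, never updated

x = np.array([0.5, -1.0, 2.0])  # task input (fixed here for simplicity)
target = 1.0                    # desired output for this input

prompt = np.zeros(3)            # trainable soft-prompt parameters
lr = 0.2

for _ in range(400):
    full = np.concatenate([prompt, x])  # prepend prompt to the input
    pred = float(W @ full)
    err = pred - target
    # Gradient of 0.5 * err**2 flows only into the prompt; W stays frozen.
    prompt -= lr * err * W[:3]

print(round(float(W @ np.concatenate([prompt, x])), 3))  # → 1.0
```

Because only the tiny prompt vector is optimized while the foundation model is untouched, adapting to a new task takes a small fraction of the compute of full fine-tuning, which is the source of the minutes-versus-weeks speedup the article describes.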
The service enables the deployment of customized AI applications for uses such as content generation, text summarization and chatbots. BioNeMo extends the approach to science-focused applications, like drug discovery.
The new tools enable training processes on related models to take minutes to hours compared with weeks or months, according to Nvidia.
“Large language models hold the potential to transform every industry,” said Jensen Huang, founder and CEO of Nvidia, in a statement. “The ability to tune foundation models puts the power of LLMs within reach of millions of developers who can now create language services and power scientific discoveries without needing to build a massive model from scratch.”
NeMo LLM and BioNeMo services will be available in early access from October. Developers must apply for access to the offerings.
The beta release of the NeMo Megatron framework is available from Nvidia NGC and is optimized to run on Nvidia DGX Foundry and DGX SuperPOD, as well as accelerated cloud services from AWS, Microsoft Azure and Oracle Cloud Infrastructure.