IBM Doubles Down on Enterprise AI With New Watsonx Assistants, Models

At Think 2024, IBM unveils new assistant features to help developers and open sources its Granite AI models

Ben Wodecki, Jr. Editor

May 21, 2024


IBM has unveiled new updates to its generative AI platform watsonx, including new assistant tools, third-party models and a wider commitment to open source AI.

At the company’s annual Think conference, IBM announced a shift in focus toward supporting open source.

The company has made its family of Granite large language models open source, meaning businesses can use them to power their commercial applications.

The family of newly open sourced models includes powerful coding models developers can use to perform bug fixes, generate code and maintain repositories across 116 programming languages.

Granite was previously locked away under watsonx, but IBM announced at Think that it wants businesses to use the models to “push the boundaries of what AI can achieve in enterprise environments.”

“We firmly believe in bringing open innovation to AI,” said Arvind Krishna, IBM’s CEO. “We want to use the power of open source to do with AI what was successfully done with Linux and OpenShift.

“Open means choice. Open means more eyes on the code, more minds on the problems and more hands on the solutions. For any technology to gain velocity and become ubiquitous, you’ve got to balance three things: competition, innovation and safety. Open source is a great way to achieve all three.”


The open source Granite models can be found on Hugging Face and GitHub under an Apache 2.0 license, allowing users to create their own proprietary software and offer the licensed code to customers.

New Watsonx Assistants, Extended GPU Support

IBM also unveiled new AI assistant tools in its watsonx platform to improve user workflows.

The new assistants include watsonx Code Assistant for Enterprise Java Applications and watsonx Code Assistant for Z, which focuses on mainframe applications.

The new code generation assistants are designed to enhance enterprise developers' productivity, helping them improve code quality and streamline their workloads by automating tasks like applying fixes.

Watsonx is also getting a feature that lets users build their own AI assistants. Coming soon to watsonx Orchestrate, it will enable users to craft assistants tailored to their specific markets.

IBM also unveiled plans to expand its GPU offerings to cover Nvidia’s L40S and L4 Tensor Core GPUs, enabling watsonx users to access powerful hardware to process their workloads. Enterprises can also benefit from extended support for Red Hat Enterprise Linux AI (RHEL AI) and OpenShift AI to power their AI workloads.


Watsonx Gets New Models, Tools

Along with the newly open sourced Granite models, watsonx users already had access to a host of AI models to power their work, including StarCoder and Meta’s Llama 2.

At Think, IBM announced even more third-party models were being added to the platform, including Meta’s Llama 3.

Other models now covered by the AI platform include ALLaM, an Arabic-focused language model, and Mistral AI’s Large model.

Users can pick from a variety of models to run their applications. The platform is also being extended beyond IBM in partnership with other vendors.

Among them is AWS, with IBM’s watsonx.governance tool being made available to Amazon SageMaker users, enabling them to monitor and manage AI tools for risk and compliance issues.

Adobe is also teaming up with IBM to bring Red Hat OpenShift and watsonx to the Adobe Experience Platform, enabling users to access IBM’s AI features in Adobe’s design platforms.

IBM’s “Fixation” on Enterprise

IBM’s Think event followed OpenAI’s Spring Update and Google I/O, where both companies unveiled consumer-focused AI agents.

Unlike those companies, IBM’s focus has “always been unashamedly on the enterprise,” according to Kareem Yusuf, IBM Software’s senior vice president for product management and growth.

“When we engage our customers, our focus remains that the products we build are enabling our clients to achieve the goals that they need while addressing the problems that they've expressed to us,” Yusuf said during a press briefing. “That in my mind is the tone that surrounds our conference and it's our ongoing fixation at all times.”

The AI agents demonstrated by OpenAI and Google leverage a variety of underlying models and modalities. Mohamad Ali, IBM Consulting’s chief operating officer, said a similar multi-model approach would help power business applications.

“There are very few clients that I go to where they say, 'I'm only going to use one model' because you can't. Models are good for different things… the model that is good for most things could be the most expensive and then you have to make that trade-off if you're going to deploy 200,000 agents or something,” Ali said. “We built IBM Consulting Advantage which allows us to utilize almost any model that exists. The assistants that we use, they do use a lot of different models. This idea of multi-model is extraordinarily important.”


About the Author(s)

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.
