Enterprises Considering Generative AI Should Look to Trusted Names
The first in a series of webinars exploring enterprise application of generative AI.
May 11, 2023
At a Glance
- AI analysts from Omdia advise that businesses looking to adopt generative AI solutions should look to trusted partners.
- Analysts said that big-name partners will have given more consideration to concerns around bias.
Enterprises looking to adopt generative AI solutions should work with a trusted partner to ensure accountability, according to analysts from Omdia.
AI experts from Omdia offered advice to business leaders looking to implement generative AI in the wake of ChatGPT, during a webcast hosted by AI Business.
The analysts said that businesses should look for the “adults in the room.”
“If you work with a company like an SAP, Salesforce or Oracle and you utilize a large language model from them, which they're all building into their line of business solutions, you know that before they finalized that model, they did their best to ensure that it was inclusive, free of bias and representative of actual customers,” said Bradley Shimmin, chief analyst for AI and data analytics at Omdia.
According to Mark Beccue, principal analyst covering AI and natural language processing, Amazon Bedrock was “one to watch.”
Bedrock, unveiled in April, gives users access to a range of AI models from various companies, including Stable Diffusion developer Stability AI and Anthropic, the AI startup founded by former OpenAI researchers.
Bedrock allows enterprises to experiment and think about generative AI “in a way that they would have some help,” Beccue said.
Shimmin added that Bedrock’s benefit is it doesn’t lock users into a single model. “They're saying this is always going to be a heterogeneous world, so let us bring in the models that we know of and let us control them in an MLOps style facility that recognizes risks.”
Existential Risk of AI?
Recently, Geoffrey Hinton, known as one of the 'godfathers of AI,' left Google so he could speak freely about the technology's dangers.
Natalia Modjeska, Omdia’s AI and Intelligent Automation research director, said Hinton’s departure was “hopefully the beginning of a growing trend” where senior AI researchers and practitioners call out the risks associated with deploying AI incorrectly.
Modjeska added that "lots of women are speaking up about bias in AI” including computer scientists and ethicists Timnit Gebru and Margaret Mitchell, two of the authors of the famous ‘On the Dangers of Stochastic Parrots’ paper, for which both were fired from Google.
The Omdia research director warned that not enough attention is being paid to “the dark side of large language models.”
“With technology, like with everything, you have to be able to trust it,” she said. “Trust is the foundation of society and of business. It enables commerce and it enables business. And if we can't trust the output (of generative AI), if we don't even understand how the technology works and we're sharing all kinds of information that we can't even attribute, we don't even know what's true.”
“We could potentially end up living in a very dangerous world where there's zero trust, and going back to Geoff Hinton, that's probably one of the reasons that he decided to leave Google.”
Catch the next webinar in the series
The Omdia analysts were taking part in the first of a series of online webcasts held in conjunction with AI Business.
Catch the audio highlights from the webcast, in which the analysts unpacked the most talked-about elements of generative AI: ChatGPT and Stable Diffusion. The full session is available on demand upon registration.
The next webcast on generative AI will focus on enterprise adoption. It will take place on June 7.