December 8, 2023
The leading tech companies in AI may have different views about how they each approach generative AI, but they all agree on one thing: In a decade, all enterprises will be using it.
“I think it will be 100%,” said Adam Goldberg, member of the GTM team, ChatGPT Enterprise, at OpenAI, during a panel discussion at AI Summit New York that included AI executives from Google, Meta, Amazon and Deloitte.
“The next two to three years are going to be wild. You are going to have organizations that aggressively repave their business processes of all shapes and sizes, and whether they build something or they buy something … (there will be) material change for every organization that is using it in significant ways,” Goldberg continued.
Sy Choudhury, director of business development, AI partnerships at Meta, agreed. He said that enterprises are experimenting with smaller models that tend to be less expensive.
“I think over the next year, you are going to continue to see innovation in that area,” Choudhury said. “Why that is important for the enterprise is because you want efficient models in order to make sure when you are running (a model), your costs are not going through the roof.”
Salman Taherian, generative AI partner lead (EMEA) for AWS, said he sees three trends coming: generative models will become smaller, more efficient and offer better performance; there will be a lot more fine-tuned and customized models for industries; and the increasing use of multiple large language models (LLMs) in combination.
For example, one LLM can be used to fact-check another LLM's output to reduce hallucinations, the tendency of an LLM to generate incorrect information while sounding convincing.
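The cross-checking idea above can be sketched in a few lines. This is a minimal illustration of the pattern, not any vendor's implementation: `call_llm` is a hypothetical stand-in for a chat-completion API, stubbed here with canned replies so the orchestration logic can run on its own.

```python
# Sketch of LLM cross-checking: a second model verifies the first
# model's draft answer before it is returned to the user.
# call_llm is a hypothetical placeholder for any LLM API; it is
# stubbed with canned replies purely for illustration.

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call, stubbed with canned replies."""
    if prompt.startswith("Verify:"):
        # The verifier model replies SUPPORTED or UNSUPPORTED.
        return "SUPPORTED"
    return "The capital of France is Paris."

def answer_with_verification(question: str) -> str:
    draft = call_llm(question)  # first model drafts an answer
    verdict = call_llm(         # second model checks the draft
        f"Verify: Is the following answer to '{question}' factually "
        f"supported? Reply SUPPORTED or UNSUPPORTED.\n{draft}"
    )
    if verdict.strip() == "SUPPORTED":
        return draft
    # Fall back rather than surface a possible hallucination.
    return "I could not verify an answer to that question."

print(answer_with_verification("What is the capital of France?"))
```

In practice the two calls would go to different models (or the same model with different prompts), and the verifier's verdict gates whether the draft ever reaches the user.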
Taherian cited a Gartner forecast that said $3 trillion will be spent on AI over the next four years and generative AI will capture a third of that spending.
Oz Karan, a partner in Deloitte's risk and financial advisory practice and its trustworthy AI leader, said the real jump in adoption would be among users rather than companies.
“The more we can trust these technologies, the more that we see them in our everyday lives, we are going to be a lot more comfortable” in using them, he said. “Five years from now, 10 years from now, as more regulations come into play, as more controls are established around these models, I think user adoption will equally increase. The 2x or 3x or whatever percentage jump that we are going to see is going to be more noticeable in user adoption than it is in the percentage of businesses that are using it.”
Such is the nature of the technology curve. “It was the internet era. It was the mobile era and now it is the AI era. The way you do not see anybody not using the internet or not using a mobile phone (today), eight years from now, it would not be possible to see nobody using AI,” said Hitesh Wadhwa, Google Cloud business and sales leader and GenAI ambassador.
Given the current level of interest and adoption in generative AI, OpenAI’s mission to build artificial general intelligence (AGI), in which machines will learn to think broadly like a human, is “almost the wrong focus,” Goldberg said, adding that within the company “there are lots of opinions about that.”
But a hindrance to faster growth in generative AI is its slew of risks, including hallucinations, copyright infringement, privacy, security and bias. Meta’s Choudhury said the company’s latest effort in responsible AI is Purple Llama, a program encapsulating trust and safety tools for generative AI.
Why purple? “In the world of security, there is red-teaming,” where hackers try to break into the model, while “blue-teamers” try to protect it, he said. Blending the two colors yields purple, signifying an approach that combines both. The Llama part of the name comes from Meta’s family of large language models.
Purple Llama aims to help developers adhere to Meta’s responsible use guide of its LLMs. The first releases under Purple Llama are CyberSec Eval, which is a set of cybersecurity evaluation benchmarks for LLMs, and Llama Guard, which filters inputs and outputs and classifies them for safety.
Meta also recently partnered with IBM to form the AI Alliance, whose more than 50 founding members include AMD, AWS, Google Cloud, Hugging Face, Intel, Lightning AI, Microsoft, MLCommons, Nvidia, Scale AI and others.
The goal of the alliance is to collaborate and share information to innovate faster as well as identify and mitigate risks before products are released to the public. OpenAI and Microsoft are notably absent from the list.
At Meta, as part of its responsible AI practices, the company puts prompts through four Llama models for safety and performance before the output is generated.
For example, if a user types in a prompt in WhatsApp, one Llama model is used to understand the intent of the user. Next, it goes to a Llama safety model to check if the prompt is legitimate or it is trying to trick the generative AI. Third, it goes to a model that answers the prompt. Lastly, the fourth model cleans up the answer to make it “crisp and nice.”
“I mention this because in order to provide a safe and desirable and an enjoyable experience to the end consumer, we go through all these steps to make sure that this system is put together … to deliver essentially what you think of as generative AI,” Choudhury said.
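The four-stage flow Choudhury describes can be sketched as below. Each stage is a stub standing in for a separate model call; the function names, routing logic, and safety heuristic are illustrative assumptions, not Meta's actual implementation.

```python
# Illustrative sketch of the four-stage pipeline described above:
# intent -> safety screen -> answer -> polish.
# Every function here is a stub for what would be a separate model
# call in production; none of this is Meta's code.

def classify_intent(prompt: str) -> str:
    # Stage 1: a model would label what the user is asking for.
    return "question"

def is_safe(prompt: str) -> bool:
    # Stage 2: a safety model would screen for jailbreak attempts;
    # this keyword check is a toy stand-in.
    return "ignore previous instructions" not in prompt.lower()

def generate_answer(prompt: str, intent: str) -> str:
    # Stage 3: the main model answers the prompt.
    return f"Draft answer to: {prompt}"

def polish(draft: str) -> str:
    # Stage 4: a final model tidies the output for the user.
    return draft.strip()

def respond(prompt: str) -> str:
    intent = classify_intent(prompt)
    if not is_safe(prompt):
        return "Sorry, I can't help with that."
    draft = generate_answer(prompt, intent)
    return polish(draft)

print(respond("what is generative AI?"))
```

The point of the structure is that no single model both interprets and answers the prompt: the safety screen sits between the user and the generator, and the final pass shapes the output before it is shown.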
The proactiveness also goes beyond tech. For example, legal teams are now being brought into the development process early, even as early as the proof-of-concept stage, to flag risks. This is “much earlier than what we have had before in AI/ML,” according to AWS’s Taherian.
Communications teams are also being brought in early to handle any PR fallout from misbehaving AI. “It’s a different world now,” Choudhury said.