March 21, 2023
At a Glance
- ChatGPT and generative AI were all the buzz at the Gartner Data and Analytics Summit.
- Benefits include the ability of a model to do a task it was not explicitly trained to do, domain adaptability, and accessibility.
- Best practices include creating usage guidelines, monitoring models and ensuring diverse voices are included when building.
The buzz at the Gartner Data and Analytics Summit was all about ChatGPT and generative AI.
One particular presentation at the summit, which drew an estimated 4,600 attendees to the Disney World-abutting Dolphin and Swan Resorts in Orlando this week, laid out the enterprise implications of ChatGPT.
“The response to ChatGPT has been swift and dramatic,” said Arun Chandrasekaran, distinguished VP analyst at Gartner. “Sometimes it’s dangerous.”
To highlight the speed of market adoption, Chandrasekaran said that to attain one million users, it took Twitter two years, Facebook 10 months, Dropbox seven months, Spotify five months and Instagram two and a half months, but it took ChatGPT only five days.
“It got to 100 million users in 45 days,” he said.
Chandrasekaran noted that companies including Google AI, Microsoft, Cohere, Anthropic, AI21 Labs, Alibaba Group, Baidu and Tencent are involved in generative AI, but he mainly focused on OpenAI’s ChatGPT.
As for its benefits, he cited the ability of a model to do a task it was not explicitly trained to do (referred to as emergence), domain adaptability, accessibility and an innovative ecosystem.
In the category of risks of large foundation models like GPT-4, he included copyright issues, their black-box nature, the potential for misuse and hallucination, a confident but false assertion with no basis in reality.
“It has a propensity to hallucinate,” he said, but added that reinforcement learning acts as a curb.
He did highlight the benefits of foundation models that underpin chatbots like ChatGPT. “Foundation models represent a huge step change in the field of AI, due to their massive pretraining, which makes them effective at few-shot and zero-shot learning, enabling them to be versatile,” said Chandrasekaran.
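The few-shot and zero-shot behavior Chandrasekaran describes comes down to how the prompt is built: a zero-shot prompt gives the model only an instruction, while a few-shot prompt prepends a handful of worked examples. A minimal sketch follows; the sentiment-classification task, example reviews and labels are illustrative assumptions, not anything shown at the summit.

```python
def zero_shot_prompt(text: str) -> str:
    """Zero-shot: the model receives only an instruction, no worked examples."""
    return (
        "Classify the sentiment of the following review as positive or negative.\n\n"
        f"Review: {text}\nSentiment:"
    )

def few_shot_prompt(text: str, examples: list[tuple[str, str]]) -> str:
    """Few-shot: a handful of labeled examples precede the actual query."""
    shots = "\n".join(
        f"Review: {review}\nSentiment: {label}" for review, label in examples
    )
    return (
        "Classify the sentiment of each review as positive or negative.\n\n"
        f"{shots}\nReview: {text}\nSentiment:"
    )

# Hypothetical labeled examples for the few-shot case.
EXAMPLES = [
    ("Great battery life and a sharp screen.", "positive"),
    ("Stopped working after two days.", "negative"),
]
```

Because the task is specified entirely in the prompt, the same pretrained model can switch tasks without retraining, which is the versatility Chandrasekaran points to.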
He cited four sets of use cases for foundation models:
NLP: It includes text generation, Q&A, summarization, search, classification, entity extraction, intent recognition, translation, rewriting and text-to-speech.
Computer vision: It includes text-to-image, image classification, object detection, video classification and image-to-text.
Software engineering: It includes text-to-code and code completion.
General science and others: It includes drug discovery, genomic sequencing, chemical formulation and human-robot interaction.
Among the best practices proposed by the analyst were creating usage guidelines to prevent misuse, monitoring models and ensuring diverse voices are brought in when building or deploying.
“Create a feedback loop between AI experts, AI service and all users,” he said.
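The monitoring and feedback-loop practices could be as simple as logging each prompt/response pair along with reviewer feedback. The sketch below is a hypothetical minimal implementation, not a Gartner or vendor tool; the `flagged` field and `flag_rate` metric are assumptions about what a feedback loop might track.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class InteractionLog:
    """Minimal model-monitoring log: records each prompt/response pair
    plus feedback, so reviewers can spot misuse or hallucination."""
    records: list = field(default_factory=list)

    def record(self, prompt: str, response: str, flagged: bool = False) -> None:
        self.records.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "prompt": prompt,
            "response": response,
            "flagged": flagged,  # set by a user or reviewer in the feedback loop
        })

    def flag_rate(self) -> float:
        """Share of logged responses flagged as wrong or unsafe."""
        if not self.records:
            return 0.0
        return sum(r["flagged"] for r in self.records) / len(self.records)
```

A rising flag rate is the kind of signal that would route problem cases back to the AI experts Chandrasekaran mentions.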
For operational best practices, the analyst recommended using pretrained models from APIs, adding that the largest model isn’t always the best fit and businesses should instead look to fine-tune and optimize models.
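As a rough sketch of that recommendation, picking a smaller task-specific model over the largest general one: the endpoint, model names and parameters below are hypothetical placeholders, not a real vendor API.

```python
import json

# Hypothetical model catalog: a smaller model fine-tuned on the task
# often beats the largest general model, at lower cost.
MODELS = {
    "general": "big-base-model",          # largest, most expensive
    "support-tickets": "small-ft-model",  # smaller, fine-tuned on our data
}

def build_request(task: str, prompt: str, max_tokens: int = 256) -> str:
    """Build the JSON body for a (hypothetical) hosted-inference endpoint,
    preferring the task-specific fine-tuned model when one exists."""
    model = MODELS.get(task, MODELS["general"])
    return json.dumps({
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": 0.2,  # lower temperature to favor more conservative output
    })

# e.g. requests.post("https://api.example.com/v1/generate",
#                    data=build_request("support-tickets", "Summarize ticket 123"))
```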
“The raw potential is enormous, but safety and veracity remain questionable,” he said.