5 Key Takeaways from AI Summit NY 2023

Here are the highlights from our AI Summit, which saw record crowds and standing-room-only sessions

Deborah Yao, Editor

December 11, 2023

5 Min Read

This year’s AI Summit New York, organized by the parent division of AI Business, drew record crowds, with many sessions standing room only.

For good reason: Generative AI is the talk of the tech community, with developers, data scientists and business managers trying to figure out how best to apply it in their companies.


If you missed the event, here are the key takeaways:

1. OpenAI, hyperscalers see all companies using generative AI in 10 years.

"I think it will be 100%,” said Adam Goldberg, member of the GTM team, ChatGPT Enterprise, at OpenAI. “The next two to three years are going to be wild. You are going to have organizations that aggressively repave their business processes of all shapes and sizes, and whether they build something or they buy something … (there will be) material change for every organization that is using it in significant ways.”

Sy Choudhury, director of business development, AI partnerships at Meta, agreed, adding that enterprises are experimenting with smaller models that tend to be less expensive.

Salman Taherian, generative AI partner lead (EMEA) for AWS, said he sees three trends coming: generative models will become smaller, more efficient and better performing; there will be many more fine-tuned and customized models for specific industries; and multiple large language models (LLMs) will increasingly be used in combination.

Taherian cited a Gartner forecast that $3 trillion will be spent on AI over the next four years, with generative AI capturing a third of that spending.

Hitesh Wadhwa, Google Cloud business and sales leader and GenAI ambassador, said, “Now it is the AI era. The way you do not see anybody not using the internet or not using a mobile phone (today), eight years from now you will not see anybody not using AI.”

2. Generative AI will stay in the ‘copilot’ stage for a while until its risks are adequately addressed.

“There is a confidence problem” among top leaders in the use of large language models, given their attendant risks around security, copyright, hallucinations and more, said Lucinda Linde, senior data scientist at Ironside, a technology consultancy.

Gaurav Dhama, director of product development, AI, at Mastercard, sees generative AI staying in the copilot phase, assisting humans rather than acting autonomously, “for a long time,” especially for companies in heavily regulated industries such as financial services. That means a human will remain in the loop for a while.

Generative AI can introduce security vulnerabilities, especially when it is used to write code. “We use it carefully and the skill of the programmers using it should be higher,” Dhama added.

3. Multimodal models in generative AI will supercharge the tech.

Kanika Narang, senior AI research scientist at Meta, said what is next for large language models is multimodality.

“I’m very excited about this,” she said. During pre-training, the model aligns different modalities, such as images, video and audio, with language. “It can be used to power a lot of applications” such as visual question-answering.

For example, a user could upload an image of an alcoholic beverage and ask the model not only to identify it but also to suggest recipes it can be used in, she said.
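To make that visual question-answering pattern concrete, here is a minimal sketch using OpenAI’s Python SDK and a vision-capable chat model. It illustrates the general pattern Narang describes, not Meta’s own research systems; the model name and image URL are placeholders.

```python
# Minimal visual question-answering sketch (illustrative only).
# Assumes the OpenAI Python SDK (v1+) and a vision-capable chat model;
# the model name and image URL below are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # assumed vision-capable model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "What beverage is in this photo, and what recipes could it be used in?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/bottle.jpg"}},  # placeholder image
        ],
    }],
    max_tokens=300,
)

print(response.choices[0].message.content)
```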

Another example: when a bicyclist asks the AI for directions, the model can understand that the user is on a bike and identify bike-friendly lanes to use.

Other applications include health care, where medical imaging analysis can yield more holistic patient reporting and diagnosis, Narang said.

4. OpenAI's drama underscores the importance of using multiple models.

Ironside’s Linde said the recent firing and rehiring of OpenAI CEO Sam Altman shows the importance of using multiple models, not just OpenAI’s, to avoid being overly reliant on one company.

Currently, 95% of generative AI code being written uses OpenAI’s tech, she said. Another reason to use multiple models: some models do certain things better than others.

Meanwhile, OpenAI continues to see strong demand for its products. Goldberg said OpenAI is working with a “handful” of clients to create custom models for them, with a waiting list of “a couple of hundred” more.

Goldberg said the GPT Store is coming in Q1. GPTs are custom versions of ChatGPT that companies can build for specific purposes. The GPT Store will be a marketplace for GPTs, similar to an app store. After Q1, users will be able to sell the GPTs they have created.

However, he gave no date for when OpenAI will reopen its ChatGPT Plus service to new subscribers. Registration closed after the service was “absolutely crushed with demand” following OpenAI’s developer conference, and the company suspended new subscriptions rather than risk a degraded experience for all users, he said.

5. Companies are taking safeguards earlier in AI than in other technological revolutions.

Companies are bringing in their legal teams as early as the proof-of-concept stage of AI projects to flag risks, according to AWS’s Taherian. This is “much earlier than what we have had before in AI/ML,” he said.

Communications teams are also being brought in early to handle any PR fallout from misbehaving AI. “It’s a different world now,” said Meta’s Choudhury.

Companies are also being proactive in another way. For example, Meta, as part of its responsible AI practices, puts prompts through four Llama models for safety and performance before the output is generated. 

If a user types a prompt into WhatsApp, one Llama model is used to understand the user’s intent. Next, the prompt goes to a Llama safety model that checks whether it is legitimate or an attempt to trick the generative AI. Third, it goes to a model that answers the prompt. Lastly, a fourth model cleans up the answer to make it “crisp and nice,” Choudhury said.
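A minimal sketch of that four-stage flow might look like the following. The model names and the call_llama() helper are hypothetical stand-ins, since Meta has not published this exact interface.

```python
# Hypothetical sketch of the four-stage Llama pipeline Choudhury describes.
# The model names and call_llama() helper are illustrative stand-ins.

def call_llama(model: str, prompt: str) -> str:
    """Placeholder for an inference call to a hosted Llama model."""
    raise NotImplementedError("wire this to your own Llama inference endpoint")

def answer_whatsapp_prompt(user_prompt: str) -> str:
    # 1. Understand the intent behind the user's prompt.
    intent = call_llama("llama-intent", user_prompt)

    # 2. Safety check: is the prompt legitimate, or is it trying to trick the model?
    verdict = call_llama("llama-safety", f"Intent: {intent}\nPrompt: {user_prompt}")
    if verdict.strip().lower() != "safe":
        return "Sorry, I can't help with that."

    # 3. Generate the actual answer.
    draft = call_llama("llama-answer", user_prompt)

    # 4. Clean up the draft so the reply comes back "crisp and nice."
    return call_llama("llama-polish", f"Rewrite this answer clearly and concisely:\n{draft}")
```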

About the Author

Deborah Yao

Editor

Deborah Yao runs the day-to-day operations of AI Business. She is a Stanford grad who has worked at Amazon, Wharton School and Associated Press.

