This Week's Most Read: 5 Key Takeaways from AI Summit NY
Highlights from our sold-out AI Summit New York, Gemini Pro and other top stories
Here are this week's most popular stories about AI:
1. 5 Key Takeaways from AI Summit NY 2023
AI Summit New York, the recently held event in the flagship conference series from AI Business' parent division Informa Tech, drew record crowds, with many sessions standing room only.
Here are a few of the key takeaways; read the full list here.
- OpenAI, hyperscalers see all companies using generative AI in 10 years.
"I think it will be 100%,” said Adam Goldberg, member of the GTM team, ChatGPT Enterprise, at OpenAI. “The next two to three years are going to be wild. You are going to have organizations that aggressively repave their business processes of all shapes and sizes, and whether they build something or they buy something … (there will be) material change for every organization that is using it in significant ways.”
- Generative AI will stay in the 'copilot' stage for a while until its risks are adequately addressed.
Gaurav Dhama, Mastercard's director of product development in AI, sees generative AI staying in the copilot phase, assisting humans rather than acting autonomously, “for a long time,” especially for companies in heavily regulated industries such as financial services. That means a human will remain in the loop for a while.
2. Google Gemini Pro is Coming to Businesses and Developers
Google is giving businesses and developers free access to Gemini Pro, a version of its newest and most capable large language model, through an API.
The Gemini Pro API is available to developers via Google’s free web-based developer tool, AI Studio (formerly MakerSuite). Gemini Pro is also available to enterprises through Google Cloud’s Vertex AI platform. Companies can use it to build applications starting today.
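For developers going the AI Studio route, a first call to the model can be quite small. The sketch below assumes Google’s google-generativeai Python SDK and an API key generated in AI Studio; the prompt and placeholder names are illustrative rather than taken from Google’s announcement.

```python
# Minimal sketch: calling Gemini Pro with an AI Studio API key.
# Assumes the google-generativeai Python SDK (pip install google-generativeai);
# the key placeholder and prompt are illustrative.
import google.generativeai as genai

genai.configure(api_key="YOUR_AI_STUDIO_API_KEY")  # key created in AI Studio

model = genai.GenerativeModel("gemini-pro")  # the text-only Gemini Pro model
response = model.generate_content("Draft a one-paragraph product update for our newsletter.")
print(response.text)
```

Enterprises building on Vertex AI would instead go through Google Cloud’s client libraries and project-level authentication, but the request-and-response pattern is broadly similar.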
Google said it plans to further fine-tune the model in the coming weeks based on user feedback. “We can’t wait to see what developers and enterprises build with Gemini,” the company said in a blog post.
Gemini Pro is already powering Bard, Google’s answer to ChatGPT. The initial version has a relatively small context window of 32,000 tokens, enough for roughly 24,000 words of text (a token averages about three-quarters of an English word). By comparison, GPT-4 Turbo, OpenAI’s newest model, can handle 128,000 tokens. However, Google said later versions of Gemini Pro will have greatly expanded context lengths.
3. Nations Pledge to Make AI ‘Secure by Design.’ Can They Go Beyond Nice Platitudes?
The recent pledge by 18 countries to create AI systems that are “secure by design” is only the beginning of what is necessary to safeguard them, experts say.
By signing on to guidelines on building secure AI published by the U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the U.K. National Cyber Security Centre (NCSC), the nations agreed that companies developing and deploying AI should ensure it is safe for customers and the wider public, and should protect the technology from misuse.
But the pact, which is not legally enforceable, mainly offers broad guidelines. These include monitoring AI systems for misuse, securing data against unauthorized alterations, and conducting thorough evaluations of software providers.
“This is a good start,” Fred Rica, the former head of cyber risk at KPMG and currently a partner at accounting and consulting firm BPM, said in an interview. “It focuses on risks, provides a set of principles, creates alignment, and raises awareness. But those things, while good and important, are a far cry from any sort of prescriptive guidance as to what exactly constitutes ‘secure’ or, even more specifically, ‘secure by design.’”
4. Meta Scientist: How Large Language Models Work, AI Summit NY 2023
Staying with the AI Summit, Kanika Narang, senior AI research scientist at Meta, demystified generative AI during a session.
Unlike other forms of AI, which typically follow predetermined rules, use structured data and are built for specific tasks, generative AI harnesses neural networks to create new and original content on its own, without being explicitly programmed to do so.
As many users well know, generative AI models are “actually so good right now that they can do many different tasks,” she said, such as writing poems or generating realistic images of animals on Mars.
Large language models are the backbone of these systems, especially for text. They are large because they are trained on vast amounts of data. “Think about everything which is out there on the web, all the books that have been printed,” she said. “Humans would take 20,000 years to read all the knowledge encapsulated in these models.”
Pre-training these LLMs on a vast dataset enables them to handle many tasks, such as summarization, translation or question answering.
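That point is easy to demonstrate. The sketch below shows one pretrained text model handling summarization, translation and question answering, steered only by the wording of the prompt. It assumes the Hugging Face transformers library and the public google/flan-t5-small checkpoint, which are illustrative stand-ins and not tools mentioned in the talk.

```python
# Minimal sketch: one pretrained model, several tasks, selected only by the prompt.
# Assumes the Hugging Face transformers library and the public
# google/flan-t5-small checkpoint; both are illustrative choices.
from transformers import pipeline

generator = pipeline("text2text-generation", model="google/flan-t5-small")

prompts = [
    "summarize: Large language models are trained on huge amounts of web text and books.",
    "translate English to German: The conference drew record crowds.",
    "question: What are large language models trained on? context: They are trained on vast amounts of text from the web and from books.",
]

for prompt in prompts:
    result = generator(prompt, max_new_tokens=40)
    print(result[0]["generated_text"])
```

Larger instruction-tuned models follow the same pattern; the bigger the model and its pretraining corpus, the wider the range of tasks a single checkpoint can cover.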
5. AI Startup Roundup: OpenAI Rival Mistral AI Set to Raise $485 Million
Mistral AI is a French rival to OpenAI that has emerged as one of Europe’s most prominent AI startups. It was founded by former scientists from DeepMind and Meta who worked on large language models.
Latest funding: Set to raise around €450 million ($485 million), according to Bloomberg
Lead investor: Andreessen Horowitz
Other investors: General Catalyst, Lightspeed Venture Partners, Bpifrance and others participated in the round. Nvidia and Salesforce contributed €120 million ($129 million) in convertible debt.