AI-enhanced search, PaLM 2 and AI tools for businesses and consumers. Executives mentioned AI over 110 times in the two-hour keynote.

Ben Wodecki, Jr. Editor

May 10, 2023

7 Min Read

At a Glance

  • Google unveiled an AI-enhanced search, a new large language model and a host of new AI tools at its annual I/O conference.
  • The company showcased its latest large language model, PaLM 2, which is designed to be fine-tuned on domain-specific data.
  • Google also unveiled several AI tools for the consumer, including a writing assistant for Gmail and Magic Editor for images.

Google doubled down on AI at the company’s annual I/O developer conference, announcing a slew of AI features and tools, including an update to its large language model, PaLM.

AI was the main focus of I/O’s two-hour keynote, with the term mentioned by various Google executives more than 110 times.

“We are at an exciting inflection point. We have an opportunity to make AI even more helpful for people, for businesses, for communities, for everyone. We've been applying AI to make our products radically more helpful for a while," said CEO Sundar Pichai.

“With generative AI, we are taking the next step with a bold and responsible approach. We are reimagining all our core products, including search."

A big part of the I/O keynote was “making AI helpful for everyone,” with executives keen to stress that the company is taking a responsible approach to AI adoption and development, just a week after pioneering AI researcher Geoffrey Hinton left the company to speak openly about the dangers of AI.

PaLM 2

Among the biggest announcements at I/O 2023 was PaLM 2, the latest iteration of its large language model. Currently in preview, PaLM 2 supports over 100 languages and is designed to be fine-tuned for domain-specific uses and applications.

“PaLM really shines when fine-tuned on domain-specific knowledge,” Pichai said.


For example, during the keynote, Pichai spoke about Sec-PaLM, a cybersecurity-focused large language model built to better detect malicious scripts so security experts can understand and resolve threats. He also touched on Med-PaLM 2, which users can prompt with images, such as X-rays, to identify medical issues. Med-PaLM 2 achieved a ninefold reduction in inaccurate reasoning, approaching the performance of clinicians answering the same set of questions.

No mention was made of PaLM 2's parameter count. The initial iteration of PaLM, unveiled last April, boasted 540 billion parameters and was designed for conversational applications.

PaLM 2 will be available in a variety of sizes, each named after an animal to reflect its scale: Gecko is the smallest, followed by Otter and Bison, up to Unicorn, the largest.

“Gecko is so lightweight that it can work on mobile devices, fast enough for greater interactive applications on devices, even when offline,” the CEO said.

The PaLM 2 family is stronger than the previous iteration in logic and reasoning because the models are trained on scientific and mathematical topics, Pichai said.

In March, the company opened up access to PaLM 1, giving developers the ability to customize the model using synthetic data via the MakerSuite platform, which can be accessed via a browser.

Bard Updates

Also announced at I/O 2023 were updates to Bard, Google’s answer to ChatGPT.

Among them was that Bard is now fully powered by the PaLM family rather than the LaMDA large language model, a switch Pichai teased back in April.

Sissie Hsiao, general manager of Google Assistant and Bard, announced at I/O that the company had removed the waitlist for Bard. Hsiao also said the application is set to become more visual – with users able to find relevant images as part of generated outputs. Users will soon be able to use images to prompt Bard, with Lens coming to Bard shortly.

Bard now also supports two new languages, Japanese and Korean, with plans to expand to 40 languages soon. Bard also boasts code-generation capabilities, with Google showcasing the application's improved understanding of a variety of programming languages.

Also announced was that Bard users will soon be able to export Python code generated in the application to Replit. Coming to Bard as well are tools from both Google and partners, meaning users will be able to call external tools from within the application.

Among the most prominent external tools coming to Bard is Adobe Firefly, a family of AI models for uses like image generation.

Of all the improvements and updates to Bard, however, the biggest cheer was for the launch of a Dark Mode feature.

Also coming to Bard is Gemini, an AI model the company is developing that it says represents the next phase of its AI journey.

Pichai said Gemini is still in training and that it is being “created from the ground up to be multimodal, highly efficient … to enable future innovations like memory and planning.”

“While still early, we are already seeing impressive multimodal capabilities not seen in prior models,” the Google CEO said.

The company is using its in-house chips, known as Tensor Processing Units (TPUs), to train Gemini. Once fine-tuned and tested for safety, Gemini will be available in various sizes and capabilities, Pichai said, though he offered no timeframe.

One point the Google CEO did confirm, however, was that all of the company's AI models will be designed to embed watermarks and metadata in outputs such as images, to help prevent the spread of misinformation.

AI in Search

Showcased during I/O ’23 was the addition of more AI to Google’s bread and butter: Search. A generative AI feature was shown that adds an AI-generated snapshot above the traditional results at the top of the search page.

Under the label of “experiment,” the snapshot generates a response to a user's search query. Users can interact with the snapshot in what the company described as an “integrated search.”

For example, questions on search can now be asked in more natural language, whereas before a user would break questions up into smaller queries to get refined results that they would then compile themselves. The idea, according to Google, is to make search smarter and simpler, with conversational capabilities now in search offering an integrated experience.

Cathy Edwards, Google's vice president of engineering, described it as “search, supercharged.” She demoed the function at I/O, saying that the process would get faster over time.

Vertex AI Updates + Duet AI

I/O ’23 also saw several updates to Vertex AI, Google Cloud’s end-to-end machine learning platform.

Among them was the availability of three new models in Vertex. These new models can be accessed via API, tuned through a simple UI in Generative AI Studio, or deployed to a data science notebook.

The models include the following:

  • Codey: a text-to-code foundation model that can be embedded in an SDK or application to speed development with code generation and code completion, and to improve code quality.
  • Imagen: a text-to-image foundation model that lets organizations generate and customize studio-grade images at scale for any business need.
  • Chirp: a speech-to-text foundation model designed to help organizations engage more deeply and inclusively with customers in their native languages, and to power captioning and voice assistance.

Alongside the Vertex updates, Google also announced Duet AI for Google Cloud, an AI-powered collaborator that can be embedded across Google Cloud interfaces. Duet AI is designed to help users with contextual code completion, similar to GitHub Copilot from rival Microsoft.

Consumer products

While there was a host of enterprise-focused announcements at I/O, Google led its keynote showcasing more consumer-focused AI applications.

Among them was Help Me Write, an expansion of the text completion tool found in Gmail. This new tool drafts emails from users' natural language prompts. Launched to trusted testers back in March, it is expected to be rolled out in a future Workspace update.

Also showcased was Magic Editor, an AI-powered expansion of the Magic Eraser tool seen on Pixel handsets.

As with Magic Eraser, Google Photos users can erase objects in images, but they can also move in-image objects and even generate parts of a scene not captured in the original photo. Magic Editor is coming to Google Photos later this year.

Another new offering is Immersive View for Routes, which expands on Immersive View, a mixed reality overlay for Google Maps showcased back in February at Google's event in Paris. The new tool uses AI to overlay directions onto the XR view and will launch sometime in the summer.

Not mentioned during I/O, however, was Magi, the new search product the company is working on as rival Microsoft nips at its dominance in search. The two companies have since raced to roll out AI products and services, though Pichai has previously said that what matters is not who is first, but that AI is developed sensibly.

However, Google is accelerating its AI charge. It recently consolidated its research division with AI subsidiary DeepMind in a bid to focus its AI efforts. The company also changed its policy of openly sharing its AI research with the public: it will now do so only after a product has been developed.

About the Author(s)

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.

