Investors disappointed with Google’s presentation after warmer reception for Microsoft’s event

Ben Wodecki, Jr. Editor

February 8, 2023

5 Min Read

At a Glance

  • Google showed how it is applying AI to Lens and Maps.
  • Google demoed new AR overlay for city navigation.
  • Bard is to remain in the testing phase as LaMDA API is set for launch.

This year marks the 25th anniversary of Google, founded by Stanford students Larry Page and Sergey Brin, whose creation became the web’s most-used search engine.

It is also the year when the tech giant arguably saw its first serious challenge to search, its core business. With the phenomenal success of ChatGPT, Microsoft said yesterday it will incorporate the large language model into Bing and Edge to give Google Search a run for its money.

In a dueling presentation today called ‘Live from Paris,’ Google showcased its own AI strategy days after revealing its rival to ChatGPT, a chatbot dubbed Bard. At the event, it unveiled new AR tools for Maps, improved translation tools and updates to Lens, among other capabilities.

But investors were not impressed with a presentation that, compared to Microsoft’s event a day earlier, was marred by technical glitches. Shares of Google fell steeply, down 7.4% to $100, and trading volume doubled from its daily average, a signal that many investors were selling the stock. Google lost more than $100 billion in market value in a single day. In contrast, Microsoft saw its stock rise 4% on the day it held its AI event.

Generative AI in Search

Arguably the most anticipated news at the event was further detail on Bard, but Google did not disclose more than it already had. The company reiterated that a “lightweight” version of the model is being made available in an initial testing phase before a full launch.

Related:Microsoft Adds ChatGPT Capabilities to Bing and Edge

However, Google did reveal that one potential use case for applying generative AI to search is generating multiple results for a query, according to Prabhakar Raghavan, senior vice president at Google responsible for Search and Google Assistant, who also led the event. Apps like Bard and ChatGPT currently generate only one response per query.

The rationale for multiple answers is NORA, or ‘no one right answer,’ according to Raghavan. Many questions can be answered in different ways, such as ‘What time of the year is it best to go stargazing?’ The new Search will therefore show ChatGPT-like answers alongside traditional search links.

Raghavan also confirmed that developers would soon be given access to a suite of tools and APIs to make AI-powered apps, which would include LaMDA, the model behind Bard. The chatbot itself will not be part of that API suite initially, however.

He also showcased how AI can generate a 360-degree view of a product from just a handful of still images, compared to the current method of using hundreds of product photos and “costly” technology. This functionality could be used on online shopping websites, for example.

Related:Google Unveils Bard: Its Version of ChatGPT

“The potential for generative AI goes far beyond language and text,” Raghavan said.

Lens and Live View

A key showcase was Google Lens, the image recognition technology first released back in 2017. Using Lens, Google users can search for anything on the web by taking a picture of what they are looking for or using an existing photo from their device’s library.

Raghavan said Lens is used 10 billion times a month, which “signals that visual search has moved from a novelty to reality.”

In an update unveiled at ‘Live from Paris,’ users will be able to search for items in photos or videos.

Elizabeth Reid, vice president of engineering at Google, demoed the functionality by searching for the name of a building shown in a video. Lens identified the building.

"If you can see it, you can search it,” Reid said, adding that this new Lens capability is being made available “in the coming months.”

Google Multisearch, which lets users in the U.S. search for an item in a different color, brand or visual attribute, also got some love at the event, with Reid announcing the feature is set to be rolled out globally for any image search results.

Reid also unveiled a new "Near Me" function, where users search for an image of a food item, for example, and find stores locally that sell it. Near Me will only be available in the U.S. for the time being.

Related:Google Goes to War: CEO Outlines Generative AI Strategy

Immersive View

Another AI-augmented offering showcased was Immersive View, a mixed reality overlay for Google Maps.

Users can go on Google Maps and view directions, with AR arrows imprinted onto real-world environments on their devices. Immersive View uses AI to “fuse billions of images from Street View to create 3D representations of the real world.” It can even show users what a destination will look like at different times of the day and under various weather conditions, as well as when it is likely to be less crowded.

Immersive View users can even see inside restaurants to explore venues before visiting. The feature is being rolled out in a handful of cities, including London, Paris and Tokyo, before expanding further.

Also showcased was Indoor Live View, a similar AR offering for indoor environments that helps users quickly spot things like baggage claim, the nearest elevator and food courts. It is being rolled out in 1,000 new venues, including airports, train stations and shopping malls.

AI-powered arts

Also announced at the Paris event was a series of AI-powered arts and culture tools and apps.

Google gave an update on Woolaroo, which launched in May 2021 as an AI-powered photo-translation platform designed to identify photos of objects and inform users of their names in several endangered languages.

Now, Google has turned its attention to using such tools for tasks like preserving women’s contributions to science. The arts and culture team used AI to analyze images and historical scientific records to bring to light previously overlooked achievements by women scientists.

Google also used AI tools to study famous paintings, providing users an AR experience that lets them view the art right down to the brush stroke using their handsets.


About the Author(s)

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.
