At Google’s annual developer conference, Google I/O, the company showcased some of their new applications of AI and machine learning. Here are their biggest AI-related announcements from the show.
Google’s annual developer conference, Google I/O, was held at the company’s Mountain View headquarters this week and demonstrated the huge strides Google is making in Artificial Intelligence (AI) and machine learning.
Despite not announcing any new hardware, Google showed off a plethora of innovative applications to a packed crowd, and here at AI Business, we’ve selected the most impressive AI-related announcements from the show.
The new and improved Google Photos app
Google Photos is currently used by over 500 million people. It’s already a very popular app, yet Google has made it even more impressive through machine learning.
The app can now sort through your photos and, using Google’s image recognition software, accurately identify what’s inside your snapshots. It can then group relevant photos together, which will enable you to share your pictures with ease.
During the conference, Google spoke of how people take a huge number of photos, only to do absolutely nothing with the large majority of them. The new image recognition software can now spot somebody in your photos, your good friend Axel for instance, and will then automatically suggest that you send the picture to him.
Shared Libraries goes beyond this. You can now create a shared library with your wife or husband, and it will give you the option of automatically sharing any photos of your children with one another.
This is a lovely little feature, yet some people may be worried about Google invading their privacy. Google assured the crowd that there will be no unexpected sharing of photos you want to remain private.
Improved Google Home (and Google Assistant coming to iPhone)
We’re cheating a bit here, but there’s a reason we’ve combined these two announcements. Google announced that it will be releasing an SDK (software development kit) which will allow third-party developers to integrate Google Assistant into their own apps and other products.
This means that Google Assistant will now be seen on Apple devices. It goes a long way towards proving that Google is more focused on getting its software onto as many platforms as possible than on selling its own products. Google is the first of the big tech companies to do this, though Amazon has also announced that it will be doing something similar with its virtual assistant, Alexa.
Speaking of Amazon, Google Home still has some way to go before it can close the gap between the two voice-activated speakers. Yet Google is working on it. At Google I/O the company announced that you can now make phone calls through your Google Home speaker. Moreover, thanks to its voice recognition capabilities, Google Assistant will know which member of the family made the request and place the call via their own number.
VPS (visual positioning system)
Over the years we have become very familiar with GPS. We can’t even begin to fathom how we would know where we’re going without it. Go back to physical maps? You must be joking! Google announced the next step in travel, VPS (visual positioning system).
This new system uses Tango’s 3D visualisation technology to locate objects around you and pinpoint the user’s exact position. Google claimed that VPS is accurate to within a few centimetres and can, for instance, help people locate products in a shop.
“GPS can get you to the door,” explained Google’s head of virtual reality, Clay Bavor, on stage, “and then VPS can get you to the exact item that you’re looking for.”
The only real problem with VPS at the moment is that so few phones are equipped with Tango’s technology; so far only Lenovo has released Tango-enabled smartphones. Hopefully more phones will be able to run VPS in the future, but for now we’ll just have to wait and see.
Google Lens
Google Lens won’t be available for a while, but it promises to be groundbreaking. It was the centrepiece of the Google I/O conference, and rightly so. The app uses image recognition to identify the objects you are viewing through your camera lens in real-time.
This means that you can point your phone’s camera towards a shop, restaurant, menu… anything really, and get accurate information on what you are looking at. An example they highlighted on stage was pointing your lens towards a restaurant and its reviews will pop up on your screen.
Another food-related feature we found particularly useful is Google Lens’ ability to look at a menu in a different language and show you an image of the dish you are reading about.
Yet Google Lens’ real winning feature is that you can point it at a router’s label showing the wifi network name and password, and it will use image recognition to connect you to the router’s network automatically. This announcement was met with whoops of joy from those in attendance.