At Apple’s annual Worldwide Developers Conference in San Francisco on Monday, executives announced a raft of features in the company’s upcoming desktop and mobile operating systems that are powered by artificial intelligence, that is, the blend of powerful computing capabilities and software algorithms.

According to Apple, implementing AI behind the scenes could make it easier for users to organize their ever-growing photo collections, communicate and use online services more efficiently, and toggle less between devices. The moves also come at a time when tech giants and a wave of new start-ups are racing to create similar artificial-intelligence-based products.

For example, Apple will now use facial recognition to scan users’ photos and cluster pictures of the same person together, while another feature, called Memories, groups photos by location and even shows where they were taken on a map. Behind the scenes, the software performs 11 billion computations on each photo to make this happen, Apple said.

Another AI announcement is that Siri is coming to desktop computers: soon consumers will be able to talk to their Macs just as they talk to their phones.

Overall, more consumers are talking to their technology: Google recently said that roughly 20% of its mobile search queries are initiated by voice rather than typing. While Microsoft’s Cortana has let users talk to their desktops for more than a year, Apple has the advantage of bringing a tool already popular on mobile to the desktop, making the experience of moving between devices more seamless.

Like Cortana, Siri will now scan people’s communications and make suggestions. If the system sees two people discussing a meeting over text message, a calendar icon will pop up, letting the users schedule the meeting from within their texting thread. Apple will also suggest relevant emoji, which will appear larger and more expressive in the new operating system.

In opening major applications to third parties, Apple is nodding to a growing view in Silicon Valley that consumers want an alternative to toggling among the dizzying number of apps stored on their phones. They want to call an Uber or a Lyft, for example, without having to open a separate app; now outside developers will be able to build those services directly into Apple’s messaging platform. Facebook recently launched a similar feature inside its popular messaging app.

In addition to bringing Siri to the desktop, the changes will allow developers to supercharge their apps with Siri’s voice. Users will soon be able to use Slack, Uber, or Skype by talking directly to Siri. This widely anticipated move takes a page from Amazon, whose Alexa voice assistant has for some time allowed third parties to build services onto its platform, so consumers can ask Alexa to read out the weather or connect to smart locks.

Opening up its platforms to third parties has also historically been a point of discomfort for Apple, as the company’s impulse to control the quality and integrity of its own products has butted up against major trends in AI. Those trends emphasize merging more data from third parties to increase the number of services that can be offered on a single platform. The original Siri included integrations with many third parties, which were dissolved after Apple bought the Siri startup in 2010. With Monday’s announcements, Apple has just about come full circle.

