With the announcement of the iPhone 8 and iPhone X, it is readily apparent that augmented reality, or AR, is set to become a key new tech space for business innovation. As well as opening up a wealth of new marketing opportunities, AR represents a completely new way of interacting with the physical world through technology.
What many people don’t realise, however, is that AI and machine learning technologies sit at the heart of AR platforms. This was amply demonstrated by Facebook’s AR engine announcement earlier this year: the new Facebook mobile app integrates an on-device deep neural network to enable real-time machine vision. Although the app currently uses these technologies for Snapchat-style filter overlays, the social network giant says it is treating them as the foundation for a long-term pipeline of core AR technologies.
This is a visual example of the augmented reality feature MLBAM is working on for iPhone 8. pic.twitter.com/0CXzJ6cdxD
— Eric Fisher (@EricFisherSBJ) September 12, 2017
Building Augmented Reality Interfaces With AI
Although we are at the beginning of the journey when it comes to these technologies, AI has a vital early role to play in the construction of intelligent adaptive interfaces – at least according to Tyler Lindell, Co-Founder of Holographic Interfaces and a software engineer helping to develop AR interfaces for one of SpaceX’s Hyperloop partners. As the head of a bespoke augmented and mixed reality (A/MR) start-up specialising in interface design and delivery, Tyler believes that AI and augmented reality are ‘ideally suited’ to working together. “In fact, one might go so far as to suggest that AR relies on AI to be effective,” he says. “AI could be used in AR / VR to predict the interface a user might want or need within a given situation, and show options for a User Interface (UI) – or automatically bring up a UI that is perfectly suited and timed.”
This is all thanks to the possibilities that come with machine vision – a “cornerstone” of augmented reality applications. AR is able to deploy AI for object recognition and tracking, as well as gestural input. Gestural recognition and tracking are, of course, what allow people to use their hands to manipulate 2D and 3D objects within a virtual space using specific movements. The incorporation of eye tracking and voice commands as means of manipulating the virtual environment is equally relevant. Ultimately, it is AI that will enable AR interfaces to become truly multidimensional – and generate a whole new layer of insight.
“AI can be used to understand not just that a physical table is a table, but that the bowl of soup on it is a solid container with liquid in it that will spill if the bowl is tilted,” Tyler says. “High speed transportation companies (such as Hyperloop) could use AI to predict earthquakes and other natural disasters and alter speed / route to avoid catastrophe if the prediction did prove accurate. This info could be fed into the display for a remote operator using AR to monitor the status of Hyperloop pods.” This virtual object recognition and understanding, he believes, will be key to the creation of compelling experiences in the future.
The applications of AI within AR are vast, largely because it provides a means for intelligent, multidimensional interaction with a physical environment. One example is real-world object tagging. By generating auditory cues, such as reading out the names of objects within a camera’s field of view, Tyler believes that an AR application could even provide greater accessibility for people with disabilities. “This could help people who have suffered from brain trauma relearn how to read, speak, and recognize the names of objects in the world around them,” he explains. “This could allow someone to interact with others in a different language. It could even extend into providing a sort of vision for people who are blind.”
Beyond Disruption: Getting Started With AI In Business Today
“Although it may seem so at first glance, AI is not limited to tech disruptors. AI is important to nearly every business and industry – it’s just that not all businesses have discovered their use case yet,” Tyler argues. “Disruptors are the first adopters, and perform a great service to the industry by lowering the barriers of entry to the technology. These early adopters are spending the money and the energy working out many of the use cases and problems that later – or smaller – adopters can’t.”
“However, the benefit for these early adopters is that they will have a strong base of knowledge, technology, mindshare, and market share around them that could be more difficult for later adopters to compete with. Eventually, most businesses will want – or need – to adopt AI in some capacity in order to run their business efficiently and compete within their respective markets.”
“Learning AI is tough. Finding good places to get started as a developer is not as easy as it should be. Many of the introduction tutorials are written using lots of dense math.
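As a representative example of the kind of math such tutorials open with (this particular formula is an illustration, not one Tyler cites), here is the backpropagation gradient for a single weight, written out via the chain rule:

```latex
\frac{\partial E}{\partial w_{ij}}
  = \frac{\partial E}{\partial o_j}\,
    \frac{\partial o_j}{\partial \mathrm{net}_j}\,
    \frac{\partial \mathrm{net}_j}{\partial w_{ij}}
  = \delta_j \, o_i,
\qquad
\delta_j =
\begin{cases}
  (o_j - t_j)\,\varphi'(\mathrm{net}_j) & \text{output layer} \\[4pt]
  \bigl(\sum_k \delta_k w_{jk}\bigr)\,\varphi'(\mathrm{net}_j) & \text{hidden layers}
\end{cases}
```

Here $E$ is the error, $o_i$ an activation, $\mathrm{net}_j$ a neuron’s weighted input, $t_j$ a target, and $\varphi$ the activation function – three chained derivatives before a single weight can be updated, which is exactly the wall of notation Tyler is describing.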
“This is much deeper than most developers are willing to go just to learn how to get started with AI. This may lead to many developers not even trying because it may take weeks just to get a ‘Hello World’ going.”
However, while experts in the AI field remain scarce, Tyler is firm in his belief that there are many excellent software engineers who can learn how to implement AI fairly quickly – a gap that shapes the challenges businesses face in attracting great talent.
“A good way to get started is by doing something fun. At Holographic Interfaces we use Unity for AR and VR. If you know how to use Unity, you can use our write-up, Learning How to Implement NEAT AI in Unity, to train some cubes to move around within a 3D physics engine.” NEAT (NeuroEvolution of Augmenting Topologies) is a genetic algorithm that evolves artificial neural networks (ANNs), searching for the best balance between an ANN’s weights and its structure.
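Tyler’s write-up uses Unity, but the evolutionary loop at the heart of NEAT-style training can be sketched in plain Python. The sketch below is deliberately simplified – it evolves only the weights of a fixed, single-neuron “network” (real NEAT also mutates network structure, which is omitted here), with all names and parameters chosen for illustration:

```python
import math
import random

random.seed(42)

TARGET = 0.75       # the behaviour we want the evolved neuron to produce
POP_SIZE = 20
GENERATIONS = 30

def activate(weight, bias, x=1.0):
    """A single sigmoid neuron: the simplest possible 'network'."""
    return 1.0 / (1.0 + math.exp(-(weight * x + bias)))

def fitness(genome):
    """Higher is better: negative distance from the target output."""
    weight, bias = genome
    return -abs(activate(weight, bias) - TARGET)

def mutate(genome, scale=0.3):
    """Jitter each gene: the weight-mutation half of NEAT, without structural mutation."""
    return tuple(g + random.gauss(0.0, scale) for g in genome)

# Start from a random population of (weight, bias) genomes.
population = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(POP_SIZE)]
initial_best = max(fitness(g) for g in population)

for _ in range(GENERATIONS):
    # Rank by fitness and keep the top quarter unchanged as parents (elitism).
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 4]
    # Refill the rest of the population with mutated copies of the parents.
    population = parents + [mutate(random.choice(parents))
                            for _ in range(POP_SIZE - len(parents))]

final_best = max(fitness(g) for g in population)
print(f"best fitness: {initial_best:.4f} -> {final_best:.4f}")
```

Because the elite genomes are carried over unchanged, the best fitness can only improve from one generation to the next; full NEAT layers speciation and structural mutations (new nodes and connections) on top of this same loop.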
“The thing that many companies find challenging is finding great talent. The barrier to entry can seem intimidating – AI seems more challenging because developers believe they must know lots of math to implement neural nets,” Tyler says. “The truth is that if someone knows how to use an Application Programming Interface (API) and a popular programming language, they can implement AI right now.”
For developers who want to get started fast, Tyler recommends Microsoft Cognitive Services, Artificial Intelligence on AWS, and Google Cloud Machine Learning as great places to start. “Once a developer learns how to implement AI, they may be interested in learning more about how to create a Neural Network (NN) on their own and then implement that. There are a number of excellent resources for this, such as Udemy, PluralSight, and Coursera. For developers, if you don’t already know Python, it’s time to get familiar. Many tutorials and implementations utilize Python. It is not a requirement of course, because Neural Nets can be written in almost any language, but in the end your NN will likely need to be written in the production language of your particular software stack.”
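To make “creating a Neural Network on their own” concrete, here is a minimal sketch in Python – plain NumPy rather than any of the cloud services named above, with architecture and hyperparameters chosen purely for illustration. It trains a two-layer network on the classic XOR problem using hand-written backpropagation:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic toy problem that a single neuron cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 4 units, one output unit.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

lr = 0.5
losses = []
for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)        # hidden activations, shape (4, 4)
    out = sigmoid(h @ W2 + b2)      # predictions, shape (4, 1)
    losses.append(float(np.mean((out - y) ** 2)))

    # Backward pass: chain rule through the MSE loss and both sigmoid layers.
    d_out = 2 * (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

The two backward-pass lines are a direct implementation of the chain-rule gradient that intimidates newcomers in tutorial form – which is Tyler’s point: a working engineer can get this running long before the notation feels comfortable.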
Tyler Lindell is a Software Engineer at Tesla, an AI / AR / VR software engineer at Holographic Interfaces, and has served as the software team lead for rLoop (a Hyperloop company). He is interested in helping companies like yours understand how emerging technologies can be harnessed to improve operations, safety, and sales. Connect with him on LinkedIn and via email email@example.com.