Autonomous Driving May Hold Key to Incorporating AI On Personal Devices

Robert Woolliams

June 16, 2017

4 Min Read

What does autonomous driving have to do with AI on personal devices? László Kishonti, founder of AImotive, has the answer, as he explains in this exclusive piece for AI Business.

Image: László Kishonti, entrepreneur, visionary, founder & CEO of AImotive

The use of Artificial Intelligence on personal devices has been limited primarily by hardware bottlenecks. However, a new generation of power-efficient chips will soon hit the market, holding the promise of a real breakthrough: radically cutting the cost of running neural networks while mimicking the human nervous system better than ever.

As you might imagine, the latter is particularly hard to do. The human brain has about 100 billion nerve cells, each forming thousands of links with other neurons, giving a typical brain well over 100 trillion synapses! Together, these connections are what make up intelligence.

That complex structure is impossible to replicate artificially with conventional hardware. Graphics processor manufacturers were the first to offer a feasible alternative - they packed thousands of processing cores onto a single chip, which paved the way for compute-intensive technology like virtual reality. It therefore made sense that AI-focused development has, to date, relied on Graphics Processing Units (GPUs). In fact, the server farms of Facebook and Google use chips that were originally designed for GPU-accelerated computing.
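To see why neural networks map so naturally onto this kind of hardware, consider that a dense layer is essentially one large matrix multiply - thousands of independent multiply-adds that many cores can execute in parallel. A minimal numpy illustration (the layer sizes are arbitrary assumptions, not from any specific product):

```python
import numpy as np

# A dense neural-network layer boils down to a single matrix multiply.
# Layer sizes here are arbitrary, purely for illustration.
inputs = np.random.rand(1, 4096)       # one input vector
weights = np.random.rand(4096, 4096)   # one dense layer's weights

# ~16.7 million multiply-adds, all independent of one another - exactly
# the kind of data-parallel work a many-core chip excels at.
activations = inputs @ weights
print(activations.shape)  # (1, 4096)
```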

Tapping into autonomous development

At the moment, however, using AI on a personal device is still far from seamless. For one, neural networks continue to run on machines in server farms, where fast, stable internet connections are essential. Intelligent personal assistants such as Apple's Siri or Google Assistant work on this same principle - the reason they can help you out with personalized suggestions is that they do the math on those remote computers.

But when the internet connection breaks down, a common enough occurrence, you are left on your own. On-device AI holds the key to making it possible for your assistant to keep learning, even in offline mode.
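The pattern is easy to sketch. The toy Python below (every name is hypothetical; real assistants are vastly more sophisticated) shows the difference between an assistant that only exists in the cloud and one that can fall back on a small local model when the connection drops:

```python
def cloud_infer(query: str) -> str:
    """Stand-in for a remote assistant backend (hypothetical)."""
    return f"[cloud] personalized answer to: {query}"

def local_infer(query: str, preferences: dict) -> str:
    """Tiny on-device fallback using locally cached preferences."""
    topic = next((t for t in preferences if t in query), None)
    if topic:
        return f"[device] based on your habits, try {preferences[topic]}"
    return "[device] sorry, no suggestion available offline"

def assistant(query: str, online: bool, preferences: dict) -> str:
    # Today: no connection, no intelligence.
    # With efficient chips: the device itself keeps answering (and learning).
    return cloud_infer(query) if online else local_infer(query, preferences)

prefs = {"coffee": "the espresso bar on Main St", "run": "the riverside loop"}
print(assistant("where should I get coffee?", online=True, preferences=prefs))
print(assistant("where should I get coffee?", online=False, preferences=prefs))
```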

Interestingly, the solution to this issue may come from a slightly unexpected place - the self-driving industry. At AImotive, our self-driving concept revolves around the use of AI, and as such, our efforts have focused on designing chips specifically optimized to meet the heavy compute demands of AI systems.

Reimagined hardware capabilities

A breakthrough on the hardware front became fiercely urgent during autonomous driving development because of the sheer volume of parallel tasks an AV has to process, the high power consumption involved, and the very real risk of overheating. Self-driving cars equipped with GPUs draw up to 2,000W of power, and with the e-revolution unfolding, such high consumption would severely cut driving range. And unlike in ventilated server farms, cooling becomes a major issue for technology being driven at high speed all over the world.

In the autonomous industry we have been hard at work on these issues, and in the past few months a new breed of hardware has emerged that redefines the power efficiency of running neural networks - it is 20-30x more efficient than existing GPU solutions.
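To put those figures in perspective, here is a back-of-the-envelope sketch. Only the 2,000W draw and the 20-30x gain come from the text above; the battery size, consumption rate, and cruising speed are illustrative assumptions:

```python
# Illustrative EV parameters (assumptions, not from the article)
battery_kwh = 60.0          # battery pack capacity
drive_kwh_per_100km = 16.0  # consumption at highway speed
speed_kmh = 100.0           # cruising speed

gpu_watts = 2000.0          # GPU-based self-driving stack (figure cited above)

# A 2 kW compute load adds 2 kWh of drain for every hour driven.
compute_kwh_per_100km = gpu_watts / 1000.0 * (100.0 / speed_kmh)
base_range = battery_kwh / drive_kwh_per_100km * 100.0
gpu_range = battery_kwh / (drive_kwh_per_100km + compute_kwh_per_100km) * 100.0
print(f"range: {base_range:.0f} km -> {gpu_range:.0f} km "
      f"({1 - gpu_range / base_range:.0%} lost to compute)")  # 375 -> 333 km, ~11%

# The 20-30x efficiency gain shrinks the same workload dramatically:
for gain in (20, 30):
    print(f"{gain}x more efficient: ~{gpu_watts / gain:.0f} W")  # ~100 W / ~67 W
```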

Such an efficiency leap opens the door for developers: they now have options to squeeze all kinds of intelligent functions into mobile devices - functions that currently depend on hardware of server-farm magnitude.

Endless opportunity

If these newly invented, low-consumption systems can be proven to handle a set of duties as complex as self-driving, then portable devices should be able to perform similarly difficult tasks. In fact, the need to run AI-based solutions locally rather than in a server environment already exists: several apps use the technology in areas such as health, work, travel, and many more. By running locally, an AI could not only share information and follow a centrally programmed scheme but also constantly adapt to personal preferences, as the sketch below suggests.
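One way to picture that adaptation: a model ships with a central default and then nudges itself toward the individual user's behavior without any server round-trip. This is a deliberately minimal sketch with a one-weight "model" and made-up feedback values; real on-device learning would update a full neural network:

```python
central_default = 0.5  # preference score shipped identically to every user
learning_rate = 0.2

weight = central_default
# 1.0 = user accepted the suggestion, 0.0 = user ignored it (made-up data)
for feedback in [1.0, 1.0, 0.0, 1.0]:
    weight += learning_rate * (feedback - weight)  # local update, no server needed
    print(f"personalized score: {weight:.2f}")
```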

Google’s recently introduced, beefed-up translation tool, the Google Neural Machine Translation system (GNMT), is among the first steps down that road, as it applies deep learning methods to produce better translations the longer it works. GNMT will facilitate real-time communication between humans thanks to AI, which also hints at the ultimate edge AI-optimized hardware can deliver.

Autonomous driving has been forced to solve some of AI's existing issues because of the constraints and limitations posed by the vehicle environment. Yet those solutions have far-reaching applications beyond self-driving. The power-efficient chips designed for autonomous tech can learn continuously and make it possible to run AI-based solutions locally, on personal devices. This new frontier offers unlimited potential for smart tools that make our lives easier and much richer.

Feature image credit: Pexels
