Artificial intelligence at the edge will overshadow AI processing in the cloud over the next six to seven years
To unlock the power and growth potential of AI at the edge in IoT devices, software providers must work across the AI ecosystem to promote the implementation of AI in silicon
April 24, 2020
by Jenalea Howell, Informa Tech
Artificial intelligence (AI) technologies have already become a part of everyday life for billions of consumers across the world, from digital assistants to smart home devices to self-parking cars.
However, AI holds vastly greater potential, moving beyond such narrow, task-specific applications and into the realm of general intelligence.
Before realizing the vision of general AI, the artificial intelligence industry must tackle some of the biggest challenges facing the business, including security and ethical issues.
Solving these problems will require solutions providers to work across the entire AI ecosystem, employing not only software, but chip-level hardware solutions that can deliver the required performance and security to support the next stages of AI technology development, according to Omdia.
The AI business in the past has largely been focused on software. However, to unlock the power and growth potential of AI at the edge in Internet of Things devices, software providers must work across the AI ecosystem to promote the implementation of AI in silicon. Specialized chips provide not only the processing horsepower required for AI but also built-in hardware security.
A new frontier
Today, both the training of AI models and inference on them are largely performed in the cloud. A shift is beginning in the consumer devices market and some other markets, such as security cameras and automotive, where hardware and software advances allow AI model inference to run on the device itself.
Therefore, the emerging narrative for AI processing is that artificial intelligence at the edge will overshadow AI processing in the cloud over the next six to seven years.
To accomplish this, AI-enabled devices and products will require sufficient processing power. These devices are moving AI processing tasks from software to hardware, which will require chips with AI-specific enhancements.
Almost all suppliers of processor core technology are expected to integrate AI enhancements into their products in the coming years. This will trigger a major increase in AI support in system-on-chip (SOC) integrated circuits, potentially bringing capabilities like voice recognition, face recognition and object recognition to billions of devices.
Less than 20 per cent of SOC devices included AI capabilities in 2019. However, with the increasing integration of AI into processor cores, more than half of SOCs will be AI capable by 2023, according to the Omdia Processors Intelligence Service.
This rising penetration of chip-level AI will require software vendors to collaborate with companies across the AI ecosystem to ensure the right kinds of capabilities are built into hardware, including security.
For example, processor IP supplier ARM is implementing security at the most basic level of semiconductor devices. These security features need to be taken into account at every level, from the chip to the device all the way to the cloud.
By delivering this intelligence on microcontrollers designed securely from the ground up, ARM is reducing silicon and development costs and speeding time to market for product manufacturers seeking to run digital signal processing and machine learning workloads efficiently on-device.
The move to device- and edge-level AI processing will also directly address one of the most serious security issues facing the market: the risk involved in sending data to the cloud for processing. By conducting AI processing tasks on the chip, the need to send data over public networks is reduced.
Makers of tablets, smartphones and smart speakers are developing products that use the capabilities of 5G to offload visual AI processing tasks to edge servers and appliances, bypassing the privacy risks involved in sending data to the cloud.
By 2025, two out of three smartphones are expected to include built-in AI capabilities. Global revenues for AI smartphones are forecast to increase to $378 billion, up from $29 billion in 2017, according to Omdia.
With security so intrinsic to AI systems, businesses may need to consider a security vendor at the same time they evaluate an AI provider.
Solving security issues goes hand in hand with addressing the larger ethical issues related to AI. These ethical issues cannot be solved without first addressing the security challenges, and a strong ethical foundation will be essential as AI approaches human or superhuman levels of intelligence.
Ethical issues include fundamental questions such as whether an organization's AI technology provides an overall benefit to society, or how much an organization needs to disclose about its AI activities to stakeholders and the general public.
One major question for organizations is whether they should be in control of their destiny when it comes to AI data.
To answer this question, organizations need to take a look at their practices regarding machine learning and AI development. For example, when conducting AI modelling, organizations need to ask themselves whether the training data they are using complies with privacy laws such as the European Union's General Data Protection Regulation or the US Health Insurance Portability and Accountability Act.
With the age of general AI approaching, software vendors must work across the AI industry ecosystem to find the solutions for today’s security issues and tomorrow’s ethical challenges.