Machine Learning: Is There Space At The Edge?
November 28, 2018
by Jack Melling
Machine learning (ML) is playing an increasingly prominent role in our everyday lives and how we interact with our devices. One of the key trends for ML in this area is having the capability and flexibility to perform tasks at the edge – on the device – rather than sending data to the cloud.
The power and cost required to shift massive amounts of data are prohibitive and can produce noticeable lag, something that 'mission-critical' applications cannot tolerate, but which can be avoided with on-device processing. Beyond performance, ML at the edge can offer increased security for users, as sending data back and forth to the cloud creates vulnerabilities and potential exposure to security threats.
ML at the edge: use cases and applications
There are already plenty of applications that use ML at the edge. Many of these are considered to be ‘mission-critical’, such as medical triage and monitoring (check out Arm’s partnership with Great Ormond Street Hospital in London where we are providing artificial intelligence (AI) at the edge for patient and staff management), environmental monitoring, and security and surveillance.
Other on-device applications of edge ML include image recognition, gaming, speech to text, and battery management. However, the overall consensus is that ML at the edge is just getting started.
Unsurprisingly, ML at the edge emerged as an important theme at Arm TechCon in October. Several keynotes explored how intelligence at the edge enables limitless use cases, a fresh wave of innovation, and greater security for users. Jem Davies argued that, as ML is introduced into more devices, we will see a world where "most things are equipped with a new level of smartness."
Jim McGregor, Principal Analyst at TIRIAS Research, gave an insightful talk on the evolution of machine learning, along with his own take on ML at the edge and future growth opportunities. He revealed that, at the moment, 29 percent of new systems leverage some form of ML, but this could grow to 95 percent by 2025.
One of the reasons behind this huge growth is the fact that ML is a core enabling technology. This means that many technologies will be innovated further by ML in the future, including computer vision applications, sensors and sensor fusion, augmented reality (AR), natural language processing, event prediction, security, and autonomous machines, to name some examples.
Edge ML: a core enabling technology
In his talk, McGregor noted that there is no 'one size fits all' solution for ML at the edge. This is because devices that use ML today run a number of different workloads, each with its own requirements. A common question we get asked at Arm is which processors are best for running ML, but the answer really depends: each processor type sits at a different point on the compute spectrum, with varying degrees of performance, power, and area.
Enterprises will therefore need to select the processor that can perform the tasks relevant to their needs. For devices with complex and changing ML workloads, enterprises will often choose multiple types of processors with a common software framework for their systems. This supports the notion that enterprises should invest in ML solutions that tackle the business problems relevant to them.
Choosing the correct solution
Ultimately, the move to the edge appears inevitable, simply because the world does not have the bandwidth to cope with transmitting all the new data that ML-enabled features will produce. Google has stated that if every Android device in the world performed just three minutes of voice recognition each day, the company would need twice as much computing power to cope.
We are still at the start of the ML journey, but the use cases for on-device ML are "limitless". As more processing becomes possible on-device, it will drive new waves of innovation, enable different technologies and applications, and create new and exciting use cases.
There is no set path to innovation in ML at the edge, as each application has its own performance requirements. Our message to enterprises is to think carefully about their specific requirements so they can find ML solutions that are open, scalable, and efficient.
Jack Melling works within the Client Line of Business at Arm, which covers a range of device markets that use Arm’s innovative processor designs, including smartphones, PCs, DTVs, tablets, wearables and set-top boxes. At the heart of Arm’s products are compute performance improvements for ML at the edge, which enables responsive and secure experiences on a range of devices.