Edge-first will be the dominant model for intelligent systems of the future
I contend that the days of sucking up all the data that you can get your hands on are over. Cloud-first will be replaced with edge-first thinking for AI applications.
September 30, 2020
Cloud-first has become the dominant mantra in today’s best performing digital organizations.
Data is the new oil: collect it into a massive data lake and unleash the might of big data analytics and AI to build insights that drive new revenues or cut costs. However, this model may be reaching its peak.
I contend that the days of sucking up all the data that you can get your hands on are over.
Cloud-first will be replaced with edge-first thinking for AI applications. This is the topic of our new whitepaper, The future of AI is at the edge, which can be downloaded here.
The paper is essential reading for any business looking to bring greater levels of intelligence to the edge, unearthing the technologies and strategies that can make this a reality.
In the paper, we conclude that for increasing numbers of applications, the optimal user experience and lowest cost of service will come from an architecture that conducts at least some portion of sensor fusion, data processing, perception and decision making in the device, or at the very least at the network edge.
Centralized, or cloud, computing will not disappear in this model. Cloud will continue to be crucial in providing orchestration and management services but as we move closer to a world of pervasive computing, where everything is connected, the balance of processing will shift towards the end user.
What does AI at the edge offer you?
The benefits can be summarized into three broad areas: responsiveness, security and reduced dependence on networks.
Responsiveness
Communications systems these days are fast. Nonetheless, every extra hop that data must travel adds delay. How comfortable would you feel if the pedestrian detection algorithm for the automatic brakes in your car was executed in a data center on the other side of the planet? Where latency is important, processing needs to be onboard. Clearly, that is important for safety-critical use cases. It is also vital for ensuring high-quality consumer experiences. VR systems, for example, are highly dependent on minimizing latency to a level which enables a realistic experience.
Security
Data governance has, rightly, become a hot topic as increasing amounts of personal data are collected by the variety of organizations that we all interact with. Often there are many worthwhile collective benefits to sharing individual data – in medical research, for example. However, once that data leaves your own smartwatch or medical center, what happens to it? The more networks that data traverses, and the more organizations that are involved, the larger the attack surface becomes.
Federated approaches to learning, where an individual’s raw data is not shared beyond the edge device and only collectively valuable inference is transmitted, are part of the answer to these challenges.
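As a toy illustration of that principle (not drawn from the whitepaper, and far simpler than any production system), a single round of federated averaging can be sketched in a few lines of Python: each device fits a model to its own private data, and only the learned parameters – never the raw measurements – cross the network to be aggregated.

```python
# Toy sketch of federated averaging: each device fits a simple linear
# model to its own local data and shares only the learned coefficients;
# the server averages those coefficients and never sees the raw data.

def local_fit(xs, ys):
    """Least-squares slope/intercept, computed on-device."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

def federated_average(device_models):
    """Server-side step: average the shared model parameters."""
    slopes, intercepts = zip(*device_models)
    return sum(slopes) / len(slopes), sum(intercepts) / len(intercepts)

# Each device trains locally on data that stays private...
device_a = local_fit([0, 1, 2], [0.1, 1.9, 4.1])   # roughly y = 2x
device_b = local_fit([0, 1, 2], [0.0, 2.1, 3.9])   # roughly y = 2x
# ...and only the coefficients are transmitted for aggregation.
global_model = federated_average([device_a, device_b])
```

Real deployments add secure aggregation, many rounds of training and far richer models, but the privacy property is the same: the inference is shared, the data is not.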
Reduced dependence on the network
Decades of cumulative investment in wireline and wireless communications infrastructure and ongoing innovation in the underlying technologies have illuminated many of our homes and business settings with a fast connection to the infrastructure of the internet. Innovation in 5G, WiFi, satellite communications and others will keep pushing on the frontier of coverage and performance. That progress has enabled many applications to centralize processing in the cloud.
However, not-spots and network congestion will remain a reality, as communications infrastructure cannot lead too far ahead of demand and subscribers’ willingness to pay.
If your use case is mission critical and needs to be able to function when a sufficiently high performance network connection cannot be guaranteed, then edge processing is a strategy to build in resilience.
Our whitepaper also explores the progress on both the hardware and software side that is enabling AI to be pushed to ever lower power and size. The highly responsive, learning services that users and enterprises are increasingly demanding rely on this to meet those expectations. Interactive VR or AR applications in smart factories or in immersive entertainment experiences are a great example of use-cases which rely on processing large amounts of data, with a latency that is imperceptible to the user.
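One technique behind that push to lower power and size – offered here as an illustrative aside, not as the whitepaper’s method – is weight quantization: storing model parameters as 8-bit integers rather than 32-bit floats cuts memory footprint by four and reduces the energy cost of each operation. A minimal sketch:

```python
# Toy sketch of post-training weight quantization: map float weights
# onto the int8 range with a single scale factor, shrinking storage 4x.

def quantize(weights):
    """Symmetric linear quantization of a weight list to [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127 or 1.0
    q = [round(w / scale) for w in weights]   # small integers
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for inference."""
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.003, 0.9]
q, scale = quantize(weights)
approx = dequantize(q, scale)
# Each recovered weight is within half a quantization step of the original.
assert all(abs(a - w) <= scale / 2 for a, w in zip(approx, weights))
```

Production toolchains (TensorFlow Lite, for instance) apply the same idea per-layer or per-channel, and can quantize activations as well as weights.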
Tomorrow’s services need to learn in real time
Rapid service innovation will be essential in securing and maintaining competitive advantage and rich customer relationships.
As my colleague Martin Cookson points out: to engage in that rapid service innovation “…we need to go beyond delivering a smart interaction and be smart enough to constantly learn from interactions.” This means getting both ML learning and inference as close to the user as possible.
Specialized, low power silicon
The entire silicon industry has rushed to address the opportunity at the intersection between AI and the Internet of Things such that products and IP are increasingly being made available.
Smartphones commonly include ‘neural’ or ‘bionic’ chips, which are available from all the main vendors.
Low-ish power (<10W) off-the-shelf products are available as modules from the likes of Nvidia (Jetson Nano) and Google (Coral.ai) for under $100.
Companies from startups all the way to established industry players offer everything from IP blocks for low-power neural accelerators, to chips, to entire systems like the Jetson Nano above.
Emerging silicon approaches like chiplets (the topic of our previous whitepaper) offer a lower barrier to entry for ASICs designed with a specific low-power task in mind.
Custom development work can help go beyond the limits of these off the shelf solutions. Cambridge Consultants’ Sapphyre ecosystem is an example of a tool that enables us to rapidly design ASICs optimized to specific use cases. Using this tool we were able to design a Voice Activity Detector that used one hundredth of the power of a modern hearing aid.
ML tools and frameworks
Alongside the increasing availability of specialized silicon, software tools and frameworks suited to AI at the edge use cases are also developing quickly.
Google’s TensorFlow Lite for deep learning on mobile platforms has been available since 2017, and both Facebook and Google have supported federated learning capabilities since last year. Having these frameworks in place means developers can move faster and learn from collective industry experience, rather than break new ground for every new development.
Service management platforms are also key to distributed architectures which would otherwise be cumbersome to operate, and the big names all have an offering in place (e.g. Arm, Azure, AWS).
What’s your edge AI strategy?
In a world approaching 30 billion connected devices by 2023, growing by 30% year on year (according to Cisco’s latest forecast), more and more data will be generated at the edge. If you want to incorporate intelligence into your products or services, it will increasingly make sense to move the AI to the data, rather than the data to the AI.
Cloud will remain part of the architecture – not least for management and orchestration purposes – but increasingly, intelligence will be redistributed out of the traditional data center and closer to the end user.
Cambridge Consultants has decades of experience in helping customers conceive, design and develop high performance products and services. We would be delighted to discuss how AI at the edge could help you discover the next step-change in your industry.
Michal works with clients to explore how their businesses can be transformed with the right mix of cutting-edge technologies. Michal helps our customers apply Cambridge Consultants’ world-leading expertise in AI, silicon, sensing and connectivity to realize their ambitions with AI at the edge.