Using smaller models on edge networks reduces costs and energy requirements
Artificial intelligence (AI) could add $13 trillion to the global economy between 2020 and 2030, according to Harvard Business Review. But the associated environmental cost is worrisome. AI already has a huge carbon footprint — larger than that of the airline industry — and training AI models costs millions of dollars.
Arguably, the predictive search and typing features on smartphones, or the smart speakers that report the weather, aren’t worth the energy consumed to train their AI models. For instance, GPT-3, a natural language neural network that can write basic news articles, cost an estimated $12 million to train.
To overcome the power- and money-hungry nature of AI, technologists are looking at tiny AI, a concept that helps reduce costs and energy requirements by shrinking the models. The technology conducts AI inference and training on edge networks rather than on cloud servers.
Small AI Models for Small Devices
AI models don’t have to be hundreds of gigabytes in size to work effectively. Smaller models such as MobileNet, which is 20 MB, work just as well if the input data, edge hardware and model architecture are selected appropriately. Moreover, model compression techniques such as knowledge distillation, network pruning and quantization can reduce the number of parameters in an AI model without much loss in output accuracy.
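To make the quantization idea concrete, here is a minimal sketch of post-training symmetric int8 quantization using NumPy. The function names (`quantize_int8`, `dequantize`) are illustrative, not from any particular library; production toolchains such as TensorFlow Lite or PyTorch apply the same principle with more sophistication.

```python
import numpy as np

def quantize_int8(weights):
    """Map float32 weights to int8 plus a single scale factor (symmetric quantization)."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 weights from the int8 representation."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)  # a stand-in weight matrix
q, scale = quantize_int8(w)

# int8 storage is 4x smaller than float32, at the cost of a small rounding error
print(f"original: {w.nbytes} bytes, quantized: {q.nbytes} bytes")
print(f"max abs error: {np.abs(w - dequantize(q, scale)).max():.4f}")
```

The 4x size reduction comes purely from storing one byte per weight instead of four; the rounding error is bounded by half the scale factor, which is why accuracy typically degrades only slightly.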
With 25-30 billion Internet of Things (IoT) devices expected to be in circulation by 2025, processing power requirements may explode due to the sheer volume of data those devices generate. It is imperative to shift some of the compute load to the edge, and small AI models can be pushed to edge IoT devices that have minimal energy and processing capacity.
Catalysts for Tiny AI
The AI ecosystem is evolving rapidly with advancements in federated learning, decentralized web and battery-less IoT devices. These developments are likely to act as catalysts for tiny AI adoption.
Federated learning, a concept pioneered by Google, eliminates the need for edge devices to share all data collected with cloud servers for processing. Instead, the connected devices train their own models using local data and share only a periodic summary update with the cloud to train the centralized model. This decreases overall processing requirements, in turn reducing the energy consumed to train AI models.
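The federated pattern described above can be sketched with a toy example: each device fits a linear model on its own data and shares only its trained weights, which the server averages in the spirit of federated averaging. This is a simplified illustration in NumPy, with assumed function names (`local_update`, `federated_round`) and synthetic data; real systems like Google's use deep models, weighted averaging and secure aggregation.

```python
import numpy as np

def local_update(global_w, X, y, lr=0.1, epochs=5):
    """A device refines the global model on its local data; raw data never leaves."""
    w = global_w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_round(global_w, devices):
    """The server averages the devices' weight updates (the 'periodic summary')."""
    updates = [local_update(global_w, X, y) for X, y in devices]
    return np.mean(updates, axis=0)

# Synthetic setup: three devices, each holding its own private samples
rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.01 * rng.normal(size=50)
    devices.append((X, y))

w = np.zeros(2)
for _ in range(30):
    w = federated_round(w, devices)
print(w)  # converges toward true_w without any device sharing its data
```

The key point is what crosses the network: only the two model weights per device per round, never the 50 raw samples — which is where the energy and bandwidth savings come from.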
Further, the web’s future appears to be decentralized. Blockchain-based networks such as Helium, which can effectively make anyone and everyone a network coverage provider within the unlicensed spectrum, are transforming the way the internet is delivered. Telecom companies may no longer command the tight control they currently possess over the web, particularly for IoT applications for which low bandwidth and wider coverage are more important.
Lastly, the onset of battery-less IoT devices will further support the environment, as these devices draw processing power from their ambient environment instead of from waste-generating batteries.
Sustainable and Responsible AI
AI is everywhere, and firms should ensure its responsible and sustainable deployment. While responsible AI means all initiatives must adhere to core human rights such as privacy and equality, sustainable AI targets carbon neutrality. Tiny AI can play a massive role in achieving this goal. As regulators and nations adopt stricter ethical and environmental norms, the firms that incorporate sustainability as a founding principle will be best positioned to succeed with AI.