October 20, 2022
Cambridge Consultants' Ram Naidu outlines how to pick the right technique for your AI needs.
AI development is going through a period of exponential growth. Just following the complete set of submissions on ‘artificial intelligence’ at arXiv is an impossible task, with typically 80 papers a day appearing across a range of AI subdisciplines. Inevitably the field is starting to fragment as researchers focus on their area of interest, with few able to take in developments across the whole of the AI space. Industry, meanwhile, seeks engineering solutions to specific real-world problems sooner rather than later.
This tension provides the backdrop to my role as event chair at IoT World & The AI Summit in Austin, Texas, on Nov. 2-3, 2022. Expectant attendees at the show – including those from the city’s vibrant start-up community – are as ever looking to come away with tools, insights, and ideas on how to apply emerging tech in transformative ways. My focus as I open the summit will be on the growing trends and techniques that are making AI development more efficient. The message is that business doesn’t need to be locked out of development, because there are now more ways to put AI into practice.
Let me explain. At one end of the development spectrum, we have huge amounts of data available – and the models are growing larger and larger with increasing demands on compute resources. I’m thinking here of the use of large transformer models for natural language processing such as BERT (Bidirectional Encoder Representations from Transformers).
At the other end, we have the challenges of AI where the quantity of data is limited, either because it is expensive to obtain or – as is often the case – because there are not enough resources to label and make useful the data you do have. Such a problem is highly apparent in medical applications, where labeling requires high levels of expertise. As such, having an approach to AI that is efficient in data and efficient in labeling requirements is one of the important challenges we face as the field develops.
The big question is, in summary, how can we do more with less? Yann LeCun, one of the godfathers of AI, proclaimed that ‘the future of AI is unsupervised’. That is half right: the future of AI is semi-supervised or self-supervised. This combination of unsupervised methods with supervised methods, leveraging small amounts of labeled data, has proven the way forward. Equally, as we seek to make AI more efficient, we can build in the things we know.
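To make the idea concrete, here is a minimal self-training sketch on toy data. Everything below is illustrative (the two-cluster data, the nearest-centroid classifier, and the confidence margin are all invented for the example): a model trained on a handful of labels pseudo-labels the unlabeled pool, and only its confident predictions are folded back into the training set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated Gaussian clusters (hypothetical toy data).
n = 200
X = np.vstack([rng.normal(-2, 1, (n, 2)), rng.normal(2, 1, (n, 2))])
y = np.array([0] * n + [1] * n)

# Pretend labels are expensive: keep only 5 per class, hide the rest.
labeled_idx = np.concatenate([np.arange(5), np.arange(n, n + 5)])
unlabeled_idx = np.setdiff1d(np.arange(2 * n), labeled_idx)

X_lab, y_lab = X[labeled_idx], y[labeled_idx]
X_unl = X[unlabeled_idx]

def centroids(Xl, yl):
    return np.stack([Xl[yl == c].mean(axis=0) for c in (0, 1)])

# Self-training loop: pseudo-label confident points, fold them in.
for _ in range(5):
    C = centroids(X_lab, y_lab)
    d = np.linalg.norm(X_unl[:, None, :] - C[None, :, :], axis=2)
    pseudo = d.argmin(axis=1)
    margin = np.abs(d[:, 0] - d[:, 1])
    confident = margin > 2.0          # only trust clear-cut cases
    X_lab = np.vstack([X_lab, X_unl[confident]])
    y_lab = np.concatenate([y_lab, pseudo[confident]])
    X_unl = X_unl[~confident]

# Evaluate the final model on the full data set.
C = centroids(X_lab, y_lab)
d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2)
acc = (d.argmin(axis=1) == y).mean()
print(f"accuracy with 10 labels + self-training: {acc:.2f}")
```

Real systems replace the nearest-centroid model with a neural network and tune the confidence threshold carefully, but the loop structure is the same.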
The resource-greedy data science approach of declaring that everything is in the data – and we can learn all we want from it – has been successful, but it has been a curse in terms of efficiency. By using what we already know, for example, basic physical laws, we save the AI from reinventing Newton and therefore need less data to achieve our goals.
This midpoint between purely data-driven approaches and classical models has obvious advantages beyond just data efficiency. Building in physics means your model won’t suggest actions that contradict physical law - something rather crucial in, for instance, aviation or physical control processes.
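One common way to build physics in - a sketch under simplifying assumptions, not the only approach - is to add a physics-residual penalty to the ordinary data loss. Below, a quadratic model is fitted to sparse, noisy free-fall observations (all data synthetic, all constants illustrative), with a penalty that nudges the curvature toward the known acceleration g, so far less data is needed to recover the right behavior:

```python
import numpy as np

rng = np.random.default_rng(1)
g = 9.81

# Sparse, noisy observations of a falling object (synthetic data).
t = np.linspace(0.0, 2.0, 8)
y_obs = 20.0 - 0.5 * g * t**2 + rng.normal(0, 0.5, t.shape)

# Model: y = a*t^2 + b*t + c. Physics prior: 2a should equal -g.
theta = np.zeros(3)
lam = 10.0   # weight on the physics penalty (a tuning choice)
lr = 0.01

for _ in range(20000):
    a, b, c = theta
    pred = a * t**2 + b * t + c
    err = pred - y_obs
    # Gradient of data MSE plus the physics residual lam*(2a + g)^2.
    grad = np.array([
        2 * np.mean(err * t**2) + 4 * lam * (2 * a + g),
        2 * np.mean(err * t),
        2 * np.mean(err),
    ])
    theta -= lr * grad

a, b, c = theta
print(f"recovered acceleration: {2*a:.2f} (true: {-g})")
```

The same pattern - data loss plus physics-residual loss - underlies physics-informed neural networks, where the residual comes from a differential equation rather than a single known constant.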
Taking a middle road has further advantages. The black-box nature of AI has drawn persistent criticism, and explainability remains a challenge - crucial for medical applications or anywhere with critical dependency, including finance and transport. We need confidence in our AI solutions. By building in some model details, we not only become more efficient, but we provide more of a grey box where we know some rules are always obeyed. One further step along the path to AI assurance.
Another approach where data is limited has been through generative AI. The enormous cultural impact of DALL·E 2 and Stable Diffusion has shown the world the power of generative techniques in AI. But using them to provide synthetic data for training hasn’t been as well publicized. Here, the generative technique serves to enhance other learning methods. Again - it is that combination of unsupervised (generative methods) with supervised techniques that hits the sweet spot of using what we know from a data set to best effect.
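A diffusion model is far too heavy for a snippet, but the principle can be shown with the simplest possible generative model standing in for it: fit a distribution to each class of a small labeled set, sample synthetic points from it, and train a downstream classifier on the augmented data. All data and numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Small labeled set: 15 points per class (synthetic toy data).
n = 15
X0 = rng.normal([-1.5, 0], 1.0, (n, 2))
X1 = rng.normal([1.5, 0], 1.0, (n, 2))

def synth(X, k):
    """Fit a Gaussian to X (a minimal generative model), sample k points."""
    mu, cov = X.mean(axis=0), np.cov(X.T)
    return rng.multivariate_normal(mu, cov, k)

# Augment each class with 200 synthetic samples.
X0_aug = np.vstack([X0, synth(X0, 200)])
X1_aug = np.vstack([X1, synth(X1, 200)])

# Train a nearest-centroid classifier on the augmented data.
c0, c1 = X0_aug.mean(axis=0), X1_aug.mean(axis=0)

def predict(X):
    d0 = np.linalg.norm(X - c0, axis=1)
    d1 = np.linalg.norm(X - c1, axis=1)
    return (d1 < d0).astype(int)

# Evaluate on fresh held-out data from the true distributions.
Xt = np.vstack([rng.normal([-1.5, 0], 1.0, (100, 2)),
                rng.normal([1.5, 0], 1.0, (100, 2))])
yt = np.array([0] * 100 + [1] * 100)
acc = (predict(Xt) == yt).mean()
print(f"held-out accuracy: {acc:.2f}")
```

In practice the per-class Gaussian would be replaced by a trained generative model (a GAN, VAE, or diffusion model), but the workflow - generate, augment, then train supervised - is the same.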
In all of these examples, there is often a trade-off that we need to be aware of. Being efficient with labeling or data may require more compute resources. So, the answer of what is right doesn’t come from the technical perspective but from the analysis of the costs. If data is cheap there is one solution. But if labeled data is expensive, then we can make up for it with compute-intensive algorithms or we can sit in the middle.
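As a back-of-envelope illustration of that cost analysis - every number below is invented for the example, not a benchmark - the choice can be framed as a simple cost model:

```python
# Hypothetical cost model: labeling cost versus compute cost.
def total_cost(n_labels, label_price, gpu_hours, gpu_rate):
    """Total project cost in dollars (all inputs illustrative)."""
    return n_labels * label_price + gpu_hours * gpu_rate

# Option A: fully supervised - many labels, modest compute.
a = total_cost(n_labels=50_000, label_price=0.50, gpu_hours=40, gpu_rate=3.0)

# Option B: self-supervised pretraining - few labels, heavy compute.
b = total_cost(n_labels=2_000, label_price=0.50, gpu_hours=500, gpu_rate=3.0)

print(f"supervised: ${a:,.0f}  self-supervised: ${b:,.0f}")
```

With these made-up prices the label-efficient option wins comfortably; flip the relative prices (cheap labels, expensive compute) and the supervised option wins instead. The point is that the decision is economic, not purely technical.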
Where does this leave your business? The promise of universal AI solutions is still to be fulfilled – there is no one size fits all. The enormous growth in AI methods means the selection of the right technique relies on an understanding of the problem - and more understanding typically provides more efficiency. Engineers are used to thinking in terms of energy scales for their devices; in AI, we must learn to think in terms of data scales and the most efficient methods for each scale, rather than the ‘best method’. Achieving this will allow more AI at the edge and the production of a range of intelligent devices.
Every business now has a concern over the rising cost of energy, linked to the carbon cost and the need for sustainable solutions. The more we can get away from the greedy energy demands of large-scale compute and adopt efficient AI methods the better. This leads to an exploration of energy efficiency, whether through neuromorphic methods or using low-bit encoding. Again, there will not be a universal off-the-shelf solution to cutting energy costs. But it is a parameter we must consider and find where the right compromise can be made.
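Low-bit encoding can be illustrated with its simplest case: symmetric int8 weight quantization. This is a minimal sketch (real deployments add calibration data, per-channel scales, and often quantization-aware training), but it shows the core trade: a 4x memory reduction in exchange for a small, bounded rounding error.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical trained layer weights in float32.
w = rng.normal(0, 0.1, 1000).astype(np.float32)

# Symmetric int8 quantization: map [-max|w|, max|w|] onto [-127, 127].
scale = np.abs(w).max() / 127.0
w_q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)

# Dequantize for use; storage drops 4x (8 bits vs 32 per weight).
w_hat = w_q.astype(np.float32) * scale
err = np.abs(w - w_hat).max()
print(f"max quantization error: {err:.5f}  (scale={scale:.5f})")
```

The maximum error is bounded by half the scale step, which for well-behaved weight distributions is small enough that accuracy loss is often negligible - while memory traffic, and hence energy, falls substantially.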
So, what does a successful AI solution look like? The approach must depend on data quantity, labeled-data availability, and the energy cost of implementation, amongst a host of other considerations. Looking at one component of this in isolation isn’t the path to success. A successful AI solution requires a holistic approach to cover the needs and costs with a mature view of all the competing drivers. This was inevitable - AI had so much success so soon with low-hanging fruit. As the field matures, so must our ability to approach AI with a clear eye on the value it can bring. If you’re all set for the summit, great. I look forward to seeing you, and perhaps continuing the conversation, at the IoT World & The AI Summit in Austin, Texas, on Nov. 2-3, 2022.
Ram Naidu is senior vice president, AI at Cambridge Consultants, part of Capgemini Invent. He has an exceptional record of leadership bringing world-class AI-powered innovations to market. Ram’s passion is inspiring and mentoring teams that are dedicated to solving tough problems and building great products and services. With an MBA from Questrom School of Business and a PhD from Boston University College of Engineering, he has significant expertise in product strategy and commercialization, innovation management, and AI. [email protected]