Learning to learn: machine learning is improving, but it is about to get a whole lot better

While full AI consciousness might be some ways off, there are some exciting developments

January 20, 2022

The need to train machines more quickly, more efficiently, and less expensively is a pressing one.

AI-powered technologies, dependent on machine learning, have limitless potential to propel business and benefit society.

Imagine if we could imbue machines with a childlike curiosity that encourages them to learn more naturally, more intuitively and more effectively – drawing on building blocks of information and picking things up incrementally. Is that too much to ask of machines? Well, perhaps not.

Like many businesses and academic institutions, our team at Cambridge Consultants invests in extracurricular research to advance machine learning (ML), which uses algorithms and neural network models to progressively improve the performance of computer systems.

The extraordinary progress I’m seeing in machine learning convinces me that the ultimate goal of meta-learning – essentially learning to learn – is inching ever closer.

The implications of this are, of course, profound. Commercial opportunities for business will be propelled to new levels, society will evolve, and ethical, philosophical, and moral questions will be high on the agenda in a world where AI and human behavior mirror each other much more closely.

With that in mind, in this article, I plan to provide a little perspective by summarizing the current state of play in machine learning. A progress report, if you will, on some of the very latest academic and practical developments in AI. I’ll also examine some exciting recent developments which, in my view, take us closer to the notion of ‘learning to learn.’

GANing knowledge – helping machines learn from limited data

To continue the comparison with how children learn, the fundamental challenge in machine learning is that a machine starts with a tabula rasa, a clean slate. In other words, it comes into the world essentially as if it were born yesterday. Each system must be trained from scratch and exposed to hundreds of thousands of training examples for every single task.

In comparison, children can draw on what they've already learned and immediately 'get' something new. One way to get machines to learn more naturally is to help them learn from limited data.

Using generative adversarial networks (GANs), we can generate new examples from small sets of core training data rather than having to go out and capture every situation in the real world. The “adversarial” bit refers to the fact that one neural network is pitted against another to produce new synthetic data.
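
To make that idea concrete, here is a minimal, illustrative sketch in PyTorch (my choice of framework, not one named above): a small generator learns to produce synthetic one-dimensional samples that a discriminator can no longer tell apart from a toy “real” data set. The names, dimensions and training numbers are placeholder assumptions, not a description of any production system.

```python
import torch
import torch.nn as nn

real_data = torch.randn(256, 1) * 0.5 + 2.0   # stand-in for a small set of core training data

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Train the discriminator: push real samples towards 1 and generated samples towards 0.
    noise = torch.randn(64, 8)
    fake = generator(noise).detach()
    real = real_data[torch.randint(0, len(real_data), (64,))]
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + bce(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Train the generator: try to make the discriminator label its output as real.
    noise = torch.randn(64, 8)
    g_loss = bce(discriminator(generator(noise)), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# New synthetic examples, generated from the training set rather than collected in the real world.
with torch.no_grad():
    synthetic = generator(torch.randn(1000, 8))
```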

There are other techniques too. Synthetic data rendering, for example, uses gaming engines or computer graphics to render new scenarios. Then there are algorithmic techniques such as domain adaptation, which involves transferring knowledge between domains (using data collected in winter to handle summer conditions, for example), or few-shot learning, which makes predictions from a limited number of samples.
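
As a rough illustration of the few-shot idea – a sketch of my own in NumPy, not a technique described above in any detail – the snippet below classifies new samples by their distance to class “prototypes” built from just a handful of labeled examples per class.

```python
import numpy as np

def few_shot_predict(support_x, support_y, query_x):
    """Classify query_x using only a handful of labeled 'support' examples per class."""
    classes = np.unique(support_y)
    # One prototype per class: the mean of its few labeled examples.
    prototypes = np.stack([support_x[support_y == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(query_x[:, None, :] - prototypes[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)]   # nearest prototype wins

# Three labeled examples per class are enough to start making predictions.
support_x = np.array([[0.1, 0.2], [0.0, 0.3], [0.2, 0.1],    # class 0
                      [2.0, 2.1], [1.9, 2.2], [2.1, 1.8]])   # class 1
support_y = np.array([0, 0, 0, 1, 1, 1])
query_x = np.array([[0.15, 0.25], [2.0, 2.0]])
print(few_shot_predict(support_x, support_y, query_x))       # expected: [0 1]
```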

Finally, multi-task learning takes a different approach to limited data. It is particularly fascinating because commonalities and differences across tasks are exploited to solve multiple tasks simultaneously (a minimal sketch follows below). Machine learning is generally supervised – with inputs paired with target labels – but progress is also being made in unsupervised, semi-supervised and self-supervised learning, which is concerned with learning without a human teacher having to label all the examples.
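
Here is that multi-task sketch – my own illustration in PyTorch, with toy tasks and dimensions that are assumptions rather than anything from our work: one shared encoder exploits what the tasks have in common, small task-specific heads handle the differences, and both losses are optimized together.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# One shared encoder captures what the tasks have in common...
shared_encoder = nn.Sequential(nn.Linear(16, 64), nn.ReLU())
# ...while small task-specific heads capture their differences.
classification_head = nn.Linear(64, 3)   # toy task A: 3-way classification
regression_head = nn.Linear(64, 1)       # toy task B: scalar regression

params = (list(shared_encoder.parameters())
          + list(classification_head.parameters())
          + list(regression_head.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-3)

x = torch.randn(32, 16)                  # toy batch shared by both tasks
y_class = torch.randint(0, 3, (32,))
y_value = torch.randn(32, 1)

# Both task losses are optimized together through the shared encoder.
features = shared_encoder(x)
loss = (F.cross_entropy(classification_head(features), y_class)
        + F.mse_loss(regression_head(features), y_value))
optimizer.zero_grad(); loss.backward(); optimizer.step()
```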

With clustering, for instance, an algorithm might group items by similarity – groupings that may or may not then be identified and labeled by a human. Examining the clusters reveals what structure the system has found in the data.
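
As a small, purely illustrative example using scikit-learn's KMeans (my choice of tool), the snippet below groups unlabeled samples into clusters without any human labels; the algorithm is only told how many clusters to look for, and inspecting each cluster shows what it has found.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two unlabeled groups of points; no point is ever assigned a label by a human.
data = np.vstack([rng.normal(loc=0.0, scale=0.5, size=(50, 2)),
                  rng.normal(loc=5.0, scale=0.5, size=(50, 2))])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)

# Examining each cluster shows what structure the system has found.
for cluster_id in range(2):
    members = data[kmeans.labels_ == cluster_id]
    print(f"cluster {cluster_id}: {len(members)} samples, center {members.mean(axis=0).round(2)}")
```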

The new kids on the block: Transformer architecture and closed-loop experimentation

Everything I’ve described so far is bringing incremental advances. But is it taking us closer to a great leaping-off point into meta-learning? While full AI consciousness might be some ways off, there are some exciting developments – namely, the Transformer architecture and closed-loop experimentation.

Most neural network algorithms have to be adapted to perform one job. The Transformer architecture makes fewer assumptions about the format of the input and output data and so can be applied to different tasks – similar to the idea of machines exploiting building blocks of learning. The Transformer initially used self-attention mechanisms as the building block for machine translation in natural language processing, but it is now being applied to other tasks, such as image recognition and 3D point cloud understanding.
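
At its core is scaled dot-product self-attention. Here is a minimal NumPy sketch – a simplification of the full architecture, with toy dimensions of my own choosing – in which every element of a sequence attends to every other element, with no assumption about whether the tokens represent words, image patches or points in a 3D point cloud.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention: every token attends to every other token.
    x: (seq_len, d_model) input sequence; w_q, w_k, w_v: learned projection matrices."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])                    # pairwise similarity of tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)    # softmax over the sequence
    return weights @ v                                         # each output mixes the whole sequence

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))                                    # a toy sequence of 5 "tokens"
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)                  # (5, 8)
```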

So, where next for the Transformer? Recent academic work has, for example, looked at applying it alongside data-efficient training techniques to protein applications.

The team here at Cambridge Consultants built on this research to create an AI model that can optimize protein function for a specific task. We applied this to fluorescent proteins, asking specifically whether it is possible to recommend protein structures that fluoresce more brightly. There’s no time to go into detail here, but I can say that the results are very encouraging. The model predicted variants with six amino acid changes across the length of the sequence to improve the fluorescence of the protein.

This is just a glimpse of an exciting future. Protein manipulation has potential across a range of applications, including medicine, where it could improve cancer treatments or reduce organ rejection rates. New and more effective antibiotics could also be created using protein manipulation. In the materials space, there could be a role in removing plastic waste more efficiently. The technique could also be used to create better-performing textiles.

Now, the second up-and-comer is the process of training AI models in an experimentation loop. This essentially turns the traditional data-first approach on its head. Rather than asking, “we’ve got all this data, what will it solve?”, the idea is to start with the problem and then create the data sets you need.

You ask the AI to say what it would like to know, run an experiment in the lab to find the missing pieces of information, then feed the results back into the neural network so it can fill in its knowledge gaps. The approach is at a fairly early stage of development, with the aim of closing the loop and automating the whole experimentation process.
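
The sketch below shows one hedged way such a loop can look in code: a simple uncertainty-sampling strategy stands in for “asking the AI what it would like to know,” and a simulated lab experiment supplies the missing labels. The data, names and strategy are my own illustrative assumptions, not a description of any particular system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
pool_x = rng.normal(size=(500, 4))                           # unlabeled candidate "experiments"
true_label = (pool_x[:, 0] + pool_x[:, 1] > 0).astype(int)   # stands in for the lab result

# Start with a tiny labeled set containing a few examples of each outcome.
labeled_idx = list(np.where(true_label == 0)[0][:5]) + list(np.where(true_label == 1)[0][:5])

for round_ in range(5):
    model = LogisticRegression().fit(pool_x[labeled_idx], true_label[labeled_idx])

    # Ask the model what it would most like to know: the point it is least certain about.
    probs = model.predict_proba(pool_x)[:, 1]
    uncertainty = np.abs(probs - 0.5)
    uncertainty[labeled_idx] = np.inf                         # don't re-request known answers
    query = int(np.argmin(uncertainty))

    # "Run the experiment" for that point and feed the answer back into training.
    labeled_idx.append(query)
    print(f"round {round_}: queried sample {query}, pool accuracy {model.score(pool_x, true_label):.2f}")
```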

This is powerful stuff, and indicative of the point I made at the beginning. The better machines become at learning, the better the outcomes for business, society, and the world. In short, watch this space.

Ram Naidu is senior VP, artificial intelligence at Cambridge Consultants. He works with clients across global market sectors to help them succeed by identifying, developing and deploying world-changing innovation powered by AI. Naidu has an exceptional leadership record of bringing world-class AI-powered innovations to market, with significant expertise in innovation management, product strategy and commercialization.
