Google's DeepMind Now Has a Memory

Ed Lauder

March 15, 2017

3 Min Read

Google's AI system, DeepMind, can now learn how to play an Atari video game, remember what it has learnt, and use its newly acquired knowledge to play a different game.

Google's AI system, DeepMind, burst onto the scene back in 2014 and became famous for attaining higher scores than any human in Atari video games such as Space Invaders and Breakout. However, it wasn't able to remember what it had learned: its neural network had to be reprogrammed and started from scratch before it could play another game.

Yet Google has rectified that problem and managed to fit DeepMind with a memory, meaning it can play as many video games as it pleases and draw on the knowledge it acquired playing previous ones. "Previously, we had a system that could learn to play any game, but it could only learn to play one game," James Kirkpatrick, a research scientist at DeepMind, told WIRED. "Here we are demonstrating a system that can learn to play several games one after the other."

The researchers' work was published in both a blog post and a paper in the journal Proceedings of the National Academy of Sciences, with Kirkpatrick as lead author. He explained how the team gave DeepMind's AI a memory, testing it on both supervised learning and reinforcement learning tasks. "The ability to learn tasks in succession without forgetting is a core component of biological and artificial intelligence," wrote Kirkpatrick.

He went on to explain that one of the biggest shortcomings of neural networks and artificial intelligence is their inability to retain the information they have learned, which is why DeepMind's recent upgrade is such a major step forward for the technology.

Google's scientists claimed that they were able to get DeepMind to demonstrate "continual learning", which is similar to the way we humans learn and memorise things. In order to do this, DeepMind's researchers developed an algorithm called elastic weight consolidation (EWC). "Our approach remembers old tasks by selectively slowing down learning on the weights important for those tasks," wrote Kirkpatrick in the paper.

"We only allow them to change very slowly [between games]," he said. "That way there is room to learn the new task but the changes we've applied do not override what we've learned before".

To test this algorithm, the researchers used the same neural network DeepMind relied on to beat those Atari games, the Deep Q-Network (DQN), only this time beefed up with EWC. "Previously, DQN had to learn how to play each game individually," the paper states. "Whereas augmenting the DQN agent with EWC allows it to learn many games in sequence without suffering from catastrophic forgetting."
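As a very rough picture of what learning games in sequence might look like, here is a hypothetical training outline; model, train_on_game and estimate_fisher are placeholder names invented for this sketch, not components of DeepMind's system.

    # Hypothetical outline of sequential learning with EWC; model,
    # train_on_game and estimate_fisher are placeholders, not DeepMind's API.
    games = ["Space Invaders", "Breakout", "Pong"]
    fisher, old_params = None, None  # nothing to protect before the first game

    for game in games:
        # From the second game onwards, the training loss would include the
        # EWC penalty, so weights important to earlier games change slowly.
        train_on_game(model, game, fisher, old_params)

        # Snapshot the weights and estimate each one's importance so the
        # next game avoids overwriting them (catastrophic forgetting).
        old_params = {name: p.detach().clone() for name, p in model.named_parameters()}
        fisher = estimate_fisher(model, game)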

In short, DeepMind uses DQN to memorise what it has learned from playing one game, then draws on that knowledge in the next game it moves on to. However, it isn't perfect. DeepMind's new artificial memory is still only capable of retaining a limited amount of information, which means that when it switches games it won't perform as well as it would with a fresh neural network.

"At the moment, we have demonstrated sequential learning but we haven't proved it is an improvement on the efficiency of learning," wrote Kirkpatrick. "Our next steps are going to try and leverage sequential learning to try and improve on real-world learning."

Image courtesy of DeepMind
