AI Business is part of the Informa Tech Division of Informa PLC
Researchers at Facebook have developed a method of making AI systems forget certain information, dubbing the technique Expire-Span.
Revealing it in a blog post, Facebook said Expire-Span forgets unnecessary information at scale, while retaining the rest of its memory.
The new method “improves efficiency across several long-context tasks in language modeling, reinforcement learning, object collision, and algorithmic tasks,” reads the post authored by research scientists Angela Fan and Sainbayar Sukhbaatar.
“It works by first predicting information that’s most relevant to the task at hand. Based on the context, Expire-Span then assigns an expiration date to each piece of information — much like the expiration date on a bottle of milk.
“When the date has passed, the information gradually expires from the AI system. Intuitively, more relevant information is retained longer, while irrelevant information expires more quickly. With more memory space, AI systems can process information at drastically larger scales.”
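The "expiration date" idea described above can be illustrated with a short sketch. The function names, the relevance scores, and the maximum span below are hypothetical choices for illustration, not Facebook's implementation: each memory gets a predicted span derived from a relevance score, and a memory is dropped once its age exceeds that span.

```python
import math

# Hypothetical sketch of the expiration idea (not Facebook's actual code):
# each memory receives a predicted span; once the gap between the current
# timestep and the memory's timestep exceeds that span, the memory expires.

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def predict_span(relevance_score: float, max_span: int = 16) -> float:
    """Map a scalar relevance score to an expiration span in [0, max_span]."""
    return max_span * sigmoid(relevance_score)

def surviving_memories(memories, current_step):
    """Keep only memories whose expiration span has not yet elapsed.

    `memories` is a list of (timestep, relevance_score, payload) tuples.
    """
    kept = []
    for timestep, score, payload in memories:
        if current_step - timestep <= predict_span(score):
            kept.append(payload)
    return kept

mems = [(0, -3.0, "irrelevant"), (2, 3.0, "relevant")]
print(surviving_memories(mems, 10))  # only the high-relevance memory survives
```

Intuitively, a high relevance score yields a long span (the memory lingers), while a low score yields a short one (the memory expires almost immediately), matching the milk-carton analogy in the quote above.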
Describing it as “a step toward achieving human-like memory in machines,” the authors noted that current AI systems, which selectively focus on certain parts of their input, struggle with vast amounts of less important information, driving up computational demands and costs.
Using Expire-Span, AI-based systems can gradually forget irrelevant information, allowing operations to be “continuously optimize[d].”
“The main challenge with forgetting in AI is that it’s a discrete operation, meaning you either forget or not — there is no in-between,” the authors explained.
“Optimizing such discrete operations is really hard, which is why most systems process information indiscriminately and incur heavy computational costs. Previous approaches to this problem often focus on compression, so information that’s far in the past is compressed to be smaller. While this allows the model to extend to longer ranges in the past, compression yields blurry versions of memory.”
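The discreteness problem the authors describe is typically handled by relaxing the hard keep/forget cutoff into a soft, differentiable one. The snippet below is a minimal sketch of that general idea, assuming a simple linear ramp (the function name and ramp width are illustrative, not taken from the paper):

```python
def soft_expire_mask(span: float, age: float, ramp: float = 4.0) -> float:
    # Hard forgetting would be: 1.0 if age <= span else 0.0, which has no
    # useful gradient. Replacing the step with a linear ramp of width `ramp`
    # lets the mask decay smoothly from 1 to 0 after the span elapses, so
    # training can adjust the predicted span via gradient descent.
    return max(0.0, min(1.0, 1.0 + (span - age) / ramp))

print(soft_expire_mask(10.0, 5.0))   # fresh memory: mask is 1.0
print(soft_expire_mask(10.0, 12.0))  # partway down the ramp
print(soft_expire_mask(10.0, 14.0))  # fully expired: mask is 0.0
```

In an attention model, such a mask would multiply the attention weights, so an expired memory contributes nothing to the output while a fading one contributes partially.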
A scientific paper describing the method has been published as part of the International Conference on Machine Learning (ICML). The accompanying code has been published on GitHub under a Creative Commons NonCommercial 4.0 International license.
Titled “Not All Memories Are Created Equal: Learning to Forget by Expiring,” the paper outlines Expire-Span in detail, suggesting that it can scale to memories that are “tens of thousands in size, setting a new state of the art on incredibly long context tasks such as character-level language modeling and a frame-by-frame moving objects task.”
“The impressive scalability and efficiency of Expire-Span has exciting implications for one day achieving a wide range of difficult, human-like AI capabilities that otherwise would not be possible,” the authors said.
“Theoretically, one day, Expire-Span could empower people to more easily retain information they find most important for these types of long-range tasks and memories.”