Meta AI launches long-term research on brain and AI language models
Initiative part of its broader investments toward human-level AI
Meta, the parent of Facebook, is embarking on long-term research into how the brain processes language, as it seeks to develop better AI models for understanding spoken and written words. The initiative is part of its broader investments toward developing human-level AI.
While AI models have come a long way in interpreting language, there is still much room for improvement. In a blog post, Meta researchers explained that children, for instance, can understand from just a few examples that the word ‘orange’ can refer to both the fruit and the color, something AI models cannot yet do as simply and easily.
For AI to get there, researchers will use deep learning to analyze brain signals related to language, drawing on an original neuroimaging dataset rather than publicly available ones. The dataset is being created by Neurospin, a neuroimaging center that is a study partner along with the National Institute for Research in Digital Science and Technology (INRIA).
“We’ll use insights from this work to guide the development of AI that processes speech and text as efficiently as people,” wrote researchers Jean Remi King, Alexandre Defossez, Charlotte Caucheteux and Theo Desbordes.
The current initiative expands upon Meta’s efforts in the past two years to analyze how the brain processes words and sentences, using public neuroimaging datasets that were collected and shared by the Max Planck Institute for Psycholinguistics, Princeton University and other institutions.
That earlier work has produced two key insights. The first is that AI language models that “closely resemble” brain activity perform better at predicting the next word based on context; such a model would predict, for example, that the word ‘time’ follows “once upon a.”
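The sketch below illustrates what next-word prediction from context looks like with an off-the-shelf language model. The choice of GPT-2 via the Hugging Face transformers library is an assumption made for illustration only; it is not the model Meta describes in its research.

```python
# A minimal sketch of next-word prediction with a pretrained language model.
# Using GPT-2 through the Hugging Face transformers library is an assumption
# made for illustration; it is not the model Meta describes in its research.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "once upon a"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits      # shape: (1, sequence_length, vocab_size)

next_token_logits = logits[0, -1]        # scores for the word that follows the prompt
predicted_id = int(next_token_logits.argmax())
print(tokenizer.decode(predicted_id))    # typically " time"
```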
The second insight is that certain parts of the brain have “long-range forecasting ability” – they can predict words and ideas “far ahead in time.” Most AI language models today are trained to forecast the “very next word,” according to Meta researchers.
Creating AI models with this long-range forecasting ability could be a leap forward.
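To make the contrast concrete, here is a minimal sketch of how a standard next-word objective could be paired with an auxiliary head that forecasts a token several positions ahead. The architecture, the `horizon` parameter, and the equal weighting of the two losses are assumptions for illustration only, not Meta's training setup.

```python
# A hypothetical sketch: a next-word head plus a "long-range" head that
# forecasts a token several positions ahead. This illustrates the idea only;
# it is not Meta's actual model or training code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ForecastHeads(nn.Module):
    def __init__(self, hidden_size, vocab_size, horizon=8):
        super().__init__()
        self.horizon = horizon                        # how far ahead the second head looks
        self.next_head = nn.Linear(hidden_size, vocab_size)
        self.far_head = nn.Linear(hidden_size, vocab_size)

    def forward(self, hidden_states, token_ids):
        # hidden_states: (batch, seq_len, hidden) from a language model
        # token_ids:     (batch, seq_len) integer token ids, with seq_len > horizon
        h, k = hidden_states, self.horizon
        next_word_loss = F.cross_entropy(
            self.next_head(h[:, :-1]).flatten(0, 1), token_ids[:, 1:].flatten())
        long_range_loss = F.cross_entropy(
            self.far_head(h[:, :-k]).flatten(0, 1), token_ids[:, k:].flatten())
        return next_word_loss + long_range_loss       # equal weighting, purely illustrative

# Example with random hidden states and tokens, just to show the shapes involved.
heads = ForecastHeads(hidden_size=768, vocab_size=50257)
loss = heads(torch.randn(2, 32, 768), torch.randint(0, 50257, (2, 32)))
```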
Using AI to develop better AI models
Analyzing brain signals is not easy; they are often “opaque and noisy,” the researchers said. Enter deep learning, which uses layers of neural networks to do the heavy lifting of making sense of the large volumes of data the research requires. In effect, the team is using AI on the brain to develop better AI models.
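As a rough illustration of what such an analysis could involve, the sketch below uses a small convolutional network to map a window of noisy, multi-channel brain recordings onto a vector the size of a language model's word representation. The architecture, channel count, window length, and embedding size are all assumptions for illustration; this is not the Neurospin/INRIA pipeline.

```python
# A hypothetical sketch of a neural network that maps noisy, multi-channel
# brain recordings to a language-model-sized feature vector. All layers and
# dimensions are illustrative assumptions, not the study's actual pipeline.
import torch
import torch.nn as nn

class BrainDecoder(nn.Module):
    def __init__(self, n_channels=270, embed_dim=768):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 128, kernel_size=5, padding=2),  # mix sensor channels
            nn.ReLU(),
            nn.Conv1d(128, 128, kernel_size=5, padding=2),         # extract temporal patterns
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                                # pool over time
            nn.Flatten(),
            nn.Linear(128, embed_dim),                              # project to embedding size
        )

    def forward(self, signals):
        # signals: (batch, n_channels, n_timesteps)
        return self.net(signals)

# Example: one simulated recording window (270 channels, 200 time steps) for a single word.
decoder = BrainDecoder()
fake_window = torch.randn(1, 270, 200)
print(decoder(fake_window).shape)   # torch.Size([1, 768])
```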
The researchers said several studies show that the brain is systematically organized like AI language models. “Deep learning tools have made it possible to clarify the hierarchy of the brain in ways that weren’t possible before,” they wrote.
They are excited to find that there are “quantifiable similarities between brains and AI models. And these similarities can help generate new insights about how the brain functions. This opens new avenues, where neuroscience will guide the development of more intelligent AI, and where, in turn, AI will help uncover the wonders of the brain.”