
Scientists at Meta AI are working on a language model that mimics the way the human brain handles language. In an effort to better understand how the brain processes language, Meta AI has launched a long-term research project. The neuroimaging center NeuroSpin and the French national research institute Inria will be involved in the study. Meta AI will compare the responses of AI language models and human brains to the same set of words.

Artificial neural networks for language processing are coming ever closer to mimicking the functions of the human brain, shedding new light on how thinking might be implemented in neural tissue. In order to mimic human language as closely as possible, the most accurate AI models now learn to predict the next word in a sentence.

Even though these technologies may give users an impression of “humanness,” the algorithms predict the next word from massive databases of past text. Human brains, on the other hand, are able to anticipate words and ideas far in advance, taking into account everything that a statement or notion implies.

A simple example is to give an AI model the phrase “Once upon a” and have it guess the next word. People who grew up with fairy tales are more likely than others to anticipate the word “time” when they hear “Once upon a,” and the phrase also conjures up other culturally familiar figures, such as evil witches and dragons.
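To make that example concrete, the sketch below uses a publicly available causal language model to rank candidate next words for the prompt “Once upon a.” The choice of GPT-2 and the Hugging Face transformers library are illustrative assumptions; the article does not say which model Meta AI uses.

```python
# Minimal sketch of next-word prediction with a pretrained causal language model.
# GPT-2 via Hugging Face transformers is an assumption for illustration only.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Once upon a"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Distribution over the next token; print the top candidates.
probs = logits[0, -1].softmax(dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r:>10}  p={p.item():.3f}")
# A model trained on large text corpora typically ranks " time" at or near the top here.
```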

When making these predictions, the brain enters “brain states” that can be observed with brain imaging. Functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG) scanners recorded brain activity as subjects read or listened to a story. When the researchers applied machine learning to brain scans from public data sets, together with newly collected fMRI and MEG images, they noticed something striking: human brains appear to handle language in a hierarchical fashion similar to how AI language models work.
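The kind of comparison described above is often done with a linear “encoding model”: fit a regularized regression from a language model’s per-word activations to the recorded brain signal, then score how well it predicts held-out activity. The sketch below illustrates that setup on synthetic data; the ridge-regression approach, the feature sizes, and the variable names are assumptions for illustration, not the researchers’ exact pipeline.

```python
# Sketch of a linear encoding model: map language-model activations to brain
# signals and score the fit on held-out data. All data here is synthetic.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_words, n_features, n_voxels = 500, 768, 100          # assumed sizes
model_activations = rng.standard_normal((n_words, n_features))  # one hidden-layer vector per word
brain_signal = rng.standard_normal((n_words, n_voxels))         # e.g. fMRI response per word, per voxel

X_train, X_test, y_train, y_test = train_test_split(
    model_activations, brain_signal, test_size=0.2, random_state=0
)

# One regularized linear map from model features to every voxel's signal.
encoder = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X_train, y_train)
pred = encoder.predict(X_test)

# "Brain score": correlation between predicted and measured activity per voxel.
scores = [np.corrcoef(pred[:, v], y_test[:, v])[0, 1] for v in range(n_voxels)]
print(f"mean held-out correlation: {np.mean(scores):.3f}")  # near zero on random data
```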

In this picture, the brain can be thought of as running a visual processing algorithm, a word comprehension algorithm, and something like a huge network of AI language transformers all at the same time.

To understand the world, we use narratives and representations that are generated by the interplay of particular brain regions. According to the research, the prefrontal and parietal cortices (located at the front and upper middle of the brain, respectively) were better accounted for by language models enhanced with far-off future word predictions.
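One way to picture “far-off future word predictions” is to augment a model’s current-word representation with the representation of a word several positions ahead and test whether the augmented features fit a region’s activity better. The sketch below, again on synthetic data, shows that comparison; the horizon values and the concatenation scheme are illustrative assumptions rather than the study’s actual method.

```python
# Sketch: compare a plain per-word representation against one augmented with a
# "forecast" of a word several positions ahead. Synthetic data; the numbers
# printed are meaningless and only demonstrate the pipeline.
import numpy as np
from sklearn.linear_model import RidgeCV

def forecast_features(activations: np.ndarray, horizon: int) -> np.ndarray:
    """Concatenate each word's vector with the vector `horizon` words ahead."""
    future = np.roll(activations, -horizon, axis=0)
    future[-horizon:] = 0.0          # no future context at the end of the story
    return np.concatenate([activations, future], axis=1)

rng = np.random.default_rng(1)
acts = rng.standard_normal((500, 768))   # per-word model activations (assumed)
voxel = rng.standard_normal(500)         # one region's response per word (assumed)

for horizon in (0, 2, 8):
    X = acts if horizon == 0 else forecast_features(acts, horizon)
    score = RidgeCV().fit(X[:400], voxel[:400]).score(X[400:], voxel[400:])
    print(f"horizon={horizon:<2d}  held-out R^2: {score:.3f}")
```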

In this way, it appears that brains and algorithms share certain internal representations that help with language processing. The finding was quickly confirmed by recording the brain activity of 200 volunteers during a simple reading task, and an independent team from the Massachusetts Institute of Technology (MIT) ran its own analysis a week after the first, with the same results.

Using quantitative comparisons between human brains and AI models, this study offers fresh insights into how the brain works. Artificial intelligence (AI) that is more in tune with human language use will have an easier time relating to people.