AI Models Have Been Mimicking The Human Brain, Although No One Taught Them To
By Ell Ko, 20 Dec 2021
With the growth of artificial intelligence, technological breakthroughs seem to arrive by the day. One lingering concern, however, is that the machines will one day develop sentience of their own and decide to overthrow humanity, or something along those lines.
While there’s no suggestion that this will ever happen, a study published in the Proceedings of the National Academy of Sciences in November carries the beginnings of that sentiment.
The group of MIT researchers behind the study focused on language-based tasks, as AI has previously been observed to handle language-related cognition, specifically word prediction, in ways that resemble the human brain.
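Next-word prediction, the task at the center of the study, can be illustrated with a toy example. The sketch below is a hypothetical bigram counter written for this article, not one of the models the researchers evaluated: it estimates which word is most likely to follow another purely from word-pair frequencies in a tiny corpus.

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Count word-pair frequencies to estimate which word follows which."""
    counts = defaultdict(Counter)
    words = corpus.lower().split()
    for cur, nxt in zip(words, words[1:]):
        counts[cur][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = counts.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

corpus = "the brain processes language and the brain predicts the next word"
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # "brain" follows "the" most often here
```

Modern language models replace these raw counts with learned neural representations, but the task they are trained on is the same: guess the next word.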
In their investigation, the team compared data from 43 artificial neural network language models against brain recordings taken from human subjects as they listened to or read words.
The most striking finding is that some of the AI models can predict neural data, resembling the brain’s mechanics in language processing, despite never having been trained to do so.
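The kind of comparison described above is commonly done by fitting a linear map from a model’s internal activations to neural recordings and scoring how well the map predicts held-out responses. The sketch below uses synthetic stand-in data and plain least squares; it illustrates the general idea, not the study’s exact metric.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: rows are word presentations, columns are
# model units / recording channels. The real study compared activations
# from 43 language models against human neural recordings.
model_activations = rng.normal(size=(100, 20))            # 100 words x 20 units
neural_recordings = (model_activations @ rng.normal(size=(20, 5))
                     + 0.1 * rng.normal(size=(100, 5)))   # 5 channels + noise

# Split into fit/test halves, learn a linear map, then score by
# correlating predicted responses with held-out recordings.
X_fit, X_test = model_activations[:50], model_activations[50:]
Y_fit, Y_test = neural_recordings[:50], neural_recordings[50:]

weights, *_ = np.linalg.lstsq(X_fit, Y_fit, rcond=None)
Y_pred = X_test @ weights

score = np.mean([np.corrcoef(Y_pred[:, i], Y_test[:, i])[0, 1]
                 for i in range(Y_test.shape[1])])
print(round(score, 2))  # near 1.0 here, since the synthetic data is mostly linear
```

A model whose activations yield a high held-out correlation is said to predict the neural data well; in the real study, scores like this were computed against actual brain recordings rather than synthetic channels.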
Another major finding is that the more “brain-like” a model is, the better it replicates human behaviors such as reading times. Additionally, a model that performs well at next-word prediction also mirrors subjects’ brain scores.
This, in turn, could potentially be used to predict human behavior.
In the paper, the team speculates that the natural language processing community might be engaged in “something like community evolution,” first author Martin Schrimpf tells Interesting Engineering.
“If you take an [AI] architecture and it works well, then you take the parts of it that work, ‘mutate’ it, recombine it with other architectures that work well, and build new ones. It’s not too different [from evolution] in the broad sense, at least.”
The researchers’ next step is to build variants of these language processing models, so that they can investigate the effects of small changes in their architecture.
[via Interesting Engineering, cover image via Kittipong Jirasukhanont | Dreamstime.com]