A team of researchers has developed a system that combines MRI scans with artificial intelligence to decode a person's stream of words. Rather than reproducing each word exactly, the technology reconstructs the gist of what a person hears or imagines, focusing on the semantics and meaning behind the words. Study co-author Alexander Huth stresses that the system cannot read minds: it works only when a participant is actively cooperating with the scientists. Even so, such a system may one day help people who cannot speak because of brain injury or disease, while also deepening our understanding of how the brain processes language and thought. Unlike earlier efforts, which relied on sensors placed directly on the brain's surface to detect the signals involved in articulating words, this approach uses MRI scans.
Marcel Just, a psychology professor at Carnegie Mellon University who was not involved in the study, says the Texas team's approach is an effort to decode more complex thoughts. The implications may extend beyond communication: mental illness is fundamentally a brain dysfunction, and understanding it remains a major medical challenge. According to Just, this kind of approach may someday hold the key to that puzzle.
The purpose of the study was to investigate how the brain processes language. To do so, researchers had three individuals each spend up to 16 hours inside a functional MRI scanner, which detects activity throughout the brain. While in the scanner, participants listened to podcasts through headphones, specifically stories from The Moth Radio Hour. Surprisingly, the scans showed that listening activated many areas of the brain beyond those associated with speech and language, including regions involved in navigation, mental math, and tactile processing. A computer then analyzed the MRI data, learning to match particular patterns of brain activity with specific streams of words. In later tests, participants listened to new stories in the scanner, and the computer attempted to reconstruct those stories from each participant's brain activity.
The system could decode not only what a participant heard but also what they imagined saying, and it could even generate language descriptions of silent videos. Although the MRI-based approach is slower and less accurate than the brain-implant systems being developed for paralyzed people, it is noninvasive and raises fewer ethical concerns. Still, future versions could potentially read a person's private thoughts, a possibility that could be harmful.