Scientists can now "decipher" people's thoughts without even touching their heads, Live Science reported.
Past mind-reading techniques relied on electrodes implanted deep into people's brains. The new method, described in a report posted September 29 to the preprint database bioRxiv, instead relies on a noninvasive brain-scanning technique called functional magnetic resonance imaging (fMRI).
fMRI tracks the flow of oxygenated blood through the brain, and because active brain cells need more energy and oxygen, this information provides an indirect measure of brain activity.
By its nature, this scanning method cannot capture brain activity in real time, since the electrical signals emitted by brain cells move much faster than blood through the brain.
But surprisingly, the study authors found that they could still use this imperfect proxy measure to decode the semantic meaning of people’s thoughts, even though they couldn’t produce word-for-word translations.
"If you had asked any cognitive neuroscientist in the world 20 years ago if this was feasible, they would have laughed at you," lead author Alexander Huth, a neuroscientist at the University of Texas at Austin, told Live Science.
For the new study, which has not yet been peer-reviewed, the team scanned the brains of one woman and two men in their 20s and 30s. Each participant listened to a total of 16 hours of different podcasts and radio shows during various sessions in the scanner.
The team then fed these scans into a computer algorithm they called a “decoder,” which compared the patterns in the audio with the patterns in recorded brain activity.
The algorithm could then take an fMRI recording and generate a story based on its content, and that story would match the original plot of the podcast or radio show "pretty well," Huth told Live Science.
In other words, the decoder could infer which story each participant had heard based on their brain activity.
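The study's actual decoder is far more sophisticated, but the core idea of inferring which story was heard can be illustrated with a toy sketch: compare the observed brain-response pattern against a predicted pattern for each candidate story and pick the best match. Everything here (the function name, the random "patterns," the correlation-based scoring) is a simplified assumption for illustration, not the researchers' method.

```python
import numpy as np

def identify_story(brain_response, candidate_patterns):
    """Return the index of the candidate stimulus pattern most
    correlated (Pearson r) with the observed brain response."""
    scores = [np.corrcoef(brain_response, p)[0, 1] for p in candidate_patterns]
    return int(np.argmax(scores))

rng = np.random.default_rng(0)
# Hypothetical predicted activity patterns for three candidate stories
stories = [rng.standard_normal(200) for _ in range(3)]
# Simulated observed response: story 1's pattern plus measurement noise
observed = stories[1] + 0.5 * rng.standard_normal(200)
print(identify_story(observed, stories))
```

Even with substantial noise, the correlation with the true story's pattern stands out against the near-zero correlations with the others, which is the intuition behind decoding from a noisy, indirect signal like fMRI.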
That said, the algorithm did make some mistakes, such as mixing up the characters' pronouns and confusing the first and third person. "It knows what's going on pretty precisely, but it doesn't know who's doing things," Huth said.
In additional tests, the algorithm was able to fairly accurately describe the plot of a silent movie that the participants watched in the scanner. It could even decode a story that the participants imagined telling in their heads.
In the long term, the research team aims to develop this technology so that it can be used in brain-computer interfaces designed for people who cannot speak or write.
Learn more about the new decoder algorithm at Live Science.
This article was originally published by Live Science. Read the original article here.