WEIRD SCIENCE: Yes, AI can read your mind (a little bit)
For one, it depends on how we train the AI to associate a thought with the electrical activity that the thought generates. For another, your consent is everything
Last month, researchers at the University of Texas at Austin published a paper describing an AI system that could read people’s thoughts while they were listening to a story, and translate those thoughts into a continuous stream of text. Titled 'Semantic reconstruction of continuous language from non-invasive brain recordings', and published in Nature Neuroscience, it was not the first study of its kind, but it improved on previous ones by using a technology that was noninvasive, requiring no implants.
The questions that the breakthrough raises are obvious. How efficient are such systems? What does it mean when we say they can read someone’s mind? To go to the very basics, what is the “mind”?
The science of thought
Thought is an abstract, even philosophical concept, but it has physical manifestations. It is these manifestations that the AI reads, interprets, and responds to, depending on the way it has been trained.
Every action of our bodies, including what we think, is controlled by neurons, or nerve cells, in our brain. Neurons communicate with one another and with the muscles by sending and receiving electrical and chemical signals. These complex structures have inspired the building of artificial neural networks, which have digital “neurons” that try to mimic biological neural networks by sending, receiving and processing signals.
When we talk about AI reading the mind, we are referring to technology that measures the electrical activity caused by signals in the brain. Such technology, called brain-computer interface (BCI), can be trained to associate an activity with its trigger, or the thought or the reaction that led to the electrical activity.
Training neural networks
Training an artificial neural network requires “input-output mapping”, based on a large number of examples fed into it. V Srinivasa Chakravarthy, a computational neuroscientist at the Indian Institute of Technology Madras, cited the example of a person affected by paralysis moving a cursor with thought.
“What you ask them is, imagine you have to move the cursor left or right. When they visualise that, the activity in the motor cortical areas of the brain will have a certain pattern. It’s very complicated, it’s very noisy, but there is an underlying pattern,” Chakravarthy said.
The pattern will be different when the subject visualises moving the cursor left and when they visualise moving it right. “You take a person, put electrodes on their head, and ask them to visualise left and right, left and right, hundreds of times. We record that data, feed it into the network and classify this into left or right,” Chakravarthy said.
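The left-versus-right training Chakravarthy describes can be sketched in a few lines. Everything below — the 8-channel templates, the noise level, the nearest-average classifier — is an illustrative assumption, not any lab's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical "brain activity" templates, one for visualising LEFT
# and one for RIGHT. Real motor-cortex recordings are far noisier and
# higher-dimensional; these 8-channel vectors are purely illustrative.
template = {"left": rng.normal(size=8), "right": rng.normal(size=8)}

def record_trial(direction, noise=1.0):
    """Simulate one noisy electrode recording of a visualisation trial."""
    return template[direction] + rng.normal(scale=noise, size=8)

# "Visualise left and right, hundreds of times": collect labelled trials.
trials = {d: np.array([record_trial(d) for _ in range(200)]) for d in template}

# The "underlying pattern": the average recorded response per direction.
centroid = {d: trials[d].mean(axis=0) for d in template}

def classify(recording):
    """Assign a new recording to whichever average pattern is closer."""
    return min(centroid, key=lambda d: np.linalg.norm(recording - centroid[d]))
```

Real systems replace the nearest-average rule with a trained neural network, but the principle is the same: hundreds of labelled trials reveal a pattern that new recordings can be matched against.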
To illustrate “mind reading” at a more complex level, let’s take a study from Purdue University, published in Cerebral Cortex in 2017. As three women watched nearly a thousand video clips, including those of humans and animals, a functional MRI (fMRI) device recorded the signals from their brains. This data was then fed into an artificial neural network.
Once it had been trained to associate various video images with various kinds of brain signals, the AI was able to decode subsequent fMRI data into specific categories and reconstruct the videos. These included videos that the AI system had not “seen” before.
How chatbots work
The University of Texas at Austin research published last month used not only fMRI but also GPT, the language model on which ChatGPT is built.
There are a couple of concepts to understand here: deep learning, and natural language processing. Deep learning is a kind of machine learning that tech giant IBM defines as a neural network that attempts to simulate the behaviour of the human brain by learning from large amounts of data. Natural language processing is a field of AI that allows machines to organise human text into patterns and extract meaning from that language. Predictive text in your Gmail replies is an example of this.
Such language processing systems mine large datasets of conversations among humans. From that, they learn to predict what kind of words should follow a given word or phrase.
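At its simplest, that prediction is just counting. The three-sentence corpus below is a made-up stand-in for the billions of sentences real language models are trained on:

```python
from collections import Counter, defaultdict

# A toy stand-in for a large dataset of human conversations.
corpus = [
    "i don't have my licence yet",
    "i don't have time today",
    "i don't know her name",
]

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

# "don't" was followed by "have" twice and "know" once, so
# predict_next("don't") returns "have".
```

GPT-class models do essentially this with neural networks over much longer stretches of context, rather than raw counts of adjacent words.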
What’s new
Researchers at the University of Texas at Austin showed that a new AI system, which they called a semantic decoder, could translate a person’s brain activity into a continuous stream of text. Their study used fMRI to read brain signals of three people listening to stories on podcasts. The system could, in fact, read their thoughts even when they were silently imagining telling a story.
Here again, input-output mapping was involved. For each listener, the system produced a set of maps that associated the listener’s brain response with the word or phrase that triggered that response. The AI was thus trained to predict how the brain of each individual would react to these words and phrases, and others with a similar meaning.
The system also incorporated GPT, which helped it predict words that could go together with the words being “heard”. Eventually, when the participants were asked to hear a new story or imagine telling one, the system was able to generate text from their brain activity.
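A heavily simplified sketch of that decoding step: predict what brain response each candidate word should produce, then pick the word whose prediction best matches what was actually recorded. The tiny vocabulary and random response vectors below are assumptions for illustration; the actual study learned these mappings from many hours of fMRI data per listener and searched over whole word sequences, not single words:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical learned mapping from words to expected brain responses
# (the study's "encoding model"), here just random 6-dimensional vectors.
vocab = ["drive", "licence", "story", "dog"]
expected_response = {w: rng.normal(size=6) for w in vocab}

def cosine(a, b):
    """Similarity between two response vectors (1.0 = identical direction)."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def decode(observed):
    """Return the word whose expected response best matches the recording."""
    return max(vocab, key=lambda w: cosine(observed, expected_response[w]))

# A noisy recording of a brain "hearing" the word 'licence' should,
# more often than not, decode back to 'licence'.
noisy = expected_response["licence"] + rng.normal(scale=0.3, size=6)
```

In the real system, GPT proposed likely continuations and the encoding model scored them against the fMRI data — one reason the output captures the gist of a thought rather than its exact words.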
Importantly, this was not a word-for-word transcript, but the general idea.
“Researchers designed it to capture the gist of what is being said or thought, albeit imperfectly. About half the time, when the decoder has been trained to monitor a participant’s brain activity, the machine produces text that closely (and sometimes precisely) matches the intended meanings of the original words,” the university said in a statement.
For example, when a participant listening to the podcast heard the words, “I don’t have my driver’s licence yet”, the AI translated the participant’s thoughts as, “She has not even started to learn to drive yet.”
Implications
Advances in mind reading come with limitations. The system developed at UT Austin, for example, can read brain signals only from a person it has been trained on (it won’t work on a new subject), and all the study participants had given their consent to their thoughts being read.
Are we then looking at a future in which AI can read anyone’s mind? Chakravarthy of IIT Madras underlined the necessity of consent from people whose brains are to be read by an fMRI scanner or an electroencephalogram (EEG) system (which he described as less efficient).
“So, unless you are in some future dystopian world where everybody is forced to have a brain implant, then we would be looking at a very sad vision… but I don’t think that will happen,” he said.
Kabir Firaque is the puzzles editor of Hindustan Times. His column, Weird Science, tackles a range of subjects from the history of inventions and discoveries to science that sounds fictional, but isn't.