More than a squeak: Why new study on how mice talk holds significance for human speech
The study could have implications for investigating speech disorders in humans, and mouse vocalisation could serve as a surrogate for human language in future studies of autism.
Human speech is meaningful because of the way we string syllables into words and words into sentences. Birdsong is less expansive, but its structure too is a string of predictable sequences, allowing females to anticipate and comprehend the call of males.
New knowledge has now emerged about mice, which communicate with one another by vocalising in the ultrasonic range, beyond human hearing. In a study that could have implications for investigating speech disorders in humans, IIT Kharagpur researchers have found that mouse vocalisation, much like human speech and birdsong, is made up of predictable sequential structures that vary according to the situation. Birdsong is already a proxy for studying human vocalisation, and the new findings offer the ultrasonic vocalisation of mice as another option, scientists not involved in the study told HT.
The study was conducted by Swapna Agarwalla (now a post-doc at the University of Rochester), Amiyangshu De (a PhD student at IIT Kharagpur) and Dr Sharba Bandyopadhyay (assistant professor of electronics engineering). It has been accepted for publication in the Journal of Neuroscience, which has released an early version online.
Experiments and findings
The study exposed mice to predictable as well as random sound sequences, and examined various aspects of their response — behaviour, gene expression in the auditory cortex, and electrical response in single neurons.
In the behavioural experiments, the researchers placed adult mice in three different situations: males alone in a recording chamber, males and females in the same chamber but with a separator between them, and males and females together. They recorded the sounds emitted in each context, and identified five distinct syllables and 8-10 meaningful sequences formed from these.
The team subsequently played these recordings to female mice from one side of the chamber, and artificially generated random sequences from the other side. The females spent more time on the side playing the natural predictable sequences, even when the sides were switched.
“Interestingly, this behaviour was observed in females with no prior exposure to adult males. This highlights that even mice are attracted towards sound sequences with some regularity in them rather than random structures, like we humans are,” said first author Agarwalla.
Additionally, exposure to a “meaningful” sound sequence resulted in higher expression of a gene called c-fos in the auditory cortex of the females, compared to its expression following exposure to random sequences, said De, who performed the c-fos experiments.
Electrophysiological measurements, again, showed that the neuronal responses of female mice favoured natural over random sequences. After an interaction with males, the firing patterns of certain types of neurons in the auditory cortex were altered.
Significantly, while these neurons became more sensitive to natural sequences than random sequences, their sensitivity to the constituent syllables remained unaltered. The researchers see a parallel between this and humans’ ability to learn to understand new words with letters that are already familiar, and new sentences with familiar words.
Previous work had shown that females prefer to vocalise when they are with vocalising males rather than with silent males. “Additionally, we have observed that the number of syllables produced differs in the three experimental contexts. It was evident that the highest number of syllables are produced in the ‘together’ context,” De said.
Ultrasonic vocal communication, in fact, works between mouse pups and their mothers too. Agarwalla and De both referred to previous findings that showed how mouse pups separated from their mother emit vocalisations, triggering a search by the mother.
Neuroscientist Sumantra “Shona” Chattarji, director of the Centre for High Impact Neuroscience and Translational Applications (CHINTA), Kolkata, described the study as “very powerful”, one that looks at its subject across behavioural, molecular and single-neuron levels. Dr Chattarji, who was not involved in the study, said the findings could be very useful in future studies of autism, an area he is working on.
“One of the key aspects of autism is that a lot of affected individuals face challenges with verbal communication. And that’s not easy to study in autism because it is difficult to capture facets of human language in animal models. This study opens up exciting new possibilities in this regard,” he said.
The way one mouse communicates with another, using ultrasonic vocal calls, could be used as a surrogate for human language, and any of the genetic mutations that cause autism can be engineered in mice, Chattarji said. “Then you look at those mice models to see: do they have these signatures of ultrasonic calls? Do they understand these sequences or not? My prediction is that they won’t; it will be mixed up. And this study offers a powerful framework for studying communication in animal models of autism.”
Birds and mice
Given that birdsong is a model for studying human vocalisation, what more does mouse vocalisation bring to the table? Dr Soumya Iyengar, a scientist at the National Brain Research Centre in Manesar, weighed in. She specialises in the study of birdsong, specifically how substances called neuromodulators affect vocalisation and vocal learning in zebra finches.
In zebra finches, males attract females using what Dr Iyengar described as “female-directed” songs, while they also sing “undirected” songs. She also referred to a study on male collared flycatchers (Behavioral Ecology, 2021), which found that these birds produced syllables that were predictable rather than random.
“Mice produce ultrasonic vocalisations, unlike birds. However, their songs resemble those of birds, in the sense that they are composed of syllables and inter-syllable intervals,” she said. “The importance of Dr Bandyopadhyay’s study is that for the first time, they have demonstrated a predictable sequential structure in mouse ultrasonic vocalisations, which is similar to that in both birdsong and speech. Whereas songbirds continue to be used as an excellent model system to study vocalisation and vocal learning in humans, the results suggest that mice can also be used for such studies. This is important because transgenic mice can be created more easily than transgenic birds.”