
The Neurolinguistics of Tonal Languages

First of all, if the title of this article grabbed your attention, then I would bet the contents of my wallet right now (which is, by the way, not a lot!) that you are already familiar with the concept of linguistic tone. If not, I hope those already in the know will forgive a small digression to illustrate the concept. ‘Tone’ in linguistics refers to the use of pitch alone to distinguish one lexical unit (word) from another.

This is a concept largely alien to speakers of most Western languages (with the arguable exception of the Norwegian and Swedish pitch accents). When introducing the notion of tones, there is one example that pops up disproportionately often, and I am not about to break with tradition here. It comes from Mandarin: depending on the pitch contour you apply when producing the syllable ‘ma’, you can express completely different words. I have no doubt many Chinese mothers have been innocently called horses by unwitting, well-meaning learners of Mandarin.
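To make this concrete, here is a minimal sketch in Python of the classic ‘ma’ paradigm. The tone names and glosses follow the standard description of Mandarin; the code itself is purely illustrative.

```python
# The classic Mandarin example: one syllable, four tones, four words.
ma_words = {
    "mā (tone 1, high level)": "mother (妈)",
    "má (tone 2, rising)":     "hemp (麻)",
    "mǎ (tone 3, dipping)":    "horse (马)",
    "mà (tone 4, falling)":    "to scold (骂)",
}

for syllable, gloss in ma_words.items():
    print(f"{syllable} -> {gloss}")
```

Segmentally these four forms are identical; only the pitch contour tells a listener whether you mean your mother or her horse.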

[Image: the four tones of Mandarin]

Mandarin has four tones, as illustrated above. Cantonese has more (between six and nine; the exact number is disputed). Another language of China, Kam, is said to have up to 15 tones! By analogy with the linguistic term phoneme, some people prefer to speak of tonemes rather than tones. With a larger number of tonemes, a language is freer to have a simpler syllable structure, since it has more linguistic building blocks to play with.
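The arithmetic behind that trade-off is simple: tones multiply the syllable inventory. Here is a back-of-the-envelope sketch; the base-syllable counts below are rough illustrative figures, and in reality not every tone–syllable combination actually occurs.

```python
# Tones multiply a language's stock of distinct syllables.
# Base-syllable counts are rough illustrative figures only.
languages = {
    "Mandarin":  {"base_syllables": 400, "tonemes": 4},
    "Cantonese": {"base_syllables": 630, "tonemes": 6},
}

for name, lang in languages.items():
    combos = lang["base_syllables"] * lang["tonemes"]
    print(f"{name}: ~{lang['base_syllables']} base syllables x "
          f"{lang['tonemes']} tonemes = ~{combos} distinct forms")
```

A non-tonal language like English, by contrast, has to make up the difference with a far larger inventory of segmentally distinct syllables, often estimated in the thousands.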

So why are tonal languages interesting from a neurolinguistic point of view? Before we can answer that, we need a very crude summary of the early received wisdom in the neurolinguistics literature. Back in the 1800s, Paul Broca and Carl Wernicke studied patients with left-hemisphere brain damage and discovered that this damage interfered with normal language processing. Curiously, lesions to the right hemisphere did not seem to lead to any language disruption.

This is how we first discovered the left hemisphere’s specialisation for language in the vast majority of the population. However, the story became more interesting in the 1870s, when John Hughlings Jackson discovered that people suffering from aphasia (linguistic deficits due to brain damage) could express a great deal of information by modulating the pitch of meaningless utterances. In effect, they were successfully communicating via pitch messages that they could not put into pronounceable words. The left hemisphere did not seem to reign supreme over all linguistic function as once thought.

Since then, a huge body of research has shown that the right hemisphere processes a vast amount of prosodic information during language comprehension, though not the entirety of it. The general idea appears in many neurolinguistics textbooks; The Handbook of Neurolinguistics (1998), for example, puts it this way: “Yet, the evidence seems to support the notion that the right hemisphere modulates dominantly the graded, affective components of language.” Here, affective derives from the specialised term affect, roughly meaning ‘emotional’ in layman’s terms. A key brain structure underlying emotional intonation is the right-hemisphere homologue of Broca’s area, whose left-hemisphere counterpart was the first language area ever to be discovered.

“So, if the emotional aspects of prosody are mainly processed in the right hemisphere, could there not be a connection to the processing of music?” I hear you ask. Music is indeed processed at a finer granularity in the right hemisphere in a whole host of interesting ways, from melody all the way up to rhythm, though the left hemisphere still subserves some aspects of musical processing. This connection didn’t go unnoticed for long. Research carried out at the University of California around 2000 reported that speakers of Mandarin and Vietnamese (another tonal language) were much more likely to have perfect pitch, the ability to recognise or produce a specific musical note at will.

Is the tonal processing of language the key to explaining this curious finding? It was an interesting idea, and one that is still widely held today. No small number of studies have found right-hemisphere areas of the frontoparietal network showing increased activation, relative to their left-hemisphere counterparts, when speakers of tonal languages process tonal information.

The interesting question, then, is whether tones are treated more in the “musical domain” of the right hemisphere, and whether this explains the increased propensity for perfect pitch among speakers of tonal languages. This is why the neurolinguistic processing of tones has garnered so much interest: it offers an insight into aspects of language unfamiliar to the traditional Western world.

In the last decade, we have discovered (with fair, though not conclusive, confidence) that the specialisation for linguistically relevant tonal information does not sit in the right hemisphere per se, but actually happens in an area of the brain previously thought to be completely unrelated to linguistic processing. The plot did indeed thicken!

The same signals are received by the ear, but depending on whether the prosodic information is linguistic or non-linguistic, they are then routed to different hemispheres for further processing. You see increased right-hemisphere activation when tonal-language speakers are asked to infer things like “Does this person sound happy or sad?” (just as in speakers of non-tonal languages), but when they are asked to state which word is being spoken, i.e. where tone discrimination is required, the signals are quickly picked up by the left hemisphere instead. When English speakers hear the exact same stimuli, their right hemispheres light up as expected, because this prosodic information is non-linguistic to them and is processed in the right hemisphere by default.
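The routing logic can be caricatured in a few lines of Python. This is a deliberately cartoonish sketch of the pattern just described, not a model of any real neural circuitry:

```python
# A cartoon of the lateralisation pattern: the same pitch signal is
# handled by different hemispheres depending on whether pitch is
# lexically meaningful to the listener and relevant to the task.

def dominant_hemisphere(speaks_tonal_language: bool,
                        task_is_lexical: bool) -> str:
    """Which hemisphere dominates processing of a pitch contour?"""
    if speaks_tonal_language and task_is_lexical:
        # Pitch carries word identity: treated as linguistic.
        return "left (lexical tone)"
    # Pitch carries only affect/prosody: right hemisphere by default.
    return "right (emotional/prosodic pitch)"

print(dominant_hemisphere(True, True))    # Mandarin speaker: "which word?"
print(dominant_hemisphere(True, False))   # Mandarin speaker: "happy or sad?"
print(dominant_hemisphere(False, True))   # English speaker, same stimuli
```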

Studies showed that tonal-language speakers who suffered damage to the left hemisphere lost the ability to discriminate correctly between tonemes (i.e. they could no longer hear the difference between the Mandarin words for horse, mother, etc.). This pattern was not observed in patients with right-hemisphere damage, though they instead lost the ability to interpret non-linguistic pitch information, such as the emotional cues carried by prosody. So where is this area I alluded to before, the one that decides what is linguistically salient versus prosodically salient? Believe it or not, it appears to be in the brainstem.

Here’s how it works. If tone is lexically meaningful in a language, that experience trains neurons in the brainstem to process signals differently from those of non-tonal-language speakers. Sound signals, first processed in the ear, pass through the brainstem before ultimately reaching primary auditory cortex, which means these experience-dependent neurons shape how the incoming sounds are to be interpreted by auditory cortex, functioning in effect as a hemispheric gatekeeper.

They achieve this through a mechanism known as the “frequency-following response” (FFR), in which neurons in the brainstem lock their firing to the pitch of the incoming sound, tracking linguistically relevant tonal information. How the brain achieves the gating itself is still quite mysterious, but the differential effects strongly suggest it is happening. The work of Krishnan & Gandour is of interest to anyone who wants the details.
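To give a feel for what “frequency following” means in signal terms, here is a minimal sketch: synthesize a rising pitch contour (roughly like Mandarin tone 2) and recover its fundamental frequency frame by frame with a simple autocorrelation tracker. This illustrates the measurement idea only; it is not a model of brainstem physiology.

```python
import numpy as np

fs = 16_000                               # sample rate (Hz)
t = np.arange(0, 0.5, 1 / fs)             # 500 ms stimulus
f0 = np.linspace(120, 220, t.size)        # rising F0: 120 -> 220 Hz
signal = np.sin(2 * np.pi * np.cumsum(f0) / fs)   # synthetic "tone 2"

frame = int(0.04 * fs)                    # 40 ms analysis frames
for start in range(0, t.size - frame, frame):
    x = signal[start:start + frame]
    ac = np.correlate(x, x, mode="full")[frame - 1:]   # lags >= 0
    lo, hi = fs // 400, fs // 80          # search the 80-400 Hz range
    lag = lo + int(np.argmax(ac[lo:hi]))  # peak lag = pitch period
    print(f"{start / fs:.2f} s: estimated F0 ~ {fs / lag:.0f} Hz")
```

In FFR experiments, an analogous pitch-tracking analysis is run on scalp-recorded brainstem potentials; how faithfully the neural response follows the stimulus contour is the quantity of interest.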

The main evidence came from FFR experiments with speakers of Mandarin, Thai and English. Strong, faithful pitch-tracking was seen in both the Mandarin and Thai data (both tonal languages), while the English speakers’ responses tracked the tonal contours far less accurately, as expected. This was a fairly recent discovery in neurolinguistics: researchers had traditionally believed that linguistic discrimination of sounds only occurred after the signal reached the primary auditory cortices, and that neurons in the brainstem could not be conditioned by experience.

There are many more quirks to the story and many more discoveries to be made in terms of hemispheric lateralisation and specialisation, particularly with regard to how bilinguals and split-brain patients deal with tonal processing. For the remainder of this article, however, I would like to take a brief leap into the world of genetics.

Back in 2007, Dan Dediu and Robert Ladd at the University of Edinburgh released a very intriguing paper on a correlation between two genes and the presence of linguistic tone: populations carrying the ancestral (non-mutated) alleles were more likely to speak tonal languages. The two genes in question are ASPM and Microcephalin. Each has two alleles of interest, one “original/ancestral” and one “derived” (mutated). ASPM’s derived allele is approximately 5,800 years old, while Microcephalin’s is around 37,000 years old.

Here is where it gets interesting. The “original” (non-derived, i.e. older) versions of these genes are found disproportionately in areas where tonal languages are spoken, while the newer (“derived”) versions are disproportionately found in areas where non-tonal languages are spoken. While it is never a good idea to equate correlation with causation, the authors’ argument is an interesting one, and this line of research is hypothesis-generating rather than hypothesis-testing. Later research in 2012 also examined these genes in relation to lexical tone processing, and the results were not wholly inconsistent with the earlier findings.

Tonal languages are without a doubt the norm in sub-Saharan Africa (particularly among the Niger-Congo language family). We also know that Homo sapiens evolved in sub-Saharan Africa. And tonal languages are associated with the “original/ancestral” alleles of the genes mentioned in the last paragraph. Taken together, this leads to a very curious hypothesis: our Homo sapiens ancestors may have spoken tonal languages, with non-tonal languages being the newer variant.

Research conducted at the Max Planck Institutes for Psycholinguistics, Evolutionary Anthropology, and Mathematics in the Sciences in 2015 also showed that tonal languages are much more prevalent in regions of high humidity (yes, sub-Saharan Africa again) than in drier regions. None of this research is conclusive in and of itself, but it is suggestive and certainly intriguing. I hope it gives a flavour of the kind of research that goes on, and of the questions scientists are addressing when they bring disciplines such as neuroscience and genetics to bear on linguistic problems.

Get more interesting language and linguistics content in our magazine. Click here to subscribe: https://sillyli.ng/nJCGLE

Alex Murphy