Speech, Interpreting and the Brain

by Sandra Young

On Friday and Saturday I attended the ITI Medical and Pharmaceutical Network’s most recent workshop on the neurological processes involved in speech. Over the two days we heard from four researchers: Professor Richard Wise, Dr Anne Symonds and Professor Paul Matthews from Imperial College London, and Professor Sophie Scott from University College London.

Today I want to share with you some of what I learnt from these talks, and to think about these processes in the context of simultaneous interpreting.

How did we evolve speech?

Before looking at anything else, it is helpful to understand why we are physically capable of speaking. If we hadn’t evolved in the way that we did, we wouldn’t have the physical components necessary to make speech happen. Richard Wise brought the example of the Turkana boy to our attention: a skeleton from approximately 1.5 million years ago that was found nearly intact. Using clues from his skeleton, experts concluded that he could not have been capable of speech.

The reason for this is that he doesn’t have an expanded thoracic spinal canal (see the image below). We need this so that enough nerve fibres can travel down the spinal cord to give fine control of our intercostal muscles, which run between the ribs. This allows us to control airflow precisely enough to permit speech. Otherwise we would only be able to say one…word…at…a…time.

[Image: thoracic spinal canal]

Fine control of our intercostal muscles is central to our ability to speak, and it would not be possible if we were not bipedal. Standing up straight released the intercostal muscles from the supporting role they play during four-legged movement, freeing them to develop this fine control. Without these two features – bipedalism and the expanded thoracic canal – we could not have freed these muscles for use in speech, or developed the increased innervation that allows us to control the flow of air, slowly releasing it from our lungs so that we can speak fluidly. Our intercostal muscles have the same level of fine motor control as our hands, which is pretty impressive stuff.

Add to this the larynx (voice box), the vocal cords and the motor skills of the tongue, and you have speech! An interesting article about the evolution of speech can be found here. Also check out these links if you are interested in seeing our larynx and tongue in action.


Speech perception, production and semantics

Now that we have looked, albeit briefly, at how we evolved the power of speech, we can take a look at what happens in our brain when we are listening to and producing speech. Many discoveries regarding language localisation – sites in the brain directly related to speech perception and production – were made in the 1860s and 70s. It was during this period that the Wernicke-Broca pathway was discovered.

[Image: Broca’s and Wernicke’s areas]

Wernicke’s area is a part of the brain directly related to speech perception, whereas Broca’s area is related to speech production. This McGill page goes into more detail about these two areas and how they were discovered. Lichtheim later proposed the theory of a concept area in which semantic analysis would take place, so that damage to the “connections” between this area and Broca’s or Wernicke’s area would lead to different types of aphasia.

From here we start to think about the laterality of language – which side of the brain is involved in which activity. It would appear that:

  • The left hemisphere is generally used for semantics – understanding what is being said
  • The right hemisphere is more involved in processing other information relating to that speech – pitch, mood, emotion, etc.

Therefore, if someone flattens their speech, it is usually the right hemisphere that reacts to this change. This laterality is not found in 100% of people, but in around 90% of right-handed people and around 70% of left-handed people.

[Image: anterior temporal lobe (ATL) hub]

The semantics system is found in the anterior temporal lobe regions (highlighted in pink above) and is generally strongly left-lateralised (it nearly always shows much stronger activation in the left hemisphere than in the right). What I found particularly interesting is that when you are listening to someone else, both the left (semantics) and right (other information) areas are activated, but when you speak these areas are suppressed, or switched off. The implication is that you don’t need to process what you are saying – you have already planned it before you say it. However, I believe that in the context of interpreting these activation patterns may be different.

The Brain and Interpreting

Obviously I don’t have any of the answers, but the talks over the weekend really made me think about how brain activity might differ when performing simultaneous interpreting.

There are just a couple of things I would like to highlight.

Laterality

I would be interested to see if left- and right-handedness affect brain activation during simultaneous interpreting, and also if this is linked to ear preference for headphone use.

Also, it would be interesting to look at differences in brain activation:

  1. when interpreting into the interpreter’s A language compared with the B language, to see whether there are different activation levels for semantics or in the motor areas of the brain, and
  2. between monolingual brains, bilingual brains and those of professional interpreters.

Semantics system

Learning that the semantics system is usually suppressed when we speak was fascinating. When performing simultaneous interpreting, we are listening and speaking at the same time. What’s more, we are listening to the original, producing the translation and monitoring our own production of the translation.

Therefore it would seem that simultaneous interpreters’ brains may be able to override this suppression, or perhaps even recruit different parts of the brain during this task.

I found a study by Green et al. from 1990 looking at lateralisation differences between monolinguals, (matched) bilingual controls and professional interpreters. The groups were given shadowing, paraphrasing (monolinguals) and interpreting (bilinguals and professional interpreters) tasks, with finger tapping used as a measure of interference (compared against a baseline in which no verbal task was performed).
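
As a purely illustrative aside – this is my own sketch, not the analysis used in the study – the logic of the dual-task tapping measure can be expressed in a few lines of code: each hand taps at baseline and again while performing the verbal task, and the hand whose tapping rate drops more points to greater involvement of the opposite hemisphere. All names and figures below are hypothetical.

```python
# Hypothetical sketch of a dual-task finger-tapping interference index.
# Tapping is measured for each hand at baseline (no verbal task) and again
# while shadowing, paraphrasing or interpreting. The hand controlled by the
# more involved hemisphere should show the larger drop in tapping rate.
# All numbers here are invented for illustration.

def interference(baseline_taps: int, dual_task_taps: int) -> float:
    """Percentage drop in tapping caused by the concurrent verbal task."""
    return 100.0 * (baseline_taps - dual_task_taps) / baseline_taps

# Invented tap counts per 30-second trial.
right_hand = {"baseline": 150, "interpreting": 110}  # right hand -> left hemisphere
left_hand = {"baseline": 148, "interpreting": 130}   # left hand  -> right hemisphere

lh_interference = interference(right_hand["baseline"], right_hand["interpreting"])
rh_interference = interference(left_hand["baseline"], left_hand["interpreting"])

print(f"LH interference (right hand): {lh_interference:.1f}%")
print(f"RH interference (left hand):  {rh_interference:.1f}%")

# A markedly larger LH figure would point to left-lateralised processing for
# the task; similar figures for both hands would suggest bilateral involvement.
```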

If you want to read more about the study, please follow this link. Here were the general conclusions:

  • In monolinguals, left-hemisphere (LH) interference was greatest.
  • Monolinguals were LH-lateralised for paraphrasing, whereas both bilinguals and interpreters were bilateral for interpreting and LH-lateralised for shadowing.
  • There were no significant differences between bilinguals and professional interpreters, which suggests that this pattern of brain activity is associated with the task of interpreting itself rather than with experience in the practice of interpreting.
  • Tapping disruption was also much greater during paraphrasing/interpreting than during shadowing, reflecting the higher level of processing involved – semantic rather than merely phonemic.

I would love to hear your thoughts on this subject, so please comment below. Throughout the week I will try to find further studies to share, to build a more complete picture of what is going on in our brains when we interpret.

On another note, Professor Sophie Scott said she would be fascinated to do a study on simultaneous interpreters, so if anyone is interested, maybe you could contribute to research in the field.
