Corpus analysis techniques

As I mentioned in a blog post earlier this year, one of my projects for 2016 is to develop my skill set in corpus analysis, with the aim of honing my translation skills, building terminology bases and identifying the grammatical characteristics of the language used in my specialist areas.

In this post I want to go into more detail about the different analyses that can be performed using corpus tools and what they can show us. The examples come from a corpus that I built for a recent translation assignment using the WebBootCat feature, which I described in a previous post.

Today I will introduce another corpus analysis tool, AntConc, developed by Laurence Anthony. It is open source and can be freely downloaded, along with other related tools.

Building the corpus

As I explained in my earlier post, I used the WebBootCat function to create this ad hoc corpus. To do this you need to access SketchEngine. This is the process I use:

  • Select seed words using terms/words that are used in the target subject area (for example, in this case: subsidies, FIT, premiums, installed, capacity, margin, power, etc.).
  • WebBootCat trawls the internet and produces a list of different URLs that match the search criteria.
  • Check the data that comes through and remove any sources that may not be reliable.

If you do not have a subscription to SketchEngine, you can create your own corpus using documents you have selected yourself. To use these in AntConc, they must all be in plain text (.txt) format encoded in UTF-8 (check out the AntFileConverter tool to convert them).
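If you would rather script the conversion yourself for plain-text files, here is a minimal Python sketch of the idea; the folder names and the assumed source encoding (cp1252) are my own examples, and unlike AntFileConverter it does not handle PDF or Word files:

```python
from pathlib import Path

def to_utf8_txt(src_dir, dst_dir, source_encoding="cp1252"):
    """Re-save plain-text files as UTF-8 .txt so AntConc can read them."""
    Path(dst_dir).mkdir(exist_ok=True)
    for path in Path(src_dir).glob("*.txt"):
        # errors="replace" keeps going if the encoding guess is wrong
        text = path.read_text(encoding=source_encoding, errors="replace")
        (Path(dst_dir) / path.name).write_text(text, encoding="utf-8")

# to_utf8_txt("raw_files", "corpus_utf8")
```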

Below are the basic types of analysis that you can perform using AntConc (and corpus tools in general). For more information on how to use these features in the AntConc tool, please refer to Laurence Anthony’s website, where there are a number of tutorials available.

Word lists

This function produces a list of all the words included in the corpus, ordered by frequency. While this can be useful in itself, it is often used as a basis for other analyses. You will find when you create word lists that prepositions and articles often come at the top of the list, before any nouns, adjectives or verbs.
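As a rough illustration of what happens under the hood, here is a minimal Python sketch that builds a frequency word list from a folder of UTF-8 .txt files (the folder name and the very crude tokenisation are my own assumptions, not AntConc's actual algorithm):

```python
import re
from collections import Counter
from pathlib import Path

def word_list(corpus_dir):
    """Count word frequencies across all .txt files in a corpus folder."""
    counts = Counter()
    for path in Path(corpus_dir).glob("*.txt"):
        text = path.read_text(encoding="utf-8").lower()
        # Crude tokenisation: unbroken runs of letters
        counts.update(re.findall(r"[^\W\d_]+", text))
    return counts

# The most frequent words -- typically articles and prepositions first
for word, freq in word_list("my_corpus").most_common(10):
    print(f"{freq:6d}  {word}")
```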

Keyword lists

Here you load a reference word list of your choice (in this case the British National Corpus word list). The function then creates a list of keywords that are comparatively more frequent in the corpus being analysed than in the reference corpus. This can also be useful if you want to compare the vocabulary used in two different genres, or in different registers within a genre.
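AntConc ranks keywords using proper keyness statistics such as log-likelihood; as a simplified sketch of the principle, the function below ranks words by how much more frequent they are per million words in the study corpus than in a reference corpus (the +1 smoothing is my own shortcut to avoid division by zero, not what the tool does):

```python
def keywords(study, reference, top=20):
    """Rank words by normalised frequency ratio against a reference corpus.

    `study` and `reference` are Counter objects of raw word frequencies,
    e.g. as produced by the word_list() sketch above.
    """
    study_total = sum(study.values())
    ref_total = sum(reference.values())

    def keyness(word):
        # Compare frequencies per million words in each corpus
        study_fpm = study[word] * 1_000_000 / study_total
        ref_fpm = (reference[word] + 1) * 1_000_000 / ref_total
        return study_fpm / ref_fpm

    return sorted(study, key=keyness, reverse=True)[:top]
```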

In my case, I created an ad hoc corpus from seed words, so the corpus is somewhat biased towards those words. As I was looking for the usage of these specific terms for the translation I was doing, this was not a problem, but it is worth being aware of if you are building a corpus for other research purposes.

As you can see, some of the seed words appear among the most comparatively frequent words, but there are also other words that are unusually frequent in the corpus. These can give us insight into the vocabulary used in a certain area and point to collocations and clusters worth looking at.

Collocations, clusters and N-grams

N-grams/Clusters

N-grams show the frequency of two-, three- or four-word clusters in a text. This can help to identify possible multiword expressions (MWEs), as well as common grammatical patterns. In translation, for example, if you are looking for a possible term in a target language but are not sure of the correct translation, this can be a good place to look. Unlike collocations, n-grams are shown without context, but with their frequency given as a number (see the second column below). If you are looking for suitable terms, once you identify a candidate you may then want to use the collocation function to look at it in context.
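The underlying computation is simple; here is a minimal Python sketch of n-gram frequency counting (the sample sentence is invented):

```python
from collections import Counter

def ngrams(tokens, n):
    """Yield every n-word cluster in a list of tokens."""
    for i in range(len(tokens) - n + 1):
        yield " ".join(tokens[i:i + n])

tokens = "the capacity margin is expressed as a percentage of capacity".split()
trigram_freq = Counter(ngrams(tokens, 3))
print(trigram_freq.most_common(3))  # three-word clusters, most frequent first
```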

Collocations

This feature looks at the usage of a specific word in context. It can be used to identify multiword elements as well as grammatical collocations, such as verb-noun, adjective-noun or verb-preposition collocations.
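AntConc can also rank collocates with statistical measures; as a bare-bones illustration of the concept, this sketch simply counts the words occurring within a window around each hit of the node word (the three-places-each-side window mirrors the search described in the example below):

```python
from collections import Counter

def collocates(tokens, node, left=3, right=3):
    """Count words appearing within a window around each occurrence of `node`."""
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == node:
            window = tokens[max(0, i - left):i] + tokens[i + 1:i + 1 + right]
            counts.update(window)
    return counts

# Left-only search for noun collocations, e.g. "capacity margin":
# collocates(corpus_tokens, "margin", left=3, right=0)
```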

Example of how these analyses work

For the purposes of this post I am going to look at the use of the word ‘margin’. When you search for collocations, you can search aligning to the right or the left, up to three places each side. With a noun such as ‘margin’, if you are looking for common noun collocations, it is likely a good idea to search left – if you want to see verb-use patterns, then search right.

Margin – 481 hits

  • Common collocations

Capacity margin

Definition: The capacity margin is the difference between capacity and peak load, expressed as a percentage of capacity (rather than peak load).

This was a term that formed part of the seed words for compiling the glossary, but its frequency and the spread of its use confirmed its viability. A number of variations of this term came up, as well as other terms, such as:

Reserve margin

Definition: The reserve margin is the difference between generating capacity and peak load, expressed as a percentage of peak load.

As you can see, the collocation tool not only lets you identify terms and see the context in which they are used, but can also bring other terms to light and show whether they are used in specific companies or specific contexts. I had not used the term ‘reserve margin’ in my seed words, as it had not come up in my translation. However, it did come up in the corpus. When I first saw this term I was unsure whether it was a synonym of ‘capacity margin’, given the contexts in which I found both terms used. Further research showed that they are two ways of referring to the same thing, expressed using different criteria (as can be seen in the definitions above).
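To make the difference between the two definitions concrete, here is a quick worked example with invented figures:

```python
capacity, peak_load = 60.0, 50.0  # GW, invented figures

# Same 10 GW gap, but expressed against different denominators
capacity_margin = (capacity - peak_load) / capacity * 100   # % of capacity
reserve_margin = (capacity - peak_load) / peak_load * 100   # % of peak load

print(f"Capacity margin: {capacity_margin:.1f}%")  # 16.7%
print(f"Reserve margin:  {reserve_margin:.1f}%")   # 20.0%
```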

Another use of the collocation tool is to see which verbs are commonly used with the terms you are searching for – as you can see in the screenshot, the verbs ‘provide’, ‘meet’ and ‘retain’ seem to be common collocates of the term ‘capacity margin’. This can be useful when translating, as the verb used in the source language does not always correspond directly to the one used in the target language. The tool can also be used to see the tenses typically used in certain contexts, another area in which there are often differences between source and target texts.

Concordance plotter

Concordance plotters show where in the corpus terms appear. I decided to contrast the use of ‘reserve margin’ with ‘capacity margin’. This works best if each source is a separate file, as you can then see in which files the term appears; even so, it will give you an idea of whether a term is specific to one file or used more generally (a code sketch of the idea follows the two plots below).

“Reserve margin”

“Capacity margin”
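The idea behind these plots can be sketched in a few lines of Python: for each file, mark the relative position of every hit along a bar (the folder name and the crude text-based rendering are my own inventions):

```python
import re
from pathlib import Path

def dispersion(corpus_dir, term, width=50):
    """Print a crude dispersion bar per file: '|' marks where the term occurs."""
    for path in sorted(Path(corpus_dir).glob("*.txt")):
        text = path.read_text(encoding="utf-8").lower()
        bar = [" "] * width
        for match in re.finditer(re.escape(term.lower()), text):
            # Position of the hit as a fraction of the file's length
            bar[int(match.start() / len(text) * width)] = "|"
        print(f"{path.name:30s} [{''.join(bar)}]")

# dispersion("my_corpus", "reserve margin")
```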

I hope this brief introduction to different analytical features will have given you some insight into the different ways in which corpus tools can help you in your translations and other language work.

Are you paying attention?

The world today is full of distractions, constantly tempting us to flit from one activity to another without a second’s thought. How does this affect our learning, its effectiveness and our productivity?

Claire broached the subject in her blog post ‘The Distraction Trap’ last year, with some handy tips for reducing distractions in our work. In this post I want to focus more specifically on learning, sharing my experiences from the ‘Learning how to learn’ course I took in January.

I started the course as I felt that I had become increasingly scatty and forgetful as 2015 drew to a close, so this year I decided to make a conscious effort to reduce distractions and improve my learning.

The concept of ‘Deep Work’

As part of the background reading for the course, I read ‘Deep Work’ by Cal Newport, which looks at the value of uninterrupted, focused concentration on our work and study.

A state of constant distraction, in which multiple things are going on in your mind at once, puts a huge strain on your working memory. This means that you will be unable to retain information effectively or concentrate properly on one task in order to innovate or solve problems. As regards memory, this implies that you may use the information once but will not retain it for later use. You may say that this doesn’t matter because you have Google, but I believe that it hurts your productivity and means that you are likely to advance more slowly than colleagues who are capable of working deeply (applying focused concentration to single tasks or problems). Being able to concentrate, fully explore ideas, and learn and apply new knowledge (relatively!) quickly through effective working is desirable in all areas of life.

How does this relate to translators and interpreters?

I believe this concept is key to both our work and our learning. Translation and interpreting are professions in which you need to be able to grasp new concepts quickly while honing your language skills. Learning how to learn, and carving out periods of undistracted focus in your day, will help you to improve your translation speed (through both lack of distraction and heightened expertise), improve the accuracy and fluidity of your translations and/or interpretations, and gain specialist knowledge more efficiently.

Are you really learning?

I had increasingly been finding myself in situations at work where I knew I had come across a term or concept before, but was unable to recall its translation or meaning. I recognise that at times this is inevitable, but it should not be the norm. Here are some tips that may help you to recall past information better.

Just reading and rereading doesn’t work

As Claire mentioned in her article – are you actually reading or are you scanning? Focused reading is the first step to remembering information.

Recall is in fact one of the simplest ways to remember information properly – just compare what happens when you tell someone about what you have learned with what happens when you don’t. The former stays with you much longer. This works because it strengthens the links used to retrieve the memory, reinforcing the neural pathway to it.

Spaced repetition (reviewing new information at spaced intervals over time) is another example which works on the same principle.

Anything that requires you to manipulate the information will help you to remember it, such as answering questions on the subject or adapting the information to something practical (a blog post, for instance). These sorts of activities help your brain to analyse the information, which promotes chunking: the collation of various elements of information into one easy-to-handle piece.

Why is chunking important?

  • Means you have understood
  • Takes less effort for the brain to use
  • Can help to link information from different areas

NOTE: the more ‘real’ learning you do, the quicker you will understand texts and be able to link previous work to what you are doing now. This highlights the importance of specialising.

Do you suffer from Einstellung?

The brain applies two modes when thinking: focused and diffuse, which it switches between throughout the day. Focused thinking is when you are concentrating on a specific problem and tackling it directly. Diffuse thinking is when your mind wanders, such as when you go for a walk, or look out of a train window. Both of these modes are important for advancing your learning and innovation.

Einstellung describes what happens when our brain gets stuck in a loop that does not retrieve the correct answer, while our focused mind does not allow us to conjure up a different solution. The course taught us about the importance of intertwining the two modes of thinking.

Focused mode is important for a specific task with specific goals, but diffuse mode allows you to open your mind up to other possibilities. Also, in diffuse mode your brain continues to process ideas in the background while your mind wanders onto other topics. This is why if you skip an exam question you can often tackle it better when you come back to it later, or that word you were searching for so desperately comes to you in the middle of the night.

Beat procrastination!

I will only mention this briefly, as Claire wrote an interesting article about time management last year for those interested in procrastination-beating techniques. I will mention, however, that the course emphasised the importance not only of breaking daunting tasks down into smaller chunks but also of focusing on the process, rather than the product, of the task. This means focusing on doing a little bit frequently (‘I will do half an hour on …’) rather than ‘I will finish the blog post today’. This way you reduce the amount of willpower required to embark on the task, without the added stress of feeling that you have to complete it right away for it to be worthwhile.

So, are you concentrating?

To conclude, we live in an attention-deprived era, one that often promotes multi-tasking as a virtue. However, multi-tasking severely affects productivity and your ability to learn. Since completing the course I have applied many of the techniques mentioned by Claire, and I already feel much more focused and productive. Just being aware of your triggers can be a great start to a new, focused you.

What do you think? Do you think multi-tasking is detrimental to your work-life? I would love to hear your thoughts on how you learn best and any tips you may have.

Using corpora in translation

by Sandra Young

With the beginning of a new year come new ideas, challenges and resolutions. For the first blog of 2016 I wanted to invite you to explore what I consider to be an invaluable tool for our work as translators, particularly when working in technical fields with very specific terminology. One of my professional resolutions for the year is to succeed in fully harnessing the benefits of corpora for my work.

Corpus: “A collection of written or spoken material in machine-readable form, assembled for the purpose of linguistic research.” (Oxford English Dictionary)

I first came across corpora in a professional sense when working on a dictionary project with Oxford University Press (OUP). The examples for each sense (the different meanings of a single word in specific contexts) in the dictionary entries (the collection of these senses under one headword) had been extracted from a European and Brazilian Portuguese corpus, purpose-built by the OUP. To search this corpus, the translation team had access to an online corpus building and mining tool called Sketch Engine. We used this tool to find entry words and phrases in context, search for additional or more appropriate examples for senses of words and suggest further meanings, all of which was essential to producing appropriate translations. Words without context have no meaning at all; any translation choice made without context would be arbitrary.

On the target language side, we could also use the British National Corpus (BNC) to search for examples of our suggested translations in context and to cross-check against contexts and usage in the original language, in this case Portuguese. This made us confident that our choice of translation was fit for purpose.

Throughout the two-year dictionary project I found working with corpora not only useful, but fascinating. With very little effort you can produce lists of in-context words or collocations that appear in your collection of texts (around 100 million words in the case of the BNC), facilitating the quick analysis of information. For the dictionary project I used corpora to check the usage of specific words in context so that I could make informed decisions on their correct translation, their most common grammatical forms and their common collocations; however, corpora can be used for many other purposes too.

When the dictionary project drew to a close, I continued to dabble with corpora in my work, but for some time I failed to follow a clear path. I started a MOOC on corpus linguistics but, as with many free courses, I found it difficult to juggle work and study, and work won out. This course, run by Lancaster University, is of particular use to researchers, so there are elements that may not be directly applicable to our day-to-day work as translators.

However, last year at the MedTranslate Conference in Freiburg, I attended Anne Murray’s talk on corpus building and mining, in which she took us through the steps of building our own corpora within Sketch Engine. Sketch Engine is a subscription-based tool costing £78/year, with a discount for MET members. It allows you to search existing official corpora, from Arabic to Yoruba, as well as to build your own corpora up to a total capacity of one million words.

There are two main ways to build your own corpora within Sketch Engine. The first is WebBootCat, in which you input specific search terms that the program uses to dredge the internet for matching websites and files. The other option is to upload specific documents you have found (and vetted for reliability) and compile a corpus from them. The table below outlines the main tendencies of each.

WebBootCat:

  • Quick to build
  • Less reliable content
  • Reliant on the use of appropriate and thorough search criteria

File-based corpus:

  • Slow to build
  • More reliable content
  • Based on the assumption that with hand-picked documents you will have had more time to refine the search criteria and collate a sound base of information

As WebBootCat automatically dredges the internet, you gain quick access to a lot of information but have less control over the content, so it can be assumed to be less reliable on the whole, as it is more difficult to check the quality of the information. You can vet the websites included in the final corpus to exclude any outliers, but this will not ensure the same quality as hand-picked material.

If you work from a file-based corpus, it will be considerably more time-consuming as you will have to search for and check each and every document for reliability and appropriateness before compiling (e.g. native author, correct spelling variation if required, correct subject matter and register). However, once you have built the corpus, you can be confident that the information within it is reliable.

That said, with Sketch Engine you can always go back to the original text of each entry, which helps you to judge the reliability of the results produced, whether you are using WebBootCat or your own file-based corpus. As you can see, both styles offer viable options for different situations; often we do not have the time to produce a specific, well-researched corpus for every single job.

How do I use corpora now?

I usually use corpora to analyse the usage of terms in the target language and to find correct translations of unfamiliar terms. Corpora are also very useful for familiarising yourself with a specific style of writing, or with common collocations in a specific subject area. In case you missed these on our Twitter feed, here are some other blog posts on corpora that you may find useful:

https://karenrueckert.wordpress.com/2013/11/12/part-5-corpora-and-parallel-texts/

http://jaltranslation.com/2014/04/21/using-corpora-in-your-translation-work/

I often use WebBootCat for efficiency, but recently I had 35,000 words of pharmaceutical regulatory reports to translate. It was a sizeable job, so I decided to compile my own file-based corpus on the subject. Given the subject matter, it was relatively easy to find official, reliable documents, as the FDA publishes a great deal of food and drug product guidance, compliance and regulatory information. I selected documents and compiled a corpus in Sketch Engine.

As a result of the corpus, I was confident in my choice of vocabulary, as I could see clear evidence of how terminology and collocations were used in verifiable English texts, and I could see how sentences were structured around these terms so as to mimic the style of the official texts. Also, if the client were ever to query my use of certain terms, I would be able to use results from the corpus to provide evidence supporting my choices.

There are many other corpus building and analysis tools out there. I use Sketch Engine for its ease of use (you can upload documents in a variety of formats, the interface is very user-friendly, and I already knew how to use the tool), but you do have to pay for it. In a later post I will go into detail about AntConc, Laurence Anthony’s free corpus tool. This is an incredibly powerful and useful tool which I aim to master this year as I further develop my corpus techniques. I attended his workshop at the MET Conference in Coimbra at the end of last year, and in addition to the corpus analysis tool there are a number of other interesting tools he has developed that may be of use to translators. For those of you who are interested, the FutureLearn corpus linguistics course uses AntConc, so you could learn to use the tool that way.

Do you use corpora? If so, what do you use them for? What are the advantages and disadvantages of corpora?

Thanks for reading and happy 2016! I wish you all a great year.

Speech, Interpreting and the Brain

by Sandra Young

On Friday and Saturday I attended the ITI Medical and Pharmaceutical Network’s most recent workshop, on the neurological processes involved in speech. Over the two days we heard from four researchers: Professor Richard Wise, Dr Anne Symonds and Professor Paul Matthews from Imperial College London, and Professor Sophie Scott from University College London.

Today I want to share with you some of what I learnt from these talks, as well as thinking about these processes in the context of simultaneous interpreting.

How did we evolve speech?

Before looking at anything else, it is helpful to understand why we are physically capable of speaking. If we had not evolved in the way that we did, we would not have the physical components necessary to make speech happen. Richard Wise brought the example of the Turkana boy to our attention. The boy lived approximately 1.5 million years ago, but his skeleton was found nearly intact. Using clues from the skeleton, experts concluded that he could not have been capable of speech.

The reason for this is that he did not have an expanded thoracic spinal canal (see the image below). We need this so that complex neural structures can run down our vertebral column, allowing fine control of our intercostal muscles, which run along our ribcage. This control lets us regulate airflow in a way that permits speech. Otherwise we would only be able to say one…word…at…a…time.

[Image: the thoracic spinal canal]

Fine control of our intercostal muscles is central to our ability to speak, and it would not be possible if we were not bipedal. Standing up straight released our intercostal muscles from the supporting functions required for four-legged movement, allowing them to develop this fine control and the increased innervation that lets us slowly release the air from our lungs and speak fluidly. Our intercostal muscles have the same level of fine motor control as our hands, so it is some pretty impressive stuff.

Add to this the use of our larynx (voice box), vocal cords and the motor skills of the tongue, and you have speech! An interesting article about the evolution of speech can be found here. Also check out these links if you are interested in seeing our larynx and tongue in action.

Speech perception, production and semantics

Now that we have looked, albeit briefly, at how we evolved the power of speech, we can look at what happens in the brain when we are listening to and producing speech. Many discoveries regarding language localisation – sites in the brain directly related to speech perception and production – were made in the 1860s and 70s. It was during this period that the Wernicke-Broca pathway was discovered.

[Image: Broca’s and Wernicke’s areas]

Wernicke’s area is a part of the brain directly related to speech perception, whereas Broca’s area is related to speech production. This McGill page goes into more detail about these two areas and how they were discovered. Lichtheim later proposed the theory of a concept area, in which semantic analysis would take place, such that damage to the “connections” between this area and Broca’s or Wernicke’s areas would lead to different types of aphasia.

From here we start to think about the laterality of language – which side of the brain is involved in which activity. It would appear that:

  • The left hemisphere is generally used for semantics – understanding what is being said
  • The right hemisphere is more involved in processing other information relating to that speech – pitch, mood, emotion, etc.

Therefore, if someone flattens their speech, it is usually the right hemisphere that reacts to the change. This laterality is not found in 100% of people, but in around 90% of right-handed people and around 70% of left-handed people.

[Image: the anterior temporal lobe semantic hub]

The semantics system is found in the anterior temporal lobe regions (highlighted in pink above) and is strongly left-lateralised in general (it nearly always shows strong activation in the left, rather than the right, hemisphere). What I found particularly interesting is that when you are listening to someone else, both the left (semantics) and right (other information) areas are activated, but when you speak these areas are suppressed, or switched off. The implication is that you do not need to process what you are saying – you have planned it before you say it. However, I believe that in the context of interpreting these activation sites may alter.

The Brain and Interpreting

Obviously I don’t have any of the answers, but the talks over the weekend really made me think about some of the issues and peculiarities of how brain activity might differ when performing simultaneous interpreting.

There are just a couple of things I would like to highlight.

Laterality

I would be interested to see if left- and right-handedness affect brain activation during simultaneous interpreting, and also if this is linked to ear preference for headphone use.

Also, it would be interesting to look at the differences in brain activation during interpretation:

  1. when interpreting to the interpreter’s A language in comparison to the B language, to see if there are different activation levels for semantics, or in the motor areas of the brain, or
  2. the differences between monolingual brains and bilingual brains and those of professional interpreters.

Semantics system

Learning that the semantics system is usually suppressed when we speak was fascinating. When performing simultaneous interpreting, we are listening and speaking at the same time. What’s more, we are listening to the original, producing the translation and monitoring our production of the translation all at once.

Therefore it would seem that simultaneous interpreters’ brains may be able to cancel the suppression of parts of the brain, or perhaps even activate different parts of the brain during this task.

I found a study by Green et al. from 1990 looking at the lateralisation differences between monolinguals, (matched) bilingual controls and professional interpreters. They gave the groups shadowing, paraphrasing (monolingual) and interpreting (bilingual and professional interpreter) tasks, using finger tapping as a measure of interference (compared with a baseline performing no verbal task).

If you want to read more about the study, please follow this link. Here were the general conclusions:

  • In monolinguals the LH interference was greatest.
  • Monolinguals were LH lateralised for paraphrasing, whereas both bilinguals and interpreters were bilateral for interpreting and LH for shadowing.
  • There was an absence of significant differences between bilinguals and professional interpreters. This suggests that the brain activity observed is associated with the task of interpretation itself, rather than arising from experience in the practice of interpretation.
  • Tapping disruption was also much greater in paraphrasing/interpreting than in shadowing, as a result of the higher level of processing involved – phonemic vs semantic.

I would love to hear your thoughts on this subject, so please comment below. Throughout the week I will try to find further studies to share to try to build a more complete picture about what is going on in our brains when we perform the task of interpreting.

On another note, Professor Sophie Scott said she would be fascinated to do a study on simultaneous interpreters, so if anyone is interested, maybe you could contribute to research in the field.

How can style sheets help you to improve your business?

I attended a webinar by Karen Tkaczyk entitled ‘Take charge: develop your technical style set’, hosted by Alexandria Library in May.

I wanted to write about what I learned in the webinar and I also feel it fits in quite well with Claire’s blog post last week on time management.

The webinar focused on developing personal style sheets for your clients in a technical setting, and considering the importance of this in moving forward in your career. This was particularly relevant for me as I work predominantly in the technical sector, but I also think that this tool can be applied to any area of translation.

Why are personal style sheets important?

I’m sure that all of you reading refer to standard style guides in your work at times – the Chicago Manual of Style, the Economist Style Guide, etc. As language professionals, we can use these to guide us when we have doubts, to provide us with solid arguments if our choices are questioned or if we question the choices of others. If you want to read more about the effective use of style guides in our work, take a look at Nikki Graham’s blog post on the subject.

Personal style sheets take it one step further. By developing these we can then have the choices and preferences of our repeat clients at our fingertips. This not only helps to ensure consistency, but also speeds up our work and makes us more productive. This is of the utmost importance in areas such as technical translation, where there is an abundance of abbreviations, acronyms, terminology or spelling preferences. The inconsistencies I often find in the technical texts I translate make this all the more relevant.

The first time I used a personal style sheet/checklist was when I was working on a Portuguese-English dictionary project for Oxford University Press. A full style guide was available, but it was very long, making it difficult to look up specific queries quickly when finishing a batch of entries to deliver. I therefore pulled out the aspects that were most relevant to me and collated them in a very simple checklist.

Dictionary translation is different from other types of translation as you are working with very short lengths of text, with a particular focus on many different linguistic aspects of words, such as phonetics, register, dialect, etc. However, the reasons behind using a checklist or style sheet are the same – to remind you of anomalies to look for, to ensure consistency, and to speed up the whole process of translation and editing.

Since working on the dictionary project, I have worked with a number of other style guides (both client and professional ones) to aid me in my work. In the past I have generally made checklists highlighting specific aspects for different clients. However, the template provided by Karen after the webinar was in table form, which I think will be more effective due to the visual way it spreads out the information.

Karen said something that really struck a chord with me during the webinar: technical writing is often considered to be badly written. However, our job as professional linguists is to create a report, article or information leaflet that is concise, accurate and well written. Style guides, and moreover personal style sheets that we have developed for our clients, can help us to achieve this more efficiently.

What can you include in a style set?

Anything that changes from client to client, any specific client requirement, or any aspect of the language for which consistency is paramount to a coherent text. Here are some examples, followed by a sample style sheet entry:

  • Use of decimal points
  • Units of measurement
  • Formatting – bold, underlined, font size, etc.
  • Client-specific terminology preferences
  • Inconsistent use of vocabulary
  • Inconsistent use of spelling (between US and UK English)
  • Numbers (numerals or letters – a mix is often used without following normal style rules).
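By way of illustration, one client’s entry on a style sheet might look something like this (the client and all of the preferences are invented):

Client: Acme Pharma (US English)

  • Decimals: point, not comma (0.5 mg, not 0,5 mg)
  • Units: space between numeral and unit (10 mL)
  • Terminology: ‘package leaflet’, not ‘patient information leaflet’
  • Numbers: numerals for all measurements; words for one to nine elsewhere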

Are these similar in your area of translation? Would style sheets be useful in keeping track of these and correcting them when necessary?

How does having a style guide help you to eliminate inconsistencies from your translations?

I worked on the Oxford Dictionary project consistently for two years, yet I would still forget aspects of the style guide as it was so extensive. Having a checklist to highlight particular aspects that often slipped through the net was essential for giving my brain a nudge in the right direction, and for focusing on the specific issues to look out for when checking batches for delivery.

The same applies to style sheets. Currently I work with a mix of clients: a couple of main clients with whom I work most weeks, others with whom I work most months, and others with whom I work on the odd project. A style sheet ensures that you don’t forget the issues specific to each client, and that you continue to provide a consistent service. Rather than wasting time wading through paperwork trying to find the specific requirements for each client, you will have all the details on your style sheet. You’ll also have your extra notes on the terminology choices you have made (when not otherwise specified) or that you have decided on using a suitable general style guide of your choice.

What does this mean for the client?

By developing a style sheet, you can provide your clients with an improved, sleeker service. Furthermore, taking the time to attend to details in order to ensure consistency throughout the text will show your client that you care about the quality of the text. It is worth highlighting your efforts to new clients, firstly to make them aware of the consistency measures you are taking with their texts, and secondly so you can collate a list of their preferences.

Do you think this technique works in your area of translation? What are the similarities/differences in the issues that come up in comparison with the technical sector? Please comment below!

ITI Medical and Pharmaceutical Network workshop on Diabetes

By Sandra Young

This May I attended my second ITI Mednet workshop, this time on the subject of diabetes. For the morning sessions, the group had invited an expert in the field, Dr Shanti Vijaraghavan, a consultant physician specialising in the area. The first half of the day consisted of talks in which she outlined the management and complications of the disease, highlighting the differences between type 1 and type 2 diabetes.

The talks allowed me to consolidate my knowledge on the subject of diabetes and its complications, assimilate new terminology and discuss the appropriateness of certain terms. Here are some examples of what I took away with me:

Diabetes and its complications

  • Good blood glucose control is essential to the health of a person with diabetes and to minimising complications. However, a person living with diabetes will develop complications such as neuropathies and retinopathies after living with the disease for a number of years, despite good blood glucose control.
  • Hypoglycaemic awareness fades as a result of damage to the sympathetic nervous system, meaning that symptoms (the warning signs of hypoglycaemia) disappear with time.

Terminology

  • Charcot joint – complete lack of sensation in the joint, which leads people to injure themselves without realising. This eventually results in a disfigured joint.
  • Claudication – pain caused by too little blood flow, usually brought on by exercise.
  • Hyperosmolar Hyperglycaemic State (HHS) – incredibly high blood sugar, which results in “sludgy” blood.
  • Secretagogue – a substance that stimulates secretion, also a term used for insulin-releasing pills.

Appropriateness of terms – what do the experts really say?

  • Brittle diabetes – used to describe a type of severe diabetes characterised by blood sugar levels that are difficult to control.
  • Fundus – the correct term for the “back of the eye” exam.

A morning of absorbing information was perfectly paired with an afternoon of working in language pair groups on a diabetes-related text. In my opinion, this combination is central to the success of the Mednet workshops and constitutes a fertile ground for learning.

The text dealt with the complications of diabetes and their association with oxidative stress. It was a very interesting text to work on in a group of translators with varying backgrounds and experience. Our group, the Spanish to English group, was made up of translators from scientific, pure-linguist and editing backgrounds, as well as native Spanish speakers.

The input from those with a scientific background was invaluable, as they could use their understanding of the subject to decipher the more ambiguous sentences. The text used acronyms and abbreviations in a haphazard and non-standard way, in most cases failing to give a definition at first mention. One example was the use of the English acronyms ROS and RNS for reactive oxygen species and reactive nitrogen species, while the Spanish acronym (ON) was used for nitric oxide.

There was also a spelling mistake in which “citoaldehídos” appeared instead of “cetoaldehídos”. With an understanding of the context it was clear that it referred to something relating to ketones, not cells, but to the untrained eye this could cause a great deal of confusion. This highlights the importance of having a good understanding of the subject you are translating.

As regards editing, I learned that journals do not, as a general rule, like the use of bulleted lists. There was a section at the beginning of the article containing a problematic list of pairs of opposing functions. I had considered making a bulleted list of these opposing pairs; however, the advice was that a good solution might be to keep the list in the main body of the text but separate the pairs with semi-colons.

Being fairly new to medical translation, I find the group translations at these workshops particularly useful, as I get the opportunity to discuss the problematic issues of a text with more experienced medical translators, hear their perspectives and learn from them. The group session this time helped me not only to better understand the concepts within the text, but also to learn more about editing and terminology in medical translation, all of which I can apply to my future work.

I have listed some resources for medical translations that were recommended during the group session: