Intelligibility does not depend on having a standard accent. In this presentation, I will suggest that it is possible to teach pronunciation in a way which is more flexible and accepting of different accents. The presentation will offer practical teaching ideas (including ‘Pair Squares’, which can be downloaded here), while at the same time introducing four principles for accent-friendly pronunciation teaching:
1 ‘Effective’ beats ‘correct’
2 If it isn’t broken, don’t fix it
3 Make the difference you need
4 Speak locally, hear globally
I have made 50 of these Pair Squares (minimal pairs, squared) to accompany The Minimal Pairs Collection, covering different phoneme contrasts in English (download the PDFs in two files below). One person says one of the phrases, and the others have to identify which one they think they heard. Some of the contrasts are really important for learners to master, both receptively and productively, in order to understand and be understood. Others show accent alternatives, both of which are equally valid; these are more for building learner awareness of different varieties of English (see here for details of a webinar connected to this resource).
The images in this collection of Pair Squares have been generated by AI. Little by little, I’m becoming aware of the environmental impact of this kind of data-heavy technology use. In order for these pictures to perhaps justify the ‘cost’ of their generation, I hope their benefit (if any) will be spread as widely as possible. I would therefore like to make them freely available to as many ELT teachers as can be reached. If you’re interested, please download them from here (and if you like them, spread the word!).
For many teachers, the very heart of pronunciation is the set of individual sounds in the target language. Yes, there is stress, intonation and connected speech too, but sounds feel more basic. However, they are also frustratingly difficult to deal with in class, and hard to integrate with other topics in the lesson. They can also leave the teacher feeling disempowered, especially teachers whose own accent is not like the standard accents often presented in textbooks. It’s easy to see why this aspect of pronunciation is often neglected in the language class. In what follows, I would like to outline in a bit more detail three kinds of problems teachers have with individual phonemes, and then go on to suggest some practical solutions to these problems.
PROBLEMS
1 Accent issues
Accent issues play a part in discouraging teachers from dealing with individual sounds in class. Sometimes, pronunciation sections in books seem to assume all learners need to master RP (Received Pronunciation: a standard British accent) or GA (General American). This is particularly noticeable with vowel sounds. For example, your textbook might present the ‘correct’ pronunciation of bath as having the long A sound (like art), implying that the short A sound (like cat) is wrong for this word. If the goal of the class is intelligibility, then this kind of accent bias is totally unnecessary – the short A sound would be perfectly intelligible. Perceived accent bias in pronunciation materials can also be very alienating for teachers whose own accent is non-standard.
2 Mixed needs
Another problem with teaching individual sounds is relevance to the learners’ needs. One student may find a particular sound difficult while the next has no problem with it at all. This problem is especially acute in classes with mixed first languages. If you teach the distinction between /r/ and /l/ for your Japanese learners, it will be irrelevant for their European classmates. Even if all your class share the same mother tongue, your coursebook may not provide material on their specific L1 issues, since such materials are often intended for a wider market than just one country. To provide targeted pronunciation activities, you will probably need to step away from the coursebook and source your material elsewhere.
3 Unconnected fragments
Sounds are difficult to fit into the storyline of a lesson. Many teachers like lessons to have a good sense of flow, some kind of narrative linking one part of the lesson to the next. But whatever your class is about, it’s unlikely to create an obvious need for a section on ship and sheep. That’s because there is no direct connection between phonemes and meaningful units of language like words and sentences. Topics rarely contain a large number of words featuring any particular sound. If you want to focus on a sound in class, it will usually have to be in a small, separate chunk of the lesson with no strong connection to the rest of it. This makes teaching segmental aspects of pronunciation feel bitty – a collection of random fragments.
SUGGESTIONS
1 Use minimal pairs
I’m going to suggest that taking a minimal pair approach to teaching pronunciation goes a long way to answering the accent issues problem. But first of all, I should make one assumption clear. I assume that for most pronunciation teachers around the world, our job is to help learners understand and be understood. In other words, the goal is intelligibility rather than training them to sound like a native. This is important, because you can be intelligible without sounding like a native.
What are the implications of this assumption? Well, it means that learners do not have to acquire sounds which are exactly the same as the standard model. For example, the vowel in the word gate does not have to be identical to the RP or GA diphthong. What’s important is that it should be different from other phonemes, for example, the vowel in get.
This is the strength of minimal pair activities: they switch the focus away from the individual sound and onto the contrast between a phoneme pair, such as gate-get. The basic message is this: it doesn’t matter what accent you have, as long as you can make the important contrasts. This message is very different from implying that learners must sound like the Queen (or any other idealized native speaker). You can sound like yourself.
2 Put minimal pairs in groups
A good way to make your pronunciation lesson relevant to a mixed group of learners is to focus on a set of related pairs rather than a single pair. Take for example the set of words beat, bait, bite, bit, bet, bat. We could call this a minimal group. Within this group, there are a range of different challenges. Some learners may find it difficult to distinguish beat-bit; others may have no problem with this, but instead have a problem with bait-bet, or with bet-bat. By focusing on a minimal group, hopefully, there’s something for everybody.
For all the learners, even those who have no specific problem with any of the pairings, there is also the benefit of seeing how these vowel phonemes work together as a system, allowing them to be compared and contrasted. This connects the individual minimal pairs to the bigger picture, making the pronunciation segment of your class seem less fragmented, or bitty. This goes some way to offering a solution to problem 3 above: although it doesn’t help to integrate your pronunciation section with the topic of the rest of the lesson, it does make it a weightier chunk in its own right. You could also introduce some kind of phoneme syllabus, covering a different phoneme group at regular intervals. Here is a suggestion of some groups you could cover (Hancock 2024):
Consonants
– The P group (lip consonants): pea, bee, fee, V, we
– The T group (tongue and gum consonants): tear, deer, cheer, jeer, year
– The S group (sibilants): sip, zip, ship, vision
– The K group (palate and throat consonants): cold, gold, hold, old
– The TH group (teeth consonants): thin, tin, fin, sin, then, den, Venn, zen
– The L group (liquids): light, right, night
– The N group (nose consonants): sun, sum, sung, sunk
Vowels
– The E group (front vowels): beat, bait, bite, bit, bet, bat
– The A group (open vowels): caught, coat, cot, cat, cut
– The U group (rounded vowels): foul, foal, fool, full, gull
– The R vowel group (vowels before ‘r’): steer, stair, star, store, stir
Accent is a problem in ELT, particularly in pronunciation teaching. In the real world, accents are diverse, and yet we often seem to teach as if only one or two of them are valid. Why is that, and is there any way to make pronunciation teaching more accent-friendly? In this short article, we explore those questions, and I’ll suggest that the answer may be to set a good example.
You have an accent
You sometimes hear people say things like, ‘I don’t have an accent’. On the one hand, this seems like nonsense – like saying, ‘I don’t have an appearance’. On the other hand, I guess we know what they mean. They mean that they have a way of speaking which is felt to be ‘normal’ or ‘neutral’, not marked as being strongly regional or foreign. But whatever that is, it’s an accent too. If you speak a language, you have an accent. Alene Moyer writes, ‘In any language – native or not – everyone has an accent, yet the idea of a neutral accent standard persists in our collective consciousness’ (Moyer, 2013 p.89).
Accent reduction and elocution lessons
The idea of a neutral accent is exploited by courses which offer ‘accent reduction’. If we accept Moyer’s claim, then these courses could be better described as ‘accent training’, not so much losing an accent as replacing it with another – one which is more acceptable in the community where you are living. These kinds of courses are popular nowadays with immigrants in North America, but the idea is not new. They are like the ‘elocution lessons’ which were popular in the past in the UK. These were in effect accent training for native speakers, with an emphasis on social climbing: learning to speak your own language in a way which is more acceptable in upper-class circles. In this context, the model accent was ‘Received Pronunciation’ (RP), the word ‘received’ here being used as a synonym for ‘accepted’. RP is an accent of English which is regarded as standard in the UK and elsewhere, but there is a strong evaluative element here too: the idea that this accent is not only ‘standard’ but a ‘higher standard’ than others.
‘Standard’ does not mean ‘better’
Pronunciation classes often set up a model for learners to aim at. This model is a native accent, and more specifically, a ‘standard’ native accent – RP or GA (General American). But it’s important to understand that being ‘standard’ does not mean those accents are somehow better than other accents. John Wells points out, ‘A standard accent is regarded as a standard not because of any intrinsic qualities it may possess, but because of an arbitrary attitude adopted towards it by society’ (Wells, 1982 p.34).
In the world today, English is an international language, with many more non-native than native speakers. If you want to understand and be understood by as many of those people as possible, having an RP or GA accent is no guarantee. As Wells says, those accents are not intrinsically superior – they are not, for example, more intelligible. Their use as models is essentially arbitrary, relating more to local prejudices within the UK and US than to anything else.
Pronunciation teaching goals
Robin Walker and Gemma Archer outline two alternative pronunciation goals for learners of English – a. a native speaker accent, or b. comfortable intelligibility (Walker & Archer 2024). A native speaker accent was the goal of pronunciation teaching before the 1980s. The model chosen was overwhelmingly either RP or GA.
The focus on comfortable intelligibility emerged when communicative approaches became more fashionable, the idea being that we should help learners to understand and be understood. Speakers can be intelligible without necessarily speaking RP or GA.
Of course, the native speaker accent goal still exists alongside the comfortable intelligibility goal. Although the majority of learners around the world mainly need to be intelligible, some specifically want or need to sound like native speakers. To keep the two goals clear and separate, I think we could refer to them using different terms: accent training aims at a native speaker accent goal while pronunciation teaching aims at comfortable intelligibility.
Models or examples?
I’ve suggested that we keep pronunciation teaching separate from accent training, but in practice, the boundary between the two is very often blurred. A lot of teachers teach pronunciation as if they are doing accent training, correcting perfectly intelligible speech simply because it is not native-like. I think part of the reason for this is an insistence on models.
The strongest argument often given in favour of standard models (such as RP or GA) is that we need a fixed target to aim at, and why not choose one which is widely accepted? If anybody suggests abandoning a standard model, people ask, ‘But what can we replace it with?’ It’s a tough question, because no suggested replacement comes without problems. Yes, it’s a tough question, but maybe it’s not a question that needs an answer. Let me suggest that if we want to develop an accent-friendly approach to pronunciation teaching, perhaps we don’t need to attach so much importance to the question of models in the first place. Instead of models, we simply offer examples – starting with ourselves, and whatever accent we have as teachers.
Teaching by example
Let’s say standard accents such as RP, our own accents, and all other intelligible accents of English are all examples of successful English pronunciation. Any of them can serve as a model; none of them has to be the model. In talking about them this way there is no evaluative judgement going on; no assumption that one accent is superior to another.
We don’t need to protect learners from the reality of accent variation. As listeners, learners will inevitably encounter many different accents, not only the standard ones. Equally inevitably, as speakers, most learners will end up with accents different from the standards, and this is not a shameful fact we should hide away from them.
Nor do we need to create accent anxiety among teachers by implying that we should all have standard accents. If you are an intelligible speaker of English, you are a good example for your learners, whatever your accent.
References
Moyer, A. (2013). Foreign Accent: The Phenomenon of Non-Native Speech. Cambridge: Cambridge University Press
Walker, R. & Archer, G. (2024). Teaching English Pronunciation for a Global World. Oxford: Oxford University Press
Wells, J.C. (1982). Accents of English 1: An Introduction. Cambridge: Cambridge University Press
(First published in IATEFL Conference Selections 2023; this article is a written summary of the conference presentation)
Unlike the written word, the spoken word is different every time you hear it – think of all the different voices and accents in the world. How do listeners ever recognise these various versions as being the same word? This crucial aspect of the listening skill is known as ‘spoken word recognition’. In this presentation, we look at some of the difficulties involved in this, and some of the things we can do in the language class. The analysis is divided into four parts, which I call spelling, storing, priming and processing.
Spelling
According to John Field, the written forms of words tend to stick in the memory more strongly than the spoken forms. Unfortunately, in English, the written form is often misleading and can lead to mispronunciation. For example, many learners pronounce ‘comfortable’ like ‘come for table’.
But what about the consequences for listening? The problem is this: if the learner expects words to sound like their written form, they may not recognise them in speech. We should bear this in mind when teaching. Encourage learners to make some kind of note of words which are pronounced very differently from their spelling. If possible, give them guidance about spelling rules. For example, make sure they are aware of vowel reduction: the letter ‘a’ in ‘comfortable’ is not the same as in ‘table’!
Storing
When we hear a word, we compare it to words which are stored in our memory and look for a match. However, words don’t have only one form. John Field gives the example of ‘actually’. If you’re speaking really carefully, this may have four syllables, but said quickly it may come out as only two, like ‘ashley’. Field proposes that instead of storing a single form of a word, the listener stores multiple versions (or ‘exemplars’) of it. To help learners build up their repertoire of stored exemplars of a word, we need to expose them to more variety. We can assemble multiple examples of the same word in different contexts, and for this purpose, the online tool YouGlish is very useful. It’s like a search engine for video material: you type in the word (or phrase) you want to hear, and it gives you thousands of examples in different voices, accents and speeds. To give learners a sense of how the spoken forms of words vary, we can use YouGlish in class like this, or else encourage them to make regular use of it at home.
Priming
Listeners do not hear neutrally. We are primed to pay attention to features which are important in our language, while ignoring features which are not. For example, if word stress is important in your first language, you tend to notice it; if it is not, then you tend to be what Anne Cutler calls ‘stress deaf’. With our learners, we somehow need to prime them to pay attention to features which may not be common in their L1, but which are common in English. One approach is to use texts which have a high density of certain common patterns in English such as word endings. For example, I have designed this rhyme to draw attention to the ending ‘able/ible’:
They’re comfortable and durable
They’re lovable, adorable
Fashionable but sensible
To me they’re indispensable
You can make short texts with lots of examples for yourself. Try using ChatGPT: instruct it to write a brief text containing… and then give a list of words with the suffix you want to focus on.
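To give a concrete idea, a prompt might look something like this (the exact wording doesn’t matter – this is just one possible version, using words from the rhyme above): ‘Write a short, light-hearted text of about 50 words which includes the words comfortable, durable, lovable, adorable, fashionable, sensible and indispensable.’ You can then adjust the word list and the length to suit your class.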
Processing
Listeners have to process what they’re hearing in real time. According to John Field, ‘listeners may need to form tentative matches on the basis of the available evidence and to confirm or change them as they hear more and more of the utterance’. In the examples below, after hearing the first part of a sentence, the listener understands a, but then after hearing the ending, they must change their interpretation to b:
a. It’s a fish… b. It’s official.
a. Pay a ten… b. Pay attention.
a. It’s a nun… b. It’s an onion.
Expert listeners do this all the time; learners on the other hand tend to stick with the first interpretation, no matter how bizarre. We can use dictations like the examples above in class to raise awareness of this. Read out ‘a’ first and ask learners to write what they hear. Then read out ‘b’ and ask them to correct and complete what they wrote.
References
Cutler, A. (2012). Native Listening: Language Experience and the Recognition of Spoken Words. MIT Press
Field, J. (2008). Listening in the Language Classroom. Cambridge University Press
(First published in Modern English Teacher Volume 32 Issue 6)
Phrasal homophones
Look at the phrases in the table below and try saying them to yourself. Do you find that the phrases on the left sound like the phrases on the right? I mean, not just similar, but identical? It seems incredible, but for many speakers of English, they can indeed be identical. They are what I call ‘phrasal homophones’.
A – B
packs and bags – pack some bags
plants and berries – plant some berries
chops and potatoes – chop some potatoes
cooks and meals – cook some meals
sauce and pasta – saw some pasta
drinks and milk – drink some milk
Writing listening activities from the audio script
The existence of phrasal homophones like the examples above shows us a couple of interesting things about listening in ELT. The first is this: if you base your listening activities on the audio script (as many coursebook authors do), you will miss something important. If you’re working from the audio script, you tend to notice the difficult vocabulary and structures and focus on these. However, difficulties are often caused by sections of the text which appear simple in the script, and these tend to be overlooked. For instance, as the phrasal homophones demonstrate, in a spoken context some may be confused with and, despite the two appearing completely different in print.
Scripted versus authentic listening
The phrasal homophones in the table do not always sound identical. If the speaker is articulating carefully, they may sound different. For instance, speakers may pronounce the ‘d’ in and. This raises a second issue about listening in ELT: it’s often based on scripted audio. This is created by actors in a studio reading from a script, and in these circumstances, they tend to speak more clearly than they would normally. This is why Sheila Thorn argues so forcefully that we need to integrate authentic listening into the language classroom (Thorn, 2021). Learners need to have exposure to features of spoken English which are often absent in careful speech.
Connected speech
In writing, and and some look completely different. So how is it possible for them to sound the same in the phrasal homophones above? The explanation lies in a set of features of spoken English which are often referred to by the label ‘connected speech’. Here are some of those features which explain the examples in the table:
1 The words and and some are unstressed. As often happens in such cases, the vowel is reduced to the weak, neutral vowel known as schwa. So there’s no difference between the ‘a’ in and and the ‘o’ in some.
2 The ‘d’ in and is lost. This often happens, especially when there are consonants on either side of the ‘d’. It’s so common that it’s sometimes written that way, for example fish ‘n chips. This kind of change is known as elision.
3 With the ‘d’ cut, the ‘n’ is now the last sound in and. This ‘n’ often changes to an ‘m’ if the following word begins with ‘p’, ‘b’ or ‘m’. This kind of change is known as assimilation.
4 The ‘s’ at the end of the first word links to the ‘a’ of and. This is known as linking and it often happens when one word ends in a consonant and the next begins with a vowel.
Teaching pronunciation for listening
Connected speech is an aspect of spoken English which is often covered in pronunciation materials. We often think of pronunciation as something productive, relating to the speaking skill, but pronunciation is equally important receptively, for listening. In fact, there are some aspects of pronunciation which are much more important for listening than they are for speaking, and I think connected speech is an example of this. Your learners can speak perfectly clearly without the features outlined in 1-4 above. However, as listeners, they will not be able to avoid hearing them, and it’s best for them to be prepared. I’m suggesting that we need to teach pronunciation for listening, and that includes raising awareness of connected speech. In the remainder of this article, I will suggest two ways this may be achieved using fun activities based on phrasal homophones.
Trictation
This activity is a dictation with a trick, which I’ve named ‘trictation’ (Hancock 2022). Read out a few of the phrases in the box below and give learners time to write down what they hear. It doesn’t matter if you say the phrase from Column A or Column B since they should sound the same anyway!
Now for the trick: Write up the phrases for the learners to check their answers – but only write the phrases from Column A. At this point, many in the class will look disappointed because they wrote the phrase in Column B. Finally, write up the answers in Column B too and explain that they are equally correct – that both phrases are pronounced the same.
At this stage, the learners will probably want to know how such different phrases can sound the same, and you can elicit or explain some of the features of connected speech outlined above, such as linking, vowel reduction, elision and assimilation. You don’t need to use this technical jargon, of course.
As a follow up, write some more of the phrases from Column A on the board and ask learners to guess what the sound-alike phrase in Column B could be.
Dictation bloopers
Like the previous activity, this again is based on phrasal homophones and what they reveal about connected speech. In Trictation, the phrases in both the A and B columns make sense. In this activity, one of the phrases is nonsense. Learners have to work out what the intended phrase should be. The nonsense could be contextualised as what voice recognition software might write. For example, it hears cakes and biscuits, but misunderstands and writes cake some biscuits – which sounds identical, but doesn’t make sense.
1 Give out copies of the photocopiable activity Dictation bloopers or project it on a slide. Go through the example in number 1 and show how the wrong phrase and the corrected phrase sound the same. Note that this example has all the same features of connected speech outlined above – schwa, elision, linking and assimilation, causing and and some to sound the same.
2 Ask the class to suggest what the correct versions of some of the other phrases are. They are in a grid rather than a list to suggest that learners can do them in any order. In that way, they can go straight for the ones they find easiest first.
3 After correcting each of the bloopers, encourage learners to explain how the mishearing happened if they can. This is to raise their awareness of connected speech. Features you may wish to focus on include the vowel reduction, elision, linking and assimilation outlined above in the connected speech section. This activity also has two examples of ‘blending’, in 3 and 11. This is where the consonant at the end of one word blends together with the ‘y’ of your to make a different consonant. For example, paid your sounds like page or.
Note: Your learners may want to know why the ‘b’ of parrots’ beak sounds like the ‘p’ of parrot speak. It’s because unvoiced consonants like ‘p’ or ‘t’ sound voiced, like ‘b’ or ‘d’, when they come after ‘s’ at the beginning of a word. There are other examples of this in 4 and 12.
References
Hancock, M. (2022). PronPack: Connected Speech for Listeners. Hancock McDonald ELT
Thorn, S. (2021). Integrating Authentic Listening into the Language Classroom. Pavilion Publishing
First published: IATEFL 2022 Conference Selections
Pronunciation for receptive purposes
We often think of pronunciation in terms of productive skills, but it’s equally important for receptive ones. Indeed, I would argue that some aspects of pronunciation learning are primarily for the benefit of listening – connected speech in particular. This point is made very clearly if you consider the pairs of sentences below:
A – B
Give him a hug. – Give them a hug.
Done as a favour. – Done us a favour.
Get a receipt. – Get her a seat.
Gave them an aim. – Gave her my name.
Speakers may pronounce A and B exactly the same. This is because in connected speech, features such as elision, linking and weak forms can obscure the differences. Obviously, we don’t necessarily want our learners to do this in their own speech – it’s usually better to pronounce clearly! However, as listeners, they have no choice – they’re bound to hear this kind of connected speech, and we need to prepare them for it.
Raising awareness of connected speech with micro-listening
One approach to preparing learners for real connected speech is to focus in detail on very short segments of audio – what John Field calls micro-listening. You can do this by choosing short segments of any audio text which you’re using, but an easy alternative is to use the online tool YouGlish. Type in any chunk you’re interested in and this search engine will find it for you across a whole corpus of online video material. For example, I typed in Give them a. YouGlish then searched and found the phrase in thousands of videos, and played them with a few words before and a few words after my chosen phrase. In this way, my class could hear it in many different contexts, with different speeds, voices and accents. In most of them, the class could hear how the pronoun them was reduced in connected speech, for example to ‘em, and how it was linked up to its neighbouring words.
Integrating connected speech with grammar
A focus on connected speech is important, but it can feel rather random and difficult to integrate with other aspects of a course. One idea would be to integrate it into your grammar syllabus. For instance, if you are teaching a structure such as Give them a hug (that is, ditransitive verb phrases), you can focus on object pronouns in connected speech. Most grammar structures have strings of words including function words like pronouns, articles, auxiliaries and so on – and these are exactly the kinds of words which are most affected by the features of connected speech. This is the approach I took in my book PronPack: Connected Speech for Listeners.
Saying it to hear it
Alongside micro-listening, another approach to raising awareness of connected speech involves learners actually producing it themselves. Although the procedure is productive, the objective is receptive – actually hearing yourself produce this kind of speech is one of the best ways of becoming fully familiar with how it sounds. Any kind of drill which includes examples of connected speech can be used in this approach, but one which is very easy to set up is what I call the counting drill. Here’s an example for object pronouns after ditransitive verbs. You read each line out and the class repeats:
Give ‘em a ONE, Give ‘em a TWO, Give ‘em a THREE, Give ‘em a FOUR
Send ‘er a ONE, Send ‘er a TWO, Send ‘er a THREE, Send ‘er a FOUR
Buy ‘im a ONE, Buy ‘im a TWO, Buy ‘im a THREE, Buy ‘im a FOUR
The idea is that the numbers are so predictable that the learners can focus their attention on the bits which come before, and how they are connected up.
Use earworms
Another kind of drill I would recommend for a connected speech focus is a short and simple text, preferably with a bit of rhythm and rhyme. The word-play helps to make the sound of the text ‘stick in the head’ – the earworm effect. Again, you can say the text line by line, getting the learners to repeat. Here’s an example, focusing on the same grammar point as the counting chant. The bold shows the stress.
TITLE: Spoken word recognition for listeners
NAME(S) OF PRESENTER(S): Mark Hancock
DAY: Tuesday 18 April 2023
TIME: 14:50-15:20
LENGTH: 30 mins
ROOM: Queen’s Suite 7 – Harrogate Convention Centre
AUDIENCE CAPACITY: 75
Knowing a word is one thing; recognising it in the continuous stream of speech is something else. How do listeners accomplish this, and how can we help our learners to achieve the same? In this presentation, we will look at research into spoken word recognition and try out some classroom activities for developing this key aspect of the listening skill.
Know weigh!
Learners are sometimes amazed to discover that words which look completely different in written form are sometimes pronounced exactly the same. It seems almost unbelievable that know weigh sounds the same as no way! With English spelling being so unreliable, it’s no wonder that learners and teachers look for alternative ways to represent pronunciation in writing. One popular option is to write the word using the spelling conventions of your first language. For example, I once noticed White House written as guait haus in a piece of graffiti in Madrid. I often see learners using similar kinds of informal phonetics in their notebooks. I’ve done the same thing myself, representing French enfant as onfon. Seeing the pronunciation in a written form can help us to understand it and fix it in the memory – ears and eyes are better than ears alone. But these kinds of informal spellings are very personal – each learner will have their own version – and they are often inaccurate. It’s useful to have something more reliable, and this is where phonemic symbols come in.
IPA symbols
In the world of English Language Teaching (ELT), it’s common to use a set of symbols to represent pronunciation, and the most widely used symbols come from the International Phonetic Alphabet (IPA). Becoming familiar with these symbols is a rite of passage for trainee teachers, but too often we fail to understand what they are and how they work. There’s a widespread and unhelpful belief that the symbols somehow only represent one specific accent, and I think this derives from a confusion of ‘phonemic’ and ‘phonetic’.
Phonemic versus phonetic
The first thing we should understand about the IPA that we typically use in ELT is that it’s not the full set – which is designed to cover all languages – but only the small set of symbols needed to represent English. It’s also important to note that in ELT, we normally use the symbols phonemically rather than phonetically. Let me explain this with a concrete example. There are two kinds of L – the clear L and the dark L – and there is a different phonetic symbol for each of these: [l] and [ɫ]. English does have both of these sounds, but there is no meaningful difference between them – they are simply different ‘flavours’ of the same phoneme. We use one phonemic symbol /l/ to represent this phoneme. In other words, the phoneme /l/ includes both sounds [l] and [ɫ]. Trainees should know that a phonemic symbol does not represent one exact and specific sound. By the way, make sure they notice that phonetic symbols are shown between square brackets and phonemic symbols between slash brackets!
Love them or hate them?
There seems to be a love-hate relationship between teachers and the IPA symbols. Some teachers love them; others won’t use them, or only ever used them for the observed lessons they did as trainees. So what’s the problem? The main objection which I’ve heard to the IPA goes like this: ‘My accent is not the same as the accent shown by the IPA, so I can’t use it!’ I believe this worry is based on an important misunderstanding. Phonetic symbols may represent one specific accent, but phonemic symbols don’t.
Symbols and accents
I think the phonemic symbols are best regarded as accent-neutral. Take for example the word bet in a typical English accent and a typical New Zealand accent. The vowel sounds quite different in the two accents – New Zealand bet sounds like bit to English ears. Or, from the opposite point of view, English bet sounds like bat to New Zealand ears. However, we can use the same phonemic symbol /e/ for the vowel sound in both accents. This is because the symbol represents a phoneme, not a sound. If we wanted to represent a sound, we would use a phonetic symbol instead.
Phonemes are like chess pieces
The pieces in different chess sets often have slightly different shapes. For example, in one set, the knight may look like a horse’s head; in another set the knight may be a more abstract shape. But despite the differences in shape, both of these pieces play the same role in the game. Phonemes are like this. The /e/ in UK English sounds different from the /e/ in New Zealand English, but they both play the same role in the system as a whole. You could define it this way: /e/ represents the vowel sound in ‘bet’, whatever your accent. As a teacher trainer, this is the message I try to get across to trainees: phonemic symbols don’t represent only one accent; if you are an intelligible speaker of English, they can represent YOUR accent too!
Why do UK and US books often use different symbols?
If phonemic symbols are accent-neutral, then why would British and American books use different ones? I think the answer is that the differences are more about academic tradition than accent. Take for example the vowel phoneme in boot, which is often given as /u:/ in UK texts but /uw/ in US ones. This difference has nothing to do with a contrast between the British and American pronunciations of boot; it is merely a different habitual use of symbols. The symbols in themselves are arbitrary – it’s the role they play in the system as a whole which matters.
A chart as a box of chocolates
Phonemic charts often look rather like a box of chocolates – a collection of intriguing symbols, each one in its own separate compartment. Naturally, our attention is drawn to the symbols, like the chocolates in the box, but what if the box itself is actually the important part? I think that’s the case with a phonemic chart – the system as a whole is more important than the individual symbols within.
A system of distinctions
So how is the box more important than the symbols? Well, it’s this: the system of phonemes in English is a system of distinctions. What matters about the vowel in bet is not so much its intrinsic quality, but more the fact that it is distinguishable from the vowels in bit, beat or bait, for example. What is important is not the precise quality of the occupant of each cell in the chart, but the fact that it is different from its neighbours. English and New Zealand speakers may pronounce those individual vowels differently, but they can still distinguish the words, and that’s what counts. We have to keep the chocolates separate from one another!
What if you don’t have a distinction in your accent?
I should acknowledge a difficulty with the phonemic chart. Unfortunately, it can’t always be as accent-neutral as we might want. Some accents have only one phoneme where other accents have two. It’s as if two of the chocolates in your box have melted together into one. Take for example the two vowel phonemes in full and fool. For many Scottish speakers, there’s only one phoneme here, and these two words are homophones. If you are a Scottish teacher and your class asks you to explain the difference between these two symbols in the chart, you will be obliged to say something like, ‘Well, they are the same in my accent, but different in some other accents’. It’s not ideal, but nor is it a reason to reject the entire IPA. That would be like throwing the baby out with the bathwater.
The big picture
No doubt difficulties arise from time to time when we try to use the same set of phonemic symbols for a variety of accents of English, as illustrated with the full and fool example above. But I think the essential point to bear in mind in teaching and teacher training is that the IPA symbols that we use in class are phonemic and not phonetic. This means that they do not represent specific, precise sounds but rather a range of sounds, for example, /l/ represents both the clear and the dark L. It also means that they don’t represent one specific accent, but are flexible enough to accommodate a range of accents – for example, /e/ can represent the vowel phoneme in bet in both British and New Zealand accents. Dear teacher educator, the phonemic symbols can represent your trainees’ accents too; encourage them to feel that they can own them!
This article first appeared in the TEIS Newsletter (TESOL Teacher Educator Interest Section Newsletter), December 2022
“Connected Speech for Listeners”: New book is now out in electronic format on Kobo and Apple Books. Some folks prefer to hold a physical book, but the ebook has certain advantages too, such as easy click-to-hear audio for a quick idea of what this all sounds like.