Sensitivity to the linguistic comfort
zone of other human beings certainly is, to my mind, a good thing to
nurture. It’s also a matter of etiquette. It makes others feel good
in our company, and it makes us feel good, too, because awareness of
our surroundings allows us to feel in control. Not least, this kind
of sensitivity is usually reciprocated, whether we’re being hosts
or guests, including in our languages. It’s not that difficult,
either: as newcomers to a party or a business meeting, say, we use
the same mechanism to monitor the ongoing atmosphere, so that we may
gain entitlement to merge with it. Or
not, of course: if we find ourselves surrounded by deliberately
hostile merrymakers and moneymakers, or by speaker-unfriendly
listeners, it doesn’t matter how mood-friendly or how
listener-friendly we strive to be. It’s a matter of will, and of
awareness that all of us, habitués or rookies, have been “ongoing”
too, for more or less extended periods of time, around more or less
varied kinds of people, with the effect that our speaking and
listening habits have become set, in what may feel more or less like
stone.
Linguistic friendliness matters, both
ways, because there is a sophisticated interplay between speaking and
listening, rooted in a law which is very, very familiar and very,
very dear to all of us: The Law of Least Effort. As
speakers, we’re quite reluctant to disturb the comfy humdrum routines that we’ve patiently trained our vocal tracts to observe.
As listeners, we simply stop listening to whatever threatens to
engage skills beyond what we deem to be our territorial listening rights.
Call it laziness,
if you so wish. I prefer to say
that our human speech and hearing hardware is optimised to account
for effort-effect tradeoffs: less effort from speakers
means added inconvenience to listeners, and vice versa. Never has “Do
Unto Others” had such a practical, everyday application.
One
sure way to create linguistic friendliness is to literally tune into
the rhythmical patterns which characterise our fellow speakers. By
this I mean their body language, facial expressions, visible and audible articulatory movements,
anything that can help us decode the cadences underlying the ways in
which our interlocutors use their language(s). Speakers and listeners
are individuals, like you and me: the Upanishads put it precisely the way
I believe matters of “languages” should be put, with the remark
that “It is not the language but the speaker that we want to
understand.” It’s all about people, like you and me.
All cultures, as far as we know, have
developed characteristic ways of harnessing human vocalisations and
body movements as a means of nurturing commonality. This is what we
came to call “song” and “dance”,
respectively. Steven Mithen, in
his book The Singing Neanderthals,
draws on archaeological, neurological and other evidence to propose a
unified account of The
Origins of Music, Language, Mind, and Body,
as in the subtitle of the book. More recently, Gustavo Arriaga, Eric
P. Zhou and Erich D. Jarvis, in an article titled Of mice, birds, and men: The mouse ultrasonic song system has some features similar to humans and song-learning birds,
report that fellow mammals share with us what we already knew we
shared with songbirds, the ability to communicate through the use of
learned vocalisations, which we fine-tune to match what we hear
around us.
Falling in with
other people cannot then be rocket science: even when simply
strolling around with somebody else, we end up moving with a
shared rhythm which makes everybody happy, because our heads bob in
synchrony so that we can talk to one another easily. Cadences form a
core part of our survival: breathing, chewing, sleeping, digesting,
pumping blood through our bodies take place in cycles of natural
tempos, amplitudes, frequencies, durations. Small wonder that our
languages follow suit: the fact is that we can’t open our mouths, in
any language, without assigning tempos, amplitudes, frequencies,
durations to the sounds we produce. In short, without prosody.
This is why
linguistic prosodies are not just niceties, a waste of our precious
executive learning time, cherries on cakes, and so on, even if we’ve
never been told about these things in our language lessons. Even if
we believe that this is irrelevant at preliminary Me-Tarzan-You-Jane
stages of acquisition, and even if we believe that language learners
must go through such stages at all, which
is far from a universal truth.
Even at this
supposed learning stage, are you telling Jane your name and hers? Or
are you asking, or repeating what Jane said, or are you expressing
stupefaction at a sudden realisation that people can have such names,
or names at all?
Prosody
is so central to our languages that we felt the need to create, for
them, meaningless carriers of meaningful prosodies, precisely because
prosody is enough. Nearly thirty years ago, Melvin J. Luthy conducted
a pioneering study on Nonnative speakers’ perception of English “nonlexical” intonation signals,
which found that core American English-bound melodic signals were
either missed or misinterpreted by newcomers to the language.
If you also happen to be a newcomer to
American English, have a look at how Judy B. Gilbert implements
listener-friendliness in the classroom, in her book (aptly!) titled
Clear Speech.
One of her teaching mottoes is that “Small chunks of language
should be learned like little songs.” You can also watch a video
of one of her presentations on her teaching method.
Next time, I’ll
take back everything I’ve said in this post.
© MCF 2012