Languages are not just the things that we have been trained to dissect while sitting in general or specialised classrooms, where we are told to recast sentences in the past tense or identify phonetic properties of allophones (sorry, I just had to include this wink to fellow linguists here). If they were, how come we make faces and gesture emphatically even when we are talking on the phone? It seems that there’s more to languages than meets the ear.
Some languages have no sounds, and therefore cannot be spoken. This is because sound, the medium of many languages around the world, makes sense only if you hear it. Some of us cannot hear, and so use languages that make sense when you see them. Saying that some languages use audible movements of the mouth to produce meaningful exchanges, whereas other languages use visible movements of the hands for the same purpose, makes it look (or sound) like we’re talking about two completely distinct kinds of language. We’re not. As I hope is becoming clear throughout my posts here, things about languages are not as all-or-none as we sometimes like to believe: it’s rather a matter of clines.
Human means of expression are, to use a fancy word, multimodal. This “multi” word, very welcome in this blog, means that we draw on several modes: spoken modes use mostly the mouth, and sign modes mostly the hands. But all of us use both hand and mouth movements in human-to-human interaction – and sometimes human-to-other too: I cannot be the only one shaking fists and uttering profanity at the vagaries of my internet connection, for example. Mouth and hand gestures each offer unique expressive possibilities, which we combine in order to produce more meaning than what a single mode can achieve. In this sense, we are all multilinguals, and we all mix our languages.
Like in any exchange, sometimes there may be glitches, or what some of us might interpret as such: a hand gesture may reinforce, but also contradict, a mouth gesture, and vice versa. We may do this intentionally, for example for sarcastic purposes. Or we may fail to notice that we are giving out ambiguous, or even unintelligible, signals. If you are a lip-reader, or would like to see what it’s like to be one, you can try an experiment. Experiments are of course artificial, and often probe for extreme effects, but this one may give you a feeling of how visual and auditory cues can interfere with each other. This experimental paradigm became known as the McGurk Effect.
There is a common misconception that sign languages are spoken languages “written” in signs, as it were. This is reminiscent of the misconception that spoken languages are simple reproductions of one another, mentioned in a previous post. One reason that might explain it is that a number of sign languages use, or include, fingerspelling, where hand gestures correspond to printable symbols. Spelling is of course a visual representation of spoken languages too. Printed forms of language are extremely interesting, by the way, because they have managed to take over from spoken ones as tokens of so-called good linguistic usage. I will have quite a few things to say about this in a future post. My point here is that sign languages are not the same as hand spellings.
Sign languages are as sophisticated means of communication as spoken ones. If they weren’t, they couldn’t serve their users. All of our languages are acquired in the same way: babies babble, with their hands if they’re acquiring sign languages, with their mouths if acquiring spoken ones. All languages show geographical, historical and individual variation. We can be multilingual in all of them, sign, spoken, or both. Sign and spoken languages are mutually unintelligible, obviously, but so are spoken languages among themselves, and sign languages among themselves. It may come as a surprise, for example, that British Sign Language (BSL) and American Sign Language (ASL) are not different variants of the same language, like spoken American English and British English: they’re different languages altogether. In addition, fingerspelling can vary for the same spoken language: BSL fingerspelling and ASL fingerspelling are different systems too.
There is a second misconception about gestural language. Gestures that go with spoken languages are often seen as mere flourish: you add them because you belong to a funny culture – those who “add” gestures have equally definite opinions about those who don’t, of course. Take Latinos, for example, by which word I mean anyone sharing a Latin background. They have a reputation for not being able to keep their hands still when they’re talking, so the old joke goes that the way to shut them up is to tie their arms behind their backs. My own Latino roots are often betrayed by my gestural exuberance (I come from Portugal, in case I forgot to mention this), and so I see it as my duty to set the record straight on this one, publicly: I do use my hands a lot when I speak, but not whenever I speak. I’ve lived in several European, African and Asian countries, with two consequences: one, I’ve noticed that different peoples use different visual cues when they talk; and two, I’ve learned to adapt. So when I use my hands, I use them not because I’m Portuguese, but because I’m being Portuguese, which is an entirely different thing. In case this ability to be being different things reminds you of a trait commonly attributed to multilinguals, a “split personality”, I/we hereby pledge to say more about it in a future post.
Gestural language, and body language in general, are not ornamental. They are a necessary part of intelligible exchanges, and they have their own grammar, in that they pattern regularly. We can hear smiles in a voice and we can see passion in a face. If, that is, we’ve learned to associate passion with that particular expression, and perhaps with that particular face, just as we’ve learned to associate the word passion with its meaning. Meanings don’t come out of the blue (or out of dictionaries): we shape them, according to our cultural conventions. Some of us are professionally trained to gain awareness of cultural habits of this kind, and to interpret them in order to assess our overall state of health, including linguistic health. We’ll see how, next time.
© MCF 2010
Next post: The fight for a fair deal. Wednesday 3rd November 2010.