Opinions and decisions about multilingualism involving sign languages suffer from the same resilient fantasies that have plagued multilingualism in general over the past 100 years or so. With sign languages, however, there’s the aggravating factor that fantasies about the languages themselves join the chorus. Only the other week, for example, a couple of my (speech-speech) multilingual friends wondered why all the fuss about sign languages among linguists like me, since these languages are but a set of universal gestural primitives, like rubbing your tummy to indicate you’re hungry, as they put it. ‘Aren’t they?’, they nevertheless asked at the end of their reasoning. No, I replied. That would be roughly equivalent to saying that spoken languages are but a set of universal groany primitives to indicate your mood, as I put it.
I took this chance to dispel their other illusion, that sign languages are straightforward fingerspelling systems, which draws on the interesting assumption that all signers must be literate. Many sign languages do include fingerspelling components, but the fact that, say, BSL (British Sign Language) and ASL (American Sign Language) use two-handed and one-handed spelling, respectively, for the same printed language should help reassess the presumed straightforwardness of fingerspelling. In addition, BSL and ASL are mutually unintelligible, as are sign languages around the world generally.
My friends are well educated, cosmopolitan professionals. Their take reflects the overarching myth that sign languages really aren’t languages at all, which goes on shaping policies devised by other professionals, those who have been empowered to deal with language education and who therefore aren’t in the habit of asking questions at the end of their reasoning. In a book chapter discussing The British Sign Language community up to the early 1990s, Paddy Ladd gives a distressing review of the ignorance and associated prejudice which, among other rulings, sanctioned physical violence to ‘cure’ deaf children of their signing ‘compulsion’. Just as, as I reported elsewhere, multilingualism came to be beaten out of hearing schoolchildren, the hands of deaf schoolchildren were tied behind their backs in order to force them to use spoken language. And just as multilingualism came to be medicalised, the language of deaf people was “pathologised” (Ladd’s word). Small wonder, then, that sign-speech multilinguals came to be viewed as doubly ‘handicapped’.
When sign languages finally became legitimised, as it were, as objects of linguistic enquiry, sign multilingualism turned out, unsurprisingly, to match speech multilingualism. It comes complete with mixes, as David Quinto-Pozos reports for LSM (Lengua de Señas Mexicana) and ASL in Sign language contact and interference, for example, and with a lingua franca, International Sign, which Anja Hiddinga and Onno Crasborn discuss in Signed languages and globalization. But sign multilingualism remained the business of signers, so hearing communities needn’t bother with the eccentricities of deaf communities. Dealing with sign-speech multilingualism, however, appears to invite regression to hand-tied Fantasy Land: sign languages may be languages after all, but they are less so than spoken ones and should therefore not take priority in (so-called) multilingual education.
It may help to understand that we’re talking about difference here, not winner-takes-all competition of gradable merits. Comparing the contexts of use of distinct linguistic modes is exactly as useful as comparing multilinguals and monolinguals, which is to say not useful at all. Insisting on doing so fails to recognise one of the many paradoxes reflecting our perennial difficulty in defining what languages are: do we want to say that speech beats sign, hands down, because we’re persuaded that auditory resources rank higher than visual ones in linguistic sophistication? Or should we rank those resources the other way around, because we believe that spoken languages are subsidiary to spelt ones?
Language is as independent of the modes we’ve found to represent it – whether natural, sense-bound ones like sight, hearing, touch, or artificial ones like print – as music is independent of the instruments (our voice included) through which we produce it. What’s more, our senses seldom serve us to the exclusion of other senses. Manual gestures, for example, are intrinsic to spoken interaction, where attention to both visual and auditory cues necessarily assists (de)coding. There’s even evidence that adequate gesturing enhances learning, as Martha W. Alibali and colleagues showed for a speech-based maths class in Students learn more when their teacher has learned to gesture effectively. In this sense, speakers and signers alike are multimodal users of language, and so are all of us, speakers or signers, who are literate.
There may be some overlap between gestural uses in spoken and signed interaction, as Trevor Johnston argued for pointing gestures in Towards a comparative semiotics of pointing actions in signed and spoken languages, but the fundamental issue is that signs and speech belong to two different linguistic modes, each with its own rules, standards and practices. Precisely for this reason, sign-speech multilinguals can avail themselves of means of linguistic expression which monomodal interaction lacks, in that “distinct modalities allow for simultaneous production of two languages”, as Karen Emmorey and colleagues discuss in Bimodal bilingualism.
This means that sign-speech multilinguals, like any language users, must draw on the whole of their linguistic resources in order to be able to develop as human beings. The Position Statement on Early Cognitive and Language Development and Education of Deaf and Hard of Hearing Children, adopted by the NAD (National Association of the Deaf, USA) in June this year, makes for reading as engrossing as Paddy Ladd’s chapter – with many thanks to Beppie van den Bogaerde, who brought this publication to my attention on Twitter, @HU_DeafStudies. The document examines the relationship between sign, speech and print modes, debunking the usual myths about minority languages causing delayed development of mainstream languages (why never the other way around, one wonders?), about the primacy of spoken languages over signed ones, about reading abilities presupposing “phonological awareness”, and about multilingualism itself. This is the specialist side of the sign-speech tandem. On the personal side, Jenny Froude’s book Making Sense in Sign: A Lifeline for a Deaf Child is a gripping account of her family’s journey as hearing caregivers of a deaf child.
Deaf children must be allowed to acquire a language which is meant for deaf people, because they are not hearing people in (temporary) disguise. Why should we deprive our children of their languages? Would hearing people wish to be raised in monomodal sign language? Evidence that sign is the mode that best serves deaf children from the outset lies in their spontaneous creation of languages such as Nicaraguan Sign Language (Idioma de Señas de Nicaragua), which emerged among deaf schoolchildren in the late 1970s and went on to make headlines. And evidence that we all resort to whatever language modes best serve our needs comes from adults, too.
Next time, I’ll have some more to say about depriving people of entitlement to their languages.
© MCF 2014
Next post: Native multilinguals. Saturday 18th October 2014.