
Does Cued Speech Entail Speech? An Analysis of Cued and Spoken Information in Terms of Distinctive Features

By Earl Fleetwood, M.A., and Melanie Metzger, Ph.D.

Part 3

Discussion

Researchers in previous studies of Cued Speech have commonly made an implicit assumption: that the production and subsequent reception of cued utterances entail the production and reception of speech. Because this assumption has gone untested, interpretations of data have fed conclusions that simply reinforce it. Thus, it is significant that the results of the current study provide evidence counter to the conclusion that cueing entails, includes, provides, or equates to knowledge of, or competence in, either the production or reception of speech.

Four points were identified earlier in this paper as possible reasons for the prevailing assumption mentioned above. The current study provides evidence counter to that assumption; this evidence is contextualized below in light of the four points, which are reprinted here for convenience.

1) The production and comprehension of cued information, like the production and comprehension of spoken information, involve use of the mouth.

The current study provides evidence in support of this statement. Nevertheless, according to the current findings, evidence that the statement is true is not evidence that the mouth serves the same articulatory function in the production of cued information and spoken information. In the current study, responses to cued information differ from responses to spoken information, as predicted, for the N test items (50% of the simultaneously cued and spoken test material). For example, and as noted earlier, a cued allophone of /k/ and a spoken allophone of /h/ were rendered simultaneously, both allophones resulting from non-velar productions. Because a spoken allophone of /k/ would be velar and a cued allophone of /k/ need not be, it appears that the relevant place of production for cued allophones need not be the same as the relevant place of production for spoken allophones. Although the mouth is employed both in cueing and in speaking, this disparity serves as evidence that the articulatory relevance of the mouth to the conveyance of linguistic information differs between the visible and acoustic channels. Thus, where the reception and comprehension of linguistic information are concerned, the mouth functions neither as an articulatory instrument of speech for the deaf native cuer nor as an articulatory instrument of cuem for the hearing native speaker.

2) Cued phonemic referents and spoken phonemic referents can coincide with regard to their linguistic values.

The current study provides evidence in support of this statement. Nevertheless, according to the current findings, evidence that they can coincide is not evidence that they must coincide. In the current study, the responses within each group tested coincide for 100% of the simultaneously cued and spoken isolated phonemic referents. This suggests that speaking and cueing each systematically produce phonemic referents (i.e., spoken allophones and cued allophones, respectively). The current study also finds that responses to only the C test items (50% of the items tested) coincide across groups. This supports the findings of previous studies that examine production and/or reception of co-presented and linguistically matched cued and spoken information. It also provides evidence that speaking and cueing each produce allophones autonomously and with respect to different sets of articulatory features.

3) Speakers may cue while they talk and cuers may talk while they cue.

The current study provides evidence in support of this statement. Nevertheless, according to the current findings, evidence that cueing and talking can co-occur is not evidence that one entails the other, nor that the same articulatory features are relevant to both. In the current study, responses coincide within each group for 100% of the items tested. However, as expected, 50% of the isolated phonemes, isolated words, and short phrases (i.e., the N test items) elicited different responses across the two groups tested, despite the fact that the cued information and spoken information were rendered simultaneously. The disparity of responses across groups is evidence that cueing does not entail speaking, that speaking does not entail cueing, and that cueing and speaking are autonomous articulatory processes. The simultaneous production of cueing and speaking is simply evidence that these two articulatory processes can co-occur.

4) Cueing employs the system named Cued Speech.

This is a true statement. Nevertheless, because 1) the articulatory relevance of the mouth to the conveyance of linguistic information differs between the visible and acoustic channels, and 2) cueing and speaking are autonomous articulatory modes and processes, the name Cued Speech might be more indicative of what motivates the cueing decisions of hearing cuers than it is descriptive of the information received and comprehended by deaf native cuers.

Implications

The claim that cueing conveys phonemic information is not disputed by this study. In fact, the current study finds evidence that supports this claim. Furthermore, this study does not challenge the claim that speaking and cueing can co-occur and can simultaneously convey the same linguistic information to deaf native English cuers and hearing native English speakers. This claim is also supported by the data collected. Nevertheless, as predicted, deaf native English cuers and hearing native English speakers derived different information from 50% of the simultaneously cued and spoken phonemes, words, and phrases (the N test items). The deaf native English cuers, as well as the hearing native English speakers (i.e., the control group), provided responses that were consistent with the test items that they received via their respective native mode of communication (i.e., cuem or speech). The study’s finding that the linguistic information need not coincide serves as evidence that linguistic information conveyed via cueing and linguistic information conveyed via speaking are carried via two distinct articulatory systems, even when that information does coincide. In the current study, the linguistic decisions of deaf native cuers are unaffected by the acoustic (speech) information utilized by the hearing native spoken English users. The deaf native English cuers and the hearing native English speakers systematically attended to different sets of articulatory features when making linguistic decisions. Thus, the fact that speaking and cueing can co-occur seems irrelevant to the deaf native cuer’s comprehension of linguistic structures. This suggests that speech need neither be produced nor received in order for a cued message to carry. Furthermore, it suggests that knowledge of speech on the part of sender and receiver is neither requisite of nor relevant to the linguistic integrity of cued information.
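The item logic behind this result pattern can be sketched concretely. The following minimal sketch in Python uses hypothetical item values, not the study's actual stimuli; it models only the expected pattern: each group responds according to its native mode, so responses agree across groups exactly when the cued and spoken linguistic values coincide.

    # Minimal sketch of the C/N test-item logic (hypothetical values).
    # C items: the simultaneously cued and spoken signals carry the same
    # linguistic value. N items: they carry different values.

    # (phoneme conveyed by the cued signal, phoneme conveyed by the spoken signal)
    test_items = [
        ("k", "k"),  # C item: cued and spoken values coincide
        ("k", "h"),  # N item: cued /k/ rendered with spoken /h/
    ]

    for cued_value, spoken_value in test_items:
        deaf_cuer_response = cued_value          # driven by visible (cued) features
        hearing_speaker_response = spoken_value  # driven by acoustic features
        kind = "C" if cued_value == spoken_value else "N"
        print(f"{kind} item: cuers report /{deaf_cuer_response}/, "
              f"speakers report /{hearing_speaker_response}/, "
              f"agreement: {deaf_cuer_response == hearing_speaker_response}")

Run over both items, the sketch reproduces the reported pattern: consistency within each group throughout, and agreement across groups only for the C item.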

While the data reveals that the linguistic values of simultaneously cued and spoken information need not coincide, the data also reveals that they can coincide. Test items were chosen with the goal that 50% would coincide linguistically across the two groups tested, beginning at the phonemic level. In other words, for 50% of the simultaneously cued and spoken test items, the test was designed with the goal that both groups would provide responses referring to the same phoneme, word, or phrase. Responses to these C test items were not only consistent within each group but also consistent between the two groups tested. While the data clearly shows that cuem is systematic and that it utilizes a different set of articulatory features than does speech, the current study also reveals that both sets of articulators (i.e., those of cuem and of speech) can be employed either exclusively or simultaneously toward conveying the phonemes, words, and syntax of American English. This suggests that the production of speech in conjunction with cued utterances may well provide a useful redundancy when provided to those deaf or hard of hearing individuals who make use of residual hearing, with or without assistive listening devices.

The fact that cueing and speaking each function autonomously suggests that Cued Speech is not inherently the “oral” system that it is often labeled. In this study, the deaf participants do not use the acoustic features of “oral” language to comprehend the utterances presented to them. Instead, deaf cuers identify as linguistically relevant a different set of distinctive features (i.e., an autonomously functioning visible set) than the set that constitutes speech. Where place of articulation, manner of articulation, and voicing status are the salient features of speech production, cued utterances are autonomously generated via hand shape, hand placement, and mouth formation (see Appendix A).
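The contrast between the two feature sets can also be made concrete. In the following minimal sketch, the feature labels for the cued allophone are hypothetical placeholders (the actual hand shape and placement assignments appear in Appendix A); the point is only that the same English phoneme receives two entirely disjoint articulatory specifications, one per system.

    # Minimal sketch: one phoneme, two disjoint articulatory specifications.
    # Cued feature values below are hypothetical placeholders; see Appendix A
    # for the actual hand shape and placement assignments of Cued Speech.

    spoken_k = {  # spoken allophone of /k/, specified by speech features
        "place": "velar",
        "manner": "stop",
        "voicing": "voiceless",
    }

    cued_k = {  # cued allophone of /k/, specified by cueing features
        "hand_shape": "HS-2",       # hypothetical label
        "hand_placement": "side",   # hypothetical label
        "mouth_formation": "MF-k",  # hypothetical label
    }

    # The linguistic value (/k/) coincides, but the sets of articulatory
    # features do not overlap at all; this is the sense in which the two
    # articulatory systems are autonomous.
    shared = set(spoken_k) & set(cued_k)
    print("Shared articulatory features:", shared or "none")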

When presented with the whispered form of a familiar spoken language, hearing people can recognize and process the remaining acoustic information as an acoustically impoverished spoken message. Findings of this study raise an interesting parallel issue: Does the absence of hand shapes and hand placements present deaf native cuers with a similar exercise? Specifically, when presented with the mouth-only version of a familiar cued language, is the visible mouth information that remains (in the absence of hand shapes and hand placements) processed as part of a visibly impoverished cued message rather than as part of an acoustically impoverished spoken one? In other words, to a deaf cuer, is what some would assume is “speechreading” actually more like receiving visibly “whispered” cueing than like receiving silent speech?

This study does not examine how deaf native cuers use knowledge of cued English when learning to speak English, nor how hearing native speakers use knowledge of spoken English when learning to cue English. Thus, the findings do not challenge the possibility that cued phonemic referents (allophones) and spoken ones can be coordinated. However, because the current study reveals no inherent relationship between the distinctive articulatory features of cuem and those of speech, it seems that any relationship between one and the other is contrived, perhaps as a strategy for teaching a cuer how to speak or a speaker how to cue. Given that deaf native English cuers recall cued linguistic values without demonstrating access to how speech is articulated (i.e., air is exhaled and channeled inside the mouth/nose in the presence or absence of voice) or to what it articulates (e.g., acoustic allophones), it appears that cueing and speaking are processes that employ autonomously functioning articulatory systems. That is, even for those sighted deaf or hard of hearing people who choose to make use of residual hearing with or without assistive devices, cueing provides the necessary and linguistically relevant information visually, whether or not it is accompanied by speech production or speech products.

Traditional definitions and most of the relevant research have associated Cued Speech with speech, speechreading, and/or sound. The current study finds evidence suggesting that such definitions and research do not accurately refer to the salient articulatory information sent to, sent by, sent among, or perceived by deaf native cuers. In fact, because findings of the current study indicate that the visible articulatory products of cueing function autonomously and do not entail the visible and acoustic articulatory products of speaking, it would be inaccurate to describe deaf native cuers as responding to spoken English rendered via Cued Speech; because speech is not conveyed, it is not spoken English that is conveyed when English is cued.

This study also finds that deaf native English cuers are able to consistently identify phonological, lexical, and syntactic aspects of English. Perhaps the only way to reconcile the current study’s several findings is to conclude that deaf native cuers respond to a cued version of English rather than to a spoken version of English that is rendered via Cued Speech. Even if hearing cuers think that they are cueing speech, the current study suggests that it is language (i.e., linguistic structure) rather than speech that deaf cuers perceive and process when receiving cued messages. Thus, just as the terms spoken language and spoken English denote both a mode of communication (spoken) and that which is communicated (language/English), findings of the current study suggest that the terms cued language and cued English accurately represent the articulatory (cued) and linguistic (language/English) products of cueing.

Findings of this study provide evidence that ‘mode in, mode out’ consistency might be important to the process of human linguistic development. Specifically, these findings extend the idea of natural language acquisition beyond the fact that early language exposure in any wholly accessible mode seems critical for language acquisition. It seems that deaf individuals who are provided input via a cued language and produce output via a spoken language are performing an exercise in changing linguistic form (i.e., mode), a process also known as transliteration.
Transliteration is not a stage in natural language development. Thus, for those deaf children who receive, for example, cued French toward developing their literacy in written French, even the simple expectation that they cue French expressively could make a positive difference in the natural acquisition of phonological representations (see, for example, Leybaert, 1998). Findings of the current study have implications for those who make decisions regarding the rationale and approach for children who cue English, French, or other languages.

In the current study, the deaf participants, like those in Nicholls (1979) and Nicholls and Ling (1982), do not demonstrate any significant performance difference when presented with acoustic-plus-visual input or with visual-only input. Rather, the visual-only input provides complete linguistic information at the phonemic, lexical, and syntactic levels. As it relates to the current study, this visual-only input is the product of a set of features distinct from the set that generates the products of speech. Thus, evidence of an autonomous and completely visual articulatory system serves to counter the sound-based characterization of cueing that was once assumed.

The role of the cueing hand in the perception of cued messages has traditionally been characterized as augmentative to speech and supplemental to spoken language. Researchers and others seem to have assumed a priori that findings about Cued Speech relate to the effectiveness of augmenting speech via hand cues (Nicholls, 1979; Nicholls & Ling, 1982; Périer, 1987; Leybaert & Alegría, 1990). Certainly, prevailing descriptions and discussions of Cued Speech accept the speech-supplement view as fact. Nevertheless, apart from the current study, no study has been designed specifically to determine whether the requisite articulatory features of cueing function autonomously of, rather than augmentatively to, those required for speaking. In other words, no study has tested what most, if not all, seem to have assumed. That assumption has thereby implicitly functioned as a null hypothesis in those studies. Thus, it is significant that findings of the current study 1) are the only evidence to date resulting from testing the assumption and 2) provide compelling evidence counter to the integrity of that assumption.

From a theoretical perspective, this counterevidence has interesting implications. For example, as it is applied in the literature, the term ‘supplement’ suggests that the tripartite features of speech are part of and salient to the production, reception, perception, and/or processing of cued messages. Findings of the current study indicate otherwise. In fact, implications of the current findings suggest a paradigm shift in terms of how cueing is characterized. Instead of supplementing the voice, manner, and place features of spoken languages, it appears that the hand shapes, hand placements, and mouth configurations of Cued Speech function autonomously as some of the features that define cued languages.

From a practical perspective, the current findings suggest that, for parents of deaf children and the professionals who work with them, the decision to cue a particular language is not limited by the hearing acuity of the receiver. Nor is the decision constrained by the receiver’s ability to access, perceive, process, or produce the features of speech. Nor is the decision detrimental to the use or development of any of these abilities.

In light of the current findings, it appears that, both individually and collectively, voice, manner, and place of articulation for speech are not systematically present in the articulation of linguistic information via cueing. This might explain why deaf cuers who wish to speak go through the same speech training exercises as oral or signing deaf youth might, at least to a point. If speech is a goal, one advantage to the deaf cuer might be that the linguistic segmentation provided by cueing parallels that provided by speaking. That segmentation can subsequently serve the speech therapist as a relevant point of reference for associating linguistic knowledge with speech production. Essentially, because the deaf cuer has already acquired linguistic segments via exposure to the visible symbols of a particular cued language (e.g., English), the speech therapist can reference those segments when teaching the deaf cuer how to produce the acoustic symbols of the counterpart spoken language (e.g., English). This is one reason that acquisition of a cued language might support oral/aural and even auditory/verbal goals.

Findings of this study also serve as strong evidence that sufficient exposure to a cued language provides for acquisition of the full scope of linguistic structures, beginning at the phonological level, and does so completely in the visual mode (cf. Metzger, 1994; Hauser & Klossner, this issue). Thus, providing appropriate exposure to cued English, for example, appears to support the goals of those interested in acquisition of English but without necessitating use of or dependence on speech and/or audition.

In a practical sense, consistently exposing a sighted deaf individual to a cued language in natural interaction seems to provide for the development of native or native-like competence in a given consonant-vowel language, including the language of hearing family or friends as well as foreign languages studied in school. Findings of the current study suggest that cueing does this without the need for speech production, speech reception, or knowledge of either. Thus, as a completely visible articulatory process, cueing a language supports the goals of those interested in visual language and written literacy development in monolingual and multilingual contexts.

The distinction between language modality and language structure prompts the need to re-examine discipline-specific application of research findings. This distinction suggests, for example, that “inner speech” as a construct is not limited to traditional notions of “speech” or “speech perception.” Recognizing that deaf native cuers can internalize through an autonomous visual articulatory system the phonological, morphological, and syntactic aspects of traditionally spoken languages has implications for a variety of disciplines, including psychology (e.g., language perception, neurofunctional localization of the brain), linguistics (e.g., language acquisition and the development of literacy), and education (e.g., bilingual and multilingual programming). Related issues in each of these disciplines are ripe areas for further research.

References

Alegría, J., Dejean, K., Capouillez, J.-M., & Leybaert, J. (1990). Role played by cued speech in the identification of written words encountered for the first time by deaf children. Cued Speech Journal, 5, 4-9.

Alegría, J., Lechat, J., & Leybaert, J. (1988). Role of Cued Speech in the identification of words by the deaf child: Theory and preliminary data. Glossa, 9, 36-44.

Beaupré, W. (1983). Basic Cued Speech Proficiency Rating. University of Rhode Island: Kingston.

Beaupré, W. (1984). Gaining Cued Speech proficiency: A manual for parents, teachers, and clinicians. Washington, D.C.: Gallaudet College.

Bellugi, U., Fischer, S., & Newkirk, D. (1979). The rate of speaking and signing. In E. Klima & U. Bellugi (Eds.), The signs of language (pp. 181-194). Cambridge, MA: Harvard University Press.

Caldwell, B. (1994). Why Johnny can’t read. Cued Speech Journal, 5, 55-64.

Channon, R. (2000). Temporal characteristics in sign and speech. Paper presented at the Texas Linguistics Society Conference 2000, Austin, TX.

Chilson, R. (1985). Effects of Cued Speech instruction on speechreading skills. Cued Speech Annual, 1, 60-68.

Clark, B., & Ling, D. (1976). The effects of using Cued Speech: A follow-up study. Volta Review, 78, 23-34.

Cornett, R. O. (1967). Cued Speech. American Annals of the Deaf, 112, 3-13.

Cornett, R. O. (1972). Effects of Cued Speech upon speechreading. In G. Fant (Ed.), International Symposium on Speech Communication Ability and Profound Deafness (pp. 223-230). Washington, D.C.: A.G. Bell Association for the Deaf.

Cornett, R.O. (1973). Comments on the Nash case study. Sign Language Studies. 3, 92-98.

Cornett, R. O., & Daisey, M. E. (2001). Cued Speech resource book for parents of deaf children. Raleigh, NC: National Cued Speech Association.

Daisey, M. E. (1987). Language development through communication with Cued Speech. Cued Speech Annual, 3, 17-31.

Davidson, M., Newport, E., & Supalla, S. (1996). The acquisition of natural and unnatural linguistic devices: Aspect and number marking in MCE children. Paper presented at the Fifth International Conference on Theoretical Issues in Sign Language Research, Montreal, Canada.

Fleetwood, E., & Metzger, M. (1991). ASL and cued English: A contrastive analysis. Paper presented at Deaf Awareness Conference, Dothan, AL.

Fleetwood, E., & Metzger, M. (1998). Cued language structure: An analysis of cued American English based on linguistic principles. Silver Spring, MD: Calliope Press.

Grote, K. (2000). The effect of language modality on the architecture of the mental lexicon. Paper presented at the Texas Linguistics Society Conference 2000, Austin, TX.

Hage, C., Alegría, J., & Périer, O. (1991). Cued Speech and language acquisition: The case of grammatical gender morpho-phonology. In D. Martin (Ed.), Advances in cognition, education, and deafness (pp. 395-399). Washington, D.C.: Gallaudet University Press.

Hildebrandt, U., & Corina, D. (2000). Phonological similarity in American Sign Language. Paper presented at the Texas Linguistics Society Conference 2000, Austin, TX.

Kaplan, H. (1974). The effects of Cued Speech on the speechreading ability of the deaf. Dissertation Abstracts International, 36(2), 645B.

Kipila, E. (1985). Analysis of an oral language sample from a prelingually deaf child’s Cued Speech: A case study. Cued Speech Annual, 1, 46-59.

Kluwin, T. (1981). The grammaticality of manual representations of English in classroom settings. American Annals of the Deaf, 126(4), 417-421.

LaSasso, C., & Metzger, M. (1998). An alternate route for preparing deaf children for bi-bi programs: The home language as L1 and Cued Speech for conveying traditionally spoken languages. Journal of Deaf Studies and Deaf Education, 3(4), 265-289.

Leybaert, J. (1993). Reading in the deaf: The roles of phonological codes. In M. Marschark & D. Clark (Eds.), Psychological perspectives on deafness (pp. 269-309). Hillsdale, NJ: Erlbaum.

Leybaert, J. (1998). Effects of phonetically augmented lipspeech on the development of phonological representations in deaf children. In M. Marschark & M. Clark (Eds.), Psychological perspectives on deafness (vol. 2) (pp. 103-130). Mahwah, NJ: Erlbaum.

Leybaert, J., & Alegría, J. (1990). Cued Speech and the acquisition of reading by deaf children. Cued Speech Journal, 4, 24-38.

Leybaert, J., & Charlier, B. (1996). Visual speech in the head: The effect of Cued Speech on rhyming, remembering, and spelling. Journal of Deaf Studies and Deaf Education, 1(4), 234-248.

Leybaert, J., Alegría, J., Hage, C., & Charlier, B. (1998). The effect of exposure to phonetically augmented lipspeech in the prelingual deaf. In R. Campbell, B. Dodd, & D. Burnham (Eds.), Hearing by eye II: Advances in the psychology of speechreading and auditory-visual speech (pp. 283-301). East Sussex, UK: Psychology Press.

Ling, D., & Clark, D. (1975). Cued Speech: An evaluative study. American Annals of the Deaf, 120, 480-488.

Lucas, C., & Valli, C. (1992). Language contact in the American Deaf community. San Diego, CA: Academic Press.

Lucas, C., Bayley, R., Valli, C., Rose, M., Dudis, P., Schatz, S., & Sanheim, L. (2001). Sociolinguistic variation in American Sign Language: Sociolinguistics in Deaf Communities, vol. 7. Washington, D.C.: Gallaudet University Press.

Marmor, G., & Petitto, L. (1979). Simultaneous communication in the classroom: How well is English grammar represented? Sign Language Studies, 23, 99-136.

Massaro, D. (1987). Speech perception by ear and eye: A paradigm for psychological inquiry. Hillsdale, NJ: Erlbaum.

Mathur, G. (2000). Modality effects in the verb agreement morphology of signed languages. Paper presented at the Texas Linguistics Society Conference 2000, Austin, TX.

Maxwell, M. (1983). Language acquisition in a deaf child of deaf parents: Speech, signs, variations, and print variation. In K. Nelson (Ed.), Children’s language (vol. 4) (pp. 283-313). Hillsdale, NJ: Erlbaum.

Maxwell, M. (1987). The acquisition of English bound morphemes in sign form. Sign Language Studies, 57, 323-352.

McBurney, S. (2000). A typological study of pronominal reference. Paper presented at the Texas Linguistics Society Conference 2000, Austin, TX.

McGurk, H., & MacDonald, J. (1976). Hearing lips and seeing voices. Nature, 264, 746-748.

Metzger, M. (1994). Involvement strategies in cued English discourse: Soundless expressive phonology. Unpublished manuscript. Georgetown University, Washington, DC.

Mohay, H. (1983). The effects of Cued Speech on the language development of three deaf children. Sign Language Studies, 38, 25-49.

Mosley, M., Williams-Scott, B., & Anthony, C. (1991). Language expressed through Cued Speech: A pre-school case study. Poster session presented at the American Speech-Language-Hearing Association, Atlanta, GA.

Nash, J. (1973). Cues or signs: A case study in language acquisition. Sign Language Studies, 3, 80-91.

National Cued Speech Association Board of Directors. (1994). Terminology guidelines for Cued Speech materials. C. Boggs (Ed.), Cued Speech Journal, 5, 69-70.

Neef, N. (1979). An evaluation of Cued Speech training on lipreading performance in deaf persons. Unpublished doctoral dissertation, Western Michigan University, Kalamazoo.

Nicholls, G. (1979). Cued Speech and the reception of spoken language. Unpublished master's thesis, McGill University, Montreal, Canada.

Nicholls, G., & Ling, D. (1982). Cued Speech and the reception of spoken language. Journal of Speech and Hearing Research, 25, 262-269.

Nicholls-Musgrove, G. (1985). Discourse comprehension by hearing-impaired children who use Cued Speech. Unpublished doctoral dissertation, McGill University, Montreal, Canada.

Perigoe, C., & LeBlanc, B. (1994). Cued Speech and the Ling speech model: Building blocks for intelligible speech. Cued Speech Journal, 5, 30-36.

Périer, O. (1987). The psycholinguistic integration of Signed French and Cued Speech: How can speech components be triggered? Paper presented at the Symposium on Oral Skills and Total Communication, Gent, Belgium.

Périer, O., Charlier, B., Hage, C., & Alegría, J. (1987). Evaluation of the effects of prolonged Cued Speech practice upon the reception of spoken language. In I. G. Taylor (Ed.), The education of the deaf: Current perspectives (vol. 1) (pp. 616-628). Beckenham, Kent, UK: Croom Helm.

Pfau, R. (2000). Accessing nonmanual features in phonological readjustment: Sentential negation in German Sign Language. Paper presented at the Texas Linguistics Society Conference 2000, Austin, TX.

Quenin, C. (1992). Tracking of connected discourse by deaf college students who use Cued Speech. Unpublished doctoral dissertation, Pennsylvania State University, University Park.

Ryalls, J., Auger, D., & Hage, C. (1994). An acoustic study of the speech skills of profoundly hearing-impaired children who use Cued Speech. Cued Speech Journal, 5, 8-18.

Schick, B., & Moeller, M. (1992). What is learnable in manually-coded English sign systems? Applied Psycholinguistics, 13, 313-340.

Schwartz, J., Robert-Ribes, J., & Escudier, P. (1998). Ten years after Summerfield: A taxonomy of models for audio-visual fusion in speech perception. In R. Campbell, B. Dodd, & D. Burnham (Eds.), Hearing by eye II: Advances in the psychology of speechreading and auditory-visual speech (pp. 85-108). East Sussex, UK: Psychology Press.

Sneed, N. (1972). The effects of training in Cued Speech on syllable lipreading scores of normally hearing subjects. In Cued Speech parent training and follow-up program (pp. 38-44). Project report to the Department of Health, Education, and Welfare, U.S. Office of Education, Washington, D.C.

Stack, K. (1996). The development of a pronominal system in the absence of a natural target language. Paper presented at the Fifth International Conference on Theoretical Issues in Sign Language Research, Quebec, Canada.

Summerfield, A. Q. (1987). Some preliminaries to a comprehensive account of audio-visual speech perception. In B. Dodd & R. Campbell (Eds.), Hearing by eye: The psychology of lipreading (pp. 3-51). Hove, UK: Erlbaum.

Supalla, S. (1990). Segmentation of manually-coded English: Problems in the mapping of English in the visual/gestural mode. Unpublished doctoral dissertation, University of Illinois, Urbana-Champaign.

Supalla, S. (1991). Manually-coded English: The modality question in signed language development. In P. Siple & S. Fischer (Eds.), Theoretical issues in sign language research (pp. 85-109). Chicago: University of Chicago Press.

Wandell, J. (1989). Use of internal speech in reading by hearing and hearing-impaired students in oral, total communication, and Cued Speech programs. Unpublished doctoral dissertation, Columbia University, New York.

Wood, S., & Wilbur, R. (2000). When is a modality effect not a modality effect? Aspectual marking in signed and spoken languages. Paper presented at the Texas Linguistics Society Conference 2000, Austin, TX.
