Papers by Freya Watkins

Viewing angle is an adverse condition in British Sign Language comprehension, 2024
The impact of adverse listening conditions on spoken language perception is well established, but the role of suboptimal viewing conditions in signed language processing is less clear. Viewing angle, i.e. the physical orientation of a perceiver relative to a signer, varies in many everyday deaf community settings for L1 signers and may impact comprehension. Further, processing from various viewing angles may be more difficult for late L2 learners of a signed language, who encounter less variation in sign input while learning. Using a semantic decision task in a distance priming paradigm, we show that British Sign Language signers are slower and less accurate to comprehend signs shown from side viewing angles, with L2 learners in particular making disproportionately more errors when viewing signs from side angles. We also investigated how individual differences in mental rotation ability modulate processing of signs from different angles. Speed and accuracy on the BSL task correlated with mental rotation ability, suggesting that signers may mentally represent signs from a frontal view and use mental rotation to process signs from other viewing angles. Our results extend the literature on viewpoint specificity in visual recognition to linguistic stimuli. The data suggest that L2 signed language learners should maximise their exposure to diverse signed language input, both in terms of viewing angle and other challenging viewing conditions, to aid comprehension.
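As a rough illustration of the kind of analysis this design invites, the sketch below fits a mixed-effects model of response time by viewing angle, signer group (L1 vs. L2) and mental rotation ability on simulated data. It is a sketch only: the column names, effect sizes and model structure are assumptions for illustration, not the paper's actual pipeline.

```python
# Illustrative sketch (not the authors' analysis code): response time modelled by
# viewing angle, signer group and mental rotation ability, with random intercepts
# per participant. All names and effect sizes below are invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_participants, n_trials = 40, 60
rows = []
for p in range(n_participants):
    group = "L1" if p < n_participants // 2 else "L2"
    mental_rotation = rng.normal(0, 1)      # standardised mental rotation score
    subj_intercept = rng.normal(0, 60)      # per-participant baseline shift (ms)
    for _ in range(n_trials):
        angle = rng.choice(["front", "side"])
        rt = (900 + subj_intercept
              + (80 if angle == "side" else 0)                       # side views slower
              + (40 if group == "L2" else 0)                         # L2 learners slower
              + (30 if (angle == "side" and group == "L2") else 0)   # extra L2 side cost
              - 25 * mental_rotation                                 # better rotators faster
              + rng.normal(0, 100))
        rows.append(dict(participant=p, group=group, angle=angle,
                         mental_rotation=mental_rotation, rt=rt))

df = pd.DataFrame(rows)

# Fixed effects for angle, group, their interaction, and mental rotation ability;
# random intercepts grouped by participant.
model = smf.mixedlm("rt ~ angle * group + mental_rotation",
                    data=df, groups=df["participant"]).fit()
print(model.summary())
```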

Cognition, 2017
Unlike the phonological loop in spoken language monitoring, sign language users' own production provides mostly proprioceptive feedback and only minimal visual feedback. Here we investigate whether sign production influences sign comprehension by exploiting hand dominance in a picture-sign matching task performed by left-handed and right-handed signers. Should all signers perform better with right-handed input, this would suggest that a frequency effect in sign perception drives comprehension. However, if signers perform better with congruent-handed input, this would implicate the production system's role in comprehension. We found evidence for both hypotheses, with variation dependent on sign type. All signers responded faster to right-handed input for phonologically simple, one-handed signs. However, left-handed signers preferred congruent-handed input for phonologically complex, two-handed asymmetrical signs. These results are in line with a weak version of the motor theory of speech perception, in which the motor system is engaged only when comprehending complex input.
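As a minimal sketch of the logic of this design (hypothetical coding, not the study's materials or code), the snippet below makes the two competing predictions explicit per trial: a frequency account favours right-handed stimuli for every perceiver, while a production account favours stimuli whose handedness matches the perceiver's own.

```python
# Hypothetical per-trial coding of the two accounts contrasted in the abstract.
import pandas as pd

trials = pd.DataFrame({
    "perceiver_hand": ["left", "left", "right", "right"],
    "stimulus_hand":  ["left", "right", "left", "right"],
    "sign_type":      ["two_handed_asymmetrical", "one_handed",
                       "two_handed_asymmetrical", "one_handed"],
})

# Congruence: does the signer in the video share the perceiver's dominant hand?
trials["congruent"] = trials["perceiver_hand"] == trials["stimulus_hand"]

# Per-trial predictions: frequency account vs. production (congruence) account.
trials["frequency_account_faster"] = trials["stimulus_hand"] == "right"
trials["production_account_faster"] = trials["congruent"]

print(trials)
```

The abstract's finding maps onto this coding as follows: the frequency prediction held for simple one-handed signs, while the congruence prediction held for complex two-handed asymmetrical signs in left-handed signers.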
Conference Presentations by Freya Watkins

While the sociolinguistics of gender and sexuality have been heavily studied for spoken languages, very little work has dealt with questions specifically related to sign languages (SLs). Why are the majority of SL learners and interpreters women or queer men? Does the expressive physicality required by SLs present a challenge to hegemonic masculinities, as Deaf academic Ladd (2003) theorises? Do the shared experiences of oppression in the Deaf and LGBT+ communities explain the relatively large queer (and out) Deaf population? Historically, the intersection of a Deaf and queer identity has led to in-group linguistic developments, such as the emergence of Gay Sign Variation (GSV). Sign linguist Michaels (2010) has studied differences between British SL and GSV, comparing the latter to spoken Polari in British gay subculture. Queer slang presents unique issues in interpreting settings, especially if the interpreter does not identify as LGBT+.
SLs can also be studied with regard to how they express concepts of gender. Due to their spatial referencing, it can be argued that pronominal systems in SLs break the gender binary and use gender-neutral pronouns. However, examples from kinship terminology in American SL, as well as name signs, verb marking and pronouns in some Asian sign languages, suggest that the gender binary can also be reinforced grammatically. While there is borrowing from spoken languages and hearing culture, linguistic determinism theories would argue that SL users perceive the world differently, predominantly visually. This may have implications for how gender expression and identity are understood among the Deaf. I argue that studying the language and identity of Deaf transgender people, both binary and non-binary, is one way to begin exploring this issue.

The historical development of national sign languages, and of the language families that have emerged from them, presents a surprisingly different family tree of language relatedness from that of spoken languages. Although German and English, as West Germanic languages, are relatively closely related, the developmental paths of BSL and DGS have very little in common. Another widespread myth is the assumed one-to-one relationship between spoken and signed languages: American Sign Language developed in part from Old French Sign Language and is mutually unintelligible with British Sign Language. This language tutorial addresses questions of sign language typology using examples from British and German Sign Language. It discusses historical influences of related sign languages on specific linguistic phenomena in the two languages, covering differences in phonology, syntax, morphology and regional variation, as well as the various influences of the surrounding spoken language. A further interesting question is to what extent the widespread iconicity in sign languages can lead to cross-linguistic similarities.
Poster Presentations by Freya Watkins

For hearing speech-sign bilinguals who know both a signed and a spoken language, distinct sensory-motoric modalities allow for simultaneous perception and production of two languages. Because of this ability and their experience with comprehending and producing two languages together, multimodal bilinguals may be relatively better at integrating information from multiple modalities, or from multiple languages, at the same time. There is some evidence for a benefit of dual-language, dual-modality input for semantically based decisions ('is it edible?'), such that responses are speeded when words are presented in both English and ASL compared to English or ASL only (Emmorey et al., 2012). The present study investigates whether dual input similarly affects phonological processing. Crucially, while semantic processing in the two modalities points to a shared meaning, there is no overlap between the phonological systems of spoken and sign languages. Investigating phonological processing across modality conditions will therefore develop our understanding of how speech-sign bilinguals store and access English words and British Sign Language (BSL) signs, namely the degree to which phonological processing overlaps or is separate.
Methods: Thirty English-BSL bilinguals (15 native hearing signers and 15 late hearing signers who began signing after the age of 16 and have reached a minimum of Level 2 in BSL) will make a phonological decision to video stimuli (produced by a native hearing BSL signer) in sign-only (n=50), sign-with-speech (n=50), speech-only (n=50), and speech-with-sign (n=50) conditions. Testing is currently ongoing. We make two separate comparisons: (1) sign-only versus sign-with-speech (dual input), in which participants decide whether a sign has a straight or curved handshape, following Thompson et al. (2010); and (2) speech-only versus speech-with-sign (dual input), in which participants decide whether a given sound (e.g., /b/) is present.
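As an illustration of the design described above, the sketch below builds a randomised 200-trial list with the four presentation conditions and the decision task each implies. The function, names and structure are assumptions for illustration, not the actual experiment script.

```python
# Illustrative sketch (assumptions, not the experiment script): a shuffled trial
# list for the four presentation conditions, 50 trials each.
import random

CONDITIONS = {
    # condition: (language of the decision, decision task)
    "sign_only":        ("BSL",     "handshape: straight vs. curved"),
    "sign_with_speech": ("BSL",     "handshape: straight vs. curved"),
    "speech_only":      ("English", "phoneme monitoring, e.g. is /b/ present?"),
    "speech_with_sign": ("English", "phoneme monitoring, e.g. is /b/ present?"),
}
TRIALS_PER_CONDITION = 50

def build_trial_list(seed: int = 0) -> list:
    """Return a randomised list of 200 trials, 50 per condition."""
    trials = [
        {"condition": cond, "decision_language": lang, "task": task, "item": i}
        for cond, (lang, task) in CONDITIONS.items()
        for i in range(TRIALS_PER_CONDITION)
    ]
    random.Random(seed).shuffle(trials)
    return trials

trials = build_trial_list()
# Comparison (1) pairs sign_only with sign_with_speech; comparison (2) pairs
# speech_only with speech_with_sign.
print(len(trials), trials[0])
```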
Implications: An effect of phonological processing in dual input conditions would suggest that signers automatically activate both BSL and English phonology despite making decisions about BSL only, or English only. Further, a facilitation effect (faster response latencies in dual input conditions) would indicate tightly linked systems for phonological processing, while inhibition would suggest more separate systems. An effect of processing in only one dual input condition (i.e., dual input for English sound decisions or BSL handshape decisions) would provide evidence for the automaticity of phonology in either the auditory or visual modality.
Across language groups, we predict effects of language experience and linguistic environment on phonological processing. Late hearing BSL signers will have grown up almost exclusively with speech-only input; the converse is true for the native BSL signing group. For these bilinguals, signing and speaking simultaneously is common both in production (e.g., when interacting with a mixed group of deaf signers and hearing non-signers) and in comprehension (input to multimodal bilingual children is frequently from parents who sign and speak simultaneously; Petitto et al., 2001). Thus there may be differences in the way signs and words are stored and accessed for native and late speech-sign bilinguals, namely more tightly linked phonological systems for native hearing BSL signers compared to late hearing second language bilinguals.
Dissertations by Freya Watkins

Hearing speech-sign bilinguals know languages in distinct sensory-motoric modalities. This allows simultaneous perception and production, entailing potential benefits for multimodal integration. Existing evidence shows dual-input processing benefits for semantically based decisions. The present study investigates whether this holds for the non-overlapping phonological systems in these bilinguals. Thirteen fluent English-BSL bilinguals, 13 intermediate signers and 13 monolingual English controls made phonological decisions to audio/video stimuli. In Experiment 1, participants made BSL handshape decisions (sign-only vs. sign-with-speech). Intermediate and fluent signers were significantly more accurate with dual input, but reaction times did not differ. In Experiment 2, participants monitored English phonemes (speech-only vs. speech-with-sign). Here both signing groups responded faster with dual input, but only fluent signers performed significantly more accurately. The results suggest that intermediate signers performed a speed-accuracy trade-off in both tasks. Overall, sign experience seemingly leads to phonological systems becoming linked, such that signers even profit from their weaker L2 when making English decisions.
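One conventional way to probe a speed-accuracy trade-off like the one inferred here is an inverse efficiency score (mean correct response time divided by proportion correct); the sketch below computes it on invented data. The dissertation does not state that it uses this measure, so this is an illustration of the concept only.

```python
# Illustrative sketch: inverse efficiency scores (mean correct RT / proportion
# correct) per group and condition. Higher scores flag speed bought at the
# expense of accuracy. All data below are invented.
import pandas as pd

data = pd.DataFrame({
    "group":     ["fluent"] * 4 + ["intermediate"] * 4,
    "condition": ["sign_only", "sign_only", "dual_input", "dual_input"] * 2,
    "rt":        [810, 790, 805, 795, 760, 750, 770, 765],   # ms, invented
    "correct":   [True, True, True, True, True, False, True, True],
})

# Mean RT on correct trials only, and accuracy, per group/condition.
correct_rt = data[data["correct"]].groupby(["group", "condition"])["rt"].mean()
accuracy = data.groupby(["group", "condition"])["correct"].mean()

inverse_efficiency = correct_rt / accuracy
print(inverse_efficiency)
```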

The “bilingual advantage” in non-linguistic tasks requiring conflict resolution and executive control is well documented (Bialystok et al., 1999). Does this also apply to the unique population of bilinguals who are proficient in both a spoken and a sign language (“bimodal bilinguals”)? Initial studies on visual attention tasks in native English-ASL bilinguals suggest such an advantage is absent for hearing signers, who perform like monolinguals (Emmorey, 2008). This is explained via bimodal bilinguals’ unique ability to code-blend across modalities, producing elements of speech and sign at the same time.
However, building on recent findings of an acquirable auditory attention advantage in late L2 spoken-language bilinguals (Bak & Sorace, submitted), a battery of auditory and visual attention tests was given to a group of late L2 English-BSL bimodal bilinguals. The non-overlapping modalities of these languages should not entail the development of a sophisticated executive control system, hence the prediction that they would show no bilingual advantage on attentional tests. However, initial studies on native and L2 bimodal bilinguals suggest that acquiring visuospatial aspects of sign language grammar can lead to benefits in related areas of cognition, e.g. visuospatial processing (Keehner & Gathercole, 2007) and image generation (Emmorey et al., 1993).
One of the attentional-switching tasks coincidentally engaged visuospatial aspects of cognition required in BSL, explaining a significant “signing advantage” for the bimodal bilingual group. This study provides further evidence for modality constraints on bilingual attentional benefits, and another example of a signing advantage, one that can be acquired as a late L2 learner.
Essays by Freya Watkins
Newspaper Articles by Freya Watkins