Developing an fMRI localizer for German Sign Language (DGS)

Abstract

Introduction: Functional localizers in fMRI enable the precise, participant-specific identification of voxels that respond to a particular cognitive function or task of interest (e.g., Kanwisher et al., 1997; Saxe et al., 2006) and have become increasingly popular in the cognitive neuroscience of language (Fedorenko et al., 2010; Nieto-Castañon & Fedorenko, 2012; Fedorenko et al., 2016). Yet, there is currently no functional localizer for any sign language, presumably because existing language localizers rely on written or auditory stimuli (Fedorenko et al., 2010, 2012, 2015; Scott et al., 2017), modalities that sign languages do not use. Moreover, pseudoword or jabberwocky conditions are often used in localizers (Fedorenko et al., 2010, 2012, 2013), but creating analogous pseudosigns is difficult due to the larger iconic potential of the visual-kinesthetic modality (Emmorey et al., 2011). Against this background, we document here our ongoing effort to develop the first language localizer for any sign language, using German Sign Language (DGS) as an example.

Methods: We adapt the approach of Malik-Moraleda et al. (2022), who used degraded versions of auditory stimuli to create a contrast between comprehensible and incomprehensible sentences when studying spoken languages that lack a written form, to the visual-kinesthetic modality of sign languages. Accordingly, we use video clips of complex sentences in DGS from Trettenbrein, Maran, et al. (2024) and determine the degree of visual degradation necessary to create the desired contrast between comprehensible and incomprehensible DGS. Research on visual degradation of sign language stimuli is scarce. One exception is Pavel et al. (1987), who systematically added increasing levels of Gaussian noise to their stimuli; however, nearly one third of their stimulus set remained intelligible to deaf participants even at the highest degradation level. Here, we adopt a novel approach for degrading stimuli by increasing the size of pixels in a video clip: all frames in a video are first downsampled and then upsampled by a chosen pixelation factor to create an output video with larger individual pixels than in the input (see the first sketch below). The result is a video that appears “blocky,” obscuring visual details afforded by a high pixel count, such as facial expression, handshape, and, to some extent, location, all of which carry linguistically relevant information in sign languages. To determine the degree of degradation required for incomprehensibility, we are running an online gating study in which deaf native signers of DGS view videos with different degrees of degradation and rate their intelligibility on a 5-point Likert scale (the second sketch below illustrates how such ratings could inform the choice of degradation level).

Outlook: The validation of our materials and the collection of pilot fMRI data are currently ongoing. In addition, we are working on creating and including a further condition that uses point-light displays derived from motion tracking of the signers in our stimulus clips to reduce comprehensibility further while still triggering a robust response of the language network. This approach should generalize to other sign languages, and we intend to make all our scripts and the localizer task itself publicly available to the community.
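
For illustration, here is a minimal sketch of the pixelation procedure described in the Methods, assuming Python with OpenCV; the function name, file names, and pixelation factor are placeholders, not our actual processing pipeline.

```python
# Minimal sketch of pixelation-based degradation: downsample each frame,
# then upsample it back, so the output video has larger effective pixels.
# Assumes OpenCV (cv2); paths and factor are illustrative placeholders.
import cv2

def pixelate_video(in_path: str, out_path: str, factor: int = 16) -> None:
    """Downsample then upsample every frame by `factor`, producing a
    'blocky' video that obscures fine detail (face, handshape)."""
    cap = cv2.VideoCapture(in_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(out_path,
                             cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Downsample: each `factor` x `factor` block collapses to one pixel.
        small = cv2.resize(frame, (max(1, w // factor), max(1, h // factor)),
                           interpolation=cv2.INTER_LINEAR)
        # Upsample back to the original size with nearest-neighbour
        # interpolation so the blocks stay sharp-edged ("blocky").
        blocky = cv2.resize(small, (w, h), interpolation=cv2.INTER_NEAREST)
        writer.write(blocky)
    cap.release()
    writer.release()

# Example usage: a higher factor yields stronger degradation.
# pixelate_video("dgs_sentence.mp4", "dgs_sentence_pixelated.mp4", factor=24)
```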
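
A second minimal sketch shows one way the gating-study ratings could be used to select a degradation level; the data layout, rating values, and cutoff are assumptions for illustration, not our actual analysis.

```python
# Sketch: pick the weakest pixelation factor whose pooled mean
# intelligibility rating falls at or below an assumed cutoff.
from statistics import mean

# ratings[factor] = 1-5 intelligibility ratings pooled over participants
# and clips (illustrative numbers only).
ratings = {
    8:  [5, 4, 5, 4, 5],
    16: [3, 4, 3, 2, 3],
    24: [2, 1, 2, 1, 1],
    32: [1, 1, 1, 1, 1],
}

CRITERION = 1.5  # assumed cutoff: mean rating <= this counts as incomprehensible

# Smallest factor that still renders the clips incomprehensible
# (raises ValueError if no level meets the criterion).
chosen = min(f for f, r in ratings.items() if mean(r) <= CRITERION)
print(f"Selected pixelation factor: {chosen}")
```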

Date
Location
Washington, DC (USA)