GeSpIn is an interdisciplinary event for researchers working on the interaction between speech and visual communicative signals, such as articulatory, manual, and bodily gestures co-occurring with speech. At GeSpIn 2023 we hope to bring together researchers studying visual signals in combination with vocalization or speech, from multidisciplinary perspectives, in order to exchange ideas and present the cutting edge of their fields. This 8th edition of GeSpIn will be held in Nijmegen, the Netherlands, and will focus on the theme of “Broadening Perspectives, Integrating Views: Towards General Principles of Multimodal Signaling Systems”.
As such, we encourage researchers working on (multimodal) prosody, social anthropology, philosophy, (psycho)linguistics, psychology, cognitive science, neuroscience, human movement science, computer science (e.g., human-computer interaction), comparative biology, and more to submit their research to address topics such as:
Please note that all researchers and theoreticians/philosophers working on the interaction between gestural/visual and sound-producing cues (e.g., in terms of pragmatics, prosody, semantics) should feel invited, even if their particular study does not fit these topics exactly.
Organizers
Wim Pouw & James Trujillo (main contacts: firstname.lastname@example.org / email@example.com)
Hans Rutger Bosker
Lieke van Maastricht
Junior Committee
Ezgi Mamus (main contact: firstname.lastname@example.org)
Marlijn ter Bekke
Nuria Esteve Gibert (Open University of Catalonia)
Nuria Esteve Gibert investigates language acquisition in infancy and childhood, both in typical and atypical populations. She is particularly interested in how speech prosody interacts with body movements in the expression and comprehension of linguistic meaning. She takes an experimental approach, using behavioral tasks and eye-tracking methodologies.
Keynote abstract: Prosody as a key force in the development of the gesture-speech relationship
In this talk I will present evidence that prosodic abilities are intimately linked with how gesture and speech relate to each other in development. When the gesture-speech relation is examined from a temporal point of view, prosodic abilities determine infants’ and children’s use of adult-like coordination patterns. When the gesture-speech relation is examined from a functional point of view, prosody and gesture work together to compensate for other structural linguistic abilities that are impaired or still to be developed. This is especially the case in non-referential contexts, so much so that prosody and body movements are two sides of the same coin when speakers convey pragmatic meanings.
Yifei He (Philipps University Marburg)
Yifei He works as a postdoctoral researcher at the Translational Neuroimaging Lab, Philipps University Marburg. He is primarily interested in the underlying brain mechanisms of how gesture interacts and integrates with speech during online processing in both healthy and clinical populations. He also investigates sentence processing, speech perception, and action perception. He addresses these research questions mainly through EEG, fMRI, simultaneous EEG-fMRI, and behavioral methods.
Keynote abstract: Processing co-speech gestures: a neural perspective
In daily communication, visual input such as hand gestures plays an important role besides auditory speech. To date, the neural basis of how gesture integrates and interacts with speech during online comprehension remains elusive. In this talk, I will present evidence from EEG, fMRI, and simultaneous EEG-fMRI, showing the brain dynamics of how speech and gestures are integrated into coherent semantic representations. I will also present studies on how gestures impact the semantic processing of speech: (i) EEG data from a controlled experiment show that social aspects of gesture (body orientation) may directly influence the N400 amplitude during sentence processing; (ii) with a naturalistic paradigm, EEG and fMRI data consistently suggest that gestures may facilitate the neural processing of passages; at the single-word level, both lexical retrieval and semantic prediction also benefit from the presentation of co-speech gestures.
Susanne Fuchs (ZAS Berlin)
Susanne Fuchs investigates the biopsychosocial foundations of human interaction and focuses specifically on physiological processes, such as breathing and motor control.
Her main areas of interest are:
1) The interplay between motion, breathing and cognition,
2) Speech preparation and pauses,
3) Multimodality and iconicity,
4) Biological and social aspects shaping individual behaviour in speech production and perception.
She uses manifold techniques, among them OptiTrack motion capture, inductance plethysmography, electropalatography, and intraoral pressure sensors.
Keynote abstract: The role of bones, joints and muscles for speech and gesture in interaction
Did you ever wonder why we use the index finger for pointing gestures? In this talk, I would like to answer this question. Furthermore, I will propose to broaden the view on GEsture and SPeech in INteraction by integrating motor control and biomechanics into the discussion of gesture-speech links. Specifically, I would like to focus on three key aspects:
1) Body properties of speech articulators and limbs (e.g., mass, dynamics) and their impact on the coordination between gesture and speech;
2) Breathing as an integral part of body motions and the voice (gesture-speech physics);
3) The impact of pointing motions on body posture and head motion.
I believe that such an integrative view will be fruitful for understanding the foundations of speech and gesture and will have consequences for theoretical accounts.
Franz Goller (The University of Utah)
Franz Goller studies the behavioral physiology of sound production and song learning in birds. Current projects focus on 1) physical mechanisms of sound production; 2) the motor coordination between all motor systems involved in singing; 3) coordination between vocal and visual displays (i.e., multimodal signaling); 4) motor aspects of vocal development; 5) acoustic models and song syntax; 6) energetics of song production. The integrative aspects of these studies at the interface of neurobiology and behavior provide a unique opportunity to bridge neural control of a complex learned behavior to its evolutionary and ecological relevance in the natural environment.
Keynote abstract: Sweet songs and hot dances - mechanistic and evolutionary perspectives on multimodal signaling in non-human animals
Multimodal signaling is widespread among non-human animals and covers all functional aspects of communication behaviors. A diverse array of sensory modalities is used for communication, and I will highlight a few of the most remarkable multimodal display behaviors. After this overview, a few examples will be presented in which integration of auditory and visual communication signals has been studied from the perspective of neural control. Detailed understanding of the neuromuscular control strategies for producing two independent, challenging signals simultaneously allows inferences about the selection scenarios leading to complex displays. Comparative analyses provide additional insights into the evolution of multimodal signals, as will be shown with further examples. This review of studies of non-human animal multimodal signaling illustrates a remarkable diversity, which provides a feature landscape with highly extreme display characteristics. This landscape facilitates comparative assessment of human multimodal communication from the perspective of proximate and ultimate mechanisms.
Early-bird registration deadline: July 12 2023
Standard registration deadline: August 31 2023
Keynotes and oral talks
Keynotes are scheduled to last 60 minutes (45 minutes for the talk, 15 minutes for questions). The slot for each oral presentation is 20 minutes (15 minutes for the talk, 5 minutes for Q&A and switching speakers). Note that we will have to be very strict with these times, so please make sure your talk does not exceed the time limit.
Interpretation into International Sign
All talks will be interpreted into International Sign. To allow the interpreters time to prepare, we kindly ask you to send your (draft) slides to Ezgi Mamus (email@example.com) by August 25. Please also note that sign interpretation means you should not speak too fast, so that the interpreters can keep up; we kindly request you to take this into account when planning and timing your presentation. Likewise, if you play videos or refer to graphics on your slides, please wait briefly until the interpreters have caught up before discussing what is shown (otherwise deaf members of the audience will miss what you are referring to).
If you indicated in your registration that you consent to being video recorded, we will record your presentation and make it publicly available after the conference. Please bear in mind that this includes any content shown on your slides. If your slides contain personally sensitive information that should not be publicly shared (such as the identity of participants who have not consented to their data being distributed to third parties), then please let us know and opt out of the recording. The responsibility for agreeing to share the content of the presentation lies with the presenters.
Posters should be sized A0. The poster boards can be set up in either landscape or portrait orientation.
If you indicated in your registration that you are willing to provide a 5-minute video summary of your presentation, we encourage you to do so. You can use a file transfer service (e.g., Wetransfer, Surfdrive, Maildrop) to send your video to firstname.lastname@example.org. Please send your video presentation before September 1.
If you need financial assistance in order to attend the conference, you can apply for a travel grant. We have funding available for 4 travel grants of 500 euros each, one of which is a Donders Institute Travel Grant, kindly provided by the Donders Institute for Brain, Cognition, and Behaviour.
Applications should include:
- A motivation letter (max. 1 A4 page);
- A statement confirming that you have no other funding available.
Deadline: June 30 2023 (anywhere in the world).
You will be notified of the outcome before the early bird registration deadline of July 12. If demand exceeds availability, priority will be given to presenting authors (oral or poster).
GeSpIn 2023 will take place on the Radboud campus in Nijmegen. The oral and poster sessions will take place in the Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525 XD, Nijmegen.
You can use this map to find the main conference venue, as well as suggestions for hotels and restaurants in Nijmegen.
How to travel to Nijmegen/MPI
We kindly ask everyone to think of the environment and travel by train when possible. We of course understand this may not be possible for everyone, for various reasons. In that case we encourage you to donate to carbon offsetting sites. For more information: The Best Carbon Offset Programs for 2023.
You can use the 9292 website (also available as an app) to find information about local trains and buses, including timetables, fares, and real-time updates.
The nearest airports to Nijmegen are Schiphol and Eindhoven (Netherlands), and Dusseldorf and Weeze (Germany). Nijmegen is about 90 minutes by train from each of these airports.
Getting around in Nijmegen
If you are staying in the centre of Nijmegen, you can take the following buses to get to the Radboud campus / MPI:
- bus number 10 (direction Heyendaal, bus stop Spinozagebouw)
- bus number 12 (direction Druten, bus stop Spinozagebouw)
- bus number 6 (direction Station Dukenburg, bus stop van Peltlaan)
- bus number 15 (direction Wijchen, bus stop Spinozagebouw)
- bus number 83 (direction Venlo Station, bus stop Sint Annastraat)
To get around Nijmegen, you can also rent an e-bike using the Bolt app or an e-scooter using the felyx app (payment via credit card).
If you own an OV-chipkaart, you can use the rental bikes available at Nijmegen central station. For more information, visit NS.nl.
You can contact us by reaching out to email@example.com