Neuroergonomics Webinar #5
Social Neuroergonomics

Date/Time: July 21, 3:30 pm – 6:00 pm (Central European Summer Time, CEST)

The 5th Webinar of the Neuroergonomics Conference 2021 highlights the specialty challenges raised in a recent publication (Krueger & Wiese, 2021). How do people process, store, and apply social information in order to get work done? How does this apply to, and how should it influence, the way we (re)design human-machine systems on a continuum from automation to autonomy?

This webinar features five talks by world-leading experts (see detailed program below):
Challenge A: From Observation to Interaction – Emily S. Cross (University of Glasgow)
Challenge B: From Automation to Autonomy – Peter Hancock (University of Central Florida)
Challenge C: From Explicit to Implicit Measures – Lorna C. Quandt (Gallaudet University)
Challenge D: From Dyads to Groups – Brian Scassellati (Yale University)
Challenge E: From Laboratory to Natural Environments – Antonia Hamilton (University College London)

Krueger, F., & Wiese, E. (2021). Specialty Grand Challenge Article: Social Neuroergonomics. Frontiers in Neuroergonomics, 2, 10.

Frank Krueger
Eva Wiese

Program


1:00 pm (CEST): Meet-up and Networking

3:30 pm (CEST): Introduction to the Challenges of Social Neuroergonomics

3:50 pm (CEST): Challenge A: From Observation to Interaction – Emily S. Cross (University of Glasgow)

4:10 pm (CEST): Challenge B: From Automation to Autonomy – Peter Hancock (University of Central Florida)

4:30 pm (CEST): Challenge C: From Explicit to Implicit Measures – Lorna C. Quandt (Gallaudet University)

4:50 pm (CEST): Challenge D: From Dyads to Groups – Brian Scassellati (Yale University)

5:10 pm (CEST): Challenge E: From Laboratory to Natural Environments – Antonia Hamilton (University College London)

5:30 pm (CEST): Q&A Session with all speakers

6:15 pm (CEST): Meet-up and Networking

How to participate?

All talks will be live-streamed on YouTube. Use the button on the right to set a notification.

In addition, we have set up a meeting space on Gather to allow participants to meet up and interact.

To join the restricted guest list for this event, please register your name and email on the right. Note that it is not necessary to create a Xing account.

Detailed Program

Challenge A: From Observation to Interaction

Speaker: Emily S. Cross (University of Glasgow, Scotland; Macquarie University, Australia)

Title: Probing the Flexibility of Social Perception Reveals Important Insights into Observation and Interaction

Abstract: As humans, we gather a wide range of information about other agents from watching them move. The information we gather by observing others provides vital knowledge for informing our own interactions in a complex social world. A network of brain regions has been implicated in understanding others’ actions by means of an automatic matching process that links actions we see others perform with our own motor abilities. Current views of this network assume a matching process biased towards familiar actions; specifically, those performed by conspecifics and present in the observer’s motor repertoire. However, emerging social neuroscience research is raising some interesting challenges to this dominant theoretical perspective. Specifically, recent work has asked: if this system is built for and biased towards familiar human actions, what happens when we watch or interact with artificial agents, such as robots or avatars? In addition, is it only the similarity between self and others that leads to engagement of brain regions that link action with perception, or do affective or aesthetic evaluations of another’s action also shape these matching processes that take us from observation to interaction? In this talk, I discuss evidence that provides some first answers to these questions. Broadly speaking, the results challenge previous ideas about how we perceive social agents and suggest broader, more flexible processing during social perception. The implications of these findings are further considered in light of whether motor resonance with robotic agents may facilitate human-robot interaction in the future.

Challenge B: From Automation to Autonomy

Speaker: Peter A. Hancock (University of Central Florida, USA)

Title: From Automation to Autonomy: Malicious Intent and the Crumbling Boundaries of Individual Humanity

Abstract: The human brain is a manifestly effective but insufficiently developed organ which processes information in such a fashion as to create a global, peak predator. Arguably, its existence, in fostering local-individual optimization, has resulted in a global dysfunctionality of existential proportions. The organ itself is designed to assimilate sensory information and, via perceptual interpretation, to support decisions and actions, predicated upon prior memory of contextual effectiveness. Contingent upon species variation, it accomplishes this, potentially on a ratio scale, to a greater or lesser degree of efficiency. But now come computational systems which promise to revolutionize each of these respective functional capacities whilst, at the same time, creating an artificial species of their own gradualist origins. A transitional phase of this burgeoning and bifurcating line of development is represented by a progressively greater degree of human-machine intimacy, referred to in the relevant literature as neuroergonomics. At present, the filtering mechanisms associated with limited sensory information assimilation, attention gating, and output execution abilities remain vestigial expressions, still largely realized in the artificial displays and system controls that characterize current human-machine systems. Even marginal neuroergonomic successes can and will challenge and supersede these extant limitations. Once autonomy is let into your head, there is no going back. In particular, the apparently inherent cycle times and baud-rate limits associated with the temporal domain of human processing will be subject to radical, and essentially immediate, revision. As autonomous systems express an ever-greater level of self-intentionality, although not an intentionality which would be immediately recognized as of a human character, the utility, and relative ubiquity, of human processing units will prove irresistible. And if the entry portal to the brain is readily available and unregulated, it will be used. Whilst this may sound unnecessarily pessimistic, ask your teenager to abandon that portal to autonomy’s action that they now carry around in their hand, and test the antithesis to my proposition.

Challenge C: From Explicit to Implicit Measures

Speaker: Lorna C. Quandt (Gallaudet University, USA)

Title: Embodied sign language learning in virtual reality: using EEG as an implicit measure of learning

Abstract: In this talk, Dr. Quandt will share recent work on using EEG methodology to provide a window into implicit embodied learning processes. The talk will draw upon results from behavioral and cognitive neuroscience studies from the past few years of her work in the Action & Brain Lab at Gallaudet University, the world’s premier university for deaf and hard-of-hearing students. Immersive virtual reality presents enormous potential for learning three-dimensional, spatially complex signed languages, especially with recent advances in motion capture, animation, and hand tracking. Dr. Quandt’s research team has undertaken the mission of designing, developing, and testing an immersive virtual reality environment in which non-signing adults can learn American Sign Language from signing virtual human avatars, created from motion-capture recordings of fluent signers. One significant challenge of this work lies in how signed language learning can best be measured following a short-term educational experience. Alongside traditional learning measures of memorization and recall, sensorimotor EEG activity will be assessed in order to understand how the sensorimotor system changes in response to learning signed language content. This work centers upon the question of how sensorimotor system activity may allow us a window into the process of learning physical movements, whether within the domain of signed languages or in any other domain of motor skill.
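For readers unfamiliar with how EEG can serve as an implicit measure of learning, the sketch below illustrates one common analysis of this kind: quantifying event-related desynchronization (ERD) of the sensorimotor mu rhythm. It is a minimal, generic Python example with assumed parameter values (sampling rate, band limits, synthetic signals), not the Action & Brain Lab’s actual pipeline.

```python
# Illustrative sketch: mu-rhythm event-related desynchronization (ERD).
# All parameter values are assumptions for demonstration only.
import numpy as np
from scipy.signal import welch

FS = 250            # sampling rate in Hz (assumed)
MU_BAND = (8, 13)   # mu rhythm frequency band in Hz

def band_power(signal, fs, band):
    """Mean power spectral density within a frequency band (Welch's method)."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def mu_erd_percent(baseline, task, fs=FS, band=MU_BAND):
    """ERD% = (task - baseline) / baseline * 100; negative values indicate
    mu suppression, conventionally read as sensorimotor engagement."""
    p_base = band_power(baseline, fs, band)
    p_task = band_power(task, fs, band)
    return (p_task - p_base) / p_base * 100.0

# Demo with synthetic data: 2-second baseline and task segments from one
# sensorimotor channel (e.g., C3), with mu amplitude reduced during the task.
rng = np.random.default_rng(0)
t = np.arange(0, 2, 1 / FS)
baseline = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
task = 0.3 * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
print(f"mu ERD: {mu_erd_percent(baseline, task):.1f}%")  # negative -> suppression
```

In a learning study of the kind described above, such ERD values would typically be compared before and after training to index changes in sensorimotor engagement.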

Challenge D: From Dyads to Groups

Speaker: Brian Scassellati (Yale University, USA)

Title: From dyads to groups: What robots teach us about human behavior

Abstract: In this short talk, I will argue three points. First, the social behavior of individuals varies in significant ways depending on whether they are interacting in dyads or in groups. By looking at the problems of machine perception of social cues, I will show examples of how systems that work to perceive social signals in dyads fail when applied to individuals in triads and richer social groups (featuring joint work with Iolanda Leite). Second, robots can influence not only the social behavior of humans during direct dyadic interactions but also human-to-human social behavior when interacting with mixed human-robot teams (featuring joint work with Sarah Sebo). Finally, by moving from a focus on dyadic interactions to group interactions, human-robot systems have become successful therapeutic options for children with autism.

Challenge E: From Laboratory to Natural Environments

Speaker: Antonia Hamilton (University College London, UK)

Title: Social Neuroergonomics: From the lab to natural environments

Abstract: Collecting neuroimaging data in natural environments is critical if we want to understand how people interact with each other and with the world outside the lab. Here, I will describe a series of studies using functional near-infrared spectroscopy (fNIRS) to measure cognitive function during complex social tasks. The first study examines neural mechanisms of lying and lie detection as participants play a card game similar to poker. The second examines neural mechanisms of prospective memory, that is, remembering to do an action at some point in the future. Participants walked around an outdoor urban environment while performing a prospective memory task; fNIRS data collected from prefrontal cortex reveal specific systems for social prospective memory. The third study examines the theatre as a microcosm of social interaction. fNIRS data were captured from professional actors as they rehearsed short pieces from Shakespeare, to examine how taking on a new role changes the sense of self. In all these studies, I will discuss the methods we use to examine neural mechanisms in complex environments, as well as the challenges that arise when we analyse and interpret these data. The novel approaches highlighted here demonstrate how this kind of research can be done, and open the way to future studies of the neural and cognitive processes that underlie our real-world social interactions.
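For context on how fNIRS yields the hemodynamic signals described above, the sketch below shows the modified Beer-Lambert law conversion from raw light-intensity changes to oxy- and deoxyhemoglobin concentration changes. It is a minimal, self-contained Python illustration; the extinction coefficients, source-detector distance, and differential pathlength factors are assumed example values, not those from the studies in this talk.

```python
# Illustrative sketch: modified Beer-Lambert law for fNIRS.
# Coefficients below are rough, assumed values for demonstration only.
import numpy as np

# Extinction coefficients [eps_HbO, eps_HbR] in 1/(mM*cm), approximate,
# at the two wavelengths typically used by fNIRS devices.
EXT = np.array([[1.4866, 3.8437],   # ~760 nm
                [2.5264, 1.7986]])  # ~850 nm
DISTANCE_CM = 3.0           # source-detector separation (assumed)
DPF = np.array([6.0, 5.2])  # differential pathlength factor per wavelength (assumed)

def intensity_to_hb(i_baseline, i_task):
    """Convert detected intensities at two wavelengths into (dHbO, dHbR) in mM.

    i_baseline, i_task: arrays of shape (2,) with intensity at ~760 nm and
    ~850 nm during a baseline period and during the task.
    """
    # Change in optical density per wavelength.
    d_od = np.log10(i_baseline / i_task)
    # Normalize by the effective path length (distance * DPF), then solve
    # the 2x2 system  d_od_norm = EXT @ [dHbO, dHbR].
    d_od_norm = d_od / (DISTANCE_CM * DPF)
    d_hbo, d_hbr = np.linalg.solve(EXT, d_od_norm)
    return d_hbo, d_hbr

# Demo: a task-related intensity drop at both wavelengths.
d_hbo, d_hbr = intensity_to_hb(np.array([1.00, 1.00]), np.array([0.97, 0.95]))
print(f"dHbO = {d_hbo:+.4f} mM, dHbR = {d_hbr:+.4f} mM")
```

Time series of these concentration changes, channel by channel, are what analyses like those described in the talk (lie detection, prospective memory, actors in rehearsal) operate on.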