

Objective: Freezing of gait in people with Parkinson’s disease (PwP) is associated with executive dysfunction and motor preparation deficits. We have recently shown that electrophysiological markers of motor preparation, rather than decision-making, differentiate PwP with freezing of gait (FOG+) from those without (FOG-) while sitting. To examine the effect of locomotion on these results, we measured behavioural and electrophysiological responses in PwP with and without FOG during a target response-time task while sitting (single task) and stepping in place (dual task).
Methods: Behavioural and electroencephalographic data were acquired from 18 PwP (eight FOG+) and seven young controls performing the task while sitting and stepping in place.
Results: FOG+ responded more slowly while stepping than while sitting, whereas controls responded significantly faster while stepping than while sitting. Electrophysiological responses showed no difference in decision-making potentials (Centroparietal Positivity) between groups or conditions, but there were differences in neurophysiological markers of response inhibition (N2) and motor preparation (Lateralized Readiness Potential, LRP) in FOG+ while performing the dual task.
Conclusion: This suggests that the addition of a second complex motor task (stepping-in-place) impacts automatic allocation of resources in FOG+, resulting in delayed response times. The impact of locomotion on the generation of the N2 and LRP potentials, particularly in freezers, indirectly implies that these functions compete with locomotion for resources.
Significance: In the setting of multiple complex tasks or cognitive impairment, severe motor dysfunction may result, leading to freezing of gait.
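For readers unfamiliar with the LRP mentioned in the Results, the short sketch below illustrates the conventional double-subtraction computation of a Lateralized Readiness Potential from epoched EEG; the array names, channel choice (C3/C4), and data shapes are illustrative assumptions, not the study’s analysis pipeline.

# Minimal sketch of a Lateralized Readiness Potential (LRP) computation.
# Assumes epoched EEG stored as NumPy arrays (trials x samples) for the
# C3 and C4 electrodes, split by response hand; not the authors' pipeline.
import numpy as np

def compute_lrp(c3_left, c4_left, c3_right, c4_right):
    """Double-subtraction LRP: average of (contralateral - ipsilateral)
    activity for left-hand and right-hand response trials."""
    # Right-hand responses: left motor cortex (C3) is contralateral.
    lrp_right = (c3_right - c4_right).mean(axis=0)
    # Left-hand responses: right motor cortex (C4) is contralateral.
    lrp_left = (c4_left - c3_left).mean(axis=0)
    return 0.5 * (lrp_right + lrp_left)

# Example with simulated epochs (100 trials, 500 samples each).
rng = np.random.default_rng(0)
epochs = [rng.standard_normal((100, 500)) for _ in range(4)]
lrp = compute_lrp(*epochs)  # 1-D array: LRP waveform over time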
Bio:
John S. Butler is a Lecturer in the School of Mathematical Sciences at Technological University Dublin. He has published over 60 papers in diverse venues, from Cerebral Cortex to the International Journal for Numerical Methods in Fluids. In these papers he utilized numerical methods to analyse, and help interpret and predict, large and complex datasets. John received his PhD in Numerical Analysis from the School of Mathematics, Trinity College Dublin, under the supervision of Prof John Miller. Following this, he was a post-doctoral fellow from 2006 to 2009 at the Max Planck Institute for Biological Cybernetics in Tuebingen, Germany. During this time he collaborated with Prof Heinrich Bülthoff on research in self-motion perception using a variety of methods, from low-level psychophysics to complex closed-loop tasks and electrophysiological recordings. From 2009 to 2013 he worked with Prof John Foxe and Prof Sophie Molholm at the Albert Einstein College of Medicine, Bronx, New York, where he expanded the scope of his work to include a translational research component, specifically investigating neurodevelopmental differences in multisensory integration between typically developing children and children with Autism Spectrum Disorder. From 2013 to 2015 he worked with Prof Richard Reilly at the Trinity Centre for Bioengineering in Trinity College Dublin on movement disorders such as Parkinson’s disease and dystonia.
People around the world make millions of decisions every day, some of which are critical, potentially having significant financial and human implications (even leading to loss of life). Yet many such decisions are time-sensitive and are based on incomplete, inconsistent, and/or overwhelming information, so errors and their devastating consequences are frequent.
To overcome this, where possible, humans tend to assess situations and make the corresponding decisions in groups. Groups have inherent error-correction capabilities, but unfortunately they also suffer from many biases (e.g., strong but incompetent leaders), which may lead to sub-optimal decisions. Also, group decisions are inherently slower than individual ones.
Brain-Computer Interfaces (BCIs) have traditionally been used for restoring capabilities in people with disabilities. However, an emerging line of research has extended their scope from assistive devices to tools for augmenting human functions in healthy people.
In the last eight years, a major strand of research within the Essex Brain-Computer Interfaces and Neural Engineering laboratory, which I co-direct, has focused on the idea of combining brain signals (and other physiological and behavioral data) across multiple people to achieve a form of emergent group augmentation, particularly for decision making.
Over this period, we have developed a technology that has delivered statistically significant (and in some cases remarkable) improvements over the group performance achieved by more traditional methods of integrating individual decisions, in progressively more realistic environments.
In my plenary presentation I will review the approach and its applications and will look at potential future developments.
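Purely as an illustration of what “integrating individual decisions” can mean, the sketch below contrasts a plain majority vote with a confidence-weighted vote; the weights stand in for neural or behavioural confidence estimates, and the scheme is a generic textbook example, not the Essex laboratory’s method.

# Illustrative comparison of plain majority voting with confidence-weighted
# voting for a binary group decision. The confidence weights stand in for
# neural/behavioural estimates; this is not the Essex method itself.
import numpy as np

def majority_vote(decisions):
    """decisions: array of +1/-1 individual choices."""
    return np.sign(np.sum(decisions))

def weighted_vote(decisions, confidences):
    """Weight each individual choice by an estimated confidence in [0, 1]."""
    return np.sign(np.sum(decisions * confidences))

decisions = np.array([+1, -1, +1, -1, -1])          # five group members
confidences = np.array([0.9, 0.3, 0.8, 0.2, 0.4])   # hypothetical estimates
print(majority_vote(decisions))               # simple majority: -1
print(weighted_vote(decisions, confidences))  # confidence-weighted: 1.0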
Recent advances have made it possible to acquire brain-electrical signals (EEG) outside of the laboratory environment. While mobile EEG is clearly an emerging field, some major challenges have not yet been solved. I will review advances in device mobility, motion tolerance, and device wearability, and outline the requirements for future EEG technology that could be used in daily life.
Interpersonal physiological synchrony refers to the similarity of physiological signals between individuals over time. A high similarity of EEG signals among individuals who watch and/or hear the same stimuli has been shown to correspond to high outwardly directed attention. While physiological synchrony in EEG has been demonstrated in the classroom, easier, less obtrusive collection and processing of data would make interpersonal physiological synchrony a more suitable measure of attention in the (virtual) classroom. Therefore, within the NeurolabNL Startimpulse project, we determined whether other physiological measures, namely heart rate and skin conductance, provide similar insights into attention allocation as EEG. In our main study, participants listened to the same audiobook, mixed with other sounds. Half of the participants were asked to focus their attention on the audiobook; half were asked to focus their attention on the sounds. We showed that, even though EEG is more sensitive, heart rate and skin conductance also contain reliable information on attention allocation. In addition, the reliability of the wearable data collected with equipment suitable for real-life settings was similar to that of laboratory equipment, and we observed physiological synchrony in skin conductance and heart rate recorded through wearables in actual classroom settings. Finally, in collaboration with ethicists, we started qualitative research with focus groups among adolescents to investigate whether using wearable measurements to improve classroom attention and engagement has their support.
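For readers unfamiliar with the measure, the sketch below computes a simple pairwise synchrony index as windowed Pearson correlations between two heart-rate traces; the window length, step size, and signal names are illustrative assumptions, not the project’s analysis pipeline.

# Minimal sketch of interpersonal physiological synchrony: mean windowed
# Pearson correlation between two equally sampled heart-rate signals.
# Window length and the averaging choice are illustrative assumptions.
import numpy as np

def windowed_synchrony(sig_a, sig_b, win=30, step=15):
    """Return the mean Pearson r over sliding windows of `win` samples."""
    rs = []
    for start in range(0, len(sig_a) - win + 1, step):
        a = sig_a[start:start + win]
        b = sig_b[start:start + win]
        if a.std() > 0 and b.std() > 0:
            rs.append(np.corrcoef(a, b)[0, 1])
    return float(np.mean(rs)) if rs else np.nan

rng = np.random.default_rng(1)
shared = rng.standard_normal(300)                 # common stimulus-driven component
hr_a = shared + 0.5 * rng.standard_normal(300)    # participant A
hr_b = shared + 0.5 * rng.standard_normal(300)    # participant B
print(windowed_synchrony(hr_a, hr_b))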
Wascher, E., Arnau, S., Gutberlet, M., Rinkenauer, G. & Reiser, J.E.
Behavior in natural environments such as the workplace provides only limited possibilities to measure EEG correlates of information processing or task-related mental load, since discrete and temporally distinct events are mostly missing. Adding visual probes or auditory stimuli to such a scene potentially alters the working situation, thereby interfering with natural behavior. In the past we have shown that eye blink-related potentials (bERPs) of the EEG can potentially overcome this problem, since eye blinks reflect discrete moments of visual information segmentation. In a more recent study, we demonstrated that bERPs are modulated by the visual demand of walking tasks, whereas ERPs to the stimuli of an auditory task were not. This finding supports Wickens’ multiple resource theory and demonstrates that visual demands of natural behavior can be analyzed without artificial visual stimulation.
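As a rough illustration of the blink-locked averaging behind bERPs, the sketch below epochs a continuous EEG channel around detected blink onsets and averages the epochs; the blink-detection step, sampling rate, and timing parameters are assumed for illustration, not the authors’ method.

# Minimal sketch of blink-related potentials (bERPs): epoch one EEG channel
# around blink-onset samples and average across epochs. Blink onsets are
# assumed to be available (e.g., from an EOG-based detector).
import numpy as np

def blink_erp(eeg, blink_onsets, fs=250, tmin=-0.2, tmax=0.6):
    pre, post = int(-tmin * fs), int(tmax * fs)
    epochs = []
    for onset in blink_onsets:
        if onset - pre >= 0 and onset + post <= len(eeg):
            seg = eeg[onset - pre:onset + post].copy()
            seg -= seg[:pre].mean()          # baseline-correct on the pre-blink interval
            epochs.append(seg)
    return np.mean(epochs, axis=0)           # bERP waveform (samples,)

rng = np.random.default_rng(2)
eeg = rng.standard_normal(250 * 60)            # one minute of simulated data at 250 Hz
onsets = np.arange(500, len(eeg) - 500, 1000)  # hypothetical blink onsets
print(blink_erp(eeg, onsets).shape)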
Mental Task (MT)-based Brain-Computer Interfaces (BCIs) are communication and control systems that translate users’ brain activity into control commands for an application. Despite promising applications in numerous domains, including assistive technologies for motor-impaired users, video games, and motor rehabilitation, MT-BCIs are still scarcely used outside laboratories, mainly due to a lack of reliability. In the last few years, BCI user training has been shown to be a promising way to increase MT-BCI reliability. Indeed, MT-BCI control is a skill that needs to be learned and mastered; thus, adequate user training can increase BCI control skills and therefore MT-BCI reliability. Unfortunately, there is currently a lack of knowledge about how to best train BCI users, or even about how this learning takes place. In this talk, I will thus present our ongoing efforts towards building theories of MT-BCI user training, in order to understand and explain the BCI learning process, as well as to model and optimise how to train BCI users effectively and efficiently. I will notably describe different types of MT-BCI user learning that we identified, some results and hypotheses about who can more effectively control MT-BCIs, as well as some results and hypotheses about how to optimise this training. I will conclude by outlining a number of outstanding research questions that will need to be answered in order to complete these theories.
Neuroadaptive games utilise signals from the brain as a form of dynamic difficulty adjustment that operates in real time. Previous research has demonstrated that increased game difficulty distracts attention from painful stimulation, leading to increased pain tolerance and reduced levels of subjective pain. This presentation describes the development and evaluation of a neuroadaptive game designed to distract from pain. Neurophysiological data were provided by functional near-infrared spectroscopy (fNIRS) measured from frontal and parietal sites. The first study describes the generation of training data as 20 participants played a racing game at three levels of difficulty. The resulting data were subjected to machine learning analyses (Support Vector Machine) and yielded accuracy levels between 0.85 and 0.95. The resulting subject-independent classification model was used to create a working prototype that could adjust game difficulty based on real-time analyses of fNIRS. Ten participants (who did not take part in the first study) experienced the neuroadaptive game prototype in two forms: (1) a version that adjusted game difficulty using real-time fNIRS data, and (2) a version that adjusted game difficulty randomly. The cold pressor test was used as an experimental pain protocol during this study. Results indicated that version (1) resulted in higher levels of pain tolerance compared to version (2). The design of the system and implications for the future design of neuroadaptive games are discussed.
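To make the classification step concrete, the sketch below trains a support vector machine on feature vectors extracted from fNIRS epochs and estimates subject-independent accuracy with leave-one-subject-out cross-validation; the feature construction, data shapes, and parameters are assumptions rather than the study’s actual pipeline.

# Minimal sketch of the SVM stage: classify three difficulty levels from
# fNIRS-derived feature vectors using leave-one-group(subject)-out
# cross-validation. Features and shapes are simulated placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(3)
n_subjects, trials_per_subject, n_features = 20, 30, 16
X = rng.standard_normal((n_subjects * trials_per_subject, n_features))
y = rng.integers(0, 3, size=len(X))                 # three difficulty levels
groups = np.repeat(np.arange(n_subjects), trials_per_subject)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print(scores.mean())   # chance level (~0.33) for this random data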
Fatigue can impair operator performance, judgment, and decision-making via various neurocognitive and psychological mechanisms, and has been associated with a two-fold increase in the risk of injuries and errors and a four-fold increase in safety-compromising behaviors during safety-critical events. Proactive augmentation paradigms that tackle imminent fatigue deficits directly at the source, i.e., the brain, may prove more effective but have been poorly explored. There is now considerable evidence that transcranial direct current stimulation (tDCS) can boost brain plasticity processes and cognitive performance in complex tasks; however, the nature and extent of its impact on delaying cognitive fatigue are poorly understood. In this talk, we will describe our efforts to build a framework for closed-loop, non-invasive neurostimulation, one that remains explicitly informed by physiological and/or cognitive biomarkers and is unconstrained by the task-specificity that encumbers prior developments in this space. We will also discuss our current translation efforts to apply this technology in the emergency response domain.
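The control logic of such a biomarker-informed closed loop can be summarized schematically; the sketch below is a generic threshold-with-hysteresis loop, with the biomarker function, thresholds, and stimulation interface all being hypothetical placeholders rather than the framework described in the talk.

# Schematic closed-loop neurostimulation controller: monitor a fatigue
# biomarker and enable/disable stimulation around hysteresis thresholds.
# Biomarker source, thresholds and the stimulation interface are hypothetical.
import random

def read_fatigue_biomarker():
    """Placeholder for a physiological/cognitive fatigue estimate in [0, 1]."""
    return random.random()

def closed_loop(on_threshold=0.7, off_threshold=0.5, n_steps=20):
    stim_on = False
    for _ in range(n_steps):
        fatigue = read_fatigue_biomarker()
        if not stim_on and fatigue > on_threshold:
            stim_on = True          # would ramp stimulation up here
        elif stim_on and fatigue < off_threshold:
            stim_on = False         # would ramp stimulation down here
        print(f"fatigue={fatigue:.2f} stimulation={'ON' if stim_on else 'OFF'}")

closed_loop()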
The report of Phineas Gage, the young man who survived an iron bar passing through his head, is one of the most famous clinical cases in the field of neuropsychology. However, this case is also a neuroergonomics one, as the promising young and intelligent Gage was prevented from continuing his work as a railway construction supervisor by the consequences of the severe traumatic brain injury. Gage had earlier been described as ‘the most efficient and capable’ and ‘a shrewd, smart businessman, very energetic and persistent in executing all his plans of action’, but due to the brain injury he lost the ‘ability to anticipate the future and plan accordingly in a complex social environment’, and thus could not resume his previous work. This case is considered a historical milestone in exploring the relationship between brain functioning, behaviour, and cognition, and represents a classical example of executive dysfunction. What is the current status of the latest methods and theories from neuroscience for understanding how the brain implements executive functions in everyday life?
This talk explores how neuroscientific knowledge regarding healthy interindividual differences in brain anatomy and executive functions, and their relation to everyday-life performance, contributes to current challenges in the real world. First, the definitions of executive functions and their theoretical and methodological frameworks are presented. These include the well-established unity and diversity model of Miyake (Miyake et al., 2000; Karr et al., 2018), the dual mechanisms framework of Braver (Braver, 2012), and the cognitive continuum approach of Dehais (Dehais et al., 2019). Second, the current neuroscientific knowledge regarding the development of neurofeedback training to enhance executive functioning is discussed (Enriquez-Geppert et al., 2014). The relevance of executive functions and their enhancement is discussed using examples from aviation (ID 149, ID 164).
ID 149: Theta Neurofeedback and Pilots’ Executive Functioning. Lafont, A., Enriquez-Geppert, S., Roy, R., Leloup, V., Dehais, F.
ID 164: Executive Functions and Flying Performance in Pilots. Smit, D., Enriquez-Geppert, S., Daneshnia, N., Lafont, A., Dehais, F.
Braver, T. S. (2012). The variable nature of cognitive control: A dual mechanisms framework. Trends in Cognitive Sciences, 16(2), 106-113.
Dehais, F., Hodgetts, H. M., Causse, M., Behrend, J., Durantin, G., & Tremblay, S. (2019). Momentary lapse of control: A cognitive continuum approach to understanding and mitigating perseveration in human error. Neuroscience and Biobehavioral Reviews, 100, 252-262.
Enriquez-Geppert, S., Huster, R. J., Figge, C., & Herrmann, C. S. (2014). Self-regulation of frontal-midline theta facilitates memory updating and mental set shifting. Frontiers in Behavioral Neuroscience, 8, 420.
Harlow, J. M. (1868). Recovery from the passage of an iron bar through the head. Publications of the Massachusetts Medical Society, 2, 327-347.
Karr, J. E., Areshenkoff, C. N., Rast, P., Hofer, S. M., Iverson, G. L., & Garcia-Barrera, M. A. (2018). The unity and diversity of executive functions: A systematic review and re-analysis of latent variable studies. Psychological Bulletin, 144(11), 1147.
Miyake, A., Friedman, N. P., Emerson, M. J., Witzki, A. H., Howerter, A., & Wager, T. D. (2000). The unity and diversity of executive functions and their contributions to complex “frontal lobe” tasks: A latent variable analysis. Cognitive Psychology, 41(1), 49-100.
Dr. Quandt will present recent work on neuroscience-informed educational technology. Immersive virtual reality presents enormous potential for learning three-dimensional, spatially complex signed languages, especially with recent advances in motion capture, animation, and hand tracking. Dr. Quandt’s research team has undertaken the mission of designing, developing, and testing an immersive virtual reality environment in which non-signing adults can learn American Sign Language from signing virtual human avatars created from motion-capture recordings of fluent signers. One significant challenge of this work lies in how signed language learning can best be measured following a short-term educational experience. Alongside traditional learning measures of memorization and recall, the project will record sensorimotor EEG activity in order to understand how the sensorimotor system changes in response to learning signed language content. This talk will touch on the theories of embodied cognition that motivate this work, as well as the implications of embodied cognition for other forms of immersive learning.
Stroke rehabilitation has to address many functions that are impaired by the brain lesion caused by a brain hemorrhage. Patients show diverse symptoms, ranging from movement disorders and impaired sensation to cognitive deficits, all of which must be addressed by therapy. To compensate for movement disorders, exoskeletons can be applied (1-4). They can sense movements and support them, enabling a patient with even minimal remaining muscular activity to regain control over a disabled limb. Electromyogram and force measurements can be used to adapt the support to the patient’s needs (5). If no control of the limb remains, brain activity can be analyzed by embedded brain reading (eBR) to infer the patient’s intention and to trigger movements implicitly (6,7). Successful therapy requires not only “assist as needed” support; patients must also be able to follow instructions, and mental stress must be avoided to assure optimal therapy conditions. Again, eBR can be used to detect the task load on a patient during an ongoing action, e.g., a specific arm movement, to infer whether the patient, while performing the movement, is still able to understand instructions from the therapist or a serious game (8). Further, EEG signals can be used to adapt the behavior of the system to the subjective preferences of the patient. This talk will show new therapy approaches for upper-limb rehabilitation made possible by an active exoskeleton with highly adaptive embedded control combined with movement-intention recognition based on EEG, EMG, and eye-tracking data (a minimal illustrative sketch of EMG-triggered assistance follows the reference list below). In the future, our approach will be supported by task-load detection based on P300-related activity and on ErrP-related activity evoked by perceived misbehavior of the system (9).
(1) Platz T, Roschka S (2009) Rehabilitative Therapie bei Armparese nach Schlaganfall. Neurol Rehabil 15(2):81– 106
(2) Platz T (2011) Rehabilitative Therapie bei Armlähmungen nach einem Schlaganfall. S2-Leitlinie der Deutschen Gesellschaft für Neurorehabilitation. NeuroGeriatrie 3(4):104–116
(3) Nitschke, J., Kuhn, D., Fischer, K., Röhl, K. (2014) Comparison of the usability of the ReWalk, Ekso and HAL. Orthopädietechnik 9(14):22
(4) Kirchner, E. A., Will, N., Simnofske, M., Vaca Benitez, L. M., de Gea Fernández, J., Kampmann, P., Kirchner, F. (2019) Exoskelette und künstliche Intelligenz in der klinischen Rehabilitation. Editors: Mario A. Pfannstiel, Patrick
Da-Cruz, Harald Mehlich. In: Digitale Transformation von Dienstleistungen im Gesundheitswesen V, Springer Nature, chapter 21, pages 413-435, Aug/2019. ISBN: 978-3-658-23986-2.
(5) Kirchner, E. A., Albiez, J., Seeland, A., Jordan, J., Kirchner F. (2013), Towards Assistive Robotics for Home Rehabilitation, In Proceedings of the 6th International Conference on Biomedical Electronics and Devices, (BIODEVICES-13), 11.2.-14.2.2013, Barcelona, Feb/2013.
(6) Kirchner, E. A., Fairclough, S., Kirchner, F. (2019) Embedded Multimodal Interfaces in Robotics: Applications, Future Trends, and Societal Implications. Editors: S. Oviatt, B. Schuller, P. Cohen, D. Sonntag, G. Potamianos, A. Krueger. In: The Handbook of Multimodal-Multisensor Interfaces, Morgan & Claypool Publishers, volume 3, chapter 13, pages 523-576, 2019. ISBN: e-book: 978-1-97000-173-0, hardcover: 978-1-97000-175-4, paperback: 978-1-97000-172-3, ePub: 978-1-97000-174-7.
(7) Kirchner, E. A., Kim, S.-K., Straube, S., Seeland, A., Wöhrle, H., Krell, M.M., Tabie, M., Fahle M. (2013) On the Applicability of Brain Reading for Predictive Human-Machine Interfaces in Robotics. In PLoS ONE, Public Library of Science, volume 8, number 12, pages e81732, Dec/2013.
(8) Kirchner, E. A., Kim S.-K. (2018), Multi-Tasking and Choice of Training Data Influencing Parietal ERP Expression and Single-Trial Detection—Relevance for Neuroscience and Clinical Applications, In: Frontiers in Neuroscience, volume 12, pages 188, DOI: 10.3389/fnins.2018.00188
(9) Kim, S.-K., Kirchner, E. A., Stefes, A., Kirchner F. (2017). Intrinsic interactive reinforcement learning - Using error-related potentials for real world human-robot interaction. In Scientific Reports, Nature, volume 7: 17562, pages n.a., Dec/2017.
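Picking up the “assist as needed” idea from the abstract above, the toy sketch below triggers and scales exoskeleton support from a rectified, smoothed EMG envelope; the thresholds, scaling rule, and all names are hypothetical placeholders rather than the cited system.

# Toy "assist as needed" logic: estimate muscular effort from a rectified,
# smoothed EMG envelope and scale exoskeleton support inversely to the
# residual effort. All names, thresholds and scaling are hypothetical.
import numpy as np

def emg_envelope(emg, fs=1000, win_ms=100):
    win = int(fs * win_ms / 1000)
    rectified = np.abs(emg)
    kernel = np.ones(win) / win
    return np.convolve(rectified, kernel, mode="same")   # moving-average smoothing

def assist_level(envelope, onset=0.05, full_effort=0.5):
    """Return support in [0, 1]: trigger above `onset`, give less support
    the more residual activity the patient produces."""
    env = float(np.mean(envelope))
    if env < onset:
        return 0.0                        # no movement intention detected
    return float(np.clip(1.0 - (env - onset) / (full_effort - onset), 0.0, 1.0))

rng = np.random.default_rng(4)
emg = 0.2 * rng.standard_normal(1000) + 0.1    # simulated weak contraction
print(assist_level(emg_envelope(emg)))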
Approaches for non-invasive transcranial electrical stimulation of the brain have been investigated intensively for decades, including transcranial Direct Current Stimulation (tDCS) and transcranial Alternating Current Stimulation (tACS). This talk aims to address: 1) What is the state of the art in technology platforms, including wearable platforms? 2) What are the mechanistic foundations of boosting performance and learning? 3) What challenges limit deployment in medical centers and in everyday life? Topics covered include: A) the use of computational models to optimize neuromodulation, including novel “adaptive” pipelines; B) emerging avenues for neurovascular modulation and their implications for brain function and disease treatment; and C) hardware and programming challenges around closed-loop stimulation, for example using EEG, fMRI, or fNIRS.
The advances of recent years in the neuroscience field regarding technology (i.e., the possibility to record users’ biosignals with wearable and effective devices) and algorithms (i.e., machine-learning methodologies) have encouraged the bioengineering community to employ neurophysiological measures not only for research purposes but also in everyday-life applications. It is now possible to capture, even in real time, humans’ mental and emotional states from body activity (e.g., brain activity, sweating, etc.), without asking the user or interfering with the task he or she is performing. Such information can also be used to change the behavior of the surrounding environment or of the interface that the user is handling (i.e., passive Brain-Computer Interface, pBCI, [1]). One of the applications in which pBCIs have been investigated is the evaluation of users’ mental states during working activities, in order to mitigate or prevent human errors and to enhance human-computer interaction [2]. In such safety-critical environments, where operators are subjected to multiple sources of information and tasks, poor performance could have serious consequences. In this perspective, workload and stress have so far been considered among the most relevant aspects that could degrade performance to unacceptable levels [1], [3] (an illustrative example of a simple EEG-based workload metric follows the reference list below).
[1] P. Arico et al., “Human Factors and Neurophysiological Metrics in Air Traffic Control: a Critical Review,” IEEE Rev. Biomed. Eng., vol. PP, no. 99, pp. 1–1, 2017.
[2] B. Blankertz et al., “The Berlin Brain–Computer Interface: Non-Medical Uses of BCI Technology,” vol. 4.
[3] S. Arora, N. Sevdalis, D. Nestel, M. Woloshynowych, A. Darzi, and R. Kneebone, “The impact of stress on surgical performance: A systematic review of the literature,” Surgery, vol. 147, no. 3, 2010.
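As a concrete illustration of the kind of neurophysiological workload metric surveyed in [1], the sketch below computes a frontal-theta to parietal-alpha band-power ratio from simulated EEG; the channel choice, frequency bands, and index definition are common conventions assumed for illustration, not a specific metric from the cited work.

# Illustrative EEG workload index: ratio of frontal theta (4-8 Hz) power to
# parietal alpha (8-12 Hz) power, estimated with Welch's method. Channel
# choice and the index itself are common conventions, not the cited metric.
import numpy as np
from scipy.signal import welch

def band_power(sig, fs, fmin, fmax):
    freqs, psd = welch(sig, fs=fs, nperseg=fs * 2)
    mask = (freqs >= fmin) & (freqs <= fmax)
    return psd[mask].sum()     # band power (same frequency resolution for both bands)

def workload_index(frontal, parietal, fs=250):
    theta = band_power(frontal, fs, 4, 8)
    alpha = band_power(parietal, fs, 8, 12)
    return theta / alpha

rng = np.random.default_rng(5)
fz = rng.standard_normal(250 * 30)    # 30 s of simulated frontal EEG
pz = rng.standard_normal(250 * 30)    # 30 s of simulated parietal EEG
print(workload_index(fz, pz))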
Robots are currently the center of attention in various fields of research, due to their potential use as assistants for daily living. However, in order to design robots that can actually be helpful for humans, we need to understand which robot design, in terms of both appearance and behavior, is most beneficial for optimal human-robot interaction. Using neuroscience methods and classical paradigms of experimental psychology adapted to human-robot interaction, we examine how the brain processes various signals delivered by robots. In this talk, I will focus on social signals, such as gaze contact, and I will present a series of studies in which gaze contact has been examined in the context of various cognitive tasks. The results show that gaze contact elicited by a robot is a powerful social signal that increases engagement with the robot. However, it can also be distracting in a task that requires attentional focus. I will present behavioural results (performance measures, eye-tracking data), as well as EEG responses to gaze contact elicited by a robot, indicating that processing of robot social gaze can be costly in terms of cognitive resources. The results will be discussed in the context of an antagonistic relationship between cognitive control and the social brain. Implications for social and assistive robotics will be proposed.