Projects | The Brain-Computer Interface and Neuroergonomics Lab
Projects
Artificial Intelligence
EEG Data Augmentation in Cognitive States Recognition: Generative Models-based Approaches
This research aims to improve Deep Learning classifiers for EEG-based cognitive state recognition by generating synthetic EEG signals with Generative Models (GMs) that learn the underlying data distribution, such as Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs). This study addresses not only the data scarcity problem in EEG experiments but also the overfitting problem of Deep Learning-based classifiers trained on EEG.
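As a toy illustration of the augmentation idea (a minimal sketch with hypothetical data, not the lab's actual GAN/VAE pipeline), one can fit a simple per-class Gaussian generative model to EEG feature vectors and sample synthetic trials to enlarge a scarce training set:

```python
import numpy as np

def fit_gaussian_model(features):
    """Fit a diagonal Gaussian to feature vectors (trials x features)."""
    mu = features.mean(axis=0)
    sigma = features.std(axis=0) + 1e-8  # avoid zero variance
    return mu, sigma

def sample_synthetic(mu, sigma, n_samples, rng):
    """Draw synthetic feature vectors from the fitted Gaussian."""
    return rng.normal(mu, sigma, size=(n_samples, mu.size))

rng = np.random.default_rng(0)
# Hypothetical band-power features for one cognitive state: 20 trials x 8 features
real_trials = rng.normal(loc=5.0, scale=1.5, size=(20, 8))

mu, sigma = fit_gaussian_model(real_trials)
synthetic = sample_synthetic(mu, sigma, n_samples=100, rng=rng)

augmented = np.vstack([real_trials, synthetic])  # enlarged training set
print(augmented.shape)  # (120, 8)
```

A GAN or VAE replaces the Gaussian with a learned neural generator, but the workflow is the same: fit a model of the data distribution, sample from it, and train the classifier on the augmented set.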
Hyperscanning
Hyperscanning: Quantification Method for Inter-Brain Neural Synchrony
The objective of this research is to develop a computational framework that quantifies how inter-brain neural synchrony changes over time. This study provides a numerical measure of convergence between brains, which helps identify when agents reach a consensus of information in a specific paradigm such as social interaction.
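One widely used synchrony metric in hyperscanning studies is the phase-locking value (PLV). The sketch below is an illustrative example of the general idea (not necessarily the framework developed in this project), estimating PLV between two signals via an FFT-based analytic signal:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT (equivalent to a Hilbert transform)."""
    n = x.size
    spectrum = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spectrum * h)

def phase_locking_value(x, y):
    """PLV: magnitude of the mean phase difference; 1 = perfect synchrony."""
    phase_diff = np.angle(analytic_signal(x)) - np.angle(analytic_signal(y))
    return np.abs(np.mean(np.exp(1j * phase_diff)))

t = np.linspace(0, 2, 512, endpoint=False)
a = np.sin(2 * np.pi * 10 * t)               # 10 Hz "brain A" oscillation
b = np.sin(2 * np.pi * 10 * t + np.pi / 4)   # same rhythm, constant phase lag
rng = np.random.default_rng(1)
c = rng.standard_normal(t.size)              # unrelated noise

print(round(phase_locking_value(a, b), 3))   # close to 1.0 (locked)
print(phase_locking_value(a, c) < 0.5)       # noise: low synchrony
```

Computing PLV in sliding windows over simultaneously recorded EEG from two participants yields the kind of time-resolved synchrony curve this project aims to quantify.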
Brain-Computer Interfaces
The objective of this research is to develop and evaluate a tactile-based BCI driven by vibratory stimulation, as a preliminary step toward detecting conscious awareness in, and communicating with, behaviorally non-responsive patients. This project involves interdisciplinary collaboration across engineering (ISE and EE), neuroscience (Neurology), assistive technology, and medical science. The resulting steady-state somatosensory evoked potential (SSSEP) features will be used to develop a hybrid BCI for behaviorally non-responsive patients.
Collaborative Brain-Computer Interface for People with Severe Motor Disabilities
The objective of this research is to investigate the collaborative behavior of people with motor disabilities (i.e., amyotrophic lateral sclerosis, ALS) using a collaborative BCI system that utilizes steady-state visually evoked potentials (SSVEPs). The NC State Brainbot BCI was used as a testbed for the research.
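An SSVEP BCI works by matching EEG power to the flicker frequency of the target the user attends. The sketch below shows a deliberately simple spectral-peak detector on simulated data (hypothetical parameters; practical systems such as the Brainbot typically use stronger methods like canonical correlation analysis, but the principle is the same):

```python
import numpy as np

def detect_ssvep_frequency(signal, fs, candidate_freqs):
    """Pick the candidate stimulus frequency with the largest spectral power."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    powers = [spectrum[np.argmin(np.abs(freqs - f))] for f in candidate_freqs]
    return candidate_freqs[int(np.argmax(powers))]

fs = 256                      # hypothetical sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)   # 4-second trial
rng = np.random.default_rng(2)
# Simulated occipital EEG while attending a 12 Hz flicker, plus noise
eeg = np.sin(2 * np.pi * 12 * t) + 0.5 * rng.standard_normal(t.size)

targets = [8.0, 10.0, 12.0, 15.0]  # flicker frequencies of on-screen targets
print(detect_ssvep_frequency(eeg, fs, targets))  # 12.0
```

In a collaborative BCI, such per-user frequency decisions (or their confidence scores) are fused across multiple participants before issuing a single command.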
This research focuses on the usability analysis of P300-based BCI applications with four independent variables: luminosity contrast, stimulus duration, interface type, and screen size. Differences in task performance between participants with and without motor disabilities are also analyzed, with the P300 Speller serving as the testbed. This research intends to provide valuable empirical data and meaningful insights for future research on P300-based BCI applications.
A Brain-Computer Interface (BCI) Driven Robotic Wheelchair: Developing a Solution with Wheelchair-Bound Patients in Mind
Motor imagery, an EEG paradigm based on imagined motor movements, allows us to decode a user's intended motion from brain activity alone. This lets a user direct our robotic wheelchair through thought, opening new possibilities for paralyzed and other mobility-impaired individuals. Our research seeks to validate this device with real-world wheelchair users, not just healthy college students! The wheelchair will also serve as a platform for our lab to test and improve future BCI devices, providing a physical system to demonstrate the amazing capability of brain-computer interfacing.
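A common first step in motor-imagery decoding is comparing band power (log-variance) over sensorimotor channels, since imagining a hand movement attenuates the rhythm over the contralateral hemisphere. The sketch below uses simulated data and a nearest-mean classifier (all values hypothetical, not the wheelchair's actual pipeline) to illustrate the idea:

```python
import numpy as np

def log_variance_features(trial):
    """Log-variance per channel -- a standard band-power proxy for MI EEG."""
    return np.log(trial.var(axis=1))

def train_nearest_mean(trials, labels):
    """Store a mean feature vector per class."""
    feats = np.array([log_variance_features(t) for t in trials])
    return {c: feats[labels == c].mean(axis=0) for c in np.unique(labels)}

def classify(model, trial):
    f = log_variance_features(trial)
    return min(model, key=lambda c: np.linalg.norm(f - model[c]))

rng = np.random.default_rng(3)

def make_trial(attenuated_channel):
    """Simulate 2-channel EEG (e.g. C3/C4); imagining one hand attenuates
    the contralateral channel (event-related desynchronization)."""
    trial = rng.standard_normal((2, 500))
    trial[attenuated_channel] *= 0.5
    return trial

# class 0 = "left hand" (attenuates channel 1), class 1 = "right hand" (channel 0)
trials = [make_trial(1) for _ in range(20)] + [make_trial(0) for _ in range(20)]
labels = np.array([0] * 20 + [1] * 20)

model = train_nearest_mean(trials, labels)
print(classify(model, make_trial(1)))  # 0 ("left hand")
```

Mapping the decoded class to a drive command (e.g. turn left/right) closes the loop from thought to wheelchair motion.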
Rehabilitation
The objective of this research is to address a general lack of understanding of the wearing comfort and usability of the hand orthosis, as well as the efficacy of a brain-computer interface (BCI)-driven orthosis as a potential rehabilitation tool. Findings from this study will also allow us to conduct long-term rehabilitation studies with stroke patients who have severe motor impairments.
The objective of this research is to validate whether multiple paralyzed limbs can be controlled in real time by a Motor Imagery (MI)-based Brain-Computer Interface (BCI) with Functional Electrical Stimulation (FES), and to improve system accuracy using subject-specific reference and stimulation time epochs. This study will be extended to improve fine motor control of the hands.
The Smart Healthcare project is a research initiative to investigate and develop a new system that allows physical therapists to remotely monitor the rehabilitation progress of their patients. The hardware component of this project includes an array of wearable, wireless sensors for monitoring upper limb range of motion, as well as trunk movement. The software component consists of an application that gathers the motion data from the sensor arrays, processes the data to recreate the patients' movements, and then presents the clinically relevant information for each patient to a therapist, with special notes on any abnormalities found in the recorded motions.
Neuroergonomics
The Effects of Individual Difference on Discrete Affective Stimuli Processing: An EEG Study
The objective of this research is to investigate how people process discrete affective picture stimuli, using electroencephalography (EEG). Event-related potentials (ERPs) and coherence across brain areas are analyzed with respect to emotional granularity and discrete emotional categories. Understanding human emotions through this study may offer insights beyond traditional knowledge and experience.
Human-Computer Interactions
Haptic User Interfaces for the Visually Impaired: Implications for Haptically Enhanced Science Learning Systems
The objective of this research is to support visually impaired students' science learning through sensory feedback, which was systematically studied to investigate task performance and user behavior. The results of this study, together with a set of refined design guidelines and principles, should inform future research on haptic user interfaces for developing haptically enhanced science learning systems for the visually impaired.
Brain-to-Brain Interfacing (B2BI)
Direct Bidirectional Brain-to-Brain Communication in Humans via Non-Invasive Neurotechnology
This project seeks to expand the newly budding field of B2BI by establishing one of the first direct bidirectional systems: we aim to transmit information directly between two brains by using brain recording and neuromodulation simultaneously. Two participants will be able to exchange simple information using nothing but thought (no speech, touch, or sight involved).
Collaborative Nociception: Sensory Augmentation through a Direct Unidirectional Brain-to-Brain Interface
Expanding on what B2BI is truly capable of, this project aims to add a new sense to the human body: the ability to sense the pain of others. This technology could allow a doctor to detect when they have accidentally nicked a still-conscious patient during surgery, or allow soldiers to feel innately when one of their own has been shot before it could even be radioed in. Importantly, the receiver never feels the injury themselves; they are simply notified of its occurrence through direct neurostimulation, granting the human body a new sense.