Deep Machine Learning
Commanding a Brain-Controlled Wheelchair using Steady-State Somatosensory Evoked Potentials
We propose a novel brain-controlled wheelchair, one of the major applications of brain-machine interfaces (BMIs), that allows individuals with mobility impairments to perform activities of daily living independently. Specifically, we use a steady-state somatosensory evoked potential (SSSEP) paradigm, which elicits brain responses to tactile stimulation at specific frequencies, to decode a user's intention to control the wheelchair. In our system, a user issues one of three commands by concentrating on one of three vibration stimuli, attached to the left hand, right hand, and right foot. The three stimuli are mapped to three wheelchair commands: turn left, turn right, and move forward. From a machine learning perspective, we also devise a novel feature representation that combines spatial and spectral characteristics of the brain signals. With SSSEP-based control, all subjects completed the task without any collisions, whereas four subjects failed it with motor imagery (MI)-based control. Notably, in terms of average task-completion time, SSSEP-based control also outperformed MI-based control. In a second, more challenging task, all subjects successfully reached the target location.
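The spatio-spectral feature idea can be sketched, for illustration only, as per-channel power concentrated in a narrow band around each vibration frequency; the band edges, channel count, and sampling rate below are assumptions, not the paper's actual representation:

```python
import numpy as np

def band_power(trial, fs, band):
    """Mean power of an EEG trial (channels x samples) within a frequency band."""
    freqs = np.fft.rfftfreq(trial.shape[1], d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(trial, axis=1)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[:, mask].mean(axis=1)  # one value per channel (spatial part)

def sssep_features(trial, fs, stim_bands):
    """Concatenate per-channel power over each stimulation band,
    yielding a simple joint spatio-spectral feature vector."""
    return np.concatenate([band_power(trial, fs, b) for b in stim_bands])

# toy demo: 3 channels, 1 s at 256 Hz; channel 0 driven at 23 Hz
fs = 256
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
trial = np.vstack([np.sin(2 * np.pi * 23 * t),
                   0.1 * rng.standard_normal(fs),
                   0.1 * rng.standard_normal(fs)])
feats = sssep_features(trial, fs, stim_bands=[(22.0, 24.0), (26.0, 28.0)])
```

A downstream classifier (e.g. LDA) would then map such vectors to the three wheelchair commands.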
Lower Limb Exoskeleton Control System based on Steady State Visual Evoked Potential (SSVEP)
We developed an asynchronous brain-machine interface (BMI)-based lower limb exoskeleton control system driven by steady-state visual evoked potentials (SSVEPs). By decoding electroencephalography (EEG) signals in real time, users are enabled to walk forward, turn right, turn left, sit, and stand while wearing the exoskeleton. SSVEP stimulation is implemented with a visual stimulation unit consisting of five light-emitting diodes (LEDs) fixated to the exoskeleton. A canonical correlation analysis (CCA) method for extracting the frequency information associated with the SSVEP was used in combination with a k-nearest neighbors (KNN) classifier. Overall, 11 healthy subjects participated in the experiment to evaluate performance. To achieve the best classification, CCA was first calibrated in an offline experiment. In the subsequent online experiment, our results show an accuracy of 91.3±5.73%, a response time of 3.28±1.82 s, an information transfer rate (ITR) of 32.9±9.13 bits/min, and a completion time of 1100±154.92 s for the experimental course studied. The ability to achieve such high-quality BMI control indicates that an SSVEP-based lower limb exoskeleton for gait assistance is becoming feasible.
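The CCA frequency-detection step can be sketched in a few lines: for each candidate LED frequency, correlate the EEG against sine/cosine reference signals and pick the best match. The frequencies, harmonic count, and window length here are illustrative assumptions, and the paper's full pipeline additionally feeds CCA features to a KNN classifier:

```python
import numpy as np

def max_canonical_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def cca_reference(freq, fs, n_samples, n_harmonics=2):
    """Sine/cosine reference signals at the stimulus frequency and its harmonics."""
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(refs)

def detect_ssvep(eeg, fs, stim_freqs):
    """Return the stimulus frequency whose references best correlate with the EEG
    (eeg: samples x channels)."""
    n = eeg.shape[0]
    rhos = [max_canonical_corr(eeg, cca_reference(f, fs, n)) for f in stim_freqs]
    return stim_freqs[int(np.argmax(rhos))]

# toy demo: two noisy channels driven by an 11 Hz flicker
fs = 250
t = np.arange(2 * fs) / fs
rng = np.random.default_rng(0)
eeg = np.column_stack([np.sin(2 * np.pi * 11 * t) + 0.3 * rng.standard_normal(t.size),
                       np.cos(2 * np.pi * 11 * t) + 0.3 * rng.standard_normal(t.size)])
detected = detect_ssvep(eeg, fs, [9.0, 11.0, 13.0])
```

The QR/SVD route computes canonical correlations directly as singular values of the product of the two orthonormalized data blocks, which keeps the sketch free of external dependencies.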
Detection of Driver’s Braking Intention during Simulated Driving based on EEG Feature Combination
We developed a simulated driving environment for studying the neural correlates of emergency braking in diverse driving situations, and investigated to what extent these correlates can be used to detect a participant's braking intention prior to the behavioral response. Fifteen participants drove a virtual vehicle and were exposed to several kinds of traffic situations in a simulator system while electroencephalographic (EEG) and electromyographic (EMG) signals were measured. We then extracted characteristic features to classify whether the driver intended to brake or not. Our system shows excellent detection performance across a broad range of possible emergency situations. In particular, we were able to distinguish three different kinds of emergency situations (sudden stop of a preceding vehicle, sudden cutting-in of a vehicle from the side, and unexpected appearance of a pedestrian) from non-emergency (soft) braking situations, as well as from situations in which no braking was required but the sensory stimulation was similar to that of an emergency situation (e.g., the sudden stop of a vehicle in a neighboring lane). We propose a novel feature combination comprising movement-related potentials, such as the readiness potential, and event-related desynchronization features in addition to the event-related potential (ERP) features used in a previous study. Predicting braking intention with this feature combination was superior to using ERP features alone. Our study suggests that emergency situations are characterized by specific neural patterns of sensory perception and processing, as well as of motor preparation and execution, which can be exploited by neurotechnology-based braking assistance systems.
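As a rough illustration of such a feature combination, one can concatenate coarse ERP time-course features with log band-power (ERD-proxy) features per channel. The specific band, window segmentation, and channel count are assumptions, not the study's exact pipeline, and the classifier is omitted:

```python
import numpy as np

def erp_features(epoch, n_points=10):
    """Downsampled time course per channel: a simple ERP-style feature
    (epoch: channels x samples)."""
    segments = np.array_split(np.arange(epoch.shape[1]), n_points)
    return np.concatenate([epoch[:, s].mean(axis=1) for s in segments])

def erd_features(epoch, fs, band=(8.0, 12.0)):
    """Log band power per channel as a crude event-related
    desynchronization (ERD) proxy."""
    freqs = np.fft.rfftfreq(epoch.shape[1], d=1.0 / fs)
    spec = np.abs(np.fft.rfft(epoch, axis=1)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.log(spec[:, mask].mean(axis=1) + 1e-12)

def combined_features(epoch, fs):
    """Concatenate ERP and band-power features into one vector,
    in the spirit of the proposed feature combination."""
    return np.concatenate([erp_features(epoch), erd_features(epoch, fs)])

# toy demo: 4 channels, a 1 s epoch at 256 Hz
fs = 256
epoch = np.random.default_rng(2).standard_normal((4, fs))
feats = combined_features(epoch, fs)
```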
A Robot Arm Control System based on Decoding 3D Trajectory of Imagined Arm Movements from EEG Signals
Decoding motor commands from non-invasively measured neural signals has become important in brain-computer interface (BCI) research. Applications of BCI include neurorehabilitation after stroke and control of limb prostheses. Until now, most studies have tested simple movement trajectories in two dimensions using constant velocity profiles. However, most real-world scenarios require far more complex movement trajectories and velocity profiles. In this study, we decoded motor commands in three dimensions from electroencephalography (EEG) recordings while the subjects either executed or observed/imagined complex upper limb movement trajectories. We compared the accuracy of simple linear methods with that of nonlinear methods. In line with previous studies, our results showed that linear decoders are an efficient and robust method for decoding motor commands. However, although we took the same precautions as previous studies to suppress eye-movement-related EEG contamination, we found that subtracting residual electro-oculogram (EOG) activity from the EEG data substantially lowered motor decoding accuracy for linear decoders. This effect severely limits the transfer of previous results to practical applications in which neural activation is targeted. Nonlinear methods showed no such drop in decoding performance. Our results demonstrate that eye-movement-related contamination of brain signals constitutes a severe problem for decoding motor signals from EEG data. These findings are important for developing accurate decoders of motor signals from neural activity for use with BCI-based neural prostheses and neurorehabilitation in real-world scenarios.
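A generic linear decoder of the kind compared here can be sketched as ridge regression from EEG-derived features to 3D kinematics. The feature construction (filtering, time lags) is omitted and the regularization strength is an assumption; the study's actual decoders may differ:

```python
import numpy as np

def fit_linear_decoder(X, Y, alpha=1.0):
    """Ridge regression: weights W mapping EEG features X (n x d)
    to 3D kinematics Y (n x 3)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

def decode(X, W):
    """Predict kinematics for new feature vectors."""
    return X @ W

# toy demo: synthetic features with a known linear relation plus noise
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 8))
W_true = rng.standard_normal((8, 3))
Y = X @ W_true + 0.05 * rng.standard_normal((500, 3))

W = fit_linear_decoder(X, Y, alpha=0.1)
Y_hat = decode(X, W)
```

The closed-form solve is what makes linear decoders cheap and robust relative to nonlinear alternatives, at the cost of being able to pick up linearly coupled artifacts such as residual EOG.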
A Novel Bayesian Framework for Discriminative Feature Extraction in Brain-Computer Interfaces
As the learning load has shifted from the human subject to the computer, machine learning has come to be regarded as a useful tool for Brain-Computer Interfaces (BCIs). In this paper, we propose a novel Bayesian framework for discriminative feature extraction for motor imagery classification in an EEG-based BCI, in which the class-discriminative frequency bands and the corresponding spatial filters are optimized by probabilistic and information-theoretic approaches. In our framework, the problem of simultaneous spatio-spectral filter optimization is formulated as the estimation of an unknown posterior pdf representing the probability that a single-trial EEG of predefined mental tasks can be discriminated in a given state. To estimate this posterior pdf, we propose a particle-based approximation method that extends a factored-sampling technique with a diffusion process. We also devise an information-theoretic observation model to measure the discriminative power of features between classes.
From the viewpoint of classifier design, the proposed method naturally allows us to construct a spectrally weighted label decision rule by linearly combining the outputs of multiple classifiers. We demonstrate the feasibility and effectiveness of the proposed method through experimental results on three public databases.
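The spectrally weighted decision rule can be sketched as a weighted combination of per-band classifier scores. The weights standing in for the paper's information-theoretic discriminative-power measure are illustrative assumptions:

```python
import numpy as np

def weighted_decision(band_scores, band_weights):
    """Spectrally weighted label decision: linearly combine signed per-band
    classifier outputs, weighting each band by its discriminative power,
    and threshold the sum to obtain the label."""
    return 1 if np.dot(band_weights, band_scores) >= 0 else -1

# toy demo: three band-specific classifiers; the most informative band
# (largest weight) strongly favors class +1
scores = np.array([+0.9, -0.2, -0.1])   # signed classifier outputs per band
weights = np.array([0.7, 0.2, 0.1])     # e.g. normalized discriminative power
decision = weighted_decision(scores, weights)
```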
Reconstruction of Partially Damaged Face Images Based on a Morphable Face Model
In this paper, we propose an efficient method for reconstructing partially damaged faces based on a morphable face model. The method relies on two prior conditions. First, the positions of the pixels in the damaged region of the input face are given. Second, point correspondences in the undamaged region, whose number exceeds the number of prototypes, are given. To reconstruct the shape and texture in the damaged region, we use a two-step strategy. First, the linear coefficients that minimize the difference between the given shape/texture and a linear combination of the shape/texture prototypes are computed over the undamaged region. Second, the obtained coefficients are applied to the shape and texture prototypes in the damaged region, respectively. When these prior conditions are satisfied, the method requires no iterative processing and obtains an optimal reconstruction by a simple least-squares-minimization (LSM) projection.
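The two-step strategy can be sketched for a single attribute (texture, say) as an ordinary least-squares fit on the undamaged pixels followed by projection onto the damaged ones; the prototype matrix and region split below are synthetic stand-ins:

```python
import numpy as np

def reconstruct_damaged(P_und, x_und, P_dam):
    """Morphable-model reconstruction via least-squares minimization (LSM).
    P_und: prototype values at undamaged pixels (n_und x n_prototypes)
    x_und: observed values at undamaged pixels (n_und,)
    P_dam: prototype values at damaged pixels (n_dam x n_prototypes)
    Returns reconstructed values for the damaged region."""
    # step 1: coefficients fitted on the undamaged region only
    coeffs, *_ = np.linalg.lstsq(P_und, x_und, rcond=None)
    # step 2: apply the same coefficients to the damaged-region prototypes
    return P_dam @ coeffs

# toy demo: the true face is an exact combination of two prototypes
rng = np.random.default_rng(1)
P = rng.standard_normal((100, 2))        # 100 pixels, 2 prototypes
x = P @ np.array([0.6, -0.3])            # full (undamaged) face values
und, dam = slice(0, 80), slice(80, 100)  # 80 undamaged, 20 damaged pixels
x_rec = reconstruct_damaged(P[und], x[und], P[dam])
```

Because there are more undamaged correspondences than prototypes, the least-squares fit is overdetermined and has a unique non-iterative solution, which is exactly why the prior conditions matter.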