RIKEN BSI News No. 34 (Dec. 2006)


Research Results at BSI

Towards a Real Time Human Brain-Computer Interface with Neurofeedback: Improving Differentiability by Blind Source Separation

Laboratory for Advanced Brain Signal Processing


"Our emotional brain makes us more than a ‘mere computer processing information’, because it allows us to deal with and to take into account the value of this information, enabling us to have feeling or a sense for the beautiful." Masao Ito (2002)


Brain-computer interfaces (BCI) are systems that use brain signals (electric, magnetic, metabolic) to control external devices such as computers, switches, wheelchairs, or neuroprostheses. While BCI research aims to create new communication channels for disabled or elderly persons using their brain signals, recent efforts have focused on developing potential applications in rehabilitation, multimedia communication and relaxation (such as immersive virtual reality control). The various BCI systems use different methods to extract the user's intentions from his/her brain electrical activity. In our lab, we want to improve the performance of current BCI systems by focusing on how and when EEG brain signals can trigger consistent computer commands. We will use that insight to develop BCI technologies that function in real time and respond to multiple commands. Such advances would significantly augment human-computer interaction (HCI).


Fig. 1: Multi-stage procedure for on-line BCI: preprocessing, blind source separation (BSS) and feature extraction play a key role in real-time, high-performance BCI systems. Since EEG signals are rather weak and are substantially disturbed by ongoing activity, artifacts and noise, we need powerful blind signal processing and machine learning algorithms to extract and discriminate useful brain patterns.



Fig. 2: (Upper) Diagram of one of our EEG-BCI classification methods for imagined hand movements. In the preprocessing stage, the time-domain EEG signals are transformed into a time-frequency representation by the Morlet wavelet transform. NMF with the alpha divergence is applied to determine representative basis vectors and associated discriminant features s(t). A probabilistic, model-based classifier processes the NMF-based features as inputs to generate a final decision.
(Lower) Typical topographic spatial patterns during imagined hand and foot movements allow us to design an optimal electrode layout. For a three-command BCI variant, we achieved classification rates between 84% and 92%.
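The following is a minimal sketch of the feature-extraction idea in Fig. 2: a Morlet-like time-frequency transform of one EEG trial followed by NMF of the resulting nonnegative matrix. The sampling rate, frequency grid, placeholder single-channel data, and the use of scikit-learn's KL-divergence NMF (standing in for the alpha-divergence NMF described here) are illustrative assumptions, not the lab's actual implementation.

```python
import numpy as np
from sklearn.decomposition import NMF

fs = 250                                    # assumed sampling rate (Hz)
t = np.arange(0, 3, 1 / fs)                 # one 3-s imagined-movement trial
eeg = np.random.randn(len(t))               # placeholder single-channel EEG

def morlet_power(x, fs, freqs, n_cycles=7):
    """Time-frequency power via convolution with complex Morlet wavelets."""
    power = np.empty((len(freqs), len(x)))
    for i, f in enumerate(freqs):
        sigma = n_cycles / (2 * np.pi * f)          # wavelet width in seconds
        wt = np.arange(-3 * sigma, 3 * sigma, 1 / fs)
        wavelet = np.exp(2j * np.pi * f * wt) * np.exp(-wt**2 / (2 * sigma**2))
        power[i] = np.abs(np.convolve(x, wavelet, mode="same")) ** 2
    return power

freqs = np.arange(6, 31)                    # mu and beta range, 6-30 Hz
tf = morlet_power(eeg, fs, freqs)           # nonnegative time-frequency matrix

# NMF factorizes the time-frequency matrix into a few spectral basis vectors (W)
# and their time courses (H); H plays the role of the discriminant features s(t)
# passed on to the classifier.
nmf = NMF(n_components=4, beta_loss="kullback-leibler", solver="mu",
          init="nndsvda", max_iter=500)
W = nmf.fit_transform(tf)                   # (n_freqs, 4) spectral bases
H = nmf.components_                         # (4, n_times) activation features
```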


Fig. 3: Spectrogram (time-frequency representation) of EEG for SSVEP BCI. Horizontal red segments represent discriminated SSVEPs corresponding to the various flickering frequencies (10, 12, 15, 20, 30 Hz) of the images on which the subject focused.


Fig. 4: Conceptual BCI system with various kinds of neurofeedback. In the development of a BCI we need to handle two learning systems: the machine should learn to discriminate between different complex patterns of brain activity as accurately as possible, and the BCI users should learn, via different neurofeedback configurations, to modulate their EEG activity and self-regulate or control it.

Our BCI system comprises an EEG data acquisition module, a signal preprocessing stage with online blind source separation (BSS) to reduce artifacts and improve the signal-to-noise ratio, a feature extraction system, and an ensemble of adaptive classifiers (Fig. 1).
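As a rough illustration of the back end of this pipeline, the sketch below extracts simple log band-power features from epoched multi-channel EEG and trains a small ensemble of classifiers. The synthetic epochs, the band limits, and the LDA/logistic-regression soft-voting pair standing in for our "ensemble of adaptive classifiers" are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import VotingClassifier

fs = 250
rng = np.random.default_rng(0)
X = rng.standard_normal((60, 8, 2 * fs))      # 60 trials, 8 channels, 2 s each
y = rng.integers(0, 2, 60)                    # two imagined-movement classes

def bandpower_features(epoch):
    """Log band power per channel in the mu (8-12 Hz) and beta (18-25 Hz) bands."""
    feats = []
    for lo, hi in [(8, 12), (18, 25)]:
        b, a = butter(4, [lo, hi], btype="band", fs=fs)
        feats.append(np.log(np.var(filtfilt(b, a, epoch, axis=-1), axis=-1)))
    return np.concatenate(feats)

features = np.array([bandpower_features(ep) for ep in X])

# Simple soft-voting ensemble as a stand-in for the adaptive classifier bank.
ensemble = VotingClassifier(
    [("lda", LinearDiscriminantAnalysis()),
     ("logreg", LogisticRegression(max_iter=1000))],
    voting="soft")
ensemble.fit(features, y)
print("training accuracy:", ensemble.score(features, y))
```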


We are currently investigating and testing several promising EEG paradigms, including: (i) measuring the brain activity over the primary motor cortex that results from imagined limb and tongue movements, (ii) detecting the presence of periodic EEG waveforms, called steady-state visual evoked potentials (SSVEP), elicited by flashing light sources (e.g., LEDs or phase-reversing checkerboards), and (iii) identifying characteristic event-related potentials (ERP) in the EEG that follow an event noticed by the user (or his/her intention), e.g., P300 peak waveforms after a flash of the character on which the user focused attention.


One approach we take exploits temporal/spatial changes and/or spectral characteristics of SMR (sensorimotor rhythm) oscillations: the mu rhythm (8-12 Hz) and the beta rhythm (18-25 Hz). These oscillations typically decrease during movement or when preparing for movement (event-related desynchronization, ERD) and increase after movement and during relaxation (event-related synchronization, ERS). ERD and ERS help identify the EEG features relevant for motor imagery classification.
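A minimal sketch of ERD/ERS quantification follows, assuming a single-channel epoch recorded over the motor cortex with a pre-cue reference interval; the placeholder data, band limits, and interval boundaries are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250
t = np.arange(-2, 4, 1 / fs)                     # imagery cue at t = 0
eeg = np.random.randn(len(t))                    # placeholder C3 epoch

b, a = butter(4, [8, 12], btype="band", fs=fs)   # mu band
mu = filtfilt(b, a, eeg)
power = mu ** 2

ref = power[(t >= -1.5) & (t <= -0.5)].mean()    # reference (baseline) power
task = power[(t >= 0.5) & (t <= 2.0)].mean()     # power during imagery

# ERD/ERS as percent change relative to the reference interval:
# negative values = desynchronization (ERD), positive = synchronization (ERS).
erd = 100 * (task - ref) / ref
print(f"mu-band ERD/ERS: {erd:.1f} %")
```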


Taking a different approach, we designed a smart multiple-choice table in the form of an array of small checkerboard images flickering at different frequencies. When a BCI user focuses his/her attention on, or gazes at, a specific flickering image or symbol, a corresponding periodic component (SSVEP) can be detected in the EEG signals (although it is buried in large noise and is difficult to extract). By applying BSS and a bank of sub-band filters, we showed that it is possible to decompose the SSVEP waveforms into a series of cortical components. The SSVEP-based paradigm remains one of the most promising and reliable communication modes for a fast BCI system that can discriminate a large number of unique commands or symbols, allowing autonomous navigation of a computer screen cursor (or realizing a virtual joystick).
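The sketch below illustrates the basic SSVEP target-detection idea, assuming the candidate flicker frequencies shown in Fig. 3; a simple Welch-spectrum comparison at each stimulation frequency (and its second harmonic) stands in for the BSS and sub-band filter bank used in our system, and the simulated occipital signal is an assumption.

```python
import numpy as np
from scipy.signal import welch

fs = 250
flicker_freqs = [10, 12, 15, 20, 30]             # Hz, as in Fig. 3
t = np.arange(0, 4, 1 / fs)

# Placeholder occipital EEG: a weak 15 Hz SSVEP buried in noise.
eeg = 0.5 * np.sin(2 * np.pi * 15 * t) + np.random.randn(len(t))

f, psd = welch(eeg, fs=fs, nperseg=2 * fs)       # 0.5 Hz frequency resolution

def band_power(f0, half_width=0.5):
    """Average PSD around the stimulation frequency and its second harmonic."""
    sel = lambda fc: psd[(f >= fc - half_width) & (f <= fc + half_width)].mean()
    return sel(f0) + sel(2 * f0)

scores = {f0: band_power(f0) for f0 in flicker_freqs}
target = max(scores, key=scores.get)             # flicker the user attends to
print(f"detected SSVEP target: {target} Hz")
```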


Although the above paradigms are quite well known, our experimental designs are quite original. Moreover, we have developed and implemented novel, efficient, real-time signal preprocessing tools and feature extraction algorithms. For example, we have applied adaptive spatial-temporal filtering and a new nonnegative matrix factorization (NMF) technique to extract features (Fig. 2), and a novel ICA and sub-band filtering approach to reduce noise (Fig. 3). Ongoing research in our lab is attempting to optimize mu-rhythm and SSVEP detection algorithms for high-performance, real-time, multi-command BCI.
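To make the noise-reduction idea concrete, here is a minimal sketch of ICA-based artifact rejection combined with a simple sub-band criterion. The injected slow "ocular" drift, the low-frequency power threshold, and the use of FastICA (as a generic stand-in for our ICA/BSS algorithms) are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import FastICA

fs, n_channels = 250, 8
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
eeg = rng.standard_normal((len(t), n_channels))
eeg[:, 0] += 5 * np.sin(2 * np.pi * 0.3 * t)     # inject a slow ocular-like drift

ica = FastICA(n_components=n_channels, random_state=0, max_iter=1000)
sources = ica.fit_transform(eeg)                  # (n_samples, n_components)

# Sub-band criterion: flag components whose variance is dominated by the <3 Hz band.
b, a = butter(4, 3, btype="low", fs=fs)
low_power = np.var(filtfilt(b, a, sources, axis=0), axis=0)
artifact = low_power / np.var(sources, axis=0) > 0.6   # assumed rejection threshold

sources[:, artifact] = 0                          # remove artifactual components
eeg_clean = ica.inverse_transform(sources)        # back to channel space
```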


Another promising extension of BCI is to incorporate various forms of neurofeedback to train subjects to modulate EEG brain patterns and parameters such as ERPs, ERD, SMR, the P300 or slow cortical potentials (SCPs) to meet a specific criterion or to learn self-regulation skills. The subjects then change their EEG patterns in response to the feedback. Such integration of neurofeedback into BCI (Fig. 4) is an emerging technology for rehabilitation, but we believe it is also a new paradigm in neuroscience that might reveal previously unknown brain activities associated with behavior or self-regulated mental states. The possibility of automated context awareness as a new interface goes far beyond standard BCI with simple feedback control. We hope to develop the next level of BCI systems, using neurofeedback for selected cognitive phenomena. To do so, we need to rely increasingly on findings from other disciplines, especially neuroscience, information technology, biomedical engineering, machine learning, and clinical rehabilitation.
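As a conceptual illustration of the neurofeedback loop in Fig. 4, the sketch below compares the current mu-band power of each incoming EEG chunk with a resting baseline and turns the difference into a feedback value (e.g., the height of a bar on screen). The random chunks, update rate, and feedback mapping are assumptions; a real system would acquire data online and couple the value to a visual or auditory display.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250
b, a = butter(4, [8, 12], btype="band", fs=fs)    # SMR/mu band of interest

def mu_power(chunk):
    return np.log(np.var(filtfilt(b, a, chunk)))

baseline = mu_power(np.random.randn(10 * fs))     # calibrated during rest

for step in range(5):                             # five 1-second feedback updates
    chunk = np.random.randn(fs)                   # stands in for acquired EEG
    feedback = baseline - mu_power(chunk)         # >0 when mu power drops (ERD)
    print(f"update {step}: feedback value {feedback:+.2f}")
```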


