2018, International Journal of Information Sciences and Techniques (IJIST)
https://doi.org/10.5121/ijist.2018.8501
11 pages
This work proposes to recognize a user's commands by analysing his/her brainwaves, captured with a single-channel electroencephalogram (EEG). Whenever a user intends to issue one of the pre-defined commands, the proposed system presents all the candidate commands in turn. The user is then asked to concentrate as hard as he/she can when the desired command is shown. It is assumed that this concentration produces a characteristic "Yes" pattern in the captured EEG, as opposed to a "No" pattern when the user is relaxed. The task is therefore to determine whether the captured EEG represents "Yes" or not. This work compares three recognition methods, based respectively on Gaussian mixture models, hidden Markov models and recurrent neural networks, and conducts experiments using 2400 test EEG samples recorded from 10 subjects.
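The abstract above compares GMM, HMM and RNN classifiers for the binary "Yes"/"No" decision. As a minimal sketch of the GMM route only, the code below fits a single-component diagonal Gaussian per class (a degenerate GMM) to hypothetical per-trial features and decides by likelihood; the feature values, dimensions and training-set sizes are illustrative assumptions, not the paper's.

```python
import numpy as np

def fit_gaussian(X):
    """Fit one diagonal Gaussian (a one-component GMM) to feature rows X."""
    return X.mean(axis=0), X.var(axis=0) + 1e-6  # small floor avoids zero variance

def log_likelihood(x, mu, var):
    """Diagonal-Gaussian log-likelihood of feature vector x."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

def classify(x, model_yes, model_no):
    """Pick the class whose model assigns x the higher likelihood."""
    return "Yes" if log_likelihood(x, *model_yes) >= log_likelihood(x, *model_no) else "No"

# Hypothetical training features (e.g. band powers), one row per trial.
rng = np.random.default_rng(0)
X_yes = rng.normal(2.0, 0.5, size=(50, 4))  # "concentrated" trials
X_no = rng.normal(0.0, 0.5, size=(50, 4))   # "relaxed" trials
m_yes, m_no = fit_gaussian(X_yes), fit_gaussian(X_no)
print(classify(np.array([1.9, 2.1, 2.0, 1.8]), m_yes, m_no))  # prints "Yes"
```

A full GMM would mix several such components per class; a single component already captures the decision rule.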
2012 4th IEEE RAS & EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob), 2012
Electroencephalographic (EEG) signals are commonly used for developing brain-machine interfaces (BMI); in fact, EEG is the most widely used biological signal for translating the brain's commands to a computer. Additional physiological measures have been used along with EEG to obtain more robust and more accurate BMI systems. However, as increasingly sophisticated recording devices become available, signal processing grows more complicated, mainly because of the computational time invested in signal extraction and pattern recognition. Processing time in a BMI can therefore be too long, which is unacceptable for some applications, for instance devices used in rehabilitation engineering or some robotic systems. In this paper, we propose a six-command recognition algorithm using only one EEG bipolar connection (O1-P3) in combination with bilateral electrooculographic signals. Our algorithm identifies these six commands based on simple temporal analysis, with an average recognition accuracy of 97.1% for the selected sample of subjects. The average recognition time does not exceed 0.5 seconds after one of the events occurs.
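The abstract does not spell out its "simple temporal analysis". One plausible reading is a threshold-crossing detector with a refractory period over the combined EEG/EOG trace; the sketch below is that generic idea only, and every name, threshold and parameter here is an assumption rather than the paper's method.

```python
import numpy as np

def detect_events(signal, fs, threshold, refractory=0.5):
    """Return sample indices where |signal| crosses `threshold`, skipping
    further crossings for `refractory` seconds after each detection."""
    events, last = [], -np.inf
    for i, v in enumerate(signal):
        if abs(v) > threshold and (i - last) / fs >= refractory:
            events.append(i)
            last = i
    return events

# Two synthetic ocular "events" 1.5 s apart, at an assumed 100 Hz sampling rate.
sig = np.zeros(300)
sig[50], sig[200] = 5.0, -5.0
print(detect_events(sig, fs=100, threshold=1.0))  # prints [50, 200]
```

Sub-second recognition, as reported in the abstract, is plausible with this kind of per-sample scan since no spectral transform is needed.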
Hassan II University, 2023
Brain-Computer Interface (BCI) is one of the most advanced systems in biomedical engineering, posing a variety of software and medical-device problems. BCI systems are important electronic devices for analyzing complex brain diseases, and they can help certain patients control and communicate with external devices, which is why the scientific community is working to solve the many problems of the Electroencephalogram (EEG). This document therefore sets out methodologies and approaches that raise the EEG signal prediction system's accuracy above 90%. First, we focus on the development of an EEG signal acquisition device. This device consists of dry electrodes placed at precise positions on the scalp, an amplification stage for the EEG signals, and a microcontroller that records and sends data to an external server created using Python. The server, in turn, comprises a preprocessing phase that reduces the non-stationarity of the EEG signals using an elastic equation that converges the signal variations to the origin (normalizing all activities); this step alone increases accuracy by approximately 40% compared to using raw data. In addition, we self-optimized the bandpass filter using several optimization algorithms, such as the Sine Cosine Algorithm (SCA) and Harris Hawks Optimization (HHO). This digital filter removes the signal frequencies not used in the classification step. We also reduced correlations between acquisition channels using spatial filters (the Common Spatial Pattern (CSP)). Moreover, our work increases the informative portion of each mental task by extracting as many features as possible from the EEG signals, which raises accuracy by about 10%. Finally, we select the optimal prediction model by combining one or more optimizers with each classifier.
This suggested improvement allows the recognition of different EEG tasks with an accuracy increase of about 10%. Overall, accuracy is increased by 50% and 25%, respectively, to exceed 90% in the binary case and 80% in the multiclass case. These results confirm the robustness of the approaches and algorithms developed for each stage of the EEG prediction system.
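The paper's bandpass filter is self-optimized with SCA/HHO, which is not reproduced here. As a simplified stand-in for the filtering step only, the sketch below applies an ideal bandpass by zeroing FFT bins outside the passband; the 8-30 Hz band, 256 Hz sampling rate and test signal are illustrative assumptions.

```python
import numpy as np

def fft_bandpass(x, fs, low, high):
    """Ideal bandpass: zero every FFT bin outside [low, high] Hz."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    spectrum[(freqs < low) | (freqs > high)] = 0.0
    return np.fft.irfft(spectrum, n=len(x))

fs = 256                   # assumed sampling rate
t = np.arange(fs) / fs     # one second of signal
raw = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 50 * t)  # 10 Hz rhythm + 50 Hz mains
clean = fft_bandpass(raw, fs, 8.0, 30.0)  # keep only the 8-30 Hz band
```

A production BCI would typically use an IIR design (e.g. Butterworth) instead of FFT masking, with the cutoffs being the parameters an optimizer like SCA or HHO would tune.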
Lecture Notes in Computer Science, 2013
This paper describes a new EEG pattern recognition methodology in the Brain-Computer Interface (BCI) field. The EEG signal is analyzed in real time to detect "intents of movement". The signal is processed at specific segments in order to classify mental tasks; a message is then formulated and sent to a mobile device to execute a command. The signal analysis is carried out over eight frequency bands in the range of 0 to 32 Hz. A feature vector is formed using histograms of gradients over 4 orientations, and these features subsequently feed a Gaussian classifier. Our methodology was tested using BCI Competition IV data set I. We detect "intents of movement" at rates up to 95% with 0.2 associated noise, with mental-task differentiation around 99%. This methodology has also been tested in a prototype built with an Android-based mobile telephone and data gathered with an Emotiv EPOC headset, showing very promising results.
2004
An approach for Electroencephalogram (EEG) processing is presented. Along with the theoretical development of stochastic processing techniques, two application areas are suggested: EEG sleep recording analysis and the Brain-Computer Interface (BCI). Many methods have already been developed in the area of sleep staging; nevertheless, automatic scoring is still not as effective as manual scoring. Our sleep scoring method has the advantage of better temporal resolution (1 second) compared to the classical manual approach (30 seconds). For BCI, this is a fairly new approach, mainly offering support for disabled people in controlling a personal computer. An algorithm for cued-movement determination has been designed that detects the movements within a one-second interval.
International Joint Conference on Neural Networks (IJCNN'01), 2001
A new method of communication is proposed for paralysed patients using EEG signals. A scheme based on Morse code is used to construct meaningful words from recognised mental tasks. One benefit of this system is as a means of communication between paralysed patients and their external environment, i.e. as an interface for people with severe physical disabilities. As the technology advances, it is envisaged that the technique could be used by anyone for rudimentary user-interface actions, such as popping up windows and making menu choices. The EEG signals are segmented and power spectral values are extracted using the Wiener-Khintchine theorem with Parzen window smoothing. A Fuzzy ARTMAP (FA) classifier is used to classify these signals into three different mental tasks, where each task represents a dit, a dah or a space. We have analysed different mental tasks and show that performance varies greatly across subjects. The FA classification results in this paper show that it is highly possible to recognise three different mental tasks for each subject, provided that these three tasks, which vary from subject to subject, are chosen after some preliminary simulations. http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=938793
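The Morse-code scheme described above maps each recognised mental task to a dit, dah or space. The word-construction step can be sketched as a small decoder; the symbol labels and the (abbreviated) Morse table below are illustrative assumptions, not taken from the paper.

```python
# Abbreviated Morse table for illustration; a real system would use the full alphabet.
MORSE_TO_CHAR = {".": "E", "-": "T", "...": "S", "---": "O", ".-": "A", "-.": "N"}

def decode(symbols):
    """Turn a stream of 'dit'/'dah'/'space' classifier labels into text."""
    text, letter = [], []
    for s in list(symbols) + ["space"]:  # trailing space flushes the last letter
        if s == "dit":
            letter.append(".")
        elif s == "dah":
            letter.append("-")
        elif letter:  # 'space' closes the current letter
            text.append(MORSE_TO_CHAR.get("".join(letter), "?"))
            letter = []
    return "".join(text)

print(decode(["dit", "dit", "dit", "space",
              "dah", "dah", "dah", "space",
              "dit", "dit", "dit"]))  # prints "SOS"
```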
2017
The aim of this work is to design, implement and validate a primary communication system based on a brain-computer interface. Some people are, for various reasons, impaired in their ability to externalize their communication, even though they still receive and process information from different sources. This system would allow basic communication through thought, letting the user answer closed questions. The system was implemented by analyzing and interpreting electrical signals from brain activity, collected through electrodes attached to the scalp. The analog electrical signals were received by a data acquisition system and digitized for computer analysis. We implemented different signal processing techniques, pattern analysis, and classification and discrimination methods. By analyzing these signals and interpreting the electrical patterns, it was possible to understand answers to simple questions. The system has been validated through testing with healthy volunteers in the laboratory...
2018
In the design of brain-machine interfaces it is common to use motor imagery, the mental simulation of a motor act: the signals emitted when imagining the movement of different parts of the body are acquired. In this paper we propose a machine learning algorithm for the analysis of electroencephalographic (EEG) signals to detect body-movement intention, combined with the signals issued in a state of relaxation and in a state of mathematical activity, which can be applied to a brain-computer interface (BCI). The algorithm is based on recurrent neural networks (RNN) and can recognize four tasks that can be used for machine control. The proposed algorithm achieves an average classification efficiency of 80.13%. This method can be used to translate motor imagery, relaxation and mathematical-activity signals into a four-state control signal for the directional movement of a drone.
IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2003
Multilayer neural networks were successfully trained to classify segments of 12-channel electroencephalogram (EEG) data into one of five classes corresponding to five cognitive tasks performed by a subject. Independent component analysis (ICA) was used to segregate obvious artifact EEG components from other sources, and a frequency-band representation was used to represent the sources computed by ICA. Example results include an 85% accuracy rate in differentiating between two tasks using a segment of EEG only 0.05 s long, and a 95% accuracy rate using a 0.5-s-long segment.
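The frequency-band representation mentioned above can be sketched as relative band power computed from an FFT periodogram. The bands below (theta/alpha/beta) and the test signal are common defaults used here as assumptions; the paper's exact banding and the ICA step are not reproduced.

```python
import numpy as np

def band_powers(x, fs, bands=((4, 8), (8, 13), (13, 30))):
    """Relative spectral power per band (theta/alpha/beta here), via the FFT."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    total = spec[1:].sum() + 1e-12  # skip the DC bin
    return [spec[(freqs >= lo) & (freqs < hi)].sum() / total for lo, hi in bands]

# A pure 10 Hz oscillation concentrates its power in the 8-13 Hz (alpha) band.
x = np.sin(2 * np.pi * 10 * np.arange(128) / 128)
theta, alpha, beta = band_powers(x, fs=128)
```

In the paper's pipeline, features like these would be computed per ICA source and concatenated as the network's input vector.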
International Journal of Innovative Research in Computer and Communication Engineering, 2015
A brain-computer interface (BCI) provides a connection between the human brain and an external device such as a computer, and is used for assisting physically disabled and impaired people. A BCI system can be used for the analysis and classification of EEG signals corresponding to different emotions, drawing on self-report, startle response, behavioral response, autonomic measurement, and neurophysiologic measurement. EEG signals can play an important role in detecting emotional states for BCI-based analysis and classification of emotions, and emotion-detecting BCIs can be useful in many areas such as entertainment, education, and health care. In this project we propose a prototype embedded-controller-based model to assist patients in controlling home appliances. We use DWT algorithms for feature extraction, and features such as energy, entropy, and mean are computed. A KNN classifier is used to classify the EEG signal into two states through ...
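The DWT feature-extraction step described above can be illustrated with a hand-rolled single-level Haar transform; the abstract does not name its wavelet or decomposition depth, so Haar and one level are assumptions, and the energy/entropy/mean feature definitions below are common conventions rather than the paper's exact formulas.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT: approximation and detail coefficients."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def band_features(coeffs):
    """Energy, normalized-energy entropy, and mean of one coefficient band."""
    energy = float(np.sum(coeffs ** 2))
    p = coeffs ** 2 / (energy + 1e-12)
    entropy = float(-np.sum(p * np.log2(p + 1e-12)))
    return energy, entropy, float(coeffs.mean())
```

Each EEG segment would yield one (energy, entropy, mean) triple per band, and those triples form the KNN classifier's input vector.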
IAES International Journal of Robotics and Automation (IJRA), 2022
Over the past decades, the brain-computer interface (BCI) has gained a lot of attention in fields ranging from medicine to entertainment, and electroencephalogram (EEG) signals are widely used in BCI. The brain-computer interface makes human-computer interaction possible by using information acquired from a person's EEG signals. The raw EEG signals need to be processed to obtain valuable information that can be used for communication purposes. The objective of this paper is to identify the best combination of features for discriminating cognitive stimuli-based tasks. EEG signals are recorded while subjects perform arithmetic-based mental tasks. Statistical, power, entropy, and fractal dimension (FD) features are extracted from the EEG signals. Various combinations of these features are analyzed and validated using a random forest classifier, K-nearest neighbors (KNN), a multilayer perceptron, linear discriminant analysis, and a support vector machine. The combination of entropy-FD features gives the highest accuracy, 90.47%, with the KNN algorithm, compared to the individual entropy and FD features, which achieve 79.36% with the random forest classifier and multilayer perceptron and 82.53% with linear discriminant analysis, respectively. Our results show that the hybrid of entropy-FD features with a KNN classifier can efficiently classify the cognition-based stimuli.
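The entropy and fractal-dimension features above admit simple closed-form sketches. The abstract does not specify which estimators were used, so the histogram-based Shannon entropy and Petrosian fractal dimension below are illustrative choices; the seeded test signals are synthetic.

```python
import numpy as np

def shannon_entropy(x, bins=16):
    """Shannon entropy (bits) of the amplitude histogram of signal x."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / len(x)
    return float(-np.sum(p * np.log2(p)))

def petrosian_fd(x):
    """Petrosian fractal dimension, from sign changes in the first difference."""
    diff = np.diff(x)
    n_delta = int(np.sum(diff[:-1] * diff[1:] < 0))  # count of sign changes
    n = len(x)
    return np.log10(n) / (np.log10(n) + np.log10(n / (n + 0.4 * n_delta)))

rng = np.random.default_rng(1)
noise = rng.standard_normal(1000)                       # irregular signal
sine = np.sin(2 * np.pi * 5 * np.arange(1000) / 1000)   # smooth signal
# Irregular signals yield a higher fractal dimension than smooth ones.
```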
Expert Systems with Applications, 2021
This work presents a supervised machine-learning approach to build an expert system that provides support to the neuroscientist in automatically classifying ERP data and matching them with a multisensorial alphabet of stimuli. To do this, two different approaches are considered: a hierarchical tree-based algorithm, XGBoost, and feedforward neural networks, highlighting the pros and cons of both approaches in the different steps of the classification task. Moreover, the sensitivity of the classification capabilities of the tool as a function of the number of available electrodes is also studied, highlighting what can be achieved by applying the method using commercial, wearable EEG systems. The main novelty of this work consists in significantly enlarging the pool of stimuli that the expert system can recognize, comprising different, possibly mixed, sensorial domains. The obtained results open the way to the design of portable devices for augmented communication systems, which can be of particular interest for the development of advanced Brain-Computer Interfaces (BCI) for communication with different types of neurologically impaired patients.
Brain Sciences, 2022
Many applications are controlled by brain signals to bridge the digital divide between disabled and non-disabled people. The deployment of novel assistive technologies using the brain-computer interface (BCI) will go a long way toward achieving this lofty goal, especially after the successes these technologies have demonstrated in the daily life of people with severe disabilities. This paper contributes in this direction by proposing an integrated framework for controlling operating-system functionalities using Electroencephalography signals. Different signal processing algorithms were applied to remove artifacts, extract features, and classify trials. The proposed approach includes different classification algorithms dedicated to detecting P300 responses efficiently. The predicted commands pass through a socket to the API system, permitting control of the operating-system functionalities. The proposed system outperformed the winners of the BCI competition and reached an average accuracy of 94.5% in the offline setting. The framework was also evaluated online and achieved excellent accuracy, attaining 97% for some users and no less than 90% for the others. The suggested framework enhances information accessibility for people with severe disabilities and helps them perform their daily tasks efficiently. It permits interaction between the user and a personal computer through brain signals, without any muscular effort.
Proceedings of the 20th International Conference on Enterprise Information Systems, 2018
The objective of this work is to compare the performance of two brain-computer interfaces developed by our research group. Both interfaces collect the electrical signals produced by the human body while a person tries to move a cursor on a digital screen using thought alone. The collected signals are classified using artificial neural networks: the first interface uses electroencephalogram signals, collected from the scalp, to classify the mental command, while the second uses the electrodermal signal, collected from any right-hand finger. Besides analysing the performance of the two approaches, this research helps reduce the training time of similar systems, reported in the literature as averaging 45 days, to only about 40 minutes. Our motivation is to improve accessibility for people with temporary or permanent physical limitations. In addition, we have developed a low-cost signal collection platform, providing a solution that can help a large group of people.
IFMBE Proceedings
The objective of this paper is to develop room automation and robot control powered by electroencephalogram (EEG) signals for users with high-level spinal cord injury. A Brain-Computer Interface (BCI) provides a new non-muscular channel for sending messages and commands to the external world. This BCI was designed with two Operational Modes (OM): Training with Feedback OM and Execute OM. Our project emphasizes the design and implementation of the Execute OM. The main functionality of the Execute OM is as follows: it receives EEG data from the user, interprets and classifies it as a Mental Activity Task defined in a previous Training Phase, and finally executes a specific action using a rendering component, in this case a one-dimensional (1D) platform. The classification is achieved using wavelet feature extraction followed by a neural network. The development of this 1D platform is also part of this paper; the platform uses a graphical user interface (GUI) developed in Visual Basic (VB). Its task is to receive the Mental Activity Task output from MATLAB and execute a predetermined action, such as 1D movement of a cursor. Additionally, hardware is integrated for room automation and robot control, driven by offline EEG data taken from the user. The Training with Feedback OM shows visual cues to the user, processes the input data and presents visual feedback. The body of the paper starts with a brief introduction to the BCI system architecture and its implementation in MATLAB, followed by the experimental results of the system, and ends with the conclusion.
Journal of Software
Nowadays, UX design has moved to a new level: new modes of interaction, such as finger and hand movement, have been introduced, and technology also offers a thought-driven approach, the so-called brain-computer interface (BCI). This possibility opens new challenges for users as well as for designers and researchers. For more than 15 years there have been devices for brain-signal acquisition, such as the Emotiv Epoc, the NeuroSky headset and others. Reliably translating user commands to an application at a global scale has seen no leap in advancement over that lifetime; it remains unsolved for modern scientists and software developers. Success requires the effective interaction of many adaptive controllers: the user's brain, which produces brain activity that encodes intent; the BCI system, which translates that activity into digital signals; and the computer algorithms that translate the brain signals into commands, whose accuracy bounds the whole system. To tackle this complex and monumental task, many teams are exploring a variety of signal-analysis techniques to improve the adaptation of the BCI system to the user. There are few publications that describe the methods, steps and algorithms used to discern different commands, words, signals, etc. This article describes one approach to the retrieval, analysis and processing of the received signals. The data are the result of researching the capabilities of managing an Arduino robot through brain signals received by a BCI.
2013 6th International Conference on Human System Interactions (HSI), 2013
Brain-Computer Interfaces (BCI) are becoming increasingly studied as methods for users to interact with computers, because recent technological developments have led to low-priced, high-precision BCI devices aimed at the mass market. This paper investigates the feasibility of using such a device in real-world applications, as well as the limitations of such applications. The device tested in this paper is the Emotiv EPOC headset, an electroencephalograph (EEG) measuring device that enables the measurement of brain activity using 14 strategically placed sensors. This paper presents: 1) a BCI framework driven completely by thought patterns, aimed at real-world applications; 2) a quantitative analysis of the performance of the implemented system. The Emotiv EPOC-based BCI framework presented in this paper was tested on the problem of controlling a simple differential wheeled robot by identifying four thought patterns in the user: "neutral", "move forward", "turn left", and "turn right". The developed approach was tested on 6 individuals, and the results show that while BCI control of a mobile robot is possible, the precise movement required to guide a robot along a set path is difficult with the current setup. Furthermore, intense concentration is required from users to control the robot accurately.
The brain-computer interface (BCI) aims to use Electroencephalography (EEG) activity or other electrophysiological measures of brain function as new non-muscular channels through which disabled persons can communicate with different smart devices. For this connection, electrophysiological signals recorded with an experimental setup during the execution of five different mental tasks were used. The BCI records brain signals and processes them to produce device commands. This signal processing has two stages. The first stage is feature extraction, calculating the values of specific features of the signals. The second stage is a translation algorithm that maps these features to device commands. After noise filtering, the signals were clustered, classified with a Bayesian Network classifier and a pair-wise classifier, evaluated, and fed into the brain-computer interface for connection with smart devices.
Journal of Neuroscience Methods, 2008
Machine learning methods are an excellent choice for compensating for the high variability of the EEG when analyzing single-trial data in real time. This paper briefly reviews preprocessing and classification techniques for efficient EEG-based brain-computer interfacing (BCI) and mental-state monitoring applications. More specifically, it outlines the Berlin Brain-Computer Interface (BBCI), which can be operated with minimal subject training. Spelling with the novel BBCI-based Hex-o-Spell text entry system, which achieves communication speeds of 6-8 letters per minute, is also discussed. Finally, the results of a real-time arousal monitoring experiment are presented.
Recently, a new technology known as the brain-computer interface (BCI) has received a substantial amount of interest among research groups worldwide. The human brain can be represented by self-organising and complex biochemical states. Due to continuous neuronal activity in the brain, chaotic electric potential waves are observed in Electroencephalogram (EEG) recordings. A BCI involves extracting information from the highly complex EEG. This is achieved by obtaining the dominant discriminating features from EEG signals recorded during specific thought processes. A class of features is obtained from each thought process, and a classifier is then trained to learn which feature belongs to which class, ultimately yielding a system that can determine which thoughts belong to which set of EEG signals. This work outlines a novel method that utilises cybernetic intelligence in the form of Neural Networks (NN). Three NNs are coalesced to perform simplified simulations of several characteristic and complex processes that are sub-consciously performed in the human brain, including prediction, feature extraction and classification. These processes are combined into a pattern recognition system that distinguishes between similar complex patterns in a noisy environment, with classification accuracy that compares satisfactorily to currently reported results. The classification accuracy is achieved by increasing the separability between the features extracted from two EEG signals recorded from subjects during imagination of left and right arm movement.