Coding with high-frequency stimuli could alleviate the visual fatigue that steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) induce in users, improving the comfort and safety of such systems and opening promising applications. However, most state-of-the-art SSVEP decoding algorithms have been compared and validated only on low-frequency SSVEP datasets, and their recognition performance on high-frequency SSVEPs remains unknown. To address this issue, electroencephalogram (EEG) data were collected from 20 subjects using a high-frequency SSVEP paradigm. State-of-the-art SSVEP algorithms were then compared, including two canonical correlation analysis (CCA) algorithms, three task-related component analysis (TRCA) algorithms, and one task-discriminant component analysis (TDCA) algorithm. The results indicated that all of them could effectively decode high-frequency SSVEPs, although classification performance and computation speed differed across conditions. This paper provides a basis for algorithm selection in high-frequency SSVEP-BCIs and demonstrates their potential for developing user-friendly BCIs.
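To make the compared algorithm family concrete, the following is a minimal sketch of standard CCA-based SSVEP frequency detection, the baseline from which the methods above extend. The sampling rate, stimulus frequencies, and harmonic count are illustrative assumptions, not values from the dataset described in the abstract; `eeg` is assumed to be a (channels x samples) array from one trial.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

fs = 250                       # sampling rate in Hz (assumed)
stim_freqs = [30, 31, 32, 33]  # example high-frequency targets in Hz (assumed)
n_harmonics = 3

def make_reference(freq, n_samples, fs, n_harmonics):
    """Sine/cosine reference signals at the stimulus frequency and its harmonics."""
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.stack(refs, axis=1)  # (samples, 2 * n_harmonics)

def cca_classify(eeg):
    """Return the index of the stimulus frequency with the largest canonical correlation."""
    n_samples = eeg.shape[1]
    scores = []
    for f in stim_freqs:
        Y = make_reference(f, n_samples, fs, n_harmonics)
        cca = CCA(n_components=1)
        x_c, y_c = cca.fit_transform(eeg.T, Y)
        scores.append(np.corrcoef(x_c[:, 0], y_c[:, 0])[0, 1])
    return int(np.argmax(scores))
```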
Brain-computer interface (BCI) systems based on steady-state visual evoked potential (SSVEP) have become one of the major paradigms in BCI research due to their high signal-to-noise ratio and the short user training time they require. Fast and accurate decoding of SSVEP features is a crucial step in SSVEP-BCI research. However, current research lacks a systematic overview of SSVEP decoding algorithms and an analysis of the connections and differences between them, making it difficult for researchers to choose the optimal algorithm for a given situation. To address this problem, this paper reviews the progress of SSVEP decoding algorithms in recent years and divides them into two categories, trained and non-trained, according to whether training data are needed. It also explains the fundamental theories and application scopes of decoding algorithms such as canonical correlation analysis (CCA), task-related component analysis (TRCA), and their extensions, summarizes commonly used processing strategies for decoding algorithms, and finally discusses the challenges and opportunities in this field.
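One of the common processing strategies surveyed by such reviews is the filter bank, as used in filter bank CCA (FBCCA, a non-trained extension of CCA). Below is a minimal sketch of the strategy under assumed parameters: EEG is decomposed into sub-bands, each sub-band is scored by any per-frequency scorer (e.g., `cca_classify`-style correlations), and the squared scores are combined with the weights w(m) = m^(-1.25) + 0.25 proposed by Chen et al. (2015). The sub-band edges and sampling rate here are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250  # sampling rate in Hz (assumed)

def subband_filter(eeg, low, high, fs, order=4):
    """Zero-phase band-pass filter applied along the sample axis."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg, axis=1)

def filter_bank_score(eeg, score_fn, n_bands=5):
    """Combine per-sub-band scores; score_fn returns one score per candidate frequency."""
    total = None
    for m in range(1, n_bands + 1):
        band = subband_filter(eeg, 8 * m, 90, fs)  # sub-band edges 8m-90 Hz (assumed)
        w = m ** -1.25 + 0.25                      # FBCCA sub-band weights
        s = w * np.asarray(score_fn(band)) ** 2
        total = s if total is None else total + s
    return total  # combined score per candidate frequency
```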
Attention concentrates our mental resources on processing objects of interest; it is an important mental behavior and cognitive process. Recognizing attentional states has great significance for improving human performance and reducing errors. However, there is still no direct and standardized way to monitor a person's attentional state. Based on the fact that visual attention modulates the steady-state visual evoked potential (SSVEP), we designed a go/no-go experimental paradigm with 10 Hz steady-state visual stimulation in the background to investigate the separability of SSVEP features modulated by different visual attentional states. EEG signals were recorded from 15 postgraduate volunteers under high and low visual attentional states, with the two states determined by behavioral responses. We analyzed the differences in SSVEP signals between the high and low attentional levels and applied classification algorithms to recognize these differences. The results showed that the discriminant canonical pattern matching (DCPM) algorithm outperformed both the linear discriminant analysis (LDA) algorithm and the canonical correlation analysis (CCA) algorithm, achieving up to 76% accuracy. Our results show that SSVEP features modulated by different visual attentional states are separable, which provides a new way to monitor visual attentional states.
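As one plausible baseline for this kind of attention decoding, the sketch below extracts the spectral power at the 10 Hz stimulation frequency and feeds it to an LDA classifier. This is an illustrative pipeline under assumed parameters, not the DCPM method the study found best; the sampling rate and feature choice are assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

fs = 250  # sampling rate in Hz (assumed)

def ssvep_power(eeg, freq=10.0):
    """Per-channel spectral power at the stimulation frequency; eeg is (channels, samples)."""
    spec = np.abs(np.fft.rfft(eeg, axis=1)) ** 2
    freqs = np.fft.rfftfreq(eeg.shape[1], d=1 / fs)
    idx = np.argmin(np.abs(freqs - freq))
    return spec[:, idx]  # one power value per channel

def fit_attention_lda(trials, labels):
    """trials: (n_trials, channels, samples); labels: 1 = high attention, 0 = low."""
    X = np.stack([ssvep_power(t) for t in trials])
    return LinearDiscriminantAnalysis().fit(X, labels)
```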
In recent years, hybrid brain-computer interfaces (BCIs) have gained significant attention due to their demonstrated advantages in increasing the number of targets and enhancing system robustness. However, existing studies usually construct BCI systems using intense auditory stimulation and strong central visual stimulation, which leads to a poor user experience and indicates a need to improve system comfort. Studies have shown that peripheral visual stimulation and lower-intensity auditory stimulation can effectively improve user comfort. Therefore, this study used high-frequency peripheral visual stimulation and weak 40 dB auditory stimulation to elicit steady-state visual evoked potential (SSVEP) and auditory steady-state response (ASSR) signals, building a high-comfort hybrid BCI based on weak audio-visual evoked responses. The system coded 40 targets via 20 high-frequency visual stimulation frequencies and two auditory stimulation frequencies, improving the coding efficiency of BCI systems. The results showed that the hybrid system's average classification accuracy was (78.00 ± 12.18)%, and its information transfer rate (ITR) reached 27.47 bits/min. This study offers new ideas for the design of hybrid BCI paradigms based on imperceptible stimulation.
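The hybrid coding scheme itself (20 visual frequencies x 2 auditory frequencies = 40 targets) can be sketched as below. The decoder calls are hypothetical placeholders for the SSVEP and ASSR classifiers, and all frequency values are assumptions rather than the paper's actual stimulus set.

```python
import numpy as np

visual_freqs = np.linspace(30.0, 39.5, 20)  # 20 high-frequency visual codes in Hz (assumed)
auditory_freqs = [37.0, 43.0]               # 2 auditory modulation rates in Hz (assumed)

def decode_target(eeg, ssvep_decoder, assr_decoder):
    """Fuse the visual and auditory decisions into one of 40 target indices."""
    v = ssvep_decoder(eeg)  # index into visual_freqs (0-19)
    a = assr_decoder(eeg)   # index into auditory_freqs (0-1)
    return v * len(auditory_freqs) + a  # target index 0-39
```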
This study investigates a brain-computer interface (BCI) system based on an augmented reality (AR) environment and steady-state visual evoked potentials (SSVEP). The system is designed to facilitate the selection of real-world objects through visual gaze in real-life scenarios. By integrating object detection and AR technology, the system augments real objects with visual enhancements, providing users with visual stimuli that induce corresponding brain signals. SSVEP technology is then used to interpret these brain signals and identify the objects that users focus on. Additionally, an adaptive dynamic time-window-based filter bank canonical correlation analysis was employed to rapidly parse the subjects' brain signals. Experimental results indicated that the system could effectively recognize SSVEP signals, achieving an average accuracy of 90.6% in visual target identification. This system extends the application of SSVEP signals to real-life scenarios, demonstrating feasibility and efficacy in assisting individuals with mobility impairments and physical disabilities in object selection tasks.
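A dynamic time-window decision rule of the kind used above can be sketched as follows: the analyzed EEG segment is extended until the best filter-bank score clears a confidence threshold or a maximum window is reached. The threshold, step size, and window limits here are illustrative assumptions, not the paper's adaptive criterion.

```python
import numpy as np

def dynamic_window_decode(get_eeg, score_fn, fs=250,
                          t_min=0.5, t_max=3.0, step=0.2, threshold=0.45):
    """get_eeg(n_samples) returns the EEG collected so far (channels x samples);
    score_fn(eeg) returns one score per candidate frequency."""
    t = t_min
    while True:
        eeg = get_eeg(int(t * fs))
        scores = np.asarray(score_fn(eeg))
        best = int(np.argmax(scores))
        if scores[best] >= threshold or t >= t_max:
            return best, t  # decision and the data length it required
        t += step           # otherwise extend the window and rescore
```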
Brain control is a new control method. Traditional brain-controlled robots are mainly used to control a single robot to accomplish a specific task, whereas the brain-controlled multi-robot cooperation (MRC) task is a new topic to be studied. This paper presents an experimental study that received the "Innovation Creative Award" in the brain-computer interface (BCI) brain-controlled robot contest at the World Robot Contest. Two effective brain switches were set up, a total control switch and a transfer switch, and a BCI based on steady-state visual evoked potentials (SSVEP) was adopted to navigate a humanoid robot and a mechanical arm to complete the cooperation task. Control tests with 10 subjects showed that a well-performing SSVEP-BCI can achieve the MRC task when the brain switches are set up appropriately. This study is expected to provide inspiration for future practical brain-controlled MRC task systems.
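The two-switch logic described above can be summarized as a small state machine: the total control switch toggles the whole system on and off, and the transfer switch hands control between the two robots. This is a minimal sketch with hypothetical command names, not the contest system's actual implementation.

```python
class BrainSwitchController:
    def __init__(self):
        self.active = False
        self.target = "humanoid"  # current robot: "humanoid" or "arm"

    def on_command(self, cmd):
        if cmd == "TOTAL_SWITCH":          # toggle overall brain control
            self.active = not self.active
            return None
        if not self.active:                # ignore everything while idle
            return None
        if cmd == "TRANSFER_SWITCH":       # hand control to the other robot
            self.target = "arm" if self.target == "humanoid" else "humanoid"
            return None
        return (self.target, cmd)          # route a motion command to the current robot
```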
Brain-computer interfaces (BCIs) based on steady-state visual evoked potential (SSVEP) have attracted much attention in the field of intelligent robotics. Traditional SSVEP-based BCI systems mostly use synchronous triggers without identifying whether the user is in a control or non-control state, so the resulting systems lack autonomous control capability. Therefore, this paper proposed an SSVEP asynchronous state recognition method that constructs an asynchronous state recognition model by fusing multiple time-frequency domain features of electroencephalogram (EEG) signals and combining them with linear discriminant analysis (LDA) to improve the accuracy of SSVEP asynchronous state recognition. Furthermore, to address the control needs of disabled individuals in multitasking scenarios, a brain-machine fusion system based on asynchronous cooperative SSVEP-BCI control was developed. This system enabled collaborative control of a wearable manipulator and a robotic arm, with the robotic arm acting as a "third hand" that offers significant advantages in complex environments. The experimental results showed that the proposed SSVEP asynchronous control algorithm and brain-machine fusion system could assist users in completing multitasking cooperative operations. The average accuracy of user intent recognition in online control experiments was 93.0%, which provides a theoretical and practical basis for the practical application of asynchronous SSVEP-BCI systems.
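The core idea, fusing time- and frequency-domain features and classifying control versus idle with LDA, can be sketched as below. The specific features (per-channel variance plus power at each stimulus frequency) and the sampling rate are illustrative assumptions, not the paper's exact feature set.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

fs = 250  # sampling rate in Hz (assumed)

def time_freq_features(eeg, stim_freqs=(8, 10, 12, 15)):
    """Concatenate a time-domain feature (per-channel variance) with
    frequency-domain features (power at each stimulus frequency)."""
    var = eeg.var(axis=1)
    spec = np.abs(np.fft.rfft(eeg, axis=1)) ** 2
    freqs = np.fft.rfftfreq(eeg.shape[1], d=1 / fs)
    power = [spec[:, np.argmin(np.abs(freqs - f))] for f in stim_freqs]
    return np.concatenate([var] + power)

def fit_async_detector(trials, labels):
    """trials: (n_trials, channels, samples); labels: 1 = control, 0 = idle."""
    X = np.stack([time_freq_features(t) for t in trials])
    return LinearDiscriminantAnalysis().fit(X, labels)
```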
The brain-controlled wheelchair (BCW) is one of the important applications of brain-computer interface (BCI) technology. Previous research shows that simulated control training is of great significance for the application of BCWs. To improve users' BCW control ability and promote the safe application of BCWs, this paper builds an indoor simulation training system for BCWs based on steady-state visual evoked potentials. The system includes visual stimulus paradigm design and implementation, electroencephalogram acquisition and processing, indoor simulation environment modeling, path planning, and simulated wheelchair control. To test the system's performance, a training experiment involving three kinds of indoor path-control tasks was designed, and 10 subjects were recruited for a 5-day training experiment. Comparing the results before and after training, the average number of commands in Task 1, Task 2, and Task 3 decreased by 29.5%, 21.4%, and 25.4%, respectively (P < 0.001), and the average number of commands the subjects used to complete all tasks decreased by 25.4% (P < 0.001). The experimental results show that training with this indoor simulation training system can improve subjects' proficiency and efficiency in BCW control to a certain extent, which verifies the practicability of the system and provides an effective auxiliary method for promoting the indoor application of BCWs.
A brain-computer interface (BCI) system achieves communication and control between humans and computers or other electronic equipment using electroencephalogram (EEG) signals. This paper describes the working theory of a wireless smart home system based on BCI technology. Steady-state visual evoked potentials (SSVEP) were first obtained using a single-chip microcomputer and LED-based visual stimulation of the eyes. Then, through a power spectral transformation built on the LabVIEW platform, the EEG signals evoked under different stimulation frequencies were processed in real time and converted into different instructions. These instructions were received by wireless transceiver equipment to control household appliances and achieve intelligent control of the specified devices. The experimental results showed that the accuracy for the 10 subjects reached 100% and the average control time for a single device was 4 seconds; thus this design fully achieved the original purpose of a smart home system.
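The frequency-to-command mapping at the heart of this system can be sketched as follows: detect the dominant stimulation frequency in the power spectrum and look up the corresponding appliance instruction. The original pipeline ran in LabVIEW; this Python sketch, with assumed frequencies, a single-channel signal, and a hypothetical command table, only illustrates the principle.

```python
import numpy as np

fs = 250  # sampling rate in Hz (assumed)
commands = {8.0: "LIGHT_ON", 10.0: "LIGHT_OFF",
            12.0: "FAN_ON",  15.0: "FAN_OFF"}  # hypothetical frequency-to-command table

def detect_command(eeg_channel):
    """Pick the stimulation frequency with the highest spectral power and map it to a command."""
    spec = np.abs(np.fft.rfft(eeg_channel)) ** 2
    freqs = np.fft.rfftfreq(eeg_channel.size, d=1 / fs)
    best = max(commands, key=lambda f: spec[np.argmin(np.abs(freqs - f))])
    return commands[best]
```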
Brain-computer interfaces (BCIs) have great potential to replace lost upper limb function, so there has been great interest in the development of BCI-controlled robotic arms. However, few studies have attempted to use a noninvasive electroencephalography (EEG)-based BCI to achieve high-level control of a robotic arm. In this paper, a high-level control architecture combining an augmented reality (AR) BCI and computer vision was designed to control a robotic arm in a pick-and-place task. A steady-state visual evoked potential (SSVEP)-based BCI paradigm was adopted to realize the BCI system. Microsoft's HoloLens was used to build the AR environment and served as the visual stimulator for eliciting SSVEPs. The proposed AR-BCI was used to select the objects to be operated on by the robotic arm, while computer vision provided the location, color, and shape information of the objects. According to the outputs of the AR-BCI and computer vision, the robotic arm could autonomously pick up the object and place it at a specified location. Online results from 11 healthy subjects showed that the average classification accuracy of the proposed system was 91.41%. These results verify the feasibility of combining AR, BCI, and computer vision to control a robotic arm and are expected to provide new ideas for innovative robotic arm control approaches.
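The high-level control flow described above can be summarized in a short sketch: the AR-BCI supplies the selected object, computer vision supplies its pose, and the robotic arm executes the pick-and-place autonomously. All interfaces here are hypothetical placeholders, not the paper's actual APIs.

```python
def pick_and_place(bci, vision, arm, place_pose):
    """One autonomous pick-and-place cycle driven by a BCI selection."""
    target_id = bci.decode_selection()    # SSVEP decision from the AR stimuli
    detections = vision.detect_objects()  # location/color/shape per detected object
    obj = detections[target_id]
    arm.move_to(obj.pose)                 # approach the chosen object
    arm.grasp()
    arm.move_to(place_pose)               # carry it to the specified location
    arm.release()
```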