West China Medical Publishers

Search results for author "LYU Baoliang": 2 results
  • Multi-source adversarial adaptation with calibration for electroencephalogram-based classification of meditation and resting states

    Meditation aims to guide individuals into a state of deep calm and focused attention, and in recent years it has shown promising potential in medical treatment. Numerous studies have demonstrated that electroencephalogram (EEG) patterns change during meditation, suggesting that deep learning techniques could be used to monitor meditation states. However, significant inter-subject differences in EEG signals pose challenges to the performance of such monitoring systems. To address this issue, this study proposed a novel model, the calibrated multi-source adversarial adaptation network (CMAAN). The model first trained multiple domain-adversarial neural networks pairwise, one between each source-domain individual and the target-domain individual. These networks were then integrated through a calibration process that uses a small amount of labeled data from the target domain to enhance performance. We evaluated the proposed model on an EEG dataset collected from 18 subjects undergoing methamphetamine rehabilitation, where it achieved a classification accuracy of 73.09%. Additionally, based on the learned model, we analyzed the key EEG frequency bands and brain regions involved in the meditation process. The proposed multi-source domain adaptation framework improves both the performance and robustness of EEG-based meditation monitoring and holds great promise for applications in biomedical informatics and clinical practice. (A minimal illustrative sketch of the pairwise adversarial building block appears after the result list.)

    Release date: 2025-08-19 11:47
  • A method for emotion transition recognition using cross-modal feature fusion and global perception

    Current studies on electroencephalogram (EEG) emotion recognition concentrate primarily on discrete stimulus paradigms in controlled laboratory settings, which cannot adequately represent how emotional states transition dynamically during multi-context interactions. To address this issue, this paper proposes a method for emotion transition recognition that leverages a cross-modal feature fusion and global perception network (CFGPN). First, an experimental paradigm encompassing six types of emotion transition scenarios was designed, and EEG and eye movement data were simultaneously collected from 20 participants and annotated with dynamic continuous emotion labels. Subsequently, deep canonical correlation analysis integrated with a cross-modal attention mechanism was employed to fuse features from the EEG and eye movement signals, yielding multimodal feature vectors enriched with highly discriminative emotional information. These vectors were then fed into a parallel hybrid architecture that combines convolutional neural networks (CNNs) and Transformers: the CNN branch captures local time-series features, while the Transformer leverages its global perception capability to model long-range temporal dependencies, enabling accurate recognition of dynamic emotion transitions. The results demonstrate that the proposed method achieves the lowest mean squared error in both valence and arousal recognition on the dynamic emotion transition dataset and on a classic multimodal emotion dataset, and it exhibits superior accuracy and stability compared with five unimodal and six multimodal deep learning baselines. The approach improves the adaptability and robustness of recognizing emotional state transitions in real-world scenarios and shows promising potential for applications in biomedical engineering. (A minimal illustrative sketch of the parallel CNN and Transformer design appears after the result list.)

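The core building block of CMAAN, as described in the first abstract, is a set of domain-adversarial neural networks trained pairwise between each source subject and the target subject, then combined through a calibration step that uses a small labeled target-domain set. Below is a minimal PyTorch sketch of one such pairwise network with a gradient-reversal layer, together with a weighted ensemble standing in for the calibration step; the input dimension, layer sizes, and weighting scheme are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a pairwise domain-adversarial network (DANN-style),
# assumed to approximate the building block described in the abstract.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negates (and scales) gradients on the
    backward pass, pushing the feature extractor toward domain-invariant features."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class PairwiseDANN(nn.Module):
    """One source-target pair: a shared feature extractor, a state classifier
    (meditation vs. resting), and an adversarially trained domain discriminator."""
    def __init__(self, in_dim=310, hidden=128):  # 310 = 62 channels x 5 bands (assumed)
        super().__init__()
        self.features = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                      nn.Linear(hidden, hidden), nn.ReLU())
        self.classifier = nn.Linear(hidden, 2)     # meditation / resting
        self.discriminator = nn.Linear(hidden, 2)  # source / target domain

    def forward(self, x, lambd=1.0):
        h = self.features(x)
        return self.classifier(h), self.discriminator(GradReverse.apply(h, lambd))

def calibrated_ensemble(models, x, weights):
    """Combine the pairwise networks with per-model weights estimated on a
    small labeled calibration set from the target subject (assumed scheme)."""
    probs = torch.stack([m(x)[0].softmax(dim=-1) for m in models])  # (n_models, B, 2)
    w = torch.tensor(weights).view(-1, 1, 1)
    return (w * probs).sum(dim=0)                                   # weighted average
```

Under these assumptions, the calibration weights would be fit on the small labeled target-domain set, for example by weighting each pairwise network by its calibration accuracy, so that source subjects whose EEG statistics resemble the target's contribute more to the final prediction.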
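The second abstract describes a parallel CNN plus Transformer architecture (CFGPN) fed by fused EEG and eye-movement features. The sketch below, again in PyTorch, illustrates that parallel design with a single cross-modal attention layer standing in for the paper's DCCA-plus-attention fusion; all dimensions, depths, and the pooling and regression head are illustrative assumptions, not the paper's exact design.

```python
# Minimal sketch of a CFGPN-like parallel hybrid for continuous
# valence/arousal regression; sizes and fusion details are assumptions.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """EEG features attend to eye-movement features (one attention direction
    shown; the paper combines attention with deep canonical correlation analysis)."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, eeg, eye):                      # both: (B, T, dim)
        fused, _ = self.attn(query=eeg, key=eye, value=eye)
        return fused + eeg                            # residual connection

class CFGPNLike(nn.Module):
    """Parallel hybrid: a CNN branch for local temporal patterns and a
    Transformer branch for long-range dependencies, concatenated for regression."""
    def __init__(self, dim=64):
        super().__init__()
        self.fusion = CrossModalFusion(dim)
        self.cnn = nn.Sequential(nn.Conv1d(dim, dim, kernel_size=5, padding=2),
                                 nn.ReLU(),
                                 nn.Conv1d(dim, dim, kernel_size=5, padding=2),
                                 nn.ReLU())
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(2 * dim, 2)             # continuous valence, arousal

    def forward(self, eeg, eye):                      # (B, T, dim) each
        z = self.fusion(eeg, eye)
        local = self.cnn(z.transpose(1, 2)).transpose(1, 2)   # conv over time axis
        glob = self.transformer(z)                            # global dependencies
        feats = torch.cat([local, glob], dim=-1).mean(dim=1)  # pool over time
        return self.head(feats)                               # (B, 2)
```

Training such a model against the continuous labels with mean squared error would match the evaluation metric the abstract reports.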
