Objective To investigate the safety and efficacy of endovascular repair for Stanford type B aortic dissection (AD) with severe complications. Methods Between January 2003 and December 2009, 60 patients with Stanford type B AD and severe complications were treated, including 39 males and 21 females with an average age of 43.7 years (range, 34-71 years). Severe complications included 27 cases of massive hemothorax, 1 case of paraplegia, 7 cases of acute renal failure, 10 cases of celiac trunk ischemia, 10 cases of superior mesenteric artery ischemia, and 5 cases of severe limb ischemia. Emergency stent-graft deployment was performed in all patients, and 64 stent-grafts were successfully implanted. Results All patients survived and were followed up for 3-86 months. Hemothorax resolved 28 days to 3 months after operation in all hemothorax patients; renal function returned to normal after 1 to 9 days; limb and visceral ischemia resolved gradually after 1 to 14 days; and muscular strength of the lower limbs in the paraplegia patient began to recover 4 hours after operation. Postoperative CT angiography showed an enlarged true lumen and thrombosis in the false lumen. Conclusion Emergency endovascular repair is a safe and effective method to treat Stanford type B AD with severe complications.
Current studies on electroencephalogram (EEG) emotion recognition primarily concentrate on discrete stimulus paradigms under controlled laboratory settings, which cannot adequately represent the dynamic transition characteristics of emotional states during multi-context interactions. To address this issue, this paper proposes a novel emotion transition recognition method based on a cross-modal feature fusion and global perception network (CFGPN). First, an experimental paradigm encompassing six types of emotion transition scenarios was designed, and EEG and eye movement data were simultaneously collected from 20 participants and annotated with dynamic continuous emotion labels. Subsequently, deep canonical correlation analysis integrated with a cross-modal attention mechanism was employed to fuse features from the EEG and eye movement signals, yielding multimodal feature vectors enriched with highly discriminative emotional information. These vectors were then fed into a parallel hybrid architecture that combines convolutional neural networks (CNNs) and Transformers: the CNN branch captures local time-series features, whereas the Transformer branch exploits its global perception capability to model long-range temporal dependencies, enabling accurate recognition of dynamic emotion transitions. Experimental results demonstrate that the proposed method achieves the lowest mean squared error in both valence and arousal recognition on the dynamic emotion transition dataset and on a classic multimodal emotion dataset, exhibiting superior recognition accuracy and stability compared with five unimodal and six multimodal deep learning models. The approach enhances both adaptability and robustness in recognizing emotional state transitions in real-world scenarios, showing promising potential for applications in biomedical engineering.
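The abstract does not give implementation details of the parallel CNN/Transformer hybrid, but its core idea — a convolutional branch for local temporal patterns running in parallel with a self-attention branch for long-range dependencies, followed by fusion and a valence/arousal regression head — can be sketched minimally in NumPy. All dimensions, weight initializations, and the `conv1d` / `self_attention` helpers below are illustrative assumptions, not the authors' implementation (a single plain convolution stands in for the CNN, and one untrained single-head attention layer stands in for the Transformer).

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w, b):
    """CNN branch: 'same'-padded 1D convolution over time with ReLU.
    x: (T, d_in), w: (k, d_in, d_out), b: (d_out,) -> (T, d_out)."""
    k, d_in, d_out = w.shape
    T = x.shape[0]
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    out = np.empty((T, d_out))
    for t in range(T):
        # contract the (k, d_in) window against the (k, d_in, d_out) kernel
        out[t] = np.tensordot(xp[t:t + k], w, axes=([0, 1], [0, 1])) + b
    return np.maximum(out, 0.0)

def self_attention(x, wq, wk, wv):
    """Attention branch: single-head scaled dot-product self-attention,
    letting every time step attend to every other (global perception)."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(q.shape[1])
    a = np.exp(scores - scores.max(axis=1, keepdims=True))
    a /= a.sum(axis=1, keepdims=True)          # row-wise softmax
    return a @ v

# Hypothetical fused EEG + eye-movement feature sequence: T steps, d dims.
T, d = 32, 16
x = rng.standard_normal((T, d))

w_c = rng.standard_normal((3, d, d)) * 0.1     # kernel size 3
b_c = np.zeros(d)
wq, wk, wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))

local_feats = conv1d(x, w_c, b_c)              # local temporal patterns
global_feats = self_attention(x, wq, wk, wv)   # long-range dependencies
fused = np.concatenate([local_feats, global_feats], axis=1)  # parallel fusion

# Linear regression head mapping fused features to (valence, arousal).
w_out = rng.standard_normal((2 * d, 2)) * 0.1
valence_arousal = fused @ w_out
print(valence_arousal.shape)                   # (32, 2)
```

In a trained version of this design, both branches and the head would be learned jointly; the point of the sketch is only the dataflow — the two branches see the same input sequence side by side, rather than being stacked, so local and global temporal evidence are extracted independently before fusion.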