Magnetic resonance imaging (MRI) can acquire multi-modal images with different contrasts, providing rich information for clinical diagnosis. However, some contrast images are not scanned, or the acquired images fail to meet diagnostic requirements, owing to poor patient cooperation or limited scanning conditions. Image synthesis has therefore become a means of compensating for such missing images. In recent years, deep learning has been widely applied to MRI synthesis. This paper proposes a synthesis network based on multi-modal fusion: a feature encoder first encodes each unimodal image separately, a feature fusion module then fuses the features of the different modalities, and the target modal image is finally generated. The similarity measure between the target and predicted images is improved by introducing a dynamically weighted combined loss function defined over both the spatial domain and the k-space domain. Experimental validation and quantitative comparison show that the proposed multi-modal fusion deep learning network can effectively synthesize high-quality MRI fluid-attenuated inversion recovery (FLAIR) images. In summary, the proposed method can reduce patients' MRI scanning time and address the clinical problem of missing FLAIR images or FLAIR images whose quality fails to meet diagnostic requirements.
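To make the combined loss concrete, the sketch below implements a weighted sum of an image-domain L1 term and a k-space L1 term computed via the 2-D FFT in PyTorch. The fixed scalar weight `alpha` and the choice of L1 distance are illustrative assumptions; the paper's dynamic weighting schedule is not reproduced here.

```python
import torch
import torch.nn as nn

class SpatialKSpaceLoss(nn.Module):
    """Combined spatial-domain and k-space loss (a minimal sketch)."""
    def __init__(self, alpha: float = 0.5):
        super().__init__()
        self.alpha = alpha   # balance weight; assumed fixed here,
                             # dynamically adjusted in the paper
        self.l1 = nn.L1Loss()

    def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # Spatial-domain term: pixel-wise L1 between predicted and target images.
        spatial = self.l1(pred, target)
        # K-space term: L1 between magnitudes of the 2-D Fourier transforms.
        kspace = self.l1(torch.abs(torch.fft.fft2(pred)),
                         torch.abs(torch.fft.fft2(target)))
        return self.alpha * spatial + (1.0 - self.alpha) * kspace
```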
This article combines deep learning with image analysis technology and proposes an effective classification method for distal radius fracture types. Firstly, an extended three-layer cascaded U-Net segmentation network was used to accurately segment the articular-surface and non-articular-surface regions that matter most for identifying fractures. Then, the articular-surface and non-articular-surface images were classified and trained separately to distinguish fractures. Finally, the two classification results were combined to determine whether the case was normal or an AO type A, B, or C fracture. The accuracy rates for normal, type A, type B, and type C fractures on the test set were 0.99, 0.92, 0.91, and 0.82, respectively, while the corresponding average recognition accuracy rates of orthopedic experts were 0.98, 0.90, 0.87, and 0.81. The proposed automatic recognition method is generally better than the experts and can be used for preliminary auxiliary diagnosis of distal radius fractures in scenarios without expert participation.
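The final fusion step can be pictured as combining the class probabilities from the two region-specific classifiers, as in the sketch below; the equal weighting and class ordering are illustrative assumptions, not the paper's exact rule.

```python
import numpy as np

def fuse_fracture_predictions(p_articular: np.ndarray, p_non_articular: np.ndarray,
                              weights=(0.5, 0.5)) -> int:
    """Combine per-region probability vectors over [normal, A, B, C]
    into a single diagnosis (weighted-average fusion, illustrative)."""
    fused = weights[0] * p_articular + weights[1] * p_non_articular
    return int(np.argmax(fused))   # index of the predicted class

# Example: both region classifiers lean toward a type B fracture.
print(fuse_fracture_predictions(np.array([0.10, 0.20, 0.60, 0.10]),
                                np.array([0.05, 0.15, 0.70, 0.10])))  # -> 2
```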
In fetal electrocardiogram (ECG) signal extraction, the single scale of the same-level convolutional encoders in U-Net ignores the differences in size and shape between maternal and fetal ECG characteristic waves, and the threshold learning process of the encoder's residual shrinkage module does not use the temporal information of the ECG signal. This paper proposes a fetal ECG extraction method based on a multi-scale residual shrinkage U-Net model. First, Inception modules and time-domain attention were introduced into the residual shrinkage module to enhance the multi-scale feature extraction ability of the same-level convolutional encoder and the utilization of the time-domain information of the fetal ECG signal. To preserve more local details of the ECG waveform, the max pooling in U-Net was replaced with SoftPool. Finally, a decoder composed of residual modules and up-sampling layers gradually reconstructed the fetal ECG signal. Experiments on clinical ECG signals showed that, compared with other fetal ECG extraction algorithms, the proposed method extracts clearer fetal ECG signals. On the 2013 competition data set, the sensitivity, positive predictive value, and F1 score reached 93.33%, 99.36%, and 96.09%, respectively, indicating that the method can effectively extract fetal ECG signals and has practical value for perinatal fetal health monitoring.
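To make the soft-thresholding idea concrete, the sketch below shows a basic 1-D residual shrinkage block in PyTorch, in which a small subnetwork learns per-channel thresholds from feature magnitudes; the Inception branches and time-domain attention described above are omitted for brevity.

```python
import torch
import torch.nn as nn

class ResidualShrinkageBlock1d(nn.Module):
    """Residual shrinkage unit for 1-D signals (a minimal sketch)."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm1d(channels), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm1d(channels),
        )
        # Maps per-channel average magnitudes to threshold scaling factors.
        self.fc = nn.Sequential(
            nn.Linear(channels, channels), nn.ReLU(),
            nn.Linear(channels, channels), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (batch, C, T)
        f = self.conv(x)
        avg = torch.mean(torch.abs(f), dim=2)              # (batch, C)
        tau = (avg * self.fc(avg)).unsqueeze(2)            # learned thresholds
        # Soft thresholding: shrink small (noise-like) activations toward zero.
        f = torch.sign(f) * torch.clamp(torch.abs(f) - tau, min=0.0)
        return torch.relu(f + x)
```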
Melanocytic lesions occur on the surface of the skin; the malignant type, melanoma, has a high fatality rate and seriously endangers human health. Histopathological analysis is the gold standard for diagnosing melanocytic lesions. In this study, a fully automated deep learning-based diagnosis method was proposed to classify pathological whole-slide images (WSI) of melanocytic lesions. Firstly, CycleGAN-based color normalization was performed on multi-center pathological WSI. Secondly, a deep convolutional prediction model based on the ResNet-152 network was built using 745 WSI. Then, a cascaded decision fusion model calculated the average prediction probability for each WSI. Finally, the diagnostic performance of the proposed method was verified on internal and external test sets containing 182 and 54 WSI, respectively. Experimental results showed that the overall diagnostic accuracy of the proposed method reached 94.12% on the internal test set and exceeded 90% on the external test set. Furthermore, the adopted color normalization method was superior to traditional color-statistics-based and stain-separation-based methods in terms of structure preservation and artifact suppression. These results demonstrate that the proposed method achieves high precision and strong robustness in pathological WSI classification of melanocytic lesions and has the potential to promote the clinical application of computer-aided pathological diagnosis.
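The decision fusion step reduces, in essence, to averaging the patch-level probabilities of each slide, as in the hedged sketch below; the 0.5 cut-off and binary labels are illustrative assumptions.

```python
import numpy as np

def wsi_prediction(patch_probs: np.ndarray, threshold: float = 0.5) -> int:
    """Slide-level decision: average the patch-level malignancy probabilities
    predicted for one WSI, then threshold (illustrative cut-off)."""
    return int(float(np.mean(patch_probs)) >= threshold)   # 1 = melanoma

# Example: five patch probabilities from one hypothetical slide.
print(wsi_prediction(np.array([0.9, 0.8, 0.7, 0.95, 0.6])))   # -> 1
```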
The brain-computer interface (BCI) based on motor imagery electroencephalography (MI-EEG) enables direct information exchange between the human brain and external devices. This paper proposes a multi-scale EEG feature extraction convolutional neural network model based on time-series data augmentation for decoding MI-EEG signals. First, an EEG signal augmentation method was proposed that increases the information content of training samples without changing the length of the time series, while fully retaining the original features. Then, multiple holistic and detailed features of the EEG data were adaptively extracted by a multi-scale convolution module, and the features were fused and filtered by a parallel residual module and channel attention. Finally, the classification results were output by a fully connected network. Experiments on the BCI Competition IV 2a and 2b datasets showed that the proposed model achieved average classification accuracies of 91.87% and 87.85%, respectively, on the motor imagery task, with high accuracy and strong robustness compared with existing baseline models. The proposed model requires no complex signal pre-processing and has the advantage of multi-scale feature extraction, giving it high practical value.
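As one concrete piece of such a pipeline, the sketch below implements squeeze-and-excitation style channel attention over EEG feature maps in PyTorch; it is a generic formulation and may differ in detail from the module used in the paper.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (a minimal sketch)."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (batch, C, T)
        w = self.fc(x.mean(dim=2))   # squeeze over time, one weight per channel
        return x * w.unsqueeze(2)    # reweight the feature channels
```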
Objective To construct and evaluate an artificial intelligence (AI)-assisted screening and diagnostic system based on color fundus images for optic neuritis (ON) and non-arteritic anterior ischemic optic neuropathy (NAION). Methods A diagnostic test study. A total of 267 eyes of 178 NAION patients (NAION group) and 346 eyes of 204 ON patients (ON group) examined and diagnosed at the Zhongshan Ophthalmic Center of Sun Yat-sen University from 2016 to 2020 were included; 1 160 eyes of 513 healthy individuals (normal control group) with normal fundus confirmed by visual acuity, intraocular pressure, and optical coherence tomography examinations were collected from 2018 to 2020. All 2 909 color fundus images served as the data set of the screening and diagnostic system, including 730, 805, and 1 374 images in the NAION, ON, and normal control groups, respectively. The correctly labeled color fundus images were used as input data, and the EfficientNet-B0 algorithm was selected for model training and validation. Finally, three systems were constructed, for screening abnormal optic discs, ON, and NAION, respectively. The receiver operating characteristic (ROC) curve, area under the ROC curve (AUC), accuracy, sensitivity, specificity, and heat maps were used as indicators of diagnostic efficacy. Results On the test data set, the AUC for diagnosing the presence of an abnormal optic disc, ON, and NAION were 0.967 [95% confidence interval (CI) 0.947-0.980], 0.964 (95%CI 0.938-0.979), and 0.979 (95%CI 0.958-0.989), respectively. The activation areas of the systems during decision-making were mainly located in the optic disc region. Conclusion The screening and diagnostic systems for abnormal optic discs, ON, and NAION based on color fundus images show accurate and efficient diagnostic performance.
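For readers implementing a comparable pipeline, the sketch below shows how an EfficientNet-B0 backbone can be adapted to a three-class task (normal / ON / NAION) with torchvision; the pretrained weights and head replacement are assumptions of this sketch, not details from the study.

```python
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained EfficientNet-B0 and swap in a 3-class head.
model = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.IMAGENET1K_V1)
in_features = model.classifier[1].in_features   # 1280 for EfficientNet-B0
model.classifier[1] = nn.Linear(in_features, 3)
```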
Objective To develop a deep learning system for CT images to assist in the diagnosis of thoracolumbar fractures and to analyze the feasibility of its clinical application. Methods A total of 1 256 CT images of thoracolumbar fractures, collected from West China Hospital of Sichuan University between January 2019 and March 2020, were annotated to a unified standard using the LabelImg image annotation system. All CT images were classified according to the AO Spine thoracolumbar spine injury classification. For diagnosing type A, B, and C fractures, the deep learning system was optimized with 1 039 CT images for training and validation (1 004 for training and 35 for validation); the remaining 217 CT images served as the test set for comparing the deep learning system with clinicians' diagnoses. For subtyping type A fractures, the system was optimized with 581 CT images for training and validation (556 for training and 25 for validation); the remaining 104 CT images served as the test set for the comparison with clinicians' diagnoses. Results The accuracy and Kappa coefficient of the deep learning system in diagnosing type A, B, and C fractures were 89.4% and 0.849 (P<0.001), respectively; for subtyping type A fractures they were 87.5% and 0.817 (P<0.001), respectively. Conclusions The deep learning system classifies thoracolumbar fractures with high accuracy. It can be used to assist in the intelligent diagnosis of CT images of thoracolumbar fractures and to improve the current manual and complex diagnostic process.
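The reported agreement metrics can be computed with scikit-learn, as sketched below; the labels shown are hypothetical and serve only to illustrate how accuracy and the Kappa coefficient are derived from predicted versus reference AO types.

```python
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Hypothetical system predictions vs. reference AO type annotations.
y_true = ["A", "A", "B", "C", "B", "A", "C", "B"]
y_pred = ["A", "A", "B", "C", "A", "A", "C", "B"]
print(accuracy_score(y_true, y_pred))      # fraction of exact agreements
print(cohen_kappa_score(y_true, y_pred))   # chance-corrected agreement
```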
Objective To study retinal blood oxygen saturation and vascular morphology changes in eyes with branch retinal vein occlusion (BRVO) using a deep learning-based dual-modality fundus camera. Methods A prospective study. From May to October 2020, 31 eyes of 31 patients with BRVO (BRVO group) and 20 eyes of 20 gender- and age-matched healthy volunteers (control group) were included. Among the 31 BRVO eyes, 20 had previously received one intravitreal injection of an anti-vascular endothelial growth factor (anti-VEGF) drug and 11 had received no treatment; they were divided into a treated group and an untreated group accordingly. Retinal images were collected with a dual-modality fundus camera, and arterial and venous segments were segmented within the macular region of interest (MROI) using deep learning. The optical density ratio was used to calculate retinal blood oxygen saturation (SO2) on the affected and non-involved sides in the BRVO group and in the control group, and the diameter, curvature, fractal dimension, and density of the arteries and veins in the MROI were calculated. Quantitative data were compared between groups using one-way analysis of variance. Results There was a statistically significant difference in arterial SO2 (SO2-A) in the MROI among the affected eyes and fellow eyes of the BRVO group and the control group (F=4.925, P<0.001), but no difference in venous SO2 (SO2-V) (F=0.607, P=0.178). Compared with the control group, SO2-A in the MROI on both the affected and non-involved sides of the untreated group was increased, and the difference was statistically significant (F=4.925, P=0.012); there was no significant difference in SO2-V (F=0.607, P=0.550). There was no significant difference in SO2-A or SO2-V in the MROI between the affected and non-involved sides of the treated group and the control group (F=0.159, 1.701; P=0.854, 0.197), nor between the affected sides of the treated group, the untreated group, and the control group (F=2.553, 0.265; P=0.088, 0.546). Artery diameter, artery curvature, artery fractal dimension, vein fractal dimension, artery density, and vein density differed significantly among the untreated group, the treated group, and the control group (F=3.527, 3.322, 7.251, 26.128, 4.782, 5.612; P=0.047, 0.044, 0.002, <0.001, 0.013, 0.006); vein diameter and vein curvature did not (F=2.132, 1.199; P=0.143, 0.321). Conclusion Arterial SO2 in BRVO eyes is higher than in healthy eyes and decreases after anti-VEGF treatment, while SO2-V remains unchanged.
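As background on the oximetry step, dual-wavelength retinal oximetry typically computes an optical density ratio and maps it linearly to SO2. The sketch below follows that standard model; the wavelengths and calibration constants are placeholders, not values from this study.

```python
import numpy as np

def so2_from_odr(i_vessel_570, i_bg_570, i_vessel_600, i_bg_600,
                 a=125.0, b=-140.0):
    """Estimate SO2 (%) from vessel/background intensities at two wavelengths.

    OD = log10(I_background / I_vessel); SO2 is modeled as a linear function
    of ODR = OD(oxygen-sensitive, ~600 nm) / OD(isosbestic, ~570 nm).
    The calibration constants a, b are placeholders.
    """
    od_570 = np.log10(i_bg_570 / i_vessel_570)
    od_600 = np.log10(i_bg_600 / i_vessel_600)
    return a + b * (od_600 / od_570)

# Example with illustrative intensity values.
print(so2_from_odr(80.0, 120.0, 95.0, 115.0))
```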
Objective To systematically evaluate the efficacy and safety of computer-aided detection (CADe)-assisted colonoscopy versus conventional colonoscopy in identifying colorectal adenomas and polyps. Methods The PubMed, Embase, Cochrane Library, Web of Science, WanFang Data, VIP, and CNKI databases were electronically searched for randomized controlled trials (RCTs) comparing the effectiveness and safety of CADe-assisted colonoscopy and conventional colonoscopy in detecting colorectal tumors, from 2014 to April 2023. Two reviewers independently screened the literature, extracted data, and evaluated the risk of bias of the included studies. Meta-analysis was performed with RevMan 5.3 software. Results A total of 9 RCTs involving 6 393 patients were included. Compared with conventional colonoscopy, the CADe system significantly improved the adenoma detection rate (ADR) (RR=1.22, 95%CI 1.10 to 1.35, P<0.01) and the polyp detection rate (PDR) (RR=1.19, 95%CI 1.04 to 1.36, P=0.01), and reduced the adenoma miss rate (AMR) (RR=0.48, 95%CI 0.34 to 0.67, P<0.01) and the polyp miss rate (PMR) (RR=0.39, 95%CI 0.25 to 0.59, P<0.01). The PDR of proximal polyps increased significantly, the PDR of polyps ≤5 mm increased slightly, and the PDR of polyps >10 mm and of pedunculated polyps decreased significantly. The AMR in the cecum, transverse colon, descending colon, and sigmoid colon was significantly reduced. There was no statistically significant difference in withdrawal time between the two groups. Conclusion The CADe system can increase the detection rate of adenomas and polyps and reduce the miss rate. The polyp detection rate is related to polyp location, size, and shape, while the adenoma miss rate is related to location.
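For illustration, the fixed-effect pooling behind such risk ratios can be sketched as follows; this is a simplified inverse-variance version of what RevMan computes (RevMan's Mantel-Haenszel weighting differs slightly), and the trial counts in the example are invented.

```python
import numpy as np

def pooled_rr(events_t, n_t, events_c, n_c):
    """Inverse-variance fixed-effect pooling of risk ratios on the log scale."""
    log_rr = np.log((events_t / n_t) / (events_c / n_c))
    var = 1 / events_t - 1 / n_t + 1 / events_c - 1 / n_c  # var of log RR
    w = 1.0 / var
    pooled = np.sum(w * log_rr) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    ci = np.exp([pooled - 1.96 * se, pooled + 1.96 * se])
    return np.exp(pooled), ci

# Two hypothetical trials (counts are illustrative only).
rr, ci = pooled_rr(np.array([120., 90.]), np.array([400., 300.]),
                   np.array([100., 75.]), np.array([400., 300.]))
print(round(rr, 2), np.round(ci, 2))
```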
Breast cancer is a malignancy caused by the abnormal proliferation of breast epithelial cells, predominantly affecting women, and is commonly diagnosed from histopathological images. Deep learning techniques have made significant breakthroughs in medical image processing, outperforming traditional detection methods in breast cancer pathology classification tasks. This paper first reviewed advances in applying deep learning to breast pathology images across three key areas: multi-scale feature extraction, cellular feature analysis, and classification. It then summarized the advantages of multimodal data fusion for breast pathology images. Finally, it discussed the challenges and future prospects of deep learning in breast cancer pathology image diagnosis, providing guidance for advancing its use in breast cancer diagnosis.