Accurate segmentation of breast ultrasound images is an important precondition for lesion assessment. Existing segmentation approaches suffer from massive parameter counts, sluggish inference, and heavy memory consumption. To tackle this problem, we propose T2KD Attention U-Net (dual-teacher knowledge distillation Attention U-Net), a lightweight semantic segmentation method for breast ultrasound images based on dual-path joint distillation. First, given the different feature representations and semantic information of benign and malignant breast lesions, we designed two teacher models to learn fine-grained features from each class of images. Then we leveraged joint distillation to train a lightweight student model. Finally, we constructed a novel weight balance loss that focuses on the semantic features of small objects, alleviating the imbalance between tumor and background. Extensive experiments on Dataset BUSI and Dataset B demonstrated that T2KD Attention U-Net outperformed various knowledge distillation counterparts. Concretely, the accuracy, recall, precision, Dice, and mIoU of the proposed method were 95.26%, 86.23%, 85.09%, 83.59% and 77.78% on Dataset BUSI, respectively, and 97.95%, 92.80%, 88.33%, 88.40% and 82.42% on Dataset B, respectively. Compared with other models, the performance of this model was significantly improved. Meanwhile, compared with the teacher model, the parameter count, size, and complexity of the student model were greatly reduced (2.2×10⁶ vs. 106.1×10⁶ parameters, 8.4 MB vs. 414 MB, 16.59 GFLOPs vs. 205.98 GFLOPs, respectively). Indeed, the proposed model maintains performance while greatly decreasing the amount of computation, offering a new option for deployment in clinical medical scenarios.
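The dual-teacher joint distillation described above can be sketched as a weighted sum of soft-target losses from each teacher. This is a minimal illustration only; the temperature `T` and mixing weight `alpha` are hypothetical hyperparameters, not values from the paper, and real training would operate on per-pixel logits of segmentation maps.

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over a list of logits."""
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kd_loss(student_logits, teacher_logits, T=4.0):
    """Soft-target KL divergence between teacher and student distributions."""
    p = softmax(teacher_logits, T)  # teacher soft targets
    q = softmax(student_logits, T)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q)) * T * T

def dual_teacher_kd_loss(student_logits, benign_teacher_logits,
                         malignant_teacher_logits, alpha=0.5, T=4.0):
    """Combine distillation signals from the benign and malignant teachers."""
    return (alpha * kd_loss(student_logits, benign_teacher_logits, T)
            + (1 - alpha) * kd_loss(student_logits, malignant_teacher_logits, T))
```

The loss vanishes when the student matches a teacher exactly and grows as their output distributions diverge, which is what drives the lightweight student toward the teachers' behavior.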
Objective To systematically review the methodological quality of guidelines on attention-deficit/hyperactivity disorder (ADHD) in children and adolescents, and to compare the similarities and differences among the recommended drugs, so as to provide guidance for clinical practice. Methods Guidelines on ADHD were electronically retrieved from PubMed, EMbase, VIP, WanFang Data, CNKI, NGC (National Guideline Clearinghouse), GIN (Guidelines International Network), and NICE (National Institute for Health and Clinical Excellence) from inception to December 2013. The methodological quality of the included guidelines was evaluated with the AGREE Ⅱ instrument, and the differences between recommendations were compared. Results A total of 9 guidelines on ADHD in children and adolescents were included, with development dates ranging from 2004 to 2012. Of the 9 guidelines, 4 were developed in the USA, 3 in Europe and 2 in the UK. Two guidelines were recommended at Level A and 7 at Level B. The AGREE Ⅱ domain scores of the guidelines decreased in the order of "clarity of presentation", "scope and purpose", "participants", "applicability", "rigour of development" and "editorial independence". The three evidence-based guidelines scored highest in the domain of "rigour of development". There were slight differences among the recommendations of different guidelines. Conclusion The overall methodological quality of ADHD guidelines is suboptimal across countries and regions. Scores vary across the 6 domains (23 items) of AGREE Ⅱ, and evidence-based guidelines score higher than non-evidence-based ones. Future guidelines on ADHD in children and adolescents should be improved in "rigour of development" and "applicability", and conflicts of interest should be addressed. Guidelines are recommended to be developed on the basis of evidence-based medicine and the best available evidence.
Objective To explore white matter microstructural abnormalities in patients with different subtypes of attention-deficit/hyperactivity disorder (ADHD) and to establish a diagnostic classification model. Methods Patients with ADHD admitted to West China Hospital of Sichuan University between January 2019 and September 2021 and healthy controls recruited through advertisements were prospectively enrolled. All participants underwent diffusion tensor imaging. Whole-brain voxel-based analysis was used to compare fractional anisotropy (FA) maps among patients with the combined subtype of ADHD (ADHD-C), patients with the inattentive subtype of ADHD (ADHD-I) and healthy controls. A support vector machine classifier with feature selection was used to construct an individual ADHD diagnostic classification model, and its performance was evaluated for each pairwise comparison among the ADHD subtypes and healthy controls. Results A total of 26 ADHD-C patients, 24 ADHD-I patients and 26 healthy controls were included. The three groups showed significant differences in FA values in the bilateral sagittal stratum of the temporal lobe (ADHD-C<ADHD-I<healthy controls) and the isthmus of the corpus callosum (ADHD-C>ADHD-I>healthy controls) (P<0.005). Direct comparison between the two subtypes showed that ADHD-C had higher FA than ADHD-I in the right middle frontal gyrus. The model differentiating ADHD-C from ADHD-I achieved the highest performance, with a total accuracy of 76.0%, sensitivity of 88.5%, and specificity of 70.8%. Conclusions There is both commonality and heterogeneity in the white matter microstructural alterations of the two ADHD subtypes. White matter damage in the sagittal stratum of the temporal lobe and the corpus callosum may be an intrinsic pathophysiological basis of ADHD, while anomalies in frontal brain regions may distinguish the subtypes.
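The accuracy, sensitivity and specificity reported for the pairwise classifiers are derived from a confusion matrix in the standard way. A minimal sketch (the SVM itself is not shown; the labels below are hypothetical, not the study's data):

```python
def binary_classification_metrics(y_true, y_pred, positive=1):
    """Accuracy, sensitivity (true-positive rate) and specificity
    (true-negative rate) from paired true/predicted binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return accuracy, sensitivity, specificity
```

In a cross-validated setting, these metrics would be accumulated over held-out folds rather than computed on a single split.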
Accurate segmentation of ground glass nodules (GGNs) is clinically important, but it is challenging because GGNs in computed tomography images exhibit blurred boundaries, irregular shapes, and uneven intensity. This paper addresses GGN segmentation with a fully convolutional residual network, i.e., a residual network based on an atrous spatial pyramid pooling structure and an attention mechanism (ResAANet). The network uses the atrous spatial pyramid pooling (ASPP) structure to expand the receptive field of the feature maps and extract richer features, and employs attention mechanisms, residual connections and long skip connections to fully retain the sensitive features extracted by the convolutional layers. First, we employed 565 GGNs provided by Shanghai Chest Hospital to train and validate ResAANet and obtain a stable model. Then, two further groups of data, selected from clinical examinations (84 GGNs) and the lung image database consortium (LIDC) dataset (145 GGNs), were used to validate and evaluate the performance of the proposed method. Finally, we applied a best-threshold method to remove false-positive regions and obtain the optimized results. The average Dice similarity coefficient (DSC) of the proposed algorithm reached 83.46% on the clinical dataset and 83.26% on the LIDC dataset, the average Jaccard index (IoU) reached 72.39% and 71.56% respectively, and the segmentation speed reached 0.1 seconds per image. Compared with other reported methods, the new method segments GGNs accurately, quickly and robustly. It can provide doctors with important information such as nodule size or density, assisting them in subsequent diagnosis and treatment.
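The key property ASPP exploits is that atrous (dilated) convolution enlarges the receptive field without adding weights. A minimal one-dimensional sketch of the idea (the kernel and rates here are illustrative, not ResAANet's actual configuration, which is two-dimensional):

```python
def dilated_conv1d(signal, kernel, dilation=1):
    """Valid-mode 1-D convolution with a dilation factor between kernel taps.
    A k-tap kernel at dilation d covers (k-1)*d + 1 input samples."""
    k = len(kernel)
    span = (k - 1) * dilation + 1  # effective receptive field
    return [sum(kernel[j] * signal[i + j * dilation] for j in range(k))
            for i in range(len(signal) - span + 1)]

def aspp_1d(signal, kernel, rates=(1, 2, 4)):
    """Parallel dilated branches, as in an ASPP block; in a real network the
    branch responses would be concatenated and fused by a 1x1 convolution."""
    return {r: dilated_conv1d(signal, kernel, r) for r in rates}
```

With the same three weights, dilation 1 sees 3 consecutive samples while dilation 2 spans 5, which is how ASPP gathers multi-scale context at constant parameter cost.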
To accurately capture and effectively integrate the spatiotemporal features of electroencephalogram (EEG) signals and thereby improve the accuracy of EEG-based emotion recognition, this paper proposes a new method combining independent component analysis and recurrence plots with an improved EfficientNet version 2 (EfficientNetV2). First, independent component analysis is used to extract independent components containing spatial information from key channels of the EEG signals. These components are then converted into two-dimensional images using recurrence plots to better capture emotional features in the temporal information. Finally, the two-dimensional images are fed into an improved EfficientNetV2, which incorporates a global attention mechanism and a triplet attention mechanism, and the emotion class is output by a fully connected layer. To validate the effectiveness of the proposed method, this study conducts comparison experiments, channel selection experiments and ablation experiments on the Shanghai Jiao Tong University Emotion Electroencephalogram Dataset (SEED). The results demonstrate that the average recognition accuracy of the method is 96.77%, significantly superior to existing methods, offering a novel perspective for research on EEG-based emotion recognition.
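A recurrence plot turns a one-dimensional time series into a two-dimensional image by thresholding pairwise distances between samples. A minimal sketch (the threshold `eps` is a hypothetical parameter; real pipelines typically embed the signal in a higher-dimensional phase space first):

```python
def recurrence_plot(signal, eps=0.1):
    """Binary recurrence matrix: R[i][j] = 1 when |x_i - x_j| <= eps.
    The resulting n-by-n matrix can be rendered as a 2-D image."""
    n = len(signal)
    return [[1 if abs(signal[i] - signal[j]) <= eps else 0 for j in range(n)]
            for i in range(n)]
```

The matrix is symmetric with an all-ones main diagonal, and its texture encodes the temporal dynamics that the CNN then learns from.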
Objective To assess atomoxetine versus methylphenidate therapy for attention-deficit/hyperactivity disorder (ADHD). Methods We electronically searched the Cochrane Library (Issue 2, 2008), PubMed (1970 to 2008), MEDLINE (1971 to 2008), EMbase (1971 to 2008), Medscape (1990 to 2008), CBM (1978 to 2008), and NRR (1950 to 2008). We also hand-searched some published and unpublished references. Two independent reviewers extracted the data. Quality was assessed according to the Cochrane Reviewer's Handbook 4.0, and meta-analysis was conducted with the Cochrane Collaboration's RevMan 4.2.8 software. Results We finally identified 3 relevant randomized controlled trials. Treatment response (reduction in the ADHD-RS Inattention subscale score) was significantly greater for patients in the methylphenidate group than in the atomoxetine group, with WMD = –1.79 and 95%CI –2.22 to –1.35 (P<0.000 01). There was no statistical difference between the two groups in the other outcome measures (P>0.05). Conclusions The effectiveness and tolerability of methylphenidate and atomoxetine are similar in the treatment of ADHD. Further large randomized, double-blind, placebo-controlled trials with end-point outcome measures of long-term safety and efficacy are needed.
The synergistic effect of drug combinations can overcome acquired resistance to single-drug therapy and holds great potential for treating complex diseases such as cancer. In this study, to explore the impact of interactions between different drug molecules on the effect of anticancer drugs, we propose SMILESynergy, a Transformer-based deep learning prediction model. First, drug molecules were represented as text using the simplified molecular-input line-entry system (SMILES), and isomeric SMILES strings were generated through SMILES enumeration for data augmentation. Then, the attention mechanism in the Transformer was used to encode and decode the augmented drug molecules, and finally a multi-layer perceptron (MLP) was attached to obtain the synergy value of the drug combination. Experimental results showed that our model achieved a mean squared error of 51.34 in regression analysis and an accuracy of 0.97 in classification analysis, with better predictive performance than the DeepSynergy and MulinputSynergy models. SMILESynergy offers improved predictive performance to help researchers rapidly screen optimal drug combinations and improve cancer treatment outcomes.
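At the core of the Transformer encoding step is scaled dot-product attention, softmax(QKᵀ/√d)V. A minimal pure-Python sketch of that computation (the tiny matrices below are illustrative; SMILESynergy's actual dimensions, tokenization and multi-head structure are not shown):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention over lists of row vectors.
    Returns the attended outputs and the attention weight matrix."""
    d = len(Q[0])
    outputs, weights = [], []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        w = softmax(scores)
        weights.append(w)
        outputs.append([sum(wi * v[j] for wi, v in zip(w, V))
                        for j in range(len(V[0]))])
    return outputs, weights
```

Each output row is a convex combination of the value rows, with weights determined by query-key similarity; this is what lets the model relate distant tokens of a SMILES string.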
Conventional fault diagnosis of patient monitors relies heavily on manual experience, resulting in low diagnostic efficiency and poor utilization of fault-maintenance text data. To address these issues, this paper proposes an intelligent fault diagnosis method for patient monitors based on multi-feature text representation, an improved bidirectional gated recurrent unit (BiGRU) and an attention mechanism. First, the fault text data were preprocessed, and word vectors containing multiple linguistic features were generated by a linguistically-motivated bidirectional encoder representation from Transformer. Then, bidirectional fault features were extracted and weighted by the improved BiGRU and the attention mechanism, respectively. Finally, a weighted loss function was used to reduce the impact of class imbalance on the model. To validate the effectiveness of the proposed method, this paper uses a patient monitor fault dataset for verification, achieving a macro F1 value of 91.11%. The results show that the model built in this study can automatically classify fault text, and may provide decision support for intelligent fault diagnosis of patient monitors in the future.
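Macro F1, the headline metric above, averages per-class F1 scores without weighting by class frequency, which is why it is a sensible choice for an imbalanced fault dataset. A minimal sketch of how it is computed (the labels in the test are hypothetical):

```python
def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 scores, so rare fault classes
    count as much as common ones."""
    classes = sorted(set(y_true) | set(y_pred))
    f1_scores = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1_scores.append(2 * precision * recall / (precision + recall)
                         if precision + recall else 0.0)
    return sum(f1_scores) / len(f1_scores)
```

A weighted loss during training and macro F1 during evaluation address the same imbalance from two sides: the former rebalances the gradient signal, the latter rebalances the score.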
Deep learning-based automatic classification of diabetic retinopathy (DR) helps to enhance the accuracy and efficiency of auxiliary diagnosis. This paper presents an improved residual network model for classifying DR into five severity levels. First, the convolution in the first layer of the residual network was replaced with three smaller convolutions to reduce the computational load of the network. Second, to address inaccurate classification caused by the minimal differences between severity levels, a mixed attention mechanism was introduced so that the model focuses more on the crucial features of the lesions. Finally, to better extract the morphological features of the lesions in DR images, cross-layer fusion convolutions were used instead of the conventional residual structure. To validate the effectiveness of the improved model, it was applied to the APTOS2019 dataset from the Kaggle Blindness Detection competition. The experimental results demonstrated that the proposed model achieved a classification accuracy of 97.75% and a Kappa value of 0.971 7 across the five DR severity levels. Compared with some existing models, this approach shows significant advantages in classification accuracy and performance.
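The saving from replacing one large convolution with a stack of smaller ones can be seen by counting weights: with channel width held fixed, three 3×3 layers cover the same 7×7 receptive field with 27C² weights instead of 49C². A minimal sketch (the channel width of 64 is illustrative, not the paper's actual configuration):

```python
def conv2d_params(kernel_size, c_in, c_out, bias=False):
    """Parameter count of a single 2-D convolution layer."""
    return kernel_size * kernel_size * c_in * c_out + (c_out if bias else 0)

# One 7x7 conv vs three stacked 3x3 convs at the same channel width.
# Both cover a 7x7 receptive field; channel width 64 is an assumption.
big = conv2d_params(7, 64, 64)        # 49 * 64 * 64 weights
small = 3 * conv2d_params(3, 64, 64)  # 27 * 64 * 64 weights
```

The stacked version also interleaves extra nonlinearities between the 3×3 layers, which is an additional, qualitative benefit beyond the weight count.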
To address the loss of single-scale information and the large model parameter size during sampling in U-Net and its variants for medical image segmentation, this paper proposes a multi-scale medical image segmentation method based on pixel encoding and spatial attention. First, by redesigning the input strategy of the Transformer structure, a pixel encoding module is introduced so that the model can extract global semantic information from multi-scale image features and obtain richer feature information; deformable convolutions are also incorporated into the Transformer module to accelerate convergence and improve module performance. Second, a spatial attention module with residual connections is introduced so that the model focuses on the foreground information of the fused feature maps. Finally, guided by ablation experiments, the network is made lightweight to enhance segmentation accuracy and accelerate convergence. The proposed algorithm achieves satisfactory results on the Synapse dataset, an official public multi-organ segmentation dataset provided by the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), with a Dice similarity coefficient (DSC) of 77.65 and a 95% Hausdorff distance (HD95) of 18.34. The experimental results demonstrate that the proposed algorithm improves multi-organ segmentation performance, potentially filling a gap among multi-scale medical image segmentation algorithms, and may assist professional physicians in diagnosis.
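The DSC reported above measures overlap between a predicted and a reference mask. A minimal sketch on flattened binary masks (the toy masks in the test are hypothetical; HD95, the complementary boundary-distance metric, is not shown):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient, 2|A∩B| / (|A|+|B|),
    between two flat binary masks of equal length."""
    intersection = sum(a * b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2 * intersection / total if total else 1.0
```

DSC rewards region overlap while HD95 penalizes boundary outliers, which is why segmentation papers such as this one report both.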