Magnetic resonance imaging (MRI) can acquire multi-modal images with different contrasts, providing rich information for clinical diagnosis. However, some contrast images are not scanned, or the acquired images fail to meet diagnostic requirements, because of poor patient cooperation or limited scanning conditions. Image synthesis has therefore become a way to compensate for such missing images, and in recent years deep learning has been widely applied to MRI synthesis. This paper proposes a synthesis network based on multi-modal fusion: a feature encoder first encodes each unimodal image separately, a feature fusion module then fuses the features of the different modalities, and the target modal image is finally generated. The similarity measure between the target and predicted images is improved by introducing a dynamically weighted combined loss function defined over both the spatial domain and the k-space domain. Experimental validation and quantitative comparison show that the proposed multi-modal fusion deep learning network can effectively synthesize high-quality MRI fluid-attenuated inversion recovery (FLAIR) images. In summary, the proposed method can shorten the patient's MRI scanning time and address the clinical problem of FLAIR images that are missing or of insufficient diagnostic quality.
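A loss combining spatial-domain and k-space-domain terms, as described above, can be sketched as follows. This is a minimal illustration under assumed names, not the authors' implementation; in particular, the fixed weight `alpha` merely stands in for the paper's dynamic weighting scheme.

```python
import numpy as np

def combined_loss(pred, target, alpha=0.5):
    """Combined spatial- and k-space-domain loss (illustrative sketch).

    `alpha` is a fixed hyperparameter standing in for the paper's
    dynamic weighting; names here are assumptions, not the authors' code.
    """
    # Spatial-domain term: mean absolute error between the two images
    spatial = np.mean(np.abs(pred - target))
    # K-space-domain term: mean absolute error between 2-D Fourier spectra
    kspace = np.mean(np.abs(np.fft.fft2(pred) - np.fft.fft2(target)))
    return alpha * spatial + (1.0 - alpha) * kspace

# Identical images incur zero loss in both domains
img = np.random.rand(8, 8)
print(combined_loss(img, img))  # → 0.0
```

The k-space term penalizes errors in the frequency content of the synthesized image, which a purely pixel-wise loss can under-weight.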
The electroencephalogram (EEG) signal is a general reflection of the neurophysiological activity of the brain, with the advantages of being safe, efficient, real-time, and dynamic. With the development of machine learning research, automatic diagnosis of Alzheimer's disease based on deep learning is becoming a research hotspot. Starting from feedforward neural networks, this paper compares and analyses the structural properties of neural network models such as recurrent neural networks, convolutional neural networks, and deep belief networks, and their performance in the diagnosis of Alzheimer's disease. It also discusses possible challenges and future research trends, aiming to provide a valuable reference for the clinical application of neural networks in the EEG-based diagnosis of Alzheimer's disease.
Objective To develop a deep learning system for CT images to assist in the diagnosis of thoracolumbar fractures and to analyze the feasibility of its clinical application. Methods A total of 1256 CT images of thoracolumbar fractures, collected from West China Hospital of Sichuan University between January 2019 and March 2020, were annotated to a unified standard using the LabelImg image annotation tool. All CT images were classified according to the AO Spine thoracolumbar spine injury classification. For diagnosing type A, B, and C fractures, the deep learning system was optimized on 1039 CT images for training and validation (1004 for training, 35 for validation); the remaining 217 CT images served as the test set for comparing the system with clinicians' diagnoses. For subtyping type A fractures, the system was optimized on 581 CT images (556 for training, 25 for validation); the remaining 104 CT images served as the test set for the same comparison. Results The accuracy and Kappa coefficient of the deep learning system in diagnosing type A, B, and C fractures were 89.4% and 0.849 (P<0.001), respectively; for subtyping type A fractures, they were 87.5% and 0.817 (P<0.001), respectively. Conclusions The deep learning system classifies thoracolumbar fractures with high accuracy. It can assist in the intelligent diagnosis of CT images of thoracolumbar fractures and improve the current manual and complex diagnostic process.
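The Kappa coefficient reported above measures agreement beyond chance between the system's labels and the clinicians' reference labels. A minimal sketch of Cohen's kappa follows; the formula is standard, but the label lists are invented toy examples, not the study's data.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: agreement between two raters, corrected for chance.

    Standard textbook formula shown with invented labels; this is not
    the study's evaluation code.
    """
    n = len(labels_a)
    # Observed agreement: fraction of cases where the two raters match
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement from each rater's marginal label frequencies
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[k] * freq_b.get(k, 0) for k in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# AO Spine types A/B/C as toy labels: perfect agreement gives kappa = 1.0
print(cohens_kappa(["A", "B", "C"], ["A", "B", "C"]))  # → 1.0
```

A kappa of 0.849, as reported for the ABC classification, is conventionally read as near-perfect agreement with the clinical reference.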
Heart failure is a disease that seriously threatens human health and has become a global public health problem. Diagnostic and prognostic analysis of heart failure based on medical imaging and clinical data can reveal the progression of the disease and reduce patients' risk of death, and therefore has important research value. Traditional analysis methods based on statistics and machine learning suffer from problems such as insufficient model capacity, poor accuracy due to dependence on priors, and poor adaptability. In recent years, with the development of artificial intelligence technology, deep learning has gradually been applied to clinical data analysis in the field of heart failure, offering a new perspective. This paper reviews the main progress, application methods, and major achievements of deep learning in heart failure diagnosis and in predicting heart failure mortality and readmission, summarizes the existing problems, and presents prospects for related research to promote the clinical application of deep learning in heart failure research.
For patients with partial jaw defects, cysts, or dental implants, doctors need to take panoramic X-ray films or manually draw dental arch lines to generate panorama images in order to observe complete dentition information during oral diagnosis. To spare patients the additional burden of panoramic X-ray films and doctors the time-consuming manual segmentation of dental arch lines, this paper proposes an automatic panorama reconstruction method based on cone beam computed tomography (CBCT). A V-shaped network (VNet) first pre-segments the teeth from the background to generate a binary image, and a Bezier curve is then used to define the best dental arch curve, from which the oral panorama is generated. In addition, this work addresses the problems of the teeth and jaws being mistakenly recognized as dental arches, incomplete coverage of the dental arch area by the generated arch lines, and low robustness, providing an intelligent method for dental diagnosis and improving doctors' work efficiency.
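The Bezier-curve step can be illustrated as follows: a curve is evaluated at evenly spaced parameters, as one would when resampling along the dental arch. The control points below are hypothetical, and the paper's actual curve-fitting procedure is not shown.

```python
import math

def bezier_point(control_points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] (Bernstein form)."""
    n = len(control_points) - 1
    x = y = 0.0
    for i, (px, py) in enumerate(control_points):
        # Bernstein basis polynomial B_{i,n}(t)
        b = math.comb(n, i) * t ** i * (1 - t) ** (n - i)
        x += b * px
        y += b * py
    return x, y

# Hypothetical arch-shaped cubic curve, sampled as one would when
# unfolding CBCT slices along the dental arch to form a panorama
arch = [(0, 0), (1, 3), (3, 3), (4, 0)]
samples = [bezier_point(arch, t / 10) for t in range(11)]
print(samples[0], samples[-1])  # endpoints equal first/last control points
```

In a panorama pipeline, intensity profiles perpendicular to such sampled arch points would be stacked column by column to form the flattened image.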
Magnetic resonance imaging (MRI) is an important medical imaging method whose major limitation is its long scan time, an inherent consequence of the imaging mechanism that increases patients' cost and waiting time. Currently, parallel imaging (PI), compressed sensing (CS), and other reconstruction technologies have been proposed to accelerate image acquisition. However, the image quality of PI and CS depends on the image reconstruction algorithms, which remain unsatisfactory in terms of both image quality and reconstruction speed. In recent years, image reconstruction based on generative adversarial networks (GAN) has become a research hotspot in the field of MRI because of its excellent performance. In this review, we summarize recent developments in the application of GAN to MRI reconstruction for both single- and multi-modality acceleration, hoping to provide a useful reference for interested researchers. In addition, we analyze the characteristics and limitations of existing technologies and forecast development trends in this field.
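To give a concrete flavor of the reconstruction problem such networks tackle, the sketch below shows a data-consistency step often paired with learned CS-MRI reconstruction: wherever k-space was actually measured, the measured values override the network's estimate. All names and the undersampling setup are illustrative assumptions, not drawn from any specific method in this review.

```python
import numpy as np

def data_consistency(recon, measured_k, mask):
    """Data-consistency step (illustrative sketch, not a specific method):
    keep the reconstruction's k-space values where nothing was measured,
    but restore the actually measured values where the mask is True."""
    k_recon = np.fft.fft2(recon)
    k_dc = np.where(mask, measured_k, k_recon)
    return np.real(np.fft.ifft2(k_dc))

# With a fully sampled mask, the output is forced back to the ground truth
img = np.random.rand(8, 8)
full_mask = np.ones((8, 8), dtype=bool)
restored = data_consistency(np.zeros((8, 8)), np.fft.fft2(img), full_mask)
print(np.allclose(restored, img))  # → True
```

In accelerated MRI only a fraction of the mask is True, and the network (e.g., a GAN generator) must plausibly fill in the unmeasured frequencies.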
Brain age prediction, a significant approach for assessing brain health and for the early diagnosis of neurodegenerative diseases, has garnered widespread attention in recent years. The electroencephalogram (EEG), a non-invasive, convenient, and cost-effective neurophysiological signal, offers unique advantages for brain age prediction because of its high temporal resolution and strong correlation with brain functional states. Despite substantial progress in prediction accuracy and generalizability, challenges remain in data quality and model interpretability. This review comprehensively examines advances in EEG-based brain age prediction, detailing key aspects of data preprocessing, feature extraction, model construction, and result evaluation. It also summarizes current applications of machine learning and deep learning methods in this field, analyzes existing issues, and explores future directions to promote the widespread application of EEG-based brain age prediction in both clinical and research settings.
Organoids are an in vitro model that can simulate the complex structure and function of tissues in vivo. Functions such as classification, screening, and trajectory recognition have been realized through organoid image analysis, but problems such as low accuracy in recognition, classification, and cell tracking remain. Fusing deep learning algorithms with organoid image analysis is currently the most advanced approach to organoid image analysis. This paper surveys organoid image depth perception technology. It introduces the organoid culture mechanism and the concept of applying it in depth perception, reviews key progress in four depth perception tasks on organoid images (classification and recognition, pattern detection, image segmentation, and dynamic tracking), and compares the performance advantages of different deep models. It also summarizes depth perception technology for various organoid images in terms of depth perception feature learning, model generalization, and multiple evaluation parameters, and looks ahead to future trends in deep-learning-based organoid research, so as to promote the application of depth perception technology to organoid images and provide an important reference for academic research and practical application in this field.
Clinically, non-contrast computed tomography (NCCT) is used to quickly diagnose the type and area of stroke, and the Alberta Stroke Program Early CT Score (ASPECTS) is used to guide subsequent treatment. However, in the early stage of acute ischemic stroke (AIS), mild cerebral infarction is difficult to distinguish on NCCT with the naked eye, and there are no obvious boundaries between brain regions, which makes clinical ASPECTS scoring difficult to conduct. Methods based on machine learning and deep learning can help physicians quickly and accurately identify cerebral infarction areas, segment brain regions, and perform quantitative ASPECTS scoring, which is of great significance for reducing inconsistency in clinical ASPECTS. This article describes current challenges in the field of AIS ASPECTS and then summarizes the application of computer-aided technology to ASPECTS from two aspects, machine learning and deep learning. Finally, it summarizes and prospects research directions for AIS-assisted assessment and proposes that computer-aided systems based on multi-modal images are of great value for improving the comprehensiveness and accuracy of AIS assessment, potentially opening up a new research field for AIS-assisted assessment.