With the advancement of computer technology, medical decision-making systems based on artificial intelligence (AI) have been widely applied in clinical practice. In the perioperative period of cardiovascular surgery, AI can be applied to preoperative diagnosis as well as intraoperative and postoperative risk management. This article introduces the application and development of AI during the perioperative period of cardiovascular surgery, including preoperative auxiliary diagnosis, intraoperative risk management, postoperative management, and full-process auxiliary decision-making management. It also examines the challenges and limitations of applying AI in this setting and looks ahead to future directions for development.
Non-small cell lung cancer has among the highest incidence and mortality rates of any cancer worldwide, and precise prognostic models can guide clinical treatment planning. With the continuous advancement of computer technology, deep learning, a breakthrough artificial intelligence technique, has shown good performance and great potential in prognostic models for non-small cell lung cancer. Research applying deep learning to the prediction of survival and recurrence, treatment efficacy, distant metastasis, and complications in non-small cell lung cancer has made some progress and shows a trend toward combining multi-omics and multi-modal data. However, shortcomings remain: future work should strengthen model validation and address practical problems in clinical practice.
Motor imagery electroencephalogram (EEG) signals are non-stationary time series with a low signal-to-noise ratio, so single-channel analysis methods have difficulty describing the interaction characteristics among multi-channel signals. This paper proposed a deep learning network model based on a multi-channel attention mechanism. First, we performed time-frequency sparse decomposition on the pre-processed data, which enhanced the differences in the time-frequency characteristics of the EEG signals. Then we used an attention module to map the data in time and space so that the model could make full use of the characteristics of different EEG channels. Finally, an improved temporal convolutional network (TCN) was used for feature fusion and classification. The BCI Competition IV-2a dataset was used to verify the proposed algorithm. The experimental results showed that the proposed algorithm effectively improved the classification accuracy of motor imagery EEG signals, achieving an average accuracy of 83.03% across 9 subjects, an improvement over existing methods. By enhancing the discriminative features between different motor imagery EEG data, the proposed method contributes to the study of improving classifier performance.
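The attention module in the paper is learned end-to-end; as a rough intuition for how channel-wise attention re-weights multi-channel EEG before feature fusion, the sketch below uses a hand-crafted, energy-based score in place of a learned one. The function name, the energy heuristic, and the toy data are all illustrative assumptions, not the authors' method.

```python
import numpy as np

def channel_attention(epochs, scale=1.0):
    """Illustrative channel-attention weighting for multi-channel EEG.

    epochs : array of shape (n_channels, n_samples)
    Scores each channel by its log signal energy, turns the scores into
    softmax weights, and re-weights the channels so that higher-scoring
    channels contribute more to later feature fusion.
    """
    energy = np.log(np.mean(epochs ** 2, axis=1) + 1e-12)  # per-channel score
    logits = scale * energy
    w = np.exp(logits - logits.max())
    w /= w.sum()                                           # softmax weights
    return epochs * w[:, None], w

# Toy example: 4 channels, 256 samples; channel 2 has the largest amplitude,
# so it receives the largest attention weight.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 256))
x[2] *= 3.0
weighted, w = channel_attention(x)
print(w)
```

In a trained network the scores would come from learned projections over the channel dimension rather than raw signal energy, but the softmax re-weighting step is the same.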
With the development of artificial intelligence (AI) technology, great progress has been made in the application of AI in the medical field. While international journals have published a large number of papers on the application of AI in epilepsy, there is a dearth of such studies in domestic journals. To understand the global research progress and development trends of AI applications in epilepsy, a total of 895 papers on the topic indexed in the Web of Science Core Collection and published before December 31, 2022 were selected for analysis. The annual number of papers and their citation counts, the most prolific authors, institutions, and countries, and their cooperative relationships were analyzed, and the research hotspots and future trends in this field were explored using bibliometric and related methods. The results showed that the annual number of papers on AI in epilepsy grew slowly before 2016 and rapidly after 2017. The United States had the largest number of papers (n=273), followed by China (n=195). The institution with the most papers was the University of London (n=36); in China, Capital Medical University had 23. The most prolific author was Gregory Worrell (n=14), and the most prolific author in China was Guo Jiayan of Xiamen University (n=7). The application of machine learning to the diagnosis and treatment of epilepsy was an early research focus in this field, whereas seizure prediction models based on EEG feature extraction, the application of deep learning (especially convolutional neural networks) to epilepsy diagnosis, and the use of cloud computing in epilepsy healthcare are the current research priorities.
AI-based EEG feature extraction, the application of deep learning to the diagnosis and treatment of epilepsy, and the use of the Internet of Things to address epilepsy-related health problems are likely future research directions in this field.
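The core bibliometric counts described above (papers per year, papers per country) reduce to simple frequency tallies over exported records. The sketch below uses invented toy records with invented field names; real Web of Science exports use tagged fields (e.g. publication year, addresses) that require parsing first.

```python
from collections import Counter

# Toy Web of Science-style records; fields and values are invented for
# illustration only.
records = [
    {"year": 2015, "country": "USA"},
    {"year": 2018, "country": "China"},
    {"year": 2018, "country": "USA"},
    {"year": 2021, "country": "China"},
    {"year": 2021, "country": "UK"},
]

# Annual publication counts and country-level counts, as tallied in the
# bibliometric analysis.
papers_per_year = Counter(r["year"] for r in records)
papers_per_country = Counter(r["country"] for r in records)

print(sorted(papers_per_year.items()))
print(papers_per_country.most_common(2))  # top publishing countries
```

Co-authorship and institutional cooperation networks would additionally require building a graph from the author/address fields, which dedicated tools (e.g. VOSviewer, CiteSpace) automate.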
Objective To develop a deep learning-based neural network architecture to assist automatic segmentation of knee CT images, and to validate its accuracy. Methods A database of knee CT scans was established, and the bony structures were manually annotated. A deep learning neural network architecture was developed independently, and the labeled database was used to train and test the network. The Dice coefficient, average surface distance (ASD), and Hausdorff distance (HD) were calculated to evaluate its accuracy. The time required for automatic and manual segmentation was compared. Five orthopedic experts were invited to score the automatic and manual segmentation results on a Likert scale, and the scores of the two methods were compared. Results The automatic segmentation achieved high accuracy. The Dice coefficient, ASD, and HD of the femur were 0.953±0.037, (0.076±0.048) mm, and (3.101±0.726) mm, respectively; those of the tibia were 0.950±0.092, (0.083±0.101) mm, and (2.984±0.740) mm, respectively. The time for automatic segmentation was significantly shorter than that for manual segmentation [(2.46±0.45) minutes vs. (64.73±17.07) minutes; t=36.474, P<0.001]. The clinical scores of the femur were 4.3±0.3 in the automatic segmentation group and 4.4±0.2 in the manual segmentation group, and the scores of the tibia were 4.5±0.2 and 4.5±0.3, respectively; the differences between the two groups were not significant (t=1.753, P=0.085; t=0.318, P=0.752). Conclusion Deep learning-based automatic segmentation of knee CT images has high accuracy and enables rapid segmentation and three-dimensional reconstruction. This method will promote the development of new technology-assisted techniques in total knee arthroplasty.
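Two of the evaluation metrics above, the Dice coefficient and the Hausdorff distance, have compact definitions on binary masks. The sketch below computes both on small 2-D toy masks (the paper works on 3-D CT volumes and computes ASD/HD over extracted surfaces; this brute-force version over all foreground pixels is for illustration only).

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff(a, b):
    """Symmetric Hausdorff distance between the foreground pixels of two
    binary masks (brute force; fine for small toy arrays)."""
    pa = np.argwhere(a)
    pb = np.argwhere(b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

a = np.zeros((8, 8), int); a[2:6, 2:6] = 1   # 4x4 square
b = np.zeros((8, 8), int); b[2:6, 3:7] = 1   # same square shifted right by 1
print(dice(a, b))       # 0.75: overlap 12 pixels, sizes 16 + 16
print(hausdorff(a, b))  # 1.0: worst-case surface mismatch is one pixel
```

For real volumes, `scipy.spatial.distance.directed_hausdorff` or surface-based implementations avoid the quadratic pairwise distance matrix.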
Glaucoma is the leading cause of irreversible blindness, but its early symptoms are subtle and easily overlooked, so early screening for glaucoma is particularly important. The cup-to-disc ratio is an important indicator in clinical glaucoma screening, and accurate segmentation of the optic cup and disc is the key to calculating it. In this paper, a fully convolutional neural network with a residual multi-scale convolution module was proposed for optic cup and disc segmentation. First, the fundus image was contrast-enhanced and a polar transformation was applied. W-Net was then used as the backbone network, with the standard convolution units replaced by residual multi-scale fully convolutional modules, an image pyramid added at the input to construct multi-scale inputs, and side-output layers used as early classifiers to generate local prediction outputs. Finally, a new multi-label loss function was proposed to guide network segmentation. On the REFUGE dataset, the mean intersection over union of optic cup and disc segmentation was 0.9040 and 0.9553, respectively, and the overlap error was 0.1780 and 0.0665, respectively. The results show that this method not only realizes joint segmentation of the cup and disc but also effectively improves segmentation accuracy, which could help promote large-scale early glaucoma screening.
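Once the cup and disc are segmented, the screening indicator itself is a simple geometric ratio. A common clinical variant is the vertical cup-to-disc ratio: the vertical extent of the cup divided by that of the disc. The sketch below computes it from binary masks; the function name and the synthetic square masks are illustrative assumptions, not part of the paper.

```python
import numpy as np

def vertical_cdr(cup_mask, disc_mask):
    """Vertical cup-to-disc ratio from binary segmentation masks:
    ratio of the vertical (row-wise) extents of cup and disc."""
    cup_rows = np.where(cup_mask.any(axis=1))[0]
    disc_rows = np.where(disc_mask.any(axis=1))[0]
    cup_h = cup_rows[-1] - cup_rows[0] + 1
    disc_h = disc_rows[-1] - disc_rows[0] + 1
    return cup_h / disc_h

# Synthetic example: a 60-pixel-tall disc containing a 30-pixel-tall cup.
disc = np.zeros((100, 100), int); disc[20:80, 20:80] = 1
cup = np.zeros((100, 100), int); cup[35:65, 35:65] = 1
print(vertical_cdr(cup, disc))  # 0.5
```

A larger ratio (commonly above roughly 0.6, though thresholds vary clinically) raises suspicion of glaucomatous cupping, which is why segmentation accuracy directly drives screening quality.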
Objective To systematically evaluate the efficacy and safety of computer-aided detection (CADe) versus conventional colonoscopy in identifying colorectal adenomas and polyps. Methods The PubMed, Embase, Cochrane Library, Web of Science, WanFang Data, VIP, and CNKI databases were searched for randomized controlled trials (RCTs) comparing the effectiveness and safety of CADe-assisted colonoscopy and conventional colonoscopy in detecting colorectal tumors, published from 2014 to April 2023. Two reviewers independently screened the literature, extracted data, and evaluated the risk of bias of the included studies. Meta-analysis was performed with RevMan 5.3 software. Results A total of 9 RCTs were included, covering 6,393 patients. Compared with conventional colonoscopy, the CADe system significantly improved the adenoma detection rate (ADR) (RR=1.22, 95%CI 1.10 to 1.35, P<0.01) and the polyp detection rate (PDR) (RR=1.19, 95%CI 1.04 to 1.36, P=0.01). It also reduced the adenoma miss rate (AMR) (RR=0.48, 95%CI 0.34 to 0.67, P<0.01) and the polyp miss rate (PMR) (RR=0.39, 95%CI 0.25 to 0.59, P<0.01). The PDR of proximal polyps increased significantly and the PDR of polyps ≤5 mm increased slightly, while the PDR of polyps >10 mm and of pedunculated polyps decreased significantly. The AMR in the cecum, transverse colon, descending colon, and sigmoid colon was significantly reduced. There was no statistically significant difference in withdrawal time between the two groups. Conclusion The CADe system can increase the detection rate of adenomas and polyps and reduce the miss rate. The detection rate of polyps is related to their location, size, and shape, while the miss rate of adenomas is related to their location.
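The risk ratios (RR) with 95% confidence intervals reported above are computed per study from the 2×2 event table before pooling. The sketch below shows the standard single-study calculation using the log-normal approximation; the trial numbers are hypothetical, and note that RevMan pools studies with Mantel-Haenszel weighting, which this sketch does not reproduce.

```python
import math

def risk_ratio(events_exp, n_exp, events_ctl, n_ctl):
    """Risk ratio and 95% CI for one two-arm trial, using the standard
    log-normal approximation for the standard error of log(RR)."""
    rr = (events_exp / n_exp) / (events_ctl / n_ctl)
    se_log = math.sqrt(1/events_exp - 1/n_exp + 1/events_ctl - 1/n_ctl)
    lo = math.exp(math.log(rr) - 1.96 * se_log)
    hi = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, lo, hi

# Hypothetical trial: adenomas detected in 150/400 CADe-assisted
# colonoscopies vs. 120/400 conventional colonoscopies.
rr, lo, hi = risk_ratio(150, 400, 120, 400)
print(f"RR = {rr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")  # RR = 1.25, 95% CI 1.03 to 1.52
```

Since the CI excludes 1, this hypothetical trial would favor CADe; pooling across trials then weights each study's log(RR) by its precision.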
With the development of artificial intelligence, machine learning has been widely used in the diagnosis of diseases. To improve diagnostic accuracy, it is crucial to conduct diagnostic test accuracy studies and to evaluate model performance appropriately. For machine learning-based diagnostic test accuracy studies, this paper introduces the principles of study design with respect to target conditions, selection of participants, diagnostic tests, reference standards, and ethics.
Organoids are an in vitro model that can simulate the complex structure and function of tissues in vivo. Functions such as classification, screening, and trajectory recognition have been realized through organoid image analysis, but problems remain, such as low accuracy in classification and in cell tracking. Combining deep learning algorithms with organoid image analysis is currently the most advanced approach to these tasks. This paper surveys deep learning-based perception technology for organoid images: it introduces organoid culture mechanisms and how they inform image perception, reviews key progress in four classes of algorithms, namely classification and recognition, pattern detection, image segmentation, and dynamic tracking, and compares and analyzes the performance advantages of different deep models. In addition, the paper summarizes perception technology for images of various organoids in terms of feature learning, model generalization, and multiple evaluation metrics, and anticipates future trends in deep learning-based organoid research, so as to promote the application of these technologies to organoid images and to provide a useful reference for academic research and practical application in this field.
Valvular heart disease (VHD) ranks as the third most prevalent cardiovascular disease, following coronary artery disease and hypertension. Severe cases can lead to ventricular hypertrophy or heart failure, highlighting the critical importance of early detection. In recent years, the application of deep learning techniques in the auxiliary diagnosis of VHD has made significant advancements, greatly improving detection accuracy. This review begins by introducing the etiology, pathological mechanisms, and impact of common valvular heart diseases. It then explores the advantages and limitations of using electrocardiographic signals, phonocardiographic signals, and multimodal data in VHD detection. A comparison is made between traditional risk prediction methods and large language models (LLMs) for predicting cardiovascular disease risk, emphasizing the potential of LLMs in risk prediction. Lastly, the current challenges faced by deep learning in this field are discussed, and future research directions are proposed.