Objective To explore the use of ChatGPT (Chat Generative Pre-trained Transformer) in pediatric diagnosis, treatment, and doctor-patient communication, to evaluate the professionalism and accuracy of the medical advice it provides, and to assess its ability to offer psychological support. Methods ChatGPT versions 3.5 and 4.0, with knowledge bases current as of April 2023, were used. A total of 30 diagnosis and treatment questions and 10 doctor-patient communication questions concerning the pediatric urinary system were submitted to both versions, and the answers were evaluated. Results All 40 answers from ChatGPT versions 3.5 and 4.0 reached the qualified level. For the 30 diagnosis and treatment questions, the answers from ChatGPT 4.0 were superior to those from ChatGPT 3.5 (P=0.024). There was no statistically significant difference between the two versions in the answers to the 10 doctor-patient communication questions (P=0.727). For questions on prevention, single symptoms, and the diagnosis and treatment of individual diseases, ChatGPT's answers scored relatively high; for questions on the diagnosis and treatment of complex conditions, its answers scored relatively low. Conclusion ChatGPT has certain value in assisting pediatric diagnosis, treatment, and doctor-patient communication, but the medical advice it provides cannot fully replace the professional judgment and personal care of doctors.
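The abstract does not name the scoring scale or the statistical test behind the reported P values; the following is a minimal sketch of one plausible analysis, assuming each question's answer was rated on an ordinal scale for both versions and the paired per-question scores were compared with a Wilcoxon signed-rank test. All numbers are illustrative placeholders, not the study's data.

```python
# Illustrative only: hypothetical paired ordinal ratings (1-5) for the same
# questions answered by ChatGPT 3.5 and 4.0; not the study's actual scores.
from scipy.stats import wilcoxon

scores_35 = [3, 4, 3, 2, 4, 3, 3, 4, 2, 3]   # placeholder ratings, version 3.5
scores_40 = [4, 4, 4, 3, 5, 4, 3, 4, 3, 4]   # placeholder ratings, version 4.0

# Paired, non-parametric comparison of the two versions' answer scores.
stat, p_value = wilcoxon(scores_35, scores_40)
print(f"Wilcoxon signed-rank statistic = {stat:.2f}, p = {p_value:.3f}")
```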
Valvular heart disease (VHD) is the third most prevalent cardiovascular disease, after coronary artery disease and hypertension. Severe cases can lead to ventricular hypertrophy or heart failure, making early detection critically important. In recent years, deep learning techniques applied to the auxiliary diagnosis of VHD have advanced significantly, greatly improving detection accuracy. This review first introduces the etiology, pathological mechanisms, and impact of common valvular heart diseases. It then examines the advantages and limitations of electrocardiographic signals, phonocardiographic signals, and multimodal data for VHD detection. Traditional risk prediction methods are compared with large language models (LLMs), highlighting the potential of LLMs for cardiovascular risk prediction. Finally, the current challenges facing deep learning in this field are discussed and future research directions are proposed.
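As an illustration of the multimodal approach the review discusses (not a reproduction of any specific model it covers), the sketch below fuses ECG and phonocardiogram (PCG) embeddings from two small 1D-CNN branches for a binary VHD-versus-normal classification in PyTorch; the class names, layer sizes, and signal lengths are assumptions chosen for brevity.

```python
# Minimal illustrative sketch: late fusion of ECG and PCG features for VHD detection.
import torch
import torch.nn as nn

class Branch1D(nn.Module):
    """Small 1D-CNN encoder for a single-channel physiological signal."""
    def __init__(self, out_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
            nn.AdaptiveAvgPool1d(1),              # global average pooling over time
        )
        self.fc = nn.Linear(32, out_dim)

    def forward(self, x):                         # x: (batch, 1, samples)
        return self.fc(self.net(x).squeeze(-1))

class MultimodalVHDNet(nn.Module):
    """Concatenates ECG and PCG embeddings, then classifies VHD vs. normal."""
    def __init__(self):
        super().__init__()
        self.ecg_branch = Branch1D()
        self.pcg_branch = Branch1D()
        self.classifier = nn.Sequential(nn.Linear(128, 32), nn.ReLU(), nn.Linear(32, 2))

    def forward(self, ecg, pcg):
        fused = torch.cat([self.ecg_branch(ecg), self.pcg_branch(pcg)], dim=1)
        return self.classifier(fused)

# Example forward pass with random tensors standing in for real recordings.
model = MultimodalVHDNet()
logits = model(torch.randn(8, 1, 2048), torch.randn(8, 1, 2048))
print(logits.shape)  # torch.Size([8, 2])
```

This late-fusion design only combines the two modalities at the feature level; the intermediate- and attention-based fusion schemes mentioned in the multimodal literature are common alternatives.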
Large language models, one of the most active topics in artificial intelligence, are being applied in a range of domains, including medical research. ChatGPT (Chat Generative Pre-trained Transformer), as one of the most representative and leading large language models, has gained popularity among researchers for its logical coherence and natural language generation capabilities. This article reviews the applications and limitations of ChatGPT in three key areas of medical research: scientific writing, data analysis, and drug development. It further explores future development trends and offers recommendations for improvement, providing a reference for the application of ChatGPT in medical research.
Objective To explore the value of artificial intelligence as an aid to medical research, and to analyze the key paths to precise execution of model instructions, more complete model interpretation, and control of hallucinations. Methods Using esophageal cancer research as the scenario, five types of literature (treatises, case reports, reviews, editorials, and guidelines) were selected for model interpretation tests. Model performance was systematically evaluated across five dimensions: recognition accuracy, format correctness, instruction execution precision, interpretation reliability, and interpretation completeness. The performance of the Ruibing Agent, GPT-4o, Claude 3.7 Sonnet, DeepSeek V3, and DouBao-pro models on medical literature interpretation tasks was compared. Results A total of 1875 tests were conducted on the five models. Because it recognized editorials poorly, the overall recognition accuracy of Ruibing Agent was significantly lower than that of the other models (92.0% vs. 100.0%, P<0.001). In format correctness, Ruibing Agent was significantly better than Claude 3.7 Sonnet (98.7% vs. 92.0%, P=0.002) and GPT-4o (98.7% vs. 78.9%, P<0.001). In instruction execution precision, Ruibing Agent was better than GPT-4o (97.3% vs. 80.0%, P<0.001). In interpretation reliability, Ruibing Agent was significantly lower than Claude 3.7 Sonnet (84.0% vs. 92.0%, P=0.010) and DeepSeek V3 (84.0% vs. 94.7%, P<0.001). For interpretation completeness, the median scores of Ruibing Agent, GPT-4o, Claude 3.7 Sonnet, DeepSeek V3, and DouBao-pro were 0.71, 0.60, 0.85, 0.74, and 0.77, respectively. Conclusion Ruibing Agent has clear advantages in formatted interpretation of medical literature and in instruction execution accuracy. Future work should focus on improving its recognition of editorials, strengthening coverage of the core elements of each literature type to improve interpretation completeness, and enhancing content reliability by optimizing the confidence mechanism, so as to ensure rigorous interpretation of medical literature.
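The abstract does not state which test produced the pairwise P values; a minimal sketch of one plausible approach, assuming each test was scored as pass/fail on a given dimension and two models were compared with a 2x2 contingency-table test. The counts below are invented placeholders, not the study's results.

```python
# Illustrative only: hypothetical pass/fail counts for two models on the same
# set of literature-interpretation tests; not the study's data.
from scipy.stats import chi2_contingency, fisher_exact

table = [[92,  8],    # placeholder: model A pass / fail
         [78, 22]]    # placeholder: model B pass / fail

chi2, p_chi2, dof, _ = chi2_contingency(table)
_, p_fisher = fisher_exact(table)       # exact alternative for small counts
print(f"chi-square p = {p_chi2:.4f}, Fisher exact p = {p_fisher:.4f}")
```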
With continued technological progress and the wide application of artificial intelligence, ChatGPT (Chat Generative Pre-trained Transformer) is beginning to make its mark in healthcare consultation services. This article summarizes the current applications of ChatGPT in healthcare consultation services, reviewing its roles in four areas: disseminating disease knowledge, assisting in the understanding of medical information, providing personalized health education and guidance, and offering preliminary diagnostic assistance and medical guidance. It also discusses the development prospects of ChatGPT in healthcare consultation services, as well as the challenges and ethical dilemmas it faces in this field.