Non-rigid registration plays an important role in medical image analysis. U-Net has become a hot research topic in medical image analysis and is widely used in medical image registration. However, existing registration models based on U-Net and its variants lack sufficient learning ability when dealing with complex deformations and do not fully utilize multi-scale contextual information, resulting in insufficient registration accuracy. To address this issue, a non-rigid registration algorithm for X-ray images based on deformable convolution and a multi-scale feature focusing module was proposed. First, residual deformable convolution replaced the standard convolution of the original U-Net to enhance the registration network's ability to represent geometric deformations. Then, strided convolution replaced the pooling operations used for downsampling to alleviate the feature loss caused by repeated pooling. In addition, a multi-scale feature focusing module was introduced into the bridging layer of the encoder-decoder structure to improve the network's ability to integrate global contextual information. Theoretical analysis and experimental results both showed that the proposed registration algorithm could focus on multi-scale contextual information, handle medical images with complex deformations, and improve registration accuracy, making it suitable for non-rigid registration of chest X-ray images.
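To make the first step concrete, the sketch below shows a residual deformable-convolution block of the kind described, written in PyTorch with torchvision's DeformConv2d; the 3×3 kernel, offset branch, and layer ordering are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class ResidualDeformBlock(nn.Module):
    """Minimal sketch of a residual deformable-convolution block."""
    def __init__(self, channels):
        super().__init__()
        # Offset branch predicts 2 offsets (dx, dy) per 3x3 kernel sample point.
        self.offset = nn.Conv2d(channels, 2 * 3 * 3, kernel_size=3, padding=1)
        self.deform = DeformConv2d(channels, channels, kernel_size=3, padding=1)
        self.norm = nn.BatchNorm2d(channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.norm(self.deform(x, self.offset(x)))
        return self.act(x + out)  # residual connection

x = torch.randn(1, 32, 64, 64)
print(ResidualDeformBlock(32)(x).shape)  # torch.Size([1, 32, 64, 64])
```

Replacing pooling with strided convolution, as the abstract describes, would amount to downsampling with `nn.Conv2d(channels, channels, 3, stride=2, padding=1)` instead of `nn.MaxPool2d(2)`.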
This article combines deep learning with image analysis technology and proposes an effective classification method for distal radius fracture types. First, an extended U-Net three-layer cascaded segmentation network was used to accurately segment the joint surface and non-joint surface areas, the regions most important for identifying fractures. Then, the images of the joint surface area and the non-joint surface area were separately trained and classified to distinguish fractures. Finally, based on the classification results of the two images, the normal or A/B/C fracture type could be comprehensively determined, as sketched below. The accuracy rates for normal, A-type, B-type, and C-type fractures on the test set were 0.99, 0.92, 0.91, and 0.82, respectively, while the average recognition accuracy rates of orthopedic medical experts were 0.98, 0.90, 0.87, and 0.81, respectively. The proposed automatic recognition method generally outperforms the experts and can be used for preliminary auxiliary diagnosis of distal radius fractures in scenarios without expert participation.
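The sketch below illustrates the final fusion step: two classifier heads, one per segmented region, whose outputs are combined into a single decision. The tiny backbones and logit-averaging fusion are hypothetical stand-ins; the paper does not specify these details.

```python
import torch
import torch.nn as nn

class DualRegionClassifier(nn.Module):
    """Sketch: separate CNN heads for joint-surface and non-joint-surface
    crops, fused by averaging class logits (fusion rule is an assumption)."""
    def __init__(self, num_classes=4):  # normal, A, B, C
        super().__init__()
        def head():
            return nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(16, num_classes))
        self.joint_head = head()
        self.nonjoint_head = head()

    def forward(self, joint_img, nonjoint_img):
        # Comprehensive decision: average the two regions' class logits.
        return (self.joint_head(joint_img) + self.nonjoint_head(nonjoint_img)) / 2
```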
The U-Net network has achieved remarkable results in medical image segmentation tasks. In recent years, many scholars have studied the network and extended its structure, for example by improving the encoder, the decoder, and the skip connections. Focusing on optimizations of the U-Net structure for medical image segmentation, this paper proceeds as follows: first, it elaborates on the application of U-Net in the field of medical image segmentation; then, it summarizes seven improvement mechanisms of U-Net: the dense connection mechanism, residual connection mechanism, multi-scale mechanism, ensemble mechanism, dilated mechanism, attention mechanism, and transformer mechanism; finally, it discusses ideas and methods for improving the U-Net structure, in a bid to provide a reference for later research and advance the development of U-Net.
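As one concrete example of the mechanisms surveyed above, the sketch below shows a minimal attention gate in the style of Attention U-Net, where a decoder gating signal reweights encoder skip features; the specific layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Minimal sketch of the attention mechanism as used in U-Net variants:
    gating signal g (from the decoder) reweights skip features x."""
    def __init__(self, x_ch, g_ch, inter_ch):
        super().__init__()
        self.wx = nn.Conv2d(x_ch, inter_ch, 1)
        self.wg = nn.Conv2d(g_ch, inter_ch, 1)
        self.psi = nn.Sequential(nn.ReLU(), nn.Conv2d(inter_ch, 1, 1), nn.Sigmoid())

    def forward(self, x, g):
        # x and g are assumed to share spatial size here (upsample g otherwise).
        alpha = self.psi(self.wx(x) + self.wg(g))  # attention weights in [0, 1]
        return x * alpha
```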
Medical image registration plays an important role in medical diagnosis and treatment planning. However, current registration methods based on deep learning still face challenges such as insufficient ability to extract global information, large numbers of network parameters, and slow inference speed. Therefore, this paper proposed a new model, LCU-Net, which used parallel lightweight convolution to improve global information extraction and multi-scale fusion to address the large parameter count and slow inference. The experimental results showed that the Dice coefficient of LCU-Net reached 0.823, the Hausdorff distance was 1.258, and the number of network parameters was reduced by about a quarter compared with that before multi-scale fusion. The proposed algorithm shows remarkable advantages in medical image registration tasks: it not only surpasses the existing comparison algorithms in performance, but also generalizes well and has wide application prospects.
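A minimal sketch of what "parallel lightweight convolution with multi-scale fusion" could look like is given below, assuming depthwise-separable branches at two kernel sizes fused by a 1×1 convolution; LCU-Net's actual branch design is not specified in the abstract, so this is a hypothetical layout.

```python
import torch
import torch.nn as nn

class ParallelLightweightConv(nn.Module):
    """Sketch: two depthwise-separable branches at different scales,
    concatenated and fused by a pointwise convolution."""
    def __init__(self, ch):
        super().__init__()
        def branch(k):
            return nn.Sequential(
                nn.Conv2d(ch, ch, k, padding=k // 2, groups=ch),  # depthwise
                nn.Conv2d(ch, ch, 1))                             # pointwise
        self.b3, self.b5 = branch(3), branch(5)
        self.fuse = nn.Conv2d(2 * ch, ch, 1)  # multi-scale fusion

    def forward(self, x):
        return self.fuse(torch.cat([self.b3(x), self.b5(x)], dim=1))
```

Depthwise-separable branches keep the parameter count far below that of standard convolutions at the same kernel sizes, which is the usual motivation for this kind of lightweight design.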
To address the large number of network parameters and substantial floating-point operations of deep learning networks applied to cardiac magnetic resonance imaging (MRI) segmentation, this paper proposes a lightweight dilated parallel convolution U-Net (DPU-Net) that decreases both the parameter count and the number of floating-point operations. Additionally, a multi-scale adaptation vector knowledge distillation (MAVKD) training strategy is employed to extract latent knowledge from the teacher network, thereby enhancing the segmentation accuracy of DPU-Net. The proposed network adopts a distinctive convolutional channel-variation scheme to reduce the number of parameters and combines it with residual blocks and dilated convolutions to alleviate the gradient explosion and spatial information loss that the reduction of parameters might cause. The research findings indicate that this network achieves considerable improvements in reducing the number of parameters and improving floating-point efficiency. On the public dataset of the automatic cardiac diagnosis challenge (ACDC), its Dice coefficient reaches 91.26%. The results validate the effectiveness of the proposed lightweight network and knowledge distillation strategy, providing a reliable approach to lightweight deep learning in medical image segmentation.
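For orientation, the sketch below shows a generic teacher-student distillation loss of the kind such a training strategy builds on: a softened KL term against the teacher plus a hard cross-entropy term. It is not the paper's MAVKD strategy, whose multi-scale adaptation vectors are not detailed in the abstract; temperature and weighting are common defaults.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Generic knowledge-distillation loss (sketch, not MAVKD itself).
    Logits: (N, C, H, W) for segmentation; labels: (N, H, W) class indices."""
    # Soft term: match the teacher's softened class distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean") * (T * T)
    # Hard term: ordinary supervised loss on the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```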
Intelligent medical image segmentation methods have developed rapidly and been widely applied, but domain shift remains a significant challenge: segmentation performance degrades due to distribution differences between the source domain and the target domain. This paper proposed an unsupervised end-to-end domain adaptation medical image segmentation method based on the generative adversarial network (GAN). A network training and adjustment model was designed, comprising a segmentation network and a discriminant network. In the segmentation network, the residual module was used as the basic module to increase feature reusability and reduce model optimization difficulty; cross-domain features were then learned at the image feature level with the help of the discriminant network and a combination of segmentation loss and adversarial loss. The discriminant network was a convolutional neural network that used labels from the source domain to distinguish whether the segmentation result of the segmentation network came from the source domain or the target domain. The whole training process was unsupervised with respect to the target domain. The proposed method was tested on a public dataset of knee magnetic resonance (MR) images and a clinical dataset from our cooperative hospital. With our method, the mean Dice similarity coefficient (DSC) of the segmentation results increased by 2.52% and 6.10% compared with the classical feature-level and image-level domain adaptation methods, respectively. The proposed method effectively improves the domain adaptive ability of the segmentation method, significantly improves the segmentation accuracy of the tibia and femur, and can better solve the domain shift problem in MR image segmentation.
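The sketch below shows one hedged training step for the segmentation network under this kind of adversarial adaptation: a supervised loss on labeled source images plus an adversarial term that pushes target-domain predictions to fool the discriminator. The networks, optimizer handling, and the weight `lam` are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def adaptation_step(seg_net, disc, src_img, src_lbl, tgt_img, lam=0.001):
    """One sketched generator-side step of adversarial domain adaptation."""
    # Supervised segmentation loss on the labeled source domain.
    seg_loss = F.cross_entropy(seg_net(src_img), src_lbl)
    # Adversarial loss: make the discriminator read target-domain
    # predictions as if they came from the source domain (label 1).
    tgt_pred = seg_net(tgt_img)
    d_out = disc(F.softmax(tgt_pred, dim=1))
    adv_loss = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))
    return seg_loss + lam * adv_loss
```

A symmetric discriminator step (real source predictions vs. target predictions) would alternate with this one, which is the standard GAN training schedule.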
Computer-aided diagnosis (CAD) systems play a very important role in modern medical diagnosis and treatment, but their performance is limited by the training samples, which are constrained by factors such as imaging cost, labeling cost, and patient privacy, resulting in insufficient diversity of training images and difficulty in data acquisition. Therefore, how to efficiently and cost-effectively augment existing medical image datasets has become a research hotspot. This paper reviews the research progress on medical image dataset expansion methods based on the relevant domestic and international literature. First, expansion methods based on geometric transformations and on generative adversarial networks are compared and analyzed; then, improvements to the augmentation methods based on generative adversarial networks are emphasized; finally, some urgent problems in the field of medical image dataset expansion are discussed and future development trends are considered.
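For the geometric-transformation family surveyed above, a typical pipeline looks like the sketch below, written with torchvision transforms; the parameter ranges are illustrative assumptions, not values from any cited paper.

```python
import torchvision.transforms as T

# Minimal geometric-augmentation pipeline of the kind the review compares
# against GAN-based expansion (ranges chosen for illustration only).
augment = T.Compose([
    T.RandomRotation(degrees=10),
    T.RandomHorizontalFlip(p=0.5),
    T.RandomAffine(degrees=0, translate=(0.05, 0.05), scale=(0.9, 1.1)),
])
# usage: augmented = augment(image)  # PIL image or tensor
```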
To address single-scale information loss and large model parameter counts during the sampling process in U-Net and its variants for medical image segmentation, this paper proposes a multi-scale medical image segmentation method based on pixel encoding and spatial attention. First, by redesigning the input strategy of the Transformer structure, a pixel encoding module is introduced so that the model can extract global semantic information from multi-scale image features and obtain richer feature information; deformable convolutions are also incorporated into the Transformer module to accelerate convergence and improve performance. Second, a spatial attention module with residual connections is introduced so that the model can focus on the foreground information of the fused feature maps. Finally, guided by ablation experiments, the network is made lightweight to enhance segmentation accuracy and accelerate convergence. The proposed algorithm achieves satisfactory results on the Synapse dataset, an official public multi-organ segmentation dataset provided by the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), with a Dice similarity coefficient (DSC) of 77.65 and a 95% Hausdorff distance (HD95) of 18.34. The experimental results demonstrate that the proposed algorithm can enhance multi-organ segmentation performance, potentially filling the gap in multi-scale medical image segmentation algorithms and assisting professional physicians in diagnosis.
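The sketch below shows what a spatial attention module with a residual connection can look like, using CBAM-style channel pooling; the paper's exact design may differ, so treat this as an assumption-laden illustration.

```python
import torch
import torch.nn as nn

class ResidualSpatialAttention(nn.Module):
    """Sketch of spatial attention with a residual connection."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        # Channel-wise mean and max maps describe 'where' to attend.
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.max(dim=1, keepdim=True).values], dim=1)
        attn = self.sigmoid(self.conv(pooled))  # (N, 1, H, W) weights
        return x + x * attn  # residual path preserves the original features
```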
To address the challenges faced by current brain midline segmentation techniques, such as insufficient accuracy and poor segmentation continuity, this paper proposes a deep learning network model based on a two-stage framework. In the first stage, the model exploits prior knowledge of the feature consistency of adjacent brain midline slices under normal and pathological conditions: associated midline slices are selected through slice similarity analysis, and a novel feature weighting strategy collaboratively fuses the overall change characteristics and spatial information of these associated slices, thereby enhancing the feature representation of the brain midline in the intracranial region. In the second stage, an optimal path search strategy over the network's output probability map effectively addresses the problem of discontinuous midline segmentation. The proposed method achieved satisfactory results on the CQ500 dataset provided by the Center for Advanced Research in Imaging, Neurosciences and Genomics, New Delhi, India: the Dice similarity coefficient (DSC), Hausdorff distance (HD), average symmetric surface distance (ASSD), and normalized surface Dice (NSD) were 67.38 ± 10.49, 24.22 ± 24.84, 1.33 ± 1.83, and 0.82 ± 0.09, respectively. The experimental results demonstrate that the proposed method can fully utilize the prior knowledge of medical images to achieve accurate segmentation of the brain midline, providing valuable assistance for subsequent identification of the brain midline by clinicians.
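To illustrate the second-stage idea, the sketch below runs dynamic programming over a (rows × cols) probability map, tracing one column per row while allowing at most a one-pixel lateral shift between rows; the continuity constraint and scoring are assumptions standing in for the paper's unspecified search strategy.

```python
import numpy as np

def midline_path(prob):
    """Sketch of an optimal path search over a midline probability map.
    Returns the midline column index for each row, enforcing continuity."""
    rows, cols = prob.shape
    score = prob.astype(float).copy()
    back = np.zeros((rows, cols), dtype=int)
    for r in range(1, rows):
        for c in range(cols):
            # Predecessor restricted to columns c-1, c, c+1 (continuity).
            lo, hi = max(0, c - 1), min(cols, c + 2)
            prev = lo + int(np.argmax(score[r - 1, lo:hi]))
            back[r, c] = prev
            score[r, c] += score[r - 1, prev]
    # Backtrack from the best final-row column.
    path = [int(np.argmax(score[-1]))]
    for r in range(rows - 1, 0, -1):
        path.append(int(back[r, path[-1]]))
    return path[::-1]
```

Because every row must contribute exactly one point, a path found this way is continuous by construction, which is what removes the gaps a per-pixel threshold would leave.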
To address missing important features, inconspicuous details, and unclear textures in the fusion of multimodal medical images, this paper proposes a method for fusing computed tomography (CT) and magnetic resonance imaging (MRI) images using a generative adversarial network (GAN) and a convolutional neural network (CNN) under image enhancement. The generator targeted the high-frequency feature images, and dual discriminators targeted the fused images after the inverse transform; the high-frequency feature images were then fused by the trained GAN model, while the low-frequency feature images were fused by a CNN pre-trained model based on transfer learning. Experimental results showed that, compared with current advanced fusion algorithms, the proposed method produced richer texture details and clearer contour edge information in subjective evaluation. In the objective evaluation, QAB/F, information entropy (IE), spatial frequency (SF), structural similarity (SSIM), mutual information (MI), and visual information fidelity for fusion (VIFF) were 2.0%, 6.3%, 7.0%, 5.5%, 9.0%, and 3.3% higher than the best comparison results, respectively. The fused images can be effectively applied to medical diagnosis to further improve diagnostic efficiency.
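The overall decompose-fuse-reconstruct pipeline can be sketched as below. The abstract does not name the transform, so a single-level Haar wavelet via PyWavelets is assumed here, and `fuse_high` / `fuse_low` are hypothetical stand-ins for the trained GAN and CNN fusion models.

```python
import numpy as np
import pywt  # PyWavelets

def fuse_ct_mri(ct, mri, fuse_high, fuse_low):
    """Skeleton of decompose -> fuse per band -> inverse transform.
    ct, mri: 2D arrays; fuse_* stand in for learned fusion models."""
    ct_low, ct_high = pywt.dwt2(ct, "haar")    # (approx, (H, V, D) details)
    mri_low, mri_high = pywt.dwt2(mri, "haar")
    low = fuse_low(ct_low, mri_low)            # CNN model in the paper
    high = tuple(fuse_high(a, b)               # GAN model in the paper
                 for a, b in zip(ct_high, mri_high))
    return pywt.idwt2((low, high), "haar")     # inverse transform

# Illustrative stand-ins: averaging for low band, max-absolute for details.
avg = lambda a, b: (a + b) / 2
mx = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)
# fused = fuse_ct_mri(ct_img, mri_img, fuse_high=mx, fuse_low=avg)
```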