This paper presents an automatic white blood cell segmentation method based on information fusion in a corrected HSI colour space. First, the original cell image is transformed into the HSI colour space. Because the piecewise transformation formula for the H component is discontinuous, cytoplasm regions that appear visually uniform in the original image lose their uniformity in this channel. We therefore modified the formula, then extracted information on the nucleus, cytoplasm, red blood cells and background regions according to the distribution characteristics of the H, S and I channels, and used information fusion theory to build fusion image I and fusion image II, which contained only the cytoplasm and a small amount of interference and from which the nucleus and cytoplasm were extracted respectively. Finally, we marked the nucleus and cytoplasm regions to obtain the final segmentation result. Simulation results showed that the new white blood cell segmentation algorithm has high accuracy, robustness and universality.
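The standard RGB-to-HSI conversion that the corrected space builds on can be sketched as follows. This is an illustrative NumPy version of the textbook formulas, not the authors' corrected variant; the `B <= G` branch is the piecewise discontinuity the paper addresses:

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Convert an RGB image (floats in [0, 1], shape (..., 3)) to HSI.

    Standard piecewise formulation: H flips between theta and
    2*pi - theta at B = G, which is the discontinuity the corrected
    colour space is meant to remove.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i = (r + g + b) / 3.0
    s = 1.0 - np.minimum(np.minimum(r, g), b) / np.maximum(i, 1e-8)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-8
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    h = np.where(b <= g, theta, 2 * np.pi - theta)  # piecewise branch
    return np.stack([h, s, i], axis=-1)
```

For a pure red pixel this yields H ≈ 0, S = 1 and I = 1/3, matching the usual definition.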
For the evaluation of fundus image segmentation, a new evaluation method was proposed to compensate for the shortcoming of the traditional method, which considers only pixel overlap and neglects the topological structure of the retinal vessels. Mathematical morphology and a thinning algorithm were used to obtain the retinal vascular topology. Three features of the retinal vessels, namely mutual information, correlation coefficient and ratio of nodes, were then calculated. These features of the thinned images, taken as the topological structure of the blood vessels, were used to evaluate retinal image segmentation. The manually labelled images of the STARE database and their eroded versions were used in the experiment. The results showed that these three features can evaluate the segmentation quality of retinal vessels on fundus images through their topological structure, and that the algorithm is simple. The method is a meaningful supplement to traditional segmentation evaluation of retinal vessels on fundus images.
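One of the three features, the mutual information between two thinned (skeleton) images, can be computed from a joint histogram. The sketch below is an illustrative NumPy version that assumes binary skeleton images; it is not the authors' exact implementation:

```python
import numpy as np

def mutual_information(a, b):
    """Mutual information (in nats) between two binary skeleton images.

    Built from the 2x2 joint histogram of pixel labels:
    MI = sum_xy p(x, y) * log(p(x, y) / (p(x) * p(y))).
    """
    joint = np.histogram2d(a.ravel(), b.ravel(), bins=2)[0]
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of image b
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

Identical skeletons give MI equal to the entropy of the image; a constant image carries no information and gives MI = 0.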
The diagnosis of pancreatic cancer is very important, and the main diagnostic method is pathological analysis of microscopic images of Pap smear slides. Accurate segmentation and classification of these images are two important phases of the analysis. In this paper, we propose a new automatic segmentation and classification method for microscopic images of the pancreas. In the segmentation phase, a multi-feature mean-shift clustering algorithm (MFMS) is first applied to localize nuclei regions. Then a chain splitting model (CSM), combining flexible mathematical morphology with curvature scale space corner detection, is applied to split overlapping cells for better accuracy and robustness. In the classification phase, four shape-based features and 138 colour-space-based textural features of the cell nuclei are extracted. To obtain an optimal feature set and classify different cells, a chain-like agent genetic algorithm (CAGA) combined with a support vector machine (SVM) is proposed. The proposed method was tested on 15 cytology images containing 461 cell nuclei. Experimental results showed that it can automatically segment and classify different types of microscopic images of pancreatic cells with effective results. The mean segmentation accuracy was 93.46% ± 7.24%. The classification of normal and malignant cells achieved 96.55% ± 0.99% accuracy, 96.10% ± 3.08% sensitivity and 96.80% ± 1.48% specificity.
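The mean-shift step at the heart of MFMS can be illustrated with a minimal flat-kernel version. This naive NumPy sketch shifts each point toward the mean of its neighbours within a bandwidth; it omits the multi-feature construction and the efficiency optimizations an actual implementation would need:

```python
import numpy as np

def mean_shift(points, bandwidth, n_iter=20):
    """Naive flat-kernel mean-shift over feature vectors (n, d).

    Each mode starts at a data point and is repeatedly moved to the
    mean of the original points lying within `bandwidth` of it, so
    modes converge toward local density maxima (cluster centres).
    """
    modes = points.astype(float)
    for _ in range(n_iter):
        for i, m in enumerate(modes):
            mask = np.linalg.norm(points - m, axis=1) < bandwidth
            modes[i] = points[mask].mean(axis=0)
    return modes
```

With two well-separated groups of pixel features, every mode converges to its group's centroid, which is what makes the method usable for localizing nuclei regions.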
With the change of medical diagnosis and treatment modes, the quality of medical images directly affects doctors' diagnosis and treatment of disease. Intelligent, computer-based image quality control would therefore greatly assist radiographers in their imaging work. This paper describes research methods and applications of image segmentation and image classification models, both deep learning based and based on traditional image processing algorithms, for medical image quality evaluation. The results demonstrate that, with effective training on medical image big data, deep learning algorithms are more accurate and efficient than traditional image processing algorithms, indicating the broad application prospects of deep learning in the medical field. We developed an intelligent quality control system for assisted imaging and successfully applied it in the Radiology Department of West China Hospital and in other city and county hospitals, which effectively verified the feasibility and stability of the system.
To address the challenges faced by current brain midline segmentation techniques, such as insufficient accuracy and poor segmentation continuity, this paper proposes a deep learning network model based on a two-stage framework. In the first stage, the model exploits prior knowledge of the feature consistency of adjacent brain midline slices under normal and pathological conditions: associated midline slices are selected through slice similarity analysis, and a novel feature weighting strategy collaboratively fuses the overall change characteristics and spatial information of these associated slices, thereby enhancing the feature representation of the brain midline in the intracranial region. In the second stage, an optimal path search strategy is applied to the network's output probability map, which effectively addresses the problem of discontinuous midline segmentation. The proposed method achieved satisfactory results on the CQ500 dataset provided by the Center for Advanced Research in Imaging, Neurosciences and Genomics, New Delhi, India: the Dice similarity coefficient (DSC), Hausdorff distance (HD), average symmetric surface distance (ASSD), and normalized surface Dice (NSD) were 67.38 ± 10.49, 24.22 ± 24.84, 1.33 ± 1.83, and 0.82 ± 0.09, respectively. The experimental results demonstrate that the proposed method can fully exploit the prior knowledge in medical images to achieve accurate segmentation of the brain midline, providing valuable assistance for subsequent identification of the brain midline by clinicians.
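The Dice similarity coefficient reported above is the standard overlap measure between a predicted and a reference mask; a minimal illustrative sketch, assuming binary masks:

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|).

    Ranges from 0 (no overlap) to 1 (identical masks); both empty
    masks are treated as a perfect match.
    """
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0
```

For example, a two-pixel prediction overlapping a one-pixel ground truth in one pixel scores 2·1/(2+1) = 2/3.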
Brain image segmentation algorithms based on deep learning are a current research hotspot. This paper first systematically describes the significance of brain image segmentation and the relevant algorithms, highlighting the advantages of deep learning based approaches. It then reviews current deep learning based brain image segmentation algorithms from three aspects: algorithms addressing problems specific to brain images, algorithms guided by prior knowledge, and the application of general deep learning models to brain image segmentation, so that researchers in related fields can understand current progress more systematically. Finally, the paper outlines general directions for further research on deep learning based brain image segmentation.
Colorectal polyps are important early markers of colorectal cancer, and their early detection is crucial for cancer prevention. Although existing polyp segmentation models have achieved certain results, they still face challenges such as diverse polyp morphology, blurred boundaries, and insufficient feature extraction. To address these issues, this study proposes a parallel coordinate fusion network (PCFNet), aiming to improve the accuracy and robustness of polyp segmentation. PCFNet integrates parallel convolutional modules and a coordinate attention mechanism, enabling the preservation of global feature information while precisely capturing detailed features, thereby effectively segmenting polyps with complex boundaries. Experimental results on Kvasir-SEG and CVC-ClinicDB demonstrate the outstanding performance of PCFNet across multiple metrics. Specifically, on the Kvasir-SEG dataset, PCFNet achieved an F1-score of 0.8974 and a mean intersection over union (mIoU) of 0.8358; on the CVC-ClinicDB dataset, it attained an F1-score of 0.9398 and an mIoU of 0.8923. Compared with other methods, PCFNet shows significant improvements across all performance metrics, particularly in multi-scale feature fusion and spatial information capture, demonstrating its innovativeness. The proposed method provides a more reliable AI-assisted diagnostic tool for early colorectal cancer screening.
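The gating idea behind coordinate attention, pooling along each spatial axis so the gates retain positional information, can be sketched as follows. This is a simplified illustration only; the real module also contains learned 1×1 convolutions, which are omitted here:

```python
import numpy as np

def coordinate_attention(x):
    """Simplified coordinate attention over a feature map (c, h, w).

    Pools along width and height separately, turns the pooled vectors
    into sigmoid gates, and reweights the input so each position is
    modulated by both its row and its column statistics.
    """
    h_pool = x.mean(axis=2)                    # (c, h): average over width
    w_pool = x.mean(axis=1)                    # (c, w): average over height
    gate_h = 1.0 / (1.0 + np.exp(-h_pool))     # per-row sigmoid gate
    gate_w = 1.0 / (1.0 + np.exp(-w_pool))     # per-column sigmoid gate
    return x * gate_h[:, :, None] * gate_w[:, None, :]
```

Because the two gates are indexed by row and column respectively, spatial position survives the pooling, which is the property that helps localize polyp boundaries.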
To address the large number of network parameters and the substantial floating-point operations of deep learning networks applied to image segmentation for cardiac magnetic resonance imaging (MRI), this paper proposes a lightweight dilated parallel convolution U-Net (DPU-Net) that reduces both the parameter count and the number of floating-point operations. Additionally, a multi-scale adaptation vector knowledge distillation (MAVKD) training strategy is employed to extract latent knowledge from the teacher network, thereby enhancing the segmentation accuracy of DPU-Net. The proposed network adopts a distinctive convolutional channel-variation scheme to reduce the number of parameters, combined with residual blocks and dilated convolutions to alleviate the gradient explosion and spatial information loss that the parameter reduction might otherwise cause. The research findings indicate that the network achieves considerable improvements in parameter count and floating-point efficiency. On the public dataset of the automatic cardiac diagnosis challenge (ACDC), the Dice coefficient reaches 91.26%. These results validate the effectiveness of the proposed lightweight network and knowledge distillation strategy, providing a reliable approach to lightweight design for deep learning in medical image segmentation.
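Why dilated convolutions help keep a network lightweight can be seen from the effective receptive field of a single dilated layer: dilation widens the field without adding any parameters. A minimal sketch of the standard formula:

```python
def receptive_field(kernel_size, dilation):
    """Effective receptive field of one dilated convolution layer.

    Dilation inserts (dilation - 1) gaps between kernel taps, so the
    field grows to (k - 1) * d + 1 while the parameter count stays at
    k * k weights per channel pair.
    """
    return (kernel_size - 1) * dilation + 1
```

A 3×3 kernel thus covers 3, 5, or 9 pixels per axis at dilations 1, 2, and 4, which is how DPU-Net-style designs can trade parameters for context.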
Most current medical image segmentation models are primarily built upon the U-shaped network (U-Net) architecture, which has certain limitations in capturing both global contextual information and fine-grained details. To address this issue, this paper proposes a novel U-shaped network model, termed the Multi-View U-Net (MUNet), which integrates self-attention and multi-view attention mechanisms. Specifically, a newly designed multi-view attention module is introduced to aggregate semantic features from different perspectives, thereby enhancing the representation of fine details in images. Additionally, the MUNet model leverages a self-attention encoding block to extract global image features, and by fusing global and local features, it improves segmentation performance. Experimental results demonstrate that the proposed model achieves superior segmentation performance in coronary artery image segmentation tasks, significantly outperforming existing models. By incorporating self-attention and multi-view attention mechanisms, this study provides a novel and efficient modeling approach for medical image segmentation, contributing to the advancement of intelligent medical image analysis.
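The self-attention encoding block is presumably built on scaled dot-product attention; a minimal single-head NumPy sketch, with hypothetical weight matrices `wq`, `wk`, `wv` standing in for the learned projections:

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Single-head scaled dot-product self-attention.

    x has shape (n, d); returns softmax(Q K^T / sqrt(d_k)) V, so every
    output row is a weighted mix of all value rows, which is what lets
    the encoder capture global context.
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)      # rows sum to 1
    return attn @ v
```

With identity projections and one-hot inputs the output reduces to the attention matrix itself, so each row is a proper probability distribution over positions.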
To address single-scale information loss and large model parameter sizes during sampling in U-Net and its variants for medical image segmentation, this paper proposes a multi-scale medical image segmentation method based on pixel encoding and spatial attention. First, by redesigning the input strategy of the Transformer structure, a pixel encoding module is introduced so that the model can extract global semantic information from multi-scale image features and obtain richer feature information. Deformable convolutions are also incorporated into the Transformer module to accelerate convergence and improve performance. Second, a spatial attention module with residual connections is introduced so that the model can focus on the foreground information of the fused feature maps. Finally, guided by ablation experiments, the network is made lightweight to enhance segmentation accuracy and accelerate convergence. The proposed algorithm achieves satisfactory results on the Synapse dataset, an official public multi-organ segmentation dataset provided by the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), with a Dice similarity coefficient (DSC) of 77.65 and a 95% Hausdorff distance (HD95) of 18.34. The experimental results demonstrate that the proposed algorithm improves multi-organ segmentation performance, helps fill the gap in multi-scale medical image segmentation algorithms, and can assist professional physicians in diagnosis.
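A spatial attention module with a residual connection can be sketched in simplified form. This illustrative version builds a per-pixel gate from channel statistics and adds the gated features back to the input; the learned convolutions of a real module are omitted:

```python
import numpy as np

def spatial_attention_residual(x):
    """Simplified spatial attention with residual connection, x: (c, h, w).

    A per-pixel gate is formed from the channel-wise mean and max
    (foreground pixels tend to have stronger responses), and the gated
    features are added back to the input: out = x + x * gate.
    """
    avg = x.mean(axis=0)                        # (h, w) channel mean
    mx = x.max(axis=0)                          # (h, w) channel max
    gate = 1.0 / (1.0 + np.exp(-(avg + mx)))    # sigmoid gate per pixel
    return x + x * gate[None, :, :]             # residual connection
```

The residual term guarantees the module can never suppress the input below its original value, which stabilizes training when the gate is uninformative.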