West China Medical Publishers
Search results for keyword "Generative adversarial networks": 2 results
  • A generative adversarial network-based unsupervised domain adaptation method for magnetic resonance image segmentation

    Intelligent medical image segmentation methods have developed rapidly and are widely applied, but domain shift remains a significant challenge: segmentation performance degrades because of distribution differences between the source domain and the target domain. This paper proposed an unsupervised, end-to-end domain-adaptive medical image segmentation method based on the generative adversarial network (GAN). A training and adjustment model comprising a segmentation network and a discriminator network was designed. In the segmentation network, the residual module served as the basic building block to increase feature reusability and reduce the difficulty of model optimization. With the help of the discriminator network and a combination of segmentation loss and adversarial loss, the segmentation network learned cross-domain features at the image feature level. The discriminator, a convolutional neural network, used labels from the source domain to distinguish whether a segmentation result produced by the segmentation network came from the source domain or the target domain; the whole training process was unsupervised. The proposed method was evaluated on a public dataset of knee magnetic resonance (MR) images and on a clinical dataset from our cooperative hospital. With our method, the mean Dice similarity coefficient (DSC) of the segmentation results increased by 2.52% and 6.10% compared with classical feature-level and image-level domain adaptation methods, respectively. The proposed method effectively improves the domain adaptation ability of the segmentation model, significantly improves the segmentation accuracy of the tibia and femur, and better addresses the domain shift problem in MR image segmentation.

  • Stroke-p2pHD: a cross-modality generation model of cerebral infarction from CT to DWI images

    Among the many medical imaging modalities, diffusion weighted imaging (DWI) is extremely sensitive to acute ischemic stroke lesions, especially small infarcts. However, magnetic resonance imaging is time-consuming and expensive, and it is prone to interference from metal implants. The aim of this study was therefore to design a medical image synthesis method based on a generative adversarial network, Stroke-p2pHD, for synthesizing DWI images from computed tomography (CT). Stroke-p2pHD consisted of a generator that effectively fused local image features with global context information (Global_to_Local) and a multi-scale discriminator (M2Dis). Specifically, the Global_to_Local generator integrated a fully convolutional Transformer (FCT) and a local attention module (LAM) to synthesize detailed information in DWI images, such as textures and lesions. The M2Dis discriminator adopted a multi-scale convolutional network to discriminate the input images; it was optimized in balance with the Global_to_Local generator, and feature consistency was constrained at each layer of the discriminator. The public Acute Ischemic Stroke Dataset (AISD) and an acute cerebral infarction dataset from Yantai Yantaishan Hospital were used to verify the performance of the Stroke-p2pHD model in synthesizing DWI from CT. Compared with other methods, Stroke-p2pHD achieved excellent quantitative results (mean-square error = 0.008, peak signal-to-noise ratio = 23.766, structural similarity = 0.743), and analyses of computational efficiency indicate that the model has strong potential for clinical application.

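The first abstract trains a segmentation network with a combined segmentation-plus-adversarial objective and evaluates it with the Dice similarity coefficient (DSC). Below is a minimal NumPy sketch of that metric and of the combined loss; the function names, the binary cross-entropy form of both losses, and the weighting factor `lam` are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def bce(p, y, eps=1e-7):
    """Binary cross-entropy between predicted probabilities p and labels y."""
    p = np.clip(p, eps, 1 - eps)
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

def combined_loss(seg_probs, seg_labels, disc_out_target, lam=0.01):
    """Segmentation loss on source data plus an adversarial term that pushes
    the discriminator to label target-domain outputs as 'source' (label 1)."""
    l_seg = bce(seg_probs, seg_labels)
    l_adv = bce(disc_out_target, np.ones_like(disc_out_target, dtype=float))
    return l_seg + lam * l_adv

# Toy 2x2 masks: one overlapping pixel out of 2 predicted and 1 true.
pred = np.array([[1, 1], [0, 0]])
gt = np.array([[1, 0], [0, 0]])
print(round(dice_coefficient(pred, gt), 3))  # 2*1 / (2+1) ≈ 0.667
```

In the adversarial setup described in the abstract, `disc_out_target` would be the discriminator's probability that a target-domain segmentation came from the source domain; driving it toward 1 encourages domain-invariant features.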
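The second abstract reports mean-square error and peak signal-to-noise ratio for the synthesized DWI images. A minimal NumPy sketch of those two metrics, assuming intensities normalized to [0, 1]; the function names and the normalization are assumptions, not details from the paper (structural similarity is omitted, since it requires windowed statistics).

```python
import numpy as np

def mse(a, b):
    """Mean-square error between two images."""
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(a, b, max_val=1.0):
    """Peak signal-to-noise ratio in dB: 10*log10(max_val^2 / MSE)."""
    err = mse(a, b)
    return float('inf') if err == 0 else 10.0 * np.log10(max_val ** 2 / err)

# Compare a synthetic 'real' image against a noisy 'synthesized' one.
rng = np.random.default_rng(0)
real = rng.random((64, 64))
fake = np.clip(real + rng.normal(0.0, 0.05, real.shape), 0.0, 1.0)
print(mse(real, fake), psnr(real, fake))
```

Note that a reported MSE of 0.008 corresponds to roughly 21 dB PSNR under this [0, 1] convention; the paper's 23.766 dB suggests a different intensity range or averaging scheme.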