Training deep neural networks for medical imaging commonly requires large datasets of images paired with labels. However, in medical imaging research, labeling is very expensive. To overcome this issue, we propose a StyleGAN2-based data generation method that jointly generates multi-contrast magnetic resonance (MR) images and their corresponding segmentation maps. We validate the effectiveness of our generative model in terms of tumor segmentation performance, demonstrating that a segmentation model trained only on synthetic data generated by our method achieves performance comparable to one trained on real data.