Quantification of 129Xe MRI relies on accurate segmentation of the thoracic cavity. This segmentation can in principle be performed directly on the 129Xe ventilation image by an automated convolutional neural network, but the task is challenging, especially when peripheral ventilation defects obscure the lung boundary. Currently, overcoming this obstacle requires large, diverse training datasets created by time-consuming manual segmentation. Here, we demonstrate the use of a generative Pix2Pix model to synthesize both 129Xe ventilation images containing defects and their corresponding segmentation masks. We then test the effect of this additional training data on the performance of an existing U-net segmentation algorithm.
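To make the data-augmentation idea concrete, the following is a minimal toy sketch of what a synthesized training pair might look like: a ventilation-like image with low-signal peripheral defects plus its matching thoracic-cavity mask. This is not the Pix2Pix model described above; the geometry, function name, and parameters are all illustrative assumptions.

```python
import numpy as np

def synthesize_pair(size=64, n_defects=3, seed=None):
    """Toy stand-in for a generated (image, mask) training pair.

    Hypothetical illustration only: a real pipeline would draw these
    pairs from a trained Pix2Pix generator, not from this function.
    """
    rng = np.random.default_rng(seed)
    yy, xx = np.mgrid[:size, :size]
    cy = cx = size / 2.0

    # Ground-truth "thoracic cavity": a filled disk for simplicity
    mask = ((yy - cy) ** 2 + (xx - cx) ** 2) < (size * 0.4) ** 2

    # Ventilation-like signal: bright inside the cavity, noisy background
    image = np.where(mask, 0.8, 0.0) + 0.05 * rng.standard_normal((size, size))

    for _ in range(n_defects):
        # Place a low-signal defect near the cavity boundary, where
        # real defects tend to obscure the lung edge
        ang = rng.uniform(0.0, 2.0 * np.pi)
        dy = int(cy + size * 0.35 * np.sin(ang))
        dx = int(cx + size * 0.35 * np.cos(ang))
        r = int(rng.integers(3, 7))
        defect = ((yy - dy) ** 2 + (xx - dx) ** 2) < r ** 2
        image[defect & mask] = 0.05  # signal drops out, mask is unchanged

    return image.astype(np.float32), mask.astype(np.uint8)
```

The key property the sketch illustrates is that the defect alters only the image, never the mask, so the segmentation network is trained to recover the true cavity boundary even where ventilation signal is missing.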