A deep neural network is presented to synthetically generate T2FLAIR-weighted images from other standard neuroimaging acquisitions. Network performance improved when the input images shared physical sources of contrast with the T2FLAIR contrast, and degraded when disparate sources of contrast, such as fractional anisotropy, were included. This suggests that a degree of feature engineering is appropriate when building deep neural networks to perform style transforms with respect to MRI contrast: input features should share physical sources of contrast with the desired output contrast. In the optimally trained network, pathology present in the acquired T2FLAIR images but absent from the training dataset was correctly reconstructed.
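The feature-engineering conclusion above can be sketched as a channel-selection step before the synthesis network. This is a minimal illustration, not the authors' implementation: the contrast names, array shapes, and helper function are assumptions made for the example. Contrasts whose physical sources overlap with T2FLAIR are stacked as input channels, while a disparate contrast such as fractional anisotropy is excluded.

```python
import numpy as np

# Hypothetical multi-contrast volumes (names and sizes are illustrative
# assumptions, not from the abstract).
volumes = {
    "t1w": np.random.rand(64, 64, 64).astype(np.float32),
    "t2w": np.random.rand(64, 64, 64).astype(np.float32),
    "pd":  np.random.rand(64, 64, 64).astype(np.float32),
    "fa":  np.random.rand(64, 64, 64).astype(np.float32),
}

def build_network_input(volumes, include):
    """Stack the selected contrasts into a channels-first array
    suitable as input to an image-to-image synthesis network."""
    return np.stack([volumes[name] for name in include], axis=0)

# Feature engineering per the abstract's conclusion: keep contrasts that
# share physical sources of contrast with T2FLAIR, drop FA.
x = build_network_input(volumes, include=["t1w", "t2w", "pd"])
print(x.shape)  # (3, 64, 64, 64)
```

Any image-to-image architecture operating on multi-channel input could then consume `x`; the point of the sketch is only that the channel set is chosen by physical similarity to the target contrast rather than by including every available acquisition.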