An approach to reduce motion artifacts in Quantitative Susceptibility Mapping (QSM) using deep learning is proposed. We use an affine motion model with randomly generated motion profiles to simulate motion-corrupted QSM images. Each simulated image is paired with its motion-free reference to train a neural network with supervised learning. The trained network is tested on unseen simulated motion-corrupted QSM images as well as on data from healthy volunteers and Parkinson's disease patients. The results show that motion artifacts, such as ringing and ghosting, were successfully suppressed.
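To illustrate the kind of simulation described above, the sketch below corrupts a 2D magnitude image by assigning a random rigid transform (rotation plus translation, a simple instance of the affine motion model) to each segment of k-space phase-encode lines and recombining the segments. This is a minimal sketch under assumed parameters, not the authors' implementation; the function name, segment count, and motion ranges are all hypothetical.

```python
import numpy as np
from scipy.ndimage import affine_transform

def simulate_motion_corruption(image, n_segments=8, max_rot_deg=3.0,
                               max_shift_px=2.0, seed=None):
    """Hypothetical sketch: mix k-space segments acquired from randomly
    transformed copies of a motion-free 2D image to mimic intra-scan motion."""
    rng = np.random.default_rng(seed)
    ny, nx = image.shape
    kspace_corrupt = np.zeros((ny, nx), dtype=complex)
    # Split phase-encode lines into contiguous segments, one motion state each
    bounds = np.linspace(0, ny, n_segments + 1).astype(int)
    center = np.array([ny, nx]) / 2.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        # Random in-plane rigid motion state for this segment
        theta = np.deg2rad(rng.uniform(-max_rot_deg, max_rot_deg))
        shift = rng.uniform(-max_shift_px, max_shift_px, size=2)
        c, s = np.cos(theta), np.sin(theta)
        rot = np.array([[c, -s], [s, c]])
        # Rotate about the image center, then translate
        offset = center - rot @ center + shift
        moved = affine_transform(image, rot, offset=offset,
                                 order=1, mode='nearest')
        # Keep only this segment's phase-encode lines from the moved image
        k_moved = np.fft.fftshift(np.fft.fft2(moved))
        kspace_corrupt[lo:hi, :] = k_moved[lo:hi, :]
    corrupted = np.fft.ifft2(np.fft.ifftshift(kspace_corrupt))
    return np.abs(corrupted)
```

A motion-corrupted image generated this way, together with its motion-free input, would form one training pair for the supervised network described in the synopsis.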