Subject motion is a common artifact in MR acquisition that can severely degrade image quality. We take advantage of recent advances in deep generative networks to compensate for motion and produce images of improved quality, measured by changes in MSSIM and normalized root-mean-square error (NRMSE). We trained an image-to-image network to predict motion-compensated magnitude images from motion-corrupted inputs, coupled with an adversarial network that refines the predicted images. For the discriminator loss, we use the Wasserstein objective. The results show clear improvements in MSSIM and NRMSE for the majority of cases.
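The Wasserstein adversarial objective mentioned above can be sketched as follows. This is a minimal NumPy illustration under our own naming, not the authors' implementation; it shows only the critic and generator loss terms and omits the Lipschitz constraint (weight clipping or gradient penalty) that a full WGAN training loop requires:

```python
import numpy as np

def critic_loss(d_real, d_fake):
    """Wasserstein critic loss on batches of scalar critic outputs.

    The critic maximizes E[D(real)] - E[D(fake)], so as a loss to
    minimize we take the negation: E[D(fake)] - E[D(real)].
    """
    return np.mean(d_fake) - np.mean(d_real)

def generator_loss(d_fake):
    """Generator loss: push critic scores on generated images up,
    i.e. minimize -E[D(fake)]."""
    return -np.mean(d_fake)

# Toy example: critic scores for real and generated (fake) images.
d_real = np.array([1.0, 1.0])
d_fake = np.array([0.0, 0.0])
print(critic_loss(d_real, d_fake))   # -1.0
print(generator_loss(d_fake))        # -0.0
```

In this framing the motion-compensation network plays the generator, and the adversarial network scores how realistic its motion-compensated magnitude images look relative to uncorrupted ones.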