We use simulated motion-corrupted images to compute associated image quality metrics and thereby quantify the severity of the motion. We train models with four different inputs (the full image, Foreground only, Background only, or Foreground and Background as two channels) to regress these metrics. To obtain ground-truth labels of acceptable or unacceptable image quality, we choose acceptance thresholds within a reasonable range, depending on the level of tolerable motion. The network achieves high accuracy within this range. For both metrics used (MSSIM and NRMSE), BG-models outperform FGBG-models.
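The two regression targets can be computed between a motion-free reference and its motion-corrupted counterpart. A minimal NumPy/SciPy sketch of both metrics is shown below; the window size, data range, and Euclidean NRMSE normalization are assumptions, not necessarily the exact settings used in this work:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def nrmse(ref, img):
    """Normalized root mean squared error (Euclidean normalization assumed)."""
    return float(np.sqrt(np.mean((ref - img) ** 2)) / np.sqrt(np.mean(ref ** 2)))

def mssim(ref, img, win=7, data_range=1.0):
    """Mean SSIM over local windows (standard Wang et al. formulation)."""
    C1 = (0.01 * data_range) ** 2
    C2 = (0.03 * data_range) ** 2
    # Local means, variances, and covariance via a uniform sliding window
    mu1 = uniform_filter(ref, win)
    mu2 = uniform_filter(img, win)
    var1 = uniform_filter(ref * ref, win) - mu1 ** 2
    var2 = uniform_filter(img * img, win) - mu2 ** 2
    cov = uniform_filter(ref * img, win) - mu1 * mu2
    ssim_map = ((2 * mu1 * mu2 + C1) * (2 * cov + C2)) / (
        (mu1 ** 2 + mu2 ** 2 + C1) * (var1 + var2 + C2)
    )
    return float(ssim_map.mean())

rng = np.random.default_rng(0)
reference = rng.random((64, 64))
corrupted = reference + 0.05 * rng.standard_normal((64, 64))

print(nrmse(reference, corrupted))  # > 0 for a corrupted image
print(mssim(reference, corrupted))  # < 1 for a corrupted image
```

For an uncorrupted image the metrics reach their ideal values (NRMSE 0, MSSIM 1), so a binary acceptance label follows from thresholding either quantity.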