Deep learning models achieve state-of-the-art performance on numerous medical imaging prediction tasks, but exactly which features they learn to base predictions on is difficult to determine, slowing their clinical adoption. New methods for interpreting such models are needed to enable clinical translation. Autoencoders can visualize learned features, but their visualizations often lack detail and therefore cannot reliably indicate which features drive a prediction, limiting their use. We propose a method for identifying relevant learned features and visualizing them in detailed images. We show that a model trained to predict age from brain MR data learns known features of the aging brain.
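To make the autoencoder idea concrete, the following is a minimal sketch of a linear autoencoder trained by gradient descent on synthetic low-rank data; the model, data, and all names here are illustrative assumptions, not the architecture or MR data used in this work.

```python
import numpy as np

# Illustrative only: a tiny linear autoencoder on synthetic data,
# standing in for the kind of model that compresses images into a
# low-dimensional code from which learned features can be visualized.
rng = np.random.default_rng(0)

# Synthetic "images": 200 samples of 64 features with rank-4 structure.
latent_true = rng.normal(size=(200, 4))
mixing = rng.normal(size=(4, 64))
X = latent_true @ mixing + 0.1 * rng.normal(size=(200, 64))

d_in, d_hid = 64, 4
W_enc = rng.normal(scale=0.1, size=(d_in, d_hid))  # encoder weights
W_dec = rng.normal(scale=0.1, size=(d_hid, d_in))  # decoder weights
lr = 1e-3

def recon_loss(X, W_enc, W_dec):
    """Mean squared reconstruction error over all entries."""
    recon = X @ W_enc @ W_dec
    return float(np.mean((recon - X) ** 2))

initial_loss = recon_loss(X, W_enc, W_dec)
n = X.shape[0]
for _ in range(500):
    Z = X @ W_enc          # encode to the 4-dim latent code
    recon = Z @ W_dec      # decode back to input space
    err = recon - X        # reconstruction error
    # Gradients of the mean squared error w.r.t. both weight matrices.
    grad_dec = Z.T @ err * (2.0 / (n * d_in))
    grad_enc = X.T @ (err @ W_dec.T) * (2.0 / (n * d_in))
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

final_loss = recon_loss(X, W_enc, W_dec)
print(f"loss: {initial_loss:.4f} -> {final_loss:.4f}")
```

After training, the rows of `W_dec` play the role of learned features: decoding one latent unit at a time produces an image of what that unit encodes, which is the kind of visualization the abstract refers to.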