Model-based accelerated MRI reconstruction networks leverage large datasets to reconstruct diagnostic-quality images from undersampled k-space. To cope with inherent dataset variability, the current paradigm trains a separate model for each dataset. This is a demanding process and cannot exploit information that may be shared among datasets. In response, we propose multi-task learning (MTL) schemes that jointly reconstruct multiple datasets. Introducing inductive biases into the network enables positive information sharing. We evaluate MTL architectures and weighted loss functions against single-task learning (STL). Our results suggest that MTL can outperform STL across a range of dataset ratios for two knee contrasts.
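The weighted-loss idea above can be illustrated with a minimal sketch. This is not the authors' implementation; the task names, loss values, and weights below are hypothetical, and the weights stand in for whatever scheme compensates for imbalanced dataset ratios between tasks:

```python
def weighted_mtl_loss(task_losses, task_weights):
    """Combine per-task reconstruction losses into one scalar
    for joint (multi-task) training of a shared network."""
    assert set(task_losses) == set(task_weights)
    return sum(task_weights[t] * task_losses[t] for t in task_losses)

# Illustrative example: two knee contrasts (names are assumptions),
# where weights could offset an imbalanced dataset ratio.
losses = {"contrast_a": 0.8, "contrast_b": 1.2}   # made-up loss values
weights = {"contrast_a": 0.5, "contrast_b": 1.0}  # made-up weights
total = weighted_mtl_loss(losses, weights)        # 0.5*0.8 + 1.0*1.2 = 1.6
```

In this framing, STL corresponds to training one network per task with a single loss, while MTL optimizes the weighted sum over all tasks through shared parameters.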