Deep neural network models have demonstrated state-of-the-art performance in MR image reconstruction. However, these models require information about the imaging operator during training, which limits their generalization to different acquisition setups. A recent framework instead uses zero-shot learned generative models that learn an MR image prior during training and couple it with the imaging operator only during inference on test acquisitions. Such models, however, are based on convolutional architectures, which capture long-range dependencies sub-optimally. Here, we propose a novel architecture based on zero-shot learned generative adversarial transformers that efficiently captures long-range dependencies via cross-attention transformers while removing reliance on the imaging operator during training.
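The cross-attention operation mentioned above can be illustrated with a minimal sketch. This is not the paper's actual architecture; the sequence lengths, embedding dimension, and the interpretation of queries as image tokens attending over a global latent context are illustrative assumptions only.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, context, Wq, Wk, Wv):
    """Single-head cross-attention: query tokens attend over a context sequence.

    Unlike convolution, every query can draw on every context position,
    which is how attention captures long-range dependencies.
    """
    Q = queries @ Wq                  # (n_q, d) projected queries
    K = context @ Wk                  # (n_c, d) projected keys
    V = context @ Wv                  # (n_c, d) projected values
    scores = (Q @ K.T) / np.sqrt(Q.shape[-1])   # scaled dot-product scores
    weights = softmax(scores, axis=-1)          # each query's weights sum to 1
    return weights @ V                # (n_q, d) context mixed per query

rng = np.random.default_rng(0)
d = 8                                   # embedding dimension (assumed)
queries = rng.standard_normal((4, d))   # e.g., image feature tokens (assumed)
context = rng.standard_normal((16, d))  # e.g., global latent variables (assumed)
Wq, Wk, Wv = (0.1 * rng.standard_normal((d, d)) for _ in range(3))
out = cross_attention(queries, context, Wq, Wk, Wv)
print(out.shape)  # (4, 8): one mixed context vector per query token
```

The key design point is that the attention weights form a full query-by-context matrix, so the receptive field spans the entire context in a single layer rather than growing with network depth as in convolutional models.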