Interpretable Deep Learning (DL) models are the next step in establishing DL prediction models as accepted tools that provide researchers with data-driven methods for further understanding neuroimaging data. In this work, we developed two interpretable DL models to predict Working Memory (WM) scores from task fMRI data and assess neural circuitry pertaining to WM: the first, a traditional Convolutional Neural Network (CNN) (1-3), took fMRI activation data from cortical vertices as a single image, while the second took cortical activation data from the two hemispheres as separate input channels. Overall, the interpretable DL model provided high-quality saliency maps, potentially revealing novel regions pertaining to WM.
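The abstract does not describe the implementation, so the following is a minimal sketch, assuming PyTorch, of the second model's input arrangement with the two hemispheres stacked as channels, followed by a vanilla-gradient saliency computation. All layer sizes, input dimensions, the map resolution, and the choice of saliency method are illustrative assumptions, not the authors' actual architecture.

```python
# Minimal sketch (assumptions: PyTorch; cortical activation resampled to
# 64x64 per-hemisphere maps; layer sizes are illustrative -- the abstract
# does not specify the architecture, input dimensions, or saliency method).
import torch
import torch.nn as nn

class HemisphereCNN(nn.Module):
    """CNN treating left/right cortical activation maps as two input channels."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1),  # 2 channels: L/R hemispheres
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64),
            nn.ReLU(),
            nn.Linear(64, 1),                            # scalar WM score
        )

    def forward(self, x):
        return self.regressor(self.features(x))

# Example: batch of 4 subjects, hemispheres stacked along the channel axis.
left = torch.randn(4, 1, 64, 64)
right = torch.randn(4, 1, 64, 64)
model = HemisphereCNN()
scores = model(torch.cat([left, right], dim=1))          # shape: (4, 1)

# Vanilla-gradient saliency (one common interpretability approach; the
# abstract does not state which saliency method was actually used):
inputs = torch.cat([left, right], dim=1).requires_grad_(True)
model(inputs).sum().backward()
saliency = inputs.grad.abs()                             # per-vertex, per-hemisphere importance
```

Stacking hemispheres as channels lets the first convolution learn filters that combine homologous left/right locations, which is one plausible reading of the two-channel design described above.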