Multimodal brain imaging acquires complementary information about the brain. However, because of the high dimensionality of the data, it is challenging to capture the joint spatial and cross-modal dependence required for statistical inference in many brain image processing tasks. In this work, we propose a new multimodal image fusion method that synergistically integrates tensor modeling and deep learning: the tensor model captures the joint spatial-intensity-modality dependence, and deep learning fuses the spatial-intensity-modality information. Applied to multimodal brain image segmentation, our method produced significantly improved results.
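The abstract does not specify the tensor model or network architecture, so the following is only a minimal, hypothetical sketch of the general idea of tensor-based multimodal fusion: two modality volumes are stacked into a single 4-D tensor, the tensor is unfolded along the modality mode, and a truncated SVD serves as a crude stand-in for a model of joint spatial-intensity-modality dependence. All shapes, data, and the rank-1 choice are illustrative assumptions, not the authors' method.

```python
import numpy as np

# Hypothetical example: two synthetic "modality" volumes of the same size.
rng = np.random.default_rng(0)
t1 = rng.standard_normal((16, 16, 8))   # e.g. a T1-weighted volume (assumed)
t2 = rng.standard_normal((16, 16, 8))   # e.g. a T2-weighted volume (assumed)

# Stack modalities into one 4-D tensor: (x, y, z, modality).
tensor = np.stack([t1, t2], axis=-1)    # shape (16, 16, 8, 2)

# Modality-mode unfolding: modalities as rows, voxels as columns.
unfolded = tensor.reshape(-1, 2).T      # shape (2, 16*16*8)

# Rank-1 truncated SVD: a single fused component that mixes
# information from both modalities at every voxel.
u, s, vt = np.linalg.svd(unfolded, full_matrices=False)
fused = (s[0] * vt[0]).reshape(16, 16, 8)
print(fused.shape)  # (16, 16, 8)
```

In the paper's actual pipeline, a learned network would presumably replace this fixed linear factorization; the sketch only shows how stacking and unfolding expose cross-modal structure to a downstream model.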