Abstract #3467

Contrastive Learning of Inter-domain Similarity for Unsupervised Multi-modality Deformable Registration

Neel Dey1, Jo Schlemper2, Seyed Sadegh Mohseni Salehi2, Bo Zhou3, and Michal Sofka2
1Computer Science and Engineering, New York University, New York City, NY, United States, 2Hyperfine, New York City, NY, United States, 3Yale University, New Haven, CT, United States

Synopsis

We propose an unsupervised contrastive representation learning framework for deformable and diffeomorphic multi-modality MR image registration. The proposed deep network and data-driven objective function improve registration performance, measured by anatomical volume overlap, over several hand-crafted objectives such as Mutual Information. For a fair comparison, all methods are trained across the entire range of a key registration hyperparameter controlling deformation smoothness using conditional registration hypernetworks. Registration improvements between T1w and T2w brain MRI are demonstrated on a large cohort of 1041 high-field 3T research-grade acquisitions, while maintaining deformation smoothness and invertibility comparable to previous methods.
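
To illustrate the kind of data-driven objective described above, a minimal PyTorch sketch of one plausible formulation is given here: an InfoNCE-style contrastive loss over features extracted from corresponding patches of the warped moving image (e.g., T1w) and the fixed image (e.g., T2w). This is an assumption made for clarity, not the authors' exact loss; the function and variable names (patch_infonce_loss, feat_warped_moving, feat_fixed, temperature) are hypothetical.

    # Hypothetical sketch of a patch-wise contrastive (InfoNCE) loss for
    # multi-modality registration; an illustration, not the authors' implementation.
    import torch
    import torch.nn.functional as F

    def patch_infonce_loss(feat_warped_moving, feat_fixed, temperature=0.07):
        # feat_*: (N, C) features from N corresponding spatial patches of the
        # warped moving (e.g., T1w) and fixed (e.g., T2w) images.
        z_m = F.normalize(feat_warped_moving, dim=1)
        z_f = F.normalize(feat_fixed, dim=1)
        logits = z_m @ z_f.t() / temperature  # (N, N) cross-modal similarities
        targets = torch.arange(z_m.size(0), device=logits.device)
        # Spatially corresponding patches are positives; all other patches
        # in the batch act as negatives.
        return F.cross_entropy(logits, targets)

A loss of this form rewards high similarity between co-located patch embeddings across modalities, filling the role that Mutual Information plays in the hand-crafted baselines; a hypernetwork conditioned on the smoothness weight would then supply the registration network's parameters for each value of that hyperparameter.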


Keywords